Creating a serverless static website, part 1

What's the problem?

I want to deploy a plain HTML/CSS/JS website with a minimal amount of fuss. I'm going to use it for a simple web app, so I also want to make sure it's always up and always working.

There are many different ways of deploying websites with mostly static content.

  • You could run a server using nginx/apache to serve files out of a directory, for example. This type of setup works fine for small sites, but the maintenance quickly gets out of hand:
    • you'll need to keep your web server up to date to pull in any security updates for nginx/apache
    • you'll need to keep your OS patched with the latest security updates
    • you have servers to manage that need to be highly available
    • your servers are susceptible to attacks, so you'll need to stay on top of your security posture
  • You can always use hosted services to achieve some of this, but what's the fun in that? You're here to learn how to build solutions on AWS.

What's the solution?

AWS provides a number of services that cover the features we want for this website. We'll be using:

  • S3 to host our static content
  • CloudFront to act as a Content Delivery Network to provide faster speeds to end-users
  • Route53 to purchase and manage your domain
  • CloudWatch to monitor traffic flowing to your website
  • AWS Certificate Manager (ACM) for free SSL/TLS certificates

A lot of guides online will tell you, "You want to learn AWS? Get a free account and use CloudFormation to deploy everything!" That's fine advice if you're already familiar with building things on AWS. But if you're working with a new AWS service, it's helpful to set it up manually the first time so you can see which services depend on which pieces of information. (Check out Part 2 here)

For the purposes of this tutorial, we'll be setting it all up manually to familiarize ourselves with the above AWS services.

Why not just use a plain S3 website?

Unfortunately, S3 configured as a website doesn't support HTTPS by itself, so you need to add CloudFront to the mix in order to get HTTPS support.

Get started

Create the S3 bucket

  1. Create two S3 buckets – one for your access logs first, and then one for your app. In this case, I'm making dperez-test-bucket-logs and dperez-test-bucket. The logs bucket can keep all the default AWS settings, so you can click Next through the wizard. Then create the bucket for your application.

  2. Configure the bucket to send access logs to another bucket for monitoring
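The two steps above can also be sketched from the CLI. This is a rough, hedged version using the bucket names from this walkthrough (substitute your own); note that the logs bucket may additionally need the S3 log delivery group granted write access, which the console handles for you:

```shell
# Hypothetical bucket names from this walkthrough; substitute your own.
LOGS_BUCKET=dperez-test-bucket-logs
APP_BUCKET=dperez-test-bucket

# Logging configuration pointing the app bucket at the logs bucket.
cat > logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "dperez-test-bucket-logs",
    "TargetPrefix": "access-logs/"
  }
}
EOF

# These calls need AWS credentials configured; they're skipped otherwise.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3 mb "s3://$LOGS_BUCKET"
  aws s3 mb "s3://$APP_BUCKET"
  aws s3api put-bucket-logging --bucket "$APP_BUCKET" \
    --bucket-logging-status file://logging.json
fi
echo "logging config written"
```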

Configure your bucket

  1. Configure the bucket as a website and set the default index document. Notice that the resulting URL is a long one hosted under Amazon's domain. You'll likely want your users to see your domain instead of Amazon's in the browser bar. More on that later.
  2. Use the aws-cli to deploy your project to S3
☁  s3-project [master] ⚡ ll
total 4.0K
drwxr-xr-x 3 dperez 102 Apr 22 19:52 ./
drwxr-xr-x 9 dperez 306 Apr 22 19:52 ../
-rw-r--r-- 1 dperez  52 Apr 22 19:52 index.html
☁  s3-project [master] ⚡ aws s3 ls | grep dperez
2018-04-22 19:42:23 dperez-test-bucket
2018-04-22 19:39:53 dperez-test-bucket-logs
☁  s3-project [master] ⚡ aws s3 sync . s3://dperez-test-bucket/
upload: ./index.html to s3://dperez-test-bucket/index.html
☁  s3-project [master] ⚡
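The website configuration from step 1 can also be done from the CLI rather than the console. A small sketch, assuming the bucket from this walkthrough and a us-east-1 region (yours may differ):

```shell
# Enable static-website hosting and set the default index document.
# Bucket name is from this walkthrough; substitute your own.
BUCKET=dperez-test-bucket
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3 website "s3://$BUCKET/" --index-document index.html
fi
# The website endpoint then follows this pattern (region assumed here):
ENDPOINT="http://$BUCKET.s3-website-us-east-1.amazonaws.com"
echo "$ENDPOINT"
```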

After clicking through these steps, you should have a URL that you can access to see your site! Find the URL provided by S3 and visit your new website.

More configuration

  • The first error you'll see is a 403 Forbidden page. This is because your bucket is not public yet. The bucket itself can have a policy attached to it, and since we're creating a website, we want the bucket to be public on purpose. That does mean you need to make sure not to keep any credentials in a public S3 bucket. Go to your bucket > Permissions > Bucket Policy and add the following IAM policy (make sure to swap out dperez-test-bucket for your bucket's name).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::dperez-test-bucket/*"
        }
    ]
}

The bucket is now public. This is expected and intentional for this website, but worth calling out after the long string of incidents where data leaked out of misconfigured buckets. However, as long as the bucket is public, the general internet can crawl your website both under your pretty domain and via the bucket URL directly. We'll fix this later to make sure your viewers only see your branded domain and website.
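If you prefer the CLI to the console, the policy above can be attached with one command. This assumes you saved it locally as policy.json and that your credentials are configured (the command is skipped otherwise):

```shell
# Attach the public-read policy saved above as policy.json.
# Bucket name is the one from this walkthrough; substitute your own.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3api put-bucket-policy --bucket dperez-test-bucket \
    --policy file://policy.json
  POLICY_STATUS="applied"
else
  POLICY_STATUS="skipped (no AWS credentials)"
fi
echo "$POLICY_STATUS"
```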

Looking at logs

  • Take a look at the files that were dropped off in your access logs bucket. You'll see that they contain log lines telling you who accessed your website, what they requested, how long it took, and so on.
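Once you sync a few of those log files down, plain shell tools go a long way. The sample lines below are made up and heavily abbreviated (real S3 access logs have many more fields), but the shape of the pipeline is the point:

```shell
# Fabricated, simplified sample of S3 access log lines for illustration.
mkdir -p logs
printf '%s\n' \
  'owner1 dperez-test-bucket [22/Apr/2018:19:55:01 +0000] 1.2.3.4 - REQ1 REST.GET.OBJECT index.html "GET /index.html HTTP/1.1" 200' \
  'owner1 dperez-test-bucket [22/Apr/2018:19:55:09 +0000] 1.2.3.4 - REQ2 REST.GET.OBJECT index.html "GET /index.html HTTP/1.1" 200' \
  > logs/sample.log

# Count requests per object key (the key is field 9 in this simplified layout).
awk '{print $9}' logs/*.log | sort | uniq -c | sort -rn
```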

Create certificates to support HTTPS URLs

We can get free certificates from AWS Certificate Manager (ACM). It offers a few different ways of verifying that you own the domain you want a certificate for. Create a certificate for the domain you want your users to see. We'll need this later.

  • Confirm an email that ACM sends to the contact addresses registered for your domain.
  • If you have control over the DNS, you can insert a CNAME record containing a validation token that ACM provides. As soon as the record exists and propagates, the certificate gets issued.
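Requesting a certificate with DNS validation is a one-liner from the CLI. A sketch, using example.com as a placeholder domain; note that certificates intended for CloudFront must be requested in us-east-1:

```shell
# Request a certificate with DNS validation (placeholder domain).
# Certificates used with CloudFront must live in us-east-1.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws acm request-certificate \
    --domain-name example.com \
    --validation-method DNS \
    --region us-east-1
  # `aws acm describe-certificate` then shows the CNAME record to create.
  CERT_STATUS="requested"
else
  CERT_STATUS="skipped (no AWS credentials)"
fi
echo "$CERT_STATUS"
```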

Set up CloudFront as a CDN

Now we can create the Cloudfront distribution and finish connecting the pieces together.

  1. Create an Origin Access Identity. We'll be using this later to further lock down access to our bucket.
  2. Create the distribution.
    • For origin: the origin of the distribution will be set to your S3 bucket, so any request hitting your domain will first be received by CloudFront, and if the object isn't in the cache, CloudFront will fetch it from S3. Specify the Origin Access Identity that you created earlier. Also make sure to set "Grant Read Permissions on Bucket" to "Yes, Update Bucket Policy". This will update the permissions on our S3 bucket to allow CloudFront to connect to it.
    • Cache Behavior: CloudFront acts as a cache, so for the purposes of this example, we can leave all the defaults here except for "Viewer Protocol Policy". You'll want to set that to Redirect HTTP to HTTPS.
    • Domain & Certificate: For Alternate Domain Names (CNAMEs), use the URL that you want your viewers to use. Choose the certificate for your site that you provisioned via ACM.

Check out what CloudFront did to your bucket policy

Because we're using CloudFront for this application, we can keep the bucket private for an additional layer of security, accessible only by CloudFront, using a feature called an Origin Access Identity. CloudFront creates a randomized identity that it uses to make authenticated requests to the S3 bucket. When your S3 bucket receives a request, it checks that the request comes from that identity and, if so, allows CloudFront to read your files.

Go to Permissions > Bucket Policy to see the updated IAM policy.
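For reference, the policy CloudFront writes looks roughly like this. The OAI ID below (E1EXAMPLE) is a made-up placeholder; yours will be the randomized identity CloudFront generated:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudFrontRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::dperez-test-bucket/*"
        }
    ]
}
```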

Update your Route53 Domain to map to your CloudFront distribution

  • When creating your record, make sure to use an Alias record. The advantage of an Alias record over a CNAME is that DNS queries to Route53 for Alias records are free, and it's one less DNS lookup than a CNAME. Here's what it looks like for one of my domains.
  • Once your DNS has been updated, wait for DNS to propagate, and test out your new site to verify your requirements
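The Alias record can also be created from the CLI. The domain, distribution endpoint, and hosted zone ID below are placeholders; the one fixed value is Z2FDTNDATAQYW2, the hosted-zone ID that all CloudFront alias targets use:

```shell
# Build the change batch for an Alias A record (placeholder names).
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF
# ZEXAMPLE is a placeholder for your own hosted zone's ID.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE --change-batch file://change-batch.json
fi
echo "change batch written"
```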

Be aware of the constraints of this setup

  • CloudFront acts as a cache in front of S3. If you deploy new files to S3 and you refresh your website, you may or may not see the new content depending on the TTL of your cache. There's a balance to strike between setting a low TTL like 0 and a high TTL like 1 hour (a TTL of 0 is a feature itself that we'll dive into another time).
  • ACM certificates can only be used on AWS. You cannot export your certificate's private key out of AWS. That's generally fine if you're using AWS anyway, but if you need to stay vendor-independent, use Let's Encrypt's free certificates. There's no reason to pay for certificates anymore.
  • CloudFront has some limitations around which headers you're able to use
    • Cache-Control headers have to be set as metadata on your S3 objects
    • Adding custom security headers like X-Frame-Options or Strict-Transport-Security is not supported directly in CloudFront, but they can be added via Lambda@Edge

Go to Part 2
