Michael Blum

Developer from Chicago

Building a Blog Part 5 - Securing an S3 Website


Amazon Certificate Manager

Since we’re hosting our site on Amazon’s S3 - we can take advantage of some other AWS products to secure our website, namely Amazon Certificate Manager and CloudFront.

These services give us a few useful features for our simple blog:

  • Free SSL/TLS certificates managed by Amazon (using Let’s Encrypt with S3 is awkward since there is no traditional web server to secure)

  • Global CDN - CloudFront gives us global availability for our S3 assets, so people around the world get a fast user experience.

Let’s set up HTTPS for the S3 bucket hosting this blog.

First things first, login to your AWS Console, browse to Certificate Manager, and create a new certificate:

request a certificate

We want to provision a certificate for our root domain as well as the traditional www.

provision a certificate
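If you prefer the command line, the same request can be made with the AWS CLI - a sketch, using this post’s domain names:

```shell
# Request a certificate covering the root domain and the www subdomain.
# Email validation sends confirmation requests to the domain's contacts.
# CloudFront only uses ACM certificates provisioned in us-east-1.
aws acm request-certificate \
    --domain-name mblum.me \
    --subject-alternative-names www.mblum.me \
    --validation-method EMAIL \
    --region us-east-1
```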

This will send a verification email. The question is, to which email address? The Certificate Manager FAQ tells us (as of this writing):

Certificate Manager FAQs

that verification emails go to the contact addresses in the domain’s WHOIS record as well as these addresses:

  • admin@my-domain.tld
  • administrator@
  • hostmaster@
  • postmaster@
  • webmaster@

I added a forwarding address for one of the expected addresses to my domain and voilà, my verification email arrived:

validation email received after creating a forwarding address of admin@mblum.me

Configure www subdomain for S3 buckets

You may have noticed that www.mblum.me doesn’t work. This is because Route 53 points to the root domain, not www. To fix this, let’s create an empty www.mblum.me S3 bucket that forwards all requests to our root mblum.me bucket:

s3 buckets

re-route to root S3 bucket
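Equivalently, a sketch of the same setup with the AWS CLI (bucket names taken from this post):

```shell
# Create the empty www bucket, then configure it to redirect every
# request to the root domain bucket.
aws s3api create-bucket --bucket www.mblum.me --region us-east-1
aws s3api put-bucket-website --bucket www.mblum.me \
    --website-configuration '{"RedirectAllRequestsTo": {"HostName": "mblum.me"}}'
```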

If we go to www.mblum.me we get an error saying we don’t have a website configuration:

404 Not Found

  • Code: NoSuchWebsiteConfiguration
  • Message: The specified bucket does not have a website configuration
  • BucketName: www.mblum.me
  • RequestId: 2500A5DC3A9ADB9B
  • HostId: YisvNtCLPYBThIV5SXGRr+fu/pnrNChmR5RFeB2vqGQ9VqFVJqBDVlisq6LOZOL0

Let’s update Route 53 to point to our new www. proxy bucket:

configure Route 53 to point www to our new bucket
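With the CLI this is an UPSERT of an alias A record - a sketch, where the hosted zone id is a placeholder and Z3AQBSTGFYJSTF is the fixed alias zone id AWS publishes for S3 website endpoints in us-east-1:

```shell
# Point www.mblum.me at the S3 website endpoint via a Route 53 alias record.
aws route53 change-resource-record-sets \
    --hosted-zone-id YOUR_ZONE_ID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.mblum.me",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z3AQBSTGFYJSTF",
            "DNSName": "s3-website-us-east-1.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```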

Now when we re-send the validation emails for our certificate we’ll get two: one for the root mblum.me and another for the www.mblum.me subdomain.

Configure CloudFront distribution

Head over to the CloudFront section of the AWS console and create a new distribution. Configure it to your specifications (and what you’re willing to pay for). The important bit is assigning our freshly minted Amazon certificate to the distribution. This sets up a few things:

  • Our custom domain name gets https://
  • Edge servers speed up serving the contents of our S3 bucket

Note that to use our custom CNAMEs we’ll need to add them to the Alternate Domain Names of our distribution:

cloudfront distribution configuration

Confirm that the specified distribution includes the required alternate domain name and has a status of Deployed.

It took quite a while for my CloudFront distribution to become available as it propagates to the AWS edge server network.
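The relevant pieces of the distribution config look roughly like this - a fragment, not a complete DistributionConfig, and the certificate ARN is a placeholder:

```json
{
  "Aliases": { "Quantity": 2, "Items": ["mblum.me", "www.mblum.me"] },
  "ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:ACCOUNT_ID:certificate/CERT_ID",
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1"
  }
}
```

SNI support is the free option; dedicated IP SSL carries a hefty monthly charge.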

If we visit our newly secured S3 bucket at https://mblum.me, we get a connection refused error:

wget https://mblum.me/
--2016-04-24 11:17:10--  https://mblum.me/
Resolving mblum.me... 54.231.10.140
Connecting to mblum.me|54.231.10.140|:443... failed: Connection refused.

This is because our Route 53 DNS configuration is still pointing directly at our S3 buckets, not the CloudFront distribution.

In Route 53, set the A record alias to point to the CloudFront URL of your distribution:

Domain Name: d10b6p0n8pvsef.cloudfront.net
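As a CLI sketch - Z2FDTNDATAQYW2 is the fixed alias zone id shared by every CloudFront distribution, and the hosted zone id is again a placeholder:

```shell
# Point the root domain at the CloudFront distribution via an alias record.
aws route53 change-resource-record-sets \
    --hosted-zone-id YOUR_ZONE_ID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "mblum.me",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d10b6p0n8pvsef.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```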

Browsing to our bucket using https we get an error:

<Error>
	<Code>AccessDenied</Code>
	<Message>Access Denied</Message>
	<RequestId>D90C59CB2ABF3204</RequestId>
	<HostId>lSWhJ8omb1HzzU5z96l4F1YkuQA8Jya3WyDUYGIhs+v9GqAFPXqS4ZCOAkH5juUWDUx6n9HvMUc=</HostId>
</Error>

Looks like we have a permissions issue.

To fix this we need to adjust our CloudFront distribution’s origin in conjunction with our S3 bucket policy.

You may also notice that https://mblum.me/ returns a 404 while https://mblum.me/index.html loads the index page correctly. This is because, even though we’re serving our S3 assets through CloudFront, we still want the original S3 website behavior of automatically serving index.html pages.

S3 Website URL as CloudFront Origin Domain Name

The trick here is that the CloudFront origin domain name needs to point to the S3 website URL, NOT the base bucket URL.

  • S3 Bucket

    • mblum.me.s3.amazonaws.com

vs

  • S3 Website

    • mblum.me.s3-website-us-east-1.amazonaws.com
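In the distribution’s origin settings this looks roughly like the fragment below. One assumption worth noting: S3 website endpoints don’t support HTTPS, so CloudFront must talk to the origin over plain HTTP.

```json
{
  "Id": "S3-Website-mblum.me",
  "DomainName": "mblum.me.s3-website-us-east-1.amazonaws.com",
  "CustomOriginConfig": {
    "HTTPPort": 80,
    "HTTPSPort": 443,
    "OriginProtocolPolicy": "http-only"
  }
}
```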

This setup kills two birds with one stone: it maintains the index.html behavior we’ve come to expect from a web server, and it makes our S3 assets publicly available.

With that enabled, our S3 website URL (mblum.me.s3-website-us-east-1.amazonaws.com in this case) allows people to load our site directly against S3 - circumventing CloudFront entirely.

To prevent this we have two options: modify our bucket policy to only allow access through our CloudFront URLs (https://mblum.me and https://www.mblum.me), or create an Origin Access Identity in CloudFront.

While option two is the more robust approach, it’s a bit overkill for what we want in this case.

Let’s modify the S3 bucket policy to only allow users to access our site through CloudFront, and not directly against S3.

From our original configuration:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "AddPerm",
			"Effect": "Allow",
			"Principal": "*",
			"Action": [
				"s3:GetObject"
			],
			"Resource": [
				"arn:aws:s3:::mblum.me/*"
			]
		}
	]
}

To a policy that denies any request whose Referer is the S3 website URL, and allows everything else:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "AddPerm",
			"Effect": "Deny",
			"Principal": "*",
			"Action": [
				"s3:GetObject"
			],
			"Resource": [
				"arn:aws:s3:::mblum.me/*"
			],
			"Condition": {
				"StringLike": {
					"aws:Referer": [
						"http://mblum.me.s3-website-us-east-1.amazonaws.com/*",
						"https://mblum.me.s3-website-us-east-1.amazonaws.com/*"
					]
				}
			}
		},
		{
			"Sid": "AddPerm",
			"Effect": "Allow",
			"Principal": "*",
			"Action": [
				"s3:GetObject"
			],
			"Resource": [
				"arn:aws:s3:::mblum.me/*"
			],
			"Condition": {
				"StringNotLike": {
					"aws:Referer": [
						"http://mblum.me.s3-website-us-east-1.amazonaws.com/*",
						"https://mblum.me.s3-website-us-east-1.amazonaws.com/*"
					]
				}
			}
		}
	]
}

Note: the Deny condition needs to be precise so as not to break continuous integration, since our CI originates from git.mblum.me.

This S3 policy has a caveat: it prevents assets from being loaded by pages served from the S3 website URL, but it doesn’t prevent a user from browsing to the S3 website URL directly.

Upgrade HTTP to HTTPS

With our bucket policy in place, users who type in http://mblum.me get an error since we only allow https referrals to access our content. To resolve this, have CloudFront seamlessly upgrade traffic to HTTPS in the behavior settings of the distribution:

cloudfront http to https behavior setting
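In the distribution config, that console setting corresponds to the default cache behavior’s viewer protocol policy - a fragment:

```json
{
  "DefaultCacheBehavior": {
    "ViewerProtocolPolicy": "redirect-to-https"
  }
}
```

With this in place, CloudFront answers plain HTTP requests with a 301 to the HTTPS equivalent, so http://mblum.me lands on the secured site.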