If you’re using Amazon S3 as a CDN, you can serve compressed, gzipped files straight from an S3 bucket, though it takes a few extra steps beyond the standard process of serving compressed files from your own web server.
The required steps are scattered across Amazon’s documentation, so it’s hard to tell whether you’ve configured everything correctly without a cycle of failing, Googling, and trying again. So here are all of the steps in one place.
First, we’ll need to create a compressed version of our file:
gzip -k my_referenced_code.js
Of course, you’ll probably want to integrate this with whatever build process you’re using for your codebase.
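As a rough sketch of that integration (the build/ directory here is purely illustrative — substitute your own build output path), you could gzip every script your build emits:

```shell
# Compress every .js file the build emits, keeping the originals.
for f in build/*.js; do
  gzip -kf "$f"   # -k keeps the original file, -f replaces any stale .gz
done
```

Note that -k requires gzip 1.6 or newer; on older versions you’d copy the file first and gzip the copy.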
Log in to your Amazon AWS console, and go to your main CloudFront page listing all of your CloudFront Distributions.
Select your desired Distribution, and go to the Behaviors tab.
Select Edit, scroll down to the Compress Objects Automatically radio buttons, and set the value to Yes.
Now let’s go over to the AWS S3 configuration.
Navigate to the file or folder you’d like to serve compressed and select the Properties tab. Select the Metadata box for editing, select + Add Metadata, and add a Content-Encoding key with the value gzip — this is the header that tells the browser to decompress the response.
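If you’d rather script this than click through the console, the same metadata can be set at upload time with the AWS CLI (the bucket name and key below are hypothetical — adjust for your setup):

```shell
# Upload the gzipped file with the headers the browser needs.
# The object key drops the .gz suffix so the URL your page references stays the same.
aws s3 cp my_referenced_code.js.gz s3://my-bucket/js/my_referenced_code.js \
  --content-encoding gzip \
  --content-type application/javascript
```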
We’re still not done yet — gzip compression won’t work without the Content-Length HTTP header. Navigate back up to the root of our S3 bucket, select the Permissions tab, and select CORS Configuration. The default CORS configuration doesn’t include the Content-Length header, so add it:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
    <AllowedHeader>Content-Length</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
That’s it - you’ll now be serving a nice, compressed version of your script right from your S3 bucket!
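To confirm it’s working, request the file through your CloudFront domain (the URL below is a placeholder — substitute your own distribution’s domain) and check for the Content-Encoding header in the response:

```shell
# A correctly configured response includes "content-encoding: gzip".
curl -sI -H "Accept-Encoding: gzip" \
  "https://d1234example.cloudfront.net/my_referenced_code.js" | grep -i 'content-encoding'
```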