Benefits of static web sites hosted on Google Cloud Storage, Azure Storage, or Amazon S3

Most of my sites have no dynamic content, but I often still hosted them as Ruby Sinatra apps or used PHP.

A few years ago I experimented with the static site generator Jekyll, but I still hosted the generated site on one of my own servers using nginx. After a while I reverted the site to its old implementation as a Sinatra web app, even though the site was always static: no server-side actions were required beyond serving up static HTML, JS, and CSS files.

I am now using a far superior setup, and I am documenting it here for my own reference; perhaps other people will find it useful as well.

I chose Google Cloud Storage for personal reasons (I used to work as a contractor at Google, I remember their infrastructure fondly, and using GCP feels somewhat similar), but Amazon S3 or Microsoft Azure Storage is also simple to set up.

Start by installing Harp and a static site local web server:

npm install -g harp
npm install -g local-web-server


Harp uses Jade by default, and I spent about ten minutes of "furious editing" per site converting from HTML, ERB, or PHP files to Jade. This step is optional, but I like Jade and expect it to save me maintenance effort in the long term. As you edit your site, use "harp server" to test it locally. Running "harp compile" creates a www subdirectory containing your static site, ready to deploy. You can test the generated static site using "cd www; ws", where ws is the local web server you just installed.
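As an illustration of the conversion, a page that was previously plain HTML might look like this in Jade (the page content and file paths here are hypothetical):

```jade
doctype html
html
  head
    title My Site
    link(rel='stylesheet', href='/css/main.css')
  body
    h1 Hello from Harp
    p This paragraph compiles to a plain HTML p tag.
```

Indentation replaces closing tags, which is most of what makes the conversion from HTML so quick.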

You need to create a storage bucket named after your domain, which for this example we will call DOMAIN.COM. I created two buckets, DOMAIN.COM and www.DOMAIN.COM. For www.DOMAIN.COM I created a single index.jade file (compiled to www/index.html) containing both an HTML meta refresh header and a JavaScript redirect to DOMAIN.COM.
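As a sketch, the index.jade for the www.DOMAIN.COM bucket can be as simple as the following, combining the meta refresh and the JavaScript redirect (DOMAIN.COM is a placeholder):

```jade
doctype html
html
  head
    meta(http-equiv='refresh', content='0; url=http://DOMAIN.COM/')
    script.
      window.location.replace('http://DOMAIN.COM/');
  body
    p
      a(href='http://DOMAIN.COM/') This site has moved to DOMAIN.COM
```

The plain link in the body is a fallback for visitors with JavaScript disabled and browsers that ignore the meta refresh.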

The only part of this process that takes a little time is proving to Google that you own the domain, if you have not already done so. Just follow the instructions when creating the buckets, then copy your local files:

cd www
gsutil -m rsync -R . gs://DOMAIN.COM
gsutil defacl ch -u AllUsers:R gs://DOMAIN.COM
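Note that "defacl" sets the default ACL applied to objects uploaded afterwards; for objects already in the bucket, something along these lines should make them publicly readable (a sketch; exact flags may vary by gsutil version):

```
gsutil -m acl ch -u AllUsers:R gs://DOMAIN.COM/**
```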

I also had to manually set the static files copied to GCS to have public access. You will need to change the DNS settings for your site to create a CNAME record for both DOMAIN.COM and www.DOMAIN.COM pointing to c.storage.googleapis.com. Whenever you edit a local file, use "cd www; gsutil -m rsync -R . gs://DOMAIN.COM" to re-sync with your GCS bucket.
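In BIND-style zone file syntax the records would look roughly like this (syntax varies by DNS provider, and a CNAME on the bare apex domain is not supported by every provider; some offer ALIAS or ANAME records for that purpose instead):

```
DOMAIN.COM.      IN  CNAME  c.storage.googleapis.com.
www.DOMAIN.COM.  IN  CNAME  c.storage.googleapis.com.
```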

After waiting a few minutes, test to make sure your site is visible on the web. For one of my sites I also used Cloudflare's free service for HTTPS support. This is very easy to set up if you already have a Cloudflare login: add a free web site, create the same two CNAME definitions pointing to c.storage.googleapis.com, and Cloudflare will give you two DNS servers of their own to use instead of whatever DNS service you were using before.

