Hosting a Static Website on AWS S3 using Terraform
In this article you'll learn how to use Terraform to get a static website up and running on AWS. This is an extremely cost-effective way to host a website: we'll upload the site content to an S3 bucket and configure the bucket to serve it.
First of all, let me explain some of the terminology.
What is Terraform?
Terraform is an open-source infrastructure-as-code (IaC) tool for building, changing, and managing production-ready environments. It uses declarative configuration files to codify cloud APIs, and it can manage both third-party services and custom in-house solutions.
What is Amazon S3?
Amazon S3 (Simple Storage Service) is a web-based object storage service provided by Amazon Web Services. It's a place where you can upload files and directories, and it can store and retrieve any type of data, including documents, photos, and videos.
Advantages of Using S3
- Negligible costs
- High availability
- Automatic scaling
- No capital expenditure (CAPEX)
Prerequisites
- AWS account
- A purchased domain
- Terraform installed
S3 Static Website Infrastructure
Only a few components are required to host a static website on S3, and we don't even need anything fancy like VPCs or security groups to get started. Once we're finished, we'll have the following components:
- S3 bucket that hosts our website files for our www subdomain.
- Route 53 records pointing to our S3 bucket.
Setting up our Terraform components
Now we'll go through each of the files that make up our Terraform project and explain what they do. You can fork my repository (linked in the references section below) and follow along.
providers.tf
Terraform needs plugins called providers to interact with remote systems. In this case, we are only dealing with AWS but Terraform can also interact with other cloud services such as Azure and Google Cloud.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.2.0"
    }
  }
}

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = var.region
}
The version of the AWS provider is pinned here. This guarantees that future breaking changes in the provider won't stop our scripts from working. We then configure the aws provider, which we'll use for the majority of our components.
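Passing credentials in as variables works, but the AWS provider can also pick them up from the standard environment variables or a shared credentials file, which keeps secrets out of your Terraform state entirely. A minimal sketch (the profile name `default` is an assumption):

```hcl
# Credentials are read from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
# or from ~/.aws/credentials when a profile is configured.
provider "aws" {
  region  = var.region
  profile = "default" # hypothetical profile name; adjust to your setup
}
```

With this approach, the access_key and secret_key variables (and the exports shown at the end of the article) are no longer needed.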
variables.tf
In this file, we define the variables that we are going to use.
variable "domain_name" {
  type        = string
  description = "Name of the domain"
}

variable "bucket_name" {
  type        = string
  description = "Name of the bucket."
}

variable "region" {
  type    = string
  default = "us-west-2"
}

variable "access_key" {
  type = string
}

variable "secret_key" {
  type = string
}
terraform-dev.tfvars
The tfvars file is used to assign values to the variables. These need to be updated for your domain; everything else stays the same. Replace example.com with your domain name, without the www prefix.
domain_name = "example.com"
bucket_name = "example.com"
Next, we'll set up our S3 bucket. You could technically put all of your Terraform configuration in one file, but I like to break it into multiple components since that is easier to understand.
s3-bucket.tf
First, we will create a bucket.
resource "aws_s3_bucket" "bucket-1" {
  bucket = "www.${var.bucket_name}"
}

data "aws_s3_bucket" "selected-bucket" {
  bucket = aws_s3_bucket.bucket-1.bucket
}
Here we create a bucket named www.example.com, plus a data source that references it from the other files.
s3-acl.tf
Then we set the ACL for the bucket.
resource "aws_s3_bucket_acl" "bucket-acl" {
  bucket = data.aws_s3_bucket.selected-bucket.id
  acl    = "public-read"
}
s3-versioning.tf
We also enable versioning for the bucket.
resource "aws_s3_bucket_versioning" "versioning_example" {
  bucket = data.aws_s3_bucket.selected-bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
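Versioning keeps every old copy of an object indefinitely, which can quietly grow storage costs as you redeploy the site. As an optional extension (the 30-day window and resource name below are my own assumptions, not part of the original setup), a lifecycle rule can expire noncurrent versions:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "expire-old-versions" {
  bucket = data.aws_s3_bucket.selected-bucket.id

  rule {
    id     = "expire-noncurrent"
    status = "Enabled"

    # Empty filter: apply the rule to every object in the bucket.
    filter {}

    # Delete object versions 30 days after they stop being current.
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
```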
s3-cors.tf
Let us implement the CORS policy for the bucket.
resource "aws_s3_bucket_cors_configuration" "example" {
  bucket = data.aws_s3_bucket.selected-bucket.bucket

  cors_rule {
    allowed_headers = ["Authorization", "Content-Length"]
    allowed_methods = ["GET", "POST"]
    allowed_origins = ["https://www.${var.domain_name}"]
    max_age_seconds = 3000
  }
}
s3-bucket-policy.tf
We set up a policy that grants public read access to the bucket.
resource "aws_s3_bucket_policy" "bucket-policy" {
  bucket = data.aws_s3_bucket.selected-bucket.id
  policy = data.aws_iam_policy_document.iam-policy-1.json
}

data "aws_iam_policy_document" "iam-policy-1" {
  statement {
    sid    = "AllowPublicRead"
    effect = "Allow"

    resources = [
      "arn:aws:s3:::www.${var.domain_name}",
      "arn:aws:s3:::www.${var.domain_name}/*",
    ]

    actions = ["s3:GetObject"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}
s3-website.tf
Finally, we set the website configuration for the bucket, which defines the index and error documents it will serve.
resource "aws_s3_bucket_website_configuration" "website-config" {
  bucket = data.aws_s3_bucket.selected-bucket.bucket

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.jpeg"
  }

  # If you want to use a routing rule
  routing_rule {
    condition {
      key_prefix_equals = "/abc"
    }
    redirect {
      replace_key_prefix_with = "comming-soon.jpeg"
    }
  }
}
s3-object-upload.tf
We also need to upload the index page and some images to our website. I have them saved in an uploads folder in the same path. This is done with the following Terraform code.
resource "aws_s3_object" "object-upload-html" {
  for_each     = fileset("uploads/", "*.html")
  bucket       = data.aws_s3_bucket.selected-bucket.bucket
  key          = each.value
  source       = "uploads/${each.value}"
  content_type = "text/html"
  etag         = filemd5("uploads/${each.value}")
  acl          = "public-read"
}

resource "aws_s3_object" "object-upload-jpg" {
  for_each     = fileset("uploads/", "*.jpeg")
  bucket       = data.aws_s3_bucket.selected-bucket.bucket
  key          = each.value
  source       = "uploads/${each.value}"
  content_type = "image/jpeg"
  etag         = filemd5("uploads/${each.value}")
  acl          = "public-read"
}
Here, Terraform scans the uploads directory and uploads each file with a content type based on its extension — in this case, HTML and JPEG files. You can adapt this to your requirements.
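Rather than one resource per file type, the two resources above could be collapsed into a single one driven by an extension-to-MIME-type lookup. A sketch under the assumption that the same uploads/ layout is used (the mime_types map and resource name are my own additions):

```hcl
locals {
  # Hypothetical extension-to-MIME-type map; extend as needed.
  mime_types = {
    html = "text/html"
    jpeg = "image/jpeg"
    css  = "text/css"
    js   = "application/javascript"
  }
}

resource "aws_s3_object" "object-upload" {
  for_each = fileset("uploads/", "**")

  bucket = data.aws_s3_bucket.selected-bucket.bucket
  key    = each.value
  source = "uploads/${each.value}"

  # Look up the MIME type from the part after the last dot,
  # falling back to a generic binary type for unknown extensions.
  content_type = lookup(local.mime_types, regex("[^.]+$", each.value), "application/octet-stream")

  etag = filemd5("uploads/${each.value}")
  acl  = "public-read"
}
```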
route53.tf
Last but not least, we must create the Route 53 records. I assume that you haven't set up a hosted zone for this domain yet.
resource "aws_route53_zone" "main" {
  name    = var.domain_name
  comment = var.domain_name

  tags = {
    Name        = "www.${var.domain_name}"
    description = var.domain_name
  }
}

resource "aws_route53_record" "www-a" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"

  alias {
    name                   = data.aws_s3_bucket.selected-bucket.website_domain
    zone_id                = data.aws_s3_bucket.selected-bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
This file is fairly self-explanatory: it creates a record for www and points it at the S3 bucket's website endpoint.
Terraform command to deploy our infrastructure
Now that we've set everything up, all we have to do is run the commands to deploy our infrastructure. For these commands to work, make sure your machine is configured with the right AWS credentials.
$ terraform init
$ export TF_VAR_access_key="xxxxxxxxxxx" && export TF_VAR_secret_key="xxxxxxxxxxxxx" && terraform apply -var-file terraform-dev.tfvars
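Once the apply finishes, it's handy to have Terraform print the website endpoint so you can check the site in a browser straight away. As an optional addition (the output name is my own), an output block in any of the .tf files does this:

```hcl
output "website_endpoint" {
  # The region-specific S3 website endpoint,
  # e.g. www.example.com.s3-website-us-west-2.amazonaws.com
  value = aws_s3_bucket_website_configuration.website-config.website_endpoint
}
```

Terraform prints the value at the end of `terraform apply`, or on demand via `terraform output website_endpoint`.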
Conclusion
You should now know how to host a static website with Amazon S3. Even though it took a few steps, you did it! If you've found this article helpful, please leave a clap, and follow me for more articles. If you have any questions or constructive feedback, please let me know.