WebServer — EC2, EBS, S3 and CloudFront provisioned using Terraform + GitHub

Ashish Kumar
7 min read · Jun 11, 2020


Task: Create and launch a web application on AWS using Terraform

1. Create a key pair and a security group that allow ports 80 and 22.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key pair and security group created in step 1.

4. Launch one EBS volume and mount it at /var/www/html.

5. The developer has uploaded the code to a GitHub repo, and some images to another repo.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

SOLUTION:

NOTE: Terraform was run from a Windows host for this article.

The AWS CLI and Git should be installed on the machine where Terraform runs.

I will explain the code in parts, step by step.

provider "aws" {
profile = "ashish"
region = "ap-south-1"
}
data "aws_vpc" "selected" {
default = true
}
locals {
vpc_id = data.aws_vpc.selected.id
}

Create a configuration file so Terraform can access your AWS account by running this command in a command prompt or terminal:

aws configure --profile profilename

Here my profile name is ashish, which I have passed to the Terraform AWS provider.
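
For reference, aws configure stores the profile in the AWS shared credentials and config files (under ~/.aws, or C:\Users\<you>\.aws on Windows); the values below are placeholders:

# ~/.aws/credentials
[ashish]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[profile ashish]
region = ap-south-1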

My account already has a default VPC, so I will use it: the data source fetches its ID, which I store in a local value (think of it as a variable).

resource "tls_private_key" "webserver_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "local_file" "private_key" {
content = tls_private_key.webserver_key.private_key_pem
filename = "webserver.pem"
file_permission = 0400
}
resource "aws_key_pair" "webserver_key" {
key_name = "webserver"
public_key = tls_private_key.webserver_key.public_key_openssh
}

Then we create an RSA public/private key pair with the tls_private_key resource and save the private key locally, so we can connect to our instance whenever required and from wherever we want.

The public key is passed to aws_key_pair, which registers a key pair in our AWS account for use with instances.
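
Once the private key is saved, you can verify the connection manually (replace the address with your instance's public IP or DNS name):

ssh -i webserver.pem ec2-user@<instance-public-ip>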

If you are running on a Windows system, you may face an error like the one below while trying to ssh, because access to the key file is governed by NTFS ACLs rather than the file_permission set above.

[screenshot: ssh "bad permissions" error for webserver.pem]

To solve this issue, follow these simple steps:

Open the Properties of the file >> Security >> Advanced >> disable inheritance and remove every entry except your own user, leaving it with read access only. You can find the name of your user in C:\Users.
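
If you prefer the command line over the Properties dialog, the same fix can be applied with icacls (assuming the key sits in the current directory and your current Windows user is the one running ssh):

icacls webserver.pem /inheritance:r
icacls webserver.pem /grant:r "%USERNAME%":R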

If you are running Terraform on a Linux machine, you don't need to do anything; the file_permission argument already restricts the key to owner read-only.

resource "aws_security_group" "webserver_sg" {
name = "webserver"
description = "https, ssh, icmp"
vpc_id = local.vpc_id
ingress {
description = "http"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "ping-icmp"
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "webserver"
}
}

Now we create our security group, per our requirements, in the default VPC. As ingress we open the default ports for HTTP (the webserver), SSH (remote connection) and ICMP (ping). Egress is open to all IPs and all ports. Note that the CIDR blocks here cover only IPv4, not IPv6, so the instance can neither initiate nor receive IPv6 traffic.
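
If you later want the webserver reachable over IPv6 as well, each rule also accepts an ipv6_cidr_blocks argument; a minimal sketch for the HTTP rule:

ingress {
  description      = "http over IPv6"
  from_port        = 80
  to_port          = 80
  protocol         = "tcp"
  ipv6_cidr_blocks = ["::/0"]
}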

resource "aws_instance" "webserver" {
ami = "ami-052c08d70def0ac62"
instance_type = "t2.micro"
key_name = aws_key_pair.webserver_key.key_name
vpc_security_group_ids = [aws_security_group.webserver_sg.id]
subnet_id = "subnet-9aeb62d6"
availability_zone = "ap-south-1b"
root_block_device {
volume_type = "gp2"
volume_size = 12
delete_on_termination = true
}
tags = {
Name = "webserver"
}
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.webserver.public_ip
port = 22
private_key = tls_private_key.webserver_key.private_key_pem
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd -y",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
"sudo yum install git -y"
]
}
}

Now we create an EC2 instance for the deployment of the webserver, using the key pair and security group created above. Terraform resolves these references through its dependency graph, so we don't need to hard-code anything repeatedly.

Terraform then connects to the instance over SSH and installs the httpd server and git.
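
To avoid looking up the public IP in the AWS console every time, you can optionally add an output so terraform apply prints it at the end:

output "webserver_public_ip" {
  value = aws_instance.webserver.public_ip
}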

resource "aws_ebs_volume" "document_root" {
availability_zone = aws_instance.webserver.availability_zone
size = 1
type = "gp2"
tags = {
Name = "document_root"
}
}
resource "aws_volume_attachment" "document_root_mount" {
device_name = "/dev/xvdb"
volume_id = aws_ebs_volume.document_root.id
instance_id = aws_instance.webserver.id
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.webserver.public_ip
port = 22
private_key = tls_private_key.webserver_key.private_key_pem
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdb",
"sudo mount /dev/xvdb /var/www/html",
"sudo git clone https://github.com/devil-test/webserver-test.git /temp_repo",
"sudo cp -rf /temp_repo/* /var/www/html",
"sudo rm -rf /temp_repo",
"sudo setenforce 0"
]
}
provisioner "remote-exec" {
when = destroy
inline = [
"sudo umount /var/www/html"
]
}
}

Here we create our own Elastic Block Store volume in the same availability zone as our EC2 instance and attach it to the instance. We then connect over SSH, format the new disk, mount it, and copy the HTML files from our repository onto it.

A destroy-time provisioner also unmounts the volume before the infrastructure is destroyed, so detaching it raises no errors.

NOTE: The EBS volume has not been mounted persistently; this can be done by adding an entry to /etc/fstab, as sketched below.
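
For example, persistence could be added with one more inline command in the provisioner above (a sketch; the nofail option keeps the instance bootable even if the volume is absent):

"echo '/dev/xvdb /var/www/html ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab"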

Also, when serving webpages from the EBS volume I ran into 403 Permission Denied errors caused by SELinux. I could not find the exact cause, so for now I have switched SELinux to permissive mode (the setenforce 0 above).
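
A cleaner alternative to disabling SELinux would likely be to restore the default contexts on the document root after copying the files, replacing the setenforce line with something like:

"sudo restorecon -R /var/www/html"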

resource "aws_s3_bucket" "image-bucket" {
bucket = "webserver-images-test"
acl = "public-read"
provisioner "local-exec" {
command = "git clone https://github.com/devil-test/webserver-image webserver-image"
}
provisioner "local-exec" {
when = destroy
command = "echo Y | rmdir /s webserver-image"
}
}
resource "aws_s3_bucket_object" "image-upload" {
bucket = aws_s3_bucket.image-bucket.bucket
key = "myphoto.jpeg"
source = "webserver-image/StudentPhoto.jpg"
acl = "public-read"
}

Here we create our S3 bucket with public read access. We then clone the image repo onto the host PC and upload the image to the bucket, again with public read access.

When the infrastructure is destroyed, the cloned repo on the local machine is deleted as well. The command shown is for Windows; set it according to your OS, as sketched below.
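
On Linux or macOS, the destroy-time cleanup would use rm instead; a sketch of the equivalent provisioner:

provisioner "local-exec" {
  when    = destroy
  command = "rm -rf webserver-image"
}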

variable "var1" {default = "S3-"}locals {
s3_origin_id = "${var.var1}${aws_s3_bucket.image-bucket.bucket}"
image_url = "${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}"
}
resource "aws_cloudfront_distribution" "s3_distribution" {
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
}
enabled = trueorigin {
domain_name = aws_s3_bucket.image-bucket.bucket_domain_name
origin_id = local.s3_origin_id
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
connection {
type = "ssh"
user = "ec2-user"
host = aws_instance.webserver.public_ip
port = 22
private_key = tls_private_key.webserver_key.private_key_pem
}
provisioner "remote-exec" {
inline = [
# "sudo su << \"EOF\" \n echo \"<img src='${self.domain_name}'>\" >> /var/www/html/test.html \n \"EOF\""
"sudo su << EOF",
"echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.image-upload.key}'>\" >> /var/www/html/test.html",
"EOF"
]
}
}

Here we put a CloudFront distribution in front of the same bucket. Once the distribution exists, Terraform connects to the instance again and appends an <img> tag pointing at the CloudFront domain to /var/www/html/test.html, so the page serves the image through CloudFront.
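
Note that the image_url local defined above is never actually referenced; one optional way to put it to work is an output, so the final CloudFront URL is printed after apply:

output "image_url" {
  value = local.image_url
}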

That completes the configuration. The usual Terraform workflow builds and tears down the infrastructure:

#to initialize terraform plugins
terraform init
#to build the infrastructure
terraform apply
#to destroy the infrastructure
terraform destroy

You can open the webpage and test it (the last provisioner writes test.html, so visit http://<instance-public-ip>/test.html). You can swap in your own GitHub repos and file names; everything else is dynamic for a RHEL 8 AMI.

All suggestions are welcome to make the article and code better.

Worked in collaboration with Daksh Jain.

Connect with me on LinkedIn as well.
