Terraform is open-source software maintained by HashiCorp. It is used for Infrastructure as Code.
Terraform manages external resources (such as public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a service) through “providers”. HashiCorp maintains an extensive list of official providers and can also integrate with community-developed providers. Users interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to describe the desired final state. Declarative configuration means you write code describing the state your system should be in after the run completes. If some of the resources already exist, running the Terraform job will only create or modify the resources needed to reach that final state.
Once a user invokes Terraform on a given resource, Terraform performs CRUD (Create, Read, Update, and Delete) actions on the user’s behalf to reach the desired state. Infrastructure as code can be organized into modules, promoting reusability and maintainability.
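A minimal sketch of the declarative style (the provider, region, bucket name, and tags below are placeholders, not taken from this project’s repository):

# Declarative configuration: describe the end state, and Terraform works out
# the create/update/delete actions needed to reach it.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-west-2" # illustrative region
}

# Hypothetical bucket; if it already matches this definition, a second
# "terraform apply" changes nothing.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-name"

  tags = {
    Environment = "demo"
  }
}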
In this story, I am planning to create a three-tier architecture with AWS resources: the first tier is a Load Balancer, the second tier (web server) holds the application logic, and the last tier is the database. I am using DynamoDB as the NoSQL database.
Architecture
An auto-scaling group is created with a minimum of 2 instances. The ASG spans two subnets, each in a different Availability Zone. This auto-scaling group is used as the target group for the Application Load Balancer. In my configuration, instances are not reachable directly via their public addresses over port 80; only the Application Load Balancer forwards requests to the EC2 instances. Sessions terminate at the Application Load Balancer.
Two S3 buckets are needed: the first stores the userdata and DynamoDB scripts, and the second stores the ALB access logs. IAM roles grant the EC2 instances access to these resources.
Configuration list
data.aws_ssm_parameter.s3bucket: S3 bucket information for the bucket used to store scripts
aws_vpc.app_vpc: VPC for environment
aws_eip.lb_eip: Elastic IP address for Load balancer
aws_iam_role.app_s3_dynamodb_access_role: Role for the EC2 instance profile
data.aws_availability_zones.azs: To get list of all availability zones
data.aws_ssm_parameter.accesslogbucket: S3 bucket name for the bucket used to store ALB logs
aws_iam_role_policy.app_s3_dynamodb_access_role_policy: Policy attached to the “app_s3_dynamodb_access_role” role. DynamoDB full access is granted; please grant only the access appropriate for your application’s needs
aws_iam_instance_profile.app_instance_profile: EC2 instance profile to access S3 storage and the DynamoDB table (see the sketch after this list)
aws_subnet.app_subnets: Multiple subnets created within the VPC, one per Availability Zone in the region
aws_lb_target_group.app-lb-tg: Target group for ALB
aws_security_group.app_sg_allow_public: Security group for the ALB. Port 80 is open to the world.
aws_internet_gateway.app_ig: Internet gateway
aws_lb.app-lb: Application load balancer
app_s3_dynamodb_access_role: Role used to access DynamoDB and S3 from the EC2 instances
aws_route_table.app_rt: Route table
aws_security_group.app_sg_allow_localip: Security group allowing SSH access from the “localip” defined in the variables file, and allowing the ALB to access the EC2 instances over port 80
aws_instance.app-web: Template instance used for AMI creation, which in turn is used for the launch configuration and the Auto Scaling group (ASG)
aws_lb_listener.app-lb_listner: ALB listener used for the health check
aws_ami_from_instance.app-ami: AMI resource that creates an AMI from the “app-web” instance. This AMI is used to create the launch configuration.
aws_launch_configuration.app-launch-config: EC2 instance launch configuration used to create the Auto Scaling group
aws_autoscaling_group.app-asg: Auto Scaling group used to create two instances in different Availability Zones. The ALB sends requests to these instances.
aws-userdata-script.sh: Script that runs when userdata is executed. It gets the instance ID, public IP, local IP, and Availability Zone name from the metadata server and writes them to “/var/www/html/index.html”.
nps_parks.csv: Input file whose data is copied from S3 and loaded into the DynamoDB table
dynamodb.py: Script that uses the above input file to create a new table and insert records into it. The table is then queried for sorting, and the output is again stored in “/var/www/html/index.html” for later viewing. The objective is to ensure that instances in different Availability Zones can communicate with the database, our third tier.
user_data.tpl: Userdata template file used by Terraform
terraform.tfvars: Terraform variables file
main.tf: Main Terraform configuration file
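For illustration, here is a simplified sketch of how the role, policy, and instance profile above fit together; the resource names mirror the list, the statements show the broad DynamoDB access called out earlier, and the real main.tf may differ:

# EC2 assume-role trust policy plus a wide DynamoDB/S3 policy, wrapped in an
# instance profile. Narrow the actions to what your application actually needs.
resource "aws_iam_role" "app_s3_dynamodb_access_role" {
  name = "app_s3_dynamodb_access_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "app_s3_dynamodb_access_role_policy" {
  name = "app_s3_dynamodb_access_role_policy"
  role = aws_iam_role.app_s3_dynamodb_access_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:*", "s3:Get*", "s3:List*"]
      Resource = "*"
    }]
  })
}

resource "aws_iam_instance_profile" "app_instance_profile" {
  name = "app_instance_profile"
  role = aws_iam_role.app_s3_dynamodb_access_role.name
}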
PS. I don’t want to use this story to create a full-blown application.
Prerequisites
Download all files from the GitHub repository.
Download the “terraform” binary and copy it to the same download location.
Create an S3 bucket to store the scripts. Create a “userdata” directory at the top level of the bucket and upload the “aws-userdata-script.sh”, “nps_parks.csv”, and “dynamodb.py” files to that location. The EC2 instances will copy these scripts using the user-data template file.
Create a key pair for the EC2 instances.
Create the following parameters in the Parameter Store (see the example after this list):
accesslogbucket : <bucket name for ALB logs> You can use the same bucket name as the userdata bucket.
ec2_keyname : <Key pair name>
s3bucket : s3://<bucketname>. Be sure to prefix the bucket name with “s3://” in the parameter value.
image-2
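A hedged sketch of how main.tf might read these Parameter Store values (the parameter names follow the prerequisites; the actual data source definitions in the repository may differ):

# Parameter Store lookups used by the configuration.
data "aws_ssm_parameter" "s3bucket" {
  name = "s3bucket" # value must carry the "s3://" prefix
}

data "aws_ssm_parameter" "accesslogbucket" {
  name = "accesslogbucket"
}

data "aws_ssm_parameter" "ec2_keyname" {
  name = "ec2_keyname"
}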
Configuration Output
After running the Terraform template, you will see the output below.
The output is the Load Balancer DNS link. You can add this output to your DNS records for future access. For this exercise, we will use this address directly to access our application.
image-3
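An output along these lines would produce the link shown above (the output name is an assumption; only the aws_lb.app-lb resource name comes from the configuration list):

# Expose the ALB endpoint so it can be added to DNS or opened directly.
output "alb_dns_name" {
  description = "DNS name of the application load balancer"
  value       = aws_lb.app-lb.dns_name
}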
Load Balancer configuration: the DNS name to access your ALB endpoint, plus the VPC, Availability Zone, and security group configuration. The public security group allows traffic from the world to the ALB on port 80. Image-5 shows the S3 location where the ALB will save its logs.
image-4, image-5
ALB target group configuration and health check details. The health check is performed on the “/” parent page; this can be changed to match a different application endpoint. Image-7 shows the instances registered to the target group via the Auto Scaling group.
image-6, image-7
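For reference, here is a sketch of a target group with a “/” health check and the short stickiness window described later in this story; thresholds, intervals, and the stickiness type are illustrative rather than copied from the repository:

# Target group for the ALB: HTTP on port 80, health check on "/",
# cookie-based stickiness of 20 seconds.
resource "aws_lb_target_group" "app-lb-tg" {
  name     = "app-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.app_vpc.id

  health_check {
    path                = "/"
    protocol            = "HTTP"
    healthy_threshold   = 3
    unhealthy_threshold = 3
    interval            = 30
  }

  stickiness {
    type            = "lb_cookie"
    cookie_duration = 20 # seconds
  }
}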
I first create a sample instance, “ya-web”, and use it to create a “golden AMI”. This AMI is used for the launch configuration and to create the Auto Scaling Group (ASG). Normally the golden AMI already exists; that AMI information can be supplied as a variable in the “terraform.tfvars” file. Image-9 shows the Auto Scaling group configuration. Minimum/maximum capacity can also be altered through the inputs.
image-8, image-9
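A hedged sketch of this golden-AMI flow, from template instance to AMI to launch configuration to Auto Scaling group; the instance type, name prefix, and the assumption that the subnets were created with count are illustrative:

# Bake an AMI from the template instance, build a launch configuration from
# it, and run two instances across the subnets behind the ALB target group.
resource "aws_ami_from_instance" "app-ami" {
  name               = "app-golden-ami"
  source_instance_id = aws_instance.app-web.id
}

resource "aws_launch_configuration" "app-launch-config" {
  name_prefix          = "app-launch-config-"
  image_id             = aws_ami_from_instance.app-ami.id
  instance_type        = "t2.micro" # illustrative size
  iam_instance_profile = aws_iam_instance_profile.app_instance_profile.name

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app-asg" {
  name                 = "app-asg"
  min_size             = 2
  max_size             = 2
  launch_configuration = aws_launch_configuration.app-launch-config.name
  vpc_zone_identifier  = aws_subnet.app_subnets[*].id # assumes count-based subnets
  target_group_arns    = [aws_lb_target_group.app-lb-tg.arn]
}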
Instance information. “ya-web” is the template VM. The other two VMs are part of the Auto Scaling group.
image-10
Accessing the application through the Load Balancer. The LB forwarded the request to the first instance, in AZ “us-west-2a”. The instance is able to pull data from DynamoDB using the boto API thanks to the instance profile we created in our resource file. In image-12, the request is forwarded to the second instance, in a different AZ, “us-west-2b”. I am using stickiness of 20 seconds; this can also be managed via cookies. My idea for the application is to keep it a simple “hello world” style application with the bare minimum configuration.
ALB transferring the request to the first instance, image-11. ALB transferring the request to the second instance, image-12.
The instance public IPs are not reachable from the outside world (image-13). Only SSH and ping (ICMP) are allowed, and only from the “localip” defined in the variables file.
image-13(a), image-13(b)
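A sketch of what such a security group could look like, assuming a “localip” variable as described above; the exact rules in the repository may differ:

# SSH and ICMP only from the local network, HTTP only from the ALB's
# public security group, all outbound traffic allowed.
variable "localip" {
  description = "CIDR allowed to SSH/ping the instances, e.g. 203.0.113.0/24"
  type        = string
}

resource "aws_security_group" "app_sg_allow_localip" {
  name   = "app_sg_allow_localip"
  vpc_id = aws_vpc.app_vpc.id

  ingress {
    description = "SSH from the local network only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.localip]
  }

  ingress {
    description = "ICMP (ping) from the local network only"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = [var.localip]
  }

  ingress {
    description     = "HTTP from the ALB security group"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.app_sg_allow_public.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}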
Disclaimer
Network security and identity security need to be improved for production use.
Automating implementation and reducing the time to deploy complex environments is key. In this story, I am building an environment that is fairly common in the industry: mapping an NFS file system across multiple subnets. This is a very basic configuration, but complexity starts when you want to use the same template to deploy the entire application in one go.
I am using the Terraform template function to achieve this. I could certainly use “Ansible” or “Chef” or any other tool, but I wanted to keep it relatively simple and have everything done with just a single input file.
Architecture Diagram
I am creating a single EFS file system that is part of a given region and has one mount target in each AZ. I am planning to use a maximum of 3 AZs in this document; the AZ count can be increased if more redundancy is needed.
A single instance is started in each AZ and mounts the newly created EFS using its local IP. An Internet Gateway is attached so that I can access the instances from my local environment and check that EFS is working fine.
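A minimal sketch of this layout, assuming a count-based subnet resource and a dedicated security group (efs_subnets and efs_sg are placeholder names, not taken from the repository):

# One EFS file system for the region, with one mount target per AZ.
resource "aws_efs_file_system" "app_efs" {
  creation_token = "app-efs"

  tags = {
    Name = "app-efs"
  }
}

resource "aws_efs_mount_target" "app_efs_mt" {
  count           = 3 # first three Availability Zones
  file_system_id  = aws_efs_file_system.app_efs.id
  subnet_id       = aws_subnet.efs_subnets[count.index].id # placeholder subnet resource
  security_groups = [aws_security_group.efs_sg.id]         # placeholder security group
}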
The Parameter Store is used to get the “keypair” name.
Architecture Diagram. Image-1
Source Code
Download the source code for this implementation from the GitHub page.
Download the main.tf, terraform.tfvars, and user_data_import.tpl files.
user_data_import.tpl is the user_data template file. You can add or modify any commands you would like to execute at boot time. I mainly use this file to mount the newly created EFS file system automatically on each EC2 instance.
The new EFS name is part of the input, and the UNIX mountpoint is also part of the input. If a VPC and subnets already exist and you want to use the same subnets, make sure to add the corresponding “data” blocks in main.tf and change the “EFS” and “instance” blocks accordingly.
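A hedged sketch of wiring the template into the instances with Terraform’s templatefile() function; the template variable names (efs_ip, mount_point) and the variables ami_id and mount_point are assumptions and must match the placeholders used in the real user_data_import.tpl:

# One instance per AZ, each mounting EFS through the mount target in its own AZ.
resource "aws_instance" "efs_client" {
  count         = 3
  ami           = var.ami_id                               # set in terraform.tfvars
  instance_type = "t2.micro"                               # illustrative size
  subnet_id     = aws_subnet.efs_subnets[count.index].id   # placeholder subnet resource
  key_name      = data.aws_ssm_parameter.ec2_keyname.value # key pair name from Parameter Store

  user_data = templatefile("${path.module}/user_data_import.tpl", {
    efs_ip      = aws_efs_mount_target.app_efs_mt[count.index].ip_address
    mount_point = var.mount_point # UNIX mountpoint from terraform.tfvars
  })
}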
Please change the localip parameter to your own subnet CIDR from which you need SSH access to each EC2 instance. Do not use the default 0.0.0.0/0, which opens port 22 to the whole world.
Execute Terraform job
To execute the Terraform job, please download the Terraform files and enter the following commands.
aws configure
terraform init
terraform plan
terraform apply
Please review the Terraform documentation for more information. You can also send me your questions.
This job will create a total of 32 resources. The cost will be very minimal if you use the attached configuration and perform the cleanup task after testing.
The “efsip” outputs are the EFS IPs for each Availability Zone. Since I am working with the first 3 Availability Zones, 3 IPs are assigned, one per AZ. “instance_public_ip” (the output name in the code contains a typo) is the public IP address of each instance created in the given AZ; I will use these public IPs to connect to each EC2 instance.
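For reference, outputs along these lines would produce the values described (the resource names follow the earlier sketches and are assumptions):

# Mount-target IP per AZ and the public IP of each instance.
output "efsip" {
  description = "EFS mount target IP per Availability Zone"
  value       = aws_efs_mount_target.app_efs_mt[*].ip_address
}

output "instance_public_ip" {
  description = "Public IP of each EC2 instance"
  value       = aws_instance.efs_client[*].public_ip
}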
Verify the file system is mounted successfully. Each instance uses the EFS IP from its own AZ to connect. EFS is mounted successfully.
Perform a read/write test from each instance. I create a new file from one of the instances, and the file is visible from the other two instances.
Tags are added to the EFS file system in case they are needed for local scripting purposes.
Elastic Filesystem Configuration
The EFS file system is created with 3 mount targets.
An access point is used to mount the FS as “/”; this can easily be changed as needed.
The FS spans 3 Availability Zones, and each Availability Zone has a different mount target IP address.
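A small sketch of such an access point, rooted at “/”; change root_directory.path if the application should be confined to a sub-directory:

# EFS access point exposing the root of the file system.
resource "aws_efs_access_point" "app_efs_ap" {
  file_system_id = aws_efs_file_system.app_efs.id

  root_directory {
    path = "/"
  }
}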