
EFS and EC2 instance creation using Terraform templating

Automating implementation and reducing the time to deploy complex environments is key. In this story, I am building an environment that is fairly common in the industry: mapping an NFS file system over multiple subnets. It is a very basic configuration, but the complexity starts when you want to use the same template to deploy the entire application in one go.

I am using the Terraform template function to achieve this. I could certainly use Ansible, Chef, or any other tool, but I wanted to keep it relatively simple and have everything done with just a single input file.

Architecture Diagram

I am creating a single EFS file system in a given region, with one mount target per Availability Zone. I am planning to use a maximum of 3 AZs in this document; the AZ count can be increased if more redundancy is needed.
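The layout above can be sketched in Terraform roughly as follows. This is an illustrative sketch, not the attached main.tf verbatim: the variable and resource names (var.efs_name, aws_subnet.public, aws_security_group.efs) are assumptions.

```hcl
# Illustrative sketch -- names may differ from the attached main.tf.
variable "az_count" {
  description = "Number of Availability Zones to use (3 in this article)"
  default     = 3
}

resource "aws_efs_file_system" "fs" {
  creation_token = var.efs_name

  tags = {
    Name = var.efs_name
  }
}

# One mount target per AZ, each in its own subnet.
resource "aws_efs_mount_target" "mt" {
  count           = var.az_count
  file_system_id  = aws_efs_file_system.fs.id
  subnet_id       = aws_subnet.public[count.index].id
  security_groups = [aws_security_group.efs.id]
}
```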

A single instance is started in each AZ, and it mounts the newly created EFS using the AZ-local mount-target IP. An internet gateway is attached so that I can reach the instances from my local environment and verify that EFS is working fine.
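The internet-gateway wiring looks roughly like this (again a sketch; aws_vpc.main and aws_subnet.public are assumed names):

```hcl
# Illustrative sketch: internet gateway and default route so the
# instances are reachable from the local environment.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Associate the public route table with the subnet in each AZ.
resource "aws_route_table_association" "public" {
  count          = var.az_count
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```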

Parameter Store is used to get the key pair name.
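Reading the key pair name from SSM Parameter Store can be sketched like this (the parameter name "/ec2/keypair" is an assumption; check the attached code for the actual one):

```hcl
# Illustrative sketch: look up the key pair name in Parameter Store.
data "aws_ssm_parameter" "keypair" {
  name = "/ec2/keypair" # assumed parameter name
}

resource "aws_instance" "node" {
  ami           = var.ami_id # assumed variable
  instance_type = "t2.micro"
  key_name      = data.aws_ssm_parameter.keypair.value
}
```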

Architecture Diagram. Image-1

Source Code

Download the source code for this implementation from the GitHub page:

https://github.com/yogeshagrawal11/cloud/tree/master/aws/EFS/MutiAZ%20EFS%20implementation

Download the main.tf, terraform.tfvars, and user_data_import.tpl files.

user_data_import.tpl is the user_data template file. You can add or modify any commands you would like to execute at boot time. I am mainly using this file to mount the newly created EFS file system automatically on each EC2 instance.
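Rendering the template per instance can be sketched with Terraform's templatefile function (older Terraform versions use the template_file data source instead; the template variable names efs_ip and mount_point are assumptions):

```hcl
# Illustrative sketch: render user_data_import.tpl for each instance,
# passing the AZ-local EFS mount-target IP and the UNIX mount point.
resource "aws_instance" "node" {
  count         = var.az_count
  ami           = var.ami_id # assumed variable
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.public[count.index].id

  user_data = templatefile("${path.module}/user_data_import.tpl", {
    efs_ip      = aws_efs_mount_target.mt[count.index].ip_address
    mount_point = var.mount_point
  })
}
```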

The new EFS name and the UNIX mount point are both part of the input. If the VPC and subnets already exist and you want to reuse them, make sure to add the corresponding "data" blocks in main.tf and change the "EFS" and "instance" blocks accordingly.
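Looking up an existing VPC and subnets could be sketched like this (the tag value "my-existing-vpc" is an assumption; substitute your own):

```hcl
# Illustrative sketch: reference an existing VPC and its subnets
# instead of creating new ones.
data "aws_availability_zones" "available" {}

data "aws_vpc" "existing" {
  tags = {
    Name = "my-existing-vpc" # assumed tag value
  }
}

data "aws_subnet" "existing" {
  count             = var.az_count
  vpc_id            = data.aws_vpc.existing.id
  availability_zone = data.aws_availability_zones.available.names[count.index]
}
```

The "EFS" and "instance" blocks would then point their subnet_id at data.aws_subnet.existing[count.index].id.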


Please change the localip parameter to your own network's CIDR block, from which you need SSH access to each EC2 instance. Do not use the default 0.0.0.0/0, which opens port 22 to the whole world.
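The localip variable and the security group rule it feeds can be sketched as follows (resource names are illustrative):

```hcl
# Illustrative sketch: restrict SSH to your own CIDR block.
variable "localip" {
  description = "CIDR block allowed SSH access to the instances"
  # e.g. "203.0.113.10/32" -- never 0.0.0.0/0
}

resource "aws_security_group" "ssh" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.localip]
  }
}
```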


Execute Terraform job

To execute the Terraform job, please download the Terraform files and enter the following commands.

aws configure

terraform init

terraform plan

terraform apply

Please review the Terraform documentation for more information. You can send me your questions as well.

This job will create a total of 32 resources. Cost will be very minimal if you use the attached configuration and perform the cleanup task after testing.


The "efsip" output lists the EFS mount-target IP for each Availability Zone. Since I am working with the first 3 Availability Zones, 3 IPs are assigned for intra-AZ communication. The "instance_public_ip" output (the name is misspelled in the template) is the public IP address of each instance created in the given AZ. I will use these public IPs to connect to each EC2 instance.
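These two outputs can be sketched like this (resource names match the illustrative sketches above, not necessarily the attached code):

```hcl
# Illustrative sketch of the two outputs described above.
output "efsip" {
  value = aws_efs_mount_target.mt[*].ip_address
}

output "instance_public_ip" {
  value = aws_instance.node[*].public_ip
}
```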

Verify that the file system is mounted successfully. Each instance uses the EFS mount-target IP from its own AZ to connect, and EFS mounts successfully.


Perform a read/write test from each instance. I am creating a new file from one of the instances, and the file is visible from the other two.


Tags are added to the EFS file system in case they are needed for local scripting purposes.


Elastic Filesystem Configuration

The EFS file system is created with 3 mount targets.


An access point is used to mount the file system at "/"; this can easily be changed as needed.
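An access point rooted at "/" can be sketched like this; to expose a sub-path instead, change root_directory.path:

```hcl
# Illustrative sketch: an EFS access point rooted at "/".
resource "aws_efs_access_point" "ap" {
  file_system_id = aws_efs_file_system.fs.id

  root_directory {
    path = "/"
  }
}
```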


The file system spans 3 Availability Zones, and each Availability Zone has a different mount-target IP address.


Clean up

To clean up, enter the following command:

terraform destroy
