
AWS Transit Gateway

Networking is a growing challenge as enterprises build diversified environments and datacenters across the world; the only limit is imagination. Enterprises operate across different sites and geographies, but the common vein that joins those environments is the network. With growing demand, it gets complicated to manage routes between sites. AWS Transit Gateway (TGW) was born to make network engineers' lives easier. TGW helps with the following features –

  • Connect multiple VPC networks together within a single account
  • Connect multiple VPC networks across multiple AWS accounts
  • Inter-region connectivity across multiple VPCs
  • Connect an on-premises datacenter to VPC networks via VPN, or
  • Connect multiple cloud environments via VPN using BGP.

Benefits of Transit Gateway

Easy connectivity: AWS Transit Gateway acts as a cloud router and makes network deployment easy. After a new network is added to the TGW, its routes can be propagated into the environment automatically.

Better visibility and control: AWS Transit Gateway Network Manager is used to monitor Amazon VPCs and edge locations from a central location. This helps identify and react to network issues quickly.

Flexible multicast: TGW supports multicast, which helps send the same content to multiple destinations.

Better security: Amazon VPC and TGW traffic always remains on the Amazon private network. Data is encrypted and is also protected against common network exploits.

Inter-region peering: AWS Transit Gateway inter-region peering allows customers to route traffic across AWS Regions using the AWS global network. Inter-region peering provides a simple and cost-effective way to share resources between AWS Regions or replicate data for geographic redundancy.

Transit Gateway Components

There are four major components in a transit gateway –

Attachments: Attachments connect network components to the gateway. Each attachment is added to a single route table. The following network resources can be connected to a TGW –

  • Amazon VPC
  • An AWS Direct Connect gateway
  • A peering connection with another transit gateway
  • A VPN connection to an on-premises or multi-cloud network

Transit gateway route table: A default route table is created automatically, and a TGW can have multiple route tables. A route table defines the boundary for a connection. Attachments are added to route tables: a given route table can have multiple attachments, whereas an attachment can only be added to a single route table.

A route table includes dynamic and static routes and determines the next hop for a given destination IP.

Association: An association attaches an attachment to a route table. Each attachment is associated with a single route table, but a route table can have multiple attachments.

Route Propagation: VPCs and VPNs associated with a route table can dynamically propagate routes to it. If a VPN is configured with BGP, routes from the VPN network are automatically propagated to the transit gateway. For a VPC, you must create static routes in the VPC route tables to send traffic to the transit gateway. Peering attachments do not dynamically add routes to the route table, so static routes must be added for them.
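
To make these components concrete, here is a minimal boto3 sketch (the actual implementation later in this post uses Terraform; all IDs and the region below are placeholders) that creates a TGW route table, associates an attachment with it, enables propagation from another attachment, and adds a static route:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Create a route table on an existing transit gateway
    rt = ec2.create_transit_gateway_route_table(
        TransitGatewayId="tgw-0123456789abcdef0"  # placeholder TGW ID
    )
    rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

    # Association: attach an attachment to this route table (one route table per attachment)
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=rt_id,
        TransitGatewayAttachmentId="tgw-attach-1111111111111111a",  # placeholder
    )

    # Propagation: let another attachment advertise its routes into this table
    ec2.enable_transit_gateway_route_table_propagation(
        TransitGatewayRouteTableId=rt_id,
        TransitGatewayAttachmentId="tgw-attach-2222222222222222b",  # placeholder
    )

    # Static route, e.g. for a peering attachment that does not propagate dynamically
    ec2.create_transit_gateway_route(
        DestinationCidrBlock="10.2.0.0/16",
        TransitGatewayRouteTableId=rt_id,
        TransitGatewayAttachmentId="tgw-attach-2222222222222222b",
    )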

Architecture Design

We are going to test the following TGW scenarios. In this architecture design I am creating a "management VPC" that is shared across the entire organization. This VPC can host common services such as Active Directory, DNS, DHCP, or NTP.

Project_VPC1 and Project_VPC2 will be able to communicate with each other and with management_vpc. Private_VPC is an isolated network (a private project) and will not be able to communicate with the project VPCs, but should be able to communicate with management_vpc.

Following is the architecture for this design –

Design document – Image 1

Pre-requisites

  • Region name
  • AMI ID – the "ami id" depends upon the region
  • Instance role. We don't need this one explicitly, as we are not accessing any services from the instances.
  • Instance key pair: Create an instance key pair and add the key pair name to Parameter Store. The parameter name should be "ec2-keypair" and the value should be the name of your key pair (see the sketch after this list).
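
For the key pair parameter, one way to create it is a short boto3 call; the key pair name and region below are placeholders for your own values:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")  # use your region

    # Terraform reads the key pair name from this parameter
    ssm.put_parameter(
        Name="ec2-keypair",
        Value="my-keypair-name",  # replace with the name of your key pair
        Type="String",
        Overwrite=True,
    )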

Source code

https://github.com/yogeshagrawal11/cloud/tree/master/aws/Network/Transit%20Gateway

Cost

This implementation has a cost associated with it. With the attached configuration, if testing is completed within one hour, it should not cost more than about 50 cents.

For the latest charges, refer to the AWS pricing calculator.

Implementation

If this is your first time with Terraform, check this blog to get started –

https://cloudtechsavvy.com/2020/09/20/terraform-initial-setup/

Run the following commands to start Terraform –

  • ./terraform init
  • ./terraform plan
  • ./terraform apply --auto-approve

I am using Terraform for the implementation. Following is the output from Terraform –

A total of 47 (not 45) resources are configured.

4 VPCs created.

4 subnets created. If you observe, the available IPs are one less than usual because one IP in each subnet is used by the transit gateway for data transfer and routing.

4 route tables created. Each route table uses the transit gateway as the target for the other VPC networks.
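
As a rough illustration of what each VPC route table entry looks like (the Terraform in the repo creates these routes; the IDs and CIDR below are placeholders), a boto3 equivalent would be:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Send traffic destined for the other VPC networks to the transit gateway
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",      # placeholder VPC route table
        DestinationCidrBlock="10.0.0.0/8",         # placeholder summary of the other VPCs
        TransitGatewayId="tgw-0123456789abcdef0",  # placeholder TGW ID
    )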

Security groups – these are the most important configuration in the real world. For DNS you would allow port 53; for an AD server, open the appropriate ports. In my case, I am using ping to check communication.
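
Since ping is used for the connectivity test, the security group rule amounts to allowing ICMP from the peer VPC ranges. A hedged boto3 sketch, with the group ID and CIDR as placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Allow ICMP (ping) from the other VPCs; for DNS you would open port 53 instead
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group
        IpPermissions=[{
            "IpProtocol": "icmp",
            "FromPort": -1,  # all ICMP types
            "ToPort": -1,
            "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "ping from peer VPCs"}],
        }],
    )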

The private VPC will only be able to communicate with the management network.

The project VPCs will be able to communicate with the other VPCs but not with the private VPC.

Note: We don't have to explicitly block traffic between the project VPCs and the private VPC; this is blocked by the transit gateway because we are not going to add propagation between them.

Transit Gateway created. Remember, if ASN 64512 is already used by an existing VPN, it can be changed via a parameter.

DNS support enables reaching cloud resources by DNS name rather than IP address, certainly a useful feature.

A transit gateway can be peered with another transit gateway for inter-region data transfer between VPCs over the Amazon private network. It is advisable to disable "auto accept shared attachments" for security reasons.

A default route table is created, and any attachment not explicitly associated with a route table will be associated with the default route table.
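
The ASN, DNS support, auto-accept, and default route table behaviour discussed above are all options on the transit gateway itself. A minimal boto3 sketch with illustrative values (the repo's Terraform sets its own):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    response = ec2.create_transit_gateway(
        Description="org transit gateway",
        Options={
            "AmazonSideAsn": 64512,                    # change if 64512 is already in use
            "DnsSupport": "enable",                    # reach resources by DNS name, not just IP
            "AutoAcceptSharedAttachments": "disable",  # safer default when sharing the TGW
            "DefaultRouteTableAssociation": "enable",  # attachments not explicitly associated land here
            "DefaultRouteTablePropagation": "enable",
        },
    )
    print(response["TransitGateway"]["TransitGatewayId"])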

Each VPC needs to be added to the transit gateway as an attachment.

Each route table is created. Route tables can be created per the segregation needed in the environment. In my case I am creating 3 route tables for the 4 VPCs. In an enterprise environment we generally create 5 route tables, with separate route tables for the backup and security environments.

Since Project_VPC1 and Project_VPC2 have the same network requirements, I added them to the same route table.

Management Route table

The management route table has the management VPC attachment. Propagation is added from every network that needs to communicate with the management VPC. In this case the management VPC should be able to communicate with all other networks, so propagation is added from all of them; this adds all the routes automatically.

Private Network Route table

The private VPC is attached to the private network route table. The private network should be able to communicate with the management VPC, so propagation is added for the management VPC. The route for the management VPC is added automatically after propagation.

Project Route table

The project route table has attachments from both project VPCs. Propagation is added for the other project VPC network and the management network, and the respective routes are added.

Testing Environment

The management server is able to ping instances in both the private and project environments.

The project VPCs can talk to the management VPC and to each other, but not to the private VPC.

The private VPC is able to talk to the management VPC but cannot communicate with any project VPCs. That keeps the private VPC private within the organization.

Delete terraform configuration

To delete the Terraform configuration, run the following command and ensure all resources are destroyed –

./terraform destroy --auto-approve

Conclusion

A transit gateway is a tool to connect multiple VPC, VPN, and Direct Connect networks so that they communicate over the private network. A transit gateway can also be used to isolate network traffic. This makes routing comparatively easy.

An SD-WAN partner solution can be used to automate adding new remote sites to the AWS network.

References

https://docs.aws.amazon.com/index.html


AWS Lambda 101

Lambda function Introduction

AWS Lambda is an AWS offering commonly known as Function as a Service. Lambda lets you run code without provisioning or managing the underlying servers. Many languages are supported and the list keeps growing; as of July 2020, .NET, Go, Java, Node.js, Ruby, and my personal favorite, Python, are among the supported languages. Lambda is built with high availability in mind and is capable of scaling during bursts of requests.

We need to grant access to the Lambda function according to its use. Normally access is granted via an IAM role.

IAM Policy

Create a policy that can create a log group and log stream. This is the basic execution permission required for a Lambda function; without it, the function will not be able to generate logs. Logging can be used for custom triggering events or for tracking and debugging purposes.
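
A hedged boto3 sketch of such a policy; the policy name is illustrative, and the actions match the basic Lambda logging permissions:

    import boto3, json

    iam = boto3.client("iam")

    # Minimal logging permissions for a Lambda function
    log_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*",
        }],
    }

    policy = iam.create_policy(
        PolicyName="lambda-basic-logging",  # illustrative name
        PolicyDocument=json.dumps(log_policy),
    )
    print(policy["Policy"]["Arn"])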

image-1
image-2

IAM Role

A role is created to grant permission for specific tasks. If the function needs to access S3, add an appropriate policy to the role or create a custom policy. Always grant least privilege to the function, as per AWS security best practice.
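
The console steps below can also be expressed in code. A minimal boto3 sketch, assuming the policy above and an illustrative role name and account ID:

    import boto3, json

    iam = boto3.client("iam")

    # Trust policy so the Lambda service can assume this role
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="lambda-hello-world-role",  # illustrative name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        Description="Execution role for the hello world Lambda",
        Tags=[{"Key": "project", "Value": "lambda-101"}],  # tags are good practice
    )

    # Attach the logging policy created earlier (ARN is a placeholder)
    iam.attach_role_policy(
        RoleName="lambda-hello-world-role",
        PolicyArn="arn:aws:iam::123456789012:policy/lambda-basic-logging",
    )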


Select the newly created policy.

image-4

Attach the appropriate policy (image-5). Adding a role description and tags is good practice in IAM. Click on Create Role.

image-5

Lambda function

To create a Lambda function, go to Services and select Lambda.

Click on Create function.

image-6

We have three options to choose from; the simplest is "Author from scratch". With this option we will create a "Hello world" function and also verify that logging works as expected.

"Use a blueprint": AWS has already created lots of useful functions that we can use to get started, like returning the current status of an AWS Batch job or retrieving an object from S3.

"Browse serverless app repository": This deploys a sample Lambda application from an application repository. We can also use a private repository to pull code from.

Select an appropriate runtime environment, and select the role that we created in image-5.

image-7

The designer will show you how the Lambda function is triggered. It can be triggered by different events like SNS topics, SQS, or even CloudWatch Logs. There are many different ways to trigger the Lambda function.

The Lambda function can be used for batch-oriented work or scripting purposes; you can use a CloudWatch rule to trigger the function on a crontab-style schedule.
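
For the scheduled case, here is a hedged sketch of a CloudWatch Events (EventBridge) rule that invokes the function daily; the rule name, function name, and ARNs are placeholders:

    import boto3

    events = boto3.client("events")
    lambda_client = boto3.client("lambda")

    fn_arn = "arn:aws:lambda:us-east-1:123456789012:function:hello-world"  # placeholder

    # Fire every day at 12:00 UTC (cron syntax)
    events.put_rule(Name="hello-world-daily", ScheduleExpression="cron(0 12 * * ? *)")
    events.put_targets(
        Rule="hello-world-daily",
        Targets=[{"Id": "hello-world", "Arn": fn_arn}],
    )

    # Allow the rule to invoke the function
    lambda_client.add_permission(
        FunctionName="hello-world",
        StatementId="allow-daily-schedule",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn="arn:aws:events:us-east-1:123456789012:rule/hello-world-daily",  # placeholder
    )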

A destination can be an SQS queue, an SNS topic, or even a CloudWatch log stream. We can also read a file from S3 and upload it back to S3 after performing a transformation within the Lambda function.

image-8
image-8a

This Hello world function is very simple: if it's invoked from a webhook it returns status code 200 with the body "Hello from Lambda!". It also writes logs into the log stream. "event" and "context" are used to get the input values and information about the invocation that triggered the Lambda function.
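
The handler itself looks roughly like this (a minimal sketch of the console's default code, with logging added):

    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def lambda_handler(event, context):
        # "event" carries the input payload; "context" carries invocation metadata
        logger.info("Received event: %s", json.dumps(event))
        return {
            "statusCode": 200,
            "body": json.dumps("Hello from Lambda!"),
        }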

An environment variable can be used to pass static parameters to the function, such as the bucket name when downloading a file from an S3 bucket, or the table name when writing data into a DynamoDB database.

For security reasons, do not add your "Access Key" or "Secret Key" values as environment variables. Use an encrypted parameter store for this purpose instead.
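
A short sketch of both patterns inside a handler module; the environment variable name, default table name, and parameter path are illustrative:

    import os
    import boto3

    # Static settings such as a table or bucket name come in via environment variables
    TABLE_NAME = os.environ.get("TABLE_NAME", "my-table")

    # Secrets come from an encrypted (SecureString) parameter, never from env vars
    ssm = boto3.client("ssm")

    def get_api_token():
        param = ssm.get_parameter(Name="/myapp/api-token", WithDecryption=True)
        return param["Parameter"]["Value"]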

image-9

The handler is the most important config parameter in the Lambda function. It has two parts separated by a period (.) (image-10): the first part is the file name and the second part is the function definition that runs when the Lambda function is invoked (image-8a). I kept the handler value at its default, but I always recommend giving it a meaningful name.

Memory is the amount of memory dedicated to the Lambda function; it depends upon the activity you are performing and can be changed.

The timeout value determines how long the function runs before it times out. If the activity takes longer than the specified timeout, the Lambda function is stopped abruptly, so give yourself some buffer in the timeout value.
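
Handler, memory, and timeout can also be adjusted programmatically; a hedged sketch with illustrative values (the function name is a placeholder):

    import boto3

    lambda_client = boto3.client("lambda")

    # Handler is "<file name>.<function name>"; memory and timeout are illustrative
    lambda_client.update_function_configuration(
        FunctionName="hello-world",               # placeholder function name
        Handler="lambda_function.lambda_handler",
        MemorySize=128,  # MB
        Timeout=30,      # seconds, with buffer over the expected run time
    )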

image-10

By default, the Lambda function does not need to be part of any VPC, but if the function has to communicate with EC2 instances or an on-premises environment, or be reached from an EC2 instance over private networking, VPC configuration is needed. You can still trigger the Lambda function using the AWS SDK when VPC configuration is in place.

It's very common to store output data generated by a Lambda function in Elastic File System (EFS), since the Lambda function has no persistent storage of its own. The temporary ephemeral storage lasts only for the execution of the function, so EFS can be used to keep all output.
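
A small sketch of the difference, assuming an EFS access point is already mounted on the function at an illustrative path:

    import os

    # Ephemeral scratch space: lives only as long as the execution environment
    with open("/tmp/scratch.txt", "w") as f:
        f.write("temporary data")

    # Output written under the EFS mount path survives across invocations
    EFS_PATH = "/mnt/efs"  # illustrative local mount path configured on the function
    with open(os.path.join(EFS_PATH, "output.txt"), "w") as f:
        f.write("durable output")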

image-11

The Permissions tab allows you to verify the actions the Lambda function is permitted to perform on a given resource. The dropdown can be used to switch between multiple resources. As per the screenshot below, the Lambda function can create a log group and log stream and put log messages into Amazon CloudWatch Logs.

image-12

We can trigger the Lambda function with different triggers. Here, I am creating a test event that will trigger the function. Click on "Configure test events". We are not sending any input values to the function while invoking it; key-value pairs can be used (image-14) to send values to the Lambda function.
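
The same key-value payload can also be sent from the AWS SDK instead of the console test button; a minimal sketch, with the function name and keys as placeholders:

    import boto3, json

    lambda_client = boto3.client("lambda")

    response = lambda_client.invoke(
        FunctionName="hello-world",  # placeholder
        Payload=json.dumps({"key1": "value1", "key2": "value2"}),
    )
    print(json.loads(response["Payload"].read()))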

image-13
image-14

Once the event is created, click on "Test" to invoke the Lambda function.

image-15

The log output is shown below. Click on the "logs" URL to open the CloudWatch log group created by the Lambda function; the log group will be available there.

image-16

A Lambda event will either create a new log stream or update an existing one. Logs are put into the log stream (image-17), and each log stream consists of many log entries.

image-16
image-17

Deleting Lambda function

To delete the Lambda function, just select the function, click on Actions, and choose Delete.

image-18
image-19

The Lambda function is deleted successfully.

image-20

The CloudWatch log group and streams are not deleted by default. You can delete the logs from CloudWatch or export them to S3 for cheaper storage.

To delete the log group, go to CloudWatch, select Logs -> Log groups, and select the appropriate log group name in the format below. Click on Actions and delete the log group.

/aws/lambda/<functionname>
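
The same cleanup can be scripted; a minimal boto3 sketch (replace the function name with your own):

    import boto3

    logs = boto3.client("logs")

    # Delete the log group that the Lambda function created
    logs.delete_log_group(logGroupName="/aws/lambda/hello-world")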

image-21
image-22

Conclusion

The Lambda function is a great way to run short jobs or scripts. It can work with a webhook API or with SNS/SQS. Have fun exploring the many uses of Lambda functions.