
AWS Global Accelerator

Global Accelerator is a fully managed network traffic manager. It is a network-layer service in which you create accelerators to improve the availability and performance of internet applications serving a global audience, carrying traffic over the AWS global network backbone. Per the AWS documentation, customers have seen up to 60% improvement in network performance when using Global Accelerator.

Global Accelerator provides static "anycast" public IPs from the Amazon pool, or you can bring your own. This static entry point improves the availability and reliability of your application. Global Accelerator improves performance for applications over the TCP and UDP protocols by proxying packets at the edge: traffic enters the AWS network at the edge location closest to the end user, resulting in better performance.

Global Accelerator helps you build a robust architecture. It increases network stability by using the AWS backbone network, and it provides health-check-based routing.

Traffic routing

Global Accelerator routes traffic to the optimal AWS endpoint based on:

  • Endpoint health (for disaster recovery solutions)
  • Client location (geo-fencing)
  • User-configured weights (for active/active or active/passive applications)

Endpoint Configuration

An accelerator can use TCP- or UDP-based protocols. It supports the following endpoint types:

  • Application Load Balancer
  • Network Load Balancer
  • Elastic IP address

Use Cases

  • Gaming industry (use NLB instead of ALB)
  • IoT-based data collection
  • Disaster recovery (fast regional failover)
  • Voice over IP

Architecture for Disaster Recovery

I am using Global Accelerator for a disaster recovery configuration. The same configuration can be used for an active/active inter-regional application.

The user connects to the Global Accelerator DNS address. Route 53 can easily be used for a custom website link: create a DNS "A" record pointing to the Global Accelerator anycast IP addresses. Global Accelerator then forwards user traffic to the appropriate load balancer. In this blog, I am mainly focusing on the usage of AWS Global Accelerator; behind the ALB, you can create a tiered software architecture. Take a look at my Three-Tier Architecture blog for more details.
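For illustration, a Route 53 record of roughly this shape could point a custom domain at the accelerator's static IPs (the hosted zone, record name, and the `aws_globalaccelerator_accelerator.app` resource name are my assumptions, not from the blog's actual code):

```hcl
# Hypothetical hosted zone and record name, shown only as a sketch.
resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = 300

  # ip_sets exposes the static anycast addresses assigned to the accelerator
  records = aws_globalaccelerator_accelerator.app.ip_sets[0].ip_addresses
}
```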

Source Code

If you would like the source code, email me at info@cloudtechsavvy.com.

Demonstration

I am using Terraform to deploy my infrastructure. The following resources will be deployed:

  • Two regions: "us-west-2" (primary) and "us-east-1" (DR/secondary)
  • A Global Accelerator with two endpoint groups, each targeting the ALB of its region, with traffic distributed equally
  • An Application Load Balancer (ALB) in each region, with a target group pointing to EC2
  • Two EC2 instances with Apache installed, acting as the web/app servers for my application (in a production environment, auto scaling would be used for better elasticity)
  • One VPC per region
  • Two subnets per region, in different AZs
  • Security groups, internet gateways, route tables, an IAM role for the instance profile, etc.
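As a rough sketch (resource names are my own assumptions, not necessarily those in the actual source), the accelerator and its TCP port 80 listener look like this in Terraform:

```hcl
resource "aws_globalaccelerator_accelerator" "app" {
  name            = "dr-accelerator"
  ip_address_type = "IPV4"
  enabled         = true
}

# Listen on TCP port 80; UDP and other port ranges are also supported
resource "aws_globalaccelerator_listener" "http" {
  accelerator_arn = aws_globalaccelerator_accelerator.app.id
  protocol        = "TCP"

  port_range {
    from_port = 80
    to_port   = 80
  }
}
```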

Output from Terraform. This run creates 46 resources in total.

  • application_dns_name: point a Route 53 "A" record at this address for the customer website link
  • accelerator_static_ip_sets: anycast IP addresses
  • lb_dns_name: load balancer DNS name for region 1
  • lb_dns_name_east: load balancer DNS name for region 2
  • region1: region 1
  • region2_dr: region 2
  • vpc_cidr: VPC CIDR for region 1
  • vpc_cidr_dr: VPC CIDR for region 2

I am browsing to the "application_dns_name" link generated from the Terraform output. My request is served from the "us-west-2" region rather than "us-east-1" because "us-west-2" is closer to me.

I am in Seattle, so I am routed to the "us-west-2" region by default, not "us-east-1".

Both availability zones are used in a nearly round-robin fashion.

The website uses both availability zones; here the request is served from a different AZ.

The Global Accelerator instance is created with two static anycast IP addresses. You can point Route 53 at both IP addresses for redundancy.

I configured a TCP port 80 listener, which means this accelerator listens on port 80. You can use a listener on any port with the TCP or UDP protocol.

There is one endpoint group per region. I have configured two, one for "us-west-2" and one for "us-east-1". The traffic dial determines how load is distributed across regions.

For an active/active configuration: set the traffic dial for both regions to 100%.

For a disaster recovery ("active/passive") configuration: set the primary region's traffic dial to 100% and the DR region's to 0%. The DR region will then not receive any traffic until the primary region is down.

Each endpoint group can contain multiple endpoints; you can use NLB, ALB, or Elastic IP endpoints. Weight defines the priority of each endpoint. In my case, I am pointing each group at an ALB, which in turn forwards requests to the EC2 instances running the web application. Each group points to the ALB in its own region.
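A hedged sketch of the two endpoint groups for the active/passive case (the listener and `aws_lb.*` references are assumed resource names; an active/active setup would simply set both traffic dials to 100):

```hcl
resource "aws_globalaccelerator_endpoint_group" "primary" {
  listener_arn            = aws_globalaccelerator_listener.http.id
  endpoint_group_region   = "us-west-2"
  traffic_dial_percentage = 100   # primary takes all traffic while healthy

  endpoint_configuration {
    endpoint_id = aws_lb.primary.arn   # assumed ALB resource name
    weight      = 100
  }
}

resource "aws_globalaccelerator_endpoint_group" "dr" {
  listener_arn            = aws_globalaccelerator_listener.http.id
  endpoint_group_region   = "us-east-1"
  traffic_dial_percentage = 0     # DR receives traffic only after failover

  endpoint_configuration {
    endpoint_id = aws_lb.dr.arn        # assumed ALB resource name
    weight      = 100
  }
}
```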

Disaster Simulation

To simulate a disaster in a region, I am deleting my ALB from the "us-west-2" region (the one closest to me).

After deleting the ALB, I am momentarily unable to access the website.

The endpoint configuration detected that the "us-west-2" endpoint is unhealthy.

I kept a continuous ping running against the anycast IP address and was still able to get a response. Since I cannot simulate a full region failure, I cannot test anycast IP availability itself.

After about ten seconds, my website was served from the east region. This could likely be tuned further, but I have not tested it extensively.

RTO ≈ 10 sec in this test.

RPO will be different for different customers, depending on how their data is replicated.

Both AZs take part in serving the website. For disaster recovery, you may need just one AZ.

Fallback

To simulate recovery of the AWS region, I am running my Terraform configuration again. This recreates the ALB and its listeners.

The accelerator detected that the "us-west-2" endpoint became healthy again.

New traffic is diverted back to the "us-west-2" region.

Both DR region failover and fallback worked successfully.

Conclusion

I am very impressed with how Global Accelerator worked. Multi-region complexity was reduced, and failover and fallback were seamless. If you have a multi-region database such as Amazon Aurora, this is a great option for reducing RPO and increasing redundancy. It will definitely improve the user experience.

The next step for me is to test the performance and user-experience features of AWS Global Accelerator.


Multi-Cloud site-to-site Network Connectivity

Multi-cloud architecture is a smarter way to utilize public, private, and hybrid environments. Enterprises want the option to choose among multiple cloud providers for their use cases, and multi-cloud is now very popular with enterprise and mid-size companies. The following are benefits and considerations when selecting a multi-cloud environment.

Redundancy: Having more than one cloud provider helps with redundancy. If a particular region or service of one cloud provider fails, we can provide redundancy through another cloud provider.

Scalability: This point may not be as important, but it is definitely worth considering. Raising resource limits on a cloud account can sometimes be a lengthy process, which having multiple cloud providers can safeguard against.

Cost: Cost can also be viewed through the lens of competition. Some services are cheaper in one cloud environment, some in another. Comparing helps determine the cheapest solution for the enterprise.

Features: This is the prime reason for a multi-cloud environment. It gives you the flexibility to choose the environment best suited to the application's needs, rather than just whatever is available at the time.

Vendor lock-in: Some vendors have lock-in periods for specific services. Enterprises mostly wish to avoid this lock-in. With multiple clouds, we have more options for choosing the right vendor.

Nearest termination point / customer reach: Using a regional cloud provider helps the enterprise be near its data center or users, improving performance and reducing latency. On top of that, each cloud provider's global reach is different, so choose the provider whose reach is best for your end users.

This procedure can be used for any VPN connection that uses the BGP protocol. I am using dynamic routing, but static routing can be used as well. Below is the architecture diagram for my VPN connectivity.

Architecture Diagram – Image 1

Pre-Requisites

Download the following Terraform software versions:

  • Terraform v0.12.26
  • provider.aws v2.70.0
  • provider.google v3.0.0-beta.1

AWS and Google accounts should be configured for Terraform access.

I am using the "us-west-2" region for AWS and "us-west1" for Google. If you are planning to use a different region, select an appropriate instance image ID and update it in the configuration.

Create an EC2 instance key pair and add the key pair name to Parameter Store.

Change the BGP IPs if needed. I am using the defaults, which should work as long as they are not already used in your existing environment.

Source Code

Please download all files from the location below:

https://github.com/yogeshagrawal11/cloud/tree/master/google/Network

Implementation

Follow my Terraform initial setup guide if you are new to Terraform.

https://cloudtechsavvy.com/2020/09/20/terraform-initial-setup/

  • Download "aws_vpn.tf" and "google_vpn.tf"
  • Run "./terraform init" to initialize the Terraform setup
  • Run "./terraform plan" to verify connectivity to the clouds and check for errors
  • Run "./terraform apply --auto-approve" to start the implementation

Output from Terraform. Take note of the IP addresses, which will be used later in the configuration.

IP address – Image 2.

IPSec Sharekey

I am using AWS Parameter Store to store the passwords for the VPN tunnels. Two parameters will be used. I am not encrypting these keys, but it is advisable to encrypt them per security best practice.

AWS vpn shared key – image 3

Take note of the values of both the "vpn_sharedkey_aws_to_gcp_tunnel1" and "vpn_sharedkey_aws_to_gcp_tunnel2" parameters. These values will be used while creating the IPSec tunnels.
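If you prefer to define the keys in Terraform rather than by hand, a sketch like the following could write one to Parameter Store (the input variable is hypothetical; "SecureString" gives the encryption the post recommends but does not use):

```hcl
resource "aws_ssm_parameter" "tunnel1_key" {
  name  = "vpn_sharedkey_aws_to_gcp_tunnel1"
  type  = "SecureString"          # encrypted at rest, per best practice
  value = var.tunnel1_shared_key  # hypothetical input variable
}
```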

AWS VPN shared key value – Image 4

AWS & GCP network configuration

A VPC with CIDR 10.1.0.0/16 is created, with a route table attached.

AWS VPC config – Image 5

GCP VPC configuration. A subnet is attached.

GCP VPC configuration – Image 6

The GCP firewall allows traffic from the AWS subnet to the GCP subnet.

GCP firewall allowing only IPs from the AWS subnet for ICMP and SSH – Image 7

The Customer Gateway should have the IP address of the GCP VPN gateway (not the AWS VPN gateway address). The ASN should be in the private range 64512 through 65534. The ASN used in the AWS Customer Gateway is the one managed by GCP.

AWS Customer Gateway Config = GCP VPN Gateway config

AWS Customer Gateway – Image 8

The GCP VPN gateway IP information matches the Customer Gateway. Forwarding rules are mandatory for tunnel creation; Terraform creates those rules automatically.

Google VPN gateway – Image 9

The AWS Virtual Private Gateway gets the next ASN. It is advisable to use consecutive numbers; as a practice, I use odd numbers for one provider (e.g., GCP) and even numbers for another (e.g., AWS). This configuration also works with on-premises network devices, where ASN ordering defines precedence: all on-premises devices get the lower ASN numbers, and so on.

AWS Virtual private gateway with ASN 65002 – Image 10

Attach your site-to-site VPN connection to the Virtual Private Gateway and the Customer Gateway. This creates one VPN connection between the Customer Gateway (the GCP VPN gateway) and the AWS Virtual Private Gateway. I am using "ipsec.1" as the connection type.

AWS – Site-to-Site-VPN Connection – Image 11

This also creates two tunnels. I am using dynamic routing. As of this writing, BGP over AWS VPN is limited to 100 routes that can be exchanged. The tunnel information is as follows:

AWS tunnels are down because the GCP configuration is pending – Image 12

Tunnel IP address issue

The tunnels are configured properly but are down because the corresponding GCP tunnels are not yet created. I tried to create those tunnels using Terraform, but both AWS and GCP were taking the first IP (169.254.1.9) of the 169.254.1.8/30 subnet as their own, with the second IP (169.254.1.10) allocated as the peer. In fact, AWS should use the first IP, and the second IP in the subnet should be used by the GCP cloud router.

The correct BGP IPs for GCP are:

  • Tunnel 1 – cloud router IP 169.254.1.10 (second IP in the subnet); BGP peer IP (AWS side) = 169.254.1.9 (which is correctly configured)
  • Tunnel 2 – cloud router IP 169.254.1.14 (second IP in the subnet); BGP peer IP (AWS side) = 169.254.1.13 (which is correctly configured)
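One way to avoid the conflict (an assumption on my part, not necessarily what the repo does) is to pin the inside CIDRs on the AWS side, so the AWS tunnel interfaces predictably take the first usable IPs and the second usable IPs remain free for the GCP cloud router:

```hcl
resource "aws_vpn_connection" "to_gcp" {
  vpn_gateway_id      = aws_vpn_gateway.vgw.id       # assumed resource names
  customer_gateway_id = aws_customer_gateway.gcp.id
  type                = "ipsec.1"

  # AWS takes 169.254.1.9 / 169.254.1.13; GCP's cloud router can then
  # use 169.254.1.10 / 169.254.1.14.
  tunnel1_inside_cidr = "169.254.1.8/30"
  tunnel2_inside_cidr = "169.254.1.12/30"
}
```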

Create Tunnels in GCP

Now create two tunnels on the GCP VPN gateway with the following configuration:

  • Remote peer IP address: 35.161.67.220, the value of the Terraform output "aws_tunnel1_public_address"
  • IKE version = 1
  • IKE pre-shared key = the value of the "vpn_sharedkey_aws_to_gcp_tunnel1" parameter from AWS Parameter Store. Note: do not copy trailing spaces.
  • Cloud router = gcp-cloud-router
  • BGP session information:
    • BGP name = bgp1
    • Peer ASN = 65002
    • Cloud router BGP IP = 169.254.1.10, the value of "aws_tunnel1_inside_gcp_address" from the Terraform output
    • BGP peer IP = 169.254.1.9, the value of "aws_tunnel1_inside_aws_address" from the Terraform output

GCP BGP session config for tunnel1 – Image 13
GCP VPN tunnel1 configuration – Image 14

Perform the same steps for tunnel-dynamic2 with the following details:

  • Remote peer IP address: 35.163.174.84, the value of the Terraform output "aws_tunnel2_public_address"
  • IKE version = 1
  • IKE pre-shared key = the value of the "vpn_sharedkey_aws_to_gcp_tunnel2" parameter from AWS Parameter Store. Note: do not copy trailing spaces.
  • Cloud router = gcp-cloud-router
  • BGP session information:
    • BGP name = bgp2
    • Peer ASN = 65002
    • Cloud router BGP IP = 169.254.1.14, the value of "aws_tunnel2_inside_gcp_address" from the Terraform output
    • BGP peer IP = 169.254.1.13, the value of "aws_tunnel2_inside_aws_address" from the Terraform output
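For reference, the same console steps could be expressed in Terraform roughly as below (resource names and variables are my assumptions; the post creates these by hand precisely because of the IP-allocation issue described earlier):

```hcl
resource "google_compute_vpn_tunnel" "tunnel1" {
  name               = "tunnel-dynamic1"
  peer_ip            = var.aws_tunnel1_public_address  # 35.161.67.220 in this walkthrough
  shared_secret      = var.tunnel1_shared_key
  ike_version        = 1
  target_vpn_gateway = google_compute_vpn_gateway.gw.id
  router             = google_compute_router.gcp_cloud_router.id
}

# The interface takes the second usable IP of the /30; the peer is the AWS side
resource "google_compute_router_interface" "if_tunnel1" {
  name       = "if-tunnel1"
  router     = google_compute_router.gcp_cloud_router.name
  ip_range   = "169.254.1.10/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}

resource "google_compute_router_peer" "bgp1" {
  name            = "bgp1"
  router          = google_compute_router.gcp_cloud_router.name
  peer_ip_address = "169.254.1.9"
  peer_asn        = 65002
  interface       = google_compute_router_interface.if_tunnel1.name
}
```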

Upon completing this configuration, both tunnels should be up and running in both the GCP and AWS environments. Try refreshing the page if the status has not changed.

GCP – Both tunnels are up – Image 15
AWS – Both Tunnels are up – Image 16

This completes our network connectivity between the AWS and GCP environments.

Testing

To test, I am going to log in to my AWS instance with the key name defined in Parameter Store. Use the IPs from the Terraform output.

EC2 and instance IPs – Image 17
Logging in to the AWS instance via the public IP from Image 17 – Image 18

We have allowed the ICMP protocol (ping) and the SSH port from the AWS environment to the GCP environment, so we will try to ping the GCP instance's private IP from the AWS instance's private IP address.

SSH and ping test to GCP via private ip – Image 19

Voila — ping works. I could not log in to the GCP instance over SSH because I have not copied the instance's JSON key file to the EC2 instance, so SSH cannot authenticate.

GCP instance access via external IP

GCP get Public ip – Image 20

The ping test from the AWS EC2 instance to the GCP instance's public IP fails as expected, for two reasons: we do not have an internet gateway set up on the GCP VPC, and we have not allowed ICMP and SSH from the outside world in the firewall.

GCP test ssh and ICMP with public ip – Image 21

The test is successful.

Deletion of environment

Since we created the GCP tunnels separately, we need to delete those tunnels before destroying the infrastructure with Terraform.

Go to GCP > VPN > Cloud VPN Tunnels

Select both newly created tunnels and click “Delete”

GCP – Delete tunnel1 – Image 22

Once the tunnels are deleted, run the following command from the Terraform environment:

./terraform destroy --auto-approve

Terraform Delete – Image 23

Make sure all 25 resources are deleted.

Conclusion

Multi-cloud is the new normal, and private network connectivity is something everyone wants. I have given an example with compute instances, but this can be extended to a multi-tier architecture. Get the best of both worlds by implementing this solution.

Keep Building…


Terraform initial setup

Terraform is open-source software managed by HashiCorp. It is used for infrastructure as code.

Terraform manages external resources (such as public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a service) with "providers". HashiCorp maintains an extensive list of official providers and can also integrate with community-developed providers. Users interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to describe the desired final state: you write code for the state your system should be in after the run completes. If some resources already exist, a Terraform run will only create or modify the resources needed to reach that final state.

Once a user invokes Terraform on a given resource, Terraform performs CRUD (create, read, update, and delete) actions on the user's behalf to reach the desired state. The infrastructure code can be written as modules, promoting reusability and maintainability.
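As a minimal illustration of the declarative style (not from any of the repos in this blog), this configuration states that one VPC should exist; running `terraform apply` repeatedly creates it once and then makes no further changes:

```hcl
provider "aws" {
  region = "us-west-2"
}

# Desired state: one VPC with this CIDR. Terraform creates it if absent,
# reconciles it if drifted, and does nothing if it already matches.
resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "terraform-demo"
  }
}
```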

Download Software

Download the Terraform software from the link below:

https://www.terraform.io/downloads.html

Get Started – AWS

Follow the HashiCorp guide for getting started with the AWS environment.

https://learn.hashicorp.com/collections/terraform/aws-get-started

Get Started – Google Cloud

Follow the HashiCorp guide for getting started with the GCP environment.

https://learn.hashicorp.com/collections/terraform/gcp-get-started



Three Tier Architecture with AWS

In this story, I create a three-tier architecture with AWS resources: a load balancer as the first tier, web servers as the second (application logic) tier, and a database as the last tier. I am using DynamoDB as the NoSQL database.

Architecture

An Auto Scaling group is created with a minimum of 2 instances, spanning two subnets in different availability zones. This Auto Scaling group is used as the target group for an Application Load Balancer. In my configuration, instances are not reachable directly via their public addresses over port 80; only the Application Load Balancer forwards requests to the EC2 instances. Sessions terminate at the Application Load Balancer.

Two S3 buckets are needed: the first stores the userdata and DynamoDB scripts, and the second is used by the ALB to store access logs. IAM roles are created as well.

Configuration list

  • data.aws_ssm_parameter.s3bucket: S3 bucket information for storing scripts
  • aws_vpc.app_vpc: VPC for the environment
  • aws_eip.lb_eip: Elastic IP address for the load balancer
  • aws_iam_role.app_s3_dynamodb_access_role: role for the EC2 instance profile
  • data.aws_availability_zones.azs: gets the list of all availability zones
  • data.aws_ssm_parameter.accesslogbucket: S3 bucket name for storing ALB logs
  • aws_dynamodb_table.app-dynamodb-table: DynamoDB table resource
  • aws_iam_role_policy.app_s3_dynamodb_access_role_policy: policy attached to the "app_s3_dynamodb_access_role" role. DynamoDB full access is granted here; please grant only the access your application needs
  • aws_iam_instance_profile.app_instance_profile: EC2 instance profile for access to S3 storage and the DynamoDB table
  • aws_subnet.app_subnets: multiple subnets created in the VPC, one per availability zone in the region
  • aws_lb_target_group.app-lb-tg: target group for the ALB
  • aws_security_group.app_sg_allow_public: security group for the LB; port 80 open to the world
  • aws_internet_gateway.app_ig: internet gateway
  • aws_lb.app-lb: Application Load Balancer
  • app_s3_dynamodb_access_role: role for accessing DynamoDB and S3 (attached via the instance profile)
  • aws_route_table.app_rt: route table
  • aws_security_group.app_sg_allow_localip: security group allowing SSH access from the "localip" in the variables file, and allowing the ALB to reach EC2 instances over port 80
  • aws_instance.app-web: template instance used for AMI creation, which feeds the launch configuration and Auto Scaling group (ASG)
  • aws_lb_listener.app-lb_listner: ALB listener for health checks
  • aws_ami_from_instance.app-ami: creates an AMI from the "app-web" instance; this AMI is used for the launch configuration
  • aws_launch_configuration.app-launch-config: EC2 instance launch configuration used to create the Auto Scaling group
  • aws_autoscaling_group.app-asg: Auto Scaling group that creates two instances in different availability zones; the ALB sends requests to this ASG
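The AMI-to-ASG chain from the list above might be sketched like this (argument values such as instance type and sizes are illustrative, not taken from the repo):

```hcl
# Bake a golden AMI from the template instance
resource "aws_ami_from_instance" "app-ami" {
  name               = "app-golden-ami"
  source_instance_id = aws_instance.app-web.id
}

resource "aws_launch_configuration" "app-launch-config" {
  name_prefix          = "app-"
  image_id             = aws_ami_from_instance.app-ami.id
  instance_type        = "t2.micro"   # illustrative
  iam_instance_profile = aws_iam_instance_profile.app_instance_profile.name

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app-asg" {
  min_size             = 2
  max_size             = 4            # illustrative
  launch_configuration = aws_launch_configuration.app-launch-config.name
  vpc_zone_identifier  = aws_subnet.app_subnets[*].id
  target_group_arns    = [aws_lb_target_group.app-lb-tg.arn]
}
```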

Source code

Please download the source code from my GitHub repo:

https://github.com/yogeshagrawal11/cloud/tree/master/aws/3%20Tier%20app

  • aws-userdata-script.sh: runs when the userdata executes. It gets the instance ID, public IP, local IP, and availability zone name from the metadata server and writes them to "/var/www/html/index.html".
  • nps_parks.csv: input file copied from S3 and loaded into the DynamoDB table
  • dynamodb.py: uses the input file above to create a new table and insert records into it. The table is then queried, and the output is appended to "/var/www/html/index.html" for later viewing. The objective is to ensure that instances in different availability zones are able to communicate with the database, our third tier.
  • user_data.tpl: userdata template file used by Terraform
  • terraform.tfvars: Terraform variables file
  • main.tf: Terraform program file
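The template file is typically wired into the instance with something like the following (a sketch; the actual variable names inside `user_data.tpl` and the AMI variable are my assumptions):

```hcl
resource "aws_instance" "app-web" {
  ami           = var.base_ami_id   # hypothetical variable
  instance_type = "t2.micro"

  # Render user_data.tpl, passing the script bucket read from Parameter Store
  user_data = templatefile("${path.module}/user_data.tpl", {
    s3bucket = data.aws_ssm_parameter.s3bucket.value
  })
}
```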

P.S. I don't want to use this story to create a full-blown application.

Prerequisites

Download all files from the Github repository.

Download the "terraform" software and copy it to the same download location.

Create an S3 bucket to store the scripts. Create a "userdata" directory at the top level of the bucket and upload the "aws-userdata-script.sh", "nps_parks.csv", and "dynamodb.py" files to that location. The EC2 instances will copy these scripts using the userdata template file.

Create key pair for EC2 instance.

Create the following parameters:

accesslogbucket: <bucket name for ALB logs>. You can use the same bucket as for userdata.

ec2_keyname : <Key pair name>

s3bucket: s3://<bucketname>. Please ensure the "s3://" prefix appears before the bucket name in the parameter value.

image 2

Configuration Output

After running the Terraform template, you will see the output below.

The output includes the load balancer DNS link. You can add this output to your DNS records for future access. For this exercise, we will use this address directly to access our application.

image 3

Load balancer configuration: the DNS name to access your ALB endpoint, plus VPC, availability zone, and security group configuration. The public security group allows traffic from the world to the ALB on port 80. Image 5 shows the S3 location where the ALB will save its logs.

image 4
image 5

ALB target group configuration and health check details. The health check is performed on the "/" parent page; this can be changed per your application's endpoints. Image 7 shows the instances registered to the target group via the Auto Scaling group.

image 6
image 7

I first create a sample instance, "ya-web", and use it to create a "golden AMI". This AMI is then used in the launch configuration and to create the Auto Scaling group (ASG). Normally a golden AMI already exists; that AMI's ID can be passed in as a variable in the "terraform.tfvars" file. Image 9 shows the Auto Scaling group configuration; the minimum/maximum capacity can also be altered via input variables.

image 8
image 9

Instance information. "ya-web" is the template VM; the other two VMs are part of the Auto Scaling group.

image 10

Accessing the application through the load balancer. The LB forwards the request to the first instance, in AZ "us-west-2a". The instance is able to pull data from DynamoDB using the boto API, thanks to the instance profile we created in our resource file. In image 12, the request is forwarded to a second instance in a different AZ, "us-west-2b". I am using 20-second stickiness; this can also be managed via cookies. My intent is to keep the application a simple "hello world" that shows the bare minimum configuration.
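The 20-second stickiness mentioned above corresponds to a target-group stickiness block along these lines (a sketch; arguments other than the stickiness duration are illustrative):

```hcl
resource "aws_lb_target_group" "app-lb-tg" {
  name     = "app-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.app_vpc.id

  # ALB uses a load-balancer-generated cookie; duration is in seconds
  stickiness {
    type            = "lb_cookie"
    cookie_duration = 20
    enabled         = true
  }

  health_check {
    path = "/"   # health check on the parent page, as described above
  }
}
```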

ALB forwarding the request to the first instance, image 11
ALB forwarding the request to the second instance, image 12

The instances' public IPs are not accessible from the outside world (image 13). Only SSH and ping (ICMP) are allowed, from the "localip" defined in the variables file.

image 13(a)
image 13(b)

Disclaimer

Network security and identity security need to be improved for production use.