
AWS Transit Gateway

Networking is a growing challenge as environments diversify and data centers spread across the world; the limit is only imagination. Enterprises operate across different sites and geographies, but the common vein that joins those environments is the network. As demand grows, managing routes between sites gets complicated. AWS Transit Gateway (TGW) was born to make network engineers' lives easier. TGW helps with the following features –

  • Connect multiple VPCs together within a single AWS account
  • Connect VPCs across multiple AWS accounts
  • Provide inter-region connectivity across multiple VPCs
  • Connect an on-premises data center to VPC networks via VPN or AWS Direct Connect
  • Connect multiple cloud environments via VPN using BGP

Benefits of Transit Gateway

Easy Connectivity : AWS Transit Gateway acts as a cloud router and makes network deployment easy. After a new network is added to the TGW, its routes can be propagated across the environment automatically.

Better visibility and control : AWS Transit Gateway Network Manager is used to monitor Amazon VPCs and edge locations from a central location. This helps you identify and react to network issues quickly.

Flexible multicast : TGW supports multicast, which makes it easy to send the same content to multiple destinations.

Better Security : Traffic between Amazon VPCs and the TGW always remains on the Amazon private network. Data is encrypted and protected against common network exploits.

Inter-region peering : AWS Transit Gateway inter-region peering allows customers to route traffic across AWS Regions using the AWS global network. Inter-region peering provides a simple and cost-effective way to share resources between AWS Regions or replicate data for geographic redundancy.

Transit Gateway Components

A transit gateway has four major components –

Attachments : An attachment connects a network component to the gateway. Each attachment is added to a single route table. The following resources can be attached to a TGW (a minimal Terraform sketch follows the list) –

  • Amazon VPC
  • An AWS Direct Connect gateway
  • Peering connection with another transit gateway
  • VPN connection to on-prem or multi-cloud network
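
As an illustration, a minimal Terraform sketch of a VPC attachment might look like the following. The resource names, the VPC/subnet references, and the disabled default-association flags are hypothetical, not taken from the project's source code:

```hcl
# Attach a VPC to an existing transit gateway. One IP from each listed
# subnet is consumed by the TGW for routing (hypothetical names).
resource "aws_ec2_transit_gateway_vpc_attachment" "project_vpc1" {
  transit_gateway_id = aws_ec2_transit_gateway.tgw.id
  vpc_id             = aws_vpc.project_vpc1.id
  subnet_ids         = [aws_subnet.project_vpc1.id]

  # Manage route-table association/propagation explicitly instead of
  # relying on the TGW default route table.
  transit_gateway_default_route_table_association = false
  transit_gateway_default_route_table_propagation = false
}
```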

Transit gateway route table : A default route table is created with the TGW, and a TGW can have multiple route tables. A route table defines the routing boundary for its connections. Attachments are added to route tables; a given route table can hold multiple attachments, whereas an attachment can only be added to a single route table.

A route table includes dynamic and static routes. It determines the next hop for a given destination IP.
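
A sketch of a TGW route table with one static route, under the same hypothetical naming and an assumed management CIDR:

```hcl
# A dedicated TGW route table (e.g. for the project VPCs).
resource "aws_ec2_transit_gateway_route_table" "project" {
  transit_gateway_id = aws_ec2_transit_gateway.tgw.id
}

# A static route pointing a destination CIDR at a specific attachment.
resource "aws_ec2_transit_gateway_route" "to_mgmt" {
  destination_cidr_block         = "10.0.0.0/16" # hypothetical management CIDR
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.mgmt.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.project.id
}
```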

Association : An association attaches an attachment to a route table. Each attachment is associated with a single route table, but a route table can have multiple attachments.
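
In Terraform terms, an association is its own resource; a sketch with the same hypothetical names:

```hcl
# Associate the project VPC1 attachment with the project route table.
# An attachment can be associated with only one route table.
resource "aws_ec2_transit_gateway_route_table_association" "project_vpc1" {
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.project_vpc1.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.project.id
}
```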

Route Propagation : Every VPC and VPN associated with a route table can dynamically propagate its routes to that route table. If a VPN is configured with BGP, routes from the VPN network are propagated to the transit gateway automatically. For a VPC, you must create static routes to send traffic to the transit gateway. Peering attachments do not dynamically add routes to route tables, so static routes must be added for them.
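
Propagation is likewise a separate Terraform resource; a sketch:

```hcl
# Propagate the management VPC's routes into the project route table,
# so the project VPCs learn how to reach the management network.
resource "aws_ec2_transit_gateway_route_table_propagation" "mgmt_to_project" {
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.mgmt.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.project.id
}
```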

Architecture Design

We are going to test the following TGW scenarios. In this architecture design I am creating a “management VPC” that is shared across the entire organization. This VPC can be used for common organizational services such as Active Directory, DNS, DHCP, or NTP.

Project_VPC1 and Project_VPC2 will be able to communicate with each other and with management_vpc. Private_VPC is an isolated network (a private project): it will not be able to communicate with the project VPCs, but it should be able to communicate with management_vpc.

Following is the architecture for this design –

Design document – Image 1

Pre-requisites

  • Region name
  • AMI ID – the AMI ID depends on the region
  • Instance role – we don't need this one explicitly, as we are not accessing any services from the instances
  • Instance key pair – create an instance key pair and add the key pair name to Parameter Store; the parameter name should be “ec2-keypair” and the value should be the name of your key pair
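
For reference, the key pair name stored in Parameter Store could be read and used in Terraform like this. Only the parameter name “ec2-keypair” comes from the text above; the AMI variable and resource names are hypothetical:

```hcl
# Read the key pair name stored under "ec2-keypair" in SSM Parameter Store.
data "aws_ssm_parameter" "keypair" {
  name = "ec2-keypair"
}

# Use it when launching a test instance (AMI variable and subnet are hypothetical).
resource "aws_instance" "mgmt" {
  ami           = var.ami_id
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.mgmt.id
  key_name      = data.aws_ssm_parameter.keypair.value
}
```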

Source code

https://github.com/yogeshagrawal11/cloud/tree/master/aws/Network/Transit%20Gateway

Cost

This implementation has a cost associated with it. If testing with the attached configuration is completed within one hour, it should not cost more than about 50 cents.

For the latest charges, check the AWS Pricing Calculator.

Implementation

If this is your first time with Terraform, check this blog to get started –

https://cloudtechsavvy.com/2020/09/20/terraform-initial-setup/

Run the following commands to start Terraform –

  • ./terraform init
  • ./terraform plan
  • ./terraform apply --auto-approve

I am using Terraform for the implementation. Following is the Terraform output –

In total, 47 (not 45) resources are configured.

4 VPCs created.

4 subnets created. If you observe, the number of available IPs is one less than usual, because one IP in each attached subnet is used by the transit gateway for data transfer and routing.

4 route tables created. Each route table uses the transit gateway as the target for the other VPC networks.
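
A sketch of what such a route entry looks like in Terraform; the route table reference and summary CIDR are hypothetical:

```hcl
# In the management VPC's route table, send traffic destined for the
# other VPC networks to the transit gateway.
resource "aws_route" "mgmt_to_tgw" {
  route_table_id         = aws_route_table.mgmt.id
  destination_cidr_block = "10.0.0.0/8" # hypothetical summary of the other VPCs
  transit_gateway_id     = aws_ec2_transit_gateway.tgw.id
}
```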

Security groups – these are among the most important configurations in the real world. For DNS you would allow port 53; for an AD server, open the appropriate ports. In my case, I am using ping to check communication.
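
A sketch of a security group that permits this ping test; the names and source CIDR are hypothetical and should be tightened for real workloads:

```hcl
# Allow ICMP (ping) from the other VPC ranges for connectivity testing.
resource "aws_security_group" "ping" {
  name   = "allow-icmp"
  vpc_id = aws_vpc.mgmt.id

  ingress {
    protocol    = "icmp"
    from_port   = -1
    to_port     = -1
    cidr_blocks = ["10.0.0.0/8"] # hypothetical; restrict in production
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```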

The private VPC will only be able to communicate with the management network.

The project VPCs will be able to communicate with each other but not with the private VPC.

Note : We don't have to explicitly block traffic between the project VPCs and the private VPC; this is blocked by the transit gateway because we are not going to add propagation between them.

Transit gateway created. Remember, if ASN 64512 is already used by an existing VPN, a parameter can be added to change it.

Enabling DNS support helps you reach cloud resources by DNS name rather than IP address, certainly a useful feature.

A transit gateway can be peered with another transit gateway for inter-region data transfer between VPCs over the Amazon private network. It is advisable to disable auto-accept of shared attachments for security reasons. A hedged sketch of the gateway resource with these settings follows.
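
In Terraform, those settings map onto the transit gateway resource roughly as below; the ASN variable name is hypothetical:

```hcl
# Transit gateway with an overridable ASN, DNS support enabled, and
# auto-accept of shared attachments disabled for security.
resource "aws_ec2_transit_gateway" "tgw" {
  amazon_side_asn                = var.tgw_asn # e.g. 64512; change if already in use
  dns_support                    = "enable"
  auto_accept_shared_attachments = "disable"
}
```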

A default route table is created, and any VPC not explicitly associated with another route table is associated with the default route table.

Each VPC or VPN connection needs to be added to the transit gateway as an attachment.

Each route table is created. Route tables can be created according to the segregation needed in the environment. In my case I am creating 3 route tables for 4 VPCs. In enterprise environments we generally create 5 route tables, with separate route tables for the backup and security environments.

Since project VPC1 and project VPC2 have the same network requirements, I added them to the same route table.

Management Route table

The management route table has the management VPC attachment. Propagation is added from every network that needs to communicate with the management VPC. In this case the management VPC should be able to communicate with all other networks, so propagation is added from all of them; all their routes are then propagated automatically.

Private Network Route table

The private VPC is attached to the private network route table. The private network should be able to communicate with the management VPC, so propagation is added for the management VPC; the route for the management VPC is added automatically after propagation.

Project Route table

The project route table has attachments from both project VPCs. Propagation is added for the other project VPC network and the management network, and the respective routes are added.

Testing Environment

The management server is able to ping instances in both the private and project environments.

The project VPCs can talk to the management VPC and to each other, but not to the private VPC.

The private VPC is able to talk to the management VPC but cannot communicate with any project VPC. That makes the private VPC private within the organization.

Delete terraform configuration

To delete the Terraform configuration, run the following and ensure all resources are destroyed –

./terraform destroy --auto-approve

Conclusion

A transit gateway is a tool to connect multiple VPC, VPN, and Direct Connect networks so they can communicate over a private network. A transit gateway can also be used to isolate network traffic. This makes routing comparatively easy.

An SD-WAN partner solution can be used to automate adding new remote sites to the AWS network.

References

https://docs.aws.amazon.com/index.html


Multi-Cloud site-to-site Network Connectivity

Multi-cloud architecture is a smarter way to utilize public, private, and hybrid environments. Enterprises want the option to choose among multiple cloud providers for their use cases, and multi-cloud is nowadays very popular with enterprise and mid-size companies. Following are the benefits and considerations when selecting a multi-cloud environment.

Redundancy: Having more than one cloud provider helps with redundancy. If a particular region or service of a given cloud provider fails, we can configure redundancy through another cloud provider.

Scalability: This point may not be that important, but it is definitely worth considering. Sometimes it is a lengthy process to increase resource limits for a cloud account, and having multiple cloud providers safeguards against that.

Cost: Cost can also be viewed through competition. Some services are cheaper in one cloud environment, some in another. Comparing helps determine the cheapest solution for the enterprise.

Features: This is the prime reason for a multi-cloud environment. Multi-cloud gives you the flexibility to choose the environment best suited to the application's needs, rather than just whatever is available at the time.

Customer Lock-in: Some vendors have a lock-in period for specific services. Enterprises mostly wish to avoid this lock-in. With multiple clouds we have more options for choosing the right vendor.

Nearest termination point / customer reach : Using a regional cloud provider helps an enterprise stay near its data centers or users, which improves performance and reduces latency. On top of that, each cloud provider's global reach is different, so implement the cloud provider whose reach is better for your end users.

This procedure can be implemented for any VPN connection that uses the BGP protocol. I am using dynamic routing, but static routing can be used as well. Below is the architecture diagram for my VPN connectivity.

Architecture Diagram – Image 1

Pre-Requisites

Download the Terraform software. Versions used –

  • Terraform v0.12.26
  • provider.aws v2.70.0
  • provider.google v3.0.0-beta.1

AWS and Google accounts should be configured for Terraform access.

I am using the “us-west-2” region for AWS and the “us-west1” region for Google. If you are planning to use a different region, select the appropriate instance image ID and update it.

Create an EC2 instance key pair and add the key pair name to Parameter Store.

Change the BGP IPs if needed. I am using the defaults; these IPs should work as long as they are not already in use in your existing environment. A hedged sketch of how these defaults could be parameterized follows.
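
A sketch of how these link-local /30 blocks could be exposed as Terraform variables; the variable names are hypothetical:

```hcl
# APIPA /30 blocks for the two tunnels' inside (BGP) addresses.
# AWS takes the first usable IP; the GCP cloud router takes the second.
variable "tunnel1_inside_cidr" {
  default = "169.254.1.8/30" # .9 = AWS, .10 = GCP cloud router
}

variable "tunnel2_inside_cidr" {
  default = "169.254.1.12/30" # .13 = AWS, .14 = GCP cloud router
}
```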

Source Code

Please download all files from the location below:

https://github.com/yogeshagrawal11/cloud/tree/master/google/Network

Implementation

Follow my Terraform initial setup guide in case you are new to Terraform.

https://cloudtechsavvy.com/2020/09/20/terraform-initial-setup/

  • Download “aws_vpn.tf” and “google_vpn.tf”
  • Run “./terraform init” to initialize the Terraform setup
  • Run “./terraform plan” to verify connectivity to the clouds and check for errors
  • Execute “./terraform apply --auto-approve” to start the implementation

The Terraform output is shown below. Take note of the IP addresses, which will be used later in the configuration.

IP address – Image 2.

IPsec Shared Key

I am using AWS Parameter Store to store the passwords for the VPN tunnels; two parameters will be used. I am not encrypting these keys, but it is advisable to encrypt them as per security best practice.
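
A sketch of how one tunnel's pre-shared key could be written to Parameter Store. The parameter name comes from the text below; the VPN connection resource name "aws_to_gcp" is hypothetical (it is sketched later in this post), and switching the type to "SecureString" would be the encrypted variant:

```hcl
# Store tunnel 1's pre-shared key; use type = "SecureString" to encrypt.
resource "aws_ssm_parameter" "tunnel1_key" {
  name  = "vpn_sharedkey_aws_to_gcp_tunnel1"
  type  = "String"
  value = aws_vpn_connection.aws_to_gcp.tunnel1_preshared_key
}
```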

AWS vpn shared key – image 3

Take note of the values of both the “vpn_sharedkey_aws_to_gcp_tunnel1” and “vpn_sharedkey_aws_to_gcp_tunnel2” parameters. These values will be used while creating the IPsec tunnels.

AWS VPN shared key value – Image 4

AWS & GCP network configuration

A VPC with CIDR 10.1.0.0/16 is created, with a route table attached.

AWS VPC config – Image 5

GCP VPC configuration. A subnet is attached.

GCP VPC configuration – Image 6

The GCP firewall allows traffic from the AWS subnet to the GCP subnet.

GCP Firewall Allowing only ip from AWS subnet for icmp and SSH – image 7

The customer gateway should have the IP address of the GCP VPN gateway (not the AWS VPN gateway address). The ASN should be in the private range from 64512 to 65534. The ASN used in the AWS customer gateway is the one managed by GCP.

AWS Customer Gateway Config = GCP VPN Gateway config

AWS Customer Gateway – Image 8
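
A sketch of the customer gateway resource described above. The GCP gateway address variable and the ASN value are illustrative placeholders, not values from the project:

```hcl
# The "customer" side, from AWS's point of view, is the GCP VPN gateway.
resource "aws_customer_gateway" "gcp" {
  bgp_asn    = 65001                       # hypothetical GCP cloud router ASN
  ip_address = var.gcp_vpn_gateway_address # public IP of the GCP VPN gateway
  type       = "ipsec.1"
}
```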

The GCP VPN gateway IP information matches the customer gateway. Forwarding rules are mandatory for tunnel creation; Terraform creates those rules automatically.

Google VPN gateway – Image 9

The AWS virtual private gateway gets the next ASN number; it is advisable to use the next number. As a best practice, I use odd numbers for one provider (e.g. GCP) and even numbers for another (e.g. AWS). This configuration also works with on-premises network devices, where the numbering defines precedence: all on-premises devices get lower ASN numbers, and so on.

AWS Virtual private gateway with ASN 65002 – Image 10
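
A sketch of the virtual private gateway with ASN 65002; the VPC reference is hypothetical:

```hcl
# AWS-side gateway; amazon_side_asn 65002 matches the "peer ASN"
# entered later in the GCP BGP session configuration.
resource "aws_vpn_gateway" "aws" {
  vpc_id          = aws_vpc.main.id # hypothetical VPC reference
  amazon_side_asn = 65002
}
```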

Attach your site-to-site VPN connection to the virtual private gateway and the customer gateway. This creates one VPN connection between the customer gateway (the GCP VPN gateway) and the AWS virtual private gateway. I am using “ipsec.1” as the connection type. A hedged Terraform sketch follows the screenshot.

AWS – Site-to-Site-VPN Connection – Image 11
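
A sketch of the connection resource, tying the earlier sketches together; the resource names and pinned inside CIDRs are assumptions consistent with the IPs used in this post:

```hcl
# Site-to-site VPN between the virtual private gateway and the
# customer gateway (GCP), with dynamic (BGP) routing.
resource "aws_vpn_connection" "aws_to_gcp" {
  vpn_gateway_id      = aws_vpn_gateway.aws.id
  customer_gateway_id = aws_customer_gateway.gcp.id
  type                = "ipsec.1"
  static_routes_only  = false

  # Pin the inside (BGP) addressing so both ends agree; see the
  # tunnel IP address issue discussed below.
  tunnel1_inside_cidr = "169.254.1.8/30"
  tunnel2_inside_cidr = "169.254.1.12/30"
}
```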

This also creates two tunnels. I am using dynamic routing. As of this writing, BGP has a limit of 100 routes that can be exchanged over a VPN connection. The tunnel information is as follows –

AWS tunnels are down due to GCP configuration pending – Image 12

Tunnel IP address issue

The tunnels are configured properly but are in the down state because the corresponding GCP tunnels are not yet created. I initially tried to create those tunnels with Terraform, but the issue was that both AWS and GCP took the first IP (169.254.1.9) of the 169.254.1.8/30 subnet as their own, allocating the second IP (169.254.1.10) as the peer. In reality, AWS takes the first IP, and the second IP in the subnet should be used by the GCP cloud router.

The correct BGP IPs for GCP are listed below (a Terraform sketch of the corrected router configuration follows the list):

  • Tunnel 1 – cloud router IP 169.254.1.10 (second IP in the subnet) and BGP peer IP (from AWS) = 169.254.1.9 (which is correctly configured)
  • Tunnel 2 – cloud router IP 169.254.1.14 (second IP in the subnet) and BGP peer IP (from AWS) = 169.254.1.13 (which is correctly configured)
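
In Terraform, the corrected GCP side would pin the cloud router interface to the second IP and peer with the first, roughly as below. The resource names and the tunnel reference are hypothetical:

```hcl
# Tunnel 1: cloud router takes 169.254.1.10 and peers with AWS at 169.254.1.9.
resource "google_compute_router_interface" "tunnel1" {
  name       = "tunnel1-interface"
  router     = google_compute_router.gcp.name
  region     = "us-west1"
  ip_range   = "169.254.1.10/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}

resource "google_compute_router_peer" "tunnel1" {
  name            = "bgp1"
  router          = google_compute_router.gcp.name
  region          = "us-west1"
  peer_ip_address = "169.254.1.9" # AWS inside address
  peer_asn        = 65002         # AWS virtual private gateway ASN
  interface       = google_compute_router_interface.tunnel1.name
}
```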

Create Tunnels in GCP

Now create two tunnels on the GCP VPN gateway with the following configuration –

  • Remote peer IP address : 35.161.67.220 – value from the Terraform output “aws_tunnel1_public_address”
  • IKE version = 1
  • IKE pre-shared key = the value of the “vpn_sharedkey_aws_to_gcp_tunnel1” parameter from AWS Parameter Store. Note: do not copy trailing spaces.
  • Cloud router = gcp-cloud-router
  • BGP session information –
    • BGP name = bgp1
    • Peer ASN = 65002
    • Cloud router BGP IP = 169.254.1.10 – value of “aws_tunnel1_inside_gcp_address” from the Terraform output
    • BGP peer IP = 169.254.1.9 – value of “aws_tunnel1_inside_aws_address” from the Terraform output
GCP BGP session config for tunnel1. – image 13
GCP VPN tunnel1 configuration – image 14

Perform the same steps on tunnel-dynamic2 with the following details –

  • Remote peer IP address : 35.163.174.84 – value from the Terraform output “aws_tunnel2_public_address”
  • IKE version = 1
  • IKE pre-shared key = the value of the “vpn_sharedkey_aws_to_gcp_tunnel2” parameter from AWS Parameter Store. Note: do not copy trailing spaces.
  • Cloud router = gcp-cloud-router
  • BGP session information –
    • BGP name = bgp2
    • Peer ASN = 65002
    • Cloud router BGP IP = 169.254.1.14 – value of “aws_tunnel2_inside_gcp_address” from the Terraform output
    • BGP peer IP = 169.254.1.13 – value of “aws_tunnel2_inside_aws_address” from the Terraform output
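
For reference, the Terraform equivalent of one of these console-created tunnels would look roughly like this; the gateway reference and the shared-secret variable are hypothetical:

```hcl
# Classic VPN tunnel to the AWS tunnel 2 endpoint: IKEv1, dynamic routing.
resource "google_compute_vpn_tunnel" "tunnel2" {
  name               = "tunnel-dynamic2"
  region             = "us-west1"
  peer_ip            = "35.163.174.84"        # aws_tunnel2_public_address
  shared_secret      = var.tunnel2_shared_key # value of the SSM parameter
  target_vpn_gateway = google_compute_vpn_gateway.gcp.self_link
  ike_version        = 1
  router             = google_compute_router.gcp.name
}
```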

Upon completing this configuration, both tunnels should be up and running in both the GCP and AWS environments. Try refreshing the page if the status has not changed.

GCP – Both tunnels are up – Image 15
AWS – Both Tunnels are up – Image 16

This completes our network connectivity between the AWS and GCP environments.

Testing

To test, I am going to log in to my AWS instance with the key name that is defined in Parameter Store. Use the following IP from the Terraform output.

EC2 and Instance IP – Image 17
Login into AWS instance via public ip from Image 17 – Image 18

We have allowed the ICMP protocol (for ping) and the SSH port from the AWS environment to the GCP environment, so we will try to ping the GCP instance's private IP from the AWS instance's private IP address.

SSH and ping test to GCP via private ip – Image 19

Voila, the ping works. I could not log in to the GCP instance because I have not copied the instance's key (json) file to the EC2 instance, so SSH cannot authenticate.

GCP instance access via external IP

GCP get Public ip – Image 20

The ping test from the AWS EC2 instance to the GCP instance's public IP fails as expected, for two reasons: we do not have an internet gateway set up on the GCP VPC, and we have not allowed ICMP and SSH in the firewall from the outside world.

GCP test ssh and ICMP with public ip – Image 21

The test is successful: external access is blocked as expected.

Deletion of environment

Since we created the GCP tunnels separately, we need to delete those tunnels before destroying the rest of the infrastructure with Terraform.

Go to GCP > VPN > Cloud VPN Tunnels

Select both newly created tunnels and click “Delete”

GCP – Delete tunnel1 – Image 22

Once the tunnels are deleted, run the following command from the Terraform environment –

./terraform destroy --auto-approve

Terraform Delete – Image 23

Make sure all 25 resources are deleted.

Conclusion

Multi-cloud is the new normal, and private network connectivity between clouds is something everyone wants. I have given an example with compute instances, but this can be extended to a multi-tier architecture. Get the best of both worlds by implementing this solution.

Keep Building…