
Randomizer template for CloudFormation

An old colleague of mine reached out to me about creating a random string within CloudFormation. If you have not done it before, it can get tricky. I wish Amazon provided a built-in function for this, but then how would I have showcased my love for SERVERLESS in this blog? I will use a Lambda function to generate random strings and a CloudFormation custom resource to call that Lambda function.

Template Description

RandomizerTemplate.yaml

The only parameter needed for a random string is the length of the string. By default, this template will create a 6-character string.

Parameter : Image 1
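
The template itself appears only as a screenshot; as a minimal sketch, the parameter block looks something like this, assuming the RandomStringLength name that is used later when creating the bucket stack:

    Parameters:
      RandomStringLength:
        Type: Number
        Default: 6
        Description: Length of the random string to generate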

The template also creates the Lambda function and its execution role. I wrote this Lambda function in Python. Since it is created once and reused many times, no Python expertise is needed; just use the function as is.

Lambda function to generate parameter string : Image 2
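
The original function is shown only as a screenshot, so this is not the author's exact code; a minimal sketch of such a handler, assuming the length arrives as a RandomStringLength resource property, and using the cfnresponse module that CloudFormation provides to inline (ZipFile) Python functions:

    import random
    import string

    import cfnresponse  # available to Lambda code defined inline in a template

    def handler(event, context):
        try:
            # Delete events need no work, only a SUCCESS response
            if event['RequestType'] == 'Delete':
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                return
            length = int(event['ResourceProperties'].get('RandomStringLength', 6))
            chars = string.ascii_letters + string.digits
            rand = ''.join(random.choice(chars) for _ in range(length))
            # Attribute names match the Fn::GetAtt calls used later in this post
            data = {
                'RandomString': rand,
                'Lower_RandomString': rand.lower(),
                'Upper_RandomString': rand.upper(),
                'RandomNumber': ''.join(random.choice(string.digits) for _ in range(length)),
            }
            cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})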

This template exports the Lambda function ARN (RandomizerLambdaArn). This ARN is used in the ServiceToken section of the custom resource.


Image 3
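
A sketch of that export, assuming the function's logical ID in RandomizerTemplate.yaml is RandomizerFunction (a hypothetical name):

    Outputs:
      RandomizerLambdaArn:
        Description: ARN of the randomizer Lambda function
        Value: !GetAtt RandomizerFunction.Arn   # RandomizerFunction is a hypothetical logical ID
        Export:
          Name: RandomizerLambdaArn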

Calling Randomizer in your template (CreateBucketTemplate.yaml)

This is how to call the randomizer function from another template. Copy and paste this custom resource into any template where a random string is needed.

Calling custom resource : Image 4
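
In sketch form, the custom resource looks roughly like this; the logical ID RandomizerLambda matches the Fn::GetAtt calls listed below, while Custom::Randomizer is an arbitrary type name:

    Resources:
      RandomizerLambda:
        Type: Custom::Randomizer
        Properties:
          ServiceToken: !ImportValue RandomizerLambdaArn   # ARN exported by RandomizerTemplate.yaml
          RandomStringLength: !Ref RandomStringLength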

To get the random string, use the following Fn::GetAtt attributes –

  • RandomizerLambda.RandomString for an alphanumeric random string
  • RandomizerLambda.Lower_RandomString for a lowercase random string
  • RandomizerLambda.Upper_RandomString for an uppercase random string
  • RandomizerLambda.RandomNumber for a numeric string
Use of random string in a CloudFormation resource : Image 5
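
For example, a bucket resource might consume the lowercase attribute (S3 bucket names must be lowercase); the "mybucket" prefix here is hypothetical:

    RandomBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: !Join
          - '-'
          - - mybucket                                        # hypothetical prefix
            - !GetAtt RandomizerLambda.Lower_RandomString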

Download Randomizer template

Download code from the following GitHub link.

https://github.com/yogeshagrawal11/cloud/tree/master/aws/Cloud%20Formation/Randomizer

  • RandomizerTemplate.yaml : Template to create the Randomizer Lambda function.
  • CreateBucketTemplate.yaml : Template to create an S3 bucket using a random string.

Prerequisites

Download the templates to a folder. Please ensure the identity used to deploy the templates has access to create the following resources –

  • IAM Role
  • IAM Policy
  • Lambda function. The Lambda function will in turn create a log group and log stream for events.
  • A custom resource to call the Lambda function for a random string.

Ensure the CloudFormation export name "RandomizerLambdaArn" is not already present in your environment.
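
You can verify this from the CLI, for example –

  • aws cloudformation list-exports --query "Exports[?Name=='RandomizerLambdaArn']"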

Implementation

I am using the CLI for implementation; this template can easily be deployed from the AWS console as well.

Install and configure the AWS CLI, and make sure you have the proper access. Run the following commands to deploy the randomizer template.

  • To configure the AWS CLI environment, run –
    • aws configure
  • To validate that the randomizer template is good –
    • aws cloudformation validate-template --template-body "file://RandomizerTemplate.yaml"
Validate Randomizer template – Image 1
  • To install the randomizer stack –
    • aws cloudformation create-stack --stack-name randomizerStack --disable-rollback --capabilities CAPABILITY_IAM --template-body "file://RandomizerTemplate.yaml"
Creating Randomizer template – Image 2

Ensure the stack is created successfully.

Template status – Image 3

The Lambda function is created, along with its execution role and policy (you can use an existing role instead if you need to reduce the role count). The Lambda function will create a CloudWatch log group and log stream for its metrics and output information. This is very useful: one can log parameters like project, stack name, and application name in the stack output, which can then be tracked for accounting or analysis purposes.

The randomizer stack has a default input character length of 6, but it can be changed when requesting the stack.

The outputs include several flavors of random string: alphanumeric, numeric, and lowercase-only (used for S3 bucket names).

Validating the bucket creation template.

Now create a bucket using the template. I am passing a parameter value of 10 because I need a 10-character string for the bucket name –

  • aws cloudformation create-stack --stack-name CreateBucketStack --parameters ParameterKey=RandomStringLength,ParameterValue=10 --template-body "file://CreateBucketTemplate.yaml"

A new bucket is created with a 10-character random string.

Creating a new bucket with the default 6-character string.

All 3 stacks are created.

New bucket with the default 6-character string.

The default value of 6 is assigned to the "RandomStringLength" parameter by the randomizer stack.

Both buckets are created: the first with a 10-character string and the second with a 6-character string.

Clean up

Delete all 3 stacks via the CLI or the console.

CLI command to delete all 3 stacks

  • aws cloudformation delete-stack --stack-name CreateBucketStack1
  • aws cloudformation delete-stack --stack-name CreateBucketStack
  • aws cloudformation delete-stack --stack-name randomizerStack

The Lambda function creates a CloudWatch log group and log stream. Delete them by going to CloudWatch -> Log groups, filtering on "randomizer", selecting the appropriate log group's checkbox, and choosing Delete from the Actions menu.
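
Alternatively, from the CLI, assuming the default /aws/lambda/<function-name> log group naming –

  • aws logs delete-log-group --log-group-name "/aws/lambda/<function-name>"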

Conclusion

Use this randomizer template whenever you need a random string. It is very useful for AMI names, Auto Scaling group names, and S3 bucket names.

PS: Security was not the focus of this blog; the intention is purely to kickstart my fellow builders.

Enjoy !!!!!


Convert object-oriented data to NoSQL DynamoDB — 101

The IoT ecosystem is the buzzword of the day, and it demands a lot of data management. We receive data, but making use of that data is what matters most. This design is a very small portion of a bigger portfolio; many more applications can be integrated into it. There are also many ways to perform this transformation; Athena and Glue could certainly be used here.

Design overview

Consider this design a bare-minimum setup for converting object-oriented data into data usable for analytics. I am trying to use managed services as much as possible, but third-party tools could be substituted.

An application or IoT device dumps data into the S3 bucket. The data can have variable fields as long as the "name" field is common to all records. S3 triggers the Lambda function upon completion of a put request. The Lambda function downloads the file from the S3 bucket into its temporary storage. To allow historic trending of time-series data, the DynamoDB hash key can be a combination of "name" and "timestamp".

Lambda converts the CSV file into JSON and adds each row as an item in an Amazon DynamoDB table. Upon success, Lambda sends a notification to an SNS topic, which is configured with two types of subscriptions: SMS and SQS.

Failure events can be sent to another topic for retry.
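
The full function ships as a zip file in the repository linked below; this is only a condensed sketch of the core flow, with the table and topic names passed through hypothetical environment variables:

    import csv
    import os
    import urllib.parse

    import boto3

    s3 = boto3.client('s3')
    dynamodb = boto3.resource('dynamodb')
    sns = boto3.client('sns')

    def handler(event, context):
        # The S3 put event carries the bucket and object key that triggered us
        record = event['Records'][0]['s3']
        bucket = record['bucket']['name']
        key = urllib.parse.unquote_plus(record['object']['key'])

        # Copy the file into Lambda's temporary storage
        local_path = '/tmp/' + os.path.basename(key)
        s3.download_file(bucket, key, local_path)

        # Insert each CSV row as an item; rows missing the hash key are skipped
        table = dynamodb.Table(os.environ['TABLE_NAME'])   # hypothetical env var
        inserted = 0
        with open(local_path) as f:
            for row in csv.DictReader(f):
                if not row.get('name'):
                    continue
                table.put_item(Item=row)
                inserted += 1

        # Notify the success topic; both the SMS and SQS subscriptions receive it
        sns.publish(
            TopicArn=os.environ['TOPIC_ARN'],              # hypothetical env var
            Message=f'Inserted {inserted} items from s3://{bucket}/{key}',
        )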

Architecture diagram (Image 1)

Use Case:

With a little tweaking, this code can be used to pull a set of S3 data and perform analysis on it (instead of the put trigger, use a copy or post event). Say a team wants to run analytics on all data from last month: we can stand up this environment, provide the DynamoDB table for that specific analysis, and tear the configuration down once the work is done.

Infrastructure as code (IaC)

IaC is one of the most important application deployment tools: it reduces errors and provides highly repeatable infrastructure, and it saves me from configuring parameters manually. All resource names are prefixed with the "appname" variable, so the same configuration can be reused for different application environments or teams.

I chose Terraform so that a hybrid implementation remains an option if a customer requires it. Terraform supports all major cloud environments; obviously, the resources would need to be changed appropriately.

Terraform provider information: I highly recommend setting up a profile when running the "terraform init" command, so that different environments are used with different access.

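A sketch of the provider block, with hypothetical variable names:

    provider "aws" {
      region  = var.region
      profile = var.profile   # named CLI profile instead of hard-coded keys
    }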

Avoid hard-coding "access_key" and "secret_key". You can also run Terraform from an EC2 instance with a proper IAM role.

The following resources will be added to the environment for this implementation –

  • app_lambda_role

The Lambda function assumes this role. It should mainly include S3 read access, CloudWatch log group and stream write access, DynamoDB add/read/update item access, and SNS publish access.

  • app-lambda-cloud-watch-policy

The policy granting the access described above.

  • app-lambda-cloud-watch-policy-attachment

Attach the policy to the role.

  • allow_bucket

The Lambda permission that allows S3 to trigger the Lambda function.

  • app-lambda-func

The Lambda function that runs when triggered by S3.

  • bucket_notification

The S3 notification resource that triggers the Lambda function on the events specified in the notification. The "prefix" and "suffix" filters can be used to separate different types of environments (see the sketch after this list).

  • app-snstopic

The SNS topic where the Lambda function sends notifications of successful events. Note: I have not configured notifications for failure events; create another topic for that and update the Lambda code accordingly.

  • app-sns-target

Connects "SQS" as a subscription to "app-snstopic".

  • app-snstopic-sms

A separate topic for SMS. We could combine topics by simply adding another SMS subscription, but I wanted different topics to carry different kinds of data: toward SQS we can send details about which rows failed so they can be retried, while the SMS topic carries concise information.

  • app-sms-target

Connects the SMS phone number (or list of phone numbers) to which events are sent.

  • app-sqs

The queue that receives the information. It could be used to capture unsuccessful items so that a Lambda function is triggered to resolve the issues or retry them; I have not added that functionality.

  • app-dynamodb-table

The table is created per the input schema. The hash key is important: all input data should include it, and any item missing the hash key will not be inserted into the DynamoDB NoSQL database. In my input, the "name" field is used as the hash key (see the sketch after this list).
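
As a sketch (not the repository's exact main.tf), the trigger wiring and the table look roughly like this; the bucket resource name app_bucket and the filter values are hypothetical:

    # Allow S3 to invoke the Lambda function
    resource "aws_lambda_permission" "allow_bucket" {
      statement_id  = "AllowExecutionFromS3Bucket"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.app-lambda-func.arn
      principal     = "s3.amazonaws.com"
      source_arn    = aws_s3_bucket.app_bucket.arn
    }

    # Trigger the Lambda function when an object is put into the bucket
    resource "aws_s3_bucket_notification" "bucket_notification" {
      bucket = aws_s3_bucket.app_bucket.id

      lambda_function {
        lambda_function_arn = aws_lambda_function.app-lambda-func.arn
        events              = ["s3:ObjectCreated:Put"]
        filter_prefix       = "input/"   # hypothetical prefix
        filter_suffix       = ".csv"
      }

      depends_on = [aws_lambda_permission.allow_bucket]
    }

    # Table keyed on "name"; items missing the hash key are rejected
    resource "aws_dynamodb_table" "app-dynamodb-table" {
      name         = "${var.appname}_table"   # hypothetical naming scheme
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "name"

      attribute {
        name = "name"
        type = "S"
      }
    }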

Source code

Download the source code from the GitHub link below –

https://github.com/yogeshagrawal11/cloud/tree/master/aws/DynamoDB/S3%20to%20Dynamodb

Download the zip file, main.tf, and terraform.tfvars. Change the appropriate values in the "terraform.tfvars" file.


Place the zip file in the same location as the Terraform files. The Lambda function is created by the Terraform resource below.

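A sketch under the same naming assumptions; the zip file name, handler, and runtime here are assumptions:

    resource "aws_lambda_function" "app-lambda-func" {
      function_name    = "${var.appname}_insertS3intoDynamodb"      # hypothetical naming
      filename         = "insertS3intoDynamodb.zip"                 # the zip downloaded above
      source_code_hash = filebase64sha256("insertS3intoDynamodb.zip")
      handler          = "lambda_function.lambda_handler"           # assumption
      runtime          = "python3.8"
      role             = aws_iam_role.app_lambda_role.arn
    }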

Terraform apply command Output

The following resources are created by running the "terraform apply -auto-approve" command. All 12 resources will be created.


Lambda function created.


Input file uploaded to S3.


Input file format.


DynamoDB table created by Terraform.


The Lambda function is triggered after the input file is uploaded.


Data inserted into the nps_parks table by the insertS3intoDynamodb Lambda function.


SNS topic created


SQS queue

The message is posted into the SQS queue.


Next Steps

  • Add an application to analyze the data from DynamoDB and present visualization information.
  • Add realistic, changing data instead of the static data used in this case study.

Disclaimer

1. Code is available under the Apache license.

2. Do not use this code in production. Educational purposes only.

3. Security around the environment needs improvement.

4. IAM policies must be tightened for production use.

5. I have not created a topic for failure events.

6. Failure domains are not considered in this design.

7. The Lambda function contains bare-minimum code and performs no data validation.