In this tutorial, you’ll learn how to create and deploy an AWS Lambda function with Terraform. With Terraform, it is much easier and more efficient to create and deploy Lambda functions on AWS.
As an IaC tool, Terraform provides the aws_lambda_function resource for managing AWS Lambda functions.
Prerequisites
- AWS account and credentials
- Terraform installed on your machine (In this tutorial, I’ll be using Terraform v1.2.7)
Create and Deploy Lambda Function with Terraform
Step-1: Install and Configure AWS CLI
I have already installed AWS CLI version 1 on my Linux system, so I’m going to use version 1, but you can install the latest version 2 by following the official documentation.
In this tutorial, I’ll be using AWS CLI version 1.25.59.
After you have successfully installed the AWS CLI, you can run the following command to see the installed version:
$ aws --version
aws-cli/1.25.59 Python/3.10.5 Linux/5.15.50-1-lts botocore/1.27.58
Now, configure the AWS CLI. To do so, enter the following command and you will be prompted with a few questions:
$ aws configure
When prompted, enter the AWS Access Key ID and AWS Secret Access Key of your AWS account. For example (the values entered below are not valid):
$ aws configure
AWS Access Key ID [None]: AKIAS790KWUK6T5QGK63
AWS Secret Access Key [None]: kWBLO9G7JJKQWIOKL7CkkQEiBjJSKrDpHjMGyoiJWW
Default region name [None]: ap-south-1
Default output format [None]:
Here you go, you have successfully configured AWS CLI with your AWS account credentials.
Step-2: Terraform Scripts to Create the Lambda Function
First, create a new file named provider.tf (you can use any sensible name for your files). Be sure the file extension is .tf, as Terraform code is written in files with this extension. We’ll be creating a few more files for our Terraform configuration.
provider "aws" {
region = var.aws_region
}
Here, this configuration in the provider.tf file instructs Terraform to use AWS as the provider.
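Optionally, you can also pin the provider versions in the same file so that everyone who runs terraform init gets the same providers. A minimal sketch (the version constraints below are just examples, not part of the original setup):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"   # example constraint
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"   # example constraint
    }
  }
}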
Since we’ve configured the AWS region to be taken from a variable, we need to define that variable. So, create a new file named variables.tf to pass the required variables to the Terraform configuration:
variable "aws_region" {
default = "ap-south-1"
}
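If you prefer, the same variable can also carry an explicit type and description, which makes plans and prompts easier to read; a small sketch (the description text is my own):

variable "aws_region" {
  type        = string
  description = "AWS region to deploy the Lambda function into"
  default     = "ap-south-1"
}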
Also, create another file named output.tf to store the output of our Terraform execution (in this case, the Lambda function ARN):
output "lambda_function" {
value = aws_lambda_function.example_lambda_function.qualified_arn
}
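If you later need more than the qualified ARN (for example, to wire the function to API Gateway), you could export additional attributes from the same file. The output names below are my own choice:

output "lambda_function_name" {
  value = aws_lambda_function.example_lambda_function.function_name
}

output "lambda_invoke_arn" {
  value = aws_lambda_function.example_lambda_function.invoke_arn
}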
Now, let’s create an IAM Policy Document in JSON format for our Lambda IAM role. It will be used by the Lambda function we are going to create later. To do so, I’ve created a new file named lambda_policies.tf with the following configuration:
data "aws_iam_policy_document" "example_lambda_policy" {
statement {
sid = "examplePolicyId"
effect = "Allow"
principals {
identifiers = ["lambda.amazonaws.com"]
type = "Service"
}
actions = ["sts:AssumeRole"]
}
}
Next, create an AWS IAM Role with the policy document we just created. In the lambda_policies.tf file, add the following configuration block:
resource "aws_iam_role" "example_lambda_iam" {
name = "example_lambda_iam"
assume_role_policy = data.aws_iam_policy_document.example_lambda_policy.json
}
Here, we’re using the JSON output of the previous aws_iam_policy_document data block as the assume_role_policy value (data.aws_iam_policy_document.example_lambda_policy.json).
The final lambda_policies.tf:
data "aws_iam_policy_document" "example_lambda_policy" {
statement {
sid = "examplePolicyId"
effect = "Allow"
principals {
identifiers = ["lambda.amazonaws.com"]
type = "Service"
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "example_lambda_iam" {
name = "example_lambda_iam"
assume_role_policy = data.aws_iam_policy_document.example_lambda_policy.json
}
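Note that this role only contains the trust (assume-role) policy; it doesn’t yet grant the function any permissions of its own. If you want the function to write its logs to CloudWatch, one common approach is to attach the AWS-managed AWSLambdaBasicExecutionRole policy to the role. A sketch (the resource name example_lambda_logs is my own, not part of the original setup):

resource "aws_iam_role_policy_attachment" "example_lambda_logs" {
  role       = aws_iam_role.example_lambda_iam.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}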
Finally, after creating all the necessary files and resources, let’s create the Lambda function block in a new file named main.tf, which will create our desired Lambda function.
Before that, let’s write a configuration block to create a zip file of our source code (in my case, Python). This zip file will be deployed to Lambda. In the main.tf file, add the following blocks:
provider "archive" {}
data "archie_file" "example_zip_file" {
type = "zip"
source_file = "example.py"
output_path = "example.zip"
}
Here, the archive_file data source compresses a single file or a list of files into a zip archive. The example.py file is our Lambda function source code and will be created later.
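If your function ever grows beyond a single file, the same data source can zip an entire directory instead of one file. A sketch with illustrative names (the src directory and example_dir_zip name are hypothetical):

data "archive_file" "example_dir_zip" {
  type        = "zip"
  source_dir  = "src"              # hypothetical directory holding all function files
  output_path = "example_dir.zip"
}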
Now, let’s create our Lambda function with the zip file we’ve defined above. To do so, add the following block of code to the main.tf file:
resource "aws_lambda_function" "example_lambda_function" {
function_name = "example_function"
filename = data.archive_file.example_file_zip.output_path
# source_code_hash = data.archive_file.exmaple_file_zip.output_base64sha256
role = aws_iam_role.example_lambda_iam.arn
handler = "example_function.lambda_handler"
runtime = "python3.10"
}
Here, a Lambda function named example_function is created from the zip file produced by archive_file, using the IAM role we created earlier and the Python runtime. The handler is the file name of the source code (example.py, without the extension) followed by the handler function name (lambda_handler).
The final main.tf looks like this:
provider "archive" {}
data "archie_file" "example_zip_file" {
type = "zip"
source_file = "example.py"
output_path = "example.zip"
}
resource "aws_lambda_function" "example_lambda_function" {
function_name = "example_function"
filename = data.archive_file.example_zip_file.output_path
# source_code_hash = data.archive_file.exmaple_zip_file.output_base64sha256
role = aws_iam_role.example_lambda_iam.arn
handler = "example_lambda_function.lambda_handler"
runtime = "python3.9"
}
Step-3: Lambda Function Source Code
We have written all the configuration code necessary to create the AWS Lambda function. Now it’s time to create the Lambda function source code file named example.py.
Create an example.py file in the same directory where our .tf files are kept and add the following code (this is just a sample Lambda function):
import logging
import os

logger = logging.getLogger()

ACTIONS = {
    'plus': lambda x, y: x + y,
    'minus': lambda x, y: x - y,
    'times': lambda x, y: x * y,
    'divided-by': lambda x, y: x / y}


def lambda_handler(event, context):
    logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
    logger.debug('Event: %s', event)
    action = event.get('action')
    func = ACTIONS.get(action)
    x = event.get('x')
    y = event.get('y')
    result = None
    try:
        if func is not None and x is not None and y is not None:
            result = func(x, y)
            logger.info("%s %s %s is %s", x, action, y, result)
        else:
            logger.error("I can't calculate %s %s %s.", x, action, y)
    except ZeroDivisionError:
        logger.warning("I can't divide %s by 0!", x)
    response = {'result': result}
    return response
Our project tree will look like this:
.
├── example.py
├── lambda_policies.tf
├── main.tf
├── output.tf
├── provider.tf
└── variables.tf
0 directories, 6 files
Step-4: Deploy AWS Lambda Function with Terraform
We’ve successfully configured Terraform to create and deploy the Lambda function, and our Python source code is ready. So, now it’s time to create and deploy our AWS Lambda function with Terraform on AWS.
First, initialize the Terraform working directory by running the following command (it will download all the required provider plugins):
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/archive...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/archive v2.2.0...
- Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.27.0...
- Installed hashicorp/aws v4.27.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then run the following Terraform command to create a plan (it shows what changes would be made by terraform apply but does not apply them):
$ terraform plan -out explan.out
Here, I’m saving the plan to an output file, explan.out. This is recommended to make sure that no unintended changes are made to the infrastructure.
Output:
data.archive_file.example_zip_file: Reading...
data.archive_file.example_zip_file: Read complete after 0s [id=6cc6015fd71708b626ea3fc80398dc76ed84e00a]
data.aws_iam_policy_document.example_lambda_policy: Reading...
data.aws_iam_policy_document.example_lambda_policy: Read complete after 0s [id=3822789575]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_iam_role.example_lambda_iam will be created
+ resource "aws_iam_role" "example_lambda_iam" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = "examplePolicyId"
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "example_lambda_iam"
+ name_prefix = (known after apply)
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy {
+ name = (known after apply)
+ policy = (known after apply)
}
}
# aws_lambda_function.example_lambda_function will be created
+ resource "aws_lambda_function" "example_lambda_function" {
+ architectures = (known after apply)
+ arn = (known after apply)
+ filename = "example.zip"
+ function_name = "example_function"
+ handler                        = "example.lambda_handler"
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ last_modified = (known after apply)
+ memory_size = 128
+ package_type = "Zip"
+ publish = false
+ qualified_arn = (known after apply)
+ reserved_concurrent_executions = -1
+ role = (known after apply)
+ runtime = "python3.9"
+ signing_job_arn = (known after apply)
+ signing_profile_version_arn = (known after apply)
+ source_code_hash = (known after apply)
+ source_code_size = (known after apply)
+ tags_all = (known after apply)
+ timeout = 3
+ version = (known after apply)
+ ephemeral_storage {
+ size = (known after apply)
}
+ tracing_config {
+ mode = (known after apply)
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ lambda_function = (known after apply)
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Saved the plan to: explan.out
To perform exactly these actions, run the following command to apply:
terraform apply "explan.out"
The explan.out file should be used during terraform apply so that only the changes you have seen in the plan are deployed.
After reviewing the changes, go ahead and apply the configuration by running the following command:
$ terraform apply explan.out
aws_iam_role.example_lambda_iam: Creating...
aws_iam_role.example_lambda_iam: Creation complete after 3s [id=example_lambda_iam]
aws_lambda_function.example_lambda_function: Creating...
aws_lambda_function.example_lambda_function: Still creating... [10s elapsed]
aws_lambda_function.example_lambda_function: Creation complete after 14s [id=example_function]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
lambda_function = "arn:aws:lambda:ap-south-1:123456789012:function:example_function:$LATEST"
The final project tree:
.
├── example.py
├── example.zip
├── explan.out
├── lambda_policies.tf
├── main.tf
├── output.tf
├── provider.tf
├── terraform.tfstate
├── terraform.tfstate.backup
└── variables.tf
0 directories, 10 files
Step-5: Test the Lambda Function in the Console
If the terraform apply command completed successfully, you can now check and test the Lambda function you just deployed in the AWS console.
To do so, log in to your AWS console, go to Lambda, and follow these steps:
- Select the created Lambda Function.
- Click on the Test option on your Lambda function page to set up a test event. (I’ve set up an exampleTest event with JSON.)
- Enter your test event name, enter the following JSON value, and save the test event:
{
  "x": 10,
  "y": 20,
  "action": "plus"
}
- Click on Test to execute the test event.
Since our source code takes three arguments, two numbers (x and y) and one action (one of plus, minus, times, or divided-by), we pass them in the JSON event.
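If you’d rather test the function from Terraform itself instead of the console, the AWS provider also offers an aws_lambda_invocation data source that calls the function during terraform apply and exposes its response. A sketch (the names example_test and invocation_result are my own):

data "aws_lambda_invocation" "example_test" {
  function_name = aws_lambda_function.example_lambda_function.function_name

  # Same payload as the console test event
  input = jsonencode({
    x      = 10
    y      = 20
    action = "plus"
  })
}

output "invocation_result" {
  value = data.aws_lambda_invocation.example_test.result
}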
To destroy the created resources, just run the following command:
$ terraform destroy
data.archive_file.example_zip_file: Reading...
data.archive_file.example_zip_file: Read complete after 0s [id=55101ea88bbe935d12a13fb65c6343ae66d20fbc]
data.aws_iam_policy_document.example_lambda_policy: Reading...
data.aws_iam_policy_document.example_lambda_policy: Read complete after 0s [id=702760738]
aws_iam_role.example_lambda_iam: Refreshing state... [id=example_lambda_iam]
aws_lambda_function.example_lambda_function: Refreshing state... [id=example_function]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# aws_iam_role.example_lambda_iam will be destroyed
- resource "aws_iam_role" "example_lambda_iam" {
- arn = "arn:aws:iam::463432632382:role/example_lambda_iam" -> null
- assume_role_policy = jsonencode(
{
...
...
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_lambda_function.example_lambda_function: Destroying... [id=example_function]
aws_lambda_function.example_lambda_function: Destruction complete after 1s
aws_iam_role.example_lambda_iam: Destroying... [id=example_lambda_iam]
aws_iam_role.example_lambda_iam: Destruction complete after 1s
That’s it.
All the snippets used in this article can be found here.
Conclusion
In this tutorial, you have successfully automated the deployment of your Python code to a Lambda function using Terraform and then validated it in the Lambda console by testing with a JSON event.