
How to run AWS Lambda container images using the Serverless Framework

Oct 20, 2021 | Announcements, Migration, MSP

AWS Lambda (Lambda) is a fully managed serverless compute service from AWS that can run functions packaged as container images, taking care of building and maintaining the underlying infrastructure for you. Each Lambda function runs in its own isolated environment, with its own resources and file-system view, and Lambda uses the same techniques as Amazon Elastic Compute Cloud (Amazon EC2) to provide security and separation at the infrastructure and execution levels. In other words, Lambda runs your code in response to event triggers and automatically manages the underlying compute resources.

Lambda can be used to extend other AWS services with custom logic, or to build your own back-end services that operate with AWS scale, performance, and security. An example of that ability is walked through below in this blog’s “Setting up a Lambda-ready Docker image” section.

Serverless Framework

Serverless computing does not eliminate servers from distributed applications. It is called serverless because there are no dedicated servers running continuously to host the application; the cloud provider allocates compute only when the application is actually invoked. The term also describes the way back-end services are consumed on an as-used basis: a company that gets back-end services from a serverless vendor is charged based on usage, not for a fixed amount of bandwidth or a fixed number of servers. With AWS Lambda, for example, you pay only for what you use: you are charged based on the number of requests for your functions and their duration, the time it takes for your code to execute.
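
To make that concrete, here is a rough, illustrative monthly bill for one function (assuming us-east-1 prices at the time of writing, about $0.20 per million requests and $0.0000166667 per GB-second; verify against current AWS pricing):

1,000,000 invocations x 0.2 s each x 0.5 GB memory = 100,000 GB-seconds
100,000 GB-seconds x $0.0000166667 = ~$1.67 for compute
1,000,000 requests x $0.20 per million = $0.20 for requests
Total: ~$1.87 for the month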

The Serverless Framework is open-source software that builds, compiles, and packages code for serverless deployment and then deploys the package to the cloud. It helps you develop and deploy Lambda functions, along with the AWS infrastructure resources they require. It’s a command-line interface (CLI) that offers structure, automation, and best practices for building sophisticated, event-driven, serverless architectures, and there are many community-developed plugins to extend its functionality.
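
For example, a typical first session with the CLI looks like this (the service name my-service is just an illustration):

$ npm install -g serverless
$ serverless create --template aws-nodejs --path my-service
$ cd my-service
$ serverless deploy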

The Serverless Framework deploys Functions and the Events that trigger them. Core concepts of the Serverless Framework include:

  • Services. The Framework’s unit of organization, like a project file, though a single application can comprise multiple services. A service is where you define your Functions with the Events that trigger them, and where you specify the Resources the Functions will use, all described in the root directory of the project in YAML or JSON format (serverless.yml or serverless.json):
service: users
functions: # Your "Functions"
  usersCreate:
    events: # The "Events" that trigger this function
      - http: post users/create
  usersDelete:
    events:
      - http: delete users/delete

resources: # The "Resources" your "Functions" use. Anything that you can define in CloudFormation goes in here.
  • Functions. An AWS Lambda Function is an independent unit of deployment, like a microservice. It is code deployed in the cloud and most often written to perform a single job like saving a user to the database, processing a file in a database, or performing a scheduled task.
  • Events. Anything in the Framework that triggers an AWS Lambda Function to execute. Events are infrastructure events on AWS, such as an Amazon API Gateway request, an Amazon Simple Storage Service (Amazon S3) object upload, or an Amazon Kinesis stream record (see the snippet after this list).
  • Resources. The AWS infrastructure components that your Functions depend upon, such as an Amazon DynamoDB table, an Amazon S3 bucket, or an Amazon SNS topic. The Serverless Framework can support anything that can be defined in AWS CloudFormation (CloudFormation).
  • Plugins. Extend the functionality of the Framework. They can be installed using npm or the serverless CLI:
$ npm install --save custom-serverless-plugin
$ serverless plugin install --name pluginName
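
As an illustration of Events, the following hypothetical snippet in serverless.yml triggers a function whenever an object is created in an S3 bucket (the function and bucket names are made up):

functions:
  processUpload:
    handler: handler.processUpload
    events:
      - s3:
          bucket: my-photos-bucket # hypothetical bucket name
          event: s3:ObjectCreated:*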

Setting up a Lambda-ready Docker image

With container image support for Lambda, you can use Docker to package your custom code and dependencies for Lambda functions as images of up to 10 GB. You pay for storing the image in an Amazon Elastic Container Registry (Amazon ECR) repository, plus standard Lambda pricing.

Here is the basic configuration for an image, but it can be modified as required:

# ${IMAGE} should be an AWS base image for Lambda, or an image that
# includes the Lambda Runtime Interface Client
FROM ${IMAGE}
ARG FUNCTION_DIR="/var/task"

# Create function directory
RUN mkdir -p ${FUNCTION_DIR}

# Copy handler function and package.json
COPY index.js ${FUNCTION_DIR}
COPY package.json ${FUNCTION_DIR}

# Install NPM dependencies inside the function directory
WORKDIR ${FUNCTION_DIR}
RUN npm install

# Set the CMD to your handler (file-name.exported-function-name)
CMD [ "index.handler" ]

Hands-on example

For this to work, you’ll need an AWS account and to have the Serverless Framework, the AWS CLI, and Docker installed.
To set up your AWS credentials, run:

$ aws configure

We are going to use a project from the examples section of the Serverless Framework, one that fetches an image from a remote URL and uploads it to Amazon S3.

We will take a couple of additional steps, creating an Amazon S3 bucket in the resources section and writing a Dockerfile, to end up with a Lambda container image ready to be deployed using the Serverless Framework.

Open your favorite code editor and copy the following code into a file named handler.js:

'use strict';

const fetch = require('node-fetch');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Fetch the image at event.image_url and upload it to the S3 bucket named
// in the BUCKET environment variable, under the key given in event.key.
module.exports.lambdaHandler = (event, context, callback) => {
  fetch(event.image_url)
    .then((response) => {
      if (response.ok) {
        return response;
      }
      return Promise.reject(new Error(
            `Failed to fetch ${response.url}: ${response.status} ${response.statusText}`));
    })
    .then(response => response.buffer())
    .then(buffer => (
      s3.putObject({
        Bucket: process.env.BUCKET,
        Key: event.key,
        Body: buffer,
      }).promise()
    ))
    .then(v => callback(null, v), callback);
};

Create a file named package.json with the following:

{
  "name": "project",
  "version": "1.0.0",
  "description": "Fetch an image from remote source (URL) and then upload the image to a S3 bucket.",
  "main": "handler.js",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "aws-sdk": "^2.7.9",
    "node-fetch": "^1.6.3"
  }
}

Next, we are going to create a Dockerfile for it so the function can be deployed with Serverless as a container image:

# AWS base image for Node.js 12; its working directory is already the Lambda task root
FROM public.ecr.aws/lambda/nodejs:12

COPY handler.js package*.json ./

RUN npm install

CMD [ "handler.lambdaHandler" ]

Save the code above in a file named Dockerfile.
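
Before deploying, you can optionally sanity-check the image locally. The AWS base images bundle the Lambda Runtime Interface Emulator, so you can run the container and invoke it over HTTP from a second terminal (the image tag here is made up; without AWS credentials inside the container the S3 upload step will fail, but this confirms that the handler loads and executes):

$ docker build -t save-image-test .
$ docker run -p 9000:8080 -e BUCKET=<bucket-name> save-image-test
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
    -d '{"image_url": "https://www.nclouds.com/img/nclouds-logo.svg", "key": "demo.svg"}'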

Now we are going to create a serverless.yml file where we declare all the configuration.

# You can give your project any name you like
service: serverless-container
# We are using aws as the provider in the N. Virginia region (us-east-1), with our stage defined as project
provider:
  name: aws
  stage: project
  region: us-east-1
# In this section you can define images that will be built locally and uploaded to ECR
  ecr:
    images:
      saveImage:
        path: ./
# We need this to allow our Lambda function to put the images into our bucket
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"
# This will be the name of the bucket. Remember that S3 bucket names must be globally unique ;)
custom:
  bucket: ${self:service}-${self:provider.stage}-demo
# Here we are defining our function and the container previously configured in the ecr section
functions:
  save:
    image:
      name: saveImage
# Function level environment variables
    environment:
      BUCKET: ${self:custom.bucket}
# Here we are creating our bucket using CloudFormation
resources:
  Resources:
    S3Assets:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}

Run the following command to deploy your new serverless.yml file:

$ sls deploy

To test your new Lambda container image, run the following command:

$ sls invoke --function save --log --data='{ "image_url": "https://www.nclouds.com/img/nclouds-logo.svg", "key": "demo.svg"}'
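
If the invocation succeeds, the fetched image should now be in your bucket. Given the names defined above, the bucket name resolves to serverless-container-project-demo, so you can verify the upload with:

$ aws s3 ls s3://serverless-container-project-demo/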

Alternatively, you can create your own Amazon ECR repository and reference an existing image:

functions:
  someFunctionName:
    # Here you can specify the ECR image and the digest to be used
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest>

Log in to Amazon ECR by retrieving an authentication token:

$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account>.dkr.ecr.us-east-1.amazonaws.com

Build the Docker image locally:

$ docker build -t <image-name> .

Create an Amazon ECR repository where you are going to store all your image versions:

$ aws ecr create-repository --repository-name <repository-name> --image-scanning-configuration scanOnPush=true

After the build completes, tag your image so you can push the image to this repository:

$ docker tag <image-name>:latest <account>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest

Run the following command to push this image to your newly created Amazon ECR repository:

$ docker push <account>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest
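
The function definition below references the image by digest rather than by tag. One way to look up the digest of the image you just pushed (a sketch; the --query expression assumes the :latest tag applied above):

$ aws ecr describe-images --repository-name <repository-name> \
    --query "imageDetails[?contains(imageTags, 'latest')].imageDigest" --output text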

To try the image you just pushed to Amazon ECR, replace the contents of serverless.yml with the following, making sure to substitute your own ECR image reference:

service: serverless-container
# We are using aws as the provider in the N. Virginia region (us-east-1), with our stage defined as project
provider:
  name: aws
  stage: project
  region: us-east-1
# We need this to allow our Lambda function to put the images into our bucket
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket}/*"
# This will be the name of the bucket. Remember that S3 bucket names must be globally unique ;)
custom:
  bucket: ${self:service}-${self:provider.stage}-demo
# Here we are defining our function, using the image pushed to ECR
functions:
  save:
# Change this to match what you defined when creating the ECR repository
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest>
# Function level environment variables
    environment:
      BUCKET: ${self:custom.bucket}
# Here we are creating our bucket using CloudFormation
resources:
  Resources:
    S3Assets:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}

Run deploy again so Serverless can apply your recent changes:

$ sls deploy

To summarize: package your Lambda function code and dependencies as a container image using a tool such as the Docker CLI, upload the image to a container registry hosted on Amazon ECR, and create the Lambda function from the same AWS account that owns the Amazon ECR registry. Once the container image is in Amazon ECR, you can create and run the Lambda function.

Need help with DevOps or containers on AWS? The nClouds team is here to help with that and all your AWS infrastructure requirements.
