set up a microservice in 5 minutes with Serverless

tl;dr: write serverless infrastructure as code

cost: $0

build time: 5 minutes (MVP)


Serverless, confusingly, means two things.

There's serverless the concept - your infra provider dynamically spins up compute at invocation time (and spins it down after). This means no more worrying about managing (or scaling) compute - it's all done for you. The major serverless products are AWS Lambda, Azure Functions, and GCP Cloud Functions.

And there's serverless the infrastructure framework, which I'll refer to from now on as Serverless. It allows you to generate all the necessary DevOps components with code (specifically, the serverless.yml). Serverless can support anything from a ten-minute pet project to a production microservice architecture (though there are some limitations; see Gotchas).

Serverless is a Node.js CLI, so you'll need a small amount of JavaScript tooling to set it up. The compute environment itself supports many languages (e.g. Lambda supports Node, Python, Ruby, Go, Java, PowerShell, and C#).

Below, I'll cover:

  1. setting up the local environment
  2. connecting AWS
  3. creating your first microservice
  4. invoking and passing data to a Lambda
  5. helpful commands
  6. bells and whistles

(here's the Github repo of the code covered below)

#1 - setting up the environment

# 1.0 - create a directory

mkdir serverless && cd serverless

# 1.1 - (optional) set up a virtual environment if you want

virtualenv venv && source venv/bin/activate

# 1.2 - install serverless (execute each line by itself)

npm init
sudo npm install -g serverless
# if you want to stand up a localhost server to test, add
npm install serverless-offline --save-dev
# if you want to bundle dependencies in your deploy object, add
# I generally recommend using Lambda Layers instead
npm install serverless-python-requirements --save

# 1.3 - Check it all worked out (sls is the short CLI command)

sls version

#2 - connect to AWS

serverless.yml infra-as-code can be easily ported between the major cloud platforms. That said, I'm going to cover AWS here as it is the default for most use cases.

# 2.1 - if you haven't yet, install the AWS CLI

pip3 install awscli --upgrade

# 2.2 - if you haven't yet, configure AWS CLI. I use us-west-2

aws configure

# 2.3 - configure to use AWS creds

sls config credentials --provider aws --key YOUR_KEY_HERE --secret YOUR_SECRET_HERE

(alternately, you can use environment variables to config)


# 2.4 - make sure your account has the IAM permissions it needs

Serverless recommends this set of permissions for the account your key/secret are attached to. I find them a bit extensive; you can probably get away with admin on Lambda, API Gateway, CloudFormation, and CloudWatch. You will define the (much more specific) IAM role each Lambda will have in the serverless.yml.

#3 - creating your first microservice function

The below command will scaffold a new Serverless project locally; when you deploy it (step 3.4), it will create a Lambda function and a CloudFormation stack in your AWS account.

# 3.1 - create a new serverless project from template (note: no underscores in --name)

sls create \
  --template aws-python3 \
  --name lambda-test \
  --path serverless-test \
&& cd serverless-test

This will generate a couple of files (in a new child directory) for you:

  • handler.py - the Python code that your Lambda will execute
  • serverless.yml - the YAML code where you will declare what infra you want to build
  • (and a .gitignore)

# 3.2 - the handler.py

Let's open up that handler.py and take a look:

import json


def hello(event, context):
    body = {
        "message": "Go Serverless v1.0!",
        "input": event
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response

Some important parts:

  • hello() - the main function. This is invoked when the Lambda is called. Change this to the standard lambda_handler()
  • event - all variables that are passed when the Lambda is invoked. These will vary by invoke method (e.g. CLI local invoke vs API Gateway)
  • context - functions that allow you to introspect the Lambda (e.g. printing instance runtime)
  • response - Lambdas should return API-Gateway compatible objects, with JSON serialized data, even when the calling source is not necessarily an API Gateway
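Putting those parts together, here is what the renamed handler might look like. This is an illustrative sketch (the message string and keys are my own, not from the generated template):

```python
import json


def lambda_handler(event, context):
    # `event` holds whatever the invoker passed in; echo it back for visibility
    body = {
        "message": "hello from lambda_handler",
        "input": event,
    }
    # Return an API-Gateway-compatible object: statusCode + JSON-serialized body
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```

Even when the Lambda is invoked from the CLI rather than API Gateway, keeping this response shape means you can wire up an HTTP endpoint later without touching the handler.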

# 3.3 - the serverless.yml

Running sls create gives you 114 lines of (mostly commented-out) YAML so you can see different ways to use it. Let's narrow it down to what we're going to use:

service: test-stack

provider:
  name: aws
  runtime: python3.7
  stage: prod             # The default stage for sls deploy
  region: us-west-2
  logRetentionInDays: 120 # Personal preference
  memorySize: 1024        # Default for Lambdas in this stack
  timeout: 10             # Default for Lambdas in this stack. Normally 3s
  logs:
    restApi: true         # Log API Gateway calls
  apiKeys:
    - admin_key           # Generate an API Key called 'admin_key'
  iamRoleStatements:
    - Effect: Allow       # Allows Lambdas to invoke each other
      Action:
        - lambda:InvokeFunction
        - lambda:InvokeAsync
      Resource:
        - "*"

functions:
  test-lambda:                      # Name of the Lambda Function
    handler: handler.lambda_handler # Filename.Function_name
    events:                         # What can invoke this Lambda
      - http:                       # API Gateway event
          path: /test_lambda        # API endpoint
          method: get               # Supported HTTP verbs
          private: true             # Requires API key to access

# 3.4 - let's see it in action

To invoke the Lambda locally:

sls invoke local -f test-lambda

To deploy the stack to your AWS account:

sls deploy

To invoke the now-deployed AWS-hosted Lambda:

sls invoke -f test-lambda

That's all you need to get started. You can stop here if you'd like.

#4 - invoking and passing data to a Lambda

Functions need data to be useful. There are 18 ways at present to invoke a Lambda and pass data to it. Here are a couple of the more common ones:

local invoke: pass data as a JSON string from the CLI

sls invoke local -f test-lambda -d '{"key1": "value1", "key2": 2}'

cloud invoke: pass data from a .json file

sls invoke -f test-lambda -p data_file.json

API Gateway invoke:

curl -X GET \
  'https://<api_id>.execute-api.<region>.amazonaws.com/prod/test_lambda' \
  -H 'x-api-key: <your_api_key>'
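Whichever route the data takes, it surfaces in the handler's event argument; for direct invokes (-d or -p), event is exactly the parsed payload. A minimal sketch, with hypothetical keys matching the -d example above:

```python
import json


def lambda_handler(event, context):
    # With `sls invoke local -f test-lambda -d '{"key1": "value1", "key2": 2}'`,
    # `event` is that parsed dict
    key1 = event.get("key1", "missing")
    key2 = event.get("key2", 0)
    return {
        "statusCode": 200,
        "body": json.dumps({"key1_was": key1, "key2_doubled": key2 * 2}),
    }
```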

# 4.1 - how to implement those invocation methods

synchronous API Gateway invocation is the default; for asynchronous, add async: true

      - http:
          async: true

For async functions, API Gateway will not wait for the Lambda to execute, and instead will return status code 202 immediately.

cron invocation - add schedule: cron or schedule: rate (Docs)

      - schedule: cron(0 18 ? * SUN *) # Runs weekly on Sun at 11 am (18 UTC -7 correction)
      - schedule: rate(7 days) # Runs weekly, you don't pick when

s3 invocation - when something is added or modified in a bucket (Docs)

      - s3: bucket-name
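An S3-triggered handler receives the standard S3 event notification format, with one entry per object under event["Records"]. A sketch (the handler itself is illustrative):

```python
import urllib.parse


def lambda_handler(event, context):
    touched = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (spaces become '+'), so decode them
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        touched.append((bucket, key))
    return touched
```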

SQS invocation - reading from a queue

      - sqs:
          arn: arn:aws:sqs:us-west-1:${env:AWS_ACCOUNT_ID}:queueName
          batchSize: 10
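With an SQS trigger, Lambda pulls up to batchSize messages at once and hands them to the handler under event["Records"], each with its payload in "body". A sketch, assuming the messages are JSON-encoded:

```python
import json


def lambda_handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        message = json.loads(record["body"])  # assumes JSON-encoded messages
        # ... do work with `message` here ...
        processed += 1
    return {"processed": processed}
```

If the handler raises, the whole batch returns to the queue, so idempotent processing is worth designing for.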

#5 - helpful commands

basic deploy

sls deploy -s prod --conceal

if you rename your serverless.yml (for example, for a separate stack to manage your DynamoDB tables), add --config

sls deploy --config "serverless-dynamo.yml"

you can deploy just one lambda at a time (it is slightly faster) if you want

sls deploy function -f test-lambda

destroy the CloudFormation stack and associated resources

sls remove

check service details, including API keys, without redeploying

sls info -s <stage_name> -v

want verbose debug logging?

export SLS_DEBUG=*

#6 - bells and whistles

#6.1 - environment variables

You can pass environment variables to the cloud-hosted functions one of two ways:

for every Lambda in the stack

provider:
  environment:
    GLOBAL_KEY: ${env:GLOBAL_KEY}

for a specific Lambda function

functions:
  test-lambda:
    environment:
      SPECIFIC_KEY: ${env:SPECIFIC_KEY}

then be sure your local virtual environment has the keys you specified when deploying:

export GLOBAL_KEY=1234567890
export SPECIFIC_KEY=0987654321
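Inside the deployed function, those variables show up in os.environ like any other environment variable. A quick sketch (key names from the yml above):

```python
import os


def lambda_handler(event, context):
    # Values declared under `environment:` in serverless.yml land here
    return {
        "global_key": os.environ.get("GLOBAL_KEY", "unset"),
        "specific_key": os.environ.get("SPECIFIC_KEY", "unset"),
    }
```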

#6.2 - offline 'server' testing

Want to test your sls stack on a localhost 'server'? Add these two lines to the bottom of your serverless.yml

plugins:
  - serverless-offline

then execute

sls offline

You can test it by going to localhost:3000/endpoint_specific_path

#6.3 - bundling dependencies

I recommend you use Lambda Layers to host cached versions of the libraries you use in AWS. I built a quick tool to create them yourself here.

If you have a large number of small, non-standard libraries, you can bundle them along with the Lambda code in your sls deploys. This will make the deployments take longer.

I prefer to use a pipenv virtual environment, and freeze only the requirements I want in the cloud to the requirements.txt; this prevents accidentally bundling large dependencies you have in your local env (e.g. boto3) that you needn't include.

Add to your serverless.yml:

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux  # Old templates may have true. Use non-linux
    zip: true                # Makes deploys faster
    usePipenv: false         # Bundle reqs from requirements.txt rather than Pipfile

A few example OSS repos I've built with Lambda:

Thanks for reading. Questions or comments? 👉🏻