Going serverless at The Distance

Dude, where’s my server?

As the tech market ascends ever further into the Cloud, demand for small-scale, on-premises hardware is falling. In response, here at The Distance we’ve shaken up how we build, structure and deploy our products. That’s right, we’ve gone Serverless.

When we say “Serverless”, what we are actually talking about is a model for executing code in the cloud. There are still servers involved, but these are maintained by the cloud provider.

The cloud provider will execute your code by dynamically allocating resources from a “farm” of these servers. Usually the resources are event-triggered, stateless containers (similar to virtual machines, but without the overhead of running their own operating system).

Tech giant Amazon brings fast, reliable and cost-effective services through Amazon Web Services (AWS), the leading vendor in cloud computing. A 2018 survey by Statista revealed that 80% of respondents were running apps on, or experimenting with, AWS as their preferred cloud platform, with Microsoft Azure the runner-up. It’s easy to see why smaller businesses are turning to cloud computing, and why at The Distance we can provide a highly available, secure and scalable backend solution for our customers’ business requirements, however large their client base grows.

By adopting AWS into The Core, we spend less time setting up and maintaining physical servers, and more time dedicated to development; our customers benefit in reduced costs and higher quality apps.

But migrating to a serverless platform came at a cost. During our move to AWS we came to appreciate just how many services, and how many configuration options, the platform offers. The AWS Management Console, as the primary place to deploy these services, became cumbersome and time-consuming. We needed something sleeker, faster and more reliable for defining our AWS requirements.

Step in, Serverless.

The Serverless Framework

Serverless Framework (formerly JAWS) launched in October 2015 and is one of the leading deployment frameworks for serverless application development. In simple terms, it abstracts away all the manual work of setting up, and tearing down, your resources.

Although we have only mentioned AWS as a cloud provider in this post (and for the remainder will only refer to Serverless Framework in the context of AWS), it should be noted that Serverless Framework is vendor agnostic, supporting all the main cloud providers:

  • Amazon AWS
  • Google Cloud Functions
  • Microsoft Azure
  • IBM OpenWhisk
  • …and More!

The Serverless Framework documentation covers all supported providers and has information on getting started.

The actual magic happens on the command line. After you define all your functions, events and resources in a single YAML file, serverless.yml, running sls deploy makes the framework convert the YAML into an AWS CloudFormation (CFN) template and store it in an S3 (AWS Simple Storage Service) bucket.
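To give a flavour of what that single file looks like, here is a minimal sketch of a serverless.yml; the service name, handler and region are illustrative, not taken from one of our projects:

```yaml
# Minimal illustrative serverless.yml
# (service, handler, region and runtime are all made-up examples)
service: my-service

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  # Use the --stage CLI option, defaulting to 'dev'
  stage: ${opt:stage, 'dev'}

functions:
  hello:
    handler: handler.hello
    events:
      # Expose the function via API Gateway
      - http:
          path: hello
          method: get
```

From the project root, sls deploy packages the code, generates the CloudFormation template and creates (or updates) the stack.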

If you ever need to remove the stack, rather than deleting it through the AWS Management Console (and risking deleting the wrong stack, or leaving artefacts behind through user error), you can simply take down your deployment from the command line with sls remove.

For further information on the CLI, I highly recommend checking out Serverless Framework’s CLI reference, and for some fantastic examples to get you started on Serverless Framework, check out their GitHub repository.

What We’ve Learned

Over the last few months we have successfully configured and deployed some of our latest projects using Serverless Framework, and reaped the rewards; Serverless + AWS = Success! Here’s an overview of what we have learned so far:

Boilerplate Heaven

With the ability to self-reference properties and reference the CloudFormation stack, boilerplating the YAML means we spend even less time setting up our resources, and can easily manage and scale them for future projects.

As the template is written entirely in YAML, different resources and functions can be split across files and pulled into the main serverless.yml template as and when needed. Say goodbye to huge, difficult-to-maintain files and ugly commented-out blocks.

functions:
  # GraphQL
  - ${file(serverless/functions/graphql.yml)}

resources:
  # Cognito
  - ${file(serverless/resources/cognito-user-pool.yml)}

Cognito Authorisers (User Pool authentication)

One snag we hit while deploying our functions was authorization. Setting up a Cognito user pool and client through the Management Console, and THEN deploying our functions with an environment reference to an ARN (Amazon Resource Name), was a bit clunky.

So instead, we used an authorizer and referenced it in our functions. Now only a user that is a member of the associated pool can request authorization to our function!

functions:
  graphql:
    handler: handler.graphql
    events:
      - http:
          path: graphql
          method: ANY
          # cognito authentication
          authorizer:
            type: COGNITO_USER_POOLS
            authorizerId:
              Ref: CognitoAuthorizer

resources:
  Resources:
    UserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        # Generate a name based on the stage
        UserPoolName: ${self:service}-${self:provider.stage}-user-pool
        # Set email as an alias
        AliasAttributes:
          - email
        AutoVerifiedAttributes:
          - email

    CognitoAuthorizer:
      Type: AWS::ApiGateway::Authorizer
      Properties:
        Type: COGNITO_USER_POOLS
        IdentitySource: method.request.header.Authorization
        Name: ${self:service}-authorizer
        RestApiId:
          Ref: ApiGatewayRestApi
        ProviderARNs:
          - Fn::GetAtt: [UserPool, Arn]

IAM Roles

By default, the IAM role created from your serverless.yml is shared by ALL your functions. Following the principle of least privilege, this isn’t good practice; we really need fine-grained control over each function’s permissions. Instead, we recommend setting your IAM roles on a per-function basis, which can be achieved with the aptly named serverless-iam-roles-per-function plugin.
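As a sketch, once the plugin is listed under plugins, each function can declare its own iamRoleStatements; the function name and DynamoDB table below are hypothetical:

```yaml
plugins:
  - serverless-iam-roles-per-function

functions:
  getUser:
    handler: handler.getUser
    # Hypothetical example: this function may ONLY read items
    # from one DynamoDB table, and nothing else
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:GetItem
        Resource:
          - Fn::GetAtt: [UsersTable, Arn]
```

Each function then gets its own dedicated role, so widening one function’s permissions no longer widens them for every other function in the stack.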

Development Feedback Loop

Deploying and removing full stacks is a slow way to develop, and can significantly lengthen the feedback loop. That said, there’s no need to trash Serverless; here are some handy tips for developing quickly with the framework:

  • Don’t re-deploy the full stack, just deploy the changes to your function using:

    sls deploy function -f [function_name]
  • Watch the logs of your function in the command line:

    serverless logs -f [function_name] # add a -t to tail

    Note that the logs are limited to 1MB (10,000 log events).

  • Run your Lambdas locally, using the serverless-offline plugin.
  • Write your functions agnostic to the Lambda invocation, so it’s easier to test. An example of this is documented by Serverless Framework.

  • Use a local instance of DynamoDB through the serverless-dynamodb-local plugin, or a Docker container, for local storage testing.
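The two local-development plugins above can be wired together in serverless.yml roughly like this; the port and stage values are illustrative defaults, not a prescription:

```yaml
plugins:
  - serverless-dynamodb-local
  # serverless-offline should come last so it can pick up
  # the other plugins' behaviour
  - serverless-offline

custom:
  dynamodb:
    # Only run DynamoDB Local for these stages
    stages:
      - dev
    start:
      port: 8000        # illustrative port
      inMemory: true    # throw the data away on shutdown
      migrate: true     # create tables from your resource definitions
```

With this in place you can emulate API Gateway and Lambda locally with sls offline start, against a local DynamoDB instead of the real AWS service.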


One of our mobile app developers, James Shaw, recently wrote a blog post about GraphQL, and how it combats the problem of over-fetching data. It’s worth remembering that in the world of cloud computing, you pay for what you use. So why use conventional REST when you can streamline your API down to a single endpoint, save backend development hours, reduce your data footprint, and shrink your monthly bill? $$$


If you’re writing your backend in JavaScript like us, then you can also use the latest language features via Babel, and enjoy all the benefits of a full-featured build tool, thanks to the serverless-webpack plugin.


Through adopting the Serverless Framework into The Core, we have reduced project setup time, and streamlined our training process for developers new to AWS. If you are currently using, or plan to use, cloud computing, seriously consider it!

Keep an eye out for future posts on our Serverless experiences and successes, and if you have any questions, feel free to get in touch.

Where’s your server, dude?

Useful Links:

  1. Serverless Framework: https://serverless.com/
  2. Serverless Framework supported Cloud Providers: https://serverless.com/framework/docs/providers/ 
  3. Getting Started: https://serverless.com/framework/docs/getting-started/
  4. Serverless YAML Examples: https://github.com/serverless/examples
  5. The Distance Blog, An Introduction to GraphQL and Apollo: https://thedistance.co.uk/graphql/an-introduction-to-graphql-and-apollo/

Article by Jessica Mowatt, Product Owner of The Core at The Distance.