7 Best Serverless Security Practices

7 ways to secure your Cloud native serverless applications

Jayden Aung
AWS in Plain English


A sample serverless application architecture

As we move away from monolithic application architectures and towards microservices architectures, where different pieces of an application operate independently of one another while still being part of a larger cloud-native application, organizations have entered a different era of application development and operation. Microservices architecture is very different from monolithic application architecture. Every microservice runs independently and can have its own datastore and management, which allows you to achieve agility. Microservices can be deployed in a number of ways: they can run in self-hosted or cloud-managed containers, or they can be part of a serverless architecture. (In theory, they can even be developed and hosted locally.) In this article, we’ll be focusing on “serverless applications”.

In serverless architectures, the microservices are provided and managed by cloud providers. The cloud provider takes care of the underlying infrastructure, including compute resources, high availability, scaling, and everything else a microservice needs to run; you just provide the code. An AWS Lambda function is an example of a microservice that can be part of a serverless architecture. Lambda functions are stateless and ephemeral in nature, and Lambda lets you run code without provisioning or managing any compute resources such as servers.
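To make this concrete, here is a minimal sketch of a Lambda handler in Python. The event shape and the greeting logic are illustrative assumptions, not a real API, but the entry-point signature is the one Lambda invokes.

```python
import json

def lambda_handler(event, context):
    # Lambda calls this entry point with the triggering event and a
    # runtime context object; there is no server for you to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything outside this function (fleet capacity, patching, scaling) is the provider's concern.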

Why Serverless?

Nowadays, you can have an enterprise-grade web application deployed entirely on serverless platforms. In the case of AWS, what you’ll need for a working serverless web application are S3 buckets for hosting static web pages and files, CloudFront for serving static and dynamic content at optimized performance, Route53 for DNS hosting, KMS for encryption and key management, API Gateway for exposing API endpoints, DynamoDB for the database layer, Cognito for authentication, SNS/SES for notifications, SQS for message queuing, CloudWatch for logging, Secrets Manager for storing privileged credentials, Lambda functions for driving both front-end and back-end business logic, and Step Functions state machines to orchestrate event-driven workflows.

That said, here are some of the compelling reasons why many organisations are moving towards serverless.

Much of the hype behind migrations to serverless platforms can be attributed to shifting operational burdens to cloud service providers such as AWS or Azure. These responsibilities include operating system patching for underlying runtime environments, high availability, network configuration, and so on. No infrastructure provisioning is required either, and unlike hosting containers, you don’t need to keep runtime environments running. You can focus on what really matters to you (or your customers) the most: the innovation. Being able to dedicate engineering resources solely to development, without caring about the infrastructure, significantly increases your development team’s agility.

Pay-per-use pricing is another factor that attracts organisations to go serverless. On serverless platforms, you only pay for what you use; you don’t pay for underlying compute resources and runtime environments. In the case of serverless functions, pricing is pay-per-invocation: you are charged based on the number of times your function is invoked and how long it runs.
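As a rough sketch of how this pricing model works, Lambda charges are driven by the invocation count plus GB-seconds of compute. The default rates below are assumptions for illustration and may not match current AWS pricing.

```python
def lambda_monthly_cost(invocations, avg_duration_ms, memory_gb,
                        price_per_million=0.20, price_per_gb_second=0.0000166667):
    # Request charge: a flat rate per million invocations.
    request_cost = invocations / 1_000_000 * price_per_million
    # Compute charge: total duration (seconds) * memory, billed per GB-second.
    gb_seconds = invocations * (avg_duration_ms / 1000) * memory_gb
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost

# e.g. 5M invocations/month averaging 120 ms at 512 MB of memory
print(round(lambda_monthly_cost(5_000_000, 120, 0.5), 2))
```

Note there is no baseline cost: zero invocations means a bill of zero, unlike an always-on server.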

Serverless Security Challenges

While serverless functions are becoming more and more popular in cloud-native application development, we’ve also started seeing security challenges that come with the hype. Serverless applications are still at risk from the OWASP Top Ten application vulnerabilities, because serverless functions such as Lambda still execute code. If the code is written in a manner that doesn’t follow security best practices, or if the function has excessive permissions, it can be vulnerable to a wide range of attacks.

We also need to approach the nature of these functions differently, because serverless functions operate differently from traditional application platforms. They’re only alive for a few minutes, if not seconds; in the case of Lambda, the maximum duration a function can run is 15 minutes. Serverless functions can also be triggered from a number of sources: they can be invoked by another function, an event, or even a simple SMS message. That changes everything. For a typical monolithic application, the only entry point is via the exposed APIs, so the attack surface is predictable. That’s not the case for serverless functions.

Serverless Security Best Practices

Here are some best practices for securing your serverless functions. (My suggestions are mostly based on AWS.)

#1 The practice of “one role per function”

Always try to adopt the one-role-per-function practice; you shouldn’t share a single role across multiple functions. Ideally, each function should have a 1:1 relationship with an IAM role. When creating IAM policies, always follow the good ol’ principle of least privilege. For instance, if a function is only supposed to read items from a DynamoDB table, it should have read access and nothing more. Excessive permissions are one of the most critical misconfigurations that an attacker can leverage.

Below is an example of an optimized IAM policy for a function which is only supposed to (1) read items from a DynamoDB table, (2) create a log group and log streams dedicated specifically to the function, and (3) write log events to that log group.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DemoSid1",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:dynamodb:ap-southeast-1:116489363094:table/aws-node-rest-api-with-dynamodb-dev"
      ],
      "Action": [
        "dynamodb:GetItem"
      ]
    },
    {
      "Sid": "DemoSid2",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-southeast-1:116489363094:log-group:/aws/lambda/myfunction-dynamodb-dev*:*"
      ],
      "Action": [
        "logs:CreateLogGroup"
      ]
    },
    {
      "Sid": "DemoSid3",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-southeast-1:116489363094:log-group:/aws/lambda/myfunction-dynamodb-dev-get:*"
      ],
      "Action": [
        "logs:CreateLogStream"
      ]
    },
    {
      "Sid": "DemoSid4",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-southeast-1:116489363094:log-group:/aws/lambda/myfunction-dynamodb-dev-get:*:*"
      ],
      "Action": [
        "logs:PutLogEvents"
      ]
    }
  ]
}

#2 Secure your privileged credentials

Always use execution roles for functions to interact with and access other AWS services, and NEVER embed privileged credentials in your function source code in plain text. If you really must use privileged credentials, such as database passwords or third-party access tokens, they need to be secured both in transit and at rest. Instead of hardcoding them, use Secrets Manager or SSM Parameter Store (on AWS) together with KMS; in the case of Azure Functions, use application settings along with Key Vault. Here is example Python code for accessing a third-party API key stored in Systems Manager Parameter Store.

# Using Systems Manager Parameter Store to store a third-party API key
import boto3

ssm = boto3.client('ssm')
parameter = ssm.get_parameter(Name='MY_ACCESS_TOKEN', WithDecryption=True)
print(parameter['Parameter']['Value'])

In this example, the function makes an AWS API call to retrieve the access token stored in SSM Parameter Store without having to hardcode anything.
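Secrets Manager works similarly. A sketch below retrieves and caches a secret; the secret name, its JSON shape, and the cache-by-default-argument pattern are my own illustrative choices, and the client is passed in so it can be stubbed in tests. Caching matters because Lambda execution environments are reused between invocations.

```python
import json

def get_secret(client, secret_id, _cache={}):
    # Cache secrets per execution environment to avoid an API call
    # on every invocation; warm Lambda containers keep this dict.
    if secret_id not in _cache:
        response = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])
    return _cache[secret_id]
```

In a real function you would call it as `get_secret(boto3.client("secretsmanager"), "prod/db-credentials")`, with decryption handled transparently by KMS.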

#3 Always be mindful of dependencies that you’re using

Dependencies tend to have vulnerabilities. In fact, vulnerable dependencies are one of the most serious security issues for serverless applications according to the OWASP Serverless Top 10 report, as attackers can leverage them against your application and cloud infrastructure. Scanning source code and dependencies for vulnerabilities should be done at the development stage as well as at the pre-build or build stage of the CI/CD pipeline, and you should automate a vulnerability assessment every time a code or dependency change is made to the function.
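As a toy illustration of the gate you want in the pipeline, the check boils down to failing the build when a pinned dependency matches a known advisory. In practice you would use a scanner such as pip-audit, npm audit, or Snyk rather than the hand-maintained mapping sketched here.

```python
# Hypothetical advisory data; a real pipeline pulls this from a scanner
# or an advisory database, never from a hardcoded dict like this one.
KNOWN_VULNERABLE = {
    ("requests", "2.19.0"): "CVE-2018-18074",
}

def audit(pinned_deps):
    # Return advisories matching the pinned (name, version) pairs;
    # a CI step would exit nonzero whenever this list is non-empty.
    return [KNOWN_VULNERABLE[dep] for dep in pinned_deps if dep in KNOWN_VULNERABLE]
```

Wiring this kind of check into every commit is what makes the assessment automatic rather than a manual review step.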

#4 Use environment variables for storing configurations

In certain situations, you should use environment variables to store configuration values such as hostnames. Not only are these easy to change between deployments without changing source code, they also minimize the risk of sensitive data exposure if the Lambda source code is compromised. One use case: if you are developing a serverless bot that checks for social media posts with certain hashtags and reposts them with slight changes, and you need to store the access tokens required for signing requests, consider storing them in environment variables.
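A minimal sketch of this pattern, with variable names and defaults that are purely illustrative:

```python
import os

# Configuration comes from environment variables set on the function,
# not from values hardcoded in the source (names here are assumptions).
API_HOST = os.environ.get("API_HOST", "api.example.com")
TABLE_NAME = os.environ.get("TABLE_NAME", "my-dev-table")

def build_endpoint(path):
    # The hostname can change per deployment without a code change.
    return f"https://{API_HOST}{path}"
```

For genuinely sensitive values such as access tokens, Lambda environment variables can additionally be encrypted with a KMS key rather than stored in the clear.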

#5 Tighten access control and configurations

You need to assess Lambda configurations and make sure things like events and triggers are defined correctly according to the requirements. If your function only needs to be invoked by an S3 PUT event, remove unnecessary services from the trigger list. When the function is deployed in a VPC, consider employing strict network access controls using security groups and network ACLs. If API Gateway is used as an event source for your Lambda functions, use function-specific API authorization whenever you can; API Gateway can provide multiple authentication and authorization mechanisms for API clients, so you should definitely leverage them.

Defining S3 event source
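For instance, a Lambda resource-based policy can restrict who may invoke the function to the S3 service, and only for events from one specific bucket. This is roughly what `aws lambda add-permission` with a `--source-arn` produces; the ARNs, names, and account ID below are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3InvokeFromOneBucket",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:ap-southeast-1:111122223333:function:my-function",
      "Condition": {
        "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::my-upload-bucket" }
      }
    }
  ]
}
```

Without the condition, any bucket in any account that S3 acts for could trigger the function; the `SourceArn` condition narrows the trigger to exactly one bucket.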

#6 Automate security checks in CI/CD pipelines

Think DevSecOps. When developing a serverless application in a CI/CD pipeline, you should infuse risk-mitigating activities throughout the pipeline, including compliance and governance. These activities should involve checking the Lambda control plane for misconfigurations, scanning source code and dependencies for vulnerabilities, looking for excessive permissions, and assessing the function against industry-standard frameworks (or your own organization-specific policies). The keyword here is “automation”, because we don’t want DevOps processes to stop for a security check.

#7 Use a commercial product to enhance protection of serverless applications

Following these best practices will definitely strengthen your serverless security, but you can never be too careful with cloud-native applications, especially with many new threats on the rise. For example, when the application is at runtime and the function is invoked by an attacker using malicious payloads such as command injection, you will want your application to fend off those attacks. If you want to go a step further in beefing up your cloud-native security and add extra layers of defense to your serverless applications, consider deploying a commercial security product.

In conclusion, serverless functions operate differently, and as such we should take a holistic approach to securing cloud-native workloads on serverless platforms, continuously securing them throughout the CI/CD pipeline and at runtime.
