Export AWS CloudWatch Logs to S3 using Lambda Functions and Node.js

Shraddha Paghdar
AWS in Plain English
5 min read · Jun 20, 2021


Image by Ag Ku from Pixabay

AWS CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis.

By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group: keep indefinite retention, or choose a retention period between one day and 10 years. CloudWatch Logs stores your log data in highly durable storage, and the CloudWatch Logs agent makes it easy to ship log data from your instances to the service.

There are two ways to archive the logs:

  1. Manual process
  2. Automated process

In today’s post, we are going to look at both of these processes.

Prerequisites

Step 1: Log in to your AWS account.

Step 2: Create an Amazon S3 bucket in the same region as your CloudWatch Logs.
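If you prefer to script this step, here is a minimal sketch using the AWS SDK for JavaScript (v2); the bucket name and region below are placeholders that you should replace with your own values:

const AWS = require('aws-sdk');

// Placeholders: replace with your own region and bucket name.
const s3 = new AWS.S3({ region: 'selected_region' });

s3.createBucket({
  Bucket: 'bucket_name',
  // For regions other than us-east-1, the location constraint must match the bucket's region.
  CreateBucketConfiguration: { LocationConstraint: 'selected_region' },
}).promise()
  .then(() => console.log('Bucket created'))
  .catch((err) => console.error(err));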

Step 3: Create an IAM user with full access to Amazon S3 and CloudWatch Logs. To learn more about how to create an AWS S3 bucket and an IAM user, read here.

Step 4: Set permissions on the Amazon S3 bucket.
By default, all Amazon S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name",
      "Principal": { "Service": "logs.selected_region.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name/random_string/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.selected_region.amazonaws.com" }
    }
  ]
}

By setting the above policy under S3 bucket > Permissions > Bucket policy (Bucket Policy Editor), the bucket owner allows CloudWatch Logs to export log data to the Amazon S3 bucket.
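If you want to apply this policy programmatically instead of through the console, a minimal sketch with the AWS SDK for JavaScript (v2) might look like the following; bucket_name, selected_region, and random_string are the same placeholders used above:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'selected_region' });

// The same bucket policy as above, expressed as a JavaScript object.
const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Action: 's3:GetBucketAcl',
      Effect: 'Allow',
      Resource: 'arn:aws:s3:::bucket_name',
      Principal: { Service: 'logs.selected_region.amazonaws.com' },
    },
    {
      Action: 's3:PutObject',
      Effect: 'Allow',
      Resource: 'arn:aws:s3:::bucket_name/random_string/*',
      Condition: { StringEquals: { 's3:x-amz-acl': 'bucket-owner-full-control' } },
      Principal: { Service: 'logs.selected_region.amazonaws.com' },
    },
  ],
};

s3.putBucketPolicy({ Bucket: 'bucket_name', Policy: JSON.stringify(policy) })
  .promise()
  .then(() => console.log('Bucket policy applied'))
  .catch((err) => console.error(err));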

Manual Process

Step 1: Go to CloudWatch > Log groups, select the log group that you want to export, and then select Export data to Amazon S3.

Step 2: Choose the time range and the S3 bucket name. For the S3 bucket prefix, enter the randomly generated string that you specified in the bucket policy. Click Export, and you can then see the logs inside the selected S3 bucket.

Automated Process

Step 1: Go to AWS Lambda > Functions.
AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic or create your own back-end services that operate at AWS scale, performance, and security.

Step 2: Choose Create function > Author from scratch.
The code you run on AWS Lambda is called a “Lambda function.” After you create your Lambda function it is always ready to run as soon as it is triggered, similar to a formula in a spreadsheet. Each function includes your code as well as some associated configuration information, including the function name and resource requirements. Lambda functions are “stateless,” with no affinity to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function as needed to scale to the rate of incoming events.

Step 3: Give the function a name, choose Node.js 14.x as the runtime, and for permissions select “Create a new role with basic Lambda permissions”.
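Note that “basic Lambda permissions” only let the function write its own logs. For the export call in the next step to succeed, the execution role also needs permission to call logs:CreateExportTask. A minimal policy sketch for that (you may want to scope the resource down to specific log groups) looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateExportTask",
      "Resource": "*"
    }
  ]
}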

Step 4: Once the Lambda function is created, go to Code and paste in the following code.

const AWS = require('aws-sdk');

const cloudconfig = {
  apiVersion: '2014-03-28',
  region: 'selected_region', // replace with your region
};

const cloudwatchlogs = new AWS.CloudWatchLogs(cloudconfig);

exports.handler = async (event, context) => {
  const params = {
    destination: 'bucket_name', // replace with your bucket name
    from: new Date().getTime() - 86400000, // start of range: 24 hours ago, in milliseconds (the rule below runs daily)
    logGroupName: 'log-name', // replace with your log group name
    to: new Date().getTime(), // end of range: now
    destinationPrefix: 'random_string', // replace with the random string used to grant permission on the S3 bucket
  };

  return cloudwatchlogs.createExportTask(params).promise()
    .then((data) => {
      console.log(data);
      return {
        statusCode: 200,
        body: data,
      };
    })
    .catch((err) => {
      console.error(err);
      return {
        statusCode: 501,
        body: err,
      };
    });
};

In the above code, we create a new CloudWatchLogs client and call createExportTask with the following parameters (a short example using the optional ones follows the list):

i. destination: The name of the S3 bucket for the exported log data. The bucket must be in the same AWS region as the log group.

ii. destinationPrefix: The prefix used as the start of the key for every object exported. If you don’t specify a value, the default is exportedlogs.

iii. from: The start time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970, 00:00:00 UTC. Events with a timestamp earlier than this time are not exported.

iv. logGroupName: The name of the log group.

v. logStreamNamePrefix: Export only log streams that match the provided prefix. If you don’t specify a value, no prefix filter is applied. It’s an optional parameter.

vi. taskName: The name of the export task. It’s an optional parameter.

vii. to: The end time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970, 00:00:00 UTC. Events with a timestamp later than this time are not exported.
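For example, if you only want to export log streams that share a prefix and want to give the task a readable name, the params object could look like this (all values below are placeholders):

const params = {
  destination: 'bucket_name',
  destinationPrefix: 'random_string',
  logGroupName: 'log-name',
  logStreamNamePrefix: 'my-stream-prefix', // optional: export only matching streams
  taskName: 'daily-log-export',            // optional: a human-readable task name
  from: new Date().getTime() - 86400000,   // last 24 hours
  to: new Date().getTime(),
};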

If you test the above function, it starts the export task and returns a taskId in the response.
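A successful invocation returns roughly the following shape (the task ID below is just a placeholder):

{
  "statusCode": 200,
  "body": { "taskId": "your-task-id" }
}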

Step 5: Click “Add trigger” and choose “EventBridge”.

To run the above function automatically we need to add the trigger event. Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications.

Choose a rule name and add a description. The schedule expression acts like a cron job that automatically triggers the event whenever the expression matches. We are going to set a 1-day rate, which invokes the Lambda function once a day, as shown below.
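For reference, a daily schedule can be written either as a rate expression or as the equivalent cron expression (the cron version below fires at midnight UTC):

rate(1 day)
cron(0 0 * * ? *)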

Thanks for reading.

Originally published at https://noob2geek.in on 18th June 2021.

More content at plainenglish.io

