AWS Open Source Blog

Getting started with Bottlerocket on AWS Graviton2

Bottlerocket is a Linux-based operating system designed from the ground up to run containers. With its built-in security hardening and transactional update model, Bottlerocket improves the security and operations of container infrastructure. It integrates with container orchestrators to enable automated OS updates, reducing management and operational overhead while improving uptime. Although AWS-provided builds of Bottlerocket work with Amazon Elastic Kubernetes Service (Amazon EKS), Bottlerocket is designed to integrate with any container orchestrator.

Among the processors powering Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Graviton2-based instances provide up to 40 percent better price/performance over comparable current-generation x86-based instances. Given this benefit, we're seeing increased adoption of Graviton2 by our customers, who run a wide range of workloads, from application servers, databases, and in-memory caches, to high performance computing and electronic design automation.

In this article we bring the two technologies, Bottlerocket and Graviton2, together by spinning up a Kubernetes cluster running Bottlerocket OS on Graviton2-based worker nodes. As customers adopt Graviton2-based Amazon EC2 instances, we expect their Kubernetes clusters to contain a mix of Graviton2 and x86-based EC2 instances, so that is the use case we showcase here. We'll deploy an Amazon EKS cluster composed of two managed node groups, one configured with an x86-based and the other with a Graviton2-based EC2 instance type, both of which use a Bottlerocket-based Amazon Machine Image (AMI).

Solution overview

We program our infrastructure in Python using the AWS Cloud Development Kit (AWS CDK). This lets us work in a high-level programming language that acts as a layer of abstraction on top of the underlying AWS CloudFormation templates that AWS CDK generates.

The code for this solution can be found on GitHub under the repository named amazon-eks-bottlerocket-nodes-on-graviton2.

The solution consists of three main parts:

  1. Amazon EKS cluster set up with worker node groups running Bottlerocket OS powered by Graviton2 and x86-based processors. The key components of this part are:
    • EKS cluster with no default node group
    • Launch template specifying a Graviton2 instance type and a Bottlerocket AMI
    • Second launch template specifying an x86 instance type and a Bottlerocket AMI
    • Two node groups that are added to the EKS cluster using each of the above two launch templates

    Besides specifying the EC2 instance type and the Bottlerocket AMI ID, the launch template specifies the user data that the worker nodes will be configured with. The user data is specified in TOML format and carries key data such as the Kubernetes cluster's API server endpoint, the cluster name, and the cluster certificate data. This allows the worker nodes to discover the API server and join the EKS Kubernetes cluster on startup. (A condensed sketch of this wiring follows this list.)

    Note: Although the Amazon EKS API provides a way to add a managed node group by specifying the EC2 instance type and AMI type, as of this writing it does not work for the Bottlerocket AMI type. This feature is currently being developed by the Amazon EKS team and will be made available in the near future. Hence, in our solution here, we achieve the same outcome in a slightly different manner by defining a launch template with a custom AMI.

    The custom AMI is a Bottlerocket AMI matching the CPU architecture of the specific EC2 instance type. We then refer to the launch template in the EKS API call that adds the managed node group.

    The following diagram illustrates the EKS cluster we’ll be building here.

    Diagram illustrating the architecture described in the post.

  2. Application code. The key components are:
    • Simple Hello World application in Go
    • Dockerfile that specifies the packages and the Entrypoint for the app
    • Kubernetes manifest that defines a Deployment and a load balancer service for the app
  3. Build pipeline infrastructure setup. The key components are:
    • Amazon Elastic Container Registry (Amazon ECR) repository
    • AWS CodeCommit repository
    • Build pipeline on AWS CodePipeline, which includes:
      • An initial stage for a source commit action
      • A stage with one build action for the arm64 container image and another for the amd64 container image
      • A final stage that builds the Docker manifest list combining the two architecture-specific images and updates the container image setting in the Kubernetes manifest file with the new image tag
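
To make the launch template wiring from part 1 concrete, below is a condensed Python CDK sketch of the Graviton2 node group. It is illustrative rather than a copy of the repository code: the construct IDs, the instance type, and the Kubernetes version in the SSM parameter path are assumptions, and the actual stacks in the repo may differ in their details.

from aws_cdk import aws_ec2 as ec2, aws_eks as eks, core


class ContainerInfraStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # EKS cluster with no default node group
        cluster = eks.Cluster(
            self, "EKS",
            version=eks.KubernetesVersion.V1_19,  # assumed cluster version
            default_capacity=0,
        )

        # Bottlerocket user data is TOML, not a shell script; it gives the node
        # the API server endpoint, cluster name, and certificate data it needs
        # to discover and join the cluster on startup.
        user_data = ec2.UserData.custom(
            "[settings.kubernetes]\n"
            f'api-server = "{cluster.cluster_endpoint}"\n'
            f'cluster-name = "{cluster.cluster_name}"\n'
            f'cluster-certificate = "{cluster.cluster_certificate_authority_data}"\n'
        )

        # Launch template pairing a Graviton2 instance type with the arm64
        # Bottlerocket AMI, resolved from Bottlerocket's public SSM parameter.
        launch_template = ec2.LaunchTemplate(
            self, "BottlerocketArm64LT",
            instance_type=ec2.InstanceType("m6g.large"),  # assumed instance type
            machine_image=ec2.MachineImage.from_ssm_parameter(
                "/aws/service/bottlerocket/aws-k8s-1.19/arm64/latest/image_id",
                os=ec2.OperatingSystemType.LINUX,
            ),
            user_data=user_data,
        )

        # Managed node group that references the launch template; the x86 node
        # group is built the same way with an x86 instance type and the
        # .../x86_64/latest/image_id SSM parameter.
        cluster.add_nodegroup_capacity(
            "bottlerocket-arm64",
            launch_template_spec=eks.LaunchTemplateSpec(
                id=launch_template.launch_template_id,
                version=launch_template.latest_version_number,
            ),
        )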

Walkthrough

The steps in this process are as follows:

  1. Initialize parameters in AWS Systems Manager (SSM) Parameter Store to hold the Docker Hub user name and password.
  2. Fetch the code and explore the infrastructure code and the application code.
  3. Create the EKS cluster and the build pipeline infrastructure.
  4. Check out the Bottlerocket OS running on Graviton2 and x86 worker nodes.
  5. Push the Hello World application code to CodeCommit to trigger build pipeline.
  6. Test the solution.
  7. Clean up.

Prerequisites

This walkthrough has the following prerequisites:

  • AWS CLI installed along with the AWS Systems Manager Session Manager plugin. We'll be using the default profile for the AWS CLI, so make sure to configure this profile with the intended AWS Region and appropriate AWS credentials.
  • HTTPS Git credentials set up for the IAM user you will utilize for this exercise. This will be used to commit application code to AWS CodeCommit.
  • AWS CDK installed. Follow the prerequisites and installation guidance. AWS CDK will be used to deploy the application and deployment pipeline stacks.
  • Kubectl installed. Follow the installation instructions; it will be used to communicate with the EKS cluster.
  • A Docker Hub account and access token created in Docker Hub. The user name and token will be used to pull images from Docker Hub during the build phase of CodePipeline.

Initialize parameters in AWS SSM Parameter Store

Initialize parameters in AWS SSM Parameter Store for the Docker Hub user name and password. To simplify the steps in this particular example, we've chosen the plaintext String data type for storing the Docker Hub password; in your own environment, make sure to secure it with a more secure data type, such as SecureString along with an encryption key.

Initiate the following commands to create the two parameters in SSM Parameter Store.

# Replace DOCKERHUB-USERNAME and DOCKERHUB-PASSWORD with your Docker Hub user name and password.
$ aws ssm put-parameter --name "/hello-bottlerocket/dockerhub/username" --type "String" --value <DOCKERHUB-USERNAME>
$ aws ssm put-parameter --name "/hello-bottlerocket/dockerhub/password" --type "String" --value <DOCKERHUB-PASSWORD>
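
In your own environment, you could instead store the password as a SecureString parameter, encrypted with the default AWS managed key (or pass --key-id to use your own KMS key); note that the pipeline's build role would then also need permission to decrypt it:

$ aws ssm put-parameter --name "/hello-bottlerocket/dockerhub/password" --type "SecureString" --value <DOCKERHUB-PASSWORD>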

Fetch the code and explore the setup

Clone the repository on the machine:

$ git clone git@github.com:aws-samples/amazon-eks-bottlerocket-nodes-on-graviton2.git hello-bottlerocket

The cdk folder contains the code that builds the EKS cluster and the build pipeline infrastructure, located in the container_infra and build_infra folders, respectively.

The application code can be found in the app folder. It's a simple Hello World Go application that prints details about the environment it's running on. The app folder also contains a Dockerfile and a Kubernetes manifest file for the application; the manifest consists mainly of a Deployment and a Service definition.

Once the infrastructure is deployed, we will commit the application code into the newly created CodeCommit repository to trigger the build pipeline for application deployment.

Create the EKS cluster and the build pipeline

Set up the environment:

$ cd hello-bottlerocket/cdk
$ python3 -m venv .env
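
Depending on how the deploy script is set up, you may also need to activate the virtual environment and install the Python dependencies first (assuming the repository provides a requirements.txt):

$ source .env/bin/activate
$ pip install -r requirements.txt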

Run the cdk-deploy-to.sh script, which uses the CDK to deploy both the EKS cluster and the build pipeline:

# REPLACE ACCOUNT-ID and REGION with your AWS account number and region
$ ./cdk-deploy-to.sh <ACCOUNT-ID> <REGION>

Depending on your security settings, you will be prompted to confirm that the required IAM and security policy changes are acceptable. If everything looks good, confirm by entering y at the prompt.

Once the EKS cluster stack has been deployed, CDK will prompt you to confirm deployment of the build pipeline stack. Before proceeding, make sure to copy the command you will need later to configure kubectl. It is shown as the value of the CDK output parameter container-infra-stack.EKSConfigCommand<ID>:

Outputs:
container-infra-stack.EKSConfigCommand<ID> = aws eks update-kubeconfig --name EKSE2753513-1a6b2d1a8893480bb1302380ade563ea --region us-east-2 --role-arn arn:aws:iam::411317953072:role/container-infra-stack-EKSMastersRole2941C445-ZIUVEDAMSV2I

Similar to the previous prompt, this is asking for confirmation that the security-related changes needed as part of deploying the build pipeline stack are acceptable. If everything looks good, confirm by entering y at the prompt.

After the build pipeline stack has been deployed, make a note of the CodeCommit repository URL. This will be required in a later step to trigger the deployment of the sample Hello World application container. It will be shown as the value for the CDK output parameter build-infra-stack.CodeCommitOutputrepositoryurl:

Outputs:
build-infra-stack.CodeCommitOutputrepositoryurl = https://git-codecommit.us-east-2.amazonaws.com/v1/repos/hello-bottlerocket-multi-cpu-arch

Check out the Bottlerocket OS running on Graviton2 and x86 worker nodes

Update kubeconfig to allow kubectl to connect to the newly created EKS cluster. This is the command you copied in the previous step when CDK deployed the EKS cluster; it's the value of the output parameter named container-infra-stack.EKSConfigCommand<ID>:

$ aws eks update-kubeconfig --name EKSE2753513-1a6b2d1a8893480bb1302380ade563ea --region us-east-2 --role-arn arn:aws:iam::411317953072:role/container-infra-stack-EKSMastersRole2941C445-ZIUVEDAMSV2I

List the worker nodes in the EKS cluster along with the attributes of interest to us:

$ kubectl get nodes -o=custom-columns=NODE:.metadata.name,ARCH:.status.nodeInfo.architecture,OS-Image:.status.nodeInfo.osImage,OS:.status.nodeInfo.operatingSystem

NODE                                        ARCH   OS-Image               OS
ip-10-0-101-226.us-east-2.compute.internal  arm64  Bottlerocket OS 1.0.7  linux
ip-10-0-187-156.us-east-2.compute.internal  amd64  Bottlerocket OS 1.0.7  linux

Check the configuration settings of the Bottlerocket OS running on a worker node. Because we enabled the SSM permission for the node instance role of the worker nodes, the AWS SSM agent should be running on each node, and we can access it through a session via the SSM service.
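
In CDK terms, enabling that access amounts to attaching the AmazonSSMManagedInstanceCore managed policy to the node group's instance role. A minimal sketch, assuming nodegroup holds the node group object returned by add_nodegroup_capacity:

from aws_cdk import aws_iam as iam

# Allow the SSM agent on the worker nodes to register with the SSM service
nodegroup.role.add_managed_policy(
    iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSSMManagedInstanceCore")
)

We just need the worker node EC2 instance's ID for launching a session. Let's find the instance IDs by describing the instances: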

$ aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId]" --output text

i-08f2426c826d38ad5
i-068d8f40c6c90a596

Choose either of the two instance IDs and launch an SSM session:

$ aws ssm start-session --target i-08f2426c826d38ad5

Starting session with SessionId: eks-course-0bf27ea1e078f5c40
Welcome to Bottlerocket's control container!

From the control container on this instance, we can list the current configuration settings of the Bottlerocket OS using the apiclient tool, as shown below.

$ apiclient -u /settings

Its output shows key details, such as the node IP address, DNS settings, MOTD, update URLs, and whether the admin container is enabled.

We can also check the CPU architecture the control container is running on with the uname command:

$ uname -a

Linux ip-10-0-101-226.us-east-2.compute.internal 5.4.95 #1 SMP Wed Mar 17 19:45:21 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

As shown in the output above, the aarch64 value indicates that this control container is hosted on an EC2 worker node belonging to the EKS node group configured with a Graviton2 instance type.

Push the application code into CodeCommit repository to trigger the build pipeline

The build pipeline stack created a CodeCommit repository in an earlier step. Clone this repository to a local folder located at the same level as the hello-bottlerocket folder where we cloned the code initially. We'll call it developer-repo:

$ cd ../..
# REPLACE build-infra-stack.CodeCommitOutputrepositoryurl with its value you saved earlier
$ git clone <build-infra-stack.CodeCommitOutputrepositoryurl> developer-repo

Copy the application code from the hello-bottlerocket folder to the developer-repo folder:

$ cp -R hello-bottlerocket/* ./developer-repo/

Commit the code and push it to the CodeCommit repository. When prompted, enter the HTTPS Git credentials set up in the prerequisites:

$ cd developer-repo
$ git add -A
$ git commit -m "trigger app deployment build pipeline"
$ git push

Monitor the build in CodePipeline via the AWS Management Console to ensure that the pipeline completes successfully. This usually takes a few minutes.
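
Under the hood, the final pipeline stage stitches the two architecture-specific images into one multi-architecture image along these lines (a sketch only; the repository's buildspec may differ, and the repository URI and tags here are placeholders):

$ docker manifest create <ECR-REPO-URI>:<TAG> <ECR-REPO-URI>:<TAG>-arm64 <ECR-REPO-URI>:<TAG>-amd64
$ docker manifest push <ECR-REPO-URI>:<TAG>

Kubernetes nodes of either architecture can then pull the same image tag, and the container runtime picks the matching image from the manifest list.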

Testing the solution

Verify that the application pods are up and running, and make a note of the worker nodes they are hosted on.

$ kubectl get pods -o wide

NAME                                  READY   STATUS    RESTARTS   AGE   IP             NODE                                         NOMINATED NODE   READINESS GATES
hello-bottlerocket-78f56897c8-hz98l   1/1     Running   0          14h   10.0.123.184   ip-10-0-101-226.us-east-2.compute.internal   <none>           <none>
hello-bottlerocket-78f56897c8-s52v6   1/1     Running   0          14h   10.0.165.236   ip-10-0-187-156.us-east-2.compute.internal   <none>           <none>

Obtain the URL of our newly deployed service. We may need to wait a couple of minutes after the build pipeline completes for the NLB load balancer to be provisioned.

$ kubectl get services

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE
hello-bottlerocket   LoadBalancer   172.20.92.75   aab8c97bf6d9f46788792b4319c83fab-008d9b125aa48ae5.elb.us-east-2.amazonaws.com   80:30493/TCP   14h
kubernetes           ClusterIP      172.20.0.1     <none>                                                                          443/TCP        15h

Let’s access the application to see the Hello World application in action.

$ curl http://aab8c97bf6d9f46788792b4319c83fab-008d9b125aa48ae5.elb.us-east-2.amazonaws.com

Hello there!!!
I'm running on {cpuArch: arm64, nodeName: ip-10-0-101-226.us-east-2.compute.internal}

The load balancer sent this request to the application pod running on the Graviton2-based EC2 worker node, as indicated by the arm64 value of the cpuArch field in the output.

If we try it a few more times, we'll notice that some requests get serviced by the second application pod, running on the x86-based EC2 worker node, as indicated by the amd64 value in the output below.

$ curl http://aab8c97bf6d9f46788792b4319c83fab-008d9b125aa48ae5.elb.us-east-2.amazonaws.com

Hello there!!!
I'm running on {cpuArch: amd64, nodeName: ip-10-0-187-156.us-east-2.compute.internal}

Cleaning up

To avoid incurring charges, tear down the infrastructure created in this post by running the provided cleanup script:

$ cd ../hello-bottlerocket/cdk

# REPLACE ACCOUNT-ID and REGION with your AWS account number and region
$ ./cleanup.sh <ACCOUNT-ID> <REGION>

Note that AWS CDK bootstraps a stack named CDKToolkit in CloudFormation as part of deploying our container and build pipeline stacks, in case the environment didn't have it prior to running this exercise. This stack provides an Amazon Simple Storage Service (Amazon S3) bucket that cdk deploy uses to store synthesized templates and the related assets. By default, the CDKToolkit stack is protected from stack termination. So, if the stack did not exist before running this exercise, make sure to delete it manually after emptying the objects in the corresponding Amazon S3 bucket, as shown below. This way you can avoid incurring future charges on your AWS account.
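
One possible sequence is to disable termination protection, empty the staging bucket (the bucket name below is a placeholder; look up the actual name in the CDKToolkit stack's resources), and then delete the stack:

$ aws cloudformation update-termination-protection --stack-name CDKToolkit --no-enable-termination-protection
$ aws s3 rm s3://<STAGING-BUCKET-NAME> --recursive
$ aws cloudformation delete-stack --stack-name CDKToolkit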

Conclusion

We hope that this tour of running Bottlerocket on Graviton2-powered EKS worker nodes has shown you a way to combine the benefits of a container-optimized OS with the performance and cost benefits of Graviton2 processors.

As always, we welcome your feedback as we build new features based on customer inputs and by anticipating your needs. Provide input for features or enhancements, or add your story to existing feature requests by visiting the AWS Containers Roadmap on GitHub.

Vijoy Choyi

Vijoy Choyi is a Senior Solution Architect at AWS, where he helps Digital Native customers in the Bay Area build well-architected workloads running across AWS and their on-prem environments. He is passionate about building distributed systems at cloud scale and has a strong application development background. In his spare time, he enjoys exploring the outdoors, reading, cooking and spending time with his family.