In November 2024, I set out to begin earning AWS certifications and deepen my understanding of the tools available in the Amazon cloud. I have learned more than I expected, and many of the concepts carry over to other cloud platforms. In my initial article I decided that, on top of learning the AWS tooling, I wanted to know more about containerizing applications and deploying with Kubernetes. I was excited to discover this project, which involves deploying Docker containers to AWS using Amazon Elastic Container Service (ECS). ECS has similarities to Kubernetes, and the project also references Amazon Elastic Kubernetes Service (EKS) and provides some information on the subject. In the following article I explain more about container services on Amazon and review what I was able to deploy. This was one of my favorite projects in the Solutions Architect certification I completed.
I am happy to have accepted a new software position! I expect it will allow me to work on much bigger systems with many integrated technologies. I am very excited to accept the challenge and grow into the role.
In a traditional software deployment, you have hardware or infrastructure that runs an operating system, and on top of that operating system you run multiple applications built with languages like Python, Node.js, Ruby, and so on. These applications require specific libraries and dependencies to run properly, which can cause difficult installations or version conflicts that make scaling your applications challenging. Deploying with containers provides a standard way to package your application's code, libraries, and dependencies into a single object. In a containerized deployment you still have infrastructure and an operating system, plus a container engine like Docker that shares the resources of the underlying operating system across your containers. Each container packages the libraries and dependencies that enable your app to run, which lets you move the application across different platforms with ease. Containers are lightweight, portable, and scalable.
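As a minimal sketch of that packaging idea, a Dockerfile for a simple Python web app might look like the following. The base image, file names, and port here are assumptions for illustration, not the lab's actual files:

```
# Start from a minimal Python base image (illustrative choice)
FROM python:3.12-slim

# Copy the application code and its dependency manifest into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Document the port the app listens on and define the startup command
EXPOSE 8443
CMD ["python", "app.py"]
```

Because the libraries are installed inside the image, the app runs the same way on a laptop, an EC2 instance, or Fargate.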
AWS offers a range of services to help you build, deploy, and scale containerized applications. Here's an overview of the key services and tools available:
AWS provides multiple compute options for running containers, catering to different levels of management and control:
AWS Fargate
Amazon EC2 (Elastic Compute Cloud)
Container Orchestration Choices
These services allow you to run Docker containers at scale, whether you prefer a fully managed solution or greater control over the environment.
Container management involves several key components, which can be grouped into three categories:
The project used Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), and AWS Fargate to host containerized applications without the need to provision and manage servers.
In this system design we use a Docker image pushed to Amazon ECR and deployed to Fargate via a task definition. Below are descriptions of these components.
The lab has me use AWS Systems Manager Session Manager to connect to an EC2 instance, where I unzip a .zip file containing the application: a simple Python web app.
First we create a Docker image of the application. This is done by writing a Dockerfile, which specifies the base image to use and describes what I want to install and run on top of it. I build the Docker image and push it to an Amazon ECR repository. ECR is an image registry service that allows me to store, share, and deploy the container software anywhere.
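The build-and-push steps can be sketched with the Docker and AWS CLIs roughly as follows. The region, account ID, and repository name are placeholders, not the lab's actual values:

```
# Authenticate Docker to the ECR registry for this account and region
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image from the Dockerfile in the current directory
docker build -t python-webapp .

# Tag the image with the ECR repository URI and push it
docker tag python-webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/python-webapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/python-webapp:latest
```

The ECR repository has to exist before the push; the lab's console workflow accomplishes the same thing without the CLI.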
After the image is pushed to ECR, I grab its URI from the ECR dashboard.
Then we configure the project's security group to allow inbound traffic on custom TCP port 8443 from source 0.0.0.0/0, which permits external access to that port.
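The same ingress rule could be added from the CLI; the security group ID below is a placeholder for the lab's actual group:

```
# Allow inbound TCP traffic on port 8443 from any IPv4 address
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8443 \
  --cidr 0.0.0.0/0
```

Opening a port to 0.0.0.0/0 is fine for a lab, but in production you would scope the source CIDR more tightly.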
Then, in the ECS dashboard, I create a new task definition and configure it to launch via AWS Fargate, defining its operating system, CPU, and memory. We also specify the image URI we pushed to ECR so that the task knows which image to deploy, and we define the container's port mapping to match the port we opened in the project's security group.
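A task definition with those settings might look roughly like this JSON. The family name, image URI, role ARN, and CPU/memory sizes are illustrative assumptions, not the lab's exact values:

```
{
  "family": "python-webapp",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "runtimePlatform": { "operatingSystemFamily": "LINUX" },
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "python-webapp",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/python-webapp:latest",
      "essential": true,
      "portMappings": [
        { "containerPort": 8443, "protocol": "tcp" }
      ]
    }
  ]
}
```

The console form fills in the same fields; the containerPort here matches the 8443 rule in the security group.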
Finally, I create the task, run it, set the task's cluster to the pre-made cluster set up for the lab, and confirm it is using the Fargate launch type.
In the networking section of the task, I make sure that the lab's VPC is selected, along with the appropriate subnets and the security group configured in the previous step. I turn on the public IP so the task is reachable from the internet. Finally, we make sure the task is assigned the appropriate IAM role so it can perform the actions required for deployment.
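Those run-time choices map onto a single CLI call; the cluster, subnet, and security group IDs below are placeholders for the lab's resources:

```
# Launch the task on the lab cluster using the Fargate launch type,
# attaching it to the lab VPC's subnet and security group with a public IP
aws ecs run-task \
  --cluster lab-cluster \
  --launch-type FARGATE \
  --task-definition python-webapp \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'
```

Once the task reaches the RUNNING state, the app is reachable at the task's public IP on port 8443.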
The lab goes on to challenge me to deploy one more task definition using what I learned, without guided instructions. Although the material is fairly dry, I enjoyed the experience. I hope future me appreciates the work and that anyone reading this learned something.
Cheers!