AWS Containerization With Docker, ECR, ECS & Fargate

Chinelo Osuji
18 min read · Sep 29, 2023

Containerization has emerged as a dominant strategy for many organizations seeking agility, efficiency, and consistency in their web deployment processes. Using Docker Hub in combination with AWS ECR can be considered a hybrid approach to containerization of web applications, combining the strengths of both public and private container image repositories to provide a versatile and secure deployment strategy. Organizations that require stricter access control and enhanced security for their container images would benefit from this approach. Some companies may use Docker Hub for development or legacy applications and migrate to ECR for production or as they grow. Others may keep images on both for redundancy. Let’s get a better understanding by first reviewing each service/platform.


Docker provides a consistent and reproducible way to build, ship, and run applications, regardless of the underlying infrastructure. To be more specific, Docker is a containerization platform that allows developers to package applications and their dependencies into lightweight, portable containers. These containers are isolated environments that encase everything needed to run an application, including code, runtime, system libraries/packages, and settings. This is ideal for a microservices architecture where each service runs in its own isolated environment.

Docker Hub

Docker Hub is a cloud-based registry service provided by Docker that allows users to store, share, and distribute Docker container images. It serves as a centralized repository where developers can find, pull, and push Docker images. Docker Hub enhances web application deployment by providing a platform for collaborating on container images, enabling version control of images, and simplifying the distribution of images to different deployment environments.

Elastic Container Registry (ECR)

ECR is a fully managed private container image registry service offered by AWS, enabling developers to store, manage, and deploy Docker container images securely. It seamlessly integrates with other AWS services like ECS, making it easier to deploy containerized applications. It offers features like image versioning, fine-grained access control, and image scanning to detect vulnerabilities, enhancing the security and compliance of containerized applications. ECR also enables the easy distribution of container images across multiple regions, improving availability and reducing latency for global deployments.

In contrast to ECR, Docker Hub is a public registry, and while Docker Hub offers private repositories, some organizations prefer to minimize exposure to external repositories. Security, redundancy, and backup considerations may favor a hybrid approach of using both ECR and Docker Hub, allowing critical or sensitive images to reside in ECR while leveraging Docker Hub for public or less-sensitive images.

Elastic Container Service (ECS)

ECS is a highly scalable and fully managed container orchestration service provided by AWS. ECS simplifies the deployment, management, and scaling of Docker containers, allowing you to run containerized applications seamlessly. It works with ECR by providing a natural and integrated environment for deploying containers stored in ECR. ECS enables you to define how your containers should run and distribute tasks across clusters, making it easier to achieve high availability and fault tolerance for web applications. This combination of ECS and ECR streamlines the deployment of web applications by ensuring reliable access to container images and simplifying the scaling and orchestration of containers.

And while Docker Hub is a valuable repository for Docker images, ECS serves a distinct purpose in orchestrating and managing containerized applications in production. It complements Docker Hub by handling the operational aspects of containerized applications, ensuring they run reliably, are highly available, and can efficiently scale in response to user demand. ECS provides features like load balancing, auto-scaling, and task management that are essential for deploying and maintaining web applications at scale. With all of this in mind, you can use Docker Hub for development and testing phases, while using ECS to deploy production-ready images stored in ECR.

In some cases, especially for smaller projects or less security-sensitive applications, this can be overkill and a more streamlined approach (Docker Hub alone) might be more suitable. However, this combination can optimize the deployment process and provide the necessary infrastructure to support huge production workloads effectively, making it a sensible choice for larger enterprises with strict security and compliance requirements.

AWS Fargate

If the goal is to prioritize application development over infrastructure management, Fargate offers a compelling advantage. Fargate is a serverless compute engine for containers provided by AWS, designed to abstract away the infrastructure layer, allowing developers to focus solely on application development and deployment. And when combined with ECS, it lets you run containers without managing the underlying EC2 instances. It also offers automatic scaling, reduced operational overhead since there are no instances to patch or manage, and enhanced security with task-level isolation.


Let’s say we have a company that offers innovative web solutions to its clients. Driven by the need to revolutionize its web application deployment strategy as their clients’ demands increase, the company seeks a comprehensive solution that not only guarantees the security and scalability of their web applications but also enhances version control.

As part of their software development process, they decided to adopt containerization using Docker to streamline their web application deployment. The elasticity of Docker containers allows them to scale resources effortlessly, aligning with spikes in user traffic. With this in mind, they adopted Docker Hub to centralize and version their container images, fostering collaboration and distribution.

As the company grows and security concerns increase, they then integrate ECR to bolster image security and achieve fine-grained access control for proprietary applications. To orchestrate these containerized applications in production seamlessly, they utilize ECS, ensuring efficient task management and high availability. Also, with the incorporation of Fargate, the company eliminates the need to manage EC2 instances, leaving them to concentrate solely on their innovative solutions.

It’s time to get our hands dirty and see how all of this works!

Let’s get started…

First, let’s set up our network infrastructure in AWS using the code below.

This code creates a VPC with DNS support enabled and an Internet Gateway attached to it. Also, 3 public subnets and 3 private subnets are created in different Availability Zones for high availability. For the private subnets to access the internet, 3 NAT Gateways are created with Elastic IPs in the corresponding public subnets. The ALB that we will create later on will be placed in the public subnets, and the Fargate tasks will be placed in the private subnets. Route tables are set up for public subnets to direct traffic to the internet via the Internet Gateway, and for private subnets to use the NAT Gateways.

Fargate tasks describe which containers should run on Fargate and how, without needing to manage the underlying instances. These tasks represent a set of instructions that defines what needs to be done, such as running a specific application or process within the container.

And last, security groups created in the VPC are set up to permit necessary traffic for the ALB and the Fargate tasks. The ALB security group allows HTTP and HTTPS traffic, and the Fargate task security group only allows traffic from the ALB, ensuring that the containerized applications are not directly exposed to the internet but can still be accessed through the ALB.

Copy and paste the code below into your Notepad and save the file with the extension .yaml.
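A trimmed sketch of such a template is shown below. One Availability Zone is shown; the full version repeats the subnet/NAT/route-table pattern across three AZs, and the CIDR ranges and resource names here are illustrative, not prescriptive.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: VPC with public/private subnets, NAT, and security groups (one AZ shown)

Parameters:
  EnvironmentName:
    Type: String

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags: [{Key: Name, Value: !Ref EnvironmentName}]

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties: {VpcId: !Ref VPC, InternetGatewayId: !Ref InternetGateway}

  # Repeat the subnet/NAT/route pattern below for AZs 2 and 3
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      MapPublicIpOnLaunch: true

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.11.0/24
      AvailabilityZone: !Select [0, !GetAZs '']

  NatEIP1:
    Type: AWS::EC2::EIP
    Properties: {Domain: vpc}

  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties: {AllocationId: !GetAtt NatEIP1.AllocationId, SubnetId: !Ref PublicSubnet1}

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties: {VpcId: !Ref VPC}

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PublicSubnet1RTA:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: {RouteTableId: !Ref PublicRouteTable, SubnetId: !Ref PublicSubnet1}

  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties: {VpcId: !Ref VPC}

  PrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway1

  PrivateSubnet1RTA:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: {RouteTableId: !Ref PrivateRouteTable1, SubnetId: !Ref PrivateSubnet1}

  ALBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP/HTTPS from the internet
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - {IpProtocol: tcp, FromPort: 80, ToPort: 80, CidrIp: 0.0.0.0/0}
        - {IpProtocol: tcp, FromPort: 443, ToPort: 443, CidrIp: 0.0.0.0/0}

  FargateSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow traffic only from the ALB
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - {IpProtocol: tcp, FromPort: 80, ToPort: 80, SourceSecurityGroupId: !Ref ALBSecurityGroup}
```

Note how the Fargate security group's only ingress rule references the ALB security group rather than a CIDR range, which is what keeps the tasks unreachable except through the load balancer.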

Now let’s upload the template to CloudFormation.
Go to AWS CloudFormation and click Create stack.

On Step 1 page select Template is ready and Upload a template file.
Once you’ve selected the file, click Next.

On Step 2 page enter a Stack name and EnvironmentName and click Next.

On Step 3 page keep all default stack options and click Next.

On Step 4 page scroll down and click Submit.

On the next page, we will see the stack creation in progress. Wait a few minutes for completion.

Now go to Cloud9 in the AWS Console and click Create environment.

Provide a Name, Environment type, Instance type, VPC and subnet for your environment and click Create.

I chose the VPC and a public subnet from the CloudFormation stack I created earlier.

Wait a few minutes until your environment has successfully been created and click Open.

Once you’re connected to your Cloud9 environment, click the tool icon in the top right corner to view Preferences.

Click AWS Settings, then click Credentials and disable the AWS managed temporary credentials setting.

AWS managed temporary credentials expire after a certain period. This means that while you are working in the Cloud9 environment, you might suddenly lose access to AWS resources if the session expires.

So now, go to the terminal in your environment and run aws configure to set up the CLI with your specific credentials and default settings. You must provide your Access Key ID, Secret Access Key, Default region name and Default output format.

AWS managed temporary credentials in Cloud9 might provide access to more services and resources than necessary for the specific project, violating the principle of least privilege. With aws configure, you determine the exact IAM user and associated permissions you're using. Once set up, any AWS CLI command you run in the Cloud9 terminal will use these credentials.

Let’s run docker version to check if Docker is installed.

And let’s run systemctl status docker to see if Docker is running, inactive, or has encountered errors.

Now let’s check if boto3 is installed by using the pip package manager. boto3 is the AWS Software Development Kit (SDK) for Python, which allows you to interact with AWS resources directly from your Python scripts.

In the terminal, run pip show boto3 to get information about the boto3 package installed in your Cloud9 environment.

The output below indicates that the boto3 package is not installed in my environment.

So, I ran pip install boto3 to install the boto3 package.

Now that boto3 is installed, let’s start by building an Ubuntu image for the container with a Dockerfile. A Dockerfile is a script that contains a series of commands and instructions used by Docker to automatically build images. When you run a build command, Docker reads the Dockerfile and executes the instructions in the order they are written to create a Docker image.

First, create a New file for the HTML file that will be served using Apache in the container.

Below is the code we will use. This code creates a simple webpage that displays “Chinelo’s Docker Container”.
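A minimal version of that page might look like this (the markup is a sketch; any valid HTML that renders the heading will do):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Chinelo's Docker Container</title>
  </head>
  <body>
    <h1>Chinelo's Docker Container</h1>
  </body>
</html>
```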

Copy and paste the code in the open field and save the file with the name index.html

Now let’s create a New file for the Dockerfile.

The script below creates an Ubuntu image with an Apache HTTP server that serves the index.html file and includes a built-in health check.

Copy and paste the script in the open field and save the file with the name Dockerfile (with no extension).

FROM ubuntu:22.04

# Avoid prompts with apt
ENV DEBIAN_FRONTEND=noninteractive

# Install apache2 and curl
RUN apt-get update && \
    apt-get install -y apache2 curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy the index.html file to the default directory of Apache
COPY index.html /var/www/html/

# Expose the default apache port
EXPOSE 80

# Basic health check using curl
HEALTHCHECK CMD curl --fail http://localhost:80/index.html || exit 1

# Start Apache in the foreground
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Now let’s run docker build -t <IMAGE_NAME> . to build a Docker image using the Dockerfile in the directory you’re currently in. Replace <IMAGE_NAME> with a name of your choice. Also, the dot after <IMAGE_NAME> is not a typo; it points to the current directory, where your Dockerfile is expected to be located.

Then run docker run -d -p 80:80 --name <CONTAINER_NAME> <IMAGE_NAME> to launch a container using the same image we built. The container is running in a detached state and maps port 80 on the host machine to port 80 inside the container.

(Optional) You can run docker ps to see the Docker containers that are currently running.

Now go to the EC2 instance for your Cloud9 environment.

Navigate to the Security tab and click the Security group.

Click Edit inbound rules.

Now add an inbound rule to allow traffic on port 80.

Copy the public IPv4 address of the Cloud9 instance.

And paste the address in your web browser. You’ll see the content of the HTML file we created earlier.

Now let’s push the container to Docker Hub.

If you don’t already have an account on Docker Hub, you’ll need to create one on the Docker Hub website.

After creating your Docker Hub account, go back to your Cloud9 environment and run docker login to authenticate to Docker Hub. You must provide your Username and Password to authenticate.

Now run docker commit <container_name> <new_image_name>:<tag> to create a new Docker image from the changes in the container we created earlier.

Then run docker tag <source_image_name>:<source_tag> <target_image_name>:<target_tag> to assign a new tag to the Docker image. This creates an alias with a new name and tag for the image without changing the original. By using this, an image can be referenced by tags, which can be beneficial for versioning or for preparing images to be pushed to different repositories.

And run docker push <repository_name>/<image_name>:<tag> to upload the Docker image to a remote repository on Docker Hub. Doing so allows the image to be shared with others, deployed across different environments, or stored as a backup.

Now you can see the image stored in your remote repository on Docker Hub with its assigned tag.

The company also wants to integrate Elastic Container Registry (ECR) with Docker Hub to store their web applications in AWS’ private repository.

Below is the Python script we will run in our Cloud9 environment to create the ECR repository.

The repository is set to scan images for vulnerabilities upon push. ECR provides a detailed report of vulnerabilities classified by severity and provides links to detailed descriptions. Adjusting configuration settings to harden the security of the images can help with preventing vulnerabilities.

Also, the repository is configured with immutable image tags, which means that once an image is pushed to the repository with a specific tag, that tag cannot be overwritten or reassigned to a different image. This is beneficial if having a consistent and specific version of an image is important.
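A minimal sketch of such a script is below. The repository name is a placeholder, and the actual AWS call is kept in a function so nothing runs until you invoke it with credentials configured.

```python
REPO_NAME = "my-web-app"  # placeholder - choose your own repository name

def repository_params(name):
    """create_repository parameters: scan images on push and make tags immutable."""
    return {
        "repositoryName": name,
        "imageScanningConfiguration": {"scanOnPush": True},
        "imageTagMutability": "IMMUTABLE",
    }

def create_repository(name):
    import boto3  # imported here so repository_params stays usable without AWS
    ecr = boto3.client("ecr")
    response = ecr.create_repository(**repository_params(name))
    return response["repository"]["repositoryUri"]

# Usage (requires configured AWS credentials):
#   print("Created repository:", create_repository(REPO_NAME))
```

The two settings in `repository_params` map directly to what was described above: `scanOnPush` triggers the vulnerability scan on every push, and `IMMUTABLE` prevents a tag from ever being reassigned to a different image.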

Now let’s create a New File for the Python script. Copy and paste the script for creating the ECR repository in the open field.

Save the file with the name of your choice.

Now run python <filename>.py to execute the script.

Below is an example of the output.

If you go to Elastic Container Registry in the AWS Console, you will see the private repository that was created from executing the script.

Now let’s run aws ecr get-login-password --region <YOUR_REGION> | docker login --username AWS --password-stdin <YOUR_ECR_REPO_URL> to log the Docker CLI in to ECR.

This command fetches a temporary authentication token from ECR and pipes it to docker login, which uses it as the password for your registry. This authentication step is required before you can push or pull Docker images to or from the ECR repository.

Now let’s automate the workflow for pushing the Docker image to ECR with the Python script below.

First, this script creates an S3 bucket for CloudTrail logging, applies a policy to let CloudTrail write logs to it, and starts a trail. The logs serve as a detailed record of API calls and events, which helps with troubleshooting and enhances security analysis. Also, a lifecycle policy for the ECR repository is created, which ensures that Docker images older than 30 days are automatically removed.

And last but not least, the script authenticates Docker to ECR, pulls the image from Docker Hub, tags it, and pushes it to the ECR repository.
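A hedged sketch of that workflow is below. All names (region, bucket, trail, repository, Docker Hub image) are placeholders, the AWS and Docker calls live in `main()` and only run when invoked with credentials configured, and the bucket creation assumes us-east-1 (other regions need a `CreateBucketConfiguration`).

```python
import json
import subprocess

# Placeholders - substitute your own values
REGION = "us-east-1"
BUCKET = "my-cloudtrail-logs-bucket"
TRAIL = "ecr-push-trail"
REPO = "my-web-app"
DOCKERHUB_IMAGE = "mydockerhubuser/my-web-app:latest"

def lifecycle_policy(days=30):
    """ECR lifecycle policy that expires images older than `days` days."""
    return json.dumps({"rules": [{
        "rulePriority": 1,
        "description": f"Expire images older than {days} days",
        "selection": {"tagStatus": "any", "countType": "sinceImagePushed",
                      "countUnit": "days", "countNumber": days},
        "action": {"type": "expire"},
    }]})

def cloudtrail_bucket_policy(bucket, account_id):
    """Standard bucket policy that lets CloudTrail write log objects."""
    return json.dumps({"Version": "2012-10-17", "Statement": [
        {"Effect": "Allow", "Principal": {"Service": "cloudtrail.amazonaws.com"},
         "Action": "s3:GetBucketAcl", "Resource": f"arn:aws:s3:::{bucket}"},
        {"Effect": "Allow", "Principal": {"Service": "cloudtrail.amazonaws.com"},
         "Action": "s3:PutObject",
         "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
         "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},
    ]})

def main():
    import boto3  # imported here so the helpers above run without AWS configured
    account = boto3.client("sts").get_caller_identity()["Account"]

    # S3 bucket + trail for auditing the API activity of this workflow
    s3 = boto3.client("s3", region_name=REGION)
    s3.create_bucket(Bucket=BUCKET)
    s3.put_bucket_policy(Bucket=BUCKET, Policy=cloudtrail_bucket_policy(BUCKET, account))
    trail = boto3.client("cloudtrail", region_name=REGION)
    trail.create_trail(Name=TRAIL, S3BucketName=BUCKET)
    trail.start_logging(Name=TRAIL)

    # Lifecycle policy: auto-remove images older than 30 days
    ecr = boto3.client("ecr", region_name=REGION)
    ecr.put_lifecycle_policy(repositoryName=REPO, lifecyclePolicyText=lifecycle_policy())

    # Authenticate Docker to ECR, then pull from Docker Hub, tag, and push
    registry = f"{account}.dkr.ecr.{REGION}.amazonaws.com"
    password = subprocess.run(
        ["aws", "ecr", "get-login-password", "--region", REGION],
        capture_output=True, text=True, check=True).stdout
    subprocess.run(["docker", "login", "--username", "AWS", "--password-stdin", registry],
                   input=password, text=True, check=True)
    target = f"{registry}/{REPO}:latest"
    for cmd in (["docker", "pull", DOCKERHUB_IMAGE],
                ["docker", "tag", DOCKERHUB_IMAGE, target],
                ["docker", "push", target]):
        subprocess.run(cmd, check=True)

# Usage (requires configured AWS credentials and Docker):
#   main()
```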

Now create a New file for the script. Copy and paste the script in the open field.

Save the file with the name of your choice and run python <filename>.py to execute the script.

Below is an example of the output.

If you go back to your repository in Elastic Container Registry, you may see that your image has vulnerabilities. Click details for more information.

ECR only informs you of the vulnerabilities. It does not fix them for you. It’s the responsibility of the developers or operations teams to address these issues.

Also, if you go to the CloudTrail dashboard, you will see the trail that was created from executing the script. Below the trail, you will see an Event history list of the API call activities made in your account.

And if you go to Trails, you will see the S3 bucket used to store the CloudTrail logs.

After clicking on the S3 bucket name, you can click on the Object in the bucket to view the logs in more detail, down to the specific Account ID of the API calls.

The company wants to utilize ECS to orchestrate their containers in production.

Below is the Python script we will run in our Cloud9 environment to create a VPC endpoint for ECS, allowing communication between Fargate tasks and the ECS service within the VPC without going across the public internet. To support this, the endpoint is associated with the private subnets and a security group that only allows traffic from the ALB (the ALB itself sits in public subnets exposed to the internet).

Also, an IAM role is created that gives ECS Fargate tasks permissions to interact with ECS, ECR, and ELB (ALB). The role has a max session duration of 12 hours, aligning with security best practices.
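A sketch of that script follows. The VPC, subnet, and security group IDs are placeholders to be replaced with the values from your CloudFormation stack, and the AWS calls are wrapped in functions so they only run when invoked with credentials configured. The attached managed policy is one reasonable choice for task execution; add further ECS/ELB policies to match your needs.

```python
import json

# Placeholders from the CloudFormation stack - substitute your own IDs
REGION = "us-east-1"
VPC_ID = "vpc-0123456789abcdef0"
PRIVATE_SUBNET_IDS = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]
FARGATE_SG_ID = "sg-0123456789abcdef0"
ROLE_NAME = "ecsFargateTaskRole"

def ecs_trust_policy():
    """Trust policy letting ECS tasks assume the role."""
    return json.dumps({"Version": "2012-10-17", "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }]})

def create_ecs_endpoint():
    """Interface endpoint so tasks reach the ECS service without the public internet."""
    import boto3  # lazy import keeps the helper above usable without AWS
    ec2 = boto3.client("ec2", region_name=REGION)
    resp = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.ecs",
        SubnetIds=PRIVATE_SUBNET_IDS,
        SecurityGroupIds=[FARGATE_SG_ID],
        PrivateDnsEnabled=True,
    )
    return resp["VpcEndpoint"]["VpcEndpointId"]

def create_task_role():
    """Role for Fargate tasks with a 12-hour max session duration."""
    import boto3
    iam = boto3.client("iam")
    iam.create_role(
        RoleName=ROLE_NAME,
        AssumeRolePolicyDocument=ecs_trust_policy(),
        MaxSessionDuration=12 * 3600,  # 12 hours, the IAM maximum
    )
    # Managed policy covering ECR image pulls and log writes for task execution
    iam.attach_role_policy(
        RoleName=ROLE_NAME,
        PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy",
    )

# Usage (requires configured AWS credentials):
#   print(create_ecs_endpoint())
#   create_task_role()
```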

Now create a New file for the script. Copy and paste the script in the open field.

Save the file with the name of your choice and run python <filename>.py to execute the script.

Below is an example of the output.

If you go to VPC in the AWS Console and then go to Endpoints, you will see the Endpoint that was created from executing the script.

And if you go to IAM and navigate to the Role that was created from the script, you can see the Permissions and Trust relationships that allow this Role to interact with ECS, ECR, and ELB (ALB).

The company wants to incorporate Fargate with their ECS service. This eliminates the need to manage EC2 instances, leaving the company to focus on building their applications.

Below is the Python script we will run in our Cloud9 environment to create an ECS cluster, register a task definition for the Docker container, and create an ALB with its target group and listener. The script also creates an ECS service to run tasks based on the task definition. To ensure security, the ALB is placed in public subnets because it’s the entry point for external traffic. The ALB will then forward the traffic to the appropriate Fargate tasks in the private subnets.

The Fargate tasks, running the application (webpage) in the container, are launched in private subnets because, even though Fargate tasks can be given public IPs, you often don’t want them directly accessible from the internet. They should be shielded by a Load Balancer to provide better security and scalability. This way, only the Load Balancer is exposed to the internet, while your actual services remain in the private subnets.
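The deployment described above can be sketched as follows. Every ID, ARN, and name here is a placeholder, the CPU/memory sizes are illustrative, and `deploy()` only runs when you call it with credentials configured. The target group uses `TargetType="ip"` because awsvpc-mode Fargate tasks register by IP, and its health check path is the index.html page the container serves.

```python
# Placeholders - substitute the IDs/ARNs from your own account and stack
REGION = "us-east-1"
VPC_ID = "vpc-0123456789abcdef0"
PUBLIC_SUBNET_IDS = ["subnet-pub1", "subnet-pub2", "subnet-pub3"]
PRIVATE_SUBNET_IDS = ["subnet-priv1", "subnet-priv2", "subnet-priv3"]
ALB_SG_ID = "sg-alb0000000000000"
FARGATE_SG_ID = "sg-fargate000000000"
EXECUTION_ROLE_ARN = "arn:aws:iam::111122223333:role/ecsFargateTaskRole"
IMAGE_URI = "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest"

def container_definition(name, image, port=80):
    """Container definition used in the Fargate task definition."""
    return {
        "name": name,
        "image": image,
        "portMappings": [{"containerPort": port, "protocol": "tcp"}],
        "essential": True,
    }

def deploy():
    import boto3  # lazy import keeps container_definition usable without AWS
    ecs = boto3.client("ecs", region_name=REGION)
    elbv2 = boto3.client("elbv2", region_name=REGION)

    ecs.create_cluster(clusterName="web-cluster")
    ecs.register_task_definition(
        family="web-task",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn=EXECUTION_ROLE_ARN,
        containerDefinitions=[container_definition("web", IMAGE_URI)],
    )

    # ALB in the public subnets: the only component exposed to the internet
    alb_arn = elbv2.create_load_balancer(
        Name="web-alb", Subnets=PUBLIC_SUBNET_IDS, SecurityGroups=[ALB_SG_ID],
        Scheme="internet-facing", Type="application",
    )["LoadBalancers"][0]["LoadBalancerArn"]
    tg_arn = elbv2.create_target_group(
        Name="web-tg", Protocol="HTTP", Port=80, VpcId=VPC_ID,
        TargetType="ip", HealthCheckPath="/index.html",
    )["TargetGroups"][0]["TargetGroupArn"]
    elbv2.create_listener(
        LoadBalancerArn=alb_arn, Protocol="HTTP", Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )

    # Replica service: keep 3 tasks running in the private subnets
    ecs.create_service(
        cluster="web-cluster", serviceName="web-service",
        taskDefinition="web-task", desiredCount=3, launchType="FARGATE",
        loadBalancers=[{"targetGroupArn": tg_arn,
                        "containerName": "web", "containerPort": 80}],
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": PRIVATE_SUBNET_IDS,
            "securityGroups": [FARGATE_SG_ID],
            "assignPublicIp": "DISABLED",
        }},
    )

# Usage (requires configured AWS credentials):
#   deploy()
```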

Now create a New file for the script. Copy and paste the script in the open field.

Save the file with the name of your choice and run python <filename>.py to execute the script.

Below is an example of the output.

Now go to Elastic Container Service in the AWS Console. Here you will see the ECS cluster that we created, with 1 service running 3 Fargate tasks.

Click on the name of the cluster.

Here, for the ECS service created, you will see the deployment of the service’s 3 Fargate tasks is Completed, and the Service type is Replica.

With the replica service type in Fargate, you determine how many copies (or replicas) of a task you want to run, and Fargate ensures that many tasks are running, in this case 3. If a task stops or fails, Fargate will start another one to maintain the desired number of tasks.

While the replica service type can be used with both EC2 (instances or virtual machines to manage) and Fargate (no instances or virtual machines to manage) launch types in ECS, it aligns well with Fargate, where the emphasis is on tasks rather than the infrastructure (instances) they run on.

Click on the name of the service.

On this page, under Targets, we can see that the target (a Fargate task serving our webpage) is healthy. This means the Apache web server running in the task is up and that the index.html file can be accessed and served without errors.

Under the Health and metrics tab, click View load balancer.

Copy the DNS name of the ALB

And paste it in your web browser. Here you’ll see the webpage that’s served from the container.

Also, if you’re using Route 53 for DNS resolution, you can create an alias record that points your domain to the load balancer. DNS resolution is the process of translating domain names into IP addresses, which machines can use to route traffic to the correct server. Users would access your domain, which is pointed to the DNS name of your ALB. The ALB receives the request and forwards it to one of the Fargate tasks running your container. And the Fargate task serves the web page back to the user, through the ALB to the user’s browser.
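If you go this route, the alias record can be created with a small boto3 sketch like the one below. The hosted zone ID, domain, and ALB values are all placeholders; note that the `AliasTarget` needs the ALB's own canonical hosted zone ID (shown on the load balancer's details page), not your domain's zone ID.

```python
# Placeholders - substitute your hosted zone ID, domain, and ALB values
HOSTED_ZONE_ID = "Z0HYPOTHETICALZONE"
DOMAIN = "www.example.com"
ALB_DNS_NAME = "web-alb-1234567890.us-east-1.elb.amazonaws.com"
ALB_CANONICAL_ZONE_ID = "ZALBEXAMPLE"  # the ALB's canonical hosted zone ID

def alias_record_change(domain, alb_dns, alb_zone_id):
    """ChangeBatch that upserts an A alias record pointing the domain at the ALB."""
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,  # the ALB's zone, NOT your domain's
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }]}

def create_alias():
    import boto3  # lazy import keeps alias_record_change usable without AWS
    route53 = boto53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch=alias_record_change(DOMAIN, ALB_DNS_NAME, ALB_CANONICAL_ZONE_ID),
    )

# Usage (requires configured AWS credentials):
#   create_alias()
```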

And that’s it. We’re done.

So, to sum everything up: we built an Ubuntu-based Docker image with an Apache HTTP server that serves a simple webpage. We ran the container in a detached state, mapped port 80 on the host machine to port 80 inside the container, and pushed the image to Docker Hub. We also created an Elastic Container Registry repository in AWS, pulled the image from Docker Hub, and pushed it to the ECR repo. Then we created an Elastic Container Service cluster to run Fargate tasks that use the image from the ECR repository. To ensure security, the Fargate tasks run in private subnets, meaning they can’t be directly accessed from the public internet, and we set up an Application Load Balancer (ALB) in public subnets to distribute incoming traffic to the Fargate tasks in the private subnets. And last, we accessed the ALB’s DNS name to successfully reach the webpage served by Apache from the Docker container.

Thank you for reading!



Chinelo Osuji

DevOps | Cloud | Data Engineer | AWS | Broward College Student