I’ve had a pretty busy couple of weeks recently and haven’t had much time to do some of the things I’d wanted to continue with. However, this weekend I had some time to think a little bit more about the investment service that I’ve been writing.

Previously, I’d successfully deployed my .NET Core 2 Web API into Amazon ECS using Travis-CI. It automatically builds the Docker container, and then creates the AWS service, a Task Definition and finally the ECS cluster. What I still need to do is create a load balancer to manage this.

What I wanted to do is rationalise some of the ideas/concepts that I'd previously taken for granted in the pursuit of just-getting-it-done. It also gave me a bit of time to understand more about ECS’s infrastructure and how it works, as I'll be doing more work in this area in the coming weeks.

My initial thought was to describe what a task definition actually is, as this concept is a bit murky, and to do that I need to show you how it fits into the broader ecosystem.

Here is a diagram I drew to explain the fundamental concepts of ECS:

Basically, when managing an application that is deployed as a Docker container, you need to understand how your container is organised by ECS.

Fundamentally, managing a Docker image starts with creating a ‘Service’ that is responsible for managing the infrastructure for your Docker image, so each Service is associated with your Docker image. To make this association, you define a Task Definition, which specifies which Docker image to use and which repository it's hosted in, so ECS knows where to fetch it from. Then, and this is the important part, your task definition can be realised or instantiated (as a Task) onto the running EC2 container host instances, thereby effectively running your image on EC2 container hosts (the EC2 instances in the diagram).

The part I’ve left off is that you also define an ECS cluster, which just determines how many EC2 container hosts are available for tasks to run on. These hosts run ECS-optimised AMIs provided by Amazon.

The Service will automatically run the tasks on available hosts in the cluster. Normally, when you initially create the service, you specify how many tasks must always be running at the same time, and the service will ensure that that many tasks are created across all the EC2 instances associated with that service (through the association the service has with a cluster).
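
To make those relationships a bit more concrete, here is a rough sketch of the same concepts expressed with the AWS CLI. The names (my-cluster, my-task, my-service) and the image URI are just placeholders, and the container definition is cut right down - it's an illustration of the moving parts, not my actual setup:

# Create a cluster - the pool of EC2 container hosts that tasks can run on
aws ecs create-cluster --cluster-name my-cluster

# Register a task definition - which image to run and where it lives
aws ecs register-task-definition \
    --family my-task \
    --container-definitions '[{"name":"web","image":"123456789012.dkr.ecr.eu-west-2.amazonaws.com/myimage:latest","memory":256,"portMappings":[{"containerPort":5000,"hostPort":80}]}]'

# Create a service that keeps two copies of that task running on the cluster
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task \
    --desired-count 2

# The running tasks (the instantiated task definition) can then be listed
aws ecs list-tasks --cluster my-cluster --service-name my-service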

What I still need to do is deploy my Angular front end app in the same way. This is what I’m going to be doing in the days that follow.

To give you a bit of an idea of how this is programmatically achieved in Travis-CI, this is what sets up the AWS infrastructure prior to deploying to ECS:

Firstly, this is my Dockerfile:

#Image(build) that is used to compile/publish ASP.NET Core applications inside the container. 
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Copy *.csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image by adding the compiled output above to a runtime image(aspnetcore)

FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .

# Expose port 5000 on container to the world outside (container host)
EXPOSE 5000/tcp

# Ask Kestrel to listen on port 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "CoreInvestmentTracker.dll"]
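
Before handing this over to Travis-CI, the image can be sanity-checked locally. Something along these lines should work - the local tag is just the image name, and the endpoint at the end is whatever your API actually exposes:

# Build the image from the directory containing the Dockerfile
docker build -t coreinvestmenttracker .

# Run it, publishing the container's port 5000 onto the host
docker run --rm -p 5000:5000 coreinvestmenttracker

# In another shell, hit whatever endpoint the API exposes, e.g.
curl http://localhost:5000/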

And this is how it gets deployed by Travis-CI:

First, set up some environment variables:

#!/bin/bash

# set environment variables used in deploy.sh and AWS task-definition.json:
export IMAGE_NAME=coreinvestmenttracker
export IMAGE_VERSION=latest

export AWS_DEFAULT_REGION=eu-west-2
export AWS_ECS_CLUSTER_NAME=default


# set any sensitive information in travis-ci encrypted project settings:
# required: AWS_ACCOUNT_NUMBER, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
# optional: SERVICESTACK_LICENSE
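
The sensitive values themselves never appear in the repository; they live in Travis-CI's encrypted project settings. If you'd rather keep them in .travis.yml than the settings UI, the Travis CLI can encrypt them for you - something along these lines, with placeholder values:

# Encrypt secrets into .travis.yml as secure env vars (run from the repo root)
travis encrypt AWS_ACCOUNT_NUMBER=123456789012 --add env.global
travis encrypt AWS_ACCESS_KEY_ID=AKIA_PLACEHOLDER --add env.global
travis encrypt AWS_SECRET_ACCESS_KEY=PLACEHOLDER --add env.global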

Next, build and tag the Docker image:

#!/bin/bash
source ../deploy-envs.sh

#AWS_ACCOUNT_NUMBER={} set in private variable
export AWS_ECS_REPO_DOMAIN=$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com

# Build process
docker build -t $IMAGE_NAME ../
docker tag $IMAGE_NAME $AWS_ECS_REPO_DOMAIN/$IMAGE_NAME:$IMAGE_VERSION

And finally, set up the AWS ECS infrastructure:

#!/bin/bash
source ../deploy-envs.sh

export AWS_ECS_REPO_DOMAIN=$AWS_ACCOUNT_NUMBER.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
export ECS_SERVICE=$IMAGE_NAME-service
export ECS_TASK=$IMAGE_NAME-task

# install dependencies
sudo apt-get install jq -y #install jq for json parsing
sudo apt-get install gettext -y # provides envsubst for templating the task definition
pip install --user awscli # install aws cli w/o sudo
export PATH=$PATH:$HOME/.local/bin # put aws in the path

# replace environment variables in task-definition
envsubst < task-definition.json > new-task-definition.json

eval $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email | sed 's|https://||') #needs AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY envvars

## Check to see if the repository already exists, otherwise create it
if [ $(aws ecr describe-repositories | jq --arg x $IMAGE_NAME '[.repositories[] | .repositoryName == $x] | any') == "true" ]; then
    echo "Found ECS Repository $IMAGE_NAME"
else
    echo "ECS Repository doesn't exist, Creating $IMAGE_NAME ..."
    aws ecr create-repository --repository-name $IMAGE_NAME
fi

# Push the image to the repository
docker push $AWS_ECS_REPO_DOMAIN/$IMAGE_NAME:$IMAGE_VERSION

# Create a new task revision
aws ecs register-task-definition --cli-input-json file://new-task-definition.json --region $AWS_DEFAULT_REGION > /dev/null
# Get the latest revision
TASK_REVISION=$(aws ecs describe-task-definition --task-definition $ECS_TASK --region $AWS_DEFAULT_REGION | jq '.taskDefinition.revision')
SERVICE_ARN="arn:aws:ecs:$AWS_DEFAULT_REGION:$AWS_ACCOUNT_NUMBER:service/$ECS_SERVICE"
ECS_SERVICE_EXISTS=$(aws ecs list-services --region $AWS_DEFAULT_REGION --cluster $AWS_ECS_CLUSTER_NAME | jq '.serviceArns' | jq 'contains(["'"$SERVICE_ARN"'"])')
if [ "$ECS_SERVICE_EXISTS" == "true" ]; then
    echo "ECS Service already exists, Updating $ECS_SERVICE ..."
    aws ecs update-service --cluster $AWS_ECS_CLUSTER_NAME --service $ECS_SERVICE --task-definition "$ECS_TASK:$TASK_REVISION" --desired-count 1 --region $AWS_DEFAULT_REGION > /dev/null #update service with latest task revision
else
    echo "Creating ECS Service $ECS_SERVICE ..."
    aws ecs create-service --cluster $AWS_ECS_CLUSTER_NAME --service-name $ECS_SERVICE --task-definition "$ECS_TASK:$TASK_REVISION" --desired-count 1 --region $AWS_DEFAULT_REGION > /dev/null #create service
fi
if [ "$(aws ecs list-tasks --service-name $ECS_SERVICE --region $AWS_DEFAULT_REGION | jq '.taskArns' | jq 'length')" -gt "0" ]; then
    TEMP_ARN=$(aws ecs list-tasks --service-name $ECS_SERVICE --region $AWS_DEFAULT_REGION | jq '.taskArns[0]') # Get current running task ARN
    TASK_ARN="${TEMP_ARN%\"}" # strip double quotes
    TASK_ARN="${TASK_ARN#\"}" # strip double quotes
    aws ecs stop-task --task $TASK_ARN --region $AWS_DEFAULT_REGION > /dev/null # Stop current task to force start of new task revision with new image
fi
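
After the deploy script runs, it's handy to check that the new task actually came up. A couple of commands along these lines (not part of the script above, just a manual sanity check, using the cluster and service names from the scripts) will tell you whether the service has settled on the new revision:

# Block until the service reaches a steady state with the new task definition
aws ecs wait services-stable --cluster default --services coreinvestmenttracker-service --region eu-west-2

# Show which revision the service is on and the running/desired counts
aws ecs describe-services --cluster default --services coreinvestmenttracker-service --region eu-west-2 \
    | jq '.services[0] | {taskDefinition, desiredCount, runningCount}'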

One very important thing about a Task Definition, other than defining which Docker image to use, is that you can define the environment variables that the Docker image will see when it's running (as a Task!). This is very important for me because I define the RDS connection string information in here, which includes passwords etc. Although I define the task definition in the source code, it does not have passwords in it; I update it by creating a new revision and then applying that new task definition to the service. The service then runs its tasks using the new revision.
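
I won't paste my real task-definition.json here, but to give a rough idea of its shape, the template that envsubst fills in looks something like the cut-down version below. The field values, the hostPort, the connection string key and the RDS_CONNECTION_STRING variable are made up for illustration (the heredoc is only here to show the content; in the repo it's just a file):

# Hypothetical cut-down task-definition.json template. The quoted heredoc keeps the
# ${...} placeholders literal so envsubst can fill them in at deploy time.
cat > task-definition.json <<'EOF'
{
  "family": "${IMAGE_NAME}-task",
  "containerDefinitions": [
    {
      "name": "${IMAGE_NAME}",
      "image": "${AWS_ECS_REPO_DOMAIN}/${IMAGE_NAME}:${IMAGE_VERSION}",
      "memory": 256,
      "portMappings": [ { "containerPort": 5000, "hostPort": 80 } ],
      "environment": [
        { "name": "ConnectionStrings__DefaultConnection", "value": "${RDS_CONNECTION_STRING}" }
      ]
    }
  ]
}
EOF

# The deploy script then substitutes the exported variables into it:
envsubst < task-definition.json > new-task-definition.json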

Hopefully I'll be able to get the system set up in such a way that I can set up and tear down the whole thing quickly, to avoid long-running costs while developing. I've read a little about AWS Data Pipeline as a way to achieve this, so I'll look into that later. In the meantime, it's slowly coming together.
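
As a stop-gap before I work out a proper tear-down, simply scaling everything to zero should stop most of the running costs. Roughly, and assuming the container hosts live in an Auto Scaling group (the ASG name below is hypothetical; the cluster and service names are the ones from the scripts above):

# Stop the service from running any tasks
aws ecs update-service --cluster default --service coreinvestmenttracker-service --desired-count 0 --region eu-west-2

# Scale the EC2 container hosts down to nothing (ASG name is a placeholder)
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ecs-default-asg --desired-capacity 0 --min-size 0 --region eu-west-2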