KW Digital

My world of technology

Creating metrics endpoint and adding Prometheus container

Now that we have the development environment ready and containerised, it is time to add some additional services to it. In this episode let’s create a metrics endpoint providing information on API statistics, as well as a Prometheus container to monitor it.

For integration between Flask and Prometheus I am using the Prometheus Flask exporter, which makes creating a metrics endpoint easy. To use it, first install it on your local machine by running:

pip3 install prometheus-flask-exporter

Once it’s installed, also make sure to add the import to the application:

from prometheus_flask_exporter import PrometheusMetrics

When using PrometheusMetrics we get these metrics out of the box:

  • HTTP request duration
  • Total number of HTTP requests
  • Total number of uncaught exceptions when serving Flask requests

A simple implementation looks like this:

from flask import Flask, request

app = Flask(__name__)
metrics = PrometheusMetrics(app)

# static information as metric
metrics.info('app_info', 'Application info', version='1.0.3')

@app.route('/')
def main():
    pass  # requests tracked by default

@app.route('/skip')
@metrics.do_not_track()
def skip():
    pass  # default metrics are not collected

@app.route('/<item_type>')
@metrics.do_not_track()
@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': lambda: request.view_args['item_type']})
def by_type(item_type):
    pass  # only the counter is collected, not the default metrics

@app.route('/long-running')
@metrics.gauge('in_progress', 'Long running requests in progress')
def long_running():
    pass

@app.route('/status/<int:status>')
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path})
def echo_status(status):
    return 'Status: %s' % status, status

Let’s give it a try and call the endpoint:
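For example, assuming the app is running locally on Flask’s default port 5000, the endpoint can be queried with curl:

curl http://127.0.0.1:5000/metrics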

As we can see it produces a lot of useful information. Besides the metrics I’ve already mentioned, it can even give us some insight into the performance of the application and our infrastructure (process_cpu_seconds_total, process_start_time_seconds, process_resident_memory_bytes) or split all HTTP requests into percentiles, giving us info on the slowest and median times needed to serve requests.

Now that it’s in place, let’s set up Prometheus so we don’t have to call the metrics endpoint manually every time we need information; instead we will use Prometheus to query the exact information we need.

First things first, let’s add another service to the compose.yaml file:
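A minimal sketch of such a service definition, following the conventions of the compose file from the previous entry, might look like this:

  # added under the existing services: key in compose.yaml
  prometheus:
    container_name: prometheus
    build: ./mon/
    ports:
      - "9091:9090"
    networks:
      - docker_network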

We’ve named the container prometheus, specified the location where the build file can be found (a new directory called „mon”) and introduced port forwarding from 9090 in the container (default for Prometheus) to 9091 on the local machine. Of course let’s make sure to use the same Docker network where all the other containers reside, so we can reach them.

Before we prepare the Dockerfile we need to prepare the configuration that we are going to feed to Prometheus. You can find full documentation on this topic under this link. I am using a scrape config, which pulls metrics from targets over HTTP. You can see the config I am using below:
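A minimal sketch of such a scrape configuration (the job name here is just an example):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'rpgapp'
    static_configs:
      - targets: ['api:5000']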

I have specified a default scrape interval of 15 seconds, the job name and the target. I can use the name of the service from Docker Compose (api) and the internal Flask port 5000, because both containers reside on the same network. I didn’t need to specify the endpoint path, because Prometheus by default expects metrics to be published under /metrics.

The final thing left to do is the build file, which is quite simple: we pull the default Prometheus image and add our configuration file:
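A sketch of such a build file, assuming the configuration above is saved as prometheus.yml next to the Dockerfile in the „mon” directory:

FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/prometheus.yml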

Let’s give it a try and rebuild the environment:

docker compose up

Now that the Prometheus container is running we should be able to access its web console at localhost:9091. Once there, move to the Status – Target health section:

We can see that the endpoint http://api:5000/metrics is being actively monitored (status UP). Let’s give it a try and query some specific information about the metrics in the „Query” tab:

flask_http_request_created

This will give us a list of all HTTP requests sent to our application, just as presented on the screenshot below:

To get info on all available metrics use:

{__name__!=""}

That’s it for today’s tutorial! Thanks for joining me, in my next entry I will cover deploying said Prometheus container in Azure Container Apps.

Creating development environment using Docker Compose

Hello and welcome to the next entry in this series. Today I will cover creating a development environment on the local machine using Docker Compose. This way we can set up both the API application and the database server quickly and effortlessly.

For the purpose of demonstration we can reuse the Dockerfiles created in previous entries in the series (for the Postgres server and the API app). We were already able to set up and run these containers separately, but with Docker Compose we can create both of them at the same time with just one command.

The important part is networking: we need to make sure that the containers can „talk” to each other. For that let’s create a separate network called „docker_network” using the bridge driver. We also need to expose the ports, though one thing changes compared to the previous setup: we don’t need port forwarding for the database server, since the API container can talk to the database directly on port 5432 (default for Postgres). We still need to forward requests to the API app on a different port than the one exposed by the Flask container (5000).

I’ve also added a safety mechanism in the form of a healthcheck for the database and a „depends_on” clause for the api service, which causes the API to start only when the database container is up and ready.

Putting this all together, the compose file looks like this:

services:
  db:
    container_name: postgres
    build: ./db/
    ports:
      - "5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgrespassword
      - POSTGRES_DB=postgres
    networks:
      - docker_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", $POSTGRES_USER, "-d", $POSTGRES_DB]
      interval: 1s
      timeout: 5s
      retries: 10
  api:
    container_name: rpgapp
    build: .
    ports:
      - "5005:5000"
    networks:
      - docker_network
    depends_on:
      db:
        condition: service_healthy
networks:
  docker_network:
    driver: bridge

Let’s give it a spin and start the environment using the command:

docker compose up

Afterwards we can see in the Docker Desktop app that both containers are running as part of a single Compose stack:

To confirm that our app works let’s try to create a new character via API call:
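For example, assuming the /character/create endpoint from the earlier database entry and some example values, the call could look like this:

curl -i -H "Content-Type: application/json" -X POST \
     -d '{"name": "Gimli", "level": "10", "id": "5"}' \
     http://127.0.0.1:5005/character/create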

We have successfully created an environment consisting of a database and an application server with just one command!

That’s it for today; in my next entry I will cover creating a Prometheus container for monitoring our application.

Creating Azure VM using Terraform

Hello and welcome to another entry on my blog! In this episode we will create a virtual machine which is going to serve as a management server for the environment.

In order for this server to communicate with both our database server and the API app, it needs to be present on the same network. In the previous entry we already created a virtual network; the only thing we need is a dedicated subnet.

For the purpose of demonstration let’s create it manually instead of via Terraform. In the resource search look for the previously created virtual network, go to the Subnets tab, hit „+Subnet” and specify the name. All remaining fields can be left as default.

Before creating the VM there are a few things that need to be specified within the Terraform file. It is required to create a network interface and an IP configuration (with information on the subnet where the VM needs to reside), and preferably also a network security group to allow RDP connections to this VM.

Other than that you need to assign a name to the VM and choose a storage plan, a source image (I’ve chosen Windows Server 2016) as well as the admin username and password used to connect. The entire configuration put together looks like this:

resource "azurerm_network_security_group" "rpgappvm_nsc" {
  name                = "rpgappvm_nsc"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  security_rule {
    name                       = "AllowAnyRDPInbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

  resource "azurerm_network_interface" "nic" {
  name                = "rpgappvm-nic"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = "/subscriptions/1d338fad-a9e9-4314-853c-5793eddb8b1b/resourceGroups/project2025/providers/Microsoft.Network/virtualNetworks/rpgapp-network/subnets/rpgapp-vm-subnet"
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "rpgapp-vm" {
  name                = "rpgapp-vm"
  resource_group_name = azurerm_resource_group.project2025.name
  location            = "polandcentral"
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = var.vm_password
  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}
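One thing worth noting: the security group above isn’t attached to anything by itself. A minimal sketch of how it could be associated with the network interface (the resource label below is my own):

resource "azurerm_network_interface_security_group_association" "rpgappvm_nsg_assoc" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = azurerm_network_security_group.rpgappvm_nsc.id
}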

After applying the Terraform file, note the IP address of this resource and connect using any RDP client. Once in, I installed pgAdmin to manage the PostgreSQL Flexible Server.

That’s it for this entry! In the next blog post I will cover creating a development environment in a fast and convenient manner using Docker Compose.

Creating PostgreSQL Flexible Server with Terraform

Hello and welcome back to my blog. In this episode we will create a PostgreSQL Flexible Server in Azure using Terraform. For a development environment, spinning up a new instance of PostgreSQL in Docker is great – it’s fast and easy – but for a production environment we would like something more stable. That’s why I’ve chosen to use a PostgreSQL server in the cloud.

For creating and managing this server we will be using Terraform, to adhere to the Infrastructure as Code approach. This allows us to store the information about our infrastructure in a version control system and apply changes based on it.

First things first, in order to manage Azure resources with Terraform you need to have both the Azure CLI and Terraform installed locally on your machine.

Once both are in place, log in to Azure using the following command:

az login

To provide Terraform with the necessary rights to manage the Azure subscription you need to create a service principal. You can do so by running the following in your terminal (remember to replace <SUBSCRIPTION_ID> with the ID of your Azure subscription):

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<SUBSCRIPTION_ID>"

As a result of this command you will receive output similar to the one below:

In the next step use the values from the output to set up the corresponding environment variables according to this schema:

export ARM_CLIENT_ID="<APPID_VALUE>"
export ARM_CLIENT_SECRET="<PASSWORD_VALUE>"
export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
export ARM_TENANT_ID="<TENANT_VALUE>"

Note that I am using syntax valid for macOS/Linux; if you are using Windows it is going to differ slightly.

Now we are ready and can test the Terraform installation with the following command:

terraform init

After running it you will receive confirmation of successful initialisation:

Before we create the Terraform file to provision the PostgreSQL server there is one last thing to do. If you have an existing resource group (like the one used in previous entries on this blog) you need to import it so it can be managed by Terraform. You can do so by running this command in your terminal (as usual, replace the subscription ID and resource group name):

terraform import azurerm_resource_group.<RESOURCE GROUP NAME> /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>

Let’s get to creating the server! When working with Terraform it is best to refer to the official documentation. In this case we are working with the resource called „azurerm_postgresql_flexible_server” and you can find detailed documentation on it under this link.

First we specify the provider to be „azurerm” and the resource group we will be working with. The server needs to reside in a virtual network, so in the next step we define its name, location, the resource group it belongs to and the address space. We also need to define a subnet, which provides a range of IP addresses that can be assigned to resources in the network.

Afterwards we also create a private DNS zone, which allows us to use DNS aliases instead of IP addresses when referring to our resources. One last thing required is a private DNS zone virtual network link; thanks to it we will be able to connect applications running within Azure to our newly created database server.

The last step is of course to create the PostgreSQL server itself. Besides pointing to the subnet and private DNS zone created in the previous steps we also need to specify the configuration of the server. I’ve gone with 32 GB of storage and the B_Standard_B1ms SKU (you can check the specifics either in the Terraform or Azure documentation).

One note regarding security – since creating the server requires specifying a username and password for the PostgreSQL instance, I’ve secured them using Terraform variables.
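As a side note, values for such variables can be supplied at apply time, for example through environment variables with the TF_VAR_ prefix (the values below are placeholders):

export TF_VAR_db_username="<DB_ADMIN_USERNAME>"
export TF_VAR_password="<STRONG_PASSWORD>"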

Putting it all together, the Terraform file looks like this:

variable "db_username" {
  type = string
}

variable "password" {
  type = string
}


provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "project2025" {
  name     = "project2025"
  location = "Australia East"
}

resource "azurerm_virtual_network" "rpgappnetwork" {
  name                = "rpgapp-network"
  location            = "polandcentral"
  resource_group_name = "project2025"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "rpgapp-subnet" {
  name                 = "rpaapp-subnet"
  resource_group_name  = "project2025"
  virtual_network_name = "rpgapp-network"
  address_prefixes     = ["10.0.2.0/24"]
  service_endpoints    = ["Microsoft.Storage"]
  delegation {
    name = "fs"
    service_delegation {
      name = "Microsoft.DBforPostgreSQL/flexibleServers"
      actions = [
        "Microsoft.Network/virtualNetworks/subnets/join/action",
      ]
    }
  }
}

resource "azurerm_private_dns_zone" "dns" {
  name                = "rpgapp.postgres.database.azure.com"
  resource_group_name = "project2025"
}

resource "azurerm_private_dns_zone_virtual_network_link" "rpgapp-private-dns-zone" {
  name                  = "rpgapp-dns-zone"
  private_dns_zone_name = "rpgapp.postgres.database.azure.com"
  virtual_network_id    = azurerm_virtual_network.rpgappnetwork.id
  resource_group_name   = "project2025"
  depends_on            = [azurerm_subnet.rpgapp-subnet]
}

resource "azurerm_postgresql_flexible_server" "postgres" {
  name                          = "rpgapp-postgres"
  resource_group_name           = "project2025"
  location                      = "polandcentral"
  version                       = "13"
  delegated_subnet_id           = azurerm_subnet.rpgapp-subnet.id
  private_dns_zone_id           = azurerm_private_dns_zone.dns.id
  administrator_login           = var.db_username
  administrator_password        = var.password
  zone                          = "1"

  storage_mb = 32768

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.rpgapp-private-dns-zone]
}

Now that it is ready we can apply the configuration using the command:

terraform apply

The process itself might take some time to create all the required resources. You should receive output similar to the one below:

That’s it for today; in the next entry I will cover adjusting the Azure Container App to interact with the newly created PostgreSQL Flexible Server database.

Making application interact with a database

Hello and welcome to the next entry in the series. Now that we have a properly configured database we can rewrite our application to interact with it. For this we will be using SQLAlchemy – a powerful library for working with databases. You can find its documentation under this link.

The list of libraries needed for this project has grown quite a bit due to SQLAlchemy, as well as the driver dedicated to PostgreSQL. Thankfully Python allows us to install the requirements with a single command. By the way, you can find the requirements.txt file in my repository.

To install the needed libraries, download the requirements.txt file and run the following command in the directory where it is present:

python -m pip install -r requirements.txt

Now that that’s sorted out we can get to work. We have a working database, but how do we tell our application to connect to it? SQLAlchemy uses an engine object, which describes how to „talk” to a specific database. It requires info on the type of database used, the username, password, host, port and database name. I’ve used a .env file to store all of this information separately and only import the values in the codebase. Creation of the engine looks like this:
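A minimal sketch of what this can look like, assuming python-dotenv is used and environment variable names such as DB_USER, DB_PASSWORD, DB_HOST, DB_PORT and DB_NAME (the exact names are an assumption):

import os

from dotenv import load_dotenv
from sqlalchemy import create_engine

load_dotenv()  # read the connection details from the .env file

db_url = (
    f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}"
    f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}/{os.getenv('DB_NAME')}"
)
engine = create_engine(db_url)

print(engine)  # the engine repr masks the password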

I added a print at the end for debugging purposes (the password is hidden in the output). Let’s run this code and see if we connected successfully:

The engine information is there – we are using postgresql, connecting as rpgapp_user to host 127.0.0.1 on port 5455, to a database called „rpgapp”.

Before we get to coding the functions we also need to specify the structure of the data. In SQLAlchemy these are called models; a model defines both the Python object and the database table we are going to interact with.

For our purpose I’ve created a simple model called CharacterModel. We are going to store character data in a table called „characters”, which has three columns: characterid (which is also the primary key), name (string) and level (integer). Putting this all together, the model definition looks like this:
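A sketch of the model under these assumptions (the attribute names mirror the column names described above):

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CharacterModel(Base):
    __tablename__ = "characters"

    characterid = Column(Integer, primary_key=True)  # primary key of the table
    name = Column(String)                            # character name
    level = Column(Integer)                          # character level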

Finally we can get to the main dish and start coding the functions for our API to retrieve and store data in the database. Let’s start with the create function so we can populate the database.

I’ve created a separate endpoint called „/character/create” which only allows POST requests. It takes three arguments corresponding to the fields in the characters table – name, level and id. A CharacterModel object is then created based on the input.

We use the existing engine and establish a session. A session object in SQLAlchemy is a way to handle transactions with the database. We add the created character object to the session and commit it to the database. If everything goes right we then return a response with the character data and status code 200. In case of any issues we catch the exception and return 500. Below is the code snippet:
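A sketch of such an endpoint, assuming the engine and CharacterModel from above and a JSON request body with id, name and level fields:

from flask import jsonify, request
from sqlalchemy.orm import Session

@app.route('/character/create', methods=['POST'])
def create_character():
    data = request.get_json()
    character = CharacterModel(
        characterid=data['id'],
        name=data['name'],
        level=data['level'],
    )
    try:
        with Session(engine) as session:
            session.add(character)
            session.commit()
        # respond with the data we stored and a 200 status code
        return jsonify({'name': data['name'], 'id': data['id']}), 200
    except Exception:
        return jsonify({'error': 'could not create character'}), 500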

Let’s send a request and test the newly created endpoint:

Indeed, it works: in return we receive the name and id of the character as well as a 200 return code. Now let’s code a GET /character endpoint which will return the list of all our characters:
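A sketch of this endpoint under the same assumptions:

import json

from sqlalchemy.orm import Session

@app.route('/character', methods=['GET'])
def get_characters():
    with Session(engine) as session:
        characters = session.query(CharacterModel).all()
        # build a plain list of dictionaries and serialise it with json.dumps
        payload = json.dumps([
            {'id': c.characterid, 'name': c.name, 'level': c.level}
            for c in characters
        ])
    return payload, 200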

Once again we are using a session object, but this time with the .query method, looking for all objects that match CharacterModel. Before we return data to the user we prepare a JSON object out of the list of characters using the json.dumps method. Let’s run a GET request against this endpoint:

I’ve created quite a few characters during testing and we get the entire list of them.

The endpoints could use some more work in regards to validating the input data, auto-numbering IDs in the database etc., but this is something I will do at a later date. In my next entry I will cover how to create an Azure PostgreSQL Flexible Server using Terraform. Thank you for reading and see you next time!

Creating database container

Hello again and welcome to the next entry in my blog! In this episode I will remake my application so that it interacts with a database instead of using a hardcoded dictionary as the data store. This is a step towards applying the CRUD (Create, Read, Update, Delete) principle in our application.

But first things first – we of course need the database itself. The easiest way to set one up is by pulling the official PostgreSQL Docker image and then modifying it according to our needs. I’ve chosen to go with version 15.10, but you can choose any stable version to your liking. To pull the image run the following command:

docker pull postgres:15.10

You can verify that the base image is now present on your machine by running:

docker image list | grep postgres

Now that we have an image we can run a PostgreSQL container. Setup is very simple; it all boils down to this single command:

docker run -p 5455:5432 --name postgres -e POSTGRES_PASSWORD=mypassword -d postgres:15.10

Once again we need to use port forwarding since we are working on a local machine. The default PostgreSQL port is 5432 and that is the one the container is listening on, but on the local machine we will connect through port 5455, which is forwarded to it. In the run command we also specify the name of the container, the default admin password variable and the image which should be used. Before we head to the actual work you should confirm that the container is running properly:

docker container list

Now that the container is running we can connect to it in order to create a database and a database user. We can do that by executing this command:

docker exec -it <container id> bash

Afterwards you will be presented with a shell logged in as the root user, just like the one below:

Change context to the „postgres” user using the „su” command:

su postgres

Now you can run the PostgreSQL command line utility:

psql

From psql we can create the database we need. Do so by running:

create database rpgapp;

We also require a dedicated user for this database. You can create it this way:

create user rpgapp_user with encrypted password 'PASSWORD';

Of course you should replace the 'PASSWORD' string with an actual strong password. Let’s also grant rpgapp_user rights to the database we’ve created:

grant all privileges on database rpgapp to rpgapp_user;

The last thing to do is to verify that the connection works and that our user indeed has rights to the database. Quit psql with the \q command and in the container shell run the following:

psql -U rpgapp_user -h 127.0.0.1 -d rpgapp

That should log you back into psql as rpgapp_user. To confirm that, run:

\conninfo

You will receive output like the one below:

Let’s take a break for now; in my next entry I will cover adjusting our application to interact with the database.

Automating the deployment process with Azure DevOps pipeline

Hello and welcome to another entry on my blog! In this episode we will focus on automating the deployment process of our application to Azure Container Apps.

I have not chosen the best time to work on this project, because it looks like even Microsoft closes down during the Christmas period. This is the message I found on the request page for free concurrency grants:

It appears that new Azure accounts do have access to many services for free, although there is one „but”. When it comes to the Azure agent pool needed to execute the pipelines, one needs to send a request to Microsoft for access, which usually takes 2 to 3 business days to complete. Fortunately, there is a workaround in the form of hosting the agent locally on your machine using a self-hosted Azure agent. I won’t deep dive into it here; however if you want to know more details you can find them on this page.

In the previous episode we deployed the Azure Container App using an external image repository in the form of Docker Hub. This time let’s try something different and create an Azure Container Registry. This will allow us to build images on the fly directly from the Dockerfile present in our Azure DevOps project repository.

To create a Container Registry, find this resource type in the Azure services search and hit „Create”. You will be presented with a screen such as the one below:

There are only a few fields to fill in: the resource group to use, the name of the registry, the location and the pricing plan (Basic is enough for our needs). For the remaining tabs we can stick to the defaults. After all the fields are filled in, hit the „Create” button.

To access the container registry from the pipeline we are going to use a username and password. In order to store them securely let’s also create an Azure Key Vault with secrets. Find Key Vault in the resource list and go to the creation wizard.

As usual we are required to specify the resource group, the name of the resource, the region and the pricing tier (Standard is fine). The rest of the pages can stay as they are. Hit „Review and create” to create the resource.

Now that the Key Vault is created, let’s fetch the admin password for the Container Registry. Find the resource and go to the „Access keys” tab on the resource page:

Make sure that the „Admin user” checkbox is checked. Note the username and click „Show” to retrieve the password. We can now create the secrets in the Key Vault. Navigate to the Key Vault resource and go to the „Secrets” tab.

Once there, hit the „Generate/Import” button and create two secrets named ACR-USERNAME and ACR-PASSWORD. In the „Secret value” field fill in the username and password respectively.

Now that we have set up both the Container Registry and the Key Vault with secrets, let’s take a look at the full pipeline.yml file:

trigger:
  branches:
    include:
      - master

pool: Default

variables:
  azureSubscription: '91b277db-132d-41ed-beda-d2effc05ba4a'
  port: 5000
  location: 'polandcentral'
  resourceGroup: 'project2025'
  keyVaultName: 'project2025-keyvault'
  containerAppName: 'rpg-app' 
  containerAppEnvironment: 'rpg-app-env'
  acrName: 'rpgproject'
  ingress: 'external'

steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: $(azureSubscription)
      KeyVaultName: $(keyVaultName)
      SecretsFilter: '*' 
      RunAsPreJob: true
  - task: AzureContainerApps@1
    inputs:
      azureSubscription: $(azureSubscription)
      appSourcePath: '$(Build.SourcesDirectory)'
      containerAppName: $(containerAppName)
      resourceGroup: $(resourceGroup)
      acrName: $(acrName)
      acrUsername: $(ACR-USERNAME)
      acrPassword: $(ACR-PASSWORD)
      ingress: $(ingress)
      location: $(location)
      targetPort: $(port)
      containerAppEnvironment: $(containerAppEnvironment)

The pipeline triggers automatically after a commit is pushed to the master branch of our Azure DevOps project repository. The agent pool I am using is called „Default”. Note that this is the self-hosted agent pool; a pool called „default” (with a lowercase „d”) would use an 'ubuntu-latest' VM.

I decided to use variables to separate the values from the logic of the pipeline and not hardcode the names of the resources used. This allows for reusability if I or someone else decides to use it in a different project.

The pipeline itself consists of two steps. In the first one we reach out to Azure Key Vault to retrieve the secrets stored in it. The second one is responsible for creating the Azure Container App. In this task we pass a lot of arguments to specify the name of the Azure Container Registry, the source of the application code, the location where we want the application deployed, etc.

After committing the changes to the repository, the pipeline runs, builds a Docker image from the Dockerfile in the repository and deploys the application as an Azure Container App.

Let’s retrieve the content of the /character endpoint and see if we receive the output we are expecting:

And here are our characters! Thank you for reading through to the end. In my next blog post we will create a local instance of a PostgreSQL database in order to use it in our application.

Deploying application in Azure Container Apps

In the previous blog entry we created a Docker image and made sure that the containerised application runs properly. Now we are ready to host the application in Azure using Azure Container Apps.

Azure Container Apps is a serverless platform that lets us maintain less infrastructure and focus on developing the application. This way there is no need to provision a virtual machine; instead the container image contains only the bare minimum needed to run the application – a slim Linux userland, Python and the required dependencies (such as Flask).

In the Azure free plan which I am using, Microsoft provides 180,000 vCPU-seconds, 360,000 GiB-seconds and 2 million requests, which is plenty for the needs of such a simple application.

To create an Azure Container App navigate to portal.azure.com, find „Container Apps” in Azure services and hit the „Create” button in the upper left corner. You will be presented with an input form like the one below:

In order to create a Container App you need a valid Azure subscription, a resource group and a Container Apps environment (the latter two can be created from this view, and the defaults work just fine in our scenario). The „Region” field should preferably be the one geographically closest to you. Name the container app to your liking, choose „Container image” as the deployment source and hit „Next”.

In the „Container” tab choose „Docker Hub” as the image source and „Public” as the image type. The registry login server remains the default „docker.io”. Fill in the name of your image (including the version tag if you use one) in the „Image and tag” field. For CPU and Memory you can choose the lowest resource tier; it is enough for this app. Continue to the next tab.

On the „Ingress” tab mark ingress as enabled and go with „Accept traffic from anywhere” in the ingress traffic section. Insecure connections can be allowed for demo purposes. Don’t forget to specify the target port as 5000 (default for Flask). You can then go to the last tab, „Review + create”, and if validation passes hit the „Create” button in the lower left corner of the page.

The deployment process can take a while, but after it is completed you should be presented with the details page of your application, such as the one below:

Make sure that the status of the application is shown as „Running”. If that’s the case you can test the application by copying the application URL and running curl against our /character endpoint. In my case that would be:

curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET https://rpg-app.redfield-12c457b2.polandcentral.azurecontainerapps.io/character

Replace the URL with the one where your application is hosted. After the query you should see output like the one below:

We’ve successfully received a list of our characters from the application deployed in Azure Container Apps.

That’s it for today’s entry; in the next step we will automate the deployment process using an Azure DevOps pipeline!

Containerising the application

Hello and welcome to the second entry of my tech blog.

Now that we have built a simple API app, we need a place to run it and expose it to our users.

But first things first: in order to build a Docker container we first need a Dockerfile. Since we are deploying just a simple Flask application without any dependencies (such as a database), this example is fairly simple. You can see the full Dockerfile below:
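A minimal sketch of what a 6-line Dockerfile matching this description might look like, assuming the entry point is app.py and the app is served with Flask’s built-in server:

FROM python:3.12
EXPOSE 5000
WORKDIR /app
RUN pip install flask
COPY . /app
CMD ["flask", "run", "--host=0.0.0.0"]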

We utilise the latest Python version (as of writing this blog post on 22nd of December 2024), expose port 5000 (default for Flask), establish a working directory, install Flask, copy the content of our application there and specify command to run the app. All of it in just 6 lines of code, pretty neat, right?

A Dockerfile alone is not enough; we need to build a container image out of it and upload it to a container registry. For this I am using a public Docker Hub repo.

In order to build the image, navigate to the folder containing the Dockerfile and run the following command:

docker buildx build -t imagename:version .

By using the -t argument we specify the name of the image and the version tag. The dot at the end sets the build context to the current directory. After execution you should see output similar to the one below (note it might take a few minutes to build the image, including pulling the python:3.12 base if it’s not in your cache):

Once the image is built, we can run the container instance based on it. To do so, execute the following command:

docker run -p 5005:5000 imagename:version

Since our container instance runs locally we’ve used port forwarding, redirecting traffic from port 5005 on our local machine to port 5000 in the container. Now that the container is created, let’s make sure it is running. To do so run (in another terminal, if the container is running in the foreground):

docker container list

Afterwards you should see output similar to the one below (the name of the container is generated randomly). The important thing is the „STATUS” column; it should show „Up” (not „Exited” or „Created”).

Let’s query our /character endpoint to make sure that the app is indeed running. You can do it by running curl:

curl http://127.0.0.1:5005/character

You should receive the following output in the terminal:

Right now the image is present only on our local machine. In order to use it in Azure we first need to push it to the Docker Hub registry. To do so, first make sure you are logged in to Docker by running:

docker login

After that is done you can push the image to the registry using this command:

docker push imagename:version

It might take a minute or two; during the process you should see output such as:

The last thing to check is the actual Docker Hub registry. Log in to hub.docker.com, navigate to the repositories section and you should find your repo containing the image there. In my case it looks like this:

Here it is! Now we have our image in the Docker repository, ready to be used in the deployment process, which I will cover in the next blog post.

Creating API app

When I was a teenager, web browser games started to be a thing; something that everyone played. Whether it was a kingdom-building game (Travian), a space empire simulator (OGame) or a weird PvP/roleplaying hybrid (Bitefight) – there was a niche suited for everyone.

Nowadays mobile games have taken their place, although sometimes I am still bombarded with spam ads for these kinds of games on Facebook.

Out of nostalgia for such games I had a crazy idea – why not write the backbone for a web-based RPG as an API?

The plan is simple – the API should have endpoints allowing the user to both retrieve and create in-game characters, as well as items belonging to them.

I am using both GET and POST methods, and the outline of the endpoints looks like this:

  • GET /character – fetch the entire list of characters
  • GET /character/name – fetch info about specific character
  • GET /character/name/item – fetch info about items in character’s inventory
  • POST /character – create a new character
  • POST /character/name/item – add an item to character’s inventory

To simplify things I have not added a database yet. Instead the info on characters is stored in a dictionary object. Of course this comes with a limitation – the characters we create with POST will only exist in the application’s memory and will only persist while the application runs, but I will address that in a later entry. For now I want something simple and working.
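A partial sketch of the idea, covering just listing and creating characters with an in-memory dictionary (the structure mirrors the JSON responses shown below):

from flask import Flask, jsonify, request

app = Flask(__name__)

# in-memory "database"; contents are lost whenever the application restarts
characters = {"characters": []}

@app.route('/character', methods=['GET'])
def list_characters():
    return jsonify(characters)

@app.route('/character', methods=['POST'])
def create_character():
    data = request.get_json()
    characters["characters"].append(
        {"name": data["name"], "level": data["level"], "items": []}
    )
    return jsonify(data), 200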

After creating the endpoints in Flask, the application can be run locally using the following command:

flask run

This runs the application locally and publishes it on the default port 5000.

In the dictionary I’ve put two characters: Adelajda (named after my friend, my usual go-to for female characters in RPG games) and Tordek (a dwarf warrior from D&D 3rd edition).

Let’s try to retrieve the list of these characters and see what we get in Postman by hitting the /character endpoint:

{
    "characters": [
        {
            "items": [
                {
                    "name": "longbow",
                    "value": "100"
                }
            ],
            "level": "1",
            "name": "Adelajda"
        },
        {
            "items": [
                {
                    "name": "battleaxe",
                    "value": "150"
                },
                {
                    "name": "platemail",
                    "value": "1000"
                }
            ],
            "level": "100",
            "name": "Tordek"
        }
    ]
}

We have two characters, one of them being lvl 1 while the other is lvl 100, and each of them has different items in their inventory. Pretty good start. However, a typical party in roleplaying games consists of 4 players, so let’s add two additional characters using the POST method. Let’s make sure to include „name” and „level” in the request body, for example:

{
    "name": "Strider",
    "level": "50"
}

I’ve added one more character and now the output of GET /character looks like this:

{
    "characters": [
        {
            "items": [
                {
                    "name": "longbow",
                    "value": "100"
                }
            ],
            "level": "1",
            "name": "Adelajda"
        },
        {
            "items": [
                {
                    "name": "battleaxe",
                    "value": "150"
                },
                {
                    "name": "platemail",
                    "value": "1000"
                }
            ],
            "level": "100",
            "name": "Tordek"
        },
        {
            "items": [],
            "level": "50",
            "name": "Strider"
        },
        {
            "items": [],
            "level": "20",
            "name": "Evelynn"
        }
    ]
}

Four characters make a decent party; however, it looks like Strider and Evelynn came unprepared! Let’s fix that by using the POST /character/name/item endpoint and adding some items to these characters’ inventories. Example request body:

{
    "name": "robe",
    "value": "150"
}

Let’s view only Evelynn’s inventory by using our last endpoint, GET /character/name/item, and make sure that the newly added robe is indeed there. The output:

{
    "items": [
        {
            "name": "robe",
            "value": "150"
        }
    ]
}

And here it is!

The application itself is simple and there are quite a few things that could be improved – one thing that comes to mind is adding default values for level and items (in case those are not specified in the POST request) – and I might address them in the future.

For now let’s settle for what we have. You can find the full code of this application here.

In the upcoming blog post I will dockerize the application and deploy it in Azure Container Apps. See you next time!


© 2025 KW Digital
