My world of technology

Category: API

Creating a metrics endpoint and adding a Prometheus container

Now that we have the development environment ready and containerised, it is time to add some additional services to it. In this episode let's create a metrics endpoint providing information on API statistics, as well as a Prometheus container to monitor it.

For integration between Flask and Prometheus I am using the Prometheus Flask exporter, which allows us to create a metrics endpoint in an easy way. In order to use it, first install it on your local machine by running the command:

pip3 install prometheus-flask-exporter

Once it's installed, make sure to also import it in the application:

from prometheus_flask_exporter import PrometheusMetrics

By wiring PrometheusMetrics into the app we get these metrics out of the box:

  • HTTP request duration
  • Total number of HTTP requests
  • Total number of uncaught exceptions when serving Flask requests

A simple implementation looks like this:

from flask import Flask, request
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

# static information as metric
metrics.info('app_info', 'Application info', version='1.0.3')

@app.route('/')
def main():
    pass  # requests tracked by default

@app.route('/skip')
@metrics.do_not_track()
def skip():
    pass  # default metrics are not collected

@app.route('/<item_type>')
@metrics.do_not_track()
@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': lambda: request.view_args['item_type']})
def by_type(item_type):
    pass  # only the counter is collected, not the default metrics

@app.route('/long-running')
@metrics.gauge('in_progress', 'Long running requests in progress')
def long_running():
    pass

@app.route('/status/<int:status>')
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path})
def echo_status(status):
    return 'Status: %s' % status, status

Let's give it a try and call the /metrics endpoint:
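
For example, with the app running locally on Flask's default port 5000, the raw metrics can be fetched with a plain HTTP GET (the curl invocation below is just one way to do it):

curl http://localhost:5000/metrics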

As we can see, it produces a lot of useful information. Besides the metrics I've already mentioned, it can even give us some insight into the performance of the application and our infrastructure (process_cpu_seconds_total, process_start_time_seconds, process_resident_memory_bytes) or split all HTTP requests into percentiles, telling us the slowest and median times needed to serve requests.

Now that it's in place, let's set up Prometheus so we don't have to call the metrics endpoint every time we need information; instead we will use Prometheus to query the exact information we need.

First things first, let's add another service to the compose.yaml file:
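
A minimal sketch of what that service entry can look like, reconstructed from the description below (the service key is an assumption, and the network name is reused from the compose file shown later in this series):

  prometheus:
    container_name: prometheus
    build: ./mon/
    ports:
      - "9091:9090"
    networks:
      - docker_network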

We've named the container prometheus, specified the location of the build file (a new directory called "mon") and introduced port forwarding from 9090 in the container (the Prometheus default) to 9091 on the local machine. Of course, let's make sure to use the same Docker network where all the other containers reside so we can reach them.

Before we prepare the Dockerfile we need to prepare the configuration that we are going to feed to Prometheus. You can find the full documentation on this topic under this link. I am using a scrape config, which pulls metrics over HTTP. You can see the config I am using below:
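
A minimal sketch of such a scrape config, assuming the file is called prometheus.yml and the job is named api (the interval and target come from the description below):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'api'
    static_configs:
      - targets: ['api:5000']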

I have specified the default scrape interval to be 15 seconds, along with the job name and target. I can use the service name from Docker Compose (api) and the internal Flask port 5000 because both containers reside on the same network. I didn't need to specify a particular endpoint, because Prometheus by default expects metrics to be published under /metrics.

The final thing left to do is the build file, which is quite simple: we pull the default Prometheus image and add our configuration file:
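
A minimal sketch, assuming the configuration prepared above sits next to the Dockerfile as prometheus.yml:

FROM prom/prometheus
# /etc/prometheus/prometheus.yml is the default config location in the image
COPY prometheus.yml /etc/prometheus/prometheus.yml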

Let’s give it a try and rebuild the environment:

docker compose up

Now that the Prometheus container is running we should be able to access its web console under localhost:9091. Once there, move to the Status – Target health section:

We can see that the endpoint http://api:5000/metrics is being actively monitored (status UP). Let's give it a try and query some specific information about the metrics in the "Query" tab:

flask_http_request_created

This will give us a list of all HTTP requests sent to our application, just as presented on the screenshot below:

To get info on all available metrics use:

{__name__!=""}

That's it for today's tutorial! Thanks for joining me; in my next entry I will cover deploying this Prometheus container in Azure Container Apps.

Creating a development environment using Docker Compose

Hello and welcome to the next entry in this series. Today I will cover creating a development environment on the local machine using Docker Compose. This way we can set up both the API application and the database server quickly and effortlessly.

For demonstration purposes we can reuse the Dockerfiles created in previous entries in the series (for the Postgres server and the API app). We were already able to set up and run these containers separately, but using Docker Compose we can create both of them at the same time with just one command.

The important part is networking: we need to make sure the containers can "talk" to each other. For that, let's create a separate network called "docker_network" using the bridge driver. We also need to expose the ports, and one thing changes compared to the previous setup: we don't need port forwarding for the database server, since the API container can talk to the database directly on port 5432 (the Postgres default). We still need to forward requests to the API app on a different host port than the one exposed by the Flask container (5000).

I've also added a safety mechanism in the form of a healthcheck for the database and a "depends_on" clause for the api service, which makes the API start only when the database container is up and ready.

Putting this all together, the compose file looks like this:

services:
  db:
    container_name: postgres
    build: ./db/
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgrespassword
      - POSTGRES_DB=postgres
    networks:
      - docker_network
    healthcheck:
      # $$ defers expansion to the container, where these variables are set
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 1s
      timeout: 5s
      retries: 10
  api:
    container_name: rpgapp
    build: .
    ports:
      - "5005:5000"
    networks:
      - docker_network
    depends_on:
      db:
        condition: service_healthy
networks:
  docker_network:
    driver: bridge

Let's give it a spin and start the environment using the command:

docker compose up

Afterwards we can see in the Docker Desktop app that both containers are running as part of a single compose stack:

To confirm that our app works, let's try to create a new character via an API call:
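
For example, a request along these lines should do (the /character/create endpoint and its fields are described in the entry on database integration; the values here are made up):

curl -X POST http://localhost:5005/character/create \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "name": "Tordek", "level": 100}'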

We have successfully created an environment consisting of a database and an application server with just one command!

That's it for today; in my next entry I will cover creating a Prometheus container for monitoring our application.

Making application interact with a database

Hello and welcome to the next entry in the series. Now that we have a properly configured database, we can rewrite our application to interact with it. For this we will be using SQLAlchemy – a powerful toolkit dedicated to working with databases. You can find the documentation of this library under this link.

The list of libraries necessary for this project has grown a lot due to the need for SQLAlchemy, as well as a driver dedicated to PostgreSQL. Thankfully Python allows us to install the requirements with just a single command. By the way, you can find the requirements.txt file in my repository.

To install the needed libraries, download the requirements.txt file and run the following command in the directory where it is present:

python -m pip install -r requirements.txt

Now that that's sorted out, we can get to work. We have a working database, but how can we tell our application to connect to it? SQLAlchemy uses an engine object, which describes how to "talk" to a specific database. It requires info on the type of database used, username, password, host, port and database name. I've used a .env file to store all of this information separately and only import the values within the codebase. Creation of the engine looks like this:
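
A minimal sketch of what the engine creation can look like, assuming python-dotenv and hypothetical variable names such as DB_USER (the post only says the values live in a .env file):

import os

from dotenv import load_dotenv
from sqlalchemy import create_engine

load_dotenv()  # read the connection details from the .env file

# the variable names below are assumptions for illustration
engine = create_engine(
    f"postgresql://{os.environ['DB_USER']}:{os.environ['DB_PASSWORD']}"
    f"@{os.environ['DB_HOST']}:{os.environ['DB_PORT']}/{os.environ['DB_NAME']}"
)

print(engine)  # the engine prints its URL with the password masked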

I added a print at the end for debugging purposes (the password information is hidden). Let's run this code and see if we connected successfully:

The engine information is there – we are using postgresql, connecting as rpgapp_user to host 127.0.0.1 on port 5455, to a database called "rpgapp".

Before we get to coding the functions, we also need to specify the structure of the data. In SQLAlchemy this is done with models; a model defines both the Python object and the SQL database table that we are going to interact with.

For our purposes I've created a simple model called CharacterModel. We are going to store character data in a table called "characters", which has three columns: id (which is also the primary key), name (a string) and level (an integer). Putting this all together, the model definition looks like this:
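
A sketch of such a model using SQLAlchemy's declarative style (column names and types as described above):

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CharacterModel(Base):
    __tablename__ = 'characters'

    id = Column(Integer, primary_key=True)   # primary key
    name = Column(String)                    # character name
    level = Column(Integer)                  # character level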

Finally we can get to the main dish and start coding the functions for our API to retrieve and store data in the database. Let's start with the create function so we can populate the database.

I've created a separate endpoint called "/character/create" which only allows POST requests. It takes three arguments matching the fields in the characters table – name, level and id. A CharacterModel object is then created based on the input.

We use the existing engine and establish a session. A session object in SQLAlchemy is a way to handle transactions with the database. We add the created character object to the session and commit it to the database. If everything goes right, we then respond to the request with info on the character data and a status code (200). In case of any issues we catch the exception and return 500. Below is the code snippet:
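
A sketch of what that endpoint can look like, reusing the engine and CharacterModel defined earlier (the function name and exact response shape are assumptions):

from flask import Flask, jsonify, request
from sqlalchemy.orm import Session

app = Flask(__name__)

@app.route('/character/create', methods=['POST'])
def create_character():
    data = request.get_json()
    character = CharacterModel(id=data['id'], name=data['name'], level=data['level'])
    try:
        # a Session wraps the transaction: add the object, then commit
        with Session(engine) as session:
            session.add(character)
            session.commit()
        return jsonify({'id': data['id'], 'name': data['name']}), 200
    except Exception:
        return jsonify({'error': 'could not create character'}), 500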

Let's send a request and test the newly created endpoint:

Indeed it works: in return we receive the name and id of the character as well as a 200 return code. Now let's code a GET /character endpoint which will return the list of all our characters:
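
A sketch of the listing endpoint under the same assumptions (query all CharacterModel rows, then serialise them with json.dumps):

import json

@app.route('/character', methods=['GET'])
def get_characters():
    with Session(engine) as session:
        characters = session.query(CharacterModel).all()
        body = json.dumps(
            [{'id': c.id, 'name': c.name, 'level': c.level} for c in characters]
        )
    return body, 200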

Once again we are using a session object, but this time with the .query method, looking for all objects that match CharacterModel. Before we return the data to the user we need to prepare a JSON object from the list of characters using the json.dumps method. Let's run a GET request against this endpoint:

I've created quite a few characters during testing, and we get the entire list back.

The endpoints could use some more work in terms of validating input data, auto-incrementing the ID in the database, etc., but this is something I will do at a later date. In my next entry I will cover how to create an Azure PostgreSQL Flexible Server using Terraform. Thank you for reading and see you next time!

Creating an API app

When I was a teenager, web browser games started to be a thing; something that everyone played. Whether it was a kingdom-building game (Travian), a space empire simulator (Ogame) or a weird PvP/roleplaying hybrid (Bitefight) – there was a niche for everyone.

Nowadays mobile games have taken their place, although sometimes I am still bombarded with spam ads for these kinds of games on Facebook.

Out of nostalgia for such games, a crazy idea came to me – why not write the backbone for a web-based RPG as an API?

The plan is simple – the API should have endpoints allowing the user to both retrieve and create in-game characters, as well as items belonging to them.

I am utilising both GET and POST methods, and the outline of endpoints looks like this:

  • GET /character – fetch the entire list of characters
  • GET /character/name – fetch info about a specific character
  • GET /character/name/item – fetch info about the items in a character's inventory
  • POST /character – create a new character
  • POST /character/name/item – add an item to a character's inventory

To simplify things I have not added a database yet. Instead the info on characters is stored in a dictionary object. Of course this comes with a limitation – the characters we create with POST will only exist in the application's memory and will only persist while the application runs, but I will address that in a later entry. For now I want something simple and working.

After creating the endpoints in Flask, the application can be run locally using the following command:

flask run

Doing so, the application will run as a single instance published on the default port 5000.

In the dictionary I've input two characters: Adelajda (named after my friend, my usual go-to for female characters in RPG games) and Tordek (a dwarf warrior from D&D 3rd edition).
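
A minimal sketch of how that dictionary and the GET /character endpoint might look (not necessarily the repository's exact code; the data matches the output below):

from flask import Flask, jsonify

app = Flask(__name__)

# in-memory "database" – lives only as long as the application runs
characters = {
    "characters": [
        {
            "name": "Adelajda",
            "level": "1",
            "items": [{"name": "longbow", "value": "100"}],
        },
        {
            "name": "Tordek",
            "level": "100",
            "items": [
                {"name": "battleaxe", "value": "150"},
                {"name": "platemail", "value": "1000"},
            ],
        },
    ]
}

@app.route('/character', methods=['GET'])
def get_characters():
    return jsonify(characters)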

Let's try to retrieve the list of these characters and see what we get from Postman by hitting the /character endpoint:

{
    "characters": [
        {
            "items": [
                {
                    "name": "longbow",
                    "value": "100"
                }
            ],
            "level": "1",
            "name": "Adelajda"
        },
        {
            "items": [
                {
                    "name": "battleaxe",
                    "value": "150"
                },
                {
                    "name": "platemail",
                    "value": "1000"
                }
            ],
            "level": "100",
            "name": "Tordek"
        }
    ]
}

We have two characters, one of them lvl 1 and the other lvl 100, and each of them has different items in their inventory. Pretty good start. However, the usual party in roleplaying games consists of 4 players, so let's add two additional characters using the POST method. Let's make sure to include "name" and "level" in the request body, for example:

{
    "name": "Strider",
    "level": "50"
}

I've added one more character (Evelynn) in the same way, and now the output of GET /character looks like this:

{
    "characters": [
        {
            "items": [
                {
                    "name": "longbow",
                    "value": "100"
                }
            ],
            "level": "1",
            "name": "Adelajda"
        },
        {
            "items": [
                {
                    "name": "battleaxe",
                    "value": "150"
                },
                {
                    "name": "platemail",
                    "value": "1000"
                }
            ],
            "level": "100",
            "name": "Tordek"
        },
        {
            "items": [],
            "level": "50",
            "name": "Strider"
        },
        {
            "items": [],
            "level": "20",
            "name": "Evelynn"
        }
    ]
}

Four characters make a decent party; however, it looks like Strider and Evelynn came unprepared! Let's fix that by using the POST /character/name/item endpoint to add some items to these characters' inventories. An example request body:

{
    "name": "robe",
    "value": "150"
}

Let's view only Evelynn's inventory using our last endpoint, GET /character/name/item, and make sure that the newly added robe is indeed there. The output:

{
    "items": [
        {
            "name": "robe",
            "value": "150"
        }
    ]
}

And here it is!

The application itself is simple and there are quite a few things that could be improved – one that comes to mind is adding default values for level and items (in case those are not specified in the POST request) – and I might address these in the future.

For now let's settle on what we have. You can find the full code of this application here.

In the upcoming blog post I will dockerize the application and deploy it in Azure Container Apps. See you next time!
