My world of technology

Month: January 2025

Creating metrics endpoint and adding Prometheus container

Now that we have the development environment ready and containerised, it is time to add some additional services to it. In this episode let’s create a metrics endpoint providing information on the API statistics, as well as a Prometheus container to monitor it.

For the integration between Flask and Prometheus I am using the Prometheus Flask exporter, which allows us to create a metrics endpoint in an easy way. In order to use it, first install it on your local machine by running the command:

pip3 install prometheus-flask-exporter

Once it’s installed, also make sure to add the import to the application:

from prometheus_flask_exporter import PrometheusMetrics

When implementing PrometheusMetrics we get these metrics out of the box:

  • HTTP request duration
  • Total number of HTTP requests
  • Total number of uncaught exceptions when serving Flask requests

A simple implementation looks like the following:

from flask import Flask, request

app = Flask(__name__)
metrics = PrometheusMetrics(app)

# static information as metric
metrics.info('app_info', 'Application info', version='1.0.3')

@app.route('/')
def main():
    pass  # requests tracked by default

@app.route('/skip')
@metrics.do_not_track()
def skip():
    pass  # default metrics are not collected

@app.route('/<item_type>')
@metrics.do_not_track()
@metrics.counter('invocation_by_type', 'Number of invocations by type',
         labels={'item_type': lambda: request.view_args['item_type']})
def by_type(item_type):
    pass  # only the counter is collected, not the default metrics

@app.route('/long-running')
@metrics.gauge('in_progress', 'Long running requests in progress')
def long_running():
    pass

@app.route('/status/<int:status>')
@metrics.do_not_track()
@metrics.summary('requests_by_status', 'Request latencies by status',
                 labels={'status': lambda r: r.status_code})
@metrics.histogram('requests_by_status_and_path', 'Request latencies by status and path',
                   labels={'status': lambda r: r.status_code, 'path': lambda: request.path})
def echo_status(status):
    return 'Status: %s' % status, status

Let’s give it a try and call the /metrics endpoint:
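For illustration, a fragment of the response could look roughly like this (the values are made up; the metric names come from the exporter and the underlying Python client):

process_cpu_seconds_total 0.38
process_start_time_seconds 1.7368e+09
process_resident_memory_bytes 3.1e+07
app_info{version="1.0.3"} 1.0
flask_http_request_duration_seconds_count{method="GET",path="/",status="200"} 4.0
flask_http_request_duration_seconds_sum{method="GET",path="/",status="200"} 0.011
flask_http_request_total{method="GET",status="200"} 4.0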

As we can see, it produces a lot of useful information. Besides the metrics I’ve already mentioned, it can even give us some insight into the performance of the application and our infrastructure (process_cpu_seconds_total, process_start_time_seconds, process_resident_memory_bytes) or split all HTTP requests into percentiles, giving us information on the slowest and median times needed to serve the requests.

Now that it’s in place, let’s set up Prometheus so we don’t have to call the metrics endpoint every time we need information; instead we will use Prometheus to query exactly the information we need.

First things first, let’s add another service to the compose.yaml file:
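A minimal sketch of what this service definition could look like (the directory, ports and network follow the description below):

  prometheus:
    container_name: prometheus
    build: ./mon/
    ports:
      - "9091:9090"
    networks:
      - docker_network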

We’ve named the container prometheus, specified the location where the build file can be found (a new directory called "mon") and introduced port forwarding from 9090 in the container (the default for Prometheus) to 9091 on the local machine. Of course, let’s make sure to use the same Docker network where all the other containers reside so we can reach them.

Before we prepare the Dockerfile we need to prepare the configuration that we are going to feed to Prometheus. You can find the full documentation on this topic under this link. I am using a scrape config, which tells Prometheus which targets to scrape over HTTP. You can see the config I am using below:
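A minimal sketch of such a configuration (the job name is illustrative; the interval and target match what is described below):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'api'
    static_configs:
      - targets: ['api:5000']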

I have specified the default scrape interval to be 15 seconds, the job name and the target. I can use the name of the service from Docker Compose (api) and the internal Flask port 5000 because both of those containers reside on the same network. I didn’t need to specify a particular endpoint, because Prometheus by default expects metrics to be published under /metrics.

The final thing left to do is the build file, which is quite simple: we pull the default Prometheus image and add our configuration file:
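A sketch under the assumption that the configuration above is saved as prometheus.yml next to the Dockerfile in the mon directory:

FROM prom/prometheus:latest
ADD prometheus.yml /etc/prometheus/prometheus.yml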

Let’s give it a try and rebuild the environment:

docker compose up

Now that the Prometheus container is running we should be able to access its web console under localhost:9091. Once there, move to the Status – Target health section:

We can see that the endpoint http://api:5000/metrics is being actively monitored (status UP). Let’s give it a try and query some specific information about the metrics in the "Query" tab:

flask_http_request_created

This will give us a list of the HTTP request series recorded for our application (one per method and status), just as presented on the screenshot below:

To get info on all available metrics use:

{__name__!=""}
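And to see, for example, the per-second rate of incoming requests over the last 5 minutes, a query along these lines can be used:

rate(flask_http_request_total[5m])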

That’s it for today’s tutorial! Thanks for joining me. In my next entry I will cover deploying this Prometheus container in Azure Container Apps.

Creating development environment using Docker Compose

Hello and welcome to the next entry in this series. Today I will cover creating a development environment on the local machine using Docker Compose. This way we can set up both the API application and the database server quickly and effortlessly.

For the purpose of demonstration we can reuse the Dockerfiles created in previous entries in the series (for the Postgres server and the API app). We were already able to set up and run these containers separately, but using Docker Compose we can create both of them at the same time with just one command.

The important part is networking: we need to make sure that the containers can "talk" to each other. For that, let’s create a separate network called "docker_network" utilising the bridge driver. There is also a need to expose the ports, and one thing changes compared to the previous setup: we don’t need to specify port forwarding for the database server, because the API container can talk to the database directly on port 5432 (the default for Postgres). We still need to forward requests to the API app on a different port than the one exposed by the Flask container (5000).

I’ve also added a safety mechanism in the form of a healthcheck for the database and a "depends_on" clause for the api service, which will cause the API to start only when the database container is up and ready.

Putting this all together, the compose file looks like the following:

services:
  db:
    container_name: postgres
    build: ./db/
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgrespassword
      - POSTGRES_DB=postgres
    networks:
      - docker_network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 1s
      timeout: 5s
      retries: 10
  api:
    container_name: rpgapp
    build: .
    ports:
      - "5005:5000"
    networks:
      - docker_network
    depends_on:
      db:
        condition: service_healthy
networks:
  docker_network:
    driver: bridge

Let’s give it a spin and start the environment using the command:

docker compose up

Afterwards, we can see in the Docker Desktop app that both containers are running as part of a single compose stack:

To confirm that our app works, let’s try to create a new character via an API call:
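With the API published on port 5005, a request along these lines should do (the /characters path and the JSON fields are illustrative; adjust them to whatever the API actually expects):

curl -X POST http://localhost:5005/characters \
  -H "Content-Type: application/json" \
  -d '{"name": "Aragorn", "class": "ranger"}'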

We have successfully created an environment consisting of a database and an application server with just one command!

That’s it for today. In my next entry I will cover creating a Prometheus container for monitoring our application.

Creating Azure VM using Terraform

Hello and welcome to another entry on my blog! In this episode we will create a virtual machine which is going to serve as a management server for the environment.

In order for this server to communicate with both our database server and the API app, it needs to be present on the same network. In the previous entry we already created a virtual network; the only thing we need is a dedicated subnet.

For the purpose of demonstration let’s create it manually instead of via Terraform. In the resource search look for the previously created virtual network, go to the Subnets tab, hit "+ Subnet" and specify a name. All remaining fields can stay at their defaults.
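For reference, the same subnet could also be created from the command line (the address prefix below is illustrative; the resource group, virtual network and subnet names are the ones used in the Terraform configuration further down):

az network vnet subnet create \
  --resource-group project2025 \
  --vnet-name rpgapp-network \
  --name rpgapp-vm-subnet \
  --address-prefixes 10.0.2.0/24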

Before creating the VM there are a few things that need to be specified within the Terraform file. It is required to create a network interface and an IP configuration (with information on the subnet where the VM needs to reside), and preferably also a network security group to allow RDP connections to this VM.

Other than that, you need to assign a name to the VM, choose a storage plan and a source image (I’ve chosen Windows Server 2016), as well as the admin username and password used to connect. The entire configuration put together looks like this:

resource "azurerm_network_security_group" "rpgappvm_nsc" {
  name                = "rpgappvm_nsc"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  security_rule {
    name                       = "AllowAnyRDPInbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

  resource "azurerm_network_interface" "nic" {
  name                = "rpgappvm-nic"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = "/subscriptions/1d338fad-a9e9-4314-853c-5793eddb8b1b/resourceGroups/project2025/providers/Microsoft.Network/virtualNetworks/rpgapp-network/subnets/rpgapp-vm-subnet"
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "rpgapp-vm" {
  name                = "rpgapp-vm"
  resource_group_name = azurerm_resource_group.project2025.name
  location            = "polandcentral"
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = var.vm_password
  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}
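One caveat: as written, the security group is not attached to anything, so the RDP rule has no effect. One way to make it apply, sketched under the assumption that the NIC is the intended scope, is an association resource like this:

resource "azurerm_network_interface_security_group_association" "rpgappvm_nsc_assoc" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = azurerm_network_security_group.rpgappvm_nsc.id
}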

After applying the Terraform file, note the IP address of this resource and connect using any RDP client. Once in, I installed pgAdmin to manage the PostgreSQL Flexible Server.

That’s it for this entry! In the next blog post I will cover creating a development environment in a fast and convenient manner using Docker Compose.
