My world of technology

Category: Azure

Creating an Azure VM using Terraform

Hello and welcome to another entry on my blog! In this episode we will create a virtual machine which is going to serve as a management server for the environment.

In order for this server to communicate with both our database server and API app, it needs to be present on the same network. In a previous entry we’ve already created a virtual network; the only thing we still need is a dedicated subnet.

For the purpose of demonstration let’s create it manually instead of via Terraform. In the resource search, look for the previously created virtual network, go to the Subnets tab, hit „+ Subnet” and specify a name. All remaining fields can be left at their defaults.
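If you would rather script this step as well, the same subnet could be created with the Azure CLI. The snippet below is only a sketch: the subnet name matches the one referenced in the Terraform configuration further down, while the 10.0.3.0/24 address prefix is an assumption you should adjust to a range that is free in your virtual network.

# Sketch of a CLI equivalent of the manual portal steps above.
# The address prefix is an assumed value; pick a range that is free in your VNet.
az network vnet subnet create \
  --resource-group project2025 \
  --vnet-name rpgapp-network \
  --name rpgapp-vm-subnet \
  --address-prefixes 10.0.3.0/24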

Before creating the VM there are a few things that need to be specified in the Terraform file. It is required to create a network interface with an IP configuration (including the subnet where the VM needs to reside), and preferably also a network security group to allow RDP connections to this VM.

Other than that you need to assign a name to the VM and choose a storage plan and a source image (I’ve chosen Windows Server 2016), as well as the admin username and password used to connect. The entire configuration put together looks like this:

resource "azurerm_network_security_group" "rpgappvm_nsc" {
  name                = "rpgappvm_nsc"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  security_rule {
    name                       = "AllowAnyRDPInbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "3389"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "nic" {
  name                = "rpgappvm-nic"
  location            = "polandcentral"
  resource_group_name = azurerm_resource_group.project2025.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = "/subscriptions/1d338fad-a9e9-4314-853c-5793eddb8b1b/resourceGroups/project2025/providers/Microsoft.Network/virtualNetworks/rpgapp-network/subnets/rpgapp-vm-subnet"
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "rpgapp-vm" {
  name                = "rpgapp-vm"
  resource_group_name = azurerm_resource_group.project2025.name
  location            = "polandcentral"
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = var.vm_password
  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}
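Note that the file above declares the network security group but does not show it being attached to anything. Assuming the RDP rule is meant to apply to this VM’s network interface, an association resource along the following lines would wire the two together (a sketch, not part of the original configuration):

# Sketch: attach the NSG defined above to the VM's network interface.
resource "azurerm_network_interface_security_group_association" "rpgappvm_nsg_assoc" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = azurerm_network_security_group.rpgappvm_nsc.id
}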

After applying the Terraform file, note the IP address of this resource and connect to it using any RDP client. Once in, I installed pgAdmin to manage the PostgreSQL Flexible Server.
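One convenient way to surface that address is a Terraform output; this is an optional addition rather than part of the file shown above:

# Optional: print the VM's private IP address after terraform apply.
output "vm_private_ip" {
  value = azurerm_network_interface.nic.private_ip_address
}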

That’s it for this entry! In the next blog post I will cover creating a development environment in a fast and convenient manner using Docker Compose.

Creating PostgreSQL Flexible Server with Terraform

Hello and welcome back to my blog. In this episode we will create a PostgreSQL Flexible Server in Azure using Terraform. For a development environment, spinning up a new instance of PostgreSQL in Docker is great: it’s fast and easy. For a production environment, however, we would like something more stable, which is why I’ve chosen to use a PostgreSQL server in the cloud.

For creating and managing this server we will be using Terraform, to adhere to the Infrastructure as Code approach. This allows us to store the information about our infrastructure in a version control system and apply changes based on it.

First things first: in order to manage Azure resources with Terraform you need to have both the Azure CLI and Terraform installed locally on your machine.

Once both are in place, log in to Azure using the following command:

az login

To provide Terraform with the necessary rights to manage your Azure subscription you need to create a service principal. You can do so by running the following in your terminal (remember to replace <SUBSCRIPTION_ID> with the id of your Azure subscription):

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<SUBSCRIPTION_ID>"

As a result of this command you will receive output similar to the one below:
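Roughly speaking, it is a JSON object of the following shape (the values here are placeholders):

{
  "appId": "<APPID_VALUE>",
  "displayName": "<DISPLAY_NAME>",
  "password": "<PASSWORD_VALUE>",
  "tenant": "<TENANT_VALUE>"
}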

In the next step, use the values received in the output to set up the corresponding environment variables according to this schema:

export ARM_CLIENT_ID="<APPID_VALUE>"
export ARM_CLIENT_SECRET="<PASSWORD_VALUE>"
export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
export ARM_TENANT_ID="<TENANT_VALUE>"

Note that I am using syntax valid for macOS/Linux; if you are using Windows it is going to differ slightly.
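For reference, in Windows PowerShell the equivalent would look roughly like this:

# PowerShell equivalent of the exports above (same placeholder values).
$env:ARM_CLIENT_ID = "<APPID_VALUE>"
$env:ARM_CLIENT_SECRET = "<PASSWORD_VALUE>"
$env:ARM_SUBSCRIPTION_ID = "<SUBSCRIPTION_ID>"
$env:ARM_TENANT_ID = "<TENANT_VALUE>"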

Now we are ready and we can test the Terraform installation with the following command:

terraform init

After running it you will receive confirmation of successful initialisation:

Before we create the Terraform file to provision the PostgreSQL server there is one last thing to do. If you already have an existing resource group (like the one used in previous entries on this blog) you need to import it so that it is managed by Terraform. You can do so by running this command in your terminal (as usual, replace the subscription ID and resource group name):

terraform import azurerm_resource_group.<RESOURCE GROUP NAME> /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>
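Keep in mind that terraform import only works if the corresponding resource block is already declared in your configuration (the azurerm_resource_group "project2025" block shown further below). For the resource group used in this series the command would look roughly like this:

# Example for the project2025 resource group; substitute your own subscription ID.
terraform import azurerm_resource_group.project2025 /subscriptions/<SUBSCRIPTION ID>/resourceGroups/project2025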

Let’s get to creating the server! When working with Terraform it is best to refer to the official documentation. In this case we are working with a resource called „azurerm_postgresql_flexible_server” and you can find detailed documentation on it under this link.

First we specify the provider to be „azurerm” and the resource group we will be working with. The server needs to reside in a virtual network, so in the next step we define its name, location, the resource group it belongs to and the address space. We also need to define a subnet, which provides us with a range of IP addresses that can be assigned to resources in the network.

Afterwards we also create a private DNS zone, which allows us to use DNS aliases instead of IP addresses when referring to our resources. One last thing required is a private DNS zone virtual network link; thanks to it we will be able to connect applications running within Azure to our newly created database server.

The last step is of course to create the PostgreSQL server itself. Besides pointing to the subnet and private DNS zone created in the previous steps, we also need to specify the configuration of the server. I’ve gone with 32 GB of storage and the burstable B_Standard_B1ms SKU (you can check the specifics either in the Terraform or Azure documentation).

One note regarding security: since the creation of the server requires specifying a username and password for the PostgreSQL instance, I’ve secured them using Terraform variables.

Putting all of it together, the Terraform file looks like the following:

variable "db_username" {
  type = string
}

variable "password" {
  type = string
}


provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "project2025" {
  name     = "project2025"
  location = "Australia East"
}

resource "azurerm_virtual_network" "rpgappnetwork" {
  name                = "rpgapp-network"
  location            = "polandcentral"
  resource_group_name = "project2025"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "rpgapp-subnet" {
  name                 = "rpgapp-subnet"
  resource_group_name  = "project2025"
  virtual_network_name = "rpgapp-network"
  address_prefixes     = ["10.0.2.0/24"]
  service_endpoints    = ["Microsoft.Storage"]
  delegation {
    name = "fs"
    service_delegation {
      name = "Microsoft.DBforPostgreSQL/flexibleServers"
      actions = [
        "Microsoft.Network/virtualNetworks/subnets/join/action",
      ]
    }
  }
}

resource "azurerm_private_dns_zone" "dns" {
  name                = "rpgapp.postgres.database.azure.com"
  resource_group_name = "project2025"
}

resource "azurerm_private_dns_zone_virtual_network_link" "rpgapp-private-dns-zone" {
  name                  = "rpgapp-dns-zone"
  private_dns_zone_name = "rpgapp.postgres.database.azure.com"
  virtual_network_id    = azurerm_virtual_network.rpgappnetwork.id
  resource_group_name   = "project2025"
  depends_on            = [azurerm_subnet.rpgapp-subnet]
}

resource "azurerm_postgresql_flexible_server" "postgres" {
  name                          = "rpgapp-postgres"
  resource_group_name           = "project2025"
  location                      = "polandcentral"
  version                       = "13"
  delegated_subnet_id           = azurerm_subnet.rpgapp-subnet.id
  private_dns_zone_id           = azurerm_private_dns_zone.dns.id
  administrator_login           = var.db_username
  administrator_password        = var.password
  zone                          = "1"

  storage_mb   = 32768

  sku_name   = "B_Standard_B1ms"
  depends_on = [azurerm_private_dns_zone_virtual_network_link.rpgapp-private-dns-zone]

}

Now that the file is ready, we can apply the configuration using the following command:

terraform apply
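Because the db_username and password variables have no default values, Terraform will prompt for them interactively during apply. They can also be supplied on the command line or through environment variables, roughly like this (the values below are just examples):

# Passing the variables explicitly (example values).
terraform apply -var="db_username=pgadmin" -var="password=<YOUR_PASSWORD>"

# Or via environment variables that Terraform picks up automatically.
export TF_VAR_db_username="pgadmin"
export TF_VAR_password="<YOUR_PASSWORD>"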

The process itself might take some time, as Terraform needs to create all the required resources. You should receive output similar to the one below:

That’s it for today. In the next entry I will cover adjusting the Azure Container App to interact with the newly created PostgreSQL Flexible Server database.

Automating the deployment process with Azure DevOps pipeline

Hello and welcome to another entry on my blog! In this episode we will focus on automating the deployment process of our application to Azure Container Apps.

I have not chosen the best time to work on this project, because it looks like even Microsoft closes down during the Christmas period. This is the message I found on the request page for free parallelism grants:

It appears that new Azure accounts do have access to many services for free, although there is one „but”. When it comes to the Microsoft-hosted agent pool needed to execute pipelines, one needs to send a request to Microsoft for access, which usually takes 2 to 3 business days to process. Fortunately, there is a workaround in the form of hosting the agent locally on your machine as a self-hosted Azure agent. I won’t deep dive into it here; however, if you want to know more details you can find them on this page.

In the previous episode we deployed the Azure Container App using an external image repository in the form of Docker Hub. This time let’s try something different and create an Azure Container Registry. This will allow us to build images on the fly directly from the Dockerfile present in our Azure DevOps project repository.

To create the Container Registry, find this resource type in the Azure services search and hit „Create”. You will be presented with a screen such as the one below:

There are only a few fields to fill in: the resource group to use, the name of the registry, the location and the pricing plan (Basic is enough for our needs). The remaining tabs can stay at their defaults. After all the fields are filled in, hit the „Create” button.
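If you prefer to script this step, an equivalent CLI call would look roughly as follows; the registry name matches the acrName variable used in the pipeline later in this post:

# Sketch of a CLI equivalent of the portal steps above.
# --admin-enabled turns on the admin user whose credentials we store in Key Vault below.
az acr create \
  --resource-group project2025 \
  --name rpgproject \
  --sku Basic \
  --admin-enabled true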

To access the container registry from the pipeline we are going to use a username and password. In order to store them securely, let’s also create an Azure Key Vault to hold the secrets. Find Key Vault in the resource list and go to the creation wizard.

As usual we are required to specify the resource group, the name of the resource, the region and the pricing tier (Standard is fine). The rest of the pages can stay as they are. Hit „Review and create” to create the resource.

Now that the Key Vault is created, let’s fetch the admin password for the Container Registry. Find the registry resource and go to the „Access keys” tab on its page:

Make sure that the „Admin user” checkbox is checked. Note the username and click „Show” to retrieve the password. We can now create the secrets in the Key Vault. Navigate to the Key Vault resource and go to the „Secrets” tab.

Once there, hit the „Generate/Import” button and create two secrets: one named ACR-USERNAME and the other ACR-PASSWORD. In the „Secret value” field fill in the username and password respectively.
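The same two secrets can also be created from the CLI; this is a rough sketch, with the vault name taken from the pipeline variables below and placeholder values standing in for your registry credentials:

# Store the ACR admin credentials as Key Vault secrets (placeholder values).
az keyvault secret set --vault-name project2025-keyvault --name ACR-USERNAME --value "<ACR_ADMIN_USERNAME>"
az keyvault secret set --vault-name project2025-keyvault --name ACR-PASSWORD --value "<ACR_ADMIN_PASSWORD>"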

Now that we have set up both the Container Registry and the Key Vault with secrets, let’s take a look at the full pipeline.yml file:

trigger:
  branches:
    include:
      - master

pool: Default

variables:
  azureSubscription: '91b277db-132d-41ed-beda-d2effc05ba4a'
  port: 5000
  location: 'polandcentral'
  resourceGroup: 'project2025'
  keyVaultName: 'project2025-keyvault'
  containerAppName: 'rpg-app' 
  containerAppEnvironment: 'rpg-app-env'
  acrName: 'rpgproject'
  ingress: 'external'

steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: $(azureSubscription)
      KeyVaultName: $(keyVaultName)
      SecretsFilter: '*' 
      RunAsPreJob: true
  - task: AzureContainerApps@1
    inputs:
      azureSubscription: $(azureSubscription)
      appSourcePath: '$(Build.SourcesDirectory)'
      containerAppName: $(containerAppName)
      resourceGroup: $(resourceGroup)
      acrName: $(acrName)
      acrUsername: $(ACR-USERNAME)
      acrPassword: $(ACR-PASSWORD)
      ingress: $(ingress)
      location: $(location)
      targetPort: $(port)
      containerAppEnvironment: $(containerAppEnvironment)

The pipeline triggers automatically after a commit is pushed to the master branch of our Azure DevOps repository. The agent pool I am using is called „Default”; note that this is the self-hosted agent pool. To run on Microsoft-hosted agents instead, you would request a hosted image such as 'ubuntu-latest' via a vmImage entry, which also requires the parallelism grant mentioned at the beginning of this post.
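For reference, switching to a Microsoft-hosted agent would mean replacing the pool line with something along these lines:

# Microsoft-hosted agent instead of the self-hosted "Default" pool.
pool:
  vmImage: 'ubuntu-latest'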

I decided to use variables to separate the values from the logic of the pipeline and to avoid hardcoding the names of the resources used. This allows for reusability if I or someone else decides to use it in a different project.

The pipeline itself consists of two steps. In the first one we reach out to Azure Key Vault to retrieve the secrets stored in it. The second one is responsible for creating the Azure Container App. In this task we are passing a lot of arguments to specify the name of the Azure Container Registry, the source of the application code, the location where we want the application deployed, and so on.

After committing the changes to the repository, the pipeline runs, builds a Docker image from the Dockerfile in the repository and deploys the application as an Azure Container App.

Let’s retrieve the content of the /character endpoint and see if we receive the output we are expecting:
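A quick way to check is to run curl against the application URL of the Container App (substitute the URL of your own deployment):

# Placeholder URL; use the application URL shown on your Container App's overview page.
curl -i https://<YOUR_CONTAINER_APP_URL>/character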

And here are our characters! Thank you for reading through to the end. In my next blog post we will create a local instance of a PostgreSQL database in order to use it in our application.

Deploying application in Azure Container Apps

In the previous blog entry we created a Docker image and made sure that the containerised application runs properly. Now we are ready to host the application in Azure using Azure Container Apps.

Azure Container Apps is a serverless platform that allows us to maintain less infrastructure and focus only on the development of the application. This way there is no need to provision a virtual machine; instead, the container image contains only the bare minimum needed to run the application, such as a slim base image with Python and the required dependencies (such as Flask).

In the free tier of Azure which I am using, Microsoft provides 180,000 vCPU-seconds, 360,000 GiB-seconds, and 2 million requests per month, which is plenty for the needs of such a simple application.

To create an Azure Container App navigate to portal.azure.com, find „Container Apps” in Azure services and hit the „Create” button in the upper left corner. You will be presented with an input form like the one below:

In order to create a Container App you need to have a valid Azure subscription, a resource group and a Container App environment (both of the latter can be created from this view, and the defaults work just fine in our scenario). The „Region” field should preferably be the one geographically closest to you. Name the container app to your liking, choose „Container image” as the deployment source and hit „Next”.

In the „Container” tab choose „Docker Hub” as the image source and „Public” as the image type. The registry login server remains at the default „docker.io”. Fill in the name of your image (including the version tag if you use one) in the „Image and tag” field. For CPU and Memory you can choose the lowest resource tier; it is enough for this app. Hit continue to go to the next tab.

On the „Ingress” tab mark Ingress as enabled and go with „Accept traffic from anywhere” in the Ingress traffic section. Insecure connections can be allowed for demo purposes. Don’t forget to specify the target port as 5000 (the default for Flask). You can then go to the last tab, Review + create, and if validation passes hit the „Create” button in the lower left corner of the page.
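For completeness, the same deployment can also be scripted with the Azure CLI. The call below is only a sketch: the image name is a placeholder and the resource names are the ones used throughout this series:

# Sketch: CLI equivalent of the portal wizard above (image name is a placeholder).
az containerapp create \
  --name rpg-app \
  --resource-group project2025 \
  --environment rpg-app-env \
  --image docker.io/<YOUR_DOCKERHUB_USER>/<YOUR_IMAGE>:<TAG> \
  --target-port 5000 \
  --ingress external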

The deployment process can take a while, but after it is completed you should be presented with the details page of your application, such as the one below:

Make sure that the status of the application is shown as „Running”. If that’s the case, you can test the application by copying the application URL and running curl against it, using our /character endpoint. In my case that would be:

curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET https://rpg-app.redfield-12c457b2.polandcentral.azurecontainerapps.io/character

Replace the URL with the one where your application is hosted. After the query you should see output like the one below:

We’ve successfully received a list of our characters from the application deployed in Azure Container Apps.

That’s it for today’s entry; in the next one we will automate the deployment process using an Azure DevOps pipeline!
