Hello and welcome to the next entry in this series. Today I will cover creating a development environment on a local machine using Docker Compose. This way we can set up both the API application and the database server quickly and effortlessly.
For the purpose of demonstration we can reuse the Dockerfiles created in previous entries in the series (for the Postgres server and the API app). We were already able to set up and run these containers separately, but with Docker Compose we can create both of them at the same time with a single command.
The important part is networking: we need to make sure that the containers can "talk" to each other. For that, let's create a separate network called "docker_network" using the bridge driver. We also need to expose the ports, and one thing changes compared to the previous setup: we no longer need to specify port forwarding for the database server, because the API container can talk to the database directly on port 5432 (the default for Postgres). We still need to forward requests to the API app on a different port than the one exposed by the Flask container (5000).
I've also added a safety mechanism in the form of a healthcheck for the database and a "depends_on" clause for the API, which makes the API start only when the database container is up and ready.
Putting it all together, the compose file looks like this:
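Here is a minimal sketch of what such a file can look like; the service names, build paths, credentials and the host port are assumptions, so adjust them to your own setup:

services:
  db:
    build: ./postgres                # Dockerfile from the database entry (assumed path)
    environment:
      POSTGRES_USER: rpgapp_user     # assumed credentials, match your own setup
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: rpgapp
    networks:
      - docker_network
    healthcheck:                     # lets depends_on wait until Postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U rpgapp_user -d rpgapp"]
      interval: 5s
      timeout: 5s
      retries: 5
  api:
    build: ./api                     # Dockerfile from the Flask API entry (assumed path)
    ports:
      - "8000:5000"                  # forward host port 8000 to Flask's 5000 (host port assumed)
    networks:
      - docker_network
    depends_on:
      db:
        condition: service_healthy   # start the API only when the database is ready

networks:
  docker_network:
    driver: bridge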
Hello and welcome back to my blog. In this episode we will create a PostgreSQL Flexible Server in Azure using Terraform. Spinning up a new PostgreSQL instance in Docker is great for a development environment, since it's fast and easy, but for production we would like something more stable. That's why I've chosen to use a PostgreSQL server in the cloud.
For creating and managing this server we will be using Terraform, to adhere to the Infrastructure as Code approach. This allows us to store information about our infrastructure in a version control system and apply changes based on it.
First things first: in order to manage Azure resources with Terraform you need to have both the Azure CLI and Terraform installed locally on your machine.
Once both are in place, log in to Azure using the following command:
az login
To provide Terraform with the necessary rights to manage your Azure subscription you need to create a service principal. You can do so by running the following in your terminal (remember to replace <SUBSCRIPTION_ID> with the ID of your Azure subscription):
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<SUBSCRIPTION_ID>"
As a result of this command you will receive output similar to the one below:
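The exact values will differ, but it is a JSON object along these lines (placeholders instead of real credentials):

{
  "appId": "<APP_ID>",
  "displayName": "<DISPLAY_NAME>",
  "password": "<PASSWORD>",
  "tenant": "<TENANT_ID>"
}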
In the next step, use the values received in the output to set up the corresponding environment variables according to this schema:
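The azurerm provider reads its credentials from the ARM_* environment variables, so the mapping looks roughly like this:

export ARM_CLIENT_ID="<appId from the output>"
export ARM_CLIENT_SECRET="<password from the output>"
export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
export ARM_TENANT_ID="<tenant from the output>"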
Note that I am using syntax valid for macOS/Linux; if you are using Windows it is going to differ slightly.
Now we are ready, and we can test the Terraform installation with the following command:
terraform init
After running it you will receive confirmation of successful initialisation:
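The key line to look for in the output should read along these lines:

Terraform has been successfully initialized!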
Before we create the Terraform file to provision the PostgreSQL server there is one last thing to do. If you have already created a resource group (like the one used in previous entries on this blog) you need to import it so that it is managed by Terraform. You can do so by running this snippet in your terminal (as usual, replace the subscription ID and resource group name):
terraform import azurerm_resource_group.<RESOURCE GROUP NAME> /subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>
Let's get to creating the server! When working with Terraform it is best to refer to the official documentation. In this case we are working with a resource called "azurerm_postgresql_flexible_server", and you can find detailed documentation on it under this link.
First we specify the provider to be "azurerm" and the resource group we will be working with. The server needs to reside in a virtual network, so in the next step we define its name, location, the resource group it belongs to and the address space. We also need to define a subnet, which provides us with a range of IP addresses that can be assigned to resources in the network.
Afterwards we also create a private DNS zone, which allows us to use DNS aliases instead of IP addresses when referring to our resources. One last required piece is a private DNS zone virtual network link; thanks to it we will be able to connect applications running within Azure to our newly created database server.
The last step is of course to create the PostgreSQL server itself. Besides pointing to the subnet and private DNS zone created in the previous steps, we also need to specify the configuration of the server. I've gone with 32 GB of storage (the smallest available tier) and burstable B-series specs (you can check the specifics in either the Terraform or the Azure documentation).
One note regarding security: since creating the server requires specifying a username and password for the PostgreSQL instance, I've secured them using Terraform variables.
Putting it all together, the Terraform file looks like this:
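Below is a minimal sketch reconstructing the steps described above; the resource names, location, address ranges and SKU are assumptions, so adjust them to your environment:

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

# Credentials for the PostgreSQL instance, kept out of the file itself
variable "administrator_login" {
  type      = string
  sensitive = true
}

variable "administrator_password" {
  type      = string
  sensitive = true
}

# The (imported) resource group we are working with
resource "azurerm_resource_group" "rg" {
  name     = "<RESOURCE GROUP NAME>"
  location = "westeurope"
}

# Virtual network the server will reside in
resource "azurerm_virtual_network" "vnet" {
  name                = "rpgapp-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.0.0.0/16"]
}

# Subnet providing the IP range, delegated to the flexible server
resource "azurerm_subnet" "db" {
  name                 = "rpgapp-db-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.2.0/24"]

  delegation {
    name = "postgres-delegation"
    service_delegation {
      name    = "Microsoft.DBforPostgreSQL/flexibleServers"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}

# Private DNS zone so we can use DNS aliases instead of IP addresses
resource "azurerm_private_dns_zone" "dns" {
  name                = "rpgapp.postgres.database.azure.com"
  resource_group_name = azurerm_resource_group.rg.name
}

# Link the DNS zone to the virtual network
resource "azurerm_private_dns_zone_virtual_network_link" "dns_link" {
  name                  = "rpgapp-dns-link"
  private_dns_zone_name = azurerm_private_dns_zone.dns.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
  resource_group_name   = azurerm_resource_group.rg.name
}

# The PostgreSQL flexible server itself
resource "azurerm_postgresql_flexible_server" "db" {
  name                   = "rpgapp-db-server"
  resource_group_name    = azurerm_resource_group.rg.name
  location               = azurerm_resource_group.rg.location
  version                = "15"
  delegated_subnet_id    = azurerm_subnet.db.id
  private_dns_zone_id    = azurerm_private_dns_zone.dns.id
  administrator_login    = var.administrator_login
  administrator_password = var.administrator_password
  storage_mb             = 32768             # smallest available tier (32 GB)
  sku_name               = "B_Standard_B1ms" # burstable B-series SKU

  depends_on = [azurerm_private_dns_zone_virtual_network_link.dns_link]
}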
Hello and welcome to the next entry in the series. Now that we have a properly configured database we can rewrite our application to interact with it. For this we will be using SQLAlchemy, a powerful tool dedicated to working with databases. You can find the documentation of this library under this link.
The list of libraries necessary for this project has grown a lot due to the need for SQLAlchemy, as well as a driver dedicated to PostgreSQL. Thankfully Python allows us to install the requirements with just a single command. By the way, you can find the requirements.txt file in my repository.
To install the needed libraries, download the requirements.txt file and run the following command in the directory where it is present:
python -m pip install -r requirements.txt
Now that this is sorted out, we can get to work. We have a working database, but how do we tell our application to connect to it? SQLAlchemy uses an engine object, which describes how to "talk" to a specific database. It requires info on the type of database used, the username, password, host, port and database name. I've used a .env file to store all of this information separately and only import the values within the codebase. Creation of the engine looks like this:
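A minimal sketch of such a module; the .env variable names (DB_USER, DB_PASSWORD and so on) are my own naming, not taken from the original code:

import os

from dotenv import load_dotenv
from sqlalchemy import create_engine

load_dotenv()  # read the connection details from the .env file

# connection URL format: postgresql://user:password@host:port/database
db_url = (
    f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}"
    f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}/{os.getenv('DB_NAME')}"
)

engine = create_engine(db_url)
print(engine)  # the engine's repr masks the password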
I added a print at the end for debugging purposes (the password information is hidden). Let's run this code and see if we connected successfully:
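With the values used in this series, the printed engine should look along these lines:

Engine(postgresql://rpgapp_user:***@127.0.0.1:5455/rpgapp)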
The engine information is there: we are using postgresql, connected as rpgapp_user to host 127.0.0.1 on port 5455, to a database called "rpgapp".
Before we get to coding the functions we also need to specify the structure of the data. In SQLAlchemy these are called models; a model defines both the Python object and the SQL database table that we are going to interact with.
For our purposes I've created a simple model called CharacterModel. We are going to store character data in a table called "characters", which has three columns: characterid (which is also the primary key), name (a string) and character level (an integer value). Putting this all together, the model definition looks like this:
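A sketch of the model following the description above, using SQLAlchemy's declarative base (the exact base setup is an assumption):

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CharacterModel(Base):
    __tablename__ = "characters"

    characterid = Column(Integer, primary_key=True)  # also the primary key
    name = Column(String)                            # character name
    level = Column(Integer)                          # character level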
Finally we can get to the main dish and start coding the functions for our API to retrieve and store data in the database. Let's start with the create function so we can populate the database.
I've created a separate endpoint called "/character/create" which only allows POST requests. It takes three arguments corresponding to the fields in the characters table: name, level and id. A CharacterModel object is then created based on the input.
We use the existing engine and establish a session. A session object in SQLAlchemy is a way to handle transactions with the database. We add the created character object to the session and commit it to the database. If everything goes right we respond to the request with the character data and status code 200. In case of any issues we catch the exception and return 500. Below is the code snippet:
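A sketch of how such an endpoint can look, assuming a Flask app plus the engine and model from the earlier snippets (the module names and the exact response payload are assumptions):

from flask import Flask, request
from sqlalchemy.orm import Session

from engine import engine          # the engine sketched earlier (assumed module name)
from models import CharacterModel  # the model sketched earlier (assumed module name)

app = Flask(__name__)

@app.route("/character/create", methods=["POST"])
def create_character():
    data = request.get_json()
    character = CharacterModel(
        characterid=data["id"],
        name=data["name"],
        level=data["level"],
    )
    try:
        # the session handles the transaction with the database
        with Session(engine) as session:
            session.add(character)
            session.commit()
            return {"id": character.characterid, "name": character.name}, 200
    except Exception:
        return {"error": "could not create character"}, 500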
Let's make a request and test the newly created endpoint:
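For example, with curl (the host, port and payload values here are just an illustration):

curl -X POST http://127.0.0.1:5000/character/create \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "name": "Geralt", "level": 5}'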
Indeed, it works: in return we receive the name and id of the character as well as a 200 return code. Now let's code a GET /character endpoint which will return a list of all our characters:
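Continuing the sketch above (same assumed modules), the list endpoint can look like this:

import json

@app.route("/character", methods=["GET"])
def get_characters():
    with Session(engine) as session:
        # fetch every row that matches the CharacterModel
        characters = session.query(CharacterModel).all()
        payload = [
            {"id": c.characterid, "name": c.name, "level": c.level}
            for c in characters
        ]
    return json.dumps(payload), 200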
Once again we are using a session object, but this time with the .query method, looking for all objects that match CharacterModel. Before we return the data to the user we need to prepare a JSON object from the list of characters using the json.dumps method. Let's run a GET request to this endpoint:
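Again with curl as an illustration:

curl http://127.0.0.1:5000/character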
I've created quite a few characters during testing, and we get the entire list back.
The endpoints could use some more work with regards to checking the quality of the input data, auto-incrementing the ID in the database and so on, but this is something I will do at a later date. In my next entry I will cover how to create an Azure PostgreSQL Flexible Server using Terraform. Thank you for reading and see you next time!
Hello again and welcome to the next entry in my blog! In this episode I will remake my application so that it interacts with a database instead of using a hardcoded dictionary as its data store. This is a step towards applying the CRUD (Create, Read, Update, Delete) principle in our application.
But first things first: we of course need the database itself. The easiest way to set one up is to pull the official PostgreSQL Docker image and then modify it according to our needs. I've chosen to go with version 15.10, but you can choose any stable version to your liking. To pull the image run the following command:
docker pull postgres:15.10
You can verify that the base image is now present on your machine by running:
docker image list | grep postgres
Now that we have an image we can run a PostgreSQL container. Setup is very simple, it all boils down to this single command:
docker run -p 5455:5432 --name postgres -e POSTGRES_PASSWORD=mypassword -d postgres:15.10
Once again we need to use port forwarding since we are working on a local machine. The default PostgreSQL port is 5432 and that is the one the container is listening on; on the host, however, we will send all of our database connection requests to port 5455, which Docker forwards into the container. In the run command we also specify the name of the container, an environment variable with the default admin password, and the image that should be used. Before we get to the actual work you should confirm that the container is running properly:
docker container list
Now that the container is running we can connect to it in order to create a database and a database user. We can do that by executing this command:
docker exec -it <container id> bash
Afterwards you will be presented with a command prompt logged in as the root user, just like the one below:
Change context to the "postgres" user using the "su" command:
su postgres
Now you can run the PostgreSQL command-line utility:
psql
From psql we can create the database we need. Do so by running this command:
create database rpgapp;
We also require a dedicated user for this database. You can create it this way:
create user rpgapp_user with encrypted password 'PASSWORD';
Of course you should replace the 'PASSWORD' string with an actual strong password. Let's also grant rpgapp_user rights to the database we've created:
grant all privileges on database rpgapp to rpgapp_user;
The last thing to do is to verify that the connection works and that our user indeed has rights to the database. Quit psql with the \q command and in the container shell run the following:
psql -U rpgapp_user -h 127.0.0.1 -d rpgapp
That should log you back into psql as rpgapp_user. To confirm it, run:
\conninfo
You will receive output like the one below:
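If everything is set up correctly, the message from psql should read along these lines:

You are connected to database "rpgapp" as user "rpgapp_user" on host "127.0.0.1" at port "5432".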
Let's take a break for now; in my next entry I will cover adjusting our application to interact with the database.