In AWS we have several ways to deploy Django (and non-Django) applications with Docker. We can use ECS or EKS clusters, but if we don’t have an ECS or Kubernetes cluster up and running, that can be complex. Today I want to show how to deploy a Django application in production mode within an EC2 host. Let’s start.
I’m getting too old to provision a host by hand, so I prefer to use Docker. The idea is to create one EC2 instance (a simple Amazon Linux AMI, an AWS-supported image). This host doesn’t have Docker installed, so we need to install it. When we launch an instance, while we’re configuring it, we can specify user data to configure the instance or run a configuration script during launch.
We only need to put this shell script in the user data to set up Docker.
This script assumes that there’s a deploy.env file with our personal configuration (AWS profile, the host of the EC2 instance, the ECR registry and things like that).
In this example I’m using Docker Swarm to deploy the application. I also want to play with secrets. This dummy application doesn’t have any sensitive information, but I’ve created one "ec2.supersecret" variable anyway.
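Inside the container a Swarm secret shows up as a file under /run/secrets, so the application can read it at startup. A minimal sketch (the settings module and variable name shown here are assumptions, not part of the original project):

[sourcecode language="python"]
# settings.py (sketch): read the Docker Swarm secret mounted by the stack
from pathlib import Path

SECRET_FILE = Path('/run/secrets/ec2.supersecret')
SUPERSECRET = SECRET_FILE.read_text().strip() if SECRET_FILE.exists() else None
[/sourcecode]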
And that’s all. Maybe ECS or EKS are better solutions to deploy Docker applications in AWS, but we can also deploy easily to a single Docker host on an EC2 instance that can be ready within a couple of minutes.
Three years ago I wrote an article about websockets. In fact I’ve written several articles about WebSockets (WebSockets and real-time communications are something I’m really passionate about), but today I would like to pick up that article. Nowadays I’m involved with several Django projects, so I want to create a similar working prototype with Django. Let’s start:
In the past I normally worked with libraries such as socket.io to ensure browser compatibility with WebSockets. Nowadays, at least in my world, we can assume that our users are using a modern browser with WebSocket support, so we’re going to use plain WebSockets instead of external libraries. Django has great WebSocket support through Django Channels. It allows us to handle WebSockets (and other async protocols) thanks to Python’s ASGI specification. In fact it’s pretty straightforward to build applications with real-time communication and with shared authentication (something that I have done in the past with a lot of effort. I’m getting old and now I like simple things :))
The application that I want to build is the following one: a web application that shows the current time with seconds. OK, it’s very simple to do with a couple of JavaScript lines, but this time I want to create a worker that emits an event via WebSockets with the current time, and my web application will show that real-time update. This kind of architecture always has the same problem: the initial state. When the user opens the browser it must show the current time. We could simply wait for the next event to fill in the initially blank screen, but if events arrive every 10 seconds our user would stare at a blank screen until the next one shows up. So in our example we’re going to take this situation into account: each time our user connects to the WebSocket it will ask the server for the initial state.
Our initial-state route will return the current time (stored in Redis). We can authorize the route using standard Django protected routes:
[sourcecode language=”python”]
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
from ws.redis import redis
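
@login_required
def initial_state(request):
    # (sketch) return the time persisted by the worker; the Redis key name
    # 'time' is an assumption
    return JsonResponse({'time': redis.get('time').decode()})
[/sourcecode]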
As we can see here, we can reuse the authentication middleware in Channels consumers too.
[sourcecode language=”python”]
import json
from channels.generic.websocket import AsyncWebsocketConsumer
class WsConsumer(AsyncWebsocketConsumer):
    GROUP = 'time'
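
    async def connect(self):
        # (sketch) join the broadcast group and accept the connection
        await self.channel_layer.group_add(self.GROUP, self.channel_name)
        await self.accept()

    async def ws_message(self, event):
        # (sketch) the handler name is an assumption; it forwards the payload
        # received from group_send to the browser
        await self.send(text_data=json.dumps(event))
[/sourcecode]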
We’re going to need a worker that triggers the current time every second (to avoid problems we’re going to trigger our event every 0.5 seconds). To perform this kind of action there’s a great tool called Celery that integrates nicely with Django. We can create workers and scheduled tasks with Celery (exactly what we need in our example). To avoid the “initial state” problem, our worker will also persist the current time into a Redis storage.
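A minimal sketch of that periodic task could look like this (the module layout, the Redis key and the consumer handler name are assumptions; the 0.5 second interval would go in Celery’s beat_schedule):

[sourcecode language="python"]
import datetime

from asgiref.sync import async_to_sync
from celery import shared_task
from channels.layers import get_channel_layer

from ws.redis import redis


@shared_task
def emit_time():
    now = datetime.datetime.now().strftime('%H:%M:%S')
    # persist the "initial state" so new connections can ask for it
    redis.set('time', now)
    # broadcast to every consumer in the 'time' group; 'ws_message' must match
    # the handler name defined in the consumer (an assumption in this sketch)
    async_to_sync(get_channel_layer().group_send)(
        'time', {'type': 'ws_message', 'time': now})
[/sourcecode]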
Basically that’s the source code (plus the Django boilerplate).
Application architecture
The architecture of the application is the following one:
Frontend
The Django application. We can run this application in development with
python manage.py runserver
And in production using an ASGI server (uvicorn in this case):
[sourcecode language=”xml”]
uvicorn config.asgi:application --port 8000 --host 0.0.0.0 --workers 1
[/sourcecode]
In development mode:
[sourcecode language=”xml”]
celery -A ws worker -l debug
[/sourcecode]
And in production
[sourcecode language=”xml”]
celery -A ws worker --uid=nobody --gid=nogroup
[/sourcecode]
We need this scheduler to emit our event (every 0.5 seconds):
[sourcecode language=”xml”]
celery -A ws beat
[/sourcecode]
Message broker for Celery
In this case we’re going to use Redis
Docker
With this application we can use the same Dockerfile for the frontend, the worker and the scheduler, just using different entrypoints.
If we want to deploy our application on a K8s cluster we need to migrate our docker-compose file into K8s YAML files. I assume that we’ve pushed our Docker images to a container registry (such as ECR).
I’ve been learning how to deploy a Python application to Kubernetes. Here you can see my notes:
Let’s start with a dummy Python application. It’s a basic Flask web API: each time we perform a GET request to “/” we increment a counter and return the number of hits. The persistence layer will be a Redis database. The script is very simple:
[sourcecode language=”python”]
from flask import Flask
import os
from redis import Redis
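
# (sketch) the rest of the script: the Redis host defaults to localhost but can
# be overridden with the REDIS_HOST environment variable (used later on)
app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', 'localhost'))


@app.route('/')
def hits():
    return "Hits: {}".format(redis.incr('hits'))


if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
[/sourcecode]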
Now we’re going to run our application within a Docker container. First of all we need to create a Docker image from a Dockerfile:
[sourcecode language=”xml”]
FROM python:alpine3.8
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
EXPOSE 5000
[/sourcecode]
Now we can build our image:
[sourcecode language=”xml”]
docker build -t front .
[/sourcecode]
And now we can run our front image:
[sourcecode language=”xml”]
docker run -p 5000:5000 front python app.py
[/sourcecode]
If we now open our browser at http://localhost:5000 we’ll get a 500 error. That’s because our Docker container is trying to use a Redis host on localhost. It worked before, when our application and our Redis were on the same host, but now our API’s localhost isn’t the same as our host’s localhost.
In our script the Redis host is localhost by default, but it can be overridden with an environment variable, so we can pass to our Docker container the real host where Redis resides (supposing my IP address is 192.168.1.100):
[sourcecode language=”xml”]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front python app.py
[/sourcecode]
If we don’t want the development server we can also start our API using gunicorn:
[sourcecode language="xml"]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front gunicorn -w 1 app:app -b 0.0.0.0:5000
[/sourcecode]
And that works. We can start our app manually using Docker, but it’s a bit cumbersome: we need to run two containers (API and Redis) and set up the environment variables by hand.
Docker helps us here with docker-compose. We can create a docker-compose.yaml file configuring our whole application and bring it up with:
[sourcecode language=”xml”]
docker-compose up
[/sourcecode]
docker-compose is pretty straightforward. But what happens if our production environment is a cluster? docker-compose works fine on a single host, but if our production environment is a cluster we’ll face problems (we need to ensure things like high availability manually). Docker tried to answer this question with Docker Swarm. Basically Swarm is docker-compose within a cluster, and it uses almost the same syntax as docker-compose on a single host. Looks good, doesn’t it? OK, but almost nobody uses it. Since Google released its container orchestrator (Kubernetes, aka K8s) it has become the de-facto standard. The good thing about K8s is that it’s much better than Swarm (more configurable and more powerful); the bad part is that it isn’t as simple and easy to understand as docker-compose. For our example we need one Service:
[sourcecode language="xml"]
apiVersion: v1
kind: Service
metadata:
  name: front-api
spec:
  # types:
  # - ClusterIP: (default) only accessible from within the Kubernetes cluster
  # - NodePort: accessible on a static port on each Node in the cluster
  # - LoadBalancer: accessible externally through a cloud provider's load balancer
  type: LoadBalancer
  # When the node receives a request on the static port (30164)
  # "select pods with the label 'app' set to 'front-api'"
  # and forward the request to one of them
  selector:
    app: front-api
  ports:
    - protocol: TCP
      port: 5000       # port exposed internally in the cluster
      targetPort: 5000 # the container port to send requests to
      nodePort: 30164  # a static port assigned on each node
[/sourcecode]
And one deployment:
[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-api
spec:
  # How many copies of each pod do we want?
  replicas: 1
  selector:
    matchLabels:
      # This must match the labels we set on the pod!
      app: front-api
  # This template field is a regular pod configuration
  # nested inside the deployment spec
  template:
    metadata:
      # Set labels on the pod.
      # This is used in the deployment selector.
      labels:
        app: front-api
    spec:
      containers:
        - name: front-api
          image: front:v1
          args: ["gunicorn", "-w 1", "app:app", "-b 0.0.0.0:5000"]
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: api-config
                  key: redis.host
[/sourcecode]
In order to learn a little bit of K8s I’m using a config map called ‘api-config’ where I put some information (such as the Redis host that I’m going to pass as an env variable).
In the last projects that I’ve been involved with I’ve been playing, in one way or another, with microservices, queues and things like that. I’m always facing the same tasks: building RPCs, workers, API gateways, … Because of that I’ve been searching for a framework to help me with this kind of stuff, and finally I discovered Nameko. Basically Nameko is the Python tool that I’ve been looking for. In this post I will create a simple proof of concept to learn how to integrate Nameko within my projects. Let’s start.
The POC is a simple API gateway that gives me the local time in ISO format. I can create a simple Python script to do it:
[sourcecode language=”python”]
import datetime
import time
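
# (sketch) print the local time in ISO format
print(datetime.datetime.fromtimestamp(time.time()).isoformat())
[/sourcecode]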
We can also create a simple Flask API server to consume this information. The idea is to create one RPC worker to generate this information and another worker to send the local time, but taken from a PostgreSQL database (yes, I know it’s not very useful, but it’s just an excuse to use a PG database in the microservice).
We’re going to create two RPC workers. One giving the local time:
[sourcecode language=”python”]
from nameko.rpc import rpc
from time import time
import datetime
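
class TimeService:
    # (sketch) the service and method names are assumptions
    name = 'timeservice'

    @rpc
    def local_time(self):
        return datetime.datetime.fromtimestamp(time()).isoformat()
[/sourcecode]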
Now we only need to set up the API gateway. With Nameko we can also create HTTP entrypoints (in the same way we create RPC ones), but I want to use Flask for that:
[sourcecode language=”python”]
from flask import Flask
from nameko.standalone.rpc import ServiceRpcProxy
from dotenv import load_dotenv
import os
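
# (sketch) the AMQP_URI variable, the service name and the route are assumptions
load_dotenv()
app = Flask(__name__)


def rpc_proxy(service):
    config = {'AMQP_URI': os.getenv('AMQP_URI')}
    return ServiceRpcProxy(service, config)


@app.route('/')
def local_time():
    with rpc_proxy('timeservice') as rpc:
        return rpc.local_time()
[/sourcecode]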
Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy about switching to a fiber connection. Finally I changed it, but while my internet company was solving one incident I started to hack a simple and dirty script that monitors my connection speed (just for fun and to practise with InfluxDB and Grafana).
Today I’ve lost my quick and dirty script (please Gonzalo, always keep a working backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It’s simple: “dd if=/dev/disk3 of=pi3.img” 🙂) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.
To monitor the bandwidth we only need to use the speedtest-cli API. We can use this API from the command line and, as it’s a Python library, we can create a Python script that uses it.
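Something like this minimal sketch (the InfluxDB host, database and measurement names are assumptions):

[sourcecode language="python"]
import datetime

import speedtest
from influxdb import InfluxDBClient

# (sketch) host and database names are assumptions
influx = InfluxDBClient(host='influxdb', database='speed')

# run one speed test and persist the results
s = speedtest.Speedtest()
s.get_best_server()
s.download()
s.upload()

influx.write_points([{
    "measurement": "speedtest",
    "time": datetime.datetime.utcnow().isoformat(),
    "fields": {
        "download": s.results.download,
        "upload": s.results.upload,
        "ping": s.results.ping
    }
}])
[/sourcecode]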
Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana within the Docker files instead of opening the browser, creating the datasource and building the dashboard by hand. After a couple of hours navigating GitHub repositories I finally created exactly what I needed for this post. Basically it’s a custom entry point for my Grafana host that creates the datasource and dashboard (via Grafana’s API).
In the previous project we moved a project to Docker. The idea was to keep exactly the same functionality (without even touching the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it is exactly overengineering, indeed), but I want to build something with different services working together. Let’s start.
We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via WebSockets.
With those premises we are going to need:
Frontend: UI5 application
Backend: PHP/lumen application
Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
Nginx server
PostgreSQL database.
RabbitMQ message broker.
As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.
Here we can see the docker-compose file to set up all the services. And this is the worker, listening to RabbitMQ and pushing updates over socket.io:
[sourcecode language="js"]
// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
    io.on('connection', function() {
        pgClient.query(sql, function(err, result) {
            var count = result.rows[0]['clickcount'];
            io.emit('click', {count: count});
        });
    });

    rabbitMq.on('ready', function() {
        var queue = rabbitMq.queue('ui5');
        queue.bind('#');
    });
});
[/sourcecode]
Database server:
[sourcecode language=”xml”]
FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/
[/sourcecode]
As we can see, we’re going to create the database structure in the first build:
[sourcecode language=”sql”]
CREATE SCHEMA docker;
CREATE TABLE docker.clicks (
clickCount numeric(8) NOT NULL
);
ALTER TABLE docker.clicks
OWNER TO username;
INSERT INTO docker.clicks(clickCount) values (0);
[/sourcecode]
For the RabbitMQ server we’re going to use the official Docker image, so we don’t need to create a Dockerfile.
We have also changed our Nginx configuration a little bit. We want to use Nginx to serve the backend and also the socket.io server, because we don’t want to expose different ports to the internet.
[sourcecode language="xml"]
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;
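
    # (sketch) the upstream names and ports below are assumptions; socket.io
    # traffic is proxied to the node worker so only port 80 is exposed
    location /socket.io/ {
        proxy_pass http://worker:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # hand PHP requests to the php-fpm backend
    location ~ \.php$ {
        fastcgi_pass backend:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
[/sourcecode]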
And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports or mapping the filesystem to our local machine (which is only useful when we’re developing).
In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using docker to set up our environment.
I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.
The frontend is built on localneo. As it’s a node application we’ll use a node:alpine base image.
In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating our container each time we change the code.
With this configuration we’re exposing two ports: 8080 for the frontend and 8000 for the backend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.
We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it). We’ll use SCP to host the frontend.
This configuration is just an example without filesystem mapping, without xdebug in the backend and without exposing the backend externally (Only the frontend can use it)
I must admit this post is just an excuse to play with Grafana and InfluxDb. InfluxDB is a cool database especially designed to work with time series. Grafana is one open source tool for time series analytics. I want to build a simple prototype. The idea is:
One Arduino device (an esp32) emits an MQTT event to a Mosquitto server. I’ll use a potentiometer to emulate a sensor (imagine here, for example, a temperature sensor instead of a potentiometer). I’ve used this circuit before in other projects.
One Python script will be listening to the MQTT event in my Raspberry Pi and it will persist the value to InfluxDB database
I will monitor the state of the time series given by the potentiometer with Grafana
I will create one alert in Grafana (for example when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state
One microservice (a Python Flask server) will be listening to the webhook and it will emit a MQTT event depending on the state
Another Arduino device (a NodeMcu in this case) will be listening to this MQTT event and it will activate a LED: the red one if the alert is ON and the green one if the alert is OFF.
The server
As I said before we’ll need three servers:
MQTT server (mosquitto)
InfluxDB server
Grafana server
We’ll use Docker. I’ve got a Docker host running on a Raspberry Pi 3. The Raspberry Pi is an ARM device, so we need Docker images for this architecture.
[sourcecode language="xml"]
volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
[/sourcecode]
ESP32
The Esp32 part is very simple. We only need to connect our potentiometer to the Esp32. The potentiometer has three pins: Gnd, Signal and Vcc. For the signal we’ll use pin 32.
We only need to configure our Wifi network, connect to our MQTT server and emit the potentiometer value within each loop.
[sourcecode language="cpp"]
void loop() {
    if (!client.connected()) {
        mqttReConnect();
    }

    int current = (int) ((analogRead(potentiometerPin) * 100) / 4095);
    mqttEmit(topic, (String) current);
    delay(500);
}
[/sourcecode]
MQTT listener
The esp32 emits an event (“/pot”) with the value of the potentiometer, so we’re going to create an MQTT listener that listens to that event and persists the value to InfluxDB.
[sourcecode language=”python”]
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient
import datetime
import logging
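
# (sketch) the InfluxDB host and database names are assumptions; the "/pot"
# topic and the "pot" measurement match the rest of the post
influx = InfluxDBClient(host='influxdb', database='sensors')
logging.basicConfig(level=logging.INFO)


def on_message(client, userdata, msg):
    value = float(msg.payload)
    influx.write_points([{
        "measurement": "pot",
        "time": datetime.datetime.utcnow().isoformat(),
        "fields": {"value": value}
    }])
    logging.info("pot %s", value)


client = mqtt.Client()
client.on_message = on_message
client.connect('mosquitto', 1883)
client.subscribe('/pot')
client.loop_forever()
[/sourcecode]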
Grafana
In Grafana we need to do two things. First, create a datasource pointing to our InfluxDB server. It’s pretty straightforward to do.
Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.
That’s the query that I’m using to plot the main graph:
[sourcecode language=”sql”]
SELECT
last("value") FROM "pot"
WHERE
time >= now() - 5m
GROUP BY
time($interval) fill(previous)
[/sourcecode]
Here we can see the dashboard
And here my alert configuration:
I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify us when the state of the alert changes.
Webhook listener
Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers, but in this project I’ll use Python and Flask.
We need to handle HTTP Basic Auth and emit an MQTT event. MQTT is a very simple protocol, but it has one very nice feature that fits like a glove here. Let me explain it:
Imagine that we’ve got our system up and running and the state is “ok”. Now we connect one device (for example one big red/green light). Since the “ok” event was fired before we connected the light, our green light will not be switched on. We need to wait until the next “alert” event if we want to see any light. That’s not cool.
MQTT allows us to “retain” messages. That means that we can emit messages with the “retain” flag to one topic, and when we connect a device to this topic later, it will still receive the message. That’s exactly what we need here.
[sourcecode language=”python”]
from flask import Flask
from flask import request
from flask_httpauth import HTTPBasicAuth
import paho.mqtt.client as mqtt
import json
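
# (sketch) the credentials, the route and the broker host are assumptions; the
# key point is publishing with retain=True so a device connected later still
# receives the last state
app = Flask(__name__)
auth = HTTPBasicAuth()

client = mqtt.Client()
client.connect('mosquitto', 1883)
client.loop_start()


@auth.verify_password
def verify_password(username, password):
    return username == 'user' and password == 'password'


@app.route('/alert', methods=['POST'])
@auth.login_required
def alert():
    payload = request.get_json()
    # Grafana sends the alert state; map it to the 1/0 the NodeMCU expects
    value = '1' if payload.get('state') == 'alerting' else '0'
    client.publish('/alert', value, retain=True)
    return json.dumps({"status": "OK"})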
if __name__ == "__main__":
    app.run(host='0.0.0.0')
[/sourcecode]
Nodemcu
Finally the Nodemcu. This part is similar to the esp32 one. Our LEDs are on pins 4 and 5. We also need to configure the WiFi and connect to the MQTT server. The Nodemcu and the esp32 are similar devices, but not the same. For example, we need to use different libraries to connect to the WiFi.
This device will be listening to the MQTT event and will trigger one LED or the other depending on the state:
[sourcecode language="cpp"]
// (sketch) the callback signature below is the standard PubSubClient one
void callback(char* topic, byte* payload, unsigned int length) {
    String data;
    for (int i = 0; i < length; i++) {
        data += (char)payload[i];
    }

    cleanLeds();
    int value = data.toInt();
    switch (value) {
        case 1:
            digitalWrite(ledRed, HIGH);
            break;
        case 0:
            digitalWrite(ledGreen, HIGH);
            break;
    }
    Serial.print("] value:");
    Serial.println((int) value);
}
[/sourcecode]
I’m learning Docker. In this post I want to share a little experiment that I have done. I know the code looks like over-engineering but it’s just an excuse to build something with docker and containers. Let me explain it a little bit.
The idea is build a Time clock in the browser. Something like this:
Yes I know. We can do it only with js, css and html but we want to hack a little bit more. The idea is to create:
A Silex/PHP frontend
A WebSocket server with socket.io/node
A Python script to obtain the current time
The WebSocket server will open two ports: one to serve WebSockets (socket.io) and another one as an HTTP server (express). The Python script will get the current time and send it to the WebSocket server. Finally, one frontend (Silex) will be listening to the WebSocket event and will render the current time.
That’s the WebSocket server (with socket.io and express)
[sourcecode language="js"]
var
    express = require('express'),
    expressApp = express(),
    server = require('http').Server(expressApp),
    io = require('socket.io')(server, {origins: 'localhost:*'})
;

expressApp.get('/tic', function (req, res) {
    io.sockets.emit('time', req.query.time);
    res.json('OK');
});

expressApp.listen(6400, '0.0.0.0');
server.listen(8080);
[/sourcecode]
That’s our Python script
[sourcecode language=”python”]
from time import gmtime, strftime, sleep
import httplib2
h = httplib2.Http()
while True:
(resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
sleep(1)
[/sourcecode]
And our Silex frontend
[sourcecode language="php"]
use Silex\Application;
use Silex\Provider\TwigServiceProvider;
[/sourcecode]
The idea is to use one Docker container for each process. I like to have all the code in one place so all containers will share the same volume with source code.
First the node container (WebSocket server)
[sourcecode language=”text”]
FROM node:argon
RUN mkdir -p /mnt/src
WORKDIR /mnt/src/node
EXPOSE 8080 6400
[/sourcecode]
Now the python container
[sourcecode language=”text”]
FROM python:2
RUN pip install httplib2
RUN mkdir -p /mnt/src
WORKDIR /mnt/src/python
[/sourcecode]
And finally the frontend container (Apache2 with Ubuntu 16.04):
[sourcecode language="text"]
FROM ubuntu:16.04
RUN locale-gen es_ES.UTF-8
RUN update-locale LANG=es_ES.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
RUN apt-get clean -y
[/sourcecode]
Now we’ve got the three containers, but we want to run them all together. We’ll use a docker-compose.yml file. The web container will expose port 80 and the node container 8080. The node container also opens 6400, but this is an internal port: we don’t need to access it from outside; only the Python container needs to reach it. Because of that, 6400 is not mapped to any host port in docker-compose.