Deploying Django application to AWS EC2 instance with Docker

On AWS we have several ways to deploy Django (and non-Django) applications with Docker: we can use ECS or EKS clusters. But if we don't already have an ECS or Kubernetes cluster up and running, that can be complex. Today I want to show how to deploy a Django application in production mode on a single EC2 host. Let's start.

I'm getting too old to provision a host by hand, so I prefer to use Docker. The idea is to create one EC2 instance (a simple Amazon Linux AMI, an AWS-supported image). This host doesn't come with Docker installed, so we need to install it. When we launch an instance we can specify user data: a configuration script that runs during launch.

We only need to put this shell script in the user data to set up Docker:

#! /bin/bash
yum update -y
yum install -y docker
usermod -a -G docker ec2-user
curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null
chmod +x /usr/local/bin/docker-compose
service docker start
chkconfig docker on

rm /etc/localtime
ln -s /usr/share/zoneinfo/Europe/Madrid /etc/localtime

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

docker swarm init

We also need to attach an IAM role to our instance. This IAM role only needs to grant the following policies:

  • AmazonEC2ContainerRegistryReadOnly (because we’re going to use AWS ECR as container registry)
  • CloudWatchAgentServerPolicy (because we’re going to emit our logs to Cloudwatch)

We also need to set up a security group that allows incoming SSH connections on port 22 and HTTP connections (in our example, on port 8000).

When we launch our instance we need to provide a keypair to connect via SSH. I like to put this keypair in my .ssh/config:

Host xxx.eu-central-1.compute.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/keypair-xxx.pem

To deploy our application we need to follow these steps:

  • Build our docker images
  • Push our images to a container registry (in this case ECR)
  • Deploy the application.

I’ve created a simple shell script called deploy.sh to perform all tasks:

#!/usr/bin/env bash

set -a
[ -f deploy.env ] && . deploy.env
set +a

echo "$(tput setaf 1)Building docker images ...$(tput sgr0)"
docker build -t ec2-web -t ec2-web:latest -t $ECR/ec2-web:latest .
docker build -t ec2-nginx -t $ECR/ec2-nginx:latest .docker/nginx

echo "$(tput setaf 1)Pushing to ECR ...$(tput sgr0)"
aws ecr get-login-password --region $REGION --profile $PROFILE |
  docker login --username AWS --password-stdin $ECR
docker push $ECR/ec2-web:latest
docker push $ECR/ec2-nginx:latest

CMD="docker stack deploy -c $DOCKER_COMPOSE_YML ec2 --with-registry-auth"
echo "$(tput setaf 1)Deploying to EC2 ($CMD)...$(tput sgr0)"
echo "$CMD"

DOCKER_HOST="ssh://$HOST" $CMD
echo "$(tput setaf 1)Building finished $(date +'%Y%m%d.%H%M%S')$(tput sgr0)"

This script assumes that there's a deploy.env file with our personal configuration (AWS profile, the EC2 host, the ECR registry and things like that):

PROFILE=xxxxxxx

DOCKER_COMPOSE_YML=docker-compose.yml
HOST=ec2-user@xxxx.eu-central-1.compute.amazonaws.com

ECR=9999999999.dkr.ecr.eu-central-1.amazonaws.com
REGION=eu-central-1
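The `set -a` / `set +a` pair at the top of deploy.sh is what makes this env file work: every variable assigned while `set -a` is active is automatically exported, so child processes (docker, aws) can see it. A quick illustration, using a throwaway file instead of the real deploy.env:

```shell
# create a throwaway env file (stand-in for deploy.env)
printf 'GREETING=hello\n' > /tmp/demo.env

set -a            # auto-export every variable assigned from now on
. /tmp/demo.env
set +a            # back to normal behaviour

# GREETING was exported, so child processes can see it
sh -c 'echo "$GREETING"'   # prints: hello
```

Without `set -a` the variable would be set in the current shell but invisible to the docker and aws commands the script launches.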

In this example I'm using Docker Swarm to deploy the application. I also want to play with secrets. This dummy application doesn't have any sensitive information, but I've created one "ec2.supersecret" secret anyway:

echo "super secret text" | docker secret create ec2.supersecret -
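Inside the running service, Swarm mounts each secret as a plain file under /run/secrets. A small helper like this can read it from Django settings (illustrative sketch; the `base` parameter exists only so it can be exercised outside Swarm):

```python
from pathlib import Path

def read_secret(name, base='/run/secrets', default=None):
    # Swarm mounts each secret as a file named after the secret
    path = Path(base) / name
    return path.read_text().strip() if path.exists() else default
```

In settings.py we could then do something like `SUPER_SECRET = read_secret('ec2.supersecret')`.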

That’s the docker-compose.yml file:

version: '3.8'
services:
  web:
    image: 999999999.dkr.ecr.eu-central-1.amazonaws.com/ec2-web:latest
    command: /bin/bash ./docker-entrypoint.sh
    environment:
      DEBUG: 'False'
    secrets:
      - ec2.supersecret
    deploy:
      replicas: 1
    logging:
      driver: awslogs
      options:
        awslogs-group: /projects/ec2
        awslogs-region: eu-central-1
        awslogs-stream: app
    volumes:
      - static_volume:/src/staticfiles
  nginx:
    image: 99999999.dkr.ecr.eu-central-1.amazonaws.com/ec2-nginx:latest
    deploy:
      replicas: 1
    logging:
      driver: awslogs
      options:
        awslogs-group: /projects/ec2
        awslogs-region: eu-central-1
        awslogs-stream: nginx
    volumes:
      - static_volume:/src/staticfiles:ro
    ports:
      - 8000:80
    depends_on:
      - web
volumes:
  static_volume:

secrets:
  ec2.supersecret:
    external: true

And that's all. Maybe ECS or EKS are better solutions to deploy Docker applications on AWS, but we can also deploy easily to a single Docker host on an EC2 instance that can be ready within a couple of minutes.

Source code in my github

Building real time Python applications with Django Channels, Docker and Kubernetes

Three years ago I wrote an article about websockets. In fact I've written several articles about Websockets (real time communication is something that I'm really passionate about), but today I would like to pick up that article. Nowadays I'm involved with several Django projects, so I want to create a similar working prototype with Django. Let's start:

In the past I normally worked with libraries such as socket.io to ensure browser compatibility with Websockets. Nowadays, at least in my world, we can assume that our users are on a modern browser with Websocket support, so we're going to use plain Websockets instead of external libraries. Django has great Websocket support through Django Channels. It allows us to handle Websockets (and other async protocols) thanks to the ASGI specification. In fact it's pretty straightforward to build applications with real time communication and shared authentication (something that I have done in the past with a lot of effort. I'm getting old and now I like simple things :))

The application that I want to build is the following one: a web application that shows the current time with seconds. OK, it's very simple to do with a couple of lines of javascript, but this time I want a worker that emits an event via Websockets with the current time, and a web application that shows that real time update. This kind of architecture always has the same problem: the initial state. When the user opens the browser it must show the current time immediately. We could simply wait for the next event to fill the initially blank screen, but if events only arrived every 10 seconds our user would stare at a blank page until then. So we're going to take this situation into account: each time a user connects to the Websocket, the client asks the server for the initial state.
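The pattern can be sketched without Django at all: the worker writes the latest value to a shared store on every tick, and the client reads that store once on connect. Here a plain dict stands in for Redis and the names are illustrative:

```python
import json
import time

store = {}  # stands in for Redis

def worker_tick():
    # the worker persists the latest value on every beat
    store['time'] = time.strftime('%X')

def initial_state():
    # the client calls this once, right after opening the websocket
    return json.dumps({'current': store.get('time')})

worker_tick()
print(initial_state())  # e.g. {"current": "17:42:05"}
```

After the initial fetch, every later update arrives through the Websocket events alone.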

Our initial state route will return the current time (read from Redis). We can protect the route using Django's standard authentication decorators:

[sourcecode language="python"]
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
from ws.redis import redis

@login_required
def initial_state(request):
    return JsonResponse({'current': redis.get('time')})
[/sourcecode]

We need to configure our channel routing and map our consumer:

[sourcecode language="python"]
from django.urls import re_path

from ws import consumers

websocket_urlpatterns = [
    re_path(r'time/tic/$', consumers.WsConsumer),
]
[/sourcecode]

As we can see, we can reuse the authentication middleware in Channels consumers too:
[sourcecode language="python"]
import json

from channels.generic.websocket import AsyncWebsocketConsumer

class WsConsumer(AsyncWebsocketConsumer):
    GROUP = 'time'

    async def connect(self):
        if self.scope["user"].is_anonymous:
            await self.close()
        else:
            await self.channel_layer.group_add(
                self.GROUP,
                self.channel_name
            )
            await self.accept()

    async def tic_message(self, event):
        if not self.scope["user"].is_anonymous:
            message = event['message']

            await self.send(text_data=json.dumps({
                'message': message
            }))
[/sourcecode]

We're going to need a worker that emits the current time every second (to avoid problems we're going to trigger our event every 0.5 seconds). For this kind of job we can use Celery, which gives us workers and scheduled tasks (exactly what we need in our example). To handle the "initial state" situation, our worker will also persist the current value into Redis:

[sourcecode language="python"]
import time

import channels.layers
from asgiref.sync import async_to_sync
from celery import Celery

from ws.consumers import WsConsumer
from ws.redis import redis

app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(0.5, ws_beat.s(), name='beat every 0.5 seconds')

@app.task
def ws_beat(group=WsConsumer.GROUP, event='tic_message'):
    current_time = time.strftime('%X')
    redis.set('time', current_time)
    message = {'time': current_time}
    channel_layer = channels.layers.get_channel_layer()
    async_to_sync(channel_layer.group_send)(group, {'type': event, 'message': message})
[/sourcecode]

Finally we need a javascript client to consume our Websocket (here I'm using ReconnectingWebSocket and axios):

[sourcecode language="javascript"]
let getWsUri = () => {
    // note the parentheses: without them the ternary would swallow the rest of the expression
    return (window.location.protocol === "https:" ? "wss" : "ws") +
        '://' + window.location.host +
        "/time/tic/"
}

let render = value => {
    document.querySelector('#display').innerHTML = value
}

let ws = new ReconnectingWebSocket(getWsUri())

ws.onmessage = e => {
    const data = JSON.parse(e.data);
    render(data.message.time)
}

ws.onopen = async () => {
    let response = await axios.get("/api/initial_state")
    render(response.data.current)
}
[/sourcecode]

Basically that's the source code (plus the Django boilerplate).

Application architecture
The architecture of the application is the following one:

Frontend
The Django application. We can run this application in development with
python manage.py runserver

And in production using an ASGI server (uvicorn in this case):
[sourcecode language="xml"]
uvicorn config.asgi:application --port 8000 --host 0.0.0.0 --workers 1
[/sourcecode]

Celery worker
We run the worker in development mode with:
[sourcecode language="xml"]
celery -A ws worker -l debug
[/sourcecode]

And in production
[sourcecode language="xml"]
celery -A ws worker --uid=nobody --gid=nogroup
[/sourcecode]

We also need the scheduler (Celery beat) to emit our event every 0.5 seconds:
[sourcecode language="xml"]
celery -A ws beat
[/sourcecode]

Message Server for Celery
In this case we’re going to use Redis

Docker
With this application we can use the same Dockerfile for the frontend, the worker and the scheduler, using different entrypoints.

Dockerfile:

[sourcecode language="xml"]
FROM python:3.8

ENV TZ 'Europe/Madrid'
RUN echo $TZ > /etc/timezone && \
    apt-get update && apt-get install -y tzdata && \
    rm /etc/localtime && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
    dpkg-reconfigure -f noninteractive tzdata && \
    apt-get clean

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

ADD . /src
WORKDIR /src

RUN pip install -r requirements.txt

RUN mkdir -p /var/run/celery /var/log/celery
RUN chown -R nobody:nogroup /var/run/celery /var/log/celery
[/sourcecode]

And here is our whole application within a docker-compose file:

[sourcecode language="xml"]
version: '3.4'

services:
  redis:
    image: redis
  web:
    image: clock:latest
    command: /bin/bash ./docker-entrypoint.sh
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
    depends_on:
      - "redis"
    ports:
      - 8000:8000
    environment:
      ENVIRONMENT: prod
      REDIS_HOST: redis
  celery:
    image: clock:latest
    command: celery -A ws worker --uid=nobody --gid=nogroup
    depends_on:
      - "redis"
    environment:
      ENVIRONMENT: prod
      REDIS_HOST: redis
  cron:
    image: clock:latest
    command: celery -A ws beat
    depends_on:
      - "redis"
    environment:
      ENVIRONMENT: prod
      REDIS_HOST: redis
[/sourcecode]

If we want to deploy our application to a K8s cluster we need to migrate our docker-compose file into K8s yaml files. I assume that we've pushed our Docker images to a container registry (such as ECR).

Frontend:
[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clock-web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clock-web-api
      project: clock
  template:
    metadata:
      labels:
        app: clock-web-api
        project: clock
    spec:
      containers:
        - name: web-api
          image: my-ecr-path/clock:latest
          args: ["uvicorn", "config.asgi:application", "--port", "8000", "--host", "0.0.0.0", "--workers", "1"]
          ports:
            - containerPort: 8000
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: clock-app-config
                  key: redis.host
---
apiVersion: v1
kind: Service
metadata:
  name: clock-web-api
spec:
  type: LoadBalancer
  selector:
    app: clock-web-api
    project: clock
  ports:
    - protocol: TCP
      port: 8000 # port exposed internally in the cluster
      targetPort: 8000 # the container port to send requests to
[/sourcecode]

Celery worker
[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clock-celery
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clock-celery
      project: clock
  template:
    metadata:
      labels:
        app: clock-celery
        project: clock
    spec:
      containers:
        - name: clock-celery
          image: my-ecr-path/clock:latest
          args: ["celery", "-A", "ws", "worker", "--uid=nobody", "--gid=nogroup"]
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: clock-app-config
                  key: redis.host
[/sourcecode]

Celery scheduler
[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clock-cron
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clock-cron
      project: clock
  template:
    metadata:
      labels:
        app: clock-cron
        project: clock
    spec:
      containers:
        - name: clock-cron
          image: my-ecr-path/clock:latest
          args: ["celery", "-A", "ws", "beat"]
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: clock-app-config
                  key: redis.host
[/sourcecode]

Redis
[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clock-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clock-redis
      project: clock
  template:
    metadata:
      labels:
        app: clock-redis
        project: clock
    spec:
      containers:
        - name: clock-redis
          image: redis
          ports:
            - containerPort: 6379
              name: clock-redis
---
apiVersion: v1
kind: Service
metadata:
  name: clock-redis
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: clock-redis
[/sourcecode]

Shared configuration
[sourcecode language="xml"]
apiVersion: v1
kind: ConfigMap
metadata:
  name: clock-app-config
data:
  redis.host: "clock-redis"
[/sourcecode]

We can deploy our application to our k8s cluster:

[sourcecode language="xml"]
kubectl apply -f .k8s/
[/sourcecode]

And see it running inside the cluster locally with a port forward

[sourcecode language="xml"]
kubectl port-forward deployment/clock-web-api 8000:8000
[/sourcecode]

And that's all. Our Django application with Websockets using Django Channels, up and running with Docker and also on K8s.

Source code in my github

Deploying Python Application using Docker and Kubernetes

I've been learning how to deploy a Python application to Kubernetes. Here you can see my notes.

Let's start with a dummy Python application. It's a basic Flask web API. Each time we perform a GET request to "/" we increment a counter and return the number of hits. The persistence layer will be a Redis database. The script is very simple:

[sourcecode language="python"]
from flask import Flask
import os
from redis import Redis

redis = Redis(host=os.getenv('REDIS_HOST', 'localhost'),
              port=os.getenv('REDIS_PORT', 6379))
app = Flask(__name__)

@app.route('/')
def hello():
    redis.incr('hits')
    hits = int(redis.get('hits'))
    return f"Hits: {hits}"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
[/sourcecode]
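The counter logic is easy to check in isolation with an in-memory stand-in for the two Redis calls the app uses (`FakeRedis` here is purely illustrative, not a real library):

```python
class FakeRedis:
    """In-memory stand-in implementing only incr() and get()."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

redis = FakeRedis()

def hello():
    redis.incr('hits')
    return f"Hits: {int(redis.get('hits'))}"

print(hello())  # Hits: 1
print(hello())  # Hits: 2
```

Every call increments and re-reads the counter, exactly what the Flask route does against the real Redis.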

First of all we create a virtual environment to ensure that we install our dependencies in isolation:

[sourcecode language="xml"]
python -m venv venv
[/sourcecode]

We activate the virtualenv:

[sourcecode language="xml"]
source venv/bin/activate
[/sourcecode]

And we install our dependencies:

[sourcecode language="xml"]
pip install -r requirements.txt
[/sourcecode]

To be able to run our application we must ensure that we have a Redis database ready. We can run one with Docker:

[sourcecode language="xml"]
docker run -p 6379:6379 redis
[/sourcecode]

Now we can start our application:

[sourcecode language="xml"]
python app.py
[/sourcecode]

We open our browser at http://localhost:5000 and it works.

Now we're going to run our application within a Docker container. First of all we need to create a Docker image from a Dockerfile:

[sourcecode language="xml"]
FROM python:alpine3.8
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

EXPOSE 5000
[/sourcecode]

Now we can build our image:

[sourcecode language="xml"]
docker build -t front .
[/sourcecode]

And now we can run our front image:

[sourcecode language="xml"]
docker run -p 5000:5000 front python app.py
[/sourcecode]

If we open our browser at http://localhost:5000 now, we'll get a 500 error. That's because our Docker container is trying to reach a Redis host on localhost. It worked before, when our application and our Redis were on the same host. Now our API's localhost isn't the same as our host's one.

In our script the Redis host defaults to localhost, but it can be overridden with an environment variable:

[sourcecode language="python"]
redis = Redis(host=os.getenv('REDIS_HOST', 'localhost'),
              port=os.getenv('REDIS_PORT', 6379))
[/sourcecode]
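This fallback is plain `os.getenv`: the environment variable wins when present, the default otherwise:

```python
import os

# without the variable, the default applies
os.environ.pop('REDIS_HOST', None)
print(os.getenv('REDIS_HOST', 'localhost'))   # localhost

# setting it (which is what `docker run --env` does) overrides the default
os.environ['REDIS_HOST'] = '192.168.1.100'
print(os.getenv('REDIS_HOST', 'localhost'))   # 192.168.1.100
```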

so we can pass to our Docker container the real host where our Redis resides (supposing my IP address is 192.168.1.100):

[sourcecode language="xml"]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front python app.py
[/sourcecode]

If we don't want the development server, we can also start our API using gunicorn:

[sourcecode language="xml"]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front gunicorn -w 1 app:app -b 0.0.0.0:5000
[/sourcecode]

And that works. We can start our app manually using Docker, but it's a bit complicated: we need to run two containers (API and Redis) and set up the env variables by hand.
Docker helps us here with docker-compose. We can create a docker-compose.yaml file describing our whole application:

[sourcecode language="xml"]
version: '2'

services:
  front:
    image: front
    build:
      context: ./front
      dockerfile: Dockerfile
    container_name: front
    command: gunicorn -w 1 app:app -b 0.0.0.0:5000
    depends_on:
      - redis
    ports:
      - "5000:5000"
    restart: always
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
  redis:
    image: redis
    ports:
      - "6379:6379"
[/sourcecode]

I can execute it

[sourcecode language="xml"]
docker-compose up
[/sourcecode]

Docker compose is pretty straightforward. But what happens if our production environment is a cluster? docker-compose works fine on a single host, but on a cluster we'll face problems (we need to ensure things like high availability by hand). Docker tried to answer this question with Docker Swarm. Basically, Swarm is docker-compose for a cluster, using almost the same syntax as docker-compose on a single host. Looks good, doesn't it? OK: almost nobody uses it. Since Google released its container orchestrator (Kubernetes, aka K8s), it has become the de facto standard. The good thing about K8s is that it's much better than Swarm (more configurable and more powerful); the bad part is that it isn't as simple and easy to understand as docker-compose.

Let's try to run our project on K8s:

First I start minikube

[sourcecode language="xml"]
minikube start
[/sourcecode]

and I point my Docker client at minikube's Docker daemon, so the images I build are available inside the cluster:

[sourcecode language="xml"]
eval $(minikube docker-env)
[/sourcecode]

The API:

First we create one service:

[sourcecode language="xml"]
apiVersion: v1
kind: Service
metadata:
  name: front-api
spec:
  # types:
  # - ClusterIP: (default) only accessible from within the Kubernetes cluster
  # - NodePort: accessible on a static port on each Node in the cluster
  # - LoadBalancer: accessible externally through a cloud provider's load balancer
  type: LoadBalancer
  # When the node receives a request on the static port (30164)
  # "select pods with the label 'app' set to 'front-api'"
  # and forward the request to one of them
  selector:
    app: front-api
  ports:
    - protocol: TCP
      port: 5000 # port exposed internally in the cluster
      targetPort: 5000 # the container port to send requests to
      nodePort: 30164 # a static port assigned on each node
[/sourcecode]

And one deployment:

[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-api
spec:
  # How many copies of each pod do we want?
  replicas: 1

  selector:
    matchLabels:
      # This must match the labels we set on the pod!
      app: front-api

  # This template field is a regular pod configuration
  # nested inside the deployment spec
  template:
    metadata:
      # Set labels on the pod.
      # This is used in the deployment selector.
      labels:
        app: front-api
    spec:
      containers:
        - name: front-api
          image: front:v1
          args: ["gunicorn", "-w 1", "app:app", "-b 0.0.0.0:5000"]
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: api-config
                  key: redis.host
[/sourcecode]

In order to learn a little bit of K8s I'm using a config map called 'api-config' where I put some information (such as the Redis host, which I pass as an env variable):

[sourcecode language="xml"]
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  redis.host: "back-api"
[/sourcecode]

The Backend: Our Redis database:

First the service:

[sourcecode language="xml"]
apiVersion: v1
kind: Service
metadata:
  name: back-api
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: back-api
[/sourcecode]

And finally the deployment:

[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back-api
  template:
    metadata:
      labels:
        app: back-api
    spec:
      containers:
        - name: back-api
          image: redis
          ports:
            - containerPort: 6379
              name: redis
[/sourcecode]

Before deploying my application to the cluster I need to build my docker image and tag it

[sourcecode language="xml"]
docker build -t front .
[/sourcecode]

[sourcecode language="xml"]
docker tag front front:v1
[/sourcecode]

Now I can deploy my application to my K8s cluster:

[sourcecode language="xml"]
kubectl apply -f .k8s/
[/sourcecode]

If I want to know the external URL of my application in the cluster I can use this command:

[sourcecode language="xml"]
minikube service front-api --url
[/sourcecode]

Then I can see it running using the browser or with curl

[sourcecode language="xml"]
curl $(minikube service front-api --url)
[/sourcecode]

And that's all. I can also delete the whole application:

[sourcecode language="xml"]
kubectl delete -f .k8s/
[/sourcecode]

Source code available in my github

Playing with microservices, Docker, Python an Nameko

In the last projects that I've been involved with, I've been playing, in one way or another, with microservices, queues and things like that. I'm always facing the same tasks: building RPCs, workers, API gateways, ... Because of that, I've been searching for a framework to help me with that kind of stuff. Finally I discovered Nameko. Basically Nameko is the Python tool that I've been looking for. In this post I will create a simple proof of concept to learn how to integrate Nameko within my projects. Let's start.

The POC is a simple API gateway that gives me the local time in ISO format. I can create a simple Python script to do it:

[sourcecode language="python"]
import datetime
from time import time

print(datetime.datetime.fromtimestamp(time()).isoformat())
[/sourcecode]

We can also create a simple Flask API server to consume this information. The idea is to create an RPC worker to generate this information, and also another worker that sends the local time taken from a PostgreSQL database (yes, I know it's not very useful, but it's just an excuse to use a PG database in the microservice).

We're going to create two RPC workers. One gives the local time:

[sourcecode language="python"]
from nameko.rpc import rpc
from time import time
import datetime

class TimeService:
    name = "local_time_service"

    @rpc
    def local(self):
        return datetime.datetime.fromtimestamp(time()).isoformat()
[/sourcecode]

And another one with the date from PostgreSQL:

[sourcecode language="python"]
from nameko.rpc import rpc
from dotenv import load_dotenv
import os
from ext.pg import PgService

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

class TimeService:
    name = "db_time_service"
    conn = PgService(os.getenv('DSN'))

    @rpc
    def db(self):
        with self.conn:
            with self.conn.cursor() as cur:
                cur.execute("select localtimestamp")
                timestamp = cur.fetchone()
        return timestamp[0]
[/sourcecode]

I've created a service called PgService just to learn how to create dependency providers in Nameko:

[sourcecode language="python"]
from nameko.extensions import DependencyProvider
import psycopg2

class PgService(DependencyProvider):

    def __init__(self, dsn):
        self.dsn = dsn

    def get_dependency(self, worker_ctx):
        return psycopg2.connect(self.dsn)
[/sourcecode]

Now we only need to set up the API gateway. With Nameko we can create HTTP entrypoints too (in the same way as we create RPCs), but I want to use it with Flask:

[sourcecode language="python"]
from flask import Flask
from nameko.standalone.rpc import ServiceRpcProxy
from dotenv import load_dotenv
import os

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

app = Flask(__name__)

def rpc_proxy(service):
    config = {'AMQP_URI': os.getenv('AMQP_URI')}
    return ServiceRpcProxy(service, config)

@app.route('/')
def hello():
    return "Hello"

@app.route('/local')
def local_time():
    with rpc_proxy('local_time_service') as rpc:
        time = rpc.local()

    return time

@app.route('/db')
def db_time():
    with rpc_proxy('db_time_service') as rpc:
        time = rpc.db()

    return time

if __name__ == '__main__':
    app.run()
[/sourcecode]

Since I want to run my POC with Docker, here is the docker-compose file to set up the project:

[sourcecode language="xml"]
version: '3.4'

services:
  api:
    image: nameko/api
    container_name: nameko.api
    hostname: api
    ports:
      - "8080:8080"
    restart: always
    links:
      - rabbit
      - db.worker
      - local.worker
    environment:
      - ENV=1
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
    build:
      context: ./api
      dockerfile: .docker/Dockerfile-api
    #volumes:
    #  - ./api:/usr/src/app:ro
    command: flask run --host=0.0.0.0 --port 8080
  db.worker:
    container_name: nameko.db.worker
    image: nameko/db.worker
    restart: always
    build:
      context: ./workers/db.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  local.worker:
    container_name: nameko.local.worker
    image: nameko/local.worker
    restart: always
    build:
      context: ./workers/local.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  rabbit:
    container_name: nameko.rabbit
    image: rabbitmq:3-management
    restart: always
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
  pg:
    container_name: nameko.pg
    image: nameko/pg
    restart: always
    build:
      context: ./pg
      dockerfile: .docker/Dockerfile-pg
    #ports:
    #  - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
[/sourcecode]

And that's all. Two Nameko RPC services working together behind an API gateway.

Code available in my github

Monitoring the bandwidth with Grafana, InfluxDB and Docker

Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy about switching to a fiber connection. Finally I changed, but while my Internet company was solving the incident, I started to hack on a quick and dirty script that monitors my connection speed (just for fun and to practise with InfluxDB and Grafana).

Today I've lost that quick and dirty script (please Gonzalo, always keep a working backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It's simple: "dd if=/dev/disk3 of=pi3.img" 🙂) and I want to rebuild it. This time I want to use Docker (just for fun). Let's start.

To monitor the bandwidth we only need the speedtest-cli API. We can use it from the command line and, as it's a Python library, we can create a Python script that uses it:

[sourcecode language="python"]
import datetime
import logging
import os
import speedtest
import time
from dotenv import load_dotenv
from influxdb import InfluxDBClient

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

def persists(measurement, fields, time):
    logging.info("{} {} {}".format(time, measurement, fields))

    influx_client.write_points([{
        "measurement": measurement,
        "time": time,
        "fields": fields
    }])

influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)

def get_speed():
    logging.info("Calculating speed ...")
    s = speedtest.Speedtest()
    s.get_best_server()
    s.download()
    s.upload()

    return s.results.dict()

def loop(sleep):
    current_time = datetime.datetime.utcnow().isoformat()
    speed = get_speed()

    persists(measurement='download', fields={"value": speed['download']}, time=current_time)
    persists(measurement='upload', fields={"value": speed['upload']}, time=current_time)
    persists(measurement='ping', fields={"value": speed['ping']}, time=current_time)

    time.sleep(sleep)

while True:
    loop(sleep=60 * 60) # each hour
[/sourcecode]

Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana within the Docker files instead of opening the browser and creating the datasource and dashboard by hand. After a couple of hours navigating GitHub repositories I finally created exactly what I needed for this post. Basically, it's a custom entrypoint for my Grafana host that creates the datasource and dashboard (via Grafana's API).
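For reference, the datasource half of that entrypoint boils down to POSTing a JSON payload to Grafana's /api/datasources endpoint. The field values below are illustrative and must match the InfluxDB service of the compose file:

```python
import json

# hypothetical datasource definition for Grafana's HTTP API
datasource = {
    "name": "influxdb",
    "type": "influxdb",
    "access": "proxy",
    "url": "http://influxdb:8086",  # the influxdb service name in docker-compose
    "database": "speedtest",        # must match the database the script writes to
    "isDefault": True,
}
payload = json.dumps(datasource)

# inside the entrypoint something like this sends it:
# curl -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" http://admin:admin@localhost:3000/api/datasources
```

The dashboard is created the same way, POSTing its JSON model to /api/dashboards/db.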

[sourcecode language="xml"]
version: '3'

services:
  check:
    image: gonzalo123.check
    restart: always
    volumes:
    - ./src/beat:/code/src
    depends_on:
    - influxdb
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-check
    networks:
    - app-network
    command: /bin/sh start.sh
  influxdb:
    image: influxdb:latest
    restart: always
    environment:
    - INFLUXDB_INIT_PWD="${INFLUXDB_PASS}"
    - PRE_CREATE_DB="${INFLUXDB_DB}"
    volumes:
    - influxdb-data:/data
    networks:
    - app-network
  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
    - "3000:3000"
    depends_on:
    - influxdb
    volumes:
    - grafana-db:/var/lib/grafana
    - grafana-log:/var/log/grafana
    - grafana-conf:/etc/grafana
    networks:
    - app-network

networks:
  app-network:
    driver: bridge

volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
  influxdb-data:
    driver: local
[/sourcecode]

And that’s all. My Internet connection supervised again.

Project available in my github.

Working with SAPUI5 locally (part 3). Adding more services in Docker

In the previous project we moved one project to Docker. The idea was to keep exactly the same functionality (without even touching anything within the source code). Now we're going to add more services. Yes, I know, it looks like overengineering (it is exactly overengineering, indeed), but I want to build something with different services working together. Let's start.

We’re going to change a little bit our original project. Now our frontend will only have one button. This button will increment the number of clicks but we’re going to persists this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit one event to a RabbitMQ message broker. We’ll have one worker service listening to this event and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value), will be via websockets.

With those premises we are going to need:

  • Frontend: UI5 application
  • Backend: PHP/lumen application
  • Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
  • Nginx server
  • PostgreSQL database.
  • RabbitMQ message broker.

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.

Here we can see the docker-compose file to set up all the services

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    volumes:
    - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - app-network
  io:
    image: gonzalo123.io
    ports:
    - "9999:9999"
    restart: always
    volumes:
    - ./src/io:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
    - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    ports:
    - "5432:5432"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
    - app-network
  rabbit:
    image: rabbitmq:3-management
    container_name: gonzalo123.rabbit
    restart: always
    ports:
    - "15672:15672"
    - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
    - app-network

networks:
  app-network:
    driver: bridge
[/sourcecode]
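Values like `${POSTGRES_PASSWORD}` in the compose file above are interpolated by docker-compose from the shell environment or a `.env` file before the services start. The substitution rule can be sketched with Python’s `string.Template` (the example values here are made up):

```python
from string import Template

# Hypothetical .env contents; docker-compose reads KEY=value pairs like these
env = {
    "POSTGRES_PASSWORD": "secret",
    "POSTGRES_USER": "username",
    "POSTGRES_DB": "gonzalo",
}

# A compose line before interpolation
line = "POSTGRES_USER: ${POSTGRES_USER}"

# ${KEY} placeholders are replaced with the values from the environment
resolved = Template(line).substitute(env)
```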

We’re going to use the same docker files as in the previous post, but we also need new ones for the worker, the database server and the message queue:

Worker:
[sourcecode language="xml"]
FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./io .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]
[/sourcecode]

The worker script is a simple script that serves the socket.io server and emits a websocket event for every message from the RabbitMQ queue.

[sourcecode language="js"]
var amqp = require('amqp'),
    httpServer = require('http').createServer(),
    io = require('socket.io')(httpServer, {
      origins: '*:*',
    }),
    pg = require('pg')
;

require('dotenv').config();
var pgClient = new pg.Client(process.env.DB_DSN);

rabbitMq = amqp.createConnection({
  host: process.env.RABBIT_HOST,
  port: process.env.RABBIT_PORT,
  login: process.env.RABBIT_USER,
  password: process.env.RABBIT_PASS,
});

var sql = 'SELECT clickCount FROM docker.clicks';

// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });
  });

  rabbitMq.on('ready', function() {
    var queue = rabbitMq.queue('ui5');
    queue.bind('#');

    queue.subscribe(function(message) {
      pgClient.query(sql, function(err, result) {
        var count = parseInt(result.rows[0]['clickcount']);
        count = count + parseInt(message.data.toString('utf8'));
        pgClient.query('UPDATE docker.clicks SET clickCount = $1', [count],
            function(err) {
              io.emit('click', {count: count});
            });
      });
    });
  });
});

httpServer.listen(process.env.IO_PORT);
[/sourcecode]
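The update step in the worker (read the current count, add the increment carried in the message body, write the sum back) can be isolated as a pure function. A Python sketch of that arithmetic, including the byte decoding the node script does via `toString('utf8')`:

```python
def next_count(current, message_body):
    # current comes from the clicks table (may arrive as a string);
    # message_body is the raw RabbitMQ payload, e.g. b"1" for one click
    return int(current) + int(message_body.decode("utf8"))
```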

Database server:
[sourcecode language="xml"]
FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/
[/sourcecode]

As we can see, we’re going to generate the database structure on the first build
[sourcecode language="sql"]
CREATE SCHEMA docker;

CREATE TABLE docker.clicks (
    clickCount numeric(8) NOT NULL
);

ALTER TABLE docker.clicks
    OWNER TO username;

INSERT INTO docker.clicks(clickCount) values (0);
[/sourcecode]

With the RabbitMQ server we’re going to use the official docker image, so we don’t need to create a Dockerfile

We’ve also changed our Nginx configuration a little bit. We want to use Nginx to serve the backend and also the socket.io server. That’s because we don’t want to expose different ports to the internet.

[sourcecode language="xml"]
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass "http://io:9999";
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
[/sourcecode]

To avoid CORS issues we can also use the SCP destination (the localneo proxy in this example) to serve socket.io. So we need to:

  • change our neo-app.json file

[sourcecode language="js"]
"routes": [
  {
    "path": "/socket.io",
    "target": {
      "type": "destination",
      "name": "SOCKETIO"
    },
    "description": "SOCKETIO"
  }
],
[/sourcecode]

And basically that’s all. Here we can also use a “production” docker-compose file without exposing all ports and without mapping the filesystem to our local machine (useful when we’re developing)

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
    - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
    - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "80:8000"
    restart: always
    volumes:
    - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - app-network
  io:
    image: gonzalo123.io
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
    - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
    - app-network
  rabbit:
    image: rabbitmq:3-management
    restart: always
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
    - app-network

networks:
  app-network:
    driver: bridge
[/sourcecode]

And that’s all. The full project is available in my github account

Working with SAPUI5 locally (part 2). Now with docker

In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using docker to set up our environment.

I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

The frontend is built over localneo. As it’s a node application we’ll use a node:alpine base image

[sourcecode language="xml"]
FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./frontend .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]
[/sourcecode]

In docker-compose we only need to map the port that we’ll expose in our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating our container each time we change the code.

[sourcecode language="xml"]
ui5:
  image: gonzalo123.ui5
  ports:
  - "8000:8000"
  restart: always
  build:
    context: ./src
    dockerfile: .docker/Dockerfile-ui5
  volumes:
  - ./src/frontend:/code/src
  networks:
  - api-network
[/sourcecode]

The backend is a PHP application. We can set up a PHP application using different architectures. In this project we’ll use nginx and PHP-FPM.

For nginx we’ll use the following Dockerfile

[sourcecode language="xml"]
FROM nginx:1.13-alpine

EXPOSE 80

COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
COPY ./backend /code/src
[/sourcecode]

And for the PHP host the following one (with xdebug to enable debugging and breakpoints):

[sourcecode language="xml"]
FROM php:7.1-fpm

ENV PHP_XDEBUG_REMOTE_ENABLE 1

RUN apt-get update && apt-get install -my \
    git \
    libghc-zlib-dev && \
    apt-get clean

RUN apt-get install -y libpq-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer global require "laravel/lumen-installer"
ENV PATH ~/.composer/vendor/bin:$PATH

COPY ./backend /code/src
[/sourcecode]

And basically that’s all. Here is the full docker-compose file

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge
[/sourcecode]

If we want to use this project we only need to:

• clone the repo from github
• run ./ui5 up

With this configuration we’re exposing two ports: 8080 for the frontend and 8000 for the backend. We’re also mapping our local filesystem to the containers to avoid regenerating our containers each time we change the code.

We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it). We’ll use SCP to host the frontend.

This configuration is just an example without filesystem mapping, without xdebug in the backend and without exposing the backend externally (only the frontend can use it)

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge
[/sourcecode]

And that’s all. You can see all the source code in my github account

Playing with Docker, MQTT, Grafana, InfluxDB, Python and Arduino

I must admit this post is just an excuse to play with Grafana and InfluxDB. InfluxDB is a cool database especially designed to work with time series. Grafana is an open source tool for time series analytics. I want to build a simple prototype. The idea is:

• One Arduino device (esp32) emits a MQTT event to a mosquitto server. I’ll use a potentiometer to emulate one sensor (imagine here, for example, a temperature sensor instead of a potentiometer). I’ve used this circuit before in other projects
• One Python script will be listening to the MQTT event in my Raspberry Pi and it will persist the value to the InfluxDB database
• I will monitor the state of the time series given by the potentiometer with Grafana
• I will create one alert in Grafana (for example when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state
• One microservice (a Python Flask server) will be listening to the webhook and it will emit a MQTT event depending on the state
• Another Arduino device (one NodeMcu in this case) will be listening to this MQTT event and it will activate a LED. Red one if the alert is ON and green one if the alert is OFF

The server
As I said before we’ll need three servers:

• MQTT server (mosquitto)
• InfluxDB server
• Grafana server

We’ll use Docker. I’ve got a Docker host running in a Raspberry Pi 3. The Raspberry Pi is an ARM device so we need docker images for this architecture.

[sourcecode language="xml"]
version: '2'

services:
  mosquitto:
    image: pascaldevink/rpi-mosquitto
    container_name: moquitto
    ports:
    - "9001:9001"
    - "1883:1883"
    restart: always

  influxdb:
    image: hypriot/rpi-influxdb
    container_name: influxdb
    restart: always
    environment:
    - INFLUXDB_INIT_PWD="password"
    - PRE_CREATE_DB="iot"
    ports:
    - "8083:8083"
    - "8086:8086"
    volumes:
    - ~/docker/rpi-influxdb/data:/data

  grafana:
    image: fg2it/grafana-armhf:v4.6.3
    container_name: grafana
    restart: always
    ports:
    - "3000:3000"
    volumes:
    - grafana-db:/var/lib/grafana
    - grafana-log:/var/log/grafana
    - grafana-conf:/etc/grafana

volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
[/sourcecode]

ESP32
The Esp32 part is very simple. We only need to connect our potentiometer to the Esp32. The potentiometer has three pins: Gnd, Signal and Vcc. For the signal we’ll use pin 32.

We only need to configure our Wifi network, connect to our MQTT server and emit the potentiometer value within each loop.

[sourcecode language="c"]
#include <PubSubClient.h>
#include <WiFi.h>

const int potentiometerPin = 32;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// MQTT configuration
const char* server = "192.168.1.111";
const char* topic = "/pot";
const char* clientName = "com.gonzalo123.esp32";

String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void mqttEmit(String topic, String value)
{
  client.publish((char*) topic.c_str(), (char*) value.c_str());
}

void setup() {
  Serial.begin(115200);

  wifiConnect();
  client.setServer(server, 1883);
  delay(1500);
}

void loop() {
  if (!client.connected()) {
    mqttReConnect();
  }
  int current = (int) ((analogRead(potentiometerPin) * 100) / 4095);
  mqttEmit(topic, (String) current);
  delay(500);
}
[/sourcecode]
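The loop above scales the ESP32’s 12-bit ADC reading (0..4095) to a 0..100 value with integer arithmetic; the same mapping can be sketched in Python:

```python
def to_percent(raw):
    # raw is an analogRead() value from the ESP32's 12-bit ADC (0..4095);
    # integer division matches the truncating int cast in the sketch
    return (raw * 100) // 4095
```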

Mqtt listener

The esp32 emits an event (“/pot”) with the value of the potentiometer. So we’re going to create a MQTT listener that listens to MQTT and persists the value to InfluxDB.

[sourcecode language="python"]
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient
import datetime
import logging

def persists(msg):
    current_time = datetime.datetime.utcnow().isoformat()
    json_body = [
        {
            "measurement": "pot",
            "tags": {},
            "time": current_time,
            "fields": {
                "value": int(msg.payload)
            }
        }
    ]
    logging.info(json_body)
    influx_client.write_points(json_body)

logging.basicConfig(level=logging.INFO)
influx_client = InfluxDBClient('docker', 8086, database='iot')
client = mqtt.Client()

client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/pot")
client.on_message = lambda client, userdata, msg: persists(msg)

client.connect("docker", 1883, 60)

client.loop_forever()
[/sourcecode]

Grafana
In grafana we need to do two things. First to create one datasource from our InfluxDB server. It’s pretty straightforward to do it.

Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.

That’s the query that I’m using to plot the main graph

[sourcecode language="sql"]
SELECT
  last("value") FROM "pot"
WHERE
  time >= now() - 5m
GROUP BY
  time($interval) fill(previous)
[/sourcecode]
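The `fill(previous)` clause tells InfluxDB to carry the last seen value forward over intervals where no point landed, which keeps the graph continuous; a toy model of that behavior:

```python
def fill_previous(buckets):
    # buckets holds one aggregated value per time interval,
    # None where the interval received no points
    filled, last = [], None
    for value in buckets:
        if value is not None:
            last = value
        filled.append(last)
    return filled
```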

Here we can see the dashboard

And here my alert configuration:

I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify when the state of the alert changes

Webhook listener
Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers but in this project I’ll use Python and Flask.

We need to handle HTTP Basic Auth and emit a MQTT event. MQTT is a very simple protocol but it has one very nice feature that fits like a glove here. Let me explain it:

Imagine that we’ve got our system up and running and the state is “ok”. Now we connect one device (for example one big red/green light). Since the “ok” event was fired before we connected the light, our green light will not be switched on. We need to wait until the “alert” event fires if we want to see any light. That’s not cool.

MQTT allows us to “retain” messages. That means that we can emit messages with the “retain” flag to one topic and when we connect one device later to this topic, it will receive the message. Here it’s exactly what we need.
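A toy model of the retain semantics (this is only an illustration of the behavior, not a real broker; a real broker also delivers to already-connected subscribers):

```python
class ToyBroker:
    # Minimal model of MQTT retain: the broker remembers the last
    # retained payload per topic
    def __init__(self):
        self.retained = {}

    def publish(self, topic, payload, retain=False):
        if retain:
            # keep the last retained payload for the topic
            self.retained[topic] = payload

    def subscribe(self, topic):
        # a device connecting later still receives the retained message
        return self.retained.get(topic)
```

With retain enabled, a led connected after the alert fired still learns the current state, which is exactly the scenario described above.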

[sourcecode language="python"]
from flask import Flask
from flask import request
from flask_httpauth import HTTPBasicAuth
import paho.mqtt.client as mqtt
import json

client = mqtt.Client()

app = Flask(__name__)
auth = HTTPBasicAuth()

# http basic auth credentials
users = {
    "user": "password"
}

@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None

@app.route('/alert', methods=['POST'])
@auth.login_required
def alert():
    client.connect("docker", 1883, 60)
    data = json.loads(request.data.decode('utf-8'))
    if data['state'] == 'alerting':
        client.publish(topic="/alert", payload="1", retain=True)
    elif data['state'] == 'ok':
        client.publish(topic="/alert", payload="0", retain=True)

    client.disconnect()

    return "ok"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
[/sourcecode]
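The state-to-payload mapping inside the handler can be pulled out and checked on its own. The “alerting” and “ok” strings come from Grafana’s webhook body; returning None for any other state (e.g. “no_data”) is an assumption of this sketch:

```python
def alert_payload(state):
    # Grafana sends "alerting" when the alert fires and "ok" when it
    # recovers; map them to the MQTT payloads the leds understand
    mapping = {"alerting": "1", "ok": "0"}
    return mapping.get(state)
```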

Nodemcu

Finally the Nodemcu. This part is similar to the esp32 one. Our leds are in pins 4 and 5. We also need to configure the Wifi and connect to the MQTT server. Nodemcu and esp32 are similar devices but not the same. For example we need to use different libraries to connect to the wifi.

This device will be listening to the MQTT event and trigger one led or the other depending on the state

[sourcecode language="c"]
#include <PubSubClient.h>
#include <ESP8266WiFi.h>

const int ledRed = 4;
const int ledGreen = 5;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// mqtt configuration
const char* server = "192.168.1.111";
const char* topic = "/alert";
const char* clientName = "com.gonzalo123.nodemcu";

int value;
int percent;
String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {

  Serial.print("Message arrived [");
  Serial.print(topic);

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }
  cleanLeds();
  int value = data.toInt();
  switch (value) {
    case 1:
      digitalWrite(ledRed, HIGH);
      break;
    case 0:
      digitalWrite(ledGreen, HIGH);
      break;
  }
  Serial.print("] value:");
  Serial.println((int) value);
}

void cleanLeds() {
  digitalWrite(ledRed, LOW);
  digitalWrite(ledGreen, LOW);
}

void setup() {
  Serial.begin(9600);
  pinMode(ledRed, OUTPUT);
  pinMode(ledGreen, OUTPUT);
  cleanLeds();
  Serial.println("start");

  wifiConnect();
  client.setServer(server, 1883);
  client.setCallback(callback);

  delay(1500);
}

void loop() {
  Serial.print(".");
  if (!client.connected()) {
    mqttReConnect();
  }

  client.loop();
  delay(500);
}
[/sourcecode]

Here you can see the working prototype in action

And here the source code

Playing with Docker, Silex, Python, Node and WebSockets

I’m learning Docker. In this post I want to share a little experiment that I have done. I know the code looks like over-engineering but it’s just an excuse to build something with docker and containers. Let me explain it a little bit.

The idea is to build a time clock in the browser. Something like this:

Clock

Yes I know. We can do it only with js, css and html but we want to hack a little bit more. The idea is to create:

• A Silex/PHP frontend
• A WebSocket server with socket.io/node
• A Python script to obtain the current time

The WebSocket server will open 2 ports: one port to serve webSockets (socket.io) and another one as a http server (express). The Python script will get the current time and it’ll send it to the webSocket server. Finally one frontend (Silex) will be listening to the WebSocket’s event and it will render the current time.

That’s the WebSocket server (with socket.io and express)
[sourcecode language="js"]
var
    express = require('express'),
    expressApp = express(),
    server = require('http').Server(expressApp),
    io = require('socket.io')(server, {origins: 'localhost:*'})
;

expressApp.get('/tic', function (req, res) {
  io.sockets.emit('time', req.query.time);
  res.json('OK');
});

expressApp.listen(6400, '0.0.0.0');

server.listen(8080);
[/sourcecode]

That’s our Python script

[sourcecode language="python"]
from time import gmtime, strftime, sleep
import httplib2

h = httplib2.Http()
while True:
    (resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
    sleep(1)
[/sourcecode]
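The script formats the current UTC time as HH:MM:SS and appends it as a query parameter; the formatting step in isolation (Python 3 here, while the original targets the python:2 image):

```python
from time import gmtime, strftime

def current_time_param():
    # builds the query-string fragment the script appends to /tic
    return "time=" + strftime("%H:%M:%S", gmtime())
```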

And our Silex frontend
[sourcecode language="php"]
use Silex\Application;
use Silex\Provider\TwigServiceProvider;

$app = new Application(['debug' => true]);
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.twig', []);
});

$app->run();
[/sourcecode]

using this twig template

[sourcecode language="html"]
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Docker example</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
    <link href="css/app.css" rel="stylesheet">
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
</head>
<body>
<div class="site-wrapper">
    <div class="site-wrapper-inner">
        <div class="cover-container">
            <div class="inner cover">
                <h1 class="cover-heading">
                    <div id="display">
                        display
                    </div>
                </h1>
            </div>
        </div>
    </div>
</div>
<script src="//localhost:8080/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
    var socket = io.connect('//localhost:8080');

    $(function () {
        socket.on('time', function (data) {
            $('#display').html(data);
        });
    });
</script>
</body>
</html>
[/sourcecode]

The idea is to use one Docker container for each process. I like to have all the code in one place, so all containers will share the same volume with the source code.

First the node container (WebSocket server)

[sourcecode language="text"]
FROM node:argon

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/node

EXPOSE 8080 6400
[/sourcecode]

Now the python container
[sourcecode language="text"]
FROM python:2

RUN pip install httplib2

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/python
[/sourcecode]

And finally the frontend container (apache2 with Ubuntu 16.04)

[sourcecode language="text"]
FROM ubuntu:16.04

RUN locale-gen es_ES.UTF-8
RUN update-locale LANG=es_ES.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
RUN apt-get clean -y

COPY ./apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf

RUN mkdir -p /mnt/src

RUN a2enmod rewrite
RUN a2enmod proxy
RUN a2enmod mpm_prefork

RUN chown -R www-data:www-data /mnt/src
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_SERVERADMIN admin@localhost
ENV APACHE_SERVERNAME localhost

EXPOSE 80
[/sourcecode]

Now we’ve got the three containers and we want to use them all together. We’ll use a docker-compose.yml file. The web container will expose port 80 and the node container 8080. The node container also opens 6400, but this is an internal port: we don’t need to access it from outside. Only the Python container needs to access this port. Because of that, 6400 is not mapped to any port in docker-compose

[sourcecode language="text"]
version: '2'

services:
  web:
    image: gonzalo123/example_web
    container_name: example_web
    ports:
    - "80:80"
    restart: always
    depends_on:
    - node
    build:
      context: ./images/php
      dockerfile: Dockerfile
    entrypoint:
    - /usr/sbin/apache2
    - -D
    - FOREGROUND
    volumes:
    - ./src:/mnt/src

  node:
    image: gonzalo123/example_node
    container_name: example_node
    ports:
    - "8080:8080"
    restart: always
    build:
      context: ./images/node
      dockerfile: Dockerfile
    entrypoint:
    - npm
    - start
    volumes:
    - ./src:/mnt/src

  python:
    image: gonzalo123/example_python
    container_name: example_python
    restart: always
    depends_on:
    - node
    build:
      context: ./images/python
      dockerfile: Dockerfile
    entrypoint:
    - python
    - tic.py
    volumes:
    - ./src:/mnt/src
[/sourcecode]

And that’s all. We only need to start our containers
[sourcecode language="bash"]
docker-compose up --build -d
[/sourcecode]

and open our browser at http://localhost to see our Time clock

Full source code available within my github account