Sending logs to AWS CloudWatch with a sidecar pattern and Python

In a Docker Swarm environment, the sidecar pattern is a common architectural approach used to extend the functionality of a primary container without directly modifying it.

First there’s a primary container. This is the main application or service you want to run within a Docker Swarm service. It’s responsible for the core functionality of your application. In our case, this container will be the log generator, and it will write its logs in JSON format.

The sidecar container is a secondary container that runs alongside the primary one. It’s tightly coupled with the primary container and assists it by providing additional services or functionality. In our example, the sidecar’s responsibility will be to push logs to AWS CloudWatch. The idea is to share a Docker volume between both containers; with this technique, our primary container doesn’t pay any network latency while generating logs.
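
For example, the primary container could write its logs with something like this (a minimal sketch, not part of the original code; the only assumptions are one JSON document per line and the '@timestamp' and 'logger' fields, which the sidecar reads later):

import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # one JSON document per line; '@timestamp' in ISO format
        return json.dumps({
            '@timestamp': datetime.now(timezone.utc).isoformat(),
            'logger': record.name,
            'level': record.levelname,
            'message': record.getMessage(),
        })

handler = logging.FileHandler('/src/logs/api.log')
handler.setFormatter(JsonFormatter())
logger = logging.getLogger('api')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('hello from the primary container')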

The idea is to generate something like this:

version: '3.6'

services:
  api:
    image: api:latest
    logging:
      options:
        max-size: 10m
    deploy:
      restart_policy:
        condition: any
    volumes:
      - logs_volume:/src/logs
    environment:
      - ENVIRONMENT=production
      - PROCESS_ID=api
    ports:
      - 5000:5000
    command: gunicorn -w 1 app:app -b 0.0.0.0:5000 --timeout 180

  filebeat:
    image: cw:production
    command: bash -c "python cw.py && sleep 1m"
    deploy:
      restart_policy:
        condition: any
    environment:
      - LOG_GROUP=python_logs_example
      - LOG_STREAM_PREFIX=default
    volumes:
      - logs_volume:/src/logs
volumes:
  logs_volume:

Let’s go. First we need to set up our AWS credentials. We can use a profile or an IAM user:

if AWS_PROFILE_NAME:
    session = boto3.Session(profile_name=AWS_PROFILE_NAME, region_name=AWS_REGION)
else:
    session = boto3.Session(
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        region_name=AWS_REGION)

logs = session.client('logs')

Then we need to set up the log group and log stream in CloudWatch. We could create them by hand, but I prefer to create them programmatically if they don’t exist.

def init_cloudwatch_stream():
    log_stream_name = f"{LOG_STREAM_PREFIX}_{datetime.now().strftime('%Y%m%d')}"

    log_groups = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)['logGroups']
    if not any(group['logGroupName'] == LOG_GROUP for group in log_groups):
        logs.create_log_group(logGroupName=LOG_GROUP)

    log_streams = logs.describe_log_streams(
        logGroupName=LOG_GROUP,
        logStreamNamePrefix=log_stream_name
    )['logStreams']

    if not any(stream['logStreamName'] == log_stream_name for stream in log_streams):
        logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=log_stream_name)

    return log_stream_name

Now we need to upload the logs to CloudWatch using put_log_events. Every call after the first one must include the sequenceToken returned by the previous call. I use this trick to handle that:

function_parameters = dict(
    logGroupName=LOG_GROUP,
    logStreamName=log_stream_name
)

for f in glob(f'{LOG_PATH}/*.{LOG_EXTENSION}'):
    function_parameters['logEvents'] = get_log_events_from_file(f)
    response = logs.put_log_events(**function_parameters)
    function_parameters['sequenceToken'] = response['nextSequenceToken']
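
If the stream already contains events from a previous run, the very first call (the one without sequenceToken) will be rejected. Inside the loop we could recover like this (a sketch, assuming the expectedSequenceToken field documented for InvalidSequenceTokenException is present in the error response):

try:
    response = logs.put_log_events(**function_parameters)
except logs.exceptions.InvalidSequenceTokenException as e:
    # retry with the token CloudWatch says it expects
    function_parameters['sequenceToken'] = e.response['expectedSequenceToken']
    response = logs.put_log_events(**function_parameters)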

We also need to read the log files, adapting the fields to our needs:

def get_log_events_from_file(file):
    exclude_fields = ('@timestamp', 'logger')
    return [
        dict(
            timestamp=int(datetime.fromisoformat(d['@timestamp']).timestamp() * 1000),
            message=json.dumps({k: v for k, v in d.items() if k not in exclude_fields})
        ) for d in [json.loads(linea) for linea in open(file, 'r')]]
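
One caveat: put_log_events expects the events of each batch in chronological order, so depending on how the files are written it may be necessary to sort them first:

function_parameters['logEvents'] = sorted(
    get_log_events_from_file(f), key=lambda event: event['timestamp'])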

I like to have all the settings of my application in a file called settings.py. It’s a pattern I’ve copied from Django. In this file I also read environment variables from a dotenv file.

import os
from pathlib import Path

from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent
ENVIRONMENT = os.getenv('ENVIRONMENT', 'local')

load_dotenv(dotenv_path=Path(BASE_DIR).resolve().joinpath('env', ENVIRONMENT, '.env'))

AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')

AWS_PROFILE_NAME = os.getenv('AWS_PROFILE_NAME', False)
AWS_REGION = os.getenv('AWS_REGION')

LOG_GROUP = os.getenv('LOG_GROUP', 'python_logs_example')
LOG_STREAM_PREFIX = os.getenv('LOG_STREAM_PREFIX', 'default')

LOG_EXTENSION = 'log'
LOG_PATH = os.getenv('LOG_PATH', Path(BASE_DIR).resolve())

And that’s all: your logs are uploaded to CloudWatch by a background process, decoupled from the main one.

Source code available in my github.

Deploying Python Application using Docker and Kubernetes

I’ve been learning how to deploy a Python application to Kubernetes. Here are my notes:

Let’s start with a dummy Python application: a basic Flask web API. Each time we perform a GET request to “/” we increment a counter and return the number of hits. The persistence layer will be a Redis database. The script is very simple:

[sourcecode language="python"]
from flask import Flask
import os
from redis import Redis

redis = Redis(host=os.getenv('REDIS_HOST', 'localhost'),
              port=os.getenv('REDIS_PORT', 6379))
app = Flask(__name__)

@app.route('/')
def hello():
    redis.incr('hits')
    hits = int(redis.get('hits'))
    return f"Hits: {hits}"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
[/sourcecode]

First of all we create a virtual environment to ensure that we install our dependencies in isolation:

[sourcecode language="xml"]
python -m venv venv
[/sourcecode]

We activate the virtualenv:

[sourcecode language="xml"]
source venv/bin/activate
[/sourcecode]

And we install our dependencies:

[sourcecode language="xml"]
pip install -r requirements.txt
[/sourcecode]
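
The requirements.txt file isn’t shown in the post; given the imports and the gunicorn command used later, it would be something like this (versions omitted):

[sourcecode language="xml"]
flask
redis
gunicorn
[/sourcecode]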

To be able to run our application we must ensure that we have a Redis database ready. We can run one with Docker:

[sourcecode language="xml"]
docker run -p 6379:6379 redis
[/sourcecode]

Now we can start our application:

[sourcecode language="xml"]
python app.py
[/sourcecode]

We open http://localhost:5000 in our browser and it works.
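
Or from the command line (the counter increments on every request):

[sourcecode language="xml"]
curl http://localhost:5000
# Hits: 1
[/sourcecode]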

Now we’re going to run our application within a Docker container. First of all we need to create a Docker image from a Dockerfile:

[sourcecode language="xml"]
FROM python:alpine3.8
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

EXPOSE 5000
[/sourcecode]

Now we can build our image:

[sourcecode language="xml"]
docker build -t front .
[/sourcecode]

And now we can run our front image:

[sourcecode language="xml"]
docker run -p 5000:5000 front python app.py
[/sourcecode]

If we open our browser now at http://localhost:5000 we’ll get a 500 error. That’s because our Docker container is trying to reach Redis on localhost. It worked before, when our application and Redis lived on the same host, but now our API’s localhost isn’t the same as our host’s.

In our script the Redis host is localhost by default, but it can be overridden with an environment variable:

[sourcecode language="python"]
redis = Redis(host=os.getenv('REDIS_HOST', 'localhost'),
              port=os.getenv('REDIS_PORT', 6379))
[/sourcecode]

so we can pass the real Redis host to our Docker container (supposing my IP address is 192.168.1.100):

[sourcecode language="xml"]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front python app.py
[/sourcecode]

If we don’t want the development server, we can also start our API using gunicorn:

[sourcecode language="xml"]
docker run -p 5000:5000 --env REDIS_HOST=192.168.1.100 front gunicorn -w 1 app:app -b 0.0.0.0:5000
[/sourcecode]

And that works, but starting our app manually with Docker is a bit cumbersome: we need to run two containers (API and Redis) and set up the environment variables by hand. Docker helps us here with docker-compose. We can create a docker-compose.yaml file describing our whole application:

[sourcecode language="xml"]
version: '2'

services:
  front:
    image: front
    build:
      context: ./front
      dockerfile: Dockerfile
    container_name: front
    command: gunicorn -w 1 app:app -b 0.0.0.0:5000
    depends_on:
      - redis
    ports:
      - "5000:5000"
    restart: always
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
  redis:
    image: redis
    ports:
      - "6379:6379"
[/sourcecode]

I can execute it:

[sourcecode language="xml"]
docker-compose up
[/sourcecode]

Docker Compose is pretty straightforward. But what happens if our production environment is a cluster? docker-compose works fine on a single host, but on a cluster we’ll face problems (we’d need to ensure things like high availability manually). Docker tried to answer this question with Docker Swarm. Basically, Swarm is docker-compose for a cluster, using almost the same syntax as docker-compose on a single host. Looks good, doesn’t it? OK: hardly anybody uses it. Since Google released its container orchestrator (Kubernetes, aka K8s), it has become the de facto standard. The good thing about K8s is that it’s much more configurable and powerful than Swarm; the bad part is that it isn’t as simple and easy to understand as docker-compose.

Let’s try to run our project in K8s.

First I start minikube:

[sourcecode language="xml"]
minikube start
[/sourcecode]

and I point my local Docker client at minikube’s Docker daemon, so the images I build are available inside the cluster (kubectl itself is configured automatically by minikube start):

[sourcecode language="xml"]
eval $(minikube docker-env)
[/sourcecode]

The API:

First we create a service:

[sourcecode language="xml"]
apiVersion: v1
kind: Service
metadata:
  name: front-api
spec:
  # types:
  # - ClusterIP: (default) only accessible from within the Kubernetes cluster
  # - NodePort: accessible on a static port on each Node in the cluster
  # - LoadBalancer: accessible externally through a cloud provider's load balancer
  type: LoadBalancer
  # When the node receives a request on the static port (30164)
  # "select pods with the label 'app' set to 'front-api'"
  # and forward the request to one of them
  selector:
    app: front-api
  ports:
    - protocol: TCP
      port: 5000       # port exposed internally in the cluster
      targetPort: 5000 # the container port to send requests to
      nodePort: 30164  # a static port assigned on each node
[/sourcecode]

And a deployment:

[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-api
spec:
  # How many copies of each pod do we want?
  replicas: 1

  selector:
    matchLabels:
      # This must match the labels we set on the pod!
      app: front-api

  # This template field is a regular pod configuration
  # nested inside the deployment spec
  template:
    metadata:
      # Set labels on the pod.
      # This is used in the deployment selector.
      labels:
        app: front-api
    spec:
      containers:
        - name: front-api
          image: front:v1
          args: ["gunicorn", "-w 1", "app:app", "-b 0.0.0.0:5000"]
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              valueFrom:
                configMapKeyRef:
                  name: api-config
                  key: redis.host
[/sourcecode]

In order to learn a little bit of K8s I’m using a config map called 'api-config' where I put some information (such as the Redis host that I’m going to pass as an environment variable):

[sourcecode language="xml"]
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  redis.host: "back-api"
[/sourcecode]

The Backend: Our Redis database:

First the service:

[sourcecode language="xml"]
apiVersion: v1
kind: Service
metadata:
  name: back-api
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: back-api
[/sourcecode]

And finally the deployment:

[sourcecode language="xml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back-api
  template:
    metadata:
      labels:
        app: back-api
    spec:
      containers:
        - name: back-api
          image: redis
          ports:
            - containerPort: 6379
              name: redis
[/sourcecode]

Before deploying my application to the cluster I need to build my Docker image and tag it:

[sourcecode language="xml"]
docker build -t front .
[/sourcecode]

[sourcecode language="xml"]
docker tag front front:v1
[/sourcecode]

Now I can deploy my application to my K8s cluster:

[sourcecode language="xml"]
kubectl apply -f .k8s/
[/sourcecode]

If I want to know the external URL of my application in the cluster I can use this command:

[sourcecode language="xml"]
minikube service front-api --url
[/sourcecode]

Then I can see it running using the browser or with curl:

[sourcecode language="xml"]
curl $(minikube service front-api --url)
[/sourcecode]

And that’s all. I can also delete the whole application:

[sourcecode language="xml"]
kubectl delete -f .k8s/
[/sourcecode]

Source code available in my github

Playing with microservices, Docker, Python and Nameko

In the last projects I’ve been involved with, I’ve been playing, in one way or another, with microservices, queues and things like that. I was always facing the same tasks: building RPCs, workers, API gateways, … Because of that I’ve been searching for a framework to help me with that kind of stuff, and finally I discovered Nameko. Basically, Nameko is the Python tool that I’d been looking for. In this post I’ll create a simple proof of concept to learn how to integrate Nameko into my projects. Let’s start.

The POC is a simple API gateway that gives me the local time in ISO format. I could do it with a simple Python script:

[sourcecode language="python"]
import datetime
from time import time

print(datetime.datetime.fromtimestamp(time()).isoformat())
[/sourcecode]

We also can create a simple Flask API server to consume this information. The idea is create a rpc worker to generate this information and also generate another worker to send the localtime, but taken from a PostgreSQL database (yes I know it not very useful but it’s just an excuse to use a PG database in the microservice)

We’re going to create two RPC workers. One gives the local time:

[sourcecode language="python"]
from nameko.rpc import rpc
from time import time
import datetime

class TimeService:
    name = "local_time_service"

    @rpc
    def local(self):
        return datetime.datetime.fromtimestamp(time()).isoformat()
[/sourcecode]

And another one with the date from PostgreSQL:

[sourcecode language="python"]
from nameko.rpc import rpc
from dotenv import load_dotenv
import os
from ext.pg import PgService

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

class TimeService:
    name = "db_time_service"
    conn = PgService(os.getenv('DSN'))

    @rpc
    def db(self):
        with self.conn:
            with self.conn.cursor() as cur:
                cur.execute("select localtimestamp")
                timestamp = cur.fetchone()
                return timestamp[0]
[/sourcecode]

I’ve created a service called PgService just to learn how to create dependency providers in Nameko:

[sourcecode language="python"]
from nameko.extensions import DependencyProvider
import psycopg2

class PgService(DependencyProvider):

    def __init__(self, dsn):
        self.dsn = dsn

    def get_dependency(self, worker_ctx):
        return psycopg2.connect(self.dsn)
[/sourcecode]
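
As written, get_dependency opens a new connection for every worker and never closes it. A variant hooking into the provider lifecycle could look like this (just a sketch, assuming a single shared connection is good enough for this POC):

[sourcecode language="python"]
from nameko.extensions import DependencyProvider
import psycopg2

class PgService(DependencyProvider):

    def __init__(self, dsn):
        self.dsn = dsn

    def setup(self):
        # runs once when the service container starts
        self.conn = psycopg2.connect(self.dsn)

    def stop(self):
        # runs when the service container stops
        self.conn.close()

    def get_dependency(self, worker_ctx):
        return self.conn
[/sourcecode]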

Now we only need to set up the API gateway. With Nameko we can also create HTTP entrypoints (in the same way as we create RPC ones), but I want to use it with Flask.
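
Just for reference, a native Nameko HTTP entrypoint would look like this (a sketch; it isn’t used in this project):

[sourcecode language="python"]
from nameko.web.handlers import http

class GatewayService:
    name = "gateway"

    @http('GET', '/local')
    def local(self, request):
        # here we would call the rpc workers, as the Flask gateway below does
        return "Hello"
[/sourcecode]

The Flask gateway: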

[sourcecode language="python"]
from flask import Flask
from nameko.standalone.rpc import ServiceRpcProxy
from dotenv import load_dotenv
import os

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

app = Flask(__name__)

def rpc_proxy(service):
    config = {'AMQP_URI': os.getenv('AMQP_URI')}
    return ServiceRpcProxy(service, config)

@app.route('/')
def hello():
    return "Hello"

@app.route('/local')
def local_time():
    with rpc_proxy('local_time_service') as rpc:
        time = rpc.local()

    return time

@app.route('/db')
def db_time():
    with rpc_proxy('db_time_service') as rpc:
        time = rpc.db()

    return time

if __name__ == '__main__':
    app.run()
[/sourcecode]
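
The run.sh scripts that appear in the docker-compose file below aren’t shown in the post; essentially they wrap Nameko’s CLI runner, something like this (the module name is illustrative):

[sourcecode language="xml"]
nameko run service --broker amqp://guest:guest@rabbit:5672/
[/sourcecode]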

Since I want to run my POC with Docker, here is the docker-compose file that sets up the project:

[sourcecode language="xml"]
version: '3.4'

services:
  api:
    image: nameko/api
    container_name: nameko.api
    hostname: api
    ports:
      - "8080:8080"
    restart: always
    links:
      - rabbit
      - db.worker
      - local.worker
    environment:
      - ENV=1
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
    build:
      context: ./api
      dockerfile: .docker/Dockerfile-api
    #volumes:
    #  - ./api:/usr/src/app:ro
    command: flask run --host=0.0.0.0 --port 8080
  db.worker:
    container_name: nameko.db.worker
    image: nameko/db.worker
    restart: always
    build:
      context: ./workers/db.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  local.worker:
    container_name: nameko.local.worker
    image: nameko/local.worker
    restart: always
    build:
      context: ./workers/local.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  rabbit:
    container_name: nameko.rabbit
    image: rabbitmq:3-management
    restart: always
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
  pg:
    container_name: nameko.pg
    image: nameko/pg
    restart: always
    build:
      context: ./pg
      dockerfile: .docker/Dockerfile-pg
    #ports:
    #  - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
[/sourcecode]

And that’s all. Two Nameko RPC services working together behind an API gateway.

Code available in my github

Monitoring the bandwidth with Grafana, InfluxDB and Docker

Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy about switching to a fiber connection. I finally changed, but meanwhile, while my internet company was solving an incident, I hacked together a quick and dirty script to monitor my connection speed (just for fun and to practise with InfluxDB and Grafana).

Today I’ve lost that quick and dirty script (note to self: always keep a working backup of your Raspberry Pi server’s SD card; it crashes sometimes, and it’s as simple as “dd if=/dev/disk3 of=pi3.img” 🙂 ) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.

To monitor the bandwidth we only need the speedtest-cli API. We can use it from the command line and, as it’s a Python library, we can also create a Python script that uses it.
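
For example, from the command line (the --json flag is available in recent versions of speedtest-cli):

[sourcecode language="xml"]
pip install speedtest-cli
speedtest-cli --json
[/sourcecode]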

[sourcecode language="python"]
import datetime
import logging
import os
import speedtest
import time
from dotenv import load_dotenv
from influxdb import InfluxDBClient

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

def persists(measurement, fields, time):
    logging.info("{} {} {}".format(time, measurement, fields))

    influx_client.write_points([{
        "measurement": measurement,
        "time": time,
        "fields": fields
    }])

influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)

def get_speed():
    logging.info("Calculating speed ...")
    s = speedtest.Speedtest()
    s.get_best_server()
    s.download()
    s.upload()

    return s.results.dict()

def loop(sleep):
    current_time = datetime.datetime.utcnow().isoformat()
    speed = get_speed()

    persists(measurement='download', fields={"value": speed['download']}, time=current_time)
    persists(measurement='upload', fields={"value": speed['upload']}, time=current_time)
    persists(measurement='ping', fields={"value": speed['ping']}, time=current_time)

    time.sleep(sleep)

while True:
    loop(sleep=60 * 60) # each hour
[/sourcecode]
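
Note that the script assumes the InfluxDB database already exists (the docker-compose file below pre-creates it via PRE_CREATE_DB). Running it standalone, we could create it from the client itself; a sketch:

[sourcecode language="python"]
# create the database if it isn't there yet (not part of the original script)
if influxdb_database not in [db['name'] for db in influx_client.get_list_database()]:
    influx_client.create_database(influxdb_database)
[/sourcecode]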

Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana from the Docker files instead of opening the browser and creating the datasource and dashboard by hand. After a couple of hours digging through GitHub repositories I finally built exactly what I needed for this post: basically, a custom entrypoint for my Grafana host that creates the datasource and the dashboard via Grafana’s API.
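
For reference, creating the InfluxDB datasource via Grafana’s HTTP API boils down to a call like this (credentials, names and hosts are illustrative, not the exact ones from my repo):

[sourcecode language="xml"]
curl -X POST http://admin:admin@localhost:3000/api/datasources \
  -H "Content-Type: application/json" \
  -d '{"name": "influxdb", "type": "influxdb", "access": "proxy", "url": "http://influxdb:8086", "database": "speedtest"}'
[/sourcecode]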

[sourcecode language="xml"]
version: '3'

services:
  check:
    image: gonzalo123.check
    restart: always
    volumes:
    - ./src/beat:/code/src
    depends_on:
    - influxdb
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-check
    networks:
    - app-network
    command: /bin/sh start.sh
  influxdb:
    image: influxdb:latest
    restart: always
    environment:
    - INFLUXDB_INIT_PWD="${INFLUXDB_PASS}"
    - PRE_CREATE_DB="${INFLUXDB_DB}"
    volumes:
    - influxdb-data:/data
    networks:
    - app-network
  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
    - "3000:3000"
    depends_on:
    - influxdb
    volumes:
    - grafana-db:/var/lib/grafana
    - grafana-log:/var/log/grafana
    - grafana-conf:/etc/grafana
    networks:
    - app-network

networks:
  app-network:
    driver: bridge

volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
  influxdb-data:
    driver: local
[/sourcecode]

And that’s all. My Internet connection supervised again.

Project available in my github.

Working with SAPUI5 locally (part 3). Adding more services in Docker

In the previous post we moved one project to Docker. The idea was to keep exactly the same functionality (without even touching the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it is overengineering, indeed), but I want to build something with different services working together. Let’s start.

We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via WebSockets.

With those premises we are going to need:

  • Frontend: UI5 application
  • Backend: PHP/lumen application
  • Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
  • Nginx server
  • PostgreSQL database.
  • RabbitMQ message broker.

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.

Here we can see the docker-compose file that sets up all the services:

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
      - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
      - ./src/backend:/code/src
      - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
      - ./src/backend:/code/src
    networks:
      - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
      - "8000:8000"
    restart: always
    volumes:
      - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
      - app-network
  io:
    image: gonzalo123.io
    ports:
      - "9999:9999"
    restart: always
    volumes:
      - ./src/io:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
      - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    ports:
      - "5432:5432"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
      - app-network
  rabbit:
    image: rabbitmq:3-management
    container_name: gonzalo123.rabbit
    restart: always
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
[/sourcecode]

We’re going to use the same Dockerfiles as in the previous post, but we also need new ones for the worker, the database server and the message queue:

Worker:
[sourcecode language="xml"]
FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./io .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]
[/sourcecode]

The worker is a simple script that serves the socket.io server and emits a WebSocket event for every message consumed from the RabbitMQ queue.

[sourcecode language="js"]
var amqp = require('amqp'),
    httpServer = require('http').createServer(),
    io = require('socket.io')(httpServer, {
      origins: '*:*',
    }),
    pg = require('pg')
;

require('dotenv').config();
var pgClient = new pg.Client(process.env.DB_DSN);

rabbitMq = amqp.createConnection({
  host: process.env.RABBIT_HOST,
  port: process.env.RABBIT_PORT,
  login: process.env.RABBIT_USER,
  password: process.env.RABBIT_PASS,
});

var sql = 'SELECT clickCount FROM docker.clicks';

// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });
  });

  rabbitMq.on('ready', function() {
    var queue = rabbitMq.queue('ui5');
    queue.bind('#');

    queue.subscribe(function(message) {
      pgClient.query(sql, function(err, result) {
        var count = parseInt(result.rows[0]['clickcount']);
        count = count + parseInt(message.data.toString('utf8'));
        pgClient.query('UPDATE docker.clicks SET clickCount = $1', [count],
            function(err) {
              io.emit('click', {count: count});
            });
      });
    });
  });
});

httpServer.listen(process.env.IO_PORT);
[/sourcecode]

Database server:
[sourcecode language="xml"]
FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/
[/sourcecode]

As we can see, we’re going to create the database structure on the first build:
[sourcecode language="sql"]
CREATE SCHEMA docker;

CREATE TABLE docker.clicks (
  clickCount numeric(8) NOT NULL
);

ALTER TABLE docker.clicks
  OWNER TO username;

INSERT INTO docker.clicks(clickCount) values (0);
[/sourcecode]

For the RabbitMQ server we’re going to use the official Docker image, so we don’t need to create a Dockerfile.

We’ve also changed our Nginx configuration a little. We want Nginx to serve the backend and also the socket.io server, because we don’t want to expose different ports to the internet.

[sourcecode language="xml"]
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass "http://io:9999";
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
[/sourcecode]

To avoid CORS issues we can also use the SCP destination (the localneo proxy in this example) to serve socket.io too. We only need to change our neo-app.json file:

[sourcecode language="js"]
"routes": [
  {
    "path": "/socket.io",
    "target": {
      "type": "destination",
      "name": "SOCKETIO"
    },
    "description": "SOCKETIO"
  }
],
[/sourcecode]

And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports and without the filesystem mappings we use when developing:

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
      - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
      - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
      - "80:8000"
    restart: always
    volumes:
      - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
      - app-network
  io:
    image: gonzalo123.io
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
      - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
      - app-network
  rabbit:
    image: rabbitmq:3-management
    restart: always
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
[/sourcecode]

And that’s all. The full project is available in my github account

Working with SAPUI5 locally (part 2). Now with docker

In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using Docker to set up our environment.

I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image:

[sourcecode language="xml"]
FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./frontend .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]
[/sourcecode]

In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

[sourcecode language="xml"]
ui5:
  image: gonzalo123.ui5
  ports:
    - "8000:8000"
  restart: always
  build:
    context: ./src
    dockerfile: .docker/Dockerfile-ui5
  volumes:
    - ./src/frontend:/code/src
  networks:
    - api-network
[/sourcecode]

The backend is a PHP application. We can set up a PHP application using different architectures; in this project we’ll use Nginx and PHP-FPM.

For Nginx we’ll use the following Dockerfile:

[sourcecode language="xml"]
FROM nginx:1.13-alpine

EXPOSE 80

COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
COPY ./backend /code/src
[/sourcecode]

And for the PHP host the following one (with xdebug to enable debugging and breakpoints):

[sourcecode language="xml"]
FROM php:7.1-fpm

ENV PHP_XDEBUG_REMOTE_ENABLE 1

RUN apt-get update && apt-get install -my \
    git \
    libghc-zlib-dev && \
    apt-get clean

RUN apt-get install -y libpq-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer global require "laravel/lumen-installer"
ENV PATH ~/.composer/vendor/bin:$PATH

COPY ./backend /code/src
[/sourcecode]

And basically that’s all. Here’s the full docker-compose file:

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
      - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
      - ./src/backend:/code/src
      - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
      - ./src/backend:/code/src
    networks:
      - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
      - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
      - api-network

networks:
  api-network:
    driver: bridge
[/sourcecode]

If you want to use this project you only need to:

  • clone the repo from github
  • run ./ui5 up

With this configuration we’re exposing two ports: 8080 for the backend and 8000 for the frontend. We’re also mapping our local filesystem into the containers, to avoid regenerating the containers each time we change the code.

We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we wouldn’t use localneo as a production server (please don’t do it); we’d use SCP to host the frontend.

This configuration is just an example, without filesystem mapping, without xdebug in the backend and without exposing the backend externally (only the frontend can use it):

[sourcecode language="xml"]
version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
      - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
      - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
      - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
      - api-network

networks:
  api-network:
    driver: bridge
[/sourcecode]

And that’s all. You can see all the source code in my github account

Playing with Docker, Silex, Python, Node and WebSockets

I’m learning Docker. In this post I want to share a little experiment I’ve done. I know the code looks like over-engineering, but it’s just an excuse to build something with Docker and containers. Let me explain it a little bit.

The idea is to build a time clock in the browser. Something like this:

Clock

Yes, I know we could do it only with JS, CSS and HTML, but we want to hack a little bit more. The idea is to create:

  • A Silex/PHP frontend
  • A WebSocket server with socket.io/node
  • A Python script to obtain the current time

The WebSocket server will open two ports: one to serve WebSockets (socket.io) and another one as an HTTP server (express). The Python script will get the current time and send it to the WebSocket server. Finally, the frontend (Silex) will listen to the WebSocket event and render the current time.

That’s the WebSocket server (with socket.io and express):
[sourcecode language="js"]
var
    express = require('express'),
    expressApp = express(),
    server = require('http').Server(expressApp),
    io = require('socket.io')(server, {origins: 'localhost:*'})
;

expressApp.get('/tic', function (req, res) {
    io.sockets.emit('time', req.query.time);
    res.json('OK');
});

expressApp.listen(6400, '0.0.0.0');

server.listen(8080);
[/sourcecode]

That’s our Python script:

[sourcecode language="python"]
from time import gmtime, strftime, sleep
import httplib2

h = httplib2.Http()
while True:
    (resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
    sleep(1)
[/sourcecode]

And our Silex frontend:
[sourcecode language="php"]
use Silex\Application;
use Silex\Provider\TwigServiceProvider;

$app = new Application(['debug' => true]);
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.twig', []);
});

$app->run();
[/sourcecode]

using this twig template:

[sourcecode language="html"]
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Docker example</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
    <link href="css/app.css" rel="stylesheet">
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
</head>
<body>
<div class="site-wrapper">
    <div class="site-wrapper-inner">
        <div class="cover-container">
            <div class="inner cover">
                <h1 class="cover-heading">
                    <div id="display">
                        display
                    </div>
                </h1>
            </div>
        </div>
    </div>
</div>
<script src="//localhost:8080/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
    var socket = io.connect('//localhost:8080');

    $(function () {
        socket.on('time', function (data) {
            $('#display').html(data);
        });
    });
</script>
</body>
</html>
[/sourcecode]

The idea is to use one Docker container for each process. I like to have all the code in one place, so all containers will share the same volume with the source code.

First the node container (WebSocket server):

[sourcecode language="text"]
FROM node:argon

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/node

EXPOSE 8080 6400
[/sourcecode]

Now the Python container:
[sourcecode language="text"]
FROM python:2

RUN pip install httplib2

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/python
[/sourcecode]

And finally the frontend container (Apache2 on Ubuntu 16.04):

[sourcecode language="text"]
FROM ubuntu:16.04

RUN locale-gen es_ES.UTF-8
RUN update-locale LANG=es_ES.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
RUN apt-get clean -y

COPY ./apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf

RUN mkdir -p /mnt/src

RUN a2enmod rewrite
RUN a2enmod proxy
RUN a2enmod mpm_prefork

RUN chown -R www-data:www-data /mnt/src
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_SERVERADMIN admin@localhost
ENV APACHE_SERVERNAME localhost

EXPOSE 80
[/sourcecode]

Now we’ve got the three containers, and we want to use them all together. We’ll use a docker-compose.yml file. The web container will expose port 80 and the node container 8080. The node container also opens 6400, but that’s an internal port: we don’t need to access it from outside; only the Python container needs it. Because of that, 6400 is not mapped to any host port in docker-compose.

[sourcecode language="text"]
version: '2'

services:
  web:
    image: gonzalo123/example_web
    container_name: example_web
    ports:
      - "80:80"
    restart: always
    depends_on:
      - node
    build:
      context: ./images/php
      dockerfile: Dockerfile
    entrypoint:
      - /usr/sbin/apache2
      - -D
      - FOREGROUND
    volumes:
      - ./src:/mnt/src

  node:
    image: gonzalo123/example_node
    container_name: example_node
    ports:
      - "8080:8080"
    restart: always
    build:
      context: ./images/node
      dockerfile: Dockerfile
    entrypoint:
      - npm
      - start
    volumes:
      - ./src:/mnt/src

  python:
    image: gonzalo123/example_python
    container_name: example_python
    restart: always
    depends_on:
      - node
    build:
      context: ./images/python
      dockerfile: Dockerfile
    entrypoint:
      - python
      - tic.py
    volumes:
      - ./src:/mnt/src
[/sourcecode]

And that’s all. We only need to start our containers
[sourcecode language="bash"]
docker-compose up --build -d
[/sourcecode]

and open our browser at http://localhost to see our time clock.

Full source code available within my github account