Monitoring the bandwidth with Grafana, InfluxDB and Docker

Some time ago, when I was an ADSL user at home, I had a lot of problems with my Internet connection. I was a bit lazy about switching to a fiber connection. I finally changed it but, while my Internet company was resolving one incident, I started to hack a little bit on a simple and dirty script that monitors my connection speed (just for fun and to practice with InfluxDB and Grafana).

Today I’ve lost my quick and dirty script (please Gonzalo, always keep a working backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It’s simple: “dd if=/dev/disk3 of=pi3.img” 🙂) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.

To monitor the bandwidth we only need the speedtest-cli library. We can use it from the command line and, as it’s a Python library, we can also create a Python script that uses it.

import datetime
import logging
import os
import speedtest
import time
from dotenv import load_dotenv
from influxdb import InfluxDBClient

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)

def persists(measurement, fields, time):
    logging.info("{} {} {}".format(time, measurement, fields))

    influx_client.write_points([{
        "measurement": measurement,
        "time": time,
        "fields": fields
    }])

def get_speed():
    logging.info("Calculating speed ...")
    s = speedtest.Speedtest()
    s.get_best_server()
    s.download()
    s.upload()

    return s.results.dict()

def loop(sleep):
    current_time = datetime.datetime.utcnow().isoformat()
    speed = get_speed()

    persists(measurement='download', fields={"value": speed['download']}, time=current_time)
    persists(measurement='upload', fields={"value": speed['upload']}, time=current_time)
    persists(measurement='ping', fields={"value": speed['ping']}, time=current_time)

    time.sleep(sleep)

while True:
    loop(sleep=60 * 60) # each hour
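The script reads its InfluxDB connection parameters from a .env file placed next to it. A minimal example (the values are assumptions; "influxdb" would be the service name on the docker-compose network):

```ini
INFLUXDB_HOST=influxdb
INFLUXDB_PORT=8086
INFLUXDB_DATABASE=speed
```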

Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana within the docker files instead of opening the browser and creating the datasource and dashboard by hand. After a couple of hours navigating github repositories, I finally created exactly what I needed for this post: basically, a custom entrypoint for my Grafana host that creates the datasource and dashboard (via Grafana’s API).

version: '3'

services:
  check:
    image: gonzalo123.check
    restart: always
    volumes:
    - ./src/beat:/code/src
    depends_on:
    - influxdb
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-check
    networks:
    - app-network
    command: /bin/sh start.sh
  influxdb:
    image: influxdb:latest
    restart: always
    environment:
    - INFLUXDB_INIT_PWD="${INFLUXDB_PASS}"
    - PRE_CREATE_DB="${INFLUXDB_DB}"
    volumes:
    - influxdb-data:/data
    networks:
    - app-network
  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
    - "3000:3000"
    depends_on:
    - influxdb
    volumes:
    - grafana-db:/var/lib/grafana
    - grafana-log:/var/log/grafana
    - grafana-conf:/etc/grafana
    networks:
    - app-network

networks:
  app-network:
    driver: bridge

volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
  influxdb-data:
    driver: local
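As a sketch of what such an entrypoint does, this is roughly how a datasource can be registered through Grafana’s HTTP API (hostnames, credentials and the database name below are assumptions for illustration, not the values from the repository):

```python
import base64
import json
import urllib.request

GRAFANA_URL = "http://localhost:3000"  # grafana port published by docker-compose


def build_datasource_payload(database):
    """JSON body for Grafana's POST /api/datasources endpoint."""
    return {
        "name": "influxdb",
        "type": "influxdb",
        "access": "proxy",
        "url": "http://influxdb:8086",  # influxdb service name on the compose network
        "database": database,
        "isDefault": True,
    }


def create_datasource(database, user="admin", password="admin"):
    # admin/admin is Grafana's default login; change it in a real setup
    credentials = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    request = urllib.request.Request(
        "{}/api/datasources".format(GRAFANA_URL),
        data=json.dumps(build_datasource_payload(database)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + credentials,
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

A dashboard can be created the same way, with a POST to /api/dashboards/db.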

And that’s all. My Internet connection is supervised again.

Project available in my github.


Working with SAPUI5 locally (part 3). Adding more services in Docker

In the previous post we moved one project to docker. The idea was to keep exactly the same functionality (without touching anything in the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it is exactly overengineering, indeed), but I want to build something with different services working together. Let’s start.

We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit one event to a RabbitMQ message broker. We’ll have one worker service listening to this event and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via websockets.

With those premises we are going to need:

  • Frontend: UI5 application
  • Backend: PHP/lumen application
  • Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
  • Nginx server
  • PostgreSQL database.
  • RabbitMQ message broker.

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.
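For context, the event the backend publishes is just the click increment. A hypothetical sketch of that publication, written in Python with the pika client for brevity (the real backend is PHP/Lumen; the queue name 'ui5' is taken from the worker script, while host and credentials are assumptions):

```python
def click_message(count):
    """Body the worker expects: the increment encoded as a string."""
    return str(int(count))


def publish_click(count=1, host="rabbit", user="guest", password="guest"):
    import pika  # third-party AMQP client, imported lazily so the sketch stays importable

    connection = pika.BlockingConnection(pika.ConnectionParameters(
        host=host, credentials=pika.PlainCredentials(user, password)))
    channel = connection.channel()
    channel.queue_declare(queue="ui5")
    # The worker parses the body as an integer and adds it to the counter
    channel.basic_publish(exchange="", routing_key="ui5", body=click_message(count))
    connection.close()
```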

Here we can see the docker-compose file to set up all the services:

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    volumes:
    - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - app-network
  io:
    image: gonzalo123.io
    ports:
    - "9999:9999"
    restart: always
    volumes:
    - ./src/io:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
    - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    ports:
    - "5432:5432"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
    - app-network
  rabbit:
    image: rabbitmq:3-management
    container_name: gonzalo123.rabbit
    restart: always
    ports:
    - "15672:15672"
    - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
    - app-network
networks:
  app-network:
    driver: bridge

We’re going to use the same Dockerfiles as in the previous post, but we also need new ones for the worker, the database server and the message queue:

Worker:

FROM node:alpine

EXPOSE 9999

WORKDIR /code/src
COPY ./io .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]

The worker is a simple script that serves the socket.io server and emits a websocket event for every message received from the RabbitMQ queue.

var amqp = require('amqp'),
  httpServer = require('http').createServer(),
  io = require('socket.io')(httpServer, {
    origins: '*:*',
  }),
  pg = require('pg')
;

require('dotenv').config();
var pgClient = new pg.Client(process.env.DB_DSN);

rabbitMq = amqp.createConnection({
  host: process.env.RABBIT_HOST,
  port: process.env.RABBIT_PORT,
  login: process.env.RABBIT_USER,
  password: process.env.RABBIT_PASS,
});

var sql = 'SELECT clickCount FROM docker.clicks';

// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });

  });

  rabbitMq.on('ready', function() {
    var queue = rabbitMq.queue('ui5');
    queue.bind('#');

    queue.subscribe(function(message) {
      pgClient.query(sql, function(err, result) {
        var count = parseInt(result.rows[0]['clickcount']);
        count = count + parseInt(message.data.toString('utf8'));
        pgClient.query('UPDATE docker.clicks SET clickCount = $1', [count],
          function(err) {
            io.emit('click', {count: count});
          });
      });
    });
  });
});

httpServer.listen(process.env.IO_PORT);
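One remark on the worker: it reads the counter and writes the incremented value back in two queries, which can race if several messages arrive close together. Pushing the increment into a single UPDATE makes it atomic. A sketch of the idea using sqlite3 from the Python standard library (table name simplified; PostgreSQL accepts the same UPDATE):

```python
import sqlite3


def increment_clicks(conn, amount):
    """Atomically add `amount` to the single-row counter and return the new value."""
    with conn:  # run both statements in one transaction
        conn.execute("UPDATE clicks SET clickCount = clickCount + ?", (amount,))
        (count,) = conn.execute("SELECT clickCount FROM clicks").fetchone()
    return count


def demo():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE clicks (clickCount INTEGER NOT NULL)")
    conn.execute("INSERT INTO clicks VALUES (0)")
    increment_clicks(conn, 1)
    return increment_clicks(conn, 2)
```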

Database server:

FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/

As we can see, we’re going to generate the database structure on the first run:

CREATE SCHEMA docker;

CREATE TABLE docker.clicks (
    clickCount numeric(8) NOT NULL
);

ALTER TABLE docker.clicks
OWNER TO username;

INSERT INTO docker.clicks(clickCount) values (0);

With the RabbitMQ server we’re going to use the official docker image, so we don’t need to create a Dockerfile.

We’ve also changed our Nginx configuration a little bit. We want Nginx to serve the backend and also the socket.io server, because we don’t want to expose different ports to the Internet.

server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass "http://io:9999";
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

To avoid CORS issues we can also use an SCP destination (the localneo proxy in this example) to serve socket.io. So we need to:

  • change our neo-app.json file:

    "routes": [
        ...
        {
          "path": "/socket.io",
          "target": {
            "type": "destination",
            "name": "SOCKETIO"
          },
          "description": "SOCKETIO"
        }
      ],

    And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports and without mapping the filesystem to our local machine (useful when we’re developing):

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        networks:
        - app-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen
        networks:
        - app-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "80:8000"
        restart: always
        volumes:
        - ./src/frontend:/code/src
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - app-network
      io:
        image: gonzalo123.io
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-io
        networks:
        - app-network
      pg:
        image: gonzalo123.pg
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-pg
        environment:
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_DB: ${POSTGRES_DB}
          PGDATA: /var/lib/postgresql/data/pgdata
        networks:
        - app-network
      rabbit:
        image: rabbitmq:3-management
        restart: always
        environment:
          RABBITMQ_ERLANG_COOKIE:
          RABBITMQ_DEFAULT_VHOST: /
          RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
          RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
        networks:
        - app-network
    networks:
      app-network:
        driver: bridge
    

    And that’s all. The full project is available in my github account.

    Working with SAPUI5 locally (part 2). Now with docker

    In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using docker to set up our environment.

    I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

    The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image:

    FROM node:alpine
    
    EXPOSE 8000
    
    WORKDIR /code/src
    COPY ./frontend .
    RUN npm install
    ENTRYPOINT ["npm", "run", "serve"]
    

    In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

    ...
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        volumes:
        - ./src/frontend:/code/src
        networks:
        - api-network
    

    The backend is a PHP application. We can set up a PHP application using different architectures. In this project we’ll use nginx and PHP-FPM.

    For nginx we’ll use the following Dockerfile:

    FROM  nginx:1.13-alpine
    
    EXPOSE 80
    
    COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
    COPY ./backend /code/src
    

    And for the PHP host the following one (with xdebug to enable debugging and breakpoints):

    FROM php:7.1-fpm
    
    ENV PHP_XDEBUG_REMOTE_ENABLE 1
    
    RUN apt-get update && apt-get install -my \
        git \
        libghc-zlib-dev && \
        apt-get clean
    
    RUN apt-get install -y libpq-dev \
        && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
        && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip
    
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    
    RUN composer global require "laravel/lumen-installer"
    ENV PATH ~/.composer/vendor/bin:$PATH
    
    COPY ./backend /code/src
    

    And basically that’s all. Here is the full docker-compose file:

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        ports:
        - "8080:80"
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        volumes:
        - ./src/backend:/code/src
        - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
        networks:
        - api-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen-dev
        environment:
          XDEBUG_CONFIG: remote_host=${MY_IP}
        volumes:
        - ./src/backend:/code/src
        networks:
        - api-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - api-network
    
    networks:
      api-network:
        driver: bridge
    

    If you want to use this project you only need to:

    • clone the repo from github
    • run ./ui5 up

    With this configuration we’re exposing two ports: 8080 for the backend (nginx) and 8000 for the frontend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.

    We can also have a variation: a “production” version of our docker-compose file. I put “production” between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it); we’ll use SCP to host the frontend.

    This configuration is just an example: without filesystem mapping, without xdebug in the backend, and without exposing the backend externally (only the frontend can use it).

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        networks:
        - api-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen
        networks:
        - api-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - api-network
    
    networks:
      api-network:
        driver: bridge
    

    And that’s all. You can see all the source code in my github account.

    Working with SAPUI5 locally and deploying in SCP

    When I work with SAPUI5 projects I normally use WebIDE. WebIDE is a great tool, but I’m more comfortable working locally with my local IDE.
    I’ve had this idea in mind for a long time, but I never found the time slot to work on it. Finally, after finding this project from Holger Schäfer on github, I realized how easy it is, and I started to work with this project and adapt it to my needs.

    The base of this project is localneo. Localneo starts an http server based on the neo-app.json file. That means we’re going to use the same configuration as we have in production (in SCP). Of course we’ll need destinations. We only need one extra file called destination.json where we’ll set up our destinations (it creates one http proxy, nothing else).

    In this project I’ll create a simple example application that works with one API server.

    The backend

    I’ll use in this example one PHP/Lumen application:

    $app->router->group(['prefix' => '/api', 'middleware' => Middleware\AuthMiddleware::NAME], function (Router $route) {
        $route->get('/', Handlers\HomeHandler::class);
        $route->post('/', Handlers\HomeHandler::class);
    });
    

    Basically it has two routes. In fact both routes are the same: one accepts POST requests and the other one GET requests.
    They’ll answer with the current date in a JSON response.

    namespace App\Http\Handlers;
    
    class HomeHandler
    {
        public function __invoke()
        {
            return ['date' => (new \DateTime())->format('c')];
        }
    }
    

    Both routes are under one middleware to provide the authentication.

    namespace App\Http\Middleware;
    
    use Closure;
    use Illuminate\Http\Request;
    
    class AuthMiddleware
    {
        public const NAME = 'auth';
    
        public function handle(Request $request, Closure $next)
        {
            $user = $request->getUser();
            $pass = $request->getPassword();
    
            if (!$this->validateDestinationCredentials($user, $pass)) {
                $headers = ['WWW-Authenticate' => 'Basic'];
    
                return response('Backend Login', 401, $headers);
            }
    
            $authorizationHeader = $request->header('Authorization2');
            if (!$this->validateApplicationToken($authorizationHeader)) {
                return response('Invalid token ', 403);
            }
    
            return $next($request);
    
        }
    
        private function validateApplicationToken($authorizationHeader)
        {
            $token = str_replace('Bearer ', null, $authorizationHeader);
    
            return $token === getenv('APP_TOKEN');
        }
    
        private function validateDestinationCredentials($user, $pass)
        {
            if (!($user === getenv('DESTINATION_USER') && $pass === getenv('DESTINATION_PASS'))) {
                return false;
            }
    
            return true;
        }
    }
    

    That means our service will need Basic Authentication and also token-based authentication.
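    To call the service from a client we therefore need to send both credentials at once. A quick sketch with Python’s urllib (the user, password and token values are the placeholders used elsewhere in the post):

```python
import base64
import urllib.request


def build_auth_headers(user, password, token):
    """Headers for both layers checked by AuthMiddleware."""
    basic = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    return {
        "Authorization": "Basic " + basic,    # destination credentials
        "Authorization2": "Bearer " + token,  # application token
    }


def call_api(url, user, password, token):
    request = urllib.request.Request(url, headers=build_auth_headers(user, password, token))
    with urllib.request.urlopen(request) as response:
        return response.read()
```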

    The frontend

    Our ui5 application will use one destination called BACKEND. We’ll configure it in our neo-app.json file

        ...
        {
          "path": "/backend",
          "target": {
            "type": "destination",
            "name": "BACKEND"
          },
          "description": "BACKEND"
        }
        ...
    

    Now we’ll create our extra file called destinations.json. Localneo will use this file to create a web server to serve our frontend locally (using the destination).

    As I said before, our backend needs Basic Authentication. This authentication is set up in the destination configuration:

    {
      "server": {
        "port": "8080",
        "path": "/webapp/index.html",
        "open": true
      },
      "service": {
        "sapui5": {
          "useSAPUI5": true,
          "version": "1.54.8"
        }
      },
      "destinations": {
        "BACKEND": {
          "url": "http://localhost:8888",
          "auth": "superSecretUser:superSecretPassword"
        }
      }
    }
    

    Our application will be a simple list of items

    <mvc:View controllerName="gonzalo123.controller.App" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:mvc="sap.ui.core.mvc" displayBlock="true" xmlns="sap.m">
      <App id="idAppControl">
        <pages>
          <Page title="{i18n>appTitle}">
            <content>
              <List>
                <items>
                  <ObjectListItem id="GET" title="{i18n>get}"
                                  type="Active"
                                  press="getPressHandle">
                    <attributes>
                      <ObjectAttribute id="getCount" text="{/Data/get/count}"/>
                    </attributes>
                  </ObjectListItem>
                  <ObjectListItem id="POST" title="{i18n>post}"
                                  type="Active"
                                  press="postPressHandle">
                    <attributes>
                      <ObjectAttribute id="postCount" text="{/Data/post/count}"/>
                    </attributes>
                  </ObjectListItem>
                </items>
              </List>
            </content>
          </Page>
        </pages>
      </App>
    </mvc:View>
    

    When we click on GET we’ll perform a GET request to the backend and increment the counter. The same with POST.
    We’ll also show the date provided by the backend in a MessageToast.

    sap.ui.define([
      "sap/ui/core/mvc/Controller",
      "sap/ui/model/json/JSONModel",
      'sap/m/MessageToast',
      "gonzalo123/model/api"
    ], function (Controller, JSONModel, MessageToast, api) {
      "use strict";
    
      return Controller.extend("gonzalo123.controller.App", {
        model: new JSONModel({
          Data: {get: {count: 0}, post: {count: 0}}
        }),
    
        onInit: function () {
          this.getView().setModel(this.model);
        },
    
        getPressHandle: function () {
          api.get("/", {}).then(function (data) {
            var count = this.model.getProperty('/Data/get/count');
            MessageToast.show("Pressed : " + data.date);
            this.model.setProperty('/Data/get/count', ++count);
          }.bind(this));
        },
    
        postPressHandle: function () {
          var count = this.model.getProperty('/Data/post/count');
          api.post("/", {}).then(function (data) {
            MessageToast.show("Pressed : " + data.date);
            this.model.setProperty('/Data/post/count', ++count);
          }.bind(this));
        }
      });
    });
    

    Start our application locally

    Now we only need to start the backend

    php -S 0.0.0.0:8888 -t www

    And the frontend
    localneo

    Debugging locally

    As we’re working locally we can use local debugger in the backend and we can use breakpoints, inspect variables, etc.

    We also can debug the frontend using Chrome developer tools. We can also map our local filesystem in the browser and we can save files directly in Chrome.

    Testing

    We can test the backend using phpunit and run our tests with
    composer run test

    Here we can see a simple test of the backend

        public function testAuthorizedRequest()
        {
            $headers = [
                'Authorization2' => 'Bearer superSecretToken',
                'Content-Type'   => 'application/json',
                'Authorization'  => 'Basic ' . base64_encode('superSecretUser:superSecretPassword'),
            ];
    
            $this->json('GET', '/api', [], $headers)
                ->assertResponseStatus(200);
            $this->json('POST', '/api', [], $headers)
                ->assertResponseStatus(200);
        }
    
    
        public function testRequests()
        {
    
            $headers = [
                'Authorization2' => 'Bearer superSecretToken',
                'Content-Type'   => 'application/json',
                'Authorization'  => 'Basic ' . base64_encode('superSecretUser:superSecretPassword'),
            ];
    
            $this->json('GET', '/api', [], $headers)
                ->seeJsonStructure(['date']);
            $this->json('POST', '/api', [], $headers)
                ->seeJsonStructure(['date']);
        }
    

    We also can test the frontend using OPA5.

    As the backend is already tested, we’ll mock it here using a sinon (https://sinonjs.org/) fake server:

    ...
        opaTest("When I click on GET the GET counter should increment by one", function (Given, When, Then) {
          Given.iStartMyApp("./integration/Test1/index.html");
          When.iClickOnGET();
          Then.getCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
        });
    
        opaTest("When I click on POST the POST counter should increment by one", function (Given, When, Then) {
          Given.iStartMyApp("./integration/Test1/index.html");
          When.iClickOnPOST();
          Then.postCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
        });
    ...
    

    The configuration of our sinon server:

    sap.ui.define(
      ["test/server"],
      function (server) {
        "use strict";
    
        return {
          init: function () {
            var oServer = server.initServer("/backend/api");
    
            oServer.respondWith("GET", /backend\/api/, [200, {
              "Content-Type": "application/json"
            }, JSON.stringify({
              "date": "2018-07-29T18:44:57+02:00"
            })]);
    
            oServer.respondWith("POST", /backend\/api/, [200, {
              "Content-Type": "application/json"
            }, JSON.stringify({
              "date": "2018-07-29T18:44:57+02:00"
            })]);
          }
        };
      }
    );
    

    The build process

    Before uploading the application to SCP we need to build it. The build process optimizes the files and creates Component-preload.js and the sap-ui-cachebuster-info.json file (to ensure our users aren’t using a cached version of our application).
    We’ll use grunt to build our application. Here we can see our Gruntfile.js:

    module.exports = function (grunt) {
      "use strict";
    
      require('load-grunt-tasks')(grunt);
      require('time-grunt')(grunt);
    
      grunt.config.merge({
        pkg: grunt.file.readJSON('package.json'),
        watch: {
          js: {
            files: ['Gruntfile.js', 'webapp/**/*.js', 'webapp/**/*.properties'],
            tasks: ['jshint'],
            options: {
              livereload: true
            }
          },
    
          livereload: {
            options: {
              livereload: true
            },
            files: [
              'webapp/**/*.html',
              'webapp/**/*.js',
              'webapp/**/*.css'
            ]
          }
        }
      });
    
      grunt.registerTask("default", [
        "clean",
        "lint",
        "build"
      ]);
    };
    

    In our Gruntfile I’ve also configured a watcher to build the application automatically and trigger the live reload (reloading my browser every time I change the frontend).

    Now I can build the dist folder with the command:

    grunt

    Deploy to SCP

    The deploy process is very well explained in Holger’s repository.
    Basically we need to download the MTA Archive builder and extract it to ./ci/tools/mta.jar.
    We also need the SAP Cloud Platform Neo Environment SDK (./ci/tools/neo-java-web-sdk/).
    We can download those binaries from here.

    Then we need to fill in our SCP credentials in ./ci/deploy-mta.properties and configure our application in ./ci/mta.yaml.
    Finally we run ./ci/deploy-mta.sh (here we can also set our SCP password; otherwise we’ll need to input it on each deploy).

    Full code (frontend and backend) in my github account

    Playing with Grafana and weather APIs

    Today I want to play with Grafana. Let me show you my idea:

    I’ve got a Beewi temperature sensor. I’ve played with it before. Today I want to show the temperature in a Grafana dashboard.
    I also want to play with the openweathermap API.

    First I want to retrieve the temperature from the Beewi device. I’ve got a node script that connects to the device via Bluetooth using the noble library.
    I only need to pass the sensor’s mac address and I obtain a JSON with the current temperature:

    #!/usr/bin/env node
    var noble = require('noble');
    
    var status = false;
    var address = process.argv[2];
    
    if (!address) {
        console.log('Usage "./reader.js <sensor mac address>"');
        process.exit();
    }
    
    function hexToInt(hex) {
        var num, maxVal;
        if (hex.length % 2 !== 0) {
            hex = "0" + hex;
        }
        num = parseInt(hex, 16);
        maxVal = Math.pow(2, hex.length / 2 * 8);
        if (num > maxVal / 2 - 1) {
            num = num - maxVal;
        }
    
        return num;
    }
    
    noble.on('stateChange', function(state) {
        status = (state === 'poweredOn');
    });
    
    noble.on('discover', function(peripheral) {
        if (peripheral.address == address) {
            var data = peripheral.advertisement.manufacturerData.toString('hex');
            out = {
                temperature: parseFloat(hexToInt(data.substr(10, 2)+data.substr(8, 2))/10).toFixed(1)
            };
            console.log(JSON.stringify(out))
            noble.stopScanning();
            process.exit();
        }
    });
    
    noble.on('scanStop', function() {
        noble.stopScanning();
    });
    
    setTimeout(function() {
        noble.stopScanning();
        noble.startScanning();
    }, 2000);
    
    
    setTimeout(function() {
        noble.stopScanning();
        process.exit()
    }, 20000);
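    The two’s-complement decoding done by hexToInt, and the byte shuffling around it, can be translated to Python like this, which is handy for testing the parsing without the Bluetooth hardware (the byte offsets are taken from the reader script; the payload format itself is an assumption):

```python
def hex_to_int(hex_str):
    """Interpret a hex string as a signed (two's complement) integer,
    mirroring the hexToInt function of the node reader script."""
    if len(hex_str) % 2 != 0:
        hex_str = "0" + hex_str
    num = int(hex_str, 16)
    max_val = 2 ** (len(hex_str) // 2 * 8)
    if num > max_val // 2 - 1:
        num -= max_val
    return num


def decode_temperature(data):
    """Decode the temperature (in tenths of a degree, little-endian)
    from the hex-encoded manufacturer payload."""
    return round(hex_to_int(data[10:12] + data[8:10]) / 10.0, 1)
```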
    

    And finally another script (this time a Python script) to collect data from the openweathermap API, collect data from the node script, and store the information in an influxdb database.

    from sense_hat import SenseHat
    from influxdb import InfluxDBClient
    import datetime
    import logging
    import requests
    import json
    from subprocess import check_output
    import os
    import sys
    from dotenv import load_dotenv
    
    logging.basicConfig(level=logging.INFO)
    
    current_dir = os.path.dirname(os.path.abspath(__file__))
    load_dotenv(dotenv_path="{}/.env".format(current_dir))
    
    sensor_mac_address = os.getenv("BEEWI_SENSOR")
    openweathermap_api_key = os.getenv("OPENWEATHERMAP_API_KEY")
    influxdb_host = os.getenv("INFLUXDB_HOST")
    influxdb_port = os.getenv("INFLUXDB_PORT")
    influxdb_database = os.getenv("INFLUXDB_DATABASE")
    
    reader = '{}/reader.js'.format(current_dir)
    
    
    def get_rain_level_from_weather(weather):
        rain = False
        rain_level = 0
        if len(weather) > 0:
            for w in weather:
                if w['icon'] == '09d':
                    rain = True
                    rain_level = 1
                elif w['icon'] == '10d':
                    rain = True
                    rain_level = 2
                elif w['icon'] == '11d':
                    rain = True
                    rain_level = 3
                elif w['icon'] == '13d':
                    rain = True
                    rain_level = 4
    
        return rain, rain_level
    
    
    def openweathermap():
        data = {}
        r = requests.get(
            "http://api.openweathermap.org/data/2.5/weather?id=3110044&appid={}&units=metric".format(
                openweathermap_api_key))
    
        if r.status_code == 200:
            current_data = r.json()
            data['weather'] = current_data['main']
            rain, rain_level = get_rain_level_from_weather(current_data['weather'])
            data['weather']['rain'] = rain
            data['weather']['rain_level'] = rain_level
    
        r2 = requests.get(
            "http://api.openweathermap.org/data/2.5/uvi?lat=43.32&lon=-1.93&appid={}".format(openweathermap_api_key))
        if r2.status_code == 200:
            data['uvi'] = r2.json()
    
        r3 = requests.get(
            "http://api.openweathermap.org/data/2.5/forecast?id=3110044&appid={}&units=metric".format(
                openweathermap_api_key))
    
        if r3.status_code == 200:
            forecast = r3.json()['list']
            data['forecast'] = []
            for f in forecast:
                rain, rain_level = get_rain_level_from_weather(f['weather'])
                data['forecast'].append({
                    "dt": f['dt'],
                    "fields": {
                        "temp": float(f['main']['temp']),
                        "humidity": float(f['main']['humidity']),
                        "rain": rain,
                        "rain_level": int(rain_level),
                        "pressure": float(float(f['main']['pressure']))
                    }
                })
    
            return data
    
    
    def persists(measurement, fields, location, time):
        logging.info("{} {} [{}] {}".format(time, measurement, location, fields))
        influx_client.write_points([{
            "measurement": measurement,
            "tags": {"location": location},
            "time": time,
            "fields": fields
        }])
    
    
    def in_sensors():
        try:
            sense = SenseHat()
            pressure = sense.get_pressure()
            reader_output = check_output([reader, sensor_mac_address]).strip()
            sensor_info = json.loads(reader_output)
            temperature = sensor_info['temperature']
    
            persists(measurement='home_pressure', fields={"value": float(pressure)}, location="in", time=current_time)
            persists(measurement='home_temperature', fields={"value": float(temperature)}, location="in",
                     time=current_time)
        except Exception as err:
            logging.error(err)
    
    
    def out_sensors():
        try:
            out_info = openweathermap()
    
            persists(measurement='home_pressure',
                     fields={"value": float(out_info['weather']['pressure'])},
                     location="out",
                     time=current_time)
            persists(measurement='home_humidity',
                     fields={"value": float(out_info['weather']['humidity'])},
                     location="out",
                     time=current_time)
            persists(measurement='home_temperature',
                     fields={"value": float(out_info['weather']['temp'])},
                     location="out",
                     time=current_time)
            persists(measurement='home_rain',
                     fields={"value": out_info['weather']['rain'], "level": out_info['weather']['rain_level']},
                     location="out",
                     time=current_time)
            persists(measurement='home_uvi',
                     fields={"value": float(out_info['uvi']['value'])},
                     location="out",
                     time=current_time)
            for f in out_info['forecast']:
                persists(measurement='home_weather_forecast',
                         fields=f['fields'],
                         location="out",
                         time=datetime.datetime.utcfromtimestamp(f['dt']).isoformat())
    
        except Exception as err:
            logging.error(err)
    
    
    influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)
    current_time = datetime.datetime.utcnow().isoformat()
    
    in_sensors()
    out_sensors()
    

    I’m running this Python script on a Raspberry Pi 3 with a Sense HAT. The Sense HAT has an atmospheric pressure sensor, so I will also retrieve the pressure from it.

    From openweathermap I will obtain:

    • Current temperature/humidity and atmospheric pressure in the street
    • UV Index (the measure of the level of UV radiation)
    • Weather conditions (if it’s raining or not)
    • Weather forecast

    I run this script with the Raspberry Pi's crontab every 5 minutes. That means I’ve got a fancy time series ready to be shown with Grafana.
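A crontab entry for this looks something like the following (the path is just an example; adjust it to wherever the script lives):

```
# run the sensor script every 5 minutes
*/5 * * * * /usr/bin/python3 /home/pi/sensors.py
```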

    Here we can see the dashboard

    Source code available in my github account.

    Playing with Docker, MQTT, Grafana, InfluxDB, Python and Arduino

    I must admit this post is just an excuse to play with Grafana and InfluxDB. InfluxDB is a cool database especially designed to work with time series. Grafana is an open source tool for time series analytics. I want to build a simple prototype. The idea is:

    • One Arduino device (esp32) emits a MQTT event to a mosquitto server. I’ll use a potentiometer to emulate one sensor (imagine here, for example, a temperature sensor instead of the potentiometer). I’ve used this circuit before in other projects
    • One Python script will be listening to the MQTT event on my Raspberry Pi and will persist the value to an InfluxDB database
    • I will monitor the state of the time series given by the potentiometer with Grafana
    • I will create one alert in Grafana (for example when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state
    • One microservice (a Python Flask server) will be listening to the webhook and it will emit a MQTT event depending on the state
    • Another Arduino device (a NodeMcu in this case) will be listening to this MQTT event and will activate an LED: the red one if the alert is ON and the green one if the alert is OFF

    The server
    As I said before we’ll need three servers:

    • MQTT server (mosquitto)
    • InfluxDB server
    • Grafana server

    We’ll use Docker. I’ve got a Docker host running on a Raspberry Pi 3. The Raspberry Pi is an ARM device, so we need Docker images built for this architecture.

    version: '2'
    
    services:
      mosquitto:
        image: pascaldevink/rpi-mosquitto
        container_name: mosquitto
        ports:
         - "9001:9001"
         - "1883:1883"
        restart: always
      
      influxdb:
        image: hypriot/rpi-influxdb
        container_name: influxdb
        restart: always
        environment:
         - INFLUXDB_INIT_PWD="password"
         - PRE_CREATE_DB="iot"
        ports:
         - "8083:8083"
         - "8086:8086"
        volumes:
         - ~/docker/rpi-influxdb/data:/data
    
      grafana:
        image: fg2it/grafana-armhf:v4.6.3
        container_name: grafana
        restart: always
        ports:
         - "3000:3000"
        volumes:
          - grafana-db:/var/lib/grafana
          - grafana-log:/var/log/grafana
          - grafana-conf:/etc/grafana
    
    volumes:
      grafana-db:
        driver: local  
      grafana-log:
        driver: local
      grafana-conf:
        driver: local
    

    ESP32
    The Esp32 part is very simple. We only need to connect our potentiometer to the Esp32. The potentiometer has three pins: Gnd, Signal and Vcc. For signal we’ll use the pin 32.

    We only need to configure our Wifi network, connect to our MQTT server and emit the potentiometer value within each loop.

    #include <PubSubClient.h>
    #include <WiFi.h>
    
    const int potentiometerPin = 32;
    
    // Wifi configuration
    const char* ssid = "my_wifi_ssid";
    const char* password = "my_wifi_password";
    
    // MQTT configuration
    const char* server = "192.168.1.111";
    const char* topic = "/pot";
    const char* clientName = "com.gonzalo123.esp32";
    
    String payload;
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.println();
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
      }
      Serial.println("");
      Serial.print("WiFi connected.");
      Serial.print("IP address: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void mqttEmit(String topic, String value)
    {
      client.publish((char*) topic.c_str(), (char*) value.c_str());
    }
    
    void setup() {
      Serial.begin(115200);
    
      wifiConnect();
      client.setServer(server, 1883);
      delay(1500);
    }
    
    void loop() {
      if (!client.connected()) {
        mqttReConnect();
      }
      int current = (int) ((analogRead(potentiometerPin) * 100) / 4095);
      mqttEmit(topic, (String) current);
      delay(500);
    }
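The esp32's ADC readings are 12-bit (0 to 4095), and the loop above rescales them to a 0–100 range before publishing. The same mapping, sketched in Python:

```python
def to_percent(raw: int, adc_max: int = 4095) -> int:
    """Rescale a 12-bit ADC reading (0..4095) to a 0..100 percentage,
    as the esp32 loop does before emitting the MQTT event."""
    return (raw * 100) // adc_max

print(to_percent(0), to_percent(2048), to_percent(4095))  # 0 50 100
```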
    

    Mqtt listener

    The esp32 emits an event (“/pot”) with the value of the potentiometer, so we’re going to create a MQTT listener that listens to that topic and persists the value to InfluxDB.

    import paho.mqtt.client as mqtt
    from influxdb import InfluxDBClient
    import datetime
    import logging
    
    
    def persists(msg):
        current_time = datetime.datetime.utcnow().isoformat()
        json_body = [
            {
                "measurement": "pot",
                "tags": {},
                "time": current_time,
                "fields": {
                    "value": int(msg.payload)
                }
            }
        ]
        logging.info(json_body)
        influx_client.write_points(json_body)
    
    
    logging.basicConfig(level=logging.INFO)
    influx_client = InfluxDBClient('docker', 8086, database='iot')
    client = mqtt.Client()
    
    client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/pot")
    client.on_message = lambda client, userdata, msg: persists(msg)
    
    client.connect("docker", 1883, 60)
    
    client.loop_forever()
    

    Grafana
    In Grafana we need to do two things. First, create one datasource pointing to our InfluxDB server. It’s pretty straightforward to do.

    Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.

    That’s the query that I’m using to plot the main graph:

    SELECT 
      last("value") FROM "pot" 
    WHERE 
      time >= now() - 5m 
    GROUP BY 
      time($interval) fill(previous)
    
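The `fill(previous)` clause carries the last seen value forward across empty intervals. Roughly, in Python (a toy sketch of the behaviour, not how InfluxDB implements it):

```python
def fill_previous(buckets):
    """Carry the last non-empty value forward over empty buckets,
    roughly what InfluxQL's fill(previous) does per GROUP BY interval."""
    last, out = None, []
    for value in buckets:
        if value is not None:
            last = value
        out.append(last)
    return out

# buckets with gaps (None = no points in that interval)
print(fill_previous([10, None, None, 42, None]))  # [10, 10, 10, 42, 42]
```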

    Here we can see the dashboard

    And here my alert configuration:

    I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify us when the state of the alert changes.

    Webhook listener
    Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers but in this project I’ll use Python and Flask.

    We need to handle HTTP Basic Auth and emit a MQTT event. MQTT is a very simple protocol, but it has one very nice feature that fits like a glove here. Let me explain it:

    Imagine that we’ve got our system up and running and the state is “ok”. Now we connect one device (for example one big red/green light). Since the “ok” event was fired before we connected the light, our green light will not be switched on. We would need to wait until the next “alert” event to see any light. That’s not cool.

    MQTT allows us to “retain” messages. That means that we can emit messages with the “retain” flag to one topic, and when we connect one device to this topic later, it will receive the retained message immediately. That’s exactly what we need here.
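To illustrate the idea with a toy in-memory model (this is not real MQTT, just the retain semantics): a subscriber that connects after a retained publish still receives the last message.

```python
class ToyBroker:
    """Minimal illustration of MQTT 'retain': the broker keeps the last
    retained message per topic and replays it to late subscribers."""
    def __init__(self):
        self.retained = {}
        self.subscribers = {}

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload
        for callback in self.subscribers.get(topic, []):
            callback(payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:  # replay the retained message
            callback(self.retained[topic])

broker = ToyBroker()
broker.publish("/alert", "1", retain=True)   # alert fired before anyone listens

received = []
broker.subscribe("/alert", received.append)  # device connects later...
print(received)                              # ...and still gets the state: ['1']
```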

    from flask import Flask
    from flask import request
    from flask_httpauth import HTTPBasicAuth
    import paho.mqtt.client as mqtt
    import json
    
    client = mqtt.Client()
    
    app = Flask(__name__)
    auth = HTTPBasicAuth()
    
    # http basic auth credentials
    users = {
        "user": "password"
    }
    
    
    @auth.get_password
    def get_pw(username):
        if username in users:
            return users.get(username)
        return None
    
    
    @app.route('/alert', methods=['POST'])
    @auth.login_required
    def alert():
        client.connect("docker", 1883, 60)
        data = json.loads(request.data.decode('utf-8'))
        if data['state'] == 'alerting':
            client.publish(topic="/alert", payload="1", retain=True)
        elif data['state'] == 'ok':
            client.publish(topic="/alert", payload="0", retain=True)
    
        client.disconnect()
    
        return "ok"
    
    
    if __name__ == "__main__":
        app.run(host='0.0.0.0')
    

    Nodemcu

    Finally, the NodeMcu. This part is similar to the esp32 one. Our LEDs are on pins 4 and 5. We also need to configure the WiFi and connect to the MQTT server. NodeMcu and esp32 are similar devices, but not the same: for example, we need to use different libraries to connect to the WiFi.

    This device will be listening to the MQTT event and will light one LED or the other depending on the state.

    #include <PubSubClient.h>
    #include <ESP8266WiFi.h>
    
    const int ledRed = 4;
    const int ledGreen = 5;
    
    // Wifi configuration
    const char* ssid = "my_wifi_ssid";
    const char* password = "my_wifi_password";
    
    // mqtt configuration
    const char* server = "192.168.1.111";
    const char* topic = "/alert";
    const char* clientName = "com.gonzalo123.nodemcu";
    
    int value;
    int percent;
    String payload;
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.println();
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
      }
      Serial.println("");
      Serial.print("WiFi connected.");
      Serial.print("IP address: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
          client.subscribe(topic);
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void callback(char* topic, byte* payload, unsigned int length) {
    
      Serial.print("Message arrived [");
      Serial.print(topic);
    
      String data;
      for (int i = 0; i < length; i++) {
        data += (char)payload[i];
      }
      cleanLeds();
      int value = data.toInt();
      switch (value)  {
        case 1:
          digitalWrite(ledRed, HIGH);
          break;
        case 0:
          digitalWrite(ledGreen, HIGH);
          break;
      }
      Serial.print("] value:");
      Serial.println((int) value);
    }
    
    void cleanLeds() {
      digitalWrite(ledRed, LOW);
      digitalWrite(ledGreen, LOW);
    }
    
    void setup() {
      Serial.begin(9600);
      pinMode(ledRed, OUTPUT);
      pinMode(ledGreen, OUTPUT);
      cleanLeds();
      Serial.println("start");
    
      wifiConnect();
      client.setServer(server, 1883);
      client.setCallback(callback);
    
      delay(1500);
    }
    
    void loop() {
      Serial.print(".");
      if (!client.connected()) {
        mqttReConnect();
      }
    
      client.loop();
      delay(500);
    }
    

    Here you can see the working prototype in action

    And here the source code

    Happy logins. Only the happy user will pass

    Login forms are boring. In this example we’re going to create a special login form, only for happy users. Happiness is something complicated, but at least a smile is easier to obtain, and all is better with one smile :). Our login form will only appear if the user smiles. Let’s start.

    I must admit that this project is just an excuse to play with different technologies that I wanted to play with. Weeks ago I discovered one library called face_classification. With this library I can perform emotion classification from a picture. The idea is simple: we create a RabbitMQ RPC server script that answers with the emotion of the face within a picture. Then we grab one frame from the webcam video stream (with HTML5) and send this frame over a websocket to a socket.io server. This websocket server (node) asks the RabbitMQ RPC server for the emotion and sends back to the browser the emotion and the original picture with a rectangle over the face.

    Frontend

    Since we’re going to use socket.io for websockets, we’ll use the same script to serve the frontend (the login form and the HTML5 video capture).

    <!doctype html>
    <html>
    <head>
        <title>Happy login</title>
        <link rel="stylesheet" href="css/app.css">
    </head>
    <body>
    
    <div id="login-page" class="login-page">
        <div class="form">
            <h1 id="nonHappy" style="display: block;">Only the happy user will pass</h1>
            <form id="happyForm" class="login-form" style="display: none" onsubmit="return false;">
                <input id="user" type="text" placeholder="username"/>
                <input id="pass" type="password" placeholder="password"/>
                <button id="login">login</button>
                <p></p>
                <img id="smile" width="426" height="320" src=""/>
            </form>
            <div id="video">
                <video style="display:none;"></video>
                <canvas id="canvas" style="display:none"></canvas>
                <canvas id="canvas-face" width="426" height="320"></canvas>
            </div>
        </div>
    </div>
    
    <div id="private" style="display: none;">
        <h1>Private page</h1>
    </div>
    
    <script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
    <script src="https://unpkg.com/sweetalert/dist/sweetalert.min.js"></script>
    <script type="text/javascript" src="/socket.io/socket.io.js"></script>
    <script type="text/javascript" src="/js/app.js"></script>
    </body>
    </html>
    

    Here we’ll connect to the websocket and emit the webcam frames to the server. We’ll also be listening for one event called ‘response’, where the server notifies us when an emotion has been detected.

    let socket = io.connect(location.origin),
        img = new Image(),
        canvasFace = document.getElementById('canvas-face'),
        context = canvasFace.getContext('2d'),
        canvas = document.getElementById('canvas'),
        width = 640,
        height = 480,
        delay = 1000,
        jpgQuality = 0.6,
        isHappy = false;
    
    socket.on('response', function (r) {
        let data = JSON.parse(r);
        if (data.length > 0 && data[0].hasOwnProperty('emotion')) {
            if (isHappy === false && data[0]['emotion'] === 'happy') {
                isHappy = true;
                swal({
                    title: "Good!",
                    text: "All is better with one smile!",
                    icon: "success",
                    buttons: false,
                    timer: 2000,
                });
    
                $('#nonHappy').hide();
                $('#video').hide();
                $('#happyForm').show();
                $('#smile')[0].src = 'data:image/png;base64,' + data[0].image;
            }
    
            img.onload = function () {
                context.drawImage(this, 0, 0, canvasFace.width, canvasFace.height);
            };
    
            img.src = 'data:image/png;base64,' + data[0].image;
        }
    });
    
    navigator.getMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia);
    
    navigator.getMedia({video: true, audio: false}, (mediaStream) => {
        let video = document.getElementsByTagName('video')[0];
        video.src = window.URL.createObjectURL(mediaStream);
        video.play();
        setInterval(((video) => {
            return function () {
                let context = canvas.getContext('2d');
                canvas.width = width;
                canvas.height = height;
                context.drawImage(video, 0, 0, width, height);
                socket.emit('img', canvas.toDataURL('image/jpeg', jpgQuality));
            }
        })(video), delay)
    }, error => console.log(error));
    
    $(() => {
        $('#login').click(() => {
            $('#login-page').hide();
            $('#private').show();
        })
    });
    

    Backend
    Finally we’ll work on the backend. Basically I’ve checked the examples that we can see in the face_classification project and tuned them a bit according to my needs.

    from rabbit import builder
    import logging
    import numpy as np
    from keras.models import load_model
    from utils.datasets import get_labels
    from utils.inference import detect_faces
    from utils.inference import draw_text
    from utils.inference import draw_bounding_box
    from utils.inference import apply_offsets
    from utils.inference import load_detection_model
    from utils.inference import load_image
    from utils.preprocessor import preprocess_input
    import cv2
    import json
    import base64
    
    detection_model_path = 'trained_models/detection_models/haarcascade_frontalface_default.xml'
    emotion_model_path = 'trained_models/emotion_models/fer2013_mini_XCEPTION.102-0.66.hdf5'
    emotion_labels = get_labels('fer2013')
    font = cv2.FONT_HERSHEY_SIMPLEX
    
    # hyper-parameters for bounding boxes shape
    emotion_offsets = (20, 40)
    
    # loading models
    face_detection = load_detection_model(detection_model_path)
    emotion_classifier = load_model(emotion_model_path, compile=False)
    
    # getting input model shapes for inference
    emotion_target_size = emotion_classifier.input_shape[1:3]
    
    
    def format_response(response):
        decoded_json = json.loads(response)
        return "Hello {}".format(decoded_json['name'])
    
    
    def on_data(data):
        f = open('current.jpg', 'wb')
        f.write(base64.decodebytes(data))
        f.close()
        image_path = "current.jpg"
    
        out = []
        # loading images
        rgb_image = load_image(image_path, grayscale=False)
        gray_image = load_image(image_path, grayscale=True)
        gray_image = np.squeeze(gray_image)
        gray_image = gray_image.astype('uint8')
    
        faces = detect_faces(face_detection, gray_image)
        for face_coordinates in faces:
            x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)
            gray_face = gray_image[y1:y2, x1:x2]
    
            try:
                gray_face = cv2.resize(gray_face, (emotion_target_size))
            except:
                continue
    
            gray_face = preprocess_input(gray_face, True)
            gray_face = np.expand_dims(gray_face, 0)
            gray_face = np.expand_dims(gray_face, -1)
            emotion_label_arg = np.argmax(emotion_classifier.predict(gray_face))
            emotion_text = emotion_labels[emotion_label_arg]
            color = (0, 0, 255)
    
            draw_bounding_box(face_coordinates, rgb_image, color)
            draw_text(face_coordinates, rgb_image, emotion_text, color, 0, -50, 1, 2)
            bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
    
            cv2.imwrite('predicted.png', bgr_image)
            data = open('predicted.png', 'rb').read()
            encoded = base64.encodebytes(data).decode('utf-8')
            out.append({
                'image': encoded,
                'emotion': emotion_text,
            })
    
        return out
    
    logging.basicConfig(level=logging.WARN)
    rpc = builder.rpc("image.check", {'host': 'localhost', 'port': 5672})
    rpc.server(on_data)
    

    Here you can see in action the working prototype

    Maybe we could do the same with other tools, and even more simply, but as I said before this example is just an excuse to play with those technologies:

    • Send webcam frames via websockets
    • Connect one web application to a Python application via RabbitMQ RPC
    • Play with the face_classification script

    Please don’t use this script in production. It’s just a proof of concept. With smiles, but a proof of concept 🙂

    You can see the project in my github account

    Opencv and esp32 experiment. Moving a servo with my face alignment

    One Saturday morning I was having breakfast and I discovered the face_recognition project. I started to play with the opencv example. I put in my picture and, wow! It works like a charm. It’s pretty straightforward to detect my face, and I can also obtain the face landmarks. One of the landmarks that I can get is the nose tip. Playing with this script I realized that with the nose tip I can determine the position of the face: I can see whether my face is aligned to the center or turned to one side. Since I have a new IoT device (an ESP32) I wanted to do something with it, for example control a servo (an SG90), moving it from left to right depending on my face position.

    First we have the main Python script. With this script I detect my face, the nose tip and the position of my face. With this position I will emit an event to a MQTT broker (a mosquitto server running on my laptop).

    import face_recognition
    import cv2
    import numpy as np
    import math
    import paho.mqtt.client as mqtt
    
    video_capture = cv2.VideoCapture(0)
    
    gonzalo_image = face_recognition.load_image_file("gonzalo.png")
    gonzalo_face_encoding = face_recognition.face_encodings(gonzalo_image)[0]
    
    known_face_encodings = [
        gonzalo_face_encoding
    ]
    known_face_names = [
        "Gonzalo"
    ]
    
    RED = (0, 0, 255)
    GREEN = (0, 255, 0)
    BLUE = (255, 0, 0)
    
    face_locations = []
    face_encodings = []
    face_names = []
    process_this_frame = True
    status = ''
    labelColor = GREEN
    
    client = mqtt.Client()
    client.connect("localhost", 1883, 60)
    
    while True:
        ret, frame = video_capture.read()
    
        # Resize frame of video to 1/4 size for faster face recognition processing
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    
        # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
        rgb_small_frame = small_frame[:, :, ::-1]
    
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame, face_locations)
    
        face_names = []
        for face_encoding, face_landmarks in zip(face_encodings, face_landmarks_list):
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"
    
            if True in matches:
                first_match_index = matches.index(True)
                name = known_face_names[first_match_index]
    
                nose_tip = face_landmarks['nose_tip']
                maxLandmark = max(nose_tip)
                minLandmark = min(nose_tip)
    
                diff = math.fabs(maxLandmark[1] - minLandmark[1])
                if diff < 2:
                    status = "center"
                    labelColor = BLUE
                    client.publish("/face/{}/center".format(name), "1")
                elif maxLandmark[1] > minLandmark[1]:
                    status = ">>>>"
                    labelColor = RED
                    client.publish("/face/{}/left".format(name), "1")
                else:
                    status = "<<<<"
                    client.publish("/face/{}/right".format(name), "1")
                    labelColor = RED
    
                shape = np.array(face_landmarks['nose_bridge'], np.int32)
                cv2.polylines(frame, [shape.reshape((-1, 1, 2)) * 4], True, (0, 255, 255))
                cv2.fillPoly(frame, [shape.reshape((-1, 1, 2)) * 4], GREEN)
    
            face_names.append("{} {}".format(name, status))
    
        for (top, right, bottom, left), name in zip(face_locations, face_names):
            # Scale back up face locations since the frame we detected in was scaled to 1/4 size
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4
    
            if 'Unknown' not in name.split(' '):
                cv2.rectangle(frame, (left, top), (right, bottom), labelColor, 2)
                cv2.rectangle(frame, (left, bottom - 35), (right, bottom), labelColor, cv2.FILLED)
                cv2.putText(frame, name, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 1.0, (255, 255, 255), 1)
            else:
                cv2.rectangle(frame, (left, top), (right, bottom), BLUE, 2)
    
        cv2.imshow('Video', frame)
    
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    
    video_capture.release()
    cv2.destroyAllWindows()
    

    Now another Python script will be listening to MQTT events and will emit one event with the position of the servo. I know that this second Python script is maybe unnecessary: we could move its logic into the esp32 and the main opencv script, but I was playing with MQTT and I wanted to decouple it a little bit.

    import paho.mqtt.client as mqtt
    
    class Iot:
        _state = None
        _client = None
        _dict = {
            'left': 0,
            'center': 1,
            'right': 2
        }
    
        def __init__(self, client):
            self._client = client
    
        def emit(self, name, event):
            if event != self._state:
                self._state = event
                self._client.publish("/servo", self._dict[event])
                print("emit /servo event with value {} - {}".format(self._dict[event], name))
    
    
    def on_message(topic, iot):
        data = topic.split("/")
        name = data[2]
        action = data[3]
        iot.emit(name, action)
    
    
    client = mqtt.Client()
    iot = Iot(client)
    
    client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/face/#")
    client.on_message = lambda client, userdata, msg: on_message(msg.topic, iot)
    
    client.connect("localhost", 1883, 60)
    client.loop_forever()
    

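    To test the bridge without running the OpenCV script, we can publish a fake face event by hand. A quick sketch (the helper names and the broker host are my own assumptions, not part of the project):

```python
# Hypothetical helpers to test the bridge above from the command line:
# the OpenCV script publishes topics like /face/<name>/<position>, and
# on_message() splits the topic to get the name and the action.

def face_topic(name, position):
    # The bridge subscribes to /face/#, so any name matches.
    return "/face/{}/{}".format(name, position)

def publish_face_event(name, position, host="localhost"):
    # Requires the paho-mqtt package, like the bridge itself.
    import paho.mqtt.publish as publish
    publish.single(face_topic(name, position), hostname=host)
```

    Calling `publish_face_event("gonzalo", "left")` should make the bridge emit `/servo` with value 0, because `_dict['left']` is 0.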
    And finally the ESP32. Here it will connect to my WiFi and to my MQTT broker.

    #include <WiFi.h>
    #include <PubSubClient.h>
    
    #define LED0 17
    #define LED1 18
    #define LED2 19
    #define SERVO_PIN 5
    
    // wifi configuration
    const char* ssid = "my_ssid";
    const char* password = "my_wifi_password";
    // mqtt configuration
    const char* server = "192.168.1.111"; // mqtt broker ip
    const char* topic = "/servo";
    const char* clientName = "com.gonzalo123.esp32";
    
    int channel = 1;
    int hz = 50;
    int depth = 16;
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print("*");
      }
    
      Serial.print("WiFi connected: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
          client.subscribe(topic);
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void callback(char* topic, byte* payload, unsigned int length) {
      Serial.print("Message arrived [");
      Serial.print(topic);
    
      String data;
      for (int i = 0; i < length; i++) {
        data += (char)payload[i];
      }
    
      int value = data.toInt();
      cleanLeds();
      switch (value)  {
        case 0:
          ledcWrite(1, 3400);
          digitalWrite(LED0, HIGH);
          break;
        case 1:
          ledcWrite(1, 4900);
          digitalWrite(LED1, HIGH);
          break;
        case 2:
          ledcWrite(1, 6400);
          digitalWrite(LED2, HIGH);
          break;
      }
      Serial.print("] value:");
      Serial.println((int) value);
    }
    
    void cleanLeds() {
      digitalWrite(LED0, LOW);
      digitalWrite(LED1, LOW);
      digitalWrite(LED2, LOW);
    }
    
    void setup() {
      Serial.begin(115200);
    
      ledcSetup(channel, hz, depth);
      ledcAttachPin(SERVO_PIN, channel);
    
      pinMode(LED0, OUTPUT);
      pinMode(LED1, OUTPUT);
      pinMode(LED2, OUTPUT);
      cleanLeds();
      wifiConnect();
      client.setServer(server, 1883);
      client.setCallback(callback);
    
      delay(1500);
    }
    
    void loop()
    {
      if (!client.connected()) {
        mqttReConnect();
      }
    
      client.loop();
      delay(100);
    }
    

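    Those magic ledcWrite() duty values make more sense as pulse widths: with the 50 Hz / 16-bit PWM configured in setup(), the duty cycle maps to the servo pulse like this (a quick sanity check in Python):

```python
# Convert LEDC duty values to servo pulse widths (in ms), assuming the
# 50 Hz / 16-bit PWM configuration from setup() above.
PERIOD_MS = 1000 / 50   # 20 ms per PWM cycle at 50 Hz
MAX_DUTY = 2 ** 16 - 1  # 16-bit resolution -> 65535

def pulse_ms(duty):
    return duty / MAX_DUTY * PERIOD_MS

for duty in (3400, 4900, 6400):
    print("{} -> {:.2f} ms".format(duty, pulse_ms(duty)))
# Roughly 1.0 ms, 1.5 ms and 2.0 ms: the classic left/center/right
# positions of a hobby servo.
```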
    Here's a video of the working prototype in action.

    The source code is available in my github account.

    Pomodoro with ESP32. One “The Melee – Side by side” project

    Last weekend there was a great event called The Melee – Side by side (Many thanks to @ojoven and @diversius).

    The event was a kind of hackathon where a group of people meet for one day to share side projects and work together (yes, we also had lunch and beers :). The format of the event is a copy of the event that our colleagues from Bilbao call “El Comité“.

    @ibaiimaz spoke about a project to create a collaborative pomodoro where the people of one team can share their status and see the status of the rest of the team. When I heard pomodoro and status I immediately thought of a servo moving a flag and some LEDs turning on and off. We had a project. @penniath and @tatai also joined us. We also had a team.

    We had a project and we also had a deadline: we had to show a working prototype at the end of the day. That meant we didn't have too much time. First we decided on the mockup of the project, reducing the initial (more ambitious) scope to fit it within our time slot. We discussed intensely for 10 minutes and finally drew up an ultra detailed blueprint. That's the full blueprint of the project:

    It was time to start working.

    @penniath and @tatai worked on the backend. It is responsible for the pomodoro timers, listening to MQTT events and exposing an API for the frontend. The backend also provides a WebSockets interface to push real-time events to the frontend. They decided to use node and socket.io for the WebSockets. You can see the source code here.
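    The real backend is node + socket.io (linked above); just to illustrate the kind of timer logic it needs, here is a rough Python sketch. The class, the durations (25/5 minutes) and the status payload are all my own assumptions, not the actual implementation:

```python
# Rough sketch of pomodoro timer logic: track one user's state and
# compute the remaining time on demand. The status dict is the kind of
# payload the API/WebSocket layer would push to the frontend.
import time

WORK, BREAK, IDLE = "work", "break", "idle"
DURATIONS = {WORK: 25 * 60, BREAK: 5 * 60}  # assumed pomodoro durations

class Pomodoro:
    def __init__(self, now=time.time):
        self._now = now  # injectable clock, handy for testing
        self.state = IDLE
        self.started_at = None

    def start(self, state=WORK):
        self.state = state
        self.started_at = self._now()

    def status(self):
        if self.state == IDLE:
            return {"state": IDLE, "remaining": 0}
        elapsed = self._now() - self.started_at
        remaining = max(0, DURATIONS[self.state] - elapsed)
        if remaining == 0:
            self.state = IDLE  # timer expired, fall back to idle
        return {"state": self.state, "remaining": int(remaining)}
```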

    @ibaiimaz started with the frontend. He decided to create an Angular web application that listens to socket.io events to show the status of the pomodoro. You can see the source code here.

    Finally, I worked on the hardware. I created a prototype with one ESP32, two RGB LEDs, one button, one servo and a couple of resistors.

    That’s the source code.

    #include <WiFi.h>
    #include <PubSubClient.h>
    
    int redPin_g = 19;
    int greenPin_g = 17;
    int bluePin_g = 18;
    
    int redPin_i = 21;
    int greenPin_i = 2;
    int bluePin_i = 4;
    
    #define SERVO_PIN 16
    
    const int buttonPin = 15;
    int buttonState = 0;
    
    int channel = 1;
    int hz = 50;
    int depth = 16;
    
    const char* ssid = "SSID";
    const char* password = "password";
    const char* server = "192.168.1.105";
    const char* topic = "/pomodoro/+";
    const char* clientName = "com.gonzalo123.esp32";
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print("*");
      }
    
      Serial.print("WiFi connected: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
          client.subscribe(topic);
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void callback(char* topic, byte* payload, unsigned int length) {
      Serial.print("Message arrived [");
      Serial.print(topic);
    
      String data;
      for (int i = 0; i < length; i++) {
        data += (char)payload[i];
      }
    
      int value = data.toInt();
    
      if (strcmp(topic, "/pomodoro/gonzalo") == 0) {
        Serial.print("[gonzalo]");
        switch (value) {
          case 1:
            ledcWrite(1, 3400);
            setColor_g(0, 255, 0);
            break;
          case 2:
            setColor_g(255, 0, 0);
            break;
          case 3:
            ledcWrite(1, 6400);
            setColor_g(0, 0, 255);
            break;
        }
      } else {
        Serial.print("[ibai]");
        switch (value) {
          case 1:
            setColor_i(0, 255, 0);
            break;
          case 2:
            setColor_i(255, 0, 0);
            break;
          case 3:
            setColor_i(0, 0, 255);  // blue
            break;
        }
      }
    
      Serial.print("] value:");
      Serial.println(data);
    }
    
    void setup()
    {
      Serial.begin(115200);
    
      pinMode(buttonPin, INPUT_PULLUP);
      pinMode(redPin_g, OUTPUT);
      pinMode(greenPin_g, OUTPUT);
      pinMode(bluePin_g, OUTPUT);
    
      pinMode(redPin_i, OUTPUT);
      pinMode(greenPin_i, OUTPUT);
      pinMode(bluePin_i, OUTPUT);
    
      ledcSetup(channel, hz, depth);
      ledcAttachPin(SERVO_PIN, channel);
      wifiConnect();
      client.setServer(server, 1883);
      client.setCallback(callback);
    
      delay(1500);
    }
    
    void mqttEmit(String topic, String value)
    {
      client.publish((char*) topic.c_str(), (char*) value.c_str());
    }
    
    void loop()
    {
      if (!client.connected()) {
        mqttReConnect();
      }
    
      client.loop();
    
      buttonState = digitalRead(buttonPin);
      if (buttonState == HIGH) {
        mqttEmit("/start/gonzalo", (String) "3");
      }
    
      delay(200);
    }
    
    void setColor_i(int red, int green, int blue)
    {
      digitalWrite(redPin_i, red);
      digitalWrite(greenPin_i, green);
      digitalWrite(bluePin_i, blue);
    }
    
    void setColor_g(int red, int green, int blue)
    {
      digitalWrite(redPin_g, red);
      digitalWrite(greenPin_g, green);
      digitalWrite(bluePin_g, blue);
    }
    

    The MQTT server (a mosquitto server) was initially running on my laptop, but since I also had a Raspberry Pi Zero in my bag, we decided to use the Pi Zero as the server and run the mosquitto MQTT broker on Raspbian. Everything is better with a Raspberry Pi. @tatai helped me to set up the server.
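    With mosquitto running on the Pi Zero we could also exercise the hardware without the backend, by publishing the events the ESP32 subscribes to. A small sketch (the helper names are mine; the broker IP is the one hardcoded in the Arduino sketch):

```python
# Exercise the ESP32 without the backend: it subscribes to /pomodoro/+
# and reacts to the payloads "1", "2" and "3" in callback().

def pomodoro_message(user, value):
    # Build (topic, payload) for one of the states handled by the ESP32.
    if value not in (1, 2, 3):
        raise ValueError("the ESP32 only handles the values 1, 2 and 3")
    return "/pomodoro/{}".format(user), str(value)

def publish_state(user, value, host="192.168.1.105"):
    # Requires the paho-mqtt package (not needed just to build messages).
    import paho.mqtt.publish as publish
    topic, payload = pomodoro_message(user, value)
    publish.single(topic, payload=payload, hostname=host)
```

    For example, `publish_state("gonzalo", 1)` should set the first RGB LED to green and move the servo.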

    Here you can see the prototype in action.

    That's the kind of side project I normally build alone, but it's definitely more fun to do it with other colleagues, even if it means waking up early on a Saturday morning.

    Source code of ESP32 here.

    Playing with Ionic, Lumen, Firebase, Google maps, Raspberry Pi and background geolocation

    I want to do a simple pet project. The idea is to build a mobile application that tracks my GPS location and sends this information to a Firebase database. I've never played with Firebase and I want to learn a little bit. With this information I will build a simple web application hosted on my Raspberry Pi. This web application will show a Google map with my last location. I will put this web application on my TV, so anyone in my house can see where I am at any time.

    That's the idea. I want an MVP. First the mobile application. I will use the Ionic framework. I'm a big fan of Ionic.

    The mobile application is very simple. It only has a toggle to activate or deactivate the background geolocation (sometimes I don't want to be tracked :).

    <ion-header>
        <ion-navbar>
            <ion-title>
                Ionic Blank
            </ion-title>
        </ion-navbar>
    </ion-header>
    
    <ion-header>
        <ion-toolbar [color]="toolbarColor">
            <ion-title>{{title}}</ion-title>
            <ion-buttons end>
                <ion-toggle color="light"
                            checked="{{isBgEnabled}}"
                            (ionChange)="changeWorkingStatus($event)">
                </ion-toggle>
            </ion-buttons>
        </ion-toolbar>
    </ion-header>
    
    <ion-content padding>
    </ion-content>
    

    And the controller:

    import {Component} from '@angular/core';
    import {Platform} from 'ionic-angular';
    import {LocationTracker} from "../../providers/location-tracker/location-tracker";
    
    @Component({
        selector: 'page-home',
        templateUrl: 'home.html'
    })
    export class HomePage {
        public status: string = localStorage.getItem('status') || "-";
        public title: string = "";
        public isBgEnabled: boolean = false;
        public toolbarColor: string;
    
        constructor(platform: Platform,
                    public locationTracker: LocationTracker) {
    
            platform.ready().then(() => {
    
                    if (localStorage.getItem('isBgEnabled') === 'on') {
                        this.isBgEnabled = true;
                        this.title = "Working ...";
                        this.toolbarColor = 'secondary';
                    } else {
                        this.isBgEnabled = false;
                        this.title = "Idle";
                        this.toolbarColor = 'light';
                    }
            });
        }
    
        public changeWorkingStatus(event) {
            if (event.checked) {
                localStorage.setItem('isBgEnabled', "on");
                this.title = "Working ...";
                this.toolbarColor = 'secondary';
                this.locationTracker.startTracking();
            } else {
                localStorage.setItem('isBgEnabled', "off");
                this.title = "Idle";
                this.toolbarColor = 'light';
                this.locationTracker.stopTracking();
            }
        }
    }
    

    As you can see, the toggle button activates or deactivates the background geolocation, and it also changes the color of the toolbar.

    For background geolocation I will use a Cordova plugin, available as an Ionic Native plugin.

    Here you can read a very nice article explaining how to use the plugin with Ionic. As the article explains, I've created a provider:

    import {Injectable, NgZone} from '@angular/core';
    import {BackgroundGeolocation} from '@ionic-native/background-geolocation';
    import {CONF} from "../conf/conf";
    
    @Injectable()
    export class LocationTracker {
        constructor(public zone: NgZone,
                    private backgroundGeolocation: BackgroundGeolocation) {
        }
    
        showAppSettings() {
            return this.backgroundGeolocation.showAppSettings();
        }
    
        startTracking() {
            this.startBackgroundGeolocation();
        }
    
        stopTracking() {
            this.backgroundGeolocation.stop();
        }
    
        private startBackgroundGeolocation() {
            this.backgroundGeolocation.configure(CONF.BG_GPS);
            this.backgroundGeolocation.start();
        }
    }
    

    The idea of the plugin is to send a POST request to a URL with the GPS data in the body of the request, so I will create a web API to handle this request. I will use my Raspberry Pi 3 to serve the application. I will create a simple PHP/Lumen application that handles the POST request from the mobile application and also serves an HTML page with the map (using Google Maps).

    Mobile requests will be authenticated with a token in the header, and the web application will use basic HTTP authentication. Because of that I will create two middlewares to handle the different ways to authenticate.
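    Before wiring up the phone, the endpoint can be tested by simulating the plugin's POST request. A hedged Python sketch (the URL and the Authorization header name are assumptions; the fields are the ones the /locator/gps route reads from each point):

```python
# Simulate the POST request the background-geolocation plugin sends, so
# the Lumen endpoint can be tested without the phone.
import json
import urllib.request

# Fields the /locator/gps route reads from each location point.
REQUIRED_FIELDS = {"time", "latitude", "longitude", "accuracy",
                   "speed", "altitude", "locationProvider"}

def build_payload(points):
    # The route iterates the request body as a list of location dicts.
    for point in points:
        missing = REQUIRED_FIELDS - set(point)
        if missing:
            raise ValueError("missing fields: {}".format(sorted(missing)))
    return json.dumps(points)

def send_points(url, token, points):
    # "Authorization" is a hypothetical header name; the real one is
    # whatever AuthMiddleware checks.
    request = urllib.request.Request(
        url, data=build_payload(points).encode(), method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": token})
    return urllib.request.urlopen(request).read()  # the route returns 'OK'
```

    A sample call might be `send_points("http://raspberrypi.local/locator/gps", "my-token", [point])`, where `point` is a dict with the seven fields above (host and token are placeholders).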

    <?php
    require __DIR__ . '/../vendor/autoload.php';
    
    use App\Http\Middleware;
    use App\Model\Gps;
    use Illuminate\Contracts\Debug\ExceptionHandler;
    use Illuminate\Http\Request;
    use Laravel\Lumen\Application;
    use Laravel\Lumen\Routing\Router;
    
    (new Dotenv\Dotenv(__DIR__ . '/../env/'))->load();
    
    $app = new Application(__DIR__ . '/..');
    $app->singleton(ExceptionHandler::class, App\Exceptions\Handler::class);
    $app->routeMiddleware([
        'auth'  => Middleware\AuthMiddleware::class,
        'basic' => Middleware\BasicAuthMiddleware::class,
    ]);
    
    $app->router->group(['middleware' => 'auth', 'prefix' => '/locator'], function (Router $route) {
        $route->post('/gps', function (Gps $gps, Request $request) {
            $requestData = $request->all();
            foreach ($requestData as $poi) {
                $gps->persistsData([
                    'date'             => date('YmdHis'),
                    'serverTime'       => time(),
                    'time'             => $poi['time'],
                    'latitude'         => $poi['latitude'],
                    'longitude'        => $poi['longitude'],
                    'accuracy'         => $poi['accuracy'],
                    'speed'            => $poi['speed'],
                    'altitude'         => $poi['altitude'],
                    'locationProvider' => $poi['locationProvider'],
                ]);
            }
    
            return 'OK';
        });
    });
    
    return $app;
    

    As we can see, the route /locator/gps handles the POST request. I've created a model to persist the GPS data in the Firebase database:

    <?php
    
    namespace App\Model;
    
    use Kreait\Firebase\Factory;
    use Kreait\Firebase\ServiceAccount;
    
    class Gps
    {
        private $database;
    
        private const FIREBASE_CONF = __DIR__ . '/../../conf/firebase.json';
    
        public function __construct()
        {
            $serviceAccount = ServiceAccount::fromJsonFile(self::FIREBASE_CONF);
            $firebase       = (new Factory)
                ->withServiceAccount($serviceAccount)
                ->create();
    
            $this->database = $firebase->getDatabase();
        }
    
        public function getLast()
        {
            $value = $this->database->getReference('gps/poi')
                ->orderByKey()
                ->limitToLast(1)
                ->getValue();
    
            $out                 = array_values($value)[0];
            $out['formatedDate'] = \DateTimeImmutable::createFromFormat('YmdHis', $out['date'])->format('d/m/Y H:i:s');
    
            return $out;
        }
    
        public function persistsData(array $data)
        {
            return $this->database
                ->getReference('gps/poi')
                ->push($data);
        }
    }
    

    The project is almost finished. Now we only need to create the Google map.

    That's the API:

    <?php
    $app->router->group(['middleware' => 'basic', 'prefix' => '/map'], function (Router $route) {
        $route->get('/', function (Gps $gps) {
            return view("index", $gps->getLast());
        });
    
        $route->get('/last', function (Gps $gps) {
            return $gps->getLast();
        });
    });
    

    And the HTML:

    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="initial-scale=1.0, user-scalable=no">
        <meta charset="utf-8">
        <title>Locator</title>
        <style>
            #map {
                height: 100%;
            }
    
            html, body {
                height: 100%;
                margin: 0;
                padding: 0;
            }
        </style>
    </head>
    <body>
    <div id="map"></div>
    <script>
    
        var lastDate;
        var DELAY = 60;
    
        function drawMap(lat, long, text) {
            var CENTER = {lat: lat, lng: long};
            var contentString = '<div id="content">' + text + '</div>';
            var infowindow = new google.maps.InfoWindow({
                content: contentString
            });
            var map = new google.maps.Map(document.getElementById('map'), {
                zoom: 11,
                center: CENTER,
                disableDefaultUI: true
            });
    
            var marker = new google.maps.Marker({
                position: CENTER,
                map: map
            });
            var trafficLayer = new google.maps.TrafficLayer();
    
            trafficLayer.setMap(map);
            infowindow.open(map, marker);
        }
    
        function initMap() {
            lastDate = '{{ $formatedDate }}';
            drawMap({{ $latitude }}, {{ $longitude }}, lastDate);
        }
    
        setInterval(function () {
            fetch('/map/last', {credentials: "same-origin"}).then(function (response) {
                response.json().then(function (data) {
                    if (lastDate !== data.formatedDate) {
                        drawMap(data.latitude, data.longitude, data.formatedDate);
                    }
                });
            });
        }, DELAY * 1000);
    </script>
    <script async defer src="https://maps.googleapis.com/maps/api/js?key=my_google_maps_key&callback=initMap">
    </script>
    </body>
    </html>
    

    And that's all, just enough for a weekend. The source code is available in my GitHub account.