PHP/Lumen data source for Grafana

Today I need to integrate a third-party service into Grafana. I cannot access the service’s database directly, so I will integrate it via a JSON datasource. Grafana allows us to build custom data sources, but in this case I don’t need to create a new one. I can use the Simple JSON datasource:

grafana-cli plugins install grafana-simple-json-datasource

Now I need to create a REST server to serve the data that our JSON datasource needs. According to the documentation we need four routes:

  • GET / should return 200 OK.
  • POST /search used by the find metric options on the query tab in panels.
  • POST /query should return metrics based on input.
  • POST /annotations should return annotations.

We’re going to create a PHP/Lumen server. Basically, the application’s routes are these:

<?php

use Laravel\Lumen\Routing\Router;
use App\Http\Middleware;
use Laravel\Lumen\Application;
use Dotenv\Dotenv;
use App\Http\Handlers;

require_once __DIR__ . '/../vendor/autoload.php';

(Dotenv::create(__DIR__ . '/../env/local'))->load();

$app = new Application(dirname(__DIR__));
$app->middleware([
    Middleware\CorsMiddleware::class,
]);

$app->router->group(['middleware' => Middleware\AuthMiddleware::class], function (Router $router) {
    $router->get('/', Handlers\HelloHandler::class);
    $router->post('/search', Handlers\SearchHandler::class);
    $router->post('/query', Handlers\QueryHandler::class);
    $router->post('/annotations', Handlers\AnnotationHandler::class);
});

return $app;

We need to take care of CORS. I’ll use the middleware that I normally use in these cases:

<?php

namespace App\Http\Middleware;

use Closure;

class CorsMiddleware
{
    public function handle($request, Closure $next)
    {
        $headers = [
            'Access-Control-Allow-Origin'      => 'http://localhost:3000',
            'Access-Control-Allow-Methods'     => 'POST, GET, OPTIONS, PUT, DELETE',
            'Access-Control-Allow-Credentials' => 'true',
            'Access-Control-Max-Age'           => '86400',
            'Access-Control-Allow-Headers'     => 'accept, content-type, Content-Type, Authorization, X-Requested-With',
        ];

        if ($request->isMethod('OPTIONS')) {
            return response()->json(['method' => 'OPTIONS'], 200, $headers);
        }

        $response = $next($request);
        foreach ($headers as $key => $value) {
            $response->header($key, $value);
        }

        return $response;
    }
}

I’ll also use basic authentication, so we need a simple HTTP Basic Authentication middleware:

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class AuthMiddleware
{
    const NAME = 'auth.web';

    public function handle(Request $request, Closure $next)
    {
        if ($request->getUser() != env('HTTP_USER') || $request->getPassword() != env('HTTP_PASS')) {
            $headers = ['WWW-Authenticate' => 'Basic'];

            return response('Unauthorized', 401, $headers);
        }

        return $next($request);
    }
}
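
The credentials checked here come from the dotenv file loaded in the bootstrap (env/local). A minimal sketch of that file, with placeholder values for the two variables read above:

HTTP_USER=grafana
HTTP_PASS=mySecretPassword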

HelloHandler is a dummy route that the datasource uses to check the connection. We only need to answer with a 200 OK:

<?php
namespace App\Http\Handlers;

class HelloHandler
{
    public function __invoke()
    {
        return "Ok";
    }
}

SearchHandler will return the list of available metrics that we’ll use within our Grafana panels. They aren’t strictly necessary. We could return an empty array and later use a metric that isn’t defined here (this list only fills the combo box that Grafana shows us):

<?php
namespace App\Http\Handlers;

class SearchHandler
{
    public function __invoke()
    {
        return [25, 50, 100];
    }
}

QueryHandler is an important one. Here we’ll return the datapoints that we’ll show in Grafana. For testing purposes I’ve created a handler that reads the metric and the from/to dates that Grafana sends to the backend, and returns random values for some metrics and fixed ones for the rest. It’s basically to see something in Grafana. Later, in the real-life project, I’ll query the database and return real data.

<?php

namespace App\Http\Handlers;

use Illuminate\Http\Request;

class QueryHandler
{
    public function __invoke(Request $request)
    {
        $json   = $request->json();
        $range  = $json->get('range');
        $target = $json->get('targets')[0]['target'];

        $tz   = new \DateTimeZone('Europe/Madrid');
        $from = \DateTimeImmutable::createFromFormat("Y-m-d\TH:i:s.uP", $range['from'], $tz);
        $to   = \DateTimeImmutable::createFromFormat("Y-m-d\TH:i:s.uP", $range['to'], $tz);

        return ['target' => $target, 'datapoints' => $this->getDataPoints($from, $to, $target)];
    }

    private function getDataPoints($from, $to, $target)
    {
        $interval = new \DateInterval('PT1H');
        $period   = new \DatePeriod($from, $interval, $to->add($interval));

        $dataPoints = [];
        foreach ($period as $date) {
            $value        = $target > 50 ? rand(0, 100) : $target;
            $dataPoints[] = [$value, strtotime($date->format('Y-m-d H:i:sP')) * 1000];
        }

        return $dataPoints;
    }
}
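
In the real project mentioned above, getDataPoints would query the database instead of generating random values. A minimal sketch of that idea, written as a standalone function for clarity and assuming a PDO connection to PostgreSQL and a hypothetical metrics table (table and column names are made up for the example):

function getDataPointsFromDb(\PDO $db, \DateTimeImmutable $from, \DateTimeImmutable $to, $target)
{
    // Hypothetical table "metrics" with one row per measurement
    $stmt = $db->prepare(
        'SELECT value, extract(epoch from measured_at) AS ts
           FROM metrics
          WHERE metric = :metric AND measured_at BETWEEN :from AND :to
          ORDER BY measured_at'
    );
    $stmt->execute([
        'metric' => $target,
        'from'   => $from->format('Y-m-d H:i:s'),
        'to'     => $to->format('Y-m-d H:i:s'),
    ]);

    $dataPoints = [];
    foreach ($stmt->fetchAll(\PDO::FETCH_ASSOC) as $row) {
        // Grafana expects pairs of [value, timestamp in milliseconds]
        $dataPoints[] = [(float) $row['value'], ((int) $row['ts']) * 1000];
    }

    return $dataPoints;
}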

I’d also like to use annotations. It’s something similar: AnnotationHandler will handle this request. For this test I’ve created two types of annotations, one every hour and another one every 6 hours:

<?php

namespace App\Http\Handlers;

use Illuminate\Http\Request;

class AnnotationHandler
{
    public function __invoke(Request $request)
    {
        $json       = $request->json();
        $annotation = $json->get('annotation');
        $range      = $json->get('range');
  
        return $this->getAnnotations($annotation, $range);
    }

    private function getAnnotations($annotation, $range)
    {
        return $this->getValues($range, 'PT' . $annotation['query'] . 'H');
    }


    private function getValues($range, $int)
    {
        $tz   = new \DateTimeZone('Europe/Madrid');
        $from = \DateTimeImmutable::createFromFormat("Y-m-d\TH:i:s.uP", $range['from'], $tz);
        $to   = \DateTimeImmutable::createFromFormat("Y-m-d\TH:i:s.uP", $range['to'], $tz);

        $annotation = [
            'name'       => $int,
            'enabled'    => true,
            'datasource' => "gonzalo datasource",
            'showLine'   => true,
        ];

        $interval = new \DateInterval($int);
        $period   = new \DatePeriod($from, $interval, $to->add($interval));

        $annotations = [];
        foreach ($period as $date) {
            $annotations[] = ['annotation' => $annotation, "title" => "H " . $date->format('H'), "time" => strtotime($date->format('Y-m-d H:i:sP')) * 1000, 'text' => "teeext"];
        }

        return $annotations;
    }
}

And that’s all. I’ve also put the whole example in a docker-compose file to test it

version: '2'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
      - "80:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
      - ./src/api:/code/src
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
      - ./src/api:/code/src
  grafana:
    image: gonzalo123.grafana
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-grafana
    restart: always
    environment:
      - GF_SECURITY_ADMIN_USER=${GF_SECURITY_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
      - GF_USERS_DEFAULT_THEME=${GF_USERS_DEFAULT_THEME}
      - GF_USERS_ALLOW_SIGN_UP=${GF_USERS_ALLOW_SIGN_UP}
      - GF_USERS_ALLOW_ORG_CREATE=${GF_USERS_ALLOW_ORG_CREATE}
      - GF_AUTH_ANONYMOUS_ENABLED=${GF_AUTH_ANONYMOUS_ENABLED}
      - GF_INSTALL_PLUGINS=${GF_INSTALL_PLUGINS}
    ports:
      - "3000:3000"
    volumes:
      - grafana-db:/var/lib/grafana
      - grafana-log:/var/log/grafana
      - grafana-conf:/etc/grafana
volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local

Here you can see the example in action:

Full code in my github

Working with SAPUI5 locally (part 2). Now with docker

In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using docker to set up our environment.

I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image:

FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./frontend .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]

In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

...
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    volumes:
    - ./src/frontend:/code/src
    networks:
    - api-network

The backend is a PHP application. We can set up a PHP application using different architectures. In this project we’ll use nginx and PHP-FPM.

For nginx we’ll use the following Dockerfile:

FROM  nginx:1.13-alpine

EXPOSE 80

COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
COPY ./backend /code/src
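
The site.conf copied into the image isn’t shown here. A minimal sketch of what that nginx server block could look like, forwarding PHP requests to the FPM container (the api service defined in docker-compose); the document root is an assumption:

server {
    listen 80;

    # Assumed location of the Lumen front controller inside the container
    root /code/src/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "api" is the PHP-FPM service name in docker-compose; php-fpm listens on 9000 by default
        fastcgi_pass api:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}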

And for the PHP host the following one (with xdebug to enable debugging and breakpoints):

FROM php:7.1-fpm

ENV PHP_XDEBUG_REMOTE_ENABLE 1

RUN apt-get update && apt-get install -my \
    git \
    libghc-zlib-dev && \
    apt-get clean

RUN apt-get install -y libpq-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer global require "laravel/lumen-installer"
ENV PATH ~/.composer/vendor/bin:$PATH

COPY ./backend /code/src

And basically that’s all. Here’s the full docker-compose file:

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge

If you want to use this project you only need to:

  • clone the repo from github
  • run ./ui5 up

With this configuration we’re exposing two ports: 8000 for the frontend and 8080 for the backend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.

We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it); we’ll use SCP to host the frontend.

This configuration is just an example without filesystem mapping, without Xdebug in the backend and without exposing the backend externally (only the frontend can use it):

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge

And that’s all. You can see all the source code in my github account

Working with SAPUI5 locally and deploying in SCP

When I work with SAPUI5 projects I normally use WebIDE. WebIDE is a great tool, but I’m more comfortable working locally with my own IDE.
I’ve had this idea in my mind for a while, but I never found the time slot to work on it. Finally, after finding this project from Holger Schäfer on github, I realized how easy it is, and I started to work with this project and adapt it to my needs.

The base of this project is localneo. Localneo starts an http server based on the neo-app.json file. That means we’re going to use the same configuration that we have in production (in SCP). Of course we’ll need destinations. We only need one extra file called destinations.json where we’ll set up our destinations (it creates an http proxy, nothing else).

In this project I’ll create a simple example application that works with one API server.

The backend

In this example I’ll use a PHP/Lumen application:

$app->router->group(['prefix' => '/api', 'middleware' => Middleware\AuthMiddleware::NAME], function (Router $route) {
    $route->get('/', Handlers\HomeHandler::class);
    $route->post('/', Handlers\HomeHandler::class);
});

Basically it has two routes. In fact both routes are the same: one accepts POST requests and the other one GET requests.
They’ll answer with the current date in a JSON response:

namespace App\Http\Handlers;

class HomeHandler
{
    public function __invoke()
    {
        return ['date' => (new \DateTime())->format('c')];
    }
}

Both routes are under one middleware to provide the authentication.

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class AuthMiddleware
{
    public const NAME = 'auth';

    public function handle(Request $request, Closure $next)
    {
        $user = $request->getUser();
        $pass = $request->getPassword();

        if (!$this->validateDestinationCredentials($user, $pass)) {
            $headers = ['WWW-Authenticate' => 'Basic'];

            return response('Backend Login', 401, $headers);
        }

        $authorizationHeader = $request->header('Authorization2');
        if (!$this->validateApplicationToken($authorizationHeader)) {
            return response('Invalid token ', 403);
        }

        return $next($request);

    }

    private function validateApplicationToken($authorizationHeader)
    {
        $token = str_replace('Bearer ', '', $authorizationHeader);

        return $token === getenv('APP_TOKEN');
    }

    private function validateDestinationCredentials($user, $pass)
    {
        if (!($user === getenv('DESTINATION_USER') && $pass === getenv('DESTINATION_PASS'))) {
            return false;
        }

        return true;
    }
}

That means our service will need Basic Authentication and also a token-based authentication.
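
As a quick check, a client request carrying both credentials could look like this (a minimal sketch; the user, password and token are the placeholder values used later in the destination configuration and the tests, and the port is the one used by PHP’s built-in server):

<?php
// HTTP Basic auth for the destination plus the application token in the
// Authorization2 header (all values are placeholders)
$context = stream_context_create([
    'http' => [
        'method' => 'GET',
        'header' => implode("\r\n", [
            'Authorization: Basic ' . base64_encode('superSecretUser:superSecretPassword'),
            'Authorization2: Bearer superSecretToken',
        ]),
    ],
]);

echo file_get_contents('http://localhost:8888/api', false, $context);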

The frontend

Our UI5 application will use a destination called BACKEND. We’ll configure it in our neo-app.json file:

    ...
    {
      "path": "/backend",
      "target": {
        "type": "destination",
        "name": "BACKEND"
      },
      "description": "BACKEND"
    }
    ...

Now we’ll create our extra file called destinations.json. Localneo will use this file to create a web server to serve our frontend locally (using the destination).

As I said before, our backend needs Basic Authentication. This authentication is set up in the destination configuration:

{
  "server": {
    "port": "8080",
    "path": "/webapp/index.html",
    "open": true
  },
  "service": {
    "sapui5": {
      "useSAPUI5": true,
      "version": "1.54.8"
    }
  },
  "destinations": {
    "BACKEND": {
      "url": "http://localhost:8888",
      "auth": "superSecretUser:superSecretPassword"
    }
  }
}

Our application will be a simple list of items

<mvc:View controllerName="gonzalo123.controller.App" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:mvc="sap.ui.core.mvc" displayBlock="true" xmlns="sap.m">
  <App id="idAppControl">
    <pages>
      <Page title="{i18n>appTitle}">
        <content>
          <List>
            <items>
              <ObjectListItem id="GET" title="{i18n>get}"
                              type="Active"
                              press="getPressHandle">
                <attributes>
                  <ObjectAttribute id="getCount" text="{/Data/get/count}"/>
                </attributes>
              </ObjectListItem>
              <ObjectListItem id="POST" title="{i18n>post}"
                              type="Active"
                              press="postPressHandle">
                <attributes>
                  <ObjectAttribute id="postCount" text="{/Data/post/count}"/>
                </attributes>
              </ObjectListItem>
            </items>
          </List>
        </content>
      </Page>
    </pages>
  </App>
</mvc:View>

When we click on GET we’ll perform a GET request to the backend and increment the counter. The same with POST.
We’ll also show the date provided by the backend in a MessageToast:

sap.ui.define([
  "sap/ui/core/mvc/Controller",
  "sap/ui/model/json/JSONModel",
  'sap/m/MessageToast',
  "gonzalo123/model/api"
], function (Controller, JSONModel, MessageToast, api) {
  "use strict";

  return Controller.extend("gonzalo123.controller.App", {
    model: new JSONModel({
      Data: {get: {count: 0}, post: {count: 0}}
    }),

    onInit: function () {
      this.getView().setModel(this.model);
    },

    getPressHandle: function () {
      api.get("/", {}).then(function (data) {
        var count = this.model.getProperty('/Data/get/count');
        MessageToast.show("Pressed : " + data.date);
        this.model.setProperty('/Data/get/count', ++count);
      }.bind(this));
    },

    postPressHandle: function () {
      var count = this.model.getProperty('/Data/post/count');
      api.post("/", {}).then(function (data) {
        MessageToast.show("Pressed : " + data.date);
        this.model.setProperty('/Data/post/count', ++count);
      }.bind(this));
    }
  });
});

Start our application locally

Now we only need to start the backend

php -S 0.0.0.0:8888 -t www

And the frontend
localneo

Debugging locally

As we’re working locally we can use a local debugger in the backend: breakpoints, variable inspection, etc.

We can also debug the frontend using Chrome developer tools, map our local filesystem in the browser and save files directly from Chrome.

Testing

We can test the backend using phpunit and run our tests with
composer run test

Here we can see a simple test of the backend

    public function testAuthorizedRequest()
    {
        $headers = [
            'Authorization2' => 'Bearer superSecretToken',
            'Content-Type'   => 'application/json',
            'Authorization'  => 'Basic ' . base64_encode('superSecretUser:superSecretPassword'),
        ];

        $this->json('GET', '/api', [], $headers)
            ->assertResponseStatus(200);
        $this->json('POST', '/api', [], $headers)
            ->assertResponseStatus(200);
    }


    public function testRequests()
    {

        $headers = [
            'Authorization2' => 'Bearer superSecretToken',
            'Content-Type'   => 'application/json',
            'Authorization'  => 'Basic ' . base64_encode('superSecretUser:superSecretPassword'),
        ];

        $this->json('GET', '/api', [], $headers)
            ->seeJsonStructure(['date']);
        $this->json('POST', '/api', [], $headers)
            ->seeJsonStructure(['date']);
    }

We can also test the frontend using OPA5.

As the backend is already tested, we’ll mock it here using a sinon (https://sinonjs.org/) fake server.

...
    opaTest("When I click on GET the GET counter should increment by one", function (Given, When, Then) {
      Given.iStartMyApp("./integration/Test1/index.html");
      When.iClickOnGET();
      Then.getCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
    });

    opaTest("When I click on POST the POST counter should increment by one", function (Given, When, Then) {
      Given.iStartMyApp("./integration/Test1/index.html");
      When.iClickOnPOST();
      Then.postCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
    });
...

The configuration of our sinon server:

sap.ui.define(
  ["test/server"],
  function (server) {
    "use strict";

    return {
      init: function () {
        var oServer = server.initServer("/backend/api");

        oServer.respondWith("GET", /backend\/api/, [200, {
          "Content-Type": "application/json"
        }, JSON.stringify({
          "date": "2018-07-29T18:44:57+02:00"
        })]);

        oServer.respondWith("POST", /backend\/api/, [200, {
          "Content-Type": "application/json"
        }, JSON.stringify({
          "date": "2018-07-29T18:44:57+02:00"
        })]);
      }
    };
  }
);

The build process

Before uploading the application to SCP we need to build it. The build process optimizes the files and creates the Component-preload.js and sap-ui-cachebuster-info.json files (to ensure our users aren’t using a cached version of our application).
We’ll use grunt to build our application. Here we can see our Gruntfile.js:

module.exports = function (grunt) {
  "use strict";

  require('load-grunt-tasks')(grunt);
  require('time-grunt')(grunt);

  grunt.config.merge({
    pkg: grunt.file.readJSON('package.json'),
    watch: {
      js: {
        files: ['Gruntfile.js', 'webapp/**/*.js', 'webapp/**/*.properties'],
        tasks: ['jshint'],
        options: {
          livereload: true
        }
      },

      livereload: {
        options: {
          livereload: true
        },
        files: [
          'webapp/**/*.html',
          'webapp/**/*.js',
          'webapp/**/*.css'
        ]
      }
    }
  });

  grunt.registerTask("default", [
    "clean",
    "lint",
    "build"
  ]);
};

In our Gruntfile I’ve also configured a watcher to build the application automatically and trigger the live reload (to reload my browser every time I change the frontend).

Now I can build the dist folder with the command:

grunt

Deploy to SCP

The deploy process is very well explained in Holger’s repository.
Basically we need to download the MTA Archive builder and place it in ./ci/tools/mta.jar.
We also need the SAP Cloud Platform Neo Environment SDK (./ci/tools/neo-java-web-sdk/).
We can download those binaries from here

Then we need to fill in our SCP credentials in ./ci/deploy-mta.properties and configure our application in ./ci/mta.yaml
Finally we run ./ci/deploy-mta.sh (here we can set up our SCP password to be used on each deploy)

Full code (frontend and backend) in my github account

Playing with Ionic, Lumen, Firebase, Google maps, Raspberry Pi and background geolocation

I want to do a simple pet project. The idea is to build a mobile application. This application will track my GPS location and send this information to a Firebase database. I’ve never played with Firebase and I want to learn a little bit. With this information I will build a simple web application hosted on my Raspberry Pi. This web application will show a Google map with my last location. I will put this web application on my TV, and anyone in my house will be able to see where I am at any time.

That’s the idea. I want an MVP. First, the mobile application. I will use the ionic framework. I’m a big fan of ionic.

The mobile application is very simple. It only has a toggle to activate-deactivate the background geolocation (sometimes I don’t want to be tracked :).

<ion-header>
    <ion-navbar>
        <ion-title>
            Ionic Blank
        </ion-title>
    </ion-navbar>
</ion-header>

<ion-header>
    <ion-toolbar [color]="toolbarColor">
        <ion-title>{{title}}</ion-title>
        <ion-buttons end>
            <ion-toggle color="light"
                        checked="{{isBgEnabled}}"
                        (ionChange)="changeWorkingStatus($event)">
            </ion-toggle>
        </ion-buttons>
    </ion-toolbar>
</ion-header>

<ion-content padding>
</ion-content>

And the controller:

import {Component} from '@angular/core';
import {Platform} from 'ionic-angular';
import {LocationTracker} from "../../providers/location-tracker/location-tracker";

@Component({
    selector: 'page-home',
    templateUrl: 'home.html'
})
export class HomePage {
    public status: string = localStorage.getItem('status') || "-";
    public title: string = "";
    public isBgEnabled: boolean = false;
    public toolbarColor: string;

    constructor(platform: Platform,
                public locationTracker: LocationTracker) {

        platform.ready().then(() => {

                if (localStorage.getItem('isBgEnabled') === 'on') {
                    this.isBgEnabled = true;
                    this.title = "Working ...";
                    this.toolbarColor = 'secondary';
                } else {
                    this.isBgEnabled = false;
                    this.title = "Idle";
                    this.toolbarColor = 'light';
                }
        });
    }

    public changeWorkingStatus(event) {
        if (event.checked) {
            localStorage.setItem('isBgEnabled', "on");
            this.title = "Working ...";
            this.toolbarColor = 'secondary';
            this.locationTracker.startTracking();
        } else {
            localStorage.setItem('isBgEnabled', "off");
            this.title = "Idle";
            this.toolbarColor = 'light';
            this.locationTracker.stopTracking();
        }
    }
}

As you can see, the toggle button will activate-deactivate the background geolocation and it also changes the background color of the toolbar.

For background geolocation I will use a cordova plugin that is available as an ionic native plugin.

Here you can read a very nice article explaining how to use the plugin with ionic. As the article explains, I’ve created a provider:

import {Injectable, NgZone} from '@angular/core';
import {BackgroundGeolocation} from '@ionic-native/background-geolocation';
import {CONF} from "../conf/conf";

@Injectable()
export class LocationTracker {
    constructor(public zone: NgZone,
                private backgroundGeolocation: BackgroundGeolocation) {
    }

    showAppSettings() {
        return this.backgroundGeolocation.showAppSettings();
    }

    startTracking() {
        this.startBackgroundGeolocation();
    }

    stopTracking() {
        this.backgroundGeolocation.stop();
    }

    private startBackgroundGeolocation() {
        this.backgroundGeolocation.configure(CONF.BG_GPS);
        this.backgroundGeolocation.start();
    }
}

The idea of the plugin is to send a POST request to a URL with the GPS data in the body of the request. So I will create a web API server to handle this request. I will use my Raspberry Pi 3 to serve the application. I will create a simple PHP/Lumen application. This application will handle the POST requests of the mobile application and will also serve an HTML page with the map (using Google Maps).

Mobile requests will be authenticated with a token in the header, and the web application will use basic HTTP authentication. Because of that I will create two middlewares to handle the different ways to authenticate.

<?php
require __DIR__ . '/../vendor/autoload.php';

use App\Http\Middleware;
use App\Model\Gps;
use Illuminate\Contracts\Debug\ExceptionHandler;
use Illuminate\Http\Request;
use Laravel\Lumen\Application;
use Laravel\Lumen\Routing\Router;

(new Dotenv\Dotenv(__DIR__ . '/../env/'))->load();

$app = new Application(__DIR__ . '/..');
$app->singleton(ExceptionHandler::class, App\Exceptions\Handler::class);
$app->routeMiddleware([
    'auth'  => Middleware\AuthMiddleware::class,
    'basic' => Middleware\BasicAuthMiddleware::class,
]);

$app->router->group(['middleware' => 'auth', 'prefix' => '/locator'], function (Router $route) {
    $route->post('/gps', function (Gps $gps, Request $request) {
        $requestData = $request->all();
        foreach ($requestData as $poi) {
            $gps->persistsData([
                'date'             => date('YmdHis'),
                'serverTime'       => time(),
                'time'             => $poi['time'],
                'latitude'         => $poi['latitude'],
                'longitude'        => $poi['longitude'],
                'accuracy'         => $poi['accuracy'],
                'speed'            => $poi['speed'],
                'altitude'         => $poi['altitude'],
                'locationProvider' => $poi['locationProvider'],
            ]);
        }

        return 'OK';
    });
});

return $app;

As we can see, the route /locator/gps will handle the POST request. I’ve created a model to persist the GPS data in the Firebase database:

<?php

namespace App\Model;

use Kreait\Firebase\Factory;
use Kreait\Firebase\ServiceAccount;

class Gps
{
    private $database;

    private const FIREBASE_CONF = __DIR__ . '/../../conf/firebase.json';

    public function __construct()
    {
        $serviceAccount = ServiceAccount::fromJsonFile(self::FIREBASE_CONF);
        $firebase       = (new Factory)
            ->withServiceAccount($serviceAccount)
            ->create();

        $this->database = $firebase->getDatabase();
    }

    public function getLast()
    {
        $value = $this->database->getReference('gps/poi')
            ->orderByKey()
            ->limitToLast(1)
            ->getValue();

        $out                 = array_values($value)[0];
        $out['formatedDate'] = \DateTimeImmutable::createFromFormat('YmdHis', $out['date'])->format('d/m/Y H:i:s');

        return $out;
    }

    public function persistsData(array $data)
    {
        return $this->database
            ->getReference('gps/poi')
            ->push($data);
    }
}
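
The two middlewares registered in the bootstrap (auth for the mobile token and basic for the web map) aren’t shown above. A minimal sketch of what they could look like, assuming the secrets come from environment variables (the header name and variable names are assumptions, not the original implementation, and each class would live in its own file):

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

// Token-based authentication for the requests sent by the mobile application
class AuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        // Hypothetical X-Token header checked against a hypothetical APP_TOKEN env var
        if ($request->header('X-Token') !== env('APP_TOKEN')) {
            return response('Unauthorized', 401);
        }

        return $next($request);
    }
}

// HTTP Basic authentication for the web page with the map
class BasicAuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        if ($request->getUser() !== env('HTTP_USER') || $request->getPassword() !== env('HTTP_PASS')) {
            return response('Unauthorized', 401, ['WWW-Authenticate' => 'Basic']);
        }

        return $next($request);
    }
}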

The project is almost finished. Now we only need to create the Google map.

That’s the API:

<?php
$app->router->group(['middleware' => 'basic', 'prefix' => '/map'], function (Router $route) {
    $route->get('/', function (Gps $gps) {
        return view("index", $gps->getLast());
    });

    $route->get('/last', function (Gps $gps) {
        return $gps->getLast();
    });
});

And the HTML

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="initial-scale=1.0, user-scalable=no">
    <meta charset="utf-8">
    <title>Locator</title>
    <style>
        #map {
            height: 100%;
        }

        html, body {
            height: 100%;
            margin: 0;
            padding: 0;
        }
    </style>
</head>
<body>
<div id="map"></div>
<script>

    var lastDate;
    var DELAY = 60;

    function drawMap(lat, long, text) {
        var CENTER = {lat: lat, lng: long};
        var contentString = '<div id="content">' + text + '</div>';
        var infowindow = new google.maps.InfoWindow({
            content: contentString
        });
        var map = new google.maps.Map(document.getElementById('map'), {
            zoom: 11,
            center: CENTER,
            disableDefaultUI: true
        });

        var marker = new google.maps.Marker({
            position: CENTER,
            map: map
        });
        var trafficLayer = new google.maps.TrafficLayer();

        trafficLayer.setMap(map);
        infowindow.open(map, marker);
    }

    function initMap() {
        lastDate = '{{ $formatedDate }}';
        drawMap({{ $latitude }}, {{ $longitude }}, lastDate);
    }

    setInterval(function () {
        fetch('/map/last', {credentials: "same-origin"}).then(function (response) {
            response.json().then(function (data) {
                if (lastDate !== data.formatedDate) {
                    drawMap(data.latitude, data.longitude, data.formatedDate);
                }
            });
        });
    }, DELAY * 1000);
</script>
<script async defer src="https://maps.googleapis.com/maps/api/js?key=my_google_maps_key&callback=initMap">
</script>
</body>
</html>

And that’s all. Just enough for a weekend. Source code is available in my github account

PHP application in SAP Cloud Platform. With PostgreSQL, Redis and Cloud Foundry

Keeping on with my study of SAP’s cloud platform (SCP) and Cloud Foundry, today I’m going to build a simple PHP application. This application serves a simple Bootstrap landing page. The application uses HTTP basic authentication. The credentials are validated against a PostgreSQL database. It also has an API to retrieve the localtimestamp from the database server (just to play with a database server). I also want to play with Redis in the cloud, so the API response will be cached with a Time To Live (TTL) of 5 seconds. I will use a Redis service for that.

First we create our services in Cloud Foundry. I’m using the free layer of SAP Cloud Foundry for this example. I’m not going to explain here how to do that; it’s pretty straightforward within SAP’s Cockpit. Some time ago I played with IBM’s Cloud Foundry and I remember that it was also very simple.

Then we create our application (.bp-config/options.json)

{
  "WEBDIR": "www",
  "LIBDIR": "lib",
  "PHP_VERSION": "{PHP_70_LATEST}",
  "PHP_MODULES": ["cli"],
  "WEB_SERVER": "nginx"
}

If we want to use our PostgreSQL and Redis services with our PHP application, we need to connect those services to the application. This operation can also be done with SAP’s Cockpit.

Now it’s the turn of the PHP application. I normally use the Silex framework in my backends, but now there’s a problem: Silex is dead. I feel a little bit sad, but I’m not going to cry; it’s just a tool and there are others. I’ve got my example in Silex but, as an exercise, I will also do it with Lumen.

Let’s start with Silex. If you’re familiar with the Silex micro framework (or any other microframework, indeed) you can see that there isn’t anything special.

use Symfony\Component\HttpKernel\Exception\HttpException;
use Symfony\Component\HttpFoundation\Request;
use Silex\Provider\TwigServiceProvider;
use Silex\Application;
use Predis\Client;

if (php_sapi_name() == "cli-server") {
    // when I start the server my local machine vendors are in a different path
    require __DIR__ . '/../vendor/autoload.php';
    // and also I mock VCAP_SERVICES env
    $env   = file_get_contents(__DIR__ . "/../conf/vcap_services.json");
    $debug = true;
} else {
    require 'vendor/autoload.php';
    $env   = $_ENV["VCAP_SERVICES"];
    $debug = false;
}

$vcapServices = json_decode($env, true);

$app = new Application(['debug' => $debug, 'ttl' => 5]);

$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app['db'] = function () use ($vcapServices) {
    $dbConf = $vcapServices['postgresql'][0]['credentials'];
    $dsn    = "pgsql:dbname={$dbConf['dbname']};host={$dbConf['hostname']};port={$dbConf['port']}";
    $dbh    = new PDO($dsn, $dbConf['username'], $dbConf['password']);
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $dbh->setAttribute(PDO::ATTR_CASE, PDO::CASE_UPPER);
    $dbh->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

    return $dbh;
};

$app['redis'] = function () use ($vcapServices) {
    $redisConf = $vcapServices['redis'][0]['credentials'];

    return new Client([
        'scheme'   => 'tcp',
        'host'     => $redisConf['hostname'],
        'port'     => $redisConf['port'],
        'password' => $redisConf['password'],
    ]);
};

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.html.twig', [
        'user' => $app['user'],
        'ttl'  => $app['ttl'],
    ]);
});

$app->get("/timestamp", function (Application $app) {
    if (!$app['redis']->exists('timestamp')) {
        $stmt = $app['db']->prepare('SELECT localtimestamp');
        $stmt->execute();
        $app['redis']->set('timestamp', $stmt->fetch()['TIMESTAMP'], 'EX', $app['ttl']);
    }

    return $app->json($app['redis']->get('timestamp'));
});

$app->before(function (Request $request) use ($app) {
    $username = $request->server->get('PHP_AUTH_USER', false);
    $password = $request->server->get('PHP_AUTH_PW');

    $stmt = $app['db']->prepare('SELECT name, surname FROM public.user WHERE username=:USER AND pass=:PASS');
    $stmt->execute(['USER' => $username, 'PASS' => md5($password)]);
    $row = $stmt->fetch();
    if ($row !== false) {
        $app['user'] = $row;
    } else {
        header("WWW-Authenticate: Basic realm='RIS'");
        throw new HttpException(401, 'Please sign in.');
    }
}, 0);

$app->run();

Maybe the only special thing is the way the autoloader is initialized. We initialize the autoloader in two different ways: one when the application runs in the cloud and another one when it runs locally with PHP’s built-in server. That’s because the vendors are located in different paths depending on which environment the application lives in. When Cloud Foundry connects services to applications it injects environment variables with the service configuration (credentials, host, …). It uses the VCAP_SERVICES env var.

I use the built-in server to run the application locally. When I do that I don’t have the VCAP_SERVICES variable, and my services’ information is also different from when I’m running the application in the cloud. Maybe it would be better with an environment variable, but I’m using this trick:

if (php_sapi_name() == "cli-server") {
    // I'm runing the application locally
} else {
    // I'm in the cloud
}

So when I’m running locally I mock VCAP_SERVICES with my local values and also, for example, configure the Silex application in debug mode.
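
The mocked conf/vcap_services.json only needs the keys that the code above reads. A minimal sketch with placeholder values:

{
  "postgresql": [
    {
      "credentials": {
        "dbname": "mydb",
        "hostname": "localhost",
        "port": "5432",
        "username": "user",
        "password": "secret"
      }
    }
  ],
  "redis": [
    {
      "credentials": {
        "hostname": "localhost",
        "port": "6379",
        "password": "secret"
      }
    }
  ]
}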

Sometimes I want to run my application locally but use the cloud services. I cannot connect directly to those services, but we can do it over ssh through our connected application. For example, if our PostgreSQL service is running on 10.11.241.0:48825 we can map this remote port (in a private network) to our local port with this command:

cf ssh -N -T -L 48825:10.11.241.0:48825 silex

You can see more information about this command here.

Now we can use pgAdmin, for example, on our local machine to connect to the cloud server.

We can do the same with Redis

cf ssh -N -T -L 54266:10.11.241.9:54266 silex

And basically that’s all. Now we’ll do the same with Lumen. The idea is to create the same application with Lumen instead of Silex. It’s a dummy application, but it covers the tasks that I normally need. I will also re-use the Redis and PostgreSQL services from the previous project.

use App\Http\Middleware;
use Laravel\Lumen\Application;
use Laravel\Lumen\Routing\Router;
use Predis\Client;

if (php_sapi_name() == "cli-server") {
    require __DIR__ . '/../vendor/autoload.php';
    $env = 'dev';
} else {
    require 'vendor/autoload.php';
    $env = 'prod';
}

(new Dotenv\Dotenv(__DIR__ . "/../env/{$env}"))->load();

$app = new Application();

$app->routeMiddleware([
    'auth' => Middleware\AuthMiddleware::class,
]);

$app->register(App\Providers\VcapServiceProvider::class);
$app->register(App\Providers\StdoutLogServiceProvider::class);
$app->register(App\Providers\DbServiceProvider::class);
$app->register(App\Providers\RedisServiceProvider::class);

$app->router->group(['middleware' => 'auth'], function (Router $router) {
    $router->get("/", function () {
        return view("index", [
            'user' => config("user"),
            'ttl'  => getenv('TTL'),
        ]);
    });

    $router->get("/timestamp", function (Client $redis, PDO $conn) {
        if (!$redis->exists('timestamp')) {
            $stmt = $conn->prepare('SELECT localtimestamp');
            $stmt->execute();
            $redis->set('timestamp', $stmt->fetch()['TIMESTAMP'], 'EX', getenv('TTL'));
        }

        return response()->json($redis->get('timestamp'));
    });
});

$app->run();

I’ve created four service providers. One to handle database connections (I don’t like ORMs):

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use PDO;

class DbServiceProvider extends ServiceProvider
{
    public function register()
    {
    }

    public function boot()
    {
        $vcapServices = app('vcap_services');

        $dbConf = $vcapServices['postgresql'][0]['credentials'];
        $dsn    = "pgsql:dbname={$dbConf['dbname']};host={$dbConf['hostname']};port={$dbConf['port']}";
        $dbh    = new PDO($dsn, $dbConf['username'], $dbConf['password']);
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $dbh->setAttribute(PDO::ATTR_CASE, PDO::CASE_UPPER);
        $dbh->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

        $this->app->bind(PDO::class, function ($app) use ($dbh) {
            return $dbh;
        });
    }
}

Another one for Redis. I need to study Lumen a little bit more; I know that Lumen has built-in support for Redis.

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Predis\Client;

class RedisServiceProvider extends ServiceProvider
{
    public function register()
    {
    }

    public function boot()
    {
        $vcapServices = app('vcap_services');
        $redisConf    = $vcapServices['redis'][0]['credentials'];

        $redis = new Client([
            'scheme'   => 'tcp',
            'host'     => $redisConf['hostname'],
            'port'     => $redisConf['port'],
            'password' => $redisConf['password'],
        ]);

        $this->app->bind(Client::class, function ($app) use ($redis) {
            return $redis;
        });
    }
}

Another one to tell monolog to send logs to Stdout

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Monolog;

class StdoutLogServiceProvider extends ServiceProvider
{
    public function register()
    {
        app()->configureMonologUsing(function (Monolog\Logger $monolog) {
            return $monolog->pushHandler(new \Monolog\Handler\ErrorLogHandler());
        });
    }
}

And the last one to work with the VCAP environment variables. I should probably integrate it with the dotenv files.

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class VcapServiceProvider extends ServiceProvider
{
    public function register()
    {
        if (php_sapi_name() == "cli-server") {
            $env = file_get_contents(__DIR__ . "/../../conf/vcap_services.json");
        } else {
            $env = $_ENV["VCAP_SERVICES"];
        }

        $vcapServices = json_decode($env, true);

        $this->app->bind('vcap_services', function ($app) use ($vcapServices) {
            return $vcapServices;
        });
    }
}

We also need to handle authentication (HTTP basic auth in this case), so we’ll create a simple middleware:

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use PDO;

class AuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        $user = $request->getUser();
        $pass = $request->getPassword();

        $db = app(PDO::class);
        $stmt = $db->prepare('SELECT name, surname FROM public.user WHERE username=:USER AND pass=:PASS');
        $stmt->execute(['USER' => $user, 'PASS' => md5($pass)]);
        $row = $stmt->fetch();
        if ($row !== false) {
            config(['user' => $row]);
        } else {
            $headers = ['WWW-Authenticate' => 'Basic'];
            return response('Admin Login', 401, $headers);
        }

        return $next($request);
    }
}

In summary: Lumen is cool. The interface is very similar to Silex, and I can swap my mind from thinking in Silex to thinking in Lumen easily. Blade instead of Twig: no problem. Service providers are very similar, routing is almost the same and middlewares are much better. Nowadays the backend is a commodity for me, so I don’t want to spend too much time working on it. I want something that just works. Lumen looks like that.

Both projects (Silex and Lumen) are available in my github

Taking photos with an ionic2 application and uploading them to an S3 bucket with SAP’s Cloud Foundry using Silex and Lumen

Today I want to play with an experiment. When I work with mobile applications, I normally use ionic and on-premise backends. Today I want to play with cloud-based backends. In this small experiment I want to use an ionic2 application to take pictures and upload them to an S3 bucket. Let’s start.

First I’ve created a simple ionic2 application. It’s a very simple application. Only one page with a button to trigger the device’s camera.

<ion-header>
    <ion-navbar>
        <ion-title>
            Photo
        </ion-title>
    </ion-navbar>
</ion-header>

<ion-content padding>
    <ion-fab bottom right>
        <button ion-fab (click)="takePicture()">
            <ion-icon  name="camera"></ion-icon>
        </button>
    </ion-fab>
</ion-content>

The controller uses @ionic-native/camera to take photos and later we use @ionic-native/transfer to upload them to the backend.

import {Component} from '@angular/core';
import {Camera, CameraOptions} from '@ionic-native/camera';
import {Transfer, FileUploadOptions, TransferObject} from '@ionic-native/transfer';
import {ToastController} from 'ionic-angular';
import {LoadingController} from 'ionic-angular';

@Component({
    selector: 'page-home',
    templateUrl: 'home.html'
})
export class HomePage {
    constructor(private transfer: Transfer,
                private camera: Camera,
                public toastCtrl: ToastController,
                public loading: LoadingController) {
    }

    takePicture() {
        const options: CameraOptions = {
            quality: 100,
            destinationType: this.camera.DestinationType.FILE_URI,
            sourceType: this.camera.PictureSourceType.CAMERA,
            encodingType: this.camera.EncodingType.JPEG,
            targetWidth: 1000,
            targetHeight: 1000,
            saveToPhotoAlbum: false,
            correctOrientation: true
        };

        this.camera.getPicture(options).then((uri) => {
            const fileTransfer: TransferObject = this.transfer.create();

            let options: FileUploadOptions = {
                fileKey: 'file',
                fileName: uri.substr(uri.lastIndexOf('/') + 1),
                chunkedMode: true,
                headers: {
                    Connection: "close"
                },
                params: {
                    metadata: {foo: 'bar'},
                    token: 'mySuperSecretToken'
                }
            };

            let loader = this.loading.create({
                content: 'Uploading ...',
            });

            loader.present().then(() => {
                let s3UploadUri = 'https://myApp.cfapps.eu10.hana.ondemand.com/upload';
                fileTransfer.upload(uri, s3UploadUri, options).then((data) => {
                    let message;
                    let response = JSON.parse(data.response);
                    if (response['status']) {
                        message = 'Picture uploaded to S3: ' + response['key']
                    } else {
                        message = 'Error Uploading to S3: ' + response['error']
                    }
                    loader.dismiss();
                    let toast = this.toastCtrl.create({
                        message: message,
                        duration: 3000
                    });
                    toast.present();
                }, (err) => {
                    loader.dismiss();
                    let toast = this.toastCtrl.create({
                        message: "Error",
                        duration: 3000
                    });
                    toast.present();
                });
            });
        });
    }
}

Now let’s work with the backend. Next time I’ll use the JavaScript AWS SDK to upload pictures directly from the mobile application (without a backend), but today we’ll use a backend. Nowadays I’m involved with SAP Cloud Platform projects, so we’ll use an SAP Cloud Foundry tenant (using a free account). In this tenant we’ll create a PHP application using the PHP buildpack with nginx:

applications:
- name:    myApp
  path: .
  memory:  128MB
  buildpack: php_buildpack

The PHP application is a simple Silex application to handle the file uploads and post the pictures to S3 using the official AWS SDK for PHP (based on Guzzle)

use Symfony\Component\HttpFoundation\Request;
use Silex\Application;
use Aws\S3\S3Client;

require 'vendor/autoload.php';

$app = new Application([
    'debug'        => false,
    'aws.config'   => [
        'debug'       => false,
        'version'     => 'latest',
        'region'      => 'eu-west-1',
        'credentials' => [
            'key'    => $_ENV['s3key'],
            'secret' => $_ENV['s3secret'],
        ],
    ],
]);

$app['aws'] = function () use ($app) {
    return new S3Client($app['aws.config']);
};

$app->post('/upload', function (Request $request, Application $app) {
    $metadata = json_decode($request->get('metadata'), true);
    $token    = $request->get('token');

    if ($token === $_ENV['token']) {
        $fileName = $_FILES['file']['name'];
        $fileType = $_FILES['file']['type'];
        $tmpName  = $_FILES['file']['tmp_name'];

        /** @var \Aws\S3\S3Client $s3 */
        $s3 = $app['aws'];
        try {
            $key = date('YmdHis') . "_" . $fileName;
            $s3->putObject([
                'Bucket'      => $_ENV['s3bucket'],
                'Key'         => $key,
                'SourceFile'  => $tmpName,
                'ContentType' => $fileType,
                'Metadata'    => $metadata,
            ]);
            unlink($tmpName);

            return $app->json([
                'status' => true,
                'key'    => $key,
            ]);
        } catch (Aws\S3\Exception\S3Exception $e) {
            return $app->json([
                'status' => false,
                'error'  => $e->getMessage(),
            ]);
        }
    } else {
        return $app->json([
            'status' => false,
            'error'  => "Token error",
        ]);
    }
});

$app->run();

I just wanted a simple prototype (a working one). Enough for a Sunday morning of hacking.

UPDATE

I had this post ready weeks ago, but something has changed: Silex is dead. So, as an exercise, I’ll migrate the current Silex application to Lumen (a quick prototype).

That’s the main application.

use App\Http\Middleware;
use Aws\S3\S3Client;
use Illuminate\Http\Request;
use Laravel\Lumen\Application;

require 'vendor/autoload.php';

(new Dotenv\Dotenv(__DIR__ . "/../env"))->load();

$app = new Application();

$app->routeMiddleware([
    'auth' => Middleware\AuthMiddleware::class,
]);

$app->register(App\Providers\S3ServiceProvider::class);

$app->group(['middleware' => 'auth'], function (Application $app) {
    $app->post('/upload', function (Request $request, Application $app, S3Client $s3) {
        $metadata = json_decode($request->get('metadata'), true);
        $fileName = $_FILES['file']['name'];
        $fileType = $_FILES['file']['type'];
        $tmpName  = $_FILES['file']['tmp_name'];

        try {
            $key = date('YmdHis') . "_" . $fileName;
            $s3->putObject([
                'Bucket'      => getenv('s3bucket'),
                'Key'         => $key,
                'SourceFile'  => $tmpName,
                'ContentType' => $fileType,
                'Metadata'    => $metadata,
            ]);
            unlink($tmpName);

            return response()->json([
                'status' => true,
                'key'    => $key,
            ]);
        } catch (Aws\S3\Exception\S3Exception $e) {
            return response()->json([
                'status' => false,
                'error'  => $e->getMessage(),
            ]);
        }
    });
});

$app->run();

We can probably find an S3 service provider, but I’ve built a simple one for this example:

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Aws\S3\S3Client;

class S3ServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->bind(S3Client::class, function ($app) {
            $conf = [
                'debug'       => false,
                'version'     => getenv('AWS_VERSION'),
                'region'      => getenv('AWS_REGION'),
                'credentials' => [
                    'key'    => getenv('s3key'),
                    'secret' => getenv('s3secret'),
                ],
            ];

            return new S3Client($conf);
        });
    }
}

And I’m also using a middleware for the authentication:

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class AuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        $token = $request->get('token');
        if ($token !== getenv('token')) {
            return response('Admin Login', 401);
        }

        return $next($request);
    }
}

Ok, I’ll post this article soon; at least before Lumen is dead too and I need to update this post again 🙂

Full project (mobile application and both backends) in my github

Silex is dead (… or not)

Last week was the deSymfony conference in Castellón (Spain). IMHO deSymfony is the best conference I’ve ever attended. The talks are good, but nowadays I appreciate this kind of event not so much because of them: I like to go to events because of the people, the coffee breaks and the community (and deSymfony is brilliant at this point). This year I couldn’t join the conference. It was a pity; a lot of good friends were there. So I could only follow the buzz on Twitter, read the published slides (thanks Raul) and wait for the talk videos on youtube.

In my Twitter timeline two tweets especially got my attention: one from Julieta Cuadrado and another one from Asier Marqués.

The tweets are in Spanish but the translation is clear: Javier Eguiluz (Symfony Core Team member and co-organizer of the conference) said in his talk: “Silex is dead”. When I read the tweets his slides were not available yet, but a couple of days later the slides were online. Slide 175 is clear: “Silex is dead”.

Javier recommends us not to use Silex in new projects and to mark existing ones as “legacy”. It’s hard for me. If you have ever read my blog you will have noticed that I’m a big fan of Silex. Each time I need a backend, an API/REST server or something like that, the first thing I do is “composer require silex/silex”. I know that Silex has limitations. It’s built on top of the Pimple dependency injection container, and Pimple is really awful, but this microframework gives me exactly what I need. It’s small, simple, fast enough and really easy to adapt to my needs.

I remember a dinner at deSymfony years ago in Barcelona, speaking with Javier. He was trying to “convince” me to use the Symfony full stack framework instead of Silex. He almost succeeded, but I didn’t like Symfony full stack. Too complicated for me: a lot of interesting things, but many of them I don’t really need. I don’t like the Symfony full stack framework, but I love Symfony. For me it’s great because of its components. They’re independent pieces of code that I can use to fit my needs exactly instead of using a full-stack framework. I’ve learned a lot about SOLID by reading and hacking with Symfony components. I’m not saying that full stack frameworks are bad; I only say that they’re not for me. If I’m forced to use them I will do it, but if I can choose, I definitely choose a micro framework, even for medium and big projects.

A new version of Symfony (Symfony 4) is coming next November, and reading Javier’s slides on slideshare I can get an idea of its roadmap. My summary is clear: “Brilliant”. It looks like the Symfony people have listened to my needs and changed the whole framework to adapt it to me. After understanding the roadmap I think I need to change the title of the post (initially it was only “Silex is dead”). Silex is not dead. For me, Symfony (the full stack framework) is the one that’s dead. Silex will be upgraded and renamed to Symfony (I know this assertion is subjective; it’s just my point of view). So the bad feeling I had when I read Julieta and Asier’s tweets turns into a good one. Good move, SensioLabs!

But I’ve got a problem right now. What can I do if I need to start a new project today? Symfony 4 isn’t ready yet. Javier said that we can use Symfony Flex and create new projects with Symfony 3 with the look and feel of Symfony 4, but Flex is still in alpha and I don’t want to play with alpha tools in production. Especially in the backend. I’m getting older, I know. For me the backend is a commodity right now. I need the backend to serve JSON mainly.

I normally use PHP and Silex here only because I’m very comfortable with them. In projects, business people don’t care about technologies and frameworks. It’s our job (or our problem, depending on how you read it). And don’t forget one thing: developers are part of the business, so in one part of my mind I don’t care about frameworks either. I care about getting things done, maximising the potential of technology and driving innovation to customer benefits (good lapidary phrase, isn’t it?).

So I’ve started looking for alternatives. My objective here is clear: I want to find a framework to do the things that I usually do with Silex. Nothing more. And there’s something important here: the tool must be easy to learn. I want to master it (or at least become productive with it) in a couple of days at most.

I started with the first one, Lumen, and I think I will stop searching. Lumen is the micro framework of Laravel. Probably there are now two major communities in the PHP world: Symfony and Laravel. Maybe if we’re strict, Laravel and Symfony are not different communities. In fact Laravel and Symfony share a lot of components. So maybe both communities are the same.

I’ve almost never played with Laravel and it’s time to study it a little bit. Some time ago I used the Eloquent ORM, but since I hate ORMs I always return to PDO/DBAL. As I said before, I didn’t like the Symfony full stack framework. It’s too complex for me, and Laravel is the same. When I started with PHP (in the early 2000s) there weren’t any frameworks. I remember reading books about Java and J2EE, trying to understand something in that nightmare of acronyms and XML configurations, and trying to build my own framework in PHP. Now in 2017 building your own framework in PHP is a good learning exercise, but using it in real projects is ridiculous. As someone said before (I don’t remember who): “Everybody must build his own framework, and never use it at all”.

Swapping from Silex to Lumen is pretty straightforward. In fact, with an ultra-minimal application it’s exactly the same. First Silex:

use Silex\Application;

$app = new Application();

$app->get("/", function() {
    return "Hello from Silex";
});

$app->run();

And the same with Lumen:

use Laravel\Lumen\Application;

$app = new Application();

$app->get("/", function() {
    return "Hello from Lumen";
});

$app->run();

If you’re a Silex user you only need a couple of hours reading the Lumen docs and you will be able to set up a new project without any problem. The concepts are the same, with slight differences and even cool extras such as route groups and middleware. Nothing impossible to do with Silex, indeed, but with a very smart and simple interface. If I need to create a new project right now I will use Lumen without any doubt.
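For instance, route groups and middleware in Lumen look more or less like this. It’s just a quick sketch I wrote to get the idea, assuming Lumen 5.5-style routing; DemoHeaderMiddleware and the /api prefix are made-up names for the example:

use Laravel\Lumen\Application;
use Laravel\Lumen\Routing\Router;

require_once __DIR__ . '/../vendor/autoload.php';

// A tiny hypothetical middleware, only for this sketch: it adds a header to every response.
class DemoHeaderMiddleware
{
    public function handle($request, Closure $next)
    {
        $response = $next($request);
        $response->header('X-Demo', 'lumen-sketch');

        return $response;
    }
}

$app = new Application(dirname(__DIR__));

// Register the middleware under a short alias.
$app->routeMiddleware([
    'demo.header' => DemoHeaderMiddleware::class,
]);

// Group routes under a common prefix and attach the middleware to the whole group.
$app->router->group(['prefix' => 'api', 'middleware' => 'demo.header'], function (Router $router) {
    $router->get('/status', function () {
        return response()->json(['status' => 'ok']);
    });

    $router->post('/items', function () {
        return response()->json(['created' => true], 201);
    });
});

$app->run();

In Silex I would do something similar with mount() and before()/after() hooks, but the Lumen interface feels more compact to me.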

Next winter, when Symfony 4 arrives, I will probably face the problem of choosing. But in the past years I’ve been involved in the crazy world of JavaScript: Angular, Angular 2, React, npm, yarn, webpack, … If I’ve survived that (in the end I chose jQuery, but that’s a different story :), I’m ready for anything right now.

PHP Redis cache wrapper

Today I want to share a simple Redis wrapper for PHP that I’m using in one project. Nowadays I don’t like reinventing the wheel as I did years ago. I’m not going to create a new Redis library for PHP; there are too many already. I normally use Predis (it’s the “de facto” standard, indeed).

As I said before, I don’t like to re-invent the wheel, but I do like to create wrappers to adapt libraries to my common operations. For example, with Redis I very often read a key from the cache (and create the key in the cache if it doesn’t exist). Because of that I’ve created this simple wrapper to encapsulate this common operation for me.

The idea is to use something like this:

use G\Redis\Cache;
use Predis\Client;

$cache = new Cache(new Client(json_decode(file_get_contents(__DIR__ . '/conf.json'), true)));

$value = $cache->get("aaa.aaa.aaa", function () {
    return [
        'a' => 1,
    ];
});

$cache->delete("aaa.aaa.aaa");

It’s very simple. I create a new instance (injecting a Predis instance), and when I want to get the key from Redis I also pass a callback that builds the value, so the key is created in the cache if it doesn’t exist.
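The get() method also accepts an optional TTL (in seconds) as a third argument, so the cached value expires automatically. A quick usage sketch (the key name and the 60-second TTL are just examples):

// Cache the generated value for 60 seconds; after that the callback runs again.
$value = $cache->get("report.daily", function () {
    return ['generated_at' => date('c')];
}, 60);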

The wrapper’s code is also very simple:

namespace G\Redis;

use Exception;
use Predis\ClientInterface;

class Cache
{
    private $redis;

    public function __construct(ClientInterface $redis)
    {
        $this->redis = $redis;
    }

    public function get($key, callable $callback = null, $ttl = null)
    {
        if ($this->redis->exists($key)) {
            return json_decode($this->redis->get($key), true);
        }

        if (!is_callable($callback)) {
            throw new Exception("Key value '{$key}' not in cache and no callback available to populate the cache");
        }

        $out = call_user_func($callback);
        if (!is_null($ttl)) {
            $this->redis->set($key, json_encode($out), 'EX', $ttl);
        } else {
            $this->redis->set($key, json_encode($out));
        }

        return $out;
    }

    public function delete($key)
    {
        $keys = $this->redis->keys($key);
        if (count($keys) > 0) {
            $this->redis->del($keys);
        }

        return $keys;
    }
}
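Since delete() relies on the Redis KEYS command, it also accepts glob-style patterns, so I can drop a whole prefix at once (KEYS scans the whole keyspace, so better not to abuse it on big databases). For example:

// Removes every key matching the pattern in a single call.
$cache->delete("aaa.*");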

The code is available on my GitHub and also on Packagist.

Playing with RabbitMQ, PHP and node

I need to use RabbitMQ in one project. I’m a big fan of Gearman, but I must admit RabbitMQ is much more powerful. In this project I need to work with both PHP and node, so I want to build a wrapper for those two languages. I don’t want to re-invent the wheel, so I will use existing libraries (php-amqplib and amqplib for node).

Basically I need three things. First, exchange channels to log different actions: I need to decouple those actions from the main code. Second, work queues to make sure jobs get executed; it doesn’t matter if a job runs later, but it must run. And finally, RPC commands.

Let’s start with the queues. I want to push events to a queue in PHP

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
$queue = Builder::queue('queue.backend', $server);
$queue->emit(["aaa" => 1]);

and also with node

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var queue = builder.queue('queue.backend', server);
queue.emit({aaa: 1});

And I also want to register workers to those queues with PHP and node

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
Builder::queue('queue.backend', $server)->receive(function ($data) {
    error_log(json_encode($data));
});

and with node:

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var queue = builder.queue('queue.backend', server);
queue.receive(function (data) {
    console.log(data);
});

Both implementations use a builder: a small factory that merges the server settings with some default options and returns the right wrapper class.
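The Builder class itself isn’t shown here (it lives in the repository); this is a minimal sketch of what Builder::queue() could look like, assuming it simply merges the server settings with some default queue and consumer options (the defaults below are my assumption, not necessarily the ones the library uses):

namespace G\Rabbit;

class Builder
{
    public static function queue($name, array $server)
    {
        // Assumed defaults: a durable, non-exclusive queue and manual acks.
        return new Queue($name, [
            'server'   => $server,
            'queue'    => [
                'passive'     => false,
                'durable'     => true,
                'exclusive'   => false,
                'auto_delete' => false,
                'nowait'      => false,
            ],
            'consumer' => [
                'no_local'  => false,
                'no_ack'    => false,
                'exclusive' => false,
                'nowait'    => false,
            ],
        ]);
    }

    // exchange() and rpc() would follow the same pattern,
    // returning Exchange and RPC instances with their own defaults.
}

With that in place, this is the PHP Queue implementation: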

namespace G\Rabbit;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
class Queue
{
    private $name;
    private $conf;
    public function __construct($name, $conf)
    {
        $this->name = $name;
        $this->conf = $conf;
    }
    private function createConnection()
    {
        $server = $this->conf['server'];
        return new AMQPStreamConnection($server['host'], $server['port'], $server['user'], $server['pass']);
    }
    private function declareQueue($channel)
    {
        $conf = $this->conf['queue'];
        $channel->queue_declare($this->name, $conf['passive'], $conf['durable'], $conf['exclusive'],
            $conf['auto_delete'], $conf['nowait']);
    }
    public function emit($data = null)
    {
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $this->declareQueue($channel);
        $msg = new AMQPMessage(json_encode($data),
            ['delivery_mode' => 2] # make message persistent
        );
        $channel->basic_publish($msg, '', $this->name);
        $channel->close();
        $connection->close();
    }
    public function receive(callable $callback)
    {
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $this->declareQueue($channel);
        $consumer = $this->conf['consumer'];
        if ($consumer['no_ack'] === false) {
            // Fair dispatch: with manual acks, don't push a new message to this worker until the previous one is acked.
            $channel->basic_qos(null, 1, null);
        }
        $channel->basic_consume($this->name, '', $consumer['no_local'], $consumer['no_ack'], $consumer['exclusive'],
            $consumer['nowait'],
            function ($msg) use ($callback) {
                call_user_func($callback, json_decode($msg->body, true), $this->name);
                $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
                $now = new \DateTime();
                echo '['.$now->format('d/m/Y H:i:s')."] {$this->name}::".$msg->body, "\n";
            });
        $now = new \DateTime();
        echo '['.$now->format('d/m/Y H:i:s')."] Queue '{$this->name}' initialized \n";
        while (count($channel->callbacks)) {
            $channel->wait();
        }
        $channel->close();
        $connection->close();
    }
}

And the node implementation:

var amqp = require('amqplib/callback_api');

var Queue = function (name, conf) {
    return {
        emit: function (data, close=true) {
            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    var msg = JSON.stringify(data);

                    ch.assertQueue(name, conf.queue);
                    ch.sendToQueue(name, new Buffer(msg));
                });
                if (close) {
                    setTimeout(function () {
                        conn.close();
                        process.exit(0)
                    }, 500);
                }
            });
        },
        receive: function (callback) {
            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    ch.assertQueue(name, conf.queue);
                    console.log(new Date().toString() + ' Queue ' + name + ' initialized');
                    ch.consume(name, function (msg) {
                        console.log(new Date().toString() + " Received %s", msg.content.toString());
                        if (callback) {
                            callback(JSON.parse(msg.content.toString()), msg.fields.routingKey)
                        }
                        if (conf.consumer.noAck === false) {
                            ch.ack(msg);
                        }
                    }, conf.consumer);
                });
            });
        }
    };
};

module.exports = Queue;

We also want to emit messages using an exchange

Emitter:

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
$exchange = Builder::exchange('process.log', $server);
$exchange->emit("xxx.log", "aaaa");
$exchange->emit("xxx.log", ["11", "aaaa"]);
$exchange->emit("yyy.log", "aaaa");

and with node:

var builder = require('../../src/Builder');

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var exchange = builder.exchange('process.log', server);

exchange.emit("xxx.log", "aaaa");
exchange.emit("xxx.log", ["11", "aaaa"]);
exchange.emit("yyy.log", "aaaa");

and receiver:

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
Builder::exchange('process.log', $server)->receive("yyy.log", function ($routingKey, $data) {
    error_log($routingKey." - ".json_encode($data));
});

and the node receiver:

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var exchange = builder.exchange('process.log', server);

exchange.receive("yyy.log", function (routingKey, data) {
    console.log(routingKey, data);
});

And that’s the PHP implementation:

namespace G\Rabbit;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
class Exchange
{
    private $name;
    private $conf;
    public function __construct($name, $conf)
    {
        $this->name = $name;
        $this->conf = $conf;
    }
    private function createConnection()
    {
        $server = $this->conf['server'];
        return new AMQPStreamConnection($server['host'], $server['port'], $server['user'], $server['pass']);
    }
    public function emit($routingKey, $data = null)
    {
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $conf = $this->conf['exchange'];
        $channel->exchange_declare($this->name, 'topic', $conf['passive'], $conf['durable'], $conf['auto_delete'],
            $conf['internal'], $conf['nowait']);
        $msg = new AMQPMessage(json_encode($data), [
            'delivery_mode' => 2, # make message persistent
        ]);
        $channel->basic_publish($msg, $this->name, $routingKey);
        $channel->close();
        $connection->close();
    }
    public function receive($bindingKey, callable $callback)
    {
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $conf = $this->conf['exchange'];
        $channel->exchange_declare($this->name, 'topic', $conf['passive'], $conf['durable'], $conf['auto_delete'],
            $conf['internal'], $conf['nowait']);
        $queueConf = $this->conf['queue'];
        // Declare an anonymous, server-named queue and bind it to the exchange with the given binding key.
        list($queue_name, ,) = $channel->queue_declare("", $queueConf['passive'], $queueConf['durable'],
            $queueConf['exclusive'], $queueConf['auto_delete'], $queueConf['nowait']);
        $channel->queue_bind($queue_name, $this->name, $bindingKey);
        $consumerConf = $this->conf['consumer'];
        $channel->basic_consume($queue_name, '', $consumerConf['no_local'], $consumerConf['no_ack'],
            $consumerConf['exclusive'], $consumerConf['nowait'],
            function ($msg) use ($callback) {
                call_user_func($callback, $msg->delivery_info['routing_key'], json_decode($msg->body, true));
                $now = new \DateTime();
                echo '['.$now->format('d/m/Y H:i:s').'] '.$this->name.':'.$msg->delivery_info['routing_key'].'::', $msg->body, "\n";
                $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
            });
        $now = new \DateTime();
        echo '['.$now->format('d/m/Y H:i:s')."] Exchange '{$this->name}' initialized \n";
        while (count($channel->callbacks)) {
            $channel->wait();
        }
        $channel->close();
        $connection->close();
    }
}

And node:

var amqp = require('amqplib/callback_api');

var Exchange = function (name, conf) {
    return {
        emit: function (routingKey, data, close = true) {
            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    var msg = JSON.stringify(data);
                    ch.assertExchange(name, 'topic', conf.exchange);
                    ch.publish(name, routingKey, new Buffer(msg));
                });
                if (close) {
                    setTimeout(function () {
                        conn.close();
                        process.exit(0)
                    }, 500);
                }
            });
        },
        receive: function (bindingKey, callback) {
            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    ch.assertExchange(name, 'topic', conf.exchange);
                    console.log(new Date().toString() + ' Exchange ' + name + ' initialized');
                    ch.assertQueue('', conf.queue, function (err, q) {

                        ch.bindQueue(q.queue, name, bindingKey);

                        ch.consume(q.queue, function (msg) {
                            console.log(new Date().toString(), name, ":", msg.fields.routingKey, "::", msg.content.toString());
                            if (callback) {
                                callback(msg.fields.routingKey, JSON.parse(msg.content.toString()))
                            }
                            if (conf.consumer.noAck === false) {
                                ch.ack(msg);
                            }
                        }, conf.consumer);
                    });
                });
            });
        }
    };
};

module.exports = Exchange;

Finally we want to use RPC commands. In fact the RPC implementation is similar to the Queue one, but in this case the client receives an answer.

Client side:

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
echo Builder::rpc('rpc.hello', $server)->call("Gonzalo", "Ayuso");

and with node:

var builder = require('../../src/Builder');

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var rpc = builder.rpc('rpc.hello', server);
rpc.call("Gonzalo", "Ayuso", function (data) {
    console.log(data);
});

Server side:

use G\Rabbit\Builder;
$server = [
    'host' => 'localhost',
    'port' => 5672,
    'user' => 'guest',
    'pass' => 'guest',
];
Builder::rpc('rpc.hello', $server)->server(function ($name, $surname) use ($server) {
    return "Hello {$name} {$surname}";
});

and with node:

var builder = require('../../src/Builder');

var server = {
    host: 'localhost',
    port: 5672,
    user: 'guest',
    pass: 'guest'
};

var rpc = builder.rpc('rpc.hello', server);

rpc.server(function (name, surname) {
    return "Hello " + name + " " + surname;
});

And the implementations, first the PHP one:

namespace G\Rabbit;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
class RPC
{
    private $name;
    private $conf;
    public function __construct($name, $conf)
    {
        $this->name = $name;
        $this->conf = $conf;
    }
    private function createConnection()
    {
        $server = $this->conf['server'];
        return new AMQPStreamConnection($server['host'], $server['port'], $server['user'], $server['pass']);
    }
    public function call()
    {
        $params = (array)func_get_args();
        $response = null;
        $corr_id = uniqid();
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $queueConf = $this->conf['queue'];
        list($callback_queue, ,) = $channel->queue_declare("", $queueConf['passive'], $queueConf['durable'],
            $queueConf['exclusive'], $queueConf['auto_delete'], $queueConf['nowait']);
        $consumerConf = $this->conf['consumer'];
        $channel->basic_consume($callback_queue, '', $consumerConf['no_local'], $consumerConf['no_ack'],
            $consumerConf['exclusive'], $consumerConf['nowait'], function ($rep) use (&$corr_id, &$response) {
                if ($rep->get('correlation_id') == $corr_id) {
                    $response = $rep->body;
                }
            });
        $msg = new AMQPMessage(json_encode($params), [
            'correlation_id' => $corr_id,
            'reply_to'       => $callback_queue,
        ]);
        $channel->basic_publish($msg, '', $this->name);
        // Block until the reply carrying our correlation id arrives.
        while (!$response) {
            $channel->wait();
        }
        $channel->close();
        $connection->close();

        return json_decode($response, true);
    }
    public function server(callable $callback)
    {
        $connection = $this->createConnection();
        $channel = $connection->channel();
        $queueConf = $this->conf['queue'];
        $channel->queue_declare($this->name, $queueConf['passive'], $queueConf['durable'], $queueConf['exclusive'],
            $queueConf['auto_delete'], $queueConf['nowait']);
        $now = new \DateTime();
        echo '['.$now->format('d/m/Y H:i:s')."] RPC server '{$this->name}' initialized \n";
        $channel->basic_qos(null, 1, null);
        $consumerConf = $this->conf['consumer'];
        $channel->basic_consume($this->name, '', $consumerConf['no_local'], $consumerConf['no_ack'],
            $consumerConf['exclusive'],
            $consumerConf['nowait'], function ($req) use ($callback) {
                $response = json_encode(call_user_func_array($callback, array_values(json_decode($req->body, true))));
                $msg = new AMQPMessage($response, [
                    'correlation_id' => $req->get('correlation_id'),
                    'delivery_mode'  => 2, # make message persistent
                ]);
                $req->delivery_info['channel']->basic_publish($msg, '', $req->get('reply_to'));
                $req->delivery_info['channel']->basic_ack($req->delivery_info['delivery_tag']);
                $now = new \DateTime();
                echo '['.$now->format('d/m/Y H:i:s').'] '.$this->name.":: req => '{$req->body}' response=> '{$response}'\n";
            });
        while (count($channel->callbacks)) {
            $channel->wait();
        }
        $channel->close();
        $connection->close();
    }
}

And the node implementation:

var amqp = require('amqplib/callback_api');

var RPC = function (name, conf) {
    var generateUuid = function () {
        return Math.random().toString() +
            Math.random().toString() +
            Math.random().toString();
    };

    return {
        call: function () {
            var params = [];
            for (var i = 0; i < arguments.length - 1; i++) {
                params.push(arguments[i]);
            }
            var callback = arguments[arguments.length - 1];

            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    ch.assertQueue('', conf.queue, function (err, q) {
                        var corr = generateUuid();

                        ch.consume(q.queue, function (msg) {
                            if (msg.properties.correlationId == corr) {
                                callback(JSON.parse(msg.content.toString()));
                                setTimeout(function () {
                                    conn.close();
                                    process.exit(0)
                                }, 500);
                            }
                        }, conf.consumer);
                        ch.sendToQueue(name,
                            new Buffer(JSON.stringify(params)),
                            {correlationId: corr, replyTo: q.queue});
                    });
                });
            });
        },
        server: function (callback) {
            amqp.connect(`amqp://${conf.server.user}:${conf.server.pass}@${conf.server.host}:${conf.server.port}`, function (err, conn) {
                conn.createChannel(function (err, ch) {
                    ch.assertQueue(name, conf.queue);
                    console.log(new Date().toString() + ' RPC ' + name + ' initialized');
                    ch.prefetch(1);
                    ch.consume(name, function reply(msg) {
                        console.log(new Date().toString(), msg.fields.routingKey, " :: ", msg.content.toString());
                        var response = JSON.stringify(callback.apply(this, JSON.parse(msg.content.toString())));
                        ch.sendToQueue(msg.properties.replyTo,
                            new Buffer(response),
                            {correlationId: msg.properties.correlationId});

                        ch.ack(msg);

                    }, conf.consumer);
                });
            });
        }
    };
};

module.exports = RPC;

You can see the whole projects on GitHub: RabbitMQ-php, RabbitMQ-node

Playing with Docker, Silex, Python, Node and WebSockets

I’m learning Docker. In this post I want to share a little experiment I have done. I know the code looks like over-engineering, but it’s just an excuse to build something with Docker and containers. Let me explain it a little bit.

The idea is to build a time clock in the browser. Something like this:

[Screenshot: the time clock running in the browser]

Yes, I know. We could do it with just JS, CSS and HTML, but we want to hack a little bit more. The idea is to create:

  • A Silex/PHP frontend
  • A WebSocket server with socket.io/node
  • A Python script to obtain the current time

The WebSocket server will open two ports: one to serve WebSockets (socket.io) and another one as an HTTP server (express). The Python script will get the current time and send it to the WebSocket server. Finally, the Silex frontend will listen to the WebSocket events and render the current time.

That’s the WebSocket server (with socket.io and express)

var
    express = require('express'),
    expressApp = express(),
    server = require('http').Server(expressApp),
    io = require('socket.io')(server, {origins: 'localhost:*'})
    ;

expressApp.get('/tic', function (req, res) {
    io.sockets.emit('time', req.query.time);
    res.json('OK');
});

expressApp.listen(6400, '0.0.0.0');

server.listen(8080);

That’s our Python script

from time import gmtime, strftime, sleep
import httplib2

h = httplib2.Http()
while True:
    (resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
    sleep(1)

And our Silex frontend

use Silex\Application;
use Silex\Provider\TwigServiceProvider;

$app = new Application(['debug' => true]);
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.twig', []);
});

$app->run();

using this Twig template

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Docker example</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
    <link href="css/app.css" rel="stylesheet">
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
</head>
<body>
<div class="site-wrapper">
    <div class="site-wrapper-inner">
        <div class="cover-container">
            <div class="inner cover">
                <h1 class="cover-heading">
                    <div id="display">
                        display
                    </div>
                </h1>
            </div>
        </div>
    </div>
</div>
<script src="//localhost:8080/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
var socket = io.connect('//localhost:8080');

$(function () {
    socket.on('time', function (data) {
        $('#display').html(data);
    });
});
</script>
</body>
</html>

The idea is to use one Docker container for each process. I like to have all the code in one place, so all containers will share the same volume with the source code.

First the node container (WebSocket server)

FROM node:argon

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/node

EXPOSE 8080 6400

Now the python container

FROM python:2

RUN pip install httplib2

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/python

And finally the frontend container (Apache2 on Ubuntu 16.04)

FROM ubuntu:16.04

RUN locale-gen es_ES.UTF-8
RUN update-locale LANG=es_ES.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
RUN apt-get clean -y

COPY ./apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf

RUN mkdir -p /mnt/src

RUN a2enmod rewrite
RUN a2enmod proxy
RUN a2enmod mpm_prefork

RUN chown -R www-data:www-data /mnt/src
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_SERVERADMIN admin@localhost
ENV APACHE_SERVERNAME localhost

EXPOSE 80

Now we’ve got the three containers, but we want to use them all together, so we’ll use a docker-compose.yml file. The web container exposes port 80 and the node container port 8080. The node container also opens 6400, but this is an internal port: we don’t need to access it from outside; only the Python container needs it (that’s why the script calls http://node:6400, using the service name as hostname). Because of that, 6400 is not mapped to any host port in docker-compose.

version: '2'

services:
  web:
    image: gonzalo123/example_web
    container_name: example_web
    ports:
     - "80:80"
    restart: always
    depends_on:
      - node
    build:
      context: ./images/php
      dockerfile: Dockerfile
    entrypoint:
      - /usr/sbin/apache2
      - -D
      - FOREGROUND
    volumes:
     - ./src:/mnt/src

  node:
    image: gonzalo123/example_node
    container_name: example_node
    ports:
     - "8080:8080"
    restart: always
    build:
      context: ./images/node
      dockerfile: Dockerfile
    entrypoint:
      - npm
      - start
    volumes:
     - ./src:/mnt/src

  python:
      image: gonzalo123/example_python
      container_name: example_python
      restart: always
      depends_on:
        - node
      build:
        context: ./images/python
        dockerfile: Dockerfile
      entrypoint:
        - python
        - tic.py
      volumes:
       - ./src:/mnt/src

And that’s all. We only need to start our containers

docker-compose up --build -d

and open our browser at http://localhost to see our time clock.

The full source code is available on my GitHub account.