Working with SAPUI5 locally (part 3). Adding more services in Docker

In the previous post we moved one project to Docker. The idea was to keep exactly the same functionality (without even touching anything within the source code). Now we’re going to add more services. Yes, I know, it looks like over-engineering (it is over-engineering, indeed), but I want to build something with different services working together. Let’s start.

We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via WebSockets.

With those premises we are going to need:

  • Frontend: UI5 application
  • Backend: PHP/lumen application
  • Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
  • Nginx server
  • PostgreSQL database.
  • RabbitMQ message broker.

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.

Here we can see the docker-compose file to set up all the services:

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    volumes:
    - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - app-network
  io:
    image: gonzalo123.io
    ports:
    - "9999:9999"
    restart: always
    volumes:
    - ./src/io:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
    - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    ports:
    - "5432:5432"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
    - app-network
  rabbit:
    image: rabbitmq:3-management
    container_name: gonzalo123.rabbit
    restart: always
    ports:
    - "15672:15672"
    - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
    - app-network
networks:
  app-network:
    driver: bridge

We’re going to use the same Dockerfiles as in the previous post, but we also need new ones for the worker, the database server and the message queue:

Worker:

FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./io .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]

The worker is a simple script that serves the socket.io server and emits a WebSocket event for every message received from the RabbitMQ queue.

var amqp = require('amqp'),
  httpServer = require('http').createServer(),
  io = require('socket.io')(httpServer, {
    origins: '*:*',
  }),
  pg = require('pg')
;

require('dotenv').config();
var pgClient = new pg.Client(process.env.DB_DSN);

rabbitMq = amqp.createConnection({
  host: process.env.RABBIT_HOST,
  port: process.env.RABBIT_PORT,
  login: process.env.RABBIT_USER,
  password: process.env.RABBIT_PASS,
});

var sql = 'SELECT clickCount FROM docker.clicks';

// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });

  });

  rabbitMq.on('ready', function() {
    var queue = rabbitMq.queue('ui5');
    queue.bind('#');

    queue.subscribe(function(message) {
      pgClient.query(sql, function(err, result) {
        var count = parseInt(result.rows[0]['clickcount']);
        count = count + parseInt(message.data.toString('utf8'));
        pgClient.query('UPDATE docker.clicks SET clickCount = $1', [count],
          function(err) {
            io.emit('click', {count: count});
          });
      });
    });
  });
});

httpServer.listen(process.env.IO_PORT);
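
As the comment in the worker says, connecting to PostgreSQL eagerly at startup is not ideal. A lazier approach (just a sketch of the idea, not something the project actually uses) would be a small helper that connects on the first query:

var lazyClient = null;

// connect on the first query instead of at startup
function lazyQuery(sql, params, callback) {
  if (lazyClient) {
    return lazyClient.query(sql, params, callback);
  }
  lazyClient = new pg.Client(process.env.DB_DSN);
  lazyClient.connect(function (err) {
    if (err) {
      lazyClient = null;
      return callback(err);
    }
    lazyClient.query(sql, params, callback);
  });
}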

Database server:

FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/

As we can see, we’re going to generate the database structure the first time the container starts:

CREATE SCHEMA docker;

CREATE TABLE docker.clicks (
clickCount numeric(8) NOT NULL
);

ALTER TABLE docker.clicks
OWNER TO username;

INSERT INTO docker.clicks(clickCount) values (0);

For the RabbitMQ server we’re going to use the official Docker image, so we don’t need to create a Dockerfile.

We have also changed our Nginx configuration a little bit. We want to use Nginx to serve the backend and also the socket.io server, because we don’t want to expose different ports to the internet.

server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass "http://io:9999";
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

To avoid CORS issues we can also use an SCP destination (the localneo proxy in this example) to serve socket.io. So we need to:

  • change our neo-app.json file:

    "routes": [
        ...
        {
          "path": "/socket.io",
          "target": {
            "type": "destination",
            "name": "SOCKETIO"
          },
          "description": "SOCKETIO"
        }
      ],
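
On the frontend side, the UI5 application only needs to send the click to the backend (which publishes it to RabbitMQ) and listen to the socket.io 'click' event emitted by the worker. A minimal sketch of that wiring (the model, the button handler and the API path are assumptions, not the actual project code):

var oModel = new sap.ui.model.json.JSONModel({count: 0}),
    socket = io();

// the worker emits 'click' with the persisted counter (see the worker script above)
socket.on('click', function (data) {
  oModel.setProperty('/count', data.count);
});

// the button handler only notifies the backend; the backend publishes the increment to RabbitMQ
function onButtonPress() {
  jQuery.post('/api/click', {increment: 1});
}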
    

    And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports or mapping the filesystem to our local machine (which is only useful when we’re developing):

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        networks:
        - app-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen
        networks:
        - app-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "80:8000"
        restart: always
        volumes:
        - ./src/frontend:/code/src
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - app-network
      io:
        image: gonzalo123.io
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-io
        networks:
        - app-network
      pg:
        image: gonzalo123.pg
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-pg
        environment:
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_DB: ${POSTGRES_DB}
          PGDATA: /var/lib/postgresql/data/pgdata
        networks:
        - app-network
      rabbit:
        image: rabbitmq:3-management
        restart: always
        environment:
          RABBITMQ_ERLANG_COOKIE:
          RABBITMQ_DEFAULT_VHOST: /
          RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
          RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
        networks:
        - app-network
    networks:
      app-network:
        driver: bridge
    

    And that’s all. The full project is available in my github account


    Pomodoro with ESP32. One “The Melee – Side by side” project

    Last weekend there was a great event called The Melee – Side by side (Many thanks to @ojoven and @diversius).

    The event was a kind of hackathon where a group of people meet together for one day to share our side projects and to work together (yes, we also had lunch and beers :). The format of the event is just a copy of the event that our colleagues from Bilbao call “El Comité”.

    @ibaiimaz spoke about a project to create a collaborative pomodoro where the people of one team can share their status and see the status of the rest of the team. When I heard pomodoro and status I immediately thought of a servo moving a flag and some LEDs turning on and off. We had a project. @penniath and @tatai also joined us. We also had a team.

    We had a project and we also had a deadline: we had to show a working prototype at the end of the day. That means we didn’t have too much time. First we decided on the mockup of the project, reducing the initial (more ambitious) scope to fit it within our time slot. We discussed intensely for 10 minutes and finally we described an ultra-detailed blueprint. That’s the full blueprint of the project:

    It was time to start working.

    @penniath and @tatai worked on the backend. It is responsible for the pomodoro timers, listening to MQTT events and providing an API for the frontend. The backend also must provide a WebSockets interface to push real-time events to the frontend. They decided to use node and socket.io for the WebSockets. You can see the source code here.
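
    I didn’t write that backend, but the idea can be sketched in a few lines of node (the broker address, topics and event names below are assumptions based on the hardware code, not their actual implementation):

    var mqtt = require('mqtt'),
        io = require('socket.io')(3000),
        client = mqtt.connect('mqtt://192.168.1.105');

    client.on('connect', function () {
        // the ESP32 button publishes to /start/<user> (see the hardware code below)
        client.subscribe('/start/+');
    });

    client.on('message', function (topic, payload) {
        // forward the event to the connected browsers in real time; the real backend
        // also manages the pomodoro timers and publishes the status back to /pomodoro/<user>
        io.emit('pomodoro', {topic: topic, value: payload.toString()});
    });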

    @ibaiimaz started with the frontend. He decided to create an Angular web application listening to socket.io events to show the status of the pomodoro. You can see the source code here.

    Finally, I worked on the hardware. I created a prototype with one ESP32, two RGB LEDs, one button, one servo and a couple of resistors.

    That’s the source code.

    #include <WiFi.h>
    #include <PubSubClient.h>
    
    int redPin_g = 19;
    int greenPin_g = 17;
    int bluePin_g = 18;
    
    int redPin_i = 21;
    int greenPin_i = 2;
    int bluePin_i = 4;
    
    #define SERVO_PIN 16
    
    const int buttonPin = 15;
    int buttonState = 0;
    
    int channel = 1;
    int hz = 50;
    int depth = 16;
    
    const char* ssid = "SSID";
    const char* password = "password";
    const char* server = "192.168.1.105";
    const char* topic = "/pomodoro/+";
    const char* clientName = "com.gonzalo123.esp32";
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print("*");
      }
    
      Serial.print("WiFi connected: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
          client.subscribe(topic);
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void callback(char* topic, byte* payload, unsigned int length) {
      Serial.print("Message arrived [");
      Serial.print(topic);
    
      String data;
      for (int i = 0; i < length; i++) {
        data += (char)payload[i];
      }
    
      int value = data.toInt();
    
      if (strcmp(topic, "/pomodoro/gonzalo") == 0) {
        Serial.print("[gonzalo]");
        switch (value) {
          case 1:
            ledcWrite(1, 3400);
            setColor_g(0, 255, 0);
            break;
          case 2:
            setColor_g(255, 0, 0);
            break;
          case 3:
            ledcWrite(1, 6400);
            setColor_g(0, 0, 255);
            break;
        }
      } else {
        Serial.print("[ibai]");
        switch (value) {
          case 1:
            setColor_i(0, 255, 0);
            break;
          case 2:
            setColor_i(255, 0, 0);
            break;
          case 3:
            setColor_i(0, 0, 255);
            break;
        }
      }
    
      Serial.print("] value:");
      Serial.println(data);
    }
    
    void setup()
    {
      Serial.begin(115200);
    
      pinMode(buttonPin, INPUT_PULLUP);
      pinMode(redPin_g, OUTPUT);
      pinMode(greenPin_g, OUTPUT);
      pinMode(bluePin_g, OUTPUT);
    
      pinMode(redPin_i, OUTPUT);
      pinMode(greenPin_i, OUTPUT);
      pinMode(bluePin_i, OUTPUT);
    
      ledcSetup(channel, hz, depth);
      ledcAttachPin(SERVO_PIN, channel);
      wifiConnect();
      client.setServer(server, 1883);
      client.setCallback(callback);
    
      delay(1500);
    }
    
    void mqttEmit(String topic, String value)
    {
      client.publish((char*) topic.c_str(), (char*) value.c_str());
    }
    
    void loop()
    {
      if (!client.connected()) {
        mqttReConnect();
      }
    
      client.loop();
    
      buttonState = digitalRead(buttonPin);
      if (buttonState == HIGH) {
        mqttEmit("/start/gonzalo", (String) "3");
      }
    
      delay(200);
    }
    
    void setColor_i(int red, int green, int blue)
    {
      digitalWrite(redPin_i, red);
      digitalWrite(greenPin_i, green);
      digitalWrite(bluePin_i, blue);
    }
    
    void setColor_g(int red, int green, int blue)
    {
      digitalWrite(redPin_g, red);
      digitalWrite(greenPin_g, green);
      digitalWrite(bluePin_g, blue);
    }
    

    The MQTT server (a Mosquitto server) was initially running on my laptop, but since I also had a Raspberry Pi Zero in my bag we decided to use the Pi Zero as the server and run the Mosquitto MQTT server on Raspbian. Everything is better with a Raspberry Pi. @tatai helped me to set up the server.

    Here you can see the prototype in action

    That’s the kind of side project that I normally create alone, but it’s definitely more fun to do it with other colleagues, even if I need to wake up early on a Saturday morning.

    Source code of ESP32 here.

    Real Time IoT in the cloud with SAP’s SCP, Cloud Foundry and WebSockets

    Nowadays I’m involved with a cloud project based on SAP Cloud Platform (SCP). Side projects are the best way to master new technologies (at least for me), so I want to build something with SCP and my Arduino stuff. SCP comes with an IoT module. In fact, every cloud platform has, in one way or another, an IoT module (Amazon, Azure, …). With SCP the IoT module is just a Hana database where we can push our IoT values, and we’re able to retrieve the information via oData (the common way in the SAP world).

    It’s pretty straightforward to configure the IoT module with the SAP Cloud Platform Cockpit (everything can be done with a Hana trial account).

    NodeMcu

    First I’m going to use a simple circuit with my NodeMcu connected to my WiFi network. The prototype is a potentiometer connected to the analog input. I normally use this circuit because I can change the value just by turning the potentiometer wheel. I know it’s not very useful, but we can easily swap it for a sensor (temperature, humidity, light, …).

    It will send the percentage (from 0 to 100) of the position of the potentiometer directly to the cloud.

    #include <ESP8266WiFi.h>
    
    const int potentiometerPin = 0;
    
    // Wifi configuration
    const char* ssid = "my-wifi-ssid";
    const char* password = "my-wifi-password";
    
    // SAP SCP specific configuration
    const char* host = "mytenant.hanatrial.ondemand.com";
    String device_id = "my-device-ide";
    String message_type_id = "my-device-type-id";
    String oauth_token = "my-oauth-token";
    
    String url = "https://[mytenant].hanatrial.ondemand.com/com.sap.iotservices.mms/v1/api/http/data/" + device_id;
    
    const int httpsPort = 443;
    
    WiFiClientSecure clientTLS;
    
    void wifiConnect() {
      Serial.println();
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
      }
      Serial.println("");
      Serial.print("WiFi connected.");
      Serial.print("IP address: ");
      Serial.println(WiFi.localIP());
    }
    
    void sendMessage(int value) {
      String payload = "{\"mode\":\"async\", \"messageType\":\"" + message_type_id + "\", \"messages\":[{\"value\": " + (String) value + "}]}";
      Serial.print("connecting to ");
      Serial.println(host);
      if (!clientTLS.connect(host, httpsPort)) {
        Serial.println("connection failed");
        return;
      }
    
      Serial.print("requesting payload: ");
      Serial.println(url);
    
      clientTLS.print(String("POST ") + url + " HTTP/1.0\r\n" +
                   "Host: " + host + "\r\n" +
                   "Content-Type: application/json;charset=utf-8\r\n" +
                   "Authorization: Bearer " + oauth_token + "\r\n" +
                   "Content-Length: " + payload.length() + "\r\n\r\n" +
                   payload + "\r\n\r\n");
    
      Serial.println("request sent");
    
      Serial.println("reply was:");
      while (clientTLS.connected()) {
        String line = clientTLS.readStringUntil('\n');
        Serial.println(line);
      }
    }
    
    void setup() {
      Serial.begin(9600);
      wifiConnect();
    
      delay(10);
    }
    
    int mem;
    void loop() {
    
      int value = ((analogRead(potentiometerPin) * 100) / 1010);
      if (value < (mem - 1) or value > (mem + 1)) {
        sendMessage(value);
        Serial.println(value);
        mem = value;
      }
    
      delay(200);
    }
    

    SCP

    SAP Cloud Platform allows us to create web applications using the SAPUI5 framework easily. It also allows us to create a destination (the way SAP’s cloud connects different modules) to our IoT module. Also, every Hana table can be accessed via oData, so we can retrieve the information easily within SAPUI5.

    onAfterRendering: function () {
        var model = this.model;
    
        this.getView().getModel().read("/my-hana-table-odata-uri", {
            urlParameters: {
                $top: 1,
                $orderby: "G_CREATED desc"
            },
            success: function (oData) {
                model.setProperty("/value", oData.results[0].C_VALUE);
            }
        });
    }
    

    and display it in a view:

    <mvc:View controllerName="gonzalo123.iot.controller.Main" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:mvc="sap.ui.core.mvc"
              displayBlock="true" xmlns="sap.m">
        <App>
            <pages>
                <Page title="{i18n>title}">
                    <content>
                        <GenericTile class="sapUiTinyMarginBegin sapUiTinyMarginTop tileLayout" header="nodemcu" frameType="OneByOne">
                            <tileContent>
                                <TileContent unit="%">
                                    <content>
                                        <NumericContent value="{view>/value}" icon="sap-icon://line-charts"/>
                                    </content>
                                </TileContent>
                            </tileContent>
                        </GenericTile>
                    </content>
                </Page>
            </pages>
        </App>
    </mvc:View>
    

    Cloud Foundry

    The web application (with SCP and SAPUI5) can access the IoT values via oData. We could fetch the data again and again, but that’s not cool. We want real-time updates in the web application, so we need WebSockets. The SCP IoT module allows us to use WebSockets to push information, but not to get updates (AFAIK; let me know if I’m wrong). We could also connect our IoT device to an existing MQTT server, but in this prototype I only want to use WebSockets. So we’re going to create a simple WebSocket server with node and socket.io. This server will be polling for device updates (again and again with a setInterval function via oData) and when it detects a change it will emit a broadcast to the WebSocket.

    SAP’s SCP also allows us to create services with Cloud Foundry. So we’ll create our nodejs server there.

    var http = require('http'),
        io = require('socket.io'),
        request = require('request'),
        auth = "Basic " + new Buffer(process.env.USER + ":" + process.env.PASS).toString("base64"),
        url = process.env.IOT_ODATA,
        INTERVAL = process.env.INTERVAL,
        socket,
        value;
    
    server = http.createServer();
    server.listen(process.env.PORT || 3000);
    
    socket = io.listen(server);
    
    setInterval(function () {
        request.get({
            url: url,
            headers: {
                "Authorization": auth,
                "Accept": "application/json"
            }
        }, function (error, response, body) {
            var newValue = JSON.parse(body).d.results[0].C_VALUE;
            if (value !== newValue) {
                value = newValue;
                socket.sockets.emit('value', value);
            }
        });
    }, INTERVAL);
    

    And that’s all. My NodeMcu device connected to the cloud.

    Full project available in my github

    Playing with Docker, Silex, Python, Node and WebSockets

    I’m learning Docker. In this post I want to share a little experiment that I have done. I know the code looks like over-engineering but it’s just an excuse to build something with docker and containers. Let me explain it a little bit.

    The idea is to build a time clock in the browser. Something like this:

    Clock

    Yes, I know. We can do it with only JS, CSS and HTML, but we want to hack a little bit more. The idea is to create:

    • A Silex/PHP frontend
    • A WebSocket server with socket.io/node
    • A Python script to obtain the current time

    The WebSocket server will open two ports: one port to serve WebSockets (socket.io) and another one as an HTTP server (express). The Python script will get the current time and send it to the WebSocket server. Finally, the frontend (Silex) will be listening to the WebSocket event and will render the current time.

    That’s the WebSocket server (with socket.io and express)

    var
        express = require('express'),
        expressApp = express(),
        server = require('http').Server(expressApp),
        io = require('socket.io')(server, {origins: 'localhost:*'})
        ;
    
    expressApp.get('/tic', function (req, res) {
        io.sockets.emit('time', req.query.time);
        res.json('OK');
    });
    
    expressApp.listen(6400, '0.0.0.0');
    
    server.listen(8080);
    

    That’s our Python script

    from time import gmtime, strftime, sleep
    import httplib2
    
    h = httplib2.Http()
    while True:
        (resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
        sleep(1)
    

    And our Silex frontend

    use Silex\Application;
    use Silex\Provider\TwigServiceProvider;
    
    $app = new Application(['debug' => true]);
    $app->register(new TwigServiceProvider(), [
        'twig.path' => __DIR__ . '/../views',
    ]);
    
    $app->get("/", function (Application $app) {
        return $app['twig']->render('index.twig', []);
    });
    
    $app->run();
    

    using this twig template

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <title>Docker example</title>
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
        <link href="css/app.css" rel="stylesheet">
        <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
        <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
    </head>
    <body>
    <div class="site-wrapper">
        <div class="site-wrapper-inner">
            <div class="cover-container">
                <div class="inner cover">
                    <h1 class="cover-heading">
                        <div id="display">
                            display
                        </div>
                    </h1>
                </div>
            </div>
        </div>
    </div>
    <script src="//localhost:8080/socket.io/socket.io.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
    <script>
    var socket = io.connect('//localhost:8080');
    
    $(function () {
        socket.on('time', function (data) {
            $('#display').html(data);
        });
    });
    </script>
    </body>
    </html>
    

    The idea is to use one Docker container for each process. I like to have all the code in one place so all containers will share the same volume with source code.

    First the node container (WebSocket server)

    FROM node:argon
    
    RUN mkdir -p /mnt/src
    WORKDIR /mnt/src/node
    
    EXPOSE 8080 6400
    

    Now the python container

    FROM python:2
    
    RUN pip install httplib2
    
    RUN mkdir -p /mnt/src
    WORKDIR /mnt/src/python
    

    And finally the frontend container (Apache2 with Ubuntu 16.04)

    FROM ubuntu:16.04
    
    RUN locale-gen es_ES.UTF-8
    RUN update-locale LANG=es_ES.UTF-8
    ENV DEBIAN_FRONTEND=noninteractive
    
    RUN apt-get update -y
    RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
    RUN apt-get clean -y
    
    COPY ./apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf
    
    RUN mkdir -p /mnt/src
    
    RUN a2enmod rewrite
    RUN a2enmod proxy
    RUN a2enmod mpm_prefork
    
    RUN chown -R www-data:www-data /mnt/src
    ENV APACHE_RUN_USER www-data
    ENV APACHE_RUN_GROUP www-data
    ENV APACHE_LOG_DIR /var/log/apache2
    ENV APACHE_LOCK_DIR /var/lock/apache2
    ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
    ENV APACHE_SERVERADMIN admin@localhost
    ENV APACHE_SERVERNAME localhost
    
    EXPOSE 80
    

    Now we’ve got the three containers, but we want to use them all together. We’ll use a docker-compose.yml file. The web container will expose port 80 and the node container port 8080. The node container also opens 6400, but this is an internal port: we don’t need to access it from outside, only the Python container needs to reach it. Because of that, 6400 is not mapped to any host port in the docker-compose file.

    version: '2'
    
    services:
      web:
        image: gonzalo123/example_web
        container_name: example_web
        ports:
         - "80:80"
        restart: always
        depends_on:
          - node
        build:
          context: ./images/php
          dockerfile: Dockerfile
        entrypoint:
          - /usr/sbin/apache2
          - -D
          - FOREGROUND
        volumes:
         - ./src:/mnt/src
    
      node:
        image: gonzalo123/example_node
        container_name: example_node
        ports:
         - "8080:8080"
        restart: always
        build:
          context: ./images/node
          dockerfile: Dockerfile
        entrypoint:
          - npm
          - start
        volumes:
         - ./src:/mnt/src
    
      python:
          image: gonzalo123/example_python
          container_name: example_python
          restart: always
          depends_on:
            - node
          build:
            context: ./images/python
            dockerfile: Dockerfile
          entrypoint:
            - python
            - tic.py
          volumes:
           - ./src:/mnt/src
    

    And that’s all. We only need to start our containers

    docker-compose up --build -d
    

    and open our browser at: http://localhost to see our Time clock

    Full source code available within my github account

    Encrypt Websocket (socket.io) communications

    I’m a big fan of WebSockets and socket.io. I’ve written a lot about them. In recent posts I’ve written about socket.io and authentication. Today we’re going to speak about communications.

    Imagine we’ve got a WebSocket server and we connect our application to this server (even using https/wss). If we open our browser’s console we can inspect our WebSocket communications. We can also enable debugging. This works in a similar way to starting promiscuous mode on our network interface: we will see every packet, not only the packets that the server is sending to us.

    If we send sensitive information over WebSockets, that means that one logged-in user can see another one’s information. We can separate namespaces in our socket.io server. We can also do another thing: encrypt communications using crypto-js.
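
    Namespaces are the quickest built-in option: each logged-in user connects to its own namespace and only receives the events emitted there. A minimal sketch (the namespace name is just an example):

    var io = require('socket.io')(3000),
        gonzalo = io.of('/user-gonzalo');

    // only clients connected to the /user-gonzalo namespace receive these events
    gonzalo.on('connection', function (socket) {
        socket.emit('private', 'this is not visible from other namespaces');
    });

    In this post, though, we’ll follow the second approach and encrypt the payload itself.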

    I’ve created one small wrapper to use crypto-js with socket.io.
    We can install our server dependency:

    npm install g-crypt
    

    And install our client dependency with bower

    bower install g-crypt
    

    And use it in our server

    var io = require('socket.io')(3000),
        Crypt = require("g-crypt"),
        passphrase = 'super-secret-passphrase',
        crypter = Crypt(passphrase);
    
    io.on('connection', function (socket) {
        socket.on('counter', function (data) {
            var decriptedData = crypter.decrypt(data);
            setTimeout(function () {
                console.log("counter status: " + decriptedData.id);
                decriptedData.id++;
                socket.emit('counter', crypter.encrypt(decriptedData));
            }, 1000);
        });
    });
    

    And now a simple HTML client application:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Title</title>
    </head>
    <body>
    Open console to see the messages
    
    <script src="http://localhost:3000/socket.io/socket.io.js"></script>
    <script src="assets/cryptojslib/rollups/aes.js"></script>
    <script src="assets/g-crypt/src/Crypt.js"></script>
    <script>
        var socket = io('http://localhost:3000/'),
            passphrase = 'super-secret-passphrase',
            crypter = Crypt(passphrase),
            id = 0;
    
        socket.on('connect', function () {
            console.log("connected! Let's start the counter with: " + id);
            socket.emit('counter', crypter.encrypt({id: id}));
        });
    
        socket.on('counter', function (data) {
            var decriptedData = crypter.decrypt(data);
            console.log("counter status: " + decriptedData.id);
            socket.emit('counter', crypter.encrypt({id: decriptedData.id}));
        });
    </script>
    
    </body>
    </html>
    

    Now our communications are encrypted and a logged-in user cannot read another one’s data.

    The library is a simple wrapper:

    Crypt = function (passphrase) {
        "use strict";
        var pass = passphrase;
        var CryptoJSAesJson = {
            parse: function (jsonStr) {
                var j = JSON.parse(jsonStr);
                var cipherParams = CryptoJS.lib.CipherParams.create({ciphertext: CryptoJS.enc.Base64.parse(j.ct)});
                if (j.iv) cipherParams.iv = CryptoJS.enc.Hex.parse(j.iv);
                if (j.s) cipherParams.salt = CryptoJS.enc.Hex.parse(j.s);
                return cipherParams;
            },
            stringify: function (cipherParams) {
                var j = {ct: cipherParams.ciphertext.toString(CryptoJS.enc.Base64)};
                if (cipherParams.iv) j.iv = cipherParams.iv.toString();
                if (cipherParams.salt) j.s = cipherParams.salt.toString();
                return JSON.stringify(j);
            }
        };
    
        return {
            decrypt: function (data) {
                return JSON.parse(CryptoJS.AES.decrypt(data, pass, {format: CryptoJSAesJson}).toString(CryptoJS.enc.Utf8));
            },
            encrypt: function (data) {
                return CryptoJS.AES.encrypt(JSON.stringify(data), pass, {format: CryptoJSAesJson}).toString();
            }
        };
    };
    
    if (typeof module !== 'undefined' && typeof module.exports !== 'undefined') {
        CryptoJS = require("crypto-js");
        module.exports = Crypt;
    } else {
        window.Crypt = Crypt;
    }
    

    The library is available in my github, and we can also install it using npm and bower.

    Sharing authentication between socket.io and a PHP frontend (using JSON Web Tokens)

    I’ve written a previous post about sharing authentication between socket.io and a PHP frontend, but after publishing the post a colleague (hi @mariotux) told me that I could use JSON Web Tokens (JWT) to do this. I had never used JWT before, so I decided to study it a little bit.

    JWTs are pretty straightforward. You only need to create the token and send it to the client. You don’t need to store this token in a database: the client can decode and validate it on its own. You can also use any programming language to encode and decode tokens (JWT libraries are available for the most common ones).

    We’re going to create the same example as in the previous post. Today, with JWT, we don’t need to pass the PHP session and perform an HTTP request to validate it. We’ll only pass the token, and our nodejs server will validate it on its own.

    var io = require('socket.io')(3000),
        jwt = require('jsonwebtoken'),
        secret = "my_super_secret_key";
    
    // middleware to perform authorization
    io.use(function (socket, next) {
        var token = socket.handshake.query.token,
            decodedToken;
        try {
            decodedToken = jwt.verify(token, secret);
            console.log("token valid for user", decodedToken.user);
            socket.connectedUser = decodedToken.user;
            next();
        } catch (err) {
            console.log(err);
            next(new Error("not valid token"));
            //socket.disconnect();
        }
    });
    
    io.on('connection', function (socket) {
        console.log('Connected! User: ', socket.connectedUser);
    });
    

    That’s the client:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Title</title>
    </head>
    <body>
    Welcome {{ user }}!
    
    <script src="http://localhost:3000/socket.io/socket.io.js"></script>
    <script src="/assets/jquery/dist/jquery.js"></script>
    
    <script>
        var socket;
        $(function () {
            $.getJSON("/getIoConnectionToken", function (jwt) {
                socket = io('http://localhost:3000', {
                    query: 'token=' + jwt
                });
    
                socket.on('connect', function () {
                    console.log("connected!");
                });
    
                socket.on('error', function (err) {
                    console.log(err);
                });
            });
        });
    </script>
    
    </body>
    </html>
    

    And here is the backend: a simple Silex server, very similar to the one from the previous post. JWT also has several reserved claims, for example “exp” to set an expiration timestamp. It’s very useful: we only set one value and the validator will reject tokens with an expired timestamp. In this example I’m not using an expiration date. That means my token will never expire, and never means never. In my first prototype I set up a small expiration time (10 seconds), which means the token is only valid for 10 seconds. Sounds great: my backend generates tokens that are going to be used immediately. That’s the normal situation, but what happens if I restart the socket.io server? The client will try to reconnect using the token, but it’s expired. We’d need to create a new JWT before reconnecting. Because of that I’ve removed the expiration date in this example, but remember: without an expiration date your generated tokens will be valid forever (and forever is a very long period of time).

    <?php
    include __DIR__ . "/../vendor/autoload.php";
    
    use Firebase\JWT\JWT;
    use Silex\Application;
    use Silex\Provider\SessionServiceProvider;
    use Silex\Provider\TwigServiceProvider;
    use Symfony\Component\HttpFoundation\Response;
    use Symfony\Component\HttpKernel\Exception\AccessDeniedHttpException;
    
    $app = new Application([
        'secret' => "my_super_secret_key",
        'debug' => true
    ]);
    $app->register(new SessionServiceProvider());
    $app->register(new TwigServiceProvider(), [
        'twig.path' => __DIR__ . '/../views',
    ]);
    
    $app->get('/', function (Application $app) {
        return $app['twig']->render('home.twig');
    });
    $app->get('/login', function (Application $app) {
        $username = $app['request']->server->get('PHP_AUTH_USER', false);
        $password = $app['request']->server->get('PHP_AUTH_PW');
        if ('gonzalo' === $username && 'password' === $password) {
            $app['session']->set('user', ['username' => $username]);
    
            return $app->redirect('/private');
        }
        $response = new Response();
        $response->headers->set('WWW-Authenticate', sprintf('Basic realm="%s"', 'site_login'));
        $response->setStatusCode(401, 'Please sign in.');
    
        return $response;
    });
    
    $app->get('/getIoConnectionToken', function (Application $app) {
        $user = $app['session']->get('user');
        if (null === $user) {
            throw new AccessDeniedHttpException('Access Denied');
        }
    
        $jwt = JWT::encode([
            // I can use "exp" reserved claim. It's cool. My connection token is only available
            // during a period of time. The problem is if I restart the io server. Client will
            // try to re-connect using this token and it's expired.
            //"exp"  => (new \DateTimeImmutable())->modify('+10 second')->getTimestamp(),
            "user" => $user
        ], $app['secret']);
    
        return $app->json($jwt);
    });
    
    $app->get('/private', function (Application $app) {
        $user = $app['session']->get('user');
    
        if (null === $user) {
            throw new AccessDeniedHttpException('Access Denied');
        }
    
        $userName = $user['username'];
    
        return $app['twig']->render('private.twig', [
            'user'  => $userName
        ]);
    });
    $app->run();
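
    If we did use the “exp” claim, the client would also need to ask for a fresh token before reconnecting. A minimal client-side sketch of that idea (an assumption, not part of the example above):

    socket.on('error', function () {
        // the old token may have expired: fetch a new one and open a fresh connection
        $.getJSON("/getIoConnectionToken", function (jwt) {
            socket = io('http://localhost:3000', {query: 'token=' + jwt, forceNew: true});
        });
    });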
    

    Full project in my github.

    Sharing authentication between socket.io and a PHP frontend

    Normally, when I work with WebSockets, my stack is a socket.io server and a Silex frontend. Protecting a PHP frontend with one kind of authentication or another is pretty straightforward. But if we want to use WebSockets, we need to set up another server, and if we protect our frontend we need to protect our WebSocket server too.

    If our frontend is node too (express, for example), sharing authentication is easier, but this time we want to use two different servers (a node server and a PHP server). I’ve written about it before, but today we’ll see another solution. Let’s start.

    Imagine we have this simple Silex application. It has three routes:

    • “/” a public route
    • “/login” to perform the login action
    • “/private” a private route. If we try to get here without a valid session we’ll get a 403 error

    And this is the code. It’s basically an example using sessions taken from the Silex documentation:

    use Silex\Application;
    use Silex\Provider\SessionServiceProvider;
    use Silex\Provider\TwigServiceProvider;
    use Symfony\Component\HttpFoundation\Response;
    use Symfony\Component\HttpKernel\Exception\AccessDeniedHttpException;
    
    $app = new Application();
    
    $app->register(new SessionServiceProvider());
    $app->register(new TwigServiceProvider(), [
        'twig.path' => __DIR__ . '/../views',
    ]);
    
    $app->get('/', function (Application $app) {
        return $app['twig']->render('home.twig');
    });
    
    $app->get('/login', function () use ($app) {
        $username = $app['request']->server->get('PHP_AUTH_USER', false);
        $password = $app['request']->server->get('PHP_AUTH_PW');
    
        if ('gonzalo' === $username && 'password' === $password) {
            $app['session']->set('user', ['username' => $username]);
    
            return $app->redirect('/private');
        }
    
        $response = new Response();
        $response->headers->set('WWW-Authenticate', sprintf('Basic realm="%s"', 'site_login'));
        $response->setStatusCode(401, 'Please sign in.');
    
        return $response;
    });
    
    $app->get('/private', function () use ($app) {
        $user = $app['session']->get('user');
        if (null === $user) {
            throw new AccessDeniedHttpException('Access Denied');
        }
    
        return $app['twig']->render('private.twig', [
            'username'  => $user['username']
        ]);
    });
    
    $app->run();
    

    Our “/private” route also creates a connection with our websocket server.

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Title</title>
    </head>
    <body>
    Welcome {{ username }}!
    
    <script src="http://localhost:3000/socket.io/socket.io.js"></script>
    <script>
        var socket = io('http://localhost:3000/');
        socket.on('connect', function () {
            console.log("connected!");
        });
        socket.on('disconnect', function () {
            console.log("disconnected!");
        });
    </script>
    
    </body>
    </html>
    

    And that’s our socket.io server. A really simple one.

    var io = require('socket.io')(3000);
    

    It works. Our frontend is protected. We need to log in with our credentials (in this example “gonzalo/password”), but everyone can connect to our socket.io server. The idea is to use our PHP session to protect our socket.io server too, and in fact it is very easy to do. First we need to pass our PHPSESSID to our socket.io server. To do it, when we perform our socket.io connection in the frontend, we pass our session id:

    <script>
        var socket = io('http://localhost:3000/', {
            query: 'token={{ sessionId }}'
        });
        socket.on('connect', function () {
            console.log("connected!");
        });
        socket.on('disconnect', function () {
            console.log("disconnect!");
        });
    </script>
    

    Since we’re using a Twig template, we need to pass the sessionId variable:

    $app->get('/private', function () use ($app) {
        $user = $app['session']->get('user');
        if (null === $user) {
            throw new AccessDeniedHttpException('Access Denied');
        }
    
        return $app['twig']->render('private.twig', [
            'username'  => $user['username'],
            'sessionId' => $app['session']->getId()
        ]);
    });
    

    Now we only need to validate the token before establishing the connection. Socket.io provides a middleware to perform this kind of operation. In this example we’re using PHP sessions out of the box. How can we validate the token? The answer is easy: we only need to create an HTTP client (in the socket.io server) and perform a request to a protected route (we’ll use “/private”). If we’re using a different provider to store our sessions (I hope you aren’t using Memcached to store PHP sessions, indeed) you’ll need to validate the sessionId against your provider.

    var io = require('socket.io')(3000),
        http = require('http');
    
    io.use(function (socket, next) {
        var options = {
            host: 'localhost',
            port: 8080,
            path: '/private',
            headers: {Cookie: 'PHPSESSID=' + socket.handshake.query.token}
        };
    
        http.request(options, function (response) {
            response.on('error', function () {
                next(new Error("not authorized"));
            }).on('data', function () {
                next();
            });
        }).end();
    });
    
    io.on('connection', function () {
        console.log("connected!");
    });
    

    OK. This example works, but we’re dynamically generating a JS file that injects our PHPSESSID. If we want to extract the sessionId from the request we can use document.cookie, but sometimes it doesn’t work. That’s because of HttpOnly. HttpOnly is our friend if we want to protect our cookies against XSS attacks, but in this case our protection makes our task harder.

    We can solve this problem by performing a simple request to our server. We’ll create a new route (a private route) called ‘getSessionID’ that gives us our sessionId.

    $app->get('/getSessionID', function (Application $app) {
        $user = $app['session']->get('user');
        if (null === $user) {
            throw new AccessDeniedHttpException('Access Denied');
        }
    
        return $app->json($app['session']->getId());
    });
    

    So, before establishing the WebSocket connection, we just need to perform a GET request to our new route to obtain the sessionID and compare it with the token.

    var io = require('socket.io')(3000),
        http = require('http');
    
    io.use(function (socket, next) {
        var sessionId = socket.handshake.query.token,
            options = {
                host: 'localhost',
                port: 8080,
                path: '/getSessionID',
                headers: {Cookie: 'PHPSESSID=' + sessionId}
            };
    
        http.request(options, function (response) {
            response.on('error', function () {
                next(new Error("not authorized"));
            });
            response.on('data', function (chunk) {
                var sessionIdFromRequest;
                try {
                    sessionIdFromRequest = JSON.parse(chunk.toString());
                } catch (e) {
                    next(new Error("not authorized"));
                }
    
                if (sessionId == sessionIdFromRequest) {
                    next();
                } else {
                    next(new Error("not authorized"));
                }
            });
        }).end();
    });
    
    io.on('connection', function (socket) {
        setInterval(function() {
            socket.emit('hello', {hello: 'world'});
        }, 1000);
    });
    

    And that’s all. You can see the full example in my github account.

    Book review: Socket.IO Cookbook

    Last summer I collaborated as a technical reviewer on the book “Socket.IO Cookbook”, written by Tyson Cadenhead, and I finally have the book in my hands.

    I’m a big fan of real-time technologies and I’m normally a Socket.io user. Because of that, when the people at Packt Publishing contacted me to join the project as a technical reviewer, my answer was yes. I have serious problems nowadays finding time for pet projects and extra activities, but if there are WebSockets inside I cannot resist.

    The book is solid and it’s a good starting point for event-based communication with JavaScript. I normally don’t like beginners’ books (even if I’m a beginner in the technology): I don’t like books where the author explains how to do something that I could just read on the project’s website. OK, this book isn’t one of those books. The writer doesn’t assume the reader is a total newbie. Because of that, newbies can sometimes get lost in some chapters, but that’s exactly the way we all learn new technologies. I like the way Tyson introduces socket.io concepts.

    The book is focused on JavaScript and also uses JavaScript for the backend (with node). Maybe I miss the integration with non-JavaScript environments, but as socket.io is a JavaScript library I understand that using JavaScript throughout the whole application lifecycle is a good approach.


    Also, in those days I was reading and playing a little bit with WebRTC, and the book has one chapter about it! #cool

    PHP Dumper using Websockets

    Another crazy idea: I want to dump my backend output in the browser’s console. There are several PHP dumpers, for example Raul Fraile’s LadyBug. There are also libraries that do exactly what I want to do, such as Chrome Logger. But I wanted to use WebSockets and dump values in real time, without waiting for the end of the backend script. Why? The answer is simple: because I wanted to 🙂

    I’ve written several posts about WebSockets, Silex and PHP. In this case I’ll use a similar approach to the previous posts. First I’ve created a simple WebSocket server with socket.io. This server also starts an Express server to handle internal messages from the Silex backend:

    var CONF = {
            IO: {HOST: '0.0.0.0', PORT: 8888},
            EXPRESS: {HOST: '0.0.0.0', PORT: 26300}
        },
        express = require('express'),
        expressApp = express(),
        server = require('http').Server(expressApp),
        io = require('socket.io')(server, {origins: 'localhost:*'})
        ;
    
    expressApp.get('/:type/:session/:message', function (req, res) {
        console.log(req.params);
        var session = req.params.session,
            type = req.params.type,
            message = req.params.message;
    
        io.sockets.emit('dumper.' + session, {title: type, data: JSON.parse(message)});
        res.json('OK');
    });
    
    io.sockets.on('connection', function (socket) {
        console.log("Socket connected!");
    });
    
    expressApp.listen(CONF.EXPRESS.PORT, CONF.EXPRESS.HOST, function () {
        console.log('Express started');
    });
    
    server.listen(CONF.IO.PORT, CONF.IO.HOST, function () {
        console.log('IO started');
    });
    

    Now we create a simple service provider to connect our Silex backend to our Express server (and send the dumper’s messages using the WebSocket connection):

    <?php
    
    namespace Dumper\Silex\Provider;
    
    use Silex\Application;
    use Silex\ServiceProviderInterface;
    use Dumper\Dumper;
    use Silex\Provider\SessionServiceProvider;
    use GuzzleHttp\Client;
    
    class DumperServiceProvider implements ServiceProviderInterface
    {
        private $wsConnector;
        private $client;
    
        public function __construct(Client $client, $wsConnector)
        {
            $this->client = $client;
            $this->wsConnector = $wsConnector;
        }
    
        public function register(Application $app)
        {
            $app->register(new SessionServiceProvider());
    
            $app['dumper'] = function () use ($app) {
                return new Dumper($this->client, $this->wsConnector, $app['session']->get('uid'));
            };
    
            $app['dumper.init'] = $app->protect(function ($uid) use ($app) {
                $app['session']->set('uid', $uid);
            });
    
            $app['dumper.uid'] = function () use ($app) {
                return $app['session']->get('uid');
            };
        }
    
        public function boot(Application $app)
        {
        }
    }
    

    Finally, our Silex application looks like this:

    include __DIR__ . '/../vendor/autoload.php';
    
    use Silex\Application;
    use Silex\Provider\TwigServiceProvider;
    use Dumper\Silex\Provider\DumperServiceProvider;
    use GuzzleHttp\Client;
    
    $app = new Application([
        'debug' => true
    ]);
    
    $app->register(new DumperServiceProvider(new Client(), 'http://192.168.1.104:26300'));
    
    $app->register(new TwigServiceProvider(), [
        'twig.path' => __DIR__ . '/../views',
    ]);
    
    $app->get("/", function (Application $app) {
        $uid = uniqid();
    
        $app['dumper.init']($uid);
    
        return $app['twig']->render('index.twig', [
            'uid' => $uid
        ]);
    });
    
    $app->get('/api/hello', function (Application $app) {
        $app['dumper']->error("Hello world1");
        $app['dumper']->info([1,2,3]);
    
        return $app->json('OK');
    });
    
    
    $app->run();
    

    On the client side we have one index.html. I’ve created a Twig template to pass the uid to the dumper object (the WebSocket channel to listen to), but we could also fetch this uid from the backend with an ajax call.

    <!DOCTYPE html>
    <html>
    <head lang="en">
        <meta charset="UTF-8">
        <title>Dumper example</title>
    </head>
    <body>
    
    <a href="#" onclick="api('hello')">hello</a>
    
    <!-- We use jQuery just for the demo. Library doesn't need jQuery -->
    <script src="assets/jquery/dist/jquery.min.js"></script>
    <!-- We load the library -->
    <script src="js/dumper.js"></script>
    
    <script>
        dumper.startSocketIo('{{ uid }}', '//localhost:8888');
        function api(name) {
            // we perform a remote api ajax call that triggers websockets
            $.getJSON('/api/' + name, function (data) {
                // Doing nothing. We only call the api to test php dumper
            });
        }
    </script>
    </body>
    </html>
    

    I use jQuery to handle the ajax request and to connect to the WebSocket dumper object (it doesn’t depend on jQuery, only on socket.io):

    var dumper = (function () {
        var socket, sessionUid, socketUri, init;
    
        init = function () {
            if (typeof(io) === 'undefined') {
                setTimeout(init, 100);
            } else {
                socket = io(socketUri);
    
                socket.on('dumper.' + sessionUid, function (data) {
                    console.group('Dumper:', data.title);
                    switch (data.title) {
                        case 'emergency':
                        case 'alert':
                        case 'critical':
                        case 'error':
                            console.error(data.data);
                            break;
                        case 'warning':
                            console.warn(data.data);
                            break;
                        case 'notice':
                        case 'info':
                        //case 'debug':
                            console.info(data.data);
                            break;
                        default:
                            console.log(data.data);
                    }
                    console.groupEnd();
                });
            }
        };
    
        return {
            startSocketIo: function (uid, uri) {
                var script = document.createElement('script');
                var node = document.getElementsByTagName('script')[0];
    
                sessionUid = uid;
                socketUri = uri;
                script.src = socketUri + '/socket.io/socket.io.js';
                node.parentNode.insertBefore(script, node);
    
                init();
            }
        };
    })();
    

    Source code is available in my github account

    Enclosing socket.io Websocket connection inside a HTML5 SharedWorker

    I really like WebSockets. I’ve written several posts about them. Today we’re going to speak about something related to WebSockets. Let me explain it a little bit.

    Imagine that we build a web application with WebSockets. That means that when we start the application, we need to connect to the WebSockets server. If our application is a single-page application, we’ll create one socket per application, but what happens if we open three tabs with the application in the browser? The answer is simple: we’ll create three sockets. Also, if we reload one tab (a full refresh) we’ll disconnect our socket and reconnect again. Maybe we can handle this situation, but we can easily bypass this disconnect-connect situation with an HTML5 feature called SharedWorkers.

    Web Workers allow us to run JavaScript processes in the background. We can also create Shared Workers, which can be shared within our browser session. That means we can enclose our WebSocket connection inside a SharedWorker and, if we open several tabs in our browser, we only need one socket (one socket per session instead of one socket per tab).

    I’ve written a simple library called gio to perform this operation. gio uses socket.io to create the WebSockets. SharedWorkers are a new HTML5 feature and need a modern browser, while socket.io also works with old browsers. gio checks whether SharedWorkers are available and, if they aren’t, it creates a plain WebSocket connection instead of enclosing it in a worker.

    We can watch a simple video to see how it works. In the video we can see how sockets are created: only one socket is created even if we open more than one tab in our browser, but if we open a new session (an incognito session, for example), a new socket is created.

    Here we can see the SharedWorker code:

    "use strict";
    
    importScripts('socket.io.js');
    
    var socket = io(self.name),
        ports = [];
    
    addEventListener('connect', function (event) {
        var port = event.ports[0];
        ports.push(port);
        port.start();
    
        port.addEventListener("message", function (event) {
            for (var i = 0; i < event.data.events.length; ++i) {
                var eventName = event.data.events[i];
    
                socket.on(event.data.events[i], function (e) {
                    port.postMessage({type: eventName, message: e});
                });
            }
        });
    });
    
    socket.on('connect', function () {
        for (var i = 0; i < ports.length; i++) {
            ports[i].postMessage({type: '_connect'});
        }
    });
    
    socket.on('disconnect', function () {
        for (var i = 0; i < ports.length; i++) {
            ports[i].postMessage({type: '_disconnect'});
        }
    });
    

    And here we can see the gio source code:

    var gio = function (uri, onConnect, onDisConnect) {
        "use strict";
        var worker, onError, workerUri, events = {};
    
        function getKeys(obj) {
            var keys = [];
    
            for (var i in obj) {
                if (obj.hasOwnProperty(i)) {
                    keys.push(i);
                }
            }
    
            return keys;
        }
    
        function onMessage(type, message) {
            switch (type) {
                case '_connect':
                    if (onConnect) onConnect();
                    break;
                case '_disconnect':
                    if (onDisConnect) onDisConnect();
                    break;
                default:
                    if (events[type]) events[type](message);
            }
        }
    
        function startWorker() {
            worker = new SharedWorker(workerUri, uri);
            worker.port.addEventListener("message", function (event) {
                onMessage(event.data.type, event.data.message);
    
            }, false);
    
            worker.onerror = function (evt) {
                if (onError) onError(evt);
            };
    
            worker.port.start();
            worker.port.postMessage({events: getKeys(events)});
        }
    
        function startSocketIo() {
            var socket = io(uri);
            socket.on('connect', function () {
                if (onConnect) onConnect();
            });
    
            socket.on('disconnect', function () {
                if (onDisConnect) onDisConnect();
            });
    
            for (var eventName in events) {
                if (events.hasOwnProperty(eventName)) {
                    socket.on(eventName, socketOnEventHandler(eventName));
                }
            }
        }
    
        function socketOnEventHandler(eventName) {
            return function (e) {
                onMessage(eventName, e);
            };
        }
    
        return {
            registerEvent: function (eventName, callback) {
                events[eventName] = callback;
            },
    
            start: function () {
                if (typeof SharedWorker === 'undefined') {
                    // fall back to a plain socket.io connection when SharedWorker is not available
                    startSocketIo();
                } else {
                    startWorker();
                }
            },
    
            onError: function (cbk) {
                onError = cbk;
            },
    
            setWorker: function (uri) {
                workerUri = uri;
            }
        };
    };
    

    And here the application code:

    (function (gio) {
        "use strict";
    
        var onConnect = function () {
            console.log("connected!");
        };
    
        var onDisConnect = function () {
            console.log("disconnect!");
        };
    
        var ws = gio("http://localhost:8080", onConnect, onDisConnect);
        ws.setWorker("sharedWorker.js");
    
        ws.registerEvent("message", function (data) {
            console.log("message", data);
        });
    
        ws.onError(function (data) {
            console.log("error", data);
        });
    
        ws.start();
    }(gio));
    

    I’ve also created a simple WebSocket server with socket.io. In this small server there’s a setInterval function broadcasting one message per second to all the connected clients, to see the application working:

    var io, connectedSockets;
    
    io = require('socket.io').listen(8080);
    connectedSockets = 0;
    
    io.sockets.on('connection', function (socket) {
        connectedSockets++;
        console.log("Socket connected! Conected sockets:", connectedSockets);
    
        socket.on('disconnect', function () {
            connectedSockets--;
            console.log("Socket disconnect! Conected sockets:", connectedSockets);
        });
    });
    
    setInterval(function() {
        io.emit("message", "Hola " + new Date().getTime());
    }, 1000); 
    

    Source code is available in my github account.