Alexa skill and account linking with serverless and Cognito

Sometimes, when we’re building an Alexa skill, we need to identify the user. To do that, Alexa provides account linking: basically we need an OAuth2 server to link an account to our Alexa skill. AWS provides us a managed OAuth2 service called Cognito, so we can use a Cognito user pool to handle the authentication for our Alexa skills.

In this example I’ve followed the following blog post. Cognito is a bit weird to set up, but after following all the steps we can use account linking in our Alexa skill.

There’s also a good sample skill here. I’ve studied this example a little and created a working prototype on my own, basically to understand the process.

That’s my skill:

const Alexa = require('ask-sdk')

const RequestInterceptor = require('./interceptors/RequestInterceptor')
const ResponseInterceptor = require('./interceptors/ResponseInterceptor')
const LocalizationInterceptor = require('./interceptors/LocalizationInterceptor')
const GetLinkedInfoInterceptor = require('./interceptors/GetLinkedInfoInterceptor')

const LaunchRequestHandler = require('./handlers/LaunchRequestHandler')
const CheckAccountLinkedHandler = require('./handlers/CheckAccountLinkedHandler')
const HelloWorldIntentHandler = require('./handlers/HelloWorldIntentHandler')
const HelpIntentHandler = require('./handlers/HelpIntentHandler')
const CancelAndStopIntentHandler = require('./handlers/CancelAndStopIntentHandler')
const SessionEndedRequestHandler = require('./handlers/SessionEndedRequestHandler')
const FallbackHandler = require('./handlers/FallbackHandler')
const ErrorHandler = require('./handlers/ErrorHandler')
const RequestInfoHandler = require('./handlers/RequestInfoHandler')

let skill

module.exports.handler = async (event, context) => {
  if (!skill) {
    skill = Alexa.SkillBuilders.custom().
      addRequestInterceptors(
        RequestInterceptor,
        ResponseInterceptor,
        LocalizationInterceptor,
        GetLinkedInfoInterceptor
      ).
      addRequestHandlers(
        LaunchRequestHandler,
        CheckAccountLinkedHandler,
        HelloWorldIntentHandler,
        RequestInfoHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        SessionEndedRequestHandler,
        FallbackHandler).
      addErrorHandlers(
        ErrorHandler).
      create()
  }

  return await skill.invoke(event, context)
}

Maybe the most important piece here is GetLinkedInfoInterceptor.

const log = require('../lib/log')
const cognito = require('../lib/cognito')
const utils = require('../lib/utils')

const GetLinkedInfoInterceptor = {
  async process (handlerInput) {
    if (utils.isAccountLinked(handlerInput)) {
      const userData = await cognito.getUserData(handlerInput.requestEnvelope.session.user.accessToken)
      log.info('GetLinkedInfoInterceptor: getUserData: ', userData)
      const sessionAttributes = handlerInput.attributesManager.getSessionAttributes()
      if (userData.Username !== undefined) {
        sessionAttributes.auth = true
        sessionAttributes.emailAddress = cognito.getAttribute(userData.UserAttributes, 'email')
        sessionAttributes.userName = userData.Username
        handlerInput.attributesManager.setSessionAttributes(sessionAttributes)
      } else {
        sessionAttributes.auth = false
        log.error('GetLinkedInfoInterceptor: No user data was found.')
      }
    }
  }
}

module.exports = GetLinkedInfoInterceptor

This interceptor retrieves the user info from Cognito when we provide the access token. We can obtain the access token from the session (if our skill’s account is linked). Then we inject the user information (in my example the email and the username from the Cognito user pool) into the session.
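
The cognito helper isn’t shown above. A minimal sketch of what lib/cognito.js could look like, assuming the aws-sdk CognitoIdentityServiceProvider getUser call (the real implementation may differ):

const AWS = require('aws-sdk')

const provider = new AWS.CognitoIdentityServiceProvider()

// getUser returns { Username, UserAttributes: [{ Name, Value }, ...] } for a valid access token
const getUserData = async accessToken => {
  try {
    return await provider.getUser({ AccessToken: accessToken }).promise()
  } catch (err) {
    // an expired or revoked token ends up here; the interceptor treats it as "no user data"
    return {}
  }
}

// find one attribute (e.g. 'email') inside the UserAttributes array
const getAttribute = (attributes, name) => {
  const attribute = attributes.find(attr => attr.Name === name)
  return attribute ? attribute.Value : undefined
}

module.exports = { getUserData, getAttribute }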

Then we can add a handler to our request handler chain called CheckAccountLinkedHandler. With this handler we check whether the skill’s account is linked. If it isn’t, we respond with ‘withLinkAccountCard’ to prompt the user to log in with Cognito and link the account.

const utils = require('../lib/utils')

const CheckAccountLinkedHandler = {
  canHandle (handlerInput) {
    return !utils.isAccountLinked(handlerInput)
  },
  handle (handlerInput) {
    const requestAttributes = handlerInput.attributesManager.getRequestAttributes()
    const speakOutput = requestAttributes.t('NEED_TO_LINK_MESSAGE', 'SKILL_NAME')
    return handlerInput.responseBuilder.
      speak(speakOutput).
      withLinkAccountCard().
      getResponse()
  }
}

module.exports = CheckAccountLinkedHandler
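
The isAccountLinked helper just checks whether the incoming request carries an access token. A sketch of lib/utils.js under that assumption (the real helper may differ):

const isAccountLinked = handlerInput =>
  handlerInput.requestEnvelope.session.user.accessToken !== undefined

module.exports = { isAccountLinked }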

Later we can create an intent to give the information back to the user or maybe, in another case, perform an authorization workflow.

const RequestInfoHandler = {
  canHandle (handlerInput) {
    const request = handlerInput.requestEnvelope.request
    return (request.type === 'IntentRequest'
      && request.intent.name === 'RequestInfoIntent')
  },
  handle (handlerInput) {
    const request = handlerInput.requestEnvelope.request
    const requestAttributes = handlerInput.attributesManager.getRequestAttributes()
    const sessionAttributes = handlerInput.attributesManager.getSessionAttributes()
    const repromptOutput = requestAttributes.t('FOLLOW_UP_MESSAGE')
    const cardTitle = requestAttributes.t('SKILL_NAME')

    let speakOutput = ''

    let inquiryTypeId = getResolvedSlotIDValue(request, 'infoTypeRequested')
    if (!inquiryTypeId) {
      inquiryTypeId = 'fullProfile'
      speakOutput += requestAttributes.t('NOT_SURE_OF_TYPE_MESSAGE')
    } else {
      if (inquiryTypeId === 'emailAddress' || inquiryTypeId === 'fullProfile') {
        speakOutput += requestAttributes.t('REPORT_EMAIL_ADDRESS', sessionAttributes.emailAddress)
      }

      if (inquiryTypeId === 'userName' || inquiryTypeId === 'fullProfile') {
        speakOutput += requestAttributes.t('REPORT_USERNAME', sessionAttributes.userName)
      }
    }

    speakOutput += repromptOutput

    return handlerInput.responseBuilder.
      speak(speakOutput).
      reprompt(repromptOutput).
      withSimpleCard(cardTitle, speakOutput).
      getResponse()
  }
}

module.exports = RequestInfoHandler
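
RequestInfoHandler relies on a getResolvedSlotIDValue helper that isn’t shown in the snippet. A sketch of how it could be implemented, walking the slot’s entity resolution data (an assumption, not the project’s actual code):

const getResolvedSlotIDValue = (request, slotName) => {
  const slot = request.intent.slots && request.intent.slots[slotName]
  const resolutions = slot && slot.resolutions && slot.resolutions.resolutionsPerAuthority

  if (resolutions && resolutions[0].status.code === 'ER_SUCCESS_MATCH') {
    return resolutions[0].values[0].value.id
  }

  return undefined
}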

And basically that’s all. In fact, it isn’t very different from traditional web authentication. Maybe the most complicated part, especially if you’re not used to OAuth2, is configuring Cognito properly.

You can see the source code on my GitHub.


Alexa skill example with Serverless framework

Today I want to play with Alexa and the Serverless Framework, so I’ve created a sample hello world skill. In fact, this example is more or less the official hello-world sample.

const Alexa = require('ask-sdk-core')
const RequestInterceptor = require('./interceptors/RequestInterceptor')
const ResponseInterceptor = require('./interceptors/ResponseInterceptor')
const LaunchRequestHandler = require('./handlers/LaunchRequestHandler')
const HelloWorldIntentHandler = require('./handlers/HelloWorldIntentHandler')
const HelpIntentHandler = require('./handlers/HelpIntentHandler')
const CancelAndStopIntentHandler = require('./handlers/CancelAndStopIntentHandler')
const SessionEndedRequestHandler = require('./handlers/SessionEndedRequestHandler')
const FallbackHandler = require('./handlers/FallbackHandler')
const ErrorHandler = require('./handlers/ErrorHandler')

let skill

module.exports.handler = async (event, context) => {
  if (!skill) {
    skill = Alexa.SkillBuilders.custom().
      addRequestInterceptors(RequestInterceptor).
      addResponseInterceptors(ResponseInterceptor).
      addRequestHandlers(
        LaunchRequestHandler,
        HelloWorldIntentHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        SessionEndedRequestHandler,
        FallbackHandler).
      addErrorHandlers(
        ErrorHandler).
      create()
  }

  return await skill.invoke(event, context)
}

It has one intent that answers the hello command.

const HelloWorldIntentHandler = {
  canHandle (handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'HelloWorldIntent'
  },
  handle (handlerInput) {
    const speechOutput = 'Hello World'
    const cardTitle = 'Hello world'

    return handlerInput.responseBuilder.
      speak(speechOutput).
      reprompt(speechOutput).
      withSimpleCard(cardTitle, speechOutput).
      getResponse()
  }
}

module.exports = HelloWorldIntentHandler
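
The launch request handler isn’t listed here, but the tests further down expect a welcome message and a ‘Hello world’ card, so a sketch consistent with them could be (the real handler may differ):

const LaunchRequestHandler = {
  canHandle (handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest'
  },
  handle (handlerInput) {
    const speechOutput = 'Welcome to Hello world, you can say Hello or Help. Which would you like to try?'

    return handlerInput.responseBuilder.
      speak(speechOutput).
      reprompt(speechOutput).
      withSimpleCard('Hello world', speechOutput).
      getResponse()
  }
}

module.exports = LaunchRequestHandler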

I’m using the Serverless Framework to deploy the skill. I know that the Serverless Framework has plugins for Alexa skills that help us create the whole skill, but in this example I want to do it a bit more manually (that’s how I learn new things).

First I create the skill in the Alexa developer console (or via the ASK CLI); there are a lot of tutorials about that. Then I take my alexaSkillId and use it in my serverless config file as the trigger event of my lambda function.

service: hello-world

provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, self:custom.defaultRegion}
  stage: ${opt:stage, self:custom.defaultStage}

custom:
  defaultRegion: eu-west-1
  defaultStage: prod

functions:
  info:
    handler: src/index.handler
    events:
      - alexaSkill: amzn1.ask.skill.my_skill

Then I deploy the lambda function:

npx serverless deploy --aws-s3-accelerate

Then I take the ARN of the lambda function and use it as the endpoint in the Alexa developer console.

We can also test our skill (at least the lambda function) using our favorite testing framework; I will use Jest in this example.
Testing is very important, at least for me, when I’m working with lambdas and serverless. I want to test my script locally instead of deploying to AWS again and again (it’s slow).

const when = require('./steps/when')
const { init } = require('./steps/init')

describe('When we invoke the skill', () => {
  beforeAll(() => {
    init()
  })

  test('launch intent', async () => {
    const res = await when.we_invoke_intent(require('./events/use_skill'))
    const card = res.response.card
    expect(card.title).toBe('Hello world')
    expect(card.content).toBe('Welcome to Hello world, you can say Hello or Help. Which would you like to try?')
  })

  test('help handler', async () => {
    const res = await when.we_invoke_intent(require('./events/help_handler'))
    console.log(res.response.outputSpeech.ssml)
    expect(res.response.outputSpeech.ssml).toBe('<speak>You can say hello to me! How can I help?</speak>')
  })

  test('Hello world handler', async () => {
    const res = await when.we_invoke_intent(require('./events/hello_world_handler'))
    const card = res.response.card
    expect(card.title).toBe('Hello world')
    expect(card.content).toBe('Hello World')
  })
})
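
The tests rely on a small when helper that invokes the lambda handler directly with a recorded Alexa request envelope (init just prepares environment variables). A minimal sketch of tests/steps/when.js under those assumptions:

const { handler } = require('../../src/index')

// invoke the exported lambda handler with a recorded event and an empty context
const we_invoke_intent = event => handler(event, {})

module.exports = { we_invoke_intent }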

The full code is in my GitHub account.

Playing with TOTP (2FA) and mobile applications with ionic

Today I want to play with two-factor authentication. When we speak about 2FA, TOTP comes to mind. There are a lot of TOTP clients, for example Google Authenticator.

My idea with this prototype is to build a mobile application (with Ionic) and validate a TOTP token on a server (in this case a Python/Flask application). The token will be generated with a standard TOTP client. Let’s start.

The server will be a simple Flask application handling a couple of routes. One route (GET /) will generate a QR code that allows us to configure our TOTP client. I’m using the pyotp library to handle the TOTP operations.

from flask import Flask, jsonify, abort, render_template, request
import os
from dotenv import load_dotenv
from functools import wraps
import pyotp
from flask_qrcode import QRcode

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

totp = pyotp.TOTP(os.getenv('TOTP_BASE32_SECRET'))

app = Flask(__name__)
QRcode(app)


def verify(key):
    return totp.verify(key)


def authorize(f):
    @wraps(f)
    def decorated_function(*args, **kws):
        if not 'Authorization' in request.headers:
            abort(401)

        data = request.headers['Authorization']
        token = str.replace(str(data), 'Bearer ', '')

        if token != os.getenv('BEARER'):
            abort(401)

        return f(*args, **kws)

    return decorated_function


@app.route('/')
def index():
    return render_template('index.html', totp=pyotp.totp.TOTP(os.getenv('TOTP_BASE32_SECRET')).provisioning_uri("gonzalo123.com", issuer_name="TOTP Example"))


@app.route('/check/<key>', methods=['GET'])
@authorize
def alert(key):
    status = verify(key)
    return jsonify({'status': status})


if __name__ == "__main__":
    app.run(host='0.0.0.0')
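
With the server running (Flask listens on port 5000 by default), checking a token is just a GET request with the bearer header; these values are hypothetical:

curl -H "Authorization: Bearer my_secret_bearer" http://localhost:5000/check/123456
# {"status": false}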

I’ll use a standard TOTP client to generate the tokens, but with pyotp we can also easily create a client:

import pyotp
import time
import os
from dotenv import load_dotenv
import logging

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

totp = pyotp.TOTP(os.getenv('TOTP_BASE32_SECRET'))

mem = None
# poll once per second and log the token only when it changes
while True:
    now = totp.now()
    if mem != now:
        logging.info(now)
        mem = now
    time.sleep(1)

And finally the mobile application. It’s a simple Ionic application. This is the view:

<ion-header>
  <ion-toolbar>
    <ion-title>
      TOTP Validation demo
    </ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <div class="ion-padding">
    <ion-item>
      <ion-label position="stacked">totp</ion-label>
      <ion-input placeholder="Enter value" [(ngModel)]="totp"></ion-input>
    </ion-item>
    <ion-button fill="solid" color="secondary" (click)="validate()" [disabled]="!totp">
      Validate
      <ion-icon slot="end" name="help-circle-outline"></ion-icon>
    </ion-button>
  </div>
</ion-content>

The controller:

import { Component } from '@angular/core'
import { ApiService } from '../sercices/api.service'
import { ToastController } from '@ionic/angular'

@Component({
  selector: 'app-home',
  templateUrl: 'home.page.html',
  styleUrls: ['home.page.scss']
})
export class HomePage {
  public totp

  constructor (private api: ApiService, public toastController: ToastController) {}

  validate () {
    this.api.get('/check/' + this.totp).then(data => this.alert(data.status))
  }

  async alert (status) {
    const toast = await this.toastController.create({
      message: status ? 'OK' : 'Not valid code',
      duration: 2000,
      color: status ? 'primary' : 'danger',
    })
    toast.present()
  }
}

I’ve also added a simple security layer. In a real-life application we’d need something better, but here I’ve got an Auth Bearer token hardcoded and I send it in every HTTP request. To do that I’ve created a simple API service:

import { Injectable } from '@angular/core'
import { isDevMode } from '@angular/core'
import { HttpClient, HttpHeaders, HttpParams } from '@angular/common/http'
import { CONF } from './conf'

@Injectable({
  providedIn: 'root'
})
export class ApiService {

  private isDev: boolean = isDevMode()
  private apiUrl: string

  constructor (private http: HttpClient) {
    this.apiUrl = this.isDev ? CONF.API_DEV : CONF.API_PROD
  }

  public get (uri: string, params?: Object): Promise<any> {
    return new Promise((resolve, reject) => {
      this.http.get(this.apiUrl + uri, {
        headers: ApiService.getHeaders(),
        params: ApiService.getParams(params)
      }).subscribe(
        res => {this.handleHttpNext(res), resolve(res)},
        err => {this.handleHttpError(err), reject(err)},
        () => this.handleHttpComplete()
      )
    })
  }

  private static getHeaders (): HttpHeaders {

    const headers = {
      'Content-Type': 'application/json'
    }

    headers['Authorization'] = 'Bearer ' + CONF.bearer

    return new HttpHeaders(headers)
  }

  private static getParams (params?: Object): HttpParams {
    let Params = new HttpParams()
    for (const key in params) {
      if (params.hasOwnProperty(key)) {
        Params = Params.set(key, params[key])
      }
    }

    return Params
  }

  private handleHttpError (err) {
    console.log('HTTP Error', err)
  }

  private handleHttpNext (res) {
    console.log('HTTP response', res)
  }

  private handleHttpComplete () {
    console.log('HTTP request completed.')
  }
}

And that’s all. Here’s a video with a working example of the prototype:

Source code here

Using cache buster with OpenUI5 outside SCP

When we work with SPAs and web applications we need to deal with the browser’s cache. Sometimes we change our static files, but the client’s browser uses a cached version of the file instead of the new one. We can tell the user: “Please empty your cache to use the new version”. But most of the time the user doesn’t know what we’re talking about, and we have a problem. There’s a technique called cache busting used to bypass this issue. It consists of changing the name of the file (or adding an extra parameter) to ensure that the browser sends a different request to the server, preventing it from reusing the cached version of the file.

When we work with SAPUI5 applications on SCP, we only need to use the cache-buster version of sap-ui-core:

<script id="sap-ui-bootstrap"
      src="https://sapui5.hana.ondemand.com/resources/sap-ui-cachebuster/sap-ui-core.js"
      data-sap-ui-libs="sap.m"
      data-sap-ui-theme="sap_belize"
      data-sap-ui-compatVersion="edge"
      data-sap-ui-appCacheBuster=""
      data-sap-ui-preload="async"
      data-sap-ui-resourceroots='{"app": ""}'
      data-sap-ui-frameOptions="trusted">
</script>

With this configuration, our framework will use a “cache buster friendly” version of our files and SCP will serve them properly.

For example, when our framework wants the /dist/Component.js file, the browser will request /dist/~1541685070813~/Component.js from the server, and the server will serve the file /dist/Component.js. As I said before, when we work with SCP our standard build process automatically takes care of this. It creates a file called sap-ui-cachebuster-info.json where we can find all our files with a timestamp-like hash that the build process changes each time a file changes.

{
  "Component-dbg.js": 1545316733136,
  "Component-preload.js": 1545316733226,
  "Component.js": 1541685070813,
  ...
}

It works like a charm, but I don’t always use SCP. Sometimes I serve OpenUI5 from an nginx server, for example, so the cache buster “doesn’t work”. That’s a problem because I need to deal with browser caches again each time we deploy a new version of the application. I wanted to solve the issue. Let me explain how I did it.

Since I was using a Lumen/PHP server for the backend, my first idea was to create a dynamic route in Lumen to handle the cache-buster URLs. With this approach I knew I could solve the problem, but there was something I didn’t like: I’d be using a dynamic server to serve static content. I don’t have huge traffic, so I could use this approach, but it isn’t beautiful.

My second approach was: OK, I’ve got a sap-ui-cachebuster-info.json file where I can see all the files that the cache buster will use and their hashes. So, why not create those files in my build script? With this approach I create the full static structure each time I deploy the application, without needing any server-side scripting language to generate dynamic content. OpenUI5 uses Grunt, so I can create a simple Grunt task to create my files.

'use strict';

var fs = require('fs');
var path = require('path');
var chalk = require('chalk');

module.exports = function(grunt) {
  var name = 'cacheBuster';
  var info = 'Generates Cache buster files';

  var cacheBuster = function() {
    var config = grunt.config.get(name);
    var data, t, src, dest, dir, prop;

    data = grunt.file.readJSON(config.src + '/sap-ui-cachebuster-info.json');
    for (prop in data) {
      if (data.hasOwnProperty(prop)) {
        t = data[prop];
        src = config.src + '/' + prop;
        dest = config.src + '/~' + t + '~/' + prop;
        grunt.verbose.writeln(
            name + ': ' + chalk.cyan(path.basename(src)) + ' to ' +
            chalk.cyan(dest) + '.');
        dir = path.dirname(dest);
        grunt.file.mkdir(dir);
        fs.copyFileSync(src, dest);
      }
    }
  };

  grunt.registerMultiTask(name, info, cacheBuster);
};

I’ve published my Grunt task to npm, so when I need to use it I only need to:

Install the task

npm install gonzalo123-cachebuster

Add the task to my gruntfile

require('gonzalo123-cachebuster')(grunt);

And set up the path where the UI5 build task generates our dist files:

  grunt.config.merge({
    pkg: grunt.file.readJSON('package.json'),
    ...
    cacheBuster: {
      src: 'dist'
    }
  });
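
Putting it all together, a minimal Gruntfile could look like this (a sketch under my assumptions; 'build' stands for whatever task chain already generates the dist folder and its sap-ui-cachebuster-info.json in your project):

'use strict';

module.exports = function(grunt) {
  grunt.config.merge({
    pkg: grunt.file.readJSON('package.json'),
    cacheBuster: {
      src: 'dist'
    }
  });

  require('gonzalo123-cachebuster')(grunt);

  // run the cache buster copy step after the regular build that creates dist/
  grunt.registerTask('default', ['build', 'cacheBuster']);
};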

And that’s all. My users will enjoy (or suffer) the new versions of my applications without problems with cached files.

The Grunt task is available on my GitHub.

Working with SAPUI5 locally (part 3). Adding more services in Docker

In the previous post we moved one project to Docker. The idea was to keep exactly the same functionality (without even touching anything in the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it is exactly overengineering, indeed), but I want to build something with different services working together. Let’s start.

We’re going to change our original project a little. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via WebSockets.

With those premises we are going to need:

  • Frontend: UI5 application
  • Backend: PHP/lumen application
  • Worker: nodejs application which is listening to a RabbitMQ event and serving the websocket server (using socket.io)
  • Nginx server
  • PostgreSQL database.
  • RabbitMQ message broker.

As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.

Here we can see the docker-compose file that sets up all the services:

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - app-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - app-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    volumes:
    - ./src/frontend:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - app-network
  io:
    image: gonzalo123.io
    ports:
    - "9999:9999"
    restart: always
    volumes:
    - ./src/io:/code/src
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-io
    networks:
    - app-network
  pg:
    image: gonzalo123.pg
    restart: always
    ports:
    - "5432:5432"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-pg
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
    - app-network
  rabbit:
    image: rabbitmq:3-management
    container_name: gonzalo123.rabbit
    restart: always
    ports:
    - "15672:15672"
    - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
    networks:
    - app-network
networks:
  app-network:
    driver: bridge
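
docker-compose resolves variables like ${MY_IP} or ${POSTGRES_PASSWORD} from the environment, typically from an .env file placed next to docker-compose.yml. Something like this, with hypothetical values:

MY_IP=192.168.1.100
POSTGRES_PASSWORD=secret
POSTGRES_USER=username
POSTGRES_DB=mydb
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest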

We’re going to use the same Dockerfiles as in the previous post, but we also need new ones for the worker, the database server and the message queue:

Worker:

FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./io .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]

The worker script is a simple script that serves the socket.io server and emits a WebSocket event for every message received from the RabbitMQ queue.

var amqp = require('amqp'),
  httpServer = require('http').createServer(),
  io = require('socket.io')(httpServer, {
    origins: '*:*',
  }),
  pg = require('pg')
;

require('dotenv').config();
var pgClient = new pg.Client(process.env.DB_DSN);

rabbitMq = amqp.createConnection({
  host: process.env.RABBIT_HOST,
  port: process.env.RABBIT_PORT,
  login: process.env.RABBIT_USER,
  password: process.env.RABBIT_PASS,
});

var sql = 'SELECT clickCount FROM docker.clicks';

// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });

  });

  rabbitMq.on('ready', function() {
    var queue = rabbitMq.queue('ui5');
    queue.bind('#');

    queue.subscribe(function(message) {
      pgClient.query(sql, function(err, result) {
        var count = parseInt(result.rows[0]['clickcount']);
        count = count + parseInt(message.data.toString('utf8'));
        pgClient.query('UPDATE docker.clicks SET clickCount = $1', [count],
          function(err) {
            io.emit('click', {count: count});
          });
      });
    });
  });
});

httpServer.listen(process.env.IO_PORT);
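
On the other side of the WebSocket, the frontend only needs to listen to the 'click' event that this worker emits and refresh the counter on screen. A minimal sketch of that piece (assumed controller code, not the actual project file; the /api/clicks route name is also an assumption):

sap.ui.define([
  'sap/ui/core/mvc/Controller',
  'sap/ui/model/json/JSONModel'
], function (Controller, JSONModel) {
  'use strict';

  return Controller.extend('app.controller.Main', {
    onInit: function () {
      var oModel = new JSONModel({ clickCount: 0 });
      this.getView().setModel(oModel);

      // 'io' is the socket.io client script served by Nginx under /socket.io/
      var socket = io();
      socket.on('click', function (data) {
        oModel.setProperty('/clickCount', data.count);
      });
    },

    onPress: function () {
      // the backend publishes the increment to RabbitMQ; the route name is an assumption
      jQuery.ajax({ url: '/api/clicks', method: 'POST' });
    }
  });
});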

Database server:

FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/

As we can see, we’re going to generate the database structure on the first run:

CREATE SCHEMA docker;

CREATE TABLE docker.clicks (
clickCount numeric(8) NOT NULL
);

ALTER TABLE docker.clicks
OWNER TO username;

INSERT INTO docker.clicks(clickCount) values (0);

For the RabbitMQ server we’re going to use the official Docker image, so we don’t need to create a Dockerfile.

We’ve also changed our Nginx configuration a little. We want to use Nginx to serve the backend and also the socket.io server, because we don’t want to expose different ports to the internet.

server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass "http://io:9999";
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

To avoid CORS issues we can also use an SCP destination (the localneo proxy in this example) to serve socket.io as well. So we need to:

  • change our neo-app.json file:

    "routes": [
        ...
        {
          "path": "/socket.io",
          "target": {
            "type": "destination",
            "name": "SOCKETIO"
          },
          "description": "SOCKETIO"
        }
      ],

    And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports and without mapping the filesystem to our local machine (the mapping is useful when we’re developing):

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        networks:
        - app-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen
        networks:
        - app-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "80:8000"
        restart: always
        volumes:
        - ./src/frontend:/code/src
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - app-network
      io:
        image: gonzalo123.io
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-io
        networks:
        - app-network
      pg:
        image: gonzalo123.pg
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-pg
        environment:
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_DB: ${POSTGRES_DB}
          PGDATA: /var/lib/postgresql/data/pgdata
        networks:
        - app-network
      rabbit:
        image: rabbitmq:3-management
        restart: always
        environment:
          RABBITMQ_ERLANG_COOKIE:
          RABBITMQ_DEFAULT_VHOST: /
          RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
          RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
        networks:
        - app-network
    networks:
      app-network:
        driver: bridge
    

    And that’s all. The full project is available on my GitHub account.

    Working with SAPUI5 locally (part 2). Now with docker

    In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do it using docker to set up our environment.

    I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

    The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image:

    FROM node:alpine
    
    EXPOSE 8000
    
    WORKDIR /code/src
    COPY ./frontend .
    RUN npm install
    ENTRYPOINT ["npm", "run", "serve"]
    

    In docker-compose we only need to map the port that we’ll expose on our host, and since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

    ...
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        volumes:
        - ./src/frontend:/code/src
        networks:
        - api-network
    

    The backend is a PHP application. We can set up a PHP application using different architectures. In this project we’ll use nginx and PHP-FPM.

    For nginx we’ll use the following Dockerfile:

    FROM  nginx:1.13-alpine
    
    EXPOSE 80
    
    COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
    COPY ./backend /code/src
    

    And for the PHP host the following one (with xdebug to enable debugging and breakpoints):

    FROM php:7.1-fpm
    
    ENV PHP_XDEBUG_REMOTE_ENABLE 1
    
    RUN apt-get update && apt-get install -my \
        git \
        libghc-zlib-dev && \
        apt-get clean
    
    RUN apt-get install -y libpq-dev \
        && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
        && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip
    
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    
    RUN composer global require "laravel/lumen-installer"
    ENV PATH ~/.composer/vendor/bin:$PATH
    
    COPY ./backend /code/src
    

    And basically that’s all. Here’s the full docker-compose file:

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        ports:
        - "8080:80"
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        volumes:
        - ./src/backend:/code/src
        - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
        networks:
        - api-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen-dev
        environment:
          XDEBUG_CONFIG: remote_host=${MY_IP}
        volumes:
        - ./src/backend:/code/src
        networks:
        - api-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - api-network
    
    networks:
      api-network:
        driver: bridge
    

    If you want to use this project you only need to:

    • clone the repo from GitHub
    • run ./ui5 up

    With this configuration we’re exposing two ports: 8080 for the backend and 8000 for the frontend. We’re also mapping our local filesystem into the containers to avoid regenerating the containers each time we change the code.

    We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it); we’ll use SCP to host the frontend.

    This configuration is just an example without filesystem mapping, without xdebug in the backend and without exposing the backend externally (only the frontend can use it):

    version: '3.4'
    
    services:
      nginx:
        image: gonzalo123.nginx
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-nginx
        networks:
        - api-network
      api:
        image: gonzalo123.api
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-lumen
        networks:
        - api-network
      ui5:
        image: gonzalo123.ui5
        ports:
        - "8000:8000"
        restart: always
        build:
          context: ./src
          dockerfile: .docker/Dockerfile-ui5
        networks:
        - api-network
    
    networks:
      api-network:
        driver: bridge
    

    And that’s all. You can see all the source code in my GitHub account.

    Happy logins. Only the happy user will pass

    Login forms are boring. In this example we’re going to create a special login form, only for happy users. Happiness is something complicated, but at least a smile is easier to obtain, and everything is better with a smile :). Our login form will only appear if the user smiles. Let’s start.

    I must admit that this project is just an excuse to play with different technologies I wanted to try. Weeks ago I discovered a library called face_classification. With this library I can perform emotion classification on a picture. The idea is simple: we create a RabbitMQ RPC server script that answers with the emotion of the face within a picture. Then we grab one frame from the webcam video stream (with HTML5) and send it over a websocket to a socket.io server. This websocket server (node) asks the RabbitMQ RPC server for the emotion and sends back to the browser the emotion and the original picture with a rectangle drawn over the face.
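
    The node bridge itself is not listed below, so here is a minimal sketch of what it could look like. My assumptions: amqplib for the RPC call against the image.check queue used by the Python worker, express to serve the static frontend and the classic correlationId/replyTo RPC pattern; the real project code may differ.

    const http = require('http');
    const express = require('express');
    const amqp = require('amqplib');
    const { randomUUID } = require('crypto');

    const app = express();
    app.use(express.static('public'));   // index.html, css/app.css, js/app.js
    const httpServer = http.createServer(app);
    const io = require('socket.io')(httpServer);

    async function main() {
      const connection = await amqp.connect('amqp://localhost');
      const channel = await connection.createChannel();
      const { queue: replyQueue } = await channel.assertQueue('', { exclusive: true });

      const pending = new Map();
      channel.consume(replyQueue, msg => {
        const resolve = pending.get(msg.properties.correlationId);
        if (resolve) {
          pending.delete(msg.properties.correlationId);
          resolve(msg.content.toString());
        }
      }, { noAck: true });

      // send one frame to the Python RPC server and wait for the JSON answer
      const checkEmotion = base64Frame => new Promise(resolve => {
        const correlationId = randomUUID();
        pending.set(correlationId, resolve);
        channel.sendToQueue('image.check', Buffer.from(base64Frame), {
          correlationId: correlationId,
          replyTo: replyQueue
        });
      });

      io.on('connection', socket => {
        socket.on('img', async dataUrl => {
          // strip the data-url prefix so the worker receives plain base64
          const frame = dataUrl.replace(/^data:image\/jpeg;base64,/, '');
          socket.emit('response', await checkEmotion(frame));
        });
      });

      httpServer.listen(3000);
    }

    main();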

    Frontend

    Since we’re going to use socket.io for the websockets, we’ll use the same script to serve the frontend (the login and the HTML5 video capture):

    <!doctype html>
    <html>
    <head>
        <title>Happy login</title>
        <link rel="stylesheet" href="css/app.css">
    </head>
    <body>
    
    <div id="login-page" class="login-page">
        <div class="form">
            <h1 id="nonHappy" style="display: block;">Only the happy user will pass</h1>
            <form id="happyForm" class="login-form" style="display: none" onsubmit="return false;">
                <input id="user" type="text" placeholder="username"/>
                <input id="pass" type="password" placeholder="password"/>
                <button id="login">login</button>
                <p></p>
                <img id="smile" width="426" height="320" src=""/>
            </form>
            <div id="video">
                <video style="display:none;"></video>
                <canvas id="canvas" style="display:none"></canvas>
                <canvas id="canvas-face" width="426" height="320"></canvas>
            </div>
        </div>
    </div>
    
    <div id="private" style="display: none;">
        <h1>Private page</h1>
    </div>
    
    <script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
    <script src="https://unpkg.com/sweetalert/dist/sweetalert.min.js"></script>
    <script type="text/javascript" src="/socket.io/socket.io.js"></script>
    <script type="text/javascript" src="/js/app.js"></script>
    </body>
    </html>
    

    Here we connect to the websocket and emit the webcam frames to the server. We’ll also be listening to an event called ‘response’, through which the server notifies us when an emotion has been detected.

    let socket = io.connect(location.origin),
        img = new Image(),
        canvasFace = document.getElementById('canvas-face'),
        context = canvasFace.getContext('2d'),
        canvas = document.getElementById('canvas'),
        width = 640,
        height = 480,
        delay = 1000,
        jpgQuality = 0.6,
        isHappy = false;
    
    socket.on('response', function (r) {
        let data = JSON.parse(r);
        if (data.length > 0 && data[0].hasOwnProperty('emotion')) {
            if (isHappy === false && data[0]['emotion'] === 'happy') {
                isHappy = true;
                swal({
                    title: "Good!",
                    text: "All is better with one smile!",
                    icon: "success",
                    buttons: false,
                    timer: 2000,
                });
    
                $('#nonHappy').hide();
                $('#video').hide();
                $('#happyForm').show();
                $('#smile')[0].src = 'data:image/png;base64,' + data[0].image;
            }
    
            img.onload = function () {
                context.drawImage(this, 0, 0, canvasFace.width, canvasFace.height);
            };
    
            img.src = 'data:image/png;base64,' + data[0].image;
        }
    });
    
    navigator.getMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia);
    
    navigator.getMedia({video: true, audio: false}, (mediaStream) => {
        let video = document.getElementsByTagName('video')[0];
        video.src = window.URL.createObjectURL(mediaStream);
        video.play();
        setInterval(((video) => {
            return function () {
                let context = canvas.getContext('2d');
                canvas.width = width;
                canvas.height = height;
                context.drawImage(video, 0, 0, width, height);
                socket.emit('img', canvas.toDataURL('image/jpeg', jpgQuality));
            }
        })(video), delay)
    }, error => console.log(error));
    
    $(() => {
        $('#login').click(() => {
            $('#login-page').hide();
            $('#private').show();
        })
    });
    

    Backend
    Finally we’ll work on the backend. Basically I’ve checked the examples in the face_classification project and tuned them a bit according to my needs.

    from rabbit import builder
    import logging
    import numpy as np
    from keras.models import load_model
    from utils.datasets import get_labels
    from utils.inference import detect_faces
    from utils.inference import draw_text
    from utils.inference import draw_bounding_box
    from utils.inference import apply_offsets
    from utils.inference import load_detection_model
    from utils.inference import load_image
    from utils.preprocessor import preprocess_input
    import cv2
    import json
    import base64
    
    detection_model_path = 'trained_models/detection_models/haarcascade_frontalface_default.xml'
    emotion_model_path = 'trained_models/emotion_models/fer2013_mini_XCEPTION.102-0.66.hdf5'
    emotion_labels = get_labels('fer2013')
    font = cv2.FONT_HERSHEY_SIMPLEX
    
    # hyper-parameters for bounding boxes shape
    emotion_offsets = (20, 40)
    
    # loading models
    face_detection = load_detection_model(detection_model_path)
    emotion_classifier = load_model(emotion_model_path, compile=False)
    
    # getting input model shapes for inference
    emotion_target_size = emotion_classifier.input_shape[1:3]
    
    
    def format_response(response):
        decoded_json = json.loads(response)
        return "Hello {}".format(decoded_json['name'])
    
    
    def on_data(data):
        f = open('current.jpg', 'wb')
        f.write(base64.decodebytes(data))
        f.close()
        image_path = "current.jpg"
    
        out = []
        # loading images
        rgb_image = load_image(image_path, grayscale=False)
        gray_image = load_image(image_path, grayscale=True)
        gray_image = np.squeeze(gray_image)
        gray_image = gray_image.astype('uint8')
    
        faces = detect_faces(face_detection, gray_image)
        for face_coordinates in faces:
            x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)
            gray_face = gray_image[y1:y2, x1:x2]
    
            try:
                gray_face = cv2.resize(gray_face, (emotion_target_size))
            except:
                continue
    
            gray_face = preprocess_input(gray_face, True)
            gray_face = np.expand_dims(gray_face, 0)
            gray_face = np.expand_dims(gray_face, -1)
            emotion_label_arg = np.argmax(emotion_classifier.predict(gray_face))
            emotion_text = emotion_labels[emotion_label_arg]
            color = (0, 0, 255)
    
            draw_bounding_box(face_coordinates, rgb_image, color)
            draw_text(face_coordinates, rgb_image, emotion_text, color, 0, -50, 1, 2)
            bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
    
            cv2.imwrite('predicted.png', bgr_image)
            data = open('predicted.png', 'rb').read()
            encoded = base64.encodebytes(data).decode('utf-8')
            out.append({
                'image': encoded,
                'emotion': emotion_text,
            })
    
        return out
    
    logging.basicConfig(level=logging.WARN)
    rpc = builder.rpc("image.check", {'host': 'localhost', 'port': 5672})
    rpc.server(on_data)
    

    Here you can see the working prototype in action.

    Maybe we can do the same with other tools, and even more simply, but as I said before this example is just an excuse to play with these technologies:

    • Send webcam frames via websockets
    • Connect a web application to a Python application via RabbitMQ RPC
    • Play with the face classification script

    Please don’t use this script in production. It’s just a proof of concept. With smiles, but a proof of concept 🙂

    You can see the project on my GitHub account.

    Pomodoro with ESP32. One “The Melee – Side by side” project

    Last weekend there was a great event called The Melee – Side by side (Many thanks to @ojoven and @diversius).

    The event was a kind of hackathon where a group of people meets for one day to share side projects and work together (yes, we also had lunch and beers :). The format of the event is just a copy of the event that our colleagues from Bilbao call “El Comité”.

    @ibaiimaz spoke about a project to create a collaborative pomodoro where the people of one team can share their status and see the status of the rest of the team. When I heard pomodoro and status I immediately thought of a servo moving a flag and some LEDs turning on and off. We had a project. @penniath and @tatai also joined us. We also had a team.

    We had a project and we also had a deadline: we had to show a working prototype at the end of the day. That meant we didn’t have too much time. First we decided on the mockup of the project, reducing the initial (more ambitious) scope to fit within our time slot. We discussed intensely for 10 minutes and finally described an ultra-detailed blueprint. This is the full blueprint of the project:

    It was time to start working.

    @penniath and @tatai worked on the backend. It is responsible for the pomodoro timers, listening to MQTT events and providing an API for the frontend. The backend also has to provide a WebSockets interface to push real-time events to the frontend. They decided to use node and socket.io for the WebSockets. You can see the source code here.

    @ibaiimaz started with the frontend. He decided to create an Angular web application listening to socket.io events to show the status of the pomodoro. You can see the source code here.

    Finally, I worked on the hardware. I created a prototype with one ESP32, two RGB LEDs, one button, one servo and a couple of resistors.

    This is the source code:

    #include <WiFi.h>
    #include <PubSubClient.h>
    
    int redPin_g = 19;
    int greenPin_g = 17;
    int bluePin_g = 18;
    
    int redPin_i = 21;
    int greenPin_i = 2;
    int bluePin_i = 4;
    
    #define SERVO_PIN 16
    
    const int buttonPin = 15;
    int buttonState = 0;
    
    int channel = 1;
    int hz = 50;
    int depth = 16;
    
    const char* ssid = "SSID";
    const char* password = "password";
    const char* server = "192.168.1.105";
    const char* topic = "/pomodoro/+";
    const char* clientName = "com.gonzalo123.esp32";
    
    WiFiClient wifiClient;
    PubSubClient client(wifiClient);
    
    void wifiConnect() {
      Serial.print("Connecting to ");
      Serial.println(ssid);
    
      WiFi.begin(ssid, password);
    
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print("*");
      }
    
      Serial.print("WiFi connected: ");
      Serial.println(WiFi.localIP());
    }
    
    void mqttReConnect() {
      while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        if (client.connect(clientName)) {
          Serial.println("connected");
          client.subscribe(topic);
        } else {
          Serial.print("failed, rc=");
          Serial.print(client.state());
          Serial.println(" try again in 5 seconds");
          delay(5000);
        }
      }
    }
    
    void callback(char* topic, byte* payload, unsigned int length) {
      Serial.print("Message arrived [");
      Serial.print(topic);
    
      String data;
      for (int i = 0; i < length; i++) {
        data += (char)payload[i];
      }
    
      int value = data.toInt();
    
      if (strcmp(topic, "/pomodoro/gonzalo") == 0) {
        Serial.print("[gonzalo]");
        switch (value) {
          case 1:
            ledcWrite(1, 3400);
            setColor_g(0, 255, 0);
            break;
          case 2:
            setColor_g(255, 0, 0);
            break;
          case 3:
            ledcWrite(1, 6400);
            setColor_g(0, 0, 255);
            break;
        }
      } else {
        Serial.print("[ibai]");
        switch (value) {
          case 1:
            setColor_i(0, 255, 0);
            break;
          case 2:
            setColor_i(255, 0, 0);
            break;
          case 3:
            setColor_i(0, 0, 255);  // blue
            break;
        }
      }
    
      Serial.print("] value:");
      Serial.println(data);
    }
    
    void setup()
    {
      Serial.begin(115200);
    
      pinMode(buttonPin, INPUT_PULLUP);
      pinMode(redPin_g, OUTPUT);
      pinMode(greenPin_g, OUTPUT);
      pinMode(bluePin_g, OUTPUT);
    
      pinMode(redPin_i, OUTPUT);
      pinMode(greenPin_i, OUTPUT);
      pinMode(bluePin_i, OUTPUT);
    
      ledcSetup(channel, hz, depth);
      ledcAttachPin(SERVO_PIN, channel);
      wifiConnect();
      client.setServer(server, 1883);
      client.setCallback(callback);
    
      delay(1500);
    }
    
    void mqttEmit(String topic, String value)
    {
      client.publish((char*) topic.c_str(), (char*) value.c_str());
    }
    
    void loop()
    {
      if (!client.connected()) {
        mqttReConnect();
      }
    
      client.loop();
    
      buttonState = digitalRead(buttonPin);
      if (buttonState == HIGH) {
        mqttEmit("/start/gonzalo", (String) "3");
      }
    
      delay(200);
    }
    
    void setColor_i(int red, int green, int blue)
    {
      digitalWrite(redPin_i, red);
      digitalWrite(greenPin_i, green);
      digitalWrite(bluePin_i, blue);
    }
    
    void setColor_g(int red, int green, int blue)
    {
      digitalWrite(redPin_g, red);
      digitalWrite(greenPin_g, green);
      digitalWrite(bluePin_g, blue);
    }
    

    The MQTT server (a Mosquitto server) was initially running on my laptop, but since I also had a Raspberry Pi Zero in my bag, we decided to use the Pi Zero as the server and run the Mosquitto MQTT server on Raspbian. Everything is better with a Raspberry Pi. @tatai helped me set up the server.

    Here you can see the prototype in action

    That’s the kind of side project that I normally build alone, but it’s definitely more fun to do it with other colleagues, even if I need to wake up early on a Saturday morning.

    The ESP32 source code is here.

    Playing with Ionic, Lumen, Firebase, Google maps, Raspberry Pi and background geolocation

    I want to do a simple pet project. The idea is to build a mobile application that will track my GPS location and send this information to a Firebase database. I’ve never played with Firebase and I want to learn a little bit about it. With this information I will build a simple web application hosted on my Raspberry Pi. This web application will show a Google map with my last location. I will put this web application on my TV so anyone in my house can see where I am at any time.

    That’s the idea. I want an MVP. First, the mobile application: I will use the Ionic framework, I’m a big fan of Ionic.

    The mobile application is very simple. It only has a toggle to activate-deactivate the background geolocation (sometimes I don’t want to be tracked :).

    <ion-header>
        <ion-navbar>
            <ion-title>
                Ionic Blank
            </ion-title>
        </ion-navbar>
    </ion-header>
    
    <ion-header>
        <ion-toolbar [color]="toolbarColor">
            <ion-title>{{title}}</ion-title>
            <ion-buttons end>
                <ion-toggle color="light"
                            checked="{{isBgEnabled}}"
                            (ionChange)="changeWorkingStatus($event)">
                </ion-toggle>
            </ion-buttons>
        </ion-toolbar>
    </ion-header>
    
    <ion-content padding>
    </ion-content>
    

    And the controller:

    import {Component} from '@angular/core';
    import {Platform} from 'ionic-angular';
    import {LocationTracker} from "../../providers/location-tracker/location-tracker";
    
    @Component({
        selector: 'page-home',
        templateUrl: 'home.html'
    })
    export class HomePage {
        public status: string = localStorage.getItem('status') || "-";
        public title: string = "";
        public isBgEnabled: boolean = false;
        public toolbarColor: string;
    
        constructor(platform: Platform,
                    public locationTracker: LocationTracker) {
    
            platform.ready().then(() => {
    
                    if (localStorage.getItem('isBgEnabled') === 'on') {
                        this.isBgEnabled = true;
                        this.title = "Working ...";
                        this.toolbarColor = 'secondary';
                    } else {
                        this.isBgEnabled = false;
                        this.title = "Idle";
                        this.toolbarColor = 'light';
                    }
            });
        }
    
        public changeWorkingStatus(event) {
            if (event.checked) {
                localStorage.setItem('isBgEnabled', "on");
                this.title = "Working ...";
                this.toolbarColor = 'secondary';
                this.locationTracker.startTracking();
            } else {
                localStorage.setItem('isBgEnabled', "off");
                this.title = "Idle";
                this.toolbarColor = 'light';
                this.locationTracker.stopTracking();
            }
        }
    }
    

    As you can see, the toggle button activates and deactivates the background geolocation and also changes the background color of the toolbar.

    For background geolocation I will use a Cordova plugin, available as an Ionic Native plugin.

    Here you can read a very nice article explaining how to use the plugin with Ionic. As the article explains, I’ve created a provider:

    import {Injectable, NgZone} from '@angular/core';
    import {BackgroundGeolocation} from '@ionic-native/background-geolocation';
    import {CONF} from "../conf/conf";
    
    @Injectable()
    export class LocationTracker {
        constructor(public zone: NgZone,
                    private backgroundGeolocation: BackgroundGeolocation) {
        }
    
        showAppSettings() {
            return this.backgroundGeolocation.showAppSettings();
        }
    
        startTracking() {
            this.startBackgroundGeolocation();
        }
    
        stopTracking() {
            this.backgroundGeolocation.stop();
        }
    
        private startBackgroundGeolocation() {
            this.backgroundGeolocation.configure(CONF.BG_GPS);
            this.backgroundGeolocation.start();
        }
    }
    

    The idea of the plugin is to send a POST request to a URL with the GPS data in the body of the request, so I will create a web API server to handle those requests. I will use my Raspberry Pi 3 to serve the application: a simple PHP/Lumen application that will handle the POST requests from the mobile application and also serve an HTML page with the map (using Google Maps).

    Mobile requests will be authenticated with a token in the header, and the web application will use basic HTTP authentication. Because of that I will create two middlewares to handle the different ways to authenticate.

    <?php
    require __DIR__ . '/../vendor/autoload.php';
    
    use App\Http\Middleware;
    use App\Model\Gps;
    use Illuminate\Contracts\Debug\ExceptionHandler;
    use Illuminate\Http\Request;
    use Laravel\Lumen\Application;
    use Laravel\Lumen\Routing\Router;
    
    (new Dotenv\Dotenv(__DIR__ . '/../env/'))->load();
    
    $app = new Application(__DIR__ . '/..');
    $app->singleton(ExceptionHandler::class, App\Exceptions\Handler::class);
    $app->routeMiddleware([
        'auth'  => Middleware\AuthMiddleware::class,
        'basic' => Middleware\BasicAuthMiddleware::class,
    ]);
    
    $app->router->group(['middleware' => 'auth', 'prefix' => '/locator'], function (Router $route) {
        $route->post('/gps', function (Gps $gps, Request $request) {
            $requestData = $request->all();
            foreach ($requestData as $poi) {
                $gps->persistsData([
                    'date'             => date('YmdHis'),
                    'serverTime'       => time(),
                    'time'             => $poi['time'],
                    'latitude'         => $poi['latitude'],
                    'longitude'        => $poi['longitude'],
                    'accuracy'         => $poi['accuracy'],
                    'speed'            => $poi['speed'],
                    'altitude'         => $poi['altitude'],
                    'locationProvider' => $poi['locationProvider'],
                ]);
            }
    
            return 'OK';
        });
    });
    
    return $app;
    

    As we can see, the route /locator/gps will handle the POST requests. I’ve created a model to persist the GPS data in the Firebase database:

    <?php
    
    namespace App\Model;
    
    use Kreait\Firebase\Factory;
    use Kreait\Firebase\ServiceAccount;
    
    class Gps
    {
        private $database;
    
        private const FIREBASE_CONF = __DIR__ . '/../../conf/firebase.json';
    
        public function __construct()
        {
            $serviceAccount = ServiceAccount::fromJsonFile(self::FIREBASE_CONF);
            $firebase       = (new Factory)
                ->withServiceAccount($serviceAccount)
                ->create();
    
            $this->database = $firebase->getDatabase();
        }
    
        public function getLast()
        {
            $value = $this->database->getReference('gps/poi')
                ->orderByKey()
                ->limitToLast(1)
                ->getValue();
    
            $out                 = array_values($value)[0];
            $out['formatedDate'] = \DateTimeImmutable::createFromFormat('YmdHis', $out['date'])->format('d/m/Y H:i:s');
    
            return $out;
        }
    
        public function persistsData(array $data)
        {
            return $this->database
                ->getReference('gps/poi')
                ->push($data);
        }
    }
    

    The project is almost finished. Now we only need to create the Google Maps page.

    That’s the API:

    <?php
    $app->router->group(['middleware' => 'basic', 'prefix' => '/map'], function (Router $route) {
        $route->get('/', function (Gps $gps) {
            return view("index", $gps->getLast());
        });
    
        $route->get('/last', function (Gps $gps) {
            return $gps->getLast();
        });
    });
    

    And the HTML:

    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="initial-scale=1.0, user-scalable=no">
        <meta charset="utf-8">
        <title>Locator</title>
        <style>
            #map {
                height: 100%;
            }
    
            html, body {
                height: 100%;
                margin: 0;
                padding: 0;
            }
        </style>
    </head>
    <body>
    <div id="map"></div>
    <script>
    
        var lastDate;
        var DELAY = 60;
    
        function drawMap(lat, long, text) {
            var CENTER = {lat: lat, lng: long};
            var contentString = '<div id="content">' + text + '</div>';
            var infowindow = new google.maps.InfoWindow({
                content: contentString
            });
            var map = new google.maps.Map(document.getElementById('map'), {
                zoom: 11,
                center: CENTER,
                disableDefaultUI: true
            });
    
            var marker = new google.maps.Marker({
                position: CENTER,
                map: map
            });
            var trafficLayer = new google.maps.TrafficLayer();
    
            trafficLayer.setMap(map);
            infowindow.open(map, marker);
        }
    
        function initMap() {
            lastDate = '{{ $formatedDate }}';
            drawMap({{ $latitude }}, {{ $longitude }}, lastDate);
        }
    
        setInterval(function () {
            fetch('/map/last', {credentials: "same-origin"}).then(function (response) {
                response.json().then(function (data) {
                    if (lastDate !== data.formatedDate) {
                        drawMap(data.latitude, data.longitude, data.formatedDate);
                    }
                });
            });
        }, DELAY * 1000);
    </script>
    <script async defer src="https://maps.googleapis.com/maps/api/js?key=my_google_maps_key&callback=initMap">
    </script>
    </body>
    </html>
    

    And that’s all, just enough for a weekend project. The source code is available in my GitHub account.

    Authenticate OpenUI5 applications and Lumen backends with Amazon Cognito and JWT

    Today I want to create a UI5/OpenUI5 boilerplate that plays with Lumen backends. Simple, isn’t it? We only need to create a Lumen API server and connect our OpenUI5 application to it. But today I also want to add a login: the typical user/password form. I don’t want to build it from scratch (a user database, an OAuth provider or something like that). Since these days I’m involved in Amazon AWS projects, I want to try Amazon Cognito.

    Cognito has a great JavaScript SDK. In fact, we can handle the whole authentication flow (create users, validate passwords, change passwords, multi-factor authentication, …) with Cognito. To set up this project I first did the following in the Amazon AWS Cognito console: create a user pool with the required attributes (email only in this example), without MFA, and allowing only administrators to create users. I’ve also created an app client inside this pool, so I’ve got a UserPoolId and a ClientId.

    Let’s start with the OpenUI5 application. I’ve created a small application with one route called “home”. To handle the login process I will work in the Component.js init function. The idea is to check the Cognito session: if there’s an active one (that means a JSON Web Token stored in local storage) we’ll display the “home” route, and if there isn’t we’ll show the “login” one.

    sap.ui.define([
            "sap/ui/core/UIComponent",
            "sap/ui/Device",
            "app/model/models",
            "app/model/cognito"
        ], function (UIComponent, Device, models, cognito) {
            "use strict";
    
            return UIComponent.extend("app.Component", {
    
                metadata: {
                    manifest: "json"
                },
    
                init: function () {
                    UIComponent.prototype.init.apply(this, arguments);
                    this.setModel(models.createDeviceModel(), "device");
                    this.getRouter().initialize();
    
                    var targets = this.getTargets();
                    cognito.hasSession(function (err) {
                        if (err) {
                            targets.display("login");
                            return;
                        }
                        targets.display("home");
                    });
                },
    
                /* *** */
            });
        }
    );
    

    To encapsulate the Cognito operations I’ve created a model called cognito.js. It’s not perfect, but it allows me to abstract the Cognito details away from the OpenUI5 application.

    sap.ui.define([
            "app/conf/env"
        ], function (env) {
            "use strict";
    
            AWSCognito.config.region = env.region;
    
            var poolData = {
                UserPoolId: env.UserPoolId,
                ClientId: env.ClientId
            };
    
            var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);
            var jwt;
    
            var cognito = {
                getJwt: function () {
                    return jwt;
                },
    
                hasSession: function (cbk) {
                    var cognitoUser = cognito.getCurrentUser();
                    if (cognitoUser != null) {
                        cognitoUser.getSession(function (err, session) {
                            if (err) {
                                cbk(err);
                                return;
                            }
                            if (session.isValid()) {
                                jwt = session.idToken.getJwtToken();
                                cbk(false, session)
                            } else {
                                cbk(true);
                            }
                        });
                    } else {
                        cbk(true);
                    }
                },
    
                getCurrentUser: function () {
                    return userPool.getCurrentUser();
                },
    
                signOut: function () {
                    var currentUser = cognito.getCurrentUser();
                    if (currentUser) {
                        currentUser.signOut()
                    }
                },
    
                getUsername: function () {
                    var currentUser = cognito.getCurrentUser();
                    return (currentUser) ? currentUser.username : undefined;
                },
    
                getUserData: function (user) {
                    return {
                        Username: user,
                        Pool: userPool
                    };
                },
    
                getCognitoUser: function (user) {
                    return new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(cognito.getUserData(user));
                },
    
                authenticateUser: function (user, pass, cbk) {
                    var authenticationData = {
                        Username: user,
                        Password: pass
                    };
    
                    var authenticationDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails(authenticationData);
                    var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(cognito.getUserData(user));
    
                    cognitoUser.authenticateUser(authenticationDetails, cbk);
    
                    return cognitoUser;
                }
            };
    
            return cognito;
        }
    );
    

    The login route has the following XML view:

    <core:View
            xmlns:core="sap.ui.core"
            xmlns:f="sap.ui.layout.form"
            xmlns="sap.m"
            controllerName="app.controller.Login"
    >
        <Image class="bg"></Image>
        <VBox class="sapUiSmallMargin loginForm">
            <f:SimpleForm visible="{= ${/flow} === 'login' }">
                <f:toolbar>
                    <Toolbar>
                        <Title text="{i18n>Login_Title}" level="H4" titleStyle="H4"/>
                    </Toolbar>
                </f:toolbar>
                <f:content>
                    <Label text="{i18n>Login_user}"/>
                    <Input placeholder="{i18n>Login_userPlaceholder}" value="{/user}"/>
                    <Label text="{i18n>Login_pass}"/>
                    <Input type="Password" placeholder="{i18n>Login_passPlaceholder}" value="{/pass}"/>
                    <Button type="Accept" text="{i18n>OK}" press="loginPressHandle"/>
                </f:content>
            </f:SimpleForm>
            
            <f:SimpleForm visible="{= ${/flow} === 'PasswordReset' }">
                <f:toolbar>
                    <Toolbar>
                        <Title text="{i18n>Login_PasswordReset}" level="H4" titleStyle="H4"/>
                    </Toolbar>
                </f:toolbar>
                <f:content>
                    <Label text="{i18n>Login_verificationCode}"/>
                    <Input type="Number" placeholder="{i18n>Login_verificationCodePlaceholder}" value="{/verificationCode}"/>
                    <Label text="{i18n>Login_newpass}"/>
                    <Input type="Password" placeholder="{i18n>Login_newpassPlaceholder}" value="{/newPass}"/>
                    <Button type="Accept" text="{i18n>OK}" press="newPassVerificationPressHandle"/>
                </f:content>
            </f:SimpleForm>
            
            <f:SimpleForm visible="{= ${/flow} === 'newPasswordRequired' }">
                <f:toolbar>
                    <Toolbar>
                        <Title text="{i18n>Login_PasswordReset}" level="H4" titleStyle="H4"/>
                    </Toolbar>
                </f:toolbar>
                <f:content>
                    <Label text="{i18n>Login_newpass}"/>
                    <Input type="Password" placeholder="{i18n>Login_newpassPlaceholder}" value="{/newPass}"/>
                    <Button type="Accept" text="{i18n>OK}" press="newPassPressHandle"/>
                </f:content>
            </f:SimpleForm>
        </VBox>
    </core:View>
    

    It has three different stages: “login”, “PasswordReset” and “newPasswordRequired”.
    “login” is the main one. In this stage the user can enter their credentials. If the credentials are OK, we’ll display the home route.
    The first time a user logs in with the password provided by the administrator, Cognito forces a password change, so we’ll show the “newPasswordRequired” flow. The “PasswordReset” flow is triggered when Cognito throws a PasswordResetRequiredException; it asks for the verification code and the new password. I’m not going to explain each step. We developers prefer code to prose. That’s the code:

    sap.ui.define([
            "app/controller/BaseController",
            "sap/ui/model/json/JSONModel",
            "sap/m/MessageToast",
            "app/model/cognito"
        ], function (BaseController, JSONModel, MessageToast, cognito) {
            "use strict";
    
            var cognitoUser;
            return BaseController.extend("app.controller.Login", {
                model: {
                    user: "",
                    pass: "",
                    flow: "login",
                    verificationCode: undefined,
                    newPass: undefined
                },
    
                onInit: function () {
                    this.getView().setModel(new JSONModel(this.model));
                },
    
                newPassPressHandle: function () {
                    var that = this;
                    var targets = this.getOwnerComponent().getTargets();
                    var attributesData = {};
                    sap.ui.core.BusyIndicator.show();
                    cognitoUser.completeNewPasswordChallenge(this.model.newPass, attributesData, {
                        onFailure: function (err) {
                            sap.ui.core.BusyIndicator.hide();
                            MessageToast.show(err.message);
                        },
                        onSuccess: function (data) {
                            sap.ui.core.BusyIndicator.hide();
                            that.getModel().setProperty("/flow", "login");
                            targets.display("home");
                        }
                    })
                },
    
                newPassVerificationPressHandle: function () {
                    var that = this;
                    var targets = this.getOwnerComponent().getTargets();
                    sap.ui.core.BusyIndicator.show();
                    cognito.getCognitoUser(this.model.user).confirmPassword(this.model.verificationCode, this.model.newPass, {
                        onFailure: function (err) {
                            sap.ui.core.BusyIndicator.hide();
                            MessageToast.show(err);
                        },
                        onSuccess: function (result) {
                            sap.ui.core.BusyIndicator.hide();
                            that.getModel().setProperty("/flow", "PasswordReset");
                            targets.display("home");
                        }
                    });
                },
    
                loginPressHandle: function () {
                    var that = this;
                    var targets = this.getOwnerComponent().getTargets();
                    sap.ui.core.BusyIndicator.show();
                    cognitoUser = cognito.authenticateUser(this.model.user, this.model.pass, {
                        onSuccess: function (result) {
                            sap.ui.core.BusyIndicator.hide();
                            targets.display("home");
                        },
    
                        onFailure: function (err) {
                            sap.ui.core.BusyIndicator.hide();
                            switch (err.code) {
                                case "PasswordResetRequiredException":
                                    that.getModel().setProperty("/flow", "PasswordReset");
                                    break;
                                default:
                                    MessageToast.show(err.message);
                            }
                        },
    
                        newPasswordRequired: function (userAttributes, requiredAttributes) {
                            sap.ui.core.BusyIndicator.hide();
                            that.getModel().setProperty("/flow", "newPasswordRequired");
                        }
                    });
                }
            });
        }
    );
    

    The home route is the main one. It assumes that there’s an active Cognito session.

    <mvc:View
            controllerName="app.controller.Home"
            xmlns="sap.m"
            xmlns:mvc="sap.ui.core.mvc"
            xmlns:semantic="sap.m.semantic">
        <semantic:FullscreenPage
                id="page"
                semanticRuleSet="Optimized"
                showNavButton="false"
                title="{i18n>loggedUser}: {/userName}">
            <semantic:content>
                <Panel width="auto" class="sapUiResponsiveMargin" accessibleRole="Region">
                    <headerToolbar>
                        <Toolbar height="3rem">
                            <Title text="Title"/>
                        </Toolbar>
                    </headerToolbar>
                    <content>
                        <Text text="Lorem ipsum dolor st amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat"/>
                        <Button text="{i18n>Hello}" icon="sap-icon://hello-world" press="helloPress"/>
                    </content>
                </Panel>
            </semantic:content>
            <semantic:customFooterContent>
                <Button text="{i18n>LogOff}" icon="sap-icon://visits" press="onLogOffPress"/>
            </semantic:customFooterContent>
        </semantic:FullscreenPage>
    </mvc:View>
    

    It shows the Cognito login name. It also has a simple log-off button and a button that calls the backend.

    sap.ui.define([
            "app/controller/BaseController",
            "sap/ui/model/json/JSONModel",
            "sap/m/MessageToast",
            "app/model/cognito",
            "app/model/api"
        ], function (BaseController, JSONModel, MessageToast, cognito, api) {
            "use strict";
    
            return BaseController.extend("app.controller.Home", {
                model: {
                    userName: ""
                },
    
                onInit: function () {
                    this.model.userName = cognito.getUsername();
                    this.getView().setModel(new JSONModel(this.model));
                },
    
                helloPress: function () {
                    api.get("/api/hi", {}, function (data) {
                        MessageToast.show("Hello user " + data.userInfo.username + " (" + data.userInfo.email + ")");
                    });
                },
    
                onLogOffPress: function () {
                    cognito.signOut();
                    this.getOwnerComponent().getTargets().display("login");
                }
            });
        }
    );
    

    To handle AJAX requests I’ve created an api model. This model injects the JWT into every request.

    sap.ui.define([
        "sap/m/MessageToast",
        "app/model/cognito"
    ], function (MessageToast, cognito) {
        "use strict";
    
        var backend = "";
    
        return {
            get: function (uri, params, cb) {
                params = params || {};
                params._jwt = cognito.getJwt();
                sap.ui.core.BusyIndicator.show(1000);
    
                jQuery.ajax({
                    type: "GET",
                    contentType: "application/json",
                    data: params,
                    url: backend + uri,
                    cache: false,
                    dataType: "json",
                    async: true,
                    success: function (data, textStatus, jqXHR) {
                        sap.ui.core.BusyIndicator.hide();
                        cb(data);
                    },
                    error: function (data, textStatus, jqXHR) {
                        sap.ui.core.BusyIndicator.hide();
                        switch (data.status) {
                            case 403: // Forbidden
                                MessageToast.show('Auth error');
                                break;
                            default:
                                console.log('Error', data);
                        }
                    }
                });
            }
        };
    });
    

    That’s the frontend. Now it’s time for the backend. Our backend will be a simple Lumen server.

    <?php
    require __DIR__ . '/../vendor/autoload.php';

    use App\Http\Middleware;
    use Illuminate\Contracts\Debug\ExceptionHandler;
    use Laravel\Lumen\Application;
    
    (new Dotenv\Dotenv(__DIR__ . "/../env/"))->load();
    
    $app = new Application();
    
    $app->singleton(ExceptionHandler::class, App\Exceptions\Handler::class);
    
    $app->routeMiddleware([
        'cognito' => Middleware\AuthCognitoMiddleware::class,
    ]);
    
    $app->register(App\Providers\RedisServiceProvider::class);
    
    $app->group([
        'middleware' => 'cognito',
        'namespace'  => 'App\Http\Controllers',
    ], function (Application $app) {
        $app->get("/api/hi", "DemoController@hi");
    });
    
    $app->run();
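
    One thing that isn’t shown in the post is DemoController. A minimal sketch of what it might look like (the body is my assumption): it just returns the user info that the Cognito middleware below stores in the configuration, which is what the OpenUI5 Home controller expects (data.userInfo.username and data.userInfo.email).

    <?php

    namespace App\Http\Controllers;

    use Laravel\Lumen\Routing\Controller;

    class DemoController extends Controller
    {
        // Returns the Cognito user info that AuthCognitoMiddleware
        // placed into the configuration for the current request.
        public function hi()
        {
            return response()->json([
                'userInfo' => config('userInfo'),
            ]);
        }
    }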
    

    As you can see, I’ve created a middleware to handle the authentication. This middleware will check the JWT provided by the frontend. We will use the “spomky-labs/jose” library to validate the token.

    <?php

    namespace App\Http\Middleware;
    
    use Closure;
    use Illuminate\Http\Request;
    use Jose\Factory\JWKFactory;
    use Jose\Loader;
    use Monolog\Logger;
    use Symfony\Component\Cache\Adapter\RedisAdapter;
    
    class AuthCognitoMiddleware
    {
        public function handle(Request $request, Closure $next)
        {
            try {
                $payload = $this->getPayload($request->get('_jwt'), $this->getJwtWebKeys());
                config([
                    "userInfo" => [
                        'username' => $payload['cognito:username'],
                        'email'    => $payload['email'],
                    ],
                ]);
            } catch (\Exception $e) {
                $log = app(Logger::class);
                $log->alert($e->getMessage());
    
                return response('Token Error', 403);
            }
    
            return $next($request);
        }
    
        private function getJwtWebKeys()
        {
            $url      = sprintf(
                'https://cognito-idp.%s.amazonaws.com/%s/.well-known/jwks.json',
                getenv('AWS_REGION'),
                getenv('AWS_COGNITO_POOL')
            );
            $cacheKey = sprintf('JWKFactory-Content-%s', hash('sha512', $url));
    
            $cache = app(RedisAdapter::class);
    
            $item = $cache->getItem($cacheKey);
            if (!$item->isHit()) {
                $item->set($this->getContent($url));
                $item->expiresAfter((int)getenv("TTL_JWK_CACHE"));
                $cache->save($item);
            }
    
            return JWKFactory::createFromJKU($url, false, $cache);
        }
    
        private function getPayload($accessToken, $jwtWebKeys)
        {
            $loader  = new Loader();
            $jwt     = $loader->loadAndVerifySignatureUsingKeySet($accessToken, $jwtWebKeys, ['RS256']);
            $payload = $jwt->getPayload();
    
            return $payload;
        }
    
        private function getContent($url)
        {
            $ch = curl_init();
            curl_setopt_array($ch, [
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_URL            => $url,
                CURLOPT_SSL_VERIFYPEER => true,
                CURLOPT_SSL_VERIFYHOST => 2,
            ]);
            $content = curl_exec($ch);
            curl_close($ch);
    
            return $content;
        }
    }
    

    To validate the Cognito JWTs we need to obtain the JSON Web Keys (JWKS) from this URL:

    https://cognito-idp.my_aws_region.amazonaws.com/my_aws_cognito_pool_id/.well-known/jwks.json

    That means we would need to fetch this URL on every backend request, and that’s not cool. spomky-labs/jose allows us to use a cache to avoid fetching it again and again. This cache must be an instance of something that implements the Psr\Cache\CacheItemPoolInterface interface. I’m not going to create a cache from scratch. I’m not crazy. I’ll use symfony/cache here with a Redis adapter.
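
    The RedisServiceProvider registered in the Lumen bootstrap isn’t shown either. Here is a possible sketch, assuming a local Redis instance configured through a REDIS_DSN environment variable (the variable name and the cache namespace are placeholders of mine):

    <?php

    namespace App\Providers;

    use Illuminate\Support\ServiceProvider;
    use Symfony\Component\Cache\Adapter\RedisAdapter;

    class RedisServiceProvider extends ServiceProvider
    {
        // Binds a PSR-6 cache pool backed by Redis, so the middleware
        // can resolve it with app(RedisAdapter::class).
        public function register()
        {
            $this->app->singleton(RedisAdapter::class, function () {
                $client = RedisAdapter::createConnection(
                    getenv('REDIS_DSN') ?: 'redis://127.0.0.1:6379'
                );

                return new RedisAdapter($client, 'jwk');
            });
        }
    }

    With that in place, app(RedisAdapter::class) resolves to a Psr\Cache\CacheItemPoolInterface implementation, which is exactly what the middleware passes to spomky-labs/jose.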

    And basically that’s all. The full application is available in my GitHub account.