Category Archives: Python

Working with SAPUI5 locally (part 2). Now with docker

In the first part I spoke about how to set up a working environment to develop with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to do the same using docker to set up our environment.

I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.

The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base image

FROM node:alpine

EXPOSE 8000

WORKDIR /code/src
COPY ./frontend .
RUN npm install
ENTRYPOINT ["npm", "run", "serve"]

In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating the container each time we change the code.

...
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    volumes:
    - ./src/frontend:/code/src
    networks:
    - api-network

The backend is a PHP application. We can set up a PHP application using different architectures. In this project we’ll use nginx and PHP-FPM.

For nginx we’ll use the following Dockerfile:

FROM  nginx:1.13-alpine

EXPOSE 80

COPY ./.docker/web/site.conf /etc/nginx/conf.d/default.conf
COPY ./backend /code/src

And for the PHP container the following one (with xdebug enabled for debugging and breakpoints):

FROM php:7.1-fpm

ENV PHP_XDEBUG_REMOTE_ENABLE 1

RUN apt-get update && apt-get install -my \
    git \
    libghc-zlib-dev && \
    apt-get clean

RUN apt-get install -y libpq-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql opcache zip

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN composer global require "laravel/lumen-installer"
ENV PATH ~/.composer/vendor/bin:$PATH

COPY ./backend /code/src

And basically that’s all. Here’s the full docker-compose file:

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    ports:
    - "8080:80"
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    volumes:
    - ./src/backend:/code/src
    - ./src/.docker/web/site.conf:/etc/nginx/conf.d/default.conf
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen-dev
    environment:
      XDEBUG_CONFIG: remote_host=${MY_IP}
    volumes:
    - ./src/backend:/code/src
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge

If you want to use this project, you only need to:

  • clone the repo from github
  • run ./ui5 up

With this configuration we’re exposing two ports: 8000 for the frontend and 8080 for the backend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.
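
Once the stack is up we can quickly check that both containers answer. A minimal smoke test in Python (hypothetical; it only assumes the two port mappings above):

import requests

# Hypothetical smoke test: both ports are mapped in the docker-compose file
for name, url in [("frontend", "http://localhost:8000"), ("backend", "http://localhost:8080")]:
    r = requests.get(url)
    print("{}: HTTP {}".format(name, r.status_code))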

We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it). We’ll use SCP (SAP Cloud Platform) to host the frontend.

This configuration is just an example: without filesystem mapping, without xdebug in the backend, and without exposing the backend externally (only the frontend can use it)

version: '3.4'

services:
  nginx:
    image: gonzalo123.nginx
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-nginx
    networks:
    - api-network
  api:
    image: gonzalo123.api
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-lumen
    networks:
    - api-network
  ui5:
    image: gonzalo123.ui5
    ports:
    - "8000:8000"
    restart: always
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-ui5
    networks:
    - api-network

networks:
  api-network:
    driver: bridge

And that’s all. You can see all the source code in my github account


Playing with Grafana and weather APIs

Today I want to play with Grafana. Let me show you my idea:

I’ve got a Beewi temperature sensor that I’ve been playing with previously. Today I want to show the temperature within a Grafana dashboard.
I also want to play with the openweathermap API.

First I want to retrieve the temperature from the Beewi device. I’ve got a node script that connects to the device via Bluetooth using the noble library.
I only need to pass the sensor’s mac address and I obtain a JSON with the current temperature

#!/usr/bin/env node
var noble = require('noble');

var status = false;
var address = process.argv[2];

if (!address) {
    console.log('Usage "./reader.js <sensor mac address>"');
    process.exit();
}

function hexToInt(hex) {
    var num, maxVal;
    if (hex.length % 2 !== 0) {
        hex = "0" + hex;
    }
    num = parseInt(hex, 16);
    maxVal = Math.pow(2, hex.length / 2 * 8);
    if (num > maxVal / 2 - 1) {
        num = num - maxVal;
    }

    return num;
}

noble.on('stateChange', function(state) {
    status = (state === 'poweredOn');
});

noble.on('discover', function(peripheral) {
    if (peripheral.address == address) {
        var data = peripheral.advertisement.manufacturerData.toString('hex');
        out = {
            temperature: parseFloat(hexToInt(data.substr(10, 2)+data.substr(8, 2))/10).toFixed(1)
        };
        console.log(JSON.stringify(out))
        noble.stopScanning();
        process.exit();
    }
});

noble.on('scanStop', function() {
    noble.stopScanning();
});

setTimeout(function() {
    noble.stopScanning();
    noble.startScanning();
}, 2000);


setTimeout(function() {
    noble.stopScanning();
    process.exit()
}, 20000);
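
The interesting part is how the temperature is packed in the advertisement: two bytes, little-endian, in tenths of a degree. A quick Python illustration of the same decoding (the hex dump here is made up; only the byte layout matches the node script above):

def hex_to_int(hex_str):
    # Two's-complement decode, mirroring hexToInt() in the node script
    num = int(hex_str, 16)
    max_val = 2 ** (len(hex_str) * 4)
    return num - max_val if num > max_val / 2 - 1 else num

data = "05000000d7000000"  # hypothetical manufacturerData hex dump
raw = hex_to_int(data[10:12] + data[8:10])  # swap the two bytes (little-endian)
print(raw / 10.0)  # tenths of a degree -> 21.5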

And finally another script (this time a Python one) that collects data from the openweathermap API, reads the output of the node script and stores the information in an InfluxDB database.

from sense_hat import SenseHat
from influxdb import InfluxDBClient
import datetime
import logging
import requests
import json
from subprocess import check_output
import os
import sys
from dotenv import load_dotenv

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

sensor_mac_address = os.getenv("BEEWI_SENSOR")
openweathermap_api_key = os.getenv("OPENWEATHERMAP_API_KEY")
influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

reader = '{}/reader.js'.format(current_dir)


def get_rain_level_from_weather(weather):
    rain = False
    rain_level = 0
    if len(weather) > 0:
        for w in weather:
            if w['icon'] == '09d':
                rain = True
                rain_level = 1
            elif w['icon'] == '10d':
                rain = True
                rain_level = 2
            elif w['icon'] == '11d':
                rain = True
                rain_level = 3
            elif w['icon'] == '13d':
                rain = True
                rain_level = 4

    return rain, rain_level


def openweathermap():
    data = {}
    r = requests.get(
        "http://api.openweathermap.org/data/2.5/weather?id=3110044&appid={}&units=metric".format(
            openweathermap_api_key))

    if r.status_code == 200:
        current_data = r.json()
        data['weather'] = current_data['main']
        rain, rain_level = get_rain_level_from_weather(current_data['weather'])
        data['weather']['rain'] = rain
        data['weather']['rain_level'] = rain_level

    r2 = requests.get(
        "http://api.openweathermap.org/data/2.5/uvi?lat=43.32&lon=-1.93&appid={}".format(openweathermap_api_key))
    if r2.status_code == 200:
        data['uvi'] = r2.json()

    r3 = requests.get(
        "http://api.openweathermap.org/data/2.5/forecast?id=3110044&appid={}&units=metric".format(
            openweathermap_api_key))

    if r3.status_code == 200:
        forecast = r3.json()['list']
        data['forecast'] = []
        for f in forecast:
            rain, rain_level = get_rain_level_from_weather(f['weather'])
            data['forecast'].append({
                "dt": f['dt'],
                "fields": {
                    "temp": float(f['main']['temp']),
                    "humidity": float(f['main']['humidity']),
                    "rain": rain,
                    "rain_level": int(rain_level),
                    "pressure": float(float(f['main']['pressure']))
                }
            })

        return data


def persists(measurement, fields, location, time):
    logging.info("{} {} [{}] {}".format(time, measurement, location, fields))
    influx_client.write_points([{
        "measurement": measurement,
        "tags": {"location": location},
        "time": time,
        "fields": fields
    }])


def in_sensors():
    try:
        sense = SenseHat()
        pressure = sense.get_pressure()
        reader_output = check_output([reader, sensor_mac_address]).strip()
        sensor_info = json.loads(reader_output)
        temperature = sensor_info['temperature']

        persists(measurement='home_pressure', fields={"value": float(pressure)}, location="in", time=current_time)
        persists(measurement='home_temperature', fields={"value": float(temperature)}, location="in",
                 time=current_time)
    except Exception as err:
        logging.error(err)


def out_sensors():
    try:
        out_info = openweathermap()

        persists(measurement='home_pressure',
                 fields={"value": float(out_info['weather']['pressure'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_humidity',
                 fields={"value": float(out_info['weather']['humidity'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_temperature',
                 fields={"value": float(out_info['weather']['temp'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_rain',
                 fields={"value": out_info['weather']['rain'], "level": out_info['weather']['rain_level']},
                 location="out",
                 time=current_time)
        persists(measurement='home_uvi',
                 fields={"value": float(out_info['uvi']['value'])},
                 location="out",
                 time=current_time)
        for f in out_info['forecast']:
            persists(measurement='home_weather_forecast',
                     fields=f['fields'],
                     location="out",
                     time=datetime.datetime.utcfromtimestamp(f['dt']).isoformat())

    except Exception as err:
        logging.error(err)


influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)
current_time = datetime.datetime.utcnow().isoformat()

in_sensors()
out_sensors()

I’m running this Python script on a Raspberry Pi 3 with a Sense Hat. The Sense Hat has an atmospheric pressure sensor, so I will also retrieve the pressure from it.

From openweathermap I will obtain:

  • Current temperature/humidity and atmospheric pressure in the street
  • UV Index (the measure of the level of UV radiation)
  • Weather conditions (if it’s raining or not)
  • Weather forecast

I run this script with the Raspberry Pi’s crontab every 5 minutes. That means I’ve got a fancy time series ready to be shown with Grafana.
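
Before building the dashboard we can check from Python that the points are arriving. A small sketch with the same influxdb client used above (host, port and database are whatever your .env points to):

from influxdb import InfluxDBClient

# Assumes the same connection data used by the collector script (.env values)
client = InfluxDBClient(host="localhost", port=8086, database="home")
result = client.query('SELECT last("value") FROM "home_temperature" GROUP BY "location"')
print(list(result.get_points()))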

Here we can see the dashboard

Source code available in my github account.

Playing with Docker, MQTT, Grafana, InfluxDB, Python and Arduino

I must admit this post is just an excuse to play with Grafana and InfluxDB. InfluxDB is a cool database especially designed to work with time series. Grafana is an open source tool for time series analytics. I want to build a simple prototype. The idea is:

  • One Arduino device (esp32) emits an MQTT event to a mosquitto server. I’ll use a potentiometer to emulate one sensor (imagine here, for example, a temperature sensor instead of the potentiometer). I’ve used this circuit before in other projects
  • One Python script will be listening to the MQTT event on my Raspberry Pi and it will persist the value to an InfluxDB database
  • I will monitor the state of the time series given by the potentiometer with Grafana
  • I will create one alert in Grafana (for example when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state
  • One microservice (a Python Flask server) will be listening to the webhook and it will emit a MQTT event depending on the state
  • Another Arduino device (one NodeMcu in this case) will be listening to this MQTT event and it will activate a LED. Red one if the alert is ON and green one if the alert is OFF

The server
As I said before we’ll need three servers:

  • MQTT server (mosquitto)
  • InfluxDB server
  • Grafana server

We’ll use Docker. I’ve got a Docker host running on a Raspberry Pi 3. The Raspberry Pi is an ARM device so we need docker images built for this architecture.

version: '2'

services:
  mosquitto:
    image: pascaldevink/rpi-mosquitto
    container_name: mosquitto
    ports:
     - "9001:9001"
     - "1883:1883"
    restart: always
  
  influxdb:
    image: hypriot/rpi-influxdb
    container_name: influxdb
    restart: always
    environment:
     - INFLUXDB_INIT_PWD="password"
     - PRE_CREATE_DB="iot"
    ports:
     - "8083:8083"
     - "8086:8086"
    volumes:
     - ~/docker/rpi-influxdb/data:/data

  grafana:
    image: fg2it/grafana-armhf:v4.6.3
    container_name: grafana
    restart: always
    ports:
     - "3000:3000"
    volumes:
      - grafana-db:/var/lib/grafana
      - grafana-log:/var/log/grafana
      - grafana-conf:/etc/grafana

volumes:
  grafana-db:
    driver: local  
  grafana-log:
    driver: local
  grafana-conf:
    driver: local

ESP32
The esp32 part is very simple. We only need to connect our potentiometer to the esp32. The potentiometer has three pins: Gnd, Signal and Vcc. For the signal we’ll use pin 32.

We only need to configure our Wifi network, connect to our MQTT server and emit the potentiometer value within each loop.

#include <PubSubClient.h>
#include <WiFi.h>

const int potentiometerPin = 32;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// MQTT configuration
const char* server = "192.168.1.111";
const char* topic = "/pot";
const char* clientName = "com.gonzalo123.esp32";

String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void mqttEmit(String topic, String value)
{
  client.publish((char*) topic.c_str(), (char*) value.c_str());
}

void setup() {
  Serial.begin(115200);

  wifiConnect();
  client.setServer(server, 1883);
  delay(1500);
}

void loop() {
  if (!client.connected()) {
    mqttReConnect();
  }
  int current = (int) ((analogRead(potentiometerPin) * 100) / 4095);
  mqttEmit(topic, (String) current);
  delay(500);
}

Mqtt listener

The esp32 emits an event (“/pot”) with the value of the potentiometer, so we’re going to create an MQTT listener that listens for that event and persists the value to InfluxDB.

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient
import datetime
import logging


def persists(msg):
    current_time = datetime.datetime.utcnow().isoformat()
    json_body = [
        {
            "measurement": "pot",
            "tags": {},
            "time": current_time,
            "fields": {
                "value": int(msg.payload)
            }
        }
    ]
    logging.info(json_body)
    influx_client.write_points(json_body)


logging.basicConfig(level=logging.INFO)
influx_client = InfluxDBClient('docker', 8086, database='iot')
client = mqtt.Client()

client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/pot")
client.on_message = lambda client, userdata, msg: persists(msg)

client.connect("docker", 1883, 60)

client.loop_forever()
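
To test the listener without the esp32 we can publish a fake potentiometer value from Python. A quick hypothetical check using paho’s single-message helper:

import paho.mqtt.publish as publish

# Publish one fake reading to the same topic the esp32 uses;
# the listener should log it and write it to InfluxDB
publish.single("/pot", payload="42", hostname="docker", port=1883)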

Grafana
In Grafana we need to do two things. First, create one datasource from our InfluxDB server. It’s pretty straightforward to do it.

Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.

That’s the query that I’m using to plot the main graph

SELECT 
  last("value") FROM "pot" 
WHERE 
  time >= now() - 5m 
GROUP BY 
  time($interval) fill(previous)

Here we can see the dashboard

And here my alert configuration:

I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify us when the state of the alert changes

Webhook listener
Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers but in this project I’ll use Python and Flask.

We need to handle HTTP Basic Auth and emit an MQTT event. MQTT is a very simple protocol but it has one very nice feature that fits like a glove here. Let me explain it:

Imagine that we’ve got our system up and running and the state is “ok”. Now we connect one device (for example one big red/green light). Since the “ok” event was fired before we connected the light, our green light will not be switched on. We’d need to wait until the next “alert” event if we want to see any light. That’s not cool.

MQTT allows us to “retain” messages. That means that we can emit a message with the “retain” flag set on one topic, and when we connect a device to this topic later, it will still receive the message. That’s exactly what we need here.
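
A minimal paho-mqtt sketch of this behaviour (assuming the mosquitto broker from the docker-compose file above, reachable as “docker”):

import paho.mqtt.client as mqtt

# Publish "0" to /alert with the retain flag set, then disconnect
pub = mqtt.Client()
pub.connect("docker", 1883, 60)
pub.publish(topic="/alert", payload="0", retain=True)
pub.disconnect()


def on_message(client, userdata, msg):
    # Fires right after subscribing, even though we connected after the publish
    print("retained payload: {}".format(msg.payload))

# A second client that connects later still receives the last retained value
sub = mqtt.Client()
sub.on_connect = lambda client, userdata, flags, rc: client.subscribe("/alert")
sub.on_message = on_message
sub.connect("docker", 1883, 60)
sub.loop_forever()

Back to the webhook listener: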

from flask import Flask
from flask import request
from flask_httpauth import HTTPBasicAuth
import paho.mqtt.client as mqtt
import json

client = mqtt.Client()

app = Flask(__name__)
auth = HTTPBasicAuth()

# http basic auth credentials
users = {
    "user": "password"
}


@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None


@app.route('/alert', methods=['POST'])
@auth.login_required
def alert():
    client.connect("docker", 1883, 60)
    data = json.loads(request.data.decode('utf-8'))
    if data['state'] == 'alerting':
        client.publish(topic="/alert", payload="1", retain=True)
    elif data['state'] == 'ok':
        client.publish(topic="/alert", payload="0", retain=True)

    client.disconnect()

    return "ok"


if __name__ == "__main__":
    app.run(host='0.0.0.0')
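
We can test the endpoint without waiting for a real alert by simulating Grafana’s call (a hypothetical check; the real webhook payload carries more fields than the state):

import requests

# Simulate the webhook Grafana would send when the alert fires
r = requests.post(
    "http://localhost:5000/alert",
    auth=("user", "password"),   # the HTTP Basic Auth credentials from the Flask script
    json={"state": "alerting"},  # use "ok" to switch the green light back on
)
print(r.status_code, r.text)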

Nodemcu

Finally the Nodemcu. This part is similar to the esp32 one. Our leds are on pins 4 and 5. We also need to configure the Wifi and connect to the MQTT server. Nodemcu and esp32 are similar devices but not the same. For example, we need to use different libraries to connect to the wifi.

This device will be listening to the MQTT event and will turn on one led or the other depending on the state

#include <PubSubClient.h>
#include <ESP8266WiFi.h>

const int ledRed = 4;
const int ledGreen = 5;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// mqtt configuration
const char* server = "192.168.1.111";
const char* topic = "/alert";
const char* clientName = "com.gonzalo123.nodemcu";

int value;
int percent;
String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {

  Serial.print("Message arrived [");
  Serial.print(topic);

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }
  cleanLeds();
  int value = data.toInt();
  switch (value)  {
    case 1:
      digitalWrite(ledRed, HIGH);
      break;
    case 0:
      digitalWrite(ledGreen, HIGH);
      break;
  }
  Serial.print("] value:");
  Serial.println((int) value);
}

void cleanLeds() {
  digitalWrite(ledRed, LOW);
  digitalWrite(ledGreen, LOW);
}

void setup() {
  Serial.begin(9600);
  pinMode(ledRed, OUTPUT);
  pinMode(ledGreen, OUTPUT);
  cleanLeds();
  Serial.println("start");

  wifiConnect();
  client.setServer(server, 1883);
  client.setCallback(callback);

  delay(1500);
}

void loop() {
  Serial.print(".");
  if (!client.connected()) {
    mqttReConnect();
  }

  client.loop();
  delay(500);
}

Here you can see the working prototype in action

And here the source code

Happy logins. Only the happy user will pass

Login forms are boring. In this example we’re going to create a special login form, only for happy users. Happiness is something complicated but, at least, a smile is easier to obtain, and everything is better with a smile :). Our login form will only appear if the user smiles. Let’s start.

I must admit that this project is just an excuse to play with different technologies that I wanted to play with. Weeks ago I discovered one library called face_classification. With this library I can perform emotion classification from a picture. The idea is simple: we create a RabbitMQ RPC server script that answers with the emotion of the face within a picture. Then we obtain one frame from the video stream of the webcam (with HTML5) and we send this frame over a websocket to a socket.io server. This websocket server (node) asks the RabbitMQ RPC server for the emotion and sends back to the browser the emotion and the original picture with a rectangle over the face.

Frontend

Since we’re going to use socket.io for websockets, we’ll use the same script to serve the frontend (the login form and the HTML5 video capture)

<!doctype html>
<html>
<head>
    <title>Happy login</title>
    <link rel="stylesheet" href="css/app.css">
</head>
<body>

<div id="login-page" class="login-page">
    <div class="form">
        <h1 id="nonHappy" style="display: block;">Only the happy user will pass</h1>
        <form id="happyForm" class="login-form" style="display: none" onsubmit="return false;">
            <input id="user" type="text" placeholder="username"/>
            <input id="pass" type="password" placeholder="password"/>
            <button id="login">login</button>
            <p></p>
            <img id="smile" width="426" height="320" src=""/>
        </form>
        <div id="video">
            <video style="display:none;"></video>
            <canvas id="canvas" style="display:none"></canvas>
            <canvas id="canvas-face" width="426" height="320"></canvas>
        </div>
    </div>
</div>

<div id="private" style="display: none;">
    <h1>Private page</h1>
</div>

<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<script src="https://unpkg.com/sweetalert/dist/sweetalert.min.js"></script>
<script type="text/javascript" src="/socket.io/socket.io.js"></script>
<script type="text/javascript" src="/js/app.js"></script>
</body>
</html>

Here we’ll connect to the websocket and emit the webcam frame to the server. We’ll also be listening to one event called ‘response’, where the server will notify us when an emotion has been detected.

let socket = io.connect(location.origin),
    img = new Image(),
    canvasFace = document.getElementById('canvas-face'),
    context = canvasFace.getContext('2d'),
    canvas = document.getElementById('canvas'),
    width = 640,
    height = 480,
    delay = 1000,
    jpgQuality = 0.6,
    isHappy = false;

socket.on('response', function (r) {
    let data = JSON.parse(r);
    if (data.length > 0 && data[0].hasOwnProperty('emotion')) {
        if (isHappy === false && data[0]['emotion'] === 'happy') {
            isHappy = true;
            swal({
                title: "Good!",
                text: "All is better with one smile!",
                icon: "success",
                buttons: false,
                timer: 2000,
            });

            $('#nonHappy').hide();
            $('#video').hide();
            $('#happyForm').show();
            $('#smile')[0].src = 'data:image/png;base64,' + data[0].image;
        }

        img.onload = function () {
            context.drawImage(this, 0, 0, canvasFace.width, canvasFace.height);
        };

        img.src = 'data:image/png;base64,' + data[0].image;
    }
});

navigator.getMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia);

navigator.getMedia({video: true, audio: false}, (mediaStream) => {
    let video = document.getElementsByTagName('video')[0];
    video.src = window.URL.createObjectURL(mediaStream);
    video.play();
    setInterval(((video) => {
        return function () {
            let context = canvas.getContext('2d');
            canvas.width = width;
            canvas.height = height;
            context.drawImage(video, 0, 0, width, height);
            socket.emit('img', canvas.toDataURL('image/jpeg', jpgQuality));
        }
    })(video), delay)
}, error => console.log(error));

$(() => {
    $('#login').click(() => {
        $('#login-page').hide();
        $('#private').show();
    })
});

Backend
Finally we’ll work on the backend. Basically I’ve checked the examples that we can see in the face_classification project and tuned them a bit according to my needs.

from rabbit import builder
import logging
import numpy as np
from keras.models import load_model
from utils.datasets import get_labels
from utils.inference import detect_faces
from utils.inference import draw_text
from utils.inference import draw_bounding_box
from utils.inference import apply_offsets
from utils.inference import load_detection_model
from utils.inference import load_image
from utils.preprocessor import preprocess_input
import cv2
import json
import base64

detection_model_path = 'trained_models/detection_models/haarcascade_frontalface_default.xml'
emotion_model_path = 'trained_models/emotion_models/fer2013_mini_XCEPTION.102-0.66.hdf5'
emotion_labels = get_labels('fer2013')
font = cv2.FONT_HERSHEY_SIMPLEX

# hyper-parameters for bounding boxes shape
emotion_offsets = (20, 40)

# loading models
face_detection = load_detection_model(detection_model_path)
emotion_classifier = load_model(emotion_model_path, compile=False)

# getting input model shapes for inference
emotion_target_size = emotion_classifier.input_shape[1:3]


def format_response(response):
    decoded_json = json.loads(response)
    return "Hello {}".format(decoded_json['name'])


def on_data(data):
    f = open('current.jpg', 'wb')
    f.write(base64.decodebytes(data))
    f.close()
    image_path = "current.jpg"

    out = []
    # loading images
    rgb_image = load_image(image_path, grayscale=False)
    gray_image = load_image(image_path, grayscale=True)
    gray_image = np.squeeze(gray_image)
    gray_image = gray_image.astype('uint8')

    faces = detect_faces(face_detection, gray_image)
    for face_coordinates in faces:
        x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)
        gray_face = gray_image[y1:y2, x1:x2]

        try:
            gray_face = cv2.resize(gray_face, (emotion_target_size))
        except:
            continue

        gray_face = preprocess_input(gray_face, True)
        gray_face = np.expand_dims(gray_face, 0)
        gray_face = np.expand_dims(gray_face, -1)
        emotion_label_arg = np.argmax(emotion_classifier.predict(gray_face))
        emotion_text = emotion_labels[emotion_label_arg]
        color = (0, 0, 255)

        draw_bounding_box(face_coordinates, rgb_image, color)
        draw_text(face_coordinates, rgb_image, emotion_text, color, 0, -50, 1, 2)
        bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)

        cv2.imwrite('predicted.png', bgr_image)
        data = open('predicted.png', 'rb').read()
        encoded = base64.encodebytes(data).decode('utf-8')
        out.append({
            'image': encoded,
            'emotion': emotion_text,
        })

    return out

logging.basicConfig(level=logging.WARN)
rpc = builder.rpc("image.check", {'host': 'localhost', 'port': 5672})
rpc.server(on_data)
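
If we want to test the classification part in isolation, we can feed a base64-encoded picture straight into on_data, skipping RabbitMQ and the browser entirely (a hypothetical quick test; run it with the rpc.server() line commented out):

import base64

with open('test.jpg', 'rb') as f:      # any local picture with a face
    payload = base64.encodebytes(f.read())

result = on_data(payload)
print([r['emotion'] for r in result])  # e.g. ['happy']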

Here you can see the working prototype in action

Maybe we could do the same with other tools, and even more simply, but as I said before this example is just an excuse to play with those technologies:

  • Send webcam frames via websockets
  • Connect one web application to a Python application via RabbitMQ RPC
  • Play with the face_classification script

Please don’t use this script in production. It’s just a proof of concept. With smiles, but a proof of concept 🙂

You can see the project in my github account

Opencv and esp32 experiment. Moving a servo with my face alignment

One Saturday morning I was having breakfast and I discovered the face_recognition project. I started to play with the opencv example. I put in my picture and, wow! It works like a charm. It’s pretty straightforward to detect my face and I can also obtain the face landmarks. One of the landmarks that I can get is the nose tip. Playing with this script I realized that with the nose tip I can determine the position of the face: I can see if my face is aligned to the center or if I’ve moved it to one side. Since I have a new IoT device (an ESP32), I wanted to do something with it, for example control a servo (SG90) and move it from left to right depending on my face position.

First we have the main Python script. With this script I detect my face, the nose tip and the position of my face. With this position I will emit an event to an MQTT broker (a mosquitto server running on my laptop).

import face_recognition
import cv2
import numpy as np
import math
import paho.mqtt.client as mqtt

video_capture = cv2.VideoCapture(0)

gonzalo_image = face_recognition.load_image_file("gonzalo.png")
gonzalo_face_encoding = face_recognition.face_encodings(gonzalo_image)[0]

known_face_encodings = [
    gonzalo_face_encoding
]
known_face_names = [
    "Gonzalo"
]

RED = (0, 0, 255)
GREEN = (0, 255, 0)
BLUE = (255, 0, 0)

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
status = ''
labelColor = GREEN

client = mqtt.Client()
client.connect("localhost", 1883, 60)

while True:
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    face_locations = face_recognition.face_locations(rgb_small_frame)
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
    face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame, face_locations)

    face_names = []
    for face_encoding, face_landmarks in zip(face_encodings, face_landmarks_list):
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        if True in matches:
            first_match_index = matches.index(True)
            name = known_face_names[first_match_index]

            nose_tip = face_landmarks['nose_tip']
            maxLandmark = max(nose_tip)
            minLandmark = min(nose_tip)

            diff = math.fabs(maxLandmark[1] - minLandmark[1])
            if diff < 2:
                status = "center"
                labelColor = BLUE
                client.publish("/face/{}/center".format(name), "1")
            elif maxLandmark[1] > minLandmark[1]:
                status = ">>>>"
                labelColor = RED
                client.publish("/face/{}/left".format(name), "1")
            else:
                status = "<<<<"
                client.publish("/face/{}/right".format(name), "1")
                labelColor = RED

            shape = np.array(face_landmarks['nose_bridge'], np.int32)
            cv2.polylines(frame, [shape.reshape((-1, 1, 2)) * 4], True, (0, 255, 255))
            cv2.fillPoly(frame, [shape.reshape((-1, 1, 2)) * 4], GREEN)

        face_names.append("{} {}".format(name, status))

    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        if 'Unknown' not in name.split(' '):
            cv2.rectangle(frame, (left, top), (right, bottom), labelColor, 2)
            cv2.rectangle(frame, (left, bottom - 35), (right, bottom), labelColor, cv2.FILLED)
            cv2.putText(frame, name, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 1.0, (255, 255, 255), 1)
        else:
            cv2.rectangle(frame, (left, top), (right, bottom), BLUE, 2)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

Now another Python script will be listening to the MQTT events and will trigger one event with the position of the servo. I know that this second Python script is maybe unnecessary: we could move its logic into the esp32 and the main opencv script, but I was playing with MQTT and I wanted to decouple it a little bit.

import paho.mqtt.client as mqtt

class Iot:
    _state = None
    _client = None
    _dict = {
        'left': 0,
        'center': 1,
        'right': 2
    }

    def __init__(self, client):
        self._client = client

    def emit(self, name, event):
        if event != self._state:
            self._state = event
            self._client.publish("/servo", self._dict[event])
            print("emit /servo event with value {} - {}".format(self._dict[event], name))


def on_message(topic, iot):
    data = topic.split("/")
    name = data[2]
    action = data[3]
    iot.emit(name, action)


client = mqtt.Client()
iot = Iot(client)

client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/face/#")
client.on_message = lambda client, userdata, msg: on_message(msg.topic, iot)

client.connect("localhost", 1883, 60)
client.loop_forever()

And finally the ESP32. Here we’ll connect to my wifi and to my MQTT broker.

#include <WiFi.h>
#include <PubSubClient.h>

#define LED0 17
#define LED1 18
#define LED2 19
#define SERVO_PIN 5

// wifi configuration
const char* ssid = "my_ssid";
const char* password = "my_wifi_password";
// mqtt configuration
const char* server = "192.168.1.111"; // mqtt broker ip
const char* topic = "/servo";
const char* clientName = "com.gonzalo123.esp32";

int channel = 1;
int hz = 50;
int depth = 16;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print("*");
  }

  Serial.print("WiFi connected: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }

  int value = data.toInt();
  cleanLeds();
  switch (value)  {
    case 0:
      ledcWrite(1, 3400);
      digitalWrite(LED0, HIGH);
      break;
    case 1:
      ledcWrite(1, 4900);
      digitalWrite(LED1, HIGH);
      break;
    case 2:
      ledcWrite(1, 6400);
      digitalWrite(LED2, HIGH);
      break;
  }
  Serial.print("] value:");
  Serial.println((int) value);
}

void cleanLeds() {
  digitalWrite(LED0, LOW);
  digitalWrite(LED1, LOW);
  digitalWrite(LED2, LOW);
}

void setup() {
  Serial.begin(115200);

  ledcSetup(channel, hz, depth);
  ledcAttachPin(SERVO_PIN, channel);

  pinMode(LED0, OUTPUT);
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  cleanLeds();
  wifiConnect();
  client.setServer(server, 1883);
  client.setCallback(callback);

  delay(1500);
}

void loop()
{
  if (!client.connected()) {
    mqttReConnect();
  }

  client.loop();
  delay(100);
}

Here’s a video with the working prototype in action

The source code is available in my github account.

Tracking blue objects with Opencv and Python

Opencv is an amazing Open Source Computer Vision Library. Today we’re going to hack a little bit with it. The idea is to track blue objects. Why blue objects? Maybe because I’ve got a couple of them on my desk. Let’s start.

The idea is simple. We’ll create a mask: a black and white image where each blue pixel turns into a white one and the rest of the pixels are black.

Original frame:

Masked one:

Now we only need to put a bounding rectangle around the blue object.

import cv2
import numpy

cam = cv2.VideoCapture(0)
kernel = numpy.ones((5, 5), numpy.uint8)

while (True):
    ret, frame = cam.read()
    rangomax = numpy.array([255, 50, 50]) # B, G, R
    rangomin = numpy.array([51, 0, 0])
    mask = cv2.inRange(frame, rangomin, rangomax)
    # reduce the noise
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    x, y, w, h = cv2.boundingRect(opening)

    cv2.rectangle(frame, (x, y), (x+w, y + h), (0, 255, 0), 3)
    cv2.circle(frame, (x + w // 2, y + h // 2), 5, (0, 0, 255), -1)  # center of the box

    cv2.imshow('camera', frame)

    k = cv2.waitKey(1) & 0xFF

    if k == 27:
        break

And that’s all. A nice hack for a Sunday morning

Source code in my github account

NFC tag reader with Raspberry Pi

In another post we spoke about NFC tag readers and Arduino. Today I’ll do the same but with a Raspberry Pi. Why? More or less everything we can do with an Arduino board we can also do with a Raspberry Pi (and vice versa). Sometimes Arduino is too low level for me. For example, if we want to connect an Arduino to the LAN we need to set up the MAC address by hand. We can do it, but this operation is trivial with a Raspberry Pi and Python. We can connect our Arduino to a PostgreSQL database, but it’s not straightforward. My background is also better with Python than C++, so I feel more comfortable working with the Raspberry Pi. I’m not saying that the RPi is better than Arduino. With Arduino, for example, we don’t need to worry about startup processes, reboots and that kind of stuff that we need to worry about with computers. Arduino and Raspberry Pi are different tools. Sometimes one will fit better and sometimes the other.

So let’s start connecting our RFID/NFC sensor MFRC522 to our Raspberry Pi 3.
The wiring:

  • RC522 VCC > RPi 3V3
  • RC522 RST > RPi GPIO25
  • RC522 GND > RPi Ground
  • RC522 MISO > RPi GPIO9 (MISO)
  • RC522 MOSI > RPi GPIO10 (MOSI)
  • RC522 SCK > RPi GPIO11 (SCLK)
  • RC522 NSS > RPi GPIO8 (CE0)
  • RC522 IRQ > None

I will use a Python port of the example code for the NFC module MF522-AN, thanks to mxgxw

I’m going to use two Python scripts. One to control the NFC reader

import RPi.GPIO as gpio
import MFRC522
import sys
import time

MIFAREReader = MFRC522.MFRC522()
GREEN = 11
RED = 13
YELLOW = 15
SERVO = 12

gpio.setup(GREEN, gpio.OUT, initial=gpio.LOW)
gpio.setup(RED, gpio.OUT, initial=gpio.LOW)
gpio.setup(YELLOW, gpio.OUT, initial=gpio.LOW)
gpio.setup(SERVO, gpio.OUT)
p = gpio.PWM(SERVO, 50)

good = [211, 200, 106, 217, 168]

def servoInit():
    print "servoInit"
    p.start(7.5)

def servoOn():
    print "servoOn"
    p.ChangeDutyCycle(4.5)

def servoNone():
    print "servoNone"
    p.ChangeDutyCycle(7.5)

def servoOff():
    print "servoOff"
    p.ChangeDutyCycle(10.5)

def clean():
    gpio.output(GREEN, False)
    gpio.output(RED, False)
    gpio.output(YELLOW, False)

def main():
    servoInit()
    while 1:
        (status, TagType) = MIFAREReader.MFRC522_Request(MIFAREReader.PICC_REQIDL)
        if status == MIFAREReader.MI_OK:
            (status, backData) = MIFAREReader.MFRC522_Anticoll()
            gpio.output(YELLOW, True)
            if status == MIFAREReader.MI_OK:
                mac = []
                for x in backData[0:-1]:
                    mac.append(hex(x).split('x')[1].upper())
                print ":".join(mac)
                if good == backData:
                    servoOn()
                    gpio.output(GREEN, True)
                    time.sleep(0.5)
                    servoNone()
                else:
                    gpio.output(RED, True)
                    servoOff()
                    time.sleep(0.5)
                    servoNone()
                time.sleep(1)
                gpio.output(YELLOW, False)
                gpio.output(RED, False)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        clean()
        MIFAREReader.GPIO_CLEEN()
        sys.exit(0)

And another one to control the push button. I use this second script only to show how to use different processes.

import RPi.GPIO as gpio
import time

gpio.setwarnings(False)
gpio.setmode(gpio.BOARD)
BUTTON = 40
GREEN = 11
RED = 13
YELLOW = 15

gpio.setup(GREEN, gpio.OUT)
gpio.setup(RED, gpio.OUT)
gpio.setup(YELLOW, gpio.OUT)

gpio.setup(BUTTON, gpio.IN, pull_up_down=gpio.PUD_DOWN)
gpio.add_event_detect(BUTTON, gpio.RISING)
def leds(status):
    gpio.output(YELLOW, status)
    gpio.output(GREEN, status)
    gpio.output(RED, status)

def buttonCallback(pin):
    if gpio.input(pin) == 1:
        print "STOP"
        leds(True)
        time.sleep(0.2)
        leds(False)
        time.sleep(0.2)
        leds(True)
        time.sleep(0.2)
        leds(False)

gpio.add_event_callback(BUTTON, buttonCallback)
while 1:
    pass

Here’s a video with a working example (I’ve also put in a servo and three leds only because it looks good :))

Code in my github account

Playing with Raspberry Pi, Arduino, NodeMcu and MQTT

These days I’m playing with IoT. Today I want to use the MQTT protocol to communicate between different devices. First I’ve started an MQTT broker on my laptop. For testing I’ll use a mosquitto server. In production we can use RabbitMQ or even a third-party server such as iot.eclipse.org or Amazon’s IoT service.

The idea is to emit one value with one device, listen for this value with the rest of the devices and perform one action depending on that value. For example, I will use one potentiometer connected to a NodeMcu microcontroller.

This controller will connect to the MQTT broker and will emit the value of the potentiometer (reading the analog input) into one topic (called “potentiometer”). We can code our NodeMcu with Lua but I’m more comfortable with C++ and the Arduino IDE. First I need to connect to my Wifi, then connect to the broker and start emitting the potentiometer’s values

#include <PubSubClient.h>
#include <ESP8266WiFi.h>

// Wifi configuration
const char* ssid = "MY_WIFI_SSID";
const char* password = "my_wifi_password";

// mqtt configuration
const char* server = "192.168.1.104";
const char* topic = "potentiometer";
const char* clientName = "com.gonzalo123.nodemcu";

int value;
int percent;
String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());

  if (client.connect(clientName)) {
    Serial.print("Connected to MQTT broker at ");
    Serial.print(server);
    Serial.print(" as ");
    Serial.println(clientName);
    Serial.print("Topic is: ");
    Serial.println(topic);
  }
  else {
    Serial.println("MQTT connect failed");
    Serial.println("Will reset and try again...");
    abort();
  }
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    // Attempt to connect
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void setup() {
  Serial.begin(9600);
  client.setServer(server, 1883);
  wifiConnect();
  delay(10);
}

void loop() {
  value = analogRead(A0);
  percent = (int) ((value * 100) / 1010);
  payload = (String) percent;
  if (client.connected()) {
    if (client.publish(topic, (char*) payload.c_str())) {
      Serial.print("Publish ok (");
      Serial.print(payload);
      Serial.println(")");
    } else {
      Serial.println("Publish failed");
    }
  } else {
    mqttReConnect();
  }

  delay(200);
}

Now we will use another Arduino (with an ethernet shield).

We’ll move one servomotor depending on the NodeMcu’s potentiometer value. This Arduino only needs to listen to the MQTT topic and move the servo.

#include <SPI.h>
#include <Servo.h>
#include <Ethernet.h>
#include <PubSubClient.h>

#define SERVO_CONTROL 9
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };

Servo servo;
EthernetClient ethClient;

// mqtt configuration
const char* server = "192.168.1.104";
const char* topic = "potentiometer";
const char* clientName = "com.gonzalo123.arduino";

PubSubClient client(ethClient);

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);
  Serial.print("] angle:");

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }

  double angle = ((data.toInt() * 180) / 100);
  angle = constrain(angle, 0, 180); // constrain() returns the value; it must be assigned
  servo.write((int) angle);
  Serial.println((int) angle);
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    // Attempt to connect
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void setup()
{
  Serial.begin(9600);
  client.setServer(server, 1883);
  client.setCallback(callback);
  servo.attach(SERVO_CONTROL);
  if (Ethernet.begin(mac) == 0) {
    Serial.println("Failed to configure Ethernet using DHCP");
  }

  delay(1500); // Allow the hardware to sort itself out
}

void loop()
{
  if (!client.connected()) {
    mqttReConnect();
  }
  client.loop();
}

Finally we’ll use one Raspberry Pi with a Sense Hat and we’ll display different colors and dots on its led matrix, depending on the NodeMcu’s value. In the same way as with the Arduino script, here we only need to listen to the broker’s topic and perform the actions with the Sense Hat. Now with Python

import paho.mqtt.client as mqtt
from sense_hat import SenseHat

sense = SenseHat()
sense.clear()
mqttServer = "192.168.1.104"

red = [255, 0, 0]
green = [0, 255, 0]
yellow = [255, 255, 0]
black = [0, 0, 0]

def on_connect(client, userdata, rc):
    print("Connected!")
    client.subscribe("potentiometer")

def on_message(client, userdata, msg):
    value = (64 * int(msg.payload)) // 100  # integer division: we need an int number of pixels
    O = black
    if value < 21:
        X = red
    elif value < 42:
        X = yellow
    else:
        X = green

    sense.set_pixels(([X] * value) + ([O] * (64 - value)))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect(mqttServer, 1883, 60)
client.loop_forever()

The hardware:

  • 1 Arduino Uno
  • 1 NodeMCU (V3)
  • 1 potentiometer
  • 1 Servo (SG90)
  • 1 Raspberry Pi 3 (with a Sense Hat)

Source code is available in my github account.

Arduino and Raspberry Pi working together. Part 2 (now with i2c)

The easiest way to connect our Arduino board to our Raspberry Pi is using the USB cable, but sometimes this communication is a nightmare, especially because there isn’t any clock signal to synchronize our devices and we must rely on the bitrate. There are different ways to connect our Arduino and our Raspberry Pi, such as I2C, SPI and serial over GPIO. Today we’re going to speak about I2C, especially because it’s pretty straightforward if we take care of a couple of things. Let’s start.

I2C uses two lines: SDA (data) and SCL (clock), in addition to GND (ground). SDA is bidirectional so we need to ensure, in one way or another, who is sending data (master or slave). With I2C only the master can start communications, and the master also controls the clock signal. Each device has a 7-bit address so we can connect up to 128 devices to the same bus.
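
As a quick sanity check, we can probe all the possible addresses from Python with smbus, which is roughly what the i2cdetect tool does (a sketch; it assumes i2c bus 1, the default on recent Raspberry Pi models):

import smbus

bus = smbus.SMBus(1)  # i2c bus 1

# Probe every usable 7-bit address; devices that ACK are present
for address in range(0x03, 0x78):
    try:
        bus.read_byte(address)
        print "device found at", hex(address)
    except IOError:
        pass  # no device at this address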

If we want to connect an Arduino board and a Raspberry Pi we must ensure that the Raspberry Pi is the master. That’s because Arduino works with 5V and the Raspberry Pi with 3.3V. That means we would need pull-up resistors if we don’t want to destroy our Raspberry Pi. But the Raspberry Pi already has 1k8 ohm resistors to the 3.3 volt power rail, so we can connect both devices directly (if we connect other i2c devices to the bus they must have their pull-up resistors removed)

That’s all we need to connect our Raspberry Pi to our Arduino board:

  • RPi SDA to Arduino analog 4
  • RPi SCL to Arduino analog 5
  • RPi GND to Arduino GND

Now we are going to build a simple prototype. The Raspberry Pi will blink one led (GPIO17) each second and will also send a message (via I2C) to the Arduino to blink another led. That’s the Python part:

import RPi.GPIO as gpio
import smbus
import time
import sys

bus = smbus.SMBus(1)
address = 0x04

def main():
    gpio.setmode(gpio.BCM)
    gpio.setup(17, gpio.OUT)
    status = False
    while 1:
        gpio.output(17, status)
        status = not status
        bus.write_byte(address, 1 if status else 0)
        print "Arduino answer to RPI:", bus.read_byte(address)
        time.sleep(1)


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        gpio.cleanup()
        sys.exit(0)

And finally the Arduino program. The Arduino also answers to the Raspberry Pi with the value that has been sent, and the Raspberry Pi will log the answer to the console.

#include <Wire.h>

#define SLAVE_ADDRESS 0x04
#define LED  13

int number = 0;

void setup() {
  pinMode(LED, OUTPUT);
  Serial.begin(9600);
  Wire.begin(SLAVE_ADDRESS);
  Wire.onReceive(receiveData);
  Wire.onRequest(sendData);

  Serial.println("Ready!");
}

void loop() {
  delay(100);
}

void receiveData(int byteCount) {
  Serial.print("receiveData");
  while (Wire.available()) {
    number = Wire.read();
    Serial.print("data received: ");
    Serial.println(number);

    if (number == 1) {
      Serial.println(" LED ON");
      digitalWrite(LED, HIGH);
    } else {
      Serial.println(" LED OFF");
      digitalWrite(LED, LOW);
    }
  }
}

void sendData() {
  Wire.write(number);
}

Hardware:

  • Arduino UNO
  • Raspberry Pi
  • Two LEDs and two resistors

Code available in my github

Arduino and Raspberry Pi working together

Basically everything we can do with Arduino can also be done with a Raspberry Pi (and vice versa). There are things that are easy to do with Arduino (connecting sensors, for example). But other things (such as working with REST servers, databases, …) are “complicated” with Arduino and C++ (they are possible but require a lot of low level operations) and pretty straightforward with Raspberry Pi and Python (at least for me, because of my background)

With this small project I want to use an Arduino board and a Raspberry Pi working together. The idea is to blink two LEDs. One (the green one) will be controlled by the Raspberry Pi directly via GPIO and the other one (the red one) will be controlled by the Arduino board. The Raspberry Pi will be the “brain” of the project and will tell the Arduino board when to turn its led on and off. Let me show you the code.

import RPi.GPIO as gpio
import serial
import time
import sys
import os

def main():
    gpio.setmode(gpio.BOARD)
    gpio.setup(12, gpio.OUT)

    s = serial.Serial('/dev/ttyACM0', 9600)
    status = False

    while 1:
        gpio.output(12, status)
        status = not status
        print status
        s.write("1\n" if status else "0\n")
        time.sleep(1)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        gpio.cleanup()
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)

As we can see, the script is a simple loop that blinks one led (using pin 12) at an interval of one second. Our Arduino board is connected directly to the Raspberry Pi via a USB cable and we send commands over the serial interface.

Finally the Arduino program:

#define LED  11

String serialData = "";
boolean onSerialRead = false; 

void setup() {
  // initialize serial:
  Serial.begin(9600);
  serialData.reserve(200);
}

void processSerialData() {
  Serial.print("Data " + serialData);
  if (serialData == "1") {    
    Serial.println(" LED ON");
    digitalWrite(LED, HIGH);
  } else {
    Serial.println(" LED OFF");
    digitalWrite(LED, LOW);
  }
  serialData = "";
  onSerialRead = false;
}

void loop() {
  if (onSerialRead) {
    processSerialData();
  }
}

void serialEvent() {
  while (Serial.available()) {
    char inChar = (char)Serial.read();
    if (inChar == '\n') {
      onSerialRead = true;
    } else {
      serialData += inChar;
    }
  }
}

Here our Arduino board is listening to the serial interface (with serialEvent) and each time we receive “\n” the main loop will turn the led on or off depending on the value (1 – On, 0 – Off)
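
We can also drive the Arduino by hand from a Python shell, which is handy for testing the protocol before running the main loop (a small sketch; note that the UNO resets when the serial port is opened):

import serial
import time

s = serial.Serial('/dev/ttyACM0', 9600)
time.sleep(2)   # the board resets when the port opens; give it time to boot
s.write("1\n")  # red led on
time.sleep(1)
s.write("0\n")  # red led off
s.close()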

We can use I2C and other ways to connect the Arduino and the Raspberry Pi, but in this example we’re using the simplest way to do it: a USB cable. We only need an A/B USB cable. We don’t need any other extra hardware (such as resistors) and the software part is also pretty straightforward.

Hardware:

  • Arduino UNO
  • Raspberry Pi 3
  • Two LEDs and two resistors

Code in my github account