Blog Archives

Data Analysis with Python. Pivot tables with Pandas

One of the first posts in my blog was about pivot tables. I’d created a library to pivot tables in my PHP scripts. The library is not very beautiful (it throws a lot of warnings), but it works. These days I’m playing with data analysis in Python and I’m using Pandas. The purpose of this post is something that I like a lot: learn by doing. So I want to do the same operations that I did eight years ago in that post, but now with Pandas. Let’s start.

I’ll start with the same datasource that I used almost ten years ago: one simple recordset with clicks and number of users.

I create a dataframe with this data

import numpy as np
import pandas as pd

data = pd.DataFrame([
    {'host': 1, 'country': 'fr', 'year': 2010, 'month': 1, 'clicks': 123, 'users': 4},
    {'host': 1, 'country': 'fr', 'year': 2010, 'month': 2, 'clicks': 134, 'users': 5},
    {'host': 1, 'country': 'fr', 'year': 2010, 'month': 3, 'clicks': 341, 'users': 2},
    {'host': 1, 'country': 'es', 'year': 2010, 'month': 1, 'clicks': 113, 'users': 4},
    {'host': 1, 'country': 'es', 'year': 2010, 'month': 2, 'clicks': 234, 'users': 5},
    {'host': 1, 'country': 'es', 'year': 2010, 'month': 3, 'clicks': 421, 'users': 2},
    {'host': 1, 'country': 'es', 'year': 2010, 'month': 4, 'clicks': 22, 'users': 3},
    {'host': 2, 'country': 'es', 'year': 2010, 'month': 1, 'clicks': 111, 'users': 2},
    {'host': 2, 'country': 'es', 'year': 2010, 'month': 2, 'clicks': 2, 'users': 4},
    {'host': 3, 'country': 'es', 'year': 2010, 'month': 3, 'clicks': 34, 'users': 2},
    {'host': 3, 'country': 'es', 'year': 2010, 'month': 4, 'clicks': 1, 'users': 1}
])

Pivot_tables

Now we want to do a simple pivot operation. We want to pivot on host

pd.pivot_table(data,
   index=['host'],
   values=['users', 'clicks'],
   columns=['year', 'month'],
   fill_value=''
  )

Pivot_tables_2

We can add totals

pd.pivot_table(data,
               index=['host'],
               values=['users', 'clicks'],
               columns=['year', 'month'],
               fill_value='',
               aggfunc=np.sum,
               margins=True,
               margins_name='Total'
              )

Pivot_tables_3

We can also pivot on more than one column, for example host and country:

pd.pivot_table(data,
               index=['host', 'country'],
               values=['users', 'clicks'],
               columns=['year', 'month'],
               fill_value=''
              )

Pivot_tables_4

and also with totals

pd.pivot_table(data,
               index=['host', 'country'],
               values=['users', 'clicks'],
               columns=['year', 'month'],
               aggfunc=np.sum,
               fill_value='',
               margins=True,
               margins_name='Total'
              )

Pivot_tables_5

We can also group the dataframe and calculate subtotals:

data.groupby(['host', 'country'])[['clicks', 'users']].sum()

Pivot_tables_6

data.groupby(['host', 'country'])[['clicks', 'users']].mean()

Pivot_tables_7
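As a side note (this wasn’t in the original post), both aggregations can also be combined in a single call with agg:

data.groupby(['host', 'country'])[['clicks', 'users']].agg(['sum', 'mean'])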

And finally we can mix totals and subtotals.

out = data.groupby('host').apply(lambda sub: sub.pivot_table(
    index=['host', 'country'],
    values=['users', 'clicks'],
    columns=['year', 'month'],
    aggfunc=np.sum,
    margins=True,
    margins_name='SubTotal',
))

out.loc[('', 'Max', '')] = out.max()
out.loc[('', 'Min', '')] = out.min()
out.loc[('', 'Total', '')] = out.sum()

out.index = out.index.droplevel(0)

out.fillna('', inplace=True)

Pivot_tables_8

And that’s all. A lot to learn yet about data analysis, but Pandas will definitely be a good friend of mine.

You can see the Jupyter notebook in my github account.


Monitoring the bandwidth (part 2) now with Python Nameko microservice

These days I’ve been playing with Nameko, a Python framework for building microservices. Today I want to upgrade a small pet project that I’ve got at home to monitor the bandwidth of my internet connection. I want to use one Nameko microservice using the Timer entrypoint.

That’s the worker:

from nameko.timer import timer
import datetime
import logging
import os
import speedtest
from dotenv import load_dotenv
from influxdb import InfluxDBClient

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))


class SpeedService:
    name = "speed"

    def __init__(self):
        self.influx_client = InfluxDBClient(
            host=os.getenv("INFLUXDB_HOST"),
            port=os.getenv("INFLUXDB_PORT"),
            database=os.getenv("INFLUXDB_DATABASE")
        )

    @timer(interval=3600)
    def speedTest(self):
        logging.info("speedTest")
        current_time = datetime.datetime.utcnow().isoformat()
        speed = self.get_speed()

        self.persists(measurement='download', fields={"value": speed['download']}, time=current_time)
        self.persists(measurement='upload', fields={"value": speed['upload']}, time=current_time)
        self.persists(measurement='ping', fields={"value": speed['ping']}, time=current_time)

    def persists(self, measurement, fields, time):
        logging.info("{} {} {}".format(time, measurement, fields))
        self.influx_client.write_points([{
            "measurement": measurement,
            "time": time,
            "fields": fields
        }])

    @staticmethod
    def get_speed():
        logging.info("Calculating speed ...")
        s = speedtest.Speedtest()
        s.get_best_server()
        s.download()
        s.upload()

        return s.results.dict()

I need to adapt my docker-compose file to include the RabbitMQ server (Nameko needs a RabbitMQ message broker)

version: '3'

services:
  speed.worker:
    container_name: speed.worker
    image: speed/worker
    restart: always
    build:
      context: ./src/speed.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  rabbit:
    container_name: speed.rabbit
    image: rabbitmq:3-management
    restart: always
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
  influxdb:
    container_name: speed.influxdb
    image: influxdb:latest
    restart: always
    environment:
    - INFLUXDB_DB=${INFLUXDB_DB}
    - INFLUXDB_ADMIN_USER=${INFLUXDB_ADMIN_USER}
    - INFLUXDB_ADMIN_PASSWORD=${INFLUXDB_ADMIN_PASSWORD}
    - INFLUXDB_HTTP_AUTH_ENABLED=${INFLUXDB_HTTP_AUTH_ENABLED}
    volumes:
    - influxdb-data:/data
  grafana:
    container_name: speed.grafana
    build:
      context: ./src/grafana
      dockerfile: .docker/Dockerfile-grafana
    restart: always
    environment:
    - GF_SECURITY_ADMIN_USER=${GF_SECURITY_ADMIN_USER}
    - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
    - GF_USERS_DEFAULT_THEME=${GF_USERS_DEFAULT_THEME}
    - GF_USERS_ALLOW_SIGN_UP=${GF_USERS_ALLOW_SIGN_UP}
    - GF_USERS_ALLOW_ORG_CREATE=${GF_USERS_ALLOW_ORG_CREATE}
    - GF_AUTH_ANONYMOUS_ENABLED=${GF_AUTH_ANONYMOUS_ENABLED}
    ports:
    - "3000:3000"
    depends_on:
    - influxdb
volumes:
  influxdb-data:
    driver: local
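The worker container starts with run.sh. That script isn’t shown in the post; a minimal sketch (assuming the service code lives in worker.py and a config.yml pointing at the rabbit container) could be:

# run.sh (sketch, not the actual file from the repo)
# config.yml would contain at least the broker address, e.g.:
#   AMQP_URI: amqp://user:password@rabbit
nameko run worker --config config.yml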

And that’s all. Over-engineering to control my Internet connection? Maybe, but that’s the way I learn new stuff 🙂

Source code available in my github.

Playing with microservices, Docker, Python and Nameko

In the last projects that I’ve been involved with I’ve been playing, in one way or another, with microservices, queues and things like that. I’m always facing the same tasks: building RPCs, workers, API gateways, … Because of that I’ve been searching for a framework to help me with that kind of stuff. Finally I discovered Nameko. Basically, Nameko is the Python tool that I’ve been looking for. In this post I will create a simple proof of concept to learn how to integrate Nameko within my projects. Let’s start.

The POC is a simple API gateway that gives me the local time in ISO format. I can create a simple Python script to do it:

import datetime
import time

print(datetime.datetime.fromtimestamp(time.time()).isoformat())

We can also create a simple Flask API server to consume this information. The idea is to create one rpc worker to generate this information and also another worker to give the local time, but taken from a PostgreSQL database (yes, I know it’s not very useful, but it’s just an excuse to use a PG database in the microservice).

We’re going to create two rpc workers. One giving the local time:

from nameko.rpc import rpc
from time import time
import datetime


class TimeService:
    name = "local_time_service"

    @rpc
    def local(self):
        return datetime.datetime.fromtimestamp(time()).isoformat()

And another one with the date from PostgreSQL:

from nameko.rpc import rpc
from dotenv import load_dotenv
import os
from ext.pg import PgService

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))


class TimeService:
    name = "db_time_service"
    conn = PgService(os.getenv('DSN'))

    @rpc
    def db(self):
        with self.conn:
            with self.conn.cursor() as cur:
                cur.execute("select localtimestamp")
                timestamp = cur.fetchone()
        return timestamp[0]

I’ve created a service called PgService only to learn how to create dependency providers in nameko

from nameko.extensions import DependencyProvider
import psycopg2


class PgService(DependencyProvider):

    def __init__(self, dsn):
        self.dsn = dsn

    def get_dependency(self, worker_ctx):
        return psycopg2.connect(self.dsn)

Now we only need to set up the API gateway. With Nameko we can also create http entrypoints (in the same way as we create rpc ones), but I want to use Flask here.
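Just as a reference, a rough sketch of what the gateway could look like using Nameko’s own http entrypoint together with RpcProxy (service and route names here are only illustrative):

from nameko.rpc import RpcProxy
from nameko.web.handlers import http


class GatewayService:
    name = "gateway"

    local_time_service = RpcProxy("local_time_service")

    @http('GET', '/local')
    def local(self, request):
        # delegate to the local_time_service rpc worker
        return self.local_time_service.local()

For this POC, though, I prefer Flask, so the gateway below talks to the workers with the standalone ServiceRpcProxy: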

from flask import Flask
from nameko.standalone.rpc import ServiceRpcProxy
from dotenv import load_dotenv
import os

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

app = Flask(__name__)


def rpc_proxy(service):
    config = {'AMQP_URI': os.getenv('AMQP_URI')}
    return ServiceRpcProxy(service, config)


@app.route('/')
def hello():
    return "Hello"


@app.route('/local')
def local_time():
    with rpc_proxy('local_time_service') as rpc:
        time = rpc.local()

    return time


@app.route('/db')
def db_time():
    with rpc_proxy('db_time_service') as rpc:
        time = rpc.db()

    return time


if __name__ == '__main__':
    app.run()

As I want to run my POC with Docker, here is the docker-compose file to set up the project:

version: '3.4'

services:
  api:
    image: nameko/api
    container_name: nameko.api
    hostname: api
    ports:
    - "8080:8080"
    restart: always
    links:
    - rabbit
    - db.worker
    - local.worker
    environment:
    - ENV=1
    - FLASK_APP=app.py
    - FLASK_DEBUG=1
    build:
      context: ./api
      dockerfile: .docker/Dockerfile-api
    #volumes:
    #- ./api:/usr/src/app:ro
    command: flask run --host=0.0.0.0 --port 8080
  db.worker:
    container_name: nameko.db.worker
    image: nameko/db.worker
    restart: always
    build:
      context: ./workers/db.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  local.worker:
    container_name:  nameko.local.worker
    image: nameko/local.worker
    restart: always
    build:
      context: ./workers/local.worker
      dockerfile: .docker/Dockerfile-worker
    command: /bin/bash run.sh
  rabbit:
    container_name: nameko.rabbit
    image: rabbitmq:3-management
    restart: always
    ports:
    - "15672:15672"
    - "5672:5672"
    environment:
      RABBITMQ_ERLANG_COOKIE:
      RABBITMQ_DEFAULT_VHOST: /
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS}
  pg:
    container_name: nameko.pg
    image: nameko/pg
    restart: always
    build:
      context: ./pg
      dockerfile: .docker/Dockerfile-pg
    #ports:
    #- "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata

And that’s all. Two Nameko rpc services working together behind an API gateway.
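Once everything is up with docker-compose we can test the gateway with curl (the timestamp below is just an example output; /db works the same way, with the value coming from PostgreSQL):

curl http://localhost:8080/local
2018-03-25T10:21:07.123456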

Code available in my github

Monitoring the bandwidth with Grafana, InfluxDB and Docker

Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy to switch to a fiber connection. Finally I changed it but, while my Internet company was solving one incident, I started to hack a little bit on a simple and dirty script that monitors my connection speed (just for fun and to practise with InfluxDB and Grafana).

Today I’ve lost my quick and dirty script (please, Gonzalo, always keep a working backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It’s simple: “dd if=/dev/disk3 of=pi3.img” 🙂) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.

To monitor the bandwidth we only need the speedtest-cli API. We can use it from the command line and, as it’s a Python library, we can also create a Python script that uses it.
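From the command line it’s just this (the numbers below are made up):

speedtest-cli --simple
Ping: 22.5 ms
Download: 93.1 Mbit/s
Upload: 9.8 Mbit/s

The script below does the same thing from Python using the speedtest library.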

import datetime
import logging
import os
import speedtest
import time
from dotenv import load_dotenv
from influxdb import InfluxDBClient

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

def persists(measurement, fields, time):
    logging.info("{} {} {}".format(time, measurement, fields))

    influx_client.write_points([{
        "measurement": measurement,
        "time": time,
        "fields": fields
    }])

influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)

def get_speed():
    logging.info("Calculating speed ...")
    s = speedtest.Speedtest()
    s.get_best_server()
    s.download()
    s.upload()

    return s.results.dict()

def loop(sleep):
    current_time = datetime.datetime.utcnow().isoformat()
    speed = get_speed()

    persists(measurement='download', fields={"value": speed['download']}, time=current_time)
    persists(measurement='upload', fields={"value": speed['upload']}, time=current_time)
    persists(measurement='ping', fields={"value": speed['ping']}, time=current_time)

    time.sleep(sleep)

while True:
    loop(sleep=60 * 60) # each hour

Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana from the Docker files instead of opening the browser, creating the datasource and building the dashboard by hand. After a couple of hours navigating GitHub repositories I finally created exactly what I needed for this post. Basically it’s a custom entrypoint for my Grafana host that creates the datasource and dashboard (via Grafana’s API).
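The actual entrypoint script lives in the repository; roughly, the idea sketched in Python would be something like this (host, credentials and database name below are assumptions), just calling Grafana’s HTTP API:

import requests

GRAFANA_URL = "http://localhost:3000"   # assumption
GRAFANA_AUTH = ("admin", "admin")       # default credentials, assumption

datasource = {
    "name": "influxdb",
    "type": "influxdb",
    "access": "proxy",
    "url": "http://influxdb:8086",
    "database": "speed",                # whatever ${INFLUXDB_DB} is set to
    "isDefault": True
}

# datasources (and dashboards, via POST /api/dashboards/db) can be provisioned through the API
requests.post(GRAFANA_URL + "/api/datasources", json=datasource, auth=GRAFANA_AUTH)

And here is the docker-compose file: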

version: '3'

services:
  check:
    image: gonzalo123.check
    restart: always
    volumes:
    - ./src/beat:/code/src
    depends_on:
    - influxdb
    build:
      context: ./src
      dockerfile: .docker/Dockerfile-check
    networks:
    - app-network
    command: /bin/sh start.sh
  influxdb:
    image: influxdb:latest
    restart: always
    environment:
    - INFLUXDB_INIT_PWD="${INFLUXDB_PASS}"
    - PRE_CREATE_DB="${INFLUXDB_DB}"
    volumes:
    - influxdb-data:/data
    networks:
    - app-network
  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
    - "3000:3000"
    depends_on:
    - influxdb
    volumes:
    - grafana-db:/var/lib/grafana
    - grafana-log:/var/log/grafana
    - grafana-conf:/etc/grafana
    networks:
    - app-network

networks:
  app-network:
    driver: bridge

volumes:
  grafana-db:
    driver: local
  grafana-log:
    driver: local
  grafana-conf:
    driver: local
  influxdb-data:
    driver: local

And that’s all. My Internet connection supervised again.

Project available in my github.

Playing with Grafana and weather APIs

Today I want to play with Grafana. Let me show you my idea:

  • I’ve got a Beewi temperature sensor. I’ve been playing with it previously. Today I want to show the temperature within a Grafana dashboard.
  • I also want to play with the openweathermap API.

First I want to retrieve the temperature from the Beewi device. I’ve got a node script that connects via Bluetooth to the device using the noble library. I only need to pass the sensor’s MAC address and I obtain a JSON with the current temperature.

#!/usr/bin/env node
noble = require('noble');

var status = false;
var address = process.argv[2];

if (!address) {
    console.log('Usage "./reader.js <sensor mac address>"');
    process.exit();
}

function hexToInt(hex) {
    var num, maxVal;
    if (hex.length % 2 !== 0) {
        hex = "0" + hex;
    }
    num = parseInt(hex, 16);
    maxVal = Math.pow(2, hex.length / 2 * 8);
    if (num > maxVal / 2 - 1) {
        num = num - maxVal;
    }

    return num;
}

noble.on('stateChange', function(state) {
    status = (state === 'poweredOn');
});

noble.on('discover', function(peripheral) {
    if (peripheral.address == address) {
        var data = peripheral.advertisement.manufacturerData.toString('hex');
        out = {
            temperature: parseFloat(hexToInt(data.substr(10, 2)+data.substr(8, 2))/10).toFixed(1)
        };
        console.log(JSON.stringify(out))
        noble.stopScanning();
        process.exit();
    }
});

noble.on('scanStop', function() {
    noble.stopScanning();
});

setTimeout(function() {
    noble.stopScanning();
    noble.startScanning();
}, 2000);


setTimeout(function() {
    noble.stopScanning();
    process.exit()
}, 20000);
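Running the script with the sensor’s MAC address (the address and the reading below are made up) prints something like:

./reader.js a0:b1:c2:d3:e4:f5
{"temperature":"21.3"}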

And finally another script (this time a Python script) to collect data from the openweathermap API, read the output of the node script and store the information in an InfluxDB database.

from sense_hat import SenseHat
from influxdb import InfluxDBClient
import datetime
import logging
import requests
import json
from subprocess import check_output
import os
import sys
from dotenv import load_dotenv

logging.basicConfig(level=logging.INFO)

current_dir = os.path.dirname(os.path.abspath(__file__))
load_dotenv(dotenv_path="{}/.env".format(current_dir))

sensor_mac_address = os.getenv("BEEWI_SENSOR")
openweathermap_api_key = os.getenv("OPENWEATHERMAP_API_KEY")
influxdb_host = os.getenv("INFLUXDB_HOST")
influxdb_port = os.getenv("INFLUXDB_PORT")
influxdb_database = os.getenv("INFLUXDB_DATABASE")

reader = '{}/reader.js'.format(current_dir)


def get_rain_level_from_weather(weather):
    rain = False
    rain_level = 0
    if len(weather) > 0:
        for w in weather:
            if w['icon'] == '09d':
                rain = True
                rain_level = 1
            elif w['icon'] == '10d':
                rain = True
                rain_level = 2
            elif w['icon'] == '11d':
                rain = True
                rain_level = 3
            elif w['icon'] == '13d':
                rain = True
                rain_level = 4

    return rain, rain_level


def openweathermap():
    data = {}
    r = requests.get(
        "http://api.openweathermap.org/data/2.5/weather?id=3110044&appid={}&units=metric".format(
            openweathermap_api_key))

    if r.status_code == 200:
        current_data = r.json()
        data['weather'] = current_data['main']
        rain, rain_level = get_rain_level_from_weather(current_data['weather'])
        data['weather']['rain'] = rain
        data['weather']['rain_level'] = rain_level

    r2 = requests.get(
        "http://api.openweathermap.org/data/2.5/uvi?lat=43.32&lon=-1.93&appid={}".format(openweathermap_api_key))
    if r2.status_code == 200:
        data['uvi'] = r2.json()

    r3 = requests.get(
        "http://api.openweathermap.org/data/2.5/forecast?id=3110044&appid={}&units=metric".format(
            openweathermap_api_key))

    if r3.status_code == 200:
        forecast = r3.json()['list']
        data['forecast'] = []
        for f in forecast:
            rain, rain_level = get_rain_level_from_weather(f['weather'])
            data['forecast'].append({
                "dt": f['dt'],
                "fields": {
                    "temp": float(f['main']['temp']),
                    "humidity": float(f['main']['humidity']),
                    "rain": rain,
                    "rain_level": int(rain_level),
                    "pressure": float(float(f['main']['pressure']))
                }
            })

        return data


def persists(measurement, fields, location, time):
    logging.info("{} {} [{}] {}".format(time, measurement, location, fields))
    influx_client.write_points([{
        "measurement": measurement,
        "tags": {"location": location},
        "time": time,
        "fields": fields
    }])


def in_sensors():
    try:
        sense = SenseHat()
        pressure = sense.get_pressure()
        reader_output = check_output([reader, sensor_mac_address]).strip()
        sensor_info = json.loads(reader_output)
        temperature = sensor_info['temperature']

        persists(measurement='home_pressure', fields={"value": float(pressure)}, location="in", time=current_time)
        persists(measurement='home_temperature', fields={"value": float(temperature)}, location="in",
                 time=current_time)
    except Exception as err:
        logging.error(err)


def out_sensors():
    try:
        out_info = openweathermap()

        persists(measurement='home_pressure',
                 fields={"value": float(out_info['weather']['pressure'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_humidity',
                 fields={"value": float(out_info['weather']['humidity'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_temperature',
                 fields={"value": float(out_info['weather']['temp'])},
                 location="out",
                 time=current_time)
        persists(measurement='home_rain',
                 fields={"value": out_info['weather']['rain'], "level": out_info['weather']['rain_level']},
                 location="out",
                 time=current_time)
        persists(measurement='home_uvi',
                 fields={"value": float(out_info['uvi']['value'])},
                 location="out",
                 time=current_time)
        for f in out_info['forecast']:
            persists(measurement='home_weather_forecast',
                     fields=f['fields'],
                     location="out",
                     time=datetime.datetime.utcfromtimestamp(f['dt']).isoformat())

    except Exception as err:
        logging.error(err)


influx_client = InfluxDBClient(host=influxdb_host, port=influxdb_port, database=influxdb_database)
current_time = datetime.datetime.utcnow().isoformat()

in_sensors()
out_sensors()

I’m running this Python script on a Raspberry Pi 3 with a Sense HAT. The Sense HAT has an atmospheric pressure sensor, so I will also retrieve the pressure from it.

From openweathermap I will obtain:

  • Current temperature/humidity and atmospheric pressure in the street
  • UV Index (the measure of the level of UV radiation)
  • Weather conditions (if it’s raining or not)
  • Weather forecast

I run this script with the Raspberry Pi’s crontab every 5 minutes. That means that I’ve got a fancy time series ready to be shown with Grafana.
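The crontab entry is something like this (the script path is an assumption):

*/5 * * * * python /home/pi/weather/station.py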

Here we can see the dashboard

Source code available in my github account.

Playing with Docker, MQTT, Grafana, InfluxDB, Python and Arduino

I must admit this post is just an excuse to play with Grafana and InfluxDB. InfluxDB is a cool database especially designed to work with time series. Grafana is an open source tool for time series analytics. I want to build a simple prototype. The idea is:

  • One Arduino device (an ESP32) emits an MQTT event to a mosquitto server. I’ll use a potentiometer to emulate a sensor (imagine here, for example, a temperature sensor instead of a potentiometer). I’ve used this circuit before in other projects
  • One Python script will be listening to the MQTT event on my Raspberry Pi and it will persist the value to the InfluxDB database
  • I will monitor the state of the time series given by the potentiometer with Grafana
  • I will create one alert in Grafana (for example, when the average value within 10 seconds is above a threshold) and I will trigger a webhook when the alert changes its state
  • One microservice (a Python Flask server) will be listening to the webhook and it will emit an MQTT event depending on the state
  • Another Arduino device (a NodeMcu in this case) will be listening to this MQTT event and it will activate a LED: a red one if the alert is ON and a green one if the alert is OFF

The server
As I said before we’ll need three servers:

  • MQTT server (mosquitto)
  • InfluxDB server
  • Grafana server

We’ll use Docker. I’ve got a Docker host running on a Raspberry Pi 3. The Raspberry Pi is an ARM device, so we need Docker images for this architecture.

version: '2'

services:
  mosquitto:
    image: pascaldevink/rpi-mosquitto
    container_name: mosquitto
    ports:
     - "9001:9001"
     - "1883:1883"
    restart: always
  
  influxdb:
    image: hypriot/rpi-influxdb
    container_name: influxdb
    restart: always
    environment:
     - INFLUXDB_INIT_PWD="password"
     - PRE_CREATE_DB="iot"
    ports:
     - "8083:8083"
     - "8086:8086"
    volumes:
     - ~/docker/rpi-influxdb/data:/data

  grafana:
    image: fg2it/grafana-armhf:v4.6.3
    container_name: grafana
    restart: always
    ports:
     - "3000:3000"
    volumes:
      - grafana-db:/var/lib/grafana
      - grafana-log:/var/log/grafana
      - grafana-conf:/etc/grafana

volumes:
  grafana-db:
    driver: local  
  grafana-log:
    driver: local
  grafana-conf:
    driver: local

ESP32
The ESP32 part is very simple. We only need to connect our potentiometer to the ESP32. The potentiometer has three pins: GND, Signal and Vcc. For the signal we’ll use pin 32.

We only need to configure our WiFi network, connect to our MQTT server and emit the potentiometer value on each loop.

#include <PubSubClient.h>
#include <WiFi.h>

const int potentiometerPin = 32;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// MQTT configuration
const char* server = "192.168.1.111";
const char* topic = "/pot";
const char* clientName = "com.gonzalo123.esp32";

String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void mqttEmit(String topic, String value)
{
  client.publish((char*) topic.c_str(), (char*) value.c_str());
}

void setup() {
  Serial.begin(115200);

  wifiConnect();
  client.setServer(server, 1883);
  delay(1500);
}

void loop() {
  if (!client.connected()) {
    mqttReConnect();
  }
  int current = (int) ((analogRead(potentiometerPin) * 100) / 4095);
  mqttEmit(topic, (String) current);
  delay(500);
}

MQTT listener

The ESP32 emits an event (“/pot”) with the value of the potentiometer, so we’re going to create an MQTT listener that listens to that topic and persists the value to InfluxDB.

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient
import datetime
import logging


def persists(msg):
    current_time = datetime.datetime.utcnow().isoformat()
    json_body = [
        {
            "measurement": "pot",
            "tags": {},
            "time": current_time,
            "fields": {
                "value": int(msg.payload)
            }
        }
    ]
    logging.info(json_body)
    influx_client.write_points(json_body)


logging.basicConfig(level=logging.INFO)
influx_client = InfluxDBClient('docker', 8086, database='iot')
client = mqtt.Client()

client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/pot")
client.on_message = lambda client, userdata, msg: persists(msg)

client.connect("docker", 1883, 60)

client.loop_forever()

Grafana
In Grafana we need to do two things. First, create a datasource pointing to our InfluxDB server. It’s pretty straightforward to do.

Finally we’ll create a dashboard. We only have one time series with the value of the potentiometer. I must admit that my dashboard has a lot of things that I’ve created only for fun.

That’s the query that I’m using to plot the main graph:

SELECT 
  last("value") FROM "pot" 
WHERE 
  time >= now() - 5m 
GROUP BY 
  time($interval) fill(previous)

Here we can see the dashboard

And here is my alert configuration:

I’ve also created a notification channel with a webhook. Grafana will use this webhook to notify us when the state of the alert changes.

Webhook listener
Grafana will emit a webhook, so we’ll need a REST endpoint to collect the webhook calls. I normally use PHP/Lumen to create REST servers, but in this project I’ll use Python and Flask.

We need to handle HTTP Basic Auth and emit an MQTT event. MQTT is a very simple protocol, but it has one very nice feature that fits like a glove here. Let me explain it:

Imagine that we’ve got our system up and running and the state is “ok”. Now we connect one device (for example one big red/green light). Since the “ok” event was fired before we connected the lights, our green light will not be switched on. We need to wait until the “alert” event if we want to see any light. That’s not cool.

MQTT allows us to “retain” messages. That means that we can emit messages with the “retain” flag to one topic and, when we connect one device to this topic later, it will receive the message. That’s exactly what we need here.
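We can see the retained flag in action with the mosquitto command line tools (assuming the broker runs on the same “docker” host used in the scripts):

# publish a retained message to the alert topic
mosquitto_pub -h docker -t /alert -m "1" -r
# a client that subscribes afterwards still gets the retained "1" immediately
mosquitto_sub -h docker -t /alert

And here is the Flask server that handles the webhook and emits the retained MQTT events: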

from flask import Flask
from flask import request
from flask_httpauth import HTTPBasicAuth
import paho.mqtt.client as mqtt
import json

client = mqtt.Client()

app = Flask(__name__)
auth = HTTPBasicAuth()

# http basic auth credentials
users = {
    "user": "password"
}


@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None


@app.route('/alert', methods=['POST'])
@auth.login_required
def alert():
    client.connect("docker", 1883, 60)
    data = json.loads(request.data.decode('utf-8'))
    if data['state'] == 'alerting':
        client.publish(topic="/alert", payload="1", retain=True)
    elif data['state'] == 'ok':
        client.publish(topic="/alert", payload="0", retain=True)

    client.disconnect()

    return "ok"


if __name__ == "__main__":
    app.run(host='0.0.0.0')

Nodemcu

Finally, the NodeMcu. This part is similar to the ESP32 one. Our LEDs are on pins 4 and 5. We also need to configure the WiFi and connect to the MQTT server. NodeMcu and ESP32 are similar devices, but not the same. For example, we need to use different libraries to connect to the WiFi.

This device will be listening to the MQTT event and will turn on one LED or the other depending on the state.

#include <PubSubClient.h>
#include <ESP8266WiFi.h>

const int ledRed = 4;
const int ledGreen = 5;

// Wifi configuration
const char* ssid = "my_wifi_ssid";
const char* password = "my_wifi_password";

// mqtt configuration
const char* server = "192.168.1.111";
const char* topic = "/alert";
const char* clientName = "com.gonzalo123.nodemcu";

int value;
int percent;
String payload;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("WiFi connected.");
  Serial.print("IP address: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {

  Serial.print("Message arrived [");
  Serial.print(topic);

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }
  cleanLeds();
  int value = data.toInt();
  switch (value)  {
    case 1:
      digitalWrite(ledRed, HIGH);
      break;
    case 0:
      digitalWrite(ledGreen, HIGH);
      break;
  }
  Serial.print("] value:");
  Serial.println((int) value);
}

void cleanLeds() {
  digitalWrite(ledRed, LOW);
  digitalWrite(ledGreen, LOW);
}

void setup() {
  Serial.begin(9600);
  pinMode(ledRed, OUTPUT);
  pinMode(ledGreen, OUTPUT);
  cleanLeds();
  Serial.println("start");

  wifiConnect();
  client.setServer(server, 1883);
  client.setCallback(callback);

  delay(1500);
}

void loop() {
  Serial.print(".");
  if (!client.connected()) {
    mqttReConnect();
  }

  client.loop();
  delay(500);
}

Here you can see the working prototype in action

And here is the source code.

Happy logins. Only the happy user will pass

Login forms are boring. In this example we’re going to create a special login form, only for happy users. Happiness is something complicated but, at least, a smile is easier to obtain, and everything is better with a smile :). Our login form will only appear if the user smiles. Let’s start.

I must admit that this project is just an excuse to play with different technologies that I wanted to try. Weeks ago I discovered one library called face_classification. With this library I can perform emotion classification from a picture. The idea is simple: we create a RabbitMQ RPC server script that answers with the emotion of the face within a picture. Then we obtain one frame from the video stream of the webcam (with HTML5) and we send this frame via websocket to a socket.io server. This websocket server (node) asks the RabbitMQ RPC server for the emotion and sends back to the browser the emotion and the original picture with a rectangle over the face.

Frontend

Since we’re going to use socket.io for websockets, we will use the same script to serve the frontend (the login form and the HTML5 video capture).

<!doctype html>
<html>
<head>
    <title>Happy login</title>
    <link rel="stylesheet" href="css/app.css">
</head>
<body>

<div id="login-page" class="login-page">
    <div class="form">
        <h1 id="nonHappy" style="display: block;">Only the happy user will pass</h1>
        <form id="happyForm" class="login-form" style="display: none" onsubmit="return false;">
            <input id="user" type="text" placeholder="username"/>
            <input id="pass" type="password" placeholder="password"/>
            <button id="login">login</button>
            <p></p>
            <img id="smile" width="426" height="320" src=""/>
        </form>
        <div id="video">
            <video style="display:none;"></video>
            <canvas id="canvas" style="display:none"></canvas>
            <canvas id="canvas-face" width="426" height="320"></canvas>
        </div>
    </div>
</div>

<div id="private" style="display: none;">
    <h1>Private page</h1>
</div>

<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<script src="https://unpkg.com/sweetalert/dist/sweetalert.min.js"></script>
<script type="text/javascript" src="/socket.io/socket.io.js"></script>
<script type="text/javascript" src="/js/app.js"></script>
</body>
</html>

Here we’ll connect to the websocket and emit the webcam frame to the server. We’ll also be listening to one event called ‘response’, where the server will notify us when an emotion has been detected.

let socket = io.connect(location.origin),
    img = new Image(),
    canvasFace = document.getElementById('canvas-face'),
    context = canvasFace.getContext('2d'),
    canvas = document.getElementById('canvas'),
    width = 640,
    height = 480,
    delay = 1000,
    jpgQuality = 0.6,
    isHappy = false;

socket.on('response', function (r) {
    let data = JSON.parse(r);
    if (data.length > 0 && data[0].hasOwnProperty('emotion')) {
        if (isHappy === false && data[0]['emotion'] === 'happy') {
            isHappy = true;
            swal({
                title: "Good!",
                text: "All is better with one smile!",
                icon: "success",
                buttons: false,
                timer: 2000,
            });

            $('#nonHappy').hide();
            $('#video').hide();
            $('#happyForm').show();
            $('#smile')[0].src = 'data:image/png;base64,' + data[0].image;
        }

        img.onload = function () {
            context.drawImage(this, 0, 0, canvasFace.width, canvasFace.height);
        };

        img.src = 'data:image/png;base64,' + data[0].image;
    }
});

navigator.getMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia);

navigator.getMedia({video: true, audio: false}, (mediaStream) => {
    let video = document.getElementsByTagName('video')[0];
    video.src = window.URL.createObjectURL(mediaStream);
    video.play();
    setInterval(((video) => {
        return function () {
            let context = canvas.getContext('2d');
            canvas.width = width;
            canvas.height = height;
            context.drawImage(video, 0, 0, width, height);
            socket.emit('img', canvas.toDataURL('image/jpeg', jpgQuality));
        }
    })(video), delay)
}, error => console.log(error));

$(() => {
    $('#login').click(() => {
        $('#login-page').hide();
        $('#private').show();
    })
});

Backend
Finally we’ll work on the backend. Basically I’ve checked the examples that we can see in the face_classification project and tuned them a bit according to my needs.

from rabbit import builder
import logging
import numpy as np
from keras.models import load_model
from utils.datasets import get_labels
from utils.inference import detect_faces
from utils.inference import draw_text
from utils.inference import draw_bounding_box
from utils.inference import apply_offsets
from utils.inference import load_detection_model
from utils.inference import load_image
from utils.preprocessor import preprocess_input
import cv2
import json
import base64

detection_model_path = 'trained_models/detection_models/haarcascade_frontalface_default.xml'
emotion_model_path = 'trained_models/emotion_models/fer2013_mini_XCEPTION.102-0.66.hdf5'
emotion_labels = get_labels('fer2013')
font = cv2.FONT_HERSHEY_SIMPLEX

# hyper-parameters for bounding boxes shape
emotion_offsets = (20, 40)

# loading models
face_detection = load_detection_model(detection_model_path)
emotion_classifier = load_model(emotion_model_path, compile=False)

# getting input model shapes for inference
emotion_target_size = emotion_classifier.input_shape[1:3]


def format_response(response):
    decoded_json = json.loads(response)
    return "Hello {}".format(decoded_json['name'])


def on_data(data):
    f = open('current.jpg', 'wb')
    f.write(base64.decodebytes(data))
    f.close()
    image_path = "current.jpg"

    out = []
    # loading images
    rgb_image = load_image(image_path, grayscale=False)
    gray_image = load_image(image_path, grayscale=True)
    gray_image = np.squeeze(gray_image)
    gray_image = gray_image.astype('uint8')

    faces = detect_faces(face_detection, gray_image)
    for face_coordinates in faces:
        x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)
        gray_face = gray_image[y1:y2, x1:x2]

        try:
            gray_face = cv2.resize(gray_face, (emotion_target_size))
        except:
            continue

        gray_face = preprocess_input(gray_face, True)
        gray_face = np.expand_dims(gray_face, 0)
        gray_face = np.expand_dims(gray_face, -1)
        emotion_label_arg = np.argmax(emotion_classifier.predict(gray_face))
        emotion_text = emotion_labels[emotion_label_arg]
        color = (0, 0, 255)

        draw_bounding_box(face_coordinates, rgb_image, color)
        draw_text(face_coordinates, rgb_image, emotion_text, color, 0, -50, 1, 2)
        bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)

        cv2.imwrite('predicted.png', bgr_image)
        data = open('predicted.png', 'rb').read()
        encoded = base64.encodebytes(data).decode('utf-8')
        out.append({
            'image': encoded,
            'emotion': emotion_text,
        })

    return out

logging.basicConfig(level=logging.WARN)
rpc = builder.rpc("image.check", {'host': 'localhost', 'port': 5672})
rpc.server(on_data)

Here you can see the working prototype in action.

Maybe we can do the same with other tools and even more simply, but as I said before this example is just an excuse to play with these technologies:

  • Send webcam frames via websockets
  • Connect one web application to a Python application via RabbitMQ RPC
  • Play with face classification script

Please don’t use this script in production. It’s just a proof of concept. With smiles, but a proof of concept 🙂

You can see the project in my github account

Opencv and esp32 experiment. Moving a servo with my face alignment

One Saturday morning I was having breakfast and I discovered the face_recognition project. I started to play with the opencv example. I put in my picture and, wow! It works like a charm. It’s pretty straightforward to detect my face and I can also obtain the face landmarks. One of the landmarks that I can get is the nose tip. Playing with this script I realized that with the nose tip I can determine the position of the face. I can see if my face is aligned with the center or if I move it to one side. As I have a new IoT device (an ESP32), I wanted to do something with it: for example, control a servo (SG90) and move it from left to right depending on my face position.

First we have the main Python script. With this script I detect my face, the nose tip and the position of my face. With this position I will emit an event to an MQTT broker (a mosquitto server running on my laptop).

import face_recognition
import cv2
import numpy as np
import math
import paho.mqtt.client as mqtt

video_capture = cv2.VideoCapture(0)

gonzalo_image = face_recognition.load_image_file("gonzalo.png")
gonzalo_face_encoding = face_recognition.face_encodings(gonzalo_image)[0]

known_face_encodings = [
    gonzalo_face_encoding
]
known_face_names = [
    "Gonzalo"
]

RED = (0, 0, 255)
GREEN = (0, 255, 0)
BLUE = (255, 0, 0)

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
status = ''
labelColor = GREEN

client = mqtt.Client()
client.connect("localhost", 1883, 60)

while True:
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    face_locations = face_recognition.face_locations(rgb_small_frame)
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
    face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame, face_locations)

    face_names = []
    for face_encoding, face_landmarks in zip(face_encodings, face_landmarks_list):
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        if True in matches:
            first_match_index = matches.index(True)
            name = known_face_names[first_match_index]

            nose_tip = face_landmarks['nose_tip']
            maxLandmark = max(nose_tip)
            minLandmark = min(nose_tip)

            diff = math.fabs(maxLandmark[1] - minLandmark[1])
            if diff < 2:
                status = "center"
                labelColor = BLUE
                client.publish("/face/{}/center".format(name), "1")
            elif maxLandmark[1] > minLandmark[1]:
                status = ">>>>"
                labelColor = RED
                client.publish("/face/{}/left".format(name), "1")
            else:
                status = "<<<<"
                client.publish("/face/{}/right".format(name), "1")
                labelColor = RED

            shape = np.array(face_landmarks['nose_bridge'], np.int32)
            cv2.polylines(frame, [shape.reshape((-1, 1, 2)) * 4], True, (0, 255, 255))
            cv2.fillPoly(frame, [shape.reshape((-1, 1, 2)) * 4], GREEN)

        face_names.append("{} {}".format(name, status))

    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        if 'Unknown' not in name.split(' '):
            cv2.rectangle(frame, (left, top), (right, bottom), labelColor, 2)
            cv2.rectangle(frame, (left, bottom - 35), (right, bottom), labelColor, cv2.FILLED)
            cv2.putText(frame, name, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 1.0, (255, 255, 255), 1)
        else:
            cv2.rectangle(frame, (left, top), (right, bottom), BLUE, 2)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

Now another Python script will be listening to the MQTT events and it will trigger one event with the position of the servo. I know that this second Python script is maybe unnecessary (we could move its logic to the ESP32 and the main opencv script), but I was playing with MQTT and I wanted to decouple it a little bit.

import paho.mqtt.client as mqtt

class Iot:
    _state = None
    _client = None
    _dict = {
        'left': 0,
        'center': 1,
        'right': 2
    }

    def __init__(self, client):
        self._client = client

    def emit(self, name, event):
        if event != self._state:
            self._state = event
            self._client.publish("/servo", self._dict[event])
            print("emit /servo event with value {} - {}".format(self._dict[event], name))


def on_message(topic, iot):
    data = topic.split("/")
    name = data[2]
    action = data[3]
    iot.emit(name, action)


client = mqtt.Client()
iot = Iot(client)

client.on_connect = lambda self, mosq, obj, rc: self.subscribe("/face/#")
client.on_message = lambda client, userdata, msg: on_message(msg.topic, iot)

client.connect("localhost", 1883, 60)
client.loop_forever()

And finally the ESP32. Here we will connect to my WiFi network and to my MQTT broker.

#include <WiFi.h>
#include <PubSubClient.h>

#define LED0 17
#define LED1 18
#define LED2 19
#define SERVO_PIN 5

// wifi configuration
const char* ssid = "my_ssid";
const char* password = "my_wifi_password";
// mqtt configuration
const char* server = "192.168.1.111"; // mqtt broker ip
const char* topic = "/servo";
const char* clientName = "com.gonzalo123.esp32";

int channel = 1;
int hz = 50;
int depth = 16;

WiFiClient wifiClient;
PubSubClient client(wifiClient);

void wifiConnect() {
  Serial.print("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print("*");
  }

  Serial.print("WiFi connected: ");
  Serial.println(WiFi.localIP());
}

void mqttReConnect() {
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(clientName)) {
      Serial.println("connected");
      client.subscribe(topic);
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);

  String data;
  for (int i = 0; i < length; i++) {
    data += (char)payload[i];
  }

  int value = data.toInt();
  cleanLeds();
  switch (value)  {
    case 0:
      ledcWrite(1, 3400);
      digitalWrite(LED0, HIGH);
      break;
    case 1:
      ledcWrite(1, 4900);
      digitalWrite(LED1, HIGH);
      break;
    case 2:
      ledcWrite(1, 6400);
      digitalWrite(LED2, HIGH);
      break;
  }
  Serial.print("] value:");
  Serial.println((int) value);
}

void cleanLeds() {
  digitalWrite(LED0, LOW);
  digitalWrite(LED1, LOW);
  digitalWrite(LED2, LOW);
}

void setup() {
  Serial.begin(115200);

  ledcSetup(channel, hz, depth);
  ledcAttachPin(SERVO_PIN, channel);

  pinMode(LED0, OUTPUT);
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  cleanLeds();
  wifiConnect();
  client.setServer(server, 1883);
  client.setCallback(callback);

  delay(1500);
}

void loop()
{
  if (!client.connected()) {
    mqttReConnect();
  }

  client.loop();
  delay(100);
}
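The ledcWrite values above map to the usual SG90 pulse widths. A quick sanity check (a 16-bit timer at 50 Hz means a 20 ms period):

# duty / 2^16 * 20 ms
for duty in (3400, 4900, 6400):
    print(duty, round(duty / 2 ** 16 * 20, 2), "ms")  # ~1.04, ~1.5, ~1.95 ms

Roughly 1 ms, 1.5 ms and 2 ms: left, center and right.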

Here is a video with the working prototype in action.

The source code is available in my github account.

Tracking blue objects with Opencv and Python

Opencv is an amazing open source computer vision library. Today we’re going to hack a little bit with it. The idea is to track blue objects. Why blue objects? Maybe because I’ve got a couple of them on my desk. Let’s start.

The idea is simple. We’ll create a mask: a black and white image where each blue pixel turns into a white one and the rest of the pixels stay black.

Original frame:

Masked one:

Now we only need to put a bounding rectangle around the blue object.

import cv2
import numpy

cam = cv2.VideoCapture(0)
kernel = numpy.ones((5 ,5), numpy.uint8)

while (True):
    ret, frame = cam.read()
    rangomax = numpy.array([255, 50, 50]) # B, G, R
    rangomin = numpy.array([51, 0, 0])
    mask = cv2.inRange(frame, rangomin, rangomax)
    # reduce the noise
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    x, y, w, h = cv2.boundingRect(opening)

    cv2.rectangle(frame, (x, y), (x+w, y + h), (0, 255, 0), 3)
    cv2.circle(frame, (x + w // 2, y + h // 2), 5, (0, 0, 255), -1)

    cv2.imshow('camera', frame)

    k = cv2.waitKey(1) & 0xFF

    if k == 27:
        break

cam.release()
cv2.destroyAllWindows()

And that’s all. A nice hack for a Sunday morning

Source code in my github account

NFC tag reader with Raspberry Pi

In another post we spoke about NFC tag readers and Arduino. Today I’ll do the same but with a Raspberry Pi. Why? More or less everything we can do with an Arduino board we can also do with a Raspberry Pi (and vice versa). Sometimes Arduino is too low level for me. For example, if we want to connect an Arduino to the LAN we need to set up the MAC address by hand. We can do it, but this operation is trivial with a Raspberry Pi and Python. We can connect our Arduino to a PostgreSQL database, but it’s not so straightforward. My background is also better with Python than with C++, so I feel more comfortable working with the Raspberry Pi. I’m not saying that the RPi is better than Arduino. With Arduino, for example, we don’t need to worry about starting processes, reboots and that kind of stuff that we need to worry about with computers. Arduino and Raspberry Pi are different tools: sometimes one will be better and sometimes the other.

So let’s start by connecting our RFID/NFC sensor MFRC522 to our Raspberry Pi 3.
The wiring:

  • RC522 VCC > RP 3V3
  • RC522 RST > RPGPIO25
  • RC522 GND > RP Ground
  • RC522 MISO > RPGPIO9 (MISO)
  • RC522 MOSI > RPGPIO10 (MOSI)
  • RC522 SCK > RPGPIO11 (SCLK)
  • RC522 NSS > RPGPIO8 (CE0)
  • RC522 IRQ > RPNone

I will use a Python port of the example code for the NFC module MF522-AN, thanks to mxgxw. Remember that the RC522 talks to the Pi over SPI, so the SPI interface has to be enabled first (for example via raspi-config).

I’m going to use two Python scripts: one to control the NFC reader.

import RPi.GPIO as gpio
import MFRC522
import sys
import time

MIFAREReader = MFRC522.MFRC522()
GREEN = 11
RED = 13
YELLOW = 15
SERVO = 12

gpio.setup(GREEN, gpio.OUT, initial=gpio.LOW)
gpio.setup(RED, gpio.OUT, initial=gpio.LOW)
gpio.setup(YELLOW, gpio.OUT, initial=gpio.LOW)
gpio.setup(SERVO, gpio.OUT)
p = gpio.PWM(SERVO, 50)

good = [211, 200, 106, 217, 168]

def servoInit():
    print "servoInit"
    p.start(7.5)

def servoOn():
    print "servoOn"
    p.ChangeDutyCycle(4.5)

def servoNone():
    print "servoOn"
    p.ChangeDutyCycle(7.5)

def servoOff():
    print "servoOff"
    p.ChangeDutyCycle(10.5)

def clean():
    gpio.output(GREEN, False)
    gpio.output(RED, False)
    gpio.output(YELLOW, False)

def main():
    servoInit()
    while 1:
        (status, TagType) = MIFAREReader.MFRC522_Request(MIFAREReader.PICC_REQIDL)
        if status == MIFAREReader.MI_OK:
            (status, backData) = MIFAREReader.MFRC522_Anticoll()
            gpio.output(YELLOW, True)
            if status == MIFAREReader.MI_OK:
                mac = []
                for x in backData[0:-1]:
                    mac.append(hex(x).split('x')[1].upper())
                print ":".join(mac)
                if good == backData:
                    servoOn()
                    gpio.output(GREEN, True)
                    time.sleep(0.5)
                    servoNone()
                else:
                    gpio.output(RED, True)
                    servoOff()
                    time.sleep(0.5)
                    servoNone()
                time.sleep(1)
                gpio.output(YELLOW, False)
                gpio.output(RED, False)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        clean()
        MIFAREReader.GPIO_CLEEN()
        sys.exit(0)

And another one to control the push button. I use this second script only to see how to use separate processes.

import RPi.GPIO as gpio
import time

gpio.setwarnings(False)
gpio.setmode(gpio.BOARD)
BUTTON = 40
GREEN = 11
RED = 13
YELLOW = 15

gpio.setup(GREEN, gpio.OUT)
gpio.setup(RED, gpio.OUT)
gpio.setup(YELLOW, gpio.OUT)

gpio.setup(BUTTON, gpio.IN, pull_up_down=gpio.PUD_DOWN)
gpio.add_event_detect(BUTTON, gpio.RISING)
def leds(status):
    gpio.output(YELLOW, status)
    gpio.output(GREEN, status)
    gpio.output(RED, status)

def buttonCallback(pin):
    if gpio.input(pin) == 1:
        print "STOP"
        leds(True)
        time.sleep(0.2)
        leds(False)
        time.sleep(0.2)
        leds(True)
        time.sleep(0.2)
        leds(False)

gpio.add_event_callback(BUTTON, buttonCallback)
while 1:
    pass

Here is a video with a working example (I’ve also put in a servo and three LEDs, only because it looks good :))

Code in my github account