A couple of weeks ago I attended a serverless course. I’ve played with lambdas from time to time (basically when AWS forced me to use them), but without knowing exactly what I was doing. After this course I know how to work with the Serverless Framework and I understand the Lambda world better. Today I want to hack a little bit and create a simple Python service to obtain random numbers. Let’s start.
We don’t need Flask to create lambdas, but as I’m very comfortable with it, we’ll use it here.
Basically I follow the steps that I’ve read here.
[sourcecode language="python"]
from flask import Flask

app = Flask(__name__)


@app.route("/", methods=["GET"])
def hello():
    return "Hello from lambda"


if __name__ == '__main__':
    app.run()
[/sourcecode]
And that’s all. Our “Hello world” lambda service with Python and Flask is up and running.
Now we’re going to create a “more complex” service: one that returns a random number using the random.randint function.
randint requires two parameters: start and end. We’re going to pass the end parameter to our service. The start value will be parameterized. I’ll parameterize it only because I want to play with AWS’s Parameter Store (SSM). It’s just an excuse.
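Before wiring in SSM, a minimal sketch of such an endpoint could look like this (the route shape and the hard-coded start value are assumptions, not the final service):

```python
from random import randint

from flask import Flask, jsonify

app = Flask(__name__)

# In the real service the start value comes from AWS SSM Parameter Store;
# it's hard-coded here to keep the sketch self-contained.
START = 0


@app.route("/<int:end>", methods=["GET"])
def random_number(end):
    # randint is inclusive on both ends
    return jsonify(number=randint(START, end))
```

Calling GET /5 returns a JSON body like {"number": 3}, with the number between 0 and 5.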
Let’s start with the service:
[sourcecode language="python"]
from random import randint
from flask import Flask, jsonify
import boto3
from ssm_parameter_store import SSMParameterStore
[/sourcecode]
One of the first posts on my blog was about pivot tables. I’d created a library to pivot tables in my PHP scripts. The library is not very beautiful (it throws a lot of warnings), but it works. These days I’m playing with Python data analysis and I’m using Pandas. The purpose of this post is something that I like a lot: learn by doing. So I want to do the same operations that I did eight years ago in that post, but now with Pandas. Let’s start.
I’ll start with the same datasource that I used back then: one simple recordset with clicks and number of users.
I create a dataframe with this data:
[sourcecode language="python"]
import numpy as np
import pandas as pd
[/sourcecode]
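To give the idea before the real dataset, here is a minimal sketch of pivoting such a recordset with pandas (the column names and figures are made up for illustration):

```python
import pandas as pd

# Hypothetical recordset: clicks and users per host and month
df = pd.DataFrame({
    "host": ["host1", "host1", "host2", "host2"],
    "month": ["jan", "feb", "jan", "feb"],
    "clicks": [10, 20, 30, 40],
    "users": [1, 2, 3, 4],
})

# One row per host, one column per month, summing the clicks
pivot = df.pivot_table(index="host", columns="month", values="clicks", aggfunc="sum")
```

pivot_table does in one line what my old PHP library did with nested loops.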
These days I’ve been playing with Nameko, a Python framework for building microservices. Today I want to upgrade a small pet project that I’ve got at home to monitor the bandwidth of my internet connection. I want to use one Nameko microservice using the Timer entrypoint.
That’s the worker:
[sourcecode language="python"]
from nameko.timer import timer
import datetime
import logging
import os

import speedtest
from dotenv import load_dotenv
from influxdb import InfluxDBClient
[/sourcecode]
I need to adapt my docker-compose file to include a RabbitMQ server (Nameko needs a RabbitMQ message broker).
[sourcecode language="xml"]
version: '3'
[/sourcecode]
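The relevant part of the compose file could look like this (service names, image tags and the AMQP URI are assumptions):

```yaml
version: '3'

services:
  rabbit:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"

  worker:
    build: .
    depends_on:
      - rabbit
    environment:
      - AMQP_URI=amqp://guest:guest@rabbit
```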
When we work with SPAs and web applications we need to deal with the browser’s cache. Sometimes we change our static files, but the client’s browser uses a cached version of the file instead of the new one. We can tell the user: please empty your cache to use the new version. But most of the time the user doesn’t know what we’re talking about, and we have a problem. There’s a technique called cache busting used to bypass this issue. It consists of changing the name of the file (or adding an extra parameter) to ensure that the browser sends a different request to the server, preventing it from reusing the cached version of the file.
When we work with SAPUI5 applications on SCP, we only need to use the cache-buster version of sap-ui-core.
With this configuration, our framework will use a “cache buster friendly” version of our files and SCP will serve them properly.
For example, when our framework wants the /dist/Component.js file, the browser will request /dist/~1541685070813~/Component.js from the server, and the server will serve the file /dist/Component.js. As I said before, when we work with SCP our standard build process automatically takes care of this. It creates a file called sap-ui-cachebuster-info.json where we can find all our files with a hash that the build process changes each time the file changes.
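The mapping itself is simple enough to sketch in Python (the helper names are mine, not part of the build process):

```python
# Given the contents of sap-ui-cachebuster-info.json, build the "busted"
# URL for a file, and resolve a busted URL back to the real path.

def busted_url(cachebuster_info, path):
    # Component.js -> /~1541685070813~/Component.js
    return "/~{}~/{}".format(cachebuster_info[path], path)


def resolve(url):
    # /~1541685070813~/Component.js -> Component.js
    prefix, _, path = url.lstrip("/").partition("~/")
    return path if prefix.startswith("~") else url.lstrip("/")
```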
It works like a charm, but I don’t always use SCP. Sometimes I use OpenUI5 on an nginx server, for example, so the cache buster “doesn’t work”. That’s a problem because I need to deal with browser caches again each time we deploy a new version of the application. I wanted to solve the issue. Let me explain how I did it.
Since I was using a Lumen/PHP server for the backend, my first idea was to create a dynamic route in Lumen to handle cache-buster URLs. With this approach I knew I could solve the problem, but there was something I didn’t like: I’d be using a dynamic server to serve static content. I don’t have huge traffic, so I could use this approach, but it isn’t beautiful.
My second approach was: OK, I’ve got a sap-ui-cachebuster-info.json file where I can see all the files that the cache buster will use, along with their hashes. So, why not create those files in my build script? With this approach I create the full static structure each time I deploy the application, without needing any server-side scripting language to generate dynamic content. OpenUI5 uses grunt, so I can create a simple grunt task to create my files.
[sourcecode language="js"]
'use strict';

var fs = require('fs');
var path = require('path');
var chalk = require('chalk');

module.exports = function(grunt) {
  var name = 'cacheBuster';
  var info = 'Generates cache buster files';

  var cacheBuster = function() {
    var config = grunt.config.get(name);
    var data, t, src, dest, dir, prop;

    data = grunt.file.readJSON(config.src + '/sap-ui-cachebuster-info.json');
    for (prop in data) {
      if (data.hasOwnProperty(prop)) {
        t = data[prop];
        src = config.src + '/' + prop;
        dest = config.src + '/~' + t + '~/' + prop;
        grunt.verbose.writeln(
          name + ': ' + chalk.cyan(path.basename(src)) + ' to ' +
          chalk.cyan(dest) + '.');
        dir = path.dirname(dest);
        grunt.file.mkdir(dir);
        fs.copyFileSync(src, dest);
      }
    }
  };

  grunt.registerTask(name, info, cacheBuster);
};
[/sourcecode]
And set up the path where the ui5 task generates our dist files:
[sourcecode language="js"]
grunt.config.merge({
  pkg: grunt.file.readJSON('package.json'),
  …
  cacheBuster: {
    src: 'dist'
  }
});
[/sourcecode]
And that’s all. My users will enjoy (or suffer) the new versions of my applications without problems with cached files.
In the last projects I’ve been involved with I’ve been playing, in one way or another, with microservices, queues and things like that. I’m always facing the same tasks: building RPCs, workers, API gateways… Because of that I’ve been searching for a framework to help me with that kind of stuff, and finally I discovered Nameko. Basically, Nameko is the Python tool that I’ve been looking for. In this post I will create a simple proof of concept to learn how to integrate Nameko into my projects. Let’s start.
The POC is a simple API gateway that gives me the local time in ISO format. I can create a simple Python script to do it:
[sourcecode language="python"]
import datetime
import time

print(datetime.datetime.now().isoformat())
[/sourcecode]
We can also create a simple Flask API server to consume this information. The idea is to create an RPC worker to generate this information, and also another worker to send the local time, but taken from a PostgreSQL database (yes, I know it’s not very useful, but it’s just an excuse to use a PG database in the microservice).
We’re going to create two RPC workers. One gives the local time:
[sourcecode language="python"]
from nameko.rpc import rpc
import datetime


class LocalTimeService:
    # service and method names are illustrative; the original snippet was truncated
    name = "localtime_service"

    @rpc
    def local(self):
        return datetime.datetime.now().isoformat()
[/sourcecode]
Now we only need to set up the API gateway. With Nameko we can also create HTTP entrypoints (in the same way we create RPC ones), but I want to use it with Flask:
[sourcecode language="python"]
from flask import Flask
from nameko.standalone.rpc import ServiceRpcProxy
from dotenv import load_dotenv
import os
[/sourcecode]
Some time ago, when I was an ADSL user at home, I had a lot of problems with my internet connection. I was a bit lazy to switch to a fiber connection. I finally changed it, but while my Internet company was solving one incident, I started to hack a little bit on a simple and dirty script that monitors my connection speed (just for fun and to practise with InfluxDB and Grafana).
Today I’ve lost my quick and dirty script (please, Gonzalo, always keep an updated backup of the SD card of your Raspberry Pi server! Sometimes it crashes. It’s simple: “dd if=/dev/disk3 of=pi3.img” 🙂) and I want to rebuild it. This time I want to use Docker (just for fun). Let’s start.
To monitor the bandwidth we only need the speedtest-cli API. We can use this API from the command line and, as it’s a Python library, we can create a Python script that uses it.
Now we need to create the docker-compose file to orchestrate the infrastructure. The most complicated thing here is, maybe, configuring Grafana within the Docker files instead of opening the browser and creating the datasource and dashboard by hand. After a couple of hours navigating GitHub repositories, I finally created exactly what I needed for this post. Basically, it is a custom entrypoint for my Grafana host that creates the datasource and dashboard (via Grafana’s API).
In the previous project we moved one project to Docker. The idea was to keep exactly the same functionality (without even touching anything in the source code). Now we’re going to add more services. Yes, I know, it looks like overengineering (it’s exactly overengineering, indeed), but I want to build something with different services working together. Let’s start.
We’re going to change our original project a little bit. Now our frontend will only have one button. This button will increment the number of clicks, but we’re going to persist this information in a PostgreSQL database. Also, instead of incrementing the counter in the backend, our backend will emit an event to a RabbitMQ message broker. We’ll have one worker service listening to this event, and this worker will persist the information. The communication between the worker and the frontend (to show the incremented value) will be via websockets.
With those premises we are going to need:
Frontend: UI5 application
Backend: PHP/Lumen application
Worker: nodejs application that listens to a RabbitMQ event and serves the websocket server (using socket.io)
Nginx server
PostgreSQL database
RabbitMQ message broker
As in the previous examples, our PHP backend will be served via Nginx and PHP-FPM.
Here we can see the docker-compose file to set up all the services:
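The full compose file is not reproduced here, but a sketch of the services it declares could look like this (names, image tags and build paths are assumptions):

```yaml
version: '3'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
  api:
    build: ./src
  worker:
    build: ./worker
    depends_on:
      - rabbit
      - pg
  pg:
    image: postgres:9.6-alpine
  rabbit:
    image: rabbitmq:3-management
```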
[sourcecode language="js"]
// Please don't do this. Use lazy connections
// I'm 'lazy' to do it in this POC 🙂
pgClient.connect(function(err) {
  io.on('connection', function() {
    pgClient.query(sql, function(err, result) {
      var count = result.rows[0]['clickcount'];
      io.emit('click', {count: count});
    });
  });

rabbitMq.on('ready', function() {
  var queue = rabbitMq.queue('ui5');
  queue.bind('#');
[/sourcecode]
Database server:
[sourcecode language="xml"]
FROM postgres:9.6-alpine
COPY pg/init.sql /docker-entrypoint-initdb.d/
[/sourcecode]
As we can see, we’re going to generate the database structure in the first build:
[sourcecode language="sql"]
CREATE SCHEMA docker;
CREATE TABLE docker.clicks (
clickCount numeric(8) NOT NULL
);
ALTER TABLE docker.clicks
OWNER TO username;
INSERT INTO docker.clicks(clickCount) values (0);
[/sourcecode]
For the RabbitMQ server we’re going to use the official Docker image, so we don’t need to create a Dockerfile.
We have also changed our Nginx configuration a little bit. We want to use Nginx to serve the backend and also the socket.io server. That’s because we don’t want to expose different ports to the internet.
[sourcecode language="xml"]
server {
    listen 80;

    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/src/www;
}
[/sourcecode]
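The interesting part is proxying socket.io through the same port; a sketch of that location block, assuming the worker container is named worker and listens on port 3000:

```nginx
location /socket.io/ {
    proxy_pass http://worker:3000;
    # websockets need the Upgrade/Connection headers forwarded
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```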
And basically that’s all. Here we can also use a “production” docker-compose file, without exposing all the ports and without the filesystem mappings to our local machine (which are useful when we’re developing).
In the first part I spoke about how to build our working environment to work with UI5 locally instead of using WebIDE. Now, in this second part of the post, we’ll see how to set up our environment using Docker.
I’ll use docker-compose to set up the project. Basically, as I explained in the first part, the project has two parts: one backend and one frontend. We’re going to use exactly the same code for the frontend and for the backend.
The frontend is built over localneo. As it’s a node application, we’ll use a node:alpine base host.
In docker-compose we only need to map the port that we’ll expose on our host and, since we want to use this project in our development process, we’ll also map the volume to avoid regenerating our container each time we change the code.
With this configuration we’re exposing two ports: 8080 for the frontend and 8000 for the backend. We’re also mapping our local filesystem into the containers to avoid regenerating them each time we change the code.
We can also have a variation: a “production” version of our docker-compose file. I put production between quotation marks because normally we aren’t going to use localneo as a production server (please don’t do it). We’ll use SCP to host the frontend.
This configuration is just an example without filesystem mapping, without xdebug in the backend and without exposing the backend externally (only the frontend can use it).
When I work with SAPUI5 projects I normally use WebIDE. WebIDE is a great tool, but I’m more comfortable working locally with my local IDE.
I’ve had this idea in my mind for a long time, but I never found the time slot to work on it. Finally, after finding this project from Holger Schäfer on GitHub, I realized how easy it is, and I started to work with this project and adapt it to my needs.
The base of this project is localneo. Localneo starts an HTTP server based on the neo-app.json file. That means we’re going to use the same configuration that we have in production (in SCP). Of course we’ll need destinations. We only need one extra file called destinations.json where we’ll set up our destinations (it creates one HTTP proxy, nothing else).
In this project I’ll create a simple example application that works with one API server.
The backend
I’ll use in this example one PHP/Lumen application:
Basically it has two routes. In fact, both routes are the same: one accepts POST requests and the other one GET requests.
They’ll answer with the current date in JSON.
Now we’ll create our extra file called destinations.json. Localneo will use this file to create a web server to serve our frontend locally (using the destination).
As I said before, our backend will need Basic Authentication. This authentication will be set up in the destination configuration.
When we click on GET we’ll perform a GET request to the backend and we’ll increment the counter. The same with POST.
We’ll also show the date provided by the backend in a MessageToast.
As we’re working locally we can use local debugger in the backend and we can use breakpoints, inspect variables, etc.
We can also debug the frontend using Chrome developer tools, map our local filesystem in the browser and save files directly in Chrome.
Testing
We can test the backend using phpunit and run our tests with composer run test
As the backend is already tested, we’ll mock it here using a sinon (https://sinonjs.org/) server.
[sourcecode language="js"]
…
opaTest("When I click on GET the GET counter should increment by one", function (Given, When, Then) {
Given.iStartMyApp("./integration/Test1/index.html");
When.iClickOnGET();
Then.getCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
});
opaTest("When I click on POST the POST counter should increment by one", function (Given, When, Then) {
Given.iStartMyApp("./integration/Test1/index.html");
When.iClickOnPOST();
Then.postCounterShouldBeIncrementedByOne().and.iTeardownMyAppFrame();
});
…
[/sourcecode]
The configuration of our sinon server:
[sourcecode language="js"]
sap.ui.define(
    ["test/server"],
    function (server) {
        "use strict";

        return {
            init: function () {
                var oServer = server.initServer("/backend/api");
[/sourcecode]
Before uploading the application to SCP we need to build it. The build process optimizes the files and creates the Component-preload.js and sap-ui-cachebuster-info.json files (to ensure our users aren’t using a cached version of our application).
We’ll use grunt to build our application. Here we can see our Gruntfile.js
[sourcecode language="js"]
module.exports = function (grunt) {
    "use strict";
[/sourcecode]
In our Gruntfile I’ve also configured a watcher to build the application automatically and trigger the live reload (to reload my browser every time I change the frontend).
Now I can build the dist folder with the command:
grunt
Deploy to SCP
The deploy process is very well explained in Holger’s repository.
Basically, we need to download the MTA Archive Builder and extract it to ./ci/tools/mta.jar.
We also need the SAP Cloud Platform Neo Environment SDK (./ci/tools/neo-java-web-sdk/).
We can download those binaries from here.
Then we need to fill in our SCP credentials in ./ci/deploy-mta.properties and configure our application in ./ci/mta.yaml.
Finally, we run ./ci/deploy-mta.sh (here we can set up our SCP password to avoid typing it on each deploy).
Full code (frontend and backend) in my GitHub account.
Today I want to play with Grafana. Let me show you my idea:
I’ve got a Beewi temperature sensor. I’ve been playing with it previously. Today I want to show the temperature within a Grafana dashboard.
I want to play also with openweathermap API.
First I want to retrieve the temperature from the Beewi device. I’ve got a node script that connects to the device via Bluetooth using the noble library.
I only need to pass the sensor’s MAC address and I obtain a JSON with the current temperature:
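Reading that output from Python is then trivial (the exact JSON shape shown here is an assumption):

```python
import json

# Sample output from the node script (hypothetical shape)
output = '{"temperature": 21.5}'

temperature = json.loads(output)["temperature"]
```

In the real script this string comes from subprocess.check_output instead of a literal.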
And finally another script (this time a Python one) to collect data from the openweathermap API, collect data from the node script and store the information in an InfluxDB database.
[sourcecode language="python"]
from sense_hat import SenseHat
from influxdb import InfluxDBClient
import datetime
import logging
import requests
import json
from subprocess import check_output
import os
import sys
from dotenv import load_dotenv
[/sourcecode]
I’m running this Python script on a Raspberry Pi 3 with a Sense HAT. The Sense HAT has an atmospheric pressure sensor, so I will also retrieve the pressure from it.
From openweathermap I will obtain:
Current temperature/humidity and atmospheric pressure in the street
UV Index (the measure of the level of UV radiation)
Weather conditions (if it’s raining or not)
Weather forecast
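All those readings end up as time series points; a sketch of how they could be packed before writing them with InfluxDBClient.write_points (measurement and field names are illustrative, not the script’s real schema):

```python
import datetime


def build_points(temperature, pressure, uv_index):
    # One InfluxDB point with the current timestamp and the three readings
    now = datetime.datetime.utcnow().isoformat() + "Z"
    return [{
        "measurement": "weather",
        "time": now,
        "fields": {
            "temperature": temperature,
            "pressure": pressure,
            "uv_index": uv_index,
        },
    }]
```

A list of dicts in this shape is what write_points expects.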
I run this script from the Raspberry Pi’s crontab every 5 minutes. That means I’ve got a fancy time series ready to be shown with Grafana.
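The crontab entry looks something like this (the script path is an assumption):

```text
# m h dom mon dow command
*/5 * * * * /usr/bin/python3 /home/pi/temperature.py
```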