PHP application in SAP Cloud Platform, with PostgreSQL, Redis and Cloud Foundry

Continuing with my study of SAP’s cloud platform (SCP) and Cloud Foundry, today I’m going to build a simple PHP application. This application serves a simple Bootstrap landing page and uses HTTP basic authentication; the credentials are validated against a PostgreSQL database. It also has an API to retrieve the localtimestamp from the database server (just to play with a database server). I also want to play with Redis in the cloud, so the API response will be cached in Redis with a Time To Live (TTL) of 5 seconds.

First we create our services in Cloud Foundry. I’m using the free tier of SAP’s Cloud Foundry offering for this example. I’m not going to explain here how to do that: it’s pretty straightforward within SAP’s Cockpit. Some time ago I played with IBM’s Cloud Foundry too, and I remember it was also very simple.
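
For reference, the same services can also be created from the command line with the cf CLI. The service and plan names below are just placeholders; check cf marketplace for the ones available in your account:

cf marketplace
cf create-service postgresql v9.6-dev my-postgres
cf create-service redis v3.0-dev my-redis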

Then we create our application. This is the buildpack configuration (.bp-config/options.json):

{
    "WEBDIR": "www",
    "LIBDIR": "lib",
    "PHP_VERSION": "{PHP_70_LATEST}",
    "PHP_MODULES": ["cli"],
    "WEB_SERVER": "nginx"
}

If we want to use our PostgreSQL and Redis services from our PHP application we need to bind those services to the application. This operation can also be done with SAP’s Cockpit.
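
It can also be done with the cf CLI; a sketch, assuming the placeholder instance names created above and the application name silex used later in this post:

cf bind-service silex my-postgres
cf bind-service silex my-redis
cf restage silex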

Now it’s the turn of the PHP application. I normally use the Silex framework for my backends, but now there’s a problem: Silex is dead. I feel a little bit sad, but I’m not going to cry; it’s just a tool and there are other ones. I’ve got my example with Silex but, as an exercise, I will also do it with Lumen.

Let’s start with Silex. If you’re familiar with the Silex micro framework (or any other microframework, indeed) you will see that there isn’t anything special here.

use Symfony\Component\HttpKernel\Exception\HttpException;
use Symfony\Component\HttpFoundation\Request;
use Silex\Provider\TwigServiceProvider;
use Silex\Application;
use Predis\Client;

if (php_sapi_name() == "cli-server") {
    // when I start the built-in server on my local machine, vendors are in a different path
    require __DIR__ . '/../vendor/autoload.php';
    // and also I mock VCAP_SERVICES env
    $env   = file_get_contents(__DIR__ . "/../conf/vcap_services.json");
    $debug = true;
} else {
    require 'vendor/autoload.php';
    $env   = $_ENV["VCAP_SERVICES"];
    $debug = false;
}

$vcapServices = json_decode($env, true);

$app = new Application(['debug' => $debug, 'ttl' => 5]);

$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app['db'] = function () use ($vcapServices) {
    $dbConf = $vcapServices['postgresql'][0]['credentials'];
    $dsn    = "pgsql:dbname={$dbConf['dbname']};host={$dbConf['hostname']};port={$dbConf['port']}";
    $dbh    = new PDO($dsn, $dbConf['username'], $dbConf['password']);
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $dbh->setAttribute(PDO::ATTR_CASE, PDO::CASE_UPPER);
    $dbh->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

    return $dbh;
};

$app['redis'] = function () use ($vcapServices) {
    $redisConf = $vcapServices['redis'][0]['credentials'];

    return new Client([
        'scheme'   => 'tcp',
        'host'     => $redisConf['hostname'],
        'port'     => $redisConf['port'],
        'password' => $redisConf['password'],
    ]);
};

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.html.twig', [
        'user' => $app['user'],
        'ttl'  => $app['ttl'],
    ]);
});

$app->get("/timestamp", function (Application $app) {
    if (!$app['redis']->exists('timestamp')) {
        $stmt = $app['db']->prepare('SELECT localtimestamp');
        $stmt->execute();
        $app['redis']->set('timestamp', $stmt->fetch()['TIMESTAMP'], 'EX', $app['ttl']);
    }

    return $app->json($app['redis']->get('timestamp'));
});

$app->before(function (Request $request) use ($app) {
    $username = $request->server->get('PHP_AUTH_USER', false);
    $password = $request->server->get('PHP_AUTH_PW');

    $stmt = $app['db']->prepare('SELECT name, surname FROM public.user WHERE username=:USER AND pass=:PASS');
    $stmt->execute(['USER' => $username, 'PASS' => md5($password)]);
    $row = $stmt->fetch();
    if ($row !== false) {
        $app['user'] = $row;
    } else {
        header("WWW-Authenticate: Basic realm='RIS'");
        throw new HttpException(401, 'Please sign in.');
    }
}, 0);

$app->run();

Maybe the only special thing is the way the autoloader is set up. We initialize the autoloader in two different ways: one when the application runs in the cloud and another one when it runs locally with PHP’s built-in server. That’s because the vendors are located in different paths depending on which environment the application lives in. When Cloud Foundry binds services to applications it injects an environment variable, VCAP_SERVICES, with the service configuration (credentials, host, …).

I use the built-in server to run the application locally. When I’m doing that I don’t have the VCAP_SERVICES variable, and my service information is also different from the one used when the application runs in the cloud. Maybe it would be better to use an environment variable, but I’m using this trick:

if (php_sapi_name() == "cli-server") {
    // I'm running the application locally
} else {
    // I'm in the cloud
}

So when I’m running locally I mock VCAP_SERVICES with my local values and also, for example, configure the Silex application in debug mode.
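
The mocked conf/vcap_services.json only needs the keys the code above reads; something like this, with local placeholder values:

{
    "postgresql": [
        {
            "credentials": {
                "dbname": "mydb",
                "hostname": "localhost",
                "port": "5432",
                "username": "postgres",
                "password": "password"
            }
        }
    ],
    "redis": [
        {
            "credentials": {
                "hostname": "localhost",
                "port": "6379",
                "password": "password"
            }
        }
    ]
}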

Sometimes I want to run my application locally but use the cloud services. I cannot connect directly to those services, but I can reach them over SSH through the connected application. For example, if our PostgreSQL service is running on 10.11.241.0:48825, we can map this remote port (in a private network) to a local port with this command:

cf ssh -N -T -L 48825:10.11.241.0:48825 silex

You can see more information about this command here.

Now we can use pgAdmin, for example, on our local machine to connect to the cloud server.

We can do the same with Redis

cf ssh -N -T -L 54266:10.11.241.9:54266 silex

And basically that’s all. Now we’ll do the same with Lumen. The idea is to create the same application with Lumen instead of Silex. It’s a dummy application but it covers tasks that I normally need. I will also reuse the Redis and PostgreSQL services from the previous project.

use App\Http\Middleware;
use Laravel\Lumen\Application;
use Laravel\Lumen\Routing\Router;
use Predis\Client;

if (php_sapi_name() == "cli-server") {
    require __DIR__ . '/../vendor/autoload.php';
    $env = 'dev';
} else {
    require 'vendor/autoload.php';
    $env = 'prod';
}

(new Dotenv\Dotenv(__DIR__ . "/../env/{$env}"))->load();

$app = new Application();

$app->routeMiddleware([
    'auth' => Middleware\AuthMiddleware::class,
]);

$app->register(App\Providers\VcapServiceProvider::class);
$app->register(App\Providers\StdoutLogServiceProvider::class);
$app->register(App\Providers\DbServiceProvider::class);
$app->register(App\Providers\RedisServiceProvider::class);

$app->router->group(['middleware' => 'auth'], function (Router $router) {
    $router->get("/", function () {
        return view("index", [
            'user' => config("user"),
            'ttl'  => getenv('TTL'),
        ]);
    });

    $router->get("/timestamp", function (Client $redis, PDO $conn) {
        if (!$redis->exists('timestamp')) {
            $stmt = $conn->prepare('SELECT localtimestamp');
            $stmt->execute();
            $redis->set('timestamp', $stmt->fetch()['TIMESTAMP'], 'EX', getenv('TTL'));
        }

        return response()->json($redis->get('timestamp'));
    });
});

$app->run();
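
The Dotenv call above loads a file from env/dev or env/prod depending on where the application runs. The only value these routes read is the TTL (the same 5 seconds as in the Silex version), so a minimal env/dev/.env could be just this:

TTL=5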

I’ve created four service providers. One to handle database connections (I don’t like ORMs):

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use PDO;

class DbServiceProvider extends ServiceProvider
{
    public function register()
    {
    }

    public function boot()
    {
        $vcapServices = app('vcap_services');

        $dbConf = $vcapServices['postgresql'][0]['credentials'];
        $dsn    = "pgsql:dbname={$dbConf['dbname']};host={$dbConf['hostname']};port={$dbConf['port']}";
        $dbh    = new PDO($dsn, $dbConf['username'], $dbConf['password']);
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $dbh->setAttribute(PDO::ATTR_CASE, PDO::CASE_UPPER);
        $dbh->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

        $this->app->bind(PDO::class, function ($app) use ($dbh) {
            return $dbh;
        });
    }
}

Another one for Redis. I need to study Lumen a little bit more; I know that Lumen has a built-in tool to work with Redis.

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Predis\Client;

class RedisServiceProvider extends ServiceProvider
{
    public function register()
    {
    }

    public function boot()
    {
        $vcapServices = app('vcap_services');
        $redisConf    = $vcapServices['redis'][0]['credentials'];

        $redis = new Client([
            'scheme'   => 'tcp',
            'host'     => $redisConf['hostname'],
            'port'     => $redisConf['port'],
            'password' => $redisConf['password'],
        ]);

        $this->app->bind(Client::class, function ($app) use ($redis) {
            return $redis;
        });
    }
}

Another one to tell Monolog to send the logs to stdout:

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Monolog;

class StdoutLogServiceProvider extends ServiceProvider
{
    public function register()
    {
        app()->configureMonologUsing(function (Monolog\Logger $monolog) {
            return $monolog->pushHandler(new \Monolog\Handler\ErrorLogHandler());
        });
    }
}

And the last one to work with the VCAP environment variables. I probably need to integrate it with the dotenv files:

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class VcapServiceProvider extends ServiceProvider
{
    public function register()
    {
        if (php_sapi_name() == "cli-server") {
            $env = file_get_contents(__DIR__ . "/../../conf/vcap_services.json");
        } else {
            $env = $_ENV["VCAP_SERVICES"];
        }

        $vcapServices = json_decode($env, true);

        $this->app->bind('vcap_services', function ($app) use ($vcapServices) {
            return $vcapServices;
        });
    }
}

We also need to handle authentication (HTTP basic auth in this case), so we’ll create a simple middleware:

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use PDO;

class AuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        $user = $request->getUser();
        $pass = $request->getPassword();

        $db = app(PDO::class);
        $stmt = $db->prepare('SELECT name, surname FROM public.user WHERE username=:USER AND pass=:PASS');
        $stmt->execute(['USER' => $user, 'PASS' => md5($pass)]);
        $row = $stmt->fetch();
        if ($row !== false) {
            config(['user' => $row]);
        } else {
            $headers = ['WWW-Authenticate' => 'Basic'];
            return response('Admin Login', 401, $headers);
        }

        return $next($request);
    }
}

In summary: Lumen is cool. The interface is very similar to Silex, so I can switch my mind from thinking in Silex to thinking in Lumen easily. Blade instead of Twig: no problem. Service providers are very similar, routing is almost the same and middlewares are much better. Nowadays the backend is a commodity for me, so I don’t want to spend too much time working on it. I want something that just works, and Lumen looks like that.

Both projects, Silex and Lumen, are available on my GitHub.

Taking photos with an ionic2 application and upload them to S3 Bucket with SAP’s Cloud Foundry using Silex and Lumen

Today I want to play with an experiment. When I work with mobile applications I normally use Ionic and on-premise backends; today I want to play with cloud-based backends. In this small experiment I will use an Ionic 2 application to take pictures and upload them to an S3 bucket. Let’s start.

First I’ve created a simple Ionic 2 application. It’s a very simple application: only one page with a button to trigger the device’s camera.

<ion-header>
    <ion-navbar>
        <ion-title>
            Photo
        </ion-title>
    </ion-navbar>
</ion-header>

<ion-content padding>
    <ion-fab bottom right>
        <button ion-fab (click)="takePicture()">
            <ion-icon  name="camera"></ion-icon>
        </button>
    </ion-fab>
</ion-content>

The controller uses @ionic-native/camera to take the photos and @ionic-native/transfer to upload them to the backend.

import {Component} from '@angular/core';
import {Camera, CameraOptions} from '@ionic-native/camera';
import {Transfer, FileUploadOptions, TransferObject} from '@ionic-native/transfer';
import {ToastController} from 'ionic-angular';
import {LoadingController} from 'ionic-angular';

@Component({
    selector: 'page-home',
    templateUrl: 'home.html'
})
export class HomePage {
    constructor(private transfer: Transfer,
                private camera: Camera,
                public toastCtrl: ToastController,
                public loading: LoadingController) {
    }

    takePicture() {
        const options: CameraOptions = {
            quality: 100,
            destinationType: this.camera.DestinationType.FILE_URI,
            sourceType: this.camera.PictureSourceType.CAMERA,
            encodingType: this.camera.EncodingType.JPEG,
            targetWidth: 1000,
            targetHeight: 1000,
            saveToPhotoAlbum: false,
            correctOrientation: true
        };

        this.camera.getPicture(options).then((uri) => {
            const fileTransfer: TransferObject = this.transfer.create();

            let options: FileUploadOptions = {
                fileKey: 'file',
                fileName: uri.substr(uri.lastIndexOf('/') + 1),
                chunkedMode: true,
                headers: {
                    Connection: "close"
                },
                params: {
                    metadata: {foo: 'bar'},
                    token: 'mySuperSecretToken'
                }
            };

            let loader = this.loading.create({
                content: 'Uploading ...',
            });

            loader.present().then(() => {
                let s3UploadUri = 'https://myApp.cfapps.eu10.hana.ondemand.com/upload';
                fileTransfer.upload(uri, s3UploadUri, options).then((data) => {
                    let message;
                    let response = JSON.parse(data.response);
                    if (response['status']) {
                        message = 'Picture uploaded to S3: ' + response['key']
                    } else {
                        message = 'Error Uploading to S3: ' + response['error']
                    }
                    loader.dismiss();
                    let toast = this.toastCtrl.create({
                        message: message,
                        duration: 3000
                    });
                    toast.present();
                }, (err) => {
                    loader.dismiss();
                    let toast = this.toastCtrl.create({
                        message: "Error",
                        duration: 3000
                    });
                    toast.present();
                });
            });
        });
    }
}

Now let’s work with the backend. Next time I’ll use the JavaScript AWS SDK to upload pictures directly from the mobile application (without a backend), but today we’ll use one. Nowadays I’m involved in SAP Cloud Platform projects, so we’ll use a SAP Cloud Foundry tenant (a free account). In this tenant we’ll create a PHP application using the PHP buildpack with nginx:

applications:
- name: myApp
  path: .
  memory: 128MB
  buildpack: php_buildpack
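
The nginx part is configured the same way as in the previous project, with a .bp-config/options.json along these lines (the exact keys depend on the project layout):

{
    "WEBDIR": "www",
    "PHP_VERSION": "{PHP_70_LATEST}",
    "WEB_SERVER": "nginx"
}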

The PHP application is a simple Silex application that handles the file uploads and puts the pictures into S3 using the official AWS SDK for PHP (based on Guzzle):

use Symfony\Component\HttpFoundation\Request;
use Silex\Application;
use Aws\S3\S3Client;

require 'vendor/autoload.php';

$app = new Application([
    'debug'        => false,
    'aws.config'   => [
        'debug'       => false,
        'version'     => 'latest',
        'region'      => 'eu-west-1',
        'credentials' => [
            'key'    => $_ENV['s3key'],
            'secret' => $_ENV['s3secret'],
        ],
    ],
]);

$app['aws'] = function () use ($app) {
    return new S3Client($app['aws.config']);
};

$app->post('/upload', function (Request $request, Application $app) {
    $metadata = json_decode($request->get('metadata'), true);
    $token    = $request->get('token');

    if ($token === $_ENV['token']) {
        $fileName = $_FILES['file']['name'];
        $fileType = $_FILES['file']['type'];
        $tmpName  = $_FILES['file']['tmp_name'];

        /** @var \Aws\S3\S3Client $s3 */
        $s3 = $app['aws'];
        try {
            $key = date('YmdHis') . "_" . $fileName;
            $s3->putObject([
                'Bucket'      => $_ENV['s3bucket'],
                'Key'         => $key,
                'SourceFile'  => $tmpName,
                'ContentType' => $fileType,
                'Metadata'    => $metadata,
            ]);
            unlink($tmpName);

            return $app->json([
                'status' => true,
                'key'    => $key,
            ]);
        } catch (Aws\S3\Exception\S3Exception $e) {
            return $app->json([
                'status' => false,
                'error'  => $e->getMessage(),
            ]);
        }
    } else {
        return $app->json([
            'status' => false,
            'error'  => "Token error",
        ]);
    }
});

$app->run();
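
To test the endpoint without the mobile application, a multipart request like this one should do the trick (the file name is just a placeholder; the token and metadata values match the ones hardcoded in the Ionic example):

curl https://myApp.cfapps.eu10.hana.ondemand.com/upload \
    -F "file=@photo.jpg" \
    -F "token=mySuperSecretToken" \
    -F 'metadata={"foo":"bar"}'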

I just wanted a simple prototype (a working one). Enough for a Sunday morning of hacking.

UPDATE

I had this post ready weeks ago, but something has changed: Silex is dead. So, as an exercise, I’ll migrate the current Silex application to Lumen (a quick prototype).

That’s the main application.

use App\Http\Middleware;
use Aws\S3\S3Client;
use Illuminate\Http\Request;
use Laravel\Lumen\Application;

require 'vendor/autoload.php';

(new Dotenv\Dotenv(__DIR__ . "/../env"))->load();

$app = new Application();

$app->routeMiddleware([
    'auth' => Middleware\AuthMiddleware::class,
]);

$app->register(App\Providers\S3ServiceProvider::class);

$app->group(['middleware' => 'auth'], function (Application $app) {
    $app->post('/upload', function (Request $request, Application $app, S3Client $s3) {
        $metadata = json_decode($request->get('metadata'), true);
        $fileName = $_FILES['file']['name'];
        $fileType = $_FILES['file']['type'];
        $tmpName  = $_FILES['file']['tmp_name'];

        try {
            $key = date('YmdHis') . "_" . $fileName;
            $s3->putObject([
                'Bucket'      => getenv('s3bucket'),
                'Key'         => $key,
                'SourceFile'  => $tmpName,
                'ContentType' => $fileType,
                'Metadata'    => $metadata,
            ]);
            unlink($tmpName);

            return response()->json([
                'status' => true,
                'key'    => $key,
            ]);
        } catch (Aws\S3\Exception\S3Exception $e) {
            return response()->json([
                'status' => false,
                'error'  => $e->getMessage(),
            ]);
        }
    });
});

$app->run();

We can probably find an existing S3 service provider, but I’ve built a simple one for this example.

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Aws\S3\S3Client;

class S3ServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->bind(S3Client::class, function ($app) {
            $conf = [
                'debug'       => false,
                'version'     => getenv('AWS_VERSION'),
                'region'      => getenv('AWS_REGION'),
                'credentials' => [
                    'key'    => getenv('s3key'),
                    'secret' => getenv('s3secret'),
                ],
            ];

            return new S3Client($conf);
        });
    }
}

And I’m also using a middleware for the authentication:

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class AuthMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        $token = $request->get('token');
        if ($token !== getenv('token')) {
            return response('Admin Login', 401);
        }

        return $next($request);
    }
}
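
Both the service provider and the middleware read their configuration with getenv(), loaded by Dotenv from the env directory. A hypothetical env file for local testing could look like this (all values are placeholders):

AWS_VERSION=latest
AWS_REGION=eu-west-1
s3key=MY_AWS_KEY
s3secret=MY_AWS_SECRET
s3bucket=my-bucket
token=mySuperSecretToken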

Ok, I’ll post this article soon. At least before Lumen is dead too and I need to update this post again 🙂

Full project (mobile application and both backends) in my GitHub.

Silex is dead (… or not)

Last week was the deSymfony conference in Castellón (Spain). IMHO deSymfony is the best conference I’ve ever attended. The talks are good but, these days, I appreciate this kind of event not so much because of them: I go to events because of the people, the coffee breaks and the community (and deSymfony is brilliant on that point). This year I couldn’t join the conference. It was a pity; a lot of good friends were there. So I could only follow the buzz on Twitter, read the published slides (thanks Raul) and wait for the talk videos on YouTube.

Two tweets in my Twitter timeline especially got my attention: one from Julieta Cuadrado and another one from Asier Marqués.

The tweets are in Spanish but the translation is clear: Javier Eguiluz (Symfony Core Team member and co-organizer of the conference) said in his talk: “Silex is dead”. When I read the tweets his slides were not available yet, but a couple of days later they were online. Slide 175 is clear: “Silex is dead”.

Javier recommends us not to use Silex in new projects and to mark existing ones as “legacy”. That’s hard for me. If you have ever read my blog you will have noticed that I’m a big fan of Silex. Each time I need a backend, an API/REST server or something like that, the first thing I do is “composer require silex/silex”. I know that Silex has limitations. It’s built on top of the Pimple dependency injection container, and Pimple is really awful, but this microframework gives me exactly what I need. It’s small, simple, fast enough and really easy to adapt to my needs.

I remember a dinner at deSymfony years ago in Barcelona, talking with Javier. He was trying to “convince” me to use the Symfony full stack framework instead of Silex. He almost succeeded, but I didn’t like Symfony full stack: too complicated for me. A lot of interesting things, but many of them I don’t really need. I don’t like the Symfony full stack framework, but I love Symfony. For me it’s great because of its components: independent pieces of code that I can use to fit exactly my needs instead of using a full-stack framework. I’ve learned a lot about SOLID by reading and hacking with Symfony components. I’m not saying that full stack frameworks are bad, only that they’re not for me. If I’m forced to use them I will, but if I can choose, I definitely choose a micro framework, even for medium and big projects.

A new version of Symfony (Symfony 4) is coming next November and, reading Javier’s slides on SlideShare, I can get an idea of its roadmap. My summary is clear: “Brilliant”. It looks like the Symfony people listened to my needs and changed the whole framework to adapt it to me. After understanding the roadmap I think I need to change the title of this post (initially it was just “Silex is dead”). Silex is not dead. For me, Symfony (the full stack framework) is the one that is dead. Silex will be upgraded and renamed to Symfony (I know this assertion is subjective; it’s just my point of view). So the bad feeling I felt when I read Julieta and Asier’s tweets turns into a good one. Good move, SensioLabs!

But I’ve got a problem right now: what can I do if I need to start a new project today? Symfony 4 isn’t ready yet. Javier said that we can use Symfony Flex and create new projects with Symfony 3 with the look and feel of Symfony 4, but Flex is still in alpha and I don’t want to play with alpha tools in production, especially in the backend. I’m getting older, I know. For me the backend is a commodity right now; I mainly need it to serve JSON.

I normally use PHP and Silex here only because I’m very comfortable with them. In projects, business people don’t care about technologies and frameworks. It’s our job (or our problem, depending on how you read it). And don’t forget one thing: developers are part of the business, so in one part of my mind I don’t care about frameworks either. I care about getting things done, maximising the potential of technology and driving innovation towards customer benefits (good lapidary phrase, isn’t it?).

So I’ve started looking for alternatives. My objective here is clear: I want to find a framework to do the things that I usually do with Silex. Nothing more. And there’s something important here: the tool must be easy to learn. I want to master it (or at least become productive with it) in a couple of days at most.

I started with the first one, Lumen, and I think I will stop searching. Lumen is the micro framework of Laravel. In the PHP world there are probably two major communities right now: Symfony and Laravel. Maybe, if we’re strict, Laravel and Symfony are not different communities; in fact Laravel and Symfony share a lot of components, so maybe both communities are the same.

I’ve almost never played with Laravel and it’s time to study it a little bit. Some time ago I used the Eloquent ORM but, since I hate ORMs, I always return to PDO/DBAL. As I said before, I didn’t like the Symfony full stack framework: it’s too complex for me, and Laravel is the same. When I started with PHP (in the early 2000s) there weren’t any frameworks. I remember reading Java and J2EE books, trying to understand something in that nightmare of acronyms and XML configuration, and trying to build my own framework in PHP. Now, in 2017, building your own framework in PHP is a good learning exercise, but using it in real projects is ridiculous. As someone said before (I don’t remember who), “Everybody must build his own framework, and never use it at all”.

Swapping from Silex to Lumen is pretty straightforward. In fact, with an ultra-minimal application it’s exactly the same. Here is the Silex version:

use Silex\Application;

$app = new Application();

$app->get("/", function() {
    return "Hello from Silex";
});

$app->run();

And the Lumen version:

use Laravel\Lumen\Application;

$app = new Application();

$app->get("/", function() {
    return "Hello from Lumen";
});

$app->run();

If you’re a Silex user you only need a couple of hours reading the Lumen docs to be able to set up a new project without any problem. The concepts are the same, with slight differences and even cool things such as groups and middlewares. Nothing impossible to do with Silex, indeed, but with a very smart and simple interface. If I had to create a new project right now I would use Lumen without any doubt.

Next winter, when Symfony 4 arrives, I will probably face the problem of choosing again. But in the past years I’ve been involved in the crazy world of JavaScript: Angular, Angular 2, React, npm, yarn, webpack, … If I’ve survived that (in the end I chose jQuery, but that’s a different story :), I’m ready for anything now.

Playing with Docker, Silex, Python, Node and WebSockets

I’m learning Docker. In this post I want to share a little experiment that I’ve done. I know the code looks over-engineered, but it’s just an excuse to build something with Docker and containers. Let me explain it a little bit.

The idea is to build a time clock in the browser. Something like this:

Clock

Yes, I know: we could do it with just JS, CSS and HTML, but we want to hack a little bit more. The idea is to create:

  • A Silex/PHP frontend
  • A WebSocket server with socket.io/node
  • A Python script to obtain the current time

The WebSocket server will open two ports: one to serve WebSockets (socket.io) and another one as an HTTP server (express). The Python script will get the current time and send it to the WebSocket server. Finally, the frontend (Silex) will listen to the WebSocket event and render the current time.

That’s the WebSocket server (with socket.io and express)

var
    express = require('express'),
    expressApp = express(),
    server = require('http').Server(expressApp),
    io = require('socket.io')(server, {origins: 'localhost:*'})
    ;

expressApp.get('/tic', function (req, res) {
    io.sockets.emit('time', req.query.time);
    res.json('OK');
});

expressApp.listen(6400, '0.0.0.0');

server.listen(8080);

That’s our Python script

from time import gmtime, strftime, sleep
import httplib2

h = httplib2.Http()
while True:
    (resp, content) = h.request("http://node:6400/tic?time=" + strftime("%H:%M:%S", gmtime()))
    sleep(1)

And our Silex frontend

use Silex\Application;
use Silex\Provider\TwigServiceProvider;

$app = new Application(['debug' => true]);
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get("/", function (Application $app) {
    return $app['twig']->render('index.twig', []);
});

$app->run();

using this Twig template:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Docker example</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
    <link href="css/app.css" rel="stylesheet">
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
</head>
<body>
<div class="site-wrapper">
    <div class="site-wrapper-inner">
        <div class="cover-container">
            <div class="inner cover">
                <h1 class="cover-heading">
                    <div id="display">
                        display
                    </div>
                </h1>
            </div>
        </div>
    </div>
</div>
<script src="//localhost:8080/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
var socket = io.connect('//localhost:8080');

$(function () {
    socket.on('time', function (data) {
        $('#display').html(data);
    });
});
</script>
</body>
</html>

The idea is to use one Docker container for each process. I like to have all the code in one place, so all containers will share the same volume with the source code.

First the node container (WebSocket server)

FROM node:argon

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/node

EXPOSE 8080 6400
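
The docker-compose file further down starts this container with npm start, so src/node also needs a package.json along these lines (the entry file name and versions are assumptions):

{
    "name": "example-node",
    "scripts": {
        "start": "node index.js"
    },
    "dependencies": {
        "express": "^4.14.0",
        "socket.io": "^1.7.2"
    }
}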

Now the Python container:

FROM python:2

RUN pip install httplib2

RUN mkdir -p /mnt/src
WORKDIR /mnt/src/python

And finally the frontend container (Apache2 with Ubuntu 16.04):

FROM ubuntu:16.04

RUN locale-gen es_ES.UTF-8
RUN update-locale LANG=es_ES.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install --no-install-recommends -y apache2 php libapache2-mod-php
RUN apt-get clean -y

COPY ./apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.conf

RUN mkdir -p /mnt/src

RUN a2enmod rewrite
RUN a2enmod proxy
RUN a2enmod mpm_prefork

RUN chown -R www-data:www-data /mnt/src
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_SERVERADMIN admin@localhost
ENV APACHE_SERVERNAME localhost

EXPOSE 80

Now we’ve got the three containers, but we want to run them all together. We’ll use a docker-compose.yml file. The web container will expose port 80 and the node container port 8080. The node container also opens port 6400, but this is an internal port: we don’t need to access it from outside, only the Python container needs to reach it. Because of that, 6400 is not mapped to any host port in docker-compose.

version: '2'

services:
  web:
    image: gonzalo123/example_web
    container_name: example_web
    ports:
     - "80:80"
    restart: always
    depends_on:
      - node
    build:
      context: ./images/php
      dockerfile: Dockerfile
    entrypoint:
      - /usr/sbin/apache2
      - -D
      - FOREGROUND
    volumes:
     - ./src:/mnt/src

  node:
    image: gonzalo123/example_node
    container_name: example_node
    ports:
     - "8080:8080"
    restart: always
    build:
      context: ./images/node
      dockerfile: Dockerfile
    entrypoint:
      - npm
      - start
    volumes:
     - ./src:/mnt/src

  python:
      image: gonzalo123/example_python
      container_name: example_python
      restart: always
      depends_on:
        - node
      build:
        context: ./images/python
        dockerfile: Dockerfile
      entrypoint:
        - python
        - tic.py
      volumes:
       - ./src:/mnt/src

And that’s all. We only need to start our containers

docker-compose up --build -d

and open our browser at http://localhost to see our time clock.

Full source code available in my GitHub account.

Silex service provider for a Gearman wrapper

I’ve written a small wrapper for the Gearman API. Normally I use Silex in the frontend, so today we’re going to build a service provider that allows us to integrate the Gearman wrapper easily into our Silex applications.

Here I show you an example of how to use the service provider.

Imagine this simple worker:

use G\Gearman\Builder;

$worker = Builder::createWorker();

$worker->on("worker.example", function ($response) {
    return strrev($response);
});

$worker->run();
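
To try it, the Gearman server must be running and the worker started; something like this, assuming the worker lives in worker.php:

gearmand -d
php worker.php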

And this is the Silex application using the service provider as a gearman client:

use G\Gearman\Client;
use G\GearmanServiceProvider;
use Silex\Application;

$app = new Application();

$app->register(new GearmanServiceProvider());

$app->get("/", function (Client $client) {
    return "Hello " . $client->doNormal("worker.example", "Gonzalo");
});

$app->run();

I’m using the injector library to inject the providers. I’ve written about it here.

This is the code of the service provider

namespace G;

use G\Gearman\Client;
use G\Gearman\Tasks;
use Injector\InjectorServiceProvider;
use Silex\Application;
use Silex\ServiceProviderInterface;

class GearmanServiceProvider implements ServiceProviderInterface
{
    private $client;

    public function __construct(\GearmanClient $client = null)
    {
        if (is_null($client)) {
            $client = new \GearmanClient();
            $client->addServers("localhost:4730");
        }
        $this->client = $client;
    }

    public function register(Application $app)
    {
        $app->register(new InjectorServiceProvider([
            'G\Gearman\Client' => 'gearmanClient',
            'G\Gearman\Tasks'  => 'gearmanTasks',
        ]));

        $app['gearmanClient'] = function () use ($app) {
            $client = new Client($this->client);
            $client->onSuccess(function ($response) {
                return $response;
            });

            return $client;
        };

        $app['gearmanTasks'] = function () use ($app) {
            return new Tasks($this->client);
        };
    }

    public function boot(Application $app)
    {
    }
}

The code is available in my github account

Sharing authentication between socket.io and a PHP frontend (using JSON Web Tokens)

I’ve written a previous post about sharing authentication between socket.io and a PHP frontend, but after publishing it a colleague (hi @mariotux) told me that I could use JSON Web Tokens (JWT) to do this. I had never used JWT before, so I decided to study it a little bit.

JWTs are pretty straightforward. You only need to create the token and send it to the client. You don’t need to store the token in a database: the client can decode and validate it on its own. You can also use any programming language to encode and decode tokens (JWT libraries are available for the most common ones).
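
For example, with the firebase/php-jwt library used by the backend below, creating and validating a token is just a couple of calls. This is only a sketch: the decode signature changes between library versions.

use Firebase\JWT\JWT;

$secret = "my_super_secret_key";

// create the token on the server
$jwt = JWT::encode(['user' => ['username' => 'gonzalo']], $secret);

// ...and decode/validate it later, without touching any database
$decoded = JWT::decode($jwt, $secret, ['HS256']);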

We’re going to create the same example as in the previous post. Today, with JWT, we don’t need to pass the PHP session around and perform an HTTP request to validate it. We’ll only pass the token, and our Node.js server will validate it on its own.

var io = require('socket.io')(3000),
    jwt = require('jsonwebtoken'),
    secret = "my_super_secret_key";

// middleware to perform authorization
io.use(function (socket, next) {
    var token = socket.handshake.query.token,
        decodedToken;
    try {
        decodedToken = jwt.verify(token, secret);
        console.log("token valid for user", decodedToken.user);
        socket.connectedUser = decodedToken.user;
        next();
    } catch (err) {
        console.log(err);
        next(new Error("not valid token"));
        //socket.disconnect();
    }
});

io.on('connection', function (socket) {
    console.log('Connected! User: ', socket.connectedUser);
});

That’s the client:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
Welcome {{ user }}!

<script src="http://localhost:3000/socket.io/socket.io.js"></script>
<script src="/assets/jquery/dist/jquery.js"></script>

<script>
    var socket;
    $(function () {
        $.getJSON("/getIoConnectionToken", function (jwt) {
            socket = io('http://localhost:3000', {
                query: 'token=' + jwt
            });

            socket.on('connect', function () {
                console.log("connected!");
            });

            socket.on('error', function (err) {
                console.log(err);
            });
        });
    });
</script>

</body>
</html>

And here is the backend: a simple Silex server, very similar to the one in the previous post. JWT also has several reserved claims, for example “exp” to set an expiration timestamp. It’s very useful: we only set one value and the validator will reject tokens with an incorrect timestamp. In this example I’m not using an expiration date, which means my tokens will never expire. And never means never. In my first prototype I set a small expiration time (10 seconds), so the token was only valid for 10 seconds. Sounds great: my backend generates tokens that are going to be used immediately. That’s the normal situation but, what happens if I restart the socket.io server? The client will try to reconnect using the token, but it has expired, so we would need to create a new JWT before reconnecting. Because of that I’ve removed the expiration date in this example, but remember: without an expiration date your tokens will be valid forever (and forever is a very long period of time).

<?php
include __DIR__ . "/../vendor/autoload.php";

use Firebase\JWT\JWT;
use Silex\Application;
use Silex\Provider\SessionServiceProvider;
use Silex\Provider\TwigServiceProvider;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Exception\AccessDeniedHttpException;

$app = new Application([
    'secret' => "my_super_secret_key",
    'debug' => true
]);
$app->register(new SessionServiceProvider());
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get('/', function (Application $app) {
    return $app['twig']->render('home.twig');
});
$app->get('/login', function (Application $app) {
    $username = $app['request']->server->get('PHP_AUTH_USER', false);
    $password = $app['request']->server->get('PHP_AUTH_PW');
    if ('gonzalo' === $username && 'password' === $password) {
        $app['session']->set('user', ['username' => $username]);

        return $app->redirect('/private');
    }
    $response = new Response();
    $response->headers->set('WWW-Authenticate', sprintf('Basic realm="%s"', 'site_login'));
    $response->setStatusCode(401, 'Please sign in.');

    return $response;
});

$app->get('/getIoConnectionToken', function (Application $app) {
    $user = $app['session']->get('user');
    if (null === $user) {
        throw new AccessDeniedHttpException('Access Denied');
    }

    $jwt = JWT::encode([
        // I can use "exp" reserved claim. It's cool. My connection token is only available
        // during a period of time. The problem is if I restart the io server. Client will
        // try to re-connect using this token and it's expired.
        //"exp"  => (new \DateTimeImmutable())->modify('+10 second')->getTimestamp(),
        "user" => $user
    ], $app['secret']);

    return $app->json($jwt);
});

$app->get('/private', function (Application $app) {
    $user = $app['session']->get('user');

    if (null === $user) {
        throw new AccessDeniedHttpException('Access Denied');
    }

    $userName = $user['username'];

    return $app['twig']->render('private.twig', [
        'user'  => $userName
    ]);
});
$app->run();

Full project in my github.

Sharing authentication between socket.io and a PHP frontend

Normally, when I work with WebSockets, my stack is a socket.io server and a Silex frontend. Protecting a PHP frontend with one kind of authentication or another is pretty straightforward, but if we want to use WebSockets we need to set up another server, and if we protect our frontend we need to protect our WebSocket server too.

If our frontend is Node too (Express, for example), sharing authentication is easier, but this time we want to use two different servers (a Node server and a PHP server). I’ve written about this before, but today we’ll see another solution. Let’s start.

Imagine we have this simple Silex application. It has three routes:

  • “/” a public route
  • “/login” to perform the login action
  • “/private” a private route. If we try to get here without a valid session we’ll get a 403 error

And this is the code. It’s basically an example using sessions, taken from the Silex documentation:

use Silex\Application;
use Silex\Provider\SessionServiceProvider;
use Silex\Provider\TwigServiceProvider;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Exception\AccessDeniedHttpException;

$app = new Application();

$app->register(new SessionServiceProvider());
$app->register(new TwigServiceProvider(), [
    'twig.path' => __DIR__ . '/../views',
]);

$app->get('/', function (Application $app) {
    return $app['twig']->render('home.twig');
});

$app->get('/login', function () use ($app) {
    $username = $app['request']->server->get('PHP_AUTH_USER', false);
    $password = $app['request']->server->get('PHP_AUTH_PW');

    if ('gonzalo' === $username && 'password' === $password) {
        $app['session']->set('user', ['username' => $username]);

        return $app->redirect('/private');
    }

    $response = new Response();
    $response->headers->set('WWW-Authenticate', sprintf('Basic realm="%s"', 'site_login'));
    $response->setStatusCode(401, 'Please sign in.');

    return $response;
});

$app->get('/private', function () use ($app) {
    $user = $app['session']->get('user');
    if (null === $user) {
        throw new AccessDeniedHttpException('Access Denied');
    }

    return $app['twig']->render('private.twig', [
        'username'  => $user['username']
    ]);
});

$app->run();

Our “/private” route also creates a connection with our websocket server.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
Welcome {{ username }}!

<script src="http://localhost:3000/socket.io/socket.io.js"></script>
<script>
    var socket = io('http://localhost:3000/');
    socket.on('connect', function () {
        console.log("connected!");
    });
    socket.on('disconnect', function () {
        console.log("disconnected!");
    });
</script>

</body>
</html>

And that’s our socket.io server. A really simple one.

var io = require('socket.io')(3000);

It works. Our frontend is protected: we need to log in with our credentials (in this example “gonzalo/password”), but everyone can still connect to our socket.io server. The idea is to use our PHP session to protect the socket.io server too, and in fact it’s very easy to do. First we need to pass our PHPSESSID to the socket.io server. To do that, when we open the socket.io connection in the frontend, we pass our session id:

<script>
    var socket = io('http://localhost:3000/', {
        query: 'token={{ sessionId }}'
    });
    socket.on('connect', function () {
        console.log("connected!");
    });
    socket.on('disconnect', function () {
        console.log("disconnect!");
    });
</script>

Since we’re using a Twig template, we need to pass the sessionId variable:

$app->get('/private', function () use ($app) {
    $user = $app['session']->get('user');
    if (null === $user) {
        throw new AccessDeniedHttpException('Access Denied');
    }

    return $app['twig']->render('private.twig', [
        'username'  => $user['username'],
        'sessionId' => $app['session']->getId()
    ]);
});

Now we only need to validate the token before establishing the connection. Socket.io provides a middleware to perform this kind of operation. In this example we’re using PHP sessions out of the box, so how can we validate the token? The answer is easy: we only need to create an HTTP client (in the socket.io server) and perform a request to a protected route (we’ll use “/private”). If you’re using a different provider to store your sessions (I hope you aren’t using Memcached to store PHP sessions, indeed) you’ll need to validate the sessionId against your provider.

var io = require('socket.io')(3000),
    http = require('http');

io.use(function (socket, next) {
    var options = {
        host: 'localhost',
        port: 8080,
        path: '/private',
        headers: {Cookie: 'PHPSESSID=' + socket.handshake.query.token}
    };

    http.request(options, function (response) {
        response.on('error', function () {
            next(new Error("not authorized"));
        }).on('data', function () {
            next();
        });
    }).end();
});

io.on('connection', function () {
    console.log("connected!");
});

Ok, this example works, but we’re dynamically generating a JS file that injects our PHPSESSID. If we want to extract the sessionId from the request we can use document.cookie, but sometimes it doesn’t work. That’s because of HttpOnly. HttpOnly is our friend if we want to protect our cookies against XSS attacks, but in this case our protection makes the task harder.

We can solve this problem by performing a simple request to our server. We’ll create a new (private) route called ‘getSessionID’ that gives us our sessionId.

$app->get('/getSessionID', function (Application $app) {
    $user = $app['session']->get('user');
    if (null === $user) {
        throw new AccessDeniedHttpException('Access Denied');
    }

    return $app->json($app['session']->getId());
});

So before establishing the WebSocket connection we just need to perform a GET request to our new route to obtain the sessionID.

var io = require('socket.io')(3000),
    http = require('http');

io.use(function (socket, next) {
    var sessionId = socket.handshake.query.token,
        options = {
            host: 'localhost',
            port: 8080,
            path: '/getSessionID',
            headers: {Cookie: 'PHPSESSID=' + sessionId}
        };

    http.request(options, function (response) {
        response.on('error', function () {
            next(new Error("not authorized"));
        });
        response.on('data', function (chunk) {
            var sessionIdFromRequest;
            try {
                sessionIdFromRequest = JSON.parse(chunk.toString());
            } catch (e) {
                next(new Error("not authorized"));
            }

            if (sessionId == sessionIdFromRequest) {
                next();
            } else {
                next(new Error("not authorized"));
            }
        });
    }).end();
});

io.on('connection', function (socket) {
    setInterval(function() {
        socket.emit('hello', {hello: 'world'});
    }, 1000);
});

And that’s all. You can see the full example in my GitHub account.

Reading Modbus devices with Python from a PHP/Silex Application via Gearman worker

Yes, I know, I never know how to write a good title for my posts. Let me show an integration example that I’ve been working on these days. Let’s start.

In industrial automation there are several standard protocols. Modbus is one of them. Maybe it isn’t the coolest or the newest one (like OPC or OPC/UA), but we can speak Modbus with a huge number of devices.

I need to read from one of them and show a couple of variables in a web frontend. Imagine the following fake Modbus server (it emulates my real Modbus device):

#!/usr/bin/env python

##
# Fake modbus server
# - exposes "Energy" 66706 = [1, 1170]
# - exposes "Power" 132242 = [2, 1170]
##

from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.server.async import StartTcpServer
import logging

logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)

hrData = [1, 1170, 2, 1170]
store = ModbusSlaveContext(hr=ModbusSequentialDataBlock(2, hrData))

context = ModbusServerContext(slaves=store, single=True)

StartTcpServer(context)

This server exposes two variables, “Energy” and “Power”. It’s a fake server, so it will always return 66706 for energy and 132242 for power. Modbus is a binary protocol based on 16-bit words, so 66706 = (1 << 16) | 1170 = [1, 1170] and 132242 = (2 << 16) | 1170 = [2, 1170].

I can read Modbus from PHP, but I normally use Python for this kind of logic, and I’m not going to rewrite existing logic in PHP. I’m not crazy enough. Furthermore, my real Modbus device only accepts one active socket to retrieve information, which means that if two clients use the frontend at the same time, it will crash. In these situations queues are our friends.

I’ll use a Gearman worker (written in Python) to read Modbus information.

from pyModbusTCP.client import ModbusClient
from gearman import GearmanWorker
import json

def reader(worker, job):
    c = ModbusClient(host="localhost", port=502)

    if not c.is_open() and not c.open():
        print("unable to connect to host")

    if c.is_open():

        holdingRegisters = c.read_holding_registers(1, 4)

        # Imagine we've "energy" value in position 1 with two words
        energy = (holdingRegisters[0] << 16) | holdingRegisters[1]

        # Imagine we've "power" value in position 3 with two words
        power = (holdingRegisters[2] << 16) | holdingRegisters[3]

        out = {"energy": energy, "power": power}
        return json.dumps(out)
    return None

worker = GearmanWorker(['127.0.0.1'])

worker.register_task('modbusReader', reader)

print 'working...'
worker.work()

Our backend is ready. Now we’ll work with the frontend. In this example I’ll use PHP and Silex.

<?php
include __DIR__ . '/../vendor/autoload.php';

use Silex\Application;

$app = new Application(['debug' => true]);

$app->register(new Silex\Provider\TwigServiceProvider(), array(
    'twig.path' => __DIR__.'/../views',
));

$app['modbusReader'] = $app->protect(function() {
    $client = new \GearmanClient();
    $client->addServer();

    $handle = $client->doNormal('modbusReader', 'modbusReader');
    $returnCode = $client->returnCode();

    if ($returnCode != \GEARMAN_SUCCESS) {
        throw new \Exception($client->error(), $returnCode);
    } else {
        return json_decode($handle, true);
    }
});

$app->get("/", function(Application $app) {
    return $app['twig']->render('home.twig', $app['modbusReader']());
});

$app->run();

As we can see, the frontend is a simple Gearman client. It uses our Python worker to read the information from Modbus and renders a simple HTML page with a Twig template:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Demo</title>
</head>
<body>
    Energy: {{ energy }}
    Power: {{ power }}
</body>
</html>

And that’s all. You can see the full example in my github account

Sending logs to a remote server using RabbitMQ

Some time ago I wrote an article showing how to send Silex logs to a remote server. Today I want to use a message queue to do it. Normally, when I need queues, I use Gearman, but today I want to play with RabbitMQ.

When we work with web applications it’s important to have, in some way or another, a way to decouple operations from the main request. Message queues are great tools for performing those operations. They even allow us to write our workers in a different language than the main request. These days, for example, I’m working with Modbus devices. The whole Modbus logic is written in Python and I want to use a frontend in PHP. I could rewrite the Modbus logic in PHP (there are PHP libraries to connect to Modbus devices), but I’m not that crazy. Queues are our friends.

The idea in this post is the same as in the previous one: we’ll use the event dispatcher to emit events and send them to a RabbitMQ queue, using a service provider (the LoggerServiceProvider shown below).

<?php
include __DIR__ . '/../vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use RabbitLogger\LoggerServiceProvider;
use Silex\Application;
use Symfony\Component\HttpKernel\Event;
use Symfony\Component\HttpKernel\KernelEvents;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();

$app = new Application(['debug' => true]);
$app->register(new LoggerServiceProvider($connection, $channel));

$app->on(KernelEvents::TERMINATE, function (Event\PostResponseEvent $event) use ($app) {
    $app['rabbit.logger']->info('TERMINATE');
});

$app->on(KernelEvents::CONTROLLER, function (Event\FilterControllerEvent $event) use ($app) {
    $app['rabbit.logger']->info('CONTROLLER');
});

$app->on(KernelEvents::EXCEPTION, function (Event\GetResponseForExceptionEvent $event) use ($app) {
    $app['rabbit.logger']->info('EXCEPTION');
});

$app->on(KernelEvents::FINISH_REQUEST, function (Event\FinishRequestEvent $event) use ($app) {
    $app['rabbit.logger']->info('FINISH_REQUEST');
});

$app->on(KernelEvents::RESPONSE, function (Event\FilterResponseEvent $event) use ($app) {
    $app['rabbit.logger']->info('RESPONSE');
});

$app->on(KernelEvents::REQUEST, function (Event\GetResponseEvent $event) use ($app) {
    $app['rabbit.logger']->info('REQUEST');
});

$app->on(KernelEvents::VIEW, function (Event\GetResponseForControllerResultEvent $event) use ($app) {
    $app['rabbit.logger']->info('VIEW');
});

$app->get('/', function (Application $app) {
    $app['rabbit.logger']->info('inside route');
    return "HELLO";
});

$app->run();

Here we can see the service provider:

<?php
namespace RabbitLogger;

use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use Silex\Application;
use Silex\ServiceProviderInterface;

class LoggerServiceProvider implements ServiceProviderInterface
{
    private $connection;
    private $channel;

    public function __construct(AMQPStreamConnection $connection, AMQPChannel $channel)
    {
        $this->connection = $connection;
        $this->channel    = $channel;
    }

    public function register(Application $app)
    {
        $app['rabbit.logger'] = $app->share(
            function () use ($app) {
                $channelName = isset($app['logger.channel.name']) ? $app['logger.channel.name'] : 'logger.channel';
                return new Logger($this->connection, $this->channel, $channelName);
            }
        );
    }

    public function boot(Application $app)
    {
    }
}

And here the logger:

<?php
namespace RabbitLogger;

use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
use Psr\Log\LoggerInterface;
use Psr\Log\LogLevel;
use Silex\Application;

class Logger implements LoggerInterface
{
    private $connection;
    private $channel;
    private $queueName;

    public function __construct(AMQPStreamConnection $connection, AMQPChannel $channel, $queueName = 'logger')
    {
        $this->connection = $connection;
        $this->channel    = $channel;
        $this->queueName  = $queueName;
        $this->channel->queue_declare($queueName, false, false, false, false);
    }

    function __destruct()
    {
        $this->channel->close();
        $this->connection->close();
    }

    public function emergency($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::EMERGENCY);
    }

    public function alert($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::ALERT);
    }

    public function critical($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::CRITICAL);
    }

    public function error($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::ERROR);
    }

    public function warning($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::WARNING);
    }

    public function notice($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::NOTICE);
    }

    public function info($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::INFO);
    }

    public function debug($message, array $context = [])
    {
        $this->sendLog($message, $context, LogLevel::DEBUG);
    }
    public function log($level, $message, array $context = [])
    {
        $this->sendLog($message, $context, $level);
    }

    private function sendLog($message, array $context = [], $level = LogLevel::INFO)
    {
        $msg = new AMQPMessage(json_encode([$message, $context, $level]), ['delivery_mode' => 2]);
        $this->channel->basic_publish($msg, '', $this->queueName);
    }
}

And finally the RabbitMQ worker that processes our logs:

require_once __DIR__ . '/../vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

$channel->queue_declare('logger.channel', false, false, false, false);

echo ' [*] Waiting for messages. To exit press CTRL+C', "\n";

$callback = function($msg){
    echo " [x] Received ", $msg->body, "\n";
    //$msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
};

//$channel->basic_qos(null, 1, null);
$channel->basic_consume('logger.channel', '', false, false, false, false, $callback);

while(count($channel->callbacks)) {
    $channel->wait();
}

$channel->close();
$connection->close();
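
These snippets rely on the php-amqplib library (the PhpAmqpLib classes used above), which, assuming Composer, can be installed with:

composer require php-amqplib/php-amqplib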

To run the example we must:

Start the RabbitMQ server:

rabbitmq-server

Start the Silex server:

php -S 0.0.0.0:8080 -t www

Start the worker:

php worker/worker.php

You can see the whole project in my GitHub account.

Calling Silex backend from command line. Creating SAAS command line tools

Sometimes we need to create command line tools. We can build those tools using different technologies. In the Symfony world there’s the Symfony Console, and I feel very comfortable using it. But if we want to distribute our tool we will have to face one “problem”: users will need to have PHP installed. It sounds trivial, but it isn’t installed on every computer. We can use Node.js to build our tool; nowadays Node.js is a de-facto standard, but we still have the same problem. Another “problem” is how to distribute new versions of our tool. Problems everywhere.

Software as a service tools are great. We can build a service (a web-based service, for example) and we can even monetize it with one kind of paid plan or another. With our SaaS we don’t need to worry about redistributing our software with each release. But what happens when our service is a command line one?

Imagine, for example, that we’re going to build a service to convert text to uppercase (I think this idea will make me rich, indeed 🙂).

We can create a simple Silex application to convert strings to upper case:

<?php
include __DIR__ . "/../vendor/autoload.php";

use Silex\Application;
use Symfony\Component\HttpFoundation\Request;

$app = new Application();
$app->post("/", function (Request $request) {
    return strtoupper($request->getContent());
});
$app->run();
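
To try it locally we can serve the front controller with PHP’s built-in server (assuming it lives in www/, as in the previous examples):

php -S localhost:8080 -t www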

And now we only need to call this service from the command line. We can use curl, for example, to convert a file’s content to upper case:

cat myfile.txt | curl -d @- localhost:8080 > MYFILE.txt

You can see the example in my github account here