
Handling Amazon SNS messages with PHP, Lumen and CloudWatch

These days I'm involved with Amazon's AWS, and since I'm migrating my backends to Lumen I'm going to play a little bit with AWS and Lumen. Today I want to create a simple Lumen server to handle SNS notifications: one endpoint to listen to SNS and another one to emit notifications. I also want to register logs in CloudWatch. Let's start.
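
Before diving in, a quick note on dependencies: the example relies on a handful of Composer packages. The names below are inferred from the namespaces used throughout the post, and the version constraints are intentionally left open:

{
    "require": {
        "laravel/lumen-framework": "*",
        "vlucas/phpdotenv": "*",
        "aws/aws-sdk-php": "*",
        "maxbanton/cwh": "*"
    }
}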

First, the Lumen server:

use Laravel\Lumen\Application;

require __DIR__ . '/../vendor/autoload.php';

(new Dotenv\Dotenv(__DIR__ . "/../env"))->load();

$app = new Application();

$app->register(App\Providers\LogServiceProvider::class);
$app->register(App\Providers\AwsServiceProvider::class);

$app->group(['namespace' => 'App\Http\Controllers'], function (Application $app) {
    $app->get("/push", "SnsController@push");
    $app->post("/read", "SnsController@read");
});

$app->run();

As we can see, there's a route to push notifications and another one to read messages.

To work with SNS I will create a simple service provider:

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Aws\Sns\SnsClient;

class AwsServiceProvider extends ServiceProvider
{
    public function register()
    {
        $awsCredentials = [
            'region'      => getenv('AWS_REGION'),
            'version'     => getenv('AWS_VERSION'),
            'credentials' => [
                'key'    => getenv('AWS_CREDENTIALS_KEY'),
                'secret' => getenv('AWS_CREDENTIALS_SECRET'),
            ],
        ];

        $this->app->instance(SnsClient::class, new SnsClient($awsCredentials));
    }
}

Now we can create the routes in SnsController. SNS has a confirmation mechanism to validate endpoints; it's well explained here:

namespace App\Http\Controllers;

use Aws\Sns\SnsClient;
use Illuminate\Http\Request;
use Laravel\Lumen\Routing\Controller;
use Monolog\Logger;

class SnsController extends Controller
{
    private $request;
    private $logger;

    public function __construct(Request $request, Logger $logger)
    {
        $this->request = $request;
        $this->logger  = $logger;
    }

    public function push(SnsClient $snsClient)
    {
        $snsClient->publish([
            'TopicArn' => getenv('AWS_SNS_TOPIC1'),
            'Message'  => 'hi',
            'Subject'  => 'Subject',
        ]);

        return ['push'];
    }

    public function read(SnsClient $snsClient)
    {
        $data = $this->request->json()->all();

        if ($this->request->headers->get('X-Amz-Sns-Message-Type') == 'SubscriptionConfirmation') {
            $this->logger->notice("sns:confirmSubscription");
            $snsClient->confirmSubscription([
                'TopicArn' => getenv('AWS_SNS_TOPIC1'),
                'Token'    => $data['Token'],
            ]);
        } else {
            $this->logger->warn("read", [
                'Subject'   => $data['Subject'],
                'Message'   => $data['Message'],
                'Timestamp' => $data['Timestamp'],
            ]);
        }

        return "OK";
    }
}
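
For completeness: the SubscriptionConfirmation flow above only kicks in once the /read URL has been subscribed to the topic, which can be done from the AWS console or with the SDK. A minimal sketch using the same SnsClient; the endpoint URL is an assumption:

use Aws\Sns\SnsClient;

// Subscribe our /read endpoint to the topic so that SNS starts delivering
// notifications (and first sends the SubscriptionConfirmation request).
$snsClient = app(SnsClient::class);

$snsClient->subscribe([
    'TopicArn' => getenv('AWS_SNS_TOPIC1'),
    'Protocol' => 'https',
    'Endpoint' => 'https://my-tunnel.ngrok.io/read',
]);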

Finally, I want to use CloudWatch, so I'll configure Monolog with another service provider. It's also well explained here:

namespace App\Providers;

use Aws\CloudWatchLogs\CloudWatchLogsClient;
use Illuminate\Support\ServiceProvider;
use Maxbanton\Cwh\Handler\CloudWatch;
use Monolog\Formatter\LineFormatter;
use Monolog\Logger;

class LogServiceProvider extends ServiceProvider
{
    public function register()
    {
        $awsCredentials = [
            'region'      => getenv('AWS_REGION'),
            'version'     => getenv('AWS_VERSION'),
            'credentials' => [
                'key'    => getenv('AWS_CREDENTIALS_KEY'),
                'secret' => getenv('AWS_CREDENTIALS_SECRET'),
            ],
        ];

        $cwClient = new CloudWatchLogsClient($awsCredentials);

        $cwRetentionDays      = getenv('CW_RETENTIONDAYS');
        $cwGroupName          = getenv('CW_GROUPNAME');
        $cwStreamNameInstance = getenv('CW_STREAMNAMEINSTANCE');
        $loggerName           = getenv('CF_LOGGERNAME');

        $logger  = new Logger($loggerName);
        $handler = new CloudWatch($cwClient, $cwGroupName, $cwStreamNameInstance, $cwRetentionDays);
        $handler->setFormatter(new LineFormatter(null, null, false, true));

        $logger->pushHandler($handler);

        $this->app->instance(Logger::class, $logger);
    }
}
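
Both providers read their configuration from environment variables loaded with Dotenv. A sketch of the expected .env entries (every value here is a placeholder):

AWS_REGION=eu-west-1
AWS_VERSION=latest
AWS_CREDENTIALS_KEY=my-access-key-id
AWS_CREDENTIALS_SECRET=my-secret-access-key
AWS_SNS_TOPIC1=arn:aws:sns:eu-west-1:123456789012:topic1
CW_RETENTIONDAYS=14
CW_GROUPNAME=myGroup
CW_STREAMNAMEINSTANCE=myStream
CF_LOGGERNAME=myLogger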

Debugging this kind of webhook on an EC2 instance is sometimes a bit hard, but we can easily expose our local web server to the internet with ngrok.
We only need to start our local server:

php -S 0.0.0.0:8080 -t www

And create a tunnel with ngrok:

ngrok http 8080

And that's it: Lumen and SNS up and running.

The code is available in my GitHub account.


Working with clouds. Multi-master file-system replication with CouchDB

When we want to work with a cloud/cluster, one of the most common problems is the file system. Being able to scale horizontally (scale out) is mandatory, and for that we need to share the same file system between our nodes. We can do it with a file server (Samba, for example), but that solution introduces a huge bottleneck into our application. There are distributed file systems such as HDFS, from Apache Hadoop (a project inspired by Google's MapReduce and Google File System papers). In this post we're going to build a really simple distributed storage system based on NoSQL. Let's start.

NoSQL databases (aka one of our latest hypes) normally allow us to store large files. MongoDB, for example, has GridFS, but in this post we're going to do it with CouchDB. With CouchDB we can attach files to our documents as simple attachments, just like in an email.

CouchDB's API for uploading an attachment is very simple: it's a pure REST API.

Instead of creating the HTTP connections directly with curl commands, we can use libraries to automate the process; for example, the simple library shown in a previous post. If we inspect the code, we will see that we're creating a PUT request to store the file in our CouchDB database.
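
As a rough illustration of that request, here's a minimal sketch using PHP's curl extension; the database name (fs), the document id and the file are assumptions:

// Upload hello.txt as an attachment of the document "mydoc" in the "fs"
// database. If the document already exists, its current revision must also
// be sent (for example appending ?rev=1-xxx to the url).
$file = 'hello.txt';
$url  = 'http://localhost:5984/fs/mydoc/hello.txt';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_HTTPHEADER     => ['Content-Type: text/plain'],
    CURLOPT_POSTFIELDS     => file_get_contents($file),
    CURLOPT_RETURNTRANSFER => true,
]);

echo curl_exec($ch); // something like {"ok":true,"id":"mydoc","rev":"1-..."}
curl_close($ch);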

Another cool thing we can do with PHP is create a stream wrapper, so we can use the standard filesystem functions to read, write and delete attachments; there's a post about that here.
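
The full wrapper belongs to that other post, but as a rough idea, a write-only sketch could look like this (the couchdb:// protocol name, the host and the database are assumptions, and error handling is omitted):

class CouchDbStreamWrapper
{
    private $url;
    private $buffer = '';

    public function stream_open($path, $mode, $options, &$opened_path)
    {
        // Map "couchdb://fs/mydoc/hello.txt" onto the CouchDB REST url
        $this->url = 'http://localhost:5984/' . substr($path, strlen('couchdb://'));
        return true;
    }

    public function stream_write($data)
    {
        $this->buffer .= $data;
        return strlen($data);
    }

    public function stream_flush()
    {
        return true;
    }

    public function stream_close()
    {
        // PUT the buffered content as an attachment when the stream is closed
        $ch = curl_init($this->url);
        curl_setopt_array($ch, [
            CURLOPT_CUSTOMREQUEST  => 'PUT',
            CURLOPT_POSTFIELDS     => $this->buffer,
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_exec($ch);
        curl_close($ch);
    }
}

stream_wrapper_register('couchdb', CouchDbStreamWrapper::class);

// Standard filesystem functions now talk to CouchDB
file_put_contents('couchdb://fs/mydoc/hello.txt', 'Hello CouchDB');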

As we can see, it's very easy to use CouchDB as a filesystem, but we can also replicate our CouchDB databases. Imagine that we have two CouchDB servers (host1 and host2), each with one database called fs. If we run the following command:

curl -X POST -H "Content-Type: application/json" http://host1:5984/_replicate -d '{"source":"fs","target":"http://host2:5984/fs","continuous":true}'

Our database will be replicated from host1 to host2 in continuous mode. That means that every time we create or delete anything on host1, CouchDB will replicate it to host2: a simple master-slave replica.

Now, if we execute the equivalent command on host2 (swapping source and target):

curl -X POST -H "Content-Type: application/json" http://host2:5984/_replicate -d '{"source":"fs","target":"http://host1:5984/fs","continuous":true}'

We have a multi-master replication system, cheap and easy to implement. As we can see, we only need to install CouchDB on each node and activate the replication, and that's all. Pretty straightforward, isn't it?