Building an HTTP client in PostgreSQL with PL/Python

Don’t ask me why, but I need to call an HTTP server (a Silex application) from a PostgreSQL database.

I want to do something like this:

select get('http://localhost:8080?name=Gonzalo')->'hello';

PostgreSQL has a json datatype. It’s really cool and it allows us to connect our HTTP server and our SQL database using the same datatype.

PostgreSQL also allows us to create stored procedures using different languages. The default language is PL/pgSQL, a simple language where we can embed SQL. But we can also use Python, and with Python we can easily create HTTP clients, for example with urllib2. That means that developing an HTTP client for a PostgreSQL database is pretty straightforward.

CREATE OR REPLACE FUNCTION get(uri character varying)
  RETURNS json AS
$BODY$
import urllib2

data = urllib2.urlopen(uri)

return data.read()

$BODY$
  LANGUAGE plpython2u VOLATILE
  COST 100;
ALTER FUNCTION get(character varying)
  OWNER TO gonzalo;

OK, that’s a GET client, but we also want a POST client to do something like this:

select post('http://localhost:8080', '{"name": "Gonzalo"}'::json)->'hello';

As you can see, I want to use application/json instead of application/x-www-form-urlencoded to send the request parameters. I wrote about it here some time ago. So I will create an endpoint within my Silex server to handle my POST requests:

<?php
include __DIR__ . '/../vendor/autoload.php';

use Silex\Application;
use Symfony\Component\HttpFoundation\Request;
use G\AngularPostRequestServiceProvider;

$app = new Application(['debug' => true]);

$app->register(new AngularPostRequestServiceProvider());

$app->post('/', function (Application $app, Request $request) {
    return $app->json(['hello' => $request->get('name')]);
});

$app->get('/', function (Application $app, Request $request) {
    return $app->json(['hello' => $request->get('name')]);
});

$app->run();

And now we only need to create a stored procedure to send POST requests:

CREATE OR REPLACE FUNCTION post(
    uri character varying,
    parameters json)
  RETURNS json AS
$BODY$
import urllib2

clen = len(parameters)
req = urllib2.Request(uri, parameters, {'Content-Type': 'application/json', 'Content-Length': clen})
f = urllib2.urlopen(req)
return f.read()

$BODY$
  LANGUAGE plpython2u VOLATILE
  COST 100;
ALTER FUNCTION post(character varying, json)
  OWNER TO gonzalo;
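
To round it off, here is a small sketch of how we could call these functions from a PHP script through PDO. The connection parameters are made up for the example, and I use CAST(... AS json) to avoid fighting with PDO’s parsing of the :: cast syntax:

$dbh = new PDO('pgsql:dbname=mydb;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Call the post() stored procedure and pick the 'hello' key from the json answer
$stmt = $dbh->prepare("SELECT post(:uri, CAST(:params AS json))->'hello' AS hello");
$stmt->execute(array(
    'uri'    => 'http://localhost:8080',
    'params' => '{"name": "Gonzalo"}',
));
print_r($stmt->fetchAll(PDO::FETCH_ASSOC));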

And that’s all. At least this simple script is exactly what I need.

Yet another Database Abstraction layer with PHP and DBAL

I’m not a big fan of ORMs. I feel very comfortable working with raw SQL and because of that I normally use DBAL (or PDO in old projects). I’ve got a small library to handle my daily operations with databases, and today I’ve written this library.

First of all, imagine a DBAL connection. I’m using a SQLite in-memory database in this example but we can use any database supported by DBAL (aka “almost all of them”):

use Doctrine\DBAL\DriverManager;

$conn = DriverManager::getConnection([
    'driver' => 'pdo_sqlite',
    'memory' => true
]);

We can also create a DBAL connection from a PDO connection. It’s useful for using DBAL within legacy applications instead of creating a new connection (remember that DBAL works over PDO):

use Doctrine\DBAL\DriverManager;

$conn = DriverManager::getConnection(['pdo' => new PDO('sqlite::memory:')]);

Now we set up the database for the example:

$conn->exec("CREATE TABLE users (
            userid VARCHAR PRIMARY KEY  NOT NULL ,
            password VARCHAR NOT NULL ,
            name VARCHAR,
            surname VARCHAR
            );");
$conn->exec("INSERT INTO users VALUES('user','pass','Name','Surname');");
$conn->exec("INSERT INTO users VALUES('user2','pass2','Name2','Surname2');");

Our table “users” has two records. Now we can start to use our library.

First we create a new instance of our library:

use G\Db;

$db = new Db($conn);

Now a simple query from a string:

$data = $db->select("select * from users");

Sometimes I’m lazy and I don’t want to write the whole SQL string; I just want to perform a select * from the table:

use G\Sql;
$data = $db->select(SQL::createFromTable("users"));

Probably we need to filter our SELECT statement with a WHERE clause:

$data = $db->select(SQL::createFromTable("users", ['userid' => 'user2']));

And now something very interesting (at least for me). I want to iterate over the recordset and maybe change it. Of course I can use “foreach” over $data and do whatever I need, but I prefer the following syntax:

$data = $db->select(SQL::createFromTable("users"), function (&$row) {
    $row['name'] = strtoupper($row['name']);
});

For me it’s more readable. I iterate over the recordset and change the ‘name’ column to uppercase. Here you can see what my “select” function does:

/**
 * @param Sql|string $sql
 * @param \Closure   $callback
 * @return array
 */
public function select($sql, \Closure $callback = null)
{
    if ($sql instanceof Sql) {
        $sqlString = $sql->getString();
        $parameters = $sql->getParameters();
        $types = $sql->getTypes();
    } else {
        $sqlString = $sql;
        $parameters = [];
        $types = [];
    }

    $statement = $this->conn->executeQuery($sqlString, $parameters, $types);
    $data = $statement->fetchAll();
    if (!is_null($callback) && count($data) > 0) {
        $out = [];
        foreach ($data as $row) {
            if (call_user_func_array($callback, [&$row]) !== false) {
                $out[] = $row;
            }
        }
        $data = $out;
    }

    return $data;
}
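
Since the loop above drops any row for which the callback returns false, the callback also works as a simple filter. A small sketch using our example table:

// Rows where the callback returns false are skipped, so we can
// filter and transform the recordset in a single pass
$data = $db->select(SQL::createFromTable("users"), function (&$row) {
    if ($row['userid'] !== 'user') {
        return false; // drop this row
    }
    $row['name'] = strtoupper($row['name']);
});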

And finally, transactions (I normally never use autocommit and I like to handle transactions on my own):

$db->transactional(function (Db $db) {
    $userId = 'temporal';

    $db->insert('users', [
        'USERID'   => $userId,
        'PASSWORD' => uniqid(),
        'NAME'     => 'name3',
        'SURNAME'  => 'name3'
    ]);

    $db->update('users', ['NAME' => 'updatedName'], ['USERID' => $userId]);
    $db->delete('users', ['USERID' => $userId]);
});

The “transactional” function is very similar to DBAL’s transactional function:

public function transactional(\Closure $callback)
{
    $out = null;
    $this->conn->beginTransaction();
    try {
        $out = $callback($this);
        $this->conn->commit();
    } catch (\Exception $e) {
        $this->conn->rollback();
        throw $e;
    }

    return $out;
}

I changed it a little bit because I like to return a value from the closure, allowing things like this:

$status = $db->transactional(function (Db $db) {
    $userId = 'temporal';

    $db->insert('users', [
        'USERID'   => $userId,
        'PASSWORD' => uniqid(),
        'NAME'     => 'name3',
        'SURNAME'  => 'name3'
    ]);

    $db->update('users', ['NAME' => 'updatedName'], ['USERID' => $userId]);
    $db->delete('users', ['USERID' => $userId]);

    return "OK";
});

The other functions (insert, update, delete) simply proxy the calls to DBAL’s functions:

private $conn;

public function __construct(\Doctrine\DBAL\Connection $conn)
{
    $this->conn = $conn;
}

public function insert($tableName, array $values = [], array $types = [])
{
    $this->conn->insert($tableName, $values, $types);
}

public function delete($tableName, array $where = [], array $types = [])
{
    $this->conn->delete($tableName, $where, $types);
}

public function update($tableName, array $data, array $where = [], array $types = [])
{
    $this->conn->update($tableName, $data, $where, $types);
}

And that’s all. You can install the library with Composer and download it from GitHub.

BTW I’ve tested the new SensioLabs product (SensioLabs Insight) to analyze the code and verify good practices, and I’ve got the Platinum medal #yeah!

Handling several DBAL Database connections in Symfony2 through the Dependency Injection Container with PHP

(This post is the second part of my previous post: Handling several PDO Database connections in Symfony2 through the Dependency Injection Container with PHP. You can read it here)

OK. We can handle PDO connections inside a Symfony2 application, but what happens if we prefer DBAL? As we know, DBAL is built over PDO and adds a set of “extra” features to our database connection. It’s something like PDO on steroids.

If we read the documentation, we will see how to use DBAL:

<?php
$config = new \Doctrine\DBAL\Configuration();
//..
$connectionParams = array(
    'dbname' => 'mydb',
    'user' => 'user',
    'password' => 'secret',
    'host' => 'localhost',
    'driver' => 'pdo_mysql',
);
$conn = DriverManager::getConnection($connectionParams, $config);

As we can see, to obtain a DBAL connection we use a factory method of the DriverManager class. We can easily implement it in our service container:

# databases.yml
parameters:
  doctrine.dbal.configuration: Doctrine\DBAL\Configuration
  doctrine.dbal.drivermanager: Doctrine\DBAL\DriverManager

  database.db1:
    driver: pdo_sqlite
    memory: true
  database.db2:
    driver: pdo_pgsql
    dbname: testdb
    user: username
    password: password
    host: 127.0.0.1

services:
  dbal_configuration:
    class: %doctrine.dbal.configuration%
  db1:
    factory_class: %doctrine.dbal.drivermanager%
    factory_method: getConnection
    arguments: [%database.db1%]
  db2:
      factory_class: %doctrine.dbal.drivermanager%
      factory_method: getConnection
      arguments: [%database.db2%]

But if we run our example again, Symfony will throw an error:

RuntimeException: Please add the class to service “db1” even if it is constructed by a factory since we might need to add method calls based on compile-time checks.

If we use this service container configuration outside a Symfony2 application it works (remember we can use Symfony’s Dependency Injection Container outside a Symfony application, as a standalone component; example here). But if we want to use it with Symfony2 we need to set the “class”, even though here we only need the static factory, so we change it to:

# databases.yml
parameters:
  doctrine.dbal.configuration: Doctrine\DBAL\Configuration
  doctrine.dbal.drivermanager: Doctrine\DBAL\DriverManager

  database.db1:
    driver: pdo_sqlite
    memory: true
  database.db2:
    driver: pdo_pgsql
    dbname: testdb
    user: username
    password: password
    host: 127.0.0.1

services:
  dbal_configuration:
    class: %doctrine.dbal.configuration%
  db1:
    class: %doctrine.dbal.drivermanager%
    factory_class: %doctrine.dbal.drivermanager%
    factory_method: getConnection
    arguments: [%database.db1%]
  db2:
    class: %doctrine.dbal.drivermanager%
    factory_class: %doctrine.dbal.drivermanager%
    factory_method: getConnection
    arguments: [%database.db2%]

And that’s all. We can use DBAL instead of PDO in our database connections.
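
Just to close the circle, this is a minimal sketch of loading that databases.yml into a standalone ContainerBuilder and using one of the connections (it assumes the yml file lives next to the script):

use Symfony\Component\Config\FileLocator;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Loader\YamlFileLoader;

$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__));
$loader->load('databases.yml');

// 'db1' and 'db2' are Doctrine\DBAL\Connection instances built by the factory
$db1 = $container->get('db1');
$data = $db1->fetchAll('SELECT 1');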

UPDATE:

After publishing this post, someone commented that Doctrine allows us to do this “out of the box” within Symfony with its DoctrineBundle:

https://github.com/doctrine/DoctrineBundle/blob/master/Resources/doc/configuration.rst#doctrine-dbal-configuration

Database connection pooling with PHP and React (node.php)

Last Saturday I met a new hype: “React”, also known as “node.php”. Basically it’s the same idea as node.js but built with PHP instead of JavaScript. Twitter was on fire with this new library (at least my timeline was). The following Sunday was a rainy day, so I spent the afternoon hacking a little bit with this new library. Basically I wanted to create a database connection pool. It’s one of the things I miss in PHP. I wrote a post here some time ago with the same idea: an exotic experiment building a connection pool with gearman. Today the idea is the same, but now with React. Let’s start.

First of all we install React. It’s a simple process using Composer:

% echo '{ "require": { "react/react": "dev-master" } }' > composer.json
% composer install

Now we can start with our experiment. Imagine a simple query to PostgreSQL using PDO:

CREATE TABLE users
(
  uid integer NOT NULL,
  name character varying(50),
  surname character varying(50),
  CONSTRAINT pk_users PRIMARY KEY (uid)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE users OWNER TO gonzalo;

INSERT INTO users(uid, name, surname) VALUES (0, 'Gonzalo', 'Ayuso');
INSERT INTO users(uid, name, surname) VALUES (1, 'Hans', 'Solo');
INSERT INTO users(uid, name, surname) VALUES (2, 'Luke', 'Skywalker');

$dbh = new PDO('pgsql:dbname=demo;host=vmserver', 'gonzalo', 'password');
$sql = "SELECT * FROM USERS";
$stmt = $dbh->prepare($sql);
$stmt->execute();
$data = $stmt->fetchAll();
print_r($data);

Now we are going to use the same interface, but instead of using PDO we will use a server built with React:

include "CPool.php";
define('NODEPHP', '127.0.0.1:1337');

$dbh = new CPool();
$sql = "SELECT * FROM USERS";
$stmt = $dbh->prepare($sql);
$stmt->execute();
$data = $stmt->fetchAll();
$stmt->closeCursor();
print_r($data);

Our CPool library:

class CPoolStatement
{
    private $stmt;
    function __construct($sql=null)
    {
        if (!is_null($sql)) {
            $url = "http://" . NODEPHP . "?" . http_build_query(array(
                    'action' => 'prepare',
                    'sql'    => $sql
                 ));
            $this->stmt = file_get_contents($url);
        }
    }

    public function getId()
    {
        return $this->stmt;
    }

    public function setId($id)
    {
        $this->stmt = $id;
    }

    public function execute($values=array())
    {
        $url = "http://" . NODEPHP . "?" . http_build_query(array(
                'action' => 'execute',
                'smtId'  => $this->stmt,
                'values' => $values
             ));
        $this->stmt = file_get_contents($url);
    }

    public function fetchAll()
    {
        $url = "http://" . NODEPHP . "?" . http_build_query(array(
                'action' => 'fetchAll',
                'smtId'  => $this->stmt
             ));
        return (file_get_contents($url));
    }

    public function closeCursor()
    {
        $url = "http://" . NODEPHP . "?" . http_build_query(array(
                'action' => 'closeCursor',
                'smtId'  => $this->stmt
             ));
        return (file_get_contents($url));
    }
}

class CPool
{
    function prepare($sql)
    {
        return new CPoolStatement($sql);
    }
}
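
The server side isn’t listed here (the real code is in the repository), but to complete the picture this is a rough sketch of how the node.php endpoint could look with the React API of that time. CPoolServer is a made-up name for the class that would keep the real PDO connections and prepared statements alive between requests:

<?php
require 'vendor/autoload.php';
// Hypothetical class holding the real PDO connection and prepared statements
// in static properties, keyed by statement id
require 'CPoolServer.php';

$loop   = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);
$http   = new React\Http\Server($socket);

$http->on('request', function ($request, $response) {
    $query  = $request->getQuery();
    $action = isset($query['action']) ? $query['action'] : null;

    // Dispatch 'prepare', 'execute', 'fetchAll' or 'closeCursor' to the pool
    $body = CPoolServer::handle($action, $query);

    $response->writeHead(200, array('Content-Type' => 'text/plain'));
    $response->end($body);
});

$socket->listen(1337);
$loop->run();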

We can also create a script that creates a statement:

include "CPool.php";
define('NODEPHP', '127.0.0.1:1337');

$dbh = new CPool();
$sql = "SELECT * FROM USERS";
$stmt = $dbh->prepare($sql);

echo $stmt->getId();

And another script (another HTTP request, for example) to fetch the resultset. Notice that we can execute this script as many times as we want, because the compiled statement persists in the node.php server (we don’t need to create it again and again in each request).

include "CPool.php";
define('NODEPHP', '127.0.0.1:1337');

$stmt = new CPoolStatement();
$stmt->setId(1);

$stmt->execute();
$data = $stmt->fetchAll();
print_r($data);

And basically that was my Sunday afternoon experiment. As you can imagine, the library is totally unstable. It’s only an experiment. We could add transactions, commits, rollbacks, savepoints, … but I needed a walk and I stopped :). What do you think?

The code is available at GitHub.

Building a simple SQL wrapper with PHP

If we don’t use an ORM in our projects we need to write SQL statements by hand. I don’t mind writing SQL. It’s simple and descriptive, but sometimes we like to use helpers to avoid writing the same code again and again. Today we are going to create a simple library to help us write simple SQL queries. Let’s start:

The idea is, instead of writing:

SELECT * from users where uid=7;

to write:

$sql->select('users', array('uid' => 7));

As we all should know, the best documentation is the unit tests, so here are the tests of the library:

class SqlTest extends PHPUnit_Framework_TestCase
{
    public function setUp()
    {
        $this->dbh = new Conn('pgsql:dbname=db;host=localhost', 'gonzalo', 'password');
        $this->dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $this->dbh->forceRollback();
    }

    public function testTransactions()
    {

        $sql = new Sql($this->dbh);
        $that = $this;

        $this->dbh->transactional(function($dbh) use ($sql, $that) {
            $actual = $sql->insert('users', array('uid' => 7, 'name' => 'Gonzalo', 'surname' => 'Ayuso'));
            $that->assertTrue($actual);

            $actual = $sql->insert('users', array('uid' => 8, 'name' => 'Peter', 'surname' => 'Parker'));
            $that->assertTrue($actual);

            $data = $sql->select('users', array('uid' => 8));
            $that->assertEquals('Peter', $data[0]['name']);
            $that->assertEquals('Parker', $data[0]['surname']);

            $sql->update('users', array('name' => 'gonzalo'), array('uid' => 7));

            $data = $sql->select('users', array('uid' => 7));
            $that->assertEquals('gonzalo', $data[0]['name']);

            $data = $sql->delete('users', array('uid' => 7));

            $data = $sql->select('users', array('uid' => 7));
            $that->assertTrue(count($data) == 0);
        });
    }
}

As you can see we use DI to inject the database connection into our library. Simple, isn’t it?

Here is the whole library:

class Conn extends PDO
{
    private $forcedRollback = false;
    public function transactional(Closure $func)
    {
        $this->beginTransaction();
        try {
            $func($this);
            $this->forcedRollback ? $this->rollback() : $this->commit();
        } catch (Exception $e) {
            $this->rollback();
            throw $e;
        }
    }

    public function forceRollback()
    {
        $this->forcedRollback = true;
    }
}

class Sql
{
    /** @var Conn */
    private $dbh;
    function __construct(Conn $dbh)
    {
        $this->dbh = $dbh;
    }

    public function select($table, $where)
    {
        $sql         = $this->createSelect($table, $where);
        $whereParams = $this->getWhereParameters($where);
        $stmp = $this->dbh->prepare($sql);
        $stmp->execute($whereParams);
        return $stmp->fetchAll();
    }

    public function insert($table, $values)
    {
        $sql = $this->createInsert($table, $values);
        $stmp = $this->dbh->prepare($sql);
        return $stmp->execute($values);
    }

    public function update($table, $values, $where)
    {
        $sql = $this->createUpdate($table, $values, $where);
        $whereParams = $this->getWhereParameters($where);

        $stmp = $this->dbh->prepare($sql);
        return $stmp->execute(array_merge($values, $whereParams));
    }

    public function delete($table, $where)
    {
        $sql         = $this->createDelete($table, $where);
        $whereParams = $this->getWhereParameters($where);
        $stmp = $this->dbh->prepare($sql);
        return $stmp->execute($whereParams);
    }

    protected function getWhereParameters($where)
    {
        $whereParams = array();
        foreach ($where as $key => $value) {
            $whereParams[":W_{$key}"] = $value;
        }
        return $whereParams;
    }

    protected function createSelect($table, $where)
    {
        return "SELECT * FROM " . $table . $this->createSqlWhere($where);
    }

    protected function createUpdate($table, $values, $where)
    {
        $sqlValues = array();
        foreach (array_keys($values) as $key) {
            $sqlValues[] = "{$key} = :{$key}";
        }
        return "UPDATE {$table} SET " . implode(', ', $sqlValues) . $this->createSqlWhere($where);
    }

    protected function createInsert($table, $values)
    {
        $sqlValues = array();
        foreach (array_keys($values) as $key) {
            $sqlValues[] = ":{$key}";
        }
        return "INSERT INTO {$table} (" . implode(', ', array_keys($values)) . ") VALUES (" . implode(', ', $sqlValues) . ")";
    }

    protected function createDelete($table, $where)
    {
        return "DELETE FROM {$table}" . $this->createSqlWhere($where);
    }

    protected function createSqlWhere($where)
    {
        if (count((array) $where) == 0) return null;

        $whereSql = array();
        foreach ($where as $key => $value) {
            $whereSql[] = "{$key} = :W_{$key}";
        }
        return ' WHERE ' . implode(' AND ', $whereSql);
    }
}
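
A quick usage sketch, using an in-memory SQLite database just for illustration (the tests above use PostgreSQL):

$dbh = new Conn('sqlite::memory:');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dbh->exec("CREATE TABLE users (uid INTEGER PRIMARY KEY, name VARCHAR, surname VARCHAR)");

$sql = new Sql($dbh);
$sql->insert('users', array('uid' => 7, 'name' => 'Gonzalo', 'surname' => 'Ayuso'));

// select() builds: SELECT * FROM users WHERE uid = :W_uid
$data = $sql->select('users', array('uid' => 7));
print_r($data);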

You can see the full code at GitHub.

How to protect from SQL Injection with PHP

Security is part of our work as developers. We need to secure our applications against malicious attacks. SQL Injection is one of the most common ones. Basically, SQL Injection is an attack that happens when someone injects SQL statements into our application. You can find a lot of info about SQL Injection attacks. Basically you need to follow the security golden rule:

Filter input
Escape output

If you work with PHP, you probably work with the PDO database abstraction layer.
Let’s prepare our database for the examples (I work with PostgreSQL):

CREATE TABLE users
(
  uid integer NOT NULL,
  name character varying(50),
  surname character varying(50),
  CONSTRAINT pk_users PRIMARY KEY (uid)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE users OWNER TO gonzalo;

INSERT INTO users(uid, name, surname) VALUES (0, 'Gonzalo', 'Ayuso');
INSERT INTO users(uid, name, surname) VALUES (1, 'Hans', 'Solo');
INSERT INTO users(uid, name, surname) VALUES (2, 'Luke', 'Skywalker');

OK, our database is ready. Now let’s create a simple query:

$dbh = new PDO('pgsql:dbname=db;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $dbh->prepare('select uid, name, surname from users where uid=:ID');
$stmt->execute(array('ID' => 0));
$data = $stmt->fetchAll();

It works. We are using bound parameters, so we prepare a statement, execute it and fetch the recordset. The use of prepared statements is strongly recommended. We can also use the query() function:

$dbh = new PDO('pgsql:dbname=db;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$uid = 0;
$stmt = $dbh->query("select uid, name, surname from users where uid={$uid}");
$data = $stmt->fetchAll();

But what happens if $uid comes from the request and it’s not properly escaped?

$dbh = new PDO('pgsql:dbname=db;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$uid = "0; drop table users;";
$stmt = $dbh->query("select uid, name, surname from users where uid={$uid}");
$data = $stmt->fetchAll();

Basically nothing: SQLSTATE[42601]: Syntax error. That’s because we are not allowed to run two statements within a single query here.

If we use an insert:

$dbh = new PDO('pgsql:dbname=db;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$uid     = 20;
$name    = 'Gonzalo';
$surname = "Ayuso'); drop table users; select 1 where ('1' = '1";

$count = $dbh->exec("INSERT INTO users(uid, name, surname) VALUES ({$uid}, '{$name}', '{$surname}')");

Now we have a problem. Our users table will be deleted. Why? Because of the user we are using to connect to the database. This matters especially on production servers.
It’s very important not to use a database superuser in production. Superusers are very comfortable on our development servers, because we don’t need to grant privileges to every table, but if we forget this issue in production we can have SQL Injection problems. The solution is very simple:

GRANT SELECT, UPDATE, INSERT, DELETE ON TABLE users TO gonzalo2;

And now

$dbh = new PDO('pgsql:dbname=db;host=localhost', 'gonzalo2', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$uid     = 20;
$name    = 'Gonzalo';
$surname = "Ayuso'); drop table users; select 1 where ('1' = '1";

$count = $dbh->exec("INSERT INTO users(uid, name, surname) VALUES ({$uid}, '{$name}', '{$surname}')");

Now we are safe, at least against this particular attack.
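
Limiting privileges is damage control; the real fix is still to bind the values. Here is a sketch of the same INSERT using a prepared statement, so the malicious surname ends up stored as plain data instead of being executed:

// Bound values travel separately from the SQL, so they cannot change the statement
$stmt = $dbh->prepare('INSERT INTO users (uid, name, surname) VALUES (:UID, :NAME, :SURNAME)');
$stmt->execute(array(
    'UID'     => $uid,
    'NAME'    => $name,
    'SURNAME' => $surname,
));
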
Summarizing:

  • Filter input
  • Escape output
  • Take care with the database users. Don’t use a user that is allowed to perform operations our application should never perform. It sounds like a pun but it’s important.

Database abstraction layers in PHP. PDO versus DBAL

I normally use PDO in my PHP projects. I like it because it’s an easy-to-use PHP extension that shares the same interface across all databases. Normally I use PostgreSQL, but if I switch to MySQL or Oracle I don’t need to use different functions to handle the database connections.

PHP has a great project called Doctrine2. Doctrine2 is an ORM and it uses its own database abstraction layer called DBAL. In fact DBAL isn’t a pure database abstraction layer; it’s built over PDO. It’s a set of PHP classes that gives us features not available with ‘pure’ PDO. If we use Doctrine2 we’re using DBAL behind the scenes, but we don’t need Doctrine2 to use DBAL. We can use DBAL as a database abstraction layer without any ORM. Obviously this extra PHP layer over our PDO extension comes with a fee, and I will have a look at that fee in this post. I will take one of my old posts about PDO and do the same with DBAL to see the performance differences. Let’s start:

The PDO version:

error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

$dbh = new PDO('pgsql:dbname=mydb;host=localhost', 'gonzalo', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$dbh->beginTransaction();

$smtp = $dbh->prepare('INSERT INTO test.tbl1 (id, field1) values (:ID, :FIELD1)');

for ($i=0; $i<1000; $i++) {
    $smtp->execute(array('ID' => $i, 'FIELD1' => "field {$i}"));
}

$dbh->commit();

$stmt = $dbh->prepare('SELECT * FROM test.tbl1 limit 10000');
$stmt->execute();

$i=0;
while ($row = $stmt->fetch()) {
	$i++;
}
echo '<h1>PDO</h1>';
echo "<strong>{$i} </strong>";

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

$dbh->beginTransaction();
$smtp = $dbh->prepare('delete from test.tbl1');
$smtp->execute();
$dbh->commit();

The DBAL version:

error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

use Doctrine\DBAL\DriverManager;

$connectionParams = array(
    'dbname'   => 'mydb',
    'user'     => 'gonzalo',
    'password' => 'password',
    'host'     => 'localhost',
    'driver'   => 'pdo_pgsql',
    );

$dbh = DriverManager::getConnection($connectionParams);

$dbh->beginTransaction();

$smtp = $dbh->prepare('INSERT INTO test.tbl1 (id, field1) values (:ID, :FIELD1)');

for ($i=0; $i<1000; $i++) {
    $smtp->execute(array('ID' => $i, 'FIELD1' => "field {$i}"));
}

$dbh->commit();

$stmt = $dbh->prepare('SELECT * FROM test.tbl1 limit 10000');
$stmt->execute();

$i=0;
while ($row = $stmt->fetch()) {
	$i++;
}
echo '<h1>DBAL</h1>';
echo "<strong>{$i} </strong>";

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

As we can see, DBAL is slower than pure PDO (obviously). Anyway, most of DBAL’s extra time is the time we need to include the PHP classes (remember PDO is a PHP extension and we don’t need to include any file). If we measure times excluding the include part, the memory usage is almost the same and the execution time is only a little slower.

Autoload for DBAL version:

spl_autoload_register(function ($class) {
        $class = str_replace('\\', '/', $class) . '.php';
        require_once($class);
    }
);

or hardcoded includes for this example

require_once('Doctrine/DBAL/Driver.php');
require_once('Doctrine/DBAL/Driver/Connection.php');
require_once('Doctrine/DBAL/Platforms/AbstractPlatform.php');
require_once('Doctrine/DBAL/Driver/Statement.php');

require_once('Doctrine/DBAL/DriverManager.php');
require_once('Doctrine/DBAL/Configuration.php');
require_once('Doctrine/Common/EventManager.php');
require_once('Doctrine/DBAL/Driver/PDOPgSql/Driver.php');
require_once('Doctrine/DBAL/Driver.php');
require_once('Doctrine/DBAL/Connection.php');
require_once('Doctrine/DBAL/Driver/Connection.php');
require_once('Doctrine/DBAL/Query/Expression/ExpressionBuilder.php');
require_once('Doctrine/DBAL/Platforms/PostgreSqlPlatform.php');
require_once('Doctrine/DBAL/Platforms/AbstractPlatform.php');
require_once('Doctrine/DBAL/Driver/PDOConnection.php');
require_once('Doctrine/DBAL/Driver/PDOStatement.php');
require_once('Doctrine/DBAL/Driver/Statement.php');
require_once('Doctrine/DBAL/Events.php');
require_once('Doctrine/DBAL/Statement.php');

Outcomes of the tests:

With pure PDO:

  • memory: 0.0044288635253906
  • seconds: 0.24748301506042

With DBAL and autoload:

  • memory: 0.97610473632812
  • seconds: 0.29042816162109

With DBAL and hardcoded requires:

  • memory: 0.97521591186523
  • seconds: 0.31192088127136

With DBAL bypassing the include part:

  • memory: 0.0099525451660156
  • seconds: 0.30333304405212

The fee we pay for using DBAL gives us some extra features. OK, we don’t strictly need DBAL to get those features; if we code a bit we can build them ourselves (remember DBAL is nothing but an extra PHP layer). But DBAL has a great, well-documented interface. Now I’m going to list a few DBAL extra features that are very interesting, at least for me:

Transactional mode

I really like it. It allows us to create scripts like this:

$dbh->transactional(function($conn) {
    $smtp = $conn->prepare('INSERT INTO wf.tbl1 (id, field1) values (:ID, :FIELD1)');

    for ($i=0; $i<1000; $i++) {
        $smtp->execute(array('ID' => $i, 'FIELD1' => "field {$i}"));
    }
});

A simple closure makes the code more concise, and it commits/rolls back our transaction for us. In fact I borrowed this function for my PDO projects to use the same interface. I love open source.

Snippet from DBAL library:

    /**
     * Executes a function in a transaction.
     *
     * The function gets passed this Connection instance as an (optional) parameter.
     *
     * If an exception occurs during execution of the function or transaction commit,
     * the transaction is rolled back and the exception re-thrown.
     *
     * @param Closure $func The function to execute transactionally.
     */
    public function transactional(Closure $func)
    {
        $this->beginTransaction();
        try {
            $func($this);
            $this->commit();
        } catch (Exception $e) {
            $this->rollback();
            throw $e;
        }
    }

Types conversion

Really useful, at least when I work with dates:

$date = new \DateTime("2011-03-05 14:00:21");
$stmt = $conn->prepare("SELECT * FROM articles WHERE publish_date > ?");
$stmt->bindValue(1, $date, "datetime");
$stmt->execute();
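
For comparison, with plain PDO we would have to format the date ourselves. A small sketch, assuming $dbh is a plain PDO connection:

$stmt = $dbh->prepare("SELECT * FROM articles WHERE publish_date > ?");
$stmt->bindValue(1, $date->format('Y-m-d H:i:s'));
$stmt->execute();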

List of Parameters Conversion

It’s a cool feature too, available in DBAL since Doctrine 2.1:

$dbh->executeQuery('SELECT * FROM wf.tbl1 WHERE id IN (?)',
    array(array(1, 2, 3, 4, 5, 6)),
    array(\Doctrine\DBAL\Connection::PARAM_INT_ARRAY));

Binding parameters for an IN clause with PDO is a bit ugly. We need to create a series of placeholders depending on our list and map them within the SQL. It’s possible, but the DBAL interface is smarter.
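
For the record, a sketch of the pure PDO way: build one placeholder per value by hand and pass the list to execute() (again assuming $dbh is a plain PDO connection):

$ids = array(1, 2, 3, 4, 5, 6);
$placeholders = implode(', ', array_fill(0, count($ids), '?'));
$stmt = $dbh->prepare("SELECT * FROM wf.tbl1 WHERE id IN ({$placeholders})");
$stmt->execute($ids);
$data = $stmt->fetchAll();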

Transaction Nesting

Another cool feature:

$dbh->beginTransaction();
try {
    $dbh->beginTransaction();
    try {
        $smtp = $dbh->prepare('INSERT INTO wf.tbl1 (id, field1) values (:ID, :FIELD1)');

        for ($i=0; $i<1000; $i++) {
            $smtp->execute(array('ID' => $i, 'FIELD1' => "field {$i}"));
        }

        $dbh->commit(); // inner transaction "committed" (nesting level decreases)
    } catch (Exception $e) {
        $dbh->rollback(); // transaction marked for rollback only
        throw $e;
    }
    $smtp = $dbh->prepare('INSERT INTO wf.tbl1 (id, field1) values (:ID, :FIELD1)');

    for ($i=0; $i<1000; $i++) {
        $smtp->execute(array('ID' => $i, 'FIELD1' => "field {$i}"));
    }

    $dbh->commit(); // real transaction committed
} catch (Exception $e) {
    $dbh->rollback(); // transaction rollback
    throw $e;
}

This piece of code with plain PDO would throw the following error:

There is already an active transaction

but it works with DBAL. If we need to do this kind of thing with PDO we need to use savepoints and things like that. DBAL does the ugly part for us.
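
Just to show the ugly part DBAL hides from us, this is a rough sketch of the savepoint dance we would need with raw PDO on PostgreSQL (here $dbh would be a plain PDO connection):

$dbh->beginTransaction();
$dbh->exec('SAVEPOINT nested_1');
try {
    // ... inner work ...
    $dbh->exec('RELEASE SAVEPOINT nested_1');
} catch (Exception $e) {
    $dbh->exec('ROLLBACK TO SAVEPOINT nested_1');
    throw $e;
}
$dbh->commit();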

Performance analysis of Stored Procedures with PDO and PHP

Last week I had an interesting conversation on Twitter about the usage of stored procedures in databases. Someone said stored procedures are evil. I don’t agree with that. Stored procedures are a great place to store business logic. In this post I’m going to test the performance of a small piece of code implemented with a stored procedure and with plain PHP code.

Without stored procedures

// Without stored procedures
$time = microtime(TRUE);
$mem = memory_get_usage();

$dsn = 'pgsql:host=localhost;dbname=gonzalo;port=5432';
$user = 'user';
$password = 'password';
$conn = new PDO($dsn, $user, $password);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$conn->beginTransaction();
$stmt = $conn->prepare('delete from web.tbltest');
$stmt->execute();

$stmt = $conn->prepare('INSERT INTO web.tbltest (field1) values (?)');
foreach (range(0,1000) as $i) {
    $stmt->execute(array($i));
}
$conn->commit();

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

With stored procedures:

// With stored procedures:
/*
CREATE OR REPLACE FUNCTION web.method1()
  RETURNS numeric AS
$BODY$
BEGIN
   DELETE FROM web.tbltest;
   FOR i IN 0..1000 LOOP
     INSERT INTO web.tbltest (field1) values (i);
   END LOOP;
   RETURN 1;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
*/
$time = microtime(TRUE);
$mem = memory_get_usage();

$dsn = 'pgsql:host=localhost;dbname=gonzalo;port=5432';
$user = 'user';
$password = 'password';
$conn = new PDO($dsn, $user, $password);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$conn->beginTransaction();
$stmt = $conn->prepare('SELECT web.method1()');
$stmt->execute();
$stmt->setFetchMode(PDO::FETCH_ASSOC);
$out = $stmt->fetchAll();
$conn->commit();

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

Without stored procedures:

  • memory: 0.0023880004882812
  • seconds: 0.31109309196472

With stored procedures:

  • memory: 0.0020713806152344
  • seconds: 0.065021991729736

So my conclusion: stored procedures are not evil. The performance is really good. I know it can get a bit messy if we place business logic inside the database and outside it at the same time, but with a good design and architecture this problem is easy to solve. What do you think?

Keep our PostgreSQL databases synchronized with PHP. Database version control.

Last October I attended PHP Barcelona 2010. One of the talks I really wanted to see was “Database version control without pain”. In this talk Harrie Verveer showed us different tools to keep our databases synchronized. It’s a really common problem when working with databases. Source code is well covered by source control management tools. I normally use Mercurial (hg), but with git, bazaar or svn we can cover all our source code needs. We create source code on the development server and push the changes to production. It’s really easy to keep all our code synchronized. But with databases it’s different. Maybe Oracle’s people are a few steps ahead of the rest: they have something similar to source code control for database schemas natively in the database. If you add a new column to a table, your schema is transformed into a new version. The Oracle database is great. It has incredible features and really good performance, if you can afford the fee. In this post I will try to solve this problem with PostgreSQL, a really good database similar to Oracle but open source and free (as in “free beer” too).

It’s a recurring problem when working with databases. We create database objects (tables, views, ..) on the development server and when our application is ready to go live we push the changes to the production server. If we are smart developers we save all the database scripts in a file and when we deploy to production we execute the script. There are tools to do it, like dbdeploy or even phing (have a look at Harrie’s presentation to see all the possibilities). The problem is that we must be very strict and we must update the script even when we alter a column in a bug fix. It’s hard work and it is worthwhile, but we must be mindful that it’s really easy and fast to alter a column with an IDE (pgadmin for example) and very easy to forget to save the diff file. If you work alone maybe you can afford it, but if you work within a team it is difficult to ensure everybody updates the diff file by hand. The purpose of this library is to create the diff script automatically according to the current state of the database. Let’s start.

As Harrie said, there isn’t a silver bullet solution (I really went to his talk looking for the silver bullet 🙂 ). My first attempt was to use Doctrine2. Doctrine2 has a great tool called DBAL. I tried to use it but I faced a big problem, at least for me. I like to organize my database objects (tables, views, …) within schemas. With Doctrine2’s DBAL I had problems and it doesn’t work the way I think it should: it assumes all objects are in the default schema and it doesn’t add “[schema].” to the information schema queries. Maybe it’s my fault, but if I want to use DBAL I need to hack the code. Because of that I started to write pgdbsync.

It would be cool to create a multi-database library to synchronize our databases, but it’s a huge amount of work to do alone, so I’ve focused only on PostgreSQL, because it is the database I mainly use in my daily work.

The idea is to create a command line tool that generates the needed script to keep two (or more) databases synchronized. Bear in mind we’re talking about the database’s schema, not its data.

First I create an ini file with the configuration of each database connection. The user we use to connect to the database must have access to the data dictionary of our PostgreSQL database.

[devel]
TYPE = pgsql
HOST = development
PORT = 5432
DBNAME = developement
USER = user
PASSWORD = password

[prod1]
TYPE = pgsql
HOST = production
PORT = 5432
DBNAME = prod1
USER = user
PASSWORD = password
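
Just to give a rough idea of how a configuration like this can be turned into connections (the real pgdbsync code is in the repository; the file name here is made up), each section maps onto a PDO DSN:

// Build one PDO connection per ini section
$config = parse_ini_file('pgdbsync.ini', true);

$connections = array();
foreach ($config as $name => $db) {
    $dsn = "{$db['TYPE']}:host={$db['HOST']};port={$db['PORT']};dbname={$db['DBNAME']}";
    $connections[$name] = new PDO($dsn, $db['USER'], $db['PASSWORD']);
}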

I’ve created a simple CLI script to use the library with getopt. You can see the script here.
The usage of the script is very simple. I have implemented three main actions: diff, summary and run.

  • diff: Calculates the script needed to keep the databases synchronized. It prints the script on the screen but doesn’t execute anything.
  • summary: Shows a summary of the differences, not the full script. Useful to see the differences at a glance.
  • run: Calculates the diff script and executes it.

Let’s start. First we begin with an empty database on the Development and Production servers, with a WEB schema on both sides (schema creation is not covered by pgdbsync). Then we create a few objects on the development server.

-- TABLE TEST
CREATE TABLE WEB.TEST (
  TEST_NAME CHAR(30) NOT NULL,
  TEST_ID INTEGER NOT NULL,
  TEST_DATE TIMESTAMP NOT NULL
)
TABLESPACE WEB;
ALTER TABLE WEB.TEST ADD CONSTRAINT PK_TEST PRIMARY KEY (TEST_ID);

-- VIEW on TEST
CREATE VIEW WEB.testview(
  TEST_NAME,
  TEST_ID,
  TEST_DATE
) AS
SELECT *
FROM WEB.TEST
WHERE TEST_NAME LIKE 't%';

-- Function hello
CREATE OR REPLACE FUNCTION WEB.hello(item character varying)
  RETURNS character varying AS
$BODY$
DECLARE
BEGIN
   return "Hi " || item;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

Now our Development and Production servers are different. We would need to execute the last script in Production, but instead of doing that we are going to check the differences with our pgdbsync script.

The usage of pgdbsync command line script is the following one:

 -s [schema]
 -f [from database]
 -t [to database]
 -a [action: diff | summary | run]
./pgdbsync -s web -f devel -t prod -a summary
HOST : production :: prod1
--------------------------------------------
function
 create :: WEB.hello(varchar)
tables
 create :: WEB.test
view
 create :: WEB.testview

[OK]  end process

Here we can see our production server is different from the development one.

./pgdbsync -s web -f devel -t prod -a diff
HOST : production :: prod1
--------------------------------------------
CREATE OR REPLACE FUNCTION web.hello(item character varying)
 RETURNS character varying
 LANGUAGE plpgsql
AS $function$
DECLARE
BEGIN
 return "Hi " || item;
END;
$function$

CREATE TABLE web.test(
 test_name character NOT NULL,
 test_id integer NOT NULL,
 test_date timestamp without time zone NOT NULL,
 CONSTRAINT pk_test PRIMARY KEY (test_id)
)
TABLESPACE web;
ALTER TABLE web.test OWNER TO user;

CREATE OR REPLACE VIEW web.testview AS
 SELECT test.test_name, test.test_id, test.test_date FROM web.test WHERE (test.test_name ~~ 't%'::text);;
ALTER TABLE web.testview OWNER TO user;
[OK]  end process

and finally

./pgdbsync -s web -f devel -t prod -a run
HOST : production :: prod1
----------------------------------

[OK]  end process

And that’s all. Our databases are synchronized. The library is not finished. Foreign keys are not supported yet, but it checks:

  • Tables
  • Constraints
  • Sequences
  • Views
  • Functions

To finish the demo we are going to drop the new objects on the development server and run the CLI script again:

./pgdbsync -s web -f devel -t prod1 -a diff

HOST : production :: prod1
--------------------------------------------

drop function web.hello(varchar);

DROP TABLE web.test;

drop view web.testview;

[OK]  end process

And we run the script:

./pgdbsync -s web -f devel -t prod1 -a run
HOST : production :: prod1
--------------------------------------------

[OK]  end process

Source code is available at Google Code, here.

An important post I’ve read to understand PostgreSQL’s information schema is this one from the great Lorenzo Alberton. It really helped me while building pgdbsync.

The library is not finished yet and it may crash in some cases (not all data types are covered). I always check the diff file before executing it. If you want to join me in developing the library, don’t hesitate to contact me :).

Database connection pooling with PHP and gearman

Handling database connections with PHP is pretty straightforward. Just open the database connection, perform the actions we need, and close it. There’s a little problem: we cannot create a pool of database connections. We need to create the connection in every request. Create and destroy, again and again. There are some third-party solutions like SQL Relay or pgpool-II (if you use PostgreSQL like me). In this post I’m going to explain a personal experiment to create a connection pool using gearman. Another purpose of this experiment is to create a prepared statement cache, not only for the current request but shared across all of them. Let’s start.

This is an example of a SELECT statement with PDO and PHP:

$dbh = new PDO('pgsql:dbname=pg1;host=localhost', 'user', 'pass');
$stmt = $dbh->prepare($sql);
$stmt->execute();
$data = $stmt->fetchAll();

Basically the idea is to use a gearman worker to perform every database operation. As far as I know we cannot pass PDO instances from the gearman worker to the gearman client, not even with object serialization (PDO objects cannot be serialized). That’s a problem.

My idea is to keep the same interface as PDO, but delegate the database work to the worker and obtain a connection id instead of a real PDO connection.

This is the configuration class. It defines one database connection and two gearman servers on the same host:

class PoolConf
{
    const PG1 = 'PG1';
    static $DB = array(
        self::PG1 => array(
            'dsn'      => "pgsql:dbname=gonzalo;host=localhost",
            'username' => 'user',
            'password' => 'pass',
            'options'  => null),
    );

    static $SERVERS = array(
        array('127.0.0.1', 4730),
        array('127.0.0.1', 4731),
    );
}

We start the gearman servers:

gearmand -d --log-file=/var/log/gearman --user=gonzalo -p=4730
gearmand -d --log-file=/var/log/gearman --user=gonzalo -p=4731

How many workers do we need to start? It depends on your needs. We must realize gearman is not a pool; it’s a queue. But we can start as many servers as we want (OK, it depends on our RAM) and create a pool of queues. We need to remember that if we start only one gearman server we can only handle one database operation at a time (it’s a queue), and that will be a huge bottleneck if the application scales. So you need to assess your site, evaluate how many concurrent operations you normally have, and start as many gearman servers as you need.

Maybe it’s difficult to explain, but the final outcome will be something like this:

use Pool\Client;
$conn = Client::singleton()->getConnection(PoolConf::PG1);

$sql = "SELECT * FROM TEST.TBL1";
$stmt = $conn->prepare($sql);

$stmt->execute();
$data = $stmt->fetchall();
echo "<p>count: " . count($data) . "</p>";

We must take into account that $stmt is not a “real” PHP statement. The real PHP statement is stored in a static variable within the worker. Our $stmt is an instance of the Pool\Server\Stmt class. This class has some public methods with the same names as the real statement (that’s why it behaves like a real statement), and internally those methods are calls to the gearman worker. The same goes for the $conn variable: it’s not a real PDO connection, it’s an instance of the Pool\Server\Connection class.

// Pool/Server/Stmt.php
namespace Pool\Server;

class Stmt
{
    private $_stmt   = null;
    private $_cid    = null;
    private $_client = null;

    function __construct($stmtId, $cid, $client)
    {
        $this->_stmt = $stmtId;
        $this->_cid = $cid;
        $this->_client = $client;
    }

    public function execute($parameters=array())
    {
        $out = $this->_client->do('execute', serialize(array(
            'parameters' => $parameters,
            'stmt'       => $this->_stmt,
            )));
        $error = unserialize($out);
        if (is_a($error, '\Pool\Exception')) {
            throw $error;
        }
        $this->_stmt = $out;
        return $this;
    }

    public function fetchAll()
    {
        $data = $this->_client->do('fetchAll', serialize(array(
            'stmt' => $this->_stmt,
            )));
        return unserialize($data);
    }
}

The worker. The following code is an extract; you can see the full code here:

# Create our worker object.
$worker= new GearmanWorker();
foreach (PoolConf::$SERVERS as $server) {
    $worker->addServer($server[0], $server[1]);
}

\Pool\Server::init();

$worker->addFunction('getConnection', 'getConnection');
$worker->addFunction('prepare', 'prepare');
$worker->addFunction('execute', 'execute');
$worker->addFunction('fetchAll', 'fetchAll');
$worker->addFunction('info', 'info');
$worker->addFunction('release', 'release');
$worker->addFunction('beginTransaction', 'beginTransaction');
$worker->addFunction('commit', 'commit');
$worker->addFunction('rollback', 'rollback');

while (1) {
    try {
        $ret = $worker->work();
        if ($worker->returnCode() != GEARMAN_SUCCESS) {
            break;
        }
    } catch (Exception $e) {
        echo $e->getMessage();
    }
}

function fetchAll($job)
{
    echo __function__."\n";
    $params = unserialize($job->workload());
    $stmtId = $params['stmt'];
    return serialize(\Pool\Server::fetchAll($stmtId));
}

function execute($job)
{
    echo __function__."\n";
    $params = unserialize($job->workload());
    $stmtId = $params['stmt'];
    $parameters = $params['parameters'];
    return \Pool\Server::execute($stmtId, $parameters);
}
...

The heart of the worker is the \Pool\Server class. This class performs all the real PDO operations and stores statements and connections in static private variables.
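
The real \Pool\Server code is in the repository, but this simplified sketch shows the idea: connections and statements live in static arrays and are referenced by id from the outside:

// Simplified sketch, not the real \Pool\Server implementation
class ServerSketch
{
    private static $connections = array();
    private static $statements  = array();

    public static function init()
    {
        foreach (PoolConf::$DB as $id => $conf) {
            self::$connections[$id] = new PDO($conf['dsn'], $conf['username'], $conf['password'], $conf['options']);
        }
    }

    public static function prepare($connectionId, $sql)
    {
        $stmtId = count(self::$statements) + 1;
        self::$statements[$stmtId] = self::$connections[$connectionId]->prepare($sql);
        return $stmtId;
    }

    public static function execute($stmtId, array $parameters = array())
    {
        self::$statements[$stmtId]->execute($parameters);
        return $stmtId;
    }

    public static function fetchAll($stmtId)
    {
        return self::$statements[$stmtId]->fetchAll();
    }
}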

And now we can use the database pool, reusing connections and prepared statements. You can see a small performance test of reusing prepared statements in an older post here.

I’ve also implemented some small error handling. Errors in the worker are serialized and re-thrown on the client, simulating the normal behaviour of standard PDO usage.

Now a set of examples:

Simple queries, and simple error handling:

include('../conf/PoolConf.php');
include('../lib/Pool/Client.php');
include('../lib/Pool/Server.php');
include('../lib/Pool/Exception.php');
include('../lib/Pool/Server/Connection.php');
include('../lib/Pool/Server/Stmt.php');

use Pool\Client;
$conn = Client::singleton()->getConnection(PoolConf::PG1);

$sql = "SELECT * FROM TEST.TBL1";
$stmt = $conn->prepare($sql);

$stmt->execute();
$data = $stmt->fetchall();
echo "<p>count: " . count($data) . "</p>";

try {
    $sql = "SELECT * TEST.NON_EXISTENT_TABLE";
    $stmt = $conn->prepare($sql);

    $stmt->execute();
    $data = $stmt->fetchall();
    echo "<p>count: " . count($data) . "</p>";

} catch (Exception $e) {
    echo "ERROR: " . $e->getMessage();
}

print_r(Client::singleton()->info(PoolConf::PG1));

Now with bind parameters:

<?php
include('../conf/PoolConf.php');
include('../lib/Pool/Client.php');
include('../lib/Pool/Server.php');
include('../lib/Pool/Exception.php');
include('../lib/Pool/Server/Connection.php');
include('../lib/Pool/Server/Stmt.php');

use Pool\Client;
$conn = Client::singleton()->getConnection(PoolConf::PG1);

$data = $conn->prepare("SELECT * FROM TEST.TBL1 WHERE SELECCION=:S")->execute(array('S' => 1))->fetchall();

echo count($data);

print_r(Client::singleton()->info(PoolConf::PG1));

And now a transaction:

<?php
include('../conf/PoolConf.php');
include('../lib/Pool/Client.php');
include('../lib/Pool/Server.php');
include('../lib/Pool/Exception.php');
include('../lib/Pool/Server/Connection.php');
include('../lib/Pool/Server/Stmt.php');

use Pool\Client;
$conn = Client::singleton()->getConnection(PoolConf::PG1);

$conn->beginTransaction();
$data = $conn->prepare("SELECT * FROM TEST.TBL1 WHERE SELECCION=:S")->execute(array('S' => 1))->fetchall();
$conn->rollback();

print_r(Client::singleton()->info(PoolConf::PG1));

Conclusion.

It’s a personal experiment. It works, indeed, but it’s probably riddled with bugs. You can use it in production if you are a brave developer :).

Source code.
The full source code is available at Google Code, here.