Checking the performance of PHP exceptions

Sometimes we use exceptions to manage the flow of our scripts. I imagine that the use of exceptions must have a performance cost, so I am going to run a small benchmark comparing a simple script that throws exceptions against the same script without them. Let's start:

First, a silly script to find even numbers (please don't use it, it's only for the benchmark 🙂 ):

error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

$even = $odd = array();
foreach (range(1, 100000) as $i) {
  try {
    if ($i % 2 == 0) {
      throw new Exception("even number");
    } else {
      $odd[] = $i;
    }
  } catch (Exception $e) {
    $even[] = $i;
  }
}
echo "odds: " . count($odd) . ", evens " . count($even);
print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'microtime' => microtime(TRUE) - $time));

And now the same script without exceptions.

error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

$even = $odd = array();
foreach (range(1, 100000) as $i) {
    if ($i % 2 == 0) {
        $even[] = $i;
    } else {
        $odd[] = $i;
    }
}

echo "odd: " . count($odd) . ", evens " . count($even);
print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'microtime' => microtime(TRUE) - $time));

The outcomes:
With exceptions:
memory: 10.420181274414
microtime: 1.1479668617249 (0.19249302864075 without Xdebug)

Without exceptions:
memory: 10.418941497803
microtime: 0.14752888679505 (0.1234929561615 without Xdebug)

As we can see, memory usage is almost the same in both cases, while the version without exceptions is clearly faster (almost ten times faster with Xdebug enabled, and still noticeably faster without it). This makes sense: every throw has to build an Exception object, including its stack trace, which is not free.

I have done this test using a VM box with 512MB of memory and PHP 5.3.
Now we are going to run the same test on a similar host: the same configuration, but with PHP 5.4 instead of 5.3.

PHP 5.4:

With exceptions:
memory: 7.367259979248
microtime: 0.1864490332

Without exceptions:
memory: 7.3669052124023
microtime: 0.089046955108643

I'm really impressed with the outcomes. Memory usage is much better with PHP 5.4, and both execution times have improved too (the version with exceptions now runs about twice as slow as the one without).

According to the outcomes, my conclusion is that using exceptions in the flow of our scripts is not as bad as I supposed. OK, in this example exceptions are not a good idea, but in more complex scripts they are really useful. We must also take into account that the tests iterate 100000 times precisely to maximize the differences. So I will keep on using exceptions; this kind of micro-optimization does not seem to be really worthwhile. What do you think?
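By the way, all these micro-benchmarks share the same measuring boilerplate, so here is a minimal sketch of a reusable helper (my own wrapper, nothing standard) that times a callback in the same way:

// Hypothetical helper wrapping the microtime()/memory_get_usage() boilerplate.
function benchmark($callback)
{
    $time = microtime(TRUE);
    $mem = memory_get_usage();

    // Keep the callback's return value alive so anything it built is still
    // in memory when the second measurement is taken.
    $result = $callback();

    $stats = array(
        'memory'    => (memory_get_usage() - $mem) / (1024 * 1024),
        'microtime' => microtime(TRUE) - $time,
    );
    unset($result);

    return $stats;
}

// Usage: the even/odd loop without exceptions.
print_r(benchmark(function () {
    $even = $odd = array();
    foreach (range(1, 100000) as $i) {
        if ($i % 2 == 0) {
            $even[] = $i;
        } else {
            $odd[] = $i;
        }
    }
    return array($even, $odd);
}));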

Performance analysis of Stored Procedures with PDO and PHP

Last week I had an interesting conversation on Twitter about the usage of stored procedures in databases. Someone said stored procedures are evil. I don't agree with that: stored procedures are a great place to store business logic. In this post I'm going to compare the performance of a small piece of code implemented with a stored procedure and with plain PHP code.

Without stored procedures

// Without stored procedures
$time = microtime(TRUE);
$mem = memory_get_usage();

$dsn = 'pgsql:host=localhost;dbname=gonzalo;port=5432';
$user = 'user';
$password = 'password';
$conn = new PDO($dsn, $user, $password);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$conn->beginTransaction();
$stmt = $conn->prepare('delete from web.tbltest');
$stmt->execute();

$stmt = $conn->prepare('INSERT INTO web.tbltest (field1) values (?)');
foreach (range(0,1000) as $i) {
    $stmt->execute(array($i));
}
$conn->commit();

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

With stored procedures:

// With stored procedures:
/*
CREATE OR REPLACE FUNCTION web.method1()
  RETURNS numeric AS
$BODY$
BEGIN
   DELETE FROM web.tbltest;
   FOR i IN 0..1000 LOOP
     INSERT INTO web.tbltest (field1) values (i);
   END LOOP;
   RETURN 1;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
*/
$time = microtime(TRUE);
$mem = memory_get_usage();

$dsn = 'pgsql:host=localhost;dbname=gonzalo;port=5432';
$user = 'user';
$password = 'password';
$conn = new PDO($dsn, $user, $password);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$conn->beginTransaction();
$stmt = $conn->prepare('SELECT web.method1()');
$stmt->execute();
$stmt->setFetchMode(PDO::FETCH_ASSOC);
$out = $stmt->fetchAll();
$conn->commit();

print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

Without stored procedures:
memory: 0.0023880004882812
seconds: 0.31109309196472

With stored procedures:
memory: 0.0020713806152344
seconds: 0.065021991729736

So my conclusion: stored procedures are not evil. The performance is really good. I know things can get a bit messy if we place business logic both inside and outside the database at the same time, but with a good design and architecture this problem is easy to solve. What do you think?
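If the procedure needs input, we can pass parameters through the prepared statement as usual. A quick sketch, assuming a hypothetical web.method2(rows integer) variant of the function above that receives the number of rows to insert:

$conn = new PDO('pgsql:host=localhost;dbname=gonzalo;port=5432', 'user', 'password');
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// web.method2 is a hypothetical variant of web.method1 taking the number
// of rows to insert as a parameter.
$stmt = $conn->prepare('SELECT web.method2(:rows)');
$stmt->execute(array('rows' => 1000));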

Performance analysis of fetching data with PDO and PHP

Fetching data from databases is a common operation in our work as developers. There are many drivers (normally I use PDO), but the usage of all of them is similar, and switching from one to another is not difficult (they almost share the same interface). In this post I will focus on fetching data. Basically we've got two functions: fetch and fetchAll. I've created two examples, one with fetch and another one with fetchAll:

// Example with fetch
error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

$dbh = new PDO('pgsql:dbname=mydb;host=localhost', 'username', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $dbh->prepare('SELECT * FROM tableName limit 10000');
$stmt->execute();

$i=0;
while ($row = $stmt->fetch()) {
	$i++;
}
echo '<h1>fetch()</h1>';
echo "<strong>{$i}</strong>";
print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

// Example with fetchAll
error_reporting(-1);
$time = microtime(TRUE);
$mem = memory_get_usage();

$dbh = new PDO('pgsql:dbname=mydb;host=localhost', 'username', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $dbh->prepare('SELECT * FROM tableName limit 10000');
$stmt->execute();

$i=0;
$data = $stmt->fetchAll();
foreach ($data as $row) {
	$i++;
}

echo '<h1>fetchAll()</h1>';
echo "<strong>{$i}</strong>";
print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time));

If we execute the test we obtain:

fetchAll: [memory] => 31.305999755859
fetch: [memory] => 0.002532958984375

OK, it's obvious: if we approach the data extraction with the fetchAll method, we will use more memory. That's because we're mapping the whole recordset to a variable ($data) at once, while with the fetch loop we are mapping only one row per iteration. By the way, if we change the fetch loop to:

$data = array();
while ($row = $stmt->fetch()) {
	$i++;
	$data[] = $row;
}

we will use almost the same amount of memory as with the fetchAll method:
[memory] => 31.267543792725

Conclusion:
Is fetch better than fetchAll? The answer is simple: no. We only need to be aware of what we are doing and use the solution that best fits our needs. With small recordsets both behave similarly, but with big ones the memory usage changes drastically depending on which method we choose.
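One last side note: PDOStatement implements Traversable, so we can also iterate the statement directly with foreach. It behaves like the fetch() loop, pulling one row per iteration (I haven't benchmarked it here, but the memory profile should be similar):

$dbh = new PDO('pgsql:dbname=mydb;host=localhost', 'username', 'password');
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $dbh->prepare('SELECT * FROM tableName limit 10000');
$stmt->execute();

// One row per iteration, like the fetch() loop above.
$i = 0;
foreach ($stmt as $row) {
    $i++;
}
echo $i;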

Speed up page load combining JavaScript files with PHP

One of the golden rules when we want a high performance web site is to minimize the number of HTTP requests. Normally we have several JavaScript files within our projects, and it's a very good practice to combine them all into a single file. We can do it manually (it's not hard work; we only need to copy and paste the source files into a single js file), and there are even tools to do it. You can have a look at YSlow (if you don't know it yet). That's a good solution if your project is finished, but if your project is alive and changing, it's helpful to split your JavaScript across several files; it keeps them organized (at least for me). So we need to choose between high performance and development comfort. Because of that I like to use the simple script I'm going to show now. Let's start.

The idea is the following one. Normally I have all my js files in a folder called js (original, isn't it?). I also have a development server and a production one (really original again, isn't it?). When I'm developing my application I like to have my js files separated and human readable (I'm human), but in production I want them combined, minified and even gzipped to improve performance.

The script I use is a simple one that combines all the JavaScript files, then minifies and gzips the result:

//js.php
require 'jsmin.php';

function checkCanGzip(){
    if (array_key_exists('HTTP_ACCEPT_ENCODING', $_SERVER)) {
        if (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false) return "gzip";
        if (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'x-gzip') !== false) return "x-gzip";
    }
    return false;
}

function gzDocOut($contents, $level = 6) {
    // Build a gzip stream by hand around gzcompress() output.
    // (gzencode() would do all of this in one call.)
    $return = array();
    $return[] = "\x1f\x8b\x08\x00\x00\x00\x00\x00";
    $size = strlen($contents);
    $crc = crc32($contents);
    $contents = gzcompress($contents, $level);
    // Drop the trailing zlib adler32 checksum; gzip uses crc32 + size instead.
    $contents = substr($contents, 0, strlen($contents) - 4);
    $return[] = $contents;
    $return[] = pack('V', $crc);
    $return[] = pack('V', $size);
    return implode('', $return);
}

// Collect every .js file under this folder (recursively).
$buffer = array();
$ite = new RecursiveDirectoryIterator(dirname(__FILE__));
foreach (new RecursiveIteratorIterator($ite) as $file => $fileInfo) {
    $extension = strtolower(pathinfo($file, PATHINFO_EXTENSION));
    if ($extension == 'js') {
        $f = $fileInfo->openFile('r');
        $fdata = "";
        while (!$f->eof()) {
            $fdata .= $f->fgets();
        }
        $buffer[] = $fdata;
    }
}

$output = JSMin::minify(implode(";\n", $buffer));

header("Content-type: application/x-javascript; charset: UTF-8");
$forceGz    = filter_input(INPUT_GET, 'gz', FILTER_SANITIZE_STRING);
$forcePlain = filter_input(INPUT_GET, 'plain', FILTER_SANITIZE_STRING);

$encoding = checkCanGzip();
if ($forceGz) {
    // Forced gzip (?gz=1): fall back to plain "gzip" if the client did not
    // announce support in Accept-Encoding.
    header("Content-Encoding: " . ($encoding ? $encoding : 'gzip'));
    echo gzDocOut($output);
} elseif ($forcePlain) {
    // Forced plain output (?plain=1), useful for debugging.
    echo $output;
} else {
    if ($encoding) {
        header("Content-Encoding: {$encoding}");
        echo gzDocOut($output);
    } else {
        echo $output;
    }
}

As you can see, the script recursively collects all the js files inside one folder, combines them, and also uses the jsmin library for PHP to minify the result and improve the download time in the browser.
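The two query parameters are there for testing: js.php?gz=1 forces the gzipped output and js.php?plain=1 forces the plain, uncompressed one.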

Now it's very easy to switch from one set of js files to the other when we're building our HTML. Here you can see an example with the Smarty template engine:

<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <title></title>
    </head>
    <body>
        Hello World
{if $dev}
        <script src="js1.js" type="text/javascript"></script>
        <script src="js2.js" type="text/javascript"></script>
        <script src="xxx/js1.js" type="text/javascript"></script>
{else}
        <script src="js.php" type="text/javascript"></script>
{/if}
    </body>
</html>

Yes, I know, there's a problem with this solution. We may have improved the client-side performance by reducing the number of HTTP requests but, what about the server side? We've gone from serving static js files to serving a dynamic PHP one, so now our server's CPU will work more. Another great high performance golden rule is to place static files on a server dedicated to serving static content (without PHP support). With this rule we help the browser perform multiple downloads in parallel and we also reduce CPU usage (the static server never has to start PHP at all).
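One possible middle ground (a sketch I haven't battle-tested, assuming the web server can write to the script's folder, and leaving the gzip handling out for brevity) is to cache the combined output on disk and only re-minify when a source file changes:

require 'jsmin.php';

$cacheFile = dirname(__FILE__) . '/js.cache';

// Find every source file and the most recent modification time.
$files = array();
$newest = 0;
$ite = new RecursiveDirectoryIterator(dirname(__FILE__));
foreach (new RecursiveIteratorIterator($ite) as $file => $fileInfo) {
    if (strtolower(pathinfo($file, PATHINFO_EXTENSION)) == 'js') {
        $files[] = $file;
        $newest = max($newest, $fileInfo->getMTime());
    }
}

if (is_file($cacheFile) && filemtime($cacheFile) >= $newest) {
    // Cache hit: no minification work on this request.
    $output = file_get_contents($cacheFile);
} else {
    $buffer = array();
    foreach ($files as $file) {
        $buffer[] = file_get_contents($file);
    }
    $output = JSMin::minify(implode(";\n", $buffer));
    file_put_contents($cacheFile, $output);
}

header("Content-Type: application/x-javascript; charset=UTF-8");
echo $output;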

A better solution still is to generate the static js file offline when we deploy the application. I do it with a simple curl command from the command line:

curl http://nov/js/js.php -o jsfull.minified.js
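(Since the script falls back to plain output when no Accept-Encoding header is present, running it with the CLI binary, php js.php > jsfull.minified.js, should work too.)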

So the Smarty template changes to:

<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <title></title>
    </head>
    <body>
        Hello World
{if $dev}
        <script src="js1.js" type="text/javascript"></script>
        <script src="js2.js" type="text/javascript"></script>
        <script src="xxx/js1.js" type="text/javascript"></script>
{else}
        <script src="jsfull.minified.js" type="text/javascript"></script>
{/if}
    </body>
</html>

It's also a good idea to put a version suffix in the JavaScript file name and change it on each build (easy to automate) to make sure browsers fetch the new file instead of a cached copy:

<script src="jsfull.minified.20110216.js" type="text/javascript"></script>

And yes, we can use the same trick with our CSS files.