Category Archives: Web Development

Building a REST client with asynchronous calls using PHP and curl

One month ago I posted an article called Building a simple HTTP client with PHP. A REST client. In that post I tried to create a simple fluent interface to call REST webservices in PHP. Some days ago I read an article and a light switched on in my mind: I can use curl’s “multi” functions to improve my library and perform simultaneous calls very easily.

I’ve got a project that needs to call different webservices. Those webservices are sometimes slow (2-3 seconds). If I need to call, for example, three webservices, my script will take the sum of every single call’s time. With this improvement to the library I will only need the time of the slowest webservice: 2 seconds instead of 2+2+2 seconds. Great.

For the example I’ve created a really complex PHP script that sleeps for x seconds depending on an input param:

sleep((integer) $_REQUEST['sleep']);
echo $_REQUEST['sleep'];

With synchronous calls:

echo Http::connect('localhost', 8082)
    ->doGet('/tests/gam_http/sleep.php', array('sleep' => 3));
echo Http::connect('localhost', 8082)
    ->doPost('/tests/gam_http/sleep.php', array('sleep' => 2));
echo Http::connect('localhost', 8082)
    ->doGet('/tests/gam_http/sleep.php', array('sleep' => 1));

This script takes more or less 6 seconds (3+2+1)

But if I switch it to:

$out = Http::connect('localhost', 8082)
    ->get('/tests/gam_http/sleep.php', array('sleep' => 3))
    ->post('/tests/gam_http/sleep.php', array('sleep' => 2))
    ->get('/tests/gam_http/sleep.php', array('sleep' => 1))
    ->run();
print_r($out);

The script only uses 3 seconds (the slowest process)

I’ve got a project that uses it, but I had a problem: I have webservices on different hosts, so I’ve made a small change to the library:

$out = Http::multiConnect()
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 3)))
    ->add(Http::connect('localhost', 8082)->post('/tests/gam_http/sleep.php', array('sleep' => 2)))
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 1)))
    ->run();

With a single connection, exceptions are easy to implement: if curl_getinfo() returns an error I throw an exception. But now, with the multiple interface, how can I do it? Should I throw an exception if one call fails, or not? I have decided not to use exceptions in the multiple interface. I always return an array with the output of every webservice call, and if something goes wrong I return an instance of the Http_Multiple_Error class instead of the output. Why do I use a class instead of an error message? The answer is easy: if I want to check all the answers I can check whether any of them is an instanceof Http_Multiple_Error. And if I don’t want to check anything, I have added a silentMode() function to switch off all error messages.

$out = Http::multiConnect()
    ->silentMode()
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 3)))
    ->add(Http::connect('localhost', 8082)->post('/tests/gam_http/sleep.php', array('sleep' => 2)))
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 1)))
    ->run();
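When silentMode() is not enabled, the caller can inspect each entry of the returned array and react to failed calls. A minimal sketch (the requests are just illustrative; only run() and Http_Multiple_Error come from the library):

$out = Http::multiConnect()
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 3)))
    ->add(Http::connect('localhost', 8082)->get('/tests/gam_http/sleep.php', array('sleep' => 1)))
    ->run();

foreach ($out as $id => $response) {
    if ($response instanceof Http_Multiple_Error) {
        // This call failed; log it and keep processing the rest
        error_log("Request {$id} failed");
        continue;
    }
    // $response holds the body returned by the webservice
    echo $response;
}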

The full code is available on Google Code, but the main function is the following one:

    ...
    private function _run()
    {
        $headers = $this->_headers;
        $curly = $result = array();

        $mh = curl_multi_init();
        foreach ($this->_requests as $id => $reg) {
            $curly[$id] = curl_init();

            $type     = $reg[0];
            $url       = $reg[1];
            $params = $reg[2];

            if(!is_null($this->_user)){
               curl_setopt($curly[$id], CURLOPT_USERPWD, $this->_user.':'.$this->_pass);
            }

            switch ($type) {
                case self::DELETE:
                    curl_setopt($curly[$id], CURLOPT_URL, $url . '?' . http_build_query($params));
                    curl_setopt($curly[$id], CURLOPT_CUSTOMREQUEST, self::DELETE);
                    break;
                case self::POST:
                    curl_setopt($curly[$id], CURLOPT_URL, $url);
                    curl_setopt($curly[$id], CURLOPT_POST, true);
                    curl_setopt($curly[$id], CURLOPT_POSTFIELDS, $params);
                    break;
                case self::GET:
                    curl_setopt($curly[$id], CURLOPT_URL, $url . '?' . http_build_query($params));
                    break;
            }
            curl_setopt($curly[$id], CURLOPT_RETURNTRANSFER, true);
            curl_setopt($curly[$id], CURLOPT_HTTPHEADER, $headers);

            curl_multi_add_handle($mh, $curly[$id]);
        }

        $running = null;
        do {
            curl_multi_exec($mh, $running);
            usleep(200000); // sleep() truncates floats to whole seconds, so use usleep() for a 0.2s pause
        } while ($running > 0);

        foreach($curly as $id => $c) {
            $status = curl_getinfo($c, CURLINFO_HTTP_CODE);
            switch ($status) {
                case self::HTTP_OK:
                case self::HTTP_CREATED:
                case self::HTTP_ACEPTED:
                    $result[$id] = curl_multi_getcontent($c);
                    break;
                default:
                    if (!$this->_silentMode) {
                        // Use this request's data, not the leftover values from the last loop iteration
                        list($type, $url, $params) = $this->_requests[$id];
                        $result[$id] = new Http_Multiple_Error($status, $type, $url, $params);
                    }
            }
            curl_multi_remove_handle($mh, $c);
        }

        curl_multi_close($mh);
        return $result;
    }
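The do/while above polls in a loop; a common alternative (just a sketch, not part of the published library) is to block on curl_multi_select() so the process only wakes up when one of the transfers has activity:

// Hypothetical replacement for the polling loop above
$running = null;
do {
    $status = curl_multi_exec($mh, $running);
} while ($status === CURLM_CALL_MULTI_PERFORM);

while ($running > 0 && $status === CURLM_OK) {
    // Wait up to one second for activity on any handle instead of sleeping blindly
    if (curl_multi_select($mh, 1.0) === -1) {
        usleep(100000); // some curl versions return -1 immediately; back off a little
    }
    do {
        $status = curl_multi_exec($mh, $running);
    } while ($status === CURLM_CALL_MULTI_PERFORM);
}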

Clean way to call to multiple xmlrpc remote servers

The problem:

I’ve got a class library in PHP. This class library is distributed across several servers and I want to call them synchronously. The class library has an XMLRPC interface built with the Zend Framework XMLRPC server.
Here is an example class:

class Gam_Dummy
{
    /**
     * foo
     *
     * @param integer $arg1
     * @param integer $arg2
     * @return integer
     */
    function foo($arg1, $arg2)
    {
        return $arg1 + $arg2;
    }
}

and the xmlrpc server:

$class = (string) $_GET['class'];
$server = new Zend_XmlRpc_Server();
$server->setClass($class);
echo $server->handle();

First solution

An easy and fast solution for calling the remote interfaces is:

$class = "Gam_Dummy";
$client = new Zend_XmlRpc_Client("http://location/of/xmlrpc/server?class={$class}");
echo $client->call('foo', array($arg1, $arg2));

and if we have several remote servers:

$servers = array(
    'server1' => 'http://location/of/xmlrpc/server1',
    'server2' => 'http://location/of/xmlrpc/server2',
    'server3' => 'http://location/of/xmlrpc/server3'
    );

$class = "Gam_Dummy";
foreach (array_values($servers) as $_server) {
    $server = "{_server}?class={$class}";
    $client = new Zend_XmlRpc_Client($server);
    echo $client->call('foo', array($arg1, $arg2));
}

Second solution (one remote server):

I want to use the following interface to call my remote class

$class = "Gam_Dummy";
Gam_Dummy::remote("Gam_Dummy", 'server1')->foo($arg1, $arg2);

Why? The answer is that I like coding with the help of the IDE. If I use the first solution I must remember that the Gam_Dummy class has a foo function with two parameters. With the second solution, if I write the PHPDoc correctly, my IDE will help me by showing the function list of the Gam_Dummy class, and even when I type Gam_ the IDE will show me all the classes of my repository starting with Gam_. That may sound irrelevant to a lot of people, but for me it is really useful.

To get this interface I will change my Gam_Dummy class to:

class Gam_Dummy
{
    /**
     * foo
     *
     * @param integer $arg1
     * @param integer $arg2
     * @return integer
     */
    function foo($arg1, $arg2)
    {
        return $arg1 + $arg2;
    }

    /**
     * Remote interface
     *
     * @param string|array $server
     * @return Gam_Dummy
     */
    static function remote($server)
    {
        return new Remote(get_called_class(), $server);
    }
}

And of course Remote class:

class Remote
{
    private $_class  = null;
    private $_server = null;

    function __construct($class, $server)
    {
        $this->_class  = $class;
        $this->_server = $server;
    }

    function __call($method, $arguments)
    {
        if (class_exists($this->_class)) {
            $server = "{$this->_server}?class={$this->_class}";
            $client = new Zend_XmlRpc_Client($server);
            return $client->call($method, $arguments);
        }
    }
}

Cool, isn’t it? But there is a problem: if I want to work with two or more remote servers I must write one line of code for each server:

Gam_Dummy::remote('http://location/of/xmlrpc/server1')->foo($arg1, $arg2);
Gam_Dummy::remote('http://location/of/xmlrpc/server2')->foo($arg1, $arg2);
Gam_Dummy::remote('http://location/of/xmlrpc/server3')->foo($arg1, $arg2);

or maybe better, with the $servers array:

foreach (array_values($servers) as $server) {
    Gam_Dummy::remote($server)->foo($arg1, $arg2);
}

Third solution for multiple remote servers:

For multiple servers, I would like to use the following interface instead of solution two with a foreach:

$servers = array(
    'server1' => 'http://location/of/xmlrpc/server1',
    'server2' => 'http://location/of/xmlrpc/server2',
    'server3' => 'http://location/of/xmlrpc/server3'
    );

Gam_Dummy::remote($servers)->foo($arg1, $arg2);

so I change the Remote class to:

class Remote
{
    private $_class  = null;
    private $_server = null;

    function __construct($class, $server)
    {
        $this->_class  = $class;
        $this->_server = $server;
    }

    function __call($method, $arguments)
    {
        $out = array();
        if (is_array($this->_server)) {
            foreach ($this->_server as $key => $_server) {
                $server = "{$_server}?class={$this->_class}";
                $client = new Zend_XmlRpc_Client($server);
                $out[$key] = $client->call($method, $arguments);
            }
        } else {
            $server = "{$this->_server}?class={$this->_class}";
            $client = new Zend_XmlRpc_Client($server);
            $out = $client->call($method, $arguments);
        }
        return $out;
    }
}
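With this version, calling a method with the $servers array returns an array indexed by the same keys, while a single server still returns a plain value. A small illustrative sketch, assuming every server exposes the Gam_Dummy class shown above:

$servers = array(
    'server1' => 'http://location/of/xmlrpc/server1',
    'server2' => 'http://location/of/xmlrpc/server2',
);

// Multiple servers: one entry per server key
$out = Gam_Dummy::remote($servers)->foo(1, 2);
// $out === array('server1' => 3, 'server2' => 3)

// Single server: the bare return value of the remote call
$sum = Gam_Dummy::remote('http://location/of/xmlrpc/server1')->foo(1, 2);
// $sum === 3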

Playing with FriendFeed and App Engine: tv-sms.appspot.com

INTRODUCTION

This Christmas I set myself the goal of publishing an application with Google App Engine.
The idea of the application is simple. Nowadays it is very common to have a laptop or netbook and to sit in the living room watching TV with it. TV channels usually let people send SMS messages to certain phone numbers, and those messages appear at the bottom of our screens. Personally I have never used it, but it seems the service is very successful. Those messages are quite expensive and are a source of income for the channels and the carriers. The idea of the application is to do the same over the Internet, skipping the SMS, taking advantage of the laptop we have next to us in the living room.

The requirements I have set for myself are the following:

  • I don’t have unlimited time. I have to publish it in a couple of days.
  • Use the FriendFeed API.
  • The application has to limit server resource usage as much as possible.
  • Publish the code on Google Code.
  • Upload the application to App Engine: http://tv-sms.appspot.com/
  • Write a post on my blog explaining how it is built.

Well, I am now at the last point, so here is how it works.

HOW THE APPLICATION WORKS

Creating the groups

The idea is the following: with my user I create a public group on FriendFeed for each TV channel, and then I create a JavaScript file with the channel configuration:

var chanels = {
    'tvsmstve1' : 'TVE 1',
    'tvsmstve2' : 'TVE 2',
    'tvsms-antena3' : 'Antena 3',
    'tvsmscuatro' : 'Cuatro',
    'tvsms-tele5' : 'Tele 5',
    'tvsmslasexta' : 'La Sexta'
}

I understand this information should be stored in some kind of database, but for now it stays like this. When the application starts I read it and display it on screen:

function populateChanels() {
    $.each(chanels, function(i, chanel) {
        $('#chanels').append('<li class="status"><a class="chanelsLink" href="#" onclick="start(\'' + i + '\')">' + chanel + '</a></li>');
    });
}

Listing the latest updates of the groups

FriendFeed has a very easy-to-use API based on REST and JSONP. To get the list of updates of a channel we simply have to make a GET request to the following URL:

http://friendfeed-api.com/v2/feed/grupoid

Therefore, and taking into account that I am going to use jQuery for my application, I create the following function:

var users = {};
var frmInput = '<div class="share"><form action="?" method="post">' +
    '<input id="body" style="width: 300px;" name="body" type="text" />' +
    '<input id="submit" type="submit" value="Post" /></form></div>';

function _getLi(body, user, date) {
    var fDade = formatFriendFeedDate(date);
    return '<li class="status"><span class="thumb vcard author">' +
        "<img height='50' width='50' src='http://friendfeed-api.com/v2/picture/" + user + "?size=medium' alt='" + user + "'/></span>" +
        '<span class="status-body">' + user + ' : ' + body +
        '<span class="meta entry-meta">' + fDade + '</span></span></li>';
}

function list(id) {
    var txt = '';
    $.getJSON("http://friendfeed-api.com/v2/feed/" + id + "?callback=?",
        function (data) {
            $('#ff').html('');
            $.each(data.entries, function(i, entry) {
                users[entry.from.id] = entry.from.id;
                txt+= _getLi(entry.body, entry.from.id, entry.date);
            });
            $("#ff").html(txt);
            if (logged) {
                $('#inputForm').html(frmInput);
            }
            $('#txtCanal').html(chanels[id]);

            cometClient(id);
        });
}

In this function I get the list in JSON format and add it to an ol. Besides the list, I add a form so the user can post comments, and I start the comet client, but I will explain that later.

Real-time updates

What I like most about FriendFeed, and what made me choose it over Twitter, is that FriendFeed provides what it calls Real-Time updates. This feature, which as far as I know Twitter does not give us, allows us to build Comet clients very easily. Here is mine:

function cometClient(id, cursor) {
    var url = "http://friendfeed-api.com/v2/updates/feed/" + id + "?callback=?&timeout=5";
    if (cursor) {
        url += "&cursor==" + cursor;
    }
    $.getJSON(url, function (data) {
        $.each(data.entries, function(i, entry) {
            $('#ff').prepend(_getLi(entry.body, entry.from.id, entry.date));
        });
        t = setTimeout(function() {cometClient(id, data.realtime.cursor)}, 2000);
    });
}

Thanks to this I have real-time updates in my application without using any server resources.

New comments

The FriendFeed API does not let me create a new comment directly from JavaScript. This is a bit of a pain, because without this restriction the whole application would run on JavaScript and I would not need any server-side code at all. Since it is not possible, I have to authenticate with OAuth from Google App Engine. To do this I rely on the sample application that FriendFeed provides to play with its API.

That example does everything I need and does it all on the server. Since I am only going to use the part that FriendFeed calls creating an entry, I just read the code and remove the parts I don’t need. As one of my premises is that I don’t have unlimited time for this application, I am not going to try to understand every single line of code; I just aim to understand more or less how it works.

In my application the new-comment action is going to be called from JavaScript and I expect JSON back, so besides removing the parts of the code I am not interested in, I make some small changes so that the response is JSON:

class EntryHandler(webapp.RequestHandler):
    @authenticated
    def post(self):
        try:
            entry = self.friendfeed.post_entry(
                body=self.request.get("body"),
                to=self.request.get("to"))
            out = {'error' : 0}
        except:
            out = {"redirect" : '/oauth/authorize'}
        self.response.headers['Content-Type'] = 'application/json'
        jsonData = out
        self.response.out.write(simplejson.dumps(jsonData))
        return
...

application = webapp.WSGIApplication([
    (r"/oauth/callback", OAuthCallbackHandler),
    (r"/oauth/authorize", OAuthAuthorizeHandler),
    (r"/oauth/check", OAuthCheck),
    (r"/a/entry", EntryHandler),
])

The code could be much better (there is no error handling and nothing is done with the entry variable) but it does what I want: it posts an entry to FriendFeed.

Authentication

To post entries to FriendFeed I need to be authenticated, so I want the application to show the input form only when the user is authenticated. To do this I make an asynchronous call to the server when the application loads to check whether the user is authenticated or not:

function checkFfLogin() {
    $.post("/oauth/check", {}, function (data) {
        if (data.redirect) {
            logged = false;
        } else {
            if (data.ok && data.ok==1) {
                logged = true;
            }
        }

        if (logged == true) {
            $('#login').html('');
        } else {
            $('#login').html("<a href="&quot; + data.redirect + &quot;"><img border='0' src='/sign-in-with-friendfeed.png' alt='Sign in with FriendFeed'></a>");
        }
        populateChanels();
    }, "json");
}

This is complemented by the server-side part:

class OAuthCheck(webapp.RequestHandler):
    @authenticated
    def post(self):
        cookie_val = parse_cookie(self.request.cookies.get("FF_API_AUTH"))
        try:
            key, secret, username = cookie_val.split("|")
            self.response.headers['Content-Type'] = 'application/json'
            jsonData = {"ok" : 1}
            self.response.out.write(simplejson.dumps(jsonData))
        except:
            self.response.headers['Content-Type'] = 'application/json'
            jsonData = {"redirect" : '/oauth/authorize'}
            self.response.out.write(simplejson.dumps(jsonData))
        return

Changing channels

The only thing left is letting the user change channels.

var selectedChanel, t; // t holds the comet client timer so it can be cleared on channel change
function start(group) {
    $('#body').val('');
    selectedChanel = group;
    if (t) {
        clearTimeout(t);
    }
    $('#ff').html('<img src="/ajax-loader.gif" alt="loader"/>... cargando ' + chanels[group]);

    list(group);
}

THE END

Well, the application is now live. I know it has quite a few gaps, but it works and I have met the premises I set for myself: a couple of evenings to build it and one more to write the blog post and upload it to App Engine.

Link to the application: http://tv-sms.appspot.com/
Source code: http://code.google.com/p/gam-tvsms/

An idea for calling PostgreSQL’s stored procedures with PDO

As far as I know PDO doesn’t allow us to call PostgreSQL’s stored procedures directly. That’s not a problem. We can build an SQL statement and call the stored procedure as a plain SQL query.
Imagine we have a stored procedure called ‘method1’ in the schema ‘schemaName’.

CREATE OR REPLACE FUNCTION schemaName.method1(param1 numeric, param2 numeric)
  RETURNS numeric AS
$BODY$
BEGIN
   RETURN param1 + param2;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;

The way to call it is something like this:

$conn = new PDO($dsn, $user, $password);
$conn->beginTransaction();
$stmt = $conn->prepare("SELECT * FROM schemaName.method1(?, ?)");
$stmt->execute(array(1, 2));
$stmt->setFetchMode(PDO::FETCH_ASSOC);
$out = $stmt->fetchAll();
$conn->commit();

An idea for doing the same in a cleaner way is:

$conn = new MyPDO($dsn, $user, $password);
$conn->beginTransaction();
$out = $conn->setSchema('schemaName')->method1(1, 2);
$conn->commit();

That’s only an approach. I haven’t thought a lot about it, but it’s OK as a first pass.
And now the class I’ve created extending PDO to obtain the above interface.
The trick is in the __call function. Using __call I get dynamic methods in my MyPDO class, and I will assume every such method is a stored procedure.
class MyPDO extends PDO
{
    private $_schema = null; 

    /**
     * Set Schema
     *
     * @return MyPDO
     */
    public function setSchema($_squemaName)
    {
        $this->_schema = $_squemaName;
        return $this;
    } 

    function __call($method, $arguments)
    {
        $_params = array();
        if (count($arguments)>0) {
            for ($i=0; $i<count($arguments); $i++) {
                $_params[] = '?';
            }
        } 

        $stmt = $this->prepare("SELECT * FROM {$this->_schema}.{$method}(" .
            implode(', ', $_params) .  ")");
        $stmt->execute($arguments);
        $stmt->setFetchMode(PDO::FETCH_ASSOC);
        return $stmt->fetchAll();
    }
}

Moving singleton and factory patterns to Abstract with php 5.3

I have built a backend library. I have a tree of classes and I want to apply the singleton and factory patterns to my class set. Easy, isn’t it?

My dummy class

class Lib_Myclass
{
    public function function1($var1)
    {
        return $var1;
    }
}

A new instance of my class:

$obj = new Lib_Myclass;
$obj->function1('hi');

OK. Nice OO tutorial, Gonzalo, but I want to use the factory pattern, so:

class Lib_Myclass
{
    static function factory()
    {
        return new Lib_Myclass;
    }
    public function function1($var1)
    {
        return $var1;
    }
}

Lib_Myclass::factory()->function1('hi');

Now imagine you have a lot of classes. You must write the factory function over and over in every class. OK, you can create an abstract class and extend all your classes from it, but…

abstract class AbstractClass
{
    static function factory()
    {
        return new Lib_Myclass; // <---- what's the name of the class?
    }
}

What is the name of the class you should use when creating the new object? You can use __CLASS__, but it only works in the class where it is written. In an extended class, __CLASS__ still points to the name of the abstract class and not to the child class. With PHP < 5.3 there are some tricks for doing this, but those tricks are very ugly. In PHP 5.3 we have get_called_class(), so now we can create an abstract class with our factory and singleton implementations and extend our classes without adding any extra code over and over again.

abstract class AbstractClass
{
    // One instance per concrete class; a single shared property would
    // return the first child's instance for every other child too.
    private static $_instances = array();

    static function singleton()
    {
        $class = get_called_class();
        if (!isset(self::$_instances[$class])) {
            self::$_instances[$class] = static::factory();
        }
        return self::$_instances[$class];
    }

    static function factory()
    {
        $class = get_called_class();
        return new $class;
    }
}

And now our class:

class Lib_Myclass extends AbstractClass
{
    public function function1($var1)
    {
        return $var1;
    }
}

And we use the patterns:

<?php
Lib_Myclass::factory()->function1('hi');

Lib_Myclass::singleton()->function1('hi');

Clean and simple. I like it.

Should we use our own frameworks in a production system?

Some days ago I read an article called why every developer should write their own framework. It’s an interesting post and everybody agrees with the author. Building a framework is a good way to learn other frameworks. You face problems and you come up with solutions similar to the ones other frameworks have chosen. It’s also good to read the code of other frameworks to see the differences. I definitely agree with Brandon.

But my question is more complicated. Should we use our own frameworks in a production system? I think the answer is not as clear as the case for building our own framework. I will try to answer (or at least give my answer) in two scenarios: one as a developer and another one as an IT manager.

Answer for a developer

As a developer it is a difficult question. We agree that building our own framework is a good exercise, but using it in a production system? When you build your own framework you are the biggest expert in it. You don’t need to read books to learn your framework. You read books to take ideas and learn things from other ones, but not about your own. That’s a good point. But you must code a lot. You must code solutions that other frameworks have already implemented. So your own framework speeds up the learning curve (once the framework is done, or at least usable) but slows down development time whenever you need to code core functions. If your framework were finished, worked under all situations and had taken every possible future need into account, using it would be the best idea. But your own framework will never be finished. It will always be under construction, so the answer is not easy. I also don’t like developing solutions for non-existing problems. I prefer to develop the solution when the problem appears. I like to think about possible problems and possible solutions, but not to code the solution until the problem shows up. Maybe I can use a sandbox to develop some ideas, but not add them to the framework. So, at least for me, my frameworks are never finished.

As a developer another question appears: does my framework improve my resume? I’m not a person who learns things only to improve a resume but, like it or not, the resume is important. If you are an expert in Zend, Cake or Symfony you will probably have more chances of getting a job. It will be very difficult to find a job offer looking for an expert in your own framework, won’t it? But if you build your own framework and you keep in touch with the frameworks on the market (you read articles and books, and you play a bit with them), it will be very easy to adapt yourself to another existing framework.

Answer for an IT Manager

If you are an IT manager and you need to choose between a custom-made framework and an existing one, I will give you two possible answers: the short one and the long one. The short one is: use an existing one. If you need to build an IT team it will be easier to hire developers if you choose an existing framework. Zend, Cake and Symfony are good frameworks. They are open source and it is easy to find developers. Of course you should only choose open source frameworks. They are good enough, widely used and free. Free as in free beer. Something very important nowadays.

Now the long answer: it depends on your team. If you don’t have a team and you need to hire one, use the short answer and pick an existing framework. But if you already have a team, the answer is not that easy. A home-made framework means the knowledge is in-house. A home-made framework means you don’t need to pay any external consultant to solve anything or to add any new functionality. But a home-made framework also means nobody outside your team knows it. If you need to hire more developers you will need to train them, and the documentation of your framework will probably be very poor (or maybe it doesn’t even exist), so the answer is difficult.

Another important fact: when we build our own things, those things become something like our children, and we work with them in a different way than if our boss imposed them on us as mandatory.

Conclusion

There are two possible answers: yes and no. Use one or the other according to your personal situation. You may well choose the wrong answer, but remember it’s better to choose one than to stay doing nothing.

Server side web frameworks are dead

Server side web frameworks are dead. Long live client side web frameworks. Struts, JSF, Spring, Ruby on Rails, Cake, Symfony… (even my own PHP framework 😉 ) as full stack web frameworks are dead. Logic is moving to the client side. JavaScript, the language used as a deadly weapon in the browser wars, is nowadays the key to the web. Server logic is getting thin. A thin server side framework serving JSON (Zend Framework is really cool for this) plus a good, strong client side framework such as Dojo or ExtJS with a good widget set holding all the application logic can be the new paradigm of web development.
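As a rough sketch of how thin that server side can become (a framework-agnostic endpoint I am making up for illustration, not taken from any of the frameworks mentioned), the server’s job is reduced to handing JSON to the client side code:

// A minimal JSON endpoint: no views, no templates, only data for the client side framework
header('Content-Type: application/json');

// In a real application this would come from the model / database layer
$items = array(
    array('id' => 1, 'title' => 'First item'),
    array('id' => 2, 'title' => 'Second item'),
);

echo json_encode($items);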

A must-see talk about web trends related to this idea:

Mixing dojo and google api libraries

I’m working on a dojo application and I also need to use some google APIs like feeds, maps and books. Dojo is very versatile at adding new components using dojo.require. When all the dojo components are loaded, the dojo.addOnLoad callback is fired.

dojo.require("dojo.io.script");
dojo.require("dojo.data.ItemFileReadStore");
dojo.addOnLoad(function(){
    console.log('all is ready');
});

dojo.require is great. It also allows us to create nested onLoad callbacks:

dojo.require("dojo.io.script");
dojo.require("dojo.data.ItemFileReadStore");

dojo.addOnLoad(function(){ // <------------
    console.log('all is ready 1');
    dojo.require("dijit.TitlePane");
    dojo.require("dijit.Dialog");
    dojo.addOnLoad(function(){ // <------------
        console.log('all is ready 2');
    });
});

The google API uses its own loader, google.load:

google.load("jquery", "1.3.2");
google.load("feeds", "1");
google.setOnLoadCallback(function() {
    console.log('all is ready');
});

If your application uses both libraries you must take care with the onLoad callbacks of dojo and google, because they are not the same. You could create your own dojo widget for loading the google API and forget about google.setOnLoadCallback, but I want to use the libraries in as standard a way as I can (without hacking code every time I have a problem), so I prefer to live with both callbacks. You can try to do something like this:

google.load("dojo", "1.3");
google.load("feeds", "1");

dojo.require("dijit.layout.BorderContainer");
dojo.require("dijit.layout.TabContainer");
dojo.require("dijit.layout.ContentPane");

dojo.addOnLoad(function(){
    // start dojo application
});

This code will work, but sometimes it will crash. Then you press F5 and everything works. As far as I know you can’t do nested onLoad callbacks with google (it’s a pity), so first you must load the google API and only load the dojo components when the google API is ready. So it’s better to use this other version of the code:

google.load("dojo", "1.3");
google.load("feeds", "1");

google.setOnLoadCallback(function() {
    dojo.require("dijit.layout.BorderContainer");
    dojo.require("dijit.layout.TabContainer");
    dojo.require("dijit.layout.ContentPane");

    dojo.addOnLoad(function(){
        // start dojo application
    });
});

Speed up page load with asynchronous javascript

JavaScript is an important part of our web applications. Normally the page is not usable until the js is fully loaded. Almost all js frameworks implement an event to tell us when the js is ready (dojo.addOnLoad(); $(function() {});) and the page is usable. The page is usable when the js is loaded. Sometimes, if you are working with dojo, you need all the js and maybe it is better to create a splash screen until everything is loaded. But sometimes your app doesn’t need all the js to be usable. Let me give you an example: I’ve been working on a project with jQuery. I load the jQuery library from the google CDN:

<script type="text/javascript" src="http://www.google.com/jsapi?key=mygoogleApiKey"></script>
<script type="text/javascript">
    google.load("jquery", "1.3.2");
</script>

jQuery is mandatory. The site doesn’t work without this library.

google.setOnLoadCallback(function() {
    $(function() {
        // application code ...
    });
});

I want to put the RSS feeds of some blogs on the left bar of the application and I need to do it with js. I’m going to use google’s feed API to do it, so:

<script type="text/javascript" src="http://www.google.com/jsapi?key=myGoogleApiKey"></script>
<script type="text/javascript">
    google.load("jquery", "1.3.2");
    google.load("feeds", "1"); // <-------------
</script>

And I call the js function that populates the RSS feeds inside google’s onready callback (setOnLoadCallback):

google.setOnLoadCallback(function() {
    initFeeds(); // <-------------
    $(function() {
        // application code ...
    });
});

This version of the code works but it has a problem. The application will not be usable until initFeeds ends. The feeds on the left bar are cool, but the user doesn’t need them to use the application. They are just decoration, so why do we force the user to wait until the feeds are loaded? The solution is quite simple. Set a timer to free the main js file and populate the feeds asynchronously:

google.setOnLoadCallback(function() {
    setTimeout(initFeeds, 1000); // <-------------
    $(function() {
        // application code ...
    });
});

fetching book cover with google API and dojo

I want to put the cover of a book into my dojo application. I could use the Amazon web service, but the google API has a nice JSONP interface and it also uses the ISBN instead of the ASIN to fetch book info, and for me it is easier to find the ISBN (it is on the first page of every book).
I have a previous post explaining how to use JSONP with plain javascript, but now I want to do the same in a dojo way using dojo.io.script.get:

dojo.io.script.get({
    url:"http://books.google.com/books?bibkeys=" + isbn + "&jscmd=viewapi",
    callbackParamName: "callback",
    load: dojo.hitch(this, function(booksInfo){ this.googleCallback(booksInfo); })
});

And finally we are going to use booksInfo to show our flaming cover:

googleCallback: function(booksInfo) {
        var div = dojo.byId('divId');
        div.innerHTML = '';
        var mainDiv = dojo.doc.createElement('div');
        for (var i in booksInfo) {
            // Create a DIV for each book
            var book = booksInfo[i];
            var thumbnailDiv = dojo.doc.createElement('div');
            thumbnailDiv.className = "thumbnail";

            // Add a link to each book's information page
            var a = dojo.doc.createElement("a");
            a.href = book.info_url;
            a.target = '_blank';

            // Display a thumbnail of the book's cover
            var img = dojo.doc.createElement("img");
            img.src = book.thumbnail_url + '&zoom=1';
            img.border = 0;
            a.appendChild(img);
            thumbnailDiv.appendChild(a);

            mainDiv.appendChild(thumbnailDiv);
        }
        div.appendChild(mainDiv);
}