My website is slow. What can I do?

You are working on a website. The website works, and everything seems fine, but your clients tell you it is very slow. You must face the problem and improve the behaviour of the site, yet remember: the site works properly, there are no errors. The usual way of solving problems (understand the problem, reproduce it in a test environment, fix it) doesn't fit this scenario. So what can we do? In this post I want to give some recommendations for tackling performance problems. Let's go.

Don’t assume anything

It's very typical in our work to assume what the problem is when someone reports an issue. Normally the user is not a technician: he suffers a problem and tells us the symptoms. For example, somebody from the procurement office calls you saying the application doesn't work. You assume the application he uses every day is broken. You check the server: it's OK. Server logs: OK. What's happening? Finally you discover there is a network problem, not a problem in the application, but the effect on the users is the same.
When you face a performance issue, don't assume anything. First of all you must debrief the user to get a picture of the problem. Forget the solution for now; in this phase you only need to collect information and do the analysis later. If possible, go to the user's office and see the problem with him. I remember a performance problem from some time ago. The user had serious problems with the application. I tested the application and didn't find anything wrong, but the problem persisted and she was the main user of the application. I went to her office and discovered the real problem: the application was slow, but the screen-saver was slow too, the spreadsheet was slow, in fact everything was slow there. The problem was the RAM of the PC. More RAM and, magically, all the applications became faster.

Measure the problem

If something is slow you must measure times, but be careful: wrong measurements can make you waste your time solving the wrong problem. A typical mistake is checking times only on the server side, for example starting a timer when the script starts and stopping it when it ends. Imagine you measure 1 second. It can be improved, but is the end-user really complaining about a 1 second response time? Probably not. Still, you start improving your server-side code, spend some time on it, and go from 1 second to 0.1 seconds. You are very proud of your improvement and you tell the user the problem is solved. But the user does not agree: the problem persists. You have been working on a different problem, a real one indeed, but not the one the user reported. Why? Because you assumed the problem was in the server code, and your measurements were taken in the wrong place.

It is quite probable the problem is on the client side. If you take a look at Firebug's Net tab you will see that the server-side part (the PHP, for example) is normally the first request, but it is not the only one, and often not even the longest one. It can be a small percentage of the full page load and render time. If you want to achieve significant improvements you must attack the main bottleneck directly (after detecting it, of course).

If you want to learn a lot about client-side performance, pick up Steve Souders's book "High Performance Web Sites". You can also read his other book, "Even Faster Web Sites", but the first one is definitely a must-read for people working in this area. You can also watch many of Steve Souders's talks on YouTube. Do it: he is a great guru in this area and also a good speaker; after watching his talks you will want to have a beer with him. Just by following the really simple rules in "High Performance Web Sites" you will probably achieve significant results.
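As a minimal sketch of the mistake described above, this is roughly what a server-side-only timer looks like; the simulated 50 ms of "work" is just an assumption for the example. Note that this number says nothing about network latency, asset downloads or rendering time on the client:

```php
<?php
// Minimal sketch: measuring only the server-side time of a script.
// A "fast" number here does not mean the page feels fast to the user,
// because everything after the HTML leaves the server is ignored.
$start = microtime(true);

// ... the real page would do its work here ...
usleep(50000); // simulate 50 ms of server-side work

$elapsed = microtime(true) - $start;
printf("server-side time: %.3f s\n", $elapsed);
```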

Cache it

I know I'm not very original giving recommendations about caching in this post, but caching is very important. There is a lot of theory about caching. Cache everything you can, but don't do it like mad: you will end up in a cache nightmare if you don't have a good caching plan. You must define the storage, the TTL (time to live), and what is going to be cached and what is not. A wrong caching policy can jeopardize a project, but a good one will improve the performance.
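To make the "caching plan" idea concrete, here is a minimal file-based cache sketch with an explicit storage, key and TTL. The function names, the 300-second TTL and the cached payload are all invented for the example; a real project would more likely use memcache or APC as the storage:

```php
<?php
// Minimal cache sketch: storage = temp files, ttl = 300 s (an assumption).
function cache_get($key, $ttl = 300) {
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    return false; // miss or expired
}

function cache_set($key, $value) {
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    file_put_contents($file, serialize($value));
}

// usage: only do the expensive work on a cache miss
if (($data = cache_get('user_stats')) === false) {
    $data = ['visits' => 100]; // the expensive computation would go here
    cache_set('user_stats', $data);
}
```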

Do it offline

Doing everything online is cool. The user gets fresh results when he clicks a button, but what happens if the action takes too long? Imagine you have a button that sends ten emails every time the user clicks it. In normal situations the operation is fast enough, but what happens if the mail server is under heavy load, or even down? Your application will freeze and your user will get angry. Think about moving operations to the background. There are great tools like Gearman for this kind of work. Transform your button from "user clicks, mail 1 is sent, mail 2, … mail 10 is sent, OK" to "user clicks, new task in our job server, OK". Now "OK" doesn't mean the ten emails have been sent; it means they will be sent. Weigh up whether this new behaviour is acceptable: sometimes it isn't possible, but in other cases it is perfectly viable.

Imagine you have an important report that uses a complex SQL query to extract information from the database, joining several tables to meet the user's expectations. Is it mandatory to run the full query every time? Think about creating some statistics tables that collect old information (data that no longer changes, such as past months or years). Take an offline snapshot of your real-time data and run the queries over those snapshots instead of over the live data. Sometimes this technique is not applicable, but when it is you can achieve important time savings in your database queries.
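A sketch of the snapshot idea, using SQLite through PDO so it is self-contained (any PDO driver works the same way). The table and column names are invented for the example; the point is that closed months never change, so they can be aggregated once, offline, and the report then reads the small precomputed table:

```php
<?php
// Sketch: snapshot old, unchanging data into a statistics table.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE sales (month TEXT, amount INTEGER)');
$db->exec("INSERT INTO sales VALUES ('2010-01', 10), ('2010-01', 20), ('2010-02', 5)");

// offline step (e.g. a nightly cron job): take the snapshot once
$db->exec('CREATE TABLE sales_by_month AS
           SELECT month, SUM(amount) AS total FROM sales GROUP BY month');

// online step: the report reads the small, precomputed table
$stmt = $db->query("SELECT total FROM sales_by_month WHERE month = '2010-01'");
echo $stmt->fetchColumn(), "\n"; // 30
```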

Database connections

When working with relational databases, creating a database connection is a slow operation, so be careful with it. It's good practice to put a counter in your script showing how many connections you create, how many queries you run and how many rows your queries return to the application. You can discover unpleasant surprises with this simple log. If your application opens more than one connection to the same database within the same script execution, you are almost certainly doing something wrong. Always use lazy connections to the database. I have also seen scripts that connect to the database without performing any operation, which means wasting time connecting for nothing. Connect only when you are really going to use the connection, not at the beginning of the script as a general rule.

Check the SQL you are using. If, for example, you run the same query on every click on the site to fetch some user information, a red light must start flashing in your mind with the text: cache it!
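A minimal sketch of a lazy connection with the counters described above. The class name is invented and the SQLite in-memory DSN is just so the example is self-contained; the idea is that the PDO object is only created when the first query actually runs:

```php
<?php
// Sketch: lazy connection plus connection/query counters.
class LazyDb {
    private $pdo = null;
    public $connections = 0;
    public $queries = 0;

    public function query($sql) {
        if ($this->pdo === null) {              // connect on first use only
            $this->pdo = new PDO('sqlite::memory:');
            $this->connections++;
        }
        $this->queries++;
        return $this->pdo->query($sql);
    }
}

$db = new LazyDb();          // no connection has been opened yet
$db->query('SELECT 1');
$db->query('SELECT 2');
echo "connections={$db->connections} queries={$db->queries}\n"; // connections=1 queries=2
```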

Trace long queries and analyze them in the database. Check indexes and execution plans. This is usually a big bottleneck in web applications.
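A quick sketch of asking the database for an execution plan, again with SQLite so it runs anywhere (SQLite's syntax is EXPLAIN QUERY PLAN; MySQL uses plain EXPLAIN). The table and index names are invented for the example:

```php
<?php
// Sketch: check that a query will use an index instead of a full scan.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER, email TEXT)');
$db->exec('CREATE INDEX idx_users_email ON users (email)');

foreach ($db->query("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'a@b.c'") as $row) {
    echo $row['detail'], "\n"; // should mention the index, not a table scan
}
```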

Debug flags

Use debug flags to measure the problem. Firebug in combination with FirePHP is a great team to help us in our work. But don't forget to turn those flags off on the production server: too many active debug collectors in production will slow down our application with unnecessary work.
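A trivial sketch of what such a flag looks like; the constant and function names are invented. The point is that a single switch turns every collector off in production, where the guard makes the call essentially free:

```php
<?php
// Sketch: one debug flag guarding all the collectors.
define('DEBUG', false); // true on development, false on production

function debug_log($label, $value) {
    if (!DEBUG) {
        return; // production: no work at all
    }
    // on development this could go to FirePHP, a log file, etc.
    error_log($label . ': ' . var_export($value, true));
}

debug_log('query count', 42); // does nothing while DEBUG is false
echo DEBUG ? "debugging\n" : "production mode\n";
```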

Building network services with PHP and xinetd

Not everything is web and HTTP. Sometimes we need to create a network service listening on a port. We could write a TCP server in C, Java or even PHP, but there's a really helpful daemon in Linux that does the hard part for us: xinetd. In this article we are going to create a network service with PHP and xinetd.

Now we are going to create our brand new service with xinetd and PHP. Let's start. First we are going to create a simple network service listening on port 60321. Our network service will say hello. The PHP script will be very complicated:

<?php
// /home/gonzalo/tests/test1.php
echo "HELLO\n";

We want our service on TCP port 60321, so we need to define this port in /etc/services. We add the following line at the end of /etc/services:

// /etc/services
myService   60321/tcp # my hello service

And finally we create our xinetd configuration file in the folder /etc/xinetd.d/, called myService (/etc/xinetd.d/myService):

# default: on
# description: my test service

service myService
{
        socket_type             = stream
        protocol                = tcp
        wait                    = no
        user                    = gonzalo
        server                  = /usr/local/bin/php-cli
        server_args             = /home/gonzalo/tests/test1.php
        log_on_success          += DURATION
        nice                    = 10
        disable                 = no
}

Now we restart xinetd

sudo /etc/init.d/xinetd restart

And we have our network service ready:

telnet localhost 60321
Trying ::1...
Connected to localhost.
Escape character is '^]'.
HELLO
Connection closed by foreign host.

Easy, isn't it? But it's not really useful yet, so we are going to change our PHP script to accept input. Here we cannot use POST or GET parameters (this is not HTTP), so we need to read the input from stdin. In PHP (and in other languages too) that's pretty straightforward:

<?php
$handle = fopen('php://stdin', 'r');
$input  = trim(fgets($handle)); // strip the trailing newline from the client
fclose($handle);

echo "hello {$input}\n";

Now if we run our script from the CLI it will wait for our input. So let's test the network service with telnet:

telnet localhost 60321
Trying ::1...
Connected to localhost.
Escape character is '^]'.

we type: “gonzalo” and:

hello gonzalo
Connection closed by foreign host.

Easy trick to move cache files to RAM without coding a PHP line

Some years ago I faced the problem of improving the performance of a web application. This application used Smarty as its template engine. Smarty 'compiles' the templates (the tpl files) into tpl.php files.

According to the server logs, almost all the I/O reads were of those tpl.php files. These files lived, together with other cache files, in a cache dir on the file system. Nowadays template engines allow RAM backends like memcache or similar, but I didn't want to touch a single line of code or change the behaviour of the template engine, so I decided to use an easy trick.

The server was a Linux box, and Linux systems have /dev/shm: a temporary file system on shared memory. Normally it's mounted with a percentage of our RAM, and we can use it to increase the performance of our application, since I/O operations in RAM are much faster than on hard disks.

We only have to take one thing into account: when we power down the host, all the data in our tmpfs is lost, so it's perfect for cache files. I only needed to delete the cache dir and create a symbolic link to /dev/shm:

rm -Rf cache
ln -s /dev/shm cache

And that's it. All cache reads are now done in RAM instead of on disk.

As I said before, remember that /dev/shm is flushed when we power down the server. So if our cache system uses subfolders, we must make sure the application doesn't assume those folders already exist. If we have a directory tree inside the cache dir and we don't want to change a single line of the application, we can recreate the tree in a script and run that script at server start-up.
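A minimal start-up sketch of that idea: recreate the cache tree the application expects before it runs. The subfolder names here are hypothetical (they look like a Smarty layout); adapt them to your own cache structure:

```php
<?php
// Start-up sketch: /dev/shm is empty after a reboot, so recreate the
// cache tree the application expects. Folder names are hypothetical.
$base = '/dev/shm';
foreach (['templates_c', 'cache', 'configs'] as $dir) {
    $path = $base . '/' . $dir;
    if (!is_dir($path)) {
        mkdir($path, 0775, true); // recursive, like mkdir -p
    }
}
```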

Easy, fast and clean. I like it.

NSLU2 installation

I need to change the HDD of my Linksys NSLU2, so I will use the opportunity to write a small HowTo showing how to set up the Unslung firmware on my NSLU2. Let's start.

First of all I download the latest version of the Unslung firmware and install, on my Ubuntu box, the software needed to flash the firmware onto the NSLU2:

sudo aptitude install upslug2

I put my NSLU2 into upgrade mode and …

# upslug2
 [no NSLU2 machines found in upgrade mode]

What happened? I always forget that I have two network cards (eth0 and eth1) and that I use eth1:

upslug2 -help
upslug2: usage: upslug2 {options}
-d --device[eth0]: local ethernet device to use
-t --target: NSLU2 to upgrade (MAC address)
-i --image: complete flash image to use

so …

sudo upslug2 -d eth1 -t  00:13:10:d9:03:ff -i Unslung-6.10-beta.bin

The NSLU2 comes up on its own default subnet, which is different from my home network, so I need to change my IP if I want to connect to it:

sudo ifconfig eth1 netmask up

Now I open a browser and type

From the web interface I change the IP, the gateway and the DNS.

I change my IP again:

sudo ifconfig eth1 netmask up

and open a browser again.

Now I plug the HDD into the device (into Disk 1) and format it from the web interface.

Now I enable telnet, log in to the device with the default username/password (root/uNSLUng) and unsling disk1:

Connected to
Escape character is '^]'.
LKGD903B8 login: root
Welcome to Unslung V2.3R63-uNSLUng-6.10-beta
This system is currently running from the internal flash memory,
it has NOT booted up into "unslung" mode from an external drive.
In this mode, very few services are running, and available disk
space is extremely limited.  This mode is normally only used
for initial installation, and system maintenance and recovery.
BusyBox v1.3.1 (2007-12-29 03:38:35 UTC) Built-in shell (ash)
Enter 'help' for a list of built-in commands.

# unsling disk1
Waiting for /share/hdd/data ...
Target disk is /share/hdd/data
Checking that /share/hdd/data has been properly formatted...
Checking that /share/hdd/data is clean...
Please enter the new root password.  This will be the new root
password used when the NSLU2 boots up with or without disks
Changing password for root
Enter the new password (minimum of 5, maximum of 8 characters)
Please use a combination of upper and lower case letters and numbers.
Enter new password:
Re-enter new password:
Password changed.
Copying the complete rootfs from / to /share/hdd/data ...
(this will take just a couple of minutes)
Copy complete ...
Linking /usr/bin/ipkg executable on target disk.
Linking /etc/motd to the unslung motd on target disk.
Updating /home/httpd/html/home.htm with target disk info.
Creating /.sdb1root to direct switchbox to boot from /share/hdd/data.
Unsling complete.
Leave the device disk1, /dev/sdb1, plugged in and reboot (using
either the Web GUI, or the command "DO_Reboot") in order to boot
this system up into unslung mode.
# Connection closed by foreign host.

Reboot the device, enable telnet again, and install OpenSSH:

# ipkg update
# ipkg install openssh

Motion detection with a Linksys WVC54GC

I have a Linksys WVC54GC camera and I'm playing with its motion detection. For this I found a bash script which, after tweaking it a bit here and there, does more or less what I need.


#!/bin/bash
# PNMPSNR and SENSITIVITY were defined elsewhere in the original script;
# these values are assumptions
PNMPSNR=pnmpsnr
SENSITIVITY=30

cd /mnt/data/.cam/
while true
do
    # grab a single frame from the camera stream
    mplayer http://webcam/img/video.asf -really-quiet -frames 1 -vo jpeg
    djpeg 00000001.jpg > current.ppm
    # pnmpsnr reports the PSNR between the two frames on stderr
    $PNMPSNR current.ppm last.ppm > Ycolour 2>&1
    Y=`awk '/Y  color/ {print int($5)}' Ycolour`
    # a low PSNR on the Y channel means the frames differ: motion
    if [ $Y -lt $SENSITIVITY ]
    then
        if [ -d ./`date +%Y%m%d` ]
        then
            cp 00000001.jpg ./`date +%Y%m%d`/`date +%y%m%d%H%M%S.jpg`
        else
            mkdir ./`date +%Y%m%d`
            cp 00000001.jpg ./`date +%Y%m%d`/`date +%y%m%d%H%M%S.jpg`
        fi
    fi
    mv current.ppm last.ppm
done

This, combined with a script in init.d:

gonzalo@gnzl:/etc/init.d$ cat webcamd
#!/bin/sh
### BEGIN INIT INFO
# Provides:          webcamd
# Required-Start:    networking
# Required-Stop:     networking
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the webcamd web server.
### END INIT INFO

DESC="webcam snapshot motion"
ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"

. /lib/lsb/init-functions
cd /mnt/data/.cam/
case "$1" in
  start)
      echo "Starting $DESC"
      su -l gonzalo -c "sh /mnt/data/.cam/ &"
      ;;
  *)
      exit 1
      ;;
esac

exit 0

It does more or less what I want.

Yes, I know it could be improved a lot, since I have spent very little time on it (excuses mode on). And yes, I know there are programs like Motion and ZoneMinder that do this much more nicely, but some day I will look at them.

My main problem is that the only way I have found to get a still image from the camera is with the command:

mplayer http://webcam/img/video.asf -really-quiet -frames 1 -vo jpeg

since the camera doesn't give me the possibility of grabbing the image directly (or at least I don't know how to do it). My problem is that I don't want to run this on a PC, as it is now, but on an NSLU2, which is a little device with an ARM architecture. Well, I can't manage to compile mplayer for ARM, and on top of that I think that even if I managed it, it wouldn't work, because the codecs needed to watch an asf stream on Linux are only available as x86 binaries.

I would like to know how to do with vlc the same thing I do with mplayer (vlc I do have correctly installed on the NSLU2). Another option is to follow the instructions I have seen for getting the little camera working with Motion, but I can't get the restreaming with ffmpeg to work, because when I run:

ffmpeg -an -i http://yourwebcam.up/img/video.asf http://localhost:8090/feed1.ffm

it tells me it can't open the asf file, and I have no idea why.

Well, I'll keep fighting with it.