By Loïc d'Anterroches, 18th of February, 2011

Photon Performance Against PHP as Module

Photon is as fast as baseline PHP in a traditional PHP-as-a-module setup. This means you can have both the ease of development of a high quality framework and high performance.

Photon Is The Fastest

Because nothing is faster than a photon. Photon reaches 90% of the PHP baseline, where the fastest of the other frameworks are always below 1/3 of the baseline.

For the "Hello World" application, Photon is about:

  • 2.7 times faster than Symfony 2.0.0alpha1
  • 3.2 times faster than Solar 1.0.0beta3
  • 4 times faster than Lithium 0.6
  • 4.7 times faster than Yii 1.1.1
  • 5.2 times faster than symfony 1.4.2
  • 10 times faster than Zend 1.10
  • 18 times faster than CakePHP 1.2.6
  • 180 times faster than Flow3 1.0.0alpha7
Framework            |      rel | Photon is about:
-------------------- | -------- | -----------------------------------------
photon-dev           |   0.9041 | ########################################
symfony-2.0.0alpha1  |   0.3312 | ##############
solar-1.0.0beta3     |   0.2825 | ############
lithium-0.6          |   0.2128 | #########
yii-1.1.1            |   0.1901 | ########
symfony-1.4.2        |   0.1737 | ########
zend-1.10            |   0.0906 | ####
cakephp-1.2.6        |   0.0513 | ##
flow3-1.0.0alpha7    |   0.0048 | |
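The "times faster" bullets follow directly from the rel column: Photon's ratio divided by the other framework's (the bullets round the results, e.g. 17.6 becomes "about 18"). A quick sketch of the arithmetic, with three rows copied from the table (plain PHP, nothing framework-specific):

```php
<?php
// Relative-to-baseline throughput figures, copied from the table above.
$rel = [
    'photon-dev'          => 0.9041,
    'symfony-2.0.0alpha1' => 0.3312,
    'cakephp-1.2.6'       => 0.0513,
];

// "Photon is X times faster" is Photon's ratio divided by the other's.
foreach ($rel as $name => $r) {
    printf("%-22s %.1fx\n", $name, $rel['photon-dev'] / $r);
}
```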

Ranking based on the relative values against baseline PHP generated by the Symfony benchmarks (that page has since been taken offline; another copy of the figures is available).

All The Benchmarks Are Lies

First, you need to know that all benchmarks are made to look nice against the other frameworks; at the moment you will not find a single benchmark summary where each framework author says: "Yes, this is the optimal you can get out of my framework." For example, the Symfony benchmarks were not updated when Paul M. Jones showed that they had some flaws. Worse, they do not include CodeIgniter or Kohana, even though these two are known to be really fast. At the end you have a so-called "Product" application benchmark, for a real life application; just discard it, as it does not auto-escape the output of the variables for Symfony (while all the other frameworks always do so automatically), and auto-escaping in the template engines is the limiting factor, as they all compile down to pure PHP.

Only The Hello World Is Worth Something

Yes, only the traditional Hello World! benchmark is interesting, because it measures the overhead of the framework. It answers the question: the day I really need to squeeze the maximum performance out of my system, what is the limit? You cannot go faster than that per page; beyond that, you need to scale up or out.

Now The Results

All the tests have been run on a very small setup running Ubuntu Lucid Lynx with the stock Apache and mod_php, without any particular tuning. The software used:

  • Siege 2.68
  • Apache 2.2.14
  • PHP 5.3.2
  • Mongrel2 1.5
  • ZeroMQ 2.0.10

The content of the baseline PHP is:

<?php echo 'Hello World!'; ?>

The content of the baseline HTML:

Hello World!

The content of the Photon view:

<?php
class Hello {
    public function hello($request, $match) {
        return new \photon\http\Response('Hello World!', 'text/plain');
    }
}

The baseline HTML returns about 2900 transactions per second.

$ siege -b -c4 -t5S http://localhost/hello.html
** SIEGE 2.68
** Preparing 4 concurrent users for battle.
The server is now under siege...
Lifting the server siege..      done.
Transactions:                  13913 hits
Availability:                 100.00 %
Elapsed time:                   4.75 secs
Data transferred:               3.76 MB
Response time:                  0.00 secs
Transaction rate:            2929.05 trans/sec
Throughput:                     0.79 MB/sec
Concurrency:                    3.11
Successful transactions:           0
Failed transactions:               0
Longest transaction:            0.03
Shortest transaction:           0.00

The baseline PHP returns slightly fewer transactions per second than the HTML version. This correctly reflects the usual difference between baseline PHP and baseline HTML that you can find all over the web for such a minimal test case.

$ siege -b -c4 -t5S http://localhost/hello.php
** SIEGE 2.68
** Preparing 4 concurrent users for battle.
The server is now under siege...
Lifting the server siege..      done.
Transactions:                  13596 hits
Availability:                 100.00 %
Elapsed time:                   4.90 secs
Data transferred:               3.66 MB
Response time:                  0.00 secs
Transaction rate:            2774.69 trans/sec
Throughput:                     0.75 MB/sec
Concurrency:                    3.62
Successful transactions:           0
Failed transactions:               0
Longest transaction:            0.02
Shortest transaction:           0.00

The Photon baseline returns about 2500 transactions per second. More than 90% of the PHP baseline!

$ siege -b -c4 -t5S http://localhost:6767/handlertest/hello 
** SIEGE 2.68
** Preparing 4 concurrent users for battle.
The server is now under siege...
Lifting the server siege..      done.
Transactions:                  10063 hits
Availability:                 100.00 %
Elapsed time:                   4.01 secs
Data transferred:               0.12 MB
Response time:                  0.04 secs
Transaction rate:            2509.48 trans/sec
Throughput:                     0.03 MB/sec
Concurrency:                    3.97
Successful transactions:       10063
Failed transactions:               0
Longest transaction:            0.15
Shortest transaction:           0.02
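The "more than 90%" figure is simply the ratio of the two transaction rates measured above; in PHP:

```php
<?php
// Transaction rates from the two siege runs above.
$baseline_php = 2774.69;  // hello.php via Apache + mod_php
$photon       = 2509.48;  // /handlertest/hello via Mongrel2 + Photon

printf("Photon reaches %.1f%% of the PHP baseline\n",
       100 * $photon / $baseline_php);
// prints: Photon reaches 90.4% of the PHP baseline
```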

I have run the 5-second tests several times; on longer runs, siege exhausts the available sockets and cannot connect anymore. This is because I am running these tests on a small, old, simple system. Note the differences in throughput: Apache sends far more headers (but still fits everything in one TCP packet).

Maybe you wonder how many requests the Photon server itself can handle when you connect directly to it via the ZeroMQ handlers. Simple: 3943 transactions per second with a naive test script. I am sure it can do more, but I do not really care anyway. Of course, in that case you no longer have the HTTP overhead, which is why you cannot compare the numbers directly; but the great thing is that Photon is not the bottleneck, and all the improvements made to Mongrel2 will immediately translate into better performance.

If you wonder how Photon supports a high level of concurrency, do not worry about it: from 4 to 100 concurrent connections, it stays between 2250 and 2500 req/s.

So, now, you can stop reading stupid benchmarks like this one, just code and have fun!

Is 4000 Transactions per Second Slow?

All the tests were run directly on a low power, 4 year old, dual core CPU. The important point here is that Photon itself will never be your bottleneck. The day you start to pump 4000 req/s to your clients, you will not be sending just a Hello World! string anyway. Your business logic will easily take on the order of 20 to 50 ms per request, and that will be your bottleneck. Besides, people getting 50 requests per second in real life are very happy.
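The back-of-the-envelope reasoning: a synchronous worker that spends N milliseconds of business logic per request can serve at most 1000/N requests per second. A minimal sketch:

```php
<?php
// A synchronous worker spending $ms milliseconds of business logic per
// request can serve at most 1000 / $ms requests per second.
foreach ([20, 50] as $ms) {
    printf("%d ms/request => at most %d req/s per worker\n",
           $ms, intdiv(1000, $ms));
}
```

At 20 ms of business logic you already top out at 50 req/s per worker, two orders of magnitude below the Hello World! numbers.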

What is important is that under normal conditions, on decent hardware, you will never need to put a cache layer in front of a resource served by Photon. The only bottleneck will be your data store, if you query it for some fancy stuff.

What you will do is simply take advantage of the zero-configuration scaling power of Photon and spread the load over several backend servers. This means that you will be able to scale out transparently. The only overhead will be the network latency and the cost of having more workers on the round robin scheduler of ZeroMQ; in practice these should be negligible.
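ZeroMQ PUSH sockets distribute outgoing messages round-robin over the connected peers, which is what makes adding Photon workers transparent. A toy simulation of that fan-out (plain PHP, no real sockets; `dispatch` is a hypothetical helper for illustration, not part of Photon or ZeroMQ):

```php
<?php
// Toy simulation of how a ZeroMQ PUSH socket fans requests out
// round-robin over connected workers.
function dispatch(array $requests, int $workers): array {
    $load = array_fill(0, $workers, 0);
    foreach (array_values($requests) as $i => $req) {
        $load[$i % $workers]++;   // round-robin assignment
    }
    return $load;
}

print_r(dispatch(range(1, 9), 3)); // each of 3 workers handles 3 requests
```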

Number of Handlers Effect

Note that these tests were run directly against the website currently in production. This is why the transaction numbers are a bit different (a bit higher), but the purpose here is not to show big numbers, only the difference between them.

With the commands hnu server new and hnu server less you start and stop children to handle the load. By default, you get 3 children. On a small dual core system (an OpenVZ VM), this means that 4 processes handle the requests: 1 Mongrel2 and 3 Photon children. It looks like, for the very small Hello World! test (a closure directly in the URL map), this is the sweet spot. After that, you are at the limits of your system and you end up CPU bound.

Children | Req. Conc. | Effect. Conc. | Trans/s
-------- | ---------- | ------------- | -------
       1 |         10 |          9.96 |    2508
       2 |         10 |          9.93 |    4352
       3 |         10 |          9.88 |    5997
       4 |         10 |          9.90 |    5815
       5 |         10 |          9.90 |    5864
       6 |         10 |          9.89 |    5860
       7 |         10 |          9.90 |    5844
       1 |         25 |         24.96 |    2543
       2 |         25 |         24.92 |    4259
       3 |         25 |         24.86 |    6954
       4 |         25 |         24.86 |    6815
       5 |         25 |         24.89 |    6706

The run was simply siege -cXX -t15S, with a first 2-second warm-up run just after adding a new child to the pool, and XX either 10 or 25.
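The scaling pattern is easy to check from the 10-connection rows: throughput grows almost linearly up to 3 children, then plateaus. A quick sketch of the arithmetic (rates copied from the table above):

```php
<?php
// Trans/s at 10 concurrent connections, per number of children
// (figures copied from the table above).
$rates = [1 => 2508, 2 => 4352, 3 => 5997, 4 => 5815, 5 => 5864];

foreach ($rates as $children => $rate) {
    printf("%d children: %.2fx the single-child rate\n",
           $children, $rate / $rates[1]);
}
```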

In this case, 4 processes is the sweet spot to extract the maximum out of the two cores. After that, you lose a bit because of context switching between processes on a maxed-out system. If you have more cores available, I expect the number of transactions to increase the same way up to a higher limit and then, again, plateau. The good points are:

  • with one installation, you can fine-tune the number of children to take maximum advantage of all the available cores;
  • it looks like the sweet spot is not affected by the number of concurrent connections;
  • if you overcommit resources, you pay only a small price for it.

For reference, the Hello World! view definition is a very simple closure:

array('regex' => '#^/hello$#',
      'view' => function ($req, $match) {
          return new \photon\http\Response('Hello World!', 'text/plain');
      },
      'name' => 'photonweb_hello',
      ),

The tests were performed between two OpenVZ VMs over a LAN. The really interesting next step would be to add new VMs, each running a Photon instance and all responding to the main Mongrel2 server. But I am a bit too lazy to test this right now.