Phusion Blog

A sneak preview of Phusion Passenger 3.2

By Hongli Lai on April 13th, 2012

We’ve been working on Phusion Passenger 3.2 in local repositories for a while now, and today we pushed this code to GitHub. You can find it in the experimental branch. Despite the relatively minor version bump, Phusion Passenger 3.2 has actually received a major internal overhaul. This is because our version number growth rate is based on the quantity of user-visible improvements, not internal (technical) ones.

Rewritten ApplicationPool

One of the central subsystems in Phusion Passenger is the ApplicationPool, which spawns Ruby application processes when necessary and keeps track of them. It scales the number of processes according to the current traffic and ensures that the number of processes does not exceed your configured resource limits. It’s one of the most complex parts of Phusion Passenger and consists of a lot of carefully written code.

The ApplicationPool also goes to great lengths to ensure that the user is appropriately notified in the event of problems. One of the unnerving things about FastCGI (which many people used before Mongrel and Phusion Passenger were introduced) was that when something went wrong, you often had absolutely no idea why. There was no error report. Or sometimes it printed an error report to a log file, but users often had no idea where that log file was, or didn’t realize they had to look in a log file in the first place. In contrast, Phusion Passenger displays a beautiful error message page which tells you all the details of the problem, right there in the browser.

And because Phusion Passenger is heavily multithreaded, the ApplicationPool was designed to be thread-safe. However, our initial implementation (during the 1.x and 2.x days) was not concurrent enough: the ApplicationPool’s mutex was locked while an application process was being spawned. This meant that Phusion Passenger was essentially frozen whenever an application process was being spawned; it was unable to serve any requests during that time. You couldn’t even run passenger-status while this was happening. Phusion Passenger 3.x partially solves this issue by spawning all processes except the first one in the background. Another problem is that if an application misbehaves and freezes during startup, then Phusion Passenger stays frozen with it.

The ApplicationPool subsystem in 3.0 is written partially in C++ and partially in Ruby. However, we’ve found that Ruby is not a good language for system software such as Phusion Passenger. Although we love Ruby and still think it’s great for lots of things (web apps, sysadmin software, etc.), we realized that this part was better off in C++. And that’s what we did: in Phusion Passenger 3.2, we’ve rewritten the entire ApplicationPool subsystem in C++. The new subsystem is much more concurrent and more fault-tolerant. It no longer holds the lock for an extended period of time and never freezes, ever. It enforces a spawn timeout on all processes: if a process fails to spawn within 60 seconds, it is killed. The new code is also much easier to understand and maintain, which opens the road towards many potential future enhancements. 3.2 also supports multiple Ruby versions at the same time, something which many people have been waiting for.
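The concurrency fix boils down to a simple pattern: hold the pool’s lock only for bookkeeping, and perform the slow spawn outside of it, with a timeout. Here is a minimal sketch of that pattern in Python (the class and function names are ours and purely illustrative; the real implementation is C++):

```python
import threading

class Pool:
    """Toy pool: the shared lock is held only for bookkeeping,
    never while a (slow) process spawn is in progress."""

    SPAWN_TIMEOUT = 60  # seconds; 3.2 kills spawns that exceed this

    def __init__(self, spawn_fn):
        self.lock = threading.Lock()
        self.processes = []
        self.spawning = 0
        self.spawn_fn = spawn_fn  # hypothetical callback that starts one app process

    def spawn_process(self):
        with self.lock:                 # brief: just bump a counter
            self.spawning += 1
        # The slow part happens OUTSIDE the lock, so status queries and
        # request routing keep working while the application boots.
        worker = _run_with_timeout(self.spawn_fn, self.SPAWN_TIMEOUT)
        with self.lock:                 # brief: record the result
            self.spawning -= 1
            if worker is not None:
                self.processes.append(worker)
        return worker

def _run_with_timeout(fn, timeout):
    """Run fn on a helper thread; give up and return None on timeout."""
    result = []
    t = threading.Thread(target=lambda: result.append(fn()), daemon=True)
    t.start()
    t.join(timeout)
    return result[0] if result else None
```

With this shape, a hung application only wedges its own spawn attempt; the pool itself stays responsive throughout.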

Request/response I/O handling is now evented

Phusion Passenger <= 3.0 uses a static number of threads to handle request/response I/O. This severely limits the amount of concurrent I/O that we can handle. In 3.2, our request/response I/O handling subsystem has been rewritten to be single-threaded and evented, using the excellent libev and libeio libraries by Marc Lehmann. This evented I/O subsystem allows us to handle a virtually unlimited number of I/O requests concurrently and also solves some pathological edge-case I/O problems that Phusion Passenger currently suffers from. It also opens the road towards future support for WebSockets, long polling and other mechanisms which require a connection socket to stay open for a potentially long time.

One of the best features in the new I/O handler is real-time disk-buffered response forwarding. Traditionally, the web server buffers the entire response before sending it to the client. This is necessary because you don’t want slow HTTP clients to block the application. However, it also means that partial response data cannot be flushed to the client immediately (e.g. Rails 3.2 streaming), at least not without risking slow clients blocking the application. In Phusion Passenger 3.2, if the client is slow then we buffer the response data in memory, or on disk if the data exceeds a certain threshold. In contrast to most web servers, we do not wait until the entire response has finished before forwarding the data; we forward it immediately. You don’t need to turn this on; it’s enabled by default. Application developers need never worry about response buffering anymore: it Just Works(tm) and does the right thing.
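The memory-then-disk buffering strategy is the same idea that Python’s standard library exposes as tempfile.SpooledTemporaryFile, which makes for a compact illustration. The 1 KiB threshold below is an arbitrary demo value, not Passenger’s actual one, and `_rolled` is a CPython internal used here only to observe the rollover:

```python
import tempfile

# A buffer that lives in memory until it exceeds `max_size` bytes,
# then transparently rolls over to a temporary file on disk -- the
# same memory-then-disk strategy described above.
buf = tempfile.SpooledTemporaryFile(max_size=1024)

buf.write(b"x" * 512)           # small response: stays in memory
in_memory = not buf._rolled     # CPython internal flag, illustration only

buf.write(b"x" * 1024)          # crosses the threshold: spills to disk
on_disk = buf._rolled
```

Either way the data remains a single readable stream, so the forwarding code doesn’t care whether a given chunk came from memory or disk.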

Improved WSGI support

It’s not a well-known fact, because we’ve never made a fuss about it, but Phusion Passenger actually supports Python WSGI as well, and has done so since the 1.x days. Here’s a demonstration of Django running on Phusion Passenger and a demonstration of Pylons running on Phusion Passenger. Our WSGI support worked, but wasn’t particularly good: for example, if the application failed to spawn, we did not display an error message in the browser. But now we do, so WSGI is now unofficially supported as a first-class citizen. Python processes are managed just like Ruby processes; all the resource limits, response buffering, etc. work in exactly the same way.
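For reference, a WSGI app is served by placing a passenger_wsgi.py file in the application root that defines an `application` callable; the body is plain standard WSGI, nothing Passenger-specific:

```python
# passenger_wsgi.py -- placed in the application root directory.
# Phusion Passenger imports this module and serves the `application`
# callable, which follows the standard WSGI interface.

def application(environ, start_response):
    body = b"Hello from Passenger WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

For a framework app, `application` would instead be the framework’s own WSGI handler object rather than a hand-written function.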

  • Hey buddies, great to see you back. Don’t be a stranger and thanks again for another great piece of work.

  • Can you say anything about how the evented model will affect Rack apps that are themselves multi-threaded? Is the evented model isolated to Passenger’s internals, or will it have an effect on the actual app processes? Will multiple overlapping requests end up executing in the _same_ application-level thread?

  • It is isolated to Phusion Passenger’s internals. There will be no differences as far as the Rack applications are concerned.

  • We were running Pylons through WSGI thanks to Passenger; here’s a sample:

  • Marius

    Will passenger be able to leverage multiple cores for the event loop? You wrote that it’s single-threaded but maybe it uses multiple processes? Or maybe it’s not needed at all because it’s fast enough on one core and does very little work.

    Anyway, thanks for your great product!

  • @Marius: The architecture allows it, but we don’t do it. In practice, all cores should already be fully saturated because the web server and the Ruby application processes also need CPU time. Running the event loop in multiple threads is just going to stress the scheduler more and cause more unnecessary context switches.

  • udksl

    Thank You

  • Great to hear what you guys are working on! Very excited about 3.2, what’s the ETA?

  • Geoff

    Great, you guys are the best!

  • Michael

    Great to hear the news!
    As we are looking for a solution for running several Ruby versions side by side within the next two months: is it worth hoping for 3.2 to be released by then? 🙂

  • Thomas

    Interesting stuff and great work!

    BTW: will it be possible in the future to have something like automatic/dynamic deployment? Not everybody is developing huge apps which run on an orchestra of servers. At the other end are small apps, which run side by side on the same server. What is missing is deployment (not “re”-deployment) of Rails/Rack apps without restarting the server.

    What I have in mind is something like this:

    RackBaseURI /apps/*

    Where RackBaseURI /apps/* says that each subdirectory (just one level deep) can contain a Rack application, like any other “static” sub-URI deployment configuration. This would allow the deployment of new apps without restarting the server.

    Besides enabling an easier, PHP-like deployment that lets internal developers deploy their applications themselves without having root access to the web server, this would also allow something like what I am developing at the moment:

    I have a master app, which is practically the main app for multiple short-running projects of the same type (they do some sort of data collection). They are developed in Ruby using a specific DSL. The projects are deployed very rapidly (hacked together in one day, run for e.g. 3 days, and never come back again) as separate apps and should run in separate processes. They are deployed and organized through the master app.

    At the moment I can only deploy this via a JRuby/Tomcat environment, which allows dynamic deployment of apps. I think it would be great if I could set up something like this in an Apache/Passenger environment as well.

  • We make our living on your stack – thanks so much for putting in the hard work to make it faster and more durable.

  • This sounds excellent! I will be eager to get it into production once the stable release is out.

  • Matthijs Langenberg

    Thanks for the update! I have been waiting for Passenger 3.2 to update some apps to 1.9.3, without having to setup a bunch of proxy configurations and Passenger Standalone instances.

  • chp


    The new features look very interesting.
    Do you have a release date for 3.2, or an approximation?

  • Pingback: The Mega April 2012 Ruby and Rails News Roundup

  • Pingback: Why haven’t we upgraded to Ruby 1.9.3+? | Site5 Webmaster Blog – Learn about all those internet tube things!

  • Pingback: Nginx, Passenger and WSGI » Geek and Artist