Phusion Blog

Phusion Passenger, now with Global Queuing

By Ninh Bui on October 29th, 2008

I’m not sure what I should write down here seeing as David Heinemeier Hansson pretty much sums it all up:

Indeed, Passenger now supports the ability to opt in to global queuing at the request of 37signals, which has the potential to be the first of many exciting collaborations with the company responsible for applications such as Basecamp and Campfire, and of course the Rails framework.

For more information on this feature, as well as on Phusion Passenger’s architecture, please refer to Passenger’s user guide and architectural overview.

For completeness’ sake, I’ve included the relevant information regarding this feature below for your reading pleasure.

What does this option (Global Queuing) do?

Recall that Phusion Passenger spawns multiple backend processes (e.g. multiple Ruby on Rails processes), each of which processes HTTP requests serially. One of Phusion Passenger’s jobs is to forward HTTP requests to a suitable backend process. A backend process may take an arbitrary amount of time to process a specific HTTP request. If the website is (temporarily) under high load and the backend processes cannot process the requests fast enough, then some requests may have to be queued.

If global queuing is turned off, then Phusion Passenger will use fair load balancing. This means that each backend process has its own private queue. Phusion Passenger will forward an HTTP request to the backend process that has the fewest requests in its queue.

If global queuing is turned on, then Phusion Passenger will use a global queue that’s shared between all backend processes. If an HTTP request comes in, and all the backend processes are still busy, then Phusion Passenger will wait until at least one backend process is done, and will then forward the request to that process.
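
The two strategies can be sketched in a few lines of Ruby. This is a hypothetical illustration with made-up names, not Passenger’s actual internals:

```ruby
# Fair load balancing: each backend process has a private queue;
# forward the request to the process with the fewest queued requests.
def dispatch_fair(private_queues, request)
  target = private_queues.min_by(&:size)
  target << request
  target
end

# Global queuing: all requests go into one shared queue; a backend
# process only takes the next request once it has finished its current one.
def dispatch_global(shared_queue, request)
  shared_queue << request
end

def take_next(shared_queue)
  shared_queue.shift # the first idle process gets the oldest request
end

# Example: with fair balancing, the lightly loaded process attracts
# the next request, regardless of how long its queued work will take.
queues = [[:a], [:a, :b, :c], [:a, :b, :c], [:a, :b, :c]]
dispatch_fair(queues, :x)
```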

When to turn on global queuing?

You should turn on global queuing if one of your web applications may have long-running requests.
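
In the Apache version of Passenger this is controlled with the `PassengerUseGlobalQueue` directive, per the Passenger 2.x user guide (the server name and paths below are placeholders):

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /webapps/myapp/public
    # Share one request queue between all backend processes
    PassengerUseGlobalQueue on
</VirtualHost>
```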

For example, suppose that:

  • global queuing is turned off.
  • we’re currently in a state where all backend processes have 3 requests in their queue, except for a single backend process, which has 1 request in its queue.

The situation looks like this:

Backend process A:  [*     ]  (1 request in queue)
Backend process B:  [***   ]  (3 requests in queue)
Backend process C:  [***   ]  (3 requests in queue)
Backend process D:  [***   ]  (3 requests in queue)

Each process is currently serving short-running requests.

Phusion Passenger will forward the next request to backend process A. A will now have 2 items in its queue. We’ll mark this new request with an X:

Backend process A:  [*X    ]  (2 requests in queue)
Backend process B:  [***   ]  (3 requests in queue)
Backend process C:  [***   ]  (3 requests in queue)
Backend process D:  [***   ]  (3 requests in queue)

Assuming that B, C and D still aren’t done with their current request, the next HTTP request – let’s call this Y – will be forwarded to backend process A as well, because it has the least number of items in its queue:

Backend process A:  [*XY   ]  (3 requests in queue)
Backend process B:  [***   ]  (3 requests in queue)
Backend process C:  [***   ]  (3 requests in queue)
Backend process D:  [***   ]  (3 requests in queue)

But if request X happens to be a long-running request that needs 60 seconds to complete, then we’ll have a problem. Y won’t be processed for at least 60 seconds. It would have been better if Y had been forwarded to process B, C or D instead, because they only have short-running requests in their queues.

This problem is avoided entirely if you turn global queuing on. With global queuing, all backend processes share the same queue. The first backend process that becomes available takes the next request from the queue, so this “queuing behind a long-running request” problem can never occur.
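
The back-of-the-envelope arithmetic for the scenario above looks like this (illustrative numbers, not a benchmark; assume each short request takes 1 second):

```ruby
SHORT = 1.0   # seconds per short-running request (assumed)
LONG  = 60.0  # seconds for the long-running request X

# Fair load balancing: Y sits in A's private queue behind A's
# current short request and the long-running X.
y_wait_fair = SHORT + LONG

# Global queuing: the first process to drain its short requests
# (B, C or D, each holding 3 of them) picks Y up instead.
y_wait_global = 3 * SHORT

puts "fair:   Y waits #{y_wait_fair}s before being served"
puts "global: Y waits #{y_wait_global}s before being served"
```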

Turning global queuing off will yield a minor performance improvement (about 5%, depending on how fast/slow your web application is), which is why it’s off by default.

  • does this feature block the current HTTP request and thus cause a timeout error?

  • Dan Kubb

    This sounds very similar to using “maxconn 1” in haproxy, which IMHO is its “killer feature”… or was, I guess. It’s awesome to see this making its way into Passenger.

  • Wes Oldenbeuving

    So this is like running HAProxy with maxconn=1 in front of a pack of mongrels, except it’s all built into Passenger? Now that sounds really interesting!

  • Just wondering,

    You mean that even though global queuing is useful, it still has some overhead that decreases performance?

    It would be cool if we could get past this overhead, since I think it’s a good thing to have by default 🙂

  • @Sikachu:

    It’s more about using the right tool for the right job, or in this case, using the right strategy for the problem. This strategy has benefits for specific situations in which you have to deal with long-lasting requests as well as short-lasting ones: in situations like these you want the backend processes to be as responsive as possible.

    It’s not a silver bullet for every situation, and you need to determine for yourself which strategy suits your application best. The overhead is not something you can overcome programmatically, seeing as it is part of the strategy: using this strategy, the backend process needs to report back to Apache that it is done processing the request and can accept new ones. Compare this with the situation in which we simply let Apache delegate the request to any backend process without having to wait for this notification. As you can see, the former decreases throughput because Apache needs to wait. In short, it has its pros and cons. It’s as simple as that. 🙂

  • How will the global queue account for backend processes that can handle multiple concurrent requests? Can I ask it to maintain a certain number of requests (like the thread pool size) before queuing in the global queue?

  • Wow. Rock on! Seems to me like the pros would far outweigh the cons. This feature makes me almost HAVE to use mod_rails. Nice job.

    I assume that it’s impossible for Apache to just monitor “that the connection is done” [kind of like sniffing the stream], or that it re-uses a connection, and that’s why the child process must report back to Apache?

  • Excellent. We’ve been using nginx/haproxy/mongrels. With phusion now supporting global queuing, we’re good to go on switching some of our busier enterprise apps over to it that have high load and extremely varying request lengths. Our previous setup worked well for this, but it will be nice to get rid of those pesky mongrels we’ve been fighting with. 🙂 The “Mongrel” invasion is over!

  • mml

    what version of Passenger has this feature? 2.0.3 appears to not have it.

  • @mml It’ll be included in the next release.

  • With the introduction of Rails 2.2’s thread safety, would this help alleviate the problem? So you could have multiple requests come in on one process? I don’t think Passenger supports thread safety, but that would be cool, and it might help out with this problem. I’m no expert on thread safety or anything like that, so I’d love to hear what you guys have to say about it!

  • @Paul Smith: It is possible, yes. But as far as Phusion Passenger is concerned, there aren’t many advantages to using threads compared to using processes, as it does now.

  • Memory usage would be lower, right? I’m mostly wondering if that would help alleviate the problem that global queuing resolves? I’m guessing not. Thanks for replying so quickly!

  • @Paul Smith:
    No, not significantly, thanks to copy-on-write semantics. Passenger in conjunction with Ruby Enterprise Edition acts kind of like a threaded setup in the sense that as much memory as possible is shared across the processes: only when observable side effects occur through write operations will the memory be copied (i.e. copy-on-write). In any other case the memory can be shared, and in particular this means Rails framework code etc.

  • Awesome! Thanks for answering so quickly, and great job on passenger!
