
Phusion Blog

Duplicity + S3: easy, cheap, encrypted, automated full-disk backups for your servers

By Hongli Lai on November 11th, 2013


Backups are one of those things that are important, but that a lot of people don’t do. The thought of setting up backups always raised a mental barrier for me for a number of reasons:

  • I have to think about where to backup to.
  • I have to remember to run the backup on a periodic basis.
  • I worry about the bandwidth and/or storage costs.

I still remember the days when a 2.5 GB hard disk was considered large, and when I had to spend a few hours splitting MP3 files across 20 floppy disks to transfer them between computers. Backing up my entire hard disk would have cost me hundreds of dollars and hours of time. Because of this, I tend to worry about the efficiency of my backups: I only want to back up things that actually need backing up.

I tended to tweak my backup software and rules to be as efficient as possible. This made setting up backups a total pain, and made it all too easy to procrastinate… until it was too late.

I learned to embrace Moore’s Law

Times have changed. Storage is cheap, very cheap. Time Machine — Apple’s backup software — taught me to stop worrying about efficiency. Backing up everything not only makes backing up a mindless and trivial task, it also makes me feel safe. I don’t have to worry about losing my data anymore. I don’t have to worry that my backup rules missed an important file.

Backing up desktops and laptops is easy and cheap enough. A 2 TB hard disk costs only $100.

What about servers?

  • Most people can’t go to the data center and attach a hard disk. Buying or renting another hard disk from the hosting provider can be expensive. Furthermore, if your backup device is in the same location as the data center, then destruction of the data center (e.g. a fire) will destroy your backup as well.
  • Backup services provided by the hosting provider can be expensive.
  • Until a few years ago, bandwidth was relatively expensive, making backing up an entire hard disk to a remote storage service unviable for those on a tight budget.
  • And finally, do you trust that the storage provider will not read or tamper with your data?

Enter Duplicity and S3

Duplicity is a tool for creating incremental, encrypted backups. “Incremental” means that each backup only stores data that has changed since the last backup run. This is achieved by using the rsync algorithm.

What is rsync? It is a tool for synchronizing files between machines. The cool thing about rsync is that it only transfers changes. If you have a directory with 10 GB of files, and your remote machine has an older version of that directory, then rsync only transfers new files or changed files. Of the changed files, rsync is smart enough to only transfer the parts of the files that have changed!
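
To make the idea concrete, here is a toy sketch of block-based delta transfer in Python. It is not the real rsync algorithm — rsync uses a rolling weak checksum so it can also handle insertions and deletions — but it shows why only changed data needs to travel over the wire:

```python
import hashlib

BLOCK = 4  # toy block size; real rsync uses a rolling checksum over larger blocks

def signatures(data):
    """Hash every fixed-size block of the receiver's old copy of the file."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(new, old_sigs):
    """Return only the (index, block) pairs whose hash differs on the receiver."""
    changes = []
    for n, i in enumerate(range(0, len(new), BLOCK)):
        block = new[i:i + BLOCK]
        if n >= len(old_sigs) or hashlib.md5(block).hexdigest() != old_sigs[n]:
            changes.append((n, block))
    return changes

old = b"the quick brown fox!"
new = b"the quick BROWN fox!"
print(delta(new, signatures(old)))  # → [(2, b'k BR'), (3, b'OWN ')]
```

In this toy version, only the two 4-byte blocks overlapping the changed word are sent; every other block is matched by its hash and never transferred.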

At some point, Ben Escoto authored rdiff-backup, an incremental backup tool that uses an rsync-like algorithm to create filesystem backups. Rdiff-backup also saves metadata such as permissions, owner and group IDs, ACLs, etc. It stores past versions as well and allows easy rollback to a point in time. It even compresses backups. However, rdiff-backup has one drawback: you have to install it on the remote server too. This makes it impossible to use rdiff-backup to back up to storage services that don’t allow running arbitrary software.

Ben later created Duplicity, which is like rdiff-backup but encrypts everything. Duplicity works without needing special software on the remote machine and supports many storage methods, for example FTP, SSH, and even S3.

On the storage side, Amazon has consistently lowered the prices of S3 over the past few years. The current price for the US-west-2 region is only $0.09 per GB per month.

Bandwidth costs have also dropped tremendously. Many hosting providers these days allow more than 1 TB of traffic per month per server.

This makes Duplicity and S3 the perfect combination for backing up my servers. Using encryption means that I don’t have to trust my service provider. Storing 200 GB only costs $18 per month.

Setting up Duplicity and S3 using Duply

Duplicity by itself is still relatively painful to use. It has many options — too many if you’re just starting out. Luckily there is a tool that simplifies Duplicity even further: Duply. It keeps your settings in a profile and supports pre- and post-execution scripts.

Let’s install Duplicity and Duply. If you’re on Ubuntu, you should add the Duplicity PPA so that you get the latest version. If not, you can just install an older version of Duplicity from the distribution’s repositories.

# Replace 'precise' with your Ubuntu version's codename.
echo deb http://ppa.launchpad.net/duplicity-team/ppa/ubuntu precise main | \
sudo tee /etc/apt/sources.list.d/duplicity.list
sudo apt-get update

Then:

# python-boto adds S3 support
sudo apt-get install duplicity duply python-boto

Create a profile. Let’s name this profile “test”.

duply test create

This will create a configuration file in $HOME/.duply/test/conf. Open it in your editor. You will be presented with a lot of configuration options, but only a few are really important. Two of them are GPG_KEY and GPG_PW. Duplicity supports asymmetric public-key encryption as well as symmetric password-only encryption. For the purposes of this tutorial we’re going to use symmetric password-only encryption because it’s the easiest.

Let’s generate a random, secure password:

openssl rand -base64 20

Comment out GPG_KEY and set a password in GPG_PW:

#GPG_KEY='_KEY_ID_'
GPG_PW='<the password you just got from openssl>'

Scroll down and set the TARGET options:

TARGET='s3://s3-<region endpoint name>.amazonaws.com/<bucket name>/<folder name>'
TARGET_USER='<your AWS access key ID>'
TARGET_PASS='<your AWS secret key>'

Substitute “region endpoint name” with the host name of the region in which you want to store your S3 bucket. You can find a list of host names at the AWS website. For example, for US-west-2 (Oregon):

TARGET='s3://s3-us-west-2.amazonaws.com/myserver.com-backup/main'

Set the base directory of the backup. We want to backup the entire filesystem:

SOURCE='/'

It is also possible to set a maximum time for keeping old backups. In this tutorial, let’s set it to 6 months:

MAX_AGE=6M

Save and close the configuration file.
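
For reference, after these edits the relevant lines of $HOME/.duply/test/conf should look roughly like this (the password and credential values below are placeholders, not real values):

```shell
#GPG_KEY='_KEY_ID_'
GPG_PW='<the password you generated with openssl>'

TARGET='s3://s3-us-west-2.amazonaws.com/myserver.com-backup/main'
TARGET_USER='<your AWS access key ID>'
TARGET_PASS='<your AWS secret key>'

SOURCE='/'
MAX_AGE=6M
```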

There are also some things that we never want to back up, such as /tmp, /dev and log files. So we create an exclusion file $HOME/.duply/test/exclude with the following contents:

- /dev
- /home/*/.cache
- /home/*/.ccache
- /lost+found
- /media
- /mnt
- /proc
- /root/.cache
- /root/.ccache
- /run
- /selinux
- /sys
- /tmp
- /u/apps/*/current/log/*
- /u/apps/*/releases/*/log/*
- /var/cache/*/*
- /var/log
- /var/run
- /var/tmp

This file follows the Duplicity file list syntax. The - sign means “exclude this path”. For more information, please refer to the Duplicity man page.

Notice that this file excludes Capistrano-deployed Ruby web apps’ log files. If you’re running Node.js apps on your server then it’s easy to exclude your Node.js log files in a similar manner.

Finally, go to the Amazon S3 control panel, and create a bucket in the chosen region:

Create a bucket on S3

Enter the bucket name

Initiating the backup

We’re now ready to initiate the backup. This can take a while, so let’s open a screen session so that we can terminate the SSH session and check back later.

sudo apt-get install screen
screen

Initiate the backup:

sudo duply test backup

Press Ctrl-A followed by D to detach the screen session.

Check back a few hours later. Log in to your server and reattach the screen session:

screen -x

You should see something like this, which means that the backup succeeded. Congratulations!

--------------[ Backup Statistics ]--------------
...
Errors 0
-------------------------------------------------

--- Finished state OK at 16:48:16.192 - Runtime 01:17:08.540 ---

--- Start running command POST at 16:48:16.213 ---
Skipping n/a script '/home/admin/.duply/test/post'.
--- Finished state OK at 16:48:16.244 - Runtime 00:00:00.031 ---

Setting up periodic incremental backups with cron

We can use cron, the system’s periodic task scheduler, to set up periodic incremental backups. Edit root’s crontab:

sudo crontab -e

Insert the following:

0 2 * * 7 env HOME=/home/admin duply test backup

This line runs the duply test backup command every Sunday at 2:00 AM. Note that we set the HOME environment variable to /home/admin. The cron job runs as root because the crontab belongs to root, but the Duply profiles are stored in /home/admin/.duply, which is why we must point HOME there.

If you want to set up daily backups, replace “0 2 * * 7” with “0 2 * * *”.

Making cron jobs less noisy

Cron has a nice feature: it emails you with the output of every job it has run. If you find that this gets annoying after a while, then you can make it only email you if something went wrong. For this, we’ll need the silence-unless-failed tool, part of phusion-server-tools. This tool runs the given command and swallows its output, unless the command fails.

Install phusion-server-tools and edit root’s crontab again:

sudo git clone https://github.com/phusion/phusion-server-tools.git /tools
sudo crontab -e

Replace:

env HOME=/home/admin duply test backup

with:

/tools/silence-unless-failed env HOME=/home/admin duply test backup

Restoring a backup

Simple restores

You can restore the latest backup with the Duply restore command. It is important to use sudo because this allows Duplicity to restore the original filesystem metadata.

The following will restore the latest backup to a specific directory. The target directory does not need to exist; Duplicity will automatically create it. After restoration, you can move its contents to the root filesystem using mv.

sudo duply test restore /restored_files

You can’t just do sudo duply test restore / here, because your system files (e.g. bash, libc, etc.) are in use.

Moving the files from /restored_files to / using mv might still not work for you. In that case, consider booting your server from a rescue system and restoring from there.

Restoring a specific file or directory

Use the fetch command to restore a specific file. The following restores the /etc/passwd file from the backup and saves it to /home/admin/passwd. Notice the lack of a leading slash in the etc/passwd argument.

sudo duply test fetch etc/passwd /home/admin/passwd

The fetch command also works on directories:

sudo duply test fetch etc /home/admin/etc

Restoring from a specific date

Every restoration command accepts a date, allowing you to restore the state from that specific point in time.

First, use the status command to get an overview of backup dates:

$ duply test status
...
Number of contained backup sets: 2
Total number of contained volumes: 2
 Type of backup set:                            Time:      Num volumes:
                Full         Sat Nov  8 07:38:30 2013                 1
         Incremental         Sat Nov  9 07:43:17 2013                 1
...

In this example, we restore the November 8 backup. Unfortunately we can’t just copy and paste the time string; we have to write the time in the w3 datetime format. See also the Time Formats section in the Duplicity man page.

sudo duply test restore /restored_files '2013-11-08T07:38:30'

Safely store your keys or passwords!

Whether you used asymmetric public-key encryption or symmetric password-only encryption, you must store your key or password safely! If you ever lose it, you will lose your data. There is no way to recover encrypted data for which the key or password is lost.

My preferred way of storing secrets is to store them inside 1Password and to replicate the data to my phone and tablet, so that I have redundant encrypted copies. Alternatives to 1Password include LastPass and KeePass, although I have no experience with them.

Conclusion

With Duplicity, Duply and S3, you can set up cheap and secure automated backups in a matter of minutes. For many servers this combo is a silver bullet.

One thing this tutorial hasn’t dealt with is database backups. While we’re backing up the database’s raw files, relying on those files alone isn’t a good idea. If the database files were being written to at the time the backup was made, the backup may contain irrecoverably corrupted database files. Even the database’s journaling file or write-ahead log won’t help, because those technologies are designed to protect against power failures, not against concurrent file-level backup processes. Luckily, Duply supports pre-scripts. In the next part of this article, we’ll cover pre-scripts and database backups.

I hope you’ve enjoyed this article. If you have any comments, please don’t hesitate to post them below. We regularly publish news and interesting articles. If you’re interested, please follow us on Twitter, or subscribe to our newsletter.

Discuss on Hacker News.



Tuning Phusion Passenger’s concurrency settings

By Hongli Lai on March 12th, 2013

Update February 2015: this blog post has been superseded by the Server Optimization Guide. Please read that guide instead.

Phusion Passenger turns Apache and Nginx into a full-featured application server for Ruby and Python web apps. It has a strong focus on ease of use, stability and performance. Phusion Passenger is built on top of tried-and-true, battle-hardened Unix technologies, yet at the same time introduces innovations not found in most traditional Unix servers. Since mid-2012, it aims to be the ultimate polyglot application server.

We recently received a support inquiry from a Phusion Passenger Enterprise customer regarding excessive process creation activity. During peak times, Phusion Passenger would suddenly create a lot of processes, making the server slow or unresponsive for a period of time. This is because Phusion Passenger spawns and shuts down application processes according to traffic, and they apparently had irregular traffic patterns during peak times. Since their servers were dedicated to a single application, the solution was to make the number of processes constant regardless of traffic. This could be done by setting PassengerMinInstances to a value equal to PassengerMaxPoolSize.

The customer then raised the question: what is the best value for PassengerMaxPoolSize? This is a non-trivial question, and the answer encompasses more than just PassengerMaxPoolSize. In this article we’re going to shed more light on this topic.

For simplicity, we assume that your server hosts only one web application. Things become more complicated when more web applications are involved, but the principles in this article can be applied to multi-application server environments as well.

Aspects of concurrency tuning

The goal of tuning is usually to maximize throughput. Increasing the number of processes or threads increases the maximum throughput and concurrency, but there are several factors that should be kept in mind.

  • Memory. More processes imply higher memory usage. If too much memory is used, the machine will hit swap, which slows everything down. You should only have as many processes as memory comfortably allows. Threads use less memory, so prefer threads when possible: you can create tens of threads in place of one process.
  • Number of CPUs. True (hardware) concurrency cannot be higher than the number of CPUs. In theory, if all processes/threads on your system use the CPUs constantly, then:
    • You can increase throughput up to NUMBER_OF_CPUS processes/threads.
    • Increasing the number of processes/threads after that point will increase virtual (software) concurrency, but will not increase true (hardware) concurrency and will not increase maximum throughput.

    Having more processes than CPUs may decrease total throughput a little due to context-switching overhead, but the difference is not big because modern OSes are good at context switching.

    On the other hand, if your CPUs are not used constantly, e.g. because they’re often blocked on I/O, then the above does not apply and increasing the number of processes/threads does increase concurrency and throughput, at least until the CPUs are saturated.

  • Blocking I/O. This covers all blocking I/O, including hard disk access latencies, database call latencies, web API calls, etc. Handling input from the client and output to the client does not count as blocking I/O, because Phusion Passenger has buffering layers that relieve the application from worrying about this.

    The more blocking I/O calls your application process/thread makes, the more time it spends waiting for external components. While it’s waiting, it does not use the CPU, so that’s when another process/thread should get the chance to use the CPU. If no other process/thread needs the CPU right now (e.g. all processes/threads are waiting for I/O) then CPU time is essentially wasted. Increasing the number of processes or threads decreases the chance of CPU time being wasted. It also increases concurrency, so that clients do not have to wait for a previous I/O call to complete before being served.
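
The effect of blocking I/O on concurrency is easy to demonstrate. The sketch below uses time.sleep as a stand-in for a blocking database or API call; with eight threads the waits overlap instead of accumulating:

```python
import threading, time

def blocking_call():
    # Stand-in for ~50 ms of blocking I/O (a database query, a web API call).
    time.sleep(0.05)

# A single worker: 8 calls run back to back, so the waits add up.
start = time.time()
for _ in range(8):
    blocking_call()
sequential = time.time() - start

# Eight threads: the waits overlap, because a sleeping thread yields the CPU.
start = time.time()
workers = [threading.Thread(target=blocking_call) for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
threaded = time.time() - start

print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

Note that this helps even in Ruby or Python: a thread blocked on I/O releases the interpreter lock, so other threads can serve requests in the meantime.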

With these in mind, we give the following tuning recommendations. These recommendations assume that your machine is dedicated to Phusion Passenger. If your machine also hosts other software (e.g. a database) then you should look at the amount of RAM that you’re willing to reserve for Phusion Passenger and Phusion Passenger-served applications.

Tuning the application process and thread count

In our experience, a typical single-threaded Rails application process uses 100 MB of RAM on a 64-bit machine, and by contrast, a thread would only consume 10% as much. We use this fact in determining a proper formula.

Step 1: determine the system’s limits

First, let’s define the maximum number of (single-threaded) processes, or the number of threads, that you can comfortably have given the amount of RAM you have. This is a reasonable upper limit that you can reach without degrading system performance. We use the following formulas.

In purely single-threaded multi-process deployments, the formula is as follows:

max_app_processes = (TOTAL_RAM * 0.75) / RAM_PER_PROCESS

This formula is derived as follows:

  • (TOTAL_RAM * 0.75): We can assume that there must be at least 25% of free RAM that the operating system can use for other things. The result of this calculation is the RAM that is freely available for applications.
  • / RAM_PER_PROCESS: Each process consumes a roughly constant amount of RAM, so the maximum number of processes is a single division of the aforementioned result by this constant.

In multithreaded deployments, the formula is as follows:

max_app_threads_per_process =
  ((TOTAL_RAM * 0.75) - (NUMBER_OF_PROCESSES * RAM_PER_PROCESS)) /
  (RAM_PER_PROCESS / 10)

Here, NUMBER_OF_PROCESSES is the number of application processes you want to use. In the case of Ruby or Python, this should be equal to NUMBER_OF_CPUS. This is because both Ruby and Python have a Global Interpreter Lock, so they cannot utilize multiple cores no matter how many threads they’re using. By using multiple processes, you can utilize multiple cores. If you’re using a language runtime that does not have a Global Interpreter Lock, e.g. JRuby or Rubinius, then NUMBER_OF_PROCESSES can be 1.

This formula is derived as follows:

  • (TOTAL_RAM * 0.75): The same as explained earlier.
  • - (NUMBER_OF_PROCESSES * RAM_PER_PROCESS): In multithreaded deployments, the application processes consume a constant amount of memory, so we deduct this from the RAM that is available to applications. The result is the amount of RAM available to application threads.
  • / (RAM_PER_PROCESS / 10): A thread consumes about 10% of the amount of memory a process would, so we divide the amount of RAM available to threads with this number. What we get is the number of threads that the system can handle.

On 32-bit systems, max_app_threads_per_process should not be higher than about 200. Assuming an 8 MB stack size per thread, you will run out of virtual address space if you go much further. On 64-bit systems you don’t have to worry about this problem.
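
As a sanity check, the two formulas are easy to evaluate in a few lines of Python (a sketch; the 100 MB-per-process figure and the 10%-per-thread figure are the rules of thumb from above):

```python
def max_app_processes(total_ram_mb, ram_per_process_mb=100):
    # Reserve 25% of RAM for the OS; divide the rest by one process's footprint.
    return (total_ram_mb * 0.75) / ram_per_process_mb

def max_app_threads_per_process(total_ram_mb, n_processes, ram_per_process_mb=100):
    # Subtract the processes' own footprint from the usable RAM, then divide
    # by a thread's footprint (about 10% of a full process).
    usable = total_ram_mb * 0.75 - n_processes * ram_per_process_mb
    return usable / (ram_per_process_mb / 10)

print(max_app_processes(1024))               # → 7.68
print(max_app_threads_per_process(1024, 4))  # → 36.8
```

The two printed values match the worked examples later in this article (1 GB of RAM, and 4 processes in the multithreaded case).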

Step 2: derive the applications’ needs

The earlier two formulas were not for calculating the number of processes or threads that your application needs, but for calculating how much the system can handle without getting into trouble. Your application may not actually need that many processes or threads! If your application is CPU-bound, then you only need a small multiple of the number of CPUs. Only if your application performs a lot of blocking I/O (e.g. database calls that take tens of milliseconds to complete, or calls to external web APIs such as Twitter’s) do you need a large number of processes or threads.

Armed with this knowledge, we derive the formulas for calculating how many processes or threads we actually need.

  • If your application performs a lot of blocking I/O then you should give it as many processes and threads as possible:
    # Use this formula for purely single-threaded multi-process deployments.
    desired_app_processes = max_app_processes
    
    # Use this formula for multithreaded deployments.
    desired_app_threads_per_process = max_app_threads_per_process
    
  • If your application doesn’t perform a lot of blocking I/O, then you should limit the number of processes or threads to a multiple of the number of CPUs to minimize context switching:
    # Use this formula for purely single-threaded multi-process deployments.
    desired_app_processes = min(max_app_processes, NUMBER_OF_CPUS)
    
    # Use this formula for multithreaded deployments.
    desired_app_threads_per_process = min(max_app_threads_per_process, 2 * NUMBER_OF_CPUS)
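
The two cases can be folded into small helper functions (a sketch; io_bound is a flag you set based on how much blocking I/O your application performs, and the inputs come from the step 1 formulas):

```python
def desired_app_processes(max_processes, n_cpus, io_bound):
    # An I/O-bound app benefits from every process the RAM allows; a CPU-bound
    # app gains nothing beyond the number of CPUs.
    return max_processes if io_bound else min(max_processes, n_cpus)

def desired_app_threads_per_process(max_threads, n_cpus, io_bound):
    # Same idea for threads, with a 2x factor on the CPU count.
    return max_threads if io_bound else min(max_threads, 2 * n_cpus)

print(desired_app_processes(7.68, 4, io_bound=False))            # → 4
print(desired_app_threads_per_process(36.8, 4, io_bound=False))  # → 8
```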
    

Step 3: configure Phusion Passenger

You should put the number for desired_app_processes into the PassengerMaxPoolSize option. Whether you want to make PassengerMinInstances equal to that number is up to you: doing so will make the number of processes static, regardless of traffic. If your application has very irregular traffic patterns, response times could suffer while Passenger spins up new processes to handle peak traffic. Setting PassengerMinInstances as high as possible prevents this problem.

If desired_app_processes is 1, then you should set PassengerSpawnMethod conservative (on Phusion Passenger 3 or earlier) or PassengerSpawnMethod direct (on Phusion Passenger 4 or later). By using direct/conservative spawning instead of smart spawning, Phusion Passenger will not keep an ApplicationSpawner/Preloader process around. This is because an ApplicationSpawner/Preloader process is useless when there’s only 1 application process.

In order to use multiple threads you must use Phusion Passenger Enterprise 4. The open source version of Phusion Passenger does not support multithreading, and neither does version 3 of Phusion Passenger Enterprise. At the time of writing, Phusion Passenger Enterprise 4.0 is on its 4th Release Candidate. You can download it from the Customer Area.

You should put the number for desired_app_threads_per_process into the PassengerThreadCount option. If you do this, you also need to set PassengerConcurrencyModel thread in order to turn on multithreading support.

Possible step 4: configure Rails

Only if you’re on a multithreaded deployment do you need to configure Rails.

Rails is thread-safe since version 2.2, but you need to enable thread-safety by setting config.thread_safe! in config/environments/production.rb.

You should also increase the ActiveRecord pool size because it limits concurrency. You can configure it in config/database.yml. Set the pool size to the number of threads. But if you believe your database cannot handle that much concurrency, keep it at a low value.

Example 1: purely single-threaded multi-process deployment with lots of blocking I/O

Suppose you have 1 GB of RAM and lots of blocking I/O, and you’re on a purely single-threaded multi-process deployment.

# Use this formula for purely single-threaded multi-process deployments.
max_app_processes = (1024 * 0.75)  / 100 = 7.68
desired_app_processes = max_app_processes = 7.68

Conclusion: you should use 7 or 8 processes. Phusion Passenger should be configured as follows:

PassengerMaxPoolSize 7

However a concurrency of 7 or 8 is way too low if your application performs a lot of blocking I/O. You should use a multithreaded deployment instead, or you need to get more RAM so you can run more processes.

Example 2: multithreaded deployment with lots of blocking I/O

Consider the same machine and application (1 GB RAM, lots of blocking I/O), but this time you’re on a multithreaded deployment with 4 application processes. How many threads do you need per process?

Let’s assume that we’re using Ruby and that we have 4 CPUs. Then:

# Use this formula for multithreaded deployments.
max_app_threads_per_process
= ((1024 * 0.75) - (4 * 100)) / (100 / 10)
= 368 / 10
= 36.8

Conclusion: you should use 4 processes, each with 36-37 threads, so that your system ends up with about 147 threads in total. Phusion Passenger Enterprise should be configured as follows:

PassengerMaxPoolSize 4
PassengerConcurrencyModel thread
PassengerThreadCount 36

Configuring the web server

If you’re using Nginx, then it does not need configuring. Nginx is evented and already supports high concurrency out of the box.

If you’re using Apache, then prefer the worker MPM (which uses a combination of processes and threads) or the event MPM (which is similar to the worker MPM, but better) over the prefork MPM (which only uses processes) whenever possible. PHP requires prefork, but if you don’t use PHP then you can probably use one of the other MPMs. Make sure you set a low number of processes and a moderate to high number of threads.

Because Apache performs a lot of blocking I/O (namely HTTP handling), you should give it a lot of threads so that it has a lot of concurrency. The number of threads should be at least the number of concurrent clients that you’re willing to serve with Apache. A small website can get away with 1 process and 100 threads. A large website may want to have 8 processes and 200 threads per process (resulting in 1600 threads in total).

If you cannot use the event MPM, consider putting Apache behind an Nginx reverse proxy, with response buffering turned on on the Nginx side. This relieves Apache of a lot of concurrency problems. If you can use the event MPM, then adding Nginx to the mix does not provide many advantages.

Conclusion

  • If your application performs a lot of blocking I/O, use lots of processes/threads. You should move away from single-threaded multiprocessing in this case, and start using multithreading.
  • If your application is CPU-bound, use a small multiple of the number of CPUs.
  • Do not exceed the number of processes/threads your system can handle without swapping.

We at Phusion are regularly updating our products. Want to stay up to date? Fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



The right way to deal with frozen processes on Unix

By Hongli Lai on September 21st, 2012


Those who administer production Unix systems have undoubtedly encountered the problem of frozen processes before. They just sit around, consuming CPU and/or memory indefinitely until you forcefully shut them down.

Phusion Passenger 3 – our high-performance and advanced web application server – was not completely immune to this problem either. That is until today, because we have just implemented a fix which will be part of Phusion Passenger 4.0 (4.0 hasn’t been released yet, but a first beta will appear soon). It’s going into the open source version, not just Phusion Passenger Enterprise, because we believe this is an important change that should benefit the entire community.

Behind our solution lies an array of knowledge about operating systems, process management and Unix. Today, I’d like to take this opportunity to share some of our knowledge with our readers. In this article we’re going to dive into the operating system-level details of frozen processes. Some of the questions answered by this article include:

  • What causes a process to become frozen and how can you debug them?
  • What facilities does Unix provide to control and to shut down processes?
  • How can a process manager (e.g. Phusion Passenger) automatically cleanup frozen processes?
  • Why was Phusion Passenger not immune to the problem of frozen processes?
  • How is Phusion Passenger’s frozen process killing fix implemented, and under what constraints does it work?

This article attempts to be generic, but will also provide Ruby-specific tips.

Not all frozen processes are made equal

Let me first clear some confusion. Frozen processes are sometimes also called zombie processes, but this is not formally correct. Formally, a zombie process as defined by Unix operating systems is a process that has already exited, but its parent process has not waited for its exit yet. Zombie processes show up in ps and on Linux they have “<defunct>” appended to their names. Zombie processes do not consume memory or CPU. Killing them – even with SIGKILL – has no effect. The only way to get rid of them is to either make the parent process call waitpid() on them, or by terminating the parent process. The latter causes the zombie process to be “adopted” by the init process (PID 1), which immediately calls waitpid() on any adopted processes. In any case, zombie processes are harmless.
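
This lifecycle is easy to reproduce with a short Unix-only Python sketch: the forked child exits immediately, remains a zombie (shown as <defunct> in ps) until the parent reaps it with waitpid():

```python
import os, time

pid = os.fork()
if pid == 0:
    # Child: exit immediately. Until the parent waits for us, the kernel keeps
    # our exit status around and we linger as a zombie.
    os._exit(7)

time.sleep(0.1)  # the child has exited by now and is a zombie

# waitpid() reaps the zombie: it collects the exit status and removes the entry.
reaped, status = os.waitpid(pid, 0)
print(reaped == pid, os.WEXITSTATUS(status))  # → True 7
```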

In this article we’re only covering frozen processes, not zombie processes. A process is often considered frozen if it has stopped responding normally: it appears stuck inside something and is not throwing errors. Some of the general causes are:

  1. The process is stuck in an infinite loop. In practice we rarely see this kind of freeze.
  2. The process is very slow, even during shutdown, causing it to appear frozen. Some of the causes include:
    2.1. It is using too much memory, causing it to hit the swap. In this case you should notice the entire system becoming slow.
    2.2. It is performing an unoptimized operation that takes a long time. For example you may have code that iterates through all database table rows to perform a calculation. In development you only had a handful of rows so you never noticed this, but you forgot that in production you have millions of rows.
  3. The process is stuck in an I/O operation. It’s waiting for an external process or a server, be it the database, an HTTP API server, or whatever.

Debugging frozen processes

Obviously fixing a frozen process involves more than figuring out how to automatically kill it. Killing it just fixes the symptoms, and should be considered a final defense in the war against frozen processes. It’s better to figure out the root cause. We have a number of tools in our arsenal to find out what a frozen process is doing.

crash-watch

crash-watch is a tool that we’ve written to easily obtain process backtraces. It can be instructed to dump backtraces when a given process crashes, or to dump a process’s current backtrace on demand.

Crash-watch is actually a convenience wrapper around gdb, so you must install gdb first. It dumps C-level backtraces, not language-specific backtraces. If you run crash-watch on a Ruby program it will dump the Ruby interpreter’s C backtraces and not the Ruby code’s backtraces. It also dumps the backtraces of all threads, not just the active thread.

Invoke crash-watch as follows:

crash-watch --dump <PID>

Here is a sample output. This output is the result of invoking crash-watch on a simple “hello world” Rack program that simulates being frozen.

Crash-watch is especially useful for analyzing C-level problems. As you can see in the sample output, the program is frozen in a freeze_process call.

Crash-watch can also assist in analyzing problems caused by Ruby C extensions. Ruby’s mysql gem is quite notorious in this regard because it blocks all Ruby threads while it’s doing its work. If the MySQL server doesn’t respond (e.g. because of network problems, because the server is dead, or because the query is too heavy) then the mysql gem will freeze the entire process, making it unable even to respond to signals. With crash-watch you are able to clearly see that a process is frozen in a mysql gem call.

Phusion Passenger’s SIGQUIT handler

Ruby processes managed by Phusion Passenger respond to SIGQUIT by printing their backtraces to stderr. On Phusion Passenger, stderr is always redirected to the global web server error log (e.g. /var/log/apache2/error.log or /var/log/nginx/error.log), not the vhost-specific error log. If your Ruby interpreter supports it, Phusion Passenger will even print the backtraces of all Ruby threads, not just the active one. This latter feature requires either Ruby Enterprise Edition or Ruby 1.9.
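If you want similar behavior in a program not managed by Phusion Passenger, a minimal sketch of such a handler might look like this (this is not Phusion Passenger’s actual implementation):

```ruby
# Minimal sketch of a SIGQUIT backtrace dumper: print every thread's
# backtrace to stderr when SIGQUIT arrives. Requires Ruby 1.9+ for
# Thread#backtrace on non-current threads.
trap("QUIT") do
  Thread.list.each do |thread|
    STDERR.puts "== Backtrace for #{thread.inspect} =="
    STDERR.puts((thread.backtrace || ["(no backtrace available)"]).join("\n"))
  end
end
```

Send the process SIGQUIT (e.g. `kill -QUIT <PID>`) and the backtraces appear on its stderr.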

Note that for this to work, the Ruby interpreter must be responding to signals. If the Ruby interpreter is frozen inside a C extension call (such as is the case in the sample program) then nothing will happen. In that case you should use crash-watch or the rb_backtrace() trick below.

rb_backtrace()

If you want to debug a Ruby program that’s not managed by Phusion Passenger then there’s another trick to obtain the Ruby backtrace. The Ruby interpreter has a nice function called rb_backtrace() which causes it to print its current Ruby-level backtrace to stdout. You can use gdb to force a process to call that function. This works even when the Ruby interpreter is stuck inside a C call. This method has two downsides:

  1. Its reliability depends on the state of the Ruby interpreter. You are forcing a call from arbitrary places in the code, so you risk corrupting the process state. Use with caution.
  2. It only prints the backtrace of the active Ruby thread. It’s not possible to print the backtraces of any other Ruby threads.

First, start gdb:

$ gdb

Then attach gdb to the process you want:

attach <PID>

This will probably print a whole bunch of messages. Ignore them; if gdb prints a lot of library names and then asks you whether to continue, answer Yes.

Now we get to the good part. Use the following command to force a call to rb_backtrace():

p (void) rb_backtrace()

You should now see a backtrace appearing in the process’s stdout:

from config.ru:5:in `freeze_process'
from config.ru:5
from /Users/hongli/Projects/passenger/lib/phusion_passenger/rack/thread_handler_extension.rb:67:in `call'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/rack/thread_handler_extension.rb:67:in `process_request'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler/thread_handler.rb:126:in `accept_and_process_next_request'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler/thread_handler.rb:100:in `main_loop'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/utils/robust_interruption.rb:82:in `disable_interruptions'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler/thread_handler.rb:98:in `main_loop'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler.rb:432:in `start_threads'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler.rb:426:in `initialize'
from /Users/hongli/Projects/passenger/lib/phusion_passenger/request_handler.rb:426

strace and dtruss

The strace tool (Linux) and the dtruss tool (OS X) can be used to see which system calls a process is making. This is especially useful for detecting problems belonging to categories (1) and (2.2).

Invoke strace and dtruss as follows:

sudo strace -p <PID>
sudo dtruss -p <PID>

Phusion Passenger’s role

Phusion Passenger

Phusion Passenger was traditionally architected to trust application processes. That is, it assumed that if we tell an application process to start, it will start, and if we tell it to stop, it will stop. In practice this is not always true. Applications and libraries contain bugs that can cause freezes, and interactions with external components (network problems, database server problems, etc.) can cause freezes too. Our point of view was that the developer and the system administrator should be responsible for these kinds of problems. If the developer/administrator does not manually intervene, the system may remain unusable.

Starting from Phusion Passenger 3, we began turning away from this philosophy. Our core philosophy has always been that software should Just Work™ with the least amount of hassle, and that software should strive to be zero-maintenance. As a result, Phusion Passenger 3 introduced the Watchdog, Phusion Passenger Enterprise introduced request time limiting, and Phusion Passenger 4 will introduce application spawning time limiting. These things attack different aspects of the freezing process problem:

  1. Application spawning time limiting solves the problem of application processes not starting up quickly enough, or application processes freezing during startup. This feature will be included in the open source version of Phusion Passenger.
  2. Request time limiting solves the problem of application processes freezing during a web request. This feature is not available in the open source version of Phusion Passenger, and only in Phusion Passenger Enterprise.
  3. The Watchdog traditionally only solved the problem of Phusion Passenger helper processes (namely the HelperAgent and LoggingAgent) freezing during web server shutdown. Now it also solves the problem of application processes freezing during web server shutdown. These features, too, will be included in the open source version of Phusion Passenger.

The shutdown procedure and the fix

When you shut down your web server, the Watchdog will be the one to notice this event. It will send a message to the LoggingAgent and the HelperAgent to tell them to gracefully shut down. In turn, the HelperAgent will tell all application processes to gracefully shut down. The HelperAgent does not wait until they’ve actually shut down. Instead it will just assume that they will eventually shut down. It is for this reason that even if you shut down your web server, application processes may stay behind.

The Watchdog was already designed to assume that agent processes could freeze during shutdown, so it gives agent processes a maximum amount of time to shut down gracefully. If they don’t, the Watchdog forcefully kills them with SIGKILL. It doesn’t just kill the agent processes, but also all application processes.

The fix was therefore pretty straightforward: always have the Watchdog kill application processes, even if the HelperAgent terminates normally and in time. The final fix was effectively 3 lines of code.

Utilizing Unix process groups

The most straightforward way to shut down application processes would be to maintain a list of their PIDs and then kill them one by one. The Watchdog, however, uses a more powerful and little-used Unix mechanism: process groups. Let me first explain them.

A system can have multiple process groups. A process belongs to exactly one process group. Each process group has exactly one “process group leader”. The process group ID is equal to the PID of the group leader.

The process group that a process belongs to is inherited from its parent process upon forking. However a process can be a member of any process group, no matter what group the parent belongs to.


Same-colored processes denote processes belonging to the same process group. As the example process tree shows, process group membership is not constrained by parent-child relationships.

You can simulate such a process tree using the following Ruby code.

top = $$
puts "Top process PID: #{$$}"
Process.setpgrp
pid = fork do
  pid = $$
  pid2 = fork do
    pid2 = $$
    Process.setpgrp
    pid3 = fork do
      pid3 = $$
      sleep 0.1
      puts "#{top} belongs to process group: #{Process.getpgid(top)}"
      puts "#{pid} belongs to process group: #{Process.getpgid(pid)}"
      puts "#{pid2} belongs to process group: #{Process.getpgid(pid2)}"
      puts "#{pid3} belongs to process group: #{Process.getpgid(pid3)}"
    end

    # We change process group ID of pid3 after the fact!
    Process.setpgid(pid3, top)

    Process.waitpid(pid3)
  end
  Process.waitpid(pid2)
end
Process.waitpid(pid)

As you can see, you can change the process group membership of any process at any time, provided you have the permissions to do so.

The kill() system call provides a neat little feature: it allows you to send a signal to an entire process group! You can already guess where this is going.
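Here is a sketch of that mechanism from Ruby (this is not Phusion Passenger’s actual code): a negative PID passed to kill() targets the process group whose ID is the absolute value.

```ruby
# Signal an entire process group at once. The leader creates a fresh
# process group; its children inherit the group automatically.
r, w = IO.pipe
leader = fork do
  Process.setpgrp            # become the leader of a new process group
  fork { sleep }             # these children inherit the new group...
  fork { sleep }             # ...automatically
  w.write("ready")           # tell the parent the group is set up
  sleep
end
w.close
r.read(5)                    # wait for the "ready" message
Process.kill("TERM", -leader)  # one call signals the leader and both children
Process.waitpid(leader)
```

Synchronizing through the pipe avoids the race where the parent signals the group before the child has called setpgrp.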

Whenever the Watchdog spawns an agent process, it creates a new process group for it. The HelperAgent and the LoggingAgent are each the leader of their own process group. Since the HelperAgent is responsible for spawning application processes, all application processes automatically belong to the HelperAgent’s process group, unless the application processes themselves change this.

Upon shutdown, the Watchdog sends SIGKILL to the HelperAgent’s and the LoggingAgent’s process groups. This ensures that everything is killed, including all application processes and even whatever subprocesses they spawn.

Conclusion

In this article we’ve considered several reasons why processes may freeze and how to debug them. We’ve seen which mechanisms Phusion Passenger provides to combat frozen processes. We’ve introduced the reader to Unix process groups and how they interact with Unix signal handling.

A lot of engineering has gone into Phusion Passenger to ensure that everything works correctly even in the face of failing software components. As you can see, Phusion Passenger uses proven and rock-solid Unix technologies and concepts to implement its advanced features.

Support us by buying Phusion Passenger Enterprise

It has been a little over a month now since we’ve released Phusion Passenger Enterprise, which comes with a wide array of additional features not found in the open source Phusion Passenger. However as we’ve stated before, we remain committed to maintaining and developing the open source version. This is made possible by the support of our customers; your support. If you like Phusion Passenger or if you want to enjoy the premium Enterprise features, please consider buying one or more Phusion Passenger Enterprise licenses. Development of the open source Phusion Passenger is directly funded by Phusion Passenger Enterprise sales.

Let me know your thoughts about this article in the comments section and thank you for your support!

Tutorial: setting up Gitlab on Debian 6

By Hongli Lai on April 21st, 2012

We host many git repositories on our servers using SSH and file system ACLs for access control. However, having been spoiled by Github for so long, this method feels archaic and cumbersome. While Github provides private repositories, sometimes it’s just not an option because there are some things that may never leave the organization. If you still want to have a fancy web interface for your Git repositories, then there are several alternatives:

Gitlab is an excellent option. It’s not as fully featured as Github, but like Gitorious it is open source. We’ve found that it’s not only more user friendly than Gitorious but also easier to install.

This tutorial teaches you how to set up Gitlab on Debian 6 with Ruby 1.9.3, Phusion Passenger and Nginx. It assumes that Gitlab and the git repositories are hosted on the same machine. Your Gitlab installation will be protected by SSL. Your users will be able to pull from and push to your repositories using the ssh:// protocol, but they won’t have actual shell access, and you don’t need separate system accounts for each user to control access. Gitlab (or, more accurately, gitolite, which Gitlab uses) manages access without system accounts.

Step 1: Setup sudo

In this tutorial we’re going to use sudo to switch between user accounts. Sudo is not installed by default on Debian so install it if you don’t already have it:

$ su
# apt-get update
# apt-get install sudo

Now add your own account to /etc/sudoers:

# visudo

Add something like this to the end of the file:

your_username_here   ALL=(ALL) ALL

Step 2: Install the base software

Install basic dependencies:

$ sudo apt-get update
$ sudo apt-get install build-essential git-core wget curl gcc checkinstall libxml2-dev libxslt-dev sqlite3 libsqlite3-dev libcurl4-openssl-dev libreadline-dev libc6-dev libssl-dev libmysql++-dev make build-essential zlib1g-dev libicu-dev redis-server openssh-server python-dev python-pip libyaml-dev

Install Pygments, which Gitlab needs for syntax highlighting:

$ sudo pip install pygments

Gitlab needs the sendmail command in order to send emails (for things like lost password recovery). This command is provided by the exim4, postfix and sendmail packages but you can only have one of them installed. If you don’t already have one of them installed, then we recommend postfix.

First, check whether you already have the sendmail command:

$ ls /usr/sbin/sendmail

If you get a ‘file not found’ then install Postfix:

$ sudo apt-get install postfix

Step 3: Install gitolite

We need gitolite. Gitolite is a tool used by Gitlab for managing access control to git repositories. It works by providing a bunch of config files in which you can register users and their public keys. Gitolite modifies ~/.ssh/authorized_keys appropriately based on the config files’ contents. Gitolite is not supposed to run as root; instead, it runs as a single user.
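For illustration, the access control config that gitolite reads looks roughly like this (the repository and user names below are made up; each user corresponds to a public key file in the keydir/ directory of the gitolite-admin repository):

```
# conf/gitolite.conf in the gitolite-admin repository
@developers = alice bob

repo myapp
    RW+     = @developers   # read/write, including forced pushes
    R       = charlie       # read-only access
```

You won’t edit this file by hand in this tutorial; Gitlab manages it for you through the gitolite admin key set up below.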

Install gitolite:

$ sudo apt-get install gitolite

The Debian package will automatically create an account called gitolite. Gitlab controls gitolite by logging into gitolite’s user account through SSH, so we want to create a passwordless SSH keypair for this. This SSH keypair is what we call the gitolite admin key.

$ sudo -u gitolite ssh-keygen
...
Enter file in which to save the key (/var/lib/gitolite/.ssh/id_rsa):    <--- press Enter without typing anything
...
Enter passphrase (empty for no passphrase):    <--- press Enter without typing anything
Enter same passphrase again:                   <--- press Enter without typing anything
...
Your public key has been saved in /var/lib/gitolite/.ssh/id_rsa.pub.
...

Now you need to tell gitolite that you want to use this key as the admin key. First, let’s print the content of the public key and copy it to the clipboard:

$ sudo -u gitolite cat /var/lib/gitolite/.ssh/id_rsa.pub
(now select the output in your terminal and copy it to the clipboard)

Now let’s configure gitolite:

$ sudo dpkg-reconfigure gitolite

When asked for a username, keep it at the default:

When asked for a repository path, keep it at the default:

When asked for the admin key, paste the contents of the key you copied to clipboard:

Finally, we need to set the REPO_UMASK option to 0007.

$ sudo -u gitolite vi /var/lib/gitolite/.gitolite.rc

Look for:

$REPO_UMASK = 0077;         # gets you 'rwx------'

Change it to:

$REPO_UMASK = 0007;  # rwxrwx---

Tighten security

We think the default gitolite home directory security is a bit weak: everybody on the system can access the config files and the repository files.

$ ls -ld /var/lib/gitolite
drwxr-xr-x 5 gitolite gitolite 4096 Apr 21 08:40 /var/lib/gitolite

Let’s tighten it up a little bit so that only gitolite can access the files:

$ sudo -u gitolite chmod o-rx /var/lib/gitolite

Step 4: Install Ruby 1.9

Gitlab requires Ruby 1.9. Its developers recommend Ruby 1.9.2, but we don’t recommend 1.9.2 because it has many critical bugs that were fixed in 1.9.3. We’ve been running Gitlab in production on Ruby 1.9.3 for a while now and so far everything works great, so in this tutorial we’ll teach you how to install Ruby 1.9.3. Feel free to install a different version if you disagree with us.

Debian does not provide a recent enough version of Ruby through apt, so we need to install it manually.

$ wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p194.tar.gz
$ tar xzvf ruby-1.9.3-p194.tar.gz
$ cd ruby-1.9.3-p194
$ ./configure
$ make
$ sudo make install

Install Bundler:

$ sudo gem install bundler

Step 5: Install Gitlab

Download the source

$ cd /opt
$ sudo git clone git://github.com/gitlabhq/gitlabhq.git
$ sudo chown -R gitolite:gitolite gitlabhq
$ cd gitlabhq
$ sudo -u gitolite git checkout 9af14e4bda35

(You can also check out the stable branch for the latest stable version, but this tutorial is written specifically for commit 9af14e4bda35.)

Configure

Set up the database configuration. Make sure you fill in the database details under the production section.

$ sudo -u gitolite cp config/database.yml.example config/database.yml
$ sudo -u gitolite vi config/database.yml

Set up the other Gitlab configuration.

$ sudo -u gitolite cp config/gitlab.yml.example config/gitlab.yml
$ sudo -u gitolite vi config/gitlab.yml
  • Set email.from to what should be filled in the ‘From’ header in emails.
  • Set email.to to the domain name on which you want to host Gitlab, without the protocol scheme and without the path.
  • Set email.protocol to https.
  • Set git_host.admin_uri to gitolite@localhost:gitolite-admin.
  • Set git_host.base_path to /var/lib/gitolite/repositories/.
  • Set git_host.host to the system’s SSH domain name.
  • Set git_host.git_user to gitolite.

Now tighten up security:

$ sudo -u gitolite chmod o-rwx config/*.yml

Install gems and set up the database

$ sudo -u gitolite -H bundle install --without development test --deployment
$ sudo -u gitolite bundle exec rake db:setup RAILS_ENV=production
$ sudo -u gitolite bundle exec rake db:seed_fu RAILS_ENV=production

This last command outputs the initial administrator account’s username and password. Write them down somewhere.

Check status

$ sudo -u gitolite bundle exec rake gitlab:app:status RAILS_ENV=production

You should get all YES:

Starting diagnostic
config/database.yml............exists
config/gitlab.yml............exists
/home/git/repositories/............exists
/home/git/repositories/ is writable?............YES
remote: Counting objects: 603, done.
remote: Compressing objects: 100% (466/466), done.
remote: Total 603 (delta 174), reused 0 (delta 0)
Receiving objects: 100% (603/603), 53.29 KiB, done.
Resolving deltas: 100% (174/174), done.
Can clone gitolite-admin?............YES
UMASK for .gitolite.rc is 0007? ............YES

Run a Resque worker

This worker daemon is for processing background tasks.

$ sudo -u gitolite bundle exec rake environment resque:work QUEUE=* RAILS_ENV=production BACKGROUND=yes

Step 6: Generate an SSL certificate

Generate a self-signed certificate. You may enter arbitrary certificate information but the Common Name must be set to the domain name on which you want to host Gitlab.

$ cd /opt/nginx/conf
$ sudo openssl req -new -x509 -nodes -days 3560 -out gitlab.crt -keyout gitlab.key
$ sudo chmod o-r gitlab.key

Installing the certificate into your browser

This self-signed certificate by itself provides encryption, but not authentication, making it vulnerable to man-in-the-middle attacks. To solve this problem, install this certificate into your browser.

Here’s how you do it on Google Chrome on OS X. First, copy the certificate to your local machine:

$ scp your-server.com:/opt/nginx/conf/gitlab.crt .

Now open Keychain Access. Under the “Keychains” list, select “System”. Then click on the “+” button and add the certificate file.

Step 7: Deploy to Phusion Passenger and Nginx

Install Nginx and Phusion Passenger:

$ sudo gem install passenger --no-rdoc --no-ri
$ sudo passenger-install-nginx-module
(Choose option 1 and install Nginx to /opt/nginx)

Now edit the Nginx config file

$ sudo vi /opt/nginx/conf/nginx.conf

and add this to the http block:

# This is a normal HTTP host which redirects all traffic to the HTTPS host.
server {
    listen 80;
    server_name gitlab.yourdomain.com;
    root /nowhere;
    rewrite ^ https://gitlab.yourdomain.com$request_uri permanent;
}

# The actual Gitlab HTTPS host.
server {
    listen 443;
    server_name gitlab.yourdomain.com;
    root /opt/gitlabhq/public;
    ssl on;
    ssl_certificate gitlab.crt;
    ssl_certificate_key gitlab.key;
    add_header Strict-Transport-Security "max-age=315360000";
    location / {
        passenger_enabled on;
    }
    location /assets {
        expires max;
        add_header Cache-Control public;
        passenger_enabled on;
    }
}

Now start Nginx:

$ sudo /opt/nginx/sbin/nginx

And voilà, you’re up and running! You can access Gitlab through https://gitlab.yourdomain.com/.

Already have Phusion Passenger installed and already running Ruby 1.8?

Phusion Passenger for Nginx does not yet support multiple Ruby versions (but it will in 3.2). If you already have Ruby apps deployed on your server using Phusion Passenger, and those apps require Ruby 1.8, then you’ll have to run Gitlab on Ruby 1.9 using Phusion Passenger Standalone, and connect it to Nginx through a reverse proxy setup.

$ cd /opt/gitlabhq
$ sudo -u gitolite passenger start -d -e production --max-pool-size 2 -S gitlab.socket

The HTTPS server block should then look like this:

upstream gitlab {
    server unix:/opt/gitlabhq/gitlab.socket;
}

# The actual Gitlab HTTPS host.
server {
    listen 443;
    server_name gitlab.yourdomain.com;
    root /opt/gitlabhq/public;
    ssl on;
    ssl_certificate gitlab.crt;
    ssl_certificate_key gitlab.key;
    add_header Strict-Transport-Security "max-age=315360000";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Ssl on;
    location / {
        if (!-f $request_filename) {
            proxy_pass http://gitlab;
            break;
        }
    }
    location /assets {
        expires max;
        add_header Cache-Control public;
        if (!-f $request_filename) {
            proxy_pass http://gitlab;
            break;
        }
    }
}

Donate

Gitlab started a donation campaign a few weeks ago. It is excellent, high-quality open source software so if you like it, consider giving them a donation to support their development.

Phusion Passenger & running multiple Ruby versions

By Hongli Lai on September 21st, 2010

UPDATE February 27 2013: this article is obsolete. Phusion Passenger supports multiple Ruby interpreters as of version 4.0.0. The PassengerRuby config option has been made a per-virtual-host option, so you can customize the Ruby interpreter on a per-application basis.

One of the questions we’ve been getting a lot lately is whether it’s possible to run multiple Ruby versions with Phusion Passenger, e.g. to have apps A and B run on Ruby 1.8.7 while app C runs on Ruby 1.9.2. Previous versions of Phusion Passenger required workarounds for this, e.g. mixing in Mongrels. As of Phusion Passenger 3, you can run everything on Phusion Passenger.

The setup that we currently recommend is to combine Phusion Passenger for Apache or Phusion Passenger for Nginx with Phusion Passenger Standalone. First, identify the Ruby version that you use most. Then set up Phusion Passenger for Apache or Phusion Passenger for Nginx to use that Ruby version. All applications that need a different Ruby version can be served separately through Phusion Passenger Standalone and hook into the main web server via a reverse proxy configuration.

Example

Suppose that you have four websites:

  • foo.com, to run on Ruby 1.8.7.
  • bar.com, to run on Ruby 1.8.7.
  • fries.com, to run on Ruby 1.9.1.
  • hamburgers.com, to run on Ruby 1.9.2.

And suppose that you’re using RVM to manage these Rubies.

Setting up foo.com and bar.com (Ruby 1.8.7)

The Ruby version that you use most is Ruby 1.8.7, so you setup Apache or Nginx to use Ruby 1.8.7 and to serve foo.com and bar.com.

rvm use 1.8.7
gem install passenger --pre

# Then one of:
passenger-install-apache2-module
passenger-install-nginx-module
# Partial Apache configuration
PassengerRuby /home/someuser/.rvm/wrappers/ruby-1.8.7/ruby

<VirtualHost *:80>
    ServerName www.foo.com
    DocumentRoot /webapps/foo.com/public
</VirtualHost>

<VirtualHost *:80>
    ServerName www.bar.com
    DocumentRoot /webapps/bar.com/public
</VirtualHost>
# Partial Nginx configuration
passenger_ruby /home/someuser/.rvm/wrappers/ruby-1.8.7/ruby

server {
    listen 80;
    server_name www.foo.com;
    root /webapps/foo.com/public;
    passenger_enabled on;
}

server {
    listen 80;
    server_name www.bar.com;
    root /webapps/bar.com/public;
    passenger_enabled on;
}

foo.com and bar.com have now been deployed on Phusion Passenger for Apache or Phusion Passenger for Nginx, and running on Ruby 1.8.7.

Setting up fries.com (Ruby 1.9.1)

The next step is to start fries.com in Phusion Passenger Standalone using Ruby 1.9.1. Since port 80 is already used by Apache or Nginx, we start Phusion Passenger Standalone on a different port.

rvm use 1.9.1
gem install passenger --pre
cd /webapps/fries.com
passenger start -a 127.0.0.1 -p 3000 -d

Fries.com is now running on localhost port 3000 as a background daemon. Next, connect it to Apache or Nginx via a reverse proxy.

# Partial Apache configuration
<VirtualHost *:80>
    ServerName www.fries.com
    DocumentRoot /webapps/fries.com/public
    PassengerEnabled off
    ProxyPass / http://127.0.0.1:3000
    ProxyPassReverse / http://127.0.0.1:3000
</VirtualHost>
# Partial Nginx configuration
server {
    listen 80;
    server_name www.fries.com;
    root /webapps/fries.com/public;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}

Setting up hamburgers.com (Ruby 1.9.2)

We do the same thing for hamburgers.com. Port 3000 is already in use, so we assign it to port 3001.

rvm use 1.9.2
gem install passenger --pre
cd /webapps/hamburgers.com
passenger start -a 127.0.0.1 -p 3001 -d

Then we hook it up to the web server via reverse proxying.

# Partial Apache configuration
<VirtualHost *:80>
    ServerName www.hamburgers.com
    DocumentRoot /webapps/hamburgers.com/public
    PassengerEnabled off
    ProxyPass / http://127.0.0.1:3001
    ProxyPassReverse / http://127.0.0.1:3001
</VirtualHost>
# Partial Nginx configuration
server {
    listen 80;
    server_name www.hamburgers.com;
    root /webapps/hamburgers.com/public;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}

Performance tip

Phusion Passenger Standalone also supports listening on a Unix domain socket instead of a TCP socket. Unix domain sockets are significantly faster than TCP sockets.
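You can get a feel for the difference with a rough micro-benchmark (illustrative only, not from the article): an echo server over a Unix domain socket versus one over a local TCP socket.

```ruby
require "socket"
require "benchmark"
require "tmpdir"

# Compare request/response round trips over a Unix domain socket
# versus a local TCP socket.
path = File.join(Dir.mktmpdir, "bench.sock")
unix_server = UNIXServer.new(path)
tcp_server  = TCPServer.new("127.0.0.1", 0)

# One echo server thread per listener: read a line, write it back.
[unix_server, tcp_server].each do |server|
  Thread.new do
    client = server.accept
    while (line = client.gets)
      client.write(line)
    end
  end
end

unix_client = UNIXSocket.new(path)
tcp_client  = TCPSocket.new("127.0.0.1", tcp_server.addr[1])

def round_trips(socket, n)
  n.times do
    socket.write("ping\n")
    socket.gets
  end
end

Benchmark.bm(5) do |x|
  x.report("unix") { round_trips(unix_client, 10_000) }
  x.report("tcp")  { round_trips(tcp_client, 10_000) }
end
```

The exact numbers depend heavily on the kernel and hardware, but the Unix domain socket typically wins because it skips the TCP/IP stack entirely.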

Only Nginx supports reverse proxying to Unix domain sockets; Apache does not support this.

In order to make Phusion Passenger Standalone listen on a Unix domain socket, you need to run it with Nginx 0.8.21 or higher. In fact we contributed support for Unix domain sockets to Nginx specifically for this feature!

Start Phusion Passenger Standalone like this:

passenger start --socket /tmp/fries.com.socket -d --nginx-version 0.8.50

The --socket option tells Phusion Passenger to bind to the given Unix domain socket. The --nginx-version option tells Phusion Passenger Standalone to use Nginx 0.8; 0.7 is the default.

Next you must setup an Nginx upstream block with the Unix domain socket as the only entry. Then setup Nginx to reverse proxy to the created upstream block.

upstream fries_upstream {
    server unix:/tmp/fries.com.socket;
}

server {
    listen 80;
    server_name www.fries.com;
    root /webapps/fries.com/public;
    location / {
        proxy_pass http://fries_upstream;
        proxy_set_header Host $host;
    }
}

It should be noted that Phusion Passenger for Apache and Phusion Passenger for Nginx already use Unix domain sockets internally for optimal performance. In fact we’ve done this since version 1.0. We plan on elaborating more about our internal technologies in a future blog post.

Conclusion

Those of you who are familiar with Mongrel and Thin will see the similarity. Indeed, Phusion Passenger Standalone was designed to be used in a reverse proxy environment such as the one demonstrated in this article. Unlike Mongrel and Thin clusters, however, you only need a single Phusion Passenger Standalone instance per web application, and thus only a single address to proxy to. Phusion Passenger Standalone takes care of starting and stopping application processes for you and makes sure processes are restarted when they crash.