
Configuration reference

Relevant selection for this article: Nginx


Application loading

passenger_root

Syntax passenger_root path;
Since 2.0.0
Context http

Refers to the location of the Passenger root directory, or to a location configuration file. This configuration option is essential to Passenger, and allows Passenger to locate its own data files.

You normally do not need to set this configuration option. If you used our Debian or RPM packages to install Passenger, then they automatically configure passenger_root for you with the right value. If you installed Passenger from Homebrew, tarball or RubyGems, then at some point during the installation process you are told what the correct value should be, and instructed to insert it into your Nginx configuration file.

What happens if this option is not set, or set wrongly

If you do not set passenger_root, Passenger will disable itself, and Nginx will behave as if Passenger was never installed.

If you set passenger_root to the wrong value, then Passenger will attempt to locate some of its own files, fail to do so, then complain with an error message and abort Nginx.

How to fix passenger_root

If you lost the passenger_root configuration value (e.g. because you accidentally removed the Nginx configuration file, and you are trying to reconstruct it), if you didn't follow the installation instructions correctly, or if you moved Passenger to a different directory, then you can fix passenger_root as follows:

  • If you installed Passenger through source tarball or by cloning it from the Passenger Github repository, then the value should be the path to the Passenger directory.
  • In all other cases, obtain the correct value by running the following command:

    passenger-config --root
    

Once you have obtained the value, open your Nginx configuration file and insert a passenger_root option somewhere with that value.
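
For example, if passenger-config --root reports /opt/passenger (an illustrative path; yours will likely differ), the relevant part of the configuration would look like this:

http {
    ...
    # Use the exact value reported by `passenger-config --root`.
    passenger_root /opt/passenger;
}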

passenger_enabled

Syntax passenger_enabled on|off;
Default passenger_enabled off;
Since 2.0.0
Context server, location, if

This option enables or disables Passenger for that particular context. Passenger is disabled by default, so you must explicitly enable it for contexts where you want Passenger to serve your application. Please see the deployment guide for full examples.

server {
    listen 80;
    server_name www.example.com;
    root /webapps/example/public;

    # You must explicitly set 'passenger_enabled on', otherwise
    # Passenger won't serve this app.
    passenger_enabled on;
}

Note that since version 5.0.0, passenger_enabled is inherited into subcontexts. This was not the case in previous versions.

passenger_start_timeout

Syntax passenger_start_timeout seconds;
Default passenger_start_timeout 90;
Since 4.0.15
Context http, server, location, if

Specifies a timeout for the startup of application processes. If an application process fails to start within the timeout period then it will be forcefully killed with SIGKILL, and the error will be logged.
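
For example, to give a slow-booting application three minutes to start (the server name, path and timeout value below are illustrative):

server {
    listen 80;
    server_name www.example.com;
    root /webapps/example/public;
    passenger_enabled on;
    # Kill application processes that take longer than 180 seconds to start.
    passenger_start_timeout 180;
}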

passenger_abort_on_startup_error

Syntax passenger_abort_on_startup_error on|off;
Default passenger_abort_on_startup_error off;
Since 3.0.0
Context http

When turned on, Passenger will abort the web server's startup process if it encounters an error during its own startup. This option is primarily used internally by Passenger Standalone.

passenger_ruby

Syntax passenger_ruby path-to-ruby-interpreter;
Default passenger_ruby ruby;
Since 4.0.0
Context http, server, location, if

The passenger_ruby option specifies the Ruby interpreter to use for serving Ruby web applications.

In addition, the passenger_ruby option in the http context also specifies which Ruby interpreter to use for Passenger's internal Ruby helper tools, e.g. the one used by passenger_pre_start. See Lightweight Ruby dependency for more information about the internal Ruby helper tools.

If passenger_ruby is not specified, then it defaults to ruby, which means that the first ruby command found in PATH will be used.

Closely related to passenger_ruby are passenger_python, passenger_nodejs, etc. The following example illustrates how these options work and how you can use them to specify different interpreters for different web apps.

http {
    passenger_root ...;

    # Use Ruby 2.1 by default.
    passenger_ruby /usr/bin/ruby2.1;
    # Use Python 2.6 by default.
    passenger_python /usr/bin/python2.6;
    # Use /usr/bin/node by default.
    passenger_nodejs /usr/bin/node;

    server {
        # This Rails web app will use Ruby 2.1
        listen 80;
        server_name www.foo.com;
        root /webapps/foo/public;
        passenger_enabled on;
    }

    server {
        # This Rails web app will use Ruby 2.2.1, as installed by RVM
        passenger_ruby /usr/local/rvm/wrappers/ruby-2.2.1/ruby;

        listen 80;
        server_name www.bar.com;
        root /webapps/bar/public;
        passenger_enabled on;

        # If you have a web app deployed in a sub-URI, customize
        # passenger_ruby/passenger_python inside a `location` block.
        # The web app under www.bar.com/blog will use JRuby 1.7.1
        location ~ ^/blog(/.*|$) {
            alias /websites/blog/public$1;
            passenger_base_uri /blog;
            passenger_app_root /websites/blog;
            passenger_document_root /websites/blog/public;
            passenger_enabled on;
            passenger_ruby /usr/local/rvm/wrappers/jruby-1.7.1/ruby;
        }
    }

    server {
        # This Flask web app will use Python 3.0
        passenger_python /usr/bin/python3.0;

        listen 80;
        server_name www.baz.com;
        root /webapps/baz/public;
        passenger_enabled on;
    }
}

Notes about multiple Ruby interpreters

If your system has multiple Ruby interpreters, then it is important that you set this configuration option to the right value. If you do not set this configuration option correctly, and your app is run under the wrong Ruby interpreter, then all sorts of things may go wrong, such as:

  • The app won't be able to find its installed gems.
  • The app won't be able to run because of syntax and feature differences between Ruby versions.

Note that a different RVM gemset also counts as "a different Ruby interpreter".

How to set the correct value

If you are not sure what value to set passenger_ruby to, then you can find out the correct value as follows.

First, find out the location of the passenger-config tool and take note of it:

$ which passenger-config
/opt/passenger/bin/passenger-config

Next, activate the Ruby interpreter (and if applicable, the gemset) you want to use. For example, if you are on RVM and you use Ruby 2.2.1, you may want to run this:

$ rvm use 2.2.1

Finally, invoke passenger-config with its full path, passing --ruby-command as parameter:

$ /opt/passenger/bin/passenger-config --ruby-command
passenger-config was invoked through the following Ruby interpreter:
  Command: /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  Version: ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]
  To use in Apache: PassengerRuby /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  To use in Nginx : passenger_ruby /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  To use with Standalone: /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby /opt/passenger/bin/passenger start


Notes for RVM users
Do you want to know which command to use for a different Ruby interpreter? 'rvm use' that Ruby interpreter, then re-run 'passenger-config --ruby-command'.

The output tells you what value to set.

passenger_python

Syntax passenger_python path-to-python-interpreter;
Default passenger_python python;
Since 4.0.0
Context http, server, location, if

This option specifies the Python interpreter to use for serving Python web applications. If it is not specified, then it uses the first python command found in PATH.

Closely related to this option are passenger_ruby, passenger_nodejs, etc. The following example illustrates how these options work and how you can use them to specify different interpreters for different web apps.

http {
    passenger_root ...;

    # Use Ruby 2.1 by default.
    passenger_ruby /usr/bin/ruby2.1;
    # Use Python 2.6 by default.
    passenger_python /usr/bin/python2.6;
    # Use /usr/bin/node by default.
    passenger_nodejs /usr/bin/node;

    server {
        # This Rails web app will use Ruby 2.1
        listen 80;
        server_name www.foo.com;
        root /webapps/foo/public;
        passenger_enabled on;
    }

    server {
        # This Rails web app will use Ruby 2.2.1, as installed by RVM
        passenger_ruby /usr/local/rvm/wrappers/ruby-2.2.1/ruby;

        listen 80;
        server_name www.bar.com;
        root /webapps/bar/public;
        passenger_enabled on;

        # If you have a web app deployed in a sub-URI, customize
        # passenger_ruby/passenger_python inside a `location` block.
        # The web app under www.bar.com/blog will use JRuby 1.7.1
        location ~ ^/blog(/.*|$) {
            alias /websites/blog/public$1;
            passenger_base_uri /blog;
            passenger_app_root /websites/blog;
            passenger_document_root /websites/blog/public;
            passenger_enabled on;
            passenger_ruby /usr/local/rvm/wrappers/jruby-1.7.1/ruby;
        }
    }

    server {
        # This Flask web app will use Python 3.0
        passenger_python /usr/bin/python3.0;

        listen 80;
        server_name www.baz.com;
        root /webapps/baz/public;
        passenger_enabled on;
    }
}

passenger_nodejs

Syntax passenger_nodejs path-to-node-js;
Default passenger_nodejs node;
Since 4.0.0
Context http, server, location, if

This option specifies the Node.js command to use for serving Node.js web applications. If it is not specified, then it uses the first node command found in PATH.

Closely related to this option are passenger_ruby, passenger_python, etc. The following example illustrates how these options work and how you can use them to specify different interpreters for different web apps.

http {
    passenger_root ...;

    # Use Ruby 2.1 by default.
    passenger_ruby /usr/bin/ruby2.1;
    # Use Python 2.6 by default.
    passenger_python /usr/bin/python2.6;
    # Use /usr/bin/node by default.
    passenger_nodejs /usr/bin/node;

    server {
        # This Rails web app will use Ruby 2.1
        listen 80;
        server_name www.foo.com;
        root /webapps/foo/public;
        passenger_enabled on;
    }

    server {
        # This Rails web app will use Ruby 2.2.1, as installed by RVM
        passenger_ruby /usr/local/rvm/wrappers/ruby-2.2.1/ruby;

        listen 80;
        server_name www.bar.com;
        root /webapps/bar/public;
        passenger_enabled on;

        # If you have a web app deployed in a sub-URI, customize
        # passenger_ruby/passenger_python inside a `location` block.
        # The web app under www.bar.com/blog will use JRuby 1.7.1
        location ~ ^/blog(/.*|$) {
            alias /websites/blog/public$1;
            passenger_base_uri /blog;
            passenger_app_root /websites/blog;
            passenger_document_root /websites/blog/public;
            passenger_enabled on;
            passenger_ruby /usr/local/rvm/wrappers/jruby-1.7.1/ruby;
        }
    }

    server {
        # This Flask web app will use Python 3.0
        passenger_python /usr/bin/python3.0;

        listen 80;
        server_name www.baz.com;
        root /webapps/baz/public;
        passenger_enabled on;
    }
}

passenger_meteor_app_settings

Syntax passenger_meteor_app_settings path-to-json-settings-file;
Since 5.0.7
Context http, server, location, if

When using a Meteor application in non-bundled mode, use this option to specify a JSON file with settings for the application. The meteor run command will be run with the --settings parameter set to this option.

Note that this option is not intended to be used for bundled/packaged Meteor applications. When running bundled/packaged Meteor applications on Passenger, you should set the METEOR_SETTINGS environment variable using passenger_env_var.
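
A minimal sketch for a non-bundled Meteor app (the hostname and paths are hypothetical):

server {
    listen 80;
    server_name meteor.example.com;
    root /webapps/meteorapp/public;
    passenger_enabled on;
    # Passed to `meteor run` via its --settings parameter.
    passenger_meteor_app_settings /webapps/meteorapp/settings.json;
}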

passenger_app_env

Syntax passenger_app_env name;
Aliases rails_env name;
rack_env name;
Default passenger_app_env production;
Since 4.0.0
Context http, server, location, if

This option sets, for the current application, the value of the following environment variables:

  • RAILS_ENV
  • RACK_ENV
  • WSGI_ENV
  • NODE_ENV
  • PASSENGER_APP_ENV

Some web frameworks, for example Rails and Connect.js, adjust their behavior according to the value in one of these environment variables.

Passenger sets the default value to production. If you're developing the application (instead of running it in production), then you should set this to development.

If you want to set other environment variables, please use passenger_env_var.

Setting this option also adds the application environment name to the default application group name, so that you can run multiple versions of your application with different application environment names.
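
For example, to run an application in development mode instead of the default production mode (the hostname and path are illustrative):

server {
    listen 80;
    server_name dev.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    # Sets RAILS_ENV, RACK_ENV, WSGI_ENV, NODE_ENV and PASSENGER_APP_ENV to 'development'.
    passenger_app_env development;
}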

rails_env, rack_env

Syntax rails_env name;
rack_env name;
Default rails_env production;
rack_env production;
Since 2.0.0
Context http, server, location, if

These are aliases for passenger_app_env.

passenger_app_root

Syntax passenger_app_root path;
Default passenger_app_root parent-directory-of-virtual-host-root;
Since 4.0.0
Context http, server, location, if

By default, Passenger assumes that the application's root directory is the parent directory of the virtual host's (server block's) root directory. This option allows you to set the application's root independently of the virtual host root, which is useful if your application does not follow the conventions that Passenger assumes.

See also How Passenger + Nginx autodetects applications.

Example

server {
    server_name test.host;
    root /var/rails/zena/sites/example.com/public;
    # normally Passenger would
    # have assumed that the application
    # root is "/var/rails/zena/sites/example.com"
    passenger_app_root /var/rails/zena;
}

passenger_app_group_name

Syntax passenger_app_group_name name;
Default See description
Since 4.0.0
Context http, server, location, if

Sets the name of the application group that the current application should belong to. Its default value is the application root plus, if passenger_app_env is explicitly set, the application environment name.

Passenger stores and caches most application spawning settings – such as environment variables, process limits, etc – on a per-app-group-name basis. This means that if you want to start two versions of your application, with each version having different environment variables, then you must assign them under different application group names.

The request queue is also per-application group, so creating multiple application groups allows you to separate requests into different queues.

Example

Consider a situation in which you are running multiple versions of the same app, with each version intended for a different customer. You use the CUSTOMER_NAME environment variable to tell the app which customer that version should serve.

# WRONG example! Doesn't work!

server {
    listen 80;
    server_name customer1.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_env_var CUSTOMER_NAME customer1;
}

server {
    listen 80;
    server_name customer2.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_env_var CUSTOMER_NAME customer2;
}

This example doesn't work, because Passenger thinks that they are the same application. When a user visits customer1.foo.com, Passenger will start a process with CUSTOMER_NAME=customer1. When another user visits customer2.foo.com, Passenger will route the request to the application process that was started earlier. Because environment variables are only set during application process startup, the second user will be served the website for customer 1.

To make this work, assign unique application group names:

server {
    listen 80;
    server_name customer1.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_env_var CUSTOMER_NAME customer1;
    passenger_app_group_name foo_customer1;
}

server {
    listen 80;
    server_name customer2.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_env_var CUSTOMER_NAME customer2;
    passenger_app_group_name foo_customer2;
}

Note that it is not necessary to set passenger_app_group_name if you want to run two versions of your application under different application environment names, because the application environment name is included in the default application group name. For example, consider a situation in which you want to run a production and a staging version of your application. The following configuration will work fine:

server {
    listen 80;
    server_name bar.com;
    root /webapps/bar/public;
    passenger_enabled on;
    # Passenger implicitly sets:
    # passenger_app_group_name /webapps/bar;
}

server {
    listen 80;
    server_name staging.com;
    root /webapps/bar/public;
    passenger_enabled on;
    passenger_app_env staging;
    # Passenger implicitly sets:
    # passenger_app_group_name '/webapps/bar (staging)';
}

passenger_app_start_command

Syntax passenger_app_start_command COMMAND;
Since 6.0.0
Context server, location, if

Specifies how Passenger should start your app on a specific port.

Passenger has built-in support for starting Ruby, Python, Node.js and Meteor apps, but it can also start an application written in any language, as long as that application can listen on a specified port. This functionality is termed Generic Language Support (GLS) and is discussed in greater detail here. The minimum configuration required to make use of GLS is to tell Passenger how to start your app on a specific port. You do this by setting passenger_app_start_command to the command you would use on the command line to start your app, with a $PORT placeholder where Passenger should substitute the port it has chosen, so that your app can bind to it. The various ways to pass the port to your app, in case it doesn't take a command line argument for it, are covered in greater detail here.

Consider the following config snippet:

  passenger_app_start_command "/usr/local/bin/myapp --foreground --port $PORT";

Passenger will start your app by invoking this command, with an actual port number substituted for the $PORT placeholder. For example: /usr/local/bin/myapp --foreground --port 5000.

passenger_app_type

Syntax passenger_app_type name;
Default Autodetected
Since 4.0.25
Context http, server, location, if

By default, Passenger autodetects the type of the application, e.g. whether it's a Ruby, Python, Node.js or Meteor app. If it's unable to autodetect the type of the application (e.g. because you've specified a custom passenger_startup_file) then you can use this option to force Passenger to recognize the application as a specific type.

Allowed values are:

  • rack – Ruby, Ruby on Rails
  • wsgi – Python
  • node – Node.js, or Meteor JS in bundled/packaged mode
  • meteor – Meteor JS in non-bundled/packaged mode

Example

server {
    server_name example.com;
    root /webapps/example.com/public;
    passenger_enabled on;
    # Use server.js as the startup file (entry point file) for
    # your Node.js application, instead of the default app.js
    passenger_startup_file server.js;
    passenger_app_type node;
}

passenger_startup_file

Syntax passenger_startup_file relative-path;
Default Autodetected
Since 4.0.25
Context http, server, location, if

This option specifies the startup file that Passenger should use when loading the application. This path is relative to the application root (see passenger_app_root).

Every application has a startup file or entry point file: a file where the application begins execution. Some languages have widely accepted conventions about how such a file should be called (e.g. Ruby, with its config.ru). Other languages have somewhat-accepted conventions (e.g. Node.js, with its app.js). In these cases, Passenger follows these conventions, and executes applications through those files.

Other languages have no conventions at all, and so Passenger invents one (e.g. Python WSGI with passenger_wsgi.py).

Passenger tries to autodetect according to the following language-specific conventions:

  • Ruby, Ruby on Rails – config.ru
  • Python – passenger_wsgi.py
  • Node.js – app.js
  • Meteor JS in non-bundled/packaged mode – .meteor

For other cases you will need to specify the startup file manually. For example, with Node.js you might need to use bin/www as the startup file if you are using the Express app generator.

Notes

  • Customizing the startup file affects user account sandboxing. After all, if user account sandboxing is enabled, the application is executed as the user that owns the startup file.
  • If you set this option, you must also set passenger_app_type, otherwise Passenger doesn't know what kind of application it is.

Example

server {
    server_name example.com;
    root /webapps/example.com/public;
    passenger_enabled on;
    # Use server.js as the startup file (entry point file) for
    # your Node.js application, instead of the default app.js
    passenger_startup_file server.js;
    passenger_app_type node;
}

passenger_restart_dir

Syntax passenger_restart_dir relative-path;
Default passenger_restart_dir tmp;
Since 4.0.0
Context http, server, location, if

As described in Restarting applications, Passenger checks the file tmp/restart.txt in the application root directory to determine whether it should restart the application. Sometimes it may be desirable for Passenger to look in a different directory instead, for example for security reasons (see below). This option allows you to customize the directory in which restart.txt is searched for.

You can either set it to an absolute directory, or to a directory relative to the application root.

Examples

server {
    listen 80;
    server_name www.foo.com;
    # Passenger will check for /apps/foo/tmp/restart.txt
    root /apps/foo/public;
    passenger_enabled on;
}

server {
    listen 80;
    server_name www.bar.com;
    root /apps/bar/public;
    passenger_enabled on;
    # An absolute path is given; Passenger will
    # check for /restart_files/bar/restart.txt
    passenger_restart_dir /restart_files/bar;
}

server {
    listen 80;
    server_name www.baz.com;
    root /apps/baz/public;
    passenger_enabled on;
    # A relative path is given; Passenger will
    # check for /apps/baz/restart_files/restart.txt
    #
    # Note that this directory is relative to the APPLICATION ROOT, *not*
    # the value of the Nginx 'root' option!
    passenger_restart_dir restart_files;
}

Security reasons for wanting to customize passenger_restart_dir

Touching restart.txt will cause Passenger to restart the application. So anybody who can touch restart.txt can effectively cause a Denial-of-Service attack by touching restart.txt over and over. If your web server or one of your web applications has the permission to touch restart.txt, and one of them has a security flaw which allows an attacker to touch restart.txt, then that will allow the attacker to cause a Denial-of-Service.

You can prevent this from happening by pointing passenger_restart_dir to a directory that's readable by Nginx, but only writable by administrators.

passenger_spawn_method

Syntax passenger_spawn_method smart|direct;
Default For Ruby apps: passenger_spawn_method smart;
For other apps: passenger_spawn_method direct;
Since 2.0.0
Context http, server, location, if

This option controls whether Passenger spawns applications directly, or using a prefork copy-on-write mechanism. The spawn methods guide explains this in detail.
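
For example, to disable smart spawning for a particular Ruby application (the hostname and path are illustrative):

server {
    listen 80;
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    # Spawn processes directly, without the copy-on-write preloader.
    passenger_spawn_method direct;
}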

passenger_env_var

Syntax passenger_env_var name value;
Since 5.0.0
Context http, server, location, if

Sets environment variables to pass to the application. Environment variables are only set during application loading.

Example

server {
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;

    passenger_env_var DATABASE_USERNAME foo_db;
    passenger_env_var DATABASE_PASSWORD secret;
}

passenger_load_shell_envvars

Syntax passenger_load_shell_envvars on|off;
Default passenger_load_shell_envvars on;
Since 4.0.20
Context http, server, location, if

Enables or disables the loading of shell environment variables before spawning the application.

If this option is turned on, and the user's shell is bash, then applications are loaded by running them with bash -l -c. The benefit of this is that you can specify environment variables in .bashrc, and they will appear in the application as one would expect.

If this option is turned off, applications are loaded by running them directly from the Passenger core process.

passenger_preload_bundler

Syntax passenger_preload_bundler on|off;
Default passenger_preload_bundler off;
Since 6.0.13
Context http, server, location, if

Enables or disables loading bundler before loading your Ruby app.

If this option is turned on, Ruby will be instructed to load the bundler gem before loading your application. This can help with gem version conflicts caused by require-order issues.

passenger_rolling_restarts

Syntax passenger_rolling_restarts on|off;
Default passenger_rolling_restarts off;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Enables or disables support for zero-downtime application restarts through restart.txt.

Please note that passenger_rolling_restarts is completely unrelated to the passenger-config restart-app command. That command always initiates a blocking restart, unless --rolling-restart is given.

NOTE: Are you looking to prevent applications from being restarted when you restart Nginx? That is handled by the Flying Passenger mode, not by the rolling restarts feature.

passenger_resist_deployment_errors

Syntax passenger_resist_deployment_errors on|off;
Default passenger_resist_deployment_errors off;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Enables or disables resistance against deployment errors.

Suppose that you have upgraded your application and you have issued a command to restart it, but the application update contains an error (e.g. a syntax error or a database configuration error) that prevents Passenger from successfully spawning a process. Passenger would normally display an error message to the visitor in response to this.

If you enable deployment error resistance, Passenger Enterprise will instead "freeze" the application's process list. Existing application processes (belonging to the previous version) will be kept around to serve requests. The error is logged, but visitors do not see any error messages. Passenger keeps the old processes around until an administrator has taken action. This way, visitors suffer minimally from deployment errors.

Learn more about this feature in the Deployment Error Resistance guide.

Note that enabling deployment error resistance only works if you perform a rolling restart instead of a blocking restart.

passenger_instance_registry_dir

Syntax passenger_instance_registry_dir path;
Default passenger_instance_registry_dir /tmp|/var/run/passenger-instreg;
Since 5.0.0
Context http

Specifies the directory that Passenger should use for registering its current instance.

When Passenger starts up, it creates a temporary directory inside the instance registry directory. This temporary directory is called the instance directory. It contains all sorts of files that are important to that specific running Passenger instance, such as Unix domain socket files so that all the different Passenger processes can communicate with each other. Command line tools such as passenger-status use the files in this directory in order to query Passenger's status.

It is therefore important that, while Passenger is working, the instance directory is never removed or tampered with. However, the default path for the instance registry directory is the system's temporary directory, and some systems may run background jobs that periodically clean this directory. If this happens, and the files inside the instance directory are removed, then it will cause Passenger to malfunction: Passenger won't be able to communicate with its own processes, and you will see all kinds of connection errors in the log files. This malfunction can only be recovered from by restarting Nginx. You can prevent such cleaning background jobs from interfering by setting this option to a different directory.

This option is also useful if the partition that the temporary directory lives on doesn't have enough disk space.

The instance directory is automatically removed when Nginx shuts down.

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --instance-registry-dir command line option to the Flying Passenger daemon.

Default value

The default value for this option is as follows:

  • If you are on Red Hat, CentOS, Rocky, or Alma Linux and installed Passenger through the RPMs provided by Phusion, then the default value is /var/run/passenger-instreg.
  • Otherwise, the default value is the value of the $TMPDIR environment variable. Or, if $TMPDIR is not set, /tmp.

Note regarding command line tools

Some Passenger command line administration tools, such as passenger-status, must know what Passenger's instance registry directory is in order to function properly. You can pass the directory through the PASSENGER_INSTANCE_REGISTRY_DIR or the TMPDIR environment variable.

For example, if you set passenger_instance_registry_dir to /my_temp_dir, then invoke passenger-status after you've set the PASSENGER_INSTANCE_REGISTRY_DIR environment variable, like this:

export PASSENGER_INSTANCE_REGISTRY_DIR=/my_temp_dir
sudo -E passenger-status

Notes regarding the above example:

  • The -E option tells 'sudo' to preserve environment variables.
  • If Passenger is installed through an RVM Ruby, then you must use rvmsudo instead of sudo.

passenger_fly_with

Syntax passenger_fly_with path;
Default Flying Passenger mode disabled
Since 4.1.0
Context http
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Enables the Flying Passenger mode, and configures Nginx to connect to the Flying Passenger daemon that's listening on the given socket filename.

Performance tuning

passenger_core_file_descriptor_ulimit

Syntax passenger_core_file_descriptor_ulimit integer;
Default Inherited from Nginx
Since 5.0.26
Context http

Sets the file descriptor operating system ulimit for the Passenger core process. If you see "too many file descriptors" errors on a regular basis, then increasing this limit will help.

The default value is inherited from the process that started Passenger, which is the Nginx master process in the Nginx integration mode. Assuming Passenger has enough access rights (normally true if the Nginx master process runs as root), it can override its ulimit to the requested setting.

On most operating systems, the default ulimit can also be configured through a config file such as /etc/security/limits.conf, but since ulimits are inherited on a per-process basis instead of being set globally, changing ulimits through that file is an error-prone process. This Passenger configuration option provides an easier and more reliable way to set the file descriptor ulimit.

Note that application ulimits may also be affected by this setting because ulimits are inherited on a process basis (i.e. from Passenger). There are two exceptions to this:

  1. If you are using passenger_load_shell_envvars then the application processes are started through the shell, and the shell startup files may override the ulimits set by Passenger.

  2. You can also set the file descriptor ulimit on a per-application basis (instead of setting it globally for the Passenger core process) using passenger_app_file_descriptor_ulimit.

passenger_app_file_descriptor_ulimit

Syntax passenger_app_file_descriptor_ulimit integer;
Default See description
Since 5.0.26
Context http, server, location, if

Sets the file descriptor operating system ulimit for application processes managed by Passenger. If you see "too many file descriptors" errors on a regular basis, and these errors originate from the application processes (as opposed to the Passenger core process), then increasing this limit will help.

If the "too many file descriptors" errors originate from the Passenger core process, then setting this option will not help. Use passenger_core_file_descriptor_ulimit for that.

The default file descriptor ulimit is inherited from the Passenger core process. See passenger_core_file_descriptor_ulimit to learn how the default file descriptor ulimit for Passenger core process is set.
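
A sketch showing both ulimit options together (the numbers are arbitrary examples, not recommendations):

http {
    ...
    # Raise the ulimit for the Passenger core process.
    passenger_core_file_descriptor_ulimit 8192;

    server {
        listen 80;
        server_name www.foo.com;
        root /webapps/foo/public;
        passenger_enabled on;
        # Raise the ulimit for this application's processes only.
        passenger_app_file_descriptor_ulimit 4096;
    }
}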

passenger_max_pool_size

Syntax passenger_max_pool_size integer;
Default passenger_max_pool_size 6;
Since 1.0.0
Context http

The maximum number of application processes that may simultaneously exist. A larger number results in higher memory usage, but improves the ability to handle concurrent HTTP requests.

The optimal value depends on your system's hardware and your workload. Please read the optimization guide to learn how to find out the optimal value.

This option behaves like a "safety switch" that prevents Passenger from overloading your system with too many processes. No matter how you configure passenger_min_instances and passenger_max_instances, the total number of processes will never surpass the value set for this option. For example, if passenger_max_pool_size is set to 6, and you have deployed two applications on Passenger, each with passenger_min_instances set to 4, then the maximum number of processes that may simultaneously exist is 6, not 8.

If you find that your server is running out of memory then you should lower this value. In order to prevent your server from crashing due to out-of-memory conditions, the default value is relatively low (6).
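
For example, on a server with plenty of memory you might raise the limit as follows (the value is illustrative; consult the optimization guide for the right value for your system):

http {
    ...
    passenger_max_pool_size 12;
}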

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --max-pool-size command line option to the Flying Passenger daemon.

passenger_min_instances

Syntax passenger_min_instances integer;
Default passenger_min_instances 1;
Since 3.0.0
Context http, server, location, if

This specifies the minimum number of application processes that should exist for a given application. You should set this option to a non-zero value if you want to avoid potentially long startup times after a website has been idle for an extended period of time.

Please note that this option does not pre-start application processes during Nginx startup. It just makes sure that when the application is first accessed:

  1. at least the given number of processes will be spawned.
  2. the given number of processes will be kept around, even when idle processes are being cleaned up.

If you want to pre-start application processes during Nginx startup, then you should use the passenger_pre_start option, possibly in combination with passenger_min_instances. This behavior might seem counter-intuitive at first sight, but passenger_pre_start explains the rationale behind it.

Example

Suppose that you have the following configuration:

http {
    ...
    passenger_max_pool_size 15;
    passenger_pool_idle_time 10;

    server {
        listen 80;
        server_name foobar.com;
        root /webapps/foobar/public;
        passenger_min_instances 3;
    }
}

When you start Nginx, there are 0 application processes for 'foobar.com'. Things will stay that way until someone visits 'foobar.com'. Suppose that there is only one visitor. One application process will be started immediately to serve the visitor, while two will be spawned in the background. After 10 seconds, when the idle timeout has been reached, these 3 application processes will not be cleaned up.

Now suppose that there's a sudden spike of traffic, and 100 users visit 'foobar.com' simultaneously. Passenger will start 12 more application processes (15 - 3 = 12). After the idle timeout of 10 seconds has passed, Passenger will clean up 12 application processes, keeping 3 processes around.

passenger_max_instances

Syntax passenger_max_instances integer;
Default passenger_max_instances 0;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

The maximum number of application processes that may simultaneously exist for an application. This helps to make sure that a single application will not occupy all available slots in the application pool.

This value must be less than passenger_max_pool_size. A value of 0 means that there is no limit placed on the number of processes a single application may spawn, i.e. only the global limit of passenger_max_pool_size will be enforced.

Example

Suppose that you're hosting two web applications on your server, a personal blog and an e-commerce website. You've set passenger_max_pool_size to 10. The e-commerce website is more important to you. You can then set passenger_max_instances to 3 for your blog, so that it will never spawn more than 3 processes, even if it suddenly gets a lot of traffic. Your e-commerce website on the other hand will be free to spawn up to 10 processes if it gets a lot of traffic.
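
A sketch of that scenario in configuration form (hostnames and paths are illustrative):

http {
    ...
    passenger_max_pool_size 10;

    server {
        listen 80;
        server_name blog.example.com;
        root /webapps/blog/public;
        passenger_enabled on;
        # The blog may never occupy more than 3 of the 10 pool slots.
        passenger_max_instances 3;
    }

    server {
        listen 80;
        server_name shop.example.com;
        root /webapps/shop/public;
        passenger_enabled on;
        # No passenger_max_instances: the shop may use up to all 10 slots.
    }
}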

passenger_max_instances_per_app

Syntax passenger_max_instances_per_app integer;
Default passenger_max_instances_per_app 0;
Since 3.0.0
Context http

The maximum number of application processes that may simultaneously exist for a single application. This helps to make sure that a single application will not occupy all available slots in the application pool.

This value must be less than passenger_max_pool_size. A value of 0 means that there is no limit placed on the number of processes a single application may use, i.e. only the global limit of passenger_max_pool_size will be enforced.

Example

Suppose that you're hosting two blogs (blog A and B) on your server, and that you've set passenger_max_pool_size to 10. Under normal circumstances, if blog A suddenly gets a lot of traffic, then A will use all 10 pool slots. If blog B suddenly gets some traffic, then it will only be able to use 1 pool slot (forcefully releasing 1 slot from A) until A's traffic has settled down and it has released more pool slots.

If you consider both blogs equally important, then you can set passenger_max_instances_per_app to 5. This way, both blogs will never use more than 5 pool slots.

Relation to passenger_max_instances

Unlike passenger_max_instances, this configuration option is global (only usable in the http context) and applies to all applications. passenger_max_instances on the other hand is per-virtual host.

Suppose that you're hosting two web applications on your server, a personal blog and an e-commerce website. You've set passenger_max_pool_size to 10. The e-commerce website is more important to you. You can then set passenger_max_instances to 3 for your blog, so that it will never use more than 3 pool slots, even if it suddenly gets a lot of traffic. Your e-commerce website on the other hand will be free to use up all 10 slots if it gets a lot of traffic.

In summary, passenger_max_instances_per_app divides the pool equally among the different web applications, while 'passenger_max_instances' allows one to divide the pool unequally, according to each web application's relative importance.

passenger_pool_idle_time

Syntax passenger_pool_idle_time seconds;
Default passenger_pool_idle_time 300; (5 minutes)
Since 3.0.0
Context http

The maximum number of seconds that an application process may be idle. That is, if an application process hasn't received any traffic for the given number of seconds, then it will be shut down in order to conserve memory.

Decreasing this value means that applications will have to be spawned more often. Since spawning is a relatively slow operation, some visitors may notice a small delay when they visit your web app. However, it will also free up resources used by applications more quickly.

The optimal value depends on the average time that a visitor spends on a single dynamic page. We recommend a value of 2 * x, where x is the average number of seconds that a visitor spends on a single dynamic page. But your mileage may vary.

When this value is set to 0, application processes will not be shut down unless it is really necessary. Here is an example of a situation in which Passenger considers it necessary to shut down an application process. Suppose that you have two apps on your server, foo and bar. If a user visits foo, but there are no processes for foo, and at the same time there are lots of application processes for bar (as many as the pool limit), then Passenger will wait until one of those bar processes is no longer handling a request. At that point, that process will be shut down so that Passenger can spawn a foo process.

Setting the value to 0 is recommended if you're on a non-shared host that's only running a few applications, each of which must be available at all times.

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --pool-idle-time command line option to the Flying Passenger daemon.

passenger_max_preloader_idle_time

Syntax passenger_max_preloader_idle_time seconds;
Default passenger_max_preloader_idle_time 300; (5 minutes)
Since 4.0.0
Context http, server, location, if

The preloader process (explained in Spawn methods) has an idle timeout, just like the application processes spawned by Passenger do. That is, Passenger will automatically shut down a preloader process if it hasn't done anything for a given period.

This option allows you to set the preloader's idle timeout, in seconds. A value of 0 means that it should never idle timeout.

Setting a higher value means that the preloader is kept around longer, which may slightly increase memory usage. But as long as the preloader server is running, spawning a Ruby application process takes only about 10% of the time that is normally needed, assuming that you're using the smart spawning method. So if your system has enough memory, it is recommended that you set this option to a high value or to 0.
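
For example, to keep the preloader around indefinitely (the hostname and path are illustrative):

server {
    listen 80;
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    # Never shut down the preloader due to idleness.
    passenger_max_preloader_idle_time 0;
}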

passenger_force_max_concurrent_requests_per_process

Syntax passenger_force_max_concurrent_requests_per_process number;
Default passenger_force_max_concurrent_requests_per_process -1;
Since 5.0.22
Context http, server, location, if

Use this option to tell Passenger how many concurrent requests the application can handle per process. A value of 0 means that each process can handle an unlimited number of connections, while a value of -1 (the default) means that Passenger will infer the value based on internal heuristics.

There are three main use cases for this option:

  1. To make dynamic process scaling work in Node.js and Meteor applications. Set this option to approximately the number of concurrent requests at which the performance of a single process begins to degrade.
  2. To make SSE and WebSockets work well in Ruby applications. Set this option to 0.
  3. To specify the available concurrency of an app using the GLS capabilities of Passenger.

This option is a hint to Passenger and does not make the application actually able to handle that many concurrent requests per process. For example in Ruby applications, the amount of concurrency that your application process can handle usually depends on the number of configured threads. If you set the number of threads, then Passenger will automatically infer that Ruby applications' max concurrency per process equals the number of threads. But in non-standard cases where this heuristic fails (e.g. in situations where a WebSocket library such as Faye spawns threads to handle WebSockets) then you can use this option to override Passenger's heuristic.

It is recommended that you do not touch this configuration option unless you want to tweak Passenger for one of the three main use cases documented above.
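
For instance, for use case 2 (SSE/WebSockets in a Ruby app) you could apply the option to just the endpoint that holds connections open. The /cable path and group name below are illustrative assumptions, not prescriptions:

server {
    listen 80;
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;

    location /cable {
        # Give long-lived WebSocket connections their own process group
        # and treat their per-process concurrency as unlimited.
        passenger_app_group_name foo_websocket;
        passenger_force_max_concurrent_requests_per_process 0;
    }
}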

passenger_concurrency_model

Syntax passenger_concurrency_model process|thread;
Default passenger_concurrency_model process;
Since 4.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Specifies the I/O concurrency model that should be used for Ruby application processes. Passenger supports two concurrency models:

  • process – single-threaded, multi-processed I/O concurrency. Each application process only has a single thread and can only handle 1 request at a time. This is the concurrency model that Ruby applications traditionally used. It has excellent compatibility (can work with applications that are not designed to be thread-safe) but is unsuitable for workloads in which the application has to wait for a lot of external I/O (e.g. HTTP API calls), and uses more memory because each process has a large memory overhead.
  • thread – multi-threaded, multi-processed I/O concurrency. Each application process has multiple threads (customizable via passenger_thread_count). This model provides much better I/O concurrency and uses less memory, because threads share memory with each other within the same process. However, using this model may cause compatibility problems if the application is not designed to be thread-safe.

Please note:

  • This option only has effect on Ruby applications.
  • Multithreading is not supported for Python.
  • Multithreading is not applicable to Node.js and Meteor because they are evented and do not need (and cannot use) multithreading.

passenger_thread_count

Syntax passenger_thread_count integer;
Default passenger_thread_count 1;
Since 4.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Specifies the number of threads that Passenger should spawn per Ruby application process. This option only has effect if passenger_concurrency_model is thread.

  • This option only has effect on Ruby applications.
  • Multithreading is not supported for Python.
  • Multithreading is not applicable to Node.js and Meteor because they are evented and do not need (and cannot use) multithreading.
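
A sketch of a multithreaded Ruby configuration combining both options (Enterprise only; the thread count, hostname and path are illustrative):

server {
    listen 80;
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    # Run each Ruby application process with 4 threads.
    passenger_concurrency_model thread;
    passenger_thread_count 4;
}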

passenger_stat_throttle_rate

Syntax passenger_stat_throttle_rate seconds;
Default (as of 5.0.0) passenger_stat_throttle_rate 10;
Since 2.2.0
Context http, server, location, if

By default, Passenger performs several filesystem checks (or, in programmers' jargon, "stat() calls") each time a request is processed:

  • It checks which of the application's startup files are present, in order to autodetect the application type.
  • It checks whether restart.txt has changed or whether always_restart.txt exists, in order to determine whether the application should be restarted.

On some systems where disk I/O is expensive, e.g. systems where the harddisk is already being heavily loaded, or systems where applications are stored on NFS shares, these filesystem checks can incur a lot of overhead.

You can decrease or almost entirely eliminate this overhead by setting passenger_stat_throttle_rate. Setting this option to a value of x means that the above list of filesystem checks will be performed at most once every x seconds. Setting it to a value of '0' means that no throttling will take place, or in other words, that the above list of filesystem checks will be performed on every request.
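
For example, to perform the filesystem checks at most once every 5 minutes, which can be useful when applications are stored on NFS shares (the value is illustrative):

http {
    ...
    passenger_stat_throttle_rate 300;
}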

passenger_pre_start

Syntax passenger_pre_start url;
Since 3.0.0
Context http

By default, Passenger does not start any application processes until said web application is first accessed. The result is that the first visitor of said web application might experience a small delay as Passenger is starting the web application on demand. If that is undesirable, then this option can be used to pre-start application processes during Nginx startup.

A few things to be careful of:

  • This option accepts the URL of the web application you want to pre-start, not an on/off value! This might seem a bit weird, but read on for the rationale. As for the specifics of the URL:
    • The domain part of the URL must be equal to the value of the server_name option of the server block that defines the web application.
    • Unless the web application is deployed on port 80, the URL should contain the web application's port number too.
    • The path part of the URL must point to some URI that the web application handles.
  • You will probably want to combine this option with passenger_min_instances because application processes started with passenger_pre_start are subject to the usual idle timeout rules. See the example below for an explanation.
This option is currently not available when using Flying Passenger.

Example 1: basic usage

Suppose that you have the following web applications.

server {
    listen 80;
    server_name foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
}

server {
    listen 3500;
    server_name bar.com;
    root /webapps/bar/public;
    passenger_enabled on;
}

You want both of them to be pre-started during Nginx startup. The URL for foo.com is http://foo.com/ (or, equivalently, http://foo.com:80/) and the URL for bar.com is http://bar.com:3500/. So we add two passenger_pre_start options, like this:

server {
    listen 80;
    server_name foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
}

server {
    listen 3500;
    server_name bar.com;
    root /webapps/bar/public;
    passenger_enabled on;
}

passenger_pre_start http://foo.com/;           # <--- added
passenger_pre_start http://bar.com:3500/;      # <--- added

Example 2: pre-starting apps that are deployed in sub-URIs

Suppose that you have a web application deployed in a sub-URI /store, like this:

server {
    listen 80;
    server_name myblog.com;
    root /webapps/wordpress;
    passenger_enabled on;
    passenger_base_uri /store;
}

Then specify the server_name value followed by the sub-URI, like this:

server {
    listen 80;
    server_name myblog.com;
    root /webapps/wordpress;
    passenger_enabled on;
    passenger_base_uri /store;
}

passenger_pre_start http://myblog.com/store;    # <----- added

The sub-URI must be included; if you don't then the option will have no effect. The following example is wrong and won't pre-start the store web application:

passenger_pre_start http://myblog.com/;    # <----- WRONG! Missing "/store" part.

Example 3: combining with passenger_min_instances

Application processes started with passenger_pre_start are also subject to the idle timeout rules as specified by passenger_pool_idle_time! That means that by default, the pre-started application processes for foo.com and bar.com are shut down after a few minutes of inactivity. If you don't want that to happen, then you should combine passenger_pre_start with passenger_min_instances, like this:

server {
    listen 80;
    server_name foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_min_instances 1;      # <--- added
}

server {
    listen 3500;
    server_name bar.com;
    root /webapps/bar/public;
    passenger_enabled on;
    passenger_min_instances 1;      # <--- added
}

passenger_pre_start http://foo.com/;
passenger_pre_start http://bar.com:3500/;

So why a URL? Why not just an on/off flag?

An option that accepts a simple on/off flag is definitely more intuitive, but due to technical difficulties with the way Nginx works, it's very hard to implement it like that:

It is very hard to obtain a full list of web applications defined in the Nginx configuration file(s). In other words, it's hard for Passenger to know which web applications are deployed on Nginx until a web application is first accessed, and without such a list Passenger wouldn't know which web applications to pre-start. So as a compromise, we made it accept a URL.

What does Passenger do with the URL?

During Nginx startup, Passenger will send a dummy HEAD request to the given URL and discard the result. In other words, Passenger simulates a web access at the given URL. However this simulated request is always sent to localhost, not to the IP that the domain resolves to. Suppose that bar.com in example 1 resolves to 209.85.227.99; Passenger will send the following HTTP request to 127.0.0.1 port 3500 (and not to 209.85.227.99 port 3500):

HEAD / HTTP/1.1
Host: bar.com
Connection: close

Similarly, for example 2, Passenger will send the following HTTP request to 127.0.0.1 port 80:

HEAD /store HTTP/1.1
Host: myblog.com
Connection: close

Do I need to edit /etc/hosts and point the domain in the URL to 127.0.0.1?

No. See previous subsection.

My web application consists of multiple web servers. What URL do I need to specify, and in which web server's Nginx config file?

Put the web application's server_name value and the server block's port in the URL, and put passenger_pre_start on all machines that you want to pre-start the web application on. The simulated web request is always sent to 127.0.0.1, with the domain name in the URL as value for the 'Host' HTTP header, so you don't need to worry about the request ending up at a different web server in the cluster.

Does passenger_pre_start support https:// URLs?

Yes. And it does not perform any certificate validation.

passenger_response_buffer_high_watermark

Syntax passenger_response_buffer_high_watermark bytes;
Default passenger_response_buffer_high_watermark 134217728; (128 MB)
Since 5.0.0
Context http, server, location, if

As explained in passenger_buffer_response, Passenger has two response buffering mechanisms. This option configures the maximum size of the real-time disk-backed response buffering system. If the buffer is full, the application will be blocked until the client has fully read the buffer.

This buffering system has a default size of 128 MB (134217728 bytes). This default value is large enough to prevent most applications from blocking on slow clients, but small enough to prevent broken applications from filling up the hard disk.

You can't disable real-time disk-backed response buffering, but you can set the buffer size to a small value, which is effectively the same as disabling it.

Most of the time, you won't need to tweak this value. But there is one good use case where you may want to set this option to a low value: if you are streaming a large response, but want to detect client disconnections as soon as possible. If the buffer size is larger than your response size, then Passenger will read and buffer the response as fast as it can, offloading the application as soon as it can, thereby preventing the application from detecting client disconnects. But if the buffer size is sufficiently small (say, 64 KB), then your application will effectively output response data at the same speed as the client reads it, allowing you to detect client disconnects almost immediately. This is also a downside, because many slow clients blocking your application can result in a denial of service, so use this option with care.

If your application outputs responses larger than 128 MB and you are not interested in detecting client disconnects as soon as possible, then you should raise this value, or set it to 0.

A value of 0 means that the buffer size is unlimited.
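
For example, to detect client disconnects quickly while streaming large responses, you could shrink the buffer to 64 KB as described above (the hostname and path are illustrative):

server {
    listen 80;
    server_name streaming.example.com;
    root /webapps/streaming/public;
    passenger_enabled on;
    # Buffer at most 64 KB of response data per request.
    passenger_response_buffer_high_watermark 65536;
}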

passenger_max_request_queue_size

Syntax passenger_max_request_queue_size integer;
Default passenger_max_request_queue_size 100;
Since 4.0.15
Context http, server, location, if

When all application processes are already handling their maximum number of concurrent requests, Passenger will queue all incoming requests. This option specifies the maximum size for that queue. If the queue is already at this specified limit, then Passenger will immediately send a "503 Service Unavailable" error to any incoming requests. You may use passenger_request_queue_overflow_status_code to customize the response status.

A value of 0 means that the queue is unbounded.

This article on StackOverflow explains how the request queue works, what it means for the queue to grow or become full, why that is bad, and what you can do about it.

You may combine this option with passenger_intercept_errors and error_page to set a custom error page whenever the queue is full. In the following example, Nginx will serve /error503.html whenever the queue is full:

passenger_intercept_errors on;
error_page 503 /error503.html;

passenger_max_request_queue_time

Syntax passenger_max_request_queue_time integer
Default passenger_max_request_queue_time 0
Since 5.1.12
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

When all application processes are already handling their maximum number of concurrent requests, Passenger will queue all incoming requests. This option specifies the maximum time a request may spend in that queue. If a request in the queue reaches this specified limit, then Passenger will send a "504 Gateway Timeout" error for that request. For performance reasons it might take up to 0.5 × passenger_max_request_queue_time after a request timed out before a 504 response is sent (when all application processes are stuck).

A value of 0 means that the queue time is unbounded.

This blog article explains how to use this option to optimize the user experience during rush hour, when queueing starts happening.

You may combine this option with passenger_intercept_errors and error_page to set a custom error page whenever a request spends too long in the queue. In the following example, Nginx will serve /error504.html whenever a queued request times out:

passenger_intercept_errors on;
error_page 504 /error504.html;

passenger_socket_backlog

Syntax passenger_socket_backlog size;
Default passenger_socket_backlog 1024; (< 5.0.26)
passenger_socket_backlog 2048; (≥ 5.0.26)
Since 5.0.24
Context http

The socket backlog is a queue of incoming connections (from Nginx) that have not yet been acknowledged by Passenger. The default value is chosen to match the default for Nginx's worker_connections. If you increase the latter, it is likely that you'll also need to increase passenger_socket_backlog. If connections are coming in too fast and overflow the backlog, you'll see the error:

connect() to unix:/tmp/passenger… failed (11: Resource temporarily unavailable) while connecting to upstream
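For example, if you raise Nginx's worker_connections, a matching sketch (the numbers are illustrative) would be:

events {
    worker_connections 4096;
}

http {
    # Keep the Passenger socket backlog in line with worker_connections.
    passenger_socket_backlog 4096;
}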

passenger_turbocaching

Syntax passenger_turbocaching on|off;
Default passenger_turbocaching on;
Since 5.0.0
Context http

When set to off, this disables Passenger's turbocache.

passenger_vary_turbocache_by_cookie

Syntax passenger_vary_turbocache_by_cookie name;
Default passenger_vary_turbocache_by_cookie _passenger_route;
Since 5.0.0
Context http, server, location, if

If set, Passenger will treat requests as distinct entries in the turbocache whenever the value of the cookie with the given name differs.
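For instance, a minimal sketch that keys the turbocache on a hypothetical user_session cookie (the cookie name is illustrative):

passenger_vary_turbocache_by_cookie user_session;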

Security

passenger_user_switching

Syntax passenger_user_switching on|off;
Default passenger_user_switching on;
Since 2.0.0
Context http

Whether to attempt to enable user account sandboxing, also known as user switching.

This option has no effect when you are using Flying Passenger. You can disable user account sandboxing for Flying Passenger by starting the Flying Passenger daemon as a non-root user.
If you're on Red Hat, CentOS, Rocky, or Alma Linux be sure to read the Enterprise Linux user account sandboxing caveats.

passenger_user

Syntax passenger_user username;
Default See the user account sandboxing rules
Since 4.0.0
Context http, server, location, if

If user account sandboxing (also known as user switching) is enabled, then Passenger will by default run the web application as the owner of the application's startup file. passenger_user allows you to override that behavior and explicitly set a user to run the web application as, regardless of the ownership of the startup file.

passenger_group

Syntax passenger_group groupname;
Default See the user account sandboxing rules
Since 4.0.0
Context http, server, location, if

If user account sandboxing (also known as user switching) is enabled, then Passenger will by default run the web application as the primary group of the owner of the application's startup file. passenger_group allows you to override that behavior and explicitly set a group to run the web application as, regardless of the ownership of the startup file.

The value may also be set to the special value !STARTUP_FILE!, in which case the web application's group will be set to the startup file's group.
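For example, a sketch that runs the application as an explicit user and group instead of the startup file's owner; the server name, path and the deploy account are illustrative:

server {
    listen 80;
    server_name www.example.com;
    root /webapps/example/public;
    passenger_enabled on;

    # Run the application as this user and group regardless of who
    # owns the startup file ('deploy' is just an example account).
    passenger_user deploy;
    passenger_group deploy;
}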

passenger_default_user

Syntax passenger_default_user username;
Default passenger_default_user nobody;
Since 3.0.0
Context http, server, location, if

Passenger enables user account sandboxing (also known as user switching) by default. This configuration option allows you to specify the user that applications must run as, if user switching fails or is disabled.

This option has no effect when you are using Flying Passenger. You can disable user account sandboxing for Flying Passenger by starting the Flying Passenger daemon as a non-root user.

passenger_default_group

Syntax passenger_default_group groupname;
Default See description
Since 3.0.0
Context http, server, location, if

Passenger enables user account sandboxing (also known as user switching) by default. This configuration option allows you to specify the group that applications must run as, if user switching fails or is disabled.

The default value is the primary group of the user specified by passenger_default_user. So the default value on most systems is nobody or nogroup.

This option has no effect when you are using Flying Passenger. You can disable user account sandboxing for Flying Passenger by starting the Flying Passenger daemon as a non-root user.

passenger_show_version_in_header

Syntax passenger_show_version_in_header on|off;
Default passenger_show_version_in_header on;
Since 5.0.0
Context http

When turned on, Passenger will output its version number in the Server and X-Powered-By headers in all Passenger-served requests:

Server: nginx/1.8.0 + Phusion Passenger 5.0.13
X-Powered-By: Phusion Passenger 5.0.13

When turned off, the version number will be hidden:

Server: nginx/1.8.0 + Phusion Passenger
X-Powered-By: Phusion Passenger

passenger_friendly_error_pages

Syntax passenger_friendly_error_pages on|off;
Default (as of 5.0.28)

When passenger_app_env is development:
passenger_friendly_error_pages on;

Otherwise:
passenger_friendly_error_pages off;

Since 4.0.0
Context http, server, location, if

Passenger can display friendly error pages whenever an application fails to start. This friendly error page presents the startup error message, some suggestions for solving the problem, a backtrace and a dump of the environment variables.

This feature is very useful during application development and for less experienced system administrators, but the page might reveal potentially sensitive information, depending on the application. For this reason, friendly error pages are disabled by default, unless passenger_app_env (or one of its aliases such as rails_env and rack_env) is set to development. You can use this option to explicitly enable or disable this feature.

passenger_disable_security_update_check

Syntax passenger_disable_security_update_check on|off;
Default passenger_disable_security_update_check off;
Since 5.1.0
Context http

This option allows disabling the Passenger security update check, a daily check with https://securitycheck.phusionpassenger.com for important security updates that might be available.

passenger_security_update_check_proxy

Syntax passenger_security_update_check_proxy scheme://user:password@proxy_host:proxy_port;
Since 5.1.0
Context http

This option allows use of an intermediate proxy for the Passenger security update check. The proxy client code uses libcurl, which supports the following values for scheme:
http, socks5, socks5h, socks4, socks4a

passenger_disable_anonymous_telemetry

Syntax passenger_disable_anonymous_telemetry on|off;
Default passenger_disable_anonymous_telemetry off;
Since 6.0.0
Context http

This option allows disabling the Passenger anonymous telemetry reporting, which regularly sends anonymous telemetry data to https://anontelemetry.phusionpassenger.com.

passenger_anonymous_telemetry_proxy

Syntax passenger_anonymous_telemetry_proxy scheme://user:password@proxy_host:proxy_port;
Since 6.0.0
Context http

This option allows use of an intermediate proxy for the Passenger anonymous telemetry reporting. The proxy client code uses libcurl, which supports the following values for scheme:
http, socks5, socks5h, socks4, socks4a

passenger_data_buffer_dir

Syntax passenger_data_buffer_dir path;
Default See description
Since 5.0.0
Context http

By default, Passenger buffers large web application responses. This prevents slow HTTP clients from blocking web applications by reading responses very slowly. This feature is also known as "real-time disk-backed response buffering".

By default, such buffers are stored in the directory given by the $TMPDIR environment variable, or (if $TMPDIR is not set) the /tmp directory. This configuration option allows you to specify a different directory.

Changing this option is especially useful if the partition that the default directory lives on doesn't have enough disk space.

If you've specified such a directory (as opposed to using Passenger's default) then you must ensure that this directory exists.
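A minimal sketch (the path is illustrative; the directory must already exist and be writable by Passenger):

http {
    passenger_data_buffer_dir /var/cache/passenger-buffers;
}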

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --data-buffer-dir command line option to the Flying Passenger daemon.

passenger_buffer_response

Syntax passenger_buffer_response on|off;
Default passenger_buffer_response off;
Since 4.0.0
Context http, server, location, if

When turned on, application-generated responses are buffered by Nginx. Buffering will happen in memory and also on disk if the response is larger than a certain threshold.

Before we proceed with explaining this configuration option, we want to state the following to avoid confusion. If you use Passenger for Nginx, there are in fact two response buffering systems active:

  1. The Nginx response buffering system. passenger_buffer_response turns this on or off.
  2. The Passenger response buffering system, a.k.a. "real-time disk-backed response buffering". This buffering system is always on, regardless of the value of passenger_buffer_response, but its behavior can be tweaked with passenger_response_buffer_high_watermark.

Response buffering is useful because it protects against slow HTTP clients that do not read responses immediately or quickly enough. Buffering prevents such slow clients from blocking web applications that have limited concurrency. Because Passenger's response buffering is always turned on, you are always protected. Therefore, passenger_buffer_response is off by default, and you should never have to turn it on.

If for whatever reason you want to turn Nginx-level response buffering on, you can do so with this option.

Nginx's response buffering works differently from Passenger's. Nginx's buffering system buffers the entire response before attempting to send it to the client, while Passenger's attempts to send the data to the client immediately. Therefore, if you turn on passenger_buffer_response, you may interfere with applications that want to stream responses to the client.

So keep in mind that enabling passenger_buffer_response will make streaming responses impossible. Consider for example this piece of Ruby on Rails code:

render :text => lambda { |response, output|
  10.times do |i|
    output.write("entry #{i}\n")
    output.flush
    sleep 1
  end
}

…or this piece of Ruby Rack code:

class Response
  def each
    10.times do |i|
      yield("entry #{i}\n")
      sleep 1
    end
  end
end

app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, Response.new]
end

When passenger_buffer_response is turned on, Nginx will wait until the application is done sending the entire response before forwarding it to the client. The client will not receive anything for 10 seconds, after which it receives the entire response at once. When passenger_buffer_response is turned off, it works as expected: the client receives an "entry X" message every second for 10 seconds.

passenger_request_buffering

Syntax passenger_request_buffering on|off;
Default passenger_request_buffering on;
Since 6.0.0
Context http, server, location, if

When turned off, request body buffering is disabled. This allows for streaming uploads, but is only supported when used with Nginx >= 1.15.3.

passenger_buffer_upload

Syntax passenger_buffer_upload on|off;
Default passenger_buffer_upload off;
Since 6.0.3
Context http, server, location, if

When enabled, Passenger will buffer the upload from the client (useful if your app cannot handle chunked uploads).

passenger_spawn_dir

Syntax passenger_spawn_dir path;
Default passenger_spawn_dir /tmp|$TMPDIR;
Since 6.0.3
Context http

The directory in which Passenger will record progress during startup. This is particularly useful for users of sandboxing technology such as CageFS, SELinux, or macOS sandboxes. The default value is the value of the $TMPDIR environment variable, or /tmp if $TMPDIR is not set.

passenger_direct_instance_request_address

Syntax passenger_direct_instance_request_address ip;
Default passenger_direct_instance_request_address 127.0.0.1;
Since 6.0.7
Context http, server, location, if

The address which Passenger will have your Ruby app additionally bind to, so that requests can be sent directly to specific app instances. Sending requests to specific app processes is detailed here.

passenger_temp_path

Syntax passenger_temp_path path;
Default passenger_temp_path passenger_temp;
Since 6.0.5
Context http, server, location, if

The directory which Passenger will use for the disk-backed response cache. This is particularly useful for users of sandboxing technology such as CageFS, SELinux, or macOS sandboxes.

Request / response customization

passenger_base_uri

Syntax passenger_base_uri uri;
Since 2.0.0
Context http, server, location, if

Used to specify that the given URI is a distinct application that should be served by Passenger. Please see the deployment guide for more information.

It is allowed to specify this option multiple times. Do this to deploy multiple applications in different sub-URIs under the same virtual host.
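For example, a hedged sketch that serves two applications under sub-URIs of the same virtual host; the server name, paths and sub-URI names are illustrative, and the deployment guide covers the full sub-URI setup:

server {
    listen 80;
    server_name www.example.com;
    root /webapps/example/public;
    passenger_enabled on;

    # Each sub-URI is served by its own Passenger application.
    passenger_base_uri /subapp1;
    passenger_base_uri /subapp2;
}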

As of version 5.2, there is a bug that prevents using both root and a base URI at the same time in Passenger.

passenger_document_root

Syntax passenger_document_root path;
Since 4.0.25
Context http, server, location, if

Used in sub-URI deployment scenarios to tell Passenger where it should look for static files. Please see the deployment guide for more information.

passenger_sticky_sessions

Syntax passenger_sticky_sessions on|off;
Default passenger_sticky_sessions off;
Since 4.0.45
Context http, server, location, if

When sticky sessions are enabled, all requests that a client sends will be routed to the same originating application process, whenever possible. When sticky sessions are disabled, requests may be distributed over multiple processes, and may not necessarily be routed to the originating process, in order to balance traffic over multiple CPU cores. Because of this, sticky sessions should only be enabled in specific circumstances.

For applications that store important state inside the process's own memory – that is, as opposed to storing state in a distributed data store, such as the database or Redis – sticky sessions should be enabled. This is because otherwise, some requests could be routed to a different process, which stores different state data. Because processes don't share memory with each other, there's no way for one process to know about the state in another process, and then things can go wrong.

One prominent example is the popular SockJS library, which is capable of emulating WebSockets through long polling. This is implemented through two HTTP endpoints, /SESSION_ID/xhr_stream (a long polling end point which sends data from the server to the client), and /SESSION_ID/xhr_send (a normal POST endpoint which is used for sending data from the client to the server). SockJS correlates the two requests with each other through a session identifier. At the same time, in its default configuration, it stores all known session identifiers in an in-memory data structure. It is therefore important that a particular /SESSION_ID/xhr_send request is sent to the same process where the corresponding /SESSION_ID/xhr_stream request originates from; otherwise, SockJS cannot correlate the two requests, and an error occurs.

Prominent examples where sticky sessions should (or even must) be enabled include:

  • Applications that use the SockJS library (unless configured with a distributed data store)
  • Applications that use the Socket.io library (unless configured with a distributed data store)
  • Applications that use the faye-websocket gem (unless configured with a distributed data store)
  • Meteor JS applications (because Meteor uses SockJS)

Sticky sessions work through the use of a special cookie, whose name can be customized with passenger_sticky_sessions_cookie_name. Passenger puts an identifier in this cookie, which tells Passenger what the originating process is. Next time the client sends a request, Passenger reads this cookie and uses the value in the cookie to route the request back to the originating process. If the originating process no longer exists (e.g. because it has crashed or restarted) then Passenger will route the request to some other process, and reset the cookie.

If you have a load balancer in front of Passenger, then you must configure sticky sessions on that load balancer too. Otherwise, the load balancer could route the request to a different server.
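For example, a minimal sketch for an application that relies on SockJS-style in-process state; the server name and path are illustrative:

server {
    listen 80;
    server_name chat.example.com;
    root /webapps/chat/public;
    passenger_enabled on;

    # Route each client back to the process it first hit.
    passenger_sticky_sessions on;
}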

passenger_sticky_sessions_cookie_name

Syntax passenger_sticky_sessions_cookie_name name;
Default passenger_sticky_sessions_cookie_name _passenger_route;
Since 4.0.45
Context http, server, location, if

Sets the name of the sticky sessions cookie.

passenger_sticky_sessions_cookie_attributes

Syntax passenger_sticky_sessions_cookie_attributes string;
Default passenger_sticky_sessions_cookie_attributes "SameSite=Lax; Secure;";
Since 6.0.5
Context http, server, location, if

Sets the attributes of the sticky sessions cookie.

passenger_set_header

Syntax passenger_set_header HTTP-header-name value;
Since 5.0.0
Context http, server, location, if

Sets additional HTTP headers to pass to the web application. This is comparable to ngx_http_proxy_module's proxy_set_header option. Nginx variables in the value are interpolated.

Example:

server {
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;

    passenger_set_header X-Power-Level 9000;
    passenger_set_header X-Forwarded-For internal-router.foo.com;
}

Headers set by this option cannot be spoofed by the client. Passenger/Nginx will not forward any client-supplied headers with the same names.

This configuration option is NOT inherited across contexts

In each new context (e.g. in each new location block), you must re-specify passenger_set_header. Values set in parent contexts have no effect on subcontexts. For example:

server {
    ...
    passenger_set_header X-Foo foo;

    location /users {
        passenger_enabled on;
        # !!!THIS IS WRONG!!! The 'X-Foo' header will not
        # be passed to URLs beginning with /users because we didn't
        # re-specify passenger_set_header.
    }

    location /apps {
        passenger_enabled on;
        # This is correct. Here we re-specify passenger_set_header,
        # so the 'X-Foo' header will be correctly passed to URLs
        # starting with /apps.
        passenger_set_header X-Foo foo;
    }
}

passenger_request_queue_overflow_status_code

Syntax passenger_request_queue_overflow_status_code code;
Default passenger_request_queue_overflow_status_code 503;
Since 4.0.15
Context http, server, location, if

This option allows you to customize the HTTP status code that is sent back when the request queue is full. See passenger_max_request_queue_size for more information.
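For example, to report queue overflow as 429 Too Many Requests instead of 503 (the choice of status code is just an illustration):

passenger_request_queue_overflow_status_code 429;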

passenger_request_queue_timeout_status_code

Syntax passenger_request_queue_timeout_status_code code;
Default passenger_request_queue_timeout_status_code 504;
Since 5.1.12
Context http, server, location, if

This option allows you to customize the HTTP status code that is sent back when a request remains in the queue for too long. See passenger_max_request_queue_time for more information.

passenger_spawn_exception_status_code

Syntax passenger_spawn_exception_status_code code;
Default passenger_spawn_exception_status_code 500;
Since 6.0.12
Context http, server, location, if

This option allows you to customize the HTTP status code that is sent back when an application fails to start.

passenger_ignore_client_abort

Syntax passenger_ignore_client_abort on|off;
Default passenger_ignore_client_abort off;
Since 4.0.0
Context http, server, location, if

Normally, when the HTTP client aborts the connection (e.g. when the user clicked on "Stop" in the browser), the connection with the application process will be closed too. If the application process continues to send its response, then that will result in EPIPE ("Broken pipe") errors in the application, which will be printed in the error log if the application doesn't handle them gracefully.

If this option is turned on then, upon a client abort, Passenger will continue to read the application process's response while discarding all the read data. This prevents EPIPE errors but it will also mean the application process will be unavailable for new requests until it is done sending its response.

passenger_custom_error_page

Syntax passenger_custom_error_page path;
Default passenger_custom_error_page "";
Since 6.0.23
Context http, server, location, if

Replaces the default Passenger error page when there is an issue spawning an app.

By default, Passenger either renders a friendly error page or a minimal error page depending on the passenger_friendly_error_pages and passenger_app_env config options. This option overrides the error page with your own. The path should point to a static file which Passenger has permission to read.
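A minimal sketch (the path is illustrative; the file must be a static page readable by Passenger):

passenger_custom_error_page /webapps/example/spawn_error.html;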

passenger_intercept_errors

Syntax passenger_intercept_errors on|off;
Default passenger_intercept_errors off;
Since 4.0.15
Context http, server, location, if

Decides if Nginx will intercept responses with HTTP status codes of 400 and higher.

By default, all responses are sent as-is from the application or from the Passenger core. If you turn this option on then Nginx will be able to handle such responses using the Nginx error_page option. Responses with status codes that do not match an error_page option are sent as-is.

passenger_pass_header

Syntax passenger_pass_header header-name;
Since 4.0.0
Context http, server, location, if

Some headers generated by application processes are not forwarded to the HTTP client. For example, X-Accel-Redirect is directly processed by Nginx and then discarded from the final response. This option allows one to force Nginx to pass those headers to the client anyway, similar to how proxy_pass_header works.

Example:

location / {
   passenger_pass_header X-Accel-Redirect;
}

passenger_ignore_headers

Syntax passenger_ignore_headers header-names...;
Since 4.0.0
Context http, server, location, if

Disables processing of certain response header fields from the application, similar to how proxy_ignore_headers works.
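For example, a hedged sketch that stops Nginx from acting on X-Accel-* fields sent by the application (the choice of header fields is illustrative):

passenger_ignore_headers X-Accel-Redirect X-Accel-Buffering;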

passenger_headers_hash_bucket_size

Syntax passenger_headers_hash_bucket_size integer;
Default passenger_headers_hash_bucket_size 64;
Since 4.0.0
Context http, server, location, if

Sets the bucket size of the hash tables used by the passenger_set_header directive. The details of setting up hash tables can be found in the Nginx documentation.

passenger_headers_hash_max_size

Syntax passenger_headers_hash_max_size integer;
Default passenger_headers_hash_max_size 512;
Since 4.0.0
Context http, server, location, if

Sets the maximum size of the hash tables used by the passenger_set_header directive. The details of setting up hash tables can be found in the Nginx documentation.

passenger_buffer_size, passenger_buffers, passenger_busy_buffers_size

Syntax passenger_buffer_size size;
passenger_buffers number size;
passenger_busy_buffers_size size;
Default passenger_buffer_size 4k|8k;
passenger_buffers 8 4k|8k;
passenger_busy_buffers_size 8k|16k;
Since 4.0.0
Context http, server, location, if

These options have the same effect as ngx_http_proxy_module's similarly named options. They can be used to modify the maximum allowed HTTP header size. Please refer to the ngx_http_proxy_module documentation for details.
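For example, a hedged sketch (the sizes are illustrative) that raises the limits to accommodate applications emitting large response headers:

passenger_buffer_size 16k;
passenger_buffers 8 16k;
passenger_busy_buffers_size 32k;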

Logging & troubleshooting

passenger_log_level

Syntax passenger_log_level number;
Default (as of 5.0.0) passenger_log_level 3;
Since 3.0.0
Context http

This option allows one to specify how much information Passenger should log to its log file. A higher log level value means that more information will be logged.

Possible values are:

  • 0 (crit): Show only critical errors which would cause Passenger to abort.
  • 1 (error): Also show non-critical errors – errors that do not cause Passenger to abort.
  • 2 (warn): Also show warnings. These are not errors, and Passenger continues to operate correctly, but they might be an indication that something is wrong with the system.
  • 3 (notice): Also show important informational messages. These give you a high-level overview of what Passenger is doing.
  • 4 (info): Also show less important informational messages. These messages show more details about what Passenger is doing. They're high-level enough to be readable by users.
  • 5 (debug): Also show the most important debugging information. Reading this information requires some system or programming knowledge, but the information shown is typically high-level enough to be understood by experienced system administrators.
  • 6 (debug2): Show more debugging information. This is typically only useful for developers.
  • 7 (debug3): Show even more debugging information.

passenger_disable_log_prefix

Syntax passenger_disable_log_prefix on|off;
Default passenger_disable_log_prefix off;
Since 6.0.2
Context http

This option allows one to stop Passenger from prefixing logs that come from your app with "App PID stdout stderr" when they are written to Passenger's log. This can be useful to simplify log-aggregating setups.

passenger_log_file

Syntax passenger_log_file path;
Default passenger_log_file path-to-nginx-global-error-log;
Since 5.0.5
Context http

By default Passenger log messages are written to the Nginx global error log. With this option, you can have those messages logged to a different file instead.
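A minimal sketch (the path is illustrative):

http {
    passenger_log_file /var/log/nginx/passenger.log;
}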

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --log-file command line option to the Flying Passenger daemon.

passenger_app_log_file

Syntax passenger_app_log_file path;
Default passenger_app_log_file path-to-passenger-log-file;
Since 5.3.0
Context server
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

By default, Passenger log messages are all written to the Passenger log file. With this option, you can additionally have the app-specific messages logged to a separate file.

passenger_file_descriptor_log_file

Syntax passenger_file_descriptor_log_file path;
Default passenger_file_descriptor_log_file path-to-nginx-global-error-log;
Since 5.0.5
Context http

Log file descriptor debug tracing messages to the given file.

Passenger has the ability to log all file descriptors that it opens and closes. These logs are useful to the Passenger developers for the purpose of analyzing file descriptor leaks.

File descriptor activity is logged as follows:

  • If passenger_file_descriptor_log_file is not set, then file descriptor activity is logged to the main log file, but only if the log level is 5 (debug) or higher.
  • If passenger_file_descriptor_log_file is set, then file descriptor activity is logged to the specified file, regardless of the log level.

Flying Passenger note

This option has no effect when you are using Flying Passenger. Instead, you should configure this by passing the --file-descriptor-log-file command line option to the Flying Passenger daemon.

passenger_debugger

Syntax passenger_debugger on|off;
Default passenger_debugger off;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.
At this time, this feature is supported for Ruby applications only.

Turns support for Ruby application debugging on or off. Please read the Ruby debugging console guide for more information.

passenger_admin_panel_url

Syntax passenger_admin_panel_url uri;
Since 5.2.2
Context http

The URI to connect to the Fuse Panel with. Information is sent to enable monitoring, administering, analysis and troubleshooting of this Passenger instance and apps running on it. The feature is disabled if this option is not specified. See "Connect Passengers" in the Fuse Panel for further instructions.

passenger_admin_panel_auth_type

Syntax passenger_admin_panel_auth_type type;
Default passenger_admin_panel_auth_type basic;
Since 5.2.2
Context http

The authentication method Passenger should use when connecting to the Fuse Panel. Currently only basic authentication is supported. See "Connect Passengers" in the Fuse Panel for further instructions.

passenger_admin_panel_username

Syntax passenger_admin_panel_username string;
Since 5.2.2
Context http

The username that Passenger should use when connecting to the Fuse Panel with basic authentication. See "Connect Passengers" in the Fuse Panel for further instructions.

passenger_admin_panel_password

Syntax passenger_admin_panel_password string;
Since 5.2.2
Context http

The password that Passenger should use when connecting to the Fuse Panel with basic authentication. See "Connect Passengers" in the Fuse Panel for further instructions.

passenger_dump_config_manifest

Syntax passenger_dump_config_manifest path;
Since 5.2.2
Context http

If specified, Passenger will dump a representation of its own configuration to the given file, in JSON format. This option is usually only interesting to Passenger developers for the purpose of developing configuration-related features.

passenger_max_requests

Syntax passenger_max_requests integer;
Default passenger_max_requests 0;
Since 3.0.0
Context http, server, location, if

The maximum number of requests an application process will process. After serving that many requests, the application process will be shut down and Passenger will restart it. A value of 0 means that there is no maximum. The application process might also be shut down if its idle timeout is reached.

This option is useful if your application is leaking memory. By shutting it down after a certain number of requests, all of its memory is guaranteed to be freed by the operating system. An alternative (and better) mechanism for dealing with memory leaks is passenger_memory_limit.

This option should be considered as a workaround for misbehaving applications. It is advised that you fix the problem in your application rather than relying on this option as a measure to avoid memory leaks.

passenger_max_request_time

Syntax passenger_max_request_time seconds;
Default passenger_max_request_time 0;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

The maximum amount of time, in seconds, that an application process may take to process a request. If the request takes longer than this amount of time, then the application process will be forcefully shut down, and possibly restarted upon the next request. A value of 0 means that there is no time limit.

This option is useful for preventing your application from getting stuck for an indefinite period of time.

This option should be considered as a workaround for misbehaving applications. It is advised that you fix the problem in your application rather than relying on this option as a measure to avoid stuck applications.

Example

Suppose that most of your requests are known to finish within 2 seconds. However, there is one URI, /expensive_computation, which is known to take up to 10 seconds. You can then configure Passenger as follows:

server {
    listen 80;
    server_name www.example.com;
    root /webapps/my_app/public;
    passenger_enabled on;
    passenger_max_request_time 2;
    location /expensive_computation {
        passenger_enabled on;
        passenger_max_request_time 10;
    }
}

If a request to '/expensive_computation' takes more than 10 seconds, or if a request to any other URI takes more than 2 seconds, then the corresponding application process will be forced to shut down.

passenger_read_timeout

Syntax passenger_read_timeout milliseconds;
Default passenger_read_timeout 600000;
Since 5.0.7
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

Available for rare cases when server needs more than the default 10 minute timeout.

passenger_memory_limit

Syntax passenger_memory_limit megabytes;
Default passenger_memory_limit 0;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

The soft limit on memory that an application process may use, in megabytes. Once an application process has surpassed this memory limit, Passenger allows it to finish processing all of its current requests, then shuts the process down. A value of 0 means that there is no maximum: the application's memory usage will not be checked.

This option is useful if your application is leaking memory. By shutting it down, all of its memory is guaranteed to be freed by the operating system.
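For example, a hedged sketch (the limit is illustrative) that gracefully restarts any process exceeding roughly 500 MB:

passenger_memory_limit 500;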

A word about permissions

This option uses the ps command to query memory usage information. On Linux, it further queries /proc to obtain additional memory usage information that's not obtainable through ps. You should ensure that the ps command works correctly and that the /proc filesystem is accessible by the Passenger core process.

This option should be considered a workaround for misbehaving applications. It is advised that you fix the problem in your application rather than relying on this option as a measure to avoid memory leaks, as respawning app processes takes time.

passenger_hard_memory_limit

Syntax passenger_hard_memory_limit megabytes;
Default passenger_hard_memory_limit 0;
Since 3.0.0
Context http, server, location, if
Enterprise only This option is available in Passenger Enterprise only. Buy Passenger Enterprise here.

The hard limit on memory that an application process may use, in megabytes. Once an application process has surpassed this memory limit, Passenger will kill it within passenger_analytics_collection_rate seconds. A value of 0 means that there is no maximum: the application's memory usage will not result in Passenger killing the process (though the kernel OOM killer may still kill it).

This option is useful if your application is leaking memory. By shutting it down, all of its memory is guaranteed to be freed by the operating system.

A word about permissions

This option uses the ps command to query memory usage information. On Linux, it further queries /proc to obtain additional memory usage information that's not obtainable through ps. You should ensure that the ps command works correctly and that the /proc filesystem is accessible by the Passenger core process.

This option should be considered a workaround for misbehaving applications. It is advised that you fix the problem in your application rather than relying on this option as a measure to avoid memory leaks, as requests that were being served by killed processes will receive error responses, and respawning app processes takes time.

passenger_analytics_collection_rate

Syntax passenger_analytics_collection_rate seconds;
Default passenger_analytics_collection_rate 5;
Since 6.0.21
Context http

The time Passenger waits between checking the memory use of your application processes, in seconds. A longer duration will allow processes over the memory limit to live longer, but a lower duration will use more CPU time.

passenger_abort_websockets_on_process_shutdown

Syntax passenger_abort_websockets_on_process_shutdown on|off;
Default passenger_abort_websockets_on_process_shutdown on;
Since 5.0.22
Context http, server, location, if

Before shutting down or restarting an application process, Passenger performs two operations:

  1. It waits until existing requests routed to that process are finished. This way, existing requests will be finished gracefully.
  2. It aborts WebSocket connections. This is because WebSocket connections can stay open for an arbitrary amount of time and will block the shutdown/restart.

If you want Passenger to not abort WebSocket connections, then turn this option off. Passenger will then wait for WebSocket connections to terminate by themselves before proceeding with a process shutdown or restart. If you do this, you must modify your application code to ensure that WebSocket connections do not stay open for an arbitrary amount of time; otherwise, process shutdowns and restarts can block indefinitely.

Deprecated or removed options

The following options have been deprecated or removed. Some are still supported for backwards compatibility reasons.

rails_spawn_method

Deprecated in 3.0.0 in favor of passenger_spawn_method.

passenger_debug_log_file

This option has been renamed in version 5.0.5 to passenger_log_file.
