Configuration reference
This page lists all the configuration options available for Passenger Standalone, through either command line options or Passengerfile.json
options.
Standalone server options
--engine
/ "engine"
Command line syntax | passenger start --engine nginx|builtin |
---|---|
Config file syntax | "engine": "nginx"|"builtin" |
Environment variable syntax | PASSENGER_ENGINE=nginx|builtin |
Default | nginx |
Since | 5.0.0 |
Mass deployment context | Main |
Sets the Passenger Standalone engine to use. Learn more about Passenger Standalone engines.
--nginx-config-template
/ "nginx_config_template"
Command line syntax | passenger start --nginx-config-template PATH |
---|---|
Config file syntax | "nginx_config_template": string |
Environment variable syntax | PASSENGER_NGINX_CONFIG_TEMPLATE=string |
Default | The config template built into Passenger Standalone |
Since | 4.0.1 |
Engines | nginx |
Mass deployment context | Main |
Instructs Passenger Standalone's Nginx engine to use a specific Nginx config template, instead of the default. Learn more about this in the configuration introduction.
Please note that this option only works if Passenger Standalone is configured to use the Nginx engine.
The Nginx config file must follow a specific format
You can't just use any arbitrary Nginx config file! The Nginx config file you pass to Passenger Standalone must be based on the one provided by Passenger Standalone. Learn more about this in the configuration introduction.
Do not duplicate options set by Passenger Standalone
Nginx only allows setting most configuration options once. This means that if you insert any configuration options already set by Passenger Standalone, then Nginx will abort with an error. Therefore, you should prefer using the configuration options provided by Passenger Standalone over setting them yourself in the Nginx configuration template. For example, do not set passenger_log_level yourself; use the --log-level
/ "log_level"
configuration option instead.
Keep your configuration template up to date
The original configuration template file may change from time to time, e.g. because new features are introduced into Passenger. If your configuration template file does not contain the required changes, then these new features may not work properly. In the worst case, Passenger Standalone might even malfunction. Therefore, every time you upgrade Passenger, you should check whether the original configuration template file has changed, and merge back any changes into your own file.
--debug-nginx-config
/ "debug_nginx_config"
Command line syntax | passenger start --debug-nginx-config |
---|---|
Config file syntax | "debug_nginx_config": true |
Environment variable syntax | PASSENGER_DEBUG_NGINX_CONFIG=true |
Default | Disabled |
Since | 5.0.22 |
Engines | nginx |
Mass deployment context | Main |
If Passenger Standalone is using the Nginx engine, then this configuration option will cause Passenger Standalone to print the contents of the generated Nginx configuration file, after which it exits. This allows you to inspect the generated configuration file, e.g. to debug problems.
--address
/ "address"
Command line syntax | passenger start --address HOST |
---|---|
Config file syntax | "address": string |
Environment variable syntax | PASSENGER_ADDRESS=string |
Default | 0.0.0.0 |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Instructs Passenger to listen for requests on the given IP address. This means that Passenger will only be able to accept requests that are sent to that IP address.
The IP address may be an IPv4 address or an IPv6 address. If you want to listen on a Unix domain socket, use --socket
/ "socket_file".
The default is to bind to 0.0.0.0, which means that Passenger can accept requests from any IPv4 address. If you use Passenger in a reverse proxy setup then you should bind Passenger to 127.0.0.1, which means that only processes on the local host can access Passenger, not the public Internet.
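For example, a Passengerfile.json for a reverse proxy setup might bind Passenger to the loopback address; the port shown is just an illustration:
{
  "address": "127.0.0.1",
  "port": 3000
}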
--port
/ "port"
Command line syntax | passenger start --port NUMBER |
---|---|
Config file syntax | "port": integer |
Environment variable syntax | PASSENGER_PORT=integer |
Default | 3000 |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Instructs Passenger to listen for requests on the given TCP port number. Only has effect if you did not use --socket
/ "socket_file".
--socket
/ "socket_file"
Command line syntax | passenger start --socket PATH |
---|---|
Config file syntax | "socket_file": string |
Environment variable syntax | PASSENGER_SOCKET_FILE=path |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Instructs Passenger to listen for requests on a Unix domain socket, not on a TCP socket. Unix domain sockets are a special kind of socket provided by the operating system, that are only usable on the local computer, not over the Internet. In return for this restricted functionality, they are highly optimized and much faster than TCP sockets.
A Unix domain socket appears as a file on the filesystem.
There are almost no web browsers and HTTP clients that support Unix domain sockets. Unix domain sockets are mainly useful if you plan on using Passenger in a reverse proxy setup, where you configure a reverse proxy like Nginx, running on the local machine, to forward requests to Passenger over a Unix domain socket.
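As an illustration, a Passengerfile.json for such a reverse proxy setup could point Passenger at a Unix domain socket; the path here is only an example:
{
  "socket_file": "/tmp/myapp.socket"
}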
--ssl
/ "ssl"
Command line syntax | passenger start --ssl |
---|---|
Config file syntax | "ssl": true |
Environment variable syntax | PASSENGER_SSL=true |
Default | Passenger does not enable SSL |
Since | 5.0.0 |
Engines | nginx |
Mass deployment context | Main, per-app |
Instructs Passenger to accept (encrypted) HTTPS requests on its socket, instead of (unencrypted) HTTP requests.
If you want Passenger to be able to listen for HTTP and HTTPS at the same time (although on different port numbers), then please use --ssl-port
/ "ssl_port". It is not possible to make passenger listen for HTTP and HTTPS on the same port.
If this option is set, you must also set --ssl-certificate
/ "ssl_certificate" and --ssl-certificate-key
/ "ssl_certificate_key" to the SSL certificate and key files, respectively.
--ssl-certificate
/ "ssl_certificate"
Command line syntax | passenger start [...] --ssl-certificate PATH |
---|---|
Config file syntax | "ssl_certificate": string |
Environment variable syntax | PASSENGER_SSL_CERTIFICATE=string |
Since | 5.0.0 |
Engines | nginx |
Mass deployment context | Main, per-app |
Sets the SSL certificate to use.
This option only has effect if --ssl
/ "ssl" is set.
--ssl-certificate-key
/ "ssl_certificate_key"
Command line syntax | passenger start [...] --ssl-certificate-key PATH |
---|---|
Config file syntax | "ssl_certificate_key": string |
Environment variable syntax | PASSENGER_SSL_CERTIFICATE_KEY=path |
Since | 5.0.0 |
Engines | nginx |
Mass deployment context | Main, per-app |
Sets the SSL certificate key to use.
This option only has effect if --ssl
/ "ssl" is set.
--ssl-port
/ "ssl_port"
Command line syntax | passenger start [...] --ssl-port NUMBER |
---|---|
Config file syntax | "ssl_port": integer |
Environment variable syntax | PASSENGER_SSL_PORT=integer |
Since | 5.0.0 |
Engines | nginx |
Mass deployment context | Main, per-app |
Instructs Passenger to listen for HTTPS requests on the given port number, while letting the normal port number listen for regular unencrypted HTTP requests.
For example, if you run the following, then Passenger will listen for HTTP requests on port 3000, while also listening for HTTPS requests on port 3005:
passenger start --ssl --ssl-certificate ... --ssl-certificate-key ... --ssl-port 3005
This option only has effect if --ssl
/ "ssl" is set.
If you use mass deployment, then it is recommended that you explicitly set --ssl-port
/ "ssl_port" instead of leaving this option unspecified. This is because some of your apps may contain a Passengerfile.json that contains "ssl": true
, while others do not. In such a situation, some apps want to listen for HTTPS requests on the default port, while others want to listen for unencrypted HTTP requests, which is a contradiction and causes Passenger to abort.
--daemonize
/ "daemonize"
Command line syntax | passenger start --daemonize |
---|---|
Config file syntax | "daemonize": true |
Environment variable syntax | PASSENGER_DAEMONIZE=true |
Default | Passenger runs in the foreground |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Instructs Passenger to daemonize into the background.
--pid-file
/ "pid_file"
Command line syntax | passenger start --pid-file PATH |
---|---|
Config file syntax | "pid_file": string |
Environment variable syntax | PASSENGER_PID_FILE=path |
Default | See description |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Store the Passenger PID in the given PID file.
The default behavior is as follows:
- If there is a tmp/pids subdirectory, use the PID file tmp/pids/passenger.XXX.pid.
- Otherwise, use the PID file passenger.XXX.pid.
In both cases, XXX is the port number that Passenger listens on.
If --socket
/ "socket_file" is set, then the default PID filename does not contain the .XXX
part.
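For example, a Passengerfile.json that daemonizes Passenger and stores the PID in an explicit location (the path is only an example) could look like this:
{
  "daemonize": true,
  "pid_file": "/var/run/myapp/passenger.pid"
}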
Application loading
--environment
/ "environment"
Command line syntax | passenger start --environment NAME |
---|---|
Config file syntax | "environment": string |
Environment variable syntax | PASSENGER_ENVIRONMENT=string |
Default | development |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
This option sets the value of the following environment variables:
RAILS_ENV
RACK_ENV
WSGI_ENV
NODE_ENV
PASSENGER_APP_ENV
Some web frameworks, for example Rails and Connect.js, adjust their behavior according to the value in one of these environment variables.
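For example, to run the app in production mode, a Passengerfile.json could contain:
{
  "environment": "production"
}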
--envvar
/ "envvars"
Command line syntax | passenger start --envvar name1=value1 --envvar name2=value2 ... |
---|---|
Config file syntax | "envvars": { "name1": "value1", "name2": "value2", ... } |
Since | Command line syntax: 5.0.22. Config file syntax: 5.0.1 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Sets arbitrary environment variables for the application.
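For example, a Passengerfile.json could set a couple of environment variables like this; the variable names and values are hypothetical:
{
  "envvars": {
    "DATABASE_URL": "postgres://localhost/myapp_production",
    "SECRET_TOKEN": "(your value here)"
  }
}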
--ruby
/ "ruby"
Command line syntax | passenger start --ruby COMMAND_PATH |
---|---|
Config file syntax | "ruby": string |
Environment variable syntax | PASSENGER_RUBY=string |
Default | The Ruby interpreter that was used for starting Passenger Standalone |
Since | 5.0.7 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies the Ruby interpreter to use for serving Ruby web applications.
Notes about multiple Ruby interpreters
If your system has multiple Ruby interpreters, then it is important that you set this configuration option to the right value. If you do not set this configuration option correctly, and your app is run under the wrong Ruby interpreter, then all sorts of things may go wrong, such as:
- The app won't be able to find its installed gems.
- The app won't be able to run because of syntax and feature differences between Ruby versions.
Note that a different RVM gemset also counts as "a different Ruby interpreter".
How to set the correct value
If you are not sure what value to set --ruby / "ruby"
to, then you can find out the correct value as follows.
First, find out the location of the passenger-config
tool and take note of it:
$ which passenger-config
/opt/passenger/bin/passenger-config
Next, activate the Ruby interpreter (and if applicable, the gemset) you want to use. For example, if you are on RVM and you use Ruby 2.2.1, you may want to run this:
$ rvm use 2.2.1
Finally, invoke passenger-config
with its full path, passing --ruby-command
as parameter:
$ /opt/passenger/bin/passenger-config --ruby-command
passenger-config was invoked through the following Ruby interpreter:

  Command: /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  Version: ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin12.0]
  To use in Apache: PassengerRuby /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  To use in Nginx : passenger_ruby /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby
  To use with Standalone: /usr/local/rvm/wrappers/ruby-1.8.7-p358/ruby /opt/passenger/bin/passenger start

## Notes for RVM users
Do you want to know which command to use for a different Ruby interpreter?
'rvm use' that Ruby interpreter, then re-run 'passenger-config --ruby-command'.
The output tells you what value to set.
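Once you know the correct command, you can record it in your Passengerfile.json. The path below is only an example; use the value that passenger-config --ruby-command reported on your system:
{
  "ruby": "/usr/local/rvm/wrappers/ruby-2.2.1/ruby"
}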
--python
/ "python"
Command line syntax | passenger start --python COMMAND_PATH |
---|---|
Config file syntax | "python": string |
Environment variable syntax | PASSENGER_PYTHON=string |
Default | The first "python" command in the $PATH environment variable |
Since | 5.0.7 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies the Python interpreter to use for serving Python web applications.
--nodejs
/ "nodejs"
Command line syntax | passenger start --nodejs COMMAND_PATH |
---|---|
Config file syntax | "nodejs": string |
Environment variable syntax | PASSENGER_NODEJS=string |
Default | The first "node" command in the $PATH environment variable |
Since | 5.0.7 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies the Node.js command to use for serving Node.js web applications.
--meteor-app-settings
/ "meteor_app_settings"
Command line syntax | passenger start --meteor-app-settings PATH_TO_JSON_SETTINGS_FILE |
---|---|
Config file syntax | "meteor_app_settings": string |
Environment variable syntax | PASSENGER_METEOR_APP_SETTINGS=string |
Since | 5.0.7 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
When using a Meteor application in non-bundled mode, use this option to specify a JSON file with settings for the application. The meteor run
command will be run with the --settings
parameter set to this option.
Note that this option is not intended to be used for bundled/packaged Meteor applications. When running bundled/packaged Meteor applications on Passenger, you should set the METEOR_SETTINGS
environment variable.
--instance-registry-dir
/ "instance_registry_dir"
Command line syntax | passenger start --instance-registry-dir PATH |
---|---|
Config file syntax | "instance_registry_dir": string |
Environment variable syntax | PASSENGER_INSTANCE_REGISTRY_DIR=path |
Default | /tmp or /var/run/passenger-instreg |
Since | 5.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Specifies the directory that Passenger should use for registering its current instance.
When Passenger starts up, it creates a temporary directory inside the instance registry directory. This temporary directory is called the instance directory. It contains all sorts of files that are important to that specific running Passenger instance, such as Unix domain socket files so that all the different Passenger processes can communicate with each other. Command line tools such as passenger-status
use the files in this directory in order to query Passenger's status.
It is therefore important that, while Passenger is working, the instance directory is never removed or tampered with. However, the default path for the instance registry directory is the system's temporary directory, and some systems may run background jobs that periodically clean this directory. If this happens, and the files inside the instance directory are removed, then it will cause Passenger to malfunction: Passenger won't be able to communicate with its own processes, and you will see all kinds of connection errors in the log files. This malfunction can only be recovered from by restarting Passenger. You can prevent such cleaning background jobs from interfering by setting this option to a different directory.
This option is also useful if the partition that the temporary directory lives on doesn't have enough disk space.
The instance directory is automatically removed when Passenger shuts down.
Default value
The default value for this option is as follows:
- If you are on Red Hat, CentOS, Rocky, or Alma Linux and installed Passenger through the RPMs provided by Phusion, then the default value is /var/run/passenger-instreg.
- Otherwise, the default value is the value of the $TMPDIR environment variable. Or, if $TMPDIR is not set, /tmp.
Note regarding command line tools
Some Passenger command line administration tools, such as passenger-status
, must know what Passenger's instance registry directory is in order to function properly. You can pass the directory through the PASSENGER_INSTANCE_REGISTRY_DIR
or the TMPDIR
environment variable.
For example, if you set --instance-registry-dir / "instance_registry_dir" to /my_temp_dir, then invoke passenger-status after you've set the PASSENGER_INSTANCE_REGISTRY_DIR environment variable, like this:
export PASSENGER_INSTANCE_REGISTRY_DIR=/my_temp_dir
sudo -E passenger-status
Notes regarding the above example:
- The -E option tells 'sudo' to preserve environment variables.
- If Passenger is installed through an RVM Ruby, then you must use rvmsudo instead of sudo.
--rackup
Command line syntax | passenger start --rackup PATH |
---|---|
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Considers the application to be a Ruby app, and uses the given Rackup file instead of the default config.ru.
The corresponding Passengerfile.json looks as follows:
{
"app_type": "rack",
"startup_file": "(your value here)"
}
--app-start-command
/ "app_start_command"
Command line syntax | passenger start --app-start-command COMMAND |
---|---|
Config file syntax | "app_start_command": string |
Since | 6.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies how Passenger should start your app on a specific port.
Passenger has built-in support for starting Ruby, Python, Node.js, and Meteor apps; however, it can also start any application written in any language that can listen on a specified port. This functionality is termed Generic Language Support (GLS) and is discussed in greater detail here. The minimum required configuration to make use of GLS in Passenger is to specify how Passenger should start your app on a specific port. To achieve this, you specify the app-start-command
, which is the command you would use on the command line to start your app, with a placeholder $PORT
where Passenger should substitute its chosen port, for your app to receive and bind to. We go into greater detail on various ways to pass the port to your app if it doesn't take a command line argument to set the port here.
Consider the following config snippet:
{
"app_start_command": "/usr/local/bin/myapp --foreground --port $PORT"
}
Passenger will start your app by calling your command, with an actual port number in place of the $PORT
placeholder, for example /usr/local/bin/myapp --foreground --port 5000.
--app-type
/ "app_type"
Command line syntax | passenger start --app-type NAME |
---|---|
Config file syntax | "app_type": string |
Environment variable syntax | PASSENGER_APP_TYPE=string |
Default | Autodetected |
Since | 4.0.25 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
By default, Passenger autodetects the type of the application, e.g. whether it's a Ruby, Python, Node.js or Meteor app. If it's unable to autodetect the type of the application (e.g. because you've specified a custom --startup-file
/ "startup_file") then you can use this option to force Passenger to recognize the application as a specific type.
Allowed values are:
Value | Application type |
---|---|
rack | Ruby, Ruby on Rails |
wsgi | Python |
node | Node.js or Meteor JS in bundled/packaged mode |
meteor | Meteor JS in non-bundled/packaged mode |
Config file example
Use server.js as the startup file (entry point file) for your Node.js application, instead of the default app.js:
{
"app_type": "node",
"startup_file": "server.js"
}
--startup-file
/ "startup_file"
Command line syntax | passenger start --startup-file PATH |
---|---|
Config file syntax | "startup_file": string |
Environment variable syntax | PASSENGER_STARTUP_FILE=path |
Default | Autodetected |
Since | 4.0.25 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
This option specifies the startup file that Passenger should use when loading the application.
Every application has a startup file or entry point file: a file where the application begins execution. Some languages have widely accepted conventions about how such a file should be called (e.g. Ruby, with its config.ru
). Other languages have somewhat-accepted conventions (e.g. Node.js, with its app.js
). In these cases, Passenger follows these conventions, and executes applications through those files.
Other languages have no conventions at all, and so Passenger invents one (e.g. Python WSGI with passenger_wsgi.py
).
Passenger tries to autodetect according to the following language-specific conventions:
Language | Passenger convention |
---|---|
Ruby, Ruby on Rails | config.ru |
Python | passenger_wsgi.py |
Node.js | app.js |
Meteor JS in non-bundled/packaged mode | .meteor |
For other cases you will need to specify the startup file manually. For example, on Node.js, you might need to use bin/www as the startup file instead if you are using the Express app generator.
If you set this option, you must also set --app-type
/ "app_type", otherwise Passenger doesn't know what kind of application it is.
Config file example
{
"app_type": "node",
"startup_file": "server.js"
}
--spawn-method
/ "spawn_method"
Command line syntax | passenger start --spawn-method NAME |
---|---|
Config file syntax | "spawn_method": string |
Environment variable syntax | PASSENGER_SPAWN_METHOD=string |
Default | For Ruby apps: smart. For other apps: direct |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
This option controls whether Passenger spawns applications directly, or using a prefork copy-on-write mechanism. The spawn methods guide explains this in detail.
--restart-dir
/ "restart_dir"
Command line syntax | passenger start --restart-dir PATH |
---|---|
Config file syntax | "restart_dir": string |
Environment variable syntax | PASSENGER_RESTART_DIR=path |
Default | app_dir/tmp |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
As described in Restarting applications, Passenger checks the file tmp/restart.txt
in the application directory to determine whether it should restart the application. Sometimes it may be desirable for Passenger to look in a different directory instead. This option allows you to customize the directory in which restart.txt
is searched for.
Example 1: default behavior
Passenger will check for /apps/foo/tmp/restart.txt:
cd /apps/foo
passenger start
Example 2: absolute path
An absolute filename is given. Passenger will check for /restart_files/bar/restart.txt:
cd /apps/bar
passenger start --restart-dir /restart_files/bar
Example 3: relative path
A relative filename is given. Passenger will check for /apps/baz/restart_files/restart.txt.
cd /apps/baz
passenger start --restart-dir restart_files
--load-shell-envvars
/ "load_shell_envvars"
Command line syntax | passenger start --load-shell-envvars |
---|---|
Config file syntax | "load_shell_envvars": true |
Environment variable syntax | PASSENGER_LOAD_SHELL_ENVVARS=true |
Default | Disabled |
Since | 4.0.42 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enables or disables the loading of shell environment variables before spawning the application.
If this option is turned on, and the user's shell is bash
, then applications are loaded by running them with bash -l -c
. If this option is turned off, applications are loaded by running them directly from the Passenger core
process.
--preload-bundler
/ "preload_bundler"
Command line syntax | passenger start --preload-bundler |
---|---|
Config file syntax | "preload_bundler": true |
Environment variable syntax | PASSENGER_PRELOAD_BUNDLER=true |
Default | Disabled |
Since | 6.0.13 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enables or disables loading bundler before loading your Ruby app.
If this option is turned on, Ruby will be instructed to load the bundler gem before loading your application. This can help with gem version conflicts due to order-of require issues.
--defer-port-binding
/ "defer_port_binding"
Command line syntax | passenger start --defer-port-binding |
---|---|
Config file syntax | "defer_port_binding": true |
Environment variable syntax | PASSENGER_DEFER_PORT_BINDING=true |
Default | Disabled |
Since | 5.1.11 |
Engines | nginx |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
Enables or disables support for delaying binding the TCP port until after the application(s) have started up. This is useful in environments where binding the TCP port is considered a signal that the app server is ready to handle requests, such as Heroku. This setup uses an Nginx server in a reverse proxy configuration to bind the TCP port and communicate with the main web server via a Unix socket.
--start-timeout
/ "start_timeout"
Command line syntax | passenger start --start-timeout SECONDS |
---|---|
Config file syntax | "start_timeout": integer |
Environment variable syntax | PASSENGER_START_TIMEOUT=integer |
Default | 90 |
Since | 5.1.11 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies a timeout for the startup of application processes. If an application process fails to start within the timeout period then it will be forcefully killed with SIGKILL, and the error will be logged.
--rolling-restarts
/ "rolling_restarts"
Command line syntax | passenger start --rolling-restarts |
---|---|
Config file syntax | "rolling_restarts": true |
Environment variable syntax | PASSENGER_ROLLING_RESTARTS=true |
Default | Disabled |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
Enables or disables support for zero-downtime application restarts through restart.txt
.
Please note that this option is completely unrelated to the passenger-config restart-app
command. That command always initiates a blocking restart, unless --rolling-restart
is given.
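For example, an Enterprise Passengerfile.json that enables rolling restarts could look like this:
{
  "rolling_restarts": true
}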
--resist-deployment-errors
/ "resist_deployment_errors"
Command line syntax | passenger start --resist-deployment-errors |
---|---|
Config file syntax | "resist_deployment_errors": true |
Environment variable syntax | PASSENGER_RESIST_DEPLOYMENT_ERRORS=true |
Default | Disabled |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
Enables or disables resistance against deployment errors.
Suppose that you have upgraded your application and you have issued a command to restart it, but the application update contains an error (e.g. a syntax error or a database configuration error) that prevents Passenger from successfully spawning a process. Passenger would normally display an error message to the visitor in response to this.
By enabling deployment error resistance, Passenger Enterprise would "freeze" the application's process list. Existing application processes (belonging to the previous version) will be kept around to serve requests. The error is logged, but visitors do not see any error messages. Passenger keeps the old processes around until an administrator has taken action. This way, visitors will suffer minimally from deployment errors.
Learn more about this feature in Deployment Error Resistance guide.
Note that enabling deployment error resistance only works if you perform a rolling restart instead of a blocking restart.
Advanced configuration through Nginx config template
If you customize the Passenger Standalone Nginx config template then you can further customize the behavior of spawning failures.
- You may use passenger_spawn_exception_status_code to customize the response status, if you do not want to use deployment error resistance and also don't want the status to be 500. (For example if you don't want Passenger to intercept 500 errors generated by your application).
- You may use passenger_intercept_errors and error_page to set a custom error page whenever a specific error code is returned.
-
In the following example, Nginx will serve /spawnerror.html whenever there is a problem spawning the application (but not for application generated 500 statuses), and set the status back to 500:
passenger_intercept_errors on;
passenger_spawn_exception_status_code 418;
error_page 418 =500 /spawnerror.html;
Performance tuning
--core-file-descriptor-ulimit
/ "core_file_descriptor_ulimit"
Command line syntax | passenger start --core-file-descriptor-ulimit NUMBER |
---|---|
Config file syntax | "core_file_descriptor_ulimit": integer |
Environment variable syntax | PASSENGER_CORE_FILE_DESCRIPTOR_ULIMIT=integer |
Default | See description |
Since | 5.0.26 |
Engines | nginx, builtin |
Mass deployment context | Main |
Sets the file descriptor operating system ulimit for the Passenger core process. If you see "too many file descriptor" errors on a regular basis, then increasing this limit will help.
The default value is inherited from the process that started Passenger Standalone. If you started Passenger Standalone from the shell, then the file descriptor ulimit is inherited from the shell process (which you can inspect with ulimit -a
). If you started Passenger Standalone from an OS startup script such as /etc/rc.local, then the file descriptor ulimit is inherited from the process that invoked the script.
On most operating systems, the default ulimit can also be configured with a config file such as /etc/security/limits.conf, but since ulimits are inherited on a process basis instead of set globally, using that file to change ulimits is usually an error-prone process. This Passenger configuration option provides an easier and more reliable way to set the file descriptor ulimit.
Note that application ulimits may also be affected by this setting because ulimits are inherited on a process basis (i.e. from Passenger). There are two exceptions to this:
- If you are using --load-shell-envvars / "load_shell_envvars", then the application processes are started through the shell, and the shell startup files may override the ulimits set by Passenger.
- You can also set the file descriptor ulimit on a per-application basis (instead of setting it globally for the Passenger core process) using --app-file-descriptor-ulimit / "app_file_descriptor_ulimit".
--app-file-descriptor-ulimit
/ "app_file_descriptor_ulimit"
Command line syntax | passenger start --app-file-descriptor-ulimit NUMBER |
---|---|
Config file syntax | "app_file_descriptor_ulimit": integer |
Environment variable syntax | PASSENGER_APP_FILE_DESCRIPTOR_ULIMIT=integer |
Default | See description |
Since | 5.0.26 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Sets the file descriptor operating system ulimit for application processes managed by Passenger. If you see "too many file descriptor" errors on a regular basis, and these errors originate from the application processes (as opposed to the Passenger core process), then increasing this limit will help.
If the "too many file descriptor" errors originate from the Passenger core process, then setting this option will not help. Use --core-file-descriptor-ulimit
/ "core_file_descriptor_ulimit" for that.
The default file descriptor ulimit is inherited from the Passenger core process. See --core-file-descriptor-ulimit
/ "core_file_descriptor_ulimit" to learn how the default file descriptor ulimit for Passenger core process is set.
--socket-backlog
/ "socket_backlog"
Command line syntax | passenger start --socket-backlog NUMBER |
---|---|
Config file syntax | "socket_backlog": integer |
Environment variable syntax | PASSENGER_SOCKET_BACKLOG=integer |
Default | 1024 (< 5.0.25); 2048 (≥ 5.0.26) |
Since | 5.0.24 |
Engines | nginx, builtin |
Mass deployment context | Main |
The socket backlog is a queue of incoming connections not yet acknowledged by Passenger. The default value is chosen to match the default for Nginx' worker_connections
. If you use the Nginx engine and increase the latter, it is likely that you'll also need to increase the passenger_socket_backlog
. If connections are coming in too fast and overflow the backlog, you'll see the error (Nginx engine):
connect() to unix:/tmp/passenger… failed (11: Resource temporarily unavailable) while connecting to upstream
--max-pool-size
/ "max_pool_size"
Command line syntax | passenger start --max-pool-size NUMBER |
---|---|
Config file syntax | "max_pool_size": integer |
Environment variable syntax | PASSENGER_MAX_POOL_SIZE=integer |
Default | 6 |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
The maximum number of application processes that may simultaneously exist. Generally speaking, the more application processes you run, the more concurrent traffic you can handle and the better your CPU core utilization becomes, until your hardware is saturated. But running more processes also means higher memory consumption.
The optimal value depends on your system's hardware and your workload. Please read the optimization guide to learn how to find out the optimal value.
This option behaves like a "safety switch" that prevents Passenger from overloading your system with too many processes. No matter how you configure min_instances, the total number of processes won't ever surpass the value set for this option. For example, if max_pool_size is set to 6 and min_instances to 8, then the maximum number of processes that may simultaneously exist is 6, not 8.
If you find that your server is running out of memory then you should lower this value. In order to prevent your server from crashing due to out-of-memory conditions, the default value is relatively low (6).
--min-instances
/ "min_instances"
Command line syntax | passenger start --min-instances NUMBER |
---|---|
Config file syntax | "min_instances": integer |
Environment variable syntax | PASSENGER_MIN_INSTANCES=integer |
Default | 1 |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
This specifies the minimum number of application processes that should exist for a given application. You should set this option to a non-zero value if you want to avoid potentially long startup times after a website has been idle for an extended period of time.
Example
Suppose that you have the following configuration:
{
"max_pool_size": 15,
"pool_idle_time": 10,
"min_instances": 3
}
When you start Passenger, it spawns 3 application processes. Suppose that there's a sudden spike of traffic, and 100 users visit 'foobar.com' simultaneously. Passenger will start 12 more application processes (15 - 3 = 12
). After the idle timeout of 10 seconds has passed, Passenger will clean up 12 application processes, keeping 3 processes around.
--pool-idle-time
/ "pool_idle_time"
Command line syntax | passenger start --pool-idle-time SECONDS |
---|---|
Config file syntax | "pool_idle_time": integer |
Environment variable syntax | PASSENGER_POOL_IDLE_TIME=integer |
Default | 300 (5 minutes) |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
The maximum number of seconds that an application process may be idle. That is, if an application process hasn't received any traffic after the given number of seconds, then it will be shut down in order to conserve memory.
Decreasing this value means that application processes will have to be spawned more often. Since spawning is a relatively slow operation, some visitors may notice a small delay when they visit your web app. However, it will also free up resources used by the processes more quickly.
The optimal value depends on the average time that a visitor spends on a single dynamic page. We recommend a value of 2 * x
, where x
is the average number of seconds that a visitor spends on a single dynamic page. But your mileage may vary.
When this value is set to 0
, application processes will never be shut down (unless they crash or are manually killed, of course).
Setting the value to 0 is recommended if you favor predictable performance over resource savings.
--max-preloader-idle-time
/ "max_preloader_idle_time"
Command line syntax | passenger start --max-preloader-idle-time SECONDS |
---|---|
Config file syntax | "max_preloader_idle_time": integer |
Environment variable syntax | PASSENGER_MAX_PRELOADER_IDLE_TIME=integer |
Default | 300 (5 minutes) |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
The preloader process (explained in Spawn methods) has an idle timeout, just like the application processes spawned by Passenger do. That is, Passenger will automatically shut down a preloader process if it hasn't done anything for a given period.
This option allows you to set the preloader's idle timeout, in seconds. A value of 0
means that it should never idle timeout.
Setting a higher value will mean that the preloader is kept around longer, which may slightly increase memory usage. But as long as the preloader server is running, the time to spawn a Ruby application process only takes about 10% of the time that is normally needed, assuming that you're using the smart
spawn method. So if your system has enough memory, then it is recommended that you set this option to a high value or to 0.
--force-max-concurrent-requests-per-process
/ "force_max_concurrent_requests_per_process"
Command line syntax | passenger start --force-max-concurrent-requests-per-process NUMBER |
---|---|
Config file syntax | "force_max_concurrent_requests_per_process": integer |
Environment variable syntax | PASSENGER_FORCE_MAX_CONCURRENT_REQUESTS_PER_PROCESS=integer |
Default | -1 |
Since | 5.0.22 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Use this option to tell Passenger how many concurrent requests the application can handle per process. A value of 0 means that each process can handle an unlimited number of connections, while a value of -1 (the default) means that Passenger will infer the value based on internal heuristics.
There are three main use cases for this option:
- To make dynamic process scaling work in Node.js and Meteor applications. Set this option to approximately the number of concurrent requests at which the performance of a single process begins to degrade.
- To make SSE and WebSockets work well in Ruby applications. Set this option to 0.
- To specify the available concurrency of an app using the GLS capabilities of Passenger.
This option is a hint to Passenger and does not make the application actually able to handle that many concurrent requests per process. For example in Ruby applications, the amount of concurrency that your application process can handle usually depends on the number of configured threads. If you set the number of threads, then Passenger will automatically infer that Ruby applications' max concurrency per process equals the number of threads. But in non-standard cases where this heuristic fails (e.g. in situations where a WebSocket library such as Faye spawns threads to handle WebSockets) then you can use this option to override Passenger's heuristic.
It is recommended that you do not touch this configuration option unless you want to tweak Passenger for one of the three main use cases documented above.
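For example, a Ruby app that serves SSE or WebSockets (use case 2 above) could use a Passengerfile.json like this:
{
  "force_max_concurrent_requests_per_process": 0
}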
--concurrency-model
/ "concurrency_model"
Command line syntax | passenger start --concurrency-model <process|thread> |
---|---|
Config file syntax | "concurrency_model": "process"|"thread" |
Environment variable syntax | PASSENGER_CONCURRENCY_MODEL=process|thread |
Default | process |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
Specifies the I/O concurrency model that should be used for Ruby application processes. Passenger supports two concurrency models:
- process – single-threaded, multi-processed I/O concurrency. Each application process only has a single thread and can only handle 1 request at a time. This is the concurrency model that Ruby applications traditionally used. It has excellent compatibility (can work with applications that are not designed to be thread-safe) but is unsuitable for workloads in which the application has to wait for a lot of external I/O (e.g. HTTP API calls), and uses more memory because each process has a large memory overhead.
- thread – multi-threaded, multi-processed I/O concurrency. Each application process has multiple threads (customizable via --thread-count / "thread_count"). This model provides much better I/O concurrency and uses less memory because threads share memory with each other within the same process. However, using this model may cause compatibility problems if the application is not designed to be thread-safe.
- This option only has effect on Ruby applications.
- Multithreading is not supported for Python.
- Multithreading is not applicable to Node.js and Meteor because they are evented and do not need (and cannot use) multithreading.
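For example, an Enterprise Passengerfile.json that switches a Ruby app to the multi-threaded model could look like the following; the thread count shown is only an illustration:
{
  "concurrency_model": "thread",
  "thread_count": 16
}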
--thread-count
/ "thread_count"
Command line syntax | passenger start [...] --thread-count INTEGER |
---|---|
Config file syntax | "thread_count": integer |
Environment variable syntax | PASSENGER_THREAD_COUNT=integer |
Default | 1 |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Specifies the number of threads that Passenger should spawn per Ruby application process. This option only has effect if --concurrency-model
/ "concurrency-model" is thread
.
- This option only has effect on Ruby applications.
- Multithreading is not supported for Python.
- Multithreading is not applicable to Node.js and Meteor because they are evented and do not need (and cannot use) multithreading.
--max-request-queue-size
/ "max_request_queue_size"
Command line syntax | passenger start --max-request-queue-size NUMBER |
---|---|
Config file syntax | "max_request_queue_size": integer |
Environment variable syntax | PASSENGER_MAX_REQUEST_QUEUE_SIZE=integer |
Default | 100 |
Since | 5.0.22 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
When all application processes are already handling their maximum number of concurrent requests, Passenger will queue all incoming requests. This option specifies the maximum size for that queue. If the queue is already at this specified limit, then Passenger will immediately send a "503 Service Unavailable" error to any incoming requests.
A value of 0 means that the queue is unbounded.
This article on StackOverflow explains how the request queue works, what it means for the queue to grow or become full, why that is bad, and what you can do about it.
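For example, a Passengerfile.json that allows a larger queue (the number is just an illustration) would look like this:
{
  "max_request_queue_size": 250
}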
Advanced configuration through Nginx config template
If you customize the Passenger Standalone Nginx config template then you can further customize the behavior of request queue overflows.
- You may use passenger_request_queue_overflow_status_code to customize the response status, if you do not want the status to be 503.
-
You may use passenger_intercept_errors and error_page to set a custom error page whenever the queue is overflown. In the following example, Nginx will serve /error503.html whenever the queue is full:
passenger_intercept_errors on;
error_page 503 /error503.html;
--max-request-queue-time
/ "max_request_queue_time"
Command line syntax | passenger start --max-request-queue-time NUMBER |
---|---|
Config file syntax | "max_request_queue_time": integer |
Environment variable syntax | PASSENGER_MAX_REQUEST_QUEUE_TIME=integer |
Default | 0 |
Since | 5.1.12 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
When all application processes are already handling their maximum number of concurrent requests, Passenger will queue all incoming requests. This option specifies the maximum time a request may spend in that queue. If a request in the queue reaches this specified limit, then Passenger will send a "504 Gateway Timeout" error for that request. For performance reasons it might take up to 0.5 × passenger_max_request_queue_time
after a request timed out before a 504 response is sent (when all application processes are stuck).
A value of 0 means that the queue time is unbounded.
This article on StackOverflow explains how the request queue works, what it means for the queue to grow or become full, why that is bad, and what you can do about it.
Advanced configuration through Nginx config template
If you customize the Passenger Standalone Nginx config template then you can further customize the behavior of request queue overflows.
- You may use passenger_request_queue_timeout_status_code to customize the response status, if you do not want the status to be 504.
-
You may use passenger_intercept_errors and error_page to set a custom error page whenever the queue is overflown. In the following example, Nginx will serve /error504.html whenever the queue is full:
passenger_intercept_errors on;
error_page 504 /error504.html;
--disable-turbocaching
/ "turbocaching"
Command line syntax | passenger start --disable-turbocaching |
---|---|
Config file syntax | "turbocaching": false |
Environment variable syntax | PASSENGER_TURBOCACHING=false |
Default | Enabled |
Since | 5.0.14 |
Engines | nginx, builtin |
Mass deployment context | Main |
Disables turbocaching.
--vary-turbocache-by-cookie
/ "vary_turbocache_by_cookie"
Command line syntax | passenger start --vary-turbocache-by-cookie name |
---|---|
Config file syntax | "vary_turbocache_by_cookie": string |
Environment variable syntax | PASSENGER_VARY_TURBOCACHE_BY_COOKIE=string |
Default | Not set |
Since | 6.0.5 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
If set, Passenger will treat requests as separate entries in the turbocache whenever the value of the cookie with the provided name differs.
--unlimited-concurrency-path
/ "unlimited_concurrency_paths"
Command line syntax | passenger start --unlimited-concurrency-path URI-PATH1 [--unlimited-concurrency-path URI-PATH2 ...] |
---|---|
Config file syntax | "unlimited_concurrency_paths": ["uri-path1", "uri-path2", ...] |
Since | 5.0.25 |
Engines | nginx |
Mass deployment context | Main, per-app |
Use this option to tell Passenger that the application supports unlimited concurrency at the specified URI paths. The main use cases for this option are:
- To make SSE and WebSockets work well in Ruby applications.
- To integrate Ruby on Rails Action Cable with Passenger Standalone.
This option is functionally equivalent to setting --force-max-concurrent-requests-per-process / "force_max_concurrent_requests_per_process" to 0, but only for the specified sub-URIs.
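For example, a Passengerfile.json for a Rails app that serves Action Cable on /cable (a typical, but app-specific, mount point) might look like this:
{
  "unlimited_concurrency_paths": ["/cable"]
}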
Security
--user
/ "user"
Command line syntax | passenger start --user USERNAME |
---|---|
Config file syntax | "user": string |
Environment variable syntax | PASSENGER_USER=string |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Instructs Passenger to drop its privileges to that of the given user as soon as Passenger has set up the socket. This only works if Passenger was started with root privileges.
If this option is not given, then Passenger runs as the user that invoked it.
--data-buffer-dir
/ "data_buffer_dir"
Command line syntax | passenger start --data-buffer-dir PATH |
---|---|
Config file syntax | "data_buffer_dir": string |
Environment variable syntax | PASSENGER_DATA_BUFFER_DIR=path |
Default | See description |
Since | 5.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
By default, Passenger buffers large web application responses. This prevents slow HTTP clients from blocking web applications by reading responses very slowly. This feature is also known as "real-time disk-backed response buffering".
By default, such buffers are stored in the directory given by the $TMPDIR
environment variable, or (if $TMPDIR
is not set) the /tmp
directory. This configuration option allows you to specify a different directory.
Changing this option is especially useful if the partition that the default directory lives on doesn't have enough disk space.
If you've specified such a directory (as opposed to using Passenger's default) then you must ensure that this directory exists.
--disable-security-update-check
/ "disable_security_update_check"
Command line syntax | passenger start --disable-security-update-check |
---|---|
Config file syntax | "disable_security_update_check": true |
Environment variable syntax | PASSENGER_DISABLE_SECURITY_UPDATE_CHECK=true |
Default | false |
Since | 5.1.0 |
This option allows disabling the Passenger security update check, a daily check with https://securitycheck.phusionpassenger.com for important security updates that might be available.
--security-update-check-proxy
/ "security_update_check_proxy"
Command line syntax | passenger start --security-update-check-proxy scheme://user:password@proxy_host:proxy_port |
---|---|
Config file syntax | "security_update_check_proxy": string |
Environment variable syntax | PASSENGER_SECURITY_UPDATE_CHECK_PROXY=string |
Since | 5.1.0 |
This option allows use of an intermediate proxy for the Passenger security update check.
The proxy client code uses libcurl, which supports the following values for scheme:
http, socks5, socks5h, socks4, socks4a
--disable-anonymous-telemetry
/ "disable_anonymous_telemetry"
Command line syntax | passenger start --disable-anonymous-telemetry |
---|---|
Config file syntax | "disable_anonymous_telemetry": true |
Environment variable syntax | PASSENGER_DISABLE_ANONYMOUS_TELEMETRY=true |
Default | false |
Since | 6.0.0 |
This option allows disabling the Passenger anonymous telemetry reporting, which regularly sends anonymous telemetry data to https://anontelemetry.phusionpassenger.com.
--anonymous-telemetry-proxy
/ "anonymous_telemetry_proxy"
Command line syntax | passenger start --anonymous-telemetry-proxy scheme://user:password@proxy_host:proxy_port |
---|---|
Config file syntax | "anonymous_telemetry_proxy": string |
Environment variable syntax | PASSENGER_ANONYMOUS_TELEMETRY_PROXY=string |
Since | 6.0.0 |
This option allows use of an intermediate proxy for the Passenger anonymous telemetry reporting.
The proxy client code uses libcurl, which supports the following values for scheme:
http, socks5, socks5h, socks4, socks4a
--direct-instance-request-address
/ "direct_instance_request_address"
Command line syntax | passenger start --direct-instance-request-address IP |
---|---|
Config file syntax | "direct_instance_request_address":"ip" |
Environment variable syntax | passenger_direct_instance_request_address=ip |
Since | 6.0.7 |
The address which Passenger will cause your Ruby app to additionally bind to, in order to allow sending requests directly to specific app instances. Sending requests to specific app processes is detailed here.
Request / response customization
--static-files-dir
/ "static_files_dir"
Command line syntax | passenger start --static-files-dir PATH |
---|---|
Config file syntax | "static_files_dir": string |
Environment variable syntax | PASSENGER_STATIC_FILES_DIR=path |
Default | app_dir/public |
Since | 4.0.25 |
Engines | nginx |
Mass deployment context | Main, per-app |
By default, Passenger automatically serves static files in the application's public
subdirectory. Your application is offloaded from having to serve static files. In case your static files are not located in public
but somewhere else, then use this option to specify the location.
--sticky-sessions
/ "sticky_sessions"
Command line syntax | passenger start --sticky-sessions |
---|---|
Config file syntax | "sticky_sessions": true |
Environment variable syntax | PASSENGER_STICKY_SESSIONS=true |
Default | Disabled |
Since | 5.0.1 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
When sticky sessions are enabled, all requests that a client sends will be routed to the same originating application process, whenever possible. When sticky sessions are disabled, requests may be distributed over multiple processes, and may not necessarily be routed to the originating process, in order to balance traffic over multiple CPU cores. Because of this, sticky sessions should only be enabled in specific circumstances.
For applications that store important state inside the process's own memory – that is, as opposed to storing state in a distributed data store, such as the database or Redis – sticky sessions should be enabled. This is because otherwise, some requests could be routed to a different process, which stores different state data. Because processes don't share memory with each other, there's no way for one process to know about the state in another process, and then things can go wrong.
One prominent example is the popular SockJS library, which is capable of emulating WebSockets through long polling. This is implemented through two HTTP endpoints, /SESSION_ID/xhr_stream
(a long polling end point which sends data from the server to the client), and /SESSION_ID/xhr_send
(a normal POST endpoint which is used for sending data from the client to the server). SockJS correlates the two requests with each other through a session identifier. At the same time, in its default configuration, it stores all known session identifiers in an in-memory data structure. It is therefore important that a particular /SESSION_ID/xhr_send
request is sent to the same process where the corresponding /SESSION_ID/xhr_stream
request originates from; otherwise, SockJS cannot correlate the two requests, and an error occurs.
So prominent examples where sticky sessions should (or even must) be enabled, include:
- Applications that use the SockJS library (unless configured with a distributed data store)
- Applications that use the Socket.io library (unless configured with a distributed data store)
- Applications that use the faye-websocket gem (unless configured with a distributed data store)
- Meteor JS applications (because Meteor uses SockJS)
Sticky sessions work through the use of a special cookie, whose name can be customized with --sticky-sessions-cookie-name
/ "sticky_sessions_cookie_name". Passenger puts an identifier in this cookie, which tells Passenger what the originating process is. Next time the client sends a request, Passenger reads this cookie and uses the value in the cookie to route the request back to the originating process. If the originating process no longer exists (e.g. because it has crashed or restarted) then Passenger will route the request to some other process, and reset the cookie.
If you have a load balancer in front of Passenger + Nginx, then you must configure sticky sessions on that load balancer too. Otherwise, the load balancer could route the request to a different server.
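For example, a Passengerfile.json that enables sticky sessions with a custom cookie name (the name is just an example) could look like this:
{
  "sticky_sessions": true,
  "sticky_sessions_cookie_name": "_myapp_route"
}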
--sticky-sessions-cookie-name
/ "sticky_sessions_cookie_name"
Command line syntax | passenger start --sticky-sessions-cookie-name NAME |
---|---|
Config file syntax | "sticky_sessions_cookie_name": string |
Environment variable syntax | PASSENGER_STICKY_SESSIONS_COOKIE_NAME=string |
Default | _passenger_route |
Since | 5.0.1 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Sets the name of the sticky sessions cookie.
--sticky-sessions-cookie-attributes
/ "sticky_sessions_cookie_attributes"
Command line syntax | passenger start --sticky-sessions-cookie-attributes string |
---|---|
Config file syntax | "sticky_sessions_cookie_attributes": string |
Environment variable syntax | PASSENGER_STICKY_SESSIONS_COOKIE_ATTRIBUTES=string |
Default | SameSite=Lax; Secure; |
Since | 6.0.5 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Sets the attributes of the sticky sessions cookie.
Logging & troubleshooting
--log-file
/ "log_file"
Command line syntax | passenger start --log-file PATH |
---|---|
Config file syntax | "log_file": string |
Environment variable syntax | PASSENGER_LOG_FILE=path |
Default | See description |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Log to the specified path, which is generally expected to be a file. Since version 5.0.29, /dev/stdout and /dev/stderr are also supported.
The default behavior is as follows:
- If there is a log subdirectory, log to log/passenger.XXX.log.
- Otherwise, log to passenger.XXX.log.
In both cases, XXX is the port number that Passenger listens on.
If --socket
/ "socket_file" is set, then the default log filename does not contain the .XXX
part.
--log-level
/ "log_level"
Command line syntax | passenger start --log-level NUMBER |
---|---|
Config file syntax | "log_level": integer |
Environment variable syntax | PASSENGER_LOG_LEVEL=integer |
Default | 3 |
Since | 5.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
This option allows one to specify how much information Passenger should write to the log file. A higher log level value means that more information will be logged.
Possible values are:
- 0 (crit): Show only critical errors which would cause Passenger to abort.
- 1 (error): Also show non-critical errors – errors that do not cause Passenger to abort.
- 2 (warn): Also show warnings. These are not errors, and Passenger continues to operate correctly, but they might be an indication that something is wrong with the system.
- 3 (notice): Also show important informational messages. These give you a high-level overview of what Passenger is doing.
- 4 (info): Also show less important informational messages. These messages show more details about what Passenger is doing. They're high-level enough to be readable by users.
- 5 (debug): Also show the most important debugging information. Reading this information requires some system or programming knowledge, but the information shown is typically high-level enough to be understood by experienced system administrators.
- 6 (debug2): Show more debugging information. This is typically only useful for developers.
- 7 (debug3): Show even more debugging information.
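For example, when troubleshooting a problem you could temporarily raise the log level in Passengerfile.json; the value 5 below is just an illustrative choice:

```json
{
  // Temporarily log debugging information (the default level is 3, "notice").
  "log_level": 5
}
```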
--disable-log-prefix
/ "disable_log_prefix"
Command line syntax | passenger start --disable-log-prefix |
---|---|
Config file syntax | "disable_log_prefix": boolean |
Environment variable syntax | PASSENGER_DISABLE_LOG_PREFIX=boolean |
Default | false |
Since | 6.0.2 |
Engines | nginx, builtin |
Mass deployment context | Main |
This option allows one to stop Passenger from prefixing logs that come from your app with "App PID stdout|stderr" when they are written to Passenger's log. This can be useful to simplify log-aggregating setups.
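For example, in a containerized setup where a log collector reads the container's standard output, you might combine this option with logging to /dev/stdout. This is only a sketch, not a requirement:

```json
{
  // Send Passenger's log output to standard output
  // (supported since version 5.0.29).
  "log_file": "/dev/stdout",
  // Do not prefix application log lines, so the log collector
  // receives them unmodified.
  "disable_log_prefix": true
}
```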
--friendly-error-pages
, --no-friendly-error-pages
/ "friendly_error_pages"
Command line syntax | passenger start --friendly-error-pages, passenger start --no-friendly-error-pages |
---|---|
Config file syntax | "friendly_error_pages": boolean |
Environment variable syntax | PASSENGER_FRIENDLY_ERROR_PAGES=boolean |
Default | When --environment / "environment" is development: enabled. Otherwise: disabled |
Since | 5.0.28 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Passenger can display friendly error pages whenever an application fails to start. This friendly error page presents the startup error message, some suggestions for solving the problem, a backtrace and a dump of the environment variables.
This feature is very useful during application development and for less experienced system administrators, but the page might reveal potentially sensitive information, depending on the application. For this reason, friendly error pages are disabled by default, unless --environment / "environment" is set to development.
You can use this option to explicitly enable or disable this feature. --friendly-error-pages
always enables friendly error pages, and --no-friendly-error-pages
always disables friendly error pages. Similarly, the "friendly_error_pages" config option explicitly enables or disables friendly error pages depending on its boolean value.
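For example, to show friendly error pages in a staging environment where they would otherwise be disabled, a Passengerfile.json might contain the following sketch:

```json
{
  "environment": "staging",
  // Explicitly enable friendly error pages, overriding the
  // non-development default of disabled.
  "friendly_error_pages": true
}
```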
--custom-error-page
/ "custom_error_page"
Command line syntax | passenger start --custom-error-page PATH |
---|---|
Config file syntax | "custom_error_page": string |
Environment variable syntax | PASSENGER_CUSTOM_ERROR_PAGE=string |
Default | Disabled |
Since | 6.0.23 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Replaces the default Passenger error page when there is an issue spawning an app.
By default, Passenger either renders a friendly error page or a minimal error page depending on the friendly error pages and app environment config options. This option overrides the error page with your own. The path should point to a static file which Passenger has permission to read.
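For example, to serve your own static error page when an application fails to spawn, a Passengerfile.json could point at a readable HTML file; the path below is just an illustrative value:

```json
{
  // Serve this static HTML file instead of Passenger's built-in
  // error page when the app fails to start.
  "custom_error_page": "/var/www/my_app/spawn_error.html"
}
```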
--debugger
/ "debugger"
Command line syntax | passenger start --debugger |
---|---|
Config file syntax | "debugger": true |
Environment variable syntax | PASSENGER_DEBUGGER=true |
Default | Disabled |
Since | 3.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
Turns support for Ruby application debugging on or off. Please read the Ruby debugging console guide for more information.
--max-requests
/ "max_requests"
Command line syntax | passenger start --max-requests NUMBER |
---|---|
Config file syntax | "max_requests": integer |
Environment variable syntax | PASSENGER_MAX_REQUESTS=integer |
Default | 0 |
Since | 5.1.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
The maximum number of requests an application process will process. After serving that many requests, the application process will be shut down and Passenger will restart it. A value of 0 means that there is no maximum. The application process might also be shut down if its idle timeout is reached.
This option is useful if your application is leaking memory. By shutting it down after a certain number of requests, all of its memory is guaranteed to be freed by the operating system. An alternative (and better) mechanism for dealing with memory leaks is the --memory-limit / "memory_limit" option.
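For example, to recycle each application process after it has served 1000 requests, a Passengerfile.json could contain the following; the number is just an illustrative value:

```json
{
  // Restart each application process after it has served 1000 requests.
  "max_requests": 1000
}
```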
--memory-limit
/ "memory_limit"
Command line syntax | passenger start [...] --memory-limit MB |
---|---|
Config file syntax | "memory_limit": integer |
Environment variable syntax | PASSENGER_MEMORY_LIMIT=integer |
Default | 0 |
Since | 5.0.22 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
The maximum amount of memory that an application process may use, in megabytes. Once an application process has surpassed its memory limit, Passenger will allow it to finish processing all of its current requests, then shut the process down. A value of 0 means that there is no maximum: the application's memory usage will not be checked.
This option is useful if your application is leaking memory. By shutting it down, all of its memory is guaranteed to be freed by the operating system.
A word about permissions
This option uses the ps command to query memory usage information. On Linux, it further queries /proc to obtain additional memory usage information that's not obtainable through ps. You should ensure that the ps command works correctly and that the /proc filesystem is accessible by the Passenger core process.
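For example, to have Passenger gracefully shut down and replace any application process that exceeds 512 MB of memory (Enterprise only), a Passengerfile.json could contain the following; the limit is just an illustrative value:

```json
{
  // Shut down and replace an application process once it uses
  // more than 512 MB of memory.
  "memory_limit": 512
}
```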
--max-request-time
/ "max_request_time"
Command line syntax | passenger start --max-request-time SECONDS |
---|---|
Config file syntax | "max_request_time": integer |
Environment variable syntax | PASSENGER_MAX_REQUEST_TIME=integer |
Default | 0 |
Since | 4.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Enterprise only | This option is available in Passenger Enterprise only. Buy Passenger Enterprise here. |
The maximum amount of time, in seconds, that an application process may take to process a request. If the request takes longer than this amount of time, then the application process will be forcefully shut down, and possibly restarted upon the next request. A value of 0 means that there is no time limit.
This option is useful for preventing your application from getting stuck for an indefinite period of time.
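For example, to forcefully shut down any application process that takes more than 30 seconds to handle a request (Enterprise only), a Passengerfile.json could contain the following; the timeout is just an illustrative value:

```json
{
  // Forcefully terminate a process if a request takes longer than 30 seconds.
  "max_request_time": 30
}
```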
--no-abort-websockets-on-process-shutdown
/ "abort_websockets_on_process_shutdown"
Command line syntax | passenger start --no-abort-websockets-on-process-shutdown |
---|---|
Config file syntax | "abort_websockets_on_process_shutdown": false |
Environment variable syntax | PASSENGER_ABORT_WEBSOCKETS_ON_PROCESS_SHUTDOWN=false |
Default | Enabled |
Since | 5.0.22 |
Engines | nginx, builtin |
Mass deployment context | Main, per-app |
Before shutting down or restarting an application process, Passenger performs two operations:
- It waits until existing requests routed to that process are finished. This way, existing requests will be finished gracefully.
- It aborts WebSocket connections. This is because WebSocket connections can stay open for an arbitrary amount of time and will block the shutdown/restart.
If you do not want Passenger to abort WebSocket connections, then use this option to turn that behavior off. Passenger will then wait for WebSocket connections to terminate by themselves before proceeding with a process shutdown or restart. For this reason, you must ensure in your application code that WebSocket connections do not stay open for an arbitrary amount of time.
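For example, if your application closes its own WebSocket connections in a timely manner, you could disable the aborting behavior like this:

```json
{
  // Let WebSocket connections close by themselves during a
  // process shutdown or restart, instead of aborting them.
  "abort_websockets_on_process_shutdown": false
}
```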
--admin-panel-url
/ "admin_panel_url"
Command line syntax | passenger start --admin-panel-url uri |
---|---|
Config file syntax | "admin_panel_url": uri |
Environment variable syntax | ADMIN_PANEL_URL=uri |
Since | 5.2.2 |
Engines | nginx |
Mass deployment context | Main |
The URI to connect to the Fuse Panel with. Information is sent to enable monitoring, administration, analysis and troubleshooting of this Passenger instance and the apps running on it. The feature is disabled if this option is not specified. See "Connect Passengers" in the Fuse Panel for further instructions.
--admin-panel-auth-type
/ "admin_panel_auth_type"
Command line syntax | passenger start --admin-panel-auth-type type |
---|---|
Config file syntax | "admin_panel_auth_type": type |
Environment variable syntax | ADMIN_PANEL_AUTH_TYPE=type |
Default | basic |
Since | 5.2.2 |
Engines | nginx |
Mass deployment context | Main |
The authentication method Passenger should use when connecting to the Fuse Panel. Currently only basic authentication is supported. See "Connect Passengers" in the Fuse Panel for further instructions.
--admin-panel-username
/ "admin_panel_username"
Command line syntax | passenger start --admin-panel-username string |
---|---|
Config file syntax | "admin_panel_username": string |
Environment variable syntax | ADMIN_PANEL_USERNAME=string |
Since | 5.2.2 |
Engines | nginx |
Mass deployment context | Main |
The username that Passenger should use when connecting to the Fuse Panel with basic authentication. See "Connect Passengers" in the Fuse Panel for further instructions.
--admin-panel-password
/ "admin_panel_password"
Command line syntax | passenger start --admin-panel-password string |
---|---|
Config file syntax | "admin_panel_password": string |
Environment variable syntax | ADMIN_PANEL_PASSWORD=string |
Since | 5.2.2 |
Engines | nginx |
Mass deployment context | Main |
The password that Passenger should use when connecting to the Fuse Panel with basic authentication. See "Connect Passengers" in the Fuse Panel for further instructions.
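For example, a Passengerfile.json that connects this Passenger instance to a Fuse Panel installation might look like the following sketch. The URL and credentials are illustrative placeholders; use the values shown by your Fuse Panel's "Connect Passengers" instructions:

```json
{
  // Placeholder URI; obtain the real one from the Fuse Panel.
  "admin_panel_url": "https://fuse-panel.example.com",
  // Only basic authentication is currently supported.
  "admin_panel_auth_type": "basic",
  "admin_panel_username": "my-panel-user",
  "admin_panel_password": "my-panel-password"
}
```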
--ctls
/ "ctls"
Command line syntax | passenger start --ctls var=value |
---|---|
Config file syntax | "ctls": [ "var1=value1", "var2=value2", ... ] |
Default | None |
Since | 5.0.0 |
Engines | nginx, builtin |
Mass deployment context | Main |
Low-level mechanism to set arbitrary internal options. This flag can be used multiple times on the command line to specify multiple options.
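For example, the config file form takes an array of "var=value" strings, as in the sketch below. The option name shown is purely hypothetical, not a real Passenger internal option; consult Passenger support before setting any real internal option:

```json
{
  // Each entry has the form "var=value". "some_internal_option" is a
  // hypothetical placeholder used only to illustrate the syntax.
  "ctls": [
    "some_internal_option=value"
  ]
}
```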