Upstream#
Provides a context for describing groups of servers that can be used in the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives. Added in version 1.9.0: PRO The directive enables the ability to start server selection not from the primary group,
but from the active group, i.e., the one where a server was successfully found last time.
If a server cannot be found in the active group for the next request,
and the search moves to the backup group,
then this group becomes active,
and subsequent requests are first directed to servers in this group. Example: If the balancer switches from primary servers to the backup group,
all subsequent requests are handled by this backup group for 2 minutes.
After 2 minutes elapse, the balancer rechecks the primary servers
and makes them active again if they are working normally. Allows binding a server connection to a client connection when the
value, specified as a string of variables, becomes different from an empty string and "0". When using the directive, the Proxy module settings
must allow the use of persistent connections, for example: A typical use case for the directive is proxying connections with
NTLM authentication, where it is necessary to ensure client-to-server binding at
the beginning of negotiation: Sets up a feedback-based load balancing mechanism in the upstream. The following parameters can be specified: variable — the variable from which the feedback value is taken.
It should represent a performance or health metric;
it is assumed that the server provides it in headers or otherwise. The value is evaluated with each response from the server
and is factored into the moving average
according to the inverse and factor settings. If the inverse parameter is set, the feedback value is interpreted inversely:
lower values indicate better performance. The factor by which the feedback value is considered
when calculating the average.
Valid values are integers from 0 to 99.
Default is 90. The average is calculated using the exponential smoothing formula. The larger the factor, the less new values affect the average;
if 90 is specified, then 90% of the previous value and only 10% of the new value will be taken. The account parameter specifies a condition variable
that controls which responses are considered in the calculation.
The average value is updated with the feedback value from the response
only if the condition variable of that response
is not equal to "" or "0". Note: By default, responses during active checks
are not included in the calculation;
combining the $upstream_probe variable
with account allows including these responses or even excluding everything else. The last_byte parameter allows processing data from the proxied server after receiving
the complete response, not just the header. Example: This configuration categorizes server responses by feedback levels
based on specific scores from response header fields,
and also adds a condition on $upstream_probe
to consider only responses from the high_priority active check or regular client requests. Specifies a load balancing method for a server group where the client-server mapping is determined using the hashed key value. The key can contain text, variables, and their combinations. Note that adding or removing a server from the group may result in remapping most of the keys to different servers. The method is compatible with the Cache::Memcached Perl library. When using domain names that resolve to multiple IP addresses
(for example, with the resolve parameter), the server does not sort the received addresses, so their order may differ across servers, which affects client distribution; to ensure consistent distribution, use the consistent parameter. If the consistent parameter is specified, the ketama consistent hashing method is used instead.
Specifies a load balancing method for the group where requests are distributed among servers based on client IP addresses. The first three octets of the client's IPv4 address or the entire IPv6 address are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server except when this server is unavailable. In that case, client requests will be passed to another server. Most likely this will also be the same server. If one of the servers needs to be temporarily removed, it should be marked with the down parameter.
Activates the cache of connections to upstream servers. An example configuration of a memcached upstream with keepalive connections is shown below. For HTTP, the proxy_http_version directive should be set to "1.1" and the Connection header field should be cleared. Note: Alternatively, HTTP/1.0 persistent connections can be used by passing the "Connection: Keep-Alive" header field to an upstream server, though this method is not recommended. For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work. Note: SCGI and uwsgi protocols do not have a concept of keepalive connections.
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. Closing connections periodically is necessary to free per-connection memory allocations. Therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.
Limits the maximum time during which requests can be processed through one keepalive connection. After this time is reached, the connection is closed following the subsequent request processing.
Sets a timeout during which an idle keepalive connection to an upstream server will stay open.
Specifies that a group should use a load balancing method where a request is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.
Specifies that the group should use a load balancing method where an active
server's chance of receiving the request is inversely proportional to its
average response time; the less it is, the more requests a server gets. With the header parameter, the directive accounts for the average time to receive response headers; with last_byte, it uses the average time to receive the entire response. The factor parameter serves the same purpose as response_time_factor (PRO)
and overrides it if the parameter is set. The account parameter specifies a condition variable
that controls which responses are included in the calculation.
The average is updated
only if the condition variable for the response
is not equal to "" or "0". Note: By default, responses during active health probes
are not included in the calculation;
combining the $upstream_probe variable
with account allows including these responses or even excluding everything else. Current values are presented as header_time and response_time among the upstream metrics in the API. If it is not possible to assign a proxied server to a request on the first attempt
(for example, during a brief service interruption
or when there is a surge in load reaching the max_conns limit),
the request is not rejected;
instead, Angie attempts to enqueue it for processing. The number parameter of the directive sets the maximum number of requests
in the queue for a worker process.
If the queue is full,
a 502 (Bad Gateway) error is returned to the client. Note: The logic of the proxy_next_upstream directive also applies to queued requests.
Specifically, if a server was selected for a request
but it cannot be handed over to it,
the request may be returned to the queue. If a server is not selected to process a queued request
within the time set by the timeout parameter (default is 60 seconds), a 502 (Bad Gateway) error is returned to the client.
Specifies a load balancing method for the group where a request is passed to a randomly selected server, taking into account server weights. If the optional two parameter is specified, two servers are first chosen at random and the least loaded of them is used.
Sets the smoothing factor for the previous value when calculating the average
response time for the least_time (PRO) load balancing method using the
exponentially weighted moving average formula. The higher the specified number, the less new values affect the average; if
90 is specified, 90% of the previous value and only 10% of the new value are taken. Current calculation results are presented as header_time and response_time among the upstream metrics in the API. Note: Only successful responses are included in the calculation; what is considered an unsuccessful
response is determined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. Additionally, the header_time value is recalculated only if all headers are received and processed, and response_time — only if the entire response is received.
Defines the address and other parameters of a server. The address can be specified as a domain name or IP address, with an optional port, or as a UNIX-domain socket path specified after the unix: prefix. The following parameters can be defined:
weight=number — sets the weight of the server. Default is 1.
max_conns=number — limits the maximum number of simultaneous active connections to the proxied server. The default value is 0, meaning there is no limit. Note: With idle keepalive connections enabled, multiple worker processes, and a shared memory zone, the total number of active and idle connections to the proxied server may exceed the max_conns value.
What is considered an unsuccessful attempt
is defined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. The default is max_fails=1; setting max_fails=0 disables attempt accounting. For fail_timeout, the default value is 10 seconds.
Marks the server as a backup server. It will be passed requests when the primary servers are unavailable. If the backup_switch (PRO) directive is specified,
its active backup logic also applies. Marks the server as permanently unavailable. Marks the server as draining; this means
it only receives requests from sessions
previously bound via sticky.
Otherwise, the behavior is the same as in down mode.
Allows monitoring changes to the list of IP addresses corresponding to
a domain name and updating it without reloading the configuration.
The group must reside in
shared memory;
a resolver must also be defined. Enables resolution of DNS SRV records and sets the service name. For
the parameter to work, the resolve parameter must also be specified for the server. If the service name does not contain a dot, a name is formed according to the RFC standard:
the service name is prefixed with _, and _tcp is appended after a dot. Angie resolves SRV records
by combining the normalized service name and hostname
and obtaining a list of servers for the resulting combination via DNS,
along with their priorities and weights. SRV records with the highest priority
(those with the lowest priority value)
are resolved as primary servers,
while other records become backup servers.
If backup is set, SRV records with the highest priority are resolved as backup servers instead. Weight is analogous to the weight parameter of the server directive. In this example, a lookup is performed for the record _http._tcp.backend.example.com.
Sets the server ID in the group. If the parameter is not specified,
the ID is set as a hexadecimal MD5 hash
of the IP address and port or UNIX socket path. Sets the time for gradual recovery of a returning server's weight
when load balancing with the
round-robin or least_conn method. If the parameter is set
and the server is considered operational again after a failure
from the perspective of max_fails and upstream_probe (PRO),
such a server gradually increases to its specified weight
over the given time period. If the parameter is not set,
in a similar situation
the server immediately starts operating with its specified weight. Note: If only one server is specified in the upstream, slow_start does not work and will be ignored.
Specifies a file where the list of upstream servers is persistently stored.
When installing from
our packages,
the directory /var/lib/angie/state/ (/var/db/angie/state/ on FreeBSD) is created for such files with appropriate access permissions, and only the file name needs to be added in the configuration.
The server list here has a format similar to the server directive.
Configures session affinity to bind client sessions to proxied servers
in the mode specified by the first parameter;
to drain servers
that have the sticky directive configured, you can use the drain (PRO) option in the server block.
This mode uses cookies to manage sessions.
It's suitable when cookies are already used for session tracking. The first client request, before any stickiness applies,
is sent to a backend server according to the configured balancing method.
Angie then sets a cookie identifying the chosen server. The cookie name ( Subsequent requests with this cookie are routed to the server
specified by its sid.
If the server is unavailable or can't process the request,
another one is chosen via the configured balancing method. You can assign cookie attributes in the directive;
by default, only path=/ is set. This example sets a cookie srv_id for 1 hour, using a domain from a variable.
This mode uses predefined route identifiers,
which may come from URLs, cookies, or request arguments.
It's less flexible but good when such identifiers already exist. The backend server may return a route ID known to both client and server.
This value must match the sid. Subsequent requests should carry the route ID,
e.g., via a cookie or query argument. The directive takes a list of variables to extract the route ID.
The first non-empty value is matched against sid. In this example, Angie checks the route cookie first, then the route query argument.
This mode uses a dynamically generated key to assign a client to a backend.
It's flexible and supports session storage in shared memory
and various identifier sources. A session is created from the backend server's response.
The session ID is the first non-empty variable from create. Sessions are stored in shared memory,
defined with zone name:size. By default, Angie refreshes sessions on each use.
To disable this, use norefresh. The session ID from a client request is extracted via lookup, using the first non-empty variable listed. Use header to create the session upon receiving response headers rather than after full response processing. Example: a session created using the examplecookie cookie. Use remote_action and remote_result to manage session IDs with an external session store. Sessions expire after timeout. By default, Angie refreshes session TTL on each use.
Use norefresh to disable that. Basic flow: Extract the session ID from the first non-empty lookup variable; if none is found, fall back to standard load balancing. If the zone is set and the session exists, use it and stop. If there is no session or no zone, pick a server and make an HTTP subrequest
to the remote_action endpoint with the session ID ($sticky_sessid) and the server ID from sid= or from $sticky_sid. Send these as HTTP headers (via proxy_set_header). The remote store responds: 200/201/204 confirms the session (cache it if zone is set); 409 signals conflict (if zone is set) — the session is linked to another server, and remote_result is used to extract the corrected server ID. Other status codes or a missing server ID mean a fallback to the original server. Example: the session ID comes from $cookie_bar, confirmed via $upstream_http_x_sticky_sid.
The sticky directive respects upstream server states: servers marked down or failing are excluded, and servers over the max_conns limit are skipped. Recovered servers are reused automatically. You can further adjust behavior using
sticky_secret and sticky_strict.
If stickiness fails and sticky_strict is off, fallback balancing is used; if it is on, the request is rejected. Each zone used in sticky must be exclusive to a single upstream.
Adds the string as the salt value to the MD5 hashing function
for the sticky directive in The salt is appended to the value being hashed;
to verify the hashing mechanism independently: When enabled, causes Angie to return an HTTP 502 error to the client
if the desired server is unavailable,
rather than using any other available server
as it would when no servers in the upstream are available. Defines a group of servers. Servers can listen on different ports. In addition, servers listening on TCP and UNIX domain sockets can be mixed. Example: By default, requests are distributed between the servers using a weighted round-robin balancing method. In the above example, each 7 requests will be distributed as follows: 5 requests go to backend1.example.com and one request to each of the second and third servers. If an error occurs during communication with a server, the request will be passed to the next server, and so on until all of the functioning servers will be tried. If a successful response could not be obtained from any of the servers, the client will receive the result of the communication with the last server. Defines the name and size of the shared memory zone that keeps the group's configuration and run-time state that are shared between worker processes. Several groups may share the same zone. In this case, it is enough to specify the size only once. The Used with Used with stores the IP address and port, or the path to the UNIX domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas, e.g.: 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock If an internal redirect from one server group to another happens, initiated by 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80 If a server cannot be selected, the variable keeps the name of the server group. number of bytes received from an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable. number of bytes sent to an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable. keeps the status of accessing a response cache. The status can be either
If the request bypassed the cache without accessing it,
the variable is not set. keeps time spent on establishing a connection with the upstream server; the time is kept in seconds with millisecond resolution. In case of SSL, includes time spent on handshake. Times of several connections are separated by commas and colons like addresses in the $upstream_addr variable. keeps time spent on receiving the response header from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keep server response header fields. For example, the keeps time the request spent in the queue
before the next server selection;
the time is kept in seconds with millisecond resolution.
Times of several attempts are separated by commas and colons
like addresses in the $upstream_addr variable. keeps the length of the response obtained from the upstream server; the length is kept in bytes. Lengths of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keeps status code of the response obtained from the upstream server. Status codes of several responses are separated by commas and colons like addresses in the $upstream_addr variable. If a server cannot be selected, the variable keeps the 502 (Bad Gateway) status code. Status of sticky requests. Request sent to an upstream where sticky is not enabled. Request does not contain sticky information. Request with sticky information sent to the desired server. Request with sticky information sent to a server
selected by the load balancing algorithm. Statuses from several connections are separated by commas and colons
like addresses in the $upstream_addr variable. keeps fields from the end of the response obtained from the upstream server.
Configuration Example#
upstream backend {
zone backend 1m;
server backend1.example.com weight=5;
server backend2.example.com:8080;
server backend3.example.com service=_example._tcp resolve;
server unix:/tmp/backend3;
server backup1.example.com:8080 backup;
server backup2.example.com:8080 backup;
}
resolver 127.0.0.53 status_zone=resolver;
server {
location / {
proxy_pass http://backend;
}
}
Directives#
backup_switch (PRO)#
If the permanent parameter is defined without a time value,
the group remains active after selection,
and automatic rechecking of groups with lower levels does not occur.
If time is specified,
the active status of the group expires after the specified interval,
and the balancer again checks groups with lower levels,
returning to them if the servers are working normally.
upstream my_backend {
server primary1.example.com;
server primary2.example.com;
server backup1.example.com backup;
server backup2.example.com backup;
backup_switch permanent=2m;
}
bind_conn (PRO)#
"" and
"0".bind_conn directive must be used after all directives
that set a load balancing method,
otherwise it will not work.
If it is used together with the sticky directive,
then bind_conn must come after sticky.proxy_http_version 1.1;
proxy_set_header Connection "";
map $http_authorization $ntlm {
~*^N(?:TLM|egotiate) 1;
}
upstream ntlm_backend {
server 127.0.0.1:8080;
bind_conn $ntlm;
}
server {
# ...
location / {
proxy_pass http://ntlm_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
feedback (PRO)#
feedback variable [inverse] [factor=number] [account=condition_variable] [last_byte];
It dynamically adjusts balancing decisions
by multiplying the weight of each proxied server by the average feedback value,
which changes over time depending on the value of the variable
and is subject to an optional condition.
upstream backend {
zone backend 1m;
feedback $feedback_value factor=80 account=$condition_value;
server backend1.example.com;
server backend2.example.com;
}
map $upstream_http_custom_score $feedback_value {
"high" 100;
"medium" 75;
"low" 50;
default 10;
}
map $upstream_probe $condition_value {
"high_priority" "1";
"low_priority" "0";
default "1";
}
The condition on $upstream_probe considers only responses from the high_priority active check
or responses to regular client requests.
hash#
hash $remote_addr;
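For instance, a minimal sketch (the server names are placeholders) that hashes on the request URI and enables ketama consistent hashing might look like this:
upstream backend {
    # hash clients to servers by request URI; "consistent" switches to ketama hashing
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}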
When using domain names that resolve to multiple IP addresses (for example, with the resolve parameter),
the server does not sort the received addresses, so their order may differ
across different servers, which affects client distribution.
To ensure consistent distribution,
use the consistent parameter.
If the consistent parameter is specified, the ketama consistent hashing method will be used instead. The method ensures that only a few keys will be remapped to different servers when a server is added to or removed from the group. This helps to achieve a higher cache hit ratio for caching servers. The method is compatible with the Cache::Memcached::Fast Perl library with the ketama_points parameter set to 160.
ip_hash#
If one of the servers needs to be temporarily removed, it should be marked with the down parameter to preserve the current hashing of client IP addresses:
upstream backend {
ip_hash;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com down;
server backend4.example.com;
}
keepalive#
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
It should be particularly noted that the keepalive directive does not limit the total number of connections to upstream servers that Angie worker processes can open. The connections parameter should be set low enough to let upstream servers process new incoming connections as well.
The keepalive directive must be used after all directives that set
a load balancing method, otherwise it will not work.
upstream memcached_backend {
server 127.0.0.1:11211;
server 10.0.0.2:11211;
keepalive 32;
}
server {
#...
location /memcached/ {
set $memcached_key $uri;
memcached_pass memcached_backend;
}
}
For HTTP, the proxy_http_version directive should be set to "1.1" and the Connection header field should be cleared:
upstream http_backend {
server 127.0.0.1:8080;
keepalive 16;
}
server {
#...
location /http/ {
proxy_pass http://http_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
upstream fastcgi_backend {
server 127.0.0.1:9000;
keepalive 8;
}
server {
#...
location /fastcgi/ {
fastcgi_pass fastcgi_backend;
fastcgi_keep_conn on;
# ...
}
}
keepalive_requests#
keepalive_time#
keepalive_timeout#
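A hedged sketch combining keepalive with the three tuning directives above; the values are illustrative, not recommendations:
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
    keepalive_requests 1000;   # close a connection after this many requests
    keepalive_time 1h;         # limit the total lifetime of a keepalive connection
    keepalive_timeout 60s;     # close idle keepalive connections after this period
}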
least_conn#
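A minimal sketch, assuming two placeholder backends:
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}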
least_time (PRO)#
least_time header | last_byte [factor=number] [account=condition_variable];
Current values are presented as header_time (headers only)
and response_time (entire responses) in the server's health object
among the upstream metrics in the API.
queue (PRO)#
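A hedged sketch of the directive described earlier in this section; the queue length and timeout values are illustrative only:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    # hold up to 100 requests per worker process, waiting at most 60 seconds for a server
    queue 100 timeout=60s;
}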
If the queue is full, a 502 (Bad Gateway) error is returned to the client. If a server is not selected to process a queued request within the time set by timeout
(default is 60 seconds),
a 502 (Bad Gateway) error is returned to the client.
Requests from clients that prematurely close the connection are also removed from the queue;
there are counters for the states of requests passing through the queue in the API.
The queue directive must be used after all directives that set the
load balancing method; otherwise, it won't work.
random#
If the optional two parameter is specified, Angie randomly selects two servers, then selects one of them using the least_conn method, where a request is passed to the server with the least number of active connections.
response_time_factor (PRO)#
If 90 is specified, 90% of the previous value and only 10% of
the new value are taken. Valid values range from 0 to 99 inclusive.
Current calculation results are presented as header_time
(headers only) and response_time (entire responses) in the server's health
object among the upstream metrics in the API. Additionally, the header_time
value is recalculated only if all headers are received and processed, and
response_time — only if the entire response is received.
server#
unix: prefix. If a port is not specified, port 80 is used. A domain name that resolves to multiple IP addresses defines multiple servers at once.
weight=number — sets the weight of the server.
max_conns=number — the default value is 0, meaning there is no limit. If the group does not reside in the shared memory zone, the limitation works per worker process. With idle keepalive connections, multiple worker processes, and a shared memory zone, the total number of connections may exceed the max_conns value.
max_fails=number — sets the number of unsuccessful attempts to communicate with the server
that should occur during the specified fail_timeout
period for the server to be considered unavailable;
after this, it will be checked again after the same period. When max_fails is exceeded, the server is also considered unavailable from the perspective of
upstream_probe (PRO); client requests will not be directed to it
until checks determine it is available. If a server directive in a group resolves to multiple servers,
its max_fails setting applies to each server separately. If after resolving all server directives
only one server remains in the upstream,
the max_fails setting has no effect and will be ignored. The default is max_fails=1; max_fails=0 disables attempt accounting.
fail_timeout=time — sets the period of time during which
a specified number of unsuccessful attempts to communicate with the server
(max_fails) must occur for the server to be considered unavailable.
The server then remains unavailable for the same period of time
before being checked again. If a server directive in a group resolves to multiple servers,
its fail_timeout setting applies to each server separately. If after resolving all server directives
only one server remains in the upstream,
the fail_timeout setting has no effect and will be ignored.
backup — marks the server as a backup server.
down — marks the server as permanently unavailable.
drain (PRO) — in other respects, the behavior is the same as in down mode.
The backup parameter cannot be used together with the
hash, ip_hash, and
random load balancing methods. The down and drain parameters are mutually exclusive.
resolve
service=name — for the parameter to work, the resolve parameter must be specified for the server,
without specifying a port in the hostname. If the service name does not contain a dot, a name is formed according to the RFC standard: the service name is prefixed with _,
then _tcp is appended after a dot.
Thus, the service name http results in _http._tcp. If backup is set with server,
SRV records with the highest priority are resolved as backup servers,
and other records are ignored. Weight is analogous to the weight parameter of the server directive.
If weight is specified both in the directive itself and in the SRV record,
the weight set in the directive is used. In this example, a lookup is performed for the record _http._tcp.backend.example.com:
server backend.example.com service=http resolve;
sid=id — sets the server ID in the group.
slow_start=time — if only one server is specified in the upstream,
slow_start does not work and will be ignored.
state (PRO)#
/var/lib/angie/state/ (/var/db/angie/state/ on FreeBSD)
is created specifically for storing such files
with appropriate access permissions,
and in the configuration you only need to add the file name:
upstream backend {
zone backend 1m;
state /var/lib/angie/state/<FILE NAME>;
}
The server list in the file has a format similar to the server directive.
The file contents are modified whenever servers are changed in the
/config/http/upstreams/ section
via the configuration API.
The file is read when Angie starts or when the configuration is reloaded.
To use the state directive in an upstream block,
there must be no server directives in it,
but a shared memory zone (zone) is required.
sticky#
sticky cookie name [attr=value]...;
sticky route $variable...;
sticky learn zone=zone create=$create_var1... lookup=$lookup_var1... [header] [norefresh] [timeout=time];
sticky learn [zone=zone] lookup=$lookup_var1... remote_action=uri remote_result=$remote_var [norefresh] [timeout=time];
To drain servers in upstreams that have the sticky directive configured,
you can use the drain (PRO) option
in the server block.
The sticky directive must be used after all directives
that specify a particular load balancing method,
otherwise it will not work.
If it is used alongside the bind_conn (PRO) directive,
then bind_conn must come after sticky. The cookie name (name) is defined by the sticky directive,
and its value corresponds to the sid
from the server directive.
This value is further hashed if sticky_secret is set. By default, only path=/ is set.
Attribute values can contain variables.
To remove an attribute, set it to an empty value: attr=.
For example, sticky cookie path= omits the path attribute. This example sets a cookie srv_id for 1 hour,
using a domain from a variable:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie srv_id domain=$my_domain max-age=3600;
}
In this example, Angie checks the route cookie first,
then the route query argument:
upstream backend {
server backend1.example.com:8080 "sid=server 1";
server backend2.example.com:8080 "sid=server 2";
sticky route $cookie_route $arg_route;
}
create and lookup define how to generate and locate sessions,
and both accept multiple variables. The session ID is the first non-empty variable from create.
For example, it may come from a response cookie. Sessions are stored in a shared memory zone defined as zone name:size.
If unused for timeout duration (default: 1 hour), the session expires. To keep Angie from refreshing a session on each use, specify norefresh. The session ID from a client request is extracted via lookup,
using the first non-empty variable listed.
If none is found, it's a new request. Use header to create the session upon receiving response headers
rather than after full response processing. Example: a session created using the examplecookie cookie:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky learn
create=$upstream_cookie_examplecookie
lookup=$cookie_examplecookie
zone=client_sessions:1m;
}
Use remote_action and remote_result
to manage session IDs with an external session store.
The shared memory zone acts as a cache;
the external store is authoritative.
create is not compatible with remote_action. Sessions expire after timeout (default: 1 hour),
regardless of remote_action; use norefresh to disable TTL refreshing. The zone is optional with remote_action.
Without it, Angie always queries the external store. The session ID from a client request is extracted via the lookup variable.
If none is found, Angie falls back to standard load balancing. If the zone is set and the session exists, it is used directly. Otherwise, a server is picked and an HTTP subrequest is made to the remote_action endpoint with the session ID ($sticky_sessid) and the server ID from sid= or from $sticky_sid.
A 200/201/204 response confirms the session; it is cached if the zone is set. A 409 response (if the zone is set) means the session is linked to another server.
Use remote_result to extract the corrected server ID. remote_result uses upstream_http_* variables
to read headers from the remote store's response. In the following example, the session ID comes from $cookie_bar,
confirmed via $upstream_http_x_sticky_sid:
http {
server srv1;
server srv2;
sticky learn zone=sz:1m
lookup=$cookie_bar
remote_action=/remote_session
remote_result=$upstream_http_x_sticky_sid;
zone z 1m;
}
server {
listen localhost;
location / {
proxy_pass http://u1/;
}
location /remote_session {
internal;
proxy_set_header X-Sticky-Sessid $sticky_sessid;
proxy_set_header X-Sticky-Sid $sticky_sid;
proxy_set_header X-Sticky-Last $msec;
proxy_pass http://remote;
}
}
}
HTTP/1.1 200 OK
...
X-Sid: web-server-01
X-Session-Backend: backend-pool-1
$upstream_http_x_sid → web-server-01
$upstream_http_x_session_backend → backend-pool-1
remote_result will use web-server-01
to select the matching sid. The sticky directive respects upstream server states: servers marked down or failing are excluded, servers over the max_conns limit are skipped, and drain servers (PRO) may still be selected for new sessions in sticky mode when identifiers match. If stickiness fails and sticky_strict is off,
fallback balancing is used;
if on, the request is rejected. Each zone used in sticky must be exclusive to a single upstream.
Zones cannot be shared across multiple upstream blocks.
sticky_secret#
The salt applies to the cookie and route modes.
The string may contain variables, for example, $remote_addr:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie cookie_name;
sticky_secret my_secret.$remote_addr;
}
$ echo -n "<VALUE><SALT>" | md5sum
sticky_strict#
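A minimal sketch pairing sticky_strict with cookie-based stickiness (assuming an on | off syntax; names are placeholders):
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    sticky cookie srv_id;
    # return 502 instead of falling back to another server when the bound server is unavailable
    sticky_strict on;
}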
upstream#
upstream backend {
server backend1.example.com weight=5;
server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
server backup1.example.com backup;
}
zone#
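A minimal sketch of a shared memory zone, mirroring the configuration example above:
upstream backend {
    zone backend 1m;    # zone name "backend", size 1 MB, shared between worker processes
    server backend1.example.com;
    server backend2.example.com;
}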
Built-in Variables#
The http_upstream module supports the following built-in variables:
$sticky_sessid#
Used with remote_action in sticky;
stores the initial session ID taken from lookup.
$sticky_sid#
Used with remote_action in sticky;
stores the server ID previously associated with the session.
$upstream_addr#
If an internal redirect from one server group to another happens, initiated by X-Accel-Redirect or error_page, then the server addresses from different groups are separated by colons, e.g.:
$upstream_bytes_received#
$upstream_bytes_sent#
$upstream_cache_status#
The status can be MISS, BYPASS, EXPIRED, STALE, UPDATING,
REVALIDATED, or HIT:
MISS: The response is not found in the cache,
and the request is passed to the upstream server.
BYPASS: The cache is bypassed,
and the request is passed directly to the upstream server.
EXPIRED: The cached response is stale,
and a new request is passed to the upstream server to update the content.
STALE: The cached response is stale,
but is still served to clients
until the content is eventually updated from the upstream server.
UPDATING: The cached response is stale,
but is still served to clients
while the currently ongoing update from the upstream server is in progress.
REVALIDATED: The cached response is stale,
but was successfully revalidated
and does not need to be updated from the upstream server.
HIT: The response was taken from the cache.
$upstream_trailer_<name>#
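As an illustration (not part of the original reference), these variables can be used in a log_format definition; the log path is a placeholder:
log_format upstream_log '$remote_addr > $upstream_addr '
                        'status=$upstream_status sticky=$upstream_sticky_status '
                        'connect=$upstream_connect_time response=$upstream_response_time';
access_log /var/log/angie/upstream.log upstream_log;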