Upstream#
The module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass and grpc_pass directives.

bind_conn enables binding the server connection to the client when the value, which is set as a string of variables, becomes anything other than "" and "0".

Attention: the bind_conn directive must be used after all directives that set the load balancing method; otherwise, it won't work. If sticky is also used, bind_conn should appear after sticky.

Attention: when using the directive, configure the http_proxy module to allow keepalive connections, for example, by setting proxy_http_version 1.1 and clearing the "Connection" header field.

A typical use case for the directive is proxying NTLM-authenticated connections, where the client should be bound to the server when the negotiation starts.

feedback (added in version 1.6.0: PRO) enables a feedback-based load balancing mechanism for the upstream. The following parameters are accepted:

variable: the variable from which the feedback value is taken.
It should represent a performance or health metric,
and is intended to be supplied by the peer in header fields or otherwise. The value is assessed at each response from the peer
and factored into the rolling average
according to the inverse and factor settings.

inverse: if set, the feedback value is interpreted inversely, meaning lower values indicate better performance.

factor: the factor by which the feedback value is weighted when calculating the average. Valid values are integers between 0 and 99; 90 by default. The average feedback is calculated using the exponential moving average formula. The larger the factor, the less the average is affected by new values; if the factor is set to 90, the result has 90% of the previous value and only 10% of the new value.

account: specifies a condition variable
that controls which responses should be included in the calculation.
The average is updated with the feedback value
only if the condition variable for the response
isn't "" or "0".

Note: by default, responses from probes aren't included in the calculation; combining the $upstream_probe variable with account allows including these responses or even excluding everything else.

last_byte: allows processing feedback from the upstream server after the full
response has been received, instead of just after the header. Example: This categorizes server responses into different feedback levels
based on specific scores obtained from response header fields,
and also adds a condition mapped from $upstream_probe
to account only for the responses from the high_priority probe or responses to regular client requests.

hash specifies a load balancing method for a server group where the client-server mapping is based on the hashed key value. The key can contain text, variables, and their combinations. Note that adding or removing a server from the group may result in remapping most of the keys to different servers. The method is compatible with the Cache::Memcached Perl library. If the consistent parameter is specified, the ketama consistent hashing method is used instead.

ip_hash specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server except when this server is unavailable. In the latter case client requests will be passed to another server; most probably, it will always be the same server as well. If one of the servers needs to be temporarily removed, it should be marked with the down parameter in order to preserve the current hashing of client IP addresses.

keepalive activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections that are preserved in the cache of each worker process; when this number is exceeded, the least recently used connections are closed.

Note: it should be particularly noted that the keepalive directive does not limit the total number of connections to upstream servers that an Angie worker process can open. The connections parameter should be set to a number small enough to let upstream servers process new incoming connections as well.

Attention: the keepalive directive must be used after all directives that set the load balancing method; otherwise, it won't work.

An example configuration of a memcached upstream with keepalive connections is given below. For HTTP, the proxy_http_version directive should be set to "1.1" and the "Connection" header field should be cleared.

Note: alternatively, HTTP/1.0 persistent connections can be used by passing the "Connection: Keep-Alive" header field to an upstream server, though this method is not recommended.

For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work.

Note: the SCGI and uwsgi protocols do not have a notion of keepalive connections.

keepalive_requests sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. Closing connections periodically is necessary to free per-connection memory allocations; therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.

keepalive_time limits the maximum time during which requests can be processed through one keepalive connection. After this time is reached, the connection is closed following the subsequent request processing.

keepalive_timeout sets a timeout during which an idle keepalive connection to an upstream server will stay open.

least_conn specifies that a group should use a load balancing method where a request is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.

least_time specifies that the group should use a load balancing method where an active
server's chance of receiving the request is inversely proportional to its
average response time; the less it is, the more requests a server gets. With the header parameter, the directive accounts for response headers only; with last_byte, it uses the average time to receive the entire response.

factor (added in version 1.7.0: PRO) serves the same purpose as response_time_factor (PRO) and overrides it if set.

account specifies a condition variable
that controls which responses should be included in the calculation.
The average is updated
only if the condition variable for the response
isn't "" or "0".

Note: by default, responses from probes aren't included in the calculation; combining the $upstream_probe variable with account allows including these responses or even excluding everything else.

The respective moving averages, adjusted for factor and account, are also presented as header_time and response_time in the server's health object among the upstream metrics in the API.

queue (added in version 1.4.0: PRO): if it is not possible to assign a proxied server to a request on the first attempt
(for example, during a brief service interruption
or when there is a surge in load reaching the max_conns limit),
the request is not rejected;
instead, Angie attempts to enqueue it for processing. The number in the directive sets the maximum number of requests
in the queue for a worker process.
If the queue is full, a 502 (Bad Gateway) error is returned to the client.

Note: the logic of the proxy_next_upstream directive also applies to queued requests.
Specifically, if a server was selected for a request
but it cannot be handed over to it,
the request may be returned to the queue. If a server is not selected to process a queued request
within the time set by timeout (60 seconds by default), a 502 (Bad Gateway) error is returned to the client.

Attention: the queue directive must be used after all directives that set the load balancing method; otherwise, it won't work.

random specifies that a group should use a load balancing method where a request is passed to a randomly selected server, taking into account weights of servers. The optional two parameter instructs Angie to randomly select two servers and then choose one using the specified method; the default method is least_conn.

response_time_factor (PRO): if the least_time (PRO) load balancing method is used, this sets the smoothing
factor for the previous value when average response time is calculated
using the exponential moving average formula. The larger the number, the less the average is affected by new values; if the number is set to 90, the result has 90% of the previous value and only 10% of the new value.

The respective moving averages are presented as header_time and response_time in the server's health object among the upstream metrics in the API.

Note: the calculation accounts for successful responses only; what is considered an
unsuccessful response is defined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. Besides, header_time is updated only if all headers are received and processed, and response_time is updated only if the entire response is received.

server defines the address and other parameters of a server. The address can be
specified as a domain name or IP address, with an optional port, or as a UNIX
domain socket path specified after the unix: prefix. If a port is not specified, port 80 is used. A domain name that resolves to several IP addresses defines multiple servers at once.

The following parameters can be defined:

weight=number: sets the weight of the server; 1 by default.

max_conns=number: limits the maximum number of simultaneous active connections to the proxied server; the default is 0, meaning no limit. If the server group does not reside in the shared memory, the limitation works per each worker process.

Note: if idle keepalive connections, multiple workers, and the shared memory are enabled, the total number of active and idle connections to the proxied server may exceed the max_conns value.

max_fails=number: sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by fail_timeout to consider the server unavailable; it is then retried after the same duration. What is considered an
unsuccessful attempt is defined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. When max_fails is reached, the peer is also considered unhealthy by the upstream_probe (PRO) probes; it won't receive client requests until the probes consider it healthy again.

Note: if a server in an upstream resolves into multiple peers, its max_fails setting applies to each peer individually. If an upstream contains only one peer after all its server directives are resolved, the max_fails setting has no effect and will be ignored.

The default number of unsuccessful attempts is max_fails=1; max_fails=0 disables the accounting of attempts.

fail_timeout=time: sets the period of time during which a number of unsuccessful attempts to communicate with the server (max_fails) should happen to consider the server unavailable. The server then becomes unavailable for the same amount of time before it is retried. By default, this is set to 10 seconds.

Note: if a server in an upstream resolves into multiple peers, its fail_timeout setting applies to each peer individually. If an upstream contains only one peer after all its server directives are resolved, the fail_timeout setting has no effect and will be ignored.

backup marks the server as a backup server. It will be passed requests when the primary servers are unavailable.

down marks the server as permanently unavailable.

drain sets the server to draining; this means
it receives only requests from the sessions
that were bound earlier with sticky.
Otherwise it behaves similarly to down.

Caution: the backup parameter cannot be used along with the hash, ip_hash, and random load balancing methods. The down and drain options are mutually exclusive.

resolve (added in version 1.1.0) enables monitoring changes to the list of IP addresses that corresponds
to a domain name, updating it without a configuration reload.
The group should be stored in a
shared memory zone;
also, you need to define a
resolver.

service=name: enables resolving DNS SRV records and sets the service name.
For this parameter to work, specify the resolve server parameter,
providing a hostname without a port number. If there are no dots in the service name,
the name is formed according to the RFC standard:
the service name is prefixed with _, then _tcp is added after a dot. Thus, the service name http will result in _http._tcp.

Angie resolves the SRV records
by combining the normalized service name and the hostname
and obtaining the list of servers for the combination via DNS,
along with their priorities and weights. Top-priority SRV records
(ones that share the minimum priority value)
resolve into primary servers,
and other records become backup servers.
If backup is set with server, top-priority SRV records resolve into backup servers, and other records are ignored.

Weight influences the selection of servers by the assigned capacity:
higher weights receive more requests.
If set by both the server directive and the SRV record, the weight set by server is used.

This example will look up the _http._tcp.backend.example.com record: server backend.example.com service=http resolve;

sid=id (added in version 1.2.0: Angie, 1.1.0-P1: Angie PRO): sets the server ID within the group. If the parameter is omitted, the ID is set to the hexadecimal MD5 hash value of either the IP address and port or the UNIX domain socket path.

slow_start=time (added in version 1.4.0): sets the time to recover the weight for a server that goes back online, if load balancing uses the round-robin or least_conn method. If the value is set
and the server is again considered available and healthy
as defined by max_fails and upstream_probe (PRO),
the server will steadily recover its designated weight
within the allocated timeframe. If the value isn't set,
the server in a similar situation
will recover its designated weight immediately.

Note: if there's only one server in an upstream, slow_start has no effect and will be ignored.

state (added in version 1.2.0: PRO) specifies the file where the upstream's server list is persisted.
When installing from
our packages,
a designated directory (/var/lib/angie/state/, or /var/db/angie/state/ on FreeBSD) with appropriate permissions is created to store these files, so you will only need to add the file's basename in the configuration.

The format of this server list is similar to that of the server directive. The contents of the file change whenever there is any modification to the servers in the /config/http/upstreams/ section via the configuration API. The file is read at Angie start or configuration reload.

Caution: for the state directive to be used in an upstream block, the block should have no server directives; instead, it must have a shared memory zone (zone).

sticky (added in version 1.2.0: Angie, 1.1.0-P1: Angie PRO) configures the binding of client sessions to proxied servers
in the mode specified by the first parameter;
to drain requests from servers
that have sticky defined, use the drain option in the server block.

Attention: the sticky directive must be used after all directives that set the load balancing method; otherwise, it won't work. If bind_conn (PRO) is also used, bind_conn should appear after sticky.

sticky cookie: this mode uses cookies to maintain session persistence.
It is more suitable for situations
where cookies are already used for session management. Here, a client's request,
not yet bound to any server,
is sent to a server
chosen according to the configured balancing method.
Also, Angie sets a cookie
with a unique value identifying the server. The cookie's name is set by the sticky directive, and its value corresponds to the sid parameter of the server directive; note that the parameter is additionally hashed if the sticky_secret directive is set.

Subsequent client requests that contain this cookie
are forwarded to the server identified by the cookie's value,
which is the server with the specified sid.
If selecting a server fails
or the chosen server can't handle the request,
another server is selected
according to the configured balancing method. The directive allows assigning attributes to the cookie;
the only attribute set by default is path=/. Attribute values are specified as strings with variables. To remove an attribute, set an empty value for it (attr=); thus, sticky cookie path= creates a cookie without path.

In the example below, Angie creates a cookie named srv_id with a one-hour lifespan and a variable-specified domain.

sticky route: this mode uses predefined route identifiers
that can be embedded in URLs, cookies, or other request properties.
It is less flexible because it relies on predefined values
but can suit better if such identifiers are already in place. Here, when a proxied server receives a request,
it can assign a route to the client and return its identifier
in a way that both the client and the server are aware of.
The value of the sid parameter
of the server directive
must be used as the route identifier.
Note that the parameter is additionally hashed
if the sticky_secret directive is set. Subsequent requests from clients that wish to use this route
must contain the identifier issued by the server in a way
that ensures it ends up in Angie variables, for example,
in cookie or request arguments. The directive lists the specific variables used for routing.
To select the server to which the incoming request is forwarded,
the first non-empty variable is used;
it is then compared with the sid parameter
of the server directive.
If selecting a server fails
or the chosen server can't handle the request,
another server is selected
according to the configured balancing method. Here,
Angie looks for the route identifier in the route cookie, and then in the route request argument.

sticky learn: this mode uses a dynamically generated key
to associate a client with a particular proxied server;
it's more flexible
because it assigns servers on the go,
stores sessions in a shared memory zone,
and supports different ways of passing session identifiers. Here, a session is created
based on the response from the proxied server.
The create and lookup parameters list variables indicating how new sessions are created and existing sessions are looked up; both parameters can occur multiple times.

The session identifier is the value of the first non-empty variable specified with create; for example, this could be a cookie from the proxied server.

Sessions are stored in a shared memory zone; its name and size are set by the zone parameter. If a session has been inactive for the time set by timeout, it is deleted; the default is 10 minutes.

Subsequent requests from clients that wish to use the session
must contain its identifier,
ensuring that it ends up in a non-empty variable
specified with lookup; its value will then be matched against sessions in shared memory. If selecting a server fails or the chosen server can't handle the request, another server is selected according to the configured balancing method.

The header parameter allows creating a session immediately after receiving headers from the proxied server; without it, a session is created only after processing the request.

In the example below, Angie creates a session, setting a cookie named examplecookie in the response.

The remote_action and remote_result parameters enable dynamically assigning and managing session IDs via remote session storage. Here, the shared memory acts as a local cache, while the remote storage is the authoritative source; thus, the create parameter is incompatible with remote_action, because session IDs need to be created remotely. If a session has been inactive for the time set by timeout, it is deleted; the remote_action setting doesn't affect the timeout, and the default is 10 minutes.

The initial session ID always comes from lookup; if it can be found in the local shared memory, Angie proceeds to select the appropriate peer. If this session ID isn't found locally,
Angie sends a synchronous subrequest to remote storage.
The remote_action parameter sets the URI of the remote storage, which should handle session lookup and creation as follows.

It accepts the session ID from lookup and the locally suggested server ID to be associated with this session via custom header fields or in some other way; on Angie's side, two special variables are provided for this purpose, $sticky_sessid and $sticky_sid, respectively. The $sticky_sid value comes from the sid= setting of the server directive in the upstream block, if it's set; otherwise, it is an MD5 hash of the server's name.

A 200 response from the remote storage
indicates it has accepted the session
and saved it with the suggested values for later retrieval. A 409 response from the remote storage
indicates that this session ID is already populated.
In this case, the response should suggest an alternative session ID
in the X-Sticky-Sid header field. Angie saves this ID in the variable set by the remote_result parameter.

In the example below, Angie creates a session, uses the $cookie_bar variable for the initial session ID, and stores alternative session IDs reported by the remote storage in $upstream_http_x_sticky_sid.

sticky_secret (added in version 1.2.0: Angie, 1.1.0-P1: Angie PRO) adds the string as the salt value to the MD5 hashing function for the sticky directive in cookie and route modes. The string may contain variables, for example, $remote_addr. Salt is appended to the value being hashed; to verify the hashing mechanism independently, run echo -n "<VALUE><SALT>" | md5sum.

sticky_strict (added in version 1.2.0: Angie, 1.1.0-P1: Angie PRO), when enabled, causes Angie to return an HTTP 502 error to the client
if the desired server is unavailable,
rather than using any other available server
as it would when no servers in the upstream are available.

upstream defines a group of servers. Servers can listen on different ports; in addition, servers listening on TCP and UNIX domain sockets can be mixed. By default, requests are distributed between the servers using a weighted round-robin balancing method. In the example shown under the upstream directive below, every 7 requests will be distributed as follows: 5 requests go to backend1.example.com and one request to each of the second and third servers. If an error occurs during communication with a server, the request will be passed to the next server, and so on until all of the functioning servers have been tried. If a successful response could not be obtained from any of the servers, the client will receive the result of the communication with the last server.

zone defines the name and size of the shared memory zone that keeps the group's configuration and run-time state that are shared between worker processes. Several groups may share the same zone; in this case, it is enough to specify the size only once.

The module also provides a number of built-in variables:

$sticky_sessid is used with remote_action in sticky; it stores the initial session ID taken from lookup.

$sticky_sid is used with remote_action in sticky; it stores the server ID tentatively associated with the session.

$upstream_addr keeps the IP address and port, or the path to the UNIX domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas, e.g.: 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock. If an internal redirect from one server group to another happens, initiated by "X-Accel-Redirect" or error_page, then the server addresses from different groups are separated by colons, e.g.: 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80. If a server cannot be selected, the variable keeps the name of the server group.

$upstream_bytes_received keeps the number of bytes received from an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_bytes_sent keeps the number of bytes sent to an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_cache_status keeps the status of accessing a response cache. The status can be either MISS, BYPASS, EXPIRED, STALE, UPDATING, REVALIDATED or HIT.
If the cache was bypassed entirely without accessing it,
the variable isn't set.

$upstream_connect_time keeps time spent on establishing a connection with the upstream server; the time is kept in seconds with millisecond resolution. In case of SSL, includes time spent on handshake. Times of several connections are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_header_time stores time spent on receiving the response header from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_http_<name> stores server response header fields. For example, the "Server" response header field is available through the $upstream_http_server variable. The rules of converting header field names to variable names are the same as for the variables that start with the "$http_" prefix. Only the header fields from the response of the last server are saved.

$upstream_queue_time stores time the request spent in the queue
before a server was selected;
the time is kept in seconds with millisecond resolution.
Times of several selection attempts are separated by commas and colons,
like addresses in the $upstream_addr variable.

$upstream_response_length keeps the length of the response obtained from the upstream server; the length is kept in bytes. Lengths of several responses are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_response_time keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.

$upstream_status keeps the status code of the response obtained from the upstream server. Status codes of several responses are separated by commas and colons like addresses in the $upstream_addr variable. If a server cannot be selected, the variable keeps the 502 (Bad Gateway) status code.

$upstream_sticky_status keeps the status of sticky requests: "" means the request was sent to an upstream without sticky enabled; NEW is a request without sticky information; HIT is a request with sticky information routed to the desired backend; MISS is a request with sticky information routed to the backend selected by the load balancing algorithm. Values from multiple connections are separated by commas and colons, similar to addresses in the $upstream_addr variable.

$upstream_trailer_<name> stores fields from the end of the response obtained from the upstream server.

Configuration Example#
upstream backend {
zone backend 1m;
server backend1.example.com weight=5;
server backend2.example.com:8080;
server backend3.example.com service=_example._tcp resolve;
server unix:/tmp/backend3;
server backup1.example.com:8080 backup;
server backup2.example.com:8080 backup;
}
resolver 127.0.0.53 status_zone=resolver;
server {
location / {
proxy_pass http://backend;
}
}
Directives#
bind_conn (PRO)#
""
and "0"
.bind_conn
directive must be used after all directives
that set the load balancing method;
otherwise, it won't work.
If sticky is also used,
bind_conn
should appear after sticky
.proxy_http_version 1.1;
proxy_set_header Connection "";
map $http_authorization $ntlm {
~*^N(?:TLM|egotiate) 1;
}
upstream ntlm_backend {
server 127.0.0.1:8080;
bind_conn $ntlm;
}
server {
# ...
location / {
proxy_pass http://ntlm_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
feedback (PRO)#
Syntax: feedback variable [inverse] [factor=number] [account=condition_variable] [last_byte];
Default: —
Context: upstream
It adjusts the load balancing decisions dynamically, multiplying each peer's weight by its average feedback value that is affected by the value of a variable over time and is subject to an optional condition set by the inverse, factor, and account parameters.

With factor set to 90, the result has 90% of the previous value and only 10% of the new value. The average is updated only if the account condition variable isn't "" or "0"; combining it with $upstream_probe allows including responses from probes or even excluding everything else. The last_byte parameter takes the feedback value after the full response is received.

Example:
upstream backend {
zone backend 1m;
feedback $feedback_value factor=80 account=$condition_value;
server backend1.example.com;
server backend2.example.com;
}
map $upstream_http_custom_score $feedback_value {
"high" 100;
"medium" 75;
"low" 50;
default 10;
}
map $upstream_probe $condition_value {
"high_priority" "1";
"low_priority" "0";
default "1";
}
This accounts only for the responses from the high_priority probe or responses to regular client requests.

hash#
If the consistent parameter is specified, the ketama consistent hashing method will be used instead. The method ensures that only a few keys will be remapped to different servers when a server is added to or removed from the group. This helps to achieve a higher cache hit ratio for caching servers. The method is compatible with the Cache::Memcached::Fast Perl library with the ketama_points parameter set to 160.
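A minimal sketch of the hash method keyed on the request URI with consistent hashing; the server names are placeholders:

upstream backend {
    # map each request URI to the same server across reloads as much as possible
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}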
ip_hash#
A server that needs to be temporarily removed should be marked with the down parameter in order to preserve the current hashing of client IP addresses.
upstream backend {
ip_hash;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com down;
server backend4.example.com;
}
keepalive#
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.

The keepalive directive must be used after all directives that set the load balancing method; otherwise, it won't work.

upstream memcached_backend {
server 127.0.0.1:11211;
server 10.0.0.2:11211;
keepalive 32;
}
server {
#...
location /memcached/ {
set $memcached_key $uri;
memcached_pass memcached_backend;
}
}
upstream http_backend {
server 127.0.0.1:8080;
keepalive 16;
}
server {
#...
location /http/ {
proxy_pass http://http_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
upstream fastcgi_backend {
server 127.0.0.1:9000;
keepalive 8;
}
server {
#...
location /fastcgi/ {
fastcgi_pass fastcgi_backend;
fastcgi_keep_conn on;
# ...
}
}
keepalive_requests#
keepalive_time#
keepalive_timeout#
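Since keepalive_requests, keepalive_time, and keepalive_timeout are described together above, here is a combined sketch inside an upstream block; the numeric values are illustrative choices, not documented defaults:

upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
    keepalive_requests 1000;   # close a cached connection after this many requests
    keepalive_time 1h;         # limit the total lifetime of a keepalive connection
    keepalive_timeout 60s;     # close idle cached connections after this period
}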
least_conn#
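A minimal sketch enabling the least_conn method described above; the server names are placeholders:

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}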
least_time (PRO)#
Syntax: least_time header | last_byte [factor=number] [account=condition_variable];

The average is updated only if the account condition variable for the response isn't "" or "0"; combining $upstream_probe with account allows including responses from probes or even excluding everything else. The respective moving averages, adjusted for factor and account, are also presented as header_time and response_time in the server's health object among the upstream metrics in the API.
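A sketch of least_time measuring header arrival time with an assumed smoothing factor of 80; the names and values are illustrative:

upstream backend {
    zone backend 1m;
    least_time header factor=80;
    server backend1.example.com;
    server backend2.example.com;
}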
queue (PRO)#
If the queue is full, a 502 (Bad Gateway) error is returned to the client; the same happens if a server is not selected within the time set by timeout (default is 60 seconds). Requests from clients that prematurely close the connection are also removed from the queue; there are counters for requests passing through the queue in the API.

The queue directive must be used after all directives that set the load balancing method; otherwise, it won't work.
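A sketch of a queued upstream, assuming the directive takes the queue length as its first argument plus the timeout= parameter described above; all values here are illustrative:

upstream backend {
    zone backend 1m;
    queue 100 timeout=30s;                     # hold up to 100 requests per worker, 30s each
    server backend1.example.com max_conns=100;
    server backend2.example.com max_conns=100;
}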
random#
The optional two parameter instructs Angie to randomly select two servers and then choose a server using the specified method. The default method is least_conn, which passes a request to a server with the least number of active connections.
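A minimal sketch of the two-choices variant; the server names are placeholders:

upstream backend {
    random two least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}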
response_time_factor (PRO)#
With the factor set to 90, the result has 90% of the previous value and only 10% of the new value. The allowed range is 0 to 99, inclusive.

The respective moving averages are presented as header_time (headers only) and response_time (entire responses) in the server's health object among the upstream metrics in the API. header_time is updated only if all headers are received and processed, and response_time is updated only if the entire response is received.

server#
A UNIX domain socket path is specified after the unix: prefix. If a port is not specified, the port 80 is used. A domain name that resolves to several IP addresses defines multiple servers at once.

weight=number: 1 by default.

max_conns=number: the default value is 0, meaning there is no limit. If the server group does not reside in the shared memory, the limitation works per each worker process.

max_fails=number: sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by fail_timeout to consider the server unavailable; it is then retried after the same duration. When max_fails is reached, the peer is also considered unhealthy by the upstream_probe (PRO) probes; it won't receive client requests until the probes consider it healthy again. If a server in an upstream resolves into multiple peers, its max_fails setting applies to each peer individually; if an upstream contains only one peer after all its server directives are resolved, the max_fails setting has no effect and will be ignored. The default is max_fails=1; max_fails=0 disables the accounting of attempts.

fail_timeout=time: sets the period of time during which a number of unsuccessful attempts to communicate with the server (max_fails) should happen to consider the server unavailable. The server then becomes unavailable for the same amount of time before it is retried. If a server in an upstream resolves into multiple peers, its fail_timeout setting applies to each peer individually; if an upstream contains only one peer after all its server directives are resolved, the fail_timeout setting has no effect and will be ignored.

backup, down, drain: a draining server otherwise behaves similarly to down. backup cannot be used along with the hash, ip_hash, and random load balancing methods; the down and drain options are mutually exclusive.

resolve, service=name: if there are no dots in the service name, the name is prefixed with _, then _tcp is added after a dot; thus, the service name http will result in _http._tcp. If backup is set with server, top-priority SRV records resolve into backup servers, and other records are ignored. If the weight is set by both the server directive and the SRV record, the weight set by server is used. This example will look up the _http._tcp.backend.example.com record:
server backend.example.com service=http resolve;

sid=id: if the parameter is omitted, the ID is set to the hexadecimal MD5 hash value of either the IP address and port or the UNIX domain socket path.

slow_start=time: sets the time to recover the weight for a server that goes back online, if load balancing uses the round-robin or least_conn method. If there's only one server in an upstream, slow_start has no effect and will be ignored.

state (PRO)#
When installing from our packages, a designated directory (/var/lib/angie/state/, or /var/db/angie/state/ on FreeBSD) with appropriate permissions is created to store these files, so you will only need to add the file's basename in the configuration:
upstream backend {
zone backend 1m;
state /var/lib/angie/state/<FILE NAME>;
}
The format of this server list is similar to that of the server directive. The contents of the file change whenever there is any modification to the servers in the /config/http/upstreams/ section via the configuration API. The file is read at Angie start or configuration reload.

For the state directive to be used in an upstream block, the block should have no server directives; instead, it must have a shared memory zone (zone).

sticky#
Syntax:
sticky cookie name [attr=value]...;
sticky route $variable...;
sticky learn zone=zone create=$create_var1... lookup=$lookup_var1... [header] [timeout=time];
sticky learn zone=zone lookup=$lookup_var1... remote_action=uri remote_result=$remote_var [timeout=time];

To drain requests from servers that have sticky defined, use the drain option in the server block.

The sticky directive must be used after all directives that set the load balancing method; otherwise, it won't work. If bind_conn (PRO) is also used, bind_conn should appear after sticky.

The cookie's name (name) is set by the sticky directive, and the value (value) corresponds to the sid parameter of the server directive; note that the parameter is additionally hashed if the sticky_secret directive is set.

The only attribute set by default is path=/. Attribute values are specified as strings with variables. To remove an attribute, set an empty value for it: attr=. Thus, sticky cookie path= creates a cookie without path.

This example creates a cookie named srv_id with a one-hour lifespan and a variable-specified domain:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie srv_id domain=$my_domain max-age=3600;
}
Angie looks for the route identifier in the route cookie, and then in the route request argument:
upstream backend {
server backend1.example.com:8080 "sid=server 1";
server backend2.example.com:8080 "sid=server 2";
sticky route $cookie_route $arg_route;
}
The create and lookup parameters list variables indicating how new sessions are created and existing sessions are looked up. Both parameters can occur multiple times.

The session identifier is the value of the first non-empty variable specified with create; for example, this could be a cookie from the proxied server. Sessions are stored in a shared memory zone whose name and size are set by the zone parameter. If a session has been inactive for the time set by timeout, it is deleted; the default is 10 minutes.

Subsequent requests must carry the identifier in a non-empty variable specified with lookup; its value will then be matched against sessions in shared memory. If selecting a server fails or the chosen server can't handle the request, another server is selected according to the configured balancing method.

The header parameter allows creating a session immediately after receiving headers from the proxied server. Without it, a session is created only after processing the request.

In the example, Angie creates a session, setting a cookie named examplecookie in the response:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky learn
lookup=$cookie_examplecookie
zone=client_sessions:1m;
}
The remote_action and remote_result parameters enable dynamically assigning and managing session IDs via remote session storage. Here, the shared memory acts as a local cache, while the remote storage is the authoritative source. Thus, the create parameter is incompatible with remote_action, because session IDs need to be created remotely. If a session has been inactive for the time set by timeout, it is deleted; the remote_action setting doesn't affect the timeout, and the default is 10 minutes.

The initial session ID comes from lookup; if it can be found in the local shared memory, Angie proceeds to select the appropriate peer. Otherwise, the remote_action parameter sets the URI of the remote storage, which should handle session lookup and creation as follows: it accepts the session ID from lookup and the locally suggested server ID to be associated with this session via custom header fields or in some other way. On Angie's side, two special variables are provided for this purpose: $sticky_sessid and $sticky_sid, respectively. The $sticky_sid value comes from the sid= setting of the server directive in the upstream block, if it's set; otherwise, it is an MD5 hash of the server's name.

If the session ID is already populated, the remote storage should suggest an alternative session ID in the X-Sticky-Sid header field. Angie saves this ID in the variable set by the remote_result parameter.

In this example, Angie creates a session, uses the $cookie_bar variable for the initial session ID, and stores alternative session IDs reported by the remote storage in $upstream_http_x_sticky_sid:
http {
upstream u1 {
server srv1;
server srv2;
sticky learn zone=sz:1m
lookup=$cookie_bar
remote_action=/remote_session
remote_result=$upstream_http_x_sticky_sid;
zone z 1m;
}
server {
listen localhost;
location / {
proxy_pass http://u1/;
}
location /remote_session {
internal;
proxy_set_header X-Sticky-Sessid $sticky_sessid;
proxy_set_header X-Sticky-Sid $sticky_sid;
proxy_set_header X-Sticky-Last $msec;
proxy_pass http://remote;
}
}
}
sticky_secret#
Adds the string as the salt value to the MD5 hashing function for the sticky directive in cookie and route modes. The string may contain variables, for example, $remote_addr:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie cookie_name;
sticky_secret my_secret.$remote_addr;
}
$ echo -n "<VALUE><SALT>" | md5sum
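For instance, with a hypothetical server configured as sid=srv1 and sticky_secret evaluating to my_secret.10.0.0.1 for the client, the issued cookie value should match the output of:

$ echo -n "srv1my_secret.10.0.0.1" | md5sum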
sticky_strict#
upstream#
upstream backend {
server backend1.example.com weight=5;
server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
server unix:/tmp/backend3;
server backup1.example.com backup;
}
zone#
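A sketch of two groups sharing one zone, where the size is specified only once; the names are placeholders:

upstream backend_a {
    zone shared_upstreams 2m;
    server backend1.example.com;
}

upstream backend_b {
    zone shared_upstreams;    # same zone, size already set above
    server backend2.example.com;
}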
Built-in Variables#
The http_upstream module supports the following built-in variables:

$sticky_sessid#
Used with remote_action in sticky; stores the initial session ID taken from lookup.

$sticky_sid#
Used with remote_action in sticky; stores the server ID tentatively associated with the session.

$upstream_addr#

$upstream_bytes_received#

$upstream_bytes_sent#

$upstream_cache_status#
The status can be MISS, BYPASS, EXPIRED, STALE, UPDATING, REVALIDATED or HIT:
MISS: the response isn't found in the cache, and the request is forwarded to the upstream server.
BYPASS: the cache is bypassed, and the request is directly forwarded to the upstream server.
EXPIRED: the cached response is stale, and a new request for the updated content is sent to the upstream server.
STALE: the cached response is stale, but will be served to the clients until an update has been eventually fetched from the upstream server.
UPDATING: the cached response is stale, but will be served to the clients until the currently ongoing update from the upstream server has been finished.
REVALIDATED: the cached response is stale, but is successfully revalidated and doesn't need an update from the upstream server.
HIT: the response was served from the cache.

$upstream_connect_time#

$upstream_header_time#

$upstream_http_<name>#

$upstream_queue_time#

$upstream_response_length#

$upstream_response_time#

$upstream_status#

$upstream_sticky_status#
Possible values: "" (the request was sent to an upstream without sticky enabled), NEW (no sticky information), HIT (routed to the desired backend), MISS (routed to the backend selected by the load balancing algorithm).

$upstream_trailer_<name>#
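As a usage sketch, these variables can be written to the access log via a custom log format defined at the http level; the format name and log path here are arbitrary:

log_format upstream_debug '$remote_addr > $upstream_addr '
                          'status=$upstream_status sticky=$upstream_sticky_status '
                          'connect=$upstream_connect_time response=$upstream_response_time';

access_log /var/log/angie/upstream.log upstream_debug;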