Upstream#
Provides a context for describing groups of servers that can be used in the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives. Added in version 1.9.0: PRO The directive enables starting server selection not from the primary group,
but from the active group, i.e., the one where a server was successfully found last time.
If a server cannot be found in the active group for the next request,
and the search moves to the backup group,
then this group becomes active,
and subsequent requests are first directed to servers in this group. Example: If the balancer switches from primary servers to the backup group,
all subsequent requests are handled by this backup group for 2 minutes.
After 2 minutes elapse, the balancer rechecks the primary servers
and makes them active again if they are working normally. Allows binding a server connection to a client connection when the
value, specified as a string of variables, becomes different from "" and "0". Warning When using the directive, the Proxy module settings
must allow the use of persistent connections, for example: A typical use case for the directive is proxying connections with
NTLM authentication, where it is necessary to ensure client-to-server binding at
the beginning of negotiation: Added in version 1.6.0: PRO Default — upstream Sets up a feedback-based load balancing mechanism in the upstream. The following parameters can be specified: The variable from which the feedback value is taken.
It should represent a performance or health metric;
it is assumed that the server provides it in headers or otherwise. The value is evaluated with each response from the server
and is factored into the moving average
according to the inverse and factor settings. If the inverse parameter is set, the feedback value is interpreted inversely:
lower values indicate better performance. The factor by which the feedback value is considered
when calculating the average.
Valid values are integers from 0 to 99.
Default is 90. The average is calculated using the exponential smoothing formula. The larger the factor, the less the new values affect the average.
The account parameter specifies a condition variable
that controls which responses are considered in the calculation.
The average value is updated with the feedback value from the response
only if the condition variable of that response
is not equal to "" or "0". Note By default, responses during active checks
are not included in the calculation;
combining the $upstream_probe variable
with Allows processing data from the proxied server after receiving
the complete response, not just the header. Example: This configuration categorizes server responses by feedback levels
based on specific scores from response header fields,
and also adds a condition on $upstream_probe
to consider only responses from the Specifies a load balancing method for a server group where the client-server mapping is based on the hashed key value. The key can contain text, variables, and their combinations. Note that adding or removing a server from the group may result in remapping most of the keys to different servers. The method is compatible with the Cache::Memcached Perl library. If the Specifies a load balancing method for the group where requests are distributed among servers based on client IP addresses. The first three octets of the IPv4 address or the entire IPv6 address are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server, except when this server is considered unavailable. In this case, the client requests will be passed to another server, which will most likely be the same server for that client as well. If one of the servers needs to be temporarily removed, it should be marked with the Activates the cache for connections to upstream servers. The Note It should be particularly noted that the keepalive directive does not limit the total number of connections to upstream servers that an Angie worker process can open. The Warning The Example configuration of memcached upstream with keepalive connections: For HTTP, the proxy_http_version directive should be set to "1.1" and the Note Alternatively, HTTP/1.0 persistent connections can be used by passing the "Connection: Keep-Alive" header field to an upstream server, though this method is not recommended. For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work: Note SCGI and uwsgi protocols do not define a semantics for keepalive connections. Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests are made, the connection is closed. Closing connections periodically is necessary to free per-connection memory allocations. Therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended. Limits the maximum time during which requests can be processed through one keepalive connection. After this time is reached, the connection is closed following the subsequent request processing. Sets a timeout during which an idle keepalive connection to an upstream server will stay open. Specifies that a group should use a load balancing method where a request is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method. Default — upstream Specifies that the group should use a load balancing method where an active
server's chance of receiving the request is inversely proportional to its
average response time; the less it is, the more requests a server gets. The directive accounts for the average time to receive response headers. The directive uses the average time to receive the entire response. Added in version 1.7.0: PRO Serves the same purpose as response_time_factor (PRO)
and overrides it if the parameter is set. Specifies a condition variable
that controls which responses are included in the calculation.
The average is updated
only if the condition variable for the response
is not "" or "0". Note By default, responses during active health checks
are not included in the calculation;
combining the $upstream_probe variable
with Current values are presented as Added in version 1.4.0: PRO If it is not possible to assign a proxied server to a request on the first attempt
(for example, during a brief service interruption
or when there is a surge in load reaching the max_conns limit),
the request is not rejected;
instead, Angie attempts to enqueue it for processing. The number parameter of the directive sets the maximum number of requests
in the queue for a worker process.
If the queue is full,
a Note The logic of the proxy_next_upstream directive also applies to queued requests.
Specifically, if a server was selected for a request
but it cannot be handed over to it,
the request may be returned to the queue. If a server is not selected to process a queued request
within the time set by Warning The Specifies a load balancing method for the group where a request is passed to a randomly selected server, taking into account server weights. If the optional Sets the smoothing factor for the previous value when calculating the average response time for the least_time (PRO) load balancing method using the exponentially weighted moving average formula. The larger the specified number, the less new values affect the average; if
Current calculation results are presented as Note Only successful responses are included in the calculation; what is considered an unsuccessful
response is defined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. Additionally, the Defines the address and other parameters of a server. The address can be specified as a domain name or IP address, with an optional port, or as a UNIX socket path specified after the The following parameters can be defined: Sets the weight of the server. Default is 1. Limits the maximum number of simultaneous active connections to the proxied server. Default value is Note If idle keepalive connections, multiple worker processes, and the shared memory are enabled, the total number of active and idle connections to the proxied server may exceed the What is considered an
unsuccessful attempt is defined by the proxy_next_upstream,
fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, memcached_next_upstream, and
grpc_next_upstream directives. When Note If a If an upstream contains only one server
after all its the default number of attempts disables the accounting of attempts By default, this is set to 10 seconds. Note If a If an upstream contains only one server
after all its marks the server as a backup server. It will be passed requests when the primary servers are unavailable. If the backup_switch (PRO) directive is configured,
its active backup logic is also applied. marks the server as permanently unavailable. marks the server as draining; this means
it receives only requests from the sessions
that were bound earlier with sticky.
Otherwise it behaves similarly to down. Added in version 1.1.0. The resolve parameter enables monitoring changes to the list of IP addresses that corresponds
to a domain name, updating it without a configuration reload.
The group should be stored in a
shared memory zone;
also, you need to define a
resolver. enables resolving DNS SRV records and sets the service name.
For this parameter to work, specify the resolve server parameter,
providing a hostname without a port number. If there are no dots in the service name,
the name is formed according to the RFC standard:
the service name is prefixed with Angie resolves the SRV records
by combining the normalized service name and the hostname
and obtaining the list of servers for the combination via DNS,
along with their priorities and weights. Top-priority SRV records
(ones that share the minimum priority value)
resolve into primary servers,
and other records become backup servers.
If Weight is similar to the This example will look up the Added in version 1.2.0: Angie Added in version 1.1.0-P1: Angie PRO sets the server ID within the group. If the parameter is not set,
the ID is set to the hexadecimal MD5 hash
of the IP address and port or the UNIX socket path. Added in version 1.4.0. Sets the time for a server
returning to service
to recover its weight
when load balancing uses the
round-robin or least_conn method. If the parameter is set
and the server is again considered healthy
after a failure
as defined by max_fails and upstream_probe (PRO),
the server will gradually recover its designated weight
within the specified timeframe. If the parameter is not set,
in a similar situation
the server will immediately start working with its designated weight. Note If there's only one Added in version 1.2.0: PRO Specifies the file where the upstream's server list is persistently stored.
When installing from
our packages,
a designated
The server list format here is similar to Warning To use the Added in version 1.2.0: Angie Added in version 1.1.0-P1: Angie PRO Default — upstream Configures the binding of client sessions to proxied servers
in the mode specified by the first parameter;
to drain servers
that have the Warning The This mode uses cookies to maintain session persistence.
It is more suitable for situations
where cookies are already used for session management. Here, a client's request,
not yet bound to any server,
is sent to a server
chosen according to the configured balancing method.
Also, Angie sets a cookie
with a unique value identifying the server. The cookie's name ( Subsequent client requests that contain this cookie
are forwarded to the server identified by the cookie's value,
which is the server with the specified sid.
If selecting a server fails
or the chosen server can't handle the request,
another server is selected
according to the configured balancing method. The directive allows assigning attributes to the cookie;
the only attribute set by default is Here,
Angie creates a cookie named This mode uses predefined route identifiers
that can be embedded in URLs, cookies, or other request properties.
It is less flexible because it relies on predefined values
but can suit better if such identifiers are already in place. Here, when a proxied server receives a request,
it can assign a route to the client and return its identifier
in a way that both the client and the server are aware of.
The value of the sid parameter
of the server directive
must be used as the route identifier.
Note that the parameter is additionally hashed
if the sticky_secret directive is set. Subsequent requests from clients that wish to use this route
must contain the identifier issued by the server in a way
that ensures it ends up in Angie variables, for example,
in cookie or request arguments. The directive lists the specific variables used for routing.
To select the server to which the incoming request is forwarded,
the first non-empty variable is used;
it is then compared with the sid parameter
of the server directive.
If selecting a server fails
or the chosen server can't handle the request,
another server is selected
according to the configured balancing method. Here,
Angie looks for the route identifier in the This mode uses a dynamically generated key
to associate a client with a particular proxied server;
it's more flexible
because it assigns servers on the go,
stores sessions in a shared memory zone,
and supports different ways of passing session identifiers. Here, a session is created
based on the response from the proxied server.
The The session identifier is the value of the first non-empty variable
specified with Sessions are stored in a shared memory zone;
its name and size are set by the By default, Angie extends the session lifetime,
updating the last access timestamp on each use.
The Subsequent requests from clients that wish to use the session
must contain its identifier,
ensuring that it ends up in a non-empty variable
specified with The In the example, Angie creates a session,
setting a cookie named The The initial session ID always comes from If this session ID isn't found locally,
Angie sends a synchronous subrequest to remote storage.
The The storage accepts the session ID from On Angie's side, two special variables are provided for this purpose:
$sticky_sessid and $sticky_sid, respectively.
The A response with code 200, 201, or 204 from the remote storage
indicates that it has accepted the session
and saved it with the suggested values for future use,
or that the session already exists and matches the suggested one. A 409 response from the remote storage
indicates that this session ID already exists,
but with a different server.
In this case, the response should contain an alternative session ID
in a header that Angie can use to select the server. Angie extracts this identifier
from the remote storage response using the variable
specified by the In the following example, Angie creates a session,
uses the Each time there's a local record miss or timeout expiration
(considering The Below is a simplified configuration example.
The remote storage returns the session ID in the With the following response from the remote storage: Two variables become available: Since the variable Added in version 1.2.0: Angie Added in version 1.1.0-P1: Angie PRO Adds the string as the salt value to the MD5 hashing function
for the sticky directive in The salt is appended to the value being hashed;
to verify the hashing mechanism independently: Added in version 1.2.0: Angie Added in version 1.1.0-P1: Angie PRO When enabled, causes Angie to return an HTTP 502 error to the client
if the desired server is unavailable,
rather than using any other available server
as it would when no servers in the upstream are available. Defines a group of servers. Servers can listen on different ports. In addition, servers listening on TCP and UNIX domain sockets can be mixed. Example: By default, requests are distributed between the servers using a weighted round-robin balancing method. In the above example, each 7 requests will be distributed as follows: 5 requests go to backend1.example.com and one request to each of the second and third servers. If an error occurs during communication with a server, the request will be passed to the next server, and so on until all of the functioning servers will be tried. If a successful response could not be obtained from any of the servers, the client will receive the result of the communication with the last server. Defines the name and size of the shared memory zone that keeps the group's configuration and run-time state that are shared between worker processes. Several groups may share the same zone. In this case, it is enough to specify the size only once. The Used with Used with stores the IP address and port, or the path to the UNIX domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas, e.g.: 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock If an internal redirect from one server group to another happens, initiated by 192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80 If a server cannot be selected, the variable keeps the name of the server group. number of bytes received from an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable. number of bytes sent to an upstream server. Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable. keeps the status of accessing a response cache. The status can be either
If the cache was bypassed entirely without accessing it,
the variable isn't set. keeps time spent on establishing a connection with the upstream server; the time is kept in seconds with millisecond resolution. In case of SSL, includes time spent on handshake. Times of several connections are separated by commas and colons like addresses in the $upstream_addr variable. keeps time spent on receiving the response header from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keep server response header fields. For example, the keeps time the request spent in the queue
before a server was selected;
the time is kept in seconds with millisecond resolution.
Times of several selection attempts are separated by commas and colons,
like addresses in the $upstream_addr variable. keeps the length of the response obtained from the upstream server; the length is kept in bytes. Lengths of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable. keeps status code of the response obtained from the upstream server. Status codes of several responses are separated by commas and colons like addresses in the $upstream_addr variable. If a server cannot be selected, the variable keeps the 502 (Bad Gateway) status code. Status of sticky requests. Request sent to upstream without sticky enabled. Request without sticky information. Request with sticky information routed to the desired server. Request with sticky information routed to the server selected by the
load balancing algorithm. Values from multiple connections are separated by commas and colons, similar to
addresses in the $upstream_addr variable. keeps fields from the end of the response obtained from the upstream server.
Configuration Example#
upstream backend {
zone backend 1m;
server backend1.example.com weight=5;
server backend2.example.com:8080;
server backend3.example.com service=_example._tcp resolve;
server unix:/tmp/backend3;
server backup1.example.com:8080 backup;
server backup2.example.com:8080 backup;
}
resolver 127.0.0.53 status_zone=resolver;
server {
location / {
proxy_pass http://backend;
}
}
Directives#
backup_switch (PRO)#
permanent
parameter is defined without a time value,
the group remains active after selection,
and automatic rechecking of groups with lower levels does not occur.
If time is specified,
the active status of the group expires after the specified interval,
and the balancer again checks groups with lower levels,
returning to them if the servers are working normally.
upstream my_backend {
server primary1.example.com;
server primary2.example.com;
server backup1.example.com backup;
server backup2.example.com backup;
backup_switch permanent=2m;
}
bind_conn (PRO)#
""
and
"0"
.bind_conn
directive must be used after all directives
that set a load balancing method,
otherwise it will not work.
If it is used together with the sticky directive,
then bind_conn
must come after sticky
.proxy_http_version 1.1;
proxy_set_header Connection "";
map $http_authorization $ntlm {
~*^N(?:TLM|egotiate) 1;
}
upstream ntlm_backend {
server 127.0.0.1:8080;
bind_conn $ntlm;
}
server {
# ...
location / {
proxy_pass http://ntlm_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
feedback (PRO)#
feedback variable [inverse] [factor=number] [account=condition_variable] [last_byte];
It dynamically adjusts balancing decisions
by multiplying the weight of each proxied server by the average feedback value,
which changes over time depending on the value of the variable
and is subject to an optional condition.variable
inverse
and factor
settings.inverse
factor
90
.90
is specified, then 90% of the previous value
and only 10% of the new value will be taken.account
""
or "0"
.account
allows including these responses
or even excluding everything else.last_byte
upstream backend {
zone backend 1m;
feedback $feedback_value factor=80 account=$condition_value;
server backend1.example.com;
server backend2.example.com;
}
map $upstream_http_custom_score $feedback_value {
"high" 100;
"medium" 75;
"low" 50;
default 10;
}
map $upstream_probe $condition_value {
"high_priority" "1";
"low_priority" "0";
default "1";
}
high_priority
active check
or responses to regular client requests.
hash#
consistent
parameter is specified, the ketama consistent hashing method will be used instead. The method ensures that only a few keys will be remapped to different servers when a server is added to or removed from the group. This helps to achieve a higher cache hit ratio for caching servers. The method is compatible with the Cache::Memcached::Fast Perl library with the ketama_points
parameter set to 160.
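As an illustration, a minimal sketch of the hash method keyed on the request URI with consistent hashing; the key and server names here are assumptions rather than part of the original example set:
upstream backend {
    zone backend 1m;
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}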
ip_hash#
down
parameter to preserve the current hashing of client IP addresses:
upstream backend {
ip_hash;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com down;
server backend4.example.com;
}
keepalive#
connections
parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.connections
parameter should be set to a number small enough to let upstream servers process new incoming connections as well.keepalive
directive must be used after all directives that set the
load balancing method; otherwise, it won't work.
upstream memcached_backend {
server 127.0.0.1:11211;
server 10.0.0.2:11211;
keepalive 32;
}
server {
#...
location /memcached/ {
set $memcached_key $uri;
memcached_pass memcached_backend;
}
}
Connection
header field should be cleared:
upstream http_backend {
server 127.0.0.1:8080;
keepalive 16;
}
server {
#...
location /http/ {
proxy_pass http://http_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# ...
}
}
upstream fastcgi_backend {
server 127.0.0.1:9000;
keepalive 8;
}
server {
#...
location /fastcgi/ {
fastcgi_pass fastcgi_backend;
fastcgi_keep_conn on;
# ...
}
}
keepalive_requests#
keepalive_time#
keepalive_timeout#
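For the keepalive_requests, keepalive_time, and keepalive_timeout directives, a combined sketch alongside the keepalive cache; the numbers are illustrative values, not recommendations:
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
    keepalive_requests 1000;
    keepalive_time 1h;
    keepalive_timeout 60s;
}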
least_conn#
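A minimal sketch of the least_conn method; the server names are illustrative:
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}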
least_time (PRO)#
least_time header | last_byte [factor=number] [account=condition_variable];
header
last_byte
factor
account
""
or "0"
.account
allows including these responses
or even excluding everything else.header_time
(headers only)
and response_time
(entire responses) in the server's health
object
among the upstream metrics in the API.
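A minimal sketch of least_time balancing on full response time with a smoothing factor; the zone name and factor value are illustrative:
upstream backend {
    zone backend 1m;
    least_time last_byte factor=80;
    server backend1.example.com;
    server backend2.example.com;
}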
queue (PRO)#
502 (Bad Gateway)
error is returned to the client.timeout
(default is 60 seconds),
a 502 (Bad Gateway)
error is returned to the client.
Requests from clients that prematurely close the connection are also removed from the queue;
there are counters for the states of requests passing through the queue in the API.queue
directive must be used after all directives that set the
load balancing method; otherwise, it won't work.
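By way of illustration, and assuming the number-plus-timeout form described by the parameters above, a queue of 100 requests with a 30-second timeout:
upstream backend {
    server backend1.example.com max_conns=100;
    server backend2.example.com max_conns=100;
    queue 100 timeout=30s;
}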
random#
two
parameter is specified, Angie randomly selects two servers, then selects one of them using the least_conn method, where a request is passed to the server with the least number of active connections.
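A minimal sketch of the random method with the two parameter; the server names are illustrative:
upstream backend {
    random two;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}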
response_time_factor (PRO)#
90
is specified, 90% of the previous value and only 10% of
the new value will be taken. Allowed values are from 0 to 99, inclusive.header_time
(headers only) and response_time
(entire responses) in the server's health
object among the upstream metrics in the API.header_time
value is recalculated only if all headers are received and processed, and
response_time
is recalculated only if the entire response is received.
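A sketch that combines least_time with response_time_factor; the smoothing value of 95 is illustrative and assumes the single-number syntax described above:
upstream backend {
    least_time last_byte;
    response_time_factor 95;
    server backend1.example.com;
    server backend2.example.com;
}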
server#
unix:
prefix. If a port is not specified, port 80 is used. A domain name that resolves to several IP addresses defines multiple servers at once.weight=
numbermax_conns=
number0
, meaning there is no limit. If the server group does not reside in the shared memory, the limitation works per each worker process.max_conns
value.max_fails=
number — sets the number of unsuccessful attempts
to communicate with the server
that should happen in the duration set by fail_timeout
to consider the server unavailable;
it is then retried after the same duration.max_fails
is reached, the server is also considered unhealthy by
the upstream_probe (PRO) probes; it won't receive client requests until
the probes consider it healthy again.server
directive in a group resolves into multiple servers,
its max_fails
setting applies to each server individually.server
directives are resolved,
the max_fails
setting has no effect and will be ignored.max_fails=1
max_fails=0
fail_timeout=
time — sets the period of time during which a specified number
of unsuccessful attempts to communicate with the server
(max_fails) should happen to consider the server unavailable.
The server then remains unavailable for the same amount of time
before it is retried.server
directive in a group resolves into multiple servers,
its fail_timeout
setting applies to each server individually.server
directives are resolved,
the fail_timeout
setting has no effect and will be ignored.backup
down
drain
(PRO)down
.backup
cannot be used along with the hash, ip_hash, and random load
balancing methods.down
and drain
parameters are mutually exclusive.resolve
service=
name_
,
then _tcp
is added after a dot.
Thus, the service name http
will result in _http._tcp
.backup
is set with server
,
top-priority SRV records resolve into backup servers,
and other records are ignored.weight
parameter of the server
directive.
If weight is set by both the directive and the SRV record,
the weight set by the directive is used._http._tcp.backend.example.com
record:server backend.example.com service=http resolve;
sid=
idslow_start=
timeserver
in an upstream,
slow_start
has no effect and will be ignored.
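To show several of the server parameters described above in one place (all values are illustrative):
upstream backend {
    zone backend 1m;
    server backend1.example.com weight=5 max_conns=200 max_fails=3 fail_timeout=30s slow_start=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backup1.example.com backup;
}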
state (PRO)#
/var/lib/angie/state/
(/var/db/angie/state/
on FreeBSD)
directory with appropriate permissions
is created to store these files,
so you only need to add the filename in the configuration:
upstream backend {
zone backend 1m;
state /var/lib/angie/state/<FILE NAME>;
}
server
.
The file contents change whenever servers are modified in the
/config/http/upstreams/ section
via the configuration API.
The file is read at Angie startup or configuration reload.state
directive in an upstream
block,
there should be no server
directives in it,
but a shared memory zone (zone) is required.
sticky#
sticky cookie name [attr=value]...;
sticky route $variable...;
sticky learn zone=zone create=$create_var1... lookup=$lookup_var1... [header] [norefresh] [timeout=time];
sticky learn [zone=zone] lookup=$lookup_var1... remote_action=uri remote_result=$remote_var [norefresh] [timeout=time];
sticky
directive configured,
you can use the drain
option (PRO) in the server block.sticky
directive must be used after all directives
that set a load balancing method;
otherwise, it won't work.
If it's used together with the bind_conn (PRO) directive,
bind_conn
should appear after sticky
.name
) is set by the sticky
directive,
and the value (value
) corresponds
to the sid parameter
of the server directive.
Note that the parameter is additionally hashed
if the sticky_secret directive is set.path=/
.
Attribute values are specified as strings with variables.
To remove an attribute, set an empty value for it: attr=
.
Thus, sticky cookie path=
creates a cookie without path
.srv_id
with a one-hour lifespan
and a variable-specified domain:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie srv_id domain=$my_domain max-age=3600;
}
route
cookie,
and then in the route
request argument:
upstream backend {
server backend1.example.com:8080 "sid=server 1";
server backend2.example.com:8080 "sid=server 2";
sticky route $cookie_route $arg_route;
}
create
and lookup
parameters list variables
indicating how new sessions are created
and existing sessions are looked up.
Both parameters can occur multiple times.create
;
for example, this could be a
cookie from the proxied server.zone
parameter.
If a session has been inactive for the time set by timeout
,
it is deleted.
The default is 10 minutes.norefresh
parameter disables this behavior:
the session will expire strictly by timeout, even if it continues to be used.
This mode is useful
when forced session termination after a time period is required,
for example, when integrating with external session managers.lookup
;
its value will then be matched against sessions in shared memory.
If selecting a server fails
or the chosen server can't handle the request,
another server is selected
according to the configured balancing method.header
parameter allows creating a session
immediately after receiving response headers from the proxied server.
Without it, a session is created only after request processing is complete.examplecookie
in the response:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky learn
create=$upstream_cookie_examplecookie
lookup=$cookie_examplecookie
zone=client_sessions:1m;
}
remote_action
and remote_result
parameters
enable dynamically assigning and managing session IDs
via remote session storage.
Here, the shared memory zone acts as a local cache,
while the remote storage is the authoritative source.
Thus, the create
parameter
is incompatible with remote_action
because session IDs need to be created remotely.
If a session has been inactive for the time set by timeout
,
it is deleted.
The remote_action
setting doesn't affect the timeout.
The default is 10 minutes.lookup
;
if it can be found in the local shared memory,
Angie proceeds to select the appropriate server.remote_action
parameter sets the URI of the remote storage,
which should handle session lookup and creation as follows:lookup
and the locally suggested server ID associated with this session
via custom headers or in some other way.sticky_sid
contains the value of the sid=
parameter
from the server
directive in the upstream block, if set,
or an MD5 hash of the server name.remote_result
parameter
(for example, $upstream_http_x_sticky_sid
).
The remote_result
parameter uses variables
with the upstream_http_
prefix,
which provide dynamic access
to HTTP response headers from the remote storage.
For example, the header X-Sid: server1
becomes available in the variable $upstream_http_x_sid
with the value server1
.$cookie_bar
variable for the initial session ID,
and stores alternative session IDs returned by the remote storage
in $upstream_http_x_sticky_sid:
http {
upstream u1 {
server srv1;
server srv2;
sticky learn zone=sz:1m
lookup=$cookie_bar
remote_action=/remote_session
remote_result=$upstream_http_x_sticky_sid;
zone z 1m;
}
server {
listen localhost;
location / {
proxy_pass http://u1/;
}
location /remote_session {
internal;
proxy_set_header X-Sticky-Sessid $sticky_sessid;
proxy_set_header X-Sticky-Sid $sticky_sid;
proxy_set_header X-Sticky-Last $msec;
proxy_pass http://remote;
}
}
}
norefresh
), a subrequest is made to the resource
specified in remote_action
.zone
parameter in the sticky
configuration is optional.
If not set,
Angie relies entirely on the remote storage:
it doesn't cache sessions locally
(though it allows caching storage responses via proxy_cache
)
and contacts the remote storage every time
a session needs to be retrieved or created.X-Sid
header
and thus confirms or overrides Angie's choice:
http {
proxy_cache_path c1 keys_zone=s1:1m;
upstream tc_0 {
server 10.0.0.1 sid=web-server-01;
server 10.0.0.2 sid=web-server-02;
sticky learn
lookup=$arg_id
remote_action=@create_session
remote_result=$upstream_http_x_sid;
}
server {
listen 127.0.0.1:8080;
location / {
proxy_pass http://tc_0/;
}
# Request to remote session storage
location @create_session {
internal;
proxy_set_header X-Sticky-Sessid $sticky_sessid;
proxy_set_header X-Sticky-Sid $sticky_sid;
proxy_set_header X-Sticky-Last $msec;
proxy_pass http://session_backend;
proxy_connect_timeout 1s;
proxy_read_timeout 1s;
proxy_cache s1;
proxy_cache_valid 200 1d;
proxy_cache_key "$scheme$proxy_host$request_uri$sticky_sessid";
}
}
}
HTTP/1.1 200 OK
...
X-Sid: web-server-01
X-Session-Backend: backend-pool-1
$upstream_http_x_sid
,
with the value web-server-01
;$upstream_http_x_session_backend
,
with the value backend-pool-1
.$upstream_http_x_sid
is specified in the remote_result
parameter,
its value will be used
to select the server with sid=web-server-01
.sticky_secret#
cookie
and route
modes.
The string may contain variables, for example, $remote_addr:
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
sticky cookie cookie_name;
sticky_secret my_secret.$remote_addr;
}
$ echo -n "<VALUE><SALT>" | md5sum
sticky_strict#
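As an illustration, and assuming the on/off syntax for the directive, combined with cookie-based sticky sessions:
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    sticky cookie srv_id;
    sticky_strict on;
}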
upstream#
upstream backend {
server backend1.example.com weight=5;
server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
server unix:/tmp/backend3;
server backup1.example.com backup;
}
zone#
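A sketch of two groups sharing one zone; as noted in the description, it is enough to specify the size only once (the zone name and size are illustrative):
upstream backend_one {
    zone shared_backends 1m;
    server backend1.example.com;
}
upstream backend_two {
    zone shared_backends;
    server backend2.example.com;
}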
Built-in Variables#
http_upstream
module supports the following built-in variables:$sticky_sessid
#remote_action
in sticky;
stores the initial session ID taken from lookup
.$sticky_sid
#remote_action
in sticky;
stores the server ID previously associated with the session.$upstream_addr
#X-Accel-Redirect
or error_page, then the server addresses from different groups are separated by colons, e.g.:$upstream_bytes_received
#$upstream_bytes_sent
#$upstream_cache_status
#MISS
, BYPASS
, EXPIRED
, STALE
, UPDATING
,
REVALIDATED
, or HIT
:MISS
: The response isn't found in the cache,
and the request is forwarded to the upstream server.BYPASS
: The cache is bypassed,
and the request is directly forwarded to the upstream server.EXPIRED
: The cached response is stale,
and a new request for the updated content is sent to the upstream server.STALE
: The cached response is stale,
but will be served to the clients
until an update has been eventually fetched from the upstream server.UPDATING
: The cached response is stale,
but will be served to the clients
until the currently ongoing update from the upstream server has been finished.REVALIDATED
: The cached response is stale,
but is successfully revalidated
and doesn't need an update from the upstream server.HIT
: The response was served from the cache.$upstream_connect_time
#$upstream_header_time
#$upstream_http_<name>
#Server
response header field is available through the $upstream_http_server
variable. The rules of converting header field names to variable names are the same as for the variables that start with the $http_
prefix. Only the header fields from the response of the last server are saved.$upstream_queue_time
#$upstream_response_length
#$upstream_response_time
#$upstream_status
#$upstream_sticky_status
#""
NEW
HIT
MISS
$upstream_trailer_<name>
#
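To show how these variables are typically consumed, a hedged sketch of an access log format built from several of them; the format name, field selection, and log path are illustrative:
log_format upstream_log '$remote_addr [$time_local] "$request" '
                        'upstream=$upstream_addr status=$upstream_status '
                        'connect=$upstream_connect_time response=$upstream_response_time '
                        'cache=$upstream_cache_status sticky=$upstream_sticky_status';
access_log /var/log/angie/upstream.log upstream_log;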