API#
The module provides an HTTP RESTful interface for accessing, in JSON format, basic information about a web server instance, as well as metrics of client connections, shared memory zones, DNS queries, HTTP requests, the HTTP response cache, TCP/UDP sessions of the Stream module, and zones of the Limit Conn, Limit Req, and Upstream modules.
The API accepts GET and HEAD HTTP requests; other request methods cause an error:
{
"error": "MethodNotAllowed",
"description": "The POST method is not allowed for the requested API element \"/\"."
}
The Angie PRO API has a dynamic configuration section that allows updating the settings without reloading the configuration or restarting Angie itself; currently, it enables configuring individual peers in an upstream.

The api directive enables the HTTP RESTful interface; its path parameter is mandatory and works similarly to the alias directive, but operates on the API tree rather than the filesystem hierarchy. When specified in a prefix location, the part of the request URI matching the prefix location is replaced by the path given in the parameter. It is also possible to use variables: api /status/$module/server_zones/$name/. The directive can also be specified inside a regexp location; there, similar to alias, the parameter defines the whole path to the API element, which results after interpolation of the captured variables. Note: in Angie PRO, you can decouple the dynamic configuration API
from the immutable metrics API that reflects the current state. Overall, this serves for precise configuration of API access rights, for example by exposing the /config/ and /status/ subtrees in separate locations with different access rules.

The api_config_files directive enables or disables the config_files object in the /status/angie/ section. By default, the object is disabled, because the config files can contain sensitive, confidential details.

Angie exposes usage metrics in the /status/ section of the API. Subtree access, already discussed earlier, lets a location expose only part of the tree. Each JSON branch can be requested separately with the request constructed accordingly. Note: by default, the module uses ISO 8601 strings for date values; to use the integer epoch format instead, add the date=epoch parameter to the query string.

The /status/angie section reports:
version: String; version of the running Angie web server
build: String; particular build name, if specified during compilation
address: String; the address of the server that accepted the API request
generation: Number; total number of configuration reloads since the last start
load_time: String or number; time of the last configuration reload, formatted as a date; strings have millisecond resolution
config_files: Object; its members are the absolute pathnames of all Angie configuration files currently loaded by the server instance, and their values are string representations of the files' contents

Caution: the config_files object is available only if the api_config_files directive is enabled.

The /status/connections section reports:
accepted: Number; the total number of accepted client connections
dropped: Number; the total number of dropped client connections
active: Number; the current number of active client connections
idle: Number; the current number of idle client connections

To collect resolver statistics,
the resolver directive must set the status_zone parameter. The specified shared memory zone will collect the following statistics:

queries: Object; query statistics:
name: Number; the number of queries to resolve names to addresses (A and AAAA queries)
srv: Number; the number of queries to resolve services to addresses (SRV queries)
addr: Number; the number of queries to resolve addresses to names (PTR queries)

responses: Object; response statistics:
success: Number; the number of successful responses
timedout: Number; the number of timed-out queries
format_error: Number; the number of responses with code 1 (Format Error)
server_failure: Number; the number of responses with code 2 (Server Failure)
not_found: Number; the number of responses with code 3 (Name Error)
unimplemented: Number; the number of responses with code 4 (Not Implemented)
refused: Number; the number of responses with code 5 (Refused)
other: Number; the number of queries completed with another non-zero code

sent: Object; sent DNS query statistics:
a: Number; the number of A-type queries
aaaa: Number; the number of AAAA-type queries
ptr: Number; the number of PTR-type queries
srv: Number; the number of SRV-type queries

The response codes are described in RFC 1035, section 4.1.1. The various DNS record types are detailed in RFC 1035, RFC 2782, and RFC 3596.

To collect the HTTP server metrics, set the status_zone directive in the server context. To group the metrics by a custom value, use the alternative syntax; for example, the metrics can be aggregated by $host, with each group reported as a standalone zone. The specified shared memory zone will collect the following statistics:

ssl: Object; SSL statistics, present if the server sets listen ssl:
handshaked: Number; the total number of successful SSL handshakes
reuses: Number; the total number of session reuses during SSL handshake
timedout: Number; the total number of timed-out SSL handshakes
failed: Number; the total number of failed SSL handshakes

requests: Object; request statistics:
total: Number; the total number of client requests
processing: Number; the number of client requests currently being processed
discarded: Number; the total number of client requests completed without sending a response

responses: Object; response statistics:
<code>: Number; a non-zero number of responses with status <code> (100-599)
xxx: Number; a non-zero number of responses with other status codes

data: Object; data statistics:
received: Number; the total number of bytes received from clients
sent: Number; the total number of bytes sent to clients

To collect the location metrics, set the status_zone directive in the context of location or of if in location; grouping by a custom value is likewise supported. The specified shared memory zone collects the same requests, responses, and data statistics, except that requests has no processing counter.

To collect the stream server metrics, set the status_zone directive in the server context of the Stream module; grouping by a custom value is likewise supported. The specified shared memory zone will collect the following statistics:

ssl: Object; SSL statistics, present if the server sets listen ssl (handshaked, reuses, timedout, failed, as above)

connections: Object; connection statistics:
total: Number; the total number of client connections
processing: Number; the number of client connections currently being processed
discarded: Number; the total number of client connections completed without creating a session
passed: Number; the total number of client connections relayed to another listening port with pass directives

sessions: Object; session statistics:
success: Number; the number of sessions completed with code 200, which means successful completion
invalid: Number; the number of sessions completed with code 400, which happens when client data could not be parsed, e.g. the PROXY protocol header
forbidden: Number; the number of sessions completed with code 403, when access was forbidden, for example, when access is limited for certain client addresses
internal_error: Number; the number of sessions completed with code 500, the internal server error
bad_gateway: Number; the number of sessions completed with code 502, bad gateway, for example, if an upstream server could not be selected or reached
service_unavailable: Number; the number of sessions completed with code 503, service unavailable, for example, when access is limited by the number of connections

data: Object; data statistics:
received: Number; the total number of bytes received from clients
sent: Number; the total number of bytes sent to clients

For each zone configured with proxy_cache, the following data is stored:
size: Number; the current size of the cache
max_size: Number; the configured limit on the maximum size of the cache
cold: Boolean; true while the cache loader loads data from disk
hit: Object; statistics of valid cached responses (proxy_cache_valid), with responses and bytes counters
stale: Object; statistics of expired responses taken from the cache (proxy_cache_use_stale), with responses and bytes counters
updating: Object; statistics of expired responses taken from the cache while responses were being updated (proxy_cache_use_stale updating), with responses and bytes counters
revalidated: Object; statistics of expired and revalidated responses taken from the cache (proxy_cache_revalidate), with responses and bytes counters
miss: Object; statistics of responses not found in the cache, with responses, bytes, responses_written, and bytes_written counters
expired: Object; statistics of expired responses not taken from the cache, with the same four counters
bypass: Object; statistics of responses not looked up in the cache (proxy_cache_bypass), with the same four counters

Added in version 1.2.0: PRO. In Angie PRO, if cache sharding is enabled with proxy_cache_path directives, individual shards are exposed as members of a shards object; each member is named after the shard's cache path and reports the shard's current size, its maximum size if configured, and a cold flag that is true while the cache loader loads data from disk.

Objects for each configured limit_conn in http or limit_conn in stream contexts have the following fields:
passed: Number; the total number of passed connections
skipped: Number; the total number of connections passed with a zero-length key, or a key exceeding 255 bytes
rejected: Number; the total number of connections exceeding the configured limit
exhausted: Number; the total number of connections rejected due to exhaustion of zone storage

Objects for each configured limit_req have the following fields:
passed: Number; the total number of passed requests
skipped: Number; the total number of requests passed with a zero-length key, or a key exceeding 65535 bytes
delayed: Number; the total number of delayed requests
rejected: Number; the total number of rejected requests
exhausted: Number; the total number of requests rejected due to exhaustion of zone storage

Added in version 1.1.0. To enable collection of the following metrics,
set the zone directive in the upstream context,
for instance, zone upstream 256k. Here, <upstream> is the name of any upstream specified with the zone directive.

peers: Object; contains the metrics of the upstream's peers as subobjects whose names are canonical representations of the peers' addresses. Members of each subobject:
server: String; the address parameter of the server directive
service: String; name of the service as specified in the server directive, if configured
slow_start: Number; the specified slow_start value for the server, expressed in seconds. When setting the value via the respective subsection of the dynamic configuration API, you can specify either a number or a time value with millisecond precision.
backup: Boolean; true for backup servers
weight: Number; configured weight
state: String; current state of the peer
selected: Object; peer selection statistics:
current: Number; the current number of connections to the peer
total: Number; the total number of requests forwarded to the peer
last: String or number; time when the peer was last selected, formatted as a date
max_conns: Number; the configured maximum number of simultaneous connections, if specified
responses: Object; response statistics:
<code>: Number; a non-zero number of responses with status <code> (100-599)
xxx: Number; a non-zero number of responses with other status codes
data: Object; data statistics:
received: Number; the total number of bytes received from the peer
sent: Number; the total number of bytes sent to the peer
health: Object; health statistics:
fails: Number; the total number of unsuccessful attempts to communicate with the peer
unavailable: Number; how many times the peer became unavailable
downtime: Number; the total time (in milliseconds) when the peer was unavailable
downstart: String or number; time when the peer became unavailable, formatted as a date
header_time: Number; average time (in milliseconds) to receive the response headers from the peer; see response_time_factor (PRO)
response_time: Number; average time (in milliseconds) to receive the entire peer response; see response_time_factor (PRO)
sid: String; configured id of the server in the upstream group
keepalive: Number; the number of currently cached connections

Changed in version 1.2.0: PRO. If the upstream has upstream_probe (PRO) probes configured, the health object also contains a probes subobject with the following counters:
count: Number; total probes for this peer
fails: Number; total failed probes
last: String or number; last probe time, formatted as a date

Changed in version 1.4.0. If a request queue is configured for the upstream, the upstream object also contains a nested queue object. The counter values are aggregated across all worker processes:
queued: Number; total count of requests that entered the queue
waiting: Number; current count of requests in the queue
dropped: Number; total count of requests removed from the queue due to the client prematurely closing the connection
timedout: Number; total count of requests removed from the queue due to timeout
overflows: Number; total count of queue overflow occurrences

To enable collection of the following metrics,
set the zone directive in the upstream context,
for instance, zone upstream 256k. Here, <upstream> is the name of an upstream that is configured with a zone directive.

peers: Object; contains the metrics of the upstream's peers as subobjects whose names are canonical representations of the peers' addresses. Members of each subobject:
server: String; address set by the server directive
service: String; service name, if set by the server directive
slow_start: Number; the specified slow_start value for the server, expressed in seconds. When setting the value via the respective subsection of the dynamic configuration API, you can specify either a number or a time value with millisecond precision.
backup: Boolean; true for backup servers
weight: Number; the weight of the peer
state: String; current state of the peer
selected: Object; the peer's selection metrics:
current: Number; current connections to the peer
total: Number; total connections forwarded to the peer
last: String or number; time when the peer was last selected, formatted as a date
max_conns: Number; maximum number of simultaneous active connections to the peer, if set
data: Object; data transfer metrics:
received: Number; total bytes received from the peer
sent: Number; total bytes sent to the peer
health: Object; peer health metrics:
fails: Number; total failed attempts to reach the peer
unavailable: Number; times the peer became unavailable
downtime: Number; total time (in milliseconds) that the peer was unavailable
downstart: String or number; time when the peer last became unavailable, formatted as a date
connect_time: Number; average time (in milliseconds) taken to establish a connection with the peer; see the response_time_factor (PRO) directive
first_byte_time: Number; average time (in milliseconds) to receive the first byte of the response from the peer; see the response_time_factor (PRO) directive
last_byte_time: Number; average time (in milliseconds) to receive the complete response from the peer; see the response_time_factor (PRO) directive

Changed in version 1.4.0: PRO. In Angie PRO, if the upstream has upstream_probe (PRO) probes configured, the health object also contains a probes subobject with the following counters:
count: Number; total probes for this peer
fails: Number; total failed probes
last: String or number; last probe time, formatted as a date

Added in version 1.2.0. The API includes a /config section that enables dynamic configuration. Currently, configuration of individual servers within upstreams is available
in the /config section.

/config/http/upstreams/<upstream>/servers/<name> and its Stream counterpart enable configuring individual upstream peers, including deleting existing peers or adding new ones. URI path parameters:
<upstream>: name of the upstream; to be configurable via /config, it must have a zone directive configured.
<name>: the peer's name within the upstream.

This API subsection enables setting the peer parameters described in server. The actually available parameters are limited to the ones supported by the current load balancing method of the upstream; even with a compatible load balancing method, the backup parameter can only be set at new peer creation.

The semantics of the HTTP methods applicable to this section, given an upstream configuration with a shared memory zone, are as follows:
GET: returns the addressed element; default parameter values can be obtained with the defaults=on query argument.
PUT: creates or replaces the addressed element; verify the changes with a subsequent GET.
DELETE: removes the addressed element, reverting the respective settings to their defaults; verify the changes with a subsequent GET. When deleting servers, an additional deletion mode can be set.
PATCH: operates as follows: if the entities from the new definition exist in the configuration, they are overwritten; otherwise, they are added. Members set to null in the supplied JSON object are deleted; this deletion is identical to DELETE. Verify the changes with a subsequent GET.
api#
Context: location

location /stats/ {
api /status/http/server_zones/;
}
The part of a request URI matching the prefix location /stats/
will be replaced by the path /status/http/server_zones/
in the directive parameter. For example, on a request of /stats/foo/
the /status/http/server_zones/foo/ API element will be accessed.

location ~^/api/([^/]+)/(.*)$ {
api /status/http/$1_zones/$2;
}
On a request of /api/location/bar/data/
the following positional variables will be populated:

$1 = "location"
$2 = "bar/data/"

resulting after interpolation in the /status/http/location_zones/bar/data
API request.

location /config/ {
api /config/;
}
location /status/ {
api /status/;
}
location /status/ {
api /status/;
allow 127.0.0.1;
deny all;
}
location /blog/requests/ {
api /status/http/server_zones/blog/requests/;
auth_basic "blog";
auth_basic_user_file conf/htpasswd;
}
api_config_files#
Enables or disables the config_files object,
which enumerates the contents of all Angie config files
that are currently loaded by the server instance,
in the /status/angie/ API section.
For example, with this configuration:

location /status/ {
api /status/;
api_config_files on;
}
a query to /status/angie/ returns approximately this:

{
"version":"1.8.2",
"address":"192.168.16.5",
"generation":1,
"load_time":"2025-02-13T12:58:39.789Z",
"config_files": {
"/etc/angie/angie.conf": "...",
"/etc/angie/mime.types": "..."
}
}
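As a sketch of consuming this endpoint, the snippet below extracts the loaded config file paths from such a response; the JSON literal stands in for an actual /status/angie/ reply (file contents abridged).

```python
import json

# Sample /status/angie/ response with api_config_files enabled
# (file bodies abridged; a real reply holds full contents).
status = json.loads("""
{
  "version": "1.8.2",
  "generation": 1,
  "config_files": {
    "/etc/angie/angie.conf": "...",
    "/etc/angie/mime.types": "..."
  }
}
""")

# The object's members are absolute pathnames of loaded config files.
paths = sorted(status.get("config_files", {}))
print(paths)  # ['/etc/angie/angie.conf', '/etc/angie/mime.types']
```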
Metrics#
Angie exposes usage metrics in the /status/ section of the API;
you can make it accessible by defining a respective location.
Full access:

location /status/ {
api /status/;
}
Subtree access:

location /stats/ {
api /status/http/server_zones/;
}
Example configuration#
The following configuration defines a location /status/ exposing the API, along with resolver, http upstream, http server, location, cache, limit_conn in http, and limit_req zones:

http {
resolver 127.0.0.53 status_zone=resolver_zone;
proxy_cache_path /var/cache/angie/cache keys_zone=cache_zone:2m;
limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;
upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
server {
server_name www.example.com;
listen 443 ssl;
status_zone http_server_zone;
proxy_cache cache_zone;
access_log /var/log/access.log main;
location / {
root /usr/share/angie/html;
status_zone location_zone;
limit_conn limit_conn_zone 1;
limit_req zone=limit_req_zone burst=5;
}
location /status/ {
api /status/;
allow 127.0.0.1;
deny all;
}
}
}
A query such as curl https://www.example.com/status/ responds with the following JSON:
{
"angie": {
"version":"1.8.2",
"address":"192.168.16.5",
"generation":1,
"load_time":"2025-02-13T12:58:39.789Z"
},
"connections": {
"accepted":2257,
"dropped":0,
"active":3,
"idle":1
},
"slabs": {
"cache_zone": {
"pages": {
"used":2,
"free":506
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":1,
"fails":0
},
"512": {
"used":1,
"free":7,
"reqs":1,
"fails":0
}
}
},
"limit_conn_zone": {
"pages": {
"used":2,
"free":2542
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":74,
"fails":0
},
"128": {
"used":1,
"free":31,
"reqs":1,
"fails":0
}
}
},
"limit_req_zone": {
"pages": {
"used":2,
"free":2542
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":1,
"fails":0
},
"128": {
"used":2,
"free":30,
"reqs":3,
"fails":0
}
}
}
},
"http": {
"server_zones": {
"http_server_zone": {
"ssl": {
"handshaked":4174,
"reuses":0,
"timedout":0,
"failed":0
},
"requests": {
"total":4327,
"processing":0,
"discarded":8
},
"responses": {
"200":4305,
"302":12,
"404":4
},
"data": {
"received":733955,
"sent":59207757
}
}
},
"location_zones": {
"location_zone": {
"requests": {
"total":4158,
"discarded":0
},
"responses": {
"200":4157,
"304":1
},
"data": {
"received":538200,
"sent":177606236
}
}
},
"caches": {
"cache_zone": {
"size":0,
"cold":false,
"hit": {
"responses":0,
"bytes":0
},
"stale": {
"responses":0,
"bytes":0
},
"updating": {
"responses":0,
"bytes":0
},
"revalidated": {
"responses":0,
"bytes":0
},
"miss": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
},
"expired": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
},
"bypass": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
}
}
},
"limit_conns": {
"limit_conn_zone": {
"passed":73,
"skipped":0,
"rejected":0,
"exhausted":0
}
},
"limit_reqs": {
"limit_req_zone": {
"passed":54816,
"skipped":0,
"delayed":65,
"rejected":26,
"exhausted":0
}
},
"upstreams": {
"upstream": {
"peers": {
"192.168.16.4:80": {
"server":"backend.example.com",
"service":"_example._tcp",
"backup":false,
"weight":5,
"state":"up",
"selected": {
"current":2,
"total":232
},
"max_conns":5,
"responses": {
"200":222,
"302":12
},
"data": {
"sent":543866,
"received":27349934
},
"health": {
"fails":0,
"unavailable":0,
"downtime":0
},
"sid":"<server_id>"
}
},
"keepalive":2
}
}
},
"resolvers": {
"resolver_zone": {
"queries": {
"name":442,
"srv":2,
"addr":0
},
"responses": {
"success":440,
"timedout":1,
"format_error":0,
"server_failure":1,
"not_found":1,
"unimplemented":0,
"refused":1,
"other":0
}
}
}
}
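To illustrate how the per-code responses objects can be consumed, here is a small sketch that sums a server zone's responses by status class; the dict literal mirrors the http_server_zone object in the tree above.

```python
from collections import Counter

# "responses" object of a server zone, as in the JSON tree above.
responses = {"200": 4305, "302": 12, "404": 4}

# Group numeric status codes by their class (2xx, 3xx, 4xx, 5xx);
# the "xxx" member, if present, is skipped.
by_class = Counter()
for code, count in responses.items():
    if code.isdigit():
        by_class[code[0] + "xx"] += count

print(dict(by_class))  # {'2xx': 4305, '3xx': 12, '4xx': 4}
```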
$ curl https://www.example.com/status/angie
$ curl https://www.example.com/status/connections
$ curl https://www.example.com/status/slabs
$ curl https://www.example.com/status/slabs/<zone>/slots
$ curl https://www.example.com/status/slabs/<zone>/slots/64
$ curl https://www.example.com/status/http/
$ curl https://www.example.com/status/http/server_zones
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>/ssl
To use the integer epoch format for dates, add the date=epoch parameter to the query string:

$ curl https://www.example.com/status/angie/load_time
"2024-04-01T00:59:59+01:00"
$ curl https://www.example.com/status/angie/load_time?date=epoch
1711929599
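Both formats represent the same instant; with Python's standard library you can convert the epoch value from the query above back to an ISO 8601 timestamp (shown in UTC rather than the +01:00 offset of the string form):

```python
from datetime import datetime, timezone

epoch = 1711929599  # value returned by ?date=epoch above
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.isoformat())  # 2024-03-31T23:59:59+00:00
# The same instant as "2024-04-01T00:59:59+01:00" returned without ?date=epoch.
```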
Server status#
/status/angie#

{
"version": "1.8.2",
"address": "192.168.16.5",
"generation": 1,
"load_time": "2025-02-13T16:15:43.805Z"
"config_files": {
"/etc/angie/angie.conf": "...",
"/etc/angie/mime.types": "..."
}
}
Fields: version, build, address, generation, load_time, config_files.
An example config_files value:
{
"/etc/angie/angie.conf": "server {\n listen 80;\n # ...\n\n}\n"
}
Caution: the config_files object is available in /status/angie/ only if the api_config_files directive is enabled.

Connections global metrics#
/status/connections
#{
"accepted": 2257,
"dropped": 0,
"active": 3,
"idle": 1
}
Fields: accepted, dropped, active, idle.
Resolver DNS queries#
/status/resolvers/<zone>#
To collect resolver statistics, set the status_zone parameter of the resolver directive (HTTP and Stream):

resolver 127.0.0.53 status_zone=resolver_zone;
Fields: queries (name, srv, addr), responses (success, timedout, format_error, server_failure, not_found, unimplemented, refused, other), sent (a, aaaa, ptr, srv). Example:
{
"queries": {
"name": 442,
"srv": 2,
"addr": 0
},
"responses": {
"success": 440,
"timedout": 1,
"format_error": 0,
"server_failure": 1,
"not_found": 1,
"unimplemented": 0,
"refused": 1,
"other": 0
},
"sent": {
"a": 185,
"aaaa": 245,
"srv": 2,
"ptr": 12
}
}
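As an illustration of working with these counters, the sketch below computes the number of failed resolutions from a responses object; the dict literal mirrors the example above.

```python
# "responses" object of a resolver zone, as in the example above.
responses = {
    "success": 440, "timedout": 1, "format_error": 0,
    "server_failure": 1, "not_found": 1, "unimplemented": 0,
    "refused": 1, "other": 0,
}

total = sum(responses.values())        # all completed queries
failed = total - responses["success"]  # everything except successes
print(failed, total)  # 4 444
```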
HTTP server and location#
/status/http/server_zones/<zone>#
To collect the server metrics, set the status_zone directive in the server context:

server {
    ...
    status_zone server_zone;
}

To group the metrics by a custom value, use the alternative syntax:

status_zone $host zone=server_zone:5;
Fields: ssl (present if the server sets listen ssl: handshaked, reuses, timedout, failed), requests (total, processing, discarded), responses (<code>, xxx), data (received, sent). Example:
"ssl": {
"handshaked": 4174,
"reuses": 0,
"timedout": 0,
"failed": 0
},
"requests": {
"total": 4327,
"processing": 0,
"discarded": 0
},
"responses": {
"200": 4305,
"302": 6,
"304": 12,
"404": 4
},
"data": {
"received": 733955,
"sent": 59207757
}
/status/http/location_zones/<zone>#
To collect the location metrics, set the status_zone directive in the context of location or of if in location:

location / {
    root /usr/share/angie/html;
    status_zone location_zone;
    if ($request_uri ~* "^/condition") {
        # ...
        status_zone if_location_zone;
    }
}

To group the metrics by a custom value, use the alternative syntax:

status_zone $host zone=server_zone:5;
Fields: requests (total, discarded), responses (<code>, xxx), data (received, sent). Example:
{
"requests": {
"total": 4158,
"discarded": 0
},
"responses": {
"200": 4157,
"304": 1
},
"data": {
"received": 538200,
"sent": 177606236
}
}
Stream server#
/status/stream/server_zones/<zone>#
To collect the server metrics, set the status_zone directive in the server context:

server {
    ...
    status_zone server_zone;
}

To group the metrics by a custom value, use the alternative syntax:

status_zone $host zone=server_zone:5;
Fields: ssl (present if the server sets listen ssl: handshaked, reuses, timedout, failed), connections (total, processing, discarded, passed via pass directives), sessions (success, invalid, forbidden, internal_error, bad_gateway, service_unavailable), data (received, sent). Example:
{
"ssl": {
"handshaked": 24,
"reuses": 0,
"timedout": 0,
"failed": 0
},
"connections": {
"total": 24,
"processing": 1,
"discarded": 0,
"passed": 2
},
"sessions": {
"success": 24,
"invalid": 0,
"forbidden": 0,
"internal_error": 0,
"bad_gateway": 0,
"service_unavailable": 0
},
"data": {
"received": 2762947,
"sent": 53495723
}
}
HTTP caches#
Cache metrics are collected for each zone configured with proxy_cache:

proxy_cache cache_zone;
/status/http/caches/<cache>#

{
"name_zone": {
"size": 0,
"cold": false,
"hit": {
"responses": 0,
"bytes": 0
},
"stale": {
"responses": 0,
"bytes": 0
},
"updating": {
"responses": 0,
"bytes": 0
},
"revalidated": {
"responses": 0,
"bytes": 0
},
"miss": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
},
"expired": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
},
"bypass": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
}
}
}
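A common derived metric is the cache hit ratio. The sketch below computes it from a caches object of this shape, counting hit, stale, updating, and revalidated as served-from-cache; the numbers are made up for illustration, since the example above is all zeros.

```python
def hit_ratio(cache: dict) -> float:
    """Share of responses served from the cache.

    Counts hit/stale/updating/revalidated as cache-served and
    miss/expired/bypass as fetched from the proxied server.
    """
    served = sum(cache[k]["responses"]
                 for k in ("hit", "stale", "updating", "revalidated"))
    fetched = sum(cache[k]["responses"]
                  for k in ("miss", "expired", "bypass"))
    total = served + fetched
    return served / total if total else 0.0

# Illustrative counters (hypothetical, not from the example above).
zone = {
    "hit": {"responses": 90}, "stale": {"responses": 5},
    "updating": {"responses": 3}, "revalidated": {"responses": 2},
    "miss": {"responses": 80}, "expired": {"responses": 15},
    "bypass": {"responses": 5},
}
print(hit_ratio(zone))  # 0.5
```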
Fields: size (current cache size), max_size (configured limit), cold (true while the cache loader loads data from disk), hit, stale, updating, and revalidated (each with responses and bytes counters), miss, expired, and bypass (each with responses, bytes, responses_written, and bytes_written counters).

In Angie PRO, if cache sharding is enabled, individual shards are exposed as members of a shards object; each <shard> member reports size, max_size if configured, and cold (true while the cache loader loads data from disk):

{
"name_zone": {
"shards": {
"/path/to/shard1": {
"size": 0,
"cold": false
},
"/path/to/shard2": {
"size": 0,
"cold": false
}
}
}
limit_conn#
Configured with the limit_conn_zone directive:

limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
/status/http/limit_conns/<zone>, /status/stream/limit_conns/<zone>#

{
"passed": 73,
"skipped": 0,
"rejected": 0,
"exhausted": 0
}
Fields: passed, skipped, rejected, exhausted.
limit_req#
Configured with the limit_req_zone directive:

limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;
/status/http/limit_reqs/<zone>#

{
"passed": 54816,
"skipped": 0,
"delayed": 65,
"rejected": 26,
"exhausted": 0
}
Fields: passed, skipped, delayed, rejected, exhausted.
HTTP upstream#
To enable collection of these metrics, set the zone directive in the upstream context, for instance:

upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
/status/http/upstreams/<upstream>#

{
"peers": {
"192.168.16.4:80": {
"server": "backend.example.com",
"service": "_example._tcp",
"backup": false,
"weight": 5,
"state": "up",
"selected": {
"current": 2,
"total": 232
},
"max_conns": 5,
"responses": {
"200": 222,
"302": 12
},
"data": {
"sent": 543866,
"received": 27349934
},
"health": {
"fails": 0,
"unavailable": 0,
"downtime": 0
},
"sid": "<server_id>"
}
},
"keepalive": 2
}
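When monitoring an upstream, one typically flags peers whose state is not up. A sketch over a peers object like the one above, with a second, hypothetical peer added so the result is non-empty:

```python
# "peers" object of an upstream, as in the example above,
# trimmed to the fields used here; the second peer is hypothetical.
peers = {
    "192.168.16.4:80": {"server": "backend.example.com",
                        "state": "up"},
    "192.168.16.5:80": {"server": "backend2.example.com",
                        "state": "unavailable"},
}

# Peers that should not currently receive regular client requests.
degraded = {addr: p["state"] for addr, p in peers.items()
            if p["state"] != "up"}
print(degraded)  # {'192.168.16.5:80': 'unavailable'}
```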
Fields of each member of peers:
server
service
slow_start (PRO 1.4.0+)
backup: true for backup servers
weight
state: current state of the peer, one of:
checking (PRO): set to essential, being checked now; only probe requests are sent
down: disabled manually; no requests are sent
draining (PRO): similar to down, but requests from sessions that were earlier bound using sticky are still sent
recovering: recovering after failure according to slow_start; more requests are sent gradually
unavailable: reached the max_fails limit; a client request is attempted at fail_timeout intervals
unhealthy (PRO): not functioning properly; only probe requests are sent
up: operational; requests are sent as usual
selected: current, total, last
max_conns
responses: <code> (100-599), xxx (other status codes)
data: received, sent
health:
fails
unavailable: times the peer became unavailable due to reaching the max_fails limit
downtime: total time the peer was unavailable for selection
downstart: time when the peer became unavailable, formatted as a date
header_time (PRO 1.3.0+)
response_time (PRO 1.3.0+)
sid
keepalive
health/probes (PRO)#
In Angie PRO, if the upstream has upstream_probe (PRO) probes configured, the health object also has a probes subobject that stores the peer's health probe counters, while the peer's state can also be checking and unhealthy, apart from the values listed above:

{
"192.168.16.4:80": {
"state": "unhealthy",
"...": "...",
"health": {
"...": "...",
"probes": {
"count": 10,
"fails": 10,
"last": "2025-02-13T09:56:07Z"
}
}
}
}
The checking value of state isn't counted as downtime and means that the peer, which has a probe configured as essential, hasn't been checked yet; the unhealthy value means that the peer is malfunctioning. Both states also imply that the peer isn't included in load balancing. For details of health probes, see upstream_probe.
Counters in probes: count (total probes for this peer), fails (total failed probes), last (last probe time, formatted as a date).
queue#
If a request queue is configured for the upstream, the upstream object also contains a nested queue object, which holds counters for requests in the queue:

{
"queue": {
"queued": 20112,
"waiting": 1011,
"dropped": 6031,
"timedout": 560,
"overflows": 13
}
}
Fields: queued, waiting, dropped, timedout, overflows.
Stream upstream#
To enable collection of these metrics, set the zone directive in the upstream context, for instance:

upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
/status/stream/upstreams/<upstream>#

{
"peers": {
"192.168.16.4:1935": {
"server": "backend.example.com",
"service": "_example._tcp",
"backup": false,
"weight": 5,
"state": "up",
"selected": {
"current": 2,
"total": 232
},
"max_conns": 5,
"data": {
"sent": 543866,
"received": 27349934
},
"health": {
"fails": 0,
"unavailable": 0,
"downtime": 0
}
}
}
}
Fields of each member of peers:
server
service
slow_start (PRO 1.4.0+)
backup: true for backup servers
weight
state: current state of the peer, one of:
up: operational; requests are sent as usual
down: disabled manually; no requests are sent
draining (PRO): similar to down, but requests from sessions that were earlier bound using sticky are still sent
unavailable: reached the max_fails limit; a client request is attempted at fail_timeout intervals
recovering: recovering after failure according to slow_start; more requests are sent gradually
checking (PRO): set to essential, being checked now; only probe requests are sent
unhealthy (PRO): not functioning properly; only probe requests are sent
selected: current, total, last
max_conns
data: received, sent
health:
fails
unavailable: times the peer became unavailable due to reaching the max_fails limit
downtime: total time the peer was unavailable for selection
downstart: time when the peer last became unavailable, formatted as a date
connect_time (PRO 1.4.0+)
first_byte_time (PRO 1.4.0+)
last_byte_time (PRO 1.4.0+)

In Angie PRO, if the upstream has upstream_probe (PRO) probes configured, the health
(PRO 1.4.0+)health
object also has a probes
subobject
that stores the peer's health probe counters,
while the peer's state
can also be checking
and unhealthy
,
apart from the values listed in the table above:{
"192.168.16.4:80": {
"state": "unhealthy",
"...": "...",
"health": {
"...": "...",
"probes": {
"count": 2,
"fails": 2,
"last": "2025-02-13T11:03:54Z"
}
}
}
}
The checking value of state isn't counted as downtime and means that the peer, which has a probe configured as essential, hasn't been checked yet; the unhealthy value means that the peer is malfunctioning. Both states also imply that the peer isn't included in load balancing. For details of health probes, see upstream_probe.
Counters in probes: count (total probes for this peer), fails (total failed probes), last (last probe time, formatted as a date).
Dynamic Configuration API (PRO only)#
The API includes a /config section that enables dynamic updates to Angie's configuration in JSON with PUT, PATCH, and DELETE HTTP requests. All updates are atomic: new settings are applied as a whole, or none are applied at all. On error, Angie reports the reason.

Subsections of /config#
Currently, configuration of individual servers within upstreams is available in the /config section for the HTTP and Stream modules; the number of settings eligible for dynamic configuration is steadily increasing.

/config/http/upstreams/<upstream>/servers/<name>#
#<upstream>
/config
, it must
have a zone directive configured, defining a shared
memory zone.<name>
<service>@<host>
, where:<service>@
is an optional service name, used for
SRV record resolution.<host>
is the domain name of the service (if resolve
is present) or its IP; an optional port can be defined here.upstream backend {
    server backend.example.com service=_http._tcp resolve;
    server 127.0.0.1;
    zone backend 1m;
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/_http._tcp@backend.example.com/
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/127.0.0.1:80/
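The naming scheme above can be sketched in Python; parse_server_name and its return shape are illustrative only, not part of the Angie API, and IPv6 literals aren't handled:

```python
def parse_server_name(name: str) -> dict:
    """Split a <service>@<host>[:port] server name into its parts:
    an optional '<service>@' prefix (for SRV resolution), the host,
    and an optional ':port' suffix."""
    service = None
    if "@" in name:
        service, name = name.split("@", 1)
    host, _, port = name.partition(":")
    return {"service": service, "host": host, "port": int(port) if port else None}

print(parse_server_name("_http._tcp@backend.example.com"))
# {'service': '_http._tcp', 'host': 'backend.example.com', 'port': None}
print(parse_server_name("127.0.0.1:80"))
# {'service': None, 'host': '127.0.0.1', 'port': 80}
```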
These sections support the weight, max_conns, max_fails, fail_timeout,
backup, down, and sid parameters, as described in server.
There is no drain option here; to enable drain,
set down to the string value drain:
$ curl -X PUT -d '"drain"' \
      http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/down
A query with the defaults=on argument lists all parameters, including the defaults:
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 0,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": true,
    "down": false,
    "sid": ""
}
Note that some parameters cannot be set for an existing server. For example, consider this upstream, which uses the random load balancing method:
upstream backend {
    zone backend 256k;
    server backend.example.com resolve max_conns=5;
    random;
}
An attempt to set backup for an existing server:
$ curl -X PUT -d '{ "backup": true }' \
      http://127.0.0.1/config/http/upstreams/backend/servers/backend1.example.com
{
    "error": "FormatError",
    "description": "The \"backup\" field is unknown."
}
The attempt fails because the backup parameter
can only be set at new peer creation.
/config/stream/upstreams/<upstream>/servers/<name>#
<upstream> is the name of the upstream.
For the upstream to be configurable via /config, it must
have a zone directive configured, defining a shared memory zone.
<name> uses the <service>@<host> format, where:

- <service>@ is an optional service name, used for SRV record resolution.
- <host> is the domain name of the service (if resolve is present) or its IP; an optional port can be defined here.

For example, with this configuration:
upstream backend {
    server backend.example.com:8080 service=_example._tcp resolve;
    server 127.0.0.1:12345;
    zone backend 1m;
}
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/_example._tcp@backend.example.com:8080/
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/127.0.0.1:12345/
These sections support the weight, max_conns, max_fails, fail_timeout,
backup, and down parameters, as described in server.
There is no drain option here; to enable drain,
set down to the string value drain:
$ curl -X PUT -d '"drain"' \
      http://127.0.0.1/config/stream/upstreams/backend/servers/backend.example.com/down
A query with the defaults=on argument lists all parameters, including the defaults:
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 0,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": true,
    "down": false
}
Note that some parameters cannot be set for an existing server. For example, consider this upstream, which uses the random load balancing method:
upstream backend {
    zone backend 256k;
    server backend.example.com resolve max_conns=5;
    random;
}
An attempt to set backup for an existing server:
$ curl -X PUT -d '{ "backup": true }' \
      http://127.0.0.1/config/stream/upstreams/backend/servers/backend1.example.com
{
    "error": "FormatError",
    "description": "The \"backup\" field is unknown."
}
The attempt fails because the backup parameter
can only be set at new peer creation.
When deleting a server, you can add a connection_drop=<value> argument
(PRO) to override the proxy_connection_drop settings:
$ curl -X DELETE \
      http://127.0.0.1/config/stream/upstreams/backend/servers/backend1.example.com?connection_drop=off
$ curl -X DELETE \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend2.example.com?connection_drop=on
$ curl -X DELETE \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend3.example.com?connection_drop=1000
HTTP Methods#
The examples below assume this configuration:
http {
    # ...
    upstream backend {
        zone upstream 256k;
        server backend.example.com resolve max_conns=5;
        # ...
    }

    server {
        # ...
        location /config/ {
            api /config/;

            allow 127.0.0.1;
            deny all;
        }
    }
}
GET#
GET
HTTP method queries an entity at any existing path within
/config
, just as it does for other API sections.
For example, the /config/http/upstreams/backend/servers/
upstream server branch enables these queries:
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_conns
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
$ curl http://127.0.0.1/config/http/upstreams/backend/servers
$ # ...
$ curl http://127.0.0.1/config
To include the parameters' default values in the output, add the defaults=on argument:
$ curl http://127.0.0.1/config/http/upstreams/backend/servers?defaults=on
{
    "backend.example.com": {
        "weight": 1,
        "max_conns": 5,
        "max_fails": 1,
        "fail_timeout": 10,
        "backup": false,
        "down": false,
        "sid": ""
    }
}
PUT#
PUT
HTTP method creates a new JSON entity at the specified path
or entirely replaces an existing one.
For example, to set the max_fails parameter, not specified earlier,
of the backend.example.com server within the backend upstream:
$ curl -X PUT -d '2' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was updated with replacing."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 5,
    "max_fails": 2
}
DELETE#
DELETE
HTTP method deletes previously defined settings at the specified path;
in doing so, it restores the default values if there are any.
For example, to delete the max_fails parameter
of the backend.example.com server within the backend upstream:
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
    "success": "Reset",
    "description": "Configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was reset to default."
}
A query with the defaults=on argument confirms the reset:
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
    "weight": 1,
    "max_conns": 5,
    "max_fails": 1,
    "fail_timeout": 10,
    "backup": false,
    "down": false,
    "sid": ""
}
The max_fails setting is back to its default value.
When deleting a server, you can add a connection_drop=<value> argument
(PRO) to override the proxy_connection_drop, grpc_connection_drop,
fastcgi_connection_drop, scgi_connection_drop, and
uwsgi_connection_drop settings:
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend1.example.com?connection_drop=off
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend2.example.com?connection_drop=on
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend3.example.com?connection_drop=1000
PATCH#
PATCH
HTTP method creates a new entity at the specified path
or partially replaces or complements an existing one
(RFC 7386)
by supplying a JSON definition in its payload.
For example, to update the down setting of the backend.example.com
server within the backend upstream, leaving the rest intact:
$ curl -X PATCH -d '{ "down": true }' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 5,
    "down": true
}
The entity supplied in the PATCH request was merged with the
existing one instead of overwriting it, as would be the case with PUT.
The null values are a corner case; they are used to delete specific
configuration items during such a merge.
The effect of such a deletion is similar to DELETE;
in particular, it reinstates the default values.
For example, to delete the down setting added earlier
and simultaneously update max_conns:
$ curl -X PATCH -d '{ "down": null, "max_conns": 6 }' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "success": "Updated",
    "description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
    "max_conns": 6
}
The down parameter, for which a null was supplied, was deleted;
max_conns was updated.
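The merge semantics above can be sketched as a minimal Python implementation of RFC 7386 (JSON Merge Patch). Note that this toy version simply removes a key when null is supplied, whereas Angie additionally reinstates the key's default value:

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch: nested objects are merged
    recursively, null deletes a member, and any non-object value
    replaces the target wholesale."""
    if not isinstance(patch, dict):
        return patch                  # non-objects replace the target entirely
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)     # null removes the member
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

server = {"max_conns": 5, "down": True}
print(merge_patch(server, {"down": None, "max_conns": 6}))
# {'max_conns': 6}
```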