Development#
Angie is an open-source project
that welcomes all contributors. You can clone Angie source code from our public repositories:
Mercurial,
Git. Your changes should be consistent with the rest of Angie's code;
the coding conventions are a good starting point. Tip: if in doubt, examine the nearby code to follow its lead,
or simply grep the codebase for inspiration. Historically, the commit log is maintained in English. Start with a one-line summary of what was done.
It may have a prefix that the commit log uses for the affected code portion.
The summary can be up to 67 characters long
and may be followed by a blank line and more details. A good message tells what caused the change, what was done about it,
and what the situation is now: Details that may otherwise go unnoticed: Summary ends with a period and starts with a capital letter. If a prefix is used, it is followed by a lowercase letter. Double whitespace separates sentences within a single line. Do your best to verify that the changes work on all target platforms. For each platform, run the test suite to make sure there's no regression: See the Make sure you're comfortable with the legal terms. To send a patch, create a pull request on our
GitHub mirror. For questions and suggestions, please contact the developers via
GitHub Issues. The source code follows the structure and conventions described below. The ngx_config.h and ngx_core.h include statements must appear at the beginning of every nginx file; in addition to that, HTTP code should include ngx_http.h, Mail code should include ngx_mail.h, and Stream code should include ngx_stream.h. For general purposes, nginx code uses two integer types,
Most functions in nginx return the following codes: The The values of Example using For C strings, nginx uses the unsigned character type pointer
The nginx string type The The string operations in nginx are declared in
Other string functions are nginx-specific The following functions perform case conversion and comparison: The following macros simplify string initialization: The following formatting functions support nginx-specific types: The full list of formatting options, supported by these functions is
in You can prepend Several functions for numeric conversion are implemented in nginx.
The first four each convert a string of given length to a positive integer of
the indicated type.
They return There are two additional numeric conversion functions.
Like the first four, they return The regular expressions interface in nginx is a wrapper around
the PCRE library.
The corresponding header file is To use a regular expression for string matching, it first needs to be
compiled, which is usually done at the configuration phase.
Note that since PCRE support is optional, all code using the interface must
be protected by the surrounding After successful compilation, the The compiled regular expression can then be used for matching against strings: The arguments to If there are matches, captures can be accessed as follows: The The The To obtain the current time, it is usually sufficient to access one of the
available global variables, representing the cached time value in the desired
format. The available string representations are: The To obtain the time explicitly, use The following functions convert The The nginx array type The elements of the array are available in the Use the Use the following functions to add elements to an array: If the currently allocated amount of memory is not large enough to accommodate
the new elements, a new block of memory is allocated and the existing elements
are copied to it.
The new memory block is normally twice as large as the existing one. In nginx a list is a sequence of arrays, optimized for inserting a potentially
large number of items.
The The actual items are stored in list parts, which are defined as follows: Before use, a list must be initialized by calling
Lists are primarily used for HTTP input and output headers. Lists do not support item removal.
However, when needed, items can internally be marked as missing without actually
being removed from the list.
For example, to mark HTTP output headers (which are stored as
In nginx a queue is an intrusive doubly linked list, with each node defined as
follows: The head queue node is not linked with any data.
Use the An example: The To deal with a tree as a whole, you need two nodes: root and sentinel.
Typically, they are added to a custom structure, allowing you to
organize your data into a tree in which the leaves contain a link to or embed
your data. To initialize a tree: To traverse a tree and insert new values, use the
" The traversal is pretty straightforward and can be demonstrated with the
following lookup function pattern: The To add a node to a tree, allocate a new node, initialize it and call
To remove a node, call the Hash table functions are declared in Before initializing a hash, you need to know the number of elements it will
hold so that nginx can build it optimally.
Two parameters that need to be configured are The The hash keys are stored in To insert keys into a hash keys array, use the
To build the hash table, call the
The function fails if When the hash is built, use the
To create a hash that works with wildcards, use the
It is possible to add wildcard keys using the
The function recognizes wildcards and adds keys into the corresponding arrays.
Please refer to the
map module
documentation for the description of the wildcard syntax and the
matching algorithm. Depending on the contents of added keys, you may need to initialize up to three
key arrays: one for exact matching (described above), and two more to enable
matching starting from the head or tail of a string: The keys array needs to be sorted, and initialization results must be added
to the combined hash.
The initialization of The lookup in a combined hash is handled by the
To allocate memory from system heap, use the following functions: Most nginx allocations are done in pools.
Memory allocated in an nginx pool is freed automatically when the pool is
destroyed.
This provides good allocation performance and makes memory control easy. A pool internally allocates objects in continuous blocks of memory.
Once a block is full, a new one is allocated and added to the pool memory
block list.
When the requested allocation is too large to fit into a block, the request
is forwarded to the system allocator and the
returned pointer is stored in the pool for further deallocation. The type for nginx pools is Chain links ( Cleanup handlers can be registered in a pool.
A cleanup handler is a callback with an argument which is called when pool is
destroyed.
A pool is usually tied to a specific nginx object (like an HTTP request) and is
destroyed when the object reaches the end of its lifetime.
Registering a pool cleanup is a convenient way to release resources, close
file descriptors or make final adjustments to the shared data associated with
the main object. To register a pool cleanup, call
For logging nginx uses stderr — Logging to standard error (stderr) file — Logging to a file syslog — Logging to syslog memory — Logging to internal memory storage for development purposes; the memory
can be accessed later with a debugger A logger instance can be a chain of loggers, linked to each other with
the For each logger, a severity level controls which messages are written to the
log (only events assigned that level or higher are logged).
The following severity levels are supported: For debug logging, the debug mask is checked as well.
The debug masks are: Normally, loggers are created by existing nginx code from
Nginx provides the following logging macros: A log message is formatted in a buffer of size
The example above results in log entries like these: A cycle object stores the nginx runtime context created from a specific
configuration.
Its type is A cycle is created by the Members of the cycle include: path loader — Executes only once in 60 seconds after starting or reloading
nginx.
Normally, the loader reads the directory and stores data in nginx shared
memory.
The handler is called from the dedicated nginx process "nginx cache loader". path manager — Executes periodically.
Normally, the manager removes old files from the directory and updates nginx
memory to reflect the changes.
The handler is called from the dedicated "nginx cache manager" process. For input/output operations, nginx provides the buffer type
The For input and output operations buffers are linked in chains.
A chain is a sequence of chain links of type Each chain link keeps a reference to its buffer and a reference to the next
chain link. An example of using buffers and chains: The connection type An nginx connection can transparently encapsulate the SSL layer.
In this case the connection's The Because the number of connections per worker is limited, nginx provides a
way to grab connections that are currently in use.
To enable or disable reuse of a connection, call the
Event object Fields in Each connection obtained by calling the An event can be set to send a notification when a timeout expires.
The timer used by events counts milliseconds since some unspecified point
in the past truncated to The function An event can be posted which means that its handler will be called at some
point later within the current event loop iteration.
Posting events is a good practice for simplifying code and escaping stack
overflows.
Posted events are held in a post queue.
The An example: Except for the nginx master process, all nginx processes do I/O and so have an
event loop.
(The nginx master process instead spends most of its time in the
The event loop has the following stages: Find the timeout that is closest to expiring, by calling
Process I/O events by calling a handler, specific to the event notification
mechanism, chosen by nginx configuration.
This handler waits for at least one I/O event to happen, but only until the next
timeout expires.
When a read or write event occurs, the Expire timers by calling Process posted events by calling All nginx processes handle signals as well.
Signal handlers only set global variables which are checked after the
There are several types of processes in nginx.
The type of a process is kept in the The nginx processes handle the following signals: While all nginx worker processes are able to receive and properly handle POSIX
signals, the master process does not use the standard It is possible to offload into a separate thread tasks that would otherwise
block the nginx worker process.
For example, nginx can be configured to use threads to perform
file I/O.
Another use case is a library that doesn't have asynchronous interface
and thus cannot be normally used with nginx.
Keep in mind that the threads interface is a helper for the existing
asynchronous approach to processing client connections, and by no means
intended as a replacement. To deal with synchronization, the following wrappers over
Instead of creating a new thread for each task, nginx implements
a thread_pool strategy.
Multiple thread pools may be configured for different purposes
(for example, performing I/O on different sets of disks).
Each thread pool is created at startup and contains a limited number of threads
that process a queue of tasks.
When a task is completed, a predefined completion handler is called. The At configuration time, a module willing to use threads has to obtain a
reference to a thread pool by calling
To add a To execute a function in a thread, pass parameters and setup a completion
handler using the thread task interface.
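As a rough sketch (assuming threads support is compiled in, tp is a ngx_thread_pool_t reference obtained at configuration time, and my_ctx_t, my_thread_func and my_thread_done are hypothetical names), submitting a task may look like this:

my_ctx_t           *ctx;
ngx_thread_task_t  *task;

task = ngx_thread_task_alloc(r->pool, sizeof(my_ctx_t));
if (task == NULL) { /* error */ }

ctx = task->ctx;
/* fill ctx with the parameters the thread handler needs */

task->handler = my_thread_func;          /* runs in a pool thread */
task->event.handler = my_thread_done;    /* runs back in the worker process */
task->event.data = ctx;

if (ngx_thread_task_post(tp, task) != NGX_OK) { /* error */ }

Each standalone nginx module resides in a separate directory that contains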
at least two files:
The The following modules are typically used as references.
The By default, filter modules are placed before the
To compile a module into nginx statically, use the
Modules are the building blocks of nginx, and most of its functionality is
implemented as modules.
The module source file must contain a global variable of type
The omitted private part includes the module version and a signature and is
filled using the predefined macro Each module keeps its private data in the Configuration directive handlers are called as they appear
in configuration files in the context of the master process. After the configuration is parsed successfully, The master process creates one or more worker processes and the
When a worker process receives the shutdown or terminate command from the
master, it invokes the The master process calls the Because threads are used in nginx only as a supplementary I/O facility with its
own API, The module The The set of core modules includes where the For example, a simplistic module called The Terminate the array with the special value The flags for directive types are: A directive's context defines where it may appear in the configuration: The configuration parser uses these flags to throw an error in case of
a misplaced directive and calls directive handlers supplied with a proper
configuration pointer, so that the same directives in different locations can
store their values in distinct places. The The The The The Each HTTP client connection runs through the following stages: For each client HTTP request the Note that for HTTP connections A request is usually posted by the
Each HTTP module can have three types of configuration: Main configuration — Applies to the entire Server configuration — Applies to a single Location configuration — Applies to a single Configuration structures are created at the nginx configuration stage by
calling functions, which allocate the structures, initialize them
and merge them.
The following example shows how to create a simple location
configuration for a module.
The configuration has one setting, As seen in the example, the The following macros are available.
for accessing configuration for HTTP modules at configuration time.
They all take The following example gets a pointer to a location configuration of
standard nginx core module
ngx_http_core_module
and replaces the location content handler kept
in the The following macros are available for accessing configuration for HTTP
modules at runtime. These macros receive a reference to an HTTP request
Each HTTP request passes through a sequence of phases.
In each phase a distinct type of processing is performed on the request.
Module-specific handlers can be registered in most phases,
and many standard nginx modules register their phase handlers as a way
to get called at a specific stage of request processing.
Phases are processed successively and the phase handlers are called
once the request reaches the phase.
Following is the list of nginx HTTP phases. Below is a sketch of what a preaccess phase handler might look like.
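This is only an illustrative sketch; the handler name and the User-Agent check are invented for the example:

static ngx_int_t
ngx_http_foo_preaccess_handler(ngx_http_request_t *r)
{
    /* hypothetical policy: reject requests that carry no User-Agent header */

    if (r->headers_in.user_agent == NULL
        || r->headers_in.user_agent->value.len == 0)
    {
        return NGX_HTTP_FORBIDDEN;
    }

    /* pass the request on to the next handler in this phase */
    return NGX_DECLINED;
}

Phase handlers are expected to return specific codes: Any other value returned by the phase handler is treated as a request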
finalization code, in particular, an HTTP response code.
The request is finalized with the code provided. For some phases, return codes are treated in a slightly different way.
At the content phase, any return code other than
The
nginx-dev-examples
repository provides nginx module examples. maximum text width is 80 characters indentation is 4 spaces no tabs, no trailing spaces list elements on the same line are separated with spaces hexadecimal literals are lowercase file names, function and type names, and global variables have the
A typical source file may contain the following sections separated by
two empty lines: copyright statements, includes, preprocessor definitions, type definitions, function prototypes, variable definitions, and function definitions. Copyright statements look like this: If the file is modified significantly, the list of authors should be updated,
with the new author added to the top. The
A structure that points to itself has the name, ending with
" Each structure member is declared on its own line: Function pointers inside structures have defined types ending
with " Enumerations have types ending with " Variables are declared sorted by length of a base type, then alphabetically.
Type names and variable names are aligned.
The type and name "columns" are separated with two spaces.
Large arrays are put at the end of a declaration block: Static and global variables may be initialized on declaration: There is a bunch of commonly used type/name combinations: All functions (even static ones) should have prototypes.
Prototypes include argument names.
Long prototypes are wrapped with a single indentation on continuation lines: The function name in a definition starts with a new line.
The function body opening and closing braces are on separate lines.
The body of a function is indented.
There are two empty lines between functions: There is no space after the function name and opening parenthesis.
Long function calls are wrapped such that continuation lines start
from the position of the first function argument.
If this is impossible, format the first continuation line such that it
ends at position 79: The Binary operators except " Type casts are separated by one space from casted expressions.
An asterisk inside a type cast is separated by a space from the type name: If an expression does not fit into a single line, it is wrapped.
The preferred point to break a line is a binary operator.
The continuation line is lined up with the start of expression: As a last resort, it is possible to wrap an expression so that the
continuation line ends at position 79: The above rules also apply to sub-expressions,
where each sub-expression has its own indentation level: Sometimes, it is convenient to wrap an expression after a cast.
In this case, the continuation line is indented: Pointers are explicitly compared to
The " Similar formatting rules are applied to " The " Most " If some part of the " A loop with an empty body is also indicated by the
" An endless loop looks like this: Labels are surrounded with empty lines and are indented at the previous level: To debug memory issues such as buffer overruns or use-after-free errors, you
can use the AddressSanitizer
(ASan) supported by some modern compilers.
To enable ASan with Since most allocations in nginx are made from nginx internal
pool, enabling ASan may not always be enough to debug
memory issues.
The internal pool allocates a big chunk of memory from the system and cuts
smaller allocations from it.
However, this mechanism can be disabled by setting the
The following configuration line summarizes the information provided above.
It is recommended while developing third-party modules and testing nginx on
different platforms. The most common pitfall is an attempt to write a full-fledged C module
when it can be avoided.
In most cases your task can be accomplished by creating a proper configuration.
If writing a module is inevitable, try to make it
as small and simple as possible.
For example, a module can only export some
variables. Before starting a module, consider the following questions: Is it possible to implement a desired feature using already
available modules? Is it possible to solve an issue using built-in scripting languages,
such as Perl
or njs? The most used string type in nginx,
ngx_str_t is not a C-Style
zero-terminated string.
You cannot pass the data to standard C library functions
such as Avoid using global variables in your modules.
Having a global variable is most likely an error.
Any global data should be tied to a configuration cycle
and be allocated from the corresponding memory pool.
This allows nginx to perform graceful configuration reloads.
An attempt to use global variables will likely break this feature,
because it will be impossible to have two configurations at
the same time and get rid of them.
Sometimes global variables are required.
In this case, special attention is needed to manage reconfiguration
properly.
Also, check if libraries used by your code have implicit
global state that may be broken on reload. Instead of dealing with malloc/free approach which is error prone,
learn how to use nginx pools.
A pool is created and tied to an object -
configuration,
cycle,
connection,
or HTTP request.
When the object is destroyed, the associated pool is destroyed too.
So when working with an object, it is possible to allocate the amount
needed from the corresponding pool and not worry about freeing memory,
even in case of errors. It is recommended to avoid using threads in nginx because it will
definitely break things: most nginx functions are not thread-safe.
It is expected that a thread will be executing only system calls and
thread-safe library functions.
If you need to run some code that is not related to client request processing,
the proper way is to schedule a timer in the A common mistake is to use libraries that are blocking internally.
Most libraries out there are synchronous and blocking by nature.
In other words, they perform one operation at a time and waste
time waiting for a response from the other peer.
As a result, when a request is processed with such a library, the whole
nginx worker is blocked, which destroys performance.
Use only libraries that provide an asynchronous interface and don't
block the whole process. Often modules need to perform an HTTP call to some external service.
A common mistake is to use some external library, such as libcurl,
to perform the HTTP request.
It is absolutely unnecessary to bring a huge amount of external
(probably blocking!) code
for a task that can be accomplished by nginx itself. There are two basic usage scenarios when an external request is needed: in the context of processing a client request (for example, in a content handler), or in the context of a worker process (for example, a timer handler). In the first case, the best approach is to use
subrequests API.
Instead of directly accessing external service, you declare a location
in nginx configuration and direct your subrequest to this location.
This location is not limited to
proxying
requests, but may contain other nginx directives.
An example of such approach is the
auth_request
directive implemented in
ngx_http_auth_request module. For the second case, it is possible to use basic HTTP client functionality
available in nginx.
For example,
OCSP module
implements a simple HTTP client.Source Code#
Coding Style#
Commit Messages#
API: bad things removed, good things added.
As explained elsewhere[1], the original API was bad because stuff;
this change was introduced to improve that aspect locally.
Levels of goodness have been implemented to mitigate the badness;
this is now the preferred way to work. Also, the badness is gone.
[1] https://example.com
Final Checks#
$ cd tests
$ prove .
tests/README
file for details.Submitting Contributions#
Coding conventions#
Code layout#
auto
— Build scriptssrc
core
— Basic types and functions — string, array, log,
pool, etc.event
— Event coremodules
— Event notification modules:
epoll
, kqueue
, select
etc.http
— Core HTTP module and common codemodules
— Other HTTP modulesv2
— HTTP/2mail
— Mail modulesos
— Platform-specific codeunix
win32
stream
— Stream modulesInclude files#
#include
statements must appear at the
beginning of every nginx file:#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <ngx_mail.h>
#include <ngx_stream.h>
Integers#
ngx_int_t
and ngx_uint_t
, which are
typedefs for intptr_t
and uintptr_t
respectively.Common return codes#
NGX_OK
— Operation succeeded.NGX_ERROR
— Operation failed.NGX_AGAIN
— Operation incomplete; call the function again.NGX_DECLINED
— Operation rejected, for example, because it is
disabled in the configuration. This is never an error.NGX_BUSY
— Resource is not available.NGX_DONE
— Operation complete or continued elsewhere.
Also used as an alternative success code.NGX_ABORT
— Function was aborted.
Also used as an alternative error code.Error handling#
ngx_errno
macro returns the last system error code.
It's mapped to errno
on POSIX platforms and to
GetLastError()
call in Windows.
The ngx_socket_errno
macro returns the last socket error
number.
Like the ngx_errno
macro, it's mapped to
errno
on POSIX platforms.
It's mapped to the WSAGetLastError()
call on Windows.
Accessing the values of ngx_errno
or
ngx_socket_errno
more than once in a row can cause
performance issues.
If the error value might be used multiple times, store it in a local variable
of type ngx_err_t
.
To set errors, use the ngx_set_errno(errno)
and
ngx_set_socket_errno(errno)
macros.ngx_errno
and
ngx_socket_errno
can be passed to the logging functions
ngx_log_error()
and ngx_log_debugX()
, in
which case system error text is added to the log message.ngx_errno
:ngx_int_t
ngx_my_kill(ngx_pid_t pid, ngx_log_t *log, int signo)
{
ngx_err_t err;
if (kill(pid, signo) == -1) {
err = ngx_errno;
ngx_log_error(NGX_LOG_ALERT, log, err, "kill(%P, %d) failed", pid, signo);
if (err == NGX_ESRCH) {
return 2;
}
return 1;
}
return 0;
}
Strings#
Overview#
u_char *
.ngx_str_t
is defined as follows:typedef struct {
size_t len;
u_char *data;
} ngx_str_t;
len
field holds the string length and
data
holds the string data.
The string, held in ngx_str_t
, may or may not be
null-terminated after the len
bytes.
In most cases it's not.
However, in certain parts of the code (for example, when parsing configuration),
ngx_str_t
objects are known to be null-terminated, which
simplifies string comparison and makes it easier to pass the strings to
syscalls.src/core/ngx_string.h
Some of them are wrappers around standard C functions:ngx_strcmp()
ngx_strncmp()
ngx_strstr()
ngx_strlen()
ngx_strchr()
ngx_memcmp()
ngx_memset()
ngx_memcpy()
ngx_memmove()
ngx_memzero()
— Fills memory with zeroes.ngx_explicit_memzero()
— Does the same as
ngx_memzero()
, but this call is never removed by the
compiler's dead store elimination optimization.
This function can be used to clear sensitive data such as passwords and keys.ngx_cpymem()
— Does the same as
ngx_memcpy()
, but returns the final destination address.
This one is handy for appending multiple strings in a row.ngx_movemem()
— Does the same as
ngx_memmove()
, but returns the final destination address.ngx_strlchr()
— Searches for a character in a string,
delimited by two pointers.ngx_tolower()
ngx_toupper()
ngx_strlow()
ngx_strcasecmp()
ngx_strncasecmp()
ngx_string(text)
— static initializer for the
ngx_str_t
type from the C string literal
text
ngx_null_string
— static empty string initializer for the
ngx_str_t
typengx_str_set(str, text)
— initializes string
str
of ngx_str_t *
type with the C string
literal text
ngx_str_null(str)
— initializes string str
of ngx_str_t *
type with the empty string.
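A short sketch of how these macros are typically used:

ngx_str_t  name = ngx_string("content-type");   /* static initialization */
ngx_str_t  value;

ngx_str_set(&value, "text/html");   /* value now points at the literal "text/html" */

Formatting#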
ngx_sprintf(buf, fmt, ...)
ngx_snprintf(buf, max, fmt, ...)
ngx_slprintf(buf, last, fmt, ...)
ngx_vslprintf(buf, last, fmt, args)
ngx_vsnprintf(buf, max, fmt, args)
src/core/ngx_string.c
. Some of them are:%O
— off_t
%T
— time_t
%z
— ssize_t
%i
— ngx_int_t
%p
— void *
%V
— ngx_str_t *
%s
— u_char *
(null-terminated)%*s
— size_t + u_char *
u
on most types to make them unsigned.
To convert output to hex, use X
or x.
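A minimal usage sketch (the format string and values here are arbitrary):

u_char     buf[128], *p;
ngx_str_t  name = ngx_string("foo");
ngx_int_t  n = 42;

p = ngx_snprintf(buf, sizeof(buf), "%V limit is %i", &name, n);
/* buf..p now holds "foo limit is 42"; the result is not null-terminated */

Numeric conversion#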
NGX_ERROR
on error.ngx_atoi(line, n)
— ngx_int_t
ngx_atosz(line, n)
— ssize_t
ngx_atoof(line, n)
— off_t
ngx_atotm(line, n)
— time_t
NGX_ERROR
on error.ngx_atofp(line, n, point)
— Converts a fixed point floating
number of given length to a positive integer of type
ngx_int_t
.
The result is shifted left by point
decimal
positions.
The string representation of the number is expected to have no more
than point
fractional digits.
For example, ngx_atofp("10.5", 4, 2)
returns
1050
.ngx_hextoi(line, n)
— Converts a hexadecimal representation
of a positive integer to ngx_int_t.
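A small usage sketch:

ngx_str_t  value = ngx_string("8080");
ngx_int_t  port;

port = ngx_atoi(value.data, value.len);
if (port == NGX_ERROR) {
    /* not a valid non-negative integer */
}

Regular expressions#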
src/core/ngx_regex.h
.NGX_PCRE
macro:#if (NGX_PCRE)
ngx_regex_t *re;
ngx_regex_compile_t rc;
u_char errstr[NGX_MAX_CONF_ERRSTR];
ngx_str_t value = ngx_string("message (\\d\\d\\d).*Codeword is '(?<cw>\\w+)'");
ngx_memzero(&rc, sizeof(ngx_regex_compile_t));
rc.pattern = value;
rc.pool = cf->pool;
rc.err.len = NGX_MAX_CONF_ERRSTR;
rc.err.data = errstr;
/* rc.options can be set to NGX_REGEX_CASELESS */
if (ngx_regex_compile(&rc) != NGX_OK) {
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "%V", &rc.err);
return NGX_CONF_ERROR;
}
re = rc.regex;
#endif
captures
and
named_captures
fields in the
ngx_regex_compile_t
structure contain the count of all
captures and named captures, respectively, found in the regular expression.ngx_int_t n;
int captures[(1 + rc.captures) * 3];
ngx_str_t input = ngx_string("This is message 123. Codeword is 'foobar'.");
n = ngx_regex_exec(re, &input, captures, (1 + rc.captures) * 3);
if (n >= 0) {
/* string matches expression */
} else if (n == NGX_REGEX_NO_MATCHED) {
/* no match was found */
} else {
/* some error */
ngx_log_error(NGX_LOG_ALERT, log, 0, ngx_regex_exec_n " failed: %i", n);
}
ngx_regex_exec()
are the compiled regular
expression re
, the string to match input
,
an optional array of integers to hold any captures
that are
found, and the array's size
.
The size of the captures
array must be a multiple of three,
as required by the
PCRE API.
In the example, the size is calculated from the total number of captures plus
one for the matched string itself.u_char *p;
size_t size;
ngx_str_t name, value;
/* all captures */
for (i = 0; i < n * 2; i += 2) {
value.data = input.data + captures[i];
value.len = captures[i + 1] - captures[i];
}
/* accessing named captures */
size = rc.name_size;
p = rc.names;
for (i = 0; i < rc.named_captures; i++, p += size) {
/* capture name */
name.data = &p[2];
name.len = ngx_strlen(name.data);
n = 2 * ((p[0] << 8) + p[1]);
/* captured value */
value.data = &input.data[captures[n]];
value.len = captures[n + 1] - captures[n];
}
ngx_regex_exec_array()
function accepts the array of
ngx_regex_elt_t
elements (which are just compiled regular
expressions with associated names), a string to match, and a log.
The function applies expressions from the array to the string until
either a match is found or no more expressions are left.
The return value is NGX_OK
when there is a match and
NGX_DECLINED
otherwise, or NGX_ERROR
in case of error.
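A usage sketch, assuming my_regexes is an ngx_array_t of ngx_regex_elt_t elements built elsewhere, and input and log are available:

ngx_int_t  rc;

rc = ngx_regex_exec_array(my_regexes, &input, log);

if (rc == NGX_OK) {
    /* one of the expressions matched */

} else if (rc == NGX_DECLINED) {
    /* no expression matched */

} else {
    /* NGX_ERROR */
}

Time#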
ngx_time_t
structure represents time with three separate
types for seconds, milliseconds, and the GMT offset:typedef struct {
time_t sec;
ngx_uint_t msec;
ngx_int_t gmtoff;
} ngx_time_t;
ngx_tm_t
structure is an alias for
struct tm
on UNIX platforms and SYSTEMTIME
on Windows.ngx_cached_err_log_time
— Used in error log entries:
"1970/09/28 12:00:00"
ngx_cached_http_log_time
— Used in HTTP access log entries:
"28/Sep/1970:12:00:00 +0600"
ngx_cached_syslog_time
— Used in syslog entries:
"Sep 28 12:00:00"
ngx_cached_http_time
— Used in HTTP headers:
"Mon, 28 Sep 1970 06:00:00 GMT"
ngx_cached_http_log_iso8601
— The ISO 8601
standard format:
"1970-09-28T12:00:00+06:00"
ngx_time()
and ngx_timeofday()
macros
return the current time value in seconds and are the preferred way to access
the cached time value.ngx_gettimeofday()
,
which updates its argument (pointer to
struct timeval
).
The time is always updated when nginx returns to the event loop from system
calls.
To update the time immediately, call ngx_time_update()
,
or ngx_time_sigsafe_update()
if updating the time in the
signal handler context.time_t
into the indicated
broken-down time representation.
The first function in each pair converts time_t
to
ngx_tm_t
and the second (with the _libc_
infix) to struct tm
:ngx_gmtime(), ngx_libc_gmtime()
— Time expressed as UTCngx_localtime(), ngx_libc_localtime()
— Time expressed
relative to the local time zonengx_http_time(buf, time)
function returns a string
representation suitable for use in HTTP headers (for example,
"Mon, 28 Sep 1970 06:00:00 GMT"
).
The ngx_http_cookie_time(buf, time)
function returns a string representation suitable
for HTTP cookies ("Thu, 31-Dec-37 23:55:55 GMT").
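A minimal sketch of formatting the cached current time for an HTTP header:

u_char  buf[sizeof("Mon, 28 Sep 1970 06:00:00 GMT") - 1], *p;

p = ngx_http_time(buf, ngx_time());
/* buf..p now holds the current time in HTTP header format */

Containers#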
Array#
ngx_array_t
is defined as followstypedef struct {
void *elts;
ngx_uint_t nelts;
size_t size;
ngx_uint_t nalloc;
ngx_pool_t *pool;
} ngx_array_t;
elts
field.
The nelts
field holds the number of elements.
The size
field holds the size of a single element and is set
when the array is initialized.ngx_array_create(pool, n, size)
call to create an
array in a pool, and the ngx_array_init(array, pool, n, size)
call to initialize an array object that has already been allocated.ngx_array_t *a, b;
/* create an array of strings with preallocated memory for 10 elements */
a = ngx_array_create(pool, 10, sizeof(ngx_str_t));
/* initialize string array for 10 elements */
ngx_array_init(&b, pool, 10, sizeof(ngx_str_t));
ngx_array_push(a)
adds one tail element and returns pointer
to itngx_array_push_n(a, n)
adds n
tail elements
and returns pointer to the first ones = ngx_array_push(a);
ss = ngx_array_push_n(&b, 3);
List#
ngx_list_t
list type is defined as follows:typedef struct {
ngx_list_part_t *last;
ngx_list_part_t part;
size_t size;
ngx_uint_t nalloc;
ngx_pool_t *pool;
} ngx_list_t;
typedef struct ngx_list_part_s ngx_list_part_t;
struct ngx_list_part_s {
void *elts;
ngx_uint_t nelts;
ngx_list_part_t *next;
};
ngx_list_init(list, pool, n, size)
or created by calling
ngx_list_create(pool, n, size)
.
Both functions take as arguments the size of a single item and a number of
items per list part.
To add an item to a list, use the ngx_list_push(list)
function.
To iterate over the items, directly access the list fields as shown in the
example:ngx_str_t *v;
ngx_uint_t i;
ngx_list_t *list;
ngx_list_part_t *part;
list = ngx_list_create(pool, 100, sizeof(ngx_str_t));
if (list == NULL) { /* error */ }
/* add items to the list */
v = ngx_list_push(list);
if (v == NULL) { /* error */ }
ngx_str_set(v, "foo");
v = ngx_list_push(list);
if (v == NULL) { /* error */ }
ngx_str_set(v, "bar");
/* iterate over the list */
part = &list->part;
v = part->elts;
for (i = 0; /* void */; i++) {
if (i >= part->nelts) {
if (part->next == NULL) {
break;
}
part = part->next;
v = part->elts;
i = 0;
}
ngx_do_smth(&v[i]);
}
ngx_table_elt_t
objects) as missing, set the
hash
field in ngx_table_elt_t
to
zero.
Items marked in this way are explicitly skipped when the headers are iterated
over.
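For example, a sketch that hides a previously set "Expires" response header (assuming an HTTP request r in which the header was added earlier):

ngx_table_elt_t  *h;

h = r->headers_out.expires;

if (h) {
    h->hash = 0;   /* the header is now skipped when headers are output */
}

Queue#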
typedef struct ngx_queue_s ngx_queue_t;
struct ngx_queue_s {
ngx_queue_t *prev;
ngx_queue_t *next;
};
ngx_queue_init(q)
call to initialize the list head
before use.
Queues support the following operations:ngx_queue_insert_head(h, x)
,
ngx_queue_insert_tail(h, x)
— Insert a new nodengx_queue_remove(x)
— Remove a queue nodengx_queue_split(h, q, n)
— Split a queue at a node,
returning the queue tail in a separate queuengx_queue_add(h, n)
— Add a second queue to the first queuengx_queue_head(h)
,
ngx_queue_last(h)
— Get first or last queue nodengx_queue_sentinel(h)
- Get a queue sentinel object to end
iteration atngx_queue_data(q, type, link)
— Get a reference to the
beginning of a queue node data structure, considering the queue field offset in
ittypedef struct {
ngx_str_t value;
ngx_queue_t queue;
} ngx_foo_t;
ngx_foo_t *f;
ngx_queue_t values, *q;
ngx_queue_init(&values);
f = ngx_palloc(pool, sizeof(ngx_foo_t));
if (f == NULL) { /* error */ }
ngx_str_set(&f->value, "foo");
ngx_queue_insert_tail(&values, &f->queue);
/* insert more nodes here */
for (q = ngx_queue_head(&values);
q != ngx_queue_sentinel(&values);
q = ngx_queue_next(q))
{
f = ngx_queue_data(q, ngx_foo_t, queue);
ngx_do_smth(&f->value);
}
Red-Black tree#
src/core/ngx_rbtree.h
header file provides access to the
effective implementation of red-black trees.typedef struct {
ngx_rbtree_t rbtree;
ngx_rbtree_node_t sentinel;
/* custom per-tree data here */
} my_tree_t;
typedef struct {
ngx_rbtree_node_t rbnode;
/* custom per-node data */
foo_t val;
} my_node_t;
my_tree_t root;
ngx_rbtree_init(&root.rbtree, &root.sentinel, insert_value_function);
insert_value
" functions.
For example, the ngx_str_rbtree_insert_value
function deals
with the ngx_str_t
type.
Its arguments are pointers to a root node of an insertion, the newly created
node to be added, and a tree sentinel.void ngx_str_rbtree_insert_value(ngx_rbtree_node_t *temp,
ngx_rbtree_node_t *node,
ngx_rbtree_node_t *sentinel)
my_node_t *
my_rbtree_lookup(ngx_rbtree_t *rbtree, foo_t *val, uint32_t hash)
{
ngx_int_t rc;
my_node_t *n;
ngx_rbtree_node_t *node, *sentinel;
node = rbtree->root;
sentinel = rbtree->sentinel;
while (node != sentinel) {
n = (my_node_t *) node;
if (hash != node->key) {
node = (hash < node->key) ? node->left : node->right;
continue;
}
rc = compare(val, node->val);
if (rc < 0) {
node = node->left;
continue;
}
if (rc > 0) {
node = node->right;
continue;
}
return n;
}
return NULL;
}
compare()
function is a classic comparator function that
returns a value less than, equal to, or greater than zero.
To speed up lookups and avoid comparing user objects that can be big, an integer
hash field is used.ngx_rbtree_insert()
:my_node_t *my_node;
ngx_rbtree_node_t *node;
my_node = ngx_palloc(...);
init_custom_data(&my_node->val);
node = &my_node->rbnode;
node->key = create_key(my_node->val);
ngx_rbtree_insert(&root->rbtree, node);
ngx_rbtree_delete()
function:ngx_rbtree_delete(&root->rbtree, node);
Hash#
src/core/ngx_hash.h
.
Both exact and wildcard matching are supported.
The latter requires extra setup and is described in a separate section below.max_size
and bucket_size
, as detailed in a separate
document.
They are usually configurable by the user.
Hash initialization settings are stored with the
ngx_hash_init_t
type, and the hash itself is
ngx_hash_t
:ngx_hash_t foo_hash;
ngx_hash_init_t hash;
hash.hash = &foo_hash;
hash.key = ngx_hash_key;
hash.max_size = 512;
hash.bucket_size = ngx_align(64, ngx_cacheline_size);
hash.name = "foo_hash";
hash.pool = cf->pool;
hash.temp_pool = cf->temp_pool;
key
is a pointer to a function that creates the hash
integer key from a string.
There are two generic key-creation functions:
ngx_hash_key(data, len)
and
ngx_hash_key_lc(data, len)
.
The latter converts a string to all lowercase characters, so the passed string
must be writable.
If that is not true, pass the NGX_HASH_READONLY_KEY
flag
to the function, initializing the key array (see below).ngx_hash_keys_arrays_t
and
are initialized with ngx_hash_keys_array_init(arr, type)
:
The second parameter (type
) controls the amount of resources
preallocated for the hash and can be either NGX_HASH_SMALL
or
NGX_HASH_LARGE
.
The latter is appropriate if you expect the hash to contain thousands of
elements.ngx_hash_keys_arrays_t foo_keys;
foo_keys.pool = cf->pool;
foo_keys.temp_pool = cf->temp_pool;
ngx_hash_keys_array_init(&foo_keys, NGX_HASH_SMALL);
ngx_hash_add_key(keys_array, key, value, flags)
function:ngx_str_t k1 = ngx_string("key1");
ngx_str_t k2 = ngx_string("key2");
ngx_hash_add_key(&foo_keys, &k1, &my_data_ptr_1, NGX_HASH_READONLY_KEY);
ngx_hash_add_key(&foo_keys, &k2, &my_data_ptr_2, NGX_HASH_READONLY_KEY);
ngx_hash_init(hinit, key_names, nelts)
function:ngx_hash_init(&hash, foo_keys.keys.elts, foo_keys.keys.nelts);
max_size
or
bucket_size
parameters are not big enough.ngx_hash_find(hash, key, name, len)
function to look up
elements:my_data_t *data;
ngx_uint_t key;
key = ngx_hash_key(k1.data, k1.len);
data = ngx_hash_find(&foo_hash, key, k1.data, k1.len);
if (data == NULL) {
/* key not found */
}
Wildcard matching#
ngx_hash_combined_t
type.
It includes the hash type described above and has two additional keys arrays:
dns_wc_head
and dns_wc_tail
.
The initialization of basic properties is similar to a regular hash:ngx_hash_init_t hash
ngx_hash_combined_t foo_hash;
hash.hash = &foo_hash.hash;
hash.key = ...;
NGX_HASH_WILDCARD_KEY
flag:/* k1 = ".example.org"; */
/* k2 = "foo.*"; */
ngx_hash_add_key(&foo_keys, &k1, &data1, NGX_HASH_WILDCARD_KEY);
ngx_hash_add_key(&foo_keys, &k2, &data2, NGX_HASH_WILDCARD_KEY);
if (foo_keys.dns_wc_head.nelts) {
ngx_qsort(foo_keys.dns_wc_head.elts,
(size_t) foo_keys.dns_wc_head.nelts,
sizeof(ngx_hash_key_t),
cmp_dns_wildcards);
hash.hash = NULL;
hash.temp_pool = pool;
if (ngx_hash_wildcard_init(&hash, foo_keys.dns_wc_head.elts,
foo_keys.dns_wc_head.nelts)
!= NGX_OK)
{
return NGX_ERROR;
}
foo_hash.wc_head = (ngx_hash_wildcard_t *) hash.hash;
}
dns_wc_tail
array is done similarly.ngx_hash_find_combined(chash, key, name, len)
:/* key = "bar.example.org"; - will match ".example.org" */
/* key = "foo.example.com"; - will match "foo.*" */
hkey = ngx_hash_key(key.data, key.len);
res = ngx_hash_find_combined(&foo_hash, hkey, key.data, key.len);
Memory management#
Heap#
ngx_alloc(size, log)
— Allocate memory from system heap.
This is a wrapper around malloc()
with logging support.
Allocation error and debugging information is logged to log
.ngx_calloc(size, log)
— Allocate memory from system heap
like ngx_alloc()
, but fill memory with zeros after
allocation.ngx_memalign(alignment, size, log)
— Allocate aligned memory
from system heap.
This is a wrapper around posix_memalign()
on those platforms that provide that function.
Otherwise the implementation falls back to ngx_alloc(),
which
provides maximum alignment.ngx_free(p)
— Free allocated memory.
This is a wrapper around free().
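A minimal usage sketch (the size and log object are arbitrary here):

void  *p;

p = ngx_alloc(1024, cycle->log);
if (p == NULL) {
    /* allocation failed; the error has already been logged */
}

/* ... */

ngx_free(p);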
Pool#
ngx_pool_t
.
The following operations are supported:ngx_create_pool(size, log)
— Create a pool with specified
block size.
The pool object returned is allocated in the pool as well.
The size
should be at least NGX_MIN_POOL_SIZE
and a multiple of NGX_POOL_ALIGNMENT
.ngx_destroy_pool(pool)
— Free all pool memory, including
the pool object itself.ngx_palloc(pool, size)
— Allocate aligned memory from the
specified pool.ngx_pcalloc(pool, size)
— Allocate aligned memory
from the specified pool and fill it with zeroes.ngx_pnalloc(pool, size)
— Allocate unaligned memory from the
specified pool.
Mostly used for allocating strings.ngx_pfree(pool, p)
— Free memory that was previously
allocated in the specified pool.
Only allocations that result from requests forwarded to the system allocator
can be freed.u_char *p;
ngx_str_t *s;
ngx_pool_t *pool;
pool = ngx_create_pool(1024, log);
if (pool == NULL) { /* error */ }
s = ngx_palloc(pool, sizeof(ngx_str_t));
if (s == NULL) { /* error */ }
ngx_str_set(s, "foo");
p = ngx_pnalloc(pool, 3);
if (p == NULL) { /* error */ }
ngx_memcpy(p, "foo", 3);
ngx_chain_t
) are actively used in nginx,
so the nginx pool implementation provides a way to reuse them.
The chain
field of ngx_pool_t
keeps a
list of previously allocated links ready for reuse.
For efficient allocation of a chain link in a pool, use the
ngx_alloc_chain_link(pool)
function.
This function looks up a free chain link in the pool list and allocates a new
chain link if the pool list is empty.
To free a link, call the ngx_free_chain(pool, cl)
function.ngx_pool_cleanup_add(pool, size)
, which returns a
ngx_pool_cleanup_t
pointer to
be filled in by the caller.
Use the size
argument to allocate context for the cleanup
handler.ngx_pool_cleanup_t *cln;
cln = ngx_pool_cleanup_add(pool, 0);
if (cln == NULL) { /* error */ }
cln->handler = ngx_my_cleanup;
cln->data = "foo";
...
static void
ngx_my_cleanup(void *data)
{
u_char *msg = data;
ngx_do_smth(msg);
}
Logging#
ngx_log_t
objects.
The nginx logger supports several types of output:next
field.
In this case, each message is written to all loggers in the chain.NGX_LOG_EMERG
NGX_LOG_ALERT
NGX_LOG_CRIT
NGX_LOG_ERR
NGX_LOG_WARN
NGX_LOG_NOTICE
NGX_LOG_INFO
NGX_LOG_DEBUG
NGX_LOG_DEBUG_CORE
NGX_LOG_DEBUG_ALLOC
NGX_LOG_DEBUG_MUTEX
NGX_LOG_DEBUG_EVENT
NGX_LOG_DEBUG_HTTP
NGX_LOG_DEBUG_MAIL
NGX_LOG_DEBUG_STREAM
error_log
directives and are available at nearly every stage
of processing in cycle, configuration, client connection and other objects.ngx_log_error(level, log, err, fmt, ...)
— Error loggingngx_log_debug0(level, log, err, fmt)
,
ngx_log_debug1(level, log, err, fmt, arg1)
etc — Debug
logging with up to eight supported formatting argumentsNGX_MAX_ERROR_STR
(currently, 2048 bytes) on stack.
The message is prepended with the severity level, process ID (PID), connection
ID (stored in log->connection
), and the system error text.
For non-debug messages log->handler
is called as well to
prepend more specific information to the log message.
HTTP module sets ngx_http_log_error()
function as log
handler to log client and server addresses, current action (stored in
log->action
), client request line, server name etc./* specify what is currently done */
log->action = "sending mp4 to client";
/* error and debug log */
ngx_log_error(NGX_LOG_INFO, c->log, 0, "client prematurely closed connection");
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0,
"mp4 start:%ui, length:%ui", mp4->start, mp4->length);
2016/09/16 22:08:52 [info] 17445#0: *1 client prematurely closed connection while
sending mp4 to client, client: 127.0.0.1, server: , request: "GET /file.mp4 HTTP/1.1"
2016/09/16 23:28:33 [debug] 22140#0: *1 mp4 start:0, length:10000
Cycle#
ngx_cycle_t
.
The current cycle is referenced by the ngx_cycle
global
variable and inherited by nginx workers as they start.
Each time the nginx configuration is reloaded, a new cycle is created from the
new nginx configuration; the old cycle is usually deleted after the new one is
successfully created.ngx_init_cycle()
function, which
takes the previous cycle as its argument.
The function locates the previous cycle's configuration file and inherits as
many resources as possible from the previous cycle.
A placeholder cycle called "init cycle" is created as nginx start, then is
replaced by an actual cycle built from configuration.pool
— Cycle pool.
Created for each new cycle.log
— Cycle log.
Initially inherited from the old cycle, it is set to point to
new_log
after the configuration is read.new_log
— Cycle log, created by the configuration.
It's affected by the root-scope error_log
directive.connections
, connection_n
—
Array of connections of type ngx_connection_t
, created by
the event module while initializing each nginx worker.
The worker_connections
directive in the nginx configuration
sets the number of connections connection_n
.free_connections
,
free_connection_n
— List and number of currently available
connections.
If no connections are available, an nginx worker refuses to accept new clients
or connect to upstream servers.files
, files_n
— Array for mapping file
descriptors to nginx connections.
This mapping is used by the event modules that have the
NGX_USE_FD_EVENT
flag (currently, it's
poll
and devpoll
).conf_ctx
— Array of core module configurations.
The configurations are created and filled during reading of nginx configuration
files.modules
, modules_n
— Array of modules
of type ngx_module_t
, both static and dynamic, loaded by
the current configuration.listening
— Array of listening objects of type
ngx_listening_t
.
Listening objects are normally added by the listen
directive of different modules which call the
ngx_create_listening()
function.
Listen sockets are created based on the listening objects.paths
— Array of paths of type ngx_path_t
.
Paths are added by calling the function ngx_add_path()
from
modules which are going to operate on certain directories.
These directories are created by nginx after reading configuration, if missing.
Moreover, two handlers can be added for each path:open_files
— List of open file objects of type
ngx_open_file_t
, which are created by calling the function
ngx_conf_open_file()
.
Currently, nginx uses this kind of open files for logging.
After reading the configuration, nginx opens all files in the
open_files
list and stores each file descriptor in the
object's fd
field.
The files are opened in append mode and are created if missing.
The files in the list are reopened by nginx workers upon receiving the
reopen signal (most often USR1
).
In this case the descriptor in the fd
field is changed to a
new value.shared_memory
— List of shared memory zones, each added by
calling the ngx_shared_memory_add()
function.
Shared zones are mapped to the same address range in all nginx processes and
are used to share common data, for example the HTTP cache in-memory tree.
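A sketch of adding a zone at configuration time (ngx_my_module, ngx_my_zone_init and my_conf are hypothetical):

ngx_str_t        name = ngx_string("my_zone");
ngx_shm_zone_t  *zone;

zone = ngx_shared_memory_add(cf, &name, 65536, &ngx_my_module);
if (zone == NULL) { /* error */ }

zone->init = ngx_my_zone_init;   /* called after the zone is created or reused */
zone->data = my_conf;

Buffer#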
ngx_buf_t
.
Normally, it's used to hold data to be written to a destination or read from a
source.
A buffer can reference data in memory or in a file and it's technically
possible for a buffer to reference both at the same time.
Memory for the buffer is allocated separately and is not related to the buffer
structure ngx_buf_t
.ngx_buf_t
structure has the following fields:start
, end
— The boundaries of the memory
block allocated for the buffer.pos
, last
— The boundaries of the memory
buffer; normally a subrange of start
..
end
.file_pos
, file_last
— The boundaries of a
file buffer, expressed as offsets from the beginning of the file.tag
— Unique value used to distinguish buffers; created by
different nginx modules, usually for the purpose of buffer reuse.file
— File object.temporary
— Flag indicating that the buffer references
writable memory.memory
— Flag indicating that the buffer references read-only
memory.in_file
— Flag indicating that the buffer references data
in a file.flush
— Flag indicating that all data prior to the buffer
need to be flushed.recycled
— Flag indicating that the buffer can be reused and
needs to be consumed as soon as possible.sync
— Flag indicating that the buffer carries no data or
special signal like flush
or last_buf
.
By default nginx considers such buffers an error condition, but this flag tells
nginx to skip the error check.last_buf
— Flag indicating that the buffer is the last in
output.last_in_chain
— Flag indicating that there are no more data
buffers in a request or subrequest.shadow
— Reference to another ("shadow") buffer related to
the current buffer, usually in the sense that the buffer uses data from the
shadow.
When the buffer is consumed, the shadow buffer is normally also marked as
consumed.last_shadow
— Flag indicating that the buffer is the last
one that references a particular shadow buffer.temp_file
— Flag indicating that the buffer is in a temporary
file.ngx_chain_t
,
defined as follows:typedef struct ngx_chain_s ngx_chain_t;
struct ngx_chain_s {
ngx_buf_t *buf;
ngx_chain_t *next;
};
ngx_chain_t *
ngx_get_my_chain(ngx_pool_t *pool)
{
ngx_buf_t *b;
ngx_chain_t *out, *cl, **ll;
/* first buf */
cl = ngx_alloc_chain_link(pool);
if (cl == NULL) { /* error */ }
b = ngx_calloc_buf(pool);
if (b == NULL) { /* error */ }
b->start = (u_char *) "foo";
b->pos = b->start;
b->end = b->start + 3;
b->last = b->end;
b->memory = 1; /* read-only memory */
cl->buf = b;
out = cl;
ll = &cl->next;
/* second buf */
cl = ngx_alloc_chain_link(pool);
if (cl == NULL) { /* error */ }
b = ngx_create_temp_buf(pool, 3);
if (b == NULL) { /* error */ }
b->last = ngx_cpymem(b->last, "foo", 3);
cl->buf = b;
cl->next = NULL;
*ll = cl;
return out;
}
Networking#
Connection#
ngx_connection_t
is a wrapper around a
socket descriptor.
It includes the following fields:fd
— Socket descriptordata
— Arbitrary connection context.
Normally, it is a pointer to a higher-level object built on top of the
connection, such as an HTTP request or a Stream session.read
, write
— Read and write events for
the connection.recv
, send
,
recv_chain
, send_chain
— I/O operations
for the connection.pool
— Connection pool.log
— Connection log.sockaddr
, socklen
,
addr_text
— Remote socket address in binary and text forms.local_sockaddr
, local_socklen
— Local
socket address in binary form.
Initially, these fields are empty.
Use the ngx_connection_local_sockaddr()
function to get the
local socket address.proxy_protocol_addr
, proxy_protocol_port
- PROXY protocol client address and port, if the PROXY protocol is enabled for
the connection.ssl
— SSL context for the connection.reusable
— Flag indicating the connection is in a state that
makes it eligible for reuse.close
— Flag indicating that the connection is being reused
and needs to be closed.ssl
field holds a pointer to an
ngx_ssl_connection_t
structure, keeping all SSL-related data
for the connection, including SSL_CTX
and
SSL
.
The recv
, send
,
recv_chain
, and send_chain
handlers are
set to SSL-enabled functions as well.worker_connections
directive in the nginx configuration
limits the number of connections per nginx worker.
All connection structures are precreated when a worker starts and stored in
the connections
field of the cycle object.
To retrieve a connection structure, use the
ngx_get_connection(s, log)
function.
It takes as its s
argument a socket descriptor, which needs
to be wrapped in a connection structure.ngx_reusable_connection(c, reusable)
function.
Calling ngx_reusable_connection(c, 1)
sets the
reusable
flag in the connection structure and inserts the
connection into the reusable_connections_queue
of the cycle.
Whenever ngx_get_connection()
finds out there are no
available connections in the cycle's free_connections
list,
it calls ngx_drain_connections()
to release a
specific number of reusable connections.
For each such connection, the close
flag is set and its read
handler is called which is supposed to free the connection by calling
ngx_close_connection(c)
and make it available for reuse.
To exit the reusable state,
ngx_reusable_connection(c, 0)
is called.
HTTP client connections are an example of reusable connections in nginx; they
are marked as reusable until the first request byte is received from the client.
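A minimal sketch of the pattern:

/* nothing received from the client yet; allow the connection to be reclaimed */
ngx_reusable_connection(c, 1);

...

/* the first bytes have arrived; take the connection out of the reusable state */
ngx_reusable_connection(c, 0);

Events#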
Event#
ngx_event_t
in nginx provides a mechanism
for notification that a specific event has occurred.ngx_event_t
include:data
— Arbitrary event context used in event handlers,
usually as pointer to a connection related to the event.handler
— Callback function to be invoked when the event
happens.write
— Flag indicating a write event.
Absence of the flag indicates a read event.active
— Flag indicating that the event is registered for
receiving I/O notifications, normally from notification mechanisms like
epoll
, kqueue
, poll
.ready
— Flag indicating that the event has received an
I/O notification.delayed
— Flag indicating that I/O is delayed due to rate
limiting.timer
— Red-black tree node for inserting the event into
the timer tree.timer_set
— Flag indicating that the event timer is set and
not yet expired.timedout
— Flag indicating that the event timer has expired.eof
— Flag indicating that EOF occurred while reading data.pending_eof
— Flag indicating that EOF is pending on the
socket, even though there may be some data available before it.
The flag is delivered via the EPOLLRDHUP
epoll
event or
EV_EOF
kqueue
flag.error
— Flag indicating that an error occurred during
reading (for a read event) or writing (for a write event).cancelable
— Timer event flag indicating that the event
should be ignored while shutting down the worker.
Graceful worker shutdown is delayed until there are no non-cancelable timer
events scheduled.posted
— Flag indicating that the event is posted to a queue.queue
— Queue node for posting the event to a queue.I/O events#
ngx_get_connection()
function has two attached events, c->read
and
c->write
, which are used for receiving notification that the
socket is ready for reading or writing.
All such events operate in Edge-Triggered mode, meaning that they only trigger
notifications when the state of the socket changes.
For example, doing a partial read on a socket does not make nginx deliver a
repeated read notification until more data arrives on the socket.
Even when the underlying I/O notification mechanism is essentially
Level-Triggered (poll
, select
etc), nginx
converts the notifications to Edge-Triggered.
To make nginx event notifications consistent across all notification systems
on different platforms, the functions
ngx_handle_read_event(rev, flags)
and
ngx_handle_write_event(wev, lowat)
must be called after
handling an I/O socket notification or calling any I/O functions on that socket.
Normally, the functions are called once at the end of each read or write
event handler.Timer events#
ngx_msec_t
type.
Its current value can be obtained from the ngx_current_msec
variable.ngx_add_timer(ev, timer)
sets a timeout for an
event, ngx_del_timer(ev)
deletes a previously set timeout.
The global timeout red-black tree ngx_event_timer_rbtree
stores all timeouts currently set.
The key in the tree is of type ngx_msec_t
and is the time
when the event occurs.
The tree structure enables fast insertion and deletion operations, as well as
access to the nearest timeouts, which nginx uses to find out how long to wait
for I/O events and for expiring timeout events.Posted events#
ngx_post_event(ev, q)
macro posts the event
ev
to the post queue q
.
The ngx_delete_posted_event(ev)
macro deletes the event
ev
from the queue it's currently posted in.
Normally, events are posted to the ngx_posted_events
queue,
which is processed late in the event loop — after all I/O and timer
events are already handled.
The function ngx_event_process_posted()
is called to process
an event queue.
It calls event handlers until the queue is empty.
This means that a posted event handler can post more events to be processed
within the current event loop iteration.void
ngx_my_connection_read(ngx_connection_t *c)
{
ngx_event_t *rev;
rev = c->read;
ngx_add_timer(rev, 1000);
rev->handler = ngx_my_read_handler;
ngx_my_read(rev);
}
void
ngx_my_read_handler(ngx_event_t *rev)
{
ssize_t n;
ngx_connection_t *c;
u_char buf[256];
if (rev->timedout) { /* timeout expired */ }
c = rev->data;
while (rev->ready) {
n = c->recv(c, buf, sizeof(buf));
if (n == NGX_AGAIN) {
break;
}
if (n == NGX_ERROR) { /* error */ }
/* process buf */
}
if (ngx_handle_read_event(rev, 0) != NGX_OK) { /* error */ }
}
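As a hedged sketch (the handler names here are hypothetical, not taken from the sources), a handler can defer part of its work by posting its event, so that the deferred handler runs later in the same event loop iteration:
static void ngx_my_deferred_handler(ngx_event_t *ev);

static void
ngx_my_defer_work(ngx_event_t *ev)
{
    /* switch the handler and post the event to the queue that is
       processed after all I/O and timer events of this iteration */
    ev->handler = ngx_my_deferred_handler;
    ngx_post_event(ev, &ngx_posted_events);
}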
Event loop#
All nginx processes that do I/O have an event loop.
(The master process instead spends most of its time in the sigsuspend()
call waiting for signals to arrive.)
The nginx event loop is implemented in the
ngx_process_events_and_timers()
function, which is called
repeatedly until the process exits.
First, the timeout closest to expiring is found by calling ngx_event_find_timer().
This function finds the leftmost node in the timer tree and returns the
number of milliseconds until the node expires.
Next, I/O events are processed; for each event that has occurred, its ready
flag is set and the event's handler is called.
For Linux, the ngx_epoll_process_events()
handler
is normally used, which calls epoll_wait()
to wait for I/O
events.
Expired timers are then handled by calling ngx_event_expire_timers().
The timer tree is iterated from the leftmost element to the right until an
unexpired timeout is found.
For each expired node the timedout
event flag is set,
the timer_set
flag is reset, and the event handler is called.
Finally, posted events are processed by calling ngx_event_process_posted().
The function repeatedly removes the first element from the posted events
queue and calls the element's handler, until the queue is empty.
This completes one iteration of the ngx_process_events_and_timers()
call.
Processes#
The type of an nginx process is kept in the ngx_process
global variable, and is one of the following:NGX_PROCESS_MASTER
— The master process, which reads the
NGINX configuration, creates cycles, and starts and controls child processes.
It does not perform any I/O and responds only to signals.
Its cycle function is ngx_master_process_cycle()
.NGX_PROCESS_WORKER
— The worker process, which handles client
connections.
It is started by the master process and responds to its signals and channel
commands as well.
Its cycle function is ngx_worker_process_cycle()
.
There can be multiple worker processes, as configured by the
worker_processes
directive.NGX_PROCESS_SINGLE
— The single process, which exists only in
master_process off
mode, and is the only process running in
that mode.
It creates cycles (like the master process does) and handles client connections
(like the worker process does).
Its cycle function is ngx_single_process_cycle()
.NGX_PROCESS_HELPER
— The helper process, of which currently
there are two types: cache manager and cache loader.
The cycle function for both is
ngx_cache_manager_process_cycle()
.NGX_SHUTDOWN_SIGNAL
(SIGQUIT
on most
systems) — Gracefully shutdown.
Upon receiving this signal, the master process sends a shutdown signal to all
child processes.
When no child processes are left, the master destroys the cycle pool and exits.
When a worker process receives this signal, it closes all listening sockets and
waits until there are no non-cancelable events scheduled, then destroys the
cycle pool and exits.
When the cache manager or the cache loader process receives this signal, it
exits immediately.
The ngx_quit
variable is set to 1
when a
process receives this signal, and is immediately reset after being processed.
The ngx_exiting
variable is set to 1
while
a worker process is in the shutdown state.NGX_TERMINATE_SIGNAL
(SIGTERM
on most
systems) — Terminate.
Upon receiving this signal, the master process sends a terminate signal to all
child processes.
If a child process does not exit within 1 second, the master process sends the
SIGKILL
signal to kill it.
When no child processes are left, the master process destroys the cycle pool and
exits.
When a worker process, the cache manager process or the cache loader process
receives this signal, it destroys the cycle pool and exits.
The variable ngx_terminate
is set to 1
when this signal is received.NGX_NOACCEPT_SIGNAL
(SIGWINCH
on most
systems) - Shut down all worker and helper processes.
Upon receiving this signal, the master process shuts down its child processes.
If a previously started new nginx binary exits, the child processes of the old
master are started again.
When a worker process receives this signal, it shuts down in debug mode
set by the debug_points
directive.NGX_RECONFIGURE_SIGNAL
(SIGHUP
on most
systems) - Reconfigure.
Upon receiving this signal, the master process re-reads the configuration and
creates a new cycle based on it.
If the new cycle is created successfully, the old cycle is deleted and new
child processes are started.
Meanwhile, the old child processes receive the
NGX_SHUTDOWN_SIGNAL
signal.
In single-process mode, nginx creates a new cycle, but keeps the old one until
there are no longer clients with active connections tied to it.
The worker and helper processes ignore this signal.NGX_REOPEN_SIGNAL
(SIGUSR1
on most
systems) — Reopen files.
The master process sends this signal to workers, which reopen all
open_files
related to the cycle.NGX_CHANGEBIN_SIGNAL
(SIGUSR2
on most
systems) — Change the nginx binary.
The master process starts a new nginx binary and passes in a list of all listen
sockets.
The text-format list, passed in the "NGINX"
environment
variable, consists of descriptor numbers separated with semicolons.
The new nginx binary reads the "NGINX"
variable and adds the
sockets to its init cycle.
Other processes ignore this signal.
Note that the master process does not use the kill()
syscall to pass signals to workers and helpers.
Instead, nginx uses inter-process socket pairs which allow sending messages
between all nginx processes.
Currently, however, messages are only sent from the master to its children.
The messages carry the standard signals.Threads#
To deal with synchronization, the following wrappers over pthreads
primitives are available:typedef pthread_mutex_t ngx_thread_mutex_t;
ngx_int_t
ngx_thread_mutex_create(ngx_thread_mutex_t *mtx, ngx_log_t *log);
ngx_int_t
ngx_thread_mutex_destroy(ngx_thread_mutex_t *mtx, ngx_log_t *log);
ngx_int_t
ngx_thread_mutex_lock(ngx_thread_mutex_t *mtx, ngx_log_t *log);
ngx_int_t
ngx_thread_mutex_unlock(ngx_thread_mutex_t *mtx, ngx_log_t *log);
typedef pthread_cond_t ngx_thread_cond_t;
ngx_int_t
ngx_thread_cond_create(ngx_thread_cond_t *cond, ngx_log_t *log);
ngx_int_t
ngx_thread_cond_destroy(ngx_thread_cond_t *cond, ngx_log_t *log);
ngx_int_t
ngx_thread_cond_signal(ngx_thread_cond_t *cond, ngx_log_t *log);
ngx_int_t
ngx_thread_cond_wait(ngx_thread_cond_t *cond, ngx_thread_mutex_t *mtx,
ngx_log_t *log);
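As a minimal, hedged sketch of using these wrappers (the function and its arguments are hypothetical, not from the sources), a critical section protected by a mutex might look like this:
static ngx_int_t
my_shared_update(ngx_thread_mutex_t *mtx, ngx_log_t *log)
{
    if (ngx_thread_mutex_lock(mtx, log) != NGX_OK) {
        return NGX_ERROR;
    }

    /* ... update data shared with worker threads ... */

    if (ngx_thread_mutex_unlock(mtx, log) != NGX_OK) {
        return NGX_ERROR;
    }

    return NGX_OK;
}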
Instead of creating a separate thread for each task, nginx uses thread pools.
The src/core/ngx_thread_pool.h
header file contains the
relevant definitions:struct ngx_thread_task_s {
ngx_thread_task_t *next;
ngx_uint_t id;
void *ctx;
void (*handler)(void *data, ngx_log_t *log);
ngx_event_t event;
};
typedef struct ngx_thread_pool_s ngx_thread_pool_t;
ngx_thread_pool_t *ngx_thread_pool_add(ngx_conf_t *cf, ngx_str_t *name);
ngx_thread_pool_t *ngx_thread_pool_get(ngx_cycle_t *cycle, ngx_str_t *name);
ngx_thread_task_t *ngx_thread_task_alloc(ngx_pool_t *pool, size_t size);
ngx_int_t ngx_thread_task_post(ngx_thread_pool_t *tp, ngx_thread_task_t *task);
At configuration time, a module willing to use threads obtains a reference
to a thread pool by calling ngx_thread_pool_add(cf, name)
, which either creates a
new thread pool with the given name
or returns a reference
to the pool with that name if it already exists.
To add a task
into a queue of a specified thread pool
tp
at runtime, use the
ngx_thread_task_post(tp, task)
function.
To execute a function in a separate thread, pass its parameters and set up a
completion handler using the ngx_thread_task_t
structure:typedef struct {
int foo;
} my_thread_ctx_t;
static void
my_thread_func(void *data, ngx_log_t *log)
{
my_thread_ctx_t *ctx = data;
/* this function is executed in a separate thread */
}
static void
my_thread_completion(ngx_event_t *ev)
{
my_thread_ctx_t *ctx = ev->data;
/* executed in nginx event loop */
}
ngx_int_t
my_task_offload(my_conf_t *conf)
{
my_thread_ctx_t *ctx;
ngx_thread_task_t *task;
task = ngx_thread_task_alloc(conf->pool, sizeof(my_thread_ctx_t));
if (task == NULL) {
return NGX_ERROR;
}
ctx = task->ctx;
ctx->foo = 42;
task->handler = my_thread_func;
task->event.handler = my_thread_completion;
task->event.data = ctx;
if (ngx_thread_task_post(conf->thread_pool, task) != NGX_OK) {
return NGX_ERROR;
}
return NGX_OK;
}
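A module typically resolves its thread pool once and caches the pointer. The following hedged sketch looks up a pool declared in the configuration by name; the pool name "default" and the init_process placement are assumptions of the sketch, not requirements:
static ngx_thread_pool_t  *my_thread_pool;

static ngx_int_t
my_module_init_process(ngx_cycle_t *cycle)
{
    ngx_str_t  name = ngx_string("default");

    /* look up a thread pool declared in the configuration */
    my_thread_pool = ngx_thread_pool_get(cycle, &name);
    if (my_thread_pool == NULL) {
        return NGX_ERROR;
    }

    return NGX_OK;
}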
Modules#
Adding new modules#
Each standalone nginx module resides in a separate directory that contains
at least two files: config
and a file with the module source code.
The config
file contains all information needed for nginx to
integrate the module, for example:ngx_module_type=CORE
ngx_module_name=ngx_foo_module
ngx_module_srcs="$ngx_addon_dir/ngx_foo_module.c"
. auto/module
ngx_addon_name=$ngx_module_name
The config
file is a POSIX shell script that can set
and access the following variables:ngx_module_type
— Type of module to build.
Possible values are CORE
, HTTP
,
HTTP_FILTER
, HTTP_INIT_FILTER
,
HTTP_AUX_FILTER
, MAIL
,
STREAM
, or MISC
.ngx_module_name
— Module names.
To build multiple modules from a set of source files, specify a
whitespace-separated list of names.
The first name indicates the name of the output binary for the dynamic module.
The names in the list must match the names used in the source code.ngx_addon_name
— Name of the module as it appears in output
on the console from the configure script.ngx_module_srcs
— Whitespace-separated list of source
files used to compile the module.
The $ngx_addon_dir
variable can be used to represent the path
to the module directory.ngx_module_incs
— Include paths required to build the module.ngx_module_deps
— Whitespace-separated list of the module's
dependencies.
Usually, it is the list of header files.ngx_module_libs
— Whitespace-separated list of libraries to
link with the module.
For example, use ngx_module_libs=-lpthread
to link with the
libpthread
library.
The following macros can be used to link against the same libraries as
nginx:
LIBXSLT
, LIBGD
, GEOIP
,
PCRE
, OPENSSL
, MD5
,
SHA1
, ZLIB
, and PERL
.ngx_module_link
— Variable set by the build system to
DYNAMIC
for a dynamic module or ADDON
for a static module and used to determine different actions to perform
depending on linking type.ngx_module_order
— Load order for the module;
useful for the HTTP_FILTER
and
HTTP_AUX_FILTER
module types.
The format for this option is a whitespace-separated list of modules.
All modules in the list following the current module's name end up after it in
the global list of modules, which sets up the order for module initialization.
For filter modules, later initialization means earlier execution.
The ngx_http_copy_filter_module
reads the data for other
filter modules and is placed near the bottom of the list so that it is one of
the first to be executed.
The ngx_http_write_filter_module
writes the data to the
client socket and is placed near the top of the list, and is the last to be
executed.
By default, filter modules are placed before ngx_http_copy_filter
in the module list so that the filter
handler is executed after the copy filter handler.
For other module types the default is the empty string.
To compile a module into nginx statically, pass the --add-module=/path/to/module
argument to the configure
script.
To compile a module for later dynamic loading into nginx, use the
--add-dynamic-module=/path/to/module
argument.
Core Modules#
Modules are the building blocks of nginx, and most of its functionality is
implemented as modules.
The module source file must contain a global variable of type ngx_module_t
, which is defined as follows:struct ngx_module_s {
/* private part is omitted */
void *ctx;
ngx_command_t *commands;
ngx_uint_t type;
ngx_int_t (*init_master)(ngx_log_t *log);
ngx_int_t (*init_module)(ngx_cycle_t *cycle);
ngx_int_t (*init_process)(ngx_cycle_t *cycle);
ngx_int_t (*init_thread)(ngx_cycle_t *cycle);
void (*exit_thread)(ngx_cycle_t *cycle);
void (*exit_process)(ngx_cycle_t *cycle);
void (*exit_master)(ngx_cycle_t *cycle);
/* stubs for future extensions are omitted */
};
The omitted private part of the structure is filled using the predefined macro
NGX_MODULE_V1
.
Each module keeps its private data in the ctx
field,
recognizes the configuration directives, specified in the
commands
array, and can be invoked at certain stages of
the nginx lifecycle.
The module lifecycle consists of the following events:
Configuration directive handlers are called as they appear in configuration
files in the context of the master process.
After the configuration is parsed successfully, the init_module
handler is called in the context of the master process.
The init_module
handler is called in the master process each
time a configuration is loaded.
The master process creates one or more worker processes, and the init_process
handler is called in each of them.
When a worker process receives the shutdown or terminate command from the
master, it invokes the exit_process
handler.
The master process calls the exit_master
handler before
exiting.
Because threads are used in nginx only as a supplementary I/O facility with
its own API, the init_thread
and exit_thread
handlers are not currently called.
There is also no init_master
handler, because it would be
unnecessary overhead.
The module type
defines exactly what is stored in the
ctx
field.
Its value is one of the following types:NGX_CORE_MODULE
NGX_EVENT_MODULE
NGX_HTTP_MODULE
NGX_MAIL_MODULE
NGX_STREAM_MODULE
NGX_CORE_MODULE
is the most basic and thus the most
generic and most low-level type of module.
The other module types are implemented on top of it and provide a more
convenient way to deal with corresponding domains, like handling events or HTTP
requests.
Core modules currently include ngx_core_module
,
ngx_errlog_module
, ngx_regex_module
,
ngx_thread_pool_module
and
ngx_openssl_module
modules.
The HTTP module, the stream module, the mail module and event modules are core
modules too.
The context of a core module is defined as:typedef struct {
ngx_str_t name;
void *(*create_conf)(ngx_cycle_t *cycle);
char *(*init_conf)(ngx_cycle_t *cycle, void *conf);
} ngx_core_module_t;
name
is a module name string,
create_conf
and init_conf
are pointers to functions that create and initialize module configuration
respectively.
For core modules, nginx calls create_conf
before parsing
a new configuration and init_conf
after all configuration
is parsed successfully.
The typical create_conf
function allocates memory for the
configuration and sets default values.
A minimal core module ngx_foo_module
might
look like this:/*
* Copyright (C) Author.
*/
#include <ngx_config.h>
#include <ngx_core.h>
typedef struct {
ngx_flag_t enable;
} ngx_foo_conf_t;
static void *ngx_foo_create_conf(ngx_cycle_t *cycle);
static char *ngx_foo_init_conf(ngx_cycle_t *cycle, void *conf);
static char *ngx_foo_enable(ngx_conf_t *cf, void *post, void *data);
static ngx_conf_post_t ngx_foo_enable_post = { ngx_foo_enable };
static ngx_command_t ngx_foo_commands[] = {
{ ngx_string("foo_enabled"),
NGX_MAIN_CONF|NGX_DIRECT_CONF|NGX_CONF_FLAG,
ngx_conf_set_flag_slot,
0,
offsetof(ngx_foo_conf_t, enable),
&ngx_foo_enable_post },
ngx_null_command
};
static ngx_core_module_t ngx_foo_module_ctx = {
ngx_string("foo"),
ngx_foo_create_conf,
ngx_foo_init_conf
};
ngx_module_t ngx_foo_module = {
NGX_MODULE_V1,
&ngx_foo_module_ctx, /* module context */
ngx_foo_commands, /* module directives */
NGX_CORE_MODULE, /* module type */
NULL, /* init master */
NULL, /* init module */
NULL, /* init process */
NULL, /* init thread */
NULL, /* exit thread */
NULL, /* exit process */
NULL, /* exit master */
NGX_MODULE_V1_PADDING
};
static void *
ngx_foo_create_conf(ngx_cycle_t *cycle)
{
ngx_foo_conf_t *fcf;
fcf = ngx_pcalloc(cycle->pool, sizeof(ngx_foo_conf_t));
if (fcf == NULL) {
return NULL;
}
fcf->enable = NGX_CONF_UNSET;
return fcf;
}
static char *
ngx_foo_init_conf(ngx_cycle_t *cycle, void *conf)
{
ngx_foo_conf_t *fcf = conf;
ngx_conf_init_value(fcf->enable, 0);
return NGX_CONF_OK;
}
static char *
ngx_foo_enable(ngx_conf_t *cf, void *post, void *data)
{
ngx_flag_t *fp = data;
if (*fp == 0) {
return NGX_CONF_OK;
}
ngx_log_error(NGX_LOG_NOTICE, cf->log, 0, "Foo Module is enabled");
return NGX_CONF_OK;
}
Configuration Directives#
The ngx_command_t
type defines a single configuration
directive.
Each module that supports configuration provides an array of such structures
that describe how to process arguments and what handlers to call:typedef struct ngx_command_s ngx_command_t;
struct ngx_command_s {
ngx_str_t name;
ngx_uint_t type;
char *(*set)(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
ngx_uint_t conf;
ngx_uint_t offset;
void *post;
};
The array of directives is terminated by the special value ngx_null_command
.
The name
is the name of a directive as it appears
in the configuration file, for example "worker_processes" or "listen".
The type
is a bit-field of flags that specify the number of
arguments the directive takes, its type, and the context in which it appears.
The flags are:NGX_CONF_NOARGS
— Directive takes no arguments.NGX_CONF_1MORE
— Directive takes one or more arguments.NGX_CONF_2MORE
— Directive takes two or more arguments.NGX_CONF_TAKE1
.. NGX_CONF_TAKE7 —
Directive takes exactly the indicated number of arguments.NGX_CONF_TAKE12
, NGX_CONF_TAKE13
,
NGX_CONF_TAKE23
, NGX_CONF_TAKE123
,
NGX_CONF_TAKE1234
— Directive may take a different number of
arguments.
Options are limited to the given numbers.
For example, NGX_CONF_TAKE12
means it takes one or two
arguments.NGX_CONF_BLOCK
— Directive is a block, that is, it can
contain other directives within its opening and closing braces, or even
implement its own parser to handle contents inside.NGX_CONF_FLAG
— Directive takes a boolean value, either
on
or off
.NGX_MAIN_CONF
— In the top level context.NGX_HTTP_MAIN_CONF
— In the http
block.NGX_HTTP_SRV_CONF
— In a server
block
within the http
block.NGX_HTTP_LOC_CONF
— In a location
block
within the http
block.NGX_HTTP_UPS_CONF
— In an upstream
block
within the http
block.NGX_HTTP_SIF_CONF
— In an if
block within
a server
block in the http
block.NGX_HTTP_LIF_CONF
— In an if
block within
a location
block in the http
block.NGX_HTTP_LMT_CONF
— In a limit_except
block within the http
block.NGX_STREAM_MAIN_CONF
— In the stream
block.NGX_STREAM_SRV_CONF
— In a server
block
within the stream
block.NGX_STREAM_UPS_CONF
— In an upstream
block
within the stream
block.NGX_MAIL_MAIN_CONF
— In the mail
block.NGX_MAIL_SRV_CONF
— In a server
block
within the mail
block.NGX_EVENT_CONF
— In the event
block.NGX_DIRECT_CONF
— Used by modules that don't
create a hierarchy of contexts and only have one global configuration.
This configuration is passed to the handler as the conf
argument.
The set
field defines a handler that processes a directive
and stores parsed values into the corresponding configuration.
There's a number of functions that perform common conversions:ngx_conf_set_flag_slot
— Converts the literal strings
on
and off
into an
ngx_flag_t
value with values 1 or 0, respectively.ngx_conf_set_str_slot
— Stores a string as a value of the
ngx_str_t
type.ngx_conf_set_str_array_slot
— Appends a value to an array
ngx_array_t
of strings ngx_str_t
.
The array is created if it does not already exist.ngx_conf_set_keyval_slot
— Appends a key-value pair to an
array ngx_array_t
of key-value pairs
ngx_keyval_t
.
The first string becomes the key and the second the value.
The array is created if it does not already exist.ngx_conf_set_num_slot
— Converts a directive's argument
to an ngx_int_t
value.ngx_conf_set_size_slot
— Converts a
size to a size_t
value
expressed in bytes.ngx_conf_set_off_slot
— Converts an
offset to an off_t
value
expressed in bytes.ngx_conf_set_msec_slot
— Converts a
time to an ngx_msec_t
value
expressed in milliseconds.ngx_conf_set_sec_slot
— Converts a
time to a time_t
value
expressed in seconds.ngx_conf_set_bufs_slot
— Converts the two supplied arguments
into an ngx_bufs_t
object that holds the number and
size of buffers.ngx_conf_set_enum_slot
— Converts the supplied argument
into an ngx_uint_t
value.
The null-terminated array of ngx_conf_enum_t
passed in the
post
field defines the acceptable strings and corresponding
integer values.ngx_conf_set_bitmask_slot
— Converts the supplied arguments
into an ngx_uint_t
value.
The mask values for each argument are ORed producing the result.
The null-terminated array of ngx_conf_bitmask_t
passed in the
post
field defines the acceptable strings and corresponding
mask values.ngx_conf_set_path_slot
— Converts the supplied arguments to an
ngx_path_t
value and performs all required initializations.
For details, see the documentation for the
proxy_temp_path
directive.ngx_conf_set_access_slot
— Converts the supplied arguments to a file
permissions mask.
For details, see the documentation for the
proxy_store_access
directive.
The conf
field defines which configuration structure is
passed to the directive handler.
Core modules only have the global configuration and set
NGX_DIRECT_CONF
flag to access it.
Modules like HTTP, Stream or Mail create hierarchies of configurations.
For example, a module's configuration is created for server
,
location
and if
scopes.NGX_HTTP_MAIN_CONF_OFFSET
— Configuration for the
http
block.NGX_HTTP_SRV_CONF_OFFSET
— Configuration for a
server
block within the http
block.NGX_HTTP_LOC_CONF_OFFSET
— Configuration for a
location
block within the http
.NGX_STREAM_MAIN_CONF_OFFSET
— Configuration for the
stream
block.NGX_STREAM_SRV_CONF_OFFSET
— Configuration for a
server
block within the stream
block.NGX_MAIL_MAIN_CONF_OFFSET
— Configuration for the
mail
block.NGX_MAIL_SRV_CONF_OFFSET
— Configuration for a
server
block within the mail
block.
The offset
defines the offset of a field in a module
configuration structure that holds values for this particular directive.
The typical use is to employ the offsetof()
macro.
The post
field has two purposes: it may be used to define
a handler to be called after the main handler has completed, or to pass
additional data to the main handler.
In the first case, the ngx_conf_post_t
structure needs to
be initialized with a pointer to the handler, for example:static char *ngx_do_foo(ngx_conf_t *cf, void *post, void *data);
static ngx_conf_post_t ngx_foo_post = { ngx_do_foo };
The post
argument is the ngx_conf_post_t
object itself, and the data
is a pointer to the value,
converted from arguments by the main handler with the appropriate type.HTTP#
Connection#
Each client HTTP connection runs through the following stages.
ngx_event_accept()
accepts a client TCP connection.
This handler is called in response to a read notification on a listen socket.
A new ngx_connection_t
object is created at this stage
to wrap the newly accepted client socket.
Each nginx listener provides a handler to pass the new connection object to.
For HTTP connections it's ngx_http_init_connection(c)
.ngx_http_init_connection()
performs early initialization of
the HTTP connection.
At this stage an ngx_http_connection_t
object is created for
the connection and its reference is stored in the connection's
data
field.
Later it will be replaced by an HTTP request object.
A PROXY protocol parser and the SSL handshake are started at
this stage as well.ngx_http_wait_request_handler()
read event handler
is called when data is available on the client socket.
At this stage an HTTP request object ngx_http_request_t
is
created and set to the connection's data
field.ngx_http_process_request_line()
read event handler
reads the client request line.
The handler is set by ngx_http_wait_request_handler()
.
The data is read into the connection's buffer
.
The size of the buffer is initially set by the directive
client_header_buffer_size.
The entire client header is supposed to fit in the buffer.
If the initial size is not sufficient, a bigger buffer is allocated,
with the capacity set by the large_client_header_buffers
directive.ngx_http_process_request_headers()
read event handler,
is set after ngx_http_process_request_line()
to read
the client request header.ngx_http_core_run_phases()
is called when the request header
is completely read and parsed.
This function runs request phases from
NGX_HTTP_POST_READ_PHASE
to
NGX_HTTP_CONTENT_PHASE
.
The last phase is intended to generate a response and pass it along the filter
chain.
The response is not necessarily sent to the client at this phase.
It might remain buffered and be sent at the finalization stage.ngx_http_finalize_request()
is usually called when the
request has generated all the output or produced an error.
In the latter case an appropriate error page is looked up and used as the
response.
If the response is not completely sent to the client by this point, an
HTTP writer ngx_http_writer()
is activated to finish
sending outstanding data.ngx_http_finalize_connection()
is called when the complete
response has been sent to the client and the request can be destroyed.
If the client connection keepalive feature is enabled,
ngx_http_set_keepalive()
is called, which destroys the
current request and waits for the next request on the connection.
Otherwise, ngx_http_close_request()
destroys both the
request and the connection.Request#
For each client HTTP request, an ngx_http_request_t
object is
created. Some of the fields of this object are:connection
— Pointer to a ngx_connection_t
client connection object.
Several requests can reference the same connection object at the same time -
one main request and its subrequests.
After a request is deleted, a new request can be created on the same connection.ngx_connection_t
's
data
field points back to the request.
Such requests are called active, as opposed to the other requests tied to the
connection.
An active request is used to handle client connection events and is allowed to
output its response to the client.
Normally, each request becomes active at some point so that it can send its
output.ctx
— Array of HTTP module contexts.
Each module of type NGX_HTTP_MODULE
can store any value
(normally, a pointer to a structure) in the request.
The value is stored in the ctx
array at the module's
ctx_index
position.
The following macros provide a convenient way to get and set request contexts:ngx_http_get_module_ctx(r, module)
— Returns
the module
's contextngx_http_set_ctx(r, c, module)
— Sets c
as the module
's contextmain_conf
, srv_conf
,
loc_conf
— Arrays of current request
configurations.
Configurations are stored at the module's ctx_index
positions.read_event_handler
, write_event_handler
-
Read and write event handlers for the request.
Normally, both the read and write event handlers for an HTTP connection
are set to ngx_http_request_handler()
.
This function calls the read_event_handler
and
write_event_handler
handlers for the currently
active request.cache
— Request cache object for caching the
upstream response.upstream
— Request upstream object for proxying.pool
— Request pool.
The request object itself is allocated in this pool, which is destroyed when
the request is deleted.
For allocations that need to be available throughout the client connection's
lifetime, use ngx_connection_t
's pool instead.header_in
— Buffer into which the client HTTP request
header is read.headers_in
, headers_out
— Input and
output HTTP headers objects.
Both objects contain the headers
field of type
ngx_list_t
for keeping the raw list of headers.
In addition to that, specific headers are available for getting and setting as
separate fields, for example content_length_n
,
status
etc.request_body
— Client request body object.start_sec
, start_msec
— Time point when
the request was created, used for tracking request duration.method
, method_name
— Numeric and text
representation of the client HTTP request method.
Numeric values for methods are defined in
src/http/ngx_http_request.h
with the macros
NGX_HTTP_GET
, NGX_HTTP_HEAD
,
NGX_HTTP_POST
, etc.http_protocol
— Client HTTP protocol version in its
original text form ("HTTP/1.0", "HTTP/1.1" etc).http_version
— Client HTTP protocol version in
numeric form (NGX_HTTP_VERSION_10
,
NGX_HTTP_VERSION_11
, etc.).http_major
, http_minor
— Client HTTP
protocol version in numeric form split into major and minor parts.request_line
, unparsed_uri
— Request line
and URI in the original client request.uri
, args
, exten
—
URI, arguments and file extension for the current request.
The URI value here might differ from the original URI sent by the client due to
normalization.
Throughout request processing, these values can change as internal redirects
are performed.main
— Pointer to a main request object.
This object is created to process a client HTTP request, as opposed to
subrequests, which are created to perform a specific subtask within the main
request.parent
— Pointer to the parent request of a subrequest.postponed
— List of output buffers and subrequests, in the
order in which they are sent and created.
The list is used by the postpone filter to provide consistent request output
when parts of it are created by subrequests.post_subrequest
— Pointer to a handler with the context
to be called when a subrequest gets finalized.
Unused for main requests.posted_requests
— List of requests to be started or
resumed, which is done by calling the request's
write_event_handler
.
Normally, this handler holds the request main function, which at first runs
request phases and then produces the output.ngx_http_post_request(r, NULL)
call.
It is always posted to the main request posted_requests
list.
The function ngx_http_run_posted_requests(c)
runs all
requests that are posted in the main request of the passed
connection's active request.
All event handlers call ngx_http_run_posted_requests
,
which can lead to new posted requests.
Normally, it is called after invoking a request's read or write handler.phase_handler
— Index of current request phase.ncaptures
, captures
,
captures_data
— Regex captures produced
by the last regex match of the request.
A regex match can occur at a number of places during request processing:
map lookup, server lookup by SNI or HTTP Host, rewrite, proxy_redirect, etc.
Captures produced by a lookup are stored in the above mentioned fields.
The field ncaptures
holds the number of captures,
captures
holds captures boundaries and
captures_data
holds the string against which the regex was
matched and which is used to extract captures.
After each new regex match, request captures are reset to hold new values.count
— Request reference counter.
The field only makes sense for the main request.
Increasing the counter is done by simple r->main->count++
.
To decrease the counter, call
ngx_http_finalize_request(r, rc)
.
Creating a subrequest and running the request body read process both
increment the counter.subrequests
— Current subrequest nesting level.
Each subrequest inherits its parent's nesting level, decreased by one.
An error is generated if the value reaches zero.
The value for the main request is defined by the
NGX_HTTP_MAX_SUBREQUESTS
constant.uri_changes
— Number of URI changes remaining for
the request.
The total number of times a request can change its URI is limited by the
NGX_HTTP_MAX_URI_CHANGES
constant.
With each change the value is decremented until it reaches zero, at which time
an error is generated.
Rewrites and internal redirects to normal or named locations are considered URI
changes.blocked
— Counter of blocks held on the request.
While this value is non-zero, the request cannot be terminated.
Currently, this value is increased by pending AIO operations (POSIX AIO and
thread operations) and active cache lock.buffered
— Bitmask showing which modules have buffered the
output produced by the request.
A number of filters can buffer output; for example, sub_filter can buffer data
because of a partial string match, copy filter can buffer data because of the
lack of free output buffers etc.
As long as this value is non-zero, the request is not finalized
pending the flush.header_only
— Flag indicating that the output does not
require a body.
For example, this flag is used by HTTP HEAD requests.keepalive
— Flag indicating whether client connection
keepalive is supported.
The value is inferred from the HTTP version and the value of the
"Connection" header.header_sent
— Flag indicating that the output header
has already been sent by the request.internal
— Flag indicating that the current request
is internal.
To enter the internal state, a request must pass through an internal
redirect or be a subrequest.
Internal requests are allowed to enter internal locations.allow_ranges
— Flag indicating that a partial response
can be sent to the client, as requested by the HTTP Range header.subrequest_ranges
— Flag indicating that a partial response
can be sent while a subrequest is being processed.single_range
— Flag indicating that only a single continuous
range of output data can be sent to the client.
This flag is usually set when sending a stream of data, for example from a
proxied server, and the entire response is not available in one buffer.main_filter_need_in_memory
,
filter_need_in_memory
— Flags
requesting that the output be produced in memory buffers rather than in files.
This is a signal to the copy filter to read data from file buffers even if
sendfile is enabled.
The difference between the two flags is the location of the filter modules that
set them.
Filters called before the postpone filter in the filter chain set
filter_need_in_memory
, requesting that only the current
request output come in memory buffers.
Filters called later in the filter chain set
main_filter_need_in_memory
, requesting that
both the main request and all subrequests read files in memory
while sending output.filter_need_temporary
— Flag requesting that the request
output be produced in temporary buffers, but not in readonly memory buffers or
file buffers.
This is used by filters which may change output directly in the buffers where
it's sent.Configuration#
Each HTTP module can have three types of configuration.
Main configuration applies to the whole http
block.
Functions as global settings for a module.
Server configuration applies to a single server
block.
Functions as server-specific settings for a module.
Location configuration applies to a single location
,
if
or limit_except
block.
Functions as location-specific settings for a module.
The following example module has a single location setting, foo
, of type
unsigned integer.typedef struct {
ngx_uint_t foo;
} ngx_http_foo_loc_conf_t;
static ngx_http_module_t ngx_http_foo_module_ctx = {
NULL, /* preconfiguration */
NULL, /* postconfiguration */
NULL, /* create main configuration */
NULL, /* init main configuration */
NULL, /* create server configuration */
NULL, /* merge server configuration */
ngx_http_foo_create_loc_conf, /* create location configuration */
ngx_http_foo_merge_loc_conf /* merge location configuration */
};
static void *
ngx_http_foo_create_loc_conf(ngx_conf_t *cf)
{
ngx_http_foo_loc_conf_t *conf;
conf = ngx_pcalloc(cf->pool, sizeof(ngx_http_foo_loc_conf_t));
if (conf == NULL) {
return NULL;
}
conf->foo = NGX_CONF_UNSET_UINT;
return conf;
}
static char *
ngx_http_foo_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
{
ngx_http_foo_loc_conf_t *prev = parent;
ngx_http_foo_loc_conf_t *conf = child;
ngx_conf_merge_uint_value(conf->foo, prev->foo, 1);
return NGX_CONF_OK;
}
The ngx_http_foo_create_loc_conf()
function creates a new configuration structure, and
ngx_http_foo_merge_loc_conf()
merges a configuration with
configuration from a higher level.
In fact, server and location configuration do not exist only at the server and
location levels, but are also created for all levels above them.
Specifically, a server configuration is also created at the main level and
location configurations are created at the main, server, and location levels.
These configurations make it possible to specify server- and location-specific
settings at any level of an nginx configuration file.
Eventually configurations are merged down.
A number of macros like NGX_CONF_UNSET
and
NGX_CONF_UNSET_UINT
are provided
for indicating a missing setting and ignoring it while merging.
Standard nginx merge macros like ngx_conf_merge_value()
and
ngx_conf_merge_uint_value()
provide a convenient way to
merge a setting and set the default value if none of the configurations
provided an explicit value.
For a complete list of macros for different types, see
src/core/ngx_conf_file.h
.
The following macros access HTTP module configuration at configuration time;
they all take an ngx_conf_t
reference as the first argument.ngx_http_conf_get_module_main_conf(cf, module)
ngx_http_conf_get_module_srv_conf(cf, module)
ngx_http_conf_get_module_loc_conf(cf, module)
The following example installs a location content handler by setting the handler
field of the structure.static ngx_int_t ngx_http_foo_handler(ngx_http_request_t *r);
static ngx_command_t ngx_http_foo_commands[] = {
{ ngx_string("foo"),
NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
ngx_http_foo,
0,
0,
NULL },
ngx_null_command
};
static char *
ngx_http_foo(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
ngx_http_core_loc_conf_t *clcf;
clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
clcf->handler = ngx_http_foo_handler;
return NGX_CONF_OK;
}
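For completeness, a content handler like the ngx_http_foo_handler() declared above could be sketched as follows; this is a hedged illustration rather than code from the sources, and the response body and buffer handling are assumptions of the sketch:
static ngx_int_t
ngx_http_foo_handler(ngx_http_request_t *r)
{
    ngx_int_t     rc;
    ngx_buf_t    *b;
    ngx_chain_t   out;

    static u_char text[] = "foo\n";

    /* send the response header */
    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = sizeof(text) - 1;

    rc = ngx_http_send_header(r);
    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }

    /* send the response body from a read-only memory buffer */
    b = ngx_calloc_buf(r->pool);
    if (b == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    b->pos = text;
    b->last = text + sizeof(text) - 1;
    b->memory = 1;
    b->last_buf = 1;

    out.buf = b;
    out.next = NULL;

    return ngx_http_output_filter(r, &out);
}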
At runtime, the following macros are available to access a module's HTTP
configuration:
ngx_http_get_module_main_conf(r, module)
ngx_http_get_module_srv_conf(r, module)
ngx_http_get_module_loc_conf(r, module)
They receive a reference to the request object ngx_http_request_t
.
The main configuration of a request never changes.
Server configuration can change from the default after
the virtual server for the request is chosen.
Location configuration selected for processing a request can change multiple
times as a result of a rewrite operation or internal redirect.
The following example shows how to access a module's HTTP configuration at
runtime.static ngx_int_t
ngx_http_foo_handler(ngx_http_request_t *r)
{
ngx_http_foo_loc_conf_t *flcf;
flcf = ngx_http_get_module_loc_conf(r, ngx_http_foo_module);
...
}
Phases#
Each HTTP request passes through the following sequence of phases.
NGX_HTTP_POST_READ_PHASE
— First phase.
The ngx_http_realip_module
registers its handler at this phase to enable
substitution of client addresses before any other module is invoked.NGX_HTTP_SERVER_REWRITE_PHASE
— Phase where
rewrite directives defined in a server
block
(but outside a location
block) are processed.
The
ngx_http_rewrite_module
installs its handler at this phase.NGX_HTTP_FIND_CONFIG_PHASE
— Special phase
where a location is chosen based on the request URI.
Before this phase, the default location for the relevant virtual server
is assigned to the request, and any module requesting a location configuration
receives the configuration for the default server location.
This phase assigns a new location to the request.
No additional handlers can be registered at this phase.NGX_HTTP_REWRITE_PHASE
— Same as
NGX_HTTP_SERVER_REWRITE_PHASE
, but for
rewrite rules defined in the location, chosen in the previous phase.NGX_HTTP_POST_REWRITE_PHASE
— Special phase
where the request is redirected to a new location if its URI changed
during a rewrite.
This is implemented by the request going through
the NGX_HTTP_FIND_CONFIG_PHASE
again.
No additional handlers can be registered at this phase.NGX_HTTP_PREACCESS_PHASE
— A common phase for different
types of handlers, not associated with access control.
The standard nginx modules
ngx_http_limit_conn_module and
ngx_http_limit_req_module register their handlers at this phase.NGX_HTTP_ACCESS_PHASE
— Phase where it is verified
that the client is authorized to make the request.
Standard nginx modules such as
ngx_http_access_module and
ngx_http_auth_basic_module register their handlers at this phase.
By default the client must pass the authorization check of all handlers
registered at this phase for the request to continue to the next phase.
The satisfy directive
can be used to permit processing to continue if any of the phase handlers
authorizes the client.NGX_HTTP_POST_ACCESS_PHASE
— Special phase where the
satisfy any
directive is processed.
If some access phase handlers denied access and none explicitly allowed it, the
request is finalized.
No additional handlers can be registered at this phase.NGX_HTTP_PRECONTENT_PHASE
— Phase for handlers to be called
prior to generating content.
Standard modules such as
ngx_http_try_files_module and
ngx_http_mirror_module
register their handlers at this phase.NGX_HTTP_CONTENT_PHASE
— Phase where the response
is normally generated.
Multiple nginx standard modules register their handlers at this phase,
including
ngx_http_index_module or
ngx_http_static_module
.
They are called sequentially until one of them produces
the output.
It's also possible to set content handlers on a per-location basis.
If the
ngx_http_core_module's
location configuration has handler
set, it is
called as the content handler and the handlers installed at this phase
are ignored.NGX_HTTP_LOG_PHASE
— Phase where request logging
is performed.
Currently, only the
ngx_http_log_module
registers its handler
at this stage for access logging.
Log phase handlers are called at the very end of request processing, right
before freeing the request.
The following is an example of a preaccess phase handler:
static ngx_http_module_t ngx_http_foo_module_ctx = {
NULL, /* preconfiguration */
ngx_http_foo_init, /* postconfiguration */
NULL, /* create main configuration */
NULL, /* init main configuration */
NULL, /* create server configuration */
NULL, /* merge server configuration */
NULL, /* create location configuration */
NULL /* merge location configuration */
};
static ngx_int_t
ngx_http_foo_handler(ngx_http_request_t *r)
{
ngx_table_elt_t *ua;
ua = r->headers_in.user_agent;
if (ua == NULL) {
return NGX_DECLINED;
}
/* reject requests with "User-Agent: foo" */
if (ua->value.len == 3 && ngx_strncmp(ua->value.data, "foo", 3) == 0) {
return NGX_HTTP_FORBIDDEN;
}
return NGX_DECLINED;
}
static ngx_int_t
ngx_http_foo_init(ngx_conf_t *cf)
{
ngx_http_handler_pt *h;
ngx_http_core_main_conf_t *cmcf;
cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);
h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
if (h == NULL) {
return NGX_ERROR;
}
*h = ngx_http_foo_handler;
return NGX_OK;
}
Phase handlers are expected to return specific codes:
NGX_OK
— Proceed to the next phase.NGX_DECLINED
— Proceed to the next handler of the current
phase.
If the current handler is the last in the current phase,
move to the next phase.NGX_AGAIN
, NGX_DONE
— Suspend
phase handling until some future event which can be
an asynchronous I/O operation or just a delay, for example.
It is assumed that phase handling will be resumed later by calling
ngx_http_core_run_phases()
.
At the content phase, any return code other than NGX_DECLINED
is considered a finalization code.
Any return code from the location content handlers is considered a
finalization code.
At the access phase, in
satisfy any
mode,
any return code other than NGX_OK
,
NGX_DECLINED
, NGX_AGAIN
,
NGX_DONE
is considered a denial.
If no subsequent access handlers allow or deny access with a different
code, the denial code will become the finalization code.Examples#
Code style#
General rules#
file names, function and type names, and global variables have the ngx_
or a more specific prefix such as
ngx_http_
and ngx_mail_
size_t
ngx_utf8_length(u_char *p, size_t n)
{
u_char c, *last;
size_t len;
last = p + n;
for (len = 0; p < last; len++) {
c = *p;
if (c < 0x80) {
p++;
continue;
}
if (ngx_utf8_decode(&p, last - p) > 0x10ffff) {
/* invalid UTF-8 */
return n;
}
}
return len;
}
Files#
/*
* Copyright (C) Author Name
* Copyright (C) Organization, Inc.
*/
ngx_config.h
and ngx_core.h
files
are always included first, followed by one of
ngx_http.h
, ngx_stream.h
,
or ngx_mail.h
.
Then follow optional external header files:#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <libxml/parser.h>
#include <libxml/tree.h>
#include <libxslt/xslt.h>
#if (NGX_HAVE_EXSLT)
#include <libexslt/exslt.h>
#endif
#ifndef _NGX_PROCESS_CYCLE_H_INCLUDED_
#define _NGX_PROCESS_CYCLE_H_INCLUDED_
...
#endif /* _NGX_PROCESS_CYCLE_H_INCLUDED_ */
Preprocessor#
ngx_
or NGX_
(or more specific) prefix.
Macro names for constants are uppercase.
Parameterized macros and macros for initializers are lowercase.
The macro name and value are separated by at least two spaces:#define NGX_CONF_BUFFER 4096
#define ngx_buf_in_memory(b) (b->temporary || b->memory || b->mmap)
#define ngx_buf_size(b) \
(ngx_buf_in_memory(b) ? (off_t) (b->last - b->pos): \
(b->file_last - b->file_pos))
#define ngx_null_string { 0, NULL }
#if (NGX_HAVE_KQUEUE)
...
#elif ((NGX_HAVE_DEVPOLL && !(NGX_TEST_BUILD_DEVPOLL)) \
|| (NGX_HAVE_EVENTPORT && !(NGX_TEST_BUILD_EVENTPORT)))
...
#elif (NGX_HAVE_EPOLL && !(NGX_TEST_BUILD_EPOLL))
...
#elif (NGX_HAVE_POLL)
...
#else /* select */
...
#endif /* NGX_HAVE_KQUEUE */
Types#
Type names end with the "_t
" suffix.
A defined type name is separated by at least two spaces:typedef ngx_uint_t ngx_rbtree_key_t;
Structure types are defined using typedef
.
Inside structures, member types and names are aligned:typedef struct {
size_t len;
u_char *data;
} ngx_str_t;
A structure that points to itself has its name ending with "_s
".
Adjacent structure definitions are separated with two empty lines:typedef struct ngx_list_part_s ngx_list_part_t;
struct ngx_list_part_s {
void *elts;
ngx_uint_t nelts;
ngx_list_part_t *next;
};
typedef struct {
ngx_list_part_t *last;
ngx_list_part_t part;
size_t size;
ngx_uint_t nalloc;
ngx_pool_t *pool;
} ngx_list_t;
typedef struct {
ngx_uint_t hash;
ngx_str_t key;
ngx_str_t value;
u_char *lowcase_key;
} ngx_table_elt_t;
Function pointer types have names ending with "_pt
":typedef ssize_t (*ngx_recv_pt)(ngx_connection_t *c, u_char *buf, size_t size);
typedef ssize_t (*ngx_recv_chain_pt)(ngx_connection_t *c, ngx_chain_t *in,
off_t limit);
typedef ssize_t (*ngx_send_pt)(ngx_connection_t *c, u_char *buf, size_t size);
typedef ngx_chain_t *(*ngx_send_chain_pt)(ngx_connection_t *c, ngx_chain_t *in,
off_t limit);
typedef struct {
ngx_recv_pt recv;
ngx_recv_chain_pt recv_chain;
ngx_recv_pt udp_recv;
ngx_send_pt send;
ngx_send_pt udp_send;
ngx_send_chain_pt udp_send_chain;
ngx_send_chain_pt send_chain;
ngx_uint_t flags;
} ngx_os_io_t;
Enumeration types have names ending with "_e
":typedef enum {
ngx_http_fastcgi_st_version = 0,
ngx_http_fastcgi_st_type,
...
ngx_http_fastcgi_st_padding
} ngx_http_fastcgi_state_e;
Variables#
u_char *rv, *p;
ngx_conf_t *cf;
ngx_uint_t i, j, k;
unsigned int len;
struct sockaddr *sa;
const unsigned char *data;
ngx_peer_connection_t *pc;
ngx_http_core_srv_conf_t **cscfp;
ngx_http_upstream_srv_conf_t *us, *uscf;
u_char text[NGX_SOCKADDR_STRLEN];
static ngx_str_t ngx_http_memcached_key = ngx_string("memcached_key");
static ngx_uint_t mday[] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
static uint32_t ngx_crc32_table16[] = {
0x00000000, 0x1db71064, 0x3b6e20c8, 0x26d930ac,
...
0x9b64c2b0, 0x86d3d2d4, 0xa00ae278, 0xbdbdf21c
};
u_char *rv;
ngx_int_t rc;
ngx_conf_t *cf;
ngx_connection_t *c;
ngx_http_request_t *r;
ngx_peer_connection_t *pc;
ngx_http_upstream_srv_conf_t *us, *uscf;
Functions#
static char *ngx_http_block(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
static ngx_int_t ngx_http_init_phases(ngx_conf_t *cf,
ngx_http_core_main_conf_t *cmcf);
static char *ngx_http_merge_servers(ngx_conf_t *cf,
ngx_http_core_main_conf_t *cmcf, ngx_http_module_t *module,
ngx_uint_t ctx_index);
static ngx_int_t
ngx_http_find_virtual_server(ngx_http_request_t *r, u_char *host, size_t len)
{
...
}
static ngx_int_t
ngx_http_add_addresses(ngx_conf_t *cf, ngx_http_core_srv_conf_t *cscf,
ngx_http_conf_port_t *port, ngx_http_listen_opt_t *lsopt)
{
...
}
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
"http header: \"%V: %V\"",
&h->key, &h->value);
hc->busy = ngx_palloc(r->connection->pool,
cscf->large_client_header_buffers.num * sizeof(ngx_buf_t *));
The ngx_inline
macro should be used instead of
inline
:static ngx_inline void ngx_cpuid(uint32_t i, uint32_t *buf);
Expressions#
Binary operators except ".
" and "->
"
should be separated from their operands by one space.
Unary operators and subscripts are not separated from their operands by spaces:width = width * 10 + (*fmt++ - '0');
ch = (u_char) ((decoded << 4) + (ch - '0'));
r->exten.data = &r->uri.data[i + 1];
len = ngx_sock_ntop((struct sockaddr *) sin6, p, len, 1);
if (status == NGX_HTTP_MOVED_PERMANENTLY
|| status == NGX_HTTP_MOVED_TEMPORARILY
|| status == NGX_HTTP_SEE_OTHER
|| status == NGX_HTTP_TEMPORARY_REDIRECT
|| status == NGX_HTTP_PERMANENT_REDIRECT)
{
...
}
p->temp_file->warn = "an upstream response is buffered "
"to a temporary file";
hinit->hash = ngx_pcalloc(hinit->pool, sizeof(ngx_hash_wildcard_t)
+ size * sizeof(ngx_hash_elt_t *));
if (((u->conf->cache_use_stale & NGX_HTTP_UPSTREAM_FT_UPDATING)
|| c->stale_updating) && !r->background
&& u->conf->cache_background_update)
{
...
}
node = (ngx_rbtree_node_t *)
((u_char *) lr - offsetof(ngx_rbtree_node_t, color));
Pointers are explicitly compared to NULL
(not 0
):if (ptr != NULL) {
...
}
Conditionals and Loops#
The "if
" keyword is separated from the condition by
one space.
Opening brace is located on the same line, or on a
dedicated line if the condition takes several lines.
Closing brace is located on a dedicated line, optionally followed
by "else if
/ else
".
Usually, there is an empty line before the
"else if
/ else
" part:if (node->left == sentinel) {
temp = node->right;
subst = node;
} else if (node->right == sentinel) {
temp = node->left;
subst = node;
} else {
subst = ngx_rbtree_min(node->right, sentinel);
if (subst->left != sentinel) {
temp = subst->left;
} else {
temp = subst->right;
}
}
Similar formatting rules apply to "do
"
and "while
" loops:while (p < last && *p == ' ') {
p++;
}
do {
ctx->node = rn;
ctx = ctx->next;
} while (ctx);
The "switch
" keyword is separated from the condition by
one space.
Opening brace is located on the same line.
Closing brace is located on a dedicated line.
The "case
" keywords are lined up with
"switch
":switch (ch) {
case '!':
looked = 2;
state = ssi_comment0_state;
break;
case '<':
copy_end = p;
break;
default:
copy_end = p;
looked = 0;
state = ssi_start_state;
break;
}
for
" loops are formatted like this:for (i = 0; i < ccf->env.nelts; i++) {
...
}
for (q = ngx_queue_head(locations);
q != ngx_queue_sentinel(locations);
q = ngx_queue_next(q))
{
...
}
If some part of the "for
" statement is omitted,
this is indicated by the "/* void */
" comment:for (i = 0; /* void */ ; i++) {
...
}
A loop with an empty body is also indicated by the "/* void */
" comment which may be put on the same line:for (cl = *busy; cl->next; cl = cl->next) { /* void */ }
An infinite loop looks like this:
for ( ;; ) {
...
}
Labels#
if (i == 0) {
u->err = "host not found";
goto failed;
}
u->addrs = ngx_pcalloc(pool, i * sizeof(ngx_addr_t));
if (u->addrs == NULL) {
goto failed;
}
u->naddrs = i;
...
return NGX_OK;
failed:
freeaddrinfo(res);
return NGX_ERROR;
Debugging memory issues#
To debug memory issues, you can use the AddressSanitizer supported by recent
gcc
and clang
compilers; to enable it,
use the -fsanitize=address
compiler and linker option.
When building nginx, this can be done by adding the option to
--with-cc-opt
and --with-ld-opt
parameters of the configure
script.
To also cover nginx pool allocations, set the NGX_DEBUG_PALLOC
macro to 1
.
In this case, allocations are passed directly to the system allocator giving it
full control over the buffer boundaries.auto/configure --with-cc-opt='-fsanitize=address -DNGX_DEBUG_PALLOC=1'
--with-ld-opt=-fsanitize=address
Common Pitfalls#
Writing a C module#
C Strings#
Avoid using the standard C string functions such as strlen()
or strstr()
.
Instead, use the nginx counterparts
that accept either an ngx_str_t
or a pointer to the data and a length.
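For example, a length-aware comparison might look like this (a hedged sketch; the value variable is assumed to be an ngx_str_t defined elsewhere in the module):
static ngx_str_t  pattern = ngx_string("gzip");

/* compare the length first, then the bytes, instead of calling strcmp() */
if (value.len == pattern.len
    && ngx_strncmp(value.data, pattern.data, pattern.len) == 0)
{
    /* the strings match */
}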
However, there is a case when ngx_str_t
holds
a pointer to a zero-terminated string: strings that come as a result of
configuration file parsing are zero-terminated.Global Variables#
Manual Memory Management#
Threads#
If background processing is needed, instead of creating threads, add a timer in the init_process
module handler and perform the required actions in its timer handler.
Internally nginx makes use of threads to
boost IO-related operations, but this is a special case with a lot
of limitations.Blocking Libraries#
HTTP Requests to External Services#
Comments#
"
//
" comments are not usedtext is written in English, American spelling is preferred
multi-line comments are formatted like this:
/* find the server configuration for the address:port */