# ANIC - Angie Ingress Controller
Today we will talk about the Angie Ingress Controller (ANIC), a solution from "Web-Server" that simplifies traffic management in Kubernetes.
Kubernetes is a popular container orchestration system. One of its tasks is routing external requests to the applications it runs. This is accomplished with two components: Ingress and the Ingress Controller. An Ingress is an entry point for requests that defines how traffic is routed and load balanced; for the Ingress to take effect, however, an Ingress Controller is required.
The Ingress Controller manages proxying and load balancing according to the specified settings. When the settings change, the Ingress Controller receives a signal and reconfigures the underlying proxy based on the new data. In this way, external requests to applications running in the Kubernetes cluster can be managed simply by modifying the settings. An Ingress can be configured to bind external URLs to internal services, balance traffic, and terminate SSL/TLS connections. Typically, a reverse proxy server serves as the Ingress data plane.
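As an illustration, a minimal Ingress resource binding an external hostname to an internal service with TLS might look like this (the host, service name, Secret name, and `ingressClassName` below are hypothetical placeholders; the class name that ANIC actually registers may differ, so check its documentation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: angie        # hypothetical; use the class your controller registers
  tls:
    - hosts:
        - app.example.com        # placeholder hostname
      secretName: app-tls        # TLS certificate stored in a Kubernetes Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # internal service receiving the traffic
                port:
                  number: 80
```

When such a resource is created or changed, the Ingress Controller picks up the new settings and reconfigures the proxy accordingly.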
Angie Ingress Controller (ANIC) supports two types of installations: DaemonSet and Deployment.
Use a Deployment if you want to run a chosen number of Ingress Controller replicas and scale that number dynamically. Use a DaemonSet if you need one instance of the Ingress Controller on every node in the cluster.
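A Deployment-based installation can be sketched roughly as follows (the namespace, labels, and image reference are hypothetical placeholders for illustration, not the actual ANIC manifest; scaling is then a matter of changing `replicas`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anic
  namespace: anic                # placeholder namespace
spec:
  replicas: 2                    # scale the controller up or down here
  selector:
    matchLabels:
      app: anic
  template:
    metadata:
      labels:
        app: anic
    spec:
      containers:
        - name: anic
          image: registry.example.com/anic:latest   # hypothetical image reference
          ports:
            - containerPort: 80
            - containerPort: 443
```

A DaemonSet manifest is nearly identical but uses `kind: DaemonSet` and has no `replicas` field, since one Pod is scheduled per node.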
ANIC uses the Angie PRO web server, a high-performance server, as its Ingress data plane. Angie PRO effectively addresses the core tasks assigned to an Ingress: a wide range of settings allows for flexible proxying, and upstream group settings can be managed dynamically via a REST interface. Additionally, Angie PRO acts as an L4-L7 load balancer.
Among the standard features of Angie PRO:

- Creation of virtual servers, with numerous flexible settings.
- HTTP/2 support, enabling HTTP/2 connections on the listening socket.
- Session persistence (sticky sessions), ensuring that all requests within a client session are tied to a single server in the upstream group.
- Traffic splitting, enabling A/B testing and canary deployments.
- Extensive statistics and real-time monitoring via a RESTful interface: basic server information in JSON format, along with statistics on client connections, shared memory zones, DNS queries, HTTP requests, HTTP response caches, stream module sessions, http_upstream, and the zones of other modules.
- Dynamic management of upstream group settings via a REST interface.
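To give a sense of the last two points, here is a rough sketch of an Angie configuration that exposes the JSON status API for an upstream group (directive names follow the Angie documentation as I understand it; verify the exact syntax and endpoint paths against your Angie PRO version):

```nginx
# Sketch: expose Angie's JSON statistics for a proxied upstream group.
http {
    upstream backend {
        zone backend 1m;              # shared memory zone: required for runtime stats
        server app-1.internal:8080;   # placeholder backend addresses
        server app-2.internal:8080;
    }

    server {
        listen 80;

        # JSON statistics endpoint, e.g. GET /status/http/upstreams/
        location /status/ {
            api /status/;
        }

        location / {
            proxy_pass http://backend;
        }
    }
}
```

With this in place, a monitoring system can poll the `/status/` endpoint to read connection, request, and upstream statistics in JSON without reloading the server.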
For more details, see our website.