Azure Application Gateway
Azure Application Gateway is an application delivery service that operates as a layer 7 load balancer. It is therefore worth understanding the difference between a traditional load balancer and an application layer load balancer. A traditional load balancer operates at layer 4 (the transport layer) and inspects TCP/UDP packets only to extract information such as the IP address and port. Application layer load balancers are comparatively new and derive information from the application layer, such as the URL path, message content, and cookie details. Because they can obtain more information from each request, application layer load balancers can perform intelligent routing.
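As a rough illustration of the extra information a layer 7 load balancer can act on, the sketch below routes a request to a backend pool based on its URL path and a cookie, something a layer 4 balancer (which only sees IP addresses and ports) cannot do. The pool names, paths, and cookie are hypothetical examples, not Application Gateway configuration.

```python
# Minimal sketch of layer 7 (path/cookie based) routing vs layer 4 (port only).
# Pool names, path prefixes, and the "beta" cookie are hypothetical examples.

PATH_RULES = {
    "/images": "image-pool",
    "/api":    "api-pool",
}
DEFAULT_POOL = "web-pool"

def route_layer7(path: str, cookies: dict) -> str:
    """Pick a backend pool using application layer data (URL path, cookies)."""
    if cookies.get("beta") == "true":           # cookie-based routing
        return "beta-pool"
    for prefix, pool in PATH_RULES.items():     # URL path based routing
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

def route_layer4(dest_port: int) -> str:
    """A layer 4 balancer only sees IPs and ports, so at best it can split by port."""
    return "web-pool" if dest_port == 443 else "other-pool"

print(route_layer7("/api/orders", {}))                # api-pool
print(route_layer7("/index.html", {"beta": "true"}))  # beta-pool
print(route_layer4(443))                              # web-pool
```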
Azure Application Gateway comes in two SKUs, V1 and V2. The V2 SKU has the following characteristics:
- Autoscaling - enables the gateway to scale up or down based on traffic load
- Zone redundancy - enables your application gateway to survive zonal failures
- Static VIP - ensures that the IP address will not change during the lifetime of the application gateway
- Header rewrite - allows HTTP request and response headers to be updated, enabling scenarios such as changing cache control, securing cookies, and HSTS (HTTP Strict Transport Security) support; see the sketch after this list
- Improved performance
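To make the header rewrite capability concrete, the sketch below shows the kind of transformation a rewrite rule performs: adding an HSTS header, tightening cache control, and marking response cookies as secure. It is a plain Python illustration of the effect only, not Application Gateway's rewrite rule syntax.

```python
# Conceptual illustration of what a header rewrite rule does to a response.
# This mimics the effect only; real rules are configured on the gateway itself.

def rewrite_response_headers(headers: dict) -> dict:
    rewritten = dict(headers)

    # Add HSTS so browsers always use HTTPS for this site.
    rewritten["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"

    # Tighten caching behaviour.
    rewritten["Cache-Control"] = "no-store"

    # Mark any Set-Cookie header as Secure and HttpOnly.
    if "Set-Cookie" in rewritten and "Secure" not in rewritten["Set-Cookie"]:
        rewritten["Set-Cookie"] += "; Secure; HttpOnly"

    return rewritten

original = {"Set-Cookie": "session=abc123; Path=/", "Cache-Control": "public"}
print(rewrite_response_headers(original))
```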
Azure Application Gateway components
Front-end IP
This refers to the IP address associated with the application gateway. The IP address configuration varies based on the SKU:
- V1 SKU - static or dynamic internal IP address / dynamic public IP address
- V2 SKU - static internal IP address / static public IP address
Listeners
A listener listens for incoming requests (on a specific port and protocol, and optionally a host name) and triggers the associated routing rule. Multiple listeners can be configured, even on the same protocol; the supported protocols are HTTP, HTTPS, HTTP/2, and WebSocket. However, communication with the backend server pools is always over HTTP/1.1. The difference between the two listener types is illustrated in the sketch after the list below.
Two Types of Listeners
- Basic
- Multi-site
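A basic listener accepts requests based on port and protocol alone, while a multi-site listener additionally matches the host name of the request, so several sites can share one gateway. The matching sketch below illustrates the distinction; the host names, ports, and rule names are hypothetical.

```python
# Sketch of listener matching: basic listeners match port/protocol only,
# multi-site listeners additionally match the Host header. Values are examples.

LISTENERS = [
    {"type": "multi-site", "protocol": "https", "port": 443,
     "host": "shop.contoso.com", "rule": "shop-rule"},
    {"type": "multi-site", "protocol": "https", "port": 443,
     "host": "blog.contoso.com", "rule": "blog-rule"},
    {"type": "basic", "protocol": "https", "port": 443, "rule": "default-rule"},
]

def match_listener(protocol: str, port: int, host: str) -> str:
    for listener in LISTENERS:
        if listener["protocol"] != protocol or listener["port"] != port:
            continue
        if listener["type"] == "multi-site" and listener["host"] != host:
            continue
        return listener["rule"]     # first matching listener triggers its rule
    raise LookupError("no listener configured for this request")

print(match_listener("https", 443, "blog.contoso.com"))   # blog-rule
print(match_listener("https", 443, "other.example.com"))  # default-rule (basic)
```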
HTTP settings
HTTP settings are configurations (such as the backend port, protocol, cookie-based session affinity, and request timeout) that you associate with one or many request routing rules.
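As a rough mental model, an HTTP settings object is a bag of forwarding parameters that a routing rule applies when it sends a request to the backend. The sketch below uses hypothetical values and shows one settings object shared by two rules.

```python
# Hypothetical HTTP settings applied when forwarding a request to the backend.
HTTP_SETTINGS = {
    "backend_port": 443,
    "backend_protocol": "https",
    "cookie_based_affinity": True,
    "request_timeout_seconds": 20,
}

ROUTING_RULES = [
    {"listener": "shop-listener", "backend_pool": "shop-pool",
     "http_settings": HTTP_SETTINGS},
    {"listener": "blog-listener", "backend_pool": "blog-pool",
     "http_settings": HTTP_SETTINGS},   # one settings object, many rules
]

def forward_params(listener_name: str) -> dict:
    """Return the forwarding parameters for the rule triggered by a listener."""
    rule = next(r for r in ROUTING_RULES if r["listener"] == listener_name)
    return rule["http_settings"]

print(forward_params("blog-listener"))
```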
Backend pools
A backend pool routes requests to the backend servers that serve them. A backend pool can contain NICs, virtual machine scale sets, FQDNs, and public or internal IP addresses.
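The sketch below models a backend pool as a list of members of different kinds (IP addresses and an FQDN) and picks the next healthy member in round-robin fashion. The addresses are made up, and the real gateway's distribution algorithm is more involved.

```python
import itertools

# Hypothetical backend pool members; a real pool can mix IPs, FQDNs, and NICs.
BACKEND_POOL = [
    {"address": "10.0.1.4",            "healthy": True},
    {"address": "10.0.1.5",            "healthy": False},
    {"address": "app1.internal.local", "healthy": True},
]

_rotation = itertools.cycle(range(len(BACKEND_POOL)))

def next_backend() -> str:
    """Return the next healthy member, skipping unhealthy ones (simple round robin)."""
    for _ in range(len(BACKEND_POOL)):
        member = BACKEND_POOL[next(_rotation)]
        if member["healthy"]:
            return member["address"]
    raise RuntimeError("no healthy backends available")

for _ in range(4):
    print(next_backend())   # alternates between the two healthy members
```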
Health probes
Application Gateway monitors the health of all resources in its backend pools by sending periodic HTTP GET requests to each backend server. The probe interval is configurable, and the default is 30 seconds. In addition, custom probes can be configured with their own host, URL path, port, protocol, and timeout. When a probe request receives no valid response, the probe is considered failed; the unhealthy server is automatically removed from the backend pool and is added back once it becomes healthy again. A conceptual sketch of this loop follows.
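This sketch imitates the probe loop described above. The backend addresses, the /healthz path, and the status-code check are assumptions for illustration, not the gateway's actual implementation.

```python
import time
import urllib.request

# Hypothetical probe settings mirroring the defaults described above.
PROBE_INTERVAL_SECONDS = 30
PROBE_TIMEOUT_SECONDS = 30
PROBE_PATH = "/healthz"          # assumed health endpoint on each backend

backends = {"http://10.0.1.4": True, "http://10.0.1.5": True}  # address -> healthy?

def probe_once() -> None:
    """Send an HTTP GET to each backend and update its health status."""
    for address in backends:
        try:
            with urllib.request.urlopen(address + PROBE_PATH,
                                        timeout=PROBE_TIMEOUT_SECONDS) as resp:
                backends[address] = 200 <= resp.status < 400
        except OSError:
            backends[address] = False   # no valid response: mark unhealthy

def probe_loop() -> None:
    while True:
        probe_once()
        healthy = [addr for addr, ok in backends.items() if ok]
        print("healthy backends:", healthy)   # only these receive traffic
        time.sleep(PROBE_INTERVAL_SECONDS)
```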
TLS termination
Application Gateway supports terminating TLS at the gateway, which brings the following benefits (a conceptual sketch follows the list):
- Improved performance - offloading the expensive TLS handshake and decryption work to the gateway lets the application servers handle many more connections without that overhead
- Better utilization of the backend servers - SSL/TLS processing is very CPU intensive; removing this work from the backend servers allows them to focus on what they are most efficient at, delivering content
- Intelligent routing - by decrypting the traffic, the application gateway has access to the request content, such as headers, URI, and so on, and can use this data to route requests
- Certificate management - certificates only need to be purchased and installed on the application gateway, not on every backend server, which saves both time and money
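The following sketch shows the essence of TLS termination: the front end completes the TLS handshake and decrypts the request, then forwards it to a backend over plain HTTP. The certificate paths, port, and backend address are placeholders, and a real gateway does far more (connection pooling, HTTP parsing, and so on).

```python
import socket
import ssl

# Placeholder certificate files and backend address; adjust for your environment.
CERT_FILE, KEY_FILE = "gateway.crt", "gateway.key"
BACKEND = ("10.0.1.4", 80)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, _ = tls_listener.accept()        # TLS handshake terminates here
        request = conn.recv(65536)             # request is now plain HTTP bytes
        with socket.create_connection(BACKEND) as backend:
            backend.sendall(request)           # forwarded unencrypted to the backend
            conn.sendall(backend.recv(65536))  # response returned to the client over TLS
        conn.close()
```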
End-to-end TLS encryption
- You may not want unencrypted communication to the backend servers, for example because of security requirements, compliance requirements, or an application that only accepts secure connections
- With end-to-end TLS, the gateway still terminates the TLS session and decrypts the user traffic
- It then initiates a new TLS connection to the backend server and re-encrypts the data using the backend server's public key certificate before transmitting the request to the backend (see the sketch below)
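Extending the earlier termination sketch, end-to-end TLS re-encrypts the traffic before it leaves the gateway: the connection to the backend is wrapped in a client-side TLS context that trusts the backend's certificate. The host name and certificate file are again placeholders.

```python
import socket
import ssl

# Placeholder values: the backend's CA certificate and its TLS endpoint.
BACKEND_HOST, BACKEND_PORT = "app1.internal.local", 443
BACKEND_CA = "backend-ca.pem"

# Client-side context used by the gateway when talking to the backend.
backend_context = ssl.create_default_context(cafile=BACKEND_CA)

def forward_over_tls(decrypted_request: bytes) -> bytes:
    """Re-encrypt an already-decrypted request and send it to the backend over TLS."""
    with socket.create_connection((BACKEND_HOST, BACKEND_PORT)) as raw:
        with backend_context.wrap_socket(raw, server_hostname=BACKEND_HOST) as tls:
            tls.sendall(decrypted_request)   # second, independent TLS session
            return tls.recv(65536)
```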
Configuration with Azure Web Application Firewall
Web Application Firewall (WAF) provides centralized protection of your enterprise applications from different types of attacks. Azure WAF can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (in public preview). It is based on the OWASP Core Rule Set 3.1, 3.0, or 2.2.9 and the Microsoft bot manager rule set. The bot mitigation rule set's list of known bad IP addresses is updated multiple times per day from the Microsoft Threat Intelligence feed.
Azure Application Gateway vs Azure API Management
Both Application Gateway and API Management can be used to implement the API gateway pattern, in which a gateway sits between the microservices and the client. However, the two services are intended for distinct purposes. Application Gateway provides advanced security through its Web Application Firewall integration, load balancing, and application layer routing, whereas API Management focuses on exposing APIs to external parties while supporting versioning, rate limiting, API security, developer portal configuration, and so on.
Azure Application Gateway vs Azure Front Door
While both services are layer 7 application routing/load-balancing solutions, Azure Front Door operates as a global service: it can manage your traffic regardless of region. Application Gateway, on the other hand, distributes traffic only within a single region. A common combination is therefore a single Azure Front Door instance in front of an Application Gateway in each region. This entirely depends on your design, but it is advisable to weigh the cost against the benefit.
Conclusion
Azure Application Gateway offers a comprehensive toolkit for applying industry standard practices to your application. Since both API Management and Application Gateway implement the API gateway pattern, either could be used to manage your APIs; however, you need to choose the right tool for the job.