Azure Networking Blog

https://techcommunity.microsoft.com/t5/azure-networking-blog/bg-p/AzureNetworkingBlog

Azure CNI Powered by Cilium for Azure Kubernetes Service (AKS)

Production deployments of Kubernetes continue to soar as customers increasingly containerize their applications. With the growth in application modernization, customers are looking to rapidly scale their Kubernetes deployments by building very large clusters or adopting a multi-cluster strategy. They expect instantaneous connectivity when spinning up and scaling out application instances. Specialized applications, such as gaming apps, expect superior data path throughput for a rich application experience. The increased east-west traffic flows necessitate fine-grained monitoring and tracing for troubleshooting. Network security is another important aspect: customers wish to implement common L4 and L7 security controls for their cloud-native applications and need solutions that are tailored to Kubernetes and containers.

 

These requirements call for a robust platform that scales seamlessly to provide networking for millions of containers, offers a rich set of security controls, and exposes detailed traffic metrics and logs for network visibility, all without compromising performance.

 

Azure Container Network Interface (CNI) Powered by Cilium is the next-generation networking platform that meets all these requirements by combining two powerful technologies: Azure CNI, which provides a scalable and flexible Pod networking control plane integrated with the Azure Virtual Network stack, and the Cilium open-source project, a pioneer in eBPF-powered data planes for networking, security, and observability in Kubernetes.

 

We are proud to announce the availability of Azure CNI Powered by Cilium natively in Azure Kubernetes Service to provide scalable and high-performance Pod networking and Kubernetes Network Policies. 

 

About Cilium eBPF 

eBPF is a revolutionary technology that allows sandboxed programs to be inserted into the Linux kernel, greatly enhancing the traffic-processing capabilities of the operating system at runtime. eBPF programs today enable a rich set of networking, security, observability, and application-tracing use cases at very high performance.
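
To make this concrete, here is a minimal sketch of loading an eBPF program from user space with the open-source cilium/ebpf Go library. The program itself is a trivial socket filter written directly as eBPF instructions and is purely illustrative; running the sketch requires Linux and sufficient privileges (root or CAP_BPF), and attaching the program to a socket is a separate step that is not shown.

package main

import (
    "fmt"
    "log"

    "github.com/cilium/ebpf"
    "github.com/cilium/ebpf/asm"
)

func main() {
    // A trivial socket-filter program: set the return-value register R0 and exit.
    // For a socket filter, the return value is the number of packet bytes to keep
    // (returning 0 would drop the packet).
    spec := &ebpf.ProgramSpec{
        Name: "accept_all",
        Type: ebpf.SocketFilter,
        Instructions: asm.Instructions{
            asm.Mov.Imm(asm.R0, 0xffff), // keep up to 64 KiB of each packet
            asm.Return(),
        },
        License: "MIT",
    }

    // Loading runs the program through the kernel verifier (the "sandbox")
    // and returns a handle to the now-resident program.
    prog, err := ebpf.NewProgram(spec)
    if err != nil {
        log.Fatalf("loading eBPF program: %v", err)
    }
    defer prog.Close()

    fmt.Printf("loaded eBPF program with fd %d\n", prog.FD())
}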

 

Cilium offers a next-generation dataplane for Kubernetes that builds on top of eBPF technology to address these use cases for cloud-native workloads. Cilium provides rich functionality such as a high-performance data path for Kubernetes services, efficient load-balancing, extensive network security features, and rich monitoring. Besides traditional Kubernetes network-level security, Cilium also enables security based on application protocol context, DNS FQDNs, and service identity.

 

About Azure CNI 

Azure CNI provides network provisioning for Kubernetes Pods in AKS. It operates in one of the following two modes, configured at the time of AKS cluster creation.

 

VNet Mode: In VNet mode, Azure CNI assigns Pod IPs from a VNet subnet, making Pods first-class citizens in the VNet. Pods have direct connectivity to each other and to other resources in the VNet and on-premises. You can choose to dynamically assign IP addresses to Pods from a dedicated Pod subnet that is separate from the cluster subnet. This provides better utilization of VNet IP space and the ability to configure separate VNet policies for Pods.

 

Overlay Mode: In Overlay mode, only the cluster nodes are deployed into a VNet, whereas Pods are assigned IP addresses from a private address space that is logically separate from the VNet hosting the nodes. This mode significantly reduces the number of VNet IP addresses consumed by AKS clusters, enabling virtually limitless cluster scale. The Pod address space can be reused across multiple clusters in the same VNet, greatly simplifying IP address planning. Overlay addressing does not require provisioning custom routes or using encapsulation for Pod-to-Pod connectivity, offering data path performance on par with connectivity between VMs in a VNet.
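
As a rough illustration of how the two modes map to cluster configuration, the sketch below builds the relevant objects with the Azure SDK for Go (armcontainerservice). The field and constant names follow recent SDK releases and the subnet IDs and CIDR are placeholders, so treat the exact names as assumptions to verify against the SDK version and the AKS documentation you are working from.

package main

import (
    "encoding/json"
    "fmt"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
    armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

func main() {
    // Overlay mode: only nodes get VNet IPs; Pods are addressed from a private
    // CIDR that can be reused by other clusters in the same VNet.
    overlay := armcontainerservice.NetworkProfile{
        NetworkPlugin:     to.Ptr(armcontainerservice.NetworkPluginAzure),
        NetworkPluginMode: to.Ptr(armcontainerservice.NetworkPluginModeOverlay),
        PodCidr:           to.Ptr("192.168.0.0/16"), // illustrative overlay range
    }

    // VNet mode with dynamic Pod IP allocation: node IPs come from one subnet and
    // Pod IPs from a separate Pod subnet, configured per agent pool.
    vnetPool := armcontainerservice.ManagedClusterAgentPoolProfile{
        Name:         to.Ptr("nodepool1"),
        VnetSubnetID: to.Ptr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/nodesubnet"),
        PodSubnetID:  to.Ptr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/podsubnet"),
    }

    // Print both configurations so the sketch can be run standalone.
    for _, cfg := range []any{&overlay, &vnetPool} {
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }
}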

 

What does Azure CNI Powered by Cilium provide?

Azure CNI powered by Cilium integrates the scalable and flexible Azure IPAM control plane with the robust dataplane offered by Cilium OSS to create a modern container networking stack that meets the demands of cloud native workloads. 

 

[Figure: Azure CNI Powered by Cilium]

 

Azure CNI Powered by Cilium offers the following benefits today and provides the ideal platform for future innovations. 

 

Scalable and Performant Networking

The Cilium-powered CNI supports both VNet and Overlay modes. Cilium's socket-based load-balancing for Kubernetes services replaces kube-proxy's inefficient iptables-based load-balancing, providing superior data path performance on par with connecting directly to a service's backend Pod. Performance remains deterministic irrespective of the number of services deployed in the cluster.

 

Kubernetes Network Policies

The Cilium-powered CNI comes with built-in support for standard Kubernetes Network Policies, so there is no need to install a separate policy engine on top. It offers significant improvements in scale and performance by eliminating the use of iptables for network filtering.
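
To illustrate what built-in support means in practice, the sketch below constructs a standard Kubernetes NetworkPolicy using the upstream Go API types (k8s.io/api/networking/v1) and prints it as YAML; the namespace, names, and labels are made up for the example. Under Azure CNI Powered by Cilium, a policy like this is enforced by the Cilium dataplane without installing any additional policy engine.

package main

import (
    "fmt"
    "log"

    corev1 "k8s.io/api/core/v1"
    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "sigs.k8s.io/yaml"
)

func main() {
    tcp := corev1.ProtocolTCP
    port := intstr.FromInt(8080)

    // Allow Pods labelled app=frontend to reach Pods labelled app=backend on TCP/8080;
    // once the policy selects the backend Pods, all other ingress to them is denied.
    policy := networkingv1.NetworkPolicy{
        TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
        ObjectMeta: metav1.ObjectMeta{Name: "allow-frontend-to-backend", Namespace: "demo"},
        Spec: networkingv1.NetworkPolicySpec{
            PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "backend"}},
            PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
            Ingress: []networkingv1.NetworkPolicyIngressRule{{
                From: []networkingv1.NetworkPolicyPeer{{
                    PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "frontend"}},
                }},
                Ports: []networkingv1.NetworkPolicyPort{{Protocol: &tcp, Port: &port}},
            }},
        },
    }

    out, err := yaml.Marshal(policy)
    if err != nil {
        log.Fatalf("marshalling policy: %v", err)
    }
    fmt.Print(string(out))
}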

 

Using Azure CNI powered by Cilium

Azure CNI Powered by Cilium is currently in preview in AKS. For detailed usage instructions, refer to https://aka.ms/aks/cillium-dataplane.
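
For orientation only, the sketch below shows roughly where the dataplane selection lives in a cluster's network profile when expressed with the Azure SDK for Go. The NetworkDataplane and NetworkPolicy constants appear in recent armcontainerservice releases, but the preview enablement flow may differ, so treat these names as assumptions and follow the linked instructions as the authoritative steps.

package main

import (
    "encoding/json"
    "fmt"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
    armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

func main() {
    // Assumed field/constant names from recent SDK versions: select the Cilium
    // dataplane (and Cilium-enforced network policy) on top of the Azure CNI plugin.
    profile := armcontainerservice.NetworkProfile{
        NetworkPlugin:    to.Ptr(armcontainerservice.NetworkPluginAzure),
        NetworkDataplane: to.Ptr(armcontainerservice.NetworkDataplaneCilium),
        NetworkPolicy:    to.Ptr(armcontainerservice.NetworkPolicyCilium),
    }

    b, _ := json.MarshalIndent(&profile, "", "  ")
    fmt.Println(string(b))
}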

 
