
Securing Network Egress in Azure Container Apps


Introduction

Since its inception nearly two years ago, Azure Container Apps (ACA) has added significant features that make it a relevant container hosting platform. Built atop Kubernetes, Azure Container Apps is a fully managed Platform-as-a-Service that lets Azure container workloads focus on the business value they provide rather than being mired in infrastructure management. Many of my colleagues and I believe that Azure Container Apps has grown into a viable alternative to other, more specialized hosting platforms in Azure for many container workloads. No longer must customers learn the ins and outs of Kubernetes, cluster management, Kubernetes versions, etc. The gap to Azure Kubernetes Service (AKS) has been reduced significantly, and while the two products are not intended to compete, comparisons of the two are commonplace.

 

A barrier to entry, however, has been Azure Container Apps' inability to restrict outbound traffic from containers, even when set up in a virtual network (VNet). Given the lack of control over a container's connectivity to the internet, customers understandably passed on adoption.

 

This limitation was removed in August 2023 with the announcement of general availability of user defined routes (UDR). This impactful feature addition motivated the following article.

 

Acknowledgement

This article would not have been possible without the diligent research and proof of concept by Steve Griffith. In addition to this article, I encourage you to also explore his aca-egress-lockdown repository on GitHub.

 

Pieces of the Puzzle

In addition to an Azure Container App Environment with a container app, we need a virtual network, a route table for our user defined routes, an Azure Firewall instance, and a Log Analytics Workspace. We will set up everything sequentially using the Azure CLI. I encourage you to use Azure Cloud Shell for maximum compatibility, as it automatically uses the latest Azure CLI components.

 

Laying the Groundwork

Since this GA release is very new (August 30th), you may need to upgrade the Azure CLI.

 

az upgrade
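Depending on your environment, you may also need to install or update the containerapp and azure-firewall CLI extensions. A small, optional step (extension names are current as of this writing):

az extension add --upgrade --name containerapp
az extension add --upgrade --name azure-firewall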

 

We start by setting our variables, followed by the creation of the resource group. Naming is generally based on the Cloud Adoption Framework.

 

ResourceGroup=rg-egress-lockdown
Location=eastus
VnetName=vnet-aca
FirewallName=afw
ContainerAppName=ca-egress-test

az group create -g $ResourceGroup -l $Location

 

Next, we create the VNet with two subnets for the Azure Firewall and one subnet for the Azure Container Apps Environment. The dedicated Azure Container Apps tier can use subnets as small as a /27 CIDR.

 

az network vnet create \
  -g $ResourceGroup \
  -n $VnetName \
  --address-prefix 10.0.0.0/16 \
  --subnet-name AzureFirewallSubnet \
  --subnet-prefix 10.0.0.0/24

 

 

az network vnet subnet create \
  -g $ResourceGroup \
  -n AzureFirewallManagementSubnet \
  --vnet-name $VnetName \
  --address-prefix 10.0.1.0/24

 

The Azure Container Apps subnet must be delegated to the Azure Container Apps service.

 

az network vnet subnet create \
  -g $ResourceGroup \
  -n AzureContainerAppSubnet \
  --vnet-name $VnetName \
  --address-prefix 10.0.2.0/27 \
  --delegations 'Microsoft.App/environments'

 

We need to retrieve the Azure Container App subnet resource ID for later use.

 

PrivateAcaEnvironmentSubnetId=$(az network vnet subnet show -g $ResourceGroup --vnet-name $VnetName -n AzureContainerAppSubnet --query id -o tsv)

 

 

Creating the Azure Firewall

Note that while we use the Basic tier, Azure Firewall still consumes a considerable amount of resources and incurs cost. Please be sure to clean up your resources at the end. Setting up a Standard tier is simpler but also considerably more costly.
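When you are done, deleting the resource group removes everything created in this tutorial in one step. A minimal cleanup sketch:

# Deletes the resource group and all resources within it; --no-wait returns immediately.
az group delete -n $ResourceGroup --yes --no-wait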

 

Two public IP addresses are needed for the Basic Azure Firewall tier. 

 

az network public-ip create \
  -g $ResourceGroup \
  -n pip-firewall \
  --sku "Standard"

az network public-ip create \
  -g $ResourceGroup \
  -n pip-firewall-management \
  --sku "Standard"

 

Create the Azure Firewall.

 

az network firewall create \
  -g $ResourceGroup \
  -n $FirewallName \
  --enable-dns-proxy true \
  --tier Basic

 

The public IP addresses need to be configured for the Azure Firewall.

 

az network firewall ip-config create \
  -g $ResourceGroup \
  -f $FirewallName \
  -n firewallconfig \
  --public-ip-address pip-firewall \
  --vnet-name $VnetName \
  --m-name myManagementIpConfig \
  --m-public-ip-address pip-firewall-management \
  --m-vnet-name $VnetName

 

We need to retrieve the Azure Firewall ID for later use. Since we only have one Azure Firewall, we simply select the first firewall in the list to get its ID.

 

FirewallId=$(az network firewall list -g $ResourceGroup --query "[0].id" -o tsv)

 

Azure Firewall blocks all traffic by default. In order for our container images to be pulled and our egress to be tested later, we must add appropriate application rules. Here, we are grouping Microsoft service fully qualified domain names (FQDNs) for container registries into a Microsoft-specific allowed collection.

 

az network firewall application-rule create \
  -g $ResourceGroup \
  -f $FirewallName \
  -c "allowed-Microsoft" \
  -n "container-registries" \
  --source-addresses '10.0.2.0/27' \
  --protocols "https=443" \
  --target-fqdns mcr.microsoft.com *.data.mcr.microsoft.com \
  --action allow \
  --priority 200

 

To demonstrate reaching an obviously external URL, we also add an allowed-external collection.

 

az network firewall application-rule create \
  -g $ResourceGroup \
  -f $FirewallName \
  -c "allowed-external" \
  -n "websites" \
  --source-addresses '10.0.2.0/27' \
  --protocols "https=443" \
  --target-fqdns icanhazip.com \
  --action allow \
  --priority 201

 

Note that these are the minimum targets that must be reachable. If you expand on this tutorial and include other services, you will likely need to allow additional FQDNs.
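For example, if you also pulled images from Docker Hub, you might add a rule along these lines. The collection name and FQDNs below are illustrative; confirm the registry's current requirements before relying on them:

# Hypothetical rule collection for Docker Hub pulls (FQDNs are illustrative).
az network firewall application-rule create \
  -g $ResourceGroup \
  -f $FirewallName \
  -c "allowed-dockerhub" \
  -n "docker-hub" \
  --source-addresses '10.0.2.0/27' \
  --protocols "https=443" \
  --target-fqdns registry-1.docker.io auth.docker.io production.cloudflare.docker.com \
  --action allow \
  --priority 202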

 

Next, we need to get the public and private Azure Firewall IP addresses for the route table.

 

FirewallPublicIp=$(az network public-ip show -g $ResourceGroup -n pip-firewall --query "ipAddress" -o tsv)
FirewallPrivateIp=$(az network firewall show -g $ResourceGroup -n $FirewallName --query "ipConfigurations[0].privateIPAddress" -o tsv)
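Optionally, echo both values to confirm they were captured before building the routes:

echo "Firewall public IP:  $FirewallPublicIp"
echo "Firewall private IP: $FirewallPrivateIp"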

 

 

Creating the Routes

To ensure that our Azure Container Apps will later be forced to traverse the Azure Firewall for outbound traffic, we now create the route table with the default routes.

 

az network route-table create \
  -g $ResourceGroup \
  -n udr-aca

az network route-table route create \
  -g $ResourceGroup \
  -n firewall-route \
  --route-table-name udr-aca \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address $FirewallPrivateIp

az network route-table route create \
  -g $ResourceGroup \
  -n internet-route \
  --route-table-name udr-aca \
  --address-prefix $FirewallPublicIp/32 \
  --next-hop-type Internet
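As an optional sanity check, you can list the routes in the new route table to confirm both entries exist:

az network route-table route list \
  -g $ResourceGroup \
  --route-table-name udr-aca \
  -o table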

 

Lastly, we must associate the route table with the Azure Container App subnet.

 

az network vnet subnet update \
  -g $ResourceGroup \
  -n AzureContainerAppSubnet \
  --vnet-name $VnetName \
  --route-table udr-aca

 

 

Creating the Log Analytics Workspace

While not necessary for the success of this tutorial, a Log Analytics Workspace illustrates the successes and failures of requests through the Azure Firewall. I encourage you to always consider logging and telemetry in your architectures.

 

az monitor log-analytics workspace create \
  -g $ResourceGroup \
  --workspace-name log-egress-lockdown

 

Retrieve the Workspace ID.

 

WorkspaceId=$(az monitor log-analytics workspace show -g $ResourceGroup --workspace-name log-egress-lockdown --query id -o tsv)

 

We want to send the Azure Firewall logs to the Log Analytics Workspace.

 

az monitor diagnostic-settings create \
  --workspace $WorkspaceId \
  --resource $FirewallId \
  --name "Firewall logs" \
  --logs '[{"category":"AzureFirewallApplicationRule","enabled":true},{"category":"AzureFirewallNetworkRule","enabled":true},{"category":"AzureFirewallDnsProxy","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'

 

 

Creating the Azure Container App Environment and Container App

We begin by creating the Azure Container App Environment, which will be VNet-injected into the previously-created Azure Container App subnet. The environment will be configured as internal-only to prevent external access. While we could also send logs to the Log Analytics Workspace, this is not needed for this tutorial.

 

az containerapp env create \
  -g $ResourceGroup \
  -n cae-egress-lockdown \
  --location $Location \
  --internal-only true \
  --logs-destination none \
  --enable-workload-profiles \
  --infrastructure-subnet-resource-id $PrivateAcaEnvironmentSubnetId
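Environment creation can take several minutes. If you want to poll its status before continuing, you can query the provisioning state (a small optional check; the property path assumes the current API shape):

az containerapp env show \
  -g $ResourceGroup \
  -n cae-egress-lockdown \
  --query properties.provisioningState \
  -o tsv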

 

Azure Container Apps supports Consumption and Dedicated plans. In order to use user defined routes, we need to configure our container app for the Dedicated plan. This means we need to create a workload profile with a specific amount of vCPU and memory dedicated to our Azure Container App Environment.
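If you are unsure which workload profile types are available in your region, you can list them first (optional; assumes a recent containerapp CLI extension):

az containerapp env workload-profile list-supported \
  -l $Location \
  -o table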

 

az containerapp env workload-profile add \
  -g $ResourceGroup \
  -n cae-egress-lockdown \
  --min-nodes 1 \
  --max-nodes 10 \
  --workload-profile-name 'egresslockdown' \
  --workload-profile-type 'D4'

 

Lastly, we pull an nginx image from Microsoft's container registry to serve as our image for the Azure Container App we will use for testing egress.

 

az containerapp create \
  -g $ResourceGroup \
  -n $ContainerAppName \
  --environment cae-egress-lockdown \
  --workload-profile-name 'egresslockdown' \
  --image 'mcr.microsoft.com/cbl-mariner/base/nginx:1' \
  --min-replicas 1

 

Everything is set up now. Your list of resources should look similar to this:

 

[Image: list of deployed resources in the resource group]
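If you prefer to verify from the CLI instead, you can list the resources in the resource group:

az resource list -g $ResourceGroup -o table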

Validation

Now that we have a test container app inside a VNet that uses a user defined route to the Azure Firewall, we can validate that we are indeed blocked when we should be and successful when we are allowed to be.

 

We now use the cloud shell to connect to the container app and launch a bash shell.

 

az containerapp exec \
  -n $ContainerAppName \
  -g $ResourceGroup \
  --command 'bash'

 

[Image: bash shell session inside the container app]

 

Next, issue curl commands against an allowed target as well as targets that we did not allow. Only the first target is allowed in our application rules. Even though we are also targeting Microsoft URLs, we did not explicitly allow those URLs.

 

curl https://icanhazip.com
curl http://icanhazip.com
curl https://www.microsoft.com
curl http://www.microsoft.com

 

 

[Image: curl output for the four requests]

The first request, as expected, returns an IP address. The second and fourth requests are similar in nature and are blocked with a default message by the Azure Firewall. The third request is blocked in the same manner; however, `curl` shows us an SSL error instead of the Firewall error, which is appropriate.
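If you want to look more closely at why the third request fails, curl's verbose flag shows the TLS handshake being terminated (optional):

curl -v https://www.microsoft.com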

 

Furthermore, we can validate these results in our Azure Firewall logs that we sent to the Log Analytics Workspace. To do so, open the Azure Firewall instance in your resource group, navigate to the Logs blade, and execute the following query:

 

AzureDiagnostics
| where Category == "AzureFirewallNetworkRule" or Category == "AzureFirewallApplicationRule"
| parse msg_s with Protocol " request from " SourceIP " to " Target ". Action: " Action "." Message
| where Target startswith "icanhazip.com" or Target startswith "www.microsoft.com"
| project TimeGenerated, Category, SourceIP, Protocol, Target, Action, Message
| order by TimeGenerated desc

 

[Image: query results in the Log Analytics Workspace]

Similar to the curl output, you can see which requests were denied by the rules. Note that we do not see the SSL error that curl showed us but rather the true underlying message from the Azure Firewall.
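If you prefer to stay in the shell, roughly the same check can be run with the Azure CLI. This is a sketch; it assumes the log-analytics CLI extension and uses the workspace customer ID (a GUID) rather than the resource ID:

# Retrieve the workspace customer ID (GUID) used by the query command.
WorkspaceCustomerId=$(az monitor log-analytics workspace show -g $ResourceGroup --workspace-name log-egress-lockdown --query customerId -o tsv)

az monitor log-analytics query \
  -w $WorkspaceCustomerId \
  --analytics-query 'AzureDiagnostics | where Category == "AzureFirewallApplicationRule" | order by TimeGenerated desc | take 20' \
  -o table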

 

Conclusion

Hopefully, this tutorial gave you insight into this exciting new Azure Container Apps feature to secure your container workloads' traffic! 

 

Please follow or connect with me on LinkedIn where I frequently post about feature updates. Thank you!

 
