
Customizing the combo of Azure Developer CLI and .NET Aspire

When you're in the developer flow, the Azure Developer CLI (azd) can provide a good experience for moving code from your machine to the cloud, especially in tandem with .NET Aspire. To be clear, it does require your solution to fit into a select number of categories and does not work for everything you can create in Visual Studio. (.NET Aspire drives you towards Azure Container Apps, whereas azd on its own can also target things like Azure App Service, so there's also the question of whether you want both tools or just one. But that's a bigger topic not in scope for today.)
Like other templated solutions, it might not suit your use case directly. In previous posts I created the Bicep files (for a web app hosted on Azure Container Apps) by hand and compared them to the files generated by azd, asking whether I was over-engineering or the azd team under-engineering. The default answer would be "it depends".

If you have a component in your web app that uses Semantic Kernel to provide a chat experience, azd will not infer that you need to provision an Azure OpenAI deployment for it to work. (At least it doesn't go that deep at the time of writing, even though Aspire has an Azure OpenAI integration you can add to help you out.) The same may apply to other external dependencies as well.
An illustration (from the Azure Architecture Center) shows how even a simple app potentially needs a number of components:
[Diagram: microservices with Container Apps - runtime components]

 

 
The perhaps bigger challenge is that azd does not provide production-grade infrastructure out of the box. Can you write code and do an azd up to create Azure resources that can be consumed by others? Yes. Should you? Probably not. Let me contextualize that so it doesn't just come across as a grumpy outburst of "you're doing it wrong".
 
Azd focuses on the happy path, as it should. When I'm in Visual Studio in my developer inner loop, that's what I want. But unless I'm just throwing together a quick proof of concept, I probably need to consider things like throttling access to my APIs, not exposing my database directly to the world at large via public endpoints, and various other concerns. If I work at a large enterprise, maybe there's a security operations center monitoring activity around the clock, and they will happily wake you at night if your app misbehaves. Regardless of the tooling you use, you as a developer are not equipped to know and understand all these details.
 
Well, that's why we have different environments - dev, test, staging, prod - isn't it? There are different ways to separate the daily needs of developers from the daily needs of users, and a completely locked-down dev environment is sort of useless. However, you need to think about how large a delta you can accept: using MS SQL in dev and Oracle SQL in prod would obviously be silly, but if you use private virtual networks in prod, it probably wouldn't hurt to test that your code works with such a configuration before it goes live.
 
This blog post is not about solving all of these issues or making a grand solution containing an order of magnitude more infra code than actual code. I will try to explore a few of the options we have for adding things on top of what is provided by default.
 
All the code can be found here:
 
Note: there are separate folders in the code to illustrate the approaches separately, but only the last example works from end to end.

Inlining extra code

Azure Developer CLI uses Bicep to work its magic, and by default this is a black box. If you want to step outside that box (which is the purpose of this post) you can use the azd infra synth command to generate files you can inspect and edit. If you feel there are things missing, you can make adjustments to the generated files before starting the deployment. There are actually two ways to go about this: through the .NET Aspire code, and within the files scaffolded by azd. Note that infra synth is not a synced command - it generates Bicep based on the contents of the Aspire C#. (Technically, the manifest generated by dotnet publish.) If you update the Aspire code you will need to run it again to regenerate, but that may override changes you have made, so make sure you take care 🙂
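As a rough sketch, the cycle described above looks something like this on the command line (at the time of writing, infra synth is an alpha feature you may need to enable first):

```shell
# Enable the alpha feature if needed, then scaffold the infra files.
azd config set alpha.infraSynth on
azd infra synth      # generates infra/main.bicep, resources.bicep, *.tmpl.yaml

# ...edit the generated files under infra/ ...

azd up               # provision and deploy using the edited files

# Re-running synth after changing the Aspire C# may overwrite your edits.
```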

Inlining via Aspire

The example app uses Entra for authentication and this is handled by adding the necessary values to appsettings.json and orchestrating through Aspire:
var tenantId = builder.AddParameter("TenantId");
var clientId = builder.AddParameter("ClientId");
var clientSecret = builder.AddParameter("ClientSecret", secret: true);

var weatherapi = builder.AddProject<Projects.WeatherAPI>("weatherapi")
    .WithEnvironment("TenantId", tenantId)
    .WithEnvironment("ClientId", clientId);
What if we could create an app registration through code and skip the json file? That could be a good candidate for inlining in Aspire. You can add Bicep directly:
using Aspire.Hosting.Azure; // Extra NuGet package

var appRegistration = builder.AddBicepTemplate(
    name: "Graph",
    bicepFile: "../infra/Graph/app-registration.bicep"
);

var tenantId = appRegistration.GetOutput("tenantId");
var clientId = appRegistration.GetOutput("clientId");

var weatherapi = builder.AddProject<Projects.WeatherAPI>("weatherapi")
    .WithEnvironment("TenantId", tenantId)
    .WithEnvironment("ClientId", clientId);
The Bicep file in question:
extension microsoftGraph

resource app 'Microsoft.Graph/applications@v1.0' = {
  displayName: 'azd-custom-01'
  uniqueName: 'azd-custom-01'
}

output tenantId string = tenant().tenantId
output clientId string = app.appId
Note: azd supports deployment stacks in an alpha state in the latest preview. This breaks when using the Graph extension for Bicep, so for now you cannot combine the two. (The restriction is lack of support in the Graph extension, not in azd.)
Well, that is mighty nifty and it feels logical to add this to the Aspire AppHost. (There are some caveats we will get into later.) So, what would we want to inline outside Aspire?
 
This approach is documented here:

Inlining via Azure Developer CLI

Turns out we can do more here. We added tenantId and clientId above, and those are fairly easy to work with. The web app also requires a clientSecret, and secrets require a little bit of extra effort.
 
It is a known limitation of the Graph extension for Bicep that while you can create a secret using PowerShell or the CLI, it is not supported through Bicep. You can seemingly write correct Bicep, but it will not work. (I don't know if there is a plan to implement this.) You can create a key credential instead, but that is more elaborate than we'll go into in this section. So, for simplicity's sake we will just create the secret out of band directly in the portal. You shouldn't just copy it onto your disk - you should treat the secret properly and put it in a Key Vault. And this Key Vault could, for instance, be added to what azd created for us.
Go into resources.bicep and add a few lines:
resource vault 'Microsoft.KeyVault/vaults@2024-04-01-preview' = {
  name: 'kv-${resourceToken}'
  location: location
  properties: {
    sku: {
      name: 'standard'
      family: 'A'
    }
    accessPolicies: []
    enableRbacAuthorization: true
    enabledForDeployment: true
    tenantId: tenant().tenantId
  }
}

resource kvMiRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(vault.id, managedIdentity.id, subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6'))
  scope: vault
  properties: {
    principalId: managedIdentity.properties.principalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6')
  }
}

output KEYVAULT_URL string = vault.properties.vaultUri
Add the output to main.bicep:
output KEYVAULT_URL string = resources.outputs.KEYVAULT_URL
Make sure the following is part of bff-web-app.tmpl.yaml:
registries:
  - server: {{ .Env.AZURE_CONTAINER_REGISTRY_ENDPOINT }}
    identity: {{ .Env.AZURE_CONTAINER_REGISTRY_MANAGED_IDENTITY_ID }}
secrets:
  - name: clientsecret
    keyVaultUrl: '{{ .Env.KEYVAULT_URL }}secrets/clientSecret'
    identity: {{ .Env.AZURE_CONTAINER_REGISTRY_MANAGED_IDENTITY_ID }}
...
template:
  containers:
    - image: {{ .Image }}
      name: bff-web-app
      env:
        - name: ClientSecret
          secretRef: clientsecret
Update/backport to Program.cs:
var tenantId = appRegistration.GetOutput("tenantId");
var clientId = appRegistration.GetOutput("clientId");
var clientSecret = appRegistration.GetSecretOutput("clientSecret");

builder.AddProject<Projects.BFF_Web_App>("bff-web-app")
    .WithReference(weatherapi)
    .WithEnvironment("TenantId", tenantId)
    .WithEnvironment("ClientId", clientId)
    .WithEnvironment("ClientSecret", clientSecret);
These tweaks are suited for inlining - after all, they are tightly coupled to our application code. But even if you take care, and maybe even split things into more files, it doesn't necessarily scale. It's brittle in the sense that as you edit your Aspire AppHost and regenerate the templates, you may overwrite changes you have made. And if you need supporting Azure resources that are on a different abstraction level than your code, you can end up creating "hidden infrastructure", resulting in a non-optimal architecture. There are ways to tackle that as well, right?
 
The code for inlining is in the 01_Inline_AZD folder in the repo.
 

CAF Primer

The d in azd stands for developer, so even if something like an azure architect cli could potentially be an interesting tool, azd is about speeding up the developer workflow. Unfortunately, we will need to mention the Cloud Adoption Framework (CAF) nonetheless. I did a series on infra for devs earlier this year you can dive into if you like, but to quickly recap what I said back then: think of the infra your app consists of as (logical) layers/levels. If you look at what azd deploys for you, it's not very surprising that the container environment is deployed before the container apps; it wouldn't work the other way around. You start from the bottom and work upwards, with generic examples being something like:
  • Level 1: Azure Policy, Entra ID, Log Analytics workspaces
  • Level 2: Virtual networks, DNS zones, Azure Firewall
  • Level 3: Azure Kubernetes Service, Azure Container App Environments, SQL Server
  • Level 4: Containers, SQL Databases
What this means is that in essence azd only looks at levels 3 & 4, leaving you to figure out the rest. Depending on your needs this may be perfectly fine in dev environments, with separate adjustments made when moving to production. This post sort of has to assume you need additional levels for the examples to make sense 🙂
 
More CAF info here:
 
Odds are that whether you use azd or not, you're still giving some consideration to which components you need - do I need a Redis cache, a load balancer, and so on? This is just a mapping of where each piece fits in overall.

Pre-creating out of band

The Bicep code you or azd generate is stand-alone out of the box, and of course you can step outside it to provision infra. You can go to the Azure portal and create resources, have an infra team create something with Terraform, or use other means. The output of these processes might mean you add a value to appsettings.json, or to a Key Vault or App Configuration store. ("Flowing" outputs and inputs from one level to another is an interesting discussion in itself. In Terraform you have state files, whereas Bicep can "push & pop" from Azure itself. Let's leave it out of scope for now.)
 
When I say "out of band" here I mean that it is not part of the azd up/deploy/down cycle. Does that mean it shouldn't be part of the repo either? That's really up to you and how you organize your teams - there's no clear answer here. One Azure feature that should be mentioned here is template specs. As the name suggests, they give someone (an infra team) the ability to provide ready-made templates for someone else (a dev team). You can use Bicep both to create these templates and to consume them. They are versioned too, so the team creating and maintaining the templates can iterate without breaking things for the consumers.
 
I'm not diving into a full-fledged example here, but this quick start will show you how to use it:
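As a hedged sketch of that flow (the spec name, resource groups, and the network.bicep file are all illustrative):

```shell
# Infra team publishes a versioned template spec from a Bicep file:
az ts create --name networkSpec --version "1.0" \
  --resource-group rg-infra-specs --location westeurope \
  --template-file ./network.bicep

# Dev team looks up the spec id and deploys it without needing the source:
specId=$(az ts show --name networkSpec --version "1.0" \
  --resource-group rg-infra-specs --query id --output tsv)
az deployment group create --resource-group rg-dev-01 --template-spec "$specId"
```

Bumping the version number on the spec lets the infra team iterate while consumers keep pinning to "1.0" until they're ready to move.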
 

Pre-creating with hooks

The Azure Developer CLI also acknowledges that there are things it cannot infer from Aspire alone, nor from the templates in the Awesome azd repo. You also have "hooks" that enable you to add your own actions at various stages of the azd workflow:
 
In my original blog post on how to create the web app I'm using as a sample here, I didn't use Aspire or azd, instead focusing on how to do it by hand. Using that approach I created three infra levels and published containers separately. As I said, azd addresses levels 3 & 4 in that stack, but level 2 remains.
 
Level 2 contains the networking pieces for running your containers on private subnets so they can only be accessed internally. You can add these components by editing the azure.yaml file:
name: 02_Pre-create
hooks:
  preprovision:
    shell: pwsh
    run: az stack sub create --name azd-level-2 --location norwayeast --template-file .\infra\level-2\main.bicep --parameters .\infra\level-2\main.bicepparam --action-on-unmanage 'deleteAll' --deny-settings-mode none
  postdown:
    shell: pwsh
    run: az stack sub delete --name azd-level-2 --action-on-unmanage deleteAll --yes
services:
  app:
    language: dotnet
    project: ./BFF_Aspire/BFF_Web_App.AppHost/BFF_Web_App.AppHost.csproj
    host: containerapp
The deployment is added as a preprovision event to make sure it gets created before the Aspire-based resources. I've also added an event that makes sure it's deleted after you do an azd down. It may not be apparent from the snippet above, but resiliency is not a given here. By that I mean that if something in your Bicep fails, it is not guaranteed to stop the flow; (depending on the type of error) azd may continue to run, resulting in a non-working deployment if you have dependencies that are not in place.
 
If you run azd up with this setup you should get one resource group with the container environment and all the Aspire/azd resources, one for the vnet, and one for DNS. The resource group allocation isn't important here, but it shows that we aren't constrained in this matter. The pre-created infra is done through Deployment Stacks even if we aren't using that for azd itself. (Azd has alpha support for Deployment Stacks at the time of writing.)
 
In this example we haven't actually wired the container environment into the virtual network - it's purely to demo the hooks feature.
Right, so now we've done x to create a, y to create b... How does it all tie together?
 
The code for pre-creation is in the 02_Pre-create folder in the repo.
 

Complete sample

Like many computing challenges, the best solution will usually require more than one technique. Some things make sense to inline, whereas others should be external and maintained by other teams. We want to adjust our solution to run on a private virtual network, and we want to automate the authentication registration process as well. (Since a private virtual network requires you as the developer to be on the same network as the containers, I've also included the logic for creating a Dev Center where you can create a Dev Box to access things.)
 
Since we can't create client secrets through Bicep alone, we will change to using certificates instead. (On the backend - the user does not need to play around with certificates on the frontend.) To enable a good local development experience we start by creating a certificate:
New-SelfSignedCertificate -Type Custom -Subject "CN=MySelfSignedCertificate" `
  -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.3") `
  -KeyUsage DigitalSignature -KeyAlgorithm RSA -KeyLength 2048 `
  -NotAfter (Get-Date).AddYears(2) -CertStoreLocation "Cert:\CurrentUser\My"
This PowerShell line creates a self-signed certificate and puts it into the user's certificate store. Note that this assumes Windows. You can of course achieve the same on Linux, but it requires a few extra lines of code, and since we're orchestrating with .NET Aspire anyway we will skip that part.
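For completeness, a rough Linux equivalent (file names are my own, purely illustrative) could use openssl to produce a matching certificate:

```shell
# Create a self-signed certificate roughly equivalent to the PowerShell one:
# RSA 2048, code-signing EKU (1.3.6.1.5.5.7.3.3), valid for two years.
openssl req -x509 -newkey rsa:2048 -keyout mycert.key -out mycert.crt \
  -days 730 -nodes -subj "/CN=MySelfSignedCertificate" \
  -addext "extendedKeyUsage=codeSigning"

# Bundle key + cert as a PFX so it can be imported into a certificate store.
openssl pkcs12 -export -out mycert.pfx -inkey mycert.key -in mycert.crt -passout pass:
```

You would still need to make the certificate available to the app yourself, since there is no user certificate store in the Windows sense.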
 
The application code as-is will get angry if you don't supply it with a client secret, so we need to change the way we add authentication to our BFF_Web_App by making a few edits to Program.cs:
var tenantId = builder.Configuration.GetValue<string>("TenantId");
var clientId = builder.Configuration.GetValue<string>("ClientId");
var clientSecret = builder.Configuration.GetValue<string>("ClientSecret");
var keyvaultUrl = builder.Configuration.GetValue<string>("KeyVaultUrl") ?? "noVault";
var keyvaultSecret = builder.Configuration.GetValue<string>("KeyVaultSecret") ?? "noVault";

builder.AddServiceDefaults();

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddCookie("MicrosoftOidc")
    .AddMicrosoftIdentityWebApp(microsoftIdentityOptions =>
    {
        if (builder.Environment.IsDevelopment())
        {
            microsoftIdentityOptions.ClientCredentials = new CredentialDescription[] {
                CertificateDescription.FromStoreWithDistinguishedName("CN=MySelfSignedCertificate",
                    System.Security.Cryptography.X509Certificates.StoreLocation.CurrentUser)};
        }
        else
        {
            microsoftIdentityOptions.ClientCredentials = new CredentialDescription[] {
                CertificateDescription.FromKeyVault(keyvaultUrl, keyvaultSecret)};
        }
        microsoftIdentityOptions.ClientId = clientId;
        microsoftIdentityOptions.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        microsoftIdentityOptions.CallbackPath = new PathString("/signin-oidc");
        microsoftIdentityOptions.SignedOutCallbackPath = new PathString("/signout-callback-oidc");
        microsoftIdentityOptions.Scope.Add($"api://{clientId}/Weather.Get");
        microsoftIdentityOptions.Authority = $"https://login.microsoftonline.com/{tenantId}/v2.0/";
        microsoftIdentityOptions.ResponseType = OpenIdConnectResponseType.Code;
        microsoftIdentityOptions.MapInboundClaims = false;
        microsoftIdentityOptions.TokenValidationParameters.NameClaimType = JwtRegisteredClaimNames.Name;
        microsoftIdentityOptions.TokenValidationParameters.RoleClaimType = "role";
    }).EnableTokenAcquisitionToCallDownstreamApi(confidentialClientApplicationOptions =>
    {
        confidentialClientApplicationOptions.Instance = "https://login.microsoftonline.com/";
        confidentialClientApplicationOptions.TenantId = tenantId;
        confidentialClientApplicationOptions.ClientId = clientId;
    })
    .AddInMemoryTokenCaches();

builder.Services.ConfigureCookieOidcRefresh("Cookies", OpenIdConnectDefaults.AuthenticationScheme);
builder.Services.AddAuthorization();
 
We're using Microsoft.Identity.Web (which builds on top of MSAL), and if you want more on the specifics they are available here:

You will notice that we do an environment check: if dev, use the local certificate; else use a Key Vault. There is a "flaw" here, though, in the sense that we rely on an app registration in Entra ID locally as well, and since we haven't deployed any code yet, a client id doesn't exist right now. However, that's just how it works when using Entra for auth purposes. How best to create a complete AuthN/AuthZ developer story is out of scope here. Closing the circle on the developer certificate: you need to upload it to your app registration as a key credential (after running the deployment). The complete instructions for doing so are here:
 
In section 01 I added a reference to a Bicep file to create an app registration. Since we cannot create secrets, we will add code for adding a key credential instead. This doc shows how to create a certificate, place it in a key vault, and add it to an app registration:
Create secret for app registration:
I copied parts of it and made some adjustments of my own to make it work with the RBAC model for the Key Vault. And since we need to hook up valid redirects and scopes, I added that as well in app-registration.bicep:
resource app 'Microsoft.Graph/applications@v1.0' = {
  displayName: 'azd-custom-03'
  uniqueName: 'azd-custom-03'
  keyCredentials: [
    {
      displayName: 'Credential from KV'
      usage: 'Verify'
      type: 'AsymmetricX509Cert'
      key: createAddCertificate.properties.outputs.certKey
      startDateTime: createAddCertificate.properties.outputs.certStart
      endDateTime: createAddCertificate.properties.outputs.certEnd
    }
  ]
  //The default would be api://<appid> but this creates an invalid (for Bicep) self-referential value
  identifierUris: [
    identifierUri
  ]
  web: {
    redirectUris: [
      'https://localhost:7109/signin-oidc'
      'https://bff-web-app.${caeDomainName}/signin-oidc'
      'https://bff-web-app.internal.${caeDomainName}/signin-oidc'
    ]
  }
  api: {
    oauth2PermissionScopes: [
      {
        adminConsentDescription: 'Weather.Get'
        adminConsentDisplayName: 'Weather.Get'
        value: 'Weather.Get'
        type: 'User'
        isEnabled: true
        userConsentDescription: 'Weather.Get'
        userConsentDisplayName: 'Weather.Get'
        id: guid('Weather.Get')
      }
    ]
  }
}
A few additions to the Aspire Program.cs ensures we feed the necessary values back to the code:
using Aspire.Hosting;
using Aspire.Hosting.Azure;

var builder = DistributedApplication.CreateBuilder(args);

//Replace with a verified domain in your tenant
var identifierUri = "api://contoso.com";

//var appRegistration = builder.AddBicepTemplate(
//    name: "Graph",
//    bicepFile: "../infra/Graph/app-registration.bicep"
//)
//    .WithParameter("identifierUri", identifierUri)
//    .WithParameter("subjectName", "CN=bff.contoso.com")
//    .WithParameter("keyVaultName")
//    .WithParameter("certificateName")
//    .WithParameter("uamiName")
//    .WithParameter("caeDomainName");

//var tenantId = appRegistration.GetOutput("tenantId");
//var clientId = appRegistration.GetOutput("clientId");
//var keyVaultUrl = appRegistration.GetOutput("keyVaultUrl");
//var keyVaultSecret = appRegistration.GetOutput("keyVaultSecret");

var tenantId = builder.AddParameter("TenantId");
var clientId = builder.AddParameter("ClientId");
var keyVaultUrl = builder.AddParameter("keyVaultUrl");
var keyVaultSecret = builder.AddParameter("keyVaultSecret");

var weatherapi = builder.AddProject<Projects.WeatherAPI>("weatherapi")
    .WithEnvironment("TenantId", tenantId)
    .WithEnvironment("ClientId", clientId)
    .WithEnvironment("IdentifierUri", identifierUri);

builder.AddProject<Projects.BFF_Web_App>("bff-web-app")
    .WithReference(weatherapi)
    .WithExternalHttpEndpoints()
    .WithEnvironment("TenantId", tenantId)
    .WithEnvironment("ClientId", clientId)
    .WithEnvironment("IdentifierUri", identifierUri)
    .WithEnvironment("KeyVaultUrl", keyVaultUrl)
    .WithEnvironment("KeyVaultSecret", keyVaultSecret);

builder.Build().Run();
You will notice there is a technique for adding parameters to the app registration through Aspire as well. You will also notice that those lines are commented out. A few explanations and notes are in order (what I referred to as caveats earlier).
 
Note 1: when bringing the Bicep into the Aspire orchestration like this, you are actually enabling resource provisioning without the need for the Azure Developer CLI. Basically, when you hit F5 in Visual Studio, actions are performed against the Azure control plane. This enables use cases like developing locally against the very same resources as your cloud environment. You do, however, need to bring additional settings into your appsettings.json file (subscription id, etc.) to do this. Unfortunately, I'm currently only able to get this half-working when combining it with the rest of my Bicep, so for now I've chosen not to enable it. (The app registration is only invoked by azd.)
 
Note 2: since Aspire is C# we can get more creative with how we pass things around:
if (builder.Environment.IsDevelopment())
{
    //azd infra synth will not generate code
}

if (!builder.Environment.IsDevelopment())
{
    //azd infra synth will generate code
}
This opens up new opportunities but doesn't solve all underlying complexities as such. For now, we will not dive further into this.
Another confusing tidbit that is not apparent in the Aspire code is how variables are actually passed into the apps and services behind the scenes. The containers that get deployed are configured with a yaml spec, and all variables you need at runtime are passed in as environment variables. (These only apply when deploying to Azure, not when running locally.) But these can be resolved in different ways that you need to pay attention to. A few code examples show how they are carried between files:
/* Environment variables */
//main.bicep
output GRAPH_CLIENTID string = Graph.outputs.clientId

//bff-web-app.tmpl.yaml
- name: ClientId
  value: '{{ .Env.GRAPH_CLIENTID }}'

/* Parameters */
//main.bicep
param ClientId string

//bff-web-app.tmpl.yaml
- name: ClientId
  value: '{{ parameter "ClientId" }}'

/* Hard-coded values */
//Program.cs
var identifierUri = "api://contoso.com";

//bff-web-app.tmpl.yaml
- name: IdentifierUri
  value: api://contoso.com
Since we intend to connect resources to a private virtual network, we assume it has already been created (as covered in 02), and we need to bring those resources into the main.bicep file provided by azd:
@description('The resource group for the network infrastructure.')
param networkRGName string = 'rg-azd-level-2'

@description('The name of the virtual network to attach resources to.')
param vnetName string = 'aca-azd-weu'

resource rg_vnet 'Microsoft.Resources/resourceGroups@2024-03-01' existing = {
  name: networkRGName
}

resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  scope: rg_vnet
  name: vnetName
}

...

module resources 'resources.bicep' = {
  scope: rg
  name: 'resources'
  params: {
    location: location
    vnetId: vnet.id
    dnsRGName: networkRGName
    tags: tags
    principalId: principalId
  }
}
While azd is kind enough to generate Bicep (in resources.bicep) for the container registry and the container environment, we need to make a few adjustments to these to make them work with private endpoints. For the registry it looks like this:
// resource containerRegistry 'Microsoft.ContainerRegistry/registries@2023-07-01' = {
//   name: replace('acr-${resourceToken}', '-', '')
//   location: location
//   sku: {
//     name: 'Basic'
//   }
//   tags: tags
// }

module containerRegistry 'modules/containers/container-registry/main.bicep' = {
  name: replace('acr-${resourceToken}', '-', '')
  params: {
    resourceTags: tags
    acrName: replace('acr-${resourceToken}', '-', '')
    acrSku: 'Premium'
    adminUserEnabled: false
    anonymousPullEnabled: false
    location: location
    managedIdentity: 'SystemAssigned'
    publicNetworkAccess: 'Enabled'
  }
}
As you can see, we can re-use modules if that makes sense. Depending on how familiar you are with Bicep, you may get confused by how scoping works when doing this. For instance, we need extra "trickery" to create a role assignment on the registry afterwards:
//"Import" the registry resource we created
resource scopeACR 'Microsoft.ContainerRegistry/registries@2023-07-01' existing = {
  name: containerRegistry.name
}

//And use it as the scope for a role assignment
resource caeMiRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(containerRegistry.name, managedIdentity.id, subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d'))
  scope: scopeACR
  properties: {
    principalId: managedIdentity.properties.principalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d')
  }
}
If you don't want to use private endpoints for your container environment you can change the vnetInternal param to false:
module containerAppEnvironment 'modules/containers/container-environment/main.bicep' = {
  name: 'cae-${resourceToken}'
  params: {
    resourceTags: tags
    location: location
    environmentName: 'cae-${resourceToken}'
    snetId: '${vnetId}/subnets/snet-cae-01'
    //true for connecting CAE to snet (with private IPs)
    //false for public IPs
    vnetInternal: true
  }
}
There's also some additions for making DNS work so you're able to actually browse the site from the Dev Box.
To stitch it all together we add more hooks to our azure.yaml file:
name: BFF_Aspire
#resourceGroup: rg-03-E2E
hooks:
  preprovision:
    - shell: pwsh
      run: az stack sub create --name azd-level-2 --location westeurope --template-file .\infra\level-2\main.bicep --parameters .\infra\level-2\main.bicepparam --action-on-unmanage 'deleteAll' --deny-settings-mode none
  postprovision:
    - shell: pwsh
      run: az stack sub create --name azd-devCenter --location westeurope --template-file .\infra\devCenter\main.bicep --parameters .\infra\devCenter\main.bicepparam --action-on-unmanage 'deleteAll' --deny-settings-mode none
  postdown:
    - shell: pwsh
      run: az stack sub delete --name azd-devCenter --action-on-unmanage deleteAll --yes
    - shell: pwsh
      run: az stack sub delete --name azd-level-2 --action-on-unmanage deleteAll --yes
services:
  app:
    language: dotnet
    project: ./BFF_Web_App.AppHost/BFF_Web_App.AppHost.csproj
    host: containerapp
If you are familiar with the Azure Developer CLI you will know that you start with azd init. This is not required here since the necessary files are already provided. You should use the azd env new command instead:
PS C:\Code\POCs\azd_customization\03_End-to-End\BFF_Aspire> azd env new
? Enter a new environment name: 03-E2E
PS C:\Code\POCs\azd_customization\03_End-to-End\BFF_Aspire> azd up
? Select an Azure Subscription to use: [Use arrows to move, type to filter]
There is a bug I haven't quite figured out yet with the app registration. It will fail with an error complaining about the certificate start date, but it will work if you just re-run azd up.
 
Since we use multiple resource groups (and azd apparently assumes just the one), you might get an error about which resource group to use. The resource group will be prefixed with rg-, so if you provide an environment name of 03-E2E the resource group will be called rg-03-E2E. You can either add it to the azure.yaml file as shown above, in which case it applies to all environments, or you can add the following to your .env file (making it unique per environment):
AZURE_RESOURCE_GROUP="rg-03-E2E"
After you have run the deployment process, you need to copy the tenant id and client id out of the app registration (since you have to upload your local certificate anyway) and stick them into the appsettings.json file for BFF_Web_App.AppHost:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "Aspire.Hosting.Dcp": "Warning"
    }
  },
  "Parameters": {
    "TenantId": "guid",
    "ClientId": "guid",
    "ClientSecret": "guid",
    "KeyVaultUrl": "https://contoso.vault.azure.net",
    "KeyVaultSecret": "certificateName"
  }
}
You should now be able to develop your code further and use the F5 experience locally while being able to update an Azure environment by doing azd up.
 
Once you are done testing, resources should go away by executing azd down. The app registration does not get removed automatically, so it will have to be deleted (and purged) in the Azure/Entra portal. (As a sort of bonus, the local development experience will continue to work after deleting the Azure resources, as long as the registration persists in Entra.)
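If you prefer scripting that last cleanup step, the Azure CLI can handle the deletion (the client id placeholder is whatever your deployment output):

```shell
# Delete the leftover app registration; <client-id> is the clientId/appId
# output from app-registration.bicep. Purging still happens in the portal.
az ad app delete --id <client-id>
```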
For all of the code head on over to:
 
 
Wrap-up
Azure has become a beast over the years when it comes to building out complex and advanced solutions. You could always create individual components quickly by going with click-ops in the Azure portal, but following the best practice of codifying the infrastructure hasn't always been equally easy - especially when you are a developer and not an "IT Pro" (as we called ops people back in the day). The intent of the Azure Developer CLI is to make this easier for you, but sometimes the pre-baked templates don't quite work for you. This has been a quick tour of what you can tweak in this regard. To be clear, I probably wouldn't use the sample above as-is for a real-world development team, but the techniques are relevant for more specific use cases as well in my opinion.
 
.NET 9 arrives mid-November along with a new version of Aspire and azd iterates on a monthly cadence as well so there should be more good things in the pipeline!
 

Azure Developer Community Blog articles