Infra in Azure for Developers - The How (Part 2)

Still working our way through infrastructure for developers here. I've done the why & what. Not to mention part one of how, and now we're back with part two of creating stuff. You should work your way through part one first to get the necessary context for this part.

 

dev_sweating_01.jpg

 

When we left off last time we had a network and a private container app environment on top. That could have been done by a non-programmer, but in this part your development experience might come in handy. And that is part of my motivation here - at some point in the deployment cycle developer eyes are needed to cross the finish line.

 

We were implementing our infra as logical layers, and that's where we will continue.

 

All the code can be found here: https://github.com/ahelland/infra-for-devs 

Level 4

Actual application code - yay! I know, a lot of work to get there. There are a number of different ways to deploy code to Azure depending on which service you use. If you use Azure Static Web Apps you just point to a git repo and once you check in your code it's all taken care of for you. If you're using Kubernetes it might involve a yaml file with the necessary specs. Old school app services could even work with FTP. In our case we can just continue down the Bicep path that we are already on.

 

Let's remind ourselves of the components listed on our Aspire dashboard:

aspire_dash_01.png

 

Three of our services are container images pulled from Docker Hub, whereas the rest will be hosted in the container registry we created. For simplicity we will create separate Bicep modules for the two. You could create one for both, but then you would need some extra conditionals inside to handle the fact that Docker Hub allows anonymous access and our ACR does not, so for learning purposes we're keeping them separate.

 

Let's start with these Docker containers. Container Apps are agnostic to where the images come from, and as long as the wrapping of the binaries is in the right format it treats third-party code the same as your first-party code. There is however a relatively new feature where some common third-party products are available in an easier way - "Services" (below "regular Apps"):

Services_01.png

 

Services_02.png

 

There's a list to choose from here. I must admit I'm not familiar with all of them, but you can see that both Redis and Postgres are on the list. We find these in Program.cs for the Aspire code as well:

 

var redis = builder.AddRedisContainer("redis");

var rabbitMq = builder.AddRabbitMQContainer("EventBus");

var postgres = builder.AddPostgresContainer("postgres")
    .WithAnnotation(new ContainerImageAnnotation { Image = "ankane/pgvector", Tag = "latest" });

 

This annotation means that both Redis and RabbitMQ use the latest official images. However, Postgres uses a separate image that embeds vector search capabilities. The Service feature in Container Apps doesn't let you specify the image - which is part of why the services are so easy to use - so unfortunately that blocks us from using the PostgreSQL service. (Yes, I tried, and you will get runtime errors without the vector support.) RabbitMQ is absent from the list, so it's just Redis we can deploy this way. I wrapped the actual implementation in a Bicep module, but basically this is all the code required to deploy Redis this way:

 

resource containerService 'Microsoft.App/containerApps@2023-04-01-preview' = {
  name: serviceName
  location: location
  tags: resourceTags
  properties: {
    environmentId: containerAppEnvironmentId
    configuration: {
      service: {
        type: serviceType
      }
    }
  }
}

 

The RabbitMQ and PostgreSQL images are similar in that both pull from Docker Hub, so the core of a module for these looks like this:

 

resource containerApp 'Microsoft.App/containerApps@2023-05-02-preview' = {
  name: name
  location: location
  tags: resourceTags
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    environmentId: containerAppEnvironmentId
    workloadProfileName: 'Consumption'
    configuration: {
      activeRevisionsMode: 'Single'
      ingress: {
        external: true
        targetPort: targetPort
        exposedPort: exposedPort
        transport: transport
        traffic: [
          {
            weight: 100
            latestRevision: true
          }
        ]
        allowInsecure: false
      }
    }
    template: {
      revisionSuffix: ''
      containers: [
        {
          image: containerImage
          name: containerName
          env: envVars
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
  identity: {
    type: 'None'
  }
}

 

And as an example instantiating RabbitMQ looks like this:

 

module rabbitMQApp '../modules/containers/container-app-docker-hub/main.bicep' = {
  scope: rg_cae
  name: 'eshop-rabbitmq'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    name: 'eventbus'
    containerImage: 'rabbitmq:3-management'
    containerName: 'eventbus'
    targetPort: 5672
    transport: 'tcp'
    minReplicas: 1
  }
}

 

Hold on a moment. There might not be a native RabbitMQ offering in Azure, but surely both Redis and PostgreSQL are available - why would I use these containers instead? The reason is simplicity during the development phase. When you fire up your code locally to test and debug, you don't care about high availability for the database, and you don't care if the cache can't handle millions of items. There isn't an easy way to do geo-replication of the Postgres container, and I'm pretty sure there will be hiccups if you hammer the Redis cache with enough traffic as well. These are great for development purposes within the Container Environment, but if you want to move to production you should look into the full offerings and configure them to your liking.

 

I should also note that while I had no issues creating a Redis service, for some reason I did have issues with the Basket API connecting to it. (It threw an unhelpful stack trace.) Deploying Redis as a "regular" container app seemed to fix this. This means that while a module for services is included, we don't actually use it. (Not a deal-breaker for our scenario, but as a side note, the Azure Developer CLI attempts to use Services when deploying Aspire-based solutions.)

 

There are some low-level details needed before actual deployment, so hold off on pressing the roll-out button. We will get back to that after briefly looking at the actual eShop microservices.

 

Packaging microservices

We can't avoid containerizing our microservices since we use Azure Container Apps. There's nothing preventing us from pushing these to Docker Hub as well and consuming them from there. (Since eShop is open source there's nothing to hide.) Let's pretend it's our company-internal app and that it's better served by the Azure Container Registry we created in the previous part. How do we get the images into the registry, you ask? Ah, minor details. The quickest way is to fire up Visual Studio and run the process from there. First add a Dockerfile:

docker_01.png

 

Then go through the process of publishing to ACR:

acr_publish_01.png
acr_publish_02.png
acr_publish_03.png

You might hit a snag here - we locked down our ACR to access from our private vnet, and unless you happen to be on the same network you are not able to upload these images. This is the control plane vs data plane split hitting us. We are able to deploy our resources without being on the same network because our requests go through the ARM API, which is public by nature but governed by control plane RBAC. (No permissions on a subscription equals no creation of resources.) When we push data into the registry, however, that traffic goes over the registry's own endpoints, which we have declared private. We need data plane permissions as well for private resources, but the network restriction kicks in first. The solution is to either log in to the Dev Box if you created one, or as a quick fix do some click-ops and enable public access. (Which you can disable again after uploading the images.)

acr_network_01.png
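If you would rather script that toggle than click around in the portal, it should be a pair of one-liners with the Azure CLI (the registry name is hypothetical):

az acr update --name crinfrafordevs --public-network-enabled true
# ...push the images...
az acr update --name crinfrafordevs --public-network-enabled false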

This is a blocker if you want to do the push outside of Visual Studio too. There is the command az acr build, but the way it works behind the scenes is that the files are uploaded to a container instance that is not connected to the vnet (regardless of which computer you run it from). It is also slightly more involved, so I'm skipping the details of that one.
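For completeness, the shape of the command is roughly this - a sketch where the registry name, image tag and Dockerfile path are assumptions based on our services:

az acr build --registry crinfrafordevs --image basketapi:latest --file src/Basket.API/Dockerfile .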

Environment variables

This was one of the trickier things to figure out - most code needs some environment variables to get going, and the eShop services are no different. Part of the reason you want to use Aspire is that it takes care of this for you, but we are not using Aspire, so we have to hack things together manually. If you inspect Program.cs in the eShop.AppHost project you will find things like:

 

var basketApi = builder.AddProject<Projects.Basket_API>("basket-api")
    .WithReference(redis)
    .WithReference(rabbitMq)
    .WithEnvironment("Identity__Url", identityApi.GetEndpoint("http"));

 

Both .WithEnvironment and .WithReference inject variables. To figure out what the values are, we can look up the Aspire manifest format here:

https://learn.microsoft.com/en-us/dotnet/aspire/deployment/manifest-format
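One detail that helps when reading those values: a double underscore in an environment variable name maps to the ':' separator in .NET configuration. A minimal sketch of how a service ends up consuming them (the key names are the ones we inject later):

// Identity__Url surfaces as the configuration key "Identity:Url".
var identityUrl = builder.Configuration["Identity:Url"];

// ConnectionStrings__redis is picked up by GetConnectionString("redis").
var redisConnection = builder.Configuration.GetConnectionString("redis");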

 

And we can inspect the values in the Aspire dashboard as well:

aspire_dash_02.png

 

Now, normally appsettings.json would be a good place to check as well, but it looks like Aspire does some extra magic, so not all variables are on file. (This is a totally valid pattern in .NET - sometimes you don't want to hardwire things and want them to be more dynamic.)

 

Coming in blind to the code base, it took a little detective work looking in various places and digging into how Aspire works with dependency injection.

Container App Module

This brings us to building a module for container apps that pull their image from Azure Container Registry:

 

resource containerApp 'Microsoft.App/containerApps@2023-05-02-preview' = {
  name: name
  location: location
  tags: resourceTags
  identity: {
    type: 'SystemAssigned,UserAssigned'
    userAssignedIdentities: {
      '${mi.id}': {}
    }
  }
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    environmentId: containerAppEnvironmentId
    workloadProfileName: 'Consumption'
    configuration: {
      registries: [
        {
          identity: mi.id
          server: containerRegistry
        }
      ]
      activeRevisionsMode: 'Single'
      ingress: {
        external: true
        targetPort: targetPort
        exposedPort: 0
        transport: transport
        traffic: [
          {
            weight: 100
            latestRevision: true
          }
        ]
        allowInsecure: false
      }
    }
    template: {
      serviceBinds: (!empty(serviceId)) ? [
        {
          serviceId: serviceId
          name: 'redis'
        }
      ] : []
      containers: [
        {
          image: containerImage
          name: containerName
          env: envVars
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

 

Of note here is that we add an identity and specify that it is to be used with the ACR. We can also see a reference to a Redis service bind for those apps that need it - that is the Redis service we referred to a few paragraphs up. We only support one bind here because we only identified Redis as a possible service; we could have reworked the Bicep to handle multiple bindings. The binding takes care of the network plumbing and the like between apps and services.

Ingress

Network is a good cue to talk about ingress. We have an element in the Bicep code called "ingress" with a few parameters attached to it. If you try to create a container app in the Azure Portal you will see a couple of options appear when you enable ingress:

Ingress_01.png

 

If you have something like a cron job that wakes up every 24 hours and cleans up old database records, that would be an app that doesn't require an ingress - it isn't supposed to react to incoming traffic.

 

A web app on the other hand would need an ingress because the whole purpose is that it will expose an interface for receiving inbound traffic.

 

You can however get more granular with this ingress. The web app needs an external ingress because it will receive traffic originating from outside the cluster (like my web browser), but other microservices may only need internal exposure. A classic problem when you have multiple microservices is how they should reach each other - by DNS name or IP. IP addresses are fickle since they can easily change, and proper DNS registration often involves other systems on-premises. Both plain Kubernetes and Azure Container Apps make it easy to facilitate service discovery. This means that I can hit a frontend in my browser, and it makes calls to http://backend without worrying about the IP address.
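In code that means something like this minimal sketch just works inside the environment (the service name backend and the path are hypothetical):

// The app name resolves through the environment's internal DNS,
// so no IP address or external DNS record is needed.
var client = new HttpClient { BaseAddress = new Uri("http://backend") };
var response = await client.GetAsync("/api/items");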

 

Unfortunately, it is mostly a guessing game for us how eShop works in this regard. The architecture diagram does show the logical flows, but not the network requirements.

 

Note that if we want external ingress (like accessing from the Dev Box) we need to add a DNS record for the Container App. This is not because the app doesn't like IP addresses, but because there is a load balancer in front exposing a shared IP for all container apps, and it uses the host name header to direct calls to the right app on the other side. (You can use mechanisms like editing the hosts file on your computer, but if you're using Dev Box you already have the DNS resolver in place, so you don't need that extra step.)
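The record itself is simple Bicep. A sketch, assuming an existing privateDnsZone resource for the environment's default domain and the environment's static IP passed in as a parameter - a wildcard record covers all the apps behind the shared IP:

resource wildcardRecord 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: privateDnsZone
  name: '*'
  properties: {
    ttl: 3600
    aRecords: [
      {
        ipv4Address: containerAppEnvironmentStaticIp
      }
    ]
  }
}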

Database initialization

We are almost ready for deployment, but since we covered environment variables, let's revisit Postgres. If you use the "Service" version it comes with a few defaults. We didn't use that, so we will add a few variables: basically a not-too-secure password and the settings needed to bootstrap the instance.

 

module postgresVectorApp '../modules/containers/container-app-docker-hub/main.bicep' = if(deploy_postgres) {
  scope: rg_cae
  name: 'eshop-postgres'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    name: 'postgres'
    containerImage: 'ankane/pgvector:latest'
    containerName: 'postgres'
    targetPort: 5432
    transport: 'tcp'
    minReplicas: 1
    envVars: [
      { name: 'POSTGRES_HOST_AUTH_METHOD', value: 'scram-sha-256' }
      { name: 'POSTGRES_INITDB_ARGS', value: '--auth-host=scram-sha-256 --auth-local=scram-sha-256' }
      { name: 'POSTGRES_PASSWORD', value: 'foo' }
    ]
  }
}

 

Aspire did a few extra things for us though:

 

var catalogDb = postgres.AddDatabase("CatalogDB");
var identityDb = postgres.AddDatabase("IdentityDB");
var orderDb = postgres.AddDatabase("OrderingDB");
var webhooksDb = postgres.AddDatabase("WebHooksDB");

 

It added databases. The default Docker image isn't clever enough to preprovision these for us. There are different ways to approach this. We could create a sql file that we embed in a Dockerfile that takes care of this, but we are not using Dockerfiles for the third-party containers. (And we don't use Dockerfiles during the deployment phase anyway.) We could also create a script that is passed as a command line argument by Bicep during creation - in which case we need somewhere to put that script. (A sketch of the Dockerfile variant follows below.)
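For the record, the Dockerfile variant would be tiny, since the official Postgres image (which ankane/pgvector builds on) runs any scripts placed in /docker-entrypoint-initdb.d when the data directory is first initialized. The sql file name here is hypothetical:

FROM ankane/pgvector:latest
# Runs automatically the first time a fresh data directory is initialized
COPY create-databases.sql /docker-entrypoint-initdb.d/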

 

The microservices use Entity Framework, so they should take care of the actual seeding of the databases. EF will create the databases too if the timing is right. By that I mean that if the PostgreSQL container gets created first, and the containers "owning" the databases come in second before you start querying, it should work. I saw mixed results though, so in case you need to create the databases manually there's a little backdoor into the container through the console in the Azure Portal. (Seeding still needs to be done by EF unless you create scripts for that yourself.)
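For reference, the EF pattern that relies on this timing typically looks something like the sketch below (eShop's actual bootstrapping differs in the details; CatalogContext is the Catalog API's DbContext):

// Run at startup: creates the database if it doesn't exist,
// then applies any pending migrations.
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<CatalogContext>();
    await db.Database.MigrateAsync();
}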

postgres_console_01.png
postgres_console_02.png
postgres_console_03.png

The commands used:

 

su - postgres
psql
create database "CatalogDB" \g
create database "IdentityDB" \g
create database "OrderingDB" \g
create database "WebHooksDB" \g

 

I know - we could line them up and trigger everything at the last line, but I went with \g to commit each database separately. The password you get prompted for is the one you set as an environment variable. I realize this is a bit of a preemptive strike, as we haven't deployed Postgres yet, but it sets the bar for what you should expect to do after running the deployment.

Assembling our main.bicep

Our Level 4 is easy in the sense that it's the same resource type repeated a number of times. If it were just pushing the images to the container registry and duplicating a few lines of Bicep - what bliss that would be. A spoiler: there was more to it, and I was not able to fix everything. We will still go through it all.

 

We already covered PostgreSQL, RabbitMQ and Redis, so we'll skip those. If you refer to the complete file you will also notice DNS records being created, but they aren't interesting to explain here.

 

Basket API:

 

module basketapi '../modules/containers/container-app-acr/main.bicep' = if(deploy_basketAPI) {
  scope: rg_cae
  name: 'eshop-basketapi'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/basketapi:latest'
    targetPort: 8080
    transport: 'http2'
    externalIngress: false
    containerName: 'basket-api'
    identityName: 'eshop-cae-user-mi'
    name: 'basket-api'
    minReplicas: 1
    //If you want to bind to a Redis service uncomment the line below
    //serviceId: redisService.outputs.id
    envVars: [
      { name: 'Identity__Url', value: 'https://identity-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'ConnectionStrings__redis', value: 'redis:6379' }
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
    ]
  }
}

 

We've wired things up with references to the Identity API, Redis and RabbitMQ (called eventbus) through environment variables. The transport is set to http2 due to gRPC.

 

Catalog API:

 

module catalogapi '../modules/containers/container-app-acr/main.bicep' = if(deploy_catalogAPI) {
  scope: rg_cae
  name: 'eshop-catalogapi'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/catalogapi:latest'
    externalIngress: true
    targetPort: 8080
    transport: 'http'
    containerName: 'catalog-api'
    identityName: 'eshop-cae-user-mi'
    name: 'catalog-api'
    minReplicas: 1
    envVars: [
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
      { name: 'Aspire__Npgsql__EntityFrameworkCore__PostgreSQL__ConnectionString', value: 'Host=postgres;Database=CatalogDB;Port=5432;Username=postgres;Password=foo' }
    ]
  }
}

 

RabbitMQ here as well, and we connect to the PostgreSQL database just by using the short form "postgres" for name resolution. For demonstration purposes I injected the connection string in a different way, as explained here:

https://learn.microsoft.com/en-us/dotnet/aspire/database/postgresql-entity-framework-component?tabs=dotnet-cli

 

Identity API:

 

module identityapi '../modules/containers/container-app-acr/main.bicep' = if(deploy_identityAPI) {
  scope: rg_cae
  name: 'eshop-identityapi'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/identityapi:latest'
    targetPort: 8080
    transport: 'http'
    containerName: 'identity-api'
    identityName: 'eshop-cae-user-mi'
    name: 'identity-api'
    minReplicas: 1
    envVars: [
      { name: 'ASPNETCORE_ENVIRONMENT', value: 'Development' }
      { name: 'ASPNETCORE_FORWARDEDHEADERS_ENABLED', value: 'true' }
      { name: 'BasketApiClient', value: 'https://basket-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'OrderingApiClient', value: 'https://ordering-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'WebhooksApiClient', value: 'https://webhooks-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'WebhooksWebClient', value: 'https://webhooksclient.${containerenvironment.properties.defaultDomain}' }
      { name: 'WebAppClient', value: 'https://webapp.${containerenvironment.properties.defaultDomain}' }
      { name: 'ConnectionStrings__IdentityDB', value: 'Host=postgres;Database=IdentityDB;Port=5432;Username=postgres;Password=foo' }
    ]
  }
}

 

The Identity API is sort of a hub, since other APIs depend on it. The Identity API must have external ingress enabled for you to be able to log in - your browser needs a line of sight to the identity provider (Duende IdentityServer here, but that applies to all IdPs). We're referencing the full DNS names of the other APIs here, implying they also have external ingress enabled - apart from the Basket API, which is internal, but that just means suffixing .internal as part of the FQDN. That's more a matter of convenience, since it allows us to hit the Swagger endpoints if we want to use the APIs without the web app.

 

Order API:

 

module orderingapi '../modules/containers/container-app-acr/main.bicep' = if(deploy_orderingAPI) {
  scope: rg_cae
  name: 'eshop-orderingapi'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/orderingapi:latest'
    targetPort: 8080
    transport: 'http'
    containerName: 'ordering-api'
    identityName: 'eshop-cae-user-mi'
    name: 'ordering-api'
    minReplicas: 1
    envVars: [
      { name: 'Identity__Url', value: 'https://identity-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
      { name: 'ConnectionStrings__OrderingDB', value: 'Host=postgres;Database=OrderingDB;Port=5432;Username=postgres;Password=foo' }
    ]
  }
}

 

References to Identity API, Rabbit and PostgreSQL.

 

Order Processor:

 

module orderprocessor '../modules/containers/container-app-acr/main.bicep' = if(deploy_orderProcessor) {
  scope: rg_cae
  name: 'eshop-orderprocessor'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/orderprocessor:latest'
    targetPort: 8080
    transport: 'http'
    containerName: 'order-processor'
    identityName: 'eshop-cae-user-mi'
    name: 'order-processor'
    minReplicas: 1
    envVars: [
      //Note: the standard AMQP port is 5672, matching the other services
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
      { name: 'ConnectionStrings__OrderingDB', value: 'Host=postgres;Database=OrderingDB;Port=5432;Username=postgres;Password=foo' }
    ]
  }
}

 

Supporting service for ordering.

 

Payment Processor:

 

module paymentprocessor '../modules/containers/container-app-acr/main.bicep' = if(deploy_paymentProcessor) {
  scope: rg_cae
  name: 'eshop-paymentprocessor'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/paymentprocessor:latest'
    targetPort: 8080
    transport: 'http'
    externalIngress: false
    containerName: 'payment-processor'
    identityName: 'eshop-cae-user-mi'
    name: 'payment-processor'
    minReplicas: 1
    envVars: [
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
    ]
  }
}

 

To process payments when you check out your order.

 

Web App:

 

module webapp '../modules/containers/container-app-acr/main.bicep' = if(deploy_webApp) {
  scope: rg_cae
  name: 'eshop-webapp'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/webapp:latest'
    targetPort: 8080
    transport: 'http'
    containerName: 'webapp'
    identityName: 'eshop-cae-user-mi'
    name: 'webapp'
    minReplicas: 1
    envVars: [
      { name: 'ASPNETCORE_ENVIRONMENT', value: 'Development' }
      { name: 'ASPNETCORE_FORWARDEDHEADERS_ENABLED', value: 'true' }
      { name: 'IdentityUrl', value: 'https://identity-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'CallBackUrl', value: 'https://webapp.${containerenvironment.properties.defaultDomain}/signin-oidc' }
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
      { name: 'services__basket-api__0', value: 'http://basket-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__basket-api__1', value: 'https://basket-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__catalog-api__0', value: 'http://catalog-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__catalog-api__1', value: 'https://catalog-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__ordering-api__0', value: 'http://ordering-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__ordering-api__1', value: 'https://ordering-api.internal.${containerenvironment.properties.defaultDomain}' }
    ]
  }
}

 

The Web App needs to use various supporting APIs and has references to these. I added ASPNETCORE_ENVIRONMENT for debugging purposes. ASPNETCORE_FORWARDEDHEADERS_ENABLED was needed to get through the load balancer correctly.

 

The web app caused me the most grief. It has dependencies on everything else, and it is of course the UI of eShop, so incorrect settings in other places surface here. I wouldn't go so far as to say monoliths are easier, but they have fewer cross-cutting concerns - something to be aware of when dealing with a container environment full of worker services fronted by a pretty interface.

 

Webhooks API:

 

module webhooksapi '../modules/containers/container-app-acr/main.bicep' = if(deploy_webhooksAPI) {
  scope: rg_cae
  name: 'eshop-webhooksapi'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/webhooksapi:latest'
    targetPort: 8080
    transport: 'http'
    externalIngress: false
    containerName: 'webhooks-api'
    identityName: 'eshop-cae-user-mi'
    name: 'webhooks-api'
    minReplicas: 1
    envVars: [
      { name: 'ConnectionStrings__EventBus', value: 'amqp://guest:guest@eventbus:5672' }
      { name: 'ConnectionStrings__WebhooksDB', value: 'Host=postgres;Database=WebhooksDB;Port=5432;Username=postgres;Password=foo' }
      { name: 'Identity__Url', value: 'https://identity-api.${containerenvironment.properties.defaultDomain}' }
    ]
  }
}

 

The Webhooks API is for subscribing to web hooks.

 

Webhook client:

 

module webhookclient '../modules/containers/container-app-acr/main.bicep' = if(deploy_webhooksClient) {
  scope: rg_cae
  name: 'eshop-webhookclient'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/webhookclient:latest'
    targetPort: 8080
    containerName: 'webhooksclient'
    identityName: 'eshop-cae-user-mi'
    name: 'webhooksclient'
    minReplicas: 1
    envVars: [
      { name: 'IdentityUrl', value: 'https://identity-api.${containerenvironment.properties.defaultDomain}' }
      { name: 'CallBackUrl', value: 'https://webhooksclient.${containerenvironment.properties.defaultDomain}/signin-oidc' }
      { name: 'services__webhooks-api__0', value: 'http://webhooks-api.internal.${containerenvironment.properties.defaultDomain}' }
      { name: 'services__webhooks-api__1', value: 'https://webhooks-api.internal.${containerenvironment.properties.defaultDomain}' }
    ]
  }
}

 

Supporting client for web hooks.

 

Mobile BFF:

 

module mobilebffshopping '../modules/containers/container-app-acr/main.bicep' = if(deploy_bff) {
  scope: rg_cae
  name: 'eshop-mobilebffshopping'
  params: {
    location: location
    resourceTags: resourceTags
    containerAppEnvironmentId: containerenvironment.id
    containerRegistry: '${acrName}.azurecr.io'
    containerImage: '${acrName}.azurecr.io/mobilebffshopping:latest'
    targetPort: 8080
    containerName: 'mobile-bff'
    identityName: 'eshop-cae-user-mi'
    name: 'mobile-bff'
    minReplicas: 1
    envVars: [
      { name: 'services__catalog-api__0', value: 'catalog-api' }
      { name: 'services__catalog-api__1', value: 'catalog-api' }
      { name: 'services__identity-api__0', value: 'identity-api' }
      { name: 'services__identity-api__1', value: 'identity-api' }
    ]
  }
}

 

For connecting from mobile apps. Since we deployed on a private virtual network I didn't test any of this; it's included more for completeness' sake.

 

All in all it doesn't look too bad. But I said I didn't get it working like this. Don't mistake me for one who doesn't try :)

Debugging and code level changes

During the initial testing you need to step into the Azure Portal, go to "Log Stream" for each individual app, and watch the output to see if there are any errors.

Log_Stream_01.png
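If you prefer a terminal over the portal, the Azure CLI can tail the same output (the app and resource group names below are assumptions - substitute your own):

az containerapp logs show --name basket-api --resource-group rg-cae --follow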

 

Some are easy to fix - for instance, if one of the apps relying on RabbitMQ is ready before the eventbus app, it will not be able to connect. It should sort itself out given some time, but if not you might jumpstart things by restarting the revision:

Revision_Restart_01.png
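The restart can be scripted as well - a sketch with hypothetical names (you can look up the revision name with az containerapp revision list first):

az containerapp revision restart --name basket-api --resource-group rg-cae --revision basket-api--rev1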

Other errors…

 

The Identity API has an issue with encryption keys as well as serving up plain http in the wrong places.

  • The encryption keys issue is fixed by deleting the /keys subfolder before building the container image. (Republish to ACR if you already pushed it.) You should be able to add an ignore directive to the Dockerfile as well, but for unknown reasons that didn't work for me.
  • Forcing https can be done by adding a few lines to Program.cs:

 

app.UseStaticFiles();

app.Use((context, next) =>
{
    context.Request.Scheme = "https";
    return next(context);
});

 

 

These fall more into the category of workaround than proper fix, but that's what we'll use here.

 

The OrderProcessor Dockerfile was for some reason built by Visual Studio with an incorrect base image. Change the Dockerfile to look like the others:

 

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080

 

Web App needs the same https fix as the Identity API in Program.cs.

The Catalog API exposes an API for getting all items:

 

app.MapGet("/items", GetAllItems);

 

This throws an HTTP 500 (tested via Swagger to rule out the Web App), but returns working JSON after changing it to the following:

 

app.MapGet("items", async (CatalogContext context) => await context.CatalogItems.OrderBy(x => x.Name).ToListAsync());

 

There are a number of API variants exposed, but I somehow feel I'm not fixing the root cause here, so it's something to be aware of.

 

Related to the Catalog, the web app misbehaves and throws an error saying it's not able to parse the JSON, so I implemented the bluntest workaround I could find and hardwired a single item, skipping the API call:

 

protected override async Task OnInitializedAsync()
{
    //Hardcoding a catalog with one item
    CatalogBrand brand = new(1, "Daybird");
    CatalogItemType itemType = new(1, "Footwear");
    CatalogItem data = new(99, "Wanderer Black Hiking Boots",
        "Daybird's Wanderer Hiking Boots in sleek black are perfect for all your outdoor adventures. These boots are made with a waterproof leather upper and a durable rubber sole for superior traction. With their cushioned insole and padded collar, these boots will keep you comfortable all day long.",
        109, "99.webp", 4, brand, 3, itemType);
    List<CatalogItem> catalogitems = new List<CatalogItem>();
    catalogitems.Add(data);
    catalogResult = await Task.FromResult(new CatalogResult(1, 5, 3, catalogitems.ToList()));
}

 

You should be able to log in, check your orders and cart (both empty), and see the main page:

webapp_01.png

 

The database story felt a little non-optimal as well. Sure, mostly Entity Framework will figure out migration status and seeding, but it seems it's not a sure thing. Whether those are technically bugs, or whether it would have worked better with PostgreSQL as a service (either the built-in offering in Container Apps or a proper Azure service), I do not know. For a stable environment I would probably look into a better approach - possibly splitting the deployment in two, where PostgreSQL, RabbitMQ and Redis are spun up first along with a SQL script, before doing the microservices. (Then again - would I use PostgreSQL and RabbitMQ, or Cosmos/SQL & Service Bus?)

 

Why not just fix the remaining bugs? (There could be more than the ones listed.) Ideally I would. I skipped this for a couple of reasons:

  • I did not write this code, so while I can browse through it and do regular debugging, it's still me looking in from the outside. By making "errors go away" I could even be introducing new bugs.
  • The intent of this series is to get developers going with the infra part of deployment. You are probably better than me at both the code you write and some of the details in eShop.
  • Even if the cause is an incorrect environment variable being injected or something similar, the solution still deploys and demos the basic concepts.

 

If one has to rewrite substantial parts to work in the cloud, there's something else amiss too. But of course - if an eagle-eyed reader spots something I didn't, I'll be happy to correct things in the repo. It could be a "silly me" issue.

 

The rest of both the infra and app code can be found here: https://github.com/ahelland/infra-for-devs 

Concluding remarks

This brings us to the conclusion of our little experiment. It feels a little anticlimactic that we didn't get everything to work, but there are still plenty of learnings here.

 

The process is the same if you have a solution you want to deploy to Azure Container Apps. You can get the Container Environment going the same way, and you can deploy the apps and services the same way. I could have happy-pathed things and deployed a simple Hello World type backend + frontend app - that would have worked without problems. But then you would have run into the problems instead when trying to scale into more code. Better to demonstrate that there are things you need to be aware of.

 

Does it leave me with more opinions on Aspire and the Azure Developer CLI? Aspire is still a bit rough around the edges, and that's probably why it's in preview. To be fair, Aspire has not advertised itself as a tool for deploying to the cloud; the purpose is to bring a better inner loop for the developer locally, which I believe it does. The secret sauce that makes you wonder what is happening behind the scenes is a bit more annoying.

 

The dev CLI is more mature, but it's challenging when you have something that doesn't fit the templates that come out of the box. If you put it to work on something that produces deployable Bicep, you will notice that it generates code structured differently from my samples. I don't present my code as the one correct solution, so I'm OK with that, but I've tried to present a setup that is less tightly coupled and abstracts the different resources as layers/levels - so that's just the way things are.

 

I hope you enjoyed this mini-series on infra for devs and that you agree there is no lack of code in this brave new world :)
