[Mitigated] Azure Lab Services - Lab Plan Outage
Azure Lab Services is experiencing an outage that is affecting Lab Plans, but not Lab Accounts. This outage intermittently impacts all operations in the following regions:
- Australia East
- East US
- North Europe
- South Central US
- Southeast Asia
- UAE North
- UK South
- West Europe
Impacted customers are encouraged to use unaffected regions as a workaround. We apologize for any inconvenience this may cause.
Update 5/13: A potential hotfix is being tested. We have also temporarily disabled lab schedules, which means that VMs will not automatically start or stop based on their schedules. Please refer back to this blog post for updates.
Update 5/14 (8:00 AM Central): The engineering team is still working on a hotfix and validating that it addresses the issue before rolling it out incrementally.
Update 5/14 (12:00 PM Central): The root cause has been confirmed, and further testing of the hotfix is expected to be completed within the next 1-2 hours in preparation for a widespread roll-out. We expect the hotfix to be rolled out shortly after that. If you have a tight timeline and are currently unable to use labs, we still recommend recreating labs in an unimpacted region if possible.
Update 5/14 (4:00 PM Central): The engineering team has completed testing the hotfix and verified that it addresses the underlying issue causing the outage. They are in the process of rolling out the hotfix first to Southeast Asia, which is one of the impacted regions. Within the next few hours, we'll provide an update on when you can expect the hotfix to be deployed to the other impacted regions.
Update 5/15 (1:00 AM Central): The initial hotfix has been deployed, and although it addressed the underlying issue, regional processing isn't recovering as expected. Upon further investigation, we uncovered an additional underlying issue that is slowing down processing of the backlog of operations. The engineering team is actively working on a new hotfix for this issue.
Update 5/15 (9:00 PM Central): We recognize the frustration and inconvenience this outage is causing for our customers with labs in the impacted regions, and we sincerely apologize. We have made positive progress in our investigation and validated that the outage no longer exists in the following regions; however, you may see slower lab creation and VM start/stop performance:
- Australia East
- Australia Southeast
- Brazil South
- Canada Central
- Canada East
- Central India
- Central US
- East Asia
- East US
- France Central
- Germany West Central
- Japan East
- Korea Central
- North Central US
- North Europe
- Norway East
- South Africa North
- Southeast Asia
- Switzerland North
- UAE North
- UK South
- UK West
- West Central US
- West Europe
- West US
For the remaining impacted regions, please know that we have escalated the matter and several engineering teams are working diligently to explore mitigation options. For transparency, we anticipate that the investigation and resolution may take one additional business day for the following regions that are still impacted:
- East US 2
- South Central US
Update 5/16 (8:00 PM Central): All regions are running and processing as expected, including South Central US and East US 2 (mentioned above). We have also confirmed that East US is processing jobs as expected (there were confirmed slowdowns earlier today). One side effect of the outage is failed operations: you may see VMs failing to start, labs failing to be created, labs failing to publish, and so on. Please retry those operations. If you encounter any additional issues in any region with Azure Lab Services, please open an Azure support ticket for us to investigate.
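If you script against the service and want to absorb these transient failures automatically, a simple retry wrapper is usually enough. The sketch below is a generic retry-with-backoff pattern, not part of any Lab Services SDK; `start_lab_vm` is a hypothetical placeholder for whatever start, publish, or create call failed for you.

```python
import random
import time

def retry_with_backoff(operation, attempts=5, base_delay=30, max_delay=300):
    """Retry a flaky lab operation (start VM, publish lab, etc.) with
    exponential backoff and jitter. `operation` is any zero-argument
    callable that raises an exception on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as exc:  # narrow this to your SDK's error type
            if attempt == attempts:
                raise
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            delay += random.uniform(0, delay / 2)  # jitter avoids synchronized retries
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Usage (hypothetical helper): retry_with_backoff(lambda: start_lab_vm("MyLab", "vm-0"))
```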
Update 5/16 (9:00 PM Central): One last update: we wanted to let you know that schedules have been re-enabled in all regions. Please open an Azure support ticket if you see any issues!
Update 5/20 (3:00 PM Central): We wanted to post an update to let you know that we've received several Azure support tickets relating to slowdowns in the East US region. The slowdowns appear to be sporadic (not everyone encounters them), and the engineering team is actively investigating. The best route for support is opening an Azure support ticket; we will provide an update here once the slowdowns are resolved.
Update 5/22 (11:00 AM Central): The engineering team continues to investigate the sporadic slow operations in the platform. If you have any stuck operations (virtual machines stuck starting, stuck stopping, etc.) or you see very slow operations (more than 15 minutes), the best next step is to notify the team by opening an Azure support ticket.
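If your automation polls lab operations, a small timing guard can tell you when an operation has crossed that 15-minute threshold and is worth a ticket. This is a minimal sketch assuming you supply your own `check_done` callable (hypothetical) that reports whether the operation has finished; it is not part of any Lab Services API.

```python
import time

STUCK_THRESHOLD_SECONDS = 15 * 60  # per the guidance above: more than 15 minutes is "very slow"

def wait_or_flag(check_done, poll_interval=30):
    """Poll `check_done()` until it returns True; return False and print a
    warning if the operation exceeds the 15-minute threshold."""
    start = time.monotonic()
    while not check_done():
        elapsed = time.monotonic() - start
        if elapsed > STUCK_THRESHOLD_SECONDS:
            print(f"Operation still running after {elapsed / 60:.0f} min; "
                  "consider opening an Azure support ticket.")
            return False
        time.sleep(poll_interval)
    return True

# Usage (hypothetical helper): wait_or_flag(lambda: get_vm_state("vm-0") == "Running")
```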
Update 5/23 (6:00 PM Central): The engineering team has identified scalability issues with the service hardware that are resulting in intermittent slow operations. We are now conducting internal tests with upgraded hardware and expect to implement the changes by 5/31.