Announcing Unlimited Public Preview of Metadata Caching for Azure Premium SMB/REST File Shares
Azure Files is excited to announce the Unlimited public preview of Metadata Caching for the premium SMB/REST file share tier.
Unlimited Public Preview allows customers to self-serve the onboarding process through feature registration (AFEC) in the supported regions.
Feature Overview
Metadata Caching is an enhancement aimed at reducing metadata latency for file workloads running on Windows/Linux clients. In addition to lowering metadata latency, workloads will observe more consistent latency, making metadata-intensive workloads more predictable and deterministic. Reduced metadata latency also translates to more data IOPS (reads/writes) and throughput. Once the Metadata Caching feature is registered in your subscription, there is no additional cost or operational management overhead when using this feature.
The following metadata APIs benefit from Metadata Caching:
- Create: Creating a new file; up to 30% faster
- Open: Opening a file; up to 55% faster
- Close: Closing a file; up to 45% faster
- Delete: Deleting a file; up to 25% faster
Workloads that perform a high volume of metadata operations (creating/opening/closing/deleting) against an SMB/REST premium file share will see the biggest benefit, compared to workloads that are primarily data IO (e.g., databases).
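To get a feel for how much of your workload sits on the Create/Open/Close/Delete path, a small shell loop against a mounted share is enough. This is a minimal sketch, not a benchmark tool: `SHARE_DIR` is a placeholder for wherever your premium file share is mounted, and it defaults to the current directory so the loop can be dry-run locally.

```shell
# Minimal metadata-churn sketch. SHARE_DIR is a placeholder: point it at a
# mounted Azure premium file share (e.g. /mnt/myshare); it defaults to the
# current directory for a local dry run.
SHARE_DIR="${SHARE_DIR:-.}"
TEST_DIR="$SHARE_DIR/mdtest.$$"
N=200

mkdir -p "$TEST_DIR"
start=$(date +%s)

i=0
while [ "$i" -lt "$N" ]; do
  f="$TEST_DIR/f$i"
  : > "$f"               # Create (opens and closes a new file)
  cat "$f" > /dev/null   # Open + Close
  rm "$f"                # Delete
  i=$((i + 1))
done

end=$(date +%s)
echo "$N create/open/close/delete cycles in $((end - start))s"
rm -rf "$TEST_DIR"
```

Running the loop once before and once after registering the feature gives a rough per-cycle delta that is dominated by exactly the metadata latencies listed above.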
Examples of metadata-heavy workloads include:
- Web/App Services: Frequently accessed files for CMS/LMS services such as Moodle/WordPress.
- Indexing/Batch Jobs: Large-scale processing using Azure Kubernetes Service or Azure Batch.
- Virtual Desktop Infrastructure: Azure Virtual Desktop/Citrix users with home directories or VDI application management needs.
- Business Applications: Custom line-of-business or legacy applications with "lift and shift" needs.
- CI/CD - DevOps Pipelines: Building, testing, and deployment workloads such as Jenkins open-source automation.
GitHub Solutions using Metadata Caching
- Moodle deployment + Azure Premium Files with Metadata Caching
Moodle consists of server hosting (cloud platforms), a database (MySQL, PostgreSQL), file storage (Azure Premium Files), and a PHP-based web server. It is used for course management (uploading materials, assignments, quizzes), user interaction (students accessing resources, submitting work, and discussions), and performance monitoring (tracking progress, reporting).
Metadata Cache Benefit: Provides a faster and more consistent user experience.
- GitHub Actions + Azure Premium Files with Metadata Caching
GitHub Actions is an automation tool integrated with GitHub that allows developers to build, test, and deploy code directly from their repositories. It uses workflows, defined in YAML files, to automate tasks such as running tests, building software, or deploying applications. These workflows can be triggered by events like code pushes, pull requests, or scheduled times.
Metadata Cache Benefit: Shorter build and deployment times when using Azure Premium Files with Metadata Caching as the build artifact store.
Expected Performance Improvements with Metadata Caching
- 2-3x more consistent metadata latency
- Up to 3x higher scale for metadata operations
- Metadata latency improved by more than 30%
- Up to 60% higher IOPS and bandwidth
How to get started
To get started, register your subscription for the Metadata Caching feature using the Azure portal or PowerShell. The preview is currently available in the following regions:
- Australia Central
- Jio India West
- South India
- Mexico Central
- Norway East
- Poland Central
- Spain Central
- Sweden Central
- Switzerland North
- UAE North
- West US 3
Note: As we extend region support for the Metadata Cache feature, Premium File Storage Accounts in those regions will be automatically onboarded for all subscriptions registered with the Metadata Caching feature.
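Alongside the portal and PowerShell paths mentioned above, AFEC feature registration can also be scripted with Azure CLI. The sketch below shows the generic flow; the feature name is a placeholder (substitute the exact AFEC flag name shown in the preview documentation), and the commands require an authenticated `az` session against your subscription.

```shell
# Sketch of AFEC feature registration with Azure CLI. The feature name is a
# PLACEHOLDER -- substitute the exact flag name from the preview documentation.
az feature register \
  --namespace Microsoft.Storage \
  --name "<MetadataCachingFeatureName>"

# Registration is asynchronous; poll until the state reads "Registered".
az feature show \
  --namespace Microsoft.Storage \
  --name "<MetadataCachingFeatureName>" \
  --query properties.state

# Re-register the resource provider to propagate the new feature flag.
az provider register --namespace Microsoft.Storage
```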
Who Should Participate?
Whether you are bringing a new workload to file shares or improving an existing one, any workload whose usage pattern includes metadata operations is encouraged to onboard, especially metadata-heavy workloads that consist primarily of Create, Open, Close, or Delete requests.
To determine whether your workload contains metadata operations, you can use Azure Monitor to split transactions by the API dimension, as described in the following article
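One way to do that split from the command line is to query the Transactions metric on the account's file service and filter by the ApiName dimension with Azure CLI. This is a sketch: the subscription, resource group, and account names in the resource ID are placeholders for your own values.

```shell
# Sketch: break down file-share transactions by API using Azure Monitor.
# All identifiers in RESOURCE_ID are placeholders for your own resources.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default"

az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric "Transactions" \
  --interval PT1H \
  --aggregation Total \
  --filter "ApiName eq '*'"
```

A high share of Create/Open/Close/Delete transactions in the output indicates a metadata-heavy workload that should benefit from the cache.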
Thanks
Azure Files Team