Dataverse Capacity Planning: Estimation Techniques Every Architect Should Know
The Dataverse Capacity Estimation Model is used to predict how much storage an organization will need as data grows over time. It helps estimate database, file, and log capacity based on factors such as record volumes, data growth, retention periods, and system activities like auditing and automation. This model is important because Dataverse storage is shared and license-based, and running out of capacity can impact business operations. By using a capacity estimation model early in the design phase, organizations can plan storage accurately, control costs, avoid unexpected capacity issues, and ensure that applications remain performant and scalable as usage increases.
Step 1: Identify Data Domains
Break data into categories that map to Dataverse's three storage types:
- Database: structured records (e.g., Cases, Accounts, Contacts)
- File: attachments, images, and documents
- Log: audit history and trace logs
Step 2: Estimate Record Volume
For each domain, forecast the number of records created per year, factoring in expected growth.
Step 3: Estimate Average Record Size
Measure or estimate the average size of a record in each table, then add 20–30% overhead for indexes & metadata.
Step 4: Database Capacity Formula
DB Capacity (GB) = (Records × Avg Size × Retention Years) × 1.3
Example (Cases):
1,000,000 × 10 KB × 2 yrs × 1.3 ≈ 26 GB
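The Step 4 formula can be sketched as a small helper for quick what-if checks (the function name and the decimal KB→GB conversion are illustrative assumptions, not part of the model):

```python
def estimate_db_capacity_gb(records, avg_record_kb, retention_years, overhead=1.3):
    """Database capacity in GB: records x avg size x retention years,
    with a 1.3 multiplier for the ~30% index/metadata overhead."""
    total_kb = records * avg_record_kb * retention_years * overhead
    return total_kb / 1_000_000  # decimal units: 1 GB = 1,000,000 KB

# Worked example from above: 1M case records, 10 KB each, 2-year retention
print(estimate_db_capacity_gb(1_000_000, 10, 2))  # 26.0
```

Adjusting the `overhead` argument lets you test the 20% end of the index/metadata range as well.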
Step 5: File Capacity Formula
File Capacity = (Number of files × Avg file size)
Example:
500,000 attachments × 1 MB = 500 GB
For large attachment volumes, offload files to SharePoint to reduce Dataverse file capacity consumption.
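The file capacity calculation is a straight multiplication; a minimal sketch (function name and decimal MB→GB conversion are assumptions for illustration):

```python
def estimate_file_capacity_gb(num_files, avg_file_mb):
    """File capacity in GB: number of attachments x average file size,
    using decimal units (1 GB = 1,000 MB)."""
    return num_files * avg_file_mb / 1_000

# Worked example from above: 500,000 attachments at 1 MB each
print(estimate_file_capacity_gb(500_000, 1))  # 500.0
```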
Step 6: Log Capacity Formula
Rule of thumb:
Log capacity ≈ 15–30% of DB annually
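Because the rule of thumb is a range rather than a point estimate, it is useful to compute both ends of the band. A minimal sketch (the function name and default percentages simply encode the 15–30% rule above):

```python
def estimate_log_capacity_gb(db_capacity_gb, low=0.15, high=0.30):
    """Annual log capacity as a 15-30% band of database capacity."""
    return db_capacity_gb * low, db_capacity_gb * high

# Using the 26 GB database estimate from Step 4
lo, hi = estimate_log_capacity_gb(26)
print(f"{lo:.1f}-{hi:.1f} GB per year")  # 3.9-7.8 GB per year
```

Planning against the high end of the band leaves headroom if auditing or automation activity grows faster than expected.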
Summary:
The Dataverse Capacity Estimation Model helps organizations predict how much storage they will need as their applications grow. It estimates capacity by considering factors such as the number of records, average record size, data growth rate, retention period, attachments, and system logs. By planning database, file, and log storage separately, the model enables better cost control, prevents capacity-related disruptions, and supports informed design decisions. This proactive approach ensures Dataverse environments remain stable, scalable, and well-governed as business usage increases.