Exam PL-600 Revision Notes: Designing a Data Model for the Power Platform


Welcome to the fifth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Previously, we went into detail about how we can approach designing our Power Platform business application as part of our first look at the Architect a solution area of the exam. We now move on to the next area of the exam, Design the data model, which focuses on the following topics:

Design the data model

  • design tables and columns
  • design reference and configuration data
  • design relationships and relationship behaviors
  • determine when to connect to external data versus import data
  • design data models to address complex sets of requirements

Data forms the cornerstone of any solution leveraging the Power Platform. Whether we leverage Microsoft Dataverse as our data source or bring in others via the rich array of connectors available, the solution architect must carefully consider all aspects of our overall data model and how this fits within the platform. With this in mind, let’s jump in and look at the key elements we need to appreciate.

The aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it’s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.

Tables and Columns Overview

More often than not, our Power Platform solution will involve Dataverse. And, even though we may leverage objects within the Common Data Model (CDM), there will still be a requirement to tailor Dataverse to suit our particular purposes. As such, a solution architect needs to have an excellent grasp of building the most effective Dataverse model, using their detailed knowledge of how this service works. An understanding of the following Dataverse concepts will be essential as part of this:

  • Types of Tables: This includes knowing which tables are included “out of the box” as part of the CDM, the difference between standard and activity tables, and how table ownership settings affect how we leverage a table within our various applications.
  • Column Data Types: A significant benefit of using Dataverse is that we can quickly select the most appropriate data type for the information we plan to store. These types broadly mirror the data types we would expect when working with SQL Server and .NET. A solution architect should be well placed to elaborate on the various options available to us, including their limitations and the intricacies of their behaviour. For example, there is no excuse for not realising that our choice column types are stored as integers within the underlying database (see the sketch after this list).
  • Standard, Calculated & Rollup Columns: For most situations, a standard column type - one that simply stores a user-defined input - will be sufficient. For more complex requirements, the solution architect should appreciate the capabilities on offer as part of our calculated and rollup columns. We will leverage calculated columns to automate a straightforward or semi-complex calculation, evaluated whenever the row is retrieved. Rollup columns, in comparison, will almost always involve a situation where we need to aggregate numerous child rows against a parent row. It will be vital to know the differences between them, along with details such as the fact that rollup column values are only refreshed periodically (either on demand by a user or via a background system job).
  • Global Choices: This component type will be most appropriate when we plan to leverage several choice column types across different tables, and we want to ensure that all of these share the same selection options. By using a global choice, we centralise the management of these options into a single component.
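
To make the “choices are stored as integers” point concrete, here is a minimal sketch of querying Dataverse through the Web API using Python and the requests library. The environment URL, access token and the custom choice column new_category are placeholder assumptions; the raw integer comes back by default, and the display label only appears when we request formatted-value annotations.

```python
# A minimal sketch, assuming a placeholder environment URL, access token and a
# hypothetical custom choice column called "new_category" on the account table.
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder environment URL
TOKEN = "<access-token-acquired-via-azure-ad>"  # in practice, obtain this with MSAL

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    # Ask Dataverse to return the display labels alongside the raw stored values.
    "Prefer": 'odata.include-annotations="OData.Community.Display.V1.FormattedValue"',
}

resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts",
    headers=headers,
    params={"$select": "name,new_category", "$top": "1"},
)
resp.raise_for_status()
row = resp.json()["value"][0]

# The choice column itself is an integer in the underlying database...
print(row["new_category"])  # e.g. 100000001
# ...and the label is only available via the formatted-value annotation.
print(row["new_category@OData.Community.Display.V1.FormattedValue"])  # e.g. "Gold"
```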

I could go on, but this represents the “core” elements that we must know at a minimum. With our detailed knowledge of Dataverse, we can then identify the existing CDM objects we plan to leverage and what bespoke development work will be required to fill in any gaps. As part of this, it may be prudent to look at mocking up our entire data model as part of a diagram…but more on this subject later. 😀

The Importance of Reference & Configuration Data

As we configure our Dataverse environment, we will often create tables whose sole purpose is to store data that supports how our solution operates. For example, we could set up a lookup table instead of a choice column to list various options our users can select at the model-driven app form level. Or we could have plug-ins implemented that have their behaviour modified based on configuration data we define in the system. Regardless of the scenario, we need to ensure, as solution architects, that we have not only modelled these table types correctly but also considered how this data gets applied to our different environments. We might often reference these table types via components such as classic workflows or business rules. This means we start to have dependencies in our environments on row(s) with a specific GUID value; simply recreating these rows manually in each environment won’t be sufficient, as they would be assigned new GUIDs. Therefore, an alternate approach will be needed.

To migrate this data easily into our downstream environments, solution architects should encourage the team to implement Configuration Migration profiles/packages and, where possible, incorporate this into any Azure DevOps deployment pipelines via the dedicated task(s) available to us. For more complex scenarios, which might involve transforming data from external systems, we can consider leveraging data flows or Azure Data Factory and, in particular, mapping data flows to meet the requirement. Automation is critical throughout all of this, and we should always apply reference/configuration data during any solution deployment we perform.
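
As a small illustration of why fixed GUIDs matter, the sketch below (Python + requests) upserts a configuration row against an explicit GUID via the Dataverse Web API, so the same row identifier exists in every environment. The configuration table new_appconfigs and its columns are hypothetical names; in practice, the Configuration Migration tool or a pipeline task would handle this for us.

```python
# A minimal sketch, assuming a hypothetical configuration table whose entity set
# name is "new_appconfigs" and hypothetical columns "new_name" / "new_value".
import json
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder environment URL
TOKEN = "<access-token-acquired-via-azure-ad>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# The same GUID is deployed to every environment, so workflows, business rules
# and plug-ins that reference this row by ID keep working after deployment.
CONFIG_ROW_ID = "0f9bdb3c-2f2a-4f0e-9c3b-5a1d2e3f4a5b"

payload = {"new_name": "ApiRetryLimit", "new_value": "5"}

# PATCH against an explicit key performs an upsert: the row is created if it does
# not exist yet, or updated in place if it already does.
resp = requests.patch(
    f"{ORG_URL}/api/data/v9.2/new_appconfigs({CONFIG_ROW_ID})",
    headers=headers,
    data=json.dumps(payload),
)
resp.raise_for_status()
print("Configuration row upserted:", resp.status_code)  # 204 No Content on success
```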

Understanding Dataverse Relationships

Hopefully, if we have studied and passed PL-200 or PL-400, this is a topic we should already have an excellent appreciation of. But let’s spend a few moments recapping the fundamentals. 😉 Relationships provide us with the primary mechanism to model our Dataverse tables effectively. They also ensure that our business objects are well represented within Dataverse by mandating, for example, that a single Account can have many Contacts. There are two types of relationships we can create in Dataverse:

  • One to many (1:N) / Many to one (N:1): Don’t get confused by the different phraseology, as all we’re describing here is the relationship’s direction. The base functionality of having a lookup column on our N table to represent our 1 row is precisely the same.
  • Many to many (N:N): In this scenario, an intersect table records each instance of the N:N relationship. There will be specific scenarios where this relationship type is most prudent - for example, when we want to record which Contacts have attended which Events: each Event could have zero, one, or many Contacts, and each Contact could attend many Events (a Web API sketch of this scenario follows below).
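
For illustration, here is a minimal sketch (Python + requests) of how an N:N association could be created through the Dataverse Web API. The event table new_events and the collection-valued navigation property new_event_contact are hypothetical names for this Event/Contact scenario, and the URL, token and GUIDs are placeholders.

```python
# A minimal sketch, assuming a hypothetical "new_events" table and a hypothetical
# collection-valued navigation property "new_event_contact" for the N:N
# relationship between Events and Contacts.
import json
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder environment URL
TOKEN = "<access-token-acquired-via-azure-ad>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

event_id = "11111111-1111-1111-1111-111111111111"    # placeholder GUIDs
contact_id = "22222222-2222-2222-2222-222222222222"

# POSTing a $ref against the relationship's navigation property adds a row to the
# intersect table that Dataverse manages behind the scenes.
resp = requests.post(
    f"{ORG_URL}/api/data/v9.2/new_events({event_id})/new_event_contact/$ref",
    headers=headers,
    data=json.dumps({"@odata.id": f"{ORG_URL}/api/data/v9.2/contacts({contact_id})"}),
)
resp.raise_for_status()
print("Association created:", resp.status_code)  # 204 No Content on success
```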

As well as knowing these important theoretical details of how relationships work, it’s expected that a solution architect can go further and understand elements such as:

  • Behaviours and Actions: These will control what occurs when we perform a specific action against a parent row. For example, if we delete an Account row, do we also want to delete all Contacts linked to it? Or is it better to preserve these as “orphan” rows in the system? Considering the different scenarios and recommending the most appropriate behaviour type will be crucial for a solution architect.
  • Manual vs. Native N:N Relationships: Most commonly, we can look to implement our N:N relationships natively by allowing Dataverse to create and manage the background intersect table for us. However, this can sometimes present challenges from an integration standpoint, as it’s notoriously difficult to target SDK operations against this hidden table type. In addition, this table cannot be customised, meaning we miss a valuable opportunity to decorate this table with additional metadata properties. When these become concerns, we can instead create a manual N:N relationship by making the intersect table ourselves and configuring the appropriate 1:N relationships against this.
  • Hierarchical Relationships: It’s possible to configure 1:N relationships to be self-referential on the current table. This could be useful when we need to, for example, model and display our table data as part of a graphical hierarchy within a model-driven Power App (a query sketch follows this list). As solution architects, we should know how to configure and set this up on any table.
  • Connection Roles: Often overlooked, connection roles are technically a type of relationship; as such, making sure we understand how to set these up will be essential. From a usability standpoint, they are most appropriate for situations where we cannot anticipate how objects will need to be related to each other, but where we still require a flexible mechanism for making these links in the system.
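
Relating to the hierarchical relationships point above, the sketch below (Python + requests) uses the Web API’s Under query function to return every account sitting anywhere beneath a given parent account in the hierarchy. The environment URL, token and parent GUID are placeholders.

```python
# A minimal sketch, assuming a placeholder parent account GUID; the standard
# account table supports a hierarchical (self-referential) relationship via its
# parent account lookup.
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder environment URL
TOKEN = "<access-token-acquired-via-azure-ad>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

parent_account_id = "33333333-3333-3333-3333-333333333333"

# The "Under" query function walks the configured hierarchy and returns every
# descendant row, however many levels deep it sits.
query = (
    "?$select=name"
    "&$filter=Microsoft.Dynamics.CRM.Under("
    f"PropertyName='accountid',PropertyValue='{parent_account_id}')"
)

resp = requests.get(f"{ORG_URL}/api/data/v9.2/accounts{query}", headers=headers)
resp.raise_for_status()
for row in resp.json()["value"]:
    print(row["name"])
```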

The genuine expectation throughout all of this is that we’ve “been there, done that” on previous projects; therefore, on any new project we are part of, we can guide the project team on things to watch out for.

To Import or Extend? How to Deal with External Data

Once we’ve established what data we plan to work with, we can decide the best route to bring it into the fold of the Power Platform. The options available to us will be primarily dictated by whether we want to leverage Dataverse or not, and, as the solution architect, we need to steer towards the best tool based on the data we are consuming. Typically, our choice will fall to one of the following four options:

  • Data Import Wizard: Provided we have a model-driven Power App set up that exposes the table we want to work with, we can very easily import data into that table in Dataverse at any time. As part of this, we can either use a previously exported template or import data in CSV / XML format. This option is best if we plan to migrate fully into Dataverse or import data that is already in a structure that maps easily into Dataverse.
  • Data Flows: We alluded to this option earlier, and it provides a more enhanced experience than the data import wizard. This option is particularly beneficial if we need to manipulate our data before importing it into Dataverse, as we can leverage the full capabilities of Power Query when using data flows. In addition, data flows support both one-time loads and continual loads/synchronisation from our external data source, making them invaluable if we need to perform iterative loads of data.
  • Connectors: This option will be most appropriate if we plan to build out a Power Automate cloud flow or canvas Power App. Using one of the 350+ connectors available to us, it becomes effortless to bring our external data into these systems. And if we can’t find a connector to suit our scenario, we can look to create our own with relative ease.
  • Virtual Tables: For situations where we want to surface data from external systems so that it behaves exactly like a standard Dataverse table, we can implement Virtual Tables to achieve this requirement (see the sketch after this list). What’s even better is that records we pull in via Virtual Tables can support full Create, Read, Update and Delete (CRUD) operations. The only major downside with virtual tables is that we will almost always have to get a .NET developer involved to build them out, because the custom data provider they rely on can only be authored via a .NET plug-in.
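
As a small illustration of the “behaves exactly like a standard table” point, the sketch below (Python + requests) queries a hypothetical virtual table, new_externalorders, through the Web API in precisely the same way we would query any native Dataverse table; the environment URL, token and column names are placeholder assumptions.

```python
# A minimal sketch, assuming a hypothetical virtual table whose entity set name
# is "new_externalorders" with hypothetical columns "new_name" / "new_ordertotal".
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder environment URL
TOKEN = "<access-token-acquired-via-azure-ad>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# The request looks identical to querying any native Dataverse table; the custom
# data provider (a .NET plug-in) translates it into calls against the external
# system, so callers never see the difference.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/new_externalorders",
    headers=headers,
    params={"$select": "new_name,new_ordertotal", "$top": "5"},
)
resp.raise_for_status()
for row in resp.json()["value"]:
    print(row["new_name"], row.get("new_ordertotal"))
```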

Demo: Working with External Data Sources in the Power Platform

To better understand each of the above options in detail, check out the video below, where I dive into each option:

Fail to Plan, Plan to Fail: The Importance of Data Model Diagrams

Throughout any IT project, it can be incredibly tempting to dive straight in and start building a solution. This temptation is greater than ever when it comes to the Power Platform, as there are minimal barriers to going in and starting to build. Notwithstanding all this, it remains imperative to fully document and diagram the data model we plan to use within the Power Platform. This diagram can represent the envisioned structure that we hope to build out using Dataverse or represent the external data sources that we plan to bring into our solution. The advantages of doing this cannot be overstated:

  • A diagram can be leveraged to represent the business objects we plan to use, the appropriate attributes to record and the different relationships between objects, to help validate with our stakeholders that we are building a solution that will be fit for purpose.
  • The diagram can indicate which components we plan to leverage that may already exist in Dataverse and which will require bespoke customisation.
  • Typically, the actual work of building out our data model will be handled by someone else on the project team. By building a precise diagram that communicates how our data model needs to look, the solution architect can ensure this work proceeds smoothly and minimise the risk of mistakes or misunderstandings.
  • Any resulting diagram we create can later be included as part of any formal documentation for our solution, such as for a Low-Level Design (LLD) document.

A standard format we can adhere to here is an Entity Relationship Diagram or, specifically, a Crow’s Foot Notation Diagram. There are various tools we can use to build this out, but given this is a Microsoft-focused technology blog, it would be remiss of me not to advocate using Microsoft Visio. 😏

Demo: Building Crow’s Foot Notation Diagrams using Microsoft Visio

In this video, I demonstrate how we can use Microsoft Visio to build out a Crow’s Foot Notation diagram that we can then use as the basis of any future data modelling activities:

One of the all-time boons of the Power Platform is that it’s pretty agnostic when it comes to our business data; therefore, as solution architects, we can leverage it to deliver a product that fits the business, rather than forcing the business to fit the product. Next time in the series, we’ll be moving on to look at integrations and how we can approach designing them across all aspects of the Power Platform.
