Exam PL-600 Revision Notes: Designing a Power Platform Solution
Welcome (somewhat belatedly) to the fourth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Last time in the series, we evaluated mechanisms to capture requirements and perform fit/gap analysis for our Power Platform solution. Today, we’ll be moving on to our next exam area and the first topic within it, Architect a solution. This area has the highest weighting across the whole exam, a whopping 40-45%, and as part of our first topic, Lead the design process, Microsoft expects us to demonstrate knowledge of the following:
Lead the design process
- design the solution topology
- design customizations for existing apps
- design and validate user experience prototypes
- identify opportunities for component reuse
- communicate system design visually
- design application lifecycle management (ALM) processes
- design a data migration strategy
- design apps by grouping required features based on role or task
- design a data visualization strategy
- design an automation strategy that uses Power Automate
As alluded to previously in the series, a solution architect is our Subject Matter Expert (SME) when it comes to the Power Platform, with the natural expectation that we’ve grasped many seemingly unrelated but vital concepts that will have a bearing on our end product. Let’s jump into this first area to elaborate on the themes Microsoft expects us to know for the exam.
The aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it’s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.
Determining a Solution Topology
Solutions form the cornerstone of any successful Power Platform project. They allow us to introduce important Application Lifecycle Management (ALM) concepts to our project and act as a mechanism to quickly and straightforwardly refer back to the specific features our application(s) rely on to work successfully. The solution architect on the project will need to decide how many solutions will need to be set up to best fulfil the requirements and support any future growth aspirations for our business solution. As part of this, some of the following considerations come into play:
- Managed vs Unmanaged: We need to appreciate fully the differences between both solution types, the most appropriate place to use each one, and the implications that layering (more on this shortly) can have on our customizations. If you haven’t already, make sure you brush up on this Microsoft Docs article to learn more.
- Solution Publisher: All customizations performed against Microsoft Dataverse will be impacted by whichever publisher we’ve chosen for our solution. We should always create our own bespoke publisher that reflects the organisation that the solution belongs to.
- Layering: As multiple managed solutions and unmanaged customizations get applied to Microsoft Dataverse, what the user ultimately sees can change. As a general rule of thumb, managed solutions will typically operate on a “last one wins” basis, and unmanaged customizations will always take precedence. Microsoft expects us to fully understand the impact layering can have on our system and how the platform merges particular managed layers to resolve potential conflicts.
- Dependencies: As we introduce more solutions into our environment, the risk of dependency issues causing them to fail to import cleanly increases significantly. Being able to identify dependencies - and knowing how to mitigate or remove them - will be a reasonably typical part of the day-to-day work of a solution architect.
- Segmentation: Proper segmentation of our solutions can significantly reduce the time it takes to import customizations and avoid the risk of any unplanned customizations getting deployed out early. Understanding how to work with each of the different segmentation options for our solutions will be vital for any solution architect.
Despite the potential mechanisms open to us, there is a lot to be said for KISS - Keep It Simple, Stupid. 😉 Smaller projects can typically benefit from having only a single solution throughout their lifecycle. Things can get unnecessarily complex as we introduce more solutions into the equation. Think carefully about the current and potential future direction of what we’re trying to achieve with the Power Platform, and use this to inform the most straightforward approach to adopt. Microsoft has published an excellent article outlining the benefits of a single versus multiple solutions and the dangers of layering and dependencies.
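As a quick illustration of the sort of topology review a solution architect might perform, the sketch below (a minimal Python example, assuming you’ve already acquired an OAuth token for the environment, for instance via MSAL or the Power Platform CLI) lists every visible solution in a Dataverse environment along with its publisher prefix and whether it is managed or unmanaged. The organisation URL and the `DATAVERSE_TOKEN` environment variable are placeholders for this illustration.

```python
import os
import requests

# Assumed environment details - replace with your own org URL.
ENV_URL = "https://yourorg.crm.dynamics.com"
# Assumes an OAuth bearer token has already been acquired elsewhere (e.g. via MSAL).
TOKEN = os.environ["DATAVERSE_TOKEN"]

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

def list_solutions():
    """Return every visible solution with its publisher prefix and managed flag."""
    query = (
        f"{ENV_URL}/api/data/v9.2/solutions"
        "?$select=friendlyname,uniquename,version,ismanaged"
        "&$expand=publisherid($select=customizationprefix)"
        "&$filter=isvisible eq true"
    )
    response = requests.get(query, headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json()["value"]

if __name__ == "__main__":
    for solution in list_solutions():
        prefix = solution["publisherid"]["customizationprefix"]
        kind = "Managed" if solution["ismanaged"] else "Unmanaged"
        print(f"{solution['friendlyname']} ({prefix}) - v{solution['version']} - {kind}")
```

Running something like this against each environment in scope gives a quick picture of which publishers and managed layers are already in play before we start adding our own.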
Customizing Existing Dynamics 365 Applications
As we saw in our previous post, there is a high possibility that any Power Platform project will need to understand - and potentially leverage - one or more of the different Dynamics 365 Customer Engagement (CE) applications. As part of doing this, we should ensure that we’ve implemented the basics first from a solution standpoint by setting up a solution and a publisher prefix to contain all bespoke customizations we plan to perform. From there, we should consider the following factors:
- Microsoft releases up to two major updates to each of the Dynamics 365 CE applications every year, sometimes resulting in additional or unwanted functionality being automatically pushed out into our environments. If we customize on top of any of these applications, awareness and contingency planning for handling these updates will be essential.
- For specific scenarios, it may be preferable to create new components within Dataverse instead of customizing existing Dynamics 365 CE functionality. For example, consider taking a copy of the Lead table’s main form, customizing the copy, and exposing that within your application instead. We can adopt the same approach when working with views, business process flows and model-driven apps.
- What is the likelihood of other solutions or third-party add-ons also leveraging the same set of components? Considering what we know about layering, it could be that our planned customizations will not surface correctly based on this. Take care to understand the entire landscape and evaluate the impact of additional, external solutions alongside any customizations we plan to perform.
The role of the solution architect here is to decide on the most appropriate direction of travel and coach members of the team on following the correct process.
Demo: Customizing Existing Dynamics 365 Applications
In the following video, I demonstrate some of the approaches we can adopt when planning to customize the existing Dynamics 365 Sales application:
Application Lifecycle Management (ALM) Considerations & Concepts
A healthy Power Platform solution is one that we can deploy quickly, efficiently, and with minimal manual intervention. We’ve all been there for those long Friday evenings, trying for the umpteenth time to get our updates deployed out to production. 😫 We can avoid a lot of this pain by ensuring we’ve considered all aspects of our ALM process. The solution architect will need to take the lead in recommending and guiding the project team towards implementing the correct tools to ensure that those frustrating late-night sessions on Friday remain a thing of the past. 😉 To summarise, an appreciation of the following topics will be essential, not only for the exam but in our day-to-day work with the Power Platform too:
- Solutions: We’ve already talked in-depth about the importance of solutions and the fundamental considerations around them. In short, they provide the cornerstone of any ALM strategy pertaining to the Power Platform.
- Environments: Organisations wanting to ensure appropriate quality assurance (QA) stages and complete testing of components before launching them to users will need to consider implementing several different environments. The solution architect will need to consider carefully the number, region, and related deployment considerations for all environments that the organisation plans to implement for their project. I would argue that environments are essential for businesses of any size. At a bare minimum, all Power Platform deployments should have at least one Sandbox environment set up for development/testing purposes.
- Azure DevOps: One of the significant risks associated with an IT deployment comes down to human error. We are all prone to making mistakes, which can be costly to the business, our reputation, and our end customers. For these reasons, adopting Azure DevOps becomes an essential consideration for any solution architect and assists us in automating all aspects of our software deployments. Using DevOps and the associated Power Platform Build Tools, we can look to do things such as:
- Automatically extract the contents of our solutions into a Git repository (see the sketch after this list for a flavour of the underlying call).
- Enforce automatic QA of our solutions by automatically calling the solution checker when checking in code changes.
- Create, delete, copy and manage our environments via a build or deployment pipeline.
- Deploy a solution automatically into one or multiple different environments.
- Configuration Migration Tool: As we configure our Dataverse environment, we potentially create a variety of configuration-based data that we may need to migrate across to our downstream environments. This is particularly the case if we leverage one of the Dynamics 365 Customer Engagement applications and use features like Queues, Business Units or Subjects. To straightforwardly migrate this data between environments and, most crucially, ensure GUID lookups are preserved, we can use the Configuration Migration Tool to define our export schema, export data and import it into any Dataverse environment. We can carry out migrations manually, via the dedicated SDK tool available on NuGet, or automatically - either by using Azure DevOps or through PowerShell.
- Package Deployer: For scenarios where we plan to deploy multiple solutions and any corresponding reference data generated by the Configuration Migration Tool, we can use the Package Deployer tool to build out a complete, end-to-end installation package to ensure we install everything as part of a single action. Package Deployer is used extensively by third-party developers / ISVs, and there will be applicable scenarios for internal projects where using it may realise benefits.
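To give a flavour of what this automation looks like under the hood, here’s a minimal Python sketch that calls the Dataverse ExportSolution action to pull down managed and unmanaged copies of a solution, ready to be unpacked and committed to source control. In a real pipeline, the Power Platform Build Tools or the pac CLI would normally wrap this step for you; the organisation URL, token variable and solution name below are illustrative placeholders.

```python
import base64
import os
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"  # assumed org URL
TOKEN = os.environ["DATAVERSE_TOKEN"]          # assumes a token acquired elsewhere

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Content-Type": "application/json",
}

def export_solution(unique_name: str, managed: bool, target_path: str) -> None:
    """Call the Dataverse ExportSolution action and write the resulting zip to disk."""
    payload = {"SolutionName": unique_name, "Managed": managed}
    response = requests.post(
        f"{ENV_URL}/api/data/v9.2/ExportSolution",
        headers=HEADERS,
        json=payload,
        timeout=300,
    )
    response.raise_for_status()
    # The action returns the solution zip as a base64-encoded string.
    zip_bytes = base64.b64decode(response.json()["ExportSolutionFile"])
    with open(target_path, "wb") as handle:
        handle.write(zip_bytes)

if __name__ == "__main__":
    # "MySolution" is a placeholder unique name - export both flavours for source control.
    export_solution("MySolution", managed=False, target_path="MySolution_unmanaged.zip")
    export_solution("MySolution", managed=True, target_path="MySolution_managed.zip")
```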
These examples only scratch the surface of the things we need to consider. The topic is so weighty that an entire set of Microsoft Docs articles is devoted to the subject. For a better understanding of the practical steps involved in building out an ALM solution leveraging Azure DevOps, check out this post from my previous PL-400 blog series. You can also check out the following video, recorded for this series, which demonstrates how to build this out:
The developers within our organisation will typically build out the core components to satisfy our ALM strategy, so a high-level understanding and overview are more than sufficient for a solution architect’s purpose.
The Importance of Prototyping
IT projects are fraught with risk from the outset, and one of the biggest and most persistent dangers is the finished product not aligning with the original requirements. This emphasises the importance of the project team and, crucially, the solution architect validating that we are building a solution that will be “fit for purpose.” To get this validation as early as possible, we can turn to a tried and tested approach: building a prototype of our solution. The prototype should have the “look and feel” of our end product and is effectively used as a benchmark to a) validate that the base requirement can be met in a minimal sense and b) confirm that we’ve been able to translate the requirement into a workable solution. The critical thing to remember is that a prototype is…well… a prototype! It won’t necessarily meet all business, technical, security, or regulatory requirements, and we should take care to communicate this to our stakeholders when they review it. The good thing about prototyping, particularly when it comes to the Power Platform, is that we can accomplish it in a very rapid fashion. For example, we can do things such as:
- Quickly build out model-driven / canvas applications, implementing simple navigation and styling functionality where appropriate.
- Build out an automation between the system(s) we plan to integrate with, validating that we can make connections and that data flows at the most appropriate trigger points (see the sketch after this list for a quick way to sanity-check connectivity).
- Mock out our proposed data structures within Microsoft Dataverse.
- Create a simple dashboard using Power BI to indicate the insights our solution can provide once fully built out.
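On the second point above, a connection check during prototyping doesn’t need to be elaborate. The minimal Python sketch below calls the Dataverse WhoAmI function, which is often enough to prove that authentication and connectivity work before we invest further effort; the organisation URL and token variable are placeholders for this illustration.

```python
import os
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"  # assumed org URL for the prototype
TOKEN = os.environ["DATAVERSE_TOKEN"]          # assumes a pre-acquired OAuth token

response = requests.get(
    f"{ENV_URL}/api/data/v9.2/WhoAmI",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
result = response.json()

# WhoAmI returns the calling user's id plus the business unit and organisation ids -
# enough to prove that authentication and connectivity work before going further.
print(f"Connected as user {result['UserId']} in organisation {result['OrganizationId']}")
```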
Once your stakeholders have reviewed your prototype, you can decide on the most appropriate course of action. If everything has gone well, we can start converting our prototype into an actual, fully working solution, providing an accurate estimate of how long this will take in the process. We might instead decide that the prototype will meet the requirements after some essential alterations; we should then decide what level of investment to make in addressing these before presenting the prototype again. Finally - our least ideal outcome - we may decide not to move forward at all, perhaps because the prototype cannot address the business need or because an alternative approach is required instead. The key objective for the project team at this stage is to try and salvage any efforts/investments made into the prototype, either by converting them into something usable or by ensuring the appropriate lessons are fed back into the organisation. The earlier we make this type of call, the better; it may become too expensive, or even impossible, to do so later on down the road.
Strategising Data Migration & Visualization Approach
Data migration is something solution architects need to consider whenever we plan to bring data fully into the Power Platform, typically as part of Microsoft Dataverse. Suppose we plan to keep using an existing data source, such as SQL Server or Salesforce. In that case, it’s highly likely that we can instead use many of the out-of-the-box connectors available to us, introducing the on-premises data gateway when we need to securely expose internally hosted resources to the Power Platform. For all other situations, we’ll need to consider one or several different solutions, depending on the complexity of the data we are migrating. For simple migrations, we could leverage the data import wizard. When our data requires cleansing, we will instead need to consider tools like dataflows or Azure Data Factory to Extract, Transform and Load (ETL) our data. The solution architect will typically need to spend time assessing the format of the data needing to be migrated into the Power Platform and, from there, provide a suitable recommendation on the best tool to leverage.
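For anything beyond a trivial load, dataflows or Azure Data Factory will usually be the better fit, but it can help to picture what the load step boils down to. The sketch below is a minimal Python illustration that reads a hypothetical contacts.csv file and creates a Dataverse contact row for each line via the Web API; the organisation URL, token variable and column names are all assumptions for the purposes of the example.

```python
import csv
import os
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"  # assumed target environment
TOKEN = os.environ["DATAVERSE_TOKEN"]          # assumes a pre-acquired OAuth token

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Content-Type": "application/json",
}

def load_contacts(csv_path: str) -> None:
    """Read a (hypothetical) contacts.csv and create a Dataverse contact row per line."""
    with open(csv_path, newline="", encoding="utf-8") as source:
        for row in csv.DictReader(source):
            payload = {
                "firstname": row["FirstName"],
                "lastname": row["LastName"],
                "emailaddress1": row["Email"],
            }
            response = requests.post(
                f"{ENV_URL}/api/data/v9.2/contacts",
                headers=HEADERS,
                json=payload,
                timeout=30,
            )
            response.raise_for_status()

if __name__ == "__main__":
    load_contacts("contacts.csv")
```

For real migration volumes, we would want batching, duplicate detection and error handling on top of this, which is precisely where the dedicated ETL tools earn their keep.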
When it comes to building the most effective visualization solution, we should immediately draw our attention towards Power BI as the first-preference tool. Not only does it work well with Microsoft Dataverse, but it also has a wide variety of different connectors available, thereby allowing us to bring together data from disparate systems into an easy-to-consume, visually interactive report or dashboard. Power BI will also represent our best choice if a degree of data preparation, cleansing, and enhancement has to take place - tasks that we can satisfactorily handle via Power Query or Data Analysis Expressions (DAX). Beyond this, we can also consider some additional tools, such as charts, dashboards, Excel templates and reports. These will be best suited to situations where all of our business data resides within Dataverse or where licensing / cost implications prevent us from adding Power BI into the equation.
Building Effective Role-Based / Task-Based Applications
A good application will allow users to carry out the necessary steps for their job role or current task; a well-built application will go a step further by enabling users to complete what they need to in the most straightforward manner possible. For this reason, we must plan to build our applications around a focused experience that aligns with a specific role or task that users/colleagues need to complete frequently. The temptation to build a single monolithic application is ever-present, and we should attempt to avoid this where possible. Although a single application may reduce our management overhead, it could potentially lead to end-user confusion and severe usability issues in the long term. Again, the concept of KISS is very much relevant here. 😉 As solution architects, we should guide the business towards building out applications that fulfil this objective. To assist you further, the list below provides some benefits, disadvantages, and examples of each approach, which can help you determine the best option to choose:
- Role-Based Applications
- Benefits: Ensures that our app can be leveraged by a broad group of user types/personas within the organisation.
- Disadvantages: Can limit the scope of the application and make it difficult to determine the end-result / output.
- Examples:
- Case management application for all internal support agents.
- App for Project Managers to review and update all projects under their control.
- Application for stock managers to use to manage their warehouse(s).
- Task-Based Applications
- Benefits: Allows us to model our application on a standard, repeatable series of steps that our organisation carries out frequently.
- Disadvantages: Can be challenging to build out for a complex task or one that involves several different dependencies.
- Examples:
- Time entry application for all internal colleagues in the organisation to link their time back to projects worked on.
- App that registers new site visitors.
- Home visit inspection app that allows a set of steps to be completed when in a customer’s home.
How we go about building the application (i.e., which type of Power App will be the most appropriate) will be a separate conversation. Still, with this fundamental decision made, we can begin in earnest.
How Best to Leverage Power Automate?
When considering how best to introduce the benefits of Power Automate, the solution architect first needs a good appreciation of the three different “pillars” or features within this service:
- Cloud Flows: These types of flows will be best for most automations we plan to build out and are most suitable for when the application systems we are working with have existing connectors or APIs available to connect to. Developers can build out simple flows to improve their productivity or create more complex flows that integrate systems together. We can trigger cloud flows based on specific events that arise in our systems (e.g., on Create of a Dataverse row), on a pre-defined schedule or even manually.
- Business Process Flows (BPF): Whereas cloud flows are designed for cross-application automations, these types of flows are best for enforcing data integrity within our particular Dataverse environment. By building out a straightforward BPF, we can guide users of our model-driven apps or, indeed, any user on the Power Automate portal towards submitting the correct data at the right stage in our particular journey. We can then extend this by calling cloud flows when certain conditions are met as part of our process and implement more complex functionality, such as conditional branching or custom control integration.
- Robotic Process Automation (RPA): Also referred to as desktop flows, we would typically leverage these types of flow for our trickiest automations, usually involving legacy application systems with no entry point from a database / API standpoint. RPA flows will also be most appropriate for any automation that spans multiple, complex steps across different systems, or where a human may need to attend to the automation while it’s running. RPA flow builders will use the Power Automate Desktop application to build, test and execute their flows, with options to then run them in either attended or unattended mode.
Solution architects are expected to align a particular automation requirement to the most appropriate Power Automate feature set based on the base requirements and the technical environment involved.
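As a small example of how an external system might hand work off to a cloud flow, the Python sketch below posts a JSON payload to a flow built on the “When an HTTP request is received” trigger. The trigger URL and payload fields shown are placeholders; you’d copy the real URL from the flow designer once the flow has been saved.

```python
import requests

# Placeholder URL - copy the real one from the flow's
# "When an HTTP request is received" trigger once the flow is saved.
FLOW_TRIGGER_URL = "https://prod-00.westeurope.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke"

# Hypothetical payload matching the request schema defined on the trigger.
payload = {
    "caseTitle": "Printer not working",
    "customerEmail": "jane.doe@example.com",
    "priority": "High",
}

# The flow parses this JSON body and carries on with whatever automation steps
# have been modelled - creating a case, notifying a team, and so on.
response = requests.post(FLOW_TRIGGER_URL, json=payload, timeout=30)
response.raise_for_status()
print(f"Flow accepted the request with status {response.status_code}")
```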
Demo: Evaluating Power Automate Capabilities
To help better demonstrate the capabilities available across Power Automate, take a look at the video below, where I provide a demonstration of each of the core capabilities on offer:
We’ve covered a lot in today’s post. 😅 But this reflects the importance of getting a lot of the basics ticked off early on during our Power Platform project. Next time in the series, we’ll jump down a gear and focus on designing a data model using Microsoft Dataverse.