Exam PL-600 Revision Notes: Implementing a Power Platform Solution
Welcome to the seventh post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Previously, we evaluated the various security features and capabilities on offer within the platform and as part of Microsoft Dataverse. In today’s post, we move on to the final area of the exam, titled Implement the solution. This exam area has the smallest weighting (15%-20%), so we can expect relatively few questions from it. Nevertheless, Microsoft expects candidates to have a good understanding of the following topics:
Validate the solution design
- evaluate detail designs and implementation
- validate security
- ensure that the solution conforms to API limits
- resolve automation conflicts
- resolve integration conflicts
Support go-live
- identify and resolve potential and actual performance issues
- troubleshoot data migration
- resolve any identified issues with deployment plans
- identify factors that impact go-live readiness and remediate issues
After many months of hard work and sleepless nights, the go-live of our solution can be exciting and nerve-racking in equal measure. At this stage in the journey, the solution architect may fall into the trap of thinking that no further work is required. In actual fact, their involvement at this crucial juncture is essential, and they will need to remain on hand to support the team and ensure the deployment is a success for the organisation. With all this in mind, let’s dive into the aforementioned areas and evaluate the variables that could make or break your go-live.
The aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it’s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.
Validating the Solution Against Requirements
Once we have a workable version of our solution, we’ll want to obtain the necessary validation to confirm that what we’ve built fits the requirements of the business.
At this juncture, it can be beneficial to go back to the original business requirements we put together earlier in the project and review the various acceptance criteria set out within them. Ideally, we should have been performing ongoing testing of our solution, in the form of unit testing or similar, to develop this confidence early on; notwithstanding this, a formal round of testing by an objective audience is still advisable. Assuming that we have a solid set of requirements that we’ve been working towards, validation at this stage should be straightforward. We should also have a set of measurable outputs that we can use to determine a pass or fail for a specific element of our solution.
Once we, as a team, are happy that the solution is fit for purpose, it will generally be a good idea to conduct a formal round of user acceptance testing (UAT) with a candidate group of actual users of the end solution. This exercise may occur late in our development cycle, but ideally, the earlier we can get our solution into the hands of these key stakeholders, the better. It will be far easier to address issues early than a week before our intended go-live. 😉
After all validation has been performed and everyone is happy, the solution architect can then get the project team together to confirm this formally. This may also provide an excellent opportunity to discuss how best to deploy the solution into production, along with any dependencies involved, all of which are essential elements of our deployment plan. But more on this later on.
Validating the Security Model
This can often be the most challenging and under-appreciated aspect of our testing, partly because people tend to forget about it entirely. For this reason, the solution architect must keep a firm grip on it. To help with this, here are a few things I would recommend:
- Level up for Dynamics 365/Power Apps Extension: This must-have extension, which works with both Google Chrome and Edge Chromium, has an incredibly valuable feature we can use in the context of our security model - impersonation. This allows System Administrators (or those with the correct privileges assigned) to use a model-driven application as another user, with that user's security permissions applied. Functionality like this can be invaluable for early testing to ensure we get the behaviour we expect (a scripted alternative is sketched after this list).
- Test User Accounts: To test the different security roles, column (field) security profiles and other components most effectively, I’d recommend setting up a dedicated account that testers can use to emulate the various personas we’ve established for our solution. Ideally, this should be a real user account with a name like John Doe or Jane Doe. We can then modify the permissions assigned to this account as we work through the different categories of users for which we are testing scenarios.
- User Acceptance Testing: This was highlighted earlier but again provides a valuable opportunity to ensure that things work as expected with our actual set of users who plan to leverage our solution.
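As a complement to browser-based impersonation, the Dataverse Web API also supports impersonation via a request header, which can be handy for scripting repeatable checks against the personas we've set up. The sketch below is a minimal illustration only; the environment URL, access token and user ID are placeholders you would substitute with your own values.

```python
import requests

# Placeholder values - substitute your environment URL, a valid OAuth token
# (e.g. acquired via MSAL) and the systemuserid of the persona to test as.
ENV_URL = "https://yourorg.crm.dynamics.com"
ACCESS_TOKEN = "<access token>"
TEST_USER_ID = "00000000-0000-0000-0000-000000000000"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    # Impersonate the test user: Dataverse evaluates security roles, column
    # security profiles and row ownership as if this user made the call.
    "MSCRMCallerID": TEST_USER_ID,
}

# Attempt to read a handful of accounts as the impersonated user; a 403 (or an
# empty result set) confirms the persona's security roles behave as designed.
response = requests.get(
    f"{ENV_URL}/api/data/v9.2/accounts?$select=name&$top=5",
    headers=headers,
    timeout=30,
)
print(response.status_code)
print(response.json() if response.ok else response.text)
```

Running a few checks like this for each persona can flag missing or over-generous privileges long before UAT starts.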
API Limits Overview
For any Power Platform solution, particularly one that leverages Microsoft Dataverse, the solution architect needs to be acutely aware not only of the entitlement limits enforced at the API level, but also of the service protection limits.
Let’s focus first of all on entitlement limits. These dictate the number of individual requests a user can make within 24 hours, and Microsoft defines a request somewhat broadly. From a Dataverse standpoint, Microsoft classes a request as any CRUD action targeting the platform via the SOAP / REST API, as well as any internal action triggered by a workflow, plug-in, or other type of automation. The number of requests allocated to a user depends on the type of license assigned to them; a general rule of thumb is that the more you are paying, the more you get. The good news is that if a user has multiple licenses assigned, all of their requests are summed up, rather than defaulting to the highest amount across the licenses. If a user exceeds the number of requests allocated to them, Microsoft may start throttling their requests, and they will see a significant degradation in performance. For any non-interactive user account type, such as an application user, different rules apply, and Microsoft will instead deduct requests from a pooled amount at the tenant level. Again, the precise number of requests available is dictated by the types and number of licenses you have.
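To make the summing behaviour concrete, here's a trivial sketch using purely hypothetical per-license request allowances (the real figures are published in Microsoft's licensing documentation and change over time):

```python
# Hypothetical entitlement figures for illustration only - always check
# Microsoft's current licensing documentation for the real allowances.
REQUESTS_PER_LICENSE = {
    "Dynamics 365 Enterprise": 20_000,
    "Power Apps per user": 40_000,
    "Microsoft 365 (seeded)": 6_000,
}

def daily_entitlement(assigned_licenses):
    """Sum the 24-hour request entitlement across all licenses a user holds."""
    return sum(REQUESTS_PER_LICENSE[lic] for lic in assigned_licenses)

# A user holding both licenses gets the total of the two allowances (60,000
# here), not just the higher of the two. Non-interactive accounts, such as
# application users, draw from a tenant-level pool instead.
print(daily_entitlement(["Dynamics 365 Enterprise", "Power Apps per user"]))
```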
In addition to these limits, integrations have to factor in the service protection limits, which come into play should we fire too many requests in a short period. Microsoft documents the thresholds involved, and should we exceed them, the platform will return 429 error responses. These responses include details of when it’s safe for the caller to retry the request, so we can modify our applications to honour this information. If we are performing a heavy-duty integration involving the platform, I recommend reading through the strategies Microsoft advises us to adopt and altering our application accordingly.
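To illustrate the retry pattern, here's a minimal sketch that honours the Retry-After header returned alongside a 429 response from the Dataverse Web API; the environment URL and token are placeholders.

```python
import time
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <access token>",  # placeholder token
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

def get_with_retry(url, max_attempts=5):
    """Call the Web API, backing off for the period the service asks for
    whenever a 429 (service protection limit) response is returned."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=HEADERS, timeout=60)
        if response.status_code != 429:
            return response
        # The 429 response tells us how long to wait before retrying.
        wait_seconds = float(response.headers.get("Retry-After", 5))
        print(f"Throttled (attempt {attempt + 1}); retrying in {wait_seconds}s")
        time.sleep(wait_seconds)
    raise RuntimeError("Service protection limits still hit after retries")

contacts = get_with_retry(f"{ENV_URL}/api/data/v9.2/contacts?$select=fullname&$top=10")
print(contacts.status_code)
```

Wrapping any Web API calls our integration makes in a helper like this means a burst of throttling degrades gracefully rather than failing outright.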
Resolving Automation / Integration Conflicts
Automations and integrations in this context can range from a simple cloud flow that communicates with the Microsoft Graph API, right through to a complex integration that consumes several Azure or externally hosted systems. The solution architect should have an excellent grasp of the landscape here, which will vary from project to project.
Ideally, most of our potential conflicts should have been resolved as part of a dedicated integration testing round. The solution architect needs to account for this and other considerations, such as the fact that these services may require time and effort to prepare for testing. In addition, the teams who maintain these systems may be super-duper whizzy experts with the systems they look after but clueless when it comes to the Power Platform itself. The solution architect should therefore take the lead in coaching and sharing knowledge with these teams as needed.
Once an issue has been identified, it may be straightforward for us to address it on our side, but keep in mind this may not necessarily be the case for the external systems we are consuming. At worst, it could even impact the expected go-live of our solution. At this stage, it may be necessary for the solution architect to step in and work out some form of compromise. Perhaps the issue we’ve discovered can be addressed after go-live, or an acceptable workaround can be implemented. Early and frank communication to the wider project team is essential, even if the news we are bearing is not always positive.
Testing, Analysing and Resolving Performance Issues
There’s nothing worse than having an IT system that is sloooooooooooooooooooooooow. Poor performance is one of the significant factors that can hurt the adoption of our solution and introduce unnecessary frustration. The solution architect should be consistently on guard here and adopt a hawkish posture when it comes to the following areas in particular:
- App Design: How we’ve built our model-driven and canvas apps can significantly impact how they run. If, for example, we’ve created a model-driven application that loads every single view and form for a table, don’t be surprised if things start taking a while to load. Similarly, if we have views containing too many attributes or other components, this can also negatively impact things. With this in mind, we should be designing with simplicity and fundamental clarity in mind. If a particular element isn’t adding any value, we should remove it altogether.
- Security Model: This can have an indirect effect on performance. Suppose we have granted a set of users global Read access to a table. Whenever a user navigates to specific views in the application, time and resources will be spent retrieving all of these rows from the backend table. Our security model should be continuously tweaked to ensure we introduce performance benefits where applicable. It can be relatively straightforward to introduce these gains through clever use of some of the features we’ve spoken about previously in the series, such as business units and security roles.
- Client vs Server Side Issues: With advances in web development over the years, many of the modern features of the platform expect an up-to-date browser and device to work effectively. So if we do find ourselves creaking along on an ancient Windows machine with a legacy browser, don’t be surprised if users start to experience issues using the Power Platform. Likewise, if our environments are hosted in a geographic region far from where a user is connecting, it’s natural to expect latency and a performance dip. The solution architect should understand where most users are based and ensure the solution’s environment is provisioned in the closest available region. In addition, they should review the minimum requirements for services such as Power Apps and ensure all users meet them. A quick way to sanity-check latency is sketched after this list.
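As a rough way to quantify the latency point above, the sketch below times a handful of lightweight WhoAmI calls against the environment's Web API. The URL and token are placeholders, and this is only an approximation of the more thorough network checks Microsoft recommends.

```python
import statistics
import time
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <access token>",  # placeholder token
    "Accept": "application/json",
}

def measure_latency(samples=5):
    """Time a few lightweight WhoAmI calls to get a rough feel for the
    round trip between the client and the environment's region."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(f"{ENV_URL}/api/data/v9.2/WhoAmI()", headers=HEADERS, timeout=30)
        timings.append((time.perf_counter() - start) * 1000)
    return timings

results = measure_latency()
print(f"Median round trip: {statistics.median(results):.0f} ms over {len(results)} samples")
```

Running this from the locations where our users actually sit, rather than from our own development machine, gives a far more honest picture of what they will experience.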
Throughout all of this, the Power Platform Admin Center can act as an excellent resource for understanding and gauging problems. The center includes a range of Admin Analytics reports that we can use to understand metrics such as the amount of storage being consumed, the failure rate for specific APIs and the locations from which users are accessing an environment’s resources. These metrics can give us clues as to why we may be seeing performance degradation, and the solution architect can then use them to initiate further investigations as needed. We can also use Monitor for model-driven apps to generate further data and clues as to what could be causing bottlenecks.
Deployment Plans and Mitigating Go-Live Impacts
Once all testing has been completed, and any remedial work has been dealt with appropriately, the solution architect can then start sharing details of the deployment plan. In an ideal world, this document should already exist in draft form. The primary task at this stage is to dust this off, update where appropriate and ensure that all key stakeholders and project team members have a copy. Ideally, your deployment plan should include:
- A breakdown of all critical tasks.
- An outline of any dependencies or requirements.
- An owner for each task.
- A due date or deadline.
- Any additional or useful commentary.
A visual aid, such as a Gantt chart, can be the best way to convey the steps involved alongside any timelines.
Although our intentions may be good, and we’ve made commitments to the business that we will be going live on X date, there can be a multitude of reasons that could negatively impact our go-live and force us to re-evaluate our plan. These include things such as:
- A dependent system not being ready as part of an integration or automation.
- A key business event, such as another system launch, or BAU operations, such as end-of-month invoice processing.
- A realisation that a critical requirement or feature has not been accounted for in the solution.
- Unavailability of resources due to planned / unplanned absences.
By having things documented clearly in advance, the solution architect can quickly adapt to these situations and present an updated plan back to all key stakeholders as and when required. As touched upon previously, early and frank communication in all these matters is generally the best approach.
Another important consideration that could impact our go-live is if we plan our release on or around the same time as a release wave. The solution architect should be fully aware that there are two major release waves for the Power Platform every year, in spring and autumn. Microsoft always publishes the dates when these upgrades will take place for each region, so there is no excuse for not evaluating this in advance and, as much as possible, ensuring that we don’t clash with any of these major releases. For an example of the type of information that’s published, you can review the documentation for the 2022 release wave 1 plan and deployment schedule for each region.
And with that, we are at the end of this series! Next week, we’ll do a wrap-up post that brings the various blog posts and video content into a single location. Until next time, happy revising, and I hope you’ve found the series useful!