Power Platform | Pipelines – the Admin view

After a first review of the end-user experience in my previous article, today I'd like to take a closer look at Power Platform Pipelines from the admin perspective. So let's switch hats and drill into the typical requests that come up when you are a member of a Power Platform admin team that wants to use ALM for good.

Overview of Power Platform Pipelines

When selecting an ALM tool, the obvious goals to achieve are:

  • Hosting a code repository that allows easy reconstruction of the development work done
  • Ensuring an audit process that allows checking for code deviations and identifying who made the changes
  • Automating the code build, merge, and deployment processes
  • Managing development and application lifecycle management in general
  • Automating unit tests and other quality assessments
  • Simplifying software release management

While Power Platform Pipelines was recently announced as generally available, there is still some work to be done to fulfill all of the above requirements, and it should also be a roadmap topic whether there will be integration scenarios with the platforms code-first developers typically know, such as GitHub or Azure DevOps.

Power Platform Admin Center – showing the main environments used for Pipelines

Let's start with the Admin Center experience, where you configure the environments you will later be using for Power Platform Pipelines. From the visual above you can see that I configured a Power Pipelines environment as the host for the solution. For testing purposes, I selected a Developer Plan license for this exercise.

Power Platform Pipelines Configuration App with a header notification

To my surprise, when I opened the Deployment Pipeline Configuration app and took a look at the configured environments, I saw a notification stating "Pipelines is only supported in Managed Environments". As you may have noticed from the previous visual, my host environment is configured as a Managed Environment, so this notification shows up unexpectedly and may keep showing because I am using a Developer Plan licensed environment. Let me know in the comments if you see this message popping up in a production environment configured as a Managed Environment as well.

Pipelines Setup – Overview of Deployment Environments

I played around with the Deployment Environments and added both a Microsoft Teams environment and my Default environment (which, as you can check from the environment visual, is configured as a non-Managed Environment). To my surprise, both were validated successfully, and it worries me that, even though the documentation clearly states that

  • Microsoft Dataverse for Teams environments aren’t supported for use with pipelines
  • Pipelines are a feature of Managed Environments. As such, the development and target environments used in a pipeline must be enabled as a managed environment.

neither leads to a validation failure in this case. So why? Why am I not shown a message telling me I can't use either of those environments? An answer that possibly only the engineering team can provide at this moment. Admins, too, like to be guided through a smooth setup and configuration process instead of reading and following documentation. Hopefully we'll see a rapid fix for this and these checks integrated into the validation run.
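If you want to catch these constraints yourself before adding an environment to a pipeline, you can list your tenant's environments and eyeball the relevant properties. The sketch below calls the Business Application Platform admin endpoint; the property names I inspect (linkedEnvironmentMetadata, governanceConfiguration) are my assumption for where Dataverse, region, and Managed Environment information show up, so treat it as a starting point rather than a definitive check.

```python
# List environments and print the properties relevant to the documented pipeline
# constraints (Dataverse present, same region as the host, Managed Environment).
# The property names inspected below are assumptions - verify them in your tenant.
import os
import requests

TOKEN = os.environ["POWERPLATFORM_ADMIN_TOKEN"]  # token for the BAP admin API
url = ("https://api.bap.microsoft.com/providers/Microsoft.BusinessAppPlatform"
       "/scopes/admin/environments?api-version=2021-04-01")

response = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
response.raise_for_status()

for env in response.json().get("value", []):
    props = env.get("properties", {})
    print(props.get("displayName"))
    print("  region:", env.get("location"))
    print("  has Dataverse:", "linkedEnvironmentMetadata" in props)
    # Assumption: the governance configuration indicates Managed Environment status.
    governance = props.get("governanceConfiguration", {})
    print("  protectionLevel:", governance.get("protectionLevel"))
```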

Pipelines Setup – Overview of Deployment Environments

At first I thought the validation process might be broken altogether in my environment, so I decided to add both a non-EU environment without Dataverse and a non-EU environment with Dataverse to my Deployment Environments. As you can see from the visual above, both validation tests failed this time. Drilling into the details, the first one failed because of the missing Dataverse and the second one because it is in a region outside of the Pipelines host, both of which are documented as out of the supported scope.

Pipelines – Configured Deployment Pipeline and Run history

So let's switch gears and take a look at the monitoring experience from an admin perspective. How easy is it to identify an issue when a maker calls out for help with deploying their solution? There are two ways to drill into this. One is from the configured Pipelines, viewing the related Run history. The other is to use the navigation on the left and open the Run history from the Deployments section inside the Deployment Pipeline Configuration app.

What really became an issue for me was that I needed to step into the record by clicking the Edit button to take a closer look at what caused the failure. Wait, "edit" to monitor and drill into a failure? Oops, yes, another conflicting part of using the solution from an admin perspective. Because the moment you edit something that was already done, you could easily change something by accident and save the record afterwards. This can become a real pain when auditing what happened.
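If you prefer a read-only way to inspect failures without touching the records in the UI, you could query the run history tables of the Pipelines host environment through the Dataverse Web API. The sketch below is only an illustration; the entity set and column names (deploymentstageruns, statuscode) are assumptions on my part and may differ in your version of the Pipelines solution, so verify them against your environment first.

```python
# Read-only look at pipeline run records via the Dataverse Web API.
# NOTE: entity set and column names (deploymentstageruns, statuscode) are
# assumptions - check the actual schema of your Pipelines host environment.
import os
import requests

HOST = "https://yourpipelineshost.crm.dynamics.com"   # Pipelines host environment URL
TOKEN = os.environ["DATAVERSE_ACCESS_TOKEN"]          # acquire via MSAL or az cli beforehand

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# Pull the most recent stage runs and inspect their status without editing anything.
url = (
    f"{HOST}/api/data/v9.2/deploymentstageruns"
    "?$select=name,statuscode,createdon"
    "&$orderby=createdon desc&$top=50"
)
response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()

for run in response.json().get("value", []):
    print(run.get("createdon"), run.get("name"), "statuscode:", run.get("statuscode"))
```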

Pipelines – Quick Power BI report view

Luckily, there is a third way of monitoring and drilling into this: using the Dashboard navigation item (although it needs some rework to really empower an admin for a quick drill-in), or creating a Power BI quick report from the Run history that allows you to drill into the details needed. Of course, this means customizing the current solution, and there is room for improvement to provide a drill-in/inspection report out of the box. So please, Microsoft engineering team, go for it, even before shipping new features.

Another topic I wanted to check out was the solution artifacts and whether they are capable of capturing source code. Note: the Pipelines docs currently mention that you can use external tools to see diffs.

Pipelines – Deployment Artifact record

From what is visible within the current Pipelines solution, you see a Managed Artifact File that is stored in ZIP format and can be downloaded. Those who know that you can run a Power Automate flow on creation of a Dataverse record will easily get the automation part: download the file automatically, extract it, and check the artifacts in to a source control system of your choice.
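Instead of a flow, the same automation could also be scripted. Here is a minimal sketch that downloads the artifact's file column via the Dataverse Web API and unpacks it; the entity set and file column names (deploymentartifacts, artifactfile) are assumptions based on what the solution exposes and may differ, so confirm them in your environment.

```python
# Download a Pipelines deployment artifact ZIP and unpack it for check-in.
# NOTE: entity set and file column names (deploymentartifacts, artifactfile)
# are assumptions - verify them against the Pipelines host environment schema.
import io
import os
import zipfile
import requests

HOST = "https://yourpipelineshost.crm.dynamics.com"
TOKEN = os.environ["DATAVERSE_ACCESS_TOKEN"]
ARTIFACT_ID = "00000000-0000-0000-0000-000000000000"  # deployment artifact record id

headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/octet-stream"}

# File columns expose their binary content via the /$value endpoint.
url = f"{HOST}/api/data/v9.2/deploymentartifacts({ARTIFACT_ID})/artifactfile/$value"
response = requests.get(url, headers=headers, timeout=60)
response.raise_for_status()

# Extract the managed solution so the individual files can be committed to source control.
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall("artifact_managed")

print("Extracted", len(os.listdir("artifact_managed")), "items to ./artifact_managed")
```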

Using Notepad++ Compare Plugin

For my testing purposes, I exported the solution file from the UI and downloaded the artifact file in addition. I then extracted both ZIP files and opened solution.xml to compare the two with Notepad++. As you can see, both files are exactly the same.
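If you want to run the same comparison without Notepad++, a few lines of scripting do the job. This is just a sketch assuming both exports have been extracted into sibling folders; adjust the paths to your setup.

```python
# Compare solution.xml from a Pipelines artifact export with a manual UI export.
# Paths are placeholders - point them at your two extracted solution folders.
import difflib
from pathlib import Path

pipeline_xml = Path("artifact_managed/solution.xml").read_text(encoding="utf-8").splitlines()
manual_xml = Path("manual_export/solution.xml").read_text(encoding="utf-8").splitlines()

diff = list(difflib.unified_diff(pipeline_xml, manual_xml,
                                 fromfile="pipelines/solution.xml",
                                 tofile="manual/solution.xml",
                                 lineterm=""))
if diff:
    print("\n".join(diff))
else:
    print("Both solution.xml files are identical.")
```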

File explorer – showing both solution exports. First generated by Pipelines, second generated via a manual export of the solution from UI

As you can see from the visual above, the only real difference is that Pipelines uses a "." in the file name. If you download the artifact automatically via a Power Automate flow, extract it, and check all the files in to your source control system, keep in mind that this is a managed solution. So from a source control perspective you don't have control over the master (unmanaged) in this case; you only have control over the build output (managed).

Well, that's pretty much what I wanted to share with all of you today. From an admin perspective, the setup and configuration process of the Power Platform Pipelines solution is normally a smooth run. Still, you need to be aware of the limitations, and instead of a full wizard guiding you through the configuration, there are manual steps to follow. There is work to be done from a monitoring perspective in terms of easily drilling into the jobs that show failures and need support when the maker can't solve them on their own (which will typically be the case if the maker doesn't have a Power Platform solution developer background).

There is also work to be done to support further source control operations, in terms of making it easy to identify code changes from an in-app experience, or at least allowing an easy audit of who made changes. Adding the publisher or the prefix to the details of the artifact, for instance, might help to simplify that task. Was it an individual performing these changes or was it a team working on features? (Both could be separated by using different publishers in this case.)

Hidden gems

During my tests I stumbled across some interesting parts that I didn't want to miss mentioning here as well. Both look promising and also show that the roadmap for Pipelines hasn't stopped with the GA announcement.

Pipelines – Deployment Stage Run

Number one I discovered when I created a new Deployment Stage manually, after a successful run of a deployment. I'm not sure this button should be showing up, if you ask me, since the deployment stage had already been processed, but it allowed me to do a bit of reverse engineering. We find two Operation options:

  • Validation
  • Deploy

If you select one of them, you also find the list of Suboperations available, shown in the visual above. That leaves hope that more Operations and Suboperations will become available in the future, and moreover that they could be used to perform an operation individually by configuring them, for instance using the Retrieving Artifact suboperation in a validation operation job to generate the file without deploying it.

Pipelines – Deployment Stage Run

Number two I found inside the Deployment Stage Run by looking beyond the General tab. Inside the Deployment settings tab, you can find a Deployment Settings Json, a setting that I actually couldn't configure during the creation of a deployment pipeline, nor of a stage. So my hope is that this is something that will become available and that someone has taken a careful look at Azure DevOps in terms of deployment jobs. Let's see what the future brings.
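Purely as speculation on my side: if that field ends up following the same shape as the deployment settings file already known from classic Power Platform ALM tooling (environment variable values and connection references), it might look roughly like the sketch below. This is an assumption on my part, not documented Pipelines behavior, and all names are placeholders.

```python
# Hypothetical deployment settings payload, assuming the Pipelines field reuses the
# classic deployment settings file shape (EnvironmentVariables / ConnectionReferences).
# This is speculation - the Pipelines solution does not document this format.
import json

deployment_settings = {
    "EnvironmentVariables": [
        {"SchemaName": "prefix_ApiBaseUrl", "Value": "https://api.example.com"},
    ],
    "ConnectionReferences": [
        {
            "LogicalName": "prefix_SharedDataverse",
            "ConnectionId": "<target-environment-connection-id>",
            "ConnectorId": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps",
        },
    ],
}

with open("deploymentSettings.json", "w", encoding="utf-8") as handle:
    json.dump(deployment_settings, handle, indent=2)
```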

That's it, folks. Go and test it on your own, and decide which ALM tool to go with by taking another look at my previous article containing a decision tree. Share your thoughts and leave a comment. Until then,…


This was originally posted here.
