Summary

 

Microsoft Power Automate is a service that allows makers to create business processes, orchestrations, and workflows to help achieve common and even complex business requirements. Power Automate is one of the most important pillars of the Power Platform, providing a no-code to low-code solution to process automation. From sending push notifications to mobile devices to complex robotic process automation flows, Power Automate can be used in virtually any workload.

The goal of this article is to describe how to implement a monitoring strategy for both the Power Automate and Azure Logic Apps services. We will explore how to create custom identifiers to be used for tracking run time actions and flow history. We will discuss using the Azure Log Analytics Data Collector connector. The article will also show how to build a delivery mechanism to Azure Application Insights from a Power Automate flow. Finally, we wrap up with considerations for choosing between Azure Log Analytics and Azure Application Insights.

Adding Custom Identifiers

 

Power Automate flows generate tracking identifiers for each individual flow run and for each specific action within a run. For each workflow, trigger, and action, these auto-generated identifiers are included and available as part of the various objects documented in Part 1: Triggers, Workflows and Actions. They are typically very helpful, specifically the workflow run name, when reviewing previous run information via the UI or PowerShell.

We've also seen, in Part 2: Tracked Properties and Error Handling, how connectors themselves can often provide identifiers in the form of header properties that can be tracked. These properties returned in the connector response are vital when troubleshooting and supporting standard, and especially custom, connectors.

That said, there will come a time when you may need to generate or provide an identifier yourself. Luckily, Power Automate provides the ability to do both: you can pass in a tracking identifier or generate one that can be used across actions, or even in other flows called from the originating flow. This allows makers to set the expectation that the flow will return a predetermined identifier which can then be used across other workloads.

Tracked Properties

 

Before we get into identifiers, please take a moment to review the Tracked Properties section in Part 2: Tracked Properties and Error Handling to familiarize yourself with attaching data to actions.

Utilizing a system-provided Correlation Id

 

Power Automate includes its own correlation identifiers, surfaced by the Get-FlowRun cmdlet as part of the ClientTrackingId. This identifier can be obtained in the flow itself by use of the outputs object. Consider using this or a custom tracking identifier when working with external integrations to assist with application and workload monitoring. Implementing this early allows the beginnings of an application map to appear when working with Azure Application Insights, as shown below.

AUTHOR NOTE: CLICK ON EACH IMAGE TO EXPAND FOR DETAIL

Using the guid() function, we can generate an identifier in various formats that can be used as the ClientTrackingId or passed to other connector actions, such as Common Data Service creates or updates, or even custom connectors, as detailed below.
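As a minimal sketch (the action and variable names here are illustrative), an Initialize variable action in code view could generate the identifier with guid(); the function also accepts an optional format specifier, such as 'N' to omit hyphens:

```json
{
  "Initialize_Tracking_Id": {
    "type": "InitializeVariable",
    "inputs": {
      "variables": [
        {
          "name": "TrackingId",
          "type": "string",
          "value": "@{guid()}"
        }
      ]
    },
    "runAfter": {}
  }
}
```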

Adding a custom Tracking Id

 

The Custom Tracking Id, part of the trigger, allows for static or dynamic input of an identifier that can be passed in or set from the trigger and added to the response. This identifier can also be used across child flows to represent the calling or parent flow.

As the image below shows, we can use the "Custom Tracking Id" field to set a static or dynamic value. This example shows setting the custom tracking id from a property in the body of an HTTP request called correlatonid.

An added bonus to using the trigger in this fashion is that we can now add conditions that act as a gatekeeper to our flow. In this scenario we can strictly enforce that if a correlating identifier is not provided, the flow will not run and will instead return a message indicating that the identifier is expected. This design is very important for visibility and supportability, and as such needs to be included as a first step as we bring more and more integrations into our workloads. A sketch combining both techniques follows.
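Below is a rough code view sketch of such a trigger, assuming the HTTP request trigger and the correlatonid body property from this example; the correlation property sets the custom tracking id and the trigger condition blocks runs that do not supply one:

```json
{
  "manual": {
    "type": "Request",
    "kind": "Http",
    "inputs": {
      "schema": {
        "type": "object",
        "properties": {
          "correlatonid": { "type": "string" }
        }
      }
    },
    "correlation": {
      "clientTrackingId": "@{triggerBody()?['correlatonid']}"
    },
    "conditions": [
      {
        "expression": "@not(empty(triggerBody()?['correlatonid']))"
      }
    ]
  }
}
```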

In the example below we have an HTTP request and response Power Automate flow. In the request, a Tracking Id of "PowerAutomateArticleExampleId" has been provided.

In the response, captured in Fiddler or Postman, we can see the response header now includes our custom tracking Id represented by "x-ms-client-tracking-id".
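For illustration, the relevant response headers resemble the following (the header names are as described above; the run id value is a placeholder and other headers will vary):

```json
{
  "x-ms-client-tracking-id": "PowerAutomateArticleExampleId",
  "x-ms-workflow-run-id": "<run id>"
}
```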

This technique can be propagated down to Child Flows as well, as shown below. As with the parent, the "x-ms-client-tracking-id" is now included within the Child Flow. This allows the Child Flow to use the same tracking Id for any actions or tracked properties.

Finally, an added benefit is that this same custom tracking identifier will be available as part of the "Get-FlowRun" cmdlet covered in Part 4: Reviewing Run History with Get-FlowRun.

The Log Analytics Connector

 

One approach that can help deliver event data to Azure Monitor is to utilize the Log Analytics Data Collector connector. Built on the Azure Monitor HTTP Data Collector API, it can ingest data in virtually any structure using a JSON formatted string. Since every object in a Power Automate flow is represented as JSON, we can send any item we want, from scopes to actions to the entire workflow.
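Under the covers, the connector posts to the workspace's HTTP Data Collector API endpoint, with the Log-Type header naming the custom table (Log Analytics appends a _CL suffix). A hypothetical record, with illustrative field names and expressions, might look like this:

```json
[
  {
    "FlowName": "@{workflow()?['name']}",
    "FlowRunId": "@{workflow()?['run']?['name']}",
    "ClientTrackingId": "@{triggerBody()?['correlatonid']}",
    "Status": "Succeeded"
  }
]
```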

Creating Custom Tables

 

Below is an image showing custom tables created by using the Log Analytics Data Collector connector action.

Here are the actions that created these tables starting with the PowerAutomateFlow custom table.

Here is the action for PowerAutomateActions:

As shown, one of the benefits of Log Analytics is the ability to add custom tables to the workspace. Once these custom tables are created, they can be extended by passing in additional properties. For instance, in the image I have a table for Power Automate actions that was created by passing in an action object. Starting with a simple Initialize variable action, I may only have a couple of fields in my new table. However, by passing in a more robust output from an action, such as a Common Data Service action, the table grows rapidly.

Here is an example gif showing the use of the workflow and action objects. The workflow object is well defined; action objects, however, can become cumbersome. In this example we are splitting these into two separate tables, as sketched below.
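As a rough sketch, the Request Body for each of the two Send Data actions can simply be the corresponding object, with the Custom Log Name set to PowerAutomateFlow and PowerAutomateActions respectively (the action name 'Create_record' below is hypothetical):

```json
{
  "PowerAutomateFlow": "@{workflow()}",
  "PowerAutomateActions": "@{actions('Create_record')}"
}
```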

NOTE: Consider using custom tables for specific actions.

This approach provides extreme flexibility but does come with a cost. Data can quickly become fragmented, null saturation can become unmanageable, and even fields that we would often use for identification can become obscure. Working this way, we can quickly render our logs useless, so we should place some limits on what can be submitted. Without meaningful data, what value are the logs we're capturing?

Thoughts on Controlling the Collector

 

I like the approach shown due to the flexibility of the endpoint and its native integration with Azure Logic Apps. Creating custom tables and allowing data ingress in any shape or form does have benefits. Also, by using the Log Analytics Data Collector connector, all Azure Logic Apps and Power Automate flow run time data (tracked properties, action results, etc.) can be centralized, which simplifies reporting and monitoring. That said, we need to build enforceable and stringent rules around what our tables will accept to ensure quality as we move ahead.

The Case for Application Insights

 

Azure Application Insights, as mentioned in previous articles, is an application performance monitoring platform. Power Automate, at the time of this writing, does not have a native integration or feature to push events to Application Insights. However, as with Log Analytics, there is an HTTP endpoint that can be used to send messages to Application Insights. How that HTTP endpoint is invoked is up to the enterprise: HTTP requests can be issued directly from Power Automate flows, or custom connectors can be created to help deliver messages.

The next section will go into building a direct request to Azure Application Insights. This is similar to the approach I've documented in the past for sending messages from Dynamics 365 Plug-ins.

Utilizing the Application Insights REST API directly

 

To achieve this, we will send a POST message to the Application Insights endpoint with the definition of the specific event we want to capture. The endpoint accepts one or multiple messages, so it can be used at the end of a Power Automate flow run or really anywhere. Combining the messages and sending them at the end follows a similar approach to the Azure Log Analytics Data Collector action. As long as the action is ensured to run and our contracts are correctly defined, we can rely on the API to deliver our messages. One of the benefits of the Application Insights endpoint is that it accepts any well-defined message payload and reports back the number of accepted and failed messages.
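For reference, messages are POSTed to the track endpoint at https://dc.services.visualstudio.com/v2/track, either as a single envelope or an array of envelopes, and the response reports the accepted and failed counts along these lines:

```json
{
  "itemsReceived": 2,
  "itemsAccepted": 2,
  "errors": []
}
```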

The trigger I'm using here is the Power Apps trigger, which allows this flow to be used as a child flow. Remember, to be used in this manner with Child Flows, the flow must be solution aware.

Building the Message Body

 

Azure Application Insights, like Azure Log Analytics, can accept JSON serialized requests and store them in tables within the log store. Unlike Log Analytics, specifically the Data Collector, Application Insights expects a well-formed, structured message. Each message contains these minimal properties: time (the timestamp), iKey (the Instrumentation Key), name (the name of the table/message) and data (the message payload). A reference for building this contract can be found here.

The data property is specific to each message type and includes a baseType and baseData property within. A good reference for finding out more is the ApplicationInsights-JS Interfaces folder. For each message type, be it exceptions, custom events, traces, etc., there are common and explicit properties. Consider the image below.

The image above shows how to take the contract for a specific table and inject variables which will show up in Application Insights. In this example I'm only using the customEvents table, but this can easily be extended to all tables within Application Insights.
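As a hedged sketch of such a contract (the event name, property names, and expressions are illustrative, and the iKey placeholder should be replaced with your Application Insights Instrumentation Key), a customEvents message posted to the track endpoint could look like this:

```json
{
  "name": "Microsoft.ApplicationInsights.Event",
  "time": "@{utcNow()}",
  "iKey": "<instrumentation-key>",
  "data": {
    "baseType": "EventData",
    "baseData": {
      "ver": 2,
      "name": "PowerAutomateFlowRun",
      "properties": {
        "clientTrackingId": "@{variables('TrackingId')}",
        "flowRunName": "@{workflow()?['run']?['name']}"
      },
      "measurements": {}
    }
  }
}
```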

Most tables in Application Insights include customDimensions and customMeasurements property bags, which I would recommend using to pass contextual data from the flow. In the example above, the properties property within baseData allows a JSON object to be passed in. The image above shows the trigger function, but this could also be the workflow object, or perhaps a particular action that failed. The image below shows using the action object output with the exception message in Azure Application Insights.

All tables include user and session data points which can be used to help facilitate out of the box capabilities in Application Insights such as the application map mentioned above.

Utilizing Azure Functions and Custom Connectors

 

Azure Functions provide a great way to build microservices, including ones that surface run time data from Power Automate flows or Model Driven Application plug-in tracing. Azure Functions can be written using .NET Core, which includes native integration with Azure Application Insights. Alternatively, we can import the Azure Application Insights SDK to provide a streamlined approach to delivering messages. The article Monitoring the Power Platform: Custom Connectors - Building an Application Insights Connector covers building both an Azure Function and a Custom Connector to realize this approach.

Choosing between Log Analytics and Application Insights

 

The information above has shown mechanisms for delivering events to Azure Monitor utilizing the Azure Log Analytics Data Collector or the Azure Application Insights REST API. Choosing one or the other requires consideration of the features and benefits, and perhaps more importantly the limitations, of both. For an overview of each, please review the Monitoring the Power Platform: Introduction and Index article.

As we progress through the Monitoring the Power Platform series, we will dive deeper into these considerations and why to choose one over the other based on requirements. Ultimately the log store for Azure Application Insights is available in Azure Log Analytics but the features provided by Azure Application Insights allow for deep "insights" into scenarios we are interested in.

Next Steps

 

In this article we have discussed using custom identifiers to help track flow history and, specifically, group parent and child flows together. We have examined how we can use trigger conditions to enforce the use of custom identifiers. From there, we reviewed the Azure Log Analytics Data Collector connector and how to set up and send messages to custom tables. After that, we went into building and sending Azure Application Insights messages directly from a Power Automate flow.

In previous articles, we discussed how to evaluate workflows, triggers and run functions to help deliver insights. In the following articles covering Microsoft Power Automate Flow run time, we will discuss pushing events to Application Insights and reviewing previous flow runs for monitoring and governance.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I'll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index