The views and opinions expressed in this blog are those solely of the author(s) and do not necessarily reflect Microsoft’s current policy, position, or branding. For official announcements and guidance on Dynamics 365 apps and services, please visit the Microsoft Dynamics 365 Blog.
When considering the variety of Dynamics 365 integration needs, common design patterns emerge for addressing them. Design patterns, in software engineering, are reusable solutions to recurring types of requirements, established from actual use cases. The five most common design patterns for data integration are:

- Migration
- Broadcast
- Bi-directional synchronization
- Correlation
- Aggregation
In this mini-series of blog posts, I’ll introduce each of the data integration design patterns above and describe their application in the context of Dynamics 365 and, when relevant, Azure Service Bus. An approach to data integration based on an Enterprise Service Bus (ESB) allows most of these patterns to be implemented in a very convenient and effective way: simply define the source and target systems, the frequency of communication, the input and output data formats (and the mapping between the two), and the access credentials of the systems at each end of the communication channel. If we were to replace any of the systems in the integration mix, we would simply need to plug the new one into the ESB and define its input and output message formats. Job done, no other changes required at any other endpoint!
When adopting an ESB turns out to be over-engineering of the data integration solution, or even an anti-pattern, I’ll describe alternative approaches based on different technologies. Before describing the first pattern, and to get more familiar with the capabilities of an Enterprise Service Bus, I recommend a quick read of my previous articles on the same topic.
Let’s get started with the first pattern then, the Broadcast Pattern.
The broadcast integration pattern moves data from Dynamics 365 to multiple target systems on a continuous, real-time (or near real-time) basis. Essentially, it is a one-way synchronization from one system to many.
The broadcast pattern is transactional, meaning that if a data transfer (a “transaction”) succeeds, data is committed (i.e. persisted) at destination. If the transaction fails, the data transfer is aborted, or rolled back. This type of synchronization is also optimized for processing records as quickly as possible so that data is up-to-date between multiple systems over time. As a consequence, it is essential that a broadcast integration be highly available and reliable to avoid losing critical data in transit. This is where the role of an Enterprise Service Bus is of crucial importance.
The implementation of this integration pattern in Dynamics 365 follows the principles described in the Publish/Subscribe pattern in Azure Service Bus. In this scenario, Dynamics 365 is the source system broadcasting data (i.e. sending a message) to other target systems via the ESB. The ESB acts as a broker to guarantee delivery of the message at its destination. This “push” mechanism delivers the best performance, but requires implementing a trigger mechanism, for example as an XRM plugin, as seen in the implementation of the Publish/Subscribe pattern.
An alternative approach, based on a “pull” mechanism, expects an external application to regularly poll the Dynamics 365 tenant for changes to entities (new records, updates, etc.) and then trigger the data broadcast process. This approach does not require coding plugins in Dynamics 365, but it reacts to changes more slowly, as polling can happen only at set intervals. Workflow management applications like Microsoft Flow are perfectly suitable for handling this kind of data integration requirement.
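As an illustration of the pull approach, a background poller could query the Dynamics 365 Web API for records modified since the last check and hand the results to the broadcast process. This is a minimal sketch, not part of the original solution: the entity set, polling interval, and authentication setup (an HttpClient pre-configured with the organization URL and a bearer token) are assumptions.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ChangePoller
{
    // Builds an OData query for records modified after the given UTC timestamp.
    public static string BuildChangeQuery(string entitySet, DateTime sinceUtc) =>
        $"/api/data/v9.0/{entitySet}?$filter=modifiedon gt {sinceUtc:yyyy-MM-ddTHH:mm:ssZ}";

    // Polls at a fixed interval and hands each batch of changed records
    // (as raw JSON) to the supplied broadcast callback.
    public static async Task PollAsync(HttpClient client, string entitySet,
        TimeSpan interval, Func<string, Task> onChanges)
    {
        var lastCheck = DateTime.UtcNow;
        while (true)
        {
            var response = await client.GetAsync(BuildChangeQuery(entitySet, lastCheck));
            lastCheck = DateTime.UtcNow;
            if (response.IsSuccessStatusCode)
                await onChanges(await response.Content.ReadAsStringAsync());
            await Task.Delay(interval);
        }
    }
}
```

The HttpClient is expected to have its BaseAddress set to the organization URL (e.g. https://yourorg.crm.dynamics.com) and a Bearer token in its default request headers; acquiring that token is out of scope here.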
Irrespective of the approach taken, though, broadcasting data out of Dynamics 365 should be transactional, with the possibility to cancel the transaction on delivery failure. Failure need not be declared on the first attempt; retries may be allowed before a transfer is considered failed. Transactions crossing system boundaries can be implemented with a “state machine” that retains a snapshot of the brokered message involved in the data transfer until it is claimed by all subscribed applications. If any of the subscribers fails to retrieve the message, the transaction is aborted.
The code snippet below broadcasts a message to a topic in Azure Service Bus and implements the State Machine design pattern for tracking the delivery of a message. On delivery failure, the State Machine moves the message to the “dead letter” queue to indicate that it is no longer valid for data transfer.
In the XRM plugin, we broadcast entity data as a brokered message.
private async Task Broadcast(Entity entity)
{
    var client = TopicClient.CreateFromConnectionString(connectionString, topicName);
    var message = new BrokeredMessage(JsonConvert.SerializeObject(entity));
    await client.SendAsync(message);
}
The State Machine is implemented as a singleton holding a dictionary of transaction counters by topic. Each counter tracks how many concurrent transactions, i.e. subscribers, are waiting for a message on a specific topic from the Service Bus. The dictionary is thread-safe to allow for concurrent requests.
private static StateMachine _instance;

public static StateMachine Current => _instance ?? (_instance = new StateMachine());

protected ConcurrentDictionary<string, int> transactions = new ConcurrentDictionary<string, int>();
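For completeness, here is a sketch of how the transaction-counting methods BeginTransactionAsync and EndTransactionAsync, which the snippets reference but do not show in full, might be implemented. The increment/decrement logic is my assumption based on the description above, not the author's published code.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class StateMachine
{
    private static StateMachine _instance;
    public static StateMachine Current => _instance ?? (_instance = new StateMachine());

    protected ConcurrentDictionary<string, int> transactions = new ConcurrentDictionary<string, int>();

    // Registers a new concurrent transaction (subscriber) for the topic.
    public Task BeginTransactionAsync(string topicName)
    {
        transactions.AddOrUpdate(topicName, 1, (_, count) => count + 1);
        return Task.CompletedTask;
    }

    // Decrements the counter for the topic; returns true only if a
    // transaction had actually been registered for that topic.
    public Task<bool> EndTransactionAsync(string topicName)
    {
        bool existed = transactions.ContainsKey(topicName);
        if (existed) transactions.AddOrUpdate(topicName, 0, (_, c) => c > 0 ? c - 1 : 0);
        return Task.FromResult(existed);
    }
}
```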
A subscriber application that wants to read a message broadcast by the ESB begins a new transaction by invoking the BeginTransactionAsync method for a specific topic on the State Machine, and then handles the OnMessage event to obtain a copy of the initial entity. The object is then saved or processed internally; if this step fails, the transaction is cancelled.
public async Task ReadMessageAsync()
{
    var client = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName);
    await StateMachine.Current.BeginTransactionAsync(topicName);
    client.OnMessageAsync(async message =>
    {
        var entity = JsonConvert.DeserializeObject<Entity>(message.GetBody<string>());
        // SaveAsync: application-specific persistence of the entity
        try { await SaveAsync(entity); await StateMachine.Current.SuccessAsync(message, topicName); }
        catch { await StateMachine.Current.CancelAsync(message, topicName); }
    });
}
The State Machine implements a method for indicating success of retrieval of message from the ESB and completion of the transaction by the subscriber application: SuccessAsync. This method, in turn, invokes CompleteAsync on the message, thus completing the receive operation of the message itself, and indicating that the message should be marked as processed in the ESB, and eventually deleted from the topic. This is done only when all concurrent active transactions are completed.
public async Task<bool> SuccessAsync(BrokeredMessage message, string topicName)
{
    bool done = await EndTransactionAsync(topicName);
    int count = Current.transactions[topicName];
    // All concurrent transactions are done: complete the message on the ESB
    if (done && count == 0) await message.CompleteAsync();
    return done;
}
The CancelAsync method, instead, cancels the broadcast of the message by resetting the transaction counter for a topic, and moves the message to the “dead letter” queue, a queue of messages that have not been processed successfully. This is done by invoking the DeadLetterAsync method on the brokered message.
public async Task<bool> CancelAsync(BrokeredMessage message, string topicName)
{
    // Cancel the message broadcast -> Remove all concurrent transactions
    int count = Current.transactions[topicName];
    bool done = Current.transactions.TryUpdate(topicName, 0, count);
    if (done) await message.DeadLetterAsync();
    return done;
}
The entire solution is available to download for free from my GitHub repository.
1. Integration Design Patterns for Dynamics 365, “XRM and Beyond” blog, Stefano Tempesta
2. Message Queueing in Dynamics 365 with Azure Service Bus, “XRM and Beyond” blog, Stefano Tempesta
3. Publish/Subscribe Pattern in Dynamics 365 with Azure Service Bus, “XRM and Beyond” blog, Stefano Tempesta
4. State pattern, Wikipedia
5. Dynamics365 repository, GitHub, Stefano Tempesta