Integrating with external systems is a common requirement in Dynamics 365 Customer Engagement projects, but when the project involves an on-premises instance of Dynamics 365, routing requests from external systems through your firewall can present an additional challenge. Over the course of the next few posts, I will show how you can easily build a simple service relay with RabbitMQ and Python to handle inbound requests from external data interface consumers.
Here's how my approach works. A consumer writes a request to a cloud-hosted RabbitMQ request queue (either directly or through a proxy service) and starts waiting for a response. On the other end, a Python script monitors the request queue for inbound requests. When it sees a new one, it executes the appropriate request through the Dynamics 365 Web API and writes the response back to a client-specific RabbitMQ response queue. The consumer then picks up the response from the queue. This way the consumer doesn't need to know anything other than how to write the initial request, and no extra inbound firewall ports need to be opened.
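To make the flow concrete, here is a minimal sketch of the queue-monitor side using the pika client. The request message shape (an `action` and `entity_set` field), the Web API base URL, and the use of the AMQP `reply_to` property to name the client-specific response queue are all illustrative assumptions, not details from this post; the actual Web API call is left out.

```python
import json

# Hypothetical Web API base URL for an on-premises organization.
API_BASE = "https://crm.example.com/api/data/v9.0"


def build_web_api_request(message_body):
    """Translate a relay request message into a Web API verb and URL.

    Assumes a simple JSON message shape: {"action": ..., "entity_set": ...}.
    """
    request = json.loads(message_body)
    action = request["action"]          # e.g. "retrieve" or "create"
    entity_set = request["entity_set"]  # e.g. "accounts"
    if action == "retrieve":
        return ("GET", f"{API_BASE}/{entity_set}({request['id']})")
    if action == "create":
        return ("POST", f"{API_BASE}/{entity_set}")
    raise ValueError(f"Unsupported action: {action}")


def run_relay(host, request_queue):
    """Monitor the request queue and route each response back to the
    client-specific response queue named in the message's reply_to."""
    import pika  # only needed when actually connecting to RabbitMQ

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue=request_queue, durable=True)

    def on_request(ch, method, properties, body):
        verb, url = build_web_api_request(body)
        # Execute the request against the Dynamics 365 Web API here
        # (omitted), then publish the result to the response queue.
        response = json.dumps({"status": "ok", "verb": verb, "url": url})
        ch.basic_publish(exchange="",
                         routing_key=properties.reply_to,
                         body=response)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=request_queue, on_message_callback=on_request)
    channel.start_consuming()
```

Because the consumer names its own response queue in `reply_to`, the relay never needs per-client configuration, and the single `start_consuming()` loop is what makes the process naturally single-threaded.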
This diagram shows an overview of the process.
Although my original goal was to accelerate the deployment of data interfaces for on-premises Dynamics 365 CE instances, a simple service relay like this could also be useful for IFD or Dynamics 365 online deployments if you don't want to allow direct access to your organization. Because the queue monitoring process is single-threaded, it's an easy way to throttle requests, but you can run multiple instances of the queue monitor script if you want to increase the number of concurrent requests the relay can process.
There are lots of message brokers and service bus offerings (Azure Service Bus, IBM MQ, Amazon SQS, etc.) you could use to build a service relay. In fact, there's even an Azure offering called Azure Relay that aims to solve exactly the same problem my approach does (and not just for Dynamics 365), so "why use this?" is a great question.
First, I think RabbitMQ is just a great tool, and I previously wrote a five-part series about using RabbitMQ with Dynamics 365 (back when it was still called Dynamics CRM). Second, using RabbitMQ instead of a cloud-specific service bus offering gives you maximum flexibility in where you host your request and response queues and how you choose to scale. For example, my RabbitMQ broker runs in a Docker container on a Digital Ocean VPS. If I ever decide to move off of Digital Ocean, I can easily switch to any IaaS or VPS provider. I can also configure a RabbitMQ cluster to achieve significantly faster throughput.
That's it for now. In my next post in this series I will walk through the prerequisites for building the simple service relay.
How have you handled inbound data interfaces for on-premises Dynamics 365 CE organizations? Let us know in the comments!