This is the 5th post in the “Behind the scenes of a cloud service” blog series. You can read the previous posts here:
We ended the previous post with concerns about fault tolerance and latency. When a customer uses Business Central in the cloud – whether using the UI at https://businesscentral.dynamics.com or using the APIs at https://api.businesscentral.dynamics.com – there are several so-called global services involved, in particular the Fixed Client Endpoint, the Fixed Web Service Endpoint, and the Global Tenant Manager. Since these global services are used in all customer interactions with Business Central in the cloud, we need to make them both fast and fault tolerant.
Let’s start by examining the concern, and let’s use the Fixed Web Service Endpoint (FWSE) to illustrate it. For this discussion, we will assume that the FWSE is hosted in an Azure data center in the US.
Consider this situation:
When the customer runs his/her local application, it makes API calls to https://api.businesscentral.dynamics.com. As we covered in the previous blog post, all these API calls go to the FWSE, which then forwards the calls to the data plane cluster that hosts the customer’s tenant. The following picture illustrates the scenario:
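To make the forwarding step concrete, here is a minimal sketch of how a fixed endpoint might map an incoming tenant to its data plane cluster and build the forwarded URL. The function name, cluster URLs, and the tenant-to-cluster map are illustrative assumptions, not the actual service code (the real service resolves tenants via the Global Tenant Manager):

```python
# Hypothetical sketch: forward an API call to the data plane cluster
# hosting the caller's tenant. Tenant names and cluster URLs are made up.
TENANT_TO_CLUSTER = {
    "contoso": "https://useast-cluster-01.internal",
    "fabrikam": "https://westeurope-cluster-03.internal",
}

def forward_url(tenant_id: str, path: str) -> str:
    """Resolve the tenant's cluster and build the URL to forward the call to."""
    cluster = TENANT_TO_CLUSTER[tenant_id]  # in reality: a tenant lookup service
    return f"{cluster}{path}"
```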
By now, the two problems should be clear: a customer far from the US data center pays a latency penalty on every single request, and a single instance of the FWSE is a single point of failure for everyone.
It doesn’t take a rocket scientist to conclude that we need multiple instances of FWSE in different parts of the world to solve these problems. So let’s start by adding a second instance of the FWSE in Europe:
If the customer is in Europe, and his/her application makes requests to https://api.businesscentral.dynamics.com, we would like the request to go to the instance of FWSE in Europe. But how do we make that happen?
As you probably know, when you enter “https://businesscentral.dynamics.com” in a browser – or make a web service call to https://api.businesscentral.dynamics.com from an application – the first thing that happens is that the domain gets translated to an IP address. At the end of the day, network traffic is sent from one IP address to another IP address, not between domains.
The translation from a domain to an IP address is handled by DNS name servers as part of the global DNS system. A traditional DNS name server is configured with a set of static entries:
The DNS lookup is done once, and the IP address is cached. When the application subsequently sends HTTP requests, they go straight to the target IP address, never through the DNS name server.
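The lookup-once-then-cache behavior can be sketched as follows. The static table models a traditional DNS name server; the domain and the IP address (from the RFC 5737 documentation range) are illustrative:

```python
# A toy model of a traditional DNS name server with static entries,
# plus a client-side cache. Domain-to-IP mapping is illustrative only.
STATIC_DNS_TABLE = {
    "api.businesscentral.dynamics.com": "203.0.113.10",  # example IP
}

dns_cache = {}

def resolve(domain: str) -> str:
    """Hit the 'name server' only on a cache miss; reuse the IP afterwards."""
    if domain not in dns_cache:
        dns_cache[domain] = STATIC_DNS_TABLE[domain]  # the one-time DNS lookup
    return dns_cache[domain]  # subsequent requests go straight to this IP
```

Once the first call populates the cache, later HTTP requests use the cached IP and never touch the name server again, which is exactly why the name server's answer matters so much: it is sticky.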
Now, what if we had two instances of the Fixed Web Service Endpoint, hosted on two different clusters in different parts of the world? They would have two different IP addresses. So we need to map one domain to two IP addresses. But that’s not even enough: How do we choose the best one of them in each case?
In Business Central in the cloud, we solve this using the Azure Traffic Manager service. Azure Traffic Manager is essentially an intelligent DNS name server. It is more than a static list of mappings between domains and IP addresses: For a given domain such as “api.businesscentral.dynamics.com” it can have multiple IP addresses as well as rules for which one to return to callers:
We have configured our Traffic Manager to return the instance of FWSE that is closest to the caller.
If the customer is in Europe, his/her laptop gets a European IP address such as 18.104.22.168. When the customer’s application looks up “api.businesscentral.dynamics.com” by calling the DNS system, our Traffic Manager gets that call and can see that the call is coming from 22.214.171.124, i.e., from Europe. And now it knows that it needs to return the IP address for the European instance of FWSE – 126.96.36.199.
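The "return the closest instance" rule corresponds to Traffic Manager's Performance routing method. A toy model of that decision might look like this; the region names, endpoint IPs (documentation range), and latency figures are all made-up illustrations:

```python
# A toy model of performance-based routing: given the caller's region,
# return the endpoint with the lowest estimated latency. All values are
# illustrative, not real Traffic Manager data.
ENDPOINTS = {
    "us": "203.0.113.1",
    "europe": "203.0.113.2",
}

# Estimated latency in ms from (caller region, endpoint region).
LATENCY_MS = {
    ("europe", "us"): 90,
    ("europe", "europe"): 10,
    ("asia", "us"): 160,
    ("asia", "europe"): 120,
}

def pick_endpoint(caller_region: str) -> str:
    """Return the IP of the lowest-latency endpoint for this caller."""
    best = min(ENDPOINTS, key=lambda r: LATENCY_MS[(caller_region, r)])
    return ENDPOINTS[best]
```

A European caller gets the European IP; an Asian caller, with no Asian instance deployed in this toy setup, gets whichever existing instance is estimated to be closer.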
With sufficiently many instances of the FWSE scattered around the world, we can ensure low latency for all customers. You can easily check which instance of FWSE you will get routed to. This is what I get:
Can you see in which Azure region “my” instance of FWSE is hosted?
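If you want to try this yourself, a one-line DNS resolution from your own machine shows which IP (and hence which instance) your network is routed to. This is a generic lookup sketch, not an official diagnostic tool; running it from networks in different regions should return different IPs:

```python
import socket

def resolve(domain: str) -> str:
    """Return the IPv4 address your local DNS resolver hands back."""
    return socket.gethostbyname(domain)

# e.g. resolve("api.businesscentral.dynamics.com") from your own network
```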
We have now covered how we minimize latency by having multiple instances of the FWSE deployed, but what happens if one of them goes down? If the Traffic Manager keeps routing requests to the bad instance of FWSE, our customers will not be able to connect.
Azure Traffic Manager solves this by periodically sending HTTP probe requests to each instance, as specified in our configuration. If the European instance of FWSE is down, Traffic Manager will soon discover it and stop directing clients to it. It means that the Asian customer in our example will now have his/her requests routed via the US, which is slower, but at least it works!
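The probe-and-fail-over behavior can be sketched like this. The probe function is injected so the sketch stays testable offline; in a real setup the checks are HTTP GETs configured in Traffic Manager itself, and the endpoint names here are placeholders:

```python
# Simplified sketch of endpoint health probing and fail-over.
# `fetch_status(endpoint)` stands in for an HTTP GET to a probe path.
def probe_all(endpoints, fetch_status):
    """Return the subset of endpoints whose probe returned HTTP 200."""
    return [ep for ep in endpoints if fetch_status(ep) == 200]

def pick(endpoints, fetch_status, preferred):
    """Prefer the closest instance, but fall back to any healthy one."""
    healthy = probe_all(endpoints, fetch_status)
    if preferred in healthy:
        return preferred
    return healthy[0] if healthy else None
```

With both instances healthy, a European caller gets the European instance; if its probe starts failing, the same caller is silently routed to the US instance instead.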
We use the same approach for other global services. This is only a part of our High Availability story, however, because we also have all the regional services. The solution for those is quite different from what we have described in this post, but we’ll save that for another post.