Over the past few months, since the feature was released with the fall update, we have seen many people trying out the new Intelligent Cloud functionality in Microsoft Dynamics 365 Business Central. This functionality is designed to bring all the data (or, at the very least, all of the relevant data) from an on-premises instance of Business Central to a cloud instance. This enables many of the features that are only available in a cloud system to be used on your real business data. More information about the feature can be found in the official documentation. This blog post details the most common issues that you may encounter while setting up your Intelligent Cloud and gives you the tools that you need to fix these issues so that you can reach a high rate of success in replicating your on-premises data.
Failed to enable your replication
You may get this error when walking through the Intelligent Cloud Setup wizard. Currently, the error message is fairly generic and does little to help you identify the cause; a fix is coming in a future update to make the cause of the error clearer. The good news is that there is already a way to find out exactly what the issue is when you get this error message.
The first thing you'll need to do is open your Self-hosted Integration Runtime window and verify that it is actually connected to Azure Data Factory. Your window should look similar to the following.
Once you've verified that your Self-hosted Integration Runtime is properly set up, the next thing to do is jump over to the Diagnostics tab. This is where you can test the connection to your SQL Server and also where you can look at the runtime logs to see the errors that might have happened when the SQL connection was attempted.
The logs from the Self-hosted Integration Runtime are stored in the Event Viewer, which will open when you click View logs. It may take some time to load all of the logs. Once the window opens and fully loads, scroll down to look for any logs around the time that you saw the error from the setup wizard. In my case, I had a bad user name or password, which is indicated by the error message in the log.
If you ever get an error with an unknown or unspecified cause, this should be the first place you go to find out what happened. The majority of errors will show up here.
Change tracking is not enabled

Once you've successfully set up replication and run it for the first time, you may find that change tracking is not enabled for certain tables. The workaround is fairly easy: just enable change tracking on those tables. The details are described in this separate blog post. While we're on this topic, one thing you should be aware of is the retention period for your change tracking data. The Intelligent Cloud uses change tracking to make data replication more efficient; after all, it would be wasteful to move all of the data every time the replication runs rather than just the data that has changed. For this to work most effectively, the change tracking retention period needs to be at least as long as the time between your scheduled replication runs. For example, if your replication is scheduled to run once per day, make your retention period at least one day.
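As a rough sketch of that workaround, change tracking can be checked and enabled directly in SQL Server. The database and table names below are hypothetical placeholders; substitute your own on-premises database and the tables that failed.

```sql
-- Hypothetical database/table names; substitute your own.
-- Enable change tracking at the database level, keeping changes long
-- enough to cover the gap between scheduled replication runs
-- (2 days here gives a once-per-day schedule some safety margin).
ALTER DATABASE [BCOnPrem]
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Enable change tracking on an individual table that failed to replicate.
ALTER TABLE [dbo].[CRONUS$Item]
ENABLE CHANGE_TRACKING;

-- Verify which tables currently have change tracking enabled.
SELECT OBJECT_NAME(object_id) AS table_name
FROM sys.change_tracking_tables;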
Table does not exist in the on-premises database

This issue will typically be limited to extension tables. If your on-premises Business Central instance is on the same version as the cloud instance, you shouldn't see this error message for any of the core tables. Still, failures on extension tables can lead to bigger issues later on. For example, say there is an extension in the cloud that extends the Item table (as is the case in Figure 6), but that extension is not installed in the on-premises system. In this scenario, the data for the Item table may replicate fine, but the data in the extension table will not come across because it doesn't exist on-premises. This is a problem because when the system collects the Item records, it joins against the extension tables; since those tables are empty (the data was never replicated), the result of the join is an empty set. This is why it's important to look at the extensions installed in the cloud and install them on-premises as well.
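To illustrate the failure mode, here is a sketch of the kind of join involved. The table and column names are hypothetical; the real queries are generated by the platform, not written by hand.

```sql
-- Item replicated fine, but the extension's companion table is empty
-- because the extension is not installed on-premises.
SELECT i.[No_], i.[Description], e.[SomeExtensionField]
FROM [dbo].[CRONUS$Item] AS i
JOIN [dbo].[CRONUS$Item$ExtensionAppId] AS e  -- empty companion table
    ON i.[No_] = e.[No_];
-- An inner join against an empty table returns zero rows,
-- so the Item list appears empty in the cloud client.
```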
You can determine which extension is the culprit by looking at the GUID in the table name. In Figure 6, the culprit is c526b3e9..., which happens to be the Sales and Inventory Forecast extension. To resolve the error, I would just install that extension on-premises. Note that having additional extensions in your on-premises system (ones that do not live in your cloud instance) is okay; any tables from those extensions will just be ignored. It's only the converse scenario that can cause issues.
Finally, it may be that the error is for an extension table that you don't really care about. If this is the case, then you can just ignore the error. Achieving 100% success rate with your replication is not necessary to take advantage of the Intelligent Cloud.
The schemas do not match

This should be the least common error that you see when replicating your data. The primary cause is that a cumulative update was applied to the on-premises system before the cloud tenant was upgraded, or vice versa. We try to avoid schema changes between minor updates as much as possible, but they do sneak in on occasion. In addition, the Intelligent Cloud replication is currently on the aggressive side of enforcing schema parity: even if the schema change was only extending the maximum length of a Text field, you would get this error. This is something we will be toning down in the future so that minor updates carry a much lower risk of causing failures.
The workaround for this error is to bring each side to the same Business Central version. In order to avoid getting into this situation, the best thing to do is to wait to apply the CU in the on-premises instance until you know that your cloud tenant has been upgraded to the corresponding version.
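One way to spot such drift, as a sketch, is to export column definitions from the on-premises database and diff them against the same export from a sandbox on the cloud version. The LIKE filter and the Company$Table naming pattern are assumptions based on how Business Central names its SQL tables.

```sql
-- List column name, type, and length for every Item table variant,
-- ordered so that exports from two databases can be diffed side by side.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '%$Item%'
ORDER BY TABLE_NAME, COLUMN_NAME;
```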
Data from the table is unavailable
One of the engineering decisions we made before releasing this feature was to flag tables that fail to replicate with a 'Blocked' indicator. This effectively prevents the Business Central client from accessing any data within such a table. The reasoning was to prevent users from seeing potentially stale or inconsistent data and then making business decisions based on it. So, depending on which tables failed to replicate, you might see errors like the one in Figure 8 when you try to sign in to one of the replicated companies or try to view certain data. The workaround is to sign in to the Business Central demo company, review which tables failed, and fix the underlying issues.
We've thought a lot about this 'Blocked' indicator and may remove it in the future, but that's undecided at the moment. What we do plan to do is implement various improvements to make it so that table failures happen less often, one of which is to reduce the schema matching requirements.
As you set up your Intelligent Cloud in Dynamics 365 Business Central, I hope that everything goes smoothly the first time. But if you do run into any trouble, these tips should help you quickly solve the issues so that you can achieve a high replication success rate and take full advantage of your data in the cloud. As always, we'd love to hear your feedback. Tell us what you think of the Intelligent Cloud feature in the comment section below.
Hello @Kalyan - unfortunately not all versions will be represented as docker images, so I can't say there is an easy way to identify which one will get you the closest to matching exact schemas. That is one of the issues with our first implementation of the Intelligent Cloud feature - that it requires matching schemas, even when there are cases where the on-prem CUs may not precisely match the schema of the SaaS updates. The good news is that we have made many improvements to the Intelligent Cloud replication process - one of which will relax those schema restrictions. Instead of throwing an error, it will move the data across for the fields that do exist on-prem. These changes will be rolling out with the Spring update of Business Central.
Hi Zech, how can we download a Docker image that matches the BC sandbox? We are having issues replicating the data because the schema of our on-prem database does not match the BC sandbox (Figure 7). We want to make the extensions and schema match by creating a Docker image and container locally. Please advise us on this. Our BC sandbox version is 13.2.27692, and we are not able to find that build in mcr.microsoft.com/.../list.
@domask - Thank you for trying the Intelligent Cloud functionality. We initially implemented the restriction to prevent the system from being flooded if a malicious user attempted to run replication over and over. However, we realize that there may be some hurdles to get this set up initially, and it can be beneficial to run it multiple times in one day. Therefore, we will be lifting this restriction in a future release. Unfortunately, that doesn't help you with your current issue as there's no way at the moment to get around the restriction.
Keep in mind, the limit is one *manual* replication run per day, so my advice for you at present is to leverage the schedule functionality. If you set up a daily schedule, you would be able to have one scheduled replication run and one manual replication run happen in the same day. Still not ideal, but a little better than only getting one shot per day.
Hi Zech, thank you for the nice blog entry. At the moment we are facing another issue, which is not described here. Replication of data passed successfully (status = Complete), but some of the tables failed, which means we have to fix them and run replication again. Unfortunately, BC limits successful attempts to once per day, which should be okay in the future, once everything is up and running. But when you are in the initial phase of replication and want to figure out the data, waiting another day for another attempt is simply a waste of time. Is there any way to change the number of allowed attempts? If not, how do you suggest we do our first runs and fix issues: by creating new sandboxes after each attempt?