Creating a Machine Learning model in Azure - a beginner's step-by-step guide
The Azure Machine Learning (ML) Studio has been substantially overhauled. Here is a beginner's guide to creating your first ML model, including setting up an account. The steps are a bit fiddly, and as ML Studio is still very much under development, the individual steps may change, so what you read here may not be 100% accurate by the time you read it. But hopefully it can help you get started if you want to look into creating ML models to integrate with Business Central.
Create an account
Go to ml.azure.com and log in with an existing Azure account. Click on "Create a new subscription" and follow the steps there. Then wait a few minutes (or in some cases hours) for the subscription to become operational.
Go to your Azure portal (portal.azure.com), search for "Azure Machine Learning", and click on "+ Create" to create a new workspace. Once created, you can find it under "Azure Machine Learning". Click on "Launch studio" to open it in Microsoft Azure Machine Learning Studio.
Create your first model
In Microsoft Azure Machine Learning Studio, in the "Author" section on the left-hand side you have three options:
- Notebooks - mostly code-based models (complicated)
- Automated ML - guided walkthroughs (basic)
- Designer - drag-and-drop low-code models (intermediate)
For our first model, click on Designer, then click on the + button. Click on "Component", then search for "Enter Data Manually", and drag and drop this component into the blank canvas. Double-click it, and set the following properties:
- Data format = CSV
- Has Headers = True
- Data =
Number
1
Soon we will set up an incoming web service to receive data via API rather than inputting it manually, but we do need this component because it defines the expected input (one number).
Search for "Execute", and select "Execute R Script" or "Execute Python Script" (just pick the language that you prefer). In our example we will run the following R script:
azureml_main <- function(dataframe1, dataframe2){
  # Log a message so the run shows up in the job output
  print("R script run.")
  # Multiply every value in the incoming data frame by 2 and return it as the first output
  return(list(dataset1 = dataframe1 * 2))
}
It will just multiply the incoming number by 2.
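If you want to sanity-check the script before submitting the pipeline, you can run the same function in a local R session. This is just a quick local sketch: the second dataset input is unused in our script, so passing NULL is fine.

# Mimic the "Enter Data Manually" input: one column, one row
df <- data.frame(Number = 1)
result <- azureml_main(df, NULL)
print(result$dataset1)   # a data frame with Number = 2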
Now add two more components: "Web Service Input" and "Web Service Output". Finally, link the four components: connect both "Enter Data Manually" and "Web Service Input" to the Dataset1 input of the "Execute R Script" component, and connect its Result dataset to "Web Service Output".
This is our finished model. It just takes a number and multiplies it by two, then returns the result.
If you now click on Validate, it will tell you that you need to specify Compute.
Add Compute
In the top right, click on Settings, then select "Compute type" - here we will set it to "Compute instance", then click on "Create Azure ML compute instance". Choose an appropriate option; as a test, just pick the cheapest one. Click on "Next: Advanced Settings", and here "Add a schedule". Then create a schedule to stop the compute instance every day, for example at 20:00. This step is of course optional, but it can save your Azure credits from running out.
Leave the model, then in the left-hand column go to "Compute", and there should be a new instance with state = Creating. Wait for this to change to Running. Then go back to Designer, open your model, and in Settings select the right "Compute type" (here "Compute instance"), and then select your new compute instance.
Now save your model and then click on Validate. It should say that your pipeline looks good and that you can submit or publish it now. So click on "Submit". Click on "Create new", enter a name, then click on Submit. Note that this mentions "Pipeline", so that is where we go next.
Click on Pipelines in the left-hand menu, then select your new pipeline. Now we need to deploy the model (i.e. create endpoints so that we can reach it).
Publish the model
In the left-hand pane click on Jobs, select your pipeline, then click on Deploy. Enter a name, and for "Compute type" select "Azure container instance", then click on Deploy. This will create an endpoint.
Go to Endpoints, then click on your new endpoint. Initially it will report the "Deployment state" as Unhealthy, so wait a few minutes (or up to an hour) until the state changes to Transitioning and finally to Healthy.
Now you have a deployed model that you can call.
Call your model from Postman
On your endpoint, click on Consume to get the REST endpoint URL and the key. Then in Postman make a call like this:
POST http://123abc7ef2a-1a05-4e12-8889-1f078a5abcd8.uksouth.azurecontainer.io/score
HEADERS:
Authorization: Bearer hOzWAXsutkc0l2nChyrCB66QxfYQGaWn
BODY:
{
  "Inputs": {
    "input1": [
      {
        "Number": 1
      }
    ]
  },
  "GlobalParameters": {}
}
Wait for the call to complete - this may take a minute or so. The output should be this:
{
  "Results": {
    "output1": [
      {
        "Number": 2.0
      }
    ]
  }
}
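If you would rather script the call than use Postman, here is a minimal sketch in R using the httr package. The URL and key are the placeholder values from the Postman example above, so substitute your own from the Consume tab.

library(httr)

# Placeholders - replace with your own endpoint URL and key
url <- "http://123abc7ef2a-1a05-4e12-8889-1f078a5abcd8.uksouth.azurecontainer.io/score"
key <- "hOzWAXsutkc0l2nChyrCB66QxfYQGaWn"

# Same body as the Postman example above
body <- list(
  Inputs = list(
    input1 = list(
      list(Number = 1)
    )
  ),
  GlobalParameters = setNames(list(), character(0))  # serializes to the empty object {}
)

res <- POST(
  url,
  add_headers(Authorization = paste("Bearer", key)),
  body = body,
  encode = "json"
)

# Parsed JSON response, e.g. Results$output1 containing Number = 2
str(content(res))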
Now you can go and turn off your compute instance. The endpoint runs on its own Azure container instance, so it should still work, and you don't need to keep your compute running and costing you money.