Meetup: Computer Vision – Case study: Build a smart and connected helmet.
Today we held the first meeting of our Robotics & Artificial Intelligence Meetup Group. The first session was dedicated to "Computer Vision", and my workshop focused on a real case at JOULEHUB: building a smart and connected helmet.
We built this prototype to explore how to make our smart helmet. There are demanding jobs where people are constantly at risk. One is mining, where technicians spend many hours working inside the mine. Another is tunnel construction, which presents the same issues.
We wanted to bring comfort and safety to these jobs. Our idea was to use Digital Transformation and Artificial Intelligence to help people.
The helmet has two main functions: keeping the wearer safe and checking the materials inside a mine. In this post I will explain how I trained a custom model and used it to calculate the probability of finding gold or amber inside a mine.
The model was trained with Azure Custom Vision, an AI service which allows easy customization and training of custom models. It was then tested through a custom-made web page with JavaScript code.
Architecture

Here is the complete diagram of the communication flow between the helmets, Azure, Machine Learning, Dynamics 365 and Power BI.
The first step is to make sure that the model we are creating in Custom Vision is trained to detect materials and to distinguish them from other objects that may appear. To do this we need enough data to train on, so we used a minerals image database. In total I used 100 pictures for the training.

When uploading a picture, the still-untrained model will point out what it thinks is an object, and you will then need to tag it correctly; in this case we point out and tag all the materials in each picture as "Gold", "Amber" or "Quartz". After doing this the model can be trained and tested. For each iteration you will get a performance measure consisting of precision, recall and mAP, standing for:
- Precision – the fraction of relevant instances among the retrieved instances
- Recall – the fraction of the total amount of relevant instances that were retrieved
- mAP – overall object detector performance across all tags
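To make the first two measures concrete, precision and recall can be computed from true positive, false positive and false negative counts. A minimal sketch (the counts below are invented for illustration, not taken from our model):

```javascript
// Precision and recall from detection counts (illustrative numbers only).
function precision(tp, fp) {
  return tp / (tp + fp); // fraction of predictions that were correct
}

function recall(tp, fn) {
  return tp / (tp + fn); // fraction of actual objects that were found
}

// Example: 8 correct detections, 2 wrong ones, 2 missed objects.
console.log(precision(8, 2)); // 0.8
console.log(recall(8, 2));    // 0.8
```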

A good starting point for creating the custom-made web page is to use Microsoft’s Quickstart: Analyze a remote image using the REST API and JavaScript in Computer Vision. We can easily modify this quick start example for our own calculations.
Besides defining the subscription key and the endpoint URL, we need to change the quickstart example to use the Custom Vision endpoint. This can be found in the Custom Vision dashboard under "Prediction URL".
var uriBase = endpoint + "customvision/v3.0/Prediction/…";
We also need to set the custom header “Prediction-Key” for our request.
xhrObj.setRequestHeader("Prediction-Key", "…");
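Putting the endpoint and the header together, a prediction request can be sketched as below. This is an assumption-laden sketch, not our production code: the project ID, iteration name and key are placeholders you would read from your own Custom Vision dashboard, and it uses `fetch` rather than the quickstart's `XMLHttpRequest`.

```javascript
// Sketch of a Custom Vision prediction request (all values are placeholders).
const endpoint = "https://<region>.api.cognitive.microsoft.com/";
const projectId = "<your-project-id>";       // from the Custom Vision dashboard
const iterationName = "<your-iteration>";    // published iteration name
const predictionKey = "<your-prediction-key>";

const uriBase = endpoint +
  "customvision/v3.0/Prediction/" + projectId +
  "/detect/iterations/" + iterationName + "/url";

async function analyzeImage(imageUrl) {
  const response = await fetch(uriBase, {
    method: "POST",
    headers: {
      "Prediction-Key": predictionKey,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ url: imageUrl }) // image is passed by URL
  });
  return response.json(); // JSON with a "predictions" array
}
```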
Custom Vision analyzes the pictures we send and returns result data from our trained model. For our testing purposes we uploaded the pictures to Azure Blob Storage.
In this project we use AI and Cognitive Services to analyze the images and the environment around the technicians. Each helmet has a camera that takes a picture every 10 seconds. All these images are sent to our IoT Hub and saved in Azure Blob Storage:
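The capture loop on the device can be sketched as follows. `captureImage` and `uploadToBlob` are hypothetical stand-ins for the device's camera code and the Azure upload call, which depend on the actual hardware and SDK used:

```javascript
// Sketch of the helmet's capture loop. captureImage and uploadToBlob are
// hypothetical placeholders for the real camera and Azure upload code.
const CAPTURE_INTERVAL_MS = 10 * 1000; // one picture every 10 seconds

function captureImage() {
  // Placeholder: on the real device this would read a frame from the camera.
  return { name: "frame-" + Date.now() + ".jpg", data: new Uint8Array(0) };
}

async function uploadToBlob(image) {
  // Placeholder: on the real device this would send the image to IoT Hub,
  // which routes it into Blob Storage.
  console.log("uploading", image.name);
}

function startCaptureLoop() {
  return setInterval(() => {
    uploadToBlob(captureImage());
  }, CAPTURE_INTERVAL_MS);
}
```

The returned interval handle lets the device stop the loop (for example when the helmet powers down) via `clearInterval`.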

Get Image

In the code above, we can see how to manage the image and save it to the USB drive on our device. Before streaming it to Azure, we need to create the image file. It is very easy.
Here is the result inside our blob storage.
The image is sent to our image classifier and compared with the parameters that we set up.
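One simple way to compare the classifier's output against a configured parameter is to filter the returned `predictions` array by a confidence threshold. The sample data below is invented for illustration:

```javascript
// Keep only detections above a confidence threshold (sample data is invented).
function filterPredictions(predictions, threshold) {
  return predictions.filter(p => p.probability >= threshold);
}

const sample = [
  { tagName: "Gold",   probability: 0.92 },
  { tagName: "Amber",  probability: 0.35 },
  { tagName: "Quartz", probability: 0.76 }
];

// With a 0.5 threshold, Gold and Quartz pass; Amber is discarded.
console.log(filterPredictions(sample, 0.5));
```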
The result that we received is:

Slide: https://www.slideshare.net/algraps/digital-transformation-in-mining
Video: http://ow.ly/Obe350C6Dm5
This was originally posted here.
