Ease and accuracy
We are going to do a hands-on lab with face recognition:
- We load the pictures of all our employees into Business Central, in their Resource Cards.
- We use these pictures to recognize our employees in a group photo, and with this recognition we fill a Resource Journal with the resource numbers of all the employees in the picture. This way we can complete all the resource journal lines easily from a single photograph.
To achieve this goal we need two tools:
- Visual Studio Code with AL and Business Central. I guess you know more about this tool than I do.
- The Cognitive Face API from Microsoft Azure. You need an Azure account and an API key to use the Azure Vision APIs.
I am going to be a good boy and use Microsoft products instead of some alternative companies’ offerings, but all the big tech companies (Microsoft, Google, and IBM) are in the AI rush, and I have tested many of their tools. I prefer Microsoft due to its ease of use and image detection accuracy. Two warnings about these services and the example:
- I am a trench programmer and far from knowing the laws of several countries, but using pictures of people could be troublesome. Verify the privacy regulations in your country.
- Microsoft face recognition is very accurate without additional training, but if you have a critical recognition process you should train the model to get even more recognition accuracy. This is a very wide subject, and we are going to focus only on the example.
Use case
The initial scenario is that all Resource Cards of resource type Person already have their pictures loaded.
Step 1. The user receives or takes a group photo.
They upload the photo to a special page in Business Central:
(This picture is from https://www.flickr.com/photos/139285241@N06/24869709981 and is Creative Commons licensed).
Step 2. The user pushes the page action “Get Resources In Picture” and BC returns a list of the resources detected in the group photo.
Step 3. The user pushes Close and BC creates as many Resource Journal lines as there are resources in the list.
Face recognition from NAV/BC: breakdown
The process schema of this recognition system is:
The first process, splitting a group picture into individual faces, is done with a Microsoft Azure service called “Face Detect”.
The API reference link:
The service URL is:
The input is a group picture (or a picture of a single person). The API stores this picture temporarily in the cloud and deletes it after 24 hours. The API assigns a key to each face in the picture, and the output is a JSON response with a Face ID for each person in the image.
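As an illustration only (these Face IDs and rectangles are made up, and the exact shape may vary by API version), a Detect response for a two-person photo looks roughly like:

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 }
  },
  {
    "faceId": "65d083d4-9447-47d1-af30-b626144bf0fb",
    "faceRectangle": { "top": 141, "left": 502, "width": 140, "height": 140 }
  }
]
```

One array entry per detected face; the `faceId` values are the temporary keys we save for the later comparison step.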
To compare two faces, you must upload both pictures with the previous Detect method, save the two returned Face IDs, and call another method, “Face Verify”, with those two Face IDs. The input is the two Face IDs of the already-stored face pictures, and the response is true if the two Face IDs belong to the same person, false if the faces don’t match.
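For illustration (again with made-up Face IDs), the Verify request body looks roughly like:

```json
{
  "faceId1": "c5c24a82-6845-4031-9d5d-978df9175426",
  "faceId2": "65d083d4-9447-47d1-af30-b626144bf0fb"
}
```

and a typical response carries the match flag plus a confidence score:

```json
{ "isIdentical": true, "confidence": 0.92 }
```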
This is the reference of the API:
The service URL is:
The process to verify whether two faces belong to the same person is:
- Upload the pictures and save their Face IDs with “Face Detect”.
- Verify the faces by calling the “Face Verify” service with the Face IDs obtained in the previous step.
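The verify step above can be sketched in AL with HttpClient. This is only a minimal sketch, not the actual “Azure Vision” codeunit from the repo; the procedure name, the `<region>` endpoint, and the subscription key are placeholders:

```al
local procedure VerifyTwoFaces(FaceId1: Text; FaceId2: Text): Boolean
var
    Client: HttpClient;
    Content: HttpContent;
    Headers: HttpHeaders;
    Response: HttpResponseMessage;
    Body: JsonObject;
    Result: JsonObject;
    IsIdenticalToken: JsonToken;
    BodyText: Text;
    ResponseText: Text;
begin
    // Build the JSON body with the two Face IDs returned by "Face Detect"
    Body.Add('faceId1', FaceId1);
    Body.Add('faceId2', FaceId2);
    Body.WriteTo(BodyText);
    Content.WriteFrom(BodyText);
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');
    Headers.Add('Content-Type', 'application/json');
    // Authenticate with the Azure subscription key (placeholder)
    Client.DefaultRequestHeaders.Add('Ocp-Apim-Subscription-Key', '<your-api-key>');
    Client.Post('https://<region>.api.cognitive.microsoft.com/face/v1.0/verify', Content, Response);
    Response.Content.ReadAs(ResponseText);
    // The response carries an "isIdentical" boolean (and a confidence score)
    Result.ReadFrom(ResponseText);
    Result.Get('isIdentical', IsIdenticalToken);
    exit(IsIdenticalToken.AsValue().AsBoolean());
end;
```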
AL code: key functions
I don’t want to break down all the code, only show the main functions of the feature. Rather than reading my explanation, see my GitHub repo. In Codeunit 50177 we call the function “WriteResJournalFromPicture” with an InStream containing the group picture:
procedure WriteResJournalFromPicture(InsStream: InStream)
var
    JSONBuffer: Record "JSON Buffer" temporary;
    Resource: Record Resource;
begin
    UploadImageAndGetAllFaceIds(InsStream, JSONBuffer);
    with JSONBuffer do begin
        repeat
            CompareNewFaceWithResourcesAndMark(Value, Resource);
        until Next() = 0;
        Resource.MarkedOnly(true);
        Page.RunModal(0, Resource);
        CreateResJournalLines(Resource);
    end;
end;
“UploadImageAndGetAllFaceIds” calls the Azure “Face Detect” service and returns a buffer with all the Face IDs in the picture:
local procedure UploadImageAndGetAllFaceIds(InsStream: InStream; var JSONBuffer: Record "JSON Buffer" temporary)
var
    AzureVision: Codeunit "Azure Vision";
begin
    with JSONBuffer do begin
        ReadFromText(AzureVision.GetTempFaceID(InsStream));
        SetRange("Token type", "Token type"::String);
        SetFilter(Path, '*faceId');
        FindSet();
    end;
end;
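“GetTempFaceID” lives in the “Azure Vision” codeunit of the repo and is not listed here. A minimal sketch of what such a function could look like, assuming a placeholder `<region>` endpoint and subscription key, is:

```al
procedure GetTempFaceID(InsStream: InStream): Text
var
    Client: HttpClient;
    Content: HttpContent;
    Headers: HttpHeaders;
    Response: HttpResponseMessage;
    ResponseText: Text;
begin
    // Send the raw picture bytes to the "Face Detect" endpoint
    Content.WriteFrom(InsStream);
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');
    Headers.Add('Content-Type', 'application/octet-stream');
    Client.DefaultRequestHeaders.Add('Ocp-Apim-Subscription-Key', '<your-api-key>');
    Client.Post('https://<region>.api.cognitive.microsoft.com/face/v1.0/detect', Content, Response);
    // Return the raw JSON text; the caller parses it into the JSON Buffer
    Response.Content.ReadAs(ResponseText);
    exit(ResponseText);
end;
```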
“CompareNewFaceWithResourcesAndMark” reads all resources of type Person and checks whether the new face matches any of them. The original snippet omits the procedure header, so the signature below is inferred from the call site:
local procedure CompareNewFaceWithResourcesAndMark(FaceId: Text; var Resource: Record Resource)
var
    AzureVision: Codeunit "Azure Vision";
    ResourceFaceId: Text[50];
begin
    with Resource do begin
        SetRange(Type, Type::Person);
        FindSet();
        repeat
            ResourceFaceId := GetResourceFaceId(Resource);
            if ResourceFaceId <> '' then
                if AzureVision.MatchTwoFaceIds(FaceId, ResourceFaceId) then begin
                    Mark(true);
                    exit;
                end;
        until Next() = 0;
    end;
end;
My AI repo is a bit messy; it has many utilities for miscellaneous companies’ services, so I will highlight the main objects of this face recognition:
Page 50176 “Azure Vision”. In this page we load a picture and push the “Get resources in picture” button.
Codeunit 50177 Load Resource Journal. Processes the picture and loads the journal.
Codeunit 50176 Azure Vision. Manages all the API calls from AL.
Please feel free to ask me any questions you have. My AI repo is: https://github.com/JalmarazMartn/MessyAIWhithALBusinessCentral