Camera with JavaScript and AL in Business Central
Jalmaraz
Update 18/02/2020
I updated all the code to the Wave 2 version. For this purpose I created a new branch in the repository: https://github.com/JalmarazMartn/ALBSC365_CameraJS/tree/Wave2
This add-in doesn't work with the new Edge browser or with Chrome on PC. It works on Firefox (PC), and on other browsers on mobile devices.
Introduction.
I needed to take photos from Business Central (AKA our beloved NAV). Business Central already has a page to manage the camera, called "Camera Interaction". I don't know whether it works elsewhere, but it doesn't work in my SaaS environment, and I needed it for some tests.
“Who cares?” section.
Well, you might think this is not very important, but consider two points:
- If you are starting to work with artificial intelligence and you use face recognition, the camera plays an important role in all of this.
- I also wanted a video preview before taking the photo, and the existing "Camera Interaction" page doesn't offer one.
So we have to get hands-on and build our own camera manager in AL and JavaScript. I had a little help: lots of posts in JavaScript forums cover this subject. So we have the key pieces: JavaScript, an AL control add-in, and a new page.
Page preview.
We can see three controls on this page:
- The top control is an AL Business Central blob with the captured camera image.
- The control below it is a video stream handled in JavaScript.
- The last control is also a photo, like the first AL control, but inside an HTML control.
We have two actions: "Play" to start streaming the video in the middle control, and "Photo" to take the photo and show it in both the top and the bottom controls. We can then store the blob from the top NAV control.
You can get all the code in my Git repository https://github.com/JalmarazMartn/ALBSC365_CameraJS
Code breakdown: “CameraControl.js” Script.
We create a script called "CameraControl.js". First, this statement creates the HTML controls (a video element and a canvas for the still image):
document.write('<html>Video<br><video id=video width=200 height=150 autoplay></video><br>'+
'Photo<br><canvas id=canvas width=200 height=150></canvas></html>');
A new JavaScript function activates the camera and shows the stream in the video control:
function PlayVideo()
{
    var video = document.getElementById('video');
    // Ask the browser for a camera stream and show it in the video element
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
            video.srcObject = stream;
            video.play();
        });
    }
}
And another function takes the photo and, in its last statement, calls NAV through the extensibility API with the canvas control content:
function TakePhoto()
{
    var canvas = document.getElementById('canvas');
    var context = canvas.getContext('2d');
    var video = document.getElementById('video');
    // Draw the current video frame on the canvas and send it to AL as a base64 data URL
    context.drawImage(video, 0, 0, canvas.width, canvas.height);
    Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('PhotoTaken', [canvas.toDataURL("image/png")]);
}
A little explanation about the value returned by "toDataURL": it is a base64 string with the photo, preceded by the text prefix "data:image/png;base64,". Later we will see how to handle this content to save it in NAV: we have to remove the prefix and convert the rest of the string into an image stream.
Add-in Control breakdown.
Here we set the control properties and declare the procedures and the event.
controladdin "Generic widget"
{
RequestedHeight = 2000;
MinimumHeight = 300;
MaximumHeight = 3000;
RequestedWidth = 2000;
MinimumWidth = 700;
MaximumWidth = 3000;
VerticalStretch = true;
VerticalShrink = true;
HorizontalStretch = true;
HorizontalShrink = true;
Scripts = 'CameraControl.js';
procedure PlayVideo();
procedure TakePhoto();
event PhotoTaken(Video: Text);
}
Page with Add-in.
We create a new page with a blob control:
field("Image"; BufferImage.Picture)
This is a blob field with subtype "Bitmap", from a temporary record:
field(2; Image; Blob)
{
    DataClassification = ToBeClassified;
    Subtype = Bitmap;
}
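As an aside, this snippet names the blob field Image, while the page code references it as BufferImage.Picture; check the repository for the exact name. Here is a minimal sketch of how such a buffer table could look as a whole (the object name and number are my own placeholders, and I use Picture as the field name so that it matches the page code):
table 50100 "Camera Image Buffer"
{
    // Used only as a temporary record behind the page; never written to the database.
    fields
    {
        field(1; "Entry No."; Integer)
        {
            DataClassification = ToBeClassified;
        }
        field(2; Picture; Blob)
        {
            DataClassification = ToBeClassified;
            Subtype = Bitmap;
        }
    }
}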
We create a user control with the add-in and define the behavior that receives the event when JavaScript sends us the photo text stream:
usercontrol(Camerajs; "Generic widget")
{
    ApplicationArea = All;
    Visible = true;

    trigger PhotoTaken(PhotoText: Text)
    var
        OutStream: OutStream;
        InsTream: InStream;
        TempBlob: Record TempBlob temporary;
    begin
        // Strip the "data:image/png;base64," prefix (22 characters) and load the base64 photo
        TempBlob.FromBase64String(PhotoText.Remove(1, 22));
        TempBlob.Insert();
        TempBlob.Blob.CreateInStream(InsTream);
        // Copy the image stream into the temporary buffer record shown on the page
        if BufferImage.Insert() then;
        BufferImage.Picture.CreateOutStream(OutStream);
        CopyStream(OutStream, InsTream);
        BufferImage.Modify();
        CurrPage.Update();
    end;
}
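For the trigger above to compile, BufferImage must be declared as a global variable on the page, holding the buffer record as temporary. A sketch, using the assumed table name from above:
var
    BufferImage: Record "Camera Image Buffer" temporary;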
We add the "Play" button to start playing the video:
action(Play)
{
    ApplicationArea = All;
    Promoted = true;
    CaptionML = ENU = 'Play';
    Image = ViewDescription;
    PromotedIsBig = true;

    trigger OnAction();
    begin
        CurrPage.Camerajs.PlayVideo();
        CurrPage.Update(false);
    end;
}
And finally, the button “Photo” to take a photograph:
action(Photo)
{
    ApplicationArea = All;
    Promoted = true;
    CaptionML = ENU = 'Photo';
    Image = Picture;
    PromotedIsBig = true;

    trigger OnAction();
    begin
        CurrPage.Camerajs.TakePhoto();
        CurrPage.Update(false);
    end;
}
Flow recap.
When we open the page, it first loads the HTML controls from the script:
document.write('<html>Video<br><video id=video width=200 height=150 autoplay></video><br>'+
'Photo<br><canvas id=canvas width=200 height=150></canvas></html>');
Now the page is open, with the JavaScript loaded. Then, if we push the "Play" action, the action button executes this code:
trigger OnAction();
begin
    CurrPage.Camerajs.PlayVideo();
    CurrPage.Update(false);
end;
And that code calls the JavaScript function "PlayVideo":
function PlayVideo()
{
var video = document.getElementById('video');
The bridges between the AL page and JavaScript that make this possible are the previous "CameraAddin.al" file, where we declare the methods and the event, and the add-in control declaration in the page.
Take photo flow recap.
When we push the take photo action, we get a back-and-forth flow from AL to JavaScript and from JavaScript back to AL. The "Photo" action executes this code:
trigger OnAction();
begin
    CurrPage.Camerajs.TakePhoto();
The JavaScript function "TakePhoto" is executed, and its last statement goes back to AL with the result of "toDataURL", which returns the image encoded in base64:
context.drawImage(video, 0, 0, canvas.width, canvas.height);
Microsoft.Dynamics.NAV.InvokeExtensibilityMethod('PhotoTaken', [canvas.toDataURL("image/png")]);
The "PhotoTaken" event is caught by the add-in control in the page in this way (see the complete code above):
trigger PhotoTaken(PhotoText: Text)
var
    OutStream: OutStream;
    InsTream: InStream;
    TempBlob: Record TempBlob temporary;
begin
    TempBlob.FromBase64String(PhotoText.Remove(1, 22));
We remove the first 22 characters because we must split the prefix ("data:image/png;base64,") from the base64 string containing the photo (the rest of the string); this way we can store the still image in our AL blob.
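If you prefer not to hard-code the 22-character length, a small helper could find the comma in the data URL and keep only what comes after it. This is just a sketch (the procedure name is mine, not something from the repository); StrPos and CopyStr are standard AL functions:
local procedure StripDataUrlPrefix(DataUrl: Text): Text
var
    CommaPos: Integer;
begin
    // "data:image/png;base64,iVBOR..." -> keep only the base64 payload after the comma
    CommaPos := StrPos(DataUrl, ',');
    if CommaPos = 0 then
        exit(DataUrl); // no prefix found, return the text unchanged
    exit(CopyStr(DataUrl, CommaPos + 1));
end;
The trigger could then call TempBlob.FromBase64String(StripDataUrlPrefix(PhotoText)) instead of relying on Remove(1, 22).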