Kauffmann @ Dynamics 365 Business Central

PDF Viewer in Business Central


Recently a question was raised by one of the MVPs: does anybody have an example of a PDF viewer in Business Central? That reminded me that I actually had an example, created almost a year ago. I should have published it way earlier! Anyway, I thought it still made sense to polish that thing and share it.

I’ve explored several ways to display a PDF as part of a website. The HTML <object> element uses the browser’s native support for viewing PDF. Optionally, it can be combined with the HTML <iframe> element as a fallback. It doesn’t require any JavaScript and seems to work in most browsers.

However, the browser will use whichever PDF reader is installed on the system and there is no way to customize the look and feel. There is only a very limited set of APIs available, and even these APIs are not supported in all browsers. In other words, how the PDF is rendered on the page is out of your control.

So I decided to move on and look for a solution that gives you full control over how the PDF is rendered, including an API to integrate with. And I stumbled upon PDF.js, an open-source PDF viewer built with HTML5 and supported by Mozilla Labs.

This tool proved to be very flexible and I got it to work pretty quickly. In the first place as a single page viewer, as explained in the examples. However, the options were still quite limited and I figured that it would be great to take the viewer, which has all the options you can think of, and embed that one into Business Central. Not just as-is, but integrated with Business Central, and with a little re-skin applied, like they ask here: “However, we do ask if you plan to embed the viewer in your own site, that it not just be an unmodified version. Please re-skin it or build upon it.”

Alright, enough talking, let’s first look at the result and then see how it’s done.

And the same control can also be used in a factbox, without any modifications:

And now straight to the source code. The complete code can be found on GitHub: https://github.com/ajkauffmann/PDFViewer

Please read the explanation below on how to use it before you jump in and start using it right away!

The GitHub repository consists of two folders:

  • PDFViewer: Business Central extension
  • pdfjs-2.0.943-dist: source code for a static HTML website

If you clone the repo, then you will find a PDFViewer.code-workspace file in the root. Open this file with VS Code and you will find both folders opened at once. Alternatively, you can open the folders individually.

Let’s first look at the folder pdfjs-2.0.943-dist. This is a download of the latest version of the viewer application, which can be found here. Alternatively, you can clone the full repo on GitHub, with all source code and build the viewer yourself by using gulp. But if you are not used to that, it’s easier to download the distributable file of the viewer and use that one.

Embedding a JavaScript control add-in in Business Central is normally done by packaging all necessary files in the extension. This could potentially work, with the trick that Vjeko explained in this blog post. However, the viewer contains about 400 files, so you get a really big list of image resources in the controladdin. And what’s more, it just doesn’t work. Some files couldn’t be downloaded at all, like the .properties files in the locale folder and the bcmap files in the cmaps folder. On top of that, even if it works in your local Docker sandbox, that doesn’t mean it will work with Business Central online. It’s out of scope to go into details about that, but it just didn’t work.

How can we embed a control add-in if the files can’t be published together with the extension? Simply by hosting the files elsewhere and pointing the browser to those files. Just as you can include JavaScript files from external sources, you can also include HTML files from external sources.

So I decided to just grab the whole PDF.js viewer application and host it on Azure. I have used static website hosting in Azure Storage, an option that comes at no additional cost. Azure blob storage is pretty cheap, so it was a no-brainer. With extra options like CDN and custom domain names it is even more attractive.

So this is what I did: after modifying the PDF web viewer application to integrate with Business Central (see below), I created a static website in the Azure portal as explained here. Then I uploaded the files using the Azure Storage Explorer. I now have the PDF web viewer application hosted here: https://bcpdfviewer.z6.web.core.windows.net/web/viewer.html. As a result, the AL extension is pretty small and easy to understand.

Warning: don’t use my PDF web viewer website for any production scenario! Use your own hosted website instead!

After I figured this hosting thing out, there were some other hurdles to take. I will save some of them for another blog post and just focus on the parts to integrate the web viewer application with Business Central. I will first go over the modifications I did to the web viewer application and then switch to the AL code.

Viewer.js

The web viewer app comes with a JavaScript file that configures the viewer and loads it into the page. Until the viewer is ready, we should not load anything into it. Because I couldn’t find any event that would tell me if the viewer is loaded, I decided to add a new event. You can find this in the viewer.js in the function webViewerLoad.

bcintegration.js

To integrate with Business Central and be able to send data to the viewer, I have created a new JavaScript file that is loaded together with the web viewer page. It contains functions to send the new event to Business Central and to load documents into the PDF viewer. It supports both URL and Base64 encoded data, so it is possible to use data from blob fields.

This bcintegration.js must be added to the viewer.html page.

viewer.css

The viewer.css defines the look and feel of the viewer. I’ve made a few changes to better integrate the viewer into Business Central. Feel free to play with it and change it to whatever you like. Look at the history of the file to see what changes I made.

That’s it, let’s now have a look at the AL code.

The most important part you need to understand is how exactly the web viewer is embedded in the page. The web viewer is loaded in a nested <iframe> element that is added to the control add-in element. This is done in script.js.

Because it is not possible to communicate directly between the iframe, which has a different URL than the parent, and the parent object, the script.js file uses postMessage and onMessage. This is a safe way to communicate between the two and it is supported by all browsers.

At the top of the script.js file, the URL of the website where the PDF web viewer is hosted is specified. You should change this to the website where you have hosted your version of the web viewer.

Don’t use a trailing slash for the url setting. It will fail the test in the onMessage function.

The rest of the code is just tableextensions, pages and pageextensions to add the viewer to the Incoming Document pages. I guess you will be able to figure out yourself what’s going on there.
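
To give an idea of what that looks like, here is a rough sketch of a controladdin object and a pageextension that puts the viewer on the Incoming Document card. The object names, IDs, file names and the LoadDocument/ControlReady members are my own assumptions for illustration; the repository may use different ones.

// Hypothetical names for illustration; check the repository for the actual objects.
controladdin "PDF Viewer"
{
    Scripts = 'js/script.js';
    StartupScript = 'js/script.js';

    // Raised by script.js once the hosted viewer page reports that it is ready.
    event ControlReady();

    // Forwards a document (URL or Base64 string) to the viewer via postMessage.
    procedure LoadDocument(Data: Text);
}

pageextension 50100 "Incoming Document Ext" extends "Incoming Document"
{
    layout
    {
        addlast(content)
        {
            usercontrol(Viewer; "PDF Viewer")
            {
                ApplicationArea = All;

                trigger ControlReady()
                begin
                    // Only load the document after the viewer signals that it is ready.
                    CurrPage.Viewer.LoadDocument(GetMainAttachmentAsBase64());
                end;
            }
        }
    }

    local procedure GetMainAttachmentAsBase64(): Text
    begin
        // Hypothetical helper: read the attached PDF from the incoming document
        // and return it as a Base64 encoded string.
    end;
}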

There are some tricks in this PDF viewer control add-in that I haven’t talked about yet. I will do that in a future blog post because these tricks apply to JavaScript control add-ins in general.

Thanks Mohana for raising the question and testing the first release!

Good luck with it and please let me know what you think! Feel free to grab the GitHub repository and to contribute to it if you have something cool to share.


Controlling the size of a Control Add-In


In the previous blog post about a PDF viewer for Business Central, I promised to come back with some tips. Here is tip #1: how to get full control over the size of a control add-in.

Final code of this example can be found on GitHub.

Control add-in example

Let’s first look at a very simple example with no size settings at all and then gradually take control over its width and height.

The stylesheet is very short and used to give the control add-in some colors so we can easily recognize it on the page.

The startup script just writes some text into it.

And finally we put this thing on the Customer Card with a pageextension.
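
To give you an idea of the setup, this is roughly what the controladdin object and the pageextension could look like. Object names, IDs and file names are my own choices for this sketch, not necessarily the ones from the example repository; the stylesheet and startup script are the two files mentioned above.

// A minimal sketch with no size settings at all, so the add-in gets the default iframe size.
controladdin BlueBox
{
    Scripts = 'js/startup.js';
    StartupScript = 'js/startup.js';
    StyleSheets = 'css/stylesheet.css';
}

pageextension 50101 "Customer Card Ext" extends "Customer Card"
{
    layout
    {
        addlast(content)
        {
            usercontrol(BlueBoxControl; BlueBox)
            {
                ApplicationArea = All;
            }
        }
    }
}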

This results in a blue box with a red border on the Customer Card:

Let’s zoom in a little bit on that blue box.

First thing we notice here is that the box doesn’t display the red border at the right and the bottom. This is because by default the size of a border isn’t part of the size of the box itself. More info can be found here. So let’s first fix this by setting the box-sizing to border-box.

Which then results in a box with red borders around it.

Controlling the size

Ok, that was easy to solve. Next step is to expand the blue box to 100% of the available space and also resize when the whole page is resized.

What we need to understand is how the control add-in is added to the page. It is not just a simple HTML <div> control, but it is inside an iframe. An iframe is used to display a web page inside a web page. The iframe has its own source and acts as an independent page, embedded in the parent page.

Why is this important to know? Because the div element which represents the control add-in can’t be used to define the size. In fact, the div already has width and height set to 100%. But it still doesn’t take all remaining space. And that’s because it is embedded in a parent element, the iframe, that has its own width and height set. By default, the width and height of the iframe are 100px.

With Chrome DevTools we can easily inspect the elements on the page. In the picture below you see the <iframe> element and inside the iframe the <div> element of the control add-in. Notice that the <div> element has height and width set to 100%, but the height and width of the <iframe> are set to 100px.

So, how can we set the height and width of that <iframe> element? The first option is to specify it in the controladdin object in AL code. In fact, there are a lot of settings that can be used to set the sizes. A complete overview can be found here. I’m not going to discuss every one of them, but I would like to focus on the most asked question: how can I force the control add-in to take up as much space as possible and resize automatically?

The answer is: use the properties HorizontalStretch and VerticalStretch.

Setting the width

Let’s first look at HorizontalStretch. When you set this property to true, the control add-in will occupy width up to the maximum that has been set with the property MaximumWidth. But if you omit the maximum width property, then the control add-in will expand indefinitely. The same goes for the property HorizontalShrink. If you specify this, the control can shrink down to the minimum that has been set with the property MinimumWidth. And if you omit the minimum width, then the control could even shrink to zero width. That is not really recommended; you should set a minimum width that keeps the content of the control add-in visible.

So, let’s set the properties HorizontalStretch and HorizontalShrink, including a minimum width, and look at the result.
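
For reference, a sketch of the controladdin object with these width properties applied (the minimum width value is an arbitrary example of mine):

controladdin BlueBox
{
    // Expand and shrink horizontally with the page, but never below 250px.
    HorizontalStretch = true;
    HorizontalShrink = true;
    MinimumWidth = 250;

    Scripts = 'js/startup.js';
    StartupScript = 'js/startup.js';
    StyleSheets = 'css/stylesheet.css';
}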

Now the control add-in takes up full width and it expands and resizes automatically when the page is resized.


Take a look at the properties of the iframe to see how the settings were applied.

Setting the height

So, does the same work for the height? Yes, but only in specific situations. The VerticalStretch property is only supported in a CardPart on Role Center pages or when it is the only content on a Card page. As with the width properties, MinimumHeight and MaximumHeight can be used to set lower and upper limits.

Before we look at setting the height of a control and letting it resize with the screen, we should think about how it should behave. Do you place other controls under the control add-in, like I’ve done in the example above where I placed the blue box in the middle of a screen? In that case, VerticalStretch is not very useful. But in case you place the control as the last control on a page, then it would make sense to let it automatically occupy the rest of the screen.

So, let’s change the example and add the blue box on a page with only a few fields at the top. The first screenshot is with no height settings applied, so the height is still the default 100px.

Let’s now set the VerticalStretch property and also set a MinimumHeight and see if that has any effect.


I’ll spare you another screenshot. With this setting, you won’t see any difference. The control add-in still has a height of 100px. The reason this is happening is that MinimumHeight only applies when VerticalShrink is true. But VerticalShrink only applies when the control add-in is the only control on a page. The only way to control the height is by setting the RequestedHeight property. Let’s prove that by setting RequestedHeight and look at the result.
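
Something along these lines, with RequestedHeight added (the values are example values):

controladdin BlueBox
{
    VerticalStretch = true;
    MinimumHeight = 300;    // only applies when VerticalShrink is true
    RequestedHeight = 300;  // this is what actually changes the height in this scenario

    Scripts = 'js/startup.js';
    StartupScript = 'js/startup.js';
    StyleSheets = 'css/stylesheet.css';
}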

As you can see, the blue box is now bigger. You could say we are getting closer. Well, not really. What we want to achieve is that the blue box takes up the rest of the screen, and according to the documentation that is only going to happen when we place the control add-in as the only one on the page.

In the screenshot below, I have removed the two fields, and now the control add-in behaves as documented. Again, out of the box this doesn’t work when you combine the control add-in with other fields on the page.

Advanced scenarios

Now that I’ve demonstrated the default behavior, it’s time to take it to the next level and figure out if we can achieve what we were looking for. Because the AL controladdin object doesn’t support this, there is only one way left: use JavaScript and directly set the desired properties on the HTML elements.

Without going too deep, the problem can’t simply be solved by setting the height to 100% on the HTML iframe element. The height of the iframe would become the same as its parent, without accounting for the space of the fields above. As a result, the control add-in would be larger than the screen and a scrollbar would be displayed.

The only solution I could find is to use the CSS flexbox layout. As a matter of fact, the Business Central web client also uses flexbox quite a lot. If you want to read more about flexbox, then I can recommend this site.

In short, what we are going to do:

  • The parent element of the iframe must become a flex container.
  • The iframe element must become a flex item.
  • The iframe should not have any height settings.

Let’s start with the last bullet point and remove the height settings from the control add-in. Remember, the iframe will now get the default height of 100px. So that needs to be removed, which we will do with JavaScript.

The first two bullet points are done with JavaScript code in the startup.js, including the code to remove the height properties from the iframe element.

We could also remove the width settings from the iframe, and modern browsers like Chrome and Edge will still show the expected result. However, Internet Explorer 11 doesn’t like that. So I left the width settings in place to also support IE11.

Please note that I have added a padding of 42px to the bottom of the iframe (see line 14 in the startup.js script). That’s optional, but it just looks better to me. The value of 42px is also used by fasttabs in the web client.

And here is the result of these changes:

The code of the result can be found on GitHub.

Good luck, I hope this helps you with creating nice control add-ins for Business Central!

How to install Docker on Windows 10 without Hyper-V


Let me start by saying that it is quite hard to understand exactly the different Docker editions. This page alone is already confusing. There is a Community Edition (CE) and an Enterprise Edition (EE). But in the table there are three editions listed and none of them are named exactly the same as the names in the list. As far as I understand, Docker Community is free, while Docker Enterprise is the paid version.

And there is also something called Docker Engine – Community and Docker Engine – Enterprise. It seems like these are free versions, whereas the Community version for Windows 10 is also called Docker Desktop for Windows. Docker Desktop for Windows comes with an installation program and has a (basic) GUI. The version for Windows Server is Docker Engine – Enterprise and does not have a GUI nor an installation program.

Why do I start with this? Because both Docker Desktop for Windows and Docker Engine – Enterprise can be downloaded and installed for free. The Docker website does not mention any pricing for these products, and the documentation explains how to install them without talking about any license. I wanted to make clear that what I’m about to tell you is legal and not an infringement of any license terms.

What’s the point? Well, Docker Desktop for Windows requires Hyper-V to be enabled on your Windows 10 system, while Docker Engine – Enterprise (from now on referred to as Docker EE) does not. I was wondering if it would be possible and officially supported to install Docker EE on Windows 10. And it appears to be the case.

Why would I want that? Not because I have a problem with Hyper-V in itself. I’ve used it for many years to maintain different installations of Dynamics NAV. And even when Microsoft started to ship Dynamics NAV and Business Central as Docker images, I used a Hyper-V VM with Docker installed instead of using Docker directly on my laptop. Just because Docker only supported containers in Hyper-V mode on Windows 10, which my laptop did not really like in combination with other native Hyper-V VMs.

Since Docker supports process isolation on Windows 10 (starting with version 1809), it was time to say goodbye to my Hyper-V setup and switch to running containers directly on Windows 10. And because I didn’t need Hyper-V anymore, I also decided to get rid of it completely. That was a mistake. Docker Desktop installed without complaining, but it didn’t want to start. Instead, I was presented with this screen:


And when I chose Cancel, I got this error message:

Too bad… even if you plan not to use Hyper-V based containers, you have to install Hyper-V, otherwise Docker Desktop will not run after installation. I’ve heard it is because of Linux containers. They can only run inside a Hyper-V based container. And because Docker Desktop supports switching between Windows and Linux containers, it simply expects you to have Hyper-V installed, no matter what.

Ok, but is that so bad after all? Well, maybe it is, maybe not. In my experience, Hyper-V can cause problems when combined with Docker running in process isolation mode. Especially networking seems to be quite vulnerable when Docker and Hyper-V have to work together with virtual networks.

Because Windows 10 and Windows Server share the same codebase, I was wondering if it would be possible to install Docker EE on Windows 10, just like you install it on Windows Server. So I followed the instructions, but to no avail. I ran into another error:

Bummer…

Because I’m that kind of curious guy, I tried to find out what this error is about. And you know what? It’s just a bug in the installation script. It’s not because Windows 10 is not supported with Docker EE. Just a bug in the script that could have been solved a year ago. The script of DockerMsftProvider is available on GitHub. Make sure you are in the master branch, in case you take a look. The default development branch is already quite some commits behind master (sigh).

So, what’s the bug? There is a test on the operating system, because the script wants to run a different cmdlet when installing on Windows 10 to test whether the Windows feature Containers is installed.

And you know what? My system does not report ‘Microsoft Windows 10’, but ‘Microsoft Windows 10 Pro‘. So the script fails, because it tries to run the cmdlet Get-WindowsFeature, which is only available on Windows Server.

But wait… it gets worse. The suggestion in the script is that there is a difference between Windows 10 and Windows Server. Which is not true. The cmdlet Get-WindowsOptionalFeature works on both Windows Server and Windows 10! So why use the other cmdlet that is only supported on Windows Server? That’s a cmdlet you use when you want to work on a remote system. In this case that’s not what we do, so the solution here would be to just use Get-WindowsOptionalFeature (and a few lines later Enable-WindowsOptionalFeature). Without even testing the operating system, the script would be valid for both Windows 10 and Windows Server!

Of course I have made a bugfix, including some other small fixes, and created a pull request. The bad news is, there were already 5 pull requests waiting, the oldest for more than a year (and coincidentally for the very same bug, just with a different solution). I wouldn’t be surprised if my pull request is ignored as well.

So, what can we do to install Docker EE on Windows 10? Well, I have two options for you. Option number 1 is to do a manual install, which is quite easy. Option number 2 is to download my version of the module DockerMsftProvider and let it install Docker for you.

Option 1: Manual install

The documentation of Docker EE contains a step-by-step instruction to use a script to install Docker EE. Follow that script and you will be safe. It can also be used to update Docker, just by downloading the latest files and overwriting the existing files.

Here is a modified version of that script. It will automatically detect the latest version for Windows version 1809, download and extract it and install it as a service.

# Install Windows feature containers
$restartNeeded = $false
if ((Get-WindowsOptionalFeature -FeatureName containers -Online).State -ne 'Enabled') {
    $restartNeeded = (Enable-WindowsOptionalFeature -FeatureName containers -Online).RestartNeeded
}

if (Get-Service docker -ErrorAction SilentlyContinue)
{
    Stop-Service docker
}

# Download the zip file.
$json = Invoke-WebRequest https://download.docker.com/components/engine/windows-server/index.json | ConvertFrom-Json
$version = $json.channels.'18.09'.version
$url = $json.versions.$version.url
$zipfile = Join-Path "$env:USERPROFILE\Downloads\" $json.versions.$version.url.Split('/')[-1]
Invoke-WebRequest -UseBasicparsing -Outfile $zipfile -Uri $url

# Extract the archive.
Expand-Archive $zipfile -DestinationPath $Env:ProgramFiles -Force

# Modify PATH to persist across sessions.
$newPath = [Environment]::GetEnvironmentVariable("PATH",[EnvironmentVariableTarget]::Machine) + ";$env:ProgramFiles\docker"
$splittedPath = $newPath -split ';'
$cleanedPath = $splittedPath | Sort-Object -Unique
$newPath = $cleanedPath -join ';'
[Environment]::SetEnvironmentVariable("PATH", $newPath, [EnvironmentVariableTarget]::Machine)
$env:path = $newPath

# Register the Docker daemon as a service.
if (!(Get-Service docker -ErrorAction SilentlyContinue)) {
  dockerd --exec-opt isolation=process --register-service
}

# Start the Docker service.
if ($restartNeeded) {
    Write-Host 'A restart is needed to finish the installation' -ForegroundColor Green
    If ((Read-Host 'Do you want to restart now? [Y/N]') -eq 'Y') {
      Restart-Computer
    }
} else {
    Start-Service docker
}

Run this script, and you will have Docker EE installed on Windows 10. Make sure to save the script and use it again to update to a newer version.

Option 2: Use DockerMsftProvider

If you want to use DockerMsftProvider with my fixes, then download the module from my GitHub repository and copy it to your PowerShell modules folder. I have created a little script that downloads the two files to the PowerShell modules folder and runs the installation for you.

A small note: while testing the script again, I noticed that the path variable was not updated. I guess it’s another bug, but I’m not sure about it. So you might want to go with option 1 anyway… 😉

$paths = $env:psmodulePath.Split(';')
$modulePath = Join-Path $paths[0] "DockerMsftProvider"
if (!(Test-Path $modulePath)) {
  New-Item -Path $modulePath -ItemType Directory
}
$outfile = Join-Path $modulePath 'DockerMsftProvider.psm1'
Invoke-WebRequest -UseBasicParsing -OutFile $outfile -Uri https://raw.githubusercontent.com/ajkauffmann/MicrosoftDockerProvider/master/DockerMsftProvider.psm1

$outfile = Join-Path $modulePath 'DockerMsftProvider.psd1'
Invoke-WebRequest -UseBasicParsing -OutFile $outfile https://raw.githubusercontent.com/ajkauffmann/MicrosoftDockerProvider/master/DockerMsftProvider.psd1

Install-Package Docker -ProviderName DockerMsftProvider -Force

And here you have Docker EE running on Windows 10.

Portainer

Another benefit of installing Docker EE is that Portainer works out of the box. You don’t need to change any settings, like exposing port 2375. The next two Docker commands download and install Portainer for you.

docker pull portainer/portainer
docker run -d --restart always --name portainer --isolation process -h portainer -p 9000:9000 -v //./pipe/docker_engine://./pipe/docker_engine portainer/portainer

Open Portainer at http://localhost:9000, provide a username and password and then click on Manage the local Docker environment. Click Connect to continue.

Then click on the local connection to manage your containers and images in a GUI.

That’s it! As you can see, it is perfectly possible to install and run Docker EE on Windows 10. For me this was really a set-and-forget installation.

Good luck!

Disclaimer: scripts are tested on my Windows 10 v1809 installation. No guarantee it will work on your installation. Scripts are provided ‘as-is’ and without any support. 😉

Debug without publishing


Recently, with the Developer Preview – February 2019, a new feature was added to the AL extension for Visual Studio Code: Debug without publishing. This new feature brings more than you would expect. It is not just another option in the list of Publish and Publish without debugging.

What’s the difference? With the options Publish and Publish without debugging you publish the extension that you currently have open in the VS Code workspace. With the option Debug without publishing, you don’t publish anything. In fact, you don’t even have to have any code open. This option allows you to debug the actual code on the server instead of the code inside the VS Code workspace you have open. A quote from the blog post about Developer preview – February 2019:

Please be aware that if you start debugging without publishing you will be debugging the last published code of your app.

What you basically need is a workspace in VS Code with a server configuration in the launch.json. This server will be the server that you are going to debug. You don’t even need to download the symbols files. And the app.json can also be ignored. Just follow these steps:

  • Create a new workspace with AL:Go!
  • Configure launch.json for the server you want to debug
  • Press Ctrl+Shift+F5

and a debugging session will be started. If the code runs into an error, the corresponding .dal file will be downloaded. From there you can view variables, set breakpoints (in the .dal file) and step through the code.

The launch.json has a breakOnError setting. By default this setting is true. Since there is no way to set breakpoints manually, you will need this setting. Optionally you can use the setting breakOnRecordWrite.

Again, basically the only thing you need to debug the actual code is a workspace with a launch.json file.

I believe this feature helps to support scenarios like ‘an error occurred but I don’t know where the error comes from’. The debugger will allow you to debug any code, including all installed extensions.

One final comment: according to this documentation, you can’t debug an extension that has set ShowMyCode = false in the app.json.

New AL file type icon


The latest release of Business Central not only comes with a new icon for the application, the AL language extension for VS Code also got a new icon:

And not only that, but the AL language also includes a file icon theme, as you can see here:

When you apply this file icon theme, you will get a small logo with AL in front of all .al files.

I’m not going to discuss the design of the icon itself. There will be people who like it and people who don’t. But what I really don’t like about this file icon theme is that it doesn’t apply icons to any other file type. And it doesn’t clearly distinguish folders. If you are used to the VSCode Icons theme, then you know what I mean. But that theme currently uses the old Business Central logo, as you can see in this picture:

To update the AL icon in VSCode Icons, I’ve created a pull request. It can take a while before this pull request is accepted and merged. If you can’t wait for that, then here is a workaround to get the new AL logo combined with VSCode Icons.

According to the documentation, it is possible to use custom icons with the VSCode Icons theme. All you need to do is to follow this documentation and copy the AL logo file.

In short, follow these steps:

  • Create a new folder vsicons-custom-icons.
    The path should be C:\Users\<your_user>\AppData\Roaming\<Code Folder>\User\vsicons-custom-icons
  • Copy the icon file from AL to this folder. The icon file can be found in this folder: C:\Users\<your_user>\.vscode\extensions\ms-dynamics-smb.al-<version>\img. The name of the file is AL_file_logo.svg
  • Rename the copied file to file_type_al.svg
  • If you don’t want to copy and rename, then just download the file (right click, save link as)
  • Finally, in VS Code, run the command Icons: Apply Icons Customization

Et voila, you have a custom icon:

What if you are more in favor of the new Business Central logo? Well, then download this file, copy it to the folder and rename it to file_type_al.svg.

One way or another, you will find it very useful to be able to quickly identify a file just by looking at the logo. Enjoy!

.Net types in AL are reference types


This week I presented a session about .Net in the AL language at Directions EMEA 2019 in Vienna. And of course, I paid attention to the types in AL that represent .Net types, like the TextBuilder, List, Dictionary, Http and JSON types. During the preparation of that session, I found a comment in the Microsoft documentation, below the documentation about HTTP, JSON etc., that I thought could use some explanation:

For performance reasons all HTTP, JSON, TextBuilder, and XML types are reference types, not value types. Reference types holds a pointer to the data elsewhere in memory, whereas value types store its own data.

In this quote from the Microsoft documentation, the List and Dictionary types are not mentioned, but the same applies to those types.

The question is of course: what does it mean that they are reference types? How does it impact your code? For .Net developers it’s quite obvious, they are used to working with reference types, but C/AL developers are certainly not so used to the concept. So if you move to AL code and use these new variable types, you might run into unexpected behaviour. I thought it would make sense to shine some light on it.

A reference type means that the variable doesn’t hold a value, but a reference to an instance of an object in memory. When you pass such a variable as a parameter to a function, then you are passing a reference to the object in memory. The receiving function works directly on that object in memory through the reference it received. Let me explain it with an example:

codeunit 50100 ListDemo
{
    procedure Demo()
    var
        Cust: Record Customer;
        NameList: List of [Text];
        Name: Text;
    begin
        GetNameList(Cust, NameList);

        foreach Name in NameList do
            Message(Name);
    end;

    procedure GetNameList(Cust: Record Customer; NameList: List of [Text])
    begin
        if Cust.FindSet() then
            repeat
                NameList.Add(Cust.Name);
            until Cust.Next() = 0;
    end;
}

The NameList variable is not passed by var to the function GetNameList. However, after the call to GetNameList, the variable NameList will contain values. This is possible because the NameList parameter just holds a reference, and the Add call in GetNameList works on the very same object in memory.

Let’s add one line of code that will destroy all of this. We add a Clear() command to the beginning of the function GetNameList.

codeunit 50100 ListDemo
{
    procedure Demo()
    var
        Cust: Record Customer;
        NameList: List of [Text];
        Name: Text;
    begin
        GetNameList(Cust, NameList);

        foreach Name in NameList do
            Message(Name);
    end;

    procedure GetNameList(Cust: Record Customer; NameList: List of [Text])
    begin
        Clear(NameList);
        if Cust.FindSet() then
            repeat
                NameList.Add(Cust.Name);
            until Cust.Next() = 0;
    end;
}

After the call, the variable NameList in the first function does not contain any values at all. This is the kind of situation that can really drive you nuts. If you debug this, you will see values being added to the parameter inside GetNameList, but when you get back into the calling function the NameList is suddenly empty. While the previous example worked perfectly, this one doesn’t.

The reason is that the Clear() command removes the reference from the parameter. It doesn’t point to the original object instance anymore. And because AL types are automatically instantiated (we don’t have to call a constructor) a new instance will be created for us and the parameter will now hold a reference to that new object. However, the parameter was not passed in by var, so the calling function will not receive the new reference from the parameter. It will still hold the reference to the original object, but nothing happened on that original object. The function GetNameList worked on the newly instantiated object.

Of course, the solution is simple: add var to the parameter, and it works again.

codeunit 50100 ListDemo
{
    procedure Demo()
    var
        Cust: Record Customer;
        NameList: List of [Text];
        Name: Text;
    begin
        GetNameList(Cust, NameList);

        foreach Name in NameList do
            Message(Name);
    end;

    procedure GetNameList(Cust: Record Customer; var NameList: List of [Text])
    begin
        Clear(NameList);
        if Cust.FindSet() then
            repeat
                NameList.Add(Cust.Name);
            until Cust.Next() = 0;
    end;
}

Hope this makes a little bit of sense to you!

Where is the API v1.0 source code?


With the April ’19 release of Business Central the standard APIs were released as version 1.0. According to the documentation, the beta version of the standard APIs remains available until April 2020 (Business Central 2020 release wave 1).

TL;DR

  • API v1.0 pages are not part of the base application
  • They are deployed as a hidden app, called _Exclude_APIV1_
  • With navcontainerhelper you can download and extract the source code

The journey to find the source code

At Directions EMEA in Vienna, I watched a demo where the inventory of an item was changed by just setting the correct value of the inventory field via an API call. And I wanted to know more about it, because I thought the inventory field is a non-editable FlowField. First I tested it by calling the API and searching for the created entries. For a moment, I was afraid it would only create Item Ledger Entries, but I quickly found out that there were also Value Entries and G/L Entries created. Apparently, the correction was done through a normal Item Journal Line posting. So far so good. But I still had to answer the question of how it was implemented in code.

So I opened the v15 source code, searched for the Item Entity page (page 5470) and tried to find out what happens behind the scenes. Why did I look for the Item Entity page? Because that used to be the API page for items, among a number of other Entity pages. However, it turned out that the page didn’t have code for creating Item Journal Lines. So, is it magic? No, of course not! Business Central can be a mystery, but it doesn’t create entries out of the blue. There must be something going on behind the scenes that I didn’t see.

Then I suddenly realized that I was looking at the beta version of the API page. The page was already part of the application before API v1.0 was released. And there is no version set in the API page, which means it is supposed to be beta (isn’t that strange by the way?). Anyway, I concluded there should be another page that serves the API v1.0.

But where? I searched everywhere in the source code of the v15 base application, but I couldn’t find any API page for the items v1.0 API. That’s not really satisfying, I can tell you. And when this happens, nobody can stop me from finding out what is happening. I will leave no stone unturned until I know exactly what’s going on.

I decided to create an event subscriber to the OnBeforePostItem event in the Item Jnl.-Post Line Codeunit. Set a breakpoint, debug the web service session (thanks to v15 that’s possible with attach next) and look at the call stack. And that brought me one step further to the answer.

The object APIV1 – Items was my suspect. It turned out to be page 20008 “APIV1 – Items”.

And it contains a function UpdateInventory that creates an Item Journal Line with the required change and posts it.
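
I won’t reproduce Microsoft’s code here, but the pattern the debugger revealed boils down to something like the sketch below: build an Item Journal Line for the difference between the current and the requested inventory and post it through Codeunit “Item Jnl.-Post Line”. This is a simplified sketch of mine, not the actual implementation of page 20008.

local procedure UpdateInventory(var Item: Record Item; NewInventory: Decimal)
var
    ItemJournalLine: Record "Item Journal Line";
    ItemJnlPostLine: Codeunit "Item Jnl.-Post Line";
begin
    Item.CalcFields(Inventory);
    if NewInventory = Item.Inventory then
        exit;

    ItemJournalLine.Init();
    ItemJournalLine.Validate("Posting Date", WorkDate());
    if NewInventory > Item.Inventory then
        ItemJournalLine.Validate("Entry Type", ItemJournalLine."Entry Type"::"Positive Adjmt.")
    else
        ItemJournalLine.Validate("Entry Type", ItemJournalLine."Entry Type"::"Negative Adjmt.");
    ItemJournalLine.Validate("Item No.", Item."No.");
    ItemJournalLine.Validate(Quantity, Abs(NewInventory - Item.Inventory));
    ItemJournalLine."Document No." := Item."No.";

    // Posts directly, without a journal batch: the 'bare minimum of information' mentioned below.
    ItemJnlPostLine.RunWithCheck(ItemJournalLine);
end;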

But hey… where is this page hiding? Now that I knew the name and ID, I went back into the source code of the base application and tried to track down the page. However, it wasn’t there. For a moment I began to doubt myself. Then I suddenly realized there is another way to add features to Business Central: apps! 😀

PowerShell and navcontainerhelper helped me with the next step:

Now I could feel I was getting closer! The only thing left was to get my hands on the source code of this app, so I could verify the code and cross-check with the results from the debugger.

As you can see, the app name starts with _Exclude_. Such apps will not be visible in the Extensions Management page in Business Central. So I needed another way to get to the source code. Again, navcontainerhelper helped me with this final step (code can be copied):

Get-BcContainerApp `
-containerName bcsandbox `
-appName _Exclude_APIV1_ `
-publisher Microsoft `
-appVersion 15.0.36560.0 `
-appFile "C:\ProgramData\NavContainerHelper\apps\_Exclude_APIV1_.app" `
-credential (Get-Credential)

Extract-AppFileToFolder `
-appFilename "C:\ProgramData\NavContainerHelper\apps\_Exclude_APIV1_.app" `
-appFolder "C:\ProgramData\NavContainerHelper\apps\_Exclude_APIV1_"

And there is the API page I was looking for!

Conclusion

After this journey, I felt somewhat unhappy. I found out what is going on behind the scenes, but did I really like what I found? Honestly, I think we shouldn’t have to find out these little secrets ourselves. Why is it not clearly documented in the API documentation how the v1.0 APIs are deployed? And why hide the app by excluding it from the extension management page? Why not publish the code, e.g. on the ALAppExtensions repo on GitHub? The source code of the base application contains the beta version, which will be deprecated in April 2020 and should not be used, while the actual code of the API is not available. Think about taking a standard API as a starting point for a custom API. You would probably take the beta version instead of the latest version.

And then the point the journey started with: setting the inventory by means of an API call that enables you to modify a non-editable field. That’s the scenario that triggered me in the first place. Now I know that the standard API page creates an Item Journal Line with a bare minimum of information, completely out of my control. I’m not sure whether I should like it or not. At least I’m not comfortable with it.

Anyway, I hope this journey helps you to take more control over the standard APIs of Business Central!

How to use the Excel Buffer in Business Central cloud


A week ago, my friend Daniel asked a question on Twitter about how to use the Excel Buffer to export data with Business Central online. Many functions of the Excel Buffer are only available on-prem. When you change the target in app.json to ‘Cloud’, those functions become unavailable.

Luckily I had a code example available so I decided to put that on GitHub. But I thought it would make sense to write a blog post to explain the code.

It contains two examples: the first example exports the Customer list to a new Excel file and the second exports a Sales Order to a prepared and uploaded Excel file. Both options offer to download or to email the created Excel file.

The example to export the Customer List can be found on the Customers page under Actions. The export Sales Order is on the Sales Order list under Actions.

Basic flow

The basic flow of creating a new Excel file with the Excel Buffer table can be found in Codeunit Export Customer 2 Excel, function CreateAndFillExcelBuffer.

local procedure CreateAndFillExcelBuffer(var TempExcelBuf: Record "Excel Buffer" temporary)
begin
    TempExcelBuf.CreateNewBook(SheetNameTxt);
    FillExcelBuffer(TempExcelBuf);
    TempExcelBuf.WriteSheet(HeaderTxt, CompanyName(), UserId());
    TempExcelBuf.CloseBook();
end;

All of this happens in memory. The CreateNewBook function creates a temporary server file and initializes the .Net components to write to that file. There is an overload function for CreateNewBook that accepts a filename parameter. Obviously, that function isn’t available for the cloud, because you don’t have direct access to server files. After this, you fill the Excel Buffer with data in the usual way.

local procedure FillExcelRow(
    var TempExcelBuf: Record "Excel Buffer" temporary;
    Customer: Record Customer)
begin
    with Customer do begin
        TempExcelBuf.NewRow();
        TempExcelBuf.AddColumn("No.", false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
        TempExcelBuf.AddColumn(Name, false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
        TempExcelBuf.AddColumn(Address, false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
        TempExcelBuf.AddColumn("Post Code", false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
        TempExcelBuf.AddColumn(City, false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
        TempExcelBuf.AddColumn("Country/Region Code", false, '', false, false, false, '', TempExcelBuf."Cell Type"::Text);
    end;
end;

The WriteSheet function uses the data of the Excel Buffer to write to the temporary server file. Finally, the CloseBook function clears the .Net components that are used to write to the file. Now you end up with a temporary server file. The next step is to get the file, either as a download to the client or into a stream so you can store it in a blob or attach it to an email.

Getting the file

To download the file to the client, you can only use the function OpenExcel. Other functions are blocked for the cloud. Below are the three functions in the Excel Buffer table, and I believe these functions could be reduced to just two functions. The function OpenExcel could be removed and the OnPrem limitation on function DownloadAndOpenExcel could be removed. As far as I can see, it would result in one function to be used for both cloud and on-premises with exactly the same behaviour. And no code duplication between OpenExcel and OpenExcelWithName.

    procedure OpenExcel()
    begin
        if OpenUsingDocumentService('') then
            exit;

        FileManagement.DownloadHandler(FileNameServer, '', '', Text034, GetFriendlyFilename);
    end;

    [Scope('OnPrem')]
    procedure DownloadAndOpenExcel()
    begin
        OpenExcelWithName(GetFriendlyFilename);
    end;

    [Scope('OnPrem')]
    procedure OpenExcelWithName(FileName: Text)
    begin
        if FileName = '' then
            Error(Text001);

        if OpenUsingDocumentService(FileName) then
            exit;

        FileManagement.DownloadHandler(FileNameServer, '', '', Text034, FileName);
    end;

Anyway… the only thing that I wanted to add is the FriendlyFileName. So I have created a function DownloadAndOpenExcel that does that and then calls the function OpenExcel. As a result, the web client will download the file to the local file system.

local procedure DownloadAndOpenExcel(var TempExcelBuf: Record "Excel Buffer" temporary)
begin
    TempExcelBuf.SetFriendlyFilename(BookNameTxt);
    TempExcelBuf.OpenExcel();
end;

Attach the Excel to an email

The other possibility is to get the file into a stream variable and attach it to an email. I have added a function EmailExcelFile to the Excel Buffer table with a tableextension. In the Codeunit EmailExcelFileImpl you can find the implementation of this function. The code below retrieves the Excel file into a stream and adds it as an attachment to the email.

local procedure AddAttachment(
    var SMTPMail: Codeunit "SMTP Mail";
    var TempExcelBuf: Record "Excel Buffer" temporary;
    BookName: Text)
var
    TempBlob: Codeunit "Temp Blob";
    InStr: InStream;
begin
    ExportExcelFileToBlob(TempExcelBuf, TempBlob);
    TempBlob.CreateInStream(InStr);
    SMTPMail.AddAttachmentStream(InStr, BookName);
end;

local procedure ExportExcelFileToBlob(
    var TempExcelBuf: Record "Excel Buffer" temporary;
    var TempBlob: Codeunit "Temp Blob")
var
    OutStr: OutStream;
begin
    TempBlob.CreateOutStream(OutStr);
    TempExcelBuf.SaveToStream(OutStr, true);
end;

As you can see, I’m using the new Temp Blob storage module of version 15. If you have the data in the Temp Blob Codeunit, then you could also store the file into a blob field in a table or send it to a web service.

Updating an existing Excel file

The other example is to export data to an already existing Excel file. It basically updates that file without removing any of the existing data. An example Excel file is added to the repository on GitHub. You can import that file with the Excel Template page. The Excel Template table doesn’t store the Excel file in a blob field in the table; instead, it uses the persistent blob feature of the Blob Storage module in v15.

The code for the Excel Buffer to load this file instead of creating a new file is in the function InitExcelBuffer in Codeunit Export Sales Order 2 Excel. It uses the function UpdateBookStream of the Excel Buffer. This function saves the stream to a temporary file on the server and then opens it.

local procedure InitExcelBuffer(var TempExcelBuf: Record "Excel Buffer" temporary): Boolean
var
    ExcelTemplate: Record "Excel Template";
    TempBlob: Codeunit "Temp Blob";
    InStr: InStream;
begin
    ExcelTemplate.FindFirst();
    if not ExcelTemplate.GetTemplateFileAsTempBlob(TempBlob) then
        exit;
    
    TempBlob.CreateInStream(InStr);
    TempExcelBuf.UpdateBookStream(InStr, SheetNameTxt, true);
    exit(true);
end;

Instead of adding new rows to the Excel Buffer, we now have to enter data directly into specific cells. This is an existing feature and nothing different from the past. Take a look at the functions FillHeaderData and FillLineData in the Codeunit for an example. Finally, you need to call the function WriteAllToCurrentSheet to write the data into the Excel file. The rest of the code is similar to the previous example.
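
To give an idea of the pattern, entering a value directly into a specific cell looks roughly like this. The FillCell helper and the header fields are my own simplification; FillHeaderData and FillLineData in the repository follow the same idea.

local procedure FillCell(
    var TempExcelBuf: Record "Excel Buffer" temporary;
    RowNo: Integer;
    ColumnNo: Integer;
    Value: Text)
begin
    TempExcelBuf.Init();
    TempExcelBuf.Validate("Row No.", RowNo);
    TempExcelBuf.Validate("Column No.", ColumnNo);
    TempExcelBuf."Cell Value as Text" := Value;
    TempExcelBuf."Cell Type" := TempExcelBuf."Cell Type"::Text;
    TempExcelBuf.Insert();
end;

local procedure FillHeaderData(
    var TempExcelBuf: Record "Excel Buffer" temporary;
    SalesHeader: Record "Sales Header")
begin
    // Write the order number and the customer name into fixed cells of the template.
    FillCell(TempExcelBuf, 2, 2, SalesHeader."No.");
    FillCell(TempExcelBuf, 3, 2, SalesHeader."Sell-to Customer Name");

    // Write the buffered cells into the sheet that was opened with UpdateBookStream.
    TempExcelBuf.WriteAllToCurrentSheet(TempExcelBuf);
end;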


Contest: find the secret value in a Business Central app


This is the first blog post about shipping a secret value in a Business Central app. I have prepared an app file that contains a secret value that will be stored in Isolated Storage. But I’m not 100% sure if this method is secure enough, and that’s why I need your help. Please download the app file and try to find the secret value. If you succeed, then I need to harden it. If nobody can find the value, then that either means nobody really tried or it is secure enough. Let’s see what our community can do!

Ok, let’s crack that thing, what do you want me to do?

Glad you are eager to find that secret value! You can download the app file here.

Install the app in a Business Central cloud sandbox. After installation, search for the page ‘Test Secret Value’ and enter the value that you think is shipped with the app and click on Next. The page will tell you the result. 😊

This value is not correct…

But when you enter the correct value, then you get this page.

Only for Business Central cloud

The app that I have created can only be installed in a cloud sandbox. The reason for that is that I have used the 70-million object range. If I had created the app in the 50.000 – 99.999 range, then cracking the secret would be very simple. Please let me know in the comments below if you need an app for a local docker container and I’m happy to provide one.

What’s next?

I really hope some people will try to crack this. Please post your results in the comments. Or contact me directly if you think you’ve got the secret but don’t want to reveal it. In the meantime, I’ll be writing another blog post that explains the approach in more detail.

Can you share the source code?

Well, you can extract the source code from the app file, can’t you? But to make it easy for you, I’ve published all source code on GitHub. Feel free to explore the code, I guess you will quickly figure out what I’ve done. Anyway, that’s for another blog post that will follow shortly. For now, I’m just sharing the app without further explanation because that comes closest to a published app that you can download from AppSource.

Good luck with cracking the secret! I’m really curious to see your results!

Follow-up on secret values in a Business Central app


Wow… the community proved to be strong! About 10 days ago I asked you to find the secret value in an app file and many of you really tried! Thank you so much!

Before I move on, I would like to draw your attention to this idea. Please give it your vote! For more info, see below at the end of this post.

What happened with the contest?

Because I shared the source code, it was not hard to figure out that the navxdata file contained the secret value. What I really wanted to know was whether there are ways to get values out of the navxdata file. And that turned out to be possible. Erik Hougaard was the first to crack that nut. It took him only 10 minutes! Apparently, he knows some details about the navxdata file that are not so widely known in the community. A few other people also found the value by hex-editing the navxdata file. Well done!

Some other people took the source code and manipulated it to get to the secret value. And for sure, that’s a possibility if you can get to the source code at all. Honestly, I only shared the source code to give you an idea of the technique I used, not to have the code manipulated. Having access to the source code reveals how the secret is stored and then the fun is over.

So, what’s next?

Well, my first recommendation is to not share source code when an app contains a secret value. So at least, set showMyCode to false in the app.json file.

As someone suggested, you could just store the secret value in the code when you set showMyCode to false. And technically, that’s absolutely right. But I don’t think you should store any secrets in code, so I still believe the approach with the navxdata file is a valid one. Well, as long as Microsoft doesn’t give us any other option.

If you want to deploy your app file in an on-prem environment, then you should only use a runtime package. To prove that point, I’ve uploaded a runtime package here. If you are interested, please download it and try to get to the navxdata file that is inside it and then try to get the secret value. As Erik said, it would only be a little bit more difficult, so let’s see if he (or others) can find it again…

Security by obscurity

An extra level of security that I’ve added to this runtime package is obfuscating the secret value. There are several techniques you could apply, like creating multiple values, mixing them together, XOR-ing values, etc. If you want to get some ideas, then you can look into the code on GitHub. I’ve added some code in the table SecretValue to obfuscate the secret value with a bitwise XOR. Not with one, but with two random values. Just for the fun of it, and because it doesn’t exist out of the box in AL code. 😁

local procedure XORSecretValues(
    var SecretValue1List: List of [Byte];
    SecretValue2List: List of [Byte];
    SecretValue3List: List of [Byte])
var
    Index: Integer;
    SecretValueResult: List of [Byte];
    Byte1: Byte;
    Byte2: Byte;
begin
    foreach Byte1 in SecretValue1List do begin
        Index += 1;
        if Index mod 2 <> 0 then
            SecretValue2List.Get(Index, Byte2)
        else
            SecretValue3List.Get(Index, Byte2);
        SecretValueResult.Add(XORBytes(Byte1, Byte2));
    end;
    Clear(SecretValue1List);
    SecretValue1List := SecretValueResult;
end;
    
local procedure XORBytes(Byte1: Byte; Byte2: Byte) ReturnValue: Byte
var
    Binary1: Text;
    Binary2: Text;
    Bool1: Boolean;
    Bool2: Boolean;
    i: Integer;
    XORValue: Text;
begin
    Binary1 := ByteToBinary(Byte1);
    Binary2 := ByteToBinary(Byte2);

    for i := 1 to 8 do begin
        Evaluate(Bool1, Binary1[i]);
        Evaluate(Bool2, Binary2[i]);
        XORValue += Format(Bool1 xor Bool2, 0, 2);
    end;
    ReturnValue := BinaryToByte(XORValue);
end;

local procedure ByteToBinary(Value: Byte) ReturnValue: Text;
begin
    while Value >= 1 do begin
        ReturnValue := Format(Value MOD 2) + ReturnValue;
        Value := Value DIV 2;
    end;
    ReturnValue := ReturnValue.PadLeft(8, '0');
end;

local procedure BinaryToByte(Value: Text) ReturnValue: Byte;
var
    Multiplier: Integer;
    IntValue: Integer;
    i: Integer;
begin
    Multiplier := 1;
    for i := StrLen(Value) downto 1 do begin
        Evaluate(IntValue, Value.Substring(i, 1));
        ReturnValue += IntValue * Multiplier;
        Multiplier *= 2;
    end;
end;

Limiting access to code with access modifiers

Another recommendation I would like to make is to create a small app that only contains the secret value handling. Your other apps can then depend on it. Even better would be if that small base app doesn’t just store the secret value, but uses it to get access to an Azure Key Vault in which you keep the real secrets. In that way, the Azure Key Vault is a resource that belongs to your app and your other apps can simply use that app.

But wait… if other apps can depend on it, then the secrets that are stored in an Azure Key Vault could leave your app and that would expose the values to any other app that takes a dependency. Well, not if you take some precautions.

Mark all the codeunits in that app as internal and non-debuggable. That will prevent dependent apps from calling the code. But that would include your own apps, right? Well, there is another option on top of that. In the app.json you can define which other apps can access the internal objects. That opens the possibility to create trusted dependent apps and still prevent other apps from accessing your internal code.
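
Marking a codeunit as internal is done with the Access property. A minimal sketch, with a hypothetical object name and procedure:

// Only this app, and the apps listed under internalsVisibleTo in app.json (see below),
// can call this codeunit.
codeunit 50130 "Secret Value Handling"
{
    Access = Internal;

    internal procedure GetSecret(SecretName: Text): Text
    begin
        // Hypothetical implementation: use the shipped value to authenticate
        // against the Azure Key Vault and return the requested secret.
    end;
}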

That’s exactly what I’ve done in the source code. You will find a second app on GitHub, called Demo Secret Values Management. And that app is listed as a trusted app in the app.json of the Demo Secret Values app.

"internalsVisibleTo": [
    {
      "appId": "a9e18554-682c-42c0-a3bf-7427ce225bfb",
      "name": "Demo Secret Values Management",
      "publisher": "Arend-Jan Kauffmann"    
    }
  ]

This management app contains code to set the secret values from outside and to call the function that prepares the table so it can be exported to a navxdata file. When you take this approach, you don’t have to ship the management app; it’s only needed during development and the build process. The corresponding PowerShell commands are available in the Scripts folder. This management app also contains pages to inspect the values in Isolated Storage and the table.

Platform feature

I’ve tried really hard to find a way to store a value inside the app in the most secure way. And for a moment I thought I found one. But after some good reading, I’ve reached the conclusion that there is no other way than trying to hide or obfuscate the code and data. With these blog posts and corresponding code, I’ve tried to inspire you. Please feel free to copy it and modify it to your needs.

What we really need is a platform feature to store secrets. The best option I can think of is creating an Azure Key Vault and an application identity that has access to it. The application identity, which consists of a client id and a secret, should not be shipped together with the app file, but rather be specified outside. For example as extra properties when we upload the app to AppSource. Then those values should become available in our app (and our app only) with AL code, for example with NavApp.AzureClientId and NavApp.AzureSecret. Or, even better, with a built-in feature to get values from the Azure Key Vault.

Can we expect such a platform feature in the future? Well, I certainly believe so. I’ve created an idea to add this as a built-in feature in the AL code. Please review it and give it your vote! The more votes it gets, the higher the chance we will get it. I’ve heard rumours that it is on the radar, but we still need to tell Microsoft that we really need such a feature.

Thanks for following this journey, I hope I’ve given you some ideas on how to handle secret values. If you have any other idea or enhancements, please don’t hesitate to share!

Using tables instead of table extensions


Recently James Crowter wrote an excellent article about table extensions and how they affect performance. In short, table extensions are great for flexibility and ease of development, but performance decreases as the number of table extensions adds up, especially when table extensions are used on hot tables. By hot tables I mean tables that are used often, like Item, Customer, Sales Line, Item Ledger Entry, etc.

So I started thinking: when would you decide not to use table extensions? And how do you work with supplemental tables while still giving the end-user the same experience? In this blog post, I want to make some suggestions. I’m aware this can be done in many ways, so consider this just one possible approach.

Considerations

Let’s first go through the process of choosing between a table extension and a supplemental table. Basically, it should be possible to put all extra fields you have for a particular table into a supplemental table rather than a table extension. Behind the scenes, the table extension is a supplemental table anyway. The only difference, and definitely an important difference, is that the table extension appears to be one table for the developer. With some extra effort we can achieve that ourselves, but what do we gain for that extra effort? Well, I guess that’s already clear: we get better overall performance because there is no automatic join. We decide in code when to read the supplemental table.

And that’s exactly what should be the main consideration when choosing between a supplemental table and a table extension. Do you need those extra fields in many places in the code? Or just at a few places? In other words, would you benefit from the automatic join of the companion table or would it be a waste of resources because you only use those fields in a few places? Besides that, you also need to look at how much the table is being used. Is it a hot table, used in many different places and a favorite table to extend? Then you should also be careful with table extensions.

The Data Table

Ok, let’s imagine I have some fields to be added to the Item. Note that I’m not saying ‘to the Item table‘. I want the user to see those fields on the Item list and card as if they were part of the Item table, but I’ve chosen to put them into a supplemental table. The first step is to create that table.

table 50100 "Item Extra Fields AJK"
{
    fields
    {
        field(1; Id; Guid)
        {
        }
        field(2; "Item No."; Code[20])
        {
            FieldClass = FlowField;
            CalcFormula = lookup (Item."No." where(SystemId = field(id)));
        }
        field(3; "Perishable"; Boolean)
        {
            trigger OnValidate()
            begin
                if Perishable then
                    "Storage Temperature" := 4
                else
                    "Storage Temperature" := 0;
            end;
        }
        field(4; "Food Category"; Enum FoodCategory)
        {
        }
        field(5; "Storage Temperature"; Decimal)
        {
        }
    }

    keys
    {
        key(PK; Id) { }
    }
}

As you can see, the table is not using the Item No. field as primary key. Instead, it uses an Id field that I want to be the same value as the SystemId of the Item record.

The Item No. field is a FlowField, so if I run the table I can see to what Item that record belongs.

The other fields are the fields that I want to add to the Item Card and Item List page. The field Perishable has code in the OnValidate trigger, and I want that to execute as normal when the user modifies that field.
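The table also uses an enum FoodCategory that is not shown in this post. As a sketch (the object ID and the values are assumptions), it could look like this:

enum 50100 FoodCategory
{
    Extensible = true;

    value(0; " ") { Caption = ' '; }
    value(1; Dairy) { Caption = 'Dairy'; }
    value(2; Meat) { Caption = 'Meat'; }
    value(3; Vegetables) { Caption = 'Vegetables'; }
}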

The Tableextension

How are we going to work with this table? I figured it would make sense to create a table extension for the Item table with functions to read and save the supplemental table. In that way, we have one central place to get to the data and to save it. The table extension can also hold FlowFields to the fields of the supplemental table, so you can get the values directly without the need to read the supplemental table. That’s especially handy when reading values. With SetAutoCalcFields you can force the values from the supplemental table to be joined when you loop through the base table, instead of reading the supplemental table for every single record.
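For example, a minimal sketch (using the FlowFields from the table extension shown below) of looping through items with the supplemental values joined in one go:

procedure ListPerishableItems()
var
    Item: Record Item;
begin
    // Force the platform to calculate the FlowFields while looping,
    // instead of reading the supplemental table per record.
    Item.SetAutoCalcFields("Perishable AJK", "Storage Temperature AJK");
    if Item.FindSet() then
        repeat
            if Item."Perishable AJK" then
                Message('%1 must be stored at %2 degrees', Item."No.", Item."Storage Temperature AJK");
        until Item.Next() = 0;
end;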

tableextension 50100 "Item AJK" extends Item
{
    fields
    {
        field(50100; "Perishable AJK"; Boolean)
        {
            Caption = 'Perishable';
            FieldClass = FlowField;
            CalcFormula = lookup ("Item Extra Fields AJK".Perishable where(Id = field(SystemId)));
        }
        field(50101; "Food Category AJK"; Enum FoodCategory)
        {
            Caption = 'Food Category';
            FieldClass = FlowField;
            CalcFormula = lookup ("Item Extra Fields AJK"."Food Category" where(Id = field(SystemId)));
        }
        field(50102; "Storage Temperature AJK"; Decimal)
        {
            Caption = 'Storage Temperature';
            FieldClass = FlowField;
            CalcFormula = lookup ("Item Extra Fields AJK"."Storage Temperature" where(Id = field(SystemId)));
        }
    }

    var
        _ItemExtraFields: Record "Item Extra Fields AJK";

    procedure GetItemExtraFields(var ItemExtraFields: Record "Item Extra Fields AJK")
    begin
        ReadItemExtraFields();
        ItemExtraFields := _ItemExtraFields;
    end;

    procedure SetItemExtraFields(var ItemExtraFields: Record "Item Extra Fields AJK")
    begin
        _ItemExtraFields := ItemExtraFields;
    end;

    procedure SaveItemExtraFields()
    begin
        if not IsNullGuid(_ItemExtraFields.Id) then
            if not _ItemExtraFields.Modify() then
                _ItemExtraFields.Insert(false, true);
    end;

    procedure DeleteItemExtraFields()
    begin
        ReadItemExtraFields();
        if _ItemExtraFields.Delete() then;
    end;

    local procedure ReadItemExtraFields()
    begin
        if _ItemExtraFields.Id <> SystemId then
            if not _ItemExtraFields.Get(SystemId) then begin
                _ItemExtraFields.Init();
                _ItemExtraFields.Id := SystemId;
                _ItemExtraFields.SystemId := SystemId;
            end;
    end;
}

As you can see, the FlowFields are based on the value of the SystemId field, linked to the Id field of the supplemental table. In the function ReadItemExtraFields these values are set in case the record does not exist.

The current record of the corresponding supplemental table is stored in a global variable in the table extension. With a get and set function, it is possible to read or set the actual values.

The function SaveItemExtraFields is used to actually save the values back to the supplemental table in the database. In this approach, not every Item record automatically has a corresponding record in the supplemental table, so I use the construct if not Modify then Insert. The Insert uses the second parameter to tell the platform not to create a new SystemId, but to use the value that we already set in the function ReadItemExtraFields. Of course, this only works correctly if you follow the flow of these functions:

  • GetItemExtraFields
  • SetItemExtraFields
  • SaveItemExtraFields

If you would only use SetItemExtraFields and did not properly initialize the record with the Id values, then this is going to fail.
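To make that flow concrete, here is a minimal sketch (the procedure name is made up) of calling code that reads, modifies and saves the extra fields for an item:

procedure MarkItemAsPerishable(var Item: Record Item)
var
    ItemExtraFields: Record "Item Extra Fields AJK";
begin
    Item.GetItemExtraFields(ItemExtraFields);   // initializes Id with the item's SystemId if needed
    ItemExtraFields.Validate(Perishable, true); // runs the OnValidate trigger of the field
    Item.SetItemExtraFields(ItemExtraFields);   // hands the values back to the table extension
    Item.SaveItemExtraFields();                 // if not Modify then Insert
end;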

Usage on an editable Page

Let’s have a look at how this works on the Item Card page.

pageextension 50101 "Item Card AJK" extends "Item Card"
{
    layout
    {
        addafter(Item)
        {
            group(FoodDetails)
            {
                Caption = 'Food Details';
                field(PerishableAJK; ItemExtraFields.Perishable)
                {
                    Caption = 'Perishable';
                    ApplicationArea = All;

                    trigger OnValidate()
                    begin
                        ItemExtraFields.Validate(Perishable);
                        SetItemExtraFields(ItemExtraFields);
                    end;
                }
                field(FoodCategeryAJK; ItemExtraFields."Food Category")
                {
                    Caption = 'Food Category';
                    ApplicationArea = All;

                    trigger OnValidate()
                    begin
                        SetItemExtraFields(ItemExtraFields);
                    end;

                }
                field(StorageTemperatureAJK; ItemExtraFields."Storage Temperature")
                {
                    Caption = 'Storage Temperature';
                    ApplicationArea = All;

                    trigger OnValidate()
                    begin
                        SetItemExtraFields(ItemExtraFields);
                    end;
                }
            }
        }
    }

    var
        ItemExtraFields: Record "Item Extra Fields AJK";

    trigger OnInsertRecord(BelowxRec: Boolean): Boolean
    begin
        SaveItemExtraFields();
    end;

    trigger OnModifyRecord(): Boolean
    begin
        SaveItemExtraFields();
    end;

    trigger OnClosePage()
    begin
        SaveItemExtraFields();
    end;

    trigger OnAfterGetCurrRecord()
    begin
        GetItemExtraFields(ItemExtraFields);
    end;
}

The supplemental table is stored in a global variable on the page and read in the OnAfterGetCurrRecord trigger. The fields are then displayed on the screen with a source expression pointing to the respective fields of that global record. That makes sure the values are directly stored in the record. But now we are getting to some details that I think developers are not going to like much. Because it is different from how we are used to working with table fields, we need to do some extra stuff.

First of all, you need to specify the caption again. That means we now have the same caption three times: in the supplemental table, on the FlowField and now also on the page. They will end up as three different captions inside the .xlf file and they all need to be translated. So we must make sure to keep the captions and translations aligned across all places we use these fields.

The other point is that the OnValidate trigger is not executing automatically. The value will be stored in the field of the global record variable, but it is treated as a global variable and not as a field with an OnValidate trigger. Hence, we have to explicitly validate the field from code on the page.

I have chosen to hand the current values back to the table extension immediately, in the OnValidate trigger of each field. It would also be possible to do so only from the OnModifyRecord trigger, but better safe than sorry.

Usage on a read-only Page

Displaying the fields on the Item List page is just a matter of adding the FlowFields. They support sorting and filtering, so the user would not even notice that these fields come from a supplemental table.

pageextension 50100 "Item List AJK" extends "Item List"
{
    layout
    {
        addafter(Description)
        {
            field("Perishable AJK"; "Perishable AJK") { ApplicationArea = All; }
            field("Food Category AJK"; "Food Category AJK") { ApplicationArea = All; }
        }
    }
}

Deleting the record

The final part would be a delete trigger. For that, I propose to use an event subscriber rather than the page triggers. In its most simple form, that would look like this:

codeunit 50100 "Item Subscribers AJK"
{
    SingleInstance = true;

    [EventSubscriber(ObjectType::Table, Database::Item, 'OnAfterDeleteEvent', '', false, false)]
    local procedure OnDelete(var Rec: Record Item)
    begin
        Rec.DeleteItemExtraFields();
    end;
}

Final thoughts

There are other approaches possible as well. Like using the OnInsert event to always create a corresponding record in the supplemental table and assume that record is always available. That would require an install and upgrade procedure as well to sync the tables during installation or upgrading of the app.
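A sketch of that alternative (the subscriber name and object ID are assumptions) could look like this:

codeunit 50101 "Item Insert Subscriber AJK"
{
    [EventSubscriber(ObjectType::Table, Database::Item, 'OnAfterInsertEvent', '', false, false)]
    local procedure OnAfterInsertItem(var Rec: Record Item; RunTrigger: Boolean)
    var
        ItemExtraFields: Record "Item Extra Fields AJK";
    begin
        if Rec.IsTemporary() then
            exit;
        // Always create the corresponding supplemental record for a new item
        ItemExtraFields.Init();
        ItemExtraFields.Id := Rec.SystemId;
        if ItemExtraFields.Insert() then;
    end;
}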

This is by far not as convenient as using table extensions, I have to admit that. And I’m not very proud of this solution. But it works and with this little effort, we can avoid the heavy load of multiple table extensions. As James said, we should use great power responsibly. So, if writing some extra code is the price we have to pay for getting a system that performs better, so be it.

I hope Microsoft will work on other solutions in the meantime. Which would not be easy. Think about enabling joins from table extensions on the fly, as we do with SetAutoCalcFields for FlowFields. It would still be hard to cover scenarios where tables are passed through an event. They will then probably not contain the joined data, so we still need to get them. Maybe similar to CalcFields? What do I know…

All code that is demonstrated here can also be found on GitHub.

Codeunit API’s in Business Central


This blog post was on my list way too long… But now I found some time to sit down and write it.

Disclaimer

What I’m going to show here is officially not supported (yet). It is an undocumented feature that already exists for a couple of years. I believe it can even be used in Dynamics NAV 2018 and maybe earlier versions as well. In fact, Microsoft uses this feature themselves in the Power Automate Flow connector for approvals. So it is a feature that goes undocumented and officially unsupported, but I wouldn’t expect it to go away. Instead, I hope it is going to be turned into an officially supported feature.

As a matter of fact, the title of this blog post should be something like ‘Unbound actions with Codeunit web services in Business Central’. But I’m not sure if everybody would immediately recognize what it is about.

Bound vs. Unbound Actions

As you may know, it is possible to define actions on API pages that can be called with a restful API call. For example, you can call Post on a Sales Invoice like this:

post https://api.businesscentral.dynamics.com/v2.0/{environment}/api/v1.0/companies({id})/salesinvoices({id})/Microsoft.NAV.Post
Authorization: Bearer {token}
Content-Type: application/json

This function Post is available on the API page for Sales Invoices and it looks like this:

[ServiceEnabled]
[Scope('Cloud')]
procedure Post(var ActionContext: WebServiceActionContext)
var
    SalesHeader: Record "Sales Header";
    SalesInvoiceHeader: Record "Sales Invoice Header";
    SalesInvoiceAggregator: Codeunit "Sales Invoice Aggregator";
begin
    GetDraftInvoice(SalesHeader);
    PostInvoice(SalesHeader, SalesInvoiceHeader);
    SetActionResponse(ActionContext, SalesInvoiceAggregator.GetSalesInvoiceHeaderId(SalesInvoiceHeader));
end;

What is important here is that this function is called a ‘bound action’, because it is bound to an existing entity, in this case a Sales Invoice.

But what if you want to call a function in a Codeunit with an API call? That is possible by publishing the Codeunit as a web service and calling it with a SOAP web service call. But would it also be possible to do that with a restful API call, like with the API pages? And the answer to that is: yes, that is possible! The web services page doesn’t show you an ODataV4 URL for a published Codeunit, but it actually is possible to call the Codeunit with an ODataV4 URL. These calls are known as ‘unbound actions’: calling a Codeunit is not bound to any entity at all. Not even to the company, which is normally the first entity you specify in the ODataV4 or API URL.

Simple Example of an Unbound Action

Let’s create a simple Codeunit and publish it as a web service.

codeunit 50100 "My Unbound Action API"
{
    procedure Ping(): Text
    begin
        exit('Pong');
    end;
}

We can’t publish a Codeunit as an API, the only possibility is to publish it as a web service. For that, we add this XML file to the app:

<?xml version="1.0" encoding="UTF-8"?>
<ExportedData>
    <TenantWebServiceCollection>
        <TenantWebService>
            <ObjectType>CodeUnit</ObjectType>
            <ObjectID>50100</ObjectID>
            <ServiceName>MyUnboundActions</ServiceName>
            <Published>true</Published>
        </TenantWebService>
    </TenantWebServiceCollection>
</ExportedData>

After installation, the web service is available. But the ODataV4 URL is not applicable according to this page.

web services list

Let’s just ignore that and call the web service with the ODataV4 URL nonetheless. I’m using the VS Code extension REST Client for this. As you can see, the URL is built up like the normal ODataV4 URL, but it ends with NAV.MyUnboundActions_Ping. The name of the function is composed as follows: /NAV.[service name]_[function name]

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_Ping
Authorization: Basic {{username}} {{password}}

The result of this call (response headers removed for brevity):

HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.String",
  "value": "Pong"
}

Isn’t that cool? We can publish Codeunits as web service and still use restful API calls to invoke them, instead of using SOAP!

Reading data

What about using data? Let’s try another example and see what happens. I’ve added another function that simply reads the first record of the Customer table. Since we haven’t specified any company, what would happen?

codeunit 50100 "My Unbound Action API"
{
    procedure Ping(): Text
    begin
        exit('Pong');
    end;
    procedure GetFirstCustomerName(): Text
    var
        Cust: Record Customer;
    begin
        Cust.FindFirst();
        exit(Cust.Name);
    end;
}

The call to the web service looks like this:

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_GetFirstCustomerName
Authorization: Basic {{username}} {{password}}

And the result of this call is an error:

HTTP/1.1 400 You must choose a company before you can access the "Customer" table.
{
  "error": {
    "code": "Internal_ServerError",
    "message": "You must choose a company before you can access the \"Customer\" table.  CorrelationId:  7b627296-5aca-4e4a-8e46-9d54f199b702."
  }
}

Obviously, we need to specify a company. Let’s try to do that by specifying the company in the url:

post https://bcsandbox.docker.local:7048/BC/ODataV4/Company('72e17ce1-664e-ea11-bb30-000d3a256c69')/NAV.MyUnboundActions_GetFirstCustomerName
Authorization: Basic {{username}} {{password}}

However, we still get an error:

HTTP/1.1 404 No HTTP resource was found that matches the request URI 'https://bcsandbox.docker.local:7048/BC/ODataV4/Company(%2772e17ce1-664e-ea11-bb30-000d3a256c69%27)/NAV.MyUnboundActions_GetFirstCustomerName'.
{
  "error": {
    "code": "BadRequest_NotFound",
    "message": "No HTTP resource was found that matches the request URI 'https://bcsandbox.docker.local:7048/BC/ODataV4/Company(%2772e17ce1-664e-ea11-bb30-000d3a256c69%27)/NAV.MyUnboundActions_GetFirstCustomerName'.  CorrelationId:  04668a8d-1f2b-4e1e-aebe-883886e8fa2b."
  }
}

What is going on? An OData url points to an entity. Every entity has its own unique url. But the Codeunit function is not bound to any entity, like an Item, Customer, Sales Order, etc. That’s why it is called an unbound action. But if the company was part of the url, then it is bound to the company entity and not considered to be an unbound action anymore. This is simply due to the fact that Business Central works with multiple companies in one database. If that was just one company, then you wouldn’t have the company in the url and the unbound action would work.

Instead of adding the company as an entity component to the url, it is possible to add a company query parameter. Then the call looks like this:

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_GetFirstCustomerName?company=72e17ce1-664e-ea11-bb30-000d3a256c69
Authorization: Basic {{username}} {{password}}

And this works:

HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.String",
  "value": "Adatum Corporation"
}

Alternatively, you can also add the company as a header instead of a query parameter:

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_GetFirstCustomerName
Authorization: Basic {{username}} {{password}}
Company: 72e17ce1-664e-ea11-bb30-000d3a256c69

As you can see, we can use the company id instead of the company name. To get the company id, you can use this call (notice the get instead of post):

get https://bcsandbox.docker.local:7048/BC/ODataV4/Company
Authorization: Basic {{username}} {{password}}

And use the id from the response.

HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Company",
  "value": [
    {
      "Name": "CRONUS USA, Inc.",
      "Evaluation_Company": true,
      "Display_Name": "",
      "Id": "72e17ce1-664e-ea11-bb30-000d3a256c69",
      "Business_Profile_Id": ""
    },
    {
      "Name": "My Company",
      "Evaluation_Company": false,
      "Display_Name": "",
      "Id": "084479f8-664e-ea11-bb30-000d3a256c69",
      "Business_Profile_Id": ""
    }
  ]
}

Using Parameters

What about passing in parameters? Well, that’s also possible. As you may have seen, all calls to the unbound actions use the HTTP POST command. That means we are sending data. So far, the demos didn’t do that. Let’s do that in the next demo. I have added a function Capitalize with a text input parameter.

codeunit 50100 "My Unbound Action API"
{
    procedure Ping(): Text
    begin
        exit('Pong');
    end;
    procedure GetFirstCustomerName(): Text
    var
        Cust: Record Customer;
    begin
        Cust.FindFirst();
        exit(Cust.Name);
    end;
    procedure Capitalize(input: Text): Text
    begin
        exit(input.ToUpper);
    end;
}

To add the parameter data to the call, we need to add content. Don’t forget to set the header Content-Type!

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_Capitalize
Authorization: Basic {{username}} {{password}}
Content-Type: application/json
{
    "input": "business central rocks!"
}

And here is the result of this call:

HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.String",
  "value": "BUSINESS CENTRAL ROCKS!"
}

Be careful with capitals in parameter names! The first character must be lower case. Even if you declare the parameter with an uppercase first character, it will be exposed with a lowercase first character. If you then use uppercase in the call, you might see this error message:

HTTP/1.1 400 Exception of type 'Microsoft.Dynamics.Nav.Service.OData.NavODataBadRequestException' was thrown.
{
  "error": {
    "code": "BadRequest",
    "message": "Exception of type 'Microsoft.Dynamics.Nav.Service.OData.NavODataBadRequestException' was thrown.  CorrelationId:  e0003c52-0159-4cf5-974d-312ef4729c56."
  }
}

Return Types

So far, the demos only returned text types. What happens if we return a different type, like an integer, a boolean or a datetime? Here are some examples:

codeunit 50100 "My Unbound Action API"
{
    procedure Ping(): Text
    begin
        exit('Pong');
    end;
    procedure GetFirstCustomerName(): Text
    var
        Cust: Record Customer;
    begin
        Cust.FindFirst();
        exit(Cust.Name);
    end;
    procedure Capitalize(input: Text): Text
    begin
        exit(input.ToUpper);
    end;
    procedure ItemExists(itemNo: Text): Boolean
    var
        Item: Record Item;
    begin
        Item.SetRange("No.", itemNo);
        exit(not item.IsEmpty());
    end;
    procedure GetCurrentDateTime(): DateTime
    begin
        exit(CurrentDateTime());
    end;
}

Functions ItemExists and GetCurrentDateTime are added to the Codeunit.

The call to ItemExists and the result:

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_ItemExists
Authorization: Basic {{username}} {{password}}
Content-Type: application/json
Company: 72e17ce1-664e-ea11-bb30-000d3a256c69
{
    "itemNo": "1896-S"
}
HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.Boolean",
  "value": true
}

And this is what the call to GetCurrentDateTime and the response look like:

post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_GetCurrentDateTime
Authorization: Basic {{username}} {{password}}
Content-Type: application/json
HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.DateTimeOffset",
  "value": "2020-03-02T15:13:39.49Z"
}

What about returning complex types, like a Json payload? Unfortunately, that doesn’t work the way you would like:

codeunit 50100 "My Unbound Action API"
{
    procedure Ping(): Text
    begin
        exit('Pong');
    end;
    procedure GetFirstCustomerName(): Text
    var
        Cust: Record Customer;
    begin
        Cust.FindFirst();
        exit(Cust.Name);
    end;
    procedure Capitalize(input: Text): Text
    begin
        exit(input.ToUpper);
    end;
    procedure ItemExists(itemNo: Text): Boolean
    var
        Item: Record Item;
    begin
        Item.SetRange("No.", itemNo);
        exit(not item.IsEmpty());
    end;
    procedure GetCurrentDateTime(): DateTime
    begin
        exit(CurrentDateTime());
    end;
    procedure GetJsonData() ReturnValue: Text
    var
        Jobj: JsonObject;
    begin
        JObj.Add('key', 'value');
        Jobj.WriteTo(ReturnValue);
    end;
}
post https://bcsandbox.docker.local:7048/BC/ODataV4/NAV.MyUnboundActions_GetJsonData
Authorization: Basic {{username}} {{password}}
Content-Type: application/json
HTTP/1.1 200 OK
{
  "@odata.context": "https://bcsandbox.docker.local:7048/BC/ODataV4/$metadata#Edm.String",
  "value": "{\"key\":\"value\"}"
}

The data is formatted as a Json text value instead of a real Json structure. And if you try to change the function to return a JsonObject rather than a text variable, then the whole web service is no longer valid and you will not be able to call it. For this to work, we need an option to define custom entities and add them to the metadata. It would be great if Microsoft would enable this!

Support in the cloud

All these demos were on my local docker environment. But this works exactly the same on the cloud platform. Just change the url and it will work like a charm:

For basic authentication you need to use this URL and specify your tenant:

post https://api.businesscentral.dynamics.com/v2.0/{tenantid}/{environment}/ODataV4/NAV.MyUnboundActions_Ping
Authorization: Basic {{username}} {{password}}

For example, when I use the sandbox environment on my tenant, I can replace {tenantid} with kauffmann.nl and {environment} with sandbox:

post https://api.businesscentral.dynamics.com/v2.0/kauffmann.nl/sandbox/ODataV4/NAV.MyUnboundActions_Ping
Authorization: Basic {{username}} {{password}}

For OAuth and production environments, you should use this url (no tenant id needed):

post https://api.businesscentral.dynamics.com/v2.0/{environment}/ODataV4/NAV.MyUnboundActions_Ping
Authorization: Bearer {token}

Use the ODataV4 endpoint, not the API endpoint

Remember that this only works with the ODataV4 endpoint and not with the API endpoint; you need to publish the Codeunit as a web service first. To get this on the API endpoint, it should also support namespaces and versioning as we know them from the API pages. Versioning is a key feature, as it allows us to implement versioned contracts. And personally, I wouldn’t mind if Microsoft also removed the word NAV from both bound and unbound actions.

That’s it! Hope you enjoyed it! Based on my conversations with Microsoft, I know that this topic is something they are discussing for the future. What do you think, should this be turned into a Codeunit type API or is it useless and can we stick with Page and Query API’s? Let me know in the comments!

Deep insert with Business Central API’s


Recently I got a question from a partner: is it possible to insert a sales order and its lines all at once with a Business Central API? And the answer to that question is: yes, that is possible. However, little documentation is available about this feature. The only documentation I could find is in an article about API limits in Business Central. One of the recommendations to reduce the number of calls is to do deep inserts. The body of the request can contain nested entities, e.g. a sales header and sales lines. The documentation shows a short example of inserting a Sales Quote with lines, and that’s all you get.

Let’s take a close look at how to create deep insert requests. Examples are for the standard sales order endpoint.

To insert a sales order including lines, you need to send a request like the one below. Note: I’m using the URL of my local docker instance; if you want to test against the cloud environment, then the host is https://api.businesscentral.dynamics.com and the path is /v2.0/{{tenant}}/{{environment}}/api/v1.0/companies({{companyId}})/salesOrders.

The first block shows the request url and the headers, the second block the JSON request body.

POST /api/v1.0/companies({{companyId}})/salesOrders HTTP/1.1
Host: https://bcsandbox.docker.local:7048/bc
Authorization: Basic QURNSU46U2VjcmV0UGFzc3dvcmQ=
Content-Type: application/json
{  
    "customerId": "{{customerId}}",
    "salesOrderLines": [
        {
            "itemId": "{{itemId}}",
            "quantity": 5
    	}
    ]
}

It is important to notice that the salesOrderLines key is a JSON array. It may contain any number of entities. So, if we were to insert three lines, then the request would look like this (ok, same item, but you get the idea):

{  
    "customerId": "{{customerId}}",
    "salesOrderLines": [
        {
            "itemId": "{{itemId}}",
            "quantity": 5
    	},
    	{
            "itemId": "{{itemId}}",
            "quantity": 3
    	},
        {
            "itemId": "{{itemId}}",
            "quantity": 8
    	}
    ]
}

Let’s have a look at the result of the last request, with three lines. The response that comes back looks like this (use the scrollbar to see the complete payload).

{
    "@odata.context": "https://bcsandbox.docker.local:7048/bc/api/v1.0/$metadata#companies(9c4f2ddc-9e75-ea11-bbf0-000d3a38a583)/salesOrders/$entity",
    "@odata.etag": "W/\"JzQ0OzNtN240VjJmek4rdXY1RW5hWSszcDVUM09PeDZYV0labTg4aXlHSnRKWkU9MTswMDsn\"",
    "id": "f3af2160-227c-ea11-9be0-d0e787b03942",
    "number": "S-ORD101008",
    "externalDocumentNumber": "",
    "orderDate": "2020-04-11",
    "postingDate": "2020-04-11",
    "customerId": "82fe170f-9f75-ea11-bbf0-000d3a38a583",
    "contactId": "",
    "customerNumber": "10000",
    "customerName": "Adatum Corporation",
    "billToName": "Adatum Corporation",
    "billToCustomerId": "82fe170f-9f75-ea11-bbf0-000d3a38a583",
    "billToCustomerNumber": "10000",
    "shipToName": "Adatum Corporation",
    "shipToContact": "Robert Townes",
    "currencyId": "00000000-0000-0000-0000-000000000000",
    "currencyCode": "USD",
    "pricesIncludeTax": false,
    "paymentTermsId": "6cc81e09-9f75-ea11-bbf0-000d3a38a583",
    "shipmentMethodId": "00000000-0000-0000-0000-000000000000",
    "salesperson": "PS",
    "partialShipping": true,
    "requestedDeliveryDate": "0001-01-01",
    "discountAmount": 0,
    "discountAppliedBeforeTax": true,
    "totalAmountExcludingTax": 16012.8,
    "totalTaxAmount": 960.77,
    "totalAmountIncludingTax": 16973.57,
    "fullyShipped": true,
    "status": "Draft",
    "lastModifiedDateTime": "2020-04-11T18:29:28.41Z",
    "phoneNumber": "",
    "email": "robert.townes@contoso.com",
    "sellingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    },
    "billingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    },
    "shippingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    }
}

Wait… where are the lines? It doesn’t return the lines! That’s because we didn’t ask… Another feature mentioned in the documentation about API limits is to use $expand to fetch related entities, which also reduces the number of calls. With $expand the result will contain both the header and the lines.

So we change the POST request to include the sales lines. Notice the extra ?$expand=salesOrderLines at the end of the url. The entity to expand is the same as the entity in the request body to specify the lines. That’s not by coincidence of course… 😊 On a side note, the $expand parameter also works on GET commands.

POST /api/v1.0/companies({{companyId}})/salesOrders?$expand=salesOrderLines HTTP/1.1
Host: https://bcsandbox.docker.local:7048/bc
Authorization: Basic QURNSU46U2VjcmV0UGFzc3dvcmQ=
Content-Type: application/json

And this is the result. Scroll down to see that the sales lines are now included.

{
    "@odata.context": "https://bcsandbox.docker.local:7048/bc/api/v1.0/$metadata#companies(9c4f2ddc-9e75-ea11-bbf0-000d3a38a583)/salesOrders/$entity",
    "@odata.etag": "W/\"JzQ0O2NXMUpaWEtsdk45bnN6dG0rOFJlSTExTVNjZUtHbVg0ZGNjTm5IU0g4TTA9MTswMDsn\"",
    "id": "8ac93e25-237c-ea11-9be0-d0e787b03942",
    "number": "S-ORD101009",
    "externalDocumentNumber": "",
    "orderDate": "2020-04-11",
    "postingDate": "2020-04-11",
    "customerId": "82fe170f-9f75-ea11-bbf0-000d3a38a583",
    "contactId": "",
    "customerNumber": "10000",
    "customerName": "Adatum Corporation",
    "billToName": "Adatum Corporation",
    "billToCustomerId": "82fe170f-9f75-ea11-bbf0-000d3a38a583",
    "billToCustomerNumber": "10000",
    "shipToName": "Adatum Corporation",
    "shipToContact": "Robert Townes",
    "currencyId": "00000000-0000-0000-0000-000000000000",
    "currencyCode": "USD",
    "pricesIncludeTax": false,
    "paymentTermsId": "6cc81e09-9f75-ea11-bbf0-000d3a38a583",
    "shipmentMethodId": "00000000-0000-0000-0000-000000000000",
    "salesperson": "PS",
    "partialShipping": true,
    "requestedDeliveryDate": "0001-01-01",
    "discountAmount": 0,
    "discountAppliedBeforeTax": true,
    "totalAmountExcludingTax": 16012.8,
    "totalTaxAmount": 960.77,
    "totalAmountIncludingTax": 16973.57,
    "fullyShipped": true,
    "status": "Draft",
    "lastModifiedDateTime": "2020-04-11T18:34:59.11Z",
    "phoneNumber": "",
    "email": "robert.townes@contoso.com",
    "sellingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    },
    "billingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    },
    "shippingPostalAddress": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    },
    "salesOrderLines": [
        {
            "@odata.etag": "W/\"JzQ0O0laTE8xb0NZU2hERlZMclNWaFg2djBCM0J2ZnRJeEVackU1VDZOMTJGYzQ9MTswMDsn\"",
            "id": "8ac93e25-237c-ea11-9be0-d0e787b03942-10000",
            "documentId": "8ac93e25-237c-ea11-9be0-d0e787b03942",
            "sequence": 10000,
            "itemId": "96fe170f-9f75-ea11-bbf0-000d3a38a583",
            "accountId": "00000000-0000-0000-0000-000000000000",
            "lineType": "Item",
            "description": "ATHENS Desk",
            "unitOfMeasureId": "17ff170f-9f75-ea11-bbf0-000d3a38a583",
            "quantity": 5,
            "unitPrice": 1000.8,
            "discountAmount": 0,
            "discountPercent": 0,
            "discountAppliedBeforeTax": false,
            "amountExcludingTax": 5004,
            "taxCode": "FURNITURE",
            "taxPercent": 6,
            "totalTaxAmount": 300.24,
            "amountIncludingTax": 5304.24,
            "invoiceDiscountAllocation": 0,
            "netAmount": 5004,
            "netTaxAmount": 300.24,
            "netAmountIncludingTax": 5304.24,
            "shipmentDate": "2020-04-11",
            "shippedQuantity": 0,
            "invoicedQuantity": 0,
            "invoiceQuantity": 5,
            "shipQuantity": 5,
            "lineDetails": {
                "number": "1896-S",
                "displayName": "ATHENS Desk"
            },
            "unitOfMeasure": {
                "code": "PCS",
                "displayName": "Piece",
                "symbol": null,
                "unitConversion": null
            }
        },
        {
            "@odata.etag": "W/\"JzQ0O3dzRk50S2xONTkyMkM2enRpQkVTTjN6bmNHa3gyczJuWnBWWlRCeUxrY0E9MTswMDsn\"",
            "id": "8ac93e25-237c-ea11-9be0-d0e787b03942-20000",
            "documentId": "8ac93e25-237c-ea11-9be0-d0e787b03942",
            "sequence": 20000,
            "itemId": "96fe170f-9f75-ea11-bbf0-000d3a38a583",
            "accountId": "00000000-0000-0000-0000-000000000000",
            "lineType": "Item",
            "description": "ATHENS Desk",
            "unitOfMeasureId": "17ff170f-9f75-ea11-bbf0-000d3a38a583",
            "quantity": 3,
            "unitPrice": 1000.8,
            "discountAmount": 0,
            "discountPercent": 0,
            "discountAppliedBeforeTax": false,
            "amountExcludingTax": 3002.4,
            "taxCode": "FURNITURE",
            "taxPercent": 5.99987,
            "totalTaxAmount": 180.14,
            "amountIncludingTax": 3182.54,
            "invoiceDiscountAllocation": 0,
            "netAmount": 3002.4,
            "netTaxAmount": 180.14,
            "netAmountIncludingTax": 3182.54,
            "shipmentDate": "2020-04-11",
            "shippedQuantity": 0,
            "invoicedQuantity": 0,
            "invoiceQuantity": 3,
            "shipQuantity": 3,
            "lineDetails": {
                "number": "1896-S",
                "displayName": "ATHENS Desk"
            },
            "unitOfMeasure": {
                "code": "PCS",
                "displayName": "Piece",
                "symbol": null,
                "unitConversion": null
            }
        },
        {
            "@odata.etag": "W/\"JzQ0O0hCNlc4R1pSZXJzSjgzQ2t6U25CeDZqbjl2R1NadVdVUWxjVGJVeE5yOEE9MTswMDsn\"",
            "id": "8ac93e25-237c-ea11-9be0-d0e787b03942-30000",
            "documentId": "8ac93e25-237c-ea11-9be0-d0e787b03942",
            "sequence": 30000,
            "itemId": "96fe170f-9f75-ea11-bbf0-000d3a38a583",
            "accountId": "00000000-0000-0000-0000-000000000000",
            "lineType": "Item",
            "description": "ATHENS Desk",
            "unitOfMeasureId": "17ff170f-9f75-ea11-bbf0-000d3a38a583",
            "quantity": 8,
            "unitPrice": 1000.8,
            "discountAmount": 0,
            "discountPercent": 0,
            "discountAppliedBeforeTax": false,
            "amountExcludingTax": 8006.4,
            "taxCode": "FURNITURE",
            "taxPercent": 6.00007,
            "totalTaxAmount": 480.39,
            "amountIncludingTax": 8486.79,
            "invoiceDiscountAllocation": 0,
            "netAmount": 8006.4,
            "netTaxAmount": 480.39,
            "netAmountIncludingTax": 8486.79,
            "shipmentDate": "2020-04-11",
            "shippedQuantity": 0,
            "invoicedQuantity": 0,
            "invoiceQuantity": 8,
            "shipQuantity": 8,
            "lineDetails": {
                "number": "1896-S",
                "displayName": "ATHENS Desk"
            },
            "unitOfMeasure": {
                "code": "PCS",
                "displayName": "Piece",
                "symbol": null,
                "unitConversion": null
            }
        }
    ]
}
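As mentioned above, the $expand parameter also works on GET requests. For example, a sketch of retrieving the order created above including its lines (same company id, and the order id taken from the response above):

GET /api/v1.0/companies({{companyId}})/salesOrders(8ac93e25-237c-ea11-9be0-d0e787b03942)?$expand=salesOrderLines HTTP/1.1
Host: https://bcsandbox.docker.local:7048/bc
Authorization: Basic QURNSU46U2VjcmV0UGFzc3dvcmQ=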

Let’s have a look at the sales order API page. Below you see part of the source code of the Sales Order API v1.0. I’ve left out all code that is not interesting right now.

page 20028 "APIV1 - Sales Orders"
{
    APIVersion = 'v1.0';
    Caption = 'salesOrders', Locked = true;
    ChangeTrackingAllowed = true;
    DelayedInsert = true;
    EntityName = 'salesOrder';
    EntitySetName = 'salesOrders';
    ODataKeyFields = Id;
    PageType = API;
    SourceTable = "Sales Order Entity Buffer";
    Extensible = false;

    layout
    {
        area(content)
        {
            repeater(Group)
            {
                field(id; Id)
                {
                    ApplicationArea = All;
                    Caption = 'id', Locked = true;
                    Editable = false;

                    trigger OnValidate()
                    begin
                        RegisterFieldSet(FIELDNO(Id));
                    end;
                }
                
                part(salesOrderLines; 20044)
                {
                    ApplicationArea = All;
                    Caption = 'Lines', Locked = true;
                    EntityName = 'salesOrderLine';
                    EntitySetName = 'salesOrderLines';
                    SubPageLink = "Document Id" = FIELD(Id);
                }
            }
        }
    }
}

As you can see, salesOrderLines is a pagepart. On a side note, it is inside a repeater, ever seen that before? Anyway, what is important here is the EntitySetName. That’s the name that is exposed in the API endpoint, and that’s what you need to add to the $expand= parameter. It is also the name that is used in the request body for the deep insert. This is a case-sensitive value!

Deep insert with custom API’s

Another frequently asked question is if it is possible to extend API’s. Unfortunately, that’s not the case. Note the Extensible = false property in the source code of the standard API page above. If you want to support custom fields in API’s, then you need to create custom API’s. All features, like expand and deep insert, are also available for custom API’s. But beware! I’ve spent a lot of time creating custom API’s that support both direct insert and deep insert, and it can really drive you crazy. The behavior of subpages for direct inserts is not 100% the same as for deep inserts. Actually, this blog post originally had another part about how to support deep inserts in a custom sales order API. Unfortunately, I ran into too many issues and the blog post became way too complex with different workarounds and a lot of text to explain why the workarounds were needed. So, instead of going down that rabbit hole here, I’ve reached out to Microsoft to share my findings.

What I would recommend if you want to create a custom API that supports deep inserts and direct inserts, is to look at the source code of the standard API’s. Make a copy of it and add your own fields. You may need to do some extra plumbing then, but it will save you a lot of time and frustration.

Next blog post will be about another API performance feature: multiple calls at once, aka batch calls. Stay tuned!

Deep insert with Business Central APIs (part 2)


In my previous post about deep inserts with Business Central APIs, I mentioned creating custom APIs that support deep inserts. I tried to create an example with Sales Header / Sales Line, but I gave up because I ran into too many issues. In this post I want to explain what you need for creating custom APIs that support deep inserts. I will do so with a simple Header & Line table example.

The header table has just two fields and some code to simulate a number series.

table 60000 "My Header"
{
    fields
    {
        field(1; "No."; Code[10]) { }
        field(2; Description; Text[50]) { }
    }

    keys
    {
        key(PK; "No.") { }
    }

    trigger OnInsert()
    var
        MyHeader: Record "My Header";
    begin
        if "No." = '' then
            if MyHeader.FindLast() then
                "No." := IncStr(MyHeader."No.")
            else
                "No." := 'HEADER001';
    end;
}

The line table is linked to the header table in the usual way, similar to Sales Line.

table 60001 "My Line"
{
    fields
    {
        field(1; "Header No."; Code[20])
        {
            TableRelation = "My Header"."No.";
        }
        field(2; "Line No."; Integer) { }
        field(3; "Header Id"; Guid)
        {
            TableRelation = "My Header".SystemId;
        }
        field(4; Description; Text[50]) { }
    }

    keys
    {
        key(PK; "Header No.", "Line No.") { }
    }
}

But it also has a field “Header Id” which is linked to the SystemId field of the header table. And that is a key part in this story.

Linking by SystemId field

One of the recommendations when creating custom APIs is to make use of the SystemId field. Before we dive into the code of the custom API, we need to understand the role of the SystemId field. It is used in the URL like this:

{{baseurl}}/api/ajk/demo/v1.0/companies({{companyid}})/headers({{headerid}})

The companyid and headerid in the URL are the SystemId values of these tables. In the API page you need to set the property ODataKeyFields to the field that you want to filter with this value. Usually the ODataKeyFields property is set to SystemId.

page 60000 "My Header API"
{
    PageType = API;
    SourceTable = "My Header";
    APIPublisher = 'ajk';
    APIGroup = 'demo';
    APIVersion = 'v1.0';
    EntitySetName = 'headers';
    EntityName = 'header';
    DelayedInsert = true;
    ODataKeyFields = SystemId;

    layout
    {
        area(Content)
        {
            field(id; SystemId) { }
            field(headerNo; "No.") { }
            field(description; Description) { }
            part(lines; MyAPILines)
            {
                EntitySetName = 'lines';
                EntityName = 'line';
                SubPageLink = "Header Id" = field(SystemId);
            }
        }
    }
}

This is the code of the custom API for the header table. Simple and straightforward, isn’t it? The most important detail is the part that points to the page with the lines. As you can see, the link between the header and the lines is not based on the “No.” field of the header table, but on the SystemId field. Why don’t we just use the “No.” field as in normal pages? The reason is that API pages don’t behave like normal UI pages. They behave differently and you should treat them as a totally different beast.

This header API contains a page part for the lines, so we can expand the URL with the entity set name of that page part. We call that a navigational property of the API, a relation from one entity to another entity.

{{baseurl}}/api/ajk/demo/v1.0/companies({{companyid}})/headers({{headerid}})/lines

This URL points directly to the lines. When you use this URL to insert a new line, then it will use the header id from the URL as a filter for the lines. But it will not first read the header record in the header API page! Because of this, you can’t use any other value to link the header and line part than the value provided with the URL. The header API page does not even execute the triggers OnAfterGetRecord and OnAfterGetCurrRecord.

And that’s why you need the SystemId field of the header as a field in the line table.

The lines part

Let’s have a look at the lines part. We’ve just seen that the lines are filtered based on the SystemId of the header. With that value, we should be able to get the header record and populate the “Header No.” of the line table. But we need another property to get the value into the “Header Id” field: PopulateAllFields. Without this property, things still don’t work. It makes sure that non-key fields of new records are automatically populated based on a single filter value.

The lines part is just a regular ListPart page. It’s not an API page, because then it would become an API entity on its own and could be called without the header. Let’s have a look at the lines page.

page 60001 MyAPILines
{
    PageType = ListPart;
    SourceTable = "My Line";
    DelayedInsert = true;
    AutoSplitKey = true;
    PopulateAllFields = true;
    ODataKeyFields = SystemId;

    layout
    {
        area(Content)
        {
            repeater(lines)
            {
                field(id; Format(SystemId, 0, 4).ToLower()) { }
                field(headerNo; "Header No.") { }
                field(headerId; "Header Id") { }
                field(lineNo; "Line No.") { }
                field(description; Description) { }
            }
        }
    }

    var
        IsDeepInsert: Boolean;

    trigger OnInsertRecord(BelowxRec: Boolean): Boolean
    var
        MyHeader: Record "My Header";
        MyLine: Record "My Line";
    begin
        if IsDeepInsert then begin
            MyHeader.GetBySystemId("Header Id");
            "Header No." := MyHeader."No.";
            MyLine.SetRange("Header No.", "Header No.");
            if MyLine.FindLast() then
                "Line No." := MyLine."Line No." + 10000
            else
                "Line No." := 10000;
        end;
    end;

    trigger OnNewRecord(BelowxRec: Boolean)
    var
        MyHeader: Record "My Header";
    begin
        IsDeepInsert := IsNullGuid("Header Id");
        if not IsDeepInsert then begin
            MyHeader.GetBySystemId("Header Id");
            "Header No." := MyHeader."No.";
        end;
    end;
}

First of all, because this page is a normal page, and not an API page, it is not possible to put the SystemId field on it. That’s been solved with Format(SystemId, 0, 4).ToLower(). This returns the value in exactly the same way as the SystemId field on an API page.

As you can see, the PopulateAllFields property on the page is set to true. And the ODataKeyFields property is also set, because this page is exposed as part of an API.

The real stuff, which includes a bit of unexpected behavior, is in the triggers OnNewRecord and OnInsertRecord.

Let’s first look at the OnNewRecord. Consider a normal direct insert of a line record:

post {{baseurl}}/api/ajk/demo/v1.0/companies({{companyid}})/headers({{headerid}})/lines
Authorization: Basic {{username}} {{password}}
Content-Type: application/json

{
    "description": "Some line"
}

When you insert a new record directly, not as a deep insert, then the header id value is part of the URL. The header page can take that value and pass it to the line page by means of the SubPageLink property. In the lines page, the PopulateAllFields feature kicks in, and it sets the value of the “Header Id” field before the OnNewRecord trigger is executed. As a result, we can just read the header record with the GetBySystemId function in the OnNewRecord trigger and fill the “Header No.” field. Finally, that will enable the AutoSplitKey feature to correctly calculate the next line number.

But a deep insert behaves a little bit differently. Consider a deep insert of a header and some lines:

post {{baseurl}}/api/ajk/demo/v1.0/companies({{companyid}})/headers?$expand=lines
Authorization: Basic {{username}} {{password}}
Content-Type: application/json

{
    "description": "Deep insert",
    "lines": [
        {
           "description": "First line"
        },
        {
            "description": "Second line"
        }
    ]
}

In this situation, the “Header Id” field is not populated in the OnNewRecord trigger. It’s just empty, nada, nope, null. If that’s the case, then we can assume it’s a deep insert. But we still need the “Header No.” field to link the line to the header record (which is being created in the same call). After many tests, I figured out that when the record is inserted, in the OnInsertRecord trigger of the page, the “Header Id” field does contain the value. Unfortunately, the AutoSplitKey feature to automatically calculate line numbers doesn’t work properly in this situation. You need to write your own code in the OnInsertRecord trigger to calculate the next line number.

Field validations

Because API pages require delayed insert, all fields will be validated before the record is actually inserted. For deep inserts that means the “Header No.” field does not yet contain a value. And if you have field validation code that needs the header record, then you run into the situation that the header can’t be read. How to fix that?

Well, it turns out to be quite easy. During a deep insert, you don’t have the Header Id in the OnNewRecord trigger on the page. But when the OnValidate triggers of the fields are executed (which happens after the OnNewRecord trigger), then the Header Id field does have a value. The only thing you need to do is to read the Header record based on the “Header Id” field instead of the “Header No.” field. Makes sense, right?

Here is the line table again with an example of how to read the header table. Works both with deep inserts and direct inserts.

table 60001 "My Line"
{
    fields
    {
        field(1; "Header No."; Code[20])
        {
            TableRelation = "My Header"."No.";
        }
        field(2; "Line No."; Integer) { }
        field(3; "Header Id"; Guid)
        {
            TableRelation = "My Header".SystemId;
        }
        field(4; Description; Text[50])
        {
            trigger OnValidate()
            begin
                CheckHeaderStatus();
            end;
        }
    }

    keys
    {
        key(PK; "Header No.", "Line No.") { }
     }

    var
        MyHeader: Record "My Header";
        
     local procedure CheckHeaderStatus()
     begin
         GetHeader();
         // Do status check
     end;

     local procedure GetHeader()
     begin
         if MyHeader.SystemId <> "Header Id" then
            MyHeader.GetBySystemId("Header Id");
     end;
}

Now you can probably also see why it is hard to create a custom API for Sales Header and Sales Line that supports deep inserts. There is no “Header Id” field on the Sales Line, and the header is simply read by the “Document No.” field. If you find yourself in that situation, then I recommend taking a look at the standard APIs, copying them and adjusting them to your needs. These standard APIs are not based on the Sales Header and Sales Line tables but use buffer tables instead.

Hope this all makes sense. To be honest, it was not easy to explain this stuff. And I don’t blame you if it’s not clear to you. Please let me know in the comments and I’ll try to clarify!

Converting Enum values in AL


During my AL training classes, a frequently asked question is how to convert enum values to and from integer or text. So I thought it would be a good idea to share with you what possibilities you have.

First of all, we need to understand the properties of an enum. Let’s look at an enum definition and see how it is constructed.

With these properties, you can always get from ordinal to name or from name to ordinal. But don’t make the mistake of thinking that ordinal and index are the same. They are not! Of course, if you define the ordinal values as 1, 2, 3, … then they happen to be identical to the index. But what if an enum is extended? Then the ordinal value and the index will definitely not be identical anymore.
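For example, a sketch of an enum extension (name, ID and value are assumptions): the Platinum value below gets ordinal 50100, but its index in the Ordinals and Names lists is simply 4.

enumextension 50100 "Level Ext AJK" extends Level
{
    value(50100; Platinum) { Caption = 'Platinum level'; }
}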

Example 1: Format

It’s quite common to use the Format command to convert a non-text value to text. You can do the same with the enum variable. Consider this code:

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
    begin
        Level := Level::Gold;
        Message(Format(Level));
    end;

This results in a message with the caption of the enum value:

There are several standard format numbers that you can use. But none of them will convert the enum value to its name. The only values you can get with Format are the caption or the ordinal value.

Format                  Example
Format(Level, 0, 0)     Gold level
Format(Level, 0, 1)     Gold level
Format(Level, 0, 2)     30
Format(Level, 0, 9)     30

Example 2: Getting the ordinal value

There is a different way to get the ordinal value. Instead of using the Format function you can use the method AsInteger(). With the code below, you get 30 in the integer variable.

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        OrdinalValue: Integer;
    begin
        Level := Level::Gold;
        OrdinalValue := Level.AsInteger();
    end;

Example 3: Getting the name value

The real name of an enum can be very useful in case of imports or exports. Captions are not the best option. For export they may be useful, but for imports you’d better use the ordinal or the name. However, there is no AsText() method to directly get the name, like AsInteger() does for the ordinal. But there is a way to get it, even with one line of code.

The enum variable has two properties: Names and Ordinals. The Names property returns a list of texts, and the Ordinals property returns a list of integers. See also the first picture in this post. The trick is to keep in mind that both lists have equal length and that the values at a certain index belong to each other. Let’s have a look at the code:

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        OrdinalValue: Integer;
        Index: Integer;
        LevelName: Text;
    begin
        Level := Level::Gold;

        OrdinalValue := Level.AsInteger();  // Ordinal value = 30
        Index := Level.Ordinals.IndexOf(OrdinalValue);  // Index = 3
        LevelName := Level.Names.Get(Index); // Name = Gold
    end;

This is a safe and generic way to get the name, and it also works with enum extension values.

What about that promised one line of code? The above was three lines. Here you go:

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        LevelName: Text;
    begin
        Level := Level::Gold;

        LevelName := Level.Names.Get(Level.Ordinals.IndexOf(Level.AsInteger())); // Name = Gold
    end;

Example 4: Convert from integer

This is an easy one. If you have the integer value (not the index!) then you can convert to the enum with the method FromInteger().

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        OrdinalValue: Integer;
    begin
        OrdinalValue := 30;
        Level := Enum::Level.FromInteger(OrdinalValue);
    end;

As you can see, I use Enum::Level here, because the Level variable itself does not have the FromInteger method. But, for the sake of completeness, if I gave the enum variable a different name, then Enum::Level is not even needed:

    procedure EnumConvertDemo()
    var
        CustLevel: Enum Level;
        OrdinalValue: Integer;
    begin
        OrdinalValue := 30;
        CustLevel := Level.FromInteger(OrdinalValue);
    end;

This code example also shows that variable naming is important and can influence the scope. I prefer using Enum::Level because that is similar to what we are used to with Database::Customer, Page::”Customer Card”, etc., and it is independent of variable names.

Example 5: Convert from text

With text I mean the name, not the caption. Again, there is no FromName() method like the FromInteger() method, so at least that is consistent. And just as before, we can get the enum value by making use of the Ordinals and Names properties.

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        OrdinalValue: Integer;
        Index: Integer;
        LevelName: Text;
    begin
        LevelName := 'Gold';

        Index := Level.Names.IndexOf(LevelName); // Index = 3
        OrdinalValue := Level.Ordinals.Get(Index); // Ordinal value = 30
        Level := Enum::Level.FromInteger(OrdinalValue);
    end;

And again, this can also be written as one line of code:

    procedure EnumConvertDemo()
    var
        Level: Enum Level;
        LevelName: Text;
    begin
        LevelName := 'Gold';

        Level := Enum::Level.FromInteger(Level.Ordinals.Get(Level.Names.IndexOf(LevelName)));
    end;

Finally…

The Ordinals and Names properties are also available on Enum::Level. That means you don’t even have to use the variable for the conversion. If you want to get the ordinal from a name or vice versa, then this also works:

    procedure EnumConvertDemo()
    var
        LevelName: Text;
        OrdinalValue: Integer;
    begin
        LevelName := 'Gold';

        OrdinalValue := Enum::Level.Ordinals.Get(Enum::Level.Names.IndexOf(LevelName));
        LevelName := Enum::Level.Names.Get(Enum::Level.Ordinals.IndexOf(OrdinalValue));
    end;

In APIs it is quite normal to use the name value for enum fields. But you don’t need to write code in APIs to convert the name to the real enum value; that’s done automatically by the platform.

That’s it! Hope you enjoyed it!


Service-to-service authentication for automation APIs in Business Central 2020 release wave 2

One of the many new features in Business Central 2020 release wave 2, aka v17, is service-to-service authentication for automation APIs. What does that mean, how does it help us, and how does it work?

Automation APIs are used for automating company setup through APIs. In Microsoft’s terms, they can be used to ‘hydrate the tenant’. Think about creating a new company, importing and applying RapidStart packages, managing users and permissions, and, last but not least, uploading and installing per-tenant extensions. Automation APIs are widely used in DevOps pipelines to upload and install per-tenant extensions.

The automation APIs support only one authentication mechanism: the bearer token. To get a bearer token, you need to go through the OAuth authentication process. Only delegated admins (in Azure Active Directory) and Business Central users with the right permissions can call the APIs. This means that the process, in most cases PowerShell code executed by a DevOps agent, needs to impersonate a delegated admin or Business Central user. This requires a setup process in which the user first authorizes the app to access Business Central on their behalf, also known as the consent process. Because the code that calls the APIs normally doesn’t run under the user’s credentials, it needs to work with a refresh token, store it in a safe place and use it to get a new access token whenever it calls Business Central APIs.

This is going to change. In v17 it will be possible to let the app identify itself with its client secret and get a token without needing a refresh token. The consent process will also be different. The external application that calls the automation APIs needs to be registered inside Business Central, where it can also be given consent. After that, the external application can call automation APIs without any further setup requiring user interaction and without refresh tokens.

Let’s look at how that works. Imagine a PowerShell script (the external application) that wants to upload a per tenant extension. The process is as follows:

  • Register the external application in Azure Active Directory
  • Register the external application in Business Central and run the consent flow to authorize it
  • External application calls the automation API

Register the external application in Azure Active Directory

This step has to be done by the company owning the external application that will call Business Central automation APIs.

Navigate to the Azure portal and open Azure Active Directory. Click on App registrations in the menu and then on New registration. Fill in a name and the supported account type. For the supported account type, you should choose one of the first two options:

  • Accounts in this organizational directory only ([organization] – Single tenant)
  • Accounts in any organizational directory (Any Azure AD directory – Multitenant)

Choose the first option if your external application will only be used inside your own organization. If you intend to let other organizations use your external application, then choose the second option.

On this screen you should also fill in the Redirect URI. Enter this URL into the field: https://businesscentral.dynamics.com/OAuthLanding.htm.

Click on Register to create the application. Make a copy of the Application (client) ID from the overview screen. You will need this later when registering the app in Business Central.

The next step is to set the API permissions that the external application needs. If you are already familiar with OAuth and Business Central APIs, then you will see that there is a difference in this step. Click on API permissions in the menu and then on Add a permission. From the list of commonly used Microsoft APIs, select Dynamics 365 Business Central. On a side note, it still doesn’t have the new logo. Did Microsoft forget that? Anyway, now comes the difference. Instead of selecting delegated permissions, you must select Application permissions. As the description says, this is for applications that run as a background service without a signed-in user. From the list of permissions select Automation.ReadWrite.All.

This permission is described in the details of this new feature. You’ll notice another permission here, called app_access. At this moment, it’s unknown what that permission is for. My best guess is that this permission will allow us to call Business Central data APIs from a background service. Of course I’ve tried it, but I couldn’t get any further than getting a list of companies. According to Microsoft this permission is not for us (for now I hope) and they are still working on license options for background processes that access Business Central data as a non-interactive user. That scenario is definitely not going to be supported in v17. But I really hope we will see that soon.

Anyway, don’t forget to click on the button Add permissions to save the settings. You should now have a screen like this:

The last step in registering the app in Azure is to create a secret. Click on Certificates & secrets in the menu and then click on New client secret. Choose whatever expiration period you want and click on Add. Don’t forget to make a copy of the created secret, because this is the only time you will be able to see it.

We are now done with the registration in Azure and we can head to Business Central to register the application.

Register the external application in Business Central

In Business Central search for AAD Applications and open the page. Click on New to add a new record. Fill in the Client Id that you copied in the previous step or received from the organization that owns the external application. A description is also needed.

The next step is to grant consent. Click on the Grant Consent action in the top. You will now get a page that requires you to log in. The user that gives consent needs to be a Global Administrator, an Application Administrator, or a Cloud Application Administrator. If you don’t have that role, then let a user who has the required role log in here. This is only needed in this step of the registration process. After logging in you will get a page that asks for the permission we have set earlier.

Click on Accept and the page will close and you will get a confirmation message in Business Central.

The last step is to give this application permissions inside Business Central. Under User Permission Sets, add these two permission sets:

  • D365 AUTOMATION
  • D365 EXTENSION MGT

Now the external application has been fully set up to access Business Central automation APIs. We can move on to the external application and call the automation APIs.

External application calls the automation API

We are going to use PowerShell to call the automation APIs. The first example is for Windows PowerShell 5.1. You can use this code in the good old Windows PowerShell ISE. Don’t worry about my client secret, that one is already invalidated. 😉

What I would like to point out here is the tenantId. It’s just the domain instead of a GUID. Both options work, but this is more readable to me. More importantly, the registration in Azure in the first step was done in another AAD tenant (named Cronus Company). Because the external application (the script below) was registered as a multitenant application, I can use it for other tenants.

#########################
# PowerShell example
#########################

$ClientID     = "0e654beb-85b9-4f3c-8e53-0601d4bd3c15"
$ClientSecret = "_g1L8xZyPCc-~UFW24WUggNPjKGak4~y7r"
$loginURL     = "https://login.microsoftonline.com"
$tenantId     = "kauffmann.nl"
$scopes       = "https://api.businesscentral.dynamics.com/.default"
$environment  = "v17"
$baseUrl      = "https://api.businesscentral.dynamics.com/v2.0/$environment/api/microsoft/automation/v1.0"
$appFile      = "D:\Repos\Local\AL\ALProject254\Default publisher_ALProject254_1.0.0.0.app"

# Get access token 
$body = @{grant_type="client_credentials";scope=$scopes;client_id=$ClientID;client_secret=$ClientSecret}
$oauth = Invoke-RestMethod -Method Post -Uri $("$loginURL/$tenantId/oauth2/v2.0/token") -Body $body

# Get companies
$companies = Invoke-RestMethod `
             -Method Get `
             -Uri $("$baseurl/companies") `
             -Headers @{Authorization='Bearer ' + $oauth.access_token}

$companies.value | Format-Table -AutoSize
$companyId = $companies.value[0].id

# Upload and install app
Invoke-RestMethod `
-Method Patch `
-Uri $("$baseurl/companies($companyId)/extensionUpload(0)/content") `
-Headers @{Authorization='Bearer ' + $oauth.access_token;'If-Match'='*'} `
-ContentType "application/octet-stream" `
-InFile $appFile

The script above needs to handle the expiration of the token itself. Most probably this results in getting a new token every time the script runs. It’s recommended to install the MSAL.PS module. This PowerShell module has a function Get-MsalToken which handles the call to Azure to acquire the token, and it uses a cache to securely save and reuse the token until it expires. To install the module run this command:

Install-Module -name MSAL.PS -Force -AcceptLicense

Now the script can be changed to this. Note that the variable loginURL has been removed, that’s not needed anymore.

##############################
# PowerShell example with MSAL
##############################

$ClientID     = "0e654beb-85b9-4f3c-8e53-0601d4bd3c15"
$ClientSecret = "_g1L8xZyPCc-~UFW24WUggNPjKGak4~y7r"
$tenantId     = "kauffmann.nl"
$scopes       = "https://api.businesscentral.dynamics.com/.default"
$environment  = "v17"
$baseUrl      = "https://api.businesscentral.dynamics.com/v2.0/$environment/api/microsoft/automation/v1.0"
$appFile      = "D:\Repos\Local\AL\ALProject254\Default publisher_ALProject254_1.0.0.0.app"

# Get access token 
$token = Get-MsalToken `
         -ClientId $ClientID `
         -TenantId $tenantId `
         -Scopes $scopes `
         -ClientSecret (ConvertTo-SecureString -String $ClientSecret -AsPlainText -Force)

# Get companies
$companies = Invoke-RestMethod `
             -Method Get `
             -Uri $("$baseurl/companies") `
             -Headers @{Authorization='Bearer ' + $token.AccessToken}

$companies.value | Format-Table -AutoSize
$companyId = $companies.value[0].id

# Upload and install app
Invoke-RestMethod `
-Method Patch `
-Uri $("$baseurl/companies($companyId)/extensionUpload(0)/content") `
-Headers @{Authorization='Bearer ' + $token.AccessToken;'If-Match'='*'} `
-ContentType "application/octet-stream" `
-InFile $appFile

If you are using PowerShell Core, then the script can be simplified further because of some new parameters for authentication. Instead of using the Headers parameter for authentication, we can use the Authentication and Token parameters. It’s cosmetic of course, but it improves readability.

###################################
# PowerShell Core example with MSAL
###################################

$ClientID     = "0e654beb-85b9-4f3c-8e53-0601d4bd3c15"
$ClientSecret = "_g1L8xZyPCc-~UFW24WUggNPjKGak4~y7r"
$tenantId     = "kauffmann.nl"
$scopes       = "https://api.businesscentral.dynamics.com/.default"
$environment  = "v17"
$baseUrl      = "https://api.businesscentral.dynamics.com/v2.0/$environment/api/microsoft/automation/v1.0"
$appFile      = "D:\Repos\Local\AL\ALProject254\Default publisher_ALProject254_1.0.0.0.app"

# Get access token 
$token = Get-MsalToken `
         -ClientId $ClientID `
         -TenantId $tenantId `
         -Scopes $scopes `
         -ClientSecret (ConvertTo-SecureString -String $ClientSecret -AsPlainText -Force)

# Get companies
$companies = Invoke-RestMethod `
             -Method Get `
             -Uri $("$baseurl/companies") `
             -Authentication OAuth `
             -Token (ConvertTo-SecureString -String $token.AccessToken -AsPlainText -Force)

$companies.value | Format-Table -AutoSize
$companyId = $companies.value[0].id

# Upload and install app
Invoke-RestMethod `
-Method Patch `
-Uri $("$baseurl/companies($companyId)/extensionUpload(0)/content") `
-Authentication OAuth `
-Token (ConvertTo-SecureString -String $token.AccessToken -AsPlainText -Force) `
-Headers @{"If-Match"="*"} `
-ContentType "application/octet-stream" `
-InFile $appFile

Final comments

I can see two benefits of this new feature. From an end-user point of view, the only thing left to do is the registration in Business Central. This can be documented and should be easy to complete. And the external application does not have to implement a user interface to acquire a token and refresh token and store them in a safe place. That’s a win-win I would say!

The AAD Application card in Business Central contains fields for the App Id and App Name. This suggests that apps can register AAD applications themselves. I’ve not seen any information about doing that automatically as you can do with web services. But the registration of the AAD application in Business Central can also be done with AL code. From an install or upgrade codeunit you can call the function CreateAADApplication in Codeunit “AAD Application Interface” to do so. This does NOT automatically grant consent, but it is convenient for the end user. The only step they need to do is the grant consent flow. Make sure to add the extension info after the call, because the code doesn’t support that (well, not in the v17 preview, that is).

If the external application will be used inside your organization only, then you can select single tenant when registering the app in Azure. After setting the permissions in Azure, there is a button Grant admin consent for [organization]. You can use that instead of going through the grant consent flow in Business Central. But again, that only grants consent inside your organization. To get access to Business Central the app still needs to be registered in Business Central and get the right permission sets. The only difference is that you can skip the grant consent flow in Business Central in case you did that already in the Azure portal. Did I already say that this is only for inside your organization?

I hope this explanation was clear enough to get you going with this new feature in Business Central!

Business Central App maintenance policy tightened

Microsoft recently published information about maintaining apps and per-tenant extensions (PTEs) for Business Central. The policy can be summarized simply as “keep your apps compatible with Business Central or we are forced to remove your app”. This sounds a little bit harsh, but it makes sense if you think about it. Customers choose Business Central for several reasons, and one of those reasons is that Microsoft promises an always up-to-date business solution. If Microsoft can’t keep that promise because apps become incompatible, then that’s a bad thing.

Deprecated features

Now, one could say that future releases should always be backward compatible so that older apps still work. And to a certain extent that is right. It doesn’t work if Business Central updates come with a lot of breaking changes causing apps to fail easily. On the other hand, to improve the application and platform, it’s sometimes necessary to introduce new technologies and code changes that are not backward compatible. But to give partners time to prepare, Microsoft will announce deprecated features at least a year ahead. This list is a high-level list of features that you certainly should keep an eye on. It also contains a section about obsolete objects. Objects and code that are part of a feature that will be deprecated in the future will be marked as obsolete one year before they really disappear.

Breaking changes

Another topic in this list is called Breaking Changes. This is for changes that are the result of a redesign of code, resulting in moved, removed or replaced objects, fields or functions. Be aware that the topic of breaking changes does not come with the promise of announcing them one year ahead, or at least I don’t see such a promise. They can occur for other reasons than deprecated features. For example, when code is redesigned to mitigate performance issues. To help partners with those breaking changes and how to solve them, Microsoft published a document on GitHub with more information. That document also confirms that breaking changes might happen and sometimes can’t be avoided. In short, the code of deprecated features will be marked as obsolete a year ahead, but breaking changes in code may still occur earlier. It may even happen that partners detect a breaking change before Microsoft does.

What can we learn from this? That we should not rely solely on the list of deprecated features and obsolete functions. Otherwise it might happen that Microsoft suddenly informs you that your app or PTE is not compatible with the next version. When that happens you still have time to solve the issue, but you want to catch such changes as early as possible.

Detect breaking changes

What can you do to detect breaking changes as soon as they appear? In my opinion, there is only one good answer: implement a proper CI/CD process. You may already have a CI/CD process to support ongoing development that focuses on the current version of Business Central. But you should extend that with automatic builds, including automated tests, against the next minor and next major builds. Only in that way will you be able to detect breaking changes as soon as they appear. A build could break on compilation against the next major, and you want to know that. Or maybe the automated tests report errors for a next release; that’s also something you want to know.

Image by Freddy Kristiansen (source)

For this to happen, you need to have access to the preview releases. A major release is made available about one month before the release date. But Microsoft also publishes insider builds for the next minor and next major release. With those builds you can keep up earlier, and I definitely recommend getting access to those insider builds. Credentials for the insider builds are provided through Microsoft Collaborate. This is part of the Microsoft “Ready to Go” program. Follow this link to find detailed information on how to register on Collaborate.

Once you have access, it’s a matter of setting up a build process that includes the next releases. If you already have a build process for the current release, then it should be easy to add that. If you don’t have any automated builds yet, then I definitely recommend setting that up. There is plenty of information available, but a good start would be the blog of Freddy Kristiansen. Or follow an online course about DevOps from Waldo.

How to test Business Central webhooks

I don’t think webhooks need an introduction. But for those who are new to the topic, here you can find the official documentation. In short, a webhook is a push notification: the “don’t call us, we’ll call you” scenario of web services. You tell Business Central which resource you are interested in (e.g. Item, Customer, etc.), and it will call you in case there is any change. By resource I mean any of the existing APIs, including custom APIs.

Over time, especially during my API courses and also after the session I did at the virtual Directions event about APIs, I’ve received many questions about how to test a Business Central webhook. You want to see the results before you move on with creating an application for it, right? And most people struggle with the handshake mechanism: how do you get that to work when testing webhooks with a low-code platform or a simple solution?

Ok, here you go. Three different ways to test and work with Business Central webhooks:

Webhook subscriptions flow – Business Central perspective

Before we dive into these options, we need to understand the flow of subscribing to a Business Central Webhook. Let’s say we want to receive notifications in case there is any change in the customers. And we have a service running at a URL https://my.webhooklistener.com to receive those notifications. To tell Business Central to send notifications for the customers resource to this URL, we need to post the following request (leaving out the Authorization header):

POST https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/subscriptions
Content-Type: application/json

{
  "notificationUrl": "https://my.webhooklistener.com",
  "resource": "companies(264f8dd2-4a2a-eb11-bb4f-000d3a25f2a9)/customers",
  "clientState": "SuperSecretValue123!"
}

What happens in the background is that Business Central calls the notification URL, passing a validationToken parameter. The service must respond within 5 seconds with status 200 and include the value of the validationToken parameter in the response body. When this handshake mechanism is successfully completed, Business Central will create the webhook subscription. Because a picture is worth a thousand words, here is the flow diagram of this process.

After this, Business Central will start to send notifications to the notification URL when a change occurs in the resource, until the subscription expires, which happens after 3 days. Then the subscription must be renewed, which performs the same handshake verification process.

Webhook subscriptions flow – subscriber perspective

Let’s now look at the subscriber side at the notification URL. The subscriber receives the notifications, checks the clientState value (optionally, but highly recommended), and processes the notifications. However, the very same URL is also called by Business Central during the handshake with the validationToken parameter to verify if the subscriber is actually up and running. And this does not only happen when the subscription is created, it also happens when the subscription is renewed. In other words, the subscriber should first check if the request contains a validationToken parameter and if so, respond with the validation token value in the response body. Let’s look at the flow diagram of the subscriber.

This flow is exactly what we need to implement in order to successfully subscribe to a Business Central webhook. You will recognize this flow in the next examples.

Azure Functions webhook subscriber

With Azure Functions you can easily create a subscriber on Azure and check the results. The simplest way would be to just log the received notifications, so you can connect to the Azure Functions monitor and watch the messages coming in. Or you go one step further and store the messages in Azure Storage, push them to a Service Bus queue, or store them in a database like Azure SQL or (even better) Cosmos DB. The code below is a simple Azure Function that just logs the received notifications.

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace Kauffmann.TestBCWebhook
{
    public static class BCWebhook
    {
        const string clientState = "SuperSecretValue123!";
        [FunctionName("BCWebhook")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("BCWebhook function received a request.");

            string validationToken = req.Query["validationToken"];
            if (!String.IsNullOrWhiteSpace(validationToken))
            {
                log.LogInformation($"BCWebhook function processed validationToken: {validationToken}");
                return new OkObjectResult(validationToken);
            }

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

            dynamic data = JsonConvert.DeserializeObject(requestBody);
            if (data.value[0].clientState != clientState)
            {
                log.LogError("Received notification with incorrect clientState:");
                log.LogError(requestBody);
                return new BadRequestResult();
            }

            log.LogInformation("New notification:");
            log.LogInformation(requestBody);
            
            return new AcceptedResult();
        }
    }
}

On a side note, the notification payload contains the clientState value in every single object. The code above only checks it for the first object in the array. It seems overkill to me that it is repeated in every object; it would be sufficient if the clientState value were at the top level, next to the value array.

The output of the Azure Function looks like this.

If you want to inspect the notification further, then just copy the value and use a JSON viewer. I’m a fan of this online JSON viewer: http://jsonviewer.stack.hu/

Just grab the code and paste it into your Azure Function. Running Azure Functions requires an active Azure subscription, and to view the log you need to navigate through the Azure portal. The next option does not require that; it’s completely free and easy to use.

Pipedream

This service can be used to create workflows based on triggers. Workflows are code and you can run them for free (and there are paid plans as well, of course). Head over to Pipedream to see what you can do with it.

I’ve created a workflow that you can just copy and deploy right away. The only thing you need to do is create an account, which is free. The workflow consists of a trigger, followed by three steps. Again, you should recognize the steps from the flow diagram for the event subscriber.

How to get your hands on this workflow and make use of it?

Just open this link to get to the shared version of the workflow: https://pipedream.com/@ajkauffmann/business-central-api-webhook-p_3nCDWl

You will get a window that shows the workflow and has a green COPY button in the top right corner. Click on the COPY button and the workflow will be copied to your Pipedream environment. Then you get a blue DEPLOY button just above the trigger. If you click on it, you will get a unique URL displayed in the trigger that you can use to create a subscriber. You need to be logged in in order to deploy a workflow.

Taking the above workflow as an example, you can create a webhook subscription with this call (the first time it might run into a timeout!):

POST https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/subscriptions
Content-Type: application/json

{
  "notificationUrl": "https://enln9ir1q8i5w5g.m.pipedream.net",
  "resource": "companies(264f8dd2-4a2a-eb11-bb4f-000d3a25f2a9)/customers",
  "clientState": "SuperSecretValue123!"
}

When the workflow is active you will see the events in the left column. These events can be clicked and you can explore the output of every single step in the workflow.

Power Automate Flow

How to receive webhook notifications with Power Automate is a question I see quite often. The handshake mechanism is not supported by the HTTP request trigger, so people struggle to get this to work. But by implementing the flow diagram above, it will be no problem. Here is a Flow that handles the validationToken (the handshake) and also checks the clientState.

This Flow is available for download, both for Power Automate and as a Logic Apps template. Just download them and import them into your environment. I’ve also submitted it as a template, so hopefully it becomes available soon as a starter template.

Download links:

After the Flow has been activated, you can get the URL from the trigger ‘When a HTTP request is received’. Use this URL to create the subscription. In the Flow history you can view all runs, including the data and how it has been processed. If you want to store the notifications, then add that logic after the step ‘Respond Accepted’ in the ‘If yes’ branch of the step ‘Check clientState’. Pretty simple, isn’t it?

On top of this, it would also be easy to create a custom connector in Power Automate so you can create your own triggers. The only difference with the Flow above would be the trigger; the other steps will be exactly the same. But… you need to keep one thing in mind! Subscriptions expire after three days, so you need to renew them before they expire. That can be done manually of course, which is perfectly fine for testing. For production scenarios you could think of creating a Flow that runs periodically and renews your subscriptions. That should not be very hard to create. However, we are now going beyond the scope of this post, which is about testing webhooks.

With this I hope you got some ideas on how to easily test your webhooks!

What’s new in Business Central API v2.0

With the release of Business Central 2020 wave 2, a new version of the standard API set has been released. This new version got its own documentation here: https://docs.microsoft.com/en-us/dynamics-nav/api-reference/v2.0/. But while looking into it I was missing an overview of what has changed between v1.0 and v2.0. And nothing was mentioned in the what’s new overview either. The only resource I’ve found so far is this video from the virtual launch event: https://events1.social27.com/MSDyn365BCLaunchEvent/agenda/player/72576. Which is a great video by the way, but I’m the kind of guy that prefers reading over watching. So I decided to dive in and try to compile a complete list of all these changes. I guess I can’t be 100% complete here, but with your help we can expand this list. If you notice any change that should be on this list, then please drop me an email or write it in the comments below.

URL

First of all, the URL has been changed. For version v2.0 you need to use /api/v2.0 in the URL. The full URL of the API in a production environment on SaaS is now:

https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0

Don’t let the double v2.0 in the URL confuse you. The first v2.0 is the version of the online platform that supports multiple environments. It allows you to use your own names for the environments. The second v2.0 is the API version that you want to call.

For an on-prem server, or docker container, the URL is now:

https://mysandbox:7048/bc/api/v2.0

Where mysandbox is the name of your container or the server where the Business Central service tier is running.

One important thing to mention here: the v1.0 version is still available. Your existing integrations with API v1.0 will not break because v2.0 was introduced. This is a very common scenario in the API world: you get time to implement the new version. Of course you are encouraged to do so, because v1.0 will be removed at a certain point in the future. But hey, the API beta version should already have been gone by now, and it is still available. 😉

Source code

The source code of API v1.0 and v2.0 can be found on GitHub in the ALAppExtensions repository:
https://github.com/microsoft/ALAppExtensions/tree/master/Apps/W1/APIV1
https://github.com/microsoft/ALAppExtensions/tree/master/Apps/W1/APIV2
Just clone the repository and you have full access to the code, which can be very helpful!

Differences

The most important question is of course how different API v2.0 is from v1.0. Are there many new fields, did the structure change, etc.? The best way to see the differences between the two versions is by comparing the metadata. And there is a massive difference between the two versions. See here for yourself: https://editor.mergely.com/oTms3IZF. Let’s look at a number of important differences.

Nested objects

In v1.0 there was something called Edm types or complex types. This was a feature to return nested JSON objects in the result of an API call. In the example below that is the case with the address property in the customer API.

// Customers API v1.0
{
    "@odata.etag": "W/\"JzQ0O0NVYTIrTXBEV0V6VGZtcStpa3Zaekp0clFGZFRndDArZFJSRFh6eUkrL0U9MTswMDsn\"",
    "id": "983833b3-b1fd-ea11-bc7d-00155df3a615",
    "number": "10000",
    "displayName": "Adatum Corporation",
    "type": "Company",
    "phoneNumber": "",
    "email": "robert.townes@contoso.com",
    "website": "",
    "taxLiable": true,
    "taxAreaId": "df3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "taxAreaDisplayName": "ATLANTA, GA",
    "taxRegistrationNumber": "",
    "currencyId": "00000000-0000-0000-0000-000000000000",
    "currencyCode": "USD",
    "paymentTermsId": "aa3733b3-b1fd-ea11-bc7d-00155df3a615",
    "shipmentMethodId": "00000000-0000-0000-0000-000000000000",
    "paymentMethodId": "8a3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "blocked": " ",
    "lastModifiedDateTime": "2020-09-23T15:31:09.68Z",
    "address": {
        "street": "192 Market Square",
        "city": "Atlanta",
        "state": "GA",
        "countryLetterCode": "US",
        "postalCode": "31772"
    }
}

In the code of the v1.0 API page this was defined as follows (note the property ODataEDMType):

    field(address; PostalAddressJSON)
    {
        Caption = 'address', Locked = true;
        ODataEDMType = 'POSTALADDRESS';
        ToolTip = 'Specifies the address for the customer.';

        trigger OnValidate()
        begin
            PostalAddressSet := TRUE;
        end;
    }

To read more about the property ODataEDMType, I recommend reading this detailed post from Vjeko about the topic. The point I would like to make here is that this property is not used anymore in the v2.0 API. None of the APIs has a nested data structure that is defined in this way. Instead, you will see first-level properties or navigation properties. The address property in the customer API, for example, has been replaced by first-level properties. In the example below you can see that we now have all address fields directly on the same level as the other properties, just as they appear in the table.

// Customers API v2.0
{
    "@odata.etag": "W/\"JzQ0O0NVYTIrTXBEV0V6VGZtcStpa3Zaekp0clFGZFRndDArZFJSRFh6eUkrL0U9MTswMDsn\"",
    "id": "983833b3-b1fd-ea11-bc7d-00155df3a615",
    "number": "10000",
    "displayName": "Adatum Corporation",
    "type": "Company",
    "addressLine1": "192 Market Square",
    "addressLine2": "",
    "city": "Atlanta",
    "state": "GA",
    "country": "US",
    "postalCode": "31772",
    "phoneNumber": "",
    "email": "robert.townes@contoso.com",
    "website": "",
    "taxLiable": true,
    "taxAreaId": "df3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "taxAreaDisplayName": "ATLANTA, GA",
    "taxRegistrationNumber": "",
    "currencyId": "00000000-0000-0000-0000-000000000000",
    "currencyCode": "USD",
    "paymentTermsId": "aa3733b3-b1fd-ea11-bc7d-00155df3a615",
    "shipmentMethodId": "00000000-0000-0000-0000-000000000000",
    "paymentMethodId": "8a3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "blocked": " ",
    "lastModifiedDateTime": "2020-09-23T15:31:09.68Z"
}

An example of a navigation property can be found in the items API. In v1.0 the properties of the Base Unit of Measure are represented as a nested object.

// Items API v1.0
{
    "@odata.etag": "W/\"JzQ0O1NsakZCQWNRU21INi8xOS9zWFhXSWRwWGxNbVArOXdjWVM0NGxqOUJxUUU9MTswMDsn\"",
    "id": "a23833b3-b1fd-ea11-bc7d-00155df3a615",
    "number": "1896-S",
    "displayName": "ATHENS Desk",
    "type": "Inventory",
    "itemCategoryId": "2fd92eb9-b1fd-ea11-bc7d-00155df3a615",
    "itemCategoryCode": "TABLE",
    "blocked": false,
    "baseUnitOfMeasureId": "023933b3-b1fd-ea11-bc7d-00155df3a615",
    "gtin": "",
    "inventory": 4,
    "unitPrice": 1000.8,
    "priceIncludesTax": false,
    "unitCost": 780.7,
    "taxGroupId": "ee3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "taxGroupCode": "FURNITURE",
    "lastModifiedDateTime": "2020-09-23T15:31:11.057Z",
    "baseUnitOfMeasure": {
        "code": "PCS",
        "displayName": "Piece",
        "symbol": null,
        "unitConversion": null
    }
}

In the Items API v2.0 not all of these Unit of Measure properties are included as first-level properties. Only the Base Unit of Measure Code is included as that is a field in the table.

// Items API v2.0
{
    "@odata.etag": "W/\"JzQ0O0xYWXRncXdObnU0Q2ppc25kV1Jac2NNMHZmUExXSjNVSy8yWGZBSFdXY0E9MTswMDsn\"",
    "id": "a23833b3-b1fd-ea11-bc7d-00155df3a615",
    "number": "1896-S",
    "displayName": "ATHENS Desk",
    "type": "Inventory",
    "itemCategoryId": "2fd92eb9-b1fd-ea11-bc7d-00155df3a615",
    "itemCategoryCode": "TABLE",
    "blocked": false,
    "gtin": "",
    "inventory": 4,
    "unitPrice": 1000.8,
    "priceIncludesTax": false,
    "unitCost": 780.7,
    "taxGroupId": "ee3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "taxGroupCode": "FURNITURE",
    "baseUnitOfMeasureId": "023933b3-b1fd-ea11-bc7d-00155df3a615",
    "baseUnitOfMeasureCode": "PCS",
    "lastModifiedDateTime": "2020-09-23T15:34:03.207Z"
}

So, where did those properties go? The Items API v2.0 has a new navigation property called unitOfMeasure. A navigation property represents data from related tables and can optionally be included in the returned data. To add it to the result, you need to add the parameter ?$expand=unitOfMeasure. The URL then looks like:

https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/companies({id})/items?$expand=unitOfMeasure

The result now looks like this:

// Items API v2.0
{
    "@odata.etag": "W/\"JzQ0O0xYWXRncXdObnU0Q2ppc25kV1Jac2NNMHZmUExXSjNVSy8yWGZBSFdXY0E9MTswMDsn\"",
    "id": "a23833b3-b1fd-ea11-bc7d-00155df3a615",
    "number": "1896-S",
    "displayName": "ATHENS Desk",
    "type": "Inventory",
    "itemCategoryId": "2fd92eb9-b1fd-ea11-bc7d-00155df3a615",
    "itemCategoryCode": "TABLE",
    "blocked": false,
    "gtin": "",
    "inventory": 4,
    "unitPrice": 1000.8,
    "priceIncludesTax": false,
    "unitCost": 780.7,
    "taxGroupId": "ee3a33b3-b1fd-ea11-bc7d-00155df3a615",
    "taxGroupCode": "FURNITURE",
    "baseUnitOfMeasureId": "023933b3-b1fd-ea11-bc7d-00155df3a615",
    "baseUnitOfMeasureCode": "PCS",
    "lastModifiedDateTime": "2020-09-23T15:34:03.207Z",
    "unitOfMeasure": {
        "@odata.etag": "W/\"JzQ0O25yOXVDUTNxWDdtMTRIcnZ5UHZpOGp2Q0lQWFl1NFhqbzA0OTdXOGNPbDA9MTswMDsn\"",
        "id": "023933b3-b1fd-ea11-bc7d-00155df3a615",
        "code": "PCS",
        "displayName": "Piece",
        "internationalStandardCode": "EA",
        "symbol": "",
        "lastModifiedDateTime": "2020-09-23T15:31:13.643Z"
    }
}

Another property that was changed in the same way is the property dimensions. Previously, you would find dimensions of a journal line under the property dimensions. In v2.0 this has been changed to dimensionSetLines and needs to be expanded.
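
For example, a request for journal lines including their dimensions could look like the URL below (the exact path is an assumption here; check the journalLines documentation for your version):

https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/companies({id})/journals({journalId})/journalLines?$expand=dimensionSetLines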

To find all available navigation properties, you can read the documentation. If an API has a navigation property, it will be documented. For the Items API it can be found here. You can of course also look into the metadata, where you will find the complete definition in XML.

The change to use navigation properties instead of complex types has a big effect on performance. Complex types needed to be constructed by code, but the navigation properties are just other API pages (or subpages if you want). This results in less code, but you need to remember to include $expand= in the URL to get those navigation properties.

Relationship multiplicities

The move from complex types to API subpages comes with another change that was needed. Complex types could be defined as an object or as a collection. But API subpages were always treated as a collection, even if they could only contain one record. Here is an example of what I mean. In the source code of the customers API page you will find this subpage:

part(customerFinancialDetails; 20048)
{
    Caption = 'Customer Financial Details', Locked = true;
    EntityName = 'customerFinancialDetail';
    EntitySetName = 'customerFinancialDetails';
    SubPageLink = SystemId = FIELD(SystemId);
}

The metadata shows what the result will look like, as a collection:

<NavigationProperty Name="customerFinancialDetails" Type="Collection(Microsoft.NAV.customerFinancialDetail)" Partner="customer" ContainsTarget="true">
    <ReferentialConstraint Property="id" ReferencedProperty="id" />
</NavigationProperty>

When you call the customers API with $expand=customerFinancialDetails you will get the details in an array. But the subpage itself is based on the same Customer table and just includes a number of FlowFields. This is done for performance reasons, by the way. In API v1.0 the JSON looks like this:

{
    "id": "5eea6afd-4a2a-eb11-bb4f-000d3a25f2a9",
    "number": "10000",
    "displayName": "Adatum Corporation",
    "type": "Company",
    "customerFinancialDetails": [
        {
            "@odata.etag": "W/\"JzQ0O25GVTRvaWdTUk13aG9TOUEySzhYaTM1WE84SkxDeXF5MEdTeUlZblhGSlk9MTswMDsn\"",
            "id": "5eea6afd-4a2a-eb11-bb4f-000d3a25f2a9",
            "number": "10000",
            "balance": 0,
            "totalSalesExcludingTax": 223598.4,
            "overdueAmount": 0
        }
    ]
}

As you can see, the array contains only one record, and it will always contain just one record. With complex types, you could define this in the Edm type definition. But Edm types are not used anymore. Instead, Business Central v17 allows you to specify the multiplicity as a property on the page part, and that’s used in the API v2.0 app to get a single nested object instead of a collection:

part(customerFinancialDetails; "APIV2 - Cust Financial Details")
{
    //TODO - WaitingModernDevProperty, Caption = 'Customer Financial Details';
    CaptionML = ENU = 'Multiplicity=ZeroOrOne';
    EntityName = 'customerFinancialDetail';
    EntitySetName = 'customerFinancialDetails';
    SubPageLink = SystemId = Field(SystemId);
}

The metadata now shows that this is a single object instead of a collection. But be careful: because it is a single object, it’s referenced by its EntityName, not by the EntitySetName!

<NavigationProperty Name="customerFinancialDetail" Type="Microsoft.NAV.customerFinancialDetail" ContainsTarget="true">
    <ReferentialConstraint Property="id" ReferencedProperty="id" />
</NavigationProperty>

This is another breaking change with API v2.0. In this example, we must add $expand=customerFinancialDetail to the URL (without the s at the end). The result now looks like:

{
    "id": "5eea6afd-4a2a-eb11-bb4f-000d3a25f2a9",
    "number": "10000",
    "displayName": "Adatum Corporation",
    "type": "Company",
    "customerFinancialDetail": {
        "id": "5eea6afd-4a2a-eb11-bb4f-000d3a25f2a9",
        "number": "10000",
        "balance": 0,
        "totalSalesExcludingTax": 223598.4,
        "overdueAmount": 0
    }
}

Back to the property that is used to define the multiplicity. At the release of Business Central v17, there was no property available. As a workaround the CaptionML property is used in this way:

CaptionML = ENU = 'Multiplicity=ZeroOrOne';

Multiplicity can be set to ‘ZeroOrOne’ or ‘Many’. The default is ‘Many’, which will result in a collection. In runtime 6.1 the new property Multiplicity is already available, however, it doesn’t work yet. If you need this in your custom APIs, then you have to use the workaround with CaptionML for now.
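
As an illustration, a page part in a custom API could apply the same workaround like this (the part name, page name and entity names below are hypothetical):

part(customerDetail; "My Customer Detail API")
{
    // Workaround: the Multiplicity property doesn't work yet in runtime 6.1,
    // so the CaptionML trick exposes this part as a single nested object.
    CaptionML = ENU = 'Multiplicity=ZeroOrOne';
    EntityName = 'customerDetail';
    EntitySetName = 'customerDetails';
    SubPageLink = SystemId = field(SystemId);
}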

Key fields in the url

In v1.0 there were still some APIs that didn’t use the SystemId as the primary key. In v2.0 that has changed: all entities can be retrieved with the SystemId, which is a unique GUID. In case you have a scenario where you want to create a new record and provide the SystemId yourself, that’s also possible in API v2.0.

Enums

Another change in the schema of API v2.0 is the possibility to use enums. To use enums, you must add ?$schemaversion=2.0 to the API URL. If you do so, you will get different behavior for fields that are based on enums. All exposed fields that were of type option in v1.0 are converted into enums for v2.0. The difference can be explored here: https://editor.mergely.com/IWfP3kR0/.
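
For example, to call the customers API with enum support, the URL would look like this (following the URL pattern from the examples earlier in this post):

https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/companies({id})/customers?$schemaversion=2.0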

Here is an example of what is added to the schema:

<EnumType Name="contactType">
    <Member Name="Company" Value="0" />
    <Member Name="Person" Value="1" />
</EnumType>
<EnumType Name="customerBlocked">
    <Member Name="_x0020_" Value="0" />
    <Member Name="Ship" Value="1" />
    <Member Name="Invoice" Value="2" />
    <Member Name="All" Value="3" />
</EnumType>

Those fields were previously exposed as Edm.String, but now they are strongly typed:

<Property Name="blocked" Type="Microsoft.NAV.customerBlocked" />

This is very helpful in case you want to know which values are available for enum fields, for example to verify data before sending it or to display only valid values in a dropdown to a user.

What about that strange value “_x0020_”? That’s of course the blank enum value, also known as ” “. It’s the Unicode-encoded representation of a space. You would expect that you also need to use this Unicode encoding when posting data, but it turns out that spaces can still be used. Below are two examples of a JSON body for the customers API. The first one is only accepted when you use schema version 2.0; the second one works fine in all cases.

// This body works only with ?$schemaversion=2.0
{
    "displayName": "Demo Enum",
    "blocked": "_x0020_"
}

// This body works for both schemaversions
{
    "displayName": "Demo Enum",
    "blocked": " "
}

In case you want to present the possible options to a user, then you need to decode the Unicode value to get back the original value. But wait… there is another new feature in API v2.0 that’s even better…

Captions

Yep… captions are now exposed by a specific endpoint. Integrations that need to present fields and/or enum values in a UI can grab the captions coming from BC, including the translations! The endpoint is named entityDefinitions. Actually, this is not only available for v2.0, it’s also available for v1.0. But for v1.0 it doesn’t make much sense, because most translations are missing in v1.0. In other words, it’s a new platform feature for APIs in Business Central v17, and API v2.0 is the first app to make use of it.

Ok, what is it?

The entityDefinitions endpoint provides multilanguage captions for the entity, the entity set and all properties of the entity. And if you include schema version 2.0, then you get the enum captions included as well. This is what the structure looks like:

{
    "entityName": "customer",
    "entitySetName": "customers",
    "entityCaptions": [],
    "entitySetCaptions": [],
    "properties": [],
    "actions": [],
    "enumMembers": []
}
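
The structure above is returned by a plain GET against the API root; a sketch of such a request (the URL pattern is assumed to match the other examples in this post):

GET https://api.businesscentral.dynamics.com/v2.0/production/api/v2.0/entityDefinitions?$schemaversion=2.0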

Each array contains multiple translations. For example, the entityCaptions looks like this:

"entityCaptions": [
    {
        "languageCode": 3079,
        "caption": "Debitor"
    },
    {
        "languageCode": 2055,
        "caption": "Debitor"
    },
    {
        "languageCode": 1044,
        "caption": "Kunde"
    },
    {
        "languageCode": 1031,
        "caption": "Debitor"
    },
    {
        "languageCode": 1053,
        "caption": "Kund"
    },
    {
        "languageCode": 4105,
        "caption": "Customer"
    },
    {
        "languageCode": 2057,
        "caption": "Customer"
    },
    {
        "languageCode": 5129,
        "caption": "Customer"
    },
    {
        "languageCode": 3081,
        "caption": "Customer"
    },
    {
        "languageCode": 1030,
        "caption": "Kunde"
    },
    {
        "languageCode": 1043,
        "caption": "Klant"
    },
    {
        "languageCode": 2067,
        "caption": "Klant"
    },
    {
        "languageCode": 1049,
        "caption": "Клиент"
    },
    {
        "languageCode": 1033,
        "caption": "Customer"
    },
    {
        "languageCode": 1040,
        "caption": "Cliente"
    },
    {
        "languageCode": 2064,
        "caption": "Cliente"
    },
    {
        "languageCode": 1035,
        "caption": "Asiakas"
    },
    {
        "languageCode": 1029,
        "caption": "Zákazník"
    },
    {
        "languageCode": 3082,
        "caption": "Cliente"
    },
    {
        "languageCode": 3084,
        "caption": "Client"
    },
    {
        "languageCode": 4108,
        "caption": "Client"
    },
    {
        "languageCode": 2058,
        "caption": "Cliente"
    },
    {
        "languageCode": 1039,
        "caption": "Viðskiptamaður"
    },
    {
        "languageCode": 1036,
        "caption": "Client"
    },
    {
        "languageCode": 2060,
        "caption": "Client"
    }
]

The properties array contains a list of all properties with their captions (only displayName is shown here):

"properties": [
    {
        "name": "displayName",
        "captions": [
            {
                "languageCode": 3079,
                "caption": "Anzeigename"
            },
            {
                "languageCode": 2055,
                "caption": "Anzeigename"
            },
            {
                "languageCode": 1044,
                "caption": "Visningsnavn"
            },
            {
                "languageCode": 1031,
                "caption": "Anzeigename"
            },
            {
                "languageCode": 1053,
                "caption": "Visningsnamn"
            },
            {
                "languageCode": 4105,
                "caption": "Display Name"
            },
            {
                "languageCode": 2057,
                "caption": "Display Name"
            },
            {
                "languageCode": 5129,
                "caption": "Display Name"
            },
            {
                "languageCode": 3081,
                "caption": "Display Name"
            },
            {
                "languageCode": 1030,
                "caption": "Visningsnavn"
            },
            {
                "languageCode": 1043,
                "caption": "Weergavenaam"
            },
            {
                "languageCode": 2067,
                "caption": "Weergavenaam"
            },
            {
                "languageCode": 1049,
                "caption": "Отображаемое имя"
            },
            {
                "languageCode": 1033,
                "caption": "Display Name"
            },
            {
                "languageCode": 1040,
                "caption": "Nome visualizzato"
            },
            {
                "languageCode": 2064,
                "caption": "Nome visualizzato"
            },
            {
                "languageCode": 1035,
                "caption": "Näyttönimi"
            },
            {
                "languageCode": 1029,
                "caption": "Zobrazit název"
            },
            {
                "languageCode": 3082,
                "caption": "Nombre para mostrar"
            },
            {
                "languageCode": 3084,
                "caption": "Nom d’affichage"
            },
            {
                "languageCode": 4108,
                "caption": "Nom d’affichage"
            },
            {
                "languageCode": 2058,
                "caption": "Nombre para mostrar"
            },
            {
                "languageCode": 1039,
                "caption": "Birtingarheiti"
            },
            {
                "languageCode": 1036,
                "caption": "Nom d’affichage"
            },
            {
                "languageCode": 2060,
                "caption": "Nom d’affichage"
            }
        ]
    }
]

Where do the captions and the translations come from? Not from the base app! Every API page has captions defined as you can see here in the first lines of the customers API:

page 30009 "APIV2 - Customers"
{
    APIVersion = 'v2.0';
    EntityCaption = 'Customer';
    EntitySetCaption = 'Customers';
    ChangeTrackingAllowed = true;
    DelayedInsert = true;
    EntityName = 'customer';
    EntitySetName = 'customers';
    ODataKeyFields = SystemId;
    PageType = API;
    SourceTable = Customer;
    Extensible = false;

    layout
    {
        area(content)
        {
            repeater(Group)
            {
                field(id; SystemId)
                {
                    Caption = 'Id';
                    Editable = false;
                }
                field(number; "No.")
                {
                    Caption = 'No.';
                }
                field(displayName; Name)
                {
                    Caption = 'Display Name';
                    ShowMandatory = true;
                    
[further lines truncated]

You can see the properties EntityCaption and EntitySetCaption, both of which are also visible in the JSON from entityDefinitions. The same goes for the captions on the fields. The translations of these captions are included in the API app; they are not taken from the base app. This is something to keep in mind, as it can be really confusing!

If you use schema version 2.0, then you also get the enums in this way:

{
    "entityName": "customerBlocked",
    "entitySetName": null,
    "entityCaptions": [],
    "entitySetCaptions": [],
    "properties": [],
    "actions": [],
    "enumMembers": [
        {
            "name": "_x0020_",
            "value": 0,
            "captions": [
                {
                    "languageCode": 1033,
                    "caption": " "
                }
            ]
        },
        {
            "name": "Ship",
            "value": 1,
            "captions": [
                {
                    "languageCode": 1033,
                    "caption": "Ship"
                }
            ]
        },
        {
            "name": "Invoice",
            "value": 2,
            "captions": [
                {
                    "languageCode": 1033,
                    "caption": "Invoice"
                }
            ]
        },
        {
            "name": "All",
            "value": 3,
            "captions": [
                {
                    "languageCode": 1033,
                    "caption": "All"
                }
            ]
        }
    ]
}

What is strange here is that we only see captions for language code 1033 (ENU). Somehow, the translations of the enum “Customer Blocked” are not included. I checked the translation files of the base app (because the enums are defined in the base app), but the translations for those enums are definitely there. My guess is that the entityDefinitions only takes translations from the API app and does not look into the base app. This is something for Microsoft to fix!

This new entityDefinitions endpoint also works with custom APIs. Just include the captions and translations and you’re all set!
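
For example, a custom API page only needs the same caption properties, and the translated captions then come from the XLIFF translation files that ship with your extension. Here is a minimal sketch; the object names and the Vehicle source table are made up for the example:

page 50100 "Vehicle API"
{
    PageType = API;
    APIPublisher = 'kauffmann';
    APIGroup = 'demo';
    APIVersion = 'v1.0';
    EntityName = 'vehicle';
    EntitySetName = 'vehicles';
    EntityCaption = 'Vehicle';      // ends up in entityCaptions
    EntitySetCaption = 'Vehicles';  // ends up in entitySetCaptions
    SourceTable = Vehicle;          // hypothetical table in the same extension
    DelayedInsert = true;
    ODataKeyFields = SystemId;

    layout
    {
        area(Content)
        {
            repeater(Records)
            {
                field(id; Rec.SystemId)
                {
                    Caption = 'Id';           // field captions end up in properties[].captions
                    Editable = false;
                }
                field(displayName; Rec.Description)
                {
                    Caption = 'Display Name';
                }
            }
        }
    }
}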

That’s what I could find about the new API features in Business Central v17 and API v2.0. If you have anything to add, please let me know!

Batch calls with Business Central APIs (1) – Basic operation

A while ago I wrote about deep inserts with Business Central APIs. And I promised to write about batch calls too, so it’s about time to live up to that. 😀 Actually, I did some online sessions about it, like this one for DynamicsCon and also for Directions 2020. But I think it is still worth writing down. And because there is a lot to share about this topic, my plan is to write a series of posts. Today, we start with the basics of batch calls.

Background

When calling Business Central APIs, you do one operation at a time. For example, you can only insert or modify one customer, or create one sales invoice. With deep inserts, it is possible to create a header and its lines together, and then you can create multiple lines. But that is only possible for the line records; you still create one header at a time. With a batch request, it is possible to combine multiple operations in one request. The batch request is submitted as a single HTTP POST request to the $batch endpoint, and the body contains a set of operations. From an optimization point of view, batching reduces the number of requests from an API consumer because you combine multiple requests into one.

Batch URL

The $batch endpoint is available on all API endpoints. For the Business Central SaaS environment you can use this URL:

https://{{baseurl}}/api/v2.0/$batch

Everything I describe here also works for custom APIs. The URL will then look like:

https://{{baseurl}}/api/[publisher]/[group]/[version]/$batch

The parameter {{baseurl}} can be replaced with the standard endpoint URLs for Business Central as explained here: https://docs.microsoft.com/en-us/dynamics-nav/api-reference/v2.0/endpoints-apis-for-dynamics.

Request headers

Because the request body will be JSON, the Content-Type header of the request must be set to application/json. If the response should be JSON as well (and you want that!), then the Accept header must also be set to application/json. If you leave the Accept header out, the response will be multipart/mixed. Here is the basic structure of the request, without the body. I leave out the Authorization header, but you obviously need to add that. 🙂

POST {{baseurl}}/api/v2.0/$batch
Content-Type: application/json
Accept: application/json
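
By the way, if you call the $batch endpoint from AL (for example from a test codeunit or an integration codeunit), keep in mind that Content-Type is a content header while Accept is a request header. A minimal sketch, assuming authorization is handled elsewhere and with a made-up procedure name:

procedure PrepareBatchRequest(BodyText: Text; var Client: HttpClient; var Content: HttpContent)
var
    ContentHeaders: HttpHeaders;
begin
    // The JSON batch body goes into the HttpContent.
    Content.WriteFrom(BodyText);

    // Content-Type must be set on the content headers, replacing the default text/plain.
    Content.GetHeaders(ContentHeaders);
    ContentHeaders.Clear();
    ContentHeaders.Add('Content-Type', 'application/json');

    // Accept is a request header, so it goes on the HttpClient.
    Client.DefaultRequestHeaders.Add('Accept', 'application/json');
end;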

Request body

Batch requests support two body types: multipart/mixed and application/json. Because multipart/mixed is more complex to compose and read, while the JSON body is much more readable and works fine with Business Central APIs, I will only discuss application/json in this blog post. The request body is a JSON document with this basic format:

{
	"requests": []
}

As you can see, that’s quite a simple JSON payload, isn’t it?

The requests array must contain one or more operations, and each of them must contain an id, a method and a URL, and optionally also headers and a body. Here is an example of an operation that inserts a single journal line.

{
	"method": "POST",
	"id": "r1",
	"url": "companies({{companyId}})/journals({{journalId}})/journalLines",
	"headers": {
		"Content-Type": "application/json"
	},
	"body": {
        "accountId": "{{accountId}}",
        "postingDate": "2020-10-20",
		"documentNumber": "SALARY2020-10",
		"amount": -3250,
		"description": "Salary to Bob"
	}
}

Each operation has a number of properties:

  • id (mandatory) – Unique identification of this operation within the batch.
  • method (mandatory) – One of the standard HTTP methods GET, POST, PATCH, PUT or DELETE. This value is case insensitive.
  • url (mandatory) – Path to the API. This can be a relative or an absolute path. A relative path replaces the $batch part of the batch request URL; an absolute path replaces the complete URL of the batch request. The URL may contain parameters like $select, $filter, etc.
  • headers (optional) – List of headers for this operation, in the format "header-name": "value".
  • body (optional for GET and DELETE, mandatory for POST, PATCH and PUT) – The content of the body is the same as for a single request. For Business Central APIs this is usually a JSON payload. The headers must contain a Content-Type header that indicates the type of data in the body.

Let’s take the operation above that inserts a single journal line and compose a batch request that inserts multiple journal lines in one go. The request body looks like this:

{
	"requests": [
		{
			"method": "POST",
			"id": "r1",
			"url": "companies({{companyId}})/journals({{journalId}})/journalLines",
			"headers": {
				"Content-Type": "application/json"
			},
			"body": {
			    "accountId": "{{accountId}}",
			    "postingDate": "2020-10-20",
			    "documentNumber": "SALARY2020-12",
			    "amount": -3250,
			    "description": "Salary to Bob"
			}
		},
		{
			"method": "POST",
			"id": "r2",
			"url": "companies({{companyId}})/journals({{journalId}})/journalLines",
			"headers": {
				"Content-Type": "application/json"
			},
			"body": {
			    "accountId": "{{accountId}}",
			    "postingDate": "2020-10-20",
			    "documentNumber": "SALARY2020-12",
			    "amount": -3500,
			    "description": "Salary to John"
	        }
		},
        {
			"method": "POST",
			"id": "r3",
			"url": "companies({{companyId}})/journals({{journalId}})/journalLines",
			"headers": {
				"Content-Type": "application/json"
			},
			"body": {
			    "accountId": "{{accountId2}}",
			    "postingDate": "2020-10-20",
			    "documentNumber": "SALARY2020-12",
			    "amount": 6750,
			    "description": "Salaries December 2020"
	        }
		}	
	]
}

As you can imagine, this can be done for inserting multiple customers, items, invoices, etc. The body can even be combined with deep inserts!
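
If you prefer to compose such a batch body in code instead of by hand, here is a rough AL sketch using JsonObject and JsonArray. It assumes a recent AL runtime (procedures returning JsonObject), it reuses the PrepareBatchRequest sketch from above, and the procedure names, BatchUrl, JournalLinesPath and AccountId parameters are placeholders for this example:

procedure PostJournalLineBatch(BatchUrl: Text; JournalLinesPath: Text; AccountId: Text)
var
    Client: HttpClient;
    Content: HttpContent;
    Response: HttpResponseMessage;
    Root: JsonObject;
    Requests: JsonArray;
    BodyText: Text;
    ResponseText: Text;
begin
    // One operation per journal line; the id must be unique within the batch.
    Requests.Add(CreateOperation('r1', JournalLinesPath, AccountId, -3250, 'Salary to Bob'));
    Requests.Add(CreateOperation('r2', JournalLinesPath, AccountId, -3500, 'Salary to John'));
    Root.Add('requests', Requests);
    Root.WriteTo(BodyText);

    // Reuse the header setup shown earlier; it also writes the body into the content.
    PrepareBatchRequest(BodyText, Client, Content);

    // Authorization is omitted here; add a bearer token or basic auth header as needed.
    if Client.Post(BatchUrl, Content, Response) then
        Response.Content.ReadAs(ResponseText);
end;

local procedure CreateOperation(Id: Text; Url: Text; AccountId: Text; Amount: Decimal; Description: Text): JsonObject
var
    Operation: JsonObject;
    Headers: JsonObject;
    Body: JsonObject;
begin
    Headers.Add('Content-Type', 'application/json');

    Body.Add('accountId', AccountId);
    Body.Add('postingDate', '2020-10-20');
    Body.Add('documentNumber', 'SALARY2020-12');
    Body.Add('amount', Amount);
    Body.Add('description', Description);

    Operation.Add('method', 'POST');
    Operation.Add('id', Id);
    Operation.Add('url', Url);
    Operation.Add('headers', Headers);
    Operation.Add('body', Body);
    exit(Operation);
end;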

The response

Just like the request body is a combination of multiple operations, the response body is a combination of the individual results. Let’s take a look at the response body for the batch request example above:

{
    "responses": [
        {
            "id": "r1",
            "status": 201,
            "headers": {
                "location": "https://bcsandbox.docker.local:7048/bc/api/v2.0/companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines(6a9fec9f-6a40-eb11-a853-d0e7bcc597da)",
                "content-type": "application/json; odata.metadata=minimal",
                "odata-version": "4.0"
            },
            "body": {
                "@odata.context": "https://bcsandbox.docker.local:7048/bc/api/v2.0/$metadata#companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines/$entity",
                "@odata.etag": "W/\"JzQ0O0hOcTdrU25sTDNHNnRjTnBqMHNNMm94ZDUwK1JFK0txSmtkc0VQemN6Nmc9MTswMDsn\"",
                "id": "6a9fec9f-6a40-eb11-a853-d0e7bcc597da",
                "journalId": "f91409ba-1d3d-eb11-bb72-000d3a2b9218",
                "journalDisplayName": "DEFAULT",
                "lineNumber": 10000,
                "accountType": "G_x002F_L_x0020_Account",
                "accountId": "ae4110b4-1d3d-eb11-bb72-000d3a2b9218",
                "accountNumber": "60700",
                "postingDate": "2020-10-20",
                "documentNumber": "SALARY2020-12",
                "externalDocumentNumber": "",
                "amount": -3250.00,
                "description": "Salary to Bob",
                "comment": "",
                "taxCode": "NONTAXABLE",
                "balanceAccountType": "G_x002F_L_x0020_Account",
                "balancingAccountId": "00000000-0000-0000-0000-000000000000",
                "balancingAccountNumber": "",
                "lastModifiedDateTime": "2020-12-17T13:20:26.873Z"
            }
        },
        {
            "id": "r2",
            "status": 201,
            "headers": {
                "location": "https://bcsandbox.docker.local:7048/bc/api/v2.0/companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines(6b9fec9f-6a40-eb11-a853-d0e7bcc597da)",
                "content-type": "application/json; odata.metadata=minimal",
                "odata-version": "4.0"
            },
            "body": {
                "@odata.context": "https://bcsandbox.docker.local:7048/bc/api/v2.0/$metadata#companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines/$entity",
                "@odata.etag": "W/\"JzQ0O1cwbTBRYms5SVVjVEMzbzhCckhyc25YMzJ3N2paRGJWUXVyNDlTSGwvcU09MTswMDsn\"",
                "id": "6b9fec9f-6a40-eb11-a853-d0e7bcc597da",
                "journalId": "f91409ba-1d3d-eb11-bb72-000d3a2b9218",
                "journalDisplayName": "DEFAULT",
                "lineNumber": 20000,
                "accountType": "G_x002F_L_x0020_Account",
                "accountId": "ae4110b4-1d3d-eb11-bb72-000d3a2b9218",
                "accountNumber": "60700",
                "postingDate": "2020-10-20",
                "documentNumber": "SALARY2020-12",
                "externalDocumentNumber": "",
                "amount": -3500.00,
                "description": "Salary to John",
                "comment": "",
                "taxCode": "NONTAXABLE",
                "balanceAccountType": "G_x002F_L_x0020_Account",
                "balancingAccountId": "00000000-0000-0000-0000-000000000000",
                "balancingAccountNumber": "",
                "lastModifiedDateTime": "2020-12-17T13:20:26.927Z"
            }
        },
        {
            "id": "r3",
            "status": 201,
            "headers": {
                "location": "https://bcsandbox.docker.local:7048/bc/api/v2.0/companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines(6c9fec9f-6a40-eb11-a853-d0e7bcc597da)",
                "content-type": "application/json; odata.metadata=minimal",
                "odata-version": "4.0"
            },
            "body": {
                "@odata.context": "https://bcsandbox.docker.local:7048/bc/api/v2.0/$metadata#companies(9f161476-1d3d-eb11-bb72-000d3a2b9218)/journals(f91409ba-1d3d-eb11-bb72-000d3a2b9218)/journalLines/$entity",
                "@odata.etag": "W/\"JzQ0O25SN2NHT2s3QklhTVVNUDlwMlp6ZCtkdm12T3ZrUllNdnJ4aHZnbm5yV0U9MTswMDsn\"",
                "id": "6c9fec9f-6a40-eb11-a853-d0e7bcc597da",
                "journalId": "f91409ba-1d3d-eb11-bb72-000d3a2b9218",
                "journalDisplayName": "DEFAULT",
                "lineNumber": 30000,
                "accountType": "G_x002F_L_x0020_Account",
                "accountId": "844110b4-1d3d-eb11-bb72-000d3a2b9218",
                "accountNumber": "20700",
                "postingDate": "2020-10-20",
                "documentNumber": "SALARY2020-12",
                "externalDocumentNumber": "",
                "amount": 6750.00,
                "description": "Salaries December 2020",
                "comment": "",
                "taxCode": "NONTAXABLE",
                "balanceAccountType": "G_x002F_L_x0020_Account",
                "balancingAccountId": "00000000-0000-0000-0000-000000000000",
                "balancingAccountNumber": "",
                "lastModifiedDateTime": "2020-12-17T13:20:26.947Z"
            }
        }
    ]
}

As you can see, each operation has a corresponding result in the response, identified by its id. You should always use the id of the individual operation to find the corresponding result. Don’t assume that the results will always be in the same order as the requests! They may seem to be in the same order, but the OData standard allows the results to be returned in any order.

Each operation result has a status, which is the HTTP status that you would normally get for a single request. It also includes the response headers and response body of the individual operation. The response of the batch request itself will always have status 200 if the server was able to read the batch request. Even if the batch request contains operations that couldn’t be processed because of an error condition, the batch response status will still be 200. So don’t look only at the response status of the batch request; you need to read the response body to find out the result of each operation. Only when the batch request body is malformed or you are not authorized does the whole batch request fail, with status 500 (malformed batch request) or status 401 (not authorized).
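
To illustrate that last point in code, here is a hedged AL sketch that walks through the responses array, looks up each operation by its id and checks its individual status; the procedure name is made up for this example:

procedure CheckBatchResponse(ResponseText: Text)
var
    Root: JsonObject;
    Responses: JsonArray;
    ResponsesToken: JsonToken;
    ResultToken: JsonToken;
    OperationResult: JsonObject;
    IdToken: JsonToken;
    StatusToken: JsonToken;
    Status: Integer;
begin
    // The batch request itself returned 200, so the interesting part is the responses array.
    Root.ReadFrom(ResponseText);
    Root.Get('responses', ResponsesToken);
    Responses := ResponsesToken.AsArray();

    // Walk all results; never rely on them being in the same order as the requests.
    foreach ResultToken in Responses do begin
        OperationResult := ResultToken.AsObject();
        OperationResult.Get('id', IdToken);
        OperationResult.Get('status', StatusToken);
        Status := StatusToken.AsValue().AsInteger();

        // Each operation carries its own HTTP status; only 2xx means success for that operation.
        if (Status < 200) or (Status > 299) then
            Error('Operation %1 failed with status %2', IdToken.AsValue().AsText(), Status);
    end;
end;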

So far it’s really simple, isn’t it? In the next blog posts in this series, I will cover topics like:

  • What happens if an error occurs in one of the requests
  • Process a batch as one big transaction
  • Combine multiple operation types, like POST and GET
  • Define the order of operations
  • Reduce the size of the response payload

Don’t expect 5 additional blog posts… I’m going to combine them of course, as a batch… you get it? 😉
