Saturday, May 14, 2022

Merge Multiple PDF Files into a Single PDF File in Business Central

 Hello Team,

Merging multiple PDF files into a single PDF file is a frequently requested requirement in the Business Central SaaS version.

We have developed a simple API utility that takes any number of Base64-encoded input files, combines them into a single PDF, and returns a single Base64 string for the merged PDF.






Key Features:

- Merge any number of files into a single PDF

- Merge standard attached PDF documents on the master record with the primary document file

- Send the merged PDF file by email

- Save the merged PDF file as a document attachment

Code Snapshot:


 procedure MergeInvoices(var SIH_vRec: Record "Sales Invoice Header")
    var
        ReportSelect_lRec: Record "Report Selections";
        CurrentSIH_lRec: Record "Sales Invoice Header";
        Base64Convert: Codeunit "Base64 Convert";
        RecRef: RecordRef;
        Base64String: Text;
        Win_gDlg: Dialog;
        Curr_gInt: Integer;
    begin
        Win_gDlg.Open('Total #1#############\Current #2##############');
        Win_gDlg.Update(1, SIH_vRec.Count());

        // Find the report configured for posted sales invoices
        ReportSelect_lRec.Reset();
        ReportSelect_lRec.SetRange(Usage, ReportSelect_lRec.Usage::"S.Invoice");
        ReportSelect_lRec.FindFirst();
        ReportSelect_lRec.TestField("Report ID");

        if SIH_vRec.FindSet() then
            repeat
                Curr_gInt += 1;
                Win_gDlg.Update(2, Curr_gInt);

                // Build a RecordRef filtered down to the current invoice
                CurrentSIH_lRec.Get(SIH_vRec."No.");
                CurrentSIH_lRec.SetRange("No.", SIH_vRec."No.");
                RecRef.GetTable(CurrentSIH_lRec);
                RecRef.SetView(CurrentSIH_lRec.GetView());

                // Render the invoice to PDF and convert it to Base64
                Clear(Rpt_TempBlob);
                Clear(Rpt_OutStream);
                Clear(Rpt_InStream);
                Rpt_TempBlob.CreateOutStream(Rpt_OutStream);
                Report.SaveAs(ReportSelect_lRec."Report ID", '', ReportFormat::Pdf, Rpt_OutStream, RecRef); // filters come from RecRef
                Rpt_TempBlob.CreateInStream(Rpt_InStream);
                Base64String := Base64Convert.ToBase64(Rpt_InStream);

                // Add the file to the request array: { "FileBase64": "..." }
                Clear(JObject);
                JObject.Add('FileBase64', Base64String);
                ObjectJArray.Add(JObject);
            until SIH_vRec.Next() = 0;
        Win_gDlg.Close();

        CallPDFMergeAPI();
    end;

 local procedure CallPDFMergeAPI()
    var
        Base64Convert: Codeunit "Base64 Convert";
        DoubleQuote_Lbl: Label '"';
        lText: Text;
        MergeResponse: Text;
        MergeFileName_lTxt: Text;
        gHttpClient: HttpClient;
        MergeContent: HttpContent;
        lHeaders: HttpHeaders;
        gReqMsg: HttpRequestMessage;
        gResponseMsg: HttpResponseMessage;
    begin
        // Request body: { "Files": [ { "FileBase64": "..." }, ... ] }
        OrderObject.Add('Files', ObjectJArray);
        OrderObject.WriteTo(lText);

        MergeContent.WriteFrom(lText);

        Clear(gReqMsg);
        Clear(gHttpClient);
        gReqMsg.SetRequestUri('<API link here>'); // placeholder: your PDF merge API endpoint
        MergeContent.GetHeaders(lHeaders);
        lHeaders.Remove('Content-Type');
        lHeaders.Add('Content-Type', 'application/json');
        gReqMsg.Content(MergeContent);
        gReqMsg.Method := 'POST';
        if not gHttpClient.Send(gReqMsg, gResponseMsg) then
            Error('PDF merge API request failed.');

        // The API returns the merged PDF as a quoted Base64 string; strip the quotes
        gResponseMsg.Content().ReadAs(MergeResponse);
        MergeResponse := DelChr(MergeResponse, '<>', DoubleQuote_Lbl);

        // Decode the Base64 payload and offer the merged PDF for download
        Clear(Rpt_TempBlob);
        Rpt_TempBlob.CreateOutStream(Rpt_OutStream);
        Base64Convert.FromBase64(MergeResponse, Rpt_OutStream);
        Rpt_TempBlob.CreateInStream(Rpt_InStream);
        MergeFileName_lTxt := 'MergeFile_' + Format(CurrentDateTime, 0, '<Day,2>_<Month,2>_<Year4>_<Hours24>_<Minutes,2>_<Seconds,2>') + '_' + Format(Random(100)) + '.pdf';
        DownloadFromStream(Rpt_InStream, '', '', '', MergeFileName_lTxt);
    end;

    var
        Rpt_TempBlob: Codeunit "Temp Blob";
        OrderObject: JsonObject;
        ObjectJArray: JsonArray;
        JObject: JsonObject;
        Rpt_InStream: InStream;
        Rpt_OutStream: OutStream;

Done...

A very simple and useful solution.

If you need this utility, please contact Intech Systems.

Thank you for reading...

Keep Sharing...Keep Growing...

Friday, May 13, 2022

API-Based Cloud Printing in Business Central SaaS Version

Hello Friends,

Allowing users to print directly from BC, the mobile app, or the tablet app has long been in demand in the Business Central cloud version.

This is only possible using a cloud printing API.

We have developed an advanced cloud printing API utility that prints any report with a single click from Business Central, on any version.









procedure GetBase64_DS(ReportID: Integer; RecRef: RecordRef)
    var
        APISetup_lRec: Record "Cloud Print API Setup";
        ReportPrinterSetup: Record "Report Printer Setup";
        Base64Convert: Codeunit "Base64 Convert";
        Rpt_TempBlob: Codeunit "Temp Blob";
        gHttpClient: HttpClient;
        gContent: HttpContent;
        lHeaders: HttpHeaders;
        gReqMsg: HttpRequestMessage;
        gResponseMsg: HttpResponseMessage;
        Rpt_InStream: InStream;
        DuplexPrintOption_lInt: Integer;
        ReqJObject: JsonObject;
        Rpt_OutStream: OutStream;
        Base64String: Text;
        JSONResponse: Text;
        lText: Text;
        lUrl: Text;
        PDFCMDBase64_lTxt: Text;
        PrintCmdText: Text;
        PrinterName_lTxt: Text;
        ReportFileName_lTxt: Text;
    begin
        APISetup_lRec.Get();
        APISetup_lRec.TestField("Exe Path");
        APISetup_lRec.TestField("Folder Path");
        APISetup_lRec.TestField("Print API");

        // Resolve the printer for this user/report; fall back to the user default (Report ID = 0)
        ReportPrinterSetup.Reset();
        ReportPrinterSetup.SetRange("User ID", UserId);
        ReportPrinterSetup.SetRange("Report ID", ReportID);
        if ReportPrinterSetup.FindFirst() then begin
            DuplexPrintOption_lInt := ReportPrinterSetup."Print Duplex Mode" + 1;
            PrinterName_lTxt := ReportPrinterSetup."Printer Name";
        end else begin
            ReportPrinterSetup.SetRange("Report ID", 0);
            if ReportPrinterSetup.FindFirst() then begin
                DuplexPrintOption_lInt := ReportPrinterSetup."Print Duplex Mode" + 1;
                PrinterName_lTxt := ReportPrinterSetup."Printer Name";
            end;
        end;

        // Render the report to PDF and convert it to Base64
        Rpt_TempBlob.CreateOutStream(Rpt_OutStream);
        Report.SaveAs(ReportID, '', ReportFormat::Pdf, Rpt_OutStream, RecRef); // filters come from RecRef
        Rpt_TempBlob.CreateInStream(Rpt_InStream);
        Base64String := Base64Convert.ToBase64(Rpt_InStream);

        ReportFileName_lTxt := 'Inv_' + Format(CurrentDateTime, 0, '<Day,2>_<Month,2>_<Year4>_<Hours24>_<Minutes,2>_<Seconds,2>') + '_' + Format(Random(100)) + '.pdf';

        // Build the command line for the print agent executable
        PrintCmdText := StrSubstNo('%1 -duplex %2 -printer "%3" %4%5',
            APISetup_lRec."Exe Path", DuplexPrintOption_lInt, PrinterName_lTxt,
            APISetup_lRec."Folder Path", ReportFileName_lTxt);
        PDFCMDBase64_lTxt := Base64Convert.ToBase64(PrintCmdText);

        ReqJObject.Add('ReportBase64', Base64String);
        ReqJObject.Add('PrintCMDCommand', PDFCMDBase64_lTxt);
        ReqJObject.Add('FileName', ReportFileName_lTxt);
        ReqJObject.Add('FolderPath', Base64Convert.ToBase64(APISetup_lRec."Folder Path"));
        ReqJObject.Add('Exepath', Base64Convert.ToBase64(APISetup_lRec."Exe Path"));
        ReqJObject.WriteTo(lText);

        lUrl := APISetup_lRec."Print API";

        gContent.WriteFrom(lText);
        gReqMsg.SetRequestUri(lUrl);
        gContent.GetHeaders(lHeaders);
        lHeaders.Remove('Content-Type');
        lHeaders.Add('Content-Type', 'application/json');
        gReqMsg.Content(gContent);
        gReqMsg.Method := 'POST';
        if not gHttpClient.Send(gReqMsg, gResponseMsg) then
            Error('Cloud print API request failed.');

        JSONResponse := '';
        gResponseMsg.Content().ReadAs(JSONResponse);

        if StrPos(JSONResponse, 'An error has occurred') <> 0 then
            Error('Error %1 \\ PDF Command (in Base64) %2', JSONResponse, PDFCMDBase64_lTxt);

        if StrPos(JSONResponse, 'Time of printing is:') <> 0 then
            Message('Document printed successfully.')
        else
            Message(JSONResponse);
    end;

Done...

This prints any report with a single click; users can set up a default printer name in Business Central.

The utility also provides advanced print options such as simplex/duplex printing and paper source selection.

A new revolution in printing for cloud applications!

If you are interested in purchasing this utility, please contact Intech Systems.

Thank you for reading....

Keep Sharing...Keep Growing...


Physical Token-Based Digital Signature with Business Central SaaS Version

Hello Friends,

Digital signatures are a frequently requested feature in the Business Central SaaS version.

Comparatively, this was easy to develop for BC on-premises or NAV versions, where we have full access to the local system.

For any cloud application, it is quite challenging.

At Intech, we did the development using a Windows application built in .NET.

We developed a very useful utility that connects to BC through APIs and provides the facility for digital signatures.

Key Features of the Digital Signature Utility

  • Sign Single Document
  • Bulk Sign Documents
  • Sign and attach signed document in ERP
  • Sign and Email to Customer/Vendor
  • Sign and attach to ERP & Email to Customer/Vendor

Snapshot of Utility Developed:




If you are interested in this solution, please connect with Intech Systems.
Thank you.


Thursday, May 5, 2022

AL Extension - Code Optimization for BC Development

 Hello Friends,

Read about ways to tune performance when developing for Business Central.

Read complete blog at - https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/performance/performance-developer


Writing efficient pages

There are many patterns that a developer can use to get a page to load faster. Consider the following patterns:

  • Avoid unnecessary recalculation
  • Do less
  • Offloading the UI thread

Pattern - Avoid unnecessary recalculation

To avoid unnecessary recalculation of expensive results, consider caching the data and refreshing the cache regularly. Let's say you want to show the top five open sales orders or a VIP customers list on the role center. The content of such a list probably doesn't change significantly every hour. There's no need to calculate it from raw data every time the page is loaded. Instead, create a table that can contain the calculated data, and refresh it every hour or day using a background job, as sketched below.
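
Here's a minimal sketch of that pattern, assuming a hypothetical custom table "Top Sales Order Cache" and illustrative object numbers:

codeunit 50100 "Refresh Top Order Cache"
{
    // Schedule this codeunit from a Job Queue Entry (hourly or daily) so the
    // role center reads precalculated rows instead of recalculating on load.
    trigger OnRun()
    var
        SalesHeader: Record "Sales Header";
        TopOrderCache: Record "Top Sales Order Cache"; // hypothetical cache table
    begin
        TopOrderCache.DeleteAll();
        SalesHeader.SetRange("Document Type", SalesHeader."Document Type"::Order);
        SalesHeader.SetCurrentKey("Amount Including VAT"); // sorting works on any field; an index helps
        SalesHeader.Ascending(false);
        if SalesHeader.FindSet() then
            repeat
                TopOrderCache.Init();
                TopOrderCache."Document No." := SalesHeader."No.";
                TopOrderCache.Amount := SalesHeader."Amount Including VAT";
                TopOrderCache.Insert();
            until (SalesHeader.Next() = 0) or (TopOrderCache.Count() >= 5);
    end;
}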

Another example of unexpected recalculation is when using query objects. In contrast to using the record API, query results aren't cached in the primary key cache in the Business Central server. Any use of a query object will always go to the database. So, sometimes it's faster to not use a query object.

Pattern - Do less

One way to speed up things is to reduce the amount of work that the system must do. For example, to reduce slowness of role centers, consider how many page parts are needed for the user. Another benefit of a simple page with few UI elements can also be ease of use and navigation.

Remove calculated fields from lists if they aren't needed, especially on larger tables. Setting the field's Enabled or Visible properties to false is not enough. The field definition needs to be removed from the page or page extension definition. Also, if indexing is inadequate, calculated fields can significantly slow down a list page.

Consider creating dedicated lookup pages instead of the normal pages when adding a lookup (the one that looks like a dropdown) from a field. Default list pages will run all triggers and fact boxes even if they aren't shown in the lookup. For example, Business Central 2019 release wave 1 added dedicated lookup pages for Customer, Vendor, and Item to the Base Application.
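
A minimal sketch of such a dedicated lookup page (hypothetical object number); point the table's or field's LookupPageId at it so the dropdown doesn't pay for the full list page:

page 50101 "Customer Lookup (Light)"
{
    PageType = List;
    SourceTable = Customer;
    Editable = false;
    // No parts, fact boxes, or triggers: nothing extra runs when the lookup opens
    layout
    {
        area(Content)
        {
            repeater(Group)
            {
                field("No."; Rec."No.") { ApplicationArea = All; }
                field(Name; Rec.Name) { ApplicationArea = All; }
            }
        }
    }
}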

Pattern - Offloading the UI thread

To get to a responsive UI fast, consider using Page Background Tasks for calculated values, for example, the values shown in cues.

For more information about Page Background Tasks, see Page Background Tasks.
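
A minimal sketch, assuming hypothetical object numbers: a codeunit computes the cue value in a read-only background session, and the page picks the result up in a trigger:

codeunit 50102 "Calc Open Orders"
{
    trigger OnRun()
    var
        SalesHeader: Record "Sales Header";
        Results: Dictionary of [Text, Text];
    begin
        SalesHeader.SetRange("Document Type", SalesHeader."Document Type"::Order);
        Results.Add('OpenOrders', Format(SalesHeader.Count()));
        Page.SetBackgroundTaskResult(Results);
    end;
}

// On the page that shows the cue (fragment; TaskId and OpenOrders are page variables):
trigger OnAfterGetCurrRecord()
var
    TaskParameters: Dictionary of [Text, Text];
begin
    CurrPage.EnqueueBackgroundTask(TaskId, Codeunit::"Calc Open Orders", TaskParameters);
end;

trigger OnPageBackgroundTaskCompleted(CompletedTaskId: Integer; Results: Dictionary of [Text, Text])
begin
    if CompletedTaskId = TaskId then
        Evaluate(OpenOrders, Results.Get('OpenOrders'));
end;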

Making Edit-in-Excel faster

The Edit in Excel feature uses UI pages exposed through OData. This means that triggers need to be run for all records returned from the Business Central server to Excel. As a developer, you need to make your AL code conditional on the ClientType. Specifically, avoid updating fact boxes, avoid calculation, and avoid defaulting logic.
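
For example, as a page extension (sketch, hypothetical object number):

pageextension 50103 "Customer Card Perf" extends "Customer Card"
{
    trigger OnAfterGetRecord()
    begin
        // Skip fact box updates, calculations, and defaulting when records
        // are being served to Edit in Excel or other web service clients.
        if CurrentClientType() in [ClientType::ODataV4, ClientType::Api] then
            exit;
        // ...UI-only logic here...
    end;
}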

Writing efficient Web Services

Business Central supports web services to make it easier to integrate with external systems. As a developer, you need to think about the performance of web services, both as seen from the Business Central server (the endpoint) and as seen from the consumer (the client).

Endpoint performance

Anti-patterns (don't do this)

Avoid using standard UI pages to expose as web service endpoints. Many things, such as fact boxes, are not returned in web service results, but will use resources to prepare.

Things that have historically caused performance issues on pages that are exposed as endpoints are:

  • Heavy logic in OnAfterGetCurrRecord or OnAfterGetRecord
  • Many SIFT fields
  • FactBoxes

Avoid exposing calculated fields, because calculated fields are expensive. Try to move them to a separate page or to refactor the code so the value is stored on the physical table (if applicable). Complex types are also a performance hit because they take time to calculate.

Don't use temp tables as a source if you have many records. Temp tables that are based on APIs are a performance hit. The server has to fetch and insert every record, and there's no caching on data in temp tables. Paging becomes difficult to do in a performant manner. A rule of thumb is if you have more than 100 records, don't use temp tables.

Don't insert child records belonging to same parent in parallel. This condition causes locks on parent and Integration Record tables because parallel calls try to update the same parent record. The solution is to wait for the first call to finish or use $batch, which will make sure calls get executed one after another.

Do not use a deprecated protocol such as SOAP. Instead, utilize newer technology stacks such as OData, or preferably API pages/queries. The latter are up to 10 times faster than using the SOAP protocol. One way to migrate from SOAP towards OData is to utilize OData unbound actions. For more information, see Creating and Interacting with an OData V4 Unbound Action.
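
A sketch of an unbound action (hypothetical object number): once the codeunit is published as a web service, the procedure can be called with a POST to the ODataV4 endpoint:

codeunit 50104 "Sales Stats Service"
{
    [ServiceEnabled]
    procedure GetOpenOrderCount(): Integer
    var
        SalesHeader: Record "Sales Header";
    begin
        SalesHeader.SetRange("Document Type", SalesHeader."Document Type"::Order);
        exit(SalesHeader.Count());
    end;
}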

Performance patterns (do this)

  • Instead of exposing UI pages as web service endpoints, use the API pages or API queries because they've been optimized for this scenario. Select the highest API version available. Don't use the beta version of the API pages. To read more about API pages, see API Page Type.

  • If you do expose UI pages as web service endpoints, note that triggers need to be run for all records returned from the Business Central server. As a developer, you need to make your AL code conditional on the ClientType. Specifically, avoid updating FactBoxes, avoid calculation, and avoid defaulting logic.

  • The choice of protocol (SOAP, OData, or APIs) for the endpoint can have a significant impact on performance. Favor OData version 4 or APIs for the best performance. It is possible to expose procedures in a codeunit as an OData end point using unbound actions. To read more about OData unbound actions, see Creating and Interacting with an OData V4 Unbound Action.

  • If you want OData endpoints that work as data readers (like for consumption in Power BI), consider using API queries and set DataAccessIntent = ReadOnly. For more information, see API Query Type and DataAccessIntent Property.
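
As an illustration of the last point, a read-only API query could look like this sketch (object number, publisher, and entity names are illustrative):

query 50105 "Sales Invoice Totals"
{
    QueryType = API;
    APIPublisher = 'contoso';
    APIGroup = 'reporting';
    APIVersion = 'v1.0';
    EntityName = 'salesInvoiceTotal';
    EntitySetName = 'salesInvoiceTotals';
    DataAccessIntent = ReadOnly; // served from the read-only database replica

    elements
    {
        dataitem(SalesInvoiceHeader; "Sales Invoice Header")
        {
            column(documentNo; "No.") { }
            column(customerNo; "Sell-to Customer No.") { }
            column(amount; Amount) { }
        }
    }
}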

OData Performance patterns

When calling OData web services, there are a number of strategies that you can use to speed up your queries:

  • Limiting the set ($filter or $top) if you're using an expensive $expand statement
  • Using OData transaction $batch
  • Using Data Access Intent Read-only with OData

For more details about OData query performance, see OData Query Performance.

How to handle large volumes of web service calls

When integrating to Business Central from external systems using web services, it is important to understand the operational limits for the Business Central servers that host the web service endpoints being called. To ensure that excessive traffic doesn't cause stability and performance issues for all users, the online version of Business Central server has set up throttling limits on web service endpoints.

Make sure that your external application can handle the two HTTP status codes 429 (Too Many Requests) and 504 (Gateway Timeout).

  • Handling status code 429 requires the client to adopt a retry logic while providing a cool off period. You can apply different strategies, like:

    • Regular interval retry
    • Incremental intervals retry
    • Exponential back-off
    • Randomization
  • Handling status code 504 - Gateway Timeout requires the client to refactor the long-running request to execute within the time limit by splitting the request into multiple requests. Then, deal with potential 429 codes by applying a back-off strategy.

For more about web service limits, see Working with API limits in Dynamics 365 Business Central.

The same advice applies for outgoing web service calls using the AL module HttpClient. Make sure your AL code can handle slow response times, throttling, and failures in external services that you integrate with.
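
A sketch of a back-off retry for outgoing calls (the attempt count and delays are illustrative; depending on the content, the request message may need to be rebuilt between attempts):

codeunit 50106 "Http Retry Helper"
{
    procedure SendWithBackoff(var Client: HttpClient; var Request: HttpRequestMessage; var Response: HttpResponseMessage): Boolean
    var
        Attempt: Integer;
        DelayMs: Integer;
    begin
        DelayMs := 1000; // initial cool-off period
        for Attempt := 1 to 5 do begin
            if Client.Send(Request, Response) then
                if Response.HttpStatusCode() <> 429 then
                    exit(true); // non-throttled result (success or other error) is handled upstream
            Sleep(DelayMs);
            DelayMs := DelayMs * 2; // exponential back-off
        end;
        exit(false);
    end;
}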

Writing efficient reports

Reports generally fall into two categories. They can be specific to a single instance of an entity, like an invoice. Or, they can be of a more analytical nature that joins data from multiple instances of multiple entities. Typically, performance issues in reports lie in the latter category. The following articles contain advice about implementing faster reports:

  • Use Read Scale-Out to read data from a read-only copy of the database, see Using Read Scale-Out for Better Performance for more information.
  • Use Partial Records to reduce the data loaded from the database, see Using Partial Records for more information.
  • Use AL queries to optimize the way data is read from the database, see Queries in Business Central for more information.
  • Compared to Word layouts, RDL layouts can result in slower performance with document reports, especially for actions related to the user interface (like sending emails). For more information, see Creating an RDL Layout Report.


 Tip

From the Business Central client, you can export report results as raw data to a Microsoft Excel file. The file contains all columns of the dataset, but without the layout applied. Use the file to help validate that the report returns the expected data, and to ensure that the report layout controls match the dataset value types. To export a report, run the report and select the Send to > Microsoft Excel Document (data only) on the request page. For more information, see Working with Reports - Send to Excel.

Efficient extracts to data lakes or data warehouses

When establishing a data lake or a data warehouse, you typically need to do two types of data extraction:

  1. A historical load (all data from a given point-in-time)
  2. Delta loads (what's changed since the historical load)

The fastest (and least disruptive) way to get a historical load from Business Central online is to get a database export as a BACPAC file (using the Business Central admin center) and restore it in Azure SQL Database or on a SQL Server. For on-premises installations, you can just take a backup of the tenant database.

The fastest (and least disruptive) way to get delta loads from Business Central online is to set up API queries configured with Read Scale-Out, and to filter on the data audit field LastModifiedOn (introduced in version 17.0).

AL performance patterns

Knowledge about different AL performance patterns can greatly improve the performance of the code you write. In this section, we'll describe the following patterns and their impact on performance.

Pattern - Use built-in data structures

AL comes with built-in data structures that have been optimized for performance and server resource consumption. Make sure that you're familiar with them to make your AL code as efficient as possible.

When working with strings, make sure to use the TextBuilder data type rather than repeated use of the += operator on a Text variable. General guidance is to use a Text data type if you concatenate fewer than five strings (here, the internal allocation of a TextBuilder and the final ToText invocation are more expensive). If you need to concatenate five or more strings, or concatenate strings in a loop, then TextBuilder is faster. Also, use a TextBuilder data type instead of BigText when possible. For more information, see TextBuilder Data Type.
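
A small sketch of loop concatenation with TextBuilder (hypothetical object number):

codeunit 50107 "TextBuilder Demo"
{
    procedure BuildItemCsv(): Text
    var
        Item: Record Item;
        Builder: TextBuilder;
    begin
        if Item.FindSet() then
            repeat
                Builder.Append(Item."No.");
                Builder.Append(',');
            until Item.Next() = 0;
        exit(Builder.ToText()); // one final allocation instead of one per +=
    end;
}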

If you need a key-value data structure that is optimized for fast lookups, use a Dictionary data type. For more information, see Dictionary Data Type.

Use a List data type if you need an unbound "array" (where you would previously create a temporary table object). For more information, see List Data Type.
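
A quick sketch of both data types (values are illustrative):

procedure CollectionsDemo()
var
    BalanceByCustomer: Dictionary of [Code[20], Decimal];
    DocumentNos: List of [Code[20]];
    Balance: Decimal;
begin
    // Dictionary: fast lookups by key
    BalanceByCustomer.Set('10000', 1250.0);
    if BalanceByCustomer.Get('10000', Balance) then
        Message('Balance: %1', Balance);

    // List: unbound array, replacing a temporary table
    DocumentNos.Add('PSI-0001');
    DocumentNos.Add('PSI-0002');
    if DocumentNos.Contains('PSI-0001') then
        Message('%1 documents collected', DocumentNos.Count());
end;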

Use the Media or MediaSet data types instead of the Blob data type. The Media and MediaSet data types have a couple of advantages over the Blob data type when working with images. First, a thumbnail version of the image is generated when you save the data. You can use the thumbnail when loading a page, and then load the larger image asynchronously using a page background task. Second, data for Media and MediaSet data types is cached on the client. Data for the Blob data type is never cached on the server; it's always fetched from the database.

Pattern - Run async (and parallelize)

It's often desirable to offload AL execution from the UI thread to a background session.

Here are some examples of this pattern:

  • Don't let the user wait for batches
  • Split large tasks into smaller tasks and run them in parallel

There are many different ways to spin up a new task. They come with different characteristics, as described below:

  • Page Background Task: can (will) be canceled; read-only; calls back to the parent session; lightweight.
  • StartSession: created immediately; runs on the same server; not as controlled as a Page Background Task.
  • Task: queued up; any server in a cluster can start it; survives server restarts; no logging.
  • Job queue: scheduled; supports recurrence; any server in a cluster can start it; survives server restarts; logs results.

Pattern - Use set-based methods instead of looping

AL methods such as FindSet, CalcFields, CalcSums, and SetAutoCalcFields are examples of set-based operations that are much faster than looping over a result set and doing the calculation for each row.

One common use of the CalcSums method is to efficiently calculate totals.
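
For example, totaling posted G/L entries for an account with one server-side SUM (sketch):

procedure AccountTotal(GLAccountNo: Code[20]): Decimal
var
    GLEntry: Record "G/L Entry";
begin
    GLEntry.SetRange("G/L Account No.", GLAccountNo);
    GLEntry.CalcSums(Amount); // one SQL SUM (SIFT-backed) instead of a FindSet loop
    exit(GLEntry.Amount);
end;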

Try to minimize work done in the OnAfterGetRecord trigger code. Common performance coding patterns in this trigger are:

  • Avoiding CalcFields calls. Defer them until the end.
  • Avoiding CurrPage.Update() calls.
  • Avoiding repeated calculations. Move them outside the loop, if possible.
  • Avoid changing filters. This pattern requires the server to throw away the result set.
  • Never do any database writes here. With more than one user on the system, this will give database locking issues and even deadlock errors.

Consider using a query object if you want to use a set-based coding paradigm. There are pros and cons for using query objects:

Pros:
  • Bypasses the AL record API, where the server reads all fields.
  • With a covering index, you can get fast read performance for tables with many fields.
  • Can join multiple tables.

Cons:
  • Query object result sets aren't cached in the server's primary key (data) cache.
  • No writes are allowed.
  • You can't add a page on a query object.


Pattern - Use partial records when looping over data, in reports, or when table extension fields aren't needed

When writing AL code for which the fields needed on a record or RecordRef are known, you can use the partial records capability to load only those fields initially. The remaining fields are still accessible, but they'll be loaded as needed.

Partial records improve performance in two major ways. First, they limit the fields that need to be loaded from the database. Loading more fields leads to more data being read, sent over the connection, and created on the record. Second, partial records limit the number of table extensions that need to be joined.

The performance gains compound when looping over many records, because both effects scale with the number of rows loaded.

For more information, see Using Partial Records.
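
A minimal sketch of the partial records API:

procedure ListCustomerCities()
var
    Customer: Record Customer;
begin
    Customer.SetLoadFields(Name, City); // only these columns are selected up front
    if Customer.FindSet() then
        repeat
            Message('%1: %2', Customer.Name, Customer.City);
        until Customer.Next() = 0;
    // other fields remain accessible and are loaded just-in-time if touched
end;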

Table extension impact on performance

Table extensions are separate tables in the database and therefore need to be joined together in the data stack when accessed through a record. Because table extensions are stored individually, the number of joins necessary grows with the number of table extensions extending a table. Together with the current inability to define indexes that span base and extension fields, this means you should avoid splitting your code into too many table extensions.

With tables that are central to the application, such as General Ledger Entry (G/L Entry), be extra cautious about adding table extensions, since these tables are frequently used throughout the application.

The adverse effects of many table extensions can be mitigated by applying partial records; see Using Partial Records. However, because the developer may not have ownership of all executed code, and therefore can't apply partial records everywhere, the above recommendation still stands.

An alternative approach when doing data modeling for extending a table with new fields is to use a related table and define a FlowField on the base table.

Here are the pros and cons of the two data models:

  • Table extension: fields can be added to lists and are searchable; always loaded with the base table; expensive at runtime but easy to use; use only for critical fields.
  • Related table: table relations need to be set up; dedicated page for editing; requires a FlowField to be shown in lists; doesn't affect performance of the base table; excellent for FactBoxes.

Limit your event subscriptions

The following are best practices for getting performant events:

  • There's no significant cost of having a publisher defined.
  • Static automatic has a cost over manually binding (there's an overhead of creating and disposing objects).
  • Codeunit size of the subscriber matters. Try to have smaller codeunits.
  • Use single instance codeunits for subscribers, if possible.

Table events change the behavior of SQL optimizations on the Business Central Server in the following ways:

  • The Business Central Server will issue SQL update/delete statements row by row in a loop rather than as one SQL statement.
  • They force ModifyAll and DeleteAll methods, which normally perform bulk SQL operations, to perform single-row operations instead.

Efficient data access

Many performance issues are related to how data is defined, accessed, and modified. It's important to know how concepts in AL metadata and the AL language translate to their counterparts in SQL.

Tables and keys

Many performance issues can be traced back to missing indexes (also called keys in Business Central), but index design is often not a key skill for AL developers. For best performance, even with large amounts of data, it's imperative to design appropriate indexes according to the way your code will access data.


Indexes have a cost to update, so it's recommended to not add too many of them on a table.

Using data audit fields to only read recent data

Every table in Business Central includes the following two system fields, which can be used for filtering records:

  • SystemCreatedAt
  • SystemModifiedAt

One example is to use the system field SystemModifiedAt to implement delta reads. For more information about system fields, see System Fields.
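
A delta-read sketch (how you store LastSyncedAt is up to your integration; the parameter here is an assumption):

procedure ReadChangedItems(LastSyncedAt: DateTime)
var
    Item: Record Item;
begin
    Item.SetFilter(SystemModifiedAt, '>%1', LastSyncedAt);
    if Item.FindSet() then
        repeat
            // hand each changed record to the extract/export logic
        until Item.Next() = 0;
end;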

Non-clustered Columnstore Indexes (NCCI)

Starting in the 2021 release wave 2 of Business Central, non-clustered columnstore indexes (sometimes referred to as NCCIs) are supported on tables.

You can use a non-clustered columnstore index to efficiently run real-time operational analytics on the Business Central database without the need to define SIFT indexes up front (and without the locking issues that SIFT indexes sometimes impose on the system.)


SumIndexField Technology (SIFT)

SumIndexField Technology (SIFT) lets you quickly calculate the sums of numeric data type columns in tables, even in tables with thousands of records. The supported data types include Decimal, Integer, BigInteger, and Duration. SIFT optimizes the performance of FlowFields and query results in a Business Central application.

Ensure appropriate SIFT indices for all FlowFields of type sum or count.
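
As a sketch, a SIFT index is declared on a table key with SumIndexFields (hypothetical table and object number):

table 50108 "Simple Ledger"
{
    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; "Account No."; Code[20]) { }
        field(3; Amount; Decimal) { }
    }
    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
        // SIFT index: sum FlowFields and CalcSums over Amount per account stay fast
        key(AccountSift; "Account No.") { SumIndexFields = Amount; }
    }
}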


The following article can help you find missing SIFT indexes on FlowFields:

Troubleshooting: Long Running SQL Queries Involving FlowFields by Disabling SmartSQL.

How AL relates to SQL

The AL programming language, to some degree, hides how data is read and written to the database. To effectively code for performance, you need to know how AL statements translate to the equivalent SQL statements.


How to get insights into how AL translates to SQL

If you want to track how Business Central Server translates AL statements to SQL statements, use either database statistics in the AL debugger or telemetry on long running queries.


How to reduce database locking

Sometimes, performance issues are not due to resource starvation, but due to processes waiting for other processes to release locks on shared objects. When AL code needs to update data, it is customary to take a database lock on it to ensure that other processes do not change the data at the same time.

When you use the Record.LockTable method, the WITH (UPDLOCK) hint is applied to all subsequent calls to the database until the transaction is committed, not only to calls on the table that the record variable is defined on. Hence, it's good practice to defer the Record.LockTable call as late as possible in your AL code, to make sure that only the data that is in scope for being updated gets locked.
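
A sketch of deferring the lock:

procedure AdjustUnitPrice(ItemNo: Code[20]; NewPrice: Decimal)
var
    Item: Record Item;
begin
    // do validations and other read-only work first, without UPDLOCK hints
    Item.LockTable(); // take the lock as late as possible
    Item.Get(ItemNo);
    Item.Validate("Unit Price", NewPrice);
    Item.Modify(true);
end;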


Database locking caused by web service calls

Do not insert child records belonging to the same parent record in parallel. This condition causes locks on both the parent table and the integration record table because parallel calls try to update the same parent record. The solution is to wait for the first call to finish or use OData $batch, which will make sure calls get run one after another.

Non-blocking number sequences

If you need a fast, non-blocking number sequence that can be used from AL, refer to the number sequence object type. Use a number sequence object if you:

  • Don't want to use a number series
  • Can accept holes in the number range

For more information, see NumberSequence Data Type.
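
A minimal sketch (the sequence name is illustrative):

procedure NextLogEntryNo(): BigInteger
begin
    if not NumberSequence.Exists('MYAPP-LOG') then
        NumberSequence.Insert('MYAPP-LOG', 1, 1); // start at 1, increment by 1
    exit(NumberSequence.Next('MYAPP-LOG')); // non-blocking; gaps are possible
end;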

Analyzing database locks

There are two tools that you can use to analyze database locks happening in the environment: the Database Locks page, and database lock timeout telemetry.

The Database Locks page gives a snapshot of all current database locks in SQL Server. It provides information like the table and database resource affected by the lock, and sometimes also the AL object or method that ran the transaction that caused the lock. These details can help you better understand the locking condition.

Database lock timeout telemetry gathers information about database locks that have timed out. The telemetry data allows you to troubleshoot what caused these locks.


Using Read-Scale Out

Business Central supports the Read Scale-Out feature in Azure SQL Database and SQL Server. Read Scale-Out is used to load-balance analytical workloads in the database that only read data. Read Scale-Out is built in as part of Business Central online, but it can also be enabled for on-premises.

Read Scale-Out applies to queries, reports, and API pages. Instead of sharing the primary database, these objects can be set up to run against a read-only replica. This setup essentially isolates them from the main read-write workload so that they won't affect the performance of business processes.

As a developer, you control Read Scale-Out on report, API page, and query objects by using the DataAccessIntent property. For more information, see Using Read Scale-Out for Better Performance.

Testing and validating performance

It's imperative to test and validate a Business Central project before deploying it to production. In this section, you find resources on how to analyze and troubleshoot performance issues and guidance on how to validate performance of a system.

Performance Unit Testing

You can use the SessionInformation data type in unit tests that track the number of SQL statements or rows read. Use it before and after the code to be tested. Then, have assert statements that check for normal behavior.

For more information, see SessionInformation Data Type.
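
A test-method sketch (the threshold is illustrative, not a prescribed limit):

[Test]
procedure ScenarioReadsAReasonableNumberOfRows()
var
    RowsBefore: BigInteger;
    RowsRead: BigInteger;
begin
    RowsBefore := SessionInformation.SqlRowsRead;
    // ...run the code under test here...
    RowsRead := SessionInformation.SqlRowsRead - RowsBefore;
    if RowsRead > 1000 then
        Error('Scenario read %1 rows; expected at most 1000.', RowsRead);
end;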

Performance Scenario and Regression Testing

Use the Performance Toolkit to simulate the amount of resources that customers use in realistic scenarios to compare performance between builds of their solutions.

The Performance Toolkit helps answer questions such as, "Does my solution for Business Central support X number of users doing this, that, and the other thing at the same time?"

For more information, see The Performance Toolkit Extension.

 Note

To test insert/update performance, make sure to uninstall the test framework first. If the test framework is installed, insert/update statements can't use bulk mode and will instead run row by row.

Performance Throughput Analysis

The Performance Toolkit doesn't answer questions such as, "How many orders can Business Central process per hour?" For this kind of analysis, test the time to execute key scenarios using the Performance Toolkit, and then use the guidance on Operational Limits for Business Central Online. For advanced analysis, consider using a queueing model such as an M/M/1 queue to answer whether the system can process the workload you intend.

Performance telemetry

The following performance telemetry is available in Azure Application Insights (if that has been configured for the environment):

  • Database locks
  • Long Running AL operations
  • Long Running SQL Queries
  • Page views
  • Reports
  • Sessions started
  • Web Service Requests

For more information, see the Analyzing performance issues using telemetry section.
