Microsoft Dynamics AX (Archived)

Cross company query taking time

Posted by AX QA

Hi All,

I am using the following code to delete the contact information of customers in selected companies. It is taking a long time, around 5 hours, to process 30,000 customers, which is much longer than I expected. Is there a way to optimize this query?

    container   conCompanies = ['XXX'];
    CustTable   custTable;

    while select crossCompany :conCompanies custTable
        order by dataAreaId
    {
        this.deleteContactInformation(custTable);
    }

*This post is locked for comments

  • Brandon Wiese:

    It's not likely that the crossCompany query on custTable is responsible for the 5 hours of work.  Can you post the code behind your deleteContactInformation() method?

  • AX QA:

    private void deleteContactInformation(CustTable _custTable)
    {
        DirPartyContactInfoView     dirPartyContactInfoView;
        LogisticsLocation           location;

        ttsBegin;
        while select Party, Location from dirPartyContactInfoView
            where dirPartyContactInfoView.Party == _custTable.Party
        {
            location = LogisticsLocation::find(dirPartyContactInfoView.Location, true);
            if (location.validateDelete())
            {
                location.delete();
            }
        }
        ttsCommit;
    }

  • Suggested answer from Brandon Wiese:

    The problem is not your crossCompany query at all.  

    It's entirely the location.delete() method.  Let's look closer.

    public void delete()
    {
        LogisticsLocation location;

        if (LogisticsLocationEntity::canDeleteLocation(this.RecId))
        {
            ttsbegin;
            if (isConfigurationkeyEnabled(configurationKeyNum(smmOutlook)))
            {
                smmContactPersonSynchronizeFieldMapping::updateDeletedLocation(this.RecId);
            }
            // ...rest of the method omitted...

    The LogisticsLocationEntity::canDeleteLocation(..) method does a tremendous amount of work.

    It loops through a list of tables checking to see if the delete is valid or not.

            tableList = LogisticsLocationEntity::locationTableList();
            if (tableList && _location)
            {
                query = new Query();
                query.queryType(QueryType::Union);
                query.allowCrossCompany(true);
                for (i=1; i<=conLen(tableList); i+=2)
                {
                    dictTable = new SysDictTable(conPeek(tableList,i));
                    qbDataSource = query.addDataSource(conPeek(tableList,i));
                    qbDataSource.update(false);
                    qbDataSource.fields().dynamic(false);
                    qbDataSource.fields().clearFieldList();
                    qbDataSource.fields().addField(conPeek(tableList,i+1));
                    qbDataSource.addRange(conPeek(tableList,i+1)).value(queryValue(_location));
                    qbDataSource.addSortField(dictTable.fieldName2Id(identifierStr(dataAreaId)));
                    qbDataSource.unionType(UnionType::UnionAll);
                }
                qRun = new QueryRun(query);
                readOne();
                olddataAreaId = dataAreaId;

    How many tables?

    Let's look at LogisticsLocationEntity::locationTableList().

    static client server container locationTableList()
    {
        container list;
    
        list = [tableNum(AssetLocation)    ,fieldNum(AssetLocation,LogisticsLocation)
        ,tableNum(BankAccountTable)    ,fieldNum(BankAccountTable,Location)
        ,tableNum(BankGroup)    ,fieldNum(BankGroup,Location)
        ,tableNum(CustBankAccount)    ,fieldNum(CustBankAccount,Location)
        ,tableNum(CustInvoiceTable)    ,fieldNum(CustInvoiceTable,DeliveryLocation)
        ,tableNum(IntrastatParameters)    ,fieldNum(IntrastatParameters,AgentLocation)
        ,tableNum(IntrastatServicePoint_FI)    ,fieldNum(IntrastatServicePoint_FI,Location)
        ,tableNum(ProjInvoiceTable)    ,fieldNum(ProjInvoiceTable,InvoiceLocation)
        ,tableNum(ProjTable)    ,fieldNum(ProjTable,DeliveryLocation)];
        list +=
    //    ,tablenum(PurchReqLine)    ,fieldnum(PurchReqLine,DeliveryLocation)
        [tableNum(SalesBasket)    ,fieldNum(SalesBasket,DeliveryLocation)
        ,tableNum(SalesCarrier)    ,fieldNum(SalesCarrier,Location)
        ,tableNum(SalesQuotationBasket)    ,fieldNum(SalesQuotationBasket,DeliveryLocation)];
        list +=
        [tableNum(Tax1099SoftwareVendParameters)    ,fieldNum(Tax1099SoftwareVendParameters,Location)
        ,tableNum(Tax1099TransmitterParameters)    ,fieldNum(Tax1099TransmitterParameters,Location)
        ,tableNum(TaxAuthorityAddress)    ,fieldNum(TaxAuthorityAddress,Location)
        ,tableNum(TaxEvatParameters_NL)    ,fieldNum(TaxEvatParameters_NL,ContactLocation)
        ,tableNum(TaxIntraCommTable_NL)    ,fieldNum(TaxIntraCommTable_NL,ContactLocation)
    /*    ,tablenum(TrvExpTrans)    ,fieldnum(TrvExpTrans,LogisticsLocation)*/
        ,tableNum(VendBankAccount)    ,fieldNum(VendBankAccount,Location)
    /*  ,tablenum(VendRequest)    ,fieldnum(VendRequest,BankLocation)
        ,tablenum(VendRequest)    ,fieldnum(VendRequest,cContactLocation)
        ,tablenum(VendRequest)    ,fieldnum(VendRequest,cLocation)
        ,tablenum(VendRFQJour)    ,fieldnum(VendRFQJour,DeliveryLocation)*/];
    
        return list;
    }

    That's why it takes so long to delete one LogisticsLocation record: every single delete runs a cross-company union query over 18 tables just to validate that the delete is allowed.

  • Suggested answer from BrandonSA:

    You could gain a little speed by using delete_from, and also by moving the ttsbegin/ttscommit outside of the entire process. There is also a while within a while, and then a separate find inside of that. Is it not possible to reduce this

    while
    {
        while
        {
            ttsbegin;
            Table::find();
            ttscommit;
        }
    }

    structure into something more like

    ttsbegin;
    while select crosscompany
    {
        delete_from
    }
    ttscommit;

    where your delete_from retrieves the exact record you need to delete with joins and relationships?
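    Concretely, the suggestion above might look something like this sketch in X++. The view and field names (DirPartyContactInfoView.Party/Location) are taken from the deleteContactInformation() method posted earlier; this is an illustration of the idea, not tested code. Note the caveat in the next reply about how the kernel handles tables with an overridden delete() method.

    ```xpp
    // Sketch: set-based delete of all locations linked to one customer.
    // Assumes DirPartyContactInfoView.Location holds the LogisticsLocation RecId,
    // as implied by the LogisticsLocation::find() call in the original method.
    LogisticsLocation           location;
    DirPartyContactInfoView     contactView;

    ttsBegin;
    delete_from location
        exists join contactView
            where contactView.Location == location.RecId
               && contactView.Party    == _custTable.Party;
    ttsCommit;
    ```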

  • Brandon Wiese:

    Using delete_from against LogisticsLocation will not help at all here, because the presence of a .delete() method will cause it to degenerate into a while select/delete loop in the kernel anyway.

  • Brandon Wiese:

    Also, do you really want to create a single transaction with a scope of 5 hours?  It seems like 30,000 smaller transactions would be easier on the system.

  • AX QA:

    Brandon, yes, I am aware that it checks all table relations, which leads to slow performance. I am looking into how to improve it.

  • AX QA:

    Sorry, I did not get your point: 'Also, do you really want to create a single transaction with a scope of 5 hours?'

    Yes, 30,000 customers is not a big number, but there are multiple contact information records for each customer.

  • Suggested answer from Brandon Wiese:

    You can use .skipDeleteMethod(true) on the delete buffer to avoid calling the .delete() method that does all that work. But it also skips the validation and other work done within that method, which means it falls entirely on you to make sure your delete is valid and consistent.

    msdn.microsoft.com/.../xrecord.skipdeletemethod.aspx
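    A minimal sketch of that approach, reusing the view fields from the original deleteContactInformation() method. This is illustrative only: with skipDeleteMethod(true), none of the cleanup normally done in LogisticsLocation.delete() runs, so any referential consistency becomes the caller's responsibility.

    ```xpp
    // Sketch: bypass the overridden delete() and its expensive
    // canDeleteLocation() validation; the kernel performs a plain delete.
    LogisticsLocation           location;
    DirPartyContactInfoView     contactView;

    ttsBegin;
    while select Location from contactView
        where contactView.Party == _custTable.Party
    {
        select forUpdate location
            where location.RecId == contactView.Location;
        location.skipDeleteMethod(true);    // skip X++ delete() logic
        location.delete();                  // kernel-level delete only
    }
    ttsCommit;
    ```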

  • Brandon Wiese:

    [quote user="AX QA"]Sorry did not get you on this point 'Also, do you really want to create a single transaction with a scope of 5 hours?'[/quote]

    The suggestion of putting all of your delete operations inside one transaction (ttsbegin/ttscommit) that lasts 5 hours is not a good idea.  Transactions should be kept short to avoid blocking, transaction log growth on SQL Server, and other serious issues.
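    Keeping the transaction scope small while still iterating all companies might look like the following sketch, which keeps the original poster's structure but wraps one transaction around each customer rather than the whole 5-hour run (the company list 'XXX' is a placeholder from the original post):

    ```xpp
    // Sketch: one short transaction per customer instead of one
    // long-running transaction around the entire process.
    container   conCompanies = ['XXX'];
    CustTable   custTable;

    while select crossCompany :conCompanies custTable
        order by dataAreaId
    {
        ttsBegin;
        this.deleteContactInformation(custTable);
        ttsCommit;
    }
    ```

    This limits blocking and transaction log growth, and a failure only rolls back the customer being processed at that moment.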
