Hi Friends,
I'd like to archive a big custom table by moving some records to an archive table. If I duplicate the original table to create the archive table, is there any problem?
Thanks,
I don't see any issue in creating a duplicate table, but when you say "move", you have to take care of referencing records: other tables may hold the RecIds of your records and use them for other processes, display, and so on.
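To illustrate (a hedged sketch only: MYBIGTABLE, its table id 50001, and the date filter are hypothetical, and DocuRef is just one example of an AX table that stores RefTableId/RefRecId pairs), you could check for referencing records before moving anything:

-- Find archive candidates that are still referenced by document attachments.
SELECT b.RECID
FROM MYBIGTABLE b
JOIN DOCUREF r
    ON r.REFRECID = b.RECID
   AND r.REFTABLEID = 50001   -- hypothetical table id of MYBIGTABLE
WHERE b.TRANSDATE < '2010-01-01';

Any rows returned would break lookups or display after the move, because the RecId stored in the referencing table would no longer resolve.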
One obvious problem with duplication is maintenance cost and the risk of bugs caused by things getting out of sync. For example, if you want to add a field, you have to add it to both tables, or your archive table will be missing the data.
Also, it's not clear what advantage you hope to get by creating an "archive" table in the same database. Maybe you're trying to deal with a performance problem, which it may not solve. If that's the case, you should fix the underlying problem instead.
If your table is excessively large, you should instead review the design of your data model, such as normalizing your large table into several smaller ones (a sketch follows below).
If there is a business reason (not just an attempt at optimization) for such a data model, you can design a special-purpose history table (which isn't the same thing as a copy).
Also, consider looking at IDMF, which supports data archiving.
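To make the normalization suggestion above concrete, here is a minimal sketch (expressed in plain T-SQL for brevity; in AX you would model the new table in the AOT, and MYBIGTABLE and the column names are made up): move wide, rarely read columns into a 1:1 side table keyed by the parent's RecId, so the hot table and its indexes stay small.

-- Hypothetical side table holding the wide, rarely read columns.
CREATE TABLE MYBIGTABLEDETAILS (
    REFRECID   BIGINT         NOT NULL PRIMARY KEY,  -- RecId of the parent row in MYBIGTABLE
    NOTES      NVARCHAR(MAX)  NULL,
    RAWPAYLOAD VARBINARY(MAX) NULL
);

Queries that only touch the frequently used columns then read far fewer pages.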
Thanks guys.
It's for a performance issue: the table is getting too big, 50 million records. Due to a business rule, we can't delete them, even though we don't use the obsolete records.
And we don't want to move to a different DB.
We have an AX database with 11 years' worth of transactions that has been through upgrades from AX 3.0 -> 4.0 -> 2009 -> 2012, with a much larger volume than yours, and there are no such performance issues. It is 2 TB in size including the logs.
The key here is to correctly design your server architecture, get your covering indexes right, do not pull more fields than necessary in the select statements and finally update fragmented indexes and statistics regularly.
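For example (a sketch with hypothetical table and column names; in AX the index itself would normally be defined in the AOT): a covering index answers a query entirely from the index, and regular maintenance keeps it effective.

-- Covering index: key columns for filtering, INCLUDE for the selected fields.
CREATE NONCLUSTERED INDEX IX_MYBIGTABLE_COVER
    ON MYBIGTABLE (DATAAREAID, ITEMID, TRANSDATE)
    INCLUDE (QTY, AMOUNTMST);

-- Regular maintenance: rebuild when fragmented, refresh the statistics.
ALTER INDEX IX_MYBIGTABLE_COVER ON MYBIGTABLE REBUILD;
UPDATE STATISTICS MYBIGTABLE WITH FULLSCAN;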
When we noticed that we could no longer optimize indexes and code any further, we opted to ditch the traditional SAN storage serving our physical SQL instances and went all virtual on the Nutanix platform. We have SSD hot-tier disks integrated directly into the server nodes, used as blob storage with block-level read/write operations distributed across the nodes, so the more servers you have in the cluster, the faster your read operations are.
I have also split our database data files across multiple vDisk volumes and distributed the tables evenly across the 8 physical data files, with 1 additional file holding LOB data (like document attachments). This way, write operations for highly transactional tables go round-robin across the vDisks, spreading the load on the system more evenly.
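In plain T-SQL terms, the layout looks roughly like this (database name, paths, and sizes are hypothetical; SQL Server's proportional fill spreads allocations across equally sized files within a filegroup):

-- Additional, equally sized data files on separate volumes.
ALTER DATABASE AXDB ADD FILE
    (NAME = AXDB_Data2, FILENAME = 'E:\Data2\AXDB_Data2.ndf', SIZE = 100GB),
    (NAME = AXDB_Data3, FILENAME = 'F:\Data3\AXDB_Data3.ndf', SIZE = 100GB);

-- A dedicated filegroup and file for LOB data such as document attachments.
ALTER DATABASE AXDB ADD FILEGROUP LOBDATA;
ALTER DATABASE AXDB ADD FILE
    (NAME = AXDB_Lob1, FILENAME = 'G:\Lob\AXDB_Lob1.ndf', SIZE = 200GB)
    TO FILEGROUP LOBDATA;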
There is one additional solution which comes to my mind and is a supported feature you may try on SQL Server: a sliding window partition. With that, you could keep fresh data, like the last 3 months, in a different data file from your "archive" data within the same table, effectively reducing the size of the data that your indexes need to cover in one partition:
littlekendra.com/.../sliding-window-table-partitioning-what-to-decide-before-you-automate
However, this feature requires the Enterprise Edition of SQL Server.
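A minimal sliding-window sketch (all names and boundary dates are hypothetical; the table itself would have to be created on the partition scheme, e.g. ON psByMonth(TRANSDATE)):

-- Monthly partitions; with RANGE RIGHT, partition 1 holds everything older
-- than the first boundary.
CREATE PARTITION FUNCTION pfByMonth (DATETIME)
    AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-02-01', '2017-03-01');
CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

-- Slide the window: switch the oldest partition out (a metadata-only
-- operation) and merge away its boundary.
ALTER TABLE MYBIGTABLE SWITCH PARTITION 1 TO MYBIGTABLE_STAGING;
ALTER PARTITION FUNCTION pfByMonth() MERGE RANGE ('2017-01-01');

The SWITCH needs a staging table with an identical structure on the same filegroup, but because it only moves metadata, it takes seconds even for millions of rows.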
50 million records isn't too much by itself, but it may be a problem if your data model and indexes aren't suitable for how the table is used. I still suggest you focus on the performance problem, because fixing it will help you in either case.
If you say that the records are useless but you still have to waste resources by keeping them in the database (which takes space, slows down backups and restores, and so on), putting them in a separate table without indexes may indeed be the best approach, if you're willing to pay the price of developing and maintaining it.
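If you go that way, a hedged sketch of the move itself (hypothetical names; MYBIGTABLE_ARCHIVE is assumed to be a heap with the same column list) would batch the work so you don't hold long locks or write one huge transaction to the log:

-- Move obsolete rows in small batches into the index-free archive heap.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM MYBIGTABLE
    OUTPUT DELETED.* INTO MYBIGTABLE_ARCHIVE
    WHERE TRANSDATE < '2010-01-01';

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to move
END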
Hi Vilmos,
Thanks, our DB size is close to yours, about 1.8 TB. Specifically, about 30 tables are huge, with 40-60 million records each and growing. We did what we could, as you mentioned: indexing, defragmenting regularly, and special indexes outside of AX since we are on 2009.
What we don't have is the SQL Server architecture you mentioned; we don't have expertise in that. How much would it cost?
How would AX 365 help us, if we upgraded to it?
River
When we asked how we could upgrade to Dynamics 365 for Operations, there were at the time no data upgrade scripts available to transfer historical data (it is still in closed beta, if I am right). Since we want to keep that data, we are still waiting for a tested, reliable way to carry it forward to the cloud or on-premises.
In the meantime, your only option is to use advanced techniques on top of the basic tasks you have already carried out.
As for the hardware, we had a very long selection process (6-8 months) and spent additional months setting it up and transferring our production system over to the new instances, so that would not be an immediate relief for you. The goal was to have hardware powerful enough to host all of our production VMs (and Test/QA) for the next 5 years without any issues. Commercially it was not cheap at all, but if you want your business to run smoothly and get customers in and out as fast as possible, as we do, then carefully selecting your servers and sizing them appropriately for your business needs, keeping the AX architecture in mind, is key.
I am working with the IDMF tool, which is new to me.
We have scheduled the PROD and Archive Analysis snapshots, and they ran successfully.
We scheduled the ByFiscal template for a fiscal year; the run took a long time (7 days for one individual FY with about 60 million records), and our DB size is 3.5 TB.
But I need to know how the archive function works, i.e. on what factors it archives the data:
Is it driven by the indexes/records in the tables? How does it pick the tables for inserting records into the archive and taking the records offline? With that knowledge, I could reduce the archival duration for the ByFiscal templates.
From the conversation above, I understand that indexing, defragmentation, and maintenance of fragmented indexes need to be scheduled. Can this reduce the duration of archiving the records?
Does a low server configuration also lead to an increased archiving duration?
We are at a critical stage with this issue.
Your response will be highly appreciated!
Kindly provide your valuable suggestions.
Thanks, Durga