[DAEXIM-7] Provide option to use more granular reads during export and import operation Created: 25/May/18  Updated: 03/Jul/18  Resolved: 06/Jun/18

Status: Resolved
Project: daexim
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Medium
Reporter: Ajay Lele Assignee: Ajay Lele
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

It has been observed that when the amount of data in the datastore is large, the daexim export operation, especially when performed on a non-leader node, fails with an AskTimeoutException. The current export implementation reads data from the root of the data tree in one shot, so the read operation does not scale well.
 
Reading data in smaller chunks, e.g. on a per-module/node basis, would be more scalable, but I believe the reason data is read in one shot is to keep it consistent across the different modules. However, in some scenarios, e.g. when there are no data dependencies across models, or when writes to the datastore can be prevented while the export is in progress, this consistency need not be enforced.
 
This ticket will add a new boolean option, 'strict-data-consistency', to the input of the export operation. When the option is true (the default), the one-shot read is performed as it happens today. When it is false, reads are performed on a per-module/node basis, after removing the exclusions, and the results are combined to write the output file. The output files produced by the two methods are exactly the same; only the way they are produced differs.
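As a rough illustration of the claim that the two read strategies yield the same output, here is a hedged, self-contained sketch (not the actual daexim code; the datastore is modeled as a plain map, and `oneShotExport`/`perModuleExport` are hypothetical names): a one-shot root read with exclusions stripped afterwards, versus per-module reads that skip exclusions up front and merge the chunks.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Hypothetical sketch only: the real implementation reads from the
// MD-SAL datastore; here a nested map stands in for module subtrees.
public class ExportSketch {
    // Toy "datastore": module name -> that module's subtree.
    static final Map<String, Map<String, String>> DATASTORE = Map.of(
        "network-topology", Map.of("topo1", "..."),
        "opendaylight-inventory", Map.of("node1", "..."),
        "daexim-internal", Map.of("status", "..."));

    // strict-data-consistency = true: one read from the root,
    // then drop the excluded modules from the snapshot.
    static Map<String, Map<String, String>> oneShotExport(Set<String> exclusions) {
        Map<String, Map<String, String>> snapshot = new TreeMap<>(DATASTORE);
        snapshot.keySet().removeAll(exclusions);
        return snapshot;
    }

    // strict-data-consistency = false: one small read per module,
    // skipping exclusions up front, merged into a single result.
    static Map<String, Map<String, String>> perModuleExport(Set<String> exclusions) {
        Map<String, Map<String, String>> merged = new TreeMap<>();
        for (Map.Entry<String, Map<String, String>> e : DATASTORE.entrySet()) {
            if (!exclusions.contains(e.getKey())) {
                merged.put(e.getKey(), e.getValue());
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Set<String> exclusions = Set.of("daexim-internal");
        boolean same = oneShotExport(exclusions).equals(perModuleExport(exclusions));
        System.out.println("outputs identical: " + same);
    }
}
```

With a static datastore the two methods are trivially equivalent; the difference in the real system is that concurrent writes between per-module reads could make the chunks mutually inconsistent, which is exactly why the option defaults to the one-shot read.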



 Comments   
Comment by Ajay Lele [ 03/Jul/18 ]

Pushed another patch to provide a similar option for the import operation as well.
