[CONTROLLER-1517] Upgrading model leads to existing data in old model being flushed Created: 18/May/16  Updated: 25/Jul/23  Resolved: 07/Nov/16

Status: Resolved
Project: controller
Component/s: clustering
Affects Version/s: None
Fix Version/s: None

Type: Bug
Reporter: Srini Seetharaman Assignee: Unassigned
Resolution: Cannot Reproduce Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issue Links:
Blocks
is blocked by CONTROLLER-1511 CDS: persist SchemaContext Confirmed
External issue ID: 5905

 Description   

I had a simple model included in my ODL run (within a model.jar having artifact version 0.1.0-SNAPSHOT). The model was as follows:

module benchmarking {
    namespace "urn:sdnhub:benchmarking";
    prefix benchmarking;

    description "This is a dummy model for benchmarking";

    revision "2016-03-09" {
        description "initial version";
    }

    container dummy1 {
        leaf data1 {
            type uint32;
        }
    }
}

I populated data1 with an integer at runtime.

Then I generated a new model.jar with artifact version 0.2.0-SNAPSHOT that included benchmarking.yang with revision-date "2016-03-10". I placed this model.jar in the deploy folder of karaf, and the OSGi system picked it up and installed it. Both model-0.1.0-SNAPSHOT.jar and model-0.2.0-SNAPSHOT.jar are Active in OSGi.
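The report does not include the upgraded module. Assuming only the revision was bumped as described, the 0.2.0-SNAPSHOT benchmarking.yang would presumably look roughly like this (the 2016-03-10 revision block and its description string are illustrative assumptions; the rest mirrors the original module):

```yang
module benchmarking {
    namespace "urn:sdnhub:benchmarking";
    prefix benchmarking;

    description "This is a dummy model for benchmarking";

    // Assumed: new revision statement added for the 0.2.0-SNAPSHOT artifact
    revision "2016-03-10" {
        description "upgraded version";
    }

    revision "2016-03-09" {
        description "initial version";
    }

    container dummy1 {
        leaf data1 {
            type uint32;
        }
    }
}
```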

After introducing the upgraded model, I noticed that the data stored under the old model was purged. This is unexpected. We need a way to preserve the existing data from the old model so that apps have an opportunity to port it over to the new model.



 Comments   
Comment by Robert Varga [ 21/Jun/16 ]

I suspect this is actually the cluster datastore data pruner at work. Tom, can you confirm?

Comment by Tom Pantelis [ 22/Jun/16 ]

Yes, that sounds like the pruner, although it only runs on a karaf restart. During persistence recovery, if it encounters a yang element that has no corresponding schema in the SchemaContext, it prunes that element. You have to keep both schemas deployed to avoid this.

Comment by Srini Seetharaman [ 07/Nov/16 ]

In my analysis, I was using restconf to check whether the data in the old model was still available. That is an invalid test, because restconf is not the right way to query the old model. I ran another test using an app to check whether the data in the old model exists, and it does.

Marking this bug as invalid.

Generated at Wed Feb 07 19:55:45 UTC 2024 using Jira 8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d.