hlms closed this issue 11 months ago.
I think you are mixing the old data format with the new one. Write your own serializer and handle such cases there.
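A minimal sketch of what such a hand-rolled, versioned serializer could look like, using only java.io (the class and method names `DetailCodec`, `write`, and `read` are illustrative, not MapDB API; a MapDB `Serializer` implementation would follow the same idea over its own DataOutput/DataInput arguments). The leading version byte lets the reader handle both old (v1) and new (v2) records:

```java
import java.io.*;

// Simplified Detail (v2): the "state" field was added later.
class Detail {
    String source;
    String target;
    int state; // absent in v1 records

    Detail(String source, String target, int state) {
        this.source = source;
        this.target = target;
        this.state = state;
    }
}

// Hand-rolled, versioned binary format.
class DetailCodec {
    static byte[] write(Detail d) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeByte(2);        // format version
            out.writeUTF(d.source);
            out.writeUTF(d.target);
            out.writeInt(d.state);   // v2-only field
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Detail read(byte[] bytes) {
        try {
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(bytes));
            int version = in.readByte();
            String source = in.readUTF();
            String target = in.readUTF();
            // Old records have no "state" field: fall back to a default.
            int state = (version >= 2) ? in.readInt() : 0;
            return new Detail(source, target, state);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The point of the version byte is that schema evolution becomes an explicit branch in `read`, instead of an exception deep inside a generic serializer.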
This was solved in the following way.
Earlier, the map was created without explicit serializers:
HTreeMap map = db.hashMap("Detail")
.createOrOpen();
Fixed version:
HTreeMap<String, Detail> map = db.hashMap("Detail", Serializer.STRING, Serializer.JAVA)
.createOrOpen();
Add a serialVersionUID to the class:
@Data
@NoArgsConstructor
public class Detail implements Serializable {
private static final long serialVersionUID = -4627389989283518760L;
private String source;
private String target;
}
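To confirm that the pinned serialVersionUID is the one the JVM's serialization machinery will actually use, `java.io.ObjectStreamClass` can be queried. A short sketch (the `Detail` class is simplified here, with the Lombok annotations omitted; `CheckUid` and `uidOf` are illustrative names):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Simplified Detail (v1), Lombok omitted.
class Detail implements Serializable {
    private static final long serialVersionUID = -4627389989283518760L;
    private String source;
    private String target;
}

public class CheckUid {
    // Returns the UID the serialization machinery will use for this class.
    static long uidOf(Class<?> c) {
        return ObjectStreamClass.lookup(c).getSerialVersionUID();
    }

    public static void main(String[] args) {
        System.out.println(uidOf(Detail.class)); // prints -4627389989283518760
    }
}
```

If the field were missing, this call would return a UID computed from the class structure, which changes whenever a field is added, and that mismatch is what makes old and new versions incompatible.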
Store some objects using string keys and instances of this class as values.
Now, update the class definition, leaving the serialVersionUID unchanged:
@Data
@NoArgsConstructor
public class Detail implements Serializable {
private static final long serialVersionUID = -4627389989283518760L;
private String source;
private String target;
private int state; // new field
}
Persist some more objects with instances of the updated class as values.
Now both the old and the new objects can be read with the updated class.
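The compatibility described above can be exercised with plain Java serialization, which is what `Serializer.JAVA` relies on under the hood. A self-contained round-trip sketch for the updated class (Lombok omitted, class and helper names simplified); when a stream written by the v1 class is read with this v2 class and the UIDs match, standard Java serialization leaves the missing `state` field at its default value:

```java
import java.io.*;

// Simplified Detail (v2); serialVersionUID kept from v1.
class Detail implements Serializable {
    private static final long serialVersionUID = -4627389989283518760L;
    String source;
    String target;
    int state; // field added in v2

    Detail(String source, String target, int state) {
        this.source = source;
        this.target = target;
        this.state = state;
    }
}

public class RoundTrip {
    static byte[] toBytes(Detail d) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(d); // standard Java serialization
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Detail fromBytes(byte[] bytes) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Detail) in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Detail d = fromBytes(toBytes(new Detail("a", "b", 3)));
        System.out.println(d.source + " " + d.target + " " + d.state);
    }
}
```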
@jankotek,
Note that, in the scenario reported in my previous comment, the exception is thrown when the user attempts to persist an object with the new definition.
The user is NOT even attempting to read objects stored with the old definition.
Why is the exception required in this case?
This looks like a bug to me. Why does MapDB need to throw an exception about data persisted in the past while writing new data?
@jankotek,
Could you re-open this issue?
Hello team,
When is the following exception thrown?
It looks like I had earlier stored objects of a custom class Detail (definition v1) as values in an HTreeMap. After some time, I added a field "state" to that class. Now, when I attempt to put a Detail object (definition v2) as a value into the HTreeMap, it fails.
Is that expected? If so, how should we handle the scenario where the class definition of an already persisted object is updated?