jankotek / mapdb

MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap-memory. It is a fast and easy to use embedded Java database engine.
https://mapdb.org
Apache License 2.0

java.lang.AssertionError: Missing field: state #1022

Closed. hlms closed this issue 11 months ago

hlms commented 11 months ago

Hello team,

When is the following exception thrown?

Exception in thread "main" java.lang.AssertionError: Missing field: state
        at org.mapdb.elsa.ElsaSerializerPojo.serializeUnknownObject(ElsaSerializerPojo.java:490)
        at org.mapdb.elsa.ElsaSerializerBase.serialize(ElsaSerializerBase.java:1093)
        at org.mapdb.elsa.ElsaSerializerBase.serialize(ElsaSerializerBase.java:1026)
        at org.mapdb.DB$defaultSerializer$1.serialize(DB.kt:225)
        at org.mapdb.StoreDirectAbstract.serialize(StoreDirectAbstract.kt:245)
        at org.mapdb.StoreWAL.put(StoreWAL.kt:380)
        at org.mapdb.HTreeMap.valueWrap(HTreeMap.kt:1208)
        at org.mapdb.HTreeMap.putprotected(HTreeMap.kt:344)
        at org.mapdb.HTreeMap.put(HTreeMap.kt:324)

It looks like earlier I had put objects of a custom class Detail (definition v1) as values into an HTreeMap.

@Data
@NoArgsConstructor
public class Detail implements Serializable {

  private String source;
  private String target;
}

After some time, I added a field "state" to that custom class Detail.

@Data
@NoArgsConstructor
public class Detail implements Serializable {

  private String source;
  private String target;
  private int state;
}

Now, when I attempt to put a Detail object (definition v2) as a value into the HTreeMap, it fails with the exception above.

Is that expected? If so, how should we handle scenarios where the definition of an already persisted class gets updated?

jankotek commented 11 months ago

I think you are mixing the old data format with the new one. Write your own serializer and handle such cases there.
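
For reference, here is a minimal sketch of such a custom serializer. The class DetailSerializer, the version byte, and the null handling below are assumptions, not code from MapDB or this thread, and it only applies to data written with this serializer from the start, since records already stored by the default serializer use a different format. The idea is to write an explicit version tag so that fields added later can be given a default when older records are read back.

import java.io.IOException;

import org.mapdb.DataInput2;
import org.mapdb.DataOutput2;
import org.mapdb.Serializer;

public class DetailSerializer implements Serializer<Detail> {

  // Bump this whenever a field is added or removed.
  private static final byte CURRENT_VERSION = 2;

  @Override
  public void serialize(DataOutput2 out, Detail value) throws IOException {
    out.writeByte(CURRENT_VERSION);
    writeNullableString(out, value.getSource());
    writeNullableString(out, value.getTarget());
    out.writeInt(value.getState());        // field added in v2
  }

  @Override
  public Detail deserialize(DataInput2 input, int available) throws IOException {
    byte version = input.readByte();
    Detail detail = new Detail();
    detail.setSource(readNullableString(input));
    detail.setTarget(readNullableString(input));
    if (version >= 2) {
      detail.setState(input.readInt());    // v1 records keep the default 0
    }
    return detail;
  }

  private static void writeNullableString(DataOutput2 out, String s) throws IOException {
    out.writeBoolean(s != null);
    if (s != null) {
      out.writeUTF(s);
    }
  }

  private static String readNullableString(DataInput2 in) throws IOException {
    return in.readBoolean() ? in.readUTF() : null;
  }
}

The map would then be created with it, for example: db.hashMap("Detail", Serializer.STRING, new DetailSerializer()).createOrOpen().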

hlms commented 11 months ago

This got solved in the following way:

Earlier, the map was created like this:

HTreeMap map = db.hashMap("Detail")
            .createOrOpen();

Fixed version:

HTreeMap<String, Detail> map = db.hashMap("Detail", Serializer.STRING, Serializer.JAVA)
            .createOrOpen();

Add serialVersionUID to the class:

@Data
@NoArgsConstructor
public class Detail implements Serializable {
  private static final long serialVersionUID = -4627389989283518760L;
  private String source;
  private String target;
}

Store some objects with String keys and Detail objects as values.

Now, update the class definition leaving the serialVersionUID unchanged:

@Data
@NoArgsConstructor
public class Detail implements Serializable {
  private static final long serialVersionUID = -4627389989283518760L;
  private String source;
  private String target;
  private int state;  // new field 
}

Persist some more objects with the updated class as values.

Now, you will be able to read both the old and the new objects with this class.
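
For completeness, here is a minimal end-to-end sketch of this workaround (the file name, keys, and demo class below are assumptions, not from this thread). Serializer.JAVA delegates to standard Java serialization, so keeping serialVersionUID unchanged allows records written with the v1 class to deserialize into the v2 class, with the missing state field falling back to its default.

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;
import org.mapdb.Serializer;

public class DetailMigrationDemo {

  public static void main(String[] args) {
    DB db = DBMaker.fileDB("detail.db").transactionEnable().make();

    HTreeMap<String, Detail> map = db.hashMap("Detail", Serializer.STRING, Serializer.JAVA)
            .createOrOpen();

    // Write a record with the v2 class (source, target, state).
    Detail latest = new Detail();
    latest.setSource("a");
    latest.setTarget("b");
    latest.setState(1);
    map.put("new-key", latest);

    // Read back a record that was written earlier with the v1 class;
    // its "state" field deserializes to the default value 0.
    Detail old = map.get("old-key");
    System.out.println(old != null ? "old state = " + old.getState() : "no old record");

    db.commit();
    db.close();
  }
}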

@jankotek ,

Note that, in the scenario reported in my previous comment, the exception is thrown when the user attempts to persist an object with the new definition.
The user is not even attempting to read the old-definition objects.
Why is the exception necessary in this case?

hlms commented 11 months ago

This looks like an issue to me. Why does MapDB need to throw an exception about data persisted in the past while writing new data?

hlms commented 11 months ago

@jankotek ,
Could you re-open this issue?