timshannon / badgerhold

BadgerHold is an embeddable NoSQL store for querying Go types built on Badger
MIT License

Upsert creates duplicate records #112

Open marius-enlock opened 3 months ago

marius-enlock commented 3 months ago

Hello,

I suspect that under certain conditions the Upsert method creates duplicate records.

I have tried reproducing the issue, but I haven't managed to yet; I will keep trying and update here with my findings.

What I think is happening: when Badger compacts (or compresses) the data, the Upserts end up creating duplicate records. A single badgerhold.Delete(key, Record) deletes all of the duplicates.

I have a multi-goroutine app that runs as a long-standing process and keeps the Badger DB connection open. For a while now I have noticed that I sometimes get the same record twice, with only one field modified.

The database options are the following:

        opts := badgerhold.DefaultOptions
        opts.Options = badger.DefaultOptions(dbStoragePath).
                WithEncryptionKey(encPassword[:32]).
                WithIndexCacheSize(800 << 20).
                WithValueLogFileSize(100 << 20).
                WithValueLogMaxEntries(uint32(10000))
        db, err := badgerhold.Open(opts)

I have observed this behavior over a couple of months while developing the app, but I couldn't build a reproducer. (I have tried upserting 1000 records from multiple nested goroutines, 1000 times, but there were no duplicates in the end.) In my project I ended up simply calling Delete(key, Record) before doing an Upsert. My keys always have custom types of the form type MyType uuid.UUID.

An example of the struct I upsert:

import (
    "fmt"

    "github.com/google/uuid"
)

type RootID uuid.UUID

func (id RootID) Compare(v interface{}) (int, error) {
    vID, ok := v.(RootID)
    if !ok {
        return 0, fmt.Errorf("invalid interface")
    }

    if id == vID {
        return 0, nil
    }

    return -1, nil
}

type Root struct {
    ID       RootID `badgerhold:"key"`
    Username string
    // Other fields that do not have badgerhold annotations but have primitive types.
}

I think I need to let my reproduction program run for a while, or trigger a compaction in Badger.

timshannon commented 3 months ago

I know Badger has some "less consistent" options that can be passed in, so I'd be careful about using any of those.

Also, Badger runs with "snapshot" isolation, where readers see the previously committed value while other transactions are open. So if you are doing some sort of check for duplicates before inserting new records, you can absolutely generate duplicates.