lunixbochs / struc

Better binary packing for Go
MIT License
564 stars 42 forks

Slice of structs? #30

Open quantum1423 opened 8 years ago

quantum1423 commented 8 years ago

This fails:

type srvEdgeResp struct {
    Num    int `struc:"uint8,sizeof=Neighs"`
    Neighs []Neighbor
}

type Neighbor struct {
    Address net.IP              `struc:"[4]byte"`
    Port    int                 `struc:"uint16"`
    Secret  []byte              `struc:"[32]byte"`
    PubKey  natrium.EdDSAPublic `struc:"[32]byte"`
}

with the error reflect: call of reflect.Value.NumField on slice Value.

Am I doing something horribly wrong, or is this a limitation of the library? It'll be quite annoying if this cannot be done at all.
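For context, the panic text in the error comes straight from the standard reflect package: NumField is only valid on struct values, so the panic presumably fires when the library descends into the slice field without checking its Kind first. A stdlib-only sketch (not struc code, just an illustration of the panic) reproduces the exact message:

```go
package main

import (
	"fmt"
	"reflect"
)

// numFieldOnSlice shows why the library panics: reflect.Value.NumField is
// only defined for struct values, and calling it on a slice raises the exact
// error text quoted in this issue.
func numFieldOnSlice() (msg string) {
	defer func() {
		// Capture the panic raised by the reflect package.
		msg = fmt.Sprint(recover())
	}()
	reflect.ValueOf([]int{1, 2, 3}).NumField()
	return ""
}

func main() {
	fmt.Println(numFieldOnSlice())
	// reflect: call of reflect.Value.NumField on slice Value
}
```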

lunixbochs commented 8 years ago

This will probably be a trivial fix. I need to generally support embedded structs better (issues #1 and #4 as well). I'll take a look tonight!

lunixbochs commented 8 years ago

This is partially implemented in the slice-struct branch (pack works, unpack does not).

quantum1423 commented 8 years ago

My current project is blocked on this, so it would be nice if this feature could be completed and merged as soon as possible. I'm implementing a custom protocol that makes extensive use of repeated structures.

lunixbochs commented 8 years ago

I've fixed packing and am currently working on unpacking.

lunixbochs commented 8 years ago

I just pushed an implementation of struct slice support for pack, unpack, and sizeof. Can you test it? I'd also like a performance report, as there are some places I can optimize if it's not fast enough for you.

quantum1423 commented 8 years ago

It's fast enough for my particular use case, but maybe not in general? It's not fast enough to saturate the network (30000 ns per operation), but that doesn't matter here, since that part of my system isn't I/O-bound (it's essentially an RPC protocol where the server-side processing is very much CPU-bound).

But it might not be acceptable if it were some sort of bulk data transfer protocol.

lunixbochs commented 8 years ago

Yeah, it should be closer to 1000-2000 ns/op, so I must not be caching the field parses. I'll leave this issue open as a reminder to optimize performance.

lunixbochs commented 8 years ago

Hmm, it is cached. How big are your struct slices? I added a benchmark against my nightmare case example struct (which you can run with go test -bench .) and got pretty reasonable numbers (with a slice of 8 nested structs):

BenchmarkFullEncode-4     200000          6534 ns/op
BenchmarkFullDecode-4     300000          4312 ns/op

Edit: New benchmark numbers. Packing structs should now be noticeably faster.

BenchmarkFullEncode-4     300000          4004 ns/op
BenchmarkFullDecode-4     300000          4180 ns/op
quantum1423 commented 8 years ago

It's the same struct I posted in the OP, tested with Neighs having a length of 10. The benchmark code is:

func BenchmarkStructs(b *testing.B) {
    var lol srvEdgeResp
    lol.Neighs = make([]Neighbor, 10)
    xaxa := new(bytes.Buffer)
    for i := 0; i < b.N; i++ {
        // Check both errors; the original version discarded Pack's error.
        if err := struc.Pack(xaxa, &lol); err != nil {
            b.Fatal(err)
        }
        if err := struc.Unpack(xaxa, &lol); err != nil {
            b.Fatal(err)
        }
    }
}
lunixbochs commented 8 years ago

Can you run your benchmark again on the latest master?

mpontillo commented 6 years ago

I encountered this issue as well in one of my projects; I had to work around it for now by reading the data with encoding/binary (writing with struc works). I tried to hack on struc a bit, but as this is my first Go project, I didn't get very far.

I'm happy to help test a fix (and possibly write test cases when I have time).
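The encoding/binary workaround mentioned above can be sketched with a simplified, fixed-size stand-in for the Neighbor struct from the OP (the wireNeighbor type and roundTrip helper here are illustrative names, not part of struc; encoding/binary handles slices of structs directly as long as every field has a fixed size):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// wireNeighbor is a simplified, fixed-size stand-in for the Neighbor struct
// from the issue; since every field has a fixed size, encoding/binary can
// write and read a slice of them directly.
type wireNeighbor struct {
	Address [4]byte
	Port    uint16
	Secret  [32]byte
	PubKey  [32]byte
}

// roundTrip writes a one-byte count followed by the slice (mirroring the
// uint8 sizeof=Neighs layout from the original struct), then reads it back.
func roundTrip(out []wireNeighbor) ([]wireNeighbor, error) {
	var buf bytes.Buffer
	buf.WriteByte(byte(len(out)))
	if err := binary.Write(&buf, binary.BigEndian, out); err != nil {
		return nil, err
	}
	n, err := buf.ReadByte()
	if err != nil {
		return nil, err
	}
	// Pre-size the destination slice from the count byte; binary.Read
	// fills a slice of fixed-size values in one call.
	in := make([]wireNeighbor, n)
	if err := binary.Read(&buf, binary.BigEndian, in); err != nil {
		return nil, err
	}
	return in, nil
}

func main() {
	in, err := roundTrip([]wireNeighbor{{Port: 80}, {Port: 443}})
	if err != nil {
		panic(err)
	}
	fmt.Println(in[0].Port, in[1].Port) // 80 443
}
```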