Encoding and decoding are tested and working on Linux and iOS 15.3. I'm getting a crash on Windows; not sure what the problem is there yet.
I need to commit a few more changes before any pull request:
- Options for frame width/height, compression strategy, and a path to a compression dictionary should probably be loaded in through YAML. I'm thinking a constructor overload similar to how `LibAvEncoder` works.
- Codec parameters & `GetCodecParamsStruct()` on the decoder are not implemented yet.
- `HasNextPacket()` on the decoder is not implemented, but from the other examples it seems like this isn't so strict?
- FPS is not implemented, but I'm not sure it matters for this codec type.
- The `zstd.h` header file is just assumed to be available because the ZDepth project is pulling it in. The Zstd library itself should be added to the project dependencies, CMakeLists, etc.
- I'm currently initialising the codec context in the class constructors. I wasn't sure if this is the correct practice on this project? I see that `NvEncoder` runs `BuildEncoder()` in its constructor, initialising the context, but `LibAvEncoder` doesn't seem to initialise a context until it starts preparing frames.
- I'm only unpacking the `cv::Mat` into type `CV_32FC1` at the moment, but this needs to be selected depending on the frame type (probably determining this from the codec parameters struct?).
- I am currently loading a dictionary file for zstd that was created using the `zstd --train` program. The dictionary I'm using is trained on around 10 minutes of iOS depth footage. I haven't included this binary file in the repository yet. Maybe it should be up to the user to train and load a dictionary using their own dataset? Or maybe some dictionaries should be distributed in the repo itself for common frame types like IR, depth, confidence, etc.? I intend to make the dictionary optional, however, since zstd can also run in a simpler (less efficient) mode without a dictionary.
I'll keep posting to this issue as I make changes...
Let me know your initial thoughts if anyone has a chance to take a quick look!
> Options for frame width/height, compression strategy and a path to a compression dictionary should probably be loaded in through YAML. I'm thinking a constructor overload similar to how `LibAvEncoder` works.

Works.
> Codec parameters & `GetCodecParamsStruct()` on the decoder is not implemented yet.

> `HasNextPacket()` on the decoder is not implemented, but it seems like from the other examples this isn't so strict?

Correct.
> FPS is not implemented, but I'm not sure it matters for this codec type.

Correct.
> The `zstd.h` header file is just assumed to be available because the ZDepth project is pulling it in. The Zstd library itself should be added to the project dependencies, CMakeLists, etc.

Is this so we don't have to rely on the ZDepth project?
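Making the dependency explicit in CMake could look something like the fragment below. This is only a sketch: the target name `ssp_encoders` is an assumption, not the project's actual target, and depending on how zstd is installed a `find_package(zstd CONFIG)` call may be preferable:

```cmake
# Locate a system-installed libzstd and its header.
find_library(ZSTD_LIBRARY zstd)
find_path(ZSTD_INCLUDE_DIR zstd.h)

# "ssp_encoders" is a placeholder for whichever target uses zstd.h.
target_include_directories(ssp_encoders PRIVATE ${ZSTD_INCLUDE_DIR})
target_link_libraries(ssp_encoders PRIVATE ${ZSTD_LIBRARY})
```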
> I'm currently initialising the codec context in the class constructors. I wasn't sure if this is the correct practice on this project? I see that `NvEncoder` runs `BuildEncoder()` in its constructor, initialising the context, but `LibAvEncoder` doesn't seem to initialise a context until it starts preparing frames.

Let's follow the `LibAvEncoder` approach, as `NvEncoder` is not supported (due to restrictions).
> I'm only unpacking the `cv::Mat` into type `CV_32FC1` at the moment, but this needs to be selected depending on the frame type (probably determining this from the codec parameters struct?).

Correct; the business logic for this lives in `FrameStructToMat`.
> I am currently loading a dictionary file to use with zstd that is created using the `zstd --train` program. The dictionary I'm using is trained on around 10 minutes of iOS depth footage. I haven't included this binary file in the repository yet. Maybe it should be up to the user to train and load a dictionary using their own dataset? Or maybe there should be some dictionaries distributed in the repo itself for common frame types like IR, depth, confidence etc.? I intend on making the dictionary optional however, since zstd can also run in a simpler (less-efficient) mode without using a dictionary.

We can add an optional step in the GitBook describing how to host a specific dictionary, if the user would like.
This is my initial attempt at setting up Zstandard for frame encoding and decoding:
https://github.com/eidetic-av/Sensor-Stream-Pipe/commit/fb58d85f54a3c269ab7ddba9808aa85e18248527
I am primarily using this to stream iOS depth frames, but zstd could be frame-type agnostic.
At the moment it is used like any other `IEncoder`/`IDecoder`.