Closed: FrankSpalteholz closed this issue 3 years ago.
In the Dkvfx project, I used the HAP video codec to stream point clouds.
https://github.com/keijiro/Dkvfx
There is another similar project that uses VAT (vertex animation textures).
https://github.com/keijiro/HdrpVatExample
> Is this practical/fast enough in theory?
Yes, I think so.
I'm closing this issue now. Please feel free to reopen it if you run into further problems.
P.S. There is yet another similar project with Alembic.
https://github.com/keijiro/Abcvfx
I'm sure that it's not practical for mobiles, though.
Alembic is indeed too slow, and I'd like to stick to my pipeline using .ply, so let me ask you something else if you don't mind. After checking the map array that is produced by baking the .ply, I see it's floating point (with negative values, of course, depending on where the 3D point lives). I've tried to export this position array/map without losing floating-point precision and failed (PNG clamps and reduces to 8 bit, and the .NET exporter for TIFF didn't work for me either).

I wanted to run some tests using just this single 16-bit image/array in a compositing tool and export it via FFmpeg or the HAP codec back to Unity, to see at least how fast it would be. But after checking your HAP example, I realized that your test.mov video is also "just" 24 bit, i.e. 8 bits per channel, which makes 256 values per x, y, z... but I must be wrong about something, because your example clearly shows floating-point precision somewhere. Where does it come from? Did I miss something? Thanks again, and especially for the lightning-speed response!
Try .exr instead of .png/.tiff.
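For example, a minimal sketch with the Python OpenEXR bindings (assuming `pos` is an HxWx3 float32 NumPy array of baked positions; the names are placeholders, not part of any existing pipeline):

```python
# Sketch: write an HxWx3 float32 position map losslessly to a 32-bit
# float .exr using the Python OpenEXR/Imath bindings.
import numpy as np
import OpenEXR
import Imath

def write_position_exr(pos: np.ndarray, path: str) -> None:
    h, w = pos.shape[:2]
    pos = pos.astype(np.float32)
    # 32-bit float channels; Imath.PixelType.HALF would give 16-bit floats.
    pixel = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
    header = OpenEXR.Header(w, h)
    header["channels"] = {"R": pixel, "G": pixel, "B": pixel}
    exr = OpenEXR.OutputFile(path, header)
    exr.writePixels({
        "R": pos[..., 0].tobytes(),  # x positions
        "G": pos[..., 1].tobytes(),  # y positions
        "B": pos[..., 2].tobytes(),  # z positions
    })
    exr.close()
```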
> I realized that your test.mov video is also "just" 24 bit, i.e. 8 bits per channel, which makes 256 values per x, y, z
No. It converts depth (float) -> hue (float) -> r, g, b (unorm8x3). In other words, it encodes a floating-point value in 24 bits of data.
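To illustrate the principle, here is a rough Python sketch of such a hue encoding (an illustration of the idea only, not the actual shader code from the repo):

```python
# Sketch: a normalized float depth is mapped to a hue, and the hue is
# stored as fully saturated 8-bit RGB. The hue ramp walks through many
# distinct RGB triplets, so precision is much finer than 256 steps.
import numpy as np

def hue_to_rgb(h):
    """HSV -> RGB with S = V = 1; h in [0, 1)."""
    r = np.clip(np.abs(h * 6.0 - 3.0) - 1.0, 0.0, 1.0)
    g = np.clip(2.0 - np.abs(h * 6.0 - 2.0), 0.0, 1.0)
    b = np.clip(2.0 - np.abs(h * 6.0 - 4.0), 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def encode_depth(depth01):
    """Encode normalized depth as unorm8x3 via the hue ramp."""
    return np.round(hue_to_rgb(depth01) * 255.0).astype(np.uint8)

def rgb_to_hue(rgb):
    """Inverse mapping: recover the hue from a saturated RGB triplet."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = np.maximum(np.maximum(r, g), b)
    minc = np.minimum(np.minimum(r, g), b)
    d = np.where(maxc > minc, maxc - minc, 1.0)  # avoid division by zero
    return np.where(maxc == r, ((g - b) / d) % 6.0,
           np.where(maxc == g, (b - r) / d + 2.0,
                               (r - g) / d + 4.0)) / 6.0
```

Because neighboring hues change all three 8-bit channels at once, the round trip through this encoding preserves far more than 256 distinct depth levels.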
Hi Keijiro,
I'm sorry in case this question has already been asked (I couldn't find any similar issue). I'm creating my own .ply files in Houdini (converted to binary) and using them successfully in VFX Graph. I'd also like to create videos -> render textures as input for the position and color maps. Right now it's just one Texture2D per attribute, but I'd like to animate it through a Houdini-rendered image sequence/video. Is this practical/fast enough in theory? Have you done something similar already?
Thank you so much for this repo specifically and ALL your examples in general. I've learned so much from them!
Best wishes, Frank
Edit: It's maybe worth noting that my target platform is Android (Quest 2). I was already able to use LWRP + two videos (720x720 pixels as .mp4): one video for RGB and another, a grey version of it, as "fake" depth. That works great with about 500k pixels, more or less, depending on how much extra magic I put on top within the graph. But the target platform (Quest 2) is a fixed constraint.
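For reference, a sketch of how such a grey "fake depth" companion video could be derived from the RGB source with FFmpeg (file names are placeholders, and the exact encoder settings depend on your pipeline):

```python
# Sketch: desaturate the RGB source video with FFmpeg's hue filter to
# produce the grayscale "fake depth" companion video.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "color_720.mp4",
    "-vf", "hue=s=0",                     # drop saturation -> luminance only
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "fake_depth_720.mp4",
], check=True)
```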