brian2lee opened 1 year ago
Hi, can you provide the device type you used for the capture? We have tested the capture app on the iPhone 12 and 14 Pro Max with LiDAR and found it works well in both train and test capture modes.
I'm working on an iPad Pro (12.9-inch, 3rd generation), iOS 15.5, with OnePose Capture app ver. 1.4.0. I posted an issue about this when I was trying to work with OnePose, then decided to switch to OnePose++; I'm still using the same setup for data capture. I've already obtained the training data, so either a fix for this issue or another way to create my own test data would be appreciated. Sorry for the late reply and for any silly mistakes I may have made, since I'm new to this and struggling with all kinds of problems here and there.
Hi @brian2lee, were you able to obtain the "annotated" data through the app? I am on an iPhone 14 Pro, but even my screen freezes, or the "uploading" takes forever after capturing the data. I would greatly appreciate your help.
@AnukritiSinghh As I mentioned in the issue, I was eventually able to obtain the annotated data, but the app freezes most of the time. I got the data through many retries and some luck.
Is there any update regarding the test mode of the app? After stopping the recording it just turns back to "ready" and does not generate any link for the captured data. @brian2lee did you manage to capture any test data?
@mkatras11 Not yet, still waiting for the developers' reply. If there's any further progress I'll update in the comments. But I'm new to this field, so better not to put too much hope in me.
Hi, sorry for the late reply. We are pretty busy with another project. We found that uploading may be unstable when the app is used outside mainland China. We are trying to fix the problem!
@hxy-123 Thanks for letting us know. Can you share exactly what kind of data is needed for "testing"? It will help us with our own dataset preparation. Thanks for the help!
@AnukritiSinghh To my understanding, you only need a sequence of RGB images for testing. You can use the annotation tool and keep only the extracted images. However, you don't get any ground-truth information for the testing part, only the prediction.
Thanks for your reply @mkatras11, looks like they need the frames.txt file as well. Do you know how to get that for the test files? Appreciate your help!!
@AnukritiSinghh If you make a recording with the annotation tool of the OnePose Cap app, you get a .zip file that includes a file called frames.txt (check the image below).
Then you can use the script called parse_scanned_data.py to process it into a format usable for running inference with OnePose++.
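Since the advice above amounts to "keep only the extracted images plus frames.txt", here is a minimal sketch of pruning a capture archive down to just those files. The archive layout and the helper name `unpack_capture` are assumptions for illustration, not the app's documented format; `parse_scanned_data.py` from the repo is still what you'd run afterwards.

```python
import zipfile
from pathlib import Path

def unpack_capture(zip_path, out_dir, keep_exts=(".png", ".jpg")):
    """Extract only frames.txt and the RGB frames from a capture .zip.

    Assumes (not verified against the app) that the archive contains
    frames.txt plus image files; everything else is skipped.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    kept = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            base = Path(name).name
            if base == "frames.txt" or base.lower().endswith(keep_exts):
                zf.extract(name, out_dir)
                kept.append(name)
    # Fail early if the recording is missing the file the thread mentions.
    if not any(Path(n).name == "frames.txt" for n in kept):
        raise FileNotFoundError("capture is missing frames.txt")
    return kept
```

The extracted folder can then be fed to `parse_scanned_data.py` as described above.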
Thank you so much!! @mkatras11
Hi @mkatras11, do you by any chance know if OnePose++ or OnePose works for multi-object pose estimation? I mean, will it work if I have several objects in my frame? Thanks!!
Hi @AnukritiSinghh, it is indeed possible to make OnePose++ work with multiple objects in one scene. Although I haven't implemented it myself, I have seen OnePose used in such scenarios, and I am confident the same can be achieved with OnePose++. To accomplish this, you would probably need to train an object detector first to identify the specific objects present in the scene. Once the detector can reliably recognize the objects of interest, you can run the inference part of OnePose/OnePose++ on each of the corresponding objects it identifies. Overall, with the right adjustments to the code, this task is achievable. Just keep in mind that training an object detector and integrating it with OnePose++ might require some effort, but it is certainly doable.
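The recipe above (run a detector once, then single-object inference per detection) can be sketched as below. `detect_objects` and `estimate_pose` are hypothetical stand-ins, not actual OnePose++ APIs; a real pipeline would plug a trained detector and the OnePose++ inference code into those slots.

```python
def detect_objects(frame):
    # Placeholder detector (assumption): returns (label, bbox) pairs,
    # with bbox as (x0, y0, x1, y1). A real pipeline would call a
    # trained detector, e.g. a YOLO model, here.
    return [("mug", (0, 0, 2, 2)), ("box", (1, 1, 3, 3))]

def estimate_pose(crop):
    # Placeholder for single-object OnePose++ inference on one crop;
    # here it just reports the crop size instead of a 6DoF pose.
    return {"h": len(crop), "w": len(crop[0]) if crop else 0}

def multi_object_poses(frame, detector=detect_objects, pose_fn=estimate_pose):
    """Detect every object in the frame, crop it out, and run
    per-object pose estimation on each crop."""
    poses = {}
    for label, (x0, y0, x1, y1) in detector(frame):
        crop = [row[x0:x1] for row in frame[y0:y1]]
        poses[label] = pose_fn(crop)
    return poses
```

The key design point is that the detector and the pose estimator stay decoupled: the loop only needs labels and boxes, so either component can be swapped without touching the other.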
I ran into several issues while using the app. First, in annotate mode the screen froze several times when I tried to capture the training data. The app isn't stable enough, so I had to repeat the process many times to generate a successful dataset link. Secondly, I haven't yet successfully captured any test data. I followed the instructions mentioned in the app, but after pressing stop recording it just turns back to "ready" without generating any links.