Kazadhum opened this issue 1 year ago
Hi @Kazadhum,
> To make the data collection a part of the pipeline, I need to make that process automatic, right? So I had an idea: add an optional argument to the collect_data script that would make it save a collection every X seconds, plus a timeout so that the process ends without user interaction.
> a) Is this a good idea, or worth exploring? b) If so, do you know if some sort of solution already exists that would solve this?
The philosophy of ATOM is to treat data collection and labeling as a manual procedure, since we want user intervention, so the collect data script is designed to be interactive.
It is possible to automate it; @JorgeFernandes-Git did it recently here: https://github.com/lardemua/atom/issues/554
The problem is that this automation is not general and needs to be tuned for each particular problem.
So I am not sure this is a path we want to go down, because even if we automate a collection procedure it would only cover a particular case. I would say that if you can implement something based on @JorgeFernandes-Git's example without much effort, go ahead. If it takes a lot of time, I am not sure we should go that way.
@miguelriemoliveira thank you for the feedback! I opened the issue here to determine if this was relevant before opening one on the original repo. But it looks like this is maybe not a great avenue to go down after all.
Perhaps the idea of calibrating a dataset is better? Did you do that already?
> Perhaps the idea of calibrating a dataset is better? Did you do that already?
@miguelriemoliveira Yes, I've recorded a quick video to show you, but maybe I should post it as an issue in the main ATOM repo and explain in further detail what I did and what I want to do in the future.
local_testing_calibration_evaluation.webm
So, on the left terminal, I'm running the Rigel `test` job, followed by the plugin I developed to look at the calibration evaluation file (thanks for the help @JorgeFernandes-Git). The Rigel `test` job launches two Docker containers, both containing all the software involved. The first, `simulation_and_robot`, simply launches a Gazebo instance with the `tripod..csv` file. On the right, I attach a terminal to the second container so I can take a look at what's happening and make sure the process goes smoothly.
As of yet, the `test` job still needs an introspection part, which is why there's a delay, but after the timeout, which I set to 30 seconds for this video, the introspection is run.
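Just to make the wait-then-introspect behavior concrete, here is a minimal sketch of that step. Everything here is hypothetical: the function names, the `run_introspection` stand-in, and the loop body are illustrations, not Rigel's actual API; only the 30-second timeout comes from the video description.

```python
import time

# Hypothetical constant: 30 s is the timeout used in the video.
INTROSPECTION_TIMEOUT_S = 30.0


def wait_then_introspect(timeout_s, run_introspection):
    """Wait for the configured timeout, then run the introspection step.

    `run_introspection` is a stand-in for whatever command or function
    performs the calibration evaluation inside the container.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # The real job would monitor the containers here instead of idling.
        time.sleep(0.01)
    return run_introspection()
```

A `test` job along these lines would explain the delay seen in the video: nothing happens until the deadline passes, and only then does the introspection run.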
Hi @Kazadhum,
looks good.
Hi @miguelriemoliveira and @rarrais!
So, while the next meeting doesn't come, I wanted to ask something. So here's where we're currently at in terms of integrating ATOM in this pipeline:
To make the data collection a part of the pipeline, I need to make that process automatic, right? So I had an idea: add an optional argument to the collect_data script that would make it save a collection every X seconds, plus a timeout so that the process ends without user interaction.
But that would mean changing the ATOM source code, so I wanted to ask:
a) Is this a good idea, or worth exploring?
b) If so, do you know if some sort of solution already exists that would solve this?
If you think this is a good idea, I'll create an issue in the original ATOM repo and explore it before the next meeting. I would be working on this fork, of course.
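For reference, the periodic-save idea could be sketched roughly like this. This is only a sketch under assumptions: `save_collection`, the parameter names, and the injectable `now`/`sleep` hooks are hypothetical stand-ins, not ATOM's actual collect_data API.

```python
import time


def auto_collect(save_collection, period_s, timeout_s,
                 now=time.monotonic, sleep=time.sleep):
    """Save a collection every `period_s` seconds until `timeout_s` elapses.

    `save_collection` is a hypothetical stand-in for the step that
    collect_data currently triggers interactively. `now` and `sleep`
    are injectable so the loop can be exercised without real waiting.
    """
    start = now()
    saved = 0
    while now() - start < timeout_s:
        save_collection(saved)  # save collection number `saved`
        saved += 1
        sleep(period_s)
    return saved
```

For example, with `period_s=2.0` and `timeout_s=10.0` this saves five collections and then returns without any user interaction, which is the behavior the optional argument would enable.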