jonbinney / deep_rabbit_hole


Come up with a location convention for output files #18

Open jonbinney opened 2 months ago

jonbinney commented 2 months ago

We keep training and test data in the "datasets" folder, so maybe a similar convention for output files would help? At the moment, annotations made by the inference script go into the folder for the dataset they were created from, as do text descriptions, while visualization videos go into the object_tracker_0 directory. This works pretty well, but a few thoughts:

I've been mulling over something like the following:

The idea is a bit similar to "out of tree" builds vs. in-tree builds. Scripts would also eventually write files into the output directory that record the hyperparameters they were run with.
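For concreteness, here is a minimal sketch of what such a convention could look like. Everything in it is a placeholder for discussion, not a settled proposal: the `outputs/` root, the run-name format, `params.json`, and the example parameter name are all made up.

```python
import json
from datetime import datetime
from pathlib import Path


def make_output_dir(dataset_name: str, script_name: str, params: dict) -> Path:
    """Create a per-run output directory outside the dataset tree and record
    the hyperparameters the script was run with (all names are hypothetical)."""
    run_id = datetime.now().strftime("%Y%m%d_%H%M%S")
    out_dir = Path("outputs") / dataset_name / f"{script_name}_{run_id}"
    out_dir.mkdir(parents=True, exist_ok=True)
    # Keep the hyperparameters next to the artifacts they produced.
    with open(out_dir / "params.json", "w") as f:
        json.dump(params, f, indent=2)
    return out_dir


# An inference script would then write annotations, text descriptions, and
# visualization videos here instead of into the dataset folder itself.
out_dir = make_output_dir("my_dataset", "inference", {"confidence_threshold": 0.5})
```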

@adamantivm @alejandromarcu thoughts?

adamantivm commented 2 months ago

I would be fine with this approach, but note that tools like MLFlow, at least as far as I understand, exist exactly for this purpose: to store results with their corresponding artifacts and input parameters so that they can be more easily organized, understood, and reviewed later.
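For reference, this is roughly what that looks like with MLflow; the experiment name, parameter, and file names below are made up for illustration:

```python
import mlflow

mlflow.set_experiment("deep_rabbit_hole")  # hypothetical experiment name

with mlflow.start_run(run_name="inference"):
    # Record the hyperparameters the script was run with.
    mlflow.log_params({"confidence_threshold": 0.5})
    # Attach output files (annotations, videos, ...) as artifacts of the run.
    mlflow.log_artifact("annotations.json")
    mlflow.log_artifact("visualization.mp4")
```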

I wouldn't be opposed to a PR and some guidance for storing the results in our bucket using criteria of our own design, but I have a small preference for trying out one of these tools instead, maybe Weights & Biases; even if they take a bit more work at the beginning, they should help us understand the state of the art.
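The Weights & Biases version is similar; again, the project name, config values, and file names are placeholders:

```python
import wandb

run = wandb.init(
    project="deep_rabbit_hole",            # hypothetical project name
    config={"confidence_threshold": 0.5},  # hyperparameters the script was run with
)

# Metrics logged during the run show up in the W&B dashboard ...
wandb.log({"num_annotations": 42})
# ... and output files can be attached to the run for later inspection.
wandb.save("annotations.json")
run.finish()
```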

I can volunteer for this as a next step if you all are interested in this path; we can discuss it on Monday.

alejandromarcu commented 2 months ago

I agree we need better organization, and I think it would be nice to try Weights & Biases or other tools, or even just to see whether there are standard ways of doing this, so that we are not reinventing the wheel.

jonbinney commented 2 months ago

Sounds good!