strands-project / trajectory_behaviours

Applies qualitative spatio-temporal relationships between detected trajectories and static objects, per ROI.

Option to limit number of trajectories when learning #4

Open · hawesie opened 9 years ago

hawesie commented 9 years ago

This is to limit the time it takes to learn the activity models, so the learning can happen overnight in parallel with everything else. The limit could be a number of trajectories or a number of days of data.

We could also look at running the learning on the static PC.
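For illustration, a minimal sketch of the two proposed limits, assuming plain pymongo rather than the ROS mongodb_store wrapper; the collection and field names (`trajectories`, `start_time`) and the port are placeholders for whatever the store actually uses:

```python
# Minimal sketch of limiting the learning data (plain pymongo assumed;
# collection/field names and port are hypothetical placeholders).
from datetime import datetime, timedelta
import pymongo

client = pymongo.MongoClient("localhost", 62345)  # port is an assumption
coll = client["message_store"]["trajectories"]    # hypothetical collection

# Limit by number of trajectories: take the N most recent.
MAX_TRAJECTORIES = 500
recent = coll.find().sort("start_time", pymongo.DESCENDING).limit(MAX_TRAJECTORIES)

# Limit by days of data: only trajectories from the last N days.
DAYS = 3
cutoff = datetime.utcnow() - timedelta(days=DAYS)
windowed = coll.find({"start_time": {"$gte": cutoff}})
```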

PDuckworth commented 9 years ago

@hawesie: if it is possible to connect a PC to the robot's MongoDB store (I don't know how to do this, but you seem to have done it with Bob), then this should be simple. All the learning data is in MongoDB, and the learnt models could be saved there too.
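If the store is reachable over the network, saving a learnt model back could look like the following sketch, again assuming plain pymongo; the hostname, port, collection name, and pickle-based serialisation are all illustrative, not the project's actual scheme:

```python
# Minimal sketch of persisting a learnt model into MongoDB (all names
# hypothetical; serialisation via pickle is just one option).
import pickle
import pymongo
from bson.binary import Binary

client = pymongo.MongoClient("robot-hostname", 62345)  # hypothetical host/port
models = client["message_store"]["activity_models"]    # hypothetical collection

learnt_model = {"threshold": 0.7, "clusters": [[0.1, 0.2], [0.3, 0.4]]}  # stand-in object
models.replace_one(
    {"name": "activity_model_latest"},
    {"name": "activity_model_latest", "blob": Binary(pickle.dumps(learnt_model))},
    upsert=True,  # overwrite the previous model, keeping one current copy
)
```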

PDuckworth commented 9 years ago

@hawesie Question: is there a "best time" to upload partial Activity Graphs into MongoDB?

My idea is: when a new trajectory is detected, I run it through QSRLib, generate an Activity Graph, compare it to the learnt models, and decide whether it is novel or not (all within half a second or so).

But during the "offline learning" stage, I pull the trajectories from MongoDB and re-compute the Activity Graph for each of them, including trajectories already processed online. I want to store (at least most of) the information about each Activity Graph in MongoDB so I don't duplicate that work during the offline learning stage; with this method I could even remove the offline learning stage altogether.

But should I upload it in the novelty_server, in the novelty_client, or somewhere else?

Ferdian already pointed me to the MongoDB client here.
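For illustration, the compute-once idea could look like the following sketch, assuming plain pymongo; `compute_activity_graph`, the collection, and the field names are hypothetical stand-ins for the real QSRLib pipeline:

```python
# Minimal sketch of caching Activity Graphs keyed by trajectory id, so the
# offline learner can fetch them instead of recomputing (names hypothetical).
import pickle
import pymongo
from bson.binary import Binary

client = pymongo.MongoClient("localhost", 62345)     # hypothetical host/port
graphs = client["message_store"]["activity_graphs"]  # hypothetical collection

def compute_activity_graph(trajectory):
    # Stand-in for the real QSRLib-based step; returns a placeholder graph.
    return {"nodes": len(trajectory), "edges": []}

def get_activity_graph(traj_id, trajectory):
    """Return the Activity Graph for traj_id, computing and uploading it
    only the first time it is seen."""
    cached = graphs.find_one({"traj_id": traj_id})
    if cached is not None:
        return pickle.loads(cached["graph"])
    graph = compute_activity_graph(trajectory)
    graphs.insert_one({"traj_id": traj_id, "graph": Binary(pickle.dumps(graph))})
    return graph
```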

hawesie commented 9 years ago

You can do it any time or place you want. It shouldn’t be a problem whatever you choose.

PDuckworth commented 9 years ago

@hawesie Thanks, I now upload the Activity Graph in the novelty server. This is a bit of a game changer with respect to the Offline Learning actionlib server. Are we OK to leave it as is for the deployment, and then think about making the learning online (and iterative) afterwards?