zbynekwinkler opened this issue 4 years ago
https://github.com/robotika/osgar/pull/636#issuecomment-699623320 osgar/drivers/lora.py
https://github.com/robotika/osgar/pull/636#issuecomment-699623607 move ArtifactsReporter away from the subt/artifacts.py file.
https://github.com/robotika/osgar/pull/639#discussion_r495765685 delete confusing comment in cloudsim2osgar
https://github.com/robotika/osgar/pull/644#discussion_r498179875 refactor osgar.Node and all the different SomethingHandler classes in bus.py (when one is updated/changed, all of them need to be changed).
https://github.com/robotika/osgar/pull/657#issuecomment-704326791 refactor CommsClient out from ros_proxy_node and move the rest of it to cloudsim2osgar.py. CommsClient sends and receives bytes, so those bytes can carry msgpack-encoded data from osgar, and only osgar needs to be able to parse them (not the C++ CommsClient).
Follow left wall should not follow a wall on the right, and vice versa. Currently it finds the closest wall and then turns its left or right side to it. We had a nice counterexample in System Urban, where mobos was running in circles because of this.
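A minimal sketch of the intended fix (a hypothetical helper, not osgar's actual controller): the wall on the commanded side drives the control error, regardless of which wall happens to be closer.

```python
def wall_follow_turn(side, left_dist, right_dist, target=1.0, gain=0.5):
    """Angular velocity (rad/s, positive = turn left) for wall following.

    Hypothetical sketch: the error is computed only from the wall on the
    commanded side, so "follow left" can never lock onto the right wall.
    """
    assert side in ("left", "right")
    if side == "left":
        return gain * (left_dist - target)    # too far from the left wall -> turn left
    return -gain * (right_dist - target)      # too far from the right wall -> turn right
```

With the old "closest wall" rule, a robot commanded to follow the left wall but starting nearer the right one would track the wrong wall; here the commanded side always wins.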
Run the validator automatically on each and every cloudsim run (run it on some server, auto-download the logs, generate an HTML report, email it, publish it to a website).
Create our own cloudsim somewhere. We need at least 5 computers for it to be worth it: 1 with an nvidia GPU for simulation and 4 for robots (no GPU needed if we switch from pytorch to opencv). The computers don't have to be super fast; we could limit the RTF on the simulation side. The simulation is effectively able to use only 4 cores plus a GPU with 4 GB. If the robots won't need a GPU, 4 cores each might be enough. So about 20 CPU cores and one nvidia GPU in total.
https://github.com/robotika/osgar/pull/667#issuecomment-706516329 in zmqrouter log all uncaught exceptions in child processes
We don't need to switch from pytorch to opencv to avoid running on the GPU. All it takes is to say that we want to run on the CPU: https://github.com/robotika/osgar/blob/e4d77a3b642075d2c76d74d3de54cfed37473091/subt/artf_node.py#L90
Or simply not to have a GPU, in which case it falls back to the CPU automatically.
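The device-selection pattern can be sketched like this (a hedged sketch: pick_device and force_cpu are made-up names, not osgar's API; the fallback mirrors the logic in the linked artf_node.py line):

```python
import importlib.util

def pick_device(force_cpu=False):
    """Return the torch device string to use.

    Hypothetical helper: force_cpu corresponds to explicitly requesting
    device='cpu'; without it, we fall back to the CPU whenever CUDA
    (or torch itself) is unavailable.
    """
    if force_cpu or importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch  # imported lazily so the helper also works without torch installed
    return "cuda" if torch.cuda.is_available() else "cpu"
```

So for cloudsim hardware planning, running the detector with the CPU device is a one-line decision, not a framework switch.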
Cleanup zmq-subt-x4.json - it contains "mines" like ["rosmsg.orientation", "app.orientation"], which is not working, because subt/main.py is using self.orientation from pose3d.
Running two DNN detectors in sequence is not ideal for local development. I am getting so many delay errors that the console is unusable.
Also, the opencv DNN running on the CPU seems to allocate a nontrivial number of threads that compete for the CPU cores with everything else. That leads to unpredictable runtime behavior - the other CPU cores are meant for other modules, not for a greedy opencv. Such behavior also complicates planning for our own cloudsim and its hardware needs.
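One way to rein in the greedy thread pools (an assumption-laden sketch: the thread count of 2 is an arbitrary example, and whether a given OpenCV build honors OMP_NUM_THREADS depends on how it was compiled):

```python
import os

# Cap OpenMP workers *before* heavyweight imports; OpenCV's CPU kernels
# honor this variable when built with OpenMP.
os.environ["OMP_NUM_THREADS"] = "2"  # example value, tune per machine

try:
    import cv2
    cv2.setNumThreads(2)  # also cap OpenCV's own internal thread pool
except ImportError:
    pass  # cv2 not installed here; nothing to configure
```

Pinning the detector to a known number of threads would make the per-robot core budget for our own cloudsim predictable.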
#667 (comment) in zmqrouter log all uncaught exceptions in child processes
I thought that when a node crashes, the whole thing is taken down, but that is not true. The crash goes unnoticed. The only time the whole thing stops is when one of the nodes stops regularly. For example, an exception thrown from __init__ goes totally unnoticed (except for a message to the console, which on cloudsim means /dev/null).
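A possible remedy, sketched below (a hypothetical class, not what zmqrouter currently does): wrap multiprocessing.Process so that any uncaught exception in a child is written to the log before the process dies, instead of vanishing into the console.

```python
import logging
import multiprocessing
import traceback

class LoggedProcess(multiprocessing.Process):
    """Child process that logs uncaught exceptions instead of letting
    them disappear (i.e. into /dev/null on cloudsim)."""

    def run(self):
        try:
            super().run()
        except Exception:
            logging.getLogger("zmqrouter").error(
                "uncaught exception in %s:\n%s",
                self.name, traceback.format_exc())
            raise  # keep the non-zero exit code visible to the parent
```

The parent can then check exitcode after join() and take the whole run down when a child died, which covers the __init__ case too.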
https://github.com/robotika/osgar/pull/693#discussion_r505403380 we should introduce something like --draw profile or --draw delay ... i.e. an optional --draw extra parameter (there are other modules where I am also commenting out graphs of different variables).
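The proposed flag could look roughly like this (a sketch: the choice names profile and delay come from the comment above, everything else is assumed):

```python
import argparse

parser = argparse.ArgumentParser(description="log drawing tool (sketch)")
# Optional extra graph; omitting --draw keeps the default plots only,
# so no graphs need to be commented in and out in the code.
parser.add_argument("--draw", choices=["profile", "delay"], default=None,
                    help="draw the selected extra graph on top of the defaults")

args = parser.parse_args(["--draw", "delay"])
```

Each module could register its own extra graph names instead of the hard-coded pair used here.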
The follow-left-wall problem again, like here: https://github.com/robotika/subt-artf/issues/55#issuecomment-709548658
Add a unit test for subt.drone setting height: https://github.com/robotika/osgar/pull/702#pullrequestreview-509854815
Add capability to report multiple artifacts from a single image. Our detectors can handle multiple objects of interest in the same scene, our reporting cannot.
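A sketch of what multi-artifact reporting might carry (Detection and report_artifacts are made-up names; the real reporting pipeline differs): one list of detections per image instead of a single answer.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    artf_type: str  # e.g. "TYPE_BACKPACK"
    x: int          # image coordinates of the detection
    y: int

def report_artifacts(detections: List[Detection]) -> List[Tuple[str, int, int]]:
    """Turn all detections from one image into individual reports,
    instead of keeping only a single artifact per frame."""
    return [(d.artf_type, d.x, d.y) for d in detections]
```

The detectors already produce such lists; the change is only on the reporting side, which currently drops everything past the first hit.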
Wait until ROS starts up https://github.com/robotika/osgar/pull/713#discussion_r510239114
System Track robots do not have a working artifact detector - change from the 2D answer to extended info with a 3D relative position. See #735
I think we should figure out a way to use the information provided by the service that returns the robot's offset from the artifact origin to limit where artifacts may be submitted from, and thus not report artifacts until we get the offset. That would remove the need for the constants defining the staging area. https://github.com/robotika/osgar/pull/738#issuecomment-723906923
Please add here all the stuff about which we say "let's merge now anyway, fix it later".