Map-A-Droid / MAD

Map PoGo stuff with Android devices

Device loop with new database #835

Closed: kamieniarz closed this issue 3 years ago

kamieniarz commented 4 years ago

When you create a new database and add a route manually (without an init scan) to a mon_mitm area that has no spawnpoints yet, the devices will be stuck in a loop until the first spawnpoint within the area range appears in the DB.

Temporary fix: if you have your own route saved, launch init mode for a minute or so (so the device can get some spawnpoints into the DB), then update the area route and restart MAD. The devices should be out of the loop and start scanning your route. Adding spawnpoints manually to the DB should work too.
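For anyone who wants to verify this state before applying the workaround, here is a minimal sketch that counts the spawnpoints inside the area's bounding box. It assumes MAD's MySQL schema exposes a `trs_spawn` table with `latitude`/`longitude` columns; the bounding box, credentials and connection details are placeholders, not taken from this issue.

```python
# Hedged sketch: count spawnpoints inside the area's bounding box before relying
# on a manually added mon_mitm route. The table/column names (trs_spawn,
# latitude, longitude) and the connection details are assumptions/placeholders.
import pymysql

# Rough bounding box of the geofence (placeholder coordinates).
LAT_MIN, LAT_MAX = 52.38, 52.44
LON_MIN, LON_MAX = 16.88, 16.98

conn = pymysql.connect(host="127.0.0.1", user="mad", password="secret", database="mad")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT COUNT(*) FROM trs_spawn "
            "WHERE latitude BETWEEN %s AND %s AND longitude BETWEEN %s AND %s",
            (LAT_MIN, LAT_MAX, LON_MIN, LON_MAX),
        )
        (count,) = cur.fetchone()
        print(f"Spawnpoints known inside bounding box: {count}")
        if count == 0:
            print("Run init mode briefly (or seed trs_spawn) before switching to your own route.")
finally:
    conn.close()
```

If the count is zero, that matches the looping behaviour described above: run init briefly (or seed the table) before switching back to your own route.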

sn0opy commented 4 years ago

If you add your own route, the device should actually walk your route regardless of whether there's a spawnpoint in the DB or not. What I think happened is: you created the geofence, created the area, hit save and hit APPLY SETTINGS; MAD then tried to generate a route but got stuck because there was nothing to generate a route from, and THEN you added your own route without hitting APPLY SETTINGS again?

kamieniarz commented 4 years ago

I didn't use APPLY SETTINGS here, and I actually never use it - sometimes it doesn't work.

What I did was: new DB, start MAD, add geofence, add area, add walker, add devices and other settings, set all routes in the areas, restart MAD - device loop. I restarted MAD a few times and only the areas whose devices had startcoords got out of the loop. The other devices stayed in the loop until the first spawnpoints appeared within their geofence.

sn0opy commented 4 years ago

I can only think of one scenario where this might occur: you use the prio queue with starve_route enabled. Otherwise, MAD will use your route and walk through it. Which mode did you use for this? Could you provide logs, preferably directly here on GitHub?
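For context, a hedged sketch of how one might check a mappings.json for that scenario; the file layout assumed here (a top-level "areas" list with a "settings" sub-dict and the keys `mode` and `starve_route`) may differ between MAD versions.

```python
# Hedged sketch: list mon_mitm areas with starve_route enabled, the one scenario
# mentioned above. The mappings.json layout and key names are assumptions.
import json

with open("configs/mappings.json") as fh:
    mappings = json.load(fh)

for area in mappings.get("areas", []):
    if area.get("mode") != "mon_mitm":
        continue
    settings = area.get("settings", {}) or {}
    print(f"{area.get('name', '<unnamed>')}: starve_route={settings.get('starve_route', False)}")
```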

kamieniarz commented 4 years ago

I don't use the prio queue. Also, the logs had no info about calculating a route, so MAD was aware of the route I added. I'm 100% sure that spawnpoints appearing is what triggered the scan to launch. My logs are without debug mode and a big mess because of almost 50 devices, but this is basically how it looked:


```
[04-29 14:36:28.79] [MainProcess|device001] [   RouteManagerBase:570 ] [    INFO] Failed updating routepools after adding a worker to it
[04-29 14:36:28.79] [MainProcess|device001] [   RouteManagerBase:206 ] [    INFO] Worker device001 unregistering from routemanager poznan-centrum
[04-29 14:36:28.79] [MainProcess|device001] [   RouteManagerBase:209 ] [    INFO] Deleting old routepool of device001
[04-29 14:36:28.80] [           device001] [         WorkerBase:326 ] [    INFO] Internal cleanup of device001 started
[04-29 14:36:28.80] [           device001] [         WorkerBase:329 ] [    INFO] Internal cleanup of device001 signalling end to websocketserver
[04-29 14:36:28.80] [           device001] [         WorkerBase:332 ] [    INFO] Stopping worker's asyncio loop
[04-29 14:36:28.80] [           device001] [       communicator:31  ] [    INFO] Communicator of device001 calling exit to cleanup worker in websocket
```
sn0opy commented 4 years ago

I am able to reproduce it. Steps to reproduce:

```
[04-29 16:43:03.18] [MainProcess|$origin] [   RouteManagerBase:550 ] [    INFO] Starting routemanager temp in get_next_location
[04-29 16:43:03.18] [MainProcess|$origin] [  RouteManagerRaids:61  ] [    INFO] Starting routemanager temp
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:245 ] [    INFO] Try to activate PrioQ thread for route temp
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:268 ] [    INFO] Cannot activate Prio Q - maybe wrong mode or delay_after_prio_event is null
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:847 ] [    INFO] No more coords - breakup
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:570 ] [    INFO] Failed updating routepools after adding a worker to it
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:205 ] [    INFO] Worker $origin unregistering from routemanager temp
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:209 ] [    INFO] Deleting old routepool of $origin
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:221 ] [    INFO] Routemanager temp does not have any subscribing workers anymore, calling stop
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:168 ] [    INFO] Adding route temp to queue
[04-29 16:43:03.20] [MainProcess|$origin] [  RouteManagerRaids:74  ] [    INFO] Shutdown Route temp
[04-29 16:43:03.20] [MainProcess|$origin] [   RouteManagerBase:174 ] [    INFO] Shutdown of route temp completed
[04-29 16:43:03.21] [            $origin] [         WorkerBase:326 ] [    INFO] Internal cleanup of $origin started
[04-29 16:43:03.21] [            $origin] [         WorkerBase:328 ] [    INFO] Internal cleanup of $origin signalling end to websocketserver
[04-29 16:43:03.21] [            $origin] [         WorkerBase:332 ] [    INFO] Stopping worker's asyncio loop
[04-29 16:43:03.21] [            $origin] [       communicator:30  ] [    INFO] Communicator of $origin calling exit to cleanup worker in websocket
[04-29 16:43:03.26] [          scanner] [    WebsocketServer:313 ] [ WARNING] Connection to $origin was closed, stopping receiver. Exception:
```
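Condensed, the trace shows: the route has no coordinates, so registering the worker fails ("No more coords - breakup", "Failed updating routepools after adding a worker to it"), the worker unregisters and cleans itself up, the device reconnects, and the cycle repeats. A compressed, hypothetical sketch of that control flow (the class and method names are illustrative, not MAD's actual RouteManagerBase/WorkerBase API):

```python
# Hypothetical, compressed sketch of the failure path seen in the log above.
# Names (RouteManagerSketch, register_worker, cleanup, ...) are illustrative
# only and do not claim to match MAD's real code.
class RouteManagerSketch:
    def __init__(self, route):
        self._route = list(route)   # ends up empty when no coords could be built
        self._workers = set()

    def register_worker(self, origin):
        if not self._route:
            # Mirrors "No more coords - breakup" /
            # "Failed updating routepools after adding a worker to it".
            return False
        self._workers.add(origin)
        return True


class WorkerSketch:
    def __init__(self, origin, routemanager):
        self.origin = origin
        self.routemanager = routemanager

    def start(self):
        if not self.routemanager.register_worker(self.origin):
            # Mirrors "Worker ... unregistering" followed by internal cleanup;
            # the device then reconnects and the whole loop repeats.
            self.cleanup()
            return False
        return True

    def cleanup(self):
        print(f"Internal cleanup of {self.origin} started")


if __name__ == "__main__":
    rm = RouteManagerSketch(route=[])       # no spawnpoints -> empty route
    WorkerSketch("device001", rm).start()   # fails, cleans up, device loops
```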
muckelba commented 4 years ago

I had that issue as well a few weeks ago. Exact same scenario.

Grennith commented 4 years ago

I guess one way to tackle it would be to set init to true if a route is initially empty. However, this may just cause issues with corrupt configurations, e.g. a geofence that is basically too small to handle (is that possible?) or an area that will never yield any locations (scanning the ocean?), and thus end up looping forever in "init: true".
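As a rough illustration of that idea, here is a hedged sketch of the fallback; all names are hypothetical, and the corrupt-configuration caveat is handled by bailing out after one init attempt rather than looping forever.

```python
# Hedged sketch of the suggested fallback: if a calculated route comes out
# empty, flip the area into init mode instead of letting workers loop.
# Area, build_route and calculate_route are hypothetical, not MAD's real code.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Coord = Tuple[float, float]

@dataclass
class Area:
    name: str
    init: bool = False

def build_route(area: Area, calculate_route: Callable[[Area], List[Coord]]) -> List[Coord]:
    route = calculate_route(area)
    if route:
        return route
    # Empty route, e.g. a fresh database with no spawnpoints in the geofence:
    # fall back to init so devices at least collect spawnpoints first.
    area.init = True
    route = calculate_route(area)
    if not route:
        # Still nothing: likely a corrupt config (geofence too small, area over
        # open water, ...) -- refuse to start instead of looping in init forever.
        raise ValueError(f"Area {area.name}: cannot build a route, check the geofence")
    return route
```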

kamieniarz commented 3 years ago

Seems to be fixed