st0nedB / rooms

With "Rooms" mobile devices can perform indoor self-localization using an app and low-cost BLE beacons.

Issue with CoreML (I think) #3

Closed rossssco closed 3 years ago

rossssco commented 4 years ago

Hi,

After following all the required steps, I've had no joy, I'm afraid. The only non-standard setup I have is that I'm not using ESP iBeacons.

The app eventually times out with an "invalid model!" message.

I had used:

- macOS Catalina 10.15.5 (19F101)
- Xcode 11.5 (11E608c) (I've kept the project format at "Xcode 9.3-compatible")
- iPhone 7, iOS 13.5.1

```
$ pipenv install
Creating a virtualenv for this project…
Pipfile: /Users/ross/Documents/rooms3/machine-learning-python/Pipfile
Using /usr/local/bin/python3.7m (3.7.8) to create virtualenv…
⠇ Creating virtual environment...
created virtual environment CPython3.7.8.final.0-64 in 1178ms
  creator CPython3Posix(dest=/Users/ross/.local/share/virtualenvs/machine-learning-python-nexj2ZoU, clear=False, global=False)
  seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/Users/ross/Library/Application Support/virtualenv/seed-app-data/v1.0.1)
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
```

I have attached a few screenshots, the .json files, and a log of the output of the following command:

```
./roomsModelMaker.py --num-beacon 2 --http
```

(The sample rate is low because, well, it was quite frustrating doing all the moving about again and again!)

screenshots.zip rooms-json.zip machine-learning.zip

Please let me know if you need further info. I'd be more than happy to try out any new suggestions.

st0nedB commented 4 years ago

Hi, sorry it has not worked out for you so far. Thanks for providing the screenshots and files, that helped a bit. I am not entirely sure where the problem is, though. It seems the script is running correctly and producing a model.

From the files I saw that you used 25 samples per room, which is very low; I used at least 120 samples per area in my tests. Honestly, though, I am not sure if that's the issue. I have attached a plot of the data you provided, and I think it should be possible to learn a model from it. My tests showed an accuracy of 80% with your data, but that will vary from run to run due to the low number of samples. If you want a more precise statement on accuracy, my recommendation is to record more samples.

The sampling rate is fixed by iOS, unfortunately, and can't be changed. The measurements are performed by iOS and only passed to the app when a new value is available, usually at about 1 Hz. I don't understand what you mean by "doing all the moving about again and again!". It is sufficient to collect measurements for each room once; collecting 120 measurements for a room takes 120 seconds, or a total of 4 minutes in your case.

Regarding the invalid-model issue, I just added a new commit which might help fix it. Please check it out and let me know if it helps.

[Figure_1: scatter plot of the recorded samples.] Blue dots are the first room, orange the second room; the x-axis is the normalized RSSI of the first beacon, the y-axis the normalized RSSI of the second beacon.
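(For reference, a plot like this can be reproduced from the recorded data with a few lines of matplotlib. This is only a sketch; the file names and the per-sample layout are assumptions about the export format, not the actual schema.)

```python
import json
import matplotlib.pyplot as plt

# Hypothetical per-room files; the actual export format may differ.
rooms = [("room1.json", "tab:blue"), ("room2.json", "tab:orange")]

for filename, color in rooms:
    with open(filename) as f:
        samples = json.load(f)  # assumed: list of [rssi_beacon1, rssi_beacon2] pairs
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    plt.scatter(xs, ys, color=color, label=filename)

plt.xlabel("normalized RSSI, beacon 1")
plt.ylabel("normalized RSSI, beacon 2")
plt.legend()
plt.show()
```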

rossssco commented 4 years ago

Cheers for the prompt response.

I understand 25 was low. I had originally kept the default of 500. But getting a message like "Invalid Model!" suggested to me that I would need to generate new .json files, re-run the Python script, and try the whole process again. That's why I went so low in the end.

I have pulled the latest version (git pull). I'm new to Xcode: will opening the same project/workspace require any configuration changes to reflect the updated branch? I then built the app again with the arrow button.

I would assume it's picked up the changes, as the behavior has now changed slightly.

Previously, the "Invalid Model!" error would appear after, say, 3-5 minutes (while keeping the iPhone from locking). Now the error will not appear unless, after 3-5 minutes, I let the phone lock; after unlocking, it shows the same error.

Also, something I have noticed: if I import the model (successfully) and then navigate to "Prediction", I get "Starting". If I force-quit the app and launch it again, the Prediction tab shows "Starting ... #Prediction Likelihood".

Sorry I can't provide more info than that. I do have an old iPhone 5s, in case this could be model-specific?

Logs look all good again. Thanks again for your work and help.

rooms-08-07-20-logs.zip

st0nedB commented 4 years ago

Thanks for the hint about the unclear error message. I have opened another issue to come up with more descriptive ones; I hope this can avoid frustration in the future. I don't have an iPhone 7, so I can't really say if it is device-specific, but we can work together to find out. I just created a new branch to develop a solution for this issue. Please clone, build, and install it (don't worry, reconfiguring the app should not be necessary). Then run the app and keep it attached to Xcode to monitor the debugging output. It should print some numbers prior to showing the error in the app.

rossssco commented 4 years ago

Hey @st0nedB

Right, the new branch is checked out and Xcode is using this debug branch. (I did remove the app from the phone and re-install, just to be sure.)

As you can see from the log, I let it run for quite some time. After waiting to see a '4' (joking), I decided to lock the phone. At that point a new output was displayed:

```
Background task registered
Background task ended
```

I unlocked the phone and the "Prediction" tab fired up, again showing the 0, 1, 2, 3. Is this expected and helpful?

I locked the phone again a minute later. This time an actual debug error was displayed (see log). However, this seems to me to be related to the background registration and operation, not the initial running of the app?

I will fire up the old iPhone later today, as I'm concerned this iPhone 7 might have its own issues. Happy to continue. Cheers.

debug.log

st0nedB commented 4 years ago

Superb! That narrows it down. Please pull the latest and try that; I have added some more detailed debugging output for Xcode. Sorry for the numbers, this one should be helpful now 😄 When you run it in the debugger, you should see the prediction. From what you describe, I don't think there is anything wrong with the model; more likely something else (maybe MQTT) is causing the issues. Did you check whether the MQTT test message was published to your broker correctly? If you press the "Test" button, a message is published to the topic, and you should be able to see it at the broker.
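(If you don't have a broker UI handy, a small subscriber script can confirm whether the test message arrives. A rough sketch using the paho-mqtt 1.x API; the broker address is a placeholder for your setup, and subscribing to `#` simply shows everything while debugging.)

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print every message so the app's "Test" publish is visible.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)  # placeholder broker address
client.subscribe("#")                 # watch all topics while debugging
client.loop_forever()
```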

st0nedB commented 4 years ago

You are right regarding the background tasks, btw.

```
Background task registered
Background task ended
```

That's done to save power. When the user allows background location usage, the app can theoretically monitor the beacons in the background. However, this also consumes a sizable amount of power. To avoid that, the app monitors the gyroscope and only fires up the beacon ranging when the device moves. This works, but AFAIK iOS does not allow monitoring the gyroscope in the background. To work around that, a background task is restarted every 30 s (the maximum guaranteed background processing time) so the app can keep monitoring the gyroscope in the background. This reduces energy consumption by roughly 50% (according to Xcode) when the device is not moving.

Edit: The app automatically stops any activity when the battery drops below 30% to avoid draining it.

rossssco commented 4 years ago

Ha, we're making progress! :)

I did a git pull whilst still checked out on the issue#3 branch, then proceeded to build with the phone connected for debugging.

On the phone itself, it no longer hangs on "Starting". Now I get a changing display of "bedroom 50.00% / lounge 50.00%".

Thanks for the extra debugging output; now I can pretend to understand! lol. I have also included a screenshot of my MQTT environment/history.

What payload is being published, btw? From my extremely limited knowledge of MQTT, I can see the "Test" messages coming through OK.

I would imagine that once everything is working I'd want MQTT to update the person state (via device_tracker?), or could it literally just update the person state to be either of the respective rooms/locations?

Also, thanks for the knowledge regarding the gyroscope; as you'll see toward the end of debug2.log, I tested that ;)

Cheers!

mqtt debug2.log

st0nedB commented 4 years ago

Hey, great! So it seems the issue was a prediction threshold I had set previously to avoid oscillating results (e.g. in areas that overlap). The threshold would prevent the app from updating the current room if the likelihood was below 70%. As we can see from your logs, the prediction likelihood is always 50%, which I find weird.

Are the predictions meaningful? Does it predict bedroom/lounge when you are in the respective room, or does it oscillate? Do the percentage values change when you move between rooms, similar to when you were recording the data? From the analysis of the data (see the scatter plot in my first response) I'd expect the model to be able to predict the correct room when you are not in the overlapping area.

The app publishes any change of room to MQTT in JSON format, {"likelihood": xx, "room": yy}, which can then be parsed by HA templates.
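(For illustration, the guard amounted to something like the following sketch; the names are hypothetical, only the 70% figure is the actual value. With the likelihood stuck at 50%, such a check would never let the displayed room update, which matches the hang you saw.)

```python
THRESHOLD = 0.70  # minimum likelihood before the current room is updated

def update_room(current_room: str, predicted_room: str, likelihood: float) -> str:
    """Keep the previous room unless the new prediction is confident enough.

    This suppresses oscillating results in areas where two rooms overlap,
    but it also means a constant 50% likelihood never triggers an update.
    """
    if likelihood < THRESHOLD:
        return current_room
    return predicted_room
```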

Edit: Had some time today and looked into the issue further. I made some commits to both the app and the Python machine-learning code. Please rebuild both (including the pipenv environment), as I updated some dependencies. I plan to make more updates to the Python script in the coming days to reduce the number of parameters. Please let me know when you've had time to check them out ;)

st0nedB commented 3 years ago

Should be fixed with the latest commits. Closing for now.