childmindresearch / wristpy

https://childmindresearch.github.io/wristpy/
GNU Lesser General Public License v2.1

Task: Add gradient descent option as default minimization of closest_point_fit error #8

Open ReinderVosDeWael opened 6 months ago

ReinderVosDeWael commented 6 months ago

Description

Add a user_config setting to choose between different methods for the closest_point_fit function. Implement a gradient descent option; this will become the default, replacing the GGIR iterative process.

Tasks

Freeform Notes

No response

Asanto32 commented 5 months ago

Comment continued:

Currently, calibration uses the first 72 hours of raw acceleration data and attempts to calibrate. If this fails, the algorithm iteratively adds 12-hour chunks of data until calibration succeeds. This not only ignores data toward the end of the collection period, but can also be much slower than simply using all of the valid data from the start.
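
For reference, a minimal sketch of the chunked flow described above. The function and parameter names (`chunked_calibration`, the `calibrate` callable) and the failure convention are assumptions for illustration, not the actual wristpy internals:

```python
from typing import Callable, Optional

import numpy as np


def chunked_calibration(
    acceleration: np.ndarray,
    sampling_rate: float,
    calibrate: Callable[[np.ndarray], Optional[dict]],
    initial_hours: int = 72,
    step_hours: int = 12,
) -> dict:
    """Start with the first 72 h of data; on failure, grow by 12 h chunks and retry.

    `calibrate` stands in for the real calibration routine and is assumed to
    return None when the fit does not meet the error criterion.
    """
    samples_per_hour = int(sampling_rate * 3600)
    n_samples = acceleration.shape[0]
    hours = initial_hours
    while True:
        subset = acceleration[: hours * samples_per_hour]
        result = calibrate(subset)
        if result is not None:
            return result
        if hours * samples_per_hour >= n_samples:
            raise ValueError("Calibration failed even with all available data.")
        hours += step_hours  # pull in another 12 h chunk and retry
```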

Asanto32 commented 4 weeks ago

@ReinderVosDeWael @frey-perez should we redefine this issue and write out new tasks? I think the first step is to provide an alternative minimization function, i.e. using gradient descent as opposed to the closest_point_fit function, while keeping the same error function (the linear transformation that maps no_motion_data onto the unit sphere)?
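
As a rough sketch of what that alternative could look like, assuming the error is the mean squared distance of the linearly transformed no-motion points from the unit sphere. The per-axis scale-plus-offset parameterization, the array shapes, and the use of scipy's gradient-based optimizer (rather than a hand-rolled gradient descent loop) are all assumptions here, not the wristpy implementation:

```python
import numpy as np
from scipy import optimize


def unit_sphere_error(params: np.ndarray, no_motion_data: np.ndarray) -> float:
    """Mean squared distance of scale * x + offset from the unit sphere."""
    scale, offset = params[:3], params[3:]
    transformed = no_motion_data * scale + offset
    return float(np.mean((np.linalg.norm(transformed, axis=1) - 1.0) ** 2))


def fit_calibration(no_motion_data: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Minimize the same error with a gradient-based optimizer instead of the
    GGIR-style iterative closest-point update."""
    x0 = np.concatenate([np.ones(3), np.zeros(3)])  # start at the identity transform
    result = optimize.minimize(
        unit_sphere_error, x0, args=(no_motion_data,), method="L-BFGS-B"
    )
    return result.x[:3], result.x[3:]
```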

ReinderVosDeWael commented 4 weeks ago

Yeah I'd keep the interface the same at this stage (i.e. same input, same output) but change all the internals.

Asanto32 commented 2 weeks ago

After discussion, we are pivoting to a z-score implementation to find the closest point fit, as a closed-form method that avoids potential issues with gradient descent (the degenerate edge case where scale = 0 and offset = 1).
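
A small sketch of both ideas, written as an assumption about the intent rather than the actual implementation: the closed-form per-axis z-score (scale = 1/std, offset = -mean/std), and the degenerate solution that gradient descent could fall into, where scale = 0 and a unit-norm offset drive the error to zero without doing any useful calibration:

```python
import numpy as np


def zscore_calibration(no_motion_data: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Closed-form per-axis z-score: scale = 1 / std, offset = -mean / std."""
    mean = no_motion_data.mean(axis=0)
    std = no_motion_data.std(axis=0)
    return 1.0 / std, -mean / std


# The degenerate gradient-descent solution: with scale = 0 and any unit-norm
# offset, every transformed point lands exactly on the unit sphere, so the
# error is zero even though the transform is meaningless.
points = np.random.default_rng(0).normal(size=(100, 3))
scale, offset = np.zeros(3), np.array([0.0, 0.0, 1.0])
norms = np.linalg.norm(points * scale + offset, axis=1)
print(np.allclose(norms, 1.0))  # True: zero error, useless calibration
```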

Asanto32 commented 1 week ago

This is once again changing. The z-score initial guess is not valid, and we will instead create two separate calibration classes (GGIRCalibration and MinimizeCalibration).
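
A rough sketch of how that split might be structured. The class names follow this comment, but the shared interface (a `run` method returning scale and offset) is a placeholder assumption, not the final wristpy design:

```python
import abc

import numpy as np


class Calibration(abc.ABC):
    """Shared interface: take raw acceleration, return (scale, offset)."""

    @abc.abstractmethod
    def run(self, acceleration: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        ...


class GGIRCalibration(Calibration):
    """Iterative closest-point fit on growing chunks of data (current behavior)."""

    def run(self, acceleration: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        raise NotImplementedError  # would wrap the existing GGIR-style routine


class MinimizeCalibration(Calibration):
    """Direct minimization of the unit-sphere error over all no-motion data."""

    def run(self, acceleration: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        raise NotImplementedError  # would wrap a scipy.optimize-based fit
```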

clane9 commented 1 week ago

Just to expand on and document what the issue was. The z-score init would be OK if the data were sampled uniformly around a sphere (e.g. from a Gaussian). But in this case the no-motion acceleration data tend to be concentrated around a particular unit-norm direction (e.g. (0, 0, 1) if the watch was set down on its face). In that case, z-scoring will completely destroy the data.
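
A tiny synthetic numpy illustration of this failure mode (not wristpy code or data): points clustered near (0, 0, 1) have norms close to 1 g before z-scoring, and that structure is gone afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# No-motion data concentrated around a single orientation, e.g. the watch
# lying on its face: roughly (0, 0, 1) g plus a little sensor noise.
points = np.array([0.0, 0.0, 1.0]) + rng.normal(scale=0.01, size=(1000, 3))
print(np.linalg.norm(points, axis=1).round(2)[:5])  # all ~1.0, already near the sphere

# Per-axis z-scoring subtracts the (0, 0, 1) mean and divides by the tiny
# per-axis noise, so the tight cluster becomes an isotropic unit-variance
# cloud: the gravity direction and the ~1 g norms are gone.
zscored = (points - points.mean(axis=0)) / points.std(axis=0)
print(np.linalg.norm(zscored, axis=1).round(2)[:5])  # scattered, no longer ~1.0
```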

More generally, I think any transformation that adds an offset is very questionable. It's not clear to me what the physical meaning of an offset error even is, how it could be introduced, or how to correct for it. I would also be cautious about per-axis scale corrections unless you have no-motion data with varying directions (i.e. the watch set down in multiple positions). No calibration, or a single global scale correction, seems most sensible to me.