Closed Laberbear closed 8 years ago
Really good idea; especially on Windows, different hardware would probably show some unexpected behaviour when evaluated with only one source. In my opinion the best way to implement this would be a simple, stupid automated system that evaluates statistics of the collected data and displays them to the user while selecting the most plausible source; again, this should be kept simple. The user can then look at the data and select a source, overriding the automated system's decision.

Pros:

- most options for users
- the user gets help deciding which source to use
- simple statistical calculations should be pretty safe

Cons:

- it will take more work than deciding on one solution and implementing it
- multiple sources need to run simultaneously for a limited time, which may create performance issues during that time

This solution probably requires multiple REST endpoints.
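To make the "simple statistical calculations" idea concrete, here is a minimal sketch (in Python, since the project's language isn't stated in this thread) of how such an automated system could score each source by how many of its values fall inside a plausible physical range. The source names, value ranges, and function names are all hypothetical illustrations, not part of the actual system.

```python
def plausibility_score(samples, low, high):
    """Score a source by the fraction of its samples inside the
    plausible physical range [low, high]; higher is better."""
    if not samples:
        return 0.0
    in_range = sum(1 for s in samples if low <= s <= high)
    return in_range / len(samples)

def pick_default_source(readings, low, high):
    """readings: dict mapping source name -> list of collected values.
    Returns the name of the source with the highest score."""
    return max(readings, key=lambda name: plausibility_score(readings[name], low, high))

# Hypothetical example: two CPU-temperature sources, one returning nonsense.
readings = {
    "wmi": [45.0, 47.2, 46.1],
    "broken_sensor": [-300.0, 999.0, 46.0],
}
print(pick_default_source(readings, low=0.0, high=110.0))  # wmi
```

The user-override from the comment above would then simply replace this automatic pick with whatever source the user selected.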
I'd prefer a hybrid solution. As always, the user should not be forced to select an option but should be able to do so.
This results in the problem of choosing a default configuration. Surely setting a static default value won't be the solution. To minimize power and memory consumption I'd go with an initial test logic on startup to determine the preferred endpoints. This logic, as already mentioned by WTKFA, should be kept simple.
In addition to choosing defaults, the user should be able to set the endpoints to use themselves through the REST API. One should evaluate whether there needs to be an additional endpoint for this or whether the preference endpoint should be used, as these values will most likely be saved in the same database table. Regarding separation of concerns, an additional endpoint might serve the purpose better.
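As a sketch of what a dedicated source-preference endpoint would wrap, here is the kind of logic its handler could delegate to. The source identifiers and the in-memory store are hypothetical stand-ins (the thread says the values would live in a database table); the endpoint paths in the comments are illustrative, not the project's actual API.

```python
AVAILABLE_SOURCES = {"wmi", "ohm", "sysfs"}  # hypothetical source identifiers

class SourcePreferences:
    """In-memory stand-in for the database table mentioned above."""
    def __init__(self):
        self._preferred = None  # None means: let the automatic logic decide

    def set_preferred(self, source):
        """Validate and store the user's choice; reject unknown sources."""
        if source is not None and source not in AVAILABLE_SOURCES:
            raise ValueError(f"unknown source: {source}")
        self._preferred = source

    def get_preferred(self):
        return self._preferred

prefs = SourcePreferences()
prefs.set_preferred("wmi")    # e.g. what a PUT on the preference endpoint would do
print(prefs.get_preferred())  # wmi
prefs.set_preferred(None)     # e.g. a DELETE: fall back to the automatic choice
```

Keeping this behind its own endpoint, as suggested, means the validation and fallback-to-automatic behaviour stay in one place.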
Alright, then we'll do that. We need:
The automatic system will be quite a lot of work I think, so getting the user option in there should be priority now.
For the automatic validation of the data, three methods come to mind right now:
The next question is when to apply/test those values. We could do it on startup; however, some components are hot-pluggable, which means they can be inserted/removed at runtime. This would also cause a much longer initialization time.
The more sensible choice is probably doing the evaluation at runtime on every value and switching to different sources (as long as there is no user-preferred one). However, this could result in weird historic data when the backend jumps from MeasuringSource to MeasuringSource.
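One way to keep historic data interpretable despite such jumps would be to tag every stored value with the source that produced it, so consumers can see (and filter out) where the backend switched. This is only a sketch of that idea; the `Measurement` type and source names are assumptions, not the project's actual data model.

```python
from dataclasses import dataclass
import time

@dataclass
class Measurement:
    value: float
    source: str       # which MeasuringSource produced this value
    timestamp: float

history = []

def record(value, source):
    """Store a value together with its originating source."""
    history.append(Measurement(value, source, time.time()))

record(46.1, "wmi")
record(47.0, "ohm")  # backend switched sources mid-run
# A consumer can now see where each historic value came from:
print([(m.value, m.source) for m in history])
```

With this tagging in place, a source switch shows up as an explicit annotation in the history rather than an unexplained discontinuity.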
I would definitely not iterate over the different sources at runtime! Delivering valuable and persistent data should be our main focus.
Will this be needed in the initial release?
We could run all implemented sources on different systems, test which runs best, and set the default accordingly.
An issue #18 relating to this was opened to schedule the implementation for the second release.
Since we have decided how to implement this feature, I'll close this evaluation issue.
The gathering backend design of the system allows for multiple different sources for the data. Testing all hardware configurations with our system is impossible, so there should be a way for the user (or an automated system) to change the source of the data if it seems wrong or just returns nonsense.
Here is an overview of the required systems for each option:
User Controlled Sources
Automated Sources
Some kind of watcher over the gathered data that evaluates it and tries to find the most realistic data. (Not sure how to safely accomplish that)
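One simple, conservative reading of "watcher" would be to flag samples that are either outside a physically plausible range or jump implausibly far from the previous accepted sample. The thresholds and the temperature example below are hypothetical; this is only a sketch of the idea, not a claim about how to do it safely in general.

```python
def watch(values, low, high, max_jump):
    """Return indices of suspicious samples: out of [low, high], or
    jumping more than max_jump from the previous accepted sample."""
    suspicious = []
    prev = None
    for i, v in enumerate(values):
        out_of_range = not (low <= v <= high)
        implausible_jump = prev is not None and abs(v - prev) > max_jump
        if out_of_range or implausible_jump:
            suspicious.append(i)
        else:
            prev = v  # only accepted samples update the baseline
    return suspicious

# Hypothetical CPU temperatures: 999.0 is out of range,
# and the jump to 80.0 is implausibly large.
print(watch([45.0, 46.2, 999.0, 47.1, 80.0], low=0, high=110, max_jump=15))  # [2, 4]
```

A watcher like this could feed the per-source statistics discussed earlier: a source whose values keep getting flagged is a candidate for automatic replacement.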