junh-ki / dias_kuksa

Pieces of connectivity implementation with Eclipse KUKSA for DIAS.
https://dias-kuksa-doc.readthedocs.io/en/latest/
GNU General Public License v3.0

TODO - Diagnostic Part #13

Open junh-ki opened 3 years ago

junh-ki commented 3 years ago
~~0. Merge the two CAN log files (based on timestamp).~~ *(05/Jan/2021)*

~~1. Originally, the evaluation of tampering is done after 100 h of "good" mapping. For demonstration, this shall be reduced and therefore simplified to 3 min.~~ *(05/Jan/2021)*

~~2. The total sampling host includes each total sum of (1-a, 1-b, 1-c, 2, 3-a, 3-b) as well as the actual sampling time from the beginning to the end.~~ *(06/Jan/2021)*

~~3. The evaluation point should be decided through a Hono-InfluxDB-Connector argument (sampling time as an argument, i.e., `--sampling 180` => 3 min).~~ *(06/Jan/2021)*

~~4. Spring Boot loop: https://stackoverflow.com/questions/36541857/spring-boot-infinite-loop-service~~ *(09/Jan/2021)*

~~5. Evaluation is applicable to "good"-map downstream data only. Pre-evaluation: filter out bins with too little cumulative work stored (e.g., exclude bins with work < 0.5 kWh). => A test reference NOx map needs to be used for creating a factor map (for both 5-1 and 5-2).~~ *(09/Jan/2021)*

~~5-1. Bin-wise evaluation with bin statistics (use the pre-evaluation filter; only use valid bins). For bin i, i = 1...12, DS data, "good"-map data only, valid bins only: NOx_i / work_i / NOxref_i = factor of bin i; therefore a NOx reference map is needed here. => Result: a factor map (invalid bins are filled with void). - Factor range classification (suspiciously low ~ ... ~ very bad). - Criteria for tampering: minimum number of bins, minimum factor. - Weighting number (tampering weight per bin: 0, 1, 2, 4).~~ *(10/Jan/2021)*

~~5-2. Average evaluation: take the average of the valid bins' factors; average factor range, average classification.~~ *(10/Jan/2021)*

~~!! Both methods 5-1 and 5-2 shall run in parallel. If either detects tampering, "tampering" shall be concluded.~~

~~6. Make new queries for writing evaluated maps to InfluxDB.~~ *(10/Jan/2021)*

~~7. Make Grafana dashboard(s) for the evaluated maps.~~ *(12/Jan/2021)*

~~8. Dockerize.~~ *(11/Jan/2021)*

~~9. Include it in the `docker-compose.yml` file.~~ *(11/Jan/2021)*

~~10. The `server.url` or `export.ip` argument had better be changed: `influxdb:8086` => `http://influxdb:8086`.~~ *(12/Jan/2021)*

~~11. `hono-influxdb-connector` should be modified from using curl to using the injected `influxdb-java` dependency.~~ *(12/Jan/2021)*

~~12. Evaluation point panel.~~ *(13/Jan/2021)*

~~13. According to the factor map, the bin-wise evaluation map is not wrong. <= Must check whether the factor map actually follows the target bin map (`tscr_bad`).~~ *(13/Jan/2021)*

14. `NOxForWork` is weirdly higher than its expected value according to the PDF. <= Needs to be analyzed (or maybe it is because it was meant to analyze `tscr_good` only). (Also, the DS sensor data is always 3012, which is its maximum; take that into account.)

15. In one bin, `cumulativeWork` is 0 while `cumulativeNOxDS` is 3.383. <= Should be analyzed in `preprocessor_bosch.py` (map switching).
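The 5-1/5-2 evaluation above can be sketched in a few lines of Python. The function names, example values, and array layout are illustrative only, not the actual connector code; only the factor formula, the 0.5 kWh pre-evaluation threshold, and the void-filling come from the TODO items:

```python
WORK_THRESHOLD_KWH = 0.5   # pre-evaluation: exclude bins with too little work
VOID = None                # invalid bins are filled with void

def factor_map(nox_ds, work, nox_ref):
    """Bin-wise evaluation (5-1): factor_i = (NOx_i / work_i) / NOxref_i.

    Bins failing the pre-evaluation filter are set to VOID.
    """
    factors = []
    for nox, w, ref in zip(nox_ds, work, nox_ref):
        if w < WORK_THRESHOLD_KWH:          # filtered out by pre-evaluation
            factors.append(VOID)
        else:
            factors.append((nox / w) / ref)
    return factors

def average_factor(factors):
    """Average evaluation (5-2): mean over the valid bins only."""
    valid = [f for f in factors if f is not VOID]
    return sum(valid) / len(valid) if valid else VOID
```

Both results would then be classified against the factor ranges, and "tampering" concluded if either method trips its criteria.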
junh-ki commented 3 years ago
```
$ curl -G 'http://localhost:8086/query?db=mydb' --data-urlencode 'q=SELECT * FROM "mymeas"'

{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag1","mytag2"],"values":[["2017-03-01T00:16:18Z",33.1,null,null],["2017-03-01T00:17:18Z",12.4,"12","14"]]}]}]}
```
junh-ki commented 3 years ago

If there are many empty bins in the evaluated bin map, that is due to the pre-evaluation step, which filters out bins with too little cumulative work stored (bins with work < 0.5 kWh are not used).

junh-ki commented 3 years ago

1. KUKSA-Test

2. KUKSA-Renningen

junh-ki commented 3 years ago

At least for the sampling time (total sampling time, individual bin sampling times), would it be better to keep it cloud-based?

Or to implement a buffer, i.e., a queue? (21/01/2021)

```
if queue is empty:
    # send the current telemetry directly
else:
    # push the current telemetry to the back of the queue
    # pop and send the first-in telemetry
if timeout:
    # put the telemetry into the queue instead of dropping it
```
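The queue idea above can be made concrete as a small runnable buffer. The class name, the `send` callback, and the telemetry values below are hypothetical, not the actual `cloudfeeder.py` implementation:

```python
from collections import deque

class TelemetryBuffer:
    """Buffer telemetry that could not be sent; flush oldest-first."""

    def __init__(self, send):
        self.send = send       # callable that raises on timeout / failure
        self.queue = deque()

    def push(self, telemetry):
        self.queue.append(telemetry)       # new data always joins the back
        while self.queue:
            try:
                self.send(self.queue[0])   # first-in telemetry goes first
            except Exception:
                return                     # keep everything queued on failure
            self.queue.popleft()           # sent successfully: drop it
```

When the connection recovers, the next `push` automatically drains the backlog in arrival order before the newest sample leaves.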
Schmandreas commented 3 years ago

Implementing a buffer is probably the better solution. Buffer could be reused for other applications as well.

junh-ki commented 3 years ago

> Implementing a buffer is probably the better solution. Buffer could be reused for other applications as well.

@Schmandreas I agree. Buffer part in cloudfeeder.py is implemented here: c621457c396f33990bb77cd705bfb77b1f6ca995. I tested it on my desk and it works well so far. We will run some tests in the truck this afternoon :)

junh-ki commented 3 years ago

`socket.timeout: timed out` => https://www.kite.com/python/answers/how-to-catch-a-socket-timeout-exception-in-python (22/01/2021) (c540f0e)
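The linked fix amounts to catching `socket.timeout` around the blocking call. A minimal sketch (the `recv` callable is a hypothetical stand-in, not the actual c540f0e change):

```python
import socket

def recv_or_none(recv):
    """Return received data, or None when the socket times out."""
    try:
        return recv()
    except socket.timeout:
        # connection stalled: let the caller retry or buffer
        # the telemetry instead of crashing the feeder loop
        return None
```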

junh-ki commented 3 years ago
```
SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0xb538f130>
transport: <_SelectorSocketTransport fd=10 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
  File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
    ssldata, appdata = self._sslpipe.feed_ssldata(data)
  File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
    self._sslobj.unwrap()
  File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
    return self._sslobj.shutdown()
ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2609)
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 827, in transfer_data
    message = await self.read_message()
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 895, in read_message
    frame = await self.read_data_frame(max_size=self.max_size)
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 971, in read_data_frame
    frame = await self.read_frame(max_size)
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 1051, in read_frame
    extensions=self.extensions,
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/framing.py", line 105, in read
    data = await reader(2)
  File "/usr/lib/python3.7/asyncio/streams.py", line 679, in readexactly
    await self._wait_for_data('readexactly')
  File "/usr/lib/python3.7/asyncio/streams.py", line 473, in _wait_for_data
    await self._waiter
concurrent.futures._base.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/home/pi/kuksa.val/clients/vss-testclient/../common/clientComm.py", line 144, in run
    self.loop.run_until_complete(self.mainLoop())
  File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
    return future.result()
  File "/home/pi/kuksa.val/clients/vss-testclient/../common/clientComm.py", line 126, in mainLoop
    await self.msgHandler(ws)
  File "/home/pi/kuksa.val/clients/vss-testclient/../common/clientComm.py", line 112, in msgHandler
    resp = await webSocket.recv()
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 509, in recv
    await self.ensure_open()
  File "/home/pi/.local/lib/python3.7/site-packages/websockets/protocol.py", line 812, in ensure_open
    raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: code = 1006 (connection closed abnormally [internal]), no reason
```
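One way to survive an abnormal closure like this `ConnectionClosedError` (code 1006) is a reconnect loop around the client's main loop. A simplified synchronous sketch, where the function names are hypothetical and a generic `recoverable` exception tuple stands in for `websockets.exceptions.ConnectionClosedError`:

```python
import time

def run_with_reconnect(connect, handle, retries=3, delay=0.0,
                       recoverable=(ConnectionError,)):
    """(Re)connect and handle messages; reconnect on abnormal closure."""
    for attempt in range(retries):
        try:
            ws = connect()          # e.g. open the kuksa.val websocket
            return handle(ws)       # e.g. the client's receive loop
        except recoverable:
            time.sleep(delay)       # back off, then reconnect
    raise RuntimeError("connection kept closing abnormally")
```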
junh-ki commented 3 years ago

old_good.xlsx pems_cold.xlsx tscr_good.xlsx