stillwer closed this issue 5 years ago.
My current guess is that the data is bad because of a failed data read, which means the size of that particular array element is zero. I will check tomorrow, but that was the case for this particular day. I think there were 4 other events during RELAMPAGO to look into.
Just put a sample of data in here, but my guess was right. There are 4 events from RELAMPAGO that show this behavior: 20181031 in the 12 UTC file, 20181117 in the 21 UTC file, 20181126 in the 9 UTC file, and 20181205 in the 9 UTC file. This is a Python problem caused by trying to read rows of different sizes; we just need to check each file and make sure the number of entries in each row is correct. This doesn't seem to me to be a system-critical problem, so I will change this from "Do Before DOE" to "Not Time Sensitive". This change will also make it possible for thermocouples to be added or removed mid-file.
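A quick way to scan a housekeeping file for short rows like this is a minimal sketch along these lines (the helper name and the 4-column width are my assumptions, not part of the actual processing code):

```python
def find_short_rows(lines, expected=4):
    """Return (line_number, n_cols) for every row narrower than expected.

    `lines` is any iterable of whitespace-delimited text rows; `expected`
    is the column count a complete row should have (4 used as an example).
    """
    short = []
    for i, line in enumerate(lines, start=1):
        n = len(line.split())
        if n < expected:
            short.append((i, n))
    return short

# Example: row 2 lost its last two readings during a failed read.
rows = ["1.0 2.0 3.0 4.0", "5.0 6.0", "7.0 8.0 9.0 10.0"]
print(find_short_rows(rows))  # → [(2, 2)]
```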
The new Python processing makes sure that all lines are a minimum width; in the case of the picture above, the width is 4. If any line is shorter than that, it pads the data with -1e9. The 4 instances I know of where this occurred (all RELAMPAGO) are 20181031 at 12 UTC, 20181117 at 21 UTC, 20181126 at 9 UTC, and 20181205 at 9 UTC. These files have just been checked and are correct. The filtering that checks for missed data catches this extra padding value, and everything seems to flow well. Will close for now but reopen if it happens again.
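The padding step described above can be sketched as follows (a hypothetical helper, not the actual processing code; it assumes no row is ever wider than the minimum width, and uses the -1e9 sentinel mentioned in the comment):

```python
import numpy as np

PAD = -1e9  # sentinel value the downstream miss-filtering already catches

def read_padded(lines, min_width=4, pad=PAD):
    """Parse whitespace-delimited rows, padding short ones out to min_width."""
    rows = []
    for line in lines:
        vals = [float(v) for v in line.split()]
        vals += [pad] * (min_width - len(vals))  # pad short rows only
        rows.append(vals)
    return np.array(rows)

data = read_padded(["1 2 3 4", "5 6"])
# The truncated second row becomes [5., 6., -1e9, -1e9], so the array
# stays rectangular and the filtering step can discard the padding.
```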
Just noticed a problem with MPD01's data from RELAMPAGO. Not sure what is going on, but the housekeeping child file from 20181126 at 9 UTC has values of 9.9692e36. I don't have access to the child files at the moment, but when the system comes back I need to investigate what is causing such high values. Is this a Python conversion error, or is the data being read already bad, i.e. a LabVIEW error?
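For what it's worth, 9.9692e36 matches the default NetCDF fill value for floats (9.9692099683868690e36), which would point to values that were never written rather than a Python conversion bug. A minimal sketch (helper name is mine) for masking these before filtering:

```python
import numpy as np

NC_FILL_FLOAT = 9.9692099683868690e36  # default NetCDF float fill value

def mask_fill(values, fill=NC_FILL_FLOAT, rtol=1e-5):
    """Replace NetCDF fill values with NaN so they drop out of later stats."""
    arr = np.asarray(values, dtype=float)
    return np.where(np.isclose(arr, fill, rtol=rtol), np.nan, arr)

# Real readings pass through untouched; the suspect 9.9692e36 values
# (within rounding of the NetCDF fill value) become NaN.
clean = mask_fill([1.5, 9.9692e36])
```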