RobertK66 / obc_1769_core

Implements hardware abstractions and Layer 2(3) modules for using the OBC hardware in CubeSat projects
GNU General Public License v3.0

Prepare sequence to read and store external sensor data #45

Open kodsurf opened 2 years ago

kodsurf commented 2 years ago

Intro:

Assume that a magnetometer I2C sensor is integrated into the OBC. Data has to be collected throughout the orbit and stored until a communication link with Earth is established. After data transmission the arrays / memory can be cleared.

A LEO orbit is around 130 min = 7800 s.

With a sampling interval of 20 seconds (1/20 Hz), an array of 390 values is enough to store the sensor data of one whole orbital period.

According to @WolfgangTreb, data transmission windows will be available 3-4 times a week. With that, RAM alone obviously won't be enough, and the data has to be recorded to the SD card.

Task concept:

Using the already integrated internal I2C sensor, program a routine which collects sensor data at a specified sampling frequency, stores it temporarily in RAM, then writes it to the SD card and clears the array from the OBC's RAM.
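
A minimal sketch of what such a routine could look like (all names such as `mag_read_sample`, `sdcard_append_block` and `timer_elapsed_s` are hypothetical placeholders, not existing functions of this repo):

```c
/* Sketch: sample one sensor value every SAMPLE_PERIOD_S seconds, buffer it
 * in RAM and flush the buffer to the SD card once it is full. */
#include <stdint.h>
#include <string.h>

#define SAMPLE_PERIOD_S   20u                 /* one sample every 20 s        */
#define ORBIT_SAMPLES     390u                /* 7800 s orbit / 20 s period   */

static int16_t  sample_buf[ORBIT_SAMPLES];    /* RAM buffer for one orbit     */
static uint16_t sample_cnt = 0;

/* Assumed to be provided elsewhere by the OBC layers: */
extern int16_t  mag_read_sample(void);                            /* I2C read */
extern void     sdcard_append_block(const void *p, uint32_t len); /* SD write */
extern uint32_t timer_elapsed_s(void);                            /* seconds  */

void sensor_task_tick(void)
{
    static uint32_t last_sample_s = 0;
    uint32_t now = timer_elapsed_s();

    if (now - last_sample_s < SAMPLE_PERIOD_S) {
        return;                               /* not yet time for a sample    */
    }
    last_sample_s = now;

    sample_buf[sample_cnt++] = mag_read_sample();

    if (sample_cnt == ORBIT_SAMPLES) {        /* buffer full: persist + clear */
        sdcard_append_block(sample_buf, sizeof(sample_buf));
        memset(sample_buf, 0, sizeof(sample_buf));
        sample_cnt = 0;
    }
}
```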

Todo:

We are in open discussion regarding when to collect data: possibly only when the satellite is in a region of interest (high altitude / polar caps).

To implement these routines, @kodsurf has to be introduced to the SD card read/write/access functions. The hardware is also needed (the SD card is missing from the OBC that I am working with).

kodsurf commented 2 years ago

A code skeleton implemented with the internal I2C sensor that measures temperature and humidity would be an appropriate template for any other sensor that periodically collects data.
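
One way to keep such a skeleton reusable is a small descriptor per sensor, so the same collection loop can serve the temperature/humidity sensor, a magnetometer, or anything else. A sketch with hypothetical names (the driver functions `obc_temp_read` / `obc_hum_read` are assumptions, not repo APIs):

```c
#include <stdint.h>

typedef struct {
    const char *name;                       /* e.g. "OBC-temp1", "MAG-x"     */
    uint32_t    period_s;                   /* sampling interval in seconds  */
    int16_t   (*read)(void);                /* sensor-specific read function */
    uint32_t    last_sample_s;              /* bookkeeping for the scheduler */
} periodic_sensor_t;

extern int16_t obc_temp_read(void);         /* assumed low-level drivers     */
extern int16_t obc_hum_read(void);

static periodic_sensor_t sensors[] = {
    { "OBC-temp1", 20, obc_temp_read, 0 },
    { "OBC-hum",   20, obc_hum_read,  0 },
};
```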

RobertK66 commented 2 years ago

This is the same as / related to #11. We should continue from the system level to define which kinds of data are expected and how important the different data sets are. At the moment (thinking from the low level up) I know of:

Sensor signals: OBC-temp1, OBC-temp2, OBC-hum, OBC-SupVoltage, OBC-Current, OBC-SP-Current, ST(ACIE)-temp1, ST-temp2(?), ST-SNR, THR-??? ...., MAG-??? ..., ADCS-????, EPU-Voltages[x], EPU-Currents[x], ....

Operational events: OBC-Events (errors, signals, infos per module), THR-Fuses(?), ST mode changes, EPU mode changes, ADCS modes/events, ....

So my thinking at this moment is: We will have a number (x) of 'SignalStores' and 'EventStores'. We have to make each store a 'wrap around' area in the MRAM chips, so there is a specific place reserved which can be used up to its maximum, after which the oldest data gets lost. The different stores have to be categorized so that the most important data (e.g. error events) is kept for as long as needed to (mostly) guarantee a 'safe downlink' possibility. Less important stuff (e.g. temperature data) could be allowed to be overwritten after e.g. x orbits or so.....
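
Such a wrap-around store is essentially a ring buffer over a fixed MRAM region. A rough sketch, assuming a hypothetical `mram_write` interface (not an existing function of this repo):

```c
#include <stdint.h>

typedef struct {
    uint32_t base_addr;      /* start of the reserved MRAM area              */
    uint32_t capacity;       /* number of records that fit into the area     */
    uint32_t record_size;    /* size of one record in bytes                  */
    uint32_t write_idx;      /* next slot to write (wraps around)            */
    uint32_t count;          /* number of valid records (<= capacity)        */
} wrap_store_t;

extern void mram_write(uint32_t addr, const void *data, uint32_t len); /* assumed */

void wrap_store_append(wrap_store_t *s, const void *record)
{
    uint32_t addr = s->base_addr + s->write_idx * s->record_size;
    mram_write(addr, record, s->record_size);

    s->write_idx = (s->write_idx + 1u) % s->capacity;   /* wrap around       */
    if (s->count < s->capacity) {
        s->count++;                       /* once full, oldest data is lost  */
    }
}
```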

On a higher (system) level, the term HKD (housekeeping data) vs. other (mission!?) data has to be specified very carefully here!

For signals I see 2 types: periodic ones with orbit period (temperature) and more static ones (like supply voltage). For the periodic ones I would suggest taking advantage of the known periodicity to get better downlink performance by not transmitting the samples in sequence order but via a sort of 'random sampling algorithm'. (If we can only receive 20-40 values of the signal, and not all of them are consecutive samples, we see more of the overall 'signal form' rather than only 2 to 4 seconds of a signal sampled at 10 samples per second.) -> Compression when downloading. For the static ones, I could imagine a high sampling rate (to detect e.g. sudden current changes) but not storing every sample if it did not change much from the previous one -> Compression already when recording.
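
The "compression when recording" idea for the static signals could be as simple as a per-signal delta filter (a sketch with hypothetical names; the threshold value and struct layout are just illustrations):

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct {
    int16_t last_stored;     /* last value that was actually recorded        */
    int16_t threshold;       /* minimum change needed to record a new value  */
    bool    has_value;       /* false until the first sample is stored       */
} delta_filter_t;

/* Returns true if the sample should be written to the store. */
bool delta_filter_accept(delta_filter_t *f, int16_t sample)
{
    if (!f->has_value || abs(sample - f->last_stored) > f->threshold) {
        f->last_stored = sample;
        f->has_value   = true;
        return true;          /* signal changed enough -> record it          */
    }
    return false;             /* nearly unchanged -> skip, saves store space */
}
```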

(This comes from our experience with the Pegasus data we got. Once we had downloaded enough sequential samples we got 'always the same boring temperature cycles' and no interesting content. We never saw useful current/voltage signals other than 'current values' corresponding to switched on/off subsystems.)

For events I can think of using something similar to a modern logging infrastructure. Here this would be a separate store for fatals and errors - it should be capable of being completely downlinked before it overruns. The rest of the events would be logged with (maybe configurable) filters and only downloaded when we are interested in them. Or a very short store with all events which could be filtered on downlink, or .... 😄
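
A sketch of how such severity-based routing could look (all names, the record layout and the two stores are assumptions for illustration, not a design decision):

```c
#include <stdint.h>

typedef enum { EVT_DEBUG, EVT_INFO, EVT_WARN, EVT_ERROR, EVT_FATAL } evt_level_t;

typedef struct {
    uint32_t timestamp;       /* seconds since boot or mission epoch         */
    uint8_t  module_id;       /* which OBC module raised the event           */
    uint8_t  level;           /* evt_level_t                                 */
    uint16_t code;            /* module-specific event code                  */
} event_record_t;

extern void error_store_append(const event_record_t *e);   /* assumed stores */
extern void general_store_append(const event_record_t *e);

static evt_level_t general_filter = EVT_INFO;  /* configurable, e.g. by telecommand */

void log_event(const event_record_t *e)
{
    if (e->level >= EVT_ERROR) {
        error_store_append(e);        /* must survive until downlink          */
    } else if (e->level >= general_filter) {
        general_store_append(e);      /* filtered, may be overwritten sooner  */
    }
}
```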

Enough for now. It's up to design / discussion.