avalentino / s1isp

Sentinel-1 Instrument Source Packets decoder
Apache License 2.0

Missing subcommed data parsing #5

Closed: sirbastiano closed this 3 days ago

sirbastiano commented 5 days ago

It is not obvious to me how to parse the 'subcom_data_records' into PVT.

In the Rich-Hall sentinel1decoder package (https://github.com/Rich-Hall/sentinel1decoder/blob/main/sentinel1decoder/l0file.py) this was carried out through an API.

I think I need to load some bytes somehow...

avalentino commented 5 days ago

It is already possible via the API:

The example is not tested, but I hope the idea is clear. At the moment I do not think we have a way to do it via the CLI, but it should be feasible in an easy way. We just need to define clearly what the expected behavior is.
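The example code itself is not preserved in this thread, so the following is only a generic, stdlib-only sketch of the idea: collect the sub-commutated ancillary words carried by the ISPs into complete cycles, then unpack a PVT record from the assembled buffer. The 64-word cycle length follows the Sentinel-1 SPPDU convention; the function names and the exact field layout are illustrative assumptions, not the actual s1isp API.

```python
import struct

CYCLE_LEN = 64  # words per full sub-commutation cycle (S1 SPPDU convention)


def assemble_cycles(words):
    """Group (index, 2-byte word) pairs into complete 64-word cycles.

    The index runs 1..64; 0 means the packet carries no ancillary data.
    Incomplete cycles are silently discarded in this sketch.
    """
    cycles = []
    buf = {}
    for index, word in words:
        if index == 0:
            continue  # no ancillary data in this packet
        if index == 1:
            buf = {}  # a new cycle starts
        buf[index] = word
        if index == CYCLE_LEN:
            if len(buf) == CYCLE_LEN:  # every word of the cycle was seen
                cycles.append(b"".join(buf[i] for i in range(1, CYCLE_LEN + 1)))
            buf = {}
    return cycles


def parse_pvt(cycle):
    # assumed layout for illustration: the first 24 bytes hold x/y/z
    # position as big-endian doubles, the next 12 bytes hold vx/vy/vz
    # velocity as big-endian floats
    x, y, z = struct.unpack(">ddd", cycle[0:24])
    vx, vy, vz = struct.unpack(">fff", cycle[24:36])
    return {"x": x, "y": y, "z": z, "vx": vx, "vy": vy, "vz": vz}
```

A CLI front-end would essentially wrap these two steps around the packet reader and dump the resulting records to a file.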

avalentino commented 5 days ago

For sure I need to improve the docs and examples.

sirbastiano commented 5 days ago

Perhaps you are just very good at explanations eheh. Doing as you say, I got this; I don't know if it is the expected behaviour:

Incomplete sub-commutated data cycle: 26
Incomplete sub-commutated data cycle: 57
Incomplete sub-commutated data cycle: 88
Incomplete sub-commutated data cycle: 119
Incomplete sub-commutated data cycle: 150
Incomplete sub-commutated data cycle: 181
Incomplete sub-commutated data cycle: 212
Incomplete sub-commutated data cycle: 243
Incomplete sub-commutated data cycle: 274
Incomplete sub-commutated data cycle: 305
Incomplete sub-commutated data cycle: 336
Incomplete sub-commutated data cycle: 367
Incomplete sub-commutated data cycle: 398
Incomplete sub-commutated data cycle: 429
Incomplete sub-commutated data cycle: 460
Incomplete sub-commutated data cycle: 491
Incomplete sub-commutated data cycle: 522
Incomplete sub-commutated data cycle: 553
Incomplete sub-commutated data cycle: 584
Incomplete sub-commutated data cycle: 615
Incomplete sub-commutated data cycle: 646
Incomplete sub-commutated data cycle: 677
Incomplete sub-commutated data cycle: 708
Incomplete sub-commutated data cycle: 739
Incomplete sub-commutated data cycle: 770

Anyway, I got the decoded stream in output.

avalentino commented 5 days ago

Are those warning messages? In principle the decoder tries to detect incomplete sub-com sequences. The current behavior should be to just log and discard the incomplete ones, so yes, it is normal that you still get some output. What I do not remember is whether there are "normal" situations in which a sub-com sequence can be incomplete in the standard acquisition timeline. Clarifying that could help improve the logging and the robustness of the process.
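The log-and-discard behavior described above can be sketched as follows, assuming the decoder tracks the sub-commutation word index (1..64) and flags any cycle that does not run the full range. This is a minimal illustration with hypothetical names, not the actual s1isp implementation.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("subcom")

CYCLE_LEN = 64  # words per full sub-commutation cycle


def split_cycles(indices):
    """Split a stream of sub-commutation word indices into cycles.

    Returns (complete, incomplete) lists of index runs.  A cycle is
    complete only when its indices run 1..64 without gaps; anything
    else is logged and discarded, mirroring the
    "Incomplete sub-commutated data cycle" messages above.
    """
    complete, incomplete = [], []
    current = []
    for packet_number, index in enumerate(indices):
        if index == 1 and current:
            # a new cycle started before the previous one finished
            log.warning("Incomplete sub-commutated data cycle: %d", packet_number)
            incomplete.append(current)
            current = []
        current.append(index)
        if index == CYCLE_LEN:
            if current == list(range(1, CYCLE_LEN + 1)):
                complete.append(current)
            else:
                log.warning("Incomplete sub-commutated data cycle: %d", packet_number)
                incomplete.append(current)
            current = []
    return complete, incomplete
```

Keeping the incomplete runs around (instead of dropping them) would be the starting point for the partial decoding discussed later in the thread.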

By the way, do the state vectors that you get have a "more or less" regular time spacing?

sirbastiano commented 5 days ago

Yes, if a cycle is incomplete the calculation gets stopped. It may be due to asynchronicity between the GPS and the attitude sensor? It should be possible to have a partial decoding in any case.

I verified that the state vectors are more or less the same as those of another package doing the same thing: same number of rows and slightly different values (I assume you are using a more precise representation).
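A quick, dependency-free way to run both checks mentioned here (regular time spacing, and row-by-row agreement with another package) might look like the following; the function names and tolerances are illustrative assumptions.

```python
def check_spacing(times, tol=0.1):
    """Check that a list of timestamps (seconds) is more or less
    regularly spaced: every interval must be within `tol` (relative)
    of the median interval."""
    deltas = sorted(t1 - t0 for t0, t1 in zip(times, times[1:]))
    median = deltas[len(deltas) // 2]
    return all(abs(d - median) <= tol * median for d in deltas)


def compare_state_vectors(a, b, rtol=1e-6):
    """Row-by-row comparison of two state-vector tables (lists of
    (t, x, y, z) tuples), allowing slightly different values due to
    representation/precision differences between decoders."""
    if len(a) != len(b):  # same number of rows expected
        return False
    return all(
        abs(va - vb) <= rtol * max(abs(va), abs(vb), 1.0)
        for ra, rb in zip(a, b)
        for va, vb in zip(ra, rb)
    )
```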

We can use bpack to create a dataframe in the same way for the records. I don't know if this is already implemented, but it is not too much of an effort.

avalentino commented 5 days ago

> Yes, if a cycle is incomplete the calculation gets stopped. It may be due to asynchronicity between the GPS and the attitude sensor?

Yes, off the top of my head it should be something like that, but I never completed the investigation.

> It should be possible to have a partial decoding in any case.

The problem is that we need a complete enough buffer to decode with bpack. If we increase the granularity of the bpack descriptors of the sub-com records, in principle it should be possible to have partial decoding, at the cost of some small slowdown in theory. By the way, sub-com data are small, so I do not think that the execution time will become a real problem.

> I verified that the state vectors are more or less the same as those of another package doing the same thing: same number of rows and slightly different values (I assume you are using a more precise representation).

If you have the code to do the comparison, I would be interested in adding it to the test suite, or at least as an example in the notebooks folder.

> We can use bpack to create a dataframe in the same way for the records. I don't know if this is already implemented, but it is not too much of an effort.

Yes, this is more or less what the current implementation should do. By the way, I do not remember all the details about the granularity (see the point above); I need to check.
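As a rough sketch of the dataframe-building step, the snippet below decodes fixed-size record buffers and arranges them column-wise, the shape a dataframe constructor expects. It uses the stdlib `struct` module in place of bpack descriptors so it is self-contained; the record layout and names are stand-ins, not the actual Sentinel-1 layout.

```python
import struct
from collections import namedtuple

# hypothetical record layout: GPS time (double), position (3 doubles),
# velocity (3 floats) -- a stand-in, not the real S1 ancillary layout
PVT = namedtuple("PVT", "t x y z vx vy vz")
_FMT = ">ddddfff"  # big-endian, matching the layout above


def records_to_table(buffers):
    """Decode a list of fixed-size record buffers into a column dict,
    ready to be passed to e.g. pandas.DataFrame(...)."""
    size = struct.calcsize(_FMT)
    rows = [PVT(*struct.unpack(_FMT, buf[:size])) for buf in buffers]
    return {name: [getattr(row, name) for row in rows] for name in PVT._fields}
```

With bpack the `struct` format string would instead be derived automatically from a record descriptor class, which is what makes the granularity question above relevant.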

sirbastiano commented 5 days ago

> The problem is that we need a complete enough buffer to decode with bpack. If we increase the granularity of the bpack descriptors of the sub-com records, in principle it should be possible to have partial decoding, at the cost of some small slowdown in theory. By the way, sub-com data are small, so I do not think that the execution time will become a real problem.

Indeed, they are resolved in less than a second. It's OK if we don't do that.

> If you have the code to do the comparison, I would be interested in adding it to the test suite, or at least as an example in the notebooks folder.

I will do it tomorrow then.

avalentino commented 3 days ago

A new example notebook has been added for sub-commutated data decoding.