OpenC3 / cosmos

OpenC3 COSMOS
https://openc3.com

Stored Telemetry Behavior #478

Open jeffstjean opened 1 year ago

jeffstjean commented 1 year ago

I'm not sure whether this is intended behavior or a bug I stumbled across. It might be related to the following issues/PRs, but it's hard to identify the root cause, if any: #228, #201, #321, #402

The issue I'm facing involves viewing or exporting telemetry with DataViewer and DataExtractor. I have a simple microservice that parses a file with repeated telemetry packets (the log file from the spacecraft with historical telemetry) and emits packets into the Telemetry module. The issue arises when I mark these packets as stored and then try to view/export with the frontend tools. If the PACKET_TIME for the packets is between, say, 11:00:00 AM and 11:10:00 AM, when I query those timestamps, there is no data present. However, if I process the same log file with stored set to false, the DataViewer and DataExtractor work as expected - with the side effect that PacketViewer now shows the packets as they are emitted (which I expected).
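
For reference, the core of that microservice's run loop boils down to something like this (a simplified sketch with hypothetical target/packet names and a hypothetical parse_log_entries helper, not the real code):

# Sketch only: replay historical packets from a spacecraft log file.
# "SAT", "HK_TLM", log_file, and parse_log_entries are hypothetical stand-ins.
parse_log_entries(log_file) do |entry|
  packet = OpenC3::System.telemetry.packet("SAT", "HK_TLM")
  packet.buffer = entry.raw_bytes        # raw packet bytes from the log entry
  packet.received_time = Time.now.sys
  packet.stored = true                   # set to false and the tools behave as expected
  OpenC3::TelemetryTopic.write_packet(packet, scope: @scope)
end

The only thing I change between the two runs is that packet.stored line.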

Is this the intended behavior of the stored flag? I feel like DataViewer and DataExtractor should show the entire time series based off of PACKET_TIME, regardless of whether it's stored, whereas PacketViewer should only show the most recent non-stored packet. Is this a bug or just a misunderstanding on my part of these tools?

I can provide an example for reproducing this, but it may take a bit as I'm in the middle of some other testing. I was hoping this might be enough information to push me in the right direction, but let me know if a full example is needed.

ryanmelt commented 1 year ago

The intended behavior is that stored data is available to the streaming API after it has left the Redis streams and been written to files (generally 10 minutes). While it is in the streams, only the realtime data is used for streaming.

jeffstjean commented 1 year ago

I'm not sure if this should be a separate attribute or if it can be added to the logic of the stored flag, but it would be nice to be able to treat telemetry packets that represent the state of a target in the past differently from those that represent its current state. If this is the intended behavior of stored, then great! This would be ideal for supporting things like the Playback flag in the cFS Extended Header.
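
For example, something like this is what I have in mind (purely illustrative; it assumes the playback flag has already been decommutated into a hypothetical PLAYBACK item):

# Sketch only: mark cFS playback telemetry as stored before publishing it.
# PLAYBACK is a hypothetical item name; adjust to however the flag is defined.
packet.stored = (packet.read("PLAYBACK") == 1)
OpenC3::TelemetryTopic.write_packet(packet, scope: @scope)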

I'm working on familiarizing myself with the inner workings of Cosmos so forgive me if I'm misunderstanding anything!

ryanmelt commented 1 year ago

That is the purpose of the stored flag.

jeffstjean commented 1 year ago

I did some more digging into this and I think the way stored telemetry streams are written/read doesn't play nicely with the logged streaming thread - specifically if a stored packet hasn't yet been transferred from a stream to a log file.

The behavior I am observing is outlined below. For each dummy telemetry packet that I emit, I set the packet time such that each packet is one second further in the future (just to make querying easier):

  1. Receive realtime (not stored) packet
  2. Use DataExtractor to download all packets from the time range of interest
  3. Observe one packet in CSV
  4. Receive stored packet
  5. Use DataExtractor to download all packets from the time range of interest (same as 2)
  6. Observe one packet in CSV
  7. Delete plugin (forcing streams to be flushed to files)
  8. Re-install plugin
  9. Use DataExtractor to download all packets from the time range of interest (same as 2)
  10. Observe two packets in CSV

I believe this occurs in the redis_thread_body() function:

def redis_thread_body
  topics, offsets, item_objects_by_topic, packet_objects_by_topic = @collection.topics_offsets_and_objects
  results = []
  if topics.length > 0
    xread_result = OpenC3::Topic.read_topics(topics, offsets, 500) do |topic, msg_id, msg_hash, _|
      stored = OpenC3::ConfigParser.handle_true_false(msg_hash["stored"])
      next if stored # Ignore stored packets while realtime streaming
      # ...
    end
  end
end

Since both RealtimeStreamingThread and LoggedStreamingThread use this function, I believe stored packets are getting ignored even when streaming logged data.
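
For illustration, the kind of distinction I would expect is sketched below (not a patch; @stream_mode is a hypothetical attribute the thread could use to know whether it is streaming realtime or logged data):

stored = OpenC3::ConfigParser.handle_true_false(msg_hash["stored"])
# Sketch only: skip stored packets while streaming realtime, but keep them
# when this thread is replaying logged/stored data.
next if stored and @stream_mode == :REALTIME # @stream_mode is hypothetical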

I attempted to recreate this with the demo plugin and succeeded, albeit with a slightly different behavior. Steps to recreate using the demo:

  1. Create a microservice called TEST with a file called test_microservice.rb. Paste in the code below.
    
require 'openc3'
require 'openc3/interfaces'
require 'openc3/microservices/microservice'
require 'openc3/tools/cmd_tlm_server/interface_thread'

module OpenC3
  class TestMicroservice < Microservice
    def initialize(name)
      super(name)
    end

    def run
      while true
        OpenC3::Topic.read_topics(["DEFAULT__DECOMCMD__{INST}__EMIT_PACKET"]) do |topic, _msg_id, msg_hash, _redis|
          args = JSON.parse(msg_hash["json_data"], :allow_nan => true, :create_additions => true)

          # increment timestamp by 1s
          @time = 1111111111 if @time == nil # middle of 2005
          @time = @time + 1
          stored = args["STORED"] == 1

          # create packet and set timestamps
          packet = OpenC3::System.telemetry.packet("INST", "EMITTED_TELEMETRY")
          packet.write("TIMESTAMP", @time, :RAW)
          packet.received_time = Time.now.sys
          packet.stored = stored

          # emit telemetry
          msg = "Sending tlm as "
          msg = msg + "stored" if stored
          msg = msg + "realtime" if !stored
          OpenC3::Logger.info(msg)
          OpenC3::TelemetryTopic.write_packet(packet, scope: @scope)
        end
      end
    end
  end
end

OpenC3::TestMicroservice.run if __FILE__ == $0

2. Add the following command to inst_cmds.txt:

COMMAND <%= target_name %> EMIT_PACKET BIG_ENDIAN
  <%= render "_ccsds_cmd.txt", locals: {id: 100} %>
  APPEND_PARAMETER STORED 1 UINT MIN 1 0 "Emit as stored packet?"
    STATE "No" 0
    STATE "Yes" 1

3. Add the following telemetry to inst_tlm.txt:

TELEMETRY <%= target_name %> EMITTED_TELEMETRY BIG_ENDIAN
  <%= render "_ccsds_tlm.txt", locals: {apid: 100} %>
  APPEND_ITEM TIMESTAMP 32 UINT
  ITEM PACKET_TIME 0 0 DERIVED "Formatted acquisition timestamp"
    READ_CONVERSION unix_time_conversion.rb TIMESTAMP

4. Add the microservice to plugin.txt:

MICROSERVICE TEST test_microservice
  TARGET_NAME INST
  CMD ruby test_microservice.rb


6. Start Cosmos without auto-installing the demo plugin (unsure if this affects behavior).
7. Upload the plugin, pressing the X on all options except inst_target_name, inst_int_name, log_retain_time, reduced_log_retain_time (last two only required to prevent errors)
8. Send the INST EMIT_PACKET command with STORED set to **"NO"** (this send and the one in step 12 can also be scripted; see the sketch after this list)
9. Verify packet INST EMITTED_TELEMETRY has a received count of 1
10. Use the data extractor to download a CSV of INST EMITTED_TELEMETRY with date range 01/01/2005 - 12/31/2005
11. Verify the CSV has one entry
12. Send the INST EMIT_PACKET command with STORED set to **"YES"**
13. Verify packet INST EMITTED_TELEMETRY still has a received count of 1
14. Use the data extractor to download a CSV of INST EMITTED_TELEMETRY with date range 01/01/2005 - 12/31/2005
15. Observe the data extractor hanging about halfway through (presumably on the second packet, the stored one)
16. See the following warning: `Reducer Warning: DEFAULT/decom_logs/tlm/INST/20050318/20050318015833000000000__20050318015833000000000__DEFAULT__INST__ALL__stored__decom.bin.gz: Could not be retrieved`
17. Press cancel
18. Verify the CSV still has one entry
19. Delete the plugin
20. Reinstall the plugin with the same options
21. Use the data extractor to download a CSV of INST EMITTED_TELEMETRY with date range 01/01/2005 - 12/31/2005
22. Observe error that there is no data for this range
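
For convenience, the two EMIT_PACKET sends (steps 8 and 12 above) can also be driven from Script Runner with something like:

# Sketch: send one realtime packet, then one stored packet.
cmd("INST EMIT_PACKET with STORED 0")  # step 8: 0 = "No" (realtime)
wait(2)
cmd("INST EMIT_PACKET with STORED 1")  # step 12: 1 = "Yes" (stored)
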
ajavorskidev commented 1 year ago

Is there any update on this? I am running into the same issue in my project.