devDucks / asi-rs

ZWO ASI drivers written in Rust
GNU General Public License v3.0

refactor exposure cycle #5

Open MattBlack85 opened 2 years ago

MattBlack85 commented 2 years ago

An exposure cycle usually follows this flow:

1) a client requests the camera to expose for X seconds
2) the driver calls into the library function to do the exposure
3) when done, the client gets the BLOB with the exposure data

Right now, to have a semi-functional driver that can be tested, the driver converts the binary data into FITS and dumps the picture to disk.
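
For reference, a rough sketch of that current behaviour (the `MockCamera` type and the plain byte dump are illustrative stand-ins, not the actual asi-rs code, which goes through a real FITS conversion):

```rust
use std::{fs, thread, time::Duration};

// Illustrative stand-in for the SDK-backed camera; not the real asi-rs type.
struct MockCamera;

impl MockCamera {
    fn start_exposure(&self, _seconds: f64) {}
    fn exposure_done(&self) -> bool {
        true // the real driver polls the SDK until the exposure finishes
    }
    fn download_blob(&self) -> Vec<u8> {
        vec![0u8; 1024] // raw sensor bytes copied out of the camera
    }
}

// Current behaviour: block until the exposure is done, then dump to disk.
// The real driver converts the blob to FITS first; raw bytes are written
// here only to keep the sketch short.
fn expose_and_dump(camera: &MockCamera, seconds: f64) -> std::io::Result<()> {
    camera.start_exposure(seconds);
    while !camera.exposure_done() {
        thread::sleep(Duration::from_millis(50));
    }
    let blob = camera.download_blob();
    fs::write("exposure.fits", blob)
}

fn main() -> std::io::Result<()> {
    expose_and_dump(&MockCamera, 1.0)
}
```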

The exposure cycle should be refactored such that:

1) clients request an exposure
2) the driver prepares the properties to signal an exposure is ongoing
3) when the exposure is done, the driver dumps the exposure data into a buffer
4) the client gets the raw data and does whatever it wants with it
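
A minimal sketch of what the refactored cycle could look like on the driver side; `ExposureState`, `Driver` and the in-memory handoff are hypothetical names used only to illustrate the idea of a state property plus a raw buffer the client fetches:

```rust
use std::sync::{Arc, Mutex};

// Hypothetical state property exposed to clients while an exposure runs.
#[derive(Clone, Copy, Debug, PartialEq, Default)]
enum ExposureState {
    #[default]
    Idle,
    Exposing,
    Ready,
}

// Hypothetical driver-side storage: state + the last exposure kept in memory.
#[derive(Default)]
struct Shared {
    state: ExposureState,
    buffer: Vec<u8>,
}

struct Driver {
    shared: Arc<Mutex<Shared>>,
}

impl Driver {
    // 1) client requests an exposure, 2) driver flips the state property,
    // 3) when done the data is dumped into the in-memory buffer.
    fn expose(&self, _seconds: f64) {
        self.shared.lock().unwrap().state = ExposureState::Exposing;
        let data = vec![0u8; 1024]; // stand-in for the SDK download
        let mut shared = self.shared.lock().unwrap();
        shared.buffer = data;
        shared.state = ExposureState::Ready;
    }

    // 4) the client gets the raw data and does whatever it wants with it.
    fn fetch_raw(&self) -> Option<Vec<u8>> {
        let mut shared = self.shared.lock().unwrap();
        if shared.state == ExposureState::Ready {
            shared.state = ExposureState::Idle;
            Some(std::mem::take(&mut shared.buffer))
        } else {
            None
        }
    }
}

fn main() {
    let driver = Driver { shared: Arc::new(Mutex::new(Shared::default())) };
    driver.expose(1.0);
    println!("got {:?} bytes", driver.fetch_raw().map(|b| b.len()));
}
```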

open questions about this API remain:

AndreaMarini74 commented 2 years ago

One possible process for managing the camera shot cycle could be something like this:

  1. clients request one or more exposures
  2. the driver prepares the properties to signal an exposure is ongoing, adding an item to the exposure list
  3. when the exposure is done, the driver dumps the exposure data to physical storage
  4. when the dump is done, the driver updates the exposure list item
  5. the client, polling the exposure list, fetches each image. Images are saved in binary form and are converted when the client fetches them.

An idea of this could look like the following diagram: Exposure Cycle
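
A possible shape for that exposure list on the driver side, with hypothetical names (`ExposureItem`, `ExposureStatus`): the driver appends an item per request, updates it once the dump to storage is done, and polling clients only ever read the list:

```rust
use std::path::PathBuf;

// Hypothetical status tracked per requested exposure; not part of asi-rs.
#[derive(Debug)]
enum ExposureStatus {
    Exposing,
    Saved(PathBuf), // where the raw data was dumped on storage
}

#[derive(Debug)]
struct ExposureItem {
    id: u32,
    seconds: f64,
    status: ExposureStatus,
}

// Steps 1-2: a request adds an item to the list and signals it is ongoing.
fn queue_exposure(list: &mut Vec<ExposureItem>, id: u32, seconds: f64) {
    list.push(ExposureItem { id, seconds, status: ExposureStatus::Exposing });
}

// Steps 3-4: once the dump to storage is done, the list item is updated so
// that polling clients (step 5) can see and fetch it.
fn mark_saved(list: &mut [ExposureItem], id: u32, path: PathBuf) {
    if let Some(item) = list.iter_mut().find(|i| i.id == id) {
        item.status = ExposureStatus::Saved(path);
    }
}

fn main() {
    let mut list = Vec::new();
    queue_exposure(&mut list, 1, 600.0);
    mark_saved(&mut list, 1, PathBuf::from("/tmp/exp_0001.raw"));
    println!("{list:?}");
}
```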

MattBlack85 commented 2 years ago

@AndreaMarini74 good inputs!

two questions:

  1. the driver may run on low-power devices with poor filesystem I/O performance; is it worth storing blobs on the filesystem?
  2. the usual flow in all of the astrophotography applications in the wild is that acquiring an image is a blocking process: nothing moves forward until the image is transferred to the client. While I don't care about standards (as you probably understand by now :D ), I am just wondering if allowing this process to be asynchronous is the right thing to do. Think especially of very short exposures: you may easily end up with an unsynchronized UI where the exposure timer is still ticking but you are shown the picture you just downloaded from the item list.

AndreaMarini74 commented 2 years ago

  1. Yes, true. I considered the fact that the device has low resources, so how much of them is required to keep an image in memory? How does this affect performance? So my idea is that the driver takes the image and stores it to physical storage; in memory you just keep the pointer to it. The client can then take the data and manage it as it wants, removing the raw file when it has finished with it.
  2. Correct. On that point, I was thinking that the driver can take each image one at a time; it does not wait for the client. The only way the client has to know the driver finished the task is by looking at the list item properties. The list item can be a list of several tasks to perform, like four shots, moving the mount for dithering, then another four shots (one by one). These tasks have to be performed sequentially by their nature, but between one task and the next the driver does not have to wait for the client.
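
To make the "only the pointer stays in memory" idea concrete, a rough sketch of the fetch side (again with hypothetical names): the driver keeps just the path, the client reads the bytes from storage, and the raw file can be removed once the transfer succeeded:

```rust
use std::{fs, io, path::PathBuf};

// Hypothetical list entry: the image itself lives on storage, only the path
// (the "pointer" to it) is kept in memory.
struct SavedExposure {
    path: PathBuf,
}

// The client polls the list, fetches the bytes from storage, and the raw file
// is removed once the transfer succeeded; the driver never waits for this.
fn fetch_and_cleanup(entry: &SavedExposure) -> io::Result<Vec<u8>> {
    let data = fs::read(&entry.path)?;
    fs::remove_file(&entry.path)?;
    Ok(data)
}

fn main() -> io::Result<()> {
    let entry = SavedExposure { path: PathBuf::from("/tmp/exp_0001.raw") };
    fs::write(&entry.path, vec![0u8; 1024])?; // pretend the driver dumped it
    let data = fetch_and_cleanup(&entry)?;
    println!("fetched {} bytes", data.len());
    Ok(())
}
```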

MattBlack85 commented 2 years ago

@AndreaMarini74 the blob is already available in memory when you ask the camera for it. If you look at the code here, you need to pass a pointer to a buffer to the SDK, so effectively it is already a 2-step action for the camera itself: you ask the camera to expose and it puts the data in its memory, then you ask the camera to download the exposure and it flushes that memory, copying bytes from the camera memory into the buffer owned by the camera driver.
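
For reference, the 2-step shape of that interaction, sketched with C-style declarations modelled on the ZWO SDK's ASIStartExposure / ASIGetExpStatus / ASIGetDataAfterExp; the actual asi-rs bindings, types and error handling may differ:

```rust
use std::os::raw::{c_int, c_long, c_uchar};

// Declarations modelled on the ZWO SDK; the real asi-rs bindings may use
// different names/types. Linking assumes libASICamera2 is installed.
#[link(name = "ASICamera2")]
extern "C" {
    fn ASIStartExposure(camera_id: c_int, is_dark: c_int) -> c_int;
    fn ASIGetExpStatus(camera_id: c_int, status: *mut c_int) -> c_int;
    fn ASIGetDataAfterExp(camera_id: c_int, buffer: *mut c_uchar, buf_size: c_long) -> c_int;
}

const ASI_EXP_SUCCESS: c_int = 2; // from the SDK's ASI_EXPOSURE_STATUS enum

/// Step 1: ask the camera to expose. Step 2: copy the finished exposure from
/// camera memory into a buffer owned by the driver (error handling omitted).
pub fn expose_and_download(camera_id: c_int, buf_len: usize) -> Vec<u8> {
    let mut buffer = vec![0u8; buf_len];
    unsafe {
        ASIStartExposure(camera_id, 0);
        let mut status: c_int = 0;
        loop {
            ASIGetExpStatus(camera_id, &mut status);
            if status == ASI_EXP_SUCCESS {
                break;
            }
            std::thread::sleep(std::time::Duration::from_millis(10));
        }
        ASIGetDataAfterExp(camera_id, buffer.as_mut_ptr(), buf_len as c_long);
    }
    buffer
}
```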

Not sure I understand the item list actions correctly; I cannot see actions like dithering there, as that will live in the mount driver 🤔 unless I am missing something

AndreaMarini74 commented 2 years ago

@MattBlack85 good! I missed the point that the driver already works in memory instead of saving the image locally. What I was trying to depict is that the item I call "driver" is the implementation layer and the "camera" is the camera driver. It is fine that the camera keeps the image in memory; my idea is that, when the software is aware the camera has finished, it saves the image to disk in raw format. The client can then take that raw format and do whatever it wants with it. It should be a sort of API layer between the camera and the client.

About the dithering: yes, I know it is a mount action, I just tried to picture a situation where you have multiple actions and you have to manage them. For example, suppose you have a sequence of 16 photos of about 10 minutes each, and every two photos you have to perform a dither. If you manage this with multiple threads, one for the camera and one for the mount, the mount thread should start when the second photo is finished. You have two possible solutions. The first is that you take the sequence and decouple each action onto its own thread, and it is the thread itself that checks whether it can run; in this idea, for this example, the mount thread waits for 20 minutes before the camera reports that the photo is finished. The second is that you put the sequence in a list and, based on the sequence step type, the correct thread is started; this way you have a centralized component that manages the sequence, and the mount starts when it is its turn. Pay attention: the action is performed not when the client has finished BUT when the device has finished; so, if the camera has finished taking the image but the client has not fetched it yet, the dither can start, because the camera is free.
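
The second idea (a centralized component dispatching a step list) could look roughly like this; `SequenceStep` and the run functions are hypothetical, and the point is only that the dither starts as soon as the camera itself is free, not when the client has downloaded the image:

```rust
// Hypothetical sequence description: the names and steps are illustrative,
// not part of asi-rs.
enum SequenceStep {
    Exposure { seconds: f64 },
    Dither,
}

// Stand-ins for the real device calls; each returns when the *device* is
// done, regardless of whether a client has fetched anything yet.
fn run_exposure(seconds: f64) {
    println!("exposing for {seconds}s, then dumping to the exposure list");
}

fn run_dither() {
    println!("dithering the mount");
}

// Centralized executor: it walks the sequence in order and starts the next
// step as soon as the previous device action has finished.
fn run_sequence(steps: &[SequenceStep]) {
    for step in steps {
        match step {
            SequenceStep::Exposure { seconds } => run_exposure(*seconds),
            SequenceStep::Dither => run_dither(),
        }
    }
}

fn main() {
    // e.g. two 600 s exposures, then a dither, then two more exposures
    let sequence = vec![
        SequenceStep::Exposure { seconds: 600.0 },
        SequenceStep::Exposure { seconds: 600.0 },
        SequenceStep::Dither,
        SequenceStep::Exposure { seconds: 600.0 },
        SequenceStep::Exposure { seconds: 600.0 },
    ];
    run_sequence(&sequence);
}
```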