For the moment, here is the typical pipeline to obtain the blood flow in the retina's blood vessels:
Use the real-time visualisation of the eye in Holovibes to get a good focus on the patient's eye, then record the camera's high-throughput stream with Holovibes. The computation behind the real-time visualisation is usually a spatial Fourier transform (Fresnel transform) followed by a PCA. These computations do not give access to the moments or to the Doppler shift needed to estimate the blood flow.
Load the interferograms in Holowaves (MATLAB), and then perform the following computations:
Fresnel transform to propagate the signal to the retina's plane
Apply SVD filtering on the matrix formed by the sequence of flattened frames (grouped in batches of usually 512 frames). This operation removes the first singular components, which correspond to reflection noise.
STFT: short-time Fourier transform (an FFT along the time axis) on the resulting batch matrices
On the frequency signal obtained from the STFT, compute the moments of order 0, 1, and 2.
For a complex signal with frequencies $$f$$ ranging from $$-\infty$$ to $$+\infty$$, the moment of order $$k$$ is $$m_k=\int_{-\infty}^{\infty}f^k X(f)\,df.$$ To remove signal components caused by movements of the eye, we select frequencies via parameters $f_1$ and $f_2$ and compute $$m_k=\int_{-f_2}^{-f_1}f^k X(f)\,df + \int_{f_1}^{f_2}f^k X(f)\,df$$ for $k=0,1,2$.
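The SVD filtering, STFT, and band-limited moment computation above can be sketched numerically as follows. This is an illustrative NumPy sketch, not the Holowaves or Holovibes implementation: the discrete sums stand in for the integrals, $X(f)$ is taken as the power spectrum, and the parameter names (`n_remove`, `f1`, `f2`) are placeholders.

```python
import numpy as np

def moments_from_batch(frames, f1, f2, n_remove=1):
    """frames: (n_frames, n_pixels) complex batch; returns m0, m1, m2 per pixel."""
    # SVD filtering: zero out the first singular components (reflection noise)
    U, S, Vh = np.linalg.svd(frames, full_matrices=False)
    S[:n_remove] = 0.0
    filtered = (U * S) @ Vh

    # The STFT reduces, for one batch, to an FFT along the time axis
    spectrum = np.fft.fft(filtered, axis=0)
    freqs = np.fft.fftfreq(frames.shape[0])   # normalized frequencies
    power = np.abs(spectrum) ** 2             # X(f) taken as the power spectrum

    # Keep only f1 <= |f| <= f2 to reject eye-motion components
    band = (np.abs(freqs) >= f1) & (np.abs(freqs) <= f2)
    moments = [np.sum((freqs[band, None] ** k) * power[band], axis=0)
               for k in range(3)]
    return moments[0], moments[1], moments[2]
```

Because the band is symmetric around zero, the discrete sum over the selected positive and negative frequencies matches the two-sided integral in the formula above.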
In PulseWave (MATLAB), the moments are used to compute a segmentation mask as well as the Doppler shift in the vessels. This Doppler shift allows the blood velocity to be estimated. Then, the blood flow is estimated from the blood velocity and the vessel volume deduced from the diameter of the segmented mask.
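As a rough numeric sketch of that last step: the mean Doppler shift per pixel is $m_1/m_0$, an assumed linear calibration maps it to velocity, and the flow follows from the vessel diameter taken from the segmentation mask. The calibration constant `k_velocity` and the circular cross-section assumption below are illustrative, not PulseWave's actual model.

```python
import numpy as np

def blood_flow(m0, m1, vessel_diameter_m, k_velocity=1.0):
    """Return (mean velocity, volumetric flow) for one vessel.

    m0, m1: moment maps over the vessel's pixels.
    k_velocity: hypothetical Doppler-shift-to-velocity calibration factor.
    """
    mean_shift = m1 / m0                            # average Doppler frequency
    velocity = k_velocity * np.mean(mean_shift)     # mean velocity in the vessel
    area = np.pi * (vessel_diameter_m / 2) ** 2     # circular cross-section
    return velocity, velocity * area                # flow = velocity * area
```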
In this pipeline, the longest step is by far the second one. We would like to use Holovibes' performance to accelerate this step; by incorporating it into the CLI, we could even use ProcessHoloFiles.ps1 to automatically generate the moments for all the interferograms stored in a directory.
Here are the steps to achieve such a dream:
[x] Make the .holo format compatible with the recording of the moments. This requires creating a new ImageType and organizing the data differently (since there will be 3 matrices, the simplest may be to write each matrix one after the other)
[ ] Handle the reading of such .holo files in Holovibes (display an error for the moment)
[x] Code the backend computations to obtain the moments
[x] Create the API functions, and the GUI methods as well as the CLI options
[x] Make it work with the ProcessHolovibes.ps1, with default parameters
[ ] Problem with CUDA: in hsv.cu, there are these errors: __copy:: D->D: failed: cudaErrorInvalidValue: invalid argument and [Thrust] Error while computing a percentile. This is linked to the use of the function "apply_operations_on_hsv" with gpu_output as a parameter
[x] When using the Moments ImageType, "Composite image" needs to be used before showing a result.
[x] When trying to record Moments, it doesn't record anything while reporting 2048 images to record
[ ] Holovibes crashes when opening a file with too high an FPS while switching to the Moments ImageType
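The layout proposed in the first checklist item (the three moment matrices written one after the other for each batch) could be sketched as below. This is a hypothetical illustration of the sequential layout only; the function name and the use of float32 are assumptions, not the actual .holo specification.

```python
import numpy as np

def append_moment_frames(path, m0, m1, m2):
    """Append the three moment frames back to back, each as raw float32 bytes."""
    with open(path, "ab") as f:
        for m in (m0, m1, m2):   # m0, then m1, then m2, one after the other
            f.write(np.asarray(m, dtype=np.float32).tobytes())
```

A reader would then recover each moment by seeking in steps of `width * height * 4` bytes within a batch's record.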
Question:
Why does RecordMode Processed take twice as much space as Raw, and how are the data stored? I only see this for both raw and processed: std::fwrite(frame, 1, framesize, file);
Do we need to apply operations on the HSV after computing it?
Linked to the first question: why do we need to store the Moments as float, while Processed and Raw store the data as char?
What is the correct way to obtain the Moments? Is it really useful to have a Moments ImageType, when getting the video output isn't really useful (I believe?), or is it best to only have a record mode, which would require rebuilding a class to compute the HSV?