vital-ultrasound / ai-echocardiography-for-low-resource-countries

AI-assisted echocardiography for low-resource countries

Curation, selection and validation of 4CV frame labels #21

Closed mxochicale closed 2 years ago

mxochicale commented 2 years ago

There are a few scripts for anonymization, plus video_to_sliding_video.py from 3909cc0. However, those might not work with the raw datasets of 81 participants, and they do not take into account the inconsistency of such videos (different sources, views, modes, absolute paths, among others). That said, I am raising this ticket to address the above and related topics, so as to prepare and curate the datasets.

mxochicale commented 2 years ago

How to relate the JSON label metadata to frames of the mp4 videos?

  Frame_index/number_of_frames=10602/23285,  frame_timestamp=05:53:753.400
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10602.png
Function 'maks_for_captured_us_image' executed in 0.0018s

  Frame_index/number_of_frames=10603/23285,  frame_timestamp=05:53:786.767
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10603.png
Function 'maks_for_captured_us_image' executed in 0.0018s
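One way to relate the two, sketched below under an assumed labels schema (the real annotation json may name its fields differently): convert between frame indices and timestamps via the video fps, then select the frame range covered by each labelled interval.

```python
# Sketch: relate JSON label time ranges to extracted frame indices via
# the video fps. The 'start'/'end' label fields are an assumed schema;
# the real annotation json may differ.
import math

def frame_to_seconds(frame_index, fps=30.0):
    """Timestamp in seconds of a given frame index."""
    return frame_index / fps

def frames_for_label(label, fps=30.0):
    """Frame indices whose timestamps fall inside a labelled interval."""
    first = math.ceil(label["start"] * fps)
    last = math.floor(label["end"] * fps)
    return range(first, last + 1)

print(frame_to_seconds(10602))                             # ~353.4 s
print(list(frames_for_label({"start": 1.0, "end": 1.1})))  # frames 30..33
```

With this mapping in both directions, each nframesNNNNN.png can be checked against the labelled intervals in the json before being kept.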
mxochicale commented 2 years ago

echocardiography/videos/01NVb-003-072/T2/01NVb-003-072-2-echo-cont.mp4

al-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10734.png
Function 'maks_for_captured_us_image' executed in 0.0020s
Frame_index/number_of_frames=10735/20129, current_frame_timestamp=05:58:191.167
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10735.png
Function 'maks_for_captured_us_image' executed in 0.0019s
Frame_index/number_of_frames=10736/20129, current_frame_timestamp=05:58:224.533
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10736.png
Function 'maks_for_captured_us_image' executed in 0.0019s

...

Frame_index/number_of_frames=20128/20129, current_frame_timestamp=00:00:0.000
Frame_index/number_of_frames=20129/20129, current_frame_timestamp=00:00:0.000
Function 'Video_to_ImageFrame' executed in 62.1589s


![nframes10617](https://user-images.githubusercontent.com/11370681/138823468-caacfc66-ef04-4760-9317-8f2696883553.png)

* ~/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV

Frame_index/number_of_frames=14131/20129, current_frame_timestamp=07:51:504.367
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14131.png
Function 'cropped_image_frame' executed in 0.0000s
Frame_index/number_of_frames=14132/20129, current_frame_timestamp=07:51:537.733
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14132.png
Function 'cropped_image_frame' executed in 0.0000s
Frame_index/number_of_frames=14133/20129, current_frame_timestamp=07:51:571.100
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14133.png
Function 'cropped_image_frame' executed in 0.0000s

...

Frame_index/number_of_frames=20128/20129, current_frame_timestamp=00:00:0.000
Frame_index/number_of_frames=20129/20129, current_frame_timestamp=00:00:0.000
Function 'Video_to_ImageFrame' executed in 37.2505s



![nframes10636](https://user-images.githubusercontent.com/11370681/138823428-ced75029-03db-4651-af60-fb2bfa3f3229.png)
mxochicale commented 2 years ago

Log from the above hash, running python png_to_avi.py --config ../config_files/config_i2v.yml on Wed 3 Nov 08:59:25 GMT 2021:

/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/cropped_us_image/animations/clip001.avi
100%|████████████████████████████████████████████████████████████████████████████████| 180/180 [00:02<00:00, 67.43it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip001.avi
100%|████████████████████████████████████████████████████████████████████████████████| 180/180 [00:02<00:00, 70.21it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip002.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 89/89 [00:01<00:00, 69.08it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip003.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 90/90 [00:01<00:00, 68.90it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T3/cropped_us_image/animations/clip001.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 60/60 [00:00<00:00, 74.16it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T3/cropped_us_image/animations/clip002.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 90/90 [00:01<00:00, 72.09it/s]
mxochicale commented 2 years ago

The code has been simplified in the above commit to use a few lines in the config file:

## 01NVb-003-072
participant_directory: '/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-072'
preprocessed_datasets_path: '/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets'
video_output_pathname: 'cropped_us_image'
participant_path_json_file: '/home/mx19/datasets/vital-us/echocardiography/json/01NVb-003-072'
bounds:
  start_x: 480
  start_y: 120
  width: 1130
  height: 810
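For reference, the bounds above might be applied along these lines. This is a minimal sketch: the slicing and the function name are assumptions, not the actual video_to_imageframes.py code (which would read the YAML with something like yaml.safe_load).

```python
# Sketch of how the config bounds might drive the cropping step.
# crop_us_image is a hypothetical helper, not the real script's API.
import numpy as np

def crop_us_image(frame, bounds):
    """Crop a frame to the ultrasound region given by the config bounds."""
    x, y = bounds["start_x"], bounds["start_y"]
    w, h = bounds["width"], bounds["height"]
    return frame[y:y + h, x:x + w]

bounds = {"start_x": 480, "start_y": 120, "width": 1130, "height": 810}
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one 1920x1080 video frame
print(crop_us_image(frame, bounds).shape)  # (810, 1130, 3)
```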

to create

/01NVb-003-072$ tree -d
.
├── T1
│   └── cropped_us_image
│       └── clip001
├── T2
│   └── cropped_us_image
│       ├── clip001
│       ├── clip002
│       └── clip003
└── T3
    └── cropped_us_image
        ├── clip001
        └── clip002

12 directories

which executes in around:


Function 'Video_to_ImageFrame' executed in 120.6810s
mxochicale commented 2 years ago

070 - T1-04clips; T2-04clips; T3-01clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml

/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_070/01NVb-003-070-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-070/T1/cropped_us_image

  Frame_height=1080,  frame_width=1920 fps=30 nframes=28473 

...

/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-070/T1/cropped_us_image/clip004/nframe09770_of_28472.png
[h264 @ 0x55e959b67340] Invalid NAL unit size (0 > 2168).
[h264 @ 0x55e959b67340] Error splitting the input into NAL units.
[h264 @ 0x55e959b4a980] cbp too large (58) at 2 41
[h264 @ 0x55e959b4a980] error while decoding MB 2 41
[h264 @ 0x55e959ba06c0] No start code is found.
[h264 @ 0x55e959ba06c0] Error splitting the input into NAL units.
[h264 @ 0x55e959bbd080] No start code is found.
[h264 @ 0x55e959bbd080] Error splitting the input into NAL units.
Invalid UE golomb code
[h264 @ 0x55e959b83d00] cbp too large (3199971767) at 73 52
[h264 @ 0x55e959b83d00] error while decoding MB 73 52
[h264 @ 0x55e959bd9a40] No start code is found.
[h264 @ 0x55e959bd9a40] Error splitting the input into NAL units.
[h264 @ 0x55e959bf6400] No start code is found.
[h264 @ 0x55e959bf6400] Error splitting the input into NAL units.

...

>Function 'Video_to_ImageFrame' executed in 120.1562s

python png_to_avi.py --config ../config_files/config_i2v.yml
> Function 'conver_pngframes_to_avi' executed in 12.9088s
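The h264 warnings above come from corrupt packets in the recording; OpenCV's VideoCapture.read() reports such frames with ok=False. A defensive read loop, sketched here with a stand-in reader (cv2 and the real capture object are not assumed), can count and skip undecodable frames instead of aborting:

```python
# Sketch: skip frames the decoder cannot reconstruct rather than crash.
# 'reader' stands in for cv2.VideoCapture(...).read.

def extract_frames(reader, max_frames):
    """Iterate a (ok, frame) reader, skipping undecodable frames."""
    kept, dropped = [], 0
    for _ in range(max_frames):
        ok, frame = reader()
        if not ok:
            dropped += 1     # corrupt packet: note it and continue
            continue
        kept.append(frame)   # in the real script: crop and save as PNG
    return kept, dropped

# Stand-in reader with two corrupt frames out of four:
frames = iter([(True, "f0"), (False, None), (True, "f2"), (False, None)])
kept, dropped = extract_frames(lambda: next(frames), 4)
print(len(kept), dropped)  # 2 2
```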

071 - T1-00clips; T2-02clips; T3-01clips

/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-071/T1/01NVb-003-071-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_071/01NVb-003-071-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-071/T1/cropped_us_image

  Frame_height=1080,  frame_width=1920 fps=30 nframes=21968 

[h264 @ 0x55d25ece6800] Invalid NAL unit size (0 > 5335).
[h264 @ 0x55d25ece6800] Error splitting the input into NAL units.
[h264 @ 0x55d25ef54a80] out of range intra chroma pred mode
[h264 @ 0x55d25ef54a80] error while decoding MB 98 39
[h264 @ 0x55d25ecec780] Invalid NAL unit size (-578446953 > 16443).
[h264 @ 0x55d25ecec780] Error splitting the input into NAL units.
[h264 @ 0x55d25eda1500] Invalid NAL unit size (-2095475258 > 11039).
[h264 @ 0x55d25eda1500] Error splitting the input into NAL units.
[h264 @ 0x55d25edfd540] Invalid NAL unit size (0 > 5105).
[h264 @ 0x55d25edfd540] Error splitting the input into NAL units.
[h264 @ 0x55d25ee19f40] Invalid NAL unit size (13302527 > 3857).
[h264 @ 0x55d25ee19f40] Error splitting the input into NAL units.
[h264 @ 0x55d25ee36900] Invalid NAL unit size (46513 > 12955).
[h264 @ 0x55d25ee36900] Error splitting the input into NAL units.

Function 'Video_to_ImageFrame' executed in 71.6225s

python png_to_avi.py --config ../config_files/config_i2v.yml
Function 'conver_pngframes_to_avi' executed in 9.5130s

072 - T1-01clips; T2-03clips; T3-02clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 109.7876s

python png_to_avi.py --config ../config_files/config_i2v.yml

073 - T1-02clips; T2-01clips; T3-00clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 72.1309s

python png_to_avi.py --config ../config_files/config_i2v.yml

074 - T1-02clips; T2-02clips; T3-00clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 98.3033s

python png_to_avi.py --config ../config_files/config_i2v.yml

075 - T1-01clips; T2-02clips; T3-02clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 85.0689s

python png_to_avi.py --config ../config_files/config_i2v.yml

076 - T1-01clips; T2-01clips; T3-02clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 122.1669s

python png_to_avi.py --config ../config_files/config_i2v.yml

077 - T1-00clips; T2-00clips; T3-00clips

(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-077/T1/01NVb-003-077-1 cont.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_077/01NVb-003-077-1-4CV_Na.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-077/T1/cropped_us_image

  Frame_height=1080,  frame_width=1920 fps=30 nframes=9032 

Traceback (most recent call last):
  File "video_to_imageframes.py", line 357, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 22, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 295, in Video_to_ImageFrame
    length_of_timestamp_vector = len(start_label)
UnboundLocalError: local variable 'start_label' referenced before assignment
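The UnboundLocalError suggests start_label is only assigned inside a branch that never runs when a video has no 4CV labels (e.g. the -4CV_Na.json file for participant 077). A minimal guard, assuming a hypothetical labels list rather than the script's real data structures, is to initialise the variable up front and skip the video when nothing was labelled:

```python
# Sketch of a guard for the UnboundLocalError above; the labels schema
# ('view'/'start' fields) is illustrative, not the real json layout.

def timestamp_vector_length(labels):
    """Length of the label timestamp vector; 0 when nothing is labelled."""
    start_label = []                 # initialise before any branch
    for item in labels:
        if item.get("view") == "4CV":
            start_label.append(item["start"])
    if not start_label:
        return 0                     # no clips: skip instead of crashing
    return len(start_label)

print(timestamp_vector_length([]))                               # 0
print(timestamp_vector_length([{"view": "4CV", "start": 1.2}]))  # 1
```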

078 - T1-01clips; T2-01clips; T3-01clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 82.2932s

python png_to_avi.py --config ../config_files/config_i2v.yml
> Function 'conver_pngframes_to_avi' executed in 15.4657s

079 - T1-01clips; T2-02clips; T3-01clips

python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 82.9324s

python png_to_avi.py --config ../config_files/config_i2v.yml

080 - T1-00clips; T2-00clips; T3-00clips

(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-080/T1/01NVb-003-080-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_080/01NVb-003-080-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-080/T1/cropped_us_image

  Frame_height=1080,  frame_width=1920 fps=30 nframes=14161 

[h264 @ 0x55af9e3c74c0] P sub_mb_type 14 out of range at 48 27
[h264 @ 0x55af9e3c74c0] error while decoding MB 48 27
[h264 @ 0x55af9e3e3e80] Invalid NAL unit size (0 > 2197).
[h264 @ 0x55af9e3e3e80] Error splitting the input into NAL units.
[h264 @ 0x55af9e400840] P sub_mb_type 6 out of range at 102 38
[h264 @ 0x55af9e400840] error while decoding MB 102 38
Function 'Video_to_ImageFrame' executed in 0.3206s

081 - T1-00clips; T2-00clips; T3-00clips

$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-081/T1/01NVb-003-081-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_081/README.md
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-081/T1/cropped_us_image

  Frame_height=1080,  frame_width=1920 fps=30 nframes=16062 

Traceback (most recent call last):
  File "video_to_imageframes.py", line 357, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 22, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 286, in Video_to_ImageFrame
    json_data = json.load(json_file)
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
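The JSONDecodeError here comes from picking up every file in the annotation folder: for participant 081 the only file found is README.md, which is then handed to json.load. Selecting *.json explicitly and failing early with a clear message avoids this; the helper below is an illustrative sketch, not the script's actual code.

```python
# Sketch: only glob .json files, and skip folders with no annotation
# instead of passing README.md (or similar) to json.load.
import json
from pathlib import Path

def load_annotation(folder):
    """Load the first .json annotation in a folder, or None if absent."""
    json_files = sorted(Path(folder).glob("*.json"))
    if not json_files:
        print(f"No .json annotation in {folder}; skipping")
        return None
    with open(json_files[0]) as f:
        return json.load(f)
```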
mxochicale commented 2 years ago

:warning: The json files should match the existence of the corresponding video paths; otherwise the script will raise an error.

considering the following json file:

mx19@sie133-lap:~/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_060$ tree -h
.
├── [ 904]  01NVb-003-060-1-4CV.json
└── [ 340]  README.md

0 directories, 2 files

We might have two potential scenarios.

One with issues:

/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-060/T1/cropped_us_image/clip001/nframe08181_of_17036_T4min32sec972msec.png
Traceback (most recent call last):
  File "video_to_imageframes.py", line 370, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 23, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 251, in Video_to_ImageFrame
    json_file_i = json_files[T_days_i[0]]
IndexError: list index out of range
(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 0.0000s

with the following data paths

mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo/01NVb-003-060$ tree -h
.
├── [4.0K]  extras
├── [4.0K]  T1
│   └── [883M]  01NVb-003-060-1 echo.mp4
├── [4.0K]  T2
│   └── [4.0K]  extras
│       ├── [189M]  01NVb-003-060-2 echo cont.mp4
│       └── [2.0G]  01NVb-003-060-2 echo.mp4
└── [4.0K]  T3

5 directories, 3 files

The other, a working example, with the following data paths:

mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo/01NVb-003-060$ tree -h
.
├── [4.0K]  extras
│   ├── [4.0K]  T2
│   │   └── [4.0K]  extras
│   │       ├── [189M]  01NVb-003-060-2 echo cont.mp4
│   │       └── [2.0G]  01NVb-003-060-2 echo.mp4
│   └── [4.0K]  T3
└── [4.0K]  T1
    └── [883M]  01NVb-003-060-1 echo.mp4

5 directories, 3 files
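A quick consistency check along the lines of the warning above: before running the pipeline, verify that every T1/T2/T3 session that has an annotation json also has a video at the expected path, and vice versa. The layout is assumed from the trees shown; the helper is illustrative.

```python
# Sketch: flag sessions where json/mp4 presence disagrees, so path
# mismatches are caught before video_to_imageframes.py runs.
from pathlib import Path

def mismatched_sessions(participant_dir):
    """Return session names (T1/T2/T3) whose json/mp4 presence disagrees."""
    bad = []
    for session in sorted(Path(participant_dir).glob("T*")):
        has_video = any(session.glob("*.mp4"))
        has_json = any(session.glob("*.json"))
        if has_video != has_json:
            bad.append(session.name)
    return bad
```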
mxochicale commented 2 years ago

p060 - T1-01clips; T2-00clips; T3-00clips

Function 'Video_to_ImageFrame' executed in 30.4030s
Function 'conver_pngframes_to_avi' executed in 2.1116s

p061 - T1-00clips; T2-00clips; T3-01clips

Function 'Video_to_ImageFrame' executed in 40.8828s
Function 'conver_pngframes_to_avi' executed in 2.5747s

p063 - T1-02clips; T2-02clips; T3-01clips

Function 'Video_to_ImageFrame' executed in 113.1682s
Function 'conver_pngframes_to_avi' executed in 13.6143s

p064 - T1-02clips; T2-03clips; T3-00clips

Function 'Video_to_ImageFrame' executed in 81.5419s
Function 'conver_pngframes_to_avi' executed in 10.6027s

p065 - T1-02clips; T2-04clips; T3-05clips

Function 'Video_to_ImageFrame' executed in 92.5387s
Function 'conver_pngframes_to_avi' executed in 25.7689s

p066 - T1-01clips; T2-02clips; T3-00clips

Function 'Video_to_ImageFrame' executed in 46.4046s
Function 'conver_pngframes_to_avi' executed in 6.7478s

p067 - T1-00clips; T2-00clips; T3-00clips

Function 'Video_to_ImageFrame' executed in - 
Function 'conver_pngframes_to_avi' executed in -

p068 - T1-02clips; T2-00clips; T3-02clips

Function 'Video_to_ImageFrame' executed in 64.1658s
Function 'conver_pngframes_to_avi' executed in 9.3333s

p069 - T1-02clips; T2-03clips; T3-02clips

Function 'Video_to_ImageFrame' executed in 115.2581s
Function 'conver_pngframes_to_avi' executed in 12.3159s
mxochicale commented 2 years ago

python video_to_imageframes.py --config ../config_files/config_v2i.yml

mx19@sie133-lap:~/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-060/T1/cropped_us_image/clip001$ tree -h
.
├── [466K]  nframe08032_of_17036_T4min28sec1msec.png
├── [480K]  nframe08033_of_17036_T4min28sec34msec.png
├── [470K]  nframe08034_of_17036_T4min28sec67msec.png
├── [465K]  nframe08035_of_17036_T4min28sec101msec.png
├── [466K]  nframe08036_of_17036_T4min28sec134msec.png
├── [461K]  nframe08037_of_17036_T4min28sec167msec.png
├── [440K]  nframe08038_of_17036_T4min28sec201msec.png
├── [440K]  nframe08039_of_17036_T4min28sec234msec.png
├── [404K]  nframe08040_of_17036_T4min28sec268msec.png
├── [439K]  nframe08041_of_17036_T4min28sec301msec.png
├── [444K]  nframe08042_of_17036_T4min28sec334msec.png
├── [435K]  nframe08043_of_17036_T4min28sec368msec.png
├── [453K]  nframe08044_of_17036_T4min28sec401msec.png
├── [434K]  nframe08045_of_17036_T4min28sec434msec.png
├── [469K]  nframe08046_of_17036_T4min28sec468msec.png
├── [471K]  nframe08047_of_17036_T4min28sec501msec.png
├── [461K]  nframe08048_of_17036_T4min28sec534msec.png
├── [474K]  nframe08049_of_17036_T4min28sec568msec.png
├── [458K]  nframe08050_of_17036_T4min28sec601msec.png
├── [476K]  nframe08051_of_17036_T4min28sec635msec.png
├── [476K]  nframe08052_of_17036_T4min28sec668msec.png
├── [466K]  nframe08053_of_17036_T4min28sec701msec.png
├── [466K]  nframe08054_of_17036_T4min28sec735msec.png
├── [452K]  nframe08055_of_17036_T4min28sec768msec.png
├── [453K]  nframe08056_of_17036_T4min28sec801msec.png
├── [439K]  nframe08057_of_17036_T4min28sec835msec.png
├── [458K]  nframe08058_of_17036_T4min28sec868msec.png
├── [452K]  nframe08059_of_17036_T4min28sec901msec.png
├── [446K]  nframe08060_of_17036_T4min28sec935msec.png
├── [432K]  nframe08061_of_17036_T4min28sec968msec.png
├── [450K]  nframe08062_of_17036_T4min29sec2msec.png
├── [455K]  nframe08063_of_17036_T4min29sec35msec.png
├── [461K]  nframe08064_of_17036_T4min29sec68msec.png
├── [462K]  nframe08065_of_17036_T4min29sec102msec.png
├── [454K]  nframe08066_of_17036_T4min29sec135msec.png
├── [472K]  nframe08067_of_17036_T4min29sec168msec.png
├── [461K]  nframe08068_of_17036_T4min29sec202msec.png
├── [472K]  nframe08069_of_17036_T4min29sec235msec.png
├── [473K]  nframe08070_of_17036_T4min29sec269msec.png
├── [456K]  nframe08071_of_17036_T4min29sec302msec.png
├── [477K]  nframe08072_of_17036_T4min29sec335msec.png
├── [474K]  nframe08073_of_17036_T4min29sec369msec.png
├── [468K]  nframe08074_of_17036_T4min29sec402msec.png
├── [463K]  nframe08075_of_17036_T4min29sec435msec.png
├── [457K]  nframe08076_of_17036_T4min29sec469msec.png
├── [453K]  nframe08077_of_17036_T4min29sec502msec.png
├── [457K]  nframe08078_of_17036_T4min29sec535msec.png
├── [456K]  nframe08079_of_17036_T4min29sec569msec.png
├── [433K]  nframe08080_of_17036_T4min29sec602msec.png
├── [446K]  nframe08081_of_17036_T4min29sec636msec.png
├── [431K]  nframe08082_of_17036_T4min29sec669msec.png
├── [448K]  nframe08083_of_17036_T4min29sec702msec.png
├── [454K]  nframe08084_of_17036_T4min29sec736msec.png
├── [458K]  nframe08085_of_17036_T4min29sec769msec.png
├── [462K]  nframe08086_of_17036_T4min29sec802msec.png
├── [467K]  nframe08087_of_17036_T4min29sec836msec.png
├── [457K]  nframe08088_of_17036_T4min29sec869msec.png
├── [478K]  nframe08089_of_17036_T4min29sec902msec.png
├── [477K]  nframe08090_of_17036_T4min29sec936msec.png
├── [467K]  nframe08091_of_17036_T4min29sec969msec.png
├── [479K]  nframe08092_of_17036_T4min30sec3msec.png
├── [480K]  nframe08093_of_17036_T4min30sec36msec.png
├── [479K]  nframe08094_of_17036_T4min30sec69msec.png
├── [471K]  nframe08095_of_17036_T4min30sec103msec.png
├── [464K]  nframe08096_of_17036_T4min30sec136msec.png
├── [465K]  nframe08097_of_17036_T4min30sec169msec.png
├── [463K]  nframe08098_of_17036_T4min30sec203msec.png
├── [449K]  nframe08099_of_17036_T4min30sec236msec.png
├── [422K]  nframe08100_of_17036_T4min30sec270msec.png
├── [426K]  nframe08101_of_17036_T4min30sec303msec.png
├── [440K]  nframe08102_of_17036_T4min30sec336msec.png
├── [442K]  nframe08103_of_17036_T4min30sec370msec.png
├── [449K]  nframe08104_of_17036_T4min30sec403msec.png
├── [432K]  nframe08105_of_17036_T4min30sec436msec.png
├── [456K]  nframe08106_of_17036_T4min30sec470msec.png
├── [463K]  nframe08107_of_17036_T4min30sec503msec.png
├── [468K]  nframe08108_of_17036_T4min30sec536msec.png
├── [473K]  nframe08109_of_17036_T4min30sec570msec.png
├── [466K]  nframe08110_of_17036_T4min30sec603msec.png
├── [478K]  nframe08111_of_17036_T4min30sec637msec.png
├── [459K]  nframe08112_of_17036_T4min30sec670msec.png
├── [483K]  nframe08113_of_17036_T4min30sec703msec.png
├── [463K]  nframe08114_of_17036_T4min30sec737msec.png
├── [478K]  nframe08115_of_17036_T4min30sec770msec.png
├── [472K]  nframe08116_of_17036_T4min30sec803msec.png
├── [464K]  nframe08117_of_17036_T4min30sec837msec.png
├── [461K]  nframe08118_of_17036_T4min30sec870msec.png
├── [466K]  nframe08119_of_17036_T4min30sec903msec.png
├── [460K]  nframe08120_of_17036_T4min30sec937msec.png
├── [460K]  nframe08121_of_17036_T4min30sec970msec.png
├── [452K]  nframe08122_of_17036_T4min31sec4msec.png
├── [447K]  nframe08123_of_17036_T4min31sec37msec.png
├── [454K]  nframe08124_of_17036_T4min31sec70msec.png
├── [459K]  nframe08125_of_17036_T4min31sec104msec.png
├── [444K]  nframe08126_of_17036_T4min31sec137msec.png
├── [465K]  nframe08127_of_17036_T4min31sec170msec.png
├── [467K]  nframe08128_of_17036_T4min31sec204msec.png
├── [476K]  nframe08129_of_17036_T4min31sec237msec.png
├── [480K]  nframe08130_of_17036_T4min31sec271msec.png
├── [481K]  nframe08131_of_17036_T4min31sec304msec.png
├── [481K]  nframe08132_of_17036_T4min31sec337msec.png
├── [478K]  nframe08133_of_17036_T4min31sec371msec.png
├── [466K]  nframe08134_of_17036_T4min31sec404msec.png
├── [485K]  nframe08135_of_17036_T4min31sec437msec.png
├── [456K]  nframe08136_of_17036_T4min31sec471msec.png
├── [476K]  nframe08137_of_17036_T4min31sec504msec.png
├── [470K]  nframe08138_of_17036_T4min31sec537msec.png
├── [469K]  nframe08139_of_17036_T4min31sec571msec.png
├── [467K]  nframe08140_of_17036_T4min31sec604msec.png
├── [469K]  nframe08141_of_17036_T4min31sec638msec.png
├── [458K]  nframe08142_of_17036_T4min31sec671msec.png
├── [454K]  nframe08143_of_17036_T4min31sec704msec.png
├── [447K]  nframe08144_of_17036_T4min31sec738msec.png
├── [451K]  nframe08145_of_17036_T4min31sec771msec.png
├── [437K]  nframe08146_of_17036_T4min31sec804msec.png
├── [457K]  nframe08147_of_17036_T4min31sec838msec.png
├── [460K]  nframe08148_of_17036_T4min31sec871msec.png
├── [451K]  nframe08149_of_17036_T4min31sec904msec.png
├── [473K]  nframe08150_of_17036_T4min31sec938msec.png
├── [477K]  nframe08151_of_17036_T4min31sec971msec.png
├── [481K]  nframe08152_of_17036_T4min32sec5msec.png
├── [478K]  nframe08153_of_17036_T4min32sec38msec.png
├── [478K]  nframe08154_of_17036_T4min32sec71msec.png
├── [480K]  nframe08155_of_17036_T4min32sec105msec.png
├── [478K]  nframe08156_of_17036_T4min32sec138msec.png
├── [461K]  nframe08157_of_17036_T4min32sec171msec.png
├── [472K]  nframe08158_of_17036_T4min32sec205msec.png
├── [452K]  nframe08159_of_17036_T4min32sec238msec.png
├── [428K]  nframe08160_of_17036_T4min32sec272msec.png
├── [462K]  nframe08161_of_17036_T4min32sec305msec.png
├── [447K]  nframe08162_of_17036_T4min32sec338msec.png
├── [459K]  nframe08163_of_17036_T4min32sec372msec.png
├── [451K]  nframe08164_of_17036_T4min32sec405msec.png
├── [451K]  nframe08165_of_17036_T4min32sec438msec.png
├── [446K]  nframe08166_of_17036_T4min32sec472msec.png
├── [455K]  nframe08167_of_17036_T4min32sec505msec.png
├── [440K]  nframe08168_of_17036_T4min32sec538msec.png
├── [463K]  nframe08169_of_17036_T4min32sec572msec.png
├── [466K]  nframe08170_of_17036_T4min32sec605msec.png
├── [473K]  nframe08171_of_17036_T4min32sec639msec.png
├── [459K]  nframe08172_of_17036_T4min32sec672msec.png
├── [473K]  nframe08173_of_17036_T4min32sec705msec.png
├── [475K]  nframe08174_of_17036_T4min32sec739msec.png
├── [460K]  nframe08175_of_17036_T4min32sec772msec.png
├── [475K]  nframe08176_of_17036_T4min32sec805msec.png
├── [476K]  nframe08177_of_17036_T4min32sec839msec.png
├── [462K]  nframe08178_of_17036_T4min32sec872msec.png
├── [471K]  nframe08179_of_17036_T4min32sec905msec.png
├── [466K]  nframe08180_of_17036_T4min32sec939msec.png
└── [467K]  nframe08181_of_17036_T4min32sec972msec.png

0 directories, 150 files

python png_to_avi.py --config ../config_files/config_i2v.yml

https://user-images.githubusercontent.com/11370681/141446629-18c27f44-4b2f-4ca9-a35b-37db943bba1f.mp4

mxochicale commented 2 years ago

Hi @huynhatd13

Thanks

nhatpth commented 2 years ago

Hi @mxochicale

mxochicale commented 2 years ago

Hi @huynhatd13

Can you confirm the following status of the echo datasets?

mxochicale commented 2 years ago

Heart rate beat in 4CV from videos and annotations of participants 70 to 80

mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo-annotated$ tree
.
├── 01NVb-003-070
│   ├── T1
│   │   ├── 01NVb-003-070-1-4CV.json
│   │   └── 01NVb-003-070-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-070-2-4CV.json
│   │   └── 01NVb-003-070-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-070-3-4CV.json
│       └── 01NVb-003-070-3 echo.mp4
├── 01NVb-003-071
│   ├── T1
│   │   ├── 01NVb-003-071-1-4CV.json
│   │   └── 01NVb-003-071-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-071-2-4CV.json
│   │   └── 01NVb-003-071-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-071-3-4CV.json
│       └── 01NVb-003-071-3 echo.mp4
├── 01NVb-003-072
│   ├── T1
│   │   ├── 01NVb-003-072-1-echo.mp4
│   │   └── 01NVb_003_072_T1_4CV.json
│   ├── T2
│   │   ├── 01NVb-003-072-2-echo-cont.mp4
│   │   ├── 01NVb_003_072_T2_4CV.json
│   │   └── extras
│   │       └── 01NVb-003-072-2-echo_mp4_
│   └── T3
│       ├── 01NVb-003-072-3-echo.mp4
│       └── 01NVb_003_072_T3_4CV.json
├── 01NVb-003-073
│   ├── T1
│   │   ├── 01NVb-003-073-1-4CV.json
│   │   └── 01NVb-003-073-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-073-2-4CV.json
│   │   └── 01NVb-003-073-2 echo.mp4
│   └── T3
├── 01NVb-003-074
│   ├── T1
│   │   ├── 01NVb-003-074-1-4CV.json
│   │   └── 01NVb-003-074-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-074-2-4CV.json
│   │   └── 01NVb-003-074-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-074-3-4CV.json
│       └── 01NVb-003-074-3 echo.mp4
├── 01NVb-003-075
│   ├── T1
│   │   ├── 01NVb-003-075-1-4CV.json
│   │   └── 01NVb-003-075-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-075-2-4CV.json
│   │   └── 01NVb-003-075-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-075-3-4CV.json
│       └── 01NVb-003-075-3 echo.mp4
├── 01NVb-003-076
│   ├── T1
│   │   ├── 01NVb-003-076-1-4CV.json
│   │   └── 01NVb-003-076-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-076-2-4CV.json
│   │   └── 01NVb-003-076-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-076-3-4CV.json
│       └── 01NVb-003-076-3 echo.mp4
├── 01NVb-003-077
│   ├── T1
│   │   ├── 01NVb-003-077-1-4CV.json
│   │   ├── 01NVb-003-077-1 echo.mp4
│   │   └── extras
│   │       └── 01NVb-003-077-1_cont_mp4_
│   ├── T2
│   │   ├── 01NVb-003-077-2-4CV.json
│   │   └── 01NVb-003-077-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-077-3-4CV.json
│       └── 01NVb-003-077-3 echo.mp4
├── 01NVb-003-078
│   ├── T1
│   │   ├── 01NVb-003-078-1-4CV.json
│   │   └── 01NVb-003-078-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-078-2-4CV.json
│   │   └── 01NVb-003-078-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-078-3-4CV.json
│       └── 01NVb-003-078-3 echo.mp4
├── 01NVb-003-079
│   ├── T1
│   │   ├── 01NVb-003-079-1-4CV.json
│   │   └── 01NVb-003-079-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-079-2-4CV.json
│   │   └── 01NVb-003-079-2 echo.mp4
│   └── T3
│       ├── 01NVb-003-079-3-4CV.json
│       └── 01NVb-003-079-3 echo.mp4
└── 01NVb-003-080
    ├── T1
    │   ├── 01NVb-003-080-1-4CV.json
    │   └── 01NVb-003-080-1-echo.mp4
    ├── T2
    │   └── 01NVb-003-080-2 echo_mp4_
    └── T3
        └── 01NVb-003-080-3 echo_mp4_

46 directories, 64 files

Clips and empty json files

(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/scripts/learning-pipeline$ python learning_pipeline.py --config ../config_files/learning_pipeline/config_learning_pipeline.yml
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T3/01NVb-003-077-3-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T3/01NVb-003-074-3-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T1/01NVb-003-077-1-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T2/01NVb-003-077-2-4CV.json (empty). Removing from list
Number of clips: 88 
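The EchoClassesDataset errors above are triggered by zero-byte json placeholders. A sketch of that filtering step (the function name and return shape are illustrative, not the dataset class's real API): drop any annotation file that is empty or fails to parse before building the clip list.

```python
# Sketch: keep only annotation files that parse as non-empty JSON,
# mirroring the "(empty). Removing from list" behaviour in the log.
import json
from pathlib import Path

def usable_annotations(json_paths):
    """Filter out annotation files that are empty or unparseable."""
    kept = []
    for p in json_paths:
        try:
            data = json.loads(Path(p).read_text())
        except json.JSONDecodeError:
            print(f"[ERROR] Error reading {p} (empty). Removing from list")
            continue
        if data:
            kept.append(p)
    return kept
```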
mxochicale commented 2 years ago

Thanks @huynhatd13 for uploading the files. I can confirm that the files are there from participant 87 to 106.

In addition, I just uploaded the remaining patient data (090-106).

It would be great if you could help with:

mxochicale commented 2 years ago

Hi @huynhatd13

Just realised that the annotations with (1) in their names do not make much sense. For instance, 01NVb-003-052-2-4CV.json is using "fname":"01NVb-003-052-2 echo.mp4", and 01NVb-003-052-2-4CV(1).json is using the same "fname":"01NVb-003-052-2 echo.mp4". Would you please help to comment on, rename or delete these files on the FileZilla server?

mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo-annotated$ tree
.
├── 01NVb-003-052
│   ├── T1
│   │   ├── 01NVb-003-052-1-4CV.json
│   │   └── 01NVb-003-052-1 echo.mp4
│   ├── T2
│   │   ├── 01NVb-003-052-2-4CV(1).json
│   │   ├── 01NVb-003-052-2-4CV.json
│   │   ├── 01NVb-003-052-2 echo (2).mp4
│   │   └── 01NVb-003-052-2 echo.mp4
│   └── T3
└── 01NVb-003-053
    ├── T1
    │   ├── 01NVb-003-053-1-4CV.json
    │   └── 01NVb-003-053-1 echo.mp4
    ├── T2
    │   ├── 01NVb-003-053-2-4CV.json
    │   └── 01NVb-003-053-2 echo.mp4
    └── T3
        ├── 01NVb-003-053-3-4CV (1).json
        ├── 01NVb-003-053-3-4CV.json
        ├── 01NVb-003-053-3 echo cont.mp4
        └── 01NVb-003-053-3 echo.mp4

8 directories, 14 files
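A quick way to audit duplicates such as `01NVb-003-052-2-4CV(1).json` is to map every json under the annotated-videos tree to the video it claims to annotate. This is only a sketch, assuming the VIA 3 layout that appears later in this thread (`data["file"]["1"]["fname"]`); the helper name is hypothetical:

```python
import json
from pathlib import Path
from collections import defaultdict

def fnames_by_annotation(root):
    """Map each annotated video fname to the VIA json files that reference it.

    Assumes every json under root follows the VIA 3 structure seen in this
    thread: data["file"]["1"]["fname"]. Entries with more than one json are
    candidate duplicates like '...-4CV(1).json'.
    """
    video_to_jsons = defaultdict(list)
    for json_path in Path(root).rglob('*.json'):
        data = json.loads(json_path.read_text())
        fname = data["file"]["1"]["fname"]
        video_to_jsons[fname].append(json_path.name)
    return video_to_jsons

# Example usage (path is hypothetical):
# for fname, jsons in fnames_by_annotation('videos-echo-annotated').items():
#     if len(jsons) > 1:
#         print(fname, '->', jsons)
```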
mxochicale commented 2 years ago
import matplotlib.pyplot as plt  # needed for plt.axis() and plt.show()

print('=================== LABELLED =======================')
basic_demographics['LABELLED'].value_counts().plot.pie(autopct='%.1f %%', ylabel='TOTAL', legend=True)
plt.axis('equal')
plt.show()
nhatpth commented 2 years ago


I have modified the files and uploaded them.

mxochicale commented 2 years ago

Hi @huynhatd13

datasets

Thanks for the annotation of another 10 subjects. Would you mind checking what is happening with the following datasets:

validation

Not sure if you have run the notebook, but it would be nice if you could provide input on how to improve the validation pipeline https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb. I would suggest we go through this list of files to be verified together.

nhatpth commented 2 years ago

Hi @huynhatd13

datasets

Thanks for the annotation of another 10 subjects. Would you mind checking what is happening with the following datasets:

  • [x] 40-T3: not annotated? - I cannot read the video
  • [x] 41-T3: not sure which is the annotated video, echo or echo cont? - It is echo.mp4; you can check the "file/fname" field in the json file
  • [x] 44-T1: no echo data? - No echo data
  • [x] 49-T1: not annotated? - No 4CV view
  • [x] 49-T2: not annotated? - No 4CV view
  • [x] 49-T3: no echo data? - No echo data as the patient was transferred home

validation

Not sure if you have run the notebook, but it would be nice if you could provide input on how to improve the validation pipeline https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb. I would suggest we go through this list of files to be verified together.

nhatpth commented 2 years ago

I have run the validation-of-4cv.ipynb notebook on my laptop. I will add comments on the code.

mxochicale commented 2 years ago
12594392 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_00_041-1_label_00.pth
12594408 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_00_041-3_label_00.pth
12594409 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_01_041-1_label_00.pth
12594393 -rw-rw-r-- 1 mx19 mx19 1.6M Apr 21 16:42 videoID_01_041-2_label_00.pth
12594394 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_02_041-2_label_00.pth
12594414 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:34 videoID_02_041-3_label_00.pth
12594410 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_03_041-2_label_00.pth
12594395 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_03_041-3_label_00.pth
12594396 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_04_041-2_label_00.pth
12594411 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_04_041-3_label_00.pth
12594397 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_05_041-1_label_00.pth
12594412 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_05_041-3_label_00.pth
12594413 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_06_041-2_label_00.pth
12594398 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_06_041-3_label_00.pth
12594399 -rw-rw-r-- 1 mx19 mx19 3.6M Apr 21 16:42 videoID_07_041-3_label_00.pth
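Before training, it is cheap to sanity-check the saved clips listed above. The sketch below assumes each `.pth` file stores a single tensor shaped `[1, frames, H, W]`; both the helper and the expected shape are assumptions, not the pipeline's actual code:

```python
import torch

def check_clip_file(pth_path, expected_hw=(100, 100)):
    """Load one saved clip tensor and verify its rank and spatial size.

    Assumes the .pth holds a single tensor of shape [1, frames, H, W];
    adjust the loading if the files store a dict instead.
    """
    clip = torch.load(pth_path)
    assert clip.dim() == 4, f"unexpected rank {clip.dim()} in {pth_path}"
    assert tuple(clip.shape[-2:]) == expected_hw, f"unexpected size in {pth_path}"
    return tuple(clip.shape)
```

Running this over the `videoID_*.pth` files would, for example, quickly explain why two of the files above are 1.6M and 3.6M rather than the usual 5.8M (fewer frames).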
mxochicale commented 2 years ago

The notebook can now be used to select the participant and the pixel size of the image: https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb. With it, the following animations have been created for participant 41.

It seems that the background clips are picking up a few frames from the 4CV; I am not sure what is wrong here with the background frames for clip06 from participant 41: animation_s41_clips06-14

For the other clips, the background seems to be okay: animation_s41_clips04-12

Run https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb to make further comments

mxochicale commented 2 years ago

From the meeting with @huynhatd13 on Tue 26 Apr 08:45:12 BST 2022, where I explained the notebook, we think that the following points will be beneficial for the verification of 4CV:

In the weekly meeting of 26-Apr-2022, Alberto suggested leaving quality assessment for later. Andy suggested paying attention to the false negatives/positives of clips, as those frames will impact the extraction of cardiac output.

mxochicale commented 2 years ago
 CLIP:00 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 52
 CLIP:01 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 152 from clip_frame_clip_idx 64
 CLIP:02 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 79
 CLIP:03 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 74
 CLIP:04 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 51
 CLIP:05 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 55
 CLIP:06 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 93
 CLIP:07 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 82
 CLIP:08 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 106 from clip_frame_clip_idx 54
 CLIP:09 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 108 from clip_frame_clip_idx 50
 CLIP:10 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 79 from clip_frame_clip_idx 40
 CLIP:11 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 44
 CLIP:12 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 102 from clip_frame_clip_idx 50
 CLIP:13 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 122 from clip_frame_clip_idx 53
 CLIP:14 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 135 from clip_frame_clip_idx 52
 CLIP:15 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 119 from clip_frame_clip_idx 58
0 8
1 9
2 10
3 11
4 12
5 13
6 14
7 15
 CLIP:00 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
 CLIP:01 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
 CLIP:02 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
 CLIP:03 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
 CLIP:04 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
 CLIP:05 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
 CLIP:06 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 40 from clip_frame_clip_idx 0
 CLIP:07 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
 CLIP:08 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 106 from clip_frame_clip_idx 0
 CLIP:09 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 108 from clip_frame_clip_idx 0
 CLIP:10 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 79 from clip_frame_clip_idx 0
 CLIP:11 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 0
 CLIP:12 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 102 from clip_frame_clip_idx 0
 CLIP:13 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 122 from clip_frame_clip_idx 0
 CLIP:14 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 135 from clip_frame_clip_idx 0
 CLIP:15 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 119 from clip_frame_clip_idx 0
0 8
1 9
2 10
3 11
4 12
5 13
6 14
7 15
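The `clip_frame_clip_idx` values above are consistent with picking a random start index so that a fixed-length window fits inside each clip, falling back to index 0 when the clip is shorter than the window (as in the 200-frame run, where every index is 0). A hypothetical sketch of that selection logic, not the notebook's own code:

```python
import random

def pick_clip_window(total_frames, window, seed=None):
    """Pick a start index so that a window of `window` frames fits in the clip.

    For TOTAL_FRAMES=151 and window=50 this yields a start in [0, 101];
    when the clip is shorter than the window, the start falls back to 0.
    """
    if total_frames <= window:
        return 0
    rng = random.Random(seed)
    return rng.randint(0, total_frames - window)
```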
mxochicale commented 2 years ago

In our weekly meeting on Tue 3 May 2022, Alberto suggested generating animations for all videos as a form of quality control. The videos will be shared on the filezilla server.

mxochicale commented 2 years ago

The above commit contains:

mxochicale commented 2 years ago

Hi All

As discussed in our last meeting, I have just created animated echo clips with the aim of verifying whether the 4CV and background clips look as clinically expected. If you open [1], using your KCL credentials or logging in as a guest of the SharePoint, you will find 51 GIF animation files for participants 40 to 48, covering background and 4CV clips at an image resolution of 250Wx250H*.

It would be great if @huynhatd13, @gomezalberto and perhaps Luigi or others would like to have a look and share comments/feedback for quality control of the clips. I will do my best to create animations for all available labelled videos, put them on filezilla and provide further instructions for quality control before the end of the week. If interested, the implementation for the animation and other code bits is available in this self-explanatory and documented notebook [2].
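For reference, the GIF assembly can be done with Pillow alone. This is only a sketch under assumed paths and frame naming, not the notebook's actual implementation:

```python
from pathlib import Path
from PIL import Image

def frames_to_gif(frame_dir, out_gif, size=(250, 250), ms_per_frame=40):
    """Assemble sorted PNG frames from frame_dir into a single GIF.

    Hypothetical helper: assumes the extracted frames are PNGs whose sorted
    filenames give the correct temporal order (e.g. nframes10602.png, ...).
    """
    frames = [Image.open(p).convert('P').resize(size)
              for p in sorted(Path(frame_dir).glob('*.png'))]
    if not frames:
        raise ValueError(f"no PNG frames found in {frame_dir}")
    frames[0].save(out_gif, save_all=True, append_images=frames[1:],
                   duration=ms_per_frame, loop=0)
```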

Notes:

Ps. On Friday, I will be sharing updates on the learning pipeline in the technical group chat.

Thanks Miguel

1: https://emckclac.sharepoint.com/sites/MT-BMEIS-VITAL-US/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2FMT%2DBMEIS%2DVITAL%2DUS%2FShared%20Documents%2FGeneral%2F06%20Clinical%2FEvaluation%2Fechoes%5Fvalidation%5FMay2022%5FTEMPORAL&viewid=1158d72f%2D3e4f%2D43ff%2Daac1%2D2f995401359e
2: https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb

mxochicale commented 2 years ago

Hi All

Just completed 169 GIF animations for subjects 40 to 79. See a few key points and notes below.

KEY POINTS

NOTES

EXTRAS

Ps. Before the end of the week, I will be sharing a few updates regarding the learning workflow.

Thanks Miguel

mxochicale commented 2 years ago

TODO

mxochicale commented 2 years ago

Hi @nhatpth

would you please help to annotate 052 and 070, and upload the json files to filezilla?

Then this issue will be ready to be closed.

mxochicale commented 2 years ago

Thanks @nhatpth for annotating subjects 052 and 070.

Something is wrong with the annotations for 01NVb-003-052-3-4CV.json, even after changing 01NVb-003-052-2 echo.mp4 to 01NVb-003-052-3 echo.mp4:

{
  "project": {
    "pid": "VIA_PROJECT_ID",
    "rev": "VIA_PROJECT_REV_ID__",
    "rev_timestamp": "VIA_PROJECT_REV_TIMESTAMP__",
    "pname": "echo_4CV_template_VIA Project",
    "creator": "VGG Image Annotator (http://www.robots.ox.ac.uk/~vgg/software/via)",
    "created": 1634093189898,
    "vid_list": ["1"]
  },
  "config": {
    "file": {"loc_prefix": {"1": "", "2": "", "3": "", "4": ""}},
    "ui": {
      "file_content_align": "center",
      "file_metadata_editor_visible": true,
      "spatial_metadata_editor_visible": true,
      "temporal_segment_metadata_editor_visible": true,
      "spatial_region_label_attribute_id": "",
      "gtimeline_visible_row_count": "4"
    }
  },
  "attribute": {
    "1": {"aname": "4CV", "anchor_id": "FILE1_Z2_XY0", "type": 2, "desc": "", "options": {}, "default_option_id": ""}
  },
  "file": {
    "1": {"fid": "1", "fname": "01NVb-003-052-2 echo.mp4", "type": 4, "loc": 1, "src": ""}
  },
  "metadata": {
    "1_Byx0j4N6": {"vid": "1", "flg": 0, "z": [677.13275, 679.59108], "xy": [], "av": {"1": "4CV"}},
    "1_t9KVo6UE": {"vid": "1", "flg": 0, "z": [673.237, 675.63275], "xy": [], "av": {"1": "4CV"}}
  },
  "view": {"1": {"fid_list": ["1"]}}
}
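Rather than editing the json by hand, the stale "fname" entries (the blob above still carries "fname":"01NVb-003-052-2 echo.mp4") can be rewritten programmatically. A sketch assuming the VIA 3 structure shown ("file" -> id -> "fname"); the helper name is hypothetical:

```python
import json
from pathlib import Path

def fix_via_fname(json_path, correct_fname):
    """Rewrite every 'fname' entry of a VIA annotation file in place.

    Renaming the mp4 on disk is not enough: the json's own "file" section
    must also point at the new name for the annotation to load correctly.
    """
    path = Path(json_path)
    data = json.loads(path.read_text())
    for file_entry in data.get("file", {}).values():
        file_entry["fname"] = correct_fname
    path.write_text(json.dumps(data))
    return data
```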

mxochicale commented 2 years ago

Hi @nhatpth, any updates on the above annotation 01NVb-003-052-3-4CV.json?

mxochicale commented 2 years ago

Last check from https://github.com/vital-ultrasound/echocardiography/blob/main/data/labelling/json_files/4CV/README.md to

Screenshot from 2022-08-18 17-30-09

mxochicale commented 2 years ago

I am closing this one but perhaps if there is time, please double check:

Something is wrong with the annotations for 01NVb-003-052-3-4CV.json, even after changing 01NVb-003-052-2 echo.mp4 to 01NVb-003-052-3 echo.mp4:

{
  "project": {
    "pid": "VIA_PROJECT_ID",
    "rev": "VIA_PROJECT_REV_ID__",
    "rev_timestamp": "VIA_PROJECT_REV_TIMESTAMP__",
    "pname": "echo_4CV_template_VIA Project",
    "creator": "VGG Image Annotator (http://www.robots.ox.ac.uk/~vgg/software/via)",
    "created": 1634093189898,
    "vid_list": ["1"]
  },
  "config": {
    "file": {"loc_prefix": {"1": "", "2": "", "3": "", "4": ""}},
    "ui": {
      "file_content_align": "center",
      "file_metadata_editor_visible": true,
      "spatial_metadata_editor_visible": true,
      "temporal_segment_metadata_editor_visible": true,
      "spatial_region_label_attribute_id": "",
      "gtimeline_visible_row_count": "4"
    }
  },
  "attribute": {
    "1": {"aname": "4CV", "anchor_id": "FILE1_Z2_XY0", "type": 2, "desc": "", "options": {}, "default_option_id": ""}
  },
  "file": {
    "1": {"fid": "1", "fname": "01NVb-003-052-2 echo.mp4", "type": 4, "loc": 1, "src": ""}
  },
  "metadata": {
    "1_Byx0j4N6": {"vid": "1", "flg": 0, "z": [677.13275, 679.59108], "xy": [], "av": {"1": "4CV"}},
    "1_t9KVo6UE": {"vid": "1", "flg": 0, "z": [673.237, 675.63275], "xy": [], "av": {"1": "4CV"}}
  },
  "view": {"1": {"fid_list": ["1"]}}
}