Closed: mxochicale closed this issue 2 years ago
01NVb_003_072/01NVb_003_072_T1_4CV.json
0 376.279
1 381.54942
Terminal logs for a few frames
Frame_index/number_of_frames=10602/23285, frame_timestamp=05:53:753.400
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10602.png
Function 'maks_for_captured_us_image' executed in 0.0018s
Frame_index/number_of_frames=10603/23285, frame_timestamp=05:53:786.767
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10603.png
Function 'maks_for_captured_us_image' executed in 0.0018s
10602/23285
10603/23285
Terminal logs for a few frames
Frame_index/number_of_frames=10790/23285, frame_timestamp=06:00:26.333
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10790.png
Function 'maks_for_captured_us_image' executed in 0.0018s
Frame_index/number_of_frames=10791/23285, frame_timestamp=06:00:59.700
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10791.png
Function 'maks_for_captured_us_image' executed in 0.0018s
Frame_index/number_of_frames=10792/23285, frame_timestamp=06:00:93.067
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10792.png
Function 'maks_for_captured_us_image' executed in 0.0018s
Frame_index/number_of_frames=10793/23285, frame_timestamp=06:00:126.433
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10793.png
Function 'maks_for_captured_us_image' executed in 0.0019s
Frame_index/number_of_frames=10794/23285, frame_timestamp=06:00:159.800
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10794.png
Function 'maks_for_captured_us_image' executed in 0.0019s
Frame_index/number_of_frames=10795/23285, frame_timestamp=06:00:193.167
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/imageframes/nframes10795.png
Function 'maks_for_captured_us_image' executed in 0.0018s
Frame_index/number_of_frames=10796/23285, frame_timestamp=06:00:226.533
10790/23285
10795/23285
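The `frame_timestamp` values in these logs look like MM:SS:msec. A minimal sketch of how such a string could be produced from a frame index (a hypothetical helper, not the repository's code; the logged values suggest the effective frame rate is slightly below the nominal 30 fps):

```python
def frame_timestamp(frame_index: int, fps: float = 30.0) -> str:
    """Format a frame's elapsed time as MM:SS:msec, matching the log style."""
    total_ms = frame_index / fps * 1000.0
    minutes, rem_ms = divmod(total_ms, 60_000)
    seconds, msec = divmod(rem_ms, 1000)
    return f"{int(minutes):02d}:{int(seconds):02d}:{msec:.3f}"

print(frame_timestamp(10790))  # frame 10790 at an assumed 30 fps
```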
Plots for average absolute difference; NNZ (number of non-zero pixels) in the subtraction; average difference over the NNZ
...
al-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10734.png
Function 'maks_for_captured_us_image' executed in 0.0020s
Frame_index/number_of_frames=10735/20129, current_frame_timestamp=05:58:191.167
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10735.png
Function 'maks_for_captured_us_image' executed in 0.0019s
Frame_index/number_of_frames=10736/20129, current_frame_timestamp=05:58:224.533
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/masked4CV/nframes10736.png
Function 'maks_for_captured_us_image' executed in 0.0019s
...
Frame_index/number_of_frames=20128/20129, current_frame_timestamp=00:00:0.000
Frame_index/number_of_frames=20129/20129, current_frame_timestamp=00:00:0.000
Function 'Video_to_ImageFrame' executed in 62.1589s
![nframes10617](https://user-images.githubusercontent.com/11370681/138823468-caacfc66-ef04-4760-9317-8f2696883553.png)
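The plotted metrics could be computed per frame pair roughly like this (a sketch with numpy; the function name and the exact definitions are assumptions, not the repository's code):

```python
import numpy as np

def frame_diff_metrics(frame_a: np.ndarray, frame_b: np.ndarray):
    """Difference metrics between two grayscale frames of equal shape."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    avg_abs_diff = diff.mean()               # average absolute difference
    nnz = np.count_nonzero(diff)             # non-zero pixels in the subtraction
    avg_over_nnz = diff.sum() / nnz if nnz else 0.0  # average over the NNZ only
    return avg_abs_diff, nnz, avg_over_nnz

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 10
print(frame_diff_metrics(a, b))  # (0.625, 1, 10.0)
```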
* ~/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV
Frame_index/number_of_frames=14131/20129, current_frame_timestamp=07:51:504.367
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14131.png
Function 'cropped_image_frame' executed in 0.0000s
Frame_index/number_of_frames=14132/20129, current_frame_timestamp=07:51:537.733
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14132.png
Function 'cropped_image_frame' executed in 0.0000s
Frame_index/number_of_frames=14133/20129, current_frame_timestamp=07:51:571.100
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped4CV/nframes14133.png
Function 'cropped_image_frame' executed in 0.0000s
...
Frame_index/number_of_frames=20128/20129, current_frame_timestamp=00:00:0.000
Frame_index/number_of_frames=20129/20129, current_frame_timestamp=00:00:0.000
Function 'Video_to_ImageFrame' executed in 37.2505s
![nframes10636](https://user-images.githubusercontent.com/11370681/138823428-ced75029-03db-4651-af60-fb2bfa3f3229.png)
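The "Function 'X' executed in Ns" lines, together with the `wrap_func` frame that appears in the tracebacks later, suggest a timing decorator. A minimal sketch of how such a decorator could look (assumed, not copied from the repository):

```python
import time
from functools import wraps

def timer(func):
    """Print how long the wrapped function took, in the log's style."""
    @wraps(func)
    def wrap_func(*args, **kwargs):
        t0 = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - t0
        print(f"Function {func.__name__!r} executed in {elapsed:.4f}s")
        return result
    return wrap_func

@timer
def Video_to_ImageFrame():
    return "done"

Video_to_ImageFrame()  # prints a timing line like the ones above
```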
python png_to_avi.py --config ../config_files/config_i2v.yml
on Wed 3 Nov 08:59:25 GMT 2021/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T1/cropped_us_image/animations/clip001.avi
100%|████████████████████████████████████████████████████████████████████████████████| 180/180 [00:02<00:00, 67.43it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip001.avi
100%|████████████████████████████████████████████████████████████████████████████████| 180/180 [00:02<00:00, 70.21it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip002.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 89/89 [00:01<00:00, 69.08it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T2/cropped_us_image/animations/clip003.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 90/90 [00:01<00:00, 68.90it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T3/cropped_us_image/animations/clip001.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 60/60 [00:00<00:00, 74.16it/s]
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-072/T3/cropped_us_image/animations/clip002.avi
100%|██████████████████████████████████████████████████████████████████████████████████| 90/90 [00:01<00:00, 72.09it/s]
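Before `png_to_avi.py` can write each clip, the frame PNGs have to be gathered in order; since the names are zero-padded, a plain lexical sort is enough. A small stdlib sketch of that step (the helper name is an assumption):

```python
import tempfile
from pathlib import Path

def collect_clip_frames(clip_dir: Path) -> list[Path]:
    """Return a clip's PNG frames in frame order; zero-padded names sort lexically."""
    return sorted(clip_dir.glob("*.png"))

# demo with a throwaway clip directory
with tempfile.TemporaryDirectory() as d:
    clip = Path(d) / "clip001"
    clip.mkdir()
    for i in (2, 1, 10):
        (clip / f"nframes{i:05d}.png").touch()
    names = [p.name for p in collect_clip_frames(clip)]
    print(names)  # ['nframes00001.png', 'nframes00002.png', 'nframes00010.png']
```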
The code has been simplified in the above commit to use a few lines in the config file:
## 01NVb-003-072
participant_directory: '/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-072'
preprocessed_datasets_path: '/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets'
video_output_pathname: 'cropped_us_image'
participant_path_json_file: '/home/mx19/datasets/vital-us/echocardiography/json/01NVb-003-072'
bounds:
  start_x: 480
  start_y: 120
  width: 1130
  height: 810
to create
/01NVb-003-072$ tree -d
.
├── T1
│   └── cropped_us_image
│       └── clip001
├── T2
│   └── cropped_us_image
│       ├── clip001
│       ├── clip002
│       └── clip003
└── T3
    └── cropped_us_image
        ├── clip001
        └── clip002
12 directories
which executes in around:
Function 'Video_to_ImageFrame' executed in 120.6810s
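The `bounds` block above could be applied per frame with plain numpy slicing; a minimal sketch (the function name is an assumption, and this stands in for whatever `cropped_image_frame` actually does):

```python
import numpy as np

bounds = {"start_x": 480, "start_y": 120, "width": 1130, "height": 810}

def crop_frame(frame: np.ndarray, b: dict) -> np.ndarray:
    """Crop the ultrasound region out of a full frame using the config bounds."""
    y0, x0 = b["start_y"], b["start_x"]
    return frame[y0 : y0 + b["height"], x0 : x0 + b["width"]]

# a 1920x1080 frame, as reported in the logs
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(crop_frame(frame, bounds).shape)  # (810, 1130, 3)
```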
python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_070/01NVb-003-070-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-070/T1/cropped_us_image
Frame_height=1080, frame_width=1920 fps=30 nframes=28473
...
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-070/T1/cropped_us_image/clip004/nframe09770_of_28472.png
[h264 @ 0x55e959b67340] Invalid NAL unit size (0 > 2168).
[h264 @ 0x55e959b67340] Error splitting the input into NAL units.
[h264 @ 0x55e959b4a980] cbp too large (58) at 2 41
[h264 @ 0x55e959b4a980] error while decoding MB 2 41
[h264 @ 0x55e959ba06c0] No start code is found.
[h264 @ 0x55e959ba06c0] Error splitting the input into NAL units.
[h264 @ 0x55e959bbd080] No start code is found.
[h264 @ 0x55e959bbd080] Error splitting the input into NAL units.
Invalid UE golomb code
[h264 @ 0x55e959b83d00] cbp too large (3199971767) at 73 52
[h264 @ 0x55e959b83d00] error while decoding MB 73 52
[h264 @ 0x55e959bd9a40] No start code is found.
[h264 @ 0x55e959bd9a40] Error splitting the input into NAL units.
[h264 @ 0x55e959bf6400] No start code is found.
[h264 @ 0x55e959bf6400] Error splitting the input into NAL units.
...
>Function 'Video_to_ImageFrame' executed in 120.1562s
python png_to_avi.py --config ../config_files/config_i2v.yml
> Function 'conver_pngframes_to_avi' executed in 12.9088s
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-071/T1/01NVb-003-071-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_071/01NVb-003-071-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-071/T1/cropped_us_image
Frame_height=1080, frame_width=1920 fps=30 nframes=21968
[h264 @ 0x55d25ece6800] Invalid NAL unit size (0 > 5335).
[h264 @ 0x55d25ece6800] Error splitting the input into NAL units.
[h264 @ 0x55d25ef54a80] out of range intra chroma pred mode
[h264 @ 0x55d25ef54a80] error while decoding MB 98 39
[h264 @ 0x55d25ecec780] Invalid NAL unit size (-578446953 > 16443).
[h264 @ 0x55d25ecec780] Error splitting the input into NAL units.
[h264 @ 0x55d25eda1500] Invalid NAL unit size (-2095475258 > 11039).
[h264 @ 0x55d25eda1500] Error splitting the input into NAL units.
[h264 @ 0x55d25edfd540] Invalid NAL unit size (0 > 5105).
[h264 @ 0x55d25edfd540] Error splitting the input into NAL units.
[h264 @ 0x55d25ee19f40] Invalid NAL unit size (13302527 > 3857).
[h264 @ 0x55d25ee19f40] Error splitting the input into NAL units.
[h264 @ 0x55d25ee36900] Invalid NAL unit size (46513 > 12955).
[h264 @ 0x55d25ee36900] Error splitting the input into NAL units.
Function 'Video_to_ImageFrame' executed in 71.6225s
python png_to_avi.py --config ../config_files/config_i2v.yml
Function 'conver_pngframes_to_avi' executed in 9.5130s
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 109.7876s
python png_to_avi.py --config ../config_files/config_i2v.yml
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 72.1309s
python png_to_avi.py --config ../config_files/config_i2v.yml
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 98.3033s
python png_to_avi.py --config ../config_files/config_i2v.yml
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 85.0689s
python png_to_avi.py --config ../config_files/config_i2v.yml
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 122.1669s
python png_to_avi.py --config ../config_files/config_i2v.yml
(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-077/T1/01NVb-003-077-1 cont.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_077/01NVb-003-077-1-4CV_Na.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-077/T1/cropped_us_image
Frame_height=1080, frame_width=1920 fps=30 nframes=9032
Traceback (most recent call last):
  File "video_to_imageframes.py", line 357, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 22, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 295, in Video_to_ImageFrame
    length_of_timestamp_vector = len(start_label)
UnboundLocalError: local variable 'start_label' referenced before assignment
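The UnboundLocalError suggests `start_label` is only assigned inside a conditional (e.g. when the JSON actually contains the expected label) and is then read unconditionally. A sketch of the failure mode with a guard that fails loudly instead (all names here are assumptions, not the repository's code):

```python
def timestamp_vector_length(json_data: dict) -> int:
    """Return len(start_label) only when matching labels were actually found."""
    start_label = None
    for item in json_data.get("annotations", []):
        if item.get("label") == "4CV":
            start_label = item.get("timestamps", [])
    if start_label is None:
        # nothing matched: raise a clear error instead of UnboundLocalError
        raise ValueError("no 4CV annotations found in JSON file")
    return len(start_label)

data = {"annotations": [{"label": "4CV", "timestamps": [376.279, 381.54942]}]}
print(timestamp_vector_length(data))  # 2
```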
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 82.2932s
python png_to_avi.py --config ../config_files/config_i2v.yml
> Function 'conver_pngframes_to_avi' executed in 15.4657s
python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 82.9324s
python png_to_avi.py --config ../config_files/config_i2v.yml
(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-080/T1/01NVb-003-080-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_080/01NVb-003-080-1-4CV.json
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-080/T1/cropped_us_image
Frame_height=1080, frame_width=1920 fps=30 nframes=14161
[h264 @ 0x55af9e3c74c0] P sub_mb_type 14 out of range at 48 27
[h264 @ 0x55af9e3c74c0] error while decoding MB 48 27
[h264 @ 0x55af9e3e3e80] Invalid NAL unit size (0 > 2197).
[h264 @ 0x55af9e3e3e80] Error splitting the input into NAL units.
[h264 @ 0x55af9e400840] P sub_mb_type 6 out of range at 102 38
[h264 @ 0x55af9e400840] error while decoding MB 102 38
Function 'Video_to_ImageFrame' executed in 0.3206s
$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
/home/mx19/datasets/vital-us/echocardiography/videos-echo/01NVb-003-081/T1/01NVb-003-081-1 echo.mp4
/home/mx19/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_081/README.md
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-081/T1/cropped_us_image
Frame_height=1080, frame_width=1920 fps=30 nframes=16062
Traceback (most recent call last):
  File "video_to_imageframes.py", line 357, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 22, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 286, in Video_to_ImageFrame
    json_data = json.load(json_file)
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/mx19/anaconda3/envs/rt-ai-echo-VE/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
considering the following JSON directory contents:
mx19@sie133-lap:~/repositories/echocardiography/datasets/labelling-annotation/json_files/4CV/01NVb_003_060$ tree -h
.
├── [ 904] 01NVb-003-060-1-4CV.json
└── [ 340] README.md
0 directories, 2 files
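Since the annotation directory mixes README.md with the annotation file, the JSONDecodeError above is consistent with README.md being passed to `json.load`. A sketch of a guard that only ever loads `.json` files (the helper name is an assumption):

```python
import json
import tempfile
from pathlib import Path

def load_annotation_json(json_dir: Path) -> dict:
    """Load the annotation JSON, ignoring README.md and other non-.json files."""
    json_files = sorted(json_dir.glob("*.json"))
    if not json_files:
        raise FileNotFoundError(f"no .json annotation file in {json_dir}")
    return json.loads(json_files[0].read_text())

# demo with a throwaway directory shaped like the tree above
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "README.md").write_text("# notes")
    (d / "01NVb-003-060-1-4CV.json").write_text('{"label": "4CV"}')
    data = load_annotation_json(d)
    print(data)  # {'label': '4CV'}
```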
we might have two potential scenarios:
/home/mx19/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-060/T1/cropped_us_image/clip001/nframe08181_of_17036_T4min32sec972msec.png
Traceback (most recent call last):
  File "video_to_imageframes.py", line 370, in <module>
    Video_to_ImageFrame(
  File "video_to_imageframes.py", line 23, in wrap_func
    result = func(*args, **kwargs)
  File "video_to_imageframes.py", line 251, in Video_to_ImageFrame
    json_file_i = json_files[T_days_i[0]]
IndexError: list index out of range
(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/datasets/curation-and-selection$ python video_to_imageframes.py --config ../config_files/config_v2i.yml
Function 'Video_to_ImageFrame' executed in 0.0000s
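The IndexError comes from the `json_files[T_days_i[0]]` lookup when a session (T day) has no matching annotation file. A minimal sketch of a guard around that indexing (the function name is an assumption):

```python
def select_session_json(json_files: list, T_days_i: list):
    """Guard the json_files[T_days_i[0]] lookup that raised IndexError above."""
    if not T_days_i or T_days_i[0] >= len(json_files):
        return None  # no annotation matched this session folder; caller can skip it
    return json_files[T_days_i[0]]

print(select_session_json(["a.json"], []))   # None: empty index list, nothing to pick
print(select_session_json(["a.json"], [0]))  # a.json
```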
with the following data paths
mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo/01NVb-003-060$ tree -h
.
├── [4.0K] extras
├── [4.0K] T1
│   └── [883M] 01NVb-003-060-1 echo.mp4
├── [4.0K] T2
│   └── [4.0K] extras
│       ├── [189M] 01NVb-003-060-2 echo cont.mp4
│       └── [2.0G] 01NVb-003-060-2 echo.mp4
└── [4.0K] T3
5 directories, 3 files
mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo/01NVb-003-060$ tree -h
.
├── [4.0K] extras
│   ├── [4.0K] T2
│   │   └── [4.0K] extras
│   │       ├── [189M] 01NVb-003-060-2 echo cont.mp4
│   │       └── [2.0G] 01NVb-003-060-2 echo.mp4
│   └── [4.0K] T3
└── [4.0K] T1
    └── [883M] 01NVb-003-060-1 echo.mp4
5 directories, 3 files
Function 'Video_to_ImageFrame' executed in 30.4030s
Function 'conver_pngframes_to_avi' executed in 2.1116s
Function 'Video_to_ImageFrame' executed in 40.8828s
Function 'conver_pngframes_to_avi' executed in 2.5747s
Function 'Video_to_ImageFrame' executed in 113.1682s
Function 'conver_pngframes_to_avi' executed in 13.6143s
Function 'Video_to_ImageFrame' executed in 81.5419s
Function 'conver_pngframes_to_avi' executed in 10.6027s
Function 'Video_to_ImageFrame' executed in 92.5387s
Function 'conver_pngframes_to_avi' executed in 25.7689s
Function 'Video_to_ImageFrame' executed in 46.4046s
Function 'conver_pngframes_to_avi' executed in 6.7478s
Function 'Video_to_ImageFrame' executed in -
Function 'conver_pngframes_to_avi' executed in -
Function 'Video_to_ImageFrame' executed in 64.1658s
Function 'conver_pngframes_to_avi' executed in 9.3333s
Function 'Video_to_ImageFrame' executed in 115.2581s
Function 'conver_pngframes_to_avi' executed in 12.3159s
mx19@sie133-lap:~/datasets/vital-us/echocardiography/preprocessed-datasets/01NVb-003-060/T1/cropped_us_image/clip001$ tree -h
.
├── [466K] nframe08032_of_17036_T4min28sec1msec.png
├── [480K] nframe08033_of_17036_T4min28sec34msec.png
├── [470K] nframe08034_of_17036_T4min28sec67msec.png
├── [465K] nframe08035_of_17036_T4min28sec101msec.png
├── [466K] nframe08036_of_17036_T4min28sec134msec.png
├── [461K] nframe08037_of_17036_T4min28sec167msec.png
├── [440K] nframe08038_of_17036_T4min28sec201msec.png
├── [440K] nframe08039_of_17036_T4min28sec234msec.png
├── [404K] nframe08040_of_17036_T4min28sec268msec.png
├── [439K] nframe08041_of_17036_T4min28sec301msec.png
├── [444K] nframe08042_of_17036_T4min28sec334msec.png
├── [435K] nframe08043_of_17036_T4min28sec368msec.png
├── [453K] nframe08044_of_17036_T4min28sec401msec.png
├── [434K] nframe08045_of_17036_T4min28sec434msec.png
├── [469K] nframe08046_of_17036_T4min28sec468msec.png
├── [471K] nframe08047_of_17036_T4min28sec501msec.png
├── [461K] nframe08048_of_17036_T4min28sec534msec.png
├── [474K] nframe08049_of_17036_T4min28sec568msec.png
├── [458K] nframe08050_of_17036_T4min28sec601msec.png
├── [476K] nframe08051_of_17036_T4min28sec635msec.png
├── [476K] nframe08052_of_17036_T4min28sec668msec.png
├── [466K] nframe08053_of_17036_T4min28sec701msec.png
├── [466K] nframe08054_of_17036_T4min28sec735msec.png
├── [452K] nframe08055_of_17036_T4min28sec768msec.png
├── [453K] nframe08056_of_17036_T4min28sec801msec.png
├── [439K] nframe08057_of_17036_T4min28sec835msec.png
├── [458K] nframe08058_of_17036_T4min28sec868msec.png
├── [452K] nframe08059_of_17036_T4min28sec901msec.png
├── [446K] nframe08060_of_17036_T4min28sec935msec.png
├── [432K] nframe08061_of_17036_T4min28sec968msec.png
├── [450K] nframe08062_of_17036_T4min29sec2msec.png
├── [455K] nframe08063_of_17036_T4min29sec35msec.png
├── [461K] nframe08064_of_17036_T4min29sec68msec.png
├── [462K] nframe08065_of_17036_T4min29sec102msec.png
├── [454K] nframe08066_of_17036_T4min29sec135msec.png
├── [472K] nframe08067_of_17036_T4min29sec168msec.png
├── [461K] nframe08068_of_17036_T4min29sec202msec.png
├── [472K] nframe08069_of_17036_T4min29sec235msec.png
├── [473K] nframe08070_of_17036_T4min29sec269msec.png
├── [456K] nframe08071_of_17036_T4min29sec302msec.png
├── [477K] nframe08072_of_17036_T4min29sec335msec.png
├── [474K] nframe08073_of_17036_T4min29sec369msec.png
├── [468K] nframe08074_of_17036_T4min29sec402msec.png
├── [463K] nframe08075_of_17036_T4min29sec435msec.png
├── [457K] nframe08076_of_17036_T4min29sec469msec.png
├── [453K] nframe08077_of_17036_T4min29sec502msec.png
├── [457K] nframe08078_of_17036_T4min29sec535msec.png
├── [456K] nframe08079_of_17036_T4min29sec569msec.png
├── [433K] nframe08080_of_17036_T4min29sec602msec.png
├── [446K] nframe08081_of_17036_T4min29sec636msec.png
├── [431K] nframe08082_of_17036_T4min29sec669msec.png
├── [448K] nframe08083_of_17036_T4min29sec702msec.png
├── [454K] nframe08084_of_17036_T4min29sec736msec.png
├── [458K] nframe08085_of_17036_T4min29sec769msec.png
├── [462K] nframe08086_of_17036_T4min29sec802msec.png
├── [467K] nframe08087_of_17036_T4min29sec836msec.png
├── [457K] nframe08088_of_17036_T4min29sec869msec.png
├── [478K] nframe08089_of_17036_T4min29sec902msec.png
├── [477K] nframe08090_of_17036_T4min29sec936msec.png
├── [467K] nframe08091_of_17036_T4min29sec969msec.png
├── [479K] nframe08092_of_17036_T4min30sec3msec.png
├── [480K] nframe08093_of_17036_T4min30sec36msec.png
├── [479K] nframe08094_of_17036_T4min30sec69msec.png
├── [471K] nframe08095_of_17036_T4min30sec103msec.png
├── [464K] nframe08096_of_17036_T4min30sec136msec.png
├── [465K] nframe08097_of_17036_T4min30sec169msec.png
├── [463K] nframe08098_of_17036_T4min30sec203msec.png
├── [449K] nframe08099_of_17036_T4min30sec236msec.png
├── [422K] nframe08100_of_17036_T4min30sec270msec.png
├── [426K] nframe08101_of_17036_T4min30sec303msec.png
├── [440K] nframe08102_of_17036_T4min30sec336msec.png
├── [442K] nframe08103_of_17036_T4min30sec370msec.png
├── [449K] nframe08104_of_17036_T4min30sec403msec.png
├── [432K] nframe08105_of_17036_T4min30sec436msec.png
├── [456K] nframe08106_of_17036_T4min30sec470msec.png
├── [463K] nframe08107_of_17036_T4min30sec503msec.png
├── [468K] nframe08108_of_17036_T4min30sec536msec.png
├── [473K] nframe08109_of_17036_T4min30sec570msec.png
├── [466K] nframe08110_of_17036_T4min30sec603msec.png
├── [478K] nframe08111_of_17036_T4min30sec637msec.png
├── [459K] nframe08112_of_17036_T4min30sec670msec.png
├── [483K] nframe08113_of_17036_T4min30sec703msec.png
├── [463K] nframe08114_of_17036_T4min30sec737msec.png
├── [478K] nframe08115_of_17036_T4min30sec770msec.png
├── [472K] nframe08116_of_17036_T4min30sec803msec.png
├── [464K] nframe08117_of_17036_T4min30sec837msec.png
├── [461K] nframe08118_of_17036_T4min30sec870msec.png
├── [466K] nframe08119_of_17036_T4min30sec903msec.png
├── [460K] nframe08120_of_17036_T4min30sec937msec.png
├── [460K] nframe08121_of_17036_T4min30sec970msec.png
├── [452K] nframe08122_of_17036_T4min31sec4msec.png
├── [447K] nframe08123_of_17036_T4min31sec37msec.png
├── [454K] nframe08124_of_17036_T4min31sec70msec.png
├── [459K] nframe08125_of_17036_T4min31sec104msec.png
├── [444K] nframe08126_of_17036_T4min31sec137msec.png
├── [465K] nframe08127_of_17036_T4min31sec170msec.png
├── [467K] nframe08128_of_17036_T4min31sec204msec.png
├── [476K] nframe08129_of_17036_T4min31sec237msec.png
├── [480K] nframe08130_of_17036_T4min31sec271msec.png
├── [481K] nframe08131_of_17036_T4min31sec304msec.png
├── [481K] nframe08132_of_17036_T4min31sec337msec.png
├── [478K] nframe08133_of_17036_T4min31sec371msec.png
├── [466K] nframe08134_of_17036_T4min31sec404msec.png
├── [485K] nframe08135_of_17036_T4min31sec437msec.png
├── [456K] nframe08136_of_17036_T4min31sec471msec.png
├── [476K] nframe08137_of_17036_T4min31sec504msec.png
├── [470K] nframe08138_of_17036_T4min31sec537msec.png
├── [469K] nframe08139_of_17036_T4min31sec571msec.png
├── [467K] nframe08140_of_17036_T4min31sec604msec.png
├── [469K] nframe08141_of_17036_T4min31sec638msec.png
├── [458K] nframe08142_of_17036_T4min31sec671msec.png
├── [454K] nframe08143_of_17036_T4min31sec704msec.png
├── [447K] nframe08144_of_17036_T4min31sec738msec.png
├── [451K] nframe08145_of_17036_T4min31sec771msec.png
├── [437K] nframe08146_of_17036_T4min31sec804msec.png
├── [457K] nframe08147_of_17036_T4min31sec838msec.png
├── [460K] nframe08148_of_17036_T4min31sec871msec.png
├── [451K] nframe08149_of_17036_T4min31sec904msec.png
├── [473K] nframe08150_of_17036_T4min31sec938msec.png
├── [477K] nframe08151_of_17036_T4min31sec971msec.png
├── [481K] nframe08152_of_17036_T4min32sec5msec.png
├── [478K] nframe08153_of_17036_T4min32sec38msec.png
├── [478K] nframe08154_of_17036_T4min32sec71msec.png
├── [480K] nframe08155_of_17036_T4min32sec105msec.png
├── [478K] nframe08156_of_17036_T4min32sec138msec.png
├── [461K] nframe08157_of_17036_T4min32sec171msec.png
├── [472K] nframe08158_of_17036_T4min32sec205msec.png
├── [452K] nframe08159_of_17036_T4min32sec238msec.png
├── [428K] nframe08160_of_17036_T4min32sec272msec.png
├── [462K] nframe08161_of_17036_T4min32sec305msec.png
├── [447K] nframe08162_of_17036_T4min32sec338msec.png
├── [459K] nframe08163_of_17036_T4min32sec372msec.png
├── [451K] nframe08164_of_17036_T4min32sec405msec.png
├── [451K] nframe08165_of_17036_T4min32sec438msec.png
├── [446K] nframe08166_of_17036_T4min32sec472msec.png
├── [455K] nframe08167_of_17036_T4min32sec505msec.png
├── [440K] nframe08168_of_17036_T4min32sec538msec.png
├── [463K] nframe08169_of_17036_T4min32sec572msec.png
├── [466K] nframe08170_of_17036_T4min32sec605msec.png
├── [473K] nframe08171_of_17036_T4min32sec639msec.png
├── [459K] nframe08172_of_17036_T4min32sec672msec.png
├── [473K] nframe08173_of_17036_T4min32sec705msec.png
├── [475K] nframe08174_of_17036_T4min32sec739msec.png
├── [460K] nframe08175_of_17036_T4min32sec772msec.png
├── [475K] nframe08176_of_17036_T4min32sec805msec.png
├── [476K] nframe08177_of_17036_T4min32sec839msec.png
├── [462K] nframe08178_of_17036_T4min32sec872msec.png
├── [471K] nframe08179_of_17036_T4min32sec905msec.png
├── [466K] nframe08180_of_17036_T4min32sec939msec.png
└── [467K] nframe08181_of_17036_T4min32sec972msec.png
0 directories, 150 files
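The frame names above encode index, total frame count, and timestamp. A sketch of how such a name could be rebuilt from a frame index; the video headers report fps=30, but the listed names are consistent with the NTSC rate of ~29.97 fps and millisecond truncation, both of which are assumptions here:

```python
def frame_filename(frame_index: int, n_frames: int, fps: float = 29.97) -> str:
    """Build names like nframe08032_of_17036_T4min28sec1msec.png from a frame index."""
    total_ms = int(frame_index / fps * 1000)  # truncate, as the listed names suggest
    minutes, rem = divmod(total_ms, 60_000)
    seconds, msec = divmod(rem, 1000)
    return f"nframe{frame_index:05d}_of_{n_frames}_T{minutes}min{seconds}sec{msec}msec.png"

print(frame_filename(8032, 17036))  # nframe08032_of_17036_T4min28sec1msec.png
```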
Hi @huynhatd13
Thanks
Hi @mxochicale
Hi @huynhatd13
Can you confirm the following status of the echo datasets?
58 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4 Clip number: 44; Label: 4CV Random index in the segment clip: 27 of n_available_frames 72
58 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4 Clip number: 45; Label: 4CV Random index in the segment clip: 67 of n_available_frames 80
57 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4 Clip number: 46; Label: 4CV Random index in the segment clip: 34 of n_available_frames 49
72 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T1/01NVb-003-070-1 echo.mp4 Clip number: 47; Label: 4CV Random index in the segment clip: 39 of n_available_frames 171
61 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T2/01NVb-003-070-2 echo.mp4 Clip number: 48; Label: 4CV Random index in the segment clip: 59 of n_available_frames 75
116 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T2/01NVb-003-070-2 echo.mp4 Clip number: 49; Label: 4CV Random index in the segment clip: 45 of n_available_frames 65
68 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T2/01NVb-003-070-2 echo.mp4 Clip number: 50; Label: 4CV Random index in the segment clip: 12 of n_available_frames 25
75 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T2/01NVb-003-070-2 echo.mp4 Clip number: 51; Label: 4CV Random index in the segment clip: 20 of n_available_frames 43
54 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-070/T3/01NVb-003-070-3 echo.mp4 Clip number: 52; Label: 4CV Random index in the segment clip: 57 of n_available_frames 124
68 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-071/T1/01NVb-003-071-1 echo.mp4 Clip number: 53; Label: 4CV Random index in the segment clip: 17 of n_available_frames 73
57 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-071/T2/01NVb-003-071-2 echo.mp4 Clip number: 54; Label: 4CV Random index in the segment clip: 55 of n_available_frames 175
68 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-071/T2/01NVb-003-071-2 echo.mp4 Clip number: 55; Label: 4CV Random index in the segment clip: 65 of n_available_frames 139
81 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-071/T3/01NVb-003-071-3 echo.mp4 Clip number: 56; Label: 4CV Random index in the segment clip: 21 of n_available_frames 106
60 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T1/01NVb-003-072-1-echo.mp4 Clip number: 57; Label: 4CV Random index in the segment clip: 114 of n_available_frames 158
70 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T2/01NVb-003-072-2-echo-cont.mp4 Clip number: 58; Label: 4CV Random index in the segment clip: 15 of n_available_frames 155
90 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T2/01NVb-003-072-2-echo-cont.mp4 Clip number: 59; Label: 4CV Random index in the segment clip: 44 of n_available_frames 70
72 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T2/01NVb-003-072-2-echo-cont.mp4 Clip number: 60; Label: 4CV Random index in the segment clip: 21 of n_available_frames 63
68 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T3/01NVb-003-072-3-echo.mp4 Clip number: 61; Label: 4CV Random index in the segment clip: 21 of n_available_frames 43
86 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-072/T3/01NVb-003-072-3-echo.mp4 Clip number: 62; Label: 4CV Random index in the segment clip: 44 of n_available_frames 56
75 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-073/T1/01NVb-003-073-1 echo.mp4 Clip number: 63; Label: 4CV Random index in the segment clip: 69 of n_available_frames 105
112 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-073/T1/01NVb-003-073-1 echo.mp4 Clip number: 64; Label: 4CV Random index in the segment clip: 20 of n_available_frames 73
98 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-073/T2/01NVb-003-073-2 echo.mp4 Clip number: 65; Label: 4CV Random index in the segment clip: 123 of n_available_frames 183
90 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T1/01NVb-003-074-1 echo.mp4 Clip number: 66; Label: 4CV Random index in the segment clip: 65 of n_available_frames 97
91 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T1/01NVb-003-074-1 echo.mp4 Clip number: 67; Label: 4CV Random index in the segment clip: 80 of n_available_frames 94
87 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T2/01NVb-003-074-2 echo.mp4 Clip number: 68; Label: 4CV Random index in the segment clip: 14 of n_available_frames 102
85 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T2/01NVb-003-074-2 echo.mp4 Clip number: 69; Label: 4CV Random index in the segment clip: 75 of n_available_frames 106
99 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-075/T1/01NVb-003-075-1 echo.mp4 Clip number: 70; Label: 4CV Random index in the segment clip: 89 of n_available_frames 144
97 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-075/T2/01NVb-003-075-2 echo.mp4 Clip number: 71; Label: 4CV Random index in the segment clip: 14 of n_available_frames 90
96 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-075/T2/01NVb-003-075-2 echo.mp4 Clip number: 72; Label: 4CV Random index in the segment clip: 72 of n_available_frames 104
NA HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-075/T3/01NVb-003-075-3 echo.mp4 Clip number: 73; Label: 4CV Random index in the segment clip: 28 of n_available_frames 189
NA HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-075/T3/01NVb-003-075-3 echo.mp4 Clip number: 74; Label: 4CV Random index in the segment clip: 74 of n_available_frames 174
55 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-076/T1/01NVb-003-076-1 echo.mp4 Clip number: 75; Label: 4CV Random index in the segment clip: 55 of n_available_frames 162
71 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-076/T2/01NVb-003-076-2 echo.mp4 Clip number: 76; Label: 4CV Random index in the segment clip: 65 of n_available_frames 221
67 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-076/T3/01NVb-003-076-3 echo.mp4 Clip number: 77; Label: 4CV Random index in the segment clip: 89 of n_available_frames 137
70 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-076/T3/01NVb-003-076-3 echo.mp4 Clip number: 78; Label: 4CV Random index in the segment clip: 17 of n_available_frames 161
106 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-078/T1/01NVb-003-078-1 echo.mp4 Clip number: 79; Label: 4CV Random index in the segment clip: 125 of n_available_frames 180
104 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-078/T2/01NVb-003-078-2 echo.mp4 Clip number: 80; Label: 4CV Random index in the segment clip: 15 of n_available_frames 311
73 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-078/T3/01NVb-003-078-3 echo.mp4 Clip number: 81; Label: 4CV Random index in the segment clip: 175 of n_available_frames 266
59 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-079/T1/01NVb-003-079-1 echo.mp4 Clip number: 82; Label: 4CV Random index in the segment clip: 238 of n_available_frames 328
65 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-079/T2/01NVb-003-079-2 echo.mp4 Clip number: 83; Label: 4CV Random index in the segment clip: 36 of n_available_frames 133
70 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-079/T2/01NVb-003-079-2 echo.mp4 Clip number: 84; Label: 4CV Random index in the segment clip: 150 of n_available_frames 167
58 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-079/T3/01NVb-003-079-3 echo.mp4 Clip number: 85; Label: 4CV Random index in the segment clip: 67 of n_available_frames 235
64 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-079/T3/01NVb-003-079-3 echo.mp4 Clip number: 86; Label: 4CV Random index in the segment clip: 142 of n_available_frames 220
69 HR /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-080/T1/01NVb-003-080-1-echo.mp4 Clip number: 87; Label: 4CV Random index in the segment clip: 51 of n_available_frames 154
mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo-annotated$ tree
.
├── 01NVb-003-070
│ ├── T1
│ │ ├── 01NVb-003-070-1-4CV.json
│ │ └── 01NVb-003-070-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-070-2-4CV.json
│ │ └── 01NVb-003-070-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-070-3-4CV.json
│ └── 01NVb-003-070-3 echo.mp4
├── 01NVb-003-071
│ ├── T1
│ │ ├── 01NVb-003-071-1-4CV.json
│ │ └── 01NVb-003-071-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-071-2-4CV.json
│ │ └── 01NVb-003-071-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-071-3-4CV.json
│ └── 01NVb-003-071-3 echo.mp4
├── 01NVb-003-072
│ ├── T1
│ │ ├── 01NVb-003-072-1-echo.mp4
│ │ └── 01NVb_003_072_T1_4CV.json
│ ├── T2
│ │ ├── 01NVb-003-072-2-echo-cont.mp4
│ │ ├── 01NVb_003_072_T2_4CV.json
│ │ └── extras
│ │ └── 01NVb-003-072-2-echo_mp4_
│ └── T3
│ ├── 01NVb-003-072-3-echo.mp4
│ └── 01NVb_003_072_T3_4CV.json
├── 01NVb-003-073
│ ├── T1
│ │ ├── 01NVb-003-073-1-4CV.json
│ │ └── 01NVb-003-073-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-073-2-4CV.json
│ │ └── 01NVb-003-073-2 echo.mp4
│ └── T3
├── 01NVb-003-074
│ ├── T1
│ │ ├── 01NVb-003-074-1-4CV.json
│ │ └── 01NVb-003-074-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-074-2-4CV.json
│ │ └── 01NVb-003-074-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-074-3-4CV.json
│ └── 01NVb-003-074-3 echo.mp4
├── 01NVb-003-075
│ ├── T1
│ │ ├── 01NVb-003-075-1-4CV.json
│ │ └── 01NVb-003-075-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-075-2-4CV.json
│ │ └── 01NVb-003-075-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-075-3-4CV.json
│ └── 01NVb-003-075-3 echo.mp4
├── 01NVb-003-076
│ ├── T1
│ │ ├── 01NVb-003-076-1-4CV.json
│ │ └── 01NVb-003-076-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-076-2-4CV.json
│ │ └── 01NVb-003-076-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-076-3-4CV.json
│ └── 01NVb-003-076-3 echo.mp4
├── 01NVb-003-077
│ ├── T1
│ │ ├── 01NVb-003-077-1-4CV.json
│ │ ├── 01NVb-003-077-1 echo.mp4
│ │ └── extras
│ │ └── 01NVb-003-077-1_cont_mp4_
│ ├── T2
│ │ ├── 01NVb-003-077-2-4CV.json
│ │ └── 01NVb-003-077-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-077-3-4CV.json
│ └── 01NVb-003-077-3 echo.mp4
├── 01NVb-003-078
│ ├── T1
│ │ ├── 01NVb-003-078-1-4CV.json
│ │ └── 01NVb-003-078-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-078-2-4CV.json
│ │ └── 01NVb-003-078-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-078-3-4CV.json
│ └── 01NVb-003-078-3 echo.mp4
├── 01NVb-003-079
│ ├── T1
│ │ ├── 01NVb-003-079-1-4CV.json
│ │ └── 01NVb-003-079-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-079-2-4CV.json
│ │ └── 01NVb-003-079-2 echo.mp4
│ └── T3
│ ├── 01NVb-003-079-3-4CV.json
│ └── 01NVb-003-079-3 echo.mp4
└── 01NVb-003-080
├── T1
│ ├── 01NVb-003-080-1-4CV.json
│ └── 01NVb-003-080-1-echo.mp4
├── T2
│ └── 01NVb-003-080-2 echo_mp4_
└── T3
└── 01NVb-003-080-3 echo_mp4_
46 directories, 64 files
(rt-ai-echo-VE) mx19@sie133-lap:~/repositories/echocardiography/scripts/learning-pipeline$ python learning_pipeline.py --config ../config_files/learning_pipeline/config_learning_pipeline.yml
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T3/01NVb-003-077-3-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-074/T3/01NVb-003-074-3-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T1/01NVb-003-077-1-4CV.json (empty). Removing from list
[ERROR] [EchoClassesDataset.__init__()] Error reading /home/mx19/datasets/vital-us/echocardiography/videos-echo-annotated/01NVb-003-077/T2/01NVb-003-077-2-4CV.json (empty). Removing from list
Number of clips: 88
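The `[ERROR] ... (empty). Removing from list` messages above indicate that the dataset constructor drops unreadable annotation files before counting clips. A minimal sketch of such a guard, making no assumption about the real `EchoClassesDataset` internals (the helper name `readable_annotations` is hypothetical):

```python
import json
from pathlib import Path

def readable_annotations(json_paths):
    """Keep only annotation files that exist, are non-empty and parse as JSON."""
    kept = []
    for p in map(Path, json_paths):
        try:
            if p.stat().st_size == 0:
                raise ValueError("empty")
            with p.open() as f:
                json.load(f)  # JSONDecodeError is a subclass of ValueError
        except (OSError, ValueError) as err:
            print(f"[ERROR] Error reading {p} ({err}). Removing from list")
            continue
        kept.append(p)
    return kept
```

Filtering up-front like this keeps later `__getitem__` calls free of per-file error handling.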
Thanks @huynhatd13 for uploading the files. I can confirm that the files are there from participants 87 to 106.
In addition, I just uploaded the remaining patient data (090-106).
It would be great if you could help with:
Hi @huynhatd13
Just realised that the following annotations with the name (1)
do not make much sense. For instance, 01NVb-003-052-2-4CV.json
uses "fname":"01NVb-003-052-2 echo.mp4", whereas 01NVb-003-052-2-4CV(1).json
also uses "fname":"01NVb-003-052-2 echo.mp4". Would you please help to comment on, rename, or delete these files on the filezilla server?
mx19@sie133-lap:~/datasets/vital-us/echocardiography/videos-echo-annotated$ tree
.
├── 01NVb-003-052
│ ├── T1
│ │ ├── 01NVb-003-052-1-4CV.json
│ │ └── 01NVb-003-052-1 echo.mp4
│ ├── T2
│ │ ├── 01NVb-003-052-2-4CV(1).json
│ │ ├── 01NVb-003-052-2-4CV.json
│ │ ├── 01NVb-003-052-2 echo (2).mp4
│ │ └── 01NVb-003-052-2 echo.mp4
│ └── T3
└── 01NVb-003-053
├── T1
│ ├── 01NVb-003-053-1-4CV.json
│ └── 01NVb-003-053-1 echo.mp4
├── T2
│ ├── 01NVb-003-053-2-4CV.json
│ └── 01NVb-003-053-2 echo.mp4
└── T3
├── 01NVb-003-053-3-4CV (1).json
├── 01NVb-003-053-3-4CV.json
├── 01NVb-003-053-3 echo cont.mp4
└── 01NVb-003-053-3 echo.mp4
8 directories, 14 files
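Duplicate downloads such as `01NVb-003-052-2-4CV(1).json` and `01NVb-003-053-3-4CV (1).json` can be flagged automatically before they reach the pipeline. A small sketch, assuming only the browser-style `(n)` naming pattern visible in the tree above:

```python
import re

# Matches duplicate-download suffixes such as "(1)" or " (2)" just before the extension
DUP_SUFFIX = re.compile(r"\s?\((\d+)\)(?=\.\w+$)")

def duplicate_downloads(filenames):
    """Return (duplicate, presumed original) pairs based on the "(n)" suffix."""
    pairs = []
    for name in filenames:
        if DUP_SUFFIX.search(name):
            pairs.append((name, DUP_SUFFIX.sub("", name)))
    return pairs

names = ["01NVb-003-052-2-4CV(1).json", "01NVb-003-053-3-4CV (1).json",
         "01NVb-003-052-2-4CV.json"]
print(duplicate_downloads(names))
```

The second element of each pair is only a guess at the original name; the `fname` field inside each JSON should still be checked before deleting anything on the server.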
import matplotlib.pyplot as plt  # needed for plt.axis()/plt.show()

print('=================== LABELLED =======================')
# basic_demographics is a pandas DataFrame with a 'LABELLED' column
basic_demographics['LABELLED'].value_counts().plot.pie(autopct='%.1f %%', ylabel='TOTAL', legend=True)
plt.axis('equal')
plt.show()
I have modified the files and uploaded
Hi @huynhatd13
datasets
Thanks for the annotation of another 10 subjects. Would you like to check what is happening with the following datasets:
- [x] 40-T3: not annotated? - I cannot read the video
- [x] 41-T3: not sure which video was annotated: echo or echo cont? - echo.mp4; you can check the "file/fname" field in the JSON file
- [x] 44-T1: no echo data? - No echo data
- [x] 49-T1: not annotated? - No 4CV view
- [x] 49-T2: not annotated? - No 4CV view
- [x] 49-T3: no echo data? - No echo data, as the patient was transferred home
validation
Not sure if you have run the notebook, but it would be nice if you could provide input on how to improve the validation pipeline: https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb. I would suggest we go through this List of files to be verified together.
I have run the validation-of-4cv.ipynb notebook on my laptop. I will add comments on the code in EchoClassesDataset().
12594392 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_00_041-1_label_00.pth
12594408 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_00_041-3_label_00.pth
12594409 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_01_041-1_label_00.pth
12594393 -rw-rw-r-- 1 mx19 mx19 1.6M Apr 21 16:42 videoID_01_041-2_label_00.pth
12594394 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_02_041-2_label_00.pth
12594414 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:34 videoID_02_041-3_label_00.pth
12594410 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_03_041-2_label_00.pth
12594395 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_03_041-3_label_00.pth
12594396 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_04_041-2_label_00.pth
12594411 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_04_041-3_label_00.pth
12594397 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_05_041-1_label_00.pth
12594412 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_05_041-3_label_00.pth
12594413 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 17:31 videoID_06_041-2_label_00.pth
12594398 -rw-rw-r-- 1 mx19 mx19 5.8M Apr 21 16:42 videoID_06_041-3_label_00.pth
12594399 -rw-rw-r-- 1 mx19 mx19 3.6M Apr 21 16:42 videoID_07_041-3_label_00.pth
The notebook can now be used to select the participant and the pixel size of the image: https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb. The following animations have been created for participant 41.
It seems that the background clips are picking up a few frames from the 4CV; I am not sure what is wrong with the background frames for clip06 from participant 41:
For the other clips, the background seems to be okay:
Run https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb to make further comments
From the meeting with @huynhatd13 on Tue 26 Apr 08:45:12 BST 2022 where I explained the notebook, we think that the following points will be beneficial in the verification of 4CV:
In the weekly meeting of 26-Apr-2022, Alberto suggested leaving quality assessment for later. Andy suggested paying attention to the false negatives/positives of clips, as those frames will impact the extraction of cardiac output.
CLIP:00 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 152 from clip_frame_clip_idx 57
CLIP:01 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 72
CLIP:02 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 77
CLIP:03 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 75
CLIP:04 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 40 from clip_frame_clip_idx 0
CLIP:05 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 84
CLIP:06 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 66
CLIP:07 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 48
CLIP:08 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 106 from clip_frame_clip_idx 53
CLIP:09 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 108 from clip_frame_clip_idx 50
CLIP:10 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 79 from clip_frame_clip_idx 31
CLIP:11 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 42
CLIP:12 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 102 from clip_frame_clip_idx 50
CLIP:13 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 122 from clip_frame_clip_idx 58
CLIP:14 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 135 from clip_frame_clip_idx 78
CLIP:15 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 119 from clip_frame_clip_idx 63
0 8
1 9
2 10
3 11
4 12
5 13
6 14
7 15
CLIP:00 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 52
CLIP:01 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 152 from clip_frame_clip_idx 64
CLIP:02 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 79
CLIP:03 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 74
CLIP:04 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 51
CLIP:05 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 55
CLIP:06 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 93
CLIP:07 of BKGR label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 82
CLIP:08 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 106 from clip_frame_clip_idx 54
CLIP:09 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 108 from clip_frame_clip_idx 50
CLIP:10 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 79 from clip_frame_clip_idx 40
CLIP:11 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 44
CLIP:12 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 102 from clip_frame_clip_idx 50
CLIP:13 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 122 from clip_frame_clip_idx 53
CLIP:14 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 135 from clip_frame_clip_idx 52
CLIP:15 of 4CV label for torch.Size([1, 50, 100, 100]) TOTAL_FRAMES: 119 from clip_frame_clip_idx 58
0 8
1 9
2 10
3 11
4 12
5 13
6 14
7 15
CLIP:00 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
CLIP:01 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
CLIP:02 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
CLIP:03 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
CLIP:04 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
CLIP:05 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 151 from clip_frame_clip_idx 0
CLIP:06 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 40 from clip_frame_clip_idx 0
CLIP:07 of BKGR label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 150 from clip_frame_clip_idx 0
CLIP:08 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 106 from clip_frame_clip_idx 0
CLIP:09 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 108 from clip_frame_clip_idx 0
CLIP:10 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 79 from clip_frame_clip_idx 0
CLIP:11 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 92 from clip_frame_clip_idx 0
CLIP:12 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 102 from clip_frame_clip_idx 0
CLIP:13 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 122 from clip_frame_clip_idx 0
CLIP:14 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 135 from clip_frame_clip_idx 0
CLIP:15 of 4CV label for torch.Size([1, 200, 100, 100]) TOTAL_FRAMES: 119 from clip_frame_clip_idx 0
0 8
1 9
2 10
3 11
4 12
5 13
6 14
7 15
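To sanity-check dumps like the ones above (for example, that no 4CV clip is listed under the BKGR label, or that every index fits inside `TOTAL_FRAMES`), the log lines can be parsed programmatically. A sketch that assumes only the line format shown above:

```python
import re

LOG_RE = re.compile(
    r"CLIP:(?P<clip>\d+) of (?P<label>\w+) label for "
    r"torch\.Size\(\[1, (?P<clip_len>\d+), \d+, \d+\]\) "
    r"TOTAL_FRAMES: (?P<total>\d+) from clip_frame_clip_idx (?P<idx>\d+)"
)

def parse_clip_log(line):
    """Extract clip number, label, clip length, total frames and index from one line."""
    m = LOG_RE.match(line)
    if m is None:
        return None
    d = m.groupdict()
    return {k: (v if k == "label" else int(v)) for k, v in d.items()}

row = parse_clip_log(
    "CLIP:00 of BKGR label for torch.Size([1, 50, 100, 100]) "
    "TOTAL_FRAMES: 152 from clip_frame_clip_idx 57")
```

Collecting the parsed rows into a table makes it easy to spot outliers such as the 40-frame clips, where the index falls back to 0.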
In our weekly meeting on Tue 3 May 2022, Alberto suggested generating animations for all videos to provide some form of quality control. The videos will be shared on the filezilla server.
The above commit contains:
Animation size has reached 21084101 bytes, exceeding the limit of 20971520.0. If you're sure you want a larger animation embedded, set the animation.embed_limit rc parameter to a larger value (in MB). This and further frames will be dropped.
remote: warning: File scripts/curation-selection-validation/validation-of-4cv-labels.ipynb is 74.53 MB; this is larger than GitHub's recommended maximum file size of 50.00 MB
interval_between_frames_in_milliseconds=33.3  # 1/30 s = 0.0333 s = 33.3 ms
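The embed-limit warning above comes from matplotlib's `animation.embed_limit` rc parameter, which is expressed in MB and defaults to 20; raising it (e.g. `plt.rcParams['animation.embed_limit'] = 50`) would keep the dropped frames, at the cost of a larger notebook. The numbers in the log line check out as follows:

```python
# animation.embed_limit is given in MB; matplotlib converts it to bytes.
embed_limit_mb = 20.0
limit_bytes = embed_limit_mb * 1024 * 1024   # 20971520.0, as in the warning
animation_bytes = 21084101                   # size reported for this animation

print(animation_bytes > limit_bytes)         # True -> frames beyond the limit are dropped
```

This also explains the GitHub warning: embedding large animations inflates the `.ipynb` itself, so writing the GIFs to disk instead of embedding them keeps the notebook small.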
Hi All
As discussed in our last meeting, I have just created animated echo clips to verify whether the 4CV and background clips look as clinically expected. If you open [1], using your KCL credentials or logging in as a guest of the share-point, you will find 51 GIF animations for participants 40 to 48, covering background and 4CV clips at an image resolution of 250Wx250H*.
It would be great if @huynhatd13, @gomezalberto, and perhaps Luigi or others would like to have a look and share comments/feedback for quality control of the clips. I will do my best to create animations for all available labelled videos, put them on the filezilla server, and provide further instructions for quality control before the end of the week. If interested, the implementation for the animations and other code bits is available in this self-explanatory and documented notebook [2].
Notes:
Ps. On Friday, I am sharing updates for the learning pipeline in the technical group chat.
Thanks Miguel
1: https://emckclac.sharepoint.com/sites/MT-BMEIS-VITAL-US/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2FMT%2DBMEIS%2DVITAL%2DUS%2FShared%20Documents%2FGeneral%2F06%20Clinical%2FEvaluation%2Fechoes%5Fvalidation%5FMay2022%5FTEMPORAL&viewid=1158d72f%2D3e4f%2D43ff%2Daac1%2D2f995401359e 2: https://github.com/vital-ultrasound/echocardiography/blob/13-validation-of-4cv/scripts/curation-selection-validation/validation-of-4cv-labels.ipynb
Hi All
Just completed 169 GIF animations for subjects 40 to 79. See a few key points and notes below.
Ps. Before the end of the week, I will be sharing a few updates regarding the learning workflow.
Thanks Miguel
TODO
Error: /01NVb/01NVb/Group 1-ECHO + LUS/01NVb-003-070/T1/01NVb-003-071-1-4CV.json: open for write: permission denied
Hi @nhatpth
would you please help to annotate 052 and 070 and upload the JSON files to the filezilla server?
Then this issue will be ready to be closed.
Thanks @nhatpth for annotating subjects 052 and 070.
Something is wrong with the annotations for 01NVb-003-052-3-4CV.json, even after changing 01NVb-003-052-2 echo.mp4 to 01NVb-003-052-3 echo.mp4:
{"project":{"pid":"VIA_PROJECT_ID","rev":"VIA_PROJECT_REV_ID__","rev_timestamp":"VIA_PROJECT_REV_TIMESTAMP__","pname":"echo_4CV_template_VIA Project","creator":"VGG Image Annotator (http://www.robots.ox.ac.uk/~vgg/software/via)","created":1634093189898,"vid_list":["1"]},"config":{"file":{"loc_prefix":{"1":"","2":"","3":"","4":""}},"ui":{"file_content_align":"center","file_metadata_editor_visible":true,"spatial_metadata_editor_visible":true,"temporal_segment_metadata_editor_visible":true,"spatial_region_label_attribute_id":"","gtimeline_visible_row_count":"4"}},"attribute":{"1":{"aname":"4CV","anchor_id":"FILE1_Z2_XY0","type":2,"desc":"","options":{},"default_option_id":""}},"file":{"1":{"fid":"1","fname":"01NVb-003-052-2 echo.mp4","type":4,"loc":1,"src":""}},"metadata":{"1_Byx0j4N6":{"vid":"1","flg":0,"z":[677.13275,679.59108],"xy":[],"av":{"1":"4CV"}},"1_t9KVo6UE":{"vid":"1","flg":0,"z":[673.237,675.63275],"xy":[],"av":{"1":"4CV"}}},"view":{"1":{"fid_list":["1"]}}}
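One way to catch this kind of mismatch is to compare the `fname` stored in the VIA project JSON with the video that actually sits next to it. A minimal sketch assuming only the VIA structure shown above (the helper name `fname_matches_video` is hypothetical):

```python
import json

def fname_matches_video(via_json_text: str, expected_video: str) -> bool:
    """Check that every file entry in a VIA project points at the expected video."""
    project = json.loads(via_json_text)
    fnames = [f["fname"] for f in project.get("file", {}).values()]
    return bool(fnames) and all(name == expected_video for name in fnames)

# Reduced example mirroring the JSON above: the annotation still points at ...-052-2
via_text = json.dumps({"file": {"1": {"fname": "01NVb-003-052-2 echo.mp4"}}})
print(fname_matches_video(via_text, "01NVb-003-052-3 echo.mp4"))  # False: stale fname
```

Running such a check over the whole `videos-echo-annotated` tree would surface every annotation whose `fname` no longer matches its renamed video.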
Hi @nhatpth, any updates on the above annotation 01NVb-003-052-3-4CV.json?
I am closing this one but perhaps if there is time, please double check:
Something is wrong with the annotations for 01NVb-003-052-3-4CV.json, even after changing 01NVb-003-052-2 echo.mp4 to 01NVb-003-052-3 echo.mp4:
{"project":{"pid":"VIA_PROJECT_ID","rev":"VIA_PROJECT_REV_ID__","rev_timestamp":"VIA_PROJECT_REV_TIMESTAMP__","pname":"echo_4CV_template_VIA Project","creator":"VGG Image Annotator (http://www.robots.ox.ac.uk/~vgg/software/via)","created":1634093189898,"vid_list":["1"]},"config":{"file":{"loc_prefix":{"1":"","2":"","3":"","4":""}},"ui":{"file_content_align":"center","file_metadata_editor_visible":true,"spatial_metadata_editor_visible":true,"temporal_segment_metadata_editor_visible":true,"spatial_region_label_attribute_id":"","gtimeline_visible_row_count":"4"}},"attribute":{"1":{"aname":"4CV","anchor_id":"FILE1_Z2_XY0","type":2,"desc":"","options":{},"default_option_id":""}},"file":{"1":{"fid":"1","fname":"01NVb-003-052-2 echo.mp4","type":4,"loc":1,"src":""}},"metadata":{"1_Byx0j4N6":{"vid":"1","flg":0,"z":[677.13275,679.59108],"xy":[],"av":{"1":"4CV"}},"1_t9KVo6UE":{"vid":"1","flg":0,"z":[673.237,675.63275],"xy":[],"av":{"1":"4CV"}}},"view":{"1":{"fid_list":["1"]}}}
There are a few scripts for
Anonymization
and video_to_sliding_video.py
from 3909cc0. However, those might not work with the raw datasets of 81 participants, and they do not take into account the inconsistency of such videos (different sources, views, modes, absolute paths, and others). That said, I am raising this ticket to address the above and related topics in order to prepare and curate the datasets.