ocean-data-factory-sweden / kso

Notebooks to upload/download marine footage, connect to a citizen science project, train machine learning models and publish marine biological observations.
GNU General Public License v3.0

Issue in Tutorial 9: running the detection model on uploaded footage #353

Closed: Bergylta closed this issue 5 months ago

Bergylta commented 5 months ago

πŸ› Bug

Running the detection model in Tutorial 9 on uploaded footage fails: when detect_yolo is given the uploaded movie (an .MP4 file), it raises UnidentifiedImageError because the movie path is handled as if it were an image file.

To Reproduce (REQUIRED)

Input:
- Model: GU_baitbox_newmodel1
- Movie: 230921_RORT_04.MP4
- Threshold

mlp.detect_yolo(
    source=pp.movies_paths,
    save_dir=save_dir.selected,
    conf_thres=conf_thres.value,
    artifact_dir=artifact_dir,
    save_output=True,
    project=mlp.project_name,
    name=exp_name.value,
    model=model.value,
    latest=True,
)

Output:

---------------------------------------------------------------------------
UnidentifiedImageError                    Traceback (most recent call last)
Cell In[13], line 1
----> 1 mlp.detect_yolo(
      2     source=pp.movies_paths,
      3     save_dir=save_dir.selected,
      4     conf_thres=conf_thres.value,
      5     artifact_dir=artifact_dir,
      6     save_output=True,
      7     project=mlp.project_name,
      8     name=exp_name.value,
      9     model=model.value,
     10     latest=True,
     11 )

File /usr/src/app/kso-dev/kso_utils/project.py:1773, in MLProjectProcessor.detect_yolo(self, project, name, source, save_dir, conf_thres, artifact_dir, model, img_size, save_output, test, latest)
   1761 if latest:
   1762     results = model.predict(
   1763         project=project,
   1764         name=name,
   (...)
   1771         stream=True,
   1772     )
-> 1773     for i in results:
   1774         print(i)
   1775 else:

File /usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py:35, in _wrap_generator.<locals>.generator_context(*args, **kwargs)
     32 try:
     33     # Issuing `None` to a generator fires it up
     34     with ctx_factory():
---> 35         response = gen.send(None)
     37     while True:
     38         try:
     39             # Forward the response to our caller and get its next request

File /usr/local/lib/python3.8/dist-packages/ultralytics/engine/predictor.py:235, in BasePredictor.stream_inference(self, source, model, *args, **kwargs)
    232     self.setup_model(model)
    234 # Setup source every time predict is called
--> 235 self.setup_source(source if source is not None else self.args.source)
    237 # Check if save_dir/ label file exists
    238 if self.args.save or self.args.save_txt:

File /usr/local/lib/python3.8/dist-packages/ultralytics/engine/predictor.py:213, in BasePredictor.setup_source(self, source)
    210 self.imgsz = check_imgsz(self.args.imgsz, stride=self.model.stride, min_dim=2)  # check image size
    211 self.transforms = getattr(self.model.model, 'transforms', classify_transforms(
    212     self.imgsz[0])) if self.args.task == 'classify' else None
--> 213 self.dataset = load_inference_source(source=source,
    214                                      imgsz=self.imgsz,
    215                                      vid_stride=self.args.vid_stride,
    216                                      buffer=self.args.stream_buffer)
    217 self.source_type = self.dataset.source_type
    218 if not getattr(self, 'stream', True) and (self.dataset.mode == 'stream' or  # streams
    219                                           len(self.dataset) > 1000 or  # images
    220                                           any(getattr(self.dataset, 'video_flag', [False]))):  # videos

File /usr/local/lib/python3.8/dist-packages/ultralytics/data/build.py:157, in load_inference_source(source, imgsz, vid_stride, buffer)
    144 def load_inference_source(source=None, imgsz=640, vid_stride=1, buffer=False):
    145     """
    146     Loads an inference source for object detection and applies necessary transformations.
    147 
   (...)
    155         dataset (Dataset): A dataset object for the specified input source.
    156     """
--> 157     source, webcam, screenshot, from_img, in_memory, tensor = check_source(source)
    158     source_type = source.source_type if in_memory else SourceTypes(webcam, screenshot, from_img, tensor)
    160     # Dataloader

File /usr/local/lib/python3.8/dist-packages/ultralytics/data/build.py:132, in check_source(source)
    130     in_memory = True
    131 elif isinstance(source, (list, tuple)):
--> 132     source = autocast_list(source)  # convert all list elements to PIL or np arrays
    133     from_img = True
    134 elif isinstance(source, (Image.Image, np.ndarray)):

File /usr/local/lib/python3.8/dist-packages/ultralytics/data/loaders.py:483, in autocast_list(source)
    481 for im in source:
    482     if isinstance(im, (str, Path)):  # filename or uri
--> 483         files.append(Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im))
    484     elif isinstance(im, (Image.Image, np.ndarray)):  # PIL or np Image
    485         files.append(im)

File /usr/local/lib/python3.8/dist-packages/PIL/Image.py:3305, in open(fp, mode, formats)
   3303     warnings.warn(message)
   3304 msg = "cannot identify image file %r" % (filename if filename else fp)
-> 3305 raise UnidentifiedImageError(msg)

UnidentifiedImageError: cannot identify image file '/mimer/NOBACKUP/groups/snic2021-6-9/project_movies/movies_GU/concatenated/230921_RORT_04.MP4'
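From the traceback, the error is raised inside ultralytics' autocast_list, which is only reached when source is a list or tuple; each list element is then opened with PIL as if it were an image, so an .MP4 path fails. Below is a minimal sketch of a possible workaround, assuming the standard ultralytics predict API and a placeholder weights path (not the project's actual configuration): passing the movie path as a single string lets ultralytics route it through its video loader instead.

from ultralytics import YOLO

# Hypothetical sketch, not the kso_utils code path: load the trained weights
# (placeholder filename) and pass the movie as a single string, so ultralytics
# uses its video loader rather than autocast_list, which PIL-opens list items.
model = YOLO("GU_baitbox_newmodel1.pt")  # placeholder weights path
results = model.predict(
    source="/mimer/NOBACKUP/groups/snic2021-6-9/project_movies/movies_GU/concatenated/230921_RORT_04.MP4",
    conf=0.5,      # example confidence threshold
    stream=True,   # yield results frame by frame instead of accumulating them
    save=True,     # save annotated output
)
for r in results:  # iterating the generator is what actually runs inference
    print(r)

If pp.movies_paths has to remain a list, another option along the same lines would be to loop over the paths and call predict on each string individually.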

Expected behavior

The detection model should run on the uploaded movie (230921_RORT_04.MP4) and save the detection output, rather than raising an error.

Environment

KSO dev environment (/usr/src/app/kso-dev), Python 3.8, ultralytics YOLO (as shown in the traceback paths).

Additional context

P.S. There also seems to be something wrong with Tutorial 6: I can't find where to choose footage; there is only a box with

### choose a confidence threshold for the evaluation