ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Device busy even though only one camera is in use when running YOLOv5 segmentation on Ubuntu 20.04 #12351

Closed: Anthonia2001 closed this issue 8 months ago

Anthonia2001 commented 10 months ago

Search before asking

YOLOv5 Component

No response

Bug

Hi! I am an Ubuntu user running YOLOv5 from a ROS node. At first, I used the yolov5s detection code to detect objects and get their x, y, z coordinates from an Intel RealSense D435 camera. That program ran without any errors.

I then switched to yolov5s-seg for the segmentation task. The stock predict.py still works, but I modified it into the code below:

import argparse
import os
import platform
import sys
from pathlib import Path
from PIL import ImageDraw, ImageFont
import torch
import numpy as np
import time
import pyrealsense2 as rs
import math

FILE = Path(__file__).resolve()
ROOT = FILE.parents[1]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from ultralytics.utils.plotting import Annotator, colors, save_one_box

from models.common import DetectMultiBackend
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
                           increment_path, non_max_suppression, print_args, scale_boxes, scale_segments,
                           strip_optimizer)
from utils.segment.general import masks2segments, process_mask, process_mask_native
from utils.torch_utils import select_device, smart_inference_mode

from math import atan2, cos, sin, sqrt, pi, asin, acos

def drawAxis(img, p_, q_, colour, scale):
    p = list(p_)
    q = list(q_)

    angle = atan2(p[1] - q[1], p[0] - q[0])  # angle in radians
    hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0]))
    # Here we lengthen the arrow by a factor of scale
    q[0] = p[0] - scale * hypotenuse * cos(angle)
    q[1] = p[1] - scale * hypotenuse * sin(angle)
    cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA)
    # create the arrow hooks
    p[0] = q[0] + 9 * cos(angle + pi / 4)
    p[1] = q[1] + 9 * sin(angle + pi / 4)
    cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA)
    p[0] = q[0] + 9 * cos(angle - pi / 4)
    p[1] = q[1] + 9 * sin(angle - pi / 4)
    cv2.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), colour, 1, cv2.LINE_AA)

def get_frame_stream():
    cfg = rs.config() #ok
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 60) #ok
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 60) #ok

    pipe = rs.pipeline()
    profile = pipe.start(cfg)

    #pipe.start(cfg)
    align_to = rs.stream.color
    align = rs.align(align_to)

    frames = pipe.wait_for_frames()
    aligned_frames = align.process(frames)
    depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()
    depth_frame.get_height()

    if not depth_frame or not color_frame:
        # If there is no frame, the camera is probably not connected
        print(
            "Error, impossible to get the frame, make sure that the Intel RealSense camera is correctly connected")
        return False, None, None, None, None, None, None

    color_intrin = color_frame.profile.as_video_stream_profile().intrinsics

    # Apply a spatial filter to fill the holes in the depth image
    spatial = rs.spatial_filter()
    spatial.set_option(rs.option.holes_fill, 3)
    filtered_depth = spatial.process(depth_frame)

    hole_filling = rs.hole_filling_filter()
    filled_depth = hole_filling.process(filtered_depth)

    # Create colormap to show the depth of the Objects
    colorizer = rs.colorizer()
    depth_colormap = np.asanyarray(colorizer.colorize(filled_depth).get_data())

    # Convert images to numpy arrays
    # distance = depth_frame.get_distance(int(50),int(50))
    # print("distance", distance)
    depth_image = np.asanyarray(filled_depth.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # cv2.imshow("Colormap", depth_colormap)
    # cv2.imshow("depth img", depth_image)

    return True, frames, color_image, depth_image, color_frame, depth_frame, color_intrin

def run(
    weights=ROOT / 'yolov5s-seg.pt',  # model.pt path(s)
    source=ROOT / 'data/images',  # file/dir/URL/glob/screen/0(webcam)
    data=ROOT / 'data/coco128-seg.yaml',  # dataset.yaml path
    imgsz=(640, 640),  # inference size (height, width)
    conf_thres=0.25,  # confidence threshold
    iou_thres=0.45,  # NMS IOU threshold
    max_det=1000,  # maximum detections per image
    device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
    view_img=False,  # show results
    save_txt=False,  # save results to *.txt
    save_conf=False,  # save confidences in --save-txt labels
    save_crop=False,  # save cropped prediction boxes
    nosave=False,  # do not save images/videos
    classes=None,  # filter by class: --class 0, or --class 0 2 3
    agnostic_nms=False,  # class-agnostic NMS
    augment=False,  # augmented inference
    visualize=False,  # visualize features
    update=False,  # update all models
    project=ROOT / 'runs/predict-seg',  # save results to project/name
    name='exp',  # save results to project/name
    exist_ok=False,  # existing project/name ok, do not increment
    line_thickness=3,  # bounding box thickness (pixels)
    hide_labels=False,  # hide labels
    hide_conf=False,  # hide confidences
    half=False,  # use FP16 half-precision inference
    dnn=False,  # use OpenCV DNN for ONNX inference
    vid_stride=1,  # video frame-rate stride
    retina_masks=False,

):
    source = str(source)
    save_img = not nosave and not source.endswith('.txt')  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
    webcam = source.isnumeric() or source.endswith('.streams') or (is_url and not is_file)
    screenshot = source.lower().startswith('screen')
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
    stride, names, pt = model.stride, model.names, model.pt
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Dataloader
    bs = 1  # batch_size
    if webcam:
        view_img = check_imshow(warn=True)
        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
        bs = len(dataset)
    elif screenshot:
        dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1 if pt else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
    for path, im, im0s, vid_cap, s in dataset:
        with dt[0]:
            frame_img = vid_cap
            im = torch.from_numpy(im).to(model.device)
            im = im.half() if model.fp16 else im.float()  # uint8 to fp16/32
            imnew = im.half() if model.fp16 else im.int()  # uint8 to fp16/32
            im /= 255  # 0 - 255 to 0.0 - 1.0
            if len(im.shape) == 3:
                im = im[None]  # expand for batch dim

        # Inference
        with dt[1]:
            visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
            pred, proto = model(im, augment=augment, visualize=visualize)[:2]

        # NMS
        with dt[2]:
            pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det, nm=32)

        # Second-stage classifier (optional)
        # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)

        # Process predictions
        for i, det in enumerate(pred):  # per image
            seen += 1
            p, im0, frame = path[i], im0s[i].copy(), dataset.count
            s += f'{i}: '

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt
            s += '%gx%g ' % im.shape[2:]  # print string
            annotator = Annotator(im0, line_width=line_thickness, example=str(names))
            if len(det):
                masks = process_mask(proto[i], det[:, 6:], det[:, :4], im.shape[2:], upsample=True)  # HWC
                #print(masks)
                det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()  # rescale boxes to im0 size

                masks2 = process_mask(proto[i], det[:, 6:], det[:, :4], imnew.shape[2:], upsample=True)  # HWC
                #print(masks)
                det[:, :4] = scale_boxes(imnew.shape[2:], det[:, :4], im0.shape).round()  # rescale boxes to im0 size

                segments2 = [
                    scale_segments(im0.shape if retina_masks else imnew.shape[2:], x, im0.shape, normalize=False)
                    for x in reversed(masks2segments(masks))]
                #NumberOfElements = sizeof(arr) / sizeof(arr[0]);
                #segments2_conver = np.array(segments2, dtype=np.int32)
                #print(segments2_conver)

                segments = [
                    scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True)
                    for x in reversed(masks2segments(masks))]
                #print(segments)

                # Print results
                for c in det[:, 5].unique():
                    n = (det[:, 5] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string
                    #print(n)
                # Mask plotting
                annotator.masks(
                    masks,
                    colors=[colors(x, True) for x in det[:, 5]],
                    im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() /
                    255 if retina_masks else im[i])

                # NOTE: get_frame_stream() configures and starts a brand-new rs.pipeline() on every
                # detection and never calls pipe.stop(), so the RealSense device is re-opened repeatedly
                ret, new_frame, bgr_image, depth_image, bgr_frame, depth_frame, color_intrin = get_frame_stream()
                # Write results
                for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
                    seg = segments[j].reshape(-1)  # (n,2) to (n*2)
                    #print(segments[j])
                    line = (cls, *seg, conf) if save_conf else (cls, *seg)  # label format
                    #with open(f'{txt_path}.txt', 'a') as f:
                        #f.write(('%g ' * len(line)).rstrip() % line + '\n')
                    c = int(cls)  # integer class
                    label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')

                    annotator.box_label(xyxy, label, color=colors(c, True))
                    box = xyxy
                    centerx, centery = int((int(box[0]) + int(box[2])) / 2), int((int(box[1]) + int(box[3])) / 2)
                    center_point = centerx, centery
                    #distance_mm = depth_image[centery, centerx]
                    #print(center_point)
                    #ImageDraw.ImageDraw.polygon(, segments2[j], outline=colors(c, True), width=3)
                    #cv2.circle(vid_cap,(segments2[j]), radius=1, color=(0,0,255), thickness=1)
                    im0 = annotator.result()
                    seg_point = np.array(segments2[j], dtype=np.int32)
                    #cv2.putText(im0, "{} mm".format(distance_mm), (centerx, centery - 10), 0, 0.5, (0, 0, 255), 2)
                    #center_point_XYZ = rs.rs2_deproject_pixel_to_point(color_intrin, center_point, distance_mm)
                    #print(np.array(center_point_XYZ).astype(int))
                    print(center_point)
                    cv2.polylines(im0, [np.array(segments2[j], dtype=np.int32)], isClosed=True, color=(0,0,0), thickness=3)

                    "So this is the list of segmentation point segments2[j] "
                    #print([np.array(segments2[j], dtype=np.int32)])

                    "So we will move on to do the PCA for detecting vector X,Y for and object"
                    ""
                    # Perform PCA analysis
                    mean = np.empty((0))
                    mean, eigenvectors, eigenvalues = cv2.PCACompute2(np.array(segments2[j], dtype=np.float64), mean)
                    # Store the center of the object
                    cntr = (int(mean[0, 0]), int(mean[0, 1]))

                    # Draw the principal components
                    cv2.circle(im0, cntr, 3, (255, 0, 255), 2)
                    p1 = (cntr[0] + 0.02 * eigenvectors[0, 0] * eigenvalues[0, 0],
                          cntr[1] + 0.02 * eigenvectors[0, 1] * eigenvalues[0, 0])
                    p2 = (cntr[0] - 0.02 * eigenvectors[1, 0] * eigenvalues[1, 0],
                          cntr[1] - 0.02 * eigenvectors[1, 1] * eigenvalues[1, 0])
                    drawAxis(im0, cntr, p1, (0, 255, 0), 1)
                    drawAxis(im0, cntr, p2, (255, 255, 0), 5)
                    angle = atan2(eigenvectors[0, 1], eigenvectors[0, 0])  # orientation in radians
                    print(angle)

                    "So in This part we will try to extract the maximum, minimum of coordinates x,y"
                    "Remember that the highest point Y have the lowest value"
                    "Remember that the furthest point X have the highest value"

                    min_xy = np.min(seg_point, axis=0)
                    min_x , min_y = int(min_xy[0]), int(min_xy[1])

                    #index_ymin = np.where(seg_point == min_y)[0]
                    #list_ymin = index_ymin.tolist()
                    #cv2.circle(im0, index_ymin, 3, (255, 0, 0))
                    #index_xmin = np.where(seg_point == min_x)[0]
                    #cv2.circle(im0, index_xmin, 3, (255, 0, 0))

                    max_xy = np.max(seg_point, axis=0)
                    max_x, max_y = int(max_xy[0]), int(max_xy[1])

                    index_ymax = np.where(seg_point == max_y)[0]
                    index_xmax = np.where(seg_point == max_x)[0]
                    #cv2.circle(im0, index_ymax, 3, (255, 0, 0))
                    #cv2.circle(im0, index_xmax, 3, (255, 0, 0))
                    #print(max)

                    center_point_updated = int((max_x+min_x)/2), int((max_y+min_y)/2)
                    cv2.circle(im0, center_point_updated, 3, (255, 0, 0))

                    distance_1 = depth_image[centery, min_x+5]
                    distance_2 = depth_image[centery, max_x-5]
                    distance_3 = depth_image[min_y+5, centerx]
                    distance_4 = depth_image[max_y-5, centerx]
                    distance_mm = depth_image[centery, centerx]

                    point1 = rs.rs2_deproject_pixel_to_point(color_intrin, [min_x+5, centery], distance_1)
                    point2 = rs.rs2_deproject_pixel_to_point(color_intrin, [max_x-5, centery], distance_2)
                    point3 = rs.rs2_deproject_pixel_to_point(color_intrin, [centerx, min_y+5], distance_3)
                    point4 = rs.rs2_deproject_pixel_to_point(color_intrin, [centerx, max_y-5], distance_4)
                    cnt_point = rs.rs2_deproject_pixel_to_point(color_intrin, [centerx, centery], distance_mm)

                    wide_0 = int(90)
                    high_0 = int(140)

                    wide = int(math.sqrt(math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2) +
                                         math.pow(point1[2] - point2[2], 2)))
                    high = int(math.sqrt(math.pow(point3[0] - point4[0], 2) + math.pow(point3[1] - point4[1], 2) +
                                         math.pow(point3[2] - point4[2], 2)))
                    if wide < high:
                        cv2.putText(im0, "{} mm".format(wide), (centerx, centery + 10), 0, 0.5, (0, 0, 255), 2)
                        cv2.putText(im0, "{} mm".format(high), (centerx, centery + 30), 0, 0.5, (0, 0, 255), 2)
                    else:
                        cv2.putText(im0, "{} mm".format(wide), (centerx, centery + 10), 0, 0.5, (0, 0, 255), 2)
                        cv2.putText(im0, "{} mm".format(high), (centerx, centery + 30), 0, 0.5, (0, 0, 255), 2)

                    if j == 5:
                        if wide < high:
                            wide_0 = wide
                            high_0 = high
                    cv2.circle(im0, (min_x, centery), 3, (255, 0, 0))
                    cv2.circle(im0, (max_x, centery), 3, (255, 0, 0))
                    cv2.circle(im0, (centerx, min_y), 3, (255, 0, 0))
                    cv2.circle(im0, (centerx, max_y), 3, (255, 0, 0))

                    """
                    Calculate the angle of the object relative to the camera
                    """
                    """
                    roll = round(asin(wide/wide_0)*180/pi, 1)

                    pitch = round(asin(high/high_0)*180/pi, 1)

                    cv2.putText(im0, "{} degree".format(roll), (centerx, centery + 50), 0, 0.5, (0, 0, 255), 2)
                    cv2.putText(im0, "{} degree".format(pitch), (centerx, centery + 70), 0, 0.5, (0, 0, 255), 2)
                    #print(center_point_updated)
                    """
                    #cv2.circle(im0, max_xy, 3, (255, 0, 0))
                    #annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)

            # Stream results
            if view_img:
                if platform.system() == 'Linux' and p not in windows:
                    windows.append(p)
                    cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
                    cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
                cv2.imshow(str(p), im0)
                if cv2.waitKey(1) == ord('q'):  # 1 millisecond
                    exit()

        # Print time (inference-only)
        LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")

    # Print results
    t = tuple(x.t / seen * 1E3 for x in dt)  # speeds per image
    LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)

def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s-seg.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default=ROOT / 'runs/predict-seg', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
    parser.add_argument('--retina-masks', action='store_true', help='whether to plot masks in native resolution')
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(vars(opt))
    return opt

def main(opt):
    check_requirements(ROOT / 'requirements.txt', exclude=('tensorboard', 'thop'))
    run(**vars(opt))

if __name__ == '__main__':
    opt = parse_opt()
    main(opt)

And it gave me this error:

  File "/home/anthonia/Thesis/src/robot_project/yolov5-master/segment/test_seg.py", line 68, in get_frame_stream
    profile = pipe.start(cfg)
RuntimeError: xioctl(VIDIOC_S_FMT) failed, errno=16 Last Error: Device or resource busy
terminate called without an active exception
Aborted (core dumped)

The camera image window still appeared, but as soon as objects were detected the program crashed and reported that the device was busy, even though I was not using the RealSense in any other application. Is this a threading problem or an Ubuntu-specific issue? When I run the same code on Windows, it works.

Environment

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

github-actions[bot] commented 10 months ago

👋 Hello @Anthonia2001, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

glenn-jocher commented 10 months ago

@Anthonia2001 hi there! It seems like the issue you're encountering could be due to the camera resource being already in use. This particular error can sometimes occur if the camera resource is not released properly elsewhere in the application or if another process is holding onto it. Since it works on Windows, it's possible that there's an underlying system difference causing the discrepancy.

To resolve this, you might want to check for any other processes or scripts that might be using the camera resource and ensure that they release it properly. You could also try restarting the system to ensure that the camera resource is freed up.
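
For example, a quick way to confirm that librealsense can still see the D435 and to force it to drop any half-open streams is a hardware reset. This is only a minimal sketch using standard pyrealsense2 calls (rs.context, query_devices, hardware_reset); running it before launching the script is one way to make sure the device starts from a clean state:

import pyrealsense2 as rs

# Enumerate every RealSense device the librealsense context can see
ctx = rs.context()
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    serial = dev.get_info(rs.camera_info.serial_number)
    print(f"Resetting {name} (S/N {serial})")
    dev.hardware_reset()  # power-cycles the camera, which usually frees a device stuck in a busy state

If nothing is printed at all, another process (or a previous crashed run) may still be holding the camera.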

If the issue persists, you could look into any specific device locking mechanisms for the Intel Realsense camera on Ubuntu. It's also worth checking if there are any specific device permissions that need to be set for accessing the camera on Ubuntu.

I hope this helps! Let me know if you have any other questions or if there's anything else I can assist you with.
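
One more observation on the script itself: get_frame_stream() configures and starts a new rs.pipeline() each time it is called and never calls pipe.stop(), so pipe.start(cfg) can easily find the camera already held, either by a previous call or by the OpenCV-based dataloader when the same camera is passed as --source. The usual pattern is to start the pipeline once before the inference loop and release it in a finally block. Here is a minimal sketch of that structure (the helper names open_realsense and grab_aligned are just for illustration, not a drop-in replacement for the posted code):

import numpy as np
import pyrealsense2 as rs

def open_realsense(width=640, height=480, fps=60):
    # Configure and start the pipeline ONCE, before the inference loop
    cfg = rs.config()
    cfg.enable_stream(rs.stream.color, width, height, rs.format.bgr8, fps)
    cfg.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    pipe = rs.pipeline()
    pipe.start(cfg)
    align = rs.align(rs.stream.color)
    return pipe, align

def grab_aligned(pipe, align):
    # Reuse the already-started pipeline for every frame
    frames = align.process(pipe.wait_for_frames())
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    if not depth_frame or not color_frame:
        return None, None, None
    intrin = color_frame.profile.as_video_stream_profile().intrinsics
    return np.asanyarray(color_frame.get_data()), np.asanyarray(depth_frame.get_data()), intrin

pipe, align = open_realsense()
try:
    while True:
        color_image, depth_image, color_intrin = grab_aligned(pipe, align)
        if color_image is None:
            break
        # ... run YOLOv5-seg inference and the depth/PCA post-processing here ...
finally:
    pipe.stop()  # release the device so the next start() does not fail with "Device or resource busy"

This way the camera is opened exactly once per run and always released, even if inference raises an exception.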

github-actions[bot] commented 9 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐