nicknochnack / ActionDetectionforSignLanguage

A practical implementation of sign language estimation using an LSTM NN built on TF Keras.

I'm getting an error while running the data collection feed #26

Closed utkx2 closed 1 year ago

utkx2 commented 1 year ago

Code:

```python
cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:

    # NEW LOOP
    # Loop through actions
    for action in actions:
        # Loop through sequences aka videos
        for sequence in range(start_folder, start_folder+no_sequences):
            # Loop through video length aka sequence length
            for frame_num in range(sequence_length):

                # Read feed
                ret, frame = cap.read()
                cv2.startWindowThread()

                # Make detections
                image, results = mediapipe_detection(frame, holistic)

                # Draw landmarks
                draw_styled_landmarks(image, results)

                # NEW Apply wait logic
                if frame_num == 0: 
                    cv2.putText(image, 'STARTING COLLECTION', (120,200), 
                               cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255, 0), 4, cv2.LINE_AA)
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence), (15,12), 
                               cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV data collection Feed', image)
                    cv2.waitKey(1)

                else: 
                    cv2.putText(image, 'Collecting frames for {} Video Number {}'.format(action, sequence), (15,12), 
                               cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
                    # Show to screen
                    cv2.imshow('OpenCV data collection Feed', image)

                # NEW Export keypoints
                keypoints = extract_keypoints(results)
                npy_path = os.path.join(DATA_PATH, action, str(sequence), str(frame_num))
                np.save(npy_path, keypoints)

                # break
                if cv2.waitKey(10) & 0xFF == ord('q'):
                    break

    cap.release()
    cv2.destroyAllWindows()
    cv2.waitKey(1)
```

Error:

```
FileNotFoundError                         Traceback (most recent call last)
Cell In[58], line 43
     41 keypoints = extract_keypoints(results)
     42 npy_path = os.path.join(DATA_PATH, action, str(sequence), str(frame_num))
---> 43 np.save(npy_path, keypoints)
     45 # break
     46 if cv2.waitKey(10) & 0xFF == ord('q'):

File <__array_function__ internals>:180, in save(*args, **kwargs)

File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/lib/npyio.py:518, in save(file, arr, allow_pickle, fix_imports)
    516     if not file.endswith('.npy'):
    517         file = file + '.npy'
--> 518     file_ctx = open(file, "wb")
    520 with file_ctx as fid:
    521     arr = np.asanyarray(arr)

FileNotFoundError: [Errno 2] No such file or directory: '/Users/utkx2/Desktop/python/MLDL/Projects/Human-Action-Recognition/MP_Data/hello/31/0.npy'
```

What should I do? Even my MP_Data folder has been created.
mylifenetwork commented 1 year ago

I'm getting the same error. How did you fix it?

aaronse commented 8 months ago

Notice "/31/" in the file path. This looks like something others ran into as well. e.g. https://github.com/nicknochnack/ActionDetectionforSignLanguage/issues/14

The cause is that start_folder didn't exist in the original video; Nicholas presumably added it later during an additional capture/training session.

The fix is to add start_folder = 0 to the "Setup Folders for Collection" cell, e.g.:

```python
# Path for exported data, numpy arrays
DATA_PATH = os.path.join('.', 'MP_Data') 

# Actions that we try to detect
actions = np.array(['l', 'r', 'rotate'])

# Thirty videos worth of data
no_sequences = 30

# Videos are going to be 30 frames in length
sequence_length = 30

# Folder start
start_folder = 0
```
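
Setting start_folder = 0 only fixes the loop range; np.save still needs the per-sequence directories to exist, so make sure the setup cell also creates a folder for every sequence index the collection loop will write to. Below is a minimal sketch of that step, reusing the variable names from the snippet above and using os.makedirs(..., exist_ok=True); the notebook's own cell may wrap os.makedirs in a try/except instead, so adapt as needed:

```python
import os
import numpy as np

# Same values as in the setup cell above
DATA_PATH = os.path.join('.', 'MP_Data')
actions = np.array(['l', 'r', 'rotate'])
no_sequences = 30
start_folder = 0

# Create MP_Data/<action>/<sequence> for every sequence index the collection
# loop will iterate over (start_folder .. start_folder + no_sequences - 1),
# so np.save never hits a missing directory.
for action in actions:
    for sequence in range(start_folder, start_folder + no_sequences):
        os.makedirs(os.path.join(DATA_PATH, str(action), str(sequence)), exist_ok=True)
```

The important part is that this folder range matches the range(start_folder, start_folder + no_sequences) used in the collection loop; with start_folder = 0 the loop writes sequences 0-29 and no longer tries to save into a missing directory like .../MP_Data/hello/31/.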