GoogleCloudPlatform / python-docs-samples

Code samples used on cloud.google.com
Apache License 2.0

Error API Cloud Video Intelligence - Requested model could not be loaded #10733

Open AgostinoA opened 11 months ago

AgostinoA commented 11 months ago

In which file did you encounter the issue?

import io

from google.cloud import videointelligence_v1p3beta1 as videointelligence

# TODO(developer): set these values before running the sample.
path = "path_to_file"
project_id = "gcp_project_id"
model_id = "automl_classification_model_id"

client = videointelligence.StreamingVideoIntelligenceServiceClient()

model_path = "projects/{}/locations/us-central1/models/{}".format(
    project_id, model_id
)

# Here we use classification as an example.
automl_config = videointelligence.StreamingAutomlClassificationConfig(
    model_name=model_path
)

video_config = videointelligence.StreamingVideoConfig(
    feature=videointelligence.StreamingFeature.STREAMING_AUTOML_CLASSIFICATION,
    automl_classification_config=automl_config,
)

# config_request should be the first in the stream of requests.
config_request = videointelligence.StreamingAnnotateVideoRequest(
    video_config=video_config
)

# Set the chunk size to 5MB (recommended less than 10MB).
chunk_size = 5 * 1024 * 1024

# Load file content.
# Note: Input videos must have supported video codecs. See
# https://cloud.google.com/video-intelligence/docs/streaming/streaming#supported_video_codecs
# for more details.
stream = []
with open(path, "rb") as video_file:
    while True:
        data = video_file.read(chunk_size)
        if not data:
            break
        stream.append(data)

def stream_generator():
    yield config_request
    for chunk in stream:
        yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk)

requests = stream_generator()

# streaming_annotate_video returns a generator.
# The default timeout is about 300 seconds.
# To process longer videos it should be set to
# larger than the length (in seconds) of the stream.
responses = client.streaming_annotate_video(requests, timeout=600)

for response in responses:
    # Check for errors.
    if response.error.message:
        print(response.error.message)
        break

    for label in response.annotation_results.label_annotations:
        for frame in label.frames:
            print(
                "At {:3d}s segment, {:5.1%} {}".format(
                    frame.time_offset.seconds,
                    frame.confidence,
                    label.entity.entity_id,
                )
            )
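As an aside, the file-reading loop in the sample can be written as a generator, which avoids building the full `stream` list in memory before streaming begins. A minimal sketch of the same chunked-read pattern (the `read_chunks` helper is hypothetical, not part of the client library or the official sample):

```python
import io

# Hypothetical helper: read a binary file object in fixed-size chunks,
# the same pattern the sample uses for the video file. Yielding chunks
# lazily avoids holding the entire video in memory at once.
def read_chunks(file_obj, chunk_size=5 * 1024 * 1024):
    while True:
        data = file_obj.read(chunk_size)
        if not data:
            break
        yield data

# Works with any binary file object, e.g. an in-memory stream:
fake_video = io.BytesIO(b"\x00" * (12 * 1024))
chunks = list(read_chunks(fake_video, chunk_size=5 * 1024))
# 12 KiB split into 5 KiB chunks -> sizes 5120, 5120, 2048.
print([len(c) for c in chunks])
```

The chunks could then be fed to `stream_generator` directly instead of being collected into a list first.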

Describe the issue

https://cloud.google.com/video-intelligence/docs/streaming/video-classification

I am running a series of tests on AutoML, using the code provided in the official documentation, and I am encountering an issue. After training the model in Vertex AI, completing the process, and retrieving the model ID, the Python API consistently returns 'Requested model could not be loaded.' Honestly, the demos are poorly written, and the exceptions raised by the code are very implicit. Some demos don't work, and you have to make corrections in certain places to get them to function properly.

python3 ./test.py
"Requested model could not be loaded."

The model was trained specifically for Video Intelligence in the Vertex AI section and, even so, did not function correctly. The documentation does not make clear whether the model necessarily requires an endpoint. Both the code and the documentation have comprehension issues.

[Screenshot: Screen401]

leahecole commented 11 months ago

hey @yil532 ! hoping you can triage this to the right person on your team

AgostinoA commented 11 months ago

@yil532 Guys, is there any news?

AgostinoA commented 10 months ago

@yil532 Guys, is there any news? We need to solve this problem for a corporate job. If you have other, faster means of communication, let us know; we are still paying for a service on Google Vertex AI that doesn't work properly.

@leahecole Could you kindly follow up?

feboz commented 10 months ago

Hi, any news about this issue?

AgostinoA commented 10 months ago

@leahecole Excuse me, I saw that @yil532 left the thread, but can you give us support on this issue? I represent a company that pays for your service, and we cannot get support for an issue. We urgently need to solve this. Thank you.

m-strzelczyk commented 7 months ago

@AgostinoA What model_id are you using? Did you verify that

model_path = "projects/{}/locations/us-central1/models/{}".format(
    project_id, model_id
)

sets model_path to point at a model that exists and is accessible to the user or service account executing the script?
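A quick sanity check on the resource name can rule out formatting mistakes before calling the API. A minimal sketch, assuming the `projects/{project}/locations/{location}/models/{model}` format used in the sample above (the helper names here are hypothetical, not part of the client library):

```python
import re

# Hypothetical helper: build the model resource name the same way the
# sample does, with us-central1 as the default location.
def build_model_path(project_id, model_id, location="us-central1"):
    return "projects/{}/locations/{}/models/{}".format(
        project_id, location, model_id
    )

# Hypothetical check: the path must have exactly this shape; a bare
# model ID or a display name will not work.
_MODEL_PATH_RE = re.compile(r"^projects/[^/]+/locations/[^/]+/models/[^/]+$")

def looks_like_model_path(path):
    return bool(_MODEL_PATH_RE.match(path))

path = build_model_path("my-project", "1234567890")
print(path)  # projects/my-project/locations/us-central1/models/1234567890
print(looks_like_model_path(path))  # True
print(looks_like_model_path("1234567890"))  # False
```

A well-formed path alone does not guarantee the model can be loaded; whether it actually exists and is visible to the executing credentials would still need to be confirmed separately (for example via the AutoML GetModel call or the Cloud Console).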