When I run python demo/demo_with_text.py --chunk_size 4 --img_path ./example/vipseg/images/12_1mWNahzcsAc --amp --temporal_setting semionline --size 480 --output ./example/output --prompt person.hat.horse, the error below occurs, but the automatic mode works fine.
(deva) shangc@user-SYS-4029GP-TRT:~/Tracking-Anything-with-DEVA$ python demo/demo_with_text.py --chunk_size 4 --img_path ./example/vipseg/images/12_1mWNahzcsAc --amp --temporal_setting semionline --size 480 --output ./example/output --prompt person.hat.horse
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
final text_encoder_type: bert-base-uncased
Traceback (most recent call last):
File "/home/shangc/Tracking-Anything-with-DEVA/demo/demo_with_text.py", line 35, in <module>
gd_model, sam_model = get_grounding_dino_model(cfg, 'cuda')
File "/home/shangc/Tracking-Anything-with-DEVA/deva/ext/grounding_dino.py", line 28, in get_grounding_dino_model
gd_model = GroundingDINOModel(model_config_path=GROUNDING_DINO_CONFIG_PATH,
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/util/inference.py", line 119, in __init__
self.model = load_model(
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/util/inference.py", line 32, in load_model
model = build_model(args)
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/models/__init__.py", line 17, in build_model
model = build_func(args)
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py", line 372, in build_groundingdino
model = GroundingDINO(
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py", line 107, in __init__
self.tokenizer = get_tokenlizer.get_tokenlizer(text_encoder_type)
File "/home/shangc/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/util/get_tokenlizer.py", line 17, in get_tokenlizer
tokenizer = AutoTokenizer.from_pretrained(text_encoder_type)
File "/home/shangc/miniconda3/envs/deva/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 481, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/shangc/miniconda3/envs/deva/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 343, in get_tokenizer_config
resolved_config_file = cached_path(
File "/home/shangc/miniconda3/envs/deva/lib/python3.9/site-packages/transformers/file_utils.py", line 1776, in cached_path
output_path = get_from_cache(
File "/home/shangc/miniconda3/envs/deva/lib/python3.9/site-packages/transformers/file_utils.py", line 2000, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
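The final ValueError says transformers could not download the bert-base-uncased tokenizer and found no copy in the local cache. For reference, a minimal sketch (assuming the transformers package from the deva environment and a machine with internet access) that fetches the tokenizer once so later runs can load it from the cache:

```python
# Minimal sketch: download the bert-base-uncased tokenizer once while online.
# transformers caches the files locally (by default under ~/.cache/huggingface),
# so subsequent runs resolve the tokenizer from the cached path instead of
# hitting the network.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("person hat horse"))
```

If the machine running the demo has no internet access at all, the cache directory can be copied over from a machine that does.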