Using transformers (version 4.43.2), this code:
import os
from transformers import AutoImageProcessor, AutoModel
from huggingface_hub import login

# Authenticate against the Hugging Face Hub with a token from the environment
huggingface_token = os.getenv("HUGGINGFACE_TOKEN")
login(huggingface_token)

# Load the Hibou-B image processor and model, allowing the repo's custom code
processor = AutoImageProcessor.from_pretrained("histai/hibou-b", trust_remote_code=True)
hf_model = AutoModel.from_pretrained("histai/hibou-b", trust_remote_code=True)
throws this error:
ValueError: Unrecognized image processor in histai/hibou-b. Should have a image_processor_type key in its preprocessor_config.json of config.json, or one of the following model_type keys in its config.json: align, beit, bit, blip, blip-2, bridgetower, chameleon, chinese_clip, clip, clipseg, conditional_detr, convnext, convnextv2, cvt, data2vec-vision, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, donut-swin, dpt, efficientformer, efficientnet, flava, focalnet, fuyu, git, glpn, grounding-dino, groupvit, hiera, idefics, idefics2, imagegpt, instructblip, instructblipvideo, kosmos-2, layoutlmv2, layoutlmv3, levit, llava, llava-next-video, llava_next, mask2former, maskformer, mgp-str, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, nat, nougat, oneformer, owlv2, owlvit, perceiver, pix2struct, poolformer, pvt, pvt_v2, regnet, resnet, rt_detr, sam, segformer, seggpt, siglip, swiftformer, swin, swin2sr, swinv2, table-transformer, timesformer, tvlt, tvp, udop, upernet, van, videomae, vilt, vipllava, vit, vit_hybrid, vit_mae, vit_msn, vitmatte, xclip, yolos, zoedepth
The same issue arises with the hibou-L model. I have also tried passing 'token=True' and modifying my local preprocessor_config.json to set "image_processor_type": "bit", with no success.
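For reference, those attempts looked roughly like the sketch below; the local path is only a placeholder for wherever my edited snapshot of the repo lives.

# Attempt 1: pass token=True so the Hub is queried with my credentials
processor = AutoImageProcessor.from_pretrained("histai/hibou-b", token=True, trust_remote_code=True)

# Attempt 2: load from a local snapshot whose preprocessor_config.json was edited
# to contain "image_processor_type": "bit" (placeholder path)
processor = AutoImageProcessor.from_pretrained("/path/to/local/hibou-b", trust_remote_code=True)

# Neither variant resolved the error.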