louaaron / Score-Entropy-Discrete-Diffusion

[ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834)
https://aaronlou.com/blog/2024/discrete-diffusion/

Unable to download weights #4

Closed: arpitbansal297 closed this issue 5 months ago

arpitbansal297 commented 5 months ago

Hi! Thanks a lot for this amazing work. I am trying to download the weights from Hugging Face (https://huggingface.co/louaaron/sedd-medium) and have been getting an error with the following command.

from transformers import AutoModel
model = AutoModel.from_pretrained("louaaron/sedd-medium")

Error:

Traceback (most recent call last):
  File "/cmlscratch/avi1/optimization-playground/play.py", line 51, in
    model = AutoModelForCausalLM.from_pretrained(args.model_name, **model_args)
  File "/cmlscratch/avi1/.pyenv/versions/llm/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 521, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/cmlscratch/avi1/.pyenv/versions/llm/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1143, in from_pretrained
    raise ValueError(
ValueError: Unrecognized model in louaaron/sedd-medium. Should have a model_type key in its config.json, or contain one of the following strings in its name: albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chinese_clip, chinese_clip_vision_model, clap, clip, clip_vision_model, clipseg, clvp, code_llama, codegen, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, data2vec-audio, data2vec-text, data2vec-vision, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, git, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, graphormer, groupvit, hubert, ibert, idefics, imagegpt, informer, instructblip, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, longformer, longt5, luke, lxmert, m2m_100, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mistral, mixtral, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, mpnet, mpt, mra, mt5, musicgen, mvp, nat, nezha, nllb-moe, nougat, nystromformer, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, pix2struct, plbart, poolformer, pop2piano, prophetnet, pvt, qdqbert, qwen2, rag, realm, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, umt5, unispeech, unispeech-sat, univnet, upernet, van, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso

louaaron commented 5 months ago

This is a custom model, so it can't be loaded with AutoModel. load_model.py in this repo does the loading.
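For concreteness, here is a minimal sketch of what that looks like, assuming load_model.py exposes a load_model(model_path, device) helper that accepts a Hugging Face model id (or a local checkpoint directory) and returns the score model together with its graph and noise schedule; the exact function name and return values are assumptions, so check load_model.py in the repo.

import torch
# Helper shipped with this repo (not part of transformers); assumed interface.
from load_model import load_model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed to accept the HF repo id or a local checkpoint dir and a device,
# and to return (score model, graph, noise schedule).
model, graph, noise = load_model("louaaron/sedd-medium", device)
model.eval()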