Closed pratikchhapolika closed 1 year ago
Hi @pratikchhapolika, thanks for raising an issue!
This is a question best placed in our forums. We try to reserve the github issues for feature requests and bug reports.
Hi @amyeroberts , Since I did not get any response in forums so thought to ask here.
@pratikchhapolika I understand, however the github issues are still reserved for feature requests and bugs as it's not sustainable for everyone to ask here if there isn't a response on the forum.
Another place to ask for help with questions such as these is our Discord server. Specifically, there's an ask-for-help channel which is very active.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I am following this page for prompt tuning the Dolly-v2-7b model for question answering: https://huggingface.co/docs/peft/task_guides/clm-prompt-tuning. Instead of doing the training the old PyTorch way, I am doing the training with the `Trainer` API. Also, in this link, https://huggingface.co/stevhliu/bloomz-560m_PROMPT_TUNING_CAUSAL_LM/tree/main, I see 2 files: `adapter_config.json` and `adapter_model.bin`. But when I save the model using the `Trainer` API I do not see any config file, and the model size is bigger than what is shown in the link above.

Is this the correct way to train, save, and load a model for prompt tuning? Inference also takes a long time to generate and gives some gibberish output.
Who can help?
@stevhliu @sgugger @lvwerra
Here is my code. The use-case is: I have a `Context` with a lot of paragraphs and then a `Question`; the model has to answer the `Question` based on the `Context` in a professional manner. Also, can it classify the `Question` as relevant if the answer is present in the `Context`, and irrelevant if the answer is not in the `Context`?
The code that I have written is:
```python
model = get_peft_model(model, peft_config)

# Define a function to map examples to inputs and targets
tokenized_train_data = [preprocess_function(example) for example in train_data]
dataset = DemoDataset(tokenized_train_data)
```
Is this the correct way to save?

```python
trainer.save_model("dolly3b_demo_model")
```
Inference
Is this the correct way to do inference?