Thanks again for reporting the issue.
I found that the model produces NaN values after the 5th layer.
Apparently this is a known issue when running T5 models in FP16: https://github.com/huggingface/transformers/pull/10956. As far as I understand, the issue is that the model is trained in bfloat16, and numerical issues appear when it is then run in float16 (these are two different types).
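A quick illustration of the range mismatch (not from the thread, just a sketch using PyTorch): bfloat16 keeps roughly the float32 exponent range, while float16 overflows above ~65504, so activations that are fine in bfloat16 become inf (and then NaN) when the model is run in float16.

```python
import torch

# float16 saturates around 6.5e4; bfloat16 covers roughly the float32 range.
print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38

x = torch.tensor(70000.0, dtype=torch.bfloat16)  # representable in bfloat16
print(x.to(torch.float16))  # inf -> downstream ops can produce NaN
```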
We can try the change proposed in that PR, but it does not seem efficient. Another solution would be to support bfloat16 execution.
Maybe you can go one step further and use int8 quantization for this model?
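For reference, a minimal sketch of what that would look like (paths and the model name are just placeholders; the quantization can be baked in at conversion time with --quantization, or requested at load time with compute_type):

```python
# Conversion (shell), optionally storing int8 weights directly:
#   ct2-transformers-converter --model google/flan-t5-xl \
#       --output_dir flan-xl-ct2 --quantization int8
import ctranslate2

# compute_type="int8" can also be selected at load time, even if the model
# was converted without --quantization.
translator = ctranslate2.Translator("flan-xl-ct2", device="cuda", compute_type="int8")
```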
Ok right 👍
Hey there!
I discovered some similar quirks evaluating ctranslate2 with flan-t5.
I compared the outputs of flan-t5-xl and flan-t5-xxl on GPU using float32, int8_float16, float16, and int8.
The results for "Who is Barrack Obama?":
- Flan-T5-XL float32: president of united states
- Flan-T5-XL int8_float16: president of united states
- Flan-T5-XL float16: <pad><pad><pad><pad><pad><pad>
- Flan-T5-XL int8: politician
- Flan-T5-XXL float32: president of united states
- Flan-T5-XXL int8_float16: sanders defeated barkan as president
- Flan-T5-XXL float16: <pad><pad><pad><pad><pad><pad>
- Flan-T5-XXL int8: president of makulin country south africa name or africa name authority of president of south logan city or south log stage south africa name south africa name south africa citizenship south africa name south africa nationality or south africa author south africa name south africa name south leader south name south south africa
Conclusion:
The model loses a lot of precision when running in int8_float16 or int8. The XXL model seems to be more affected, as it generates nonsensical text in int8.
Here is the code being used:
```python
import time

import ctranslate2
import transformers

# Models converted with:
#   ct2-transformers-converter --model google/flan-t5-xl --output_dir flan-xl-c2
#   ct2-transformers-converter --model google/flan-t5-xxl --output_dir flan-xxl-c2
translator = ctranslate2.Translator("flan-xl-c2", device="cuda", compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xxl", truncation=True, truncation_side="left")

prompt = """
Who is Barrack Obama?
Answer: """

while True:
    start_time = time.time()
    input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
    results = translator.translate_batch([input_tokens], beam_size=1, sampling_topk=40, sampling_temperature=0.4, max_input_length=0, max_decoding_length=200, return_scores=False)
    output_tokens = results[0].hypotheses[0]
    output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
    print(output_text)
    print("--- %s seconds ---" % (time.time() - start_time))
```
Update: After investigating, it seems part of the answer is here: https://github.com/huggingface/transformers/pull/22095/commits
It also seems related to https://github.com/huggingface/transformers/issues/20287
Yeah, I get it returning its own input many times. No real answers.
So from reading those issues, it doesn't quantize properly to either fp16 or int8, but supposedly works fine as bf16 or fp32.
Also, it looks like this is the actual underlying issue: https://github.com/triton-inference-server/fastertransformer_backend/issues/95#issuecomment-1542087731
Here are more details: https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429
@guillaumekln This appears to be the necessary patch: https://github.com/larsmennen/transformers/commit/f90b269505a02d5476661ad77cb426e44e7561d3
However, it looks like the patch has already been upstreamed for a while. Any ideas?
Can you help test this development?
pip install --force-reinstall
The model must be converted again with this version.
@guillaumekln Thank you for the quick fix! This appears to solve the problem for me.
We are running the tests as we speak on a FlanT5 XXL.
It seems there is the same problem in fp8. I will try with fp16. Maybe related to this? https://github.com/OpenNMT/CTranslate2/pull/1239/files#diff-c2632813f7cc9ff44bda20479812ebefd15be7f7fe1bd099e553111bcb6750acR171
> It seems there is the same problem in fp8.
You meant int8, right? More specifically did you try int8 or int8_float16?
Sorry about the typo. Int8 indeed.
So here are the tests with Flan-T5 XXL:
pip install ctranslate2-3.13.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
With ct2-transformers-converter --model google/flan-t5-xxl --output_dir flan-xxl-c2
Who is Barrack Obama?:
- int8: us president/barackhopatsumibi/podium/barhamambua00barackoblamesurprize/summonant/218974312
- int8_float16: Error: ValueError: expected storage to be of type float32, but is of type float16
- float16: Error: ValueError: expected storage to be of type float32, but is of type float16
With ct2-transformers-converter --model google/flan-t5-xxl --output_dir flan-xxl-c2 --quantization float16
- int8: president united states pubic servant elect
- int8_float16: Error: ValueError: expected storage to be of type float32, but is of type float16
- float16: Error: ValueError: expected storage to be of type float32, but is of type float16
I may be missing something.
Thanks for the test.
The error should be fixed in this build: https://github.com/OpenNMT/CTranslate2/actions/runs/5068556228
You can check whether it improves the output for "float16", but "int8_float16" will probably behave like "int8". I'm not sure what else is needed to improve the output with 8-bit quantization.
Results are looking good. I also experienced garbled output previously with CTranslate2 and the Flan-T5 series.
Running a domain-specific T5-XL based on Flan-XL and trying @baptistejamin's prompt with the latest build (https://github.com/OpenNMT/CTranslate2/actions/runs/5068556228):
Who is barrack obama? -> Barack Obama is the 44th President of the United States. He was elected in 2008 and has been in office since January 20, 2009.
With:

```python
translate_args = {"repetition_penalty": 1.3, "sampling_topk": 1, "beam_size": 1, "sampling_temperature": 0.3, "max_decoding_length": 512}
ctranslate2.Translator(device="cpu", inter_threads=8, intra_threads=8, compute_type="int8")  # model path omitted here
```

The response is the same with device="cuda", inter_threads=8, intra_threads=8, compute_type="int8_float16".
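For context, a minimal sketch of how these arguments are passed to translate_batch (model path and tokenizer name are placeholders following the earlier snippet in this thread):

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator(
    "flan-xl-c2", device="cpu", inter_threads=8, intra_threads=8, compute_type="int8"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xl")

translate_args = {
    "repetition_penalty": 1.3,
    "sampling_topk": 1,
    "beam_size": 1,
    "sampling_temperature": 0.3,
    "max_decoding_length": 512,
}

tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("Who is barrack obama?"))
results = translator.translate_batch([tokens], **translate_args)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])))
```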
Fixed for me as well, thank you
I can confirm it works fine with Flan XL int8 with the latest build
However, Flan XXL int8 still returns nonsensical text, and for the same prompt the output is worse than the XL model.
Finally, float16 mode still returns <pad><pad><pad><pad>... for both the official Flan XL and Flan XXL models.
It behaves exactly as before the initial patch was made.
Hi, is this corrected build published in the latest release?
No, the change is not merged yet. It does not seem complete, since generation for Flan XXL int8 is still broken.
I've been working with XXL in int8 with 3.15.1 and it appears to be emitting valid responses. Both cuda and CPU. @baptistejamin FYI
What's your prompt and parameters?
@baptistejamin Any prompt works just fine. However, I had a recently converted model fail in the same way. In that case it came from how the tokenizer was converted into the config.json that CT2 creates: the bos and decoder start sequence tokens were </s>, which is wrong for T5.
I haven't spent much time troubleshooting, but it comes from where the Seq2Seq model config is initialized in model_spec.py (line 386):
```python
class SequenceToSequenceModelConfig(ModelConfig):
    """Configuration for sequence-to-sequence models."""

    def __init__(
        self,
        unk_token: str = "<unk>",
        bos_token: str = "<s>",
        eos_token: str = "</s>",
        decoder_start_token: Optional[str] = "<s>",
        add_source_bos: bool = False,
        add_source_eos: bool = False,
    ):
```
Maybe this is also your issue?
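If you want to rule that out, a quick check is to inspect the config.json the converter wrote next to the model (a sketch only; the path is a placeholder and the field names follow the SequenceToSequenceModelConfig excerpt above). For T5 the decoder start token is expected to be the pad token, so seeing </s> there would match the broken case described above:

```python
import json

# Placeholder path to a converted model directory.
with open("flan-xxl-c2/config.json") as f:
    config = json.load(f)

# Field names taken from the SequenceToSequenceModelConfig excerpt above.
print("bos_token:", config.get("bos_token"))
print("decoder_start_token:", config.get("decoder_start_token"))  # "<pad>" expected for T5
```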
FYI, CTranslate2 3.17 supports the "bfloat16" compute type which should be used instead of "float16" for these models.
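A minimal sketch of selecting it (the model path is a placeholder; running bfloat16 on GPU generally needs hardware with native bf16 support):

```python
import ctranslate2

# compute_type="bfloat16" is available since CTranslate2 3.17.
translator = ctranslate2.Translator("flan-xxl-c2", device="cuda", compute_type="bfloat16")
```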
Hi,
I've installed the "python-wheels" artifact from #1066.
Then, I use the following code to answer a question based on a context:
which generates the following text:
<pad><pad><pad><pad><pad><pad>... (the output consists only of repeated <pad> tokens)
Any suggestion regarding this result?