huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

LLaVa inference crashes with error: Error device-side assert triggered at line 738 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu #28276

Closed Meatfucker closed 9 months ago

Meatfucker commented 10 months ago

System Info

Who can help?

No response

Information

Tasks

Reproduction

When running inference on LLaVa, it sometimes crashes, seemingly at random, with the following error.

Error device-side assert triggered at line 738 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu
/opt/conda/conda-bld/pytorch_1699449183005/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
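
Device-side asserts are reported asynchronously, so the file and line in the message may point at whichever kernel happened to notice the failure rather than the one that triggered it. A minimal sketch for getting a synchronous, more accurate report: set CUDA_LAUNCH_BLOCKING=1 before the CUDA context is created, for example at the very top of the script.

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # force synchronous kernel launches so the failing op raises at its real call site

import torch  # import torch only after setting the variable so the setting takes effect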

Here is roughly how I'm loading the model and doing inference.

import torch
from transformers import AutoProcessor, LlamaTokenizerFast, LlavaForConditionalGeneration

model_name = "llava-hf/llava-1.5-13b-hf"
model = LlavaForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True)
tokenizer = LlamaTokenizerFast.from_pretrained(model_name)
multimodal_tokenizer = AutoProcessor.from_pretrained(model_name)

async def generate(self):
        """function for generating responses with the llm"""
        llm_defaults = await get_defaults('global')
        userhistory = await self.load_history()  # load the users past history to include in the prompt
        tempimage = None
        if self.reroll is True:
            await self.delete_last_history_pair()
            self.reroll = False
        if self.image_url:
            image_url = self.image_url[0]  # Consider the first image URL found
            response = requests.get(image_url)
            if response.status_code == 200:
                image_data = BytesIO(response.content)
                new_image = Image.open(image_data)
                tempimage = new_image
                image_url_pattern = r'\bhttps?://\S+\.(?:png|jpg|jpeg|gif)\S*\b'  # Updated regex pattern for image URLs
                self.prompt = re.sub(image_url_pattern, '', self.prompt)

        with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True):  # enable flash attention for faster inference
            with torch.no_grad():
                if tempimage:
                    if self.user.id not in self.metatron.llm_user_history or not self.metatron.llm_user_history[self.user.id]:
                        formatted_prompt = f'{llm_defaults["wordsystemprompt"][0]}\n\nUSER:<image>{self.prompt}\nASSISTANT:'  # if there is no history, add the system prompt to the beginning
                    else:
                        formatted_prompt = f'{userhistory}\nUSER:<image>{self.prompt}\nASSISTANT:'
                    inputs = self.multimodal_tokenizer(formatted_prompt, tempimage, return_tensors='pt').to("cuda")
                    llm_generate_logger = logger.bind(user=self.user.name, prompt=self.prompt)
                    llm_generate_logger.info("WORDGEN Generate Started.")
                    output = await asyncio.to_thread(self.model.generate, **inputs, max_new_tokens=200, do_sample=False)
                    llm_generate_logger.debug("WORDGEN Generate Completed")
                    result = self.multimodal_tokenizer.decode(output[0], skip_special_tokens=True)
                else:
                    if self.user.id not in self.metatron.llm_user_history or not self.metatron.llm_user_history[self.user.id]:
                        formatted_prompt = f'{llm_defaults["wordsystemprompt"][0]}\n\nUSER:{self.prompt}\nASSISTANT:'  # if there is no history, add the system prompt to the beginning
                    else:
                        formatted_prompt = f'{userhistory}\nUSER:{self.prompt}\nASSISTANT:'
                    inputs = self.tokenizer(formatted_prompt, return_tensors='pt').to("cuda")
                    llm_generate_logger = logger.bind(user=self.user.name, prompt=self.prompt)
                    llm_generate_logger.info("WORDGEN Generate Started.")
                    output = await asyncio.to_thread(self.model.generate, **inputs, max_new_tokens=200, do_sample=False)
                    llm_generate_logger.debug("WORDGEN Generate Completed")
                    result = self.tokenizer.decode(output[0], skip_special_tokens=True)

        response_index = result.rfind("ASSISTANT:")  # this and the next line extract the bot's response for posting to the channel
        self.llm_response = result[response_index + len("ASSISTANT:"):].strip()
        await self.save_history()  # save the response to the users history
        gc.collect()
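
For completeness, the same 4-bit setup can also be expressed with an explicit BitsAndBytesConfig instead of passing the bnb_4bit_* arguments straight to from_pretrained; a minimal sketch, assuming the same model name and dtype choices as in the loading snippet above:

import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

model_name = "llava-hf/llava-1.5-13b-hf"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
model = LlavaForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    quantization_config=quant_config,
)
processor = AutoProcessor.from_pretrained(model_name)  # bundles the tokenizer and the image processor

Note that the processor returned by AutoProcessor already wraps the tokenizer, so a separate LlamaTokenizerFast is only needed for the text-only path.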

Expected behavior

I would expect it to run inference and return a response as it normally does. Interestingly, this same code never crashes when run on Windows, only on Linux.

aldoz-mila commented 10 months ago

I'm having the same issue while doing inference with the same weights (1.5-13b), also on Linux with a GPU. Interestingly, I never have the issue when doing inference with the smaller (7b) model.

ArthurZucker commented 10 months ago

This should have been fixed on main by #28032
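Until that fix is in a release, installing transformers from source (pip install git+https://github.com/huggingface/transformers) should pick it up.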

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.