ahans30 / Binoculars

[ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text
https://arxiv.org/abs/2401.12070
BSD 3-Clause "New" or "Revised" License

never get the result #6

Closed imrshohel closed 6 months ago

imrshohel commented 6 months ago

CPU: Core i7 8th gen, GPU: 8 GB, RAM: 40 GB

But when I run the script, the local server starts, yet it never finishes its task; it just keeps counting up the elapsed time. I tried every possible variation, from short text to long, but it never completed its analysis. Is there anything I need to adjust? I have already resolved the warning messages related to my PC configuration.

ahans30 commented 6 months ago

This is because Binoculars runs two 7B-parameter language models, which can take forever on a CPU. I'm sorry, but running LMs on the CPU is not recommended, and this behavior is expected. I would recommend using resources like Kaggle or a Colab notebook to get GPU access if you're constrained on resources.
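
For reference, here is a minimal sketch of running the detector on a GPU runtime (e.g., a Colab T4). It mirrors the usage shown in this repo's README; the `Binoculars` class and the `compute_score` / `predict` methods are taken from there, and the sample string is just a placeholder:

```python
# Minimal sketch for a GPU runtime (e.g., Colab with a T4).
# Assumes the Binoculars class and compute_score/predict interface
# from this repo's README; both 7B models are loaded onto the GPU.
from binoculars import Binoculars

bino = Binoculars()  # loads the observer and performer models

sample = "Some text whose origin you want to check."
print(bino.compute_score(sample))  # raw score, compared internally to a threshold
print(bino.predict(sample))       # maps the score to a human-/AI-generated label
```

On a single modern GPU this finishes in seconds per document; on a CPU the same two forward passes are what stalls for the timescales you're seeing.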

You might wanna check out this HuggingFace tutorial on optimizing LM inference on a CPU. Please feel free to send in a PR for proper CPU support if you're able to optimize for it.
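
If you do want to experiment on CPU anyway, the usual first steps are lower-precision weights and a thread count matched to your cores. A hedged sketch using the standard `transformers` API (the Falcon-7B model id below is only for illustration; substitute whichever observer/performer pair you actually run):

```python
# Hedged sketch of CPU-side loading tweaks; this is not proper CPU support.
# The model id is illustrative; Binoculars loads two such models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_num_threads(8)  # match your physical core count

model_id = "tiiuae/falcon-7b"  # illustrative; swap in your observer/performer pair
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # halves memory vs. fp32; needs a bf16-capable CPU
    low_cpu_mem_usage=True,       # stream weights in instead of a full fp32 copy
)
model.eval()
```

Even with these tweaks, two 7B models in bf16 need roughly 28 GB of RAM, and each scored document still requires two full forward passes, so expect it to remain very slow on CPU.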

Closing this issue since there's not much I can do here. Thanks for your interest in the project. :)