lmstudio-ai / lmstudio-bug-tracker

Bug tracking for the LM Studio desktop application

LM Studio 0.2.28 with AMD ROCm does not support Meta Llama 3.1 #63

Closed — kittizz closed this 4 months ago

kittizz commented 4 months ago

```json
{
  "title": "Failed to load model",
  "cause": "llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'smaug-bpe''",
  "errorData": {
    "n_ctx": 8192,
    "n_batch": 512,
    "n_gpu_layers": null
  },
  "data": {
    "memory": {
      "ram_capacity": "63.83 GB",
      "ram_unused": "13.85 GB"
    },
    "gpu": {
      "type": "AMD ROCm",
      "vram_recommended_capacity": "15.98 GB",
      "vram_unused": "15.86 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631",
      "supports_avx2": true
    },
    "app": {
      "version": "0.2.22",
      "downloadsDir": "C:\\Users\\xver-lab\\.cache\\lm-studio\\models"
    },
    "model": {}
  }
}
```
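An "unknown pre-tokenizer type" error typically means the llama.cpp build bundled with the app predates the pre-tokenizer named in the model file ('smaug-bpe' here). The pre-tokenizer name is stored in the GGUF file's metadata under the key `tokenizer.ggml.pre`, so you can check what a downloaded model declares without loading it. Below is a minimal sketch of a GGUF v3 metadata reader (field layout per the published GGUF spec; `gguf_metadata` is a hypothetical helper name, not an LM Studio or llama.cpp API):

```python
import struct

# GGUF scalar value types -> (struct format, size in bytes)
SCALARS = {
    0: ("<B", 1), 1: ("<b", 1), 2: ("<H", 2), 3: ("<h", 2),
    4: ("<I", 4), 5: ("<i", 4), 6: ("<f", 4), 7: ("<?", 1),
    10: ("<Q", 8), 11: ("<q", 8), 12: ("<d", 8),
}

def _read_string(buf, off):
    # GGUF string: uint64 length followed by UTF-8 bytes
    (n,) = struct.unpack_from("<Q", buf, off)
    off += 8
    return buf[off:off + n].decode("utf-8"), off + n

def _read_value(buf, off, vtype):
    if vtype in SCALARS:
        fmt, size = SCALARS[vtype]
        (v,) = struct.unpack_from(fmt, buf, off)
        return v, off + size
    if vtype == 8:  # string
        return _read_string(buf, off)
    if vtype == 9:  # array: element type (uint32), count (uint64), elements
        etype, count = struct.unpack_from("<IQ", buf, off)
        off += 12
        vals = []
        for _ in range(count):
            v, off = _read_value(buf, off, etype)
            vals.append(v)
        return vals, off
    raise ValueError(f"unknown GGUF value type {vtype}")

def gguf_metadata(buf):
    """Parse a GGUF header and return its metadata key/value pairs."""
    if buf[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", buf, 4)
    off = 24  # magic(4) + version(4) + tensor_count(8) + kv_count(8)
    meta = {}
    for _ in range(kv_count):
        key, off = _read_string(buf, off)
        (vtype,) = struct.unpack_from("<I", buf, off)
        off += 4
        meta[key], off = _read_value(buf, off, vtype)
    return meta
```

Reading the first few megabytes of the `.gguf` file and calling `gguf_metadata(...)` then lets you compare `meta.get("tokenizer.ggml.pre")` against the pre-tokenizers your llama.cpp build supports; if the value is one your runtime doesn't know, updating the app (or, as below, re-downloading a re-quantized model) is the fix.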
reversesh3ll commented 4 months ago

I believe there was an issue with the quantization of the initial model released by lmstudio-community. Simply re-download the model; the latest version released today fixed the issue for me.