Closed: ElCoyote27 closed this issue 1 month ago
@ElCoyote27 could you please try this version instead? https://releases.lmstudio.ai/linux/0.2.23/beta/LM_Studio-0.2.23-Ubuntu-20.04.AppImage
@yagil Thank you so much!!! That version loads without any issues on RHEL8! Super excited as I have an A4000 in this server! Thank you!
Awesome! Glad to hear. Brought to you by @dbevenius :) 🙌
$ ./LM_Studio-0.2.23-Ubuntu-20.04.AppImage
18:20:04.142 › App starting...
(node:3215715) UnhandledPromiseRejectionWarning: ReferenceError: Cannot access 'q' before initialization
at /tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/index.js:11:38584
(Use `lm-studio --trace-warnings ...` to show where the warning was created)
(node:3215715) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
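The warning above can be reproduced with a minimal sketch (the function name and error text here are illustrative, not LM Studio's actual code): an async function that throws rejects its returned promise, and Node emits the warning only when no handler is attached.

```javascript
// Hypothetical illustration, not LM Studio's code: an async function
// that throws produces a rejected promise.
async function load() {
  throw new ReferenceError("Cannot access 'q' before initialization");
}

// Calling load() without .catch() (or try/catch in an async caller)
// triggers UnhandledPromiseRejectionWarning, as in the log above.
// With a handler attached, the rejection counts as handled:
load().catch((err) => console.error('handled:', err.message));
```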
18:20:04.587 › Downloads folder from settings.json: /export/home/raistlin/.cache/lm-studio/models
18:20:04.594 › Extensions backends directory already exists at /export/home/raistlin/.cache/lm-studio/extensions/backends
18:20:04.598 › Available backend descriptors:
{
  "extension": [],
  "bundle": [
    {
      "path": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA",
      "manifest": {
        "target_libraries": [
          {
            "name": "llm_engine_cuda.node",
            "type": "llm_engine",
            "version": "0.1.0"
          },
          {
            "name": "liblmstudio_bindings_cuda.node",
            "type": "liblmstudio",
            "version": "0.2.23"
          }
        ],
        "type": "llama_cuda",
        "platform": "linux",
        "supported_model_formats": [
          "gguf"
        ]
      }
    },
    {
      "path": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU",
      "manifest": {
        "target_libraries": [
          {
            "name": "llm_engine.node",
            "type": "llm_engine",
            "version": "0.1.0"
          },
          {
            "name": "liblmstudio_bindings.node",
            "type": "liblmstudio",
            "version": "0.2.23"
          }
        ],
        "type": "llama_cpu",
        "platform": "linux",
        "supported_model_formats": [
          "gguf"
        ]
      }
    },
    {
      "path": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL",
      "manifest": {
        "target_libraries": [
          {
            "name": "llm_engine_clblast.node",
            "type": "llm_engine",
            "version": "0.1.0"
          },
          {
            "name": "liblmstudio_bindings_clblast.node",
            "type": "liblmstudio",
            "version": "0.2.23"
          }
        ],
        "type": "llama_opencl",
        "platform": "linux",
        "supported_model_formats": [
          "gguf"
        ]
      }
    }
  ]
}
18:20:04.599 › Backend keys and libpaths for use:
{
  "llama_cuda": {
    "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/liblmstudio_bindings_cuda.node",
    "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/llm_engine_cuda.node"
  },
  "llama_cpu": {
    "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU/liblmstudio_bindings.node",
    "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU/llm_engine.node"
  },
  "llama_opencl": {
    "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL/liblmstudio_bindings_clblast.node",
    "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL/llm_engine_clblast.node"
  }
}
18:20:04.599 › Surveying backend-hardware compatibility...
18:20:04.600 › Loading LM Studio core from: '/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/liblmstudio_bindings_cuda.node'
1th kill successful!!!
18:20:05.229 › Loading LM Studio core from: '/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU/liblmstudio_bindings.node'
1th kill successful!!!
18:20:05.411 › Loading LM Studio core from: '/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL/liblmstudio_bindings_clblast.node'
18:20:05.676 › Backend-hardware compatibility survey complete:
{
  "llama_cuda": {
    "libPaths": {
      "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/liblmstudio_bindings_cuda.node",
      "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/llm_engine_cuda.node"
    },
    "isCompatible": true,
    "hardwareSurveyResult": {
      "gpuSurveyResult": {
        "result": {
          "code": "Success",
          "message": ""
        },
        "gpuInfo": [
          {
            "name": "NVIDIA RTX A4000",
            "deviceId": 0,
            "totalMemoryCapacityBytes": 16881025024,
            "integrationType": "Discrete",
            "detectionPlatform": "Cuda",
            "otherInfo": {}
          }
        ]
      },
      "cpuSurveyResult": {
        "result": {
          "code": "Success",
          "message": ""
        },
        "cpuInfo": {
          "architecture": "x86",
          "supportedInstructionSets": [
            "AVX2"
          ]
        }
      }
    }
  },
  "llama_cpu": {
    "libPaths": {
      "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU/liblmstudio_bindings.node",
      "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/NoGPU/llm_engine.node"
    },
    "isCompatible": true,
    "hardwareSurveyResult": {
      "gpuSurveyResult": {
        "result": {
          "code": "NoDevicesFound",
          "message": "No gpus found without acceleration backend compilation!"
        },
        "gpuInfo": []
      },
      "cpuSurveyResult": {
        "result": {
          "code": "Success",
          "message": ""
        },
        "cpuInfo": {
          "architecture": "x86",
          "supportedInstructionSets": [
            "AVX2"
          ]
        }
      }
    }
  },
  "llama_opencl": {
    "libPaths": {
      "libLmStudioPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL/liblmstudio_bindings_clblast.node",
      "llmEngineLibPath": "/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/OpenCL/llm_engine_clblast.node"
    },
    "isCompatible": true,
    "hardwareSurveyResult": {
      "gpuSurveyResult": {
        "result": {
          "code": "Success",
          "message": ""
        },
        "gpuInfo": [
          {
            "name": "NVIDIA RTX A4000",
            "deviceId": 0,
            "totalMemoryCapacityBytes": 16881025024,
            "integrationType": "Discrete",
            "detectionPlatform": "OpenCl",
            "otherInfo": {
              "opencl_subplatform": "NVIDIA CUDA"
            }
          }
        ]
      },
      "cpuSurveyResult": {
        "result": {
          "code": "Success",
          "message": ""
        },
        "cpuInfo": {
          "architecture": "x86",
          "supportedInstructionSets": [
            "AVX2"
          ]
        }
      }
    }
  }
}
D [fallbackBackendPref] Initializing FileData
18:20:05.683 › GPU preferences file already exists at /export/home/raistlin/.cache/lm-studio/gpu-preferences.json
1th kill successful!!!
18:20:05.685 › Loading LM Studio core from: '/tmp/.mount_LM_Stu1k6lEW/resources/app/.webpack/main/build/Release/CUDA/liblmstudio_bindings_cuda.node'
18:20:05.865 › Successfully wrote to GPU preferences file to set GPU type to 'Nvidia CUDA'
Logger created with filePath /tmp/lmstudio-server-log.txt
Client with id 'httpServer' registered.
D [LLMExternalAPIProvider] Creating HTTP server extender
D [LLMExternalAPIProvider] Registering IPC server
D [PlatformExternalAPIProvider] Creating HTTP server extender
D [PlatformExternalAPIProvider] Registering IPC server
D [SystemExternalAPIProvider] Creating HTTP server extender
D [SystemExternalAPIProvider] Registering IPC server
D [DiagnosticsExternalAPIProvider] Creating HTTP server extender
D [DiagnosticsExternalAPIProvider] Registering IPC server
D [NotepadMinusMinusExternalAPIProvider] Registering IPC server
D [DeepLinkHandlingExternalAPIProvider] Registering IPC server
[3215715:0513/182031.142143:ERROR:browser_main_loop.cc(274)] Gtk: gtk_widget_add_accelerator: assertion 'GTK_IS_ACCEL_GROUP (accel_group)' failed
18:20:31.188 › Checking if LM Studio dev tools exist on the system...
18:20:31.254 › LM Studio dev tools are not present, copying them over to .cache/lm-studio/bin...
18:20:32.163 › [AppUpdater] Checking for updates... (current state: idle)
18:20:32.164 › AppUpdater state changed to checking-for-updates-periodic
18:20:32.165 › [AppUpdater] Fetching version info from https://versions.lmstudio.ai
18:20:32.221 › First model catalog download.
D [LMSAuthenticator][FcfsClient(cId=Gr/U5Up12wjYHMKW7YzjULSh)] Client created.
D [LMSAuthenticator][FcfsClient(cId=Gr/U5Up12wjYHMKW7YzjULSh)] Holder created, references: 1
Client with id 'Gr/U5Up12wjYHMKW7YzjULSh' registered.
D [LMSAuthenticator][FcfsClient(cId=Gr/U5Up12wjYHMKW7YzjULSh)] Holder created, references: 2
D [LMSAuthenticator][FcfsClient(cId=Gr/U5Up12wjYHMKW7YzjULSh)] Holder created, references: 3
D [LMSAuthenticator][FcfsClient(cId=LM Studio)] Client created.
D [LMSAuthenticator][FcfsClient(cId=LM Studio)] Holder created, references: 1
Client with id 'LM Studio' registered.
D [LMSAuthenticator][FcfsClient(cId=LM Studio)] Holder created, references: 2
18:20:32.227 › Detecting the 'best backend' available for use...
18:20:32.227 › First compatible backend found on non-mac is 'llama_cuda', setting as best
18:20:32.227 › Best backend detected to be 'llama_cuda'
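The "best backend" pick in the three lines above can be sketched as a first-match scan over the compatibility survey. This is a hypothetical helper, not LM Studio's actual code, and the preference order is an assumption inferred from the log:

```javascript
// Assumed preference order: GPU backends before CPU fallback.
const PREFERENCE = ['llama_cuda', 'llama_opencl', 'llama_cpu'];

// Return the first backend key marked compatible in the survey, or null.
function pickBestBackend(survey) {
  const best = PREFERENCE.find((key) => survey[key] && survey[key].isCompatible);
  return best || null;
}

// Shaped like the survey printed earlier in the log:
const survey = {
  llama_cuda: { isCompatible: true },
  llama_cpu: { isCompatible: true },
  llama_opencl: { isCompatible: true },
};
console.log(pickBestBackend(survey)); // 'llama_cuda'
```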
setConfiguration called but the number of loaded models is not 1.
[readJsonFile] Error reading file /export/home/raistlin/.cache/lm-studio/config-presets/config.map.json: SyntaxError: Unexpected end of JSON input
Error reading file /export/home/raistlin/.cache/lm-studio/config-presets/config.map.json: SyntaxError: Unexpected end of JSON input
I [LMSAuthenticator][FcfsClient(cId=LM Studio)][LMSContext(ep=handle,t=channel)] Handling deep link
18:20:32.254 › Current downloads folder: /export/home/raistlin/.cache/lm-studio/models
18:20:32.315 › AppUpdater state changed to idle
18:20:32.340 › Downloaded model catalog from https://raw.githubusercontent.com/lmstudio-ai/model-catalog/main/catalog.json.
18:20:32.341 › Loaded existing catalog: Contains 16 models.
18:20:32.342 › Catalog is unchanged. Not replacing.
18:20:32.429 › Last model catalog download was 208ms ago. Skipping.
Hi, I'm trying to use LM Studio 0.2.23 on RHEL8 (which usually has the same requirements as Ubuntu 20.04). Thank you for agreeing to downgrade your build chain to 20.04, btw. The AppImage now starts, but there is still a GLIBC error and a popup comes up:
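For context on the GLIBC error (my own note, not from the thread): RHEL 8 ships glibc 2.28 while Ubuntu 20.04 builds target glibc 2.31, so a 20.04 binary can still reference GLIBC_2.29+ versioned symbols that RHEL 8 lacks. The first line of `ldd --version` reports the host's glibc, and comparing it against the build target is a quick sanity check. These helpers are hypothetical:

```javascript
// Parse the glibc version from the first line of `ldd --version`,
// e.g. 'ldd (GNU libc) 2.28' → [2, 28]. Returns null if unrecognized.
function parseGlibcVersion(lddFirstLine) {
  const m = lddFirstLine.match(/(\d+)\.(\d+)\s*$/);
  return m ? [Number(m[1]), Number(m[2])] : null;
}

// Is the host glibc at least the version the binary was built against?
function atLeast(found, required) {
  return found[0] > required[0] ||
         (found[0] === required[0] && found[1] >= required[1]);
}

console.log(parseGlibcVersion('ldd (GNU libc) 2.28')); // [ 2, 28 ]
console.log(atLeast([2, 28], [2, 31])); // false → expect symbol errors
```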