-
To run LLaMA 3.1 (or similar large language models) locally, you need to meet specific hardware requirements, especially for storage and memory. Here's a breakdown of what you typically need:
### …
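As a rough illustration of the storage side, here is a minimal sketch that estimates the on-disk size of model weights at different quantization levels. The parameter counts and bytes-per-weight values are illustrative assumptions, not official figures for any specific LLaMA 3.1 release.

```python
# Rough estimate of weight-file size for a model at different quantizations.
# Parameter counts and bytes-per-weight below are illustrative assumptions,
# not official figures for any particular LLaMA 3.1 build.

PARAM_COUNTS = {"8B": 8e9, "70B": 70e9, "405B": 405e9}
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}  # approximate

def weights_size_gb(params: float, bytes_per_weight: float) -> float:
    """Approximate size of the raw weights in gigabytes."""
    return params * bytes_per_weight / 1e9

if __name__ == "__main__":
    for size, params in PARAM_COUNTS.items():
        for quant, bpw in BYTES_PER_WEIGHT.items():
            print(f"{size} @ {quant}: ~{weights_size_gb(params, bpw):.0f} GB")
```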
-
- remove fitting of the subject-specific GLM models within `fitGLMM()`
- this requires some re-working of various bits and pieces but should be feasible
- see if it's possible to speed up the NB L…
-
## Is your feature request related to a problem? Please describe.
Some files in the project, like `blockchain.go`, are too large for most AI models. With over 3,000 lines, the file can't be directly 'paste…
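One workaround, pending a built-in solution, is to split an oversized source file into overlapping chunks before sending it to a model. The sketch below is a minimal, hypothetical example of that approach; the chunk sizes are arbitrary and it is not part of the project.

```python
# Minimal sketch: split a large source file into overlapping line-based chunks
# so each chunk fits in a model's context window. Chunk sizes are arbitrary
# illustrative values, not recommendations from the project.

def chunk_file(path: str, lines_per_chunk: int = 400, overlap: int = 40):
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    step = lines_per_chunk - overlap
    for start in range(0, len(lines), step):
        yield "".join(lines[start:start + lines_per_chunk])

if __name__ == "__main__":
    for i, chunk in enumerate(chunk_file("blockchain.go")):
        print(f"--- chunk {i}: {len(chunk.splitlines())} lines ---")
```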
-
**Problem Description**
I have several ollama endpoints and would like to choose between them. Right now I can only configure one. I run smaller models locally and larger models on an inference server.…
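In the meantime, requests can be routed at the application level. The sketch below picks an endpoint per model using ollama's REST API (`/api/generate`); the host URLs and the model-to-endpoint mapping are made-up placeholders.

```python
# Sketch: route requests to different ollama endpoints depending on the model.
# Host URLs and the model-to-endpoint mapping are hypothetical placeholders.
import requests

ENDPOINTS = {
    "llama3.1:8b": "http://localhost:11434",          # small model, local
    "llama3.1:70b": "http://inference-server:11434",  # large model, remote
}

def generate(model: str, prompt: str) -> str:
    base = ENDPOINTS[model]
    resp = requests.post(
        f"{base}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("llama3.1:8b", "Say hello in one sentence."))
```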
-
Hi there,
First, really nice job on developing adam-mini! It's a really refreshing take on Adam.
I did some initial testing by integrating adam_mini into torchtitan, and ran it with varying si…
-
1) Thanks! Finally someone who explains precisely which models work and where to get each one. That's great.
2) Why does this custom node not appear in the comfy MANAGER?
3) What's the difference between this on…
-
Appreciate the great work! I have two questions:
1) Is the ability to solve the reversal curse an emergent ability as you scale up the MDM?
2) I am trying to replicate your results for the reversal curse but…
-
### Expected behaviour
### Actual behaviour
### Steps to reproduce
I am not sure why this happened.
In my game I use that door, but when I set 2 different materials (base and transparent) win…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Model fine-tuning is the process of further training a pre-trained machine learning model on a specific dataset or task. This technique allows the model to adapt its knowledge to a particular domain o…
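As a concrete illustration (not from the original post), here is a minimal fine-tuning sketch using the Hugging Face `transformers` Trainer; the model name, dataset, and hyperparameters are placeholder choices, not recommendations.

```python
# Minimal fine-tuning sketch with Hugging Face transformers + datasets.
# Model name, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # placeholder pre-trained model
dataset = load_dataset("imdb")           # placeholder task/dataset
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,  # small LR so fine-tuning adapts rather than overwrites pre-trained weights
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```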