-
I have installed trl
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/HamaWhiteGG/autogen4j/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There i…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
```
-
Python call:
```python
from nemoguardrails import LLMRails

rails = LLMRails(config)
messages = [{
    "role": "user", "content": "what is an mbr ?"
}]
options = {"output_vars": True}
output = rails.generate…
```
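For context, a minimal self-contained sketch of this call pattern is shown below; the `./config` directory and the completed `generate(...)` call are assumptions, not part of the snippet above:

```python
# Minimal sketch of the NeMo Guardrails call pattern above.
# Assumption: a guardrails configuration directory exists at ./config
# (the original snippet does not show how `config` is created).
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # load the rails configuration
rails = LLMRails(config)

messages = [{"role": "user", "content": "what is an mbr ?"}]

# output_vars=True asks the rails runtime to also return the context
# variables alongside the generated assistant message.
output = rails.generate(messages=messages, options={"output_vars": True})
print(output)
```

With `output_vars` enabled, the response should carry the rails context variables in addition to the assistant message, which appears to be what the snippet is after.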
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4…
```
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
(unfortunately you will need a physical Pixel 8 or above to implement this)
Many Commons contributors contribute in various languages; for instance, in Urdu when posting a picture of a local dish th…
-
**Is your feature request related to a problem? Please describe.**
The doc refers to Ollama with the mixtral model.
**Describe the solution you'd like**
Update the doc.
**Describe alternativ…
-
### What is the issue?
No issues with any model that fits on a single 3090, but it seems to run out of memory when trying to distribute a model to the second 3090.
```text
INFO [wmain] starting c++ runner | ti…
```
-
Command: `python -m main interactive /mistral-7B-v0.1/`
Error:
```text
Prompt: Hello
Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/mistral/lib/python3.10/runpy.py", line 196, in _ru…
```