-
### Title
Data Governance for the Machine Learning Pipeline
### Guide
Reproducible Research
### Draft
https://docs.google.com/document/d/1PuOzOsvjUf6DfaoCzUnfC73FpelbY40UFTXg2H9huk0/edi…
-
Debug configuration:
...
--finetuning_type freeze \
--num_layer_trainable 3 \
--name_module_trainable mlp \
--max_source_length 200 \
--max_target_length 1024 \
--output_dir path_to_sf…
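These flags appear to configure "freeze" finetuning: everything is frozen except the `mlp` modules of the last 3 layers (`--num_layer_trainable 3`, `--name_module_trainable mlp`). A minimal sketch of that selection logic, with hypothetical parameter names (the real model's naming scheme will differ):

```python
# Sketch: freeze finetuning selects only the mlp modules of the last N layers.
# The "layers.<i>.<module>.weight" naming here is illustrative, not the
# actual parameter names of any specific model.
NUM_LAYERS = 12
NUM_TRAINABLE = 3  # corresponds to --num_layer_trainable 3

param_names = [
    f"layers.{i}.{mod}.weight"
    for i in range(NUM_LAYERS)
    for mod in ("attn", "mlp")
]

# The last NUM_TRAINABLE layer indices stay trainable.
trainable_layers = {NUM_LAYERS - 1 - k for k in range(NUM_TRAINABLE)}

def is_trainable(name: str) -> bool:
    # corresponds to --name_module_trainable mlp
    _, layer, module, _ = name.split(".")
    return int(layer) in trainable_layers and module == "mlp"

trainable = [n for n in param_names if is_trainable(n)]
# Only the mlp weights of layers 9, 10, 11 remain trainable.
```

In a real run the framework would set `requires_grad = False` on every parameter that fails this check before training starts.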
-
### Issue with current documentation:
I am using gpt4all to chat with my documents locally, but my workflow involves some Python code. So I want to know whether gpt4all "ggml-gpt4all-j-v1.3-groovy.bin"…
-
### System Info
Command line:
`RUST_BACKTRACE=1 text-generation-launcher --model-id bigcode/starcoderbase --num-shard 1 --port 8080`
Version: commit `880a76e`, with a local install following th…
-
Hi,
At the moment, all output from gpt4all-j is automatically printed to the screen. This makes the code quite 'noisy' when running it from a script.
Ideally this information shouldn't be printed u…
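Until the library exposes a verbosity switch, one workaround, assuming the messages come from Python-level `print` calls rather than from native code, is to capture stdout around the call. The `generate()` function below is a hypothetical stand-in for the actual gpt4all-j call:

```python
import io
from contextlib import redirect_stdout

def generate(prompt: str) -> str:
    # Stand-in for the gpt4all-j call that prints while it runs.
    print("token token token")  # the unwanted console noise
    return "response"

buf = io.StringIO()
with redirect_stdout(buf):
    result = generate("hello")

noise = buf.getvalue()  # captured prints, available for logging or discard
```

Note that `redirect_stdout` only intercepts Python-level writes; if the noise is emitted by the underlying C library, the file descriptor itself would have to be redirected (e.g. with `os.dup2`) instead.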
-
Thanks for your wonderful work; FasterTransformer is a great project that benefits many people.
FasterTransformer works very well with Multi-Head Attention (MHA) models like GPT/OPT/BLOOM.
However, Mu…
-
### System Info
On prem setup with A100, running via docker container
```
2023-05-24T15:22:40.644196Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo v…
-
I've been having a hellish experience trying to get the llama.cpp Python bindings to work with multiple GPUs. I have two RTX 2070s and Ubuntu OS, and I want to get llama.cpp performing inference using the …
y6t4 updated 5 months ago
-
*This issue is a catch all for questions about using aider with other or local LLMs. The text below is taken from the [FAQ](https://aider.chat/docs/faq.html#can-i-use-aider-with-other-llms-local-llms-…
-
Currently, this only supports OpenAI models via the OpenAI API. Oobabooga WebUI has an API extension that allows requests to be sent to it, so open-source models can generate the content instead comp…