-
**Issue by [ericniebler](https://github.com/ericniebler)**
_Wednesday Feb 15, 2023 at 22:17 GMT_
_Originally opened as https://github.com/brycelelbach/wg21_p2300_execution/issues/27_
----
Discuss c…
-
## Failure
System: macOS/arm64. `ocaml-base-compiler.5.2.0`.
What's unusual: The failing module has 616,242 lines (it is auto-generated). It also has lots of mutually recursive types (just under…
-
**TL;DR:** Currently CodeCarbon is focused on usage impacts related to the energy consumption of the host. To assess impacts on a broader scope and in a multi-step way, as defined in Life Cycle Assessment…
-
Hey! Great work on this project! I got it to work on a couple of T5 instruction-tuned models from Hugging Face. I was just curious, has anyone been able to get the code to work with quantized models? Cur…
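A hedged, minimal sketch (not from the original post) of how 8-bit quantized loading is typically wired up with `transformers` and `bitsandbytes`; the checkpoint name is an illustrative placeholder:
```
# Hedged sketch: loading an instruction-tuned T5 in 8-bit via bitsandbytes.
# The checkpoint name is an illustrative placeholder, not from the question above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/flan-t5-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # needs accelerate; keeps the quantized weights on GPU
)

inputs = tokenizer("Translate to German: Hello, world!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```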
-
## ❓ Questions and Help
How do you run models that are offloaded to the CPU? I'm trying to work with ```enable_sequential_cpu_offload``` or ```enable_model_cpu_offload```, but when running ```torch_xla.sy…
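For context, a sketch of the standard CUDA-side pattern for `enable_model_cpu_offload` in diffusers (the checkpoint is only an example); whether accelerate's offload hooks compose with an XLA device is exactly the open question here:
```
# Baseline CUDA usage of model CPU offload in diffusers; not an XLA recipe.
# The checkpoint below is an example, not one mentioned in the question above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
# Each sub-model is moved to the GPU only while it runs; the rest stays on the CPU.
pipe.enable_model_cpu_offload()

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```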
-
- [ ] [Finetuning LLMs for ReAct. Unleashing the power of finetuning to… | by Pranav Jadhav | Feb, 2024 | Towards AI](https://pub.towardsai.net/finetuning-llms-for-react-9ab291d84ddc)
# Finetuning L…
-
Composite key names need to be modified to follow conventions:
Option 1: `_column_name_1_column_name_2_fkey`
Option 2: `__denorm_fkey` (if the name is too long)
A dry run can be done to figu…
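A hedged sketch of what such a dry run could look like; the table/column names and the 63-character cap (PostgreSQL's default identifier limit) are assumptions:
```
# Hedged dry-run sketch: preview composite foreign key names under both options.
# Table/column data and the 63-character limit are assumptions for illustration.
MAX_IDENTIFIER_LEN = 63

def composite_fkey_name(table: str, columns: list[str]) -> str:
    # Option 1: <table>_<column_name_1>_<column_name_2>_fkey
    # (prefixing with the table name is an assumption, mirroring common conventions)
    option_1 = f"{table}_{'_'.join(columns)}_fkey"
    if len(option_1) <= MAX_IDENTIFIER_LEN:
        return option_1
    # Option 2: <table>__denorm_fkey, used when Option 1 would be too long.
    return f"{table}__denorm_fkey"

# Illustrative keys only; a real dry run would read these from the schema.
for table, cols in [
    ("orders", ["customer_id", "region_id"]),
    ("extremely_long_table_name_for_demo", ["first_very_long_column_name", "second_very_long_column_name"]),
]:
    print(table, "->", composite_fkey_name(table, cols))
```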
-
Hi~ Could I train the model and update its parameters via the mutation prompting code?
-
Use this issue to track general modular compilation concerns.
This is very incomplete, but the general plan so far is this:
The binary format currently uses symbolic references, which generally beco…
-
After downloading the weights of Llama 2 70B from HF, I tried to load them using
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    cache_dir="/cache"
…
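# Not from the original report: a hedged sketch of how the 70B checkpoint is
# commonly loaded when it exceeds a single device's memory; torch_dtype and
# device_map below are assumptions, not arguments the issue actually used.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    cache_dir="/cache",
    torch_dtype=torch.float16,  # half-precision weights to cut memory use
    device_map="auto",          # shard layers across GPUs/CPU (needs accelerate)
)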