-
- PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.4.0
- Python version: 3.9.19
- Operating System: macOS (Darwin)
### Expected behavior
The current implementation of different classes…
-
**Is your feature request related to a problem? Please describe.**
To improve our project's efficiency and effectiveness, we need to integrate new data sources, validate and clean collected data, des…
-
## Feature Name
Cognitive Computations
## Feature Description
## Overview of Cognitive Computations
**Cognitive Computations** is a community-driven group founded by Eric Hartford, focusin…
-
Hi,
For LoRA fine-tuning, are there ways to save only the adapter weights and not the full model files? More importantly, what are the easiest ways to perform model merging, given a base model and a…
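With the Hugging Face `peft` library, calling `save_pretrained` on a `PeftModel` writes only the adapter weights, and `merge_and_unload` folds the LoRA deltas back into the base model. A minimal sketch, assuming `transformers` and `peft` are installed and the paths/model names are placeholders:

```python
# Sketch: adapter-only saving and LoRA merging with Hugging Face `peft`.
# Imports are deferred so the helpers can be defined without peft installed.

def save_adapter_only(peft_model, out_dir: str) -> None:
    """On a PeftModel, save_pretrained writes only the adapter files
    (adapter_model.safetensors + adapter_config.json), not the base model."""
    peft_model.save_pretrained(out_dir)

def merge_adapter(base_model_name: str, adapter_dir: str, merged_dir: str):
    """Load the base model, attach the trained adapter, then fold the
    LoRA weights into the base weights to get a plain, standalone model."""
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_model_name)
    model = PeftModel.from_pretrained(base, adapter_dir)
    merged = model.merge_and_unload()  # folds LoRA deltas into base weights
    merged.save_pretrained(merged_dir)
    return merged
```

The merged directory can then be loaded like any ordinary checkpoint, with no `peft` dependency at inference time.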
-
When I use 8-bit quantization during pre-training, the code throws an error:

> You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the qu…
-
### Due diligence
- [X] I have done my due diligence in trying to find the answer myself.
### Topic
The paper
### Question
How would you tweak the language model transformer with system prompts /…
-
Development of a simple benchmark to evaluate the performance of different models with Solidity code. This benchmark will be used to measure the impact of fine-tuning on the models.
-
The pretrained models have names like: generator_v1
However, train.py looks for checkpoints with the following code:
```
if os.path.isdir(a.checkpoint_path):
cp_g = scan_checkpoi…
-
Hello authors,
I have some questions to ask about your _general_dataset.json_.
1. Why didn't you include models other than GPT-4 and GPT-3.5?
2. What are the specific versions of GPT-4 and GPT-3.5?
…
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
For large la…