-
Hello again @fschmid56, thanks for the awesome repo!
I would like to finetune DyMNs on my own dataset for audio classification. Is it possible?
If so, would the best pipeline be to just classif…
-
### Description
New robust statistical procedures, as contained in the WRS2, bmtest, GRD & flipscores packages
### Purpose
Incorporate widely used robust statistical procedures
### Use-case
…
-
I am trying to finetune Llama 3.1 with the settings below:
Unsloth 2024.8: Fast Llama patching. Transformers = 4.44.2.
GPU: NVIDIA A10G. Max memory: 21.988 GB. Platform = Linux.
Pytorch: 2.1.0+cu118. CU…
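A minimal sketch of the kind of Unsloth QLoRA setup I'm running (the checkpoint name and LoRA hyperparameters here are placeholders, not my exact values):
```
from unsloth import FastLanguageModel

# Load Llama 3.1 in 4-bit so it fits in the A10G's ~22 GB of VRAM
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # placeholder checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```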
-
## Description
I was trying to use the TRT modelopt library to quantize a ResNet18 from PyTorch. The code to reproduce is:
```
from torchvision import models
from torch import nn, optim
# Def…
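
# For reference, a minimal post-training INT8 quantization flow with modelopt
# looks roughly like the sketch below; `calib_loader` is a placeholder for a
# calibration DataLoader, not part of my actual script.
import modelopt.torch.quantization as mtq

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval().cuda()

def forward_loop(m):
    # run a few batches so the quantizers can calibrate activation ranges
    for images, _ in calib_loader:
        m(images.cuda())

# quantize weights/activations to INT8 using the default config
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)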
-
### Discussed in https://github.com/r-devel/r-project-sprint-2023/discussions/4
Originally posted by **giscus[bot]** June 26, 2023
# R Project Sprint 2023 - Addressing Bugs in nlme
Addressi…
-
The package has no function that allows calculating APE and AME. When I try to use the margins::margins() function, which calculates the aforementioned values for non-linear models (such as Logit and…
-
Thank you so much for your excellent and inspiring work!!!
I could reproduce the exciting performance using your pre-trained model. However, I failed to reproduce it by re-training y…
-
At the ROS level, the models seem to be present:
```
$ rostopic echo /gazebo/model_states
---
name: ['apollo15_landing_site_1000x1000', 'tetris']
pose:
  -
    position:
      x: 0.0
      y: 0.0
      z…
-
Hi,
I want to know how to merge only the transformer layers and the input embedding layer, while keeping the original `lm_head`. Is this possible?
-
Hi, I'm new to NLP and trying to pre-train a transformer, but the default embedding dimension is high, so I added a linear layer following the demo below:
```
from sentence_transformers import SentenceTrans…
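
# Roughly, what I have follows the docs' Dense-layer example; the base model
# name and the output dimension (256) below are just example values.
from sentence_transformers import SentenceTransformer, models
from torch import nn

word_embedding_model = models.Transformer("bert-base-uncased", max_seq_length=256)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())

# Dense layer projects the pooled sentence embedding down to a smaller dimension
dense_model = models.Dense(
    in_features=pooling_model.get_sentence_embedding_dimension(),
    out_features=256,
    activation_function=nn.Tanh(),
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])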