-
Hi!
I've been struggling with convergence issues when fitting models with glmmTMB().
I have tried several different transformations of my response variable and found that s…
-
I made a demo that loads a 4-bit-quantized Llama 2; it seems to work and uses about 6 GB of GPU RAM.
Limitations:
- supports only Llama, and only 4-bit quantization
- tested only with Llama-7b; haven't tested yet …
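Not the demo itself, but to illustrate where the ~6 GB figure plausibly comes from: 4-bit weights pack two values per byte, so a 7B-parameter model needs roughly 3.5 GB for the weights alone, with the rest going to activations, KV cache, and overhead. A toy NumPy sketch of the packing (the `pack4`/`unpack4` helper names are mine, not from the demo):

```python
import numpy as np

def pack4(codes):
    # codes: uint8 array of 4-bit values (0..15), even length;
    # pack two codes into each output byte (low nibble first).
    return (codes[0::2] | (codes[1::2] << 4)).astype(np.uint8)

def unpack4(packed):
    # Recover the original 4-bit codes from the packed bytes.
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2] = packed & 0x0F
    out[1::2] = packed >> 4
    return out

rng = np.random.default_rng(0)
codes = rng.integers(0, 16, size=1024, dtype=np.uint8)
packed = pack4(codes)
assert packed.nbytes == codes.size // 2        # half the storage
assert np.array_equal(unpack4(packed), codes)  # lossless round-trip

# 7e9 params * 0.5 bytes/param ≈ 3.5 GB for the weights alone.
print(7e9 * 0.5 / 1e9)
```

The real loader would also store per-group scales/zero-points, which adds a small fraction on top of the 0.5 bytes per weight.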
-
Here is my understanding of the current state of things and what I think we should be doing to make our lower-bit kernels more performant at both small and large batch sizes. I'm making this an RFC …
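To make the small- vs large-batch distinction concrete, here is a toy NumPy sketch (not our actual kernels; the int4-style per-channel scheme and all names are illustrative) of the two standard strategies: at small batch the matmul is memory-bound, so it pays to keep weights quantized and fold the scale into the accumulator; at large batch it is compute-bound, so dequantizing once and calling a dense GEMM amortizes better. Both produce the same result:

```python
import numpy as np

rng = np.random.default_rng(0)
out_f, in_f = 64, 128

# Illustrative per-output-channel symmetric quantization of a weight matrix.
W = rng.standard_normal((out_f, in_f)).astype(np.float32)
scales = np.abs(W).max(axis=1, keepdims=True) / 7.0
Q = np.clip(np.round(W / scales), -8, 7).astype(np.int8)

x = rng.standard_normal((4, in_f)).astype(np.float32)  # small batch of activations

# Strategy A (small batch): keep weights quantized, accumulate, and
# apply the per-channel scale once at the end ("fused dequant").
y_fused = (x @ Q.T.astype(np.float32)) * scales.T

# Strategy B (large batch): dequantize the whole matrix up front,
# then run a dense GEMM; the dequant cost is amortized over many rows.
W_deq = Q.astype(np.float32) * scales
y_dense = x @ W_deq.T

assert np.allclose(y_fused, y_dense, atol=1e-4)
```

The crossover batch size between the two strategies is exactly the kind of thing this RFC should pin down with benchmarks.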
-
**Title** Nistats: the General Linear Model, fast and easy
**Presenter and Affiliation**
Bertrand Thirion, Inria
**Collaborators**
Nistats is developed by a growing international community fr…
-
This looks like an extension of the Satterthwaite df or Welch ANOVA df.
Johansen is not a quick read (I didn't understand much while skimming it).
The advantage is that there is a literature that …
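For reference, the Welch-Satterthwaite approximation behind those df is short enough to write out; a minimal Python sketch (the function name is mine):

```python
def welch_satterthwaite_df(variances, ns):
    """Approximate degrees of freedom for a weighted combination of
    sample variances (Welch-Satterthwaite):
        nu = (sum s_i^2/n_i)^2 / sum (s_i^2/n_i)^2 / (n_i - 1)
    """
    terms = [v / n for v, n in zip(variances, ns)]
    num = sum(terms) ** 2
    den = sum(t ** 2 / (n - 1) for t, n in zip(terms, ns))
    return num / den

# Two-sample Welch case: s1^2 = 4 with n1 = 10, s2^2 = 9 with n2 = 15.
df = welch_satterthwaite_df([4.0, 9.0], [10, 15])
```

With equal variances and equal group sizes n, this reduces to 2(n - 1), the usual pooled df for two groups, which is a quick sanity check on any generalization.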
-
I've been training some models for weeks, but now I get this when I run the training cell:
Traceback (most recent call last):
File "/content/diffusers/examples/dreambooth/train_dreambooth.py", lin…
-
Hi!
I see you use **metafor::rma()** to run a meta-analysis via linear (mixed-effects) models in [limma_meta_analysis.R](https://github.com/GabrielHoffman/misc_vp/blob/master/limma_meta_analysis.R)…
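For intuition about what rma() is doing under the hood, the inverse-variance pooling can be sketched in a few lines. This is a plain fixed-effect estimate plus the DerSimonian-Laird between-study variance; note rma()'s default estimator is REML, so its numbers will differ slightly (helper names are mine):

```python
import math

def pool_fixed(effects, variances):
    # Inverse-variance-weighted (fixed-effect) pooled estimate and SE.
    w = [1.0 / v for v in variances]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

def dersimonian_laird_tau2(effects, variances):
    # Method-of-moments between-study variance (DerSimonian-Laird).
    w = [1.0 / v for v in variances]
    est, _ = pool_fixed(effects, variances)
    Q = sum(wi * (yi - est) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (Q - (len(effects) - 1)) / c)

est, se = pool_fixed([0.3, 0.5, 0.2], [0.04, 0.09, 0.05])
tau2 = dersimonian_laird_tau2([0.3, 0.5, 0.2], [0.04, 0.09, 0.05])
```

A random-effects model then re-pools with weights 1 / (v_i + tau2), which is where the two approaches diverge when studies are heterogeneous.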
-
So our DQN training is exceptionally slow. My current projection for 10,000 episodes is 550 days, and that's extrapolated from a sample of only 50 episodes. Later in the training we should be reaching…
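One common culprit at this scale is a per-transition Python loop in the update step; vectorizing the Bellman targets over the whole replay batch usually gives an order-of-magnitude speedup. A hedged NumPy sketch (array names and the 4-action shape are illustrative, not from our code):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = 64
gamma = 0.99

# Illustrative replay-batch tensors (shapes only; no real environment here).
q_next = rng.standard_normal((batch, 4)).astype(np.float32)  # Q(s', .) for 4 actions
rewards = rng.standard_normal(batch).astype(np.float32)
dones = rng.integers(0, 2, batch).astype(np.float32)

# Vectorized Bellman targets: one array expression for the whole batch.
targets = rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# The equivalent per-transition loop, kept here only to verify equality;
# in training code this loop is the thing to eliminate.
loop_targets = np.array([
    r + gamma * (1.0 - d) * qn.max()
    for r, d, qn in zip(rewards, dones, q_next)
], dtype=np.float32)
assert np.allclose(targets, loop_targets)
```

If the environment itself dominates the wall clock rather than the update, the analogous fix is stepping several environments in parallel, which this sketch doesn't cover.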
-
I have fitted some equivalent models with gamlj::gamljGlmMixed() and with lme4::glmer(). I was under the impression that gamljGlmMixed() was a wrapper around glmer() and would produce equivalent results,…
-
I think it would be really helpful to allow a mixture of degrees in a model. This could be passed to the degree parameter as a list or NumPy array. This has real-life applications: https://pdfs.seman…
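To sketch what a per-feature degree could look like (a hypothetical helper of my own, not an existing API, assuming degree controls a plain polynomial expansion without interaction terms):

```python
import numpy as np

def mixed_degree_design(X, degrees):
    """Column-wise polynomial expansion where each input feature gets
    its own maximum degree (hypothetical sketch of the proposal)."""
    cols = [np.ones((X.shape[0], 1))]          # intercept column
    for j, d in enumerate(degrees):
        for p in range(1, d + 1):              # powers 1..d for feature j
            cols.append(X[:, j:j + 1] ** p)
    return np.hstack(cols)

X = np.array([[2.0, 3.0],
              [1.0, 4.0]])
Z = mixed_degree_design(X, degrees=[2, 1])
# Columns: intercept, x1, x1^2, x2 -> shape (2, 4)
```

A full implementation would also have to decide how cross-feature interaction terms behave when the per-feature degrees differ, which is the main design question for the proposal.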