-
Hello,
I am testing GEUV.compute_weights.sh on just the 1000G.EUR population and keep running into issues.
It never creates a .hsq file and gives me this error, indicating that the tmp file tmp…
-
Hi,
I have noticed that the gemma output, when the bslmm option is used, is written to an "output/" folder, but FUSION.compute_weights.R tries to read from the current directory instead of the output fold…
-
Hello, I'm trying to run plink-qc.nf on my Linux HPC cluster with:
nextflow run -c my_nf.config /Users/mchiment/.nextflow/assets/h3abionet/h3agwas/plink-qc.nf -profile sgeSingularity --samplesheet…
-
### Your current environment
```text
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per c…
-
## Goal
Create a bot that can answer user questions with a RAG framework over government data sourced by parsing PDFs.
## Description
We have a number of PDFs in Hindi Engli…
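A minimal, stdlib-only sketch of the retrieval step such a bot could use, assuming the PDFs have already been parsed into plain-text passages. The function names and sample documents here are hypothetical; a production RAG system would typically use embeddings and a vector store rather than this bag-of-words TF-IDF.

```python
# Hypothetical retrieval step for a RAG pipeline: rank parsed PDF
# passages against a user query with a simple TF-IDF score.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def tf_idf_scores(query, docs):
    """Score each document against the query (higher = more relevant)."""
    n = len(docs)
    doc_tokens = [Counter(tokenize(d)) for d in docs]
    scores = []
    for tokens in doc_tokens:
        score = 0.0
        for term in set(tokenize(query)):
            # Document frequency: how many passages contain the term.
            df = sum(1 for t in doc_tokens if term in t)
            if df:
                score += tokens[term] * math.log((1 + n) / (1 + df))
        scores.append(score)
    return scores

# Toy passages standing in for parsed PDF text.
docs = [
    "pension scheme eligibility rules",
    "road construction tender notice",
    "pension payment dates",
]
print(tf_idf_scores("pension rules", docs))
```

The top-scoring passages would then be passed to the LLM as context for answer generation.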
-
These models are available in live, backtesting, and research in the cloud environment.
Access installed models and their revisions:
```python
from huggingface_hub import scan_cache_dir
…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC ve…
-
The model list I get is:
```json
{
"models": [
{
"datasetName": null,
"datasetUrl": null,
"description": "Command R+ is Cohere's latest LLM and is the first op…
-
### What is the issue?
Ollama v0.1.33
Intel Core i9-14900K, 96 GB RAM
Nvidia RTX 4070 Ti Super, 16 GB VRAM
Attempts to load the `gemma:7b-instruct-v1.1-fp16` model are failing.
I have tried
* restarting O…
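For context, a back-of-envelope estimate suggests the fp16 weights alone nearly fill a 16 GiB card. This sketch assumes Gemma "7B" actually has roughly 8.5 billion parameters (an assumption, not stated in the report above):

```python
# Rough VRAM estimate for fp16 weights (assumption: ~8.5e9 parameters
# for Gemma 7B; fp16 stores 2 bytes per parameter).
params = 8.5e9
bytes_per_param = 2  # fp16
weights_gib = params * bytes_per_param / 2**30
print(f"fp16 weights: {weights_gib:.1f} GiB")
# Weights alone leave almost no headroom on a 16 GiB GPU for the
# KV cache and runtime overhead, so the load can fail or fall back.
```

If the estimate holds, loading a smaller quantization (e.g. a q4 or q8 tag) would be the usual workaround on a 16 GiB GPU.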
-
### Summary
The Hub has grown fast in recent months, and as such we need to make sure the models are reproducible and give the expected results, as some small bugs might have happened inadvertent…