-
Hello AdapterHub team,
I'm working with the **T5-small** model for text summarization. I fine-tuned it and also trained an adapter on the same dataset, on the same machine, and with the same confi…
-
### Please describe your question
### Inference works fine when FastGeneration is loaded, but after commenting out load("FastGeneration", verbose=True) and running inference directly on the GPU, an error is raised
---------------------------------------------------------------------------
RuntimeError …
-
# Disclaimer
Participation by NIST in the creation of the documentation of mentioned software is not intended to imply a recommendation or endorsement by the National Institute of Standards and Tec…
-
Great Work! Can you provide **Chart-Text Alignment Data**? Or how to separate this apart from instruction tuning data?
-
**Is your feature request related to a problem? Please describe.**
Athina AI has open-sourced its approach to text summarization.
We need to implement this in some sort of summarization cr…
-
## Summary
WasmEdge is a lightweight inference runtime for AI and LLM applications. We want to build specialized, finetuned models for the WasmEdge community. The model should be supported by WasmEd…
-
Hi,
Command-R has a 128K context window, RAG support with grounded generation, and multilingual capabilities.
It seems to be a strong 35B model, especially for summarizing long texts, Middle Eastern languages, and more.
The mo…
-
### Python 3.9.13
* How do you deploy Kubeflow Pipelines (KFP)?
* KFP version: 2.7.0
* KFP SDK version:
### Steps to reproduce
I have created the following functions and compone…
-
Hi,
I'm trying to finetune BioMedLM for medical question answering on our custom dataset using Hugging Face's Transformers library. Since we're looking to optimize memory usage, we're usi…
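The snippet is cut off before the memory-optimization details, so as a hedged illustration only: one common memory-saving technique when finetuning large models is gradient checkpointing (Transformers exposes it via `model.gradient_checkpointing_enable()`), which recomputes activations during the backward pass instead of storing them. A minimal plain-PyTorch sketch of the idea, with all model/class names hypothetical:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical toy block standing in for a transformer layer.
class Block(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.ff(x)

class ToyModel(nn.Module):
    def __init__(self, dim=32, depth=4, use_checkpointing=False):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))
        self.use_checkpointing = use_checkpointing

    def forward(self, x):
        for block in self.blocks:
            if self.use_checkpointing and self.training:
                # Activations inside `block` are not stored; they are
                # recomputed during backward, lowering peak memory at the
                # cost of extra compute.
                x = checkpoint(block, x, use_reentrant=False)
            else:
                x = block(x)
        return x

torch.manual_seed(0)
model = ToyModel(use_checkpointing=True)
x = torch.randn(8, 32, requires_grad=True)
loss = model(x).sum()
loss.backward()  # gradients still flow through the checkpointed blocks
```

This is a sketch of the general technique, not BioMedLM-specific code; in practice you would call `gradient_checkpointing_enable()` on the loaded model rather than wiring `checkpoint` by hand.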
-
### Title
Doesn't work with local Ollama llama3 models
### Description
I've set the base URL to a local Ollama instance and am using downloaded llama3 models; it can interact with the models, but it could not pe…