-
**Problem**
No sources are working with kalosm::language: error `Unexpected status code: 401`
**Steps To Reproduce**
Example code:
```rust
use kalosm::language::*;
#[tokio::main]
async fn main()…
-
I would love a tutorial on how to add an existing model from the candle examples.
For example, I want to add CLIP, StarCoder, and RWKV, to name a few.
-
**Problem**
Trying to run the basic example from README.
```rust
use kalosm::language::*;
#[tokio::main]
async fn main() -> Result {
let model = Llama::phi_3().await?;
let mut cha…
-
You should be able to train some models based on a workflow in Floneum using either [burn](https://github.com/tracel-ai/burn) or [candle](https://github.com/huggingface/candle).
This can be…
-
## Specific Demand
Kalosm-language should support batched generation for faster local inference. This would be very useful when generating many unrelated streams of text.
## Implement Suggestion
…
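As an illustration of the kind of batching the request describes (a minimal sketch with a stubbed "model", not the actual Kalosm internals): a worker drains pending requests into a batch so unrelated streams share one model call.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type: a prompt plus a channel to send output back on.
struct Request {
    prompt: String,
    reply: mpsc::Sender<String>,
}

// Drain up to `max_batch` pending requests, then run them through one
// (stubbed) forward pass so unrelated streams share the same model call.
fn serve(rx: mpsc::Receiver<Request>, max_batch: usize) {
    while let Ok(first) = rx.recv() {
        let mut batch = vec![first];
        while batch.len() < max_batch {
            match rx.try_recv() {
                Ok(req) => batch.push(req),
                Err(_) => break,
            }
        }
        // Stub "model": echo the prompt. A real backend would decode one
        // token per sequence per step across the whole batch.
        for req in &batch {
            let _ = req.reply.send(format!("echo: {}", req.prompt));
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || serve(rx, 4));
    let mut replies = Vec::new();
    for p in ["alpha", "beta", "gamma"] {
        let (rtx, rrx) = mpsc::channel();
        tx.send(Request { prompt: p.to_string(), reply: rtx }).unwrap();
        replies.push(rrx);
    }
    drop(tx); // close the queue so the worker exits
    worker.join().unwrap();
    for r in replies {
        println!("{}", r.recv().unwrap());
    }
}
```

The same queue-and-drain shape would let a real backend amortize the per-step overhead of the forward pass across every active sequence.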
-
- https://github.com/huggingface/candle/tree/main/candle-examples/examples/segment-anything
- https://keras.io/examples/vision/sam/
- Maybe with EfficientSAM: https://github.com/yformer/EfficientSAM
-
Hi,
I'd like to rig one of the examples into a service, where the service (HTTP) receives a prompt and runs `TextGeneration`. As it stands, `TextGeneration` wants to _own_ the model and tokenizer, which mean…
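One common workaround for this ownership constraint (a sketch with a placeholder `Model` type, not the actual Kalosm API) is to load the model once, put it behind `Arc<Mutex<…>>`, and hand a clone to each request handler:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Placeholder for the owned model + tokenizer pair; in Kalosm the real
// types would go here (the names below are illustrative, not the API).
struct Model;

impl Model {
    fn generate(&mut self, prompt: &str) -> String {
        format!("reply to: {prompt}")
    }
}

// Each "handler" clones the Arc instead of taking ownership, so one
// loaded model can serve many concurrent requests.
fn handle(model: Arc<Mutex<Model>>, prompt: &str) -> String {
    model.lock().unwrap().generate(prompt)
}

fn main() {
    let model = Arc::new(Mutex::new(Model));
    let handles: Vec<_> = ["a", "b"]
        .iter()
        .map(|p| {
            let m = Arc::clone(&model);
            let p = p.to_string();
            thread::spawn(move || handle(m, &p))
        })
        .collect();
    for h in handles {
        println!("{}", h.join().unwrap());
    }
}
```

In an async HTTP framework the shape is the same: the shared handle lives in the server's state and each request borrows it, rather than every request constructing its own owning `TextGeneration`.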
-
**Problem**
Hey there, I hope it's OK to post some feedback as a user attempting to use Kalosm for the first time.
Getting started using Kalosm has some rough edges right now that seem to have some…
-
## Specific Demand
There are a lot of servers that are compatible with the OpenAI API, such as [ollama](https://ollama.com/blog/openai-compatibility) and [edgen](https://edgen.co/).
## Implement Sugges…
-
## Specific Demand
When using batches, it can be difficult to find the right batch size, especially given the wide variety of hardware in end-user applications.
## Implement Suggestion
Instead …
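The suggestion above is truncated, so as one illustration of what auto-tuning could look like (my own sketch, not necessarily what this issue proposes): probe increasing batch sizes and keep growing while per-item throughput improves on the current hardware.

```rust
use std::time::Instant;

// One possible auto-tuning strategy (an assumption, not this issue's
// actual proposal): double the batch size while per-item time keeps
// improving, and stop at the first regression.
fn tune_batch_size<F: FnMut(usize)>(mut run_batch: F, max: usize) -> usize {
    let mut best = 1;
    let mut best_per_item = f64::INFINITY;
    let mut size = 1;
    while size <= max {
        let start = Instant::now();
        run_batch(size);
        let per_item = start.elapsed().as_secs_f64() / size as f64;
        if per_item < best_per_item {
            best_per_item = per_item;
            best = size;
        } else {
            break; // larger batches stopped helping on this hardware
        }
        size *= 2;
    }
    best
}

fn main() {
    // Fake workload: fixed overhead plus linear per-item cost, so the
    // per-item time falls as the batch grows (up to the cap).
    let chosen = tune_batch_size(
        |n| std::thread::sleep(std::time::Duration::from_millis(5 + n as u64)),
        16,
    );
    println!("chosen batch size: {chosen}");
}
```

Running the probe once at startup (or lazily on the first request) would let the same binary pick a sensible batch size on both a laptop GPU and a server card.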