-
As documented in the images below, there are some teething issues with the front end for the couplings.
Namely:
1. If coupling is toggled on/off multiple times, a model may be shown to be coupled m…
-
[`payu`](https://github.com/payu-org/payu/) runs and then re-submits itself so that it can run a model a number of times sequentially.
It passes state information, like the number of runs remainin…
-
Hello there!
Using the model exporter notebook only seems to output the onnx.json config file. I've tried several times on different days with different settings, and the result is always the same! No on…
-
**Description**
Using tritonserver with deferred model loading (`--model-control-mode=explicit`) for the llava-mixtral-8x7b model, there is a chance that when my client initiates load_model, it triggers the serve…
-
Hi!
I've noticed that this code has a problem: it keeps loading and reloading the 3D model. You can see it for yourself if you open DevTools and look at the Network tab.
This doesn't happ…
-
I have tried the meta-llama/Llama-3-8b-chat-hf chat model and the togethercomputer/m2-bert-80M-8k-retrieval embedding model with embeddingDimension: 768, but I'm getting the following error many times when …
-
OpenAI released Whisper-Turbo, a drop-in replacement for the large model: multilingual, 8x faster, and lower in memory use, with minimal degradation in performance.
https://github.com/openai/whisper
-
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/ll…
-
Really great work! May I ask for the St1 and St2 inference times to produce a 3D model on, e.g., a 3090 Ti or 4090?
Thanks in advance.
-
Hi there,
We have been using this package for a long time. First of all, thanks for this package.
Recently I noticed an exception called `UnserializeFailedException`. This exception is thrown because the co…