-
I'm trying to run the example with 2 H100x8 nodes to test the GPUDirect-TCPX speed.
There are some credential issues when calling "gsutil cp", so I've created a local Docker image (h100launcher in the c…
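If it's the default credential chain that's failing, one workaround is to copy the object with the `google-cloud-storage` Python client and an explicit service-account key instead of `gsutil cp`. A minimal sketch, where the key file, bucket, and object names are all placeholders:
```python
# Sketch: copy a GCS object with explicit credentials instead of relying on
# the ambient gsutil/gcloud credential chain. All names are placeholders.
from google.cloud import storage

def fetch_artifact(bucket_name: str, object_name: str, dest_path: str) -> None:
    # Authenticate with an explicit service-account key file rather than
    # the default credential lookup that gsutil performs.
    client = storage.Client.from_service_account_json("sa-key.json")
    blob = client.bucket(bucket_name).blob(object_name)
    blob.download_to_filename(dest_path)

if __name__ == "__main__":
    fetch_artifact("my-artifacts-bucket", "h100/launcher.tar", "./launcher.tar")
```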
-
-
Prompt
```
> create a tool that 1. pulls the top 1000 coins from coingecko 2. divides the 24h price change by the circulating market cap 3. sorts them by the adjusted price change 4. lists top 10 ga…
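```
For reference, a rough sketch of what such a tool could look like against CoinGecko's public `/coins/markets` endpoint (field names follow the v3 API; the "adjusted" metric below simply divides `price_change_24h` by `market_cap`, which CoinGecko computes from circulating supply):
```python
# Sketch of the tool described in the prompt, using CoinGecko's public v3 API.
import requests

API = "https://api.coingecko.com/api/v3/coins/markets"

def top_coins(n: int = 1000, per_page: int = 250) -> list[dict]:
    # Pull the top-n coins by market cap, 250 per page (the API maximum).
    coins = []
    for page in range(1, n // per_page + 1):
        resp = requests.get(API, params={
            "vs_currency": "usd",
            "order": "market_cap_desc",
            "per_page": per_page,
            "page": page,
        }, timeout=30)
        resp.raise_for_status()
        coins.extend(resp.json())
    return coins

def adjusted_gainers(coins: list[dict], k: int = 10) -> list[tuple[str, float]]:
    scored = []
    for c in coins:
        change = c.get("price_change_24h")   # 24h price change in USD
        mcap = c.get("market_cap")           # market cap based on circulating supply
        if change is not None and mcap:
            scored.append((c["id"], change / mcap))
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]

if __name__ == "__main__":
    for coin_id, score in adjusted_gainers(top_coins()):
        print(f"{coin_id}: {score:.3e}")
```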
-
This issue explains how to host the LLM model locally.
For all the solutions listed below, `ngrok.com` (or any similar tool) can be used to share the local AI server to other people.
We ha…
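As one concrete illustration, once the local server (or an ngrok tunnel in front of it) is up, any OpenAI-compatible client can talk to it. A minimal sketch, where the tunnel URL and model id are placeholders:
```python
# Sketch: point an OpenAI-compatible client at a locally hosted model that has
# been exposed through an ngrok tunnel. URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-tunnel.ngrok-free.app/v1",  # hypothetical ngrok URL
    api_key="not-needed-for-local-servers",               # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the id your server reports
    messages=[{"role": "user", "content": "Hello from a remote client!"}],
)
print(resp.choices[0].message.content)
```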
-
Hi, I'm having this issue with connecting to external LLMs.
Environment of the server hosting the remote LLM:
- AMD 7950X3D
- 64 GB RAM
- 2x 7900 XTX
- Using LM Studio for hosting the LLM server
Environment Cli…
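For what it's worth, a quick way to separate network problems from client configuration is to hit the LM Studio server directly over its OpenAI-compatible HTTP API (port 1234 by default). A minimal sketch, where the host address and model id are placeholders:
```python
# Sketch: connectivity check against the LM Studio machine's OpenAI-compatible
# endpoints before involving any other client. Host and model are placeholders.
import requests

BASE = "http://192.168.1.50:1234/v1"  # hypothetical LAN address of the LM Studio box

# List the models the server actually exposes.
models = requests.get(f"{BASE}/models", timeout=10)
models.raise_for_status()
print(models.json())

# Minimal chat completion to confirm the model responds end to end.
chat = requests.post(
    f"{BASE}/chat/completions",
    json={
        "model": "local-model",  # placeholder; use an id returned by /models
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
chat.raise_for_status()
print(chat.json()["choices"][0]["message"]["content"])
```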
-
## The problem
Users who create memberships and wish to provide access to a large number of courses end up using the course auto-enrollment feature to add every course to the membership.
The UX …
-
It seems that GPT-3.5 isn't always consistent with its output, and Llama-2-13B has the same issue of extra output.
Sequential runs using GPT-3.5; note the usage of mac/ubuntu and how the prompt is …
-
```
from langchain_openai import ChatOpenAI
import pandas as pd
glm4_base_client = ChatOpenAI(model="glm-4v-9b",
api_key="your_api_key",
base…
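```
For context, a complete, runnable version of that snippet, assuming glm-4v-9b is served behind an OpenAI-compatible endpoint; the `base_url` and API key below are placeholders:
```python
# Minimal runnable sketch of the snippet above; the endpoint URL and API key
# are placeholders for whatever actually serves glm-4v-9b in this setup.
from langchain_openai import ChatOpenAI

glm4_base_client = ChatOpenAI(
    model="glm-4v-9b",
    api_key="your_api_key",
    base_url="http://localhost:8000/v1",  # hypothetical OpenAI-compatible server
)

print(glm4_base_client.invoke("Describe this model in one sentence.").content)
```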
-
### What would you like to see?
I would like to know if Anything LLM supports a feature that allows integration with external APIs, specifically the Dify API. I already have the API URL and API key from …
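In the meantime, the Dify API can be called directly over HTTP, which may help while waiting for native support. A minimal sketch, where the base URL and app key are placeholders (self-hosted Dify instances use their own host):
```python
# Sketch: call a Dify chat app directly over its HTTP API. Base URL and API
# key are placeholders; self-hosted Dify deployments use their own host.
import requests

DIFY_BASE = "https://api.dify.ai/v1"   # or the URL of a self-hosted instance
DIFY_KEY = "app-xxxxxxxx"              # placeholder app API key

resp = requests.post(
    f"{DIFY_BASE}/chat-messages",
    headers={"Authorization": f"Bearer {DIFY_KEY}"},
    json={
        "inputs": {},
        "query": "Hello from an external client",
        "response_mode": "blocking",   # single JSON response instead of streaming
        "user": "external-integration-test",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["answer"])
```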
-
Hydrating the default prompt chokes on hydrating the default examples.
This can be reproduced by running the tests:
```
go test ./...
? github.com/google/go-react/examples/app-editor [no tes…