joelostblom / viz-oer

An interactive open educational resource for learning data visualization
https://joelostblom.github.io/viz-oer/

your usage of an LLM proxy behind the {ojs} code-blocks? #21

Open · Analect opened this issue 2 months ago

Analect commented 2 months ago

@joelostblom, @rorywhite200 I hope you don't mind me asking. I happened upon your repo via an issue raised on the quarto-live repo referencing Altair. I was interested in the techniques others are using with Quarto to present and teach topics with interactive coding, which the wasm-enabled quarto-live should be great for.

I also noticed how you are using LLM chat in a novel way, integrated in the {ojs} blocks, per here. I spun up a quarto preview against your branch and tested it out. I hope you don't mind.

I was wondering whether you are using something like BricksLLM to manage that proxy API into the OpenAI service, or whether it's something enabled by OpenAI themselves. I'd be curious to better understand whether it's possible to hook any model endpoint in there, as long as it mimics the OpenAI API reference.

Thanks for your input.

joelostblom commented 2 months ago

Thanks for your message @Analect, you bring up an important point and we'll temporarily make this repo private to fix an issue related to what you mention. We'll respond with a longer message in a few days once this is fixed.

Analect commented 2 months ago

@joelostblom ... thanks for this update. I obviously wasn't able to see it once you had taken the repo private, albeit temporarily. If you are able to add more colour on how I might use my own LLM endpoint, and whether your approach uses a standard OpenAI-spec API such that it can interface with the likes of litellm, that would be great.

joelostblom commented 1 month ago

Sorry about that @Analect, I thought you would still get the notification about my comment, but I guess that only works via notification emails.

I believe you should be able to switch out the backend model as you wish. As you can see in https://github.com/joelostblom/viz-oer/blob/main/textbook/chat.js, we use the standard OpenAI API spec and send the request to an OpenAI server that has some notable restrictions on it to avoid abuse. @rorywhite200 feel free to fill in if I missed something.
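
In rough terms the request follows the usual OpenAI chat-completions shape, so pointing it at any OpenAI-compatible endpoint (a litellm proxy, for example) should mostly be a matter of changing the base URL and key. Here is a simplified sketch of that kind of call (not the exact code in chat.js; the URL, key, and model name below are placeholders):

```js
// Simplified sketch of an OpenAI-spec chat request. The base URL is an
// assumption -- point it at your own OpenAI-compatible proxy (e.g. a local
// litellm instance) as long as it exposes /v1/chat/completions.
const BASE_URL = "http://localhost:4000/v1"; // hypothetical litellm proxy
const API_KEY = "sk-...";                    // whatever key your proxy expects

async function sendChat(messages) {
  const response = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // any model name your proxy routes
      messages,             // [{ role: "user", content: "..." }, ...]
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```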

rorywhite200 commented 1 month ago

Thanks @joelostblom. @Analect just to add, we are using our own Heroku proxy server that forwards our requests to OpenAI. It allows us to control rate limiting and which requests we accept. We have implemented fairly strict requirements for the content of the requests, so even though it uses the OpenAI API spec, there is an additional layer of screening.
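
Conceptually the proxy does something like the sketch below. This is illustrative only, not our actual Heroku code; the screening rule and rate limit shown are placeholders:

```js
// Minimal sketch of an OpenAI-forwarding proxy with rate limiting and
// request screening (illustrative; not the real server's rules or limits).
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(express.json());
app.use(rateLimit({ windowMs: 60_000, max: 20 })); // e.g. 20 requests/min per IP

app.post("/v1/chat/completions", async (req, res) => {
  // Screen the request content before forwarding (placeholder rule).
  const messages = req.body?.messages ?? [];
  const acceptable = messages.every(
    (m) => typeof m.content === "string" && m.content.length < 2000
  );
  if (!acceptable) {
    return res.status(400).json({ error: "Request rejected by screening" });
  }

  // Forward the OpenAI-spec request upstream using the server-side key.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(process.env.PORT ?? 3000);
```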