The-3Labs-Team / tinymce-chatgpt-plugin

🤖 A TinyMCE plugin for ChatGPT (GPT models), OpenAI, or custom LLM-compatible endpoints
https://3labs.it
MIT License

Security of the OpenAI key? #7

Closed nekhbet closed 3 months ago

nekhbet commented 3 months ago

How is it handled?

Thanks!

murdercode commented 3 months ago

By the nature of a TinyMCE plugin, the key is exposed to the client. Since this is a client-side library, there is no truly secure way to protect it, so the plugin should be treated as suitable only for personal or team use among trusted individuals.

One solution is to point the plugin at a custom LLM endpoint that acts as a reverse proxy to a server-side script, which can keep the key secret.

As specified in the package, it is a project for hobby and personal use ;)

nekhbet commented 3 months ago

Thank you for explaining.

willcastillo commented 2 months ago

It would be great if the plugin made a call to the server instead of directly to OpenAI. In addition to keeping the API key secure-ish and letting us use this in a SaaS environment, it would open up possibilities for how to use it. Just imagine being able to define a system message and additional context for the requests!

murdercode commented 2 months ago

> It would be great if the plugin made a call to the server instead of directly to OpenAI. In addition to keeping the API key secure-ish and letting us use this in a SaaS environment, it would open up possibilities for how to use it. Just imagine being able to define a system message and additional context for the requests!

It can be achieved using a custom endpoint; however, developing one is beyond the scope of this package. The package is meant for personal use in environments where the API key may be exposed. For a shared solution, an ad hoc backend must be created (or the paid TinyMCE AI offering used).
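For illustration, here is a browser-side sketch of what that looks like from the editor's perspective, assuming a same-origin backend exposed at a hypothetical path `/ai/chat` (the path, payload shape, and `fetchImpl` parameter are assumptions for this example, not this plugin's API). No API key ever ships to the client:

```javascript
// Browser-side sketch: send the prompt to a same-origin backend
// (assumed path '/ai/chat') instead of calling OpenAI directly.
// fetchImpl is injectable purely to make the function testable.
async function askAssistant(prompt, fetchImpl = fetch) {
  const res = await fetchImpl('/ai/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }), // only the prompt crosses the wire
  });
  if (!res.ok) throw new Error(`Proxy error: ${res.status}`);
  return res.json(); // the backend relays OpenAI's JSON response
}
```

Because the system message and any extra context live on the backend, a SaaS deployment can vary them per tenant without touching the client code.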