According to the logic of the TinyMCE plugin, the key is exposed to the client. Since this is a client-side library, there is no truly secure way to protect it, and it should be treated as suitable only for personal/team use among trusted individuals.
One solution is to point the plugin at a custom LLM endpoint that acts as a reverse proxy to a server-side script, which can keep the key secret.
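For illustration, a minimal sketch of such a reverse proxy, assuming a Node 18+ / Express server; the `/api/ai` route and the pass-through request shape are hypothetical, not part of the plugin:

```ts
import express from "express";

const app = express();
app.use(express.json());

// The OpenAI key lives only in the server environment,
// never in the browser bundle.
app.post("/api/ai", async (req, res) => {
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    // Forward the client's request body as-is; validate it in real use.
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

The browser only ever talks to `/api/ai`; the key stays in the server environment.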
As stated in the package description, it is a project for hobby and personal use ;)
Thank you for explaining.
It would be great if the plugin made its calls to a server instead of directly to OpenAI. In addition to keeping the API key secure-ish and enabling use in a SaaS environment, it would open up possibilities for how to use it. Just imagine being able to define a system message and additional context for the requests!
This can be achieved with a custom endpoint; however, developing one is beyond the scope of this package. The package is intended for personal use, in environments where the API key may be exposed. For a shared solution, an ad hoc one must be built (or TinyMCE's paid offering used).
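As a hedged sketch of what such a custom endpoint could do, here is a variation of the proxy handler above in which the server injects a system message and per-request context before forwarding. The route, the `context` field name, the system prompt, and the model are assumptions for illustration, not the plugin's API:

```ts
// Variation of the /api/ai handler above: the server, not the browser,
// decides the system message and any extra context.
app.post("/api/ai", async (req, res) => {
  const messages = [
    // Server-defined system message, never shipped to the client.
    { role: "system", content: "You are an editorial assistant for our CMS." },
    // Optional per-request context field (hypothetical name).
    ...(req.body.context
      ? [{ role: "user", content: `Context: ${req.body.context}` }]
      : []),
    ...req.body.messages,
  ];
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });
  res.status(upstream.status).json(await upstream.json());
});
```

Since the system message and context are assembled server-side, a SaaS deployment can change them per tenant without touching the client code.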
How is it handled?
Thanks!