[...] For example, if I put in "sadistic", how is that sent to Pygmalion [...]?
The code that actually builds the prompt that gets sent to the model is in this file: https://github.com/PygmalionAI/gradio-ui/blob/master/src/prompting.py
For more details about what the prompt looks like, you can refer to "The manual way" under the "Intended use" section on the HuggingFace model card.
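To make the idea concrete, here is a minimal sketch of how a persona string ends up in front of the model, following the prompt format described on the model card. The function name `build_prompt` and the character details are made up for illustration; this is not the actual code in prompting.py, just the general shape of what it produces.

```python
# Illustrative sketch only -- not the real prompting.py code.
# The persona text (e.g. one containing "sadistic") is simply concatenated
# into the prompt header, so the model sees it as plain context.

def build_prompt(char_name: str, persona: str, history: list[str], user_message: str) -> str:
    lines = [
        f"{char_name}'s Persona: {persona}",  # e.g. "Morgan is a sadistic ..."
        "<START>",                            # separator used by the model card format
    ]
    lines.extend(history)                     # prior turns, e.g. "You: ..." / "Morgan: ..."
    lines.append(f"You: {user_message}")
    lines.append(f"{char_name}:")             # the model continues generating from here
    return "\n".join(lines)

example = build_prompt(
    char_name="Morgan",
    persona="Morgan is a sadistic, sharp-tongued rival who enjoys teasing the player.",
    history=["Morgan: Oh, you're back. How disappointing."],
    user_message="Why are you always like this?",
)
print(example)
```

Roughly speaking, the persona only influences the output because it sits in the model's context window; there is no separate weighting mechanism in the prompt format itself, so giving the persona more or less "weight" comes down to how it's worded and how much other text shares the context.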
A README for developers or a developer's guide would help people contribute a bit more.
I'll close this as a wontfix because I don't have plans to write a developer's guide for this repo. I probably won't be accepting large contributions here since work is focused on the official front-end now, so I'd rather invest my time there. Feel free to ask any further questions you have though, and I'll try to get back to you here.
As a fellow Python coder, I've been stumped by a lot of the logic while trying to work out how all this fits together. The front end makes sense, but it's hard to tell how it interacts with the model to let the Personality field influence the output. [For example, if I put in "sadistic", how is that sent to Pygmalion to help formulate the responses?] It also looks tricky to fiddle with the logic that gives the person's writing more or less weight; the program logic is a little hard for me to follow.
A README for developers or a developer's guide would help people contribute a bit more.