Hello, sorry for the late reply as I have been quite busy over the last month.
Character Factory consists of several small scripts. Some run through a command line interface (CLI), some have a web user interface (Gradio WebUI), and others are designed to run in Google Colab (the least complicated way to run them). These scripts use an LLM (Zephyr or Mistral) to generate character descriptions, names, personalities, and so on, and Stable Diffusion to create a character avatar. The generated character can then be exported to a .png file with embedded metadata using my Aichar Python library (a simplified example of that step is below). This file contains the character data such as name, personality, example dialogue, etc. To interact with the character, you can load the file in text-generation-webui, SillyTavern, Ai-companion, etc. and chat with the generated character.
In summary: the LLM creates the information about a character, Stable Diffusion generates an avatar that matches it, and everything is exported as a .png file with metadata. That file can then be loaded into text-generation-webui, SillyTavern, Ai-companion, etc., allowing you to chat with the generated character.
I hope this response clarifies the purpose of these scripts a bit more.