I want the model to be able to interact with my computer. For example, by importing pywhatkit, if I tell the AI in my text to open a YouTube video, it should be able to open that video. What I need to know is where I can intercept my prompts in the UI (in which script, etc.).
So it should be something like this:
if "open" and "on youtube" in command or "play" and "on youtube" in command:
video = command.replace("on youtube", "").replace("open", "").replace("play", "")
print("Opening " + video + "on youtube")
talk("Opening " + video + "on youtube")
pywhatkit.playonyt(video)
Another example is opening a program on my computer. It should look like this:
if "open" in command:
app = command.replace("open", "")
if "epic games" in command:
print("Opening " + app + "...")
talk("Opening " + app + "...")
subprocess.call("D:\Epic Games\Launcher\Portal\Binaries\Win32\EpicGamesLauncher.exe")
My original plan was to use LLMs in my assistant code, but Oobabooga offers much more accessibility, so implementing it here might be better for the future.
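From what I can tell, text-generation-webui supports extensions under extensions/<name>/script.py, so that is probably where the prompt could be intercepted. A minimal sketch of what I have in mind, assuming the extension API exposes an input_modifier() hook that receives each user prompt before it reaches the model (the exact hook name and signature may differ between versions, and handle_command() is my own hypothetical helper):

# extensions/assistant_commands/script.py (hypothetical extension name)
import pywhatkit

def handle_command(command):
    # My own helper: run a local action if the prompt is a recognized command
    command = command.lower()
    if ("open" in command or "play" in command) and "on youtube" in command:
        video = command.replace("on youtube", "").replace("open", "").replace("play", "").strip()
        pywhatkit.playonyt(video)
        return True
    return False

def input_modifier(string):
    # Assumed hook: called on every user prompt; act on commands locally
    # and pass the text through to the model unchanged either way.
    handle_command(string)
    return string

If that hook works as assumed, the same place could also call the Epic Games launcher code above.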