OpenAdaptAI / OpenAdapt

AI-First Process Automation with Large ([Language (LLMs) / Action (LAMs) / Multimodal (LMMs)] / Visual Language (VLMs)) Models
https://www.OpenAdapt.AI
MIT License

Design mobile interface #197

Open abrichr opened 1 year ago

abrichr commented 1 year ago

We will evaluate the model's confidence by asking it multiple times what the next appropriate action is. If it disagrees with itself, we will ask the user what to do next.
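A minimal sketch of that self-consistency check, assuming a hypothetical `predict` callable that stands in for one model query (nothing here is OpenAdapt's actual API):

```python
from collections import Counter

def propose_next_action(predict, num_samples=3):
    """Sample the model several times; trust the answer only if all samples agree.

    `predict` is a hypothetical stand-in for one model query;
    it returns a candidate next action as a string.
    """
    samples = [predict() for _ in range(num_samples)]
    action, votes = Counter(samples).most_common(1)[0]
    if votes == num_samples:
        return action, True   # model is self-consistent; proceed automatically
    return action, False      # disagreement: defer to the user

# Usage: a deterministic stub always agrees with itself,
# so the check reports high confidence.
action, confident = propose_next_action(lambda: "click_submit")
# -> ("click_submit", True)
```

Requiring unanimity across samples is the strictest policy; a majority-vote threshold would be a natural relaxation if querying the model is expensive.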

We would like to get human feedback on what to do next in the simplest and easiest way possible.

Interface:

The user is presented with a list of options for what to do next. They can then select:

flyguy712 commented 1 year ago

@abrichr I'm going to pick this one up. I'll start by putting together some prototype designs, will review them with you (and any others on the team) once ready to get feedback, then will move on to building a live prototype using SwiftUI.