VedalAI / neuro-amongus

Among Us Plugin for Neuro-sama
GNU General Public License v3.0

(Discussion) Interacting with the meeting button #73

Open Alexejhero opened 1 year ago

Alexejhero commented 1 year ago

How do we handle choosing when to press the meeting button? I'm interested to hear any ideas anyone might have.

Currently we record data about the meeting button's position and whether or not the player used the interact button. We could just feed that through the neural network, but that might lead to random meetings.

krogenth commented 1 year ago

Has any testing been done to see how often the model attempts to use the meeting button?

If we're willing to recollect data, we could separate the emergency meeting from other interactions. While the interaction itself is the same, the circumstances for using the emergency meeting are different, which should give the model clearer signals for when to use the button.

Otherwise, with the data we currently collect, I don't see much more we can do than hope the model doesn't use it immediately.

While it's maybe against the spirit of the project, we could add restrictions on the DLL side to keep the model from interacting, e.g. only allowing the model to interact with the meeting button after it has seen a body.
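
A minimal sketch of what that gate could look like; the real check would live in the C# plugin, and the state field here is hypothetical:

```python
# Illustrative sketch only: the actual restriction would be implemented
# DLL-side in C#. The GameState field below is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class GameState:
    saw_body: bool = False  # set once a dead body has entered the player's vision

def meeting_button_allowed(state: GameState) -> bool:
    """Only let the model interact with the emergency button after seeing a body."""
    return state.saw_body
```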

Alexejhero commented 1 year ago

> If we're willing to recollect data, we could separate the emergency meeting from other interactions. While the interaction itself is the same, the circumstances for using the emergency meeting are different, which should give the model clearer signals for when to use the button.

We're recollecting data in about 1h, but we don't need to make any changes: since we're already recording interactions and the nearest interactable, we can determine on the Python side when the button is pressed: the player interacted, the nearest interactable is a SystemConsole, and the distance to the button is less than some threshold n.
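
For illustration, a minimal Python-side check along those lines; the frame fields and the "SystemConsole" type name are assumptions about the recorded data format:

```python
from dataclasses import dataclass

BUTTON_RADIUS = 1.0  # the "n" above; would need tuning against real recordings

@dataclass
class Frame:
    interacted: bool            # whether the player pressed the interact/use button this frame
    interactable_type: str      # type of the nearest interactable, e.g. "SystemConsole"
    distance_to_button: float   # distance from the player to the emergency button

def pressed_meeting_button(frame: Frame) -> bool:
    """Label a recorded frame as an emergency-button press."""
    return (
        frame.interacted
        and frame.interactable_type == "SystemConsole"
        and frame.distance_to_button < BUTTON_RADIUS
    )
```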

> While it's maybe against the spirit of the project, we could add restrictions on the DLL side to keep the model from interacting, e.g. only allowing the model to interact with the meeting button after it has seen a body.

We could do something like that: only trigger the meeting button if a vent was seen, or a body was seen on cams, or something along those lines. The question is how we condition it to go to the button once that event occurs. We were thinking of creating a fake task and giving it to the neural network in place of the sabotage, so the NN has a sense of urgency to go there.
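
A rough sketch of that fake-task idea, assuming the network's input has an objective slot with a target position (all names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FakeTask:
    target: Tuple[float, float]  # position the model should path toward
    kind: str                    # tag so the objective can be distinguished

def meeting_objective(suspicious_event_seen: bool,
                      button_pos: Tuple[float, float]) -> Optional[FakeTask]:
    """Occupy the sabotage slot with a 'go press the emergency button'
    objective once a suspicious event (body, vent) has been seen."""
    if suspicious_event_seen:
        return FakeTask(target=button_pos, kind="emergency_meeting")
    return None  # otherwise the slot keeps its normal (sabotage) contents
```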