workofart / brawlstars-ai

An artificial intelligence that plays brawlstars from only pixel input
MIT License

Suggestions for a dead project :P #1

Open ghost opened 3 years ago

ghost commented 3 years ago

Hey! I just wanted to say that this is a pretty cool project you made. :) I would recommend trying to use some sort of Android VM or BlueStacks to run a simulator, even though I know that you didn't want to in the beginning. I also would recommend using self-play learning so your AI doesn't have that ceiling introduced by the use of reinforcement learning. I understand that recognition of objects in the game was a huge part of the challenge, so using a simulator allows you to track things with seamless accuracy instead of using CNNs yourself. An added benefit is that you get more elements introduced into your net as data, such as projectiles and more. I also would recommend training in a 1v1 environment as a demo, then transitioning into a 3v3 environment. Keep your comps the same so the net can get used to the intricacies of each brawler's mechanics. Thanks for reading through!

Cheers!

(sorry for any spelling mistakes, on a time crunch : P)

ghost commented 3 years ago

To further elaborate:

1. I'm not sure whether a BS API exists, so downloading the packets locally and running them without communication with the servers is what I thought you would need to do. (I think that, if there is one for an actual game itself, you have to talk to Brawl Stats or Starlist pro, but I'm 99 percent sure that they only get stats.)

2. You should include a range in the motion and aiming values: wrap the value line around the movement/aiming circles to achieve 2-dimensional movement with a one-dimensional value, with "no movement" being the range between the origin and a certain percentage of the line (e.g. wrap a number line around the movement and aiming joysticks, with standing still at the origin); see the sketch below. If you do this again using self-play and the other stuff I talked about, you'll almost definitely have to do a full overhaul :( 💯💯 💯 Still think that this is really cool, I'll definitely help you out if you decide to continue (just DM me!)
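Roughly what I mean, as a sketch (the [0, 1] action range, the dead-zone threshold, and the function name are just illustrative assumptions, not anything from the repo):

```python
import math

def wrapped_action_to_offset(a: float, radius: float = 1.0, dead_zone: float = 0.1):
    """Map a single scalar action a in [0, 1] to a 2D joystick offset.

    Values inside the dead zone mean "stand still"; the rest of the
    number line is wrapped around the joystick circle as an angle.
    """
    if a < dead_zone:
        return 0.0, 0.0  # standing still lives at the "origin" end of the line
    # Re-normalize the remaining part of the line to a full revolution.
    angle = 2 * math.pi * (a - dead_zone) / (1.0 - dead_zone)
    return radius * math.cos(angle), radius * math.sin(angle)

# Example: a = 0.55 points roughly "left" on the joystick.
print(wrapped_action_to_offset(0.55))
```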
workofart commented 3 years ago

Hey @pandabearz1! I'm really happy that you're interested in this project. I felt the same when I started it too :)

Thanks for all the suggestions, I'll try to respond to each one of them separately:

Q: I would recommend trying to use some sort of Android VM or BlueStacks to run a simulator

A: I've been using an Android simulator for this project; otherwise it would be tackling a different problem altogether. I've even included that as one of the requirements in the README. If you're referring to a Brawlstars simulator, I believe there's no public simulator available; perhaps SuperCell has an internal one, but that's inaccessible.


Q: I also would recommend using self-play learning so your AI doesn't have that ceiling introduced by the use of reinforcement learning.

A: Good point 👍 . If your self-play refers to playing against another reinforcement learning agent, it might open some new doors. But since the minimum number of players for a 3v3 game to even start is 6, I didn't have enough computing power to run 6 agents on my machine. Still, it's definitely a promising route.


Q: I understand that recognition of objects in the game was a huge part of the challenge, so using a simulator allows you to track things with seamless accuracy instead of using CNNs yourself.

A: Yes, this project was initially intended to be a reinforcement learning project; however, the more I worked on it, the more I realized there's a perception problem to it as well. Ideally, I would want a simulator that provides the data/coordinates of all elements in the game, which would let the project focus on the "planning" portion.
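To illustrate what I mean by focusing on "planning": something like the structured observation below is what a simulator would ideally hand the agent directly. This dataclass is purely hypothetical, made up for illustration, not anything an actual API exposes:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BrawlerState:
    position: Tuple[float, float]  # map coordinates
    health: int
    ammo: int
    is_enemy: bool

@dataclass
class GameState:
    brawlers: List[BrawlerState] = field(default_factory=list)
    projectiles: List[Tuple[float, float]] = field(default_factory=list)
    gems: List[Tuple[float, float]] = field(default_factory=list)
    time_remaining: float = 0.0

# With this as input, the agent only has to solve the planning part;
# from raw pixels, it first has to infer all of this with CNNs.
```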


Q: I also would recommend training in a 1v1 environment as a demo, then transitioning into a 3v3 environment.

A: I don't believe there's a 1v1 environment in Brawlstars. If this were a SuperCell internal project, it would be a lot easier to get both the simulator you mentioned and 1v1 game modes.


Q: I'm not sure whether a BS API exists, so downloading the packets locally and running them without communication with the servers is what I thought you would need to do.

A: For the scope of this project, I wanted to avoid going down that route, as creating an "internal BS API" would be a pretty big project by itself 😃


Q: You should include a range in the motion and aiming values, wrap the value line around the movement/aiming circles to achieve 2-dimensional movement.

A: This is a good suggestion, but I wanted to get something naive working as a baseline first before tweaking the model/action space. There are definitely more things to improve based on the "domain knowledge" gained from playing the game ourselves 😄
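For context, by a "naive baseline" I mean something roughly like a small discrete action set; the specific actions below are only illustrative, not the repo's actual definitions:

```python
from enum import Enum, auto

# Illustrative naive baseline: a handful of discrete actions instead of a
# continuous, wrapped joystick value. Not the repo's actual action space.
class Action(Enum):
    STAY = auto()
    MOVE_UP = auto()
    MOVE_DOWN = auto()
    MOVE_LEFT = auto()
    MOVE_RIGHT = auto()
    ATTACK_NEAREST = auto()

ACTION_SPACE = list(Action)  # the policy picks one index per step
```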


Thanks again for all the suggestions. Personally, I don't think it's worthwhile to tackle this as a side project without a simulator and against a constantly updating game environment. Let me know if you have any other thoughts. Perhaps as a SuperCell internal project? 😝

ghost commented 3 years ago

Thanks for leaving such a detailed reply! I agree that it's a bit much to tackle without a simulator. I was thinking of digging into the .apk file and trying to create one myself with help from the modding communities on Reddit (🤣), but that seems like a bit much. I was referring to two reinforcement learning agents duking it out when I mentioned self-play :) On a side note, I just wanted to mention that I reached out to SC about this, and they ghosted me. Kinda annoying, tbh :( Thanks for replying in a timely manner, unlike me!

Cheers!

workofart commented 3 years ago

@pandabearz1 Yeah, reaching out to SC might be a good bet, if they would reply, of course. Nevertheless, this was a very fruitful discussion. I'm glad to see someone as excited about this as I am. 😄 I'll leave this issue open to see if anyone else has other ideas/thoughts.

HackMan69hack commented 4 months ago

Nice project! You could also use YOLOv8 for the object detection part.
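For example, a minimal sketch with the `ultralytics` package; the weights file here is a hypothetical model fine-tuned on labeled Brawl Stars frames, which you would have to train yourself:

```python
from ultralytics import YOLO  # pip install ultralytics

# "brawlstars_yolov8.pt" is a hypothetical fine-tuned checkpoint,
# not something shipped with this repo or with ultralytics.
model = YOLO("brawlstars_yolov8.pt")

# Run detection on a captured emulator frame.
results = model("frame.png")
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(model.names[cls_id], (x1, y1, x2, y2))
```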