Ragnt / AngryOxide

802.11 Attack Tool
GNU General Public License v3.0

[Feature] Reinforcement Learning #23

Closed aluminum-ice closed 2 weeks ago

aluminum-ice commented 6 months ago

I am the developer of one popular pwnagotchi.ai fork. I've been following AO and I am intrigued. I would like to propose what I think would be a new feature for AO, taken from pwnagotchi: use reinforcement learning to optimize probing and attacks. This would enable AO to sweep through the parameter space more intelligently (e.g., channels, lengths of time to listen) to maximize the chances of collecting handshakes.

I'd love to help out, time permitting. Happy to clarify my proposed enhancement/feature if it's not clear.

Ragnt commented 6 months ago

What methods are you thinking of using to introduce this? The closest thing to it currently is the autohunt feature, but that's not "learning" so to speak.

What is the general flow / inputs / outputs that you anticipate would need to be implemented?

Not against this idea, just want to better understand it before I start trying to figure out how/when to work it.

aluminum-ice commented 6 months ago

It's precisely the autohunt feature that I am thinking of. In pwnagotchi's case, it uses reinforcement learning to update several parameters to maximize the chances of getting good, crackable handshakes; these include parameters like:

I personally would like to extend these parameters to include the type of activity you do (more/less passive, more/less active), geographic information (commercial areas have a very different environment from residential ones, and urban areas differ from suburban), and even the speed at which the device is moving (if you are driving in a car, you should spend less time doing recon, because targets will move out of range much faster than for a device being walked or stationary).
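To make the idea concrete, here is a minimal sketch of one way the dwell-time part could be learned, using a simple epsilon-greedy bandit. Everything here is hypothetical (`DwellBandit` and its fields are not from the AO or pwnagotchi codebases); it just illustrates the reward loop: try a dwell time, count handshakes, and gradually favor the dwell time that pays off.

```rust
// Hypothetical sketch: an epsilon-greedy bandit that learns which
// channel dwell time yields the most handshakes. All names are
// illustrative placeholders, not AngryOxide code.

struct DwellBandit {
    arms: Vec<u64>,     // candidate dwell times in ms
    counts: Vec<u32>,   // how many times each arm was tried
    rewards: Vec<f64>,  // cumulative handshakes per arm
    epsilon: f64,       // exploration rate in [0, 1)
}

impl DwellBandit {
    fn new(arms: Vec<u64>, epsilon: f64) -> Self {
        let n = arms.len();
        DwellBandit { arms, counts: vec![0; n], rewards: vec![0.0; n], epsilon }
    }

    /// Mean reward for an arm; untried arms count as 0.0.
    fn mean(&self, arm: usize) -> f64 {
        if self.counts[arm] == 0 { 0.0 } else { self.rewards[arm] / self.counts[arm] as f64 }
    }

    /// Pick an arm given a uniform random number in [0, 1):
    /// explore with probability epsilon, otherwise exploit the best mean.
    fn choose(&self, rand01: f64) -> usize {
        if rand01 < self.epsilon {
            // explore: map the random number onto an arm index
            ((rand01 / self.epsilon) * self.arms.len() as f64) as usize % self.arms.len()
        } else {
            // exploit: arm with the highest mean reward so far
            (0..self.arms.len())
                .max_by(|&a, &b| self.mean(a).partial_cmp(&self.mean(b)).unwrap())
                .unwrap()
        }
    }

    /// Record how many handshakes a dwell on this arm produced.
    fn update(&mut self, arm: usize, handshakes: u32) {
        self.counts[arm] += 1;
        self.rewards[arm] += handshakes as f64;
    }
}
```

The extra inputs mentioned above (activity type, area type, movement speed) would enter as state features, which pushes this from a bandit toward full RL, but the reward signal (handshakes captured) stays the same.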

Ragnt commented 6 months ago

I like the idea, but this would require quite a large amount of change to the current codebase; I would consider it a version 2.0 feature. Note that we aren't even at 1.0 yet. I don't currently have the time, due to real life (I'm not paid to maintain AO), to really get after these changes on my own.

But by all means, if you want to begin to map out how you would do this and start making the necessary PRs, that would be cool.

In the meantime, I suggest you join the Discord to communicate any questions or changes you are making.

aluminum-ice commented 6 months ago

I'm not familiar with Rust, but I looked over your code and generally got the gist of it. The parameters I'm thinking of tuning via reinforcement learning are distributed throughout the code, but I suspect it's possible to replace those hardcoded values with a call to a function that does the learning and returns updated parameters. Let me study your code a bit.
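A rough sketch of that refactor, assuming the scattered hardcoded values get gathered into one struct first (all names here are placeholders, nothing below exists in AO):

```rust
// Hypothetical refactor sketch: collect scattered hardcoded values into
// one struct so a learner can return updated values in a single call.

#[derive(Clone, Debug, PartialEq)]
struct HuntParams {
    dwell_ms: u64,       // time to sit on a channel
    deauth_burst: u8,    // frames per active attack
    passive_ratio: f64,  // fraction of time spent listening only
}

impl Default for HuntParams {
    fn default() -> Self {
        // placeholder values standing in for today's hardcoded constants
        HuntParams { dwell_ms: 250, deauth_burst: 4, passive_ratio: 0.5 }
    }
}

/// Stand-in for the learning step: takes the current params plus the
/// reward from the last epoch (handshakes captured) and returns an
/// adjusted copy. A real RL policy would replace this toy rule.
fn learn_step(current: &HuntParams, handshakes_last_epoch: u32) -> HuntParams {
    let mut next = current.clone();
    if handshakes_last_epoch == 0 {
        // nothing captured: dwell longer and shift toward active attacks
        next.dwell_ms = (next.dwell_ms * 2).min(4000);
        next.passive_ratio = (next.passive_ratio - 0.1).max(0.0);
    }
    next
}
```

The point is the call shape: the hunt loop reads from one `HuntParams` value and calls `learn_step` once per epoch, so the rest of the codebase never needs to know how the learning works.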

Ragnt commented 6 months ago

Sounds good.

There is also a plug-in framework in the works, and in theory many of those params could be modified at runtime by a plugin.
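Since the plug-in framework isn't finalized, here is only a guess at what such a hook could look like; the trait and names below are invented for illustration and may bear no resemblance to AO's eventual plugin API:

```rust
// Hypothetical sketch of a plugin hook that adjusts a tunable parameter
// at runtime. `ParamPlugin` and `RlTuner` are invented names.

trait ParamPlugin {
    /// Called once per epoch; may mutate the tunable parameter.
    fn on_epoch(&mut self, dwell_ms: &mut u64);
}

/// Toy "learner" plugin that just nudges the dwell time each epoch.
struct RlTuner {
    step: u64,
}

impl ParamPlugin for RlTuner {
    fn on_epoch(&mut self, dwell_ms: &mut u64) {
        *dwell_ms += self.step; // placeholder for a real learning update
    }
}

/// Host side: give every registered plugin a chance to tune the param.
fn run_epoch(plugins: &mut [Box<dyn ParamPlugin>], dwell_ms: &mut u64) {
    for p in plugins.iter_mut() {
        p.on_epoch(dwell_ms);
    }
}
```

With a hook like this, the RL work could live entirely in a plugin, keeping the core hunt loop unchanged.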
