Closed: Altoids1 closed this 5 years ago
the biggest of thinks
Lot of issues with this bounty.
From what I understand from the Discord messages, you want slimes to be "trainable" by a xenobiologist to perform actions. If you were to create a neural network that could do this, you'd basically be making an AI that is fully sentient. You'd be paid billions of dollars, not some measly $50, as this is something scientists have been trying to create for decades, if not centuries.
Frankly, this also isn't really feasible with a traditional neural network. You can retrain NNs to a degree, but not to the extent you're suggesting.
The whole idea of using a neural network is overengineered. It's a massive waste of resources regardless of how much you try to optimize it, and it's way too complex a problem.
I think we may need a more transparent bounty process that involves approval from both current and past headcoders. We shouldn't be setting aside a budget for bounties that are seemingly unfeasible when that money could be spent on things that can actually be done.
Well the money doesn't actually go anywhere unless someone actually does it, so either we lose nothing, or we gain a billion dollar AI coded in BYOND for fifty bucks.
I love how the label bot correctly classified this as a feature request.
Probably uses machine learning. Maybe one day it will correctly apply meme labels.
This is just sad
@AshCorr I've already discussed with Altoids the difficulty I see in using reinforcement learning to solve this problem, but the way you're describing the difficulties comes across as clear misinformation. It saddens me that you pushed the idea that this would require an "AI that is fully sentient", as that is rather misleading for a problem like this. Most likely a simple mistake, but I think it's important to point out that this bounty mostly focuses on making mobs more robust through a pre-trained reinforcement learning process.
If you'd like to look into those sorts of deep learning solutions, see https://deepmind.com/blog/alphago-zero-learning-scratch/ or https://openai.com/five/ . These bots currently beat humans at their tasks without sentience or inhuman reaction times. They beat us using logic built through reinforcement learning: they play against themselves thousands of times each month and use backpropagation-style algorithms to nudge the model closer to the currently targeted optimum. This is similar to how a normal neural network does backpropagation, but there are many different algorithms for optimising these convolutional neural networks. His proposal is possible, but quite resource-intensive, even when only running inference on an already-trained network.
That said, I agree that the bounty is far too low to incentivise this development, as it's PhD-level difficulty in my opinion. I know PhD students working on competitive chess bots, and in those cases it takes a team of three around six months to reach a competitive level. I'd also presume any neural network implementation would be written in C++ or Python for access to something like TensorFlow, then linked across to DM, similar to how some TG admin plugins work.
Edit: I want to add, this is in no way an attack on you; it's a topic that saddens me to see misrepresented. I have the utmost respect for your intent; I'm just highlighting how I see your language possibly misinforming others.
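For anyone unfamiliar with the reinforcement learning being discussed here, a minimal sketch of the core idea follows. This is tabular Q-learning on a toy corridor, vastly simpler than the deep networks AlphaGo or OpenAI Five use, and every environment detail here is an illustrative assumption, not anything from the game:

```python
import random

# Toy 1-D corridor: agent starts at 0, goal at position 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 only at the goal.
GOAL, N_STATES, ACTIONS = 4, 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(500):                      # training episodes ("self-play")
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda x: q[(s, x)])
        s2, r, done = step(s, a)
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # expect all 1s ("move right")
```

The point of the sketch: even this trivial problem needs hundreds of episodes of trial and error before the learned policy is sensible, which is why training live on the server is a non-starter and the heavy lifting is done offline.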
You're going to have a lot of fun shuffling all of the information required for the ML component into an environment outside of DM. You basically need to hand it a full representation of the game world, turfs and mobs alike. That's a lot of overhead.
Also, why was machine learning chosen as a requirement? Why not revive Goof's GOAP AI system, if you want something extremely robust? Or just consider a proper simple state machine.
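To make the overhead point concrete, here's a back-of-envelope calculation for shipping a naive world snapshot to an external ML process every AI tick. All numbers are made-up assumptions for illustration, not actual SS13 map dimensions or tick rates:

```python
# Back-of-envelope cost of handing an external ML process a full world
# snapshot each AI tick. All constants are illustrative assumptions.
MAP_W = MAP_H = 255          # assumed turfs per z-level
CHANNELS = 16                # assumed features per turf (density, mobs, items, ...)
BYTES_PER_FEATURE = 4        # float32
TICKS_PER_SECOND = 10        # assumed AI update rate

snapshot_bytes = MAP_W * MAP_H * CHANNELS * BYTES_PER_FEATURE
per_second = snapshot_bytes * TICKS_PER_SECOND

print(f"one snapshot: {snapshot_bytes / 2**20:.1f} MiB")
print(f"per second:   {per_second / 2**20:.1f} MiB/s")
```

Even with these modest assumptions you're serializing several megabytes per snapshot, tens of megabytes per second, before the network has done any actual inference.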
I'm basing my response on messages Altoids posted himself. I don't think I've misinterpreted anything. I quote: "Adding the capacity to teach your slimes to do things". This implies that we need a very generalised AI that is capable of learning how to do tasks on a round-by-round basis. Yes, AlphaGo and OpenAI Five do this to some degree, but nothing even close to what Altoids is suggesting, as they both require a huge amount of data to train them to perform a particular task. I'm saying that an AI capable of learning a task from just one round with a xenobiologist would have to be nearly sentient; you know, as intelligent as a dog, you might say.
Perhaps it was misleading of me to say "fully sentient", so I apologise for that. I should have said "almost sentient" instead.
But let's say Altoids decides not to go down the path of letting xenobiologists teach their slimes new tricks; I still don't think reinforcement learning is a viable option. Is it possible to do? Yes, in this case it is. Should we do it? NO! There would still be a large overhead to running an already-trained model in DM. You'd be better off setting up a sandbox environment with your slimes and reinforcement algorithm, running the experiment, analysing the behaviour of the winning variants, and translating that into DM code. Basically what @Skull132 is saying: a simple state machine would probably be good enough. After all, the game is a 2D plane; there is only so much behaviour a simple mob can have in such an environment.
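For comparison, here's how small the state-machine alternative really is. This is a hypothetical sketch in Python (real code would be DM, and all states and thresholds here are invented for illustration):

```python
from enum import Enum, auto

class SlimeState(Enum):
    IDLE = auto()
    HUNT = auto()
    FLEE = auto()

# Hand-written transition rules: no training phase, no inference cost,
# just a lookup per tick. Thresholds are made-up values.
def next_state(state, health_frac, target_visible):
    if health_frac < 0.25:
        return SlimeState.FLEE          # always prioritise survival
    if state is SlimeState.FLEE and health_frac > 0.5:
        return SlimeState.IDLE          # recovered, calm down
    if target_visible and state is not SlimeState.FLEE:
        return SlimeState.HUNT
    if not target_visible and state is SlimeState.HUNT:
        return SlimeState.IDLE          # lost the target
    return state

s = SlimeState.IDLE
s = next_state(s, 1.0, True)    # sees a target -> HUNT
s = next_state(s, 0.2, True)    # badly hurt    -> FLEE
print(s)  # SlimeState.FLEE
```

A table like this is trivially tweakable by maintainers, costs effectively nothing per tick, and produces behaviour players can actually read and learn to counter.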
The only case I can see is doing something similar to what botany does right now with plants (assuming botany hasn't changed significantly in the last year or so; I haven't played in a while), where each slime might have genes that define its attack strength, speed, and health, or perhaps unlock special attacks that have already been coded into the game. At that point it stops being a machine learning problem and becomes a traditional coding problem.
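That botany-style approach boils down to ordinary data modelling. A hypothetical sketch (field names, ranges, and mutation rules are all invented, not from the actual codebase):

```python
from dataclasses import dataclass
import random

@dataclass
class SlimeGenes:
    # Hypothetical heritable stats, clamped to sane ranges on mutation.
    attack: int = 5      # melee damage
    speed: int = 3       # tiles per move tick
    health: int = 100

    def mutate(self, rng):
        """Return a child with small random drift, like plant gene splicing."""
        clamp = lambda v, lo, hi: max(lo, min(hi, v))
        return SlimeGenes(
            attack=clamp(self.attack + rng.randint(-1, 1), 1, 20),
            speed=clamp(self.speed + rng.randint(-1, 1), 1, 10),
            health=clamp(self.health + rng.randint(-10, 10), 10, 300),
        )

rng = random.Random(42)
child = SlimeGenes().mutate(rng)
print(child)
```

This gives players a visible "breeding toward robustness" loop with zero ML machinery: selection pressure comes from the xenobiologist choosing which slimes to breed.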
What I'm trying to say is that instead of reserving $50 for this bounty, which will either never happen or produce a result worth far less than $50, we could just close it and pay someone $50 to merge the 100+ mirror PRs. I'm all for Altoids doing this in his spare time, but we should be prioritizing other bounties.
Tbf the concept of "train your slimes to do things" is pretty ambiguous. It might well mean open-ended training, or it might mean training under very specific rules. Either way, it would take a looong time to have an effect on large networks, even with reinforcement learning.
@AshCorr I agree predominantly with what you're highlighting. I think the main difference in information between us is this quote from this page, plus a lot of other things Altoids said on Discord.
The slime being able to train live on the server is not necessary and would probably be ludicrously laggy, so the neural network's weights ought to be precalculated through some means that is easily retrainable or elaboratable by the HCs and Maintainers in the future.
That quote aside, I also agree that unless you're doing this to show off to other servers ("we have better, fancier AI") or as a personal achievement, using any reasonable pre-trained NN solution is probably way too machine-intensive to be viable. It's also far too time-consuming to construct, teach, and optimise. And if you do manage to get good results with a reinforcement learning solution, very good for you!
EDIT: Within 20-30 minutes, everyone in the coder channel agreed that viable "learning" solutions for most of us wouldn't involve neural networks at all. That's not to say Altoids won't have success with what he wants to do, but the rest of us would focus on other ways to solve this. I'm happy to walk through what I proposed to the group (without neural networks) over Discord. In general, I agree.
lmaoing at your life
Bruh I'll pay $50 to the man who can spin up Tensorflow in the BYOND engine without setting the server on fire.
In all seriousness, this is retarded on multiple levels, betrays a misunderstanding of neural networks as a whole, and is more than likely completely unfeasible in the current game engine.
If anyone can actually meet the requirements of this fucking bounty I will personally pay you what I make in a year.
Brief Description
Implement the possibility for Xenobiologists to evolve or teach slimes to become robust, and use a proper neural-network AI to drive this robust aggro behaviour.
Requirements
Bounty: $50
I'm putting a high bounty on this because it requires knowledge of neural networks, intense optimization, and a strong grasp of the game's subsystem architecture and engine. I also know that if I had to do this myself, I'd have to put in a lot of genuine up-front AI research to figure out an optimal method of training and assembling the AIs so that they're both actually useful in combat and not a huge lagmonster.