lis-epfl / robogen

RoboGen - Robot generation through artificial evolution
http://www.robogen.org
GNU General Public License v3.0

Automatic Randomized starting positions=> Don't use it, see last comment. #34

Closed · LohGeek closed this issue 11 years ago

LohGeek commented 11 years ago

Hello,

Fighting against the problem that the robot selects a good paths to follow in order to find the light, I modified "Scenario.cpp" in order to randomize the starting position.

I added at the beginning:

```cpp
#include <time.h>
```

I modified the code as follows:

```cpp
// Starting position
osg::Vec2 startingPosition = robogenConfig->getStartingPos()->getStartPosition(
        startPositionId);
srand(time(NULL));
robot->getBB(minX, maxX, minY, maxY, minZ, maxZ);
robot->translateRobot(osg::Vec3(
        startingPosition.x() - (maxX - minX) / 2
                + ((double) ((rand()) % 200 - 100)) / 100.0,
        startingPosition.y() - (maxY - minY) / 2
                + ((double) ((rand()) % 200 - 100)) / 100.0,
        terrainConfig->getHeight() + inMm(2) - minZ + 0.5));
robot->getBB(minX, maxX, minY, maxY, minZ, maxZ);

std::cout
        << "The robot is enclosed in the AABB(minX, maxX, minY, maxY, minZ, maxZ) ("
        << minX << ", " << maxX << ", " << minY << ", " << maxY << ", "
        << minZ << ", " << maxZ << ")" << std::endl;
std::cout << "Obstacles in this range will not be generated" << std::endl << std::endl;
```

@amaesani Will this change the behavior of the EA too?

tcies commented 11 years ago

Cool! Github needs a like button.

LohGeek commented 11 years ago

If you use this code, please check a couple of times that your robot lands on its feet. I drop it from the normal height + 0.5 in order to avoid initializing it inside an obstacle; this way it can only fall onto an obstacle.

akshararai commented 11 years ago

But is this a good idea? Wouldn't a robot that luckily starts close to the light automatically have a better fitness than one that starts away from it? I think randomization is a good choice in testing, but not training. Am I missing something here?

LohGeek commented 11 years ago

What you say is absolutely right, but I think you are forgetting the impact of generations. In the first generation the luckiest robot will surely be selected. But if most of the genes are transmitted to the next generation, for example with a low mutation probability, the same robot will probably not be as lucky in the next generation and will reproduce less. Over many generations, the robots bearing gene A should on average get a better fitness if A is a good mutation, or a worse fitness if A is a harmful one.

The obvious drawback is a longer evolutionary period; the upside is that you should obtain behaviors that are completely position independent.

LohGeek commented 11 years ago

One additional note: the randomization was designed to place the robot within a square of size 4x4. If you have a smaller map, please increase the map size or adapt the scale of the random numbers...

dorienhuysmans commented 11 years ago

Can you specify what you included at the beginning (#include ...)? Should this be #include ..., the library for random numbers?

Actually, my rebuild fails anyway when I include your part of the code..

LohGeek commented 11 years ago

Yes: `#include <time.h>`. I don't know why, but GitHub's renderer treated it as markup and made it invisible; I didn't notice, sorry.

LohGeek commented 11 years ago

Hello, I discussed this randomized position as an evolutionary strategy with @prfernando, and he mentioned some papers that already tried it: with such an ever-changing starting point, it took ages to converge to a stable population (if it ever did).

So please remove / don't use this enhancement... It risks being a waste of time.

If someone wants to use automatic random positions, they should modify the code so that a random position is kept fixed for a whole generation and re-randomized between generations, not within one...

Sorry for the bad hint.

akshararai commented 11 years ago

My team partner was suggesting that we do the randomization at different points on a circle, so that all starting points have the same distance from the source. This way we might actually select individuals that can search for the light source, as opposed to those that are just lucky. Do you think that might work?

prfernando commented 11 years ago

My suggestions:

  1. Use multiple starting points (typically sampling the arena uniformly at random - within a certain radius of the light source in the chasing scenario). But use the same starting points for all the individuals of a population, so that they are all compared using a fitness computed under the same conditions.
  2. If you want to change starting positions, do it across generations, but allow enough generations to elapse so that the robot (brain) can evolve to do something useful in the current task.