Matteo-Pietro-Pillitteri / Computational-Intelligence

Repository CI 23/24

Review of the lab n.2 (Nim Game) - made by Ivan Magistro Contenta, s314356 #3

Open ivanmag22 opened 1 year ago

ivanmag22 commented 1 year ago

Hi Matteo! I want to give you my opinions and advice about your work (Lab 2 on the Nim game). Sorry, they are in a random order.

I hope that you can reply to my message, so that I can better understand your approach and implementation. :)

Matteo-Pietro-Pillitteri commented 1 year ago

Hi Ivan, thanks for your review. So, how am I using the mutation rate?

Starting from this line of code: offspring = [np.clip(np.random.normal(loc=0, scale=σ, size=len(current_solution)) + best_solution, 0.01, None) for _ in range(λ)]

As you can see, I start from the current best solution to generate the offspring. Yes, the mutation step decreases linearly over the epochs: I start with a higher value of sigma at the beginning and take smaller steps towards the end. Why do I want this kind of behavior? Well, I think it is important to explore at the beginning, and once the algorithm is converging on a region of promising solutions it is no longer appropriate to make large jumps in the search space. At that point it is more appropriate to search around that area with small steps.
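To make the idea concrete, here is a minimal, self-contained sketch of that loop: offspring are generated by Gaussian mutation of the current best, and sigma shrinks linearly over the epochs. The fitness function, the sigma schedule constants and the variable names are illustrative placeholders, not the exact code of the lab:

```python
import numpy as np

LAMBDA = 20                      # offspring generated per epoch (placeholder value)
EPOCHS = 100
SIGMA_START, SIGMA_END = 0.5, 0.01

def fitness(solution):
    # placeholder fitness: in the lab this would be e.g. the win rate of the Nim agent
    return -np.sum((solution - 0.3) ** 2)

rng = np.random.default_rng(42)
best_solution = rng.uniform(0.01, 1.0, size=5)
best_fitness = fitness(best_solution)

for epoch in range(EPOCHS):
    # mutation step decreases linearly over the epochs
    sigma = SIGMA_START + (SIGMA_END - SIGMA_START) * epoch / (EPOCHS - 1)

    # generate lambda offspring by mutating the current best solution,
    # clipping so that every weight stays positive
    offspring = [
        np.clip(best_solution + rng.normal(0, sigma, size=len(best_solution)), 0.01, None)
        for _ in range(LAMBDA)
    ]

    # keep an offspring only if it improves on the current best
    for child in offspring:
        f = fitness(child)
        if f > best_fitness:
            best_solution, best_fitness = child, f

print(best_solution, best_fitness)
```

With a schedule like this, early epochs sample far from the current best (exploration) and later epochs only refine it locally (exploitation).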

Speaking about the roulette wheel: why did I choose this mechanism? Well, roulette wheel selection can facilitate the exploration of different solutions, since even individuals with relatively low fitness have a non-zero chance of being selected. I looked at a lot of roulette wheel code online (the link in the readme is an example), and in my implementation I wrote this line of code: current_weight_sum += normalized_weight. Why? In short, current_weight_sum helps determine which action to select by simulating the spin of the roulette wheel: a random spin point is drawn, and the running sum of normalized weights is accumulated until it passes that point. Its progressive growth ensures that an action with a higher normalized weight covers a larger slice of the wheel and therefore has a higher probability of being selected.
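This is roughly how that accumulation works, as a small self-contained sketch; the function name, the action list and the fallback return are illustrative placeholders rather than the lab's actual code:

```python
import random

def roulette_wheel_select(actions, weights):
    """Pick one action with probability proportional to its weight (illustrative sketch)."""
    total = sum(weights)
    normalized = [w / total for w in weights]

    # "spin" the wheel: a random point in [0, 1)
    spin = random.random()

    # walk the wheel, accumulating normalized weights until the spin point is reached
    current_weight_sum = 0.0
    for action, normalized_weight in zip(actions, normalized):
        current_weight_sum += normalized_weight
        if spin <= current_weight_sum:
            return action
    return actions[-1]  # fallback for floating-point rounding

# example: actions with larger weights are picked more often, but none has zero chance
actions = ["take_1", "take_2", "take_3"]
weights = [0.2, 0.5, 1.3]
picks = [roulette_wheel_select(actions, weights) for _ in range(1000)]
print({a: picks.count(a) for a in actions})
```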