MelDashti / CI2024_lab1

First Lab - Set Cover

Comments Lab1R #1

Open Daviruss1969 opened 5 days ago

Daviruss1969 commented 5 days ago

First of all, I really appreciate your implementation of the Tabu algorithm, it's quite easy to understand and you got nice results 😸.

I still have some recommendations, those are my point of view, feel free to implement if you like them 🚀:

Dynamic constants 🚧

In your performance-analysis part, you said that the algorithm performs poorly and runs slowly on large instances.

Indeed, in your code you have the same constants no matter the size of the instance, i.e. max_iterations, initial_tabu_size or the 1000 factor to compute your tabu_size each iteration.

So, my idea is to compute some constants according to the problem size, i.e. depending on `UNIVERSE_SIZE * NUM_SETS`. In this way, for large instances, you can maybe perform fewer iterations but with larger steps.

Here's a template to show what I mean:

def compute_constants():
    problem_size = UNIVERSE_SIZE * NUM_SETS

    if problem_size <= THRESHOLD1:
        return some_consts

    if THRESHOLD1 < problem_size <= THRESHOLD2:
        return some_consts
    ...

    return some_consts
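To make the idea concrete, here's a minimal runnable sketch; the thresholds and returned values below are purely hypothetical placeholders, not tuned for your instances:

```python
# Hypothetical instance sizes; in the notebook these come from the problem setup
UNIVERSE_SIZE = 10_000
NUM_SETS = 1_000

def compute_constants(universe_size, num_sets):
    """Pick Tabu-search constants from the problem size (values are illustrative)."""
    problem_size = universe_size * num_sets
    if problem_size <= 100_000:      # small instances: many cheap iterations
        return {"max_iterations": 10_000, "initial_tabu_size": 50}
    if problem_size <= 10_000_000:   # medium instances
        return {"max_iterations": 2_000, "initial_tabu_size": 200}
    # large instances: fewer iterations, bigger tabu memory
    return {"max_iterations": 500, "initial_tabu_size": 500}

consts = compute_constants(UNIVERSE_SIZE, NUM_SETS)
```

You could also scale the constants continuously (e.g. proportionally to the problem size) instead of using discrete thresholds.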

Tweak function performance 🥇

In order to improve the speed for your large instances, you can update your multiple_mutation algorithm.

Indeed, your implementation works, but it doesn't take advantage of the vectorization offered by NumPy. For large problem sizes, the benefit can be significant.

Here's the same function but providing vectorization with NumPy:

def multiple_mutation(solution, mutation_rate=0.01):
    # rng is the notebook's np.random.Generator; NUM_SETS is the number of sets
    mask = rng.random(NUM_SETS) < mutation_rate   # flip each bit with prob. mutation_rate
    new_solution = np.logical_xor(solution, mask)
    return new_solution
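For a quick self-contained check, here's the same function wrapped with its own (assumed) setup; the seed and the tiny `NUM_SETS` are just for the demo:

```python
import numpy as np

NUM_SETS = 10                    # toy size for the demo
rng = np.random.default_rng(42)  # seeded generator for reproducibility

def multiple_mutation(solution, mutation_rate=0.01):
    # Each position is flipped independently with probability mutation_rate
    mask = rng.random(NUM_SETS) < mutation_rate
    return np.logical_xor(solution, mask)

solution = np.zeros(NUM_SETS, dtype=bool)
mutated = multiple_mutation(solution, mutation_rate=0.5)
print(f"{mutated.sum()} of {NUM_SETS} bits flipped")
```

Since `rng.random` draws values in [0, 1), a rate of 1.0 flips every bit and a rate of 0.0 flips none, which is an easy sanity check for the mask logic.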
MelDashti commented 2 days ago

Thank you for your insightful review! I really appreciate your suggestions, and I definitely agree that both of them will help improve the performance, especially for larger instances. The dynamic constants approach is a great idea, and I'll work on adjusting those based on problem size. The vectorization of the multiple_mutation function also looks like a great optimization; I'll be implementing both of these in my future commits. Thanks again for the feedback! 🙌