mmaecki / bio_algorithms

AI MSc PUT course

Write evaluation script #7

Open krystianMoras opened 3 months ago

krystianMoras commented 3 months ago

The evaluation script should include:

Comparison of the performance of the 5 algorithms and the implemented neighborhood types on all problem instances – plots:

    Quality = distance from the optimum (propose a suitable measure and state it); report the average and the best case (optionally also the worst case).
    Running time (average)
    Efficiency of the algorithms (average) – i.e., quality relative to running time (suggest a good measure and justify your choice)
    G,S: average number of algorithm steps (step = changing the current solution)
    G,S,R,RW: average number of evaluated solutions (i.e., solutions visited, with either full or partial evaluation)

    For the averages, assess the stability of the results (standard deviations should always be shown alongside the averages); see the plotting sketch after this list.
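
A minimal sketch of how these averages and standard deviations could be aggregated and plotted; the per-run record layout (keys `algorithm`, `quality`, `time_s`) and the bar chart with error bars are illustrative assumptions, not a required format.

```python
import numpy as np
import matplotlib.pyplot as plt


def aggregate(runs, key):
    """Group runs by algorithm and return the mean and std of `key` per algorithm."""
    by_algo = {}
    for run in runs:
        by_algo.setdefault(run["algorithm"], []).append(run[key])
    algos = sorted(by_algo)
    means = [float(np.mean(by_algo[a])) for a in algos]
    stds = [float(np.std(by_algo[a])) for a in algos]
    return algos, means, stds


def plot_with_errorbars(runs, key, ylabel, instance_name):
    """Bar chart of the mean of `key` per algorithm, with std dev as error bars."""
    algos, means, stds = aggregate(runs, key)
    plt.figure()
    plt.bar(algos, means, yerr=stds, capsize=4)
    plt.ylabel(ylabel)
    plt.title(f"{ylabel} on {instance_name}")
    plt.savefig(f"{key}_{instance_name}.png", dpi=150)
    plt.close()


# Hypothetical usage (the loader and file names are placeholders):
# runs = load_runs("results/instance1.json")
# plot_with_errorbars(runs, "quality", "distance from optimum", "instance1")
# plot_with_errorbars(runs, "time_s", "running time [s]", "instance1")
```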

G,S – plot: quality of the initial solution vs. quality of the final solution (at least 200 repetitions, use small points) for several interesting instances; interesting instances are those that show some heterogeneity of results. For the plots shown, compute and interpret the [rank correlation](https://en.wikipedia.org/wiki/Rank_correlation).
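
A possible sketch for this plot, assuming the initial and final qualities of each repetition are collected into two parallel lists; Spearman's coefficient is used here as one example of the rank correlation linked above, and all names are illustrative.

```python
import matplotlib.pyplot as plt
from scipy.stats import spearmanr


def initial_vs_final_plot(initial_quality, final_quality, instance_name):
    """Scatter of initial vs. final quality with the Spearman rank correlation in the title."""
    rho, p_value = spearmanr(initial_quality, final_quality)
    plt.figure()
    plt.scatter(initial_quality, final_quality, s=4, alpha=0.6)  # small points, >= 200 of them
    plt.xlabel("quality of initial solution")
    plt.ylabel("quality of final solution")
    plt.title(f"{instance_name}: Spearman rho = {rho:.3f} (p = {p_value:.2g})")
    plt.savefig(f"init_vs_final_{instance_name}.png", dpi=150)
    plt.close()
    return rho
```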

G,S – plot: the number of restarts (horizontal axis, up to at least 300) of multi-random start vs. the average and the best quality of the solutions found so far, for two (or a few) selected instances. Is it worth restarting the algorithm? If so, how many times?
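
One way this plot could be produced, assuming the per-restart qualities are already collected and that lower values are better (as for tour length); the function and file names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt


def restarts_plot(qualities, instance_name):
    """Running average and running best of the solution quality as restarts accumulate."""
    q = np.asarray(qualities, dtype=float)
    restarts = np.arange(1, len(q) + 1)
    running_avg = np.cumsum(q) / restarts       # average of all solutions found so far
    running_best = np.minimum.accumulate(q)     # best (lowest) solution found so far
    plt.figure()
    plt.plot(restarts, running_avg, label="average so far")
    plt.plot(restarts, running_best, label="best so far")
    plt.xlabel("number of restarts")
    plt.ylabel("quality")
    plt.legend()
    plt.title(f"Multi-random start on {instance_name}")
    plt.savefig(f"restarts_{instance_name}.png", dpi=150)
    plt.close()
```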

Objective assessment of the similarity of the locally optimal solutions found, for two selected instances, together with their similarity to the global optimum (if the global optimum is not known, e.g. for ATSP, use the best local optimum found instead). For example: a plot of at least 100 points, x = quality, y = similarity.
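
A sketch of one possible similarity measure and plot, assuming solutions are tours given as permutations of city indices and that similarity is counted as the fraction of shared directed arcs with the best solution found; both the measure and the function names are assumptions, not a prescribed choice.

```python
import matplotlib.pyplot as plt


def arcs(tour):
    """Set of directed arcs (i -> j) of a cyclic tour given as a permutation."""
    return {(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour))}


def arc_similarity(tour_a, tour_b):
    """Fraction of arcs of tour_a that also appear in tour_b (1.0 = identical tours)."""
    return len(arcs(tour_a) & arcs(tour_b)) / len(tour_a)


def quality_vs_similarity_plot(local_optima, qualities, reference_tour, instance_name):
    """Scatter of quality vs. similarity to a reference (global or best local) optimum."""
    similarities = [arc_similarity(t, reference_tour) for t in local_optima]
    plt.figure()
    plt.scatter(qualities, similarities, s=6)
    plt.xlabel("quality (tour length)")
    plt.ylabel("similarity to reference solution")
    plt.title(f"Local optima on {instance_name} ({len(qualities)} points)")
    plt.savefig(f"similarity_{instance_name}.png", dpi=150)
    plt.close()
```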