matsim-org / matsim-code-examples

A repository containing code examples around MATSim
GNU General Public License v3.0

choosing the best plan #355

Open SarahSie opened 4 years ago

SarahSie commented 4 years ago

Hello all,

Based on the MATSim book, in the equilibrium solution the output should always keep the 5 best-scored plans and, at the end, choose the best-scored plan among those 5. However, after I ran through the iterations and then did one more run over the result, surprisingly a plan with the worst score among the 5 remaining plans was chosen. How can I fix this issue?

davibicudo commented 4 years ago

Hello Kate,

It depends on the plan selector strategy you are using. The options are: SelectRandom, BestScore, KeepLastSelected, ChangeExpBeta, SelectExpBeta, SelectPathSizeLogit. The first three are self-explanatory, while the latter three use probabilistic models. Can you check in your config file, in the "strategy" module, which selector strategy you are using?

SarahSie commented 4 years ago

Thank you, David, for your reply and help. I have copied the "strategy" part of my config file here.

<module name="strategy" >

    <!-- the external executable will be called with a config file as argument.  This is the pathname to a possible skeleton config, to which additional information will be added.  Can be null. -->
    <param name="ExternalExeConfigTemplate" value="null" />

    <!-- time out value (in seconds) after which matsim will consider the external strategy as failed -->
    <param name="ExternalExeTimeOut" value="3600" />

    <!-- root directory for temporary files generated by the external executable. Provided as a service; I don't think this is used by MATSim. -->
    <param name="ExternalExeTmpFileRootDir" value="null" />

    <!-- fraction of iterations where innovative strategies are switched off.  Something like 0.8 should be good.  E.g. if you run from iteration 400 to iteration 500, innovation is switched off at iteration 480 -->
    <param name="fractionOfIterationsToDisableInnovation" value="0.8" />

    <!-- maximum number of plans per agent.  ``0'' means ``infinity''.  Currently (2010), ``5'' is a good number -->
    <param name="maxAgentPlanMemorySize" value="5" />

    <!-- strategyName of PlanSelector for plans removal.  Possible defaults: WorstPlanSelector SelectRandom SelectExpBetaForRemoval ChangeExpBetaForRemoval PathSizeLogitSelectorForRemoval . The current default, WorstPlanSelector is not a good choice from a discrete choice theoretical perspective. Alternatives, however, have not been systematically tested. kai, feb'12 -->
    <param name="planSelectorForRemoval" value="WorstPlanSelector" />

    <parameterset type="strategysettings" >

        <!-- iteration after which strategy will be disabled.  most useful for ``innovative'' strategies (new routes, new times, ...). Normally, better use fractionOfIterationsToDisableInnovation -->
        <param name="disableAfterIteration" value="-1" />

        <!-- path to external executable (if applicable) -->
        <param name="executionPath" value="null" />

        <!-- strategyName of strategy.  Possible default names: SelectRandom BestScore KeepLastSelected ChangeExpBeta SelectExpBeta SelectPathSizeLogit (selectors), ReRoute TimeAllocationMutator ChangeLegMode TimeAllocationMutator_ReRoute ChangeSingleLegMode ChangeSingleTripMode SubtourModeChoice ChangeTripMode TripSubtourModeChoice (innovative strategies). -->
        <param name="strategyName" value="ReRoute" />

        <!-- subpopulation to which the strategy applies. "null" refers to the default population, that is, the set of persons for which no explicit subpopulation is defined (ie no subpopulation attribute) -->
        <param name="subpopulation" value="null" />

        <!-- weight of a strategy: for each agent, a strategy will be selected with a probability proportional to its weight -->
        <param name="weight" value="0.1" />
    </parameterset>

    <parameterset type="strategysettings" >

        <!-- iteration after which strategy will be disabled.  most useful for ``innovative'' strategies (new routes, new times, ...). Normally, better use fractionOfIterationsToDisableInnovation -->
        <param name="disableAfterIteration" value="-1" />

        <!-- path to external executable (if applicable) -->
        <param name="executionPath" value="null" />

        <!-- strategyName of strategy.  Possible default names: SelectRandom BestScore KeepLastSelected ChangeExpBeta SelectExpBeta SelectPathSizeLogit (selectors), ReRoute TimeAllocationMutator ChangeLegMode TimeAllocationMutator_ReRoute ChangeSingleLegMode ChangeSingleTripMode SubtourModeChoice ChangeTripMode TripSubtourModeChoice (innovative strategies). -->
        <param name="strategyName" value="ChangeExpBeta" />

        <!-- subpopulation to which the strategy applies. "null" refers to the default population, that is, the set of persons for which no explicit subpopulation is defined (ie no subpopulation attribute) -->
        <param name="subpopulation" value="null" />

        <!-- weight of a strategy: for each agent, a strategy will be selected with a probability proportional to its weight -->
        <param name="weight" value="0.8" />
    </parameterset>

    <parameterset type="strategysettings" >

        <!-- iteration after which strategy will be disabled.  most useful for ``innovative'' strategies (new routes, new times, ...). Normally, better use fractionOfIterationsToDisableInnovation -->
        <param name="disableAfterIteration" value="-1" />

        <!-- path to external executable (if applicable) -->
        <param name="executionPath" value="null" />

        <!-- strategyName of strategy.  Possible default names: SelectRandom BestScore KeepLastSelected ChangeExpBeta SelectExpBeta SelectPathSizeLogit (selectors), ReRoute TimeAllocationMutator ChangeLegMode TimeAllocationMutator_ReRoute ChangeSingleLegMode ChangeSingleTripMode SubtourModeChoice ChangeTripMode TripSubtourModeChoice (innovative strategies). -->
        <param name="strategyName" value="SubtourModeChoice" />

        <!-- subpopulation to which the strategy applies. "null" refers to the default population, that is, the set of persons for which no explicit subpopulation is defined (ie no subpopulation attribute) -->
        <param name="subpopulation" value="null" />

        <!-- weight of a strategy: for each agent, a strategy will be selected with a probability proportional to its weight -->
        <param name="weight" value="0.1" />
    </parameterset>
</module>

Can I write BestScore instead of "WorstPlanSelector"?

davibicudo commented 4 years ago

WorstPlanSelector is the removal selector, i.e. the strategy used to discard a plan when an agent's plan memory becomes full. I'm not sure BestScore would work there, and even if it did, the effect would be the opposite of what you intend: the best plans would be discarded.

Your plan selector strategy is currently the second entry in the list of "strategysettings", ChangeExpBeta. This strategy is probabilistic and doesn't guarantee that the best plan is always selected. For the details of how it works, search for its name in the MATSim book. To enforce that the absolute best plan is chosen, replace it with BestScore. This is, however, not recommended, since you want some randomness to avoid getting stuck in local minima. You may also try the combination of BestScore as selector and SelectRandom as plan remover; this way the best plan is always selected while the risk of getting stuck in local minima is reduced.

What you currently have, however, are the recommended and most commonly used settings. It is important to note that you need a large number of iterations to reach convergence; just a few aren't enough. It depends on your use case, but usually at least one hundred.
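The BestScore-plus-SelectRandom combination mentioned above could be sketched as a config fragment like the following (parameter and strategy names are taken from the config posted earlier; the weight is illustrative and the other strategysettings from the original config are omitted for brevity):

```xml
<module name="strategy" >
    <param name="maxAgentPlanMemorySize" value="5" />
    <!-- discard a random plan (instead of the worst one) when plan memory is full -->
    <param name="planSelectorForRemoval" value="SelectRandom" />

    <parameterset type="strategysettings" >
        <!-- always select the plan with the highest score -->
        <param name="strategyName" value="BestScore" />
        <param name="subpopulation" value="null" />
        <param name="weight" value="0.8" />
    </parameterset>
</module>
```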

tschlenther commented 4 years ago

> Hello all,
>
> Based on the MATSim book, in the equilibrium solution the output should always keep the 5 best-scored plans and, at the end, choose the best-scored plan among those 5. However, after I ran through the iterations and then did one more run over the result, surprisingly a plan with the worst score among the 5 remaining plans was chosen. How can I fix this issue?

Typically, innovative strategies are switched off after a certain fractionOfIterationsToDisableInnovation, configurable in the same config group. You seem to have used the recommended standard value of 0.8. That means that after 80% of the iterations are finished, agents can no longer alter their plans but only select one among the existing ones. With your current setup, they should end up with the best plan selected. If you want to rerun a simulation on the results with no plan mutation (for whatever reason, for example a use case with within-day replanning), you should switch innovation off (by removing the innovative plan strategies, for example).

Best,

Tilmann

SarahSie commented 4 years ago

Thanks @tschlenther for your reply. So if I remove <param name="fractionOfIterationsToDisableInnovation" value="0.8" /> from my config file, can I switch innovation off for my one run after the equilibrium? Another point: I had a look at the output of my equilibrium run (200 iterations), and in the last 20% the plans that are chosen do not have the best scores compared to previous iterations. For example, for a particular agent I could see that some plans around iteration 100 have better scores, although they arrive later. However, the last iterations (the last 20%) only include plans with a worse score but an earlier arrival time. For my simulation it is necessary that agents experience a better score, not that they choose a plan based on an earlier arrival time.

SarahSie commented 4 years ago

Thanks @davibicudo for the explanation. I currently do 200 iterations. The current setting may not be the best, because when I looked at the output for one particular agent over the 200 iterations, I noticed that in the middle of the run (say around the 100th iteration) he has a better score when traveling by public transport, with a later arrival. However, towards the end of the run (say after the 150th iteration) only (I say only) plans traveling by car are chosen and kept. The agent then has much worse scores but an earlier arrival. I need to prevent this kind of selection (where only one travel mode, with the worst score but an earlier arrival, remains among the agent's plans).