Closed twistfire closed 1 year ago
@twistfire ,
Thanks for your idea. It's coming in the new release.
@twistfire ,
You can try to use it here: https://mealpy.readthedocs.io/en/latest/pages/general/advance_guide.html#stopping-condition-termination
Hi there! Thanks for the fast implementation. I have reinstalled mealpy and am now on version 2.5.2:
(.venv) (base) @.***:~/py_projects/ATARI_MLE$ pip uninstall mealpy
Found existing installation: mealpy 2.5.1
Uninstalling mealpy-2.5.1:
Would remove:
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/mealpy-2.5.1.dist-info/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/mealpy/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/bio_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/evolutionary_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/human_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/math_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/music_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/physics_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/swarm_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/system_based/*
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/*
Would not remove (might be manually added):
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/decorators/init.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/decorators/conftest.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/decorators/test_decorators.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/init.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/conftest.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/test_singleobj_bounds.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/test_singleobj_dims.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/test_singleobj_return.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/functions/test_singleobj_returndims.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/plotters/init.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/plotters/conftest.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/plotters/test_plotters.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/search/init.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/search/conftest.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/search/test_gridsearch.py
/home/fire/py_projects/ATARI_MLE/.venv/lib/python3.10/site-packages/tests/utils/search/test_randomsearch.py
Proceed (Y/n)? y
Successfully uninstalled mealpy-2.5.1
(.venv) (base) @.:~/py_projects/ATARI_MLE$ pip install mealpy
Collecting mealpy
  Downloading mealpy-2.5.2-py3-none-any.whl (368 kB) 368.2/368.2 kB 3.8 MB/s eta 0:00:00
Requirement already satisfied: pandas>=1.2.0 in ./.venv/lib/python3.10/site-packages (from mealpy) (1.5.3)
Requirement already satisfied: numpy>=1.16.5 in ./.venv/lib/python3.10/site-packages (from mealpy) (1.24.1)
Requirement already satisfied: matplotlib>=3.3.0 in ./.venv/lib/python3.10/site-packages (from mealpy) (3.6.3)
Requirement already satisfied: scipy>=1.7.1 in ./.venv/lib/python3.10/site-packages (from mealpy) (1.10.0)
Requirement already satisfied: opfunu>=1.0.0 in ./.venv/lib/python3.10/site-packages (from mealpy) (1.0.0)
Requirement already satisfied: packaging>=20.0 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (23.0)
Requirement already satisfied: python-dateutil>=2.7 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (1.4.4)
Requirement already satisfied: contourpy>=1.0.1 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (1.0.7)
Requirement already satisfied: pillow>=6.2.0 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (9.4.0)
Requirement already satisfied: fonttools>=4.22.0 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (4.38.0)
Requirement already satisfied: pyparsing>=2.2.1 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (3.0.9)
Requirement already satisfied: cycler>=0.10 in ./.venv/lib/python3.10/site-packages (from matplotlib>=3.3.0->mealpy) (0.11.0)
Requirement already satisfied: pytz>=2020.1 in ./.venv/lib/python3.10/site-packages (from pandas>=1.2.0->mealpy) (2022.7.1)
Requirement already satisfied: six>=1.5 in ./.venv/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib>=3.3.0->mealpy) (1.16.0)
Installing collected packages: mealpy
Successfully installed mealpy-2.5.2
(.venv) (base) @.:~/py_projects/ATARI_MLE$
And when I try to use multiple criteria like this:
term_dict = {
    "max_epoch": 10000,
    "max_time": 100,
    "max_early_stop": 2500
}
I get the following result: it stops after the first epoch with this message:
2023/03/21 12:31:55 PM, INFO, mealpy.swarm_based.PSO.OriginalPSO: Solving single objective optimization problem.
2023/03/21 12:31:55 PM, INFO, mealpy.swarm_based.PSO.OriginalPSO: >Problem: P, Epoch: 1, Current best: 3.914585092535318, Global best: 3.914585092535318, Runtime: 0.01270 seconds
2023/03/21 12:31:55 PM, WARNING, mealpy.swarm_based.PSO.OriginalPSO: Stopping criterion with maximum running time/time bound (TB) (seconds) occurred. End program!
I don't understand what I am doing wrong, or where my misunderstanding lies in this case.
@twistfire,
Can you show me your whole implementation? In your settings, max_time is 100 seconds, so it shouldn't finish right after 1 epoch like this, since each epoch takes around 0.0127 seconds.
Sure, here it is:
import numpy as np
from mealpy.swarm_based.PSO import OriginalPSO

def fitness_function(x, addit_args: list, addit_args2: list):
    y = (x[0]-2)**2 + (x[1]-3)**2 + (x[2]-5)**2 + addit_args[0] - addit_args2[0]
    # return the fitness value (a scalar)
    return y

def wrapped_fitness_function(x):
    list1 = [1]
    list2 = [2]
    res = fitness_function(x, addit_args=list1, addit_args2=list2)
    return res

pop_size = 200
epoch = 1000
c1 = 2.05
c2 = 2.05
w_min = 0.4
w_max = 0.9

problem_dict1 = {
    "fit_func": wrapped_fitness_function,
    "lb": [-10, -10, -10],
    "ub": [10, 10, 10],
    "minmax": "min",
    "verbose": False,
}

term_dict = {
    "max_epoch": 10000,
    "max_time": 100,
    "max_early_stop": 2500
}

x0 = [1, 1, 1]  # initial guess
print('x0 = ', x0)

start_positions = np.zeros((pop_size, 3))
start_positions[0, :] = x0  # set first row to x0
for i in range(1, pop_size):
    start_positions[i, :] = np.random.uniform(low=problem_dict1["lb"],
                                              high=problem_dict1["ub"], size=3)
print(start_positions)

model_OPSO1 = OriginalPSO(epoch, pop_size, c1, c2, w_min, w_max)
best_position, best_fitness = model_OPSO1.solve(
    problem=problem_dict1,
    starting_positions=start_positions,
    termination=term_dict,
    # n_workers=10
)

print('*'*80)
print('*'*80)
print(best_position)
print(best_fitness)
print('*'*80)
print('*'*80)

epochtimes = model_OPSO1.history.list_epoch_time
print('times:')
print(epochtimes)
print(np.sum(epochtimes))

iterbest = model_OPSO1.history.list_current_best
print(iterbest)

model_OPSO1.history.save_global_objectives_chart(filename="history/model1_goc")

print('*'*80)
print('*'*80)
print('Model 2')
print('*'*80)
print('*'*80)

model_OPSO2 = OriginalPSO(epoch, pop_size, c1, c2, w_min, w_max)
best_position_wig, best_fitness_wig = model_OPSO2.solve(
    problem=problem_dict1,
    termination=term_dict
)
model_OPSO2.history.save_global_objectives_chart(filename="history/model2_goc")

print('*'*80)
print('*'*80)
print(best_position, best_fitness)
print('*'*80)
print(best_position_wig, best_fitness_wig)
@twistfire,
I'm not sure what is wrong. I copied and ran your code on my laptop. It works fine for me.
@twistfire ,
You can try to increase the max_time to 1000 or 10000 seconds to see what will happen.
@thieu1995 , thanks for your feedback. I have a very strange situation with this toy example. It just doesn't work at all.
I can't get it to work.
When I set:
# Use stopping conditions together
term_dict = {
    "max_epoch": 10000,
    "max_time": 10000,
    "max_early_stop": 2500
}
it gives the following output:
2023/03/22 08:52:08 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: Solving single objective optimization problem.
2023/03/22 08:52:08 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: >Problem: P, Epoch: 1, Current best: 0.6665798795963225, Global best: 0.6665798795963225, Runtime: 0.01008 seconds
2023/03/22 08:52:08 AM, WARNING, mealpy.swarm_based.PSO.OriginalPSO: Stopping criterion with maximum running time/time bound (TB) (seconds) occurred. End program!
[2.26776349 2.83805118 6.25245965] 0.6665798795963225
When I set
# Use stopping conditions together
term_dict = {
    "max_epoch": 10000,
    "max_time": 100000,
    "max_early_stop": 2500
}
I get the following output:

2023/03/22 09:09:47 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: Solving single objective optimization problem.
2023/03/22 09:09:47 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: >Problem: P, Epoch: 1, Current best: -0.6497575280479526, Global best: -0.6497575280479526, Runtime: 0.01437 seconds
2023/03/22 09:09:47 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: >Problem: P, Epoch: 2, Current best: -0.6497575280479526, Global best: -0.6497575280479526, Runtime: 0.01233 seconds
....
2023/03/22 09:10:37 AM, INFO, mealpy.swarm_based.PSO.OriginalPSO: >Problem: P, Epoch: 2501, Current best: -0.6497575280479526, Global best: -0.6497575280479526, Runtime: 0.01013 seconds
2023/03/22 09:10:37 AM, WARNING, mealpy.swarm_based.PSO.OriginalPSO: Stopping criterion with early stopping (ES) (fitness-based) occurred. End program!
Note: the Global best doesn't change from the first epoch, and the run finishes via the ES criterion.
It seems the optimizer is not working at all, because the resulting function value never changes, which is very strange to me.
I don't know what to do, because I want to try mealpy, but I need multiple stopping criteria for that.
@twistfire ,
Let me correct you. The optimizer is working fine. It's just not working on your laptop (in your environment).
Firstly, the problem of the global best not changing after the first epoch is because your problem is too easy. The optimizer found the exact optimal point after one epoch.
Secondly, in your third attempt, you set the max_early_stop parameter to 2500, so it will stop after 2501 epochs. This is clearly stated in the Early Stopping condition (https://mealpy.readthedocs.io/en/latest/pages/general/advance_guide.html#stopping-condition-termination).
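That counting rule can be illustrated with a small standalone sketch (plain Python, not mealpy's internal code): the run stops once the best fitness has failed to improve for max_early_stop consecutive epochs, i.e. on epoch max_early_stop + 1 when the optimum is found on epoch 1.

```python
def run_with_early_stop(fitness_per_epoch, max_early_stop):
    """Return the epoch on which a run terminates under the ES rule."""
    best = float("inf")
    stagnant = 0
    for epoch, fit in enumerate(fitness_per_epoch, start=1):
        if fit < best:
            best = fit       # improvement: reset the stagnation counter
            stagnant = 0
        else:
            stagnant += 1    # no improvement this epoch
        if stagnant >= max_early_stop:
            return epoch
    return len(fitness_per_epoch)

# A run that finds its optimum on epoch 1 and never improves again
# stops on epoch max_early_stop + 1 (2501 for 2500), matching the log above.
print(run_with_early_stop([0.5] * 5001, max_early_stop=2500))  # -> 2501
```

This is why stopping at epoch 2501 with max_early_stop=2500 is the expected behaviour, not a bug.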
Thirdly, in your second attempt, you increased max_time to 10000, but the program still exited after the first epoch because the Time Bound condition was met. It's possible that the issue is related to the time.perf_counter() function used in the library. May I ask if you are using Linux, Windows, or Mac?
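A quick, library-independent way to check the clock behaviour is to compare time.perf_counter() against a known sleep; if the measured delta is far from the sleep duration on a given machine, a time-bound termination based on that clock would misfire in the same way:

```python
import time

# perf_counter() should advance by roughly the slept wall-clock time.
start = time.perf_counter()
time.sleep(0.1)
elapsed = time.perf_counter() - start
print(f"slept ~0.1 s, perf_counter measured {elapsed:.3f} s")
```

On a healthy clock the printed value is close to 0.1; a value off by orders of magnitude would point at the environment, not at mealpy.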
@thieu1995 , thank you for your reply.
You are completely correct. It was my incorrect conclusion. The optimizer works, of course.
I am working under WSL 2, Ubuntu, here are the parameters:
{'platform': 'Linux', 'platform-release': '5.15.90.1-microsoft-standard-WSL2', 'platform-version': '#1 SMP Fri Jan 27 02:56:13 UTC 2023', 'architecture': 'x86_64', 'hostname': 'note-4', 'ip-address': '127.0.1.1', 'mac-address': '00:15:5d:b6:b0:31', 'processor': 'x86_64', 'ram': '15 GB'}
If you need any additional data, just tell me and I will try to get it.
Nevertheless, it's a pretty strange situation: for this simple toy problem, PSO with a swarm of 200 particles, and even a close initial guess (for one particle), finds a solution very slowly. That is strange and confusing for me, because I want to apply this package to complex optimization and select the best technique for it.
P.S. It's worth mentioning that I have an initial guess for my optimization task, and it's pretty close to the solution; that is why I wanted to use particle swarm, as a technique that can provide a global best solution. My optimization task has about 20 parameters to find (a fitting task), each with its own bounds and initial guess value. For each set of parameters to optimize, I have about 60-120 observations with given uncertainty bounds, but the functional form is unknown.
By the way, I wanted to ask: how can I pass additional parameters to my fitness function? E.g. I have a function
def fitness_f(
    res_params: list,         # list of parameters to optimize
    res_params_fixed: list,   # list of parameters that are fixed (not for optimization)
    params_2: np.array,       # energy grid (for SYNDAT)
    particles_data: data.particle_pairdata,  # for syndat
    data: np.array,           # input data for a window
    unc: np.array             # uncertainty of data in a window
) -> float:
so the data is passed into the fitness function because it's used in further processing, but I only need to optimize res_params; the other parameters are just passed through to the fitness function. How can this be done? **kwargs?
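For what it's worth, one lightweight way to do this without touching the library at all is functools.partial, which binds the fixed arguments up front so the optimizer only ever sees a one-argument function. The sketch below uses a toy stand-in fitness; the names (res_params_fixed, data, unc) mirror the signature above, but the body is invented for illustration:

```python
from functools import partial

import numpy as np

def fitness_f(x, res_params_fixed, data, unc):
    # toy stand-in for the real fitness: weighted squared error
    model = x.sum() + sum(res_params_fixed)
    return float(np.sum(((data - model) / unc) ** 2))

data = np.array([1.0, 2.0, 3.0])
unc = np.ones(3)

# Bind everything except the solution vector; the optimizer would receive fit_func.
fit_func = partial(fitness_f, res_params_fixed=[0.5], data=data, unc=unc)

print(fit_func(np.array([0.1, 0.2])))  # only x varies between calls
```

This avoids global variables and the wrapper-function boilerplate, at the cost of re-passing bound objects on every call (cheap, since only references are passed).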
@twistfire,
I have tried the code on Windows, Linux, and Mac, and it worked fine. So I think the problem is that you are running a Linux simulation inside a Windows environment. The clock doesn't know which time it should count (the current Linux process, or the outer Windows process that runs the WSL simulation). Unfortunately, the time module can't handle this case. Therefore, I think you should run your code either on Windows or on a Linux installation that is not inside Windows. It's better to install Linux alongside Windows instead. Why are you running Linux inside Windows anyway?
@twistfire ,
When you need to pass additional data to the fitness function, I recommend designing a custom Problem object, as in the documentation: https://mealpy.readthedocs.io/en/latest/pages/general/advance_guide.html#custom-problem
from mealpy.swarm_based import PSO
from mealpy.utils.problem import Problem

class MyOwnProblem(Problem):
    def __init__(self, lb, ub, minmax, name="Whatever", dataset=None, additional=None, **kwargs):
        self.dataset = dataset
        self.additional = additional
        super().__init__(lb, ub, minmax, **kwargs)
        self.name = name

    def fit_func(self, solution):
        # Do whatever you want here based on the dataset you passed in __init__.
        # For example, train a neural network and use its loss as the fitness:
        network = NET(self.dataset, self.additional)  # NET = your own model-building function
        fitness = network.loss
        return fitness

## Create an instance of the custom Problem class
problem_cop = MyOwnProblem(lb=[-3, -5, 1, -10], ub=[5, 10, 100, 30], name="Network",
                           dataset=dataset, additional=additional, minmax="min")

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_cop)
By creating a custom class like the one above, you only need to pass the data once in your problem creation. If you pass data to the fitness function, you will have to pass it every time that function is called, which is not optimal. That's why you need to create a custom class.
Please remember to set the additional data before calling super().__init__(), as shown above. When you call super().__init__(), it test-calls the fitness function you passed in order to check that it is callable and returns a valid value.
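The ordering requirement can be demonstrated with a tiny mock; this is not mealpy's actual Problem class, just a stand-in whose __init__ pre-calls fit_func the same way:

```python
class BaseProblem:
    def __init__(self):
        # the library test-calls the fitness function here, during __init__
        self.test_value = self.fit_func([0.0])

class GoodOrder(BaseProblem):
    def __init__(self, dataset):
        self.dataset = dataset   # set attributes FIRST...
        super().__init__()       # ...then let the base __init__ pre-call fit_func

    def fit_func(self, solution):
        return len(self.dataset)

class BadOrder(BaseProblem):
    def __init__(self, dataset):
        super().__init__()       # pre-call fires before self.dataset exists
        self.dataset = dataset

    def fit_func(self, solution):
        return len(self.dataset)

print(GoodOrder([1, 2, 3]).test_value)  # -> 3

try:
    BadOrder([1, 2, 3])
except AttributeError as exc:
    print("reversed order fails:", exc)
```

With the attributes set first, the pre-call succeeds; with the order reversed, fit_func reads an attribute that does not exist yet and raises AttributeError.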
Note: as mentioned in the documentation, it's not recommended to use the starting_positions parameter (https://mealpy.readthedocs.io/en/latest/pages/general/advance_guide.html#starting-positions), because this is a meta-heuristic, random-based process, and you don't want the algorithm to get stuck in one place. You want it to explore the entire search space to find the global best solution. If you pass starting_positions, the algorithm may stall after the first epoch and be unable to leave its comfort zone (a local optimum).
It's just simple and convenient for me.
I will try to investigate whether it's possible to fix this issue, or rerun it on a native Linux machine I have.
Dear @thieu1995 , thanks for the reply. I am not super-proficient in Python; I'm just starting.
The custom Problem is pretty hard for me to understand (IMHO). Could you please provide an example for my toy problem, if my fitness function is given like this:
def fitness_function(x: np.array,
                     addit_args_0: list,
                     addit_args_1: list,
                     ...,
                     addit_data: pd.DataFrame
                     ):
Note: only x is optimized. I don't really understand how to pass all the other parameters (addit_args_0, addit_args_1, ..., addit_data) to the optimization routine. So, if possible, please provide an example with the toy problem mentioned.
And to be clear: is your recommendation to define my optimization task as a Problem subclass?
I will describe my task more clearly, maybe it will help to understand.
I need to solve several optimization tasks in one program. (It's really one large optimization task, but I don't think it can be solved all at once: it has a large number of parameters (up to several hundred), the number of parameters and their bounds are unknown, we can only seed them, and every option requires a lot of search time, at least for now.) Especially since I don't know the function (for me it's actually black-box optimization).
That is why I am trying a multilevel, iterated approach. On the first level (repeated several times, each time adding a set of new parameters to the main model), a small optimization task is solved individually. Here I want to use PSO with termination restrictions (for example, no more than 10,000 iterations or one minute per local region), giving an initial guess (at least one particle) as close as possible to the global minimum, and providing bounds for each parameter, to save time and resources (or some alternative technique). The goal is to find good initial values for the second level of optimization, where the whole set of parameters is used, but with bounds and initial guesses for all of them obtained from the first level.
Each small optimization task uses only part of the dataset I have, with corresponding constraints that are calculated individually. Each success in local optimization adds parameters and some data from the initial dataset to a model dataset that is optimized on the second level (it uses only part of the data but is evaluated on all of it).
In effect, it's a lot of small optimization tasks (steps), each of which adds parameters and data to the dataset used on the second level.
To check that my understanding of your recommendation is correct: do I need to iteratively set up small optimization problems (using the proposed class), providing each "problem" with the required subset of the available dataset, and then, based on the results of the level-1 optimization(s), set up another problem class for level 2, with other fitness functions and other data. Right?
@twistfire ,
First you should know that whenever you try to use an optimizer to solve an optimization problem, you need to know the specific number of variables (problem size) that you want to solve, and it has to be fixed throughout the entire optimization process. You can't change the problem size during the optimization process. You also need to know the bounds (lower and upper bounds) for each variable, and finally, you need to know the fitness function. It can be a black box or not, but when each set of variables is passed to the fitness function, it should return a different value. The optimizer can then use that fitness value to find the minimum or maximum by adjusting the variables.
For example, for your toy problem, note that when passing the lower bound (lb) and upper bound (ub), it should be a list where each value represents a variable (or 1 dimension). That is why you don't need to pass the number of variables in the Problem. For instance, if you want to solve the sum square problem with 5 variables, you can simply pass lb=[-10, -10, -10, -10, -10] and ub=[10, 10, 10, 10, 10], assuming that the bound is -10 < x_i < 10.
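As a concrete illustration of that bounds convention (a sketch; the dict keys follow mealpy 2.5.x's problem format), a 5-variable sum-square problem needs nothing beyond five-element lb/ub lists:

```python
import numpy as np

def sum_square(solution):
    # f(x) = sum of x_i^2; minimum 0.0 at the origin
    return float(np.sum(np.asarray(solution) ** 2))

problem = {
    "fit_func": sum_square,
    "lb": [-10] * 5,   # five entries -> a five-variable problem
    "ub": [10] * 5,    # must have the same length as lb
    "minmax": "min",
}

print(sum_square([1, 2, 3, 4, 5]))  # -> 55.0
```

The optimizer infers the dimensionality from len(lb), so changing the problem size is just a matter of changing the list lengths.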
from mealpy.swarm_based import PSO
from mealpy.utils.problem import Problem

class MyOwnProblem(Problem):
    def __init__(self, lb, ub, minmax, addit_args_0=None, addit_args_1=None, addit_data=None, **kwargs):
        self.addit_args_0 = addit_args_0
        self.addit_args_1 = addit_args_1
        self.addit_data = addit_data
        super().__init__(lb, ub, minmax, **kwargs)

    def fit_func(self, solution):
        # Do whatever you want here, based on the data you passed in __init__.
        # Calculate the fitness using your black-box model and return it.
        fitness = ...
        return fitness
## Now you can create an instance of your class, i.e. a single problem (object). If you are new to OOP in Python, you can learn more here: https://www.youtube.com/watch?v=Ej_02ICOIgs&t=1916s
problem1 = MyOwnProblem(your_lower_bound, your_upper_bound, minmax="min",
                        addit_args_0=addit_args_0, addit_args_1=addit_args_1, addit_data=addit_data)
## You can solve that single problem by:
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem1)
I'm not sure if I understand your problem, but I think you might need to divide your dataset into several pieces if each of your small problems only uses a small portion of the dataset. Then, you can create a loop to solve all of these small problems. For example:
## Assumption: you have divided your whole dataset into several pieces like this
dataset = [data1, data2, data3, data4, ...]

results = []
for idx, data in enumerate(dataset):
    # Create a new problem for this piece of the dataset
    prob = MyOwnProblem(lower_bound, upper_bound, minmax="min",
                        addit_args_0=addit_args_0, addit_args_1=addit_args_1, addit_data=data)
    algo = PSO.OriginalPSO(epoch=1000, pop_size=50)
    best_location, best_fitness = algo.solve(problem=prob)
    # Assumption: you want to save all the variables for the next stage
    results.append(best_location)

## Level 2 (next stage)
## Do whatever you want with the "results" from level 1 here.
prob_level2 = MyOwnProblem(lower_bound, upper_bound, minmax="min",
                           addit_args_0=addit_args_0, addit_args_1=addit_args_1, addit_data=your_data)
opt = PSO.OriginalPSO(epoch=1000, pop_size=50)
best_location, best_fitness = opt.solve(problem=prob_level2)
I hope that helps!
Hi there! It would be nice if you could provide multiple stopping criteria working together.
E.g. max optimization time, OR, additionally, a change of the objective function over a predefined number of cycles smaller than some threshold (as I understand it, that's Early Stopping).
Whichever triggers first doesn't matter.
Or, if needed, priorities/weights could be added to each criterion.
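For reference, the "whichever triggers first" semantics requested here can be sketched as a single check evaluated every epoch (a hypothetical helper, not mealpy's API; the library's termination handling implements the equivalent internally, per the comments above):

```python
import time

def should_stop(epoch, start, stagnant,
                max_epoch=10000, max_time=100.0, max_early_stop=2500):
    """Return the name of the first satisfied criterion, or None to continue."""
    if epoch >= max_epoch:
        return "max_epoch"
    if time.perf_counter() - start >= max_time:
        return "max_time"
    if stagnant >= max_early_stop:
        return "max_early_stop"
    return None

start = time.perf_counter()
print(should_stop(10000, start, 0))    # -> max_epoch
print(should_stop(5, start, 2500))     # -> max_early_stop
print(should_stop(5, start, 0))        # -> None (keep running)
```

The main loop would call this once per epoch and log which criterion fired, which also makes the "what works first doesn't matter" behaviour easy to audit.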