Closed: vanquanTRAN closed this issue 2 years ago
Hello @vanquanTRAN,
the initial population is generated from the initial positions. Increase the number of initial positions to a value bigger than the population, e.g. by setting: initialize={"grid": 4, "random": 15, "vertices": 4}
The current warning is:

    Warning: Not enough initial positions for population size
    Population size is reduced to n_inits
I will change this to make it clear how to increase the population.
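For illustration, here is a minimal sketch of the relationship the warning describes; the variable names (n_inits, effective_population) are only illustrative and not taken from the actual source:

```python
# Hypothetical sketch of the clipping behind the warning above; names are
# illustrative, not Hyperactive/Gradient-Free-Optimizers internals.
initialize = {"grid": 4, "random": 15, "vertices": 4}
population = 20

n_inits = sum(initialize.values())                # 4 + 15 + 4 = 23 initial positions
effective_population = min(population, n_inits)   # 23 >= 20, so the population stays at 20

# With the default initialize={"grid": 4, "random": 2, "vertices": 4} (sum = 10),
# a requested population of 20 would be reduced to 10 and the warning is printed.
```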
I tried to modify all parameters of initialize={"grid": 100, "random": 100, "vertices": 2000}, but with a population size of 50 the warning is still the same: Warning: Not enough initial positions for population size. Population size is reduced to 10. How can I fix this problem?
Hello @vanquanTRAN,
I need a complete example that I can copy and run. Additionally, I need your Python and Hyperactive versions and your OS.
""" Created on Sun Jun 6 13:06:22 2021
@author: Admin """ data_tv.csv import numpy as np import pandas as pd
import matplotlib.pyplot as plt plt.rcParams["font.family"] = "Times New Roman" plt.rcParams.update({'font.size': 22}) import matplotlib matplotlib.rc('xtick', labelsize=14) matplotlib.rc('ytick', labelsize=14) matplotlib.pyplot.figure(dpi=1200) import seaborn as sns from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split import xgboost as xgb from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor from sklearn.svm import SVR from numpy.random import seed seed(2)
data = pd.read_csv(r"data_tv.csv")
X = data.iloc[:,:-1] # Features y = data.iloc[:,-1] # Target sc = StandardScaler() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test)
from sklearn.model_selection import cross_val_score from hyperactive import Hyperactive, RandomRestartHillClimbingOptimizer,BayesianOptimizer,ParticleSwarmOptimizer,SimulatedAnnealingOptimizer,RandomAnnealingOptimizer
def model(opt): gbr = RandomForestRegressor(n_estimators=opt["n_estimators"],
max_depth=opt["max_depth"],
max_features=opt["max_features"],
min_samples_split=opt["min_samples_split"],
min_samples_leaf=opt["min_samples_leaf"])
scores=np.mean(cross_val_score(gbr, X_train, y_train,cv=10, n_jobs=-1,
scoring="r2"))
print(
"Iteration:", opt.optimizer.nth_iter, " Best score", opt.optimizer.best_score
)
return 1*scores
search_space = {"n_estimators": list(range(500, 600,10)),
"max_depth": list(range(4, 9,2)),
"max_features": list(range(4, 6,1)),
"min_samples_split": list(np.arange(0.001, 0.05,0.001)),
"min_samples_leaf": list(np.arange(0.001, 0.05,0.001))}
optimizer = ParticleSwarmOptimizer(population=50, inertia=0.4, cognitive_weight=0.7, social_weight=0.7, temp_weight=0.3, rand_rest_p=0.05, )
hyper = Hyperactive() hyper.add_search(model, search_space, optimizer=optimizer, n_iter=500) hyper.run()
I use hyperactive 3.2.3.
@vanquanTRAN,
in your example code I do not see how you used the initialize-parameter to increase the number of initial positions.
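As a sketch only (it assumes the initialize dictionary is passed to add_search in Hyperactive 3.x; please verify the exact keyword against the API reference), your search could request enough initial positions for a population of 50 like this:

```python
# Sketch: assumes hyper.add_search(...) accepts an `initialize` argument
# (check the Hyperactive API reference). `model` and `search_space` are the
# ones defined in the example code above.
from hyperactive import Hyperactive, ParticleSwarmOptimizer

optimizer = ParticleSwarmOptimizer(population=50, inertia=0.4,
                                   cognitive_weight=0.7, social_weight=0.7)

hyper = Hyperactive()
hyper.add_search(
    model,
    search_space,
    n_iter=500,
    optimizer=optimizer,
    initialize={"grid": 20, "random": 20, "vertices": 10},  # 50 initial positions >= population of 50
)
hyper.run()
```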
```python
import numpy as np

from .base_population_optimizer import BasePopulationOptimizer
from ...search import Search
from ._particle import Particle


class ParticleSwarmOptimizer(BasePopulationOptimizer, Search):
    def __init__(
        self,
        search_space,
        initialize={"grid": 100, "random": 100, "vertices": 2000},
        population=10,
        inertia=0.5,
        cognitive_weight=0.5,
        social_weight=0.5,
        temp_weight=0.2,
        rand_rest_p=0.03,
    ):
        super().__init__(search_space, initialize)

        self.population = population
        self.inertia = inertia
        self.cognitive_weight = cognitive_weight
        self.social_weight = social_weight
        self.temp_weight = temp_weight
        self.rand_rest_p = rand_rest_p

        # one Particle object per population member
        self.particles = self._create_population(Particle)
        self.optimizers = self.particles

    def _sort_best(self):
        # sort particles by their current score (best first)
        scores_list = []
        for _p_ in self.particles:
            scores_list.append(_p_.score_current)

        scores_np = np.array(scores_list)
        idx_sorted_ind = list(scores_np.argsort()[::-1])
        self.p_sorted = [self.particles[i] for i in idx_sorted_ind]

    def init_pos(self, pos):
        # assign the next initial position to the next particle in the population
        nth_pop = self.nth_iter % len(self.particles)

        self.p_current = self.particles[nth_pop]
        self.p_current.init_pos(pos)

        self.p_current.inertia = self.inertia
        self.p_current.cognitive_weight = self.cognitive_weight
        self.p_current.social_weight = self.social_weight
        self.p_current.temp_weight = self.temp_weight
        self.p_current.rand_rest_p = self.rand_rest_p

        self.p_current.velo = np.zeros(len(self.conv.max_positions))

    def iterate(self):
        n_iter = self._iterations(self.particles)
        self.p_current = self.particles[n_iter % len(self.particles)]

        self._sort_best()
        self.p_current.global_pos_best = self.p_sorted[0].pos_best

        pos = self.p_current.iterate()
        return pos

    def evaluate(self, score_new):
        self.p_current.evaluate(score_new)
```
This is my modification of particle_swarm_optimization.py in the gradient_free_optimizers package.
Hello @vanquanTRAN,
it looks like you are changing the source code of Hyperactive. You should not do this. The way Hyperactive behaves should only be changed by using the official API: https://github.com/SimonBlanke/Hyperactive#hyperactive-api-reference
If you want to learn how to use Hyperactive you can also look into the tutorial: https://github.com/SimonBlanke/hyperactive-tutorial/blob/main/notebooks/hyperactive_tutorial.ipynb
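For example, a hedged sketch of how the population-size study could be set up through the public API instead of modified source code (it again assumes add_search accepts an initialize argument; please double-check in the API reference):

```python
# Sketch: sweep the PSO population size; `model` and `search_space` come from
# the example code earlier in this issue. The `initialize` placement in
# add_search is an assumption to verify against the Hyperactive API reference.
import time
from hyperactive import Hyperactive, ParticleSwarmOptimizer

for population in (10, 20, 50):
    optimizer = ParticleSwarmOptimizer(population=population)

    hyper = Hyperactive()
    hyper.add_search(
        model,
        search_space,
        n_iter=200,
        optimizer=optimizer,
        initialize={"random": population},  # at least `population` initial positions
    )

    start = time.time()
    hyper.run()
    print("population:", population, "runtime [s]:", round(time.time() - start, 2))
```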
Dear Simon, I have a question about the population size of the PSO. With

    search_space, initialize={"grid": 4, "random": 2, "vertices": 4}, population=10, inertia=0.5, cognitive_weight=0.5, social_weight=0.5, temp_weight=0.2, rand_rest_p=0.03

I cannot increase the population size: it is automatically reduced to 10 when I change it to 20 or greater. I tried to modify the file search.py, but that did not change anything. Can you help me debug this issue, please? I would like to investigate the effect of the population size on the runtime and on the cost function of my problem. Thank you for your help.