ahmedfgad / GeneticAlgorithmPython

Source code of PyGAD, a Python 3 library for building the genetic algorithm and training machine learning algorithms (Keras & PyTorch).
https://pygad.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Batch size changed during run #217

Open · alien2327 opened this issue 1 year ago

alien2327 commented 1 year ago

Hi, thank you for all your effort!

I encountered an error when I tried to run the algorithm with batched fitness. The solution shape changes during the run; here, the batch size changed from 10 to 9. Could you help me solve this?

Thanks.

Here are my PyGAD instance settings:

----------------------------------------------------------------------
                           PyGAD Lifecycle                           
======================================================================
Step                  Handler                          Output Shape
======================================================================
on_start()            on_start()                       None        
----------------------------------------------------------------------
Fitness Function      fitness_func()                   (1)         
Fitness batch size: 10
----------------------------------------------------------------------
On Fitness            on_fitness()                     None        
----------------------------------------------------------------------
Parent Selection      steady_state_selection()         (10, 119)   
Number of Parents: 10
----------------------------------------------------------------------
On Parents            on_parents()                     None        
----------------------------------------------------------------------
Crossover             scattered_crossover()            (10, 119)   
----------------------------------------------------------------------
On Crossover          on_crossover()                   None        
----------------------------------------------------------------------
Mutation              random_mutation()                (10, 119)   
Mutation Genes: 10
Random Mutation Range: (0.0, 2.0)
Gene Space: {'low': 0.0, 'high': 1.0}
Mutation by Replacement: True
Allow Duplicated Genes: True
----------------------------------------------------------------------
On Mutation           on_mutation()                    None        
----------------------------------------------------------------------
On Generation         on_generation()                  None        
----------------------------------------------------------------------
On Stop               on_stop()                        None        
----------------------------------------------------------------------
======================================================================
Population Size: (100, 119)
Number of Generations: 300
Initial Population Range: (0.5, 1.5)
Keep Elitism: 1
Gene DType: [<class 'float'>, None]
Save Best Solutions: False
Save Solutions: False
======================================================================

And here is the Python script:

self.ga_inst = pygad.GA(
    initial_population=self.initial_population(), # Just generates (100, 119) random numbers using numpy
    init_range_low=0.5,
    init_range_high=1.5,
    gene_space={
        "low": 0.0, 
        "high": 1.0
    },
    num_generations=300,
    num_parents_mating=10,
    fitness_func=self.fitness_func,
    num_genes=119,
    sol_per_pop=20,
    parent_selection_type='sss',
    crossover_type='scattered',
    mutation_type='random',
    mutation_num_genes=10,
    random_mutation_min_val=0.0,
    random_mutation_max_val=2.0,
    mutation_by_replacement=True,
    fitness_batch_size=10,
    mutation_percent_genes=.05
)

And my error message was like this:

[INFO] GA start
0 (10, 119)  -> I print the run count and solution.shape in fitness_func()
1 (10, 119)
2 (10, 119)
3 (10, 119)
4 (10, 119)
5 (10, 119)
6 (10, 119)
7 (10, 119)
8 (10, 119)
9 (10, 119)
10 (10, 119)
11 (10, 119)
12 (10, 119)
13 (10, 119)
14 (10, 119)
15 (10, 119)
16 (10, 119)
17 (10, 119)
18 (10, 119)
19 (9, 119)
There is a mismatch between the number of solutions passed to the fitness function (9) and the number of fitness values returned (10). They must match.
Traceback (most recent call last):
  File "/mnt/dsk1/yhlee/workdir/torch_env/.venv/lib/python3.9/site-packages/pygad/pygad.py", line 1702, in cal_pop_fitness
    raise ValueError(f"There is a mismatch between the number of solutions passed to the fitness function ({len(batch_indices)}) and the number of fitness values returned ({len(batch_fitness)}). They must match.")
ValueError: There is a mismatch between the number of solutions passed to the fitness function (9) and the number of fitness values returned (10). They must match.
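
The run counts in the log hint at what happens: generation 0 evaluates all 100 solutions (counts 0-9, ten per batch), while the next generation appears to need fitness for only 99 solutions because keep_elitism=1 reuses the elite's fitness, leaving nine full batches plus a final batch of 9 (count 19). If that is the cause, the batch fitness function has to size its output to however many solutions PyGAD actually passes rather than to a fixed fitness_batch_size. A minimal sketch with a placeholder scoring rule instead of the project's real model:

import numpy as np

def fitness_func(ga_inst, solutions, solutions_idx):
    # The last batch may hold fewer rows than fitness_batch_size,
    # so take the count from the array itself, not from a config value.
    batch = np.atleast_2d(solutions)          # shape (n_solutions, num_genes)

    # Placeholder scoring: mean absolute deviation from 1.0 per solution.
    errors = np.abs(batch - 1.0).mean(axis=1)
    fitness = 1.0 / (errors + 1e-5)

    # Return exactly one fitness value per solution that was passed in.
    return fitness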
ahmedfgad commented 1 year ago

Thanks @alien2327.

To better debug the issue, can you share a full working example to run on my end? This will help a lot!

alien2327 commented 1 year ago

Thank you for your reply @ahmedfgad

Here is my custom class that wraps the pygad.GA class.

import csv
import numpy as np
import pygad
import torch

class CMAQNetGA(object):
    def __init__(self, model, opt, base_map, geodata, device) -> None:
        self.device = device
        self.model = model
        self.geodata = geodata
        self.opt = opt
        self.target_map = base_map.clone().detach().numpy()
        self.n_genes = 119
        self.ga_inst = pygad.GA(
            initial_population=self.initial_population(),
            init_range_low=self.opt.init_low,
            init_range_high=self.opt.init_high,
            gene_space={
                "low": 0.0, 
                "high": 1.0
            },
            num_generations=self.opt.n_generations,
            num_parents_mating=self.opt.n_mating,
            fitness_func=self.fitness_func,
            num_genes=self.n_genes,
            sol_per_pop=self.opt.sol_per_pop,
            parent_selection_type=self.opt.parent_selection_type,
            crossover_type=self.opt.crossover_type,
            mutation_type=self.opt.mutation_type,
            mutation_num_genes=2,
            random_mutation_min_val=0.0,
            random_mutation_max_val=1.0,
            mutation_by_replacement=True,
            fitness_batch_size=self.opt.batch_size,
            mutation_percent_genes=self.opt.mutation_percent,
            parallel_processing=[
                self.opt.ga_parallel_type, 
                self.opt.ga_parallel
            ],
            on_start=self.on_start,
            on_fitness=self.on_fitness, # empty method
            on_parents=self.on_parents,  # empty method
            on_crossover=self.on_crossover, # empty method
            on_mutation=self.on_mutation, # empty method
            on_generation=self.on_generation,
            on_stop=self.on_stop
        )
        self.ga_inst.summary()
        self.optim_target = dict(zip(list(range(17)), [1.0 for _ in range(17)]))
        self.base_target_score = None
        self.grid_num, self.mask = self.get_masked_map()
        self.result_file = open("GA_optim.csv", "w", newline='')

    def initial_population(self):
        return np.random.uniform(
            low=self.opt.init_low,
            high=self.opt.init_high,
            size=(self.opt.init_size, 119)
        )

    def get_masked_map(self) -> tuple[int, np.ndarray]:
        grid_num = 0
        mask = np.zeros((1, 1, 82, 67))
        for key in self.optim_target.keys():
            target_idx = self.geodata(key)
            grid_num += len(target_idx)
            for (x, y) in target_idx:
                mask[0, 0, x, y] = 1
        return grid_num, mask

    def fitness_func(self, ga_inst, solution, solution_idx):
        self.model.eval()
        with torch.no_grad():
            output = self.model(
                torch.FloatTensor(solution).reshape(self.opt.batch_size, self.n_genes).to(self.device)
            ).cpu().detach().numpy()

        fitness = 0
        if self.opt.batch_size == 1:
            fitness = np.sum(np.abs((output - self.target_conc)*self.mask))
        else:
            fitness = np.array([np.sum(np.abs((output[i] - self.target_conc[i])*self.mask[0])) \
                                  for i in range(self.opt.batch_size)])
        fitness /= self.grid_num
        fitness = 1 / (fitness + 1e-5)
        return fitness

    def on_start(self, ga_inst):
        print("[INFO] GA start")

    def on_fitness(self, ga_inst, population_fitness):
        ...

    def on_parents(self, ga_inst, selected_parents):
        ...

    def on_crossover(self, ga_inst, offspring_crossover):
        ...

    def on_mutation(self, ga_inst, offspring_mutation):
        ...

    def on_generation(self, ga_inst):
        solution, fitness, _ = self.ga_inst.best_solution()
        print("\r[IFNO] Generation {gen:5d} | Fitness {fit:.6f}"\
            .format(gen=ga_inst.generations_completed, fit=fitness), end='')
        csv.writer(self.result_file).writerow(solution.reshape(-1))

    def on_stop(self, ga_inst, last_population_fitness):
        self.result_file.close()
        print("\n[INFO] GA stop")

    def set_target(self, target:dict):
        self.optim_target = target
        for key, val in self.optim_target.items():
            target_idx = self.geodata(key)
            for (x, y) in target_idx:
                self.target_map[0, 0, x, y] *= val
        if self.opt.batch_size != 1:
            self.target_map = np.array([self.target_map for _ in range(self.opt.batch_size)])

    def run(self):
        return self.ga_inst.run()

    def show_result(self) -> tuple:
        fig = self.ga_inst.plot_fitness()
        solution, solution_fitness, solution_idx = self.ga_inst.best_solution()
        print("Fitness value of the best solution = {solution_fitness}"\
            .format(solution_fitness=solution_fitness))
        print("Index of the best solution : {solution_idx}"\
            .format(solution_idx=solution_idx))
        return fig, solution, solution_fitness, solution_idx

The purpose of this class is to find several input vectors (solutions) that produce the same output from the deep learning model (for a personal project). The custom methods used with pygad.GA here are:

  1. initial_population : generates (init_pop, n_genes) random values using numpy.random.uniform
  2. fitness_func : feeds the solutions into the DL model (PyTorch) and calculates the fitness against the target values

After all settings are done, I just call the run() method and get the error.

ga = CMAQNetGA(
        model, opt, base_map=base_map, geodata=geodata, device=device)
ga.set_target(optim_target)
ga.run()
ahmedfgad commented 1 year ago

Thanks @alien2327 for sharing the code.

May I know what these inputs are so that I can run the code?

  1. model
  2. opt
  3. base_map
  4. geodata
  5. device
  6. optim_target

You do not have to share confidential inputs. Just use dummy inputs if possible. I just need to run the code and trace the bug.

alien2327 commented 1 year ago

Thanks @ahmedfgad

The training data, the base_map, and the detailed structure of the model's hidden layers are confidential, so I have to share those as dummy data, but the rest are just numbers or freely available data/models.

1. model

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, opt, base:torch.FloatTensor) -> None:
        super(Net, self).__init__()
        self.name = 'Net'
        self.base = base

        self.layer_fc_1 = nn.Linear(119, 256)
        self.layer_elu_1 = nn.GELU()
        self.layer_fc_2 = nn.Linear(256, 256)
        self.layer_elu_2 = nn.GELU()
        self.layer_fc_3 = nn.Linear(256, 256)
        self.layer_elu_3 = nn.GELU()
        self.layer_fc_4 = nn.Linear(256, 1)

        self.layer_conv_1 = nn.Conv2d(6, 64, 3, padding='same')
        self.layer_norm2d_1 = nn.BatchNorm2d(num_features=64)
        self.layer_gelu_1 = nn.GELU()
        self.layer_conv_2 = nn.Conv2d(64, 128, 3, padding='same')
        self.layer_norm2d_2 = nn.BatchNorm2d(num_features=128)
        self.layer_gelu_2 = nn.GELU()
        self.layer_conv_3 = nn.Conv2d(128, 64, 3, padding='same')
        self.layer_norm2d_3 = nn.BatchNorm2d(num_features=64)
        self.layer_gelu_3 = nn.GELU()
        self.layer_conv_4 = nn.Conv2d(64, 1, 1, padding='same')

    def forward(self, x:torch.FloatTensor) -> torch.FloatTensor:
        x = self.layer_fc_1(x) # x has shape (batch_size, 119)
        x = self.layer_elu_1(x)
        x = self.layer_fc_2(x)
        x = self.layer_elu_2(x)
        x = self.layer_fc_3(x)
        x = self.layer_elu_3(x)
        x = self.layer_fc_4(x)

        base = self.layer_conv_1(self.base) # self.base has shape (batch_size, 6, 82, 67)
        base = self.layer_norm2d_1(base)
        base = self.layer_gelu_1(base)
        base = self.layer_conv_2(base)
        base = self.layer_norm2d_2(base)
        base = self.layer_gelu_2(base)
        base = self.layer_conv_3(base)
        base = self.layer_norm2d_3(base)
        base = self.layer_gelu_3(base)
        base = self.layer_conv_4(base)

        x = torch.einsum('bkij,bk->bkij', [base, x])
        # some hidden layers
        return x # should be (batch_size, 1, 82, 67) shape

2. opt : this just holds the parameters for the model and pygad.GA

import argparse

class Params(object):
    def __init__(self) -> None:
        self.parser = argparse.ArgumentParser()
        self.initialized = False

    def initialize(self) -> None:
        self.parser.add_argument('--data_dir', type=str, default="./datasets", help='Project data path')
        self.parser.add_argument('--model_dir', type=str, default="./models", help='Model file path')
        self.parser.add_argument('--gpu_id', type=int, default=0)
        self.parser.add_argument('--debug', action='store_true')
        self.parser.add_argument('--local-rank', type=int, default=0)
        self.parser.add_argument('--nproc_per_node', type=int, default=1)
        self.parser.add_argument('--nodes', type=int, default=1)

        self.parser.add_argument('--fitness_batch_size', type=int, default=10)
        self.parser.add_argument('--num_genes', type=int, default=119)
        self.parser.add_argument('--init_pop_size', type=int, default=1000)
        self.parser.add_argument('--init_range_low', type=float, default=0.1)
        self.parser.add_argument('--init_range_high', type=float, default=2.0)
        self.parser.add_argument('--gene_space_low', type=float, default=0.0)
        self.parser.add_argument('--gene_space_high', type=float, default=1.0)
        self.parser.add_argument('--sol_per_pop', type=int, default=20)
        self.parser.add_argument('--num_generations', type=int, default=100)
        self.parser.add_argument('--num_parents_mating', type=int, default=2)
        self.parser.add_argument('--mutation_percent_genes', type=float, default=0.05)
        self.parser.add_argument('--parent_selection_type', type=str, default='sss')
        self.parser.add_argument('--crossover_type', type=str, default='two_points')
        self.parser.add_argument('--mutation_type', type=str, default='scramble')
        self.parser.add_argument('--mutation_num_genes', type=int, default=2)
        self.parser.add_argument('--random_mutation_min_val', type=float, default=0.0)
        self.parser.add_argument('--random_mutation_max_val', type=float, default=1.0)
        self.parser.add_argument('--keep_parents', type=int, default=8)
        self.parser.add_argument('--keep_elitism', type=int, default=1)
        self.parser.add_argument('--ga_parallel_type', type=str, default='process')
        self.parser.add_argument('--ga_parallel', type=int, default=0)

        self.initialized = True

    def parse(self, args=None) -> argparse.Namespace:
        if not self.initialized: self.initialize()
        self.opt = self.parser.parse_args(args=args)
        return self.opt

3. base_map : This is one of the confidential inputs, but its shape is (1, 1, 82, 67), so you may just use random 4D floats.

4. geodata : This code is not confidential, but it uses geopandas for GIS data, so (sorry if you are not familiar with GIS) you probably don't want to read it. The purpose of geodata is to extract grid indices within 0 <= y < 82 and 0 <= x < 67; calling the geodata method returns an int array of shape (some_number, 2) (e.g., (1000, 2)). So you may just use random 2D ints.
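
A dummy stand-in consistent with that description might look like the sketch below; the count of 1000 indices is an arbitrary placeholder, and the first column indexes the 82-axis while the second indexes the 67-axis, matching how get_masked_map uses them.

import numpy as np

def geodata(key):
    # Dummy replacement: (N, 2) integer grid indices inside the (82, 67) grid.
    rows = np.random.randint(low=0, high=82, size=(1000, 1))
    cols = np.random.randint(low=0, high=67, size=(1000, 1))
    return np.hstack([rows, cols])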

5. device

device =  torch.device('cuda:0')

6. optim_target

target = 0.9 # some random float number
optim_target = {
     0 : target, 
     1 : target, 
     2 : target, 
     3 : target, 
     4 : target,
     5 : target, 
     6 : target, 
     7 : target, 
     8 : target, 
     9 : target, 
    10 : target,
    11 : target, 
    12 : target, 
    13 : target,
    14 : target, 
    15 : target, 
    16 : target 
}
ahmedfgad commented 1 year ago

I edited the code you sent because it had some things missing. I used torch.device('cpu:0') instead of torch.device('cuda:0') because my torch installation works on the CPU only. Is that OK, or do I have to use CUDA?

I have not yet reached the PyGAD error because I get this error from the model instead. Is there a way to fix it?

RuntimeError: Given groups=1, weight of size [64, 6, 3, 3], expected input[1, 1, 82, 67] to have 6 channels, but got 1 channels instead

This is the code I used. Please make the necessary edits so that this code returns the PyGAD error.

import numpy as np
import pygad
import torch
import csv

def geodata(key):
    return torch.from_numpy(np.random.randint(low=1, high=67, size=(key, 2)))

class CMAQNetGA(object):
    def __init__(self, model, opt, base_map, geodata, device) -> None:
        self.device = device
        self.model = model
        self.geodata = geodata
        self.opt = opt
        self.target_map = base_map.clone().detach().numpy()
        self.n_genes = 119
        self.ga_inst = pygad.GA(
            initial_population=self.initial_population(),
            init_range_low=self.opt.init_low,
            init_range_high=self.opt.init_high,
            gene_space={
                "low": 0.0, 
                "high": 1.0
            },
            num_generations=self.opt.n_generations,
            num_parents_mating=self.opt.n_mating,
            fitness_func=self.fitness_func,
            num_genes=self.n_genes,
            sol_per_pop=self.opt.sol_per_pop,
            parent_selection_type=self.opt.parent_selection_type,
            crossover_type=self.opt.crossover_type,
            mutation_type=self.opt.mutation_type,
            mutation_num_genes=2,
            random_mutation_min_val=0.0,
            random_mutation_max_val=1.0,
            mutation_by_replacement=True,
            fitness_batch_size=self.opt.batch_size,
            mutation_percent_genes=self.opt.mutation_percent,
            parallel_processing=[
                self.opt.ga_parallel_type, 
                self.opt.ga_parallel
            ],
            on_start=self.on_start,
            on_fitness=self.on_fitness, # empty method
            on_parents=self.on_parents,  # empty method
            on_crossover=self.on_crossover, # empty method
            on_mutation=self.on_mutation, # empty method
            on_generation=self.on_generation,
            on_stop=self.on_stop
        )
        self.ga_inst.summary()
        self.optim_target = dict(zip(list(range(17)), [1.0 for _ in range(17)]))
        self.base_target_score = None
        self.grid_num, self.mask = self.get_masked_map()
        self.result_file = open("GA_optim.csv", "w", newline='')

    def initial_population(self):
        return np.random.uniform(
            low=self.opt.init_low,
            high=self.opt.init_high,
            size=(self.opt.init_size, 119)
        )

    def get_masked_map(self) -> tuple[int, np.ndarray]:
        grid_num = 0
        mask = np.zeros((1, 1, 82, 67))
        for key in self.optim_target.keys():
            target_idx = self.geodata(key)
            grid_num += len(target_idx)
            print(target_idx.shape)
            for (x, y) in target_idx:
                mask[0, 0, x, y] = 1
        return grid_num, mask

    def fitness_func(self, ga_inst, solution, solution_idx):
        self.model.eval()
        with torch.no_grad():
            output = self.model(
                torch.FloatTensor(solution).reshape(self.opt.batch_size, self.n_genes).to(self.device)
            ).cpu().detach().numpy()

        fitness = 0
        if self.opt.batch_size == 1:
            fitness = np.sum(np.abs((output - self.target_conc)*self.mask))
        else:
            fitness = np.array([np.sum(np.abs((output[i] - self.target_conc[i])*self.mask[0])) \
                                  for i in range(self.opt.batch_size)])
        fitness /= self.grid_num
        fitness = 1 / (fitness + 1e-5)
        return fitness

    def on_start(self, ga_inst):
        print("[INFO] GA start")

    def on_fitness(self, ga_inst, population_fitness):
        ...

    def on_parents(self, ga_inst, selected_parents):
        ...

    def on_crossover(self, ga_inst, offspring_crossover):
        ...

    def on_mutation(self, ga_inst, offspring_mutation):
        ...

    def on_generation(self, ga_inst):
        solution, fitness, _ = self.ga_inst.best_solution()
        print("\r[IFNO] Generation {gen:5d} | Fitness {fit:.6f}"\
            .format(gen=ga_inst.generations_completed, fit=fitness), end='')
        csv.writer(self.result_file).writerow(solution.reshape(-1))

    def on_stop(self, ga_inst, last_population_fitness):
        self.result_file.close()
        print("\n[INFO] GA stop")

    def set_target(self, target:dict):
        self.optim_target = target
        for key, val in self.optim_target.items():
            target_idx = self.geodata(key)
            for (x, y) in target_idx:
                self.target_map[0, 0, x, y] *= val
        if self.opt.batch_size != 1:
            self.target_map = np.array([self.target_map for _ in range(self.opt.batch_size)])

    def run(self):
        return self.ga_inst.run()

    def show_result(self) -> tuple:
        fig = self.ga_inst.plot_fitness()
        solution, solution_fitness, solution_idx = self.ga_inst.best_solution()
        print("Fitness value of the best solution = {solution_fitness}"\
            .format(solution_fitness=solution_fitness))
        print("Index of the best solution : {solution_idx}"\
            .format(solution_idx=solution_idx))
        return fig, solution, solution_fitness, solution_idx

class Net(torch.nn.Module):
    def __init__(self, opt, base:torch.FloatTensor) -> None:
        super(Net, self).__init__()
        self.name = 'Net'
        self.base = base

        self.layer_fc_1 = torch.nn.Linear(119, 256)
        self.layer_elu_1 = torch.nn.GELU()
        self.layer_fc_2 = torch.nn.Linear(256, 256)
        self.layer_elu_2 = torch.nn.GELU()
        self.layer_fc_3 = torch.nn.Linear(256, 256)
        self.layer_elu_3 = torch.nn.GELU()
        self.layer_fc_4 = torch.nn.Linear(256, 1)

        self.layer_conv_1 = torch.nn.Conv2d(6, 64, 3, padding='same')
        self.layer_norm2d_1 = torch.nn.BatchNorm2d(num_features=64)
        self.layer_gelu_1 = torch.nn.GELU()
        self.layer_conv_2 = torch.nn.Conv2d(64, 128, 3, padding='same')
        self.layer_norm2d_2 = torch.nn.BatchNorm2d(num_features=128)
        self.layer_gelu_2 = torch.nn.GELU()
        self.layer_conv_3 = torch.nn.Conv2d(128, 64, 3, padding='same')
        self.layer_norm2d_3 = torch.nn.BatchNorm2d(num_features=64)
        self.layer_gelu_3 = torch.nn.GELU()
        self.layer_conv_4 = torch.nn.Conv2d(64, 1, 1, padding='same')

    def forward(self, x:torch.FloatTensor) -> torch.FloatTensor:
        x = self.layer_fc_1(x) # x has shape (batch_size, 119)
        x = self.layer_elu_1(x)
        x = self.layer_fc_2(x)
        x = self.layer_elu_2(x)
        x = self.layer_fc_3(x)
        x = self.layer_elu_3(x)
        x = self.layer_fc_4(x)

        base = self.layer_conv_1(self.base) # self.base has shape (batch_size, 6, 82, 67)
        base = self.layer_norm2d_1(base)
        base = self.layer_gelu_1(base)
        base = self.layer_conv_2(base)
        base = self.layer_norm2d_2(base)
        base = self.layer_gelu_2(base)
        base = self.layer_conv_3(base)
        base = self.layer_norm2d_3(base)
        base = self.layer_gelu_3(base)
        base = self.layer_conv_4(base)

        x = torch.einsum('bkij,bk->bkij', [base, x])
        # some hidden layers
        return x # should be (batch_size, 1, 82, 67) shape

import argparse
class Params(object):
    def __init__(self) -> None:
        self.parser = argparse.ArgumentParser()
        self.initialized = False
        self.init_low = -1
        self.init_high = 1
        self.init_size = 10
        self.n_generations = 5
        self.n_mating = 5
        self.sol_per_pop = 10
        self.parent_selection_type='sss'
        self.crossover_type='single_point'
        self.mutation_type='random'
        self.batch_size = 2
        self.mutation_percent=10
        self.ga_parallel_type='thread'
        self.ga_parallel=2

    def initialize(self) -> None:
        self.parser.add_argument('--data_dir', type=str, default="./datasets", help='Project data path')
        self.parser.add_argument('--model_dir', type=str, default="./models", help='Model file path')
        self.parser.add_argument('--gpu_id', type=int, default=0)
        self.parser.add_argument('--debug', action='store_true')
        self.parser.add_argument('--local-rank', type=int, default=0)
        self.parser.add_argument('--nproc_per_node', type=int, default=1)
        self.parser.add_argument('--nodes', type=int, default=1)

        self.parser.add_argument('--fitness_batch_size', type=int, default=10)
        self.parser.add_argument('--num_genes', type=int, default=119)
        self.parser.add_argument('--init_pop_size', type=int, default=1000)
        self.parser.add_argument('--init_range_low', type=float, default=0.1)
        self.parser.add_argument('--init_range_high', type=float, default=2.0)
        self.parser.add_argument('--gene_space_low', type=float, default=0.0)
        self.parser.add_argument('--gene_space_high', type=float, default=1.0)
        self.parser.add_argument('--sol_per_pop', type=int, default=20)
        self.parser.add_argument('--num_generations', type=int, default=100)
        self.parser.add_argument('--num_parents_mating', type=int, default=2)
        self.parser.add_argument('--mutation_percent_genes', type=float, default=0.05)
        self.parser.add_argument('--parent_selection_type', type=str, default='sss')
        self.parser.add_argument('--crossover_type', type=str, default='two_points')
        self.parser.add_argument('--mutation_type', type=str, default='scramble')
        self.parser.add_argument('--mutation_num_genes', type=int, default=2)
        self.parser.add_argument('--random_mutation_min_val', type=float, default=0.0)
        self.parser.add_argument('--random_mutation_max_val', type=float, default=1.0)
        self.parser.add_argument('--keep_parents', type=int, default=8)
        self.parser.add_argument('--keep_elitism', type=int, default=1)
        self.parser.add_argument('--ga_parallel_type', type=str, default='process')
        self.parser.add_argument('--ga_parallel', type=int, default=0)

        self.initialized = True

    def parse(self, args=None) -> argparse.Namespace:
        if not self.initialized: self.initialize()
        self.opt = self.parser.parse_args(args=args)
        return self.opt

base_map = torch.from_numpy(np.random.rand(1, 1, 82, 67))

# geodata = torch.from_numpy(np.random.randint(low=1, high=100, size=(1000, 2)))

device =  torch.device('cpu:0')

target = 0.9 # some random float number
optim_target = {
     0 : target, 
     1 : target, 
     2 : target, 
     3 : target, 
     4 : target,
     5 : target, 
     6 : target, 
     7 : target, 
     8 : target, 
     9 : target, 
    10 : target,
    11 : target, 
    12 : target, 
    13 : target,
    14 : target, 
    15 : target, 
    16 : target 
}

opt = Params()
model = Net(opt=opt, base=base_map)

ga = CMAQNetGA(model, 
               opt, 
               base_map=base_map, 
               geodata=geodata, 
               device=device)
ga.set_target(optim_target)
ga.run()
alien2327 commented 1 year ago

@ahmedfgad Sorry for the late response! A little bit busy these days ;)

I checked what I sent you the other day, and I wrote a wrong description. Sorry for the confusion.

The base map shape, which is mentioned in 3. base_map,

3. base_map : This is one of the confidential inputs, but its shape is (1, 1, 82, 67), so you may just use random 4D floats.

is actually (1, 6, 82, 67), which has 6 channels with height 82 and width 67.
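
So the dummy base map in the reproduction script would presumably be built with 6 channels, something like the line below (the .float() cast is only an assumption, since the Conv2d layers expect float32 input):

import numpy as np
import torch

# Dummy 6-channel base map matching the corrected shape (1, 6, 82, 67).
base_map = torch.from_numpy(np.random.rand(1, 6, 82, 67)).float()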

And yes, of course it doesn't matter which device is used, so using the CPU instead of the GPU is totally fine.

Best regards,