Closed: gitped closed this issue 2 years ago
We evaluated 3 separate times with `REPETITIONS=1` in order to parallelize evaluation across multiple GPUs; `REPETITIONS=3` would also be fine.
You can modify the train.py of the desired model to include the following at the very beginning (you can set the seed value as per your choice):

```python
import random
import numpy as np
import torch

def set_seed(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Make cuDNN deterministic (may slow down training).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```
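A quick way to convince yourself the seeding works: with the seed fixed, repeated runs produce identical random draws. A minimal check (torch omitted so it runs without a GPU install; the `draw` helper is hypothetical):

```python
import random
import numpy as np

def draw(seed):
    # Re-seed both RNGs, then take one sample from each.
    random.seed(seed)
    np.random.seed(seed)
    return random.random(), float(np.random.rand())

# Two runs with the same seed yield identical samples.
print(draw(42) == draw(42))  # True
```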
Thank you. Also, do you happen to know the time complexities of CILRS, AIM, and the 3 fusion models you test?
What exactly do you mean by time complexities (train time, eval time, or algorithmic complexity)?
I mean their algorithmic complexities in big O notation, such as O(n^2).
Let N be the number of tokens (for HxW grid data like images or LiDAR BEV, N = H*W)
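Assuming the fusion modules use standard transformer self-attention over these N tokens, the pairwise score matrix dominates the cost: Q·Kᵀ is N×N, i.e. O(N^2) in the number of tokens. A minimal sketch (grid sizes are hypothetical, for illustration only):

```python
def attention_scores(H, W):
    # One token per cell of an H x W feature grid.
    N = H * W
    # Q @ K^T is an N x N matrix, so scoring costs O(N^2).
    return N, N * N

N1, s1 = attention_scores(8, 8)    # N = 64
N2, s2 = attention_scores(16, 16)  # doubling H and W gives 4x tokens
print(s2 // s1)  # 16: quadratic growth in N
```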
Hello,
1.- In your transfuser paper you report the mean and std dev over 9 runs of each method (3 training seeds, each seed evaluated 3 times). Does this mean that you changed the `REPETITIONS` value in the run_evaluation.sh script to `REPETITIONS=3`, or did you evaluate each model 3 separate times with `REPETITIONS=1`?
2.- I understand that each model is generated with a random training seed, leading to some variance in the results. If I wanted to make the models more reproducible and deterministic, in what part of the code can I set fixed training seeds or modify the network initialization method?