thushv89 / AdaCNN
AdaCNN algorithm. Clean implementation
0 stars · 0 forks

Issues
#31 Major Bug: In updating target network weights (closed, 6 years ago, 0 comments)
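Issue #31 above concerns a bug in updating the target network's weights. The repository's actual fix isn't shown here; as a hedged sketch under the assumption that AdaCNN's Q-learner follows the usual DQN pattern, the standard soft (Polyak) update looks like this. All names (`soft_update`, `online`, `target`, `tau`) are illustrative, not taken from the code.

```python
def soft_update(target_w, online_w, tau=0.01):
    """Polyak-average online weights into the target network.

    target <- tau * online + (1 - tau) * target, applied per parameter.
    A common bug is assigning target <- online on every step (tau = 1),
    which removes the stabilising lag the target network is meant to
    provide.
    """
    return {name: tau * online_w[name] + (1.0 - tau) * w
            for name, w in target_w.items()}

# Toy parameters standing in for network weights.
online = {"w1": 1.0, "b1": 0.5}
target = {"w1": 0.0, "b1": 0.0}
target = soft_update(target, online, tau=0.1)
# After one update: target["w1"] == 0.1, target["b1"] == 0.05
```

The alternative convention is a hard update (copy all weights every N steps); either way, the target must only track the online network slowly.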
#30 Separate sigmoid units for the remove/add/no-adapt actions (open, 6 years ago, 0 comments)
#29 Clean up multi-GPU code (closed, 6 years ago, 0 comments)
#28 Try initializing momentum with small random weights (closed, 6 years ago, 1 comment)
#27 Try bumping up the existing weights by the fraction removed at each action (closed, 6 years ago, 1 comment)
#26 Is L2 decay hampering learning in AdaCNN? (closed, 6 years ago, 2 comments)
#25 Detect what's causing NaNs (closed, 6 years ago, 1 comment)
#24 Introduce the Age Parameter to keep past knowledge safe (closed, 6 years ago, 0 comments)
#23 Try out a new action space (closed, 6 years ago, 0 comments)
#22 Bug: Some feed_dict arguments were not using the normalization method for states; the state was fed in without normalizing (closed, 6 years ago, 0 comments)
#21 Reset the network and try several different policies, without continually adapting the structure from the beginning (closed, 6 years ago, 0 comments)
#20 Implement RMSProp (closed, 6 years ago, 1 comment)
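Issue #20 asks for an RMSProp implementation. As a minimal plain-Python sketch (scalar weights, standard hyper-parameters; not the repository's code), the update keeps a running average of squared gradients and divides the step by its square root:

```python
def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update on a scalar weight.

    cache is an exponential moving average of squared gradients;
    dividing by its square root gives each weight an adaptive
    step size, and eps guards against division by zero.
    """
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (cache ** 0.5 + eps)
    return w, cache

# Minimise f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, cache = 1.0, 0.0
for _ in range(500):
    w, cache = rmsprop_step(w, 2.0 * w, cache, lr=0.01)
# w has moved close to the minimum at 0
```

In practice each parameter tensor carries its own `cache`, but the per-element arithmetic is exactly this.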
#19 Implement BatchNormalization (closed, 6 years ago, 1 comment)
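For issue #19, a hedged sketch of the batch-normalization forward pass (training-time statistics over a list of scalars; `gamma` and `beta` are the learnable scale and shift — names are the conventional ones, not identifiers from this repository):

```python
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch to zero mean / unit variance, then scale and shift."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    # eps keeps the division stable when the batch variance is tiny.
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
# out has mean ~0 and variance ~1
```

A full implementation also tracks running mean/variance for inference; in a framework like TensorFlow one would normally use the built-in batch-norm op rather than hand-rolling this.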
#18 A new training mechanism, as opposed to naively training all the weights of the CNN (open, 7 years ago, 0 comments)
#17 Bug: AdaCNN was not being trained on current data because the naivetrain action was returning the string donothing (closed, 7 years ago, 0 comments)
#16 Something wrong with invalid-action checking? (closed, 7 years ago, 0 comments)
#15 New pruning selection technique (closed, 7 years ago, 1 comment)
#14 Current code uses a lot of hacky if cases to treat ImageNet separately (open, 7 years ago, 0 comments)
#13 Adapting fulcon layers (if they exist) (closed, 7 years ago, 0 comments)
#12 New policy for adapting the structure (closed, 7 years ago, 1 comment)
#11 Try the following way of optimizing AdaCNN (open, 7 years ago, 0 comments)
#10 Change the iteration from epoch -> batch to epoch -> task -> batch (closed, 7 years ago, 0 comments)
#9 Things to keep in mind (open, 7 years ago, 0 comments)
#8 Might have been calculating the Q-learning loss wrong (open, 7 years ago, 1 comment)
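Issue #8 suspects the Q-learning loss was computed wrong. For reference, a sketch of the standard one-step target and squared TD error (the values and the `q_loss` name are illustrative; this is the textbook formula, not the repository's implementation):

```python
def q_loss(q_sa, reward, q_next_max, gamma=0.9, terminal=False):
    """Squared TD error for one transition.

    target = r                             if the episode ends here
    target = r + gamma * max_a' Q(s', a')  otherwise

    Frequent bugs: bootstrapping through terminal states, or taking
    the max over the wrong axis when Q arrives as a batch of vectors.
    """
    target = reward if terminal else reward + gamma * q_next_max
    return (q_sa - target) ** 2

loss = q_loss(q_sa=0.5, reward=1.0, q_next_max=2.0, gamma=0.9)
# target = 1.0 + 0.9 * 2.0 = 2.8, so loss = (0.5 - 2.8)^2 = 5.29
```

Note the target is treated as a constant during backpropagation; letting gradients flow through `q_next_max` is another classic source of a "wrong" Q-learning loss.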
#7 Additional reward if max accuracy is pushed up by either an add or a remove action (closed, 7 years ago, 0 comments)
#6 Get the mean activation instead of the max activation and plot Q values side by side (closed, 7 years ago, 1 comment)
#5 Don't train the Q-learner while just taking the finetune action (non-adaptive) (closed, 7 years ago, 0 comments)
#4 Try having a reward for multiple actions instead of one action when training the function approximator (closed, 7 years ago, 1 comment)
#3 The ops keep increasing every batch (closed, 7 years ago, 0 comments)
#2 Commented out all multiprocessing code in ada_cnn.py (open, 7 years ago, 0 comments)
#1 DataGenerator outputs num_gpus batches at once (closed, 6 years ago, 0 comments)