hazxone opened this issue 6 years ago
Hi @hazxone,
I haven't tried training with images of size 128x128. How about changing the learning rate? I'd like to try it when I have time...
Hi @yu4u, yup, I've tried changing the learning rate to 1e-3, 1e-2, and 1e-1. All of them end up with the same issue as above. Weirdly, if I train with an image size of 64, the runs converge nicely to a loss of 3.25. (I'm using the Adience and UTKFace datasets because I need to detect ages from 1 to 100; I noticed the pretrained weights don't detect ages 20 and below.)
Since the network uses a pure CNN and not face embeddings (like FaceNet), is it better to use a dataset with random, varied head poses or one where all faces have been properly aligned?
Ground truth: 26 F
Ground truth: 29 F
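For reference, here is a minimal sketch of how a learning-rate sweep with a step decay could be wired into Keras training. This is not the repo's train.py; `model`, `X_train`, `y_gender`, and `y_age` are assumed to already exist, and the decay settings are arbitrary choices for the sketch:

```python
from keras.callbacks import LearningRateScheduler
from keras.optimizers import SGD


def step_decay(initial_lr):
    # Halve the learning rate every 10 epochs (arbitrary schedule for the sketch).
    def schedule(epoch):
        return initial_lr * (0.5 ** (epoch // 10))
    return schedule


# Try a few initial learning rates, each with its own decay schedule.
for initial_lr in (1e-1, 1e-2, 1e-3):
    model.compile(optimizer=SGD(lr=initial_lr, momentum=0.9, nesterov=True),
                  loss=["categorical_crossentropy", "categorical_crossentropy"],
                  metrics=["accuracy"])
    model.fit(X_train, [y_gender, y_age], epochs=30,
              callbacks=[LearningRateScheduler(step_decay(initial_lr))])
```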
I'm afraid the current WideResNet model is not well suited to this resolution because it assumes CIFAR-10-sized inputs (32x32). With 32x32 inputs the final feature maps are 8x8, but with 128x128 inputs they are 32x32, so the fixed 8x8 average pooling followed by Flatten feeds a 16x larger vector into the Dense layers.
Could you see whether the following modification of wide_resnet.py resolves the issue or not?
before
pool = AveragePooling2D(pool_size=(8, 8), strides=(1, 1), padding="same")(relu)
flatten = Flatten()(pool)
predictions_g = Dense(units=2, kernel_initializer=self._weight_init, use_bias=self._use_bias,
                      kernel_regularizer=l2(self._weight_decay), activation="softmax",
                      name="pred_gender")(flatten)
predictions_a = Dense(units=101, kernel_initializer=self._weight_init, use_bias=self._use_bias,
                      kernel_regularizer=l2(self._weight_decay), activation="softmax",
                      name="pred_age")(flatten)
after
flatten = GlobalAveragePooling2D()(relu)
predictions_g = Dense(units=2, kernel_initializer=self._weight_init, use_bias=self._use_bias,
                      kernel_regularizer=l2(self._weight_decay), activation="softmax",
                      name="pred_gender")(flatten)
predictions_a = Dense(units=101, kernel_initializer=self._weight_init, use_bias=self._use_bias,
                      kernel_regularizer=l2(self._weight_decay), activation="softmax",
                      name="pred_age")(flatten)
and import GlobalAveragePooling2D:
from keras.layers import Input, Activation, add, Dense, Flatten, Dropout, GlobalAveragePooling2D
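For a quick sanity check of why this change matters at 128x128, here is a standalone shape comparison. It is not the repo's code; the two stride-2 convolutions merely stand in for the WideResNet trunk, which also downsamples by a total factor of 4:

```python
from keras.layers import Input, Conv2D, AveragePooling2D, Flatten, GlobalAveragePooling2D
from keras.models import Model

inputs = Input(shape=(128, 128, 3))
# Stand-in for the WideResNet trunk: two stride-2 stages, 128 -> 64 -> 32.
x = Conv2D(64, (3, 3), strides=(2, 2), padding="same")(inputs)
x = Conv2D(64, (3, 3), strides=(2, 2), padding="same")(x)

# Original head: 8x8 average pooling with stride 1 and "same" padding keeps the
# 32x32 grid, so Flatten produces a very large feature vector.
old = Flatten()(AveragePooling2D(pool_size=(8, 8), strides=(1, 1), padding="same")(x))
# Modified head: global average pooling collapses the grid to one value per channel.
new = GlobalAveragePooling2D()(x)

print(Model(inputs, old).output_shape)  # (None, 65536): 32 * 32 * 64 features into Dense
print(Model(inputs, new).output_shape)  # (None, 64): one value per channel
```

With the original head, the Dense layers at 128x128 would see a 65536-dimensional input; GlobalAveragePooling2D reduces that to one value per channel regardless of input resolution.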
Hi, an update on the modified GlobalAveragePooling2D with image size 128: still no success. Even though the loss drops to 4.73 after 17 epochs, the model only predicts 32 F on all faces.
How do I add another downsampling stage to the WideResNet? Right now it is 128 --> 64 --> 32 --> prediction; I want 128 --> 64 --> 32 --> 16 --> prediction.
This is the log file if you are interested. log (2).zip
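Regarding the extra downsampling stage, here is a hedged, standalone sketch of the pattern only. It is not the repo's wide_resnet.py: the simplified basic_block below omits the paper's dropout and repeated blocks, and in the actual file the change would correspond to adding a fourth residual group with stride (2, 2) after the existing third group and feeding it into the final BatchNormalization/Activation/pooling head:

```python
from keras.layers import (Input, Conv2D, BatchNormalization, Activation, add,
                          GlobalAveragePooling2D, Dense)
from keras.models import Model
from keras.regularizers import l2


def basic_block(x, filters, stride, weight_decay=0.0005):
    # Simplified pre-activation residual block; one block stands in for a full WRN group.
    shortcut = Conv2D(filters, (1, 1), strides=stride, padding="same",
                      kernel_regularizer=l2(weight_decay))(x)
    y = BatchNormalization()(x)
    y = Activation("relu")(y)
    y = Conv2D(filters, (3, 3), strides=stride, padding="same",
               kernel_regularizer=l2(weight_decay))(y)
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters, (3, 3), padding="same",
               kernel_regularizer=l2(weight_decay))(y)
    return add([shortcut, y])


k = 8  # width multiplier
inputs = Input(shape=(128, 128, 3))
x = Conv2D(16, (3, 3), padding="same")(inputs)
x = basic_block(x, 16 * k, stride=(1, 1))    # 128 x 128
x = basic_block(x, 32 * k, stride=(2, 2))    # 64 x 64
x = basic_block(x, 64 * k, stride=(2, 2))    # 32 x 32
x = basic_block(x, 128 * k, stride=(2, 2))   # 16 x 16  <- the extra stage
x = Activation("relu")(BatchNormalization()(x))
x = GlobalAveragePooling2D()(x)
pred_gender = Dense(units=2, activation="softmax", name="pred_gender")(x)
pred_age = Dense(units=101, activation="softmax", name="pred_age")(x)
model = Model(inputs=inputs, outputs=[pred_gender, pred_age])
model.summary()
```

The extra stride-2 group halves the spatial resolution once more (32 -> 16) and, following the WideResNet convention, doubles the channel width again.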
@hazxone I'm trying to train on image size 128 for gender only, with a smaller model of depth 10 and width 4. I guess this is a bad idea? By the way, can you put the .mat somewhere for download? I keep getting a memory error when running create_db.py with image size 128.
Traceback (most recent call last):
File "create_db.py", line 64, in <module>
main()
File "create_db.py", line 55, in main
img = cv2.imread(root_path + str(full_path[i][0]))
cv2.error: OpenCV(4.1.1) /io/opencv/modules/core/src/alloc.cpp:72: error: (-4:Insufficient memory) Failed to allocate 750000 bytes in function 'OutOfMemoryError'
I was able to create wiki_db.mat with size 128. So now I saw this: https://www.mathworks.com/matlabcentral/answers/63010-concatenating-mat-files-into-a-single-file and am thinking of running create_db.py on the first half of the imdb dataset into one .mat, then the second half into another .mat, and concatenating them in MATLAB or Octave. Is there a better way?
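If the two-halves approach is used, the concatenation could also be done in Python with scipy instead of MATLAB/Octave. A minimal sketch, assuming both files were written by create_db.py with keys such as "image", "gender", and "age" (the part file names are hypothetical; check loadmat(...).keys() first, and copy any scalar metadata keys from one half). Note this still has to hold both halves plus the merged arrays in RAM at once:

```python
import numpy as np
import scipy.io

half1 = scipy.io.loadmat("data/imdb_db_part1.mat")   # hypothetical file names
half2 = scipy.io.loadmat("data/imdb_db_part2.mat")

merged = {
    # images are saved as (N, size, size, 3): stack along the sample axis
    "image": np.concatenate([half1["image"], half2["image"]], axis=0),
    # 1-D label vectors typically come back from loadmat as shape (1, N): stack along axis 1
    "gender": np.concatenate([half1["gender"], half2["gender"]], axis=1),
    "age": np.concatenate([half1["age"], half2["age"]], axis=1),
    # copy any scalar metadata (e.g. img_size) from one half here if create_db.py wrote it
}

scipy.io.savemat("data/imdb_db.mat", merged)
```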
But I got another memory error.
Traceback (most recent call last):
File "create_db.py", line 106, in <module>
main()
File "create_db.py", line 65, in main
output1 = {"image": np.array(out_imgs), "gender": np.array(out_genders), \
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (88451, 128, 128, 3) and data type uint8
Even though I did the following:
sudo su
root@deeplearning-test:/home/ubuntu/pixelation/age-gender-estimation# echo 1 > /proc/sys/vm/overcommit_memory
and
sudo su
root@deeplearning-test:/home/ubuntu/pixelation/age-gender-estimation/data# echo "2" > /proc/sys/vm/overcommit_memory
I reduced the size to 64 but still got a memory error, although imdb_db.mat was created in the data folder.
python create_db.py --output data/imdb_db.mat --db imdb --img_size 64
100%|█████████████████████████████████| 460723/460723 [02:56<00:00, 2608.98it/s]
Traceback (most recent call last):
File "create_db.py", line 108, in <module>
main()
File "create_db.py", line 68, in main
scipy.io.savemat(output_path, output)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 219, in savemat
MW.put_variables(mdict)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 849, in put_variables
self._matrix_writer.write_top(var, asbytes(name), is_global)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 590, in write_top
self.write(arr)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 629, in write
self.write_numeric(narr)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 655, in write_numeric
self.write_element(arr)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 496, in write_element
self.write_regular_element(arr, mdtype, byte_count)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 512, in write_regular_element
self.write_bytes(arr)
File "/home/ubuntu/.local/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 480, in write_bytes
self.file_stream.write(arr.tostring(order='F'))
MemoryError
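One possible workaround for these MemoryErrors (not part of the repo) is to skip the single giant in-memory array and scipy.io.savemat entirely and stream the images into an HDF5 file with h5py: 88451 x 128 x 128 x 3 uint8 is already about 4.3 GB, and savemat needs roughly a second full copy while serializing. The glob pattern, file names, and placeholder labels below are assumptions, not the repo's actual paths:

```python
from glob import glob

import cv2
import h5py
import numpy as np

img_size = 128
# Hypothetical layout; point this at your cropped imdb images.
image_paths = sorted(glob("data/imdb_crop/*/*.jpg"))
# Placeholder labels; in practice these come from the imdb metadata that create_db.py parses.
genders = np.zeros(len(image_paths), dtype=np.uint8)
ages = np.zeros(len(image_paths), dtype=np.uint8)

with h5py.File("data/imdb_db.h5", "w") as f:
    images = f.create_dataset("image",
                              shape=(len(image_paths), img_size, img_size, 3),
                              dtype=np.uint8)
    for i, path in enumerate(image_paths):
        img = cv2.imread(path)
        if img is None:  # skip unreadable files
            continue
        images[i] = cv2.resize(img, (img_size, img_size))  # written straight to disk
    f.create_dataset("gender", data=genders)
    f.create_dataset("age", data=ages)
```

The training side would then read "image", "gender", and "age" from the HDF5 file (e.g. with h5py) instead of scipy.io.loadmat.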
Hi yu4u,
I want to train the network with an image size of 128 (40k images). However, the loss stops decreasing at 24 (around 20 epochs) and doesn't go down anymore. I've tried Adam, SGD, and RMSprop, but the results are all the same. Do you have any tips for training the network from scratch? Or are there any pretrained weights for image size 128?
Thanks for the age-gender-estimation code!