@dhanasekar416 Answers below!
Thanks for the reply, Greg.
@dhanasekar416 More answers below.
100 GB of RAM is more than sufficient; the instance just comes with more than is needed. Scaling the images down to 128x128 shouldn't cause any memory issues with that much DRAM.
I did try with 512x512, but found no noticeable improvement.
I might have an S3 download link in the future, once EyeNet is a little more performant. Versions: TensorFlow 1.2, CUDA 8, cuDNN 7 (I think this is correct).
That Compute Engine should work, no problem; the GPUs are the priority for this type of problem. Training took roughly 30-40 minutes with 8 GPUs. I would not recommend training on CPUs.
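For reference, here's a minimal multi-GPU training sketch using the modern `tf.distribute` API. This isn't necessarily how EyeNet was trained at the time (TensorFlow 1.2), just one way to spread training across available GPUs; the model and dataset names below are placeholders:

```python
# Hedged sketch: mirror the model across all visible GPUs so batches are split between them.
# This uses TF 2.x idioms, not the original TF 1.2 setup.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # picks up every GPU it can see
print("Replicas in use:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(256, 256, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_dataset, epochs=10)  # each batch is automatically split across the GPUs
```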
My model follows the VGG architecture. VGG stacks multiple convolutional layers before each pooling layer, as opposed to alternating layer > pool > layer > pool, etc. Blocks of layers with the same filter size, applied multiple times, are used to extract more complex and representative features; a rough sketch is below.
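In Keras, the block pattern looks roughly like this (layer counts, filter sizes, and the output classes are illustrative, not EyeNet's exact configuration):

```python
# Illustrative VGG-style blocks: several same-size convs per block, then a single pool.
from tensorflow.keras import layers, models

def vgg_block(model, n_convs, n_filters, **first_layer_kwargs):
    """Stack n_convs 3x3 conv layers with the same filter count, then pool once."""
    for i in range(n_convs):
        kwargs = first_layer_kwargs if i == 0 else {}
        model.add(layers.Conv2D(n_filters, (3, 3), padding="same",
                                activation="relu", **kwargs))
    model.add(layers.MaxPooling2D((2, 2)))

model = models.Sequential()
vgg_block(model, n_convs=2, n_filters=32, input_shape=(256, 256, 3))  # resized retina images
vgg_block(model, n_convs=2, n_filters=64)
vgg_block(model, n_convs=3, n_filters=128)
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dense(2, activation="softmax"))  # e.g. DR / no-DR
```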
I've tried to do this, but felt it took more time than it was worth. It might be something I add in the future!
@gregwchase Do you have a detailed report explaining the complete project clearly, e.g. why you used 3 conv layers, the underlying principles, and so on? If you do, could you post it? Also, if I resize the images to 128x128 during preprocessing, will the accuracy still be good?
Hi Greg. If I use the original pixel values instead of resizing, will it improve my accuracy? I am planning to use an AWS p2.16xlarge.
@CodeRed1704 Since the images all vary in size, they need to be resized to the same dimensions.
That said, using larger images can improve accuracy; for this project, I didn't notice an improvement using images at 512x512 resolution. Because of this, they were resized to 256x256. This yielded faster training, with comparable results.
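A minimal resizing pass with OpenCV might look like this (directory names and output size are placeholders, not the repo's actual layout):

```python
# Sketch: resize every retina image to one fixed size (e.g. 256x256) before training.
import os
import cv2

def resize_images(input_dir, output_dir, size=(256, 256)):
    """Read each image in input_dir, resize it, and write it to output_dir."""
    os.makedirs(output_dir, exist_ok=True)
    for name in os.listdir(input_dir):
        img = cv2.imread(os.path.join(input_dir, name))
        if img is None:          # skip files that aren't readable images
            continue
        resized = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(output_dir, name), resized)

resize_images("train_raw", "train_256")
```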
A p2.16xlarge instance may be overkill; I'd suggest a p2.8xlarge.
Hi Greg, I have a few queries. Please do answer.
Please do reply.