Akasxh / Terrain-Recognition

High accuracy, explainable, lightweight CNN for terrain recognition.
GNU General Public License v3.0

Enhancing Terrain-v3 as Terrain-v4 for Performance Optimization #20

Closed deepanshubaghel closed 1 month ago

deepanshubaghel commented 1 month ago

This pull request enhances the terrain recognition model. It implements a convolutional neural network (CNN) architecture designed for improved accuracy and efficiency in classifying terrain images.

### Key Changes

- Model Architecture
- Data Augmentation
- Training Process
- Model Evaluation
- Model Saving

After training, the model is saved for future use, allowing for easy loading and inference without retraining.
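The save-and-reload step described above typically looks like the following in Keras. This is a hedged sketch, not the PR's actual code: the file name `terrain_v4_demo.keras` and the tiny stand-in architecture are assumptions for illustration.

```python
import numpy as np
from tensorflow import keras

# Illustrative stand-in model; the real network is a deeper CNN.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(5, activation="softmax"),  # 5 terrain classes
])

model.save("terrain_v4_demo.keras")                      # architecture + weights
reloaded = keras.models.load_model("terrain_v4_demo.keras")

# The reloaded model produces identical predictions, no retraining needed.
x = np.random.rand(2, 4).astype("float32")
assert np.allclose(model.predict(x, verbose=0),
                   reloaded.predict(x, verbose=0))
```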

Organize the Dataset

```
Data Main/
├── train/
└── test/
```
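If the split does not exist yet, the skeleton can be created with a short script. The class names below are hypothetical; the thread only says there are 5 terrain classes.

```python
import os

# Hypothetical class names -- the discussion only mentions "5 classes".
CLASSES = ["desert", "forest", "grassland", "mountain", "water"]

def make_skeleton(root="Data Main"):
    """Create the train/test directory layout expected by the data loaders."""
    for split in ("train", "test"):
        for cls in CLASSES:
            os.makedirs(os.path.join(root, split, cls), exist_ok=True)
    return sorted(os.listdir(os.path.join(root, "train")))

print(make_skeleton())  # five class folders under each split
```

Images for each class then go into the matching folder under both `train/` and `test/`.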

issue #19


deepanshubaghel commented 1 month ago

@Akasxh

Akasxh commented 1 month ago

Well, I've just reviewed V4 and it seems excellent. Could you explain the key differences, other than using a scheduler while training the model?

I was implementing a similar approach, but it reached a point where it was not overfitting at all. Is there any other change you've implemented?

deepanshubaghel commented 1 month ago

> Well, I've just reviewed V4 and it seems excellent. Could you explain the key differences, other than using a scheduler while training the model?
>
> I was implementing a similar approach, but it reached a point where it was not overfitting at all. Is there any other change you've implemented?

V4 improves on V3 by adding Batch Normalization, more aggressive data augmentation, optimized steps per epoch, and validation steps. These changes make the model more robust, better regularized, and less prone to overfitting, leading to improved generalization.
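Of the changes listed, Batch Normalization is the most architectural: it standardizes each feature over the mini-batch before scaling and shifting with learned parameters, which regularizes training. A minimal NumPy sketch of the training-time forward pass (not the repo's code; `gamma` and `beta` would be learned in practice):

```python
import numpy as np

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
out = batch_norm_forward(x)
print(out.mean(axis=0))  # ~[0, 0]: each feature is centered
```

Because each layer then sees inputs on a stable scale regardless of how earlier layers shift during training, the network tolerates higher learning rates and generalizes better.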

Akasxh commented 1 month ago

Could you point out the difference? I have also noticed that there is one less dense layer at the output.

deepanshubaghel commented 1 month ago

> Could you point out the difference? I have also noticed that there is one less dense layer at the output.


Here, steps per epoch will be changed.

One major change between V3 and V4 was the dataset. In V4, a new dataset was introduced because the dataset used in V3 was not available. This change significantly impacted the model's performance and required adjustments to the preprocessing steps, and possibly the model architecture, to suit the new dataset.
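A dataset change also means recomputing `steps_per_epoch` from the new sample count. A quick sketch of the usual derivation; the sample count and batch size below are illustrative assumptions, not values from the PR:

```python
import math

def steps_per_epoch(num_samples, batch_size):
    """One 'step' consumes one batch; round up so no samples are dropped."""
    return math.ceil(num_samples / batch_size)

# Illustrative: 10,000 images with a batch size of 32.
print(steps_per_epoch(10_000, 32))  # → 313
```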

Akasxh commented 1 month ago

The change in dataset is a major one. Can you try using the dataset I have provided, and modify your dataset to include images from the provided one? Having one less terrain class makes a huge difference, and we are only using 5, so I don't want you to reduce it further. You can just use the dataset I've provided, add your images, and train the model with one more filter in the output.

Please do that.
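Going from 5 to 6 terrain classes only changes the final Dense layer (the "one more filter in the output"). A hedged Keras sketch with an illustrative stand-in for the real feature extractor:

```python
from tensorflow import keras

NUM_CLASSES = 6  # 5 classes from the provided dataset + 1 added terrain

# Illustrative stand-in; the real network is the V4 CNN.
model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # was Dense(5)
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
print(model.output_shape)  # (None, 6)
```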

Akasxh commented 1 month ago

https://drive.google.com/drive/folders/1hbL1m39TF8ABe0oCj5XYDbHXY-gPIjcQ?usp=drive_link

The dataset, with around 10k images across 5 classes.

deepanshubaghel commented 1 month ago

> The change in dataset is a major one. Can you try using the dataset I have provided, and modify your dataset to include images from the provided one? Having one less terrain class makes a huge difference, and we are only using 5, so I don't want you to reduce it further. You can just use the dataset I've provided, add your images, and train the model with one more filter in the output.
>
> Please do that.

Akasxh commented 1 month ago

Did you get it working now? How good was your accuracy after adding one more class and that dataset?

Akasxh commented 1 month ago

SGD with momentum is essentially Adam, isn't it?

deepanshubaghel commented 1 month ago

> SGD with momentum is essentially Adam, isn't it?

SGD with Momentum vs Adam

[training curve screenshots]

Adam with 4-layer CNN

[training curve screenshots]

RMSprop with 4-layer CNN

[training curve screenshots]

RMSprop with 3-layer CNN

[training curve screenshots]

Adagrad

[training curve screenshots]

Akasxh commented 1 month ago

RMSprop with the 3-layer CNN gave pretty good results. Why do you think it gave such good results?

Akasxh commented 1 month ago

Well, please write a separate README along with this commit that documents any unique findings you discovered while experimenting with these architectures. The findings would help people learn more about the optimizers, as well as how the architecture affects the accuracy.

deepanshubaghel commented 1 month ago

> RMSprop with the 3-layer CNN gave pretty good results. Why do you think it gave such good results?

RMSprop worked well with the 3-layer CNN because it adapts the learning rate per parameter, allowing the model to learn efficiently without getting stuck in local minima. I've been experimenting with different optimizers to see which one delivers the best performance, since finding the right optimizer can make a big difference in how well the model trains.

Akasxh commented 1 month ago

The same is true of Adam; in fact, Adam is essentially momentum + RMSprop.

Adam has an adaptive learning rate built in, and converges faster due to momentum.

There are a few variants of Adam that outperform it, but overall Adam is known for its efficiency relative to computation time.
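The "Adam is momentum + RMSprop" point can be seen directly in the update rules: Adam keeps momentum's running gradient average (first moment) and RMSprop's running squared-gradient scaling (second moment). A toy NumPy sketch on f(w) = w², not any library's exact implementation:

```python
import numpy as np

def step_momentum(w, g, v, lr=0.1, beta=0.9):
    v = beta * v + g                 # running average of gradients
    return w - lr * v, v

def step_rmsprop(w, g, s, lr=0.1, rho=0.9, eps=1e-8):
    s = rho * s + (1 - rho) * g**2   # running average of squared gradients
    return w - lr * g / (np.sqrt(s) + eps), s

def step_adam(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # momentum-style first moment
    v = b2 * v + (1 - b2) * g**2     # RMSprop-style second moment
    m_hat = m / (1 - b1**t)          # bias correction for early steps
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(w) = w^2 (gradient 2w) with Adam for a few steps.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 51):
    w, m, v = step_adam(w, 2 * w, m, v, t)
print(abs(w) < 5.0)  # True: Adam moves toward the minimum at 0
```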

deepanshubaghel commented 1 month ago

> The same is true of Adam; in fact, Adam is essentially momentum + RMSprop.
>
> Adam has an adaptive learning rate built in, and converges faster due to momentum.
>
> There are a few variants of Adam that outperform it, but overall Adam is known for its efficiency relative to computation time.

Thanks, @Akasxh! I really appreciate your insights. I’m looking forward to diving into those alternatives and seeing how they perform. It’s an exciting journey for me as I explore the impact of different optimizers on model training. I’m eager to learn more and see what I can discover!

deepanshubaghel commented 1 month ago

@Akasxh