abhisheks008 / DL-Simplified

Deep Learning Simplified is an open-source repository containing beginner to advanced level deep learning projects for contributors who are willing to start their journey in Deep Learning. Devfolio URL: https://devfolio.co/projects/deep-learning-simplified-f013
MIT License
321 stars 290 forks

Tuav Fire Detection using DL #516

Open abhisheks008 opened 1 month ago

abhisheks008 commented 1 month ago

Deep Learning Simplified Repository (Proposing new issue)

:red_circle: Project Title : Tuav Fire Detection using DL
:red_circle: Aim : The aim is to detect fire from the given images in the dataset using image processing and deep learning methods.
:red_circle: Dataset : https://www.kaggle.com/datasets/enesyurt/tuav-fire-detection
:red_circle: Approach : Try to use 3-4 algorithms to implement the models and compare them all to find the best-fitting algorithm by checking the accuracy scores. Also, do not forget to do an exploratory data analysis before creating any model.
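
For the exploratory data analysis, a minimal first sketch (assuming the Kaggle dataset is unpacked locally; the path below is a placeholder) is simply counting the images contributed by each folder:

    import os
    from collections import Counter

    dataset_path = 'path_to_the_unpacked_dataset'  # placeholder path

    # Count how many images each of the dataset folders contributes
    counts = Counter()
    for folder_name in sorted(os.listdir(dataset_path)):
        folder_path = os.path.join(dataset_path, folder_name)
        if os.path.isdir(folder_path):
            counts[folder_name] = len(os.listdir(folder_path))

    for folder_name, n in counts.items():
        print(f'{folder_name}: {n} images')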


📍 Follow the Guidelines to Contribute to the Project :


:red_circle::yellow_circle: Points to Note :


:white_check_mark: To be Mentioned while taking the issue :


Happy Contributing 🚀

All the best. Enjoy your open source journey ahead. 😎

jahnvisahni31 commented 1 month ago

Full name : Jahnvi Sahni
GitHub Profile Link : https://github.com/jahnvisahni31
Email ID : jahnvisahni98@gmail.com
Approach for this Project : model implementation using DL
What is your participant role? GSSOC'24

I want to contribute to this

abhisheks008 commented 1 month ago

Need some clarification on the approach @jahnvisahni31

jahnvisahni31 commented 1 month ago

We can use a CNN, VGG16, ResNet50, and InceptionV3, and then do a comparative analysis to see which one performs better.
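
For example, a rough sketch of how those pretrained backbones could be set up for such a comparison (torchvision ImageNet weights assumed; each head is swapped for a 2-class fire / non-fire output):

    import torch.nn as nn
    from torchvision import models

    def build_backbones(num_classes=2):
        # Load ImageNet-pretrained backbones and replace their classification heads
        resnet = models.resnet50(pretrained=True)
        resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

        vgg = models.vgg16(pretrained=True)
        vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

        inception = models.inception_v3(pretrained=True)  # note: InceptionV3 expects 299x299 inputs
        inception.fc = nn.Linear(inception.fc.in_features, num_classes)

        return {'resnet50': resnet, 'vgg16': vgg, 'inception_v3': inception}

Each model can then be trained and evaluated with the same loop, and the accuracy scores compared side by side.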

abhisheks008 commented 1 month ago

Assigned to you @jahnvisahni31

jahnvisahni31 commented 1 month ago

Thank you, I will get this done at the earliest.

abhisheks008 commented 1 month ago

It's okay, no need to hurry. It's a long event, take your time and showcase the best result.

jahnvisahni31 commented 1 month ago

For this we have to make the train and test directories, because the dataset provides 9 folders with mixed images of fire and non-fire. Using an ImageNet-pretrained model we could automatically categorize the dataset, but I don't know how to approach the other models. Can you guide me a little? @abhisheks008

abhisheks008 commented 1 month ago

Certainly! Here’s a step-by-step guide on how to approach the task of organizing your dataset into train and test sets and categorizing the images using models beyond ImageNet:

Step 1: Dataset Preparation

  1. Organize the Dataset: You mentioned having 9 folders with mixed images of fire and non-fire. First, you need to create two main directories: train and test.

  2. Split the Data:

    • Random Split: Randomly split the images into training and testing datasets. For instance, you could use an 80/20 split.
    • Maintain Distribution: Ensure that both fire and non-fire images are well-represented in both sets (a stratified variant is sketched right after the snippet below).

    Here’s a simple way to do this using Python:

    import os
    import shutil
    from sklearn.model_selection import train_test_split
    
    # Paths
    dataset_path = 'path_to_your_dataset'
    train_path = 'path_to_train'
    test_path = 'path_to_test'
    
    # Create train and test directories
    os.makedirs(train_path, exist_ok=True)
    os.makedirs(test_path, exist_ok=True)
    
    # Gather all image paths
    images = []
    for folder_name in os.listdir(dataset_path):
        folder_path = os.path.join(dataset_path, folder_name)
        if not os.path.isdir(folder_path):
            continue  # skip any stray files at the top level of the dataset
        for img_name in os.listdir(folder_path):
            images.append(os.path.join(folder_path, img_name))
    
    # Split the data
    train_images, test_images = train_test_split(images, test_size=0.2, random_state=42)
    
    # Move images to respective directories
    for img_path in train_images:
        shutil.copy(img_path, train_path)
    for img_path in test_images:
        shutil.copy(img_path, test_path)
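
    If you also want the split itself to keep the fire / non-fire ratio balanced (the "Maintain Distribution" point above), a minimal stratified variant, assuming the class can be read from the file name (adjust this to the dataset's actual naming scheme):

    # Derive a label per image from its file name (assumption: names start with
    # either "fire" or "non_fire"; change this check to match the real convention)
    labels = ['non_fire' if os.path.basename(p).lower().startswith('non') else 'fire'
              for p in images]

    # A stratified 80/20 split keeps the class proportions the same in both sets
    train_images, test_images = train_test_split(
        images, test_size=0.2, random_state=42, stratify=labels
    )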

Step 2: Categorizing Images

To categorize the images, you can use a pre-trained model or train a new model. Here’s an outline of both approaches:

Using a Pre-trained Model (e.g., ResNet, VGG, etc.)

  1. Load a Pre-trained Model:

    • Use a model like ResNet or VGG, which are available in libraries such as TensorFlow or PyTorch.
    import torch
    from torchvision import models, transforms
    from PIL import Image
    
    # Load pre-trained model
    model = models.resnet50(pretrained=True)
    model.eval()
    
    # Define image transformations
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    
    # Function to classify an image
    def classify_image(image_path):
        input_image = Image.open(image_path).convert('RGB')  # make sure the image has 3 channels
        input_tensor = preprocess(input_image)
        input_batch = input_tensor.unsqueeze(0)
    
        with torch.no_grad():
            output = model(input_batch)
    
        # Output predictions
        _, predicted = torch.max(output, 1)
        return predicted.item()
  2. Categorize the Images:

    • Use the model to predict whether each image is of fire or not (see the note after this snippet on adapting the final layer first).
    fire_images = []
    non_fire_images = []
    
    for img_path in os.listdir(test_path):
        category = classify_image(os.path.join(test_path, img_path))
        if category == fire_label:  # Assuming fire_label is defined
            fire_images.append(img_path)
        else:
            non_fire_images.append(img_path)
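
    Note that the stock ImageNet weights have no dedicated "fire" class among their 1000 labels, so the raw predicted index cannot be compared against a fire_label directly. A common workaround is to replace the final layer with a two-class head and fine-tune it on this dataset first; a rough sketch:

    import torch.nn as nn

    # Swap the 1000-class ImageNet head for a 2-class (non-fire / fire) head
    model = models.resnet50(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False  # freeze the backbone, train only the new head
    model.fc = nn.Linear(model.fc.in_features, 2)

    # After fine-tuning with a loop like the one shown below, the argmax of the
    # output (0 or 1) maps directly to non-fire / fire.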

Training a Custom Model

  1. Create a Dataset Class:

    • Define a custom dataset class for loading the images.
    from torch.utils.data import Dataset
    from torchvision.io import read_image, ImageReadMode
    
    class FireDataset(Dataset):
        def __init__(self, img_dir, transform=None):
            self.img_dir = img_dir
            self.transform = transform
            img_names = os.listdir(img_dir)
            # Assumes the class is encoded in the file name (e.g. "fire_001.jpg")
            self.img_labels = [name.split('_')[0] for name in img_names]
            self.img_paths = [os.path.join(img_dir, name) for name in img_names]
    
        def __len__(self):
            return len(self.img_labels)
    
        def __getitem__(self, idx):
            # Decode as RGB and scale to float in [0, 1] so it can be fed to Conv2d layers
            image = read_image(self.img_paths[idx], mode=ImageReadMode.RGB).float() / 255.0
            label = 1 if 'fire' in self.img_labels[idx] else 0
            if self.transform:
                image = self.transform(image)
            return image, label
  2. Training the Model:

    • Define your model, loss function, and optimizer.
    import torch.nn as nn
    import torch.optim as optim
    
    class SimpleCNN(nn.Module):
        def __init__(self):
            super(SimpleCNN, self).__init__()
            self.conv1 = nn.Conv2d(3, 16, 3, 1)
            self.conv2 = nn.Conv2d(16, 32, 3, 1)
            # With 64x64 inputs: 64 -> 62 (conv1) -> 31 (pool) -> 29 (conv2) -> 14 (pool), i.e. 32 * 14 * 14 features
            self.fc1 = nn.Linear(32 * 14 * 14, 128)
            self.fc2 = nn.Linear(128, 2)
    
        def forward(self, x):
            x = nn.functional.relu(self.conv1(x))
            x = nn.functional.max_pool2d(x, 2, 2)
            x = nn.functional.relu(self.conv2(x))
            x = nn.functional.max_pool2d(x, 2, 2)
            x = x.view(-1, 32 * 14 * 14)
            x = nn.functional.relu(self.fc1(x))
            x = self.fc2(x)
            return x
    
    model = SimpleCNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
  3. Training Loop:

    from torch.utils.data import DataLoader
    
    # Load the dataset
    transform = transforms.Compose([
        transforms.Resize((64, 64)),  # the dataset already yields float tensors, so no ToTensor() is needed here
    ])
    train_dataset = FireDataset(train_path, transform=transform)
    test_dataset = FireDataset(test_path, transform=transform)
    
    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
    
    # Training function
    def train_model(model, train_loader, criterion, optimizer, num_epochs=10):
        for epoch in range(num_epochs):
            model.train()
            running_loss = 0.0
            for inputs, labels in train_loader:
                optimizer.zero_grad()
                outputs = model(inputs)
                loss = criterion(outputs, labels)
                loss.backward()
                optimizer.step()
                running_loss += loss.item() * inputs.size(0)
    
            epoch_loss = running_loss / len(train_loader.dataset)
            print(f'Epoch {epoch+1}/{num_epochs}, Loss: {epoch_loss:.4f}')
    
    # Train the model
    train_model(model, train_loader, criterion, optimizer)

Step 3: Evaluating the Model

  1. Evaluate on the Test Set:

    from sklearn.metrics import accuracy_score
    
    model.eval()
    all_preds = []
    all_labels = []
    with torch.no_grad():
        for inputs, labels in test_loader:
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            all_preds.extend(preds.numpy())
            all_labels.extend(labels.numpy())
    
    accuracy = accuracy_score(all_labels, all_preds)
    print(f'Accuracy: {accuracy:.4f}')
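
    Since the issue asks for a comparison across several models, it can help to report more than a single accuracy number; an optional addition using scikit-learn:

    from sklearn.metrics import classification_report, confusion_matrix

    # Per-class precision/recall and the confusion matrix make the fire vs. non-fire
    # trade-offs of each model easier to compare than accuracy alone
    print(confusion_matrix(all_labels, all_preds))
    print(classification_report(all_labels, all_preds, target_names=['non_fire', 'fire']))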

By following these steps, you should be able to organize your dataset and categorize the images, either with a pre-trained model or with a custom model trained to classify fire and non-fire images.

jahnvisahni31 commented 1 month ago

I have done it, please check the pull request @abhisheks008