Neural network Python project: handwritten digit recognition #26

Closed. Sarah111-AHM closed this issue 1 year ago.

Sarah111-AHM commented 1 year ago

Recognizing handwritten digits is a classic problem in machine learning and can be tackled using neural networks. Here's a Python project that uses the Keras library to build and train a neural network to recognize handwritten digits:

  1. First, you'll need to install the Keras library and its dependencies. Keras runs on top of a backend such as TensorFlow, so install both by running the following command in your terminal or command prompt:
pip install keras tensorflow
  2. Next, you'll need to download the MNIST dataset, which consists of 70,000 images of handwritten digits. You can download the dataset from the following link: https://www.kaggle.com/c/digit-recognizer/data

  3. Once you've downloaded the dataset, you can load it into your Python code using the following code:

import pandas as pd

train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')

# Separate the labels from the images
train_labels = train_df['label']
train_images = train_df.drop(['label'], axis=1)

test_images = test_df
  4. Now, you'll need to preprocess the data. This involves scaling the pixel values of the images to be between 0 and 1, reshaping each flat 784-pixel row back into a 28x28 image, and converting the labels to one-hot vectors:
import numpy as np
from keras.utils import to_categorical

# Scale the pixel values to [0, 1] and reshape each 784-pixel row into a 28x28 image
train_images = np.array(train_images).reshape(-1, 28, 28) / 255.0
test_images = np.array(test_images).reshape(-1, 28, 28) / 255.0

# Convert labels to one-hot vectors
train_labels = to_categorical(train_labels)
  5. Now you're ready to build your neural network. Here's an example architecture:
from keras.models import Sequential
from keras.layers import Dense, Flatten

model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))

This model has an input layer that flattens the 28x28 pixel images into a 784-dimensional vector, followed by a hidden layer with 128 neurons and a ReLU activation function, and an output layer with 10 neurons (one for each digit) and a softmax activation function.
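
To sanity-check the architecture before training, you can print a summary of the layers and parameter counts:

model.summary()
# Flatten outputs 784 values, the hidden layer has 784*128 + 128 = 100,480
# parameters, and the output layer has 128*10 + 10 = 1,290 parameters.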

  6. Now you can compile and train your model using the following code:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=10, validation_split=0.2)

This code compiles the model with the Adam optimizer and categorical crossentropy loss function, and trains it for 10 epochs on the training data with a validation split of 0.2.
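
fit also returns a History object that records the metrics per epoch, which is handy for checking whether the validation accuracy is still improving (the 'accuracy'/'val_accuracy' key names below are the standard ones when metrics=['accuracy'] is used):

history = model.fit(train_images, train_labels, epochs=10, validation_split=0.2)

# Per-epoch training and validation accuracy
print(history.history['accuracy'])
print(history.history['val_accuracy'])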

  7. Finally, you can evaluate your model on the test data and make predictions:
# Predict class probabilities, then take the most likely digit per image
test_predictions = np.argmax(model.predict(test_images), axis=1)

# Save the predictions to a CSV file
submission_df = pd.DataFrame({'ImageId': range(1, len(test_predictions)+1), 'Label': test_predictions})
submission_df.to_csv('submission.csv', index=False)

This code uses the predict method to get class probabilities for the test data, takes the argmax to pick the most likely digit for each image, and then saves the predictions to a CSV file that you can submit to the Kaggle competition. (The older predict_classes method has been removed from recent versions of Keras.)

That's it! With this project, you should be able to build and train a neural network to recognize handwritten digits with reasonably good accuracy.

Here are some additional details and explanations for each step of the project:

  1. Installing Keras: Keras is a high-level neural network library that makes it easy to build and train neural networks. You can install it using the pip package manager.
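
A quick sanity check that the install worked, assuming the TensorFlow backend suggested above:

import keras
print(keras.__version__)  # prints the installed Keras version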

  2. Downloading the MNIST dataset: The MNIST dataset is a well-known dataset of 70,000 grayscale images of handwritten digits, each of size 28x28 pixels. It is commonly used as a benchmark for image classification algorithms.
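
As an aside, the same dataset ships with Keras itself, so if you just want to experiment without the Kaggle CSVs you can load it directly (this skips the Kaggle submission workflow, so it's purely optional):

from keras.datasets import mnist

# Arrays of shape (60000, 28, 28) and (10000, 28, 28), with integer labels
(x_train, y_train), (x_test, y_test) = mnist.load_data()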

  3. Loading the dataset: The dataset is provided in CSV format, with one row per image. In train.csv, the first column contains the label (i.e., the digit that the image represents) and the remaining 784 columns contain the pixel values; test.csv contains only the pixel columns.
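
It's worth eyeballing the data right after loading it. A minimal sketch, assuming the train_df/train_images variables from step 3 and that matplotlib is installed:

import matplotlib.pyplot as plt

print(train_df.shape)  # (42000, 785): one label column plus 784 pixel columns
print(test_df.shape)   # (28000, 784): pixel columns only

# Show the first training image with its label
plt.imshow(train_images.iloc[0].to_numpy().reshape(28, 28), cmap='gray')
plt.title(f'Label: {train_labels.iloc[0]}')
plt.show()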

  4. Preprocessing the data: To prepare the data for training, we need to scale the pixel values to be between 0 and 1, reshape each flat 784-pixel row back into a 28x28 image (the shape the Flatten layer expects), and convert the labels to one-hot vectors. Scaling the pixel values helps to improve the convergence of the training algorithm, while one-hot encoding the labels makes it easier to train a neural network to classify the images.
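
One-hot encoding simply turns each digit into a 10-element vector with a 1 in the position of its class. A quick illustration:

from keras.utils import to_categorical

print(to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]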

  5. Building the neural network: The architecture of the neural network is an important consideration. In this example, we use a simple feedforward neural network with one hidden layer. The input layer consists of a flattened 28x28 pixel image, the hidden layer has 128 neurons with a ReLU activation function, and the output layer has 10 neurons (one for each digit) with a softmax activation function. The number of neurons in the hidden layer is a hyperparameter that can be tuned to improve the performance of the network.
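
If you want to experiment with that hyperparameter, the only change is the width of the hidden Dense layer, or you can stack more of them. A wider and deeper variant might look like this (the layer sizes are arbitrary starting points, not tuned values):

model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(256, activation='relu'))  # wider hidden layer
model.add(Dense(128, activation='relu'))  # extra hidden layer
model.add(Dense(10, activation='softmax'))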

  6. Compiling and training the neural network: Before we can train the network, we need to compile it with an optimizer and a loss function. The optimizer determines how the weights of the network are updated during training, and the loss function measures how well the network is performing. In this example, we use the Adam optimizer and categorical crossentropy loss function. We then train the network on the training data for 10 epochs, with a validation split of 0.2 to monitor the performance of the network on a separate validation set.
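
Both of those choices can be made explicit in code. Here is a sketch with a custom learning rate for Adam plus early stopping on the validation loss (the patience value is an illustrative default, not a tuned one):

from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

model.compile(optimizer=Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Stop when validation loss hasn't improved for 3 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(train_images, train_labels, epochs=50,
          validation_split=0.2, callbacks=[early_stop])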

  7. Evaluating the neural network: After training the network, we can evaluate its performance on the test data. We use model.predict followed by np.argmax to pick the most likely digit for each test image, and then save the predictions to a CSV file that can be submitted to the Kaggle competition. The accuracy of the network can be improved by tuning the hyperparameters (e.g., the number of neurons in the hidden layer or the learning rate of the optimizer) or by using more advanced techniques such as data augmentation or convolutional neural networks; a sketch of the latter follows below.
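
For reference, here is what a small convolutional variant might look like. Convolutional layers expect a channel dimension, so the images get an extra axis first; the filter counts and kernel sizes are common starting points rather than tuned values:

from keras.layers import Conv2D, MaxPooling2D

# Add a channel axis: (n, 28, 28) -> (n, 28, 28, 1)
train_images_cnn = train_images.reshape(-1, 28, 28, 1)
test_images_cnn = test_images.reshape(-1, 28, 28, 1)

cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Conv2D(64, (3, 3), activation='relu'))
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Flatten())
cnn.add(Dense(10, activation='softmax'))

cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
cnn.fit(train_images_cnn, train_labels, epochs=10, validation_split=0.2)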