carmelo-cyber / Carmelo_GitHub

All code for any situation

Deep Learning Techniques #13

Open carmelo-cyber opened 1 year ago

carmelo-cyber commented 1 year ago

Import necessary libraries

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
```

Define the dataset

```python
sentences = [
    "I love cats.",
    "I love dogs.",
    "I love monkeys.",
    "I don't love cats.",
    "I don't love dogs.",
    "I don't love monkeys.",
]
# model.fit expects array-like labels (a tensor or NumPy array), not a plain Python list
labels = tf.constant([1, 1, 1, 0, 0, 0])
```

Create a tokenizer to preprocess the text

```python
# "<OOV>" is the token assigned to out-of-vocabulary words at inference time
tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=5)
```
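To make the preprocessing step concrete, here is a rough pure-Python sketch (no TensorFlow needed) of what `Tokenizer` and `pad_sequences` do: build a frequency-ordered word index with index 1 reserved for the OOV token, map each sentence to a sequence of integers, then left-pad to a fixed length with zeros. The helper names `fit_word_index` and `texts_to_padded` are illustrative, not Keras APIs.

```python
from collections import Counter

def fit_word_index(sentences, oov_token="<OOV>"):
    # Lowercase and strip trailing punctuation, roughly like the Keras Tokenizer filters
    words = [w.strip(".").lower() for s in sentences for w in s.split()]
    counts = Counter(words)
    # Index 1 is reserved for the OOV token, as in the Keras Tokenizer
    index = {oov_token: 1}
    for word, _ in counts.most_common():
        index[word] = len(index) + 1
    return index

def texts_to_padded(sentences, index, maxlen=5):
    # Unknown words map to the OOV index (1)
    seqs = [[index.get(w.strip(".").lower(), 1) for w in s.split()] for s in sentences]
    # Left-pad with zeros and keep the last maxlen tokens, like the pad_sequences defaults
    return [[0] * (maxlen - len(q)) + q[-maxlen:] for q in seqs]

index = fit_word_index(["I love cats.", "I don't love cats."])
padded = texts_to_padded(["I love dogs."], index, maxlen=5)
print(padded)  # [[0, 0, 2, 3, 1]] -- "dogs" is unseen, so it maps to OOV (1)
```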

Define the model

```python
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(100, 8, input_length=5),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```
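The first layer is worth unpacking: an `Embedding` layer is essentially a trainable lookup table mapping integer word indices to dense vectors. A minimal sketch, with a tiny made-up 4-dimensional table (the values are illustrative, not learned weights):

```python
# A toy embedding table: row i is the vector for word index i
embedding_matrix = [
    [0.0, 0.0, 0.0, 0.0],      # index 0: padding
    [0.1, -0.2, 0.3, 0.05],    # index 1: the OOV token
    [0.4, 0.1, -0.1, 0.2],     # index 2: some vocabulary word
]

def embed(sequence, matrix):
    # Look up one vector per integer in the sequence
    return [matrix[i] for i in sequence]

vectors = embed([0, 2, 1], embedding_matrix)
print(len(vectors), len(vectors[0]))  # 3 4
```

In the model above the table has 100 rows (one per possible word index) of 8 values each, and those values are updated by backpropagation like any other weights.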

Compile the model

```python
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```
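For reference, binary cross-entropy for a single example is `-(y*log(p) + (1-y)*log(1-p))`, where `y` is the true label (0 or 1) and `p` is the model's sigmoid output. A small worked sketch:

```python
import math

def binary_crossentropy(y_true, p):
    # Loss for one example: penalizes confident wrong predictions heavily
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

print(round(binary_crossentropy(1, 0.9), 4))  # 0.1054 -- confident and correct: low loss
```

Note that a prediction of 0.5 gives the same loss for either label, and the loss grows without bound as a confident prediction approaches the wrong extreme.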

Train the model

```python
model.fit(padded, labels, epochs=10)
```

Use the trained model to make predictions

```python
predictions = model.predict(padded)
print(predictions)
```
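Because the final layer is a single sigmoid unit, `model.predict` returns one probability per sentence rather than a 0/1 class. A minimal sketch of turning those probabilities into labels with a 0.5 threshold (the probability values here are made up for illustration, not real model output):

```python
# Example sigmoid outputs, one per sentence (illustrative values)
predictions = [[0.91], [0.88], [0.12]]

# Threshold at 0.5: probabilities >= 0.5 become class 1, otherwise class 0
predicted_labels = [int(p[0] >= 0.5) for p in predictions]
print(predicted_labels)  # [1, 1, 0]
```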