VK-Ant / Tensorflow_Certification_Practice


Questions about tfcerfi #1

Open huangssssx opened 3 months ago

huangssssx commented 3 months ago

I will be taking the TF dev exam in a few days and want to do some pre-exam training. Do you have any simulation questions or suggestions for me? Thank you for your help.

VK-Ant commented 3 months ago

Yes, definitely. You don't need the exact questions; every question is similar to practice problems like image classification, time series, and basic TF problems.

You just need to push the accuracy up. One example is an image classification problem (improve the model architecture and reach 98 percent accuracy).

In my experience, the exam has 1 basic TensorFlow house price prediction, 2 image classification problems, 1 NLP problem, and 1 time-series problem. Everything is straightforward if you follow Daniel's Udemy videos and my GitHub practice code. You will be able to pass, 100 percent sure.
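
For example, for the image classification type of question, a minimal sketch might look like the code below. MNIST and the exact layer sizes are just my assumptions for illustration, not the real exam dataset; improving this kind of architecture (more filters, more epochs, regularization) is how you push accuracy toward 98 percent.

import tensorflow as tf

# Load a standard image dataset (MNIST used purely as an example) and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

# A small CNN baseline; deepening or widening it is the usual way to raise accuracy
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5)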

huangssssx commented 3 months ago

Thank you so much, this narrows the scope down a lot. Playing Kaggle is really painful~


VK-Ant commented 3 months ago

Yes, Kaggle is hard, but I am happy to explore it a lot. Kaggle is my hobby.

huangssssx commented 2 months ago

I have read the exercises in your GitHub. I have made many attempts on https://raw.githubusercontent.com/Stedy/machine-learning-with-r-datasets/master/insurance.csv, but I haven't been able to get an ideal result; in fact, it's very bad. Do you have any ideas? The following is my code (will questions at this level appear in the TF exam? I'm really confused~):

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def build_model(input_shape, embedding_output_dims, output_units):
    inputs = tf.keras.Input(shape=(input_shape,), dtype=tf.float32)
    # Split the input: the first 3 columns are treated as categorical, the rest as numeric
    categorical_inputs = inputs[:, :3]
    numeric_inputs = inputs[:, 3:]

    # Embed the categorical codes and flatten back to a 2-D tensor
    embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=embedding_output_dims, input_length=3)(categorical_inputs)
    embedding = tf.keras.layers.Flatten()(embedding)

    concat = tf.keras.layers.Concatenate()([embedding, numeric_inputs])

    dense1 = tf.keras.layers.Dense(128, activation='relu')(concat)
    dense2 = tf.keras.layers.Dense(64, activation='relu')(dense1)
    outputs = tf.keras.layers.Dense(output_units)(dense2)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='mse')
    return model

def prepare_data(data):
    # Separate the target (charges) from the features
    X = data.drop("charges", axis=1)
    Y = data["charges"]

    # Scale the numeric columns to [0, 1]
    scaler = MinMaxScaler()
    X[['age', 'bmi', 'children']] = scaler.fit_transform(X[['age', 'bmi', 'children']])

    # Integer-encode the categorical columns
    for column in ['sex', 'smoker', 'region']:
        X[column] = X[column].astype('category').cat.codes

    X = X.astype(float)

    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
    return X_train, X_test, y_train, y_test

url = "https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv"
data = pd.read_csv(url)
X_train, X_test, y_train, y_test = prepare_data(data)

model = build_model(input_shape=X_train.shape[1], embedding_output_dims=5, output_units=1)
model.fit(X_train, y_train, epochs=100, batch_size=32)

·······

34/34 [==============================] - 0s 576us/step - loss: 58076804.0000
Epoch 88/100
34/34 [==============================] - 0s 567us/step - loss: 57702480.0000
Epoch 89/100
34/34 [==============================] - 0s 576us/step - loss: 57302400.0000
Epoch 90/100
34/34 [==============================] - 0s 667us/step - loss: 57031960.0000
Epoch 91/100
34/34 [==============================] - 0s 536us/step - loss: 56810192.0000
Epoch 92/100
34/34 [==============================] - 0s 667us/step - loss: 56549992.0000
Epoch 93/100
34/34 [==============================] - 0s 546us/step - loss: 56385016.0000
Epoch 94/100
34/34 [==============================] - 0s 636us/step - loss: 56244312.0000
Epoch 95/100
34/34 [==============================] - 0s 652us/step - loss: 56259528.0000
Epoch 96/100
34/34 [==============================] - 0s 576us/step - loss: 56006100.0000
Epoch 97/100
34/34 [==============================] - 0s 604us/step - loss: 55910948.0000
Epoch 98/100
34/34 [==============================] - 0s 557us/step - loss: 55796360.0000
Epoch 99/100
34/34 [==============================] - 0s 576us/step - loss: 55785280.0000
Epoch 100/100
34/34 [==============================] - 0s 545us/step - loss: 55754772.0000

VK-Ant commented 2 months ago

I think your preprocessing is the main concern; the MinMaxScaler is giving this kind of result. Use StandardScaler instead, and also add batch normalization to your model architecture. A sketch of both changes is below.

From an exam perspective, pay attention to the preprocessing and feature engineering parts, but don't worry about this too much; the most important thing is the model creation.
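
Here is a minimal sketch of those two changes applied to your script (StandardScaler instead of MinMaxScaler, and BatchNormalization after each hidden layer). I have also dropped the Embedding branch and simply fed the integer-coded categorical columns through the dense layers; that simplification and the layer sizes are just my choices for illustration, not the only way to do it.

import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def build_model(input_shape):
    # Plain dense regressor with BatchNormalization after each hidden layer
    inputs = tf.keras.Input(shape=(input_shape,), dtype=tf.float32)
    x = tf.keras.layers.Dense(128, activation='relu')(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dense(64, activation='relu')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    return model

def prepare_data(data):
    X = data.drop("charges", axis=1)
    y = data["charges"]
    # Integer-encode the categorical columns, keep the numeric ones as-is
    for column in ['sex', 'smoker', 'region']:
        X[column] = X[column].astype('category').cat.codes
    X = X.astype(float)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    # Standardize after the split so the scaler is fit only on the training data
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)
    return X_train, X_test, y_train, y_test

url = "https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv"
data = pd.read_csv(url)
X_train, X_test, y_train, y_test = prepare_data(data)

model = build_model(input_shape=X_train.shape[1])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32)

Tracking MAE next to MSE also makes the numbers easier to read: the charges are in the tens of thousands, so the squared-error loss looks huge even when the fit is reasonable.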