nicknochnack / FaceDetection

An end-to-end walkthrough of a custom object detection pipeline for face detection!

help me resolve this #4

Open Somurahwa opened 4 months ago

Somurahwa commented 4 months ago

```
Epoch 1/10

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[74], line 1
----> 1 hist = model.fit(train, epochs=10, validation_data=val, callbacks=[tensorboard_callback])

File ~\Desktop\Panashe\tensorflow\TFODCourse\murahwa\Lib\site-packages\keras-3.1.1-py3.11.egg\keras\src\utils\traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    119 filtered_tb = _process_traceback_frames(e.__traceback__)
    120 # To get the full stack trace, call:
    121 # keras.config.disable_traceback_filtering()
--> 122 raise e.with_traceback(filtered_tb) from None
    123 finally:
    124     del filtered_tb

Cell In[68], line 19, in FaceTracker.train_step(self, batch, **kwargs)
     16 with tf.GradientTape() as tape:
     17     classes, coords = self.model(X, training=True)
---> 19     batch_classloss = self.closs(y[0], classes)
     20     batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
     22     total_loss = batch_localizationloss + 0.5 * batch_classloss

ValueError: Cannot take the length of shape with unknown rank.
```
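This error usually means the label tensors reach the loss with an unknown static rank. In this pipeline that typically happens when labels are decoded inside `tf.py_function`, since TensorFlow cannot infer shapes across it. Below is a minimal sketch of pinning the shapes in the `tf.data` pipeline before batching; the loader name, dtypes, and label layout are assumptions for illustration, not taken from this thread:

```python
import tensorflow as tf

def load_labels(path):
    # Hypothetical parser; a real one would read the JSON annotation file.
    # Returns (class, bounding box) as plain Python values.
    return [1], [0.1, 0.2, 0.3, 0.4]

def load_sample(path):
    cls, bbox = tf.py_function(load_labels, [path], [tf.uint8, tf.float16])
    # Outputs of tf.py_function have unknown rank. Without the two calls
    # below, a loss such as BinaryCrossentropy can fail with
    # "Cannot take the length of shape with unknown rank."
    cls.set_shape([1])
    bbox.set_shape([4])
    return cls, bbox
```

With the shapes pinned at load time, `y[0]` arrives at `train_step` with a defined rank after batching, and the reshape workaround discussed below becomes unnecessary.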

bakhta19 commented 4 months ago

Hey, I'm facing the same problem here. Have you found a solution for this error?

Somurahwa commented 4 months ago

Yes, let me send you my modified code early tomorrow morning.


manhtuan2989 commented 4 months ago

Hi, I am facing the same problem too. Could you send me the solution?

bakhta19 commented 4 months ago

Thank you, I'll be waiting :)

arthurdesma commented 4 months ago

I have the same problem. Can someone help, please?

cempack commented 4 months ago

Same issue here :/

cempack commented 4 months ago

> Yes, let me send you my modified code early tomorrow morning.

Could you send the solution, please?

Somurahwa commented 4 months ago

```python
import tensorflow as tf
from tensorflow.keras.models import Model


class FaceTracker(Model):
    def __init__(self, eyetracker, **kwargs):
        super().__init__(**kwargs)
        self.model = eyetracker

    def compile(self, opt, classloss, localizationloss, **kwargs):
        super().compile(**kwargs)
        self.closs = classloss
        self.lloss = localizationloss
        self.opt = opt

    @tf.function  # Decorate train_step with @tf.function
    def train_step(self, batch, **kwargs):
        X, y = batch

        with tf.GradientTape() as tape:
            try:
                classes, coords = self.model(X, training=True)

                # Ensure y[0] has a defined rank (handle potential reshaping).
                # Example: reshape to (batch_size, 1) if needed; check your
                # data format and reshape accordingly.
                y_0 = tf.reshape(y[0], [-1, 1])

                # Ensure classes has a defined rank (check model output shape)
                # ... (reshape classes if necessary based on your model's output)

                batch_classloss = self.closs(y_0, classes)
                batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
                total_loss = batch_localizationloss + 0.5 * batch_classloss

            except tf.errors.InvalidArgumentError as e:
                # Handle the case where y[0] might have a batch size of zero (optional)
                if 'Input tensors must be of size at least 1' in str(e):
                    return {"total_loss": tf.constant(0.0)}  # Dummy loss (optional)
                else:
                    raise e  # Re-raise other errors

            grad = tape.gradient(total_loss, self.model.trainable_variables)
            self.opt.apply_gradients(zip(grad, self.model.trainable_variables))

        return {"total_loss": total_loss, "class_loss": batch_classloss,
                "regress_loss": batch_localizationloss}

    def test_step(self, batch, **kwargs):
        X, y = batch
        classes, coords = self.model(X, training=False)

        # Ensure y[0] has a defined rank (handle potential reshaping).
        # Example: reshape to (batch_size, 1) if needed; check your
        # data format and reshape accordingly.
        y_0 = tf.reshape(y[0], [-1, 1])

        # Ensure classes has a defined rank (check model output shape)
        # ... (reshape classes if necessary based on your model's output)

        batch_classloss = self.closs(y_0, classes)
        batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
        total_loss = batch_localizationloss + 0.5 * batch_classloss
        return {"total_loss": total_loss, "class_loss": batch_classloss,
                "regress_loss": batch_localizationloss}

    def call(self, X, **kwargs):
        return self.model(X, **kwargs)
```
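For anyone wiring this up, here is a minimal sketch of compiling and training the class above. The optimizer settings and loss choices are assumptions (`MeanSquaredError` stands in for the course's custom localization loss), `facetracker` stands for whatever backbone model returns `(classes, coords)`, and `train`, `val`, and `tensorboard_callback` are the objects from the original `fit` call:

```python
import tensorflow as tf

# Assumed losses: BinaryCrossentropy for the class head, MeanSquaredError
# standing in for the custom localization loss used in the course.
opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
classloss = tf.keras.losses.BinaryCrossentropy()
regressloss = tf.keras.losses.MeanSquaredError()

# facetracker is the backbone model producing (classes, coords).
model = FaceTracker(facetracker)
model.compile(opt, classloss, regressloss)

hist = model.fit(train, epochs=10, validation_data=val,
                 callbacks=[tensorboard_callback])
```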


Somurahwa commented 4 months ago

Here is my updated code; see the full `FaceTracker` class in my previous comment above.


bakhta19 commented 4 months ago

It works, thank you so much!

cempack commented 4 months ago


Thank you, fixed it for me.

gandrabharadhwaj01 commented 4 months ago

Same issue here.

hermanumrao commented 2 months ago


Thanks man, works like a charm!