-
Hi,
Congratulations on such great work. I am building an application where I need a robust hand pose estimation model like yours. I tried to figure out how to use the code with my own images myself but couldn't …
-
## Preprocessing the dataset
The greyscale value assigned to each pixel within an image has a range of 0-255. We will want to flatten (smoosh… scale…) this range to 0-1. To achieve this flattening, we…
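A minimal sketch of this scaling step, assuming the images are already loaded as a NumPy array of 8-bit greyscale values; the array name, shape, and the random data are placeholders, not part of the original tutorial:

```python
import numpy as np

# Placeholder batch of 8-bit greyscale images: (num_images, height, width), dtype uint8.
images = np.random.randint(0, 256, size=(10, 28, 28), dtype=np.uint8)

# Cast to float before dividing so the result keeps its fractional part,
# then divide by the maximum pixel value (255) to map 0-255 onto 0-1.
scaled = images.astype(np.float32) / 255.0

print(scaled.min(), scaled.max())  # values now lie within [0.0, 1.0]
```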
-
Thanks for sharing the code for this amazing work. I have used it to train a model on our own dataset, but I am now facing some problems.
I simply created a small training set of around 50…
-
Hi lorenmt!
Thank you for sharing your code. I would like to train `model_segnet_mtan.py` with my own dataset. I have prepared the data to have the same size (500x500) and to be in the same f…
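A rough sketch of the kind of preparation described above, assuming the images live on disk and Pillow is available; the directory names and the choice of PNG output are hypothetical, while the 500x500 target size comes from the question:

```python
from pathlib import Path
from PIL import Image

# Hypothetical directory layout; adjust to your own dataset.
src_dir = Path("raw_images")
dst_dir = Path("prepared_images")
dst_dir.mkdir(exist_ok=True)

# Resize every image to 500x500 and save it in one consistent format.
for path in src_dir.glob("*"):
    img = Image.open(path).convert("RGB")          # force a consistent colour mode
    img = img.resize((500, 500), Image.BILINEAR)   # match the expected input size
    img.save(dst_dir / (path.stem + ".png"))
```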
-
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 11.2.3
- Mo…
-
I have a model that I am trying to train, but the loss does not go down. I am using a custom image set. These images are 106 x 106 px (black and white) and I have two (2) classes, Bargra…
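The poster's model and data pipeline are not shown in the excerpt. As a point of comparison only, here is a minimal Keras sketch for the setup described (106x106 single-channel images, two classes); the layer sizes, optimizer, and loss are illustrative assumptions, not the original architecture:

```python
import tensorflow as tf

# Illustrative baseline for 106x106 greyscale images with two classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(106, 106, 1)),
    tf.keras.layers.Rescaling(1.0 / 255),            # scale raw pixels to 0-1
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # two classes -> one sigmoid unit
])

# Binary cross-entropy is the usual loss for a two-class problem like this.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```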