Closed: bw4sz closed this issue 3 years ago
I am learning DeepForest and using the old tf/keras-based codebase. Today I was trying to train a model with the old Keras-based training code, and I am getting errors here:
test_model = deepforest.deepforest()
test_model.use_release()
It looks like this is pointing to the latest PyTorch-based release. Is there a way to get the last Keras-based release?
Reading config file: D:\Work\miniconda\tf_gpu\lib\site-packages\deepforest\data\deepforest_config.yml
A blank deepforest object created. To perform prediction, either train or load an existing model.
Model from DeepForest release https://github.com/weecology/DeepForest/releases/tag/1.0.0 was already downloaded. Loading model from file.
Loading pre-built model: https://github.com/weecology/DeepForest/releases/tag/1.0.0
Traceback (most recent call last):
  File "training.py", line 94, in <module>
    test_model.use_release()
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\deepforest\deepforest.py", line 173, in use_release
    self.model = utilities.read_model(self.weights, self.config)
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\deepforest\utilities.py", line 51, in read_model
    model = models.load_model(model_path, backbone_name='resnet50')
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\keras_retinanet\models\__init__.py", line 83, in load_model
    return keras.models.load_model(filepath, custom_objects=backbone(backbone_name).custom_objects)
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\keras\engine\saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\keras\engine\saving.py", line 583, in load_model
    with H5Dict(filepath, mode='r') as h5dict:
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\keras\utils\io_utils.py", line 191, in __init__
    self.data = h5py.File(path, mode=mode)
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\h5py\_hl\files.py", line 408, in __init__
    swmr=swmr)
  File "D:\Work\miniconda\tf_gpu\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
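As an aside, that final OSError ("file signature not found") usually means the file on disk is not a valid HDF5 file at all, for example a partially downloaded weights file or an HTML error page saved with a .h5 extension. A quick, hedged way to check; `is_probably_hdf5` is a hypothetical helper, not part of DeepForest:

```python
# The HDF5 superblock starts with a fixed eight-byte signature.
# If a downloaded .h5 file does not begin with it, h5py raises
# "file signature not found", exactly as in the traceback above.
HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def is_probably_hdf5(path):
    """Return True if the file begins with the HDF5 signature."""
    with open(path, "rb") as f:
        return f.read(8) == HDF5_SIGNATURE

# Example usage (path is illustrative):
# is_probably_hdf5("C:/Download/NEON.h5")
```

Strictly, the HDF5 spec also allows the signature at byte offsets 512, 1024, and so on when a user block is present; checking offset 0 covers files written by Keras/h5py.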
Thanks for reporting this. We are actively working on smoothing the transition to 1.0.0. The issue is
https://github.com/weecology/DeepForest/issues/193
The easiest thing to do is to manually download the appropriate tensorflow release model:
https://github.com/weecology/DeepForest/releases/tag/v0.3.0
then you can just load the model, following
https://github.com/weecology/DeepForest/blob/tensorflow/docs/getting_started.md#model-weights
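Release assets follow GitHub's standard download URL pattern, so the v0.3.0 weights can be fetched directly. A minimal sketch, assuming the asset is named NEON.h5 as in the snippets in this thread; `release_asset_url` is a hypothetical helper, not a DeepForest function:

```python
# Build the direct-download URL for a GitHub release asset.
def release_asset_url(repo, tag, asset):
    return "https://github.com/{}/releases/download/{}/{}".format(repo, tag, asset)

url = release_asset_url("weecology/DeepForest", "v0.3.0", "NEON.h5")

# Then, for example, download and load it:
#   import urllib.request
#   urllib.request.urlretrieve(url, "NEON.h5")
#   from deepforest import deepforest
#   test_model = deepforest.deepforest(saved_model="NEON.h5")
```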
Please continue to post issues on how best to make the tensorflow version useful for past users.
-- Ben Weinstein, Ph.D. Postdoctoral Fellow University of Florida http://benweinstein.weebly.com/
Thanks for the update. Now I am seeing something strange in training. Previously, when I used test_model.use_release() and trained on my dataset, the mean average precision was around 0.85. But now, when I download the model and use
test_model = deepforest.deepforest(saved_model="C:/Download/NEON.h5")
the mAP on the same data drops significantly, to around 0.40. I am not sure what caused such a change.
Can you paste the code and maybe one sample image? I'll look into it. I can't immediately think of why there should be any change at all; all the use_release() function does is load the weights. There isn't much code there.
Hey, thanks for the prompt response. Here is the code:
import os
from deepforest import get_data
from deepforest import deepforest
from deepforest import utilities
from deepforest import preprocess
path_train = get_data("D:/Work/training/orthomosaic_image.jpg")
crop_dir = os.getcwd()
train_annotations = preprocess.split_raster(path_to_raster=path_train, annotations_file="training_anotation.csv", base_dir=crop_dir, patch_size=1600, patch_overlap=0.10)
annotations_file = os.path.join(crop_dir, "training_example.csv")
train_annotations.to_csv(annotations_file, index=False, header=None)
import time
st = time.time()
test_model = deepforest.deepforest(saved_model="C:/Download/NEON.h5")
test_model.config["epochs"] = 20
test_model.config["save-snapshot"] = False
test_model.config["random_transform"] = True
test_model.train(annotations=annotations_file, input_type="fit_generator")
test_model.model.save("./model/16002_RGB.h5")
test_model.model.save_weights("./model/16002_RGB_weight.h5")
print('Model Saving Completed')
print(time.time()-st)
print('Model Evaluation')
annotations_file = get_data(annotations_file)
mAP = test_model.evaluate_generator(annotations=annotations_file)
print("Mean Average Precision is: {:.3f}".format(mAP))
test_model.plot_curves()
The one thing I am confused about is how to set up the deepforest_config.yml file, since I am not using use_release(). Is that causing the problem?
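For reference, the options I set in code correspond to entries in deepforest_config.yml. A sketch of the relevant fragment, using only the keys that appear in the script above (values illustrative, not a confirmed complete config):

```yaml
# Fragment of deepforest_config.yml (illustrative; only keys used in this thread)
weights: C:/Download/NEON.h5
epochs: 20
save-snapshot: False
random_transform: True
```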
Nothing obvious jumps out. In truth, I would recommend upgrading to 1.0.0; the pytorch branch performs well. But let's keep trying here. One thing I see is that on use_release(), the weights are added to the config:
self.config["weights"] = self.weights
Go ahead and add that before training, in case it was assumed to be inherited from use_release():
test_model.config["weights"] = "C:/Download/NEON.h5"
Thanks for the update. I will add this and test it. Later in the week I will upgrade to PyTorch; that was the last option I was considering. I will update you on the progress in a couple of days.
Just brainstorming here: see if you notice any difference between loading the full model (including the optimizer) and loading just the weights:
reloaded = deepforest.deepforest(weights="example_save_weights.h5")
If you can try both, that would help. I cannot reproduce any difference in performance between use_release() and downloading the release model and pointing at it directly.
Yes, the earlier issue was that I was loading the model but not the weights.
test_model = deepforest.deepforest(saved_model="C:/Download/NEON.h5")
test_model.config["weights"] = "C:/Download/NEON.h5"
Adding these two lines solved the problem, and the training mAP is now in line with the previous results.
Fascinating. I'll add it to the docs.
Added note to README.
https://github.com/weecology/DeepForest/blob/tensorflow/README.md
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
This issue outlines the transition of this repo from a tensorflow backend to pytorch. The tensorflow version has breaking changes upstream from a keras-resnet dependency. See
https://github.com/weecology/DeepForest/issues/192
We have merged the repo https://github.com/weecology/DeepForest-pytorch into this repo, and it will be the main branch starting at version 1.0.
The tensorflow version is still available at https://github.com/weecology/DeepForest/tree/tensorflow, but will no longer be maintained.