keplr-io / quiver

Interactive convnet features visualization for Keras
https://keplr-io.github.io/quiver/
MIT License
1.75k stars 223 forks

TypeError: can't pickle _thread.RLock objects #66

Closed Gloupys closed 6 years ago

Gloupys commented 6 years ago

Hello, everything's in the title.

I am currently working with the MaskRCNN network described here: https://github.com/matterport/Mask_RCNN and I can't get past the server.launch. Apparently it can't get a pickle file from what I can tell...

Any way to get rid of that error? The server launches but the exception comes after that.

Thanks.

zx-code123 commented 6 years ago

@Gloupys I met the same problem. Did you solve it?

Gloupys commented 6 years ago

Nope, still the same issue. I have no clue what else to try; I read on Stack Overflow that upgrading some modules might help, but even after doing that nothing changed...

jakebian commented 6 years ago

Pretty sure this is a keras issue: https://github.com/keras-team/keras/issues/8343

In particular, my suspicion is that it happens when quiver uses the keras serialization method (to_json). It means you somehow managed to sneak some non-picklable Python objects into your model.

One thing you can try is building keras from source locally and modifying the source to swap out the pickle module for dill, which handles a lot of things that pickle considers unserializable. Good luck!
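To see why dill might help here, a minimal keras-free sketch of the difference (assumes the third-party dill package is installed; the lambda stands in for whatever non-picklable object ended up in the model):

```python
import pickle

try:
    import dill  # third-party; assumed installed via `pip install dill`
    HAVE_DILL = True
except ImportError:
    HAVE_DILL = False

# a lambda is a classic example of an object pickle refuses
f = lambda x: x + 1

try:
    pickle.dumps(f)
    pickled = True
except (pickle.PicklingError, AttributeError):
    pickled = False

print("pickle handled the lambda:", pickled)  # False

if HAVE_DILL:
    # dill serializes the function by value, so a round trip works
    g = dill.loads(dill.dumps(f))
    print("dill round-trip result:", g(41))   # 42
```

Swapping the module keras uses is invasive, which is why the suggestion is to do it on a local source build rather than a site-packages install.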

evilnose commented 6 years ago

This happened to me when I was trying to call model.to_json(). It turned out to stem from some issue in deepcopying the Lambda layer. What I tried in the end was to replace lambda expressions with actual functions, and this somehow resolved the issue. E.g.:

Before:

```python
last_layer_out = Lambda(lambda x: some_function(x, other_local_var))(last_layer_out)
```

Now:

```python
x = Lambda(same_function_with_one_param)(x)
```

This was tedious, as I had to create global variables in place of the missing parameters, but it worked, so I couldn't complain. I saw that you are using lambda expressions within Lambda layers in your code as well, so maybe you can try doing this.
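The swap helps because a module-level named function pickles by reference, while a lambda closing over a local variable does not pickle at all. A keras-free sketch using the hypothetical names from the snippet above (some_function, other_local_var, same_function_with_one_param are illustrative, not from a real API):

```python
import pickle

# hypothetical stand-in for the value the lambda used to capture,
# promoted to a module-level "global" as described above
other_local_var = 2.0

def some_function(x, var):
    return x * var

# named one-parameter wrapper, replacing
# Lambda(lambda x: some_function(x, other_local_var))
def same_function_with_one_param(x):
    return some_function(x, other_local_var)

# the named function survives a pickle round trip...
restored = pickle.loads(pickle.dumps(same_function_with_one_param))
print(restored(3.0))  # 6.0

# ...while the equivalent lambda does not pickle
bad = lambda x: some_function(x, other_local_var)
try:
    pickle.dumps(bad)
    lambda_pickled = True
except (pickle.PicklingError, AttributeError):
    lambda_pickled = False
print(lambda_pickled)  # False
```

The cost of this pattern is exactly the tedium described above: every captured local has to become a module-level name the wrapper can reach.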

jakebian commented 6 years ago

@evilnose's comment is a specific instance of my more general remark. Quiver delegates all model serialization to the official method from keras, so you should address issues of this sort either by complaining to keras OR by making your models more picklable. Closing.

AI-ML-Enthusiast commented 5 years ago

@evilnose I am facing the same problem. I would like to convert the following line from model.py into a regular function:

```python
scores = utils.batch_slice([scores, ix], lambda x, y: tf.gather(x, y), self.config.IMAGES_PER_GPU)
```

How can I do it? Would you tell me please?

evilnose commented 5 years ago

@ibrahimLearning Simply replacing `lambda x, y: tf.gather(x, y)` with `tf.gather` should work, i.e.:

```python
scores = utils.batch_slice([scores, ix], tf.gather, self.config.IMAGES_PER_GPU)
```

More generally, you can `def` a function yourself and use that (i.e. `def gather(x, y): return tf.gather(x, y)`), but that is unnecessary in this case.
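The point generalizes: a lambda that only forwards its arguments is the same callable as the function it wraps, and passing the named function directly also keeps everything picklable. A TensorFlow-free sketch with hypothetical pure-Python stand-ins for tf.gather and utils.batch_slice:

```python
import pickle

def gather(seq, indices):
    # hypothetical stand-in for tf.gather: select elements by index
    return [seq[i] for i in indices]

def batch_slice(pairs, fn):
    # hypothetical stand-in for utils.batch_slice: apply fn to each pair
    return [fn(x, y) for x, y in pairs]

# `lambda x, y: gather(x, y)` is just `gather`, so pass it directly
result = batch_slice([(["a", "b", "c"], [2, 0])], gather)
print(result)  # [['c', 'a']]

# the named function pickles, unlike a forwarding lambda would
assert pickle.loads(pickle.dumps(gather))(["p", "q"], [1]) == ["q"]
```

In the real model.py line, `tf.gather` already has exactly the `(x, y)` signature the slicer expects, which is why no wrapper is needed.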

You could also circumvent the pickling issue by simply calling save_weights() instead of save_model(), and load the weights with an existing model architecture each time instead.

AI-ML-Enthusiast commented 5 years ago

@evilnose Thank you very much for your answer. I have one more question.

You could also circumvent the pickling issue by simply calling save_weights() instead of save_model(), and load the weights with an existing model architecture each time instead.

But I want to save the full architecture of the model. How can I do this?

evilnose commented 5 years ago

@ibrahimLearning Try save_model(). If that doesn't work even after you've changed your lambdas to actual functions, then try calling to_json() as an alternative, as documented here, which worked for me. Note that to_json() only saves the architecture, so you should call save_weights() separately to save the weights.

AI-ML-Enthusiast commented 5 years ago

@evilnose Thank you very much for your comments. One more question, please: should I change all of the lambda functions in model.py?

evilnose commented 5 years ago

@ibrahimLearning Before you do that -- I did some investigation and found this post on Stack Overflow, which seems to imply that the issue is not the lambda functions themselves but un-serializable arguments passed to Lambda layers. Are you using any Lambda layers in your model at all? If so, are you passing any arguments of type tf.Tensor? I think that might be the actual cause, or at least worth ruling out first.