ageron / handson-ml

⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 instead.
Apache License 2.0
25.17k stars · 12.92k forks

Google Cloud ... how to #334

Open terminsen opened 5 years ago

terminsen commented 5 years ago

Hi -

I have gone through several of the examples in the book now, and I am very interested in taking it to the next level ... deploying some of these models on Google Cloud. Has anybody seen a good video demo on how to adapt TensorFlow + Python code similar to what is in the book to Google Cloud?

I have traversed Google Cloud, believe me, but the examples I can find are written very much with the cloud in mind, so it is difficult to see how one rearranges a TensorFlow program written for your desktop (and reading data files from your desktop) into one that works in the cloud.

Thank you,

ageron commented 5 years ago

Hi @terminsen, that's a great question. I left this topic out of the first edition because TensorFlow Serving was a moving target, Google Cloud ML Engine did not exist yet, and deploying ML models also depends a lot on your needs. But I'm working on a second edition right now, and it will explain how to deploy a model to Google Cloud ML Engine. In the meantime, you can check out this codelab: https://codelabs.developers.google.com/codelabs/end-to-end-ml/ along with this video: https://youtu.be/72XxyRV6M-M. Hope this helps!

terminsen commented 5 years ago

Hi -

Thank you for the mail. And thank you for the links.

I will buy a copy for sure.

If you have already made one example (for the second edition), let's say the time series RNN example :-), I would be most grateful if you could show it. I looked at the links you posted, and they share (I think) a common feature with a lot of the material on the Google Cloud homepage: it is really generic, and in order to actually do it ... you need to see a simple example.

One thing is that one also needs to split the code up into different files (https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer):

```
├── setup.py
└── trainer
    ├── __init__.py
    ├── model.py
    ├── task.py
    └── util.py
```

How is that done when you developed the TensorFlow code on your laptop first and it runs perfectly at home? These are the most likely steps: you develop a standalone TensorFlow program that uses data stored locally on your desktop ... then you want to go to the cloud. Or maybe you need to start developing the TensorFlow code in the cloud in the first place?
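What I imagine is something like this (just a sketch with made-up names, not the official layout): a `trainer/task.py` entry point that takes the data location as a command-line flag, so the exact same code can read a local file at home or a `gs://` path on GCP:

```python
# trainer/task.py -- sketch; flag names and defaults are illustrative.
# The data location comes from a command-line flag, so the same code
# works with a local file or a gs://bucket path in the cloud.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Training task entry point")
    parser.add_argument("--data-path", default="data/train.csv",
                        help="a local file, or gs://my-bucket/train.csv on GCP")
    parser.add_argument("--job-dir", default="output",
                        help="where to write checkpoints (ML Engine passes this)")
    return parser.parse_args(argv)

# Usage (locally):  python -m trainer.task --data-path data/train.csv
# Usage (on GCP):   the same flags, passed via gcloud when submitting the job
args = parse_args([])  # defaults, for illustration
```

Then the training code in `trainer/model.py` would never hard-code a desktop path; it would only ever see `args.data_path`.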

I write all these questions because I hope you will explain all of this in the second edition :-)

Thanks.

r-ichi commented 5 years ago

Regarding the new version of the book: is there any way to access the updated/new content you are working on in advance? The book will be released in August, and that's kind of far away. I would like to learn as much as possible from your work and lessons as soon as possible.

BTW, does the new version of the book include deeper analysis and lessons on the different clustering methods presented in the Jupyter notebook?

Thanks in advance for your reply!

marco82ger commented 5 years ago

Interesting issue, I am also really curious. Thanks @terminsen

ageron commented 5 years ago

Hi @r-ichi and @marco82ger ,

Yes, the early release of the 2nd edition is available via O'Reilly's Safari platform. Unfortunately, it does require a subscription, but it's possible to get a free trial.

Here is what changed compared to the 1st edition:

In short, the first part has changed little except for a new chapter on unsupervised learning, but the second part has huge changes, due to the fact that (1) TensorFlow 2.0 is a big change, and (2) Deep Learning is a fast-moving field.

I've done all chapters up to the tf.data chapter (I'm half-way through that one). Next I'll update chapters 13, 14, 15 and 16, and I'll finish the chapter on deployment. Still some work to do, but I'm 100% on it! :)

ageron commented 5 years ago

Chapter mapping:

| Chapters – 1st edition | Chapters – 2nd edition | Topic |
|---|---|---|
| 1–8 | 1–8 | Intro to Machine Learning using Scikit-Learn |
| N/A | 9 | Unsupervised Learning techniques |
| 10 | 10 | Intro to Neural Networks (with Keras in the 2nd edition) |
| 11 | 11 | Techniques to train deep nets |
| 9 | 12 | TensorFlow's low-level API |
| N/A | 13 | Loading and Preprocessing Data |
| 13–16 | 14–17 | CNNs, RNNs, Autoencoders, Reinforcement Learning |
| 12 | 18 | Distributed Training & Deployment |

marco82ger commented 5 years ago

Thanks a lot, Geron! Nevertheless, I followed the GCP link and watched the video, but it is quite fast and few details are given, except in the first part, where the lecturer is quite slow. After that he is super fast and I get lost.

I quote the question from @terminsen:

> How is that done when you developed the TensorFlow code on your laptop first and it runs perfectly at home? These are the most likely steps: you develop a standalone TensorFlow program that uses data stored locally on your desktop ... then you want to go to the cloud.

Maybe GCP is not the easiest solution. What about AWS? Thanks in advance

ageron commented 5 years ago

Hi @marco82ger ,

Please take a look at my TF2 course notebooks at https://github.com/ageron/tf2_course

In particular 03_loading_and_preprocessing_data.ipynb and 04_deploy_and_distribute_tf2.ipynb.

There are two main scenarios when you go to the cloud:

Running a trained model on GCP is not too hard. First, learn to deploy on TF Serving (as shown in the notebook); then you can basically use GCP as a hosted TF Serving.
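For example, once the model is exported in the SavedModel format, querying it through TF Serving's REST API just means POSTing a JSON payload (a sketch; the model name, port, and input values are placeholders):

```python
import json

# Sketch: query a model served by TF Serving's REST API.
# Assumes the SavedModel was exported to a versioned directory
# (e.g. my_model/0001) and TF Serving was started with placeholder
# options like --model_name=my_model --rest_api_port=8501.
X_new = [[5.1, 3.5, 1.4, 0.2]]  # placeholder input instances
payload = json.dumps({
    "signature_name": "serving_default",
    "instances": X_new,
})

# With TF Serving running, the request itself would look like:
# import requests
# response = requests.post(
#     "http://localhost:8501/v1/models/my_model:predict", data=payload)
# y_pred = response.json()["predictions"]
```

Once that works locally, pointing the same kind of request at a model deployed on GCP is mostly a matter of changing the endpoint and adding authentication.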

For training (e.g., on TPU), check out this Colab notebook.

Hope this helps, Aurélien

marco82ger commented 5 years ago

Thanks a lot for your always useful and fast reply. It is an honour to talk to you directly!

r-ichi commented 5 years ago

He's right, it's so cool that you take the time not only to teach the world this stuff, but also to talk with us and answer our questions.

You da best ;) !

PS: In the new version of the book, in Chapter 9, "Unsupervised Learning Techniques", in the subsection titled "The K-Means Algorithm", just before the images that explain the different initializations of the K-Means centroids, there is something weird... Ctrl+F and search for "footenote".

ageron commented 5 years ago

Thanks @marco82ger and @r-ichi , you are very encouraging! :) And thanks for the footenote (to be pronounced with a strong French accent), it was supposed to be footnote. I write the book using asciidoc (in Atom, using the great asciidoctor plugins). When I want to add a footnote, I type: footnote:[blabla]. Thanks for catching the typo!

marco82ger commented 5 years ago

Hi,

I am going through chapter 13 and I am running the experiments in the exercise section. Let's assume I want to train an existing model on TPU using transfer learning, with the only difference being the output layer, which will be adapted to my dataset, as done in ex. 9. I checked your link with the Colab notebook, really helpful! For simplicity, let's assume I want to replicate exercises 9.3 and 9.4 (chapter 13), but this time on GCP.

Assuming my dataset is already prepared and stored in Google Storage, in order to use transfer learning and reuse the Inception v3 model with my new output layer (the only layer that I have to train), how could I reuse the code from ex. 9.3 and 9.4 combined with the Colab code in their notebook?

Especially the Colab sections "Keras model: 3 convolutional layers, 2 dense layers" and "Train and validate the model".
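What I have in mind is roughly this (just a sketch to illustrate the question; the class count is made up, and the `gs://` path and training call are hypothetical):

```python
import tensorflow as tf

# Sketch (assumes TF 2.x / tf.keras): reuse Inception v3 pretrained on
# ImageNet, freeze it, and train only a new output layer for my dataset.
n_classes = 5  # made up: the number of classes in my dataset

base = tf.keras.applications.InceptionV3(
    weights="imagenet",  # downloads the pretrained ImageNet weights
    include_top=False,   # drop the original 1000-class output layer
    pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False   # freeze all pretrained layers

output = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
model = tf.keras.Model(inputs=base.input, outputs=output)

model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])

# The training data would come from my files in Google Storage, e.g.:
# train_set = tf.data.TFRecordDataset("gs://my-bucket/train-*.tfrecord")
# model.fit(train_set, epochs=5)
```

My question is essentially how to wire a sketch like this into the TPU/Colab setup from your notebook.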

Thanks for your time, Marco