cf020031308 / cf020031308.github.io

A collection of my personal knowledge, and tools to manage it.
https://cf020031308.github.io/

fast.ai #6

Closed cf020031308 closed 5 years ago

cf020031308 commented 5 years ago

Abandoned. Learning https://nndl.github.io/ instead.

https://course.fast.ai/videos/?lesson=1
https://pytorch.org/tutorials/beginner/nn_tutorial.html
https://github.com/fastai/fastai/blob/master/README.md
https://docs.fast.ai/

https://github.com/hiromis/notes/blob/master/Lesson1.md

cf020031308 commented 5 years ago

There are two main places we will tend to get data from for the course:

  1. Academic datasets
    • Academic datasets are really important. They are really interesting. They are things where academics spend a lot of time curating and gathering a dataset so that they can show how well different kinds of approaches work with that data. The idea is they try to design datasets that are challenging in some way and require some kind of breakthrough to do them well.
    • We are going to start with an academic dataset called the pet dataset.
  2. Kaggle competition datasets

Both types of datasets are interesting for us particularly because they provide a strong baseline. That is to say, you want to know if you are doing a good job. With Kaggle datasets that come from a competition, you can actually submit your results to Kaggle and see how well you would have done in that competition. If you can get in about the top 10%, then I'd say you are doing pretty well.

For academic datasets, academics write down in papers what the state of the art is ﹣ how well models have performed on that dataset. So this is what we are going to do: we are going to try to create models that get right up towards the top of Kaggle competitions, preferably in the top 10, not just the top 10%, or that meet or exceed academic state-of-the-art published results. When you use an academic dataset, it's important to cite it. You don't need to read the paper right now, but if you are interested in learning more about the dataset ﹣ why it was created and how it was created ﹣ all the details are there.

cf020031308 commented 5 years ago

This might seem weird because images already have sizes. It is a shortcoming of current deep learning technology that a GPU has to apply the exact same instruction to a whole bunch of things at the same time in order to be fast. If the images are different shapes and sizes, it can't do that, so we actually have to make all of the images the same shape and size. In part 1 of the course, we are always going to make the images square; in part 2, we will learn how to use rectangles as well ﹣ it turns out to be surprisingly nuanced. But pretty much everybody, in nearly all computer vision modeling, uses this square approach. 224 by 224 is, for reasons we'll learn about, an extremely common size that most models tend to use, so if you just use size=224, you're probably going to get pretty good results most of the time. This is one of the little bits of artisanship I want to teach you: what generally just works.
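The crop-to-square step can be sketched without any library (the `center_square_box` helper below is illustrative, not a fastai function); the resulting square would then be resized to a fixed size such as 224 by 224 so every image in a batch has the same shape:

```python
def center_square_box(width, height):
    """Return (left, top, right, bottom) of the largest centered square crop."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 640x480 landscape photo: crop the middle 480x480 square, then resize to 224
print(center_square_box(640, 480))  # (80, 0, 560, 480)
```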

cf020031308 commented 5 years ago

It really helps train a deep learning model if each one of those red, green, and blue channels has a mean of zero and a standard deviation of one.

If your data is not normalized, it can be quite difficult for your model to train well. So if you have trouble training a model, one thing to check is that you've normalized it.
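fastai v1 does this per channel with the dataset's statistics (e.g. `data.normalize(imagenet_stats)`); a minimal stdlib sketch of the idea for a single channel:

```python
import statistics

def normalize_channel(values):
    # Shift the channel to mean 0, then scale it to standard deviation 1
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

pixels = [10.0, 20.0, 30.0, 40.0]
normed = normalize_channel(pixels)
# normed now has mean 0 and standard deviation 1
```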

cf020031308 commented 5 years ago

The first time I run this on a newly installed box, it downloads the ResNet34 pre-trained weights. That means this particular model has already been trained for a particular task: looking at about one and a half million pictures of all kinds of different things ﹣ a thousand categories of things ﹣ using an image dataset called ImageNet.

So the idea is that we don't start with a model that knows nothing at all, but we start by downloading a model that knows something about recognizing images already.

cf020031308 commented 5 years ago

It's kind of the focus of the whole course: how to do this thing called "transfer learning" ﹣ how to take a model that already knows how to do something pretty well and make it so that it can do your thing really well. We take a pre-trained model and then fit it so that, instead of predicting the thousand categories of ImageNet with ImageNet data, it predicts the 37 categories of pets using your pet data.
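The mechanics can be sketched in plain Python (all names here are illustrative, not the fastai API): keep the pretrained body and swap the 1000-way ImageNet head for a fresh 37-way one.

```python
import random

def fresh_head(n_features, n_classes):
    # A new, randomly initialised final layer: one weight row per class
    return [[random.gauss(0.0, 0.01) for _ in range(n_features)]
            for _ in range(n_classes)]

# Pretend pretrained model: a frozen feature extractor plus an ImageNet head
model = {"body": "pretrained ImageNet features (frozen)",
         "head": fresh_head(512, 1000)}

# Transfer learning: keep the body, replace the head to predict 37 pet breeds
model["head"] = fresh_head(512, 37)
print(len(model["head"]))  # 37
```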

cf020031308 commented 5 years ago

https://arxiv.org/pdf/1803.09820.pdf

A loss function is something that tells you how good your prediction was. Specifically, if you predicted one class (say, a cat breed) with great confidence but you were actually wrong, that's going to have a high loss, because you were very confident about the wrong answer. That's what it basically means to have a high loss.
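This behaviour ﹣ confident and wrong means high loss ﹣ is exactly what cross-entropy loss gives you:

```python
import math

def cross_entropy(probs, true_class):
    # Loss is the negative log of the probability assigned to the true class
    return -math.log(probs[true_class])

# Confidently right: predicted 90% cat, and it was a cat -> low loss
print(cross_entropy([0.9, 0.1], 0))  # ~0.105

# Confidently wrong: predicted 90% cat, but it was a dog -> high loss
print(cross_entropy([0.9, 0.1], 1))  # ~2.303
```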

cf020031308 commented 5 years ago

A confusion matrix basically shows you, for every actual type of dog or cat, how many times it was predicted to be each type of dog or cat.
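A minimal sketch of building one from class indices (stdlib only, not the fastai `ClassificationInterpretation` API):

```python
from collections import Counter

def confusion_matrix(actual, predicted, n_classes):
    # matrix[i][j] counts how often true class i was predicted as class j
    counts = Counter(zip(actual, predicted))
    return [[counts[(i, j)] for j in range(n_classes)]
            for i in range(n_classes)]

actual    = [0, 0, 1, 1, 1, 2]
predicted = [0, 1, 1, 1, 0, 2]
m = confusion_matrix(actual, predicted, 3)
# Row 1 says class 1 was predicted correctly twice and mistaken for class 0 once
print(m)  # [[1, 1, 0], [1, 2, 0], [0, 0, 1]]
```

Correct predictions sit on the diagonal; everything off the diagonal is a mistake.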

cf020031308 commented 5 years ago

The learning rate basically says how quickly we update the parameters in the model.
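Concretely, the learning rate scales each parameter update; a toy example minimising f(w) = w² (whose gradient is 2w):

```python
def gradient_step(w, grad, lr):
    # One update: move the parameter against the gradient, scaled by lr
    return w - lr * grad

w = 4.0
for _ in range(50):
    w = gradient_step(w, 2 * w, lr=0.1)
print(w)  # w has shrunk very close to 0, the minimum of f
```

A larger `lr` takes bigger steps (faster, but it can overshoot and diverge); a smaller one crawls slowly toward the minimum.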

cf020031308 commented 5 years ago

A good rule of thumb: after you unfreeze (i.e. train the whole thing), pass the max learning rate parameter a slice, and make the first part of that slice about 10 times smaller than the second part (your learning rate from the first stage).
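In fastai this looks like `learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-4))`. One plausible way to spread learning rates between the two ends of such a slice across layer groups is geometric spacing (a sketch of the idea, not fastai's exact implementation):

```python
def spread_lrs(lo, hi, n_groups):
    """Spread learning rates across layer groups: lowest for the earliest
    layers, highest for the head (geometric spacing between lo and hi)."""
    if n_groups == 1:
        return [hi]
    ratio = (hi / lo) ** (1 / (n_groups - 1))
    return [lo * ratio ** i for i in range(n_groups)]

# After unfreezing: the head trains at 1e-4, the earliest layers 10x slower
print(spread_lrs(1e-5, 1e-4, 3))
```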

cf020031308 commented 5 years ago

极验 (Geetest) https://github.com/hiromis/notes/blob/master/Lesson1.md#splunk-anti-fraud-software-13810

cf020031308 commented 5 years ago

https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/

cf020031308 commented 5 years ago

If you see your validation loss get pretty damn high ﹣ before we even learn what validation loss is, just know this ﹣ your learning rate is too high. With a low learning rate, our error rate does get better, but very, very slowly.

cf020031308 commented 5 years ago

What if we train for just one epoch? Our error rate is certainly better than random: 5%. But look at the difference between training loss and validation loss ﹣ the training loss is much higher than the validation loss. So too few epochs and too low a learning rate look very similar. You can just try running more epochs, and if it's taking forever, you can try a higher learning rate. If you try a higher learning rate and the loss goes off to 100,000 million, put it back to where it was and try a few more epochs. That's the balance, and that's all you care about 99% of the time ﹣ and this is only the 1 in 20 times that the defaults don't work for you.

Too many epochs create something called "overfitting".

cf020031308 commented 5 years ago

So the only thing that tells you that you're overfitting is that the error rate improves for a while and then starts getting worse again. You will see a lot of people, even people who claim to understand machine learning, tell you that if your training loss is lower than your validation loss, then you are overfitting. As you will learn today in more detail, and during the rest of the course, that is absolutely not true.

Any model that is trained correctly will always have a lower training loss than validation loss.
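The "improves for a while, then gets worse" signal can be checked mechanically (a toy heuristic, not a fastai function):

```python
def looks_overfit(error_rates, patience=3):
    # Overfitting shows up as the validation error rate improving for a
    # while, then getting worse for several epochs in a row
    best = min(error_rates)
    epochs_since_best = len(error_rates) - 1 - error_rates.index(best)
    return epochs_since_best >= patience

print(looks_overfit([0.30, 0.20, 0.15, 0.14, 0.16, 0.18, 0.21]))  # True
print(looks_overfit([0.30, 0.20, 0.15, 0.14, 0.13, 0.13, 0.12]))  # False
```

Note it looks only at the error rate over epochs, not at the training-vs-validation loss gap ﹣ matching the point above.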

cf020031308 commented 5 years ago

"Tensor" means array, but specifically it's an array of a regular shape. So it's not an array where row 1 has two things, row 3 has three things, and row 4 has one thing, what you call a "jagged array". That's not a tensor. A tensor is any array which has a rectangular or cube or whatever ﹣ a shape where every row is the same length and every column is the same length.

cf020031308 commented 5 years ago