patflick / miopen-benchmark

benchmarking miopen
BSD 3-Clause "New" or "Revised" License

Initialization in MIOpen-benchmarks #8

Closed mattsinc closed 6 years ago

mattsinc commented 6 years ago

I see that there is a uniform() function in tensor.hpp to initialize a given array to random values. However, it does not seem to ever be called in the code base. Thus, I was wondering how arrays that are allocated in device_alloc() are initialized. Are they not initialized at all? If so, is this the intended behavior?
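To be clear about what I mean by "initialized", here is a standalone HIP sketch of filling a device buffer with uniform random values. It is only an illustration of the concept, not the actual tensor.hpp code:

```cpp
// Standalone sketch: fill a device buffer with uniform random values from
// the host. Illustrative only -- not the tensor.hpp implementation.
#include <hip/hip_runtime.h>
#include <random>
#include <vector>

void uniform_fill(float* dev_ptr, size_t n) {
    std::vector<float> host(n);
    std::mt19937 gen(42);
    std::uniform_real_distribution<float> dist(-1.f, 1.f);
    for (auto& x : host) x = dist(gen);
    // copy random host data into the (otherwise uninitialized) device buffer
    hipMemcpy(dev_ptr, host.data(), n * sizeof(float), hipMemcpyHostToDevice);
}

int main() {
    float* d = nullptr;
    hipMalloc((void**)&d, 1024 * sizeof(float)); // device_alloc()-style allocation: contents undefined
    uniform_fill(d, 1024);                       // explicit initialization
    hipFree(d);
}
```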

Matt

mattsinc commented 6 years ago

Wanted to ping about this again.

Matt

patflick commented 6 years ago

Apologies for the very late reply. Thank you for pointing this out; I must've forgotten to call .uniform().

It should be used for the weight tensors in two layers, Conv and Linear (am I missing one?), and the inputs to the models should be initialized as well:

I don't think the other layers require initialization. Am I missing something?
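Roughly what I have in mind, as a minimal stand-in sketch (the Tensor and layer classes and their member names below are assumptions for illustration, not the repo's exact API):

```cpp
// Illustrative only: a stand-in Tensor with a uniform() method, showing
// where the initialization calls belong. The real class lives in tensor.hpp;
// the layer/member names here are assumptions, not the repo's exact API.
#include <random>
#include <vector>

struct Tensor {
    std::vector<float> data;
    explicit Tensor(size_t n) : data(n) {}
    void uniform() {                       // fill with uniform random values
        std::mt19937 gen(std::random_device{}());
        std::uniform_real_distribution<float> dist(-1.f, 1.f);
        for (auto& x : data) x = dist(gen);
    }
};

struct ConvLayer   { Tensor weights; explicit ConvLayer(size_t n)   : weights(n) { weights.uniform(); } };
struct LinearLayer { Tensor weights; explicit LinearLayer(size_t n) : weights(n) { weights.uniform(); } };

int main() {
    ConvLayer conv(3 * 3 * 64);      // weight tensors initialized in the layer constructors
    LinearLayer fc(4096);
    Tensor input(3 * 224 * 224);
    input.uniform();                 // model input initialized before the forward pass
}
```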

Cheers

patflick commented 6 years ago

@mattsinc I added the initialization to the places mentioned above. I currently don't have a HIP/ROCm test system available, so I pushed the changes to the dev branch for now. Can you let me know if these changes compile and work for you?

Cheers

mattsinc commented 6 years ago

Hi @patflick,

Thanks for getting back to me. I agree those are the right places to initialize, but there are a couple more places that I think should also be initialized:

I'm assuming the reason you didn't initialize these is that you expect init_forward to handle it? Otherwise I'm not sure why alexnet and resnet need initialization but these don't.

Also, wouldn't we want to initialize in device_alloc()?

Thanks, Matt

patflick commented 6 years ago

You are right about the other benchmarks (wino, layerwise). I think main.cpp can be deleted (it seems like a rather useless leftover from early on).

Initializing in device_alloc() would be overkill. A lot of buffers are only used to store the output of some layer (they are written before they are ever read), and thus don't need initialization.

patflick commented 6 years ago

This should do it. Let me know when you get a chance to test the changes in the dev branch.

cheers

mattsinc commented 6 years ago

In regards to your patches, I posted a couple comments in alexnet.cpp about typos and such: https://github.com/patflick/miopen-benchmark/commit/eebcfca6ca947734dbda764f69aebcce0fae2b95

I grabbed your commits and ran all benchmarks on my ROCm setup (1.6.4). I found that all of them still compile and run. Alexnet and Resnet now have prints like "uniform rands: 307200" during the Init Fwd part. That's good, but it seems like this print should maybe be converted to a DEBUG() print?
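Something along these lines is what I had in mind; the macro below is just a generic compile-time-guarded pattern, not necessarily how DEBUG() is defined in this repo:

```cpp
// Generic compile-time-guarded debug print; the repo's own DEBUG() macro
// may look different, this is just the pattern I mean.
#include <iostream>

#ifdef ENABLE_DEBUG
#define DEBUG(msg) do { std::cerr << "[DEBUG] " << msg << std::endl; } while (0)
#else
#define DEBUG(msg) do { } while (0)
#endif

int main() {
    size_t n = 307200;
    DEBUG("uniform rands: " << n);   // only printed when built with -DENABLE_DEBUG
}
```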

Winograd and Layerwise didn't have any such prints until I added the 2 new commits you just posted. With those commits, I see similar prints.

In regards to main.cpp, I found it super useful to have a single layer test in your repo when I started. I actually uncommented the part you have commented out and commented out the check_add part. So my personal opinion is that it's valuable to leave main.cpp in, but of course it's up to you.

Unrelated to all of the above: something I've been trying and failing to understand about this repo is the batch size for the different benchmarks. As best as I can tell, there is no explicit batch size and the input is just the sizes passed in for each layer. Is this understanding correct?

Thanks, Matt

patflick commented 6 years ago

RE printouts: That's very odd, considering there is no print statement in the .uniform() function (tensor.hpp:L218).

RE main.cpp: a small example might be useful. I'll consider adding one, although I may not have time until a few weeks from now.

RE batch size: The batch size and input dimensions are set per model. For example:

These were the most common input dimensions and batch sizes when I implemented these benchmarks initially. Feel free to play around with these.
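In other words, the leading dimension of the model's input shape is the batch size. A minimal standalone illustration (the descriptor struct and the concrete values here are hypothetical, not the repo's type):

```cpp
// Stand-in for an NCHW shape descriptor; the repo's own descriptor type may
// differ. This only shows that the first dimension is the batch size.
#include <cstdio>

struct ShapeNCHW {
    int n, c, h, w;   // n = batch size, c = channels, h/w = spatial dims
};

int main() {
    ShapeNCHW input{128, 3, 224, 224};   // hypothetical: a batch of 128 RGB 224x224 images
    std::printf("batch size = %d, elements per sample = %d\n",
                input.n, input.c * input.h * input.w);
}
```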

Cheers - Patrick

mattsinc commented 6 years ago

@patflick, you're right, I added that print in May when checking if the function was ever called. My mistake.

Re: main.cpp: I would think just leaving main.cpp in, but with the BatchNorm forward layer, would be sufficient. Your call though.

Re: batch size: wow, can't believe I missed that ... thanks!!

Anyways, other than the small fixes in alexnet.cpp and potentially adding main.cpp back in, you should feel free to merge into master.

Matt

patflick commented 6 years ago

Pulled into master with pull request #11, closing the issue.