Closed: akhauriyash closed this issue 2 years ago
There are a couple of reasons. The main one is that NB2 actually contains some architectures that do not make sense. In their codebase, if a cell structure is, for example, a simple conv followed by none, then:

1) the conv is still executed,
2) none takes the output of the conv as input and simply discards it by producing torch.zeros_like(x),
3) the rest of the network is executed normally, so the network is trained even though it is effectively learning nonsense (the input does not flow to the output).

For measuring latency we optimise models by removing unnecessary operations, which also means we simply do not execute networks like that at all (after training they are constant functions anyway). Off the top of my head, I would say that's the reason behind all the "15284" pickles.

For the remaining ones, I would suspect the extra missing results are models that failed to run on-device, despite our best efforts. Unfortunately I don't have much time right now to take a close look at those. If you want to help me out a bit, I would appreciate it if you could identify which archs are missing. Otherwise I will put it in my backlog and take a look when the amount of other work decreases a bit. IIRC, some models failed to run/convert with SNPE, although SNPE has been updated significantly since then, so it might not be reproducible now. Edge TPU also had problems with some models. If those are in fact the files that are missing some models, that's most likely the reason.
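To make the "conv followed by none" case concrete, here is a minimal stand-in (NumPy instead of PyTorch, and conv replaced by an arbitrary non-constant op; this is an illustration, not NB2's actual code). The none op returns zeros of the input's shape, so the whole cell is a constant function of its input:

```python
import numpy as np

def conv(x):
    # stand-in for a convolution: any non-constant operation works here
    return x * 2.0

def none_op(x):
    # NB2's "none" op: accepts its input, then ignores it entirely,
    # analogous to torch.zeros_like(x)
    return np.zeros_like(x)

def cell(x):
    # conv is still computed, but its output is immediately discarded
    return none_op(conv(x))

a = cell(np.array([1.0, 2.0]))
b = cell(np.array([5.0, -3.0]))
# a == b == zeros: the output never depends on the input,
# so there is no point measuring this network's latency
```

Since the conv's output can never reach the rest of the network, a latency-oriented optimiser can legally delete the whole path, which is why such architectures are skipped entirely.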
Thanks a lot for your response! That makes sense. I will try to identify which ones are missing soon and get back to you. Hope that's okay! I already have the script ready, so it shouldn't be too much work.
Hello,
I downloaded the pickle files and tested them as shown below:
Why are there fewer than 15625 architectures? Am I missing something?
Thanks!