manikyabard closed this 3 years ago
Code changes look fine, but I am wondering whether we should just drop the `Dict` version. I get that it is easier to specify a single column override with the `Dict`, but it is really not that hard to generate a vector of overrides plus `nothing` entries with a generator. It's so deep in the API hierarchy that we can forgo the same level of convenience we apply to the top-level functions. And if the consuming model-building function always expects an in-order vector, then the `Dict` approach doesn't really make sense.
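For illustration, a minimal sketch of the suggested alternative: building an in-order vector of per-column overrides (with `nothing` where no override is given) from the column order. The column names and the `overrides` dictionary here are hypothetical, not taken from FastAI.jl.

```julia
# Hypothetical categorical columns, in the order the model expects them.
catcols = [:education, :occupation, :relationship]

# Only one column gets a custom embedding size.
overrides = Dict(:occupation => 10)

# Comprehension over the column order: `nothing` for columns without an
# override, so the result stays aligned with `catcols`.
sz_overrides = [get(overrides, col, nothing) for col in catcols]
# -> [nothing, 10, nothing]
```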
Yeah, that does make sense. I think I'll just remove the method which takes in a `Dict`, and update the tabularclassification notebook to show an example of using custom embedding sizes for people new to FastAI.jl or Julia.
Thanks Manikya for the change! Is this ready to merge?
If everything looks okay with the changes, then yeah it should be.
Updates `get_emb_sz` to ensure the order of embedding sizes is as expected, which might not have been the case when relying on `Dict` key order alone.
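To illustrate why iterating over the column order matters (this is only a sketch, not the actual `get_emb_sz` implementation; the `cardinalities` data and the sizing rule are assumptions for the example):

```julia
# Hypothetical categorical columns and their cardinalities.
catcols = [:education, :occupation, :relationship]
cardinalities = Dict(:education => 16, :occupation => 15, :relationship => 6)

# fastai-style heuristic for picking an embedding width from a cardinality.
emb_sz_rule(n_cat) = min(600, round(Int, 1.6 * n_cat^0.56))

# Iterate over `catcols`, not `keys(cardinalities)`, so the resulting
# (cardinality, embedding size) pairs follow the column order the model
# expects, regardless of how the Dict happens to order its keys.
emb_szs = [(cardinalities[col], emb_sz_rule(cardinalities[col])) for col in catcols]
```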