turgut090 closed this 4 years ago
Hi Turgut,
thanks for your PR! It is great that you are building a wrapper for TF Addons from R! (In fact, I had planned to write about TF Addons for a long time, but did not have time to create the wrapper.) So, this really is very important work for the R community!
Regarding the post: I think it would be much easier for readers to grasp that importance if the text were (quite a bit!) shorter and went into less detail. In particular, I would not explain the Transformer here, and I would also refrain from showing code pieces and visualizations that are not really needed to make the point.
Think about what really matters: that you are providing access to a rich reservoir of techniques, all nicely integrated into R keras! It's not really about any single layer (although I admit MultiHeadAttention is cool!), nor about a single activation, etc. Especially since TF Addons is under constant development, which is exactly the point! It's the place where all the new, cool stuff is going to appear, and you're enabling that from R... you don't want to distract from that.
In short, I think all this would come across much more clearly if the post were structured more like the README of your package: a concise overview plus examples of how to use it. Here, for example, is a nice and short package announcement post:
https://blogs.rstudio.com/ai/posts/2019-12-18-tfhub-0.7.0/
So, shorter really is better here - you want your readers to quickly understand what this is about and what it's useful for. You could still put the detailed examples and explanations in the GitHub repo, as vignettes or gists, and link to them from the post.
Thank you for your review and your time. I will make the changes and open a PR again!
Hi Sigrid, please review when you have time.