aguschin closed this issue 2 years ago
Yeah we should consolidate these 2 tickets.
@jorgeorpinel, I see you updated the description above. Some feedback looks outdated, so I assume this is the feedback for mlem.ai (i.e. the main branch). Since #188 has a more recent GS version, it would be great if you adapted the feedback for that version.
Or you can keep it as it is; I'll apply what I think should be applied, and then you'll TAL at #188 and we'll continue there.
UPD: get-started/deploying was not updated yet in #188.
What I did here is move all the specific suggestions from https://github.com/iterative/mlem.ai/issues/58 and keep that one only for more general feedback that may involve rewriting the GS. The specific ones here can be applied on top of the existing GS, I think. I see some are being addressed in https://github.com/iterative/mlem.ai/pull/188.
> Some feedback looks outdated
Feel free to scratch it out if it no longer applies, or check the box if it has been solved.
@jorgeorpinel, @mike0sv. While rewriting Get Started (see the review app) to be pretty simple while demonstrating key Use Cases for MLEM, I'm tempted to showcase a local docker deploy instead of Heroku. The reason: it's easier to reproduce locally. OFC I will mention Heroku, SageMaker, and K8s in the very first sentences on the page, but my concern is that it will be too toyish anyway. WDYT?
Basically, a local docker deploy is building a docker image + running it locally on your machine.
The simplicity of this approach compared to Heroku is that (1) you don't need to log in/register with Heroku and get an API key, and (2) there's no need to explain what a MLEM Env is (may be incorrect here TBH, Mike please check me).
Still, deploying somewhere outside the local machine is a real deployment, while a local docker deploy is just a local deploy.
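For reference, the local flow being proposed boils down to something like this (flags elided; the exact syntax is an assumption based on the commands named in this thread, not verified against a specific MLEM release):

```
# Build a Docker image from a saved model, then run it locally.
# (flags elided -- see `mlem build --help` for the actual options)
mlem build docker ...
docker run -p 8080:8080 ...
```

No external account or API key is involved, which is the reproducibility argument above.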
Ok, I did everything with Heroku and I like it. No need to use local Docker deploy I think.
@jorgeorpinel, I think I addressed almost everything here. Except for:
> What is Saving models about? Codification? Adopting GitOps? Possibly rename + emphasize the main purpose of `mlem.api.save`; Mention/link to `load()`?
- Truncate code block (link to full file in example repo).
- Show relevant parts of the .mlem file (link to the full file).
- Mention/link to other ML frameworks supported.
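For context, the save/load pair under discussion amounts to something like the following (a sketch assuming mlem and scikit-learn are installed; the `sample_data` argument name is my assumption from the MLEM API of that era, so check it against the API reference):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from mlem.api import save, load

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier().fit(X, y)

# save() writes the model binary plus a .mlem metadata file next to it
save(model, "models/rf", sample_data=X)

# load() restores a ready-to-use model object from those files
clf = load("models/rf")
```

This is essentially the whole "Saving models" step, which is why emphasizing `save` (and linking `load`) matters.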
Could you please check out the new GS index page? Do we need to shorten the code block? Do we need to shorten YAML output? Is this ok to do?
My point: maybe it's long, but reproducible. This is what I would expect from GS, that should teach me the basic features of MLEM.
Btw, if we're going to use another dataset (not iris), the training script will be longer. In that case, we'll have to extract details to some repo, I'm afraid. Rn the "pro" of the current GS is that we don't need any repo.
But still, we have the `example-mlem-get-started` repo, which we can put to some use. Rn in #188 we use it only as a source of models to load (when we provide examples, we try to supply URLs to a real model in that repo, so the command would run).
WDYT? @jorgeorpinel
UPDATE: OK, I see you already decided to stick with Heroku.
> I think I addressed almost everything here

What about:

- Kind of a product question though: is `mlem` too verbose?
- `apply()` is not used in Applying models. Is that OK?
- What is Saving models about? ...

> please check out the new GS index page...
Nice. See my review.
Truncating code blocks in general is helpful (to emphasize the important parts of such code blocks AND to make the doc faster to read).
Thanks for the feedback, @jorgeorpinel! Applied suggestions from your review as well, except for a few things (left my comments there).
> Is it (Exporting models) required for serving/deploying? If so, is it done automatically by `serve|deploy`?
No, `serve` is standalone. `build` uses `serve` under the hood. `deploy` uses `build` and `serve` under the hood. Explained this in get-started/building.
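The layering described here could be summarized as follows (commands elided and simplified; a sketch of the relationships, not verified CLI syntax):

```
mlem serve ...    # standalone: serve a saved model directly
mlem build ...    # package the model; uses serve under the hood inside the artifact
mlem deploy ...   # uses build + serve under the hood to ship to a target
```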
> Use tabs to go from API to CLI? (This applies to most pages.)
The only page that needs this, considering the current content, is Applying models.
We don't use this "tabs" mechanic anywhere on mlem.ai, so let's keep it simple for users for now. We'll get back to it once we start working on #190.
> Truncate sample output (multiple places)
Where exactly? The output may be verbose, but I think it's good to be clear in GS, to show people what is happening.
> Move 4 use cases from the index page to the end of the "Saving" section to say: "you just saved the model. Now you can use it in one of these ways:"
Good idea. Did that, you can check it out.
> U: Now `apply()` is not used in Applying models. Is that OK?
Yes, it's not something we need in GS. It's covered in the CLI/API reference.
> `build` uses `serve` under the hood. `deploy` uses `build` and `serve` under the hood. Explained this in get-started/building.
OK, thanks for clarifying. TBH we don't need to mention this in GS (although a short note probably doesn't hurt either). In general, it's better to leave implementation details for other sections of the docs.
> The output may be verbose, but I think it's good to be clear in GS
Yes, it's probably more of a product question: Is MLEM's default output too verbose? (Is there an issue somewhere to discuss this?)
> Is MLEM's default output too verbose?
We don't have one, although I just mentioned this in https://github.com/iterative/mlem/issues/390. Please feel free to create a separate issue, or discuss it there if you think it's the same topic.
Hooray, we have covered everything then. I'm closing the issue. Thanks for your effort @jorgeorpinel!
Thank you.
We still have #58
Specific improvements to existing content:

- … `mlem.api.save`; Mention/link to `load()`?
- … `serve|deploy`?
- … `create env|deployment`); Truncate sample output
- `mlem.api.apply` is less clear than `mlem.api.load` - while the 2nd is not explained in GS. Let's … 1st with 2nd.
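To illustrate that last point, the two styles being compared look roughly like this (a sketch; `"models/rf"` and `data` are placeholders, and the exact `apply()` signature, including the `method` argument, is my assumption, so check it against the API reference):

```python
from mlem.api import load, apply

# Style 1: apply() -- one call, but the indirection may be less clear
preds = apply("models/rf", data, method="predict")

# Style 2: load() -- restore the model, then call it like a normal object
model = load("models/rf")
preds = model.predict(data)
```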