iterative / mlem.ai

✨ Landing page for MLEM
https://mlem.ai/
BSD Zero Clause License

GS: more feedback #129

Closed · aguschin closed this issue 1 year ago

aguschin commented 2 years ago

Specific improvements to existing content.

aguschin commented 2 years ago

related to https://github.com/iterative/mlem.ai/issues/58

jorgeorpinel commented 2 years ago

related to https://github.com/iterative/mlem.ai/issues/58

Yeah we should consolidate these 2 tickets.

aguschin commented 1 year ago

@jorgeorpinel, I see you updated the description above. Some feedback looks outdated, so I assume this is the feedback for mlem.ai (i.e. the main branch). Since #188 has a more recent GS version, it would be great if you could adapt the feedback for that version.

Or you can keep it as it is: I'll apply what I think should be applied, then you'll TAL at #188 and we'll continue there.

UPD: get-started/deploying was not updated yet in #188 .

jorgeorpinel commented 1 year ago

What I did here was move all the specific suggestions from https://github.com/iterative/mlem.ai/issues/58 and keep that one only for more general feedback that may involve rewriting the GS. The specific ones here can be applied on top of the existing GS, I think. I see some are being addressed in https://github.com/iterative/mlem.ai/pull/188. πŸ‘πŸΌ

jorgeorpinel commented 1 year ago

Some feedback looks outdated

Feel free to scratch it out if it no longer applies, or check the box if it has been solved.

aguschin commented 1 year ago

@jorgeorpinel, @mike0sv: while rewriting Get Started (see the review app) to be pretty simple while still demonstrating MLEM's key Use Cases, I'm tempted to showcase a local Docker deploy instead of Heroku. The reason: it's easier to reproduce locally. OFC I will mention Heroku, SageMaker and K8s in the very first sentences on the page, but my concern is that it will be too toyish anyway. WDYT?

Basically, a local Docker deploy is building a Docker image + running it locally on your machine.
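For concreteness, here's roughly what that could look like via the Python API (a minimal sketch: the builder options and image name below are assumptions for illustration, not the documented interface):

```python
# Hypothetical local Docker deploy: build an image from a saved MLEM
# model, then run it on this machine. The builder options below are
# assumptions for illustration -- check the mlem.api reference.
from mlem.api import build

build(
    "docker",                        # assumed builder identifier
    model="models/rf",               # path to a previously saved model
    image={"name": "mlem-rf-demo"},  # assumed image option
)

# Then run the image locally and query the bundled model server:
#   docker run -p 8080:8080 mlem-rf-demo
```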

The simplicity of this approach compared to Heroku is that you don't need (1) to log in/register to Heroku and get an API key, and (2) to explain what a MLEM Env is (may be incorrect here TBH, Mike please check me).

Still, deploying somewhere outside of the local machine is a real deployment, while a local Docker deploy is only a local deploy πŸ€”

aguschin commented 1 year ago

Ok, I did everything with Heroku and I like it. No need to use a local Docker deploy, I think.

@jorgeorpinel, I think I addressed almost everything here. Except for:

What is Saving models about? Codification? Adopting GitOps? Possibly rename + emphasize the main purpose of mlem.api.save; Mention/link to load()?

  • Truncate the code block (link to the full file in the example repo).
  • Show relevant parts of the .mlem file (link to the full file).
  • Mention/link to other supported ML frameworks.
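For reference, the mlem.api.save / load() pair quoted above, as a minimal sketch (the scikit-learn model, paths, and sample data here are illustrative only):

```python
# Minimal save/load round trip with mlem.api -- paths and model are
# illustrative, not the exact GS code.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from mlem.api import load, save

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier().fit(X, y)

# Saves the model plus a .mlem metafile describing its interface;
# sample_data lets MLEM capture the input schema.
save(model, "models/rf", sample_data=X)

# Later (or elsewhere): load the model back and use it as usual.
model = load("models/rf")
print(model.predict(X.iloc[:5]))
```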

Could you please check out the new GS index page? Do we need to shorten the code block? Do we need to shorten the YAML output? Is this OK to do?

My point: maybe it's long, but it's reproducible. This is what I would expect from a GS that should teach me the basic features of MLEM.

aguschin commented 1 year ago

Btw, if we're going to use another dataset (not iris), the training script will be longer. In that case, we'll have to extract the details to some repo, I'm afraid. Right now the "pro" of the current GS is that we don't need any repo.

But still, we have the example-mlem-get-started repo, which we can put to some use. Right now in #188 we use it only as a source of models to load (when we provide examples, we try to supply URLs to a real model in that repo, so the command would actually run).
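For example, loading a model straight from that repo could look like this (a sketch: the model name "rf" and the rev are assumptions about the repo's layout):

```python
# Load a model directly from the example repo over HTTP -- no clone needed.
# The model name "rf" and rev "main" are assumptions about the repo layout.
from mlem.api import load

model = load(
    "rf",
    project="https://github.com/iterative/example-mlem-get-started",
    rev="main",
)
```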

WDYT? @jorgeorpinel

jorgeorpinel commented 1 year ago

re Heroku vs. Docker deployment (GS):

tempted to showcase local docker deploy instead of Heroku... easier to reproduce locally. but my concern is that it will be too toyish anyway. WDYT?

It's a good idea, but installing Docker locally is kind of a pain from what I remember (and some features are not supported outside of Linux). Heroku seems more realistic indeed, and users who don't want to try it can get a good idea from just reading the GS (if it's well done).

no need to explain what is MLEM Env

Didn't get what you meant there.

UPDATE: OK, I see you already decided to stick with Heroku. πŸ‘πŸΌ

jorgeorpinel commented 1 year ago

I think I addressed almost everything here

What about

What is Saving models about? ...

please check out the new GS index page...

Nice. See my review.

Truncating code blocks in general is helpful (to emphasize the important parts of such code blocks AND to make the doc faster to read).

aguschin commented 1 year ago

Thanks for the feedback, @jorgeorpinel! Applied the suggestions from your review as well, except for a few things (left my comments there).

Is it (Exporting models) required for serving/deploying? If so, is it done automatically by serve|deploy?

No, serve is standalone. Build uses serve under the hood. Deploy uses build and serve under the hood. Explained this in get-started/building.
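Roughly, the layering looks like this (a conceptual sketch only; the serve/build/deploy signatures below are simplified assumptions, not the exact documented API):

```python
# Conceptual layering of MLEM's serving stack -- signatures below are
# simplified assumptions, not the exact documented API.
from mlem.api import build, deploy, load, serve

model = load("models/rf")

# serve: standalone; starts a local server (e.g. FastAPI) for the model.
# (Blocks while the server runs.)
serve(model, "fastapi")

# build: packages the model into an artifact (e.g. a Docker image);
# uses serve under the hood so the artifact knows how to expose the model.
build("docker", model="models/rf")

# deploy: ships the artifact to a platform (e.g. Heroku);
# uses build and serve under the hood.
deploy("heroku", model="models/rf")
```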

Use tabs to go from API to CLI? (This applies to most pages.)

The only page that needs this, considering the current content, is Applying models. We don't use this "tabs" mechanic anywhere on mlem.ai, so let's keep it simple for users for now. We'll get back to it once we start working on #190.

Truncate sample output (multiple places)

Where exactly? The output may be verbose, but I think it's good to be clear in the GS, to show people what is happening.

Move 4 use cases from the index page to the end of "Saving" section to say: "you just saved the model. Now you can use it in one of these ways:"

Good idea. Did that, you can check it out.

U: Now apply() is not used in Applying models. Is that OK?

Yes, it's not something we need in the GS. It's covered in the CLI/API reference.
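For context, the apply() in question, as a minimal sketch (the model path and data are illustrative, and the method= keyword is my reading of the API, so treat it as an assumption):

```python
# Minimal use of mlem.api.apply -- model path and data are illustrative.
from sklearn.datasets import load_iris

from mlem.api import apply

X, _ = load_iris(return_X_y=True, as_frame=True)

# Runs the model's method over the data without writing a serving script.
preds = apply("models/rf", X, method="predict")
print(preds[:5])
```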

jorgeorpinel commented 1 year ago

Build uses serve under the hood. Deploy uses build and serve under the hood. Explained this in get-started/building.

OK thanks for clarifying. TBH we don't need to mention this in GS (although a short note probably doesn't hurt either). In general it's better to leave implementation details for other sections of the docs πŸ™‚

The output may be verbose, but I think it's good to be clear in GS

Yes, it's probably more of a product question: is MLEM's default output too verbose? (Is there an issue somewhere to discuss this?)

aguschin commented 1 year ago

Is MLEM's default output too verbose

We don't have one, although I just mentioned this in https://github.com/iterative/mlem/issues/390. Please feel free to create a separate issue, or discuss it there if you think it's the same topic.

aguschin commented 1 year ago

Hooray, we have covered everything then. I'm closing the issue. Thanks for your effort, @jorgeorpinel πŸ™πŸ»

jorgeorpinel commented 1 year ago

Thank you.

jorgeorpinel commented 1 year ago

We still have #58 😬