kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

Console thoughts #21

Closed. bgrant0607 closed this issue 6 years ago.

bgrant0607 commented 9 years ago

Copying from https://github.com/kubernetes/kubernetes/issues/8270:

This was discussed at the Kubernetes contributors meeting in December.

First of all, visualization/dashboard is more important than actuation. The CLI (kubectl) is expected to remain the main tool for controlling the system and the applications running on it, and table-friendly views should be incorporated into kubectl as well, not just into a GUI. I want to ensure that the GUI and kubectl maintain a common look, feel, and functionality for analogous table views and use cases -- the primary use cases and essential information are identical. That's not to say the GUI shouldn't support actuation, but actuation creates additional issues, such as SSO auth.

With respect to visualization/dashboards, we need views that are customized to the most common use cases:

  1. Understanding the system architecture and/or application topology: diagram view that shows interconnection between objects
  2. Deployment status: what images/versions are deployed, which deployments are in progress, which pods deviate from the desired state, health by deployment version, provenance info
  3. App debugging: for devs -- what’s failing most recently / most often, why, how often, logs, events
  4. System debugging: dashboard w/ system and node health, uptime, versions, config, status events, change history, surface logs (from builds, containers or container failures), what’s where (node-centric view: example visualization: http://azure.microsoft.com/blog/2014/08/28/hackathon-with-kubernetes-on-azure/)
  5. Resource usage analysis: how much of each resource (mem, cpu, disk) is being used (current/historical, individual/distribution), usage relative to quota or limits, why did my thing run out of resources, top (sorted by decreasing usage)
  6. App dashboard / launchpad: status overview, launch links to app-domain, custom app dashboards, links to cadvisor and/or kubelet GUIs, links to elasticsearch and heapster (logging and monitoring dashboards), etc.

Some functionality needed to support all of these views is probably still missing at the moment (e.g., stats collection).

Presentation guidelines:

Come up with recommended semantic labels/annotation keys and meanings:

Other issues:

cc @lavalamp @jackgr @smarterclayton @bryk @JanetKuo

smarterclayton commented 9 years ago

@jwforres

lavalamp commented 9 years ago

First thought: if you want kubectl & UI to stay synced, the most obvious thing to do is to have the UI actually use kubectl to generate its tables. We could add a convenient output format (HTML?) to kubectl for this purpose.
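
To make that concrete, here's a rough sketch of the shape this could take (nothing like it exists today, and the "-o html" flag itself is only hypothetical): a console endpoint shells out to a real kubectl invocation and wraps the plain-text table for the browser, so the UI's table is by construction the same one kubectl prints.

```go
// Sketch only: serve kubectl's own table output to the browser.
package main

import (
	"fmt"
	"html"
	"net/http"
	"os/exec"
)

func podsTable(w http.ResponseWriter, r *http.Request) {
	// A real kubectl invocation; its plain-text table is what the CLI shows.
	out, err := exec.Command("kubectl", "get", "pods", "--all-namespaces").CombinedOutput()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Escape and wrap in <pre> so the UI displays exactly what kubectl prints.
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	fmt.Fprintf(w, "<pre>%s</pre>", html.EscapeString(string(out)))
}

func main() {
	http.HandleFunc("/tables/pods", podsTable)
	http.ListenAndServe(":8080", nil)
}
```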


jwforres commented 9 years ago

I wouldn't recommend limiting a browser client to HTML that is automatically generated by a CLI tool. Browsers allow for better data visualization than CLIs, which means that instead of rendering raw data in certain columns you can add things like status icons, progress bars, whatever. Also, the types of user scenarios you would use a GUI for are often different from the scenarios you would drop to the CLI for (sometimes it's even a different type of user who uses one vs. the other). This means that data we deem important in the CLI may not be as important in the console, and vice versa. I'd argue that user / use case relevance is more important than strictly maintaining consistency between kubectl and a GUI.


bgrant0607 commented 9 years ago

Re. CLI/UI alignment: at minimum, things we deemed critical to display in the CLI should usually also appear in the UI, information displayed in the two shouldn't be contradictory, and the terminology used in the two should be consistent. This may mean that some processing currently done in kubectl should move into the API, such as the summarization of pod status: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/resource_printer.go#L542
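
For reference, here's a loose Go paraphrase of what that summarization does (simplified, not the actual kubectl code; it assumes the modern k8s.io/api/core/v1 package): a single display status is derived from the pod phase and container states, which is precisely the kind of logic that could live behind the API so the CLI and UI agree by construction.

```go
// Simplified paraphrase of kubectl's pod-status summarization, for illustration.
package summary

import corev1 "k8s.io/api/core/v1"

// PodDisplayStatus derives one human-readable status string from a pod.
func PodDisplayStatus(pod *corev1.Pod) string {
	// Start from the coarse phase (Pending, Running, Succeeded, ...).
	status := string(pod.Status.Phase)
	if pod.Status.Reason != "" {
		status = pod.Status.Reason
	}
	// A container-level reason (e.g. CrashLoopBackOff, ImagePullBackOff)
	// is usually what the user actually needs to see, so prefer it.
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil && cs.State.Waiting.Reason != "" {
			status = cs.State.Waiting.Reason
		} else if cs.State.Terminated != nil && cs.State.Terminated.Reason != "" {
			status = cs.State.Terminated.Reason
		}
	}
	return status
}
```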

jwforres commented 9 years ago

So a couple of comments about what our projects are doing today:

For time-series graphs, see http://angular-patternfly.rhcloud.com/#/api/patternfly.charts.directive:pfUtilizationChart

and the other charts under patternfly.charts

For icons we are using https://www.patternfly.org/styles/icons/
pficon-route, pficon-service, pficon-replicator (for replication controllers), pficon-registry, pficon-image, pficon-cluster, pficon-container-node

And for how we visually summarize pod status: [screenshot: pod_status_summary]

jwforres commented 9 years ago

I'd like to see reusable web components that are specific to k8s pushed to https://github.com/kubernetes-ui. We've contributed a couple recently, like https://github.com/kubernetes-ui/topology-graph and https://github.com/kubernetes-ui/container-terminal

bryk commented 9 years ago

@lavalamp Re consistency with CLI: consistency can be easily achieved when the API is the place that does the business logic and the CLI (kubectl) and UI (console) are only presentation layers. @jwforres and @bgrant0607 are correct that we should present the same concepts, but in different ways, depending on the use case.

The console gets its data through a Go server. So, if something is not available in the API, the server can, at least, import code from kubectl and reuse it. @bgrant0607 Is this feasible?
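
A sketch of that division of labor (endpoint, types, and values all invented for illustration): the Go server applies whatever shared or kubectl-derived logic it needs, then hands the JavaScript frontend plain JSON to render however it likes.

```go
// Sketch of a console Go server exposing summarized data as JSON.
package main

import (
	"encoding/json"
	"net/http"
)

// PodRow is what the frontend renders; the field choice is illustrative only.
type PodRow struct {
	Name     string `json:"name"`
	Status   string `json:"status"` // e.g. produced by shared summarization logic
	Restarts int32  `json:"restarts"`
}

func listPods(w http.ResponseWriter, r *http.Request) {
	// In a real server these rows would come from the Kubernetes API plus
	// the shared (possibly kubectl-derived) logic; hardcoded to keep it short.
	rows := []PodRow{
		{Name: "frontend-7d4f", Status: "Running", Restarts: 0},
		{Name: "worker-9k2s", Status: "CrashLoopBackOff", Restarts: 12},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(rows)
}

func main() {
	http.HandleFunc("/api/pods", listPods)
	http.ListenAndServe(":9090", nil)
}
```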

bryk commented 9 years ago

@jwforres Re sharing reusable components: definitely. However, we will always start by implementing a component in the console and then release it if it proves to be useful for others. I think that's what you're doing in OpenShift, right?

jwforres commented 9 years ago

@bryk We are doing some of both. We knew we wanted the terminal widget in several of our projects, so we wrote it as an upstream Bower component to begin with.

The log viewer we just wrote, on the other hand, we did in OpenShift first because of time constraints, but we will most likely contribute it to PatternFly.


lavalamp commented 9 years ago

Not sure I follow -- the API of course gives the same data to any client. But the display choices kubectl makes (e.g., what fields to show) are the thing I thought we wanted to be consistent with. In other words, it's not business-logic consistency that's the issue. Or are you talking about moving, e.g., the "describe" logic into the API?


bgrant0607 commented 9 years ago

@lavalamp There are examples of logic in kubectl, even in get, that should be moved into the control plane (#7311). For an example, see: https://github.com/kubernetes/console/issues/21#issuecomment-151228637

bgrant0607 commented 9 years ago

@jwforres No idea whether it's practical to import CLI code into the Go console server. I don't know what the API is between the console server and the JavaScript.

jwforres commented 9 years ago

@bgrant0607 Not practical. We kept that API as minimal as possible on purpose. It's basically a single auto-generated config.js file that tells the console where to find the OpenShift and k8s APIs and what the OAuth config is. There is also a discussion about splitting the console out of the openshift binary; in that case it most likely won't use Go to serve assets anymore, since generating bindata.go causes us infinite headaches. See https://trello.com/c/nKR022N1
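
For illustration only (the field names are invented, not OpenShift's actual config.js contents): in this model the entire server-to-frontend "API" can be one generated script, along these lines.

```go
// Sketch: the server's only handoff to the browser is one generated config.js.
package main

import (
	"net/http"
	"text/template"
)

var configJS = template.Must(template.New("config").Parse(
	`window.CONSOLE_CONFIG = {
  k8sAPI:        "{{.K8sAPI}}",
  openshiftAPI:  "{{.OpenShiftAPI}}",
  oauthClientID: "{{.OAuthClientID}}"
};`))

func serveConfig(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/javascript")
	// Values would come from the deployment's real configuration.
	configJS.Execute(w, map[string]string{
		"K8sAPI":        "https://cluster.example.com:8443",
		"OpenShiftAPI":  "https://cluster.example.com:8443",
		"OAuthClientID": "console",
	})
}

func main() {
	http.Handle("/config.js", http.HandlerFunc(serveConfig))
	http.ListenAndServe(":8080", nil)
}
```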

lavalamp commented 9 years ago

OK, I can see why that would be desirable, but we have no precedent currently. I can think of three possibilities, and I'm not a fan of any.

  1. Run kubectl as a service in the cluster.
  2. Add, e.g., describe subresources on objects that emit these decorated objects.
  3. Use a header/query parameter to request decorated objects.

It's not very clear to me what format a decorated object would have. Presumably just plain text (like the function you linked) is not flexible enough.
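
To make option 3 concrete, here's a speculative sketch (the `decorate` parameter and the response shape are invented): the server returns the unmodified object plus a structured decoration block, rather than plain text, so each client can render it its own way.

```go
// Speculative sketch of "decorated objects" requested via a query parameter.
package main

import (
	"encoding/json"
	"net/http"
)

// Decorations is a structured alternative to describe-style plain text.
type Decorations struct {
	Summary  string            `json:"summary"`            // e.g. "CrashLoopBackOff"
	Columns  map[string]string `json:"columns"`            // table-cell values by column name
	Warnings []string          `json:"warnings,omitempty"` // human-readable flags
}

// DecoratedObject wraps the raw API object without modifying it.
type DecoratedObject struct {
	Object      json.RawMessage `json:"object"`
	Decorations *Decorations    `json:"decorations,omitempty"` // only when requested
}

func getPod(w http.ResponseWriter, r *http.Request) {
	obj := json.RawMessage(`{"kind":"Pod","metadata":{"name":"worker-9k2s"}}`)
	resp := DecoratedObject{Object: obj}
	if r.URL.Query().Get("decorate") == "true" {
		resp.Decorations = &Decorations{
			Summary:  "CrashLoopBackOff",
			Columns:  map[string]string{"STATUS": "CrashLoopBackOff", "RESTARTS": "12"},
			Warnings: []string{"container has restarted 12 times"},
		}
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/api/v1/namespaces/default/pods/worker-9k2s", getPod)
	http.ListenAndServe(":8080", nil)
}
```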


jimmidyson commented 9 years ago

Having the console be a "simple" front-end to REST APIs gives us much more flexibility to deploy extensions & build a modular (microservice-y) console. This is the approach that fabric8 has taken with the work by @gashcrumb & @jstrachan. Views & REST APIs need to be kept separate. Totally agree on moving as much stuff up into the API layer though, with JSON-encoded responses that the view layers (CLI, browser, etc.) render appropriately.

bryk commented 9 years ago

Let's move project architecture discussion to PR #32. It outlines our initial architecture plans.

If interested, please comment on the PR. Note also that the project is moving forward quickly, so I'm applying the architecture right now. Thank you for all your input!

bgrant0607 commented 8 years ago

There's a good table of usability considerations on p. 82 of this TR: http://resources.sei.cmu.edu/asset_files/TechnicalReport/2001_005_001_13859.pdf

kluzny commented 8 years ago

+1 for JSON, if only for extensibility

bgrant0607 commented 8 years ago

I'd make app debugging and deployment status the highest priority workflow-specific views.

bryk commented 8 years ago

How do you imagine the app debugging view? Any specific thoughts here?

bryk commented 8 years ago

cc @romlein @Lukenickerson

jwforres commented 8 years ago

I think it depends on what the goal of your debugging view is. We recently added a "monitoring" page which follows logs / metrics / events for things (screenshots below), but we also have things in context all over the place. If you are looking at a pod that is crash-looping, we give you a link to generate a "debug" pod and immediately exec you into it using 'sh' as the command instead, and we tell you what the container command would have been so you can try it yourself. We provide inline notifications in relevant places to nudge people to do things like add health checks to their pod template.

My general point is that debugging isn't a "view", it's a user story, and different workflows are going to mean different things in different places.

[screenshot: monitoring_page]

[screenshot: monitoring_page_expanded]

bgrant0607 commented 8 years ago

I made some comments on the proposal PR about how the view could be customized for different use cases: https://github.com/kubernetes/dashboard/pull/589#discussion_r57986320

@jwforres How would the user find a crash-looping pod in the first place? I think that's the kind of thing a debugging-focused view could surface by default, e.g., by sorting by number of restarts or time of last restart, or by flagging pods with containers failing liveness probes.
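
As a sketch of what such a default might look like (not dashboard code; it assumes the k8s.io/api/core/v1 types): sort the pod list so the likeliest crash-loopers come first.

```go
// Sketch: a debugging view's default ordering, most-restarted pods first.
package debugview

import (
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// totalRestarts sums restart counts across a pod's containers.
func totalRestarts(pod *corev1.Pod) int32 {
	var n int32
	for _, cs := range pod.Status.ContainerStatuses {
		n += cs.RestartCount
	}
	return n
}

// SortByRestarts orders pods so the likeliest crash-loopers surface first;
// time of last restart or failing probes could rank pods the same way.
func SortByRestarts(pods []corev1.Pod) {
	sort.Slice(pods, func(i, j int) bool {
		return totalRestarts(&pods[i]) > totalRestarts(&pods[j])
	})
}
```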

jwforres commented 8 years ago

@bgrant0607 You can find crash-looping pods any number of ways. We re-designed our overview, and one of the things we surface now is pod warnings, like when you have a crash-looping container. Also, if you've seen our "pod donut" chart visualization before (I think it was in an earlier comment), it flags pods that are in what we consider a warning state in a yellow color; clicking that donut takes you to the pod list, where you can also see the warnings. The pod donut visualization appears both on the overview and on the details of an RC.

[screenshot: crash-loop]

[screenshot: crash-loop-2]

jwforres commented 8 years ago

Also, for visual reference, this is the debug pod I mentioned before:

[screenshot: debug_terminal]
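
A loose sketch of that mechanism (not OpenShift's actual implementation; it assumes the k8s.io API types): copy the failing pod's spec, swap the entrypoint for a shell, and drop the probes so the copy isn't killed while you investigate.

```go
// Sketch: build a "debug" copy of a pod that runs a shell instead of its command.
package debugpod

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MakeDebugPod returns a copy of pod whose containers run "sh", suitable
// for exec-ing into; the original command is shown to the user separately.
func MakeDebugPod(pod *corev1.Pod) *corev1.Pod {
	debug := pod.DeepCopy()
	debug.ObjectMeta = metav1.ObjectMeta{
		Name:      pod.Name + "-debug",
		Namespace: pod.Namespace,
	}
	debug.Spec.RestartPolicy = corev1.RestartPolicyNever
	for i := range debug.Spec.Containers {
		c := &debug.Spec.Containers[i]
		c.Command = []string{"sh"} // replace the crashing entrypoint with a shell
		c.Args = nil
		c.Stdin = true // keep stdin open and allocate a TTY for an interactive exec
		c.TTY = true
		// Probes would kill an idle shell, so drop them in the debug copy.
		c.LivenessProbe = nil
		c.ReadinessProbe = nil
	}
	return debug
}
```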

bryk commented 8 years ago

Thanks @jwforres for your input. We want to take a similar approach to the one you described, i.e., have general views support debugging use cases by, e.g., highlighting things that need your attention or are in a broken state, plus some views/operations dedicated to debugging, like restarting your pod, exec-ing into a pod, etc.

fejta-bot commented 6 years ago

Issues go stale after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle rotten /remove-lifecycle stale

fejta-bot commented 6 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /close