Open fntlnz opened 5 years ago
does not create a custom resource
Would there be any benefit to doing this? This is a serious question (I don't know enough about what the solution here should be). This paradigm of using custom resources seems to be all the rage right now, though, and in our environment I can think of a few benefits to having traces be a custom resource.
The status information we will already get out to implement this will be usable to replace the fields in the get commands here
Yeah this would be great. Right now when jobs fail to run, I find that I want to know what their status is and why they failed, and ultimately I want to know what's going on with the pod that it spawned.
@dalehamel I think we can do the status thing very easily without a custom resource. I was just saying that having a custom resource would make it easier to manage the lifecycle of a trace; however, I don't like the idea of forcing users to run some long-running process solely for kubectl trace besides the trace they are running. The philosophy of this tool was to just run your traces and give you results, and it would be cool to avoid any additional complexity for the user; that's why we haven't done any server-side logic yet.
I'm just not 100% sure that can be avoided as this project develops.
The philosophy of this tool was to just run your traces and give you results, and it would be cool to avoid any additional complexity for the user; that's why we haven't done any server-side logic yet.
Good to stick to the philosophy of keeping it simple :+1: I think that this should be documented somewhere - this mental framework will rule out (or lower the priority of) solutions that would introduce daemonsets, configmaps, or other server-side resources in favor of alternatives that can be done client-side.
I'm just not 100% sure that can be avoided as this project develops.
Better not to introduce it until it's needed / there's a compelling case.
In the meantime, for this issue, I think that narrows the solution space down to just walking through the API and constructing a data model to be displayed for the objects one wants described. I believe this is basically what the describe command does elsewhere; for example, describe node includes data from a variety of sources.
Similar to the describe for normal kubernetes resources but for traces. Nota bene: remember that kubectl trace does not create a custom resource but leverages kubernetes resources to inspect the target cluster with bpftrace programs, so this command can be slightly different from kubectl describe even if pursuing the same goals. Should aggregate the events for the aggregated resources we create to do a trace.
The status information we will already get out to implement this will be usable to replace the <missing> fields in the get commands here