This is a kubectl plugin that lets you profile applications with low overhead in Kubernetes environments by generating flame graphs and, for Java applications, other outputs such as JFR recordings, thread dumps, heap dumps and class histograms by using `jcmd`. For Python applications, thread dump output and speedscope format files are also supported. See the Usage section. More functionality will be added in the future.

Running `kubectl-prof` does not require any modification to existing pods.

This is an open source fork of kubectl-flame with several new features and bug fixes.
* `--runtime=containerd` (default)
* `--runtime=crio`
In order to profile a Java application in pod `my-pod` for 5 minutes and save the flame graph into `/tmp`, run:

```shell
kubectl prof my-pod -t 5m -l java -o flamegraph --local-path=/tmp
```
NOTICE: if `--local-path` is omitted, the flame graph result will be saved into the current directory.

Profiling a Java application in Alpine-based containers requires the `--alpine` flag:

```shell
kubectl prof mypod -t 1m --lang java -o flamegraph --alpine
```

NOTICE: this is only required for Java apps; the `--alpine` flag is unnecessary for other languages.
Using `jcmd` as the default tool, profiling a Java pod and generating JFR output requires the `-o/--output jfr` option:

```shell
kubectl prof mypod -t 5m -l java -o jfr
```
Using `async-profiler`, profiling a Java pod and generating JFR output requires the `-o/--output jfr` and `--tool async-profiler` options:

```shell
kubectl prof mypod -t 5m -l java -o jfr --tool async-profiler
```
Using `jcmd` as the default tool, profiling a Java pod and generating thread dump output requires the `-o/--output threaddump` option:

```shell
kubectl prof mypod -l java -o threaddump
```
Using `jcmd` as the default tool, profiling a Java pod and generating heap dump output requires the `-o/--output heapdump` option:

```shell
kubectl prof mypod -l java -o heapdump --tool jcmd
```
Using `jcmd` as the default tool, profiling a Java pod and generating heap histogram output requires the `-o/--output heaphistogram` option:

```shell
kubectl prof mypod -l java -o heaphistogram --tool jcmd
```
Supported container runtime values are `crio` and `containerd`:

```shell
kubectl prof mypod -t 1m --lang java --runtime crio
```
In order to profile a Python application in pod `mypod` for 1 minute and save the flame graph into `/tmp`, run:

```shell
kubectl prof mypod -t 1m --lang python -o flamegraph --local-path=/tmp
```
Profiling a Python pod and generating thread dump output requires the `-o/--output threaddump` option:

```shell
kubectl prof mypod -t 1m --lang python --local-path=/tmp -o threaddump
```
Profiling a Python pod and generating speedscope output requires the `-o/--output speedscope` option:

```shell
kubectl prof mypod -t 1m --lang python --local-path=/tmp -o speedscope
```
In order to profile a Golang application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang go -o flamegraph
```
In order to profile a Node.js application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang node -o flamegraph
```
In order to profile a Ruby application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang ruby -o flamegraph
```
In order to profile a Clang application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang clang -o flamegraph
```
In order to profile a Clang++ application in pod `mypod` for 1 minute, run:

```shell
kubectl prof mypod -t 1m --lang clang++ -o flamegraph
```
Profiling in continuous mode with a custom CPU limit and a custom agent image:

```shell
kubectl prof mypod -l java -o flamegraph -t 5m --interval 60s --cpu-limits=1 -r containerd --image=localhost/my-agent-image-jvm:latest --image-pull-policy=IfNotPresent
```

Profiling with a custom service account and a custom target namespace:

```shell
kubectl prof mypod -n contprof --service-account=profiler --target-namespace=contprof-stupid-apps -l go
```

Profiling with custom resource requests and limits:

```shell
kubectl prof mypod --cpu-requests 100m --cpu-limits 200m --mem-requests 100Mi --mem-limits 200Mi -l python
```

For the full list of options, run:

```shell
kubectl prof --help
```
Install Krew.

Install the repository and plugin:

```shell
kubectl krew index add kubectl-prof https://github.com/josepdcs/kubectl-prof
kubectl krew search kubectl-prof
kubectl krew install kubectl-prof/prof
kubectl prof --help
```
See the release page for the full list of pre-built assets, and download the binary matching your architecture:

```shell
wget https://github.com/josepdcs/kubectl-prof/releases/download/1.2.5/kubectl-prof_1.2.5_linux_amd64.tar.gz
tar xvfz kubectl-prof_1.2.5_linux_amd64.tar.gz && sudo install kubectl-prof /usr/local/bin/
```
```shell
$ go get -d github.com/josepdcs/kubectl-prof
$ cd $GOPATH/src/github.com/josepdcs/kubectl-prof
$ make install-deps
$ make
```

To build the agent images, modify the `DOCKER_BASE_IMAGE` property in the Makefile and run:

```shell
$ make agents
```
`kubectl-prof` launches a Kubernetes Job on the same node as the target pod. Under the hood, `kubectl-prof` can use the following tools according to the programming language:
For Java, `async-profiler` supports:

* flame graphs with `--tool async-profiler` and `-o flamegraph`.
* JFR output with `--tool async-profiler` and `-o jfr`.
* collapsed or raw output with `--tool async-profiler` and `-o collapsed` or `-o raw`.
* Flame graph is the default output if no `-o/--output` is given.

For Java, `jcmd` supports:

* JFR output with `--tool jcmd` and `-o jfr`.
* thread dumps with `--tool jcmd` and `-o threaddump`.
* heap dumps with `--tool jcmd` and `-o heapdump`.
* class histograms with `--tool jcmd` and `-o histogram`.
* JFR is the default output if no `-o/--output` is given.

`async-profiler` is the default tool if no `--tool` is given, and the default output is flame graphs if no option `-o/--output` is also given.

For Go:

* flame graphs with `-o flamegraph`.
* raw output with `-o raw`.
* Flame graph is the default output if no `-o/--output` is given.

For Python:

* flame graphs with `-o flamegraph`.
* thread dumps with `-o threaddump`.
* speedscope output with `-o speedscope`.
* raw output with `-o raw`.
* Flame graph is the default output if no `-o/--output` is given.

For Ruby:

* flame graphs with `-o flamegraph`.
* speedscope output with `-o speedscope`.
* callgrind output with `-o callgrind`.
* Flame graph is the default output if no `-o/--output` is given.

For Node.js:

* flame graphs with `-o flamegraph`.
* raw output with `-o raw`.
* Flame graph is the default output if no `-o/--output` is given.
* Flame graphs can also be generated by using the `--prof-basic-prof` flag.

The raw output is a text file with the raw data from the profiler. It can be used to generate flame graphs, or you can use https://www.speedscope.app/ to visualize the data.
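As a sketch of what collapsed/raw data looks like, each line is a semicolon-separated call stack followed by a sample count; a file in this shape can be rendered with Brendan Gregg's FlameGraph scripts (the file name, stack frames and local FlameGraph path below are illustrative assumptions, not output produced by kubectl-prof):

```shell
# Hypothetical collapsed/raw profile: one "frame;frame;frame count" entry per line
cat > example.collapsed <<'EOF'
main;compute;hot_loop 90
main;io_wait 10
EOF

# Render it to an SVG with flamegraph.pl from a local clone of
# https://github.com/brendangregg/FlameGraph (path is an assumption):
# ./FlameGraph/flamegraph.pl example.collapsed > example.svg
```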
`kubectl-prof` also supports working in two modes, discrete and continuous:

* Discrete: a single profiling session whose duration is given by the `-t time` option.
* Continuous: repeated profiling sessions, enabled by giving the `--interval time` option in addition to `-t time`.
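For instance, the same Java pod could be profiled either way (pod name and durations are illustrative, reusing only flags shown elsewhere in this document):

```shell
# Discrete: one single 5-minute profiling session
kubectl prof mypod -t 5m -l java -o flamegraph

# Continuous: repeated 60-second sessions over 5 minutes
kubectl prof mypod -t 5m --interval 60s -l java -o flamegraph
```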
In addition, `kubectl-prof` will attempt to profile all the processes detected in the container, based on the provided language. When this happens, the tool will display a warning similar to:

```
⚠ Detected more than one PID to profile: [2508 2509]. It will be attempt to profile all of them. Use the --pid flag specifying the corresponding PID if you only want to profile one of them.
```

If you want to profile only a specific process, you have two options:

* Use the `--pid PID` flag if you know the PID (the previous warning can help you identify the PID you want to profile).
* Use the `--pgrep process-matching-name` flag to match the process by name.

Please refer to the contributing.md file for information about how to get involved. We welcome issues, questions, and pull requests.
This project is licensed under the terms of the Apache 2.0 open source license. Please refer to LICENSE for the full terms.
| Service | Status |
|---|---|
| Github Actions | |
| GoReport | |