Kotlin Shell is a prototype tool for shell programming in Kotlin and Kotlin Script. It provides a shell-like API that takes advantage of Kotlin features.
For examples go to the examples section.
Creating processes is extremely easy in Kotlin Shell:
shell {
"echo hello world"()
}
// echo hello world
Piping is also supported:
shell {
val toUpper = stringLambda { it.toUpperCase() to "" }
pipeline { file("data.txt") pipe "grep abc".process() pipe toUpper }
}
// cat data.txt | grep abc | tr '[:lower:]' '[:upper:]'
The library is designed primarily for Unix-like operating systems and was fully tested on macOS. Windows support is not planned at the moment.
Kotlin Shell is distributed via GitHub Packages.
Use the kshell command for running scripts from the command line. To read more about it and download the command go here.
You can also download binaries of kotlin-shell-kts to use the script definition in a custom way.
Gradle:
repositories {
    // add the GitHub Packages repository here
}

dependencies {
    implementation("eu.jrie.jetbrains:kotlin-shell-core:0.2.1")
}
For more information about using GitHub Packages with Gradle go here or to the packages section of this repository.
Kotlin Shell features slf4j logging. To use it add a logging implementation, or add the NOP logger to turn logging off:
implementation("org.slf4j:slf4j-nop:1.7.26")
You can also download binaries of kotlin-shell-core to use the library in any other project.
Kotlin Shell scripts have the sh.kts extension.
Some environment variables may be set to customize script execution. Go to the environment section to learn more.
To run the script type:
kshell script.sh.kts
Read more and download the command here.
Kotlin Shell scripts support shebang:
#!/usr/bin/env kshell
A more low-level approach is supported with kotlinc:
kotlinc -cp PATH_TO_SHELL_KTS_ALL_JAR -Dkotlin.script.classpath -script SCRIPT.sh.kts ARGS
example:
kotlinc -cp lib/kotlin-shell-kts-all.jar -Dkotlin.script.classpath -script hello.sh.kts
Calling the shell block will provide access to the Kotlin Shell API:
shell {
// code
}
Kotlin Shell is driven by kotlinx.io and kotlinx.coroutines. Therefore the API is fully non-blocking and most functions are suspending. To take advantage of that, pass the script as a suspend fun and a CoroutineScope as parameters to the suspending shell block.
With given scope:
shell (
scope = myScope
) {
"echo hello world!"()
}
With new coroutine scope:
shell {
"echo hello world!"()
}
The blocking API features basic shell commands without the need to wrap them in coroutine calls:
"echo hello world!".process().run()
It can be accessed in Kotlin code as well by using the ScriptingShell class.
The shell block gives access to the full API of kotlin-shell. It receives GlobalScope as an implicit parameter:
shell {
"echo hello world!"()
}
Before starting any process you need to create a ProcessExecutable. Then you can start it directly or use it in a pipeline.
To start a new system process use the DSL:
val echo = systemProcess {
cmd { "echo" withArg "hello" }
}
echo()
or extensions:
val echo = "echo hello".process()
echo()
or simply:
"echo hello"()
To start a process from file contents use the File.process() extension:
val process = scriptFile.process(arg1, arg2)
process()
or simply:
scriptFile(arg1, arg2)
creating virtual KotlinScript processes is not implemented yet
To run an equivalent process multiple times call ProcessExecutable.copy().
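A short sketch of that pattern, assuming copy() returns a fresh, startable executable:

```kotlin
shell {
    val echo = "echo hello".process()
    echo()        // first run
    echo.copy()() // start an equivalent process again from a copy
}
```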
Pipelines can operate on processes, lambdas, files, strings, byte packages and streams.
Every executable element in Kotlin Shell receives its own ExecutionContext, which consists of stdin, stdout and stderr implemented as Channels. In the library, channels are used under the aliases ProcessChannel, ProcessSendChannel and ProcessReceiveChannel; their unit is always ByteReadPacket. Shell itself is an ExecutionContext and provides default channels:
- stdin is an always empty and closed ProcessReceiveChannel, which effectively acts like /dev/null. It can be accessed elsewhere via the nullin member.
- stdout is a rendezvous ProcessSendChannel that passes everything to System.out.
- stderr is a reference to stdout.

Besides these there is also a special ProcessSendChannel member called nullout, which acts like /dev/null.
Pipeline elements are connected by ProcessChannels that override their context's default IO. Only the necessary streams are overridden, so streams that are not piped are redirected to the channels that came with the context. Each element in the pipeline ends its execution after processing the last packet received before the close signal from its stdin channel.
Pipelines are logically divided into three parts: FROM, THROUGH and TO. The API is designed to look seamless, but in order to take full advantage of piping it is necessary to distinguish these parts. Every element can emit some output, but doesn't have to. Elements also shouldn't close their outputs after execution; that is done automatically by the piping engine and ensures that channels used by other entities (such as stdout) won't be closed.
Every pipeline starts with a single-element FROM section. It can be a Process, lambda, File, String, InputStream, ByteReadPacket or Channel. Elements used here receive no input (for processes and lambdas nullin is provided). Then the THROUGH or TO part occurs.
Piping THROUGH can be performed on a Process or lambda and can consist of any number of elements. They receive the input simultaneously while the producer is running (due to the limitations of the zt-exec library, SystemProcess may wait till the end of input) and can emit output as they go.
Every pipeline ends with a single-element TO section. Elements here take input but do not emit any output. If no TO element is provided, the pipeline builder will implicitly end the pipeline with stdout.
Schematic grammar for piping could look like this:
PIPELINE -> FROM THROUGH TO
PIPELINE -> FROM TO
FROM -> PROCESS | LAMBDA | FILE | STRING | INPUT_STREAM | BYTE_READ_PACKET | PROCESS_SEND_CHANNEL
THROUGH -> PROCESS | LAMBDA
TO -> PROCESS | LAMBDA | FILE | STRING_BUILDER | OUTPUT_STREAM | BYTE_PACKET_BUILDER | PROCESS_RECEIVE_CHANNEL
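The grammar above maps directly onto the builder DSL. A sketch exercising each section (the file name and grep pattern are illustrative, and stringLambda is described in the lambdas section below):

```kotlin
shell {
    val upper = stringLambda { line -> line.toUpperCase() to "" } // THROUGH: lambda
    val result = StringBuilder()                                  // TO: string builder
    // FROM: file, THROUGH: process and lambda, TO: StringBuilder
    pipeline { file("data.txt") pipe "grep abc".process() pipe upper pipe result }
}
```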
To construct and execute a pipeline use the pipeline builder:
pipeline { a pipe b pipe c }
A pipeline can be started with a Process, lambda, File, String, ByteReadPacket or InputStream. Once the pipeline is created it cannot be modified.
The pipeline builder takes an optional parameter mode of type ExecutionMode. It can be used for detaching or daemonizing the pipeline. By default it uses ExecutionMode.ATTACHED:
pipeline (ExecutionMode.ATTACHED) { a pipe b pipe c }
pipeline (ExecutionMode.DETACHED) { a pipe b pipe c }
pipeline (ExecutionMode.DAEMON) { a pipe b pipe c }
A constructed pipeline can be stored in an object of the Pipeline type:
val p = pipeline { a pipe b }
You can perform several operations on it:
- Pipeline.join() joins the pipeline
- Pipeline.kill() kills all elements of the pipeline

You can also access the processes member, which is a list of all processes in the pipeline.
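A sketch combining these operations (the commands are illustrative):

```kotlin
shell {
    val p = pipeline(ExecutionMode.DETACHED) {
        "cat data.txt".process() pipe "grep abc".process()
    }
    println(p.processes.size) // all processes taking part in the pipeline
    p.join()                  // wait for the whole pipeline to finish
    // p.kill()               // or terminate all of its elements instead
}
```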
To fork stderr from a process or lambda use forkErr:
pipeline { a pipe (b forkErr { /* fork logic */ }) pipe c }
It redirects the element's error stream to the given pipeline.
The builder function receives the new error ProcessReceiveChannel as an implicit argument.
The function should return a new Pipeline. If this pipeline won't be ended with TO, it will implicitly be appended with stdout.
The fork logic can be stored in a variable:
val fork = pipelineFork { it pipe filter pipe file }
pipeline { a forkErr fork }
The fork belongs to the process executable or lambda itself, so it can be used outside a pipeline as well:
val process = "cmd arg".process() forkErr { /* fork */ }
process()
val lambda = stringLambda { /* lambda */ } forkErr { /* fork */ }
pipeline { lambda pipe { /* ... */} }
As a shorthand, it is possible to fork errors directly to a given channel:
val channel: ProcessChannel = Channel()
val b = a forkErr channel
The basic lambda structure for piping is PipelineContextLambda:
suspend (ExecutionContext) -> Unit
It takes a context which consists of stdin, stdout and stderr channels. It can receive content immediately after it is emitted by the producer, and its consumer can likewise receive sent content simultaneously. The end of input is signalled by a closed stdin. PipelineContextLambda shouldn't close its outputs after execution.
There are several wrappers for PipelineContextLambda that can make piping easier. Most of them follow the template (stdin) -> Pair<stdout, stderr>:

name | definition | builder
---|---|---
PipelineContextLambda | suspend (ExecutionContext) -> Unit | contextLambda { }
PipelinePacketLambda | suspend (ByteReadPacket) -> Pair<ByteReadPacket, ByteReadPacket> | packetLambda { }
PipelineByteArrayLambda | suspend (ByteArray) -> Pair<ByteArray, ByteArray> | byteArrayLambda { }
PipelineStringLambda | suspend (String) -> Pair<String, String> | stringLambda { }
PipelineStreamLambda | suspend (InputStream, OutputStream, OutputStream) -> Unit | streamLambda { }
shell {
val upper = stringLambda { line ->
line.toUpperCase() to ""
}
pipeline { "cat file".process() pipe upper pipe file("result") }
}
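For lower-level control, contextLambda exposes the channels themselves. A sketch that counts incoming packets, assuming the context's stdin follows kotlinx.coroutines ReceiveChannel semantics:

```kotlin
shell {
    var packets = 0
    val counter = contextLambda { ctx ->
        for (packet in ctx.stdin) packets++ // iterate until stdin is closed
    }
    pipeline { "cat file".process() pipe counter }
    println("received $packets packets")
}
```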
A detached process or pipeline is executed asynchronously to the shell. It can be attached or awaited at any time. All detached jobs that have not ended will be awaited at the end of the script, before the shell block finishes.
To detach a process use the detach() function:
val echo = "echo hello world!".process()
detach(echo)
To join a process use the Process.join() method:
process.join()
You can perform these operations also on multiple processes:
detach(p1, p2, p3)
await(p1, p2, p3)
To join all processes use joinAll().
To access detached processes use the detachedProcesses member. It stores a list of pairs of detached job id to process.
To detach a pipeline use the detach() builder:
detach { p1 pipe p2 pipe p3 }
or pipeline() with the correct mode:
pipeline (ExecutionMode.DETACHED) { p1 pipe p2 pipe p3 }
To join a pipeline call Pipeline.join():
val pipeline = detach { p1 pipe p2 pipe p3 }
pipeline.join()
To access detached pipelines use the detachedPipelines member. It stores a list of pairs of detached job id to pipeline.
To attach a detached job (process or pipeline) use fg():
- fg(Int) accepting a detached job id (by default it will use 1 as the id)
- fg(Process) accepting a detached process
- fg(Pipeline) accepting a detached pipeline

To join all detached jobs call joinDetached().
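A sketch of the detach-then-attach workflow, assuming a long-running command:

```kotlin
shell {
    detach("sleep 10".process()) // runs in the background as detached job 1
    "echo doing other work"()
    fg(1)                        // attach the detached job again
}
```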
At the current stage, daemonizing processes and pipelines is implemented in a very unstable and experimental way, so it should not be used.
The environment in Kotlin Shell is divided into two parts: the shell environment and shell variables. The environment from the system is also copied.
To access the environment call:
- the environment list or the env command for the shell environment
- the variables list for shell variables
- shellEnv or the set command for the combined environment
- systemEnv for the environment inherited from the system

The system environment is copied to the shell environment at its creation. To access the system environment at any time call systemEnv.
The shell environment is copied to the Shell from the system. It can be modified and is copied to sub-shells.
To set an environment variable use export:
export("KEY" to "VALUE")
To make it read-only add readonly:
readonly export("KEY" to "VALUE")
To print an environment variable use env:
env("KEY")
To remove it use unset:
unset("KEY")
Shell variables are empty by default. They can be modified and are not copied to sub-shells.
To set a shell variable use variable:
variable("KEY" to "VALUE")
To make it read-only add readonly:
readonly variable("KEY" to "VALUE")
To print a shell variable use env:
env("KEY")
To remove a variable use unset:
unset("KEY")
Kotlin Shell uses some special variables for customisation of execution. They can be set explicitly in shell builders or can be inherited from the system. If any of these is not set, its default value will be used.
variable | type | usage | default value
---|---|---|---
SYSTEM_PROCESS_INPUT_STREAM_BUFFER_SIZE | Int | size of the SystemProcessInputStream buffer | 16
PIPELINE_RW_PACKET_SIZE | Long | maximal size of packets used in piping | 16
PIPELINE_CHANNEL_BUFFER_SIZE | Int | size of the ProcessChannels used in piping | 16
REDIRECT_SYSTEM_OUT | YES/NO | specifies whether System.out should be bypassed with Shell.stdout; as a result it synchronizes stdlib print() and println() with shell outputs | YES
Kotlin Shell implements some of the most popular shell commands, along with some special methods and properties.
To call a command use invoke():
cmd()
Its output will then be processed to stdout.
To pipe a command simply put it in the pipeline:
pipeline { cmd pipe process }
- & as detach
- cd, with cd(up) for cd .. and cd(pre) for cd -
- env
- exit as return@shell
- export
- fg
- jobs
- mkdir
- print and echo as print()/println()
- ps
- readonly
- set
- unset
- variable
Shell member functions provide easy ways of performing popular shell tasks:
- file() - gets or creates a file relative to the current directory

To implement a custom shell command create an extension member of the Shell class and override its getter:
val Shell.cmd: ShellCommand
get() = command {
/* command implementation returning String */
}
Such a command can be declared outside the shell block and used as a dependency.
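For illustration, a hypothetical hello command following the template above (the name and body are not part of the library):

```kotlin
// hypothetical custom command built with the library's command { } builder
val Shell.hello: ShellCommand
    get() = command {
        "hello from a custom command" // the returned String becomes the command output
    }
```

Inside a shell block it could then be invoked as hello() or piped like any other command.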
To implement a custom shell method use the basic function template:
suspend fun Shell.() -> T
where T is the desired return type or Unit. Such functions can be declared outside the shell block and used as dependencies.
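A sketch of such a method, using only the String invocation shown earlier (the name is illustrative):

```kotlin
// hypothetical custom shell method declared outside the shell block
suspend fun Shell.greet(name: String) {
    "echo hello $name"()
}
```

It could then be called inside any shell block as greet("world").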
Shell members provide easy Kotlin-like access to popular parameters:
- detachedPipelines
- detachedProcesses
- directory
- environment
- nullin
- nullout
- processes
- shellEnv
- systemEnv
- variables
To create a sub-shell use the shell block:
shell {
/* code */
shell {
/* code */
}
}
By default a sub-shell inherits the environment, directory, IO streams and constants. You can explicitly specify the shell variables and directory to use:
shell {
shell (
vars = mapOfVariables,
dir = directoryAsFile
) {
/* code */
}
}
Sub-shells suspend execution of the parent shell.
A sub-shell can be used to provide a custom environment for commands:
shell {
export("KEY" to "ONE")
shell (
vars = mapOf("KEY" to "TWO")
) {
"echo ${env("KEY")}"() // TWO
}
// rest of the script
}
Or to temporarily change the directory:
shell {
"echo ${env("PWD")}"() // ../dir
shell (
dir = file("bin")
) {
"echo ${env("PWD")}"() // ../dir/bin
}
// rest of the script
}
Kotlin Shell scripts support external and internal dependencies. The mechanism from kotlin-main-kts is used. Learn more about it in the KEEP and the blog post.
External dependencies from Maven repositories can be added via the @file:Repository and @file:DependsOn annotations:
@file:Repository("MAVEN_REPOSITORY_URL")
@file:DependsOn("GROUP:PACKAGE:VERSION")
They can then be imported with the standard import statement.
To import something from a local file use @file:Import:
@file:Import("SCRIPT.sh.kts")
The imported declarations can then be used with the standard import statement.
Examples of writing Kotlin Shell scripts can be found in the examples repository.
The integration tests in this repository are also a good source of detailed examples.