The steps involved in running the hadoop-less Spark shell are quite configurable:
1) the shell.sh script can be generated - some environment variables need to be linked to the AnalyticsShell object, but this can be done in a patterned way
2) shell-init.scala.sh scripts can be generated with configurable wildcard imports
3) the AnalyticsDriver class is the "face" of the shell
4) the AnalyticsDriver companion object can be provided entirely by the spark module in general
5) the AnalyticsShell object can be generalised if configured well
Ideally, most of this would live under the API, with some binding under each spark module.
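A minimal sketch of the generation idea in steps 1) and 2) might look like the script below. The env var name, the import list, and the generated file name (simplified here to shell-init.scala) are all assumptions for illustration, not the actual configuration:

```shell
#!/bin/sh
# Hypothetical generator sketch: emits shell.sh and an init script for a
# hadoop-less Spark shell. Names below are illustrative assumptions.

WILDCARD_IMPORTS="org.example.analytics._"   # configurable per spark module

# Init script: a REPL preamble with the configurable wildcard imports.
cat > shell-init.scala <<EOF
import ${WILDCARD_IMPORTS}
EOF

# shell.sh: exports the env variables that the AnalyticsShell object would
# read, then loads the init script into the REPL.
cat > shell.sh <<'EOF'
#!/bin/sh
export ANALYTICS_SHELL_CONF="${ANALYTICS_SHELL_CONF:-conf}"
exec spark-shell -i shell-init.scala "$@"
EOF
chmod +x shell.sh

echo "generated: shell.sh shell-init.scala"
```

With this split, the generator itself could sit under the API, and each spark module would only supply its own WILDCARD_IMPORTS binding.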