ros2 / sros2

tools to generate and distribute keys for SROS 2
Apache License 2.0

SROS2 Keystore Proposal #21

Open ruffsl opened 7 years ago

ruffsl commented 7 years ago

I'd like to refactor the SROS2 keystore and CLI to accommodate a more streamlined feature set that is also scalable to a larger number of keys, as well as being a little more agnostic to the secure transport. Below is a proposed outline of a new CLI building on top of the current ros2 secure subcommand behavior:

ros2 secure

keystore

auto

Automatically initialize, build, create, and sign keystore in one command

  • -c, --config
  • -g, --governance
  • -p, --policy
  • -n, --namespace
  • -f, --force

init

Initializes only the keystore folder and root CAs

  • -c, --config
  • -f, --force

build

Builds the keystore, regenerating and distributing the governance file

  • -g, --governance
  • -f, --force
  • -u, --unsigned

create

Create keystore entry for given namespace

  • -p, --policy
  • -f, --force
  • -u, --unsigned
  • -n, --namespace

sign

Sign keystore entry for given namespace

  • -f, --force
  • -r, --recursive
  • -n, --namespace

remove

Remove keystore entry for given namespace

  • -r, --recursive
  • -n, --namespace
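As a rough illustration of how the verbs and flags above could hang together, here is a hypothetical argparse sketch of part of the proposed subcommand tree (the real sros2 CLI is built on the ros2cli plugin machinery, so this is only a shape, not an implementation; only three of the six verbs are shown):

```python
import argparse

def build_parser():
    # Sketch of the proposed `ros2 secure keystore` verbs and options.
    parser = argparse.ArgumentParser(prog="ros2 secure keystore")
    sub = parser.add_subparsers(dest="verb", required=True)

    auto = sub.add_parser("auto", help="init, build, create, and sign in one command")
    auto.add_argument("-c", "--config")
    auto.add_argument("-g", "--governance")
    auto.add_argument("-p", "--policy")
    auto.add_argument("-n", "--namespace")
    auto.add_argument("-f", "--force", action="store_true")

    init = sub.add_parser("init", help="initialize keystore folder and root CAs")
    init.add_argument("-c", "--config")
    init.add_argument("-f", "--force", action="store_true")

    sign = sub.add_parser("sign", help="sign keystore entry for given namespace")
    sign.add_argument("-f", "--force", action="store_true")
    sign.add_argument("-r", "--recursive", action="store_true")
    sign.add_argument("-n", "--namespace")
    return parser

args = build_parser().parse_args(["sign", "-r", "-n", "/namespace/here"])
print(args.verb, args.recursive, args.namespace)  # → sign True /namespace/here
```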


Storage

I'd also like to suggest a standard layout for the default keystore layout on disk. Later we could probably play around with different types of secure keystore data structures and the like, but for now a simple nested directory should serve as a good start. This also has the added benefit of being simple to translate keystore read permissions to only certain authorized ros2 process given suitable apparmor profiles.

The proposed layout is as follows:

$ tree keystore_root/
keystore_root/        # Keystore root directory
├── governance.cnf    # Governance config
├── keystore.cnf      # Keystore config
├── permissions.cnf   # Permissions config
├── participants      # Participants root namespace subtree
│   └── namespace
│       └── here
│           ├── listener
│           │   ├── cert.pem
│           │   ├── governance.cert.pem -> ../../../../public/governance.cert.pem
│           │   ├── governance.p7s -> ../../../../shared/governance.p7s
│           │   ├── key.pem
│           │   ├── permissions.cert.pem -> ../../../../public/permissions.cert.pem
│           │   ├── permissions.p7s
│           │   ├── permissions.xml
│           │   └── req.pem
│           └── talker
│               ├── cert.pem            # namespace/here/talker public certificate
│               ├── governance.cert.pem -> ../../../../public/governance.cert.pem
│               ├── governance.p7s -> ../../../../shared/governance.p7s
│               ├── key.pem             # namespace/here/talker private key
│               ├── permissions.cert.pem -> ../../../../public/permissions.cert.pem
│               ├── permissions.p7s     # namespace/here/talker signed permissions
│               ├── permissions.xml     # namespace/here/talker permissions file
│               └── req.pem             # namespace/here/talker certificate request
├── private
│   ├── governance.key.pem      # Governance CA private key
│   ├── governance.xml          # Governance file
│   └── permissions.key.pem     # Permissions CA private key
├── public
│   ├── governance.cert.pem     # Governance CA public certificate
│   └── permissions.cert.pem    # Permissions CA public certificate
└── shared
    └── governance.p7s          # Signed governance

Within the root directory of the keystore are several config files and a set of specifically named folders. The configuration files are not fixed, and may vary given the middleware transport/vendor or user customization. However, a default naming scheme should be established to at least support implicit referencing from the CLI, given a default keystore structure.
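A minimal sketch of initializing that default skeleton (folder and config names mirror the proposed layout above; the function name is hypothetical):

```python
import os

def init_keystore(root):
    # Create the proposed keystore skeleton: the four top-level folders
    # plus empty placeholder config files, per the default layout.
    for d in ("participants", "private", "public", "shared"):
        os.makedirs(os.path.join(root, d), exist_ok=True)
    for cnf in ("governance.cnf", "keystore.cnf", "permissions.cnf"):
        open(os.path.join(root, cnf), "a").close()

init_keystore("keystore_root")
print(sorted(os.listdir("keystore_root")))
```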

The participants folder serves as the root of the namespace subtree in which the participant resources are nested and organized. This makes it simple for participants to look up runtime certificates and secrets, while also permitting users to manage and orchestrate those resources in a manner that mirrors the runtime computation graph.

The private and public folders are used to segregate resources within the keystore that are sensitive from those that are not. The shared folder serves as a special directory for resources that must be distributed among participants, but are not necessarily public outside the keystore.

Resources within the public and shared directories could be soft linked into created entries in the participants subtree. This helps ensure all participants have up-to-date instances of common resources, reduces duplicate files on disk, and benefits from OS page caching, while still enabling more advanced users to overwrite and customize the structure.
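The soft-linking step can be sketched with relative symlinks, so the links survive moving the keystore as a whole (the helper name is hypothetical; link targets follow the layout above):

```python
import os

def link_shared(keystore, node_dir):
    # Soft link shared/public resources into a participant entry so every
    # node sees an up-to-date copy without duplicating files on disk.
    links = {
        "governance.p7s": os.path.join("shared", "governance.p7s"),
        "governance.cert.pem": os.path.join("public", "governance.cert.pem"),
        "permissions.cert.pem": os.path.join("public", "permissions.cert.pem"),
    }
    for name, target in links.items():
        # Relative link target, e.g. ../../../../shared/governance.p7s
        rel = os.path.relpath(os.path.join(keystore, target), node_dir)
        os.symlink(rel, os.path.join(node_dir, name))

ks = "keystore_root"
node = os.path.join(ks, "participants", "namespace", "here", "talker")
os.makedirs(node, exist_ok=True)
os.makedirs(os.path.join(ks, "public"), exist_ok=True)
os.makedirs(os.path.join(ks, "shared"), exist_ok=True)
link_shared(ks, node)
print(os.readlink(os.path.join(node, "governance.p7s")))
```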

In practice, with respect to the keystore, secure ros2 runtime processes should only be granted read permissions for their exact namespace directory under participants (non-recursively), the public folder for necessary CA certificates, and the shared folder for common resources within the secure system. In addition, only the public folder should be readable by non-ros2 processes, e.g. for a signed certificate revocation list inside it that may require public hosting.
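As a sketch, an AppArmor profile fragment granting the talker exactly those read permissions might contain rules along these lines (the keystore path and the choice to confine per-node are assumptions; `/*` matches only one directory level, giving the non-recursive behavior described above):

```
# Read-only access to this node's own keystore entry (non-recursive)
/home/user/keystore_root/participants/namespace/here/talker/* r,
# Read-only access to CA certificates and shared signed artifacts
/home/user/keystore_root/public/* r,
/home/user/keystore_root/shared/* r,
```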

Remarks

Namespace collisions

One potential hazard with the participants subtree mimicking the namespace hierarchy is that participant namespaces could collide with resource file names, i.e. a folder and a file cannot share the same name within the same parent directory. However, given that ros2 prevents the inclusion of periods . within a node's namespace, this issue should be avoidable, as all resources/files within the keystore should be denoted with a file extension, e.g. .pem or .p7s, et cetera.
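The invariant is easy to enforce at entry-creation time: since resource files always carry an extension and namespace segments may not contain periods, the two name sets stay disjoint. A minimal sketch (function name hypothetical):

```python
def validate_namespace(ns):
    # Reject periods in namespace segments, mirroring the ros2 naming rule
    # that keeps node namespaces disjoint from extension-bearing resource
    # files (cert.pem, permissions.p7s, ...) in the same parent directory.
    for segment in ns.strip("/").split("/"):
        if "." in segment:
            raise ValueError(f"invalid namespace segment: {segment!r}")
    return True

print(validate_namespace("/namespace/here/talker"))  # → True
```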

Another issue is that two participants of the same namespace cannot immediately have separate sets of keys or resources. This could be alleviated by resorting to separate keystores for such use cases, but that does not seem ideal. However, as with ros1, I expect ros2 may expect users to serve only one node per absolute namespace, and to utilize topics as the primary means of overlap or redundancy.

Then again, when configured, ros1 was able to assign participants a unique node namespace through a combination of start-time date and PID number. It is not immediately clear how such anonymizing capabilities could be achieved here; perhaps such anonymization would also fundamentally work against our secure design principles.
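For reference, that ros1-style anonymous naming scheme amounts to something like the following sketch; the tension is that a name minted at launch time cannot match a keystore entry provisioned in advance:

```python
import os
import time

def anonymous_name(base):
    # Sketch of ros1-style anonymous naming: append the start time (ms)
    # and PID so concurrent instances of the same node get distinct names.
    return "%s_%d_%d" % (base, int(time.time() * 1000), os.getpid())

print(anonymous_name("listener"))
```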

Implementation agnostic

One goal I'd like for this effort is to be agnostic to the exact secure transport, such that if ros2 were to support, in addition to Secure DDS, a future transport such as gRPC or dps-for-iot, a new sros2 plugin could be written and invoked under the hood to handle all the secret and token handling for whatever may come.

Obviously this is quite ambitious, and perhaps the whole security ecosystem for such future transports will be drastically different. However, given ros2's classic pub/sub approach, the roles between subjects and objects in an Access Control Matrix, and the longstanding duality of public key cryptography, my hope is that much of this design will sufficiently generalize and translate naturally.
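One way such transport agnosticism could take shape is a small plugin interface that each backend implements against the common keystore layout. Everything below is hypothetical (class and method names are illustrative, not an existing sros2 API):

```python
from abc import ABC, abstractmethod

class SecureTransportPlugin(ABC):
    # Hypothetical plugin interface: each transport backend (Secure DDS
    # today; perhaps gRPC or dps-for-iot later) generates its own artifacts
    # for a participant namespace within the shared keystore layout.
    @abstractmethod
    def create_identity(self, namespace):
        """Generate the key/cert material for a participant entry."""

    @abstractmethod
    def sign_permissions(self, namespace):
        """Produce the signed permissions artifact for a participant entry."""

class SecureDDSPlugin(SecureTransportPlugin):
    def create_identity(self, namespace):
        return f"{namespace}: generated key.pem / cert.pem"

    def sign_permissions(self, namespace):
        return f"{namespace}: wrote permissions.p7s"

plugin = SecureDDSPlugin()
print(plugin.sign_permissions("/namespace/here/talker"))
```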

mikaelarguedas commented 6 years ago

Thanks @ruffsl for putting that proposal together! The approach sounds good to me as long as we keep an abstraction of the storage mechanism to be able to plug in different flavors of key storage solutions in the future.

Quick remark based on offline discussion:

Note: If you start pounding on the xml generation, please make sure to base your work on the latest xml generation that has recently been updated to match the spec more closely and seems to work for latest RTI Connext 5.3.0 and TwinOaks CoreDX.
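For orientation, the permissions.xml files in the proposed layout follow the OMG DDS Security permissions document schema; a hand-written, illustrative grant looks roughly like the fragment below (names, dates, and topic are made up, and element details should be checked against the spec and the updated generation code mentioned above):

```xml
<dds>
  <permissions>
    <grant name="talker_grant">
      <subject_name>CN=/namespace/here/talker</subject_name>
      <validity>
        <not_before>2017-01-01T00:00:00</not_before>
        <not_after>2027-01-01T00:00:00</not_after>
      </validity>
      <allow_rule>
        <domains><id>0</id></domains>
        <publish>
          <topics><topic>chatter</topic></topics>
        </publish>
      </allow_rule>
      <default>DENY</default>
    </grant>
  </permissions>
</dds>
```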

It would also be interesting to detail in this issue the format used to define the various configuration files.