NLnetLabs / krill

RPKI Certificate Authority and Publication Server written in Rust
https://nlnetlabs.nl/projects/routing/krill/
Mozilla Public License 2.0

Manual PKCS#11 signer testing with AWS CloudHSM #556

Closed ximon18 closed 2 years ago

ximon18 commented 2 years ago

Note: In the shell command logs below, commands are prefixed with [N]> to indicate which of several shell sessions the command was, or should be, issued in.

Tested using Krill commit 5b444bf (the current head of the hsm branch) and AWS Cloud HSM on Ubuntu 21.10.

Prepare to use AWS Cloud HSM:

  1. Login to the AWS Console.
  2. Use the VPC service Launch VPC Wizard and create a "VPC with single public subnet" using the default settings.
  3. Use the EC2 service Launch Instance button to create a t2.micro Ubuntu Server 18.04 LTS (HVM) SSD 64-bit (x86) EC2 instance in the newly created VPC.
    • Auto-assign public IP should be enabled.
    • Security group should permit SSH (port 22) access from your developer machine.
  4. Use the CloudHSM service Create cluster button to create an HSM cluster in the newly created VPC. Go and get a :coffee: as this step takes a few minutes. Wait until the cluster enters the Uninitialized state.
  5. Click the uninitialized cluster ID to view the cluster's "General configuration" page.
  6. Back in the EC2 service select your EC2 instance and use the Action -> Change security groups button and add the security group ID of the uninitialized CloudHSM instance. Remember to click Save.
  7. Back on the CloudHSM "General configuration" page, click the "Initialize" button to create an HSM in the cluster. Go and get another cup of :coffee:...
  8. Press the "Cluster CSR" button to download the CSR file.

At a terminal, create a CA certificate and use it to sign the cluster CSR:

[1]> openssl genrsa -aes256 -out customerCA.key 2048
[1]> openssl req -new -x509 -days 3652 -key customerCA.key -out customerCA.crt
[1]> openssl x509 -req -days 3652 -in <cluster ID>_ClusterCsr.csr \
                              -CA customerCA.crt \
                              -CAkey customerCA.key \
                              -CAcreateserial \
                              -out <cluster ID>_CustomerHsmCertificate.crt
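
Before uploading, you can optionally verify that the newly signed HSM certificate chains to the CA (a quick sanity check):

[1]> openssl verify -CAfile customerCA.crt <cluster ID>_CustomerHsmCertificate.crt
<cluster ID>_CustomerHsmCertificate.crt: OK
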
  9. Click Next in the CloudHSM console and upload the newly generated certificate files.
  10. The HSM now enters the "Active" state. Note the HSM IP address; you'll need it below.

Next we SSH to the EC2 instance in the VPC and install the CloudHSM software and Krill.

[1]> ssh-add /path/to/your/AWS/EC2.pem
[1]> scp customerCA.crt ubuntu@<ip of your EC2 instance>:
[1]> ssh ubuntu@<ip of your EC2 instance>
$ sudo apt update && sudo apt upgrade -y
$ wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/Bionic/cloudhsm-client_latest_u18.04_amd64.deb
$ sudo apt install -y ./cloudhsm-client_latest_u18.04_amd64.deb
$ sudo /opt/cloudhsm/bin/configure -a <CLOUD_HSM_IP_ADDRESS>
Updating server config in /opt/cloudhsm/etc/cloudhsm_client.cfg
Updating server config in /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
$ sudo mv customerCA.crt /opt/cloudhsm/etc/customerCA.crt
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
Ignoring E2E enable flag in the configuration file

Connecting to the server(s), it may take time
depending on the server(s) load, please wait...

Connecting to server '10.0.0.173': hostname '10.0.0.173', port 2225...
Connected to server '10.0.0.173': hostname '10.0.0.173', port 2225.
E2E enabled on server 0(10.0.0.173)
aws-cloudhsm>listUsers
Users on server 0(10.0.0.173):
Number of users found:2

    User Id   User Type   User Name   MofnPubKey   LoginFailureCnt   2FA
          1   PRECO       admin       NO           0                 NO
          2   AU          app_user    NO           0                 NO

aws-cloudhsm>loginHSM PRECO admin password
loginHSM success on server 0(10.0.0.173)
aws-cloudhsm>changePswd PRECO admin <SOME ADMIN PASSWORD>
*************************CAUTION********************************
This is a CRITICAL operation, should be done on all nodes in the
cluster. AWS does NOT synchronize these changes automatically with the 
nodes on which this operation is not executed or failed, please 
ensure this operation is executed on all nodes in the cluster.  
****************************************************************

Do you want to continue(y/n)?y
Changing password for admin(PRECO) on 1 nodes
changePswd success on server 0(10.0.0.173)
aws-cloudhsm>logout
logoutHSM success on server 0(10.0.0.173)

aws-cloudhsm>loginHSM CO admin <SOME ADMIN PASSWORD>
loginHSM success on server 0(10.0.0.173)
aws-cloudhsm>createUser CU krill <SOME USER PASSWORD>
*************************CAUTION********************************
This is a CRITICAL operation, should be done on all nodes in the
cluster. AWS does NOT synchronize these changes automatically with the 
nodes on which this operation is not executed or failed, please 
ensure this operation is executed on all nodes in the cluster.  
****************************************************************

Do you want to continue(y/n)?y
Creating User krill(CU) on 1 nodes
createUser success on server 0(10.0.0.173)
aws-cloudhsm>quit

disconnecting from servers, please wait...

Now disable the CloudHSM key availability check (which requires keys to be replicated across multiple HSMs in the cluster), otherwise the C_SignInit PKCS#11 function will fail with error CKR_FUNCTION_FAILED:

$ sudo /opt/cloudhsm/bin/configure-pkcs11 --disable-key-availability-check

Now install the AWS PKCS#11 library:

$ wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/Bionic/cloudhsm-pkcs11_latest_u18.04_amd64.deb
$ sudo apt install ./cloudhsm-pkcs11_latest_u18.04_amd64.deb
$ sudo /opt/cloudhsm/bin/configure-pkcs11 --hsm-ca-cert /opt/cloudhsm/etc/customerCA.crt
$ sudo /opt/cloudhsm/bin/configure-pkcs11 -a <CLOUD_HSM_IP_ADDRESS>

Let's use the keyls tool to verify that we can connect to the HSM and that it is empty:

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ source $HOME/.cargo/env
$ sudo apt install -y build-essential pkg-config libssl-dev openssl
$ cargo install --git https://github.com/ximon18/keyls --branch main --locked
$ keyls 'pkcs11:2305843009213693953:krill:<SOME USER PASSWORD>@/opt/cloudhsm/lib/libcloudhsm_pkcs11.so'
Using PKCS#11 slot id 2305843009213693953 (0x2000000000000001)
No keys found

The AWS PKCS#11 library by default outputs a lot of logging which I've stripped from the output above.

A special note about 2305843009213693953: this is the integer equivalent of the 0x2000000000000001 slot ID, as the keyls tool doesn't currently support hex slot IDs. The slot ID was found using the pkcs11-tool from the opensc package, as shown below.

Inspect the token using pkcs11-tool:

$ sudo apt install -y opensc
$ pkcs11-tool --module /opt/cloudhsm/lib/libcloudhsm_pkcs11.so -I
Cryptoki version 2.40
Manufacturer     CloudHSM
Library          CloudHSM PKCS#11 (ver 5.2)
Using slot 0 with a present token (0x2000000000000001)
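
The slots can also be listed explicitly, and the decimal form of the slot ID recovered from the hex form with the shell's printf (which accepts hex input):

$ pkcs11-tool --module /opt/cloudhsm/lib/libcloudhsm_pkcs11.so --list-slots
$ printf '%d\n' 0x2000000000000001
2305843009213693953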

Krill was installed like so:

$ cargo install --git https://github.com/NLnetLabs/krill --rev 5b444bf --locked --features hsm

The Krill config file looked like this:

$ cat /tmp/krill.conf
admin_token = "abc"
data_dir = "/tmp/krill"
service_uri = "https://localhost:3000/"
log_level = "trace"
log_type = "stderr"

[[signers]]
type = "PKCS#11"
name = "AWS Cloud HSM"
lib_path = "/opt/cloudhsm/lib/libcloudhsm_pkcs11.so"
slot = 0x2000000000000001
user_pin = "krill:<SOME USER PASSWORD>"

Run Krill:

$ krill -c /tmp/krill.conf

Krill configures itself to use the PKCS#11 signer and a fallback OpenSSL one-off signer:

[INFO] ... Configuring signer 'AWS Cloud HSM' (type: PKCS#11, default: true, one_off: false)
[INFO] ... Configuring signer 'OpenSSL one-off signer' (type: OpenSSL, default: false, one_off: true)

In another terminal use krillc to create a CA:

[2]> ssh ubuntu@<ip of your EC2 instance>
$ export KRILL_CLI_TOKEN=abc
$ krillc add --ca some_ca

On creation of a CA, Krill attempts to initialize the signers and contact the PKCS#11 token:

[TRACE] ... Loading PKCS#11 library '"/opt/cloudhsm/lib/libcloudhsm_pkcs11.so"'
[TRACE] ... Loaded PKCS#11 library '"/opt/cloudhsm/lib/libcloudhsm_pkcs11.so"'
...
[INFO] ... Using PKCS#11 token 'hsm1 (model: NITROX-III CNN35, vendor: Marvell Semiconductors, Inc.)' in slot 2305843009213693953 of server 'CloudHSM (Cryptoki v5.2)' via library '/opt/cloudhsm/lib/libcloudhsm_pkcs11.so'
...
[INFO] ... Signer 'AWS Cloud HSM' is ready for use
[INFO] ... Signer 'OpenSSL one-off signer' is ready for use

Prepare another terminal for issuing commands to a testbed instance of Krill, in this case the NLnet Labs public testbed:

[3]> ssh ubuntu@<ip of your EC2 instance>
$ export KRILL_CLI_SERVER=https://testbed.rpki.nlnetlabs.nl/ 
$ export KRILL_CLI_TOKEN=********

Using TWO DIFFERENT TERMINALS, register Krill with the NLnet Labs public testbed as a publisher:

[2]> $ krillc repo request --ca some_ca > /tmp/req.xml
[3]> $ krillc pubserver publishers add --request /tmp/req.xml >/tmp/res.xml
[2]> $ krillc repo configure --ca some_ca --response /tmp/res.xml

Using TWO DIFFERENT TERMINALS, register Krill as a child CA under the testbed:

[2]> $ krillc parents request --ca some_ca > /tmp/req2.xml
[3]> $ krillc children add --ca testbed --asn 18 --ipv4 10.0.0.0/24 --child some_ca --request /tmp/req2.xml >/tmp/res2.xml
[2]> $ krillc parents add --ca some_ca --response /tmp/res2.xml --parent testbed

Finally, create a ROA:

[2]> $ krillc roas update --ca some_ca --add "10.0.0.1/32 => 18"

There will now be three key pairs stored in the CloudHSM:

[2]> $ keyls 'pkcs11:2305843009213693953:krill:<SOME USER PASSWORD>@/opt/cloudhsm/lib/libcloudhsm_pkcs11.so'
Using PKCS#11 slot id 2305843009213693953 (0x2000000000000001)
Found 6 keys
+------------------------------------------+-------------+-------+-----------+--------+
| ID                                       | Type        | Name  | Algorithm | Length |
+------------------------------------------+-------------+-------+-----------+--------+
| 527BC53ABF517DB7915EB3ACA6504277813749CA | Private Key | Krill | RSA       | 2048   |
| 527BC53ABF517DB7915EB3ACA6504277813749CA | Public Key  | Krill | RSA       | 2048   |
| 662310366A8DE504D124D3A7216FEE5B3614AD1C | Private Key | Krill | RSA       | 2048   |
| 662310366A8DE504D124D3A7216FEE5B3614AD1C | Public Key  | Krill | RSA       | 2048   |
| 845DB89B99FF92DEBE2971A8670DD6CA91AF9A42 | Private Key | Krill | RSA       | 2048   |
| 845DB89B99FF92DEBE2971A8670DD6CA91AF9A42 | Public Key  | Krill | RSA       | 2048   |
+------------------------------------------+-------------+-------+-----------+--------+

There will now be two key identifiers mapped for the PKCS#11 signer and none for the OpenSSL signer:

[2]> $ jq '.signer_name, .signer_identity.private_key_internal_id, .keys' /tmp/krill/signers/894db307-a599-485f-bbad-2ca0768de1cb/snapshot.json 
"AWS Cloud HSM"
"527bc53abf517db7915eb3aca6504277813749ca"
{
  "604E3AFB1EC380BE78DEF44726EC0496469F732E": "845db89b99ff92debe2971a8670dd6ca91af9a42",
  "128CC82916509896A2B7331E6373891C55383BFD": "662310366a8de504d124d3a7216fee5b3614ad1c"
}
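
As a quick cross-check, the mapped internal IDs can be compared against the key IDs reported by keyls above; they differ only in letter case:

[2]> $ jq -r '.keys[]' /tmp/krill/signers/894db307-a599-485f-bbad-2ca0768de1cb/snapshot.json | tr 'a-f' 'A-F' | sort
662310366A8DE504D124D3A7216FEE5B3614AD1C
845DB89B99FF92DEBE2971A8670DD6CA91AF9A42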

[2]> $ jq '.signer_name, .signer_identity.private_key_internal_id, .keys' /tmp/krill/signers/f9016bbe-d8fb-450e-8432-415d9363dce2/snapshot.json 
"OpenSSL one-off signer"
"8A5CBEF65644D82BFE14ACB075BE2174E7E665CF"
{}

And the OpenSSL keys directory contains only the identity key that Krill created for it:

[2]> $ ls -1 /tmp/krill/keys/
8A5CBEF65644D82BFE14ACB075BE2174E7E665CF

Finally, clean up the testbed:

[3]> krillc children remove --ca testbed --child some_ca
[3]> krillc pubserver publishers remove -p some_ca
ximon18 commented 2 years ago

v0.10.0-rc2 testing

Environment:

An AWS EC2 instance with:

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"

$ uname -r
5.4.0-1083-aws

The AWS CloudHSM was setup as above.

Preparation:

Set up the VM to install proposed DEB packages from https://packages.nlnetlabs.nl/ per https://krill.docs.nlnetlabs.nl/en/stable/install-and-run.html#installing-specific-versions; a rough sketch of those steps follows.
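
At the time of testing this looked roughly like the following (a sketch only; check the linked documentation for the exact key URL and -proposed suite name):

[1]> sudo apt install -y curl gnupg lsb-release
[1]> curl -fsSL https://packages.nlnetlabs.nl/aptkey.asc | sudo gpg --dearmor -o /usr/share/keyrings/nlnetlabs-archive-keyring.gpg
[1]> echo "deb [arch=amd64 signed-by=/usr/share/keyrings/nlnetlabs-archive-keyring.gpg] https://packages.nlnetlabs.nl/linux/ubuntu $(lsb_release -cs)-proposed main" | sudo tee /etc/apt/sources.list.d/nlnetlabs.list
[1]> sudo apt update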

Install the keyls helper tool and prepare our krill.conf:

[1]> cargo install --git https://github.com/ximon18/keyls --branch main --locked

[1]> keyls 'pkcs11:2305843009213693953:krill:password@/opt/cloudhsm/lib/libcloudhsm_pkcs11.so'
Using PKCS#11 slot id 2305843009213693953 (0x2000000000000001)
No keys found

[1]> cat /tmp/krill.conf
admin_token = "abc"
data_dir = "/tmp/krill"
service_uri = "https://localhost:3000/"
log_level = "trace"
log_type = "stderr"

[[signers]]
type = "PKCS#11"
name = "AWS Cloud HSM"
lib_path = "/opt/cloudhsm/lib/libcloudhsm_pkcs11.so"
slot = 0x2000000000000001
user_pin = "krill:password"

Also prepare two additional terminals for communicating with the local Krill and the external testbed:

In another terminal [2]:

[2]> $ export KRILL_CLI_TOKEN=abc

In yet another terminal [3] prepare to manage the testbed side of the setup:

[3]> export KRILL_CLI_SERVER=https://testbed.krill.cloud/ 
[3]> export KRILL_CLI_TOKEN=********

Testing

Install and run Krill:

[1]> sudo apt install -y krill

[1]> krill --version
Krill 0.10.0-rc2

[1]> krill -c /tmp/krill.conf

In terminal [2] add a CA to Krill:

[2]> krillc add --ca some_ca

Using TWO DIFFERENT TERMINALS, register Krill with the NLnet Labs public testbed as a publisher:

[2]> krillc repo request --ca some_ca > /tmp/req.xml
[3]> krillc pubserver publishers add --request /tmp/req.xml >/tmp/res.xml
[2]> krillc repo configure --ca some_ca --response /tmp/res.xml

Using TWO DIFFERENT TERMINALS, register Krill as a child CA under the testbed:

[2]> krillc parents request --ca some_ca > /tmp/req2.xml
[3]> krillc children add --ca testbed --asn 18 --ipv4 10.0.0.0/24 --child some_ca --request /tmp/req2.xml >/tmp/res2.xml
[2]> krillc parents add --ca some_ca --response /tmp/res2.xml --parent testbed

NOTE: At the [3]> krillc children add step above, I hit issue https://github.com/NLnetLabs/krill/issues/868, a problem with the testbed running v0.10.0-rc2. To work around this, I invoked the HTTP API directly like so:

[3]> wget -qO- --header="Authorization: Bearer ${KRILL_CLI_TOKEN}" https://testbed.krill.cloud/api/v1/cas/testbed/children/some_ca/parent_response.xml >/tmp/res2.xml

Finally, create a ROA:

[2]> krillc roas update --ca some_ca --add "10.0.0.1/32 => 18"
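
To confirm the ROA was created (assuming the usual krillc roas subcommand):

[2]> krillc roas list --ca some_ca
10.0.0.1/32 => 18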

And look at which keys we have now:

[2]>  keyls 'pkcs11:2305843009213693953:krill:password@/opt/cloudhsm/lib/libcloudhsm_pkcs11.so'
Using PKCS#11 slot id 2305843009213693953 (0x2000000000000001)
Found 6 keys
+------------------------------------------+-------------+-------+-----------+--------+
| ID                                       | Type        | Name  | Algorithm | Length |
+------------------------------------------+-------------+-------+-----------+--------+
| 06933ABADC3138D24291CEB37E439F94FF3AA898 | Private Key | Krill | RSA       | 2048   |
| 06933ABADC3138D24291CEB37E439F94FF3AA898 | Public Key  | Krill | RSA       | 2048   |
| 23DB2ABCF5C275DCB715A02AD989398EDE41F668 | Private Key | Krill | RSA       | 2048   |
| 23DB2ABCF5C275DCB715A02AD989398EDE41F668 | Public Key  | Krill | RSA       | 2048   |
| BB5283F1FE648A5B5120D6C25C6F71640600C0A9 | Private Key | Krill | RSA       | 2048   |
| BB5283F1FE648A5B5120D6C25C6F71640600C0A9 | Public Key  | Krill | RSA       | 2048   |
+------------------------------------------+-------------+-------+-----------+--------+

:champagne: :+1:

Cleanup

Finally, clean up the testbed:

[3]> krillc children remove --ca testbed --child some_ca
[3]> krillc pubserver publishers remove -p some_ca

Checking Krill's internals

And here are the signer store files on disk:

[2]> cat /tmp/krill/signers/02507cd0-75bf-42ec-a43f-9e9ac9471447/snapshot.json

{
  "id": "02507cd0-75bf-42ec-a43f-9e9ac9471447",
  "version": 3,
  "signer_name": "AWS Cloud HSM",
  "signer_info": "PKCS#11 Signer [token: hsm1 (model: NITROX-III CNN35, vendor: Marvell Semiconductors, Inc.), slot: 2305843009213693953, server: CloudHSM (Cryptoki v5.5), library: /opt/cloudhsm/lib/libcloudhsm_pkcs11.so]",
  "signer_identity": {
    "public_key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAh19AzWAqol5uxdY+1JnIMTjbRu7bmGJTGS14qPMHm23PBZGuEDPfxLQfjKrsRVv5eYgHf/KVkdvSInojWlXz/Z9JkAJaaj1DdSTq522hYGSBaCbllp4Y52i4vUqBbbsdlPtJgmvfbEZ1O/RpPdngv9ptC3omAam0uuUvSe7lh4+gJuEhPcjRhgGjneUFxBAp8GjHqjH8/oiIcrhkqb4GHSasvnLynLwQioNSDbLHZPy+5JpMk74/tSNwDu8In/S8hzs45e9ltEpvbxr7d85I1GLftiEFesZ0cM5xe0fcZD8QAOl/miWxsyNkFTnAn+4fNkIUXrwzXUWxDqu2e94lRwIDAQAB",
    "private_key_internal_id": "bb5283f1fe648a5b5120d6c25c6f71640600c0a9"
  },
  "keys": {
    "29B6D0589D702A8CEF67D72F90A5EB4673D56FB7": "06933abadc3138d24291ceb37e439f94ff3aa898",
    "65AB52E8A1996265EE351F88D7AB1D7A0F3A50D1": "23db2abcf5c275dcb715a02ad989398ede41f668"
  }
}

[2]> cat /tmp/krill/signers/512dabb5-085d-4ba7-a9f5-5e54a0c8842e/snapshot.json

{
  "id": "1d10c562-3bb7-4741-88bd-1a91c850b862",
  "version": 1,
  "signer_name": "OpenSSL one-off signer",
  "signer_info": "OpenSSL Soft Signer [version: OpenSSL 1.1.1  11 Sep 2018, keys dir: /tmp/krill/keys]",
  "signer_identity": {
    "public_key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsqKaPai9VFstv5A02nJxSQ5c6H3uvKrPBoTuVe2tm7Df5ccEyzvzP1YldDkL9QiEPZktWdGDqvzvQw8T4iWSqLkQUE1hmhoCPnnrXMcggASw+NmZ9Q0KLTf1G3V7Ryug9LojNKFOzQ8lYlrKdzRHAkjeFydcz+WBBUORPb6/c+9qEhYczFMuPQ8QfHyol27se6ppTr02uN3VXNtwyxwnDLRnk+v3jr8q+dTWT3lqaBQSPD9UfZM3AXj+mOY26Fz9ROd3wMykT9f8c+3vTpAc7+H3cKjyVBMMQURhMzbjA9zJU/F8qvKlH3/eTLZyOMPwyC/B7s4BMh1IHepJQDyh/QIDAQAB",
    "private_key_internal_id": "B9114780849FBE24C8040E15C0B4848446C17B8A"
  },
  "keys": {}
}
ximon18 commented 2 years ago

HSM support was delivered with the Krill v0.10.0 release.