Closed: shankari closed this issue 3 months ago.
First decision: Do we try for profile 1 or just go ahead with profile 2? Given the time left, I am going to try directly for profile 2, but will fall back to profile 1 if I run into glitches.
The fork () has several hardcoded certificates, which are the ones passed in to the maeve command line.
- "--tls-server-cert"
- "/certificates/csms.pem"
- "--tls-server-key"
- "/certificates/csms.key"
- "--tls-trust-cert"
- "/certificates/trust.pem"
Compared to the list of certificates created by the Makefile, from (cd config/certificates && make), which in turn calls the [get-ca-cert.sh](https://github.com/thoughtworks/maeve-csms/blob/main/scripts/get-ca-cert.sh) script, we have:
It certainly seems like the checked-in cpo_sub_ca2 inherits directly from the root:
$ openssl verify -show_chain -CAfile config/certificates/MORootCACert.pem config/certificates/cpo_sub_ca2.pem
config/certificates/cpo_sub_ca2.pem: OK
Chain:
depth=0: C = US, O = EV Charging PKI, OU = TEST MO Sub-CA, CN = P-256 TEST Tier 1 MO Sub-CA (untrusted)
depth=1: C = US, O = EV Charging PKI, DC = EVCPKI, OU = TEST Root CA, CN = P-256 TEST Root CA
So I am not sure why it is not just named cpo_sub_ca1. As an aside, I wonder if the PKI testing event should include an intermediate certificate just to make things more complex.
Moving along, I see a new start-maeve.sh. This essentially runs the same docker-compose as always, but has a couple of other lines to register tokens. However, there is no command to register a station with a client certificate. In the spirit of moving from working to working, let's see if this starts properly and whether we can add the station at the basic auth level.
Checking EVerest, it is consistent with OCPP and not with MaEVe. We might want to file an issue against MaEVe to make it consistent with the OCPP standard. https://github.com/EVerest/EVerest/blob/4ae3884d65a5a0011bb26a59a0b59c18e83c79f6/docs/tutorials/how_to_ocpp/index.rst#L189
The same 201 configuration that we used earlier has the securityProfile. If we just change this to wss:// and securityProfile: 2, will it Just Work with the CSMS configuration?
"Actual": "[{\"configurationSlot\": 1, \"connectionData\": {\"messageTimeout\": 30, \"ocppCsmsUrl\": \"ws://localhost:9000/cp001\", \"ocppInterface\": \"Wired0\", \"ocppTransport\": \"JSON\", \"ocppVersion\": \"OCPP20\", \"securityProfile\": 1}}]"
So it is in fact easier to try with OCPP SP 2 (MaEVe: 1) first, and we will do it.
~~The constant that we are looking for is TLS_WITH_CLIENT_SIDE_CERTIFICATES. Let's see how we handle certificates in that case.~~ That only seems to be used for the websocket connection.
It looks like this would be specified in the SecurityCtrl
Forked Maeve is finally up.
everest-demo-manager-1 | 2024-03-10 21:05:42.976901 [INFO] ocpp:OCPP201 :: OCPP client successfully connected to plain websocket server
everest-demo-manager-1 | 2024-03-10 21:05:42.987001 [INFO] ocpp:OCPP201 :: Received BootNotificationResponse: {
everest-demo-manager-1 | "currentTime": "2024-03-10T21:05:42.000Z",
everest-demo-manager-1 | "interval": 300,
everest-demo-manager-1 | "status": "Accepted"
everest-demo-manager-1 | }
everest-demo-manager-1 | with messageId: 3f4d42a2-d961-428d-be19-818f11b95857
curl http://localhost:9410/api/v0/cs/cp002 -H 'content-type: application/json' \
> -d '{"securityProfile": 1, "base64SHA256Password": "3oGi4B5I+Y9iEkYtL7xvuUxrvGOXM/X2LQrsCwf/knA="}'
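The base64SHA256Password field appears to be the base64 encoding of the SHA-256 digest of the station's basic-auth password. A minimal sketch of deriving it (the helper name and example password are mine, not from MaEVe):

```python
import base64
import hashlib

def b64_sha256_password(plaintext: str) -> str:
    """Hash the station password with SHA-256, then base64-encode the raw digest."""
    digest = hashlib.sha256(plaintext.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Made-up credential for illustration; the value registered above would come
# from the station's actual basic-auth password.
print(b64_sha256_password("DEADBEEFDEADBEEF"))
```

The resulting string is what gets POSTed to the /api/v0/cs/ endpoint, so the CSMS only ever stores the hash.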
Now let's change the connection to wss, the station to cp002, and the security profile to 2:
sqlite> update VARIABLE_ATTRIBUTE set "VALUE" = '[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "wss://host.docker.internal/ws/cp002", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 2}}]' where VARIABLE_ID == 19;
sqlite> select * from VARIABLE_ATTRIBUTE where VARIABLE_ID == 19;
19|19|2|1|0|0|[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "wss://host.docker.internal/ws/cp002", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 2}}]
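That hand-typed sqlite update can also be scripted; a sketch using Python's sqlite3 module (table and column names are taken from the session above; here I build a throwaway in-memory copy of the table just to show the update, since the real device-model database lives inside the container):

```python
import json
import sqlite3

# Stand-in for the real device-model database inside the container.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE VARIABLE_ATTRIBUTE (VARIABLE_ID INT, "VALUE" TEXT)')
conn.execute('INSERT INTO VARIABLE_ATTRIBUTE VALUES (19, NULL)')

# The NetworkConnectionProfile JSON from the session above.
network_profile = [{
    "configurationSlot": 1,
    "connectionData": {
        "messageTimeout": 30,
        "ocppCsmsUrl": "wss://host.docker.internal/ws/cp002",
        "ocppInterface": "Wired0",
        "ocppTransport": "JSON",
        "ocppVersion": "OCPP20",
        "securityProfile": 2,
    },
}]
conn.execute('UPDATE VARIABLE_ATTRIBUTE SET "VALUE" = ? WHERE VARIABLE_ID = 19',
             (json.dumps(network_profile),))
row = conn.execute(
    'SELECT "VALUE" FROM VARIABLE_ATTRIBUTE WHERE VARIABLE_ID = 19').fetchone()
print(row[0])
```

Serializing the profile with json.dumps avoids the quoting mistakes that are easy to make when pasting the JSON blob into the sqlite shell by hand.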
Now we try to make a CSMS connection, but fail because we don't have the CSMS root installed correctly. Ha! Figuring out code through log statements!
2024-03-10 21:27:31.928547 [INFO] ocpp:OCPP201 :: All EVSE ready. Starting OCPP2.0.1 service
2024-03-10 21:27:32.032225 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp002 with profile 2
2024-03-10 21:27:32.198246 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 21:27:32.139004 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-10 21:27:32.310501 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 21:27:32.425798 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
Checking the mailing list, we also get (from https://lists.lfenergy.org/g/everest/message/1255)
csms_leaf_cert_directory:
description: Directory where CSMS leaf certificates are stored. If relative will be prefixed with everest prefix + etc/everest/certs. Otherwise absolute file path is used.
type: string
default: client/csms
csms_leaf_key_directory:
description: Directory where CSMS private keys are stored. If relative will be prefixed with everest prefix + etc/everest/certs. Otherwise absolute file path is used.
type: string
default: client/csms
and
csms_ca_bundle:
description: Path to csms_ca_bundle file. If relative will be prefixed with everest prefix + etc/everest/certs. Otherwise absolute file path is used.
type: string
default: ca/v2g/V2G_ROOT_CA.pem
For the record, these are parameters to the EVSE security module https://github.com/EVerest/everest-core/blob/55416be6d11fc85bf1943ecb3c57b267f24ccdaf/modules/EvseSecurity/manifest.yaml#L37 (as one might have guessed) and would be overridden here in our demo: https://github.com/EVerest/everest-core/blob/55416be6d11fc85bf1943ecb3c57b267f24ccdaf/config/config-sil-ocpp201.yaml#L134 (like the private_key_password currently is)
For now, we are going to copy the files to the correct locations.
While copying the root V2G cert that is required, I wondered whether I should copy the root-V2G-cert.pem or the MORootCACert.pem, before discovering that they are, in fact, the same:
$ diff config/certificates/root-V2G-cert.pem config/certificates/MORootCACert.pem
(I originally found this by trying to verify the csms certificate against both roots and finding that it validated in both cases!!)
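A raw diff works here, but it can be fooled by line-wrapping or whitespace differences between otherwise identical certs. Comparing a digest of the decoded DER bytes is more robust; a sketch (the helper is mine, and the demo payload is synthetic, not a real certificate):

```python
import base64
import hashlib

def pem_fingerprint(pem_text: str) -> str:
    """SHA-256 over the DER bytes inside the first CERTIFICATE block, so
    differences in line wrapping or trailing whitespace don't matter."""
    inside = pem_text.split("-----BEGIN CERTIFICATE-----")[1]
    inside = inside.split("-----END CERTIFICATE-----")[0]
    der = base64.b64decode("".join(inside.split()))
    return hashlib.sha256(der).hexdigest()

# Against the files above, the check would be:
#   pem_fingerprint(open("config/certificates/root-V2G-cert.pem").read()) == \
#   pem_fingerprint(open("config/certificates/MORootCACert.pem").read())

# Demo with a synthetic payload: same bytes, different wrapping,
# identical fingerprint.
payload = base64.b64encode(b"not-a-real-certificate" * 4).decode()
one_line = f"-----BEGIN CERTIFICATE-----\n{payload}\n-----END CERTIFICATE-----\n"
wrapped = ("-----BEGIN CERTIFICATE-----\n"
           + "\n".join(payload[i:i + 16] for i in range(0, len(payload), 16))
           + "\n-----END CERTIFICATE-----\n")
print(pem_fingerprint(one_line) == pem_fingerprint(wrapped))
```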
$ openssl verify -show_chain -CAfile config/certificates/root-V2G-cert.pem -untrusted config/certificates/cpo_sub_ca2.pem config/certificates/csms.pem
config/certificates/csms.pem: OK
Chain:
depth=0: C = US, O = Sandia, OU = EV Department, CN = USEMAC00000004 (untrusted)
depth=1: C = US, O = EV Charging PKI, OU = TEST MO Sub-CA, CN = P-256 TEST Tier 1 MO Sub-CA (untrusted)
depth=2: C = US, O = EV Charging PKI, DC = EVCPKI, OU = TEST Root CA, CN = P-256 TEST Root CA
After manually copying the root CA:
$ docker cp ~/joet-everest/maeve-csms/config/certificates/root-V2G-cert.pem everest-demo-manager-1:/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
Successfully copied 3.58kB to everest-demo-manager-1:/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
The handshake is still failing. Given that we are at security profile 2 (TLS with basic auth), with the following diagram from the spec, I would assume that as long as we had the root certificate installed in the CSMS, we would not need the CSMS certificate to be preinstalled.
But I'm still getting the same failure
2024-03-10 22:07:54.385075 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp002 with profile 2
2024-03-10 22:07:54.535828 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 22:07:54.475198 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-10 22:07:54.648244 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 22:07:54.757161 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
even though the root cert is installed
# ls -al /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
-rw-r--r-- 1 19017117 dialout 814 Mar 10 18:47 /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
Maybe we have to copy in the entire bundle and not only the root? Do we also need to copy over the leaves in that case? Let's check the code...
From the code, it looks like there are multiple bundles, and several of them seem to have the same name:
ocpp::SecurityConfiguration secConfig;
secConfig.csms_ca_bundle = fs::path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem");
secConfig.mf_ca_bundle = fs::path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem");
secConfig.v2g_ca_bundle = fs::path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem");
secConfig.mo_ca_bundle = fs::path("/tmp/certs/ca/mo/MO_CA_BUNDLE.pem");
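The way several logical CA types can fan out to the same physical bundle file can be mirrored in a few lines. A Python sketch of the mapping (names mirror the C++ snippet above; this is an illustration of the structure, not the EVerest API):

```python
from pathlib import Path

# Mirrors the SecurityConfiguration fields above: CSMS, MF, and V2G all
# point at the same physical bundle file, while MO has its own.
ca_bundle_path_map = {
    "CSMS": Path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem"),
    "MF":   Path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem"),
    "V2G":  Path("/tmp/certs/ca/v2g/V2G_CA_BUNDLE.pem"),
    "MO":   Path("/tmp/certs/ca/mo/MO_CA_BUNDLE.pem"),
}

def bundle_for(certificate_type: str) -> Path:
    """Look up the CA bundle for a certificate type; raises KeyError for an
    unknown type, like the C++ ca_bundle_path_map.at() call in the log above."""
    return ca_bundle_path_map[certificate_type]

print(bundle_for("CSMS"))
```

The point of the sketch is that "requesting certificate file: CSMS" in the log is just a lookup into this kind of map, so a wrong or missing file at the mapped path fails only later, at handshake time.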
I found the mapping code but it looks like it has changed since the previous EVerest version.
X509CertificateBundle verify_file(this->ca_bundle_path_map.at(certificate_type), EncodingFormat::PEM);
EVLOG_debug << "Requesting certificate file: [" << conversions::ca_certificate_type_to_string(certificate_type)
<< "] file:" << verify_file.get_path();
But it looks like we do expect it to be a bundle, so let's copy over the bundle and see if it works.
Aha! It turns out that EVerest autogenerates CSMS keys if it doesn't find them
# openssl verify --show_chain /workspace/dist/etc/everest/certs/ca/csms/CPO_SUB_CA1.pem
CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G
error 20 at 0 depth lookup: unable to get local issuer certificate
error /workspace/dist/etc/everest/certs/ca/csms/CPO_SUB_CA1.pem: verification failed
Let's delete all the certs and retry
Deleting all of them and only recreating the V2G bundle...
Checking the generate_test_certs.sh, we need to:
$CA_V2G_PATH/V2G_CA_BUNDLE.pem
Let's get all of them in place and see if it works. We then need to ask the EVerest team to clean this up
I've tried the bundle and the root in various places and they all fail. Concretely, this is really confusing: this is supposed to be the bundle, but the default is set to ROOT_CA.pem. If we make this be the bundle, then where is the root for it to verify against?
csms_ca_bundle:
description: Path to csms_ca_bundle file. If relative will be prefixed with everest prefix + etc/everest/certs. Otherwise absolute file path is used.
type: string
default: ca/v2g/V2G_ROOT_CA.pem
Next steps:
Ok so now the container has all the certs needed for validation
# openssl verify --show_chain -CAfile /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem -untrusted /workspace/dist/etc/everest/certs/ca/csms/CPO_SUB_CA2.pem /workspace/dist/etc/everest/certs/ca/csms/csms.pem
/workspace/dist/etc/everest/certs/ca/csms/csms.pem: OK
Chain:
depth=0: C = US, O = Sandia, OU = EV Department, CN = USEMAC00000004 (untrusted)
depth=1: C = US, O = EV Charging PKI, OU = TEST MO Sub-CA, CN = P-256 TEST Tier 1 MO Sub-CA (untrusted)
depth=2: C = US, O = EV Charging PKI, DC = EVCPKI, OU = TEST Root CA, CN = P-256 TEST Root CA
Does it work?
Nope, failed
2024-03-10 23:20:52.072019 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp002 with profile 2
2024-03-10 23:20:52.230144 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 23:20:52.172485 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-10 23:20:52.393411 [INFO] evse_security:E :: Requesting certificate file: CSMS
2024-03-10 23:20:52.501315 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
Before I upgrade, I am going to try one more thing: Take the certs from EVerest and put them into MaEVe and see if we can get that to work. Then it is just certificate wrangling, after an update to make sure that everything works properly.
Checking the current version of the script, under /ext/cache/cpm/libevse-security/4f93b8e3adbd827ed9b21a75891dbe39c13101a2/libevse-security/tests, it generates keys directly via openssl - e.g.
openssl ecparam -genkey -name "$EC_CURVE" | openssl ec "$SYMMETRIC_CIPHER" -passout pass:"$password" -out "$CLIENT_CSO_PATH/SECC_LEAF.key"
openssl req -new -key "$CLIENT_CSO_PATH/SECC_LEAF.key" -passin pass:"$password" -config configs/seccLeafCert.cnf -out "$CSR_PATH/SECC_LEAF.csr"
openssl x509 -req -in "$CSR_PATH/SECC_LEAF.csr" -extfile configs/seccLeafCert.cnf -extensions ext -CA "$CA_CSMS_PATH/CPO_SUB_CA2.pem" -CAkey "$CLIENT_CSMS_PATH/CPO_SUB_CA2.key" -passin pass:"$password" -set_serial 12348 -days "$VALIDITY" -out "$CLIENT_CSO_PATH/SECC_LEAF.pem"
And then the SECC key is copied over as the CSMS key
cp "$CLIENT_CSO_PATH/SECC_LEAF.key" "$CLIENT_CSMS_PATH/CSMS_LEAF.key"
# assume CSO and CSMS are same authority
cp -r $CA_CSMS_PATH/* $CA_CSO_PATH
cp "$CLIENT_CSO_PATH/SECC_LEAF.pem" "$CLIENT_CSMS_PATH/CSMS_LEAF.pem"
But we don't see the CSMS files although the SECC files exist
a8a8b1400059:/workspace/dist/etc/everest/certs# find . -name \*CSMS\*
a8a8b1400059:/workspace/dist/etc/everest/certs# find . -name \*SECC\*
./client/cso/SECC_LEAF.pem
./client/cso/SECC_LEAF.key
./client/cso/SECC_LEAF_PASSWORD.txt
./client/cso/SECC_LEAF.der
Rerunning the script after tweaking the path, we get
# find . -name \*CSMS\*
./client/csms/CSMS_LEAF.key
./client/csms/CSMS_LEAF.pem
./client/invalid/INVALID_CSMS.pem
./client/invalid/INVALID_CSMS.key
And the cert chain is valid:
# openssl verify -show_chain -CAfile ca/v2g/V2G_ROOT_CA.pem --untrusted ca/csms/CPO_SUB_CA1.pem --untrusted ca/csms/CPO_SUB_CA2.pem client/csms/CSMS_LEAF.pem
client/csms/CSMS_LEAF.pem: OK
Chain:
depth=0: CN = SECCCert, O = EVerest, C = DE, DC = CPO (untrusted)
depth=1: CN = CPOSubCA2, O = EVerest, C = DE, DC = V2G (untrusted)
depth=2: CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G (untrusted)
depth=3: CN = V2GRootCA, O = EVerest, C = DE, DC = V2G
Let's copy this over to the CSMS and see if we can get it to work.
Note also a d'oh moment, in that the path for the CSMS pem is client/csms and not ca/csms.
Copied over, the new chain in the CSMS is now valid
$ openssl verify -show_chain -CAfile config/certificates/root-V2G-cert.pem -untrusted config/certificates/cpo_sub_ca1.pem -untrusted config/certificates/cpo_sub_ca2.pem config/certificates/csms.pem
config/certificates/csms.pem: OK
Chain:
depth=0: CN = SECCCert, O = EVerest, C = DE, DC = CPO (untrusted)
depth=1: CN = CPOSubCA2, O = EVerest, C = DE, DC = V2G (untrusted)
depth=2: CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G (untrusted)
depth=3: CN = V2GRootCA, O = EVerest, C = DE, DC = V2G
The MaEVe certificate generation has the following
cat "${script_dir}"/../config/certificates/cpo_sub_ca1.pem "${script_dir}"/../config/certificates/cpo_sub_ca2.pem > "${script_dir}"/../config/certificates/trust.pem
But the trust.pem that is checked into the fork is not a concatenation but directly derived from the root
openssl verify -show_chain -CAfile config/certificates/MORootCACert.pem config/certificates/trust.pem
config/certificates/trust.pem: OK
Chain:
depth=0: C = US, O = EV Charging PKI, OU = TEST MO Sub-CA, CN = P-256 TEST Tier 1 MO Sub-CA (untrusted)
depth=1: C = US, O = EV Charging PKI, DC = EVCPKI, OU = TEST Root CA, CN = P-256 TEST Root CA
Since we have replaced the certs, let's replace the trust as well.
$ cat config/certificates/cpo_sub_ca1.pem config/certificates/cpo_sub_ca2.pem > config/certificates/trust.pem
$ openssl verify -show_chain -CAfile config/certificates/root-V2G-cert.pem -untrusted config/certificates/trust.pem config/certificates/csms.pem
config/certificates/csms.pem: OK
Chain:
depth=0: CN = SECCCert, O = EVerest, C = DE, DC = CPO (untrusted)
depth=1: CN = CPOSubCA2, O = EVerest, C = DE, DC = V2G (untrusted)
depth=2: CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G (untrusted)
depth=3: CN = V2GRootCA, O = EVerest, C = DE, DC = V2G
Now, let's restart everything!
I'm still getting the same error. Aha! The error is in MaEVe and was masked because we started with -d.
$ docker logs maeve-csms-gateway-1
Error: processing tls key pair: tls: failed to parse private key
Usage:
All the keys are valid, but they are EC (not RSA) and have a password:
openssl ec -in config/certificates/cpo_sub_ca1.key
read EC key
Enter pass phrase for config/certificates/cpo_sub_ca1.key:
writing EC key
-----BEGIN EC PRIVATE KEY-----
...
-----END EC PRIVATE KEY-----
This is thrown from
tlsCert, err := tls.X509KeyPair(cb, kb)
if err != nil {
return fmt.Errorf("processing tls key pair: %v", err)
}
Verified, by testing with the sample code in the Go x509 package, that the cpo_sub_ca1 keypair generates the same error.
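This is consistent with Go's tls.X509KeyPair not decrypting passphrase-protected PEM keys: it needs the key material in the clear. A quick way to spot such keys before handing them to the gateway is to check for the encrypted-PEM markers; a Python sketch (a heuristic of mine covering the two common marker styles, not an exhaustive parser):

```python
def pem_key_is_encrypted(pem_text: str) -> bool:
    """Heuristic: legacy OpenSSL keys carry a 'Proc-Type: 4,ENCRYPTED' header;
    PKCS#8 encrypted keys use a distinct BEGIN marker."""
    return ("Proc-Type: 4,ENCRYPTED" in pem_text
            or "-----BEGIN ENCRYPTED PRIVATE KEY-----" in pem_text)

legacy_encrypted = """-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,0102030405060708090A0B0C0D0E0F10

...
-----END EC PRIVATE KEY-----"""

plain = """-----BEGIN EC PRIVATE KEY-----
...
-----END EC PRIVATE KEY-----"""

print(pem_key_is_encrypted(legacy_encrypted), pem_key_is_encrypted(plain))
```

The fix on the MaEVe side would then be to strip the passphrase first, e.g. openssl ec -in cpo_sub_ca1.key -out cpo_sub_ca1_plain.key (which prompts for the passphrase and writes an unencrypted key).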
Wait, was this the error all along? Do the certs checked in to the fork actually work?! csms.key and csms.pem work:
https://github.com/EVerest/everest-demo/assets/2423263/da315b6e-ab50-4239-b943-00210123c13b
BUT cpo_sub_ca2.pem and cpo_sub_ca2.key (as checked in right now) DO NOT WORK. @sahabulh @crr-snl for visibility. It complains that the private key does not match the public key.
https://github.com/EVerest/everest-demo/assets/2423263/77052d17-8f01-4529-ba0c-283e317c9562
However, the CSMS keypair is still fine, so the gateway starts up properly from the fork.
$ git status
On branch current_fork
nothing to commit, working tree clean
maeve-csms-manager-1 | time=2024-03-11T01:24:37.283Z level=INFO msg="checking for pending charge station certificates changes"
maeve-csms-manager-1 | time=2024-03-11T01:24:37.283Z level=INFO msg="checking for pending charge station settings changes"
maeve-csms-manager-1 | time=2024-03-11T01:24:37.283Z level=INFO msg="sync triggers" duration=565.422794ms sync.trigger.previous="" sync.trigger.count=0
maeve-csms-gateway-1 | 2024/03/11 01:48:41 http: TLS handshake error from 192.168.176.4:39520: local error: tls: bad record MAC
tls: bad record MAC
just means that the incoming packet is corrupted in some way. Potential fixes include:
This is where the python version would be helpful - it would allow us to have an inspectable working version which we can then use to compare against the non-working versions. I should also be able to inspect the network, but I am not sure what I can see with an encrypted connection.
Tried turning off the load balancer:
curl http://localhost:9410/api/v0/cs/cp001 -H 'content-type: application/json' -d '{"securityProfile": 0, "base64SHA256Password": "3oGi4B5I+Y9iEkYtL7xvuUxrvGOXM/X2LQrsCwf/knA="}'
ws://host.docker.internal:9310/ws/cp001
update VARIABLE_ATTRIBUTE set "VALUE" = '[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "ws://host.docker.internal:9310/ws/cp001", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 1}}]' where VARIABLE_ID == 19;
curl http://localhost:9410/api/v0/cs/cp002 -H 'content-type: application/json' -d '{"securityProfile": 1, "base64SHA256Password": "3oGi4B5I+Y9iEkYtL7xvuUxrvGOXM/X2LQrsCwf/knA="}'
update VARIABLE_ATTRIBUTE set "VALUE" = '[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "ws://host.docker.internal:9311/ws/cp002", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 2}}]' where VARIABLE_ID == 19;
TLS handshake error from 192.168.65.1:19797: local error: tls: bad record MAC
My next step will be to write a simple python program that we can run in the same docker container and make the same connection. This is an alternative to getting access to the python-based EVSE implementation. However, I am going to upgrade to the most recent release first, since there have been several changes to the security code. I will then create an example of a broken docker-compose and send it out to the community in parallel with my own investigation.
> The fork () has several hardcoded certificates, which are the ones passed in to the maeve command line. So I am not sure why it is not just named cpo_sub_ca1. As an aside, I wonder if the PKI testing event should include an intermediate certificate just to make things more complex.
ISO 15118 states that if only 1 SubCA/ICA is present, it has to use the profile of SubCA 2. That's why I named it that way to avoid confusion.
> While copying the root V2G cert that is required, I wondered whether I should copy the root-V2G-cert.pem or the MORootCACert.pem, before discovering that they are, in fact, the same:
> $ diff config/certificates/root-V2G-cert.pem config/certificates/MORootCACert.pem
Again, for the sake of simplicity, I used the same root, same SubCA, and same leaf everywhere. That means all the cryptography-related files for MO, OEM, CPO/CSO, and V2G are the same. We can of course use different certs and keys for different chains.
> After manually copying the root CA:
> $ docker cp ~/joet-everest/maeve-csms/config/certificates/root-V2G-cert.pem everest-demo-manager-1:/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem Successfully copied 3.58kB to everest-demo-manager-1:/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
> Maybe we have to copy in the entire bundle and not only the root? Do we also need to copy over the leaves in that case? Let's check the code...
Why do we need the CSMS key to be present in EVerest? It should be a private property of the CSMS, which is an external entity.
> V2G_CA_BUNDLE

What certs make up a bundle? RootCA and SubCAs? Only SubCAs? What is the order?
Bumped up the everest version https://github.com/EVerest/everest-demo/pull/26
Will wait for clean images to build before retrying the SSL connection.
> Why do we need the CSMS key to be present in EVerest? It should be a private property of the CSMS, which is an external entity.
As I said:
> The handshake is still failing. Given that we are at security profile 2 (TLS with basic auth), with the following diagram from the spec, I would assume that as long as we had the root certificate installed in the CSMS, we would not need the CSMS certificate to be preinstalled.
However, given that the TLS handshake was failing, it could be that the EVerest implementation was incorrect. As you can see from the history, there are configuration parameters to the security module that refer to the CSMS certificate and key.
> What certs make up a bundle? RootCA and SubCAs? Only SubCAs? What is the order?
Please see my comment around the code in the generate scripts shell script that generates the bundle.
> I've tried the bundle and the root in various places and they all fail. Concretely, this is really confusing: this is supposed to be the bundle, but the default is set to ROOT_CA.pem. If we make this be the bundle, then where is the root for it to verify against?
> Next steps:
This is really confusing. There should be a consistent way to organize all the certs. Also, I see CA keys inside the client folder. For example, the /etc/everest/certs/client/csms folder contains CPO_SUB_CA1.key and CPO_SUB_CA2.key.
> V2G_CA_BUNDLE
> What certs make up a bundle? RootCA and SubCAs? Only SubCAs? What is the order?
We should be using the bundles that EonTI has made available on Google Drive for use during the PKI test event at NREL. Keysight and Hubject have provided similar cert bundles for CharIN Testivals, but they're not in scope for the April event.
> We should be using the bundles that EonTI has made available on Google Drive for use during the PKI test event at NREL. Keysight and Hubject have provided similar cert bundles for CharIN Testivals, but they're not in scope for the April event.
At least from Sahabul's fork, there are no certificate bundles in the certs provided by Eonti. By "bundles", I mean a full certificate chain, concatenated into a single pem file. Each pem file checked in contains a single cert. It is not clear where EVerest and/or MaEVe expect the chain. I do note the comment in the mailing list: https://lists.lfenergy.org/g/everest/message/1255 "Make sure to include all intermediate certs in the certificate chain file in the correct order."
> Do the certs checked in to the fork actually work?!
If the CPO SubCA 2 key and cert don't match, that means the key is not needed for any authorization, which, I think, is true. The python OCPP client works fine without it, so I didn't even notice that they were wrong.
And the certs in my MaEVe fork work, because I can see them working. And like I mentioned above, the CPO SubCA 2 key is not needed for, at least, the authorization req/res pair.
After bumping up the version (https://github.com/EVerest/everest-demo/pull/26 and https://github.com/EVerest/everest-demo/pull/28), we get a different error, which is more insightful:
2024-03-11 16:35:35.667213 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-11 16:35:35.888409 [WARN] ocpp:OCPP201 bool ocpp::WebsocketTLS::verify_csms_cn(const std::string&, bool, boost::asio::ssl::verify_context&) :: Invalid certificate error 'unable to get local issuer certificate' (at chain depth '0')
2024-03-11 16:35:35.888709 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
So it looks like we do need a full bundle there (Invalid certificate error 'unable to get local issuer certificate' (at chain depth '0')). One final try: copy over the bundle with debug logging before I generate the MRE and hand it over to others.
cp dist/etc/everest/certs/ca/v2g/V2G_CA_BUNDLE.pem dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
Aha!
2024-03-11 16:42:41.838334 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp001 with security-profile 2
2024-03-11 16:42:41.887152 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-11 16:42:42.069517 [WARN] ocpp:OCPP201 bool ocpp::WebsocketTLS::verify_csms_cn(const std::string&, bool, boost::asio::ssl::verify_context&) :: FQDN 'host.docker.internal' does not match CN 'SECCCert' or any alternative names
2024-03-11 16:42:42.073916 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
2024-03-11 16:42:42.074029 [INFO] ocpp:OCPP201 :: Reconnecting in: 2000ms, attempt: 1
terminate called after throwing an instance of 'std::logic_error'
what(): get_issuer_key_hash must only be used on self-signed certs
2024-03-11 16:42:42.123560 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Module evse_security (pid: 678) exited with status: 6. Terminating all modules.
2024-03-11 16:42:42.125019 [INFO] manager :: SIGTERM of child: api (pid: 671) succeeded.
2024-03-11 16:42:42.125193 [INFO] manager :: SIGTERM of child: auth (pid: 672) succeeded.
2024-03-11 16:42:42.125247 [INFO] manager :: SIGTERM of child: car_simulator_1 (pid: 673) succeeded.
2024-03-11 16:42:42.125289 [INFO] manager :: SIGTERM of child: car_simulator_2 (pid: 674) succeeded.
2024-03-11 16:42:42.125347 [INFO] manager :: SIGTERM of child: energy_manager (pid: 675) succeeded.
2024-03-11 16:42:42.125579 [INFO] manager :: SIGTERM of child: evse_manager_1 (pid: 676) succeeded.
2024-03-11 16:42:42.125818 [INFO] manager :: SIGTERM of child: evse_manager_2 (pid: 677) succeeded.
2024-03-11 16:42:42.125917 [INFO] manager :: SIGTERM of child: grid_connection_point (pid: 679) succeeded.
2024-03-11 16:42:42.125973 [INFO] manager :: SIGTERM of child: iso15118_car (pid: 690) succeeded.
2024-03-11 16:42:42.126060 [INFO] manager :: SIGTERM of child: iso15118_charger (pid: 692) succeeded.
2024-03-11 16:42:42.126234 [INFO] manager :: SIGTERM of child: ocpp (pid: 693) succeeded.
2024-03-11 16:42:42.126410 [INFO] manager :: SIGTERM of child: persistent_store (pid: 695) succeeded.
2024-03-11 16:42:42.126594 [INFO] manager :: SIGTERM of child: slac (pid: 696) succeeded.
2024-03-11 16:42:42.126656 [INFO] manager :: SIGTERM of child: system (pid: 697) succeeded.
2024-03-11 16:42:42.126696 [INFO] manager :: SIGTERM of child: token_provider_1 (pid: 698) succeeded.
2024-03-11 16:42:42.126727 [INFO] manager :: SIGTERM of child: yeti_driver_1 (pid: 700) succeeded.
2024-03-11 16:42:42.126760 [INFO] manager :: SIGTERM of child: yeti_driver_2 (pid: 705) succeeded.
2024-03-11 16:42:42.126788 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Exiting manager.
I see at least two errors:
FQDN 'host.docker.internal' does not match CN 'SECCCert' or any alternative names
and
what(): get_issuer_key_hash must only be used on self-signed certs
Fixing the first is trivial, the second needs more work to debug. But there is a trace and a pointer to the module with the error, so we should be able to figure out what that does.
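For the first error, the failing check is conceptually simple. A sketch of hostname-against-certificate matching (deliberately simplified: exact CN/SAN comparison, no wildcard handling; this is an illustration of my own, not the libocpp implementation):

```python
def fqdn_matches(fqdn: str, common_name: str, alt_names: list[str]) -> bool:
    """True if the server FQDN equals the certificate CN or any subjectAltName.
    Real TLS matching also handles wildcards; omitted here for clarity."""
    return fqdn == common_name or fqdn in alt_names

# The failing case from the log: the CSMS is serving a cert minted for
# 'SECCCert', so the station's FQDN check rejects it.
print(fqdn_matches("host.docker.internal", "SECCCert", []))
# A cert re-generated with the right FQDN (or SAN) would pass.
print(fqdn_matches("host.docker.internal", "host.docker.internal", []))
```

So the trivial fix is simply minting the CSMS leaf with CN (or a SAN entry) of host.docker.internal.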
Also, if we want more logging, we can edit dist/etc/everest/default_logging.cfg
and set the filter to
Filter="%Severity% >= INFO or (%Process% contains OCPP201 and %Severity% >= DEBG)"
The second error comes from
std::string X509Wrapper::get_issuer_key_hash() const {
if (is_selfsigned()) {
return get_key_hash();
} else {
throw std::logic_error("get_issuer_key_hash must only be used on self-signed certs");
}
}
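The guard follows from what a certificate file alone can tell you: only for a self-signed cert (issuer == subject) is the issuer's key the cert's own key, so the issuer key hash is computable without fetching the issuer cert. A Python sketch of the same logic (toy dict-based cert model of mine, not the libevse-security types):

```python
import hashlib

def is_selfsigned(cert: dict) -> bool:
    # Self-signed: the cert's issuer is its own subject.
    return cert["issuer"] == cert["subject"]

def get_issuer_key_hash(cert: dict) -> str:
    """Mirrors X509Wrapper::get_issuer_key_hash: for a self-signed cert the
    issuer's key IS the cert's own key; otherwise the issuer cert is needed."""
    if is_selfsigned(cert):
        return hashlib.sha256(cert["public_key"]).hexdigest()
    raise ValueError("get_issuer_key_hash must only be used on self-signed certs")

root = {"subject": "V2GRootCA", "issuer": "V2GRootCA", "public_key": b"root-key"}
leaf = {"subject": "SECCCert", "issuer": "CPOSubCA2", "public_key": b"leaf-key"}

print(get_issuer_key_hash(root)[:16])
try:
    get_issuer_key_hash(leaf)
except ValueError as err:
    print(err)
```

A crash at this throw therefore means some comparison path reached a cert that is not self-signed.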
Let's see where this is called, and with which cert
It is used to check for certificate equality, right after the issuer name.
bool operator==(const CertificateHashData& other) const {
return get_issuer_name_hash() == other.issuer_name_hash && get_issuer_key_hash() == other.issuer_key_hash &&
get_serial_number() == other.serial_number;
}
Here's where that check happens. We should fix the cert name and then maybe add additional logs to figure out which certs are being verified. https://github.com/EVerest/libocpp/blob/51b3a62f598f7f093193ab2af67ec7966f0dcc79/lib/ocpp/common/websocket/websocket_tls.cpp#L307
@shankari I am also testing in parallel to you, and my findings are similar to yours. That's why I'm not adding any additional info here. I didn't try bumping up the version though; I was trying to make it work with 2023.10.0, as that is what is currently supported by the hardware. But of course we can use the latest version for software testing.
I re-generated the certificates to have the correct FQDN for the CSMS certificate. The FQDN error is gone
2024-03-12 15:21:30.149417 [DEBG] ocpp:OCPP201 bool ocpp::WebsocketTLS::verify_csms_cn(const std::string&, bool, boost::asio::ssl::verify_context&) :: FQDN matches CN of server certificate: host.docker.internal
But I still see:
```
2024-03-12 15:21:30.171826 [DEBG] ocpp:OCPP201 void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr<Everest::Message>) :: topic everest/evse_security/main/cmd starts with everest/
terminate called after throwing an instance of 'std::logic_error'
  what(): get_issuer_key_hash must only be used on self-signed certs
2024-03-12 15:21:30.214575 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Module evse_security (pid: 2192) exited with status: 6. Terminating all modules.
```
This seems like it might need to come from the CSMS:
```
2024-03-12 15:21:30.171112 [DEBG] ocpp:OCPP201 Everest::Everest::subscribe_var(const Requirement&, const std::string&, const JsonCallback&)::<lambda(const Everest::json&)> :: Incoming evse_manager_2:EvseManager->evse:evse_manager->session_event
2024-03-12 15:21:30.171169 [DEBG] ocpp:OCPP201 void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr<Everest::Message>) :: topic everest/evse_manager_2/evse/var starts with everest/
2024-03-12 15:21:30.171347 [DEBG] ocpp:OCPP201 Everest::Everest::subscribe_var(const Requirement&, const std::string&, const JsonCallback&)::<lambda(const Everest::json&)> :: Incoming evse_manager_2:EvseManager->evse:evse_manager->session_event
2024-03-12 15:21:30.171664 [DEBG] ocpp:OCPP201 void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr<Everest::Message>) :: topic everest/evse_manager_1/evse/var starts with everest/
2024-03-12 15:21:30.171826 [DEBG] ocpp:OCPP201 void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr<Everest::Message>) :: topic everest/evse_security/main/cmd starts with everest/
```
I bet that the issue is that EVerest assumes that the CSMS certificate is also a bundle. Tried to create a bundle:
```shell
$ cp csms.pem csms_leaf.pem
$ cat csms_leaf.pem cpo_sub_ca2.pem cpo_sub_ca1.pem root-V2G-cert.pem > csms.pem
```
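As a side note, a hand-built bundle like this can be sanity-checked with openssl alone. Here is a self-contained toy version (all file names and subjects are made up for illustration; the real chain uses the PKI-event certs above) that builds a P-256 root and leaf, concatenates them leaf-first the same way, and verifies the leaf against the root:

```shell
# Toy root CA (self-signed, P-256) -- names are illustrative only
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout root.key -out root.pem -subj "/CN=Toy Root CA" -days 1

# Toy leaf cert signed by the root
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout leaf.key -out leaf.csr -subj "/CN=toy-csms"
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -out leaf.pem -days 1

# Bundle leaf-first, as in the csms.pem step above, then verify the leaf
cat leaf.pem root.pem > bundle.pem
openssl verify -CAfile root.pem leaf.pem
```

The same `openssl verify -show_chain` invocation used earlier in this issue for `cpo_sub_ca2.pem` works here too, and is a quick way to confirm the bundle ordering before handing it to EVerest.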
At this point, we are close - I bet this is just a configuration issue where we are not setting up the certs correctly. Will create an MRE so that we can get support and then finish this up.
Created MRE. The MRE doesn't have debug logging turned on, so without the log spew, we can see that the connection actually succeeds.
```
2024-03-12 17:23:41.145898 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp001 with security-profile 2
2024-03-12 17:23:41.193596 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-12 17:23:41.379596 [INFO] ocpp:OCPP201 :: OCPP client successfully connected to TLS websocket server
2024-03-12 17:23:41.396522 [INFO] ocpp:OCPP201 :: Received BootNotificationResponse: {
    "currentTime": "2024-03-12T17:23:41.000Z",
    "interval": 300,
    "status": "Accepted"
}
with messageId: 74cd331d-a8cd-4810-b689-9072a5b638cc
```
Before the process terminates:
```
terminate called after throwing an instance of 'std::logic_error'
  what(): get_issuer_key_hash must only be used on self-signed certs
2024-03-12 17:23:41.446046 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Module evse_security (pid: 605) exited with status: 6. Terminating all modules.
```
@shankari I ran your MRE and got the following outputs:
Looks like I repeatedly get an `Invalid certificate` error: `unable to get local issuer certificate`.
@jhoshiko please look at the script, and the step that copies over the bundle. You can also look through the issue for that error and the step that is supposed to resolve it. Is that running successfully?
@shankari Ok, it looks like there was an issue finding bash, which caused problems copying the bundle over. I fixed those and am now getting the same error as in this comment.
The "unable to find bash" error, for reference:
```
OCI runtime exec failed: exec failed: unable to start container process: exec: "C:/Program Files/Git/usr/bin/bash": stat C:/Program Files/Git/usr/bin/bash: no such file or directory: unknown
```
@jhoshiko this seems to be an issue with running on windows. This is very weird, because `docker exec` is supposed to run the command in the container, and I have run `docker exec ... /bin/bash` multiple times before (e.g. `docker exec everest-ac-demo-manager-1 /bin/bash -c "tar xzvf cached_certs_correct_name.tar.gz"`). It would be interesting to understand how those worked and this did not. What did you have to do to fix it?
@shankari I'm still not sure why the issue is occurring, but it seems to be specific to using bash on windows. The error only occurs when `/bin/bash` is used on lines 83, 86, and 88 in demo-iso15118-2-ac-plus-ocpp201.sp2.sh. I was able to get it to work by not using a bash shell on windows and using WSL instead, but it also looks like replacing `/bin/bash` with `bash` works.
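One plausible explanation (an assumption based on documented Git-for-Windows behavior, not verified against this setup): MSYS-based shells like Git Bash rewrite POSIX-looking arguments such as `/bin/bash` into Windows paths before `docker` ever sees them, which would explain the `C:/Program Files/Git/usr/bin/bash` path in the error above. If that is the cause, any of these variants of the `docker exec` call (container name taken from this thread; the `echo` payload is a placeholder) should avoid the rewrite:

```
# No leading slash, so there is nothing for MSYS to rewrite
docker exec everest-ac-demo-manager-1 bash -c "echo ok"

# Disable MSYS path conversion for this invocation
MSYS_NO_PATHCONV=1 docker exec everest-ac-demo-manager-1 /bin/bash -c "echo ok"

# A doubled leading slash also suppresses the conversion
docker exec everest-ac-demo-manager-1 //bin/bash -c "echo ok"
```

That would also explain why WSL works: it is a real Linux environment with no path mangling.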
This is my output for the MRE:
In PR #22, we created a one-line demo that allowed us to test end-to-end charging with OCPP 2.0.1. However, it only supports Basic Auth.
This issue will track the changes required to change it to support Security Profile 3 (2 in MaEVe since it starts with 0), using a client certificate for authentication.
It will temporarily use a forked version of MaEVe that has hardcoded certificates from an adversarial PKI testing event. Eventually, we will want to have the demo use an open CA and non-proprietary certificates, but make it easy to configure so that testers can easily use proprietary certificates or implementations.
@jhoshiko @sahabulh @crr-snl for visibility