Open jhoshiko opened 8 months ago
@shankari I have a couple questions related to the MRE.
The EvseSecurity module has several parameters that point to the different certificate bundles. Each of these parameters has a default path which is where I am trying to copy the certs that Tara has just provided.
Here are my questions:
1.) The majority of the files that Tara provided are .crt files which, if I'm not mistaken, are signed certificates. However, the EvseSecurity module manifest.yaml expects paths to .pem files, which are the public keys. I think the goal is to put the .pem and .crt files in the same directory that the manifest.yaml points to by default, but I wanted to double-check that this is a good assumption.
2.) I might be getting mixed up about where these certs are supposed to be copied. We have the contract certs and the EVSE certs, but I'm not sure which bundle parameters are the correct ones. The manifest.yaml has the following options:
- The v2g_ca_bundle (default: ca/v2g/V2G_ROOT_CA.pem) is used to verify the installation of SECC leaf certificates using the update_leaf_certificate command.
- The csms_ca_bundle (default: ca/v2g/V2G_ROOT_CA.pem) is used to verify the installation of CSMS leaf certificates using the update_leaf_certificate command.
- The mo_ca_bundle (default: ca/mo/MO_ROOT_CA.pem) is used to verify contract certificates provided by the electric vehicle as part of the ISO 15118 Plug & Charge process.
- The mf_ca_bundle (default: ca/mf/MF_ROOT_CA.pem) is used to verify firmware update files.
I've ruled out the mf_ca_bundle because we're not installing firmware. I was thinking that the contract certificates would go where mo_ca_bundle points and the EVSE certs would go where v2g_ca_bundle points, but I wanted to double-check with you first.
Although not directed to me, I have some of the answers. A .crt file is PEM encoded, so you can just change the extension to .pem. And yes, they are certificates. The .pem files referred to in the EvseSecurity module manifest.yaml are also certificates, not just public keys; certificates contain the public keys. So, basically, you can rename the EonTi certs with .crt extensions to the corresponding .pem files from the manifest.yaml: the contract/MO root goes in the mo_ca_bundle directory, and the root of the EVSE chain goes in the v2g_ca_bundle directory.
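A quick way to confirm that assumption before renaming is to check that the .crt files really are PEM-encoded certificates. This is a minimal sketch; the source filenames are hypothetical placeholders for the EonTi files, and the destination paths are the manifest.yaml defaults listed above.

head -1 MO_ROOT.crt                                   # a PEM file starts with "-----BEGIN CERTIFICATE-----"
openssl x509 -in MO_ROOT.crt -noout -subject -issuer  # parses only if it really is a certificate in PEM form
# If both checks pass, copying into the default bundle locations is just a rename:
cp MO_ROOT.crt  dist/etc/everest/certs/ca/mo/MO_ROOT_CA.pem
cp V2G_ROOT.crt dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem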
Ok, I've copied both root certs to ca/mo/MO_ROOT_CA.pem and ca/v2g/V2G_ROOT_CA.pem respectively. I also copied the new ca/v2g/V2G_ROOT_CA.pem to maeve-csms/config/certificates/root-V2G-cert.pem. However, when I go to test EVerest, I am getting a failed TLS handshake error:
2024-03-28 23:22:24.097905 [INFO] ocpp:OCPP201 :: libocpp: Updating OCSP cache on 0 certificates
2024-03-28 23:22:24.098003 [INFO] ocpp:OCPP201 :: libocpp: Done updating OCSP cache
2024-03-28 23:22:25.070620 [INFO] ocpp:OCPP201 :: Reconnecting to TLS websocket at uri: wss://host.docker.internal/ws/cp001 with security profile: 2
2024-03-28 23:22:25.076471 [INFO] evse_security:E :: Building new certificate hierarchy!
2024-03-28 23:22:25.123439 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-03-28 23:22:25.120025 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-03-28 23:22:25.170845 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-03-28 23:22:25.221372 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
I wanted to check if I perhaps missed a step, such as adding additional certs to MaEVe. I still haven't replaced any of the other certificates found in cached_certs_correct_name.tar.gz, just the ones in dist/etc/everest/certs/ca/mo and dist/etc/everest/certs/ca/v2g respectively.
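One way to narrow down a handshake failure like this, independently of EVerest, is to drive the TLS handshake against the MaEVe gateway directly with openssl. This is only a sketch under assumptions: that the gateway is reachable at host.docker.internal on port 443 (as the wss:// URI above implies) and that the client leaf/key sit at the demo paths.

# Attempt the handshake against the CSMS gateway and print the verification result.
openssl s_client -connect host.docker.internal:443 \
  -CAfile dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem \
  -cert dist/etc/everest/certs/client/csms/CSMS_LEAF.pem \
  -key dist/etc/everest/certs/client/csms/CSMS_LEAF.key \
  -showcerts </dev/null
# Look for "Verify return code: 0 (ok)" and for any TLS alert the server sends back.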
Sorry for the late response. What did you do with the leaf certs? EonTi didn't provide a leaf cert for the CSMS (MaEVe in this case), so if you don't have that, OCPP over TLS with the EonTi certs will not work. Actually, I was originally doing what you are doing now. I am also waiting for the CSMS cert.
As we were successful with the locally generated certificates, we are now going to test OCPP 2.0.1 with MaEVe using the certificates that EonTi provided. I wanted to open a new issue and @shankari agreed to this here. But as this issue was created for the same purpose, let's go with this one.
The solution was supposed to be straightforward: we just needed to copy the EonTi certificates over the locally generated ones in the proper places. When I did that, I got the following error while establishing TLS between the OCPP server (MaEVe) and client (EVerest):
2024-04-19 17:13:52.167434 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp001 with security-profile 3
2024-04-19 17:13:52.183841 [INFO] evse_security:E :: TPM Key: false
2024-04-19 17:13:52.186571 [WARN] evse_security:E evse_security::GetKeyPairResult evse_security::EvseSecurity::get_key_pair_internal(evse_security::LeafCertificateType, evse_security::EncodingFormat) :: CSMS leaf requires full bundle, but full bundle not found at path: "/workspace/dist/etc/everest/certs/client/csms"
2024-04-19 17:13:52.226964 [INFO] ocpp:OCPP201 :: Using certificate: "/workspace/dist/etc/everest/certs/client/csms/CSMS_LEAF.pem"
2024-04-19 17:13:52.227413 [INFO] ocpp:OCPP201 :: Using key file: "/workspace/dist/etc/everest/certs/client/csms/CSMS_LEAF.key"
2024-04-19 17:13:52.271330 [INFO] evse_security:E :: Building new certificate hierarchy!
2024-04-19 17:13:52.366119 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-04-19 17:13:52.319390 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-04-19 17:13:52.451376 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-04-19 17:13:52.515189 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
MaEVe gateway logs reported this:
2024/04/19 17:13:52 http: TLS handshake error from 172.22.0.1:44978: local error: tls: bad record MAC
2024/04/19 17:13:55 http: TLS handshake error from 172.22.0.1:44982: local error: tls: bad record MAC
Using Wireshark, I got this:
TLSv1.3 Record Layer: Alert (Level: Fatal, Description: Bad Certificate)
Content Type: Alert (21)
Version: TLS 1.2 (0x0303)
Length: 2
Alert Message
Level: Fatal (2)
Description: Bad Certificate (42)
When I checked if the certificate chain is valid, I got a positive result. For CSMS server chain:
Validating that the certificates are set up correctly
config/certificates/csms.pem: OK
Chain:
depth=0: O = "Eonti, Inc", OU = CCMS, CN = USCPIC001LTON4 (untrusted)
depth=1: C = US, O = "DigiCert, Inc.", OU = TEST MO Sub-CA, CN = V2G TEST 1 MO Sub-CA (untrusted)
depth=2: C = US, O = "DigiCert, Inc.", DC = V2G, OU = TEST Root CA, CN = V2G TEST Root CA
For CSMS client chain:
Configured everest certs, validating that the chain is set up correctly
dist/etc/everest/certs/client/csms/CSMS_LEAF.pem: OK
Chain:
depth=0: C = US, O = "DigiCert, Inc.", DC = CPO, CN = valid.evse.leaf (untrusted)
depth=1: C = US, O = "DigiCert, Inc.", CN = CPO TEST P-256 Sub-CA2 (untrusted)
depth=2: C = US, O = "DigiCert, Inc.", DC = V2G, OU = TEST CPO Sub-CA, CN = V2G TEST CPO Sub-CA1 (untrusted)
depth=3: C = US, O = "DigiCert, Inc.", DC = V2G, OU = TEST Root CA, CN = V2G TEST Root CA
My suspicion was that the CN of the CSMS server cert not being host.docker.internal might be the cause. I told Francis from EonTi about this last Monday. But I am confused, because this should have thrown a specific, custom error message, not one from an external, well-known package.
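A quick way to check that suspicion is to print the Subject CN and the Subject Alternative Name of the CSMS server certificate and compare them with the hostname EVerest dials. A minimal sketch, using the config/certificates/csms.pem path from the validation output above:

openssl x509 -in config/certificates/csms.pem -noout -subject
openssl x509 -in config/certificates/csms.pem -noout -text | grep -A1 "Subject Alternative Name"
# Either the CN or a SAN entry should match host.docker.internal for hostname verification to pass.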
As @shankari suggested, I tested whether changing the CN in the CSMS server certificate creates the same error. So, I took the currently working cert bundle and just replaced the CSMS server cert/key pair whose CN was host.docker.internal with a pair whose CN is CSMSCert.
Similarly, the certificate chains were validated but ultimately caused the same error:
2024-04-19 17:01:45.103588 [INFO] ocpp:OCPP201 :: Connecting TLS websocket to uri: wss://host.docker.internal/ws/cp001 with security-profile 3
2024-04-19 17:01:45.151745 [INFO] evse_security:E :: TPM Key: false
2024-04-19 17:01:45.152203 [INFO] evse_security:E :: TPM Key: false
2024-04-19 17:01:45.152804 [WARN] evse_security:E evse_security::GetKeyPairResult evse_security::EvseSecurity::get_key_pair_internal(evse_security::LeafCertificateType, evse_security::EncodingFormat) :: CSMS leaf requires full bundle, but full bundle not found at path: "/workspace/dist/etc/everest/certs/client/csms"
2024-04-19 17:01:45.153408 [INFO] ocpp:OCPP201 :: Using certificate: "/workspace/dist/etc/everest/certs/client/csms/CSMS_LEAF.pem"
2024-04-19 17:01:45.153689 [INFO] ocpp:OCPP201 :: Using key file: "/workspace/dist/etc/everest/certs/client/csms/CSMS_LEAF.key"
2024-04-19 17:01:45.199226 [INFO] evse_security:E :: Building new certificate hierarchy!
2024-04-19 17:01:45.245640 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-04-19 17:01:45.243213 [DEBG] ocpp:OCPP201 :: Loading ca csms bundle to verify server certificate: /workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem
2024-04-19 17:01:45.332854 [INFO] evse_security:E :: Requesting certificate file: [CSMS] file:"/workspace/dist/etc/everest/certs/ca/v2g/V2G_ROOT_CA.pem"
2024-04-19 17:01:45.407118 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 8, reason: TLS handshake failed, HTTP response code: 0, category: websocketpp.transport.asio.socket, transport error code: 167772567, Transport error category: asio.ssl
For CSMS server chain:
Validating that the certificates are set up correctly
config/certificates/csms.pem: OK
Chain:
depth=0: CN = CSMSCert, O = EVerest, C = DE, DC = CPO (untrusted)
depth=1: CN = CPOSubCA2, O = EVerest, C = DE, DC = V2G (untrusted)
depth=2: CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G (untrusted)
depth=3: CN = V2GRootCA, O = EVerest, C = DE, DC = V2G
For CSMS client chain:
Configured everest certs, validating that the chain is set up correctly
dist/etc/everest/certs/client/csms/CSMS_LEAF.pem: OK
Chain:
depth=0: CN = CSMSCert, O = EVerest, C = DE, DC = CPO (untrusted)
depth=1: CN = CPOSubCA2, O = EVerest, C = DE, DC = V2G (untrusted)
depth=2: CN = CPOSubCA1, O = EVerest, C = DE, DC = V2G (untrusted)
depth=3: CN = V2GRootCA, O = EVerest, C = DE, DC = V2G
Here we are using the same chain for both the server and the client, but this is not an issue. We do the same thing for the working scenario. Ideally, they should be two different chains, like the ones provided by EonTi.
@the-bay-kay, I am not sure if you need to use wireshark, but in case you do, there are many instructions on how to use wireshark to view traffic over a docker network (e.g. https://github.com/linuxserver/docker-wireshark or https://superuser.com/questions/1685605/how-to-capture-docker-container-traffic-using-wireshark). If you do end up using wireshark, particularly to decode the EXI messages (https://github.com/EVerest/everest-demo/issues/44#issuecomment-2135773108), I would suggest that you change all the docker compose files to include wireshark by default and change the README to include instructions on how to access the wireshark desktop
Hi all! Just wanted to ask some quick clarifying questions about the certificate replacement process, to make sure I'm on the right track reproducing & debugging this.
As I understand it, I need to replace the following root certs (all under .../dist/etc/everest/certs/):
ca/mo/MO_ROOT_CA.pem
ca/v2g/V2G_ROOT_CA.pem
Are there any certs I'm missing that I should replace? (I've seen the CSMS_Leaf mentioned a few times in this thread, for example). And, to that end -- I should replace the entire chain, correct? Or is replacing the root sufficient?
Assuming I've got the correct list of certs to replace, where exactly in the container (manager-1) should I find and replace them? My guess is /ext/source/build/dist/.../certs/, but I want to double-check that's the correct one. (As can be seen above, my /workspace/dist/ directory doesn't have the etc/ directory that appears in Sahabulh's logs.)
Finally -- where will I be placing the certs in the MaEVe container? I see there's a script to fetch certificates (link), but I haven't been able to find where the config directory is within the docker container.
I'll go ahead and start experimenting with my current assumptions -- let me know if there's anything wrong in my understanding of this project!
@the-bay-kay the list of certs and the locations they map to are in the demo script (demo-iso15118-2-ac-plus-ocpp.sh -3). All the logs above are from running that script. You need to replace all the certs, not just the root. The location of the certs has changed since sahabul's logs due to https://github.com/EVerest/everest-demo/pull/39. You can see the new cert location in the updated script, and the change in the commit history.
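If it helps to confirm the locations at runtime, the cert tree can also be listed inside the running manager container. A small sketch; the name filter is an assumption and may need adjusting to however compose names the container on your machine:

MANAGER=$(docker ps --filter "name=manager" --format "{{.Names}}" | head -n 1)
docker exec -it "$MANAGER" ls -R /ext/source/build/dist/etc/everest/certs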
As we discussed:
Just added the provided certs to the cert archive and demo-iso15118-ac-plus-ocpp.sh -- more or less as follows...
Manually running the verification shows that the cert chain is OK...
And, when executing this variation of the script, it launches ok -- but, once in the runtime loop, we repeatedly receive the following error...
Interestingly, it seems that things are working correctly when testing the UI...
https://github.com/EVerest/everest-demo/assets/98350084/67ce02ee-e4b4-4606-b1df-b2d614cbc122
Just wanted to post these initial findings, about to start diving into these websocket errors!
And, when executing this variation of the script, it launches ok -- but, once in the runtime loop, we repeatedly receive the following error...
Correct. The "underlying transport error" is what we need to fix. The OCSP error happens in both situations and can be ignored (at least for now).
Interestingly, it seems that things are working correctly when testing the UI...
Interesting. Obviously, I would assume that this should not work. Would be good to understand what is validating the cert/token if the connection to the CSMS is broken. Is the station doing some local validation? That may or may not be incorrect - we do want the station to be robust to network failure, and sometimes charging providers allow free sessions if the network is not available because it is their problem rather than the customer's problem. But if they don't allow free access (which is part of the ISO 15118 spec), then the charging shouldn't work.
BTW, tests like this would have been part of the adversarial PKI testing event in April that switched to a virtual format...
Would be good to understand what is validating the cert/token if the connection to the CSMS is broken.
It seems that the network-failure behavior is to accept the validation, given that OfflineTxForUnknownIDEnabled is flagged. For more context, see OCPP "Section 2.6, C15 - Offline Authorization", Figure 35.
Tracing through from this error... log_on_fail seems to be getting called by on_fail_plain, the fail handler for the client connection_pointer set in connect_plain(). Reading up more on WebSocket++'s client implementation now, to see when the error handler might be called.
Enabled debug logging in /ext/source/build/dist/etc/everest/default_logging.cfg -- piping the output into a file to look at the logs...
cat demo-iso-15118-2-ac-plus-ocpp.sh | bash -s - -r $(pwd) -e 2>&1 | tee temp.log
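When reading the captured file, stripping the colour escape sequences first makes it easier to search for the failure. A small sketch (the sed pattern is a generic ANSI-escape filter, nothing EVerest-specific):

sed $'s/\x1b\\[[0-9;]*m//g' temp.log > temp_clean.log   # drop ANSI colour codes
grep -n "Failed to connect to websocket server" temp_clean.log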
Searching for the same error (escape codes stripped for readability)...
2024-06-10 16:43:15.255847 [DEBG] ocpp:OCPP201 ocpp::WebsocketPlain::WebsocketPlain(const ocpp::WebsocketConnectionOptions&) :: Initialised WebsocketPlain with URI: ws://localhost:9000/cp001
2024-06-10 16:43:15.256006 [INFO] ocpp:OCPP201 :: Connecting to plain websocket at uri: ws://localhost:9000/cp001 with security profile: 1
2024-06-10 16:43:15.256785 [DEBG] ocpp:OCPP201 void ocpp::WebsocketPlain::connect_plain() :: Connecting with security profile: 1
2024-06-10 16:43:15.256840 [DEBG] ocpp:OCPP201 std::optional<std::__cxx11::basic_string<char> > ocpp::WebsocketBase::getAuthorizationHeader() :: AuthorizationKey present, encoding authentication header
2024-06-10 16:43:15.256869 [DEBG] ocpp:OCPP201 std::optional<std::__cxx11::basic_string<char> > ocpp::WebsocketBase::getAuthorizationHeader() :: Basic Auth header: Basic Y3AwMDE6REVBREJFRUZERUFEQkVFRg==
2024-06-10 16:43:15.257947 [ERRO] ocpp:OCPP201 void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 2, reason: Underlying Transport Error, HTTP response code: 0, category: websocketpp.transport, transport error code: 0, Transport error category: system
2024-06-10 16:43:15.258073 [INFO] ocpp:OCPP201 :: Reconnecting in: 2000ms, attempt: 1
Zooming into the new debug logs, and cleaning them up...
ocpp::WebsocketPlain::WebsocketPlain(const ocpp::WebsocketConnectionOptions&) :: Initialised WebsocketPlain with URI: ws://localhost:9000/cp001
void ocpp::WebsocketPlain::connect_plain() :: Connecting with security profile: 1
std::optional<std::__cxx11::basic_string<char> > ocpp::WebsocketBase::getAuthorizationHeader() :: AuthorizationKey present, encoding authentication header
std::optional<std::__cxx11::basic_string<char> > ocpp::WebsocketBase::getAuthorizationHeader() :: Basic Auth header: Basic Y3AwMDE6REVBREJFRUZERUFEQkVFRg==
void ocpp::WebsocketBase::log_on_fail(const std::error_code&, const boost::system::error_code&, int) :: Failed to connect to websocket server, error_code: 2, reason: Underlying Transport Error, HTTP response code: 0, category: websocketpp.transport, transport error code: 0, Transport error category: system
So, we see a few outputs from getAuthorizationHeader() here. Time to trace back and see what calls this - I'm unsure how helpful this trail will be, but it's a start!
I should've scrolled a bit further down -- I'm not sure the logs above were the relevant bits. Looking at the logs after the failure...
Zooming in a bit on the security calls (and again, cleaning them up) ....
We finally narrow in on...
DEBG evse_security: std::filesystem::__cxx11::path evse_security::get_private_key_path_of_certificate(const X509Wrapper&, const std::filesystem::__cxx11::path&, std::optional<std::__cxx11::basic_string<char> >) Key found for certificate at path: "/ext/source/build/dist/etc/everest/certs/client/cso/SECC_LEAF.key"
DEBG evse_security: static bool evse_security::OpenSSLSupplier::x509_is_child(evse_security::X509Handle*, evse_security::X509Handle*) Certificate issued by error: unable to get local issuer certificate
DEBG evse_security: static bool evse_security::OpenSSLSupplier::x509_is_child(evse_security::X509Handle*, evse_security::X509Handle*) Certificate issued by error: unable to get local issuer certificate
DEBG evse_security: static bool evse_security::OpenSSLSupplier::x509_is_child(evse_security::X509Handle*, evse_security::X509Handle*) Certificate issued by error: unable to get local issuer certificate
DEBG evse_security: static bool evse_security::OpenSSLSupplier::x509_is_child(evse_security::X509Handle*, evse_security::X509Handle*) Certificate issued by error: unable to get local issuer certificate
WARN evse_security: static std::string evse_security::OpenSSLSupplier::x509_get_responder_url(evse_security::X509Handle*) Could not retrieve OCSP Responder URL from certificate
WARN ocpp:OCPP201 void ocpp::v201::OcspUpdater::updater_thread_loop() : libocpp: Got unexpected message type from CSMS, expected: GetCertificateStatusResponse, got: InternalError, will retry.
I can't help but wonder: is this certs/client/cso/SECC_LEAF.key file the issue? It doesn't seem to be in the list of certificates we're replacing in the demo script -- and, as we can see, the x509 validation fails on each of the children. Investigating further.
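The "unable to get local issuer certificate" errors can be reproduced outside EVerest by verifying a leaf against the bundle it is supposed to chain to. A minimal sketch, run from dist/etc/everest/certs/; the SECC_LEAF.pem name is an assumption based on the SECC_LEAF.key path in the log above:

# If an intermediate is missing from the chain, openssl reports
# "unable to get local issuer certificate", matching the x509_is_child debug output.
openssl verify -CAfile ca/v2g/V2G_ROOT_CA.pem \
  -untrusted client/cso/SECC_LEAF.pem \
  client/cso/SECC_LEAF.pem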
(An Aside: For those new-ish to Charging Infrastructure Certification, as I was when starting this project -- I found this article particularly helpful when trying to understand the methodology / justification behind Plug-and-Charge network authentication!)
I can't help but wonder, is this certs/client/cso/SECC_LEAF.key file the issue? It doesn't seem to be in the list of certificates we're replacing in the demo script -- and, as we can see, the x509 validation fails on each of the children. Investigating further
We are not replacing it in the demo script, and the demo script works => it is not used. Feel free to remove it completely, test with the original demo certs, and verify that it works.
demo script works => it is not used.
Yup, you were right. Deleting SECC_LEAF.key and running
cat demo-iso-15118-2-ac-plus-ocpp.sh | bash -s - -r $(pwd) -2 2>&1 | tee temp_two.log
gives the logs below, and it runs as expected (searching for OfflineTxForUnknownIdLabel comes up empty). Back to the drawing board!
Quick update -- going through the code with a fine-tooth comb to find this bug, attempting to trace through it and understand what's happening. I'll be building a call-stack diagram, just for my own understanding (added under the fold). Looking at the following log block surrounding the error...
We know that the error occurs after WebsocketTLS::connect is called, as the reconnect callback is getting triggered. I've traced as far as WebsocketTLS::connect_tls. We can assume either this, or whatever calls the original connect, is encapsulating the error. Let's say it's the latter, for the purpose of sketching our call stack. We'll look for where this is being called later -- for now, let's build our way up from the root of the tree.
At the bottom of our tree is Evse::get_verify_file, which eventually calls X509CertificateBundle::get_certficate_hierarchy, which in turn calls X509CertificateHierarchy::build_hierarchy. We can rule out this block as the problem child, as we see none of the error logs thrown during these calls (and, indeed, the x509 verification happens, per the DEBG logs earlier in this thread).
I think we can rule out the current EVSE leaf functions -- we would expect logs if any of them were to reach a fail state. Will continue tracing, and update with any findings!
My current hunch is that we can rule out everything under get_verify_file -- the on_tls_init chain seems to be executing OK. Currently trying to trace where exactly the wss_client performs the initialization. Starting to look at the calls in WebsocketTLS::connect_tls -- since we're slotting the certs into Profile 2, I think we can start our investigation at line 214.
Edit: indeed, we can start on line 281, as the debug logs indicate we get this far...
Comparing the working and broken debug logs...
There seem to be no discernible differences we can glean from the logs -- the on_mqtt_message calls occur at slightly offset times, but we can ignore that...
Briefly reading up on Asio and WebSocket++'s implementation of the asio_client, just to better understand what's happening when wss_client->connect() is being called...
Fishing for leads, pivoting to trying Wireshark. I've had some issues with docker-wireshark -- perhaps it's my lack of familiarity with the software, but I haven't been able to monitor any container traffic (see screenshots under fold). Going to spend a bit of time with Edgeshark, to weigh my options
EDIT: Well, that was wonderfully painless! Setting up Edgeshark seems to be as simple as running it in a parallel container. Perhaps that was my issue with the linuxserver docker-wireshark -- should it have been a parallel container, rather than an internal one? Either way, I'll stick with edgeshark, since it seems to be working (video below).
Added Edgeshark and execution details in PR #60 -- will keep the discussion of certificate investigation in this thread, as it contains all of the background information.
TLSv1.3 Record Layer: Alert (Level: Fatal, Description: Bad Certificate) Content Type: Alert (21) Version: TLS 1.2 (0x0303) Length: 2 Alert Message Level: Fatal (2) Description: Bad Certificate (42)
So far, I've been unable to capture this alert. I'll absolutely chalk this up to my lack of experience with the software -- just wanted to give some updates on my methods, in case I'm barking up the wrong tree.
Currently, I'm capturing the virtual bridges for Maeve-CSMS and Everest-Demo and filtering for TLS (see screenshots under fold). Doing so captures only the Client Hello handshake record. Looking at the TCP packets (again, ignoring the handshakes), I see a variety of messages -- most from yeti_driver_1, others from powermeter or the iso-15118_charger.
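For reference, the same capture can also be done from the command line with tshark, which sidesteps the desktop-container plumbing. A sketch under assumptions: the bridge interface name below is a hypothetical placeholder for the docker bridge of the compose network (check docker network ls / ip link for the real one):

docker network ls                                      # find the compose network and note its ID
tshark -i br-0123456789ab -Y "tls.handshake" -O tls    # capture and fully decode TLS handshake records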
Taking a step back to carefully go through the certs provided, some general thoughts on the cert replacement:
Could a mismatch in the MO Root CN be causing issues? Our current MO root has a Subject Name CN of MORootCA -- the EonTi-provided certs do not have the same SN. Since we aren't replacing any of the other certs in certs/ca/mo, any attempt to validate these from this new root would fail.
Admittedly, I'm somewhat hazy on where in the code this chain would be evaluated. I understand the MaEVe setup -- we replace the other certs with those generated by the makefile, and replace the MO Root in the config, using the patch to update said config. We may not be transferring the other MO certs, but shouldn't we be replacing them? Will spend some time testing now.
Another point of confusion -- within the given certs, the only CSMS leaf cert that is viable (i.e., that has a CN of host.docker.internal) depends on a single MO sub-cert, not two CPO certs, as is currently the case with our self-signed certs. We certainly can't copy over the chains one-for-one -- will this be an issue? I'm wishing I knew exactly where in the code the certs are verified during runtime.
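One way to compare the two chain structures without guessing is to dump the subject/issuer of every certificate in each bundle, in order. A minimal sketch; CSMS_LEAF.pem is the client bundle path used above, while eonti_csms_bundle.pem is a hypothetical placeholder for the EonTi-provided bundle:

# Print subject/issuer for every certificate in a PEM bundle, in order,
# so the self-signed chain and the EonTi chain can be compared side by side.
openssl crl2pkcs7 -nocrl -certfile dist/etc/everest/certs/client/csms/CSMS_LEAF.pem | openssl pkcs7 -print_certs -noout
openssl crl2pkcs7 -nocrl -certfile eonti_csms_bundle.pem | openssl pkcs7 -print_certs -noout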
Opening an issue for discussing the creation of an MRE that uses EonTi certificates instead of the current self-signed certificates.