Fix start failure when running in a k8s context due to the own certificate being generated with a different pod hostname
Purpose
When running in a k8s context with the PKI directory on a volume mount, the server fails to restart after the pod is deleted, because the certificate's alternate domain names contain the pod's hostname, which k8s changes on every pod recreation (see the sketch after the log below).
Logs:
[21:52:37 INF] Check application instance certificate. [CN=OpcPlc] [619B3829CE368A8F0946C791DA23532C1ADD1092]
[21:52:37 INF] Check domains in certificate.
[21:52:37 INF] Server Domain names:
[21:52:37 INF] opcplc.e4i-runtime
[21:52:37 INF] opcplc
[21:52:37 INF] opcplc-747d795f5-57d2b
[21:52:37 INF] Certificate Domain names:
[21:52:37 INF] OPCPLC.E4I-RUNTIME
[21:52:37 INF] OPCPLC
[21:52:37 INF] OPCPLC-747D795F5-8B5FV
[21:52:37 ERR] The server is configured to use domain 'opcplc-747d795f5-57d2b' which does not appear in the certificate. Use certificate anyway?
[21:52:37 FTL] OPC UA server failed unexpectedly
Opc.Ua.ServiceResultException: The certificate with subject CN=OpcPlc in the configuration is invalid.
Please update or delete the certificate from this location:
pki/own
at Opc.Ua.Configuration.ApplicationInstance.CheckApplicationInstanceCertificate(Boolean silent, UInt16 minimumKeySize, UInt16 lifeTimeInMonths)
at OpcPlc.OpcApplicationConfiguration.ConfigureAsync() in D:\a\1\s\src\OpcApplicationConfiguration.cs:line 174
at OpcPlc.Program.ConsoleServerAsync(CancellationToken cancellationToken) in D:\a\1\s\src\Program.cs:line 253
at OpcPlc.Program.MainAsync(String[] args, CancellationToken cancellationToken) in D:\a\1\s\src\Program.cs:line 181
[21:52:37 INF] OPC UA server exiting...
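The log shows the root cause: the server's domain list now contains the new pod name (opcplc-747d795f5-57d2b), while the persisted certificate still carries the previous one (OPCPLC-747D795F5-8B5FV). One way to keep the persisted certificate valid across restarts is to derive the server's DNS name from a stable, externally supplied value (for example the k8s service name) instead of the ephemeral pod hostname. The following is only a minimal sketch of that idea; the helper name and the OPCPLC_SERVICE_DNS_NAME environment variable are assumptions for illustration, not the actual change in this PR.

```csharp
using System;
using System.Net;

// Sketch (hypothetical helper): pick the DNS name used for the server's base
// address and own-certificate alternate names. In k8s the pod hostname changes
// on every recreation, so prefer a stable, externally supplied name (e.g. the
// service DNS name) and only fall back to the hostname outside of k8s.
static class StableHostName
{
    public static string Resolve()
    {
        // Assumed environment variable for illustration only.
        string configured = Environment.GetEnvironmentVariable("OPCPLC_SERVICE_DNS_NAME");

        return string.IsNullOrWhiteSpace(configured)
            ? Dns.GetHostName()   // ephemeral pod name, e.g. opcplc-747d795f5-57d2b
            : configured;         // stable name, e.g. opcplc.e4i-runtime
    }
}
```

With a stable name in the domain list, the certificate generated on the first start keeps matching after every pod recreation, so the domain check no longer fails.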
Does this introduce a breaking change?
[ ] Yes
[x] No
Pull Request Type
What kind of change does this Pull Request introduce?
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[ ] Other... Please describe:
How to Test
In a k8s context with the PKI directory on a volume mount to the app, restart the pod. The previously generated own certificate should be taken over instead of failing the domain check.
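To confirm the takeover, the thumbprint logged by "Check application instance certificate." should be identical before and after the restart. Alternatively, the persisted own certificate can be inspected directly; the sketch below assumes the default directory-store layout (pki/own/certs) on the volume mount.

```csharp
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

// Sketch: list every certificate persisted in the own store on the volume mount.
// Running this (or an equivalent check) before and after the pod restart should
// show identical thumbprints if the previously generated certificate was taken over.
class InspectOwnStore
{
    static void Main()
    {
        const string ownCertDir = "pki/own/certs"; // assumed default directory-store layout

        foreach (string file in Directory.EnumerateFiles(ownCertDir, "*.der"))
        {
            using var cert = new X509Certificate2(file);
            Console.WriteLine(
                $"{Path.GetFileName(file)}: {cert.Subject} [{cert.Thumbprint}] " +
                $"valid {cert.NotBefore:u} - {cert.NotAfter:u}");
        }
    }
}
```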
What to Check
Verify that the following are valid
Other Information