Open · shaun-forgie opened this issue 5 months ago
Regarding the authentication and authorization requirements mentioned in the Header Sections [Data Integrity requirements]: Kafka itself provides the configuration needed to meet the data integrity requirements. It uses SSL/TLS to ensure that all broker/client and inter-broker network communication is encrypted. It also addresses the question about signature verification, since an SSL truststore and keystore can be configured to handle that verification. As I already mentioned in the Readme.md, each publisher and consumer can enable this by setting the properties below when Kafka is configured for SSL/TLS connectivity:
org.killbill.billing.plugin.kafka.sslEnabled=true
org.killbill.billing.plugin.kafka.trustStoreLocation=/Users/prashant.kumar/Downloads/keystore1.jks
org.killbill.billing.plugin.kafka.trustStorePassword=password
org.killbill.billing.plugin.kafka.keyPassword=cashfree
org.killbill.billing.plugin.kafka.keyStoreLocation=/Users/prashant.kumar/Downloads/keystore1.jks
org.killbill.billing.plugin.kafka.keyStorePassword=password
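For reference, here is a minimal sketch of the standard Kafka client SSL settings that the plugin properties above would presumably translate into when sslEnabled=true. The broker address, file paths, and passwords are placeholders, and the plugin-to-client mapping is my assumption rather than a statement about the plugin's internals:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093"); // SSL listener (placeholder)
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Standard Kafka client SSL settings, mirroring the plugin properties above.
        props.put("security.protocol", "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/keystore1.jks"); // trustStoreLocation
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password");               // trustStorePassword
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/keystore1.jks");   // keyStoreLocation
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "password");                 // keyStorePassword
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "changeit");                      // keyPassword

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) as usual; broker/client traffic is now TLS-encrypted,
            // and the broker can additionally require client certificates (ssl.client.auth=required).
        }
    }
}
```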
Anyone using Kafka who also has these data integrity requirements can meet them with Kafka itself; they just have to configure Kafka in a suitably secure way. I suggest going through the Confluent documentation on Kafka security; it should address all of the data integrity requirements below.
Regarding the Body section, most parts are clear except for three fields:
Message ID: This likely corresponds to the trackingId used in rolled_up_usage to track the message. Is that correct?

Usage Source Reference: What is the purpose of the usage source reference? Is it used for the createdBy field in the rolled_up_usage table?

Meta Data: After consumption, where will this metadata be stored?

I make a distinction between two systems, Sender and Origin, because the sending system is often not the system where the usage data was originally captured or produced. In large, complex environments it is often important to be able to make that distinction.
The security mechanisms you have listed are certainly valid for identifying the sending system, but if the content being sent came from another source, then being able to sign the content itself is useful. We can, however, deal with this after the first release.
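To illustrate that distinction, here is a minimal Java sketch of origin-level content signing, separate from transport TLS. The class and the in-memory key pair are hypothetical and only for illustration; real key distribution and the exact signature field in the message would still need to be agreed:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

// Hypothetical sketch: the Origin system signs the usage payload so that a
// consumer can verify who produced the data even when a different Sender
// system relays it to the broker over TLS.
public class OriginContentSigningSketch {

    static String sign(String payload, PrivateKey originPrivateKey) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(originPrivateKey);
        signer.update(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signer.sign());
    }

    static boolean verify(String payload, String signatureB64, PublicKey originPublicKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(originPublicKey);
        verifier.update(payload.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(Base64.getDecoder().decode(signatureB64));
    }

    public static void main(String[] args) throws Exception {
        KeyPair originKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        String payload = "{\"trackingId\":\"abc-123\",\"amount\":42}"; // placeholder body
        String signature = sign(payload, originKeys.getPrivate());
        // The Sender forwards payload + signature; the consumer verifies against the Origin's public key.
        System.out.println("verified: " + verify(payload, signature, originKeys.getPublic()));
    }
}
```

TLS would then protect the hop between the Sender and the broker, while the signature travels with the content and identifies the Origin.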
Any message sent to the message broker will need to have the following data fields in order to be successfully processed and included on an invoice:
Header Sections [Data Integrity requirements]

Body Sections [Kill Bill requirements: M = mandatory, O = optional]

- [Matching against existing customer billing plan]
- [Recording actual usage values]
- [Not currently supported in the raw usage record, but useful to store for analytical purposes in another table]

Examples of meta data could include: collected by, location, and environmental characteristics like temperature.
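To make the header/body split above concrete, here is a purely illustrative sketch; apart from trackingId, createdBy and the rolled_up_usage table mentioned earlier in this thread, every field name below is an assumption on my part, not an agreed schema:

```java
import java.math.BigDecimal;
import java.util.Map;

// Illustrative only: a possible shape for the usage message discussed above.
// All field names are assumptions, not the agreed Kill Bill plugin schema.
public final class UsageMessageSketch {

    // Header Section [Data Integrity requirements]
    static final class Header {
        String messageId;        // unique id; possibly reused as trackingId in rolled_up_usage
        String senderSystem;     // system that published the message to the broker
        String originSystem;     // system where the usage was originally captured (may differ from sender)
        String contentSignature; // optional signature over the body, produced by the origin system
        String sentAt;           // ISO-8601 timestamp
    }

    // Body Section [Kill Bill requirements: M = mandatory, O = optional]
    static final class Body {
        String subscriptionId;        // M: matching against an existing customer billing plan
        String unitType;              // M: recording actual usage values
        BigDecimal amount;            // M: recording actual usage values
        String recordDate;            // M: date the usage applies to
        Map<String, String> metadata; // O: e.g. collected by, location, temperature;
                                      //    stored in a separate table for analytical purposes
    }
}
```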