prashant280920 / killbill-kafka-consumer-plugin

Open source contribution for Kill Bill: a plugin that supports pushing usage through a Kafka stream.

Usage Message Data Structure #4

Open shaun-forgie opened 1 month ago

shaun-forgie commented 1 month ago

Any message sent to the message broker will need to contain the following data fields in order to be successfully processed and included on an invoice:

Header Sections [Data Integrity requirements]

Body Sections [Kill Bill requirements M = mandatory, O = optional]

[Matching against existing customer billing plan]

  1. Tenant ID (m) - billing tenancy reference
  2. Subscription ID (m) - customer subscription reference

[Recording actual usage values]

  1. Message ID (m) - tracking reference used to uniquely identify the usage entry
  2. Usage Unit Type (m) - name of the usage unit, a text value as defined in the billing catalog
  3. Usage Quantity (m) - BigDecimal floating point able to store usage amounts
  4. Usage Date / Time (m) - DateTime value indicating when the usage occurred

[Not currently supported in the raw usage record but useful to store for analytical purposes in another table]

  1. Usage Source Reference (o) - transaction reference from the source system uniquely identifying the origin of the data
  2. Meta Data (o) - custom fields / tags that need to be added to the usage record

Examples of metadata could include: collected by, location, and environmental characteristics such as temperature.
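To make the shape concrete, here is a minimal sketch of the message body as a Java record. The field names are illustrative assumptions, not a final schema; only the semantics come from the lists above, and Joda-Time is assumed because Kill Bill uses it.

import java.math.BigDecimal;
import java.util.Map;
import org.joda.time.DateTime;

// Sketch of a usage message body (hypothetical field names)
public record UsageMessage(
        String tenantId,              // (m) billing tenancy reference
        String subscriptionId,        // (m) customer subscription reference
        String messageId,             // (m) unique tracking reference for this usage entry
        String unitType,              // (m) unit name as defined in the billing catalog
        BigDecimal quantity,          // (m) usage amount
        DateTime recordDate,          // (m) when the usage occurred
        String usageSourceReference,  // (o) origin-system transaction reference
        Map<String, String> metadata  // (o) custom fields / tags, e.g. collected by, location
) {}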

prashant280920 commented 3 weeks ago

Regarding the authentication and authorization requirements you mentioned in the Header Sections [Data Integrity requirements]: Kafka itself provides the configuration needed to meet them. It uses SSL/TLS to ensure that all broker/client and inter-broker network communication is encrypted. It also addresses the question of signature verification, since we can configure an SSL truststore and keystore to perform certificate verification. As already noted in the README.md, each publisher and consumer can enable this by setting the properties below whenever Kafka is configured for SSL/TLS connectivity:

org.killbill.billing.plugin.kafka.sslEnabled=true
org.killbill.billing.plugin.kafka.trustStoreLocation=/Users/prashant.kumar/Downloads/keystore1.jks
org.killbill.billing.plugin.kafka.trustStorePassword=password
org.killbill.billing.plugin.kafka.keyPassword=cashfree
org.killbill.billing.plugin.kafka.keyStoreLocation=/Users/prashant.kumar/Downloads/keystore1.jks
org.killbill.billing.plugin.kafka.keyStorePassword=password
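For reference, this is roughly how the properties above would map onto the standard Kafka client SSL settings when the consumer is built. The broker address and file paths are placeholders; the config constants come from the stock Kafka client library.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SslConfigs;

// Sketch: plugin properties -> standard Kafka client SSL configuration
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");              // placeholder broker
props.put("security.protocol", "SSL");                                          // sslEnabled=true
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/keystore1.jks"); // trustStoreLocation
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password");               // trustStorePassword
props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/keystore1.jks");   // keyStoreLocation
props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "password");                 // keyStorePassword
props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "cashfree");                      // keyPassword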

Anyone using Kafka also wants these data integrity guarantees, and Kafka already provides them; it just has to be configured to be highly secure. I suggest you go through the Confluent documentation; it will resolve all of the data integrity requirements you listed.

Regarding the Body section, most parts are clear except for three fields:

  1. Message ID: This likely corresponds to the trackingId used in the rolled_up_usage table to track the message (see the sketch after this list). Is that correct?
  2. Usage Source Reference: What is the purpose of the usage source reference? Is it used for the createdBy field in the rolled_up_usage table?
  3. Meta Data: After consumption, where will this metadata be stored?
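For context on item 1, here is a sketch of how I imagine the consumed message's Message ID flowing into the trackingId when usage is recorded through Kill Bill's UsageUserApi. This is an assumption about the mapping, and the exact constructor shapes vary between Kill Bill versions.

import java.math.BigDecimal;
import java.util.List;
import java.util.UUID;
import org.joda.time.DateTime;
import org.killbill.billing.usage.api.SubscriptionUsageRecord;
import org.killbill.billing.usage.api.UnitUsageRecord;
import org.killbill.billing.usage.api.UsageRecord;

// In practice these values would come from the consumed message
UUID subscriptionId = UUID.randomUUID();
String messageId = "msg-0001";

// Sketch: messageId becomes the trackingId of the rolled-up usage record
UsageRecord point = new UsageRecord(DateTime.now(), BigDecimal.TEN);
UnitUsageRecord unitRecord = new UnitUsageRecord("api-calls", List.of(point));
SubscriptionUsageRecord record = new SubscriptionUsageRecord(subscriptionId, messageId, List.of(unitRecord));
// usageUserApi.recordRolledUpUsage(record, callContext);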
shaun-forgie commented 3 weeks ago

I make a distinction between two systems - Sender and Origin - because the sending system is often not the system where the usage data was originally captured or produced. In large, complex environments it is often important to make that distinction.

The security mechanisms you have listed are certainly valid for identifying the sending system, but if the content being sent came from another source, then being able to sign the content itself is useful.
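To illustrate the distinction: TLS authenticates the connection of the Sender, while a detached signature travels with the content and identifies the Origin, even when the message is relayed. A minimal sketch, assuming an HMAC key shared with the origin system and a hypothetical header field to carry the result:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: the Origin signs the payload; the signature rides in a message
// header (e.g. a hypothetical "origin-signature" field) and can be verified
// regardless of which Sender relayed the message.
public final class OriginSigner {
    public static String sign(String payload, byte[] originKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(originKey, "HmacSHA256"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}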

We can however deal with this after the first release.