Closed dianagriffin closed 10 months ago
Documenting results of engineering discussion:
- `domain-cc` folder. If the VRO Team implements this, we need a knowledge transfer and documented software requirements. Otherwise, Team CC can implement this with minimal infrastructure/DevOps support from the VRO Team.
- `svc-bie-kafka` folder. One possibility is that this platform service would listen on RabbitMQ for Kafka-subscription requests and send Kafka-event notifications via RabbitMQ to the subscriber. This strategy would minimize the domain's dependence on the VRO platform to implement a general BIE Kafka client, freeing the VRO Team to focus on setting up the infrastructure for connecting to non-prod and prod BIE Kafka, setting up a Slack connection for non-prod and prod, etc.
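As a rough sketch of the second option, the two RabbitMQ messages could look like the following. Queue names, field names, and the JSON shapes here are illustrative assumptions, not a spec; the actual contract would come out of the tech spec.

```python
import json

# Hypothetical message shapes for the RabbitMQ bridge described above.
# All names (action, topic, replyQueue) are assumptions for illustration.

def build_subscribe_request(topic: str, reply_queue: str) -> str:
    """Message a domain sends to svc-bie-kafka to register interest in a Kafka topic."""
    return json.dumps({"action": "subscribe", "topic": topic, "replyQueue": reply_queue})

def build_event_notification(topic: str, payload: dict) -> str:
    """Message svc-bie-kafka forwards to the subscriber's queue when a Kafka event arrives."""
    return json.dumps({"topic": topic, "event": payload})

req = build_subscribe_request("TST_CONTENTION_BIE_CONTENTION_UPDATED_V02", "xample-kafkaEventQ")
notification = build_event_notification("TST_CONTENTION_BIE_CONTENTION_UPDATED_V02", {"claimId": 123})
```

The point of the JSON envelope is that the domain never needs a Kafka client library at all; it only speaks RabbitMQ.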
" to have the capability to subscribe to [some kid of] events" " to consume [some kid of] events"
Just to clarify, do these two quotes mean the same thing? If not, an explanation of what the author means may make the story clearer. If they are the same, editing the story to use consistent language would, IMHO, make it easier to understand.
So the point of this is to listen to some types of events in Kafka and forward them to Slack channels? Forgive me if it's an obvious question, but why? Why to Slack? So that what? So that a "Contention Classification team member . . . can parse and analyze that data in measuring the accuracy of my contention classification service"? Are they to do it by hand? As in, a human will be doing data entry from Slack into some form?
@engineer-plus-plus They are not exactly the same; maybe they are "2 sides of the same coin"? Basically, something needs to subscribe/register-to-listen to those events, and something needs to consume/handle the events as they come in.
I'm providing the following update to help @dianagriffin create sprint tickets. Further details will be provided as a result of the tech spec from #1662.
New approach: Implement a general (non-BIE-specific) Kafka mock and client (tasks 2, 3, and 4); then update them to work with BIE's Kafka service (tasks 5, 6, 7, and 8).
While there are more tasks than in my original proposal, the total effort is less. The tasks are smaller in scope and have fewer task dependencies, allowing us to get more done earlier. For example, tasks 1, 2, 3, and 9 can be started in parallel.
- `mocks/mock-bie-kafka/src/docker/`
- `svc-bie-kafka` Gradle subproject
- `domain-xample/xample-workflows/`
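To make the "general (non-BIE-specific) Kafka mock" idea concrete, here is a toy in-memory stand-in. This is only an illustration of the publish/subscribe behavior the mock needs to expose; the real mock would be a containerized Kafka-compatible broker under `mocks/mock-bie-kafka/`, not this class.

```python
from collections import defaultdict

class MockKafka:
    """Toy in-memory broker illustrating the mock's publish/subscribe contract.

    Purely a sketch: the real mock-bie-kafka is a Docker-based Kafka stand-in.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callback for a topic (any topic name, nothing BIE-specific).
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler subscribed to the topic.
        for handler in self._subscribers[topic]:
            handler(event)

received = []
broker = MockKafka()
broker.subscribe("test-topic", received.append)
broker.publish("test-topic", {"id": 1})
broker.publish("other-topic", {"id": 2})  # no subscriber; dropped
```

Because nothing in the interface is BIE-specific, the same client code can later be pointed at BIE's real Kafka (tasks 5-8) without structural changes.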
```mermaid
graph TD
  subgraph Kafka
    subgraph BIE[BIE env]
      bie-kafka[kafka-server]
    end
    subgraph test_env[Test env]
      mock-kafka[mock-kafka-server]
    end
  end
  kafka-client -->|subscribe| bie-kafka -.->|event| kafka-client
  subgraph VRO
    subgraph svc-bie-kafka
      kafka-client -.-> kafkaEvent-notifier
    end
    subgraph xample-workflow
      subscribe(subscribe to kafka topic)
      event-handler(kafkaEvent-handler) -.-> saveToDB
    end
    subscribe --> subscribeQ
    subscribeQ[\subscribe Queue/] --> svc-bie-kafka
    kafkaEvent-notifier -.-> kafkaEventQ[\Queue for kafkaEvent/] -.-> event-handler
    saveToDB -.-> DB
    DB[("Platform\n(DB)")]
  end
  DB --> DataDog
  style DB fill:#aea,stroke-width:4px
  style svc-bie-kafka fill:#AAF,stroke-width:2px,stroke:#777
  style xample-workflow fill:#FAA,stroke-width:2px,stroke:#777
  style test_env fill:#AAA,stroke-width:2px,stroke:#777
  style Kafka fill:#EEE,stroke-width:2px,stroke:#777
```
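The right-hand half of the diagram (kafkaEvent-notifier → kafkaEventQ → kafkaEvent-handler → saveToDB) can be walked through with in-memory stand-ins. This is only a simulation of the flow; the real queue is RabbitMQ and the real sink is the Platform DB.

```python
from collections import deque

# In-memory stand-ins for the diagram's components (illustrative only).
kafka_event_q = deque()   # kafkaEventQ: the RabbitMQ queue for Kafka events
db = []                   # Platform DB stand-in

def kafka_event_notifier(event):
    """svc-bie-kafka side: push each received Kafka event onto the queue."""
    kafka_event_q.append(event)

def kafka_event_handler():
    """xample-workflow side: drain the queue and persist each event (saveToDB)."""
    while kafka_event_q:
        db.append(kafka_event_q.popleft())

kafka_event_notifier({"topic": "TST_CONTENTION_BIE_CONTENTION_UPDATED_V02", "claimId": 123})
kafka_event_handler()
```

Note the dotted arrows in the diagram are this asynchronous event path; the solid arrows are the one-time subscription setup.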
This is an epic representing new feature development for VRO to consume contention events and make that data integration available to partner products in VRO.
User Stories
Requirements
Consume/subscribe to the following topics from the BIE service:
TST_CONTENTION_BIE_CONTENTION_ASSOCIATED_TO_CLAIM_V02
TST_CONTENTION_BIE_CONTENTION_UPDATED_V02
TST_CONTENTION_BIE_CONTENTION_CLASSIFIED_V02
TST_CONTENTION_BIE_CONTENTION_DELETED_V02
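The four topic names above share a `TST_CONTENTION_BIE_` prefix and a `_V02` version suffix, so the consumer could derive an event type from the topic name. The helper below is hypothetical (the naming pattern is inferred only from the four topics listed, and the `TST_` prefix presumably differs per environment):

```python
# Hypothetical helper: map a BIE topic name to its event type.
# The prefix/suffix pattern is inferred from the listed TST_ topics only.
def event_type_from_topic(topic: str) -> str:
    name = topic.removeprefix("TST_CONTENTION_BIE_")
    return name.rsplit("_V", 1)[0]  # drop the version suffix, e.g. "_V02"

event_type = event_type_from_topic("TST_CONTENTION_BIE_CONTENTION_UPDATED_V02")
```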
Persist the events published to these streams somewhere that is queryable. We are interested in the following data elements for each event:
If events include other fields, it is likely fine to persist them, but please discuss with the product owner. Please don't persist PII; note that contention name/text can contain PII.
We don’t want this persistence to require a change to our ATO, so it should not exceed the retention policy we had in place for the data we persisted for RRD (which I believe was 90 days, but please check the code to confirm).
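A retention purge for the persisted events could look like the sketch below. The 90-day figure still needs to be confirmed against the RRD code, and the record shape (`receivedAt` timestamp) is an illustrative assumption:

```python
from datetime import datetime, timedelta, timezone

# Assumed retention window -- confirm the actual RRD policy in code.
RETENTION = timedelta(days=90)

def apply_retention(events, now=None):
    """Return only the events still within the retention window."""
    now = now or datetime.now(timezone.utc)
    return [e for e in events if now - e["receivedAt"] <= RETENTION]

now = datetime(2023, 6, 1, tzinfo=timezone.utc)
events = [
    {"id": 1, "receivedAt": now - timedelta(days=10)},
    {"id": 2, "receivedAt": now - timedelta(days=120)},  # past retention
]
kept = apply_retention(events, now=now)
```

In practice this would likely be a scheduled DB delete (or a Postgres partition drop) rather than an in-memory filter, but the cutoff logic is the same.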
Acceptance Criteria
Tickets within this epic
- #1662
- #1674
- #1675
- #1676
- #1677
- #1678
- #1679
- #1680
- #1682
- #1683
Not included in this work
This initial work will lay a foundation for future feature development using and analyzing contention event stream data. Further implementation of features/logic that are triggered by the event stream data will be covered by future tickets/epics.
Notes about work