This issue details the migration of mainframe over to RabbitMQ. We have decided to use a message queue in place of our existing structure, which uses the database itself as a queue, with mainframe dishing out jobs and receiving results via an HTTP API.
Changes Involved
Mainframe no longer distributes jobs; the loader will push jobs to an "incoming" queue in its place.
Results will no longer be received via the HTTP API. Instead, mainframe will pull results from an "outgoing" results queue, which the clients push to periodically, and write those results to the database.
Rather than having the database store the entirety of our scan data, store only the results of each scan.
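The two queue-facing changes above can be sketched roughly as follows. This assumes Python and the pika RabbitMQ client; the queue names "incoming" and "outgoing" come from this issue, but the message fields and the write_result() hook are hypothetical placeholders, not a settled schema.

```python
import json

def encode_job(job_id, target):
    """Serialize a job for the loader to push onto the "incoming" queue."""
    return json.dumps({"job_id": job_id, "target": target}).encode("utf-8")

def decode_result(body):
    """Parse a result message pulled from the "outgoing" queue."""
    return json.loads(body.decode("utf-8"))

def publish_job(channel, job_id, target):
    """Loader side: push a job to "incoming" (replaces mainframe dispatch)."""
    # Declare the queue as durable so queued jobs survive a broker restart.
    channel.queue_declare(queue="incoming", durable=True)
    channel.basic_publish(exchange="", routing_key="incoming",
                          body=encode_job(job_id, target))

def consume_results(channel, write_result):
    """Mainframe side: pull results from "outgoing" and persist them."""
    channel.queue_declare(queue="outgoing", durable=True)

    def on_message(ch, method, properties, body):
        write_result(decode_result(body))  # write only the scan result to the DB
        # Ack only after persisting, so an unacked result is redelivered.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="outgoing", on_message_callback=on_message)
    channel.start_consuming()
```

Acking after the database write (rather than using auto-ack) means a crash between pulling a result and persisting it causes redelivery instead of data loss, which matters now that the queue, not the database, is the system of record in flight.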