Story discovery engine for the Counterdata Network. It grabs relevant stories from various APIs, runs them against bespoke classifier models, and posts the results to a central server.
The project is currently deployed via a number of manual steps. It now involves several technologies and should be more automated:
- the story-processor Celery-based worker
- the RabbitMQ work queue
- the cron-initiated story fetchers
- the story dashboard Streamlit web app
- the Postgres story-processor database
- (potentially a new URL cache)
This would be far easier if we could deploy it via some automation tool. Perhaps Docker Swarm? Other ideas? We need a proposal for how to do this effectively, one that meets our needs and capabilities, and then we need to do the work to implement that plan.
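As one concrete starting point for the proposal, the components listed above could be captured in a single Docker Compose file (plain Compose may be enough before reaching for Swarm). A rough sketch, assuming a single project Dockerfile and hypothetical module and file names (`processor`, `dashboard.py`) that would need to match the real codebase:

```yaml
# Hypothetical docker-compose.yml sketch. Service names, images, commands,
# and environment variables are illustrative assumptions, not the project's
# actual configuration.
services:
  rabbitmq:                      # the RabbitMQ work queue
    image: rabbitmq:3-management
  db:                            # the Postgres story-processor database
    image: postgres:15
    environment:
      POSTGRES_DB: story-processor
    volumes:
      - pgdata:/var/lib/postgresql/data
  worker:                        # the Celery-based story-processor worker
    build: .
    command: celery -A processor worker --loglevel=info
    depends_on: [rabbitmq, db]
  fetcher:                       # the story fetchers (scheduled instead of host cron)
    build: .
    command: python -m processor.fetch_stories
    depends_on: [rabbitmq, db]
  dashboard:                     # the Streamlit story dashboard
    build: .
    command: streamlit run dashboard.py
    ports:
      - "8501:8501"
volumes:
  pgdata:
```

With something like this in place, the whole stack would come up with `docker compose up -d`, and a later move to Swarm (`docker stack deploy`) could reuse much of the same file.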