Finding civilized conversation in a more polarized age
Thank you for visiting the EchoBurst repository. This README details the broad purpose and objectives of the project, as well as where to find more detailed information on its different aspects.
After a two-year hiatus, I finally feel I have the time, the experience and skills, and the data to make a real run at this project. The repo will be updated incrementally over the next month or so until summer break begins and I hopefully have time to dedicate to the project's technical development.
In days past, we had limited choices in what media we consumed and who we interacted with. This limited reach forced us to be less selective: we would talk to those who were closest, watch the channels we could afford and read the papers printed in our area. With the dawn of the internet, many hoped we would see an expansion in the range of views each person was exposed to. But we have found the opposite to be the case: since we can now choose from a functionally unlimited number of perspectives, we can customize who and what we interact with to fit whatever views we already have. This creates echo chambers of unparalleled fortitude, which greatly narrows our perspective and makes it easier to accept fake or misleading stories that align with our established worldview. It has become increasingly important that we find a way to encourage a more diverse media diet, and that we find ways to check the stories we hear against a diverse set of established sources.
This problem is too large to be solved by any one effort, but we hope to contribute in some small part. We hope to do this by creating a tool that pushes back against the social media paradigm of increasingly isolated echo chambers and growing distrust and animosity towards those who dissent from our established beliefs. As explained in the project description, we aim to make it easier to find comments that promote civil discussion (i.e., are not simply toxic, but contribute to the conversation) yet oppose the view of the user. In this way, we hope that users will be able to expand their horizons without having to sift through hundreds of destructive and potentially hateful comments to find them. Additionally, to prevent false equivalence between all positions and stories, this will be coupled with NLP-enabled fact checking and fake news detection. This process will rely on a wide range of established news stories, and we'll be working to ensure that it is done transparently and is loyal only to the truth, as much as it can be established.
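To make the core idea more concrete, here is a minimal, purely illustrative sketch of how two text classifiers (one for toxicity, one for stance on a topic) might be combined to surface comments that are civil but disagree with the user. The models, labels and training examples below are toy placeholders, not the project's actual pipeline or data.

```python
# Illustrative sketch only: toy classifiers trained on a handful of
# placeholder examples, combined to surface civil-but-opposing comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy toxicity classifier: 1 = toxic, 0 = civil.
toxicity_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
toxicity_clf.fit(
    ["you are an idiot", "shut up, troll",
     "I see your point, but the data suggests otherwise",
     "thanks for sharing, here is another study to consider"],
    [1, 1, 0, 0],
)

# Toy stance classifier for a single topic: 1 = supports, 0 = opposes.
stance_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
stance_clf.fit(
    ["the policy clearly works", "strong evidence in favour of the policy",
     "the policy has failed", "the evidence points the other way"],
    [1, 1, 0, 0],
)

def civil_opposing(comments, user_stance=1):
    """Return comments predicted to be civil but opposed to the user's stance."""
    keep = []
    for comment in comments:
        is_civil = toxicity_clf.predict([comment])[0] == 0
        opposes = stance_clf.predict([comment])[0] != user_stance
        if is_civil and opposes:
            keep.append(comment)
    return keep

print(civil_opposing([
    "you are an idiot",
    "the policy has failed, and here is why",
    "strong evidence in favour of the policy",
]))
```

In practice, a production version would presumably rely on much larger labelled corpora and stronger models than this TF-IDF/logistic-regression toy, but the filtering logic of combining a civility signal with a stance signal stays the same.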
In an age where political and scientific discourse can literally reshape the face of the planet, our unwillingness to communicate with those we disagree with has caused views to polarize to an astounding degree, and discussion has broken down. If we isolate ourselves from everyone who disagrees with us, we greatly reduce our collective ability to effect change. It is the thesis of this project that most people generally want the same thing: a better, healthier, fairer and safer world. Often we simply differ on how we believe this can be accomplished. Even in cases where prejudice and distrust infect our discourse, exposure and interaction between hostile groups often lead to the discovery of shared ground.
For details of how to contribute, please see our CONTRIBUTING page. Anyone interested is encouraged to contribute, and we especially need expertise in NLP and in how these models can be effectively integrated into a web extension.
Our concrete short- and medium-term goals have been posted in the Roadmap issue, where they can be discussed, checked off and modified as progress is made. This is of course subject to change, but we're hoping to follow the general timeline set out there. A more general set of development stages can be found in the wiki.
The wiki has been updated and now contains an outline of the project's new structure, the planned machine learning components, and a very rough sketch of the desired stages of development.