Open jwzimmer-zz opened 3 years ago
Other questions (not that fleshed out; I meant to write them up nicely but ended up copy-pasting messy things from the Colab notebook to save time):
Top priority (per discussion with @nguyenhphilip on 11/1):
Subsequent priorities:
Stretch goals:
Visualization ideas/ thoughts:
Ideas from talking with Juniper Lovato:
Some random ideas I had while thinking about why studying tropes is relevant:
Tropes are particular abstractions with generally agreed-upon meanings, used to convey ideas and facilitate their communication within the structure of some larger/collective narrative.
Can we predict the future state of a system based on the actors (individual tropes) and the structures (meta-tropes) that define it?
Do stories defined by similar sets of meta- and individual tropes follow the same developmental arc, i.e. have the same or similar outcomes?
If not, why?
What are the most important components of this story?
Are more popular tropes more convincing? Are they better representations of the phenomena they depict?
Notes from talking with @janeadams and @nguyenhphilip:
Overall to-do:
=== Links from @janeadams ===
You could probably create (or ask Melissa to create) a #p_tvtropes channel here, or ask Peter to add Phil to the compstorylab Slack; that might make it easier to pull other people (e.g. Laurent) into the project on an as-needed basis and to share charts with curious folks.
Here's a link dump:
- Nadieh Bremer's "Why do cats and dogs?": https://whydocatsanddogs.com/cats
- The design process for that viz: https://www.visualcinnamon.com/2019/04/designing-google-cats-and-dogs
- Happiness scores from Hedonometer: https://hedonometer.org/api.html
- networkx clustering: https://networkx.org/documentation/stable//reference/algorithms/generated/networkx.algorithms.cluster.clustering.html
- My ACM IUI paper using the backbone method: https://www.overleaf.com/project/5f778826c6077c00013f5499
- Python implementation of the backbone method: https://github.com/aekpalakorn/python-backbone-network
- GraphGen tool for SQL (have not used): https://medium.com/district-data-labs/graph-analytics-over-relational-datasets-with-python-89fb14587f07
- Laurent's onion decomposition: https://arxiv.org/pdf/1510.08542.pdf
- Someone else's Python implementation of onion decomposition: https://github.com/junipertcy/onion_decomposition
- networkx + plotly to create an interactive network graph with nodes colored by [centrality, # of connections]: https://plotly.com/python/network-graphs/
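For the networkx clustering and "color nodes by centrality" links above, here is a minimal sketch. The toy trope graph and its weights are hypothetical, made up purely for illustration; in the plotly recipe linked, the centrality values would feed the node color scale.

```python
import networkx as nx

# Toy trope co-occurrence graph (hypothetical tropes and counts)
G = nx.Graph()
G.add_weighted_edges_from([
    ("ChekhovsGun", "Foreshadowing", 5),
    ("ChekhovsGun", "RedHerring", 2),
    ("Foreshadowing", "RedHerring", 3),
    ("TheHerosJourney", "CallToAdventure", 4),
])

# Local clustering coefficient per node
# (networkx.algorithms.cluster.clustering, linked above)
clust = nx.clustering(G)

# Degree centrality, usable as the node-color scale in a plotly scatter
cent = nx.degree_centrality(G)

for node in G:
    print(node, round(clust[node], 2), round(cent[node], 2))
```

The three tropes forming a triangle get clustering coefficient 1.0, while the isolated pair gets 0; swapping in `nx.betweenness_centrality` or raw degree for the color scale is a one-line change.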
And we talked about:
- Creating a network graph where nodes are user-generated indexes and links are weighted by the total number of connections between all tropes in each index cluster.
- Creating adjacency matrices or heatmaps of within-index, trope-to-trope connections, with edges weighted by the number of times each trope-trope connection occurs.
- Network sparsification to include only: tropes that co-occur often; tropes with a minimum centrality measure (like the Stanford demo here: https://dhs.stanford.edu/social-media-literacy/tvtropes-pt-1-the-weird-geometry-of-the-internet/); or some other sparsification method (e.g. the backbone method, https://arxiv.org/abs/0904.2389, or onion decomposition).
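As a concrete illustration of the backbone-method sparsification mentioned above, here is a minimal sketch of the disparity filter from arXiv:0904.2389. The trope names and co-occurrence counts are made up, and for real use the python-backbone-network implementation linked above is probably the better starting point:

```python
import networkx as nx

def disparity_backbone(G, alpha=0.05):
    """Sketch of the disparity-filter backbone (arXiv:0904.2389).

    Keeps an edge if its p-value under the disparity filter is below alpha
    from the perspective of at least one endpoint. Assumes positive weights.
    """
    keep = set()
    for u in G:
        k = G.degree(u)
        if k < 2:
            continue  # the null model is degenerate for degree-1 nodes
        strength = sum(d["weight"] for _, _, d in G.edges(u, data=True))
        for _, v, d in G.edges(u, data=True):
            # p = (1 - w/s)^(k-1): chance a random split of u's strength
            # over k edges yields a single edge at least this heavy
            p = (1 - d["weight"] / strength) ** (k - 1)
            if p < alpha:
                keep.add(frozenset((u, v)))
    backbone = nx.Graph()
    backbone.add_nodes_from(G.nodes())
    backbone.add_edges_from(
        (u, v, d) for u, v, d in G.edges(data=True) if frozenset((u, v)) in keep
    )
    return backbone

# Hypothetical co-occurrence counts: one dominant edge plus several
# weak edges hanging off the same hub trope.
G = nx.Graph()
G.add_weighted_edges_from([
    ("LoveTriangle", "SecondLoveInterest", 100),
    ("LoveTriangle", "RedHerring", 1),
    ("LoveTriangle", "Foreshadowing", 1),
    ("LoveTriangle", "ChekhovsGun", 1),
    ("LoveTriangle", "CallToAdventure", 1),
])
backbone = disparity_backbone(G, alpha=0.05)
print(sorted(backbone.edges()))  # only the dominant edge survives
```

Unlike a flat weight threshold, the disparity filter judges each edge relative to its endpoint's total strength, so locally important but globally light edges can survive.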
Want to make sure we're not retreading what has already been done in the dhs.stanford.edu article series...
Seeing if we get the same categories as in https://github.com/jwzimmer/tv-tropes/tree/main/Stanford_Neighborhoods would be interesting, especially since:
(Closed https://github.com/jwzimmer/tv-tropes/issues/7 because the topic there has been subsumed by this issue.)
Now that we've got all this info, what do we want to do with it? Let's lay out the options, then run them by Prof. Cheney for feedback before we put too much time into actually analyzing things.
Things we could do (no bad ideas!):
Advice on visualization from Jane Adams: