xehu closed this issue 3 months ago
Just documenting the current link to the old Fracture repo: https://github.com/StanfordHCI/predicting-team-viability/tree/master/inference/Fracture%20data/Fracture%20data
One thing we might consider: we could extract the final submissions from the text chat and evaluate them ourselves. Most of the submissions appear in the chat text itself, so we would just need to (1) create some kind of evaluation rubric; and (2) use it to score the tasks. A rough sketch of the extraction step is below.
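Here is a minimal sketch of what that extraction could look like, assuming the cleaned conversation CSV has columns along the lines of `conversation_num`, `speaker_nickname`, `message`, and `timestamp` (those names and the keyword-based scorer are placeholders, not the actual schema or rubric):

```python
import pandas as pd

# Load the cleaned Fracture chat data (column names below are assumptions).
chats = pd.read_csv("fracture_conversations.csv")

# Take each team's last chat message as a rough proxy for its final submission.
final_submissions = (
    chats.sort_values("timestamp")
         .groupby("conversation_num", as_index=False)
         .last()[["conversation_num", "speaker_nickname", "message"]]
)

# Placeholder rubric scorer: counts a few indicative phrases.
# A real rubric would encode the actual task criteria instead.
def score_submission(text: str) -> int:
    rubric_keywords = ["final answer", "we decided", "our solution"]
    return sum(kw in text.lower() for kw in rubric_keywords)

final_submissions["rubric_score"] = (
    final_submissions["message"].astype(str).map(score_submission)
)
print(final_submissions.head())
```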
This leads to a question:
Closing this, as it is about a previous version of the project and is no longer related to the toolkit.
@markwhiting Currently, I have a clean version of the Fracture data (cleaned up from https://github.com/Watts-Lab/team-process-map/blob/main/feature_engine/data/raw_data/fracture_data_raw.json.txt into this CSV: https://github.com/Watts-Lab/team-process-map/blob/main/feature_engine/data/raw_data/fracture_conversations.csv). However, the only dependent variable I have for Fracture is whether or not the team fractured; I have no insight into how the team performed on the actual task.
Do you happen to still have any of the original data from which it would be possible to extract that dependent variable information? It would definitely make it easier to compare these tasks apples to apples!
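For context, here is a minimal sketch of how the cleaned conversations could be joined to a team-level outcome, assuming a `conversation_num` key in `fracture_conversations.csv` and a hypothetical labels file (`fracture_labels.csv`) holding the fractured-or-not flag; the file name and column names are placeholders:

```python
import pandas as pd

chats = pd.read_csv("fracture_conversations.csv")
# Hypothetical labels file: one row per team with columns conversation_num, fractured (0/1).
labels = pd.read_csv("fracture_labels.csv")

# One row per team: a simple chat-level summary joined to the only available DV.
per_team = (
    chats.groupby("conversation_num")
         .agg(num_messages=("message", "size"))
         .reset_index()
         .merge(labels, on="conversation_num", how="left")
)
print(per_team.head())
```

A task-performance score, if it can be recovered from the original data, would slot into the same join as a second outcome column.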