Closed zkokaja closed 1 year ago
We are updating the code a lot, so it would be great to have even just some really simple tests to run to ensure nothing breaks. Let's brainstorm ideas, and decide what to apply them to: podcast, 247, glove, or just one conversation? How do we evaluate success?
This is discussed here https://github.com/hassonlab/247-pickling/issues/131