This was an experiment in surfacing resources common to related titles, to encourage authors to find content to add to their casebooks. Since its release it has been available only to admins, which means the number of tabs along the top of a casebook varies between internal and external users, potentially affecting UI choices.
For very large casebooks, the feature doesn't load quickly enough (see #1433). For others, it doesn't provide very relevant results.
Here is some sample output from this feature on prod:
14th Amendment Course (a very large book): Rendered the page but generated 1,900 recommended resources.
Torts!: Generated 4,000 recommended resources, including private/test casebooks like one from @cath9.
Small casebook with few resources: "Legal Documents present in similar casebooks, but not in this one. Unable to find related content" [sic]
Recommending we delete this functionality for now and return to it later with specific research questions in mind, e.g.:
How should we best determine what a "good" recommendation is, given a small number of casebooks?
How can we use usage data to inform recommendations? Currently this looks only at the number of casebooks, not whether or how they are used.
What pool of casebooks should be sampled for recommendations? Casebooks with many clones both skew the dataset and offer a potential signal. Should only public and/or professor-authored casebooks "count"?
Should this functionality be entirely automated/data-driven, or does it work best when combined with human analysis and curation?
How do we test the feature, ideally in front of users, and plan a go-live strategy (so an experiment doesn't remain in limbo for too long)?