Closed: kevinreiss closed this issue 1 year ago
Related to #185.
If it isn't too expensive, I think adding the template that does the COinS generation to each result in the hit list should cover this. Cornell tries to support this through an unAPI server, but that seemed buggy when I tried it in their system.
@tampakis I'll assign this one to myself.
We were thinking it might be cool to index this URL into Solr with Traject.
Yeah, better to do this as one big string in a stored Solr field at index time than having to return a bunch more fields on the show page and re-build (and escape, etc., etc.) that string on the fly.
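As a rough illustration of what that index-time string might look like, here's a minimal Ruby sketch that builds an OpenURL 1.0 KEV context object from a few record fields. The `openurl_kev` helper and its keyword arguments are assumptions for illustration, not the actual Traject indexing code:

```ruby
require "cgi"

# Hypothetical sketch: build an OpenURL 1.0 key-encoded-value (KEV) string
# for a book record at index time, so it can be stored whole in a Solr field
# and printed on demand without re-parsing MARC.
def openurl_kev(title:, author:, isbn: nil, date: nil)
  pairs = {
    "ctx_ver"     => "Z39.88-2004",
    "rft_val_fmt" => "info:ofi/fmt:kev:mtx:book",
    "rft.genre"   => "book",
    "rft.btitle"  => title,
    "rft.au"      => author
  }
  pairs["rft.isbn"] = isbn if isbn
  pairs["rft.date"] = date if date
  # URL-encode each key and value, then join into one KEV query string.
  pairs.map { |k, v| "#{CGI.escape(k)}=#{CGI.escape(v)}" }.join("&")
end
```

In a real Traject config this would run once per record inside a `to_field` block, with the values pulled from the MARC fields.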
This would be great because any third parties that might query our Blacklight via API would have ready access to an OpenURL for the resource that we control. This is definitely useful for integrations with things that speak OpenURL like Aeon and ILLiad.
OK, it's settled then. ;-)
Adding this to the index view allows Zotero to scrape the listings on all search results, but re-parsing the MARC record for each entry is rather expensive. If we want to pursue adding an OpenURL key to the Solr doc that we can just print out for each entry on demand, this is a good way to support Zotero. If that seems too cumbersome, there are other ways to do it we can pursue.
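For reference, the COinS hook Zotero scrapes is just an empty span whose `title` attribute carries the OpenURL KEV string. A minimal sketch, assuming the KEV string is already stored on the Solr doc (the `coins_span` helper name is hypothetical):

```ruby
require "cgi"

# Hypothetical helper: wrap a stored OpenURL KEV string in the COinS span
# that Zotero and other COinS-aware tools look for. The "Z3988" class name
# is the COinS convention; the KEV must be HTML-escaped since it contains "&".
def coins_span(stored_kev)
  %(<span class="Z3988" title="#{CGI.escapeHTML(stored_kev)}"></span>)
end
```

Emitting one such span per hit on the results page is cheap if the KEV is precomputed at index time.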
Received a feedback item on Zotero:
"""In using the Zotero "Save To" button (for Firefox), the new catalog results in slightly different data being captured than the old catalog. In my brief survey of variations, the captured data from the old catalog is superior (more data captured and cleaner versions of it)."""
Related to #256.
The routine Zotero uses to scrape citations is at https://github.com/zotero/translators/blob/master/Library%20Catalog%20(Voyager).js. That file scrapes a MARC record from Voyager and feeds it to Zotero's default importer, which has a MARC parsing option. Instead of improving BL's internal OpenURL generation mechanism, another option is to prepare a Zotero translator for our Blacklight instance. However, then we'd be on the hook for maintaining it. I'm not sure our approach to handling the full MARC record in OL is generalizable across all BL instances, so I'm not sure we'd be able to contribute a generic Blacklight translator to the Zotero project, which makes this approach less desirable. Still, this would be the most reliable option for "power" users, since they'd get the same results in either system: all either one does is feed the MARC data to Zotero and let it do what it wants with it.
Perhaps the unapi gem could help us support this; it's what Cornell uses, though it would need to be updated for BL 6. https://github.com/cbeer/blacklight_unapi. @tampakis, do you think this is worth trying? At least in the Cornell implementation, this plugin handles the online links in the fashion suggested by some public service librarians.
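For context on what the unapi gem wires up: unAPI autodiscovery works by advertising the server in a `<link>` element in the page head and tagging each record with an `abbr` element carrying its identifier. A hypothetical sketch of that markup (the endpoint path and record id here are made up for illustration):

```ruby
# Hypothetical sketch of the markup unAPI clients like Zotero look for.
# The endpoint and record id are placeholders, not our actual routes.
def unapi_link_tag(endpoint)
  %(<link rel="unapi-server" type="application/xml" title="unAPI" href="#{endpoint}"/>)
end

def unapi_id_tag(record_id)
  %(<abbr class="unapi-id" title="#{record_id}"></abbr>)
end
```

The client then fetches `endpoint?id=...` to list available formats and `endpoint?id=...&format=...` to retrieve the record.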
I think this makes sense.
See 38e18801add4. Close this when that goes to production.
@tampakis Did we want to stick with RIS or fall back to MARC? If we want to do MARC, I'll open a separate issue to investigate what to do with non-Voyager records.
I think MARC would probably be ideal. That way we can leave it to Zotero to parse out the fields it wants.
We are still using the RIS format via the unAPI integration. MARC still seems like it would be more robust. The addition of the SCSB records (whose MARC isn't currently readily available) complicates this.
Currently, individual record pages support Zotero scraping a record's bib information via COinS. Zotero cannot do the same scraping on the search results page. Staff expect that it will; they can do this in the current discovery systems we have.