Open jmuchovej opened 1 year ago
Hey, would you mind elaborating a bit more about what you mean by querying for a URL?
Do you mean searching for URLs through the search box as well? That is, when you search for a text's link, the text shows up?
By "querying a URL" I meant pulling the BibTeX database from a URL (e.g., https://paperpile.com/eb/MGDkEJAdLM which then provides a BibTeX file that could be parsed).
In my case, anytime I need to update my BibTeX database, I have to query a similar URL (e.g., via `wget <url>`), then do a "reindex DB". But presumably "reindex DB" could handle this all from within Logseq. (Of course this could introduce further lag, but that's something I signed up for. ¯\\_(ツ)_/¯)
Ahh, that makes sense. Perhaps local caching could be implemented, but this definitely does fall within the scope of the plugin.
The tradeoff is that reindexing could introduce lag, but local caching should help with that, and the reindexing could perhaps be manual, or daily, or something similar.
Overall though, I think this does fit within scope. A PR would be appreciated!
By "querying a URL" I meant pulling the BibTeX database from a URL (e.g., https://paperpile.com/eb/MGDkEJAdLM which then provides a BibTeX file that could be parsed). In my case, anytime I need to update my BibTeX database, I have to query a similar URL (e.g., by wget
, then do a "reindex DB" â but presumably "reindex DB" could handle this all from within Logseq. (Of course this could introduce further lag, but that's something I signed up for. ÂŻ(ă)/ÂŻ) â Reply to this email directly, view it on GitHub, or unsubscribe. You are receiving this because you commented.Message ID: @.***>
Hmm. I'm not sure what you mean by local caching.

The way I have things set up, when the BibTeX DB is a URL, every "reindex DB" pulls that URL. I know that `wget` has some way of checking whether the contents differ (but a checksum doesn't seem applicable, since the Paperpile BibTeX dump is more like a set than a list), so I need to see whether `axios` has something similar.
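A set-style comparison like the one described might look roughly like the following sketch (assumed code, not the plugin's; the naive splitting of the dump into entries is an assumption about the dump's shape):

```typescript
// Compare two BibTeX dumps as *sets* of entries, since Paperpile's export is
// unordered; a plain checksum would flag harmless reorderings as changes.
function entrySet(bib: string): Set<string> {
  // Naively split on lines starting with "@" and normalize whitespace.
  return new Set(
    bib
      .split(/^(?=@)/m)
      .map((e) => e.trim().replace(/\s+/g, " "))
      .filter(Boolean)
  );
}

export function dumpsDiffer(oldBib: string, newBib: string): boolean {
  const a = entrySet(oldBib);
  const b = entrySet(newBib);
  if (a.size !== b.size) return true;
  for (const e of a) if (!b.has(e)) return true;
  return false; // same entries (possibly reordered): no reindex needed
}
```

Only when `dumpsDiffer` returns true would the plugin need to re-parse and reindex.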
Essentially, this lag would still be manually triggered by the user. (This seems acceptable: for instance, if I'm reading a paper and come across a few references I want to read, I often add them to Paperpile and want the database refreshed immediately afterward.)

If what you meant by local caching is that the plugin operates as usual (rather than the DB needing to be reindexed every time a user opens the citation window), then I didn't mean to communicate that a reindex would be triggered on every "create lit ref", etc. That's definitely not a good default behavior, because it'll slow folks down.
Ahh yes, that's what I was going for as well. I just meant the reindex should not happen on load, but only when manually triggered through the command palette (unless the user has "reindex DB on start" enabled). Looks good otherwise.
Question:
Does this mean that citations with URLs (rather than Zotero's built-in citation code) are possible without force-importing every item from Zotero (assuming it has a lot of scrap or soon-to-be-consolidated material at the moment)?
Or, to put it another way: just because Better BibLaTeX has a record does not mean I want it recorded in Logseq, since I will delete the Zotero entry if it is seen as redundant. How does one reconcile that?
I'm not sure I understand what you're asking, but perhaps this will clear things up: the URL here is to the `.bib` file, not to URLs within citations.
@jmuchovej thanks for the response, here is a bit of background:
- How can I get it to do a dynamic import based on the ones I have referenced in other notes?
No part of `logseq-citation-manager` (currently) does auto-imports.
To do this, you'd probably need to query the Logseq API.
- (Alternatively, maybe tag the items that have not been fully "sorted out" yet as "scraps", then manually delete them once they have been reviewed in the notes?)
That could ease the search, but since there's no auto-import in the plugin, you'd probably still need to import manually.
Aside: Logseq has first-party support for Zotero. Have you checked whether Logseq's built-in Zotero support can do this? (If so, disregard this question; just making sure it's been considered.)
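As a rough illustration of the "query the Logseq API" suggestion above: assuming the plugin could obtain each page's raw text through the API (not shown here), a hypothetical helper could collect only the citation keys actually referenced, so only those entries would be imported. The `[[@key]]` link format is an assumption for illustration, not necessarily how the plugin renders lit refs.

```typescript
// Collect the citation keys referenced across a set of pages' raw text.
// The [[@citekey]] wiki-link pattern is assumed, not confirmed.
export function referencedKeys(pages: string[]): Set<string> {
  const keys = new Set<string>();
  for (const text of pages) {
    // Match wiki-links of the form [[@citekey]].
    for (const m of text.matchAll(/\[\[@([^\]\s]+)\]\]/g)) {
      keys.add(m[1]);
    }
  }
  return keys;
}
```

A dynamic import could then intersect this set with the entries in the BibTeX dump, skipping everything unreferenced.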
I'm curious whether support for URLs is something within the scope of `logseq-citation-manager`? I use Paperpile to manage/store citations, and they've released a "Workflow" that supports querying a URL for a BibTeX file. I already have a working prototype, so I'm happy to open a PR if you think it's within scope; but I wanted to check before I clean things up and open the PR.
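At its core, the "querying a URL" idea means treating the configured database path as remote when it is an http(s) URL, and only then fetching it on "reindex DB" (e.g., with `axios` or `wget`, not shown here). A minimal, hypothetical sketch of that check (not the prototype's actual code):

```typescript
// Decide whether the configured BibTeX DB path is a remote dump that must be
// fetched, as opposed to a local .bib file read from disk.
export function isRemoteBib(path: string): boolean {
  return /^https?:\/\//i.test(path);
}
```

On a remote path, "reindex DB" would download the dump first and then parse it exactly as it would a local file.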