hoarder-app / hoarder

A self-hostable bookmark-everything app (links, notes and images) with AI-based automatic tagging and full text search
https://hoarder.app
GNU Affero General Public License v3.0
5.4k stars 169 forks

feature: import from omnivore #455

Closed mrinc closed 1 month ago

mrinc commented 1 month ago

Omnivore doesn't have an export feature, and it's a real PITA, so either docs on how to simply export from Omnivore or an import feature would be awesome....

I'll see if I can put something together and push a PR; otherwise, if you have any ideas on it, that would be great.

Thanks!

kamtschatka commented 1 month ago

Shouldn't you rather create an issue with Omnivore to get an export feature?

mrinc commented 1 month ago

There are a bunch of discussions already on this.

But this is twofold: if someone comes along trying to work out how to do it and searches the issues, this issue will show up.

So regardless of the outcome, it can at least help someone in the future :)

kamtschatka commented 1 month ago

OK, but then just link the discussions here and we'll close this issue. The Hoarder maintainers are not going to add a bookmark export to Omnivore ;-)

mrinc commented 1 month ago

That is unacceptable! I expect them to do it! (jk) :P

Annnddd .... here we go :)

#!/bin/bash

if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <API_KEY>"
    exit 1
fi

API_KEY="$1"
ENDPOINT="https://api-prod.omnivore.app/api/graphql"
OUTPUT_FILE="omnivore_bookmarks.html"

# Initialize the bookmarks file
echo '<!DOCTYPE NETSCAPE-Bookmark-file-1>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>' > "$OUTPUT_FILE"

# Fetch one page of results (100 items) from the Omnivore GraphQL search API.
# Takes an optional pagination cursor; empty means the first page.
fetch_page() {
    local after="$1"
    local query
    if [ -z "$after" ]; then
        query='{"query": "query { search(first: 100, query: \"\", includeContent: false) { ... on SearchSuccess { edges { node { id url title } } pageInfo { hasNextPage endCursor } } } }"}'
    else
        query="{\"query\": \"query { search(first: 100, after: \\\"$after\\\", query: \\\"\\\", includeContent: false) { ... on SearchSuccess { edges { node { id url title } } pageInfo { hasNextPage endCursor } } } }\"}"
    fi

    curl --silent --location "$ENDPOINT" \
         --header "Authorization: $API_KEY" \
         --header 'Content-Type: application/json' \
         --header 'Accept: application/json' \
         --data "$query"
}

# Append each returned bookmark as a Netscape-format <DT><A HREF=...> entry.
process_results() {
    local json="$1"
    echo "$json" | jq -r '.data.search.edges[] | "<DT><A HREF=\"\(.node.url)\">\(.node.title)</A>"' >> "$OUTPUT_FILE"
}

after=""
has_next_page=true

# Keep paging until the API reports there are no more results.
while [ "$has_next_page" = "true" ]; do
    response=$(fetch_page "$after")
    process_results "$response"

    has_next_page=$(echo "$response" | jq -r '.data.search.pageInfo.hasNextPage')
    after=$(echo "$response" | jq -r '.data.search.pageInfo.endCursor')

    echo "Processed a page. More pages: $has_next_page"

    # Optional: add a small delay to avoid hitting rate limits
    sleep 1
done

# Close the bookmarks file
echo '</DL><p>' >> "$OUTPUT_FILE"

echo "Bookmarks have been saved to $OUTPUT_FILE"

Create the script (bash) and run it, passing in your API key as the argument.

It'll generate a bookmarks file that you can import into hoarder.
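If it helps, here's a minimal usage sketch; the filename is arbitrary and the API key shown is just a placeholder:

# Save the script above as export_omnivore.sh (any name works), then:
chmod +x export_omnivore.sh
./export_omnivore.sh "YOUR_OMNIVORE_API_KEY"

# The result is a Netscape-style bookmarks file in the current directory,
# which you can then import from Hoarder's import screen:
ls -l omnivore_bookmarks.html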

mrinc commented 1 month ago

I didn't expect them to do it, hence it being more of a question about ideas.

But anywhoo, there is now a bash script that will export Omnivore links in a bookmark format that can be easily imported into Hoarder :)

So if anyone has to migrate later on, they can just run this ....

Only bash, curl, and jq are required, so a non-Python dev can also use it.
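If you want to sanity-check that those prerequisites are installed before running it, something along these lines should do (just a sketch; how you install them depends on your system):

# Verify curl and jq are on the PATH before running the export.
for cmd in curl jq; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        echo "Missing dependency: $cmd" >&2
        exit 1
    fi
done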

Going to close this now, since a record exists for anyone who gets stuck on the same thing in the future.

sylvesterroos commented 6 days ago

Omnivore is closing down, and they have added an export feature.