If you're in a hurry, feel free to jump straight to the [[#usecases][demos]].
TLDR: I'm using the [[https://github.com/karlicoss/HPI][HPI]] (Human Programming Interface) package as a means of unifying, accessing and interacting with all of my personal data.
HPI is a Python package (named ~my~), a collection of modules for:
The package hides the [[https://beepb00p.xyz/sad-infra.html#exports_are_hard][gory details]] of locating data, parsing, error handling and caching. You simply 'import' your data and get to work with familiar Python types and data structures.
Here's a short example to give you an idea: "which subreddits do I find the most interesting?"

#+begin_src python
import my.reddit.all
from collections import Counter
return Counter(s.subreddit for s in my.reddit.all.saved()).most_common(4)
#+end_src
| orgmode        | 62 |
| emacs          | 60 |
| selfhosted     | 51 |
| QuantifiedSelf | 46 |
I consider my digital trace an important part of my identity. ([[https://beepb00p.xyz/tags.html#extendedmind][#extendedmind]]) Usually the data is siloed, accessing it is inconvenient and borderline frustrating. This feels very wrong.
In contrast, once the data is available as Python objects, I can easily plug it into existing tools, libraries and frameworks. It makes building new tools considerably easier and opens up new ways of interacting with the data.
I tried different things over the years and I think I'm getting to the point where other people can also benefit from my code by 'just' plugging in their data, and that's why I'm sharing this.
Imagine if all your life was reflected digitally and available at your fingertips. This library is my attempt to achieve this vision.
:results:
Table of contents:
- Why?
- How does a Python package help?
- What's inside?
- How do you use it?
- Ad-hoc and interactive
- How does it get input data?
- Q & A
- HPI Repositories
- Related links
- --
:END:
* Why?
:PROPERTIES:
:CUSTOM_ID: motivation
:END:
The main reason that led me to develop this is my dissatisfaction with the current situation:
Our personal data is siloed and trapped across cloud services and various devices
Even when it's possible to access it via the API, it's hardly useful unless you're an experienced programmer, willing to invest time and infrastructure into it.
We have insane amounts of data scattered across the cloud, yet we're left at the mercy of those who collect it to provide something useful based on it
Integrations of data across silo boundaries are almost non-existent. There is so much potential and it's all wasted.
I'm not willing to wait till some vaporware project reinvents the whole computing model from scratch
As a programmer, I am in a position to do something right now, even though it's not necessarily perfect and consistent.
I've written a lot about it [[https://beepb00p.xyz/sad-infra.html#why][here]], so allow me to simply quote:
:results:
search and information access
productivity
journaling and history
consuming digital content
health and body maintenance
personal finance
why I can't do anything when I'm offline or have a wonky connection?
tools for thinking and learning
mediocre interfaces
communication and collaboration
backups
:END:
I'm tired of having to use multiple different messengers and social networks
I'm tired of shitty bloated interfaces
Why do we have to be at the mercy of their developers, designers and product managers? If we had our data at hand, we could fine-tune the interfaces to our needs.
I'm tired of mediocre search experience
Text search is something computers do exceptionally well. Yet, often it's not available offline, it's not incremental, everyone reinvents their own query language, and so on.
I'm frustrated by poor information exploring and processing experience
While for many people, services like Reddit or Twitter are simply time killers (and I don't judge), some want to use them efficiently, as a source of information and research. The modern bookmarking experience makes that far from easy.
You can dismiss this as a list of first-world problems, and you would be right, they are. But the major reason I want to solve these problems is to be better at learning and working with knowledge, so I could be better at solving the real problems.
When I started solving some of these problems for myself, I noticed a common pattern: the [[https://beepb00p.xyz/sad-infra.html#exports_are_hard][hardest bit]] is actually getting hold of your data in the first place. It's inherently error-prone and frustrating.
But once you have the data in a convenient representation, working with it is pleasant -- you get to explore and build instead of fighting with yet another stupid REST API.
This package knows how to find data on your filesystem, deserialize it and normalize it to a convenient representation. You have the full power of the programming language to transform the data and do whatever comes to your mind.
** Why don't you just put everything in a massive database?
:PROPERTIES:
:CUSTOM_ID: database
:END:
Glad you've asked! I wrote a whole [[https://beepb00p.xyz/unnecessary-db.html][post]] about it.
In short: while databases are efficient and easy to read from, they often aren't flexible enough to fit your data. You're probably going to end up writing code anyway.
While working with your data, you'll inevitably notice common patterns and code repetition, which you'll probably want to extract somewhere. That's where a Python package comes in.
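To make the pattern concrete, here's a minimal sketch of what such an extracted module boils down to: locate the raw exports on disk, deserialize, and yield familiar Python types. The ~Bookmark~ type and the JSON layout below are hypothetical illustrations, not part of the actual package:

#+begin_src python
# Hypothetical sketch of an HPI-style module. The Bookmark type and the
# file layout are made up for illustration, not part of the real package.
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterator, NamedTuple

class Bookmark(NamedTuple):
    dt: datetime
    title: str
    url: str

def bookmarks(data_dir: Path) -> Iterator[Bookmark]:
    # each periodic sync drops a timestamped *.json export into data_dir
    for export in sorted(data_dir.glob('*.json')):
        for raw in json.loads(export.read_text()):
            # normalize raw dicts into plain Python types
            yield Bookmark(
                dt=datetime.fromtimestamp(raw['time'], tz=timezone.utc),
                title=raw['title'],
                url=raw['url'],
            )
#+end_src

Once a data source is wrapped like this once, every downstream consumer gets clean, typed objects for free.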
* What's inside?
Here's the (incomplete) list of the modules:
:results:
| [[https://github.com/karlicoss/HPI/tree/master/my/bluemaestro.py][=my.bluemaestro=]] | [[https://bluemaestro.com/products/product-details/bluetooth-environmental-monitor-and-logger][Bluemaestro]] temperature/humidity/pressure monitor |
| [[https://github.com/karlicoss/HPI/tree/master/my/body/blood.py][=my.body.blood=]] | Blood tracking (manual org-mode entries) |
| [[https://github.com/karlicoss/HPI/tree/master/my/body/exercise/all.py][=my.body.exercise.all=]] | Combined exercise data |
| [[https://github.com/karlicoss/HPI/tree/master/my/body/exercise/cardio.py][=my.body.exercise.cardio=]] | Cardio data, filtered from various data sources |
| [[https://github.com/karlicoss/HPI/tree/master/my/body/exercise/cross_trainer.py][=my.body.exercise.cross_trainer=]] | My cross trainer exercise data, arbitrated from different sources (mainly, Endomondo and manual text notes) |
| [[https://github.com/karlicoss/HPI/tree/master/my/body/weight.py][=my.body.weight=]] | Weight data (manually logged) |
| [[https://github.com/karlicoss/HPI/tree/master/my/calendar/holidays.py][=my.calendar.holidays=]] | Holidays and days off work |
| [[https://github.com/karlicoss/HPI/tree/master/my/coding/commits.py][=my.coding.commits=]] | Git commits data for repositories on your filesystem |
| [[https://github.com/karlicoss/HPI/tree/master/my/demo.py][=my.demo=]] | Just a demo module for testing and documentation purposes |
| [[https://github.com/karlicoss/HPI/tree/master/my/emfit/__init__.py][=my.emfit=]] | [[https://shop-eu.emfit.com/products/emfit-qs][Emfit QS]] sleep tracker |
| [[https://github.com/karlicoss/HPI/tree/master/my/endomondo.py][=my.endomondo=]] | Endomondo exercise data |
| [[https://github.com/karlicoss/HPI/tree/master/my/fbmessenger.py][=my.fbmessenger=]] | Facebook Messenger messages |
| [[https://github.com/karlicoss/HPI/tree/master/my/foursquare.py][=my.foursquare=]] | Foursquare/Swarm checkins |
| [[https://github.com/karlicoss/HPI/tree/master/my/github/all.py][=my.github.all=]] | Unified Github data (merged from GDPR export and periodic API updates) |
| [[https://github.com/karlicoss/HPI/tree/master/my/github/gdpr.py][=my.github.gdpr=]] | Github data (uses [[https://github.com/settings/admin][official GDPR export]]) |
| [[https://github.com/karlicoss/HPI/tree/master/my/github/ghexport.py][=my.github.ghexport=]] | Github data: events, comments, etc. (API data) |
| [[https://github.com/karlicoss/HPI/tree/master/my/hypothesis.py][=my.hypothesis=]] | [[https://hypothes.is][Hypothes.is]] highlights and annotations |
| [[https://github.com/karlicoss/HPI/tree/master/my/instapaper.py][=my.instapaper=]] | [[https://www.instapaper.com][Instapaper]] bookmarks, highlights and annotations |
| [[https://github.com/karlicoss/HPI/tree/master/my/kobo.py][=my.kobo=]] | [[https://uk.kobobooks.com/products/kobo-aura-one][Kobo]] e-ink reader: annotations and reading stats |
| [[https://github.com/karlicoss/HPI/tree/master/my/lastfm.py][=my.lastfm=]] | Last.fm scrobbles |
| [[https://github.com/karlicoss/HPI/tree/master/my/location/google.py][=my.location.google=]] | Location data from Google Takeout |
| [[https://github.com/karlicoss/HPI/tree/master/my/location/home.py][=my.location.home=]] | Simple location provider, serving as a fallback when more detailed data isn't available |
| [[https://github.com/karlicoss/HPI/tree/master/my/materialistic.py][=my.materialistic=]] | [[https://play.google.com/store/apps/details?id=io.github.hidroh.materialistic][Materialistic]] app for Hackernews |
| [[https://github.com/karlicoss/HPI/tree/master/my/orgmode.py][=my.orgmode=]] | Programmatic access and queries to org-mode files on the filesystem |
| [[https://github.com/karlicoss/HPI/tree/master/my/pdfs.py][=my.pdfs=]] | PDF documents and annotations on your filesystem |
| [[https://github.com/karlicoss/HPI/tree/master/my/photos/main.py][=my.photos.main=]] | Photos and videos on your filesystem, their GPS and timestamps |
| [[https://github.com/karlicoss/HPI/tree/master/my/pinboard.py][=my.pinboard=]] | [[https://pinboard.in][Pinboard]] bookmarks |
| [[https://github.com/karlicoss/HPI/tree/master/my/pocket.py][=my.pocket=]] | [[https://getpocket.com][Pocket]] bookmarks and highlights |
| [[https://github.com/karlicoss/HPI/tree/master/my/polar.py][=my.polar=]] | [[https://github.com/burtonator/polar-bookshelf][Polar]] articles and highlights |
| [[https://github.com/karlicoss/HPI/tree/master/my/reddit.py][=my.reddit=]] | Reddit data: saved items/comments/upvotes/etc. |
| [[https://github.com/karlicoss/HPI/tree/master/my/rescuetime.py][=my.rescuetime=]] | Rescuetime (phone activity tracking) data |
| [[https://github.com/karlicoss/HPI/tree/master/my/roamresearch.py][=my.roamresearch=]] | [[https://roamresearch.com][Roam]] data |
| [[https://github.com/karlicoss/HPI/tree/master/my/rss/all.py][=my.rss.all=]] | Unified RSS data, merged from different services I used historically |
| [[https://github.com/karlicoss/HPI/tree/master/my/rss/feedbin.py][=my.rss.feedbin=]] | Feedbin RSS reader |
| [[https://github.com/karlicoss/HPI/tree/master/my/rss/feedly.py][=my.rss.feedly=]] | Feedly RSS reader |
| [[https://github.com/karlicoss/HPI/tree/master/my/rtm.py][=my.rtm=]] | [[https://rememberthemilk.com][Remember The Milk]] tasks and notes |
| [[https://github.com/karlicoss/HPI/tree/master/my/runnerup.py][=my.runnerup=]] | [[https://github.com/jonasoreland/runnerup][Runnerup]] exercise data (TCX format) |
| [[https://github.com/karlicoss/HPI/tree/master/my/smscalls.py][=my.smscalls=]] | Phone calls and SMS messages |
| [[https://github.com/karlicoss/HPI/tree/master/my/stackexchange/gdpr.py][=my.stackexchange.gdpr=]] | Stackexchange data (uses [[https://stackoverflow.com/legal/gdpr/request][official GDPR export]]) |
| [[https://github.com/karlicoss/HPI/tree/master/my/stackexchange/stexport.py][=my.stackexchange.stexport=]] | Stackexchange data (uses API via [[https://github.com/karlicoss/stexport][stexport]]) |
| [[https://github.com/karlicoss/HPI/tree/master/my/taplog.py][=my.taplog=]] | [[https://play.google.com/store/apps/details?id=com.waterbear.taglog][Taplog]] app data |
| [[https://github.com/karlicoss/HPI/tree/master/my/time/tz/main.py][=my.time.tz.main=]] | Timezone data provider, used to localize timezone-unaware timestamps for other modules |
| [[https://github.com/karlicoss/HPI/tree/master/my/time/tz/via_location.py][=my.time.tz.via_location=]] | Timezone data provider, guesses timezone based on location data (e.g. GPS) |
| [[https://github.com/karlicoss/HPI/tree/master/my/twitter/all.py][=my.twitter.all=]] | Unified Twitter data (merged from the archive and periodic updates) |
| [[https://github.com/karlicoss/HPI/tree/master/my/twitter/archive.py][=my.twitter.archive=]] | Twitter data (uses [[https://help.twitter.com/en/managing-your-account/how-to-download-your-twitter-archive][official twitter archive export]]) |
| [[https://github.com/karlicoss/HPI/tree/master/my/twitter/twint.py][=my.twitter.twint=]] | Twitter data (tweets and favorites). Uses [[https://github.com/twintproject/twint][Twint]] data export. |
| [[https://github.com/karlicoss/HPI/tree/master/my/vk/vk_messages_backup.py][=my.vk.vk_messages_backup=]] | VK data (exported by [[https://github.com/Totktonada/vk_messages_backup][Totktonada/vk_messages_backup]]) |
:END:
Some modules are private, and need a bit of cleanup before merging:
| my.workouts     | Exercise activity, from Endomondo and manual logs |
| my.sleep.manual | Subjective sleep data, manually logged |
| my.nutrition    | Food and drink consumption data, logged manually from different sources |
| my.money        | Expenses and shopping data |
| my.webhistory   | Browsing history (part of [[https://github.com/karlicoss/promnesia][promnesia]]) |
Also, check out [[https://beepb00p.xyz/myinfra.html#mypkg][my infrastructure map]]. It might be helpful for understanding my vision for HPI.
* Instant search
:PROPERTIES:
:CUSTOM_ID: search
:END:
Typical search interfaces make me unhappy, as they are *siloed, slow, awkward to use and don't work offline*. So I built my own ways around it! I write about it in detail [[https://beepb00p.xyz/pkm-search.html#personal_information][here]].
In essence, I'm mirroring most of my online data, like chat logs and comments, as plaintext. I can overview it in any text editor and incrementally search over all of it with a single keypress.
** orger
:PROPERTIES:
:CUSTOM_ID: orger
:END:
[[https://github.com/karlicoss/orger][orger]] is a tool that helps you generate an org-mode representation of your data.
It lets you benefit from the existing tooling and infrastructure around org-mode, the most famous being Emacs.
I'm using it for:
Orger comes with some existing [[https://github.com/karlicoss/orger/tree/master/modules][modules]], but it should be easy to adapt your own data source if you need something else.
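For a rough idea of what such an adapter amounts to, here's a toy illustration of rendering items as an org-mode outline. The function below is made up for this post, not orger's actual API:

#+begin_src python
# Toy illustration of turning data into an org-mode outline, in the spirit
# of orger. The name and signature are hypothetical; see the orger repo
# for the real module interface.
def to_org(title: str, items: list[tuple[str, str]]) -> str:
    lines = [f'* {title}']
    for heading, body in items:
        # each item becomes a subheading with its body underneath
        lines.append(f'** {heading}')
        if body:
            lines.append(body)
    return '\n'.join(lines)
#+end_src

The point is that once your data is plain Python objects, serializing it into org-mode (or anything else) is a few lines of glue.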
I write about it in detail [[http://beepb00p.xyz/orger.html][here]] and [[http://beepb00p.xyz/orger-todos.html][here]].
* promnesia
:PROPERTIES:
:CUSTOM_ID: promnesia
:END:
[[https://github.com/karlicoss/promnesia#demo][promnesia]] is a browser extension I'm working on to escape silos by unifying *annotations and browsing history* from different data sources.
I've been using it for more than a year now and am working on the final touches to properly release it for other people.
** dashboard
:PROPERTIES:
:CUSTOM_ID: dashboard
:END:
As a big fan of [[https://beepb00p.xyz/tags.html#quantified-self][#quantified-self]], I'm working on a personal health, sleep and exercise dashboard, built from various data sources.
I'm working on making it public; you can see some screenshots [[https://www.reddit.com/r/QuantifiedSelf/comments/cokt4f/what_do_you_all_do_with_your_data/ewmucgk][here]].
** timeline
:PROPERTIES:
:CUSTOM_ID: timeline
:END:
Timeline is a [[https://beepb00p.xyz/tags.html#lifelogging][#lifelogging]] project I'm working on.
I want to see all my digital history, search in it, filter, easily jump at a specific point in time and see the context when it happened. That way it works as a sort of external memory.
Ideally, it would look similar to Andrew Louis's [[https://hyfen.net/memex][Memex]], or might even reuse his interface if he open sources it. I highly recommend watching his talk for inspiration.
* Ad-hoc and interactive
:PROPERTIES:
:CUSTOM_ID: usecases
:END:
** What were my music listening stats for 2018?
:PROPERTIES:
:CUSTOM_ID: lastfm
:END:
A single import away from getting the tracks you listened to:
#+begin_src python
from my.lastfm import scrobbles
list(scrobbles())[200: 205]
#+end_src
: [Scrobble(raw={'album': 'Nevermind', 'artist': 'Nirvana', 'date': '1282488504', 'name': 'Drain You'}),
:  Scrobble(raw={'album': 'Dirt', 'artist': 'Alice in Chains', 'date': '1282489764', 'name': 'Would?'}),
:  Scrobble(raw={'album': 'Bob Dylan: The Collection', 'artist': 'Bob Dylan', 'date': '1282493517', 'name': 'Like a Rolling Stone'}),
:  Scrobble(raw={'album': 'Dark Passion Play', 'artist': 'Nightwish', 'date': '1282493819', 'name': 'Amaranth'}),
:  Scrobble(raw={'album': 'Rolled Gold +', 'artist': 'The Rolling Stones', 'date': '1282494161', 'name': "You Can't Always Get What You Want"})]
Or, as a pretty Pandas frame:
#+begin_src python
import pandas as pd
df = pd.DataFrame([{
    'dt': s.dt,
    'track': s.track,
} for s in scrobbles()]).set_index('dt')
df[200: 205]
#+end_src
: track
: dt
: 2010-08-22 14:48:24+00:00 Nirvana — Drain You
: 2010-08-22 15:09:24+00:00 Alice in Chains — Would?
: 2010-08-22 16:11:57+00:00 Bob Dylan — Like a Rolling Stone
: 2010-08-22 16:16:59+00:00 Nightwish — Amaranth
: 2010-08-22 16:22:41+00:00 The Rolling Stones — You Can't Always Get What...
We can use the [[https://github.com/martijnvermaat/calmap][calmap]] library to plot a github-style music listening activity heatmap:
#+begin_src python
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 2.3))

import calmap
df = df.set_index(df.index.tz_localize(None))  # calmap expects tz-unaware dates
calmap.yearplot(df['track'], how='count', year=2018)

plt.tight_layout()
plt.title('My music listening activity for 2018')
plot_file = 'hpi_files/lastfm_2018.png'
plt.savefig(plot_file)
plot_file
#+end_src
[[https://beepb00p.xyz/hpi_files/lastfm_2018.png]]
This isn't necessarily very insightful data, but fun to look at now and then!
** What are the most interesting Slate Star Codex posts I've read?
:PROPERTIES:
:CUSTOM_ID: hypothesis_stats
:END:
My friend asked me if I could recommend posts I found interesting on [[https://slatestarcodex.com][Slate Star Codex]]. With a few lines of Python I can quickly pick the posts I engaged with most, i.e. the ones I annotated the most on [[https://hypothes.is][Hypothesis]].
#+begin_src python
from my.hypothesis import pages
from collections import Counter
cc = Counter({(p.title + ' ' + p.url): len(p.highlights) for p in pages() if 'slatestarcodex' in p.url})
return cc.most_common(10)
#+end_src
| The Anti-Reactionary FAQ http://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/ | 32 |
| Reactionary Philosophy In An Enormous, Planet-Sized Nutshell https://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/ | 17 |
| The Toxoplasma Of Rage http://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/ | 16 |
| What Universal Human Experiences Are You Missing Without Realizing It? https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/ | 16 |
| Meditations On Moloch http://slatestarcodex.com/2014/07/30/meditations-on-moloch/ | 12 |
| Universal Love, Said The Cactus Person http://slatestarcodex.com/2015/04/21/universal-love-said-the-cactus-person/ | 11 |
| Untitled http://slatestarcodex.com/2015/01/01/untitled/ | 11 |
| Considerations On Cost Disease https://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/ | 10 |
| In Defense of Psych Treatment for Attempted Suicide http://slatestarcodex.com/2013/04/25/in-defense-of-psych-treatment-for-attempted-suicide/ | 9 |
| I Can Tolerate Anything Except The Outgroup https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/ | 9 |
** Accessing exercise data
:PROPERTIES:
:CUSTOM_ID: exercise
:END:
E.g. see the use of ~my.workouts~ [[https://beepb00p.xyz/heartbeats_vs_kcals.html][here]].
** Book reading progress
:PROPERTIES:
:CUSTOM_ID: kobo_progress
:END:
I publish my reading stats on [[https://www.goodreads.com/user/show/22191391-dima-gerasimov][Goodreads]] so other people can see what I'm reading/have read, but Kobo [[https://beepb00p.xyz/ideas.html#kobo2goodreads][lacks integration]] with Goodreads. I'm using [[https://github.com/karlicoss/kobuddy][kobuddy]] to access my Kobo data, and I've got a regular task that reminds me to sync my progress once a month.
The task looks like this:
#+begin_src org
,* TODO [#C] sync [[https://goodreads.com][reading progress]] with kobo
  DEADLINE: <2019-11-24 Sun .+4w -0d>
  [[eshell: python3 -c 'import my.kobo; my.kobo.print_progress()']]
#+end_src
With a single Enter keypress on the inlined =eshell:= command I can print the progress and fill in the completed books on Goodreads, e.g.:
: A_Mathematician's_Apology by G. H. Hardy
:   Started : 21 Aug 2018 11:44
:   Finished: 22 Aug 2018 12:32
:
: Fear and Loathing in Las Vegas: A Savage Journey to the Heart of the American Dream (Vintage) by Thompson, Hunter S.
:   Started : 06 Sep 2018 05:54
:   Finished: 09 Sep 2018 12:21
:
: Sapiens: A Brief History of Humankind by Yuval Noah Harari
:   Started : 09 Sep 2018 12:22
:   Finished: 16 Sep 2018 07:25
:
: Inadequate Equilibria: Where and How Civilizations Get Stuck by Eliezer Yudkowsky
:   Started : 31 Jul 2018 22:54
:   Finished: 16 Sep 2018 07:25
:
: Albion Dreaming by Andy Roberts
:   Started : 20 Aug 2018 21:16
:   Finished: 16 Sep 2018 07:26
** Messenger stats
:PROPERTIES:
:CUSTOM_ID: messenger_stats
:END:
How much do I chat on Facebook Messenger?
#+begin_src python
from my.fbmessenger import messages

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'dt': m.dt, 'messages': 1} for m in messages())
df.set_index('dt', inplace=True)

df = df.resample('M').sum()  # by month
df = df.loc['2016-01-01':'2019-01-01']  # past subset for determinism

fig, ax = plt.subplots(figsize=(15, 5))
df.plot(kind='bar', ax=ax)

x_labels = df.index.strftime('%Y %b')
ax.set_xticklabels(x_labels)

plot_file = 'hpi_files/messenger_2016_to_2019.png'
plt.tight_layout()
plt.savefig(plot_file)
return plot_file
#+end_src
[[https://beepb00p.xyz/hpi_files/messenger_2016_to_2019.png]]
** Which month in 2020 did I make the most git commits in?
:PROPERTIES:
:CUSTOM_ID: hpi_query_git
:END:
If you like the shell or just want to quickly convert/grab some information from HPI, it also comes with a JSON query interface, so you can export the data or pipe it to your heart's content:
#+begin_src bash
$ hpi query my.coding.commits.commits --stream  # stream JSON objects as they're read
    --order-type datetime                       # find the 'datetime' attribute and order by that
    --after '2020-01-01' --before '2021-01-01'  # in 2020
  | jq '.committed_dt' -r  # extract the datetime
  | cut -d'-' -f-2         # keep only the 'YYYY-MM' part
  | sort | uniq -c         # group by month
  | awk '{print $2,$1}'    # put the month first
  | sort -n
  | termgraph
#+end_src
: 2020-01: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 458.00
: 2020-02: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 440.00
: 2020-03: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 545.00
: 2020-04: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 585.00
: 2020-05: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 518.00
: 2020-06: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 755.00
: 2020-07: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 467.00
: 2020-08: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 449.00
: 2020-09: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 1.03 K
: 2020-10: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 791.00
: 2020-11: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 474.00
: 2020-12: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 383.00
See [[https://github.com/karlicoss/HPI/blob/master/doc/QUERY.md][query docs]] for more examples
** Querying Roam Research database
:PROPERTIES:
:CUSTOM_ID: roamresearch
:END:
I've got some code examples [[https://beepb00p.xyz/myinfra-roam.html#interactive][here]].
* How does it get input data?
Also see the [[https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#data-flow]["Data flow"]] documentation, with some nice diagrams explaining specific examples.
In short:
The data is [[https://beepb00p.xyz/myinfra.html#exports][periodically synchronized]] from the services (cloud or not) locally, on the filesystem
As a result, you get [[https://beepb00p.xyz/myinfra.html#fs][JSONs/sqlite]] (or other formats, depending on the service) on your disk.
Once you have it, it's trivial to back it up and synchronize to other computers/phones, if necessary.
To schedule periodic sync, I'm using [[https://beepb00p.xyz/scheduler.html#cron][cron]].
=my.= package only accesses the data on the filesystem
That makes it extremely fast, reliable, and fully offline capable.
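For illustration, pointing a module at its data usually amounts to a small user config. Here's a sketch; the exact attribute name and file layout below are assumptions for the example, so check the setup docs for specifics:

#+begin_src python
# Hypothetical user config sketch (e.g. in ~/.config/my/my/config/__init__.py).
# A module reads the location of its raw input data from a class like this;
# the 'export_path' attribute name here is an assumption, not the documented API.
from pathlib import Path

class hypothesis:
    # wherever your periodic sync drops the raw exports
    export_path: Path = Path('~/data/hypothesis/').expanduser()
#+end_src

Since the config is just Python, you can compute paths dynamically, share settings between modules, or keep it all in a private repository.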
As you can see, in such a setup, the data is lagging behind the 'realtime'. I consider it a necessary sacrifice to make everything fast and resilient.
In theory, it's possible to make the system almost realtime by having a service that sucks in data continuously (rather than periodically), but it's harder as well.
* Q & A
** Why Python?
:PROPERTIES:
:CUSTOM_ID: why_python
:END:
I don't consider Python unique as a language suitable for such a project. It just happens to be the one I'm most comfortable with. I do have some reasons that I think make it /specifically/ good, but explaining them is out of this post's scope.
In addition, Python offers a [[https://github.com/karlicoss/awesome-python#data-analysis][very rich ecosystem]] for data analysis, which we can use to our benefit.
That said, I've never seen anything similar in other programming languages, and I would be really interested, so please send me links if you know of any. I've heard LISPs are great for data? ;)
Overall, I wish [[https://en.wikipedia.org/wiki/Foreign_function_interface][FFIs]] were a bit more mature, so we didn't have to think about specific programming languages at all.
** Can anyone use it?
:PROPERTIES:
:CUSTOM_ID: can_anyone_use_it
:END:
Yes!
everything is easily extensible
Starting from simply adding new modules to any dynamic hackery you can possibly imagine within Python.
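As a trivial example of the kind of extension this enables, you can wrap any module's output in your own helpers. The ~Saved~ type below is a stand-in for the real data type, made up for illustration:

#+begin_src python
# A stand-in Saved type for illustration; in practice you'd operate on the
# items yielded by a real module, e.g. my.reddit.all.saved().
from collections import Counter
from typing import Iterable, NamedTuple

class Saved(NamedTuple):
    subreddit: str
    title: str

def top_subreddits(saved: Iterable[Saved], n: int = 3) -> list[tuple[str, int]]:
    # same pattern as the Counter example at the top, wrapped for reuse
    return Counter(s.subreddit for s in saved).most_common(n)
#+end_src

Helpers like this can live in your own private module alongside the shared ones.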
** How easy is it to use?
:PROPERTIES:
:CUSTOM_ID: how_easy_to_use
:END:
The whole setup requires some basic programmer literacy:
If you have any ideas on making the setup simpler, please let me know!
* What about privacy?
:PROPERTIES:
:CUSTOM_ID: privacy
:END:
The modules contain *no data, only code* to operate on the data.
Everything is [[https://beepb00p.xyz/tags.html#offline][*local first*]], the input data is on your filesystem. If you're truly paranoid, you can even wrap it in a Docker container.
There is still the question of whether you trust yourself to even keep all the data on your disk, but that is out of the scope of this post.
If you'd rather keep some code private too, it's also trivial to achieve with a private subpackage.
** But /should/ I use it?
:PROPERTIES:
:CUSTOM_ID: should_i_use_it
:END:
Sure, maybe you can achieve a perfect system where you can instantly find and recall anything that you've done. Do you really want it? Wouldn't that, like, make you less human?
I'm not a gatekeeper of what it means to be human, but I don't think that the shortcomings of the human brain are what makes us such.
So I can't answer that for you. I certainly want it though. I'm [[https://beepb00p.xyz/tags.html#pkm][quite open]] about my goals -- I'd happily get merged/augmented with a computer to enhance my thinking and analytical abilities.
While at the moment [[https://en.wikipedia.org/wiki/Hard_problem_of_consciousness][we don't even remotely understand]] what exactly such merging or "mind uploading" would entail, I can clearly delegate some tasks, like long term memory, information lookup and data processing, to a computer; computers can already handle them really well.
What about those people who have perfect recall and wish they didn't?
Sure, maybe it sucks. At the moment though, my recall is far from perfect, and this only annoys me. I want to have a choice at least, and digital tools give me this choice.
** Would it suit /me/?
:PROPERTIES:
:CUSTOM_ID: would_it_suit_me
:END:
Probably, at least to some extent.
First, our lives are different, so our APIs might be different too. This is more of a demonstration of what I'm using, although I did put effort into making it as modular and extensible as possible, so other people could use it too. It's easy to modify the code, add extra methods and modules. You can even keep all your modifications private.
But after all, we all share many similar activities and use the same products, so there is a huge overlap. I'm not sure how far we can stretch it and keep modules generic enough to be used by multiple people. But let's give it a try, perhaps? :)
Second, interacting with your data through code is the central idea of the project. That kind of cuts off people without technical skills, and even many people capable of coding who dislike the idea of writing code outside of work.
It might be possible to expose some [[https://en.wikipedia.org/wiki/No-code_development_platform][no-code]] interfaces, but I still feel that wouldn't be enough.
I'm not sure whether it's a solvable problem at this point, but happy to hear any suggestions!
** What it isn't
:PROPERTIES:
:CUSTOM_ID: what_it_isnt
:END:
It's not vaporware
The project is a little crude, but it's real and working. I've been using it for a long time now, and find it fairly sustainable to keep using for the foreseeable future.
It's not going to be another silo
While I don't have anything against commercial use (and I believe any work in this area will benefit all of us), I'm not planning to build a product out of it.
I really hope it can grow into or inspire some mature open source system.
Please take my ideas and code and build something cool from it!
* HPI Repositories
:PROPERTIES:
:CUSTOM_ID: hpi_repos
:END:
One of HPI's core goals is to be as extendable as possible. The goal here isn't to become a monorepo and support every possible data source/website to the point that this isn't maintainable anymore, but hopefully you get a few modules 'for free'.
If you want to write modules for personal use but don't want to merge them into here, you're free to maintain them locally in a separate directory to avoid any merge conflicts. Entire HPI repositories can even be published separately and installed into the single ~my~ python package (for more info on this, see [[https://github.com/karlicoss/HPI/tree/master/doc/MODULE_DESIGN.org][MODULE_DESIGN]]).
Other HPI Repositories:
If you want to create your own modules or override something here, you can use the [[https://github.com/purarue/HPI-template][template]].
* Related links
:PROPERTIES:
:CUSTOM_ID: links
:END:
Similar projects:
[[https://hyfen.net/memex][Memex]] by Andrew Louis
[[https://github.com/novoid/Memacs][Memacs]] by Karl Voit
[[https://news.ycombinator.com/item?id=9615901][Me API - turn yourself into an open API (HN)]]
[[https://github.com/markwk/qs_ledger][QS ledger]] from Mark Koester
[[https://dogsheep.github.io][Dogsheep]]: a collection of tools for personal analytics using SQLite and Datasette
[[https://github.com/tehmantra/my][tehmantra/my]]: directly inspired by this package
[[https://github.com/bcongdon/bolero][bcongdon/bolero]]: exposes your personal data as a REST API
[[https://en.wikipedia.org/wiki/Solid_(web_decentralization_project)#Design][Solid project]]: personal data pod, which websites pull data from
[[https://remotestorage.io][remoteStorage]]: open protocol for apps to write data to your own storage
[[https://perkeep.org][Perkeep]]: a tool with [[https://perkeep.org/doc/principles][principles]] and esp. [[https://perkeep.org/doc/uses][use cases]] for self-sovereign storage of personal data
[[https://www.openhumans.org][Open Humans]]: a community and infrastructure to analyse and share personal data
Other links:
NetOpWibby: [[https://news.ycombinator.com/item?id=21684949][A Personal API (HN)]]
[[https://beepb00p.xyz/sad-infra.html][The sad state of personal data and infrastructure]]: where I go into the motivation and the difficulties arising in the implementation
[[https://beepb00p.xyz/myinfra-roam.html][Extending my personal infrastructure]]: a followup, where I'm demonstrating how to integrate a new data source (Roam Research)
* --
:PROPERTIES:
:CUSTOM_ID: fin
:END:
Open to any feedback and thoughts!
Also, don't hesitate to raise an issue, or reach out to me personally if you want to try using it and find the instructions confusing. Your questions will help me make it simpler!
In some near future I will write more about:
, but happy to answer any questions on these topics now!