arturo-lang / grafito

Portable, Serverless & Lightweight SQLite-based Graph Database in Arturo
MIT License

UI enhancement + General cleanup #8

Closed · drkameleon closed this 2 years ago

drkameleon commented 2 years ago

We have to make the main UI look better and most importantly behave better.


Fixes #3

dumblob commented 2 years ago

Any support for "widgets" (i.e. support for showing the same data in different ways)? I'd like to switch from a graph view to a tree view with collapsible selectable nodes resembling the tree view known from ordinary GUIs:

[image: Tree_Checkboxes]

(of course this would require some heuristics to "break" cycles - maybe the easiest is to "ignore" some connections by filtering them out and allowing one to adjust such "subtraction" filter manually)

This is something I always expected from graph DB user interfaces but never saw anywhere, even though I personally consider it a killer feature :smile:.
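For what it's worth, the "break cycles by filtering some connections" heuristic can be sketched in a few lines (a hypothetical Python illustration, not Grafito code): derive a spanning tree via BFS and treat every non-tree edge as "filtered out", to be re-enabled manually via the subtraction filter.

```python
from collections import deque

def spanning_tree_view(nodes, edges, root):
    """Split a directed graph into tree edges (shown in the tree view)
    and extra edges (filtered out to 'break' cycles)."""
    adjacency = {n: [] for n in nodes}
    for a, b in edges:
        adjacency[a].append(b)

    visited = {root}
    tree_edges, filtered_edges = [], []
    queue = deque([root])
    while queue:
        current = queue.popleft()
        for neighbor in adjacency[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                tree_edges.append((current, neighbor))
                queue.append(neighbor)
            else:
                # This edge would create a cycle in the tree view:
                # hide it (the user could re-enable it via a filter).
                filtered_edges.append((current, neighbor))
    return tree_edges, filtered_edges

# A tiny cyclic graph: a -> b -> c -> a
tree, filtered = spanning_tree_view(
    ["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")], "a")
```

Here the cycle-closing edge `c -> a` ends up in `filtered`, so the remaining edges form a clean collapsible tree rooted at `a`.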

drkameleon commented 2 years ago

Any support for "widgets" (i.e. support for showing the same data in different ways)? I'd like to switch from a graph view to a tree view with collapsible selectable nodes resembling the tree view known from ordinary GUIs:

[image: Tree_Checkboxes]

(of course this would require some heuristics to "break" cycles - maybe the easiest is to "ignore" some connections by filtering them out and allowing one to adjust such "subtraction" filter manually)

This is something I always expected from graph DB user interfaces but never saw anywhere, even though I personally consider it a killer feature 😄.

That's a very interesting suggestion I have to admit. :)

Since I'm - quite apparently - working on the UI right now (and quite non-stop, for that matter), I've been playing a lot with the layout in order to make it work properly and support more features. One thing that's already part of the revamp is the ability to see query results as a table. But what you mention is definitely an interesting addition.

Let's see...

For now, I'm sending you a tiny sneak peek preview of what the UI is slowly turning into... ;)

[Screenshot 2022-01-12 at 12:25:27]
dumblob commented 2 years ago

Yes! Everybody likes tables, so it's definitely a perfect addition! Tables would also benefit from cycle removal to make them less "sparse" IMHO :wink:.

Will the new UI support "saved queries incl. the chosen filters and the corresponding view type(s)"?

I'm asking because I see just one "table" icon on the left, which suggests that after writing a query (possibly with a filter) one always has to select the desired view. In other words, the best case would be two clicks - selecting ("dispatching") a query and then selecting the view. Which would be annoying...

Btw. will you support some more advanced query-editing experience (at least a multiline input with highlighting and real-time evaluation "after each key press")?

drkameleon commented 2 years ago

Yes! Everybody likes tables, so it's definitely a perfect addition! Tables would also benefit from cycle removal to make them less "sparse" IMHO 😉.

I totally agree.

Will the new UI support "saved queries incl. the chosen filters and the corresponding view type(s)"?

I'm asking because I see just one "table" icon on the left, which suggests that after writing a query (possibly with a filter) one always has to select the desired view. In other words, the best case would be two clicks - selecting ("dispatching") a query and then selecting the view. Which would be annoying...

I guess we're at a rather early stage to say anything about this, but I'm willing to do anything that makes the whole experience smoother and more practical. So, all ideas are more than welcome! πŸ˜„

Btw. will you support some more advanced query-editing experience (at least a multiline input with highlighting and real-time evaluation "after each key press")?

What I have in mind right now is:

And we'll see as we move on...

Regarding real-time evaluation, I have thought about it, but it will ultimately come down to how bulky it becomes (continuously running queries in the background while one is typing will make the whole thing quite heavy, no matter how fast each query is processed. Perhaps we could do this only for "short" queries, or if some option is enabled? Could be...)

dumblob commented 2 years ago

Regarding real-time evaluation, I have thought about it, but it will ultimately come down to how bulky it becomes (continuously running queries in the background while one is typing will make the whole thing quite heavy, no matter how fast each query is processed. Perhaps we could do this only for "short" queries, or if some option is enabled? Could be...)

Actually, from my experience this is not necessarily as heavy as it sounds. sqlite is cached very well by the OS, and generally, if you add the capability to cancel any query processing at (nearly) any point of its execution, each key press can then cancel all currently running queries (should be blazingly fast) and then run the new query.
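Python's sqlite3 module happens to expose exactly this kind of cancellation: `Connection.interrupt()` aborts a running query from another thread, and a progress handler can abort from within the same thread. A minimal sketch of the cancel-on-keypress idea (names like `run_cancellable` are made up for illustration):

```python
import sqlite3
import threading

def run_cancellable(conn, sql, cancel_event, check_every=1000):
    """Run a query that can be aborted mid-execution: a progress
    handler fires every `check_every` SQLite VM instructions, and
    returning a non-zero value makes SQLite abort the statement."""
    conn.set_progress_handler(
        lambda: 1 if cancel_event.is_set() else 0, check_every)
    try:
        return conn.execute(sql).fetchall()
    except sqlite3.OperationalError:
        return None  # the query was interrupted mid-flight
    finally:
        conn.set_progress_handler(None, 0)

conn = sqlite3.connect(":memory:")
cancel = threading.Event()

# Normal run: the event is never set, so the query completes.
rows = run_cancellable(conn, "SELECT 1", cancel)

# On a key press the UI would call cancel.set() (or conn.interrupt()
# from another thread) before dispatching the freshly typed query.
```

Since aborting is near-instant, each key stroke can safely kill whatever was still running and fire off the new query.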

I think it's not necessary to go the differential dataflow ([1], [2], [3], [4], ...) route (but I don't want to discourage you from doing it - it's a very cool technology which still waits for its widespread use).

All in all the new UI sounds very promising - I might then recommend Grafito to some people looking for a generic DB (e.g. for an internal warehouse of physical goods with "structure changing a lot over time" - I've already had this request from a good friend) :wink:.

drkameleon commented 2 years ago

Actually, from my experience this is not necessarily as heavy as it sounds. sqlite is cached very well by the OS, and generally, if you add the capability to cancel any query processing at (nearly) any point of its execution, each key press can then cancel all currently running queries (should be blazingly fast) and then run the new query.

That makes perfect sense tbh.

As far as SQLite is concerned, truth be told - believe it or not - it's the only db engine that has never let me down (speed included). And no, I've never really been an RDBMS guy, but still. That's why I decided to give it a go as the Graph engine backend in the first place.

Plus, what I wanted to avoid is the extreme memory - and general system requirements - overhead of pretty much every single "graph db" on the market: Neo4j, for example, which is the market leader, and a very nice offering overall, needed I-don't-even-know-how-many GBs of memory just to decently handle a very limited nodeset. Well, they over-sell their "native graph" aspect, and good for them, but I don't know why I should care so much as a user/customer, when merely hosting it will end up costing me a lot...

All in all the new UI sounds very promising - I might then recommend Grafito to some people looking for a generic DB (e.g. for an internal warehouse of physical goods with "structure changing a lot over time" - I've already had this request from a good friend) 😉.

Thanks a lot for your kind words. I really appreciate it! :)

And, yes, feel free to spread the word. Just give me some time to... smooth out some of its rough edges (mainly: manage to merge this PR + a couple of other things) and... make it totally self-contained, so that people can download Grafito as a single, standalone library/binary without having to bother too much about dependencies (or Arturo).

Right now, it's definitely possible; I just want to make it 100% functional first. πŸ˜‰

dumblob commented 2 years ago

As far as SQLite is concerned, truth be told - believe it or not - it's the only db engine that has never let me down (speed included). And no, I've never really been an RDBMS guy, but still. That's why I decided to give it a go as the Graph engine backend in the first place.

This is interesting. I have precisely the same experience - even terabytes of data are not a problem (assuming a good filesystem under the hood, of course). Actually I've also written a "tiny" sqlite wrapper in Python (single file, ~1000 SLOC of code & ~1000 SLOC of comments + full documentation + unit tests + performance tests, zero dependencies, full multithreading & multiprocessing support with absolutely zero side effects and zero manual intervention - not even setup) turning it into the simplest key-value/document-inspired, dead-easy-to-use transactional DB with hard durability guarantees and the maximum speed your filesystem & I/O scheduler allow (but not the minimum possible latency, due to Python constraints & some bad design of Python's built-in sqlite wrapper :cry:) I've ever used.

On the other hand I need to criticize sqlite upstream for offering very misleading, vague, and quite insufficient documentation, and very few safety measures (it's super easy to shoot yourself in the foot, and sqlite will be happy to do that for you). On top of that comes the quite badly designed "built-in" wrapper which Python ships with (as always, the road to hell is paved with good intentions). The solution? Read the sqlite source very thoroughly, read the CPython source very thoroughly, and finally come up with a maze of workarounds & assertions.

But hey, it works in the end! And actually it saturates my HDD throughput (it's a 10-year-old notebook though) in my (un)benchmarks. Funnily enough, a year ago or so I saw some benchmarks from a single beefy server (very fast professional SAS HDDs, several very powerful CPUs, s**t loads of RAM, very fast NICs etc.) with PostgreSQL I think (it might have been MySQL or maybe YugabyteDB or CockroachDB - I don't remember exactly) set to ultra-high guarantees comparable to my wrapper (which I call PersistentDict, as its API is 100% dict plus a very few additional methods).

And I had to laugh really loud, because the write & read performance in most benchmarks was comparable to my notebook with PersistentDict! I remember trying to understand what the problem with their setup was, but no, there wasn't any problem, it was really that slow. So yeah, industry-standard DBs are not optimized for guarantees (I believe that had I seen their benchmarks with weaker guarantee settings, my PersistentDict wouldn't have won by a very large margin). And it seems at least one of these industry-level DBs is far from capable of using the full potential of the HW under strict guarantee settings.

Sorry for the story, I just felt the urge to publish it (the problem is I really don't remember the DB nor the benchmark web page so you have to take my word unfortunately :cry:).

And, yes, feel free to spread the word. Just give me some time to... smooth out some of its rough edges (mainly: manage to merge this PR + a couple of other things) and... make it totally self-contained, so that people can download Grafito as a single, standalone library/binary without having to bother too much about dependencies (or Arturo).

Sure, take your time.

Actually my friend would need to run it behind some login interface (not everybody could edit or read the whole DB). Do you want to support such use cases? Support for that would probably require some pre-defined graphs already in the DB, which would then govern the users (their unique identifiers) and their corresponding rights, and these "rights" nodes would then connect to every single "real data" node users will be adding.
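The rights-nodes idea could look roughly like this (a purely hypothetical sketch - Grafito has no such API): users and rights are ordinary nodes, and a permission check is just a two-hop edge lookup from the user through a rights node to the data node.

```python
def allowed(edges, rights, user, data_node, action):
    """A user may perform `action` on `data_node` iff some rights
    node R grants that action AND the graph contains both a
    user -> R edge and an R -> data_node edge."""
    return any((user, r) in edges and (r, data_node) in edges
               for r, actions in rights.items()
               if action in actions)

# Toy graph: one rights node linking alice to a warehouse item.
edges = {("alice", "rw-warehouse"), ("rw-warehouse", "item-42")}
rights = {"rw-warehouse": {"read", "write"}}
```

The appeal of modelling it this way is that access control lives in the same graph as the data, so the same query machinery can answer "who may touch this node?".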

drkameleon commented 2 years ago

On the other hand I need to criticize sqlite upstream for offering very misleading, vague, and quite insufficient documentation, and very few safety measures (it's super easy to shoot yourself in the foot, and sqlite will be happy to do that for you).

Regarding documentation, I think I would agree on that too - I guess trial and error is our friend. haha

But hey, it works in the end! And actually it saturates my HDD throughput (it's a 10-year-old notebook though) in my (un)benchmarks. Funnily enough, a year ago or so I saw some benchmarks from a single beefy server (very fast professional SAS HDDs, several very powerful CPUs, s**t loads of RAM, very fast NICs etc.) with PostgreSQL I think (it might have been MySQL or maybe YugabyteDB or CockroachDB - I don't remember exactly) set to ultra-high guarantees comparable to my wrapper (which I call PersistentDict, as its API is 100% dict plus a very few additional methods).

And I had to laugh really loud, because the write & read performance in most benchmarks was comparable to my notebook with PersistentDict! I remember trying to understand what the problem with their setup was, but no, there wasn't any problem, it was really that slow. So yeah, industry-standard DBs are not optimized for guarantees (I believe that had I seen their benchmarks with weaker guarantee settings, my PersistentDict wouldn't have won by a very large margin). And it seems at least one of these industry-level DBs is far from capable of using the full potential of the HW under strict guarantee settings.

Sorry for the story, I just felt the urge to publish it (the problem is I really don't remember the DB nor the benchmark web page so you have to take my word unfortunately 😒).

Very interesting. Based on my own experience too, I can assure you I totally believe it.

Actually my friend would need to run it behind some login interface (not everybody could edit or read the whole DB). Do you want to support such use cases? Support for that would probably require some pre-defined graphs already in the DB, which would then govern the users (their unique identifiers) and their corresponding rights, and these "rights" nodes would then connect to every single "real data" node users will be adding.

Grafito DBs already include some pre-defined nodes (mainly metadata), but yes, I've been thinking of different ways this store could be expanded. And some user-management info could definitely go there.

Stay tuned! Lots of interesting stuff coming! (and I'm definitely super-inspired by this project :) )

dumblob commented 2 years ago

Btw. seeing the amount of changes and effort, maybe you could look into how to further speed things up by taking a look at https://github.com/FastVM/minivm (yes, as fast as or faster than Node.js in only 500 SLOC with comments!), as Shaw will soon release an "alpha" version and so far the performance is stunning in the domain of interpreted languages.

Maybe minivm could one day become an alternative (or the main?) backend for Arturo. IDK

drkameleon commented 2 years ago

https://github.com/FastVM/minivm

This is definitely interesting. In general, I've been looking into different language/VM development efforts to see how we can benefit.

For now, at least for the things I've been using it for so far, Arturo is fast enough - without meaning that I'm totally satisfied with its speed (perhaps I'll never be lol). Sure thing is, I've left the VM optimization part for the end, so that the language could first take shape and form - and be functional and usable. But I will definitely deal with it - either by optimizing the existing VM, by adopting other ideas, or - as you suggested - by making it flexible enough that it can even fit into a different VM. 😉

drkameleon commented 2 years ago

Sneak peek πŸ˜ƒ

Default theme:

[Screenshots 2022-01-20 at 13:53:03, 13:56:50, and 13:56:57]

Dark theme:

[Screenshots 2022-01-20 at 13:53:33, 13:57:14, and 13:57:02]
dumblob commented 2 years ago

Cool screenshots!

Btw. how do you implement pagination? Thinking of potential use cases of mine, I'd need transactional pagination - i.e. after getting the first page of a query, changing (from a different instance of grafito) part of the data from which the result of the paginated query was assembled shouldn't affect the paginated result, regardless of how many times and on which page I click.

drkameleon commented 2 years ago

Cool screenshots!

Thanks! Doing my best to get it ready as fast as possible! :)

Btw. how do you implement pagination?

Do you mean for the tables? If that's the case, I'm just pushing the existing graph data (from the query) to the DataTables table. It's not exactly "vanilla" - there are a lot of things I've changed - but I haven't implemented this whole section from scratch.

Thinking of potential use cases of mine, I'd need transactional pagination - i.e. after getting the first page of a query, changing (from a different instance of grafito) part of the data from which the result of the paginated query was assembled shouldn't affect the paginated result, regardless of how many times and on which page I click.

I'm not sure if I understand your point right. Right now, when a query is performed, Grafito just:

  • fetches all the results
  • updates the graph view
  • updates the table view

Now, the table view does have pages (with the selected numbers of elements, etc), but all of the pages are pre-fetched.

What I want to do - and that's mainly with the graph view in mind - is sort of impose some - changeable - limit as to how many nodes can be shown simultaneously. (The table view can easily handle thousands, if not millions, of records; the graph view on the other hand can easily have a lag, even with a couple of hundred nodes given the computational complexity of the edges).

I guess we'll have to come up with different optimizations as the project progresses... For now, my main goal is to get what I had in mind working, and then we'll build on top of that πŸ˜‰

dumblob commented 2 years ago

Do you mean for the tables?

Yes.

If that's the case, I'm just pushing the existing graph data (from the query) to the DataTables table. It's not exactly "vanilla" - there are a lot of things I've changed - but I haven't implemented this whole section from scratch.

Hm, does DataTables support a large number of rows with zero computational effort & almost zero memory beyond what is currently visually displayed? I couldn't find this (IMHO very important feature) anywhere.

What I mean is e.g. something like https://github.com/jpmorganchase/regular-table - i.e. scrolling down removes all invisible elements and instead gives the last invisible element a height which corresponds to all the already-seen elements together. Analogously for scrolling to the top (with the exception that it doesn't take into account the real height, because the rows have not yet been seen and thus the scroll bar can't be precisely calculated, so it's just some guess).
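The windowing arithmetic behind this kind of virtual scrolling is simple enough to sketch (hypothetical Python, assuming fixed row heights; a real implementation like regular-table has to estimate unseen heights):

```python
def visible_window(scroll_top, viewport_height, row_height, total_rows):
    """Return (first, last, top_spacer, bottom_spacer): rows [first, last)
    need real DOM elements; the two spacer heights stand in for every
    row scrolled out of view, keeping the scroll bar proportions right."""
    first = scroll_top // row_height
    last = min(total_rows, (scroll_top + viewport_height) // row_height + 1)
    top_spacer = first * row_height
    bottom_spacer = (total_rows - last) * row_height
    return first, last, top_spacer, bottom_spacer
```

E.g. with a 100px viewport and 20px rows, only ~6 of 1000 rows ever exist as real elements; the other ~994 rows' worth of height lives in the two spacers.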

I'm not sure if I understand your point right. Right now, when a query is performed, Grafito just:

  • fetches all the results
  • updates the graph view
  • updates the table view

Now, the table view does have pages (with the selected numbers of elements, etc), but all of the pages are pre-fetched.

Yep, if you fetch the data itself in a transaction and cache them after fetching, then this whole process maintains transactional semantics. That's what I'll need :wink:.
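That fetch-in-a-transaction-then-cache scheme can be sketched like so (hypothetical Python over sqlite3, not Grafito's code): the transaction makes the fetched rows mutually consistent, and the cache makes later edits invisible while the user flips pages.

```python
import sqlite3

class CachedQueryPages:
    """Fetch a query's full result set inside a single read transaction,
    then serve pages from that cached snapshot. Later writes (from this
    or any other connection) cannot affect the pages already fetched."""
    def __init__(self, conn, sql, page_size=50):
        conn.execute("BEGIN")
        self.rows = conn.execute(sql).fetchall()
        conn.execute("COMMIT")
        self.page_size = page_size

    def page(self, number):  # 0-based page index
        start = number * self.page_size
        return self.rows[start:start + self.page_size]

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE t(x); INSERT INTO t VALUES (1),(2),(3);")
pages = CachedQueryPages(conn, "SELECT x FROM t ORDER BY x", page_size=2)
conn.execute("DELETE FROM t")  # mutations no longer affect cached pages
```

With a separate writer connection and WAL mode, the read transaction would additionally see a stable snapshot even while writes happen concurrently.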

What I want to do - and that's mainly with the graph view in mind - is sort of impose some - changeable - limit as to how many nodes can be shown simultaneously. (The table view can easily handle thousands, if not millions, of records; the graph view on the other hand can easily have a lag, even with a couple of hundred nodes given the computational complexity of the edges).

I myself actually strongly prefer an infinite table (or any other collection-like widget - incl. a tree) without any paging, but instead with some nifty little features which make life much easier & smoother - e.g. by default remembering the last scroll position (both vertical & horizontal) from the last time the page/query was shown.

I guess we'll have to come up with different optimizations as the project progresses... For now, my main goal is to get what I had in mind working, and then we'll build on top of that 😉

There's plenty of time for optimizations. For now maybe just prepare the architecture so that the UI always behaves more like a "windowing technique" over existing data - which seems to boil down mainly to 2 things:

  1. ensure both the UI itself and user input emit signals (or somehow "react") when moving around - not just raw dumb signals, but ones depending on the details of the current state/context - e.g. "the user started scrolling to the right and we haven't yet fetched enough nodes to satisfy fast scrolling - we'd better request some more data from the specific region the user is going to see soon".

  2. each data node & data connection (or any other "atomic piece of data") should be lazy-loaded by default, in a prioritized order and only on demand, to save memory (this of course also requires streaming data "on demand" from the server) - and of course also unloaded (there could be a client-side cache with some maximum size depending on the client type /mainstream smartphone versus beefy desktop or server/; once this is reached, everything older gets purged from the cache)

(note: "prioritized order" is something which is usually skipped but is vital for the best user experience - that's why Nanite from Unreal Engine 5 is able to do what no one else can - btw. Unreal Engine achieves that through virtualization of tiny parts and then quickly remapping the content of these virtualized parts on demand)
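The bounded client-side cache from point 2 is essentially an LRU cache; a minimal sketch (hypothetical names, Python stdlib only):

```python
from collections import OrderedDict

class NodeCache:
    """Client-side cache for lazily loaded graph nodes: bounded size,
    least-recently-used entries are purged first."""
    def __init__(self, max_size, fetch):
        self.max_size = max_size
        self.fetch = fetch          # called on a cache miss
        self.entries = OrderedDict()

    def get(self, node_id):
        if node_id in self.entries:
            self.entries.move_to_end(node_id)     # mark as recently used
        else:
            self.entries[node_id] = self.fetch(node_id)
            if len(self.entries) > self.max_size:
                self.entries.popitem(last=False)  # purge the oldest entry
        return self.entries[node_id]

cache = NodeCache(max_size=2, fetch=lambda nid: {"id": nid})
cache.get("a"); cache.get("b"); cache.get("a"); cache.get("c")
# "b" was least recently used, so it has been purged.
```

A priority-aware variant would additionally order the fetch queue by how soon the user is likely to scroll each node into view.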

But I talk too much :cry:.

drkameleon commented 2 years ago

Do you mean for the tables?

Yes.

If that's the case, I'm just pushing the existing graph data (from the query) to the DataTables table. It's not exactly "vanilla" - there are a lot of things I've changed - but I haven't implemented this whole section from scratch.

Hm, does DataTables support a large number of rows with zero computational effort & almost zero memory beyond what is currently visually displayed? I couldn't find this (IMHO very important feature) anywhere.

What I mean is e.g. something like https://github.com/jpmorganchase/regular-table - i.e. scrolling down removes all invisible elements and instead gives the last invisible element a height which corresponds to all the already-seen elements together. Analogously for scrolling to the top (with the exception that it doesn't take into account the real height, because the rows have not yet been seen and thus the scroll bar can't be precisely calculated, so it's just some guess).

I'm not sure if I understand your point right. Right now, when a query is performed, Grafito just:

  • fetches all the results
  • updates the graph view
  • updates the table view

Now, the table view does have pages (with the selected numbers of elements, etc), but all of the pages are pre-fetched.

Yep, if you fetch the data itself in a transaction and cache them after fetching, then this whole process maintains transactional semantics. That's what I'll need πŸ˜‰.

What I want to do - and that's mainly with the graph view in mind - is sort of impose some - changeable - limit as to how many nodes can be shown simultaneously. (The table view can easily handle thousands, if not millions, of records; the graph view on the other hand can easily have a lag, even with a couple of hundred nodes given the computational complexity of the edges).

I myself actually strongly prefer an infinite table (or any other collection-like widget - incl. a tree) without any paging, but instead with some nifty little features which make life much easier & smoother - e.g. by default remembering the last scroll position (both vertical & horizontal) from the last time the page/query was shown.

I guess we'll have to come up with different optimizations as the project progresses... For now, my main goal is to get what I had in mind working, and then we'll build on top of that 😉

There's plenty of time for optimizations. For now maybe just prepare the architecture so that the UI always behaves more like a "windowing technique" over existing data - which seems to boil down mainly to 2 things:

  1. ensure both the UI itself and user input emit signals (or somehow "react") when moving around - not just raw dumb signals, but ones depending on the details of the current state/context - e.g. "the user started scrolling to the right and we haven't yet fetched enough nodes to satisfy fast scrolling - we'd better request some more data from the specific region the user is going to see soon".
  2. each data node & data connection (or any other "atomic piece of data") should be lazy-loaded by default, in a prioritized order and only on demand, to save memory (this of course also requires streaming data "on demand" from the server) - and of course also unloaded (there could be a client-side cache with some maximum size depending on the client type /mainstream smartphone versus beefy desktop or server/; once this is reached, everything older gets purged from the cache)

(note: "prioritized order" is something which is usually skipped but is vital for the best user experience - that's why Nanite from Unreal Engine 5 is able to do what no one else can - btw. Unreal Engine achieves that through virtualization of tiny parts and then quickly remapping the content of these virtualized parts on demand)

But I talk too much 😒.

You have many interesting points.

For similar things, as I said I've used DataTables in the past. And from my experience, it performs very very well even for large datasets.

Though I haven't yet tried it, for what you mention I would think of the Scroller extension: https://datatables.net/extensions/scroller/examples/styling/bulma.html

Let's see, let's see... For now, I'm quite struggling to finish the graph view, interaction/manipulation, etc (this thing has turned out far more complicated than just a diagram you can play with - but I believe it's worth it), which is the single most demanding part of the whole UI. And then I'll get down to work on the table (basically, optimizing it & fixing a couple of things)

πŸ˜„

dumblob commented 2 years ago

You have many interesting points.

I'll try my best to stay low for a while.

For similar things, as I said I've used DataTables in the past. And from my experience, it performs very very well even for large datasets.

Though I haven't yet tried it, for what you mention I would think of the Scroller extension: https://datatables.net/extensions/scroller/examples/styling/bulma.html

Wow, your Google-fu is much better than mine. This plugin seems to implement the idea I mentioned, so hopefully it'd solve the issue (now it's only a question of not fetching the potentially gigabytes of data on the client up front, but rather requesting them on demand from the server only when the user scrolls).

Let's see, let's see... For now, I'm quite struggling to finish the graph view, interaction/manipulation, etc (this thing has turned out far more complicated than just a diagram you can play with - but I believe it's worth it), which is the single most demanding part of the whole UI.

I totally believe it! At least you can feel immediate satisfaction from "easier" view kinds like table or tree :wink:.

And then I'll get down to work with the table (basically, optimizing it & fixing a couple of things)

:+1:

drkameleon commented 2 years ago

@dumblob Multi-line input enabled &... here's another sneak peek, with that - and many, many other new features - included (slowly, we're finally getting there πŸ˜‰ )

[image: sneakpeek2]

drkameleon commented 2 years ago

Ready to merge πŸš€

(There will be bugs & issues here and there - I don't pretend this is 100% complete - but better to deal with them step-by-step, via different PRs, open issues, etc, to keep things more organized + this PR is not even a PR any more, but has outgrown its initial goals by... far)

dumblob commented 2 years ago

Sorry for the delay. That last screenshot looks really cool! A lot of work.

Regarding multiline input, I wonder whether the multiline editor, when opened, gets pre-filled with the existing text from the query field, if any?

Btw. do you generally plan to make the UI tablet-friendly? I.e. basically the same as it is now, but just ensuring the minimum button/icon size is finger-friendly :wink:?

drkameleon commented 2 years ago

That last screenshot looks really cool! A lot of work.

Thanks! πŸ˜„ Doing my best...

Regarding multiline input, I wonder whether the multiline editor, when opened, gets pre-filled with the existing text from the query field, if any?

Right now, the multiline editor starts out empty, and whenever you open it up again it comes pre-filled with your existing query (the one you had previously written in the multiline editor).

What I've been thinking is: either use the one from the single-line query input, or even forcefully open it when you start writing a query that exceeds some pre-defined threshold. I don't know. Ideas?

Btw. do you generally plan to make the UI tablet-friendly? I.e. basically the same as it is now, but just ensuring the minimum button/icon size is finger-friendly 😉?

The main UI is (loosely) based on Bulma. And generally, every UI I've created with Bulma so far seems to behave quite consistently, without a lot of effort. (Have a look at one of my latest projects: https://deleahora.com - 100% static, with a dynamic backend written in Ruby/Sinatra + Arturo, and a Bulma-based UI)


That being said, and as you may have probably seen from my additions to Arturo (e.g. https://github.com/arturo-lang/arturo/pull/407), my main goal (which has pretty much been achieved by now) is to make Grafito work flawlessly as a standalone app.

Basically, I've made it so that I can "compile" Grafito into a single binary, which won't need Nim or anything else for that matter. And it will work pretty much like every other GUI app.

Lots of interesting things coming! 😉

dumblob commented 2 years ago

Right now, the multiline editor starts out empty, and whenever you open it up again it comes pre-filled with your existing query (the one you had previously written in the multiline editor).

What I've been thinking is: either use the one from the single-line query input, or even forcefully open it when you start writing a query that exceeds some pre-defined threshold. I don't know. Ideas?

Yes, I'd forcefully open it (after "freezing" it first, to account for the delay of the popup incl. its animation). Later we can think of some more seamless way (perhaps just automatically adding new lines and changing the "dispatch key" from return/enter to something else - probably ditching any "confirmation" altogether and executing it live, immediately after each key stroke).

The main UI is (loosely) based on Bulma. And generally, every UI I've created with Bulma so far seems to behave quite consistently, without a lot of effort. (Have a look at one of my latest projects: https://deleahora.com - 100% static, with a dynamic backend written in Ruby/Sinatra + Arturo, and a Bulma-based UI)

Yes, that's all fine and I don't think it needs any changing. What I mean is really just the min size of clickable objects (incl. icons, buttons, etc.). E.g. the button with the vertical guillemet on the right side of the query line is far too tiny to be comfortable on tablets :wink:.

That being said, and as you may have probably seen from my additions to Arturo (e.g. arturo-lang/arturo#407), my main goal (which has pretty much been achieved by now) is to make Grafito work flawlessly as a standalone app.

Yes, I'm a huge proponent of the "offline first" approach. I don't know what would work best for Grafito in a multiuser environment (maybe some predefined scheme for federation, authentication, and role management - all of this seamlessly integrated, so that writing a query against my own instance and all related remote instances would use more or less the same syntax as querying only my own instance - of course a rendezvous point would be required for such P2P communication, but maybe we could abuse some already-existing instances of totally unrelated services run by huge companies). What I do know, though, is that multiuser support will be needed to make Grafito useful in practice :wink:.

Basically, I've made it so that I can "compile" Grafito into a single binary, which won't need Nim or anything else for that matter. And it will work pretty much like every other GUI app.

:+1:

Lots of interesting things coming! 😉

Looking forward to that!