dduran1967 opened this issue 7 years ago
We don't have a system notification container spec at the moment. It does sound useful, however. Are you thinking a per-user system container? Or one for the whole server? What sort of notifications will go in there?
What I do with the inbox is monitor it over the websocket; when a new item comes in, I process it, typically by validating it, acting on it, and then deleting the original item.
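For reference, the subscribe side of that loop is quite small. Here is a minimal sketch using the `ws` npm package; the inbox and websocket URLs are placeholders, and in practice the websocket endpoint is the one the server advertises in the Updates-Via response header:

```js
const WebSocket = require('ws')

// Placeholders: use your own inbox URL and the endpoint from Updates-Via
const inboxUrl = 'https://example.databox.me/inbox/'
const socket = new WebSocket('wss://example.databox.me/')

socket.on('open', () => {
  // Ask the server to notify us whenever the inbox container changes
  socket.send('sub ' + inboxUrl)
})

socket.on('message', (data) => {
  const msg = data.toString()
  if (msg.startsWith('pub ')) {
    const changedUri = msg.slice('pub '.length)
    // At this point you would GET the container, find the new item,
    // validate and process it, then DELETE the original (not shown).
    console.log('Inbox changed:', changedUri)
  }
})
```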
I'd be interested to know if you got such a thing working, as I believe there is a bug in the notification system, at least for POSTs:
https://github.com/solid/node-solid-ws/blob/master/index.js#L13
The line there is `solidWs.publish(path.basename(req.originalUrl))`. The `basename` function returns the LAST part of the URI, whereas what is required is the container part of the URI, which I think is the `dirname`.
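A quick illustration with a made-up resource path shows the difference:

```js
const path = require('path')

// Hypothetical request: a POST to /inbox/ creates /inbox/msg1.ttl.
// Subscribers are listening on the container URL, so that is what
// should be published.
path.basename('/inbox/msg1.ttl')  // => 'msg1.ttl'  (what the current line publishes)
path.dirname('/inbox/msg1.ttl')   // => '/inbox'    (the container part)
```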
@dmitrizagidulin - Thanks for your response. I'm sorry it's taken a while to get back to you (lots of family obligations over the weekend), but the time has provided some clarity and perspective on the hurdles we (Yodata) must clear to deliver a complete and viable solution based on SOLID, and on which parts of our effort can be generalized to help advance the platform for others. So I will break this down into two issues: 1. logging pod requests, and 2. a container events API, which I will open as a new issue.
Suggestion: solid-server should log all pod requests to a well-known container
a. If we're asking people to trust solid-server with their data, then solid-servers should meet some minimal expectations for standard practices that help protect and secure data;
b. Just like other data stored in SOLID, logs should be RDF with a common vocab so developers can create system tools for interacting with log data on any solid server.
I suggest that if the solid server has its own dedicated resource about the "system" itself, e.g., http://example.org/system, it can point to its own Inbox. There is no need to introduce the concept of a default container, and it will work with any conforming LDN implementation out of the box.
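As a sketch of what that could look like (URIs here are just examples), the server-level resource would advertise its inbox with the standard LDN discovery triple:

```turtle
# Illustrative URIs only: the server's own resource advertising its LDN inbox
@prefix ldp: <http://www.w3.org/ns/ldp#> .

<http://example.org/system> ldp:inbox <http://example.org/system/inbox/> .
```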
@melvincarvalho - It wouldn't work for us to apply logic after the data has been written because we need to transform the data before other observers are notified. I'm curious though, where is your monitor process running?
@dduran1967 ah, so that's definitely on the roadmap - keeping an access log of all requests. We were thinking of keeping a log either per user account or per folder.
I agree that the log should be in RDF form -- do you happen to know any appropriate log vocabs/ontologies?
@csarven - I agree, a server should have an account of its own. (and maybe an inbox)
@dduran1967 the nice thing about the decentralized nature of solid is that the process can run anywhere. In this case, I actually don't run the inbox monitor on the server where the inbox lives (though that would be faster), but rather on my local machine.
If you're interested, here's my rough prototype code:
https://github.com/solid-live/solid-inbox/blob/gh-pages/bin/checkinbox.js
I think there's a vocab used in rdflib for http - https://www.w3.org/TR/HTTP-in-RDF/
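As a rough illustration of what a single logged request could look like with that vocabulary (identifiers and values are invented, and the exact property names should be double-checked against the spec):

```turtle
@prefix http: <http://www.w3.org/2011/http#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# One logged request (URIs and values made up for this sketch)
<#req-0001> a http:Request ;
    http:methodName "POST" ;
    http:requestURI "/inbox/" ;
    dct:date "2017-06-12T14:03:22Z"^^xsd:dateTime ;
    http:resp [ a http:Response ; http:statusCodeValue "201" ] .
```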
I also agree there must be a server level account to secure access and integrity of certain data.
I think these logs should be somewhere under /system and not per folder to prevent loss of logs when a container is deleted.
> I also agree there must be a server level account to secure access and integrity of certain data.
So, one of the things we need to add in the near term is the idea of server-controlled resources. For example, for issue #111 (Server should keep track of who created a resource), the server needs to record (and later return on request) which user (webid) created a particular resource via PUT/POST/PATCH. The current proposed method of implementation is to store this info in a file's corresponding .meta resource, and to give control over the .meta solely to the server (it wouldn't be user-editable).
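For concreteness, a server-written .meta companion might contain something like the following. This is purely illustrative: the file name and webid are made up, and dct:creator is just one possible predicate; the actual vocabulary is whatever issue #111 settles on.

```turtle
# Hypothetical contents of /inbox/msg1.ttl.meta, written by the server on POST
# and not editable by the user.
@prefix dct: <http://purl.org/dc/terms/> .

</inbox/msg1.ttl> dct:creator <https://alice.example.org/profile/card#me> .
```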
Similarly, an account's root .acl file (and the webid profile itself) would also become system-protected resources. That is, a user (or a rogue application) would not be able to delete them -- they could only be deleted via a Delete Account API call.
So, instead of a single server account to preserve the integrity of certain data, the integrity would be protected by the server implementation itself, on the API level.
> I think these logs should be somewhere under /system and not per folder to prevent loss of logs when a container is deleted.
This goes somewhat against Solid's core philosophy of being user-centric. Just as the user has control over their own data, they should also have control over the access logs for that data. Given that, it makes sense to keep the access logs in each folder (in an SVN-like .accesslogs subfolder, for example), so that the user can delete them along with deleting the folder.
As far as the server administrators being able to keep track of (and access) request logs -- they already have that capability, just by virtue of controlling the server. For example, node-solid-server (like all Node apps) is not meant to be deployed by listening directly on a public port. Instead, it's meant to be fronted by something like Nginx or Apache, and each of those has excellent access logging capabilities (which regular users don't have access to, and wouldn't be able to delete).
I'm still learning the subtleties of terminology so please forgive me for being less than precise with my statements.
> So, instead of a single server account to preserve the integrity of certain data, the integrity would be protected by the server implementation itself, on the API level.
That's exactly what I was attempting to express so I agree 100%.
We're also on the same page about keeping user data in user controlled space. By "/system" I assume the root of the user's pod, not the server root.
By "/system" I assume the root of the user's pod, not the server root.
Ahh, I see, ok.
> I'm still learning the subtleties of terminology
Hey, it's totally ok - the terminology is complicated! :)
In the sense that when we talk about "solid servers" we're often switching between at least two different meanings: a multi-user identity provider solid server (like databox.me), and a single-user personal solid server (like one for a personal website).
We've also tried, as much as is possible, to have each account (in the case of a multi-user IDP server) be treated as if it owns its own virtual server -- it has its own subdomain, api endpoints, and so on.
Yes, I understand the multi-tenant / idp configuration, this is also how we are configured.
We are building a data and application integration solution using an event streaming pattern on top of LDN + SOLID.
For development we've just been using inbox, but it seems to me that solid inbox and timeline are meant for higher-level events presented to the data owner, and mixing that type of data with tons of system events and errors doesn't seem right.
Default containers for system events seem like core spec material but I'm not finding anything. Has this been considered?