Open ga-it opened 3 months ago
Groupfolders' user access lists are respected when crawling for a user's data. What might not work yet is adding a user to a groupfolder after the user's files have been crawled, as well as the other way around.
Thanks @marcelklehr - does that imply that the embeddings databases are separately created for each user?
I am not sure of the overhead of that design on a large corporate installation with TBs of data (e.g. multiple embeddings databases and multiple crawls)?
For that reason, and to handle changes in permissions, do you think the linked approach may work better - i.e. one embeddings database with permissions applied dynamically per user context?
does that imply that the embeddings databases are separately created for each user?
Yes, that's how it is at the moment. We are aware that this is not ideal.
Is a change on the roadmap or should I file the above as a feature request?
It's on the roadmap, yes :)
I cannot find a way to log this as a feature request on this repo.
As per chat with @marcelklehr
As I understand it, context chat currently creates embeddings for each user for the files they have access to.
The context chat backend should instead create one set of embeddings and overlay permissions on the LLM/RAG query.
We have a huge number of shared files in group folders - Nextcloud is our document management system - TBs.
If I understand the embedding process correctly, embedding storage and compute are multiplied by the number of users with access to each file.
Our GPUs (RTX 4090) have been fully occupied for weeks by the Context Chat backend in its crawl process.
A further impact is that as permissions change, embeddings can fall out of sync with user permissions.
One set of content embeddings should be created and kept up to date for content in the Nextcloud instance.
A separate permissions matrix should be maintained and applied to LLM/RAG queries.
By way of an example from another development, one embeddings database with permissions applied seems to be what is outlined here (another application, not Nextcloud, by way of example):
https://www.osohq.com/post/authorizing-llm
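To illustrate the proposal, here is a minimal sketch of the "one shared embeddings set plus a permissions overlay" idea: retrieved chunks are filtered against an access-control list before they ever reach the RAG prompt. All names here (`Chunk`, `PermissionIndex`, `filter_retrieved`) are hypothetical illustrations, not Nextcloud or Context Chat APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """An embedded document chunk, tagged with its source file id."""
    file_id: str
    text: str

@dataclass
class PermissionIndex:
    """Hypothetical permissions matrix: file id -> set of user ids with read access.

    In a real deployment this would be kept in sync with Nextcloud's
    share/groupfolder ACLs rather than held in memory.
    """
    acl: dict = field(default_factory=dict)

    def can_read(self, user_id: str, file_id: str) -> bool:
        return user_id in self.acl.get(file_id, set())

def filter_retrieved(chunks: list, user_id: str, perms: PermissionIndex) -> list:
    """Drop retrieved chunks the user may not read before building the prompt."""
    return [c for c in chunks if perms.can_read(user_id, c.file_id)]
```

Because the filter keys on file ids rather than on per-user copies of the embeddings, a permission change only requires updating the ACL, not re-embedding any content.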
I understand this is on the roadmap.
The urgency for us: our servers and GPUs have been running non-stop for weeks crawling and embedding the current files.
If possible, when this is implemented, provide a migration path that reuses the current embeddings in the shared embeddings model, to prevent yet another full crawl!
Describe the bug Context chat allows questions without a specified file or folder context. To the extent that this answer is produced from existing embeddings / vector database, what is the risk that this answer will include information from a context the user does not have access to?
The risk does not arise via the referenced source documents: if these include sources the user does not have access to, the user will not be able to follow the link through to the document. But if content from those documents is included in the answer itself, that clearly breaches security.
For example, if there is a group folder with salary information that is only accessible to finance staff, but embeddings are generated from it and then available for general query, this would breach folder security.
Further, if this is correctly handled, are changes to a user's permissions retroactively applied to previously generated embeddings?
I am uncertain if this is possible, but cannot see any documentation as to how this is handled.
To Reproduce Steps to reproduce the behavior:
Expected behavior Context chat answers should be based on a combination of the LLM used by Context Chat and only those embeddings that are consistent with the user's Nextcloud permissions. Technically I am unsure how this would be accomplished, but conceptually imagine that a user's permission matrix could be applied to the embeddings to block answers drawing on content they do not have access to.
One embeddings database with permissions applied seems to be what is outlined here (another application, not Nextcloud, by way of example):
https://www.osohq.com/post/authorizing-llm
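Conceptually, the expected behavior could also be enforced earlier, at retrieval time: the nearest-neighbour search over the shared vector index is restricted to file ids the user is permitted to read, so out-of-scope content can never reach the prompt. This is a toy sketch with an in-memory index; the function names and index layout are assumptions for illustration, not the Context Chat implementation.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def permitted_top_k(query_vec: list, index: list, allowed_file_ids: set, k: int = 3) -> list:
    """Return the k most similar index entries the user is allowed to see.

    index: list of (file_id, embedding_vector, chunk_text) tuples.
    Entries are pre-filtered by the user's allowed file ids, so the
    similarity ranking only ever considers permitted content.
    """
    candidates = [e for e in index if e[0] in allowed_file_ids]
    candidates.sort(key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return candidates[:k]
```

Pre-filtering like this (as opposed to filtering after retrieval) also avoids the failure mode where all top-k hits are forbidden and the user gets an empty answer despite having accessible matches further down the ranking.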