I'll need some help from the bubblewrap maintainers to land this change. Please review it carefully: I'm not completely sure whether it breaks anything, but it seems okay.
I ran the project's existing tests, and they passed. I also launched various software from Flatpak, replacing the system bwrap with my builds, both with and without the algorithm changes. In both cases I printed the contents of mountinfo at the very end of setup_newroot(); the list of mount points and their flags appears to be exactly the same on the data I've tried.
This change improves the complexity to O(N log N). I also measured the performance difference:
flatpak run com.microsoft.Edge, with and without the optimization:
0.013712 => 0.004324
0.040225 => 0.006671
0.004751 => 0.002611
A test where I pass bwrap 2000 "--bind" flags:
bwrap --dev-bind / / --bind /usr /tmp/1 --bind /usr /tmp/2 ... --bind /usr /tmp/2000 /bin/true
Before: 14.038314
After: 0.218774
Below I describe how the algorithm works.
First, this is not an online problem: all bind operations and their order are known up front from the arguments, and the algorithm exploits that. Instead of rechecking /proc/self/mountinfo before every --bind operation, we:
Just mount whatever the user asked for, with no questions asked.
Save all these "requests" in a list, and then optimize the work with knowledge of the whole problem.
After all the mount operations have completed, we call the bind_mount_fixup() function to remount all the mount points with the correct flags. It:
Collects all the data about mount points from /proc/self/mountinfo
Builds a tree of mount points from that information
Goes through the array of mount operations and collects all the nodes that correspond to them
While at the same time "virtually" propagating the changes through the graph to the submounts
Goes through all the actual mount points from mountinfo, queries the correct flags from the graph, and remounts them if needed
Also, when we collect the mount operations, we add to the list the mounts of proc, dev, mqueue and tmp as well. We will collect them from mountinfo into the graph anyway, and the added operations prevent us from changing the flags on those mounts to incorrect values.
Also, when we bind a file we create a temporary file, as before. Previously we deleted those files right after they became unneeded. In my pull request we save the paths of all those temp files and unlink() them later, right after bind_mount_fixup() has run. That's because if we delete them earlier, their symlinks become dangling, which breaks the retrieve_kernel_case() function.
Below is more context about lazy propagation of flags:
If we propagated the flags naively with a DFS, we would still have O(N^2) complexity.
That's why the main part of the algorithm is lazy propagation on the graph.
After the tree of mount points is built (step 2):
Run an Euler tour on the graph: start a DFS from the root and assign:
start[node] - the time at which we enter this node during the DFS
finish[node] - the time at which we leave this node during the DFS.
Now for any node X and any node Y in its subtree, start[X] < start[Y] <= finish[Y] <= finish[X].
That way we can map each node to an index in an array.
And when we assign a flag to the interval start[X] .. finish[X] of that array, we assign it to all the submounts of that mount.
Initialize a segment tree with lazy propagation of assignments.
A segment tree is a data structure that is useful when you have an array of values and you want to:
Change values in the array fast.
Query the sum over an interval fast.
When we set a flag on a subtree, we assign 1 over the corresponding interval.
When we unset a flag on a subtree, we assign 0 over the interval.
When we check whether a flag is set on some node X, we just query the length-1 interval [start[X], start[X]] of the segment tree.
So when we set/unset the NODEV/RDONLY flag on the subtree of X, we do a lazy update assigning the flag over all indices from start[X] to finish[X].