When famfs creates a file (famfs_mkfile() in the api, normally from 'famfs cp' or 'famfs creat'), it will fail if the file already exists. But if a rogue delete has taken place, a subsequent cp or creat of the same relative path will not see the file in the mounted filesystem - and will proceed to create and log a new file instance.
A rogue delete is any 'rm' that did not go through the famfs api/cli - and since the api/cli does not currently support delete, every delete is a rogue delete.
This would effectively make a mess of things.
The log would contain two file creates of the same relative path, with different allocations.
Logplay would instantiate the first and ignore the second.
The master node would have the second file (only the master can create files), but any client that has not experienced the rogue delete would have the first file - which would not map to the same memory as the "rogue create" on the master.
This could be solved by building a hash table of relative paths during logplay (master only), in order to 1) detect relative-path collisions in the log, and 2) detect famfs_mkfile() or famfs_mkdir() calls that would generate relative-path collisions in the log (which are detectable via the mounted namespace as long as there have been no rogue namespace operations).
One downside is that this worsens the time complexity of file creation; creation is already kinda expensive because space allocation plays the log to build the free/available bitmap - which is not persisted. There is no "easy" way to persist the hash table either, so it might need to be regenerated on each [batch of] file creates. (Batches because 'cp *', 'cp -r', and 'mkdir -p' lock the log and build the bitmap once for a batch of creates.)
Hmm. We could persist the bitmap in a new meta file, exposed only on the master. The hash table could be handled the same way. Extend the flock(log) to cover those files, and it might be a fully working approach worth considering... eventually.
This has not been observed in the wild; will put this "on ice" initially, but may need to revisit it.