As pointed out by Shashank Agrawal, the currently documented method does not materially increase the number of operations an attacker needs to brute-force the correct input (a single OTP code, a double OTP, or an OTP with Controlled Randomness). The reasoning is simple: the attacker can iterate through all possible inputs, follow the documented method to reconstruct the neighbor for each candidate, and generate the required Merkle Proof as usual.
The analysis only makes sense for brute-forcing the positions of the leaves; it is not the correct analysis for an attacker aiming to recover the OTP for a given timestamp.
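For intuition, here is a minimal sketch of that observation. The single hash comparison below stands in for the documented neighbor reconstruction and Merkle Proof verification; the helper names and derivation are assumptions for illustration, not the actual scheme.

```python
import hashlib

# Toy stand-in for the documented scheme: the real method reconstructs the
# neighbor from the OTP and produces a full Merkle Proof against the
# committed root; here a single hash comparison plays that role.
def leaf_from_otp(otp: int, timestamp: int) -> bytes:
    return hashlib.sha256(f"{otp}:{timestamp}".encode()).digest()

def brute_force(target_leaf: bytes, timestamp: int):
    # Enumerate every 6-digit OTP and check each candidate exactly as an
    # honest client would -- roughly 10^6 cheap hash-and-compare operations.
    for otp in range(10**6):
        if leaf_from_otp(otp, timestamp) == target_leaf:
            return otp
    return None

# The scrambled layout adds no meaningful work to this loop.
secret_otp, t = 123456, 1_700_000_000
assert brute_force(leaf_from_otp(secret_otp, t), t) == secret_otp
```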
However, the method may still be very effective at preventing brute-force attacks. To see this, first consider ASIC and GPU attacks. The method hinders them because the entire scrambled memory layout has to be loaded into memory for each brute-force attempt, and the layout can be made arbitrarily large by inserting noise into the output space. This makes any ASIC or GPU attack challenging, because such massively parallel devices have difficulty fitting a large volume of memory and perform poorly when forced to randomly access a non-local memory space.
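A rough sketch of the per-attempt cost, assuming a hash-based mapping from candidate OTP to a slot in the scrambled layout; the constants, names, and mapping below are illustrative rather than the documented layout:

```python
import hashlib

# Minimal sketch of the memory-hardness argument, not the documented layout.
# LAYOUT_SLOTS and CHUNK_BYTES are illustrative; the layout can be grown
# arbitrarily by padding it with noise chunks.
LAYOUT_SLOTS = 4 * 2**20    # e.g. ~4M chunk slots
CHUNK_BYTES = 32

def slot_of(otp: int, seed: bytes) -> int:
    # Each candidate OTP maps to an essentially random slot, so every
    # brute-force attempt forces a non-local access into the large layout.
    digest = hashlib.sha256(seed + otp.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:8], "big") % LAYOUT_SLOTS

def attempt(layout: bytes, otp: int, seed: bytes) -> bytes:
    # A GPU/ASIC core must keep (or repeatedly fetch) the whole layout to
    # evaluate this, which is what undermines cheap massive parallelism.
    pos = slot_of(otp, seed)
    return layout[pos * CHUNK_BYTES:(pos + 1) * CHUNK_BYTES]
```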
Now consider extending the projected scrambled memory layout (for storing chunks) to an exceedingly large space (e.g. 2^256 slots) instead of the dense array of size 4M as documented. The space can be stored off-memory and off-disk, e.g. on IPFS or in a local virtual filesystem. It can also be shared with other users, whose data fill in the gaps and serve as noise for the attacker. Because the real user knows the correct OTP(s), they can retrieve the correct chunks from this large space very efficiently. The attacker, on the other hand, has to make one retrieval per brute-force attempt. Retrieval is a non-trivial operation: several hundred milliseconds to several seconds from IPFS, or several hundred microseconds to several milliseconds from a local virtual filesystem. The attacker's brute-force attack would therefore be bottlenecked by retrieval and be doomed to fail.
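A sketch of this arrangement, assuming a hypothetical chunk_address derivation and a dict-backed retrieve() standing in for IPFS or a local virtual filesystem (in practice retrieve() would pay the real network or filesystem latency):

```python
import hashlib
import time

# Sketch of the sparse-space idea. chunk_address() and the dict-backed store
# are stand-ins; in practice the space would live on IPFS or a local virtual
# filesystem, and retrieve() would pay the corresponding latency per call.
SHARED_SPACE = {}   # hex address -> chunk bytes, shared with other users

def chunk_address(otp1: int, otp2: int, wallet_seed: bytes) -> str:
    # Project the OTP pair into a 2^256 address space.
    data = wallet_seed + otp1.to_bytes(4, "big") + otp2.to_bytes(4, "big")
    return hashlib.sha256(data).hexdigest()

def retrieve(address: str) -> bytes:
    # Stand-in for an IPFS / virtual-filesystem fetch; each call costs
    # hundreds of microseconds to several seconds in reality.
    time.sleep(0.0005)
    return SHARED_SPACE.get(address, b"")

# Legitimate user: knows (otp1, otp2), so one retrieval suffices.
# Attacker: must call retrieve() once per guess -- up to 10^12 calls --
# and is bottlenecked by retrieval latency rather than hashing speed.
```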
We still need to be careful with the size of the OTP space, so that the attacker cannot enumerate all OTPs (hence all locations) and download all possible chunks upfront. A single OTP would not be sufficient, because the total size would only be 10^6 × 4 × 8 bytes = 32 MB. Controlled Randomness cannot be used here, because legitimate users would also have to make µ retrievals to validate their own enumeration of the random parameter. But when two OTPs are used in conjunction, the size of the space under enumeration becomes 10^12 × 4 × 8 bytes = 32 TB, which should be sufficient to prevent the attacker from downloading all possible chunks upfront.
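For reference, both figures follow from the same 4 × 8 bytes-per-OTP factor; a quick check:

```python
# The figures above, assuming the same 4 x 8 = 32 bytes per enumerated OTP value.
chunk_bytes = 4 * 8

single_otp = 10**6 * chunk_bytes    # 32_000_000 bytes         ~ 32 MB
double_otp = 10**12 * chunk_bytes   # 32_000_000_000_000 bytes ~ 32 TB

print(single_otp, double_otp)
```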