I've hit a case where consecutive add()s and remove()s on a QueueFile do not wait for the write to sync to disk before the entry is remove()'d again.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import com.squareup.tape2.QueueFile;

public class Bench {
  public static void main(String[] args) throws IOException {
    Path tmp = Paths.get("tests-" + System.currentTimeMillis()).toAbsolutePath();
    Files.createDirectories(tmp);
    byte[] payload = new byte[] {0x11, 0x22, 0x33};
    QueueFile test = newTape(tmp, "test");

    // Time 1000 add() calls, each followed by a peek() sanity check.
    long start = System.nanoTime();
    for (int i = 0; i < 1000; ++i) {
      test.add(payload);
      byte[] read = test.peek();
      if (read == null || read.length != payload.length) {
        throw new IllegalStateException();
      }
    }
    long delta = System.nanoTime() - start;
    System.out.println("Time taken: " + delta + "ns");
  }

  public static QueueFile newTape(Path tmp, String name) throws IOException {
    Path tapeFile = tmp.resolve(name + ".tape");
    return new QueueFile.Builder(tapeFile.toFile()).zero(false).build();
  }
}
On Linux it performs as I'd expect: slower disks take longer than faster disks. On Windows 10 I ran this on a pretty slow HDD, and it almost outperformed a tmpfs on my Linux system, which suggests the data is never actually being written to disk.
I've made a custom build of tape2 where I added raf.getFD().sync(); after every call to raf.write(), which brought the runtime back up to where I'd expect it.
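For context, the idea behind that change is just an fsync after each buffered write. A minimal standalone sketch of the pattern (the class name and file name here are made up for illustration; this is not the actual tape2 patch):

import java.io.IOException;
import java.io.RandomAccessFile;

public class SyncedWrite {
  public static void main(String[] args) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile("demo.tape", "rw")) {
      byte[] payload = new byte[] {0x11, 0x22, 0x33};
      // The write lands in the OS page cache first...
      raf.write(payload);
      // ...and sync() blocks until the bytes actually reach the device,
      // which is what brings the benchmark timings back to disk speed.
      raf.getFD().sync();
    }
  }
}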