Only the latency #s are 0'd out on your view, correct? It opens fine on my LibreOffice and the actual reported latencies are all 0.
In the JSON I do see (multiple!) reported lat_ns fields for each test. My guess is that fio-3.1 changed the format slightly. I'll make a note to check it under Linux as well and verify this.
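For reference, this is the kind of thing I'm checking for in the JSON; a rough sketch of pulling those fields out, assuming fio 3.x's `--output-format=json` layout (the file name and exact field nesting are my assumptions):

```python
# lat_check.py — rough sketch; assumes fio 3.x JSON layout (jobs[].read.lat_ns.mean etc.)
import json

with open("fio-output.json") as f:
    result = json.load(f)

for job in result["jobs"]:
    for direction in ("read", "write"):
        lat = job[direction].get("lat_ns", {})
        # mean latency reported in nanoseconds; convert to microseconds for readability
        print(job["jobname"], direction, "mean lat (us):", lat.get("mean", 0) / 1000.0)
```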
I just pushed a fix tested under Linux, but I'm away from my Windows laptop today so can't try the PS1 update.
If you could run it on a VHD (in Disk Management, create and initialize a 4 GB VHD virtual disk image), you'd find out much faster whether the fix solves your problem; then let it run on your real system if you really care about latency reports on the SSD under test.
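If scripting it is easier than clicking through Disk Management, something like this diskpart script should do roughly the same thing (the path, size, and comments are mine, not part of ezfio; double-check which disk number the VHD gets before pointing any test at it):

```
rem save as make-test-vhd.txt and run:  diskpart /s make-test-vhd.txt
create vdisk file="C:\temp\ezfio-test.vhd" maximum=4096 type=expandable
select vdisk file="C:\temp\ezfio-test.vhd"
attach vdisk
rem initialize the newly attached disk so Windows sees it; note its disk number
convert gpt
```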
I just ran it on my laptop here at work with a VHD drive, using FIO 3.1 from Bluestop and am seeing proper latency reporting going on.
When you validate this, can you close the issue?
Thx -EFP3
OK! Here is the latest update.
First, the Sustained Performance Stability 4KB Random test takes way too long! Is this normal?
I mean, 18 hours to test a 4 GB VHD on a Samsung 960 PRO!
This time, Sustained 4K Random Read & Write Tables are OK on the VHD.
Here is the result: ezfio-master.zip
If everything is OK, I'll re-run it on the physical drive. It took more than 30 hours to finish the test on the 1 TB SATA drive.
Oh, BTW, after this much testing and bug reporting, I really believe I deserve some credit :)
Thanks
BTW, I got a new copy of FIO, v3.3 for Windows. I began running another test on the 4 GB VHD. Will share the results with you.
OK! Looks like v3.3 solved this looong test problem!
And here is the second run on the 4 GB VHD with FIO v3.3
First of all, thanks for actually testing the fixes and reporting back! I'll drop a note in the README.md as to your valiant (and lengthy!) efforts! :)
FWIW the new VHD #s are many times the actual HW #s you had earlier, but they look OK. I think on the real device you're going to see the same ~20K mixed IOPS as before... my guess is Windows is caching the VHD writes (or the VHD was on an NVMe drive, which really could give >500 MB/s sequential performance).
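One way to rule the Windows cache in or out is a standalone fio job that forces unbuffered I/O. A minimal sketch follows; the disk number, block size, and read/write mix are placeholders I picked, not what ezfio itself generates, and pointing random writes at the wrong PhysicalDrive will destroy data:

```ini
; cache-check.fio — run with:  fio cache-check.fio
; direct=1 bypasses the Windows cache; rerun with direct=0 to compare cached behaviour
[global]
ioengine=windowsaio
direct=1
thread=1
bs=4k
iodepth=32
runtime=60
time_based=1

[vhd-mixed]
rw=randrw
rwmixread=70
; the attached VHD's disk number — double-check it before running; this overwrites data
filename=\\.\PhysicalDrive2
```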
Since the exact same command line is fine on 3.3, I'd say there's something fishy w/ BlueStop FIO 3.1 under Windows. I'll grab a real server today and run to completion with both. I aborted the run on my laptop yesterday after a few of the tests, once I saw it was working, because it really kills my system performance.
The too-long FIO test run log itself says "runtime: 1201", which is exactly right (20 mins × 60 secs/min = 1,200). But I can tell FIO continued to do IO afterwards, because the iostat.csv file has *hours* of logs of high IO!
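If it's useful for confirming that, here's a rough sketch of measuring how long the logged I/O actually lasted; I'm assuming the first column of iostat.csv is a UNIX timestamp in seconds, which may not match the real file layout:

```python
# duration_check.py — rough sketch; assumes column 0 of iostat.csv is a UNIX
# timestamp in seconds, which may not match the actual file layout.
import csv

timestamps = []
with open("iostat.csv", newline="") as f:
    for row in csv.reader(f):
        if not row:
            continue
        try:
            timestamps.append(float(row[0]))   # skip headers / non-numeric rows
        except ValueError:
            continue

if timestamps:
    span = max(timestamps) - min(timestamps)
    print(f"I/O activity was logged over {span / 3600:.1f} hours ({span:.0f} s)")
```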
One other thing, ezfio isn't really intended for consumer drives. Your model's not too bad, but preconditioning is going to take a long time. For comparison, the last SSD (enterprise NVMe) I was running did >500,000 mixed 4K IOPS (i.e. 25x your original SSD in this issue) on the sustained test, and was only 4x as large as yours so ezfio completed much faster.
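To put a rough number on the preconditioning cost, here's a pure back-of-envelope sketch; the pass count and write speed below are assumptions for illustration, not ezfio's exact behaviour:

```python
# Rough estimate: assume the whole test area gets written a couple of times
# at the drive's sustained sequential write speed before measurements start.
capacity_gb = 1000          # assumed drive size (e.g. a 1 TB SATA drive)
sustained_write_mb_s = 300  # assumed sustained sequential write speed
passes = 2                  # assumed number of full-capacity write passes

seconds = passes * capacity_gb * 1000 / sustained_write_mb_s
print(f"~{seconds / 3600:.1f} hours just for preconditioning")  # ~1.9 hours with these numbers
```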
You are welcome and thank you.
The VHD is on my 960 PRO NVMe SSD. But the first benchmarks were run on a SATA SSD.
The system did not go to sleep. I always set the power plan to High Performance and disable sleep. I also connect to the PC remotely via TeamViewer to check the test status. I am 99% sure that this was a bug in FIO 3.1; 3.3 solved it.
So, I think after all, with FIO 3.3 and the latest ezFIO, we are good to go.
Oh, BTW, my name is Recep Baltaş. :)
I noticed FIO running too long w/ 3.1 on my Windows Server 2012 VM (but running on a real server with a PCIe SSD) as well. I'll make a note to use the latest one Jens has linked to (the one from CI).
For your name, I'm not sure if I can make the ş come out properly in README.md due to ASCII code pages. Any problem leaving the s-cedilla as plain "s" in attribution?
Yepp. Definitely a problem with 3.1.
And yes, "s" is fine :)
So it looks like s-cedilla comes out just fine in the codepage used, so your name is in there properly. Looks like we're good now, so I'm closing this issue. Thx!
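For the record, a trivial check (Python, just illustrative) that the character survives as long as the file stays UTF-8:

```python
# "ş" is U+015F (LATIN SMALL LETTER S WITH CEDILLA); it round-trips fine in UTF-8,
# but it has no mapping in plain ASCII or many legacy code pages.
name = "Recep Baltaş"
encoded = name.encode("utf-8")          # b'Recep Balta\xc5\x9f'
assert encoded.decode("utf-8") == name
print(hex(ord("ş")))                    # 0x15f
```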
(Just for the record this was probably fixed by some of the integer overflow fixes that went into 3.2 - https://github.com/axboe/fio/commits/cae9edd999e5233a1ca54d34cd18d90596f125b6 )
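(Illustrative only, not the exact fio bug, but it shows why nanosecond latency accounting is prone to this: a 32-bit counter of nanoseconds wraps after only a few seconds, so accumulated values can silently lose their high bits.)

```python
# A 32-bit unsigned counter of nanoseconds wraps after ~4.29 s, so summing or
# scaling ns-scale latencies in 32 bits can report nonsense (or zero) values.
wrap_ns = 2**32
print(wrap_ns / 1e9)                 # ~4.295 seconds until wraparound

total_latency_ns = 6_000_000_000     # e.g. 6 s of accumulated latency
print(total_latency_ns % wrap_ns)    # what a 32-bit sum would actually hold
```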
Oh, and the reduced performance seen between 3.1 and 3.3 is probably because best-effort invalidation at job start (when doing otherwise cached I/O) was added to the windowsaio ioengine in 3.2 (see https://github.com/axboe/fio/commit/8300eba59e941f917fcc27ae10126e51bf0935b5 ).
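So roughly, the 3.2+ behaviour is what you'd get from fio's invalidate option actually taking effect on Windows. A tiny job fragment (placeholder disk number) if anyone wants to compare cached runs with and without it:

```ini
; cached (buffered) I/O through the Windows cache
; invalidate=1 is fio's default and, as of 3.2, windowsaio honours it best-effort at job start;
; invalidate=0 should roughly approximate the pre-3.2 (no invalidation) behaviour
[cached-read]
ioengine=windowsaio
direct=0
invalidate=1
rw=randread
bs=4k
runtime=30
time_based=1
; placeholder — point this at the VHD's disk number
filename=\\.\PhysicalDrive2
```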
Thanks for the info. I've not been following the Windows fork closely, but as far as I know we're all good as of the latest revs. Weird that the overflow would manifest itself only in a couple specific releases (the ezfio test pattern hasn't changed in ages).
Hi,
here is one of my latest runs (it took about 30 hours, I think). The Sustained 4K Random Read & Write tables look empty. Could you please check?
Regards
ezfio-master.zip