jpos / jPOS-EE

jPOS Extended Edition
http://jpos.org
GNU Affero General Public License v3.0

Java 8 does not properly flush buffers to disk on OSX #153

Open jameshilliard opened 4 years ago

jameshilliard commented 4 years ago

While investigating a performance regression in Java 9 and newer, it was discovered that Java 8 does not properly flush the buffers to disk. So we should probably see if we can fix this on Java 8 and find a workaround for the performance issue on Java 9 and newer on OSX. See #152 and this for details.
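
For context, a binlog-style writer depends on FileChannel.force() to push written bytes out of the OS page cache onto the device (later comments in this thread reference FileChannel.force, so that appears to be the relevant call). A minimal sketch of that write-then-force pattern, using a made-up file name and class rather than the actual jPOS-EE code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ForceDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical file, for illustration only
        Path path = Paths.get("/tmp/force-demo.dat");
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("entry".getBytes()));
            // At this point the bytes may still only be in the OS page cache.
            // force() is what is supposed to push them to stable storage;
            // the report here is that on Java 8 / OSX it does not actually do so.
            ch.force(false);
        }
    }
}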

alcarraz commented 4 years ago

Does anyone have this same performance problem with java 8 in linux?

I believe I have never (or at least not for a long time) been able to run a full test in Linux without interrupting it, because of the binlog module. Maybe Java 8 in Linux did always flush to disk.

jameshilliard commented 4 years ago

I believe I have never (or at least not for a long time) been able to run a full test in Linux without interrupting it, because of the binlog module

Is this an issue you are seeing for all java versions or just java 8?

Maybe Java 8 in Linux did always flush to disk

That was always my assumption; the change between Java 8 and Java 9 only seems to be an issue on OSX.

alcarraz commented 4 years ago

Is this an issue you are seeing for all java versions or just java 8?

In java 11 too, and it also takes travis a while:

> Task :modules:binlog:test
Finished generating test XML results (0.002 secs) into: /home/travis/build/jpos/jPOS-EE/modules/binlog/build/test-results/test
Generating HTML test report...
Finished generating test html results (0.02 secs) into: /home/travis/build/jpos/jPOS-EE/modules/binlog/build/reports/tests/test
Stored cache entry for task ':modules:binlog:test' with cache key f211c7caf11e0727c8dfc2d69a79c412
:modules:binlog:test (Thread[Execution worker for ':',5,main]) completed. Took 3 mins 15.599 secs.

On my machine it takes a lot longer; I don't know if Travis performs the test on an in-memory filesystem or an SSD disk:

JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64/ gradle test
Starting a Gradle Daemon, 2 stopped Daemons could not be reused, use --status for details

> Configure project :modules:qi-core
Using Gradle Vaadin Plugin 2.0.0.beta2
<===========--> 85% EXECUTING [6m 51s]
> :modules:binlog:test > 0 tests completed
> :modules:binlog:test > Executing test org.jpos.binlog.BinLogTest

7 minutes and counting ...

jameshilliard commented 4 years ago

Hmm, what type of hard disk is your Linux system using? Also what distribution and kernel version is it?

alcarraz commented 4 years ago

It's a standard magnetic hard drive. The kernel and system programs are on an SSD.

Kernel version is 5.4.0, distribution Ubuntu 20.04

Still running:

<===========--> 85% EXECUTING [54m 7s]
> :modules:binlog:test > 0 tests completed
> :modules:binlog:test > Executing test org.jpos.binlog.BinLogTest

alcarraz commented 4 years ago

I tried debugging it once; it was not stuck, just advancing really slowly.

jameshilliard commented 4 years ago

Hmm, what does sudo hdparm -I /dev/sda show, with /dev/sda being the magnetic disk path?

alcarraz commented 4 years ago

Not sure which part of the output you are interested in; I don't have any other problem with this setup.

jameshilliard commented 4 years ago

Looking for the output of that (the model might be enough) to rule out it being a bad drive.

alcarraz commented 4 years ago

I really don't think it's the hard drive; I could not be using this machine for development and as a daily driver if that were the case, but since you insist:

 sudo hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
    Model Number:       ST1000LM035-1RK172                      
    Serial Number:      WDERAMBJ
    Firmware Revision:  LCM2    
    Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

I'm starting to think the problem is some kind of deadlock with the ext4 filesystem; building on the SSD disk, the same thing happens. It's been happening for at least the last year, so there have been several kernel and OS upgrades.

I will have to create a partition to test with another FS; I would love to hear from someone running Linux with an ext4 FS who can run this test without problems.

alcarraz commented 4 years ago

My bad, in fact it's using the same tmp dir in both cases 🤦, but it's just writing very slowly to the tmp file.

/tmp/binlog-5932872186016472376/00000001.dat

jameshilliard commented 4 years ago

I really don't think it's the hard drive; I could not be using this machine for development and as a daily driver if that were the case, but since you insist

Most daily development doesn't involve forced flushing of dirty pages to the disk, so your page cache will compensate for poor drive performance if you have enough RAM.

Model Number: ST1000LM035-1RK172

Yeah, this is a DM-SMR drive (according to this at least), which means it will have terrible random write performance, to the point where, say, rebuilding a ZFS array would likely fail because the drive is so slow that the filesystem driver would think it died. From my understanding, this test forces a flush of dirty pages to the disk on Linux, while most writes in general just end up in the page cache and are written to disk periodically. A number of storage vendors recently got caught swapping out CMR drives for DM-SMR drives without telling people.

My bad, in fact it's using the same tmp dir in both cases 🤦, but it's just writing very slowly to the tmp file.

Weird, I suspect the fsync operation is somehow hitting the DM-SMR drive and slowing down the test; maybe the way the call is being made forces a full fsync for all disks, including those the file isn't being written to? We should probably check the codepaths called by FileChannel.force on Linux or try seeing what's going on in strace.
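
One way to check that would be to run a tiny probe (hypothetical class, not part of the repo) under something like strace -f -e trace=fsync,fdatasync and see which sync syscall force() actually issues; my understanding is that on Linux force(false) maps to fdatasync(2) and force(true) to fsync(2), both scoped to the single file descriptor rather than to every disk, but that is worth confirming on the affected setup.

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ForceProbe {
    public static void main(String[] args) throws Exception {
        // Temp file just for the probe; deleted afterwards
        Path tmp = Files.createTempFile("force-probe", ".dat");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(new byte[]{1}));
            ch.force(false); // expected to show up as fdatasync(fd) on Linux
            ch.force(true);  // expected to show up as fsync(fd) on Linux
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}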

alcarraz commented 4 years ago

Yes, I didn't realize this was related to random access; clearly a magnetic drive is not good for that.

In that case there is no point in running this test on developers' machines, since not everyone has tmp mounted on an SSD drive.

I ran it on my Raspberry Pi, which has tmp on the SD card, and at least it finished (in 22m).

I would suggest disabling this test in the normal run and only running it on purpose.

But more importantly, we need to decide whether this is the expected behavior of the module on a magnetic drive or whether we need to fine-tune it, and keep the test out until we do that.
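
One way to keep it out of the normal run while still letting people run it on purpose, sketched here assuming the test uses JUnit 5 and with a made-up system property name, would be to gate it on a property:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledIfSystemProperty;

class BinLogSlowTest {

    // Only runs when the test JVM is started with -Dbinlog.slow.tests=true
    // (wiring that property through the Gradle test task is left out here).
    @Test
    @EnabledIfSystemProperty(named = "binlog.slow.tests", matches = "true")
    void fullBinLogRoundTrip() {
        // the existing BinLogTest body would go here
    }
}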

jameshilliard commented 4 years ago

clearly a magnetic drive is not good for that

Well, it's more than just that: DM-SMR drives are far worse than even a typical low-end magnetic drive from 10 years ago. It's kind of crazy that vendors are putting them in normal laptops/desktops when they are really only suitable as non-RAID, infrequent-access file archive drives.

But more importantly, we need to decide whether this is the expected behavior of the module on a magnetic drive or whether we need to fine-tune it, and keep the test out until we do that.

Well, I'm assuming the flushing-to-disk behavior is intentional, to ensure data integrity.

jameshilliard commented 4 years ago

I refactored binlog to use the new NIO file APIs in #160. I'm assuming it probably won't make much difference for this particular issue, but it might be worth trying out, as it should in theory have some performance advantages.
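
For anyone curious what that looks like in practice, the general shape of such a change (a rough illustration only, not the actual content of #160) is moving from a channel obtained via RandomAccessFile to one opened through the java.nio.file API with explicit open options:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class ChannelOpenSketch {
    // Old java.io style: channel obtained from a RandomAccessFile
    static FileChannel openOld(File f) throws IOException {
        return new RandomAccessFile(f, "rw").getChannel();
    }

    // java.nio.file style: channel opened directly with explicit options
    static FileChannel openNew(Path p) throws IOException {
        return FileChannel.open(p,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE);
    }
}

Functionally the two should behave much the same for plain reads and writes; the java.nio.file variant mostly makes the open options explicit.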

jameshilliard commented 4 years ago

Updated #160 again with some more changes to use the AsynchronousFileChannel API instead of the regular FileChannel API.
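
For reference, a minimal sketch of writing and syncing through AsynchronousFileChannel (again just an illustration with made-up names, not the actual #160 code):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncWriteSketch {
    public static void main(String[] args) throws Exception {
        // Temp file just for the sketch
        Path path = Files.createTempFile("async-sketch", ".dat");
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                path, StandardOpenOption.WRITE)) {
            Future<Integer> pending = ch.write(ByteBuffer.wrap("entry".getBytes()), 0L);
            int written = pending.get(); // block until the async write completes
            ch.force(false);             // flush the content to the device
            System.out.println("wrote " + written + " bytes");
        } finally {
            Files.deleteIfExists(path);
        }
    }
}

Note that force() itself is still a blocking call here, so whether the async API helps with the flushing cost on a slow drive is an open question.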