lburgazzoli / lb-hazelcast

Apache License 2.0
3 stars 0 forks

How can lb-hazelcast add off-heap support? #1

Closed bwzhang2011 closed 9 years ago

bwzhang2011 commented 10 years ago

Hi lb, thanks a lot for providing off-heap support for Hazelcast while that feature remains enterprise-edition only.

After tracing the call path, I noticed the problem is due to the default reference to DefaultNodeInitializer, which should be replaced by another implementation such as OffheapNodeInitializer. But it has to be created through Hazelcast's NodeInitializerFactory, and I don't see where I can set the OffheapNodeInitializer so that it runs like in the example test.

So would you mind sharing some experience on how to get it constructed? Since the factory is hidden by the Node, I could not find a way to make it compatible.

lburgazzoli commented 10 years ago

The node initializer is loaded via Java's ServiceLoader; look in `resources/META-INF/services`.

By the way, this is a hack to provide some real-world examples for OpenHFT's SharedHashMap, and it may disappear in the future :-)
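For reference, the ServiceLoader lookup boils down to something like the following sketch. The `NodeInitializer` interface and the fallback logic here are simplified stand-ins for illustration, not the actual Hazelcast types:

```java
import java.util.Iterator;
import java.util.ServiceLoader;

public class NodeInitializerLookup {
    // Illustrative stand-in for Hazelcast's NodeInitializer interface.
    public interface NodeInitializer {
        String name();
    }

    // Default used when no provider is registered on the classpath.
    public static class DefaultNodeInitializer implements NodeInitializer {
        public String name() { return "default"; }
    }

    // ServiceLoader scans META-INF/services/<interface-FQN> files on the
    // classpath; the first registered provider wins, otherwise fall back
    // to the default implementation.
    public static NodeInitializer load() {
        Iterator<NodeInitializer> it =
                ServiceLoader.load(NodeInitializer.class).iterator();
        return it.hasNext() ? it.next() : new DefaultNodeInitializer();
    }

    public static void main(String[] args) {
        // No META-INF/services entry is present in this sketch,
        // so the default initializer is returned.
        System.out.println(load().name());
    }
}
```

So to swap in an off-heap initializer, the jar providing it would ship a `META-INF/services` file naming the implementation class; no code change in the caller is needed.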

bwzhang2011 commented 10 years ago

Thanks a lot for the comment. I will give it a try; as you point out it is a hack, and I just want to try it out.

bwzhang2011 commented 10 years ago

Hi lb, I've got it running in the Example class, but when I run it under load it throws an exception like this:

    java.lang.IllegalStateException: Not enough space left in entry for value, needs 400 but only 240 left
        at net.openhft.collections.VanillaSharedHashMap$Segment.acquireEntry(VanillaSharedHashMap.java:613)
        at net.openhft.collections.VanillaSharedHashMap$Segment.acquire(VanillaSharedHashMap.java:553)
        at net.openhft.collections.VanillaSharedHashMap.lookupUsing(VanillaSharedHashMap.java:267)
        at net.openhft.collections.VanillaSharedHashMap.acquireUsing(VanillaSharedHashMap.java:258)
        at com.github.lburgazzoli.hazelcast.offheap.hft.OffHeapStorage.put(OffHeapStorage.java:70)
        at com.hazelcast.map.record.OffHeapRecord.setValue(OffHeapRecord.java:62)
        at com.hazelcast.map.record.OffHeapRecord.<init>(OffHeapRecord.java:37)
        at com.hazelcast.map.record.OffHeapRecordFactory.newRecord(OffHeapRecordFactory.java:52)
        at com.hazelcast.map.MapService.createRecord(MapService.java:409)
        at com.hazelcast.map.MapService.createRecord(MapService.java:404)
        at com.hazelcast.map.DefaultRecordStore.put(DefaultRecordStore.java:661)
        at com.hazelcast.map.operation.PutOperation.run(PutOperation.java:33)
        at com.hazelcast.spi.impl.BasicOperationService.processOperation(BasicOperationService.java:363)
        at com.hazelcast.spi.impl.BasicOperationService.access$300(BasicOperationService.java:102)
        at com.hazelcast.spi.impl.BasicOperationService$BasicOperationProcessorImpl.process(BasicOperationService.java:739)
        at com.hazelcast.spi.impl.BasicOperationScheduler$PartitionThread.process(BasicOperationScheduler.java:276)
        at com.hazelcast.spi.impl.BasicOperationScheduler$PartitionThread.doRun(BasicOperationScheduler.java:270)
        at com.hazelcast.spi.impl.BasicOperationScheduler$PartitionThread.run(BasicOperationScheduler.java:245)

I just tested it with a simple modification:

    private void run() throws Exception {
        IMap<String, String> map = newHzInstance().getMap(MAPNAME);

        int count = 10000;

        for(int i = 0 ; i < count ; i ++) {
            LOGGER.debug("put {}", map.put(String.valueOf(i), "val1"));
            LOGGER.debug("put {}", map.put(String.valueOf(i + 1), "val2"));
            LOGGER.debug("get {}", map.get(String.valueOf(i)));
            LOGGER.debug("get {}", map.get(String.valueOf(i + 1)));
        }

        Hazelcast.shutdownAll();
    }
lburgazzoli commented 10 years ago

SharedHashMap has a fixed number of entries, and for this example it is configured with a very low number; please see https://github.com/lburgazzoli/lb-hazelcast/blob/master/hazelcast-offheap-hft/src/main/java/net/openhft/collections/OffHeapUtil.java
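To see why the test overflows, here is a rough sketch of the arithmetic involved. The 400/240 figures are taken from the stack trace above; the map layout itself is simplified, since the real SharedHashMap also reserves per-entry metadata:

```java
public class OffHeapSizing {
    // The store is pre-allocated: roughly entries * entrySize bytes off-heap.
    public static long totalFootprintBytes(long entries, long entrySize) {
        return entries * entrySize;
    }

    // A put fails once the serialized entry no longer fits in its slot.
    public static boolean entryFits(int serializedEntryBytes, int entrySize) {
        return serializedEntryBytes <= entrySize;
    }

    public static void main(String[] args) {
        // A map built for 1024 entries of 240 bytes reserves about 240 KiB.
        System.out.println(totalFootprintBytes(1024, 240));
        // A serialized entry needing 400 bytes cannot fit a 240-byte slot,
        // which is exactly the IllegalStateException reported above.
        System.out.println(entryFits(400, 240));
    }
}
```

The key point is that both the number of entries and the per-entry size are fixed at creation time, so growing either the data set or the value size past the configured capacity fails instead of resizing.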

bwzhang2011 commented 10 years ago

Hi, I just tested it with different configurations for entries, segments, and entrySize, but I could not find the proper parameters.

bwzhang2011 commented 10 years ago

So would you mind telling us how to get the test to run?

lburgazzoli commented 10 years ago

I've just fixed a couple of issues, and now you can configure SharedHashMap via system properties. In addition, the defaults are now reasonable for your tests.

However, there are still a few things I need to solve, but as this is just a hack, I do not know when they will be solved.
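System-property based configuration typically looks something like the sketch below. The property names and default values here are hypothetical examples; the real keys and defaults live in the project's OffHeapUtil:

```java
public class OffHeapConfig {
    // NOTE: hypothetical property names, for illustration only.
    public static final String PROP_ENTRIES    = "lb.hazelcast.offheap.entries";
    public static final String PROP_ENTRY_SIZE = "lb.hazelcast.offheap.entrySize";

    // Long.getLong / Integer.getInteger read a system property and fall
    // back to the given default when it is unset or unparsable.
    public static long entries() {
        return Long.getLong(PROP_ENTRIES, 1_000_000L); // default is an assumption
    }

    public static int entrySize() {
        return Integer.getInteger(PROP_ENTRY_SIZE, 256); // default is an assumption
    }

    public static void main(String[] args) {
        System.setProperty(PROP_ENTRIES, "100000");
        System.out.println(entries());   // reflects the property just set
        System.out.println(entrySize()); // falls back to the default
    }
}
```

In practice you would pass these on the command line, e.g. `-Dlb.hazelcast.offheap.entries=100000`, instead of calling `System.setProperty` in code.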

bwzhang2011 commented 10 years ago

Thanks a lot for the quick fix. I think it provides a nice hack for testing off-heap storage as a replacement for the enterprise feature.

bwzhang2011 commented 10 years ago

Hope there will be an even better solution for reducing GC pressure; thanks a lot for all the great effort.

bwzhang2011 commented 10 years ago

Hi, the parameter configuration still needs improvement once we set the count to a very large number; it would also help to have some standard baseline configuration to start from when tuning.

bwzhang2011 commented 10 years ago

The reason for reopening is that once you increase the test count (just from 10000 to 100000 or more), a similar error message appears as before, and I found it hard to adjust the parameters for that.

bwzhang2011 commented 10 years ago

Sorry, I could not pass the test when I set count=100000 or more; it is hard to find parameters that make it pass.

lburgazzoli commented 10 years ago

Which error?

bwzhang2011 commented 10 years ago

It crashes the JVM directly with an access violation (I tested under Windows):

    A fatal error has been detected by the Java Runtime Environment:
    EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6e9caac3, pid=4688, tid=2256
    JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
    Java VM: Java HotSpot(TM) Client VM (24.51-b03 mixed mode, sharing windows-x86)
    Problematic frame:
    V  [jvm.dll+0x12aac3]
    Failed to write core dump. Minidumps are not enabled by default on client versions of Windows

bwzhang2011 commented 10 years ago

Here is the log trace:

    VM Arguments:
    jvm_args: -Xmx128m -Xms128m -Dfile.encoding=UTF-8
    java_command: com.zjht.channel.server.test.hazelcast.HftExample01
    Launcher Type: SUN_STANDARD

    Environment Variables:
    JAVA_HOME=D:\Java\jdk1.7.0_51
    PATH=D:/Java/jre7/bin/client;D:/Java/jre7/bin;D:/Java/jre7/lib/i386;C:\Program Files\NVIDIA Corporation\PhysX\Common;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;D:\Program Files\TortoiseGit\bin;D:\Program Files\TortoiseSVN\bin;D:\Java\jdk1.7.0_51\bin;D:\apache-maven-3.1.1\bin;d:\OracleClient\product\11.2.0\client_lite\bin;D:\apache-ant-1.9.3\bin;.;D:\apache-maven-3.1.1\bin;d:\programe files\wps2013\9.1.0.4468\office6;D:\eclipse;
    USERNAME=zhangbowen
    OS=Windows_NT
    PROCESSOR_IDENTIFIER=x86 Family 6 Model 58 Stepping 9, GenuineIntel

--------------- S Y S T E M ---------------

OS: Windows XP Build 2600 Service Pack 3

CPU:total 4 (2 cores per cpu, 2 threads per core) family 6 model 58 stepping 9, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, erms, ht, tsc, tscinvbit, tscinv

Memory: 4k page, physical 2494364k(664216k free), swap 6512752k(4182340k free)

vm_info: Java HotSpot(TM) Client VM (24.51-b03) for windows-x86 JRE (1.7.0_51-b13), built on Dec 18 2013 19:09:58 by "java_re" with unknown MS VC++:1600

time: Wed Mar 26 11:23:43 2014

bwzhang2011 commented 10 years ago

By the way, when I omit the in-memory-format configuration, everything goes fine.

lburgazzoli commented 10 years ago

could you please post your configuration properties?

bwzhang2011 commented 10 years ago

I just set the count from 10000 to 100000 or more; that's the only difference.

bwzhang2011 commented 10 years ago

And the other configuration properties remain the same as in the example.

bwzhang2011 commented 10 years ago

*(image attached)*

bwzhang2011 commented 10 years ago

When I put some big data into the HFT off-heap store from lb-hazelcast, I got an error stack. lb, would you mind sparing some time to take a look at it?

lburgazzoli commented 10 years ago

Hello, I'm quite busy at the moment but I'll have a look at this problem as soon as possible. Please be so kind as to provide a test case, even a simple one.

bwzhang2011 commented 10 years ago

Yeah, I understand. I will list my configuration to reproduce the issue:

1. Here is my Hazelcast instance: *(image attached)*

   And the parameters for the settings above are as follows: *(image attached)*

2. Second, I configure the map for OFFHEAP support in Spring like this: *(image attached)*

   By the way, the map above is used to cache data (possibly some big picture files) as serializable objects. While testing, I noticed that if the data is big enough, it throws the error shown in the stack trace above.

3. At last, once I dropped the big data, everything went well. But throughput was lower when I did a load-balance test: the OFFHEAP format performed even worse than BINARY mode.

I don't know whether you have done any load testing (I did it through JMeter, storing the cache data into a map with off-heap support from lb-hazelcast), but I noticed the performance is not good.

So I hope you can help me tune this, since I have heard that Chronicle delivers huge performance.

bwzhang2011 commented 10 years ago

*(image attached)*

bwzhang2011 commented 10 years ago

*(image attached)*

bwzhang2011 commented 10 years ago

lb, any update on this issue?

bwzhang2011 commented 10 years ago

By the way, during a load test from JMeter (I just use Hazelcast to cache some picture files) with lb-hazelcast as the off-heap support for the Hazelcast map, performance degrades a lot (and I did set the max direct memory size as the Hazelcast manual suggests).

lburgazzoli commented 10 years ago

Hi @bwzhang2011, as you may have noticed, I haven't had much time lately to improve this hack (and please note that it is only that), and I won't have more in the future, so I'll move/delete the code soon. You can obviously fork it. Thank you.

bwzhang2011 commented 10 years ago

Thanks a lot for getting back to me. I have forked it and, as I mentioned before, it works fine; but once we put some big data (such as pictures) into the map, the exception is thrown and I don't know how to adjust the parameters. Since it is based on Java-Chronicle (which I don't know much about), I hope you can provide some suggestions for the tuning. I do think lb-hazelcast provides a good option for integration (for a feature that otherwise exists only in enterprise Hazelcast).

lburgazzoli commented 10 years ago

Bear in mind that the entry size is the size of the serialized value plus the serialized key, so if you increase the size of the data you should check that there is enough room for the key too.
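One rough way to size the entries is to measure the serialized key and value up front. The sketch below uses plain Java serialization; the actual map may use a different (usually more compact) wire format, so treat the numbers only as an estimate and pad generously:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class EntrySizeEstimate {
    // Serialize the object into a byte buffer and return its size.
    static int serializedSize(Serializable obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            return bos.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String key   = "12345";
        byte[] value = new byte[1024]; // stand-in for a cached picture

        // The per-entry slot must hold the serialized key AND value.
        int needed = serializedSize(key) + serializedSize(value);
        System.out.println("need at least " + needed + " bytes per entry");
    }
}
```

Running this against representative data (e.g. your largest picture) gives a lower bound for the entrySize parameter; anything smaller reproduces the "Not enough space left in entry" failure from earlier in the thread.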