tlf30 / monkey-netty

An implementation of a server-client communication system for jMonkeyEngine using Netty.IO that utilizes both TCP and UDP communication.
MIT License

Oxplay workbranch adjustments (pull request #9)

Closed: oxplay2 closed 3 years ago

oxplay2 commented 3 years ago

Please note this is not related to any issue/feature (at least not directly).

As we discussed before, it is mainly for testing purposes; now you will be able to see that the current messages send too many class names.

Resulting output on the server when receiving messages, with more detailed message values:

Got message Test TCP Big Message A from client /127.0.0.1:36124
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Got message Test UDP Big Message A from client /127.0.0.1:36124
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Got message Test TCP Big Message B from client /127.0.0.1:36124
{test2=[I@212e74ac, test3=[TestValue1, TestValue2, TestValue3], test1=12}
Got message Test UDP Big Message B from client /127.0.0.1:36124
{test2=[I@436b74dc, test3=[TestValue1, TestValue2, TestValue3], test1=12}

The logger was set to INFO, so now we also see exactly what each packet sends (this currently logs only TCP messages):


         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 00 00 01 4d 05 73 72 01 00 33 69 6f 2e 74 6c 66 |...M.sr..3io.tlf|
|00000010| 2e 6d 6f 6e 6b 65 79 6e 65 74 74 79 2e 74 65 73 |.monkeynetty.tes|
|00000020| 74 2e 6d 65 73 73 61 67 65 73 2e 54 65 73 74 54 |t.messages.TestT|
|00000030| 43 50 42 69 67 4d 65 73 73 61 67 65 41 78 70 73 |CPBigMessageAxps|
|00000040| 72 01 00 36 69 6f 2e 74 6c 66 2e 6d 6f 6e 6b 65 |r..6io.tlf.monke|
|00000050| 79 6e 65 74 74 79 2e 74 65 73 74 2e 6d 65 73 73 |ynetty.test.mess|
|00000060| 61 67 65 73 2e 54 65 73 74 53 65 72 69 61 6c 69 |ages.TestSeriali|
|00000070| 7a 61 62 6c 65 44 61 74 61 41 78 70 73 72 01 00 |zableDataAxpsr..|
|00000080| 13 6a 61 76 61 2e 75 74 69 6c 2e 41 72 72 61 79 |.java.util.Array|
|00000090| 4c 69 73 74 78 70 00 00 00 0a 77 04 00 00 00 0a |Listxp....w.....|
|000000a0| 73 72 01 00 0e 6a 61 76 61 2e 6c 61 6e 67 2e 4c |sr...java.lang.L|
|000000b0| 6f 6e 67 78 72 01 00 10 6a 61 76 61 2e 6c 61 6e |ongxr...java.lan|
|000000c0| 67 2e 4e 75 6d 62 65 72 78 70 00 00 00 00 00 00 |g.Numberxp......|
|000000d0| 00 00 73 71 00 7e 00 06 00 00 00 00 00 00 00 01 |..sq.~..........|
|000000e0| 73 71 00 7e 00 06 00 00 00 00 00 00 00 02 73 71 |sq.~..........sq|
|000000f0| 00 7e 00 06 00 00 00 00 00 00 00 03 73 71 00 7e |.~..........sq.~|
|00000100| 00 06 00 00 00 00 00 00 00 04 73 71 00 7e 00 06 |..........sq.~..|
|00000110| 00 00 00 00 00 00 00 05 73 71 00 7e 00 06 00 00 |........sq.~....|
|00000120| 00 00 00 00 00 06 73 71 00 7e 00 06 00 00 00 00 |......sq.~......|
|00000130| 00 00 00 07 73 71 00 7e 00 06 00 00 00 00 00 00 |....sq.~........|
|00000140| 00 08 73 71 00 7e 00 06 00 00 00 00 00 00 00 09 |..sq.~..........|
|00000150| 78                                              |x               |
+--------+-------------------------------------------------+----------------+

This will help investigate https://github.com/tlf30/monkey-netty/issues/4.
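To make the overhead concrete (a minimal, hypothetical sketch, not code from this PR): Netty's ObjectEncoder builds on Java serialization, and Java serialization writes the fully qualified class name of every serialized type into the stream, which is exactly what shows up in the hex dump above. The class name below is illustrative only:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class ClassNameOverheadDemo {

    // Hypothetical payload type; its fully qualified name is embedded
    // in the serialized stream, just like the message classes above.
    static class TestPayload implements Serializable {
        private static final long serialVersionUID = 1L;
        long value = 42L;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new TestPayload());
        }
        // The output contains "ClassNameOverheadDemo$TestPayload",
        // analogous to the class names visible in the hex dump.
        System.out.println(new String(bytes.toByteArray(), StandardCharsets.ISO_8859_1));
    }
}
```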

tlf30 commented 3 years ago

Those changes look good to me. I would like to know more about the softCachingResolver. The javadoc (as with most in Netty) leaves a lot to be desired. Does it give a noticeable increase in performance when dealing with a large number of messages?

tlf30 commented 3 years ago

Another question: have you had any luck getting the timeout to work when the server is killed?

oxplay2 commented 3 years ago

About softCachingResolver:

Well, I was looking at their source code.

The advantage of using some cache instead of "cacheDisabled" is that we get the class from the cache rather than executing

return classLoader.loadClass(className);

each time, so I suppose it is much faster to get it from a ReferenceMap.

When using one of the concurrent caching resolvers, it just uses a concurrent variant of the ReferenceMap; that will be slower, but works in parallel. I was not sure where parallelism would be needed for this, so I just added "softCachingResolver".

In one Stack Overflow question they use the weak one: https://stackoverflow.com/questions/8660491/how-to-implement-objectdecoderclassresolver-in-netty-3-2-7

But really, the only difference between these methods is how they store the cached classes:

new WeakReferenceMap<String, Class<?>>(new HashMap<String, Reference<Class<?>>>()));

new SoftReferenceMap<String, Class<?>>(new HashMap<String, Reference<Class<?>>>()));

new WeakReferenceMap<String, Class<?>>(PlatformDependent.<String, Reference<Class<?>>>newConcurrentHashMap()));

new SoftReferenceMap<String, Class<?>>(PlatformDependent.<String, Reference<Class<?>>>newConcurrentHashMap()));

whereas cacheDisabled uses none of them and just executes return classLoader.loadClass(className); every time.
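To show the pattern concretely (my own minimal sketch, not Netty's actual implementation; the real resolvers additionally wrap entries in soft or weak references so cached classes can still be garbage collected):

```java
import io.netty.handler.codec.serialization.ClassResolver;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the caching pattern: check the map first, and only fall
// back to classLoader.loadClass(className) on a cache miss.
public class SketchCachingResolver implements ClassResolver {

    private final ClassLoader classLoader;
    private final Map<String, Class<?>> cache = new ConcurrentHashMap<>();

    public SketchCachingResolver(ClassLoader classLoader) {
        this.classLoader = classLoader;
    }

    @Override
    public Class<?> resolve(String className) throws ClassNotFoundException {
        Class<?> cached = cache.get(className);
        if (cached != null) {
            return cached; // cache hit: no classloader lookup
        }
        Class<?> loaded = classLoader.loadClass(className);
        cache.put(className, loaded);
        return loaded;
    }
}
```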

So in the case of feature https://github.com/tlf30/monkey-netty/issues/4, we would probably need to find out what to put instead of null here:

new ObjectDecoder(Integer.MAX_VALUE, ClassResolvers.softCachingResolver(null)),

since null means the default ClassLoader. Maybe that is a way to change how classes are resolved, using some ID instead of full class-name strings.
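To illustrate that idea (hypothetical names, not code from this PR): a registry-style ClassResolver could map short IDs to classes on the decoding side. Note that the matching encoder would also have to write the short ID instead of the fully qualified class name before this actually saves bytes on the wire:

```java
import io.netty.handler.codec.serialization.ClassResolver;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: resolve short registered IDs instead of fully
// qualified class names. This covers only the decoding side.
public class IdClassResolver implements ClassResolver {

    private final Map<String, Class<?>> registry = new ConcurrentHashMap<>();

    public void register(String id, Class<?> clazz) {
        registry.put(id, clazz);
    }

    @Override
    public Class<?> resolve(String id) throws ClassNotFoundException {
        Class<?> clazz = registry.get(id);
        if (clazz == null) {
            throw new ClassNotFoundException("No class registered for id: " + id);
        }
        return clazz;
    }
}
```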

If you are asking about increased performance: I did not stress-test it, but I am almost sure it is a micro-optimization we can make.

About "have you had any luck on getting the timeout to work when the server is killed?"

Nope, the only change i did was to allow user setup timeout per AppState. Myself i have exception each like 0.1 sec. so it do not use timeout for "off" server. Will need investigate it, i assume worth to create issue for it.
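One possible direction (a sketch of a common Netty approach, not something this PR implements): an IdleStateHandler in the client pipeline can detect that nothing has been read for a while and close the channel instead of spinning on exceptions:

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleStateEvent;

// Hypothetical handler: close the connection once no data has been read
// for the configured idle period. Wire it up together with an
// IdleStateHandler, e.g.:
//   pipeline.addLast(new IdleStateHandler(10, 0, 0)); // 10 s read idle
//   pipeline.addLast(new IdleTimeoutCloser());
public class IdleTimeoutCloser extends ChannelDuplexHandler {

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            // No traffic within the idle period: assume the server is gone.
            ctx.close();
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```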

oxplay2 commented 3 years ago

Also, it looks like I forgot to update the "changelog" file.

Should I change it here?

tlf30 commented 3 years ago

Only if you want; I will make sure it is updated prior to a release.

tlf30 commented 3 years ago

Let's get this merged in after #11; that way it only needs to be rebased once.

oxplay2 commented 3 years ago

Please let me know when to update this against the repo HEAD.

I need to go to sleep now, so I will do it tomorrow (at least in my timezone, hehe).

tlf30 commented 3 years ago

Sounds good, you can rebase to HEAD when you are ready. I am in AKST, so probably a bit different from where you are :)

oxplay2 commented 3 years ago

Merged remote-tracking branch 'upstream/master' into working-oxplay (this PR).

(I will reset my fork once it is merged.)