Tradeshop / TradeShop

Unique, new, powerful Minecraft TradeShop plugin!
https://tradeshop.github.io
Apache License 2.0

Cache Hopper (-Protection) Events #123

Closed. SparklingComet closed this issue 11 months ago.

SparklingComet commented 2 years ago

Is your feature request related to a problem?

As reported recently in the Discord server (and in many other instances), intense hopper activity can lead to considerable TPS drops due to high tick times in the hopper protection listener, even when those hopper events have nothing to do with TradeShop's field of application.

The reason, of course, is that a high-activity server will have many hopper events every second, which means the listener is invoked potentially hundreds of times each second for completely unrelated events that still need to be checked.

Example: let's assume a hopper speed of 1 s^-1 (1 item "swallowed" every second). For 400 active hoppers, that is 400 calls to the listener each second. Divide this by 20 (TPS) and we get 20 calls per tick. Not a lot. However, one tick corresponds to 50 ms of CPU time, a portion of which (let's say 30%) is allocated to event handlers (15 ms). Divide that by the number of plugins (let's say 20), and we get less than 1 ms for all TradeShop events, not even just the high-activity hopper ones.

Describe the solution you'd like.

Instead of checking whether a hopper event is allowed every time the event fires, we should do this only about every 5 s (potentially a configurable value) and cache the result (allowed or not allowed). Subsequent listener calls will check whether that specific hopper has been checked within that timeframe and, if so, simply reuse the result of the previous check. If it has not been checked, or the check has expired, the regular protection code runs to determine whether the event is allowed, and the cache is updated.
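
A minimal sketch of the idea (illustrative only, not the actual TradeShop code; the class name, map layout and the shopProtectionDisallows helper are all hypothetical):

import org.bukkit.Location;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.inventory.InventoryMoveItemEvent;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HopperProtectionListener implements Listener {

    private static final long EXPIRY_MILLIS = 5_000L; // the "approx. every 5 s" window, would be configurable

    // cached verdict for one hopper/container location
    private static final class CachedResult {
        final boolean denied;
        final long checkedAt;
        CachedResult(boolean denied, long checkedAt) {
            this.denied = denied;
            this.checkedAt = checkedAt;
        }
    }

    private final Map<Location, CachedResult> cache = new ConcurrentHashMap<>();

    @EventHandler
    public void onItemMove(InventoryMoveItemEvent event) {
        Location key = event.getSource().getLocation();
        if (key == null) return; // virtual inventories have no location

        long now = System.currentTimeMillis();
        CachedResult cached = cache.get(key);
        if (cached != null && now - cached.checkedAt < EXPIRY_MILLIS) {
            if (cached.denied) event.setCancelled(true); // reuse the previous verdict
            return;
        }

        boolean denied = shopProtectionDisallows(event); // the existing (expensive) protection checks
        cache.put(key, new CachedResult(denied, now));
        if (denied) event.setCancelled(true);
    }

    private boolean shopProtectionDisallows(InventoryMoveItemEvent event) {
        return false; // placeholder for the current ShopProtection logic
    }
}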

Describe alternatives you've considered.

Alternative: leave everything as it is; this does not scale well (or at all).

Agreements

Other

No response

SparklingComet commented 2 years ago

This issue is for discussion of the concept. I'll be working on a PR soon. Post ideas below.

@KillerOfPie

KillerOfPie commented 2 years ago

First, I think your event call math is off; it also depends on server settings.

Default Spigot Settings:

ticks-per:
  hopper-transfer: 8
  hopper-check: 1
hopper-amount: 1

Each hopper will attempt to transfer once every 8 ticks, which is 2.5 times per second. That means for 400 active hoppers we would be handling around 1,000 calls per second.

I don't think this will save a reasonable amount of time, but we can try it. I would say just make a public value somewhere that is only updated by the reload function (since that is the only way to change a setting while the server is active at the moment).
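
For instance, something along these lines (a rough sketch; the setting names are made up, not TradeShop's actual config keys):

public final class HopperSettingCache {

    // recomputed only on load/reload, so the hot event path just reads one field
    private static volatile boolean hopperProtectionActive = true;

    // call from onEnable() and from the reload command, passing the relevant config flags
    public static void refresh(boolean hopperImportEnabled, boolean hopperExportEnabled) {
        hopperProtectionActive = hopperImportEnabled || hopperExportEnabled;
    }

    public static boolean isActive() {
        return hopperProtectionActive;
    }
}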

I think just recommending changes to settings would be more efficient at reducing the load. The Spigot hopper settings below will reduce the calls without changing the throughput of the hoppers.

Reduced Event Spigot Settings:

ticks-per:
  hopper-transfer: 32
  hopper-check: 1
hopper-amount: 4
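
For concreteness, assuming a hopper that is always able to transfer, the per-hopper numbers work out as:

default:   1 transfer / 8 ticks  = 2.5 calls per second, 1 item each  -> 2.5 items/s
suggested: 1 transfer / 32 ticks = 0.625 calls per second, 4 items each -> 2.5 items/s

So for the 400-hopper example above, that is roughly 1,000 calls/s reduced to 250 calls/s with the same throughput.
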
KillerOfPie commented 2 years ago

We could save the location of hoppers that would affect shops when they are placed or when a shop is placed, then remove the code that decides whether it is the correct hopper. This would be similar to storing the association between a shop and its sign so reverse lookups are quicker.

I think this would save a lot more time than caching the 4-5 booleans for the settings (since those should mostly already be in memory).

SparklingComet commented 2 years ago

We could save the location of hoppers that would affect shops when they are placed or when a shop is placed, then remove the code that decides whether it is the correct hopper. This would be similar to storing the association between a shop and its sign so reverse lookups are quicker.

I think this would save a lot more time than caching the 4-5 booleans for the settings (since those should mostly already be in memory).

I'm not suggesting caching the 4-5 booleans for the settings; those are of course already cached. What I mean is that we cache the yes/no result (cancel event or don't cancel event) for every single hopper, with an expiration of, say, 5 seconds.

As far as the math goes, I didn't know the defaults, but those values make the lag even worse.

SparklingComet commented 2 years ago

As far as caching hoppers when placed and getting rid of the protection listener goes, that is very unsafe. What if a hopper is placed while TradeShop is not installed? Or a hopper was placed while an older version of TradeShop was installed? It wouldn't be in the cache.

Furthermore, it requires caching to disk, while my proposed method does not; I'd be caching locations in a HashMap instead. It would also increase the already high complexity of the BlockPlaceEvent listener...

SparklingComet commented 2 years ago

It would be good to get some timings from a server that experiences high hopper lag from the ShopProtection listener.

KillerOfPie commented 2 years ago

I think the biggest problems with this would be:

  1. The potential memory usage: we would be trading CPU time for memory capacity, which may still be problematic. The time problem comes either from a large number of players using a moderate amount of hoppers or from a few players with item sorters or other mass hopper systems. With an item sorter, at a minimum there would be 3 hoppers per item, with up to 900 (from what I could google) stackable items in the game. Accounting for max coordinates (and, depending on formatting, I did the math without a world coordinate), each hopper could use up to 27 bytes of storage/memory. So, assuming a 150-item sorter (~450 hoppers) and limiting the x/z coordinates to 99999 (uses up to 21 bytes), we would potentially need 9.228515625 MB of storage/memory just for the sorter if it is fully processing. I'll try and get timings from someone and get a better estimate of the potential memory usage live; it seems it would be too easy to fill even if it's clearing every 5 seconds, considering there are already tons of hoppers going off (although we would make 11 out of 12 hopper checks quicker during that time).
  2. We would have to constantly manage clearing and re-adding to the map, which would add more overhead to at least the initial hopper call every 5 seconds.

As far as caching hoppers when placed and getting rid of the protection listener goes, that is very unsafe. What if a hopper is placed while TradeShop is not installed? Or a hopper was placed while an older version of TradeShop was installed? It wouldn't be in the cache.

Furthermore, it requires caching to disk, while my proposed method does not; I'd be caching locations in a HashMap instead. It would also increase the already high complexity of the BlockPlaceEvent listener...

We are already using a similar method to detect the storage chests, so most of these would already be non-problems. There are also only 5 spaces around a shop that can be used for hopper inputs and 1 for output, so we could just add these coordinates to a list and check whether the hopper is in the list. This runs into the same memory usage problem, though, if stored in memory, just without the 5-second clear.

In the end I'm not sure it would be ideal for us to do anything about this; there are a few different plugins out there that provide alternate methods to using mass hoppers, which would solve the problem better if people used them. We can also just advise people to adjust their server's hopper settings; with the alternate values I put above it would reduce the calls going out to a quarter without changing the hoppers' actual throughput.

KillerOfPie commented 2 years ago

Maybe I can add a metric for the number of calls and/or number of hoppers that are handled to the 2.5.1 version; that way we can see roughly what we're dealing with before we decide how/if we handle it.

SparklingComet commented 2 years ago

I'm not sure where you're getting that huge memory overhead from. For each hopper we would only store its location in a map; nobody cares what's inside... One Location is 4×3 bytes for the x, y, z coords, plus a pointer to the World object (8 bytes), plus a pointer to the Location object itself (if we want to be precise), summing up to 28 bytes, so nowhere near the number you quoted.

KillerOfPie commented 2 years ago

I was basing it off storing as a string, as we do with stored items, but 20 bytes is only 1 less than I used for the total, so the overhead would still be similar. At 20 bytes per hopper we would be looking at 8.7890625 MB for 150 hoppers, which I assume is fewer than are actually ticking on servers that have hopper issues.

SparklingComet commented 2 years ago

Concerning cpu overhead due to HashMap operations:

  • insertion into the HashMap is done once every 5sec (100ticks), so negligible.
  • reading from the HashMap is done every time the hopper is called, but: there are already many HashMap read operations which we wouldn't do anymore (like checking if a hopper is over a chest that's a tradeshop).

Thus we would still reduce computation time.

KillerOfPie commented 2 years ago

Concerning cpu overhead due to HashMap operations:

  • insertion into the HashMap is done once every 5sec (100ticks), so negligible.
  • reading from the HashMap is done every time the hopper is called, but: there are already many HashMap read operations which we wouldn't do anymore (like checking if a hopper is over a chest that's a tradeshop).

Thus we would still reduce computation time.

Insertion would have to happen as hoppers call events, not once every 5 sec; clearing the map would happen once every 5 sec, and reading is negligible. I'm not saying this wouldn't reduce compute time, I'm saying I don't know if the compute savings are worth the extra memory overhead.

SparklingComet commented 2 years ago

I was basing it off storing as a string, as we do with stored items, but 20 bytes is only 1 less than I used for the total, so the overhead would still be similar. At 20 bytes per hopper we would be looking at 8.7890625 MB for 150 hoppers, which I assume is fewer than are actually ticking on servers that have hopper issues.

Sorry, the math was off, it's 28 bytes (edited my reply). Still, 28*150 is only about 4.2 kB. The HashMap is usually twice the size of its contents, so roughly 8.4 kB total; I don't understand where the 8 MB comes from...

SparklingComet commented 2 years ago

Concerning cpu overhead due to HashMap operations:

  • insertion into the HashMap is done once every 5sec (100ticks), so negligible.
  • reading from the HashMap is done every time the hopper is called, but: there are already many HashMap read operations which we wouldn't do anymore (like checking if a hopper is over a chest that's a tradeshop).

Thus we would still reduce computation time.

Insertion would have to happen as hoppers call events, not once every 5 sec; clearing the map would happen once every 5 sec, and reading is negligible. I'm not saying this wouldn't reduce compute time, I'm saying I don't know if the compute savings are worth the extra memory overhead.

No, no, the optimization is to only update the hopper entry after at least 5 sec have passed since the last time. That's the entire point...

KillerOfPie commented 2 years ago

I was basing it off storing as a string, as we do with stored items, but 20 bytes is only 1 less than I used for the total, so the overhead would still be similar. At 20 bytes per hopper we would be looking at 8.7890625 MB for 150 hoppers, which I assume is fewer than are actually ticking on servers that have hopper issues.

Sorry, the math was off, it's 28 bytes (edited my reply). Still, 28*150 is only about 4.2 kB. The HashMap is usually twice the size of its contents, so roughly 8.4 kB total; I don't understand where the 8 MB comes from...

lol sorry, I was calculating kB but kept writing MB. Also, the math is for a 150-item sorter which assumes 3 hoppers per item, so 450 hoppers.

KillerOfPie commented 2 years ago

Concerning cpu overhead due to HashMap operations:

  • insertion into the HashMap is done once every 5sec (100ticks), so negligible.
  • reading from the HashMap is done every time the hopper is called, but: there are already many HashMap read operations which we wouldn't do anymore (like checking if a hopper is over a chest that's a tradeshop).

Thus we would still reduce computation time.

Insertion would have to happen as hoppers call events, not once every 5 sec; clearing the map would happen once every 5 sec, and reading is negligible. I'm not saying this wouldn't reduce compute time, I'm saying I don't know if the compute savings are worth the extra memory overhead.

No, no, the optimization is to only update the hopper entry after at least 5 sec have passed since the last time. That's the entire point...

So the map would never be cleared? I think this could cause significantly more overhead, since hoppers that get unloaded and stop ticking would still be in the map. Also, how would we be updating the entries? The only way I can think of would be during the hopper event, meaning each entry would need a timestamp, which would be extra data.

SparklingComet commented 2 years ago

I mean, even if we round up and say we get up to 20 kB, that still pales in comparison to the memory the server has; it's 0.004% of a low-end 512 MB server (which wouldn't be able to handle the hopper activity in any case, so you need way more memory than that...)

SparklingComet commented 2 years ago

Concerning cpu overhead due to HashMap operations:

  • insertion into the HashMap is done once every 5sec (100ticks), so negligible.
  • reading from the HashMap is done every time the hopper is called, but: there are already many HashMap read operations which we wouldn't do anymore (like checking if a hopper is over a chest that's a tradeshop).

Thus we would still reduce computation time.

Insertion would have to happen as hoppers call events, not once every 5 sec; clearing the map would happen once every 5 sec, and reading is negligible. I'm not saying this wouldn't reduce compute time, I'm saying I don't know if the compute savings are worth the extra memory overhead.

No, no, the optimization is to only update the hopper entry after at least 5 sec have passed since the last time. That's the entire point...

So the map would never be cleared? I think this could cause significantly more overhead, since hoppers that get unloaded and stop ticking would still be in the map. Also, how would we be updating the entries? The only way I can think of would be during the hopper event, meaning each entry would need a timestamp, which would be extra data.

Yes, we would compare timestamps, that's true... A couple hundred timestamps wouldn't be much though.

The map would never be cleared all at once! Doing so would be CPU-expensive due to garbage collection... We would leave everything inside the cache. Chunk unloading is a good point though; if a server runs for, say, 2 months without restarting and the map is large enough, we would have problems for sure...

We could, however, clear the whole cache once an hour or so... That wouldn't be too bad because it would happen "rarely", so unloaded chunks don't clog up the memory.

KillerOfPie commented 2 years ago

I mean, even if we round up and say we get up to 20 kB, that still pales in comparison to the memory the server has; it's 0.004% of a low-end 512 MB server (which wouldn't be able to handle the hopper activity in any case, so you need way more memory than that...)

We can't base usage on the total available memory, since MC itself will happily use almost all of what's available (I run my test server on 2 GB and it's usually at around 850 MB with no players online).

No players: [screenshot]

1 player: [screenshot]

SparklingComet commented 2 years ago

I mean, even if we round up and say we get up to 20 kB, that still pales in comparison to the memory the server has; it's 0.004% of a low-end 512 MB server (which wouldn't be able to handle the hopper activity in any case, so you need way more memory than that...)

We can't base usage on the total available memory, since MC itself will happily use almost all of what's available (I run my test server on 2 GB and it's usually at around 850 MB with no players online).

No players: [screenshot]

1 player: [screenshot]

Sure, but the amount of memory we'd use pales compared to what MC itself is already using.

SparklingComet commented 2 years ago

For the cache to be 1% of the server memory of a 512 MB instance we would need to cache roughly 90 thousand hoppers.

KillerOfPie commented 2 years ago

The kB/MB mixup makes this a lot more reasonable. Let me see how many hoppers I need to place to lose TPS on my test server, then I'll try a basic implementation of this and see what the difference is.

KillerOfPie commented 2 years ago

Ended up having to install WorldEdit

All hoppers are actively transferring

~10000 are hopper to hopper, rest are hopper > chest > hopper > ...

Block count needed to exceed 50 ms tick time (max to keep 20 TPS): [screenshot]

Server stats: [screenshot]

KillerOfPie commented 2 years ago

RAM estimate per hopper:

  • Location: 168 b (21 B)
  • block boolean: 1 b
  • time: 55 b (plan on using Instant.now().getEpochSecond() since it should be the most efficient storage considering we need down-to-a-second counting; we may be able to make it smaller by saving it divided by the update interval and rounding down)

Total per hopper: 224 b (28 B). Total for the above hopper amount: 826,532 B (807.16015625 kB).
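
A quick illustration of the epoch-second idea mentioned above (just the mechanics; the interval value is an example):

import java.time.Instant;

public final class TimestampExample {
    public static void main(String[] args) {
        long nowSeconds = Instant.now().getEpochSecond(); // 1-second resolution is all we need
        long intervalSeconds = 5;                         // the cache refresh interval
        long bucket = nowSeconds / intervalSeconds;       // "divided by the update interval and rounded down"
        System.out.println(nowSeconds + " -> bucket " + bucket);
    }
}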

KillerOfPie commented 2 years ago

After looking at it a bit I don't think this will work; I think the most time consuming part of the lookup is just getting the hopper's location, which we have to do before we can store/check either way. See the timing reports from my testing below, and d04a3dd475a9c67ee79c90e4278654b349b5bfab for the code at the final timing and the commented mess code in ListManager that murdered the server (no timing for these, see the screenshot below). [screenshot]

Timing with mostly normal code: https://timings.aikar.co/?id=f8f25f91e1cb426d9493accc68a14990#timings

Timing with normal code up to checking isShopChest (Shop is never retrieved): https://timings.aikar.co/?id=ee05b0ee9e2b4141961cee81670680a4 [screenshot]

SparklingComet commented 2 years ago

Left a comment on https://github.com/Tradeshop/TradeShop/commit/d04a3dd475a9c67ee79c90e4278654b349b5bfab

SparklingComet commented 2 years ago

time: 55 b (plan on using Instant.now().getEpochSecond() since it should be the most efficient storage considering we need down-to-a-second counting; we may be able to make it smaller by saving it divided by the update interval and rounding down)

Not sure what this is about? System.currentTimeMillis (1 long = 8 bytes) is just as fine; cutting off everything past 1 second does not make it more memory efficient...

SparklingComet commented 2 years ago

As far as the cache goes, I was thinking we could use some "special" HashMap from Guava which implements expiration for us:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(10000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) throws AnyException {
            return createExpensiveGraph(key);
          }
        });

(credits: https://stackoverflow.com/a/3802420)

Guava is already shaded into CraftBukkit, so we wouldn't add any bloat and would not need to package or shade the lib.
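
Adapted to our case it could look roughly like this (a sketch only, assuming we key on a Location and cache a deny/allow boolean; the expensiveCheck callback stands in for the existing ShopProtection logic):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.bukkit.Location;
import org.bukkit.event.inventory.InventoryMoveItemEvent;

import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

public class HopperVerdictCache {

    private final Cache<Location, Boolean> verdicts = CacheBuilder.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(5, TimeUnit.SECONDS) // re-run the real check at most every 5 s per location
            .build();

    /** Returns true if the event should be cancelled, running the expensive check only on a cache miss. */
    public boolean shouldCancel(Location key, InventoryMoveItemEvent event,
                                Predicate<InventoryMoveItemEvent> expensiveCheck) {
        Boolean denied = verdicts.getIfPresent(key);
        if (denied == null) {
            denied = expensiveCheck.test(event); // the current ShopProtection logic would go here
            verdicts.put(key, denied);
        }
        return denied;
    }
}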

KillerOfPie commented 2 years ago

The time choice was because we don't need any data more accurate than a second, so it reduces unnecessary data.

The Guava cache may help, but all of the expensive compute happens before we even have a location, since the ItemMoveEvent doesn't directly give a location. That means no matter how we implement this it will take the same if not more compute time, since we need a location before we can save to or read from the map. We also have to filter out all the event calls that happen outside of hoppers.

SparklingComet commented 2 years ago

The time choice was because we don't need any data more accurate than a second, so it reduces unnecessary data.

I don't understand... System time millis gives 1 long, how would this be more efficient? Actually chopping off the milliseconds doesn't make the memory footprint smaller...

The Guava cache may help, but all of the expensive compute happens before we even have a location, since the ItemMoveEvent doesn't directly give a location. That means no matter how we implement this it will take the same if not more compute time, since we need a location before we can save to or read from the map. We also have to filter out all the event calls that happen outside of hoppers.

We can just call event.getSource().getLocation(), that doesn't actually compute anything, it's just getters.

SparklingComet commented 2 years ago

The time choice was because we don't need any data more accurate than a second, so it reduces unnecessary data.

I don't understand... System time millis gives 1 long, how would this be more efficient? Actually chopping off the milliseconds doesn't make the memory footprint smaller...

The Guava cache may help, but all of the expensive compute happens before we even have a location, since the ItemMoveEvent doesn't directly give a location. That means no matter how we implement this it will take the same if not more compute time, since we need a location before we can save to or read from the map. We also have to filter out all the event calls that happen outside of hoppers.

We can just call event.getSource().getLocation(), that doesn't actually compute anything, it's just getters.

I see what you mean, correction: we could use event.getDestination().getLocation(), that will be even better because it should give the location of the Trade chest (if it is a shop). That means that multiple hoppers going into the same chest will be cached together, unless I am not seeing the entire picture.

KillerOfPie commented 2 years ago

The time choice was because we don't need any data more accurate than a second, so it reduces unnecessary data.

I don't understand... System time millis gives 1 long, how would this be more efficient? Actually chopping off the milliseconds doesn't make the memory footprint smaller...

The Guava cache may help, but all of the expensive compute happens before we even have a location, since the ItemMoveEvent doesn't directly give a location. That means no matter how we implement this it will take the same if not more compute time, since we need a location before we can save to or read from the map. We also have to filter out all the event calls that happen outside of hoppers.

We can just call event.getSource().getLocation(), that doesn't actually compute anything, it's just getters.

You're correct, I forgot a long is always 8 B no matter how big the number actually is. It still makes more sense to use seconds than milliseconds since we don't need the extra 1000 steps of accuracy, and Instant is the newer implementation of System.currentTimeMillis, which also allows for adjusting and checking more readably (vs. just having lines of math).

The time choice was because we don't need any data more accurate than a second, so it reduces unnecessary data.

I don't understand... System time millis gives 1 long, how would this be more efficient? Actually chopping off the milliseconds doesn't make the memory footprint smaller...

The Guava cache may help, but all of the expensive compute happens before we even have a location, since the ItemMoveEvent doesn't directly give a location. That means no matter how we implement this it will take the same if not more compute time, since we need a location before we can save to or read from the map. We also have to filter out all the event calls that happen outside of hoppers.

We can just call event.getSource().getLocation(), that doesn't actually compute anything, it's just getters.

I see what you mean, correction: we could use event.getDestination().getLocation(), that will be even better because it should give the location of the Trade chest (if it is a shop). That means that multiple hoppers going into the same chest will be cached together, unless I am not seeing the entire picture.

Saving either the destination or the source is equally bad in this case, since both can be any inventory (I believe including the player's inventory). If we save just the destination, with the destination being the shop chest and the source a player, then it may prevent players from placing items in the chest, and if we save only the source then it could prevent players from removing items from the shop.

So if we want this to work and be efficient we would have to store both the source Location (~20 B) and the destination Location (~20 B), a boolean for block status (1 b) and a timestamp (8 B), which means we're looking at 48.125 B per connection. With the hopper/chest numbers from earlier, counting 1 connection per block (I think this may still be low), there would be 48,801 connections, which puts the estimate at ~2.23975 MB. I will try this since it seems to still be a reasonable amount, but I feel like this still isn't correct.

KillerOfPie commented 2 years ago

As far as the cache goes, I was thinking we could use some "special" HashMap from Guava which implements expiration for us:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(10000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(
        new CacheLoader<Key, Graph>() {
          public Graph load(Key key) throws AnyException {
            return createExpensiveGraph(key);
          }
        });

(credits: https://stackoverflow.com/a/3802420)

Guava is already shaded into CraftBukkit, so we wouldn't add any bloat and would not need to package or shade the lib.

I'm not going to use this for now, because I want to try updating the time regularly and only removing items that haven't been accessed in a while (since if we were to remove them after 10 minutes there would more than likely be a lag spike from having to re-process and re-add). There will be methods to purge a location from all entries, like when a block is destroyed, as well as specific location combinations if necessary. If this seems to work okay I may also look into whether I can put this in another thread (which would mean storing as something other than a Location object, I think?) since that would make the list have almost no effect on TPS...

SparklingComet commented 2 years ago

Good point about source/destination, I had not thought about that at all. Couldn't we solve the problem with an instance check? See if the source is an instance of a hopper and only then cache the destination.

KillerOfPie commented 2 years ago

Good point about source/destination, I had not thought about that at all. Couldn't we solve the problem with an instance check? See if the source is an instance of a hopper and only then cache the destination.

We can check the inventory type, but we would need to check both src and dest, and that's not 100% either, since plugins can make inventories with that type, so there is still a chance. Currently we check if its block is null, and that section of the code takes 0.002932 ms, which also includes checking the actual blocks. Total time to get past that part is only 0.006709 ms, with the next step taking 0.061283 ms, so I will put the check code after that for now and see how it goes.

KillerOfPie commented 2 years ago

The longest part in the first section of code, which is checking all the hopper settings for a quick escape when disabled, is 0.003235 ms (around half the total), so I may find a way to speed that up too since it will save a bunch of time. Maybe I'll have a single boolean somewhere that updates with the final result of those checks on load/reload...

KillerOfPie commented 2 years ago

Failed again; it didn't reduce average tick time and ended up just increasing memory until it ran out, usually within a minute or two at 5 GB of memory.

Added more timing readouts and did some Excel magic to find the average processing times of the few it prints (full printing to console would make it unreadable; I may make a CSV readout similar to what I did in the log addon if I'm going to continue this). Data below.

No Players: AVG tick fluctuates between 0.7-2 ms. [screenshot]

1 Player near Hopper Cluster: AVG tick settles around 75 with the timing code, around 55 without the timing code. [screenshot]

32/4 Modified Hopper Settings w/ No players: No Change

32/4 Modified Hopper Settings w/ 1 player near Hopper Cluster: AVG tick settles around 30 with the timing code, and around 25 without (tested). [screenshot]

64/8 Modified Hopper Settings w/ 1 player near Hopper Cluster: AVG tick settled around 20; this did seem to increase memory usage slightly. [screenshot]

Average timings per marker w/o Spigot hopper setting change (values in milliseconds, as difference from the previous marker):

Marker   Time in ms
001      0
002      0.00272669
003      0.005919024
004      0.00325931
004;C    0.001400714
004;D    0.007061579
005      0.098904348
005;A    5.533929957
005;B    0.021214739
006      0.004131913
006;B    0.002212391

Any non-shop event should be kicked out at 004;D.

Summary: I think our solution to this problem (outside of slowly optimizing the existing code) should just be to recommend that servers adjust Spigot's hopper settings, since this would not only make our plugin perform better on servers with ungodly amounts of hoppers but also help any other plugins that do work on the hopper events. Making these changes (as long as they are done properly) does not affect the throughput of hoppers. They do deliver in chunks instead of individually, which may change how some sorting systems react to them, but this is a common change large servers would already make, so I would think information exists on the necessary changes for such systems.

If you would like to try implementing it and see if you can get it to work better, have fun, but even if caching helps our plugin it won't help others. I will work on streamlining the code to reduce time either during 2.5 or for 2.6.

KillerOfPie commented 2 years ago

Oh, I also found that we do check to see if it was initiated by a hopper, so if you do want to try this yourself, that's probably what you want to store and block on instead of src or dest. But I still think it isn't an ideal solution.

event.getInitiator() returns an Inventory, so event.getInitiator().getLocation() can be done.
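
So, roughly, the key lookup could be based on the initiator (a sketch only, assuming we only care about hopper-initiated moves):

import org.bukkit.Location;
import org.bukkit.event.inventory.InventoryMoveItemEvent;
import org.bukkit.event.inventory.InventoryType;

final class HopperCacheKeys {

    /** Key the cache on the initiating inventory; returns null when the move was not hopper-initiated. */
    static Location keyFor(InventoryMoveItemEvent event) {
        if (event.getInitiator().getType() != InventoryType.HOPPER) {
            return null; // not hopper-initiated, skip the cache entirely
        }
        return event.getInitiator().getLocation(); // may still be null for virtual inventories
    }
}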

KillerOfPie commented 2 years ago

Welp, I went back and tried again using the Guava cache after doing some optimization with the hoppers, which brought me to a sub-50 mspt average... and it's now around a 32 mspt average. Going to play with the variables a bit more and see if I can get it any better; then I'm going to push and see what the new hopper cap is.

KillerOfPie commented 2 years ago

I'm going to post this code to 2.5.1 because I'm very happy with it. I had the limit at 50000 but I went past it, so I'm going to set it to 75000. My hopper/chest walls bring me down to 18 fps, so nobody should ever hit this limit in one area; even across a server this is kinda insane.

w/ 250000 limit on cache: [screenshot]

KillerOfPie commented 2 years ago

Went to test with the 32/4 settings, and you can get much higher if people are still hitting it. Good find on the Guava cache, BTW.

Timings Report with 32/4: https://timings.aikar.co/?id=53f34bbd75ad4578ad4af8105ce3950f


SparklingComet commented 2 years ago

The data you shared is actually really insightful, thanks a lot for that :D

So if I understood correctly we are planning on keeping the Guava setup, perhaps with some tweaks?

SparklingComet commented 2 years ago

It would be good to get some timings from some "real" servers so we could identify what other events TradeShop is slow at.

KillerOfPie commented 2 years ago

The data you shared is actually really insightful, thanks a lot for that :D

So if I understood correctly we are planning on keeping the Guava setup, perhaps with some tweaks?

I may tweak settings but I think they are pretty good already; the only settings that change are max size (currently 100000, since at that point there would be lag even with the cache) and expireAfterAccess (which is currently at 1 s). I used this instead of expireAfterWrite so that we don't have to keep reprocessing hoppers while they are transferring. It is also important to note that the server will still spike above 50 mspt when a mass amount of hoppers initially triggers the move event.
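
For reference, the builder settings described above would look roughly like this (values as stated here, not necessarily the exact committed code):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.bukkit.Location;

import java.util.concurrent.TimeUnit;

public final class ShopProtectionCacheConfig {

    // max size 100,000: past that point there would be lag even with the cache
    // expireAfterAccess 1 s: actively transferring hoppers stay cached instead of being re-processed
    public static final Cache<Location, Boolean> CACHE = CacheBuilder.newBuilder()
            .maximumSize(100_000)
            .expireAfterAccess(1, TimeUnit.SECONDS)
            .build();
}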

It would be good to get some timings from some "real" servers so we could identify what other events TradeShop is slow at.

It would be nice, but the lag added by timing code that runs this often at small amounts of time (counted in ns and converted to ms) is not worth having in any version that goes out to production servers. I can definitely make it faster and output it separately from the console, which would allow me to output all the data instead of a tiny percentage of it, but it would be significantly less readable and would need more post-processing to be useful. Also, with this having to run as fast as it does, even adding multiple if statements can add a significant amount of time; with the cache this would only really affect the peaks caused by the initial add, but that could still noticeably affect some of the worst servers.

TL;DR: We can add the timing, but I don't think it would provide useful enough information compared to Spigot's timings, except in individual cases.

SparklingComet commented 2 years ago

Oh let me clarify, I wasn't suggesting we add something that measures clock speed, that wouldn't be good... No, I was thinking we should bug people so that they give us their own timings.

KillerOfPie commented 2 years ago

Oh let me clarify, I wasn't suggesting we add something that measures clock speed, that wouldn't be good... No, I was thinking we should bug people so that they give us their own timings.

Oh yes, that's good, although I'm pretty sure this will fix almost all the problems people will have.