multitheftauto / mtasa-blue

Multi Theft Auto is a game engine that incorporates an extendable network play element into a proprietary commercial single-player game.
https://multitheftauto.com
GNU General Public License v3.0

Out of memory Crash - 1.6.0-9.22789.0 #3840

Open haron4igg opened 4 days ago

haron4igg commented 4 days ago

Describe the bug

My players are experiencing a huge spike in crashes due to memory usage (low memory, memory access violations and so on). My game mode is pretty memory-intensive because of a lot of custom models. We spent days looking for the problem, and so far we have only found that older MTA versions have almost no problems compared to the latest ones:

Crashes in the last 30 days per version:

CRASH COUNT | VERSION

1381 | 1.6.0-9.22789.0
 675 | 1.6.0-9.22780.0
 611 | 1.6.0-9.22763.0
 247 | 1.6.0-9.22746.0
  23 | 1.6.0-9.22650.0
  17 | 1.6.0-9.22684.0
  10 | 1.6.0-9.22771.0
   9 | 1.6.0-9.22751.0

We are now suggesting players roll back to the older 22746 / 22650 versions, and they report no problems with those versions.

We had two crash scenarios. One is the "good one": the player reconnects to the server 2-3 times and on the 3rd-4th reconnect may get a low-memory warning and crash; this has been ~99% of our cases during the last 10 years.

The new one: the player starts MTA and joins the server for the first time, plays 15-20 minutes, or minimizes 2-3 times, and then gets a low-memory warning and a crash. This is what happens primarily on 22789.0.

The moment before the player crashes, after the first low-memory warnings (textures/fonts not created), his memstat looks totally okay: [memstat screenshot]

The crashes are different, but mostly: [crash dialog screenshots]

Going to update the ticket as soon as we identify the exact version where the problem first appeared.

Steps to reproduce

  1. Join our server: mtasa://83.222.116.88:22003
  2. Just play casually for 10-15 minutes.
  3. Minimize and maximize MTA a few times.
  4. Eventually you get a low-memory warning.
  5. A few minutes later the game crashes. (People also report that enabling showmemstat after the memory warning causes an immediate crash.)

Version

Client: 1.6.0-9.22763.0 - 1.6.0-9.22789.0

Additional context

No response

Relevant log output

No response


Xenius97 commented 4 days ago

0x003C91CC is the most famous crash caused by unoptimized mods / scripts

https://wiki.multitheftauto.com/wiki/Famous_crash_offsets_and_their_meaning

haron4igg commented 4 days ago

Unfortunately we can't proceed with checking MTA versions to find which one introduced the issue, since the latest 22789 is now enforced.

The last tested-stable version for us was 22746.

Also, I collected a bit more data on crashes over the last 90 days; it seems 22780.0 was the last 'good' one for us.

CRASH COUNT | VERSION

1543 | 1.6.0-9.22684.0
1437 | 1.6.0-9.22789.0
 675 | 1.6.0-9.22780.0
 612 | 1.6.0-9.22763.0
 593 | 1.6.0-9.22746.0
 559 | 1.6.0-9.22650.0
 529 | 1.6.0-9.22741.0

PlatinMTA commented 4 days ago

Unfortunately we can't proceed with checking MTA versions to find which one introduced the issue, since the latest 22789 is now enforced. The last tested-stable version for us was 22746. Also, I collected a bit more data on crashes over the last 90 days; it seems 22780.0 was the last 'good' one for us.


I think your own data shows that this is not related to MTA whatsoever (most likely the higher number of crashes on certain versions is related to the minversion being updated)... Using almost 2.8 GB when you have 3.2 GB available for MTA is too much. You shouldn't be using that much memory.

But you did mention players reverting to older versions and not having those issues... then again, you have 1100+ crashes with both mentioned versions, so that just sounds like a placebo effect. You can surely optimize your models and scripts to bring that 2.8 GB usage down to at least 2 GB (160 MB of vertices is too much; you can surely reduce it without it being noticeable, and the same goes for the textures: instead of 1024x1024, maybe try 512x512 textures, they will still look nice).


As a comparison, my server is hosting 260 players right now, with a lot of custom vehicles, ped skins and models, and these are the memory values in the busiest part of the map.

[memstat screenshot] Also, the values get cropped, and my resolution isn't that low... just 1600x900. We should take a look at that.

I know this is kind of an unfair comparison because we try to maintain the GTA: San Andreas aesthetic, so our models tend to be low-poly. I know that DayZ servers don't really strive for that, but that doesn't mean you can't optimize the models you already have. I'm positive that if you halve most of the textures for the skins, your memory usage will drop drastically (500 MB allocated just for textures is a lot). RenderWare is old... and only capable of using 32-bit addresses. Compromises have to be made.

haron4igg commented 4 days ago

I think your own data shows that this is not related to MTA whatsoever (most likely the higher number of crashes on certain versions is related to the minversion being updated)...

1543 | 1.6.0-9.22684.0 - there was a spike here (1), which was quickly fixed in the next patch.
1437 | 1.6.0-9.22789.0 - the current problematic version (2), which causes crashes during the first clean run of MTA. [crash-rate graph]

The rest:

 675 | 1.6.0-9.22780.0
 612 | 1.6.0-9.22763.0
 593 | 1.6.0-9.22746.0
 559 | 1.6.0-9.22650.0
 529 | 1.6.0-9.22741.0


But I totally agree with the optimization points; we have actually been doing a lot of that with each update during the last ~10 years of working with DayZ :D

PlatinMTA commented 4 days ago

1543 | 1.6.0-9.22684.0 - there was a spike here (1), which was quickly fixed in the next patch.

I remember this crash (game crashed on disconnect). Really annoying crash.

1437 | 1.6.0-9.22789.0 - the current problematic version (2), which causes crashes during the first clean run of MTA.

Did r22787 work, for instance? If I'm not mistaken, you guys still haven't found a version where the amount of crashes did not spike. Does your server use CEF?

haron4igg commented 4 days ago

Did r22787 work, for instance? If I'm not mistaken, you guys still haven't found a version where the amount of crashes did not spike. Does your server use CEF?

Yeah, we haven't found the exact one, but the statistics suggest .22780.0 was the last normal version. And yes, we are using CEF.

PlatinMTA commented 4 days ago

Yeah, we haven't found the exact one, but the statistics suggest .22780.0 was the last normal version. And yes, we are using CEF.

Afaik the last forced minclientversion before r22789 was r27763, according to my logs. Do you have, for instance, data on the number of users running r22780 vs the number of crashes? That would be really useful. Recently some changes have been made in CEF (#2933), so maybe that's the issue. Maybe disabling GPU rendering could stop the crashes?

haron4igg commented 4 days ago

Afaik the last forced minclientversion before r22789 was r27763, according to my logs. Do you have, for instance, data on the number of users running r22780 vs the number of crashes? That would be really useful. Recently some changes have been made in CEF (#2933), so maybe that's the issue. Maybe disabling GPU rendering could stop the crashes?

Can't find any API to disable it; isn't it a compilation flag?

PlatinMTA commented 4 days ago

Can't find any API to disable it; isn't it a compilation flag?

You can disable it from the settings, and you can check if the client has it enabled with isBrowserGPUEnabled
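Scripts can only read that setting, not change it. A rough sketch of a client-side check (isBrowserGPUEnabled is the function mentioned above; the event handler and chat message are just illustrative):

-- Client-side: warn players who still have CEF GPU rendering enabled
addEventHandler("onClientResourceStart", resourceRoot, function()
    if isBrowserGPUEnabled() then
        outputChatBox("CEF GPU rendering is ON - if you get low-memory crashes, try disabling it in MTA settings > Web Browser.", 255, 150, 0)
    end
end)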

Lpsd commented 4 days ago

It's a client setting, so it's entirely up to the user. There is no API to control this, as with most other client settings (the server has no authority over them).

As I mentioned on Discord, CEF GPU rendering was introduced in 22771 but was broken due to compositing being re-enabled, then fixed in 22789 (by disabling compositing, while still having GPU enabled by default).

I doubt that disabling GPU in CEF will resolve your issue but you can ask players to try it out (MTA settings > Web Browser > Enable GPU rendering).

Lpsd commented 4 days ago

If 22780 was good for your players then it's 99% not CEF, since that was (mostly) broken from 22771 to 22789 as mentioned above.

haron4igg commented 3 days ago

We have 3 users so far who got a lot of crashes and, since disabling GPU rendering, have had no more issues.

So I assume that, because we are on the edge with models/textures, adding CEF to video memory takes up all the remaining space... Is it possible to show how much video memory CEF uses in 'showmemstat'?
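As far as I know showmemstat does not break CEF out separately, but dxGetStatus already exposes MTA's own video-memory buckets, so something like this rough sketch could at least log those around the time of the warnings (field names as documented for dxGetStatus; the timer interval and message format are illustrative):

-- Client-side: periodically log the video-memory buckets MTA already tracks (CEF is not listed separately)
local function logVideoMemory()
    local status = dxGetStatus()
    outputConsole(string.format("Free for MTA: %d MB | Textures: %d MB | Fonts: %d MB | Render targets: %d MB",
        status.VideoMemoryFreeForMTA,
        status.VideoMemoryUsedByTextures,
        status.VideoMemoryUsedByFonts,
        status.VideoMemoryUsedByRenderTargets))
end
setTimer(logVideoMemory, 60000, 0) -- once a minute, forever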

Fernando-A-Rocha commented 3 days ago

Wouldn't it be nice to have a client function to disable CEF GPU rendering, so certain servers can control whether their clients need it or not?

Lpsd commented 3 days ago

Wouldn't it be nice to have a client function to disable CEF GPU rendering, so certain servers can control whether their clients need it or not?

Not viable for two reasons:

1) Requires a client restart
2) The principle of not allowing the server to modify client settings

Lpsd commented 3 days ago

In my opinion it's not up to a server to decide that a client can't use GPU in CEF, just because that server wants to push memory limits to breaking point.

haron4igg commented 9 hours ago

After a week of research:

  1. We found a few users who were able to reproduce the issue within ~30 minutes of a gaming session, on the following hardware:
     - NVIDIA GeForce RTX 3060, RAM: 32661 MB
     - NVIDIA GeForce GTX 1650, RAM: 16334.94921875 MB
     - Intel(R) Arc(TM) A750 Graphics, RAM: 16208.4765625 MB

  2. We have been turning off resource packs one by one to see if any of them are responsible for the problem.

  3. We disabled newly added resources and shaders, to rule out in-shader memory leakage.

  4. We also ran a test without any of the textures/shaders, i.e. ~500 MB less memory and no shader rendering.

As a result, testers now get this crash not after 30 minutes but after ~1.5-2 hours (even with the CEF GPU setting turned off), which now rather looks like a memory leak to me. Considering that we don't have this problem with the older MTA version, I now really doubt that the cause of this leak is my resources.

I also released some patches to the resource pack, reducing the size of the textures by ~200 MB in total, just for test purposes, to the public. As a result, the crash rate dropped in production, but only because a normal gaming session is below 1 hour; players who stay longer are still getting crashes.

Counting items in the client-side element tree shows no growth over the session time, so we are not leaking elements, shaders or textures; the per-type counts are below.

And to be honest, this is not the first time that CEF gets an update and we start getting crashes: see #2446.
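For context, this is roughly how such a per-type count can be gathered client-side (an illustrative sketch, not our exact script):

-- Client-side: walk the element tree from root and count elements per type
local function countTree(element, counts)
    local elementType = getElementType(element)
    counts[elementType] = (counts[elementType] or 0) + 1
    for _, child in ipairs(getElementChildren(element)) do
        countTree(child, counts)
    end
    return counts
end

iprint(countTree(root, {})) -- prints a per-type table like the one below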

{
  Element = 1,
  appart = 1,
  area = 26,
  base_ns_kt2 = 12,
  base_ns_kt3 = 8,
  base_ns_kt4 = 6,
  base_ns_kt5 = 2,
  blast_in_temp = 4,
  car_spawn = 198,
  cmarker = 6,
  col = 424,
  colmodelroot = 88,
  colshape = 250,
  craft_container = 16,
  dead_body = 142,
  dff = 908,
  dffroot = 88,
  dndRoot = 1,
  ["dx-font"] = 20,
  e_base = 63,
  expirable = 1,
  flare = 5,
  forbidden_area = 1,
  gen_tree_a = 3692,
  gen_tree_b = 11044,
  gen_tree_c = 1856,
  gps_blip = 114,
  ground_marker = 599,
  ["gui-button"] = 99,
  ["gui-checkbox"] = 8,
  ["gui-combobox"] = 12,
  ["gui-edit"] = 18,
  ["gui-font"] = 1,
  ["gui-gridlist"] = 23,
  ["gui-label"] = 127,
  ["gui-memo"] = 4,
  ["gui-scrollbar"] = 1,
  ["gui-scrollpane"] = 5,
  ["gui-staticimage"] = 4,
  ["gui-tab"] = 8,
  ["gui-tabpanel"] = 2,
  ["gui-window"] = 13,
  guiroot = 88,
  image = 76,
  label = 19,
  local_settings = 1,
  loot = 3921,
  loot_obj = 390,
  map = 125,
  marker = 646,
  mode_settings = 1,
  note = 10,
  npc_route_node = 1281,
  npc_spawn = 2453,
  npc_spawner = 56,
  npc_team = 19,
  object = 52015,
  ped = 610,
  pickup = 61,
  player = 1,
  radio_root = 1,
  removeWorldObject = 2476,
  resource = 88,
  root = 1,
  shader = 1077,
  sound_beacon = 99,
  spawn = 149,
  spawn_scene = 11,
  sync_root = 1,
  team = 7,
  team_ex = 1,
  texture = 1195,
  trigger = 161,
  txd = 932,
  txdroot = 88,
  unbug_marker = 14,
  vehicle = 146,
  water = 3
}
PlatinMTA commented 7 hours ago

Could it be a table (or a series of tables) that is not being cleaned up properly? Global tables, that is; not local ones, because those get picked up by the garbage collector.

Memory leaks can also happen because you are not properly clearing some global variables, for example when elements get destroyed, players disconnect, or the data is no longer needed. You can check the memory usage of a resource in the performance browser.

This is a little script that should raise your RAM usage (an exaggeration, but you could easily make this mistake with an onClientRender event):

-- Shallow-copies a table (one level deep); non-table values are returned as-is
function tableCopy(orig)
    local orig_type = type(orig)
    local copy
    if orig_type == 'table' then
        copy = {}
        for orig_key, orig_value in pairs(orig) do
            copy[orig_key] = orig_value
        end
    else -- number, string, boolean, etc
        copy = orig
    end
    return copy
end

---------------------

-- Globals: everything stored here stays referenced for the lifetime of the resource
allElements = {}
global = {}

function onStart()
    getChildren(root)

    utilizeRAM()
end
addEventHandler("onClientResourceStart", resourceRoot, onStart)

-- Recursively walks the element tree and keeps a reference to every element, grouped by type
function getChildren(element)
    local children = getElementChildren(element)
    if #children == 0 then
        return
    end

    for _, child in ipairs(children) do
        local elementType = getElementType(child)
        if not allElements[elementType] then
            allElements[elementType] = {}
        end

        local i = #allElements[elementType] + 1
        allElements[elementType][i] = child

        getChildren(child)
    end
end

-- Fills a global table with 500,000 copies of allElements; nothing ever releases them
function utilizeRAM()
    local iMax = 500000
    for i=1,iMax do
        global[i] = tableCopy(allElements)
    end
end

BEFORE / AFTER: [memory usage screenshots]
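For completeness, the usual fix for this pattern is to drop stored references once an element goes away, roughly like this (an illustrative sketch reusing the allElements table from the example above):

-- Client-side: forget stored references when an element is destroyed, so the GC can reclaim them
addEventHandler("onClientElementDestroy", root, function()
    local list = allElements[getElementType(source)]
    if not list then return end
    for i = #list, 1, -1 do
        if list[i] == source then
            table.remove(list, i)
        end
    end
end)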

So, if you could check the memory usage of your resources, that would be great. Realistically speaking, I doubt you have a massive memory leak in one of your resources, but that could be the case, so it would be nice to rule it out.

haron4igg commented 2 hours ago

So, if you could check the memory usage of your resources, that would be great. Realistically speaking, I doubt you have a massive memory leak in one of your resources, but that could be the case, so it would be nice to rule it out.

  1. I do check the performance browser's memory consumption per script, and it's normal. For a player who stayed connected for ~1 hour: [screenshot] For a freshly joined player: [screenshot] (A small script-side cross-check is sketched after this list.)

  2. Why then does rolling back to an older MTA version resolve the problem? A Lua leak would be version-independent…
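Regarding point 1, a rough sketch of the script-side cross-check: each resource runs its own Lua VM, so collectgarbage("count") reports that resource's Lua heap in KB (the timer and log format here are just illustrative).

-- Log this resource's Lua heap every 5 minutes; a number that keeps growing hints at a script-side leak
setTimer(function()
    outputConsole(string.format("[%s] Lua memory: %.1f MB",
        getResourceName(getThisResource()), collectgarbage("count") / 1024))
end, 300000, 0)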

haron4igg commented 1 hour ago

Could you guys please allow an MTA downgrade to r22771, so we can precisely retest the latest versions with my most problematic users and locate the issue?

TracerDS commented 1 hour ago

Could you guys please allow an MTA downgrade to r22771, so we can precisely retest the latest versions with my most problematic users and locate the issue?

Did you try nightly?