tethragon opened this issue 1 year ago
Looking at your screenshots and poking at the game myself on Linux and Windows, I think your issue points towards vram exhaustion. On Windows this is usually handled much better than on Linux. dxvk also isn't the greatest in this area and can maybe make the allocated vram go up a bunch in some situations.
Could you try, as a test, making a file called dxvk.conf next to the game's exe and inserting dxvk.maxChunkSize = 16?
This game seems to have quite a lot of memory fragmentation, at least with dxvk.
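For reference, dxvk.conf is just a plain text file; a minimal sketch of what it could contain for this test:

# dxvk.conf, placed next to the game's exe
# cap the size of dxvk's memory chunks (value in MB)
dxvk.maxChunkSize = 16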
I played with dxvk.conf trying to solve this problem, poking around with the dxgi.maxDeviceMemory variable in order to fool the game into not using all the vram, with no positive result.
Regarding your suggestion, I just did the test: I tried dxvk.maxChunkSize = 16 / 32 / 64 / 128 / 256 etc. With any value lower than 128 I get a more severe performance hit (20 FPS). With values of 256 and greater I get the FPS I currently have.
Do you think there is any other dxvk setting I could try?
PS: I also tried on another PC which has a Radeon 6600 XT (8 GB). The performance was somewhat better, but the whole 8 GB of vram was still utilized and there was a huge difference in FPS between Windows and Linux (Proton/dxvk).
Do you think there is any other dxvk setting I could try ?
Sadly no.
I see the game pretty much straight up uses 8 GB of vram at the main menu, so it just loads in everything even when it might not be relevant. This is the case at medium, high, very high and ultra, where the used vram doesn't really change much (at the main menu and the start of the game). The amount actually allocated by chunks adds up to around 8.5 GB, so the fragmentation isn't as terrible here, but you will go above your vram limit, which can be pretty bad on Linux as said. At the low preset the used amount drops to around 4.2 GB and the allocated chunks to 6.6 GB, so here dxvk inefficiencies start to play a bigger role. At very low it's around 3.1 GB used and 6.3 GB allocated. These are numbers with dxvk on Windows, so Linux might be slightly different; I think they are higher there.
Edit: note I'm using a 7900 XTX, which has 24 GB of vram.
PS: I also tried on another PC which has a Radeon 6600 XT (8 GB). The performance was somewhat better, but the whole 8 GB of vram was still utilized and there was a huge difference in FPS between Windows and Linux (Proton/dxvk).
What is the difference with that card compared to Windows? Also, what is your CPU?
1) My CPU is a Ryzen 5800X3D.
2) I don't have easy access right now to the machine with the 6600 XT (8 GB), which has a Ryzen 3800X. I remember that when I ran the game there I hit ~90-100 FPS on the login screen (like Windows) and about 150-170 FPS outside of combat (I didn't test combat). On that PC, however, the GPU was being held back by the lower IPC of the CPU (3800X), so maybe the numbers I am giving are not accurate. I did reach the conclusion that the higher VRAM was helping. I will get to this machine later and do more extensive testing.
3) May I ask what FPS you are hitting on Linux vs Windows with your GPU? Not that I can afford this card, just curious.
Also, I would like to add that using PROTON_USE_WINED3D=1 (which tells Proton to use wined3d, i.e. translate to OpenGL instead of Vulkan) gives me almost 100% of the performance I get on Windows. However, this introduces various graphical glitches that are related to this game's coding (the same glitches I have when running the game's native OpenGL Linux client). This means the problem isn't necessarily related to how Linux handles vram, but rather points to a dxvk performance issue with handling the vram in this game.
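(For anyone wanting to try the same: when launching through Steam, this kind of variable goes in the game's launch options, something like PROTON_USE_WINED3D=1 %command% .)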
- May I ask what FPS you are hitting on Linux vs Windows with your GPU ? Not that I can afford this card, just curious.
At 1080p just standing still when you start the actual game.
Ultra dxvk Linux around 280 (radv)
Ultra dxvk Windows around 350
Ultra native Windows around 340
Very low dxvk Linux around 330 (radv)
Very low dxvk Windows around 400
Very low native Windows around 390
On Linux/Proton (not exactly sure where the issue is) I start hitting another problem that causes a perf hit with high core count CPUs, especially in some Unity games. So if I restrict the game to, say, 6 logical cores it looks better again: WINE_CPU_TOPOLOGY=6:0,1,2,3,4,5
Ultra dxvk Linux around 380 (radv)
Very low dxvk Linux around 430 (radv)
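In Steam launch options that would look something like this (the number before the colon is how many logical cores, the list after it picks which ones):

WINE_CPU_TOPOLOGY=6:0,1,2,3,4,5 %command%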
Also I would like to add that using PROTON_USE_WINED3D=1 (which tells Proton to translate to OpenGL) gives me almost 100% of the performance I get on Windows.
With wined3d, vram is probably a lot less of an issue, so imo it still points towards that. Note I'm not actually a dev, I just help out with testing, and I don't have much technical insight, so don't take my word as gospel. I'm impressed wined3d manages so well perf-wise. Kudos and good work to the wined3d devs :beers:
The difference between dxvk and wined3d in low vram cases might also be due to differences between radv and radeonsi: since radv added the GTT domain for everything, it behaves even worse in some vram-constrained cases (context: https://gitlab.freedesktop.org/mesa/mesa/-/issues/8763, https://gitlab.freedesktop.org/mesa/mesa/-/issues/8107).
That is to say, trying amdvlk or reverting the GTT commit in radv might be interesting as well, but it's still unlikely to work as well as on Windows.
OMG OMG OMG!!!
I just took a Timeshift snapshot (so I could revert back easily if things go south) and installed amdvlk (from here) and WOW!!!!
My character selection screen jumped from 60 to 144 FPS (like Windows!!!). However, in game (outside of combat) I get ~170 FPS, which is still worse than Windows' 250 FPS, but much better than the 120 FPS I had up to now.
And look at that GPU utilization!!! It went down to 75% (still not Windows' 50%, but that is a great improvement)!!!
I can't believe it. And everyone says that mesa's radv is better than amdvlk and recommends not installing amdvlk... I don't know what just happened!
So, could this be an issue with mesa radv's vulkan implementation?
I can't believe it. And everyone says that mesa's radv is better than amdvlk and recommend not to install amdvlk
That is still the correct recommendation in 99% of cases.
So, could this be a mesa radv's vulkan implementation issue?
My guess is still that it's because RADV always sets the GTT domain for allocations, which amdvlk doesn't do and which makes it perform worse in some cases when VRAM is running out. You could try to confirm that by building radv with https://gitlab.freedesktop.org/mesa/mesa/-/commit/862b6a9a97ad9c47c14dbc76ea892293573c746f reverted.
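Roughly, the steps would be something like the following (only a sketch; the prefix and meson options are examples and may need adjusting for your distro):

git clone https://gitlab.freedesktop.org/mesa/mesa.git && cd mesa
# revert the commit that makes RADV request the GTT domain for all allocations
git revert 862b6a9a97ad9c47c14dbc76ea892293573c746f
# build the AMD Vulkan driver and install it into a throwaway prefix
meson setup build -Dvulkan-drivers=amd --prefix=$HOME/mesa-radv-test
ninja -C build install
# then point the Vulkan loader at the resulting ICD json for this game only, e.g.
# VK_ICD_FILENAMES=$HOME/mesa-radv-test/share/vulkan/icd.d/radeon_icd.x86_64.json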
Could you please explain to me: now that I have installed amdvlk, was radv overwritten? Or do they both exist? And how do games know which one to choose? I see no difference in my inxi report...
You can have both installed and choose on a per-game basis using VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/SOME_DRIVER_ICD.json (the path might differ per distro, and I don't know what you did to install amdvlk, ...), although amdvlk makes itself the default automatically when installed.
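As a concrete example for Steam launch options (the json filenames below are the usual ones, but they can differ depending on the distro and on how amdvlk was packaged):

# force RADV for this game
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json %command%
# or force amdvlk
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/amd_icd64.json %command%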
I used the deb file. I have Linux Mint 21.1.
UPDATE:
OK, I played for about an hour using amdvlk and the good news is that the game never drops below 60-70 FPS (using radv the FPS were dropping to ~40 FPS during combat with many enemies). The bad news is that when a new map loads there is stuttering for a while, probably until all shaders/textures load properly into VRAM. This stuttering does NOT happen with radv. On Windows, during heavy combat the FPS never drop below 110-120 FPS and there is no stuttering. I wish I could afford a 7900 XTX like @Blisto91 but I can't... (and even if I could, the kWh cost where I live is very, very expensive, so I couldn't afford the power consumption of a 7900 XTX either). Maybe a 6700 XT (12 GB) is a good middle ground, but it is insane to buy a new GPU just because vulkan implementations on Linux apparently suck at VRAM exhaustion scenarios...
Hopefully the vram situation will improve with time the whole way through the stack: dxvk, radv (and AMD's drivers if needed) and, if necessary, amdgpu.
The bad news is that when a new map loads there is stuttering for a while, probably until all shaders/textures load properly into VRAM. This stuttering does NOT happen with radv.
Yeah, amdvlk doesn't support VK_EXT_graphics_pipeline_library yet, which is what allows dxvk to compile shaders upfront (at the main menu or a loading screen) if the game does the same on Windows. So it will instead compile at draw time, which gives stutters until you've seen everything.
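If you want to check whether a driver exposes that extension, something like this should work (assuming vulkaninfo from vulkan-tools is installed):

vulkaninfo | grep -i graphics_pipeline_library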
Do you think we could have a "best of both worlds" solution in the near future? Both compiling shaders upfront and handling vram exhaustion better at the same time?
You can probably get amdvlk's out-of-vram behavior by just reverting https://gitlab.freedesktop.org/mesa/mesa/-/commit/862b6a9a97ad9c47c14dbc76ea892293573c746f and compiling mesa, although that is still far from really good low-vram management, even if it's good enough in this case. VRAM management has been bad on both vendors for years and years on Linux; I doubt there will be a solution in the near future.
You can probably get amdvlk's out of vram behavior by just reverting https://gitlab.freedesktop.org/mesa/mesa/-/commit/862b6a9a97ad9c47c14dbc76ea892293573c746f and compiling mesa
I think this is too complicated for my technical skills atm, and to tell you the truth I don't want to invest time in it.
VRAM management has been bad on both vendors for years and years on linux, I doubt there will be a solution in the near future.
Then I think it is time to either buy at least a 12 GB GPU or play Last Epoch on Windows. I like neither of those two solutions...
PS: Do you have any idea why the OpenGL client (of the same game) has no performance problem while also utilizing 100% of the vram on my card (and on the 6600 XT 8 GB as well)?
UPDATE:
After a couple of computer restarts, Steam downloaded some updates (not game client updates, something to do with shaders) and now the game is playable with almost no stuttering with amdvlk! Or maybe I played long enough and somehow the system "understood" what needs to be in vram and there is no more stuttering... I'll keep testing.
UPDATE:
Since I don't know if there is anything that can be done by the dxvk devs to improve the performance in max vram saturation scenarios, I opened an issue on the mesa tracker, hoping that the devs can see it and improve radv. Feel free to contribute there also.
The issue with radv is known. See also https://gitlab.freedesktop.org/mesa/mesa/-/issues/8763
The vram issues with dxvk are also known and would require large rewrites to improve properly.
You can have both installed and choose on a per-game basis using VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/SOME_DRIVER_ICD.json (the path might differ per distro, and I don't know what you did to install amdvlk, ...), although amdvlk makes itself the default automatically when installed.
I found out that it is easier to simply use AMD_VULKAN_ICD=RADV when you want to play a game with RADV (instead of looking for paths). You can also use AMD_VULKAN_ICD=AMDVLK for amdvlk, but there is no need for that since, after installation, amdvlk becomes the default.
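So in Steam launch options it's just something like:

AMD_VULKAN_ICD=RADV %command%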
Probably works fine most of the time, but note that for some there might be a bug that makes it not work 100% correctly: https://github.com/GPUOpen-Drivers/AMDVLK/issues/331
@tethragon Can you give this another go with dxvk master? Vram usage should have improved a bunch in some scenarios.
Playing the game on Linux (Proton/dxvk) results in a terrible performance hit. See the attached comparison screenshot. For example, on the character selection screen I get 144 FPS on Windows (GPU utilization ~50%) while on Linux (dxvk) I get 60 FPS with GPU utilization maxed out, which is not normal, as there is not much for the GPU to do on this screen. In game it is a "greek tragedy": I get 250 FPS on Windows vs 100 FPS on Linux. During combat with many enemies I get 120-150 FPS on Windows vs 40-50 FPS on Linux (dxvk). There is something that is hitting the GPU hard and making it under-perform.
Software information
Last Epoch, 1080p, fullscreen (tried all settings from low to ultra high).
System information
GPU: Sapphire Radeon RX5600XT
Driver: Graphics: Device-1: AMD Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] vendor: Sapphire driver: amdgpu v: kernel bus-ID: 09:00.0 Display: x11 server: X.Org v: 1.21.1.4 driver: X: loaded: amdgpu,ati unloaded: fbdev,modesetting,radeon,vesa gpu: amdgpu resolution: 1920x1080 OpenGL: renderer: AMD Radeon RX 5600 XT (navi10 LLVM 15.0.7 DRM 3.52 6.4.3-060403-generic) v: 4.6 Mesa 23.2.0-devel (git-96cf453 2023-06-08 jammy-oibaf-ppa) direct render: Yes
Wine version: Wine 8.12 (tried other versions also)
DXVK version: v2.2-119-g6be1f6d7bd5f832
Log files
Proton log:
steam-899770.log.zip
References
Opened issue in proton/valve github