Memory leak with viewports when using OpenGL3 backend #4468

Closed parbo closed 1 year ago

parbo commented 3 years ago
Dear ImGui 1.84 WIP (18307)
--------------------------------
sizeof(size_t): 4, sizeof(ImDrawIdx): 2, sizeof(ImDrawVert): 20
define: __cplusplus=199711
define: _WIN32
define: _MSC_VER=1929
define: _MSVC_LANG=201402
define: IMGUI_HAS_VIEWPORT
define: IMGUI_HAS_DOCK
--------------------------------
io.BackendPlatformName: imgui_impl_glfw
io.BackendRendererName: imgui_impl_opengl3
io.ConfigFlags: 0x00000441
 NavEnableKeyboard
 DockingEnable
 ViewportsEnable
io.ConfigViewportsNoDecoration
io.ConfigInputTextCursorBlink
io.ConfigWindowsResizeFromEdges
io.ConfigMemoryCompactTimer = 60.0
io.BackendFlags: 0x00001406
 HasMouseCursors
 HasSetMousePos
 PlatformHasViewports
 RendererHasViewports
--------------------------------
io.Fonts: 1 fonts, Flags: 0x00000000, TexSize: 512,64
io.DisplaySize: 1280.00,720.00
io.DisplayFramebufferScale: 1.00,1.00
--------------------------------
style.WindowPadding: 8.00,8.00
style.WindowBorderSize: 1.00
style.FramePadding: 4.00,3.00
style.FrameRounding: 0.00
style.FrameBorderSize: 0.00
style.ItemSpacing: 8.00,4.00
style.ItemInnerSpacing: 4.00,4.00

My Issue/Question:

When using the opengl3 backend with viewports enabled, there is a memory leak of a couple of MB per second after dragging a window outside the main viewport. It recovers if the window is dragged back into the main viewport. I can reproduce it with both example_sdl_opengl3 and example_glfw_opengl3. It does not happen with the opengl2 backend.

To reproduce: start the example, drag the "Hello, world!" window out of the viewport.

AidanSun05 commented 3 years ago

I think this is what you're talking about:

[screenshot: memory usage graph]

This extra memory allocation is for supporting a second platform context for the window that was just dragged out, and I'm pretty sure there's no leak. A memory leak is when a process doesn't deallocate its memory when it terminates. In the case of multi-viewports, the memory graph dipped down shortly after the window was dragged back into the viewport. This indicates that the extra memory was freed correctly, making it not leaked.

parbo commented 3 years ago

@AidanSun05 Keep it outside the main viewport. Memory usage grows indefinitely. While not a true memory leak in that the memory is eventually returned, it is still resource usage that makes extra viewports unusable.

ocornut commented 3 years ago

@AidanSun05 I presume @parbo is referring to an actual leak, as per the "a couple MB per second" comment.

It's probably highly dependent on the GPU driver and platform, and updating drivers might fix it. OpenGL tends to be poorly supported. But even if the drivers are faulty, it would be worthwhile to investigate the cause.

For what it is worth, on my home laptop I do not see a memory increase in that window.

We get that sort of stuff mentioned from time to time (e.g. #3381 and #2981) but nobody with a repro could investigate further.

parbo commented 3 years ago

This is having the window outside the main viewport for 6 minutes, then putting it back. So the resources are eventually returned when the second viewport is no longer used, but the memory usage grows indefinitely until then.

[screenshot: memory usage graph over the 6-minute run]

AidanSun05 commented 3 years ago

@ocornut - Thanks for clearing things up. (I couldn't reproduce this on my machine either)

@parbo - This is apparently caused by your OS, your GPU, or its driver. Updating your software might help, though as previously mentioned, we can't really investigate/solve this issue because there's no way to repro it.

ocornut commented 3 years ago

I tried with my Intel HD integrated card and Nvidia GTX 1060M and couldn't get things to grow here. Confirmed which GPU was in use by printing:

    #define GL_VENDOR    0x1F00
    #define GL_RENDERER  0x1F01
    const GLubyte* vendor = glGetString(GL_VENDOR);
    const GLubyte* renderer = glGetString(GL_RENDERER);
    printf("vendor = '%s'\n", vendor);
    printf("renderer = '%s'\n", renderer);

> This is apparently caused by your OS, your GPU, or its driver.

We have to stay open-minded in the sense that OpenGL is a known pile of legacy mess, and there might be GL programming idioms that won't trigger driver issues the same way.

The "problem" if if you update your drivers now and it magically fixes it, it won't be investigated (again). So maybe best to take a chance and investigate.

Some ideas

parbo commented 3 years ago

It only happens with my Intel GPU, not with the Nvidia. I have a Dell XPS 15 in case it helps. The Intel UHD graphics driver version is 27.20.100.9168.

ocornut commented 3 years ago

Here's what I would do:

If it is indeed the call to ImGui_ImplOpenGL3_RenderDrawData() that does it,

If it is that loop, would you be able to look into the OpenGL buffer orphaning idiom and see if modifying the code in that loop (presumably the glBufferData call) to use orphaning fixes it for you?
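
For reference, the classic orphaning idiom looks roughly like this (just a sketch of the idea, not the backend's current code; vbo_handle stands for whatever buffer object is bound at that point):

    // Orphaning: re-specifying the data store with NULL lets the driver hand out fresh
    // storage instead of stalling on (or accumulating) storage still in use by the GPU.
    glBindBuffer(GL_ARRAY_BUFFER, vbo_handle);
    glBufferData(GL_ARRAY_BUFFER, vtx_size, NULL, GL_STREAM_DRAW);   // orphan the old storage
    glBufferSubData(GL_ARRAY_BUFFER, 0, vtx_size, vtx_data);         // fill the fresh storage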

parbo commented 3 years ago

It's kind of tricky to experiment when the main viewport isn't rendering, so I did bool is_main = ImGui::GetMainViewport() == draw_data->OwnerViewport; and used is_main as an additional condition in the two loops in ImGui_ImplOpenGL3_RenderDrawData(). It is definitely the glBufferData calls. I assume the calling pattern with multiple viewports makes the driver keep old buffers around for some reason.
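
Roughly, the gating amounts to this inside ImGui_ImplOpenGL3_RenderDrawData() (a sketch of the diagnostic, not the exact code I ran):

    // Diagnostic sketch: only issue the suspect GL upload calls for the main viewport,
    // so the secondary viewport's contribution to the memory growth can be isolated.
    bool is_main = ImGui::GetMainViewport() == draw_data->OwnerViewport;
    for (int n = 0; n < draw_data->CmdListsCount; n++)
    {
        const ImDrawList* cmd_list = draw_data->CmdLists[n];
        if (is_main)
        {
            glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);
        }
        // ... rest of the per-command-list rendering unchanged ...
    }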

I have very little experience with OpenGL, so I'm not sure it's anywhere near correct, but this hack seems to fix it:

    // Grow-only allocation: only (re)allocate the GL buffer storage when the needed size
    // exceeds what was previously allocated, then upload with glBufferSubData().
    // Note: the statics are shared across all viewports/command lists.
    static GLsizeiptr v_max_size = 0;
    auto v_size = (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert);
    if (v_size > v_max_size) {
      v_max_size = v_size;
      glBufferData(GL_ARRAY_BUFFER, v_max_size, NULL, GL_STREAM_DRAW);
    }
    static GLsizeiptr i_max_size = 0;
    auto i_size = (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx);
    if (i_size > i_max_size) {
      i_max_size = i_size;
      glBufferData(GL_ELEMENT_ARRAY_BUFFER, i_max_size, NULL, GL_STREAM_DRAW);
    }
    // Upload vertex/index buffers into the existing storage
    glBufferSubData(GL_ARRAY_BUFFER, 0, v_size, (const GLvoid *)cmd_list->VtxBuffer.Data);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, i_size, (const GLvoid *)cmd_list->IdxBuffer.Data);

It should probably be done in some nicer way.

ocornut commented 3 years ago

Great to hear, that's one piece of the puzzle.

I think we would need to gather as many references as possible (from other threads/issues) about what effectively works best with drivers. E.g. I worry that glBufferSubData() may carry its own source of bizarre issues.

Have you tried this?

        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);
ocornut commented 3 years ago

It would be worth trying that solution. We're going to have to run any of those changes on as many platforms as possible and measure performance (and possible "leaks"). I think @floooh had experience with that sort of thing?

parbo commented 3 years ago

Unfortunately, this does not fix it for me.

        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);
floooh commented 3 years ago

> I think @floooh had experience with that sort of thing?

I'm seeing temporary memory spikes in my own swapchain wrapper code during window resizing. The exact nature depends on the operating system and 3D backend: it ranges from "memory usage goes up but remains stable during resize and recovers about a second after the resize" to "memory keeps growing during resize and only recovers after the resize has finished" - but this is swapchain-surface related. I haven't really found a solution to those memory spike issues (except of course not resizing the swapchain during window resizing).

My ImGui multi-viewport experiments aren't far enough along to have a GL backend yet, so unfortunately I can't help with an alternative implementation to check against in this specific situation.

ocornut commented 3 years ago

I pushed a branch https://github.com/ocornut/imgui/tree/features/opengl_buffer_update (forked from docking)

To validate this nicely I think we would need people to test both branches (docking 58f5092c53dd7c3208d7ca717c861325616f58b0 vs this fork 009c1e08527b74542128fa35a6f05afaaa7a1cf5)

- [ ] Windows NVIDIA:
- [ ] Windows AMD:
- [ ] Windows Intel:
- [ ] Linux
- [ ] OSX
- [ ] WebGL
- [ ] Raspberry Pi

If someone can run the updated backend for webgl, osx, raspberry pi, that'd be nice.

AidanSun05 commented 3 years ago

I have some Raspberry Pi 3's and 4's that I can run the backend on.

AidanSun05 commented 3 years ago

I ran a few tests on my Raspberry Pis. First, some notes:

Relevant code changes:

imconfig.h:

#define IMGUI_IMPL_OPENGL_ES3

GLFW example:

// Decide GL+GLSL versions
const char* glsl_version = "#version 300 es";
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);

SDL example:

// Decide GL+GLSL versions
const char* glsl_version = "#version 300 es";
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, 0);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

Here's the data I collected from the GLFW and SDL examples:

| Metric | Docking Branch | OpenGL Buffer Update Branch |
| --- | --- | --- |
| Idle memory consumption | 35-40 MB | 35-40 MB |
| Demo window outside main viewport and resized constantly | 140 MB +/- 10 | 135 MB +/- 10 |
| Capped framerate | 55-60 FPS | 55-60 FPS |
| Unthrottled framerate | 75-105 FPS | 60-80 FPS |

So the new buffer update branch seems to help (i.e. reduce memory usage) on the Raspberry Pi 4.

floooh commented 3 years ago

...looking at the new update code reminds me... I've been changing my buffer updates in the draw loop somewhat recently to only do one update per buffer per frame. This was because glBufferSubData() (more specifically: doing multiple glBufferSubData() calls on the same buffer per frame, even if the data is appended, not overwritten) had pretty bad performance problems on some devices (mainly Android, and some browsers when running in WebGL via WASM).

So what I'm doing now is a two-step copy: first loop over all command lists and copy the per-command vertex and index data into contiguous memory buffers, and then do one glBufferData() each for the vertex and index data:

https://github.com/floooh/sokol/blob/42d757c7a47a7096f15b48d491f06ef54fddb162/util/sokol_imgui.h#L1916-L1965

...later in the draw loop I'm adjusting the vertex buffer offset to the start of the per-command vertex chunk so that the indices point to the right vertices (under the hood this simply calls glVertexAttribPointer() with an offset as the last parameter):

https://github.com/floooh/sokol/blob/42d757c7a47a7096f15b48d491f06ef54fddb162/util/sokol_imgui.h#L1991-L1993

...but this was done purely for performance on some platforms (and it helped a lot), not for memory consumption.
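
Condensed, the idea looks roughly like this in plain GL (a sketch, not the actual sokol code linked above; allocation and error handling simplified):

    // Two-step copy: gather every command list into one contiguous CPU-side staging buffer
    // per stream, then upload each stream with a single glBufferData() call.
    static ImVector<ImDrawVert> vtx_staging;
    static ImVector<ImDrawIdx>  idx_staging;
    vtx_staging.resize(draw_data->TotalVtxCount);
    idx_staging.resize(draw_data->TotalIdxCount);
    ImDrawVert* vtx_dst = vtx_staging.Data;
    ImDrawIdx*  idx_dst = idx_staging.Data;
    for (int n = 0; n < draw_data->CmdListsCount; n++)
    {
        const ImDrawList* cmd_list = draw_data->CmdLists[n];
        memcpy(vtx_dst, cmd_list->VtxBuffer.Data, cmd_list->VtxBuffer.Size * sizeof(ImDrawVert));
        memcpy(idx_dst, cmd_list->IdxBuffer.Data, cmd_list->IdxBuffer.Size * sizeof(ImDrawIdx));
        vtx_dst += cmd_list->VtxBuffer.Size;
        idx_dst += cmd_list->IdxBuffer.Size;
    }
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)draw_data->TotalVtxCount * (int)sizeof(ImDrawVert), vtx_staging.Data, GL_STREAM_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)draw_data->TotalIdxCount * (int)sizeof(ImDrawIdx), idx_staging.Data, GL_STREAM_DRAW);
    // When drawing, each command list's vertex chunk then needs a base offset (the running
    // vertex count before it), which is the glVertexAttribPointer() offset mentioned above.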

PS: It would be nice if Dear ImGui had an option which writes the vertices and indices into such all-in-one buffers directly, maybe even let the buffers be provided by the API user. Only makes sense if ImGui directly writes the vertices to those buffers instead of doing extra memory copies though (which I assume might be non-trivial).

rokups commented 3 years ago

example_sdl_opengl3 on Linux, plasma desktop, amdgpu driver:

| Metric | Docking Branch | OpenGL Buffer Update Branch |
| --- | --- | --- |
| Idle memory consumption | 30.6 MB | 30.1 MB |
| Demo window outside main viewport and resized constantly | 31.4 MB | 31.0 MB |
| Capped framerate | 143-144 FPS | 143-144 FPS |
| Unthrottled framerate | 9000-9700 FPS | 9000-9700 FPS |

Memory consumption remains constant at all times.

straywriter commented 3 years ago

Hi, I found a way to work around this problem. I have done some debugging, but I still haven't found the specific cause. It may be caused by OpenGL itself, or by an incompatibility between the GLFW-created window and OpenGL. However, I haven't tested the SDL case, so I'm not sure. I found that the main problem occurs here:

static void ImGui_ImplOpenGL3_InitPlatformInterface()
{
    ImGuiPlatformIO& platform_io = ImGui::GetPlatformIO();
    platform_io.Renderer_RenderWindow = ImGui_ImplOpenGL3_RenderWindow;
}

I took advantage of OpenGL's backward compatibility to work around this problem for the time being. This solution may not be perfect, but at least it keeps memory from leaking. Porting the OpenGL2 ImGui_ImplOpenGL2_RenderWindow to the OpenGL3 backend works around the problem effectively.

static void ImGui_ImplOpenGL2_InitPlatformInterface()
{
    ImGuiPlatformIO& platform_io = ImGui::GetPlatformIO();
    platform_io.Renderer_RenderWindow = ImGui_ImplOpenGL2_RenderWindow;
}

What do you think about that?

ocornut commented 3 years ago

@straywriter You have ignored my past message. We understand that "not using the opengl3 backend" fixes the issue for you but here we are trying to get the opengl3 backend fixed. Could you please try the branch mentioned here? https://github.com/ocornut/imgui/issues/4468#issuecomment-906503250

straywriter commented 3 years ago

@ocornut Thanks, my computer works fine on features/opengl_buffer_update.

Version: dear imgui, v1.84 WIP
Branch: features/opengl_buffer_update
Back-ends: imgui_impl_glfw.cpp + imgui_impl_opengl3.cpp
Compiler: Visual Studio 2019
Operating System: Windows 10
GPU: GeForce GTX 1050 Ti, driver version 457.49

straywriter commented 3 years ago

Hi @ocornut, I found a new bug in features/opengl_buffer_update. What do you think of this?

Version: dear imgui, v1.84 WIP
Branch: features/opengl_buffer_update
Back-ends: imgui_impl_glfw.cpp + imgui_impl_opengl3.cpp
Compiler: Visual Studio 2019
Operating System: Windows 10
GPU: GeForce GTX 1050 Ti, driver version 457.49

The following bug occurs when using DockSpace, when the Dear ImGui Demo window is initialized outside the main viewport:

[attachment showing the bug]

AidanSun05 commented 3 years ago

@straywriter Interesting bug report. I noticed a few things, so here's a still shot of your Visual Studio crash screen for reference:

[screenshot: Visual Studio crash screen]

  1. It says "ig9icd32.(pdb/dll)", which is actually related to Intel GPUs/drivers (from a quick Google search).
  2. Based on this, I'm pretty sure your machine has both an NVIDIA GPU (as you mentioned) and an Intel GPU, but your process is actually using the latter.

So I borrowed a laptop with an Intel UHD Graphics 620 (driver version 27.20.100.9664; June 1, 2021), and sure enough, I was able to replicate an extremely similar bug:

[animation: the distorted window, ending in a crash]

Except that this time, when interacting with the distorted window, the code crashes on line 666 of imgui_impl_glfw.cpp, as shown at the end of the above GIF. https://github.com/ocornut/imgui/blob/009c1e08527b74542128fa35a6f05afaaa7a1cf5/backends/imgui_impl_glfw.cpp#L666

I was not able to replicate this behavior on my main dev machine with an NVIDIA GTX 1080 (driver version 471.96; August 31, 2021).

captnjameskirk commented 2 years ago

Thought I'd chime in with some data points. I tested the docking branch with and without the patched OpenGL3 backend, with SDL and GLFW, and with vsync and unthrottled. The good news is that the patch fixed the memory leak, with a surprise bonus of better CPU/GPU usage as well.

Tested on a 2018 MBP, Intel Core i7 2.6Ghz, Radeon Pro 560X.

The test app used 3 ImGui windows dragged out from the main viewport (demo window, style editor, and metrics/debugger). Also note that I used the current docking branch with the two patched opengl3 files copied over.

With vsync (60 fps):

| Build | CPU (%) | GPU (%) | Mem |
| --- | --- | --- | --- |
| SDL/unpatched | 16 | 24 | +1MB/s |
| GLFW/unpatched | 13 | 23 | +1MB/s |
| SDL/patched | 10 | 24 | +0MB/s |
| GLFW/patched | 9 | 23 | +0MB/s |

Unthrottled (about 300 fps on my system in all cases):

| Build | CPU (%) | GPU (%) | Mem |
| --- | --- | --- | --- |
| SDL/unpatched | 65 | 97 | +5MB/s |
| GLFW/unpatched | 45 | 97 | +3MB/s |
| SDL/patched | 32 | 92 | +0MB/s |
| GLFW/patched | 29 | 87 | +0MB/s |

A couple of things I took away from this:

Thanks @ocornut for the work on this patch. It's made multi-viewports usable, at least for me. 😄

ocornut commented 2 years ago

Alright let's do it, applying the change now! Thanks for the detailed table and everyone's feedback!

> GLFW is more efficient than SDL in every case

Probably related to some subtle windowing params/setup which the drivers and compositing layers are relying on...

ocornut commented 2 years ago

Fixed by 389982e! Thanks everyone for the precious help with this! (Also snuck in an unrelated static analysis fix in there.)

ocornut commented 2 years ago

Unfortunately we got two issues reported since that change happened:

Reopening, I don't know what the solution is yet.

ocornut commented 2 years ago

Sorry this has been open for so long. Based on reports like this one, I eventually settled on using varying codepaths based on GPU vendors :(

Pushed: ca222d30c8ca3e469c56dd981f3a348ea83b829f

EDIT: only on Windows, based on https://github.com/ocornut/imgui/issues/5127#issuecomment-1093075994
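
The gist of the vendor check is along these lines (a paraphrased sketch; see the commit for the exact code and the extra renderer-string checks):

    // Paraphrased sketch: on Windows, Intel GPUs get the glBufferSubData path,
    // everything else keeps the plain glBufferData path.
    #if defined(_WIN32)
        const char* vendor = (const char*)glGetString(GL_VENDOR);
        bd->UseBufferSubData = (vendor != NULL && strncmp(vendor, "Intel", 5) == 0);
    #endif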

ocornut commented 2 years ago

@parbo Can you still repro this leak if you force it to use the simple path?

            glBufferData(GL_ARRAY_BUFFER, vtx_buffer_size, (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx_buffer_size, (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);

If so, how about the alternative to https://github.com/ocornut/imgui/issues/4468#issuecomment-904152488 that I then asked you to use:

        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), NULL, GL_STREAM_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);

Can you make the 1st and 3rd calls use a size of 0?
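
i.e. something along these lines (a sketch of what I'm asking for, not tested code):

    // Variant to test: orphan with an explicit size of 0, then re-specify with the real data.
    glBufferData(GL_ARRAY_BUFFER, 0, NULL, GL_STREAM_DRAW);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)cmd_list->VtxBuffer.Size * (int)sizeof(ImDrawVert), (const GLvoid*)cmd_list->VtxBuffer.Data, GL_STREAM_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 0, NULL, GL_STREAM_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, (GLsizeiptr)cmd_list->IdxBuffer.Size * (int)sizeof(ImDrawIdx), (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW);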

ocornut commented 2 years ago

Posting a shared recap here: (will update as necessary) https://gist.github.com/ocornut/89ae5820c497510761e4c313ef5e0219

ocornut commented 2 years ago

Anyone affected by corruption or leak issues, could you cherry-pick this: https://github.com/ocornut/imgui/commit/3a68146b and see if you get anything printed? EDIT 2022/09/27: this was merged as 11f5be0ca, so you can now simply uncomment #define IMGUI_IMPL_OPENGL_DEBUG.

(You may alter the fprintf statement to whatever works best in your codebase.) Please also try with both values of UseBufferSubData (you may alter the if (bd->UseBufferSubData) check, turning it into true/false to try both paths). Please report your GPU/drivers to make the information easier to parse. Thanks!

ocornut commented 2 years ago

Could you also try both code paths using GL_DYNAMIC_DRAW instead of GL_STREAM_DRAW?

ocornut commented 2 years ago

Help on this would be useful. @parbo @gamecoder-nz @werwolv

parbo commented 2 years ago

The state for me is that I can no longer reproduce the memory leak, even with the code as it was when I originally reported this. I guess the Intel driver has been updated to fix the problem.

The tearing issue is still there if UseBufferSubData is true, and not there if set to false.

GL_DYNAMIC_DRAW vs GL_STREAM_DRAW seems to make no difference.

Output from debug printouts:

GL_MAJOR_VERSION = 3
GL_MINOR_VERSION = 0
GL_VENDOR = 'Intel'
GL_RENDERER = 'Intel(R) UHD Graphics'
ocornut commented 1 year ago

> The state for me is that I can no longer reproduce the memory leak, even with the code as it was when I originally reported this. I guess the Intel driver has been updated to fix the problem.

Alright, I'll disable that stuff now.

> The tearing issue is still there if UseBufferSubData is true, and not there if set to false.

Which tearing issue are you talking about? Anyhow we will disable UseBufferSubData now.

Thank you.

ocornut commented 1 year ago

Reverted with b8b0f9d. PLEASE REPORT IF YOU STILL HAVE ISSUES AFTER TODAY. I temporarily kept the alternative code path even though bd->UseBufferSubData is now always false, to allow experimenting should new issues arise. Fingers crossed this doesn't bite us back.

jsham0414 commented 1 year ago

It doesn't just happen with OpenGL. I'm using DirectX 11 and the same thing happens.

flarive commented 5 months ago

Hello @ocornut,

I'm also facing the memory leak bug with the latest docking branch. I have an NVIDIA GeForce MX 250 on my Windows 11 Dell laptop, with the latest NVIDIA drivers.

I have made a very simple repro for the memory issue: https://github.com/flarive/imGuiViewport_MemoryLeak

Image taken from the simple repro: [screenshot: memory_leak]

When one of the dialogs is outside the main window, the memory starts to increase and never stops increasing. If I just move the 2 dialogs back into the main window, the memory stops increasing and falls back to something like 15 MB.

I have tried the fix from your opengl_buffer_update branch (https://github.com/ocornut/imgui/commit/009c1e08527b74542128fa35a6f05afaaa7a1cf5).

When using the 2 files from your fix, the memory leak seems to be fixed, but I have strange glitches on my dialogs :(

Image taken from my project showing the glitches: [screenshot: glitch]

I have time to help if you need it on this issue. Sorry, I know you thought it was fixed... but in fact not really :)

Eviral

SorpoBlock commented 1 month ago

Hello, a friend of mine was trying to run my game. He has a GTX 1070 and Windows (not sure what his drivers are), and he kept getting the following error:

"ERROR - OpenGL error, type: 33361 severity: 33387 message: Buffer detailed info: Buffer object 9 (bound to GL_ELEMENT_ARRAY_BUFFER_ARB, usage hint is GL_STREAM_DRAW) will use VIDEO memory as the source for buffer object operations."

I personally don't use stream draw, only static or dynamic, so I was able to trace the issue to imgui_impl_opengl3.cpp line 579:

GL_CALL(glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx_buffer_size, (const GLvoid*)cmd_list->IdxBuffer.Data, GL_STREAM_DRAW));

I do not get the issue on my RX 5700 XT with Windows and updated drivers, nor do I get it on Intel integrated graphics on a Linux laptop.

flarive commented 1 month ago

Hello, the solution for me was to force usage of the NVIDIA video card (GeForce MX250 on my laptop) instead of the Intel integrated GPU. It solved the issue without any hack.
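
(For anyone wanting to force this from code rather than from the NVIDIA Control Panel, a common approach on hybrid-GPU Windows laptops is to export the vendor hint symbols from the executable - this is a general sketch, not necessarily what I did:)

    // Exporting these symbols from the .exe asks the NVIDIA Optimus / AMD PowerXpress
    // drivers to run the application on the discrete GPU instead of the integrated one.
    extern "C"
    {
        __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
        __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }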