AdguardTeam / AdGuardHome

Network-wide ads & trackers blocking DNS server
https://adguard.com/adguard-home.html
GNU General Public License v3.0

Memory leak and device crash #4437

Closed · timkgh closed this 1 year ago

timkgh commented 2 years ago

Issue Details

Expected Behavior

Run for long periods of time without crashes/memory leak.

Actual Behavior

Leaks memory until the router is out of memory and it crashes.

Screenshots

After about 26 days of uptime:

agh1

After the screenshot above, I restarted AGH and it shows a reasonable memory footprint:

agh2

You can see the memory use on the device drops from 456M to 177M after an AGH restart, but it continues to climb until it eventually exhausts the entire device memory.

Additional Information

Cache settings:

cache_size: 4194304
cache_ttl_min: 300
cache_ttl_max: 86400
cache_optimistic: true
safebrowsing_cache_size: 1048576
safesearch_cache_size: 1048576
parental_cache_size: 1048576
cache_time: 30

Clearing the stats and logs while AGH is running does not help reclaim memory, only an AGH restart does.

SiNONiMiTY commented 2 years ago

Bumping this up. This needs visibility.

EugeneOne1 commented 2 years ago

@timkgh, hello and apologies for the late response. Unfortunately, we can't reproduce the issue on our test machines. Could you please answer a couple of questions:

  1. How many filter lists do you use? How many rules in total?
  2. What value of querylog_size_memory (section dns) is set in AdGuardHome.yaml file?

Also, could you please collect some profiling info to enrich the issue details? Here is how to do it:

  1. Set debug_pprof in AdGuardHome.yaml to true. This will enable some profiling HTTP endpoints on localhost (127.0.0.1);
  2. Then, run AGH for some time. A few minutes should be enough;
  3. Next, fetch http://127.0.0.1:6060/debug/pprof/heap and save it to a file;
  4. Finally, send the resulting file to devteam@adguard.com with a subject like "Issue 4437".

Sorry for the somewhat complicated request. Thanks.
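For anyone else collecting these profiles, step 3 can be done with a one-liner (a sketch assuming curl is installed on the device; wget -O works the same way):

# save the heap profile to a file; the endpoint only listens on localhost
curl -s -o heap.pprof http://127.0.0.1:6060/debug/pprof/heap

# optionally summarize it locally, if a Go toolchain is available
go tool pprof -top heap.pprof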

timkgh commented 2 years ago

I believe it's a cache issue. I've been running it with:

cache_size: 0
cache_optimistic: false

for a few days now and it's stable; memory use does not creep up.

I don't think the cache limit of 4 MB from my previous config is being enforced; it appears that the cache grows unbounded.

image

As for your other question, I don't run lots of filters, just filters from the built-in lists:

filters:
- enabled: true
  url: https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt
  name: AdGuard DNS filter
  id: 1
- enabled: true
  url: https://raw.githubusercontent.com/hoshsadiq/adblock-nocoin-list/master/hosts.txt
  name: NoCoin Filter List
  id: 1634786883
- enabled: true
  url: https://raw.githubusercontent.com/durablenapkin/scamblocklist/master/adguard.txt
  name: Scam Blocklist by DurableNapkin
  id: 1634786884
- enabled: true
  url: https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt
  name: Spam404
  id: 1634786885
- enabled: true
  url: https://curben.gitlab.io/malware-filter/urlhaus-filter-agh-online.txt
  name: Online Malicious URL Blocklist
  id: 1634786886
- enabled: true
  url: https://raw.githubusercontent.com/mitchellkrogza/The-Big-List-of-Hacked-Malware-Web-Sites/master/hosts
  name: The Big List of Hacked Malware Web Sites
  id: 1634786887
- enabled: true
  url: https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareAdGua
  name: Dandelion Sprout's Anti-Malware List
  id: 1640801008
- enabled: true
  url: https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt
  name: WindowsSpyBlocker - Hosts spy rules
  id: 1645727393

Query log size:

querylog_size_memory: 1000

EugeneOne1 commented 2 years ago

@timkgh, thanks for the clarifications. Could you please perform a few more tests?

  1. Restore the cache_size value to 4 MB and set cache_optimistic to false;
  2. Run AGH for a couple of days and check the memory usage statistics.

In short, we suspect the optimistic caching feature causes these leaks, so the above test should confirm or refute that assumption.

Also, we'd like a slightly different set of profiling data given your clarifications. Could you please do the same as I described here, but fetch http://127.0.0.1:6060/debug/pprof/goroutine instead of http://127.0.0.1:6060/debug/pprof/heap.

timkgh commented 2 years ago

Trying cache_size: 4194304 and cache_optimistic: false

Meanwhile, I think you have sufficient info above to try to reproduce the issue in parallel and collect the profiling you need.

It is also possible that these are leaking too; I have not turned them off, and I do have the corresponding features enabled:

safebrowsing_cache_size: 1048576
safesearch_cache_size: 1048576
parental_cache_size: 1048576

SiNONiMiTY commented 2 years ago

Just an observation: it seems that the memory leak starts after automatic filter updates.

My configuration:

cache_size: 0
cache_optimistic: false
filters_update_interval: 72

3 days ago, I restarted AGH after updating the filter lists, and memory usage was low. Today, after the scheduled automatic filter update, memory usage grew significantly. I will continue to observe this behavior.

timkgh commented 2 years ago

@SiNONiMiTY you can try setting filters_update_interval: 1 and see if it accelerates the leak. I have it set to 1 and I don't think it's the cause of the leaks. Memory use does fluctuate; it goes up and down, but that's normal Go garbage collection. Using the web interface (e.g. the query log) will also cause a memory spike, but it seems to come back down. Of course, any of these actions could still cause slight leaks that only become apparent over very long periods of time.
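To tell the normal GC sawtooth apart from a genuine leak, it can help to sample the process RSS over time and watch the trend of the floor rather than individual spikes. A minimal sketch, assuming a Linux /proc filesystem and that pidof resolves a single AdGuardHome process:

# log a timestamped RSS sample every 5 minutes
while true; do
  echo "$(date '+%F %T') $(awk '/VmRSS/ {print $2, $3}' /proc/$(pidof AdGuardHome)/status)"
  sleep 300
done >> /tmp/agh-rss.log

A flat sawtooth is just the collector at work; a floor that keeps rising across days points at a leak.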

SiNONiMiTY commented 2 years ago

@timkgh I will be trying this one. It is good that it goes down in your case. I have been looking at this for the past few days, and this is the only consistent behavior I am seeing. I don't even have caching enabled, which could be another cause of the memory leak.

Here are the preliminary results after setting update interval to 1 hour.

BEFORE image

AFTER (UPDATE 1) image

AFTER (UPDATE 2) image

jumpsmm7 commented 2 years ago

Issue Details

  • Version of AdGuard Home server:

    • v0.107.5
  • How did you install AdGuard Home:

    • GitHub releases
  • How did you setup DNS configuration:

    • router
  • If it's a router or IoT, please write device model:

    • Ubiquiti EdgeRouter ER-12, EdgeOS
  • CPU architecture:

    • mips64_softfloat (Cavium Octeon III V0.2 FPU V0.0)
  • Operating system and version:

    • EdgeOS v2.0.9-hotfix.2

@EugeneOne1, it appears there may be some form of anomaly being reported by users on ARM-based architectures as well, concerning similar memory leak issues. I just wanted to bring to your attention that it may not be only the MIPS architecture impacted by this anomaly. You can see where I and other AdGuardHome users on asuswrt-merlin routers have been discussing it at this link:

https://www.snbforums.com/threads/release-asuswrt-merlin-adguardhome-installer-amaghi.76506/page-40#post-755358

Here is where I have requested the aid of other asuswrt-merlin users to see if we can run some more tests:

http://www.snbforums.com/threads/adguardhome-vs-diversion-some-basics.78332/post-756017

jumpsmm7 commented 2 years ago

3. http://127.0.0.1:6060/debug/pprof/heap

@EugeneOne1 does this help you?

/opt/bin/go/bin/go tool pprof -top http://localhost:6060/debug/pprof/heap
Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
Saved profile in /root/pprof/pprof.AdGuardHome.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
File: AdGuardHome
Type: inuse_space
Time: Apr 6, 2022 at 1:44am (UTC)
Showing nodes accounting for 82.75MB, 100% of 82.75MB total
      flat  flat%   sum%        cum   cum%
   67.74MB 81.86% 81.86%    77.74MB 93.94%  github.com/AdguardTeam/urlfilter.NewDNSEngine
      10MB 12.09% 93.94%       10MB 12.09%  github.com/AdguardTeam/urlfilter.(*DNSEngine).addRule
       1MB  1.21% 95.15%        1MB  1.21%  github.com/AdguardTeam/dnsproxy/proxy.(*cacheItem).pack
    0.51MB  0.61% 95.76%     0.51MB  0.61%  crypto/x509.(*CertPool).addCertFunc (inline)
    0.50MB  0.61% 96.37%     0.50MB  0.61%  github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*fsysARPDB).Refresh
    0.50MB  0.61% 96.98%     0.50MB  0.61%  golang.org/x/net/http2/hpack.newInternalNode (inline)
    0.50MB  0.61% 97.58%     0.50MB  0.61%  encoding/pem.Decode
    0.50MB   0.6% 98.19%     0.50MB   0.6%  runtime.allocm
    0.50MB   0.6% 98.79%     0.50MB   0.6%  runtime.malg
    0.50MB   0.6% 99.40%     0.50MB   0.6%  github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1
    0.50MB   0.6%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter/filterlist.readLine
         0     0%   100%     1.01MB  1.22%  crypto/x509.(*CertPool).AppendCertsFromPEM
         0     0%   100%     0.50MB  0.61%  github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*arpdbs).Refresh
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).filterDNSRequest
         0     0%   100%     1.50MB  1.81%  github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).handleDNSRequest
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).processFilteringBeforeRequest
         0     0%   100%        1MB  1.21%  github.com/AdguardTeam/AdGuardHome/internal/dnsforward.(*Server).processUpstream
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).CheckHost
         0     0%   100%    77.74MB 93.94%  github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).SetFilters
         0     0%   100%    77.74MB 93.94%  github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).initFiltering
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).matchHost
         0     0%   100%     0.50MB  0.61%  github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).Reload (inline)
         0     0%   100%     0.50MB  0.61%  github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).addFromSystemARP
         0     0%   100%     0.50MB  0.61%  github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).periodicUpdate
         0     0%   100%     1.01MB  1.22%  github.com/AdguardTeam/AdGuardHome/internal/home.LoadSystemRootCAs
         0     0%   100%    77.74MB 93.94%  github.com/AdguardTeam/AdGuardHome/internal/home.enableFiltersLocked
         0     0%   100%     1.01MB  1.22%  github.com/AdguardTeam/AdGuardHome/internal/home.run
         0     0%   100%    77.74MB 93.94%  github.com/AdguardTeam/AdGuardHome/internal/home.run.func2
         0     0%   100%     1.01MB  1.22%  github.com/AdguardTeam/AdGuardHome/internal/home.setupContext
         0     0%   100%    77.74MB 93.94%  github.com/AdguardTeam/AdGuardHome/internal/home.startDNSServer
         0     0%   100%        1MB  1.21%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).Resolve
         0     0%   100%        1MB  1.21%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).cacheResp
         0     0%   100%     1.50MB  1.81%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).handleDNSRequest
         0     0%   100%     1.50MB  1.81%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpHandlePacket
         0     0%   100%     1.50MB  1.81%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpPacketLoop.func1
         0     0%   100%        1MB  1.21%  github.com/AdguardTeam/dnsproxy/proxy.(*cache).set
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter.(*DNSEngine).MatchRequest
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter.(*DNSEngine).matchLookupTable
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter/filterlist.(*FileRuleList).RetrieveRule
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter/filterlist.(*RuleStorage).RetrieveHostRule
         0     0%   100%     0.50MB   0.6%  github.com/AdguardTeam/urlfilter/filterlist.(*RuleStorage).RetrieveRule
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2.(*ClientConn).readLoop
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2.(*Framer).ReadFrame
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2.(*Framer).readMetaFrame
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2.(*clientConnReadLoop).run
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.(*Decoder).Write
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.(*Decoder).parseFieldLiteral
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.(*Decoder).parseHeaderFieldRepr
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.(*Decoder).readString
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.buildRootHuffmanNode
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.getRootHuffmanNode (inline)
         0     0%   100%     0.50MB  0.61%  golang.org/x/net/http2/hpack.huffmanDecode
         0     0%   100%     0.50MB   0.6%  runtime.mstart
         0     0%   100%     0.50MB   0.6%  runtime.mstart0
         0     0%   100%     0.50MB   0.6%  runtime.mstart1
         0     0%   100%     0.50MB   0.6%  runtime.newm
         0     0%   100%     0.50MB   0.6%  runtime.newproc.func1
         0     0%   100%     0.50MB   0.6%  runtime.newproc1
         0     0%   100%     0.50MB   0.6%  runtime.resetspinning
         0     0%   100%     0.50MB   0.6%  runtime.schedule
         0     0%   100%     0.50MB   0.6%  runtime.startm
         0     0%   100%     0.50MB   0.6%  runtime.systemstack
         0     0%   100%     0.50MB   0.6%  runtime.wakep
         0     0%   100%     0.50MB  0.61%  sync.(*Once).Do (inline)
         0     0%   100%     0.50MB  0.61%  sync.(*Once).doSlow

I also took the time to email the pprof per request.

jumpsmm7 commented 2 years ago

http://127.0.0.1:6060/debug/pprof/goroutine

@EugeneOne1

As requested, here is the goroutine pprof.

/tmp/home/root# /opt/bin/go/bin/go tool pprof -top http://localhost:6060/debug/pprof/goroutine
Fetching profile over HTTP from http://localhost:6060/debug/pprof/goroutine
Saved profile in /root/pprof/pprof.AdGuardHome.goroutine.001.pb.gz
File: AdGuardHome
Type: goroutine
Time: Apr 6, 2022 at 1:55am (UTC)
Showing nodes accounting for 28, 100% of 28 total
      flat  flat%   sum%        cum   cum%
        24 85.71% 85.71%         24 85.71%  runtime.gopark
         1  3.57% 89.29%          1  3.57%  net/http.(*connReader).backgroundRead
         1  3.57% 92.86%          1  3.57%  runtime.notetsleepg
         1  3.57% 96.43%          1  3.57%  runtime/pprof.runtime_goroutineProfileWithLabels
         1  3.57%   100%          1  3.57%  syscall.Syscall6
         0     0%   100%          2  7.14%  bufio.(*Reader).Read
         0     0%   100%          2  7.14%  bytes.(*Buffer).ReadFrom
         0     0%   100%          2  7.14%  crypto/tls.(*Conn).Read
         0     0%   100%          2  7.14%  crypto/tls.(*Conn).readFromUntil
         0     0%   100%          2  7.14%  crypto/tls.(*Conn).readRecord (inline)
         0     0%   100%          2  7.14%  crypto/tls.(*Conn).readRecordOrCCS
         0     0%   100%          2  7.14%  crypto/tls.(*atLeastReader).Read
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*HostsContainer).handleEvents
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleErrors
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleEvents
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/filtering.(*DNSFilter).filtersInitializer
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*Filtering).periodicallyRefreshFilters
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*RDNS).workerLoop
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*WHOIS).workerLoop
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start.func1
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).startBetaServer.func1
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).tlsServerLoop
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).handleHostsUpdates
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.(*clientsContainer).periodicUpdate
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.Main
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.handleServiceControlAction
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage.func1
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.run
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/home.run.func1
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/querylog.(*queryLog).periodicRotate
         0     0%   100%          1  3.57%  github.com/AdguardTeam/AdGuardHome/internal/stats.(*statsCtx).periodicFlush
         0     0%   100%          1  3.57%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).tcpPacketLoop
         0     0%   100%          1  3.57%  github.com/AdguardTeam/dnsproxy/proxy.(*Proxy).udpPacketLoop
         0     0%   100%          1  3.57%  github.com/AdguardTeam/dnsproxy/proxyutil.UDPRead (inline)
         0     0%   100%          1  3.57%  github.com/AdguardTeam/dnsproxy/proxyutil.udpRead
         0     0%   100%          1  3.57%  github.com/fsnotify/fsnotify.(*Watcher).readEvents
         0     0%   100%          1  3.57%  github.com/fsnotify/fsnotify.(*fdPoller).wait
         0     0%   100%          1  3.57%  github.com/kardianos/service.(*sysv).Run
         0     0%   100%          1  3.57%  github.com/kardianos/service.(*sysv).Run.func1
         0     0%   100%          1  3.57%  github.com/patrickmn/go-cache.(*janitor).Run
         0     0%   100%          2  7.14%  golang.org/x/net/http2.(*ClientConn).readLoop
         0     0%   100%          2  7.14%  golang.org/x/net/http2.(*Framer).ReadFrame
         0     0%   100%          2  7.14%  golang.org/x/net/http2.(*clientConnReadLoop).run
         0     0%   100%          2  7.14%  golang.org/x/net/http2.readFrameHeader
         0     0%   100%          1  3.57%  golang.org/x/sys/unix.EpollWait
         0     0%   100%          4 14.29%  internal/poll.(*FD).Accept
         0     0%   100%          2  7.14%  internal/poll.(*FD).Read
         0     0%   100%          1  3.57%  internal/poll.(*FD).ReadMsg
         0     0%   100%          7 25.00%  internal/poll.(*pollDesc).wait
         0     0%   100%          7 25.00%  internal/poll.(*pollDesc).waitRead (inline)
         0     0%   100%          7 25.00%  internal/poll.runtime_pollWait
         0     0%   100%          2  7.14%  io.ReadAtLeast
         0     0%   100%          2  7.14%  io.ReadFull (inline)
         0     0%   100%          1  3.57%  main.main
         0     0%   100%          4 14.29%  net.(*TCPListener).Accept
         0     0%   100%          4 14.29%  net.(*TCPListener).accept
         0     0%   100%          1  3.57%  net.(*UDPConn).ReadMsgUDP
         0     0%   100%          1  3.57%  net.(*UDPConn).readMsg
         0     0%   100%          2  7.14%  net.(*conn).Read
         0     0%   100%          2  7.14%  net.(*netFD).Read
         0     0%   100%          4 14.29%  net.(*netFD).accept
         0     0%   100%          1  3.57%  net.(*netFD).readMsg
         0     0%   100%          1  3.57%  net/http.(*ServeMux).ServeHTTP
         0     0%   100%          3 10.71%  net/http.(*Server).ListenAndServe
         0     0%   100%          3 10.71%  net/http.(*Server).Serve
         0     0%   100%          1  3.57%  net/http.(*conn).serve
         0     0%   100%          1  3.57%  net/http.HandlerFunc.ServeHTTP
         0     0%   100%          1  3.57%  net/http.ListenAndServe (inline)
         0     0%   100%          1  3.57%  net/http.serverHandler.ServeHTTP
         0     0%   100%          1  3.57%  net/http/pprof.Index
         0     0%   100%          1  3.57%  net/http/pprof.handler.ServeHTTP
         0     0%   100%          1  3.57%  os/signal.loop
         0     0%   100%          1  3.57%  os/signal.signal_recv
         0     0%   100%         11 39.29%  runtime.chanrecv
         0     0%   100%          4 14.29%  runtime.chanrecv1
         0     0%   100%          7 25.00%  runtime.chanrecv2
         0     0%   100%          1  3.57%  runtime.goparkunlock (inline)
         0     0%   100%          1  3.57%  runtime.main
         0     0%   100%          7 25.00%  runtime.netpollblock
         0     0%   100%          2  7.14%  runtime.selectgo
         0     0%   100%          1  3.57%  runtime/pprof.(*Profile).WriteTo
         0     0%   100%          1  3.57%  runtime/pprof.writeGoroutine
         0     0%   100%          1  3.57%  runtime/pprof.writeRuntimeProfile
         0     0%   100%          1  3.57%  sync.(*Cond).Wait
         0     0%   100%          1  3.57%  sync.runtime_notifyListWait
         0     0%   100%          3 10.71%  time.Sleep

I will also submit it via email.

jumpsmm7 commented 2 years ago

@EugeneOne1

Feel free to request any other info. I will provide whatever I am able to gain access to. I will also be available for testing if you have any suggestions.

jumpsmm7 commented 2 years ago

goroutine 57646 [running]:

runtime/pprof.writeGoroutineStacks({0x1c867c0, 0x400029e460}) runtime/pprof/pprof.go:693 +0x8c runtime/pprof.writeGoroutine({0x1c867c0, 0x400029e460}, 0x2) runtime/pprof/pprof.go:682 +0x3c runtime/pprof.(Profile).WriteTo(0x21eaf80, {0x1c867c0, 0x400029e460}, 0x2) runtime/pprof/pprof.go:331 +0x154 net/http/pprof.handler.ServeHTTP({0x4007dff7b1, 0x9}, {0x1c930f0, 0x400029e460}, 0x40002f1400) net/http/pprof/pprof.go:253 +0x520 net/http/pprof.Index({0x1c930f0, 0x400029e460}, 0x40002f1400) net/http/pprof/pprof.go:371 +0x104 net/http.HandlerFunc.ServeHTTP(0x1bb9ba8, {0x1c930f0, 0x400029e460}, 0x40002f1400) net/http/server.go:2047 +0x40 net/http.(ServeMux).ServeHTTP(0x4000290000, {0x1c930f0, 0x400029e460}, 0x40002f1400) net/http/server.go:2425 +0x18c net/http.serverHandler.ServeHTTP({0x400029e000}, {0x1c930f0, 0x400029e460}, 0x40002f1400) net/http/server.go:2879 +0x45c net/http.(conn).serve(0x40080fd540, {0x1c9af90, 0x4000195890}) net/http/server.go:1930 +0xb54 created by net/http.(Server).Serve net/http/server.go:3034 +0x4ac

goroutine 1 [chan receive, 52 minutes]:

github.com/kardianos/service.(sysv).Run.func1() github.com/kardianos/service@v1.2.0/service_sysv_linux.go:147 +0xa8 github.com/kardianos/service.(sysv).Run(0x40001959e0) github.com/kardianos/service@v1.2.0/service_sysv_linux.go:148 +0xc8 github.com/AdguardTeam/AdGuardHome/internal/home.handleServiceControlAction({0x0, {0x7fe4ec4e71, 0x25}, {0x7fe4ec4e9a, 0x14}, {0x0, 0x0, 0x0}, 0x0, {0x7fe4ec4eeb, ...}, ...}, ...) github.com/AdguardTeam/AdGuardHome/internal/home/service.go:214 +0x424 github.com/AdguardTeam/AdGuardHome/internal/home.Main({0x1c86e00, 0x1cb8220}) github.com/AdguardTeam/AdGuardHome/internal/home/home.go:123 +0x164 main.main() github.com/AdguardTeam/AdGuardHome/main.go:17 +0x3c

goroutine 7 [syscall, 52 minutes]:

os/signal.signal_recv() runtime/sigqueue.go:169 +0x108 os/signal.loop() os/signal/signal_unix.go:24 +0x20 created by os/signal.Notify.func1.1 os/signal/signal.go:151 +0x38

goroutine 8 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.Main.func1() github.com/AdguardTeam/AdGuardHome/internal/home/home.go:107 +0x3c created by github.com/AdguardTeam/AdGuardHome/internal/home.Main github.com/AdguardTeam/AdGuardHome/internal/home/home.go:105 +0x124

goroutine 9 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(Web).Start(0x40004cc0e0) github.com/AdguardTeam/AdGuardHome/internal/home/web.go:188 +0x34c github.com/AdguardTeam/AdGuardHome/internal/home.run({0x0, {0x7fe4ec4e71, 0x25}, {0x7fe4ec4e9a, 0x14}, {0x0, 0x0, 0x0}, 0x0, {0x7fe4ec4eeb, ...}, ...}, ...) github.com/AdguardTeam/AdGuardHome/internal/home/home.go:493 +0x650 created by github.com/AdguardTeam/AdGuardHome/internal/home.(program).Start github.com/AdguardTeam/AdGuardHome/internal/home/service.go:45 +0xa4

goroutine 10 [chan receive, 2 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage.func1() github.com/AdguardTeam/AdGuardHome/internal/home/memory.go:35 +0x64 created by github.com/AdguardTeam/AdGuardHome/internal/home.memoryUsage github.com/AdguardTeam/AdGuardHome/internal/home/memory.go:33 +0xcc

goroutine 12 [syscall, 52 minutes]:

syscall.Syscall6(0x16, 0xa, 0x400050fc00, 0x7, 0xffffffffffffffff, 0x0, 0x0) syscall/asm_linux_arm64.s:35 +0x10 golang.org/x/sys/unix.EpollWait(0xa, {0x400050fc00, 0x7, 0x7}, 0xffffffffffffffff) golang.org/x/sys@v0.0.0-20211216021012-1d35b9e2eb4e/unix/zsyscall_linux_arm64.go:56 +0x64 github.com/fsnotify/fsnotify.(fdPoller).wait(0x40003686a0) github.com/fsnotify/fsnotify@v1.5.1/inotify_poller.go:87 +0x60 github.com/fsnotify/fsnotify.(Watcher).readEvents(0x40000d8050) github.com/fsnotify/fsnotify@v1.5.1/inotify.go:193 +0x22c created by github.com/fsnotify/fsnotify.NewWatcher github.com/fsnotify/fsnotify@v1.5.1/inotify.go:60 +0x1c8

goroutine 13 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleErrors(0x40000e0b10) github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:74 +0x130 created by github.com/AdguardTeam/AdGuardHome/internal/aghos.NewOSWritesWatcher github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:63 +0x180

goroutine 14 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghos.(*osWatcher).handleEvents(0x40000e0b10) github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:110 +0xf8 created by github.com/AdguardTeam/AdGuardHome/internal/aghos.NewOSWritesWatcher github.com/AdguardTeam/AdGuardHome/internal/aghos/fswatcher.go:64 +0x1a0

goroutine 15 [select, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/aghnet.(*HostsContainer).handleEvents(0x40000f6b00) github.com/AdguardTeam/AdGuardHome/internal/aghnet/hostscontainer.go:240 +0x144 created by github.com/AdguardTeam/AdGuardHome/internal/aghnet.NewHostsContainer github.com/AdguardTeam/AdGuardHome/internal/aghnet/hostscontainer.go:185 +0x508

goroutine 16 [chan receive, 52 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(clientsContainer).handleHostsUpdates(0x21fc880) github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:142 +0x70 created by github.com/AdguardTeam/AdGuardHome/internal/home.(clientsContainer).Init github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:137 +0x358

goroutine 33 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a69e030, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40000f6118, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Accept(0x40000f6100) internal/poll/fd_unix.go:402 +0x1f4 net.(netFD).accept(0x40000f6100) net/fd_unix.go:173 +0x2c net.(TCPListener).accept(0x40000aa030) net/tcpsock_posix.go:140 +0x2c net.(TCPListener).Accept(0x40000aa030) net/tcpsock.go:262 +0x34 net/http.(Server).Serve(0x400029e000, {0x1c92ee0, 0x40000aa030}) net/http/server.go:3002 +0x364 net/http.(Server).ListenAndServe(0x400029e000) net/http/server.go:2931 +0xb0 net/http.ListenAndServe(...) net/http/server.go:3185 github.com/AdguardTeam/AdGuardHome/internal/home.run.func1(0x4000290000) github.com/AdguardTeam/AdGuardHome/internal/home/home.go:429 +0xcc created by github.com/AdguardTeam/AdGuardHome/internal/home.run github.com/AdguardTeam/AdGuardHome/internal/home/home.go:427 +0xb50

goroutine 36 [chan receive, 12 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*RDNS).workerLoop(0x40000d4640) github.com/AdguardTeam/AdGuardHome/internal/home/rdns.go:122 +0x80 created by github.com/AdguardTeam/AdGuardHome/internal/home.NewRDNS github.com/AdguardTeam/AdGuardHome/internal/home/rdns.go:60 +0x140

goroutine 37 [chan receive, 51 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/home.(*WHOIS).workerLoop(0x400036d260) github.com/AdguardTeam/AdGuardHome/internal/home/whois.go:249 +0x48 created by github.com/AdguardTeam/AdGuardHome/internal/home.initWHOIS github.com/AdguardTeam/AdGuardHome/internal/home/whois.go:57 +0x140

goroutine 66 [sync.Cond.Wait, 52 minutes]:

sync.runtime_notifyListWait(0x40002900d0, 0x0) runtime/sema.go:513 +0x168 sync.(Cond).Wait(0x40002900c0) sync/cond.go:56 +0xe4 github.com/AdguardTeam/AdGuardHome/internal/home.(Web).tlsServerLoop(0x40004cc0e0) github.com/AdguardTeam/AdGuardHome/internal/home/web.go:252 +0x4f0 created by github.com/AdguardTeam/AdGuardHome/internal/home.(*Web).Start github.com/AdguardTeam/AdGuardHome/internal/home/web.go:163 +0x70

goroutine 67 [IO wait, 2 minutes]:

internal/poll.runtime_pollWait(0x7f5a69df48, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x400055e018, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Accept(0x400055e000) internal/poll/fd_unix.go:402 +0x1f4 net.(netFD).accept(0x400055e000) net/fd_unix.go:173 +0x2c net.(TCPListener).accept(0x40000aa018) net/tcpsock_posix.go:140 +0x2c net.(TCPListener).Accept(0x40000aa018) net/tcpsock.go:262 +0x34 net/http.(Server).Serve(0x40004cc000, {0x1c92ee0, 0x40000aa018}) net/http/server.go:3002 +0x364 net/http.(Server).ListenAndServe(0x40004cc000) net/http/server.go:2931 +0xb0 github.com/AdguardTeam/AdGuardHome/internal/home.(Web).Start.func1(0x400048a1e0, 0x40004cc0e0) github.com/AdguardTeam/AdGuardHome/internal/home/web.go:183 +0x64 created by github.com/AdguardTeam/AdGuardHome/internal/home.(Web).Start github.com/AdguardTeam/AdGuardHome/internal/home/web.go:180 +0x318

goroutine 68 [IO wait, 52 minutes]:

internal/poll.runtime_pollWait(0x7f5a69de60, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x400055e098, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Accept(0x400055e080) internal/poll/fd_unix.go:402 +0x1f4 net.(netFD).accept(0x400055e080) net/fd_unix.go:173 +0x2c net.(TCPListener).accept(0x40000aa048) net/tcpsock_posix.go:140 +0x2c net.(TCPListener).Accept(0x40000aa048) net/tcpsock.go:262 +0x34 net/http.(Server).Serve(0x40004cc1c0, {0x1c92ee0, 0x40000aa048}) net/http/server.go:3002 +0x364 net/http.(Server).ListenAndServe(0x40004cc1c0) net/http/server.go:2931 +0xb0 github.com/AdguardTeam/AdGuardHome/internal/home.(Web).startBetaServer.func1(0x40004cc0e0) github.com/AdguardTeam/AdGuardHome/internal/home/web.go:216 +0x64 created by github.com/AdguardTeam/AdGuardHome/internal/home.(Web).startBetaServer github.com/AdguardTeam/AdGuardHome/internal/home/web.go:213 +0x274

goroutine 69 [sleep, 2 minutes]:

time.Sleep(0x8bb2c97000) runtime/time.go:193 +0x140 github.com/AdguardTeam/AdGuardHome/internal/home.(clientsContainer).periodicUpdate(0x21fc880) github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:264 +0x78 created by github.com/AdguardTeam/AdGuardHome/internal/home.(clientsContainer).Start github.com/AdguardTeam/AdGuardHome/internal/home/clients.go:155 +0x70

goroutine 70 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a69dc90, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40000f6198, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).ReadMsg(0x40000f6180, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}, 0x0) internal/poll/fd_unix.go:250 +0x230 net.(netFD).readMsg(0x40000f6180, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}, 0x0) net/fd_posix.go:68 +0x64 net.(UDPConn).readMsg(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}) net/udpsock_posix.go:62 +0x60 net.(UDPConn).ReadMsgUDP(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, {0x4007dff3b0, 0x28, 0x28}) net/udpsock.go:144 +0x60 github.com/AdguardTeam/dnsproxy/proxyutil.udpRead(0x400000e4d8, {0x4005997f51, 0xffff, 0xffff}, 0x28) github.com/AdguardTeam/dnsproxy@v0.41.4/proxyutil/udp_unix.go:68 +0x74 github.com/AdguardTeam/dnsproxy/proxyutil.UDPRead(...) github.com/AdguardTeam/dnsproxy@v0.41.4/proxyutil/udp.go:25 github.com/AdguardTeam/dnsproxy/proxy.(Proxy).udpPacketLoop(0x400010e380, 0x400000e4d8, {0x1c8f070, 0x400000e4b0}) github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server_udp.go:67 +0x23c created by github.com/AdguardTeam/dnsproxy/proxy.(Proxy).startListeners github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server.go:45 +0x190

goroutine 71 [IO wait, 1 minutes]:

internal/poll.runtime_pollWait(0x7f5a69dba8, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40000f6218, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Accept(0x40000f6200) internal/poll/fd_unix.go:402 +0x1f4 net.(netFD).accept(0x40000f6200) net/fd_unix.go:173 +0x2c net.(TCPListener).accept(0x40000aa120) net/tcpsock_posix.go:140 +0x2c net.(TCPListener).Accept(0x40000aa120) net/tcpsock.go:262 +0x34 github.com/AdguardTeam/dnsproxy/proxy.(Proxy).tcpPacketLoop(0x400010e380, {0x1c92ee0, 0x40000aa120}, {0x89f7ca, 0x3}, {0x1c8f070, 0x400000e4b0}) github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server_tcp.go:58 +0x16c created by github.com/AdguardTeam/dnsproxy/proxy.(Proxy).startListeners github.com/AdguardTeam/dnsproxy@v0.41.4/proxy/server.go:49 +0x4d8

goroutine 72 [chan receive, 51 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/filtering.(DNSFilter).filtersInitializer(0x40004d4000) github.com/AdguardTeam/AdGuardHome/internal/filtering/filtering.go:338 +0x40 created by github.com/AdguardTeam/AdGuardHome/internal/filtering.(DNSFilter).Start github.com/AdguardTeam/AdGuardHome/internal/filtering/filtering.go:975 +0x7c

goroutine 73 [sleep, 51 minutes]:

time.Sleep(0x34630b8a000) runtime/time.go:193 +0x140 github.com/AdguardTeam/AdGuardHome/internal/home.(Filtering).periodicallyRefreshFilters(0x21fc928) github.com/AdguardTeam/AdGuardHome/internal/home/filter.go:263 +0x3c created by github.com/AdguardTeam/AdGuardHome/internal/home.(Filtering).Start github.com/AdguardTeam/AdGuardHome/internal/home/filter.go:52 +0x48

goroutine 74 [sleep]:

time.Sleep(0x3b9aca00) runtime/time.go:193 +0x140 github.com/AdguardTeam/AdGuardHome/internal/stats.(statsCtx).periodicFlush(0x40000b81a0) github.com/AdguardTeam/AdGuardHome/internal/stats/unit.go:305 +0x7c created by github.com/AdguardTeam/AdGuardHome/internal/stats.(statsCtx).Start github.com/AdguardTeam/AdGuardHome/internal/stats/unit.go:183 +0x48

goroutine 75 [chan receive, 51 minutes]:

github.com/AdguardTeam/AdGuardHome/internal/querylog.(queryLog).periodicRotate(0x40001b2200) github.com/AdguardTeam/AdGuardHome/internal/querylog/querylogfile.go:150 +0xe0 created by github.com/AdguardTeam/AdGuardHome/internal/querylog.(queryLog).Start github.com/AdguardTeam/AdGuardHome/internal/querylog/qlog.go:107 +0x54

goroutine 57661 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a62f7e0, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40050ff898, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x40050ff880, {0x4005c2c600, 0x10fe, 0x10fe}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x40050ff880, {0x4005c2c600, 0x10fe, 0x10fe}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x400000e998, {0x4005c2c600, 0x10fe, 0x10fe}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x40057da030, {0x4005c2c600, 0x10fe, 0x10fe}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x4007375e78, {0x1c85400, 0x40057da030}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x4007375c00, {0x1c864e0, 0x400000e998}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x4007375c00, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).Read(0x4007375c00, {0x4006669e0a, 0x2, 0x2}) crypto/tls/conn.go:1277 +0x164 io.ReadAtLeast({0x7f5a59fdc8, 0x4007375c00}, {0x4006669e0a, 0x2, 0x2}, 0x2) io/io.go:328 +0xa8 io.ReadFull(...) io/io.go:347 encoding/binary.Read({0x7f5a59fdc8, 0x4007375c00}, {0x1ca6950, 0x2757138}, {0x7722e0, 0x4006669e08}) encoding/binary/binary.go:166 +0x90 github.com/miekg/dns.(Conn).ReadMsgHeader(0x400616fe38, 0x0) github.com/miekg/dns@v1.1.45/client.go:294 +0x160 github.com/miekg/dns.(Conn).ReadMsg(0x400616fe38) github.com/miekg/dns@v1.1.45/client.go:249 +0x2c github.com/AdguardTeam/dnsproxy/upstream.(dnsOverTLS).exchangeConn(0x40004b3710, {0x1ca8350, 0x4007375c00}, 0x4006965e60) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:99 +0x130 github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x4006965e60) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:61 +0x380 github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4006965d40, 0x4005c50300) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54 created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 46 [select, 51 minutes]:

github.com/patrickmn/go-cache.(*janitor).Run(0x40004ca5b0, 0x4007434680) github.com/patrickmn/go-cache@v2.1.0+incompatible/cache.go:1079 +0x8c created by github.com/patrickmn/go-cache.runJanitor github.com/patrickmn/go-cache@v2.1.0+incompatible/cache.go:1099 +0xcc

goroutine 41 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a69dac0, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x400476b318, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x400476b300, {0x4007450000, 0xfc4, 0xfc4}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x400476b300, {0x4007450000, 0xfc4, 0xfc4}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x4007430000, {0x4007450000, 0xfc4, 0xfc4}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x4006fafe48, {0x4007450000, 0xfc4, 0xfc4}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x400010ecf8, {0x1c85400, 0x4006fafe48}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x400010ea80, {0x1c864e0, 0x4007430000}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x400010ea80, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).Read(0x400010ea80, {0x4004810000, 0x1000, 0x1000}) crypto/tls/conn.go:1277 +0x164 bufio.(Reader).Read(0x400480a360, {0x40004cc580, 0x9, 0x9}) bufio/bufio.go:227 +0x214 io.ReadAtLeast({0x1c85240, 0x400480a360}, {0x40004cc580, 0x9, 0x9}, 0x9) io/io.go:328 +0xa8 io.ReadFull(...) io/io.go:347 golang.org/x/net/http2.readFrameHeader({0x40004cc580, 0x9, 0x9}, {0x1c85240, 0x400480a360}) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:237 +0x5c golang.org/x/net/http2.(Framer).ReadFrame(0x40004cc540) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:498 +0x90 golang.org/x/net/http2.(clientConnReadLoop).run(0x40002c3fb0) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:2101 +0x10c golang.org/x/net/http2.(ClientConn).readLoop(0x4000170a80) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:1997 +0x58 created by golang.org/x/net/http2.(Transport).newClientConn golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:725 +0xaa8

goroutine 57598 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a62eed0, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40050ff418, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x40050ff400, {0x4008d51680, 0x205, 0x205}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x40050ff400, {0x4008d51680, 0x205, 0x205}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x400000e970, {0x4008d51680, 0x205, 0x205}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x4006fafab8, {0x4008d51680, 0x205, 0x205}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x4007374cf8, {0x1c85400, 0x4006fafab8}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x4007374a80, {0x1c864e0, 0x400000e970}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x4007374a80, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).readHandshake(0x4007374a80) crypto/tls/conn.go:1009 +0x48 crypto/tls.(Conn).clientHandshake(0x4007374a80, {0x1c9aee8, 0x40058ef900}) crypto/tls/handshake_client.go:179 +0x230 crypto/tls.(Conn).handshakeContext(0x4007374a80, {0x1c9af20, 0x4000034068}) crypto/tls/conn.go:1445 +0x388 crypto/tls.(Conn).HandshakeContext(...) crypto/tls/conn.go:1395 crypto/tls.(Conn).Handshake(...) crypto/tls/conn.go:1379 github.com/AdguardTeam/dnsproxy/upstream.tlsDial(0x40000ec930, {0x89f7ca, 0x3}, 0x4000171080) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:111 +0x268 github.com/AdguardTeam/dnsproxy/upstream.(TLSPool).Create(0x400742f1a0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:75 +0x68 github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x40006245a0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:71 +0x4a8 github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4000624480, 0x40070ebe60) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54 created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 57647 [runnable]:

net/http.(connReader).backgroundRead(0x40052cf800) net/http/server.go:671 created by net/http.(connReader).startBackgroundRead net/http/server.go:668 +0xc4

goroutine 57610 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a1d0bf0, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40052b9e18, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x40052b9e00, {0x400930f680, 0x205, 0x205}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x40052b9e00, {0x400930f680, 0x205, 0x205}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x400030e280, {0x400930f680, 0x205, 0x205}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x4007efa9f0, {0x400930f680, 0x205, 0x205}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x40001f5e78, {0x1c85400, 0x4007efa9f0}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x40001f5c00, {0x1c864e0, 0x400030e280}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x40001f5c00, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).readHandshake(0x40001f5c00) crypto/tls/conn.go:1009 +0x48 crypto/tls.(Conn).clientHandshake(0x40001f5c00, {0x1c9aee8, 0x40054b6bc0}) crypto/tls/handshake_client.go:179 +0x230 crypto/tls.(Conn).handshakeContext(0x40001f5c00, {0x1c9af20, 0x4000034068}) crypto/tls/conn.go:1445 +0x388 crypto/tls.(Conn).HandshakeContext(...) crypto/tls/conn.go:1395 crypto/tls.(Conn).Handshake(...) crypto/tls/conn.go:1379 github.com/AdguardTeam/dnsproxy/upstream.tlsDial(0x40000ec930, {0x89f7ca, 0x3}, 0x40070b4300) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:111 +0x268 github.com/AdguardTeam/dnsproxy/upstream.(TLSPool).Create(0x400742f1a0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_pool.go:75 +0x68 github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x4006964a20) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:71 +0x4a8 github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x4006964900, 0x4006437860) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54 created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

goroutine 16304 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a1d0938, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40001b2518, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x40001b2500, {0x4008b25900, 0x1107, 0x1107}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x40001b2500, {0x4008b25900, 0x1107, 0x1107}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x400000e848, {0x4008b25900, 0x1107, 0x1107}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x4007efabe8, {0x4008b25900, 0x1107, 0x1107}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x40049e73f8, {0x1c85400, 0x4007efabe8}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x40049e7180, {0x1c864e0, 0x400000e848}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x40049e7180, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).Read(0x40049e7180, {0x4005898000, 0x1000, 0x1000}) crypto/tls/conn.go:1277 +0x164 bufio.(Reader).Read(0x400480a1e0, {0x4004d863c0, 0x9, 0x9}) bufio/bufio.go:227 +0x214 io.ReadAtLeast({0x1c85240, 0x400480a1e0}, {0x4004d863c0, 0x9, 0x9}, 0x9) io/io.go:328 +0xa8 io.ReadFull(...) io/io.go:347 golang.org/x/net/http2.readFrameHeader({0x4004d863c0, 0x9, 0x9}, {0x1c85240, 0x400480a1e0}) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:237 +0x5c golang.org/x/net/http2.(Framer).ReadFrame(0x4004d86380) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/frame.go:498 +0x90 golang.org/x/net/http2.(clientConnReadLoop).run(0x4004946fb0) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:2101 +0x10c golang.org/x/net/http2.(ClientConn).readLoop(0x400537c780) golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:1997 +0x58 created by golang.org/x/net/http2.(Transport).newClientConn golang.org/x/net@v0.0.0-20211216030914-fe4d6282115f/http2/transport.go:725 +0xaa8

goroutine 57586 [IO wait]:

internal/poll.runtime_pollWait(0x7f5a62ec18, 0x72) runtime/netpoll.go:234 +0xa4 internal/poll.(pollDesc).wait(0x40050fef18, 0x72, 0x0) internal/poll/fd_poll_runtime.go:84 +0x38 internal/poll.(pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(FD).Read(0x40050fef00, {0x4004ef1300, 0x10fe, 0x10fe}) internal/poll/fd_unix.go:167 +0x1e4 net.(netFD).Read(0x40050fef00, {0x4004ef1300, 0x10fe, 0x10fe}) net/fd_posix.go:56 +0x44 net.(conn).Read(0x4007430618, {0x4004ef1300, 0x10fe, 0x10fe}) net/net.go:183 +0x4c crypto/tls.(atLeastReader).Read(0x4007efa2a0, {0x4004ef1300, 0x10fe, 0x10fe}) crypto/tls/conn.go:777 +0x58 bytes.(Buffer).ReadFrom(0x4008d7b3f8, {0x1c85400, 0x4007efa2a0}) bytes/buffer.go:204 +0xa4 crypto/tls.(Conn).readFromUntil(0x4008d7b180, {0x1c864e0, 0x4007430618}, 0x5) crypto/tls/conn.go:799 +0xe0 crypto/tls.(Conn).readRecordOrCCS(0x4008d7b180, 0x0) crypto/tls/conn.go:606 +0xf4 crypto/tls.(Conn).readRecord(...) crypto/tls/conn.go:574 crypto/tls.(Conn).Read(0x4008d7b180, {0x4006765d02, 0x2, 0x2}) crypto/tls/conn.go:1277 +0x164 io.ReadAtLeast({0x7f5a59fdc8, 0x4008d7b180}, {0x4006765d02, 0x2, 0x2}, 0x2) io/io.go:328 +0xa8 io.ReadFull(...) io/io.go:347 encoding/binary.Read({0x7f5a59fdc8, 0x4008d7b180}, {0x1ca6950, 0x2757138}, {0x7722e0, 0x4006765d00}) encoding/binary/binary.go:166 +0x90 github.com/miekg/dns.(Conn).ReadMsgHeader(0x400052fe38, 0x0) github.com/miekg/dns@v1.1.45/client.go:294 +0x160 github.com/miekg/dns.(Conn).ReadMsg(0x400052fe38) github.com/miekg/dns@v1.1.45/client.go:249 +0x2c github.com/AdguardTeam/dnsproxy/upstream.(dnsOverTLS).exchangeConn(0x40004b3710, {0x1ca8350, 0x4008d7b180}, 0x400560def0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:99 +0x130 github.com/AdguardTeam/dnsproxy/upstream.(*dnsOverTLS).Exchange(0x40004b3710, 0x400560def0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/upstream_dot.go:61 +0x380 github.com/AdguardTeam/dnsproxy/upstream.exchangeAsync({0x1c8f110, 0x40004b3710}, 0x400560ddd0, 0x40070eb9e0) github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:117 +0x54 created by github.com/AdguardTeam/dnsproxy/upstream.ExchangeParallel github.com/AdguardTeam/dnsproxy@v0.41.4/upstream/parallel.go:45 +0x128

jumpsmm7 commented 2 years ago

Bumping this up. This needs visibility.

Definitely.

timkgh commented 2 years ago

Could it also have something to do with mmap'ing on these architectures?

timkgh commented 2 years ago

FWIW I've been running with cache_optimistic: false for a few days now and it seems stable. I'll turn cache_optimistic: true back on to try to narrow it down.

jumpsmm7 commented 2 years ago

TBH, what I think is needed is adding the flag --no-mem-optimization to your init.d arguments.

timkgh commented 2 years ago

TBH, what I think is needed is adding the flag --no-mem-optimization to your init.d arguments.

Can someone please explain what it really does?

jumpsmm7 commented 2 years ago

TBH, what I think is needed is adding the flag --no-mem-optimization to your init.d arguments.

Can someone please explain what it really does?

I am under the impression that AdGuardHome attempts to optimize its memory usage in some way. Maybe it is not doing so properly, or the optimization is incompatible with some platforms. But as you asked, it would definitely be better to understand what this option does.

EugeneOne1 commented 2 years ago

@jumpsmm7, have you faced a similar problem? Actually, there are no extraordinary memory allocations in the provided profiling information. The most memory-hungry processes are the filter updates. How many filter rules have you got?

jumpsmm7 commented 2 years ago

@EugeneOne1 Actually, I was only using 1 list, which was OISD full.

jumpsmm7 commented 2 years ago

@EugeneOne1

@jumpsmm7, have you faced a similar problem? Actually, there are no extraordinary memory allocations in the provided profiling information. The most memory-hungry processes are the filter updates. How many filter rules have you got?

The problem we are actually running into is that over a couple of days of normal use, memory consumption increases and is never released. Basically, each time AdGuardHome updates the filter lists, it holds on to more and more memory. It does not appear to release the old memory before performing the filter list update. Maybe there is some sort of memory leak that needs to be looked into, particularly around filter update processing and when AdGuardHome releases the memory it currently holds. This issue seems to disappear if the cache is disabled with cache_optimistic: false. Something is definitely going on.

timkgh commented 2 years ago

@EugeneOne1 Can you please explain what --no-mem-optimization does and whether it can affect certain architectures in a negative way?

jumpsmm7 commented 2 years ago

@EugeneOne1 Can you please explain what --no-mem-optimization does and whether it can affect certain architectures in a negative way?

In our case, we would want to know whether adding it may help. It appears the OP is not passing this as an argument when launching AdGuardHome.

jumpsmm7 commented 2 years ago

@timkgh I found the info myself https://github.com/AdguardTeam/AdGuardHome/commit/b3a68bb806dd16228398a64e66ef08b891cb3f30 and here https://github.com/AdguardTeam/AdGuardHome/issues/2044

SiNONiMiTY commented 2 years ago

been running AGH with the --no-mem-optimization argument and memory usage is consistent

maybe there is something wrong with the garbage collection

You can check my previous post for the memory usage without the argument

This is the current consumption now, and it is consistent at that range. image

EugeneOne1 commented 2 years ago

@SiNONiMiTY, well, that's interesting. @timkgh, @jumpsmm7, could you also please try to run AGH with the --no-mem-optimization flag?

EugeneOne1 commented 2 years ago

@jumpsmm7, AFAIK, it performs some runtime hacks to improve memory usage, but honestly it seems obsolete.
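One way to see what the collector is actually doing, independent of whatever the flag toggles, is the Go runtime's built-in GC trace: GODEBUG=gctrace=1 makes the runtime print one summary line per collection (heap size before/after, pause times) to stderr. A sketch, with the binary path as an assumption:

# capture a GC trace while reproducing the growth
GODEBUG=gctrace=1 ./AdGuardHome --no-mem-optimization 2> /tmp/agh-gctrace.log

If the live-heap figures in the trace keep growing across filter updates, something is really leaking; if they stay flat while RSS stays high, it is the runtime holding on to freed pages.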

jumpsmm7 commented 2 years ago

@EugeneOne1

I have done some tests with this already. My conclusion at first glance was that it did appear to shift how much memory AdGuardHome was holding onto.

image

However, I have not had a chance to test it over several days yet.

jumpsmm7 commented 2 years ago

@EugeneOne1 Not to get too far off topic, but regarding some of your earlier questions about filter sizes: I ran AdGuardHome for a short period with 3 million domains as rules to test its robustness. Here was the htop outcome.

image

While it was vigorously using half of my 2 GB swap, AdGuardHome was still thriving in this state on our routers.

(Note: I did not keep it this way; this was merely a test.)

That leads me to think it is probably the outdated garbage collection.

jumpsmm7 commented 2 years ago

@EugeneOne1 What does this indicate?

runtime stack:
runtime.throw(0x90cd42, 0x27)
	runtime/panic.go:1117 +0x54 fp=0x7fc4e4ebf0 sp=0x7fc4e4ebc0 pc=0x444a4
runtime.(*mheap).sysAlloc(0x21f0ee0, 0x400000, 0x7fc4e4ed08, 0x35ef0)
	runtime/malloc.go:739 +0x5dc fp=0x7fc4e4eca0 sp=0x7fc4e4ebf0 pc=0x1bb6c
runtime.(*mheap).grow(0x21f0ee0, 0x1, 0x0)
	runtime/mheap.go:1346 +0x80 fp=0x7fc4e4ed10 sp=0x7fc4e4eca0 pc=0x360f0
runtime.(*mheap).allocSpan(0x21f0ee0, 0x1, 0x2c00, 0x0)
	runtime/mheap.go:1173 +0x670 fp=0x7fc4e4eda0 sp=0x7fc4e4ed10 pc=0x35f20
runtime.(*mheap).alloc.func1()
	runtime/mheap.go:910 +0x54 fp=0x7fc4e4edf0 sp=0x7fc4e4eda0 pc=0x72344
runtime.(*mheap).alloc(0x21f0ee0, 0x1, 0x7fc4e4012c, 0x26844)
	runtime/mheap.go:904 +0x64 fp=0x7fc4e4ee40 sp=0x7fc4e4edf0 pc=0x35494
runtime.(*mcentral).grow(0x2203218, 0x0)
	runtime/mcentral.go:232 +0x74 fp=0x7fc4e4ee90 sp=0x7fc4e4ee40 pc=0x26b44
runtime.(*mcentral).cacheSpan(0x2203218, 0x0)
	runtime/mcentral.go:158 +0x344 fp=0x7fc4e4ef00 sp=0x7fc4e4ee90 pc=0x26934
runtime.(*mcache).refill(0x7f90f98108, 0x2c)
	runtime/mcache.go:162 +0xc0 fp=0x7fc4e4ef60 sp=0x7fc4e4ef00 pc=0x25c70
runtime.(*mcache).nextFree(0x7f90f98108, 0x21af82c, 0x7fc4e4efe8, 0x1cf00, 0x7fc4e4f018)
	runtime/malloc.go:882 +0x8c fp=0x7fc4e4efb0 sp=0x7fc4e4ef60 pc=0x1be2c
runtime.mallocgc(0x178, 0x8d3660, 0x7fc4e4f001, 0x7fc4e4f0a8)
	runtime/malloc.go:1069 +0x75c fp=0x7fc4e4f060 sp=0x7fc4e4efb0 pc=0x1c71c
runtime.newobject(0x8d3660, 0x47f68)
	runtime/malloc.go:1177 +0x38 fp=0x7fc4e4f090 sp=0x7fc4e4f060 pc=0x1c938
runtime.malg(0x8000, 0x0)
	runtime/proc.go:3988 +0x2c fp=0x7fc4e4f0e0 sp=0x7fc4e4f090 pc=0x4f5dc
runtime.mpreinit(0x21af8e0)
	runtime/os_linux.go:355 +0x28 fp=0x7fc4e4f100 sp=0x7fc4e4f0e0 pc=0x41638
runtime.mcommoninit(0x21af8e0, 0xffffffffffffffff)
	runtime/proc.go:744 +0xdc fp=0x7fc4e4f150 sp=0x7fc4e4f100 pc=0x4808c
runtime.schedinit()
	runtime/proc.go:637 +0x98 fp=0x7fc4e4f1b0 sp=0x7fc4e4f150 pc=0x47ba8
runtime.rt0_go(0x7fc4e4fe15, 0x7fc4e4fe36, 0x0, 0x7fc4e4fe40, 0x7fc4e4fe61, 0x7fc4e4fe6c, 0x7fc4e4fe97, 0x7fc4e4fe9f, 0x7fc4e4feaa, 0x7fc4e4febd, ...)
	runtime/asm_arm64.s:82 +0xa8 fp=0x7fc4e4f1e0 sp=0x7fc4e4f1b0 pc=0x797e8

EugeneOne1 commented 2 years ago

@jumpsmm7, I suppose you've omitted the text of the panic, but the stack trace indicates an out-of-memory error.

jumpsmm7 commented 2 years ago

@jumpsmm7, I suppose you've omitted the text of the panic, but the stack trace indicates an out-of-memory error.

I figured as much. I was taking the time to review other users' reports of memory issues. This actually appeared directly in the terminal session of a user attempting to start AdGuardHome. Our subsequent thought was that the user was running out of memory and did not have swap in place (which was the case). I wanted to make sure this particular memory issue was not directly related, and it appears it isn't. Thank you for your honest analysis. http://www.snbforums.com/threads/release-asuswrt-merlin-adguardhome-installer-amaghi.76506/post-740350

jumpsmm7 commented 2 years ago

@EugeneOne1

I suspect what is happening is that once the list update process starts, AdGuardHome requires more memory to download the lists, and then uses even more memory to load the new entries. The problem is that the memory does not seem to be freed afterwards; it either remains in use or becomes cached. At the next list update there is no more free memory to squeeze out (unless the user has a swap file), and at some point this lack of a cleanup policy leads to AdGuardHome crashing the system. While I am not saying this happens on every Linux platform, it could be the culprit in a busybox-style Linux environment or on EdgeOS like the OP is running.

Here is 24 hours of memory usage statistics for AdGuardHome.

image

On an asuswrt merlin router.

The only process that runs at the 24-hour interval is the filter update.

SiNONiMiTY commented 2 years ago

been running AGH with the --no-mem-optimization argument and memory usage is consistent

maybe there is something wrong with the garbage collection

You can check my previous post for the memory usage without the argument

This is the current consumption now, and it is consistent at that range. image

Now, after some time running AGH, there are still memory leaks after updating the filters. I'm afraid that even adding the --no-mem-optimization flag doesn't help. In the meantime, I'll create a cron job to restart the adguardhome service once in a while to mitigate the memory leaks.
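For reference, such a workaround might look like the following in a root crontab (crontab -e); the init script path is an assumption and varies per install:

# restart AdGuardHome daily at 04:00 to release leaked memory
0 4 * * * /etc/init.d/AdGuardHome restart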

jumpsmm7 commented 2 years ago

been running AGH with the --no-mem-optimization argument and memory usage is consistent maybe there is something wrong with the garbage collection You can check my previous post for the memory usage without the argument This is the current consumption now, and it is consistent at that range. image

Now, after some time running AGH, there are still memory leaks after updating the filters. I'm afraid that even adding the --no-mem-optimization flag doesn't help. In the meantime, I'll create a cron job to restart the adguardhome service once in a while to mitigate the memory leaks.

Keep in mind that you may need to be clever about your restarts, because I am not quite sure of the exact timing of filter updates. You may be preventing filter updates if the schedule is relative to the last restart.

jumpsmm7 commented 2 years ago

I wonder if it would be more effective if there were a way to define a specific kill signal that tells AdGuardHome to perform a one-time cleanup task, functionally similar to restarting it.

timkgh commented 2 years ago

I wonder if it would be more effective if there were a way to define a specific kill signal that tells AdGuardHome to perform a one-time cleanup task, functionally similar to restarting it.

The correct thing to do is identify the leak and fix it. Anything else is just a bandaid.

jumpsmm7 commented 2 years ago

I wonder if it would be more effective if there were a way to define a specific kill signal that tells AdGuardHome to perform a one-time cleanup task, functionally similar to restarting it.

The correct thing to do is identify the leak and fix it. Anything else is just a bandaid.

Oh, I wholeheartedly agree: the garbage collection needs fixing, i.e. a better cleanup policy needs to be applied on filter updates. The only thing that occurs every 24 hours is the filter update. I notice a rise in memory after the download, and again after the entries are loaded into memory, but none of the memory occupied before the process gets released.

SiNONiMiTY commented 2 years ago

At least now we have narrowed down the cause of the memory leak.

timkgh commented 2 years ago

Maybe one of them. I think there are also issues around the cache; maybe the optimistic caching has issues. And possibly around the query log/search in the UI: it's very slow, memory use seems to shoot up, and I'm not sure how much of that memory is recovered.

SiNONiMiTY commented 2 years ago

Maybe one of them. I think there are also issues around the cache; maybe the optimistic caching has issues. And possibly around the query log/search in the UI: it's very slow, memory use seems to shoot up, and I'm not sure how much of that memory is recovered.

Yes, it's possible. If one area has leaks, we can reasonably assume that the other routines that are supposed to be doing garbage collection are also affected.

ERamseth commented 2 years ago

Hi all. I recovered my GitHub account to follow up here. @jumpsmm7 was sharing my data here (I actually asked him to, so thank you!).

Here's a chart similar to the prior one posted, but with --no-mem-optimization.

AGH_Mem-2022-04-11

I will run another logging session, this time with 1-hour filter updates (previously set to 24 hours).

More info on my data collection is in this thread: https://www.snbforums.com/threads/adguardhome-memory-usage-analysis.78378/

One thing to note re: the possible UI-leaks idea... the UI-related memory jump is (I believe) the small jump inside the red oval, not the larger jump that follows.

ERamseth commented 2 years ago

Here is another log capture of --no-mem-optimization with hourly filter updates.

The big jumps do, in fact, match up with filter updates observed in the system log. But the updates aren't appearing hourly in the log. Maybe it only logs when the pulled file has changed? Also, only the first 2 filter updates have memory usage jumps associated with them. The others do not.

AGH_Mem-2022-04-13

timkgh commented 2 years ago

@EugeneOne1 any progress in investigating the potential leak around optimistic caching?

EugeneOne1 commented 2 years ago

@timkgh, hello again. The optimistic caching logic doesn't actually seem to be leaking. We're investigating your data and searching for the source of the leaks.

jumpsmm7 commented 2 years ago

@EugeneOne1

The only thing I managed to find was an incremental increase in memory usage when updating the list. But this increase does not continue with the next day's update; it only seems to happen on the first update run after AdGuardHome has been restarted. From there onward, memory appears consistent. Overall, the one thing that has benefited my setup is the inclusion of swap space. Linux tends to go crazy when stale processes cannot be deprioritized to swap. I could easily see someone using a large blocklist without a reasonably sized swap space running into a memory issue. One of the things I noticed in the htop images provided by the OP is the lack of any swap space.

You can see in mine that I run a 2 GB swap space under a much tighter memory constraint than the OP.

image
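For anyone wanting to try the same mitigation, creating swap on such a device takes a few commands (a sketch; the 2 GB size and /opt path are assumptions, and the target filesystem must be writable, e.g. a USB drive):

# create, secure, format, and enable a 2 GB swap file
dd if=/dev/zero of=/opt/swapfile bs=1M count=2048
chmod 600 /opt/swapfile
mkswap /opt/swapfile
swapon /opt/swapfile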

timkgh commented 2 years ago

It's possible it's a mips Go GC issue https://github.com/golang/go/issues/39174

jumpsmm7 commented 2 years ago

@timkgh

It's possible it's a mips Go GC issue golang/go#39174

So a temporary fix for you might be to add a pre-command such as "ulimit -s 64" to your init.d launch for AdGuardHome. Something like this may help it run under tighter memory constraints.
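For illustration, in an init.d start function that might look like this (a sketch; the install path is an assumption, and ulimit -s only affects OS threads created after it is set):

# cap the per-thread stack size, then launch the daemon
ulimit -s 64
exec /opt/AdGuardHome/AdGuardHome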

timkgh commented 2 years ago

@timkgh

It's possible it's a mips Go GC issue golang/go#39174

So a temporary fix for you might be to add a pre-command such as "ulimit -s 64" to your init.d launch for AdGuardHome. Something like this may help it run under tighter memory constraints.

Why ulimit the stack?