cmahnke opened this issue 1 year ago
Just to give you an idea:
$ find content -type f |wc -l
32454
and
$ hugo version
hugo v0.108.0+extended darwin/arm64 BuildDate=unknown
Why do you think this problem is related to the file watcher?
Actually I don't have a valid clue. But as you might remember, I was quite busy developing Hugo sites in early 2021, and back then such problems showed up quite early (in terms of tree growth).
But maybe it's just a `hugo serve` problem...
Can you give me a hint how to narrow it down?
The error itself seems to be a timeout:
goroutine 52 [IO wait, 5 minutes]
There isn't a build problem, since (output cropped):
$ hugo
[...]
| DE
-------------------+--------
Pages | 782
Paginator pages | 2
Non-page files | 31976
Static files | 881
Processed images | 365
Aliases | 355
Sitemaps | 1
Cleaned | 0
Total in 11680 ms
What do you experience if you do:
hugo server --watch=false
Or maybe also:
hugo server --poll
Thanks, the first one (`hugo server --watch=false`) already did it (well, at least `hugo` starts):
| DE
-------------------+--------
Pages | 782
Paginator pages | 2
Non-page files | 31976
Static files | 881
Processed images | 365
Aliases | 355
Sitemaps | 1
Cleaned | 0
Built in 7396 ms
Environment: "development"
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
But now the watcher is gone. Since building is quite fast, though, this will do for further development.
Do you need any more information, in case you want to fix this instead of making the watcher configurable?
What does `hugo server --poll` do?
> There isn't a build problem, since (output cropped):

There are more differences between `hugo` and `hugo server` than the watching; one major thing is that we (by default) write everything to memory in server mode. There are flags to disable that.
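For reference, the two flags mentioned in this thread can be combined; `--watch=false` disables the watcher, and `--renderToDisk` (used later in this thread) disables the in-memory rendering. A sketch of the invocation, assuming this Hugo version:

```shell
# Serve without the file watcher, and render the site to disk
# instead of keeping everything in memory (server default).
$ hugo server --watch=false --renderToDisk
```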
> Do you need any more information, in case you want to fix this instead of making the watcher configurable?

We have millions of options already. We're not adding more without understanding that it's needed.
> What does `hugo server --poll` do?
$ hugo server --poll 1000ms
goroutine 95410 [select, 2 minutes]:
runtime.gopark(0x1402af9ff18?, 0x3?, 0x78?, 0xfe?, 0x1402af9feba?)
runtime/proc.go:363 +0xe4 fp=0x1402af9fd40 sp=0x1402af9fd20 pc=0x1024541e4
runtime.selectgo(0x1402af9ff18, 0x1402af9feb4, 0x1402af9ffa8?, 0x0, 0x140b6a79440?, 0x1)
runtime/select.go:328 +0x688 fp=0x1402af9fe60 sp=0x1402af9fd40 pc=0x102464af8
github.com/gohugoio/hugo/livereload.(*hub).run(0x10565c280)
github.com/gohugoio/hugo/livereload/hub.go:39 +0x94 fp=0x1402af9ffb0 sp=0x1402af9fe60 pc=0x10393dd64
github.com/gohugoio/hugo/livereload.Initialize.func1()
github.com/gohugoio/hugo/livereload/livereload.go:108 +0x28 fp=0x1402af9ffd0 sp=0x1402af9ffb0 pc=0x10393e2e8
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x1402af9ffd0 sp=0x1402af9ffd0 pc=0x102485654
created by github.com/gohugoio/hugo/livereload.Initialize
github.com/gohugoio/hugo/livereload/livereload.go:108 +0x40
goroutine 95411 [select, 2 minutes, locked to thread]:
runtime.gopark(0x1402adb3fa0?, 0x2?, 0x88?, 0x3e?, 0x1402adb3f9c?)
runtime/proc.go:363 +0xe4 fp=0x1402adb3e30 sp=0x1402adb3e10 pc=0x1024541e4
runtime.selectgo(0x1402adb3fa0, 0x1402adb3f98, 0x0?, 0x0, 0x0?, 0x1)
runtime/select.go:328 +0x688 fp=0x1402adb3f50 sp=0x1402adb3e30 pc=0x102464af8
runtime.ensureSigM.func1()
runtime/signal_unix.go:991 +0x190 fp=0x1402adb3fd0 sp=0x1402adb3f50 pc=0x102468e10
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x1402adb3fd0 sp=0x1402adb3fd0 pc=0x102485654
created by runtime.ensureSigM
runtime/signal_unix.go:974 +0xf4
goroutine 95412 [syscall, 2 minutes]:
runtime.sigNoteSleep(0x0)
runtime/os_darwin.go:123 +0x20 fp=0x1402b1d5790 sp=0x1402b1d5750 pc=0x10244e8a0
os/signal.signal_recv()
runtime/sigqueue.go:149 +0x2c fp=0x1402b1d57b0 sp=0x1402b1d5790 pc=0x10248131c
os/signal.loop()
os/signal/signal_unix.go:23 +0x1c fp=0x1402b1d57d0 sp=0x1402b1d57b0 pc=0x103933dcc
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x1402b1d57d0 sp=0x1402b1d57d0 pc=0x102485654
created by os/signal.Notify.func1.1
os/signal/signal.go:151 +0x2c
goroutine 95413 [IO wait, 2 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:363 +0xe4 fp=0x140d451bb60 sp=0x140d451bb40 pc=0x1024541e4
runtime.netpollblock(0x140d451bbf8?, 0x24bfcc4?, 0x1?)
runtime/netpoll.go:526 +0x158 fp=0x140d451bba0 sp=0x140d451bb60 pc=0x10244d6c8
internal/poll.runtime_pollWait(0x12d69aea8, 0x72)
runtime/netpoll.go:305 +0xa0 fp=0x140d451bbd0 sp=0x140d451bba0 pc=0x10247f150
internal/poll.(*pollDesc).wait(0x14000119c80?, 0x104525360?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x140d451bc00 sp=0x140d451bbd0 pc=0x1024bb6c8
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x14000119c80)
internal/poll/fd_unix.go:614 +0x1d0 fp=0x140d451bca0 sp=0x140d451bc00 pc=0x1024bfd30
net.(*netFD).accept(0x14000119c80)
net/fd_unix.go:172 +0x28 fp=0x140d451bd60 sp=0x140d451bca0 pc=0x102530a78
net.(*TCPListener).accept(0x14000343a70)
net/tcpsock_posix.go:142 +0x28 fp=0x140d451bda0 sp=0x140d451bd60 pc=0x102546c78
net.(*TCPListener).Accept(0x14000343a70)
net/tcpsock.go:288 +0x2c fp=0x140d451bde0 sp=0x140d451bda0 pc=0x102545e8c
net/http.(*onceCloseListener).Accept(0x10468e6c0?)
<autogenerated>:1 +0x30 fp=0x140d451be00 sp=0x140d451bde0 pc=0x10282a040
net/http.(*Server).Serve(0x1400e136f00, {0x10468d030, 0x14000343a70})
net/http/server.go:3070 +0x30c fp=0x140d451bf30 sp=0x140d451be00 pc=0x10280696c
github.com/gohugoio/hugo/commands.(*commandeer).serve.func3()
github.com/gohugoio/hugo/commands/server.go:617 +0x30 fp=0x140d451bf60 sp=0x140d451bf30 pc=0x1039631e0
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75 +0x5c fp=0x140d451bfd0 sp=0x140d451bf60 pc=0x1030d44ac
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x140d451bfd0 sp=0x140d451bfd0 pc=0x102485654
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/sync@v0.1.0/errgroup/errgroup.go:72 +0xa4
r0 0x0
r1 0x0
r2 0x0
r3 0x0
r4 0x104241970
r5 0x16da6ec70
r6 0xa
r7 0x0
r8 0x3a7e26f536541563
r9 0x3a7e26f45bf2e563
r10 0x2
r11 0xfffffffd
r12 0x10000000000
r13 0x0
r14 0x0
r15 0x0
r16 0x148
r17 0x200b309a8
r18 0x0
r19 0x6
r20 0x16da6f000
r21 0x1807
r22 0x16da6f0e0
r23 0x0
r24 0xffffffffffffffff
r25 0x103e1dfa0
r26 0x16da6ec80
r27 0x28
r28 0x14000002b60
r29 0x16da6ec20
lr 0x1a061dcec
sp 0x16da6ec00
pc 0x1a05e7224
fault 0x1a05e7224
It fails with quite a long message, too long to attach. And it took a lot longer than the five minutes above.
$ hugo server --poll 1000ms &> poll-error.txt
$ du -chs poll-error.txt
96M poll-error.txt
96M total
I can also try longer intervals, like 2000ms or 5000ms. More doesn't really make sense to me, since that's getting into the time span of a regular rebuild.
> > There isn't a build problem, since (output cropped):
>
> There are more differences between `hugo` and `hugo server` than the watching; one major thing is that we (by default) write everything to memory in server mode. There are flags to disable that.
Yes, I know. That used to work about 1.5 years ago, but with smaller trees:
hugo server --disableFastRender --renderToDisk
But now it doesn't.
> > Do you need any more information, in case you want to fix this instead of making the watcher configurable?
>
> We have millions of options already. We're not adding more without understanding that it's needed.
Certainly it's better to fix the root cause of this. But I'm not sure how well a fix would scale with, let's say, 100,000 files (a made-up but not unreasonable number).
$ hugo server --poll 5000ms
Doesn't work either, but even after about 20 minutes there is no error message... It gave me a hint, though: runtime and log length relate to the interval.
I've attached a compressed log of `hugo server --poll 100ms &> poll-error.txt`.
@bep: Could this be solved by allowing arbitrary exclude patterns in #12222?
@cmahnke I have skimmed through this issue again, and I'm not sure what the problem described here is, so I cannot answer that question.
Currently the configuration for ignoring files doesn't distinguish between files excluded from deployment and files excluded from watching. This can lead to problems with huge trees.
My use case is sites containing large image pyramids generated by `vips`. These are subdirectories containing fragments of images, used to provide zoomable high-resolution images to the user. The result is several dozen image fragments for each provided image. This blows up the `content` tree quite a lot, but since these files are generated externally, they don't need to be watched. Adding these folders to `ignoreFiles` in the module mount section also excludes them from access by the templates, which leads to the problem that the templates can't get required metadata, like the image size. And since `hugo serve` actually breaks down on huge trees, including them has the side effect that `hugo` takes ages to start up until it fails:
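To illustrate the conflict: the existing exclusion options remove the files from the build entirely rather than only from the watcher. A minimal sketch of such a configuration (the `tiles` path is a made-up example for a pyramid directory):

```toml
# hugo.toml — excluding generated image pyramids.
# NOTE: both options below also hide the files from templates,
# which is exactly the problem described above: there is no
# watch-only exclude.

# Top-level regexp-based exclusion:
ignoreFiles = ['content/.*/tiles/.*']

# Or per-mount glob-based exclusion:
[[module.mounts]]
  source = 'content'
  target = 'content'
  excludeFiles = ['**/tiles/**']
```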