gpdm / tinycore-targetdisplaymode

Build toolkit to create a bootable USB thumb drive of less than 40 MiB, based on Tiny Core Linux. It includes a custom extension to enable "Target Display Mode" on vintage iMacs without the need to run a full-blown macOS or Linux distro.

build.sh not found #3

Closed cansurmeli closed 1 year ago

cansurmeli commented 2 years ago

Hello.

I’m trying to execute this project for a 2009 21.5” iMac. Although I know this doesn’t really matter: the iMac itself has no macOS on it, only a Fedora 36 installation.

During the build process, I get a lot of OK messages, but also a lot of FATAL ERROR: Data queue size is too large messages. The build nevertheless finishes.

After building with Docker, I can’t interactively get into the container. Upon trying so, I get the following error message:

/bin/sh: /tmp/build/build.sh: not found

How can I get around this? Is something out of date?

Thanks.

gpdm commented 2 years ago

@cansurmeli

Can you please post the entire build log output?

Thanks, -GP

cansurmeli commented 2 years ago

Sorry, I should’ve done that in the first place. Here you go:

out.log

gpdm commented 2 years ago

@cansurmeli

That error seems to be originating from unsquashfs, but I cannot really reproduce this.

Here's an excerpt from your log:

Connecting to repo.tinycorelinux.net (89.22.99.37:80)
saving to 'readline.tcz'
readline.tcz         100% |********************************|  128k  0:00:00 ETA
'readline.tcz' saved
readline.tcz: OK
FATAL ERROR:Data queue size is too large

Here's the same section as it should look:

Connecting to repo.tinycorelinux.net (89.22.99.37:80)
saving to 'readline.tcz'
compiletc.tcz.dep OK
readline.tcz         100% |********************************|  128k  0:00:00 ETA
'readline.tcz' saved
readline.tcz: OK
Parallel unsquashfs: Using 2 processors
6 inodes (68 blocks) to write

[=================================================================|] 68/68 100%

created 2 files
created 4 directories
created 4 symlinks
created 0 devices
created 0 fifos

To me, with the given information, it looks like it's failing for some reason to unpack the downloaded package files during the build stage.

I can only speculate ...

Bottom line summary: Whatever container you ended up with during the initial build stage is not valid.

With the downloads failing to extract in the first place, the internal dependencies (like GCC et al.) are also missing, so the build process that build.sh is supposed to run would never be able to complete anyway.

So, the error you mentioned, /bin/sh: /tmp/build/build.sh: not found, is only a symptom.

The first priority is to find out why you get the "Data queue size is too large" error, and resolve that one first.

Cheers, -GP

gpdm commented 2 years ago

note to @gpdm

Worthwhile to investigate Docker build error detection capability for https://github.com/gpdm/tinycore-targetdisplaymode/blob/2e81670bbdf287d4924a49cd21ebb6383526cd75/Dockerfile#L5

If tce-load fails to extract, it should not only report FATAL, but also exit with a non-zero (fatal) code, so that the container build aborts immediately.

Despite the error, it just continues, resulting in a non-functional output container being generated.

The problem with the error being treated as non-fatal is surely the chaining of two commands in the RUN section:

RUN tce-load -wic bash.tcz libisoburn.tcz git.tcz gcc.tcz compiletc.tcz ; \
    rm -rf /tmp/tce/optional/*

Any potential non-zero exit code from tce-load is cleared, since the rm command is chained with ; and overrides the previous exit code.

Using && instead of ; should make errors in tce-load fatal and abort the build.
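A minimal shell sketch (generic commands, not the actual Dockerfile) of why the chaining matters:

```shell
# With ';' the exit status of the whole list is that of the LAST
# command, so an earlier failure is silently discarded.
sh -c 'false ; true'  ; echo "chained with ';':  $?"   # prints 0
# With '&&' the second command only runs if the first succeeded,
# so the failure propagates as the overall exit status.
sh -c 'false && true' ; echo "chained with '&&': $?"   # prints 1
```

In a Dockerfile RUN instruction, that overall exit status is exactly what decides whether the build step aborts.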

gpdm commented 2 years ago

note to @gpdm

Not really an issue of && vs. ;, but rather the internal behaviour of tce-load, as it seems.

Even if I do something like tce-load -wic bash.tcz1, I don't get a fatal error code; it still returns '0'. That's why Docker continues with the stage build, since there was no fatal error.

The question is: if tce-load fails to unpack the file, does it also return '0', or non-zero?

Maybe `set -o pipefail` could help, but that's hard to say as long as I can't reproduce and verify it myself.
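To illustrate what pipefail changes, here is a generic sketch (plain false/true commands standing in, since I can't reproduce the tce-load case):

```shell
# Without pipefail, a pipeline's exit status is that of its last
# command, hiding failures earlier in the pipe.
sh -c 'false | true' ; echo "default:  $?"                      # prints 0
# With pipefail (bash here), any failing stage makes the whole
# pipeline fail; whether tce-load's shell supports this option
# would still need verifying.
bash -c 'set -o pipefail; false | true' ; echo "pipefail: $?"   # prints 1
```

Note this only helps if the failure happens in a non-final stage of a pipeline; it doesn't change tce-load's own exit code.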

gpdm commented 2 years ago

@cansurmeli

Maybe you can change your Dockerfile locally to this version and give it a try:

FROM tatsushid/tinycore

ENV TC_ISO_URL="${TC_ISO_URL:-http://www.tinycorelinux.net/13.x/x86/release/TinyCore-current.iso}"

RUN set -o pipefail && \
    tce-load -wic bash.tcz libisoburn.tcz git.tcz gcc.tcz compiletc.tcz ; echo $?
RUN rm -rf /tmp/tce/optional/*

RUN tce-status -i | grep -Ee '^(bash|libisoburn|git|gcc|compiletc)$'

ADD files /tmp/build

USER root:root
ENTRYPOINT /tmp/build/build.sh

Then build like:

sudo docker build . -t tcbuild --no-cache

Please provide the logs.

gpdm commented 2 years ago

note to @gpdm

Depending on the outcome of -o pipefail and the behaviour of tce-load, it's probably an option to add this as a sanity check to the Dockerfile:

RUN tce-status -i | grep -Ee '^(bash|libisoburn|git|gcc|compiletc)$'

When checking for something that should be installed, but isn't because the previous stage failed, Docker would abort like so:

Step 5/8 : RUN tce-status -i | grep bash1
 ---> Running in 0ec9984f3efa
The command '/bin/sh -c tce-status -i | grep bash' returned a non-zero code: 1

It would be a potential workaround if tce-load really is that limited concerning its return codes.

Still, this would only be valid as a sanity check.
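As a hypothetical illustration of why the grep check aborts the build (installed.txt stands in for the output of tce-status -i):

```shell
# Simulate an installed-extensions list that is missing gcc.
printf 'bash\ngit\n' > installed.txt
# A per-package check: grep exits 1 when the name is absent, which
# would make a Docker RUN step fail and abort the build.
grep -q '^gcc$'  installed.txt ; echo "gcc:  $?"   # prints 1 -> abort
grep -q '^bash$' installed.txt ; echo "bash: $?"   # prints 0 -> continue
```

Note that a single grep with an alternation like '^(bash|git|gcc)$' succeeds as soon as any one name matches; checking each package separately, as sketched here, is the stricter variant.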

gpdm commented 1 year ago

Added sanity checking for the Dockerfile in https://github.com/gpdm/tinycore-targetdisplaymode/commit/a9b5db21535df6eede131295d4ddde2ffc49e2e2.

It should now abort early if prerequisite errors are detected in the initial phase.

cansurmeli commented 1 year ago

I haven't tested it all the way, but I did run a docker build, and the necessary output is being produced now.

andersk commented 10 months ago

For those finding this from Google: FATAL ERROR:Data queue size is too large seems to be one effect of a bug in unsquashfs before 4.5.1 with a very large value of ulimit -n (maximum number of open files). If you can’t upgrade squashfs-tools, you can work around this by setting ulimit -n to a smaller value.
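A hedged sketch of that workaround (the limit value 1024 is an assumption, not taken from this thread):

```shell
# Lower the open-files soft limit in a subshell so only the build
# (and its children) see the reduced value.
(
  ulimit -n 1024   # reduce the limit for this subshell only
  ulimit -n        # confirm the new value: prints 1024
  # the build command would go here, e.g.:
  # sudo docker build . -t tcbuild --no-cache
)
```

If the failing unsquashfs runs inside a Docker build container, the host shell's limit may not apply there; docker build also accepts --ulimit nofile=1024:1024 to set the limit for build containers.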