eliasnaur opened this issue 6 years ago
This issue is about Android, but in theory the same could be done for the iOS simulator.
CC @bradfitz @andybons
This post, https://paulemtz.blogspot.dk/2013/05/android-testing-in-headless-emulator.html, seems to indicate that the standard emulator supports headless mode.
Sure, prepare a Docker image that lets us do the traditional make.bash, then snapshot, then clone to several mirrors, then run tests one-at-a-time.
It'll require some tweaking of the build system probably, but I'll do those parts if you do the bulk of the work in preparing the Dockerfile.
seems to indicate that the standard emulator supports headless mode.
Headless helps but isn't required. Worst case, we could run a headless X server so it thinks it has a window.
The following Dockerfile
# Copyright 2014 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
FROM debian:sid
MAINTAINER golang-dev <golang-dev@googlegroups.com>
ENV DEBIAN_FRONTEND noninteractive
# gdb: optionally used by runtime tests for gdb
# strace: optionally used by some net/http tests
# gcc libc6-dev: for building Go's bootstrap 'dist' prog
# libc6-dev-i386 gcc-multilib: for 32-bit builds
# procps lsof psmisc: misc basic tools
# libgles2-mesa-dev libopenal-dev fonts-noto: required by x/mobile repo
# unzip openjdk-8-jdk python lib32z1: required by the Android SDK
RUN apt-get update && apt-get install -y \
--no-install-recommends \
ca-certificates \
curl \
gdb \
strace \
gcc \
libc6-dev \
libc6-dev-i386 \
gcc-multilib \
procps \
lsof \
psmisc \
libgles2-mesa-dev \
libopenal-dev \
fonts-noto \
fonts-noto-mono \
openssh-server \
unzip \
openjdk-8-jdk \
python \
lib32z1 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /go1.4-amd64 \
&& ( \
curl --silent https://storage.googleapis.com/golang/go1.4.linux-amd64.tar.gz | tar -C /go1.4-amd64 -zxv \
) \
&& mv /go1.4-amd64/go /go1.4 \
&& rm -rf /go1.4-amd64 \
&& rm -rf /go1.4/pkg/linux_amd64_race \
/go1.4/api \
/go1.4/blog \
/go1.4/doc \
/go1.4/misc \
/go1.4/test \
&& find /go1.4 -type d -name testdata | xargs rm -rf
RUN curl -o /usr/local/bin/stage0 https://storage.googleapis.com/go-builder-data/buildlet-stage0.linux-amd64-kube \
&& chmod +x /usr/local/bin/stage0
RUN mkdir -p /android/sdk \
&& curl -o /android/sdk/sdk-tools-linux.zip https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip \
&& unzip -d /android/sdk /android/sdk/sdk-tools-linux.zip \
&& rm -rf /android/sdk/sdk-tools-linux.zip
RUN yes | /android/sdk/tools/bin/sdkmanager --licenses \
&& /android/sdk/tools/bin/sdkmanager ndk-bundle "system-images;android-26;google_apis;x86_64" \
&& /android/sdk/tools/bin/sdkmanager "build-tools;21.1.2" "platforms;android-26" \
&& /android/sdk/tools/bin/sdkmanager --update
# Gradle for gomobile
RUN curl -L -o /android/gradle-5.2.1-bin.zip https://services.gradle.org/distributions/gradle-5.2.1-bin.zip \
&& unzip -d /android /android/gradle-5.2.1-bin.zip \
&& rm /android/gradle-5.2.1-bin.zip
# Cleanup
RUN rm -rf /android/sdk/sdk-tools-linux.zip \
&& apt-get -y remove unzip \
&& apt-get -y autoremove
COPY run-emulator.sh /android/run-emulator.sh
RUN chmod +x /android/run-emulator.sh
# Include a checkout of Go
COPY go-tip /go-tip
CMD ["/usr/local/bin/stage0"]
And supporting run-emulator.sh (as well as a Go checkout in go-tip):
#!/bin/sh
set -e
# create emulator
echo no | /android/sdk/tools/bin/avdmanager create avd --force --name android-avd --package "system-images;android-26;google_apis;x86_64"
# run the emulator.
/android/sdk/emulator/emulator @android-avd -no-window &
# wait for it.
/android/sdk/platform-tools/adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 1; done;'
Can be started with:
sudo docker run --privileged -it <image-name> /bin/bash
Where I'm able to complete androidtest.bash with android/386 and android/amd64:
/android/run-emulator.sh
cd /go-tip/src
apt-get update && apt-get install git-core
export PATH=$PATH:/android/sdk/platform-tools; GOROOT_BOOTSTRAP=/go1.4 CC_FOR_TARGET=/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android26-clang GOARCH=386 ./androidtest.bash
export PATH=$PATH:/android/sdk/platform-tools; GOROOT_BOOTSTRAP=/go1.4 CC_FOR_TARGET=/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android26-clang GOARCH=amd64 ./androidtest.bash
It can even complete the full test suite of gomobile:
GOROOT_BOOTSTRAP=/go1.4 ./make.bash
export PATH=$PATH:/go-tip/bin:$HOME/go/bin:/android/sdk/platform-tools:/android/gradle-5.2.1/bin
export GOPATH=$HOME/go
export ANDROID_HOME=/android/sdk
go get -u golang.org/x/mobile/...
go test -short golang.org/x/mobile/...
Let me know how to proceed.
Ping @bradfitz
Sorry, I realistically won't be able to get to start on this until April 1st. I'm preparing for an upcoming Gophercon talk this weekend, traveling next week for the talk, and then on vacation the last week of March.
I might be able to work on this next week from Moscow, but no promises.
@andybons has been wanting to learn how this whole system works more, so this might be a good "starter project" for adding a builder.
BTW, I forgot to ask earlier: how long does this all take? I imagine sharding is necessary for being a trybot, but is your goal for it to be a trybot right away, or just to have it run on GCE by us in Kubernetes? If it's easier, we could skip sharding for the first version.
But I'm also fine with sharding for the first version, but curious about timing. I imagine make.bash is still fast like always (1 minute or less), but then how long are the various tests?
Sorry, I realistically won't be able to get to start on this until April 1st. I'm preparing for an upcoming Gophercon talk this weekend, traveling next week for the talk, and then on vacation the last week of March.
No problem at all. My ping was just as much to see if the Docker image was enough.
BTW, I forgot to ask earlier: how long does this all take? I imagine sharding is necessary for being a trybot, but is your goal for it to be a trybot right away, or just to have it run on GCE by us in Kubernetes? If it's easier, we could skip sharding for the first version.
My wishlist ordered by importance:
(And in the future: 4. Do 1-3 for a darwin builder running a 386/amd64 iOS emulator.)
In other words, "just" a regular non-sharded builder on GCE would make me very grateful.
But I'm also fine with sharding for the first version, but curious about timing. I imagine make.bash is still fast like always (1 minute or less), but then how long are the various tests?
On my i7-6700K@4GHz with 32GB memory with a running Android 8 x86_64 emulator, ./androidtest.bash takes ~2min and ./all.bash on the same machine takes ~3min.
BTW, I'm not even sure trybots can be stable enough before #23795 is fixed. You wouldn't happen to know someone to ping for the corresponding adb bug, https://issuetracker.google.com/issues/73230216 ?
I just stumbled on https://github.com/golang/go/issues/9579. Will the Android emulator run on GCE?
@bcmills, this is another builder request.
Gentle ping. If I can do anything to help this, please let me know.
I've been focused on vgo reviews and bugs (since that's fairly urgent for keeping 1.11 on schedule). I plan to get back to this after the 1.11 release, but if anybody else would like to take it before then you're welcome to it.
Change https://golang.org/cl/162959 mentions this issue: dashboard, buildlet: add a disabled builder with nested virt, for testing
Change https://golang.org/cl/163057 mentions this issue: buildlet: change image name for COS-with-vmx buildlet
Change https://golang.org/cl/163301 mentions this issue: env/linux-x86-vmx: add new Debian host that's like Container-Optimized OS + vmx
@eliasnaur, this is ready from my side if you want to give me a Docker image that uses the Android SDK + emulator (which can now use KVM on our buildlets) to run tests.
Is the one from https://github.com/golang/go/issues/23824#issuecomment-365800874 good enough?
Oh right. Forgot about that. I'll try.
@eliasnaur,
Step 8/14 : RUN yes | /android/sdk/tools/bin/sdkmanager --licenses && /android/sdk/tools/bin/sdkmanager ndk-bundle "system-images;android-26;google_apis;x86_64" && /android/sdk/tools/bin/sdkmanager "build-tools;21.1.2" "platforms;android-19" && /android/sdk/tools/bin/sdkmanager --update
---> Running in 1288386ab097
Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlSchema
at com.android.repository.api.SchemaModule$SchemaModuleVersion.<init>(SchemaModule.java:156)
at com.android.repository.api.SchemaModule.<init>(SchemaModule.java:75)
at com.android.sdklib.repository.AndroidSdkHandler.<clinit>(AndroidSdkHandler.java:81)
at com.android.sdklib.tool.SdkManagerCli.main(SdkManagerCli.java:117)
at com.android.sdklib.tool.SdkManagerCli.main(SdkManagerCli.java:93)
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlSchema
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 5 more
The command '/bin/sh -c yes | /android/sdk/tools/bin/sdkmanager --licenses && /android/sdk/tools/bin/sdkmanager ndk-bundle "system-images;android-26;google_apis;x86_64" && /android/sdk/tools/bin/sdkmanager "build-tools;21.1.2" "platforms;android-19" && /android/sdk/tools/bin/sdkmanager --update' returned a non-zero code: 1
I updated the Dockerfile and run-emulator.sh script, please try again. Changes:
I've created CL 163377 to fix a gomobile test failure with jdk8, and CL 163378 so the gomobile init step can go away.
Edit: The two CLs are in, all gomobile tests pass and the gomobile init step is no longer necessary.
Thank you for working on this.
I'm running it locally as a test.
I notice that when androidtest.bash is still doing the make.bash phase, the Android qemu process is successfully using KVM (great) but is also using 200% CPU and I haven't even started sending it tests yet because make.bash is still running. What is the emulator doing early on? Animating some live wallpaper or something? Anything we can do to reduce unnecessary CPU usage to make the build & tests faster?
I'm not sure. There is an emulator flag, -no-boot-anim, that "disables animation for faster boot", but I assumed -no-window already effectively did that.
Performance isn't bad, btw... 9.5 minutes on my dev machine.
We'll need to figure out where/when to run run-emulator.
I imagine we'll want to start the emulator before make.bash runs, so it can finish booting while make.bash finishes, and then we'll run the:
/android/sdk/platform-tools/adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 1; done;'
... before we start running tests.
Does that sound right?
Should all that logic go into androidtest.bash? But that doesn't help us get to sharded builders, if the goal is still to be trybots (5 minute total goal). So perhaps the logic of starting an emulator & waiting for an emulator needs to be in the coordinator, because each sharded buildlet will need its own emulator.
But to start, it'd probably be easier just to move that into androidtest.bash for phase 1 of this bug (moving to GCE w/ KVM). Could you send a change to make androidtest.bash do that, conditional on some new environment variable?
That sounds about right.
Why not always wait-for-device/shell in androidtest.bash?
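The wait step discussed above could be sketched in Go, e.g. as part of go_android_exec. This is only an illustration of the shell loop from run-emulator.sh; booted and waitForDevice are hypothetical names, not the actual helper.

```go
// Sketch of the "wait for sys.boot_completed" step in Go.
// Assumes adb is on PATH when waitForDevice is actually used.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// booted reports whether the output of `getprop sys.boot_completed`
// indicates a finished boot (the property reads "1" once boot is done).
func booted(out string) bool {
	return strings.TrimSpace(out) == "1"
}

// waitForDevice mirrors the shell loop in run-emulator.sh: wait for the
// device, then poll sys.boot_completed until it is set or we time out.
func waitForDevice(timeout time.Duration) error {
	if err := exec.Command("adb", "wait-for-device").Run(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("adb", "shell", "getprop", "sys.boot_completed").Output()
		if err == nil && booted(string(out)) {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("device did not boot within %v", timeout)
}

func main() {
	// Demonstrate the pure helper; waitForDevice needs a real adb on PATH.
	fmt.Println(booted("1\n"), booted("\n"))
}
```

Calling waitForDevice before the first test (rather than in androidtest.bash) would let the emulator boot in parallel with make.bash, as suggested above.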
I created a Dockerfile (and run-emulator.sh) variant: https://gist.github.com/eliasnaur/facb204db915a2adba60b5cd9445dd1b
Change https://golang.org/cl/163457 mentions this issue: androidtest.bash: wait for device to be ready before using it
FWIW, here's a Dockerfile that uses a pre-warmed emulator instance: https://gist.github.com/eliasnaur/b9bfc7c619738e78009d5f875e11142d. To create android-avd.avd and android-avd.ini:
avd.ini.encoding=UTF-8
path=/root/.android/avd/android-avd.avd
path.rel=avd/android-avd.avd
target=android-26
I see it takes 1m11s to run run-emulator.sh. I imagine that'll parallelize nicely with make.bash.
I just ran the amd64 build again with the new Dockerfile and it's still 9.5 minutes (or just under: 9:26.22elapsed).
But preparing the emulator image in the Dockerfile is good. No need to do it for each build.
To move to a sharded world we'll need to do make.bash first and then run tests over N machines. That means androidtest.bash will likely be copied/gutted/moved elsewhere. Perhaps into go_android_exec.
Of the 9.5 minutes, 3 minutes are the make.bash part, 30 seconds are adb sync. So 6 minutes for tests. If we had 3 shards, that's ~2 minutes each, so 3.5+2 minutes = 5.5 minutes and they could be a trybot.
root@57e6dfecd4fb:/go-tip/src# export PATH=$PATH:/android/sdk/platform-tools; GOROOT_BOOTSTRAP=/go1.4 CC_FOR_TARGET=/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android26-clang GOARCH=amd64 GOOS=android CGO_ENABLED=1 time ./make.bash --no-banner
Building Go cmd/dist using /go1.4.
Building Go toolchain1 using /go1.4.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
Building packages and commands for host, linux/amd64.
Building packages and commands for target, android/amd64.
394.83user 41.92system 3:01.99elapsed 239%CPU (0avgtext+0avgdata 1104028maxresident)k
7752inputs+1902944outputs (29major+8290732minor)pagefaults 0swaps
I suspect we could move all the "Push GOROOT to target device." part of androidtest.bash into ../misc/android/go_android_exec.go. Then we should be able to just do make.bash followed by go tool dist test, and the sync-to-device part will happen lazily on the first test.
Also, we should probably do the:
GOOS=$GOHOSTOS GOARCH=$GOHOSTARCH go build \
-o ../bin/go_android_${GOARCH}_exec \
../misc/android/go_android_exec.go
... part of androidtest.bash in the Dockerfile, so that binary just exists in our $PATH, rather than needing to build it in androidtest.bash.
Thoughts?
FWIW, here's a Dockerfile that uses a pre-warmed emulator instance: ... To create android-avd.avd and android-avd-ini: ..
- run run-emulator.sh on the host machine.
Why can't that go in the Dockerfile? I'm not going to do things by hand on the host. And I'm not installing Java or Android SDK on the host.
FWIW, here's a Dockerfile that uses a pre-warmed emulator instance: ... To create android-avd.avd and android-avd-ini: ..
- run run-emulator.sh on the host machine.
Why can't that go in the Dockerfile? I'm not going to do things by hand on the host. And I'm not installing Java or Android SDK on the host.
Because docker build doesn't run with kvm enabled, the warm-up run of the emulator fails.
I solved the problem with docker commit: I used a Dockerfile that only created the emulator, then ran

sudo docker run --privileged -it <image> /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 100

to run the emulator through boot before shutting down again. Then docker commit <container> to create a new image with a warm-boot-enabled emulator.
I suspect we could move all the "Push GOROOT to target device." part of androidtest.bash into ../misc/android/go_android_exec.go. Then we should be able to just do make.bash followed by go tool dist test, and the sync-to-device part will happen lazily on the first test.
I don't see an easy way to do that without replicating the part of adb sync that avoids copying data that already exists on a device. In other words, go_android_exec.go doesn't have a quick way to know whether to skip the sync.
Also, we should probably do the:
GOOS=$GOHOSTOS GOARCH=$GOHOSTARCH go build \
-o ../bin/go_android_${GOARCH}_exec \
../misc/android/go_android_exec.go
... part of androidtest.bash in the Dockerfile, so that binary just exists in our $PATH, rather than needing to build it in androidtest.bash.
I hope that won't be necessary. We'll lose testing of go_android_exec.go itself if we bake it into the docker image.
Given the above, how about the other way around: androidtest.bash without the tests? "androidtest.bash --no-test" perhaps, and then run that instead of make.bash in the builder.
Because docker build doesn't run with kvm enabled, the warm-up run of the emulator fails.
Ah, thanks. And we can't do a super slow no-kvm warm-up in Docker and then use that image later with kvm, right? I recall @danderson talking about this (in https://twitter.com/dave_universetf/status/1098158628857511936) but that was about suspend/resume, not between two different boots.
I don't see an easy way to do that without replicating the part of adb sync that avoids copying data that already exists on a device.
When I said "happen lazily", I meant we'd record outside of the emulator whether we'd run adb sync yet. Imagine some file $TMP/did-adb-sync that we touch once we've run it. And then cmd/dist test will check for that file and not adb sync if it exists.
I hope that won't be necessary. We'll lose testing of go_android_exec.go itself if we bake it into the docker image.
In that case, if that's a concern/goal, we should build it as part of make.bash. We can teach dist to compile it for GOOS=android builds during cmdbootstrap. That work?
Given the above, how about the other way around: androidtest.bash without the tests? "androidtest.bash --no-test" perhaps, and then run that instead of make.bash in the builder.
I want to make Android less unique. If we can get it down to just make.bash & cmd/dist test, then it works like everything else and sharding comes basically for free.
I'm even thinking we can have the buildlet binary do the emulator start-up too, so then nothing needs to know about that. It'll just be like a phone is attached.
Because docker build doesn't run with kvm enabled, the warm-up run of the emulator fails.

Ah, thanks. And we can't do a super slow no-kvm warm-up in Docker and then use that image later with kvm, right? I recall @danderson talking about this (in https://twitter.com/dave_universetf/status/1098158628857511936) but that was about suspend/resume, not between two different boots.
The emulator disables warm boot whenever the configuration or some flags change. Besides, the error messages from when I tried everything in the Dockerfile indicated that the emulator requires KVM, for amd64 at least.
I don't see an easy way to do that without replicating the part of adb sync that avoids copying data that already exists on a device.

When I said "happen lazily", I meant we'd record outside of the emulator whether we'd run adb sync yet. Imagine some file $TMP/did-adb-sync that we touch once we've run it. And then cmd/dist test will check for that file and not adb sync if it exists.
Ok, but how will it know that the device files are current and not left over from a previous run? But see below.
I hope that won't be necessary. We'll lose testing of go_android_exec.go itself if we bake it into the docker image.
In that case, if that's a concern/goal, we should build it as part of make.bash. We can teach dist to compile it for GOOS=android builds during cmdbootstrap. That work?
Yes.
Given the above, how about the other way around: androidtest.bash without the tests? "androidtest.bash --no-test" perhaps, and then run that instead of make.bash in the builder.

I want to make Android less unique. If we can get it down to just make.bash & cmd/dist test, then it works like everything else and sharding comes basically for free.
I see. But then why not fold all of androidtest.bash into make.bash or cmd/dist? Perhaps with a flag (--prepare?) that is set by the builders and all.bash to avoid surprising users that only want a build.
I'm even thinking we can have the buildlet binary do the emulator start-up too, so then nothing needs to know about that. It'll just be like a phone is attached.
Great.
Ok, but how will it know that the device files are current and not left over from a previous run?
Our GCE-based builder VMs are fresh per build and get nuked after each build. So there is no previous run.
But even ignoring that, we could make make.bash delete that file so it'd work even on reused hosts.
I see. But then why not fold all of androidtest.bash into make.bash or cmd/dist?
We don't want everything in one place. We want the split make-vs-test phases (like everything else) which permits sharded tests. Then we could make Android be a trybot. The x/build/cmd/coordinator runs make.bash, does a snapshot, mirrors a snapshot to N VMs, and then runs tests across those N. If we support the make.bash phase as normal and the cmd/dist test phase like normal, then we get sharded tests for free.
to run the emulator through boot before shutting down again. Then docker commit <container> to create a new image with a warm-boot-enabled emulator.

Great! I had forgotten about docker commit. I think that'll work great. Did you measure the start-up time of the emulator doing that?
Because docker build doesn't run with kvm enabled, the warm-up run of the emulator fails.

Ah, thanks. And we can't do a super slow no-kvm warm-up in Docker and then use that image later with kvm, right? I recall @danderson talking about this (in https://twitter.com/dave_universetf/status/1098158628857511936) but that was about suspend/resume, not between two different boots.
Between two different boots should work, that's just a change in the virtual hardware, which the disk won't notice. Your warmup will be 4-10x slower in pure user emulation, depending on what you're doing.
Between two different boots should work, that's just a change in the virtual hardware, which the disk won't notice. Your warmup will be 4-10x slower in pure user emulation, depending on what you're doing.
It sounds like the Android emulator (wrapper around qemu) requires KVM, even if qemu doesn't.
to run the emulator through boot before shutting down again. Then docker commit <container> to create a new image with a warm-boot-enabled emulator.

Great! I had forgotten about docker commit. I think that'll work great. Did you measure the start-up time of the emulator doing that?
With the pre-warmed emulator (12.2 GB docker image) created with docker commit:
$ time sudo docker run --rm --privileged -it dfccdbafbc02 /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 200
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
emulator: Cold boot: different AVD configuration
Your emulator is out of date, please update by launching Android Studio:
- Start Android Studio
- Select menu "Tools > Android > SDK Manager"
- Click "SDK Tools" tab
- Check "Android Emulator" checkbox
- Click "OK"
emulator: INFO: boot completed
emulator: Saving state on exit with session uptime 7079 ms
real 0m52,134s
user 0m0,021s
sys 0m0,034s
Without warm boot (9.6 GB docker image):
$ time sudo docker run --rm --privileged -it 3d1aeceaeb3a /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 200
Couldn't statvfs() path: No such file or directory
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
Your emulator is out of date, please update by launching Android Studio:
- Start Android Studio
- Select menu "Tools > Android > SDK Manager"
- Click "SDK Tools" tab
- Check "Android Emulator" checkbox
- Click "OK"
emulator: INFO: boot completed
emulator: Saving state on exit with session uptime 17271 ms
real 1m1,979s
user 0m0,019s
sys 0m0,023s
So a bit faster with warm boot.
Between two different boots should work, that's just a change in the virtual hardware, which the disk won't notice. Your warmup will be 4-10x slower in pure user emulation, depending on what you're doing.
It sounds like the Android emulator (wrapper around qemu) requires KVM, even if qemu doesn't.
This is the error I get if I include the warm up command in the Dockerfile:
emulator: ERROR: x86_64 emulation currently requires hardware acceleration!
Please ensure KVM is properly installed and usable.
CPU acceleration status: /dev/kvm is not found: VT disabled in BIOS or KVM kernel module not loaded
Change https://golang.org/cl/163618 mentions this issue: cmd/dist: build exec wrappers during bootstrap
@bradfitz, CLs 163619 and 163618 should implement your suggestions. The only magic thing left for Android (and iOS) is that the exec wrapper needs to be in PATH. Perhaps the go tool should look for the exec wrapper in GOROOT/bin and/or GOBIN if PATH doesn't contain one?
Change https://golang.org/cl/163619 mentions this issue: misc/android,cmd/dist: move GOROOT copying to the exec wrapper
Change https://golang.org/cl/163621 mentions this issue: misc: wait for device readyness in the exec wrapper
Change https://golang.org/cl/163625 mentions this issue: misc/android: serialize adb commands on android emulators
I did the docker commit thing, but I'm seeing it take 20 seconds longer to use the warm image rather than the base pre-commit image:
Here I build golang/android, then run the emulator in it with docker run --privileged, commit it as golang/android-warm, and then time re-starting the emulator in both the original image and the warm image. The warm image seems to take 1m40s and the original is 1m20s-ish.
Am I missing a step?
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ docker build --tag=golang/android .
...
Successfully built f06b8bc59234
Successfully tagged golang/android:latest
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ docker run --privileged -it golang/android /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 100
Couldn't statvfs() path: No such file or directory
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
Your emulator is out of date, please update by launching Android Studio:
- Start Android Studio
- Select menu "Tools > Android > SDK Manager"
- Click "SDK Tools" tab
- Check "Android Emulator" checkbox
- Click "OK"
emulator: INFO: boot completed
emulator: Saving state on exit with session uptime 75431 ms
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a03a697a8ff golang/android "/android/sdk/emulat…" About a minute ago Exited (0) 30 seconds ago unruffled_neumann
....
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ docker commit 2a03a697a8ff golang/android-warm
sha256:314bb914743d041d6366415b2078264c53ed1718d5ae880e795059e600ac8da5
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ time docker run --privileged -it golang/android-warm /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 100
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
emulator: ERROR: fail to boot after 100 seconds, quit
emulator: Saving state on exit with session uptime 97567 ms
real 1m41.658s
user 0m0.084s
sys 0m0.016s
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ time docker run --privileged -it golang/android /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 100
Couldn't statvfs() path: No such file or directory
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
Your emulator is out of date, please update by launching Android Studio:
- Start Android Studio
- Select menu "Tools > Android > SDK Manager"
- Click "SDK Tools" tab
- Check "Android Emulator" checkbox
- Click "OK"
emulator: INFO: boot completed
emulator: Saving state on exit with session uptime 74576 ms
real 1m19.830s
user 0m0.092s
sys 0m0.020s
bradfitz@gdev:~/src/golang.org/x/build/env/android-amd64$ time docker run --privileged -it golang/android-warm /android/sdk/emulator/emulator @android-avd -no-window -no-boot-anim -quit-after-boot 100
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
emulator: ERROR: fail to boot after 100 seconds, quit
emulator: Saving state on exit with session uptime 98278 ms
real 1m42.308s
user 0m0.088s
sys 0m0.024s
That's essentially the same thing I did, so I don't think you missed a step. I did get some strange timings the first few times I ran the warm image; it took longer for the emulator to start, but it booted faster. Perhaps docker is doing something slow with the image data?
Also, your warm image timing is actually worse than it looks. The -quit-after-boot flag takes a number of seconds to wait for boot (100 in your case), and the emulator failed to boot in that time: emulator: ERROR: fail to boot after 100 seconds, quit
Perhaps we should start with the simpler cold booted emulator? Even if your timings eventually match mine, they might not apply to GCE.
My dev instance is also on GCE, on the same VM types.
So the warm boot isn't even working it seems. Can you investigate that while I finish up the buildlet & Dockerfile & dashboard/builders.go configuration? (I'll be sending that to you soon here.)
I'm on my own machine, not on GCE :) So I think that's a pretty good indication that warm booting is not a net win. The extra time probably goes to fetching the extra ~3GB of Docker image across the network.
Why should warm booting be slower, though?
Presumably it's faster on a developer's desktop, so why isn't it faster for us? In this case I have all the images pulled locally, so network isn't a concern. And we'll have these Docker images pre-baked into the image whose block device is lazily read over the super fast network anyway, so image size isn't really a concern here.
Change https://golang.org/cl/163738 mentions this issue: env/android-amd64-emu: add new Android emulator image
Currently, the android/amd64 and android/386 builders run on an Android emulator with an amd64 system image. This is taxing the heavily loaded mobile builder Mac Mini. It is also inefficient: the emulator builds compete with Android device builds and any concurrent iOS builds. Because builds are slow, android is not in the trybot set, leaving me to often pester CL authors with "this CL broke android".
Is it possible to add (and run) the Android emulator inside the existing docker images used for regular builders and then run android sharded tests as for any other GOOS? If so, android/amd64 and android/386 builds would complete much faster, and take some pressure off the mobile device builder.
If the emulator builds are stable enough, android/amd64 (and perhaps android/386) could even join the trybot set, avoiding most (if not all) android-specific followup work after a CL is submitted.