Closed by waddles 1 year ago
Which macOS version and Tart VM image are you using? Tart's directory mounting feature requires both host and guest to run at least macOS 13 (Ventura).
Hello! I think I'm having a similar issue. I wasn't sure it was an executor problem either, but it only occurs when using TART_EXECUTOR_HOST_DIR=true
I'm using macOS Ventura for both host and guest
My error happens at the start of a Unity project build (when Unity tries to create its package cache):
An error occurred while resolving packages:
One or more packages could not be added to the local file system:
com.unity.addressables: ENOENT: no such file or directory, chmod '/Volumes/My Shared Files/hostdir/project/Library/PackageCache/.tmp-1382-3txiATFvKXMw/copy/Tests/Editor/CustomTestSchema.cs.meta'
com.unity.cinemachine: ENOENT: no such file or directory, chmod '/Volumes/My Shared Files/hostdir/project/Library/PackageCache/.tmp-1382-iR0pNEwDge07/copy/Samples~/Cinemachine Example Scenes/Scenes/CameraMagnets/CameraMagnetTargetController.cs'
com.unity.collab-proxy: ENOENT: no such file or directory, open '/Volumes/My Shared Files/hostdir/project/Library/PackageCache/.tmp-1382-tmjjpXy0eqjP/copy/Editor/PlasticSCM/Views/DownloadPlasticExeWindow.cs'
As per #21, Ventura 13.3. Strangely, I have not had this problem at all with cachedir mounted from the host, but then cache only reads a single file at the start of the job and writes it again at the end; it's not doing thousands of IOPS like hostdir does.
@waddles, @rigellin73 could you provide a .gitlab-ci.yml or a link to a repository that could help reproduce this issue?
Unfortunately I don't have a public repo to test this with. I think the difference of using a temporary dir mounted from the host is fairly negligible, though. I'm trying to create a script that will exhibit the behaviour, but while testing I timed this test, which creates 10000 random CSV files of about 2 MB each, on a recent M2 with a 2 TB SSD.
gitlab@mac-mini-gitlab-runner-4 ~ % daemonize $(brew --prefix)/bin/tart run vm --no-graphics --dir=hostdir:~/builds
admin@admins-Virtual-Machine ~ % pwd
/Users/admin
admin@admins-Virtual-Machine ~ % time python3 /Volumes/My\ Shared\ Files/hostdir/create_files.py
python3 /Volumes/My\ Shared\ Files/hostdir/create_files.py 334.25s user 5.06s system 99% cpu 5:39.57 total
admin@admins-Virtual-Machine ~ % cd /Volumes/My\ Shared\ Files/hostdir
admin@admins-Virtual-Machine hostdir % time python3 create_files.py
python3 create_files.py 330.34s user 6.46s system 97% cpu 5:44.79 total
@waddles we've got a similar report, and it seems that making sure both guest and host run macOS Ventura 13.3+ helped. Which version of macOS do you have on your host and on the guests?
Confirming, we saw this with a Ventura 13.3 host and guests. I was not able to create a test scenario that could reliably trigger it, though.
There seems to be a bug in virtiofs support in macOS. I reported it to Apple as FB12594177. I thought https://github.com/cirruslabs/tart/pull/555 fixed the issue, but unfortunately it did not.
Closing and tracking the issue in https://github.com/cirruslabs/tart/issues/567 for visibility
This may not be the right place to report this issue, as I think it may be in the Virtualization framework itself, but turning the flag off serves as a workaround.
I have a large repo (about 4 GB, ~30000 files), and when I have the TART_EXECUTOR_HOST_DIR=true flag set, I get errors which look to be due to saturation of the VirtIO disk interface. The errors happen when cache.zip (from the cache dir mounted from the host into a temp dir mounted from the host)
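For reference, the flag is set as a CI variable for the GitLab Tart executor; a hypothetical minimal `.gitlab-ci.yml` (the job name, tag, and script are assumptions, not from this thread) might look like:

```yaml
# Hypothetical minimal job for a runner using the Tart executor.
build:
  tags:
    - tart
  variables:
    # Mounts the build directory from the host into the guest via virtiofs.
    # Setting this to "false" (or omitting it) is the workaround mentioned above.
    TART_EXECUTOR_HOST_DIR: "true"
  script:
    - echo "build steps here"
```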