It starts at 86% with only the project checkout (no cache, no stack or GHC installed):
```
Filesystem      Size  Used Avail Use% Mounted on
udev            3.4G     0  3.4G   0% /dev
tmpfs           695M  8.9M  686M   2% /run
/dev/sda1        84G   72G   12G  86% /
tmpfs           3.4G  8.0K  3.4G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.4G     0  3.4G   0% /sys/fs/cgroup
/dev/loop0       40M   40M     0 100% /snap/hub/43
/dev/loop1       94M   94M     0 100% /snap/core/8935
/dev/sda15      105M  3.6M  101M   4% /boot/efi
/dev/sdb1        14G   35M   13G   1% /mnt
```
That is weird; we shouldn't take up 60GB just from the checkout.
No, no, that space is already taken up by installed software. In fact, all GHC versions are already installed in the Linux image! The cabal jobs take advantage of that, but it was easier to let stack manage its own GHCs in the default locations. The straightforward solution could be to make stack use the already installed GHCs.
I've opened an issue in the tracker for the Azure images: https://github.com/actions/virtual-environments/issues/709. In the meantime, I am changing the job to use the already installed GHCs to work around the problem temporarily.
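For reference, a minimal sketch of what that change could look like in an Azure Pipelines step; the step layout is an illustrative assumption, but `--system-ghc` and `--no-install-ghc` are real stack flags that make stack pick up a GHC already on the PATH instead of downloading its own copy:

```yaml
# Illustrative sketch only: reuse the GHC pre-installed on the image instead
# of letting stack download one into ~/.stack. The step layout is assumed.
steps:
  - bash: |
      # --system-ghc tells stack to use the compiler found on the PATH;
      # --no-install-ghc makes the build fail loudly rather than silently
      # downloading a GHC if no compatible one is found.
      stack build --system-ghc --no-install-ghc
    displayName: Build with the pre-installed GHC
```

The same behaviour can also be made persistent with `system-ghc: true` in the project's stack.yaml.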
Well, Linux stack jobs have run out of disk space since c8ed3e5:
It makes the download of ghc-8.8.1, which is needed for the wrapper-tests, fail: