martin31821 opened this issue 2 years ago
Thanks for the feedback. We'll have someone take a look and get back to you when we have more information.
Are you mounting the file system with a TLS mount or just a plain mount? Also, can you explain how the mount is performed in your project (either code or a description is fine)?
:wave: @Cappuccinuo I am working with @martin31821. Our failing Terraform code looked like this (last tested 15.12.2021; since then we have consolidated everything into a single EFS to work around this bug, sketched after the configs below):
codebuild.tf:
resource "aws_codebuild_project" "this" {
...
file_system_locations {
identifier = "SSTATE_DIR"
location = "${var.sstate_fs.dns_name}:/"
mount_point = "/mnt/yocto_cache/sstate_cache"
mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
type = "EFS"
}
file_system_locations {
identifier = "SSTATE_DIR_RELEASE"
location = "${var.sstate_release_fs.dns_name}:/"
mount_point = "/mnt/yocto_cache/sstate_release_cache"
mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
type = "EFS"
}
file_system_locations {
identifier = "DL_DIR"
location = "${var.dldir_fs.dns_name}:/"
mount_point = "/mnt/yocto_cache/dl_dir"
mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
type = "EFS"
}
}
efs.tf:
resource "aws_efs_file_system" "sstate_fs" {
encrypted = true
performance_mode = "generalPurpose"
throughput_mode = "bursting"
}
resource "aws_efs_file_system" "sstate_release_fs" {
encrypted = true
performance_mode = "generalPurpose"
throughput_mode = "bursting"
}
resource "aws_efs_file_system" "dldir_fs" {
encrypted = true
performance_mode = "generalPurpose"
throughput_mode = "bursting"
}
resource "aws_efs_mount_target" "sstate_fs_a" {
file_system_id = aws_efs_file_system.sstate_fs.id
subnet_id = module.vpc.vpc_private_subnets[0]
security_groups = [aws_security_group.security_group.id]
}
resource "aws_efs_mount_target" "sstate_release_fs_a" {
file_system_id = aws_efs_file_system.sstate_release_fs.id
subnet_id = module.vpc.vpc_private_subnets[0]
security_groups = [aws_security_group.security_group.id]
}
resource "aws_efs_mount_target" "dldir_fs_a" {
file_system_id = aws_efs_file_system.dldir_fs.id
subnet_id = module.vpc.vpc_private_subnets[0]
security_groups = [aws_security_group.security_group.id]
}
security_group.tf:
resource "aws_security_group" "security_group" {
name = "${var.prefix}-codebuild-sg"
description = "${var.prefix}-codebuild-sg"
vpc_id = module.vpc.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
"0.0.0.0/0"]
}
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
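As mentioned at the top, we have since worked around the bug by consolidating into a single EFS. A rough sketch of what that consolidation could look like, assuming the three caches simply become subdirectories of one file system (the names `yocto_cache_fs` and `YOCTO_CACHE` are illustrative, not our actual identifiers):

```hcl
resource "aws_efs_file_system" "yocto_cache_fs" {
  encrypted        = true
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
}

resource "aws_efs_mount_target" "yocto_cache_fs_a" {
  file_system_id  = aws_efs_file_system.yocto_cache_fs.id
  subnet_id       = module.vpc.vpc_private_subnets[0]
  security_groups = [aws_security_group.security_group.id]
}

# A single file_system_locations block replaces the three shown above;
# sstate_cache, sstate_release_cache and dl_dir become plain subdirectories
# under /mnt/yocto_cache on this one EFS.
resource "aws_codebuild_project" "this" {
  # ... other project settings unchanged ...

  file_system_locations {
    identifier    = "YOCTO_CACHE"
    location      = "${aws_efs_file_system.yocto_cache_fs.dns_name}:/"
    mount_point   = "/mnt/yocto_cache"
    mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    type          = "EFS"
  }
}
```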
Please don't hesitate to ask for further details :slightly_smiling_face:
P.S.: the AWS Terraform provider version was 3.69.0.
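For reference, pinning that provider version in the configuration looks like this (a minimal sketch, assuming the standard hashicorp/aws provider source):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.69.0"
    }
  }
}
```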
We have been successfully using CodeBuild with multiple EFS mounts in the past. For roughly the last three days, mounting more than one EFS appears to silently fail.
In the AWS console the EFS file systems all report as available, and CodeBuild also lists them as mounted. However, running `ls` on the mount points inside the CodeBuild instance itself reveals that only the first of these mounts shows up.
Removing the successfully mounted EFS from the CodeBuild configuration and rerunning the project will then mount the next EFS (in alphabetical order, I assume); however, any additional EFS mounts still fail.
We tried recreating the CodeBuild project and (one of) the offending volumes, without any luck so far.
I think it might be related to #112, but I'm not sure, since we hit the issue consistently.