When executing our codeDeployDeployApplication-test.ts test suite, the unit tests would pass, but then immediately throw the following error:
```
[Error: ENOENT: no such file or directory, open 'C:\Users\rbbarad\Desktop\azdo\public-repo\aws-toolkit-azure-devops\tests\taskTests\codeDeployDeployApplication\temp\test.v1705631182630.zip'] {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'open',
  path: 'C:\\Users\\rbbarad\\Desktop\\azdo\\public-repo\\aws-toolkit-azure-devops\\tests\\taskTests\\codeDeployDeployApplication\\temp\\test.v1705631182630.zip'
}
```
This error caused flakiness in our Linux unit tests: it would surface intermittently and push the Linux CodeBuild job past its retry limit.
Problem
This error stems from our use of fs.createReadStream to read the file contents and pass them as the request Body to S3 Upload. The S3 Upload request accepts several types for Body (Buffer, string, etc.), but a readable stream is the most efficient option because the upload can begin before the file has been fully read into memory, unlike the other types. fs.createReadStream returns a stream for immediate use and continues reading the file into it afterward, while the S3 Upload is in progress. The read stream therefore finishes reading the file as the upload runs, and the real S3 Upload is equipped to handle this.
However, our mocked S3 Upload returned success instantly, and cleanDeploymentArchive then deleted the file immediately afterward, while fs.ReadStream was still reading it. The deletion itself succeeded, but the stream threw the ENOENT error when it tried to continue reading the now-deleted file.
Solution
This change fixes the issue by adding a 1 second sleep to the S3 Upload mock, which allows sufficient time for the read stream to finish reading our test's resource files before cleanup deletes them.
Important: The nature of this error is specific to our unit tests, not to actual task flows, because the fs.ReadStream must finish while the S3 Upload is running/concluding. Any issues with the S3 Upload would be thrown/caught, and errors from the read stream would be thrown/caught as well (both would occur before any file deletion is attempted). In real-world situations, the file would never be deleted while the stream is still reading it.
Testing
Ran tests locally.
Replicated the linux codebuild job in my personal aws account and pointed those to my forked repo to confirm that this runs successfully.
Checklist
[x] I have read the README document
[x] I have read the CONTRIBUTING document
[x] My code follows the code style of this project
[x] I have added tests to cover my changes
[ ] A short description of the change has been added to the changelog using the script npm run newChange
License
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.