I'm using gulp-awspublish-router the following way:
var gulp = require('gulp');
var rename = require('gulp-rename');
var awspublish = require('gulp-awspublish');
var awspublishRouter = require('gulp-awspublish-router');
var argv = require('yargs').argv;

var publisher = awspublish.create({ /* bucket config */ });

gulp.src('./public/**/*')
    .pipe(rename(function (path) {
        // prefix every file with the deploy directory passed via --s3dir
        path.dirname = argv.s3dir + '/' + path.dirname;
    }))
    .pipe(awspublishRouter({
        cache: {
            cacheTime: 315360000 // 10 years (31536000 s = 1 year)
        },
        routes: {
            '\\.js$': {
                headers: { 'Content-Type': 'application/javascript; charset=UTF-8' },
                gzip: true
            },
            '\\.css$': {
                headers: { 'Content-Type': 'text/css; charset=UTF-8' },
                gzip: true
            },
            // pass-through for anything that wasn't matched by the routes above,
            // to be uploaded with default options
            '^.+$': '$&'
        }
    }))
    .pipe(publisher.publish())
    .pipe(publisher.sync())
    .pipe(awspublish.reporter());
and I noticed that it automatically deletes existing files in the S3 bucket: if befdf5efc8a4fa1485d0c5e01f2c7d10ebf981be/build.js was the newly published file, the older e25553ae888bc735187939c64cfe4e47ed5e289e/build.js would get deleted. Is there a simple way to prevent it from doing so?
This would probably be fixed by removing the publisher.sync() line: sync() deletes every object in the bucket that doesn't exist among the files you're publishing, which is why the previous build's hashed directory disappears.
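A minimal sketch of the same pipeline with the sync step dropped (it reuses the publisher, rename, awspublishRouter, and argv.s3dir setup from the question, so those are assumed to be defined as above):

// Same pipeline as in the question, minus publisher.sync(), so publish()
// only uploads new or changed files and never deletes anything from the bucket.
gulp.src('./public/**/*')
    .pipe(rename(function (path) {
        path.dirname = argv.s3dir + '/' + path.dirname;
    }))
    .pipe(awspublishRouter({ /* same cache/routes config as above */ }))
    .pipe(publisher.publish())
    // .pipe(publisher.sync()) // removed: this is what deleted the old keys
    .pipe(awspublish.reporter());

If you do still want stale files cleaned up, but only within the current deploy directory, gulp-awspublish's sync also accepts a prefix argument (publisher.sync(argv.s3dir)), which limits deletion to keys under that prefix and leaves older hashed directories alone.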