-
Just noticed a typo in one of the if conditions of the setup_s3_url function in aws.s3/R/s3HTTP.R, see: https://github.com/cloudyr/aws.s3/blob/93325b01de9bd27896cdcb55eacc54bc69383c80/R/s3HTTP.R…
-
Hello,
I have run into a problem when using s3saveRDS with large files (around 200 MB). Here is the error:
> Error in memCompress(from = serialize(x, connection = NULL), type = "gzip") :
> …
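The error suggests the whole object is being serialized and compressed in memory at once. As a hedged workaround (a sketch, not the package's fix), you can write the RDS to disk with saveRDS() and upload the file with put_object(), so compression never happens in RAM; the object and bucket names below are placeholders:
```
library(aws.s3)

# Sketch of a workaround: serialize to disk instead of memory,
# then upload the file. my_large_object / bucket names are placeholders.
tmp <- tempfile(fileext = ".rds")
saveRDS(my_large_object, tmp)   # gzip-compressed on disk by default
put_object(file = tmp, object = "my_large_object.rds", bucket = "my-bucket")
unlink(tmp)

# Reading it back: download to a temp file, then readRDS()
tmp2 <- tempfile(fileext = ".rds")
save_object(object = "my_large_object.rds", bucket = "my-bucket", file = tmp2)
obj <- readRDS(tmp2)
```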
-
Hi!
I'm trying to upload a large .RData file. It's a conditional random forest model, so to my knowledge I can't split it up.
On the RStudio AMI I'm trying to upload the .RData file from my S3 b…
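For a file that large, a multipart upload tends to be more reliable than a single PUT. A minimal sketch, assuming a version of aws.s3 whose put_object() supports multipart = TRUE (file and bucket names are placeholders):
```
library(aws.s3)

# Upload the .RData file in parts rather than one request.
put_object(
  file      = "model.RData",
  object    = "model.RData",
  bucket    = "my-bucket",
  multipart = TRUE
)

# Later, pull it back down and load it.
save_object(object = "model.RData", bucket = "my-bucket", file = "model.RData")
load("model.RData")
```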
-
Calling gcs_list_objects() on my Google Cloud bucket returns exactly 1000 rows, even after adding some additional files just to make sure that there are more than 1000 files in there.
Maybe the max…
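1000 is the default page size of the underlying objects.list call in the GCS JSON API; longer listings are continued via nextPageToken. Until the package pages automatically, a hedged workaround is to follow the token yourself (sketch using httr; the bucket name and `token` auth object are placeholders):
```
library(httr)

# Sketch: page through the GCS JSON API, following nextPageToken.
# Assumes `token` is a valid httr OAuth token with a storage scope.
list_all_objects <- function(bucket, token) {
  url   <- sprintf("https://www.googleapis.com/storage/v1/b/%s/o", bucket)
  names <- character(0)
  page  <- NULL
  repeat {
    res   <- content(GET(url, query = list(pageToken = page), config(token = token)))
    names <- c(names, vapply(res$items, `[[`, character(1), "name"))
    page  <- res$nextPageToken
    if (is.null(page)) break   # no more pages
  }
  names
}
```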
-
A little out of scope, but it would be useful to be able to do this.
https://cloud.google.com/storage/docs/access-control/signed-urls
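For reference, a hedged sketch of what this would involve under the legacy V2 signing scheme from the linked docs: build the string-to-sign, RSA-SHA256-sign it with the service account's private key, and append the result as query parameters. This uses the openssl package; the key path, account email, bucket, and object are placeholders, not a package API:
```
library(openssl)

# Sketch of a V2-style GCS signed URL for GET requests.
sign_gcs_url <- function(bucket, object, email, key_path,
                         expires = Sys.time() + 3600) {
  exp <- as.integer(expires)
  # verb, Content-MD5, Content-Type, expiry, canonical resource
  to_sign <- paste("GET", "", "", exp,
                   paste0("/", bucket, "/", object), sep = "\n")
  key <- read_key(key_path)
  sig <- base64_encode(signature_create(charToRaw(to_sign), hash = sha256, key = key))
  sprintf(
    "https://storage.googleapis.com/%s/%s?GoogleAccessId=%s&Expires=%d&Signature=%s",
    bucket, object, email, exp, URLencode(sig, reserved = TRUE)
  )
}
```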
-
I've used get_object to read a gz file into a raw vector. However, when I used memDecompress, it raised an internal error. Are there any solutions?
The code follows:
x.gz
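memDecompress() expects a bare compressed stream and can fail on a full gzip file with its header framing. A hedged workaround (sketch; object and bucket names are placeholders) is to wrap the raw vector in a connection and let base R's gzcon() handle the gzip format:
```
library(aws.s3)

# Sketch: decompress a gzipped object without memDecompress().
x   <- get_object(object = "data.csv.gz", bucket = "my-bucket")  # raw vector
con <- gzcon(rawConnection(x))
lines <- readLines(con)   # or read.csv(con), depending on the contents
close(con)
```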
-
Got bogged down by the region thing, so I haven't had a chance to play with this yet, but it's on the to-do list. Might involve more package API breakage.
-
If you request a file that does not exist, for example by using lower case instead of upper case for one of the letters, you get this error from R:
> Error in load(tmp, envir = envir) :
> the i…
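S3 keys are case-sensitive, so "results.RData" and "Results.RData" are different objects, and the miss only surfaces once load() fails. A friendlier failure mode is to check for the object first; a minimal sketch using aws.s3::head_object(), with placeholder names:
```
library(aws.s3)

# Sketch: verify the key exists (case-sensitively) before s3load().
if (isTRUE(head_object(object = "Results.RData", bucket = "my-bucket"))) {
  s3load(object = "Results.RData", bucket = "my-bucket")
} else {
  stop("Object not found: check the exact (case-sensitive) key name")
}
```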
-
```
job_extract
```
-
Is it possible to use this package with S3-compatible storage? For S3-compatible storage I need to specify an endpoint and then just enter the following:
Access Key Id -
Secret Access Key -
REST En…
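For what it's worth, aws.s3 routes requests through s3HTTP(), whose base_url defaults to the AWS_S3_ENDPOINT environment variable, so pointing at an S3-compatible service can be sketched like this (endpoint and credentials are placeholders; region handling may vary by provider):
```
library(aws.s3)

# Sketch: point aws.s3 at an S3-compatible endpoint (e.g. MinIO, Ceph).
Sys.setenv(
  "AWS_ACCESS_KEY_ID"     = "my-access-key",
  "AWS_SECRET_ACCESS_KEY" = "my-secret-key",
  "AWS_S3_ENDPOINT"       = "storage.example.com"  # REST endpoint, no scheme
)

# Many S3-compatible services expect an empty region.
bucketlist(region = "")
```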