The problem
When downloading files, they are loaded into memory before being written to disk. This is not an issue with smaller files, but I keep running into problems when I try to download larger ones, such as the 1.8 GB CSV file from this data set. The code above does not always throw an error, only when run on a machine with limited resources or with a slow internet connection.
What I would suggest is to allow the user to download the files in a different way. A simple argument like `return_url`, for example, could do just that and skip the download. The user could then pass the link to an external function like `curl::curl_download()` or copy it into an external application. I have a pull request ready, if there is interest.
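To illustrate, here is a rough sketch of how such an argument might be used. Note that `get_data()` and the `return_url` argument are hypothetical placeholders, not the package's actual API:

```r
# Hypothetical usage: ask the package for the URL instead of downloading.
# get_data() and return_url are placeholders for the package's real interface.
url <- get_data("large-file", return_url = TRUE)

# Stream the file directly to disk with curl::curl_download(), which writes
# chunks as they arrive instead of buffering the whole file in memory.
curl::curl_download(url, destfile = "large-file.csv", quiet = FALSE)
```

Because `curl::curl_download()` streams to the destination file, memory use stays roughly flat regardless of the file's size, which should avoid the failures seen on resource-constrained machines.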