Closed: debsush closed this issue 8 years ago
1.) Underneath the hood, we are making an API call per dataset (see: https://github.com/quandl/quandl-r/blob/master/R/Quandl.R#L135) and performing the merge (see: https://github.com/quandl/quandl-r/blob/master/R/Quandl.R#L170). You could try making individual dataset calls and seeing if there is a more efficient merge algorithm you can use. You could also try adding the start_date and end_date parameters to limit the data being pulled, which may speed up the request.
2.) The API speed can potentially be increased by using the start_date and end_date parameters. This will limit the number of data points returned by the server.
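The per-dataset calls plus a hand-rolled merge suggested in point 1 can be sketched as below. The dataset codes and date window are placeholders, and the Quandl() calls themselves appear only in a comment since they need a network connection and an API key; the merge step is demonstrated on two small synthetic xts series and works the same way on downloaded ones.

```r
library(xts)

# In practice the list of series would come from individual API calls, e.g.
# (placeholder codes; requires a Quandl API key and network access):
#   codes  <- c("BSE/BOM500209", "BSE/BOM500180")
#   series <- lapply(codes, function(code)
#     Quandl(code, start_date = "2001-01-01", end_date = "2010-12-31",
#            type = "xts"))

# Two small synthetic xts series stand in for the downloaded datasets.
dates  <- as.Date("2016-01-01") + 0:4
a      <- xts(1:5,   order.by = dates)
b      <- xts(11:14, order.by = dates[2:5])
series <- list(a, b)

# Pairwise outer-join merge across the whole list; merge.xts aligns on the
# date index and fills non-overlapping dates with NA.
merged <- Reduce(function(x, y) merge(x, y, join = "outer"), series)

nrow(merged)  # 5: union of the two date indexes
ncol(merged)  # 2: one column per series
```

Because merge.xts aligns indexes in compiled code, merging a list of xts objects this way is typically faster than repeatedly merging data frames on a date column.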
When a 20-year data download from Yahoo takes 0.3 seconds and a 10-year download from Quandl takes 3 seconds, I would not call the above a work-around but rather a limitation of the Quandl API as far as speed is concerned. For a web app, 3 seconds of data pull plus 1-2 seconds of internal processing makes each user request a 5-6 second full render, which is on the high side for retaining users.
Thank you anyway.
SD
The speed of the data retrieval will be dependent on how many rows and how many columns are being fetched for a given dataset. We are always trying to make data retrieval on our platform faster and are exploring different approaches on minimizing our data retrieval response time.
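The effect of row count is easy to see locally: the server-side start_date/end_date filtering is equivalent to subsetting the series by a date window, so the fewer rows you request, the less data crosses the wire. A minimal sketch with synthetic daily data standing in for a full download:

```r
library(xts)

# Ten years of synthetic daily observations standing in for a full dataset.
dates <- seq(as.Date("2006-01-01"), as.Date("2015-12-31"), by = "day")
full  <- xts(rnorm(length(dates)), order.by = dates)

# Requesting only the last two years server-side (start_date/end_date)
# corresponds to this local date-window subset:
recent <- full["2014-01-01/2015-12-31"]

nrow(full)    # 3652 rows for the full ten years (two leap days included)
nrow(recent)  # 730 rows for the two-year window
```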
Thank you. I am sure Quandl will soon be able to deliver the kind of speed that Yahoo or Google delivers. Keep up the good work.
On Thu, Jun 2, 2016 at 2:40 PM, Clement Leung notifications@github.com wrote:
> The speed of the data retrieval will be dependent on how many rows and how many columns are being fetched for a given dataset.
Hi,
I have 2 queries:
Time to execute 9.36116 secs
The above is very slow because it merges the datasets column by column. Is there a more native, faster way to achieve the same result?
library(Quandl)
library(quantmod)

starttime <- Sys.time()
mydata <- Quandl("BSE/BOM500209", start_date = "2001-01-01", type = "xts")
endtime <- Sys.time() - starttime
Time difference of 2.973606 secs
starttime <- Sys.time()
datatemp <- getSymbols("INFY.BO", from = "2000-01-01", src = "yahoo",
                       warnings = FALSE, auto.assign = FALSE,
                       env = parent.frame())
endtime <- Sys.time() - starttime
Time difference of 0.3525529 secs
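As an aside, differencing Sys.time() as above works, but base R's system.time() is the more idiomatic way to time a single call; a minimal sketch, with Sys.sleep() as a placeholder for the actual download:

```r
# system.time() returns user, system, and elapsed times for the expression.
# Sys.sleep(0.1) is a placeholder for e.g. a Quandl() or getSymbols() call.
timing <- system.time({
  Sys.sleep(0.1)
})
timing[["elapsed"]]  # roughly 0.1 seconds
```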
Please suggest.
Thank you,
SD