zalpha178 opened this issue 5 years ago
Per their terms of use:
Limitations
The Federal Reserve Bank of St. Louis may impose limits on certain features and services or restrict your access to parts or all of the FRED® API or the Federal Reserve Bank of St. Louis web site or web services without notice or liability. You acknowledge and agree that the Federal Reserve Bank of St. Louis may impose or adjust the limit on the amount of bandwidth you may use or the number of transactions you may send or receive through the FRED® API; such fixed upper limits may be set by the Federal Reserve Bank of St. Louis at any time, at the Federal Reserve Bank of St. Louis’s discretion.
So I believe they're probably rate-limiting your API key, but they don't disclose the exact limits. 429 Too Many Requests is the typical response for such rate limiting.
You say it still fails even when you use sleep... how long are you sleeping? Sleep for multiple minutes, or cut your initial request rate down sufficiently, and it should be fine.
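To make that concrete, here is a minimal sketch of a retry loop with exponential backoff. The `fetch_with_backoff` helper is hypothetical (it is not part of fredapi); it retries any callable, which you could point at `Fred.get_series`. Since fredapi may surface an HTTP 429 as either `HTTPError` or `ValueError`, the sketch retries on any exception:

```python
import time

def fetch_with_backoff(fetch, *args, max_retries=5, base_delay=1.0, **kwargs):
    """Hypothetical helper: call fetch(...), retrying with exponential backoff.

    fredapi may re-raise an HTTP 429 as ValueError, so we retry on any
    exception and only give up after max_retries attempts.
    """
    for attempt in range(max_retries):
        try:
            return fetch(*args, **kwargs)
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller see the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s, ...

# Usage (assumes a fredapi.Fred instance and a valid API key):
# fred = fredapi.Fred(api_key='...')
# data = fetch_with_backoff(fred.get_series, 'SP500')
```

The doubling delay means a short burst of 429s costs only a few seconds, while a sustained rate limit eventually forces a multi-second pause, which is usually enough for the limit window to reset.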
I tried time.sleep(0.01), time.sleep(0.05), and time.sleep(1), respectively, also using try/except. When I tested with 600 series_id names, the maximum number of series I was able to download was about 450; the exception handler wrote the other ~150 series_id names to an error log file. However, the results were not stable across runs.
I appreciate your response very much. I either need to limit the number of series per request or check with FRED whether the rate limits can be increased.
Thanks, Mindy
```
HTTPError                                 Traceback (most recent call last)
c:\users\mindy\appdata\local\programs\python\python37-32\lib\site-packages\fredapi\fred.py in __fetch_data(self, url)
     63         try:
---> 64             response = urlopen(url)
     65             root = ET.fromstring(response.read())

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    221         opener = _opener
--> 222     return opener.open(url, data, timeout)
    223

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in open(self, fullurl, data, timeout)
    530             meth = getattr(processor, meth_name)
--> 531             response = meth(req, response)
    532

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in http_response(self, request, response)
    640             response = self.parent.error(
--> 641                 'http', request, response, code, msg, hdrs)
    642

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in error(self, proto, *args)
    568             args = (dict, 'default', 'http_error_default') + orig_args
--> 569             return self._call_chain(*args)
    570

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
    502             func = getattr(handler, meth_name)
--> 503             result = func(*args)
    504             if result is not None:

c:\users\mindy\appdata\local\programs\python\python37-32\lib\urllib\request.py in http_error_default(self, req, fp, code, msg, hdrs)
    648     def http_error_default(self, req, fp, code, msg, hdrs):
--> 649         raise HTTPError(req.full_url, code, msg, hdrs, fp)
    650

HTTPError: HTTP Error 400: Bad Request

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
```
I put 600+ series_ids in a text file and let Python loop through it.
I got 'HTTPError: HTTP Error 400: Bad Request' and 'HTTPError: HTTP Error 429: Too Many Requests' errors. They don't go away even when I use sleep.
Then I tested with 100 series; it ran fine without any error. I also tested the Excel add-in provided by FRED. The maximum download goes up to column GR, which is 174 series; however, the API can't download 174 series without an error message. Can I ask what the maximum number of series the API can retrieve per connection request is? Is there a way to download 1000+ series without errors?
Thanks, Mindy
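For the 600+ IDs case, here is a sketch of a throttled download loop that writes failed IDs to an error log file, like the workflow described above. The pacing value is an assumption (FRED does not publish its limits), and `get_series` stands in for `fredapi.Fred.get_series`:

```python
import time

def download_series(get_series, series_ids, errlog_path,
                    delay=1.0, max_retries=3):
    """Fetch each series with a pause between requests; log IDs that
    still fail after max_retries attempts.

    The one-request-per-`delay` pacing is an assumption, since FRED
    does not disclose its exact rate limits.
    """
    results = {}
    with open(errlog_path, 'w') as errlog:
        for sid in series_ids:
            for attempt in range(max_retries):
                try:
                    results[sid] = get_series(sid)
                    break
                except Exception:
                    # back off harder after each failed attempt
                    time.sleep(delay * 10 * (attempt + 1))
            else:
                errlog.write(sid + '\n')  # all retries failed
            time.sleep(delay)  # pace even the successful requests
    return results

# Usage (hypothetical):
# fred = fredapi.Fred(api_key='...')
# with open('series_ids.txt') as f:
#     ids = [line.strip() for line in f if line.strip()]
# data = download_series(fred.get_series, ids, 'errors.log')
```

Separating persistent failures (likely 400 Bad Request for an invalid series_id) from transient ones (429 rate limiting) in the log would also help, since only the latter are worth re-running later.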