Open flothesof opened 8 years ago
At what time did you try this? I think this is related to #769 where LPDAAC has an unseemly amount of probably unnecessary downtime.
Hi Elliott,
Thanks for your reply. I tried this at the time I filed the issue. Is there any official page where I can check the downtime of the server?
Thanks,
Florian
Le 4 août 2016 20:05, "Elliott Sales de Andrade" notifications@github.com a écrit :
I get the same 401 error, and I think it might be related to the new LP DAAC authorization requirement in effect since July 20: https://lpdaac.usgs.gov/data_access/data_pool. I obtained the following urllib error:

import urllib.request
import urllib.error

try:
    urllib.request.urlopen('http://e4ftl01.cr.usgs.gov/SRTM/SRTMGL3.003/2000.02.11/N14W092.SRTMGL3.hgt.zip')
except urllib.error.HTTPError as e:
    print(e.headers)
Server: nginx/1.4.7
Date: Tue, 09 Aug 2016 11:10:21 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: close
Status: 401 Unauthorized
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
WWW-Authenticate: Basic realm="Please enter your Earthdata Login credentials. If you do not have a Earthdata Login, create one at https://urs.earthdata.nasa.gov//users/new"
Cache-Control: no-cache
X-Runtime: 0.042869
X-Request-Id: ecb4f63b-157c-49dc-803e-73040edea6a0
Unfortunately the "Command Line Tips" document on the Earthdata site doesn't provide a Python example. I tried to create an authentication handler for cartopy's urllib.request, but didn't succeed. Other documents on the DAAC websites recommend using Pydap, but Pydap hasn't been fully updated for Python 3, and I'm not sure there's already a recipe for 2.7 (https://github.com/pydap/pydap/issues/19).
Any help is appreciated!
Having the same issue. The odd thing is that it works fine on my MacBook but not on my Linux machine. I'm using Ubuntu 14.04.
Having the same problem here (but with MODIS data). I tried several authentication examples in Python, but no success yet.
There's an entry from August on the EarthData wiki: https://wiki.earthdata.nasa.gov/display/EL/How+To+Access+Data+With+Python. It shows the sample code needed to use the earthdata credentials to download data from the portal. This is probably exactly what needs to be done to get this problem fixed:
#!/usr/bin/python
from cookielib import CookieJar
from urllib import urlencode
import urllib2
# The user credentials that will be used to authenticate access to the data
username = "<Your Earthdata login username>"
password = "<Your Earthdata login password>"
# The url of the file we wish to retrieve
url = "http://e4ftl01.cr.usgs.gov/MOLA/MYD17A3H.006/2009.01.01/MYD17A3H.A2009001.h12v05.006.2015198130546.hdf.xml"
# Create a password manager to deal with the 401 reponse that is returned from
# Earthdata Login
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, "https://urs.earthdata.nasa.gov", username, password)
# Create a cookie jar for storing cookies. This is used to store and return
# the session cookie given to us by the data server (otherwise it will just
# keep sending us back to Earthdata Login to authenticate). Ideally, we
# should use a file-based cookie jar to preserve cookies between runs. This
# will make it much more efficient.
cookie_jar = CookieJar()
# Install all the handlers.
opener = urllib2.build_opener(
    urllib2.HTTPBasicAuthHandler(password_manager),
    # urllib2.HTTPHandler(debuglevel=1),   # Uncomment these two lines to see
    # urllib2.HTTPSHandler(debuglevel=1),  # details of the requests/responses
    urllib2.HTTPCookieProcessor(cookie_jar))
urllib2.install_opener(opener)
# Create and submit the request. There are a wide range of exceptions that
# can be thrown here, including HTTPError and URLError. These should be
# caught and handled.
request = urllib2.Request(url)
response = urllib2.urlopen(request)
# Print out the result (not a good idea with binary data!)
body = response.read()
print body
Actually, to run this in Python 3, I needed to account for a couple of renamings in the urllib module. I managed to make a clean request using this code:
from http.cookiejar import CookieJar
import urllib.request

# The user credentials that will be used to authenticate access to the data
username = "<Your Earthdata login username>"
password = "<Your Earthdata login password>"
# The url of the file we wish to retrieve
url = "http://e4ftl01.cr.usgs.gov/MOLA/MYD17A3H.006/2009.01.01/MYD17A3H.A2009001.h12v05.006.2015198130546.hdf.xml"
# Create a password manager to deal with the 401 response that is returned from
# Earthdata Login
password_manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, "https://urs.earthdata.nasa.gov", username, password)
# Create a cookie jar for storing cookies. This is used to store and return
# the session cookie given to us by the data server (otherwise it will just
# keep sending us back to Earthdata Login to authenticate). Ideally, we
# should use a file-based cookie jar to preserve cookies between runs. This
# will make it much more efficient.
cookie_jar = CookieJar()
# Install all the handlers.
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_manager),
    # urllib.request.HTTPHandler(debuglevel=1),   # Uncomment these two lines to see
    # urllib.request.HTTPSHandler(debuglevel=1),  # details of the requests/responses
    urllib.request.HTTPCookieProcessor(cookie_jar))
urllib.request.install_opener(opener)
# Create and submit the request. There are a wide range of exceptions that
# can be thrown here, including HTTPError and URLError. These should be
# caught and handled.
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
# Print out the result (not a good idea with binary data!)
body = response.read()
print(body)
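As the comments above note, a file-based cookie jar would preserve the session cookie between runs. A minimal sketch of that idea using the standard library's MozillaCookieJar (the cookie file path here is an arbitrary choice, not anything mandated by Earthdata) might look like this:

```python
import os
from http.cookiejar import MozillaCookieJar
import urllib.request

# Hypothetical location for the persistent cookie file.
cookie_file = os.path.expanduser("~/.earthdata_cookies.txt")

cookie_jar = MozillaCookieJar(cookie_file)
if os.path.exists(cookie_file):
    # Reload cookies saved by a previous run (ignore_discard keeps
    # session cookies that would otherwise be dropped).
    cookie_jar.load(ignore_discard=True)

# Plug the persistent jar into the opener exactly as above.
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(cookie_jar))
urllib.request.install_opener(opener)

# ... perform requests here ...

# Persist the cookies for the next run.
cookie_jar.save(ignore_discard=True)
```

This way the server's session cookie survives between script invocations, so repeated downloads don't have to round-trip through Earthdata Login each time.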
So it seems this issue can be fixed if an extra login step is taken into account during the query for the SRTM data. What do you guys think?
Thanks @flothesof; in fact, what you have posted is enough to get things working without modifying Cartopy. Because install_opener is a global operation, it applies to Cartopy as well. If you add the steps above up until the install_opener line before attempting to use SRTM, everything works fine.
Getting this to work seamlessly in Cartopy will require a bit of thought on how to integrate it with the Downloader class and how to get login information into it from the higher-level SRTM loader functions.
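In the meantime, the workaround described above can be wrapped in a small helper that is called once before touching the SRTM loaders; the function name is just an illustration, not a Cartopy API:

```python
from http.cookiejar import CookieJar
import urllib.request


def install_earthdata_opener(username, password):
    """Install a global urllib opener that authenticates with Earthdata Login.

    Because urllib.request.install_opener is global, any library that uses
    urllib.request (including Cartopy's SRTM downloader) will pick it up.
    """
    password_manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_manager.add_password(
        None, "https://urs.earthdata.nasa.gov", username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_manager),
        urllib.request.HTTPCookieProcessor(CookieJar()))
    urllib.request.install_opener(opener)
    return opener
```

Usage would then be `install_earthdata_opener(my_user, my_pass)` followed by the usual Cartopy SRTM calls.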
I am still having the same issue
HTTPError: HTTP Error 401: Unauthorized
Hi,
I'm trying to grab elevation data from an SRTM source. I've modified the SRTM example to something a little more basic:
The problem is that I get an error ending like this:
I assume the problem lies with the website the SRTM map tiles are downloaded from, which doesn't respond the way it should. I tried to replicate the call to urlopen with the following code:
And I get the same 401 error.
Any thoughts on how to solve this?
Thanks, Florian