Closed pdh0710 closed 4 years ago
Hi there!
Maybe I can help you out. Could you let me know where de431t.bsp was downloaded from? I have checked the NAIF server and they provide the file split into two parts: de431_part-1.bsp and de431_part-2.bsp. I could not find the file you are using.
I was about to post that!
On the NAIF website https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/ I found de431_part-1.bsp and de431_part-2.bsp. Given that, maybe just make a meta kernel with both files and load them that way instead of using de431t.bsp.
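For reference, a meta kernel along those lines is just a SPICE text kernel listing both parts (a sketch; the filenames assume the two parts sit in the working directory):

```
\begindata

   KERNELS_TO_LOAD = ( 'de431_part-1.bsp',
                       'de431_part-2.bsp' )

\begintext
```

Loading that single file with furnsh should then pull in both halves of the ephemeris.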
Another option is to properly merge the two files using the NAIF spkmerge utility program. Note that it is not possible to merge SPK files without using spkmerge or the SPICE toolkit. Here is the documentation for the utility program.
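For anyone following along, spkmerge is driven by a small command file; a minimal sketch for merging the two parts might look like this (the output filename is an example, and naif0012.tls is the usual leapseconds kernel from the NAIF generic kernels area):

```
LEAPSECONDS_KERNEL   = naif0012.tls
SPK_KERNEL           = de431_merged.bsp
   SOURCE_SPK_KERNEL = de431_part-1.bsp
   SOURCE_SPK_KERNEL = de431_part-2.bsp
```

You would then run something like `spkmerge merge.cmd` with the command file as the argument.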
I downloaded de431t.bsp from here: ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/
In de431t.bsp, numerically integrated TT-TDB values are embedded. It seems that you two didn't know about the de43*t.bsp files at all.
I think it might be that the CSPICE version running under SpiceyPy in your installation is 32bits. Is there a way for you to check if this is the case?
How can I check whether the CSPICE version under SpiceyPy is 32-bit? I just installed SpiceyPy as the official documentation guided:
conda config --add channels conda-forge
conda install spiceypy
I think this is just a malformed kernel. Spiceypy hasn't been using 32bit builds of cspice for a long time now.
There are details about TT-TDB in this file: ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/README.txt. I'm downloading the file to check myself.
I have downloaded the file and run it with 64-bit SpiceyPy and 64-bit Matlab, and it worked just fine. Then I ran it in a 32-bit CSPICE environment (not under SpiceyPy) and got the same issue reported in this thread.
@jdiazdelrio could you run the spice tool 'brief' on it? I'm downloading it now but I am only at 25% so far (how fast is your network????) .
Yes. Here's the output:
BRIEF -- Version 4.0.0, September 8, 2010 -- Toolkit Version N0066
Summary for: de431t.bsp
Bodies: MERCURY BARYCENTER (1) SATURN BARYCENTER (6) MERCURY (199)
VENUS BARYCENTER (2) URANUS BARYCENTER (7) VENUS (299)
EARTH BARYCENTER (3) NEPTUNE BARYCENTER (8) MOON (301)
MARS BARYCENTER (4) PLUTO BARYCENTER (9) EARTH (399)
JUPITER BARYCENTER (5) SUN (10) 1000000001
Start of Interval (ET) End of Interval (ET)
----------------------------- -----------------------------
13201 B.C. MAY 06 00:00:00.000 17191 MAR 15 00:00:00.000
Right now it downloads at 350Mbps.
okay I am willing to entertain that it is a 32 bit build of cspice that @pdh0710 is using but how??
I installed SpiceyPy a few hours ago as guided. Why does my SpiceyPy have the problem?
@pdh0710 is it possible you installed a 32bit version of anaconda? Open a terminal and start the python interpreter by running python; the top 3 lines should look something like this:
Python 3.6.10 | packaged by conda-forge | (default, Apr 6 2020, 14:40:13)
[GCC Clang 9.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
reply with what it says for your system.
My python message is...
Python 3.7.6 | packaged by conda-forge | (default, Mar 23 2020, 22:22:21) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
okay so that reads to me as a 64 bit python running on a 64 bit operating system see https://stackoverflow.com/questions/28526062/does-64-bit-anaconda-on-win32-uses-32-bit-or-64-bit
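If you'd rather not parse the banner, a quick stdlib check confirms the interpreter's bitness directly (nothing here is specific to SpiceyPy):

```python
import struct
import platform

# Size of a C pointer in the running interpreter: 8 bytes on a 64-bit build,
# 4 bytes on a 32-bit build.
bits = struct.calcsize("P") * 8
print(f"Python is running as a {bits}-bit process")

# On 64-bit Windows this typically reports ('64bit', 'WindowsPE').
print(platform.architecture())
```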
That doesn't solve the issue yet but we must be getting closer. Just to check @pdh0710 could you compute the md5 hash of the bsp file?
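For a file this large, hashing it in chunks avoids loading the whole kernel into memory; a small helper along these lines would do (the path in the comment is just the kernel filename from this thread):

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks
    so even a multi-GB kernel does not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Example usage (hypothetical path):
# print(md5_of_file("de431t.bsp"))
```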
@pdh0710 how much RAM does your system have, is it a VM?
(My system RAM is 64GB)
I removed the SpiceyPy package and manually removed all cspice-related files, then re-installed the SpiceyPy package. The message below was shown:
The following NEW packages will be INSTALLED:
cspice    conda-forge/win-64::cspice-66-h2fa13f4_1007
spiceypy  conda-forge/noarch::spiceypy-3.0.2-py_0
So it seems that the re-installed cspice is the 64-bit version. But the error still occurs.
Very odd. I will test this on my windows machine sometime tomorrow, but it looks like you are fully running 64-bit cspice, spiceypy, python, and OS. I checked the cspice-feedstock repository and, although the recipe has 32-bit references, it appears those have not been built for a long time, as conda-forge deprecated 32-bit windows a while ago. Since @jdiazdelrio was able to replicate the issue in a 32-bit environment, I am not sure what else there is to check besides the md5 hashes of the kernel file and of the cspice dll that was installed.
I don't think my de431t.bsp file is damaged. When I used the de431t.bsp file with the python-jplephem module, no error occurred and correct coordinates were calculated. This is why I thought SpiceyPy caused the problem. Moreover, I re-downloaded the de431t.bsp file and checked again; the problem was the same.
I installed the SpiceyPy module on my Ubuntu 18.04 server (x64) and executed the above spicey_error.py with de431t.bsp. No error occurred and planet coordinates were correctly calculated.
By the way, it seems that the cspice module is not installed on the Ubuntu server. I wonder whether the cspice module is only needed on Windows systems, or whether I have missed a cspice module installed on the Ubuntu server.
@pdh0710 if you installed spiceypy using pip on your Ubuntu server then it would not have used the conda packages to install spiceypy. SpiceyPy's conda distribution is slightly different as we use the conda forge infrastructure to build cspice instead of the local computer's C compiler (which is how the pip install works).
I have confirmed this issue on my windows machine with both the pip installed spiceypy and conda forge spiceypy and it looks like it must be a bug with how cspice is being compiled. Given the size of the kernel there may be some internal incompatibility that is windows specific.
@pdh0710 I would suggest for now using a smaller bsp kernel if possible, or use the spice tools to split that 3.1 gb kernel into several smaller files.
(Yes, I installed SpiceyPy using pip on the Ubuntu server)
O.K. I can understand the situation. Then, would you please let me know how I can safely split de431t.bsp?
I don't have first hand experience doing this, but I think you would need to use the spkmerge tool which is available on the NAIF website. With the tool you would use the BEGIN_TIME, END_TIME parameters along with the de431t.bsp file and the naif0012.tls file to limit the time range of data of the subset kernel file you generate. From the instructions this looks like a fairly easy process to script.
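As a rough sketch of what that would look like, the spkmerge command file for subsetting by time could be something like the following (the output filename and time bounds are illustrative placeholders; pick the range you actually need):

```
LEAPSECONDS_KERNEL   = naif0012.tls
SPK_KERNEL           = de431t_subset.bsp
   SOURCE_SPK_KERNEL = de431t.bsp
   BEGIN_TIME        = 2000 JAN 01 00:00:00.000 TDB
   END_TIME          = 2040 JAN 01 00:00:00.000 TDB
```

The resulting subset kernel should be far smaller than the 3+ GB original and stay under the size limit being discussed here.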
Another question I have: why are you using this kernel specifically? Is it because of the TT-TDB values? Also, do you really need the full time range of this kernel, which is 13201 B.C. MAY 06 00:00:00.000 to 17191 MAR 15 00:00:00.000? If you are really just interested in, say, the current decade, this kernel file is way too big.
Using spkmerge to cut up SPKs is fairly straightforward. Here's some code that I've used to cut out just the data needed for a single image from large SPKs. The first function generates a NetworkX graph of how the different bodies are defined relative to each other, and the second function uses that to get all of the bodies needed for the state of one body relative to another. You can probably skip the graph step and just get the information for all of the bodies in the SPK.
https://github.com/USGS-Astrogeology/ale/blob/master/ale/util.py#L329
@AndrewAnnex @jessemapel Thank you. I'll try the methods you recommend.
I prefer DE431t ephemeris data for 2 reasons.
Hi @AndrewAnnex. I think there might be an issue with the way CSPICE is compiled within the conda-forge infrastructure. MS Visual Studio, by default, seems to compile for 32 bits, or at least that's what they say in their documentation:
The default build architecture uses 32-bit, x86-hosted tools to build 32-bit, x86-native Windows code. However, you probably have a 64-bit computer. When Visual Studio is installed on a 64-bit Windows operating system, additional developer command prompt shortcuts for the 64-bit, x64-hosted native and cross compilers are available.
Note that the command line options are the same for both 32 and 64-bit.
I'll have to research this a bit as it has been a while since I've looked at the windows based build process but I do call vcvarsall.bat in the windows build process in appveyor, I would expect that should have worked...
@jdiazdelrio do you have any ideas to prove this directly? I've now tested both my build process and the build process in conda-forge and both appear to be building x64 bit shared libraries of cspice after running the 'dumpbin' command:
8664 machine (x64)
5 number of sections
5EACD720 time date stamp Sat May 2 02:12:48 2020
0 file pointer to symbol table
0 number of symbols
F0 size of optional header
2022 characteristics
Executable
Application can handle large (>2GB) addresses
DLL
and I verified I was using vsvarsall.bat correctly in my build process, conda forge does a similar step: https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=152100&view=logs&j=f97ca392-d626-52d0-4b2e-4b27aa319a77&t=d276ba93-4510-59b4-3917-c75eb70f82df&l=361
In short, this is a real mystery; the obvious things I have checked appear to indicate that cspice is being built correctly. More to the point, I don't think spiceypy would work at all if a 32-bit DLL were being loaded by the 64-bit python.
I checked whether cspice.dll is 64-bit using dumpbin.exe [ref]. dumpbin reported that cspice.dll is an x64 DLL, as shown below.
$ dumpbin /headers cspice.dll
Microsoft (R) COFF/PE Dumper Version 14.16.27035.0
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file cspice.dll
PE signature found
File Type: DLL
FILE HEADER VALUES
8664 machine (x64)
6 number of sections
5DDF47FE time date stamp Thu Nov 28 13:07:26 2019
0 file pointer to symbol table
0 number of symbols
F0 size of optional header
2022 characteristics
Executable
Application can handle large (>2GB) addresses
DLL
.......
.......
.......
However, an x64 DLL does not guarantee that internal variables are 64-bit, especially on the Windows OS/API. I stopped developing Windows programs long ago, but I have heard that 32-bit types are used more widely in the Windows x64 API than on other x64 operating systems. Though it is somewhat ridiculous, Microsoft is the kind of company that considers backward compatibility more important. So more attention is needed in Windows x64 programming as to whether types and variables are actually 64-bit. My opinion is that this problem may be caused by unintentional use of 32-bit types/variables.
gahh so I wonder if this is an error in SPICE itself rather than how I am building it, which as far as I can tell after a deep dive is correct.
In SpiceZdf.h, SpiceInt would be mapped to a long on Windows systems, which has the same max value as a 32-bit signed integer: https://docs.microsoft.com/en-us/cpp/cpp/integer-limits?view=vs-2019, https://docs.microsoft.com/en-us/cpp/cpp/data-type-ranges?view=vs-2019.
HOWEVER: I am very skeptical that this is the bug, how has anyone not run into this before?
Assuming I am still doing something wrong, while searching around for people with the same issue I found another project, calceph. In their changelog they claim to have fixed an issue with Windows support and kernels larger than 4 GB, but when I inspected the source I didn't find anything that significant. They use slightly different flags for the MSVC compiler (/O2 /GR- /MD /nologo /EHs), but looking into the docs for those flags, none of them would be important.
@pdh0710 @jdiazdelrio after communicating with the NAIF, I have learned that this is a known issue with the 64-bit Windows CSPICE library, and that the library is limited to kernels no bigger than 2.1 GB. This is not a bug in SpiceyPy or in the conda-forge feedstock that builds cspice for conda users. The underlying issue is the 32-bit long I noted, and simply changing it to a 64-bit type (long long) may conflict with constraints from ANSI C. As such, I won't consider writing a patch for the C code, as it may cause unintentional errors elsewhere.
I think the only solution for you @pdh0710 would be to use spkmerge to make a smaller kernel file, or to use some of the smaller files provided by SSD/NAIF. Again, it primarily depends on how big of a time range you need for your work, so if you really need the full time span then using spkmerge would be my recommendation. Otherwise, a Linux or macOS system would not share this limitation.
I will leave this issue open for a day or two and then I will close it.
Oh, it's not good news, but I agree that modifying the CSPICE source code is beyond the scope of this project. I will try the recommended methods above to split the de431t.bsp file next week.
Thank you for your consideration and recommendations.
@pdh0710 I am closing the issue with a few additional suggestions from the naif:
If you can use a Kernel that covers less time, this kernel: ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/TTmTDB.de430.19feb2015.bsp covers the 1549 DEC 31 to 2650 JAN 25 range.
If you need the whole time range for de431t.bsp which is 13201 B.C. MAY 07 to 17191 MAR 01, then using spkmerge on a linux or macos system is the way to go, and you can make it smaller just by extracting data only for ID 1000000001 if you are really only interested in the TT-TDB information.
@AndrewAnnex Thank you. I tried spkmerge. It worked well. I could easily make a small version of de431t.bsp.
(Please excuse my English)
When I execute spicey_error.py with the de431t.bsp ephemeris data file, the error below occurs. No error occurs with other de43*t.bsp files, so I think there is a problem with SpiceyPy's handling of de431t.bsp. The size of de431t.bsp is about 3.4 GB, so the problem may be related to large file handling.
Error :
spicey_error.py :
de431t.txt :
Environments :