Are there any intermediate values you could ask the STK for, like the x,y,z vector to the satellite measured from the Earth's center? If so then you could break the problem in half by determining whether the difference between the two measurements was fundamental (they are predicting that the satellite is in two different places above the earth), or a difference in units (they compute longitude and latitude differently).
One possible difference is whether STK is using real geodetic coordinates like Skyfield, or simpler geocentric coordinates. Differences in how UTC is defined and how leap seconds are handled could also play a role.
My guess is that fairly little headway will be made with standard deviations alone. Instead, a single example with exact numbers would let us see the size of the effect and try to guess its cause. If you could produce a small stand-alone Python script that prints out some coordinates, and then contrast those coordinates with the ones that STK produces for exactly the same TLE and date, then we could look at that specific difference and try to track down its origin. It would be interesting to see where the difference arises!
Thanks a lot for your quick reply! I will check the intermediate values tomorrow (STK is way too expensive to have a personal license) and come back with some more data asap.
I wrote a little script and checked it against the output of STK.
# Standard
from datetime import datetime, timedelta
# External
from skyfield.api import Topos, Loader, load, utc
from numpy import diff, where
# Load skyfield data
load = Loader('skyfieldData', expire=False)
tScale = load.timescale()
# Define location and time
Target = Topos(47.981752, 7.771864)
StartTime = datetime(2020, 3, 2, 10, 0, 0, tzinfo=utc)
TimeRange = tScale.utc([StartTime + timedelta(seconds=x*600) for x in range(0, 7)])
TimeRange.MT    # accessing these lazily-computed attributes caches the
TimeRange.gast  # rotation matrix and sidereal time; not strictly required here
# Load satellite
TLEs = load.tle('active.txt')
Satellite = TLEs[40697]
# Compute LLA
SatPos = Satellite.at(TimeRange)
SatSubPoint = SatPos.subpoint()
Lat = SatSubPoint.latitude.degrees
Lon = SatSubPoint.longitude.degrees
Alt = SatSubPoint.elevation.km * 1e3
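As a small addition of mine (not part of the original script), the Skyfield columns of the table below can be printed directly from the variables computed above:
# Print time, Skyfield latitude and longitude (deg), and altitude (m) for each step
for t, la, lo, al in zip(TimeRange.utc_strftime('%H:%M:%S'), Lat, Lon, Alt):
    print(t, round(la, 6), round(lo, 6), round(al))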
I used the following parameters as input:
Target: 47.981752 N, 7.771864 E, elevation = 0 m (in reality 273 m)
Date: 2020-03-02 10:00:00 to 11:00:00 UTC
Satellite: Sentinel-2A
TLE date: 2020-02-03 10:30 UTC
TLE:
1 40697U 15028A   20034.15846893  .00000014  00000-0  22143-4 0  9996
2 40697  98.5641 110.3052 0001305  90.2565 269.8780 14.30820624241118
Here's the output:
LLA (Lat/Lon in degrees, Alt in meters)
Time      STK Lat     STK Lon      STK Alt  SF Lat      SF Lon       SF Alt
10:00:00   33.427871  -178.191458  792504    33.427863  -178.189042  792505
10:10:00   67.979035   163.244993  800276    67.978995   163.247246  800277
10:20:00   72.589361    31.016963  801578    72.589320    31.019737  801578
10:30:00   38.523141     6.831691  794995    38.523140     6.834253  794996
10:40:00    3.080657    -2.055193  792486     3.080672    -2.052706  792486
10:50:00  -32.377482   -10.467248  801973   -32.377454   -10.464827  801974
11:00:00  -66.885553   -28.062232  814717   -66.885497   -28.059946  814717
So there are some discrepancies: Lon shows the largest difference, Lat is slightly better, and Alt is a very good fit.
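To put those discrepancies into physical terms, here is a rough back-of-envelope conversion of the first row into ground distance (my own arithmetic, assuming a spherical Earth):
# Convert the 10:00:00 row's lat/lon differences into approximate meters on the ground
import math
R = 6371e3                                           # mean Earth radius in meters
lat = math.radians(33.427871)
dlat_deg = 33.427871 - 33.427863                     # latitude difference in degrees
dlon_deg = -178.191458 - (-178.189042)               # longitude difference in degrees
north = math.radians(dlat_deg) * R                   # roughly 1 m
east = math.radians(dlon_deg) * R * math.cos(lat)    # roughly -220 m
print(round(north, 1), round(east, 1))
So the longitude discrepancy corresponds to a couple of hundred meters on the ground, while the latitude and altitude differences are at the meter level.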
As per your suggestion, I also checked in the ICRF (STK) and GCRF (SF) coordinate systems, in case it's just a matter of coordinate transformation:
ICRF (STK) / GCRF (SF)
STK x (m)   STK y (m)   STK z (m)   SF x (m)    SF y (m)    SF z (m)
 4016521    -4436927    -3937867    -4016433     4436877     3938014
 1176578    -2422307    -6634341    -1176448     2422202     6634402
-2107036      505401    -6824534     2107159     -505521     6824487
-4596747     3242585    -4437453     4596816    -3242676     4437316
-5352720     4756709     -372904     5352710    -4756735      372728
-4090143     4477132     3833074     4090057    -4477084    -3833222
-1289127     2513845     6595171     1288997    -2513742    -6595236
Now, I am not sure how big the difference between ICRF and GCRF is, but I would assume it is marginal. For comparison, we usually see a few hundred meters of difference in vector length.
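For reference, here is a sketch of how the Skyfield x, y, z values above can be obtained, continuing the earlier script (SatPos is the geocentric position computed there):
# GCRF ("J2000") position vector in meters for each of the seven epochs
x, y, z = SatPos.position.m
for xi, yi, zi in zip(x, y, z):
    print(round(xi), round(yi), round(zi))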
STK only seems to have the coordinate options "centric" and "detic" but I am not completely sure what to make of them.
Let me know your thoughts on that.
I checked a bit more in terms of regularity and found an interesting pattern. The following points have been recorded at 1-hour time steps; they show the difference in latitude and longitude between STK and SF.
Whereas the difference in latitude seems to be bound to a periodic behavior around zero, the longitude exhibits far stronger offsets.
I am not sure how big the difference between ICRF and GCRF is
Happily, they are the same: the GCRF is "geocentric ICRF".
STK only seems to have the coordinate options "centric" and "detic" but I am not completely sure what to make of them.
My guess is that they refer to: https://en.wikipedia.org/wiki/Latitude#Geodetic_and_geocentric_latitudes
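As a rough gauge of how large that effect would be (my own numbers, using the standard relation between geodetic and geocentric latitude on the WGS84 ellipsoid surface):
# Difference between geodetic and geocentric latitude on the WGS84 ellipsoid
import math
e2 = 0.00669437999014                                # WGS84 first eccentricity squared
for lat_geodetic in (10, 30, 45, 60, 80):
    lat_geocentric = math.degrees(
        math.atan((1 - e2) * math.tan(math.radians(lat_geodetic))))
    print(lat_geodetic, round(lat_geodetic - lat_geocentric, 4))
The difference peaks at about 0.19 degrees near 45 degrees latitude, far larger than the arcsecond-level discrepancies in the tables above, so if this were the cause it would show up much more strongly.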
Whereas the difference in latitude seems to be bound to a periodic behavior around zero, the longitude exhibits far stronger offsets.
Drat, that's the opposite of what my theory predicted. I think it's longitude that stays constant, and latitude that differs, when geocentric latitude is used instead of true geodetic latitude.
A constant difference in longitude like that (does it remain the same all year?) suggests some difference in how the two libraries are handling the Earth's rotation. If I'm calculating correctly, the difference amounts to:
2.5e-3 degrees = 9 arcseconds = about 280 meters at the equator.
It will be interesting if we can learn where the difference comes from. In the meantime, though, given that satellite positions are said to only be accurate to within a few kilometers, it's possible that a 0.28 km difference (but I could have computed that incorrectly?) will be swamped by other noise and error in a typical TLE propagation. So you might be able to continue your project even while pursuing the question of where the 9 arcsecond difference comes from.
Ah, okay, thanks for the clarification.
I have now checked a single satellite (Sentinel-2A), and here the difference really is negligible compared to the normal TLE noise. Before closing the issue, however, I would like to dig some more into the differences and check a few more cases.
As soon as I have something, I will post the results here. Also, I will check again whether I correctly plotted lat and lon and they are indeed not simply the other way around.
I looked earlier this afternoon at one idea, but the effect turns out to be the wrong size. I thought that maybe the other library might be using Greenwich Apparent Sidereal Time to convert Earth-fixed coordinates to GCRS coordinates, whereas https://www.celestrak.com/publications/AIAA/2006-6753/AIAA-2006-6753.pdf instead suggests a slightly different conversion, which I implemented here:
But, if I'm doing the units conversion correctly, the two differ pretty consistently by 4.23e-3 degrees, which is bigger than the difference you're seeing here.
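To see how large the choice of sidereal time is on its own, here is a small check of mine (not the code referenced above) comparing Skyfield's apparent and mean sidereal times over the hour used earlier:
# Greenwich Apparent vs Mean Sidereal Time; both attributes are in hours
from skyfield.api import load
ts = load.timescale()
t = ts.utc(2020, 3, 2, 10, range(0, 70, 10))         # 10:00 to 11:00 UTC, 10-minute steps
diff_deg = (t.gast - t.gmst) * 15.0                  # hours -> degrees
for label, d in zip(t.utc_strftime('%H:%M'), diff_deg):
    print(label, round(d, 5))
The magnitude should come out near the 4.23e-3 degrees mentioned above.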
I look forward to whether you uncover an explanation, but a difference of this magnitude seems common between the various astronomy libraries and web sites — for some reason they all seem to do Earth satellites a tiny bit differently.
Here's what we get throughout the year, sampled at 1-hour intervals:
This time it is Sentinel-1A; the target location stayed the same, and the start time was 1 March 2020. What's very interesting to me is that, around half a year into the future, the noise becomes a lot smaller. Does the SGP4 propagator change something at this point in time?
Does the SGP4 propagator change something at this point in time?
Wow, those are interesting graphs! I'm not enough of an expert to know about SGP4's behavior, unfortunately. All I have experience with is hooking the algorithm up to Python, so its behaviors are still largely unknown to me.
It would be interesting to see those graphs for the satellite x,y,z positions.
Interestingly, they show a very periodic behavior:
I am not sure how this is compatible with the ever-increasing Lon difference, for example. I will now check the influence of the orbit.
So — if I'm understanding these graphs, the significant thing is that all three of them are centered exactly on the value zero! The differences between the two libraries when it comes to generating GCRF ("J2000") positions are purely transient ones that amount to exactly zero over time periods of hundreds of days.
It's very interesting that Z shows solid variation, whereas X and Y seem to trade it off between each other over the year — as though there's a variation that is related to the Earth-Sun direction and that slowly slews around through 360° over the whole year.
But, all of that being said: I think that means that the lag in longitude must happen after both libraries are done producing x,y,z coordinates, because otherwise we would see the x or y of one library always leading the other one as they chase each other around through the year, but instead we see them as often ahead of as behind the other in your graphs above.
All of which points us to the same place we already expected: the persistent difference you saw in longitude does not relate to the satellite positions themselves, but to something about how the two libraries associate clock time with the Earth's rotation, putting slightly different longitudes beneath the same spot at the same moment.
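A back-of-envelope figure of my own for the scale involved: the Earth turns through about 15 arcseconds of longitude per second of clock time, so a persistent offset of roughly 9 arcseconds corresponds to only about 0.6 seconds in how clock time is mapped onto the Earth's rotation angle.
# Arcseconds of longitude swept per second of clock time (sidereal day ~86164 s)
arcsec_per_second = 360.0 * 3600.0 / 86164.1
print(round(arcsec_per_second, 2))                   # ~15.04
print(round(9.0 / arcsec_per_second, 2))             # ~0.6 s for a 9-arcsecond offset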
I believe you are right. But here's an interesting thing. To get some more statistics on this behaviour, I employed part of Planet's Flock constellation, 127 CubeSats to be precise, and ran the whole thing with them for a few days and a few hundred point target accesses.
For about 90% of all accesses, SF and STK give more or less exactly the same result. Lat and Lon are within a few arcseconds of each other, corresponding to the aforementioned few hundred meters. But every now and then there is a huge difference in how the access (line of sight between point target and satellite) is computed, despite the fact that I am not using any digital elevation model.
An example: Flock-3K-6 (SCN 43896) shows the expected behaviour in terms of the LLA difference between STK and SF (except for the constant offset in Lon):
If I now try to compute when the satellite comes above the horizon (at elevation = 0), I get vastly different results. Here, STK and SF differ by a few tens of seconds (STK earlier) and hence LLA differences are huge.
This effect is what initially triggered the issue for me. I have a clearer view now and will try to check the two elevation-angle computations against each other. Currently, I am using the SF function topocentric.altaz() to do so and interpolate linearly to get the zero-elevation time and subpoint. And I am a bit confused that I get good results in most cases but not all ... wait a moment - this is for apparent positions only, correct? Shoot - that might be the reason for my troubles!
The two libraries might be making different calculations of altitude. Satellite libraries often account for atmospheric refraction; maybe that makes a difference? Here is how altaz() can be told to use refraction:
https://rhodesmill.org/skyfield/api-position.html#skyfield.positionlib.Apparent.altaz
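For concreteness, here is a minimal sketch of mine (not from the thread) of those opt-in refraction arguments, reusing the TLE file, target, and satellite mentioned earlier, and assuming the topocentric satellite position accepts the same keyword arguments documented at the link above:
from skyfield.api import Topos, load
ts = load.timescale()
sats = load.tle('active.txt')                        # same TLE file as above
sat = sats[43896]                                    # Flock-3K-6
target = Topos(latitude_degrees=47.981752, longitude_degrees=7.771864)
topocentric = (sat - target).at(ts.utc(2020, 3, 2, 10, 0, 0))
# Default: purely geometric altitude, no refraction applied
alt_geo, az, distance = topocentric.altaz()
# Opt in to a standard-atmosphere refraction model
alt_std, az, distance = topocentric.altaz('standard')
# Or specify the local conditions explicitly
alt_loc, az, distance = topocentric.altaz(temperature_C=10.0, pressure_mbar=1010.0)
print(alt_geo.degrees, alt_std.degrees, alt_loc.degrees)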
I would say, try this. Choose a particular pass of the satellite. Ask both libraries for the moment of its rising. Take that time, and ask each library:
Maybe the other library does refraction by default, whereas — since it's always a guess — Skyfield has users opt-in to its being included?
I checked as per your suggestion:
Maybe I have to take a step back here: what I need in the end is the LLA of the satellite when it comes over the horizon (start and end). So far, I am using altaz() to do so, and this seems to be the source of the disagreement with STK.
I checked the elevation, and the discrepancy is strongest when we have accesses at very low elevations (hence the less interesting ones for most payloads). I tried to reduce the difference by means of different interpolation methods (linear vs. quadratic vs. cubic spline), but the effect was minimal.
Would you have any other idea on how this discrepancy could be resolved? Is there another way of computing the elevation angle, for example?
I had not even thought of double-checking light-time propagation; that was a good thing to check.
Could you take one pass of the satellite, have both libraries produce every-second altitude and azimuth values over the duration of the pass, and graph the two on the same plot so we can see the discrepancy? And then maybe also do a plot of the differences. That might give us a feel physically for how the predictions differ.
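The Skyfield half of that comparison might look something like this (a sketch of mine; the pass window here is hypothetical):
# Altitude and azimuth at 1-second steps across an assumed 10-minute pass window
import numpy as np
from skyfield.api import Topos, load
ts = load.timescale()
sats = load.tle('active.txt')
sat = sats[43896]
target = Topos(latitude_degrees=47.981752, longitude_degrees=7.771864)
seconds = np.arange(0, 600)
t = ts.utc(2020, 3, 2, 10, 0, seconds)
alt, az, distance = (sat - target).at(t).altaz()
np.savetxt('skyfield_pass.csv',
           np.column_stack([seconds, alt.degrees, az.degrees]),
           delimiter=',', header='second,alt_deg,az_deg')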
I just did as you recommended and am blown away by the result (single pass, 1 second time steps):
A perfect match. But I found the explanation already: I used 60-second steps in SF and interpolated between them. This is a problem because the "bend" in the elevation curve is hardly well modeled using only 60-second time steps. What I need to do is converge closer to the actual zero-elevation point in time (i.e. closer than 30 seconds) and then interpolate. It will be a bit slower but has higher accuracy.
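A sketch of that kind of refinement (my own illustration, not the exact routine used here): bracket the horizon crossing with the coarse 60-second samples, then bisect down to well under a second before reading off the subpoint.
from skyfield.api import Topos, load
ts = load.timescale()
sat = load.tle('active.txt')[43896]                  # Flock-3K-6, as above
target = Topos(latitude_degrees=47.981752, longitude_degrees=7.771864)

def altitude_deg(t):
    return (sat - target).at(t).altaz()[0].degrees

def refine_rising(t_below, t_above, tol_seconds=0.5):
    # Bisect between a below-horizon and an above-horizon sample
    while (t_above.tt - t_below.tt) * 86400.0 > tol_seconds:
        t_mid = ts.tt_jd((t_below.tt + t_above.tt) / 2.0)
        if altitude_deg(t_mid) < 0.0:
            t_below = t_mid
        else:
            t_above = t_mid
    return t_above

# Example: two coarse 60-second samples that straddle a rising
t0 = ts.utc(2020, 3, 2, 10, 0, 0)
t1 = ts.utc(2020, 3, 2, 10, 1, 0)
if altitude_deg(t0) < 0.0 < altitude_deg(t1):
    t_rise = refine_rising(t0, t1)
    print(t_rise.utc_iso(), sat.at(t_rise).subpoint())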
Thanks so much for your help with this! I now have the feeling that I understand the process a lot better. I will await your response and then close the issue.
To clarify whether I am understanding the graph: this satellite rose only about ⅕° above the horizon before falling back below it?
Yes, that is correct. As stated before
the discrepancy is strongest when we have accesses at very low elevations
and this was the most extreme case I could find a pass for.
Wow, it only barely grazed the horizon! I was asking in case I misunderstood the units — yes, that was a nicely extreme case.
Feel free to go ahead and close. I am still not sure where the slight difference in longitude might be coming from, but if the resulting alt-az graph is close enough for your purposes, then we can leave that mystery for another day.
Thanks for being willing to make so many graphs in response to my questions!
Basically, if I only have support points every 60 seconds for the interpolation, this yields a large error in the zero-elevation time and hence a large offset in the access LLA.
Thanks for being so responsive and helping me out here. I am sure there are more pressing issues you need to be considering right now ;) Keep up the good work with Skyfield. Should we ever meet, beers are on me.
… beers are on me.
Sounds good, I look forward to it! :)
Just a final note: I implemented a more accurate access determination routine and now have a deviation from STK as would be expected from our discussion.
Correct matches: 757 of 757
Average start time difference (s): 0.020
Average end time difference (s): 0.020
Average lat start difference (deg): 0.000130
Average lat end difference (deg): 0.000127
Average lon start difference (deg): 0.000101
Average lon end difference (deg): 0.000096
Average alt start difference (m): 0.042
Average alt end difference (m): 0.036
It's beautiful to see so many leading zeros. :) Thanks for sharing!
Dear Brandon,
First of all, thanks a lot for providing this amazing library; it is highly appreciated. I am currently working on a project which determines sensor accesses from a variety of platforms. To that end, I use Skyfield to compute satellite accesses, making use of the Topos.at() and Topocentric.altaz() functionality. As the starting point of my custom sensor propagation routine, I first compute the time of zero elevation (using linear interpolation between SGP4 time steps) and then retrieve the sub-satellite point LLA:
satellite.at(TimeOfZeroElevation).subpoint()
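Spelled out, that interpolation step might look roughly like this (a sketch of mine with hypothetical variable names, not the actual routine):
from skyfield.api import Topos, load
tScale = load.timescale()
Satellite = load.tle('active.txt')[40697]            # Sentinel-2A, as in the script earlier in the thread
Target = Topos(latitude_degrees=47.981752, longitude_degrees=7.771864)

def Elevation(t):                                    # elevation of the satellite above Target, in degrees
    return (Satellite - Target).at(t).altaz()[0].degrees

t0 = tScale.utc(2020, 3, 2, 10, 0, 0)                # two consecutive SGP4 time steps
t1 = tScale.utc(2020, 3, 2, 10, 1, 0)
e0, e1 = Elevation(t0), Elevation(t1)
if e0 < 0.0 < e1:                                    # the satellite rises between t0 and t1
    frac = -e0 / (e1 - e0)                           # linear estimate of the zero crossing
    TimeOfZeroElevation = tScale.tt_jd(t0.tt + frac * (t1.tt - t0.tt))
    SubPoint = Satellite.at(TimeOfZeroElevation).subpoint()
    print(SubPoint.latitude.degrees, SubPoint.longitude.degrees, SubPoint.elevation.m)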
To check the result, I compare the output with AGI's STK software, which I am personally not a big fan of but trust very much in terms of results. Using the same TLE file as input (from celestrak), I found some major deviation, in particular in the latitudes and longitudes of the satellite subpoint at a given time:
While the difference in altitude is marginal for my intended application, the Lat and Lon differences correspond to a few tens of kilometers. And that causes the routine to miss a few sensor accesses every now and then, in particular if you consider small fields of view, e.g. for highly resolved optical sensors.
I first believed that this had to do with inaccuracies stemming from the interpolation process, so I compared values at the defined time steps used in the SGP4 propagation. But this had no influence.
Next, I also used Orekit's SGP4 propagator to check against STK and ended up with values very similar to STK's. So my guess is that either I am doing something wrong or there might be a potential issue with the subpoint() computation.
I would be very happy if you could help me out.