CooperNederhood opened 6 years ago
That's an interesting question. The way I understand it: in section 3.1, we use the ancient dataset, including the known cities' locations, to get GMM estimates of the lost cities' locations. Then, in the proof section, we treat those estimated lost-city locations as if they were known data in the ancient dataset and use the same model to estimate the location of some picked known city. In other words, we first use some known x to get an estimate of y, say y'; then we plug y' back in as known data and estimate x, call it x'; and finally we compare x with x' to check the robustness of the method. That's what I understand.
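Roughly, a toy sketch of that two-step check might look like the following. This is not the paper's code: the exponential trade-vs-distance moment, `theta`, the `estimate_city` helper, and the choice of which cities are treated as "lost" are all made-up assumptions for illustration only.

```python
# Toy illustration of the two-step check: known x -> estimate y' (lost cities),
# then treat y' as known and re-estimate a picked known city as x'.
# NOT the paper's estimator; exp(-theta * distance) is an assumed toy moment.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
theta = 1.5                                                # assumed decay parameter
coords = rng.uniform(0, 10, size=(10, 2))                  # 10 toy cities
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
trade = np.exp(-theta * dist) + 0.01 * rng.standard_normal(dist.shape)
np.fill_diagonal(trade, 0.0)

def estimate_city(i, anchor_idx, anchor_coords):
    """Estimate city i's coordinates from its trade flows with anchor cities."""
    def loss(xy):
        d = np.linalg.norm(anchor_coords - xy, axis=1)
        return np.sum((trade[i, anchor_idx] - np.exp(-theta * d)) ** 2)
    return minimize(loss, x0=anchor_coords.mean(axis=0), method="Nelder-Mead").x

lost = [8, 9]                                              # pretend these are the lost cities
known = [j for j in range(10) if j not in lost]

# Step 1: known x -> estimated lost-city locations y'
y_hat = np.vstack([estimate_city(i, known, coords[known]) for i in lost])

# Step 2: treat y' as known, pretend one known city is lost, re-estimate it (x')
pick = known[0]
rest = [j for j in known if j != pick]
anchors_idx = rest + lost
anchors_coords = np.vstack([coords[rest], y_hat])
x_hat = estimate_city(pick, anchors_idx, anchors_coords)

print("true x :", coords[pick].round(2))
print("est  x':", x_hat.round(2))
```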
How big is the sample size, though? Would removing that one city have a significant impact on the GMM estimates?
This was a fascinating read, thank you for presenting. I thought the "losing" known cities test was a clever way to do a pseudo out-of-sample test of your methodology.
However, you write "First, we use our ancient trade dataset, setting the locations of lost cities to their GMM estimates from section 3.1" [emphasis added]. But these GMM estimates partly reflect knowing the location of the city that we now seek to rediscover. If you completely drop known city i from the training data, then re-estimate, can you still rediscover the location?
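To make the suggestion concrete, here is a toy sketch of that stricter leave-one-out check. Again, this is not the paper's estimator: the exp(-theta * distance) moment and the `rediscover` helper are made up, and a full version would also re-estimate the lost cities' locations without city i before trying to rediscover it.

```python
# Toy leave-one-out check: drop known city i entirely, then try to rediscover
# its location from its trade flows with the remaining known cities only.
# NOT the paper's estimator; exp(-theta * distance) is an assumed toy moment.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta = 1.5                                                # assumed decay parameter
coords = rng.uniform(0, 10, size=(8, 2))                   # toy "known" cities
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
trade = np.exp(-theta * dist) + 0.01 * rng.standard_normal(dist.shape)
np.fill_diagonal(trade, 0.0)

def rediscover(i):
    """Re-estimate city i's location using only the other known cities.
    (A full version would also re-estimate the lost cities without city i.)"""
    others = np.delete(np.arange(len(coords)), i)
    def loss(xy):
        d = np.linalg.norm(coords[others] - xy, axis=1)
        return np.sum((trade[i, others] - np.exp(-theta * d)) ** 2)
    return minimize(loss, x0=coords[others].mean(axis=0), method="Nelder-Mead").x

for i in range(len(coords)):
    est = rediscover(i)
    err = np.linalg.norm(est - coords[i])
    print(f"city {i}: rediscovery error {err:.3f}")
```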