QuantEcon / Expectations.jl

Expectation operators for Distributions.jl objects
MIT License

Ensure coverage of QuantEcon lecture notes expectations #13

Closed jlperla closed 6 years ago

jlperla commented 6 years ago

@Nosferican I tried to go through all of the expectations used in the QuantEcon lectures, so here is a list of the things that @ArnavSood could target to make sure the library can handle lecture note rewrites.

A few clear priorities:

I looked through the lectures for the key places where QuantEcon expectations come up, to make sure these can be fulfilled:

arnavs commented 6 years ago

I'm having some trouble with the accuracy on the normal distribution (toy example: x -> abs(x)). The most accurate result overall seems to come at around 198 nodes, which still only gets us 2 decimal places.

The issue with larger numbers of nodes is that we get absurdly small values, on the order of 1e-234, and a lot of them just come out as zero. I tried some naive BigFloat conversions, but that didn't make any improvement. Let me know how I should proceed.

arnavs commented 6 years ago

So the error magnitude is monotonically decreasing (modulo the sign flipping with parity) until n = 198, at which point I guess we hit the limits of floating-point arithmetic and bad things happen.

(Edit: Fixed pprint).
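For reference, a sweep along these lines (assuming plain Gauss-Hermite nodes from FastGaussQuadrature; the actual package code may differ) produces the numbers below:

```julia
using FastGaussQuadrature

f(x) = abs(x)              # the non-smooth toy function
exact = sqrt(2 / pi)       # E|X| = sqrt(2/pi) for X ~ N(0, 1)

for n in 1:200
    x, w = gausshermite(n)                            # nodes/weights for the weight e^(-x^2)
    approx = sum(w .* f.(sqrt(2) .* x)) / sqrt(pi)    # change of variables to N(0, 1)
    println("n = $n, % error = ", 100 * (exact - approx) / exact)
end
```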

n = 1, % error = 100.0
n = 2, % error = -25.33141373155007
n = 3, % error = 27.639874544173214
n = 4, % error = -11.303549110349707
n = 5, % error = 16.47563149371697
n = 6, % error = -7.292219343478747
n = 7, % error = 11.754037621599968
n = 8, % error = -5.383170449416915
n = 9, % error = 9.138691421290764
n = 10, % error = -4.26636710249508
n = 11, % error = 7.476098114931476
n = 12, % error = -3.533332169380789
n = 13, % error = 6.325585286986237
n = 14, % error = -3.0152506233401914
n = 15, % error = 5.482059128371678
n = 16, % error = -2.629663865927483
n = 17, % error = 4.837082763846168
n = 18, % error = -2.3315085042773123
n = 19, % error = 4.32792194047939
n = 20, % error = -2.094075874954416
n = 21, % error = 3.9157576224607915
n = 22, % error = -1.9005305226096507
n = 23, % error = 3.575279739311255
n = 24, % error = -1.739733945610533
n = 25, % error = 3.289280392204266
n = 26, % error = -1.6040228228683735
n = 27, % error = 3.0456517010696658
n = 28, % error = -1.487951780644663
n = 29, % error = 2.8356265695470104
n = 30, % error = -1.3875451334427027
n = 31, % error = 2.6527007230632886
n = 32, % error = -1.2998324002681096
n = 33, % error = 2.4919468607763706
n = 34, % error = -1.2225495029917886
n = 35, % error = 2.3495640169431047
n = 36, % error = -1.1539405671539291
n = 37, % error = 2.2225731062798406
n = 38, % error = -1.0926229083143981
n = 39, % error = 2.108606163770545
n = 40, % error = -1.0374928890613468
n = 41, % error = 2.0057572790192615
n = 42, % error = -0.9876589058384491
n = 43, % error = 1.9124751350622795
n = 44, % error = -0.9423928024426794
n = 45, % error = 1.8274842061542598
n = 46, % error = -0.9010940579991134
n = 47, % error = 1.7497260767802711
n = 48, % error = -0.8632629954576764
n = 49, % error = 1.6783151323887222
n = 50, % error = -0.8284804666049124
n = 51, % error = 1.6125046763372217
n = 52, % error = -0.7963922577867941
n = 53, % error = 1.551660718759118
n = 54, % error = -0.7666969842603465
n = 55, % error = 1.4952414843871875
n = 56, % error = -0.7391365953847877
n = 57, % error = 1.442781234611195
n = 58, % error = -0.7134888565190284
n = 59, % error = 1.393877380011253
n = 60, % error = -0.6895613436166537
n = 61, % error = 1.3481801281533954
n = 62, % error = -0.6671866069432412
n = 63, % error = 1.3053841032417492
n = 64, % error = -0.6462182467315009
n = 65, % error = 1.2652215129114823
n = 66, % error = -0.6265277062673986
n = 67, % error = 1.2274565388731586
n = 68, % error = -0.6080016339177867
n = 69, % error = 1.1918807030945453
n = 70, % error = -0.590539699721663
n = 71, % error = 1.1583090171508863
n = 72, % error = -0.5740527777141015
n = 73, % error = 1.126576764552813
n = 74, % error = -0.5584614244512276
n = 75, % error = 1.0965367978946072
n = 76, % error = -0.543694598915556
n = 77, % error = 1.0680572572183595
n = 78, % error = -0.5296885802692075
n = 79, % error = 1.04101963495016
n = 80, % error = -0.5163860486841234
n = 81, % error = 1.0153171275071915
n = 82, % error = -0.5037353012776126
n = 83, % error = 0.9908532252185924
n = 84, % error = -0.4916895805397388
n = 85, % error = 0.9675405013185582
n = 86, % error = -0.4802064968645821
n = 87, % error = 0.9452995679642427
n = 88, % error = -0.4692475301499717
n = 89, % error = 0.9240581730153954
n = 90, % error = -0.4587775981222883
n = 91, % error = 0.9037504159168789
n = 92, % error = -0.44876468118174173
n = 93, % error = 0.8843160647547962
n = 94, % error = -0.43917949533062856
n = 95, % error = 0.8656999595749284
n = 96, % error = -0.4299952061372227
n = 97, % error = 0.8478514895207574
n = 98, % error = -0.4211871778546074
n = 99, % error = 0.8307241333416406
n = 100, % error = -0.4127327527535198
n = 101, % error = 0.8142750544951828
n = 102, % error = -0.40461105651141865
n = 103, % error = 0.7984647434203332
n = 104, % error = -0.39680282612937284
n = 105, % error = 0.7832567006931114
n = 106, % error = -0.38929025740191425
n = 107, % error = 0.7686171557124112
n = 108, % error = -0.3820568693896764
n = 109, % error = 0.7545148163611399
n = 110, % error = -0.375087383724754
n = 111, % error = 0.7409206457242203
n = 112, % error = -0.36836761689063147
n = 113, % error = 0.7278076625204081
n = 114, % error = -0.36188438387803407
n = 115, % error = 0.7151507623573924
n = 116, % error = -0.3556254118349827
n = 117, % error = 0.7029265573195539
n = 118, % error = -0.3495792625289817
n = 119, % error = 0.6911132317293229
n = 120, % error = -0.34373526258503717
n = 121, % error = 0.6796904122168466
n = 122, % error = -0.33808344060515894
n = 123, % error = 0.6686390504612625
n = 124, % error = -0.3326144703932482
n = 125, % error = 0.6579413171905257
n = 126, % error = -0.3273196195988792
n = 127, % error = 0.6475805061972993
n = 128, % error = -0.3221907031926417
n = 129, % error = 0.6375409472799497
n = 130, % error = -0.31722004124701686
n = 131, % error = 0.6278079271637784
n = 132, % error = -0.3124004205650802
n = 133, % error = 0.6183676175519459
n = 134, % error = -0.3077250597576001
n = 135, % error = 0.6092070095742788
n = 136, % error = -0.30318757740472047
n = 137, % error = 0.6003138539770945
n = 138, % error = -0.29878196299833365
n = 139, % error = 0.5916766064744171
n = 140, % error = -0.29450255037928263
n = 141, % error = 0.5832843777551939
n = 142, % error = -0.2903439934251627
n = 143, % error = 0.5751268876791629
n = 144, % error = -0.28630124377054317
n = 145, % error = 0.5671944232681301
n = 146, % error = -0.28236953036018486
n = 147, % error = 0.5594778001239095
n = 148, % error = -0.2785443406669698
n = 149, % error = 0.5519683269539164
n = 150, % error = -0.2748214034152781
n = 151, % error = 0.5446577729173999
n = 152, % error = -0.2711966726703727
n = 153, % error = 0.5375383375301087
n = 154, % error = -0.26766631317236045
n = 155, % error = 0.5306026229010953
n = 156, % error = -0.2642266867988341
n = 157, % error = 0.5238436080899639
n = 158, % error = -0.26087434006011584
n = 159, % error = 0.5172546254000131
n = 160, % error = -0.25760599253260774
n = 161, % error = 0.5108293384388355
n = 162, % error = -0.2544185261531487
n = 163, % error = 0.5045617217928144
n = 164, % error = -0.25130897529560786
n = 165, % error = 0.4984460421811861
n = 166, % error = -0.24827451756837804
n = 167, % error = 0.49247684096248817
n = 168, % error = -0.24531246527039113
n = 169, % error = 0.48664891788511044
n = 170, % error = -0.24242025745073384
n = 171, % error = 0.4809573159769213
n = 172, % error = -0.23959545252444353
n = 173, % error = 0.47539730748317754
n = 174, % error = -0.23683572139745215
n = 175, % error = 0.4699643807687415
n = 176, % error = -0.23413884106221897
n = 177, % error = 0.46465422810817486
n = 178, % error = -0.2315026886258703
n = 179, % error = 0.4594627342946772
n = 180, % error = -0.22892523573631007
n = 181, % error = 0.45438596600396036
n = 182, % error = -0.22640454337840255
n = 183, % error = 0.44942016185556305
n = 184, % error = -0.2239387570079875
n = 185, % error = 0.4445617231181609
n = 186, % error = -0.22152610200220127
n = 187, % error = 0.4398072050134115
n = 188, % error = -0.21916487940022336
n = 189, % error = 0.43515330856911966
n = 190, % error = -0.2168534619129916
n = 191, % error = 0.43059687298640753
n = 192, % error = -0.21459029018356074
n = 193, % error = 0.42613486848028614
n = 194, % error = -0.21237386927886082
n = 195, % error = 0.4217643895612221
n = 196, % error = -0.21020276539658847
n = 197, % error = 0.4174826487263636
n = 198, % error = -0.20807560277216236
n = 199, % error = 100.0
n = 200, % error = 100.0
jlperla commented 6 years ago

Since abs(x) is not differentiable, I don't think the usual convergence properties hold, especially since the mass is concentrated so close to the kink in the derivative. Are you seeing issues with smoother functions?

arnavs commented 6 years ago

That’s a good point. I’ll check it out.

arnavs commented 6 years ago

@jlperla Yeah, this is highly accurate at around 30 nodes for smooth functions that are nonetheless off the beaten track (e.g., x -> sin(x)^2). So we're good here.
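For concreteness, a check along those lines (just a sketch, not the package's test code): for X ~ N(0, 1) we have E[sin(X)^2] = (1 - exp(-2)) / 2, and Gauss-Hermite with about 30 nodes recovers it to near machine precision.

```julia
using FastGaussQuadrature

g(x) = sin(x)^2
exact = (1 - exp(-2)) / 2        # closed form of E[sin(X)^2] for X ~ N(0, 1)

t, w = gausshermite(30)
approx = sum(w .* g.(sqrt(2) .* t)) / sqrt(pi)
@show abs(approx - exact)        # on the order of machine epsilon
```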

Nosferican commented 6 years ago

Should I assume this is ready for me to incorporate into the lecture re-writes?

jlperla commented 6 years ago

In expectation, definitely :-) @ArnavSood is tagging this to become a proper package soon, and it won't be hard to pass at least the key tests you require. @sglyon will give us some feedback at the right time, but the idea is that we are in a position to potentially deprecate the QuantEcon quadrature schemes.

The big one that will be missing is Markov-chain conditional expectations... but I think we are better off skipping those and leaving them the way they are in the notes for now.

sglyon commented 6 years ago

I haven't followed along too closely, but I have had great success with the qnwdist function in quantecon (one of the only ones that didn't come from Miranda and Fackler). It does a good job of spanning the support of the distribution without going so far out into the tails that we end up with really small numbers.

I think we should preserve that type of algorithm.
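For anyone unfamiliar with it, a rough sketch of that type of algorithm (not qnwdist itself, just the idea, using only Distributions.jl calls): put nodes at evenly spaced quantiles so we never wander far into the tails, and weight each node by the probability mass of its quantile cell. The helper name and cutoffs below are illustrative.

```julia
using Distributions

# nodes at evenly spaced quantiles; weights = probability mass of each quantile cell
function quantile_nodes_weights(d::UnivariateDistribution, N; qlo = 1e-3, qhi = 1 - 1e-3)
    qs = range(qlo, qhi, length = N)
    nodes = quantile.(Ref(d), qs)
    edges = [qlo; (qs[1:end-1] .+ qs[2:end]) ./ 2; qhi]   # cell boundaries in quantile space
    weights = diff(edges)
    return nodes, weights ./ sum(weights)                 # renormalize to sum to one
end

nodes, weights = quantile_nodes_weights(Exponential(1.0), 50)
@show sum(weights .* nodes)   # ≈ 1, the mean of Exponential(1.0), up to truncation error
```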

arnavs commented 6 years ago

@sglyon Currently we use three types of quadrature rules. The first is ordinary trapezoidal integration, for situations where we pick specific nodes. The second is the family of distribution-specific quadrature rules from Miranda-Fackler, by way of QuantEcon. And the third is Gauss-Legendre integration for distributions on compact supports where we don’t have a special rule.

I think the latter two use cases are both plausible applications, if it does better than the M/F-specific quadrature rules. But @jlperla and @nosferican could tell you more.
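For the third case, a sketch of the Gauss-Legendre fallback on a compact-support distribution (the helper name and defaults here are illustrative, not the package's internals):

```julia
using Distributions, FastGaussQuadrature

# E[f(X)] for X ~ Beta on [0, 1], via Gauss-Legendre nodes mapped from [-1, 1]
function expect_gl(f, d::Beta; n = 32)
    x, w = gausslegendre(n)
    t = (x .+ 1) ./ 2                          # map [-1, 1] -> [0, 1]
    return 0.5 * sum(w .* f.(t) .* pdf.(Ref(d), t))
end

d = Beta(2, 3)
@show expect_gl(identity, d)                   # ≈ mean(d) = 0.4
@show expect_gl(x -> x^2, d)                   # ≈ var(d) + mean(d)^2 = 0.2
```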

jlperla commented 6 years ago

"...but I have had great success with the qnwdist function in quantecon"

Great, let's make sure to merge it! Sorry, I didn't notice that this was a non-Gaussian quadrature method. The type design allows us to add in multiple dispatch for different quadrature algorithms, so I think the code could largely be merged in for something like that with little trouble. I added #16 so we don't forget about it.

"The first is ordinary trapezoidal integration, for situations where we pick specific nodes."

Just to be clear for everyone: integrating with the trapezoid (or Simpson's) rule is never ideal if you have control over exactly where the nodes are. But if you have to stick with a particular set of nodes (e.g. when discretizing a function with finite differences), then you have no other choice.
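A sketch of that fixed-grid case, building trapezoid weights from whatever nodes the rest of the problem dictates (illustrative only, not the package's implementation):

```julia
using Distributions

# trapezoid weights for an arbitrary sorted grid
function trapz_weights(x::AbstractVector)
    dx = diff(x)
    w = zeros(length(x))
    w[1:end-1] .+= dx ./ 2
    w[2:end]   .+= dx ./ 2
    return w
end

grid = collect(range(-5.0, 5.0, length = 201))    # a grid fixed by the problem, not by the quadrature rule
d = Normal(0.0, 1.0)
w = trapz_weights(grid) .* pdf.(Ref(d), grid)
w ./= sum(w)                                      # renormalize, since the grid truncates the tails
@show sum(w .* grid .^ 2)                         # ≈ 1.0, the variance of N(0, 1)
```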

jlperla commented 6 years ago

One other thing, although maybe I missed the point:

"if it does better than the M/F specific quadrature rules"

Just to be clear, I don't think we can do better than the M/F-specific quadrature rules, as they are mostly the standard Gaussian quadrature rules appropriate for those distributions. What we can do is (1) use multiple dispatch to choose the right quadrature rule given the distribution type; and (2) use the state of the art for generating quadrature weights and nodes (i.e. https://github.com/ajt60gaibb/FastGaussQuadrature.jl ), which Sheehan is maintaining.
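A sketch of what that dispatch could look like (the names nodes_weights and expect are illustrative, not the package's API), with nodes and weights coming from FastGaussQuadrature:

```julia
using Distributions, FastGaussQuadrature

# Normal(μ, σ) -> Gauss-Hermite, with the change of variables x = μ + √2 σ t
function nodes_weights(d::Normal, n)
    t, w = gausshermite(n)
    μ, σ = params(d)
    return μ .+ sqrt(2) * σ .* t, w ./ sqrt(pi)
end

# Uniform(a, b) -> Gauss-Legendre mapped onto [a, b]
function nodes_weights(d::Uniform, n)
    t, w = gausslegendre(n)
    a, b = params(d)
    return (b - a) / 2 .* t .+ (a + b) / 2, w ./ 2
end

# generic expectation operator: dispatch picks the rule from the distribution type
function expect(f, d, n = 21)
    x, w = nodes_weights(d, n)
    return sum(w .* f.(x))
end

@show expect(identity, Normal(1.0, 2.0))   # ≈ 1.0, the mean
@show expect(x -> x^2, Uniform(0.0, 1.0))  # ≈ 1/3
```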

jlperla commented 6 years ago

Since #12 is already posted, I think you could close this as well after double-checking that it is otherwise completed.

arnavs commented 6 years ago

Yeah, we're good. @nosferican is now working on the implementation, and he will let me know if there are any issues or gaps in coverage.