szcf-weiya / ESL-CN

Chinese translation of The Elements of Statistical Learning (ESL), with code implementations and solutions to the exercises.
https://esl.hohoweiya.xyz
GNU General Public License v3.0

Ex. 9.5 #236

Open szcf-weiya opened 3 years ago

szcf-weiya commented 3 years ago

(image attachment)

szcf-weiya commented 3 years ago

The following calculation assumes R_m is fixed, but I think it should not be fixed (see the first few sentences of the attached calculation). However, I failed to obtain a closed form when allowing R_m to depend on y.

(two attached images showing the calculation)
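Roughly, the fixed-partition argument goes as follows (only a sketch; the details are in the attached images). With the df definition from the exercise,

\[
\mathrm{df}(\hat y) = \frac{1}{\sigma^2}\sum_{i=1}^{N}\operatorname{Cov}(y_i, \hat y_i),
\]

and with the partition \(\{R_m\}_{m=1}^{M}\) held fixed, the tree fit is \(\hat y_i = \bar y_{R_m}\) for \(x_i \in R_m\), i.e. \(\hat y = S y\) with \(S\) the projection onto the \(M\) region indicators, so

\[
\mathrm{df} = \frac{1}{\sigma^2}\operatorname{tr}\operatorname{Cov}(\hat y, y)
= \frac{1}{\sigma^2}\operatorname{tr}(\sigma^2 S) = \operatorname{tr}(S) = M.
\]

When the splits are chosen greedily from the same y, S itself depends on y and this argument breaks down, which is why the df is estimated by simulation below.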

szcf-weiya commented 3 years ago

binary greedy partition (without pruning)

For simplicity, I wrote a no-pruning version from scratch:

julia> includet("df_regtree.jl")

julia> [rep_calc_df(maxdepth=i) for i=0:4]
5-element Vector{Tuple{Float64, Float64}}:
 (1.080629443872466, 0.041187110826985264)
 (9.996914473623567, 0.11093327184178654)
 (21.34913882167176, 0.14589493792386243)
 (34.57260013395476, 0.2469943540268895)
 (47.90389232392168, 0.31056757408950914)

With maxdepth = i, the number of terminal nodes is M = 2^i, so the results show that the empirical df is much larger than M, except for M = 1.
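For context, the empirical df reported above is (presumably, since df_regtree.jl is not shown here) the Monte Carlo version of the covariance formula: with the design X held fixed, draw B independent response vectors y^{(b)} ~ N(0, I_N), refit the tree each time, and compute

\[
\widehat{\mathrm{df}} = \sum_{i=1}^{N}\widehat{\operatorname{Cov}}(y_i, \hat y_i)
= \sum_{i=1}^{N}\frac{1}{B-1}\sum_{b=1}^{B}\bigl(y_i^{(b)} - \bar y_i\bigr)\bigl(\hat y_i^{(b)} - \bar{\hat y}_i\bigr),
\]

since \(\sigma^2 = 1\), where \(\bar y_i\) and \(\bar{\hat y}_i\) are averages over the B replications.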

szcf-weiya commented 3 years ago

call tree::prune.tree

To return a tree with a given number of terminal nodes, I call tree::prune.tree, which has an argument best for specifying the number of terminal nodes. Note, however, that the specified best may not always be achievable, as mentioned in ?prune.tree:

If there is no tree in the sequence of the requested size, the next largest is returned.

In my experiments, I indeed found such cases.

> source("df_regtree.R")
> mean(replicate(10, calc_df(m=1)))
[1] 1.137945
> mean(replicate(10, calc_df(m=5)))
[1] 32.18979
> mean(replicate(10, calc_df(m=10)))
[1] 50.46288

Again, the estimated df values are much larger than m, except for m = 1.
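The actual df_regtree.R is not attached in this thread, so the following is only a minimal sketch of what calc_df() could look like; the sample sizes (n = 100, p = 10, following the setup of Ex. 9.5), the number of inner replications B, the tree.control settings, and the separate handling of m = 1 are all assumptions, not the original code.

library(tree)

# Sketch: fix the design X, regenerate y ~ N(0, I) B times, prune the tree to
# (about) m terminal nodes, and estimate df = sum_i cov(y_i, yhat_i) / sigma^2
# via sample covariances across the B replications (sigma^2 = 1 here).
calc_df <- function(m, n = 100, p = 10, B = 100) {
  X <- data.frame(matrix(rnorm(n * p), n, p))   # fixed predictors
  Y <- matrix(0, B, n)       # simulated responses, one row per replication
  Yhat <- matrix(0, B, n)    # corresponding fitted values
  for (b in 1:B) {
    y <- rnorm(n)
    if (m == 1) {
      Yhat[b, ] <- mean(y)   # a single terminal node is just the grand mean
    } else {
      dat <- cbind(y = y, X)
      fit <- tree(y ~ ., data = dat,
                  control = tree.control(nobs = n, mindev = 0, mincut = 1, minsize = 2))
      fit <- prune.tree(fit, best = m)  # may return the next larger size, see ?prune.tree
      Yhat[b, ] <- predict(fit, newdata = dat)
    }
    Y[b, ] <- y
  }
  sum(sapply(1:n, function(i) cov(Y[, i], Yhat[, i])))
}

# usage, matching the calls above:
# mean(replicate(10, calc_df(m = 5)))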

szcf-weiya commented 3 years ago

similar experiments in Ye (1998)

Ye, J. (1998). On Measuring and Correcting the Effects of Data Mining and Model Selection. Journal of the American Statistical Association, 93(441), 120–131. https://doi.org/10.2307/2669609

(two screenshots from the paper)

The paper also shows that the estimated (generalized) df is much larger than m.
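For reference, the generalized degrees of freedom (GDF) in Ye (1998) is defined, for a fitting procedure \(\hat\mu(\cdot)\) applied to \(y \sim N(\mu, \sigma^2 I)\), as

\[
\mathrm{GDF}(\hat\mu) = \sum_{i=1}^{n}\frac{\partial\, \mathbb{E}_\mu[\hat\mu_i(y)]}{\partial \mu_i},
\]

which for Gaussian y coincides with the covariance definition above, and which the paper estimates by a Monte Carlo perturbation scheme.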

And I obtain a close result if I set m=19 in my code:

> mean(replicate(10, calc_df(m=19)))
[1] 60.31616
litsh commented 10 months ago

Thanks for your great solution. May I ask why the estimated degrees of freedom are so far from the one in theory?

szcf-weiya commented 10 months ago

@litsh What do you mean by "the one in theory"? Do you mean the number of terminal nodes? Actually, what I am trying to say here is that the number of nodes is not the true degrees of freedom. There is a gap, and the gap is referred to as the search cost.

If you are interested, you can check this paper on the excess degrees of freedom, which compares the lasso with best subset regression: Tibshirani, Ryan J. "Degrees of Freedom and Model Search." Statistica Sinica 25, no. 3 (2015): 1265–96.

I also discussed the search cost in degrees of freedom for more methods (including the regression tree considered here) in my paper: https://arxiv.org/abs/2308.13630

litsh commented 10 months ago

Thank you for your reply! I will read the paper.
