Open tjbencomo opened 5 years ago
Thanks @tjbencomo for the issue. If you look again at the tutorial you've mentioned, which is actually a blog post, you can find a first comment from 9 months ago that asks about the same thing.
surv_cutpoint seeks the cutpoint with the biggest/lowest value of the logrank statistic; it does not perform any statistical test. surv_pvalue is intended to return the p-value of a single test, since it doesn't know how many tests you have performed in your analysis.
One can calculate whatever p-value is needed and pass the value directly to the ggsurvplot function.
On the other hand, scientists tend to report the output of summary(lm) or summary(glm) without mentioning that those p-values are also not adjusted for multiple tests, even though there are multiple variables.
Hi @MarcinKosinski thanks for the response. Two questions: 1) The comment refers to multiple biomarkers to test. Doesn't this mean testing multiple variables simultaneously to evaluate if any of the variables are significant biomarkers? I'm referring to making multiple tests on a single variable. 2) So what pvalue would you recommend using? Isn't it necessary to use maxstat's pvalue as maxstat finds the cutpoint with the best logrank statistic. Without maxstat's pvalue approximation, the logrank test will be biased as it doesn't know multiple tests were performed to find the cutpoint being compared.
About 2) you don't need any test to find the value that maximises the logrank test statistic.
About 1) the person asks about multiple biomarkers but mentions multiple testing at the cutpoint selection stage. I would adjust the p-values for the logrank tests if multiple of them were performed, because you perform one for every considered biomarker.
I wouldn't adjust the single p-value for a single variable just because the cutpoint came from a method that maximises the logrank statistic; the method doesn't actually perform the test, and in my opinion the test is not needed for cutpoint selection.
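The per-biomarker adjustment described above (one log-rank test per biomarker) could be sketched as follows; the p-values here are made-up placeholders, and Benjamini-Hochberg is just one reasonable choice of correction:

```python
# Adjust one log-rank p-value per biomarker for multiple testing.
# The raw p-values below are hypothetical placeholders.
from statsmodels.stats.multitest import multipletests

raw_pvals = [0.001, 0.03, 0.04, 0.20]  # one log-rank test per biomarker
reject, adjusted, _, _ = multipletests(raw_pvals, alpha=0.05, method="fdr_bh")
for p, q in zip(raw_pvals, adjusted):
    print(f"raw={p:.3f}  BH-adjusted={q:.4f}")
```

Any of the other `method` options of `multipletests` (e.g. `"bonferroni"`) would work the same way here.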
Very interesting discussion
Hi @tjbencomo,
As stated by @MarcinKosinski, surv_cutpoint() seeks only the optimal cutpoint. There is no need to compute any p-value approximation to select the optimal cutpoint. This is also why the default value of the pmethod option in the maxstat() function is "none".
I also wouldn't adjust the single p-value.
Hi @kassambara, thanks, I think my confusion comes down to how maxstat and the log rank test are different. Maxstat's vignette explains maximally selected rank statistics as:
The functional relationship between a quantitative or ordered predictor X and a quantitative, ordered or censored response Y is unknown. As a simple model one can assume that an unknown cutpoint µ in X determines two groups of observations regarding the response Y : the first group with X-values less or equal µ and the second group with X-values greater µ. A measure of the difference between two groups with respect to µ is the absolute value of an appropriate standardized two-sample linear rank statistic of the responses. The hypothesis of independence of X and Y can be formulated as H0 : P(Y ≤ y|X ≤ µ) = P(Y ≤ y|X > µ) for all y and µ ∈ R. This hypothesis can be tested as follows. For every reasonable cutpoint µ in X (e.g. cutpoints that provide a reasonable sample size in both groups), the absolute value of the standardized two-sample linear rank statistic |Sµ| is computed. The maximum of the standardized statistics of all possible cutpoints is used as a test statistic for the hypothesis of independence above. The cutpoint in X that provides the best separation of the responses into two groups, i.e. where the standardized statistics take their maximum, is used as an estimate of the unknown cutpoint.
To me, it sounds like maxstat is testing whether the variable X is related to the censored survival variable Y. The null hypothesis is that the values of Y (survival time) are equivalent for the two groups determined by the cutpoint. If the pvalue maxstat reports is significant, then there exists a survival difference between the groups. This sounds very similar to the log rank test which according to Wikipedia:
The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time ... the null hypothesis (of the two groups having identical survival and hazard functions)
I interpreted this to mean maxstat and the log rank test are comparing very similar (if not the same) hypotheses. Maxstat mentions it uses the absolute value of an appropriate standardized two-sample linear rank statistic of the responses to measure the difference between the groups. I believe this is the log rank statistic as specified by smethod=LogRank in the survminer code. The p-value reported by maxstat indicates whether there is a significant survival difference between the two groups stratified by the maximal cutpoint. According to my interpretation, maxstat's p-value should be used because computing a test statistic for each cutpoint inflates your Type-I error, increasing the chance of false positives.
Does my interpretation make sense? Although you may not need to compute a p-value to find the optimal cutpoint, to make any conclusions regarding the hypothesis it seems you need to account for all the test statistics you calculated to find the maximum test statistic.
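As a rough, non-survival-specific illustration of the inflation argument above: if you compute a standardized statistic at each of many candidate cutpoints under the null and compare the maximum against a single-test threshold, the false-positive rate far exceeds the nominal level. (The statistics are treated as independent here, which real cutpoint statistics are not, so this is only qualitative.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_cutpoints = 5000, 20
# Under H0, each candidate cutpoint yields (roughly) a standard-normal statistic.
z = rng.standard_normal((n_sims, n_cutpoints))
max_abs = np.abs(z).max(axis=1)        # maximally selected statistic
naive_p = 2 * stats.norm.sf(max_abs)   # read the max off a single-test null
print("false-positive rate at nominal 0.05:", (naive_p < 0.05).mean())
```

With 20 independent candidate cutpoints the naive rate lands near 1 - 0.95^20 ≈ 0.64 rather than 0.05, which is exactly the Type-I inflation being discussed.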
But the test statistic is a formula. From the formula one can guess the optimal cutpoint. Taking all cutpoints to verify which gives the biggest value of the test statistic is blind brute force.
It's still an interesting question whether one should adjust the p-value for a test statistic when other statistics were calculated along the way, if they were calculated at all.
Nonetheless, you know you shouldn't dichotomize a continuous variable, and should instead include it as a continuous term in a proper statistical model? Problems caused by categorization are widely documented: http://biostat.mc.vanderbilt.edu/wiki/Main/CatContinuous
I would recommend putting a proper functional form of the variable into a Cox model, if the assumptions are met, or into a simple parametric Accelerated Failure Time model, instead of treating the estimates of the survival curve as the final result. That's just the start of the analysis.
The maxstat documentation states that it is not guessing the optimal cutpoint, but finding one through a brute-force search that examines every cutpoint. The hypothesis test conducted by maxstat (and its reported p-value) is critical to conclude whether or not the variable X has any effect on Y. Has survminer been published in a peer-reviewed journal you could point me towards, so I can better understand your methodology?
I agree that more extensive survival analysis should be used to make final conclusions; I'm just afraid that the current survminer implementation does not notify users that they need to account for multiple comparisons when using the optimal cutpoint.
Hi, I was trying to find a tool to identify this maximal-separation cutpoint, and I arrived at the same conclusions as the author of this issue. The way most users of survminer are going to use this is to first find the cutpoint and then compute the p-value. However, this inflates the test statistic: it will not produce the expected 5% of false positives under a random distribution at a p-value cutoff of 0.05, because you are testing multiple cutpoints (with multiple p-values) and selecting the best one. I strongly advise the authors and users of this package to use the maxstat p-value, which applies correction methodologies adapted to this particular case.

Edit: To illustrate the issue, I compared the distribution of p-values from maxstat with the condMC method vs. selecting the cutpoint first and then computing the log-rank p-value, on a random dataset. (This is in Python for convenience, using rpy2.) random_dist_surv.pdf As you can see in the jupyter report, the condMC method correctly returns a uniform distribution, and the p-values are completely incorrect when selecting the cutpoint first and then computing the p-value without any adjustment.
import numpy as np
import pandas as pd
from rpy2.robjects import r, pandas2ri
import rpy2.robjects as ro
from lifelines.statistics import logrank_test
from rpy2.robjects.packages import importr

pandas2ri.activate()
maxstat = importr("maxstat")
survival = importr("survival")
survminer = importr("survminer")

n = 1000
allPvalsMaxRank = []
allPvalsLogRank = []
for i in range(n):
    # Random values, time to event, and events
    df = pd.DataFrame()
    df["Val"] = np.random.normal(0.0, 1.0, 100)
    df["TTE"] = np.random.randint(1, 1000, 100)
    df["Event"] = np.random.randint(0, 2, 100).astype(bool)
    r_dataframe = ro.conversion.py2rpy(df)
    fml = ro.r("Surv(TTE, Event) ~ Val")
    # Compute cutoff point and adjusted (conditional Monte Carlo) p-value
    mstat = maxstat.maxstat_test(fml, data=r_dataframe,
                                 smethod="LogRank", pmethod="condMC")
    pval = mstat.rx2('p.value')[0]
    allPvalsMaxRank.append(pval)
    # Unadjusted log-rank p-value computed after cutoff selection
    gr1 = df["Val"] < mstat.rx2("estimate")[0]
    gr2 = np.logical_not(gr1)
    allPvalsLogRank.append(
        logrank_test(df["TTE"][gr1], df["TTE"][gr2],
                     df["Event"][gr1], df["Event"][gr2]).p_value
    )

import matplotlib.pyplot as plt

plt.figure(dpi=300)
plt.title("Distribution of Monte Carlo max-rank p-values on a random dataset")
plt.hist(allPvalsMaxRank, 20, density=True)
plt.show()

plt.figure(dpi=300)
plt.title("Distribution of p-values computed after cutoff selection on a random dataset")
plt.hist(allPvalsLogRank, 20, density=True)
plt.show()
I'm concerned that the cutpoint returned by surv_cutpoint() does not provide an adjusted p-value from the maxstat package. This leads users to use this cutpoint with the surv_pvalue() function, which does not account for multiple comparisons, so an unadjusted p-value that overestimates significance gets reported. The tutorial detailing how to determine optimal cutpoints does not mention the need to adjust for multiple comparisons, nor does it state that the p-value plotted with ggsurvplot(..., pval=TRUE, ...) does not adjust for multiple comparisons. surv_cutpoint() within surv_cutpoint.R sets maxstat to not compute a p-value at all. The maxstat package provides several different methods to approximate the adjusted p-value, described on pages 4-5 of the maxstat vignette. Monte Carlo simulations can also be used to calculate the exact conditional p-value with pmethod='condMC'.

Expected behavior
surv_cutpoint() should return a dataframe containing the cutpoint, statistic, and adjusted p-value from the maxstat package.

Actual behavior
surv_cutpoint() only returns the cutpoint and statistic, without any warning that the p-value needs to be adjusted for multiple comparisons. Documentation detailing how to use this functionality does not warn users about this issue and instead suggests using the unadjusted log-rank test to compute the p-value after a cutpoint has been determined.

Steps to reproduce the problem
The difference in p-values can be seen by using the TCGA data from the tutorial on determining cutpoints mentioned above. survminer's ggsurvplot reports p-value = 0.051. Maxstat reports p-value ≈ 0.473.
session_info()