xinyi030 / PHS43010_NonStatGroup


Literature Review and Methodology #3

Open xinyi030 opened 1 year ago

xinyi030 commented 1 year ago

This issue covers:


Responsibilities:

The assignees of this issue will have the following responsibilities:

Assignees: Ujjwal, Tongtong, Yiwei, William

xinyi030 commented 1 year ago

Hi All,

Just a quick reminder that you are supposed to clarify your plan for this issue by the end of May 13th. Please write down your specific task assignments and estimated finish times.

@TongtongJin @onebulrush @UjjwalSehrawat @cbayow22

And, thanks to William's work, here is a link to the papers downloaded from the reference: https://www.dropbox.com/sh/owow5m8nh0ydrf9/AAAmVjI0Bl2KRVq0hZBv0tIha?dl=0

Thanks, Xinyi

cbayow22 commented 1 year ago

I created a preliminary plan with tasks. Can we add timelines for this?

  1. Literature Review:

    • 1a Summarize the key findings and concepts from the CRM design review paper @cbayow22 - Due 5/14
    • 1b Summarize the key findings from main references
    • 1bi - references 1-5 @TongtongJin
    • 1bii - references 6-10 @onebulrush
    • 1biii - references 11-15 @UjjwalSehrawat
    • 1biv - references 16-20 @cbayow22 - Due 5/16
  2. Methodology

    • Describe the use of the logistic regression model in our study @TongtongJin
    • Justify the use of the logistic regression model in our study @UjjwalSehrawat
    • Implementation plan @onebulrush
cbayow22 commented 1 year ago

Key Findings
  • A structured framework for designing a dose-finding study using CRM, aimed at increasing uptake of CRM in phase 1 dose-finding trials
  • Recommendations on key design parameters
  • Advice on conducting pre-trial simulation work to tailor the design to a specific trial
  • Software recommendations
  • Template text and tables that can be edited and inserted (as under a 3+3 design)
  • Guidance on how to conduct and report a dose-finding study using CRM

cbayow22 commented 1 year ago

Methods - Number of doses

  1. Number of doses
    • Consider whether the doses and dose range allow an accurate MTD estimate
    • Doses are often determined by practical considerations, but allometric scaling can be used to choose which doses should be studied
    • Allometric scaling
    • The mean number of dose levels based on prior studies was 5

1a. Allometric scaling is typically used for interspecies dose conversion.
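As a rough, hedged illustration of that interspecies conversion (the body-surface-area exponent and all numbers below are common assumptions, not values from the paper):

```r
# Hedged illustration only: body-surface-area based allometric scaling, where
# human-equivalent dose (mg/kg) = animal dose (mg/kg) * (W_animal / W_human)^0.33.
hed_mg_per_kg <- function(animal_dose_mg_kg, w_animal_kg, w_human_kg = 60) {
  animal_dose_mg_kg * (w_animal_kg / w_human_kg)^0.33
}
hed <- hed_mg_per_kg(animal_dose_mg_kg = 5, w_animal_kg = 0.25)  # e.g. a 0.25 kg rat
start_dose <- hed / 10  # a common 10-fold safety factor for a starting dose
```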

cbayow22 commented 1 year ago

Method: Target toxicity level (TTL)
  • Must be set before the trial starts; often set between 20% and 25%, and can be as high as 40%
  • Contributing factors include the disease, the treatment, clinical expertise, evidence from prior studies, and guidance from the trial statistician

cbayow22 commented 1 year ago

Method: Dose-toxicity model
  • Describes the probability of a patient experiencing a DLT at a given dose using a model
  • The model is a fixed mathematical function in which the probability of observing a DLT increases as the dose increases
  • The model is written as F(β, d), where F(·,·) is the chosen dose-toxicity function (see Table 1), β is a vector of one or more parameters that alters the shape of the dose-toxicity relationship, and d is the dose label for a particular drug dose

cbayow22 commented 1 year ago

Method: Dose-toxicity skeleton
  • The skeleton is the set of expected DLT probabilities at the dose levels of interest and is specified by a clinician before the trial
  • Given a dose and a prior, use the formula from Table 1 for the chosen model to compute the dose label
  • The choice of model and skeleton is not unique, since identical dose-escalation recommendations can be obtained from different choices
  • A one-parameter logistic model is more likely to make recommendations that lead to faster escalation and a more efficient trial, at a higher risk of participants experiencing a DLT
  • A two-parameter logistic model identifies the MTD less efficiently and takes longer to reach the MTD
  • Using the skeleton allows the dose recommendation after each cohort to move close to the MTD
  • An indifference interval is the probability interval within which the clinicians are content for the DLT probability of the MTD to fall; a TTL of 25%, give or take 5%, gives an indifference interval of 20% to 30%
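A minimal sketch of how a skeleton is turned into dose labels for a one-parameter logistic model (the skeleton values and the fixed intercept are assumptions for illustration):

```r
# Minimal sketch: turn a clinician-specified skeleton into dose labels for the
# one-parameter logistic model F(beta, d) = plogis(a0 + exp(beta) * d), with a
# fixed intercept a0 and prior mean of beta equal to 0, so that the prior DLT
# probabilities at the labels reproduce the skeleton exactly.
skeleton <- c(0.05, 0.12, 0.25, 0.40, 0.55)  # assumed skeleton for a 25% TTL
a0 <- 3                                      # commonly used fixed intercept
dose_labels <- qlogis(skeleton) - a0         # d_j such that plogis(a0 + d_j) = skeleton_j
round(plogis(a0 + dose_labels), 2)           # sanity check: recovers the skeleton
```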

cbayow22 commented 1 year ago

Method: Inference
  • Likelihood approach
  • Bayesian inference

cbayow22 commented 1 year ago

Method: Decision Rules, sample size, cohort size, safety modification

Method: Decision rules
  • Possible decision rules include choosing the dose with an estimated probability of DLT closest to the TTL or, more conservatively, choosing the dose with an estimated probability of DLT closest to, but not greater than, the TTL. The first option allows quicker escalation towards the true MTD, but may expose more patients to overdoses. The second option reduces the chance of overdosing patients, but may take longer to escalate towards the true MTD
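A short sketch of the two rules with assumed DLT probability estimates (the numbers are illustrative only):

```r
# Sketch of the two decision rules above, given DLT probability estimates p_hat
# at each dose after the latest cohort (values are assumed for illustration).
p_hat <- c(0.08, 0.18, 0.27, 0.41)
TTL   <- 0.25
closest <- which.min(abs(p_hat - TTL))           # rule 1: closest to the TTL
below   <- which(p_hat <= TTL)                   # rule 2: closest but not above
closest_not_above <- if (length(below) > 0) below[which.max(p_hat[below])] else 1
c(closest = closest, closest_not_above = closest_not_above)  # doses 3 and 2 here
```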

Method: Sample size and cohort size
  • Sample size – specify a lower bound based on Cheung's work and a practical upper bound in the trial protocol. Cheung [45] proposed formulae that use a target average percentage of correctly selecting the MTD (say, 50% of the time) to obtain a lower bound for the trial sample size
  • Cohort size – how many patients should be dosed at each recommended dose before a dose-escalation decision is made. A cohort size of one allows a better understanding of operating characteristics. There may be regulatory constraints. If the cohort size is greater than 2, then a monitoring plan is needed

Method: Safety modifications
  • Under most CRM trial setups, coherence is guaranteed
  • Study teams generally choose a starting dose below the anticipated MTD level
  • If the last patient had a DLT, the next patient would not receive a higher dose than that of the last patient, even if the model recommended it
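A tiny sketch of that non-escalation rule (the helper and its arguments are hypothetical, for illustration only):

```r
# Sketch of the non-escalation override described above: if the most recent
# patient had a DLT, the next patient is not dosed above that patient's level,
# even when the model recommends escalation.
apply_safety_rule <- function(model_recommendation, last_dose, last_had_dlt) {
  if (last_had_dlt) min(model_recommendation, last_dose) else model_recommendation
}
apply_safety_rule(model_recommendation = 4, last_dose = 3, last_had_dlt = TRUE)  # returns 3
```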

TongtongJin commented 1 year ago

I created a preliminary plan with tasks. Can we add timelines for this?

  1. Literature Review:

  • 1a Summarize the key findings and concepts from the CRM design review paper @cbayow22 - Due 5/14
  • 1b Summarize the key findings from main references
  • 1bi - references 1-5 @TongtongJin
  • 1bii - references 6-10 @onebulrush
  • 1biii - references 11-15 @UjjwalSehrawat
  • 1biv - references 16-20 @cbayow22 - Due 5/16
  2. Methodology
  • Describe the use of the logistic regression model in our study @TongtongJin
  • Justify the use of the logistic regression model in our study @UjjwalSehrawat
  • Implementation plan @onebulrush

Thank you for outlining the framework of tasks. I agree with this plan and time schedule. And I'm wondering if the methodology part also needs to be completed by 5/16 or we can have a few more extra days? @xinyi030
And just to make sure that I understand this right, the 'describe the use of logistic regression model' here is to clarify the specific details of the two-parameter logistic model in both Bayesian and likelihood-based CRM? @cbayow22

xinyi030 commented 1 year ago

@TongtongJin Absolutely, please feel free to take the extra two days you need. We understand that quality work can sometimes require a bit more time. 😊

@Team, I'm really impressed with the plan you've come up with! Let's definitely move forward with this. Given the timeline, I would suggest that we aim to finalize the draft and push the results (of this issue) to NonStat.Rmd by 5/18. I believe in our collective ability to achieve this. 🌟

Please don't hesitate to reach out if you have any other questions, or if there's anything else you need from me.

Best, Xinyi

UjjwalSehrawat commented 1 year ago

The plan looks great! Thanks for putting it together.

cbayow22 commented 1 year ago

And just to make sure that I understand this right, the 'describe the use of logistic regression model' here is to clarify the specific details of the two-parameter logistic model in both Bayesian and likelihood-based CRM? @TongtongJin - Yes I think so

cbayow22 commented 1 year ago

Method: Stopping rules, Evaluating designs by simulation, Finishing the design, Trial conduct, Report contents, software used

Stopping rules
  • Probabilistic approaches are encouraged for early termination
  • Examples

Evaluating designs by simulation
  • Understand a design's operating characteristics under different dose-toxicity scenarios
  • Dose-toxicity scenarios should include: scenarios where each dose is in fact the MTD, and two extreme scenarios in which the lowest dose is above the MTD and the highest dose is below the MTD
  • Run all competing designs (including a 3+3 design) across all simulation scenarios to compare the operating characteristics of interest
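A sketch of what such a scenario set could look like for a five-dose trial at a 25% TTL (all probabilities are assumptions for illustration):

```r
# Sketch of a scenario set for a five-dose trial with a 25% TTL (assumed true DLT
# probabilities): one scenario per dose being the true MTD, plus the two extremes.
scenarios <- list(
  mtd_is_dose_1 = c(0.25, 0.40, 0.55, 0.65, 0.75),
  mtd_is_dose_2 = c(0.10, 0.25, 0.40, 0.55, 0.65),
  mtd_is_dose_3 = c(0.05, 0.12, 0.25, 0.40, 0.55),
  mtd_is_dose_4 = c(0.02, 0.08, 0.15, 0.25, 0.40),
  mtd_is_dose_5 = c(0.01, 0.05, 0.10, 0.17, 0.25),
  all_too_toxic = c(0.40, 0.55, 0.65, 0.75, 0.85),  # lowest dose above the MTD
  all_too_safe  = c(0.01, 0.02, 0.05, 0.08, 0.12)   # highest dose below the MTD
)
# Each competing design (CRM variants, 3+3, ...) would then be simulated many
# times under every scenario and the operating characteristics compared.
```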

Finishing the design
  • Document the setup specifications, which designs were compared under which scenarios, and an easily interpretable summary of the design's main features

Trial conduct

Report contents
  • Report toxicities in a simple frequency table
  • Use the NCI CTCAE grading system

Software for updating models and producing results
  • Several software packages are available, both standalone and within popular statistical software, for designing, conducting, and analyzing dose-finding studies
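For instance (an assumption on our part, since the summary does not name specific packages), the dfcrm package in R provides a crm() function along these lines:

```r
# One possible choice (our assumption, not a recommendation from the paper):
# the dfcrm package refits a Bayesian CRM from a skeleton, a target, and the
# accumulated outcomes.
# install.packages("dfcrm")
library(dfcrm)
skeleton <- c(0.05, 0.12, 0.25, 0.40, 0.55)
fit <- crm(prior = skeleton, target = 0.25,
           tox   = c(0, 0, 1),   # DLT outcomes observed so far
           level = c(1, 2, 2))   # dose levels those patients received
fit$mtd                          # model-recommended dose for the next cohort
```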

cbayow22 commented 1 year ago

Reference Paper 16: Efficiency of New Dose Escalation Designs in Dose Finding Phase I Trials of Molecularly Targeted Agents
  • A literature review to gain more insight into the efficiency of new dose-escalation methods in phase I trials of molecularly targeted agents; design information was abstracted from 84 trials that reached the MTD over the last decade
  • The standard 3+3 design was used in 41 trials (49%); newer algorithm-based methods were also used, including ATD (35 trials, 42%) and modified CRM (mCRM), which was employed in only 6 trials (7%)
  • The mean MTD-to-starting-dose ratio appeared to be at least twice as high for trials using an mCRM or an ATD as for trials using a standard 3+3 design
  • The mean number of patients exposed to a dose below the MTD was similar for all three trial designs, ranging from 19 to 23
  • The mean number of patients exposed to doses exceeding the MTD was at least twice as high in trials using a standard 3+3 design or an ATD as in trials using an mCRM
  • The findings support a more extensive implementation of innovative dose-escalation designs such as mCRM and ATD in phase I cancer clinical trials of molecularly targeted agents

Reference Paper 17: The continual reassessment method for dose-finding studies: a tutorial
  • Explains how to implement the CRM
  • Uses simulations with varying parameters to show how the CRM performs better than the 3+3 design

cbayow22 commented 1 year ago

Reference paper 20: FDA. Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics.
  • Indicates that Bayesian methods can be used
  • Bayesian inference is characterized by drawing conclusions based directly on posterior probabilities that a drug is effective, and has important differences from frequentist inference (Berger and Wolpert 1988). For trials that use Bayesian inference with informative prior distributions, such as trials that explicitly borrow external information, Bayesian statistical properties are more informative than the Type I error probability

TongtongJin commented 1 year ago

Summary for references 1-5

Phase 1 trial and maximum tolerated dose (MTD)

     Phase 1 trials in oncology are usually designed to obtain the optimal dose of a new treatment for efficacy testing in subsequent phase 2 trials. For cytotoxic agents, the probability of treatment benefit is presumed to increase with dose over the range of consideration. Thus, the optimal dose in a phase 1 trial is usually taken to be the highest dose with a tolerable level of toxicity; the optimal dose we are seeking is exactly the maximum tolerated dose (MTD).

     To define the MTD more rigorously, it is the dose expected to produce some degree of medically unacceptable, dose-limiting toxicity (DLT) in a specified proportion $\theta$ of patients. Namely, $$Prob(DLT \mid Dose=MTD)=\theta,$$ where the proportion $\theta$ is also called the target toxicity level (TTL). $^{[1][2]}$

Dose escalation methods in phase 1 trials

     To find the MTD defined above, clinical testing adopts dose-escalation methods, which are based on the prior belief that toxicity increases monotonically with dose. The principle of dose escalation in phase 1 trials is to keep toxicity at a safe level while accumulating information rapidly, and at the same time to avoid exposing patients to subtherapeutic doses as much as possible.

     Dose-escalation methods fall mainly into two branches: rule-based designs, such as the traditional 3+3 design, and model-based designs, such as the continual reassessment method. Rule-based designs make no assumption about the functional form of toxicity with respect to dose level; the next dose depends purely on the information from the last dose, and the trial terminates at a certain stopping criterion. Model-based designs, by contrast, assume a specific function between dose and toxicity, usually a power or logistic function, and apply the information accumulated from every dose to determine the next dose.

     From the perspective of practical use, rule-based designs like the traditional 3+3 are easier to implement, while model-based designs need biostatistical expertise and available software on site to perform real-time model fitting. As for information utilization, rule-based designs only use current information, whereas model-based designs make use of all toxicity information accumulated during the trial, which is more comprehensive. Regarding exposure to subtherapeutic doses, model-based designs treat relatively fewer patients at suboptimal doses than rule-based ones. Hence, according to the principles of dose escalation, model-based designs usually do better at rapid information accumulation and at reducing excessive exposure to subtherapeutic doses. $^{[3]}$

Current popularity of rule-based designs and model-based designs in phase 1 trials

     Although model-based designs show great advantages in many aspects, rule-based designs like the 3+3 design are still far more prevalent, and model-based designs are rarely used. Some statistics on the popularity of these two types of methods in phase 1 trials are as follows.

     Reference [4] examined the records of cancer phase 1 trials in the Science Citation Index database from 1991 to 2006 and divided them into two sets (dose-finding trials and methodologic studies of dose-escalation designs), then tracked which trials in these two sets adopted new statistical designs. As a result, only 1.6% of trials followed one of the methodologic studies, and they showed extensive lags in publication time. The rest of the trials all followed the traditional up-and-down method (a type of rule-based method).

     Reference [5] studied the degree of adoption of new trial designs in early-phase trials of molecularly targeted agents (MTA) and immunotherapies. It searched papers on phase 1 oncology trials published from 2008 to 2014 and found that among dose-finding trials, 92.9% utilized rule-based designs and 5.4% used model-based or other novel designs. In particular, among the MTA and immunotherapy trials, 5.8% used model-based designs. These results show that the adoption of model-based and novel designs remains low.

     This phenomenon could be caused by the limited time and effort of clinicians and statisticians, and by the lack of comprehensive and detailed tutorials and instructions for the newly designed approaches. $^{[4][5]}$

cbayow22 commented 1 year ago

Here is my first draft. Feel free to edit and add more information

Write a review of the CRM design for phase 1 dose-finding trials

CRM is an alternative to the standard 3+3 design that uses a model, such as a one-parameter or two-parameter logistic model, to estimate the maximum tolerated dose in a phase 1 trial. CRM is more accurate in choosing the MTD, is less likely to choose ineffective doses, treats fewer patients at overly toxic doses, and treats fewer patients at very low doses (Garrett-Mayer E). Our paper plans to look at a two-parameter model. A two-parameter model is likely to better estimate the shape of the entire dose-toxicity relationship [34], but identifies the MTD less efficiently; it may take longer to reach the MTD since two parameters must be estimated, and there may be difficulties fitting the model or obtaining consistent estimates of model parameters [31].

The idea behind the CRM starts with an a priori dose-toxicity curve and a chosen target toxicity rate. This curve is refit after the toxicity outcome of every cohort (1-3 patients) is observed. At every dose, whether new or repeated, all prior data are used to update the model/curve (Garrett-Mayer E). As required, a discussion needs to take place with all relevant stakeholders. The target toxicity level is typically set between 20 and 25% and can be as high as 40% [27, 28]. In a review of 197 phase I trials published between 1997 and 2008, the median number of dose levels explored was five (range 2-12) [26].

Inference and decisions can be made using likelihood or Bayesian methods together with the accruing trial data and clinical judgment. In a Bayesian approach, data from patients in the trial are used to update the prior distribution on the model parameters, which gives a posterior distribution for the parameters and therefore posterior beliefs about the probability of DLT at each dose. These posterior probabilities are used to make dose-escalation decisions. By assessing a design's operating characteristics with a specific prior in a variety of scenarios, the prior distribution can be recalibrated until the model makes recommendations for dose escalations and the MTD that the trial team are happy with (Wheeler, M Graham).

Possible decision rules include choosing the dose with an estimated probability of DLT closest to the TTL or, more conservatively, choosing the dose with an estimated probability of DLT closest to, but not greater than, the TTL. The first option allows quicker escalation towards the true MTD, but may expose more patients to overdoses. The second option reduces the chance of overdosing patients, but may take longer to escalate towards the true MTD (Wheeler,M Graham)

Sample sizes are determined by the study and by how and where it is being conducted, specifying a lower bound based on Cheung's work and a practical upper bound in the trial protocol. Cheung [45] proposed formulae that use a target average percentage of correctly selecting the MTD (say, 50% of the time) to obtain a lower bound for the trial sample size (Wheeler, M Graham). Although CRM designs, like standard ones, can halt after only 10–14 subjects, it is typically necessary to plan for at least 18–24 total subjects, after which the probability of a correct MTD choice rises slowly with sample size [17]. The cohort size at each dose level is typically more than 1. A cohort size of one allows a better understanding of operating characteristics, but this is rarely used [17]. There may also be regulatory constraints. If the cohort size is greater than 2, then a monitoring plan is needed.

Stopping rules for the trial include the following examples: early termination can be considered if the MTD is judged to be outside the planned set of doses; if adding additional patients is unlikely to yield information that would change the current MTD estimate; if a fixed number of patients has been consecutively dosed at one dose level; if the estimated probability that all dose levels have a DLT rate above the TTL is at least 90%; or if the probability that the next m patients to be dosed in the trial will be given the same dose level, regardless of the DLT outcomes observed, exceeds some threshold (Wheeler, M Graham).

TongtongJin commented 1 year ago

Methodology (Part 1)

(Describe the use of the logistic regression model in our study)

The model we use here to describe the relation between dose level and the probability of DLT is the two-parameter logistic regression model, which has the form $$p_j=p(d_j|\beta_1,\beta_2)=\frac{\exp(\beta_1+\exp(\beta_2)d_j)}{1+\exp(\beta_1+\exp(\beta_2)d_j)}.$$ In the Bayesian setting, this defines the DLT probability at dose level $j$ given $\beta_1$ and $\beta_2$, from which the likelihood is built.
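For reference, a direct R transcription of this function, reused in the sketches below:

```r
# Two-parameter logistic dose-toxicity model: p_j = plogis(beta1 + exp(beta2) * d_j),
# where d_j is the (standardised) dose label for dose level j.
p_dlt <- function(d, beta1, beta2) plogis(beta1 + exp(beta2) * d)
```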

Bayesian CRM

In the Bayesian setting of CRM, we first need to choose the prior distributions of the parameters $\beta_1$ and $\beta_2$; denote the prior by $f(\beta_1,\beta_2)$. The likelihood of the data $D_k$ observed over $k$ dose levels is $$L(D_k|\beta_1,\beta_2)=\prod_{j=1}^{k}p_j^{y_j}(1-p_j)^{n_j-y_j},$$ where $n_j$ is the number of tested patients at the $j$-th dose level and $y_j$ is the number of patients showing a DLT at the $j$-th dose level. Applying Bayes' rule, the posterior distribution given $D_k$ is $$p_k(\beta_1,\beta_2|D_k)=\frac{L(D_k|\beta_1,\beta_2)f(\beta_1,\beta_2)}{\iint L(D_k|\beta_1,\beta_2)f(\beta_1,\beta_2)\,d\beta_1\,d\beta_2}.$$

Then the posterior mean of the DLT probability at each dose level is $$\mathbf{E}[p_j|D_k]=\iint p_j\, p_k(\beta_1,\beta_2|D_k)\,d\beta_1\,d\beta_2.$$

To look for an appropriate dose level for the next cohort, our principle is to find the dose level with the DLT probability closest to the TTL. Hence the next dose level can be defined as $$d_{next}=\arg\min_{d_j\in S}|TTL-\mathbf{E}[p_j|D_k]|.$$ Here $S$ is the set of all permissible choices of dose level. $^{[1]}$
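A minimal sketch of this update by brute-force numerical integration over a two-dimensional grid (the priors, dose labels, and data below are assumptions for illustration, not our trial's actual values):

```r
# Sketch of the Bayesian update above using a grid approximation of the integrals.
p_dlt <- function(d, b1, b2) plogis(b1 + exp(b2) * d)

dose_labels <- c(-4, -3, -2, -1, 0)   # assumed dose labels d_j
TTL <- 0.25
n_j <- c(3, 3, 3, 0, 0)               # patients treated at each dose so far
y_j <- c(0, 0, 1, 0, 0)               # DLTs observed at each dose so far

grid  <- expand.grid(b1 = seq(-2, 8, length.out = 120),
                     b2 = seq(-3, 3, length.out = 120))
prior <- dnorm(grid$b1, 3, 2) * dnorm(grid$b2, 0, 1)   # assumed independent normal priors

loglik <- apply(grid, 1, function(b) {
  p <- p_dlt(dose_labels, b[1], b[2])
  sum(dbinom(y_j, n_j, p, log = TRUE))
})
post <- exp(loglik) * prior
post <- post / sum(post)              # normalised posterior on the grid

# Posterior mean DLT probability at each dose, then the recommended next dose
post_p    <- sapply(dose_labels, function(d) sum(p_dlt(d, grid$b1, grid$b2) * post))
next_dose <- which.min(abs(post_p - TTL))
```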

Two-stage likelihood-based CRM

Two-stage likelihood-based CRM divides the process into two stages. In the first stage, patients are dosed in single-patient cohorts until the first DLT appears. After the first DLT, the CRM starts to work on all of the data accumulated so far (first-stage data included).

The stage 2 procedure is similar to the above, but uses maximum likelihood estimation (MLE) to estimate the parameters $\beta_1$ and $\beta_2$ and compute the corresponding probability of DLT at each dose level. The estimated parameters based on the data from $k$ dose levels are $$(\hat{\beta}_1,\hat{\beta}_2)=\arg\max_{(\beta_1,\beta_2)} L(D_k|\beta_1,\beta_2),$$ where $L(D_k|\beta_1,\beta_2)$ is the same as defined above in the Bayesian setting. We can then compute the probability of DLT at each dose level under the current MLE of the parameters, $p(d_j|\hat{\beta}_1,\hat{\beta}_2)$. Since we want the next dose level to have the probability of DLT closest to the TTL, the next dose level is defined by $$d_{next}=\arg\min_{d_j\in S}|TTL-p(d_j|\hat{\beta}_1,\hat{\beta}_2)|.$$ Iterate the above procedure until the dose level meets the stopping condition, at which point the second stage terminates. $^{[??]}$
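A minimal sketch of this stage-2 estimation with optim() (the dose labels and outcome data are assumptions for illustration, taken as if part-way through stage 2; the two parameters are only well identified once the outcomes show some heterogeneity, as noted elsewhere in this thread):

```r
# Sketch of the stage-2 maximum likelihood step above, using optim() on the
# binomial log-likelihood.
p_dlt <- function(d, beta1, beta2) plogis(beta1 + exp(beta2) * d)
dose_labels <- c(-5.94, -4.99, -4.10, -3.41, -2.80)  # assumed dose labels
TTL <- 0.25
n_j <- c(1, 1, 1, 4, 3)   # patients per dose (stage-1 singles plus stage-2 cohorts)
y_j <- c(0, 0, 0, 1, 1)   # DLTs per dose

negloglik <- function(par) {
  p <- p_dlt(dose_labels, par[1], par[2])
  -sum(dbinom(y_j, n_j, p, log = TRUE))
}
mle       <- optim(c(3, 0), negloglik)$par
p_hat     <- p_dlt(dose_labels, mle[1], mle[2])
next_dose <- which.min(abs(p_hat - TTL))
```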

Reference: [??] Wages NA, Conaway MR, O’Quigley J. Performance of two-stage continual reassessment method relative to an optimal benchmark. Clinical Trials. 2013;10(6):862-875. doi:10.1177/1740774513503521

I updated the references in methodology, but I'm unsure about the number of above reference so I used [??] to indicate for now. Thanks!

xinyi030 commented 1 year ago

Thanks @TongtongJin and @cbayow22 for your great work. @HongzhangXie and I will give it a careful look and reach out to you if we have any questions.

@onebulrush @UjjwalSehrawat Could you post your work by today so that we have time to go through it? Thanks!

Best, Xinyi

onebulrush commented 1 year ago

Reference 6: Experimental designs for phase I and phase I/II dose-finding studies

For the statistical design of dose-finding studies, the standard design is a 'memoryless' design, which is not entirely satisfactory. This paper describes designs with memory and discusses how these designs are superior to memoryless designs. The most well-known design with memory is the continual reassessment method (CRM).

onebulrush commented 1 year ago

Reference 7: Adaptive designs for dual-agent phase I dose-escalation studies

Carrying out dual-agent phase I trials for medications is crucial. There are predominantly two kinds of dose-escalation trials: rule-based and model-based. Trials based on models are progressively adjusted with the help of Bayesian techniques, which merge preliminary data concerning the dose-toxicity relationship. Studies using simulations indicate that model-driven designs tend to treat a greater proportion of patients at near-optimal dose levels.

onebulrush commented 1 year ago

Reference 8: A quick guide why not to use A+B designs

This paper summarizes why model-based designs such as the continual reassessment method (CRM) are preferable to the 3+3 and similar rule-based A+B designs. Compared with rule-based designs, model-based designs can clearly define and flexibly choose the target DLT rate; more patients can be treated at the optimal dose; fewer patients are treated at subtherapeutic doses; the available data are used efficiently; extension to more complex questions is smooth and straightforward; and deviations from the plan are easily accommodated.

onebulrush commented 1 year ago

Reference 9: Principles of dose finding studies in cancer: a comparison of trial designs

There are three classes of dose-escalation trial design: algorithmic approaches (including the popular 3+3 design), Bayesian model-based designs, and Bayesian curve-free methods. The main benefit of algorithmic approaches is their simplicity. Model-based and curve-free Bayesian approaches are preferable because they are better able to identify the dose with the desired toxicity rate and allocate a greater proportion of patients to it. For statistical and practical reasons, a Bayesian model-based or curve-free approach is better. If there is sufficient evidence of high enough quality from previous studies, the model-based approach is better; otherwise the curve-free one is better.

onebulrush commented 1 year ago

Reference 10: Continual Reassessment Method: A Practical Design for Phase 1 Clinical Trials in Cancer

For the design and analysis of phase I clinical trials in cancer, attention focuses on identifying the dose whose toxicity rate is closest to a given target level and on obtaining the best estimate of that dose. Such a sequential design is called the continual reassessment method (CRM). In the procedure, the notion of the dose-response relationship is updated after each patient's outcome. The simulations show that this method performs well.

UjjwalSehrawat commented 1 year ago

Summary for 11—15:

Clinicians are interested in phase 1 dose-finding designs that can estimate the MTD using fewer patients with a fixed number of doses, or can test more dose levels for a given sample size. Several studies have compared CRM to the traditional SM (the 3+3 standard method that escalates doses after 3 patients with an option for an additional 3 patients; also called the traditional method TM or the "up-and-down" scheme) [D*] and found that CRM is more likely to recommend the correct MTD and doses more trial patients close to the MTD. [11,12,13,14,15] We expound on some of these studies below.

O’Quigley et al.’s paper [11] in response to Korn et al.’s [A*] findings about CRM’s slower time duration and poorer safety standards relative to SM concluded that, indeed, CRM does not take longer than SM when the comparison fairly accounts for comparable grouping inclusions and that CRM is, indeed, safer for a randomly chosen patient than SM as per simulation results that show that the probability of being treated at very high toxic levels is almost always higher with SM than CRM. Furthermore, O’Quigley et al. argued that if treating patients at unacceptably low sub-therapeutic levels is considered part of the safety definition, then CRM also performs much better than SM. Unlike SM, it is entirely straightforward to adjust CRM to make it safe as we require. All it requires is to change the target level, say from 0.2 to 0.1. In this case, the observed number of toxicities will be, on average, roughly halved. One of the main advantages of CRM, O’Quigley et al. argued, is its flexibility and ability to be adapted to potentially different situations unlike SM which is rigid, samples independently of any targeted percentile, and has no convergence properties. [11]

Thall et al. in their paper [12] described and compared 2 practical outcome-adaptive statistical methods for dose finding in phase 1 clinical trials: CRM and a logistic regression model-based method. Both methods used Bayesian probability models as a basis for learning from the accruing data during the trial, choosing doses for successive patient cohorts, and selecting an MTD. These methods were illustrated and compared to the SM by application to a particular trial in renal cell carcinoma. The paper compared average behavior by computer simulation under each of several hypothetical dose-toxicity curves. The comparisons showed that the Bayesian methods are much more reliable than the conventional algorithms for selecting an MTD, and that they have a low risk of treating patients at unacceptably toxic doses. [12]

Iasonos et al.'s [13] paper compared several CRM-based methods with SM. For comparison, variations were given to the number of dose levels (5 to 8) and the location of the true MTD. Only CRM with constraint in dose escalation was evaluated since it is more likely to be used by clinicians as the O’Quigley et al.’s original CRM [B*] allows skipping dose levels in the absence of DLT’s, potentially unnecessarily exposing patients to highly toxic drug levels which makes clinicians uncomfortable. Furthermore, 3 CRM-based methods that combine rule-based and model-based approaches were evaluated. They found that CRM and SM are comparable in terms of how fast they reach the MTD as well as the total sample size needed when testing a limited number of dose levels (<=5), however, as the number of dose levels was increased, CRM reached the MTD in fewer patients when used with a fixed sample of 20 patients. However, a sample size of 20—25 patients is not sufficient to achieve a narrow precision around the estimated toxicity rate at the MTD. CRM with a fixed-sample performed better than a CRM with stopping rule that ensures a narrow confidence interval around the toxicity rate at the MTD. CRM-based methods were found to be better than SM in terms of accuracy and optimal dose allocation in almost all cases except when the true dose was among the lower levels. [13]

Onar et al. in their paper [14] provided modifications to CRM largely motivated by specific challenges encountered in the context of the Pediatric Brain Tumor Consortium trials and compared them to SM. While some versions of CRM assume availability of doses in a continuous way given a range, this paper used preset levels as it is more acceptable to clinicians and easier to manage operationally, especially in multi-institutional settings. Patients in these pediatric trials were dosed by body surface area (BSA) instead of in terms of "mg" as done in adult trials. A frequentist likelihood-based approach with a 2-parameter logistic model [C*] was used, of the form Phi(x_j, a) = exp(alpha + beta*x_j) / (1 + exp(alpha + beta*x_j)). Also, "prior information", which is needed especially during early trial stages, was used to fit the model. Compared to SM, simulations indicated that their modified CRM was more accurate, exposed fewer patients to potentially toxic doses and tended to require fewer patients. They also argued that as the CRM-based MTD has a consistent definition across trials, it is convenient especially in consortium settings where multiple agents are being tested in studies often running simultaneously and accruing from the same patient population. [14]

Onar-Thomas et al. [15] compared the performance of CRM vs SM vs the Rolling-6 design via simulations. The Rolling-6 design is a relative newcomer developed with the intention to shorten trial duration by minimizing the period of time during which the trial is closed to accrual for toxicity assessment. Results indicated that the toxicity rates were comparable across the 3 designs, but the SM and the Rolling-6 designs tended to treat a higher % of patients at doses below the MTD. In cases where 5, 6 or more dose levels were proposed to be studied and some toxicities were expected, model-based designs (CRM) had distinct advantages in being able to use the data from all dose levels in estimating the MTD, in accommodating patient-specific dosing and in providing an MTD estimate that is associated with a toxicity probability. Doses identified as MTD by the SM and the Rolling-6 differed in a large % of trials. Results also showed that body surface area (BSA)-based dosing used in pediatric trials can make a difference in dose escalation/de-escalation patterns in CRM relative to cases where such variations are not taken into account in the calculations (such as most adult trials), also leading to different MTDs in some cases. Rolling-6 was found to have shorter trials relative to SM. CRM led to shorter trial durations for slow to medium accrual rates, whereas Rolling-6 may have an advantage if the accrual rate is fast. Rolling-6 may be preferable over the CRM if very few or no toxicities are expected with the agent under study and if the dose-finding period is long. [15]

Methodology (Justify the use of the logistic regression model in our study):

Model-based methods for phase-1 dose finding clinical trials such as CRM target some given toxicity level and concentrate experimentation around that level, using only the information available for independent toxic responses (yes/no) given the dose level. They typically use a 1-parameter or 2 parameter logistic regression model and have been shown to be more likely to recommend the correct MTD and dose more trial patients close to the MTD [11,12,13,14,15].

According to O'Quigley et al., CRM, a model-based design, is much safer and more adaptable to potentially different situations than SM. For example, one can reduce the observed number of toxicities by changing the target toxicity level or the number of dose levels, unlike SM, which is rigid, samples independently of any targeted percentile, and has no convergence properties. [11,13] CRM-based methods have also been shown to be more accurate, to expose fewer patients to potentially toxic doses, and to require fewer patients. [14] In Onar-Thomas et al's study, in cases where 5, 6 or more dose levels were proposed to be studied and some toxicities were expected, model-based designs (CRM) had distinct advantages in being able to use the data from all dose levels in estimating the MTD, in accommodating patient-specific dosing, and in providing an MTD estimate that is associated with a toxicity probability. [15]

In Onar et al's study [14], a 2-parameter logistic model (which we will be using) was ultimately favored over its 1-parameter counterpart, similar to Piantadosi et al. [C*], due to its flexibility, even though more information is needed to identify the parameters, as they believed the flexibility gained in the response curve is beneficial for modeling dose-toxicity distributions. The model referred to here is Phi(x_j, a) = exp(alpha + beta*x_j) / (1 + exp(alpha + beta*x_j)). A certain level of heterogeneity is needed for the model parameters of the logistic function to become identifiable. This was circumvented via use of priors; however, it was also noted that if no DLT is observed in the first few doses, there may be convergence problems and underestimation of the toxicity probability, leading to overestimation of the target level. To solve these issues, a "correction factor" of 0.1 was used. [14]

According to O'Quigley & Shen [E*], the 1-parameter models are preferred from an identifiability perspective; however, they provided alternatives to this model and a set of conditions under which the model is expected to perform well. They cautioned against the 1-parameter logistic regression model Phi(x, a) = exp(a*x) / (1 + exp(a*x)), the model used by Korn et al. [A*], which does not satisfy their optimality conditions. Goodman et al. [F*] were able to circumvent these issues with the 1-parameter logistic model through modifications similar to the ones implemented by Onar et al. [14].

Additional References:

A*: Korn, E. L., Midthune, D., Chen, T. T., Rubinstein, L. V., Christian, M. C., & Simon, R. M. (1994). A comparison of two phase I trial designs. Statistics in medicine, 13(18), 1799-1806.

B*: O'Quigley, J., Pepe, M., & Fisher, L. (1990). Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics, 33-48.

C*: Piantadosi, S., Fisher, J. D., & Grossman, S. (1998). Practical implementation of a modified continual reassessment method for dose-finding trials. Cancer chemotherapy and pharmacology, 41, 429-436.

D* : Storer, B. E. (1989). Design and analysis of phase I clinical trials. Biometrics, 925-937.

E*: O’Quigley, J., & Shen, L. Z. (1996). Continual reassessment method: a likelihood approach. Biometrics, 673-684.

F*: Goodman, S. N., Zahurak, M. L., & Piantadosi, S. (1995). Some practical improvements in the continual reassessment method for phase I studies. Statistics in medicine, 14(11), 1149-1161.

xinyi030 commented 1 year ago

Hey Team,

A big thanks to all of you for the amazing work you've been doing. You are all doing a great job!

I’ve gone through your drafts and wanted to credit you for your excellent work so far. I've uploaded our drafts on GitHub (NonStat_LiteratureGroup_5.20.Rmd). If you would like to, could you please fine-tune your sections directly in this file by Sunday, 12pm? Just remember, let's avoid direct pushes to GitHub. You can use branching or simply drop your edits in this issue. I'll consider the version on Sunday as the final one.

Also, when citing references, please make sure to clearly mark the reference papers. For guidance, you might want to refer to Tongtong Jin's work, which is a highly readable example of how to handle references effectively.

Afterward, I'll happily format and organize the literature review part based on your work.

For any queries or concerns, feel free to reach out. Thank you once again for your exceptional work.

Best, Xinyi

cbayow22 commented 1 year ago

@xinyi030 - How should we reference the main paper written by Graham M Wheeler (How to design a dose-finding study using the continual reassessment method)?

cbayow22 commented 1 year ago

I updated the references in my paper for this section. I am unclear what number should be assigned to reference the main paper written by Graham M Wheeler and this is currently pending

Write a review of the CRM design for phase 1 dose.docx

TongtongJin commented 1 year ago

I also edited the reference number in the above comment of methodology. Please check. Thanks!

onebulrush commented 1 year ago

Implementation plan

Here we reproduce the results in specific figures by fitting a logistic regression model. We run simulations using the CRM with logistic regression and reproduce Figures 6 and 7. Figure 6 uses a one-stage Bayesian approach and Figure 7 uses a two-stage likelihood-based approach.

To reproduce Fig. 6, we use a one-parameter logistic model and place an exponential prior distribution with a mean of 1 on the model parameter. Throughout the trial, posterior estimates of the probability of dose-limiting toxicity (DLT) at each dose are calculated, and the next cohort is given the dose whose estimated DLT probability is closest to the target toxicity level (TTL).
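A minimal sketch of that update in R (the fixed intercept, skeleton, and data below are assumptions for illustration, not the paper's values):

```r
# One-parameter logistic model with an exponential prior (mean 1) on beta;
# posterior DLT probabilities obtained by one-dimensional numerical integration.
a0 <- 3
skeleton <- c(0.05, 0.12, 0.25, 0.40, 0.55)
dose_labels <- qlogis(skeleton) - a0   # labels chosen so beta = 1 reproduces the skeleton
TTL <- 0.25
level <- c(1, 2, 2)     # doses given to patients so far (assumed)
tox   <- c(0, 0, 1)     # their DLT outcomes (assumed)

p_dlt <- function(d, beta) plogis(a0 + beta * d)
lik <- function(beta) {
  sapply(beta, function(b) prod(dbinom(tox, 1, p_dlt(dose_labels[level], b))))
}
norm_const <- integrate(function(b) lik(b) * dexp(b, rate = 1), 0, Inf)$value
post_p <- sapply(dose_labels, function(d)
  integrate(function(b) p_dlt(d, b) * lik(b) * dexp(b, rate = 1), 0, Inf)$value / norm_const)
next_dose <- which.min(abs(post_p - TTL))   # dose recommended for the next cohort
```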

To reproduce Fig. 7, we instead implement a two-stage likelihood-based CRM design, combined with a one-parameter power model for the dose-toxicity relationship. During the initial stage, virtual patients are given gradually increasing doses, starting with a dose of 10 ng/kg, which was 1% of the maximum tolerated dose (MTD) in dogs. If a grade 2+ non-DLT adverse event occurs in a patient, two more virtual patients receive the same dose. If none of the trio exhibits a dose-limiting toxicity (DLT), the study continues escalating the dose in the first stage. Once the first DLT is observed, the second, model-based stage is initiated.

A dose skeleton, determined after the first DLT (as it is not needed during the first stage), is used to establish dose labels for each dose. The likelihood of a DLT at each dose is estimated by maximum likelihood, and the next patient is assigned the dose with an estimated DLT probability closest to the target toxicity level (TTL), under the condition that no untested dose level can be skipped. Single-patient cohorts are the norm because a low toxicity incidence is anticipated, and each virtual patient is fully observed before the next patient is assigned a dose.

xinyi030 commented 1 year ago

@cbayow22 Thanks for the update!

I'll do the reference work for the whole group based on all of your notes.

Just wanted to check if this draft includes review of references 16-20? And, if in that case, could you point out where it is? Thanks!

Best, Xinyi


cbayow22 commented 1 year ago

Reference paper 20: FDA. Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. FDA guidance indicates that Bayesian methods can be used in clinical trials for drugs and biologics. This guidance indicates that Bayesian inference is characterized by drawing conclusions based directly on posterior probabilities that a drug is effective and has important differences from frequentist inference (Berger and Wolpert 1988). For trials that use Bayesian inference with informative prior distributions, such as trials that explicitly borrow external information, Bayesian statistical properties are more informative than the Type I error probability.

cbayow22 commented 1 year ago

Reference Paper 16: Efficiency of New Dose Escalation Designs in Dose Finding Phase I Trials of Molecularly Targeted Agents. This paper provides evidence supporting more extensive implementation of innovative dose-escalation designs such as mCRM and ATD in phase I cancer clinical trials of molecularly targeted agents. The paper carried out a literature review based on 84 trials that reached the MTD, with the goal of gaining more insight into the efficiency of new dose-escalation methods in phase I trials of molecularly targeted agents. The review indicated that a standard 3+3 design was used in 41 trials (49%), while newer algorithm-based methods were also used, including ATD in 35 trials (42%) and CRM (mCRM), which was employed in only 6 trials (7%). The mean MTD-to-starting-dose ratio appeared to be at least twice as high for trials using an mCRM or an ATD as for trials using a standard 3+3 design. The mean number of patients exposed to a dose below the MTD was similar for all three trial designs, ranging from 19 to 23. The mean number of patients exposed to doses exceeding the MTD was at least twice as high in trials using a standard 3+3 design or an ATD as in trials using an mCRM.

Reference Paper 17: The paper 'The continual reassessment method for dose-finding studies: a tutorial' provides a detailed explanation of how to implement a CRM and uses simulations with varying parameters to show how the CRM performs better than the 3+3 design.

@xinyi030

cbayow22 commented 1 year ago

The other two references were books or papers that I could not locate.

xinyi030 commented 1 year ago

Hey Team,

Thank you for the great work. Here's the preliminary draft of our part. NonStat.pdf

I plan to refine and finalize sections 2 and 3 of the document by Monday (with only a few edits afterwards). Therefore, please check your parts carefully at your earliest convenience, and let me know if there's anything you would like to add or edit. Thanks!

Thanks again for your excellent work towards this issue!

Best, Xinyi


@onebulrush I was trying to cite the 8th paper in my .bib file but I couldn't find it. Could you kindly guide me on how you were able to access this?

@UjjwalSehrawat I might have missed it but, could you please point out your work on part 2.2? Also, you're doing great literature review work, and it would be even more helpful if you could paraphrase/organize it a little bit. If possible, can you finish the edits by today?

@cbayow22 Thanks for the update. I was able to find the preview versions of papers 18 & 19, and I am attaching them in case you need them. I'm not sure if they would be helpful; if so, could you send me your review of these two papers by tomorrow 12pm here? (It's okay if you don't think you can review them; let's just leave the two papers blank.) 18_previewpdf.pdf 19_previewpdf.pdf

onebulrush commented 1 year ago

@xinyi030 Thank you for your kind reply! I can't get the .bib file either. Here is the link: https://www.methodologyhubs.mrc.ac.uk/files/6814/6253/2385/A_quick_guide_why_not_to_use_AB_designs.pdf


@onebulrush Received. Thanks a lot!

cbayow22 commented 1 year ago

Reference Paper 19: The book 'Handbook of Methods for Designing, Monitoring, and Analyzing Dose-Finding Trials' discusses phase I designs and methodology for phase I/II trials; in addition, various types of phase II dose-finding trials are discussed. The book tries to address the high failure rate in phase II clinical trials by improving the poor accuracy of early-phase trials. It reviews the classical 3+3 design and model-based designs, and discusses barriers to adopting newer designs, available software and tools, current usage of different designs in clinical practice, and modification of design rules related to start-up, stopping, and choice of endpoints. Model-based designs discussed include the CRM and escalation with overdose control.

Reference Paper 18: The book 'Dose Finding by the Continual Reassessment Method' focuses on the CRM approach for studies designed with a binary outcome and subjects drawn from a homogeneous population. It is a 'how-to' book for implementing the CRM and includes what to do and what not to do, with details on how to calibrate CRM parameters based on general patterns. The book also takes a theoretical and methodological viewpoint, including pathological behaviors of CRM modifications in simpler settings. Its goal is to set out dose-finding criteria using a CRM design based on the current literature and on the CRM trials the author helped implement.

@xinyi030

cbayow22 commented 1 year ago

2.1 Updated referenceces_0522.docx

@xinyi030 - I updated some of the references, highlighted in yellow (4 changes: two in-text citations and 2 additional references to be included).

xinyi030 commented 1 year ago

Thanks Team,

Updating the draft based on your ongoing work. Please check the attached file. NonStat.pdf

I encourage all of you to check your sections, and let me know if you have any questions/feedbacks by today. ❤️

Thanks for the nice work!

Best, Xinyi


@UjjwalSehrawat Could you double-check and modify the expression of the model Phi(x_j, a) = exp(alpha + beta*x_j) / (1 + exp(alpha + beta*x_j))? (You reviewed it in Piantadosi, Fisher, and Grossman 1999.) I'll translate this into LaTeX once you've done that. Thanks!

xinyi030 commented 1 year ago

Hi Team,

Just a quick reminder - please review and finalize any pending edits on your individual sections by May 24, noon. Our team is currently in the process of preparing for the presentation, during which your contributions might be included.

Thanks, Xinyi

UjjwalSehrawat commented 1 year ago

Hi @xinyi030 Xinyi. My apologies for the delay but I have added my methodology section with improvements in my summaries and a few additional references. I have updated my original content with all these.

Best, Ujjwal

xinyi030 commented 1 year ago

Hello Team,

Our upcoming presentation is just around the corner. Please revisit your work and try to answer any questions raised today that are related to your particular sections. It would be beneficial for us all to be prepared with clear and comprehensive responses.

Best, Xinyi

cbayow22 commented 1 year ago


@GabeNicholson