
Special Article - Biostatistics Theory and Methods

Austin Biom and Biostat. 2015;2(3): 1025.

# Joint Modeling of Treatment Effect on Time-to-Event Endpoint and Safety Covariates in Control Clinical Trial Data Analysis

Kao-Tai Tsai* and Karl E. Peace

JPHCOPH, Georgia Southern University, USA

*Corresponding author: Kao-Tai Tsai, JPHCOPH, Georgia Southern University, Statesboro, GA 30458, USA

Received: June 01, 2015; Accepted: June 11, 2015; Published: July 09, 2015

## Abstract

It is common practice to perform separate analyses of efficacy and safety data from clinical trials to estimate the benefit and risk aspects of a particular treatment regimen. However, by doing so, one is likely to miss the complete picture of the treatment effect, given that these data are generated from the same study subjects and are therefore most likely correlated. It is thus desirable to analyze these data jointly to obtain a more complete profile of the treatment regimen. A substantial number of statistical methodologies have been proposed in the last decade to jointly model time-to-event data and longitudinal repeated measures. These methods provide better insight into the treatment effect on the time-to-event endpoint by incorporating the information contained in the longitudinal repeated measures. In this article, we utilize the joint modeling method to analyze time-to-event data, such as patient overall survival, together with repeated measures of laboratory test data to better estimate the treatment effect of a regimen. Data from a recent oncology clinical trial are used to illustrate the application of the proposed method.

Keywords: Joint modeling; Time-to-event data; Longitudinal repeated measures; Controlled clinical trials

## Introduction

During the course of a clinical trial, several types of data are usually collected. These include data to investigate the efficacy of the test drug, demographic data of the subjects under study, laboratory data to understand the pharmacological effect of the treatment on the body, possible adverse effects, etc. Conventionally, it is common practice to analyze the efficacy and safety data separately to estimate the benefit and risk aspects of the treatment regimen. However, by performing separate analyses, one is likely to miss the complete picture of the treatment effect, given that these data are generated from the same study subjects and are therefore most likely correlated.

For example, in cancer clinical trials studying patient survival after treatment, patients are usually given treatments that can cause substantial neutropenia, namely, a reduction of white blood cells. Severe neutropenia can lead to infection or sepsis, which can in turn lead to other complications that affect the patient's survival. This indirect treatment effect is usually not captured by the efficacy analysis alone. Therefore, it is more desirable to analyze these data together to obtain a more complete profile of the treatment regimen. Similar strategies have also been implemented in studies of HIV, neuroscience, and prostate cancer, to name a few. In the following, we focus our attention on the joint analysis of survival data and repeated measures of laboratory parameters such as white blood cell counts and other adverse effects.

A substantial number of statistical methodologies have been proposed in the last decade to model time-to-event data and longitudinal repeated measures jointly: for example, Tsiatis et al. [1] in the study of AIDS, Diggle [2] in the study of patients with schizophrenia symptoms, Henderson et al. [3] in the study of positive and negative symptom scales in neuroscience, and Law et al. [4] in the study of disease progression biomarkers.

Using parametric or semi-parametric maximum likelihood or Bayesian methods, these authors estimated the parameters for both the longitudinal and event processes and derived the associated asymptotic properties of the estimates. They also showed that estimates from the joint model are usually more efficient than estimates from separate models.

In this article, we utilize the joint modeling method to analyze patient overall survival, incorporating laboratory data on neutrophil counts to better estimate the treatment effect of an experimental cancer therapy. We describe the statistical methodology of joint modeling in Section 2, followed by parameter estimation and model diagnostics in Section 3. In Section 4, we illustrate the application of these methods with data from a recent cancer drug study. We conclude with a discussion in Section 5.

## Joint Modeling Methods

Let $T_i$ denote the observed failure time for the $i$-th subject ($i=1,\ldots,n$), which is taken as the minimum of the true event time $T_i^{*}$ and the censoring time $C_i$, namely, $T_i = \min(T_i^{*}, C_i)$. Furthermore, we define the event indicator as $\delta_i = I(T_i^{*} < C_i)$, where $I(\cdot)$ is the indicator function that takes the value 1 if the condition $T_i^{*} < C_i$ is satisfied, and 0 otherwise. Thus, the observed data for the time-to-event outcome consist of the pairs $\{(T_i, \delta_i), i=1,\ldots,n\}$.
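To make the notation concrete, the construction of the observed pairs $(T_i, \delta_i)$ can be sketched in a few lines of Python; the event and censoring times below are simulated, hypothetical values used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical true event times T* and independent censoring times C.
true_times = rng.exponential(scale=2.0, size=n)
censor_times = rng.uniform(1.0, 3.0, size=n)

# The observed time is the minimum of the two; the event indicator
# delta_i flags whether the true event occurred before censoring.
observed_T = np.minimum(true_times, censor_times)
delta = (true_times < censor_times).astype(int)

pairs = list(zip(observed_T.round(3), delta))
```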

For the longitudinal responses, let yi(t) denote the value of the longitudinal outcome at time point t for the i-th subject. We note that we do not actually observe yi(t) at all time points, but only at the specific occasions tij at which measurements were taken. Thus, the observed longitudinal data consist of the measurements yij={yi(tij), j=1,…,ni}.

As in many clinical trials, data could be measured with errors due to the limitation of instruments and quantifications. That is especially true for the laboratory data. Therefore, as a general setting, we assume the following relationship between the observed value yi(t) and the true underlying unobserved value mi(t):

$y_i(t) = m_i(t) + \varepsilon_i$, (1)

where εi is the random error following a continuous distribution function. In the following, we will assume εi follows a normal distribution for simplicity.

For the event process, we assume the following hazard model:

$h_i(t \mid w_i) = h_0(t)\exp\{\gamma^{T} w_i\}$, (2)

where $w_i$ is the vector of covariates and $\gamma$ the corresponding vector of regression coefficients. One of our aims is to associate $m_i(t)$ with the event outcome $T_i$, in addition to the vector of covariates $w_i$, to better estimate the endpoints of interest.

To quantify the effect of $m_i(t)$ on the risk for an event, a commonly adopted approach is the relative risk model proposed by Therneau and Grambsch [5]:

$h_i(t \mid \mathcal{M}_i(t), w_i) = h_0(t)\exp\{\gamma^{T} w_i + \alpha m_i(t)\}$, (3)

where $\mathcal{M}_i(t) = \{m_i(u), 0 < u < t\}$ denotes the history of the true unobserved longitudinal process up to time $t$, $h_0(\cdot)$ denotes the baseline risk function, and $w_i$ is a vector of baseline covariates (such as a treatment indicator, history of diseases, etc.) with corresponding vector of regression coefficients $\gamma$.

In the model above, the parameter $\alpha$ quantifies the effect of the underlying longitudinal outcome on the risk for an event (additively on the log-hazard scale); for instance, in the example section below, $\alpha$ measures the effect of the value of Absolute Neutrophil Counts (ANC) on the risk of death, reflecting the fact that a low ANC value is likely to lead to infection, which can indirectly cause medical complications that affect overall patient survival.
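As an illustration of how a lower longitudinal value can raise the hazard under the relative risk model, here is a minimal Python sketch; the baseline risk function and the values of $\gamma$ and $\alpha$ are hypothetical, not estimates from the trial:

```python
import numpy as np

def hazard(t, w, m_t, h0, gamma, alpha):
    """Relative risk model: h_i(t) = h0(t) * exp(gamma'w + alpha * m_i(t))."""
    return h0(t) * np.exp(np.dot(gamma, w) + alpha * m_t)

# Hypothetical values for illustration only:
h0 = lambda t: 1.5 * np.sqrt(t)   # Weibull-type baseline risk
gamma = np.array([-0.3])          # coefficient of a binary treatment indicator
alpha = -0.5                      # effect of the current longitudinal value

# With alpha < 0, a lower ANC-like value m_i(t) implies a higher hazard.
h_low_anc  = hazard(2.0, np.array([1.0]), m_t=0.5, h0=h0, gamma=gamma, alpha=alpha)
h_high_anc = hazard(2.0, np.array([1.0]), m_t=4.0, h0=h0, gamma=gamma, alpha=alpha)
```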

The baseline risk function $h_0(\cdot)$ is typically left unspecified. However, within the joint modeling framework, Hsieh et al. [6] noted that leaving $h_0(\cdot)$ unspecified can lead to underestimated standard errors of the parameter estimates. To avoid this problem, one can specify the function parametrically, e.g., using the Weibull [7] or Gamma distributions, or use more flexible models in which $h_0(\cdot)$ is approximated using step functions or spline-based approaches. Alternatively, if the proportionality assumption in (2) or (3) fails, one can use the accelerated failure time model.

In order to incorporate a time-dependent covariate within this framework, we define the survival function of subject $i$ in terms of the baseline survival function $S_0$ as

$S_i(t \mid \mathcal{M}_i(t), w_i) = S_0\left(\int_0^t \exp\{\gamma^{T} w_i + \alpha m_i(s)\}\, ds\right)$, (4)

with the corresponding hazard function for subject $i$ being

$h_i(t \mid \mathcal{M}_i(t), w_i) = h_0\{V_i(t)\}\exp\{\gamma^{T} w_i + \alpha m_i(t)\}$, (5)

where

$V_i(t) = \int_0^t \exp\{\gamma^{T} w_i + \alpha m_i(s)\}\, ds$. (6)

An important difference between equations (5) and (3) is that in the former the entire covariate history $\mathcal{M}_i(t)$ is assumed to influence the subject-specific risk (because $h_0(\cdot)$ is evaluated at $V_i(t)$), whereas in the latter the subject-specific risk depends only on the current value of the time-dependent covariate $m_i(t)$. The survival function for a subject with covariate history $\mathcal{M}_i(t)$ equals $S_i\{t \mid \mathcal{M}_i(t)\} = S_0\{V_i(t)\}$, which means that this subject ages on an accelerated schedule $V_i(t)$ compared to $S_0$.
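The accelerated schedule $V_i(t)$ can be evaluated numerically for a given trajectory; in the sketch below, the linear trajectory $m_i(s)$ and the coefficient values are hypothetical illustrations:

```python
import numpy as np

def V(t, w, m, gamma, alpha, num=500):
    """Accelerated schedule V_i(t) = integral_0^t exp{gamma'w + alpha*m_i(s)} ds,
    approximated by the trapezoidal rule."""
    s = np.linspace(0.0, t, num)
    f = np.exp(np.dot(gamma, w) + alpha * m(s))
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)

# Hypothetical declining linear trajectory and coefficients.
m = lambda s: 2.0 - 0.1 * s
gamma, alpha, w = np.array([0.2]), -0.3, np.array([1.0])
```

Note that with $\gamma = 0$ and $\alpha = 0$ the schedule reduces to calendar time, $V_i(t) = t$, which provides a simple sanity check.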

Equation (1) is a general framework for the relationship between the observed and true underlying data. The model needs to be explicitly specified during the data analysis to take into account the intermittent nature of the data collection. Namely, for subject $i$, one only observes $y_{ij} = y_i(t_{ij})$ at the set of time points $\{t_{ij}, j=1,\ldots,n_i\}$.

Assuming a normal error distribution and a linear mixed effects model to describe the subject-specific longitudinal process, we have

$y_i(t_{ij}) = m_i(t_{ij}) + \varepsilon_i(t_{ij}) = x_i^{T}(t_{ij})\beta + z_i^{T}(t_{ij}) b_i + \varepsilon_i(t_{ij}), \quad \varepsilon_i(t_{ij}) \sim N(0, \sigma^2),$ (7)

where $\beta$ denotes the vector of unknown fixed effects parameters, $b_i$ denotes a vector of random effects, $x_i(t)$ and $z_i(t)$ denote row vectors of the design matrices for the fixed and random effects, respectively, and $\varepsilon_i(t)$ is the measurement error term, assumed to be independent of $b_i$ with mean 0 and variance $\sigma^2$.
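A minimal Python sketch of generating one subject's trajectory under this linear mixed effects specification; the values of $\beta$, $\sigma$, and the random-effects covariance matrix are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

beta = np.array([2.0, -0.1])               # fixed effects: intercept and time slope
sigma = 0.2                                # measurement-error standard deviation
D = np.diag([0.5, 0.01])                   # random-effects covariance (hypothetical)

t = np.array([0.0, 1.0, 2.0, 4.0])         # visit times t_ij for one subject
X = np.column_stack([np.ones_like(t), t])  # fixed-effects design rows x_i(t_ij)
Z = X                                      # random intercept and slope: z_i(t_ij)

b = rng.multivariate_normal(np.zeros(2), D)   # subject-specific deviations b_i
m = X @ beta + Z @ b                          # true trajectory m_i(t_ij)
y = m + rng.normal(0.0, sigma, size=t.size)   # observed values y_i(t_ij)
```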

## Parameter estimation and model diagnostics

Several estimation methods have been proposed for joint modeling, e.g., semiparametric maximum likelihood (Hsieh et al. [6]; Henderson et al. [3]; Wulfsohn and Tsiatis [8]; Tsiatis and Davidian [9]) and Bayesian methods (Chi and Ibrahim [10]; Brown and Ibrahim [11]; Wang and Taylor [8]; Xu and Zeger [12]).

Briefly, maximum likelihood estimation for joint models is based on the maximization of the likelihood corresponding to the joint distribution of the time-to-event and longitudinal outcomes $\{T_i, \delta_i, y_i\}$. Since the time-independent random effects $b_i$ underlie both the longitudinal and survival processes, we assume

$f(T_i, \delta_i, y_i \mid b_i; \theta) = f(T_i, \delta_i \mid b_i; \theta)\, f(y_i \mid b_i; \theta)$, (8)

with

$f(y_i \mid b_i; \theta) = \prod_j f\{y_i(t_{ij}) \mid b_i; \theta\}$, (9)

where $\theta = (\theta_t^{\prime}, \theta_y^{\prime}, \theta_b^{\prime})^{\prime}$ denotes the parameter vector, with $\theta_t$ denoting the parameters for the event time outcome, $\theta_y$ the parameters for the longitudinal outcomes, and $\theta_b$ the unique parameters of the random-effects covariance matrix, and $f(\cdot)$ denotes an appropriate probability density function for the longitudinal or event process.

Under the modeling assumptions and the conditional independence assumption in equation (8), with $f\{y_i(t_{ij}) \mid b_i; \theta_y\}$ being the univariate normal density for the longitudinal responses and $f(b_i; \theta_b)$ the multivariate normal density for the random effects, the joint log-likelihood contribution for the $i$-th subject is

$\log f(T_i, \delta_i, y_i; \theta) = \log \int f(T_i, \delta_i \mid b_i; \theta_t, \beta) \left[\prod_j f\{y_i(t_{ij}) \mid b_i; \theta_y\}\right] f(b_i; \theta_b)\, db_i$, (10)

where the likelihood of the event process is

$f(T_i, \delta_i \mid b_i; \theta_t, \beta) = \{h_i(T_i \mid \mathcal{M}_i(T_i); \theta_t, \beta)\}^{\delta_i}\, S_i(T_i \mid \mathcal{M}_i(T_i); \theta_t, \beta)$, (11)

with hi(.) given by either (3) or (5), and

$S_i(t \mid \mathcal{M}_i(t), w_i; \theta_t, \beta) = \Pr(T_i^{*} > t \mid \mathcal{M}_i(t), w_i; \theta_t, \beta)$.
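The event-process contribution $h_i(T_i)^{\delta_i} S_i(T_i)$ can be computed numerically once a hazard function is supplied, using the standard identity $S(t) = \exp\{-\int_0^t h(s)\, ds\}$; a sketch with a hypothetical constant hazard:

```python
import numpy as np

def event_loglik(T, delta, hazard, num=1000):
    """Event-process log-likelihood contribution log[h(T)^delta * S(T)],
    with S(T) = exp(-integral_0^T h(s) ds) via the trapezoidal rule."""
    s = np.linspace(0.0, T, num)
    h = hazard(s)
    cum_hazard = float(np.sum((h[1:] + h[:-1]) * np.diff(s)) / 2.0)
    return delta * np.log(hazard(np.array([T]))[0]) - cum_hazard

# Constant hazard h(s) = 0.5 (hypothetical): an event at T = 2 contributes
# log(0.5) - 0.5 * 2 to the log-likelihood.
ll = event_loglik(2.0, 1, lambda s: np.full_like(s, 0.5))
```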

Since the integral in (10) generally has no analytical form, the maximization of the log-likelihood function (10) with respect to $\theta$ is conventionally performed using numerical integration techniques such as Gaussian quadrature or Monte Carlo integration. These approaches have been successfully applied in the joint modeling framework by the authors mentioned previously.
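For a one-dimensional normal random effect, the Gaussian quadrature approach amounts to a Gauss-Hermite approximation of expectations under the random-effects density; a self-contained sketch (the random-effect standard deviation is a hypothetical value):

```python
import numpy as np

def gh_expectation(g, sigma_b, order=15):
    """Approximate E[g(b)] with b ~ N(0, sigma_b^2) by Gauss-Hermite quadrature:
    integral of g(b)*phi(b) db ~= sum_k w_k * g(sqrt(2)*sigma_b*x_k) / sqrt(pi)."""
    x, w = np.polynomial.hermite.hermgauss(order)
    return float(np.sum(w * g(np.sqrt(2.0) * sigma_b * x)) / np.sqrt(np.pi))

# Sanity check: the second moment of b should equal sigma_b^2.
second_moment = gh_expectation(lambda b: b ** 2, sigma_b=0.7)
```

In the joint model, $g(\cdot)$ would be the product of the event-process and longitudinal densities conditional on $b_i$ in (10); the quadrature is exact for polynomial integrands, which makes low orders surprisingly accurate here.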

Residual plots are the conventional method for model diagnostics to verify the appropriateness of the distributional assumptions and the adequacy of the assumed model. Model diagnostics for the linear mixed model and the time-to-event model have been well studied in the literature. However, given the inter-dependency between the longitudinal process and the event process, extra caution is needed in the diagnostics for the joint model. For example, when a subject discontinues the study or dies, the data for both processes are no longer available.

When subject discontinuation is random, the residuals may be less affected than when discontinuation is influenced by treatment failure, which causes informative missingness. Rizopoulos et al. [13] proposed a method to augment the observed data with randomly imputed longitudinal responses. Briefly, based on the parameter estimates of the joint model fitted to the available data, they performed multiple imputation by repeatedly sampling from the posterior distribution of the missing observations given the observed data. The complete profile of the longitudinal data can thus be established for each subject. The advantage of using the simulated values together with the observed data to calculate residuals is that these residuals now inherit the properties of the complete-data model, and therefore they can be used directly in diagnostic plots.

## Example

In this section, we present an example from a recent clinical trial to illustrate the use of the procedures described above. This is a multicenter clinical trial investigating the effect of an experimental medicine on breast cancer. Patients were randomized into two groups, treatment and placebo, to study the treatment benefit in disease progression. During the trial, the patients' laboratory data on Absolute Neutrophil Counts (ANC) were also collected at each treatment visit to monitor the level of neutrophils. Low levels of neutrophils can cause infection and lead to other complications that affect the patient's disease. Even though the primary endpoint of the study is the treatment effect on disease progression, which is usually estimated using efficacy data only, it is more informative to understand how the safety aspects of the study also contribute to the patient's disease progression.

The longitudinal ANC data were analyzed using a linear mixed effects model with a random intercept and fixed treatment and visit effects. Since the LME model requires the normality assumption, the data were transformed using the Box-Cox power function to conform to the normality assumption. The normal probability plot after the transformation is shown in Figure 1.
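The Box-Cox power transform itself is simple to state; below is a sketch applied to hypothetical ANC-like values, with an illustrative exponent of $\lambda = 0.5$ (in practice $\lambda$ is estimated, e.g., by profile likelihood):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform for positive data:
    (y^lam - 1)/lam when lam != 0, and log(y) when lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# Hypothetical right-skewed ANC-like values; lam = 0.5 is illustrative only.
anc = np.array([0.8, 1.2, 2.5, 4.0, 9.5])
transformed = box_cox(anc, 0.5)
```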