SESSION B

TITLE: Precision Gains in Randomized Studies Using Covariate Adjustment With Ordinal and Time-To-Event Endpoints
SPEAKER:
Iván Díaz, Cornell University

Abstract: In this tutorial we will present new software and estimation methods for ordinal and time-to-event outcomes in randomized trials that do not rely on proportional odds/hazards assumptions. The proposed estimators leverage prognostic baseline variables to obtain asymptotic precision equal to or better than that of traditional estimators. The proposed estimators have the following features: (i) they are interpretable under violations of the proportional odds/hazards assumption; (ii) they are consistent and at least as precise as the unadjusted estimators; (iii) for time-to-event outcomes, they remain consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distribution, conditional on covariates, is estimated consistently; and (iv) they achieve the nonparametric efficiency bound when both of these distributions are consistently estimated. We will illustrate the performance of our methods using simulations based on resampling data from several completed clinical trials and from hospitalized COVID-19 patients. We will show that the methods achieve substantial precision gains from covariate adjustment (equivalent to 9–21% reductions in the sample size required to achieve a desired power) for a variety of estimands (targets of inference) when the trial sample size was at least 200. We will further illustrate the use of the methods in practice using our novel R packages.
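To make the idea of covariate adjustment concrete, below is a minimal, hypothetical sketch in Python (not the speakers' R packages or their estimators) of a standardized, covariate-adjusted estimate of the difference in mean ordinal scores between arms. The simulated data, variable names, and the multinomial working model are illustrative assumptions only.

```python
# Hypothetical sketch: covariate adjustment for an ordinal endpoint in a randomized
# trial via model standardization (g-computation). The simulated data, variable names,
# and the multinomial working model are illustrative assumptions, not the tutorial's
# methods or R packages. The estimand (difference in mean ordinal scores) is
# interpretable without a proportional-odds assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Simulated trial: X is a prognostic baseline covariate, A is a 1:1 randomized
# treatment indicator, Y is an ordinal outcome (levels 0-3) depending on X and A.
X = rng.normal(size=n)
A = rng.integers(0, 2, size=n)
latent = 0.8 * X + 0.5 * A + rng.logistic(size=n)
Y = np.digitize(latent, bins=[-1.0, 0.5, 2.0])

def adjusted_arm_mean(a):
    """Standardized mean score under arm a: fit a working model for P(Y = k | X)
    among participants randomized to arm a, then average the predicted category
    probabilities over the covariate distribution of the whole sample."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X[A == a].reshape(-1, 1), Y[A == a])
    p = model.predict_proba(X.reshape(-1, 1))       # n x K category probabilities
    return (p * model.classes_).sum(axis=1).mean()  # standardized mean ordinal score

unadjusted = Y[A == 1].mean() - Y[A == 0].mean()
adjusted = adjusted_arm_mean(1) - adjusted_arm_mean(0)
print(f"Unadjusted difference in mean scores:         {unadjusted:.3f}")
print(f"Covariate-adjusted (standardized) difference: {adjusted:.3f}")
```

Under randomization, this kind of standardization estimator typically remains consistent for the marginal estimand even if the working outcome model is misspecified, and adjusting for a prognostic covariate tends to reduce variance; the specific estimators, guarantees, and precision gains described in the abstract are those of the speakers' methods and software.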

Instructor's Biography:

Iván Díaz, Ph.D., is an Assistant Professor of Biostatistics at Weill Cornell Medicine. He completed his Ph.D. in Biostatistics at UC Berkeley and was a postdoctoral fellow in the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health. His research focuses on statistical methods for causal inference from observational and randomized studies with complex datasets. He works at the intersection of causal inference, machine learning, and mathematical statistics to develop methods that provide relevant answers to substantive questions using state-of-the-art data analysis techniques.
