Background: In cancer outcome studies it is important to distinguish between the effects of pre- and post-diagnostic exposures. A common approach is to conduct a single analysis of fixed and time-varying exposures with naïve mutual adjustment. However, if changes in prognosis (e.g. recurrence) influence post-diagnostic exposure, this may introduce collider stratification bias (CSB). We conducted a simulation study to quantify the influence of CSB in studies of drug exposures and cancer outcomes.
Methods: We simulated event times for 2000 observations with a binary pre-diagnostic exposure (p=0.25) under a null hazard ratio (HR=1; maximum follow-up: 5.5 years; event rate: 0.02%/day). We then generated a time-varying post-diagnostic exposure with fixed start/stop probabilities (start=10%/year, stop=3%/year). To illustrate CSB, we multiplied the stop probability in the year prior to death by a pre-death probability factor (PDPF) of 1x to 5x. HRs for the pre-diagnostic (Pre-HR) and post-diagnostic (Post-HR) exposures were estimated using Cox regression.
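The data-generating mechanism described above can be sketched as follows. This is a minimal illustration, assuming daily Bernoulli hazards (annual probabilities divided by 365) and a PDPF of 2; all variable names are illustrative, and the Cox regression step (e.g. with a time-varying Cox model) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000                     # number of observations
MAX_DAYS = int(5.5 * 365)    # ~5.5 years of follow-up
P_EVENT = 0.0002             # 0.02%/day event rate (null: no exposure effect)
P_START = 0.10 / 365         # 10%/year exposure start probability, per day
P_STOP = 0.03 / 365          # 3%/year exposure stop probability, per day
PDPF = 2.0                   # pre-death probability factor applied to stopping

# Fixed pre-diagnostic exposure, unrelated to outcome (null HR=1)
pre_exposure = rng.binomial(1, 0.25, N)

# Event time as a geometric waiting time with constant daily hazard,
# administratively censored at MAX_DAYS
event_day = rng.geometric(P_EVENT, N)
died = event_day <= MAX_DAYS
follow_up = np.minimum(event_day, MAX_DAYS)

def simulate_exposure(days, dies):
    """Daily on/off post-diagnostic exposure; in the year before death the
    stop probability is multiplied by PDPF (the collider mechanism)."""
    on = False
    history = np.zeros(days, dtype=int)
    for t in range(days):
        in_last_year = dies and (days - t) <= 365
        p_stop = P_STOP * (PDPF if in_last_year else 1.0)
        if on:
            on = rng.random() >= p_stop   # continue unless a stop occurs
        else:
            on = rng.random() < P_START   # start exposure
        history[t] = on
    return history

exposure_histories = [simulate_exposure(follow_up[i], died[i])
                      for i in range(N)]
```

Under this setup, subjects destined to die stop exposure more often in their final year, so current exposure carries information about imminent death even though neither exposure affects the hazard.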
Results: Across 50 simulated datasets, models with no PDPF yielded mean Post-HR=1.01 and Pre-HR=1.01. Where the stop probability was doubled (PDPF=2), effects were clear (Post-HR=0.78, Pre-HR=1.37). In the extreme (but realistic) scenario (PDPF=5), effects were substantial (Post-HR=0.29, Pre-HR=2.11).
Conclusions: In scenarios where post-diagnostic exposure is modified by changes in prognosis, naïve mutual adjustment leads to spurious effect estimates for both pre- and post-diagnostic exposures.