Abstract

Immune checkpoint blockade (ICB) has revolutionized cancer therapy, yet many patients derive limited benefit or experience adverse effects. Identifying which patients are likely to respond to ICB is therefore crucial for guiding treatment decisions and improving outcomes. Although several studies have applied machine learning (ML) to clinical and laboratory data to predict ICB response, independent validation in large, real-world cohorts remains limited. This gap, driven in part by the small number of available cohorts, restricts model generalizability and hinders broader clinical use. To expand on the most promising model, published as a logistic regression-based immunotherapy-response score (LORIS), we performed external validation using data collected at Moffitt Cancer Center.

We analyzed 2,090 patients with advanced melanoma (n=908), non-small cell lung cancer (NSCLC; n=878), and renal cell carcinoma (RCC; n=304) treated with ICB at Moffitt between 2011 and 2025. The six clinical and laboratory features used in the original LORIS model (age, cancer type, prior systemic therapy, albumin, neutrophil-to-lymphocyte ratio, and tumor mutational burden) were extracted from patient records. Two validation strategies were implemented. First, we evaluated the published LORIS model by directly applying its coefficients to the Moffitt data; no retraining was performed, and all patients were used exclusively for testing, both within each cancer type and in a combined pan-cancer cohort. Second, to assess model reproducibility, we trained a new logistic regression model using an 80/20 train-test split of the Moffitt cohort and compared its performance with LORIS. All evaluations used 1- and 6-month pretreatment windows, with model performance quantified by the area under the receiver operating characteristic curve (AUC). Feature contributions were examined to identify the predictors driving response.

ICB response prediction performance varied substantially across diseases.
The highest performance was observed in RCC, where the model achieved an AUC of 0.85 (1-month pretreatment window) and 0.84 (6-month window). Melanoma showed moderate performance (AUC 0.71-0.74), while NSCLC performed poorly (AUC 0.53-0.55), contributing to the reduced discriminative ability of the pan-cancer model (AUC 0.56-0.61). Incorporating Programmed Death-Ligand 1 (PD-L1) expression did not consistently improve predictions and in several settings decreased performance. Model performance was similar between the 1-month and 6-month pretreatment windows.

This large-scale real-world validation confirms partially reproducible predictive signals for RCC and melanoma but highlights limitations in NSCLC and heterogeneous cohorts. Our findings underscore the need for expanded multimodal approaches to improve the clinical applicability of ICB response prediction.

Citation Format: Isis Yanina Narvaez-Bandera, Alyssa Pybus, Tosin Jolaogun, Paulo C. Morais Lyra, Jeremy Goecks. Independent validation of the immunotherapy response model using real-world Moffitt Cancer Center cohort abstract. In: Proceedings of the American Association for Cancer Research Annual Meeting 2026; Part 1 (Regular Abstracts); 2026 Apr 17-22; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2026;86(7 Suppl):Abstract nr 4227.
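The two validation strategies described in the abstract can be sketched as follows. This is a minimal illustration only: the coefficients, feature values, and outcome labels below are synthetic placeholders, not the published LORIS coefficients or the Moffitt cohort data.

```python
# Sketch of the two validation strategies: (1) applying a published
# logistic-regression model's fixed coefficients to an external cohort,
# and (2) retraining on an 80/20 split of that cohort.
# All numbers here are synthetic stand-ins, NOT the LORIS values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the six LORIS-style features: age, cancer type
# (encoded), prior systemic therapy, albumin, NLR, and TMB.
X = rng.normal(size=(500, 6))
true_beta = np.array([0.2, 0.1, -0.3, 0.4, -0.5, 0.6])  # hypothetical
y = ((X @ true_beta + rng.normal(scale=1.0, size=500)) > 0).astype(int)

# Strategy 1: apply fixed "published" coefficients directly, no retraining;
# the whole external cohort serves as a test set.
beta = true_beta          # placeholder for published coefficients
intercept = -0.1          # placeholder intercept
scores = 1.0 / (1.0 + np.exp(-(X @ beta + intercept)))  # logistic score
auc_direct = roc_auc_score(y, scores)

# Strategy 2: retrain a logistic regression on an 80/20 split of the
# local cohort and evaluate on the held-out 20%.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_retrained = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print(f"direct-application AUC: {auc_direct:.2f}")
print(f"retrained AUC: {auc_retrained:.2f}")
```

Because the direct-application strategy never fits anything on the new cohort, every patient contributes to the test AUC, which is what makes it a genuinely external validation.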
DOI: https://doi.org/10.1158/1538-7445.am2026-4227