A Trust-Centered AI and Security Modeling Approach for Early Cancer Diagnosis, Population-Level Health Analysis, and Secure Deployment in U.S. Healthcare Infrastructure

Authors

  • Md Fokhrul Alam, Department of Computer Science, Bachelor of Science in Computer Science & Engineering, Southeast University, Dhaka, Bangladesh
  • Md Fardaus Alam, Department of Science & Technology, Diploma in Computer Science and Application, Bangladesh Open University, Gazipur, Bangladesh
  • Md Ashraful Alam, Department of Computer Science, Bachelor of Science in Computer Science & Engineering, Southeast University, Dhaka, Bangladesh

DOI:

https://doi.org/10.63125/zyvt7f56

Keywords:

Trust-Centered AI, Early Cancer Diagnosis, Security and Privacy Modeling, Population Health Analytics, Deployment Readiness

Abstract

This study addresses the persistent gap between high-performing cancer AI prototypes and real-world adoption by proposing and testing a trust-centered AI plus security-modeling blueprint for early cancer diagnosis and population-level health analysis within U.S. healthcare infrastructure. The purpose was to quantify how often published “enterprise-ready” capabilities actually co-occur with deployability pillars such as interpretability, robustness, security, and equity. Using a quantitative, cross-sectional, case-based review design, each included paper was treated as a case reflecting cloud and enterprise healthcare deployment contexts (for example, multi-site systems, integrated EHR and imaging stacks, or networked inference services). The sample comprised 45 cases (N = 45). Key variables were five Likert-scored readiness dimensions (1–5): clinical validation rigor, interpretability and communication support, robustness and generalization evidence, security and privacy modeling, and fairness and equity evidence, plus composite indicators such as trust-mechanism presence and a rubric-scaled Trust-Centered Deployment Readiness (TDR) score. The analysis plan applied descriptive statistics (counts, percentages, means), cross-tabulations between trust-mechanism grouping and validation readiness, and a composite readiness summary. Headline findings show that radiology and pathology cases dominated (31/45, 68.9%), interpretability appeared in 28/45 (62.2%) but comprehensive interpretability was limited (12/45, 26.7%; mean M = 3.1/5), external validation or multi-site evaluation occurred in only 16/45 (35.6%), and explicit security or privacy-by-design elements were present in 14/45 (31.1%) with the lowest readiness mean (M = 2.4/5). Trust-mechanism studies (19/45, 42.2%) showed higher validation readiness (M = 3.6/5) and more external validation (11/19, 57.9%) than performance-only studies (5/26, 19.2%).
Overall, only 8/45 (17.8%) met high composite readiness (≥0.75), while 21/45 (46.7%) were moderate and 16/45 (35.6%) low. Implications indicate that healthcare AI procurement and governance should prioritize a complete evidence package that couples external validation, calibrated trust cues, security controls across the lifecycle, and subgroup equity reporting, rather than selecting models based on accuracy alone.
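The abstract reports composite readiness on a 0–1 scale with a ≥0.75 cut-point for the high band but does not spell out how the five 1–5 Likert dimensions are combined. The sketch below is one plausible reading, assuming the composite is the mean of the five dimension scores linearly rescaled from [1, 5] to [0, 1]; the moderate/low cut-point is illustrative, since only the 0.75 threshold appears in the text.

```python
def composite_readiness(scores):
    """Composite readiness on a 0-1 scale from five 1-5 Likert scores.

    Assumption (not stated in the abstract): the composite is the mean
    of the five dimension scores rescaled from [1, 5] to [0, 1].
    """
    if len(scores) != 5 or any(not 1 <= s <= 5 for s in scores):
        raise ValueError("expected five Likert scores in the range 1..5")
    return sum((s - 1) / 4 for s in scores) / 5


def readiness_band(composite, moderate_cut=0.5):
    """Band a composite score. Only the >= 0.75 'high' cut-point comes
    from the abstract; the 'moderate' cut-point here is a placeholder."""
    if composite >= 0.75:
        return "high"
    return "moderate" if composite >= moderate_cut else "low"


# Example: a case scoring 4/5 on every dimension lands exactly at the
# high-readiness threshold under this assumed scaling.
print(readiness_band(composite_readiness([4, 4, 4, 4, 4])))  # high
```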

Published

2020-12-03

How to Cite

Md Fokhrul Alam, Md Fardaus Alam, & Md Ashraful Alam. (2020). A Trust-Centered AI and Security Modeling Approach for Early Cancer Diagnosis, Population-Level Health Analysis, and Secure Deployment in U.S. Healthcare Infrastructure. American Journal of Advanced Technology and Engineering Solutions, 1(01), 01-39. https://doi.org/10.63125/zyvt7f56
