Fraud-Detection Algorithms for Identifying Anomalous Transactions in Retail Banking Networks
DOI: https://doi.org/10.63125/pefa8x59

Keywords: Fraud Detection, Anomalous Transaction Identification, Explainability, False-Positive Burden, Concept Drift Readiness

Abstract
This study addresses a practical problem in retail banking: fraud-detection algorithms can appear effective yet still produce low-trust alerts, a heavy false-positive workload, and weak readiness for shifting fraud patterns, reducing their real operational value. Using a quantitative, cross-sectional, case-study–based design grounded in an enterprise retail banking network, the purpose was to test how six fraud-detection capability dimensions (Data Quality and Feature Readiness, Real-Time Processing, Model Robustness, Explainability, Integration and Scalability, and Monitoring and Updating) predict the primary outcome, Anomalous Transaction Identification Performance (ATIP), and three trust-centered outcomes: Alert Quality Index (AQI), False-Positive Burden Score (FPBS), and Drift Readiness Score (DRS). The sample comprised N = 200 professionals embedded in fraud operations, risk/compliance, and analytics/IT roles within the case environment. The analysis plan used descriptive statistics, reliability testing (Cronbach's alpha), Pearson correlations, and multiple regression for hypothesis testing. Headline results show capability maturity was highest for Data Quality (M = 3.98, SD = 0.63) and Integration (M = 3.85, SD = 0.69), while the primary outcome ATIP was moderately high (M = 3.81, SD = 0.66); trust outcomes indicated moderate alert quality (AQI M = 3.76, SD = 0.68) but noticeable false-positive burden (FPBS M = 3.12, SD = 0.81) and comparatively weaker drift readiness (DRS M = 3.33, SD = 0.79). Reliability was strong across constructs (α = .81–.88). Correlations with ATIP were strongest for Data Quality (r = .62), Explainability (r = .58), and Monitoring and Updating (r = .55), all p < .001.
The regression model explained substantial variance in ATIP (R² = 0.54; F(6, 193) = 37.6; p < .001), with significant predictors including Data Quality (β = 0.28, p < .001), Explainability (β = 0.22, p = .002), Monitoring and Updating (β = 0.19, p = .006), and Real-Time Processing (β = 0.12, p = .041), while Robustness and Integration were not significant after controls. These findings imply that banks seeking measurable improvement should prioritize data readiness, explanation-centered alert design, and monitoring and update governance to raise operational trust, reduce unnecessary investigative load, and sustain performance under drift within enterprise fraud-detection deployments.
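The analysis pipeline summarized above (Cronbach's alpha reliability testing followed by multiple regression of ATIP on the six capability dimensions) can be sketched as follows. This is a minimal illustration using simulated Likert-style data, not the study's actual dataset: the sample size (N = 200) and six-predictor structure mirror the abstract, but the generated values, effect sizes, and helper names (`cronbach_alpha`, `ols_r2`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items):
    # items: (n_respondents, k_items) responses for one construct.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def ols_r2(X, y):
    # Ordinary least squares with an intercept column; returns
    # coefficient vector and R-squared for the fitted model.
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return beta, r2

# Simulated stand-in data: N = 200 respondents, six capability
# predictors on a roughly Likert-like scale (values are illustrative).
N, P = 200, 6
X = rng.normal(3.7, 0.7, size=(N, P))
y = 0.3 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(0, 0.5, N)

beta, r2 = ols_r2(X, y)  # beta[0] is the intercept; beta[1:] the slopes
```

A full replication would also report standardized betas, F-statistics, and per-coefficient p-values (e.g. via `statsmodels`); the sketch above shows only the core alpha and R² computations.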
