Abstract
Misinformation on algorithmic social media platforms spreads rapidly, with approximately half of all impressions occurring within the first 80 minutes. Traditional fact-checking mechanisms operate on timescales of hours to days, arriving too late to have a significant effect. Early-stage tools such as prebunking, AI detection with labelling, and fast fact-checking could intervene within this critical window, yet their comparative and combined effectiveness remains largely unexplored. This study evaluates these three tools, examining how intervention timing, algorithmic amplification, and tool combinations affect misinformation spread.
A novel ODE compartmental model is developed that incorporates three distinct pathways of exposure and intervention receipt, time-dependent intervention activation, and algorithmic amplification modelled as a linear growth term. The model is evaluated through simulations that systematically vary intervention timing and amplification strength.
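To make the model structure concrete, the sketch below shows one minimal way such a model could be encoded, assuming a simplified susceptible/spreader/corrected structure, a single linear amplification coefficient, and purely illustrative parameter values; the full model in this work includes additional compartments for the three exposure and intervention-receipt pathways.

```python
# Minimal sketch of an ODE compartmental model of the kind described above.
# Assumptions (not from the study): three compartments S (susceptible),
# A (active spreaders), C (corrected), and illustrative parameter values.
import numpy as np
from scipy.integrate import solve_ivp

def misinformation_ode(t, y, beta0, amp, gamma, t_act, prebunk, detect, factcheck):
    """Right-hand side in population fractions; time t is in minutes."""
    S, A, C = y
    # Algorithmic amplification as a linear growth term on the base transmission rate.
    beta = beta0 * (1.0 + amp * t)
    # Time-dependent activation: interventions switch on at t_act minutes.
    on = 1.0 if t >= t_act else 0.0
    eff_beta = beta * (1.0 - on * prebunk) * (1.0 - on * detect)  # exposure dampened
    eff_gamma = gamma + on * factcheck                            # spreaders corrected faster
    dS = -eff_beta * S * A
    dA = eff_beta * S * A - eff_gamma * A
    dC = eff_gamma * A
    return [dS, dA, dC]

# Illustrative comparison: combined interventions activated at 60 vs. 360 minutes.
for t_act in (60.0, 360.0):
    sol = solve_ivp(
        misinformation_ode, t_span=(0.0, 720.0), y0=[0.99, 0.01, 0.0],
        args=(0.004, 0.001, 0.002, t_act, 0.3, 0.3, 0.002),
        max_step=1.0,
    )
    print(f"activation at {t_act:.0f} min -> final corrected share: {sol.y[2, -1]:.3f}")
```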
Results show that intervention timing critically moderates effectiveness. Under moderate amplification factors, fully combined early interventions deployed within 60 minutes can reduce total active spreaders by 3.3-3.6% (a relative reduction of 45-55%) and total cumulative exposure by 16-28% (a relative reduction of 16-33%), compared with deployment after 360 minutes. When algorithmic amplification is applied, the combined early interventions not only deliver a substantial reduction but also rein in the rising misinformation growth and cap the spread.
The finding that combined early-stage interventions are highly effective at reducing misinformation spread provides evidence-based guidance for platform designers and policymakers, suggesting prioritised investment in misinformation and fake-news recognition training, faster AI detection, and faster fact-checking.


