Federated Learning for Human-Centric ITS: Integrating Multimodal Interaction and Cross-Regional Adaptation

13 October 2025, Version 1

Abstract

Intelligent Transportation Systems (ITS) struggle with privacy risks, cross-regional heterogeneity, and inadequate human-centric design, gaps that fragmented research has yet to address. This paper proposes a federated human-centric ITS framework that unifies privacy-preserving collaboration, multimodal interaction, and standardized evaluation. The framework uses dynamic sparse federated learning to reduce communication overhead while enabling few-shot cross-regional adaptation. It integrates context-gated spoken language understanding for natural driver interaction, zero-shot recommendation for personalized travel services, and VR-informed interface design to minimize cognitive load. Performance is assessed with super-efficiency SBM-DEA, which jointly measures technical efficiency and user experience. Experiments on METR-LA and PEMS-BAY show 18.7% higher traffic prediction accuracy and 62.3% lower communication cost than baseline federated models, with 89% user satisfaction in VR evaluations. By balancing privacy, adaptability, and human-centricity, this work provides a scalable solution for next-generation ITS.
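To make the communication-reduction idea behind "dynamic sparse federated learning" concrete, the sketch below shows one common way sparsity is applied in federated aggregation: each client keeps only the top-k largest-magnitude entries of its model update before the server averages them. This is an illustrative assumption, not the paper's actual algorithm; the function names, the k_fraction parameter, and the unweighted averaging are all hypothetical simplifications.

```python
import numpy as np


def top_k_sparsify(update, k_fraction=0.01):
    """Keep only the k largest-magnitude entries of a client update; zero the rest.

    Sending only the retained entries (values plus indices) is what cuts
    per-round communication in sparse federated schemes.
    """
    flat = update.ravel()
    k = max(1, int(k_fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest magnitudes
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)


def federated_round(global_weights, client_updates, k_fraction=0.01):
    """One aggregation round: sparsify each client update, then average and apply."""
    sparse_updates = [top_k_sparsify(u, k_fraction) for u in client_updates]
    # Unweighted mean for simplicity; real deployments typically weight by client data size.
    mean_update = np.mean(sparse_updates, axis=0)
    return global_weights + mean_update


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 4))
    updates = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(5)]
    new_weights = federated_round(weights, updates, k_fraction=0.25)
    print(new_weights)
```

In a sketch like this, the communication saving comes from transmitting only the non-zero entries of each sparsified update; the fraction retained (here k_fraction) trades bandwidth against per-round accuracy.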
