Abstract
Guaranteeing autonomous vehicle (AV) safety requires scalable evaluations that capture both everyday driving and critical edge cases. Scenario-based testing has become a key strategy, with scenario generation at its core. We classify generation techniques into three categories: rule-oriented, data-driven, and learning-enabled. For each category, we examine representative approaches, simulation tools, description standards, and assessment metrics addressing fidelity, diversity, and risk exposure. Persistent issues include the simulation-reality gap, limited transferability, and the challenge of modeling rare events. Emerging directions such as language-guided generation, hybrid architectures, and open scenario repositories point toward more reliable and certifiable testing of AV systems.