[Academic Talk] From Probabilistic Testing to Certifiable AI: Large Language Models and Neuro-Symbolic Reasoning for Verifiable Autonomous Systems - Xi Zheng
Posted: January 4, 2026, 08:33

Title: From Probabilistic Testing to Certifiable AI: Large Language Models and Neuro-Symbolic Reasoning for Verifiable Autonomous Systems

Speaker: Xi Zheng

Time: January 5, 2025, 10:00–12:00 a.m.

Venue: Room 417, School of Cryptology

Speaker Bio: A/Prof. Xi Zheng is an Associate Professor at Macquarie University, Australia, and an ARC Future Fellow (2024–2028). His research focuses on the testing and verification of learning-enabled cyber-physical systems, the safety of autonomous driving and unmanned aerial vehicle (UAV) systems, and verifiable and certifiable artificial intelligence. He has published extensively in top-tier international conferences and journals such as ICSE, FSE, and TSE, and has secured over AUD 2.4 million in competitive research funding. His research outcomes have been adopted by industry partners, including Ant Group and multiple UAV companies. Beyond research, he has taken on significant leadership and service roles. He serves as the TPC Chair of MobiQuitous 2026 and as an OC/TPC member for ICSE 2026, FSE 2026, PerCom 2026, and CAV 2025. He is also a co-founder of the TACPS workshop series and a co-organizer of the Shonan Seminar and Dagstuhl Seminar, with a focus on neuro-symbolic AI and large language models for reliable autonomous systems.

Abstract: Learning-enabled cyber-physical systems (LE-CPS), such as autonomous vehicles and drones, pose significant challenges for safety assurance due to the unpredictability of deep neural networks. Our FSE'22 and TSE'23 studies exposed critical gaps in industry testing practices, motivating new techniques for test reduction and scenario-based validation in autonomous systems. This talk highlights two recent vision-led directions. The FSE'24 vision revives model-based testing through Large Language Models (LLMs) and has already been adopted in industry pipelines; follow-up work in TSE'24 and ICSE'25 extends this to LLM-driven scenario generation and online testing for UAV auto-landing. The FSE'25 vision, NeuroStrata, proposes a neuro-symbolic shift from black-box learning to interpretable reasoning, enabling certifiable AI. This vision is now being realized in a neuro-symbolic perception module under real-world deployment with an Australian drone company, and it gained strong support across European institutions during my recent visit, including Oxford, Paris-Saclay, Hamburg, FSE'25, and CAV'25. Together, these efforts chart a path toward verifiable, certifiable AI for safety-critical systems.
