Research
AI-driven security and software engineering
Modern society depends on critical infrastructure run by large, complex software systems, yet software remains expensive to develop and difficult to verify. AI will change how we build, verify, and maintain software. Toward this goal, we are making progress on:
- Teaching AI models to understand and generate code (EMNLP 2025)
- Evaluating AI models for code (NeurIPS 2025)
- Testing software automatically (ICSE 2025, CCS 2024, CCS 2023)
Trustworthy AI
While machine learning has made great strides, it is not yet ready for many scenarios because of concerns over its security, privacy, and robustness. We are making progress toward trustworthy AI, including:
- Attacks against safety-aligned LLMs (NeurIPS 2024)
- AI agent security (RAIE)