Research pillars

Robust & secure AI

Adversarial robustness, backdoor detection and mitigation, and prompt-injection and jailbreak defenses for LLMs. Building models that remain reliable under attack.
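As a flavor of what "reliable under attack" means, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard adversarial perturbation, applied to a toy logistic model. The model, weights, and inputs are illustrative, not taken from any project of the group.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """FGSM: nudge the input in the direction that maximally
    increases the loss, bounded by epsilon per coordinate."""
    return x + epsilon * np.sign(grad)

# Toy logistic model; for cross-entropy loss, dL/dx = (p - y) * w
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])
y = 1.0
p = 1.0 / (1.0 + np.exp(-w @ x))   # confidence in the true class
grad = (p - y) * w
x_adv = fgsm_perturb(x, grad, epsilon=0.2)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
# p_adv < p: a small, targeted perturbation degrades the prediction
```

Robust training aims to keep `p_adv` close to `p` even under such worst-case perturbations.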

Uncertainty quantification (QuantumUQ)

Confidence bounds and reliable predictions in classical and quantum ML. Open-source tools for practitioners and researchers.
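One widely used route to such confidence bounds is split conformal prediction, sketched below with synthetic calibration residuals. This is a generic illustration under assumed data, not the QuantumUQ API.

```python
import numpy as np

def split_conformal_interval(residuals, y_pred, alpha=0.1):
    """Split conformal prediction: a finite-sample-corrected quantile
    of held-out |y - y_hat| residuals gives a prediction interval
    with approximately (1 - alpha) marginal coverage."""
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level)
    return y_pred - q, y_pred + q

rng = np.random.default_rng(0)
cal_residuals = np.abs(rng.normal(0.0, 1.0, size=500))  # calibration errors
lo, hi = split_conformal_interval(cal_residuals, y_pred=3.0, alpha=0.1)
# [lo, hi] covers the true value ~90% of the time on exchangeable data
```

The guarantee is distribution-free: it needs no assumption beyond exchangeability of calibration and test points.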

Privacy-preserving collaboration (FL/HE)

Federated learning and homomorphic encryption for cross-site collaboration without sharing raw data. Critical for healthcare, energy, and telecoms.
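The core federated-learning step can be sketched in a few lines: each site trains locally and shares only model parameters, which a coordinator averages weighted by dataset size (the FedAvg scheme). The site models and sizes below are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: aggregate client parameter vectors weighted by local
    dataset size; raw training data never leaves a site."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical sites (e.g. hospitals) share parameters, not records
site_models = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
site_sizes = [100, 300, 100]
global_model = federated_average(site_models, site_sizes)
# → array([2.4, 0.8])
```

Homomorphic encryption can further protect the shared parameters themselves, so even the coordinator sees only encrypted updates.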

Quantum-enhanced security & 6G resilience

Hybrid quantum–classical methods for security and resilience in next-generation communications and critical infrastructure.

The Trustworthy AI Stack

1. Data Integrity

2. Robust Learning

3. Uncertainty Quantification

4. Privacy & Secure Collaboration

5. Monitoring & Adaptation

Why this matters for society

Critical infrastructure—power grids, transport, healthcare, water, and communications—increasingly depends on AI. When these systems fail or are attacked, the impact is not just financial: it can affect safety, privacy, and trust. Trustworthy AI means building systems that are secure against adversarial and backdoor attacks, transparent about their uncertainties, and able to collaborate across organizations without exposing sensitive data. By advancing robust learning, uncertainty quantification, and privacy-preserving methods (including quantum-enhanced approaches), we help ensure that the next generation of AI serves society safely and reliably.

View publications · View projects