DoD Research Program

ODU Research Foundation
May 2025 – Aug 2025

Context

During my VICEROY Scholars Summer Research Internship, I investigated the vulnerability of state-of-the-art cross-view geo-localization systems to physical adversarial attacks. The project focused on the TransGeo framework and the VIGOR dataset, analyzing how navigation systems match ground-level images with aerial or satellite views to estimate location. Over the course of eight weeks, I set up the TransGeo codebase, evaluated baseline Recall@k performance, conducted error analysis using embedding visualizations, and implemented adversarial patch optimization pipelines in PyTorch. The objective was to design physically realizable adversarial patches capable of systematically misleading geo-localization models, and to measure their attack success rate and the resulting Recall@k degradation. This experience was significant because it moved beyond theoretical cybersecurity discussion into hands-on experimentation with real-world AI systems that affect navigation, autonomous systems, and infrastructure security.
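To make the baseline evaluation concrete, the sketch below shows one common way to compute Recall@k for cross-view retrieval from precomputed embeddings. It is a minimal illustration under stated assumptions, not TransGeo's actual interface: the function name, tensor shapes, and the ground-truth index mapping are hypothetical, and it assumes both embedding sets are already L2-normalized so that the dot product equals cosine similarity.

import torch

def recall_at_k(query_emb, ref_emb, gt_index, ks=(1, 5, 10)):
    """Hypothetical Recall@k sketch for cross-view retrieval.
    query_emb: (Nq, D) ground-view embeddings, L2-normalized
    ref_emb:   (Nr, D) aerial/satellite embeddings, L2-normalized
    gt_index:  (Nq,) index of the matching reference for each query"""
    # Cosine similarity between every query and every reference view.
    sims = query_emb @ ref_emb.t()                 # (Nq, Nr)
    # Rank references from most to least similar for each query.
    ranked = sims.argsort(dim=1, descending=True)  # (Nq, Nr)
    # Position of the true match within each query's ranking.
    hit_pos = (ranked == gt_index.unsqueeze(1)).float().argmax(dim=1)
    # Fraction of queries whose true match appears in the top k.
    return {f"R@{k}": (hit_pos < k).float().mean().item() for k in ks}

# Hypothetical usage with embeddings from the two encoder branches:
# q = F.normalize(ground_encoder(ground_imgs), dim=1)
# r = F.normalize(aerial_encoder(aerial_imgs), dim=1)
# print(recall_at_k(q, r, torch.arange(len(q))))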

Artifact

An artifact from this experience is the Notion page summarizing the methodology, adversarial patch design, evaluation metrics, and results. The project documentation demonstrates my ability to conduct structured experimentation, implement optimization pipelines, and interpret quantitative security outcomes.

VICEROY Student Summer Project 2025

Reflection

In this research role, I was responsible for implementing and evaluating physical-style adversarial patches designed to mislead cross-view geo-localization models. I conducted literature reviews on digital and physical attack methods, analyzed failure cases such as occlusions and seasonal changes, and formulated custom loss functions to encourage different-city mislocalization. One of the main challenges I faced was translating adversarial attack theory into a functioning optimization pipeline that could reliably produce measurable degradation in Recall@k performance. I addressed this by iteratively debugging the training loop, refining patch placement strategies using segmentation outputs, and carefully evaluating performance metrics across validation splits.
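As a sketch of how such a loss can be formulated, the snippet below shows one plausible patch optimization step. Everything here is a hypothetical stand-in rather than the project's exact components: ground_encoder, apply_patch, and the distractor embeddings are placeholder names, and the patch is modeled as a full-frame tensor whose active region is selected by a mask (which, in practice, would come from segmentation outputs as described above).

import torch
import torch.nn.functional as F

def apply_patch(images, patch, mask):
    """Paste the patch into each image where mask == 1.
    images: (B, 3, H, W); patch: (3, H, W); mask: (B, 1, H, W)."""
    return images * (1 - mask) + patch * mask

def patch_loss(ground_encoder, images, patch, mask,
               true_ref_emb, distractor_emb):
    """Push patched ground-view embeddings away from their true aerial
    match and toward a reference from a different city (hypothetical
    formulation; embeddings assumed L2-normalized, shape (B, D))."""
    patched = apply_patch(images, patch.clamp(0, 1), mask)
    emb = F.normalize(ground_encoder(patched), dim=1)
    sim_true = (emb * true_ref_emb).sum(dim=1)          # want low
    sim_distractor = (emb * distractor_emb).sum(dim=1)  # want high
    return (sim_true - sim_distractor).mean()

# One optimization step over the patch pixels only (illustrative):
# patch = torch.rand(3, H, W, requires_grad=True)
# opt = torch.optim.Adam([patch], lr=0.01)
# loss = patch_loss(model, imgs, patch, masks, true_embs, distractor_embs)
# opt.zero_grad(); loss.backward(); opt.step()

Minimizing this loss simultaneously lowers similarity to the true aerial match and raises similarity to a different-city reference, which is one natural way to express the different-city mislocalization objective described above.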

Through this project, I learned how adversarial machine learning extends beyond abstract research and directly exposes vulnerabilities in real-world systems. I developed a deeper understanding of how seemingly minor perturbations can create significant downstream effects in AI-driven decision-making systems. This experience strengthened my technical proficiency in PyTorch, embedding visualization, model evaluation, and structured experimentation. More importantly, it shaped my professional identity by reinforcing my interest in AI security and system robustness.

This project relates directly to my broader professional development in cybersecurity and applied AI research. It taught me to think adversarially, evaluate system weaknesses systematically, and support findings with quantitative evidence. Rather than simply building models, I learned to question their assumptions and explore how they can fail. This mindset will be foundational in future roles involving secure systems design, adversarial defense research, or AI reliability.