ImAFUSA partner ICCS-NTUA explored how modern AI can detect drones in immersive virtual environments, using data from the ImAFUSA VR experiments conducted by Iscte.
The ICCS researchers trained a custom drone-detection model based on the YOLOv8 architecture and compared it with GPT-4o, a large multimodal language model. YOLO ('You Only Look Once') architectures are widely used for real-time object detection, which makes them well suited to identifying drones in complex visual environments. Nevertheless, the research concluded that LLMs can offer a robust alternative for tasks such as visual scene understanding, especially when training data is limited or the objects are extremely small.
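The article does not describe how the two detectors were scored, but object-detection comparisons of this kind are typically evaluated with intersection-over-union (IoU) between predicted and ground-truth bounding boxes. The sketch below is a minimal illustration of that standard metric, not code from the study; the box coordinates and the 0.5 threshold are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle (if any).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """Count a prediction as correct if it overlaps the ground truth enough."""
    return iou(pred_box, gt_box) >= threshold
```

Small drones occupy only a handful of pixels, so even a modest localisation error drives IoU below the threshold, which is one reason tiny objects are notoriously difficult for detection models.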
This raises exciting possibilities at the intersection of vision models and language models for future AI-driven perception systems. You can read more about this here.
The European project ImAFUSA – Impact and Capacity Assessment Framework for U-space Societal Acceptance – is coordinated by BRU-Iscte researcher Sofia Kalakou and explores how innovative air mobility (IAM) can be integrated into urban mobility responsibly and sustainably, aligning air and ground transport planning and management.
BRU-Iscte researchers Catarina Marques, Fernando Ferreira, Helena Almeida, João Guerreiro, Margarida Santos, Sofia Samoili, Francisco Aniceto, and James Mcleod are also part of the team.
