Modern passenger vehicles increasingly rely on video cameras within Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD) technologies to perceive their surroundings and make decisions. These perception systems, largely based on machine learning, face the typical challenges of data-driven approaches. As their role in safety-critical scenarios grows, uncertainty quantification has gained significant attention as a means of characterizing both the perceived environment and the confidence placed in those perceptions. Benchmarking these methods from an application point of view therefore becomes crucial for their deployment. In this context, robust metrics are critical for evaluating how well an intelligent vehicle’s perception system performs. This paper investigates the relationship between conventional computer vision metrics and those required for ADAS/AD applications, characterizing perception performance from an application perspective. It presents experimental results highlighting both alignments and mismatches between standard computer vision measures and the demands of ADAS/AD use cases. Finally, it demonstrates how adopting a unified perspective on metrics can provide deeper insights into perception system performance and guide the selection of suitable evaluation criteria for deploying machine learning algorithms in intelligent vehicle perception.
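
The following minimal sketch (not taken from the paper; all function names, thresholds, and numbers are illustrative assumptions) shows one way the mismatch between a standard computer vision measure and an application-oriented criterion can manifest: a detection may pass an IoU-based check while failing a hypothetical distance-error tolerance relevant to vehicle control.

    # Sketch: standard CV acceptance (IoU of 2D boxes) vs. a hypothetical
    # application-level acceptance (distance-estimate error to an object).
    # All names, thresholds, and values are illustrative assumptions.

    def iou(box_a, box_b):
        """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def passes_cv_metric(pred_box, gt_box, iou_threshold=0.5):
        """Typical computer vision acceptance: IoU above a fixed threshold."""
        return iou(pred_box, gt_box) >= iou_threshold

    def passes_application_metric(pred_dist_m, gt_dist_m, tolerance_m=1.0):
        """Hypothetical application-level acceptance: the distance estimate to
        the object must lie within a tolerance relevant for, e.g., longitudinal
        control."""
        return abs(pred_dist_m - gt_dist_m) <= tolerance_m

    # Example: the detection satisfies the IoU criterion but misses the
    # (assumed) distance-error tolerance, i.e., the two metrics disagree.
    pred_box, gt_box = (100, 80, 180, 160), (95, 78, 175, 158)   # pixels
    pred_dist, gt_dist = 42.5, 40.0                              # metres

    print("IoU-based pass:     ", passes_cv_metric(pred_box, gt_box))        # True
    print("Distance-error pass:", passes_application_metric(pred_dist, gt_dist))  # False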