Autonomous Vehicles

To fulfill the true promise of autonomous ground vehicles (AVs), machine perception must improve on, or at least match, human perception. That will happen in part through more sensors and more information: LIDAR, radar, cameras, V2X communication, external data sources, and high-resolution 3D maps.

Humans don't drive with static sensors. Our brains, connected to multiple sensors (eyes and ears among them), tell us where to look and when, based on data collected and processed in real time. Machine perception must behave similarly if it is to improve upon human perception.

This requires that sensors expose control interfaces to the fusion and decision layers of the AV stack, so those layers can direct sensor resources toward the data that most improves situational awareness of the driving scene.
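
As a minimal sketch of what such a control path could look like (every class and method name below is hypothetical, not EchoDrive's published API), consider a sensor interface that carries data upward while also accepting attention requests pushed back down from the fusion and decision layers:

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class AttentionRequest:
    # Hypothetical message from the fusion/decision layers back to a sensor.
    azimuth_deg: float    # center of the region of interest
    elevation_deg: float
    priority: int         # higher means more urgent
    reason: str           # e.g. "ambiguous track", "occlusion boundary"

class ControllableSensor(ABC):
    # A sensor that exposes a control path to the AV stack, not just a data path.

    @abstractmethod
    def stream_frames(self):
        """Conventional one-way path: yield measurement frames upward."""

    @abstractmethod
    def task(self, request: AttentionRequest) -> None:
        """Reverse path: steer sensor resources toward a region of interest."""

The difference from a conventional sensor driver is the task() method: data still flows up, but control also flows down, which is exactly the reverse path a one-way pipeline lacks.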

EchoDrive Overview

[Video: EchoDrive Cognitive AV Radar demonstrating advanced imaging and adaptive interrogation.]

Dynamic Sensor Control

No matter the sensor, manufacturer, resolution, or data rate, a one-way data flow from sensor through fusion to decision will never achieve human-like perception. When humans hear a sound or catch something in peripheral vision, the brain directs sensory resources toward that object or area of the scene to resolve the ambiguity. When passive, continuous data consumption fails to resolve ambiguities in the driving scene, algorithms struggle to reach the confidence needed to activate vehicle controls.
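
Continuing the hypothetical interface sketched above, here is one way a perception loop could close that gap: instead of passively waiting for more frames, it issues a targeted request whenever a track's confidence stalls below the actuation threshold. The threshold value and the track attributes (confidence, azimuth_deg, time_to_contact_s) are assumptions for illustration only.

ACTUATION_THRESHOLD = 0.95  # assumed confidence required before controls engage

def perception_step(tracks, radar):
    # One illustrative cycle of a closed-loop perception stack.
    for track in tracks:
        if track.confidence >= ACTUATION_THRESHOLD:
            continue  # confident enough; the passive data flow suffices
        # Ambiguity persists: actively task the radar instead of waiting
        # for more of the same passive frames.
        radar.task(AttentionRequest(
            azimuth_deg=track.azimuth_deg,
            elevation_deg=track.elevation_deg,
            priority=10 if track.time_to_contact_s < 2.0 else 3,
            reason="low-confidence track",
        ))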

Whitepaper: Highly Adaptive Radar for Cognitive Imaging

EchoDrive is a new type of AV sensor: it delivers cognitive functionality by placing radar control in the AV stack itself. This allows the vehicle's AI to resolve ambiguities and discrepancies by dynamically tasking the radar to measure specific aspects of the driving scene. We have written a white paper on the topic and invite you to request a copy.
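
To make "dynamically tasking the radar" concrete, here is a sketch of how an AV stack could phrase such a request. The knobs shown (bandwidth, dwell time, angular sector) are generic cognitive-radar parameters, and the RadarDwell and CognitiveRadar names are invented for this example, not EchoDrive's documented interface:

from dataclasses import dataclass

@dataclass
class RadarDwell:
    # Hypothetical fine-grained radar task; parameter names are illustrative.
    az_start_deg: float
    az_end_deg: float
    bandwidth_hz: float   # wider bandwidth -> finer range resolution
    dwell_time_s: float   # longer dwell -> finer Doppler resolution

class CognitiveRadar:
    # Stub standing in for the real sensor driver.
    def schedule(self, dwell: RadarDwell) -> None:
        print(f"dwell scheduled: {dwell.az_start_deg}-{dwell.az_end_deg} deg")

# Example: the decision layer suspects a pedestrian behind a parked car
# and asks for one high-resolution look at that narrow sector.
radar = CognitiveRadar()
radar.schedule(RadarDwell(
    az_start_deg=12.0,
    az_end_deg=18.0,
    bandwidth_hz=400e6,
    dwell_time_s=0.02,
))

The point of the sketch is the trade: the stack spends a brief, focused dwell on the ambiguous sector, exchanging a little frame rate elsewhere for the resolution needed to commit to a decision.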
