The validation of new AD capabilities, as well as the training of advanced artificial intelligence models for AD, demands an almost unlimited amount of high-quality annotated data. Such annotation is now required not only on camera image data in the form of bounding boxes, but also on LIDAR and RADAR data streams at pixel or point precision. Very often, simultaneous annotation of 3D data with synchronised 2D camera data is required to generate highly accurate ground-truth data.
The complexity of new AD functions creates a strong need for diverse automotive data, given the many use cases and scenarios such data must cover. Purely manual annotation is no longer feasible here because of the enormous manpower required to annotate, for example, 100,000 km of test-drive data.
Consequently, "CMORE" offers a wide range of semi- and fully automated options for smart, efficient, best-quality annotation of vast amounts of data. By leveraging the latest approaches in Deep Learning, "CMORE" has created a set of semi-automated algorithms that can improve annotation speed in C.LABEL by a factor of 10, as well as fully automated algorithms for various use cases, offered under the brand name C.LYTIX, which eliminate the need for human interaction altogether.
By exploiting the full spectrum of DevOps solutions, "CMORE" will offer an agile and highly configurable framework for automated annotation, which allows for regular testing of algorithm performance, automated re-training whenever new quality-controlled ground-truth data becomes available, and automated annotation of newly acquired data. Our approach scales across on-premise and cloud infrastructure while harnessing the compute power of modern GPUs.
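The control flow of such a framework can be illustrated with a minimal sketch. All class and method names below are hypothetical placeholders, not CMORE's actual APIs; the sketch only shows the three automated steps named above (performance testing, re-training on new ground truth, annotation of new data) wired into one loopable object.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the automated annotation loop described above.
# Every name here is illustrative; it does not reflect any real CMORE API.

@dataclass
class AnnotationPipeline:
    model_version: int = 0
    quality_threshold: float = 0.9
    score_history: list = field(default_factory=list)

    def evaluate(self, benchmark_score: float) -> bool:
        """Regular performance test: record the score and check it passes."""
        self.score_history.append(benchmark_score)
        return benchmark_score >= self.quality_threshold

    def retrain(self, new_ground_truth: list) -> None:
        """Automated re-training once quality-controlled ground truth arrives."""
        if new_ground_truth:
            self.model_version += 1  # stand-in for an actual training job

    def annotate(self, frames: list) -> list:
        """Automated execution of annotation on newly available data."""
        return [{"frame": f, "model_version": self.model_version} for f in frames]

pipeline = AnnotationPipeline()
pipeline.retrain(new_ground_truth=["batch_001"])
labels = pipeline.annotate(frames=["f0", "f1"])
```

In a real deployment, each method would be backed by CI/CD jobs dispatched to GPU workers rather than in-process calls.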
Our current work focuses on the intelligent fusion of multiple sensor streams, such as camera, LIDAR, and RADAR, but also IMU data, to create close-to-perfect annotations in a fully automated setup. By supporting the fusion of up to 50 camera, LIDAR and RADAR sensors as input to our algorithms, we create a dense and holistic understanding of a car's surroundings in today's complex traffic scenes.
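A simplified stand-in for one prerequisite of such fusion is grouping time-stamped readings from many heterogeneous sensors into synchronised frames. The function below is an assumption-laden sketch, not CMORE's implementation: sensor names, the tuple layout, and the tolerance window are all illustrative.

```python
from collections import defaultdict

def fuse_by_timestamp(readings, tolerance_ms=50):
    """Bucket sensor readings whose timestamps fall in the same coarse window.

    readings: list of (sensor_id, timestamp_ms, payload) tuples.
    Returns a dict mapping a window key to {sensor_id: payload}.
    Note: this is a toy nearest-window grouping; production fusion would
    interpolate and calibrate per sensor, not just bucket timestamps.
    """
    buckets = defaultdict(dict)
    for sensor_id, ts, payload in readings:
        window = round(ts / tolerance_ms)  # coarse time window index
        buckets[window][sensor_id] = payload
    return dict(buckets)

# Four illustrative sensors reporting within the same ~50 ms window:
frames = fuse_by_timestamp([
    ("cam_front", 1000, "img_0"),
    ("lidar_top", 1012, "cloud_0"),
    ("radar_front", 1018, "targets_0"),
    ("imu", 1003, "pose_0"),
])
```

Scaling this idea to 50 sensors is a matter of data volume and calibration, not of the grouping logic itself, which is why per-sensor time synchronisation is typically handled first in a fusion pipeline.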
In the future, such a detailed description of the scene will enable us to annotate for scene interpretation, e.g. by identifying actions and events ("pedestrian crosses road from left to right"). We will hence further support, and in many cases initiate, the development and validation of future AD functionality.
For further details about our solutions contact us at firstname.lastname@example.org