RoboSense Releases the Latest Version of its Sensor Evaluation System for Autonomous Driving



RoboSense LiDAR has released RS-Reference 2.1, the latest version of its ground truth data system and evaluation tool chain, used to evaluate the performance of LiDARs and multi-sensor fusion systems. The original RS-Reference was launched in 2016, when the automotive-grade MEMS solid-state LiDAR RS-LiDAR-M1 project was established. Used by global OEMs and Tier 1s, the system has been continuously improved and upgraded with more efficient and useful evaluation function modules and software tool chains.

While the evaluation function modules can be likened to an exam, the ground truth data is the "answer key" for evaluating the perception system. Therefore, the accuracy of the ground truth data must be significantly higher than that of the device under test (DuT) in all aspects, including detection performance and geometric error.

The ground truth data, usually stored at the petabyte level, includes dynamic information such as obstacle types, speeds, and locations, and static information such as lane lines and road boundaries.

Data labeling quality and data generation efficiency are the key factors in ground truth data.

The RS-Reference system provides a complete set of ground truth data generation and evaluation solutions, outputting detection performance and geometric error indicators with a labeling efficiency close to 1:1. This makes it significantly more accurate than real-time perception, manual labeling, and traditional labeling tools.
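To make the two indicator families concrete, here is a minimal, hypothetical sketch of how detection performance (precision/recall) and geometric error (mean center offset) can be computed by matching a DuT's detections to ground truth boxes via intersection-over-union. The function names, 2D box format, and greedy matching are illustrative assumptions, not RoboSense's actual tooling.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def evaluate(detections, ground_truth, iou_thresh=0.5):
    """Return (precision, recall, mean center error) for one frame."""
    matched_gt, center_errors, tp = set(), [], 0
    for det in detections:
        # Greedily match each detection to the best unmatched ground-truth box.
        best_iou, best_idx = 0.0, None
        for i, gt in enumerate(ground_truth):
            if i in matched_gt:
                continue
            v = iou(det, gt)
            if v > best_iou:
                best_iou, best_idx = v, i
        if best_iou >= iou_thresh:
            matched_gt.add(best_idx)
            tp += 1
            gt = ground_truth[best_idx]
            dx = (det[0] + det[2]) / 2 - (gt[0] + gt[2]) / 2
            dy = (det[1] + det[3]) / 2 - (gt[1] + gt[3]) / 2
            center_errors.append((dx * dx + dy * dy) ** 0.5)
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    mean_err = sum(center_errors) / len(center_errors) if center_errors else 0.0
    return precision, recall, mean_err
```

A production pipeline would evaluate 3D oriented boxes across whole sequences and use optimal (rather than greedy) assignment, but the principle is the same: the quality of these numbers is bounded by the quality of the ground truth.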

High-performance and mature sensor data collection system: the RS-Reference system contains the RoboSense 128-beam LiDAR RS-Ruby, a Leopard camera, a Continental 408 millimeter-wave radar, a GI-6695 RTK, and, new in version 2.1, two RoboSense RS-Bpearl LiDARs covering near-field blind spots.

Detached roof-mounted deployment without vehicle body modification: the RS-Reference system adapts to different vehicle sizes, does not occupy the sensor installation positions of the DuTs, and directly evaluates an intelligent driving system whose sensor set is consistent with that of commercial vehicles.

Vastly improved perception algorithm and offline processing mechanism: the algorithm is the key to smart labeling (rather than manual labeling) and is responsible for extracting the ground truth data. The RS-Reference system uses a customized, dedicated offline perception algorithm, the product of RoboSense's 13+ years of accumulated experience in LiDAR sensing algorithm technology. It performs "full life process tracking and identification" for each obstacle and extracts all ground truth data from every frame. The RS-Reference system can label speed and acceleration, and accurately delineate the size of the labeling frame from comprehensive shape and size information. The system is also capable of accurately separating obstacles that are in close proximity to each other in complex scenes.
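The speed and acceleration labels mentioned above follow naturally once an obstacle has been tracked through its full life process. As a hypothetical illustration (the function name and inputs are assumptions, not RoboSense's API), per-frame labels can be derived from consecutive tracked positions by finite differences:

```python
def label_kinematics(track, dt):
    """Derive kinematic labels from an obstacle track.

    track: list of (x, y) obstacle centers, one per frame
    dt:    frame interval in seconds
    Returns (speeds, accelerations) as per-frame label lists.
    """
    # Speed: displacement between consecutive frames divided by dt.
    speeds = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    # Acceleration: change in speed between consecutive frames divided by dt.
    accels = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accels
```

An offline system can do better than this forward difference, since it sees the whole track at once and can smooth over noise in both directions; this is precisely the advantage of offline ground truth generation over real-time perception.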

Full-stack evaluation tool chain: it includes data collection tools, sensor calibration tools, visualization tools, manual verification tools, evaluation tools, etc. The 2.1 version upgrades the data management platform and adds the scene semantic labeling function that serves every step of the evaluation process.

Individual sensor evaluation in the multi-sensor fusion system: not only can the RS-Reference system evaluate the results of an intelligent driving system's perception fusion, but it can also provide targeted solutions based on the characteristics of different sensor types, such as LiDAR, millimeter-wave radar, and camera. Dedicated or customized tool modules can be developed according to customer needs for further in-depth analysis of sensing system performance.

Extended application value of the RS-Reference: the system also supports planning and control algorithm development, can generate massive ground truth data to build simulation scenes, and can evaluate roadside perception systems.






Copyright © 2021 I-Connect007. All rights reserved.