The Mid-70 LiDAR is a new product offered by Livox for low-speed robot scenarios. In addition to improving near-field blind spots and FOV, it supports advanced mapping & localization and obstacle avoidance technologies for robots. The examples below demonstrate the feasibility of the Mid-70 in such scenarios and its advantages as a 3D LiDAR solution.
1. Mapping & Localization
At present, low-speed robots generally perform mapping and localization with 2D LiDAR. Though the technology is mature, it faces pain points in practice, two of which cannot be solved:
1. Because 2D maps contain little feature information, they are prone to degradation in repetitive, similar-looking scenes, resulting in localization loss or inaccuracy. Global localization is also difficult to solve.
2. Because of their short measuring range, the 2D localization LiDAR products available on the market do not work well in large outdoor scenes, so most low-speed robots can operate indoors only.
To address these difficulties, Livox launched the Mid-70 LiDAR together with open-source 3D mapping and localization algorithms, making it easier for customers to adopt 3D LiDAR.
The Livox mapping and localization algorithms are developed on the basis of LiDAR Odometry and Mapping (LOAM), which extracts feature points from point clouds, obtains relative poses via frame-to-frame matching, then accumulates and optimizes the pose information to finally produce a global map. We adapted and optimized the algorithm for the non-repetitive scanning pattern of Livox LiDAR, and tested mapping and localization with a Mid-70 on a platform moving at about 1.2 m/s, both indoors and outdoors. The mapping and localization results are good without assistance from any additional sensor, as shown in Figures 1 to 4 below. The line in Figure 4 connects the position points obtained by relocalization.
Figure 1: Indoor Mapping 1
Figure 2: Indoor Mapping 2
Figure 3: Outdoor Mapping
Figure 4: Indoor Localization Test
Our mapping and localization algorithms are available on GitHub.
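To illustrate the first step of the LOAM-style pipeline described above, the sketch below classifies points of one scan line into edge and planar features by local curvature. This is a minimal illustration, not the Livox open-source implementation; the threshold values and function names are assumptions chosen for demonstration.

```python
import numpy as np

def extract_features(scan, k=5, edge_thresh=0.02, plane_thresh=0.005):
    """Split one LiDAR scan line into edge and planar feature points
    by local curvature, in the spirit of LOAM.

    scan: (N, 3) array of points ordered along the scan trajectory.
    Thresholds are illustrative, not tuned production values.
    """
    n = len(scan)
    curv = np.empty(n - 2 * k)
    for i in range(k, n - k):
        # Sum of difference vectors between a point and its 2k neighbors:
        # a large norm marks a sharp (edge) region, a small one a flat region.
        diff = (2 * k * scan[i]
                - scan[i - k:i].sum(axis=0)
                - scan[i + 1:i + k + 1].sum(axis=0))
        # Normalize by range so the measure is comparable across distances.
        curv[i - k] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
    interior = scan[k:n - k]
    edges = interior[curv > edge_thresh]
    planes = interior[curv < plane_thresh]
    return edges, planes
```

Edge features are then matched against lines and planar features against planes in the previous frame to estimate the relative pose.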
2. Obstacle Avoidance and Recognition
2D LiDAR detects only a single plane, so it faces many challenges when used for obstacle avoidance, especially in indoor environments with overhanging obstacles (such as production lines and restaurants) and in outdoor scenarios that require autonomous path planning. 3D LiDAR has inherent advantages in obstacle avoidance. Not only does it allow object detection and collision avoidance in 3D space, it also enables advanced algorithm applications such as contour detection and object recognition, which let robots plan routes autonomously and expand the range of applications.
In practical applications, 3D LiDAR differs from traditional 2D LiDAR in obstacle avoidance in several ways, described below.
3D LiDAR can filter out ground points. Given the typical application scenarios of low-speed robots, we can set an initial value for the LiDAR pose and perform simple ground filtering. We can also extract the ground plane to avoid interference from the ground, which allows robots to operate on sloped terrain, a function 2D LiDAR does not support. In addition, background objects can be removed by means of background modeling, which is useful for security monitoring applications.
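The simple pose-based ground filtering mentioned above can be sketched as follows: using an assumed mounting height and pitch, level the cloud and drop returns near the expected ground height. The parameter names and values here are illustrative assumptions, not Livox API parameters; a plane-fitting step (e.g. RANSAC) would replace the fixed threshold on sloped terrain.

```python
import numpy as np

def remove_ground(points, sensor_height=0.3, tilt_deg=0.0, z_thresh=0.05):
    """Separate ground returns from obstacles using a known initial
    LiDAR mounting pose (x forward, z up, sensor frame).
    """
    # Rotate the cloud by the mounting pitch so the ground plane
    # becomes approximately z = -sensor_height.
    t = np.deg2rad(tilt_deg)
    rot = np.array([[np.cos(t), 0.0, np.sin(t)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(t), 0.0, np.cos(t)]])
    leveled = points @ rot.T
    # Points within z_thresh of the expected ground height count as ground.
    ground_mask = leveled[:, 2] < (-sensor_height + z_thresh)
    return points[~ground_mask], points[ground_mask]
```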
When operating in factories, robots often need to detect small obstacles, so denser point clouds may be needed to identify objects and acquire accurate contours. This is a strength of Livox LiDAR: by virtue of its non-repetitive scanning pattern, it obtains denser point cloud data by accumulating multiple frames. First, the localization module obtains the real-time pose of the robot. Then, we project the real-time point clouds based on that pose to obtain realistic target positions in the scene. Compared to traditional multi-line LiDAR, the Mid-70 accumulates multiple point cloud frames for significantly denser data. Even in scenarios with higher requirements for real-time data processing, a data output frequency of 10 Hz (10 frames per second) can be maintained. The results are shown in Figures 5 to 8 below; 100 ms and 500 ms are the corresponding accumulation times, i.e., a one-frame and a five-frame point cloud.
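The accumulation step described above can be sketched as a rolling window: each incoming frame is registered into a common frame using the pose from the localization module, and the last few frames are merged into one dense cloud (5 frames at 10 Hz gives the 500 ms window from the text). The class and method names are assumptions for illustration, not the actual Livox interfaces.

```python
import numpy as np
from collections import deque

class CloudAccumulator:
    """Accumulate the last n LiDAR frames into one dense point cloud."""

    def __init__(self, n_frames=5):
        # deque(maxlen=...) automatically discards the oldest frame.
        self.frames = deque(maxlen=n_frames)

    def add_frame(self, points, pose):
        """points: (N, 3) in the sensor frame; pose: 4x4 sensor-to-world
        transform from the localization module."""
        homo = np.hstack([points, np.ones((len(points), 1))])
        self.frames.append((homo @ pose.T)[:, :3])

    def dense_cloud(self):
        # Merge the registered frames into one denser cloud.
        return np.vstack(self.frames)
```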
3. Object Recognition
The object recognition function is required for low-speed robots, especially logistics and distribution trolleys, service robots, and production-line AGVs. The Mid-70 LiDAR provides highly precise recognition of nearby objects and is resistant to noise points caused by strong light interference. Figures 9 to 13 show the results of a comparison test against a popular 3D depth camera on the market. As you can see, the 3D depth camera produces many noise points in harsh lighting (in the blue box in the figure below), which impair shape detection. In contrast, the Mid-70 performs well in harsh lighting, as it can accumulate denser point cloud data in a short period of time to better approximate contours and recognize objects.
Figure 9: The 3D depth camera detects a standard pallet 1.5 m away
Figure 10: Mid-70 detects a standard pallet 1.5 m away (@500 ms)
Figure 11: Photo of a standard pallet
Figure 12: The 3D depth camera detects a pedestrian 2 m away
Figure 13: Mid-70 detects a pedestrian 2 m away (@500 ms)
Mapping & localization and obstacle avoidance are the most fundamental high-level algorithms for low-speed robots, and the new Livox Mid-70 LiDAR provides solid support for both. Our supporting ecosystem will continue to offer practical open-source projects and support customized development for specific scenarios. Drawing on the supply chain advantages, large-scale production capacity, and quality control system of our parent company DJI, Livox will continue to provide cost-effective 3D LiDAR solutions for low-speed robots, and together with our partners will further advance the large-scale commercialization of autonomous unmanned operation.