
Most Popular SLAM Open Source Framework for LiDAR

DFRobot · Apr 11, 2023

SLAM, short for Simultaneous Localization and Mapping, is a technology that enables precise self-positioning for autonomous applications such as drones, robots, cars, and AR/VR. SLAM can be used for autonomous navigation, augmented reality, and 3D reconstruction.

Several open-source SLAM frameworks are compatible with LiDAR, providing developers with tools and libraries to implement SLAM algorithms using LiDAR data. In this article, we introduce some of the most popular ones, including ROS SLAM, OpenSLAM.org, OpenVSLAM, GSLAM, Maplab, ScaViSLAM, Kimera, OpenSfM, and VINS-Fusion.
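Before diving into the frameworks, it helps to see what the "mapping" half of SLAM looks like in miniature. The sketch below is a toy example written for this article (it is not taken from any of the frameworks listed here): it traces simulated LiDAR beams from a known robot pose and marks their endpoints in a 2D occupancy grid. All grid sizes, poses, and ranges are made up for illustration.

```python
import math

# Toy occupancy-grid update: mark cells hit by simulated LiDAR beams.
# All names and parameters here are illustrative, not from any framework.
GRID = 20          # 20 x 20 cells
CELL = 0.25        # cell size in metres

def mark_scan(grid, pose, ranges, angle_min, angle_step):
    """Mark the endpoint cell of each LiDAR beam as occupied (1)."""
    x, y, theta = pose
    for i, r in enumerate(ranges):
        a = theta + angle_min + i * angle_step
        hx, hy = x + r * math.cos(a), y + r * math.sin(a)
        cx, cy = int(hx / CELL), int(hy / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] = 1

grid = [[0] * GRID for _ in range(GRID)]
# Robot at (2.5 m, 2.5 m) facing +x; three beams, all returning 1.0 m.
mark_scan(grid, (2.5, 2.5, 0.0), [1.0, 1.0, 1.0], -0.2, 0.2)
occupied = sum(map(sum, grid))
print(occupied)
```

A real SLAM system does this with an *estimated* pose (the "localization" half), accumulates log-odds rather than binary hits, and also marks the cells a beam passes through as free space.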


1. ROS SLAM

ROS SLAM refers to SLAM implementations built on ROS (Robot Operating System), using the tools and libraries that ROS provides. It helps developers quickly build robot systems and perform testing and debugging, providing a simple way for robots, drones, and self-driving cars to navigate and localize in unknown environments.

ROS 1, the first version, uses custom serialization formats, transport protocols, and a central discovery mechanism.

ROS 2, the second version, supports serialization, transport, and discovery through abstract middleware interfaces.

ROS 2 also improves on ROS 1 in areas such as multi-robot support, real-time performance, and security.
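As a concrete example, a common way to bring up a LiDAR SLAM node in ROS 2 is through a Python launch file. The sketch below assumes the widely used `slam_toolbox` package is installed; the package and executable names match that project's published releases, but check the version in your ROS distribution before relying on them.

```python
# ROS 2 launch file sketch (requires a ROS 2 installation with slam_toolbox).
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',  # online asynchronous SLAM
            name='slam_toolbox',
            parameters=[{'use_sim_time': False}],
        ),
    ])
```

With a driver publishing LiDAR scans on `/scan` and odometry available on TF, this node builds and publishes an occupancy-grid map that tools like RViz can display.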

Robot Operating System, ROS1, ROS2

(Source: ROS core stacks · GitHub)

2. OpenSLAM.org

OpenSLAM.org is a platform for SLAM researchers to publish their algorithms. It has a repository on Github that contains many open-source SLAM algorithms, such as GMapping, TinySLAM, g2o, and ORB-SLAM.
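Several of the packages hosted there, g2o in particular, are graph optimizers: they refine robot poses so that all relative measurements, including loop closures, agree as well as possible. The toy example below was written from scratch for illustration (real systems use g2o or Ceres); it relaxes a 1D pose graph of four poses with a loop-closure edge, using simple Gauss-Seidel-style sweeps.

```python
# Toy 1D pose-graph relaxation (illustrative only; real systems use g2o/Ceres).
# Poses p0..p3 on a line; edges are noisy relative measurements (i, j, dz).
edges = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.05), (0, 3, 3.0)]  # last: loop closure
poses = [0.0, 1.1, 2.0, 3.05]       # initial guess from chaining measurements

for _ in range(100):                 # simple iterative relaxation
    for k in range(1, len(poses)):   # pose 0 is fixed as the anchor
        num, den = 0.0, 0
        for i, j, dz in edges:
            if j == k:
                num += poses[i] + dz
                den += 1
            if i == k:
                num += poses[j] - dz
                den += 1
        poses[k] = num / den         # best pose given current neighbours

residual = sum(abs(poses[j] - poses[i] - dz) for i, j, dz in edges)
print([round(p, 3) for p in poses], round(residual, 3))
```

The loop-closure edge (0 → 3) pulls the drifted chain back toward consistency; the leftover residual is spread evenly across all edges, which is exactly what least-squares graph optimization does in 2D/3D, just with much richer pose representations.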

OpenSLAM.org SLAM algorithms platform

(Source: OpenSLAM · GitHub)

3. OpenVSLAM

OpenVSLAM is a versatile visual SLAM framework that supports monocular, stereo, and RGBD camera models and can be easily customized for other camera models.

It employs an indirect SLAM algorithm based on sparse features (in the style of ORB-SLAM, ProSLAM, and UcoSLAM); maps can be stored and loaded, and new images can be localized against pre-built maps.

The OpenVSLAM system is fully modular, encapsulating several functionalities in independent components with easy-to-understand APIs, making it easy to customize for specific tasks.
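The "sparse indirect" approach means frames are reduced to keypoints with binary descriptors (ORB descriptors, in OpenVSLAM's case), which are then matched by Hamming distance. The toy sketch below, independent of OpenVSLAM's actual API, shows brute-force matching on made-up 8-bit descriptors; real ORB descriptors are 256 bits.

```python
# Toy brute-force matching of ORB-style binary descriptors (Hamming distance).
# Descriptors here are tiny made-up 8-bit patterns; real ORB uses 256 bits.
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count('1')

frame1 = {'kp0': 0b10110010, 'kp1': 0b01001101, 'kp2': 0b11110000}
frame2 = {'kpA': 0b10110011, 'kpB': 0b01001100, 'kpC': 0b00001111}

matches = {}
for name1, d1 in frame1.items():
    # Nearest neighbour in frame2 under Hamming distance.
    best_name, _ = min(frame2.items(), key=lambda kv: hamming(d1, kv[1]))
    matches[name1] = best_name
print(matches)
```

Note that `kp2` also matches `kpA` here: nearest-neighbour matching alone produces duplicates and outliers, which is why real pipelines add a ratio test and geometric verification before using matches for tracking.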

OpenVSLAM versatile visual SLAM framework

(Source: GitHub - OpenVSLAM)

4. GSLAM

GSLAM is an open-source SLAM framework based on C++ that is characterized by its efficiency, flexibility, ease of use, and cross-platform capabilities. GSLAM supports a variety of sensor types, including monocular, stereo, RGB-D, and LiDAR, and can be used for applications such as 3D reconstruction, robot navigation, and object recognition.

GSLAM open-source SLAM framework

(Source: GitHub - GSLAM)

5. Maplab

Maplab is an open-source visual-inertial mapping framework for multi-session and multi-robot mapping, written in C++. It is a modular and multimodal mapping framework that has been deployed on a variety of robotic platforms and provides the foundation for many research projects at ETH Zurich's Autonomous Systems Lab (ASL).

Maplab open-source visual-inertial mapping framework

(Source: GitHub - Maplab)

6. ScaViSLAM

ScaViSLAM is a visual SLAM framework that employs the "Double Window Optimization" (DWO) technique. Currently, it supports calibrated stereo rigs and RGB-D cameras. Monocular SLAM is not yet supported.

ScaViSLAM visual SLAM framework

(Source: GitHub - ScaViSLAM)

7. Kimera

Kimera is an open-source library for real-time metric-semantic localization and mapping.

It uses camera images and inertial data to build a semantically annotated 3D mesh of the environment, supports ROS, and runs on a CPU. Kimera includes four modules: a fast and accurate Visual-Inertial Odometry (VIO) pipeline (Kimera-VIO), a full SLAM implementation based on robust pose-graph optimization (Kimera-RPGO), a per-frame and multi-frame 3D mesher (Kimera-Mesher), and a module that builds semantically annotated 3D meshes (Kimera-Semantics).

Kimera open-source library for real-time metric-semantic localization and mapping

(Source: GitHub - Kimera)

8. OpenSfM

OpenSfM is an open-source Structure-from-Motion pipeline for reconstructing camera poses and 3D scenes from multiple images.

It is composed of basic modules for Structure from Motion (feature detection/matching, minimal solvers) with a focus on building a robust and scalable reconstruction pipeline.

OpenSfM also integrates measurements from external sensors (such as GPS, accelerometers) for georeferencing and robustness. A JavaScript viewer is provided to preview models and debug the pipeline.
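At the heart of any Structure-from-Motion pipeline is triangulation: recovering a 3D point from its observations in two posed cameras. The sketch below is independent of OpenSfM's actual API; it uses the classic midpoint method, finding the point halfway between the closest points of the two viewing rays, with camera centres and the ground-truth point made up for illustration.

```python
# Toy two-view triangulation (midpoint of closest approach between two rays).
# Camera centres and ray directions are made up for illustration.
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def add_scaled(p, t, d): return tuple(a + t * b for a, b in zip(p, d))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the closest segment between rays c1 + t*d1 and c2 + s*d2."""
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add_scaled(c1, t1, d1)
    p2 = add_scaled(c2, t2, d2)
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Ground-truth point observed by two cameras one metre apart.
P = (0.5, 0.5, 2.0)
c1, c2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
point = triangulate(c1, sub(P, c1), c2, sub(P, c2))
print(point)
```

In this noise-free setup the rays intersect exactly, so the midpoint recovers P; with real, noisy feature matches the rays miss each other slightly, and the midpoint (or a reprojection-error minimizer) gives the best estimate.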

OpenSfM open-source Structure-from-Motion pipeline

(Source: GitHub - OpenSfM)

9. VINS-Fusion

VINS-Fusion is an optimization-based multi-sensor state estimator. It is capable of achieving precise self-positioning for autonomous applications such as drones, automobiles, and AR/VR. VINS-Fusion is an extension of VINS-Mono and supports a variety of visual inertial sensor types including monocular camera + IMU, stereo camera + IMU, and even just a stereo camera.
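The core idea behind any visual-inertial estimator can be caricatured in a few lines: integrate high-rate IMU measurements for prediction, then blend in lower-rate visual pose fixes to bound the drift. The 1D sketch below is a deliberately simplified complementary-style filter written for this article; VINS-Fusion itself solves a sliding-window nonlinear optimization, and every constant here (bias, gain, rates) is made up.

```python
# 1D caricature of visual-inertial fusion (illustrative; VINS-Fusion really
# solves a sliding-window nonlinear optimization, not a simple filter).
DT = 0.01            # IMU period: 100 Hz
GAIN = 0.2           # blend factor toward each visual pose fix
BIAS = 0.3           # made-up accelerometer bias (m/s^2)

def true_pos(t):
    return 0.5 * 1.0 * t * t        # ground truth under constant 1 m/s^2

pos = vel = 0.0                     # fused estimate
raw_pos = raw_vel = 0.0             # IMU-only dead reckoning, for comparison

for step in range(1, 501):          # 5 seconds of data
    t = step * DT
    accel = 1.0 + BIAS              # biased accelerometer reading
    vel += accel * DT               # IMU prediction (drifts due to bias)
    pos += vel * DT
    raw_vel += accel * DT
    raw_pos += raw_vel * DT
    if step % 10 == 0:              # 10 Hz visual pose fix pulls estimate back
        pos += GAIN * (true_pos(t) - pos)

print(round(raw_pos, 2), round(pos, 2), round(true_pos(5.0), 2))
```

The IMU-only track drifts by several metres over five seconds, while the fused estimate stays close to the truth; the actual systems also estimate the bias itself, which this sketch deliberately omits.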

VINS-Fusion optimization-based multi-sensor state estimator

(Source: GitHub - VINS-Fusion)
