How to combine HectorSLAM and RGB-D camera data to achieve 3D mapping?
I represent a team of engineers from Lancaster University. We are attempting to combine 2D LIDAR data (preferably using HectorSLAM) with RGB-D camera data (as done by Technische Universität Darmstadt, https://youtu.be/olGZv05RLHI) for an autonomous UAV mapping application. We are using an RPLIDAR A2 scanner and a RealSense Depth Camera D415, and our ROS distro is Kinetic. How could we achieve this, and could it be performed using ROS on an NVIDIA Jetson Nano? Can we run two SLAM algorithms concurrently (e.g. HectorSLAM and ORB-SLAM), or do we need to combine the sensor data before applying SLAM? Is there any open-source code available to achieve this?
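For context, this is roughly how we bring up the two sensors and 2D SLAM at the moment (a minimal sketch only; the serial port, frame names, and parameter values are assumptions based on the standard `rplidar_ros`, `hector_mapping`, and `realsense2_camera` packages, not our exact configuration):

```xml
<launch>
  <!-- RPLIDAR A2 driver (rplidar_ros); serial port is an assumption -->
  <node pkg="rplidar_ros" type="rplidarNode" name="rplidar" output="screen">
    <param name="serial_port" value="/dev/ttyUSB0"/>
    <param name="frame_id" value="laser"/>
  </node>

  <!-- 2D SLAM from the laser scan (hector_mapping, no odometry needed) -->
  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
    <param name="scan_topic" value="scan"/>
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="base_link"/>
    <param name="pub_map_odom_transform" value="true"/>
  </node>

  <!-- RealSense D415 driver publishing RGB and depth streams -->
  <include file="$(find realsense2_camera)/launch/rs_camera.launch"/>
</launch>
```

The open question for us is what should consume the RGB-D topics alongside the `map` frame that hector_mapping publishes.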
Many Thanks.