Survey and Evaluation of RGB-D SLAM

Abstract:
Traditional visual SLAM systems take a monocular or stereo camera as the input sensor and require complex map initialization and map point triangulation steps for 3D map reconstruction; these steps are prone to failure, computationally expensive, and can produce noisy measurements. The emergence of the RGB-D camera, which provides an RGB image together with depth information, changes this situation. While a number of RGB-D SLAM systems have been proposed in recent years, classification research on RGB-D SLAM is still lacking, and the advantages and shortcomings of these systems remain unclear across different applications and perturbations, such as illumination changes, noise, and the rolling shutter effect of sensors. In this paper, we introduce the basic concept and structure of the RGB-D SLAM system, describe how the various RGB-D SLAM systems differ in the three aspects of tracking, mapping, and loop detection, and classify the different RGB-D SLAM algorithms according to these three aspects. Furthermore, we discuss some advanced topics and open problems of RGB-D SLAM, hoping this will help future exploration. Finally, we conduct a large number of evaluation experiments on multiple RGB-D SLAM systems, analyze their advantages and disadvantages as well as their performance differences in different application scenarios, and provide a reference for researchers and developers.
Published in: IEEE Access (Volume: 9)
Date of Publication: 21 January 2021
TABLE 1. Table Summarizing the Algorithms Used in Our Experiments. We Mark With a ✓ When the Functionality Is Included.

TABLE 2. Tracking Accuracy Results on the Easy TUM RGB-D Dataset. ATE RMSE (m).

TABLE 3. Tracking Accuracy Results on the Hard TUM RGB-D Dataset. ATE RMSE (m).

TABLE 4. Tracking Accuracy Results on the ICL-NUIM Dataset. ATE RMSE (m).

TABLE 5. Tracking Accuracy Results on the ETH3D Dataset. ATE RMSE (m).

TABLE 6. Mean Tracking Time Results on the TUM RGB-D Dataset (s).

TABLE 7. Surface Reconstruction Accuracy on the ICL-NUIM Dataset (m).

TABLE 8. Loop Closing Effect Evaluation on the TUM RGB-D Dataset. ATE RMSE (m).
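Tables 2 through 5 and Table 8 report tracking accuracy as the root mean square error of the absolute trajectory error (ATE RMSE) in meters. For reference, the sketch below shows how this metric is commonly computed in the style of the TUM RGB-D benchmark: the estimated trajectory is first rigidly aligned to the ground truth with a closed-form Horn/Umeyama fit, and the RMSE of the residual translations is then reported. Timestamp association between the two trajectories is assumed to have been done already, and the function names are illustrative rather than taken from any particular toolkit.

```python
import numpy as np

def align_horn(est, gt):
    """Closed-form rigid alignment (Horn/Umeyama, no scale): find R, t
    minimizing the residual between gt and R @ est + t."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force det(R) = +1 so R is a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_g - R @ mu_e

def ate_rmse(est, gt):
    """ATE RMSE (m) between two time-associated Nx3 position arrays."""
    R, t = align_horn(est, gt)
    residuals = gt - (est @ R.T + t)          # per-frame alignment error
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```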
References
J. Civera and S. H. Lee, "RGB-D odometry and SLAM" in RGB-D Image Analysis and Processing, Cham, Switzerland:Springer, pp. 117-144, Oct. 2019.
F. Endres, J. Hess, J. Sturm, D. Cremers and W. Burgard, "3-D mapping with an RGB-D camera", IEEE Trans. Robot. , vol. 30, no. 1, pp. 177-187, Feb. 2014.
R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras", IEEE Trans. Robot., vol. 33, no. 5, pp. 1255-1262, Oct. 2017.
A. Dai, M. Nießner, M. Zollhöfer, S. Izadi and C. Theobalt, "BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration", ACM Trans. Graph. , vol. 36, no. 4, pp. 24:1-24:18, Jul. 2017.
T. Schops, T. Sattler and M. Pollefeys, "BAD SLAM: Bundle adjusted direct RGB-D SLAM", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 134-144, Jun. 2019.
G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces", Proc. 6th IEEE ACM Int. Symp. Mixed Augmented Reality , pp. 225-234, Nov. 2007.
J. Engel, T. Schöps and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM" in Computer Vision—ECCV 2014, Zürich, Switzerland:Springer, vol. 8690, pp. 834-849, 2014.
R. Mur-Artal, J. M. M. Montiel and J. D. Tardos, "ORB-SLAM: A versatile and accurate monocular SLAM system", IEEE Trans. Robot. , vol. 31, no. 5, pp. 1147-1163, Oct. 2015.
J. Engel, V. Koltun and D. Cremers, "Direct sparse odometry", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 40, no. 3, pp. 611-625, Mar. 2018.
J. Zubizarreta, I. Aguinaga and J. M. M. Montiel, "Direct sparse mapping", IEEE Trans. Robot. , vol. 36, no. 4, pp. 1363-1370, 2020.
P. J. Huber, "Robust statistics" in International Encyclopedia of Statistical Science, Cham, Switzerland:Springer, pp. 1248-1251, 2011.
R. A. Newcombe, A. J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, et al., "KinectFusion: Real-time dense surface mapping and tracking", Proc. 10th IEEE Int. Symp. Mixed Augmented Reality , pp. 127-136, Oct. 2011.
A. Concha and J. Civera, "RGBDTAM: A cost-effective and accurate RGB-D tracking and mapping system", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) , pp. 6756-6763, Sep. 2017.
A. Fontan, J. Civera and R. Triebel, "Information-driven direct RGB-D odometry", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 4929-4937, Jun. 2020.
S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm", Proc. 3rd Int. Conf. 3-D Digit. Imag. Modeling , pp. 145-152, May/Jun. 2001.
J. Sturm et al., "Towards a benchmark for RGB-D SLAM evaluation", Proc. RGB-D Workshop Adv. Reasoning Depth Cameras Robot. Sci. Syst. Conf. (RSS) , 2011.
G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision With the OpenCV Library, Newton, MA, USA:O’Reilly Media, 2008.
Q. Sun, J. Yuan, X. Zhang and F. Duan, "Plane-edge-SLAM: Seamless fusion of planes and edges for SLAM in indoor environments", IEEE Trans. Autom. Sci. Eng. , Nov. 2020.
"Semi-direct tracking and mapping with RGB-D camera" in Intelligent Robotics and Applications, Shenyang, China:Springer, pp. 461-472, 2019.
L. Ma, C. Kerl, J. Stückler and D. Cremers, "CPA-SLAM: Consistent plane-model alignment for direct RGB-D SLAM", Proc. IEEE Int. Conf. Robot. Automat. (ICRA) , pp. 1285-1291, May 2016.
D. G. Lowe, "Distinctive image features from scale-invariant keypoints", Int. J. Comput. Vis. , vol. 60, no. 2, pp. 91-110, Nov. 2004.
M. Hsiao, E. Westman, G. Zhang and M. Kaess, "Keyframe-based dense planar SLAM", Proc. IEEE Int. Conf. Robot. Autom. (ICRA) , pp. 5110-5117, May 2017.
C. Kerl, J. Sturm and D. Cremers, "Dense visual SLAM for RGB-D cameras", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. , pp. 2100-2106, Nov. 2013.
T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker and A. J. Davison, "ElasticFusion: Dense SLAM without a pose graph" in Robotics: Science and Systems XI, Rome, Italy: Sapienza Univ. Rome, Jul. 2015.
M. Keller, D. Lefloch, M. Lambers, S. Izadi, T. Weyrich and A. Kolb, "Real-time 3D reconstruction in dynamic scenes using point-based fusion", Proc. Int. Conf. 3D Vis , pp. 1-8, Jun. 2013.
T. Whelan, M. Kaess, M. Fallon, H. Johannsson, J. Leonard and J. McDonald, "Kintinuous: Spatially extended KinectFusion", 2012.
M. Nießner, M. Zollhöfer, S. Izadi and M. Stamminger, "Real-time 3D reconstruction at scale using voxel hashing", ACM Trans. Graph. , vol. 32, no. 6, pp. 1-11, Nov. 2013.
H. Roth and M. Vona, "Moving volume KinectFusion", Proc. Brit. Mach. Vis. Conf. , pp. 1-11, 2012.
M. Zeng, F. Zhao, J. Zheng and X. Liu, "Octree-based fusion for realtime 3D reconstruction", Graph. Models , vol. 75, no. 3, pp. 126-136, 2013.
J. Chen, D. Bautembach and S. Izadi, "Scalable real-time volumetric surface reconstruction", ACM Trans. Graph. , vol. 32, no. 4, pp. 1-16, Jul. 2013.
F. Steinbrucker, C. Kerl and D. Cremers, "Large-scale multi-resolution surface reconstruction from RGB-D sequences", Proc. IEEE Int. Conf. Comput. Vis. , pp. 3264-3271, Dec. 2013.
D. Gálvez-López and J. D. Tardós, "Bags of binary words for fast place recognition in image sequences", IEEE Trans. Robot., vol. 28, no. 5, pp. 1188-1197, Oct. 2012.
M. Kaess, A. Ranganathan and F. Dellaert, "ISAM: Incremental smoothing and mapping", IEEE Trans. Robot. , vol. 24, no. 6, pp. 1365-1378, Dec. 2008.
R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. J. Kelly and A. J. Davison, "SLAM++: Simultaneous localisation and mapping at the level of objects", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1352-1359, Jun. 2013.
N. Sunderhauf, T. T. Pham, Y. Latif, M. Milford and I. Reid, "Meaningful maps with object-oriented semantic mapping", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) , pp. 5079-5085, Sep. 2017.
J. McCormac, A. Handa, A. Davison and S. Leutenegger, "SemanticFusion: Dense 3D semantic mapping with convolutional neural networks", Proc. IEEE Int. Conf. Robot. Autom. (ICRA) , pp. 4628-4635, May 2017.
B. Bescos, J. M. Facil, J. Civera and J. Neira, "DynaSLAM: Tracking mapping and inpainting in dynamic scenes", IEEE Robot. Autom. Lett. , vol. 3, no. 4, pp. 4076-4083, Oct. 2018.
Y. Sun, M. Liu and M. Q.-H. Meng, "Improving RGB-D SLAM in dynamic environments: A motion removal approach", Robot. Auto. Syst. , vol. 89, pp. 110-122, Mar. 2017.
Y. Sun, M. Liu and M. Q.-H. Meng, "Motion removal for reliable RGB-D SLAM in dynamic environments", Robot. Auto. Syst. , vol. 108, pp. 115-128, Oct. 2018.
J. Vincent, M. Labbé, J.-S. Lauzon, F. Michaud, F. Grondin and P.-M. Comtois-Rivet, "Dynamic object tracking and masking for visual SLAM", arXiv:2008.00072, 2020, [online] Available: https://arxiv.org/abs/2008.00072.
B. Bescos, C. Campos, J. D. Tardós and J. Neira, "DynaSLAM II: Tightly-coupled multi-object tracking and SLAM", arXiv:2010.07820, 2020, [online] Available: https://arxiv.org/abs/2010.07820.
L. Xiao, J. Wang, X. Qiu, Z. Rong and X. Zou, "Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment", Robot. Auto. Syst. , vol. 117, pp. 1-16, Jul. 2019.
J. Cheng, Y. Sun and M. Q.-H. Meng, "Robust semantic mapping in challenging environments", Robotica , vol. 38, no. 2, pp. 256-270, Feb. 2020.
C. Choi, A. J. B. Trevor and H. I. Christensen, "RGB-D edge detection and edge-based registration", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. , pp. 1568-1575, Nov. 2013.
Y. Lu and D. Song, "Robust RGB-D odometry using point and line features", Proc. IEEE Int. Conf. Comput. Vis. (ICCV) , pp. 3934-3942, Dec. 2015.
M. Kuse and S. Shen, "Robust camera motion estimation using direct edge alignment and sub-gradient method", Proc. IEEE Int. Conf. Robot. Automat. , pp. 573-579, May 2016.
X. Wang, W. Dong, M. Zhou, R. Li and H. Zha, "Edge enhanced direct visual odometry", Proc. Brit. Mach. Vis. Conf. , pp. 1-12, 2016.
F. Schenk and F. Fraundorfer, "Combining edge images and depth maps for robust visual odometry", Proc. Brit. Mach. Vis. Conf. , pp. 1-12, 2017.
Y. Zhou, H. Li and L. Kneip, "Canny-VO: Visual odometry with RGB-D cameras based on geometric 3-D–2-D edge alignment", IEEE Trans. Robot. , vol. 35, no. 1, pp. 184-199, Feb. 2019.
C. Kim, P. Kim, S. Lee and H. J. Kim, "Edge-based robust RGB-D visual odometry using 2-D edge divergence minimization", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS) , pp. 1-9, Oct. 2018.
C. Kerl, J. Sturm and D. Cremers, "Robust odometry estimation for RGB-D cameras", Proc. IEEE Int. Conf. Robot. Autom. , pp. 3748-3754, May 2013.
N. Patel, F. Khorrami, P. Krishnamurthy and A. Tzes, "Tightly coupled semantic RGB-D inertial odometry for accurate long-term localization and mapping", Proc. 19th Int. Conf. Adv. Robot. (ICAR) , pp. 523-528, Dec. 2019.
M. Klingensmith, S. S. Srinivasa and M. Kaess, "Articulated robot motion for simultaneous localization and mapping (ARM-SLAM)", IEEE Robot. Autom. Lett., vol. 1, no. 2, pp. 1156-1163, Jul. 2016.
R. Scona, S. Nobili, R. Y. Petillot and M. Fallon, "Direct visual SLAM fusing proprioception for a humanoid robot", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. , pp. 1419-1426, Sep. 2017.
R. Scona, M. Jaimez, Y. R. Petillot, M. Fallon and D. Cremers, "StaticFusion: Background reconstruction for dense RGB-D SLAM in dynamic environments", Proc. IEEE Int. Conf. Robot. Autom. (ICRA) , pp. 1-9, May 2018.
M. Rünz, M. Buffier and L. Agapito, "Maskfusion: Real-time recognition tracking and reconstruction of multiple moving objects", Proc. IEEE Int. Symp. Mixed Augmented Reality , pp. 10-20, Oct. 2018.
M. Jaimez, C. Kerl, J. Gonzalez-Jimenez and D. Cremers, "Fast odometry and scene flow from RGB-D cameras based on geometric clustering", Proc. IEEE Int. Conf. Robot. Autom. (ICRA) , pp. 3992-3999, May 2017.
J. Starck, A. Maki, S. Nobuhara, A. Hilton and T. Matsuyama, "The multiple-camera 3-D production studio", IEEE Trans. Circuits Syst. Video Technol. , vol. 19, no. 6, pp. 856-869, Jun. 2009.
J. Tong, J. Zhou, L. Liu, Z. Pan and H. Yan, "Scanning 3D full human bodies using kinects", IEEE Trans. Vis. Comput. Graphics , vol. 18, no. 4, pp. 643-650, Apr. 2012.
G. Ye, Y. Liu, N. Hasler, X. Ji, Q. Dai and C. Theobalt, "Performance capture of interacting characters with handheld kinects" in Computer Vision—ECCV 2012, Florence, Italy:Springer, vol. 7573, pp. 828-841.
K. Guo, F. Xu, Y. Wang, Y. Liu and Q. Dai, "Robust non-rigid motion tracking and surface reconstruction using l0 regularization", Proc. IEEE Int. Conf. Comput. Vis. (ICCV) , pp. 3083-3091, Dec. 2015.
K. Wang, G. Zhang and S. Xia, "Templateless non-rigid reconstruction and motion tracking with a single RGB-D camera", IEEE Trans. Image Process. , vol. 26, no. 12, pp. 5966-5979, Dec. 2017.
J. Yang, D. Guo, K. Li, Z. Wu and Y.-K. Lai, "Global 3D non-rigid registration of deformable objects using a single RGB-D camera", IEEE Trans. Image Process. , vol. 28, no. 10, pp. 4746-4761, Oct. 2019.
S. Lovegrove, A. Patron-Perez and G. Sibley, "Spline fusion: A continuous-time representation for visual-inertial fusion with application to rolling shutter cameras", Proc. Brit. Mach. Vis. Conf. , pp. 8, Sep. 2013.
C. Kerl, J. Stückler and D. Cremers, "Dense continuous-time tracking and mapping with rolling shutter RGB-D cameras", Proc. IEEE Int. Conf. Comput. Vis. , pp. 2264-2272, Dec. 2015.
J.-H. Kim, C. Cadena and D. I. Reid, "Direct semi-dense SLAM for rolling shutter cameras", Proc. IEEE Int. Conf. Robot. Automat. (ICRA) , pp. 1308-1315, May 2016.
J. Sturm, N. Engelhard, F. Endres, W. Burgard and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. , pp. 573-580, Oct. 2012.
A. Handa, T. Whelan, J. McDonald and A. J. Davison, "A benchmark for RGB-D visual odometry 3D reconstruction and SLAM", Proc. IEEE Int. Conf. Robot. Autom. (ICRA) , pp. 1524-1531, May 2014.
P. Wang, L. Liu, N. Chen, H.-K. Chu, C. Theobalt and W. Wang, "Vid2curve: Simultaneous camera motion estimation and thin structure reconstruction from an RGB video", ACM Trans. Graph. , vol. 39, no. 4, pp. 132-141, 2020.

Introduction

SLAM (Simultaneous Localization and Mapping) is a technique for solving the problem of self-localization and mapping in an unknown environment. Since it was first proposed in the 1980s, it has made great progress and has been widely applied in robot navigation, autonomous driving, augmented reality, and virtual reality. Owing to their merits in size, cost, and power consumption, SLAM systems that take image data captured by cameras as input are becoming more and more popular; this branch is also called visual SLAM (vSLAM).
Most vSLAM methods have traditionally been based on low-level feature matching and multiple-view geometry. This introduces several limitations to monocular vSLAM. For example, a large-baseline motion is needed to generate sufficient parallax for reliable depth estimation, and the scale is unobservable. These problems can be partially alleviated by including additional sensors (e.g., stereo cameras, inertial measurement units (IMUs), sonar) or prior knowledge about the system or the scene. Another challenge is the dense reconstruction of low-texture areas. Although recent approaches using deep learning have shown impressive results in this direction, more research is needed regarding their cost and their dependence on the training data [1].
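A back-of-the-envelope calculation makes the baseline requirement concrete. For two calibrated views with focal length $f$ (in pixels) and baseline $b$, a point observed with disparity $d$ under disparity noise $\sigma_d$ has triangulated depth and first-order depth uncertainty (a standard stereo result, stated here for intuition rather than taken from the survey itself):

$$Z = \frac{f\,b}{d}, \qquad \sigma_Z \approx \left|\frac{\partial Z}{\partial d}\right|\sigma_d = \frac{Z^2}{f\,b}\,\sigma_d,$$

so the depth error grows quadratically with distance and shrinks with the baseline $b$; small-parallax motion therefore yields unreliable depth, and a monocular camera additionally cannot observe the metric scale of $b$ at all.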
RGB-D cameras, which provide both a color image and a depth image at the same time, are increasingly used for indoor scene reconstruction and can address the challenges mentioned above. The camera intrinsic parameters, calibrated beforehand, provide the scale factor for reconstruction and camera tracking. Moreover, an RGB-D camera provides depth information for all areas in the field of view, with or without texture, which makes dense reconstruction straightforward and removes the need for map initialization. It is no wonder that research on mapping and localization with RGB-D cameras has flourished in the last decade. Figure 1 shows the reconstruction results of several state-of-the-art RGB-D SLAM systems. Nowadays, RGB-D cameras have become the most popular sensors for indoor applications in robotics and AR/VR. In the future, it will be promising to use a single RGB-D camera, alone or together with other sensors, to complete the SLAM task with better-designed algorithms.
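Part of this convenience is that, once the intrinsics are known, recovering metric structure is a per-pixel operation with no triangulation or initialization step. A minimal sketch under the usual pinhole model follows; the intrinsics $f_x, f_y, c_x, c_y$ and the depth scale (5000 raw units per meter, the TUM RGB-D convention) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, depth_scale=5000.0):
    """Back-project an HxW depth image (raw sensor units) to an Nx3
    point cloud in the camera frame using the pinhole model:
        Z = depth / scale,  X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    Pixels with zero depth (no measurement) are dropped."""
    v, u = np.indices(depth.shape)               # pixel row/column grids
    z = depth.astype(np.float64) / depth_scale   # metric depth in meters
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)
```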

Fig. 1. Reconstructions of state-of-the-art RGB-D SLAM systems.
In this paper, we review real-time RGB-D SLAM algorithms, which have evolved remarkably since 2011. We summarize the common design of most RGB-D SLAM systems and give the basic architecture of an RGB-D SLAM. We then introduce each part of an RGB-D SLAM system together with the relevant works, and provide evaluations to help readers understand their advantages and disadvantages and, ultimately, how to design an RGB-D SLAM system. The remainder of the paper is organized as follows: Sections II and III give an overview of the most common RGB-D SLAM pipeline and the notation and preliminaries used in the following formulations. Sections IV, V, and VI introduce camera tracking, local mapping, and loop closing algorithms, respectively, with specific examples. Section VII discusses relevant advanced topics and open problems that were not covered in the previous sections. Section VIII evaluates the tracking accuracy, mean tracking time, reconstruction accuracy, etc., of seven different SLAM systems on three different datasets. Section IX lists the open-source code and datasets we used. Section X concludes the paper.
After several decades of development, the pipeline of RGB-D SLAM, and of vSLAM more generally, has become largely standardized. Modern vSLAM systems are mostly designed following the idea of PTAM [6], which divides the SLAM task into camera tracking and local mapping, carried out by separate threads. After PTAM was proposed, some works [7], [8] added further threads for loop closing, global BA, and so on. As a result, most state-of-the-art vSLAM systems are built on top of multiple threads and can be divided into two parts: the front end and the back end. The front end is responsible for providing real-time camera poses, while the back end is responsible for slower map updating and optimization.
The basic architecture of RGB-D SLAM is illustrated in Fig. 2, and our article expands on this architecture. In the front end, the RGB-D image $I_k$ and the global map $M_G$ are used for image preprocessing and pose estimation. In the back end, the Local Mapping thread uses the camera pose $\xi_k$, the preprocessed RGB-D image $I'_k$, and the global map $M_G$ to perform map updates and local optimization; the Loop Closing thread detects revisited places and, when a loop is found, corrects the accumulated drift through global optimization.
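To make this dataflow concrete, the following is a minimal two-thread sketch in the spirit of the architecture above, with the paper's symbols ($I_k$, $\xi_k$, $M_G$) in the comments. It is an illustrative outline rather than the implementation of any surveyed system; the tracking, fusion, loop-detection, and keyframe-selection steps are stubs standing in for whatever concrete methods a system chooses.

```python
import queue
import threading

keyframe_queue = queue.Queue()                             # front end -> back end handoff
global_map = {"keyframes": [], "lock": threading.Lock()}   # M_G: shared map state

def track(frame, world_map):
    """Stub for front-end pose estimation of I_k against M_G.
    A real system would run feature-based or direct alignment here."""
    return {"frame_id": frame["id"]}                       # stands in for the pose xi_k

def back_end():
    """Local Mapping / Loop Closing: slower map update and optimization."""
    while True:
        item = keyframe_queue.get()
        if item is None:                                   # shutdown sentinel
            break
        xi_k, frame = item
        with global_map["lock"]:
            global_map["keyframes"].append((xi_k, frame))  # map update
        # Local optimization (e.g. local BA), loop detection and, on a
        # detected loop, global optimization (pose graph / global BA)
        # would run here, off the real-time tracking path.

worker = threading.Thread(target=back_end)
worker.start()

# Front end: a real-time pose xi_k for every incoming frame I_k.
for k in range(10):
    I_k = {"id": k}                                        # placeholder RGB-D frame
    xi_k = track(I_k, global_map)                          # pose available immediately
    if k % 3 == 0:                                         # crude keyframe selection
        keyframe_queue.put((xi_k, I_k))                    # defer heavy work

keyframe_queue.put(None)                                   # stop the back end
worker.join()
```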