Abstract—Over the past two decades, research in the field of Simultaneous Localization and Mapping (SLAM) has undergone a significant evolution, highlighting its critical role in enabling autonomous exploration of unknown environments. This evolution ranges from hand-crafted methods, through the era of deep learning, to more recent developments focused on Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) representations. Recognizing the growing body of research and the absence of a comprehensive survey on the topic, this paper aims to provide the first comprehensive overview of SLAM progress through the lens of the latest advancements in radiance fields. It sheds light on the background, evolutionary path, inherent strengths, and limitations of these approaches, and serves as a fundamental reference to highlight their dynamic progress and specific challenges.
TABLE 1: SLAM Systems Overview. We categorize the different methods into main RGB-D, RGB, and LiDAR-based frameworks. In the leftmost column, we identify sub-categories of methods sharing specific properties, detailed in Sections 3.2.1 to 3.3.2. Then, for each method, we report, from the second leftmost column to the second rightmost, the method name and publication venue, followed by (a) the input modalities they can process: RGB, RGB-D, D (e.g., LiDAR, ToF, Kinect), stereo, IMU, or events; (b) mapping properties: the scene encoding and geometry representations learned by the model; (c) additional outputs learned by the method, such as object/semantic segmentation or uncertainty modeling (Uncert.); (d) tracking properties related to the adoption of a frame-to-frame or frame-to-model approach, the use of external trackers, Global Bundle Adjustment (BA), or Loop Closure; (e) advanced design strategies, such as modeling sub-maps or handling dynamic environments (Dyn. Env.); (f) the use of additional priors. Finally, we report the link to the project page or source code in the rightmost column. † indicates that code has not been released yet.
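To make the taxonomy concrete, the following is a minimal sketch of how one row of Table 1 could be encoded as a data structure; it is illustrative only, not drawn from any surveyed codebase, and all class and field names (SLAMEntry, InputModality, and the per-column attributes) are our own assumptions mirroring columns (a) through (f) of the caption.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class InputModality(Enum):
    """Input modalities from column (a) of Table 1."""
    RGB = auto()
    RGBD = auto()
    DEPTH = auto()   # e.g., LiDAR, ToF, Kinect
    STEREO = auto()
    IMU = auto()
    EVENTS = auto()


@dataclass
class SLAMEntry:
    """One row of Table 1; field names are hypothetical, for illustration only."""
    name: str                        # method name
    venue: str                       # publication venue
    modalities: set[InputModality]   # (a) supported input modalities
    scene_encoding: str              # (b) e.g., "MLP", "hash grid", "3D Gaussians"
    geometry: str                    # (b) e.g., "SDF", "density", "splats"
    semantics: bool = False          # (c) object/semantic segmentation
    uncertainty: bool = False        # (c) uncertainty modeling (Uncert.)
    frame_to_model: bool = False     # (d) frame-to-model (else frame-to-frame)
    external_tracker: bool = False   # (d) relies on an external tracker
    global_ba: bool = False          # (d) Global Bundle Adjustment (BA)
    loop_closure: bool = False       # (d) Loop Closure
    sub_maps: bool = False           # (e) sub-map decomposition
    dynamic_env: bool = False        # (e) handles dynamic environments (Dyn. Env.)
    priors: Optional[str] = None     # (f) additional priors, if any
    code_url: Optional[str] = None   # project page / source code link (†: not released)
```

Encoded this way, a list of such entries could be filtered programmatically, for instance selecting all RGB-D methods that support loop closure, which is essentially how the sub-categories in the leftmost column of the table are formed.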