SLAM (Simultaneous Localization and Mapping) builds a map of the surrounding environment while estimating the camera's own motion as it moves. SLAM algorithms have broad application prospects in autonomous driving, service robots, surveying and mapping, AR, and other scenarios. Traditional visual SLAM algorithms track features under the assumption that the environment is static, which makes them less robust in dynamic environments; they also do not make full use of the information in the image. To address this, the team combined semantic segmentation with SLAM to build a semantic map of the environment while improving the algorithm's robustness, helping the robot better understand its surroundings.
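One common way to combine semantic segmentation with SLAM, as described above, is to use the per-pixel class labels to discard feature points that fall on potentially dynamic objects (people, vehicles) before tracking. The sketch below illustrates the idea with NumPy only; the function name, the class IDs, and the set of "dynamic" classes are all hypothetical assumptions for illustration, not part of any specific SLAM system.

```python
import numpy as np

# Hypothetical class IDs for dynamic objects (e.g. person=11, car=13);
# these values are illustrative, not tied to a particular dataset.
DYNAMIC_CLASSES = {11, 13}

def filter_dynamic_keypoints(keypoints, label_map, dynamic_classes=DYNAMIC_CLASSES):
    """Keep only keypoints that do not land on pixels labeled as dynamic.

    keypoints: (N, 2) array-like of (x, y) pixel coordinates
    label_map: (H, W) integer array of per-pixel semantic class IDs
    """
    keypoints = np.asarray(keypoints, dtype=int)
    # Look up the semantic label under each keypoint (note row = y, col = x).
    labels = label_map[keypoints[:, 1], keypoints[:, 0]]
    static_mask = ~np.isin(labels, list(dynamic_classes))
    return keypoints[static_mask]

# Toy example: a 4x4 label map whose top-left 2x2 block is a "person" (class 11).
label_map = np.zeros((4, 4), dtype=int)
label_map[:2, :2] = 11
kps = [(0, 0), (3, 3), (1, 1), (2, 0)]
print(filter_dynamic_keypoints(kps, label_map))  # keeps (3, 3) and (2, 0)
```

In a full pipeline the surviving static keypoints would feed pose estimation as usual, while the segmentation labels of mapped points can be stored alongside their positions to form the semantic map.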