MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition


Paper: MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition, accepted at the International Joint Conference on Neural Networks (IJCNN) 2021 (ArXiv)

Jacek Komorowski, Monika Wysoczańska, Tomasz Trzciński

Warsaw University of Technology

Our other projects

  • MinkLoc3D: Point Cloud Based Large-Scale Place Recognition (WACV 2021): MinkLoc3D
  • Large-Scale Topological Radar Localization Using Learned Descriptors (ICONIP 2021): RadarLoc
  • EgoNN: Egocentric Neural Network for Point Cloud Based 6DoF Relocalization at the City Scale (IEEE Robotics and Automation Letters, April 2022): EgoNN

Introduction

We present a discriminative multimodal descriptor based on a pair of sensor readings: a point cloud from a LiDAR and an image from an RGB camera. Our descriptor, named MinkLoc++, can be used for place recognition, re-localization and loop closure in robotics and autonomous driving applications. We use a late fusion approach, where each modality is processed separately and the results are fused in the final part of the processing pipeline. The proposed method achieves state-of-the-art performance on standard place recognition benchmarks. We also identify the dominant modality problem that arises when training a multimodal descriptor: the network focuses on the modality with a larger overfit to the training data, which drives the training loss down but leads to suboptimal performance on the evaluation set. In this work we describe how to detect and mitigate this risk when using a deep metric learning approach to train a multimodal neural network.
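As an illustration of the late fusion scheme described above, here is a minimal PyTorch sketch: two independent encoders produce unimodal descriptors that are fused only at the end of the pipeline. The module and tensor names are hypothetical; the actual networks are a MinkowskiEngine-based sparse 3D CNN and an image CNN, and the exact fusion operator is described in the paper.

import torch
import torch.nn as nn

class LateFusionDescriptor(nn.Module):
    # Hypothetical late-fusion wrapper: each modality is embedded
    # independently; the unimodal descriptors are fused only at the end.
    def __init__(self, cloud_encoder: nn.Module, image_encoder: nn.Module):
        super().__init__()
        self.cloud_encoder = cloud_encoder  # point cloud -> (B, D1) descriptor
        self.image_encoder = image_encoder  # RGB image -> (B, D2) descriptor

    def forward(self, cloud, image):
        desc_3d = self.cloud_encoder(cloud)   # unimodal 3D descriptor
        desc_rgb = self.image_encoder(image)  # unimodal RGB descriptor
        # Late fusion: combine the two unimodal descriptors (concatenation here)
        return torch.cat([desc_3d, desc_rgb], dim=1)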


Citation

If you find this work useful, please consider citing:

@INPROCEEDINGS{9533373,  
   author={Komorowski, Jacek and Wysoczańska, Monika and Trzcinski, Tomasz},  
   booktitle={2021 International Joint Conference on Neural Networks (IJCNN)},   
   title={MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition},   
   year={2021},  
   doi={10.1109/IJCNN52387.2021.9533373}
}

Environment and Dependencies

Code was tested using Python 3.8 with PyTorch 1.9.1 and MinkowskiEngine 0.5.4 on Ubuntu 20.04 with CUDA 10.2.

The following Python packages are required (a sample installation sequence is shown after the list):

  • PyTorch (version 1.9.1)
  • MinkowskiEngine (version 0.5.4)
  • pytorch_metric_learning (version 1.0 or above)
  • tensorboard
  • colour_demosaicing
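A sample installation sequence (package names as published on PyPI; MinkowskiEngine compiles native extensions and has additional build prerequisites documented in its own repository, so this is a sketch rather than a guaranteed recipe):

pip install torch==1.9.1
pip install pytorch-metric-learning tensorboard colour-demosaicing
pip install MinkowskiEngine==0.5.4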

Modify the PYTHONPATH environment variable to include the absolute path to the project root folder:

export PYTHONPATH=$PYTHONPATH:/home/.../MinkLocMultimodal

Datasets

MinkLoc++ is a multimodal descriptor based on a pair of inputs:

  • a 3D point cloud constructed by aggregating multiple 2D LiDAR scans from Oxford RobotCar dataset,
  • a corresponding RGB image from the stereo-center camera.

We use the 3D point clouds built by the authors of the PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition paper (link). Each point cloud is built by aggregating 2D LiDAR scans gathered during a 20-meter vehicle traversal. For details see the PointNetVLAD paper or their GitHub repository (link). You can download the training and evaluation point clouds from here (alternative link).

After downloading the dataset, edit the config_baseline_multimodal.txt configuration file (in the config folder). Set the dataset_folder parameter to point to the root folder of the PointNetVLAD dataset with 3D point clouds. The image_path parameter must be a folder where downsampled RGB images from the Oxford RobotCar dataset will be saved. The folder will be created by the generate_rgb_for_lidar.py script.
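For illustration, the relevant entries might look as follows (paths are examples; this assumes the INI-style key = value layout used by the configuration files in the config folder):

dataset_folder = /data/pointnetvlad/benchmark_datasets
image_path = /data/oxford_robotcar_rgb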

Generate training and evaluation tuples

Run the code below to generate training pickles (with positive and negative point clouds for each anchor point cloud) and evaluation pickles. The training pickle format is optimized and differs from the format used in the PointNetVLAD code.

cd generating_queries/

# Generate training tuples for the Baseline Dataset
python generate_training_tuples_baseline.py --dataset_root <dataset_root_path>

# Generate training tuples for the Refined Dataset
python generate_training_tuples_refine.py --dataset_root <dataset_root_path>

# Generate evaluation tuples
python generate_test_sets.py --dataset_root <dataset_root_path>

<dataset_root_path> is the path to the dataset root folder, e.g. /data/pointnetvlad/benchmark_datasets/. Before running the code, ensure you have read/write rights to <dataset_root_path>, as training and evaluation pickles are saved there.
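For intuition, in the PointNetVLAD protocol positives and negatives are selected by UTM distance between scan locations (within 10 m and beyond 50 m, respectively). The following simplified sketch shows the idea; the repository's scripts produce their own optimized pickle format, so this is illustrative only:

import numpy as np

def build_tuples(positions, pos_thresh=10.0, neg_thresh=50.0):
    # positions: (N, 2) array of UTM easting/northing, one row per point cloud.
    # Pairwise distances between all point cloud locations.
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
    tuples = []
    for i in range(len(positions)):
        # Positives: within pos_thresh meters (excluding the anchor itself).
        positives = np.where((dists[i] <= pos_thresh) & (np.arange(len(positions)) != i))[0]
        # Negatives: beyond neg_thresh meters.
        negatives = np.where(dists[i] > neg_thresh)[0]
        tuples.append({'anchor': i, 'positives': positives, 'negatives': negatives})
    return tuples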

Downsample RGB images and index RGB images linked with each point cloud

RGB images are taken directly from the Oxford RobotCar dataset. First, download the stereo camera images from the Oxford RobotCar dataset; see the dataset website for details (link). After downloading the dataset, run the generate_rgb_for_lidar.py script. The script finds the 20 RGB images closest in time to each 3D point cloud, downsamples them and saves them in the target directory (the image_path parameter in config_baseline_multimodal.txt). During training, the network input consists of a 3D point cloud and one RGB image randomly chosen from these 20 corresponding images. During evaluation, the input consists of a 3D point cloud and the RGB image with the closest timestamp.

cd scripts/

# Downsample and index RGB images corresponding to each point cloud
python generate_rgb_for_lidar.py --config ../config/config_baseline_multimodal.txt --oxford_root <oxford_robotcar_root_path>
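The timestamp matching performed by the script can be sketched as follows (hypothetical helper; the actual script additionally demosaics and downsamples the images):

import numpy as np

def closest_images(cloud_timestamps, image_timestamps, k=20):
    # For each point cloud timestamp, return the indices of the k RGB images
    # closest in time. Index 0 is the single closest image, used at evaluation.
    image_timestamps = np.asarray(image_timestamps)
    result = []
    for ts in cloud_timestamps:
        diffs = np.abs(image_timestamps - ts)
        result.append(np.argsort(diffs)[:k])
    return result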

Training

MinkLoc++ can be used in a unimodal scenario (3D point cloud input only) or a multimodal scenario (3D point cloud + RGB image input). To train the MinkLoc++ network, download and decompress the 3D point cloud dataset and generate training pickles as described above. To train the multimodal model (3D+RGB), also download the original Oxford RobotCar dataset and extract the RGB images corresponding to the 3D point clouds as described above. Edit the configuration files:

  • config_baseline_multimodal.txt when training a multimodal (3D+RGB) model
  • config_baseline.txt and config_refined.txt when training a unimodal (3D only) model

Set the dataset_folder parameter to the dataset root folder where the 3D point clouds are located. Set the image_path parameter to the path with RGB images corresponding to the 3D point clouds, extracted from the Oxford RobotCar dataset using the generate_rgb_for_lidar.py script (only when training a multimodal model). Modify the batch_size_limit parameter depending on the available GPU memory; the default limit requires 11GB of GPU RAM.
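The paper mitigates the dominant modality problem by augmenting the metric-learning loss on the fused descriptor with auxiliary losses on each unimodal descriptor, so that both encoders keep receiving a training signal even when one modality dominates. A hedged sketch using pytorch_metric_learning (the margin and the alpha/beta weights are illustrative; the exact loss configuration lives in the config files):

import torch
from pytorch_metric_learning import losses

triplet = losses.TripletMarginLoss(margin=0.2)  # margin value illustrative

def multimodal_loss(desc_3d, desc_rgb, labels, alpha=1.0, beta=1.0):
    # Fused descriptor: late fusion of the two unimodal descriptors.
    desc_fused = torch.cat([desc_3d, desc_rgb], dim=1)
    # Main loss on the fused descriptor plus auxiliary unimodal terms.
    return (triplet(desc_fused, labels)
            + alpha * triplet(desc_3d, labels)
            + beta * triplet(desc_rgb, labels))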

To train the multimodal model (3D+RGB), run:

cd training

python train.py --config ../config/config_baseline_multimodal.txt --model_config ../models/minklocmultimodal.txt

To train a unimodal (3D only) model, run:

cd training

# Train unimodal (3D only) model on the Baseline Dataset
python train.py --config ../config/config_baseline.txt --model_config ../models/minkloc3d.txt

# Train unimodal (3D only) model on the Refined Dataset
python train.py --config ../config/config_refined.txt --model_config ../models/minkloc3d.txt

Pre-trained Models

Pretrained models are available in the weights directory:

  • minkloc_multimodal.pth: multimodal model (3D+RGB) trained on the Baseline Dataset with corresponding RGB images
  • minkloc3d_baseline.pth: unimodal model (3D only) trained on the Baseline Dataset
  • minkloc3d_refined.pth: unimodal model (3D only) trained on the Refined Dataset

Evaluation

To evaluate the pretrained models, run the following commands:

cd eval

# To evaluate the multimodal model (3D+RGB) trained on the Baseline Dataset
python evaluate.py --config ../config/config_baseline_multimodal.txt --model_config ../models/minklocmultimodal.txt --weights ../weights/minklocmultimodal_baseline.pth

# To evaluate the unimodal model (3D only) trained on the Baseline Dataset
python evaluate.py --config ../config/config_baseline.txt --model_config ../models/minkloc3d.txt --weights ../weights/minkloc3d_baseline.pth

# To evaluate the unimodal model (3D only) trained on the Refined Dataset
python evaluate.py --config ../config/config_refined.txt --model_config ../models/minkloc3d.txt --weights ../weights/minkloc3d_refined.pth
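For reference, in the standard PointNetVLAD evaluation protocol a query is counted as correctly localized at rank N if any of its top-N retrieved database elements lies within 25 m of the query position; Recall@1% sets N to 1% of the database size. A simplified brute-force sketch:

import numpy as np

def recall_at_n(query_desc, db_desc, query_pos, db_pos, n, dist_thresh=25.0):
    # query_desc: (Q, D) and db_desc: (M, D) descriptors;
    # query_pos / db_pos: corresponding (Q, 2) / (M, 2) UTM coordinates.
    hits = 0
    for q in range(len(query_desc)):
        # Rank database entries by distance in descriptor space.
        order = np.argsort(np.linalg.norm(db_desc - query_desc[q], axis=1))[:n]
        # Success if any retrieved element is within dist_thresh meters.
        geo_dists = np.linalg.norm(db_pos[order] - query_pos[q], axis=1)
        hits += bool(np.any(geo_dists <= dist_thresh))
    return hits / len(query_desc)

# Recall@1% uses n equal to 1% of the database size:
# recall_at_n(qd, dd, qp, dp, n=max(1, len(dd) // 100))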

Results

MinkLoc++ performance (measured by Average Recall@1 and Average Recall@1%) compared to the state of the art:

Multimodal model (3D+RGB) trained on the Baseline Dataset extended with RGB images

Method               Oxford (AR@1)   Oxford (AR@1%)
CORAL [1]            88.9            96.1
PIC-Net [2]          -               98.2
MinkLoc++ (3D+RGB)   96.7            99.1

Unimodal model (3D only) trained on the Baseline Dataset

Method                 Oxford (AR@1%)   U.S. (AR@1%)   R.A. (AR@1%)   B.D. (AR@1%)
PointNetVLAD [3]       80.3             72.6           60.3           65.3
PCAN [4]               83.8             79.1           71.2           66.8
DAGC [5]               87.5             83.5           75.7           71.2
LPD-Net [6]            94.9             96.0           90.5           89.1
EPC-Net [7]            94.7             96.5           88.6           84.9
SOE-Net [8]            96.4             93.2           91.5           88.5
NDT-Transformer [10]   97.7             -              -              -
MinkLoc3D [9]          97.9             95.0           91.2           88.5
MinkLoc++ (3D-only)    98.2             94.5           92.1           88.4

Unimodal model (3D only) trained on the Refined Dataset

Method                Oxford (AR@1%)   U.S. (AR@1%)   R.A. (AR@1%)   B.D. (AR@1%)
PointNetVLAD [3]      80.1             94.5           93.1           86.5
PCAN [4]              86.4             94.1           92.3           87.0
DAGC [5]              87.8             94.3           93.4           88.5
LPD-Net [6]           94.9             98.9           96.4           94.4
SOE-Net [8]           96.4             97.7           95.9           92.6
MinkLoc3D [9]         98.5             99.7           99.3           96.7
MinkLoc++ (3D-only)   98.4             99.7           99.3           97.4

  1. Y. Pan et al., "CORAL: Colored structural representation for bi-modal place recognition", preprint arXiv:2011.10934 (2020)
  2. Y. Lu et al., "PIC-Net: Point Cloud and Image Collaboration Network for Large-Scale Place Recognition", preprint arXiv:2008.00658 (2020)
  3. M. A. Uy and G. H. Lee, "PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  4. W. Zhang and C. Xiao, "PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  5. Q. Sun et al., "DAGC: Employing Dual Attention and Graph Convolution for Point Cloud based Place Recognition", Proceedings of the 2020 International Conference on Multimedia Retrieval
  6. Z. Liu et al., "LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis", 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  7. L. Hui et al., "Efficient 3D Point Cloud Feature Learning for Large-Scale Place Recognition", preprint arXiv:2101.02374 (2021)
  8. Y. Xia et al., "SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition", 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  9. J. Komorowski, "MinkLoc3D: Point Cloud Based Large-Scale Place Recognition", Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), (2021)
  10. Z. Zhou et al., "NDT-Transformer: Large-scale 3D Point Cloud Localisation Using the Normal Distribution Transform Representation", 2021 IEEE International Conference on Robotics and Automation (ICRA)
  • J. Komorowski, M. Wysoczanska, T. Trzcinski, "MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition", accepted for International Joint Conference on Neural Networks (IJCNN), (2021)

License

Our code is released under the MIT License (see LICENSE file for details).
