Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery (ICCV 2021)

Overview

Change is Everywhere
Single-Temporal Supervised Object Change Detection
in Remote Sensing Imagery

by Zhuo Zheng, Ailong Ma, Liangpei Zhang and Yanfei Zhong

[Paper] [BibTeX]



This is an official implementation of STAR and ChangeStar in our ICCV 2021 paper Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery.

We hope that STAR will serve as a solid baseline and help ease future research in weakly-supervised object change detection.
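
In a nutshell, STAR learns a change detector from unpaired single-temporal images: pseudo bitemporal pairs are constructed by pairing different images (for example, by permuting a mini-batch), and the pseudo change label is the logical XOR of their object masks. The following is only an illustrative PyTorch sketch of this idea, not the implementation used in this repository:

import torch

def make_pseudo_bitemporal_pair(images, masks):
    # images: (N, C, H, W) single-temporal images; masks: (N, H, W) binary object masks.
    # Pair every image with another image of the same mini-batch via a random permutation.
    perm = torch.randperm(images.size(0))
    t1_images, t2_images = images, images[perm]
    t1_masks, t2_masks = masks, masks[perm]
    # A pixel is "changed" if its object label differs between the two pseudo-temporal views.
    change_label = torch.logical_xor(t1_masks.bool(), t2_masks.bool()).float()
    return (t1_images, t2_images), change_label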


News

  • 2021/08/28, The code is available.
  • 2021/07/23, The code will be released soon.
  • 2021/07/23, This paper is accepted by ICCV 2021.

Features

  • Learning a good change detector from single-temporal supervision.
  • Strong baselines for bitemporal and single-temporal supervised change detection.
  • A clean codebase for weakly-supervised change detection.
  • Support for both bitemporal and single-temporal supervised settings.

Citation

If you use STAR or ChangeStar (FarSeg) in your research, please cite the following papers:

@inproceedings{zheng2021change,
  title={Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Ma, Ailong and Zhang, Liangpei and Zhong, Yanfei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={},
  year={2021}
}

@inproceedings{zheng2020foreground,
  title={Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Zhong, Yanfei and Wang, Junjue and Ma, Ailong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4096--4105},
  year={2020}
}

Getting Started

Install EVer

pip install --upgrade git+https://github.com/Z-Zheng/ever.git

Requirements:

  • pytorch >= 1.6.0
  • python >= 3.6
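
Before preparing the data, you may want to verify the environment. The following is a hypothetical sanity check (it assumes the package installed above is importable as ever; it is not part of this repository):

import sys
import torch
import ever  # installed by the pip command above

print("python:", sys.version.split()[0])        # should be >= 3.6
print("pytorch:", torch.__version__)            # should be >= 1.6.0
print("ever imported from:", ever.__file__)
print("CUDA available:", torch.cuda.is_available())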

Prepare Dataset

  1. Download xView2 dataset (training set and tier3 set) and LEVIR-CD dataset.

  2. Create soft links

ln -s </path/to/xView2> ./xView2
ln -s </path/to/LEVIR-CD> ./LEVIR-CD

Training and Evaluation under Single-Temporal Supervision

bash ./scripts/trainxView2/r50_farseg_changemixin_symmetry.sh

Training and Evaluation under Bitemporal Supervision

bash ./scripts/bisup_levircd/r50_farseg_changemixin.sh

License

ChangeStar is released under the Apache License 2.0.

Copyright (c) Zhuo Zheng. All rights reserved.

Comments
  • Can ChangeStar be used for general CD?

    Hi,

    Thanks for the great work. I wonder whether this work can be used for general change detection, i.e., multi-class rather than just single-class.

    If yes, have you done such experiments? Thanks!

    opened by Richardych 3
  • How to add ChangeMixin when using bitemporal supervision

    Hello, I have a few questions about your repo:

    1. How do I add ChangeMixin under bitemporal supervision? I see it in Table 4 of your paper, but I cannot find it in the code.
    2. Could ChangeStar be trained with single-temporal supervision on LEVIR-CD? (The other dataset is too big for me to download.)
    3. Do your bitemporal supervised methods just use torch.cat in the final layer? Sorry for asking these questions.
    opened by csliuchang 3
  • ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 117, in run
        test_data_loader=kw_dataloader['testdata_loader'])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 232, in train_by_config
        signal_loss_dict = self.train_iters(train_data_loader, test_data_loader=test_data_loader, **config)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 174, in train_iters
        is_master=self._master)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/iterator.py", line 30, in next
        data = next(self._iterator)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 218, in __getitem__
        return self.datasets[dataset_idx][sample_idx]
      File "/home/yujianzhi/tem/ChangeStar-master/data/levir_cd/dataset.py", line 30, in __getitem__
        blob = self.transforms(**dict(image=imgs, mask=gt))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/composition.py", line 191, in __call__
        data = t(force_apply=force_apply, **data)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 90, in __call__
        return self.apply_with_params(params, **kwargs)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 103, in apply_with_params
        res[key] = target_function(arg, **dict(params, **target_dependencies))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/transforms.py", line 48, in apply
        return F.random_crop(img, self.height, self.width, h_start, w_start)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/functional.py", line 28, in random_crop
        crop_height=crop_height, crop_width=crop_width, height=height, width=width
    ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/yujianzhi/anaconda3/envs/CStar/bin/python', '-u', './train_sup_change.py', '--local_rank=0', '--config_path=levircd.r50_farseg_changestar_bisup', '--model_dir=./log/bisup-LEVIRCD/r50_farseg_changestar']' returned non-zero exit status 1.

    It says "ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)", but my images are exactly 512x512.

    opened by themoongodyue 3
  • How to get the bitemporal images' labels if the model is trained on LEVIR-CD dataset?

    Hello, I'm very interested in your work, but I encountered a problem during my research. If the model is trained on the LEVIR-CD dataset, how can the change labels be obtained when the dataset has no segmentation map for each bitemporal image? I would appreciate it if you could resolve this problem.

    opened by SONGLEI-arch 2
  • Reproduction Problem

    Hello author.

    Your work is great!

    But I ran into a problem while running your code.

    The performance I obtained (IoU, shown in the attached screenshot) is much higher than the number in Table 1 of your paper. Can you tell me the reason?

    All hyperparameters and data are identical.

    opened by seominseok0429 1
  • AssertionError error

    Hello, this is really great work. I have one question for you. The LEVIR-CD dataset trains well, but the xView2 dataset gives the following unknown error.

    Do you have any idea how to fix it? All processes follow the recipe exactly (screenshot attached).

    opened by seominseok0429 1
  • RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8

    I'm going crazy, please help me.

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 98, in run
        kwargs.update(dict(model=self.make_model()))
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 87, in make_model
        model = nn.parallel.DistributedDataParallel(
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
        dist._verify_model_across_ranks(self.process_group, parameters)
    RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
    ncclSystemError: System call (socket, malloc, munmap, etc) failed.
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31335) of binary: /home/cy/miniconda3/envs/STAnet/bin/python
    ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed

    opened by themoongodyue 1
  • Evaluation

    Excuse me, I would like to know how to run inference with this model after training. It would also be fantastic if you could offer a link to usage documentation for the 'ever' library.

    opened by LIUZIJING-CHN 1
  • changestar_sisup results

    Hi, I have trained the model under single-temporal supervision, but the F1 score is only 0.73, which is worse than the result in your paper. Is there anything wrong with my experiment? Below is my training log:

    1666753326.225779.log

    After training, I only evaluated on the LEVIR-CD test set.

    opened by max2857 0
  • A question about PCC

    Hello, I have a question about PCC (post-classification comparison):

    PCC is mentioned in the paper. After obtaining classification results from the segmentation model, how is the change detection result obtained from them? Is it a direct subtraction?

    opened by Hyd1999618 0
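
    A general note on PCC (post-classification comparison), independent of this repository's code: each date is classified separately, and change is declared wherever the two predicted label maps disagree. For binary maps this disagreement is a logical XOR (the absolute difference), not a signed subtraction. A minimal illustrative sketch:

    import numpy as np

    def pcc_change_map(pred_t1, pred_t2):
        # pred_t1, pred_t2: (H, W) predicted class-label maps for the two dates.
        # A pixel is marked as changed when its predicted class differs between dates.
        return (pred_t1 != pred_t2).astype(np.uint8)
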
  • [Feature] support [0~255] gt

    The original LEVIR-CD ground truth consists of 0 and 255.

    However, the segmentation loss in this code only works when the labels are 0 and 1.

    Therefore, I added code that maps 255 in the ground truth to 1.

    opened by seominseok0429 1
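
    A minimal sketch of the remapping described in this feature request (the function name is hypothetical, not this repository's actual dataset code):

    import numpy as np

    def normalize_levircd_mask(mask):
        # Map LEVIR-CD ground truth stored as {0, 255} to the {0, 1} labels the loss expects.
        return (mask > 0).astype(np.uint8)
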
Releases: v0.1.0

Owner: Zhuo Zheng (CV in RS, Ph.D. student)