One-shot Human Parsing

Overview

This is the official repository for our two papers:

  • Progressive One-shot Human Parsing (AAAI 2021)
  • End-to-end One-shot Human Parsing (journal version)

Introduction:

In the two papers, we propose a new task named One-shot Human Parsing (OSHP). OSHP requires parsing humans in a query image into an open set of reference classes defined by any single reference example (i.e., a support image) during testing, regardless of whether those classes were annotated during training (base classes) or not (novel classes). This new task mainly aims to extend human parsing to a wider range of applications that need to parse flexible fashion/clothing classes not pre-defined in previous large-scale datasets.
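
For intuition, the sketch below shows what a single OSHP episode looks like as a data structure. This is a minimal illustration only; the names (OSHPEpisode, support_image, support_mask, query_image) are assumptions for exposition, not the repository's actual data interface.

# Illustrative sketch of a one-shot human parsing episode.
# All names and tensor shapes here are assumptions, not the repo's API.
from dataclasses import dataclass
import torch

@dataclass
class OSHPEpisode:
    support_image: torch.Tensor  # (3, H, W) the single reference example
    support_mask: torch.Tensor   # (H, W) integer mask defining the reference classes
    query_image: torch.Tensor    # (3, H, W) the image to be parsed

    def reference_classes(self):
        # The open set of classes is defined by whatever labels appear
        # in the single support mask (0 is assumed to be background).
        return [c for c in self.support_mask.unique().tolist() if c != 0]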

Progressive One-shot Human Parsing (AAAI 2021) applies a progressive training scheme separated into three stages.

End-to-end One-shot Human Parsing (journal version) is a one-stage, end-to-end training method that achieves higher performance and FPS.


Main results:

You can find the well-trained models together with their performance in the following table (K-way setting):

| EOPNet | ATR-OS, K-way F1 | ATR-OS, K-way F2 | LIP-OS, K-way F1 | LIP-OS, K-way F2 | CIHP-OS, K-way F1 | CIHP-OS, K-way F2 |
| --- | --- | --- | --- | --- | --- | --- |
| Novel mIoU | 31.1 | 34.6 | 25.7 | 30.4 | 20.5 | 25.1 |
| Human mIoU | 61.9 | 63.3 | 43.0 | 45.7 | 49.1 | 45.5 |
| Model | Model | Coming Soon | Model | Model | Model | Model |

The 1-way results are listed in the following table:

| EOPNet | ATR-OS, 1-way F1 | ATR-OS, 1-way F2 | LIP-OS, 1-way F1 | LIP-OS, 1-way F2 | CIHP-OS, 1-way F1 | CIHP-OS, 1-way F2 |
| --- | --- | --- | --- | --- | --- | --- |
| Novel mIoU | 53.0 | 41.4 | 42.0 | 46.2 | 25.4 | 36.4 |
| Human mIoU | 68.2 | 69.5 | 57.0 | 58.0 | 53.8 | 55.4 |
| Model | Coming Soon | Coming Soon | Coming Soon | Coming Soon | Coming Soon | Coming Soon |

Getting started:

Data preparation:

First, please download the ATR, LIP, and CIHP datasets from their sources. Then, use the following commands to link the data into our project folder. Please also remember to download the ATR flipped labels and CIHP flipped labels.

# ATR dataset
$ ln -s YOUR_ATR_PATH/JPEGImages/* YOUR_PROJECT_ROOT/ATR_OS/trainval_images
$ ln -s YOUR_ATR_PATH/SegmentationClassAug/* YOUR_PROJECT_ROOT/ATR_OS/trainval_classes
$ ln -s YOUR_ATR_PATH/SegmentationClassAug_rev/* YOUR_PROJECT_ROOT/ATR_OS/Category_rev_ids


# LIP dataset
$ ln -s YOUR_LIP_PATH/TrainVal_images/TrainVal_images/train_images/* YOUR_PROJECT_ROOT/LIP_OS/trainval_images
$ ln -s YOUR_LIP_PATH/TrainVal_images/TrainVal_images/val_images/* YOUR_PROJECT_ROOT/LIP_OS/trainval_images
$ ln -s YOUR_LIP_PATH/TrainVal_parsing_annotations/TrainVal_parsing_annotations/train_segmentations/* YOUR_PROJECT_ROOT/LIP_OS/trainval_classes
$ ln -s YOUR_LIP_PATH/TrainVal_parsing_annotations/TrainVal_parsing_annotations/val_segmentations/* YOUR_PROJECT_ROOT/LIP_OS/trainval_classes
$ ln -s YOUR_LIP_PATH/Train_parsing_reversed_labels/TrainVal_parsing_annotations/* YOUR_PROJECT_ROOT/LIP_OS/Category_rev_ids
$ ln -s YOUR_LIP_PATH/val_segmentations_reversed/* YOUR_PROJECT_ROOT/LIP_OS/Category_rev_ids


# CIHP dataset
$ ln -s YOUR_CIHP_PATH/Training/Images/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_images
$ ln -s YOUR_CIHP_PATH/Validation/Images/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_images
$ ln -s YOUR_CIHP_PATH/Training/Category_ids/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_classes
$ ln -s YOUR_CIHP_PATH/Validation/Category_ids/* YOUR_PROJECT_ROOT/CIHP_OS/trainval_classes
$ ln -s YOUR_CIHP_PATH/Category_rev_ids/* YOUR_PROJECT_ROOT/CIHP_OS/Category_rev_ids
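
After linking, you can optionally run a quick sanity check to confirm each folder is populated. This is a minimal sketch assuming the layout above; YOUR_PROJECT_ROOT is a placeholder to replace with your actual path.

# Sanity-check sketch: verify the symlinked data layout described above.
# YOUR_PROJECT_ROOT is a placeholder; replace it with your actual path.
import os

root = "YOUR_PROJECT_ROOT"
for dataset in ("ATR_OS", "LIP_OS", "CIHP_OS"):
    for folder in ("trainval_images", "trainval_classes", "Category_rev_ids"):
        path = os.path.join(root, dataset, folder)
        assert os.path.isdir(path), f"missing folder: {path}"
        print(dataset, folder, len(os.listdir(path)), "files")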

Please also download our generated support .pkl files from source, which contain each class's image IDs. You can also generate the support files on your own by controlling dtrain_dtest_split in oshp_loader.py; however, the resulting training and validation lists might differ from those used in our paper.
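
To see what a support file contains, you can load it with pickle. This is a minimal sketch; that the file holds image IDs grouped per class follows the description above, and the exact structure in the actual release may differ.

# Illustrative sketch: peek inside a downloaded support file.
# Assumes (per the description above) it stores image IDs per class;
# the exact structure of the released files may differ.
import pickle

with open("data/datasets/ATR_OS/support/meta_train_atr_supports.pkl", "rb") as f:
    supports = pickle.load(f)

print(type(supports))
if isinstance(supports, dict):
    for cls, image_ids in list(supports.items())[:3]:
        print(cls, "->", image_ids[:5])  # first few image IDs for this class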

After these steps, your data folder should look like this:

${PROJECT ROOT}
|-- data
|   |--datasets
|       |-- ATR_OS
|       |   |-- list
|       |   |   |-- meta_train_id.txt
|       |   |   `-- meta_test_id.txt
|       |   |-- support
|       |   |   |-- meta_train_atr_supports.pkl
|       |   |   `-- meta_test_atr_supports.pkl
|       |   |-- trainval_images
|       |   |   |-- 997-1.jpg
|       |   |   |-- 997-2.jpg
|       |   |   `-- ...
|       |   |-- trainval_classes
|       |   |   |-- 997-1.png
|       |   |   |-- 997-2.png
|       |   |   `-- ... 
|       |   `-- Category_rev_ids
|       |       |-- 997-1.png
|       |       |-- 997-2.png
|       |       `-- ... 
|       |-- LIP_OS
|       |   |-- list
|       |   |   |-- meta_train_id.txt
|       |   |   `-- meta_test_id.txt
|       |   |-- support
|       |   |   |-- meta_train_lip_supports.pkl
|       |   |   `-- meta_test_lip_supports.pkl
|       |   |-- trainval_images
|       |   |   |-- ...
|       |   |-- trainval_classes
|       |   |   |-- ... 
|       |   `-- Category_rev_ids
|       |       |-- ... 
|       `-- CIHP_OS
|           |-- list
|           |   |-- meta_train_id.txt
|           |   `-- meta_test_id.txt
|           |-- support
|           |   |-- meta_train_cihp_supports.pkl
|           |   `-- meta_test_cihp_supports.pkl
|           |-- trainval_images
|           |   |-- ...
|           |-- trainval_classes
|           |   |-- ... 
|           `-- Category_rev_ids
|               |-- ... 

Finally, please download the DeepLab V3+ pretrained model (pretrained on the COCO dataset) from source and put it into the data folder:

${PROJECT ROOT}
|-- data
|   |--pretrained_model
|       |--deeplab_v3plus_v3.pth
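
To verify the download, the checkpoint can be loaded directly with PyTorch. This is a minimal sketch for inspection only; the training scripts handle the actual weight loading, and the checkpoint's internal structure is an assumption here.

# Quick check that the pretrained DeepLab V3+ checkpoint loads.
# Inspecting the keys is only illustrative; the training scripts
# handle wiring these weights into the model.
import torch

state = torch.load("data/pretrained_model/deeplab_v3plus_v3.pth",
                   map_location="cpu")
# Checkpoints are usually either a raw state dict or a wrapper dict.
state_dict = state.get("state_dict", state) if isinstance(state, dict) else state
print(len(state_dict), "entries; first keys:", list(state_dict)[:5])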

Installation:

Please make sure your current environment has Python >= 3.7.0 and PyTorch >= 1.1.0. PyTorch can be downloaded from source.

Then, clone the repository and install the dependencies from the following commands:

git clone https://github.com/Charleshhy/One-shot-Human-Parsing.git
cd One-shot-Human-Parsing
pip install -r requirements.txt
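
A quick way to confirm the environment meets the version requirements above (a minimal sketch):

# Environment sanity check for the versions required above.
import sys
import torch

assert sys.version_info >= (3, 7), f"Python >= 3.7 required, found {sys.version}"
print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())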

Training:

To train EOPNet in End-to-end One-shot Human Parsing (journal version), run:

# OSHP kway on ATR-OS fold 1
bash scripts/atr_eop_kwf1.sh

Validation:

To evaluate EOPNet in End-to-end One-shot Human Parsing (journal version), run:

# OSHP kway on ATR-OS fold 1
bash scripts/evaluate_atr_eop_kwf1.sh
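
For reference, the mIoU numbers reported in the tables above follow the standard definition: per-class intersection over union, averaged over a set of classes. The sketch below is illustrative only, not the repository's exact evaluation code; which class IDs count as novel or human is decided by the caller here.

# Minimal sketch of mean IoU over a chosen set of classes.
# Not the repository's exact evaluation code; the class split passed
# in by the caller is an assumption for illustration.
import numpy as np

def mean_iou(pred, gt, class_ids):
    """pred, gt: integer label maps of the same shape."""
    ious = []
    for c in class_ids:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0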

TODO:

  • Release training/validation code for POPNet
  • Release well-trained EOPNet 1-way models

Citation:

If you find our papers or this repository useful, please consider citing our papers:

@inproceedings{he2021progressive,
  title={Progressive One-shot Human Parsing},
  author={He, Haoyu and Zhang, Jing and Thuraisingham, Bhavani and Tao, Dacheng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}

@article{he2021end,
  title={End-to-end One-shot Human Parsing},
  author={He, Haoyu and Zhang, Jing and Zhuang, Bohan and Cai, Jianfei and Tao, Dacheng},
  journal={arXiv preprint arXiv:2105.01241},
  year={2021}
}

Acknowledgement:

This repository is mainly developed based on Graphonomy and Grapy-ML.
