This is the official implementation of TrivialAugment and a mini-library for applying multiple image augmentation strategies, including RandAugment and TrivialAugment.

Overview

Trivial Augment

This is the official implementation of TrivialAugment (https://arxiv.org/abs/2103.10158), as used for the paper. TrivialAugment is a very simple, yet state-of-the-art performing, augmentation algorithm.

We distribute this implementation with two main use cases in mind: either you only use our (re-)implementations of practical augmentation methods, or you start off with our full codebase.

Use TrivialAugment and Other Methods in Your Own Codebase

In this case we recommend simply copying the file aug_lib.py into your codebase. You can then instantiate the augmenters TrivialAugment, RandAugment and UniAugment like this:

augmenter = aug_lib.TrivialAugment()

And simply use them on a PIL image img:

aug_img = augmenter(img)

This format also happens to be compatible with torchvision.transforms. If you do not have Pillow or numpy installed, install them with pip install Pillow numpy. Generally, a good place to apply the augmenter is right after you get an image out of the dataset, before any custom augmentations are applied.
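
For illustration, here is a minimal sketch of how the augmenter could be placed in a torchvision pipeline; the choice of CIFAR-10 and the data root are placeholders, not prescribed by this repository:

import aug_lib
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10

# TrivialAugment operates on PIL images, so it goes before ToTensor
transform = transforms.Compose([
    aug_lib.TrivialAugment(),
    transforms.ToTensor(),
])
dataset = CIFAR10(root='data', train=True, download=True, transform=transform)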

The default augmentation space is fixed_standard, that is, without AutoAugment's posterization bug and using the set of augmentations from RandAugment. This is the search space we used for all experiments that do not mention another augmentation space. You can change the augmentation space, though, with aug_lib.set_augmentation_space. This call, for example,

aug_lib.set_augmentation_space('fixed_custom',2,['cutout'])

will change the augmentation space to only ever apply cutout with a large width, or nothing at all. The 2 indicates into how many strength levels the strength ranges of the augmentation space are divided. If an augmentation space includes sample_pairing, you need to specify a set of images with which to pair before each step: aug_lib.blend_images = [LIST OF PIL IMAGES].
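
As a sketch, restricting the space and preparing for a space that contains sample_pairing could look as follows; the image file names are placeholders:

import aug_lib
from PIL import Image

# only ever apply cutout (or nothing), with 2 strength levels
aug_lib.set_augmentation_space('fixed_custom', 2, ['cutout'])

# if the chosen space contained sample_pairing, a pool of PIL images to blend with
# would have to be set before each step, for example:
# aug_lib.blend_images = [Image.open(p) for p in ['img_0.png', 'img_1.png']]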

Our recommendation is to use the default fixed_standard search space for very cheap setups, like Wide-ResNet-40-2, and to use wide_standard for all other setups by calling aug_lib.set_augmentation_space('wide_standard',31) before the start of training.

Use Our Full Codebase

Clone this repository and cd into it.

git clone https://github.com/automl/trivialaugment.git
cd trivialaugment

Install a PyTorch version with GPU support that fits your setup, as our implementation only supports setups with at least one CUDA device, and install our requirements:

pip install -r requirements.txt
# Install a PyTorch version; in many setups this has to be done manually, see pytorch.org

Now you should be ready to go. Start a training like so:

python -m TrivialAugment.train -c confs/wresnet40x2_cifar100_b128_maxlr.1_ta_fixedsesp_nowarmup_200epochs.yaml --dataroot data --tag EXPERIMENT_NAME

For concrete configs of experiments from the paper, see the comments in the paper's LaTeX code around the number you want to reproduce. For logs and metrics, use TensorBoard with the logs directory, or use our aggregate_results.py script to view data from the TensorBoard logs on the command line.
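
For example, assuming the logs are written to a logs directory (adjust the path to your run), TensorBoard can be started like this:

tensorboard --logdir logs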

Confidence Intervals

Since confidence intervals are rarely reported in the current literature, we share our implementation in evaluation_tools.py.
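
As an illustration of the idea only (a generic sketch, not the code in evaluation_tools.py), a 95% confidence interval for the mean accuracy over several seeds can be computed like this:

import numpy as np
from scipy import stats

accuracies = np.array([76.2, 76.5, 75.9, 76.8, 76.1])  # placeholder accuracies from 5 seeds
mean = accuracies.mean()
half_width = stats.sem(accuracies) * stats.t.ppf(0.975, df=len(accuracies) - 1)
print(f'{mean:.2f} +/- {half_width:.2f} (95% confidence interval)')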

This repository uses code from https://github.com/ildoonet/pytorch-randaugment and from https://github.com/tensorflow/models/tree/master/research/autoaugment.
