A graph adversarial learning toolbox based on PyTorch and DGL.

Overview

GraphWar: Arms Race in Graph Adversarial Learning

NOTE: GraphWar is still in the early stages and the API will likely continue to change.

πŸš€ Installation

Please make sure you have installed PyTorch and Deep Graph Library (DGL).

# Coming soon
pip install -U graphwar

or

# Recommended
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where -e means "editable" mode, so you don't have to reinstall every time you make changes.

Get Started

Assume that you have a dgl.DGLGraph instance g that describes your dataset.

A simple targeted attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(1, num_budgets=3) # attack target node `1` by flipping `3` edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()

A simple untargeted attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(num_budgets=0.05) # attack the graph by perturbing 5% of the edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
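
To gauge an attack's effect, you can compare a model's accuracy on the clean graph g with its accuracy on attacked_g. Below is a minimal evaluation sketch written against plain DGL rather than GraphWar's own utilities; feat, labels, and test_mask are assumed to come from your dataset.

import torch
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCN(torch.nn.Module):
    """A two-layer GCN used here only to measure the attack's impact."""
    def __init__(self, in_feats, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)
        self.conv2 = GraphConv(hidden, num_classes)

    def forward(self, graph, feat):
        h = F.relu(self.conv1(graph, feat))
        return self.conv2(graph, h)

@torch.no_grad()
def accuracy(model, graph, feat, labels, mask):
    pred = model(graph, feat).argmax(dim=-1)
    return (pred[mask] == labels[mask]).float().mean().item()

# Tip: apply dgl.add_self_loop(g) first, since GraphConv rejects
# zero-in-degree nodes by default. After training `model` on `g`:
# accuracy(model, g, feat, labels, test_mask)           # clean accuracy
# accuracy(model, attacked_g, feat, labels, test_mask)  # accuracy under attack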

Implementations

In detail, the following methods are currently implemented:

Attack

Targeted Attack

Methods Venue
RandomAttack A simple random method that chooses edges to flip randomly.
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16
Nettack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Neural Networks for Graph Data, KDD'18
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
GFAttack Heng Chang et al. πŸ“ A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20
IGAttack Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
SGAttack Jintang Li et al. πŸ“ Adversarial Attack on Large Scale Graph, TKDE'21

Untargeted Attack

Methods Venue
RandomAttack A simple random method that chooses edges to flip randomly.
DICEAttack Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16
FGAttack Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
Metattack Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19
PGD, MinmaxAttack Kaidi Xu et al. πŸ“ Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19
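
For intuition about the PGD-style attacks above: they relax the binary edge-flip decision into continuous variables in [0, 1], run gradient ascent on the attack loss, and project back onto the perturbation budget after each step. Below is a self-contained sketch of that projection step; it illustrates the technique and is not GraphWar's implementation.

import torch

def project_onto_budget(s, budget, iters=50):
    """Project s onto {s in [0, 1]^n : s.sum() <= budget} by bisection."""
    clipped = s.clamp(0, 1)
    if clipped.sum() <= budget:
        return clipped
    # Find the shift `mu` such that clamp(s - mu, 0, 1) sums to the budget;
    # that sum is non-increasing in `mu`, so bisection converges.
    lo, hi = (s.min() - 1).item(), s.max().item()
    for _ in range(iters):
        mu = (lo + hi) / 2
        if (s - mu).clamp(0, 1).sum() > budget:
            lo = mu
        else:
            hi = mu
    return (s - mu).clamp(0, 1)

# One PGD step on the relaxed flip vector `s`:
# s = project_onto_budget(s + lr * s_grad, budget)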

Defense

Model-Level

Methods Venue
MedianGCN Liang Chen et al. πŸ“ Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21
RobustGCN Dingyuan Zhu et al. πŸ“ Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19

Data-Level

Methods Venue
JaccardPurification Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
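
To give a flavor of data-level purification: the Jaccard method above drops edges whose endpoint features are too dissimilar, since adversarial edges tend to connect unrelated nodes. A conceptual sketch over a DGL graph with binary node features (illustrative only, not GraphWar's JaccardPurification class; on DGL versions before 0.8 the relabel_nodes keyword may differ):

import dgl
import torch

def jaccard_purify(g, feat, threshold=0.0):
    """Drop edges whose endpoints have Jaccard feature similarity <= threshold."""
    src, dst = g.edges()
    fs, fd = feat[src].bool(), feat[dst].bool()
    intersection = (fs & fd).sum(1).float()
    union = (fs | fd).sum(1).float().clamp(min=1)
    keep = (intersection / union) > threshold
    kept_eids = keep.nonzero(as_tuple=True)[0]
    # Keep the original node set; only low-similarity edges are removed.
    return dgl.edge_subgraph(g, kept_eids, relabel_nodes=False)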

More details on the literature and official code can be found in Awesome Graph Adversarial Learning.

Comments
  • Benchmark Results of Attack Performance

    Hi, thanks for sharing the awesome repo with us! I recently ran the attack example scripts pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracy under both evasion and poisoning attacks barely decreases.

    I'm pretty confused by the attack results. For CV models, a PGD attack easily decreases the accuracy to nearly random guessing, but the results from GreatX do not seem consistent with that. Is it because the number of perturbed edges is too small?

    Here are the results of pgd_attack.py

    Processing...
    Done!
    Training...
    100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
    Evaluating...
    1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.59718  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.842555 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    PGD training...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200/200 [00:02<00:00, 69.74it/s]
    Bernoulli sampling...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 804.86it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.603293 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.842052 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
    Evaluating...
    1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.76604  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.826962 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    

    Here are the results of random_attack.py

    Training...
    100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
    Evaluating...
    1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.564449 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.832495 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 253it [00:00, 4588.44it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.584646 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.826459 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
    Evaluating...
    1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.695349 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.81338  β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    
    opened by ziqi-zhang 2
  • SG Attack example cannot run as expected on cuda

    Hello, I got an error when running the SGAttack example code on a CUDA device:

    Traceback (most recent call last):
      File "src/test.py", line 50, in <module>
        attacker.attack(target)
      File "/greatx/attack/targeted/sg_attack.py", line 212, in attack
        subgraph = self.get_subgraph(target, target_label, best_wrong_label)
      File "/greatx/attack/targeted/sg_attack.py", line 124, in get_subgraph
        self.label == best_wrong_label)[0].cpu().numpy()
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
    

    I found that self.label is on the CUDA device, but best_wrong_label is on the CPU. https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L123-L124

    If I remove the .cpu() at the end of line 94, everything runs fine and no error is reported.

    https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L94-L96

    I found there is a commit that adds .cpu() at the end of line 94, so I don't know whether it's a bug or intentional 🀨
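
    For reference, a generic way to guard against this kind of mismatch is to move one tensor onto the other's device before comparing. Below is a minimal sketch of that pattern (not the repository's actual patch; same_device_mask is a hypothetical helper name):

    import torch

    def same_device_mask(label, best_wrong_label):
        # Move the comparison value onto `label`'s device first, then bring
        # the resulting indices back to the CPU for use with NumPy.
        best_wrong_label = best_wrong_label.to(label.device)
        return torch.where(label == best_wrong_label)[0].cpu().numpy()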

    opened by beiyanpiki 1
  • problem with metattack

    Thanks for this wonderful repo. However, when I run the metattack example, the result is not promising. Here is my result when attacking Cora with Metattack:

    Training...
    100/100 [====================] - Total: 520.68ms - 5ms/step- loss: 0.0713 - acc: 0.996 - val_loss: 0.574 - val_acc: 0.847
    Evaluating...
    1/1 [====================] - Total: 2.01ms - 2ms/step- loss: 0.522 - acc: 0.847
    Before attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.521524 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.846579 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 253/253 [01:00<00:00, 4.17it/s]
    Evaluating...
    1/1 [====================] - Total: 2.08ms - 2ms/step- loss: 0.528 - acc: 0.844
    After evasion attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.528431 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.844064 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    32/100 [=====>..............] - ETA: 0s- loss: 0.212 - acc: 0.956 - val_loss: 0.634 - val_acc: 0.807
    100/100 [====================] - Total: 407.58ms - 4ms/step- loss: 0.0601 - acc: 0.996 - val_loss: 0.704 - val_acc: 0.787
    Evaluating...
    1/1 [====================] - Total: 1.66ms - 1ms/step- loss: 0.711 - acc: 0.819
    After poisoning attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.710625 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.818913 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

    opened by shanzhiq 1
  • Fix a venue of FeaturePropagation

    Rossi's FeaturePropagation paper was submitted to ICLR'21 but was rejected. So how about updating the venue to arXiv? The paper has not yet been accepted at another venue.

    bug documentation 
    opened by jeongwhanchoi 1
  • Add ratio for `attacker.data()`, `attacker.edge_flips()`, and `attacker.feat_flips()`

    This PR allows specifying a ratio argument for attacker.edge_flips() and attacker.feat_flips(), which determines how many of the generated perturbations are used for further evaluation/visualization. Correspondingly, attacker.data() accepts edge_ratio and feat_ratio for these two methods when constructing the perturbed graph.

    • Example
    attacker = ...
    attacker.reset()
    attacker.attack(...)
    
    # Case 1: only 50% of the generated edge perturbations are used
    trainer.evaluate(attacker.data(edge_ratio=0.5), mask=...)
    
    # Case 2: only 50% of the generated feature perturbations are used
    trainer.evaluate(attacker.data(feat_ratio=0.5), mask=...)
    
    # NOTE: both arguments can be used simultaneously
    
    enhancement 
    opened by EdisonLeeeee 0
Releases (0.1.0)
  • 0.1.0 (Jun 9, 2022)

    GraphWar 0.1.0 πŸŽ‰

    The first major release, built upon PyTorch and PyTorch Geometric (PyG).

    About GraphWar

    GraphWar is a graph adversarial learning toolbox based on PyTorch and PyTorch Geometric (PyG). It implements a wide range of adversarial attacks and defense methods for graph data. To facilitate benchmark evaluation on graphs, we also provide implementations of popular Graph Neural Networks (GNNs).

    Usages

    For more details, please refer to the documentation and examples.

    How fast can you train and evaluate your own GNN?

    Take GCN as an example:

    from graphwar.nn.models import GCN
    from graphwar.training import Trainer
    from torch_geometric.datasets import Planetoid
    dataset = Planetoid(root='.', name='Cora') # Any PyG dataset is available!
    data = dataset[0]
    model = GCN(dataset.num_features, dataset.num_classes)
    trainer = Trainer(model, device='cuda:0')
    trainer.fit({'data': data, 'mask': data.train_mask})
    trainer.evaluate({'data': data, 'mask': data.test_mask})
    

    A simple targeted manipulation attack

    from graphwar.attack.targeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(1, num_budgets=3) # attack target node `1` by flipping `3` edges
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
    
    

    A simple untargeted (non-targeted) manipulation attack

    from graphwar.attack.untargeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(num_budgets=0.05) # attack the graph by perturbing 5% of the edges
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
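
    The perturbed graph can be fed straight back into the Trainer for evaluation. A short sketch tying the snippets above together (the dict-style call mirrors the GCN example; treat the exact keywords as assumptions rather than the canonical API):

    # Accuracy on the clean graph vs. the perturbed graph produced above
    trainer.evaluate({'data': data, 'mask': data.test_mask})
    trainer.evaluate({'data': attacker.data(), 'mask': data.test_mask})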
    

    We will continue to develop this project and introduce more state-of-the-art implementations of papers in the field of graph adversarial attacks and defenses.

    Source code(tar.gz)
    Source code(zip)
    graphwar-0.1.0-py3-none-any.whl(155.84 KB)
Owner
Jintang Li
Ph.D. student @ Sun Yat-sen University (SYSU), China.