ZEBRA: Zero Evidence Biometric Recognition Assessment

Overview

ZEBRA: Zero Evidence Biometric Recognition Assessment

license: LGPLv3 - please reference our paper
version: 2020-06-11
author: Andreas Nautsch (EURECOM)

Disclaimer - this toolkit is a standalone implementation of our paper

Nautsch, Patino, Tomashenko, Yamagishi, Noe, Bonastre, Todisco and Evans:
"The Privacy ZEBRA: Zero Evidence Biometric Recognition Assessment"
in Proc. Interspeech 2020

[Online] pre-print

This work is academic (not-for-profit);
it is a reference implementation without warranty.

What is the ZEBRA framework?

How can we assess privacy preservation in the processing of human signals, such as speech data?

Mounting privacy legislation calls for the preservation of privacy in speech technology, though solutions are gravely lacking. While evaluation campaigns are long-proven tools to drive progress, the need to consider a privacy adversary implies that traditional approaches to evaluation must be adapted to the assessment of privacy and privacy preservation solutions. This paper presents the first step in this direction: metrics.

We propose the ZEBRA framework which is inspired by forensic science.

In contrast to method validation in modern cryptography, which is underpinned by zero-knowledge proofs (see Shannon), we need to tackle zero evidence. The former defines input data (e.g., an 'A' is represented by the number 65); the latter can only model input data (we can only describe, e.g., acoustic data, biometric identities, and semantic meaning).

Communication is more than the written word; we need to leave the rigid perspective of the written word behind when the medium changes to speech and to other human signals (e.g., video surveillance).

Privacy preservation for human data is not binary

Only levels of privacy preservation can be quantified (theoretic proofs for a yes/no decision are unavailable).

The ZEBRA framework compares candidate privacy safeguards in an after-the-fact evaluation:

  • candidate algorithms protect human signals (e.g., speech) regarding the disclosure of specific sensitive information (e.g., the biometric identity);
  • knowing the facts of how much sensitive information could be exposed, how much is exposed after using each candidate safeguard?

Privacy & the realm of the adversary

The conventional signal processing or machine learning perspective of a system evaluator does not suffice anymore!

Adversaries are the true evaluators of privacy safeguards.

We need to shift our perspective.

To optimize algorithms and their parameters, we are used to minimizing some average/expected performance loss.

  1. Expectation values reflect the population level; yet privacy is a fundamental human right that holds for each individual: how badly is information disclosed for those who form a minority in the eyes of a candidate privacy safeguard?
  2. An adversary can only infer information based on observations; figuratively speaking, like a judge/jury assesses evidence.
  3. By formalizing decision inference based on evidence, the strength of evidence is estimated; it reflects to which extent one of two decisions should be favored over the other, given the circumstances of a case - an individual performance.
  4. "Given the circumstances" is formalized by the prior belief; just as forensic practitioners cannot know the prior belief of a judge/jury, we cannot know the prior belief of an adversary.
  5. Empirical Cross-Entropy (ECE) plots were introduced in forensic voice biometrics to simulate ECE for all possible prior beliefs, so that one can report an expected gain in relative information - an average/expected performance (a minimal sketch of ECE at a single prior follows after this list).
  6. Categorical tags were introduced in forensic science and have been refined continuously since the 1960s; they summarize different levels of strength of evidence into a scale that is easier for the human mind to digest.
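Item 5 can be made concrete with a minimal sketch of ECE at a single prior belief, following the standard definition (Ramos et al.); function and variable names are illustrative, not part of the toolkit:

from numpy import exp, log2, mean

def ece_at_prior(llrs_same, llrs_diff, prior_log_odds):
    # empirical cross-entropy at one prior belief; LLRs in natural log, result in bits
    p = 1.0 / (1.0 + exp(-prior_log_odds))  # prior probability of the 'same-source' hypothesis
    ece_same = mean(log2(1 + exp(-(llrs_same + prior_log_odds))))  # cost on same-source trials
    ece_diff = mean(log2(1 + exp(llrs_diff + prior_log_odds)))     # cost on different-source trials
    return p * ece_same + (1 - p) * ece_diff

An ECE plot sweeps prior_log_odds over a range of prior beliefs; the ZEBRA population metric then integrates over all of them.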

ZEBRA, a zero-evidence framework to assess privacy preservation on empirical data

The proposed ZEBRA framework has two metrics:

  • on the population level, the expected ECE is quantified by integrating out all possible prior beliefs; the result is the expected empirical cross-entropy [in bits], which is 0 bit for full privacy and 1/log(4) ≈ 0.721 bit for no privacy;
  • on the individual level, the worst-case strength of evidence is quantified. In forensic science, the strength of evidence is reported as a so-called log-likelihood ratio (LLR), which symmetrically encodes the relative strength of evidence for one possible decision outcome over the other; an LLR of 0 means zero strength of evidence for either decision outcome, whereas values towards infinity resemble 'infinitely decisive' evidence (no privacy). The worst-case strength of evidence is the maximum(absolute(LLR)); a numeric sketch follows after this list.
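For illustration, a small numeric sketch of the two bounds and of the individual metric; the LLR values below are invented for the example:

from numpy import abs, array, log

no_privacy_bound = 1 / log(4)              # ≈ 0.721 bit: worst case of the population metric
full_privacy = 0.0                         # 0 bit: the adversary learns nothing

llrs = array([-0.3, 1.2, -2.5])            # hypothetical natural-log LLRs of an adversary
max_abs_llr = abs(llrs).max()              # worst-case strength of evidence: 2.5
max_abs_llr_base10 = max_abs_llr / log(10) # ≈ 1.09 in base 10 (used for tags and display)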

A categorical tag summarizes the maximum(absolute(LLR)) value; an example adopted from the literature:

Tag   Description (for a 50:50 prior belief)
0     50:50 decision making of the adversary
A     adversary makes better decisions than 50:50
B     adversary makes 1 wrong decision out of 10 to 100
C     adversary makes 1 wrong decision out of 100 to 1,000
D     adversary makes 1 wrong decision out of 1,000 to 100,000
E     adversary makes 1 wrong decision out of 100,000 to 1,000,000
F     adversary makes 1 wrong decision in at least 1,000,000

The better an adversary can make decisions despite a candidate privacy safeguard being applied, the worse the categorical tag.
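For illustration, a minimal tag lookup (the function name is hypothetical) that mirrors the base-10 LLR intervals of zebra.py (see "Python: on changing categorical tags" below); note that the code's intervals differ slightly from the example table above:

def tag_for(max_abs_llr_base10, eps=1e-12):
    # intervals as in zebra.py's categorical_tags dict (base-10 LLRs)
    scale = [('0', 0, eps), ('A', eps, 1), ('B', 1, 2), ('C', 2, 4),
             ('D', 4, 5), ('E', 5, 6), ('F', 6, float('inf'))]
    for tag, lower, upper in scale:
        if lower <= max_abs_llr_base10 < upper:
            return tag

tag_for(3.979)  # → 'C', matching the example result further below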

Scope of this ZEBRA reference implementation

  1. Computation and visualization of the ZEBRA framework.

    • Metrics in ZEBRA profile: (population, individual, tag)
    • ZEBRA profile in ECE plots
      full privacy: black profile
      no privacy: y = 0 (profiles equal to the x-axis)

    For display only, LLRs are in base 10.

  2. Saving to: LaTeX, PDF and PNG formats.

  3. Automatic assessment of the 2020 VoicePrivacy Challenge
    ReadMe: use ZEBRA for kaldi experiments

  4. Computation and visualizations of conventional metrics:
    ReadMe: conventional plots & metrics

    • ECE plots (Ramos et al.)
      metrics: ECE & min ECE
    • APE plots (Brümmer et al.)
      metrics: DCF & min DCF
    • Computation only
      metrics: Cllr, min Cllr & ROCCH-EER (a minimal Cllr sketch follows after this list)
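For the computation-only metrics, a minimal sketch of the standard Cllr definition (after Brümmer) may be helpful; the function name and inputs are illustrative, not the toolkit's API:

from numpy import exp, log2, mean

def cllr(llrs_target, llrs_nontarget):
    # application-independent cost of natural-log LLRs, in bits
    cost_target = mean(log2(1 + exp(-llrs_target)))       # miss-side cost
    cost_nontarget = mean(log2(1 + exp(llrs_nontarget)))  # false-alarm-side cost
    return 0.5 * (cost_target + cost_nontarget)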

Installation

The installation uses Miniconda, which creates Python environments inside a folder structure on your hard drive.

Uninstallation is easy: delete the miniconda folder.

  1. install miniconda, see:
    https://docs.conda.io/projects/conda/en/latest/user-guide/install/#regular-installation
  2. create a Python environment

    conda create python=3.7 --name zebra -y

  3. activate the environment

    conda activate zebra

  4. install the required packages (an optional import check follows below)

    conda install -y numpy pandas matplotlib seaborn tabulate
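
    To sanity-check the environment (optional; not part of the original steps), try importing the packages:

    python -c "import numpy, pandas, matplotlib, seaborn, tabulate; print('ok')"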

HowTo: use

A quick reference guide to using Python, the command line, and customization.

Command line: metric computation

Computing the metrics (command structure):

python zero_evidence.py -s [SCORE_FILE] -k [KEY_FILE]

An example is provided with scores.txt and key.txt as score and key files; below, score and key files from the VoicePrivacy 2020 baseline are used:

scr=exp/Baseline/primary/results-2020-05-10-14-29-38/ASV-libri_test_enrolls-libri_test_trials_f/scores
key=keys-voiceprivacy-2020/libri_test_trials_f

python zero_evidence.py -s $scr -k $key

Result:

ZEBRA profile
Population: 0.584 bit
Individual: 3.979 (C)
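
For an informal reading of this result (not produced by the toolkit): the population value can be compared against the no-privacy bound of 1/log(4) ≈ 0.721 bit.

from math import log

no_privacy = 1 / log(4)    # ≈ 0.721 bit
print(0.584 / no_privacy)  # ≈ 0.81: the expected ECE sits at about 81% of the no-privacy bound

The individual value 3.979 falls into the base-10 LLR interval [2, 4], hence tag C.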

Command line: visualization

Display each plot with the -p option:

python zero_evidence.py -s $scr -k $key -p

Command line: customization

  1. Custom label for an experiment: -l option:

    python zero_evidence.py -s $scr -k $key -l "libri speech, primary baseline"
    

    libri speech, primary baseline
    Population: 0.584 bit
    Individual: 3.979 (C)

  2. Save the profile visualization (without displaying it): -e png

    python zero_evidence.py -s $scr -k $key -l "profile" -e png
    

    -l profile results in the file name: ZEBRA-profile
    note: "ZEBRA-" is automatically prefixed to exported plot file names

    Supported file types:

    • -e tex: LaTeX
    • -e pdf: PDF
    • -e png: PNG
  3. To save a plot and display it as well, use both options: -p -e png

Python: high-level implementation

Calling the API provided by zebra.py

from zebra import PriorLogOddsPlots, zebra_framework, export_zebra_framework_plots

# initialize the ZEBRA framework 
zebra_plot = PriorLogOddsPlots()  

# declare score & key paths
scr = 'exp/Baseline/primary/results-2020-05-10-14-29-38/ASV-libri_test_enrolls-libri_test_trials_f/scores'
key = 'keys-voiceprivacy-2020/libri_test_trials_f'

# run the framework
zebra_framework(plo_plot=zebra_plot, scr_path=scr, key_path=key)

# saving the ZEBRA plot
export_zebra_framework_plots(plo_plot=zebra_plot, filename='my-experiment', save_plot_ext='png')

Python: low-level implementation

Code snippets from zebra.py

Let's assume classA_scores & classB_scores are numpy arrays of scores.

from numpy import log, abs, hstack, argwhere
from zebra import PriorLogOddsPlots, categorical_tags, cat_ranges  # the tag scale is defined in zebra.py

zebra_plot = PriorLogOddsPlots(classA_scores, classB_scores)

# population metric
dece = zebra_plot.get_delta_ECE()

# individual metric
max_abs_LLR = abs(hstack((zebra_plot.classA_llr_laplace, zebra_plot.classB_llr_laplace))).max()

# categorical tag (natural-log LLR converted to base 10)
max_abs_LLR_base10 = max_abs_LLR / log(10)
cat_idx = argwhere((cat_ranges < max_abs_LLR_base10).sum(1) == 1).squeeze()
cat_tag = list(categorical_tags.keys())[cat_idx]

# nicely formatted string representations
str_dece = ('%.3f' if dece >= 5e-4 else '%.e') % dece
str_max_abs_llr = ('%.3f' if max_abs_LLR >= 5e-4 else '%.e') % max_abs_LLR

if dece == 0:
    str_dece = '0'

if max_abs_LLR == 0:
    str_max_abs_llr = '0'

To get the privacy-related version based on DCF plots, simply run: zebra_plot.get_delta_DCF().

Python: on changing categorical tags

  1. Make a copy of zebra.py
  2. Edit the following part to your liking

    The arrays, such as array([0, eps]), contain lower and upper bounds; an epsilon value is used for numerical convenience only.
    The limits are base-10 LLR intervals; an illustrative alternative scale follows after the snippet.

    # Here are our categorical tags, inspired by the ENFSI scale on the strength of evidence
    # Please feel free to try out your own scale as well :)
    # dict: { TAG : [min max] value of base10 LLRs }
    categorical_tags = {
        '0': array([0, eps]),
        'A': array([eps, 1]),
        'B': array([1, 2]),
        'C': array([2, 4]),
        'D': array([4, 5]),
        'E': array([5, 6]),
        'F': array([6, inf])
    }
    
    # pre-computation for easier later use
    cat_ranges = vstack(list(categorical_tags.values()))
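
    As a hedged illustration of such an edit (tag names and intervals are invented for this example), a coarser three-level scale could look like:

    from numpy import array, inf, vstack

    # hypothetical coarser scale over base-10 LLRs
    categorical_tags = {
        'low':    array([0, 1]),
        'medium': array([1, 4]),
        'high':   array([4, inf]),
    }

    # pre-computation for easier later use
    cat_ranges = vstack(list(categorical_tags.values()))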
    

Documentation

This package builds on existing related software; in performance.py, one can find derived and adjusted code snippets.

For legacy compatibility, the code is structured to also provide DCF and ECE visualizations.

Naturally, Cllr, min Cllr and the ROCCH-EER can also be computed with the ZEBRA toolkit. For optimization, please see pyBOSARIS; to optimize for ZEBRA, we recommend optimizing Cllr (see Niko Brümmer's dissertation or the BOSARIS toolkit user guide regarding convexity).

This toolkit is organized as follows:

  • demo_conventional_plots.py
    Creation of conventional ECE & APE plots
  • demo_voiceprivacy_challenge.py
    Automatic ZEBRA evaluation of an entire challenge
  • demo_zebra.py
    Example on creating a ZEBRA plot and exporting to tex, pdf, png
  • helpers.py
    Helpers to read score files from the kaldi folder structure
  • performance.py
    Library of integrated performance functions, see related software
  • plo_plots.py
    Implementation of ECE & APE plots in one class: PriorLogOddsPlots; with plot export functionality
  • zebra.py
    Wrapper functions to interact with PriorLogOddsPlots in ZEBRA style
  • zero_evidence.py
    Command line script for ZEBRA framework

Acknowledgements

This work is partly funded by the projects: ANR-JST VoicePersonae, ANR Harpocrates and ANR-DFG RESPECT.
