Query Embedding on Hyper-Relational Knowledge Graphs

This repository contains the code used for the experiments in the paper

Query Embedding on Hyper-Relational Knowledge Graphs.
Dimitrios Alivanistos, Max Berrendorf, Michael Cochez, and Mikhail Galkin

If you encounter any problems, or have suggestions on how to improve this code, open an issue.

Abstract: Multi-hop logical reasoning is an established problem in the field of representation learning on knowledge graphs (KGs). It subsumes both one-hop link prediction as well as other more complex types of logical queries. Existing algorithms operate only on classical, triple-based graphs, whereas modern KGs often employ a hyper-relational modeling paradigm. In this paradigm, typed edges may have several key-value pairs known as qualifiers that provide fine-grained context for facts. In queries, this context modifies the meaning of relations, and usually reduces the answer set. Hyper-relational queries are often observed in real-world KG applications, and existing approaches for approximate query answering cannot make use of qualifier pairs. In this work, we bridge this gap and extend the multi-hop reasoning problem to hyper-relational KGs allowing to tackle this new type of complex queries. Building upon recent advancements in Graph Neural Networks and query embedding techniques, we study how to embed and answer hyper-relational conjunctive queries. Besides that, we propose a method to answer such queries and demonstrate in our experiments that qualifiers improve query answering on a diverse set of query patterns.

Requirements

We developed our repository using Python 3.8.5. Other versions may also work.

First, please ensure that you have properly installed the required libraries in your environment. Running experiments is possible on both CPU and GPU; on a GPU, training should go noticeably faster. If you are using a GPU, please make sure that the installed versions match your CUDA version.
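
If you plan to train on GPU and your setup is based on PyTorch (a reasonable assumption here, since the saved models below use the .pt format), a quick sanity check of the CUDA setup is:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"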

We recommend the use of virtual environments, be it virtualenv or conda.
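
For example, a minimal setup with the built-in venv module (conda works analogously; this assumes a python3.8 interpreter is on your PATH) looks like:

python3.8 -m venv .venv
source .venv/bin/activate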

Now, clone the repository and install the remaining dependencies using pip. After moving to the root of the repo (and with your virtual environment activated), type:

pip install .

If you want to change the code, we suggest using the editable mode of the pip installation:

pip install -e .

To log results, we suggest using wandb. Instructions on installation and setup can be found here: https://docs.wandb.ai/quickstart
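
In short, and deferring to the quickstart linked above for details, the setup usually amounts to:

pip install wandb
wandb login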

Running tests (optional)

You can run the tests by installing the test dependencies

pip install -e '.[test]'

and then executing them

pytest

Both commands are executed from the root of the project.

It is normal to see some skipped tests.

Running experiments

The easiest way to start experiments is via the command line interface, which also provides more information on the options available for each command. You can show the help by typing

hqe --help

into a terminal within your active Python environment. Some IDEs, e.g. PyCharm, require you to start from a file if you want to use the debugger. To this end, we also provide a thin wrapper in executables, which you can start with

python executables/main.py

Downloading the data

To run experiments, we offer the preprocessed queries for download; they can be fetched with the following command. It is also possible to run the preprocessing steps yourself, cf. the data preprocessing README.

hqe preprocess skip-and-download-binary

Training a model

Many options are available for model training. For an overview, run

hqe train --help

Some examples:


Train with default settings, using at most 10000 reified 1-hop queries with a qualifier, and use 5000 reified triples from the validation set. Details on how to specify the number of samples can be found in the Sample class in src/mphrqe/data/loader. Note that the data loading takes care of only using data from the correct data split.

hqe train \
    -tr /1hop/1qual:atmost10000:reify \
    -va /1hop/1qual:5000:reify

Train with the same data, but with custom parameters for the model. The example below uses target pooling to obtain the embedding of the query graph, a dropout of 0.5 in the layers, and cosine similarity instead of the dot product when ranking answers to the query; it also enables wandb for logging the metrics. Finally, the trained model is stored as a file training-example-model.pt, which can then be used in the evaluation.

hqe train \
    -tr /1hop/1qual:atmost10000:reify \
    -va /1hop/1qual:5000:reify \
    --graph-pooling TargetPooling \
    --dropout 0.5 \
    --similarity CosineSimilarity \
    --use-wandb --wandb-name "training-example" \
    --save \
    --model-path "training-example-model.pt"

By default, the model path is interpreted relative to the current working directory; providing an absolute path stores the model in a different directory.
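
For example, to store the checkpoint outside the working directory (the target directory below is purely illustrative):

hqe train \
    -tr /1hop/1qual:atmost10000:reify \
    -va /1hop/1qual:5000:reify \
    --save \
    --model-path "/data/starqe-models/training-example-model.pt"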

Performing hyperparameter optimization

To find optimal parameters for a dataset, one can run a hyperparameter optimization. Under the hood, this uses the optuna framework.

All options for the hyperparameter optimization can be seen with

hqe optimize --help

Some examples:


Run a hyperparameter optimization. This results in a set of runs with different hyperparameters, from which the user can pick the best.

hqe optimize \
    -tr "/1hop/1qual-per-triple:*" \
    -tr "/2i/1qual-per-triple:atmost40000" \
    -tr "/2hop/1qual-per-triple:40000" \
    -tr "/3hop/1qual-per-triple:40000" \
    -tr "/3i/1qual-per-triple:40000" \
    -va "/1hop/1qual-per-triple:atmost3500" \
    -va "/2i/1qual-per-triple:atmost3500" \
    -va "/2hop/1qual-per-triple:atmost3500" \
    -va "/3hop/1qual-per-triple:atmost3500" \
    -va "/3i/1qual-per-triple:atmost3500" \
    --use-wandb \
    --wandb-name "hpo-query2box-style"

Evaluating model performance

To evaluate a model's performance on the test set, we provide an example below:

hqe evaluate \
    --test-data "/1hop/1qual:5000:reify" \
    --use-wandb \
    --wandb-name "test-example" \
    --model-path "training-example-model.pt"

Citation

If you find this work useful, please consider citing

@misc{alivanistos2021query,
      title={Query Embedding on Hyper-relational Knowledge Graphs}, 
      author={Dimitrios Alivanistos and Max Berrendorf and Michael Cochez and Mikhail Galkin},
      year={2021},
      eprint={2106.08166},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}