A tight inclusion function for continuous collision detection

Overview

Tight-Inclusion Continuous Collision Detection

A conservative Continuous Collision Detection (CCD) method with support for minimum separation.

You can read more about this work in our ACM Transactions on Graphics paper:

"A Large Scale Benchmark and an Inclusion-Based Algorithm forContinuous Collision Detection"

@article{Wang:2021:Benchmark,
    title   = {A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection},
    author  = {Bolun Wang and Zachary Ferguson and Teseo Schneider and Xin Jiang and Marco Attene and Daniele Panozzo},
    year    = 2021,
    journal = {ACM Transactions on Graphics}
}

Compiling Instructions

To compile the code, first make sure CMake is installed.

To build the library on Linux or macOS:

mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
make

Then you can run a CCD example:

./Tight_Inclusion_bin 

We also provide an example that runs the Sample Queries using our CCD method. You may need to install GMP before compiling the code. Then set the CMake option TIGHT_INCLUSION_WITH_TESTS to ON when configuring:

cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_TESTS=ON
make

Then you can run ./Tight_Inclusion_bin to test the handcrafted queries in the Sample Queries.

Usage

Include the header: #include <tight_inclusion/inclusion_ccd.hpp>

To check edge-edge CCD, use bool inclusion_ccd::edgeEdgeCCD_double();

To check vertex-face CCD, use bool inclusion_ccd::vertexFaceCCD_double();

💡 If a collision is detected, the CCD function returns true; otherwise it returns false. Since our method is CONSERVATIVE, whenever the result is false we guarantee that no collision happens. If the result is true, it is possible that there is no collision and we falsely report one, but we guarantee that this happens only if the minimal distance between the two primitives within this time step is no larger than tolerance + ms + err. We explain these parameters below.

For both edge-edge CCD and vertex-face CCD, the input query is given by 8 vertices of type Eigen::Vector3d. Please read our code in tight_inclusion/inclusion_ccd.hpp for the correct input order of the vertices.

Besides the input vertices, there are several input and output parameters for users to tune the performance or to get more CCD information. Here is a list explaining these parameters:

input:
    err                 The numerical filters for the x, y, and z coordinates. They measure the error introduced by floating-point arithmetic when evaluating the inclusion functions.
    ms                  Minimum separation distance, no less than 0. It guarantees that a collision will be reported if the distance between the two primitives is less than ms.
    tolerance           User-specified solving precision. It is the target maximal x, y, and z length of the inclusion function. We suggest setting it to 1e-6.
    t_max               The time range [0, t_max] in which we detect collisions. Since the input query describes the motion over the time range [0, 1], t_max should be no larger than 1.
    max_itr             The parameter that enables early termination of the algorithm. Setting max_itr < 0 disables early termination, but this may lead to longer running times. We suggest setting max_itr = 1e6.
    CCD_TYPE            The parameter that selects the collision detection algorithm. By default CCD_TYPE = 1. Setting CCD_TYPE = 0 switches to a naive conservative CCD algorithm that lacks our advanced features.

output:
    toi                 Time of impact. If multiple collisions happen within the time step, the earliest collision time is returned. If there is no collision, the returned toi value is std::numeric_limits<double>::infinity().
    output_tolerance    The actual solving precision. If early termination is enabled, the solving precision may not reach the target; this parameter returns the precision actually reached when the code terminates.
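
Below is a minimal sketch of a vertex-face query using these parameters. It is an illustration, not the authoritative API: the variable names are ours, and the argument order (the eight vertices at t = 0 and t = 1, followed by err, ms, toi, tolerance, t_max, max_itr, output_tolerance, CCD_TYPE) follows the call shown in the issue discussion further down, so please verify it against tight_inclusion/inclusion_ccd.hpp.

#include <array>
#include <iostream>
#include <Eigen/Core>
#include <tight_inclusion/inclusion_ccd.hpp>

int main()
{
    // Vertex p moves downward through a static triangle (f0, f1, f2).
    // The eight inputs are the vertex and triangle corners at t = 0 and t = 1.
    const Eigen::Vector3d p_start(0, 0.5, 0), p_end(0, -0.5, 0);
    const Eigen::Vector3d f0(-1, 0, -1), f1(1, 0, -1), f2(0, 0, 1);

    std::array<double, 3> err = {{-1, -1, -1}}; // compute the numerical filters inside this single query
    const double ms = 0;            // minimum separation distance
    double toi;                     // output: earliest time of impact
    const double tolerance = 1e-6;  // target solving precision
    const double t_max = 1;         // search the whole time step [0, 1]
    const double max_itr = 1e6;     // enable early termination
    double output_tolerance;        // output: precision actually reached
    const int ccd_type = 1;         // Tight-Inclusion algorithm

    const bool hit = inclusion_ccd::vertexFaceCCD_double(
        p_start, f0, f1, f2,        // positions at t = 0
        p_end, f0, f1, f2,          // positions at t = 1
        err, ms, toi, tolerance, t_max, max_itr, output_tolerance, ccd_type);

    if (hit)
        std::cout << "possible collision, toi = " << toi << std::endl;
    else
        std::cout << "no collision in [0, t_max]" << std::endl;
    return 0;
}

An edge-edge query works the same way with inclusion_ccd::edgeEdgeCCD_double() and the eight edge endpoints.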

Tips

💡 The input parameter err is crucial to keeping our algorithm conservative and unaffected by floating-point rounding errors. To run a single query, you can set err = {{-1, -1, -1}} to enable a sub-function that calculates the actual numerical filters while solving the CCD. If you are integrating our CCD into a simulator, you need to:

  • Include the header: #include <tight_inclusion/interval_root_finder.hpp>.
  • Call std::array<double, 3> err_vf = inclusion_ccd::get_numerical_error() with check_vf set to true and std::array<double, 3> err_ee = inclusion_ccd::get_numerical_error() with check_vf set to false (see the sketch after the parameter list below).
  • Pass err_ee each time you call bool inclusion_ccd::edgeEdgeCCD_double() and err_vf each time you call bool inclusion_ccd::vertexFaceCCD_double().

The parameters of the function inclusion_ccd::get_numerical_error() are explained below:

input:
    vertices            Corners of the axis-aligned bounding box (AABB) of the simulation scene. Before running the simulation, conservatively estimate the AABB in which the meshes will stay during the whole simulation; the vertices are the corners of this AABB.
    check_vf            A boolean flag indicating whether you are checking vertex-face (true) or edge-edge (false) CCD.
    using_minimum_separation    A boolean flag. If you are using minimum-separation CCD (the input parameter ms > 0), set it to true.
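
The sketch below shows one way to compute err_vf and err_ee once for a whole simulation, as suggested in the tips above. The helper name compute_scene_filters is ours, and the container type for the AABB corners (std::vector<Eigen::Vector3d>) as well as the exact argument list of get_numerical_error() are assumptions inferred from the parameter list above; check tight_inclusion/interval_root_finder.hpp for the actual declaration.

#include <array>
#include <vector>
#include <Eigen/Core>
#include <tight_inclusion/interval_root_finder.hpp>

// Build per-axis numerical filters for a scene known to stay inside the
// axis-aligned box [lo, hi] for the whole simulation. The corner container
// type and argument list are assumed; see the header for the real signature.
void compute_scene_filters(
    const Eigen::Vector3d& lo, const Eigen::Vector3d& hi, bool using_minimum_separation,
    std::array<double, 3>& err_vf, std::array<double, 3>& err_ee)
{
    std::vector<Eigen::Vector3d> corners;
    for (int i = 0; i < 8; ++i) // the 8 corners of the scene AABB
        corners.emplace_back(
            (i & 1) ? hi.x() : lo.x(),
            (i & 2) ? hi.y() : lo.y(),
            (i & 4) ? hi.z() : lo.z());

    err_vf = inclusion_ccd::get_numerical_error(corners, /*check_vf=*/true,  using_minimum_separation);
    err_ee = inclusion_ccd::get_numerical_error(corners, /*check_vf=*/false, using_minimum_separation);
}

Pass err_vf to every vertexFaceCCD_double() call and err_ee to every edgeEdgeCCD_double() call during the simulation.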

💡 For simulators that use a non-zero minimum separation distance (ms > 0) to keep every time step intersection-free, we provide the CMake option TIGHT_INCLUSION_WITH_NO_ZERO_TOI to prevent the returned collision time toi from being 0. Set TIGHT_INCLUSION_WITH_NO_ZERO_TOI to ON when compiling: cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_NO_ZERO_TOI=ON. Then, when you use the CCD functions, the code continues refining at higher precision whenever the output toi is 0 under the given tolerance, so the final toi will not be 0.

For a better understanding of our Tight-Inclusion CCD algorithm, or for more details, please refer to our paper.

Comments
  • Improved No Zero ToI

    • I changed the options for avoiding a zero time of impact by making the option a parameter to the CCD functions.
    • I also changed from a recursive algorithm to an iterative one with a maximum number of iterations.
    opened by zfergus 2
  • a case returned 0 TOI

    For the PT CCD case below, TICCD returned 0 TOI.

    3.5000000000e-01 0.0000000000e+00 3.5000000000e-01
    3.4604643638e-01 2.7847887896e-03 3.4658121160e-01
    3.5000819263e-01 -1.5763377304e-06 3.5001214242e-01
    3.4903882032e-01 -1.5866575093e-03 3.4560164356e-01
    0.0000000000e+00 0.0000000000e+00 0.0000000000e+00
    1.0120300789e-07 -1.4009994248e-04 -8.5653091896e-05
    -9.2976239634e-07 1.4361165221e-06 5.5168830145e-07
    1.1649271683e-04 -3.6172565398e-05 -4.7128237917e-05
    

    The format is

    node_x node_y node_z
    triangle_node0_x triangle_node0_y triangle_node0_z
    triangle_node1_x triangle_node1_y triangle_node1_z
    triangle_node2_x triangle_node2_y triangle_node2_z
    node_displacement_x node_displacement_y node_displacement_z
    triangle_node0_displacement_x triangle_node0_displacement_y triangle_node0_displacement_z
    triangle_node1_displacement_x triangle_node1_displacement_y triangle_node1_displacement_z
    triangle_node2_displacement_x triangle_node2_displacement_y triangle_node2_displacement_z
    

    I used the TICCD from the CCDWrapper repo (latest commit), and I called it like this:

    template <class T>
    bool Point_Triangle_CCD_TI(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T eta, T thickness, T& toc)
    {
        T toc_prev = toc;
        T output_tolerance;
    
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
            p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
            eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness) + thickness,
            toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
        {
            if (toc < 1.0e-6) {
                std::cout << "PT CCD tiny!" << std::endl;
                if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
                    p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
                    thickness, toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
                {
                    toc *= (1.0 - eta);
                    return true;
                }
                else {
                    return false;
                }
            }
            return true;
        }
        return false;
    }
    
    printf("TICCD:\n");
    largestAlpha = 1;
    tstart = clock();
    if (Point_Triangle_CCD_TI(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], 0.1, 0.0, largestAlpha)) {
        printf("%le\n", largestAlpha);
    }
    else {
        printf("no collision\n");
    }
    printf("%.1lems\n", double(clock() - tstart) / CLOCKS_PER_SEC * 1000);
    

    The output is

    TICCD:
    PT CCD tiny!
    0.000000e+00
    1.9e+00ms
    

    However I implemented a simpler version of TICCD according to the paper as

    template <class T>
    void Point_Triangle_Distance_Vector_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T t, T lambda, T beta,
        Eigen::Matrix<T, 3, 1>& distVec)
    {
        const Eigen::Matrix<T, 3, 1> tp = (1 - lambda - beta) * t0 + lambda * t1 + beta * t2;
        const Eigen::Matrix<T, 3, 1> dtp = (1 - lambda - beta) * dt0 + lambda * dt1 + beta * dt2;
        distVec = p + t * dp - (tp + t * dtp);
    }
    
    template <class T>
    bool Point_Triangle_CheckInterval_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        const std::array<T, 6>& interval,
        T gap)
    {
        Eigen::Matrix<T, 3, 1> distVecMax, distVecMin;
        distVecMax.setConstant(-2 * gap - 1);
        distVecMin.setConstant(2 * gap + 1);
        for (int t = 0; t < 2; ++t) {
            for (int lambda = 0; lambda < 2; ++lambda) {
                for (int beta = 0; beta < 2; ++beta) {
                    if (lambda == 1 && beta == 1) {
                        continue;
                    }
                    Eigen::Matrix<T, 3, 1> distVec;
                    Point_Triangle_Distance_Vector_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, 
                        interval[t], interval[2 + lambda], interval[4 + beta], distVec);
                    distVecMax = distVecMax.array().max(distVec.array());
                    distVecMin = distVecMin.array().min(distVec.array());
                }
            }
        }
        return (distVecMax.array() >= -gap).all() && (distVecMin.array() <= gap).all();
    }
    template <class T>
    bool Point_Triangle_TICCD(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        Eigen::Matrix<T, 3, 1> dp, 
        Eigen::Matrix<T, 3, 1> dt0, 
        Eigen::Matrix<T, 3, 1> dt1,
        Eigen::Matrix<T, 3, 1> dt2,
        T eta, T thickness, T& toc)
    {
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        T gap = eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness);
    
        T tTol = 1e-3;
    
        std::vector<std::array<T, 6>> roots;
        std::deque<std::array<T, 6>> intervals;
        intervals.push_back({0, toc, 0, 1, 0, 1});
        int iterAmt = 0;
        while (!intervals.empty()) {
            ++iterAmt;
    
            std::array<T, 6> curIV = intervals.front();
            intervals.pop_front();
    
            // check
            if (Point_Triangle_CheckInterval_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, curIV, gap)) {
                if (curIV[0] && curIV[1] - curIV[0] < tTol) {
                    // root found within tTol
                    roots.emplace_back(curIV);
                }
                else {
                    // split interval and push back
                    std::vector<T> intervalLen({curIV[1] - curIV[0], curIV[3] - curIV[2], curIV[5] - curIV[4]});
                    switch (std::max_element(intervalLen.begin(), intervalLen.end()) - intervalLen.begin()) {
                    case 0:
                        intervals.push_back({curIV[0], (curIV[1] + curIV[0]) / 2, curIV[2], curIV[3], curIV[4], curIV[5]});
                        intervals.push_back({(curIV[1] + curIV[0]) / 2, curIV[1], curIV[2], curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 1:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], (curIV[2] + curIV[3]) / 2, curIV[4], curIV[5]});
                        intervals.push_back({curIV[0], curIV[1], (curIV[2] + curIV[3]) / 2, curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 2:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], curIV[4], (curIV[4] + curIV[5]) / 2});
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], (curIV[4] + curIV[5]) / 2, curIV[5]});
                        break;
                    }
                }
            }
        }
        
        if (roots.empty()) {
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return false;
        }
        else {
            for (const auto& rI : roots) {
                if (toc > rI[0]) {
                    toc = rI[0];
                }
            }
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return true;
        }
    }
    

    and it kind of worked on this case, with output

    TICCD PT converged with 6143 iters
    4.882812e-04
    5.3e-01ms
    

    which, with a tighter convergence tolerance, might be able to reach the ground-truth solution -- no collision.

    In my implementation, I didn't include the epsilons in formula (5) in the paper. Is it the main cause here?

    opened by liminchen 2
  • Refactored and cleaned up code

    I tested and timed it on the entire dataset. I get 0 false negatives, a couple fewer false positives on the handcrafted data, and the timing is slightly faster on the simulation queries and faster on the handcrafted ones.

    opened by zfergus 0