Quasi-Dense Similarity Learning for Multiple Object Tracking, CVPR 2021 (Oral)

Overview

Quasi-Dense Tracking

This is the official implementation of the paper Quasi-Dense Similarity Learning for Multiple Object Tracking.

We present a trailer that consists of method illustrations and tracking visualizations. Take a look!

If you have any questions, please go to Discussions.

Abstract

Similarity learning has been recognized as a crucial step for object tracking. However, existing multiple object tracking methods only use sparse ground truth matching as the training objective, while ignoring the majority of the informative regions in the images. In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning. We can naturally combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack) without turning to displacement regression or motion priors. We also find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time. Despite its simplicity, QDTrack outperforms all existing methods on the MOT, BDD100K, Waymo, and TAO tracking benchmarks. It achieves 68.7 MOTA at 20.3 FPS on MOT17 without using external training data. Compared to methods with similar detectors, it improves MOTA by almost 10 points and significantly decreases the number of ID switches on the BDD100K and Waymo datasets.
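
To make the idea concrete, here is a minimal, illustrative sketch (tensor shapes and function names are assumptions, not the repo's actual API) of a multi-positive contrastive loss over quasi-dense proposal pairs and the nearest-neighbor association used at inference; the actual tracker adds refinements such as bi-directional matching on top of this.

```python
import torch
import torch.nn.functional as F

def embed_loss(key_embeds, ref_embeds, match):
    """Multi-positive contrastive loss over a key/reference frame pair.
    key_embeds: (N, D) proposal embeddings from the key frame.
    ref_embeds: (M, D) proposal embeddings from the reference frame.
    match: (N, M) binary matrix, 1 where the two proposals share an identity."""
    sims = key_embeds @ ref_embeds.t()          # (N, M) similarity logits
    losses = []
    for i in range(sims.size(0)):
        pos = sims[i][match[i] == 1]            # similarities to positive pairs
        neg = sims[i][match[i] == 0]            # similarities to negative pairs
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # log(1 + sum over (neg, pos) pairs of exp(s_neg - s_pos))
        losses.append(torch.log1p(torch.exp(neg[None, :] - pos[:, None]).sum()))
    return torch.stack(losses).mean() if losses else sims.sum() * 0

def associate(det_embeds, track_embeds):
    """Nearest-neighbor association at inference: each detection is matched to
    the most similar existing track by cosine similarity."""
    sims = F.normalize(det_embeds, dim=1) @ F.normalize(track_embeds, dim=1).t()
    scores, track_ids = sims.max(dim=1)         # best track per detection
    return scores, track_ids
```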

Quasi-dense matching

Main results

Without bells and whistles, our method outperforms the state of the art on the MOT, BDD100K, Waymo, and TAO benchmarks.

BDD100K test set

| mMOTA | mIDF1 | ID Sw. |
|-------|-------|--------|
| 35.5  | 52.3  | 10790  |

MOT

| Dataset | MOTA | IDF1 | ID Sw. | MT  | ML  |
|---------|------|------|--------|-----|-----|
| MOT16   | 69.8 | 67.1 | 1097   | 316 | 150 |
| MOT17   | 68.7 | 66.3 | 3378   | 957 | 516 |

Waymo validation set

| Category   | MOTA | IDF1 | ID Sw. |
|------------|------|------|--------|
| Vehicle    | 55.6 | 66.2 | 24309  |
| Pedestrian | 50.3 | 58.4 | 6347   |
| Cyclist    | 26.2 | 45.7 | 56     |
| All        | 44.0 | 56.8 | 30712  |

TAO

| Split | AP50 | AP75 | AP  |
|-------|------|------|-----|
| val   | 16.1 | 5.0  | 7.0 |
| test  | 12.4 | 4.5  | 5.2 |

Installation

Please refer to INSTALL.md for installation instructions.

Usage

Please refer to GET_STARTED.md for dataset preparation and running instructions.

We release pretrained models on the BDD100K dataset for testing.

More implementations / models on the following benchmarks will be released later:

  • Waymo
  • MOT16 / MOT17 / MOT20
  • TAO

Citation

@InProceedings{qdtrack,
  title = {Quasi-Dense Similarity Learning for Multiple Object Tracking},
  author = {Pang, Jiangmiao and Qiu, Linlu and Li, Xia and Chen, Haofeng and Li, Qi and Darrell, Trevor and Yu, Fisher},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  month = {June},
  year = {2021}
}
Comments
  • TypeError : Resnet : __init__() got an unexpected keyword argument 'init_cfg'

    Greetings,

I am currently using qdtrack in a Python 3.8, CUDA 11 environment on an RTX 3090 GPU. I get this error during both training and testing. All required datasets have been downloaded and placed in the required locations. Any help would be appreciated.

    opened by AmanGoyal99 12
  • Inconsistent Results on BDD100K Tracking Validation Set

    Hi there.

I ran the pre-trained BDD100K model on the tracking validation set and the resulting MOTA/IDF1 scores are lower than what QDTrack claims: MOTA 54.5, IDF1 66.7 vs. your MOTA 63.5, IDF1 71.5.

    Kindly verify if this is the case for you or if there are any missing settings.

    I followed the instructions and ran this command: sh ./tools/dist_test.sh ./configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ./ckpts/mmdet/qdtrack_frcnn_r50_fpn_12e_bdd100k_13328aed.pth 2 --out exp.pkl --eval track

    opened by taheranjary 8
  • Evaluation results on TAO-val

    Hello,

When I train the model with your code for TAO (i.e., pretrain on LVIS and finetune on TAO-train), I get the following final results on TAO-val, which are lower than the scores reported in the original paper.

|            | mAP0.5 | mAP0.75 | mAP[0.5:0.95] |
|------------|--------|---------|---------------|
| reproduced | 13.8   | 5.5     | 6.5           |
| original   | 16.1   | 5.0     | 7.0           |

    Are there any issues that I have to consider for getting the original score?

    Thanks,

    opened by shwoo93 8
  • Training loss/Acc diagram

    Thanks for the great work!

I am trying to retrain QDTrack on BDD100K; however, it is converging really slowly (at least for the first epochs). Would it be possible to share your training loss and accuracy curves?

    Thanks in advance!

    opened by LisaBernhardt 7
  • Unclear which links to pick from BDD website for dataset prep

    The Readme indicates Detection and Tracking sets, but the site shows 11 options, including: Images, MOT 2020 Labels, MOT 2020 Data, Detection 2020 Labels.

    Also, clicking MOT 2020 Data shows many different options. Should they all be downloaded?

    opened by diesendruck 7
  • about train

When I train the network, at Epoch 1, iteration 200/171305, the log is as follows: lr: 7.992e-03, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 81.8194, loss_bbox: nan, loss_track: nan, loss_track_aux: nan, loss: nan. Why does this happen?

    opened by ningqing123 6
  • Is customization of backbone possible as mentioned in the mmdet library ?

Kindly let me know whether backbone customization, as described in the mmdet library, can also be used with qdtrack.

Link: https://github.com/open-mmlab/mmdetection/blob/master/docs/tutorials/customize_models.md#add-a-new-backbone
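
For reference, the pattern from the linked mmdet tutorial should apply here as well, since qdtrack builds its detector through mmdet's registries (an assumption based on the repo's mmdet dependency). A minimal sketch with a placeholder backbone name:

```python
# Sketch following the linked mmdet tutorial; MyBackbone is a placeholder name.
# After registering it, reference it from the detector's backbone config, e.g.
# backbone=dict(type='MyBackbone', out_channels=256), and make sure the module
# is imported somewhere so the registration actually runs.
import torch.nn as nn
from mmdet.models.builder import BACKBONES

@BACKBONES.register_module()
class MyBackbone(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.stem = nn.Conv2d(3, out_channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # Return a tuple of feature maps, as FPN-style necks expect.
        return (self.stem(x),)
```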

    opened by AmanGoyal99 5
  • Your BDD100K instructions are unclear

    This is what you are saying:

    
    On the official download page, the required data and annotations are
    
    detection set images: Images
    detection set annotations: Detection 2020 Labels
    tracking set images: MOT 2020 Data
    tracking set annotations: MOT 2020 Labels
    

But there is no "Images" or "MOT 2020 Data" option on the official BDD website.

    opened by ghost 5
  • I'm confused about the meaning of the auxiliary loss

Hi, thanks for your great work. According to the paper, there is an auxiliary loss, but I do not really understand the intuition behind it. (screenshot attached)

    Can you give me some more explanation of this loss? Thanks.
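
As far as I understand the paper, the auxiliary loss is an L2 penalty on the cosine similarity of each sampled pair against its match label, which keeps the magnitude of the logits used by the contrastive term in a reasonable range. A rough sketch (the targets and margins here are simplified assumptions; see the paper and the loss config for the exact form):

```python
import torch
import torch.nn.functional as F

def aux_loss(key_embeds, ref_embeds, match):
    """key_embeds: (N, D), ref_embeds: (M, D), match: (N, M) binary match matrix."""
    cos = F.normalize(key_embeds, dim=1) @ F.normalize(ref_embeds, dim=1).t()  # (N, M)
    target = match.float()   # ~1 for same identity, ~0 otherwise (assumed targets)
    return ((cos - target) ** 2).mean()
```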

    opened by hcv1027 4
  • RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    Thank you for your paper and this repo! I would like to test your pretrained model on the BDD100k dataset. Therefore I followed the instructions (https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md) - downloaded BDD100k, converted annotations as described and stored everything as your folder structure suggests.

    I used 'single-gpu testing' in the chapter 'Test a Model' and executed the following command in the terminal: python tools/test.py ${QDTrack}/configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ${QDTrack}/pretrained_models/qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth --out testrun_01.pkl --eval track --show-dir ${QDTrack}/data/results

${QDTrack} indicates the path to qdtrack on my machine.

I get the error shown in the issue title (screenshot attached).

Could you please help me solve this issue? Thanks a lot!

    opened by LisaBernhardt 4
  • about MOT17: loss_track degrades to zero after 50 iterations

Thanks for your great work! I'm now trying to run qdtrack on MOT17. The detection part went well during training and reached a reasonable mAP score. However, the loss of the quasi-dense embedding part quickly degraded to zero within 100 iterations, and I obtained very low MOTA, MOTP, IDF1, etc. after training. Note that I modified nothing except the dataset-related code, which I've checked carefully, so I believe that is not the reason. Should I modify the settings of the quasi-dense embedding head to make it work? Do you have any suggestions? Thank you very much!

    opened by wswdx 4
  • detector freeze problem

    Hi.

I'm going to freeze the parameters of the detector as you suggest (https://github.com/SysCV/qdtrack/issues/126).

In qdtrack/models/mot/qdtrack.py, I tried to freeze the detector by setting freeze_detector=True. But when freeze_detector=True, I got the following error from self.detector.

Traceback (most recent call last):
  File "tools/train.py", line 169, in <module>
    main()
  File "tools/train.py", line 140, in main
    test_cfg=cfg.get('test_cfg'))
  File "/workspace/qdtrack/qdtrack/models/builder.py", line 15, in build_model
    return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/workspace/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/workspace/mmcv/mmcv/utils/registry.py", line 72, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: QDTrack: 'QDTrack' object has no attribute 'backbone'


Here is the config file I used (screenshot attached).

I think this is caused by self.detector.

How can I put the backbone, neck, rpn_head, and roi_head.bbox_head from the detector config file (/configs/base/faster_rcnn_r50_fpn.py) into self.detector?

    Thank you.
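
For what it's worth, the AttributeError suggests the freeze logic is looking for self.backbone, while QDTrack nests those modules under self.detector. A hypothetical freeze helper (not the repo's code; module paths inferred from the checkpoint key names) that targets the wrapped detector:

```python
def freeze_detector(model):
    """Freeze the backbone/neck/rpn_head/bbox_head of the wrapped detector."""
    modules = [
        model.detector.backbone,
        model.detector.neck,
        model.detector.rpn_head,
        model.detector.roi_head.bbox_head,
    ]
    for m in modules:
        m.eval()                        # also fix BatchNorm running statistics
        for p in m.parameters():
            p.requires_grad = False
```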

    opened by YOOHYOJEONG 1
  • Can I train only tracker?

    Hi.

I trained a detector using mmcv, and I want to use that mmcv-trained checkpoint as the detector of qdtrack without any additional detector training. In this case, can I train only the tracker of qdtrack?

If I put the mmcv-trained checkpoint into init_cfg=dict(checkpoint='') in the detector config, is that the same as training only the tracker as described above?

    Thank you.

    opened by YOOHYOJEONG 2
  • The model and loaded state dict do not match exactly

    Hi,

    Thanks for open-sourcing the code of your great work! Looks like there are some bugs when running the current tools/inference.py.

When using configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py as the config and qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth as the checkpoint (from the Google Drive link provided in the README file), the model and the loaded state dict do not match exactly. It looks like you updated the layer names but didn't update the pre-trained checkpoint accordingly. Manually changing the layer names in the .pth file works.
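
For reference, a hypothetical renaming script along these lines (not part of the repo; file names are placeholders): it prefixes detector keys and moves roi_head.track_head.* to track_head.track_head.*, matching the key lists in the mismatch report below.

```python
import torch

ckpt = torch.load('qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)

new_state = {}
for key, value in state.items():
    if key.startswith('roi_head.track_head.'):
        # roi_head.track_head.* -> track_head.track_head.*
        new_state['track_head.' + key[len('roi_head.'):]] = value
    else:
        # backbone.*, neck.*, rpn_head.*, roi_head.bbox_head.* -> detector.*
        new_state['detector.' + key] = value

torch.save({'state_dict': new_state}, 'qdtrack_bdd100k_renamed.pth')
```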

    The model and loaded state dict do not match exactly
    
    unexpected key in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.bn1.num_batches_tracked, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.bn1.num_batches_tracked, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.bn2.num_batches_tracked, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.bn3.num_batches_tracked, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.0.downsample.1.num_batches_tracked, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.bn1.num_batches_tracked, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.bn2.num_batches_tracked, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.1.bn3.num_batches_tracked, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.bn1.num_batches_tracked, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.bn2.num_batches_tracked, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer1.2.bn3.num_batches_tracked, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.bn1.num_batches_tracked, backbone.layer2.0.conv2.weight, backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.bn2.num_batches_tracked, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.bn3.num_batches_tracked, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.0.downsample.1.num_batches_tracked, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.bn1.num_batches_tracked, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, 
backbone.layer2.1.bn2.num_batches_tracked, backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.1.bn3.num_batches_tracked, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.bn1.num_batches_tracked, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.bn2.num_batches_tracked, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.2.bn3.num_batches_tracked, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.bn1.num_batches_tracked, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.bn2.num_batches_tracked, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer2.3.bn3.num_batches_tracked, backbone.layer3.0.conv1.weight, backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.bn1.num_batches_tracked, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.bn2.num_batches_tracked, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.bn3.num_batches_tracked, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.0.downsample.1.num_batches_tracked, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.bn1.num_batches_tracked, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.bn2.num_batches_tracked, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.1.bn3.num_batches_tracked, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.bn1.num_batches_tracked, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.bn2.num_batches_tracked, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, 
backbone.layer3.2.bn3.num_batches_tracked, backbone.layer3.3.conv1.weight, backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.bn1.num_batches_tracked, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.bn2.num_batches_tracked, backbone.layer3.3.conv3.weight, backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.3.bn3.num_batches_tracked, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.bn1.num_batches_tracked, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.bn2.num_batches_tracked, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.4.bn3.num_batches_tracked, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.bn1.num_batches_tracked, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.bn2.num_batches_tracked, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer3.5.bn3.num_batches_tracked, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.bn1.num_batches_tracked, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.bn2.num_batches_tracked, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.bn3.num_batches_tracked, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.0.downsample.1.num_batches_tracked, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.bn1.num_batches_tracked, backbone.layer4.1.conv2.weight, backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.bn2.num_batches_tracked, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.1.bn3.num_batches_tracked, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, 
backbone.layer4.2.bn1.num_batches_tracked, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.bn2.num_batches_tracked, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, backbone.layer4.2.bn3.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.lateral_convs.3.conv.weight, neck.lateral_convs.3.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, neck.fpn_convs.3.conv.weight, neck.fpn_convs.3.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, roi_head.bbox_head.fc_cls.weight, roi_head.bbox_head.fc_cls.bias, roi_head.bbox_head.fc_reg.weight, roi_head.bbox_head.fc_reg.bias, roi_head.bbox_head.shared_fcs.0.weight, roi_head.bbox_head.shared_fcs.0.bias, roi_head.bbox_head.shared_fcs.1.weight, roi_head.bbox_head.shared_fcs.1.bias, roi_head.track_head.convs.0.conv.weight, roi_head.track_head.convs.0.gn.weight, roi_head.track_head.convs.0.gn.bias, roi_head.track_head.convs.1.conv.weight, roi_head.track_head.convs.1.gn.weight, roi_head.track_head.convs.1.gn.bias, roi_head.track_head.convs.2.conv.weight, roi_head.track_head.convs.2.gn.weight, roi_head.track_head.convs.2.gn.bias, roi_head.track_head.convs.3.conv.weight, roi_head.track_head.convs.3.gn.weight, roi_head.track_head.convs.3.gn.bias, roi_head.track_head.fcs.0.weight, roi_head.track_head.fcs.0.bias, roi_head.track_head.fc_embed.weight, roi_head.track_head.fc_embed.bias
    
    missing keys in source state_dict: detector.backbone.conv1.weight, detector.backbone.bn1.weight, detector.backbone.bn1.bias, detector.backbone.bn1.running_mean, detector.backbone.bn1.running_var, detector.backbone.layer1.0.conv1.weight, detector.backbone.layer1.0.bn1.weight, detector.backbone.layer1.0.bn1.bias, detector.backbone.layer1.0.bn1.running_mean, detector.backbone.layer1.0.bn1.running_var, detector.backbone.layer1.0.conv2.weight, detector.backbone.layer1.0.bn2.weight, detector.backbone.layer1.0.bn2.bias, detector.backbone.layer1.0.bn2.running_mean, detector.backbone.layer1.0.bn2.running_var, detector.backbone.layer1.0.conv3.weight, detector.backbone.layer1.0.bn3.weight, detector.backbone.layer1.0.bn3.bias, detector.backbone.layer1.0.bn3.running_mean, detector.backbone.layer1.0.bn3.running_var, detector.backbone.layer1.0.downsample.0.weight, detector.backbone.layer1.0.downsample.1.weight, detector.backbone.layer1.0.downsample.1.bias, detector.backbone.layer1.0.downsample.1.running_mean, detector.backbone.layer1.0.downsample.1.running_var, detector.backbone.layer1.1.conv1.weight, detector.backbone.layer1.1.bn1.weight, detector.backbone.layer1.1.bn1.bias, detector.backbone.layer1.1.bn1.running_mean, detector.backbone.layer1.1.bn1.running_var, detector.backbone.layer1.1.conv2.weight, detector.backbone.layer1.1.bn2.weight, detector.backbone.layer1.1.bn2.bias, detector.backbone.layer1.1.bn2.running_mean, detector.backbone.layer1.1.bn2.running_var, detector.backbone.layer1.1.conv3.weight, detector.backbone.layer1.1.bn3.weight, detector.backbone.layer1.1.bn3.bias, detector.backbone.layer1.1.bn3.running_mean, detector.backbone.layer1.1.bn3.running_var, detector.backbone.layer1.2.conv1.weight, detector.backbone.layer1.2.bn1.weight, detector.backbone.layer1.2.bn1.bias, detector.backbone.layer1.2.bn1.running_mean, detector.backbone.layer1.2.bn1.running_var, detector.backbone.layer1.2.conv2.weight, detector.backbone.layer1.2.bn2.weight, detector.backbone.layer1.2.bn2.bias, detector.backbone.layer1.2.bn2.running_mean, detector.backbone.layer1.2.bn2.running_var, detector.backbone.layer1.2.conv3.weight, detector.backbone.layer1.2.bn3.weight, detector.backbone.layer1.2.bn3.bias, detector.backbone.layer1.2.bn3.running_mean, detector.backbone.layer1.2.bn3.running_var, detector.backbone.layer2.0.conv1.weight, detector.backbone.layer2.0.bn1.weight, detector.backbone.layer2.0.bn1.bias, detector.backbone.layer2.0.bn1.running_mean, detector.backbone.layer2.0.bn1.running_var, detector.backbone.layer2.0.conv2.weight, detector.backbone.layer2.0.bn2.weight, detector.backbone.layer2.0.bn2.bias, detector.backbone.layer2.0.bn2.running_mean, detector.backbone.layer2.0.bn2.running_var, detector.backbone.layer2.0.conv3.weight, detector.backbone.layer2.0.bn3.weight, detector.backbone.layer2.0.bn3.bias, detector.backbone.layer2.0.bn3.running_mean, detector.backbone.layer2.0.bn3.running_var, detector.backbone.layer2.0.downsample.0.weight, detector.backbone.layer2.0.downsample.1.weight, detector.backbone.layer2.0.downsample.1.bias, detector.backbone.layer2.0.downsample.1.running_mean, detector.backbone.layer2.0.downsample.1.running_var, detector.backbone.layer2.1.conv1.weight, detector.backbone.layer2.1.bn1.weight, detector.backbone.layer2.1.bn1.bias, detector.backbone.layer2.1.bn1.running_mean, detector.backbone.layer2.1.bn1.running_var, detector.backbone.layer2.1.conv2.weight, detector.backbone.layer2.1.bn2.weight, detector.backbone.layer2.1.bn2.bias, detector.backbone.layer2.1.bn2.running_mean, 
detector.backbone.layer2.1.bn2.running_var, detector.backbone.layer2.1.conv3.weight, detector.backbone.layer2.1.bn3.weight, detector.backbone.layer2.1.bn3.bias, detector.backbone.layer2.1.bn3.running_mean, detector.backbone.layer2.1.bn3.running_var, detector.backbone.layer2.2.conv1.weight, detector.backbone.layer2.2.bn1.weight, detector.backbone.layer2.2.bn1.bias, detector.backbone.layer2.2.bn1.running_mean, detector.backbone.layer2.2.bn1.running_var, detector.backbone.layer2.2.conv2.weight, detector.backbone.layer2.2.bn2.weight, detector.backbone.layer2.2.bn2.bias, detector.backbone.layer2.2.bn2.running_mean, detector.backbone.layer2.2.bn2.running_var, detector.backbone.layer2.2.conv3.weight, detector.backbone.layer2.2.bn3.weight, detector.backbone.layer2.2.bn3.bias, detector.backbone.layer2.2.bn3.running_mean, detector.backbone.layer2.2.bn3.running_var, detector.backbone.layer2.3.conv1.weight, detector.backbone.layer2.3.bn1.weight, detector.backbone.layer2.3.bn1.bias, detector.backbone.layer2.3.bn1.running_mean, detector.backbone.layer2.3.bn1.running_var, detector.backbone.layer2.3.conv2.weight, detector.backbone.layer2.3.bn2.weight, detector.backbone.layer2.3.bn2.bias, detector.backbone.layer2.3.bn2.running_mean, detector.backbone.layer2.3.bn2.running_var, detector.backbone.layer2.3.conv3.weight, detector.backbone.layer2.3.bn3.weight, detector.backbone.layer2.3.bn3.bias, detector.backbone.layer2.3.bn3.running_mean, detector.backbone.layer2.3.bn3.running_var, detector.backbone.layer3.0.conv1.weight, detector.backbone.layer3.0.bn1.weight, detector.backbone.layer3.0.bn1.bias, detector.backbone.layer3.0.bn1.running_mean, detector.backbone.layer3.0.bn1.running_var, detector.backbone.layer3.0.conv2.weight, detector.backbone.layer3.0.bn2.weight, detector.backbone.layer3.0.bn2.bias, detector.backbone.layer3.0.bn2.running_mean, detector.backbone.layer3.0.bn2.running_var, detector.backbone.layer3.0.conv3.weight, detector.backbone.layer3.0.bn3.weight, detector.backbone.layer3.0.bn3.bias, detector.backbone.layer3.0.bn3.running_mean, detector.backbone.layer3.0.bn3.running_var, detector.backbone.layer3.0.downsample.0.weight, detector.backbone.layer3.0.downsample.1.weight, detector.backbone.layer3.0.downsample.1.bias, detector.backbone.layer3.0.downsample.1.running_mean, detector.backbone.layer3.0.downsample.1.running_var, detector.backbone.layer3.1.conv1.weight, detector.backbone.layer3.1.bn1.weight, detector.backbone.layer3.1.bn1.bias, detector.backbone.layer3.1.bn1.running_mean, detector.backbone.layer3.1.bn1.running_var, detector.backbone.layer3.1.conv2.weight, detector.backbone.layer3.1.bn2.weight, detector.backbone.layer3.1.bn2.bias, detector.backbone.layer3.1.bn2.running_mean, detector.backbone.layer3.1.bn2.running_var, detector.backbone.layer3.1.conv3.weight, detector.backbone.layer3.1.bn3.weight, detector.backbone.layer3.1.bn3.bias, detector.backbone.layer3.1.bn3.running_mean, detector.backbone.layer3.1.bn3.running_var, detector.backbone.layer3.2.conv1.weight, detector.backbone.layer3.2.bn1.weight, detector.backbone.layer3.2.bn1.bias, detector.backbone.layer3.2.bn1.running_mean, detector.backbone.layer3.2.bn1.running_var, detector.backbone.layer3.2.conv2.weight, detector.backbone.layer3.2.bn2.weight, detector.backbone.layer3.2.bn2.bias, detector.backbone.layer3.2.bn2.running_mean, detector.backbone.layer3.2.bn2.running_var, detector.backbone.layer3.2.conv3.weight, detector.backbone.layer3.2.bn3.weight, detector.backbone.layer3.2.bn3.bias, detector.backbone.layer3.2.bn3.running_mean, 
detector.backbone.layer3.2.bn3.running_var, detector.backbone.layer3.3.conv1.weight, detector.backbone.layer3.3.bn1.weight, detector.backbone.layer3.3.bn1.bias, detector.backbone.layer3.3.bn1.running_mean, detector.backbone.layer3.3.bn1.running_var, detector.backbone.layer3.3.conv2.weight, detector.backbone.layer3.3.bn2.weight, detector.backbone.layer3.3.bn2.bias, detector.backbone.layer3.3.bn2.running_mean, detector.backbone.layer3.3.bn2.running_var, detector.backbone.layer3.3.conv3.weight, detector.backbone.layer3.3.bn3.weight, detector.backbone.layer3.3.bn3.bias, detector.backbone.layer3.3.bn3.running_mean, detector.backbone.layer3.3.bn3.running_var, detector.backbone.layer3.4.conv1.weight, detector.backbone.layer3.4.bn1.weight, detector.backbone.layer3.4.bn1.bias, detector.backbone.layer3.4.bn1.running_mean, detector.backbone.layer3.4.bn1.running_var, detector.backbone.layer3.4.conv2.weight, detector.backbone.layer3.4.bn2.weight, detector.backbone.layer3.4.bn2.bias, detector.backbone.layer3.4.bn2.running_mean, detector.backbone.layer3.4.bn2.running_var, detector.backbone.layer3.4.conv3.weight, detector.backbone.layer3.4.bn3.weight, detector.backbone.layer3.4.bn3.bias, detector.backbone.layer3.4.bn3.running_mean, detector.backbone.layer3.4.bn3.running_var, detector.backbone.layer3.5.conv1.weight, detector.backbone.layer3.5.bn1.weight, detector.backbone.layer3.5.bn1.bias, detector.backbone.layer3.5.bn1.running_mean, detector.backbone.layer3.5.bn1.running_var, detector.backbone.layer3.5.conv2.weight, detector.backbone.layer3.5.bn2.weight, detector.backbone.layer3.5.bn2.bias, detector.backbone.layer3.5.bn2.running_mean, detector.backbone.layer3.5.bn2.running_var, detector.backbone.layer3.5.conv3.weight, detector.backbone.layer3.5.bn3.weight, detector.backbone.layer3.5.bn3.bias, detector.backbone.layer3.5.bn3.running_mean, detector.backbone.layer3.5.bn3.running_var, detector.backbone.layer4.0.conv1.weight, detector.backbone.layer4.0.bn1.weight, detector.backbone.layer4.0.bn1.bias, detector.backbone.layer4.0.bn1.running_mean, detector.backbone.layer4.0.bn1.running_var, detector.backbone.layer4.0.conv2.weight, detector.backbone.layer4.0.bn2.weight, detector.backbone.layer4.0.bn2.bias, detector.backbone.layer4.0.bn2.running_mean, detector.backbone.layer4.0.bn2.running_var, detector.backbone.layer4.0.conv3.weight, detector.backbone.layer4.0.bn3.weight, detector.backbone.layer4.0.bn3.bias, detector.backbone.layer4.0.bn3.running_mean, detector.backbone.layer4.0.bn3.running_var, detector.backbone.layer4.0.downsample.0.weight, detector.backbone.layer4.0.downsample.1.weight, detector.backbone.layer4.0.downsample.1.bias, detector.backbone.layer4.0.downsample.1.running_mean, detector.backbone.layer4.0.downsample.1.running_var, detector.backbone.layer4.1.conv1.weight, detector.backbone.layer4.1.bn1.weight, detector.backbone.layer4.1.bn1.bias, detector.backbone.layer4.1.bn1.running_mean, detector.backbone.layer4.1.bn1.running_var, detector.backbone.layer4.1.conv2.weight, detector.backbone.layer4.1.bn2.weight, detector.backbone.layer4.1.bn2.bias, detector.backbone.layer4.1.bn2.running_mean, detector.backbone.layer4.1.bn2.running_var, detector.backbone.layer4.1.conv3.weight, detector.backbone.layer4.1.bn3.weight, detector.backbone.layer4.1.bn3.bias, detector.backbone.layer4.1.bn3.running_mean, detector.backbone.layer4.1.bn3.running_var, detector.backbone.layer4.2.conv1.weight, detector.backbone.layer4.2.bn1.weight, detector.backbone.layer4.2.bn1.bias, detector.backbone.layer4.2.bn1.running_mean, 
detector.backbone.layer4.2.bn1.running_var, detector.backbone.layer4.2.conv2.weight, detector.backbone.layer4.2.bn2.weight, detector.backbone.layer4.2.bn2.bias, detector.backbone.layer4.2.bn2.running_mean, detector.backbone.layer4.2.bn2.running_var, detector.backbone.layer4.2.conv3.weight, detector.backbone.layer4.2.bn3.weight, detector.backbone.layer4.2.bn3.bias, detector.backbone.layer4.2.bn3.running_mean, detector.backbone.layer4.2.bn3.running_var, detector.neck.lateral_convs.0.conv.weight, detector.neck.lateral_convs.0.conv.bias, detector.neck.lateral_convs.1.conv.weight, detector.neck.lateral_convs.1.conv.bias, detector.neck.lateral_convs.2.conv.weight, detector.neck.lateral_convs.2.conv.bias, detector.neck.lateral_convs.3.conv.weight, detector.neck.lateral_convs.3.conv.bias, detector.neck.fpn_convs.0.conv.weight, detector.neck.fpn_convs.0.conv.bias, detector.neck.fpn_convs.1.conv.weight, detector.neck.fpn_convs.1.conv.bias, detector.neck.fpn_convs.2.conv.weight, detector.neck.fpn_convs.2.conv.bias, detector.neck.fpn_convs.3.conv.weight, detector.neck.fpn_convs.3.conv.bias, detector.rpn_head.rpn_conv.weight, detector.rpn_head.rpn_conv.bias, detector.rpn_head.rpn_cls.weight, detector.rpn_head.rpn_cls.bias, detector.rpn_head.rpn_reg.weight, detector.rpn_head.rpn_reg.bias, detector.roi_head.bbox_head.fc_cls.weight, detector.roi_head.bbox_head.fc_cls.bias, detector.roi_head.bbox_head.fc_reg.weight, detector.roi_head.bbox_head.fc_reg.bias, detector.roi_head.bbox_head.shared_fcs.0.weight, detector.roi_head.bbox_head.shared_fcs.0.bias, detector.roi_head.bbox_head.shared_fcs.1.weight, detector.roi_head.bbox_head.shared_fcs.1.bias, track_head.track_head.convs.0.conv.weight, track_head.track_head.convs.0.gn.weight, track_head.track_head.convs.0.gn.bias, track_head.track_head.convs.1.conv.weight, track_head.track_head.convs.1.gn.weight, track_head.track_head.convs.1.gn.bias, track_head.track_head.convs.2.conv.weight, track_head.track_head.convs.2.gn.weight, track_head.track_head.convs.2.gn.bias, track_head.track_head.convs.3.conv.weight, track_head.track_head.convs.3.gn.weight, track_head.track_head.convs.3.gn.bias, track_head.track_head.fcs.0.weight, track_head.track_head.fcs.0.bias, track_head.track_head.fc_embed.weight, track_head.track_head.fc_embed.bias
    
    opened by yimingzhou1 1
  • BDD100k det conversion error

    When I try to run this command: python -m bdd100k.label.to_coco -m det -i bdd100k/labels/det_20/det_train.json -o data/bdd/labels/det_20/det_train_cocofmt.json I receive the following error:

[2022-09-23 16:25:55,619 to_coco.py:301 main] Mode: det remove-ignore: False ignore-as-class: False
[2022-09-23 16:25:55,619 to_coco.py:307 main] Loading annotations...
[2022-09-23 16:26:02,429 to_coco.py:318 main] Converting annotations...
 10%|████████ | 6879/69863 [00:00<00:08, 7435.14it/s]
Traceback (most recent call last):
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 337, in <module>
    main()
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 322, in main
    coco = bdd100k2coco_det(
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 145, in bdd100k2coco_det
    if frame["labels"]:
KeyError: 'labels'

    This error does not occur when running with ${SET_NAME} equal to val
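
A hypothetical workaround (not from the repo): the KeyError suggests some frames in det_train.json have no "labels" field, so one could patch the file before running the converter. Paths below are placeholders.

```python
import json

# Load the raw BDD100K detection annotations (assumed path).
with open('bdd100k/labels/det_20/det_train.json') as f:
    frames = json.load(f)

# Ensure every frame has a "labels" key to avoid KeyError in bdd100k2coco_det.
for frame in frames:
    frame.setdefault('labels', [])

with open('bdd100k/labels/det_20/det_train_patched.json', 'w') as f:
    json.dump(frames, f)
```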

    opened by IMcDougall 0
  • The reference image and key image are exactly the same

    In the article (QDTrack), the difference between the key image and the reference image is indicated by the image below.

    Screenshot 2022-09-05 113254

    However, when debugging the training code, I saw that the reference image metadata and key image metadata returned by the data loader are exactly the same.

    Screenshot 2022-09-05 103058

Do I need to change a parameter before starting training, or is this an error in the code? I would be glad if you could let me know.

    opened by Hcayirli 4
Releases (v0.1)
Owner
ETH VIS Research Group
Visual Intelligence and Systems Group at ETH Zürich