Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/

Overview




Texar-PyTorch is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The tool is designed for both researchers and practitioners for fast prototyping and experimentation. Texar-PyTorch was originally developed, and is actively contributed to, by Petuum and CMU in collaboration with other institutes. A mirror of this repository is maintained by Petuum Open Source.

Texar-PyTorch integrates many of the best features of TensorFlow into PyTorch, delivering highly usable and customizable modules superior to PyTorch native ones.

Key Features

  • Two Versions, (Mostly) Same Interfaces. Texar-PyTorch (this repo) and Texar-TF have mostly the same interfaces. Both further combine the best designs of TF and PyTorch:
    • Interfaces and variable sharing in the PyTorch convention
    • Excellent factorization and rich functionalities in the TF convention
  • Versatile to support broad needs:
    • data processing, model architectures, loss functions, training and inference algorithms, evaluation, ...
    • encoder(s) to decoder(s), sequential- and self-attentions, memory, hierarchical models, classifiers, ...
    • maximum likelihood learning, reinforcement learning, adversarial learning, probabilistic modeling, ...
  • Fully Customizable at multiple abstraction levels -- both novice-friendly and expert-friendly.
    • Freely plug in arbitrary external modules, since Texar is fully compatible with the native PyTorch APIs.
  • Modularized for maximal re-use and clean APIs, based on principled decomposition of Learning-Inference-Model Architecture.
  • Rich Pre-trained Models, Rich Usage with Uniform Interfaces. BERT, GPT2, XLNet, etc., for encoding, classification, generation, and composing complex models with other Texar components (see the sketch after this list)!
  • Clean, detailed documentation and rich examples.
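
For example, pre-trained modules are instantiated by name and then used like any other PyTorch module. A minimal sketch (the toy inputs and the printed shape comment below are illustrative assumptions):

import torch
import texar.torch as tx

# Load a pre-trained BERT encoder by name; weights are downloaded and loaded automatically.
encoder = tx.modules.BERTEncoder(pretrained_model_name="bert-base-uncased")

token_ids = torch.randint(0, 30000, (2, 16))  # toy batch of token ids
lengths = torch.tensor([16, 12])              # valid length of each example
outputs, pooled = encoder(inputs=token_ids, sequence_length=lengths)
print(outputs.shape)  # (batch_size, max_length, hidden_dim)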




Library API Example

A code example that builds and trains a Conditional GPT2 model (e.g., for machine translation and text summarization):

import torch
import torch.nn as nn

import texar.torch as tx
from texar.torch.run import *  # Executor, cond, metric

# (1) Modeling
class ConditionalGPT2Model(nn.Module):
  """An encoder-decoder model with GPT-2 as the decoder."""
  def __init__(self, vocab_size):
    super().__init__()
    # Use hyperparameter dict for model configuration
    self.embedder = tx.modules.WordEmbedder(vocab_size, hparams=emb_hparams)
    self.encoder = tx.modules.TransformerEncoder(hparams=enc_hparams)
    self.decoder = tx.modules.GPT2Decoder("gpt2-small")  # With pre-trained weights

  def _get_decoder_output(self, batch, train=True):
    """Perform model inference, i.e., decoding."""
    enc_states = self.encoder(inputs=self.embedder(batch['source_text_ids']),
                              sequence_length=batch['source_length'])
    if train:  # Teacher-forcing decoding at training time
      return self.decoder(
          inputs=batch['target_text_ids'], sequence_length=batch['target_length'] - 1,
          memory=enc_states, memory_sequence_length=batch['source_length'])
    else:      # Beam search decoding at prediction time
      start_tokens = torch.full_like(batch['source_text_ids'][:, 0], BOS)
      return self.decoder(
          beam_width=5, start_tokens=start_tokens,
          memory=enc_states, memory_sequence_length=batch['source_length'])

  def forward(self, batch):
    """Compute training loss."""
    outputs = self._get_decoder_output(batch)
    loss = tx.losses.sequence_sparse_softmax_cross_entropy(  # Sequence loss
        labels=batch['target_text_ids'][:, 1:], logits=outputs.logits,
        sequence_length=batch['target_length'] - 1)  # Automatic masking
    return {"loss": loss}

  def predict(self, batch):
    """Compute model predictions."""
    sequence, _ = self._get_decoder_output(batch, train=False)
    return {"gen_text_ids": sequence}

  
# (2) Data
# Create dataset splits using built-in data loaders
datasets = {split: tx.data.PairedTextData(hparams=data_hparams[split])
            for split in ["train", "valid", "test"]}

model = ConditionalGPT2Model(datasets["train"].target_vocab.size)

# (3) Training
# Manage the train-eval loop with the Executor API
executor = Executor(
  model=model, datasets=datasets,
  optimizer={"type": torch.optim.Adam, "kwargs": {"lr": 5e-4}},
  stop_training_on=cond.epoch(20),
  log_every=cond.iteration(100),
  validate_every=cond.epoch(1),
  train_metric=("loss", metric.RunningAverage(10, pred_name="loss")),
  valid_metric=metric.BLEU(pred_name="gen_text_ids", label_name="target_text_ids"),
  save_every=cond.validation(better=True),
  checkpoint_dir="outputs/saved_models/")
executor.train()
executor.test(datasets["test"])
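
The hyperparameter dicts referenced above (emb_hparams, enc_hparams, data_hparams) and the BOS id are defined elsewhere in a full script. The values below are purely illustrative assumptions, not library defaults:

# Illustrative configuration (assumed values; consult the docs for the full set of options).
emb_hparams = {"dim": 768}                    # word embedding dimension
enc_hparams = {"dim": 768, "num_blocks": 6}   # Transformer encoder width / depth
data_hparams = {
    split: {
        "source_dataset": {"files": f"data/{split}.src", "vocab_file": "data/vocab.src"},
        "target_dataset": {"files": f"data/{split}.tgt", "vocab_file": "data/vocab.tgt"},
        "batch_size": 32,
    }
    for split in ["train", "valid", "test"]
}
# BOS would be the begin-of-sequence token id, taken from the target vocabulary.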

Many more examples are available here.

Installation

Texar-PyTorch requires:

  • python == 3.6 or 3.7
  • torch >= 1.0.0. Please follow the official instructions to install the appropriate version.

After torch is installed, install Texar from PyPI:

pip install texar-pytorch

To use cutting-edge features or develop locally, install from source:

git clone https://github.com/asyml/texar-pytorch.git
cd texar-pytorch
pip install .

To enable TensorBoard support in Executor, install tensorboardX:

pip install tensorboardX
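
To verify the installation (assuming the package exposes a version string):

python -c "import texar.torch as tx; print(tx.__version__)"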

Getting Started

Reference

If you use Texar, please cite the tech report with the following BibTeX entry:

Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation
Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wanrong Zhu, Devendra Sachan and Eric Xing
ACL 2019

@inproceedings{hu2019texar,
  title={Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation},
  author={Hu, Zhiting and Shi, Haoran and Tan, Bowen and Wang, Wentao and Yang, Zichao and Zhao, Tiancheng and He, Junxian and Qin, Lianhui and Wang, Di and others},
  booktitle={ACL 2019, System Demonstrations},
  year={2019}
}

License

Apache License 2.0

Companies and Universities Supporting Texar

                  

Comments
  • Design decision query: recommended pattern for auxiliary loss terms that should be ignored during evaluation?

    Hello!

    I came across texar-pytorch while in the process of writing my own version of the Executor, and am really happy that someone's already done the work for it, so firstly, thanks for the excellent repository.

    Broad Query: One of the main questions I have pertains to the design requirement of including the loss as an item in the dictionary returned by the forward method of the model. Essentially, I'm wondering what the recommended pattern is for including terms in the loss (e.g., regularization terms) that ought not to be part of the validation loss? Conceptually, I think of the forward pass as being completed once the model makes its predictions, which is what I believe the predict() method is for. However, having the computation of the loss as a responsibility of the forward pass could lead to certain problems.

    Context: Ideally, the training loss ought to be computed in a separate forward pass after the training epoch is completed. I'm aware that most people use an average over the training batches as an approximation of the training loss. However, this becomes an issue when comparing against a validation loss curve, where the difference between the training and validation curves indicates generalization error. This is for two reasons in the typical case:

    1. The model changes at the end of each batch when the optimizer takes a step, so it's an unfair comparison against the evaluation setting. The model might also overfit on a single batch.
    2. The model might have dropout and batch norm turned on during training which behave differently during evaluation.

    As far as I can tell, the training loss is computed within the training loop in the executor, as opposed to an additional forward pass over the training set with model.eval(). Is my understanding correct?

    On regularization: Typically, any regularization terms over the model parameters are added to the loss before the call to backward. With the given interface, it seems like the right place to do this would be in the forward pass. However, I find it a little weird to make the model responsible for regularizing itself. I usually have a separate nn.Module subclass responsible for regularizing a model and dependency-inject the model into this class. That way I can swap out different regularizers without changing the model, in keeping with the Open-Closed (SOLID) principle.
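
    To make that pattern concrete, here is a minimal sketch of the kind of dependency-injected regularizer wrapper described above (the class name and the L2 penalty are illustrative assumptions, not part of texar-pytorch):

    import torch.nn as nn

    class L2Regularizer(nn.Module):
        """Wraps a model and adds an L2 penalty to its training loss only."""
        def __init__(self, model: nn.Module, weight: float = 1e-4):
            super().__init__()
            self.model = model
            self.weight = weight

        def forward(self, batch):
            outputs = self.model(batch)  # expects a dict containing "loss"
            penalty = sum(p.pow(2).sum() for p in self.model.parameters())
            outputs["loss"] = outputs["loss"] + self.weight * penalty
            return outputs

        def predict(self, batch):
            return self.model.predict(batch)  # no penalty at evaluation time

    # The wrapped module, rather than the raw model, would then be handed to Executor.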

    Could you please explain how to achieve these two things (loss computation over the training set after the train loop, and computing auxiliary loss terms outside the model) with the current setup of the Executor? It seems like this is a direct consequence of requiring the model to compute the loss, which seems a little problematic. These are currently the two main problems precluding my use of this otherwise awesome repo, so I'd appreciate any insight.

    Thanks!

    topic: executor discussion 
    opened by chiragraman 14
  • Change feature type names in `RecordData`

    The feature types in RecordData (FixedLenFeature, FixedLenSequenceFeature, and VarLenFeature) are directly borrowed from TensorFlow and might cause confusion for PyTorch users. Based on my discussion with @AvinashBukkittu yesterday, we think it might be worthwhile to also introduce a set of aliases for these types, and use them in the examples.

    Here's the meaning behind each feature type, and what happens when we collate such features (see the sketch after this list):

    • FixedLenFeature are tensor features that have a fixed shape. These features are stacked together (torch.stack(features, dim=0)) while collating.
    • FixedLenSequenceFeature represent a list of tensor features such that each list element has the same fixed shape. These features are padded and batched (tx.data.padded_batch(features)) while collating.
    • VarLenFeature can be tensors of any shape, or other objects that are not tensors. These features are simply put into a list (list(features)) while collating.
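
    To make the three collation behaviors concrete, here is a minimal sketch with toy values (plain PyTorch calls stand in for the library's internal collation logic):

    import torch

    # FixedLenFeature: every example has the same shape -> stack into one tensor.
    fixed = [torch.zeros(3), torch.ones(3)]
    stacked = torch.stack(fixed, dim=0)                     # shape (2, 3)

    # FixedLenSequenceFeature: variable-length sequences of same-shaped elements
    # -> padded to the longest sequence and batched (what tx.data.padded_batch does).
    seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
    padded = torch.nn.utils.rnn.pad_sequence(seqs, batch_first=True)  # shape (2, 3)

    # VarLenFeature: arbitrary objects -> simply collected into a Python list.
    anything = ["a string", {"key": 1}]
    collated = list(anything)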

    I think we can rename the feature types to either:

    • describe the object types of the input features...
      • tensor, tensor_seq, and any/other.
    • ... or, describe how they're collated.
      • tensor, pad_tensor, and list.

    But yeah, any suggestions are welcome.


    Also, for the feature_original_types format: https://github.com/asyml/texar-pytorch/blob/43967ee238a5da2e996f4f71644a940a86cad009/texar/torch/data/data/record_data.py#L444-L470

    Since VarLenFeature can be any object, it does not really make sense to ask the user to write a dtype. Maybe we can also accept None here.

    enhancement topic: data 
    opened by huzecong 10
  • Add vae_text example

    • Port vae_text from texar-TF.

    • Add external distribution MultivariateNormalDiag.

    • Add preprocessing for data batch.

    • Modify None checking condition for initial_state in RNNDecoderBase.

    • Modify max_pos for config_trans_yahoo.py.

    • Modify connectors mlp function.

    • Refactor vae_text training & generation decoder.

    • Refactor vae_text decoder embeddings.

    • Refactor to import texar.torch.

    • Polish code.

    opened by TomNong 10
  • Workflow and code update for numpy versions from 1.15 to 1.21

    This PR fixes https://github.com/asyml/texar-pytorch/issues/333 and fixes https://github.com/asyml/texar-pytorch/issues/341

    Code is updated to pass the mypy test with numpy>=1.20 (which fixes #333)

    Workflow is updated to enumerate numpy versions (from 1.15 to 1.21), and further include pytorch version (1.7.1 and 1.8.1), which fixes #341

    Note: In mypy.ini, I changed warn_unused_ignores to False (here) for the enumeration of numpy versions. I'm not sure if there is a better way to do it, like in mypy-torch. It looks like adding the same line under [mypy-numpy] doesn't work.

    opened by tanyuqian 9
  • Update RoBERTa vocabulary files

    import torch
    roberta = torch.hub.load('pytorch/fairseq', 'roberta.base')
    roberta.eval()
    
    tokens = roberta.encode('Hello world!')
    print(tokens)  # [    0, 31414,   232,   328,     2]
    
    import texar.torch as tx
    tokenizer = tx.data.RoBERTaTokenizer(pretrained_model_name='roberta-base')
    
    input_ids, _ = tokenizer.encode_text('Hello world!', max_seq_length=5)
    print(input_ids)  # [0, 31414, 232, 328, 2]
    
    opened by gpengzhi 9
  • Resolve issue #196: Data module enhancement

    1. Removed the device moving operations (.to(device)) in collate methods of built-in data modules and examples.

    2. Added a call to move_memory in __next__ of the dataset iterators before returning the batch. move_memory internally calls map_structure to recursively move all tensors to GPU memory (a rough sketch of the idea appears after this list).

      It is worth noting that we could have combined this into the pin_memory thread of the dataloader. We don't do this for two reasons:

      • It is pretty difficult to modify code in PyTorch without basically copy-pasting everything.
      • pin_memory is called for every tensor in the prefetched queue. Moving all of them to CUDA memory might result in excessive memory usage.

      It is also worth noting that this is a better practice than before. Without modifying any code, the Transformer example now runs ~15% faster (172ex/s to 199ex/s).

    3. The DataBase class is now renamed to DatasetBase to avoid confusion. A default collate implementation is not added, since the PyTorch default_collate function has undesirable behaviors (for instance, if each example contains a list of integers, the batch will be collated as a list of tensors), and it'll be inconsistent if we introduce some different default behavior.
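
    A rough sketch of the idea behind item 2 (the helper below is hypothetical and only illustrates what the move_memory/map_structure combination does; it is not the library's implementation):

    import torch

    def move_to_device(batch, device):
        """Recursively move every tensor in a possibly nested batch to `device`."""
        if isinstance(batch, torch.Tensor):
            return batch.to(device, non_blocking=True)
        if isinstance(batch, dict):
            return {k: move_to_device(v, device) for k, v in batch.items()}
        if isinstance(batch, (list, tuple)):
            return type(batch)(move_to_device(v, device) for v in batch)
        return batch

    # Called once per batch in __next__, after the batch leaves the pin_memory thread:
    # batch = move_to_device(batch, torch.device("cuda"))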

    opened by huzecong 9
  • Add Tokenizer Module (Pre-trained Tokenizer)

    The design of the pre-trained tokenizer module borrows ideas from both pytorch-transformer and our pretrained module.

    To instantiate a tokenizer, you can specify pretrained_model_name

    tokenizer = BERTTokenizer(pretrained_model_name='bert-base-uncased')
    

    or specify hparams

    tokenizer = BERTTokenizer(hparams={'pretrained_model_name': 'bert-base-uncased'})
    

    or directly load from the vocab file

    tokenizer = BERTTokenizer(hparams={'pretrained_model_name': None, 'vocab_file': 'path_to_vocab'})
    

    For the downstream tasks, you can add new tokens to the tokenizer

    tokenizer.add_tokens([...])
    

    and get the latest vocabulary size, which can be used to define the downstream model

    current_vocab_size = len(tokenizer)
    

    You can also save the tokenizer to a directory (vocab file, special token file, added token file, and config file will be saved)

    tokenizer.save('path-to-directory')
    

    or load one from a directory

    tokenizer = BERTTokenizer.load('path-to-directory')
    

    Basically, we provide four core functions for each tokenizer: text-to-token, text-to-id, id-to-token, and id-to-text.

    tokens = tokenizer(inputs=text, task='text-to-token')
    ids = tokenizer(inputs=text, task='text-to-id')
    tokens = tokenizer(inputs=ids, task='id-to-token')
    text = tokenizer(inputs=ids, task='id-to-text')
    
    opened by gpengzhi 9
  • Not support Windows system?

    When I run the code on Windows, I get the error

    RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAType instead

    whereas running the same code on Linux works fine.

    I checked that the PyTorch versions were both 1.1.0.

    bug 
    opened by Codle 9
  • Seq2Seq Example with GPU Support

    Hello,

    How can I run the Seq2Seq example with my GPU?

    I already modified the training data as well as the model to use the CUDA device:

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 
    
    train_data = tx.data.PairedTextData(hparams=config_data.train, device=device)
    val_data = tx.data.PairedTextData(hparams=config_data.val, device=device)
    test_data = tx.data.PairedTextData(hparams=config_data.test, device=device)
    
    model = Seq2SeqAttn(train_data)
    model.to(device)
    
    bug priority: high 
    opened by bigabig 9
  • Cannot download dataset in bert example

    When I run

    python data/download_glue_data.py --tasks=MRPC
    

    I got

    dyld: Library not loaded: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib
      Referenced from: /usr/local/bin/wget
      Reason: image not found
    dyld: Library not loaded: /usr/local/opt/openssl/lib/libssl.1.0.0.dylib
      Referenced from: /usr/local/bin/wget
      Reason: image not found
    Processing MRPC...
    Traceback (most recent call last):
      File "data/download_glue_data.py", line 152, in <module>
        sys.exit(main(sys.argv[1:]))
      File "data/download_glue_data.py", line 142, in main
        format_mrpc(args.data_dir, args.path_to_mrpc)
      File "data/download_glue_data.py", line 55, in format_mrpc
        f"Train data not found at {mrpc_train_file}"
    AssertionError: Train data not found at data/MRPC/msr_paraphrase_train.txt
    

    Since you are testing the BERT example, can you investigate this issue? @atif93

    bug topic: examples 
    opened by gpengzhi 7
  • Question about the usage of helper in TransformerDecoder

    Hi~ I want to implement a step-by-step TransformerDecoder with a TrainingHelper(), but I don't know how to call the same forward function as the RNN's, e.g.

    outputs, hidden = self.gru(embedded, hidden) # forward for every step

    Is this done in the step method of the Helper class? Hoping for your help!

    question 
    opened by ha-lins 6
  • Code blocks in docstring are not fully rendered as sphinx doc.

    The code blocks in the docstring are not rendered as expected in the Sphinx docs.

    The code blocks in the following example (and a few that follow) are not rendered correctly in the resulting documentation. There is probably a slight indentation mismatch here.

    bug topic: docs 
    opened by hunterhector 0
  • GPU memory usage when doing beam search

    A BART model (https://arxiv.org/pdf/1910.13461.pdf) is implemented here: https://github.com/tanyuqian/texar-pytorch/tree/master/examples/bart

    It has passed the tests for text classification (MNLI) and summarization (CNN/DM) with greedy decoding, but it fails to run CNN/DM with beam search on a single GTX 1080Ti because of GPU memory limits, even with batch_size=1, beam_width=2, max_decoding_length=140.

    A script to show this issue is here: https://github.com/tanyuqian/texar-pytorch/blob/master/examples/bart/bart_cnn.py (run this code after downloading CNN/DM data following README)

    Note that in this fork, two more hyperparameters are added in TransformerDecoder ('normalize_before' and 'final_layer_norm'): https://github.com/tanyuqian/texar-pytorch/blob/master/texar/torch/modules/decoders/transformer_decoders.py#L290

    question topic: modules 
    opened by tanyuqian 0
  • Error when decoder has more than 1 layer.

    https://github.com/asyml/texar-pytorch/blob/0ba18bff28cd8fff2640021354c15dfd4aef2f72/examples/vae_text/config_lstm_yahoo.py#L62

    The output is the following: RuntimeError: Input batch size 128 doesn't match hidden[0] batch size 256

    The issue is due to passing initial_state=lstm_states when the decoder is called.

    question topic: examples 
    opened by pajola 0
  • Incorporating copy mechanism in decoder

    I'm really enjoying this library, thanks for your work. Just curious, are there any plans to implement some sort of copying mechanism for decoding, e.g. CopyNet (https://arxiv.org/abs/1603.06393)?

    enhancement topic: modules 
    opened by roemmele 2
  • Add ELMo modules

    Add a Texar-styled ELMo encoder adapted from allennlp. The corresponding tokenizer will be in another PR.

    Resolve some comments in #298

    I checked the implementation of ELMo in allennlp. It seems that they use a customized LSTM, so we cannot use our LSTM module to implement it directly. Also, the Highway module they use is different from our HighwayWrapper. I feel that it is better to directly use their implementations, since the correctness of the implementation is guaranteed by their unit tests. Please let me know your thoughts @huzecong

    opened by gpengzhi 2
Releases (v0.1.4)
  • v0.1.4 (Apr 14, 2022)

    • Add tests for python3.8 and python3.9 https://github.com/asyml/texar-pytorch/pull/340
    • workflow and code update for numpy versions from 1.15 to 1.21. https://github.com/asyml/texar-pytorch/pull/352
    • Move out HParams and SpecialTokens to asyml-utilities https://github.com/asyml/texar-pytorch/pull/353
    • Several bug fixes https://github.com/asyml/texar-pytorch/issues/335 https://github.com/asyml/texar-pytorch/pull/345 https://github.com/asyml/texar-pytorch/pull/351
  • v0.1.2 (Mar 29, 2021)

    New features

    1. Integrated texar-pytorch with NNI and AdaptDL for distributed adaptive API (https://github.com/asyml/texar-pytorch/pull/331)
    2. Integrated with NNI for hyperparameter tuning (https://github.com/asyml/texar-pytorch/pull/324)
    3. Add Information Loss based on KL divergence (https://github.com/asyml/texar-pytorch/pull/328)
    4. Allow WordpieceTokenizer to return the original character spans (https://github.com/asyml/texar-pytorch/pull/332)
    5. Add a few modules: RNN Classifier (https://github.com/asyml/texar-pytorch/pull/303), SpanBERT (https://github.com/asyml/texar-pytorch/pull/300)

    Feature improvements:

    1. Fix a few documentation issues.

    Fixes

    1. Fix a type error in Beam Search top-k index (https://github.com/asyml/texar-pytorch/pull/330)
    2. Fix a file operation error in Executor (https://github.com/asyml/texar-pytorch/pull/323)
    3. Fix some evaluator bugs and related file handling (https://github.com/asyml/texar-pytorch/pull/320)
    4. Fix a problem where metrics cannot be pickled (https://github.com/asyml/texar-pytorch/pull/319)
  • v0.1.1 (Feb 7, 2020)

    New features

    • Support PyTorch 1.3. (#249)
    • Add T5 modules (T5Encoder, T5Decoder, and T5EncoderDecoder). (#280)
    • Add T5Tokenizer. (#283)
    • Support PyTorch 1.4. (#291)

    Feature improvements

    • Refactor the interface of GPT2 modules. (#238)
    • Support gpt2-xl checkpoint file in GPT2 modules. (#242)
    • Add code coverage check in CI. (#245)
    • Update the vocabulary files of RoBERTa modules. (#255)
    • Disable codecov/patch check in CI. (#265)
    • Provide option to freeze the embedding parameters. (#271)
    • Add encode_text_for_generation function in XLNetTokenizer. (#278)
    • Use warning instead of error in map_token_to_id function. (#285)
    • Add copyright header to unit tests. (#287)
    • Remove duplicated pytest in CI. (#289)
    • Update the versions of pylint, flake8, and mypy in CI. (#292)

    Fixes

    • Fix the documentation issues in SentencePieceTokenizer. (#236)
    • Fix the bugs in RoBERTa checkpoint file loading procedure. (#241)
    • Fix the documentation issues in Executor. (#244)
    • Fix the documentation issues in gpt-2 example. (#250)
    • Fix the bugs in bidirectional_dynamic_rnn and dynamic_rnn functions. (#252)
    • Fix the bugs in vae_text example. (#253)
    • Fix the bugs in sentence_classifier example. (#262)
    • Fix the path error when installing texar-pytorch in Windows. (#268)
    • Fix the bugs in XLNetTokenizer. (#273)
    • Fix the bugs in download_checkpoint function. (#274)
    • Fix the bugs in google drive downloading function. (#275)
    • Fix the bugs in the unit test of GPT2Decoder. (#288)
    • Fix the documentation issues in Decoder module. (#290)
  • v0.1.0 (Oct 15, 2019)

  • v0.0.1 (Aug 2, 2019)

Owner
ASYML
Machine Learning as Machine Assembly, part of the CASL project https://casl-project.ai/