Implemented shortest-path segmentation, maximum-probability segmentation, HMM-based part-of-speech tagging, and BiLSTM+CRF-based named entity recognition.

Overview

Getting Started

Enter the root directory (the ner folder or the seg_tag folder) and run:

pip install -r requirements.txt

Wait for the environment setup to finish.

The program entry point is main.py; run:

python main.py

A single run in the seg_tag folder outputs:

  1. Maximum-probability segmentation results with P, R, F
  2. Maximum-probability segmentation (additive smoothing) results with P, R, F
  3. Maximum-probability segmentation (Jelinek-Mercer interpolation smoothing) results with P, R, F
  4. Shortest-path segmentation results with P, R, F
  5. POS tagging results with P, R, F under two scoring schemes
  6. Running time of each algorithm

The ner folder outputs:

  1. The count of each label and its P, R, F
  2. P, R, F on the test set
  3. The confusion matrix
  4. Running time of the algorithm

Word Segmentation and POS Tagging

File Structure

D:.
│  clean.ipynb      # dataset preprocessing
│  dag.py           # DAG construction
│  dictionary.py    # dictionary construction
│  main.py          # program entry point
│  mpseg.py         # maximum-probability segmentation module
│  pos.py           # POS tagging module
│  spseg.py         # shortest-path segmentation module
│  requirements.txt
│  trie.py          # trie
│  score.py         # scoring functions
│
├─data              # dataset
│      sequences.txt
│      wordpieces.txt
│
└─__pycache__

Every module has been unit-tested and integration-tested.

Code comments follow the Google style.

Building the Dictionary

A class Trie serves as the dictionary data structure; the terminal node of each word stores the word's frequency and its part-of-speech tags.

A trie keeps the memory footprint small by sharing common prefixes between words.

A class Dictionary wraps the trie and collects word frequencies, POS tags, the transition matrix, the emission matrix, and related statistics.
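A minimal sketch of such a trie (the class and method names here, e.g. Node and insert, are illustrative and not necessarily those used in trie.py):

import collections

class Node:
    """A single trie node; terminal nodes carry word statistics."""
    def __init__(self):
        self.children = {}            # character -> Node
        self.is_word = False
        self.count = 0                # occurrences of the word ending here
        self.pos = collections.Counter()  # POS tag -> frequency

class Trie:
    def __init__(self):
        self.root = Node()

    def insert(self, word, tag=None):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, Node())
        node.is_word = True
        node.count += 1
        if tag is not None:
            node.pos[tag] += 1

    def lookup(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node if node.is_word else None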

Dictionary-based Shortest-Path Segmentation

Given a sentence sentence[N], calling the spcut method of class SPseg performs the following steps:

  1. Build a directed acyclic graph from the dictionary (using class DAG)
  2. Run shortest-path dynamic programming (the dp function)
  3. Backtrack to recover the shortest path
  4. Return the segmentation result

Shortest-path segmentation produces the segmentation that uses as few words as possible.
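The idea can be sketched as follows (an illustration, not the actual spcut/dp implementation; it falls back to single characters so every position is reachable):

def shortest_path_segment(sentence, trie):
    """Pick the path through the word DAG that uses the fewest words."""
    n = len(sentence)
    # graph[i]: end positions j such that sentence[i:j] is a dictionary word
    graph = [[j for j in range(i + 1, n + 1)
              if j == i + 1 or trie.lookup(sentence[i:j])]
             for i in range(n)]
    dp = [0] * (n + 1)        # dp[i]: fewest words covering sentence[i:]
    nxt = [0] * n
    for i in range(n - 1, -1, -1):
        dp[i], nxt[i] = min((1 + dp[j], j) for j in graph[i])
    words, i = [], 0
    while i < n:              # backtrack along the chosen edges
        words.append(sentence[i:nxt[i]])
        i = nxt[i]
    return words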

Statistics-based Maximum-Probability Segmentation

Given a sentence sentence[N], calling the mpcut method of class MPseg performs the following steps:

  1. Build a directed acyclic graph from the dictionary (using class DAG)
  2. Compute edge weights from the word frequencies collected in class Dictionary (each edge's weight is the probability of the corresponding word)
  3. Run shortest-path dynamic programming (the dp function)
  4. Backtrack to recover the shortest path
  5. Return the segmentation result

The segmentation y returned by maximum-probability segmentation satisfies

$$
y = \arg\max P(y|x) = \arg\max \frac{P(x|y)P(y)}{P(x)}
$$

where $P(x)$ and $P(x|y)$ are constants, so

$$
\begin{aligned}
y &= \arg\max P(y|x) \\
  &= \arg\max P(y) \\
  &= \arg\max \prod_{i=1}^{n} P(w_i) \\
  &= \arg\max \log \prod_{i=1}^{n} P(w_i) \\
  &= \arg\min \Big(-\sum_{i=1}^{n} \log P(w_i)\Big)
\end{aligned}
$$

Maximizing the probability is therefore equivalent to finding the shortest path on the DAG with edge weights $-\log P(w_i)$.
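As a hedged illustration of the weighting step (word_cost and the frequency lookup are assumptions, not the exact code in mpseg.py), the shortest-path DP sketched above can be reused with the unit edge cost replaced by a negative log probability:

import math

def word_cost(word, freq, total):
    """Edge weight -log P(w); freq is a word -> count mapping."""
    p = freq.get(word, 0) / total if total else 0.0
    # unseen words get infinite cost here; smoothing (next section) avoids this
    return -math.log(p) if p > 0 else float("inf")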

Data Smoothing

To account for unseen events, words with zero observed frequency should still receive some probability mass.

The code provides two smoothing methods:

  1. Additive smoothing
  2. Jelinek-Mercer interpolation

Additive smoothing:
$$
P(w_i) = \frac{\delta + c(w_i)}{\delta|V| + \sum_j c(w_j)}
$$
The code uses $\delta = 1$.

Jelinek-Mercer interpolation:
$$
P(w_i) = \lambda P_{ML}(w_i) + (1-\lambda) P_{unif}
$$
The idea is that the probability under an n-gram model is interpolated with the (n-1)-gram model.

The code takes the zero-order model to be the uniform distribution, $P_{unif} = \frac{1}{|V|}$, and uses a default of $\lambda = 0.9$.
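Both estimators are straightforward to write down (a sketch following the formulas above; argument names are illustrative):

def add_delta_prob(count, total, vocab_size, delta=1.0):
    """Additive smoothing; delta = 1 is the add-one case used in the code."""
    return (delta + count) / (delta * vocab_size + total)

def jm_prob(count, total, vocab_size, lam=0.9):
    """Jelinek-Mercer interpolation of the ML estimate with a uniform model."""
    p_ml = count / total if total else 0.0
    return lam * p_ml + (1 - lam) / vocab_size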

HMM-based POS Tagging

An HMM is a probabilistic graphical model: the emission matrix and the transition matrix are estimated from corpus statistics, and the hidden states (the POS sequence) are inferred for a given observation sequence (the segmentation result).

Given a segmentation result, calling the tagging method of class WordTagging performs the following steps:

  1. Compute emission and transition probabilities from the word-frequency statistics
  2. Run Viterbi decoding to find the hidden-state sequence with maximum probability
  3. Backtrack to recover the hidden-state sequence

The POS sequence obtained by Viterbi decoding of the HMM satisfies:

$$
\begin{aligned}
y &= \arg\max P(y|x) \\
  &= \arg\max \frac{P(y)P(x|y)}{P(x)} \\
  &= \arg\max P(y)P(x|y) \\
  &= \arg\max \pi[t_1]\, b_1[w_1] \prod_{i=1}^{n-1} a[t_i][t_{i+1}]\, b_{i+1}[w_{i+1}] \\
  &= \arg\max \log\Big(\pi[t_1]\, b_1[w_1] \prod_{i=1}^{n-1} a[t_i][t_{i+1}]\, b_{i+1}[w_{i+1}]\Big) \\
  &= \arg\min -\Big(\log \pi[t_1] + \log b_1[w_1] + \sum_{i=1}^{n-1} \big(\log a[t_i][t_{i+1}] + \log b_{i+1}[w_{i+1}]\big)\Big)
\end{aligned}
$$
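A compact log-space Viterbi sketch (pi, A, and B mirror the symbols in the formula; the repository's WordTagging class may organize the tables differently):

import math

def viterbi(obs, tags, pi, A, B):
    """obs: word sequence; pi[t], A[s][t], B[t][w] are probabilities."""
    cost = lambda p: -math.log(p) if p > 0 else float("inf")
    n = len(obs)
    best = [{t: cost(pi[t]) + cost(B[t].get(obs[0], 0)) for t in tags}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for t in tags:
            prev = min(tags, key=lambda s: best[i - 1][s] + cost(A[s][t]))
            best[i][t] = best[i - 1][prev] + cost(A[prev][t]) + cost(B[t].get(obs[i], 0))
            back[i][t] = prev
    path = [min(tags, key=lambda t: best[n - 1][t])]  # cheapest final state
    for i in range(n - 1, 0, -1):                     # backtrack
        path.append(back[i][path[-1]])
    return path[::-1]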

Precision, Recall, F1 Score, and Performance

From the formulas
$$
P = \frac{\text{correct results in the system output}}{\text{total results in the system output}}, \qquad
R = \frac{\text{correct results in the system output}}{\text{results in the test set}}, \qquad
F = \frac{2 \times P \times R}{P + R}
$$
running python main.py performs inference on the test data and produces all of the segmentation and POS-tagging results listed above, together with their precision, recall, F1 score, and timing figures.
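For reference, the metric can be computed as below (a sketch; score.py contains the project's actual scoring logic):

def prf(predicted, gold):
    """Precision, recall, and F1 over two collections of result items."""
    correct = len(set(predicted) & set(gold))
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f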

Segmentation accuracy: MP (with JM smoothing) = MP (with add-one smoothing) > MP (no smoothing) = SP

Smoothing yields better segmentation, and the statistical method (MP) segments better than the dictionary-based method (SP).

For HMM POS tagging, sentences are first segmented with MP (with JM smoothing) and the segmentation result is then tagged. Both a loose metric (ignoring order) and a strict metric (respecting order) are reported.

For a given sequence of length N:

Methods            Inference Time Complexity
MP segmentation    $O(N+M)$
SP segmentation    $O(N+M)$
HMM POS tagging    $O(T^2 N)$

Here $M$ is the number of edges in the DAG and $T$ is the number of POS tags, so the inference complexity of all three algorithms is linear in the sequence length.

Named Entity Recognition

A BiLSTM+CRF model is used.

(Figure: BiLSTM+CRF model architecture)

The input to the BiLSTM is the given sentence (an embedding sequence), and its output is the named-entity label for each token. Through its bidirectional setup it learns dependencies among the observations (the input characters), and during training the LSTM automatically extracts features of the observation sequence for the target task (e.g., entity recognition). However, a BiLSTM cannot learn dependencies and constraints among the output labels.

The CRF amounts to adding a constraint layer on top of the BiLSTM outputs, so the model can also learn dependencies within the output sequence. A traditional CRF requires hand-crafted feature templates, but in this model the feature functions are learned automatically.
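A minimal PyTorch sketch of the emission half of such a model (layer sizes follow the hyperparameter table below; the class is illustrative rather than the exact model.py code, and a CRF layer, e.g. torchcrf.CRF, would sit on top of the emitted scores):

import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """BiLSTM producing per-character emission scores for each NER tag.
    A CRF layer on top of these scores adds transition constraints
    between successive output tags."""
    def __init__(self, vocab_size, tagset_size, embedding_size=128, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embedding_size)
        self.lstm = nn.LSTM(embedding_size, hidden_size // 2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size, tagset_size)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) -> scores: (batch, seq_len, tagset_size)
        emb = self.embed(char_ids)
        out, _ = self.lstm(emb)
        return self.fc(out)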

File Structure

D:.
│  dataloader.py    # dataset loading
│  evaluation.py    # model evaluation
│  main.py          # program entry point
│  model.py         # BiLSTM and BiLSTM+CRF models
│  utils.py         # helper functions
│  requirements.txt
│
├─data_ner          # dataset
│      dev.char.bmes
│      test.char.bmes
│      train.char.bmes
│
├─results           # trained model
│      BiLSTM+CRF.pkl
│
└─__pycache__

Hyperparameter Settings

Total epochs    Batch size    Learning rate    Hidden size    Embedding size
30              64            0.001            128            128

After each epoch the model is evaluated on the development set, and the checkpoint that performs best on the development set is kept as the final (optimal) model.
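Sketched as pseudocode (train_one_epoch, evaluate, and the loaders are hypothetical placeholders for the project's training and evaluation code):

best_f1, best_state = 0.0, None
for epoch in range(30):                                   # total epochs from the table above
    train_one_epoch(model, train_loader, optimizer)       # hypothetical helper
    f1 = evaluate(model, dev_loader)                       # hypothetical helper: F1 on the dev set
    if f1 > best_f1:                                       # keep the best dev checkpoint
        best_f1 = f1
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(best_state)                          # the final "optimal model"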

The model achieves over 95% accuracy on the test set.

References

[1] Zong Chengqing. Statistical Natural Language Processing (统计自然语言处理).

[2] Lample G, Ballesteros M, Subramanian S, et al. Neural architectures for named entity recognition[J]. arXiv preprint arXiv:1603.01360, 2016.

[3] blog: 1. Understanding LSTM Networks -- colah's blog, 2. CRF Layer on the Top of BiLSTM - 1 | CreateMoMo

[4] code: 1. hiyoung123/ChineseSegmentation: 中文分词 (github.com) ,2. luopeixiang/named_entity_recognition: 中文命名实体识别(github.com), 3. Advanced: Making Dynamic Decisions and the Bi-LSTM CRF — PyTorch Tutorials 1.9.1+cu102 documentation

[5] dataset: 1. jiesutd/LatticeLSTM: Chinese NER using Lattice LSTM. Code for ACL 2018 paper. (github.com), 2. 人民日报1998
