First steps towards Skipping NNLMs


CONFERENCE

Abstract

Statistical language modeling greatly suffers from the effects of data sparsity, which is usually tackled by means of smoothing techniques. Continuous-space language models are able to interpolate unseen word histories, but new problems and challenges arise, such as the very high computational cost of evaluating N-gram probabilities, caused by the softmax normalization constant. Several approaches to reducing this computational cost have been proposed in the literature. This work aims to improve the use of tables of pre-computed softmax normalization constants by incorporating the skipping N-grams technique into Neural Network Language Models (NNLMs), and describes experiments conducted on the IAM-DB corpus to validate the viability of the proposed technique. Skipping in NNLMs acts as a regularizer, and additionally simplifies the use of pre-computed softmax normalization constants, as shown in the preliminary experiments reported in this paper.
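
The cost the abstract refers to is the softmax normalization constant Z(h), which requires summing exp(a_v(h)) over the whole output vocabulary for every history h; pre-computing Z(h) for histories in a table turns that sum into a lookup, and skipping collapses many distinct histories onto fewer variants so the table covers more of the queries made at evaluation time. The Python sketch below only illustrates that idea under stated assumptions: the function names (skip_contexts, nnlm_probability), the particular skipping scheme (dropping up to a fixed number of history positions), and the caching policy are hypothetical illustrations, not the paper's actual implementation.

import itertools
import numpy as np

def skip_contexts(history, max_skips=1):
    # Illustrative "skipping": generate variants of an (n-1)-word history
    # with up to max_skips positions dropped. This is an assumption made
    # for illustration, not necessarily the paper's exact scheme.
    h = tuple(history)
    variants = {h}
    for k in range(1, max_skips + 1):
        for dropped in itertools.combinations(range(len(h)), k):
            variants.add(tuple(w for i, w in enumerate(h) if i not in dropped))
    return variants

def nnlm_probability(output_activations, word_id, history, z_table):
    # P(w | h) = exp(a_w(h)) / Z(h), where Z(h) sums exp(a_v(h)) over the
    # whole vocabulary. The full sum is O(|V|); a table of pre-computed
    # constants turns it into a cheap lookup for histories seen before.
    if history in z_table:
        z = z_table[history]                      # pre-computed constant
    else:
        z = np.exp(output_activations).sum()      # expensive full softmax sum
        z_table[history] = z                      # cache for later reuse
    return np.exp(output_activations[word_id]) / z

In this reading, the benefit of skipping for pre-computation is that reduced histories repeat far more often than full ones, so the hit rate of the normalization-constant table increases and the expensive full-vocabulary sum is needed less frequently.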