Abstract
A language model based on continuous representations of words is presented and
applied to a statistical machine translation task. The model is implemented as
a bidirectional recurrent neural network, which takes into account both the past
and the future context of a word when making predictions. Since training such a
network is computationally expensive, an instance selection algorithm is used to
obtain relevant training data, aiming to capture the information most useful for
translating a given test set. The results show that the neural model trained on
the selected data outperforms an n-gram language model.
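The core idea of conditioning a prediction on both past and future context can be illustrated with a minimal sketch. The following toy bidirectional RNN (not the paper's actual implementation; all sizes, parameter names, and the NumPy-only setup are illustrative assumptions) reads a sentence left-to-right and right-to-left, then combines the two hidden states surrounding each position to predict the word there:

```python
import numpy as np

rng = np.random.default_rng(0)

V, H = 10, 8  # toy vocabulary size and hidden size (illustrative values)

# Randomly initialised parameters for a toy bidirectional RNN language model.
E = rng.normal(0, 0.1, (V, H))        # word embeddings
Wf = rng.normal(0, 0.1, (H, H))       # forward recurrence weights
Wb = rng.normal(0, 0.1, (H, H))       # backward recurrence weights
Wo = rng.normal(0, 0.1, (2 * H, V))   # output projection

def bi_rnn_predict(sentence):
    """Return a (T, V) matrix of word probabilities, where position t
    is predicted from the words to its left AND to its right."""
    T = len(sentence)
    fwd = np.zeros((T + 1, H))   # fwd[t] summarises words before position t
    bwd = np.zeros((T + 1, H))   # bwd[t] summarises words from position t on
    for t in range(T):                       # left-to-right pass
        fwd[t + 1] = np.tanh(fwd[t] @ Wf + E[sentence[t]])
    for t in range(T - 1, -1, -1):           # right-to-left pass
        bwd[t] = np.tanh(bwd[t + 1] @ Wb + E[sentence[t]])
    probs = np.empty((T, V))
    for t in range(T):
        # Past context (fwd[t]) and future context (bwd[t + 1]) are
        # concatenated, so the word at t is excluded from its own prediction.
        logits = np.concatenate([fwd[t], bwd[t + 1]]) @ Wo
        ex = np.exp(logits - logits.max())   # softmax over the vocabulary
        probs[t] = ex / ex.sum()
    return probs

probs = bi_rnn_predict([1, 4, 2, 7, 3])
```

Each row of `probs` is a distribution over the vocabulary for one position; training would fit the parameters so that these distributions assign high probability to the observed words.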