Nonparametric Bayesian Models for Spoken Language Understanding

Kei Wakabayashi1, Johane Takeuchi2, Kotaro Funakoshi2, Mikio Nakano2
1Tsukuba University, 2Honda Research Institute Japan Co., Ltd.


Abstract

In this paper, we propose a new generative approach to the semantic slot filling task in spoken language understanding, using a nonparametric Bayesian formalism. Slot filling is typically formulated as a sequential labeling problem, which does not directly model the posterior distribution over possible slot values. We present a nonparametric Bayesian model that involves the generation of arbitrary natural language phrases, allowing explicit calculation of the distribution over an infinite set of slot values. We demonstrate that this approach significantly improves slot estimation accuracy compared to an existing sequential labeling algorithm.