An example of optimizing non-differentiable metrics in NLP

Maksim Kretov


In NLP tasks, model performance is often measured with a non-differentiable metric such as the BLEU score, while during the training phase a differentiable surrogate loss such as cross-entropy is typically used. This discrepancy is referred to as the loss-evaluation mismatch. A few ways to address this problem are discussed.
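
To make the mismatch concrete, below is a minimal sketch, assuming PyTorch and NLTK are available; the token ids are hypothetical, and the REINFORCE-style remedy at the end is one common approach from the literature, not necessarily the one discussed in the article. Cross-entropy is differentiable with respect to the model's logits, whereas BLEU is computed on discrete decoded tokens, so the argmax step blocks gradients.

    import torch
    import torch.nn.functional as F
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    smooth = SmoothingFunction().method1
    reference = [1, 2, 3, 4]                         # hypothetical token ids
    logits = torch.randn(4, 10, requires_grad=True)  # per-step vocabulary scores

    # Training loss: cross-entropy is differentiable w.r.t. the logits.
    ce = F.cross_entropy(logits, torch.tensor(reference))
    ce.backward()                                    # gradients flow; training works

    # Evaluation metric: BLEU needs discrete tokens; argmax breaks the gradient.
    hypothesis = logits.argmax(dim=-1).tolist()
    print(sentence_bleu([reference], hypothesis, smoothing_function=smooth))

    # One common remedy (REINFORCE-style policy gradient): treat BLEU as a
    # reward and differentiate the log-probability of a sampled sequence.
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()
    reward = sentence_bleu([reference], sample.tolist(), smoothing_function=smooth)
    surrogate = -reward * dist.log_prob(sample).sum()
    surrogate.backward()   # gradients flow even though BLEU itself is not differentiable

The surrogate loss is an unbiased single-sample estimator of the negative expected BLEU; in practice a baseline is usually subtracted from the reward to reduce variance.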

