Accepted for publication in JOTA.

Authors:

  • Shota Takahashi
  • Mirai Tanaka
  • Shiro Ikeda

Keywords:

  • Nonnegative matrix factorization
  • Bregman divergence
  • Proximal gradient algorithm

URL:


Abstract:

Nonnegative matrix factorization (NMF) is a popular method in machine learning and signal processing for decomposing a given nonnegative matrix into the product of two nonnegative matrices. In this paper, we propose two new algorithms for NMF: the majorization-minimization Bregman proximal gradient algorithm (MMBPG) and MMBPG with extrapolation (MMBPGe). These iterative algorithms monotonically decrease the objective function and its potential function. Assuming the Kurdyka–Łojasiewicz property, we establish that a sequence generated by MMBPG(e) globally converges to a stationary point. We apply MMBPG and MMBPGe to Kullback–Leibler (KL) divergence-based NMF. Whereas most existing KL-based NMF methods update two blocks or individual variables alternately, our algorithms update all variables simultaneously. MMBPG and MMBPGe for KL-based NMF are equipped with a separable Bregman distance that satisfies the smooth adaptable property and that makes each subproblem solvable in closed form. Using this fact, we guarantee that a sequence generated by MMBPG(e) globally converges to a Karush–Kuhn–Tucker (KKT) point of KL-based NMF. In numerical experiments, we compare the proposed algorithms with existing methods on synthetic and real-world data.
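For context, here is a minimal sketch of the quantities the abstract refers to; the symbols V, W, H, f, g, h, and λ below follow standard Bregman proximal gradient notation and are assumptions, not taken from the paper. KL-based NMF minimizes the generalized KL divergence

\[
D_{\mathrm{KL}}(V \,\|\, WH) = \sum_{i,j} \Bigl( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \Bigr)
\]

over nonnegative W and H, and a generic Bregman proximal gradient step for a smooth term f and a nonsmooth term g takes the form

\[
x^{k+1} \in \operatorname*{arg\,min}_{x} \Bigl\{ \langle \nabla f(x^k),\, x - x^k \rangle + g(x) + \tfrac{1}{\lambda}\, D_h(x, x^k) \Bigr\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\]

where h is a kernel satisfying the smooth adaptable property relative to f. The closed-form solvability claimed in the abstract means that, for the paper's separable choice of h, this arg min admits an explicit formula.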