
Sparsity-promoting dictionary learning algorithms based on hierarchical Bayesian models
Dictionary matching and learning methods have proven to be a flexible and versatile framework for solving inverse problems in which traditional techniques fail, either because the forward model is complex, ill defined, or difficult to parametrize, or because the data are insufficient. The basic idea is to match the data to labeled dictionary entries; the labels of the matching entries then provide an interpretation of the data. The methods can also be applied to traditional classification problems. Dictionary matching is often preceded by a dictionary learning step that yields a reduced dictionary to speed up the computations. To facilitate the interpretation of the results, the solutions are typically required to be sparse. In this talk, we discuss ideas based on hierarchical Bayesian methods that allow efficient computations with sparsity-promoting prior models. The talk is partly based on results in the recent articles \cite{Pragliola,Waniorek,Bocchinfuso}, where the computational algorithms rely on maximum a posteriori (MAP) estimates. From a Bayesian point of view, a central question is how representative the MAP estimate is, and to what extent the sparsity-promoting priors are indeed concentrated around sparse solutions. Numerically efficient sampling algorithms for analyzing these questions are discussed in the talk.
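To fix ideas, a common way to promote sparsity in a hierarchical Bayesian setting is a conditionally Gaussian model; the sketch below is illustrative only, with generic symbols and hyperparameters, and is not necessarily the exact formulation used in the cited articles. Each coefficient $x_j$ is Gaussian with its own variance $\theta_j$, and the variances carry a hyperprior, e.g.,
\[
  x_j \mid \theta_j \sim \mathcal{N}(0,\theta_j), \qquad
  \theta_j \sim \mathrm{Gamma}(\beta, \vartheta_j),
\]
so that for a linear observation model $b = Ax + e$ with $e \sim \mathcal{N}(0,\sigma^2 I)$, the MAP estimate minimizes the Gibbs energy
\[
  \mathcal{E}(x,\theta)
  = \frac{1}{2\sigma^2}\,\|Ax - b\|^2
  + \sum_j \left( \frac{x_j^2}{2\theta_j}
  + \frac{\theta_j}{\vartheta_j}
  - \Bigl(\beta - \tfrac{3}{2}\Bigr)\log\theta_j \right).
\]
This objective can be minimized by alternating updates: for fixed $\theta$ the problem in $x$ is quadratic, while for fixed $x$ each $\theta_j$ is updated componentwise in closed form. Small values of the hyperparameters $\vartheta_j$ drive most variances $\theta_j$, and hence most coefficients $x_j$, toward zero, which is the mechanism by which such priors concentrate around sparse solutions.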