
SIMONE REBEGOLDI

Fixed-term Researcher (art. 24 c. 3 lett. B)
Department of Physical, Computer and Mathematical Sciences (former Mathematics site)




Publications

2024 - A new proximal heavy ball inexact line-search algorithm [Journal article]
Bonettini, S.; Prato, M.; Rebegoldi, S.
abstract

We study a novel inertial proximal-gradient method for composite optimization. The proposed method alternates between a variable metric proximal-gradient iteration with momentum and an Armijo-like linesearch based on the sufficient decrease of a suitable merit function. The linesearch procedure allows for considerable flexibility in the choice of the algorithm parameters. We prove the convergence of the sequence of iterates to a stationary point of the problem within a Kurdyka–Łojasiewicz framework. Numerical experiments on a variety of convex and nonconvex problems highlight the superiority of our proposal with respect to several standard methods, especially when the inertial parameter is selected by mimicking the Conjugate Gradient updating rule.
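As a rough illustration of the iteration described above (not the authors' implementation), one may sketch it in Python with the ℓ1 norm as the nonsmooth term, a scalar steplength in place of the variable metric, and the sufficient decrease imposed directly on the objective rather than on the merit function used in the paper; all names and parameter values below are illustrative.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def heavy_ball_prox_grad(f, grad_f, reg, x0, alpha=1.0, beta=0.3,
                         sigma=1e-4, delta=0.5, n_iter=100):
    # f, grad_f: smooth part and its gradient; reg: l1 weight (hypothetical inputs)
    F = lambda z: f(z) + reg * np.abs(z).sum()      # composite objective
    x_old, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + beta * (x - x_old)                  # inertial (heavy-ball) point
        z = soft_threshold(y - alpha * grad_f(y), alpha * reg)
        d = z - x                                   # tentative descent direction
        lam, Fx = 1.0, F(x)
        while F(x + lam * d) > Fx - sigma * lam * np.dot(d, d) and lam > 1e-10:
            lam *= delta                            # Armijo-like backtracking
        x_old, x = x, x + lam * d
    return x

In the paper both the steplength and the inertial parameter are chosen by more refined rules (for instance by mimicking the Conjugate Gradient update); the constants above are placeholders.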


2024 - Barzilai–Borwein-like rules in proximal gradient schemes for ℓ1-regularized problems [Journal article]
Crisci, Serena; Rebegoldi, Simone; Toraldo, Gerardo; Viola, Marco
abstract
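No abstract is available for this entry. For context only, rules of this kind build upon the two classical Barzilai–Borwein steplengths, computed from the most recent displacement and gradient change; the Python sketch below shows the standard textbook formulas (with a simple positivity safeguard), not the rules proposed in the paper.

import numpy as np

def bb_steplengths(x, x_old, g, g_old, eps=1e-12):
    # classical Barzilai-Borwein rules from s = x_k - x_{k-1}, y = g_k - g_{k-1}
    s = x - x_old
    y = g - g_old
    alpha_bb1 = np.dot(s, s) / max(np.dot(s, y), eps)   # BB1 steplength
    alpha_bb2 = np.dot(s, y) / max(np.dot(y, y), eps)   # BB2 steplength
    return alpha_bb1, alpha_bb2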


2023 - A stochastic first-order trust-region method with inexact restoration for finite-sum minimization [Journal article]
Bellavia, S.; Krejic, N.; Morini, B.; Rebegoldi, S.
abstract

We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. Using a suitable reformulation of the given problem, our method combines the inexact restoration approach for constrained optimization with the trust-region procedure and random models. Unlike other recent stochastic trust-region schemes, our proposed algorithm improves feasibility and optimality in a modular way. We provide the expected number of iterations for reaching a near-stationary point by imposing some probability accuracy requirements on random functions and gradients which are, in general, less stringent than the corresponding ones in the literature. We validate the proposed algorithm on some nonconvex optimization problems arising in binary classification and regression, showing that it performs well in terms of cost and accuracy, and reduces the burdensome tuning of the hyper-parameters involved.
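A minimal sketch of the kind of subsampled first-order trust-region step described above, assuming the finite-sum terms are exposed as per-sample function and gradient callables; the function and parameter names are hypothetical and the inexact-restoration bookkeeping on the sample size is omitted.

import numpy as np

def subsampled_tr_step(x, f_i, grad_i, sample, delta, eta=0.1):
    # subsampled estimates of the objective and gradient on the current sample
    f_S = np.mean([f_i(i, x) for i in sample])
    g_S = np.mean([grad_i(i, x) for i in sample], axis=0)
    # first-order model minimized on the ball ||p|| <= delta: p = -delta * g / ||g||
    p = -delta * g_S / (np.linalg.norm(g_S) + 1e-12)
    pred = -np.dot(g_S, p)                                  # predicted reduction
    ared = f_S - np.mean([f_i(i, x + p) for i in sample])   # actual reduction
    rho = ared / max(pred, 1e-12)
    if rho >= eta:
        return x + p, min(2.0 * delta, 10.0)                # accept, possibly enlarge radius
    return x, 0.5 * delta                                   # reject, shrink radius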


2023 - An abstract convergence framework with application to inertial inexact forward–backward methods [Journal article]
Bonettini, S.; Ochs, P.; Prato, M.; Rebegoldi, S.
abstract

In this paper we introduce a novel abstract descent scheme suited for the minimization of proper and lower semicontinuous functions. The proposed abstract scheme generalizes a set of properties that are crucial for the convergence of several first-order methods designed for nonsmooth nonconvex optimization problems. Such properties guarantee the convergence of the full sequence of iterates to a stationary point, if the objective function satisfies the Kurdyka–Łojasiewicz property. The abstract framework allows for the design of new algorithms. We propose two inertial-type algorithms with implementable inexactness criteria for the main iteration update step. The first algorithm, i2Piano, exploits large steps by adjusting a local Lipschitz constant. The second algorithm, iPila, overcomes the main drawback of line-search based methods by enforcing a descent only on a merit function instead of the objective function. Both algorithms have the potential to escape local minimizers (or stationary points) by leveraging the inertial feature. Moreover, they are proved to enjoy the full convergence guarantees of the abstract descent scheme, which is the best we can expect in such a general nonsmooth nonconvex optimization setup using first-order methods. The efficiency of the proposed algorithms is demonstrated on two exemplary image deblurring problems, where we can appreciate the benefits of performing a linesearch along the descent direction inside an inertial scheme.


2023 - On an iteratively reweighted linesearch based algorithm for nonconvex composite optimization [Journal article]
Bonettini, S.; Pezzi, D.; Prato, M.; Rebegoldi, S.
abstract

In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, which is obtained by combining the iteratively reweighted scheme with a finite number of forward–backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward–backward methods, since it can be applied also to minimize functions containing terms that are both nonsmooth and nonconvex. Moreover, the combined scheme can take advantage of acceleration techniques consisting in suitable selection rules for the algorithm parameters. We develop the convergence analysis of the new method within the framework of the Kurdyka–Łojasiewicz property. Finally, we present the results of numerical experiments on microscopy image super-resolution, showing that the performance of our method is comparable or superior to that of other algorithms designed for this specific application.
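A stripped-down Python sketch of the combined scheme, assuming the nonconvex penalty is handled through an ℓ1 reweighting of the form w_i = reg / (|x_i| + eps) and each outer iteration performs a fixed number of plain (non-linesearch) forward–backward steps; the specific reweighting and all names are illustrative, not the paper's.

import numpy as np

def reweighted_forward_backward(grad_f, x0, reg, alpha, eps=1e-2,
                                n_outer=20, n_inner=5):
    x = x0.copy()
    for _ in range(n_outer):
        w = reg / (np.abs(x) + eps)              # reweighting of the l1 term
        for _ in range(n_inner):                 # a few forward-backward steps
            v = x - alpha * grad_f(x)            # forward (gradient) step
            x = np.sign(v) * np.maximum(np.abs(v) - alpha * w, 0.0)  # weighted soft-thresholding
    return x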


2023 - On the Convergence Properties of a Stochastic Trust-Region Method with Inexact Restoration [Journal article]
Bellavia, S.; Morini, B.; Rebegoldi, S.
abstract

We study the convergence properties of SIRTR, a stochastic inexact restoration trust-region method suited for the minimization of a finite sum of continuously differentiable functions. This method combines the trust-region methodology with random function and gradient estimates formed by subsampling. Unlike other existing schemes, it forces the decrease of a merit function by combining the function approximation with an infeasibility term, the latter of which measures the distance of the current sample size from its maximum value. In a previous work, the expected iteration complexity to satisfy an approximate first-order optimality condition was given. Here, we elaborate on the convergence analysis of SIRTR and prove its convergence in probability under suitable accuracy requirements on random function and gradient estimates. Furthermore, we report the numerical results obtained on some nonconvex classification test problems, discussing the impact of the probabilistic requirements on the selection of the sample sizes.


2022 - A nested primal-dual FISTA-like scheme for composite convex optimization problems [Journal article]
Bonettini, S.; Prato, M.; Rebegoldi, S.
abstract

We propose a nested primal–dual algorithm with extrapolation on the primal variable suited for minimizing the sum of two convex functions, one of which is continuously differentiable. The proposed algorithm can be interpreted as an inexact inertial forward–backward algorithm equipped with a prefixed number of inner primal–dual iterations for the proximal evaluation and a “warm-start” strategy for starting the inner loop, and generalizes several nested primal–dual algorithms already available in the literature. By appropriately choosing the inertial parameters, we prove the convergence of the iterates to a saddle point of the problem, and provide an O(1/n) convergence rate on the primal–dual gap evaluated at the corresponding ergodic sequences. Numerical experiments on some image restoration problems show that the combination of the “warm-start” strategy with an appropriate choice of the inertial parameters is strictly required in order to guarantee the convergence to the true minimum point of the objective function.
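To make the "prefixed number of inner primal–dual iterations with warm-start" concrete, the following Python sketch approximates the proximal operator of a one-dimensional total variation term by a few dual projected-gradient iterations, warm-started from the dual variable returned at the previous outer iteration; the inner solver and all names are illustrative, not the exact scheme of the paper.

import numpy as np

def approx_prox_tv_1d(y, lam, p0, n_inner=10, sigma=0.25):
    # approximate solution of min_x 0.5*||x - y||^2 + lam*||D x||_1, D = forward difference,
    # via dual projected-gradient steps; p0 warm-starts the dual variable
    p = p0.copy()
    for _ in range(n_inner):
        x = y - np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))   # x = y - D^T p
        p = np.clip(p + sigma * np.diff(x), -lam, lam)            # dual ascent + projection
    x = y - np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))
    return x, p          # p is reused to warm-start the next outer iteration

In an outer FISTA-like loop, this approximate prox would replace the exact backward step at the extrapolated point, with p carried over from one outer iteration to the next.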


2022 - Analysis of a variable metric block coordinate method under proximal errors [Journal article]
Rebegoldi, S.
abstract

This paper focuses on an inexact block coordinate method designed for nonsmooth optimization, where each block-subproblem is solved by performing a bounded number of steps of a variable metric proximal–gradient method with linesearch. We improve on the existing analysis for this algorithm in the nonconvex setting, showing that the iterates converge to a stationary point of the objective function even when the proximal operator is computed inexactly, according to an implementable inexactness condition. The result is obtained by introducing an appropriate surrogate function that takes into account the inexact evaluation of the proximal operator, and assuming that such function satisfies the Kurdyka–Łojasiewicz inequality. The proof technique employed here may be applied to other new or existing block coordinate methods suited for the same class of optimization problems.


2022 - Scaled, inexact, and adaptive generalized FISTA for strongly convex optimization [Journal article]
Rebegoldi, S.; Calatroni, L.
abstract

We consider a variable metric and inexact version of the FISTA-type algorithm (fast iterative soft-thresholding algorithm) considered in [L. Calatroni and A. Chambolle, SIAM J. Optim., 29 (2019), pp. 1772-1798; A. Chambolle and T. Pock, Acta Numer., 25 (2016), pp. 161-319] for the minimization of the sum of two (possibly strongly) convex functions. The proposed algorithm is combined with an adaptive (nonmonotone) backtracking strategy, which allows for the adjustment of the algorithmic step-size along the iterations in order to improve the convergence speed. We prove a linear convergence result for the function values, which depends on both the strong convexity moduli of the two functions and the upper and lower bounds on the spectrum of the variable metric operators. We validate the proposed algorithm, named Scaled Adaptive GEneralized FISTA (SAGE-FISTA), on exemplar image denoising and deblurring problems where edge-preserving total variation (TV) regularization is combined with Kullback-Leibler-type fidelity terms, as is common in applications where signal-dependent Poisson noise is assumed in the data.
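The adaptive (nonmonotone) backtracking can be sketched as follows: at every iteration the Lipschitz estimate is first tentatively decreased (enlarging the step-size) and then increased only until the usual quadratic upper bound holds. The scaling matrices and the strong-convexity-aware inertial parameters of SAGE-FISTA are left out, so this is only a plain-FISTA illustration; prox_g is assumed to take the point and the step-size, and all names are hypothetical.

import numpy as np

def fista_adaptive_backtracking(f, grad_f, prox_g, x0, L0=1.0,
                                rho=0.9, tau=2.0, n_iter=200):
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iter):
        L *= rho                                     # tentative step-size increase
        while True:
            z = prox_g(y - grad_f(y) / L, 1.0 / L)   # forward-backward step
            d = z - y
            if f(z) <= f(y) + np.dot(grad_f(y), d) + 0.5 * L * np.dot(d, d):
                break                                # quadratic upper bound holds
            L *= tau                                 # backtrack: reduce the step-size
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z + ((t - 1.0) / t_new) * (z - x)        # extrapolation
        x, t = z, t_new
    return x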


2021 - A comparison of nested primal-dual forward-backward methods for Poisson image deblurring [Conference proceedings]
Rebegoldi, Simone; Bonettini, Silvia; Prato, Marco
abstract

We consider an inexact version of the popular Fast Iterative Soft-Thresholding Algorithm (FISTA) suited for minimizing the sum of a differentiable convex data fidelity function plus a nondifferentiable convex regularizer whose proximal operator is not computable in closed form. The proposed method is a nested primal–dual forward–backward method inspired by the methodology developed in [10], according to which the proximal-gradient point is approximated by means of a prefixed number of inner primal-dual iterates initialized with an appropriate warm-start strategy. We report some preliminary numerical results on a weighted least squares total-variation based model for Poisson image deblurring, which show the efficiency of the proposed FISTA-like method with respect to other strategies for defining the inner loop associated with the proximal step.


2021 - A Scaled and Adaptive FISTA Algorithm for Signal-Dependent Sparse Image Super-Resolution Problems [Conference proceedings]
Lazzaretti, Marta; Rebegoldi, Simone; Calatroni, Luca; Estatico, Claudio
abstract

We propose a scaled adaptive version of the Fast Iterative Soft-Thresholding Algorithm, named S-FISTA, for the efficient solution of convex optimization problems with sparsity-enforcing regularization. S-FISTA couples a non-monotone backtracking procedure with a scaling strategy for the proximal–gradient step, which is particularly effective in situations where signal-dependent noise is present in the data. The proposed algorithm is tested on some image super-resolution problems where a sparsity-promoting regularization term is coupled with a weighted-ℓ2 data fidelity. Our numerical experiments show that S-FISTA allows for faster convergence in function values with respect to standard FISTA, as well as being an efficient inner solver for iteratively reweighted ℓ1 algorithms, thus reducing the overall computational times.


2021 - A scaled, inexact and adaptive Fast Iterative Soft-Thresholding Algorithm for convex image restoration [Conference proceedings]
Calatroni, L.; Rebegoldi, S.
abstract

In this note, we consider a special instance of the scaled, inexact and adaptive generalised Fast Iterative Soft-Thresholding Algorithm (SAGE-FISTA) recently proposed in [15] for the efficient solution of strongly convex composite optimisation problems. In particular, we address here the sole (non-strongly) convex optimisation scenario, which is frequently encountered in many imaging applications. The proposed inexact S-FISTA algorithm shows analogies to the variable metric and inexact version of FISTA studied in [6], the main difference being the use of an adaptive (non-monotone) backtracking strategy allowing for the automatic adjustment of the algorithmic step-size along the iterations (see [8], [17]). A quadratic convergence result in function values depending on the backtracking parameters and the upper and lower bounds on the spectrum of the variable metric operators is given. Experimental results on TV image deblurring problems with Poisson noise are then reported for numerical validation, showing improved computational efficiency and precision.


2020 - Convergence of inexact forward-backward algorithms using the forward-backward envelope [Journal article]
Bonettini, S.; Prato, M.; Rebegoldi, S.
abstract

This paper deals with a general framework for inexact forward-backward algorithms aimed at minimizing the sum of an analytic function and a lower semicontinuous, subanalytic, convex term. Such a framework relies on an implementable inexactness condition for the computation of the proximal operator, and a linesearch procedure which is possibly performed whenever a variable metric is allowed into the forward-backward step. The main focus of the work is the convergence of the considered scheme without additional convexity assumptions on the objective function. Toward this aim, we employ the recent concept of forward–backward envelope to define a continuously differentiable surrogate function, which coincides with the objective at its stationary points, and satisfies the so-called Kurdyka–Łojasiewicz (KL) property on its domain. We adapt the abstract convergence scheme usually exploited in the KL framework to our inexact forward-backward scheme, and prove the convergence of the iterates to a stationary point of the problem, as well as the convergence rates for the function values. Finally, we show the effectiveness and the flexibility of the proposed framework on a large-scale image restoration test problem.
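For reference, with a fixed steplength \gamma > 0 the forward–backward envelope used as the surrogate can be written, in its standard (non variable-metric) form, as

F_\gamma(x) \;=\; \min_{z} \Big\{ f(x) + \langle \nabla f(x),\, z - x \rangle + g(z) + \tfrac{1}{2\gamma}\,\| z - x \|^2 \Big\},

where f is the smooth (analytic) part and g the convex term; as stated in the abstract, this surrogate coincides with f + g at stationary points, while the paper itself works with a variable metric and an inexact evaluation of the minimizing z.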


2020 - Efficient Block Coordinate Methods for Blind Cauchy Denoising [Conference proceedings]
Rebegoldi, S.; Bonettini, S.; Prato, M.
abstract

This paper deals with the problem of image blind deconvolution in the presence of Cauchy noise, a type of non-Gaussian, impulsive degradation which frequently appears in engineering and biomedical applications. We consider a regularized version of the corresponding data fidelity function, by adding the total variation regularizer on the image and a Tikhonov term on the point spread function (PSF). The resulting objective function is nonconvex with respect to both the image and the PSF block, which leads to the presence of several uninteresting local minima. We propose to tackle such a challenging problem by means of a block coordinate linesearch-based forward–backward algorithm suited for nonsmooth nonconvex optimization. The proposed method allows performing multiple forward–backward steps on each block of variables, as well as adopting variable steplengths and scaling matrices to accelerate the progress towards a stationary point. The convergence of the scheme is guaranteed by imposing a linesearch procedure at each inner step of the algorithm. We provide some practical, sound rules to adaptively choose both the variable metric parameters and the number of inner iterations on each block. Numerical experiments show that the proposed approach delivers better performance in terms of efficiency and accuracy compared to a more standard block coordinate strategy.
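Schematically, the two-block structure can be sketched as below in Python, where each block is updated by a user-chosen number of forward–backward steps; the prox operators, fidelity gradients, and the inner linesearch of the actual method are abstracted behind hypothetical callables fb_step_x and fb_step_h.

def block_coordinate_fb(x, h, n_outer, inner_x, inner_h, fb_step_x, fb_step_h):
    # x: image block, h: PSF block; fb_step_* perform one (linesearch-based)
    # forward-backward step on one block with the other block kept fixed
    for _ in range(n_outer):
        for _ in range(inner_x):          # several steps on the image block
            x = fb_step_x(x, h)
        for _ in range(inner_h):          # several steps on the PSF block
            h = fb_step_h(x, h)
    return x, h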


2020 - New convergence results for the inexact variable metric forward-backward method [Journal article]
Bonettini, Silvia; Prato, Marco; Rebegoldi, Simone
abstract

Forward-backward methods are valid tools to solve a variety of optimization problems where the objective function is the sum of a smooth, possibly nonconvex term plus a convex, possibly nonsmooth function. The corresponding iteration is built on two main ingredients: the computation of the gradient of the smooth part and the evaluation of the proximity (or resolvent) operator associated with the convex term. One of the main difficulties, from both the implementation and the theoretical points of view, arises when the proximity operator is computed in an inexact way. The aim of this paper is to provide new convergence results about forward-backward methods with inexact computation of the proximity operator, under the assumption that the objective function satisfies the Kurdyka–Łojasiewicz property. In particular, we adopt an inexactness criterion which can be implemented in practice, while preserving the main theoretical properties of the proximity operator. The main result is the proof of the convergence of the iterates generated by the forward-backward algorithm in [1] to a stationary point. Convergence rate estimates are also provided. To the best of our knowledge, there exists no other inexact forward-backward algorithm with proven convergence in the nonconvex case and equipped with an explicit procedure to inexactly compute the proximity operator.


2019 - Recent advances in variable metric first-order methods [Book chapter]
Bonettini, S.; Porta, F.; Prato, M.; Rebegoldi, S.; Ruggiero, V.; Zanni, L.
abstract

Minimization problems often occur in modeling phenomena dealing with real-life applications that nowadays handle large-scale data and require real-time solutions. For these reasons, among all possible iterative schemes, first-order algorithms represent a powerful tool in solving such optimization problems, since they admit a relatively simple implementation and avoid onerous computations during the iterations. On the other hand, a well-known drawback of these methods is their possibly poor convergence rate, which becomes especially evident when a highly accurate solution is required. Consequently, the acceleration of first-order approaches is a widely studied topic which has attracted the efforts of many researchers over the last decades. The possibility of considering a variable underlying metric, changing at each iteration and aimed at capturing local properties of the original problem, has proved to be effective in speeding up first-order methods. In this work we analyze in depth a possible way to include a variable metric in first-order methods for the minimization of a functional which can be expressed as the sum of a differentiable term and a nondifferentiable one. In particular, the strategy discussed can be realized by means of a suitable sequence of symmetric and positive definite matrices belonging to a compact set, together with an Armijo-like linesearch procedure to select the steplength along the descent direction, ensuring a sufficient decrease of the objective function.
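A minimal Python sketch of one such scaled step for a smooth term plus a nonnegativity constraint: the diagonal scaling has entries clipped to a fixed interval (hence the metrics stay in a compact set) and an Armijo linesearch is run along the resulting direction. The callable d_scaling and all parameter names are illustrative, not the chapter's notation.

import numpy as np

def scaled_projected_gradient_step(x, f, grad, d_scaling, alpha,
                                   mu=1e-2, sigma=1e-4, delta=0.5):
    g = grad(x)
    D = np.clip(d_scaling(x), mu, 1.0 / mu)        # diagonal metric kept in a compact set
    y = np.maximum(x - alpha * D * g, 0.0)         # scaled gradient step + projection
    d = y - x                                      # descent direction
    lam = 1.0
    while f(x + lam * d) > f(x) + sigma * lam * np.dot(g, d) and lam > 1e-12:
        lam *= delta                               # Armijo-like backtracking
    return x + lam * d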


2018 - A block coordinate variable metric linesearch based proximal gradient method [Journal article]
Bonettini, Silvia; Prato, Marco; Rebegoldi, Simone
abstract

In this paper we propose an alternating block version of a variable metric linesearch proximal gradient method. This algorithm addresses problems where the objective function is the sum of a smooth term, whose variables may be coupled, plus a separable part given by the sum of two or more convex, possibly nonsmooth functions, each depending on a single block of variables. Our approach is characterized by the possibility of performing several proximal gradient steps for updating every block of variables and by the Armijo backtracking linesearch for adaptively computing the steplength parameter. Under the assumption that the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain and the gradient of the smooth part is locally Lipschitz continuous, we prove the convergence of the iterates sequence generated by the method. Numerical experiments on an image blind deconvolution problem show the improvements obtained by adopting a variable number of inner block iterations combined with a variable metric in the computation of the proximal operator.


2018 - A Bregman inexact linesearch-based forward-backward algorithm for nonsmooth nonconvex optimization [Conference proceedings]
Rebegoldi, S.; Bonettini, S.; Prato, M.
abstract

In this paper, we present a forward–backward linesearch-based algorithm suited for the minimization of the sum of a smooth (possibly nonconvex) function and a convex (possibly nonsmooth) term. The algorithm first computes inexactly the proximal operator with respect to a given Bregman distance, and then ensures a sufficient decrease condition by performing a linesearch along the descent direction. The proposed approach can be seen as an instance of the more general class of descent methods presented in [1]; however, unlike in [1], we do not assume the strong convexity of the Bregman distance used in the proximal evaluation. We prove that each limit point of the iterates sequence is stationary, we show how to compute an approximate proximal–gradient point with respect to a Bregman distance and, finally, we report the good numerical performance of the algorithm on a large scale image restoration problem. [1] S. Bonettini, I. Loris, F. Porta, and M. Prato, Variable metric inexact line-search-based methods for nonsmooth optimization, SIAM J. Optim., 26 (2016), pp. 891–921.


2018 - Inertial variable metric techniques for the inexact forward-backward algorithm [Journal article]
Bonettini, S.; Rebegoldi, S.; Ruggiero, V.
abstract

One of the most popular approaches for the minimization of a convex functional given by the sum of a differentiable term and a nondifferentiable one is the forward-backward method with extrapolation. The main reason making this method very appealing for a wide range of applications is that it achieves an O(1/k²) convergence rate in the objective function values, which is optimal for a first-order method. Recent contributions on this topic are related to the convergence of the iterates to a minimizer and the possibility of adopting a variable metric in the proximal step. Moreover, it has also been proved that the objective function convergence rate is actually o(1/k²). However, these results are obtained under the assumption that the minimization subproblem involved in the backward step is computed exactly, which is clearly not realistic in a variety of relevant applications. In this paper, we analyze the convergence properties when both variable metric and inexact computation of the backward step are allowed. To do this, we adopt a suitable inexactness criterion and we devise implementable conditions on both the accuracy of the inexact backward step computation and the variable metric selection, so that the o(1/k²) rate and the convergence of the iterates are preserved. The effectiveness of the proposed approach is also validated through numerical experiments showing the effects of combining inexactness with variable metric techniques.
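Structurally, the scheme combines a FISTA-like extrapolation with an inexact backward step whose accuracy is tightened along the iterations; the sketch below conveys only this structure. The admissibility criterion of the paper and the variable metric are not reproduced, and inexact_prox is a hypothetical routine returning an approximate proximal point within the given tolerance.

import numpy as np

def inertial_inexact_fb(grad_f, inexact_prox, x0, alpha, n_iter=100, q=3.0):
    x_old, x, t = x0.copy(), x0.copy(), 1.0
    for k in range(1, n_iter + 1):
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_new) * (x - x_old)     # extrapolation (inertial) step
        eps_k = 1.0 / k ** q                          # prox accuracy decreasing with k
        z = inexact_prox(y - alpha * grad_f(y), alpha, eps_k)
        x_old, x, t = x, z, t_new
    return x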


2017 - A comparison of edge-preserving approaches for differential interference contrast microscopy [Journal article]
Rebegoldi, Simone; Bautista, Lola; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract

In this paper we address the problem of estimating the phase from color images acquired with differential-interference-contrast microscopy. In particular, we consider the nonlinear and nonconvex optimization problem obtained by regularizing a least-squares-like discrepancy term with an edge-preserving functional, given by either the hypersurface potential or the total variation one. We investigate the analytical properties of the resulting objective functions, proving the existence of minimum points, and we propose effective optimization tools able to obtain, in both the smooth and the nonsmooth case, accurate reconstructions with a reduced computational demand.


2017 - On the convergence of a linesearch based proximal-gradient method for nonconvex optimization [Journal article]
Bonettini, Silvia; Loris, Ignace; Porta, Federica; Prato, Marco; Rebegoldi, Simone
abstract

We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems and our numerical tests show that our algorithm is flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.


2016 - A cyclic block coordinate descent method with generalized gradient projections [Journal article]
Bonettini, Silvia; Prato, Marco; Rebegoldi, Simone
abstract

The aim of this paper is to present the convergence analysis of a very general class of gradient projection methods for smooth, constrained, possibly nonconvex, optimization. The key features of these methods are the Armijo linesearch along a suitable descent direction and the non-Euclidean metric employed to compute the gradient projection. We develop a very general framework from the point of view of block-coordinate descent methods, which are useful when the constraints are separable. In our numerical experiments we consider a large scale image restoration problem to illustrate the impact of the metric choice on the practical performance of the corresponding algorithm.


2016 - On the constrained minimization of smooth Kurdyka–Łojasiewicz functions with the scaled gradient projection method [Conference proceedings]
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
abstract

The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.


2016 - Phase estimation in Differential-Interference-Contrast (DIC) microscopy [Conference proceedings]
Bautista, Lola; Rebegoldi, Simone; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract

We present a gradient-based optimization method for the estimation of a specimen’s phase function from polychromatic DIC images. The method minimizes the sum of a nonlinear least-squares discrepancy measure and a smooth approximation of the total variation. A new formulation of the gradient and a recent updating rule for the choice of the step size are both exploited to reduce computational time. Numerical simulations on two computer-generated objects show significant improvements, both in efficiency and accuracy, with respect to a more standard choice of the step size.


2016 - TV-regularized phase reconstruction in differential-interference-contrast (DIC) microscopy [Conference proceedings]
Rebegoldi, Simone; Bautista, Lola; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract

In this paper we address the problem of reconstructing the phase from color images acquired with differential-interference-contrast (DIC) microscopy. In particular, we reformulate the problem as the minimization of a least-squares fidelity function regularized with a total variation term, and we address the solution by exploiting a recently proposed inexact forward-backward approach. The effectiveness of this method is assessed on a realistic synthetic test.


2015 - A blind deconvolution method for ground based telescopes and Fizeau interferometers [Journal article]
Prato, Marco; La Camera, A.; Bonettini, Silvia; Rebegoldi, Simone; Bertero, M.; Boccacci, P.
abstract

In the case of ground-based telescopes equipped with adaptive optics systems, the point spread function (PSF) is only poorly known or completely unknown. Moreover, an accurate modeling of the PSF is in general not available. Therefore in several imaging situations the so-called blind deconvolution methods, aiming at estimating both the scientific target and the PSF from the detected image, can be useful. A blind deconvolution problem is severely ill-posed and, in order to reduce the extremely large number of possible solutions, it is necessary to introduce sensible constraints on both the scientific target and the PSF. In a previous paper we proposed a sound mathematical approach based on a suitable inexact alternating minimization strategy for minimizing the generalized Kullback-Leibler divergence, ensuring global convergence. In the framework of this method we showed that an important constraint on the PSF is the upper bound which can be derived from the knowledge of its Strehl ratio. The efficacy of the approach was demonstrated by means of numerical simulations. In this paper, besides improving the previous approach by the use of a further constraint on the unknown scientific target, we extend it to the case of multiple images of the same target obtained with different PSFs. The main application we have in mind is to Fizeau interferometry. As is known, this is a special feature of the Large Binocular Telescope (LBT). Of the two expected interferometers for LBT, one, LINC-NIRVANA, is forthcoming while the other, LBTI, is already operating and has provided the first Fizeau images, demonstrating the possibility of reaching the resolution of a 22.8 m telescope. Therefore the extension of our blind method to this imaging modality seems to be timely. The method is applied to realistic simulations of imaging both by single mirrors and Fizeau interferometers. Successes and failures of the method in the imaging of stellar fields are demonstrated in simple cases. These preliminary results look promising at least in specific situations. The IDL code of the proposed method is available on request and will be included in the forthcoming version of the Software Package AIRY (v.6.1).


2015 - Application of cyclic block generalized gradient projection methods to Poisson blind deconvolution [Conference proceedings]
Rebegoldi, Simone; Bonettini, Silvia; Prato, Marco
abstract

The aim of this paper is to consider a modification of a block coordinate gradient projection method with Armijo linesearch along the descent direction, in which the projection on the feasible set is performed according to a variable non-Euclidean metric. The stationarity of the limit points of the resulting scheme has recently been proved under some general assumptions on the generalized gradient projections employed. Here we test some methods belonging to this class on a blind deconvolution problem with data affected by Poisson noise, and we illustrate the impact of the projection operator choice on the practical performance of the corresponding algorithm.


2014 - Alternating minimization for Poisson blind deconvolution in astronomy [Conference proceedings]
Prato, Marco; Bonettini, Silvia; A., La Camera; Rebegoldi, Simone
abstract

Despite continuous progress in the design of devices that reduce the distorting effects of an optical system, a correct model of the point spread function (PSF) is often unavailable and in general it has to be estimated manually from a measured image. As an alternative to this approach, one can address the so-called blind deconvolution problem, in which the reconstruction of both the target distribution and the model is performed simultaneously by considering the minimization of a fit-to-data function in which both the object and the PSF are unknown. Due to the strong ill-posedness of the resulting inverse problem, suitable a priori information is needed to recover a meaningful solution, which can be included in the minimization problem in the form of constraints on the unknowns. In this work we consider a recent optimization algorithm for the solution of the blind deconvolution problem from data affected by Poisson noise, and we propose a strategy to automatically select its parameters based on a measure of the optimality condition violation. Some numerical simulations on astronomical images show that the proposed approach provides reconstructions very close to those obtained by manually optimizing the algorithm parameters.


2014 - An alternating minimization method for blind deconvolution in astronomy [Poster]
Rebegoldi, Simone; Bonettini, Silvia; A., La Camera; Prato, Marco
abstract

Blind deconvolution is the problem of image deblurring when both the original object and the blur are unknown. In this work, we consider a particular astronomical imaging problem, in which p images of the same astronomical object are acquired and convolved with p different Point Spread Functions (PSFs). According to the maximum likelihood approach, this becomes a constrained minimization problem with p+1 blocks of variables, whose objective function is globally nonconvex. Thanks to the separable structure of the constraints, the problem can be treated by means of an inexact alternating minimization method whose limit points are stationary for the function. This method has been tested on some realistic datasets and the numerical results are hereby reported to show its effectiveness on both sparse and diffuse astronomical objects.