
FEDERICA PORTA

Fixed-term Researcher (Art. 24, c. 3, lett. a) at: Dipartimento di Scienze Fisiche, Informatiche e Matematiche, sede ex-Matematica




Publications

2021 - Variable metric techniques for forward–backward methods in imaging [Journal article]
Bonettini, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract

Variable metric techniques are a crucial ingredient in many first order optimization algorithms. In practice, they consist in a rule for computing, at each iteration, a suitable symmetric, positive definite scaling matrix by which the gradient vector is multiplied. Besides quasi-Newton BFGS techniques, which have represented the state of the art since the 1970s, new approaches have been proposed in the last decade in the framework of imaging problems expressed in variational form. Such recent approaches are appealing since they can be applied to large scale problems without adding significant computational costs and they produce an impressive improvement in the practical performance of first order methods. These scaling strategies are strictly connected to the shape of the specific objective function and constraints of the optimization problem they are applied to; therefore, they are able to effectively capture the problem features. On the other hand, this strict problem dependence makes it difficult to extend the existing techniques to more general problems. Moreover, in spite of the experimental evidence of their practical effectiveness, their theoretical properties are not well understood. The aim of this paper is to investigate these issues; in particular, we develop a unified framework for scaling techniques, multiplicative algorithms and the Majorization–Minimization approach. With this inspiration, we propose a scaling matrix rule for variable metric first order methods applied to nonnegatively constrained problems exploiting a suitable structure of the objective function. Finally, we evaluate the effectiveness of the proposed approach on some image restoration problems.
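To make the idea concrete, here is a minimal sketch (not the paper's specific rule) of one scaled projected-gradient step for a nonnegatively constrained problem, assuming the classical split-gradient decomposition ∇f(x) = V(x) − U(x) with V(x) > 0, which suggests the diagonal scaling D_k = diag(x_k / V(x_k)):

    import numpy as np

    def scaled_projected_gradient_step(x, grad_f, V, alpha=1.0, d_min=1e-10, d_max=1e10):
        """One scaled projected-gradient step for min f(x) subject to x >= 0.

        Illustrative sketch: the diagonal scaling follows the split-gradient
        idea D = diag(x / V(x)), with V the positive part of a decomposition
        grad f = V - U; the bounds d_min/d_max keep the metric in a compact
        set, as required by the convergence theory."""
        d = np.clip(x / np.maximum(V(x), 1e-16), d_min, d_max)  # diagonal scaling matrix
        y = x - alpha * d * grad_f(x)                            # scaled gradient step
        return np.maximum(y, 0.0)                                # projection onto x >= 0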


2020 - A Limited Memory Gradient Projection Method for Box-Constrained Quadratic Optimization Problems [Conference proceedings]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract

Gradient Projection (GP) methods are a very popular tool to address box-constrained quadratic problems thanks to their simple implementation and low computational cost per iteration compared, for example, with Newton approaches. It is however possible to include in GP schemes some second order information about the problem by means of a clever choice of the steplength parameter which controls the decrease along the anti-gradient direction. Borrowing the analysis developed by Barzilai and Borwein (BB) for an unconstrained quadratic programming problem, in 2012 Roger Fletcher proposed a limited memory steepest descent (LMSD) method able to effectively sweep the spectrum of the Hessian matrix of the quadratic function to be optimized. In this work we analyze how to extend Fletcher's steplength selection rule to GP methods employed to solve box-constrained quadratic problems. In particular, we suggest a way to take into account the lower and the upper bounds in the steplength definition, providing also a theoretical and numerical evaluation of our approach.
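For context, a minimal sketch of a gradient projection iteration for the box-constrained quadratic problem min ½xᵀAx − bᵀx subject to l ≤ x ≤ u, using the standard (unmodified) BB1 steplength rather than the limited-memory rule studied in the paper:

    import numpy as np

    def gp_bb_box(A, b, l, u, x0, n_iter=200):
        """Gradient projection with a BB1 steplength for a box-constrained
        quadratic problem; illustrative sketch without a stopping criterion."""
        x = np.clip(x0, l, u)
        g = A @ x - b                              # gradient of the quadratic
        alpha = 1.0
        for _ in range(n_iter):
            x_new = np.clip(x - alpha * g, l, u)   # projected gradient step
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / max(s @ y, 1e-16)    # BB1 steplength for the next step
            x, g = x_new, g_new
        return x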


2020 - Spectral properties of Barzilai-Borwein rules in solving singly linearly constrained optimization problems subject to lower and upper bounds [Journal article]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract

In 1988, Barzilai and Borwein published a pioneering paper which opened the way to inexpensively accelerating first-order methods. In more detail, in the framework of unconstrained optimization, Barzilai and Borwein developed two strategies to select the step length in gradient descent methods with the aim of encoding some second-order information of the problem without computing and/or employing the Hessian matrix of the objective function. Starting from these ideas, several efficient step length techniques have been suggested in the last decades in order to make gradient descent methods more and more appealing for problems which handle large-scale data and require real-time solutions. Typically, these new step length selection rules have been tuned in the quadratic unconstrained framework for sweeping the spectrum of the Hessian matrix, and then applied also to nonquadratic constrained problems, without any substantial modification, and have proved to be very effective anyway. In this paper, we deeply analyze how, in quadratic and nonquadratic minimization problems, the presence of a feasible region, expressed by a single linear equality constraint together with lower and upper bounds, influences the spectral properties of the original Barzilai-Borwein (BB) rules, generalizing recent results provided for box-constrained quadratic problems. This analysis gives rise to modified BB approaches able not only to capture second-order information but also to exploit the nature of the feasible region. We show the benefits gained by the new step length rules on a set of test problems arising also from machine learning and image processing applications.
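For reference, with s_{k-1} = x_k − x_{k-1} and y_{k-1} = ∇f(x_k) − ∇f(x_{k-1}), the two original BB steplengths are

\[
\alpha_k^{BB1} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}, \qquad
\alpha_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}};
\]

in the unconstrained quadratic case both are reciprocals of Rayleigh quotients of the Hessian, which is the spectral behaviour that the paper re-examines when the feasible region is present.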


2019 - Recent advances in variable metric first-order methods [Book chapter]
Bonettini, S.; Porta, F.; Prato, M.; Rebegoldi, S.; Ruggiero, V.; Zanni, L.
abstract

Minimization problems often occur in modeling phenomena dealing with real-life applications that nowadays handle large-scale data and require real-time solutions. For these reasons, among all possible iterative schemes, first-order algorithms represent a powerful tool in solving such optimization problems, since they admit a relatively simple implementation and avoid onerous computations during the iterations. On the other hand, a well known drawback of these methods is a possibly poor convergence rate, especially when a highly accurate solution is required. Consequently, the acceleration of first-order approaches is a widely studied field which has attracted considerable effort from many researchers in the last decades. The possibility of considering a variable underlying metric, changing at each iteration and aimed at capturing local properties of the original problem, has proved to be effective in speeding up first-order methods. In this work we analyze in depth a possible way to include a variable metric in first-order methods for the minimization of a functional which can be expressed as the sum of a differentiable term and a nondifferentiable one. In particular, the strategy discussed can be realized by means of a suitable sequence of symmetric and positive definite matrices belonging to a compact set, together with an Armijo-like linesearch procedure to select the steplength along the descent direction ensuring a sufficient decrease of the objective function.
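In a common notation (ours, not necessarily the chapter's), the scheme under analysis takes steps of the form

\[
y^{(k)} = \mathrm{prox}^{D_k}_{\alpha_k g}\big(x^{(k)} - \alpha_k D_k^{-1}\nabla f(x^{(k)})\big), \qquad
x^{(k+1)} = x^{(k)} + \lambda_k\big(y^{(k)} - x^{(k)}\big),
\]

where f is the differentiable term, g the nondifferentiable one, the matrices D_k are symmetric positive definite and belong to a compact set, and λ_k is the steplength returned by the Armijo-like linesearch.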


2018 - A nonsmooth regularization approach based on shearlets for Poisson noise removal in ROI tomography [Journal article]
Bubba, Tatiana Alessandra; Porta, Federica; Zanghirati, Gaetano; Bonettini, Silvia
abstract

Due to its potential to lower exposure to X-ray radiation and reduce the scanning time, region-of-interest (ROI) computed tomography (CT) is particularly appealing for a wide range of biomedical applications. To overcome the severe ill-posedness caused by the truncation of projection measurements, ad hoc strategies are required, since traditional CT reconstruction algorithms are unstable with respect to noise and may give inaccurate results for small ROIs. To handle this difficulty, we propose a nonsmooth convex optimization model based on ℓ1 shearlet regularization, whose solution is addressed by means of the variable metric inexact line search algorithm (VMILA), a proximal-gradient method that enables the inexact computation of the proximal point defining the descent direction. We compare the reconstruction performance of our strategy against a smooth total variation (sTV) approach, by using both Poisson noisy simulated data and real data from fan-beam CT geometry. The results show that, while for synthetic data both shearlets and sTV perform well, for real data the proposed nonsmooth shearlet-based approach outperforms sTV, since the localization and directional properties of shearlets make it possible to detect finer structures of a textured image. Finally, our approach appears to be insensitive to the ROI size and location.
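Schematically, the regularized model is of the form (symbols illustrative, not the paper's exact notation)

\[
\min_{x \ge 0} \; D(Mx,\, y) \;+\; \lambda\,\|\mathrm{SH}\,x\|_1,
\]

where M is the truncated ROI projection operator, SH the shearlet transform, and D a convex data-fidelity term suited to Poisson noise (e.g. the Kullback–Leibler divergence).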


2018 - Serial and parallel approaches for image segmentation by numerical minimization of a second-order functional [Journal article]
Zanella, Riccardo; Porta, F.; Ruggiero, Valeria; Zanetti, M.
abstract

Because of its attractive features, second order segmentation has proved to be a promising tool in remote sensing. A known drawback of its implementation is the computational complexity, above all for large sets of data. Recently, in Zanetti et al. [1], an efficient version of the block-coordinate descent algorithm (BCDA) has been proposed for the minimization of a second order elliptic approximation of the Blake–Zisserman functional. Although the parallelization of linear algebra operations is expected to increase the performance of BCDA when addressing the segmentation of large-size gridded data (e.g., full-scene images or Digital Surface Models (DSMs)), numerical evidence shows that this is not sufficient to obtain a significant reduction of the computational time. Therefore a novel approach is proposed, which exploits a decomposition technique of the image domain into tiles. The solution can be computed by applying BCDA to each tile in parallel and combining the partial results corresponding to the different blocks of variables through a proper interconnection rule. We prove that this parallel method (OPARBCDA) generates a sequence of iterates which converges to a critical point of the functional on the level set determined by the starting point. Furthermore, we show that the parallel method can be efficiently implemented even on a commodity multicore CPU. Numerical results are provided to evaluate the efficiency of the parallel scheme on large images in terms of computational cost and its effectiveness with respect to the behavior on the tile junctions.
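The general tiling idea (not the specific interconnection rule of OPARBCDA) can be sketched as follows; solve_tile is a placeholder for the per-tile minimization performed by BCDA:

    import numpy as np
    from multiprocessing import Pool

    def split_tiles(img, n_tiles, overlap):
        """Split a 2-D array into n_tiles horizontal strips with a small overlap."""
        bounds = np.linspace(0, img.shape[0], n_tiles + 1, dtype=int)
        return [img[max(a - overlap, 0):min(b + overlap, img.shape[0])]
                for a, b in zip(bounds[:-1], bounds[1:])]

    def segment_in_parallel(img, solve_tile, n_tiles=4, overlap=8):
        """Run solve_tile (a picklable, top-level function) on each tile in
        parallel; a complete scheme must also reconcile the overlapping
        borders, which is the role of the interconnection rule in the paper."""
        tiles = split_tiles(img, n_tiles, overlap)
        with Pool(processes=n_tiles) as pool:
            return pool.map(solve_tile, tiles)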


2017 - On the convergence of a linesearch based proximal-gradient method for nonconvex optimization [Journal article]
Bonettini, Silvia; Loris, Ignace; Porta, Federica; Prato, Marco; Rebegoldi, Simone
abstract

We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems and our numerical tests show that the algorithm proves to be flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.


2017 - Runge–Kutta-like scaling techniques for first-order methods in convex optimization [Journal article]
Porta, Federica; Cornelio, Anastasia; Ruggiero, Valeria
abstract

It is well known that there is a strong connection between time integration and convex optimization. In this work, inspired by the equivalence between the forward Euler scheme and the gradient descent method, we broaden our analysis to the family of Runge–Kutta methods and show that they enjoy a natural interpretation as first-order optimization algorithms. The strategies intrinsically suggested by Runge–Kutta methods are exploited in order to detail novel proposals for either scaling or preconditioning gradient-like approaches, whose convergence is ensured by the stability condition for Runge–Kutta schemes. The theoretical analysis is supported by numerical experiments carried out on some test problems arising from suitable applications where the proposed techniques can be efficiently employed.
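The starting observation can be stated in one line: applying the forward Euler scheme with stepsize h to the gradient flow

\[
\dot{x}(t) = -\nabla f\big(x(t)\big)
\qquad\text{gives}\qquad
x_{k+1} = x_k - h\,\nabla f(x_k),
\]

i.e. exactly one gradient descent step; replacing Euler with an s-stage Runge–Kutta scheme produces iterations that combine gradient evaluations at intermediate points, with admissible stepsizes dictated by the stability region of the scheme, which is the mechanism exploited here for scaling and preconditioning.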


2016 - A Variable Metric Forward-Backward Method with Extrapolation [Journal article]
Bonettini, Silvia; Porta, Federica; Ruggiero, V.
abstract

Forward-backward methods are a very useful tool for the minimization of a functional given by the sum of a differentiable term and a nondifferentiable one, and their investigation has attracted considerable effort from many researchers in the last decade. In this paper we focus on the convex case and, inspired by recent approaches for accelerating first-order iterative schemes, we develop a scaled inertial forward-backward algorithm which is based on a metric changing at each iteration and on a suitable extrapolation step. Unlike standard forward-backward methods with extrapolation, our scheme is able to handle functions whose domain is not the entire space. Both an O(1/k^2) convergence rate estimate on the objective function values and the convergence of the sequence of the iterates are proved. Numerical experiments on several test problems arising from image processing, compressed sensing, and statistical inference show the effectiveness of the proposed method in comparison to well-performing state-of-the-art algorithms.
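As a rough sketch of the kind of step involved (FISTA-like extrapolation combined with a diagonal variable metric; the actual algorithm and its parameter rules differ in the details):

    import numpy as np

    def scaled_inertial_fb_step(x, x_prev, grad_f, prox_g, d, alpha, beta):
        """One extrapolated (inertial) forward-backward step in the metric
        induced by the diagonal matrix D = diag(d); illustrative only.
        For a separable g, the proximal operator in a diagonal metric acts
        componentwise with effective stepsize alpha / d."""
        z = x + beta * (x - x_prev)        # extrapolation (inertial) step
        w = z - alpha * grad_f(z) / d      # scaled forward (gradient) step
        return prox_g(w, alpha / d)        # backward (proximal) step in the metric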


2016 - On the constrained minimization of smooth Kurdyka–Łojasiewicz functions with the scaled gradient projection method [Conference proceedings]
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
abstract

The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka–Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
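In formulas, one SGP iteration reads

\[
x^{(k+1)} = x^{(k)} + \lambda_k \Big( P_{\Omega, D_k^{-1}}\big(x^{(k)} - \alpha_k D_k \nabla f(x^{(k)})\big) - x^{(k)} \Big),
\]

where Ω is the feasible set, P_{Ω,D_k^{-1}} denotes the projection onto Ω in the norm induced by D_k^{-1}, α_k is the steplength and λ_k the linesearch parameter.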


2016 - The ROI CT problem: a shearlet-based regularization approach [Conference proceedings]
Bubba, Tatiana Alessandra; Porta, Federica; Zanghirati, Gaetano; Bonettini, Silvia
abstract

The possibility to significantly reduce the X-ray radiation dose and shorten the scanning time is particularly appealing, especially for the medical imaging community. Region-of-interest Computed Tomography (ROI CT) has this potential and, for this reason, is currently receiving increasing attention. Due to the truncation of projection images, ROI CT is a rather challenging problem. Indeed, the ROI reconstruction problem is severely ill-posed in general and naive local reconstruction algorithms tend to be very unstable. To obtain a stable and reliable reconstruction, under suitable noise circumstances, we formulate the ROI CT problem as a convex optimization problem with a possibly nonsmooth regularization term based on shearlets. For the solution, we propose and analyze an iterative approach based on the variable metric inexact line-search algorithm (VMILA). The reconstruction performance of VMILA is compared against different regularization conditions, in the case of fan-beam CT simulated data. The numerical tests show that our approach is insensitive to the location of the ROI and remains very stable even when the ROI size is rather small.


2016 - Variable metric inexact line-search based methods for nonsmooth optimization [Journal article]
Bonettini, Silvia; Loris, Ignace; Porta, Federica; Prato, Marco
abstract

We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function plus a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated to the convex part of the objective function, and an Armijo-like rule to determine the step size along this direction ensuring the sufficient decrease of the objective function. In this framework, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the iterates sequence are stationary, while for convex objective functions we prove the convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we also give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point and we present the results of two numerical tests on total variation based image restoration problems, showing that the proposed approach is competitive with other state-of-the-art methods.
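A minimal sketch of this class of methods, with the identity metric and an exact proximal point for simplicity (the paper allows both a variable metric and an inexact proximal computation); soft thresholding is shown as a concrete choice of the proximal operator for g = t‖·‖₁:

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1, a typical convex term g."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_armijo_step(x, f, grad_f, g, prox_g, alpha=1.0,
                              delta=0.5, gamma=1e-4, max_backtracks=50):
        """One linesearch-based proximal-gradient step for min f(x) + g(x).

        The descent direction is d = y - x, with y the proximal-gradient
        point; the steplength lam is reduced until an Armijo-like
        sufficient-decrease condition on F = f + g is satisfied."""
        y = prox_g(x - alpha * grad_f(x), alpha)   # proximal-gradient point
        d = y - x                                  # descent direction
        h = grad_f(x) @ d + g(y) - g(x)            # predicted decrease (a standard choice)
        F = lambda z: f(z) + g(z)
        lam = 1.0
        for _ in range(max_backtracks):
            if F(x + lam * d) <= F(x) + gamma * lam * h:
                break
            lam *= delta                           # backtracking
        return x + lam * d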


2015 - A convergent least-squares regularized blind deconvolution approach [Journal article]
Cornelio, Anastasia; Porta, Federica; Prato, Marco
abstract

The aim of this work is to present a new and efficient optimization method for the solution of blind deconvolution problems with data corrupted by Gaussian noise, which can be reformulated as a constrained minimization problem whose unknowns are the point spread function (PSF) of the acquisition system and the true image. The objective function we consider is the weighted sum of the least-squares fit-to-data discrepancy and possible regularization terms accounting for specific features to be preserved in both the image and the PSF. The solution of the corresponding minimization problem is addressed by means of a proximal alternating linearized minimization (PALM) algorithm, in which each update consists of one step of a gradient projection method along the arc, with the steplength in the descent direction chosen automatically by exploiting the optimality conditions of the problem. The resulting approach is a particular case of a general scheme whose convergence to stationary points of the constrained minimization problem has been recently proved. The effectiveness of the iterative method is validated in several numerical simulations on image reconstruction problems.
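Schematically, the problem has the form (symbols illustrative)

\[
\min_{x \in X,\; h \in H} \;\; \tfrac{1}{2}\,\|h \ast x - y\|_2^2 \;+\; \mu_1 R_1(x) \;+\; \mu_2 R_2(h),
\]

with X and H the constraint sets for the image and the PSF; the PALM-type scheme alternates one gradient projection step along the arc in x with h fixed and one in h with x fixed.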


2015 - A new steplength selection for scaled gradient methods with application to image deblurring [Journal article]
Porta, Federica; Prato, Marco; Zanni, Luca
abstract

Gradient methods are frequently used in large scale image deblurring problems since they avoid the onerous computation of the Hessian matrix of the objective function. Second order information is typically sought by a clever choice of the steplength parameter defining the descent direction, as in the case of the well-known Barzilai and Borwein rules. In a recent paper, a steplength selection strategy approximating the inverses of some eigenvalues of the Hessian matrix has been proposed for gradient methods applied to unconstrained minimization problems. In the quadratic case, this approach is based on a Lanczos process applied every m iterations to the matrix of the gradients computed in the previous m iterations, but the idea can be extended to a general objective function. In this paper we extend this rule to the case of scaled gradient projection methods applied to constrained minimization problems, and we test the effectiveness of the proposed strategy in image deblurring problems both in the presence and in the absence of an explicit edge-preserving regularization term.
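To give the flavour of this steplength rule in the quadratic case, the following sketch uses the Hessian A explicitly for clarity, whereas the actual limited-memory rule recovers the same small matrix from the stored gradients alone:

    import numpy as np

    def ritz_steplengths(A, recent_grads):
        """Return m candidate steplengths as the inverses of the Ritz values
        of A on the subspace spanned by the last m gradients (sketch only).
        The order in which these steplengths are then used within the next
        sweep of m iterations is itself part of the method's design."""
        G = np.column_stack(recent_grads)       # n x m matrix of stored gradients
        Q, _ = np.linalg.qr(G)                  # orthonormal basis of their span
        T = Q.T @ A @ Q                         # m x m projected Hessian
        ritz = np.linalg.eigvalsh((T + T.T) / 2)
        return 1.0 / ritz                       # one steplength per Ritz value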


2015 - Limited-memory scaled gradient projection methods for real-time image deconvolution in microscopy [Journal article]
Porta, Federica; Zanella, R.; Zanghirati, G.; Zanni, Luca
abstract

Gradient projection methods have given rise to effective tools for image deconvolution in several relevant areas, such as microscopy, medical imaging and astronomy. Due to the large scale of the optimization problems arising in today's imaging applications and to the growing demand for real-time reconstructions, an interesting challenge consists in designing new acceleration techniques for gradient schemes, able to preserve their simplicity and the low computational cost of each iteration. In this work we propose an acceleration strategy for a state-of-the-art scaled gradient projection method for image deconvolution in microscopy. The acceleration idea is derived by adapting a steplength selection rule, recently introduced for limited-memory steepest descent methods in unconstrained optimization, to the special constrained optimization framework arising in image reconstruction. We describe how important issues related to the generalization of the steplength rule to the imaging optimization problem have been faced, and we evaluate the improvements due to the acceleration strategy by numerical experiments on large-scale image deconvolution problems.


2015 - On some steplength approaches for proximal algorithms [Journal article]
Porta, Federica; Loris, Ignace
abstract

We discuss a number of novel steplength selection schemes for proximal-based convex optimization algorithms. In particular, we consider the problem where the Lipschitz constant of the gradient of the smooth part of the objective function is unknown. First, we generalize two optimization algorithms of Khobotov type and prove their convergence, also taking into account a possibly inaccurate computation of the proximal operator of the non-smooth part of the objective function. Secondly, we show convergence of an iterative algorithm with an Armijo-type steplength rule, and discuss its use with an approximate computation of the proximal operator. Numerical experiments show the efficiency of the methods in comparison to some existing schemes.


2013 - Filter factor analysis of scaled gradient methods for linear least squares [Conference proceedings]
Porta, Federica; Cornelio, Anastasia; Zanni, Luca; Prato, Marco
abstract

A typical way to compute a meaningful solution of a linear least squares problem involves the introduction of an array of filter factors, whose aim is to avoid noise amplification due to the presence of small singular values. Beyond the classical direct regularization approaches, iterative gradient methods can be thought of as filtering methods, due to their typical capability to recover the desired components of the true solution in the first iterations. For an iterative method, regularization is achieved by stopping the procedure before the noise introduces artifacts, so that the iteration number plays the role of the regularization parameter. In this paper we investigate the filtering and regularizing effects of some first-order algorithms, showing in particular which benefits can be gained in recovering the filters of the true solution by means of a suitable scaling matrix.
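In terms of the singular value decomposition A = Σ_i σ_i u_i v_iᵀ of the model matrix, a filtered solution of min ‖Ax − b‖₂ has the form

\[
x_{\mathrm{filt}} = \sum_i \phi_i\, \frac{u_i^T b}{\sigma_i}\, v_i,
\]

where the filter factors φ_i ∈ [0,1] damp the components associated with small singular values: for instance, Tikhonov regularization with parameter λ yields φ_i = σ_i²/(σ_i² + λ), while k iterations of the Landweber (gradient) method with stepsize τ, started from zero, yield φ_i = 1 − (1 − τσ_i²)^k.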


2013 - On the filtering effect of iterative regularization algorithms for discrete inverse problems [Journal article]
Cornelio, Anastasia; Porta, Federica; Prato, Marco; Zanni, Luca
abstract

Many real-world applications are addressed through a linear least-squares problem formulation, whose solution is calculated by means of an iterative approach. A huge number of studies have been carried out in the optimization field to provide the fastest methods for the reconstruction of the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one has shifted the focus towards iterative algorithms able to combine fast execution with a stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at their way of filtering the singular values, showing in particular the effects of scaling matrices and non-negative constraints in recovering the correct filters of the solution. An original analysis of the filtering effect for the image deblurring problem with Gaussian noise on the data is also provided.