
Luca ZANNI
Full Professor, Dipartimento di Scienze Fisiche, Informatiche e Matematiche (sede ex-Matematica)

Publications
2023
 Neural architecture search via standard machine learning methodologies
[Journal article]
Franchini, Giorgia; Ruggiero, Valeria; Porta, Federica; Zanni, Luca
abstract
In the context of deep learning, the most expensive computational phase is the full training of the learning methodology. Indeed, its effectiveness depends on the choice of proper values for the so-called hyperparameters, namely the parameters that are not trained during the learning process, and such a selection typically requires an extensive numerical investigation with the execution of a significant number of experimental trials. The aim of the paper is to investigate how to choose the hyperparameters related to both the architecture of a Convolutional Neural Network (CNN), such as the number of filters and the kernel size at each convolutional layer, and the optimisation algorithm employed to train the CNN itself, such as the steplength, the minibatch size and the potential adoption of variance reduction techniques. The main contribution of the paper consists in introducing an automatic Machine Learning technique to set these hyperparameters in such a way that a measure of the CNN performance can be optimised. In particular, given a set of values for the hyperparameters, we propose a low-cost strategy to predict the performance of the corresponding CNN, based on its behaviour after only a few steps of the training process. To achieve this goal, we generate a dataset whose input samples are provided by a limited number of hyperparameter configurations together with the corresponding CNN measures of performance obtained with only a few steps of the CNN training process, while the label of each input sample is the performance corresponding to a complete training of the CNN. Such a dataset is used as the training set for Support Vector Machines for Regression and/or Random Forest techniques to predict the performance of the considered learning methodology, given its performance at the initial iterations of its learning process. Furthermore, by a probabilistic exploration of the hyperparameter space, we are able to find, at quite a low cost, the setting of the CNN hyperparameters which provides the optimal performance. The results of an extensive numerical experimentation, carried out on CNNs, together with the use of our performance predictor with NAS-Bench-101, highlight how the proposed methodology for the hyperparameter setting appears very promising.
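The prediction step described in this abstract can be sketched as follows. This is our illustration on synthetic data, with a simple ridge-regression predictor standing in for the SVR / Random Forest regressors used in the paper; all variable names and the toy model generating the data are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: each row holds a hyperparameter configuration
# (e.g. number of filters, kernel size, steplength, minibatch size)
# plus the validation accuracy measured after only a few training
# steps; the target is the accuracy after a complete training.
n_configs = 40
X = rng.uniform(0.0, 1.0, size=(n_configs, 5))   # 4 hyperparameters + early accuracy
y = 0.5 + 0.4 * X[:, -1] + 0.05 * X[:, 0] + 0.01 * rng.standard_normal(n_configs)

# Ridge regression stands in for the SVR / Random Forest regressors:
# it maps (configuration, early accuracy) to predicted final accuracy.
lam = 1e-3
A = np.hstack([X, np.ones((n_configs, 1))])      # add an intercept column
w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def predict_final_accuracy(config_and_early_acc):
    """Predict full-training accuracy from a cheap partial training run."""
    z = np.append(config_and_early_acc, 1.0)
    return float(z @ w)

new_config = rng.uniform(0.0, 1.0, size=5)
print(predict_final_accuracy(new_config))
```

Once such a predictor is trained, candidate hyperparameter configurations can be ranked by predicted final performance at the cost of a few training steps each, which is the source of the savings claimed in the abstract.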
2022
 Hybrid limited memory gradient projection methods for box-constrained optimization problems
[Journal article]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract
Gradient projection methods represent effective tools for solving large-scale constrained optimization problems thanks to their simple implementation and low computational cost per iteration. Despite these good properties, a slow convergence rate can affect gradient projection schemes, especially when highly accurate solutions are needed. A strategy to mitigate this drawback consists in properly selecting the values for the steplength along the negative gradient. In this paper, we consider the class of gradient projection methods with line search along the projected arc for box-constrained minimization problems and we analyse different strategies to define the steplength. It is well known in the literature that steplength selection rules able to approximate, at each iteration, the eigenvalues of the inverse of a suitable submatrix of the Hessian of the objective function can improve the performance of gradient projection methods. In this perspective, we propose an automatic hybrid steplength selection technique that employs a proper alternation of standard Barzilai–Borwein rules, when the final active set is not well approximated, and a generalized limited memory strategy based on the Ritz-like values of the Hessian matrix restricted to the inactive constraints, when the final active set is reached. Numerical experiments on quadratic and non-quadratic test problems show the effectiveness of the proposed steplength scheme.
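A minimal sketch of the building blocks named in this abstract, on a synthetic box-constrained quadratic: gradient projection with alternating Barzilai–Borwein steplengths. This is our illustration of the standard BB rules only, not the authors' hybrid Ritz-based scheme; problem data and sizes are arbitrary.

```python
import numpy as np

# min 0.5 x'Ax - b'x  subject to  l <= x <= u
rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)
l, u = -0.1 * np.ones(n), 0.1 * np.ones(n)

def grad(x):
    return A @ x - b

x = np.zeros(n)
g = grad(x)
alpha = 1.0 / np.linalg.norm(g)      # safe initial steplength
for k in range(500):
    x_new = np.clip(x - alpha * g, l, u)   # projection onto the box
    g_new = grad(x_new)
    s, z = x_new - x, g_new - g
    sz = s @ z
    if sz > 0:
        # Alternate BB1 = s's / s'z and BB2 = s'z / z'z
        alpha = (s @ s) / sz if k % 2 == 0 else sz / (z @ z)
    x, g = x_new, g_new

print(x.min(), x.max())
```

The hybrid method of the paper would, in addition, monitor the active set and switch to Ritz-like steplengths computed on the submatrix of the Hessian restricted to the inactive variables once the active set settles.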
2022
 On the convergence properties of scaled gradient projection methods with nonmonotone Armijo-like line searches
[Journal article]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract
In order to solve constrained optimization problems on convex sets, the class of scaled gradient projection methods is often exploited in combination with nonmonotone Armijo-like line search strategies. These techniques are adopted for efficiently selecting the steplength parameter and can be realized by means of two different approaches: either the one along the arc or the one along the feasible directions. In this paper we deeply analyze the convergence properties of the scaled gradient projection methods equipped with the nonmonotone version of both these Armijo-like line searches. To the best of our knowledge, not all the convergence results proved for either the non-scaled or the monotone gradient projection algorithm have also been stated for the nonmonotone and scaled counterpart. The goal of this paper is to fill this gap by detailing which hypotheses are needed to guarantee both the stationarity of the limit points and the convergence of the sequence generated by the nonmonotone scaled gradient projection schemes. Moreover, in the case of a polyhedral constraint set, we discuss the identification of the active set at the solution for the sequence generated by the along-the-arc approach. Several numerical experiments on quadratic and non-quadratic optimization problems have been carried out in order to compare the behaviour of the considered scaled gradient projection methods.
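The "along the arc" variant with a nonmonotone Armijo-like acceptance test can be sketched as below. This is a generic Grippo-style memory condition on a toy box-constrained quadratic, written by us for illustration; it omits the scaling matrix and the precise hypotheses analyzed in the paper.

```python
import numpy as np

def project(x, l, u):
    return np.clip(x, l, u)

def nonmonotone_armijo_step(f, g, x, l, u, f_hist, gamma=1e-4, beta=0.5, alpha0=1.0):
    """Backtrack alpha until f(P(x - alpha g)) <= max of recent f + gamma * g'(x_new - x)."""
    f_ref = max(f_hist)                       # nonmonotone reference value
    alpha = alpha0
    gx = g(x)
    for _ in range(50):
        x_new = project(x - alpha * gx, l, u)  # point along the projected arc
        if f(x_new) <= f_ref + gamma * gx @ (x_new - x):
            return x_new
        alpha *= beta                          # backtracking reduction
    return x_new

# Toy example: box-constrained quadratic with the solution on the boundary.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
g = lambda x: A @ x - b
l, u = np.array([-0.5, -0.5]), np.array([0.5, 0.5])

x = np.zeros(2)
hist = [f(x)]
for _ in range(100):
    x = nonmonotone_armijo_step(f, g, x, l, u, hist[-10:])
    hist.append(f(x))
print(x)
```

Replacing `max(f_hist)` with `f_hist[-1]` recovers the monotone Armijo rule; the memory length (here 10) is the parameter whose role the convergence analysis in the paper makes precise.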
2021
 Variable metric techniques for forward–backward methods in imaging
[Journal article]
Bonettini, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract
Variable metric techniques are a crucial ingredient in many first-order optimization algorithms. In practice, they consist in a rule for computing, at each iteration, a suitable symmetric, positive definite scaling matrix to be multiplied by the gradient vector. Besides quasi-Newton BFGS techniques, which have represented the state of the art since the 1970s, new approaches have been proposed in the last decade in the framework of imaging problems expressed in variational form. Such recent approaches are appealing since they can be applied to large-scale problems without adding significant computational costs and they produce an impressive improvement in the practical performance of first-order methods. These scaling strategies are strictly connected to the shape of the specific objective function and constraints of the optimization problem they are applied to; therefore, they are able to effectively capture the problem features. On the other hand, this strict problem dependence makes it difficult to extend the existing techniques to more general problems. Moreover, in spite of the experimental evidence of their practical effectiveness, their theoretical properties are not well understood. The aim of this paper is to investigate these issues; in particular, we develop a unified framework for scaling techniques, multiplicative algorithms and the Majorization–Minimization approach. With this inspiration, we propose a scaling matrix rule for variable metric first-order methods applied to nonnegatively constrained problems exploiting a suitable structure of the objective function. Finally, we evaluate the effectiveness of the proposed approach on some image restoration problems.
2020
 A Limited Memory Gradient Projection Method for Box-Constrained Quadratic Optimization Problems
[Conference paper]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract
Gradient Projection (GP) methods are a very popular tool to address box-constrained quadratic problems thanks to their simple implementation and low computational cost per iteration with respect, for example, to Newton approaches. It is however possible to include, in GP schemes, some second-order information about the problem by means of a clever choice of the steplength parameter which controls the decrease along the antigradient direction. Borrowing the analysis developed by Barzilai and Borwein (BB) for an unconstrained quadratic programming problem, in 2012 Roger Fletcher proposed a limited memory steepest descent (LMSD) method able to effectively sweep the spectrum of the Hessian matrix of the quadratic function to optimize. In this work we analyze how to extend Fletcher's steplength selection rule to GP methods employed to solve box-constrained quadratic problems. In particular, we suggest a way to take into account the lower and upper bounds in the steplength definition, providing also a theoretical and numerical evaluation of our approach.
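The Ritz-value idea behind Fletcher's LMSD rule can be sketched as follows for an unconstrained quadratic. For clarity we form the small projected matrix explicitly with the Hessian; the actual method recovers it matrix-free from the stored back gradients, and the paper's contribution is the further extension to the box-constrained case. Data and sizes here are arbitrary.

```python
import numpy as np

# Quadratic f(x) = 0.5 x'Ax - b'x with a known spectrum in [1, 100].
rng = np.random.default_rng(2)
n, m = 30, 5
Q0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q0 @ np.diag(np.linspace(1.0, 100.0, n)) @ Q0.T   # SPD test matrix
b = rng.standard_normal(n)

# Collect m back gradients from plain gradient steps.
x = np.zeros(n)
G = []
alpha = 0.01
for _ in range(m):
    g = A @ x - b
    G.append(g)
    x = x - alpha * g
G = np.column_stack(G)

Q, _ = np.linalg.qr(G)          # orthonormal basis of the gradient span
T = Q.T @ A @ Q                 # small m-by-m Rayleigh-Ritz matrix
ritz = np.linalg.eigvalsh(T)    # Ritz values: estimates of eigenvalues of A
steplengths = 1.0 / ritz[::-1]  # reciprocals serve as the next m steplengths
print(ritz)
```

Using the reciprocals of the Ritz values as a sweep of steplengths is what lets the method attack several parts of the Hessian spectrum in turn, rather than a single eigenvalue per iteration as in the BB rules.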
2020
 Artificial Neural Networks: The Missing Link Between Curiosity and Accuracy
[Conference paper]
Franchini, G.; Burgio, P.; Zanni, L.
abstract
Artificial Neural Networks, as the name itself suggests, are biologically inspired algorithms designed to simulate the way in which the human brain processes information. Like neurons, which consist of a cell nucleus that receives input from other neurons through a web of input terminals, an Artificial Neural Network includes hundreds of single units, artificial neurons or processing elements, connected with coefficients (weights) and organized in layers. The power of neural computation comes from connecting neurons in a network: in fact, an Artificial Neural Network can manage several pieces of information at the same time. What is not fully understood is the most efficient way to train an Artificial Neural Network, and in particular what the best minibatch size is for maximizing accuracy while minimizing training time. The idea that will be developed in this study has its roots in the biological world that inspired the creation of Artificial Neural Networks in the first place. Humans have altered the face of the world through extraordinary adaptive and technological advances: those changes were made possible by our cognitive structure, particularly the ability to reason and build causal models of external events. This dynamism is made possible by a high degree of curiosity. In the biological world, and especially in human beings, curiosity arises from the constant search for knowledge and information: behaviours that support the information sampling mechanism range from the very small (initial minibatch size) to the very elaborate and sustained (increasing minibatch size). The goal of this project is to train an Artificial Neural Network by increasing the minibatch size dynamically, in an adaptive manner (using a validation set); our hypothesis is that this training method will be more efficient (in terms of time and costs) compared to the ones implemented so far.
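A toy version of the validation-driven minibatch growth described above, written by us as an illustration rather than as the authors' procedure: a logistic regression is trained by SGD, and the minibatch size is doubled whenever the validation loss fails to improve. The data, learning rate and doubling rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 2000, 10
w_ref = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = (X @ w_ref + 0.1 * rng.standard_normal(n) > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:1500], y[:1500], X[1500:], y[1500:]

def loss(w, X, y):
    # Numerically stable logistic loss: log(1+e^z) - y*z
    z = X @ w
    return np.mean(np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z)

w = np.zeros(d)
batch, lr = 8, 0.2
best_val = np.inf
for epoch in range(30):
    idx = rng.permutation(len(X_tr))
    for start in range(0, len(X_tr), batch):
        j = idx[start:start + batch]
        z = X_tr[j] @ w
        grad = X_tr[j].T @ (1.0 / (1.0 + np.exp(-z)) - y_tr[j]) / len(j)
        w -= lr * grad
    val = loss(w, X_va, y_va)
    if val >= best_val:            # validation stalled: grow the minibatch
        batch = min(2 * batch, len(X_tr))
    best_val = min(best_val, val)
print(batch, best_val)
```

Starting small keeps early iterations cheap and noisy (exploratory), while the growth reduces gradient variance later on, which is the "curiosity" analogy made in the abstract.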
2020
 GPU acceleration of a model-based iterative method for Digital Breast Tomosynthesis
[Journal article]
Cavicchioli, R.; Hu, J. C.; Loli Piccolomini, E.; Morotti, E.; Zanni, L.
abstract
Digital Breast Tomosynthesis (DBT) is a modern 3D Computed Tomography X-ray technique for the early detection of breast tumors, which is receiving growing interest in the medical and scientific community. Since DBT performs incomplete sampling of data, image reconstruction approaches based on iterative methods are preferable to the classical analytic techniques, such as the Filtered Back Projection algorithm, as they produce fewer artifacts. In this work, we consider a Model-Based Iterative Reconstruction (MBIR) method well suited to describe the DBT data acquisition process and to include prior information on the reconstructed image. We propose a gradient-based solver named Scaled Gradient Projection (SGP) for the solution of the constrained optimization problem arising in the considered MBIR method. Even if the SGP algorithm exhibits fast convergence, the time required on a serial computer for the reconstruction of a real DBT data set is too long for clinical needs. In this paper we propose a parallel SGP version designed to perform the most expensive computations of each iteration on a Graphics Processing Unit (GPU). We apply the proposed parallel approach on three different GPU boards, with computational performance comparable to that of the boards usually installed in commercial DBT systems. The numerical results show that the proposed GPU-based MBIR method provides accurate reconstructions in a time suitable for clinical trials.
2020
 On the Steplength Selection in Stochastic Gradient Methods
[Conference paper]
Franchini, G.; Ruggiero, V.; Zanni, L.
abstract
This paper deals with the steplength selection in stochastic gradient methods for large-scale optimization problems arising in machine learning. We introduce an adaptive steplength selection derived by tailoring a limited memory steplength rule, recently developed in the deterministic context, to the stochastic gradient approach. The proposed steplength rule provides values within an interval, whose bounds need to be prefixed by the user. A suitable choice of the interval bounds allows the method to perform similarly to the standard stochastic gradient method equipped with the best-tuned steplength. Since the setting of the bounds only slightly affects the performance, the new rule makes the tuning of the parameters less expensive than the choice of the optimal prefixed steplength in the standard stochastic gradient method. We evaluate the behaviour of the proposed steplength selection in training binary classifiers on well-known data sets and by using different loss functions.
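The interval-safeguard mechanism can be sketched as below: a BB-type candidate steplength is computed on stochastic gradients and then clamped to a user-prefixed interval. This is our simplified stand-in (BB1 on a linear least-squares model) for the limited memory rule actually proposed in the paper; all data and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.05 * rng.standard_normal(n)   # linear model with noise

def stoch_grad(w, batch=64):
    j = rng.integers(0, n, size=batch)
    return X[j].T @ (X[j] @ w - y[j]) / batch

alpha_min, alpha_max = 1e-3, 1e-1     # the user-prefixed interval
w_old = np.zeros(d)
g_old = stoch_grad(w_old)
w = w_old - alpha_max * g_old
for _ in range(400):
    g = stoch_grad(w)
    s, z = w - w_old, g - g_old
    sz = s @ z
    alpha = (s @ s) / sz if sz > 0 else alpha_max   # BB1-type candidate
    alpha = float(np.clip(alpha, alpha_min, alpha_max))  # safeguard interval
    w_old, g_old = w, g
    w = w - alpha * g
print(np.linalg.norm(w - w_true))
```

The clamp is essential: BB-type ratios computed on stochastic gradients are noisy, and the interval bounds keep occasional outliers from destabilizing the iteration, which is exactly why the abstract stresses the role of the prefixed bounds.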
2020
 Spectral properties of Barzilai–Borwein rules in solving singly linearly constrained optimization problems subject to lower and upper bounds
[Journal article]
Crisci, S.; Porta, F.; Ruggiero, V.; Zanni, L.
abstract
In 1988, Barzilai and Borwein published a pioneering paper which opened the way to inexpensively accelerating first-order methods. In more detail, in the framework of unconstrained optimization, Barzilai and Borwein developed two strategies to select the steplength in gradient descent methods with the aim of encoding some second-order information of the problem without computing and/or employing the Hessian matrix of the objective function. Starting from these ideas, several efficient steplength techniques have been suggested in the last decades in order to make gradient descent methods more and more appealing for problems which handle large-scale data and require real-time solutions. Typically, these new steplength selection rules have been tuned in the quadratic unconstrained framework for sweeping the spectrum of the Hessian matrix, and then applied also to non-quadratic constrained problems, without any substantial modification, showing them to be very effective anyway. In this paper, we deeply analyze how, in quadratic and non-quadratic minimization problems, the presence of a feasible region, expressed by a single linear equality constraint together with lower and upper bounds, influences the spectral properties of the original Barzilai–Borwein (BB) rules, generalizing recent results provided for box-constrained quadratic problems. This analysis gives rise to modified BB approaches able not only to capture second-order information but also to exploit the nature of the feasible region. We show the benefits gained by the new steplength rules on a set of test problems arising also from machine learning and image processing applications.
2020
 Steplength and Minibatch Size Selection in Stochastic Gradient Methods
[Conference paper]
Franchini, G.; Ruggiero, V.; Zanni, L.
abstract
The steplength selection is a crucial issue for the effectiveness of stochastic gradient methods for large-scale optimization problems arising in machine learning. In a recent paper, Bollapragada et al. [1] propose to include an adaptive subsampling strategy into a stochastic gradient scheme. We propose to combine this approach with a selection rule for the steplength, borrowed from the full-gradient scheme known as the Limited Memory Steepest Descent (LMSD) method [4] and suitably tailored to the stochastic framework. This strategy, based on the Ritz-like values of a suitable matrix, provides a local estimate of the Lipschitz constant of the gradient of the objective function without introducing line-search techniques, while the possible increase of the subsample size used to compute the stochastic gradient enables control of the variance of this direction. An extensive numerical experimentation for convex and nonconvex loss functions highlights that the new rule makes the tuning of the parameters less expensive than the selection of a suitable constant steplength in standard and minibatch stochastic gradient methods. The proposed procedure has also been compared with the Momentum and ADAM methods.
2019
 Recent advances in variable metric first-order methods
[Book chapter]
Bonettini, S.; Porta, F.; Prato, M.; Rebegoldi, S.; Ruggiero, V.; Zanni, L.
abstract
Minimization problems often occur in modeling phenomena dealing with real-life applications that nowadays handle large-scale data and require real-time solutions. For these reasons, among all possible iterative schemes, first-order algorithms represent a powerful tool in solving such optimization problems, since they admit a relatively simple implementation and avoid onerous computations during the iterations. On the other hand, a well-known drawback of these methods is a possibly poor convergence rate, especially when a highly accurate solution is required. Consequently, the acceleration of first-order approaches is a much discussed field which has experienced several efforts from many researchers in the last decades. The possibility of considering a variable underlying metric, changing at each iteration and aimed at capturing local properties of the starting problem, has been proved to be effective in speeding up first-order methods. In this work we deeply analyze a possible way to include a variable metric in first-order methods for the minimization of a functional which can be expressed as the sum of a differentiable term and a nondifferentiable one. In particular, the strategy discussed can be realized by means of a suitable sequence of symmetric and positive definite matrices belonging to a compact set, together with an Armijo-like line-search procedure to select the steplength along the descent direction, ensuring a sufficient decrease of the objective function.
2019
 Steplength selection in gradient projection methods for box-constrained quadratic programs
[Journal article]
Crisci, Serena; Ruggiero, Valeria; Zanni, Luca
abstract
The role of steplength selection strategies in gradient methods has been widely investigated in the last decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed that contributed to making the gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most of these steplength rules have been conceived for unconstrained optimization, with the aim of exploiting some second-order information for achieving a fast annihilation of the gradient of the objective function. However, these rules are successfully used also within gradient projection methods for constrained optimization, though, to our knowledge, a detailed analysis of the effects of the constraints on the steplength selections is still not available. In this work we investigate how the presence of box constraints affects the spectral properties of the Barzilai–Borwein rules in quadratic programming problems. The proposed analysis suggests the introduction of new steplength selection strategies specifically designed to take into account the active constraints at each iteration. The results of a set of numerical experiments show the effectiveness of the new rules with respect to other state-of-the-art steplength selections and their potential usefulness also in the case of box-constrained non-quadratic optimization problems.
2018
 On the steplength selection in gradient methods for unconstrained optimization
[Journal article]
Di Serafino, Daniela; Ruggiero, Valeria; Toraldo, Gerardo; Zanni, Luca
abstract
The seminal paper by Barzilai and Borwein (1988) has given rise to an extensive investigation, leading to the development of effective gradient methods. Several steplength rules have been first designed for unconstrained quadratic problems and then extended to general nonlinear optimization problems. These rules share the common idea of attempting to capture, in an inexpensive way, some second-order information. However, the convergence theory of the gradient methods using the previous rules does not explain their effectiveness, and a full understanding of their practical behaviour is still missing. In this work we investigate the relationships between the steplengths of a variety of gradient methods and the spectrum of the Hessian of the objective function, providing insight into the computational effectiveness of the methods, for both quadratic and general unconstrained optimization problems. Our study also identifies basic principles for designing effective gradient methods.
2018
 Reconstruction of 3D X-ray CT images from reduced sampling by a scaled gradient projection algorithm
[Journal article]
Piccolomini, E. Loli; Coli, V. L.; Morotti, E.; Zanni, L.
abstract
We propose a scaled gradient projection algorithm for the reconstruction of 3D X-ray tomographic images from limited data. The problem arises from the discretization of an ill-posed integral problem and, due to the incompleteness of the data, has infinitely many possible solutions. Hence, by following a regularization approach, we formulate the reconstruction problem as the nonnegatively constrained minimization of an objective function given by the sum of a fit-to-data term and a smoothed differentiable Total Variation function. The problem is challenging for its very large size and because a good reconstruction is required in a very short time. For these reasons, we propose to use a gradient projection method, accelerated by exploiting a scaling strategy for defining gradient-based descent directions and generalized Barzilai–Borwein rules for the choice of the steplengths. The numerical results on a 3D phantom are very promising, since they show the ability of the scaling strategy to accelerate the convergence in the first iterations.
2017
 A comparison of edge-preserving approaches for differential interference contrast microscopy
[Journal article]
Rebegoldi, Simone; Bautista, Lola; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract
In this paper we address the problem of estimating the phase from color images acquired with differential-interference-contrast microscopy. In particular, we consider the nonlinear and nonconvex optimization problem obtained by regularizing a least-squares-like discrepancy term with an edge-preserving functional, given by either the hypersurface potential or the total variation one. We investigate the analytical properties of the resulting objective functions, proving the existence of minimum points, and we propose effective optimization tools able to obtain accurate reconstructions with a reduced computational demand in both the smooth and the nonsmooth case.
2017
 A fast gradient projection method for 3D image reconstruction from limited tomographic data
[Conference paper]
Coli, V. L.; Loli Piccolomini, E.; Morotti, E.; Zanni, L.
abstract
We consider in this paper the problem of reconstructing 3D Computed Tomography images from limited data. The problem is modeled as a nonnegatively constrained minimization problem of very large size. In order to obtain an acceptable image in a short time, we propose a scaled gradient projection method, accelerated by exploiting a suitable scaling matrix and efficient rules for the choice of the steplength. In particular, we select the steplength either by alternating Barzilai–Borwein rules or by exploiting a limited number of back gradients for approximating second-order information. Numerical results on a 3D Shepp–Logan phantom are presented and discussed.
2016
 A Note on Spectral Properties of Some Gradient Methods
[Conference paper]
di Serafino, Daniela; Ruggiero, Valeria; Toraldo, Gerardo; Zanni, Luca
abstract
Starting from the work by Barzilai and Borwein, gradient methods have gained a great amount of attention, and efficient low-cost schemes are available nowadays. The acceleration strategies used by these methods are based on the definition of effective steplength updating rules, which capture spectral properties of the Hessian of the objective function. The methods arising from this idea represent effective computational tools, extremely appealing for a variety of large-scale optimization problems arising in applications. In this work we discuss the spectral properties of some recently proposed gradient methods with the aim of providing insight into their computational effectiveness. Numerical experiments supporting and illustrating the theoretical analysis are provided.
2016
 Phase estimation in Differential-Interference-Contrast (DIC) microscopy
[Conference paper]
Bautista, Lola; Rebegoldi, Simone; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract
We present a gradient-based optimization method for the estimation of a specimen's phase function from polychromatic DIC images. The method minimizes the sum of a nonlinear least-squares discrepancy measure and a smooth approximation of the total variation. A new formulation of the gradient and a recent updating rule for the choice of the step size are both exploited to reduce computational time. Numerical simulations on two computer-generated objects show significant improvements, both in efficiency and accuracy, with respect to a more standard choice of the step size.
2016
 Scaled first-order methods for a class of large-scale constrained least squares problems
[Conference paper]
Coli, Vanna Lisa; Ruggiero, Valeria; Zanni, Luca
abstract
Typical applications in signal and image processing often require the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m << n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and A^T can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, the gradient projection methods can be improved by suitable steplength rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored for the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and real data arising from a problem of fibre orientation estimation in diffusion MRI.
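The extrapolation/inertial acceleration mentioned in this abstract can be sketched as a FISTA-like forward-backward iteration for a nonnegatively constrained least squares problem with m << n. This is our generic illustration on random data, without the variable metric that constitutes the paper's actual contribution.

```python
import numpy as np

# min 0.5 ||Ax - b||^2  subject to  x >= 0, with a nonnegative A, m << n.
rng = np.random.default_rng(5)
m_, n_ = 60, 120
A = np.abs(rng.standard_normal((m_, n_)))        # nonnegative matrix
x_ref = np.maximum(rng.standard_normal(n_), 0.0)
b = A @ x_ref                                    # consistent synthetic data

L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n_)
v, t = x.copy(), 1.0
for _ in range(1000):
    g = A.T @ (A @ v - b)                        # forward (gradient) step at extrapolated point
    x_new = np.maximum(v - g / L, 0.0)           # backward step: projection onto x >= 0
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    v = x_new + ((t - 1.0) / t_new) * (x_new - x)   # inertial/extrapolation step
    x, t = x_new, t_new

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Only matrix-vector products with A and A^T are needed, which is why this scheme remains applicable when A is too large to store, as described above; a variable metric would replace the fixed steplength 1/L with an iteration-dependent scaling.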
2016
 TV-regularized phase reconstruction in differential-interference-contrast (DIC) microscopy
[Conference paper]
Rebegoldi, Simone; Bautista, Lola; Blanc Féraud, Laure; Prato, Marco; Zanni, Luca; Plata, Arturo
abstract
In this paper we address the problem of reconstructing the phase from color images acquired with differential-interference-contrast (DIC) microscopy. In particular, we reformulate the problem as the minimization of a least-squares fidelity function regularized with a total variation term, and we address the solution by exploiting a recently proposed inexact forward-backward approach. The effectiveness of this method is assessed on a realistic synthetic test.
2015
 A new steplength selection for scaled gradient methods with application to image deblurring
[Journal article]
Porta, Federica; Prato, Marco; Zanni, Luca
abstract
Gradient methods are frequently used in large-scale image deblurring problems since they avoid the onerous computation of the Hessian matrix of the objective function. Second-order information is typically sought by a clever choice of the steplength parameter defining the descent direction, as in the case of the well-known Barzilai and Borwein rules. In a recent paper, a strategy for the steplength selection approximating the inverse of some eigenvalues of the Hessian matrix has been proposed for gradient methods applied to unconstrained minimization problems. In the quadratic case, this approach is based on a Lanczos process applied every m iterations to the matrix of the gradients computed in the previous m iterations, but the idea can be extended to a general objective function. In this paper we extend this rule to the case of scaled gradient projection methods applied to constrained minimization problems, and we test the effectiveness of the proposed strategy in image deblurring problems in both the presence and the absence of an explicit edge-preserving regularization term.
2015
 Limited-memory scaled gradient projection methods for real-time image deconvolution in microscopy
[Journal article]
Porta, Federica; Zanella, R.; Zanghirati, G.; Zanni, Luca
abstract
Gradient projection methods have given rise to effective tools for image deconvolution in several relevant areas, such as microscopy, medical imaging and astronomy. Due to the large scale of the optimization problems arising in nowadays imaging applications and to the growing request for real-time reconstructions, an interesting challenge consists in designing new acceleration techniques for the gradient schemes, able to preserve their simplicity and the low computational cost of each iteration. In this work we propose an acceleration strategy for a state-of-the-art scaled gradient projection method for image deconvolution in microscopy. The acceleration idea is derived by adapting a steplength selection rule, recently introduced for limited-memory steepest descent methods in unconstrained optimization, to the special constrained optimization framework arising in image reconstruction. We describe how important issues related to the generalization of the steplength rule to the imaging optimization problem have been faced, and we evaluate the improvements due to the acceleration strategy by numerical experiments on large-scale image deconvolution problems.
2015
 Numerical Methods for Parameter Estimation in Poisson Data Inversion
[Journal article]
Zanni, Luca; Benfenati, Alessandro; Bertero, Mario; Ruggiero, Valeria
abstract
In a regularized approach to Poisson data inversion, the problem is reduced to the minimization of an objective function which consists of two terms: a data fidelity function, related to a generalized Kullback-Leibler divergence, and a regularization function expressing a priori information on the unknown image. This second function is multiplied by a parameter β, sometimes called the regularization parameter, which must be suitably estimated to obtain a sensible solution. In order to estimate this parameter, a discrepancy principle has recently been proposed, which implies the minimization of the objective function for several values of β. Since this approach can be computationally expensive, it has also been proposed to replace it with a constrained minimization, the constraint being derived from the discrepancy principle. In this paper we compare the two approaches from the computational point of view. In particular, we propose a secant-based method for solving the discrepancy equation arising in the first approach; when this root-finding algorithm can be combined with an efficient solver of the inner minimization problems, the first approach can be competitive and sometimes faster than the second one.
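The secant-based idea for the discrepancy equation can be sketched as follows. This is an illustrative toy, not the paper's actual solver; the function name and the toy discrepancy function are hypothetical.

```python
import numpy as np

def solve_discrepancy_secant(discrepancy, beta0, beta1, tol=1e-6, max_iter=50):
    """Secant iteration for the scalar discrepancy equation
    discrepancy(beta) = 0 in the regularization parameter beta."""
    d0, d1 = discrepancy(beta0), discrepancy(beta1)
    for _ in range(max_iter):
        if abs(d1 - d0) < 1e-15:
            break  # flat secant: stop rather than divide by ~0
        beta0, beta1 = beta1, beta1 - d1 * (beta1 - beta0) / (d1 - d0)
        d0, d1 = d1, discrepancy(beta1)
        if abs(d1) < tol:
            break
    return beta1

# Toy monotone discrepancy function with root at beta = 2
beta = solve_discrepancy_secant(lambda b: b**2 - 4.0, 1.0, 3.0)
```

In the paper's setting, each evaluation of the discrepancy function requires solving an inner minimization problem, which is why pairing the root finder with an efficient inner solver is the key to making this approach competitive.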
2014
 First Order Methods for Image Deconvolution in Microscopy
[Abstract in Atti di Convegno]
Vicidomini, Giuseppe; Zanella, Riccardo; Zanghirati, Gaetano; Zanni, Luca
abstract
High-performance deconvolution algorithms become crucial to avoid long delays in data analysis pipelines, especially in the case of large-scale imaging problems, such as those arising in astronomy and microscopy. In this work we present effective deconvolution approaches obtained by following two main directions: using acceleration strategies for first-order methods and exploiting Graphics Processing Units (GPUs). Numerical experiments on large-scale image deconvolution problems are performed to evaluate the effectiveness of the proposed optimization algorithms.
2014
 Parameter estimation in regularization models for Poisson data
[Abstract in Atti di Convegno]
Zanni, Luca
abstract
In many imaging applications the image intensity is measured by counting incident particles and, consequently, the fluctuations in the counting process can be taken into account by modeling the data as realizations of Poisson random variables. In such cases, the maximum likelihood approach for image restoration leads to minimization problems in which the data-fidelity function is the generalized Kullback-Leibler (KL) divergence. Since, in general, these optimization problems are ill-conditioned, regularization approaches are necessary, and the design of strategies for selecting a proper value of the regularization parameter is a crucial issue. This is still an open problem for Poisson data and, in special cases, interesting contributions have recently been provided. In this work we consider some regularization models and the theoretical and numerical issues concerning their parameter estimation. Following the idea of the discrepancy principle, we discuss strategies that provide a parameter estimate as the solution of a discrepancy equation, as well as regularization models in which a suitable estimate is obtained by solving constrained optimization problems which force an upper bound on the discrepancy function. Furthermore, reconstruction strategies that require only an overestimate of the regularization parameter will also be presented.
2013
 A practical use of regularization for supervised learning with kernel methods
[Articolo su rivista]
Prato, Marco; Zanni, Luca
abstract
In several supervised learning applications, it happens that reconstruction methods have to be applied repeatedly before the final solution can be achieved. In these situations, the availability of learning algorithms able to provide effective predictors in a very short time may lead to remarkable improvements in the overall computational requirement. In this paper we consider the kernel ridge regression problem and we look for solutions given by a linear combination of kernel functions plus a constant term. In particular, we show that the unknown coefficients of the linear combination and the constant term can be obtained very quickly by applying specific regularization algorithms directly to the linear system arising from the Empirical Risk Minimization problem. From the numerical experiments carried out on benchmark datasets, we observed that in some cases the same results achieved after hours of calculations can be obtained in a few seconds, thus showing that these strategies are well-suited for time-consuming applications.
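The linear system mentioned above can be illustrated with a minimal sketch, assuming the standard kernel-ridge-with-bias (LS-SVM-style) formulation; the helper name and toy data are assumptions, and this is not the authors' code.

```python
import numpy as np

def kernel_ridge_with_bias(K, y, lam):
    """Solve the (n+1)x(n+1) linear system of kernel ridge regression
    with a constant term b (LS-SVM-style formulation, assumed here):
        (K + lam*I) c + b*1 = y,   1^T c = 0
    so that the predictor is f(x) = sum_i c_i k(x, x_i) + b."""
    n = K.shape[0]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K + lam * np.eye(n)
    A[:n, n] = 1.0   # column multiplying the constant term b
    A[n, :n] = 1.0   # constraint row: 1^T c = 0
    sol = np.linalg.solve(A, np.append(y, 0.0))
    return sol[:n], sol[n]

# Toy example: constant targets should be explained by the bias alone
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
c, b = kernel_ridge_with_bias(X @ X.T, 3.0 * np.ones(5), lam=0.1)
```

The point of the paper is that iterative regularization algorithms applied directly to this system can be far cheaper than repeatedly solving it from scratch; the direct solve above is only meant to show the system's structure.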
2013
 Corrigendum: Efficient gradient projection methods for edge-preserving removal of Poisson noise
[Articolo su rivista]
R., Zanella; P., Boccacci; Zanni, Luca; M., Bertero
abstract
This file contains a complete proof of Lemma 1 of the paper "Efficient gradient projection methods for edge-preserving removal of Poisson noise", 2009 Inverse Problems 25 045010.
2013
 Filter factor analysis of scaled gradient methods for linear least squares
[Relazione in Atti di Convegno]
Porta, Federica; Cornelio, Anastasia; Zanni, Luca; Prato, Marco
abstract
A typical way to compute a meaningful solution of a linear least squares problem involves the introduction of an array of filter factors, whose aim is to avoid noise amplification due to the presence of small singular values. Beyond the classical direct regularization approaches, iterative gradient methods can be thought of as filtering methods, due to their typical capability to recover the desired components of the true solution in the first iterations. For an iterative method, regularization is achieved by stopping the procedure before the noise introduces artifacts, making the iteration number play the role of the regularization parameter. In this paper we investigate the filtering and regularizing effects of some first-order algorithms, showing in particular which benefits can be gained in recovering the filters of the true solution by means of a suitable scaling matrix.
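As a minimal illustration of this filtering viewpoint, the textbook filter factors of the plain (unscaled) Landweber iteration can be computed as follows; this is a generic example, not the scaled algorithms analysed in the paper.

```python
import numpy as np

def landweber_filters(sigma, tau, k):
    """Filter factors phi_i = 1 - (1 - tau * sigma_i**2)**k after k
    iterations of the (unscaled) Landweber method on min ||A x - b||^2."""
    return 1.0 - (1.0 - tau * sigma**2) ** k

sigma = np.array([1.0, 0.5, 0.01])   # singular values of a toy matrix A
phi = landweber_filters(sigma, tau=1.0, k=20)
# Components along large singular values are recovered in the first
# iterations, while those along small (noise-dominated) singular values
# stay damped, so the iteration count acts as the regularization parameter.
```

A suitable scaling matrix changes the effective spectrum seen by the iteration, which is precisely how it can help recover the correct filters faster.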
2013
 ML estimation of wavelet regularization hyperparameters in inverse problems
[Relazione in Atti di Convegno]
Cavicchioli, Roberto; C., Chaux; L., Blanc Feraud; Zanni, Luca
abstract
In this paper we are interested in regularizing hyperparameter estimation by maximum likelihood in inverse problems with wavelet regularization. One parameter per subband is estimated by a gradient ascent algorithm. We have to face two main difficulties: i) sampling the a posteriori image distribution to compute the gradient; ii) choosing a suitable stepsize to ensure good convergence properties. We first show that introducing an auxiliary variable makes the sampling feasible using the classical Metropolis-Hastings algorithm and Gibbs sampler. Secondly, we propose an adaptive stepsize selection and a line-search strategy to improve the gradient-based method. The good performance of the proposed approach is demonstrated on both synthetic and real data.
2013
 On the filtering effect of iterative regularization algorithms for discrete inverse problems
[Articolo su rivista]
Cornelio, Anastasia; Porta, Federica; Prato, Marco; Zanni, Luca
abstract
Many real-world applications are addressed through a linear least-squares problem formulation, whose solution is calculated by means of an iterative approach. A large number of studies have been carried out in the optimization field to provide the fastest methods for the reconstruction of the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one changed the point of view in favour of iterative algorithms able to combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at their way of filtering the singular values, showing in particular the effects of scaling matrices and nonnegativity constraints in recovering the correct filters of the solution. An original analysis of the filtering effect for the image deblurring problem with Gaussian noise on the data is also provided.
2013
 Scaled gradient projection methods for astronomical imaging
[Relazione in Atti di Convegno]
M., Bertero; P., Boccacci; Prato, Marco; Zanni, Luca
abstract
We describe recently proposed algorithms, denoted scaled gradient projection (SGP) methods, which provide efficient and accurate reconstructions of astronomical images. We restrict the presentation to the case of data affected by Poisson noise and of nonnegative solutions; both maximum likelihood and Bayesian approaches are considered. Numerical results are presented to discuss the practical behaviour of the SGP methods.
2013
 Scaling techniques for gradient projection-type methods in astronomical image deblurring
[Articolo su rivista]
Bonettini, Silvia; G., Landi; E., Loli Piccolomini; Zanni, Luca
abstract
The aim of this paper is to present a computational study on scaling techniques in gradient projection-type (GP-type) methods for the deblurring of astronomical images corrupted by Poisson noise. In this case, the imaging problem is formulated as a nonnegatively constrained minimization problem in which the objective function is the sum of a fit-to-data term, the Kullback–Leibler divergence, and a Tikhonov regularization term. The considered GP-type methods are formulated by a common iteration formula, where the scaling matrix and the steplength parameter characterize the different algorithms. Within this formulation, both first-order and Newton-like methods are analysed, with particular attention to those implementation features and behaviours relevant for the image restoration problem. The numerical experiments show that suitable scaling strategies can enable the GP methods to quickly approximate accurate reconstructions, and are therefore useful for designing effective image deblurring algorithms.
2013
 Special Issue on "Optimization Methods for Inverse Problems in Imaging": Guest Editorial
[Abstract in Rivista]
M., Bertero; V., Ruggiero; Zanni, Luca
abstract
The aim of this special issue is to focus on the growing interaction between inverse problems in imaging science and optimization, which in recent years has given rise to significant advances in both areas: optimization-based tools have been developed to solve challenging image reconstruction problems, while the experience with imaging problems has led to an improved and deeper understanding of certain optimization algorithms. The issue includes 10 peer-reviewed papers whose contributions represent new advances in numerical optimization for inverse problems with significant impact in signal and image processing.
2013
 Towards real-time image deconvolution: application to confocal and STED microscopy
[Articolo su rivista]
R., Zanella; G., Zanghirati; Cavicchioli, Roberto; Zanni, Luca; P., Boccacci; M., Bertero; G., Vicidomini
abstract
Although deconvolution can improve the quality of any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled gradient projection (SGP) method to provide accelerated versions of the most used algorithms in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data of confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speedup factors from about 25 to 690 (with respect to the conventional algorithm). The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing allows conserving one of the most important properties of STED microscopy, i.e. the ability to provide fast sub-diffraction resolution recordings.
2012
 A practical use of regularization for supervised learning with kernel methods
[Poster]
Prato, Marco; Zanni, Luca
abstract
In several supervised learning applications, it happens that reconstruction methods have to be applied repeatedly before the final solution can be achieved. In these situations, the availability of learning algorithms able to provide effective predictors in a very short time may lead to remarkable improvements in the overall computational requirement. Here we consider the kernel ridge regression problem and we look for predictors given by a linear combination of kernel functions plus a constant term, showing that an effective solution can be obtained very quickly by applying specific regularization algorithms directly to the linear system arising from the Empirical Risk Minimization problem.
2012
 Efficient deconvolution methods for astronomical imaging: algorithms and IDL-GPU codes
[Articolo su rivista]
Prato, Marco; Cavicchioli, Roberto; Zanni, Luca; P., Boccacci; M., Bertero
abstract
Context. The Richardson-Lucy (RL) method is the most popular deconvolution method in astronomy because it preserves the number of counts and the nonnegativity of the original object. Regularization is, in general, obtained by an early stopping of the RL iterations; in the case of point-wise objects such as binaries or open star clusters, iterations can be pushed to convergence. However, it is well known that RL is not an efficient method: in most cases and, in particular, for low noise levels, acceptable solutions are obtained at the cost of hundreds or thousands of iterations. Therefore, several approaches for accelerating RL have been proposed. They are mainly based on the remark that RL is a scaled gradient method for the minimization of the Kullback-Leibler (KL) divergence, or Csiszár I-divergence, which represents the data-fidelity function in the case of Poisson noise. In this framework, a line search along the descent direction is considered for reducing the number of iterations. Aims. In a recent paper, a general optimization method, denoted as the scaled gradient projection (SGP) method, has been proposed for the constrained minimization of continuously differentiable convex functions. It is applicable to the nonnegative minimization of the KL divergence. If the scaling suggested by RL is used in this method, then it provides a considerable speedup of RL. Therefore the aim of this paper is to apply SGP to a number of imaging problems in astronomy, such as single image deconvolution, multiple image deconvolution and boundary effect correction. Methods. Deconvolution methods are proposed by applying SGP to the minimization of the KL divergence for the imaging problems mentioned above, and the corresponding algorithms are derived and implemented in IDL. For all the algorithms several stopping rules are introduced, including one based on a recently proposed discrepancy principle for Poisson data. For a further increase of efficiency, implementation on GPU (Graphics Processing Unit) is also considered. Results. The proposed algorithms are tested on simulated images. The speedup of the SGP methods with respect to the corresponding RL methods strongly depends on the problem and on the specific object to be reconstructed, and in our simulations it ranges from about 4 to more than 30. Moreover, significant speedups of up to two orders of magnitude have been observed between the serial and parallel implementations of the algorithms. The codes are available upon request.
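The RL iteration that SGP accelerates can be sketched in a few lines; this is a generic illustration with a toy matrix, not the paper's IDL implementation.

```python
import numpy as np

def richardson_lucy_step(x, A, y):
    """One Richardson-Lucy iteration for minimizing KL(y, A x), x >= 0:
        x_new = (x / A^T 1) * A^T (y / (A x)),
    i.e. a gradient step on the KL divergence with the RL diagonal
    scaling x / A^T 1 and unit steplength."""
    return x / (A.T @ np.ones_like(y)) * (A.T @ (y / (A @ x)))

# Toy check: with A = I, the fixed point y is reached in one step
A = np.eye(2)
y = np.array([2.0, 3.0])
x1 = richardson_lucy_step(np.ones(2), A, y)
```

SGP keeps this scaling but replaces the unit steplength with adaptive steplength rules and a projection, which is where the reported speedup comes from.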
2012
 Efficient multi-image deconvolution in astronomy
[Poster]
Cavicchioli, Roberto; Prato, Marco; Zanni, Luca; Boccacci, P.; Bertero, M.
abstract
The deconvolution of astronomical images by the Richardson-Lucy method (RLM) is extended here to the problem of multiple image deconvolution and the reduction of boundary effects. We present the multiple image RLM in its accelerated gradient version SGP (Scaled Gradient Projection). Numerical simulations indicate that the approach can provide excellent results with a considerable reduction of the boundary effects. Also, by exploiting GPUlib applied to the IDL code, we obtained a remarkable acceleration of up to two orders of magnitude.
2012
 Optimization methods for digital image restoration on MPP multicore architectures
[Capitolo/Saggio]
Cavicchioli, Roberto; Prearo, A; Zanella, R; Zanghirati, G; Zanni, Luca
abstract
We consider the numerical solution on modern multicore architectures of large-scale optimization problems arising in image restoration. An efficient solution of these optimization problems is important in several areas, such as medical imaging, microscopy and astronomy, where large-scale imaging is a basic task. To face these challenging problems, a lot of effort has been put into designing effective algorithms, which have largely improved the classical optimization strategies usually applied in image processing. Nevertheless, in many large-scale applications even these improved algorithms do not provide the expected reconstruction in a reasonable time. In these cases, modern multiprocessor architectures represent an important resource for reducing the reconstruction time. Actually, one can consider different possibilities for a parallel computational scenario. One is the use of Graphics Processing Units (GPUs): they were originally designed to perform many simple operations on matrices and vectors with high efficiency and low accuracy (single precision arithmetic), but they have recently seen a huge development of both computational power and accuracy (double precision arithmetic), while still retaining compactness and low price. Another possibility is the use of last-generation multicore CPUs, where general-purpose, very powerful computational cores are integrated inside the same CPU and a bunch of CPUs can be hosted by the same motherboard, sharing a central memory: they can perform completely different and asynchronous tasks, as well as cooperate by suitably distributing the workload of a complex task. Additional opportunities are offered by the more classical clusters of nodes, usually connected in different distributed-memory topologies to form large-scale high-performance machines with tens to hundreds of thousands of processors. Needless to say, various mixes of these architectures (such as clusters of GPUs) are also possible and indeed sold. It should be noticed, however, that all the mentioned scenarios can exist even in very small-sized and cheap configurations. This is particularly relevant for GPUs: initially targeted at 3D graphics applications, they have been employed in many other scientific computing areas, such as signal and image reconstruction. Recent applications show that in many cases GPU performance is comparable to that of a medium-sized cluster, at a fraction of its cost. Thus, also small laboratories, which cannot afford a cluster, can benefit from a substantial reduction of computing time compared to a standard CPU system. Nevertheless, for very large problems, such as 3D imaging in confocal microscopy, the size of the GPU's on-device dedicated memory can become a limit to performance.
For this reason, the ability to exploit the scalability of clusters by means of standard MPI implementations is still crucial for facing very large-scale applications. Here, we deal with both the GPU and the MPI implementation of an optimization algorithm, called the Scaled Gradient Projection (SGP) method, that applies to several imaging problems. GPU versions of this method have been recently evaluated, while an MPI version is presented in this work for both deblurring and denoising problems. A computational study of the different implementations is reported, to show the enhancements provided by these parallel approaches in solving both 2D and 3D imaging problems.
2011
 A regularization algorithm for decoding perceptual temporal profiles from fMRI data
[Articolo su rivista]
Prato, Marco; Favilla, Stefania; Zanni, Luca; Porro, Carlo Adolfo; Baraldi, Patrizia
abstract
In several biomedical fields, researchers are faced with regression problems that can be stated as Statistical Learning problems. One example is given by decoding brain states from functional magnetic resonance imaging (fMRI) data. Recently, it has been shown that the general Statistical Learning problem can be restated as a linear inverse problem. Hence, new algorithms were proposed to solve this inverse problem in the context of Reproducing Kernel Hilbert Spaces. In this paper, we detail one iterative learning algorithm belonging to this class, called the ν-method, and test its effectiveness in a between-subjects regression framework. Specifically, our goal was to predict the perceived pain intensity based on fMRI signals, during an experimental model of acute prolonged noxious stimulation. We found that, using a linear kernel, the psychophysical time profile was well reconstructed, while pain intensity was in some cases significantly over/underestimated. No substantial differences in terms of accuracy were found between the proposed approach and one of the state-of-the-art learning methods, the Support Vector Machines. Nonetheless, adopting the ν-method yielded a significant reduction in computational time, an advantage that became more evident when a relevant feature selection procedure was implemented. The ν-method can be easily extended and included in typical approaches for binary or multiple classification problems, and therefore it seems well-suited to build effective brain activity estimators.
2011
 Image reconstruction in astronomy, medicine and microscopy
[Poster]
Bonettini, Silvia; Prato, Marco; Ruggiero, V.; Zanella, R.; Zanghirati, G.; Zanni, Luca
abstract
The aim of the project is to develop image reconstruction methods for applications in microscopy, astronomy and medicine, and to release software especially tailored to the specific application. The image acquisition process produces a degradation of the information with both deterministic and statistical features. For this reason, an image enhancement is needed in order to remove the noise and the blurring effects. The underlying mathematical model belongs to the class of inverse problems, which are very difficult to solve, especially when the specific features of real applications are included in the model. Reliable reconstruction methods have to take into account the statistics of the image formation process, several physical constraints and important features to be preserved in the restored image, such as edges and details. The proposed reconstruction methods are based on constrained optimization methods which are well suited for the processing of large-size images and also for the 3D case. The research group has a long-term experience in optimization methods for such kinds of applications and in the realization of algorithms on parallel and distributed systems, such as GPUs.
2011
 Learning from examples: methodologies and software
[Poster]
Bonettini, Silvia; Prato, Marco; Ruggiero, V.; Zanella, R.; Zanghirati, G.; Zanni, Luca
abstract
The project’s goal is to apply advanced Machine Learning methodologies, mainly based on SVMs, and the related effective numerical methods for the solution of the underlying optimization problem. The analysis aims to identify the most suitable solution strategies for a given application context, or for a specific problem instance, by exploiting the characteristics of its mathematical model. In addition to binary classification, multiclass and (nonlinear) regression problems can also be considered. The algorithmic study is followed by code prototyping, with the possible development of scalar or high-performance software, tailored to apply the studied strategies to the particular needs of the user. The research group makes its skills in Numerical Analysis, Numerical Optimization, and scalar and concurrent programming available to the project.
2011
 SGPdec: A Scaled Gradient Projection method for 2D and 3D image deconvolution
[Software]
R., Zanella; Zanni, Luca; G., Zanghirati; Cavicchioli, Roberto
abstract
SGPdec is a Matlab package for the deconvolution of 2D and 3D images corrupted by Poisson noise. Following a maximum likelihood approach, SGPdec computes a deconvolved image by early stopping of an iterative method for the minimization of the generalized Kullback-Leibler divergence. The iterative minimization method implemented by SGPdec is a Scaled Gradient Projection (SGP) algorithm that can be considered an acceleration of the Expectation Maximization method, also known as the Richardson-Lucy method. The main feature of the SGP algorithm consists in the combination of inexpensive diagonally scaled gradient directions with adaptive Barzilai-Borwein steplength rules specially designed for these directions; global convergence properties are ensured by exploiting a line-search strategy (monotone or nonmonotone) along the feasible direction. The SGP algorithm is provided to be used as an iterative regularization method; this means that a regularized reconstruction can be obtained by early stopping of the SGP sequence. Several early stopping strategies can be selected, based on different criteria: maximum number of iterations, distance between successive iterates or function values, discrepancy principle; the user must choose a stopping criterion and fix suitable values for the parameters involved in the chosen criterion.
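The core projected iteration can be sketched in a few lines; this is a simplified Python illustration of the projected step only (the actual Matlab package adds the Barzilai-Borwein steplength alternation and the line search described above, and works with the KL objective rather than the toy quadratic used here).

```python
import numpy as np

def sgp_step(x, grad, d, alpha):
    """One scaled gradient projection step onto the nonnegative orthant:
    x_new = P_{>=0}(x - alpha * d * grad), with d a diagonal scaling."""
    return np.maximum(x - alpha * d * grad, 0.0)

# Toy usage: minimize 0.5*||x - c||^2 subject to x >= 0
# (the constrained minimizer is max(c, 0) componentwise)
c = np.array([2.0, -1.0])
x = np.zeros(2)
for _ in range(200):
    x = sgp_step(x, grad=x - c, d=1.0, alpha=0.5)
```

Early stopping of this sequence, as described above, is what provides the regularized reconstruction in the package.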
2011
 SGPIDL: a Scaled Gradient Projection method for image deconvolution in an Interactive Data Language environment
[Software]
Prato, Marco; Cavicchioli, Roberto; Zanni, Luca; Boccacci, P.; Bertero, M.
abstract
An Interactive Data Language (IDL) package for the single and multiple deconvolution of 2D images corrupted by Poisson noise, with the optional inclusion of a boundary effect correction. Following a maximum likelihood approach, SGPIDL computes a deconvolved image by early stopping of the scaled gradient projection (SGP) algorithm applied to the optimization problem arising from the minimization of the generalized Kullback-Leibler divergence between the computed image and the observed image. The algorithms have also been implemented for Graphics Processing Units (GPUs).
2010
 A discrepancy principle for Poisson data
[Articolo su rivista]
M., Bertero; P., Boccacci; G., Talenti; Zanella, Riccardo; Zanni, Luca
abstract
In applications of imaging science, such as emission tomography, fluorescence microscopy and optical/infrared astronomy, image intensity is measured via the counting of incident particles (photons, $\gamma$-rays, etc.). Fluctuations in the emission-counting process can be described by modeling the data as realizations of Poisson random variables (Poisson data). A maximum likelihood approach for image reconstruction from Poisson data was proposed in the mid-eighties. Since the consequent maximization problem is, in general, ill-conditioned, various kinds of regularization were introduced in the framework of the so-called Bayesian paradigm. A modification of the well-known Tikhonov regularization strategy results, the data-fidelity function being a generalized Kullback-Leibler divergence. Then a relevant issue is to find rules for selecting a proper value of the regularization parameter. In this paper we propose a criterion, nicknamed the discrepancy principle for Poisson data, that applies to both denoising and deblurring problems and fits quite naturally the statistical properties of the data. The main purpose of the paper is to establish conditions, on the data and the imaging matrix, ensuring that the proposed criterion does actually provide a unique value of the regularization parameter for various classes of regularization functions. A few numerical experiments are performed to demonstrate its effectiveness. More extensive numerical analysis and comparison with other proposed criteria will be the object of future work.
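For reference, the generalized Kullback-Leibler divergence used as data-fidelity term can be computed as in this minimal sketch; the safeguard `eps` and the function name are implementation assumptions, not part of the paper.

```python
import numpy as np

def generalized_kl(y, m, eps=1e-12):
    """Generalized Kullback-Leibler divergence
        KL(y, m) = sum_i [ y_i * log(y_i / m_i) + m_i - y_i ],
    with the convention that terms with y_i = 0 reduce to m_i."""
    y = np.asarray(y, dtype=float)
    m = np.asarray(m, dtype=float)
    ratio = np.maximum(y, eps) / np.maximum(m, eps)  # guard log/division
    log_term = np.where(y > 0, y * np.log(ratio), 0.0)
    return float(np.sum(log_term + m - y))
```

The divergence vanishes when the model `m` matches the data `y` exactly and grows as they move apart, which is what makes a discrepancy-based choice of the regularization parameter natural for Poisson data.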
2010
 A regularization algorithm for decoding perceptual profiles
[Poster]
Favilla, Stefania; Prato, Marco; Zanni, Luca; Porro, Carlo Adolfo; Baraldi, Patrizia
abstract
In this study we wished to test the feasibility of predicting the perceived pain intensity in healthy volunteers, based on fMRI signals collected during an experimental pain paradigm lasting several minutes. This model of acute prolonged (tonic) pain bears some similarities with clinically relevant conditions, such as prolonged ongoing activity in nociceptors and spontaneous fluctuations of perceived pain intensity over time. To predict the individual pain profile, we tested and optimized a methodological approach based on new regularization learning algorithms for this regression problem.
2010
 Gradient projection methods for image deblurring and denoising on graphics processors
[Relazione in Atti di Convegno]
Serafini, Thomas; Zanella, Riccardo; Zanni, Luca
abstract
Optimization-based approaches for image deblurring and denoising on Graphics Processing Units (GPUs) are considered. In particular, a new GPU implementation of a recent gradient projection method for edge-preserving removal of Poisson noise is presented. The speedups over standard CPU implementations are evaluated on both synthetic data and astronomical and medical imaging problems.
2010
 Iterative regularization algorithms for constrained image deblurring on graphics processors
[Articolo su rivista]
Ruggiero, V; Serafini, Thomas; Zanella, Riccardo; Zanni, Luca
abstract
The ability of modern graphics processors to operate on large matrices in parallel can be exploited for solving constrained image deblurring problems in a short time. In particular, in this paper we propose the parallel implementation of two iterative regularization methods: the well-known expectation maximization algorithm and a recent scaled gradient projection method. The main differences between the considered approaches and their impact on the parallel implementations are discussed. The effectiveness of the parallel schemes and the speedups over standard CPU implementations are evaluated on test problems arising from astronomical images.
2010
 Nonnegative least-squares image deblurring: improved gradient projection approaches
[Articolo su rivista]
F., Benvenuto; Zanella, Riccardo; Zanni, Luca; M., Bertero
abstract
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the "semiconvergence" property, i.e. early stopping of the iteration provides "regularized" solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of gradient methods. In particular, we propose the application of efficient steplength selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semiconvergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
2009
 A scaled gradient projection method for constrained image deblurring
[Articolo su rivista]
Bonettini, Silvia; Zanella, Riccardo; Zanni, Luca
abstract
A class of scaled gradient projection methods for optimization problems with simple constraints is considered. These iterative algorithms can be useful in variational approaches to image deblurring that lead to the minimization of convex nonlinear functions subject to nonnegativity constraints and, in some cases, to an additional flux conservation constraint. A special gradient projection method is introduced that exploits effective scaling strategies and steplength updating rules, appropriately designed for improving the convergence rate. We give convergence results for this scheme and we evaluate its effectiveness by means of an extensive computational study on the minimization problems arising from the maximum likelihood approach to image deblurring. Comparisons with the standard expectation maximization algorithm and with other iterative regularization schemes are also reported to show the computational gain provided by the proposed method.
2009
 Accelerating gradient projection methods for $\ell_1$-constrained signal recovery by steplength selection rules
[Articolo su rivista]
Loris, I; Bertero, M; DE MOL, C; Zanella, Riccardo; Zanni, Luca
abstract
We propose a new gradient projection algorithm that compares favorably with the fastest algorithms available to date for ℓ1-constrained sparse recovery from noisy data, both in the compressed sensing and inverse problem frameworks. The method exploits a linesearch along the feasible direction and an adaptive steplength selection based on recent strategies for the alternation of the well-known Barzilai–Borwein rules. The convergence of the proposed approach is discussed and a computational study on both well conditioned and ill-conditioned problems is carried out for performance evaluations in comparison with five other algorithms proposed in the literature.
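The computational kernel of such gradient projection schemes is the Euclidean projection onto the ℓ1 ball. A standard sort-based projection (a generic routine, not necessarily the one used in the paper) can be sketched as:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection onto {x : ||x||_1 <= radius} by sorting and
    soft-thresholding (O(n log n))."""
    if np.abs(v).sum() <= radius:
        return v.copy()                      # already feasible
    u = np.sort(np.abs(v))[::-1]             # magnitudes in decreasing order
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho+1) > css[rho] - radius
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```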
2009
 Efficient gradient projection methods for edge-preserving removal of Poisson noise
[Articolo su rivista]
Zanella, Riccardo; Boccacci, P; Zanni, Luca; Bertero, M.
abstract
Several methods based on different image models have been proposed and developed for image denoising. Some of them, such as total variation (TV) and wavelet thresholding, are based on the assumption of additive Gaussian noise. Recently the TV approach has been extended to the case of Poisson noise, a model describing the effect of photon counting in applications such as emission tomography, microscopy and astronomy. For the removal of this kind of noise we consider an approach based on a constrained optimization problem, with an objective function describing TV and other edge-preserving regularizations of the Kullback–Leibler divergence. We introduce a new discrepancy principle for the choice of the regularization parameter, which is justified by the statistical properties of the Poisson noise. For solving the optimization problem we propose a particular form of a general scaled gradient projection (SGP) method, recently introduced for image deblurring. We derive the form of the scaling from a decomposition of the gradient of the regularization functional into a positive and a negative part. The beneficial effect of the scaling is proved by means of numerical simulations, showing that the performance of the proposed form of SGP is superior to that of the most efficient gradient projection methods. An extended numerical analysis of the dependence of the solution on the regularization parameter is also performed to test the effectiveness of the proposed discrepancy principle.
2009
 From BOLD-fMRI signals to the prediction of subjective pain perception through a regularization algorithm
[Relazione in Atti di Convegno]
Prato, Marco; Favilla, Stefania; Baraldi, Patrizia; Porro, Carlo Adolfo; Zanni, Luca
abstract
Functional magnetic resonance imaging, in particular the BOLD-fMRI technique, plays a dominant role in human brain mapping studies, mostly because of its noninvasiveness and relatively high spatiotemporal resolution. The main goal of fMRI data analysis has been to reveal the distributed patterns of brain areas involved in specific functions, by applying a variety of statistical methods with model-based or data-driven approaches. In the last years, several studies have taken a different approach, where the direction of analysis is reversed in order to probe whether fMRI signals can be used to predict perceptual or cognitive states. In this study we test the feasibility of predicting the perceived pain intensity in healthy volunteers, based on fMRI signals collected during an experimental pain paradigm lasting several minutes. In particular, we introduce a methodological approach based on new regularization learning algorithms for regression problems.
2009
 Gradient projection approaches for optimization problems in image deblurring and denoising
[Poster]
Bonettini, Silvia; F., Benvenuto; Zanella, Riccardo; Zanni, Luca; M., Bertero
abstract
Gradient type methods are widely used approaches for nonlinear programming in image processing, due to their simplicity, low memory requirement and ability to provide medium-accurate solutions without excessive computational costs. In this work we discuss some improved gradient projection methods for constrained optimization problems in image deblurring and denoising. A crucial feature of these approaches is the combination of special steplength rules and scaled gradient directions, appropriately designed to achieve a better convergence rate. Convergence results are given by exploiting monotone or nonmonotone linesearch strategies along the feasible direction. The effectiveness of the algorithms is evaluated on the problems arising from the maximum likelihood approach to the deconvolution of images and from the edge-preserving removal of Poisson noise. Numerical results obtained by facing large scale problems involving images of several megapixels on graphics processors are also reported.
2008
 Accelerated gradient methods for constrained image deblurring
[Articolo su rivista]
Bonettini, Silvia; Zanella, Riccardo; Zanni, Luca; Bertero, M.
abstract
In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and we give convergence results under standard assumptions. Then we consider the deblurring problem and the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
2008
 Inverse problems in machine learning: an application to brain activity interpretation
[Articolo su rivista]
Prato, Marco; Zanni, Luca
abstract
In a typical machine learning problem one has to build a model from a finite training set which is able to generalize the properties characterizing the examples of the training set to new examples. The model has to reflect as much as possible the set of training examples but, especially in real-world problems in which the data are often corrupted by different sources of noise, it has to avoid a too strict dependence on the training examples themselves. Recent studies on the relationship between this kind of learning problem and the regularization theory for ill-posed inverse problems have given rise to new regularized learning algorithms. In this paper we recall some of these learning methods and we propose an accelerated version of the classical Landweber iterative scheme which results particularly efficient from the computational viewpoint. Finally, we compare the performance of these methods with the classical Support Vector Machines learning algorithm on a real-world experiment concerning brain activity interpretation through the analysis of functional magnetic resonance imaging data.
2008
 Iterative image reconstruction: a point of view
[Capitolo/Saggio]
M., Bertero; H., Lanteri; Zanni, Luca
abstract
Several iterative methods are available for solving the ill-posed problem of image reconstruction. They are motivated by different approaches and may derive from methods used for the solution of linear equations or the minimization of suitable functionals. In this paper we adopt the approach flowing from maximum likelihood to the Bayesian formulation of image reconstruction and providing a generalization of the classical regularization theory. This approach leads to the minimization of functionals derived from properties of the noise and, possibly, from additional information on the solution. We investigate a class of scaled gradient methods, based on a suitable decomposition of the gradient, and we show that this class contains some of the methods used for the solution of maximum likelihood problems in image reconstruction. We also obtain very simple regularized versions of these methods. Constraints of nonnegativity and flux conservation are taken into account by considering scaled gradient projection (SGP) methods, derived from the previous approach, and for them a convergence proof can be given. Numerical experience on a particular problem shows that SGP can provide a considerable increase in efficiency with respect to the standard algorithm used for that problem. Work is in progress in order to understand whether a similar gain can be achieved in other cases.
2008
 New adaptive stepsize selections in gradient methods
[Articolo su rivista]
Frassoldati, Giacomo; Zanni, Luca; G., Zanghirati
abstract
This paper deals with gradient methods for minimizing n-dimensional strictly convex quadratic functions. Two new adaptive stepsize selection rules are presented and some key properties are proved. Practical insights on the effectiveness of the proposed techniques are given by a numerical comparison with the Barzilai-Borwein (BB) method, the cyclic/adaptive BB methods and two recent monotone gradient methods.
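As a point of reference for such adaptive rules, the two standard Barzilai-Borwein steplengths and a simple adaptive alternation between them (not the new rules introduced in the paper) can be sketched as follows, for the quadratic 0.5 x'Ax - b'x:

```python
import numpy as np

def bb_steplengths(s, y):
    """The two Barzilai-Borwein rules: BB1 = s's / s'y and BB2 = s'y / y'y."""
    sy = s @ y
    return (s @ s) / sy, sy / (y @ y)

def abb_gradient(A, b, x0, tol=1e-8, max_iter=500, kappa=0.5):
    """Gradient method with adaptive BB alternation: use the shorter BB2
    step when the two rules differ markedly (a2/a1 < kappa), BB1 otherwise."""
    x = x0.copy()
    g = A @ x - b                            # gradient of 0.5 x'Ax - b'x
    alpha = 1.0 / np.linalg.norm(A, 2)       # safe initial steplength
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g          # iterate and gradient differences
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
        a1, a2 = bb_steplengths(s, y)
        alpha = a2 if a2 / a1 < kappa else a1  # adaptive alternation
    return x
```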
2008
 Predicting subjective pain perception based on BOLD-fMRI signals: a new machine learning approach
[Relazione in Atti di Convegno]
Favilla, Stefania; Prato, Marco; Zanni, Luca; Porro, Carlo Adolfo; Baraldi, Patrizia
abstract
Functional magnetic resonance imaging, in particular the BOLD-fMRI technique, plays a dominant role in human brain mapping studies, mostly because of its noninvasiveness, good spatial and acceptable temporal resolution in comparison with other techniques. The main goal of fMRI data analysis has been to reveal the distributed patterns of brain areas involved in specific functions and their interactions, by applying a variety of univariate or multivariate statistical methods with model-based or data-driven approaches. In the last few years, a growing number of studies have taken a different approach, where the direction of analysis is reversed in order to probe whether fMRI signals can be used to predict perceptual or cognitive states. In this study we wished to test the feasibility of predicting the perceived pain intensity in healthy volunteers, based on fMRI signals collected during an experimental pain paradigm lasting several minutes. To this end, we tested and optimized one methodological approach based on new regularization learning algorithms on this regression problem.
2007
 On adaptive stepsize selections in gradient methods
[Abstract in Rivista]
Frassoldati, Giacomo; Zanni, Luca; Zanghirati, G.
abstract
In recent years several proposals for the stepsize selection have largely improved the gradient methods, in the case of both constrained and unconstrained nonlinear optimization. We introduce a new stepsize rule with some crucial properties. We design stepsize selection strategies where the new rule and a standard Barzilai-Borwein (BB) rule can be adaptively alternated to get meaningful convergence rate improvements in comparison with other BB-like gradient schemes.
2007
 On recent machine learning algorithms for brain activity interpretation
[Poster]
Prato, Marco; Zanni, Luca; G., Zanghirati
abstract
In several biomedical and bioinformatics applications, one is faced with regression problems that can be stated as Statistical Learning problems. One example is given by the interpretation of brain activity through the analysis of functional Magnetic Resonance Imaging (fMRI) data. In recent years it has been shown that the general Statistical Learning problem can be restated as a linear inverse problem. Hence, new algorithms have been proposed to solve this inverse problem in the context of reproducing kernel Hilbert spaces. These new proposals involve a numerical approach which differs from the ones used in classical machine learning techniques and which seems mainly suitable for the classes of problems just described; thus, it is worth exploring how effectively these new algorithms perform when compared with well-known, standard machine learning approaches. The paper deals with an experimentation of the new methods on a real-world experiment, as well as with a performance comparison with the support vector machines (SVMs) technique.
2007
 PGPDT: Parallel Gradient Projection-based Decomposition Technique
[Software]
T., Serafini; Zanni, Luca; G., Zanghirati
abstract
GPDT is a C++ software designed to train large-scale Support Vector Machines (SVMs) for binary classification in both scalar and distributed memory parallel environments [1,3,5,6]. It uses a popular problem decomposition technique to split the SVM quadratic programming (QP) problem into a sequence of smaller QP subproblems, each one being solved by a suitable gradient projection method (GPM). The currently implemented GPMs are the Generalized Variable Projection Method (GVPM) [2] and the Dai-Fletcher method (DFGPM) [4]. [1] G. Zanghirati, L. Zanni, A Parallel Solver for Large Quadratic Programs in Training Support Vector Machines, Parallel Computing 29 (2003), 535-551. [2] T. Serafini, G. Zanghirati, L. Zanni, Gradient Projection Methods for Large Quadratic Programs and Applications in Training Support Vector Machines, Optim. Meth. Soft. 20 (2005), 353-378. [3] T. Serafini, L. Zanni, On the Working Set Selection in Gradient Projection-based Decomposition Techniques for Support Vector Machines, Optim. Meth. Soft. 20 (2005), 583-596. [4] Y.H. Dai, R. Fletcher, New Algorithms for Singly Linearly Constrained Quadratic Programming Problems Subject to Lower and Upper Bounds, Math. Prog. 106(3) (2006), 403-421. [5] L. Zanni, An Improved Gradient Projection-based Decomposition Technique for Support Vector Machines, Computational Management Science 3(2) (2006), 131-145. [6] L. Zanni, T. Serafini, G. Zanghirati, Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems, JMLR 7 (2006), 1467-1492.
2006
 An improved gradient projection-based decomposition technique for support vector machines
[Articolo su rivista]
Zanni, Luca
abstract
In this paper we propose some improvements to a recent decomposition technique for the large quadratic program arising in training Support Vector Machines. As in standard decomposition approaches, the technique we consider is based on the idea of optimizing, at each iteration, a subset of the variables through the solution of a quadratic programming subproblem. The innovative features of this approach consist in using a very effective gradient projection method for the inner subproblems and a special rule for selecting the variables to be optimized at each step. These features allow us to obtain promising performance by decomposing the problem into few large subproblems instead of many small subproblems as usually done by other decomposition schemes. We improve this technique by introducing a new inner solver and a simple strategy for reducing the computational cost of each iteration. We evaluate the effectiveness of these improvements by solving large-scale benchmark problems and by comparison with a widely used decomposition package.
2006
 Parallel software for training large scale support vector machines on multiprocessor systems
[Articolo su rivista]
Zanni, Luca; Serafini, Thomas; G., Zanghirati
abstract
Parallel software for solving the quadratic program arising in training support vector machines for classification problems is introduced. The software implements an iterative decomposition technique and exploits both the storage and the computing resources available on multiprocessor systems, by distributing the heaviest computational tasks of each decomposition iteration. Based on a wide range of recent theoretical advances, relevant decomposition issues, such as the quadratic subproblem solution, the gradient updating and the working set selection, are systematically described and their careful combination to get an effective parallel tool is discussed. A comparison with state-of-the-art packages on benchmark problems demonstrates the good accuracy and the remarkable time saving achieved by the proposed software. Furthermore, challenging experiments on real-world data sets with millions of training samples highlight how the software makes large scale standard nonlinear support vector machines effectively tractable on common multiprocessor systems. This feature is not shown by any of the available codes.
2005
 Gradient projection methods for quadratic programs and applications in training support vector machines
[Articolo su rivista]
Serafini, Thomas; G., Zanghirati; Zanni, Luca
abstract
Gradient projection methods based on the Barzilai-Borwein spectral steplength choices are considered for quadratic programming (QP) problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. For both approaches, the behavior of different combinations of the two spectral steplengths is investigated. A new adaptive steplength alternating rule is proposed, which becomes the basis for a generalized version of the variable projection method (GVPM). Convergence results are given for the proposed approach and its effectiveness is shown by means of an extensive computational study on several test problems, including the special quadratic programs arising in training support vector machines (SVMs). Finally, the GVPM behavior as inner QP solver in decomposition techniques for large-scale SVMs is also evaluated.
2005
 On the working set selection in gradient projection-based decomposition techniques for support vector machines
[Articolo su rivista]
Serafini, Thomas; Zanni, Luca
abstract
This work deals with special decomposition techniques for the large quadratic program arising in training support vector machines. These approaches split the problem into a sequence of quadratic programming (QP) subproblems which can be solved by efficient gradient projection methods recently proposed. Owing to the ability of decomposing the problem into much larger subproblems than standard decomposition packages, these techniques show promising performance and are well suited for parallelization. Here, we discuss a crucial aspect for their effectiveness: the selection of the working set; that is, the index set of the variables to be optimized at each step through the QP subproblem. We analyze the most popular working set selections and develop a new selection strategy that improves the convergence rate of the decomposition schemes based on large-sized working sets. The effectiveness of the proposed strategy within the gradient projection-based decomposition techniques is shown by numerical experiments on large benchmark problems, both in serial and in parallel environments.
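A classical baseline for working set selection is the maximal-violation criterion for the SVM dual min 0.5 a'Qa - e'a subject to y'a = 0 and 0 <= a <= C. The sketch below illustrates that generic criterion (not the new strategy of the paper), assuming the dual gradient `grad`, labels `y`, multipliers `a` and box bound `C` are given:

```python
import numpy as np

def select_working_set(grad, y, a, C, q):
    """Pick q/2 indices with the largest -y_i*grad_i among the 'up' set and
    q/2 with the smallest among the 'down' set (KKT-violation pairing).
    Note: an index with 0 < a_i < C belongs to both sets, so in general
    duplicates would have to be filtered out."""
    s = -y * grad
    up = ((a < C) & (y > 0)) | ((a > 0) & (y < 0))
    down = ((a < C) & (y < 0)) | ((a > 0) & (y > 0))
    up_idx = np.where(up)[0]
    down_idx = np.where(down)[0]
    top = up_idx[np.argsort(-s[up_idx])][: q // 2]      # largest scores
    bottom = down_idx[np.argsort(s[down_idx])][: q // 2]  # smallest scores
    return np.concatenate([top, bottom])
```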
2005
 Some improvements to a parallel decomposition technique for training support vector machines
[Relazione in Atti di Convegno]
Serafini, Thomas; Zanni, Luca; G., Zanghirati
abstract
We consider a parallel decomposition technique for solving the large quadratic programs arising in training the learning methodology Support Vector Machine. At each iteration of the technique a subset of the variables is optimized through the solution of a quadratic programming subproblem. This inner subproblem is solved in parallel by a special gradient projection method. In this paper we consider some improvements to the inner solver: a new algorithm for the projection onto the feasible region of the optimization subproblem and new linesearch and steplength selection strategies for the gradient projection scheme. The effectiveness of the proposed improvements is evaluated, both in terms of execution time and relative speedup, by solving large-scale benchmark problems on a parallel architecture.
2004
 Parallel decomposition approaches for training support vector machines
[Relazione in Atti di Convegno]
Serafini, Thomas; G., Zanghirati; Zanni, Luca
abstract
We consider parallel decomposition techniques for solving the large quadratic programming (QP) problems arising in training support vector machines. A recent technique is improved by introducing an efficient solver for the inner QP subproblems and a preprocessing step useful to hot-start the decomposition strategy. The effectiveness of the proposed improvements is evaluated by solving large-scale benchmark problems on different parallel architectures.
2003
 A parallel solver for large quadratic programs in training support vector machines
[Articolo su rivista]
Zanni, Luca; G., Zanghirati
abstract
This work is concerned with the solution of the convex quadratic programming problem arising in training the learning machines named support vector machines. The problem is subject to box constraints and to a single linear equality constraint; it is dense and, for many practical applications, it becomes a large-scale problem. Thus, approaches based on explicit storage of the matrix of the quadratic form are not practicable. Here we present an easily parallelizable approach based on a decomposition technique that splits the problem into a sequence of smaller quadratic programming subproblems. These subproblems are solved by a variable projection method that is well suited to a parallel implementation and is very effective in the case of Gaussian support vector machines. Performance results are presented on well known large-scale test problems, in scalar and parallel environments. The numerical results show that the approach is comparable on scalar machines with a widely used technique and can achieve good efficiency and scalability on a multiprocessor system.
2003
 Large quadratic programs in training Gaussian support vector machines
[Articolo su rivista]
Serafini, Thomas; G., Zanghirati; Zanni, Luca
abstract
We consider the numerical solution of the large convex quadratic program arising in training the learning machines named support vector machines. Since the matrix of the quadratic form is dense and generally large, solution approaches based on explicit storage of this matrix are not practicable. Well known strategies for this quadratic program are based on decomposition techniques that split the problem into a sequence of smaller quadratic programming subproblems. For the solution of these subproblems we present an iterative projection-type method suited for the structure of the constraints and very effective in the case of Gaussian support vector machines. We develop an appropriate decomposition technique designed to exploit the high performance of the proposed inner solver on medium or large subproblems. Numerical experiments on large-scale benchmark problems allow us to compare this approach with another widely used decomposition technique. Finally, a parallel extension of the proposed strategy is described.
2003
 Training Support Vector Machines on Parallel Architectures
[Capitolo/Saggio]
Serafini, Thomas; G., Zanghirati; Zanni, Luca
abstract
The parallel solution of the large quadratic programming problem arising in training support vector machines is analysed. Some improvements to a recent decomposition technique are discussed. The effectiveness of the proposed approach is evaluated by solving large-scale benchmark problems on different parallel architectures.
2003
 Variable Projection Methods for Large Quadratic Programs in Training Support Vector Machines
[Working paper]
G., Zanghirati; Zanni, Luca
abstract
In this paper we analyse the variable projection methods for the solution of the convex quadratic programming problem arising in training the learning technique named Support Vector Machine. Since the Hessian matrix of the objective function is large and dense, the problem is faced by decomposition techniques that require the solution of a sequence of quadratic programming subproblems of smaller size. For the solution of these subproblems, we consider variable projection methods based on different steplength updating rules. In particular, we introduce a new steplength selection that implies remarkable convergence rate improvements. The effectiveness of the variable projection method combined with this steplength selection is evaluated in comparison with standard routines on well known benchmark problems.
2003
 Variable projection methods for large-scale quadratic optimization in data analysis applications
[Relazione in Atti di Convegno]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
This paper deals with the numerical evaluation of the variable projection method for quadratic programming problems in three data analysis applications. The three applications give rise to large-scale quadratic programs with remarkable differences in the Hessian definition and/or in the structure of the constraints.
2002
 An overview on projection-type methods for convex large-scale quadratic programs
[Capitolo/Saggio]
Zanni, Luca; V., Ruggiero
abstract
A well-known approach for solving large and sparse linearly constrained quadratic programming (QP) problems is given by the splitting and projection methods. After a survey on these classical methods, we show that they can be unified in a general iterative scheme consisting in solving a sequence of QP subproblems with the constraints of the original problem and an easily solvable Hessian matrix. A convergence theorem is given for this general scheme. In order to improve the numerical performance of these methods, we introduce two variants of a projection-type scheme that use a variable projection parameter at each step. The two variable projection methods differ in the strategy used to assure a sufficient decrease of the objective function at each iteration. We prove, under very general hypotheses, the convergence of these schemes and we propose two practical, nonexpensive and efficient updating rules for the projection parameter. An extensive numerical experimentation shows the effectiveness of the variable projection-type methods.
2000
 A modified projection algorithm for large strictly-convex quadratic programs
[Articolo su rivista]
V., Ruggiero; Zanni, Luca
abstract
In this paper, we propose a modified projection-type method for solving strictly-convex quadratic programs. This iterative scheme requires essentially the solution of an easy quadratic programming subproblem and a matrix-vector multiplication at each iteration. The main feature of the method consists in updating the Hessian matrix of the subproblems by a convenient scaling parameter. The convergence of the scheme is obtained by introducing a correction formula for the solution of the subproblems and very weak conditions on the scaling parameter. A practical nonexpensive updating rule for the scaling parameter is suggested. The results of numerical experimentation enable this approach to be compared with some classical projection-type methods and its effectiveness as a solver of large and very sparse quadratic programs to be evaluated.
2000
 A Random Generator for Large-Scale Linearly Constrained Quadratic Programming Test Problems
[Monografia/Trattato scientifico]
C., Durazzi; V., Ruggiero; Zanni, Luca
abstract
In this paper we describe a random generator for large and sparse quadratic programming problems that frequently arise in different areas of applied science. This generator is a useful tool in testing algorithms on QP problems with different features, since it allows us to vary many parameters which characterize the problems. The procedure used to generate a QP problem, as well as some details of its implementation, are explained. Finally, we report an analysis of the numerical results obtained by the routine E04NFK of the NAG library on the test problems produced by the generator.
2000
 On the stability of the direct elimination method for equality constrained least squares problems
[Articolo su rivista]
Galligani, Emanuele; Zanni, Luca
abstract
A backward error analysis of the direct elimination method for linear equality constrained least squares problems is presented. It is proved that the solution computed by the method is the exact solution of a perturbed problem and bounds for data perturbations are given. The numerical stability of the method is related to the way in which the constraints are used to eliminate variables and these theoretical conclusions are confirmed by a numerical example.
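The problem class analysed here, min ||Ax - b|| subject to Cx = d, can be solved by eliminating variables with the constraints. The sketch below uses a nullspace-based elimination (a close relative of the direct elimination method studied in the paper, not its exact formulation), under the assumption that C has full row rank:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||Ax - b||_2 subject to Cx = d by elimination:
    a particular solution of the constraints plus an unconstrained
    least-squares solve in the nullspace of C."""
    # minimum-norm particular solution of Cx = d
    x_p = np.linalg.lstsq(C, d, rcond=None)[0]
    # orthonormal basis Z of the nullspace of C from a full QR of C^T
    # (assumes C has full row rank, so the nullspace has n - m columns)
    q, _ = np.linalg.qr(C.T, mode='complete')
    Z = q[:, C.shape[0]:]
    # reduced unconstrained least-squares problem in the free variables
    z = np.linalg.lstsq(A @ Z, b - A @ x_p, rcond=None)[0]
    return x_p + Z @ z
```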
2000
 Projection-type methods for large convex quadratic programs: theory and computational experience
[Working paper]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
A well-known approach to the solution of large and sparse linearly constrained quadratic programming (QP) problems is given by the classical projection and splitting methods. These methods have a similar iterative scheme consisting in solving a sequence of QP subproblems with the constraints of the original problem and an easily solvable Hessian matrix. A convergence theorem, summarizing the main results about the projection and splitting methods, is given for this general scheme. In order to achieve a higher numerical performance than that obtained by projection and splitting methods, we introduce two variants of an iterative scheme that use a variable projection parameter at each step. These two variable projection-type methods differ in the strategy used to assure a sufficient decrease in the objective function f(x). We prove, under very general hypotheses, the convergence of these schemes and we propose two practical, nonexpensive and efficient rules for updating the projection parameter. An extensive numerical experimentation shows that the variable projection-type methods have an efficient behaviour. These results are produced by a set of programs available at the URL: http://www.unife.it/AnNum97/index.html. About the solution of the inner subproblems of the projection-type methods, we observe that, when the structure of the constraints does not suggest a particular solver for the QP inner subproblems, we can formulate each inner subproblem as a mixed linear complementarity problem (LCP) that can be solved by the classical projected SOR method of Cryer. When the number of constraints (and, consequently, the size of the inner LCPs) is large, appropriate parallel splitting methods for LCPs can be used for the solution on multiprocessor systems. Finally we describe two applications, arising in the framework of data analysis, that involve QP problems suited to be solved by projection-type methods.
2000
 Splitting and Projection-Type Methods for Large Convex Quadratic Programs
[Relazione in Atti di Convegno]
V., Ruggiero; Zanni, Luca
abstract
2000
 Variable projection methods for large convex quadratic programming
[Capitolo/Saggio]
V., Ruggiero; Zanni, Luca
abstract
We propose two projection-type methods for solving large quadratic programs. The main feature of these iterative schemes consists in using, at each iteration, a variable projection parameter instead of a fixed one as in the classical projection methods. The convergence may be obtained without restrictive conditions on the projection parameters by using appropriate correction rules that imply, at each iteration, a sufficient decrease in the objective function. The first method uses a correction rule on the descent direction produced by the projection step, while in the second method, the correction formula works adaptively on the value of the variable projection parameter. We give convergence results for the general case of inexact solution of the inner subproblems. The numerical behaviour of the methods is strictly dependent on the sequence of the projection parameters. We introduce a practical nonexpensive updating rule for these parameters and evaluate its effectiveness on large-scale test problems.
1999
 On a class of iterative methods for large-scale convex quadratic programs
[Relazione in Atti di Convegno]
Zanni, Luca; V., Ruggiero
abstract
We analyse the efficiency of a class of iterative methods for solving large scale convex quadratic programs. These methods, known as splitting methods and projection methods, require the solution of a sequence of easy strictly convex quadratic programming subproblems obtained by splitting the matrix of the objective function. We describe in detail the techniques used for generating the large and sparse test problems on which the computational behaviour of the methods is studied.
1999
 On the efficiency of splitting and projection methods for large strictly convex quadratic programs
[Capitolo/Saggio]
V., Ruggiero; Zanni, Luca
abstract
In this paper we analyse the behaviour of the classical splitting and projection methods for solving large-scale strictly convex quadratic programming problems with linear constraints. The drawbacks of these classical methods are overcome by the recent modified projection-type and variable projection methods. These new approaches have the same complexity and a similar structure: each iteration consists of a projection step followed by a correction formula. Nevertheless, unlike the modified projection-type methods, the variable projection method does not require prefixing any scalar parameter and is weakly dependent on an a priori scaling of the objective function. The results of a numerical experimentation permit a comparison of the new approaches with the classical splitting and projection methods and an evaluation of the effectiveness of the variable projection method as a solver for large quadratic programs.
1998
 A minimization method for the solution of large symmetric eigenproblems
[Articolo su rivista]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
This paper concerns the solution of a special eigenvalue problem for a large sparse symmetric matrix by a fast convergent minimization method. A theoretical analysis of the method is developed; it is proved to be convergent with a convergence rate of fourth order. This minimization method requires solving a sequence of equality-constrained least squares problems that become increasingly ill-conditioned as the solution of the eigenvalue problem is approached. Particular attention has been devoted to this question of ill-conditioning for the practical application of the method. Computational experiments carried out on the Cray C90 show the behaviour of this minimization method as an accelerating technique for the inverse iteration method. A comparison with the scaled Newton method has also been done.
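For context, this is a minimal sketch of plain inverse iteration, the baseline method the abstract reports the minimization scheme accelerates (not the minimization method itself; the matrix and names are invented for illustration):

```python
import numpy as np

def inverse_iteration(A, shift, iters=50):
    """Classical inverse iteration for the eigenpair of the
    symmetric matrix A nearest the given shift."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)       # generic starting vector
    M = A - shift * np.eye(n)
    for _ in range(iters):
        y = np.linalg.solve(M, x)     # amplifies the eigenvector nearest shift
        x = y / np.linalg.norm(y)
    lam = x @ A @ x                   # Rayleigh-quotient eigenvalue estimate
    return lam, x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = inverse_iteration(A, shift=0.0)   # converges to the smallest eigenpair
```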
1998
 Parallel solution of large scale quadratic programs
[Relazione in Atti di Convegno]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
We consider the solution of large and sparse linearly constrained quadratic programming problems. We describe an iterative scheme that requires solving a linear complementarity problem at each iteration. For the solution of this latter problem we examine parallel iterative solvers based on splittings of the complementarity matrix. We report the numerical results obtained by solving quadratic programs with the proposed approach on the Cray T3E.
1998
 Parallel splitting methods for linearly constrained quadratic programs
[Capitolo/Saggio]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
We consider the solution of large and sparse linearly constrained quadratic programming problems by an iterative scheme that requires solving a linear complementarity problem at each iteration. For the solution of this latter problem we examine parallel iterative solvers based on splittings of the complementarity matrix. We report the results of numerical experiments carried out on the Cray T3E.
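The inner linear complementarity problems mentioned in these abstracts can be solved, in the symmetric positive definite case, by Cryer's projected SOR method. The following is a minimal serial sketch (the parallel splitting solvers studied in the papers distribute this work differently; names and the test data are invented):

```python
import numpy as np

def projected_sor_lcp(M, q, omega=1.2, iters=200):
    """Cryer's projected SOR for the LCP
        w = M z + q,  z >= 0,  w >= 0,  z'w = 0,
    with M symmetric positive definite and 0 < omega < 2."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):                    # Gauss-Seidel sweep
            r = q[i] + M[i] @ z               # i-th residual at current z
            z[i] = max(0.0, z[i] - omega * r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
z = projected_sor_lcp(M, q)                   # solution: z = [1.5, 0]
```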
1997
 Error analysis in equality constrained quadratic optimization problems
[Articolo su rivista]
Galligani, Emanuele; Zanni, Luca
abstract
In this work we consider the numerical solution of quadratic programming problems with linear equality constraints. When the size and the sparsity of the problem are particularly large, an efficient approach consists in transforming the original problem into a sequence of subproblems of the same type, but easier to solve, by means of a decomposition of the matrix of the objective function. The behaviour of the iterative method deriving from this approach is examined for two different strategies for solving the subproblems. When the matrix of the objective function and the constraint matrix are dense and of small or medium size, the original problem can be solved by direct elimination methods based on the QR decomposition of the constraint matrix, which eliminate some variables and solve an equivalent unconstrained quadratic programming problem of smaller size. The numerical stability of one method of this class, the direct elimination method, is studied by means of a backward error analysis.
1997
 Error analysis of an algorithm for equality-constrained quadratic programming problems
[Articolo su rivista]
Galligani, Emanuele; Zanni, Luca
abstract
In this paper the numerical stability of the orthogonal factorization method for linear equality-constrained quadratic programming problems is studied using a backward error analysis. A perturbation formula for the problem is analyzed; the condition numbers of this formula are examined in order to compare them with the condition numbers of the two matrices of the problem. A class of test problems is also considered in order to show experimentally the behaviour of the method.
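As a sketch of the method under study, the orthogonal factorization (null-space) approach eliminates the equality constraints via a QR factorization of the transposed constraint matrix and solves a smaller unconstrained problem in the null-space coordinates. This is a generic textbook rendering, not the paper's implementation; the test problem is invented:

```python
import numpy as np

def null_space_qp(G, q, A, b):
    """Orthogonal-factorization (null-space) method for
        min 1/2 x'Gx + q'x  s.t.  Ax = b,
    with A of full row rank, using a QR factorization of A^T."""
    m, n = A.shape
    Q, R = np.linalg.qr(A.T, mode="complete")   # A^T = Q [R1; 0]
    Q1, Q2 = Q[:, :m], Q[:, m:]                 # range and null-space bases
    xp = Q1 @ np.linalg.solve(R[:m].T, b)       # particular solution: A xp = b
    H = Q2.T @ G @ Q2                           # reduced (projected) Hessian
    y = np.linalg.solve(H, -Q2.T @ (G @ xp + q))
    return xp + Q2 @ y                          # feasible minimizer

G = np.diag([1.0, 2.0, 3.0])
q = np.zeros(3)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
x = null_space_qp(G, q, A, b)   # analytic solution: [18/11, 9/11, 6/11]
```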
1997
 Splitting methods and parallel solution of constrained quadratic programs
[Relazione in Atti di Convegno]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
In this work the numerical solution of linearly constrained quadratic programming problems is examined. This problem arises in many applications and forms a basis for some algorithms that solve the variational inequalities formulating equilibrium problems. An attractive iterative scheme for solving constrained quadratic programs when the matrix of the objective function is large and sparse consists in transforming, by a splitting of the objective matrix, the original problem into a sequence of subproblems easier to solve. At each iteration the subproblem is formulated as a linear complementarity problem that can be solved by methods suited for implementation on multiprocessor systems. We analyse two parallel iterative solvers from the theoretical and practical points of view. Results of numerical experiments carried out on the Cray T3D are reported.
1997
 Splitting methods for quadratic optimization in data analysis
[Articolo su rivista]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
Many problems arising in data analysis can be formulated as large sparse strictly convex quadratic programming problems with equality and inequality linear constraints. In order to solve these problems, we propose an iterative scheme based on a splitting of the matrix of the objective function, called the splitting algorithm (SA). This algorithm transforms the original problem into a sequence of subproblems easier to solve, for which a large number of efficient methods exists in the literature. Each subproblem can be solved as a linear complementarity problem or as a constrained least distance problem. We give conditions for SA convergence and we present an application to a large-scale sparse problem arising in constrained bivariate interpolation. In this application we use a special version of SA called the diagonalization algorithm (DA). An extensive experimentation on the CRAY C90 permits an evaluation of the DA performance.
1996
 Error analysis of elimination methods for equality constrained quadratic programming problems
[Relazione in Atti di Convegno]
Galligani, Emanuele; Zanni, Luca
abstract
A backward error analysis for the orthogonal factorization method for equality constrained quadratic programming problems has been developed. Furthermore, this method has been experimentally compared with the direct elimination method on a class of test problems.
1996
 Quadratic optimization in data analysis
[Capitolo/Saggio]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
Two approaches for solving problems arising in data analysis are examined. The first consists in applying a substructuring technique that leads to solving a set of convex quadratic programming problems with linear equality constraints. These problems are independent and can be solved on different processors, using for each of them the method of multipliers combined with the conjugate gradient algorithm, which is particularly efficient on a vector processor. For problems with a sparser structure, such as the constrained interpolation of C^1 surfaces, a global approach can be used, which requires solving a strictly convex quadratic programming problem with equality and inequality constraints. In this case a class of methods deriving from the decomposition of the matrix of the objective function has been analysed theoretically. An extensive numerical experimentation on the Cray C90 shows the efficiency of one of these methods, called the diagonalization algorithm.
1996
 Splitting methods for constrained quadratic programs in data analysis
[Articolo su rivista]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
This paper is concerned with the numerical solution of a linearly constrained quadratic programming problem by methods that use a splitting of the objective matrix. We present an acceleration step for a general splitting algorithm and we establish the convergence of the resulting accelerated scheme. We report the results of numerical experiments on problems arising in constrained bivariate interpolation to evaluate the efficiency of this acceleration technique for a particular splitting of the objective matrix and for the corresponding extrapolated form.
1995
 The diagonalization method for strictly convex quadratic programs: analysis and implementation on vector computers
[Working paper]
Galligani, Emanuele; V., Ruggiero; Zanni, Luca
abstract
This report concerns the solution of linearly constrained strictly convex quadratic programming problems by a splitting iterative method, called the diagonalization algorithm (DA). This algorithm transforms the original problem into a sequence of subproblems easier to solve, for which a large number of efficient methods exists in the literature. In fact, each subproblem may be formulated as a linear complementarity problem or as a constrained least distance problem. We give conditions for DA convergence and we present an application to a large-scale sparse problem arising in constrained bivariate interpolation. An extensive experimentation on the CRAY C94 permits an evaluation of the DA performance. The Fortran 77 codes implementing the algorithms described above, carried out on the multivector computer Cray C94, are reported in an appendix.
1994
 Convergence properties of a projection method for affine variational inequality problems
[Articolo su rivista]
Zanni, Luca
abstract
1994
 On the efficiency and numerical stability of orthogonal factorization method for equality-constrained quadratic programming problems
[Working paper]
Galligani, Emanuele; Zanni, Luca
abstract
We consider the orthogonal factorization method for linear equality-constrained quadratic programming problems. A backward error analysis of this method was developed in a previous work. In this paper the efficiency and the numerical stability of the method are evaluated experimentally on a class of test problems. The computer experimentation is carried out on the CRAY C90 using the LINPACK library.
1992
 On the convergence rate of two projection methods for variational inequalities in R^n
[Articolo su rivista]
Zanni, Luca
abstract