
Mauro LEONCINI
Full Professor, Dipartimento di Scienze Fisiche, Informatiche e Matematiche (sede ex-Matematica)

Publications
2022
 An Algorithm to Predict E-Bike Power Consumption Based on Planned Routes
[Journal article]
Burani, E.; Cabri, G.; Leoncini, M.
abstract
E-bikes, i.e., bikes equipped with a small electric engine, are becoming increasingly widespread, thanks to their positive contribution to mobility and sustainability. A key component of an e-bike is the battery that feeds the drive unit: clearly, the higher the capacity of the battery, the longer the distances that the biker can cover under engine support. On the negative side, the additional weight of the electric components is likely to ruin the riding experience if the battery runs out of power. For this reason, an integrated hardware-software system that provides accurate information about the remaining range is essential, especially for older or “not-in-shape” bikers. Many e-bike systems are already equipped with a small control unit that displays useful information, such as speed, instantaneous power consumption, and estimated range. Existing approaches rely on machine learning techniques applied to collected data, or simply on the remaining battery capacity and the assistance level required by the drive unit. They do not consider crucial aspects of the planned route, in particular the difference in altitude, the combined weight of bike and biker, and road conditions. In this paper, we propose a mathematical model, implemented in an application, to compute battery consumption, and hence the presumed remaining range, in a more accurate way. Our application relies on external sources to compute the route and the elevation data of a number of intermediate points. We present the mathematical model on which our application is based, we show its implementation as an app, and we report the results of our experiments.
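The abstract describes a physics-based consumption model without giving its equations. As a hedged illustration only, the sketch below uses the standard cycling power balance (gravity, rolling resistance, aerodynamic drag) over route segments; all constants, function names, and the per-segment route format are illustrative assumptions, not details taken from the paper.

```python
# Illustrative physical constants (assumptions, not values from the paper)
G = 9.81     # gravity, m/s^2
RHO = 1.225  # air density, kg/m^3
CRR = 0.008  # rolling resistance coefficient
CDA = 0.5    # drag area Cd*A, m^2
ETA = 0.8    # drive-train + motor efficiency

def segment_energy_wh(dist_m, climb_m, speed_ms, mass_kg):
    """Mechanical energy (Wh) to ride one route segment at constant speed.

    Standard cycling power balance: gravity + rolling resistance +
    aerodynamic drag, divided by drive efficiency.
    """
    slope = climb_m / dist_m  # small-angle approximation: sin ~ tan
    p_gravity = mass_kg * G * slope * speed_ms
    p_rolling = CRR * mass_kg * G * speed_ms
    p_drag = 0.5 * RHO * CDA * speed_ms ** 3
    power_w = max(p_gravity + p_rolling + p_drag, 0.0) / ETA
    time_s = dist_m / speed_ms
    return power_w * time_s / 3600.0  # J -> Wh

def route_energy_wh(segments, speed_ms=5.0, mass_kg=100.0):
    """Sum per-segment energies for a route given as (distance, climb) pairs."""
    return sum(segment_energy_wh(d, c, speed_ms, mass_kg) for d, c in segments)

# A flat 10 km route vs. one with 200 m of climbing over the first half
flat = route_energy_wh([(10_000, 0)])
hilly = route_energy_wh([(5_000, 200), (5_000, 0)])
```

In the paper's setting, only the battery's share of this total (engine assistance minus rider input) would be charged to the battery; that split is omitted here.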
2022
 Multifunctionally-doped PEDOT for organic electrochemical transistors
[Journal article]
Carli, S; Bianchi, M; Di Lauro, M; Prato, M; Toma, A; Leoncini, M; De Salvo, A; Murgia, M; Fadiga, L; Biscarini, F
abstract
Organic Electrochemical Transistors (OECTs) are suitable for developing ultrasensitive bioelectronic sensors. In the OECT architecture, the source-drain channel is made of a conductive polymer film, either cast from a formulated dispersion or electrodeposited from a monomer solution. The commercial poly(3,4-ethylenedioxythiophene)/poly(styrene sulfonate) (PEDOT:PSS) water dispersion is the workhorse of organic bioelectronics for its high conductance, low impact and ease of processability. In this study, a hybrid OECT channel fabrication strategy is presented, where electrochemical deposition of a PEDOT/X film (with X indicating the counterion) is performed on a dispersion-cast PEDOT:PSS film. Six different counterions were used: X = PSS, Nafion, hyaluronate, dextran sulfate, dexamethasone phosphate and tauroursodeoxycholic acid, each potentially endowing OECTs with additional functions, such as ion exchange and pharmacological activity upon release of X. The PEDOT/X-PEDOT:PSS bilayers were characterized by means of electrochemical impedance spectroscopy (EIS), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS) and focused ion beam tomography combined with scanning electron microscopy (FIB-SEM). In addition, the respective OECTs were characterized and compared to PEDOT:PSS OECTs. Our results show that the hybrid bilayer strategy is viable to fabricate multifunctional OECTs with biologically relevant functions, thereby retaining the outstanding figures of merit of commercial PEDOT:PSS.
2022
 Re-identification Attack based on Few-Hints Dataset Enrichment for Ubiquitous Applications
[Conference proceedings paper]
Artioli, A.; Bedogni, L.; Leoncini, M.
abstract
Ubiquitous and pervasive applications record a large amount of data about users, to provide context-aware and tailored services. Although this enables more personalized applications, it also poses several questions concerning the possible misuse of such data by a malicious entity, which may discover private and sensitive information about the users themselves. In this paper we propose an attack on pseudo-anonymized datasets of ubiquitous applications, which can be leaked or accessed by the attacker. We enrich the data with true information that the attacker can obtain from a multitude of sources, which eventually sparks a chain reaction on the records of the dataset, possibly re-identifying users. Our results indicate that, through this attack and with few hints added to the dataset, the possibility of re-identification is considerable, achieving more than 70% re-identified users on a publicly available dataset. We compare our proposal with the state of the art, showing the improved performance figures obtained thanks to the graph modeling of the dataset records and the novel hint structure.
2021
 WebFPTI: A tool to predict the toxicity/pathogenicity of mineral fibres including asbestos
[Journal article]
Gualtieri, A. F.; Leoncini, M.; Rinaldi, L.; Zoboli, A.; Di Giuseppe, D.
abstract
The risk assessment for the human health effects of airborne mineral fibres, including asbestos minerals, is a complex task. WebFPTI is browser-based software, written in Python, that allows users to calculate an index of the toxicity and pathogenicity potential of mineral fibres based on their crystal-chemical-physical properties. WebFPTI can be a powerful tool for both academia and environmental/health institutions to classify the carcinogenicity potential of (respirable) mineral fibres, so that specific actions aimed at protecting workers and the general public can be fostered.
2020
 Distributed balanced color assignment on arbitrary networks
[Journal article]
De Marco, G.; Leoncini, M.; Montangero, M.
abstract
Consider a scenario in which a set of n agents hold items, where each item can be of one out of m possible types (colors). The agents are connected by an arbitrary network and have to distributively find a repartition of the colors such that the number of colors for each agent is as balanced as possible: in the particular case where m is a multiple of n, each agent must have exactly m/n colors. More formally, the goal is to let the agents agree on an assignment of colors to agents such that the following two conditions hold: (i) each color is assigned to exactly one agent; (ii) each agent has at least ⌊m/n⌋ and at most ⌈m/n⌉ colors. Among all possible such repartitions, we seek the one that minimizes the number of “changes” (measured in terms of misplaced items) with respect to the initial configuration. In other words, our aim is to design a distributed algorithm that finds a balanced coloring of minimum cost, where the cost of a coloring is the number of items that need to be relocated. Questions of this kind, usually modeled as generalized bipartite matching problems, have been studied in the distributed setting only on clusters of commodity nodes, with the aim of coping with real-world massive instances, which may involve millions of agents and colors. Here we propose the first distributed algorithm designed to run on an arbitrary network with the agents as nodes. Our algorithm turns out to be efficient in terms of both time and message complexity for a large class of graphs. Our results can be outlined as follows. We prove a lower bound Ω(m/n⋅D²) on message complexity, where D is the diameter of the graph, that holds for any approximation algorithm whose solution has cost at most 2(α−2)/α times the cost of any optimal solution, for every constant α>2. We give a distributed deterministic algorithm with time complexity O(max{n², D log q}) and message complexity O(n log n⋅(log q + m log m)), where q is the maximum number of items of a given color held by any agent. We show that the cost of our solution for arbitrary graphs is at most (2+δ) times the optimal cost, for any δ>0. We finally observe that, for large-diameter graphs (i.e., D=Ω(n^ε), ε>0), we get matching lower and upper bounds on message complexity for the vast majority of instances of potential interest, that is, instances with a polynomial number of colors and an (up to) super-exponential number of items.
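To make the two conditions and the cost measure above concrete, here is a small centralized sketch (hypothetical helper names; this is not the paper's distributed algorithm) that checks the balance constraint and computes the relocation cost of a candidate color assignment.

```python
def is_balanced(assignment, n_agents, n_colors):
    """Condition (ii): each agent gets between floor(m/n) and ceil(m/n) colors.

    `assignment` maps each color to the agent (an int in 0..n_agents-1)
    that it is assigned to; condition (i) holds because it is a mapping.
    """
    counts = [0] * n_agents
    for agent in assignment.values():
        counts[agent] += 1
    lo, hi = n_colors // n_agents, -(-n_colors // n_agents)  # floor, ceil
    return len(assignment) == n_colors and all(lo <= c <= hi for c in counts)

def relocation_cost(holdings, assignment):
    """Cost of an assignment: every item of color c held by an agent other
    than assignment[c] must be relocated.

    holdings[agent][color] = number of items of that color the agent holds.
    """
    return sum(cnt
               for agent, by_color in holdings.items()
               for color, cnt in by_color.items()
               if assignment[color] != agent)

# Two agents, two colors: agent 0 holds most red, agent 1 most blue,
# so assigning red->0 and blue->1 relocates only the two minority items.
holdings = {0: {'red': 3, 'blue': 1}, 1: {'red': 1, 'blue': 2}}
assignment = {'red': 0, 'blue': 1}
cost = relocation_cost(holdings, assignment)
```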
2019
 A Parallel Branch-and-Bound Algorithm to Compute a Tighter Tardiness Bound for Preemptive Global EDF
[Journal article]
Leoncini, Mauro; Montangero, Manuela; Valente, Paolo
abstract
In this paper we present a parallel exact algorithm to compute an upper bound on the tardiness of preemptive Global EDF (GEDF) schedulers, named the harmonic bound, which has been proved to be up to 30% tighter than previously proposed bounds. Tightness is a crucial property of tardiness bounds: a too loose bound may cause a feasible soft real-time application to be mistakenly deemed unfeasible. Unfortunately, no polynomial-time algorithm is known to date to compute the harmonic bound. Although there is no proof of hardness of any sort either, the complex formula of the bound apparently provides no hints to devise algorithms with sub-exponential worst-case cost.
In this paper we address this issue by proposing a parallel, exact, branch-and-bound algorithm to compute the harmonic bound, called harmBB, which proves to be extremely fast in a large number of experiments. More specifically, we compare its execution times with those of existing polynomial-time algorithms for other known tardiness bounds on 630000 random task sets. harmBB outperforms, or is comparable to, the competitor algorithms in all scenarios but the ones with the highest number of processors (7 and 8) and tasks (50). In the latter scenarios harmBB is indeed slower than the other algorithms; yet, it is still feasible, as it takes only about 2.8 s to compute the bound on a commodity dual-core CPU. Even better, we show that harmBB has a high parallel efficiency, thus its execution time may be largely cut down on highly parallel platforms.
2019
 A distributed message-optimal assignment on rings
[Journal article]
De Marco, G.; Leoncini, M.; Montangero, M.
abstract
Consider a set of items and a set of m colors, where each item is associated with one color. Consider also n computational agents connected in a ring. Each agent holds a subset of the items, and items of the same color can be held by different agents. We analyze the problem of distributively assigning colors to agents in such a way that (a) each color is assigned to one agent only and (b) the number of different colors assigned to each agent is minimum. Since any color assignment requires the items to be distributed according to it (e.g., all items of the same color are to be held by only one agent), we define the cost of a color assignment as the number of items that need to be moved, given an initial allocation. We first show that any distributed algorithm for this problem requires a message complexity of Ω(n⋅m), and then we exhibit an optimal message complexity algorithm for synchronous and asynchronous rings that in polynomial time determines a color assignment with cost at most three times the optimal. We show that the approximation is tight and how to get a solution of better cost at the expense of either the message or the time complexity. Finally, we present some experiments showing that, in practice, our algorithm performs much better than the theoretical worst-case scenario.
2017
 A branch-and-bound algorithm to compute a tighter bound to tardiness for the preemptive global EDF scheduler
[Conference proceedings paper]
Leoncini, Mauro; Montangero, Manuela; Valente, Paolo
abstract
Tightness is a crucial property of tardiness bounds; in fact, a too loose bound may cause a feasible soft real-time application to be mistakenly deemed unfeasible. Recently, a new tardiness bound for preemptive global EDF (GEDF), named the harmonic bound, has been proposed and proved to be up to 30% tighter than previous bounds. Unfortunately, no polynomial-time algorithm is known to date to compute the harmonic bound. Although there is no proof of hardness of any sort either, the complex formula of the bound apparently provides no hints to devise algorithms with sub-exponential worst-case cost. In this paper we address this issue by proposing an exact branch-and-bound algorithm, called harmBB, which proved to be extremely fast in a large number of experiments. More specifically, we compared its execution times with those of existing polynomial-time algorithms for other known similar bounds on 630000 random task sets. harmBB outperformed the competitor algorithms in all scenarios but the ones with the highest number of processors and tasks (8 and ~50, respectively). However, even in the latter cases, harmBB was at most 1.5 times slower than the other algorithms and, in absolute terms, took only about 4.4 ms to compute the bound on commodity hardware.
2017
 Distributed Beta-Assignment on graphs
[Conference proceedings paper]
DE MARCO, Gianluca; Leoncini, Mauro; Mazzali, Lucia; Montangero, Manuela
abstract
Consider a set of items and a set of m colors, where each item is associated with one color. Consider also n computational agents connected by an arbitrary graph. Each agent initially holds a subset of the items. We analyze the problem of distributively assigning colors to agents in such a way that (a) each color is assigned to one agent only, (b) the number of different colors assigned to each agent is minimum, and (c) the number of items initially assigned to agents that are not consistent with the assignment of colors is minimum. This problem, known in the centralized setting as a matching problem, was introduced in the distributed setting in [3] for the simple ring topology. Here, we generalize the problem to arbitrary graphs, solving it with a distributed algorithm which is efficient both in terms of time complexity and message complexity for a large class of graphs. Namely, our results can be outlined as follows. We prove a lower bound of Ω(m/n⋅D²) on the message complexity, where D is the diameter of the graph. We give a distributed deterministic algorithm with time complexity O(max{n², D log q}) and message complexity O(n log n⋅(log q + m log m)), where q is the maximum number of items of a given color held by an agent. It can be noticed that for a large class of instances of practical interest, namely for m ∈ O(n^c), for some constant c, and q ∈ O(m^m), our algorithm exhibits a message complexity of O(m⋅n), which turns out to be optimal, in view of our lower bound, for graphs with diameter linear in the number of nodes. We finally show that the cost of our solution for arbitrary graphs is at most three times the optimal cost (found by a centralized algorithm).
2016
 Towards User-Aware Service Composition
[Conference proceedings paper]
Cabri, Giacomo; Leoncini, Mauro; Martoglia, Riccardo; Zambonelli, Franco
abstract
Our everyday life is more and more supported by information technology in general and by specific services provided by means of our electronic devices. The AMBIT project (Algorithms and Models for Building context-dependent Information delivery Tools) aims at providing support to develop services that are automatically tailored based on the user profile. However, while the adaptation of single services is the first step, the next step is to achieve adaptation in the composition of different services. In this paper, we explore how services can be composed in a user-aware way, in order to decide on the composition that best meets users’ requirements. That is, we exploit the user profile not only to provide the user with customized services, but also to compose them in a suitable way.
2015
 CMStalker: a combinatorial tool for composite motif discovery
[Journal article]
Leoncini, Mauro; Montangero, Manuela; PANUCIA TILLAN, Karina; Pellegrini, Marco
abstract
Controlling the differential expression of many thousands of different genes at any given time is a fundamental task of metazoan organisms, and this complex orchestration is controlled by the so-called regulatory genome encoding complex regulatory networks: several Transcription Factors bind to precise DNA regions, so as to perform in a cooperative manner a specific regulation task for nearby genes. The in silico prediction of these binding sites is still an open problem, notwithstanding continuous progress and activity in the last two decades. In this paper we describe a new efficient combinatorial approach to the problem of detecting sets of cooperating binding sites in promoter sequences, given in input a database of Transcription Factor Binding Sites encoded as Position Weight Matrices. We present CMStalker, a software tool for composite motif discovery which embodies a new approach that combines a constraint satisfaction formulation with a parameter relaxation technique to explore the space of possible solutions efficiently. Extensive experiments with twelve datasets and eleven state-of-the-art tools are reported, showing an average value of the correlation coefficient of 0.54 (against a value of 0.41 for the closest competitor). This improvement in output quality due to CMStalker is statistically significant.
2015
 CNVScan: detecting borderline copy number variations in NGS data via scan statistics
[Conference proceedings paper]
Bergamini, Elisabetta; D'Aurizio, Romina; Leoncini, Mauro; Pellegrini, Marco
abstract
In this paper we propose CNVScan, a CNV detection method based on scan statistics that overcomes limitations of previous read-count (RC) based approaches, mainly by being a windowless approach. Scan statistics have been used before, mainly in epidemiology and ecology studies, but to the best of our knowledge they had never been applied to the CNV detection problem. Since we avoid windowing, we do not have to choose an optimal window size, which is a key step in many previous approaches. Extensive simulated experiments with single-read data in extreme situations (low coverage, short reads, homo-/heterozygosity) show that this approach is very effective for a range of small CNVs (200–500 bp) for which previous state-of-the-art methods are not suitable.
2015
 Investigating Power and Limitations of Ensemble Motif Finders Using Metapredictor CE3
[Journal article]
Leoncini, Mauro; Montangero, Manuela; Panucia Tillan, Karina
abstract
Ensemble methods represent a relatively new approach to motif discovery that combines the results returned by “third-party” finders with the aim of achieving better accuracy than that obtained by the single tools. Besides the choice of the external finders, another crucial element for the success of an ensemble method is the particular strategy adopted to combine the finders' results, a.k.a. the learning function.
Results that have appeared in the literature seem to suggest that ensemble methods can provide noticeable improvements over the quality of the most popular tools available for motif discovery.
With the goal of better understanding the potential and limitations of ensemble methods, we developed a general software architecture whose major feature is flexibility with respect to the crucial aspects of ensemble methods mentioned above. The architecture provides facilities for the easy addition of virtually any third-party tool for motif discovery whose code is publicly available, and for the definition of new learning functions. We present a prototype implementation of our architecture, called CE3 (Customizable and Easily Extensible Ensemble).
Using CE3 and available ensemble methods, we performed experiments with three well-known datasets. The results presented here are varied. On the one hand, they confirm that ensemble methods cannot be considered the universal remedy for in silico motif discovery. On the other hand, we found some encouraging regularities that may help to find a general setup for CE3 (and other ensemble methods as well) able to guarantee substantial improvements over single finders in a systematic way.
2014
 AMBIT: Towards an Architecture for the Development of Context-dependent Applications and Systems
[Conference proceedings paper]
Cabri, Giacomo; Leoncini, Mauro; Martoglia, Riccardo
abstract
The development of ubiquitous services tailored to the needs and expectations of a very large number of potential users (especially mobile users) requires that future applications and systems be aware of the service fruition contexts and possibly of accurate user profiles. The AMBIT research project aims at providing a general model of context as well as a platform that can be exploited to build and deploy different kinds of context-dependent applications and systems. We aim at overcoming the restrictions of the existing approaches, which are mainly due to the limited notion of context they propose (if any). In particular, we stress the fact that current technologies do not accurately consider the notion of context semantics and user profile, which is the main source of the flooding of useless data that overloads systems and often users' minds.
2013
 CE^3: Customizable and Easily Extensible Ensemble Tool for Motif Discovery
[Conference proceedings paper]
PANUCIA TILLAN, Karina; Leoncini, Mauro; Montangero, Manuela
abstract
Ensemble methods (or simply ensembles) for motif discovery represent a relatively new approach to improve the accuracy of standalone motif finders. In particular, the accuracy of an ensemble is determined by the included finders and the strategy (learning rule) used to combine the results returned by the latter, making these choices crucial for the ensemble's success. In this research we propose a general architecture for ensembles, called CE3, which is meant to be extensible and customizable with respect to external tool inclusion and the learning rule. Using CE3, the user is able to “simulate” existing ensembles and possibly incorporate newly proposed tools (and learning functions) with the aim of improving the ensemble's prediction accuracy. Preliminary experiments performed with a prototype implementation of CE3 led to interesting insights and a critical analysis of the potential and limitations of currently available ensembles.
2013
 CMF: a Combinatorial Tool to Find Composite Motifs
[Conference proceedings paper]
Leoncini, Mauro; Montangero, Manuela; Pellegrini, M.; PANUCIA TILLAN, Karina
abstract
Controlling the differential expression of many thousands of genes at any given time is a fundamental task of metazoan organisms, and this complex orchestration is controlled by the so-called regulatory genome encoding complex regulatory networks. Cis-Regulatory Modules are fundamental units of such networks. To detect Cis-Regulatory Modules in silico, a key step is the discovery of recurrent clusters of DNA binding sites for sets of cooperating Transcription Factors. Composite motif is the term often adopted to refer to these clusters of sites. In this paper we describe CMF, a new efficient combinatorial method for the problem of detecting composite motifs, given in input a description of the binding affinities for a set of transcription factors. Testing with known benchmark data, we attain statistically significantly better performance against nine state-of-the-art competing methods.
2012
 Direct vs 2-stage approaches to structured motif finding
[Journal article]
Federico, Maria; Leoncini, Mauro; Montangero, Manuela; Valente, Paolo
abstract
Background
The notion of a DNA motif is a mathematical abstraction used to model regions of the DNA (known as Transcription Factor Binding Sites, or TFBSs) that are bound by a given Transcription Factor to regulate gene expression or repression. In turn, DNA structured motifs are a mathematical counterpart that models sets of TFBSs that work in concert in the gene regulation processes of higher eukaryotic organisms. Typically, a structured motif is composed of an ordered set of isolated (or simple) motifs, separated by a variable, but somewhat constrained, number of "irrelevant" base pairs. Discovering structured motifs in a set of DNA sequences is a computationally hard problem that has been addressed by a number of authors using either a direct approach, or via the preliminary identification and successive combination of simple motifs.
Results
We describe a computational tool, named SISMA, for the de-novo discovery of structured motifs in a set of DNA sequences. SISMA is an exact, enumerative algorithm, meaning that it finds all the motifs conforming to the specifications. It does so in two stages: first it discovers all the possible component simple motifs, then it combines them in a way that respects the given constraints. We developed SISMA mainly with the aim of understanding the potential benefits of such a 2-stage approach with respect to direct methods. In fact, no 2-stage software was available for the general problem of structured motif discovery, but only a few tools that solved restricted versions of the problem. We evaluated SISMA against other published tools on a comprehensive benchmark made of both synthetic and real biological datasets. In a significant number of cases, SISMA outperformed the competitors, exhibiting good performance also in most of the cases in which it was inferior.
Conclusions
A reflection on the results obtained led us to conclude that a 2-stage approach can be implemented with many advantages over direct approaches. Some of these have to do with greater modularity, ease of parallelization, and the possibility to perform adaptive searches of structured motifs. As another consideration, we noted that most hard instances for SISMA were easy to detect in advance. In these cases one may initially opt for a direct method; or, as a viable alternative in most laboratories, one could run both direct and 2-stage tools in parallel, halting the computations when the first halts.
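As a toy illustration of the 2-stage idea described above (not SISMA's actual algorithm: real finders allow mismatches and use far more sophisticated enumeration), the sketch below first collects exact k-mers common to all input sequences, then pairs them under a gap constraint; all names and the exact-match simplification are assumptions.

```python
from itertools import product

def simple_motifs(seqs, k):
    """Stage 1: exact k-mers occurring in every sequence (a toy stand-in
    for enumerative simple-motif discovery)."""
    common = None
    for s in seqs:
        kmers = {s[i:i + k] for i in range(len(s) - k + 1)}
        common = kmers if common is None else common & kmers
    return common or set()

def structured_motifs(seqs, k, gap_min, gap_max):
    """Stage 2: combine simple motifs into ordered pairs (a, b) such that,
    in every sequence, some occurrence of a is followed by b after a gap
    of between gap_min and gap_max "irrelevant" base pairs."""
    singles = simple_motifs(seqs, k)
    result = set()
    for a, b in product(singles, repeat=2):
        if all(any(s.find(b, i + k + gap_min, i + k + gap_max + len(b)) != -1
                   for i in range(len(s)) if s.startswith(a, i))
               for s in seqs):
            result.add((a, b))
    return result
```

A 2-stage tool of this shape can reuse stage-1 results across different gap constraints, which is one of the modularity advantages the conclusions mention.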
2011
 A High Performing Tool for Residue Solvent Accessibility Prediction
[Conference proceedings paper]
Palmieri, Lorenzo; Federico, Maria; Leoncini, Mauro; Montangero, Manuela
abstract
Much effort has been spent in recent years on bridging the gap between the huge number of sequenced proteins and the relatively few solved structures. Relative Solvent Accessibility (RSA) prediction of residues in protein complexes is a key step towards secondary structure and protein-protein interaction site prediction. With very different approaches, a number of software tools for RSA prediction have been produced over the last twenty years. Here, we present a binary classifier which implements a new method mainly based on sequence homology and implemented by means of lookup tables. The tool exploits residue similarity in the solvent exposure pattern of the neighboring context in similar protein chains, using BLAST search and DSSP structure. A two-state classification with 89.5% accuracy and a 0.79 correlation coefficient against the real data is achieved on a widely used dataset.
2010
 Self Organization and Self Maintenance of Mobile Ad Hoc Networks through Dynamic Topology Control
[Conference proceedings paper]
Blough, D. M.; Leoncini, Mauro; Resta, G.; Santi, P.
abstract
One way in which wireless nodes can organize themselves into an ad hoc network is to execute a topology control protocol, which is designed to build a network satisfying specific properties. A number of basic topology control protocols exist and have been extensively analyzed. Unfortunately, most of these protocols are designed primarily for static networks, and the protocol designers simply advise that the protocols should be repeated periodically to deal with failures, mobility, and other sources of dynamism. However, continuously maintaining a network topology with basic connectivity properties is a fundamental requirement for overall network dependability. Current approaches consider failures only as an afterthought or take a static fault tolerance approach, which results in extremely high energy usage and low throughput. In addition, most of the existing topology control protocols assume that transmission power is a continuous variable and, therefore, that nodes can choose an arbitrary power value between some minimum and maximum powers. However, wireless network interfaces with dynamic transmission power control permit the power to be set to one of a discrete number of possible values. This simple restriction complicates the design of the topology control protocol substantially. In this paper, we present a set of topology control protocols which work with discrete power levels and for which we specify a version that deals specifically with dynamic networks that experience failures, mobility, and other dynamic conditions. Our protocols are also novel in the sense that they are the first to consider explicit coordination between neighboring nodes, which results in more efficient power settings. In this paper, we present the design of these topology control protocols, and we report on extensive simulations to evaluate them and compare their performance against existing protocols. The results demonstrate that our protocols produce topologies very similar to those of the best protocols that assume power is a continuous variable, while having very low communication cost and seamlessly handling failures and mobility.
2009
 An Efficient Algorithm for Planted Composite Motif Extraction
[Conference proceedings paper]
Federico, M.; Valente, Paolo; Leoncini, Mauro; Montangero, Manuela; Cavicchioli, R.
abstract
In this paper we present an algorithm for the problem of planted structured motif extraction from a set of sequences. This problem is strictly related to the structured motif extraction problem, which has many important applications in molecular biology. We propose an algorithm that uses a simple two-stage approach: first it extracts simple motifs, then the simple motifs are combined in order to extract structured motifs. We compare our algorithm with existing algorithms whose code is available, and which are based on more complex approaches. Our experiments show that, even if the problem is NP-hard in general, our algorithm is able to handle complex instances of the problem in a reasonable amount of time.
2009
 KBoost: a Scalable Algorithm for High-Quality Clustering of Microarray Gene Expression Data
[Journal article]
Geraci, F; Leoncini, Mauro; Montangero, Manuela; Pellegrini, M; Renda, M. E.
abstract
In this paper we propose KBoost, a novel clustering algorithm based on a combination of the Furthest-Point-First (FPF) heuristic for solving the metric k-center problem, a stability-based method for determining the number of clusters, and a k-means-like cluster refinement. Experiments show that KBoost exhibits a good quality/running-time tradeoff that makes it ideal for large datasets, with quality measured by several internal and external criteria.
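The FPF heuristic named above is Gonzalez's classic greedy 2-approximation for metric k-center. A minimal sketch (the interface and distance callback are assumptions; KBoost combines this with further steps not shown here):

```python
def fpf_kcenter(points, k, dist):
    """Gonzalez's Furthest-Point-First heuristic for metric k-center.

    Repeatedly picks the point farthest from the current set of centers;
    yields a 2-approximation of the optimal k-center radius in any metric.
    """
    centers = [points[0]]                      # arbitrary first center
    d = [dist(p, centers[0]) for p in points]  # distance to nearest center
    for _ in range(1, k):
        i = max(range(len(points)), key=d.__getitem__)  # farthest point
        centers.append(points[i])
        # update each point's distance to its nearest center
        d = [min(d[j], dist(p, points[i])) for j, p in enumerate(points)]
    return centers

# 1-D example: three well-separated groups on a line
centers = fpf_kcenter([0, 1, 10, 11, 20], 3, lambda a, b: abs(a - b))
```

Each of the k iterations is a linear scan, so the heuristic runs in O(n·k) distance evaluations, which is what makes it attractive for large expression datasets.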
2009
 Partially Controlled Deployment Strategies for Wireless Sensors
[Journal article]
Leoncini, Mauro; Resta, G; Santi, P.
abstract
In this paper we study the following problem: we are given a certain one- or two-dimensional region R to monitor and a requirement on the degree of coverage (DoC) of R to be met by a network of deployed sensors. The latter will be dropped by a moving vehicle, which can release sensors at arbitrary points within R. The node spatial distribution when sensors are dropped at a certain point is modeled by a certain probability density function F. The network designer is allowed to choose an arbitrary set of drop points, and to release an arbitrary number of sensors at each point. Given this setting, we consider the problem of determining the best performing strategy among a certain set of grid-like strategies that reflect the (one- or two-dimensional) symmetry of the region to be monitored. The best performing deployment strategy is such that (1) the DoC requirement is fulfilled and (2) the total number of deployed nodes n is minimum. We study this problem both analytically and through simulation, under the assumption that F is the two-dimensional Normal distribution centered at the drop point. The main contribution of this paper is an in-depth study of the interrelationships between environmental conditions, DoC requirement, and cost of the deployment.
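As a hedged sketch of the kind of simulation the abstract describes, the Monte Carlo routine below estimates the fraction of a rectangular region covered when sensors scatter around each drop point according to a two-dimensional Normal distribution. DoC is simplified here to 1-coverage, and every name, parameter, and default value is an illustrative assumption rather than the paper's setup.

```python
import random

def estimate_doc(drop_points, n_per_drop, sigma, sensing_r,
                 region=(0.0, 0.0, 100.0, 100.0), samples=2000, seed=42):
    """Monte Carlo estimate of the fraction of `region` (x0, y0, x1, y1)
    covered by at least one sensor (a simplified degree of coverage)."""
    rng = random.Random(seed)
    # Scatter sensors: independent Normal x/y offsets around each drop point.
    sensors = [(rng.gauss(px, sigma), rng.gauss(py, sigma))
               for px, py in drop_points
               for _ in range(n_per_drop)]
    x0, y0, x1, y1 = region
    r2 = sensing_r ** 2
    covered = 0
    for _ in range(samples):
        x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
        if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
            covered += 1
    return covered / samples

# Example grid-like strategy: a 3x3 grid of drop points over a 100x100 region
grid = [(x, y) for x in (17, 50, 83) for y in (17, 50, 83)]
doc = estimate_doc(grid, n_per_drop=10, sigma=8.0, sensing_r=12.0)
```

Sweeping `n_per_drop` until the estimated DoC meets the requirement mimics, in miniature, the cost-minimization question the paper studies analytically.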
2008
 An STDMA-Based Framework for QoS Provisioning in Wireless Mesh Networks
[Relazione in Atti di Convegno]
Leoncini, Mauro; Santi, P; Valente, Paolo
abstract
Providing strong QoS guarantees for wireless multihop networks is very challenging, due to many factors such as the use of a shared communication medium, variability in wireless link quality, and so on. However, wireless mesh technology gives the opportunity to alleviate some of these problems, due to the lack of mobility in the wireless infrastructure and the presence of natural centralization points in the network. The main contribution of this paper is the definition of a simple framework that exploits these features to provide provable, strong QoS guarantees to network clients. In particular, admitted clients are guaranteed a certain minimum bandwidth and maximum delay on their connections. The framework is based on STDMA scheduling at the MAC layer, which is periodically executed at the network manager to adapt to changes in traffic demand. While scheduling computation is centralized, admission control is performed locally at the wireless backbone nodes, thus reducing signaling. We propose two bandwidth distribution and related admission control policies, which are at opposite ends of the network utilization/spatial fairness trade-off. Through extensive simulations, we show that the proposed framework achieves its design goals of providing strong QoS guarantees to VoIP clients while not sacrificing throughput in a realistic mesh network scenario, even in the presence of highly unbalanced load at the backbone nodes. To the best of our knowledge, this is the first proposal with similar features for wireless mesh networks.
2007
 FPF-SB: A SCALABLE ALGORITHM FOR MICROARRAY GENE EXPRESSION DATA CLUSTERING
[Relazione in Atti di Convegno]
Geraci, F; Leoncini, Mauro; Montangero, Manuela; Pellegrini, M; Renda, M. E.
abstract
Efficient and effective analysis of large datasets from microarray gene expression data is one of the keys to time-critical personalized medicine. The issue we address here is the scalability of the data processing software for clustering gene expression data into groups with homogeneous expression profile. In this paper we propose FPF-SB, a novel clustering algorithm based on a combination of the Furthest-Point-First (FPF) heuristic for solving the k-center problem and a stability-based method for determining the number of clusters k. Our algorithm improves the state of the art: it is scalable to large datasets without sacrificing output quality.
2007
 Topology Control with Better Radio Models: Implications for Energy and Multi-Hop Interference
[Articolo su rivista]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
Topology Control (TC) is a well-studied technique used in wireless ad hoc networks to find energy-efficient and/or low-interference subgraphs of the max-power communication graph. However, existing work has the following limitations: (1) the energy model adopted is quite unrealistic (only transmit power is often considered, and homogeneous decay of the radio signal with distance is assumed); (2) the interference measure does not account for multihop communications. In this paper, we show the dramatic effect of the underlying energy and interference model on TC. In particular, we demonstrate that by using more realistic energy models and considering the effects of multihop interference, radically different conclusions about TC can be drawn; namely that (1) energy-efficient TC is essentially meaningless, since every link turns out to be "efficient", and that (2) topologies identified as "interference-optimal" in the current literature can be extremely bad from the viewpoint of multihop interference. Given these observations, we propose a new measure of link interference, extend it to deal with multihop interference, and design a corresponding optimal communication subgraph, called ATASP. We prove that, in the worst case, ATASP coincides with the max-power communication graph, showing that in some unfortunate situations also performing multihop interference-based TC is pointless. However, the simulation results with random node deployments presented in this paper show that, on the average, ATASP is a sparse subgraph of the max-power communication graph, and multihop interference-based TC is indeed possible. Since computing ATASP requires global knowledge, we experiment through simulation with known localized algorithms for energy-efficient TC and show that they perform well (on the average) with respect to multihop interference.
2006
 Distributed Algorithm for a Color Assignment on Asynchronous Rings
[Relazione in Atti di Convegno]
G., DE MARCO; Leoncini, Mauro; Montangero, Manuela
abstract
We study a version of the beta-assignment problem (Chang and Lee, 1988) on asynchronous rings: consider a set of items and a set of m colors, where each item is associated to one color. Consider also n computational agents connected by an asynchronous ring. Each agent holds a subset of the items, where initially different agents might hold items associated to the same color. We analyze the problem of distributively assigning colors to agents in such a way that (a) each color is assigned to one agent and (b) the number of different colors assigned to each agent is minimum. Since any color assignment requires that the items be distributed according to it (e.g., all items of the same color are to be held by only one agent), we define the cost of a color assignment as the amount of items that need to be moved, given an initial allocation. We first show that any distributed algorithm for this problem on the ring requires a communication complexity of Ω(n·m) and then we exhibit a polynomial time distributed algorithm with message complexity matching the bound, that determines a color assignment with cost at most (2 + ε) times the optimal cost, for any 0 < ε < 1.
2006
 Efficient computation of Nash equilibria for very sparse win-lose bimatrix games
[Relazione in Atti di Convegno]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
It is known that finding a Nash equilibrium for win-lose bimatrix games, i.e., two-player games where the players' payoffs are zero and one, is complete for the class PPAD. We describe a linear time algorithm which computes a Nash equilibrium for win-lose bimatrix games where the number of winning positions per strategy of each of the players is at most two. The algorithm acts on the directed graph that represents the zero-one pattern of the payoff matrices describing the game. It is based upon the efficient detection of certain subgraphs which enable us to determine the support of a Nash equilibrium.
2006
 The k-neighbors approach to interference bounded and symmetric topology control in ad hoc networks
[Articolo su rivista]
Dm, Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
Topology control, wherein nodes adjust their transmission ranges to conserve energy and reduce interference, is an important feature in wireless ad hoc networks. Contrary to most of the literature on topology control which focuses on reducing energy consumption, in this paper we tackle the topology control problem with the goal of limiting interference as much as possible, while keeping the communication graph connected with high probability. Our approach is based on the principle of maintaining the number of physical neighbors of every node equal to or slightly below a specific value k. As we will discuss in this paper, having a nontrivially bounded physical node degree allows a network topology with bounded interference to be generated. The proposed approach enforces symmetry on the resulting communication graph, thereby easing the operation of higher layer protocols. To evaluate the performance of our approach, we estimate the value of k that guarantees connectivity of the communication graph with high probability both theoretically and through simulation. We then define kNEIGH, a fully distributed, asynchronous, and localized protocol that uses distance estimation. kNEIGH guarantees logarithmically bounded physical degree at every node, is the most efficient known protocol (requiring 2n messages in total, where n is the number of nodes in the network), and relies on simpler assumptions than existing protocols. Furthermore, we verify through simulation that the network topologies produced by kNEIGH show good performance in terms of node energy consumption and expected interference.
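The graph construction at the heart of this approach can be illustrated with a small centralized sketch (the actual kNEIGH protocol is distributed, asynchronous, and based on estimated distances; the function name and Euclidean coordinates here are assumptions for illustration): every node lists its k nearest neighbors, and only mutual links are kept, which enforces the symmetric communication graph.

```python
import math

def symmetric_k_neighbors(nodes, k):
    """Centralized sketch of the k-neighbors construction: every node
    lists its k nearest neighbors, and an edge (i, j) survives only if
    the choice is mutual, which yields a symmetric graph."""
    n = len(nodes)

    def dist(i, j):
        return math.dist(nodes[i], nodes[j])

    knn = {i: sorted((j for j in range(n) if j != i),
                     key=lambda j: dist(i, j))[:k]
           for i in range(n)}
    # keep only symmetric (mutual) links, stored with i < j
    return {(i, j) for i in range(n) for j in knn[i]
            if j > i and i in knn[j]}
```

Dropping non-mutual links is what bounds the physical node degree and, in turn, the interference of the resulting topology.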
2005
 Analysis of a Wireless Sensors Dropping Problem in Environmental Monitoring
[Relazione in Atti di Convegno]
Leoncini, Mauro; G., Resta; P., Santi
abstract
In this paper we study the following problem: we are given a certain region R to monitor and a requirement on the degree of coverage (DoC) of R to meet by a network of deployed sensors. The latter will be dropped by a moving vehicle, which can release sensors at arbitrary points within R. The node spatial distribution when sensors are dropped at a certain point is modeled by a probability density function F. The network designer is allowed to choose an arbitrary set of drop points, and to release an arbitrary number of sensors at each point. Given this setting, we consider the problem of determining the optimal grid deployment strategy, i.e., the drop strategy in which release points are arranged in a grid such that (1) the DoC requirement is fulfilled and (2) the total number of deployed nodes n is minimum. This problem is relevant whenever manual node deployment is impossible or overly expensive, and partially controlled deployment is the only feasible choice. The main contribution of this paper is an accurate study of the interrelationships between environmental conditions, DoC requirement, and cost of the deployment. In particular, we show that, for a given value of σ and DoC requirement, optimal grid deployment strategies can be easily identified.
2005
 Comparison of Cell-Based and Topology Control-Based Energy Conservation in Wireless Sensor Networks
[Capitolo/Saggio]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
In this chapter we compare the effectiveness of two popular energy conservation strategies, namely topology control protocols and cooperative cell-based approaches, to reduce the energy consumption, and thus extend the lifetime, of wireless sensor networks. To this end, we define a realistic (although necessarily idealized in some respects) and unified framework of investigation. Using this framework, we prove lower bounds on network lifetime for cell-based cooperative strategies with node densities that are known to be sufficient to ensure a connected network. We also perform a number of simulation experiments, under a traffic model specifically designed for sensor networks, to evaluate and compare cell-based approaches with topology control. This is the first attempt at a comprehensive understanding of the conditions under which one of these approaches outperforms the other. Indeed, our study reveals a number of properties of the techniques investigated, some of which are not at all obvious. As expected, cell-based cooperative approaches, which completely power down network interfaces of certain nodes for extended time periods, produce longer network lifetimes when node density is very high. However, even with moderately high density of nodes, cell-based approaches do not significantly extend lifetime when it is defined in terms of connectivity. While it is not surprising that cell-based approaches do not extend lifetime under low density conditions, we find that topology control techniques can significantly increase network lifetime under those conditions and, in fact, they substantially outperform cooperative approaches in this respect. More details on the precise findings can be found in Section 31.7.
2005
 Topology Control with Better Radio Models: Implications for Energy and Multi-Hop Interference
[Relazione in Atti di Convegno]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
Topology Control (TC) is a well-studied technique used in wireless ad hoc networks to find energy-efficient and/or low-interference subgraphs of the max-power communication graph. However, existing work has the following limitations: (1) the energy model adopted is quite unrealistic (only transmit power is often considered, and homogeneous decay of the radio signal with distance is assumed); (2) the interference measure does not account for multihop communications. In this paper, we show the dramatic effect of the underlying energy and interference model on TC. In particular, we demonstrate that by using more realistic energy models and considering the effects of multihop interference, radically different conclusions about TC can be drawn; namely that (1) energy-efficient TC is essentially meaningless, since every link turns out to be "efficient", and that (2) topologies identified as "interference-optimal" in the current literature can be extremely bad from the viewpoint of multihop interference. Given these observations, we propose a new measure of link interference, extend it to deal with multihop interference, and design a corresponding optimal communication subgraph, called ATASP. We prove that, in the worst case, ATASP coincides with the max-power communication graph, showing that in some unfortunate situations also performing multihop interference-based TC is pointless. However, the simulation results with random node deployments presented in this paper show that, on the average, ATASP is a sparse subgraph of the max-power communication graph, and multihop interference-based TC is indeed possible. Since computing ATASP requires global knowledge, we experiment through simulation with known localized algorithms for energy-efficient TC and show that they perform well (on the average) with respect to multihop interference.
2004
 Approximation algorithms for a hierarchically structured bin packing problem
[Articolo su rivista]
B., Codenotti; G., DE MARCO; Leoncini, Mauro; Montangero, Manuela; M., Santini
abstract
In this paper we study a variant of the bin packing problem in which the items to be packed are structured as the leaves of a tree. The problem is motivated by document organization and retrieval. We show that the problem is NP-hard and we give approximation algorithms for the general case and for the particular case in which all the items have the same size.
2004
 Comparison of Cell-Based and Topology Control-Based Energy Conservation in Wireless Sensor Networks
[Relazione in Atti di Convegno]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
In this paper, we compare the effectiveness of two popular energy conservation strategies, namely topology control and cooperative cell-based approaches, to extend the lifetime of ad hoc networks. To this end, we define a realistic, though necessarily idealized in some respects, and unified framework of investigation. Using this framework, we perform a number of simulation experiments taking into account different dimensions of the analysis. In particular, we investigate realistic traffic models and energy parameters that fit the potential application settings of general ad hoc networks and sensor networks. We also consider alternative definitions of network lifetime and, in the case of general ad hoc networks, stationarity and mobility. Despite the growing number of papers on energy conservation, this is the first attempt at a comprehensive understanding of the conditions under which one approach outperforms the other. Indeed, our study reveals a number of properties of the techniques investigated, some of which are not at all obvious. We have found that cell-based cooperative approaches, which completely power down network interfaces of certain nodes for extended time periods, produce significantly longer network lifetimes when node density is very high and in the presence of mobility. While it is not surprising that cell-based approaches do not extend lifetime under low density conditions, we find that, for certain parameter choices, topology control techniques can significantly increase network lifetime under those conditions and, in fact, they substantially outperform cooperative approaches in this respect.
2003
 Computation of the Lovász Theta Function for Circulant Graphs
[Monografia/Trattato scientifico]
V. E., Brimkov; B., Codenotti; V., Crespi; R. P., Barneva; Leoncini, Mauro
abstract
2003
 Generating Realistic Data Sets for Combinatorial Auctions
[Relazione in Atti di Convegno]
A., Bonaccorsi; B., Codenotti; N., Dimitri; Leoncini, Mauro; G., Resta; P., Santi
abstract
We consider the generation of realistic data sets for combinatorial auctions. This problem has been recognized as central to enhance the contribution of the computer science community to the field. We put forward the notions of structure and budget as main guidelines towards the generation of succinct and realistic input data. We describe a computational framework for the analysis of existing algorithms against realistic benchmarks, and use it in the context of two real world scenarios, i.e., real estate and railroad track auctions. The results of this analysis suggest that the obstacles to using (one round) combinatorial auctions in real world applications might be of an economic nature rather than a computational one.
2003
 The k-Neigh Protocol for Symmetric Topology Control in Ad Hoc Networks
[Relazione in Atti di Convegno]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
We propose an approach to topology control based on the principle of maintaining the number of neighbors of every node equal to or slightly below a specific value k. The approach enforces symmetry on the resulting communication graph, thereby easing the operation of higher layer protocols. To evaluate the performance of our approach, we estimate the value of k that guarantees connectivity of the communication graph with high probability. We then define k-Neigh, a fully distributed, asynchronous, and localized protocol that follows the above approach and uses distance estimation. We prove that k-Neigh terminates at every node after a total of 2n messages have been exchanged (with n nodes in the network) and within strictly bounded time. Finally, we present simulation results which show that our approach is about 20% more energy-efficient than a widely studied existing protocol.
2002
 Distributed algorithms for certain assignment problems
[Articolo su rivista]
B., Codenotti; G., DE MARCO; Leoncini, Mauro; Montangero, Manuela
abstract
In the last two decades several approaches to indexing problems have appeared in the literature. Such interest in indexing was deeply renewed by the emergence of Web technologies and by the subsequent need of algorithmic tools for indexing, clustering, and/or classifying a very large amount of data. In this paper we consider the problem of classifying a set of documents against a set of colors (e.g., keywords or predefined categories), and we develop distributed algorithms for its solution. In particular, we seek efficient algorithms for the problem of assigning documents to a set of agents (connected according to, e.g., a ring topology) in such a way that each agent achieves the maximum possible specialization, i.e., it holds documents associated with the minimum number of different colors.
2002
 On the symmetric range assignment problem in wireless ad hoc networks
[Relazione in Atti di Convegno]
D. M., Blough; Leoncini, Mauro; G., Resta; P., Santi
abstract
Stream 1: Foundations of Information Technology in the Era of Network and Mobile Computing (TCS 2002).
2002
 Reliable Parallel Solution of Bidiagonal Systems
[Articolo su rivista]
I., BAR ON; Leoncini, Mauro
abstract
This paper presents a new efficient algorithm for solving bidiagonal systems of linear equations on massively parallel machines. We use a divide and conquer approach to compute a representative subset of the solution components, after which we solve the complete system in parallel with no communication overhead. We address the numerical properties of the algorithm in two ways: we show how to verify the a posteriori backward stability at virtually no additional cost, and prove that the algorithm is a priori forward stable. We then show how we can use the algorithm in order to bound the possible perturbations in the solution components.
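For reference, the sequential recurrence that such a parallel solver decouples is plain forward substitution on a lower bidiagonal matrix; a minimal sketch follows (the function name and argument layout are assumptions, and the paper's divide-and-conquer and stability machinery is omitted):

```python
def solve_lower_bidiagonal(d, l, b):
    """Forward substitution for a lower bidiagonal system
    d[i]*x[i] + l[i-1]*x[i-1] = b[i] (no subdiagonal term for i = 0).
    Each x[i] depends only on x[i-1], which is exactly the serial
    dependency a divide-and-conquer scheme breaks by precomputing
    selected components at the processor boundaries."""
    n = len(d)
    x = [0.0] * n
    x[0] = b[0] / d[0]
    for i in range(1, n):
        x[i] = (b[i] - l[i - 1] * x[i - 1]) / d[i]
    return x
```

Once the boundary components are known, each processor can run this recurrence independently on its own slice of the system.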
2001
 Distributed Algorithms for Certain Assignment Problems
[Relazione in Atti di Convegno]
B., Codenotti; G., De Marco; Leoncini, Mauro; Montangero, Manuela
abstract
In the last two decades several approaches to indexing problems have appeared in the literature. Such interest in indexing was deeply renewed by the emergence of Web technologies and by the subsequent need of algorithmic tools for indexing, clustering, and/or classifying a very large amount of data. In this paper we consider the problem of classifying a set of documents against a set of colors (e.g., keywords or predefined categories), and we develop distributed algorithms for its solution. In particular, we seek efficient algorithms for the problem of assigning documents to a set of agents (connected according to, e.g., a ring topology) in such a way that each agent achieves the maximum possible specialization, i.e., it holds documents associated with the minimum number of different colors.
2001
 The role of arithmetic in fast parallel matrix inversion
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro; F. P., Preparata
abstract
In the last two decades several NC algorithms for solving basic linear algebraic problems have appeared in the literature. This interest was clearly motivated by the emergence of a parallel computing technology and by the wide applicability of matrix computations. The traditionally adopted computation model, however, ignores the arithmetic aspects of the applications, and no analysis is currently available demonstrating the concrete feasibility of many of the known fast methods. In this paper we give strong evidence to the contrary, on the sole basis of the issue of robustness, indicating that some theoretically brilliant solutions fail the severe test of the “Engineering of Algorithms.” We perform a comparative analysis of several well-known numerical matrix inversion algorithms under both fixed- and variable-precision models of arithmetic. We show that, for most methods investigated, a typical input leads to poor numerical performance, and that in the exact-arithmetic setting no benefit derives from conditions usually deemed favorable in standard scientific computing. Under these circumstances, the only algorithm admitting sufficiently accurate NC implementations is Newton's iterative method, and the word size required to guarantee worst-case correctness appears to be the critical complexity measure. Our analysis also accounts for the observed instability of the considered superfast methods when implemented with the same floating-point arithmetic that is perfectly adequate for the fixed-precision approach.
2000
 A Java Framework for Internet Distributed Computations
[Capitolo/Saggio]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
2000
 ON THE ROBUSTNESS OF GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING
[Articolo su rivista]
P., Favati; Leoncini, Mauro; A., Martinez
abstract
It has been recently shown that large growth factors might occur in Gaussian Elimination with Partial Pivoting (GEPP) also when solving some plausibly natural systems. In this note we argue that this potential problem could be easily solved, with much smaller risk of failure, by very small (and low cost) modifications of the basic algorithm, thus confirming its inherent robustness. To this end, we first propose an informal model with the goal of providing further support to the comprehension of the stability properties of GEPP. We then report the results of numerical experiments that confirm the viewpoint embedded in the model. Based on the previous observations, we finally propose a simple scheme that could be turned into (even more) accurate software for the solution of linear systems.
2000
 On the Lovász number of certain circulant graphs
[Relazione in Atti di Convegno]
V., Brimkov; B., Codenotti; V., Crespi; Leoncini, Mauro
abstract
The theta function of a graph, also known as the Lovász number, has the remarkable property of being computable in polynomial time, despite being "sandwiched" between two hard to compute integers, i.e., clique and chromatic number. Very little is known about the explicit value of the theta function for special classes of graphs. In this paper we provide the explicit formula for the Lovász number of the union of two cycles, in two special cases, and a practically efficient algorithm for the general case.
2000
 Reliable Solution of Tridiagonal Systems of Linear Equations
[Articolo su rivista]
I., BAR ON; Leoncini, Mauro
abstract
In this paper we present new formulas for characterizing the sensitivity of tridiagonal systems that are independent of the condition number of the underlying matrix. We also introduce efficient algorithms for solving tridiagonal systems of linear equations which are stable and reliable (namely, stable in the backward sense and little sensitive to perturbations in the coefficients).
1999
 Parallel complexity of numerically accurate linear system solvers
[Articolo su rivista]
Leoncini, Mauro; G., Manzini; L., Margara
abstract
We prove a number of negative results about practical (i.e., work-efficient and numerically accurate) algorithms for computing the main matrix factorizations. In particular, we prove that the popular Householder and Givens methods for computing the QR decomposition are P-complete, and hence presumably inherently sequential, under both real and floating point number models. We also prove that Gaussian elimination (GE) with a weak form of pivoting, which aims only at making the resulting algorithm nondegenerate, is likely to be inherently sequential as well. Finally, we prove that GE with partial pivoting is P-complete over GF(2) or when restricted to symmetric positive definite matrices, for which it is known that even standard GE (no pivoting) does not fail. Altogether, the results of this paper give further formal support to the widespread belief that there is a tradeoff between parallelism and accuracy in numerical algorithms.
1999
 RELIABLE SOLUTION OF BIDIAGONAL SYSTEMS WITH APPLICATIONS
[Articolo su rivista]
I., BAR ON; Leoncini, Mauro
abstract
We show that the stability of Gaussian elimination with partial pivoting relates to how well defined the reduced triangular systems are. We develop refined perturbation bounds that generalize Skeel's bounds to the case of ill-conditioned systems. We finally develop reliable algorithms for solving general bidiagonal systems of linear equations, with applications to the fast and stable solution of tridiagonal systems.
1999
 Tight Complexity Bounds for the Two-Dimensional Real Knapsack
[Articolo su rivista]
V. E., Brimkov; S. S., Danchev; Leoncini, Mauro
abstract
We study the complexity of the 2-dimensional knapsack problem max{c_1 x + c_2 y : a_1 x + a_2 y ≤ b; x, y ∈ Z_+}, where c_1, c_2, a_1, a_2, b ∈ R_+. The problem is defined in terms of real numbers, and we study the setting in which an integral solution is sought under a real number model of computation. We obtain a tight complexity bound Θ(log(b/a_min)), where a_min = min{a_1, a_2}.
1998
 Stable solution to tridiagonal systems
[Articolo su rivista]
I., BAR ON; Leoncini, Mauro
abstract
In this paper we present three different pivoting strategies for solving general tridiagonal systems of linear equations. The first strategy resembles the classical method of Gaussian elimination with no pivoting and is stable provided a simple and easily checkable condition is met. In the second strategy, the growth of the elements is monitored so as to ensure backward stability in most cases. Finally, the third strategy also uses the right-hand side vector to make pivoting decisions and is proved to be unconditionally backward stable.
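The no-pivoting baseline that the first strategy resembles is, for tridiagonal systems, the classical Thomas recurrence; a minimal sketch under the assumption that no zero pivot is encountered (names are illustrative, and the paper's monitored and unconditionally stable variants add pivoting logic on top of this):

```python
def tridiagonal_no_pivot(a, b, c, d):
    """Gaussian elimination without pivoting for a tridiagonal system
    (the Thomas recurrence): b holds the diagonal, a the subdiagonal,
    c the superdiagonal, d the right-hand side.  Assumes n >= 2 and
    that elimination never produces a zero pivot."""
    n = len(b)
    cp = [0.0] * n               # modified superdiagonal
    dp = [0.0] * n               # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):        # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n                # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each elimination step divides by the running pivot `denom`; the monitored strategies in the paper watch exactly this quantity (and, in the third variant, the right-hand side) to decide when row interchanges are needed.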
1997
 A Fast Parallel Cholesky Decomposition Algorithm for Tridiagonal Symmetric Matrices
[Articolo su rivista]
I., BAR ON; B., Codenotti; Leoncini, Mauro
abstract
In this paper we present a new parallel algorithm for computing the Cholesky decomposition (LL^T) of real symmetric positive-definite tridiagonal matrices. The algorithm consists of a preprocessing and a factoring stage. In the preprocessing stage it determines a rank-(p−1) correction to the original matrix (p = number of processors) by precomputing selected components x_k of the L factor, k = 1, ..., p−1. In the factoring stage it performs independent factorizations of p matrices of order n/p. The algorithm is especially suited for machines with both vector and processor parallelism, as confirmed by the experiments carried out on a Connection Machine CM-5 with 32 nodes. Let x̂_k and x̂'_k denote the components computed in the preprocessing stage and the corresponding values (re)computed in the factorization stage, respectively. Assuming that |x̂_k/x̂'_k| is small, k = 1, ..., p−1, we are able to prove that the algorithm is stable in the backward sense. The above assumption is justified both experimentally and theoretically. In fact, we have found experimentally that |x̂_k/x̂'_k| is small even for ill-conditioned matrices, and we have proven by an a priori analysis that the above ratios are small provided that preprocessing is performed with suitably larger precision.
1997
 Parallel Algorithms for Certain Matrix Computations
[Articolo su rivista]
B., Codenotti; B. N., Datta; K., Datta; Leoncini, Mauro
abstract
The complexity of performing matrix computations, such as solving a linear system, inverting a nonsingular matrix or computing its rank, has received a lot of attention by both the theory and the scientific computing communities. In this paper we address some “nonclassical” matrix problems that find extensive applications, notably in control theory. More precisely, we study the matrix equations AX + XA^T = C and AX − XB = C, the “inverse” of the eigenvalue problem (called pole assignment), and the problem of testing whether the matrix [B AB ... A^nB] has full row rank. For these problems we show two kinds of PRAM algorithms: on one side very fast, i.e., polylog time, algorithms and on the other side almost linear time and processor-efficient algorithms. In the latter case, the algorithms rely on basic matrix computations that can be performed efficiently also on realistic machine models.
1996
 CHECKING ROBUST NONSINGULARITY OF TRIDIAGONAL MATRICES IN LINEAR TIME
[Articolo su rivista]
I., BAR ON; B., Codenotti; Leoncini, Mauro
abstract
In this paper we present a linear time algorithm for checking whether a tridiagonal matrix will become singular under certain perturbations of its coefficients. The problem is known to be NP-hard for general matrices. Our algorithm can be used to get perturbation bounds on the solutions to tridiagonal systems.
1996
 On Speed versus Accuracy: Some Case Studies
[Articolo su rivista]
Leoncini, Mauro
abstract
We study the complexity of some computational problems in case certain stability guarantees are required. After providing suitable settings for this kind of investigation, we are able to give theoretical support to longstanding viewpoints in numerical analysis. More precisely, we focus on a set of significant examples and prove that: (1) the fastest known polynomial evaluation algorithms are not stable, (2) LUP decomposition cannot be computed stably in polylogarithmic parallel time, and (3) reductions among computational problems do not in general constitute a practical tool for solving numerical problems.
1996
 On the Algebraic Complexity of Certain Integer Linear Programming Problems
[Relazione in Atti di Convegno]
V. E., Brimkov; S. S., Danchev; Leoncini, Mauro
abstract
1996
 On the Parallel Complexity of Gaussian Elimination with Pivoting
[Articolo su rivista]
Leoncini, Mauro
abstract
Consider the Gaussian elimination algorithm with the well-known partial pivoting strategy for improving numerical stability (GEPP). Vavasis proved that the problem of determining the pivot sequence used by GEPP is log space-complete for P, and thus inherently sequential. Assuming P ≠ NC, we prove here that either the latter problem cannot be solved in parallel time O(N^(1/2−ε)) or all the problems in P admit polynomial speedup. Here N is the order of the input matrix and ε is any positive constant. This strengthens the P-completeness result mentioned above. We conjecture that the result proved in this paper holds for the stronger bound O(N^(1−ε)) as well, and provide supporting evidence for the conjecture. Note that this is equivalent to asserting the asymptotic optimality of the naive parallel algorithm for GEPP (modulo P ≠ NC).
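The pivot-sequence problem discussed above can be stated concretely: given A, output the row chosen as pivot at each elimination step. A direct sequential sketch (function name ours), which makes plain why the choices are chained and hard to parallelize:

```python
import numpy as np

def gepp_pivot_sequence(A):
    """Row index chosen as pivot at each step of Gaussian elimination
    with partial pivoting: the entry of largest absolute value in the
    current column, at or below the diagonal."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    pivots = []
    for k in range(n):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # partial-pivoting rule
        pivots.append(p)
        A[[k, p]] = A[[p, k]]                     # swap rows k and p
        for i in range(k + 1, n):                 # eliminate below the pivot
            if A[k, k] != 0:
                A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return pivots
```

Each pivot choice depends on the eliminations performed in all previous steps, which is the dependency chain behind the P-completeness result.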
1996
 On the Parallel Complexity of Matrix Factorization Algorithms
[Relazione in Atti di Convegno]
Leoncini, Mauro; G., Manzini; L., Margara
abstract
1996
 Parallel Complexity of Householder QR Factorization
[Relazione in Atti di Convegno]
Leoncini, Mauro; G., Manzini; L., Margara
abstract
Gaussian Elimination with Partial Pivoting and Householder QR factorization are two very popular methods to solve linear systems. Implementations of these two methods are provided in state-of-the-art numerical libraries and packages, such as LAPACK and MATLAB. Gaussian Elimination with Partial Pivoting was already known to be P-complete. Here we prove that Householder QR factorization is likely to be inherently sequential as well. We also investigate the problem of speedup vs. non-degeneracy and accuracy in numerical algorithms.
1996
 Strong NPcompleteness of a Matrix Similarity Problem
[Articolo su rivista]
V., Brimkov; B., Codenotti; Leoncini, Mauro; G., Resta
abstract
Consider the following problem: given an upper triangular matrix A, with rational entries and distinct diagonal elements, and a tolerance τ greater than or equal to 1, decide whether there exists a nonsingular matrix G, with condition number bounded by τ, such that G^(−1)AG is 2 × 2 block diagonal. This problem, which we shall refer to as DICHOTOMY, is an important one in the theory of invariant subspaces. It has recently been proved that DICHOTOMY is NP-complete. In this note we make some progress by proving that DICHOTOMY is actually NP-complete in the strong sense. This outlines the “purely combinatorial” nature of the difficulty, which might well arise even in the case of well-scaled matrices with entries of small magnitude.
1994
 How much can we speedup Gaussian elimination with pivoting?
[Relazione in Atti di Convegno]
Leoncini, Mauro
abstract
Consider the problem of determining the pivot sequence used by the Gaussian Elimination algorithm with Partial Pivoting (GEPP). Let N stand for the order of the input matrix and let ε be any positive constant. Assuming P ≠ NC, we prove that if GEPP were decidable in parallel time N^(1/2−ε) then all the problems in P would be characterized by polynomial speedup. This strengthens the P-completeness result that holds for GEPP. We conjecture that our result is valid even with the exponent 1/2 replaced by 1, and provide supporting arguments based on our result. This latter improvement would demonstrate the optimality of the naive parallel algorithm for GEPP (modulo P ≠ NC).
1994
 Oracle Computations in Parallel Numerical Linear Algebra
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
Using a new notion of reducibility, we investigate, in a model of approximate parallel computation, the behaviour of several known reductions among important problems in linear algebra. We show that, although many such problems have been proved to be NC1-equivalent, when approximation is taken into account, new questions about their relative parallel-time complexity come up. More precisely, some known NC1 reduction algorithms can be extended with additional special care, whereas other reductions do not extend to the approximation model.
1994
 Spectral Properties of Some Matrices Close to the Toeplitz Triangular Form
[Articolo su rivista]
A., Bernasconi; Leoncini, Mauro; G., Resta
abstract
We consider a class of matrices, which we call nearly Toeplitz, and show that they have interesting spectral properties. More precisely, we show that the eigenvectors of certain nearly Toeplitz matrices give complete information about the structure of the symmetric group Sk, i.e., the group of permutations of the integers 1, ..., k. Obtaining this kind of information is a central task in two seemingly unrelated branches of mathematics, namely the Character Theory of the Symmetric Group and Pólya's Theory of Counting.
1992
 Introduction to Parallel Processing
[Monografia/Trattato scientifico]
B., Codenotti; Leoncini, Mauro
abstract
1992
 On Reductions and Approximation in Parallel Algebraic Complexity
[Relazione in Atti di Convegno]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
1992
 Repeated Matrix Squaring for the Parallel Solution of Linear Systems
[Relazione in Atti di Convegno]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
Given an n×n nonsingular linear system Ax = b, we prove that the solution x can be computed in parallel time ranging from Omega(log n) to O(log^2 n), provided that the condition number, c(A), of A is bounded by a polynomial in n. In particular, if c(A) = O(1), a time bound O(log n) is achieved. To obtain this result, we reduce the computation of x to repeated matrix squaring and prove that a number of steps independent of n is sufficient to approximate x up to a relative error 2^(−d), d = O(1). This algorithm has both theoretical and practical interest, achieving the same bound as previously published parallel solvers while being far simpler.
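The core idea can be sketched for the fixed-point form x = Bx + c (to which Ax = b can be reduced): the Neumann sum Σ_{k<2^m} B^k c equals the product (I + B)(I + B^2)(I + B^4)···(I + B^(2^(m−1)))c, so m matrix squarings capture 2^m terms of the series. This is a minimal sketch of that accumulation scheme, not the paper's full parallel algorithm; the function name is ours:

```python
import numpy as np

def solve_by_squaring(B, c, m):
    """Approximate the solution of x = Bx + c, i.e. x = (I-B)^(-1) c,
    by the partial Neumann sum  sum_{k < 2^m} B^k c,  accumulated as
    (I + B)(I + B^2)(I + B^4)...(I + B^(2^(m-1))) c  using m squarings."""
    x = np.array(c, dtype=float)
    P = np.array(B, dtype=float)   # P holds B^(2^j) at step j
    for _ in range(m):
        x = x + P @ x              # multiply the accumulated sum by (I + P)
        P = P @ P                  # square: P becomes B^(2^(j+1))
    return x
```

When ‖B‖ is bounded away from 1, the error shrinks geometrically in 2^m, so a constant number of squarings suffices per digit of accuracy; each squaring is a matrix product, computable in O(log n) parallel time.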
1992
 Solving General Linear Systems in Parallel
[Relazione in Atti di Convegno]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
1991
 An Experimental Environment for Design and Analysis of Global Routing Heuristics
[Relazione in Atti di Convegno]
B., Codenotti; Leoncini, Mauro; J., David; F., Makedon
abstract
1991
 An Experimental Learning Environment for VLSI Design
[Relazione in Atti di Convegno]
J., David; F., Makedon; B., Codenotti; Leoncini, Mauro
abstract
1991
 Matrix Inversion in RNC1
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro
abstract
We prove that some central problems in computational linear algebra are in the complexity class RNC1, that is, solvable by uniform families of probabilistic boolean circuits of logarithmic depth and polynomial size. In particular, we first show that computing the solution of n × n linear systems of the form x = Bx + c, with ‖B‖∞ ≤ 1 − n^(−k), k = O(1), in the fixed precision model (i.e., computing d = O(1) digits of the result) is in RNC1; then we prove that the case of general n × n linear systems Ax = b, with both ‖A‖∞ and ‖b‖∞ bounded by polynomials in n, can be reduced to the special case mentioned before.
1991
 Parallel Algebraic Reductions among Numerical Problems
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
In this note we consider, for a number of linear algebra problems, an environment allowing approximate computations. Within this framework we show that the relative complexity of these problems should be studied according to a strict notion of reducibility, which corresponds to the well-known many-one reducibility of combinatorial complexity.
1991
 Parallel Complexity of Linear System Solution
[Monografia/Trattato scientifico]
B., Codenotti; Leoncini, Mauro
abstract
1991
 Parallel Complexity of Matrix Computations
[Capitolo/Saggio]
B., Codenotti; Leoncini, Mauro; G., Resta
abstract
1990
 Fondamenti di calcolo parallelo
[Monografia/Trattato scientifico]
B., Codenotti; Leoncini, Mauro
abstract
1990
 Parallelism and Fast Solution of Linear Systems
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro
abstract
We review some of the most important results in the area of fast parallel algorithms for the solution of linear systems and related problems, such as matrix inversion, computation of the determinant and of the adjoint matrix. We analyze both direct and iterative methods implemented in various models of parallel computation.
1990
 Preconditioning Linear Systems and Parallelism
[Articolo su rivista]
B., Codenotti; Leoncini, Mauro
abstract
We present three polynomial preconditioning techniques and analyze some of their theoretical and computational properties. We first show some formal relations between the preconditioning polynomial and the characteristic polynomial of the coefficient matrix. Parallel algorithms are then derived and their behaviour related to that of Csanky's method. We also present experimental results obtained for special types of matrices.
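As a generic illustration of polynomial preconditioning (the truncated Neumann series, a standard example rather than one of the paper's three specific techniques; the function name is ours): p(A) = I + (I−A) + ... + (I−A)^m approximates A^(−1) whenever the spectral radius of I − A is below 1, and applying p(A) amounts to a few matrix products, which parallelize well.

```python
import numpy as np

def neumann_preconditioner(A, m):
    """Truncated Neumann-series polynomial preconditioner:
    p(A) = I + (I-A) + ... + (I-A)^m, an approximate inverse of A
    when the spectral radius of I - A is less than 1."""
    n = A.shape[0]
    N = np.eye(n) - A
    P, term = np.eye(n), np.eye(n)
    for _ in range(m):
        term = term @ N   # term holds (I - A)^(k+1)
        P = P + term      # accumulate the polynomial
    return P
```

Preconditioning Ax = b with p(A) then means solving p(A)Ax = p(A)b, whose coefficient matrix is closer to the identity.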
1989
 Parallel models of computation: an introductory survey
[Articolo su rivista]
Leoncini, Mauro
abstract
This paper gives an overview of some models of computation which have proved successful in laying a foundation for a general theory of parallel computation. We present three models of parallel computation, namely boolean and arithmetic circuit families, and Parallel Random Access Machines. They represent different viewpoints on parallel computing: boolean circuit families are useful for in-depth theoretical studies on the power and limitations of parallel computers; Parallel Random Access Machines are the most general vehicle for designing highly parallel algorithms; arithmetic circuit families are an important tool for undertaking studies related to one of the most active areas in parallel computing, i.e. parallel algebraic complexity.