
Claudia CANALI

Associate Professor at: Dipartimento di Ingegneria "Enzo Ferrari"




Publications

2021 - A Variable Neighborhood Heuristic for Facility Locations in Fog Computing [Conference Paper]
Alves de Queiroz, T.; Canali, C.; Iori, M.; Lancellotti, R.
abstract

The current trend of modern smart city applications towards a continuously increasing volume of produced data, combined with the need for low and predictable response latency, has motivated the shift from a cloud to a fog computing approach. A fog computing architecture is likely to represent a preferable solution to reduce application latency and the risk of network congestion, as it decreases the volume of data transferred to cloud data centers. However, the design of a fog infrastructure opens new issues concerning not only how to allocate the data flows coming from sensors to fog nodes and from there to cloud data centers, but also the choice of the number and location of the fog nodes to be activated among a list of potential candidates. We model this facility location issue as a multi-objective optimization problem. We propose a heuristic based on variable neighborhood search, where the neighborhood structures are based on swap and move operations. The proposed method is tested in a wide range of scenarios, considering a realistic setup of a smart city application with geographically distributed sensors. The experimental evaluation shows that our method achieves stable and better performance compared with other approaches from the literature, supporting the given application.
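The paper's heuristic itself is not reproduced here; as a rough illustration of the technique named in the abstract, a variable neighborhood search over swap and move operations on the set of open fog nodes could be skeletonized as follows. The cost function, instance data, and shaking scheme are invented for this sketch, not taken from the paper:

```python
import random

def total_cost(open_nodes, sensors, dist, open_cost):
    # each sensor is served by its nearest open fog node
    return (sum(min(dist[s][f] for f in open_nodes) for s in sensors)
            + open_cost * len(open_nodes))

def vns(candidates, sensors, dist, open_cost, iters=100, seed=0):
    rng = random.Random(seed)
    best = set(rng.sample(candidates, max(1, len(candidates) // 2)))
    best_cost = total_cost(best, sensors, dist, open_cost)
    for _ in range(iters):
        for shake in ("move", "swap"):             # neighborhood structures
            cand = set(best)
            closed = [f for f in candidates if f not in cand]
            if shake == "move" and closed:         # move: open one more candidate
                cand.add(rng.choice(closed))
            elif shake == "swap" and closed and len(cand) > 1:
                cand.remove(rng.choice(sorted(cand)))  # swap an open/closed pair
                cand.add(rng.choice(closed))
            cost = total_cost(cand, sensors, dist, open_cost)
            if cost < best_cost:                   # improvement found: restart
                best, best_cost = cand, cost
                break
    return best, best_cost
```

On a toy symmetric two-candidate instance, the search opens both nodes, since each sensor is then served locally.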


2020 - A Random Walk based Load Balancing Algorithm for Fog Computing [Conference Paper]
Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
abstract

The growth of large-scale sensing applications (as in the case of smart city applications) is a main driver of the fog computing paradigm. However, as the load on such fog infrastructures increases, there is a growing need for coordination mechanisms that can provide load balancing. The problem is exacerbated by local overload that may occur due to an uneven distribution of processing tasks (jobs) over the infrastructure, which is typical of real applications such as smart cities, where sensor deployment is irregular and workload intensity can fluctuate due to rush hours and user behavior. In this paper we introduce two load sharing mechanisms that aim to offload jobs towards neighboring nodes. We evaluate the performance of these algorithms in a realistic environment based on a real smart city monitoring application. Our experiments demonstrate that even a simple load balancing scheme is effective in addressing the local hot spots that would arise in a non-collaborative fog infrastructure.
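The general idea of offloading jobs towards neighbors via a random walk can be sketched in a few lines. The threshold rule and time-to-live below are assumptions for illustration, not the two mechanisms actually defined in the paper:

```python
import random

def offload(start, queues, neighbors, threshold, ttl=3, rng=random):
    """Walk a job away from an overloaded node: hop to a random
    neighbor until a node below the load threshold is found or the
    time-to-live expires, then enqueue the job there."""
    node = start
    for _ in range(ttl):
        if queues[node] < threshold:     # lightly loaded node keeps the job
            break
        node = rng.choice(neighbors[node])
    queues[node] += 1
    return node
```

A hot-spot node above the threshold thus sheds new jobs onto its neighborhood instead of growing its own queue.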


2020 - Adaptive Computing-plus-Communication Optimization Framework for Multimedia Processing in Cloud Systems [Journal Article]
Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Abawajy, Jemal
abstract

A clear trend in the evolution of network-based services is the ever-increasing amount of multimedia data involved. This trend towards big-data multimedia processing finds its natural placement in the adoption of the cloud computing paradigm, which seems the best solution to cope with the demands of the highly fluctuating workload that characterizes this type of service. However, as cloud data centers become more and more powerful, energy consumption becomes a major challenge both for environmental concerns and for economic reasons. An effective approach to improve energy efficiency in cloud data centers is to rely on traffic engineering techniques to dynamically adapt the number of active servers to the current workload. Towards this aim, we propose a joint computing-plus-communication optimization framework exploiting virtualization technologies, called MMGreen. Our proposal specifically addresses the typical scenario of multimedia data processing with computationally intensive tasks and the exchange of large volumes of data. The proposed framework not only ensures users Quality of Service (through Service Level Agreements), but also achieves maximum energy saving and attains green cloud computing goals in a fully distributed fashion by utilizing DVFS-based CPU frequencies. To evaluate the actual effectiveness of the proposed framework, we conduct experiments with MMGreen under real-world and synthetic workload traces. The results show that MMGreen may significantly reduce the energy cost for computing, communication and reconfiguration with respect to previous resource provisioning strategies, while respecting the SLA constraints.


2020 - Collaboration Strategies for Fog Computing under Heterogeneous Network-bound Scenarios [Conference Paper]
Canali, C.; Lancellotti, R.; Mione, S.
abstract

The success of IoT applications increases the number of online devices and motivates the adoption of a fog computing paradigm to support large and widely distributed infrastructures. However, the heterogeneity of nodes and their connections requires the introduction of load balancing strategies to guarantee efficient operation. This aspect is particularly critical when some nodes are characterized by high communication delays. Some proposals, such as the Sequential Forwarding algorithm, have been presented in the literature to provide load balancing in fog computing systems. However, such algorithms have not been studied over a wide range of working parameters in a heterogeneous infrastructure; furthermore, they are not designed to take advantage of the highly heterogeneous network delays that are common in fog infrastructures. The contribution of this study is twofold: first, we evaluate the performance of the sequential forwarding algorithm for several load and delay conditions; second, we propose and test a delay-aware version of the algorithm that takes into account the presence of highly variable node connectivity in the infrastructure. The results of our experiments, carried out using a realistic network topology, demonstrate that a delay-blind approach to sequential forwarding may lead to poor load balancing performance when network delay represents a major contribution to the response time. Furthermore, we show that the delay-aware variant of the algorithm provides a benefit in this case, with a reduction in response time of up to 6%.
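The delay-aware intuition (probe the cheapest neighbor first, so forwarding adds as little network delay as possible to the response time) can be sketched as follows; the data structures and stopping rule are invented for illustration and are not the paper's algorithm:

```python
def sequential_forward(start, loads, delays, threshold, max_hops=2):
    """Sequentially probe neighbors for a node able to accept a job.
    The delay-aware variant orders candidates by network delay,
    visiting the nearest unvisited neighbor first."""
    path = [start]
    node = start
    for _ in range(max_hops):
        if loads[node] < threshold:      # found a node with spare capacity
            break
        unvisited = [(d, n) for n, d in delays[node].items() if n not in path]
        if not unvisited:
            break
        node = min(unvisited)[1]         # nearest neighbor first
        path.append(node)
    loads[node] += 1                     # the job runs where probing stopped
    return node, path
```

In the toy topology below, two overloaded nodes are traversed along low-delay links before the job lands on the idle one.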


2020 - Data Flows Mapping in Fog Computing Infrastructures Using Evolutionary Inspired Heuristic [Book Chapter]
Canali, C.; Lancellotti, R.
abstract

The need for scalable and low-latency architectures that can process large amounts of data from geographically distributed sensors and smart devices is a main driver of the popularity of the fog computing paradigm. A typical scenario that explains the success of fog is a smart city where monitoring applications collect and process a huge amount of data from a plethora of sensing devices located in streets and buildings. The classical cloud paradigm may provide poor scalability, as the amount of transferred data risks congesting the data center links, while the high latency due to the distance between the data center and the sensors may create problems for latency-critical applications (such as support for autonomous driving). A fog node can act as an intermediary in sensor-to-cloud communications, where pre-processing may be used to reduce the amount of data transferred to the cloud data center and to perform latency-sensitive operations. In this book chapter we address the problem of mapping sensors over the fog nodes with a twofold contribution. First, we introduce a formal model for the mapping problem that aims to minimize response time, considering both network latency and processing time. Second, we present an evolutionary-inspired heuristic (using Genetic Algorithms) for a fast and accurate resolution of this problem. A thorough experimental evaluation, based on a realistic scenario, provides insight into the nature of the problem, confirms the viability of GAs to solve it, and evaluates the sensitivity of the heuristic with respect to its main parameters.


2020 - Distributed load balancing for heterogeneous fog computing infrastructures in smart cities [Journal Article]
Beraldi, R.; Canali, C.
abstract

Smart cities represent an archetypal example of infrastructures where the fog computing paradigm can express its potential: we have a large set of sensors deployed over a large geographic area where data should be pre-processed (e.g., to extract relevant information or to filter and aggregate data) before sending the result to a collector, which may be a cloud data center where relevant data are further processed and stored. However, during its lifetime the infrastructure may change, e.g., due to the deployment of additional sensors or fog nodes, while the load can grow, e.g., because of additional services based on the collected data. Since nodes are typically deployed in multiple time stages, they may have different computation capacities due to technology improvements. In addition, an uneven distribution of the workload intensity can arise, e.g., due to hot spots caused by occasional public events, or to rush hours and user behavior. In short, resources and load can vary over time and space. From the resource management point of view, this scenario is clearly challenging. Due to the large scale and variable nature of the resources, classical centralized solutions should be avoided, since they do not scale well and require transferring all data from sensors to a central hub, distorting the very nature of in-situ data processing. In this paper, we address the problem of resource management by proposing two distributed load balancing algorithms tailored to deal with heterogeneity. We evaluate the performance of these algorithms both in a simplified environment, where we perform several sensitivity analyses with respect to the factors responsible for infrastructure heterogeneity, and in a realistic smart city scenario. Furthermore, our study combines theoretical models and simulation. Our experiments demonstrate the effectiveness of the algorithms under a wide range of heterogeneity, overall providing a remarkable improvement compared to the case of non-cooperating nodes.


2020 - Randomized Load Balancing under Loosely Correlated State Information in Fog Computing [Conference Paper]
Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
abstract

Fog computing infrastructures must support increasingly complex applications where a large number of sensors send data to intermediate fog nodes for processing. As the load in such applications (as in the case of a smart city scenario) is subject to significant fluctuations both over time and space, load balancing is a fundamental task. In this paper we study a fully distributed algorithm for load balancing based on random probing of the neighbors' status. A qualifying point of our study is that we consider the delay of the probe phase and analyze the impact of stale load information. We propose a theoretical model for the loss of correlation between the actual load on a node and the stale information arriving at its neighbors. Furthermore, we analyze through simulation the performance of the proposed algorithm over a wide set of parameters, comparing it with an approach from the literature based on random walks. Our analysis points out under which conditions the proposed algorithm can outperform the alternatives.
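The effect of stale state can be illustrated with a minimal random-probing forwarder in the power-of-d-choices style; the data layout and tie rule are assumptions for the sketch, not the paper's algorithm or model:

```python
import random

def probe_and_forward(node, true_load, stale_load, neighbors, d=2, seed=0):
    """Probe d random neighbors and forward the job to the one whose
    *reported* load is lowest. Reported values may be stale, so the
    decision can be wrong with respect to the true loads."""
    rng = random.Random(seed)
    probed = rng.sample(neighbors[node], min(d, len(neighbors[node])))
    target = min(probed, key=lambda n: stale_load[n])
    if stale_load[target] >= true_load[node]:
        target = node                    # no probe looks better: stay local
    true_load[target] += 1
    return target
```

In the toy case below the stale snapshot inverts the real ordering, so the job lands on the truly busiest node, which is exactly the kind of degradation caused by loosely correlated state information.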


2019 - A Best Practice for Attracting Female Students to Enrol in ICT Studies [Book Chapter]
Canali, C.; Addabbo, T.; Moumtzi, V.
abstract

The extremely low enrollment rates of women compared to men in Computer Science (CS) and Information Systems university programs result not only in a massive loss of talent for companies and economies but also perpetuate gender inequality in the ICT field. To face this, universities and research organizations are gradually taking initiatives to address this gender imbalance, trying to intervene and raise awareness of a complex set of deeply rooted cultural and societal gender stereotypes, including gender bias and the association of ICT with masculinity, that permeate early school education, STEM teaching practices and parents' attitudes. This approach is based on several studies of current students showing that female bachelor students in CS have lower levels of self-confidence than their male counterparts, which can negatively impact their plans to continue their studies. In this direction, the Horizon 2020 EQUAL-IST (Gender Equality Plans for Information Sciences and Technology Research Institutions) project supports six universities across Europe (Italy, Lithuania, Germany, Ukraine, Finland, Portugal) in designing and implementing actions towards gender equality, with a specific focus on the ICT/IST area. The universities have set up several concrete initiatives to attract female students towards ICT studies. Specifically, this paper presents the best practice implemented at the University of Modena and Reggio Emilia (UniMORE): the summer camp Ragazze Digitali (Digital Girls). The summer camp offers female students in the third and fourth year of high school a first-hand, learn-by-doing experience of coding applied to creative and innovative fields, as well as inspiring female role models from academia and industry. For its scope, nature (participation is free for the girls) and duration (four entire weeks), the Ragazze Digitali summer camp represents a unique experience not only in Italy but also in Europe and, to the best of our knowledge, in the world. The paper describes the summer camp experience, highlighting its impact on the female students, with particular attention to changed attitudes and plans for their future studies and careers.


2019 - A fog computing service placement for smart cities based on genetic algorithms [Conference Paper]
Canali, C.; Lancellotti, R.
abstract

The growing popularity of the Fog Computing paradigm is driven by the increasing availability of large numbers of sensors and smart devices over a geographically distributed area. The scenario of a smart city is a clear example of this trend. As we face an increasing presence of sensors producing a huge volume of data, the classical cloud paradigm, with few powerful data centers far away from the data sources, becomes inadequate. There is a need to deploy a highly distributed layer of data processors that filter, aggregate and pre-process the incoming data according to a fog computing paradigm. However, a fog computing architecture must distribute the incoming workload over the fog nodes to minimize communication latency while avoiding overload. In the present paper we tackle this problem in a twofold way. First, we propose a formal model for the problem of mapping the data sources over the fog nodes. The proposed optimization problem considers both the communication latency and the processing time on the fog nodes (which depends on the node load). Furthermore, we propose a heuristic based on genetic algorithms to solve the problem in a scalable way. We evaluate our proposal on a geographic testbed that represents a smart city scenario. Our experiments demonstrate that the proposed heuristic can be used for optimization in the considered scenario. Furthermore, we perform a sensitivity analysis on the main heuristic parameters.


2019 - A Technique to Identify Data Exchange Between Cloud Virtual Machines [Book Chapter]
Bicocchi, N.; Canali, C.; Lancellotti, R.
abstract

Modern cloud data centers typically exploit management strategies to reduce the overall energy consumption. While most solutions focus on the energy consumed by computational elements, the optimization of the network-related aspects of a data center is becoming more and more important, also considering the advent of the Software-Defined Network paradigm. However, an enabling step for network-aware Virtual Machine (VM) allocation is knowledge of the data exchange patterns: with it, we can place the pairs of VMs that exchange large amounts of information on well-connected hosts (or on the same physical host). Unfortunately, in Infrastructure as a Service data centers, detailed knowledge of VM data exchange is seldom available without deploying a specialized (and costly) monitoring infrastructure. In this paper, we propose a technique to infer VM communication patterns starting from the input/output network traffic time series of each VM. We discuss both the theoretical aspects of the technique and the design challenges for its implementation. A case study is used to demonstrate the viability of our idea.


2019 - AGATE: Adaptive Gray Area-based TEchnique to Cluster Virtual Machines with Similar Behavior [Journal Article]
Canali, Claudia; Lancellotti, Riccardo
abstract

As cloud computing data centers grow in size and complexity to accommodate an increasing number of virtual machines, the scalability of monitoring and management processes becomes a major challenge. Recent research studies show that automatically clustering virtual machines that are similar in terms of resource usage may address the scalability issues of IaaS clouds. Existing solutions provide high clustering accuracy at the cost of very long observation periods, which are not compatible with dynamic cloud scenarios where VMs may frequently join and leave. We propose a novel technique, namely AGATE (Adaptive Gray Area-based TEchnique), that provides accurate clustering results for a subset of VMs after a very short time. This result is achieved by introducing elements of fuzzy logic into the clustering process to identify the VMs with an undecided clustering assignment (the so-called gray area), which should be monitored for longer periods. To evaluate the performance of the proposed solution, we apply the technique to multiple case studies with real and synthetic workloads. We demonstrate that our solution can correctly identify the behavior of a high percentage of VMs after a few hours of observation, and significantly reduces the data required for monitoring with respect to state-of-the-art solutions.
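The gray-area idea (assign only the VMs whose cluster membership is unambiguous, and keep monitoring the rest) can be sketched as below. The inverse-distance membership score is a simplified stand-in assumption, not AGATE's actual fuzzy-logic formulation:

```python
def classify_vms(vm_usage, centroids, confidence=0.6):
    """Assign each VM to the nearest cluster centroid, deferring VMs
    whose membership is ambiguous (the 'gray area') so they can be
    monitored for a longer period before clustering."""
    assigned, gray_area = {}, []
    for vm, usage in vm_usage.items():
        inv = {c: 1.0 / (abs(usage - mu) + 1e-9)
               for c, mu in centroids.items()}     # inverse-distance score
        best = max(inv, key=inv.get)
        if inv[best] / sum(inv.values()) >= confidence:
            assigned[vm] = best                    # clear-cut assignment
        else:
            gray_area.append(vm)                   # undecided: keep observing
    return assigned, gray_area
```

A VM sitting halfway between two centroids never reaches the confidence threshold and is deferred rather than misclassified.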


2019 - GASP: Genetic algorithms for service placement in fog computing systems [Journal Article]
Canali, C.; Lancellotti, R.
abstract

Fog computing is becoming popular as a solution to support applications based on geographically distributed sensors that produce huge volumes of data to be processed and filtered under response time constraints. In this scenario, typical of a smart city environment, the traditional cloud paradigm, with few powerful data centers located far away from the sources of data, becomes inadequate. The fog computing paradigm, which provides a distributed infrastructure of nodes placed close to the data sources, represents a better solution to perform filtering, aggregation, and preprocessing of incoming data streams, reducing the experienced latency and increasing the overall scalability. However, many issues still exist regarding the efficient management of a fog computing architecture, such as distributing the data streams coming from sensors over the fog nodes so as to minimize the experienced latency. The contribution of this paper is twofold. First, we present an optimization model for the problem of mapping data streams over fog nodes, considering not only the current load of the fog nodes, but also the communication latency between sensors and fog nodes. Second, to address the complexity of the problem, we present a scalable heuristic based on genetic algorithms. We carried out a set of experiments based on a realistic smart city scenario: the results show that the performance of the proposed heuristic is comparable with that achieved by solving the optimization problem. Then, we compared different genetic evolution strategies and operators, identifying uniform crossover as the best option. Finally, we performed a wide sensitivity analysis showing the stability of the heuristic's performance with respect to its main parameters.
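The abstract's finding that uniform crossover works best can be illustrated with a toy GA for the sensor-to-node mapping. The fitness function, selection scheme, and rates below are assumptions for the sketch, not GASP's actual configuration:

```python
import random

def uniform_crossover(a, b, rng):
    # each gene (one sensor's fog-node assignment) comes from either parent
    return [rng.choice(pair) for pair in zip(a, b)]

def ga_map(n_sensors, fog_nodes, fitness, pop_size=20, gens=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(fog_nodes) for _ in range(n_sensors)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # lower cost is better
        parents = pop[:pop_size // 2]            # truncation selection
        children = [uniform_crossover(rng.choice(parents),
                                      rng.choice(parents), rng)
                    for _ in range(pop_size - len(parents))]
        for child in children:                   # occasional point mutation
            if rng.random() < 0.1:
                child[rng.randrange(n_sensors)] = rng.choice(fog_nodes)
        pop = parents + children
    return min(pop, key=fitness)
```

With a cost built from a per-sensor latency table, the returned chromosome is a mapping of each sensor to one fog node.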


2019 - HR Analytics in the Digital Workplace: Exploring the Relationship between Attitudes and Tracked Work Behaviors [Book Chapter]
Fabbri, T; Scapolan, A; Bertolotti, F; Canali, C
abstract

The increasing use of digital technologies in organizational contexts, like collaborative social platforms, has not only changed the way people work but also provided organizations with new and wide ranges of data sources that could be analyzed to enhance organizational- and individual-level outcomes, especially when integrated with more traditional tools. In this study, we explore the relationship between data flows generated by employees on companies’ digital environments and employees’ attitudes measured through surveys. In a sample of 107 employees, we collected data on the number and types of actions performed on the company’s digital collaborative platform over a two-year period and the level of organizational embeddedness (fit, sacrifice, and links dimensions) through two rounds of surveys over the same period. The correlation of the quantity and quality of digital actions with the variation of organizational embeddedness over the same period shows that workers who engaged in more activities on the digital platform also experienced an increase in their level of organizational embeddedness mainly in the fit dimension. In addition, the higher the positive variation of fit, the more employees performed both active and passive digital actions. Finally, the higher the variation of organizational embeddedness, the more employees performed networking digital behaviors.


2019 - Lessons Learned From Tailored Gender Equality Plans: Classification and Analysis of Actions Implemented Within the EQUAL-IST Project [Conference Paper]
Sangiuliano, M.; Canali, C.; Gorbacheva, E.
abstract

Gender Equality Plans (GEPs) represent a comprehensive tool to promote structural change for gender equality in research institutions. The Horizon 2020 EQUAL-IST project ("Gender Equality Plans for Information Sciences and Technology Research Institutions") supports six Informatics and Information Systems departments at universities across Europe in initiating the design and implementation of GEPs. This paper focuses on the project outcomes of the first iteration of GEP implementation (October 2017 - May 2018). Based on the internal reports provided by the involved research institutions, we classified the implemented actions as 'structural change actions' or 'preparatory actions' (following the study by Sangiuliano, Canali & Madesi, 2018) and as 'internally-oriented actions' or 'externally-oriented actions'. The implemented actions were analyzed across the intervention areas of Institutional Communication, Human Resources and Management Practices, and Teaching and Services for (Potential) Students. The study addresses the need to investigate the peculiarities of GEP implementation in the Information Sciences and Technology (IST) and Information and Communications Technology (ICT) disciplines, where the gender leak in the recruitment pipeline often starts already at universities, with extremely low numbers of enrolled female students. We therefore aim at understanding whether the notable number of actions to attract more female students, initiated within the EQUAL-IST project during the first iteration of GEP implementation, risks bending the process towards more externally-oriented actions, which are less likely to impact internal power structures, at least in the short run. The second purpose of the paper is to explore whether structural change actions, which have the potential to go beyond merely raising awareness of the topics at stake, tend to be concentrated in the Human Resources and Management Practices area.


2019 - Measuring Gender Equality in Universities [Conference Paper]
Addabbo, T.; Canali, C.; Facchinetti, G.; Pirotti, T.
abstract

The paper proposes a fuzzy expert system for the evaluation of gender equality in tertiary education, tested in six European universities in Italy, Lithuania, Finland, Germany, Portugal and Ukraine within the EQUAL-IST Horizon 2020 project, whose goal is to design and implement Gender Equality Plans (GEPs) for IST research institutions. We propose a Fuzzy Expert System (FES), a cognitive model that, by replicating the expert's way of learning and thinking, allows us to formalize qualitative concepts and to reach a synthetic measure of the institution's gender equality (ranging from 0 to 1, increasing with gender equality achievements), which can then be disentangled into different dimensions. The dimensions included in the model relate to gender equality in the structure of employment (academic and non-academic) and in the governance of the universities, to the equal opportunity machinery, and to the work-life balance policies promoted by the institutions. The rules and weights in the system result from a mixed strategy combining gender equality experts and a participatory approach promoted within the EQUAL-IST project. The results show heterogeneity in the final index of gender equality and allow us to detect the most critical areas where new policies should be implemented to improve gender equality. The value of the final gender equality index resulting from the application of the FES is then compared to the gender equality perceived by each institution involved in the project, and will also be used to improve awareness of the gender gap in important dimensions of the tertiary education setting.
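A minimal sketch of how a fuzzy system can turn an indicator into a synthetic score in [0, 1]: triangular membership functions feed a weighted average of rule consequents. The set shapes and the single rule pair below are illustrative assumptions, not the calibrated EQUAL-IST rule base:

```python
def tri(x, a, b, c):
    # triangular membership function rising from a, peaking at b, falling to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def equality_score(female_share):
    """Map the share of women in one employment dimension to a fuzzy
    equality score in [0, 1], where parity (0.5) scores highest."""
    memberships = {
        "unequal": max(tri(female_share, -0.5, 0.0, 0.5),   # all-male end
                       tri(female_share, 0.5, 1.0, 1.5)),   # all-female end
        "balanced": tri(female_share, 0.0, 0.5, 1.0),
    }
    scores = {"unequal": 0.0, "balanced": 1.0}   # rule consequents
    total = sum(memberships.values())
    return sum(memberships[s] * scores[s] for s in scores) / total
```

A 25% female share activates both fuzzy sets halfway and lands midway between the consequents, so the synthetic index degrades smoothly rather than jumping at an arbitrary cutoff.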


2019 - PAFFI: Performance Analysis Framework for Fog Infrastructures in realistic scenarios [Conference Paper]
Canali, C.; Lancellotti, R.
abstract

The growing popularity of applications that process huge amounts of data and require high scalability and low latency is the main driver of the success of the fog computing paradigm. A set of fog nodes close to the network edge, hosting functions such as data aggregation, filtering or latency-sensitive applications, can avoid the high latency due to geographic data transfers and congested network links that hinders the viability of the traditional cloud computing paradigm for a class of applications including smart city services and support for autonomous driving. However, the design of fog infrastructures requires novel techniques for system modeling and performance evaluation, able to capture a realistic scenario starting from the geographic location of the infrastructure elements. In this paper we propose PAFFI, a framework for the performance analysis of fog infrastructures in realistic scenarios. We describe the main features of the framework and its capability to automatically generate realistic fog topologies, with an optimized mapping between sensors, fog nodes and cloud data centers, whose performance can be evaluated by means of simulation.


2018 - An Approach to Balance Maintenance Costs and Electricity Consumption in Cloud Data Centers [Journal Article]
Chiaraviglio, Luca; D'Andreagiovanni, Fabio; Lancellotti, Riccardo; Shojafar, Mohammad; Blefari Melazzi, Nicola; Canali, Claudia
abstract

We target the problem of managing the power states of the servers in a Cloud Data Center (CDC) to jointly minimize the electricity consumption and the maintenance costs derived from the variation of power (and consequently of temperature) on the servers' CPUs. In more detail, we consider a set of virtual machines (VMs) and their requirements in terms of CPU and memory across a set of Time Slots (TSs). We then model the consumed electricity by taking into account the VM processing costs on the servers, the costs for transferring data between VMs, and the costs for migrating VMs across the servers. In addition, we employ a material-based fatigue model to compute the maintenance costs needed to repair the CPU as a consequence of the variation of the server power states over time. After detailing the problem formulation, we design an original algorithm, called Maintenance and Electricity Costs Data Center (MECDC), to solve it. Our results, obtained over several representative scenarios from a real CDC, show that MECDC largely outperforms two reference algorithms, which instead target either the load balancing or the energy consumption of the servers.


2018 - An Optimization Model to Reduce Energy Consumption in Software-Defined Data Centers [Conference Paper]
Canali, Claudia; Lancellotti, Riccardo; Shojafar, Mohammad
abstract

The increasing popularity of Software-Defined Network technologies is shaping the characteristics of present and future data centers. This trend, leading to the advent of Software-Defined Data Centers (SDDCs), will have a major impact on the solutions to address the issue of reducing energy consumption in cloud systems. As we move towards a scenario where the network is more flexible and supports virtualization and softwarization of its functions, energy management must take into account not just computation requirements but also network-related effects, and must explicitly consider migrations throughout the infrastructure of Virtual Elements (VEs), which can be both Virtual Machines and Virtual Routers. Failing to do so is likely to result in sub-optimal energy management in current cloud data centers, and this will be even more evident in future SDDCs. In this chapter, we propose a joint computation-plus-communication model for VE allocation that minimizes energy consumption in a cloud data center. The model contains a threefold contribution. First, we consider the data exchanged between VEs and we capture the different connections within the data center network. Second, we model the energy consumption due to VE migrations, considering both the data transfer and the computational overhead. Third, we propose a VE allocation process that does not need to introduce and tune weight parameters to combine the two (often conflicting) goals of minimizing the number of powered-on servers and avoiding too many VE migrations. A case study is presented to validate our proposal. We apply our model considering both computation and communication energy contributions, including those of the migration process, and we demonstrate that our proposal outperforms the existing alternatives for VE allocation in terms of energy reduction.


2018 - Designing a private CDN with an off-sourced network infrastructure: model and case study [Conference Paper]
Canali, Claudia; Corbelli, Andrea; Lancellotti, Riccardo
abstract

Content Delivery Networks (CDNs) for multimedia contents are typically managed by a dedicated company. However, there are cases where an enterprise already investing in a dedicated network infrastructure wants to deploy its own private CDN. This scenario differs from traditional CDNs for a twofold reason: first, the workload characteristics; second, the impact that off-sourcing the management of the network infrastructure to a third party has on the available choices for the CDN design. The contribution of this paper is to introduce and discuss the optimization models used to design the private CDN and to validate our models using a case study.


2018 - Gender equality in tertiary education and research institutions. An evaluation proposal. [Book Chapter]
Addabbo, T.; Canali, C.; Facchinetti, G.; Grandi, Alessandro; Pirotti, T.
abstract

Gender inequality in research and innovation is well documented (European Commission, 2016) and tools to measure and monitor it have been proposed and tested within EU funded projects as GenderTime (Badaloni & Perini, 2016) or Effective gender equality in research and academia (EGERA) (http://www.egera.eu/). The evaluation proposal at the heart of this contribution has been developed within EQUAL-IST project (Gender Equality Plans for Information Sciences and Technology Research Institutions) funded under the European Union's Horizon 2020 research and innovation programme that aims at introducing structural changes in research organizations to enhance gender equality within Information System and Technology Institutions. The dimensions and indicators used to measure gender equality are consistent to those that the literature on gender equality in research and academic institutions have shown to be significant. Our contribution shows an innovation in the choice on how to measure gender equality by using Fuzzy Multi Criteria Decision Analysis (FMCDA). We propose a Fuzzy Expert System, a cognitive model that, by replicating the expert way of learning and thinking, allows to formalize qualitative concepts and to reach a synthetic measure of the institution's gender equality (ranging from 0 to 1 increasing with gender equality achievements) that can be then disentangled in its different dimensions. The latter characteristic of the model that we propose can be fruitfully used by policy makers and Equal opportunity officers in order to detect and address the critical elements in the organization and carry out changes to improve gender equality. A first application of the model has been experimented within the EQUAL-IST project and is available for other universities and research institutions wishing to obtain an assessment of their organization in terms of gender equality. 
Further developments of the model, together with its wider implementation, include the fuzzy-logic-based assessment of gender equality policies and of the institutional factors affecting gender equality within the institution.
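As a purely illustrative sketch of the fuzzy-aggregation idea behind the abstract above (the indicator names, membership functions, and weights are invented here, not taken from the paper), a minimal fuzzy expert system can map normalized indicators to a single score in [0, 1]:

```python
# Hypothetical sketch: aggregate gender-equality indicators (each already
# normalized to [0, 1]) into one synthetic score via fuzzy sets and a
# centroid defuzzification. Names and weights are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Membership degrees in the 'low', 'medium', 'high' fuzzy sets."""
    return {
        "low": tri(value, -0.5, 0.0, 0.5),
        "medium": tri(value, 0.0, 0.5, 1.0),
        "high": tri(value, 0.5, 1.0, 1.5),
    }

def equality_score(indicators, weights):
    """Weighted centroid defuzzification over all indicators."""
    centroids = {"low": 0.0, "medium": 0.5, "high": 1.0}
    num = den = 0.0
    for name, value in indicators.items():
        w = weights[name]
        for label, mu in fuzzify(value).items():
            num += w * mu * centroids[label]
            den += w * mu
    return num / den if den else 0.0

# Illustrative indicators for a fictional institution:
score = equality_score(
    {"recruitment": 0.8, "career_progression": 0.4, "governance": 0.6},
    {"recruitment": 1.0, "career_progression": 1.0, "governance": 1.0},
)
```

The resulting score grows with the underlying achievements and, because each indicator contributes separately, can be decomposed per dimension as the abstract describes.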


2018 - Joint Minimization of the Energy Costs from Computing, Data Transmission, and Migrations in Cloud Data Centers [Articolo su rivista]
Canali, Claudia; Chiaraviglio, Luca; Lancellotti, Riccardo; Shojafar, Mohammad
abstract

We propose a novel model, called JCDME, for the allocation of Virtual Elements (VEs), with the goal of minimizing the energy consumption in a Software-Defined Cloud Data Center (SDDC). In more detail, we model the energy consumption by considering the computing costs of the VEs on the physical servers, the costs for migrating VEs across the servers, and the costs for transferring data between VEs. In addition, JCDME introduces a weight parameter to avoid an excessive number of VE migrations. Specifically, we propose three different strategies to solve the JCDME problem with an automatic and adaptive computation of the weight parameter for the VE migration costs. We then evaluate the considered strategies over a set of scenarios, ranging from a small-sized SDDC up to a medium-sized SDDC composed of hundreds of VEs and hundreds of servers. Our results demonstrate that JCDME is able to save up to an additional 7% of energy w.r.t. previous energy-aware algorithms, without a substantial increase in the solution complexity.


2018 - On private CDNs with off-sourced network infrastructures: A model and a case study [Articolo su rivista]
Canali, Claudia; Corbelli, Andrea; Lancellotti, Riccardo
abstract

The delivery of multimedia contents through a Content Delivery Network (CDN) is typically handled by a specific third party, separated from the content provider. However, in some specific cases, the content provider may be interested in carrying out this function using a Private CDN, possibly relying on an off-sourced network infrastructure. This scenario poses new challenges and limitations with respect to the typical case of content delivery. First, the system has to face a different workload, as the content consumers are typically part of the same organization as the content provider. Second, the off-sourced nature of the network infrastructure has a major impact on the available choices for CDN design. In this paper we develop an exact mathematical model for the design of a Private CDN that addresses the issues and constraints typical of such a scenario. Furthermore, we analyze different heuristics to solve the optimization problem. We apply the proposed model to a real case study and validate the results by means of simulation.


2018 - Social technologies for the workplace: Metrics proposal for adoption assessment [Relazione in Atti di Convegno]
De Michele, R.; Fabbri, T.; Canali, C.
abstract

As Web 2.0 technologies are increasingly being implemented for business purposes, they offer a wide range of opportunities and potential benefits for the enterprises that internally adopt digital social platforms. However, enterprises are usually unable to correctly and effectively evaluate their investments in this direction. To fill this gap, in this paper we propose two metrics to assess the adoption and performance of enterprise social platforms, namely the (Total and Active) user Participation Rate and the Return on Effort, and we apply them to data gathered from two companies that recently adopted digital platforms including social tools for collaboration and communication among employees.
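The two metrics named in the abstract can be illustrated with a small sketch. The exact formulas in the paper may differ; here Participation Rate is taken to be registered (total) or recently active users over the workforce, and Return on Effort is assumed, purely for illustration, to be content consumed per unit of content produced:

```python
# Illustrative computation of the two adoption metrics. Definitions and
# numbers are assumptions for the sketch, not the paper's exact formulas.

def participation_rate(users, workforce):
    """Fraction of the workforce participating in the platform."""
    return users / workforce

def return_on_effort(reads, posts):
    """Content consumed per unit of content produced (assumed definition)."""
    return reads / posts if posts else 0.0

total_pr = participation_rate(420, 600)    # all registered users
active_pr = participation_rate(180, 600)   # users active in the last month
roe = return_on_effort(reads=5400, posts=450)
```

Tracking both the Total and the Active rate separately, as the paper proposes, distinguishes one-off sign-ups from sustained engagement.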


2018 - Special issue on algorithms for the resource management of large scale infrastructures [Articolo su rivista]
Ardagna, Danilo; Canali, Claudia; Lancellotti, Riccardo
abstract

Modern distributed systems are becoming increasingly complex as virtualization is being applied at both the levels of computing and networking. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. This issue is further exacerbated by the unpredictable workload of modern applications and the need to limit the global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to address the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four papers that we selected present an up-to-date view of the emerging trends, and the papers propose innovative solutions to support efficient and self-managing systems that are able to adapt, manage, and cope with changes derived from continually changing workload and application deployment settings, without the need for human supervision.


2017 - A Computation- and Network-Aware Energy Optimization Model for Virtual Machines Allocation [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo; Shojafar, Mohammad
abstract

Reducing energy consumption in cloud data centers is a complex task, where both computation and network related effects must be taken into account. While existing solutions aim to reduce energy consumption by considering computational and communication contributions separately, limited attention has been devoted to models integrating both parts. We claim that this gap leads to sub-optimal management in current cloud data centers, which will become even more evident in future architectures characterized by Software-Defined Network approaches. In this paper, we propose a joint computation-plus-communication model for Virtual Machine (VM) allocation that minimizes energy consumption in a cloud data center. The contribution of the proposed model is threefold. First, we take into account data traffic exchanges between VMs, capturing the heterogeneous connections within the data center network. Second, the energy consumption due to VM migrations is modeled by considering both data transfer and computational overhead. Third, the proposed VM allocation process does not rely on weight parameters to combine the two (often conflicting) goals of tightly packing VMs to minimize the number of powered-on servers and of avoiding an excessive number of VM migrations. An extensive set of experiments confirms that our proposal, which considers both computation and communication energy contributions even in the migration process, outperforms other approaches for VM allocation in terms of energy reduction.


2017 - A Correlation-based Methodology to Infer Communication Patterns between Cloud Virtual Machines [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

The allocation of VMs over the servers of a cloud data center is becoming a critical task to guarantee energy savings and high performance. Only recently have network-aware techniques for VM allocation been proposed. However, a network-aware placement requires knowledge of the data transfer patterns between VMs, so that VMs exchanging significant amounts of information can be placed on low-cost communication paths (e.g., on the same server). This information is not easy to obtain unless a specialized monitoring function is deployed over the data center infrastructure. In this paper, we propose a correlation-based methodology that aims to infer communication patterns starting from the network traffic time series of each VM, without relying on special-purpose monitoring. Our study focuses on the case where a data center hosts a multi-tier application deployed using horizontal replication. This typical deployment makes the identification of VM communications particularly challenging, because the traffic patterns are similar in every VM belonging to the same application tier. In the evaluation of the proposed methodology, we compare different correlation indexes and consider different time granularities for the monitoring of network traffic. Our study demonstrates the feasibility of the proposed approach, which can identify which VMs are interacting with each other even in the challenging scenario considered in our experiments.
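A minimal sketch of the correlation-based idea follows: VMs whose network traffic time series are strongly correlated are flagged as likely communication partners. The Pearson index, the threshold, and the traffic samples are illustrative choices, not the paper's exact configuration:

```python
# Sketch: flag VM pairs with highly correlated traffic time series as
# likely communication partners. Threshold and data are illustrative.

from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def communicating_pairs(traffic, threshold=0.8):
    """traffic: dict mapping VM name -> list of traffic samples."""
    return [(a, b) for a, b in combinations(sorted(traffic), 2)
            if pearson(traffic[a], traffic[b]) >= threshold]

traffic = {
    "web1": [10, 30, 25, 60, 40],
    "db1":  [12, 33, 24, 58, 45],   # mirrors web1 -> likely a partner
    "batch": [50, 10, 55, 5, 60],   # unrelated pattern
}
```

In the horizontally replicated scenario the paper targets, many series look alike, which is exactly why the choice of correlation index and time granularity matters.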


2017 - A measurement-based analysis of temperature variations introduced by power management on Commodity HardWare [Relazione in Atti di Convegno]
Chiaraviglio, Luca; Blefari-Melazzi, Nicola; Canali, Claudia; Cuomo, Francesca; Lancellotti, Riccardo; Shojafar, Mohammad
abstract

Commodity HardWare (CHW) is currently used in the Internet to deploy large data centers or small computing nodes. Moreover, CHW will also be used to deploy future telecommunication networks, thanks to the adoption of the forthcoming network softwarization paradigm. In this context, CHW machines can be put in Active Mode (AM) or in Sleep Mode (SM) several times per day, based on the traffic requirements from users. However, the transitions between the power states may introduce fatigue effects, which may increase the CHW maintenance costs. In this paper, we perform a measurement campaign on a CHW machine subject to power state changes introduced by SM. Our results show that the temperature change due to power state transitions is not negligible, and that the abrupt stopping of the fans on hot components (such as the CPU) tends to spread the heat over the other components of the CHW machine. In addition, we also show that the CHW failure rate is reduced by a factor of 5 when the number of transitions between AM and SM states is more than 20 per day and the SM duration is around 800 s.


2017 - Identifying Communication Patterns between Virtual Machines in Software-Defined Data Centers [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

Modern cloud data centers typically exploit management strategies to reduce the overall energy consumption. While most of the solutions focus on the energy consumption due to computational elements, the advent of the Software-Defined Network paradigm opens the possibility for more complex strategies taking into account the network traffic exchange within the data center. However, a network-aware Virtual Machine (VM) allocation requires the knowledge of data communication patterns, so that VMs exchanging significant amounts of data can be placed on the same physical host or on low-cost communication paths. In Infrastructure as a Service data centers, the information about VM traffic exchange is not easily available unless a specialized monitoring function is deployed over the data center infrastructure. The main contribution of this paper is a methodology to infer VM communication patterns starting from the input/output network traffic time series of each VM, without relying on special-purpose monitoring. Our reference scenario is a software-defined data center hosting a multi-tier application deployed using horizontal replication. The proposed methodology has two main goals to support network-aware VM allocation: first, to identify pairs of intensively communicating VMs through correlation-based analysis of the time series; second, to identify VMs belonging to the same vertical stack of a multi-tier application. We evaluate the methodology by comparing different correlation indexes, clustering algorithms and time granularities to monitor the network traffic. The experimental results demonstrate the capability of the proposed approach to identify interacting VMs, even in a challenging scenario where the traffic patterns are similar in every VM belonging to the same application tier.


2017 - Scalable and automatic virtual machines placement based on behavioral similarities [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

The success of the cloud computing paradigm is leading to a significant growth in size and complexity of cloud data centers. This growth exacerbates the scalability issues of the Virtual Machine (VM) placement problem, which assigns VMs to the physical nodes of the infrastructure. This task can be modeled as a multi-dimensional bin-packing problem, with the goal to minimize the number of physical servers (for economic and environmental reasons), while ensuring that each VM can access the resources required in the near future. Unfortunately, the naïve bin-packing problem applied to a real data center is not solvable in a reasonable time, because the high number of VMs and physical nodes makes the problem computationally unmanageable. Existing solutions improve scalability at the expense of solution quality, resulting in higher costs and a heavier environmental footprint. The Class-Based placement technique (CBP) is a novel approach that exploits existing solutions to automatically group VMs showing similar behaviour. The Class-Based technique solves a placement problem that considers only some representative VMs for each class, and that can be replicated as a building block to solve the global VM placement problem. Using real traces, we analyse the performance of our proposal, comparing different alternatives to automatically determine the number of building blocks. Furthermore, we compare our proposal against the existing alternatives and evaluate the results for different workload compositions. We demonstrate that the CBP proposal outperforms existing solutions in terms of scalability and VM placement quality.
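The placement subproblem described above is a bin-packing instance; a common baseline heuristic is First-Fit Decreasing, sketched here for a single CPU dimension. This is a generic illustration of the underlying problem (the CBP technique would apply such a solver to per-class representatives only); demands and capacity are invented:

```python
# Sketch: First-Fit Decreasing bin packing for VM placement along one
# resource dimension. Demands and server capacity are illustrative.

def first_fit_decreasing(demands, capacity):
    """Pack VM demands into as few servers of given capacity as possible.

    Returns the number of powered-on servers."""
    servers = []  # residual capacity of each powered-on server
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(servers):
            if d <= free:
                servers[i] -= d   # fits on an already powered-on server
                break
        else:
            servers.append(capacity - d)  # power on a new server
    return len(servers)

# Six VMs with normalized CPU demands, unit-capacity servers:
n_servers = first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4, 0.6], capacity=1.0)
```

Here the total demand is 2.7, so three unit servers are a lower bound, and the heuristic reaches it; on harder instances FFD may use more than the optimum, which is the quality/scalability trade-off the abstract discusses.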


2016 - A comparison of techniques to detect similarities in cloud virtual machines [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

Scalability in monitoring and management of cloud data centres may be improved through the clustering of virtual machines (VMs) exhibiting similar behaviour. However, available solutions for automatic VM clustering present some important drawbacks that hinder their applicability to real cloud scenarios. For example, existing solutions show a clear trade-off between the accuracy of the VM clustering and the computational cost of the automatic process; moreover, their performance shows a strong dependence on specific technique parameters. To overcome these issues, we propose a novel approach for VM clustering that uses Mixtures of Gaussians (MoGs) together with the Kullback-Leibler divergence to model similarity between VMs. Furthermore, we provide a thorough experimental evaluation of our proposal and of existing techniques to identify the most suitable solution for different workload scenarios.
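The similarity measure named above combines Mixtures of Gaussians with the Kullback-Leibler divergence. A closed form exists only for the single-Gaussian case, shown in this sketch; for full mixtures the divergence is typically approximated (e.g., by Monte Carlo sampling). The numeric values are illustrative:

```python
# Sketch: KL divergence between two univariate Gaussians, the building
# block of a MoG-based VM similarity measure. Parameters are illustrative.

from math import log

def kl_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, var1) || N(mu2, var2) ) for univariate Gaussians."""
    return 0.5 * (log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Two VMs with similar CPU-usage distributions vs a dissimilar one:
d_similar = kl_gauss(0.30, 0.01, 0.32, 0.012)
d_different = kl_gauss(0.30, 0.01, 0.80, 0.02)
```

The divergence is zero only for identical distributions and grows as the usage profiles diverge, which is what makes it usable as a clustering distance (after symmetrization, since KL itself is asymmetric).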


2016 - An energy-aware scheduling algorithm in DVFS-Enabled Networked Data Centers [Relazione in Atti di Convegno]
Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Abolfazli, Saeid
abstract

In this paper, we propose an adaptive online energy-aware scheduling algorithm that exploits the reconfiguration capability of Virtualized Networked Data Centers (VNetDCs) processing large amounts of data in parallel. To achieve energy efficiency in such intensive computing scenarios, a jointly balanced provisioning and scaling of the networking-plus-computing resources is required. We propose a scheduler that manages both the incoming workload and the VNetDC infrastructure to minimize the communication-plus-computing energy dissipated by processing incoming traffic under hard real-time constraints on the per-job computing-plus-communication delays. Specifically, our scheduler can distribute the workload among multiple virtual machines (VMs) and can tune the processor frequencies and the network bandwidth. The energy model used in our scheduler is rather sophisticated and also takes into account the internal/external frequency-switching energy costs. Our experiments demonstrate that the proposed scheduler guarantees high quality of service to the users while respecting the service level agreements. Furthermore, it attains minimum energy consumption under two real-world operating conditions: a discrete and finite number of CPU frequencies, and non-negligible VM reconfiguration costs. Our results confirm that the overall energy savings of the data center can be significantly higher than with existing solutions.


2016 - Minimizing computing-plus-communication energy consumptions in virtualized networked data centers [Relazione in Atti di Convegno]
Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Baccarelli, Enzo
abstract

In this paper, we propose a dynamic resource provisioning scheduler to maximize the application throughput and minimize the computing-plus-communication energy consumption in virtualized networked data centers. The goal is to maximize the energy efficiency while meeting hard QoS requirements on processing delay. The resulting optimal resource scheduler is adaptive, and jointly performs: i) admission control of the input traffic offered by the cloud provider; ii) adaptive balanced control and dispatching of the admitted traffic; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled virtual machines instantiated onto the virtualized data center. The proposed scheduler can manage changes in the workload without requiring estimation or prediction of its future trend. Furthermore, it takes into account the most advanced mechanisms for power reduction in servers, such as DVFS and reduced power states. The performance of the proposed scheduler is numerically tested and compared against that of some state-of-the-art schedulers, under both synthetically generated and measured real-world workload traces. The results confirm the good delay-vs-energy performance of the proposed scheduler.


2015 - A Class-based Virtual Machine Placement Technique for a Greener Cloud [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

The management of IaaS cloud systems is a challenging task, where a huge number of Virtual Machines (VMs) must be placed over a physical infrastructure with multiple nodes. Economical reasons and the need to reduce the ever-growing carbon footprint of modern data centers require an efficient VM placement that minimizes the number of required physical nodes. Since each VM is considered a black box with independent characteristics, the placement process presents scalability issues due to the amount of involved data and to the resulting number of constraints in the underlying optimization problem. For large data centers, this excludes the possibility of reaching an optimal allocation. Existing solutions typically exploit heuristics or simplified formulations to solve the allocation problem, at the price of possibly sub-optimal solutions. We introduce a novel placement technique, namely Class-Based, that exploits available solutions to automatically group VMs showing similar behavior. The Class-Based technique solves a placement problem that considers only some representatives for each class, and that can be replicated as a building block to solve the global VM placement problem. Our experiments demonstrate that the proposed technique is a viable solution that can significantly improve the scalability of VM placement in IaaS cloud systems with respect to existing alternatives.


2015 - Automatic parameter tuning for Class-Based Virtual Machine Placement in cloud infrastructures [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

A critical task in the management of Infrastructure as a Service cloud data centers is the placement of Virtual Machines (VMs) over the infrastructure of physical nodes. However, as the size of data centers grows, finding optimal VM placement solutions becomes challenging. The typical approach is to rely on heuristics that improve VM placement scalability by (partially) discarding information about the VM behavior. An alternative approach providing encouraging results, namely Class-Based Placement (CBP), has been proposed recently. CBP considers VMs divided into classes with similar behavior in terms of resource usage. This technique can obtain high-quality placement because it considers a detailed model of VM behavior on a per-class basis. At the same time, scalability is achieved by considering a small-scale VM placement problem that is replicated as a building block for the whole data center. However, a critical parameter of the CBP technique is the number (and size) of building blocks to consider. Many small building blocks may reduce the overall VM placement solution quality due to fragmentation of the physical node resources over blocks. On the other hand, few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper addresses this problem by analyzing the impact of block size on the performance of class-based VM placement. Furthermore, we propose an algorithm to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.


2015 - Exploiting Classes of Virtual Machines for Scalable IaaS Cloud Management [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

A major challenge of IaaS cloud data centers is the placement of a huge number of Virtual Machines (VMs) over a physical infrastructure with a high number of nodes. The VM placement process must strive to reduce as much as possible the number of physical nodes to improve management efficiency, reduce energy consumption and guarantee economical savings. However, since each VM is considered a black box with independent characteristics, the VM placement task presents scalability issues due to the amount of involved data and to the resulting number of constraints in the underlying optimization problem. For large data centers, this condition often makes it impossible to reach an optimal solution for VM placement. Existing solutions typically exploit heuristics or simplified formulations to solve the placement problem, at the price of possibly sub-optimal solutions. We propose an innovative VM placement technique, namely Class-Based, that takes advantage of existing solutions to automatically group VMs showing similar behavior. The Class-Based technique solves a placement problem that considers only some representatives for each class, and that can be replicated as a building block to solve the global VM placement problem. Our experiments demonstrate that the proposed technique is viable and can significantly improve the scalability of VM placement in IaaS cloud systems with respect to existing alternatives.


2015 - Parameter tuning for scalable multi-resource server consolidation in cloud systems [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

Infrastructure as a Service cloud providers are increasingly relying on scalable and efficient Virtual Machine (VM) placement as the main solution for reducing unnecessary costs and waste of physical resources. However, the continuous growth of the size of cloud data centers poses scalability challenges to finding optimal placement solutions. The use of heuristics and simplified server consolidation models that partially discard information about the VM behavior represents the typical approach to guarantee scalability, but at the expense of suboptimal placement solutions. A recently proposed alternative approach, namely Class-Based Placement (CBP), divides VMs into classes with similar behavior in terms of resource usage, and addresses scalability by considering a small-scale server consolidation problem that is replicated as a building block for the whole data center. However, the server consolidation model exploited by the CBP technique suffers from two main limitations. First, it considers only one VM resource (CPU) for the consolidation problem. Second, it does not analyze the impact of the number (and size) of building blocks to consider. Many small building blocks may reduce the overall VM placement solution quality due to fragmentation of the physical server resources over blocks. On the other hand, few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper extends the CBP server consolidation model to take into account multiple resources. Furthermore, we analyze the impact of block size on the performance of the proposed consolidation model, and we present and compare multiple strategies to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.


2014 - An Adaptive Technique to Model Virtual Machine Behavior for Scalable Cloud Monitoring [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

Supporting the emerging digital society is creating new challenges for cloud computing infrastructures, exacerbating scalability issues in the processes of resource monitoring and management in large cloud data centers. Recent research studies show that automatically clustering similar virtual machines running the same software component may improve the scalability of the monitoring process in IaaS cloud systems. However, to avoid misclassifications, the clustering process must take into account long time series (up to weeks) of resource measurements, thus resulting in a mechanism that is slow and not suitable for a cloud computing model where virtual machines may be frequently added to or removed from the data center. In this paper, we propose a novel methodology that dynamically adapts the length of the time series necessary to correctly cluster each VM depending on its behavior. This approach supports a clustering process that does not have to wait a long time before making decisions about the VM behavior. The proposed methodology exploits elements of fuzzy logic for the dynamic determination of the time series length. To evaluate the viability of our solution, we apply the methodology to a case study considering different algorithms for VM clustering. Our results confirm that after just 1 day of monitoring we can cluster up to 80% of the VMs without misclassifications, while for the remaining 20% of the VMs longer observations are needed.


2014 - Balancing Accuracy and Execution Time for Similar Virtual Machines Identification in IaaS Cloud [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

Identification of VMs exhibiting similar behavior can improve scalability in monitoring and management of cloud data centers. Existing solutions for automatic VM clustering may be either very accurate, at the price of a high computational cost, or able to provide fast results with limited accuracy. Furthermore, the performance of most solutions may change significantly depending on the specific values of technique parameters. In this paper, we propose a novel approach to model VM behavior using Mixtures of Gaussians (MoGs) to approximate the probability density function of resource utilization. Moreover, we exploit the Kullback-Leibler divergence to measure the similarity between MoGs. The proposed technique is compared against the state of the art through a set of experiments with data coming from a private cloud data center. Our experiments show that the proposed technique can provide high accuracy with limited computational requirements. Furthermore, we show that the performance of our proposal, unlike the existing alternatives, does not depend on any parameter.


2014 - Detecting Similarities in Virtual Machine Behavior for Cloud Monitoring using Smoothed Histograms [Articolo su rivista]
Lancellotti, Riccardo; Canali, Claudia
abstract

The growing size and complexity of cloud systems determine scalability issues for resource monitoring and management. While most existing solutions consider each Virtual Machine (VM) as a black box with independent characteristics, we embrace a new perspective where VMs with similar behaviors in terms of resource usage are clustered together. We argue that this new approach has the potential to address scalability issues in cloud monitoring and management. In this paper, we propose a technique to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. This innovative technique models VM behavior exploiting the probability histogram of their resource usage, and performs smoothing-based noise reduction and selection of the most relevant information to consider for the clustering process. Through extensive evaluation, we show that our proposal achieves high and stable performance in terms of automatic VM clustering, and can reduce the monitoring requirements of cloud systems.


2014 - Exploiting ensemble techniques for automatic virtual machine clustering in cloud systems [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

Cloud computing has recently emerged as a new paradigm to provide computing services through large-size data centers where customers may run their applications in a virtualized environment. The advantages of cloud in terms of flexibility and economy encourage many enterprises to migrate from local data centers to cloud platforms, thus contributing to the success of such infrastructures. However, as the size and complexity of cloud infrastructures grow, scalability issues arise in monitoring and management processes. Scalability issues are exacerbated because available solutions typically consider each virtual machine (VM) as a black box with independent characteristics, which is monitored at a fine-grained granularity level for management purposes, thus generating huge amounts of data to handle. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper, we propose an automated methodology to cluster similar VMs starting from their resource usage information, assuming no knowledge of the software executed on them. This is an innovative methodology that combines the Bhattacharyya distance and ensemble techniques to provide a stable evaluation of the similarity between probability distributions of multiple VM resource usage, considering both system- and network-related data. We evaluate the methodology through a set of experiments on data coming from an enterprise data center. We show that our proposal achieves high and stable performance in automatic VM clustering, with a significant reduction in the amount of data collected, which lightens the monitoring requirements of a cloud data center.
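The Bhattacharyya distance named above compares probability distributions; for the discrete histograms of resource usage the paper works with, it reduces to a one-line formula over the bins. This sketch uses invented CPU-usage histograms (the ensemble step, which combines distances across several resources, is omitted):

```python
# Sketch: Bhattacharyya distance between normalized resource-usage
# histograms as a VM similarity measure. Histogram values are illustrative.

from math import sqrt, log

def bhattacharyya_distance(p, q):
    """p, q: normalized histograms (lists of bin probabilities)."""
    bc = sum(sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    return -log(bc)

# CPU-usage histograms (4 utilization bins) for three VMs:
cpu_vm1 = [0.1, 0.6, 0.2, 0.1]
cpu_vm2 = [0.1, 0.5, 0.3, 0.1]    # similar usage profile to vm1
cpu_vm3 = [0.7, 0.2, 0.05, 0.05]  # markedly different profile
```

The distance is zero for identical histograms and grows as the profiles diverge, so thresholding (or feeding the pairwise distances to a clustering algorithm, as in the paper) groups behaviorally similar VMs.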


2014 - Improving scalability of cloud monitoring through PCA-based Clustering of Virtual Machines [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

Cloud computing has recently emerged as a leading paradigm to allow customers to run their applications in virtualized large-scale data centers. Existing solutions for monitoring and management of these infrastructures consider virtual machines (VMs) as independent entities with their own characteristics. However, these approaches suffer from scalability issues due to the increasing number of VMs in modern cloud data centers. We claim that scalability issues can be addressed by leveraging the similarity among VMs behavior in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. The innovative contribution of the proposed methodology is the use of the statistical technique known as principal component analysis (PCA) to automatically select the most relevant information to cluster similar VMs. We apply the methodology to two case studies, a virtualized testbed and a real enterprise data center. In both case studies, the automatic data selection based on PCA allows us to achieve high performance, with a percentage of correctly clustered VMs between 80% and 100% even for short time series (1 day) of monitored data. Furthermore, we estimate the potential reduction in the amount of collected data to demonstrate how our proposal may address the scalability issues related to monitoring and management in cloud computing data centers.
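The PCA step described in the abstract above can be illustrated with a small sketch: per-VM feature vectors built from several monitored resources are projected onto the leading principal component, and VMs are then grouped in the reduced space. The data and the grouping rule (a simple sign-based split instead of a full clustering algorithm) are illustrative assumptions:

```python
# Sketch: project per-VM resource-usage vectors onto the first principal
# component and split VMs into behavioral groups. Data is illustrative.

import numpy as np

def pca_project(X, k=1):
    """Project rows of X onto the k leading principal components."""
    Xc = X - X.mean(axis=0)                  # center the features
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top

# Rows: VMs; columns: mean usage of CPU, memory, net-in, net-out.
X = np.array([
    [0.9, 0.7, 0.2, 0.1],   # compute-heavy VMs
    [0.8, 0.6, 0.1, 0.2],
    [0.1, 0.2, 0.9, 0.8],   # network-heavy VMs
    [0.2, 0.3, 0.8, 0.9],
])
z = pca_project(X, k=1).ravel()
groups = (z > 0).astype(int)   # two behavioral classes
```

Because PCA keeps only the most informative directions, the same idea lets the monitoring system collect far fewer metrics per VM, which is the data-reduction argument made in the abstract.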


2013 - Algorithms for Web Service Selection with Static and Dynamic Requirements [Articolo su rivista]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

A main feature of Service Oriented Architectures is the capability to support the development of new applications through the composition of existing Web services that are offered by different service providers. The runtime selection of which providers may better satisfy the end-user requirements in terms of quality of service remains an open issue in the context of Web services. The selection of the service providers has to satisfy requirements of different nature: requirements may refer to static qualities of the service providers, which do not change over time or change slowly compared to the service invocation time (for example related to provider reputation), and to dynamic qualities, which may change on a per-invocation basis (typically related to performance, such as the response time). The main contribution of this paper is to propose a family of novel runtime algorithms that select service providers on the basis of requirements involving both static and dynamic qualities, as in a typical Web scenario. We implement the proposed algorithms in a prototype and compare them with the solutions commonly used in service selection, which consider all the service provider qualities as static for the scope of the selection process. Our experiments show that a static management of quality requirements is viable only in the unrealistic case where workload remains stable over time, but it leads to very poor performance in variable environments. On the other hand, the combined management of static and dynamic quality requirements allows us to achieve better user-perceived performance over a wide range of scenarios, with the response time of the proposed algorithms reduced by up to 50% with respect to that of static algorithms.


2013 - Automatic virtual machine clustering based on Bhattacharyya distance for multi-cloud systems [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

Size and complexity of modern data centers pose scalability issues for the resource monitoring system supporting management operations, such as server consolidation. When we pass from cloud to multi-cloud systems, scalability issues are exacerbated by the need to manage geographically distributed data centers and exchange monitored data across them. While existing solutions typically consider every Virtual Machine (VM) as a black box with independent characteristics, we claim that scalability issues in multi-cloud systems could be addressed by clustering together VMs that show similar behaviors in terms of resource usage. In this paper, we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. This innovative methodology exploits the Bhattacharyya distance to measure the similarity of the probability distributions of VM resource usage, and automatically selects the most relevant resources to consider for the clustering process. The methodology is evaluated through a set of experiments with data from a cloud provider. We show that our proposal achieves high and stable performance in terms of automatic VM clustering. Moreover, we estimate the reduction in the amount of data collected to support system management in the considered scenario, thus showing how the proposed methodology may reduce the monitoring requirements in multi-cloud systems.


2013 - Experimental evaluation of peer-to-peer applications (Guest Editorial) [Articolo su rivista]
Roberto, Canonico; Canali, Claudia; Walid, Dabbous
abstract

n/a


2012 - A Novel Intermediary Framework for Dynamic Edge Service Composition [Articolo su rivista]
Canali, Claudia; Colajanni, Michele; Delfina, Malandrino; Vittorio, Scarano; Raffaele, Spinelli
abstract

Multimedia content, user mobility and heterogeneous client devices require novel systems that are able to support ubiquitous access to the Web resources. In this scenario, solutions that combine flexibility, efficiency and scalability in offering edge services for ubiquitous access are needed. We propose an original intermediary framework, namely Scalable Intermediary Software Infrastructure (SISI), which is able to dynamically compose edge services on the basis of user preferences and device characteristics. The SISI framework exploits a per-user profiling mechanism, where each user can initially set his/her personal preferences through a simple Web interface, and the system is then able to compose at run-time the necessary components. The basic framework can be enriched through new edge services that can be easily implemented through a programming model based on APIs and internal functions. Our experiments demonstrate that flexibility and edge service composition do not affect the system performance. We show that this framework is able to chain multiple edge services and to guarantee stable performance.


2012 - A quantitative methodology based on component analysis to identify key users in social networks [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

Social networks are gaining an increasing popularity on the Internet and are becoming a critical media for business and marketing. Hence, it is important to identify the key users that may play a critical role as sources or targets of content dissemination. Existing approaches rely only on users' social connections; however, considering a single kind of information does not guarantee satisfactory results for the identification of the key users. On the other hand, considering every possible user attribute is clearly unfeasible due to the huge amount of heterogeneous user information. In this paper, we propose to select and combine a subset of user attributes with the goal of identifying sources and targets for content dissemination in a social network. We develop a quantitative methodology based on the principal component analysis. Experiments on the YouTube and Flickr networks demonstrate that our solution outperforms existing solutions by 15%.


2012 - Automated Clustering of Virtual Machines based on Correlation of Resource Usage [Articolo su rivista]
Canali, Claudia; Lancellotti, Riccardo
abstract

The recent growth in demand for modern applications combined with the shift to the Cloud computing paradigm have led to the establishment of large-scale cloud data centers. The increasing size of these infrastructures represents a major challenge in terms of monitoring and management of the system resources. Available solutions typically consider every Virtual Machine (VM) as a black box with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, considering in most cases only average CPU usage sampled at a coarse time granularity. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs depending on the usage of multiple resources, both system- and network-related, assuming no knowledge of the services executed on them. This is an innovative methodology that exploits the correlation between the resource usage to cluster together similar VMs. We evaluate the methodology through a case study with data coming from an enterprise datacenter, and we show that high performance may be achieved in automatic VM clustering. Furthermore, we estimate the reduction in the amount of data collected, thus showing that our proposal may simplify the monitoring requirements and help administrators to take decisions on the resource management of cloud computing datacenters.


2012 - Automated clustering of VMs for scalable cloud monitoring and management [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

The size of modern datacenters supporting cloud computing represents a major challenge in terms of monitoring and management of system resources. Available solutions typically consider every Virtual Machine (VM) as a black box, each with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, considering in most cases only average CPU utilization of VMs sampled at a very coarse time granularity. We claim that better management without compromising scalability could be achieved by clustering together VMs that show similar behavior in terms of resource utilization. In this paper we propose an automated methodology to cluster VMs depending on the utilization of their resources, assuming no knowledge of the services executed on them. The methodology considers several VM resources, both system- and network-related, and exploits the correlation between the resource demand to cluster together similar VMs. We apply the proposed methodology to a case study with data coming from an enterprise datacenter to evaluate the accuracy of VMs clustering and to estimate the reduction in the amount of data collected. The automatic clustering achieved through our approach may simplify the monitoring requirements and help administrators to take decisions on the management of the resources in a cloud computing datacenter.


2011 - Data Acquisition in Social Networks: Issues and Proposals [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The amount of information that is possible to gather from social networks may be useful to different contexts ranging from marketing to intelligence. In this paper, we describe the three main techniques for data acquisition in social networks, the conditions under which they can be applied, and the open problems. We then focus on the main issues that crawlers have to address for getting data from social networks, and we propose a novel solution that exploits the cloud computing paradigm for crawling. The proposed crawler is modular by design and relies on a large number of distributed nodes and on the MapReduce framework to speedup the data collection process from large social networks.


2011 - Dynamic request management algorithms for Web-based services in cloud computing [Relazione in Atti di Convegno]
Lancellotti, Riccardo; Andreolini, Mauro; Canali, Claudia; Colajanni, Michele
abstract

Service providers of Web-based services can take advantage of many convenient features of cloud computing infrastructures, but they still have to implement request management algorithms that are able to face sudden peaks of requests. We consider distributed algorithms implemented by front-end servers to dispatch and redirect requests among application servers. Current solutions based on load-blind algorithms, or considering just server load and thresholds, are inadequate to cope with the demand patterns reaching modern Internet application servers. In this paper, we propose and evaluate a request management algorithm, namely Performance Gain Prediction, that combines several pieces of information (server load, computational cost of a request, user session migration and redirection delay) to predict whether the redirection of a request to another server may result in a shorter response time. To the best of our knowledge, no other study combines information about infrastructure status, user request characteristics and redirection overhead for dynamic request management in cloud computing. Our results show that the proposed algorithm is able to reduce the response time with respect to existing request management algorithms operating on the basis of thresholds.
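The decision rule sketched in the abstract, redirect only when the predicted gain outweighs the redirection overhead, can be illustrated as follows. The linear load model and all parameter names are assumptions made for this example, not the paper's actual predictor.

```python
def should_redirect(local_load, remote_load, job_cost,
                    redirect_delay, migration_cost=0.0):
    """Redirect a request only if the predicted remote response time,
    including redirection and session-migration overheads, beats the
    predicted local one. The linear load penalty is an illustrative
    assumption, not the model from the paper."""
    predicted_local = job_cost * (1.0 + local_load)
    predicted_remote = (job_cost * (1.0 + remote_load)
                        + redirect_delay + migration_cost)
    return predicted_remote < predicted_local

# An overloaded front-end offloads an expensive request...
print(should_redirect(0.9, 0.1, job_cost=1.0, redirect_delay=0.2))  # True
# ...but keeps it local when the redirection overhead dominates.
print(should_redirect(0.2, 0.1, job_cost=1.0, redirect_delay=0.5))  # False
```

A threshold-based dispatcher would look only at `local_load`; the point of the combined rule is that a cheap request on a mildly loaded server is not worth redirecting once the delay terms are accounted for.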


2011 - Technological solutions to support Mobile Web 2.0 services [Capitolo/Saggio]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The widespread diffusion and technological improvements of wireless networks and portable devices are facilitating mobile accesses to the Web and Web 2.0 services. The emerging Mobile Web 2.0 scenario still requires appropriate solutions to guarantee user interactions that are comparable with present levels of services. In this chapter we classify the most important services for Mobile Web 2.0, and we identify the key functions that are required to support each category of Mobile Web 2.0 services. We discuss some possible technological solutions to implement these functions at the client and at the server level, and we identify some research issues that are still open.


2010 - A quantitative methodology to identify relevant users in social networks [Relazione in Atti di Convegno]
Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
abstract

Social networks are gaining an increasing popularity on the Internet, with tens of millions of registered users and an amount of exchanged contents accounting for a large fraction of the Internet traffic. Due to this popularity, social networks are becoming a critical media for business and marketing, as testified by viral advertisement campaigns based on such networks. To exploit the potential of social networks, it is necessary to classify the users in order to identify the most relevant ones. For example, in the context of marketing on social networks, it is necessary to identify which users should be involved in an advertisement campaign. However, the complexity of social networks, where each user is described by a large number of attributes, transforms the problem of identifying relevant users into a needle-in-a-haystack problem. Starting from a set of user attributes that may be redundant or do not provide significant information for our analysis, we need to extract a limited number of meaningful characteristics that can be used to identify relevant users. We propose a quantitative methodology based on Principal Component Analysis (PCA) to analyze attributes and extract characteristics of social network users from the initial attribute set. The proposed methodology can be applied to identify relevant users in social networks for different types of analysis. As an application, we present two case studies that show how the proposed methodology can be used to identify relevant users for marketing on the popular YouTube network. Specifically, we identify which users may play a key role in the content dissemination and how users may be affected by different dissemination strategies.


2010 - A two-level distributed architecture for the support of content adaptation and delivery services [Articolo su rivista]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The growing demand for Web and multimedia content accessed through heterogeneous devices requires the providers to tailor resources to the device capabilities on-the-fly. Providing services for content adaptation and delivery opens two novel challenges to the present and future content provider architectures: content adaptation services are computationally expensive; the global storage requirements increase because multiple versions of the same resource may be generated for different client devices. We propose a novel two-level distributed architecture for the support of efficient content adaptation and delivery services. The nodes of the architecture are organized in two levels: thin edge nodes on the first level act as simple request gateways towards the nodes of the second level; fat interior clusters perform all the other tasks, such as content adaptation, caching and fetching. Several experimental results show that the two-level architecture achieves better performance and scalability than existing flat or non-cooperative architectures.


2010 - Adaptive algorithms for efficient content management in social network services [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

Identifying the set of resources that are expected to receive the majority of requests in the near future, namely hot set, is at the basis of most content management strategies of any Web-based service. Here we consider social network services that open interesting novel challenges for the hot set identification. Indeed, social connections among the users and variable user access patterns with continuous operations of resource upload/download determine a highly variable and dynamic context for the stored resources. We propose adaptive algorithms that combine predictive and social information, and dynamically adjust their parameters according to continuously changing workload characteristics. A large set of experimental results show that adaptive algorithms can achieve performance close to theoretical ideal algorithms and, even more important, they guarantee stable results for a wide range of workload scenarios.


2010 - Characteristics and evolution of content popularity and user relations in social networks [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

Social networks have changed the characteristics of the traditional Web and these changes are still ongoing. Nowadays, it is impossible to design valid strategies for content management, information dissemination and marketing in the context of a social network system without considering the popularity of its content and the characteristics of the relations among its users. By analyzing two popular social networks and comparing current results with studies dating back to 2007, we confirm some previous results and we identify novel trends that can be utilized as a basis for designing appropriate content and system management strategies. Our analyses confirm the growth of the two social networks in terms of quantity of contents and numbers of social links among the users. The social navigation is having an increasing influence on the content popularity because the social links are representing a primary method through which the users search and find contents. An interesting novel trend emerging from our study is that subsets of users have a greater impact on the content popularity than in previous analyses, with evident consequences on the possibility of implementing content dissemination strategies, such as viral marketing.


2010 - Enabling Efficient Peer-to-Peer Resource Sharing in Wireless Mesh Networks [Articolo su rivista]
S., Burresi; Canali, Claudia; M. E., Renda; P., Santi
abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure, and the 1-hop broadcast nature of wireless communication, which lead to the notions of location-awareness and MAC layer cross-layering. Through extensive packet-level simulations, we investigate the separate effects of location-awareness and MAC layer cross-layering, and of their combination, on the performance of the P2P application. The combined protocol, MeshChord, reduces message overhead by as much as 40% with respect to the basic Chord design, while at the same time improving the information retrieval performance. Notably, differently from the basic Chord design, our proposed MeshChord specialization displays information retrieval performance resilient to the presence of both CBR and TCP background traffic. Overall, the results of our study suggest that MeshChord can be successfully utilized for implementing file/resource sharing applications in wireless mesh networks.
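MeshChord's location-aware and cross-layer refinements are beyond a short sketch, but the underlying Chord idea it specializes can be shown in a few lines: hash nodes and keys onto the same identifier ring and assign each key to its clockwise successor. The node names and the tiny ring size below are invented for the example.

```python
import hashlib
from bisect import bisect_left

BITS = 16  # toy identifier space; Chord typically uses 160-bit SHA-1 IDs

def ring_id(name: str) -> int:
    """Hash a node address or resource key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** BITS)

# Hypothetical mesh routers participating as Chord peers
nodes = sorted(ring_id(f"router-{i}") for i in range(8))

def successor(key: str) -> int:
    """A key is stored at the first node whose identifier is >= the
    key's identifier, wrapping around the ring if necessary."""
    i = bisect_left(nodes, ring_id(key))
    return nodes[i % len(nodes)]
```

A real Chord deployment locates this successor in O(log N) hops via finger tables rather than a sorted global list; MeshChord additionally exploits the fixed mesh infrastructure and 1-hop broadcasts to reduce the traffic those lookups generate.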


2010 - Resource Management Strategies for the Mobile Web [Articolo su rivista]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The success of the Mobile Web is driven by the combination of novel Web-based services with the diffusion of advanced mobile devices that require personalization, location-awareness and content adaptation. The evolutionary trend of the Mobile Web workload places unprecedented strains on the server infrastructure of the content provider at the level of computational and storage capacity, to the extent that the technological improvements at the server and client level may be insufficient to face some resource requirements of the future Mobile Web scenario. This paper presents a twofold contribution. We identify some performance bottlenecks that can limit the performance of the future Mobile Web, and we propose and evaluate novel resource management strategies. They aim to address computational requirements through a pre-adaptation of the most popular resources even in the presence of irregular access patterns and short resource lifespan that will characterize the future Mobile Web. We investigate a large space of alternative workload scenarios. Our analysis allows us to identify when the proposed resource management strategies are able to satisfy the computational requirements of the future Mobile Web, and even some conditions where further research is necessary.


2009 - Hot set identification for social network applications [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

Several operations of Web-based applications are optimized with respect to the set of resources that will receive the majority of requests in the near future, namely the hot set. Unfortunately, the existing algorithms for hot set identification do not work well for the emerging social network applications, which are characterized by quite novel features with respect to the traditional Web: highly interactive user accesses, upload and download operations, short lifespan of the resources, and social interactions among the members of the online communities. We propose and evaluate innovative combinations of predictive models and social-aware solutions for the identification of the hot set. Experimental results demonstrate that some of the considered algorithms improve the accuracy of the hot set identification by up to 30% if compared to existing models, and they guarantee stable and robust results even in the context of social network applications characterized by high variability.


2009 - On the Impact of Far Away Interference on Evaluations of Wireless Multihop Networks [Relazione in Atti di Convegno]
D., Blough; Canali, Claudia; G., Resta; P., Santi
abstract

It is common practice in wireless multihop network evaluations to ignore interfering signals below a certain signal strength threshold. This paper investigates the thesis that this produces highly inaccurate evaluations in many cases. We start by defining a bounded version of the physical interference model, in which interference generated by transmitters located beyond a certain distance from a receiver is ignored. We then derive a lower bound on neglected interference and show that it is approximately two orders of magnitude greater than the noise floor for typical parameter values and a surprisingly small number of nodes. We next evaluate the effect of neglected interference through extensive simulations done with a widely-used packet-level simulator (GTNetS), considering 802.11 MAC with both CBR and TCP traffic in networks of varying size and topology. The results of these simulations show very large evaluation errors when neglecting far-away interference: errors in evaluating aggregate throughput when using the default interference model reached up to 210% with 100 nodes, and errors in individual flow throughputs were far greater.


2009 - Performance Evolution of Mobile Web-Based Services [Articolo su rivista]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The mobile Web's widespread diffusion opens many interesting design and management issues about server infrastructures that must satisfy present and future client demand. Future mobile Web-based services will have growing computational costs. Even requests for the same Web resource will require services to dynamically generate content that takes into account specific devices, user profiles, and contexts. The authors consider the evolution of the mobile Web workload and trends in server and client devices with the goal of anticipating future bottlenecks and developing management strategies.


2008 - Content Delivery and Management [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V.; Colajanni, Michele; Lancellotti, Riccardo
abstract

This chapter explores the issues of content delivery through CDNs, with a special focus on the delivery of dynamically generated and personalized content. We describe the main functions of a modern Web system and we discuss how the delivery performance and scalability can be improved by replicating the functions of a typical multi-tier Web system over the nodes of a CDN. For each solution, we present the state of the art in the research literature, as well as the available industry-standard products adopting the solution. Furthermore, we discuss the pros and cons of each CDN-based replication solution, pointing out the scenarios that provide the best benefits and the cases where it is detrimental to performance.


2008 - Evaluating Load Balancing in Peer-to-Peer Resource Sharing Algorithms for Wireless Mesh Networks [Relazione in Atti di Convegno]
Canali, Claudia; RENDA M., E; Santi, P.
abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. In particular, we consider the well-known Chord protocol for resource sharing in wired networks and the recently proposed MeshChord specialization for wireless mesh networks, and compare their performance under various network settings for what concerns total generated traffic and load balancing. Both iterative and recursive key lookup implementations in Chord/MeshChord are considered in our extensive performance evaluation. The results confirm the superiority of MeshChord with respect to Chord, and show that recursive key lookup is to be preferred when considering communication overhead, while a similar degree of load unbalancing is observed. However, the recursive lookup implementation reduces the efficacy of MeshChord cross-layer design with respect to the original Chord algorithm. MeshChord also has the advantage of reducing load unbalancing with respect to Chord, although a moderate degree of load unbalancing is still observed, leaving room for further improvement of the MeshChord design.


2008 - Impact of Social Networking Services on Performance and Scalability of Web Server Infrastructures [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo; Sanchez, J.
abstract

The evolution of the Internet is heading towards a new generation of social networking services that are characterized by novel access patterns determined by social interactions among the users and by a growing amount of multimedia content involved in each user interaction. The impact of these novel services on the underlying Web infrastructures is significantly different from traditional Web-based services and has not yet been widely studied. This paper presents a scalability and bottleneck analysis of a Web system supporting social networking services for different scenarios of user interaction patterns, amount of multimedia content and network characteristics. Our study demonstrates that for some social networking services the user interaction patterns may play a fundamental role in the definition of the bottleneck resource and must be considered in the design of systems supporting novel applications.


2008 - MESHCHORD: A Location-Aware, Cross-Layer Specialization of Chord for Wireless Mesh Networks [Relazione in Atti di Convegno]
Burresi, S; Canali, Claudia; RENDA M., E; Santi, P.
abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization (called MESHCHORD) that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure, and the 1-hop broadcast nature of wireless communication. Through extensive packet-level simulations, we show that MESHCHORD reduces message overhead by as much as 40% with respect to the basic Chord design, while at the same time improving the information retrieval performance.


2008 - Resource management strategies for Mobile Web-based services [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The great diffusion of Mobile Web-enabled devices allows the implementation of novel personalization, location and adaptation services that will place unprecedented strains on the server infrastructure of the content provider. This paper has a twofold contribution. First, we analyze the five-year trend of Mobile Web-based applications in terms of workload characteristics of the most popular services and their impact on the server infrastructures. As the technological improvements at the server level in the same period of time are insufficient to face the computational requirements of the future Mobile Web-based services, we propose and evaluate adequate resource management strategies. We demonstrate that pre-adapting a small fraction of the most popular resources can reduce the response time by up to one third, thus facing the increased computational impact of the future Mobile Web services.


2007 - A distributed infrastructure supporting personalized services for the Mobile Web [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
abstract

Personalized services are a key feature for the success of the next generation Web that is accessed by heterogeneous and mobile client devices. The need to provide high performance and to preserve user data privacy opens a novel dimension in the design of infrastructures and request dispatching algorithms to support personalized services for the Mobile Web. Performance issues are typically addressed by distributed architectures consisting of multiple nodes. Personalized services that are often based on sensitive user information may introduce constraints on the service location when the nodes of the distributed architecture do not provide the same level of security. In this paper, we propose an infrastructure and related dispatching algorithms that aim to combine performance and privacy requirements. The proposed scheme may efficiently support personalized services for the Mobile Web, especially if compared with existing solutions that separately address performance and privacy issues. Our proposal guarantees that up to 97% of the requests accessing sensitive user information are assigned to the most secure nodes, with limited penalty consequences on the response time.


2007 - Impact of request dispatching granularity in geographically distributed Web systems [Relazione in Atti di Convegno]
Andreolini, Mauro; Canali, Claudia; Lancellotti, Riccardo
abstract

The advent of the mobile Web and the increasing demand for personalized contents give rise to the need for computationally expensive services, such as dynamic generation and on-the-fly adaptation of contents. Providing these services exacerbates the performance issues that have to be addressed by the underlying Web architecture. When performance issues are addressed through geographically distributed Web systems with a large number of nodes located on the network edge, the dispatching mechanism that distributes requests among the system nodes becomes a critical element. In this paper, we investigate how the granularity of request dispatching may affect the performance of a distributed Web system for personalized contents. Through a real prototype, we compare dispatching mechanisms operating at various levels of granularity for different workload and network scenarios. We demonstrate that the choice of the best granularity for request dispatching strongly depends on the characteristics of the workload in terms of heterogeneity and computational requirements. A coarse-grain dispatching is preferable only when the requests have similar computational requirements. In all other instances of skewed workloads, that we can consider more realistic, a fine-grain dispatching augments the control on the node load and allows the system to achieve better performance.


2006 - A distributed architecture to support infomobility services [Relazione in Atti di Convegno]
Canali, Claudia; Lancellotti, Riccardo
abstract

The growing popularity of mobile and location-aware devices allows the deployment of infomobility systems that provide access to information and services for the support of user mobility. Current systems for infomobility services assume that most information is already available on the mobile device and the device connectivity is used for receiving critical messages from a central server. However, we argue that the next generation of infomobility services will be characterized by collaboration and interaction among the users, provided through real-time bidirectional communication between the client devices and the infomobility system. In this paper we propose an innovative architecture to support next generation infomobility services, providing interaction and collaboration among the mobile users, who can travel by several different means of transportation, ranging from cars to trains to foot. We discuss the design issues of the architecture, with particular emphasis on scalability, availability, and user data privacy, which are critical in a collaborative infomobility scenario.


2006 - Content Adaptation Architectures Based on Squid Proxy Server [Articolo su rivista]
Canali, Claudia; Cardellini, V.; Lancellotti, Riccardo
abstract

The overwhelming popularity of the Internet and the technology advancements have determined the diffusion of many different Web-enabled devices. In such a heterogeneous client environment, efficient content adaptation and delivery services are becoming a major requirement for the new Internet service infrastructure. In this paper we describe intermediary-based architectures that provide adaptation and delivery of Web content to different user terminals. We present the design of a Squid-based prototype that carries out the adaptation of Web images and combines such a functionality with the caching of multiple versions of the same resource. We also investigate how to provide some form of cooperation among the nodes of the intermediary infrastructure, with the goal of evaluating to what extent cooperation in discovering, adapting, and delivering Web resources can improve the user-perceived performance.


2006 - Distributed architectures for high performance and privacy-aware content generation and delivery [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The increasing heterogeneity of mobile client devices used to access the Web requires run-time adaptations of the Web contents. A significant trend in these content adaptation services is the growing amount of personalization required by users. Personalized services are and will be a key feature for the success of the next generation Web, but they open two critical issues: performance and profile management. Issues related to the performance of adaptation services are typically addressed by highly distributed architectures with a large number of nodes located closer to the users. On the other hand, the management of the user profile must take into account the nature of these data, which may contain sensitive information, such as geographic position, navigation history, and personal preferences, that should be kept private. In this paper, we propose a distributed architecture for ubiquitous Web access that provides high performance, while addressing the privacy issues related to the management of sensitive user information. The proposed distributed-core architecture splits the adaptation services over multiple nodes distributed over a two-level topology, thus exploiting parallel adaptations to improve the user-perceived performance.


2006 - Distribution of adaptation services for Ubiquitous Web access driven by user profile [Relazione in Atti di Convegno]
Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
abstract

The popularity of ubiquitous Web access requires run-time adaptations of the Web contents. A significant trend in these content adaptation services is the growing amount of personalization required by users. Personalized services are and will be a key feature for the success of the ubiquitous Web, but they open two critical issues: performance and profile management. Issues related to the performance of adaptation services are typically addressed by highly distributed architectures with a large number of nodes located closer to the users. On the other hand, the management of the user profile must take into account the nature of these data, which may contain sensitive information, such as geographic position, navigation history, and personal preferences, that should be kept private. In this paper, we investigate the impact that a correct profile management has on distributed infrastructures that provide content adaptation services for ubiquitous Web access. In particular, we propose and compare two scalable solutions for adaptation services deployed on the nodes of a two-level topology. We study, through real prototypes, the performance and the constraints that characterize the proposed architectures.


2005 - A Two-level Distributed Architecture for Web Content Adaptation and Delivery [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
abstract

The complexity of services provided through the Web is continuously increasing, as well as the variety of new devices that are gaining access to the Internet. Tailoring Web and multimedia resources to meet the user and client requirements opens two main novel issues in the research area of content delivery. The working set tends to increase substantially because multiple versions may be generated from the same original resource. Moreover, the content adaptation operations may be computationally expensive. In this paper, we consider third-party infrastructures composed of a geographically distributed system of intermediary and cooperative nodes that provide fast content adaptation and delivery of Web resources. We propose a novel distributed architecture of intermediary nodes which are organized in two levels. The front-end nodes in the first tier are thin edge servers that locate the resources and forward the client requests to the nodes in the second tier. These interior nodes are fat servers that run the most expensive functions, such as content adaptation, resource caching, and fetching. Through real prototypes we compare the performance of the proposed two-level architecture to that of alternative one-level infrastructures where all nodes are fat peers providing the entire set of functions.
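A minimal sketch of the thin-edge/fat-interior split described in this abstract. The abstract does not specify how edge servers locate resources, so the hash-based mapping below, as well as the node names, are hypothetical; they only illustrate how a thin first tier can deterministically route each resource to one fat node so that the cached adapted versions stay co-located.

```python
import hashlib

# Hypothetical second-tier (interior) nodes: adaptation, caching, fetching.
FAT_NODES = ['fat-0', 'fat-1', 'fat-2']

def edge_dispatch(url):
    """Thin front-end node: locate the resource and forward the request.

    A hash of the resource URL keeps every request for (any version of)
    one resource on the same fat node, so that node's cache of adapted
    variants remains effective.
    """
    digest = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return FAT_NODES[digest % len(FAT_NODES)]

# Requests for the same resource always reach the same interior node.
assert edge_dispatch('/img/logo.png') == edge_dispatch('/img/logo.png')
```

In a one-level architecture, by contrast, every peer would need the full adaptation and caching stack, which is the alternative the paper's prototypes compare against.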


2005 - Architectures for scalable and flexible Web personalization services [Relazione in Atti di Convegno]
Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
abstract

The complexity of services provided through the Web is continuously increasing, and issues introduced by both heterogeneous client devices and Web content personalization are becoming a major challenge for the Web. Tailoring Web and multimedia resources to meet the user and client requirements opens two main novel issues in the research area of content delivery. The content adaptation operations may be computationally expensive, requiring high efficiency and scalability in the Web architectures. Moreover, personalization services introduce security and consistency issues for the management of user profile information. In this paper, we propose a novel distributed architecture, with four variants, for the efficient delivery of personalized services, where the nodes are organized in two levels. We discuss how the architectural choices are affected by security and consistency constraints, as well as by the access to privileged information of the content provider. Moreover, we discuss the performance trade-offs of the various choices.


2005 - Distributed Systems to Support Efficient Adaptation for Ubiquitous Web [Relazione in Atti di Convegno]
Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
abstract

The ubiquitous Web will require many adaptation and personalization services, which will be consumed by an impressive number of different devices and classes of users. These novel advanced services will stress the content provider platforms in an unprecedented way with respect to the content delivery seen in the last decade. Most services, such as multimedia content manipulation (images, audio, and video clips), are computationally expensive, and no single server will be able to provide all of them; hence, scalable distributed architectures will be the common basis for the delivery platform. Moreover, these platforms must also address novel content management issues related to the replication, consistency, and privacy requirements of user/client information. In this paper we propose two scalable distributed architectures that are based on a two-level topology. We investigate the pros and cons of such architectures from the security, consistency, and performance points of view.


2005 - Performance comparison of distributed architectures for content adaptation and delivery of Web resources [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo
abstract

The increasing popularity of heterogeneous Web-enabled devices and wired/wireless connections motivates the diffusion of content adaptation services that enrich the traditional Web. Different solutions have been proposed for the deployment of efficient adaptation and delivery services. In this paper we focus on intermediary infrastructures that consist of multiple server nodes. We investigate when it is really convenient to place this distributed infrastructure closer to the clients or to the origin servers, and which real gain can be obtained through node cooperation. We evaluate the system performance through three prototypes that are placed in a WAN-emulated environment and are subject to two types of workload.


2005 - Utility computing for Internet applications [Capitolo/Saggio]
Canali, Claudia; Rabinovich, M; Xiao, Z.
abstract

With the growing demand for computing resources and network capacity, providing scalable and reliable computing services on the Internet becomes a challenging problem. Recently, much attention has been paid to the "utility computing" concept, which aims to provide computing as a utility service similar to water and electricity. While the concept is very challenging in general, in this chapter we focus our attention on a restricted environment: Web applications. Given the ubiquitous use of Web applications on the Internet, this environment is rich and important enough to warrant careful research. This chapter describes the approaches and challenges related to the architecture and algorithm design in building such a computing platform.


2004 - Evaluating User-perceived Benefits of Content Distribution Networks [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V.; Colajanni, Michele; Lancellotti, Riccardo
abstract

Content Distribution Networks (CDNs) are a class of successful content delivery architectures used by the most popular Web sites to enhance their performance. The basic idea is to address Internet bottleneck issues by replicating and caching the content of the customer Web sites and serving it from the edge of the network. In this paper we evaluate to what extent the use of a CDN can improve the user-perceived response time. We consider a large set of scenarios with different network conditions and client connections, which have not been examined in previous studies. We found that CDNs can offer a significant performance gain under normal network conditions, but the advantage of using CDNs can be reduced by heavy network traffic. Moreover, if CDN usage is not carefully designed, the achieved speedup can be suboptimal.


2003 - Cooperative Architectures and Algorithms for Discovery and Transcoding of Multi-version Contents [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
abstract

A clear trend of the Web is that a variety of new consumer devices with diverse computing powers, display capabilities, and wired/wireless network connections is gaining access to the Internet. Tailoring Web content to match the device characteristics requires functionalities for content transformation, namely transcoding, that are typically carried out by the content Web server or by an edge proxy server. In this paper, we explore how to improve the user response time by considering systems of cooperative edge servers which collaborate in discovering, transcoding, and delivering multiple versions of Web objects. The transcoding functionality opens an entirely new space of investigation in the research area of distributed cache cooperation, because it transforms the proxy servers from content repositories along the client-server path into pro-active network elements providing computation and adaptive delivery. We propose and investigate different algorithms for cooperative discovery, delivery, and transcoding in the context of edge servers organized in hierarchical and flat peer-to-peer topologies. We compare the performance of the proposed schemes through ColTrES (Collaborative Transcoder Edge Services), a flexible prototype testbed that implements all the considered mechanisms.


2003 - Cooperative TransCaching: A System of Distributed Proxy Servers for Web Content Adaptation [Relazione in Atti di Convegno]
Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
abstract

The Web is rapidly evolving towards a highly heterogeneous access environment, due to the variety of new devices with diverse capabilities and network interfaces. Hence, there is an increasing demand for solutions that enable the transformation of Web content for adapting and delivering it to diverse destination devices. We investigate different schemes for cooperative proxy caching and transcoding that can be implemented in the existing Web infrastructure, and compare their performance through prototypes that extend Squid operations to a heterogeneous client environment.