PhD Program in Computer Science and Systems Engineering

Research Projects Proposals 2019 (XXXV Cycle)

Research line: Data Science and Engineering

Title: Efficient algorithms for large-scale structured machine learning
(Funded by an ERC Consolidator Grant)
Proposer: Lorenzo Rosasco
Curriculum: Computer Science
Short description: The projects will aim at developing theoretical and algorithmic ideas to explain the success of current systems and to suggest the development of novel, practical, and efficient solutions. Candidates must have strong mathematical and computational skills.
Topics of interest include, but are not limited to: deterministic and random projections/sketching; optimization methods for non-smooth/non-convex problems (stochastic, accelerated, distributed, and parallel methods); and data with geometric structure (graphs, strings, permutations, manifolds) as well as time structure (dynamical systems). While the emphasis is on methodological and computational aspects, the candidates will have the opportunity to work in close collaboration on a number of applications, including high-energy physics data, robotics, and time series prediction.

Title: Ethics-by-Design Query Processing / Responsible Query Processing

Proposer: Barbara Catania

Research area: Data Science and Engineering
Curriculum: Computer Science

Description: Nowadays, large-scale technologies for the management and the analysis of big data have a relevant and positive impact: they can improve people’s lives, accelerate scientific discovery and innovation, and bring about positive societal change. At the same time, it becomes increasingly important to understand the nature of these impacts at the social level and to take responsibility for them, especially when they deal with human-related data.

Properties like diversity, serendipity, fairness, or coverage have been recently studied at the level of some specific data processing systems, like recommendation systems, as additional dimensions that complement basic accuracy measures with the goal of improving user satisfaction [2].

Due to the above-mentioned social relevance, and because the need to take ethical responsibility is also made mandatory by the recent General Data Protection Regulation of the European Union, the development of solutions satisfying non-discrimination requirements by design is currently one of the main challenges in data processing, and it is becoming increasingly crucial at every data processing stage, including data management [1, 3, 4].

Based on our past experience in advanced query processing for both stored and streaming data, the aim of the proposed research is to design, implement, and evaluate ad hoc query processing techniques for stored and streaming data that automatically enforce specific beyond-accuracy properties, with special reference to diversity. The focus will be on compositional techniques: property satisfaction will be preserved in any more complex query workflow, possibly obtained by iteratively combining several query processing steps.
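As a minimal sketch of what enforcing diversity as a beyond-accuracy property can look like in a single query processing step, consider a greedy, MMR-style selection that trades off relevance against similarity to already-returned answers (all names, scores, and the distance function below are illustrative, not part of the proposal):

```python
# Greedy diversified top-k selection (Maximal Marginal Relevance flavor).
# lam balances relevance (lam=1) against diversity (lam=0).
def diversify(items, relevance, distance, k, lam=0.5):
    selected = []
    candidates = list(items)
    while candidates and len(selected) < k:
        def score(x):
            # Distance to the closest already-selected item (1.0 if none yet).
            div = min((distance(x, s) for s in selected), default=1.0)
            return lam * relevance[x] + (1 - lam) * div
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = {"a": 0.9, "b": 0.85, "c": 0.3}
dist = lambda x, y: 0.1 if {x, y} == {"a", "b"} else 1.0  # a, b near-duplicates
print(diversify(["a", "b", "c"], rel, dist, k=2))  # → ['a', 'c']
```

Even though "b" is the second most relevant item, the diversified result skips it in favor of the dissimilar "c".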

Link to the group or personal webpage:


[1] S. Abiteboul et al. Research Directions for Principles of Data Management (Dagstuhl Perspectives Workshop 16151). Dagstuhl Manifestos 7(1): 1-29 (2018)

[2] M. Kaminskas, D. Bridge. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. ACM TiiS 7(1): 2:1-2:42 (2017)

[3] J. Stoyanovich, B. Howe, H.V. Jagadish. Special Session: A Technical Research Agenda in Data Ethics and Responsible Data Management. SIGMOD Conf. 2018: 1635-1636 (2018)

[4] J. Stoyanovich, K. Yang, H.V. Jagadish. Online Set Selection with Fairness and Diversity Constraints. EDBT 2018: 241-252 (2018)

Title: Assessing similarity: the role of embeddings in schema/ontology matching and in query relaxation

Proposer(s): Giovanna Guerrini

Research area(s): Data Science and Engineering

Curriculum: Computer Science

A large number of applications need to be able to assess similarity between concepts that are represented by words, possibly bound by hierarchical structures. Word embeddings are used for many natural language processing (NLP) tasks thanks to their ability to capture the semantic relations between words by embedding them in a metric space with lower dimensionality [1]. Word embeddings have been mostly used to solve traditional NLP problems, such as question answering, textual entailment, and sentiment analysis.

Recently, embeddings have emerged as a new way of encapsulating hierarchical information [2]. Specifically, hyperbolic embeddings lie in hyperbolic spaces, which are well suited to representing hierarchical structure while maintaining distances among elements. The idea behind hyperbolic embeddings is very simple: elements with semantic correlations are forced to be close to each other in the embedding space.
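As an illustration of the underlying geometry (a sketch of the Poincaré-ball distance used in [2], not the proposers' code), points lie inside the unit disk and are compared with the hyperbolic distance d(u, v) = arcosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2))):

```python
import math

# Poincaré-ball distance between two points inside the unit disk.
def poincare_distance(u, v):
    su = sum(x * x for x in u)                         # ||u||^2
    sv = sum(x * x for x in v)                         # ||v||^2
    duv = sum((a - b) ** 2 for a, b in zip(u, v))      # ||u - v||^2
    return math.acosh(1 + 2 * duv / ((1 - su) * (1 - sv)))

# Near the boundary, small Euclidean gaps become large hyperbolic distances,
# which is what gives hyperbolic space room to embed deep hierarchies:
# a "root" near the origin stays close to both "leaves", while the two
# leaves end up far from each other.
root, leaf1, leaf2 = (0.0, 0.0), (0.0, 0.9), (0.9, 0.0)
print(poincare_distance(root, leaf1) < poincare_distance(leaf1, leaf2))  # → True
```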

The aim of this research theme is to exploit this way of assessing similarity to establish mappings between schemas and align ontologies [3] more efficiently and accurately; these are crucial tasks for information integration. The ability to efficiently identify similar/corresponding terms can also be exploited in relaxed query processing [4].

Finally, the possibility of exploiting embeddings to assess the many-faceted similarity of concepts that are described by a word but are also positioned in space, as in the case of geo-terms, can be investigated [5].

Link to the group or personal webpage:


[1] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. ICLR Workshop, 2013.

[2] M. Nickel and D. Kiela. Poincaré embeddings for learning hierarchical representations. NIPS 2017.

[3] P. Shvaiko, J. Euzenat. Ontology Matching: State of the Art and Future Challenges. IEEE Transactions on Knowledge and Data Engineering, 25(1): 158-176, 2013.

[4] Barbara Catania, Giovanna Guerrini. Adaptively Approximate Techniques in Distributed Architectures. SOFSEM 2015: 65-77

[5] K. Beard. A semantic web based gazetteer model for VGI. ACM SIGSPATIAL Workshop on Crowdsourced and Volunteered Geographic Information, 2012.

Title: Joint segmentation, detection, tracking in video sequences for efficient and effective scene understanding

Proposer: Francesca Odone
Curriculum: Computer Science

Description: Although intrinsically of a different nature, image segmentation and object detection address two different questions with the common goal of understanding the content of a scene. Image segmentation is often seen as a lower-level task. In recent years we have witnessed the development of semantic segmentation approaches, where semantic labels are also associated with pixels, super-pixels, or image regions. In this way we obtain an overall understanding of the scene, as image regions may contain objects as well as background areas. Semantic segmentation methods today provide good performance at the price of being computationally demanding; for this reason, in video analysis they are usually applied at a lower rate and their output is propagated to the following frames. If the camera is moving, propagation requires motion estimation. Conversely, detection is an efficient, higher-level task that applies to entities with a well-defined spatial extent (objects); object detection is usually less accurate in terms of localisation, as it provides bounding boxes. Owing to its higher-level nature, it is easier to extend to video analysis through state-of-the-art tracking or prediction algorithms.

We will explore different ways of combining these complementary sources of information, trying to achieve a good compromise between efficiency and effectiveness. Our focus will be primarily on the analysis of video sequences acquired by moving cameras, including applications related to robotics, automation, and autonomous guidance.
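One elementary cue for such a combination (purely illustrative, with toy coordinates invented for this sketch) is to measure how well a detector's coarse bounding box agrees with a segmentation mask, e.g., via their intersection over union:

```python
# Agreement between a detection box and a segmentation mask, computed as the
# IoU between the box region and the set of foreground pixels.
def box_mask_agreement(box, mask):
    """box = (x0, y0, x1, y1), inclusive; mask = set of (x, y) pixels."""
    x0, y0, x1, y1 = box
    inside = {p for p in mask if x0 <= p[0] <= x1 and y0 <= p[1] <= y1}
    box_area = (x1 - x0 + 1) * (y1 - y0 + 1)
    union = box_area + len(mask) - len(inside)
    return len(inside) / union

# A tight mask (4x4 blob) inside a looser 6x6 detection box: the mask is fully
# covered, but the box's extra area lowers the agreement score.
mask = {(x, y) for x in range(2, 6) for y in range(2, 6)}
print(round(box_mask_agreement((1, 1, 6, 6), mask), 2))  # → 0.44
```

Scores like this can be used, for instance, to decide when a propagated segmentation has drifted away from the tracked detections and must be recomputed.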

Title: Machine learning for prognostic maintenance
Proposers: Stefano Rovetta, Francesco Masulli
Curriculum: Computer Science

Description: Predictive maintenance is widely acknowledged as the “killer application” of machine learning in Industry 4.0. This research activity will develop machine learning methods for prognostic maintenance, an approach that aims not only at predicting future maintenance needs, but also at describing causes and effects of future evolutions of a system: “foresight,” as opposed to “forecast.”

The activity will be carried out in collaboration with a software company that already markets a more traditional solution for predictive maintenance. Therefore, the work will build on an existing, substantial body of tools and know-how. The candidate is expected to develop competences that are of great technical, industrial, as well as commercial, interest.

Link to the group or personal webpage:


[1] Vogl, G.W., Weiss, B.A. & Helu, M. “A review of diagnostic and prognostic capabilities and best practices for manufacturing.” J Intell Manuf (2019) 30: 79.

Title: Smart request processing for personalized data space-user interactions through approximation and learning

Proposers: Barbara Catania, Giovanna Guerrini
Curriculum: Computer Science

The increase in data size and complexity requires a deep revisiting of user-data interactions and a reconsideration of the notion of query itself. A huge number of applications need user-data interactions that emphasize user context and interactivity, with the goal of facilitating the interpretation, retrieval, and assimilation of information [1]. The ability to learn from observations and interactions [2], as well as to approximately process requests [3,4], are two key ingredients in these new settings.

The aim of this research theme is to devise smart, innovative approaches for exploiting, from a processing viewpoint and with a focus on graph-shaped data, the role of user context (geo-location, interests, needs) and of similar requests repeated over time, in order to inform approximation and refine knowledge of the underlying data, which in turn can be used to fulfill information needs more efficiently and effectively. Preliminary approaches in this direction have been proposed in [5].

Link to the group or personal webpage:


[1] Georgia Koutrika. Databases & People: Time to Move on From Baby Talk. EDBT/ICDT ‘18

[2] Yongjoo Park, Ahmad Shahab Tajik, Michael Cafarella, and Barzan Mozafari. Database Learning: Toward a Database that Becomes Smarter Every Time. ACM SIGMOD ‘17

[3] Peter Haas. Some Challenges in Approximate Query Processing. EDBT/ICDT ‘18

[4] Barbara Catania, Giovanna Guerrini. Adaptively Approximate Techniques in Distributed Architectures. SOFSEM ‘15

[5] Barbara Catania, Francesco De Fino, Giovanna Guerrini. Recurring Retrieval Needs in Diverse and Dynamic Dataspaces: Issues and Reference Framework. EDBT/ICDT Workshops 2017

Title: Making motion analysis computationally efficient

Proposers: Nicoletta Noceti, Giuseppe Ciaccio
Curriculum: Computer Science

Short Description: Motion analysis is one of the main elements of Computer Vision, at the basis of a variety of higher-level tasks such as activity recognition and behavior understanding. The first task of motion analysis is the ability to identify the moving regions in image streams, often referred to as motion-based image (and video) segmentation. Two main scenarios can be identified:

  • When the videos are acquired with a still camera, the task can be formulated as a pixel-based classification (moving or still?)
  • When there is no prior information on the setting of acquisition, the problem of motion-based image segmentation is intertwined with motion estimation.
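For the still-camera scenario above, the pixel-based classification can be sketched in a few lines (a deliberately minimal background model, for illustration only, with illustrative thresholds):

```python
# Classify each pixel as moving or still by thresholding its difference
# against a running-average background model, then update the model slowly
# so that gradual illumination changes are absorbed into the background.
def update_and_classify(background, frame, alpha=0.1, thresh=30):
    moving = []
    for i, (b, f) in enumerate(zip(background, frame)):
        moving.append(abs(f - b) > thresh)               # moving or still?
        background[i] = (1 - alpha) * b + alpha * f      # slow background update
    return moving

bg = [100.0, 100.0, 100.0, 100.0]        # flat grey background
frame = [102.0, 180.0, 99.0, 100.0]      # one pixel changed abruptly
print(update_and_classify(bg, frame))    # → [False, True, False, False]
```

Real background subtraction methods (such as the sparse-coding model cited below) replace the per-pixel average with far richer models, but the per-pixel classify-then-update loop, repeated at frame rate over millions of pixels, is precisely the workload whose execution time and power consumption this project targets.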

In these fields, most of the work in recent decades has focused on the accuracy of results; only a marginal share of the literature deals with performance, intended as the execution time of the algorithms, and efficiency, intended as the power consumption of the computing device running a specific algorithm. These two aspects, however, play a critical role in real-world visual applications, due to increasing frame resolution/throughput requirements and the increasing demand for stand-alone, battery-powered autonomous devices (e.g. robots).

We will therefore investigate the possibility of leveraging modern highly-parallel computer architectures (e.g. many-core processors) and throughput-oriented devices (e.g. GPUs), along with power-constrained platforms (e.g. FPGAs or low-power computers), using state-of-the-art numerical methods and code optimization techniques, in order to cope with the growing computational demands of motion analysis. The goal of this project is to devise methods for motion-based image segmentation, with particular focus on background subtraction and optical flow estimation, that are not only effective but also fast and power-efficient.


  • Sundaram, N., Brox, T., & Keutzer, K. (2010). Dense point trajectories by GPU-accelerated large displacement optical flow. In European Conference on Computer Vision (pp. 438-451). Springer, Berlin, Heidelberg.
  • Staglianò, A., Noceti, N., Verri, A., & Odone, F. (2015). Online space-variant background modeling with sparse coding. IEEE Transactions on Image Processing, 24(8), 2415-2428.

Title: Analysis of Gene Expression via Game Theory and Graphs

Proposer: Marcello Sanguineti
Curriculum: Computer Science 

Short description. Recent research (e.g., Albino et al. (2008)) has emphasized the possibility of applying Game Theory (Peters (2008)) to the analysis of medical results obtained via so-called “microarray techniques”. Such techniques allow one to “take a picture” of the expression of thousands of genes in a cell by means of a single experiment (Schena et al. (1995)). The departure point is the study of the gene expression in a sample of cells that satisfy particular biological conditions: for instance, cells belonging to a cancer-affected subject. Game Theory plays a basic role in defining the “microarray games” and in evaluating the relevance of each gene in influencing or even determining a pathology, by taking into account the interactions with other genes. To this end, the literature (see, e.g., Moretti et al. (2007)) has investigated the use of some “power indices” from Game Theory (such as the Shapley and Banzhaf values) to estimate gene relevance. In particular, an in-depth analysis has been performed for colon cancer and neuroblastic tumors.

The aim of this research project is to study the use of Game Theory in the analysis of gene expression data, contributing to a better understanding of some originating factors of cancer. As a first step, the already-available studies based on the Shapley and Banzhaf values in microarray games will be further developed and tested on case studies made available by the research group of Prof. Alberto Ballestrero at the IRCCS San Martino Hospital in Genova, with whom we have already established a joint research plan. The partner team from San Martino has access to databases made up of some hundreds of microarray and RNA-sequencing experiments, made available by The Cancer Genome Atlas (TCGA) network. As a second step, the research will consider other power indices, such as the “τ-value”. The third phase will be devoted to a comparison among the various indices, from both theoretical and experimental viewpoints, so as to evaluate which are best suited to the study of cancer. Finally, the combination of tools from Game Theory with graph-based approaches, which allow one to model interactions between pairs of genes, will be examined.
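To make the game-theoretic ingredient concrete, here is an illustrative computation of exact Shapley values for a small coalitional game over genes, in the spirit of the microarray games cited above (the game and gene names are toy assumptions, not data from the cited studies):

```python
import math
from itertools import permutations

# Exact Shapley value: average each player's marginal contribution over all
# orderings in which the coalition can be assembled.
def shapley(players, v):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: phi[p] / math.factorial(n) for p in players}

# Toy game: the pathology is "explained" (value 1) only once genes g1 and g2
# are both present in the coalition; g3 never contributes.
v = lambda s: 1.0 if {"g1", "g2"} <= s else 0.0
print(shapley(["g1", "g2", "g3"], v))  # g1 and g2 share the credit; g3 gets 0
```

The exact computation is exponential in the number of genes, which is why the microarray-games literature relies on the special structure of these games (and on approximation) to scale to thousands of genes.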

Link to the group/personal webpage:


-Albino D., Scaruffi P., Moretti S., Coco S., Di Cristofano C., Cavazzana A., et al. . Identification of low intratumoral gene expression heterogeneity in neuroblastic tumors by wide-genome analysis and game theory. Cancer 113(6):1412-1422, 2008.

-Moretti S., Patrone F., Bonassi S. The class of microarray games and the relevance index for genes. TOP 15:256-280, 2007.

-Peters H. Game Theory. A Multi-Leveled Approach, Springer, 2008.

-Schena M., Shalon D., Davis W.R., Brown P.O. Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray. Science 270:467-470, 1995.

Research line: Artificial Intelligence and Multiagent Systems

Title: Hybrid deliberative/reactive robot architectures for task scheduling and joint task/motion planning 

Proposers: Marco Maratea, Fulvio Mastrogiovanni
Curriculum: Computer Science

Short description: A longstanding goal of Artificial Intelligence (AI) is to design robots able to autonomously and reliably move in the environment and manipulate objects to achieve their goals. Autonomy and reliability can still be considered unsolved issues when robots operate in everyday environments, especially in the presence of humans. Traditionally, besides specific activities in perception, knowledge representation, action, and the mechanical structure of robots, an important research trend concerns the architecture robots may adopt to enforce autonomy and reliability, and specifically robustness and resilience to unexpected events, as well as uncertainty in perception and action outcomes.

This project aims at investigating, designing and prototyping robot architectures able to (i) interleave scheduling (i.e., the long-term definition of what a robot should do in the future) and task planning (i.e., which specific actions a robot should perform next), and (ii) integrate task and motion planning (i.e., what robot trajectories correspond to planned actions), in a full perception-representation-reasoning-action loop.

On the one hand, the integration between scheduling and planning has not received sufficient attention in the literature, and only recently the issue has been studied, possibly relying on modeling through logical languages, e.g., PDDL or Answer Set Programming.

On the other hand, while discrete task planning has been considered mainly in the AI community, continuous motion planning has been the focus of much Robotics research. Such a separation leads to suboptimal robot behavior in real-world scenarios, especially in the case of unmodeled events, misperceptions, or uncertainty in sensory data. However, in the past few years, a number of approaches that aim at integrating the discrete and the continuous planning processes have been discussed in the literature. The recent introduction of planning formalisms such as PDDL+ is a decisive step in this direction, and their effective use in Robotics architectures has not been fully explored yet.
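A toy sketch of the interleaving in point (ii) above, with an invented reachability model and action names (none of this is taken from the project itself): a discrete task planner proposes action orderings, each action is validated by a continuous motion-feasibility check, and orderings whose motions fail are discarded.

```python
from itertools import permutations

# Hypothetical toy domain: place each block at its goal (x, y) position.
goals = {"a": (0.2, 0.0), "b": (0.5, 0.3)}

def motion_feasible(target):
    """Continuous layer: the arm reaches only points within radius 0.8."""
    x, y = target
    return (x * x + y * y) ** 0.5 <= 0.8

def joint_plan(blocks):
    for order in permutations(blocks):                    # discrete task layer
        actions = [("place", b, goals[b]) for b in order]
        if all(motion_feasible(t) for _, _, t in actions):
            return actions      # first ordering whose motions all succeed
    return None                 # no ordering admits a feasible joint plan

print(joint_plan(["a", "b"]))
```

Real integrated planners replace the brute-force enumeration with heuristic search over the symbolic layer and a full motion planner at the continuous layer, but the structure, in which discrete choices are pruned by continuous feasibility, is the same.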

The Ph.D. student will be involved in ongoing research activities in the application of advanced AI techniques to Robotics. In particular, the following topics will be considered:

  • Definition of expressive and computationally efficient knowledge representation approaches for robots.
  • Definition of innovative and efficient scheduling algorithms suitable for a robotic setting.
  • Representation of robot perceptions using logic formalisms able to ground further reasoning processes, i.e., knowledge revision, update, fusion.
  • Design and implementation of joint task/action planning strategies for robots.

Link to the group/personal page:

Title: Inductive and Deductive Reasoning in Transportation

Proposers: Davide Anguita, Marco Maratea

Macro-areas: Artificial Intelligence, Data Analysis

Curriculum: Computer Science

Short description: Inference is the act or process of deriving logical conclusions from premises known or assumed to be true. Deduction is a risk-free form of inference: once a rule is assumed to be true and the case is available, there is no source of uncertainty in deriving the result. Inductive reasoning, instead, implies a certain level of uncertainty, since we infer something that is not necessarily true but only probable. Induction is the inference procedure that increases our level of knowledge, because it allows us to infer conclusions that cannot be logically deduced from the premises alone. New generations of information systems collect and store large amounts of heterogeneous data, from which Data-Driven Models able to forecast the evolution of a system can be induced. Deep, multi-task, transfer, and semi-supervised learning algorithms, together with rigorous statistical inference procedures, allow us to transform large, distributed, and hard-to-interpret collections of information into meaningful, easy-to-interpret, and actionable knowledge. Data-Driven Models scale well with the amount of available data, but they are not as effective when exploited for deduction. Model-Based Reasoning, on the contrary, can model complex systems effectively on the basis of physical knowledge about them, and deduce meaningful information by solving complex (optimization) problems. The general idea is to encode an application problem as a logical specification; this specification is then evaluated by a general-purpose solver, whose answers are mapped back to solutions of the original application problem. The limitation of Model-Based Reasoning is that it may not scale well with problem size.
The scope of this PhD proposal is to make inductive and deductive reasoning work together to solve real-world problems in the transportation domain (e.g., railways, buses, and airways). Transport of goods and people is a multifaceted problem, since it involves technical constraints stemming from limited physical assets, safety constraints, and social and cultural implications. In Europe, the increasing volume of people and freight transported is congesting the transportation systems. The challenge of this research theme is twofold. On the one hand, there is a need to exploit and further refine state-of-the-art tools and basic research themes in the inductive and deductive fields, so as to make induction and deduction work together and overcome their respective limitations. On the other hand, there are plenty of real-world problems in public transportation (e.g., multimodal transportation systems, train dispatching, combinatorial problems, and forecasting problems) that need a combination of different technological tools and techniques to obtain satisfying results.

Link to the group/personal page:

Title: Neural Software Engineering

Proposers: Massimo Narizzano,  Armando Tacchella
Curriculum: Computer Science

Description: Neural networks are one of the most investigated and widely used techniques in Machine Learning. In spite of their success, they still find limited application in safety- and security-related contexts, wherein assurance about a network's performance must be provided. In the recent past, automated reasoning techniques have been proposed by several researchers to close the gap between neural networks and applications requiring formal guarantees about their behavior. This research program starts with a comprehensive categorization of existing approaches for the automated verification of neural networks. The purpose of this phase is to understand current limitations and directions for investigation in this topic at the crossroads of Machine Learning and Automated Reasoning. The second phase seeks to evaluate current approaches from a quantitative point of view: what can be verified, what the time/space requirements are, and what fundamental limitations must be overcome. Finally, the program seeks to design and implement a tool capable of analysing neural networks in a fully automated fashion, in order to uncover adversarial examples and provide hints for fixing network weights to eliminate them.
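One family of techniques in this area can be sketched as follows (an illustrative toy with made-up weights, not an implementation of the cited tools): interval bound propagation certifies a range for the output of a small ReLU network over an entire box of inputs, so that a property such as "the output never exceeds 2" can be proved rather than merely tested.

```python
# Propagate an input box [lo, hi] through an affine layer y = W x + b,
# choosing for each weight the interval endpoint that minimizes/maximizes y.
def affine_bounds(lo, hi, W, b):
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            lo_acc += w * (l if w >= 0 else h)
            hi_acc += w * (h if w >= 0 else l)
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return [max(0.0, x) for x in lo], [max(0.0, x) for x in hi]

# Tiny network: 2 inputs -> 2 hidden ReLU units -> 1 output (toy weights).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = affine_bounds([0.0, 0.0], [1.0, 1.0], W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # certified output range for ALL inputs in [0,1]^2 → [0.0] [2.0]
```

The bounds are sound but not tight; abstraction-refinement approaches such as [Pulina and Tacchella, CAV 2010] address exactly this gap between soundness and precision.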

Link to the group or personal webpage:


Luca Pulina, Armando Tacchella: An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. CAV 2010: 243-257
Luca Pulina, Armando Tacchella: NeVer: a tool for artificial neural networks verification. Ann. Math. Artif. Intell. 62(3-4): 403-425 (2011)
Luca Pulina, Armando Tacchella: Challenging SMT solvers to verify neural networks. AI Commun. 25(2): 117-135 (2012)

Title: Configuration and Optimization of Complex Systems

Proposer: Armando Tacchella
Curriculum: Computer Science

Description: Computer-automated design (CautoD) differs from “classical” computer-aided design (CAD) in that it aims to replace some of the designer's capabilities, and not just to support a traditional workflow with computer graphics and storage capabilities. While CautoD programs may integrate CAD functionalities, their purpose goes far beyond the replacement of traditional drawing instruments and most often involves the use of advanced techniques from artificial intelligence. As mentioned in [BOP+16], the first scientific report of CautoD techniques is the paper by Kamentsky and Liu [KL63], who created a computer program for designing character-recognition logic circuits satisfying given hardware constraints. In mechanical design — see, e.g., [RS12] — the term usually refers to tools and techniques that mitigate the effort of exploring alternative solutions for structural implements, and this is the flavor of CautoD that will be considered hereafter. The goal of this research program is to evaluate general-purpose AI methods to solve and optimize configurations of complex systems. Starting from the work presented in [AMT17] on elevators, the idea is to consider configuration software (e.g., CLIPS) and other constraint-based techniques (e.g., Constraint Programming) in order to configure systems (or parts thereof) automatically starting from specifications.

Link to the group or personal webpage:


[AMT17] Leopoldo Annunziata, Marco Menapace, Armando Tacchella: Computer Intensive vs. Heuristic Methods in Automated Design of Elevator Systems. In 31st European Conference on Modelling and Simulation, ECMS 2017.

[BOP+16]Robin T. Bye, Ottar L. Osen, Birger Skogeng Pedersen, Ibrahim A. Hameed, and Hans Georg Schaathun. A software framework for intelligent computer-automated product design. In 30th European Conference on Modelling and Simulation, ECMS 2016, Regensburg, Germany, May 31 – June 3, 2016, Proceedings., pages 534–543, 2016.

[KL63] Louis A. Kamentsky and Chao-Ning Liu. Computerautomated design of multifont print recognition logic. IBM Journal of Research and Development, 7(1):2–13, 1963.

[RS12] R. Venkata Rao and Vimal J. Savsani. Mechanical design optimization using advanced optimization techniques. Springer Science & Business Media, 2012.

Research line: Secure and Reliable Systems

Title: Identity Management for Digital Financial Infrastructures (Funded by FBK-Trento)
Proposer: Silvio Ranise (FBK Trento)
Curriculum: Secure and Reliable Systems

The Research Environment: The selected candidate will be working at FBK in Trento, Italy, within the Security and Trust (ST) research unit.

Short description: Digital Identity Management (IdM) is a key enabler for the development of innovative services in digital finance. On the one hand, the adoption of strong authentication mechanisms is key to providing a high level of assurance on the identity of a customer before permitting a transaction. On the other hand, with the growth of online banking and fintech applications, consumers need to share their personal attributes and financial details with banks or third-party services, especially through APIs. Two notable examples of this trend are the announced collaboration between the Financial Data Exchange and the OpenID Foundation to advance a common technical standard for the secure exchange of consumer financial information [1], and the various implementations supporting the revised payment services directive (PSD2 [2]).
The sharing of financial data with third party services can result in a broader range of additional services for the consumer such as personal budget management and financial counselling based on derived attributes (e.g., using machine learning techniques). It is crucial that the design and implementation of these new services and applications consider security, usability, and privacy by adopting current best practices and using tools that assist the correct implementation.
The research work to be conducted in the thesis aims to make significant contributions to developing methodologies, techniques, and tools for the security and privacy of usable IdM solutions in digital finance. The activities will include:
  • Design of faster and more frictionless on-boarding procedures (e.g., using the Italian electronic identity card CIE 3.0 [3]). These procedures include personal data acquisition and the application of a set of electronic signatures on the customer-bank contracts.
  • Analysis of state-of-the-art strong authentication solutions in fintech for evaluating their security and usability, e.g. by extending the methodology proposed in [4] regarding formal analysis.
  • Investigation of the use of the blockchain technology for different aspects, such as smart contracts, payments and fraud reduction.
  • Identification of legal obligations and requirements, focusing on eIDAS [5],  GDPR [6], PSD2 [2].



[2] PSD2

[3] CIE 3.0

[4] Giada Sciarretta, Roberto Carbone, Silvio Ranise, Luca Viganò: Design, Formal Specification and Analysis of Multi-Factor Authentication Solutions with a Single Sign-On Experience. POST 2018: 188-213. DOI: 10.1007/978-3-319-89722-6_8

[5] eIDAS

[6] GDPR

Title: Formal Methods for Requirements Validation of Resilient Systems  (Funded by FBK-Trento)

Proposer(s): Marco Bozzano, Alessandro Cimatti, Stefano Tonetta
Curriculum: Secure and Reliable Systems

Description: In the last decade, an increasing number of applications need software systems that are open and interconnected and, at the same time, require a high level of assurance on critical mission, safety, or security requirements. New formal methods must be developed to ensure the resilience of systems to internal and external factors. The study will focus on the formalization and validation of requirements for resilient systems. The objective is to investigate new formal languages able to capture the resilience of systems to internal faults or external attacks, and new methods to analyze these properties by solving problems such as satisfiability, realizability, and diagnosability. The study will build on existing model checking techniques for transition systems with first-order constraints and temporal properties, and on the results of past and current projects such as EURAILCHECK and CITADEL.

Link to the group or personal webpage:


[1] A. Cimatti, M. Roveri, A. Susi, S. Tonetta: Validation of requirements for hybrid systems: A formal approach. ACM Trans. Softw. Eng. Methodol. 21(4): 22:1-22:34 (2012)

[2] M. Bozzano, A. Cimatti, M. Gario, S. Tonetta: Formal Design of Asynchronous Fault Detection and Identification Components using Temporal Epistemic Logic. Logical Methods in Computer Science 11(4) (2015)

[3]  A. Cimatti, M. Roveri, S. Tonetta: HRELTL: A temporal logic for hybrid systems. Inf. Comput. 245: 54-71 (2015)

[4] A. Cimatti, A. Griggio, S. Mover, S. Tonetta: Infinite-state invariant checking with IC3 and predicate abstraction. Formal Methods in System Design 49(3): 190-218 (2016)

Title: Security Testing of Blockchain Smart Contracts (Funded by FBK, Trento)
Proposer: Mariano Ceccato
Curriculum: Secure and Reliable Systems

Short description: In traditional software, testing can be performed at any time, possibly even after deployment, to spot programming errors and let developers fix them; software updates can then deliver the fixes after the software is published. Recently, cryptocurrencies (e.g., Ethereum) have introduced smart contracts: programs stored in the blockchain whose execution is guaranteed by the distributed miner network. Smart contracts represent a disruptive model because, once their transactions are written to the blockchain, they are immutable, even when caused by a programming defect. Accurate testing of smart contracts is therefore crucial to detect programming errors before the contracts are delivered to the blockchain. Security testing (also called penetration testing) is a branch of software testing devoted to stressing programs with respect to their security features, with the aim of identifying vulnerabilities. The aspects of security testing of smart contracts to be investigated during the PhD include: generating execution scenarios intended to exercise vulnerabilities, and evaluating whether such scenarios expose an actual vulnerability, i.e., whether they violate a security oracle/invariant. To reduce the effort and cost of security testing, the focus will be on achieving a high level of automation.

The Research Environment: The student will be situated at FBK in Trento, Italy, within the Security & Trust (ST) research unit. FBK-ST carries out research in cutting-edge security solutions for web-based authentication/authorization protocols, mobile operating systems and applications, cloud-based and service-oriented applications and infrastructures. The research group consists of approximately 20 researchers, spread among senior members, postdocs and PhD students.

Link to the research group page:

Title: Testing Machine Learning-based Internet of Things Systems

Proposers: Filippo Ricca, Maurizio Leotta
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: The Internet of Things (IoT) is a network of interconnected devices that may also communicate with remote cloud control servers. Thanks to IoT systems, trains are able to dynamically compute and report arrival times, cars are able to avoid traffic jams by proposing alternative paths, and m-health systems are able to determine the right medication dose for a patient. Intelligent IoT systems often rely on Machine Learning (ML) solutions to take decisions. As the IoT technology continues to mature, we will see more and more ML-based IoT applications and systems emerge in different contexts.

Ensuring that such applications are secure, reliable, and compliant is of paramount importance since IoT systems are often safety-critical. At the same time, testing these kinds of systems can be difficult due to: (1) the wide set of disparate technologies used to implement them, (2) the added complexity that comes with Big Data, (3) the fact that testing ML solutions is an open research problem. However, IoT and ML testing has been mostly overlooked so far, both by research and industry. This is apparent from the related scientific literature, where proposals and approaches in this context are rare.

The aim of this research theme is: 1) to investigate novel approaches and techniques for testing ML-based IoT systems; 2) to build tools supporting the devised approaches; and 3) to validate them experimentally.
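One candidate technique for point 1) is metamorphic testing, which sidesteps the missing-oracle problem of ML components by checking relations between outputs rather than individual outputs. A minimal Python sketch (the classifier and the relation are illustrative, not part of the proposal): a tiny nearest-centroid classifier whose predictions must be invariant to the order of its training data.

```python
def nearest_centroid_predict(train, x):
    """Tiny 1-D nearest-centroid classifier; train is a list of (value, label)."""
    groups = {}
    for v, lbl in train:
        groups.setdefault(lbl, []).append(v)
    means = {lbl: sum(vs) / len(vs) for lbl, vs in groups.items()}
    return min(means, key=lambda lbl: abs(means[lbl] - x))

def metamorphic_permutation_test(train, inputs):
    """Metamorphic relation: predictions must not change when the training
    data is reordered. No labelled test oracle is needed, only the relation."""
    shuffled = list(reversed(train))
    return all(nearest_centroid_predict(train, x) ==
               nearest_centroid_predict(shuffled, x) for x in inputs)
```

The same pattern (define a relation, perturb the input, compare outputs) can in principle be lifted to sensor streams of an ML-based IoT system, e.g. predictions should be stable under small sensor-noise perturbations.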

Industrial Partner: The candidate will have the opportunity to collaborate in the Actelion @ Dibris Joint Lab. Actelion is a leading biopharmaceutical company with more than 2,600 employees in 30 countries.

Link to the personal webpages:


Maurizio Leotta, Filippo Ricca, Diego Clerissi, Davide Ancona, Giorgio Delzanno, Marina Ribaudo, Luca Franceschini. Towards an Acceptance Testing Approach for Internet of Things Systems. Proceedings of 1st International Workshop on Engineering the Web of Things (EnWoT 2017), pp.125-138, Volume 10544, LNCS. Springer, 2018.

Testing of machine learning systems – The new must have skill in 2018. Capgemini (company with 200,000 team members and global revenues of EUR 12.8 billion)

Title: Software-Engineering the Internet of Things

Proposer(s): Gianna Reggio
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: Internet of Things (IoT) [3] based systems are very recent and pose new, difficult problems to developers, for which no software engineering support is available yet, as stated e.g. in [1] and [2]: “Confronted by the wildly diverse and unfamiliar systems of the IoT, many developers are finding themselves unprepared for the challenge. No consolidated set of software engineering best practices for the IoT has emerged. Too often, the landscape resembles the Wild West, with unprepared programmers putting together IoT systems in ad hoc fashion and throwing them out into the market, often poorly tested.” [2]

The thesis aims initially at assessing the state-of-the-art of IoT based systems development, surveying companies and startups, and the scarce existing literature, to identify:
– the currently used development processes, methods, and software engineering techniques, e.g. testing (if any);
– the most widely used software tools, frameworks, standards and protocols;
– the perceived problems, and unsatisfied needs.

Then, the task of capturing and specifying the requirements for an IoT-based system will be considered, with a particular emphasis on understanding which are the relevant non-functional requirements. The preliminary proposal of [4], a method based on the UML and following the service-oriented paradigm for capturing and specifying the requirements of an IoT-based system, will be extended to cover the non-functional requirements, and validated by industrial case studies.

Finally, the work will tackle the task of designing and implementing an IoT system starting from the requirement specifications of the previous step, proposing specific methods. The new methods can also help to understand which protocols and technologies are most appropriate to choose.

[1] D. Spinellis. 2017. Software-Engineering the Internet of Things. IEEE Software 34, 1 (2017), 4-6.
[2] X. Larrucea, A. Combelles, J. Favaro, and K. Taneja. 2017. Software Engineering for the Internet of Things. IEEE Software 34, 1 (2017), 24–28.
[3] IEEE Internet Initiative. 2015. Towards a definition of the Internet of Things (IoT). (2015). Available at
[4] Gianna Reggio. 2018. A UML-based Proposal for IoT System Requirements Specification. In MiSE ’18: IEEE/ACM 10th International Workshop on Modelling in Software Engineering, May 27, 2018, Gothenburg, Sweden. ACM, New York, NY, USA, Article 4, 8 pages.

Link to the group/personal webpage:

Title: A Holistic Method for Business Process Analytics

Proposers: Gianna Reggio, Filippo Ricca
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: In the last decade, the availability of massive storage systems, large amounts of data (big data) and the advances in several disciplines related to data science have provided powerful tools for potentially improving the business activities of organizations. Unfortunately, it is rather difficult to graft modern big data practices onto existing infrastructures and into company cultures that are ill-prepared to embrace big data. For example, [1] reports the following staggering figures about the success rate of big-data projects: “A year ago, Gartner estimated that 60 percent of big data projects fail. As bad as that sounds, the reality is actually worse. According to Gartner analyst Nick Heudecker this week, Gartner was ‘too conservative’ with its 60 percent estimate. The real failure rate? ‘Closer to 85 percent.’”

In other words, abandon hope all ye who enter here, especially because “the problem isn’t technology,” Heudecker said. “It’s you.”

Initially, we plan to investigate the reasons leading to the failure of big-data projects by surveying the scientific and grey literature, and also whether and how the few existing approaches to support big-data/analytics projects (e.g. CRISP-DM [6] and DataOps [7]) can overcome them.

Then, we will consider the restricted field of “Business Process Analytics” (BPA), which refers to collecting and analysing business process-related data to answer process-centric questions (see, e.g., [3] and [2]).
Based on the initial investigations, the aim of the thesis is to develop a holistic method combining business process modelling and data-driven business process improvement to successfully leverage big data. The method will help:
– connect the business processes and the stakeholders’ goals with the available data;
– elicit the right questions for improving the business activities, and subsequently select the right analytic techniques for answering them;
– optimize data collection and storage with respect to the intended analyses.

Some initial ideas can be found in [4].

[1] M. Asay. 85% of big data projects fail, but your developers can help yours succeed. TechRepublic, CBS Interactive.
November 10, 2017.
[2] S.Beheshti,B.Benatallah,S.Sakr,D.Grigori,H.Motahari-Nezhad,M.Barukh,A.Gater, and S. Ryu. Process Analytics: Concepts and Techniques for Querying and Analyzing Process Data. Springer, 2016.
[3] M. zur Mühlen and R. Shapiro. Business Process Analytics, pages 137–157. Springer, 2010.
[4] Reggio G., Leotta M., Ricca F., Astesiano E. Towards a Holistic Method for Business Process Analytics. In: Zhang L., Ren L., Kordon F. (eds) Challenges and Opportunity with Big Data. Monterey Workshop 2016. Lecture Notes in Computer Science, vol 10228. Springer. 2017.
[6] Cross-industry standard process for data mining (CRISP-DM). Last seen March 2018.
[7] The DataOps Manifesto. Last seen March 2018.

Link to the group or personal webpage:

Title: Design and Validation of Cloud and IoT Systems

Proposer: Giorgio Delzanno
Curriculum: Computer Science, Secure and Reliable Systems

The Internet of Things (IoT) is the network of physical objects embedded with electronics, software, sensors, and connectivity, enabling objects to exchange data with the manufacturer, operator and/or other connected devices. Physical items are no longer disconnected from the virtual world, but can be controlled remotely and can act as physical access points to Internet services. “Smart” objects play a key role in the Internet of Things vision, since embedded communication and information technology have the potential to revolutionize the utility of these objects. Using sensors, they are able to perceive their context, and via built-in networking capabilities they are able to communicate with each other, access Cloud microservices, and interact with people. The IoT world provides several interesting research challenges, ranging from the integration of different platforms (for instance, mobile and Cloud environments) and the analysis and validation of the huge amount of content generated by these applications via big data analysis and processing techniques, to collaborative protocol design, latency reduction/hiding techniques for guaranteeing real-time constraints, large-scale processing of user information, privacy and security issues, and state consistency/persistence. The goal of the research is to consider both theoretical aspects, e.g., application of formal verification/reasoning for concurrent and distributed systems and formal semantics of IoT development frameworks/platforms, as well as practical aspects related to new methodologies and platforms for the development and orchestration of IoT applications, and applications of IoT systems in real-world domains.


Davide Ancona, Giorgio Delzanno, Luca Franceschini, Maurizio Leotta, Enrico Prampolini, Marina Ribaudo, Filippo Ricca:
An Abstract Machine for Asynchronous Programs with Closures and Priority Queues.
RP 2017: 59-74

Parameterized Verification of Topology-Sensitive Distributed Protocols goes Declarative
S. Conchon, G. Delzanno, A. Ferrando.
NETYS 2018.

Giorgio Delzanno:
Logic-based Verification of the Distributed Dining Philosophers Protocol.
Fundam. Inform. 161(1-2): 113-133 (2018)

Maurizio Leotta, Diego Clerissi, Dario Olianas, Filippo Ricca, Davide Ancona, Giorgio Delzanno, Luca Franceschini, Marina Ribaudo:
An acceptance testing approach for Internet of Things systems.
IET Software 12(5): 430-436 (2018)

Giorgio Delzanno:
Parameterized Verification of Publish/Subscribe Protocols via Infinite-state Model Checking.
CILC 2018: 97-111

Giorgio Delzanno, Giovanna Guerrini:
An IoT-enabled Framework for Context-aware Role-based Access Control.
SEBD 2018

Title: Types and models for aliasing and mutation control

Proposers: Elena Zucca, Paola Giannini (Univ. Piemonte Orientale)
Curriculum: Computer science, Secure and reliable systems

Short Description: In languages with state and mutations, keeping control of aliasing relations is a key issue for correctness. This is exacerbated by concurrency mechanisms, since side-effects in one thread can affect the behaviour of another thread; hence unpredicted aliasing can induce unplanned/unsafe communication. For these reasons, the last few decades have seen considerable interest in type systems for controlling aliasing and mutation, to make programs easier to maintain and understand [1,2]. We will investigate how to improve the expressivity of such type systems, considering different approaches: explicit type modifiers and non-structural rules [3], or type inference for computing aliasing effects [4]. We will also propose operational models allowing a more direct representation of aliasing, hence simpler reasoning.
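The correctness hazard that such type systems rule out can be shown in a few lines of Python (an illustrative toy, not taken from the cited papers): a mutation performed through one reference is silently observed through an alias that the reader of the code may believe is an independent, read-only view.

```python
def scale_prices(prices, factor):
    """Mutates the list in place; every alias of it observes the change."""
    for i in range(len(prices)):
        prices[i] *= factor

catalogue = [10, 20]
report_view = catalogue          # an alias, not a copy
scale_prices(catalogue, 2)
# The intended "read-only" view has changed too: report_view is now [20, 40].
```

A type system with uniqueness or immutability modifiers, as in [1,3], would either forbid creating the mutable alias or mark `report_view` as read-only, making this interference a compile-time error rather than a runtime surprise.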

[1] C. Gordon, M. Parkinson, J. Parsons, A. Bromfield, J. Duffy. Uniqueness and reference immutability for safe parallelism. OOPSLA 2012, ACM Press.
[2] D. Clarke, J. Oestlund, I. Sergey, T. Wrigstad. Ownership types: A survey. Aliasing in Object-Oriented Programming. Types, Analysis and Verification, LNCS 7850, Springer, 2013, pp. 15–58.
[3] P. Giannini, M. Servetto, E. Zucca, J. Cone. Flexible recovery of uniqueness and immutability, Theoretical Computer Science 764, 2019.
[4] P. Giannini, M. Servetto, E. Zucca. Tracing sharing in an imperative pure calculus. Science of Computer Programming 172, 2019.

Title: Corecursion in programming languages

Proposer: Elena Zucca
Curriculum: Computer science, Secure and reliable systems

Short Description: Recursion works well with inductive/well-founded data types. Instead, programming with coinductive/non-well-founded data types, such as infinite lists, infinite trees or graphs, requires either imperative features (e.g., marking nodes during a graph visit) or corecursion, which, however, often does not provide the desired semantics.
Building on recent advances in the foundations of corecursive definitions [1], we will investigate, propose, compare, formalize and possibly implement language constructs for flexible corecursion. We will focus on the functional [2,3] and/or object-oriented [4] paradigm.
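For concreteness, a corecursive definition of an infinite list can be sketched with Python generators (an illustrative analogue, not one of the language constructs the proposal will design): each element of the stream is produced on demand, so the non-well-founded data is never fully materialised.

```python
from itertools import islice

def fibs():
    """Corecursive definition of the infinite Fibonacci stream:
    productive (always yields the next element) but never terminating."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def take(n, stream):
    """Observe a finite prefix of an infinite stream."""
    return list(islice(stream, n))
```

Note that generators only give lazy element-by-element access; they do not, for instance, let a program detect that a stream is cyclic/regular and return a finite answer about it, which is exactly the kind of richer semantics for corecursion studied in [1,2].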

[1] D. Ancona, F. Dagnino, E. Zucca. Generalizing Inference Systems by Coaxioms. ESOP 2017.
[2] J-B. Jeannin, D. Kozen, A. Silva. CoCaml: Functional Programming with Regular Coinductive Types, Fundam. Inform. 150(3-4), 2017.
[4] D. Ancona, E. Zucca, Corecursive Featherweight Java. FTfJP@ECOOP 2012.

Title: Programming foundations in Agda

Proposer: Elena Zucca
Curriculum: Computer science, Secure and reliable systems

Short Description: Agda [1] is a dependently typed functional programming language, and a proof assistant based on the propositions-as-types paradigm. As such, it allows one to write formal definitions of programming languages, notably execution models and type systems, which are themselves programs, as exploited in [2]. We will investigate how to use Agda to encode some sophisticated programming features, whose underlying mathematical models could hardly be expressed in standard programming languages. Examples of such features include: flexible coinductive definitions [3] and types for tracing sharing relations [4].

[3] D. Ancona, F. Dagnino, E. Zucca. Generalizing Inference Systems by Coaxioms. ESOP 2017.
[4] P. Giannini, M. Servetto, E. Zucca. Tracing sharing in an imperative pure calculus. Science of Computer Programming 172, 2019.

Title: Novel approaches for securing Blockchain

Proposer: Alessio Merlo
Curriculum: Computer science, Secure and Reliable Systems

Short Description: The development and the success of cryptocurrencies like Bitcoin is mainly due to the robustness and the reliability of the blockchain technology. The guarantees offered by this technology are pushing the scientific community to question the usability of the same technology to support the decentralization of several services, such as financial, insurance, and medical services, just to cite a few. However, the security issues related to the adoption of blockchain in the wild are mostly unexplored. The aim of this Ph.D. proposal is the study of new methodologies and solutions that could support a secure and reliable adoption of blockchain in emerging contexts.

Link to the group/personal webpage:

Title: Novel detection methodologies for Android malware

Proposer(s): Alessio Merlo
Curriculum: Computer Science, Secure and Reliable Systems

Short Description: Malware detection on Android is a moving target. In fact, although in recent years a set of novel techniques to detect and recognize mobile malware has been put forward, the characteristics of malware have changed over time, thereby making the problem of automatic (i.e., without manual inspection) detection of malware still far from being solved. Obfuscation, packing, and moving the malicious behavior of malware into native code are some examples of techniques adopted by very recent malware to circumvent state-of-the-art analysis techniques. The aim of this Ph.D. proposal is to study and implement novel approaches for the (static/dynamic) analysis of recent Android malware.
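As a minimal illustration of one common static-analysis building block (illustrative only; the opcode names are hypothetical and no specific detection pipeline from the literature is implied), opcode traces extracted from an app can be turned into n-gram frequency features that a classifier can then consume:

```python
from collections import Counter

def opcode_ngrams(opcodes, n=2):
    """Extract n-gram frequency features from a sequence of opcodes,
    a simple representation often fed to classifiers in static
    malware-detection experiments."""
    return Counter(tuple(opcodes[i:i + n])
                   for i in range(len(opcodes) - n + 1))
```

Obfuscation and packing, mentioned above, are precisely what degrades such static features, since the observable opcode sequence no longer reflects the malicious payload, motivating complementary dynamic analysis.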

Link to the group/personal webpage:

Title: Automatic security analysis of the IoT apps ecosystem

Proposers: Alessandro Armando, Alessio Merlo
Curriculum: Computer Science

Description: Applications are the key component in virtually all emerging scenarios, e.g., mobile, cloud, fog, IoT, and play a primary role in the interaction with the user (mobile), with things (IoT), with contextual data (fog) and with business processes (cloud). The sheer number and the sophistication of apps (in each domain) call for their automatic security assessment, which is crucial for organizations that need to protect their strategic assets and their customers’ data. Furthermore, in the IoT environment, the different app domains are tightly coupled: IoT apps provide the core functionalities for the “things” while web, fog, cloud, and mobile apps are devoted to the management and control of the “things”. The presence of a vulnerable app (that, e.g., lacks proper authentication/authorization or implements weak encryption) in any of such domains can lead to security breaches affecting the whole ecosystem. Therefore, there is a growing demand for sound and reliable solutions to assess the security of such a complex ecosystem as a whole.

The aim of this research theme is to investigate specific security challenges related to the interaction of IoT and ecosystem apps and define novel methodologies and tools for i) the automated security analysis of the entire IoT apps ecosystem, ii) the compliance w.r.t. regulations and security policies and iii) the mitigation & reporting of issues and vulnerabilities.

Link to the group or personal webpage:


  • KPMG Australia, “Security and the IoT Ecosystem”, White-paper, 2016;
  • Ammar, M., Russello, G., & Crispo, B. (2018). Internet of Things: A survey on the security of IoT frameworks. Journal of Information Security and Applications, 38, 8-27.
  • Hossain, M. M., Fotouhi, M., & Hasan, R. (2015, June). Towards an analysis of security issues, challenges, and open problems in the internet of things. In 2015 IEEE World Congress on Services (pp. 21-28). IEEE.

Title: Information Hiding and Network Steganography  
(Funded by CNR-IMATI on the SIMARGL EU Project via an external contract “Assegno di Ricerca”)

Proposer(s): Luca Caviglione

Macro-area(s): Secure and Reliable Systems

Curriculum: Computer science

Keywords: Security, Computer Networks, Information Hiding

Short Description: Information hiding and network steganography techniques are increasingly used in investigative journalism to protect the identity of sources, or by malware to hide its existence and communication attempts. Therefore, understanding how they can be used to create covert channels to empower malicious software is essential to fully assess the cybersecurity panorama. In this perspective, typical malware use cases are: the creation of covert channels hidden within network traffic to exfiltrate sensitive information towards a remote command & control facility, and the set-up of a local covert channel within a single device to bypass the security framework of the guest OS. The aim of this PhD is to investigate novel information-hiding-capable threats, with emphasis on the development of detection techniques and mitigation policies, possibly able to scale. Part of the work will be done within the framework of the H2020 Project – SIMARGL: Secure Intelligent Methods for Advanced RecoGnition of malware and stegomalware.
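To make the covert-channel idea concrete, here is a simulated timing channel in Python (a deliberately simplified sketch, not an attack from the SIMARGL project): the sender encodes each covert bit as an inter-packet delay, and the receiver recovers the bits by thresholding the observed delays. No traffic is actually sent; the delay values are just data.

```python
def encode_bits(bits, short=0.01, long=0.05):
    """Sender side of a toy timing covert channel: map each covert bit
    to an inter-packet delay in seconds (0 -> short, 1 -> long)."""
    return [long if b else short for b in bits]

def decode_delays(delays, threshold=0.03):
    """Receiver side: recover the covert bits by thresholding the
    observed inter-packet delays."""
    return [1 if d > threshold else 0 for d in delays]
```

Detection approaches of the kind the PhD would develop typically look for exactly this artifact, e.g. an unnaturally bimodal distribution of inter-packet delays in otherwise ordinary traffic.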


[1] L. Caviglione, W. Mazurczyk, “Steganography in Modern Smartphones and Mitigation Techniques”, IEEE Communications Surveys & Tutorials, Vol. 17, No. 1, pp. 334 – 357, First Quarter 2015.

[2] L. Caviglione, M. Gaggero, J.-F. Lalande, W. Mazurczyk, M. Urbanski, “Seeing the Unseen: Revealing Mobile Malware Hidden Communications via Energy Consumption and Artificial Intelligence”, IEEE Transactions on Information Forensics & Security, Vol. 11, No. 4, pp. 799 – 810, April 2016.

[3] L. Caviglione, M. Gaggero, E. Cambiaso, M. Aiello, “Measuring the Energy Consumption of Cybersecurity”, IEEE Communications Magazine, Special Issue on Traffic Measurements for Cyber Security, Vol. 55, No. 7, pp. 58 – 63, July 2017.

[4] S. Schmidt, W. Mazurczyk, R. Kulesza, J. Keller, L. Caviglione, “Exploiting IP Telephony with Silence Suppression for Hidden Data Transfers”, Computers & Security, Vol. 79, pp. 17-32, 2018.

[5] L. Caviglione, M. Podolski, W. Mazurczyk, M. Ianigro, “Covert Channels in Personal Cloud Storage Services: the case of Dropbox”, IEEE Transactions on Industrial Informatics, Vol. 13, No. 4, pp. 1921 – 1931, Aug. 2017.

Link to the group/personal webpage:

Research line: Human-Computer Interaction 

Title: Study and Development of Innovative Methods and Technologies for Analysis, Recognition, Quantification, Co-Registration, Fusion of Ultrasound and/or MRI Images to Support Diagnostics

(Funded by CNR and ESAOTE SPA, according to the CNR-Confindustria Agreement 2018)

Proposers: Dr. Giuseppe Patanè (CNR-IMATI), Dr. Andrej Dvorak (ESAOTE SPA)
Curriculum: Computer Science

Research environment: The training, research, and development activities will involve CNR-IMATI, as Research Partner, and ESAOTE SPA as Industrial Partner, with a collaboration with main Hospitals in Genova and with the Ligurian Regional Hub on Life Sciences.

Short Description: The aim of this Industrial Ph.D. is the training of researchers and multi-skilled professionals capable of covering roles that require a deep knowledge of methodological and operational aspects in Computer Science, Computer Graphics, and Bio-medicine, with a focus on the modeling, visualization, and analysis of biomedical images acquired by ultrasound and/or magnetic resonance techniques. The training of the Ph.D. student will provide a solid scientific and technological background on Computer Science, Information Engineering, and Bio-medical Research. Particular attention will be paid to the main aspects of research and development for an Industrial Ph.D., in terms of scientific publications, development, and patents. The proposed research project falls within the thematic area of Computer Science, Biomedical Sciences, and advanced technological research in the area of information technology for health and biomedical industry, with a particular focus on the early diagnosis, screening, therapy, and follow-up of pathologies. Furthermore, the project has a multidisciplinary value, with fundamental and applied research in the fields of 3D Computer Graphics, Bio-medicine, Scientific Visualization, and Medicine.


  • Mattia Natali, Giulio Tagliafico, Giuseppe Patanè: Local up-sampling and morphological analysis of low-resolution magnetic resonance images. Neurocomputing 265: 42-56 (2017).
  • Imon Banerjee, Giuseppe Patanè, Michela Spagnuolo: Combination of visual and symbolic knowledge: A survey in anatomy. Comp. in Bio. and Med. 80: 148-157 (2017).

Title: Perception-based rendering for virtual reality environments

Proposer: Fabio Solari
Curriculum: Computer Science

Short Description: Virtual reality provides complex and realistic worlds, where users can interact to perform several tasks. A fundamental aspect of interaction is how humans perceive virtual worlds, thus visual perception studies can play an important role: modeled perception can suggest how to modify the rendering of the virtual world in order to adaptively optimize it for human observers and for their perceptual bandwidths. For instance, stereo cues (e.g. disparity) are predominant in peripersonal interactions, motion (e.g. optic flow) and perspective cues are essential in navigation and walking tasks, and the space-variant resolution of human sight effectively modulates spatial resolution. These perceptual cues could be used to devise and implement visualization techniques that render virtual environments (and also virtual representations of real objects that users can touch) in a natural way, allowing users to improve their performance and mitigating fatigue. Thus, the research theme aims in particular to develop perception-based advanced rendering techniques, which can be used to improve the user’s experience in virtual worlds.
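The space-variant-resolution cue mentioned above is often modeled by letting the minimum angle of resolution (MAR) grow roughly linearly with eccentricity; the inverse of that growth then gives how much rendering resolution is actually worth spending away from the gaze point. A minimal Python sketch of such a model (the specific parameter values are illustrative, not taken from the proposal):

```python
def lod_scale(ecc_deg, mar0=1.0 / 48.0, slope=0.022):
    """Fraction of full rendering resolution worth spending at a given
    eccentricity, under a linear acuity-falloff model:
        MAR(e) = mar0 + slope * e   (degrees of visual angle)
    The returned scale is MAR(0) / MAR(e): 1.0 at the fovea, decreasing
    towards the periphery. mar0 and slope are illustrative values."""
    return mar0 / (mar0 + slope * ecc_deg)
```

A foveated renderer could multiply per-region shading rate or texture level of detail by this scale, concentrating effort where the observer can actually resolve it.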

Link to the group/personal webpage:

Title: Multimodal interactive systems based on non-physical dimensions of touch

Proposers: Antonio Camurri, Enrico Puppo, Davide Anguita, Gualtiero Volpe

Description: This PhD proposal aims at investigating computational models and developing techniques and systems for the automated measurement of tactility: how non-verbal, social, affective content usually communicated and perceived by touch can be communicated and perceived without any physical contact. Can tactility be as effective as physical touch in socio-emotional communication? Scientific research (e.g., McKenzie et al., 2010) as well as artistic theories and practice (e.g., dance) demonstrate the existence of tactility. Humans are able to perceive touch even in the absence of physical contact, since movement alone may induce in an observer the perception of touch. Touch conveys emotions and facilitates or enhances compliance in social interactions; it also reduces the negative effects of several chronic diseases. Illusory touch occurs when people believe they have been touched but no actual tactile stimulation has been applied. This proposal therefore focuses on studying and developing systems to enable the communication and perception of non-verbal, social, affective content usually conveyed by touch, but without any physical contact. Tactility is the carrier of non-verbal emotional and social communication. Research challenges include the following: how does an observer perceive tactility and its role in socio-emotional interaction? Does an observer of tactility performed on a “ghost” body perceive the same socio-affective message as on a physical body?

Proposed work plan
– Assessment of the interdisciplinary existing state of the art: motion capture, biomechanics, crossmodal perception (Spence 2011), humanistic theories and computational models of non-verbal multimodal full-body movement analysis (Kleinsmith & Bianchi-Berthouze 2013), social signal processing (Vinciarelli et al. 2012), analysis of 3D trajectories, machine learning, and software environments for the development of real-time multimodal systems (e.g., EyesWeb);
– Design of a dataset and of pilot experiments. The dataset consists of a pool of movements performed by a number of pairs or small groups of participants highly skilled in movement execution (e.g., dancers) as well as poorly skilled ones. For example, two participants stand in front of each other at a few steps of distance; the first slowly walks to approach and touch the other (e.g. on a shoulder); then she returns to the original position, the second leaves the scene, and the first repeats the same action and touches the “ghost” of the second participant: she touches the memory, a sort of tactile photography.

The dataset will be recorded using the Qualysis motion capture system and other sensor systems (physiology, IMUs, audio) available at the DIBRIS premises of Casa Paganini-InfoMus;
– Analysis of tactility: extraction of a collection of multimodal features from the recorded data that explain the difference between the same touch gesture performed on a real human vs. the “ghost”;
– Assessment of the analysis outcomes by comparison with ratings of the same dataset provided by human participants;
– Development, evaluation and validation of prototypes of multimodal systems exploiting tactility.

Expected results
– An archive of MoCap and multimodal data for the analysis of tactility, to be made publicly available to the research community;
– Development of novel algorithms, techniques, and software libraries for the automated analysis of tactility;
– Scientific publications in top-level international conferences and journals;
– Development of prototypes of systems exploiting tactility in at least one of the following scenarios: therapy and rehabilitation in specific activities of the ARIEL (Augmented Rehabilitation) Joint Laboratory DIBRIS-Gaslini Children Hospital; enhanced active experience of cultural heritage in collaboration with Palazzo Reale Museum in Genoa;
– Participation in public dissemination events: e.g., European Commission events, international workshops and conferences, summer schools, science festivals;
– The research may be part of international projects, including European funded Horizon 2020 ICT projects, running at Casa Paganini-InfoMus research centre.

Link to the group or personal Webpage:

Casa Paganini – InfoMus Research Centre publications:

– Camurri, A., & Volpe, G. (2016). The Intersection of art and technology. IEEE MultiMedia, 23(1), 10-17.
– Kleinsmith, A., Bianchi-Berthouze, N. (2013). Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 4(1), 15-33.
– McKenzie, K. J., Poliakoff, E., Brown, R. J., and Lloyd, D. M. (2010). Now you feel it, now you don’t: how robust is the phenomenon of illusory tactile experience? Perception, 39(6), 839-850.
– Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.
– Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D’Errico, F., & Schroeder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69-87.

Title: Techniques for the design and implementation of (Spatial) Augmented Reality Environments

Proposer: Manuela Chessa
Curriculum: Computer Science

Short Description: Augmented Reality (AR) allows a real-time blending of digital information (e.g. text, virtual elements, images, sounds) onto the real world. Among the different technologies to design and implement AR scenarios, we can distinguish between wearable devices (e.g. the HoloLens) and non-wearable solutions, in particular Spatial Augmented Reality (SAR). This approach allows displaying additional information and virtual objects, or even changing the appearance of physical objects, directly onto the real environment. It is worth noting that, compared to head-mounted AR displays or handheld devices, SAR has some advantages: e.g. it allows interaction with physical (yet augmented) objects, and it scales well to multiple users, therefore supporting collaborative tasks naturally. On the other side, many issues are still open, e.g., a robust detection of the 3D structure of the environment and the registration between virtual and real contents. Moreover, the combination with handheld or wearable devices can further improve the range of possible solutions and interaction systems.

The research theme aims to develop novel techniques to create AR and SAR environments in which people can interact in an ecological way. Besides virtual visual information added to the real world, also sensorized objects providing controlled force and tactile feedbacks could be used to augment reality and devise novel interaction paradigms.

Link to the group/personal webpage:

Research line: Systems Engineering

Title:  Sustainable planning and control of distributed power and energy systems

Proposers: R. Minciardi, M. Robba
Curriculum: Systems Engineering

Description: The increase in the use of renewable energies, the emergence of distributed generation and storage systems, and, in general, the concept of “smart grids” have given rise to the necessity of defining new decision and control schemes for planning and management purposes. Currently, a major challenge is represented by the lack of a unified mathematical framework including robust tools for modeling, simulation, control, and optimization of time-critical operations in complex multicomponent and multiscale networks, characterized by microgrids, interconnected buildings, renewables, storage systems, and electric vehicles. The difficulty of defining effective real-time optimal control schemes derives from the structure of a power grid and, specifically, from the presence of several issues: renewable and traditional power production, bidirectional power flows, dynamic storage systems, demand response requirements, and stochastic aspects (such as uncertainties in renewables, prices, and demand forecasting). This results in optimization problems that are generally intractable within a real-time optimal control scheme if all components of the whole system are represented at a full level of detail. Moreover, the new regulation related to new market entrants and schemes requires a revision and improvement of the planning and management of distributed energy management systems, as well as their coordination, in order to optimize self-consumption and energy distribution.

The proposed PhD research activity will fall within this framework and has the objective of developing and applying tractable approaches for planning and optimal control, taking into account stochastic issues (i.e., intermittent renewables, demands, prices) and considering different possible architectures (multilevel, decentralized, distributed). In particular, the formulation of the optimization and control problems will be based on realistic models of the electrical grid and of its various sub-systems (microgrids, intelligent buildings, storage systems, renewables). Moreover, different energy distribution systems will be taken into account in relation to polygeneration systems: district heating, building heating and cooling with the associated storage systems, and water and gas distribution networks. Finally, different kinds of demands will be considered (heat, cooling, electricity), as well as electric vehicles with charging/discharging cycles within a smart grid.
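As a toy illustration of the kind of storage management decisions involved, the following sketch applies a naive price-threshold policy to a single storage unit with dynamics x(t+1) = x(t) + u(t). The rule, the prices, and the parameter names are entirely hypothetical and much simpler than the optimal control schemes the project targets.

```python
# Illustrative sketch (hypothetical data and rule, not the proposers' method):
# charge the storage when the energy price is below its mean, discharge when
# above, respecting capacity and power limits.

def dispatch(prices, capacity=10.0, power=4.0):
    """Return the charge/discharge schedule u(t) for each price period."""
    mean_price = sum(prices) / len(prices)
    x, schedule = 0.0, []          # x = stored energy, starts empty
    for p in prices:
        if p < mean_price:
            u = min(power, capacity - x)   # charge, bounded by free capacity
        else:
            u = -min(power, x)             # discharge, bounded by stored energy
        x += u                             # storage dynamics x(t+1) = x(t) + u(t)
        schedule.append(u)
    return schedule

prices = [20, 18, 35, 40, 22, 45]
print(dispatch(prices))  # → [4.0, 4.0, -4.0, -4.0, 4.0, -4.0]
```

A real-time optimal controller would replace this myopic rule with a receding-horizon optimization over forecasts of prices, demand, and renewable production.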


F. Delfino, R. Minciardi, F. Pampararo, M. Robba. A Multilevel Approach for the Optimal Control of Distributed Energy Resources and Storage, IEEE Transactions on Smart Grid, Special Issue on Control Theory and Technology in Smart Grid, to appear

S. Bracco, F. Delfino, F. Pampararo, M. Robba, M. Rossi. A mathematical model for the optimal operation of the University of Genoa Smart Polygeneration Microgrid: Evaluation of technical, economic and environmental performance indicators, Energy, 2013

H. Dagdougui, R. Minciardi, A. Ouammi, M. Robba, R. Sacile. A dynamic decision model for the real time control of hybrid renewable energy production systems, IEEE Systems Journal, Vol. 4, No. 3, 2010, p. 323-333.

Title: Optimal routing and charging of electrical vehicles in a smart grid.
Proposers: Riccardo Minciardi, Massimo Paolucci, Michela Robba
Curriculum: Systems Engineering

Description: At the international level, different new policies have been developed to reduce CO2 emissions, such as the Kyoto Protocol, the European 20-20-20 strategy, and the Energy Roadmap 2050. The result is an increase of green technologies for energy production and transportation. Due to the presence of intermittent and distributed production (such as renewables (RES)) and loads (such as electric vehicles (EVs)), current electrical grid management has to change, and new control strategies are necessary for the integration of electrical and transportation networks. In fact, on one side, EVs need to be charged in the fastest possible time, and, on the other side, smart grids should accommodate such requests. In particular, from the users’ perspective it is necessary to know the electricity consumption over a specific path, and to decide where and how much to charge EVs to satisfy their own travel needs. From the grid perspective, instead, it is necessary to offer adequate charging facilities based on control strategies that are able to satisfy users while guaranteeing electrical grid constraints. The proposed PhD research activity will fall within this framework. In particular, the following main objectives/activities can be listed:

  • Definition and development of a discrete event optimization model for microgrids with EVs.
  • Definition and development of power management strategies for charging stations.
  • Optimal routing and charging of EVs: development of meta- and math-heuristics.

Demonstration activities with real charging stations and case studies (in collaboration with companies) are foreseen during the three years.
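A minimal flavor of the routing-and-charging decisions listed above can be given by a greedy rule deciding at which stations along a fixed route an EV recharges. This sketch is purely illustrative (the function, the per-leg consumption data, and the full-recharge assumption are ours, not part of the proposal); the project's meta- and math-heuristics would optimize route and charge amounts jointly, under grid constraints.

```python
# Hypothetical sketch: given the energy needed for each leg of a fixed route,
# greedily decide at which stations to recharge so the battery never drops
# below a safety reserve. Recharging is assumed to restore a full battery.

def plan_charging(legs_kwh, battery_kwh, reserve_kwh=5.0):
    """Return the indices of the stations where the EV recharges."""
    charge = battery_kwh               # start with a full battery
    stops = []
    for i, need in enumerate(legs_kwh):
        if charge - need < reserve_kwh:
            stops.append(i)            # recharge at station i before leg i
            charge = battery_kwh
        if charge - need < reserve_kwh:
            raise ValueError(f"leg {i} is infeasible even with a full battery")
        charge -= need
    return stops

# 5 legs of 20, 18, 10, 25, 12 kWh; 40 kWh battery, 5 kWh reserve
print(plan_charging([20, 18, 10, 25, 12], battery_kwh=40))  # → [1, 3, 4]
```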

Link to personal homepage


Schneider, M., Stenger, A., Hof, J., 2014, An Adaptive VNS Algorithm for Vehicle Routing Problems with Intermediate Stops, in Technical Report LPIS-01/2014.

Yagcitekin, B., Uzunoglu, M., 2016. A double-layer smart charging strategy of electric vehicles taking routing and charge scheduling into account.

Title:  Smart scheduling approaches for manufacturing industry.

Proposer: Massimo Paolucci
Curriculum: Systems Engineering

Short description: Scheduling in the manufacturing industry involves key decisions about how to best exploit the available resources (e.g., machines, tools, workers, energy) in order to efficiently perform the required production activities. Scheduling decisions are at the operational level, that is, they regard a short planning horizon (a day or a shift) and must take into account detailed production conditions and requirements. In real manufacturing industries, scheduling problems are large-scale (the number of activities to be performed may be huge, and workshops may include many machines and tools), so the number of possible alternative decisions usually grows exponentially. In addition, even if scheduling problems share common features, several relevant differences exist which characterize different industrial sectors (e.g., food and beverage, fashion, automotive). Therefore, an effective general-purpose solution approach that could represent the basis for developing scheduling systems for different sectors, avoiding restarting from scratch with a specific algorithm, does not seem to be available. The introduction of the Industry 4.0 paradigm will make it possible to rely on fresh data from the field, thus improving the possibility of planning, adapting, and revising scheduling decisions more effectively, even reacting to the unpredicted changes that usually characterize real production systems. Finally, sustainability issues, such as energy consumption and carbon footprint, need to be included among the scheduling objectives.

The purpose of this research project is to design a new solution approach for facing a large class of the scheduling problems emerging in the manufacturing industry. Such an approach can be based on several building blocks and strategies (recent metaheuristics such as adaptive large neighborhood search or bio-inspired algorithms, simulation-optimization, as well as heuristics based on mathematical programming) that can be exploited to design a solver framework for this class of hard combinatorial problems.
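One of the simplest classical building blocks such a solver framework could start from is list scheduling. The sketch below (illustrative only; the job data are invented and the problem, identical parallel machines, is far simpler than the sector-specific problems the project addresses) implements the Longest-Processing-Time heuristic.

```python
# Illustrative sketch of the classical LPT (Longest-Processing-Time) rule:
# assign each job, longest first, to the currently least-loaded machine.
import heapq

def lpt_schedule(jobs, n_machines):
    """Return (makespan, per-machine job lists) for identical parallel machines."""
    loads = [(0, m) for m in range(n_machines)]   # (current load, machine id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(n_machines)]
    for job in sorted(jobs, reverse=True):        # longest jobs first
        load, m = heapq.heappop(loads)            # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(loads, (load + job, m))
    return max(load for load, _ in loads), assignment

makespan, plan = lpt_schedule([7, 5, 4, 3, 3, 2], n_machines=2)
print(makespan)  # → 12
```

Metaheuristics such as adaptive large neighborhood search would then improve an initial schedule like this one by repeatedly destroying and repairing parts of it.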

Link to the group or personal webpage:

Title: Strategic and tactical planning in production and logistics for manufacturing industry

Proposer(s): Massimo Paolucci
Curriculum: Systems Engineering

Short description: Planning at the strategic and tactical levels involves connected problems influencing both the design of the supply chain and the manufacturing production activities. Such decisions usually involve the activation of facilities, the allocation of the available resources, and the aggregate management of both production and distribution activities over a medium-to-long time horizon. Clearly, strategic and tactical planning decisions impact not only business objectives but also environmental sustainability; as an example, in closed-loop supply chains, planning decisions also include the use of materials and components recovered from returned products and, in general, the reduction of energy consumption.

Therefore, the proposed research project aims at considering the problem of planning in the supply chain for manufacturing production systems in order to define a general-purpose decision support system able to operate at both the strategic and tactical levels. The purpose is to determine a unified model and a set of optimization approaches to support planning decisions at different levels (e.g., supply network design, inventory and lot-size planning, distribution planning), including sustainability aspects such as remanufacturing, energy consumption, and emissions. Since at least part of the considered optimization problems are computationally intractable, as they belong to the NP-hard complexity class, the algorithms to be designed and tested can range from exact approaches, based on mathematical programming models, to heuristics, metaheuristics (from neighborhood search techniques to population-based and bio-inspired algorithms), and matheuristics (i.e., methods that include mathematical programming models in a heuristic solution framework).

Link to the group or personal webpage:

Title: Hyper- and meta-heuristics for multi-objective optimization

Proposer: Massimo Paolucci
Curriculum: Systems Engineering

Short description: Most decision problems in real-life applications require taking into account more than one objective/criterion. Usually such objectives are non-commensurable and conflicting. These problems arise in many different fields and often express the conflict between customer satisfaction, stakeholder profit, and social and environmental sustainability. As an example, in the manufacturing industry and logistics, planning the activities of a supply chain should aim at timely meeting the customer demand, reducing production, inventory, and transportation costs, minimizing energy consumption and CO2 emissions, favoring material recycling, and so on. Multi-criteria decision making deals with this wide class of decision problems and embeds multi-objective optimization as the set of methods whose purpose is to define the solutions which deserve to be considered by decision makers, the so-called efficient or Pareto-optimal solutions. In general, the problem of determining the Pareto-optimal solutions of a multi-objective optimization problem is NP-hard, and the dimension of this set of solutions is exponential. For this reason, metaheuristic algorithms, such as Genetic Algorithms, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization, have been applied to multi-objective optimization since the 1990s. The purpose of this research is to investigate in depth the possible use of metaheuristics for multi-objective optimization, trying in particular to design general-purpose self-adapting algorithms. This can be pursued by experimenting with so-called hyper-heuristics, which combine higher-level metaheuristics with lower-level metaheuristics, where the purpose of the former is to identify the best configuration of the latter when solving a given optimization problem.
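The notion of Pareto optimality used above can be made concrete with a minimal dominance filter over a finite set of candidate solutions. The sketch is illustrative only: it assumes two objectives, both to be minimized, and invented (cost, emissions) pairs; real multi-objective metaheuristics search the decision space rather than filter an enumerated set.

```python
# Minimal illustrative sketch: extract the Pareto-optimal (non-dominated)
# solutions from a finite set of candidates, assuming all objectives are
# minimized. A solution is dominated if another distinct solution is at
# least as good in every objective.

def pareto_front(solutions):
    """Return the non-dominated solutions among objective-value tuples."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical (cost, emissions) pairs for five candidate plans
candidates = [(10, 5), (8, 7), (12, 3), (11, 6), (10, 7)]
print(pareto_front(candidates))  # → [(10, 5), (8, 7), (12, 3)]
```

The decision maker would then choose among the surviving trade-off solutions; (11, 6) and (10, 7) are discarded because (10, 5) is at least as good on both objectives.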

Title: Transportation Network Optimization Via Transferable-Utility Games

Proposer: Marcello Sanguineti
Curriculum: Systems Engineering

Short description: Network connectivity is an important aspect of any transportation network, as the role of the network is to provide society with the ability to easily travel from point to point using various modes. Analyzing a network’s connectivity can assist decision makers with the identification of weak components, to detect and prevent failures, and to improve the connectivity in terms of reduced travel time, reduced costs, increased reliability, easy access, etc.

A basic question in network analysis is: how “important” is each node? An important node might, e.g., contribute substantially to short connections between many pairs of nodes, handle a large amount of the traffic, generate relevant information, represent a bridge between two areas, etc. To quantify the relative importance of nodes, one possible approach consists in using the concept of “centrality” [1, Chapter 10]. A limitation of classical centrality measures is the fact that they evaluate nodes based on their individual contributions to the functioning of the network. For instance, the importance of a stop in a transportation network can be computed as the difference between the full network capacity and the capacity when the stop is closed. However, such an approach is inadequate when, for instance, multiple stops can be closed simultaneously. As a consequence, one needs to refine the existing centrality measures so as to take into account that the network nodes do not act merely as individual entities, but as members of groups of nodes. To this end, one can exploit game theory [2], which, in general terms, provides a basis for a systematic study of the relationship between rules, actions, choices, and outcomes in situations that can be either competitive or non-competitive.

The idea at the roots of game-theoretic centrality measures [3] is the following: the nodes are considered as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph-theoretic properties. The key advantage of this approach is that nodes are ranked not only according to their individual roles in the network, but also taking into account how they contribute to the roles of all possible groups of nodes. This is important in various applications in which a group’s performance cannot simply be described as the sum of the individual performances of the group members involved. In the case of transportation networks, suppose we have a certain budget at our disposal. One possible approach consists in asking whether investing all the money in increasing the capacity and/or service of a single transportation component (road section, bridge, transit route, bus stop, etc.) substantially improves the whole network. A better way of proceeding for the network analyst/designer would probably consist in simultaneously improving a (possibly small) subset of the components. In this case, to evaluate the importance of a component one has to take into account the potential gain of improving it as part of a group of components, not merely the potential gain of improving the component alone. This approach can be formalized in terms of cooperative game theory [2], where the nodes are players whose performances are studied in coalitions, i.e., subsets of players.

This research project, which takes its cue from the works [4,5], consists in developing methods and tools from a particular type of cooperative games, called “cooperative games with transferable utility” (“TU games” for brevity), to optimize transportation networks. Given a transportation network, a TU game will be defined which takes into account the network topology, the weights associated with the arcs, and the demand based on the origin-destination matrix (weights associated with nodes). The nodes of the network represent the players of the TU game.

We aim at exploiting game-theoretic solution concepts developed during decades of research to identify the nodes that play a major role in the network. In particular, we shall use the solution concept known as the Shapley value [2], which represents a criterion according to which each node is attributed a value, in such a way that the larger the value, the larger the node’s importance. The Shapley value enjoys mathematical properties well suited to the proposed analysis. Computational aspects related to the evaluation of the Shapley value will be investigated too [6], studying the possibility of polynomial-time computation with respect to the network dimension.
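The Shapley value can be illustrated on a toy TU game defined on a graph. The sketch below (an invented example, not the proposal's actual game: coalition value is taken to be the number of edges internal to the coalition) computes it exactly by averaging marginal contributions over all player orderings. The factorial cost of this enumeration is precisely why the polynomial-time computability question mentioned above matters.

```python
# Illustrative sketch: exact Shapley value of each node in a small graph game
# where the value v(S) of a coalition S is the number of edges with both
# endpoints in S (a toy choice; the project's TU game would also use arc
# weights and origin-destination demand).
from itertools import permutations

def shapley(players, value):
    """Average each player's marginal contribution over all orderings
    (exponential in the number of players; fine for toy examples)."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before
    return {p: phi[p] / len(perms) for p in players}

# A 4-node "hub" network: node 0 connected to nodes 1, 2, 3.
edges = [(0, 1), (0, 2), (0, 3)]
def v(S):
    return sum(1 for a, b in edges if a in S and b in S)

print(shapley([0, 1, 2, 3], v))  # → {0: 1.5, 1: 0.5, 2: 0.5, 3: 0.5}
```

As expected, the hub node 0 receives the largest value (each edge splits its unit of worth equally between its two endpoints), and the values sum to v of the grand coalition, i.e., 3.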

Depending on whether the analysis focuses on the “physical nodes” or the “physical links”, the definition of a player changes. This research project considers both. When the transportation nodes (representing, e.g., intersections, transit terminals, bus stops, major points of interest) are analyzed, the network on which the TU game is defined is identical to the physical network. On the other hand, when arcs (e.g., road segments, transit routes, rail lines) are analyzed, the network is transformed in such a way that the physical links are modeled as nodes.

Link to the group/personal webpage:


[1] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications. Vol. 8. Cambridge University Press, 1994.

[2] J. González-Díaz, I. García-Jurado, and M.G. Fiestras-Janeiro, An Introductory Course on Mathematical Game Theory. AMS, 2010.

[3] T.P. Michalak, Game-Theoretic Network Centrality – New Centrality Measures Based on Cooperative Game Theory, 2016.

[4] Y. Hadas and M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable-Utility Games. 96th Annual Meeting of the Transportation Research Board, Transportation Research Board of the National Academies, Washington, DC, 8-12 January 2017.

[5] Y. Hadas, G. Gnecco, M. Sanguineti, An Approach to Transportation Network Analysis Via Transferable Utility

Title: Coupling of atmospheric and hydrological modelling

Proposer(s): Luca Ferraris, Fabio Delogu, Antonio Parodi
Research area(s): Hydro-informatics, Hydrological modeling, Meteorology

Scholarship Funded by CIMA Foundation 


The prediction of hydro-meteorological phenomena at the interface between the short-range and sub-seasonal spatio-temporal scales in areas of complex orography calls for the formulation and application of a coupling framework between atmospheric and terrestrial hydrological modelling.
The goal of the proposed PhD thesis is to develop such a coupling framework on top of the following key components:

  • The atmospheric model WRF (Weather Research and Forecasting Model), an open-source code conceived and developed since the mid-1990s by NCAR (National Center for Atmospheric Research), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Air Force, the Naval Research Laboratory, the University of Oklahoma, and the Federal Aviation Administration. WRF is a mesoscale forecasting system designed for both research and operational applications, capable of operating at spatial resolutions from hundreds of meters to hundreds of kilometers. WRF offers a very rich portfolio of physical parameterization packages (microphysics, radiation, turbulence, soil model) and data assimilation options.
  • The Continuum hydrological model, a model developed by CIMA Research Foundation and able to work both in the pre-event analysis and forecast phase and in the monitoring stage for the simultaneous control of the event. CIMA Research Foundation’s choice to develop its own model arose from the need for a model with a reduced number of parameters, able to take advantage of all the information available via satellite. The Continuum model is useful in forecasting and reducing risk because it is able to provide the responses of the basin to meteorological stress, in particular during intense events. The great utility of Continuum also lies in the possibility of being implemented in areas with poor ground instrumentation and in the possibility of calibrating the model with respect to variables, such as soil moisture or soil temperature, that are rarely present in other available models. Continuum also has features that make it applicable to different types of reservoirs, different climates, and heavily anthropized areas with the presence of hydraulic works (dams) that can play an important role in reducing the effects of floods.

The coupling framework, implemented using state-of-the-art hydroinformatics techniques and taking inspiration from the OASIS coupling guidelines, will provide:

  • An adjustable multi-physics and multi-spatio-temporal-scale land-atmosphere modeling capability for conservative, coupled and uncoupled assimilation & prediction of major water cycle components
  • Accurate and reliable streamflow prediction across scales
  • A research modeling framework for evaluating and improving hydro-meteorological physical process and coupling representations

Link to the group or personal Webpage


Silvestro F., Gabellani S., Delogu F., Rudari R., and Boni G.: Exploiting remote sensing land surface temperature in distributed hydrological modelling: the example of the Continuum model. Hydrol. Earth Syst. Sci., 17, 39-62, doi:10.5194/hess-17-39-2013, 2013.

Silvestro F., Gabellani S., Rudari R., Delogu F., Laiolo P. and Boni G.: Uncertainty reduction and parameter estimation of a distributed hydrological model with ground and remote-sensing data. Hydrol. Earth Syst. Sci., 19, 1727-1751, doi:10.5194/hess-19-1727-2015, 2015.

Hally, A., Caumont, O., Garrote, L., Richard, E., Weerts, A., Delogu, F., Fiori, E., Rebora, N., Parodi, A., … & Clematis, A. (2015). Hydrometeorological multi-model ensemble simulations of the 4 November 2011 flash flood event in Genoa, Italy, in the framework of the DRIHM project. Natural Hazards And Earth System Sciences, vol. 15, p. 537-555, ISSN: 1684-9981

Laiolo P., Gabellani S., L. Campo, F. Silvestro, F. Delogu, R. Rudari, L. Pulvirenti, G. Boni, F. Fascetti, N. Pierdicca, R. Crapolicchio, S. Hasenauer, S. Puca. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model. International Journal of Applied Earth Observations and Geoinformation, doi 10.1016/j.jag.2015.06.002, Vol. 48, 2016.

Leong, S. H., Parodi, A., & Kranzlmüller, D. (2017). A robust reliable energy-aware urgent computing resource allocation for flash-flood ensemble forecasting on HPC infrastructures for decision support. Future Generation Computer Systems, 68, 136-149

Parodi, A., Kranzlmüller, D., Clematis, A., Danovaro, E., Galizia, A., Garrote, L., … & Siccardi, F. (2017). DRIHM (2US): An e-Science Environment for Hydrometeorological Research on High-Impact Weather Events. Bulletin Of The American Meteorological Society, 98(10), 2149-2166.

Title: The role of vegetation in hydrological modelling
Proposer(s): Luca Ferraris, Valerio Basso, Simone Gabellani
Research area(s): Hydrology, Climate

Scholarship Funded by CIMA Foundation 

Vegetation is a dynamic component that, through its physiological and structural characteristics (e.g., stomatal conductance, leaf density, plant age), primarily affects the partitioning of incoming solar energy into latent and sensible heat fluxes and the partitioning of rainfall into runoff, canopy interception, evapotranspiration (ET), and soil infiltration. Besides, through the photosynthesis and respiration processes necessary to grow and maintain plant tissues, vegetation acts as a sink or source of carbon. For this reason, vegetation is known to play an important role in the spatial distribution and temporal variation of the energy, water, and carbon fluxes at the land surface.

How vegetation affects these fluxes in a future climate is currently a central problem of earth system sciences.

Dynamic Vegetation Models (DVMs) were developed to incorporate into a model framework the physiological and structural processes that describe the time evolution of vegetation functions and distribution. DVMs include processes based on ecological and physiological knowledge of the factors influencing individual plant demography.

Ecological and physiological processes comprise photosynthesis, autotrophic respiration, allocation, the Nitrogen (N) cycle, and plant competition.

Coupled with hydrological models or Soil-Vegetation-Atmosphere Transfer schemes (SVAT; models that represent the interactions between vegetation and climate), DVMs have been largely applied at the global scale to investigate the feedbacks among vegetation, climate, and the hydrological cycle under a rapid increase of atmospheric CO2 concentration.

Understanding the variations of ET caused by structural and physiological changes of vegetation is extremely relevant to flood and drought estimation, because evapotranspiration accounts for some 60% of terrestrial precipitation and can approach 100% of annual rainfall in water-limited ecosystems.

The main goal of the research is to develop a prognostic DVM coupled with a hydrological model for water management, flood forecasting and climate studies.

The model should adopt a robust and parsimonious approach to prognostically derive LAI (Leaf Area Index) and stomatal conductance, two fundamental plant characteristics that influence the evapotranspiration flux. Robust in the sense that it should be capable of reproducing the inter-seasonal and intra-seasonal variations of water and energy fluxes in diverse climates and regions; parsimonious because the parameterization should be reduced to a minimum. For instance, sensitivity analyses should be conducted to test the advantage, if any, of using a more sophisticated scheme for modeling the exchange of ET among soil, vegetation, and atmosphere (e.g., a two-source vs. a one-source scheme).

In addition, the impacts of water scarcity on the abiotic and biotic plant functions deserve to be investigated (e.g., the use of specific soil moisture stress functions for each Plant Functional Type).

The model should be able to work properly in data-scarce environments and to fully exploit and benefit from satellite data, especially the new high-resolution observations from the Sentinel constellation.

Link to the group or personal Webpage


Silvestro F., Gabellani S., Delogu F., Rudari R., and Boni G.: Exploiting remote sensing land surface temperature in distributed hydrological modelling: the example of the Continuum model. Hydrol. Earth Syst. Sci., 17, 39-62, doi:10.5194/hess-17-39-2013, 2013.

Silvestro F., Gabellani S., Rudari R., Delogu F., Laiolo P. and Boni G.: Uncertainty reduction and parameter estimation of a distributed hydrological model with ground and remote-sensing data. Hydrol. Earth Syst. Sci., 19, 1727-1751, doi:10.5194/hess-19-1727-2015, 2015.

Arora, V., 2002. Modeling vegetation as a dynamic component in soil-vegetation-atmosphere transfer schemes and hydrological models. Reviews of Geophysics, 40(2), pp. 1–26.

Katul, G.G. et al., 2012. Evapotranspiration: a process driving mass transport and energy exchange in the soil-plant-atmosphere-climate system. Reviews of Geophysics, 50(RG3002), p. RG000366: 1-25.

Title: An analytic definition of coping and adaptive capacity for civil protection planning and climate change adaptation 

Proposer(s): Luca Ferraris, Chiara Franciosi, Marina Morando

Research area(s): Planning, Climate Change Adaptation


As agreed by the main international frameworks and references related to Disaster Risk Reduction (DRR), comprehensively analyzing the driving forces related to each component of the Risk equation (hazard, vulnerability, capacity, and exposure) is essential to identify and implement the most efficient and effective risk reduction measures.

However, the scientific community is still discussing the identification and definition of the components of Risk and their interrelations. More specifically, from the analysis of the scientific literature in both the DRR and Climate Change (CC) areas, the functional relation between Vulnerability and Capacity (both coping and adaptive) has not been fully defined.

At the international level, the following assumption finds widespread consensus: Vulnerability contributes to the analysis of the socio-economic, territorial, and physical context, whilst knowledge of Capacity supports defining both the response and the adaptation of a system to a natural hazard. However, in order to better understand the complex dynamics that underlie risk, a different procedural and methodological approach is needed to better understand the territorial weaknesses and strengths which are at the root of natural disasters.

To this end, developing a new systemic “reading” of natural disasters and their impacts could foster the ability to better understand the correlation between Vulnerability and Capacity. By so doing, DRR and DRM planners will be facilitated in the selection of the non-structural measures needed for both Civil Protection emergency planning and CC adaptation. Such a holistic analysis will further strengthen community and systemic resilience.

The main aim of the research is to identify a set of indicators describing and assessing the Capacity (both coping and adaptive) and Vulnerability components of Risk. The indicators will provide the analytical dataset necessary to comprehend the existing interrelations between the two variables of the risk equation and, also, to identify more effective and efficient non-structural risk reduction measures.

An ad hoc methodology will be developed, comprising the modelling of the components’ interactions while considering feedback loops as part of the evaluation process. The methodology will allow a more precise characterization of possible prevention and preparedness measures.

Finally, the proposed methodology will improve local-scale civil protection emergency planning while directly strengthening resilience to natural hazards and CC.

Link to the group or personal Webpage


Birkmann J., Cardona O.D., Carreño M.L., Barbat A.H., Pelling M., Schneiderbauer S., Kienberger S., Keiler M., Alexander D., Zeil P., Welle T. (2013). Framing vulnerability, risk and societal responses: the MOVE framework. Natural Hazards, vol. 67, pp. 193–211.

Bollin C., Cárdenas C., Herwig H. and Krishna V. (2003). Disaster Risk Management by Communities and Local Governments.

Cardona O.D., Van Aalst M.K., Birkmann J., Fordham M., McGregor G., Perez R., Pulwarty R.S., Schipper E.L.F. and Sinh B.T. (2012). Determinants of risk: exposure and vulnerability. In: Managing the Risks of Extreme Events and Disasters to Advance

Cutter S.L., Boruff B.J. and Shirley W.L. (2003). Social Vulnerability to Environmental Hazards. Social Science Quarterly, 84, 242–261.

IPCC (2014). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Summaries, Frequently Asked Questions, and Cross-Chapter Boxes. A Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. World Meteorological Organization, Geneva, Switzerland, 190 pp.

Schneiderbauer S., Calliari E., Eidsvig U. and Hagenlocher M. (2017). The most recent view of vulnerability.

Smit B. and Wandel J. (2006). Adaptation, Adaptive Capacity and Vulnerability. Global Environmental Change, 16, 282-292.

UNISDR (2017a). Words into Action Guidelines: NATIONAL RISK ASSESSMENT. Governance System, Methodologies, and Use of Results.

UNISDR (2015). Sendai Framework for Disaster Risk Reduction 2015 – 2030.

UNISDR (2005). Hyogo Framework for Action 2005-2015. Building the resilience of nations and communities to disasters.

Welle T. and Birkmann J. (2015). The World Risk Index – An Approach to Assess Risk and Vulnerability on a Global Scale. Journal of Extreme Events, vol. 02, n. 01, p. 1550003.
