Time Discrepancy in digital twins

Our paper on tackling issues arising from time discrepancy between digital and physical twins has been published in Robotics and Autonomous Systems. More can be found here!

Abstract: Digital twins (DTs) represent a key technology in the development, real-time monitoring and optimisation of cyber–physical systems (CPSs). This potential emerges from the real-time coupling between DTs and their physical counterparts, which makes it possible to use operational data as it is being generated in order to aid decision-making. Harnessing this potential presents several design challenges, such as the parallel operation of the DT and its physical twin (PT), and the necessary synchronisation thereof, to ensure coherent execution of the system as an ensemble. In this paper we present an approach that handles situations where a DT and its PT get out of sync as a result of disturbances in the normal operational conditions of the DT–PT system, e.g., due to network degradation or a temporary network drop. The purpose is to provide a best-effort functionality covering user notification, degradation of the DT to a digital shadow (DS), and recovery mechanisms to re-establish the synchronisation between DT and PT.
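The degrade-and-recover behaviour described in the abstract can be pictured as a small state machine. The sketch below is purely illustrative: the state names, the `latency_ms` input, and the threshold are assumptions of mine, not taken from the paper.

```python
from enum import Enum

class Mode(Enum):
    TWIN = "digital twin"      # bidirectional, in sync with the PT
    SHADOW = "digital shadow"  # receive-only, out of sync, user notified
    RECOVERING = "recovering"  # replaying buffered PT data to resync

class TwinSync:
    """Toy sketch of DT -> DS degradation and recovery.

    Hypothetical: the paper's actual mechanism and thresholds differ.
    """
    def __init__(self, max_latency_ms=200):
        self.mode = Mode.TWIN
        self.max_latency_ms = max_latency_ms

    def on_sample(self, latency_ms):
        if self.mode is Mode.TWIN and latency_ms > self.max_latency_ms:
            self.mode = Mode.SHADOW        # degrade: stop actuating, notify user
        elif self.mode is Mode.SHADOW and latency_ms <= self.max_latency_ms:
            self.mode = Mode.RECOVERING    # network back: replay missed PT data
        elif self.mode is Mode.RECOVERING and latency_ms <= self.max_latency_ms:
            self.mode = Mode.TWIN          # resynchronised with the PT
        return self.mode
```

The point of the sketch is the one-way degradation path: once the link degrades, the twin never jumps straight back to full DT mode but passes through an explicit recovery phase.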

Loosening control @ TAAS published

Our paper entitled ‘Loosening Control—A Hybrid Approach to Controlling Heterogeneous Swarms’ has been published in ACM Transactions on Autonomous and Adaptive Systems. Find out more here.

Abstract: Large pervasive systems, deployed in dynamic environments, require flexible control mechanisms to meet the demands of chaotic state changes while accomplishing system goals. As centralized control approaches may falter in environments where centralized communication and knowledge may be impossible to implement, researchers have proposed decentralized control methods that leverage agent-driven, self-organizing behaviors to achieve reliable, flexible systems. This article presents and compares the performance of three decentralized control approaches in the online multi-object k-assignment problem. In this domain, a set of sensors is tasked to detect and track an unknown and changing set of targets. Results show that a proposed hybrid approach, which incorporates supervisory devices within the population while allowing semi-autonomous operation of non-supervisory devices, produces a flexible and reliable system capable of both high detection and coverage rates.

Special Session @ EUSIPCO

We have a special session on Edge-Fog-Cloud Machine Learning for Smart Cities Applications at the European Signal Processing Conference (EUSIPCO) 2022! The deadline for paper submission is February 20, 2022.

More information under: https://sites.google.com/view/e2f2c-ml4smartcities

Scope and Topics of Interest

To harness the vast amounts of real-time data streams generated by smart-city applications, edge-to-fog-to-cloud (E2F2C) processing has emerged as a novel paradigm in which data is processed at each of the three architectural tiers – edge, fog and cloud – and also “en route” at the participating devices along a given E2F2C data path. To achieve this in practical applications, in-depth studies and novel approaches are needed at the interface between machine learning and deep learning, the underlying hardware – accounting for emerging, powerful edge processing devices such as edge GPUs – and large-scale software orchestration relying on resource virtualization.

The special session seeks original contributions and review papers in, but not limited to, the following topics:

  • Distributed machine learning
  • Federated learning
  • Just-in-time deep learning models (e.g. early exiting, dynamic computation graphs)
  • Collaborative Edge Computing with machine/deep learning
  • E2F2C offloading mechanisms
  • Resource-efficient ML/DL at the edge
  • Machine Learning for Internet of Things
  • Multi-modal data analysis (e.g. visual, audio, sensor signals)
  • Applications of machine learning for smart city analytics and decision making

The aim of this special session is to bring together and disseminate state-of-the-art research contributions that address E2F2C processing in the context of smart cities, including the analysis and design of novel algorithms and methodologies, innovative smart-city applications with E2F2C processing, and enabling technologies. Please consider submitting your latest research on the topic.


Special Session at WCCI 2022

We are organizing a special session on Deep Learning for Visual, Audio, and Sensor Data Analysis in Smart City Environments at the International Joint Conference on Neural Networks 2022 (IJCNN-SS-1), held in conjunction with the IEEE World Congress on Computational Intelligence (WCCI) 2022.

Organizers: Alexandros Iosifidis (Aarhus University) and Lukas Esterle (Aarhus University)

Submission Deadline: January 31st, 2022 (11:59 PM AoE) via Submission – WCCI2022


Scope and Topics of Interest

Recent advances in Deep Learning and high-performance computing have led to remarkable solutions for visual, audio, sensor, and multi-modal data analysis problems. Deep Learning-empowered systems can nowadays achieve performance levels in various data analysis tasks which are comparable to, or even exceed, those of humans. Even though these advancements have the potential to open new high-impact applications in Smart City environments, this promise has yet to be met. This is due to challenges in Smart City environments that go beyond the unrestricted analysis of visual, audio, and sensor data provided by Deep Learning models running on high-end Graphics Processing Units (GPUs). The large number of sensors (such as cameras, microphones, thermometers, and motion sensors) available in such environments leads to enormous volumes of collected data being needed for effective data analytics. Rapid-response and privacy requirements prohibit transfer to and processing on powerful servers and instead require processing at the edge. However, with the processing infrastructure imposing restrictions in terms of processing power, battery/electric power consumption, and autonomy (embedded GPUs or low-end processors used in edge and fog computing), efficient high-performing Deep Learning models, as well as effective data fusion schemes, are required. This goes beyond the current capabilities of the state of the art. Thus, novel efficient solutions are needed to successfully employ high-performing Deep Learning models on such processing platforms.

To fully exploit Deep Learning solutions in Smart City environments, which impose restrictions on processing power and memory consumption, demand hard real-time operation, require handling uncertainty in the processing outcomes, and call for a level of interpretability, a number of challenges need to be addressed through theoretical and methodological contributions, including but not limited to:

  • Lightweight Deep Learning models for visual, audio, sensor data analysis
  • Deep Learning models for efficient multimodal data analysis and fusion
  • Sensor time-series analysis based on Deep Learning
  • Efficient Deep Learning methodologies for Internet of Things
  • Deep Learning methodologies for smart cities, including Federated Learning, Transfer Learning, Domain Adaptation, Split Computing
  • Deep Learning for applications in smart city environments, including smart homes, smart lighting, traffic prediction, data anonymization for visual analysis, intelligent transportation systems, vehicular networks

The Special Session will be a forum to exchange ideas and to discuss new developments in Deep Learning for visual, audio, and sensor data analysis in Smart City environments. Please consider submitting your latest research on the topic.

Accepted papers

Several papers have been accepted recently for publication.

The first paper was accepted for presentation at the International Joint Conference on Rules and Reasoning (RuleML+RR 2021). The paper entitled ‘RoboCIM: Towards a Domain Model for Industrial Robot System Configurators despite Tribal Knowledge’ was written together with Daniella Tola, Cláudio Gomes, Carl Schultz, Christian Schlette, and Casper Hansen. We propose a domain model to automatically identify possible configurations among a set of robotic parts and components, reducing the need to manually define possible combinations of parts.

The second paper was accepted for presentation at the International Conference on System Reliability and Safety. The paper entitled ‘Fault Injecting Co-simulations for Safety’, authored together with Mirgita Frasheri, Casper Thule, Hugo Daniel Macedo, Kenneth Lausdahl, and Peter Gorm Larsen, proposes an approach to inject faults into running co-simulations. This will facilitate testing of systems under potential faults in co-simulation environments and enhance the safety of future systems.

Two papers have been accepted for presentation at the Workshop on Self-Improving System Integration (SISSY), part of the International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). The paper entitled ‘Digital twins for collaboration and self-integration’ was co-authored together with Cláudio Gomes, Mirgita Frasheri, Henrik Ejersbo, Sven Tomforde (University of Kiel, Germany), and Peter Gorm Larsen. The second paper, entitled ‘Verification and Uncertainties in Self-integrating System’, was co-authored together with Barry Porter (University of Lancaster, UK) and Jim Woodcock (University of York, UK).

The last paper was accepted for presentation at the Workshop on Self-organized Construction (SOCO), also part of the International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). The paper entitled ‘Towards a Holistic, Self-organised Safety Framework for Construction’ was written together with Christos Chronopoulos, Karsten Winther Johansen, Jochen Teizer, and Carl Schultz. The paper proposes a safety framework for collaborative and self-organized construction utilising prediction mechanisms to identify hazardous zones on the construction site.

New DFF Project

Federated Learning for Online Collaborative Knowledge and Decision-making (FLOCKD) has been accepted for funding by the Danish Independent Research Fund (DFF).

The FLOCKD project will investigate the distribution of Deep Neural Networks (DNNs) in smart camera networks, allowing individual cameras in a networked setting to classify and predict the trajectories and actions of observed objects. While DNNs are indeed very successful in identification and prediction tasks, they are resource-expensive to train and maintain. To overcome this, federated learning has been proposed, combining the learned models of different devices. However, due to the different perceptions of the cameras, a single common DNN might not be viable, and individual, specialised DNNs are required. While utilising such individual specialised networks, we will also develop approaches allowing cameras to request feedback from each other by sharing their specialised networks upon request. We hypothesise this will lead to better network-wide inference.
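Federated learning's basic building block, combining the learned models of different devices, can be sketched as a sample-weighted average of per-device model parameters (the classic FedAvg aggregation step). This is a generic illustration with invented numbers, not FLOCKD's actual algorithm:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-camera model parameters into a global model.

    client_weights: list of 1-D parameter vectors, one per device
    client_sizes:   number of local training samples per device
    Generic FedAvg sketch -- not the project's specialised approach.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # weight clients by data volume
    return coeffs @ stacked              # sample-weighted parameter average

# Three hypothetical cameras with different amounts of local data
w_global = federated_average(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    client_sizes=[10, 10, 20],
)
```

The project's premise is precisely that such a single averaged model may not suit cameras with very different viewpoints, which motivates keeping specialised per-camera networks and sharing them on request instead.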

In the FLOCKD project, I will work with Alexandros Iosifidis (Aarhus University), and we will collaborate with Prof Mohan Kankanhalli (National University of Singapore), Prof Bradley McDanel (Franklin and Marshall College), and Prof Andrea Prati (University of Parma).

UPSIM Project granted

ITEA 3 funding has been granted for the UPSIM (Unleash Potentials in Simulations) project. The project has a total of 41 partners across Europe.

Within this project, we are looking for excellent candidates for a 3-year PostDoc position on credible Digital Twins in product development. Candidates should have a background in Electrical Engineering, Computer Science, or similar and a good understanding of co-simulation, programming, runtime modelling, and dynamic model integration for more agile Digital Twins.

The successful candidate will, among other tasks, work on applying modern, more agile ways of working to Modelling & Simulation. Realizing these modern concepts requires accessible and available IT infrastructure. Continuous Integration and Continuous Development, well established in software development, will be extended to model and simulation development. Services will be developed and applied for assessing data and keeping track of changes. This enables the investigation of elaborate System Simulation processes for collaboration, the application of concepts for continuous model and simulation assessment, and continuous traceability to development artefacts.

TCPS paper accepted

Our paper on Self-aware Cyber-physical Systems has been accepted for publication in the ACM Transactions on Cyber-physical Systems. The paper was co-authored together with Kirstie Bellman, Chris Landauer, Nikil Dutt, Andreas Herkersdorf, Axel Jantsch, Nima TaheriNejad, Peter R Lewis, Marco Platzner, and Kalle Tammemäe. The paper is a result of the SelPhyS Workshop 2018 at Aston University in Birmingham. The next SelPhyS workshop is just around the corner!

ICAART accepted

Together with Mirgita Frasheri and Alessandro V. Papadopoulos (both Mälardalen University, Sweden), I wrote a paper on the willingness to interact. This paper was accepted as a full paper for presentation at next year’s International Conference on Agents and Artificial Intelligence. The paper is entitled “Modeling the Willingness to Interact in Cooperative Multi-Robot Systems”.

Abstract: When multiple robots are required to collaborate in order to accomplish a specific task, they need to be coordinated in order to operate efficiently. To allow for scalability and robustness, we propose a novel distributed approach performed by autonomous robots based on their willingness to help each other. This willingness, based on their individual state, is used to inform a decision process of whether or not to interact with other robots within the environment. We study this new mechanism to form coalitions in the online multi-object k-coverage problem, and compare it with six other methods from the literature. We investigate the trade-off between the number of robots available and the number of potential targets in the environment. We show that the proposed method is able to provide comparable performance to the best method in the case of static targets, and to achieve a higher level of coverage with respect to the other methods when the targets are moving.
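The core idea, a willingness level derived from a robot’s individual state that gates whether it interacts with others, can be illustrated with a toy decision rule. The state variables, weights, and threshold below are invented for illustration and do not come from the paper:

```python
def willingness(battery, task_load):
    """Toy willingness-to-help in [0, 1]: rises with charge, falls with load.

    Illustrative only -- the paper's actual model differs.
    """
    w = 0.5 * battery + 0.5 * (1.0 - task_load)
    return max(0.0, min(1.0, w))

def should_help(battery, task_load, threshold=0.6):
    """A robot joins a coalition when its willingness exceeds a threshold."""
    return willingness(battery, task_load) > threshold

# An idle, well-charged robot helps; a busy, depleted one declines.
idle = should_help(battery=0.9, task_load=0.1)
busy = should_help(battery=0.2, task_load=0.8)
```

The attraction of such a rule is that each robot decides locally from its own state, so coalition formation needs no central coordinator, which is what gives the approach its scalability and robustness.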