Special Session @ EUSIPCO

We have a special session on Edge-Fog-Cloud Machine Learning for Smart Cities Applications at the European Signal Processing Conference (EUSIPCO) 2022! The deadline for paper submission is February 20, 2022.

More information under: https://sites.google.com/view/e2f2c-ml4smartcities

Scope and Topics of Interest

To harness the power of the vast amounts of real-time data streams from smart city applications, edge-to-fog-to-cloud (E2F2C) processing has emerged as a novel paradigm in which data are processed at each of the three architectural tiers – edge, fog, and cloud – and also “en route” at the participating devices along a given E2F2C data path. To achieve this in practical applications, in-depth studies and novel approaches are needed at the interface between machine learning and deep learning, the underlying hardware – accounting for emergent and powerful edge processing devices such as edge GPUs – and large-scale software orchestration relying on resource virtualization.

The special session seeks original contributions and review papers in, but not limited to, the following topics:

  • Distributed machine learning
  • Federated learning
  • Just-in-time deep learning models (e.g. early exiting, dynamic computation graphs)
  • Collaborative Edge Computing with machine/deep learning
  • E2F2C offloading mechanisms
  • Resource-efficient ML/DL at the edge
  • Machine Learning for Internet of Things
  • Multi-modal data analysis (e.g. visual, audio, sensor signals)
  • Applications of machine learning for smart city analytics and decision making
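One of the topics above, just-in-time models with early exiting, can be illustrated with a minimal sketch: a cheap classifier head answers on the edge device when it is confident, and only uncertain inputs fall through to a heavier computation. The weight matrices and the confidence threshold below are illustrative placeholders, not part of any specific method from the session:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical two-tier classifier: a cheap head that can run on an
# edge device, and a heavier head reached only when needed.
# Both weight matrices are random placeholders, not trained models.
W_edge = rng.normal(size=(4, 3))
W_full = rng.normal(size=(4, 3))

def classify(x, threshold=0.8):
    """Early exit: return the cheap prediction if it is confident
    enough; otherwise fall through to the heavier computation."""
    p_edge = softmax(W_edge.T @ x)
    if p_edge.max() >= threshold:
        return int(p_edge.argmax()), "edge"
    p_full = softmax(W_full.T @ x + W_edge.T @ x)
    return int(p_full.argmax()), "full"

label, tier = classify(rng.normal(size=4))
```

In a real E2F2C deployment the second branch would be an offload to a fog or cloud node rather than a local matrix product; the control flow, exit early when confident, is the point of the sketch.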

The aim of this special session is to bring together and disseminate state-of-the-art research contributions that address E2F2C processing in the context of smart cities, including the analysis and design of novel algorithms and methodologies, innovative smart city applications with E2F2C processing, and enabling technologies. Please consider submitting your latest research on these topics.

Organizers

https://sites.google.com/view/e2f2c-ml4smartcities

Special Session at WCCI 2022

We are organizing a special session on Deep Learning for visual, audio, and sensor data analysis in Smart City environments at the International Joint Conference on Neural Networks 2022 (IJCNN-SS-1), held in conjunction with the IEEE World Congress on Computational Intelligence (WCCI) 2022.

Organizers: Alexandros Iosifidis (Aarhus University) and Lukas Esterle (Aarhus University)

Submission Deadline: January 31st, 2022 (11:59 PM AoE) via Submission – WCCI2022


Scope and Topics of Interest

Recent advances in Deep Learning and high-performance computing have led to remarkable solutions for visual, audio, sensor, and multi-modal data analysis problems. Deep Learning-empowered systems can nowadays achieve performance levels in various data analysis tasks that are comparable to, or even exceed, those of humans. Even though these advancements have the potential to open up new high-impact applications in Smart City environments, this promise has yet to be met. This is due to challenges in Smart City environments that go beyond the unrestricted analysis of visual, audio, and sensor data provided by Deep Learning models run on high-end Graphics Processing Units (GPUs). The large number of sensors (such as cameras, microphones, thermometers, and motion sensors) available in such environments leads to enormous volumes of collected data needed for effective data analytics. Rapid-response and privacy requirements prohibit transfer to and processing on powerful servers, and instead require processing at the edge. However, with the processing infrastructure setting restrictions in terms of processing power, battery/electric power consumption, and autonomy (embedded GPUs or low-end processors used in edge and fog computing), efficient high-performing Deep Learning models, as well as effective data fusion schemes, are required. This goes beyond the current capabilities of the state of the art. Thus, novel efficient solutions are needed to successfully employ high-performing Deep Learning models on such processing platforms.

To fully exploit Deep Learning solutions in Smart City environments – which impose restrictions on processing power and memory consumption, demand hard real-time operation, require handling of uncertainties in the processing outcomes, and call for a level of interpretability – a number of challenges need to be addressed through theoretical and methodological contributions, including but not limited to:

  • Lightweight Deep Learning models for visual, audio, sensor data analysis
  • Deep Learning models for efficient multimodal data analysis and fusion
  • Sensor time-series analysis based on Deep Learning
  • Efficient Deep Learning methodologies for Internet of Things
  • Deep Learning methodologies for smart cities, including Federated Learning, Transfer Learning, Domain Adaptation, Split Computing
  • Deep Learning for applications in smart city environments, including smart homes, smart lighting, traffic prediction, data anonymization for visual analysis, intelligent transportation systems, vehicular networks

The Special Session will be a forum to exchange ideas and to discuss new developments in Deep Learning for visual, audio, and sensor data analysis in Smart City environments. Please consider submitting your latest research on these topics.

Accepted papers

Several papers have been accepted recently for publication.

The first paper was accepted for presentation at the International Joint Conference on Rules and Reasoning (RuleML+RR 2021). The paper, entitled ‘RoboCIM: Towards a Domain Model for Industrial Robot System Configurators despite Tribal Knowledge’, was written together with Daniella Tola, Cláudio Gomes, Carl Schultz, Christian Schlette, and Casper Hansen. We propose a domain model to automatically identify possible configurations among a set of robotic parts and components, lowering the need for manually defining possible combinations of parts.

The second paper was accepted for presentation at the International Conference on System Reliability and Safety. The paper, entitled ‘Fault Injecting Co-simulations for Safety’, authored together with Mirgita Frasheri, Casper Thule, Hugo Daniel Macedo, Kenneth Lausdahl, and Peter Gorm Larsen, proposes an approach to inject faults into a running co-simulation. This will facilitate testing of systems under potential faults in co-simulation environments and enhance the safety of future systems.

Two papers have been accepted for presentation at the Workshop on Self-Improving System Integration (SISSY), part of the International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). The paper entitled ‘Digital twins for collaboration and self-integration’ was co-authored together with Cláudio Gomes, Mirgita Frasheri, Henrik Ejersbo, Sven Tomforde (University of Kiel, Germany), and Peter Gorm Larsen. The second paper, entitled ‘Verification and Uncertainties in Self-integrating System’, was co-authored together with Barry Porter (University of Lancaster, UK) and Jim Woodcock (University of York, UK).

The last paper was accepted for presentation at the Workshop on Self-organized Construction (SOCO), also part of the International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). The paper, entitled ‘Towards a Holistic, Self-organised Safety Framework for Construction’, was written together with Christos Chronopoulos, Karsten Winther Johansen, Jochen Teizer, and Carl Schultz. The paper proposes a safety framework for collaborative and self-organized construction utilising prediction mechanisms to identify hazardous zones on the construction site.

New DFF Project

Federated Learning for Online Collaborative Knowledge and Decision-making (FLOCKD) has been accepted for funding by the Danish Independent Research Fund (DFF).

The FLOCKD project will investigate the distribution of Deep Neural Networks (DNNs) in smart camera networks, allowing individual cameras in a networked setting to classify and predict trajectories and actions of observed objects. While DNNs are indeed very successful in identification and prediction tasks, they are resource-expensive to train and maintain. To overcome this, federated learning has been proposed, combining the learned models of different devices. However, due to the different perceptions of the cameras, a single common DNN might not be viable, and individual, specialised DNNs are required. While utilising such individual specialised networks, we will also develop approaches allowing cameras to request feedback from each other by sharing their specialised networks upon request. We hypothesise that this will lead to better network-wide inference.
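The core idea of combining learned models of different devices can be sketched as FedAvg-style parameter averaging. This is a generic illustration, not the FLOCKD method itself, and the toy parameter vectors below are placeholders:

```python
import numpy as np

def federated_average(models, weights=None):
    """FedAvg-style aggregation: (optionally weighted) mean of the
    per-device parameter vectors, producing one shared model."""
    models = np.asarray(models, dtype=float)
    return np.average(models, axis=0, weights=weights)

# Three cameras with locally trained (toy) parameter vectors.
local = [np.array([1.0, 2.0]), np.array([3.0, 2.0]), np.array([2.0, 5.0])]
global_model = federated_average(local)  # ≈ [2., 3.]
```

In FLOCKD's setting, where a single common DNN might not be viable, such an averaged model would at best serve as a starting point; the project instead envisions cameras exchanging whole specialised networks on request.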

In the FLOCKD project, I will work with Alexandros Iosifidis (Aarhus University), and we will collaborate with Prof Mohan Kankanhalli (National University of Singapore), Prof Bradley McDanel (Franklin and Marshall College), and Prof Andrea Prati (University of Parma).

UPSIM Project granted

ITEA 3 funding has been granted for the UPSIM (Unleash Potentials in Simulation) project. The project has a total of 41 partners across Europe.

Within this project, we are looking for excellent candidates for a 3-year PostDoc position on credible Digital Twins in product development. Candidates should have a background in Electrical Engineering, Computer Science, or similar and a good understanding of co-simulation, programming, runtime modelling, and dynamic model integration for more agile Digital Twins.

The successful candidate will, among other tasks, work on applying modern, more agile ways of working to Modelling & Simulation. Realizing these modern concepts requires accessible and available IT infrastructure. The Continuous Integration and Continuous Development practices well established in software development will be extended to model and simulation development. Services will be developed and applied for assessing data and keeping track of changes. This enables the investigation of elaborated System Simulation processes for collaboration, the application of concepts for continuous model and simulation assessment, and continuous traceability to development artefacts.

TCPS paper accepted

Our paper on Self-aware Cyber-physical Systems has been accepted for publication in the ACM Transactions on Cyber-physical Systems. The paper was co-authored together with Kirstie Bellman, Chris Landauer, Nikil Dutt, Andreas Herkersdorf, Axel Jantsch, Nima TaheriNejad, Peter R Lewis, Marco Platzner, and Kalle Tammemäe. The paper is a result of the SelPhyS Workshop 2018 at Aston University in Birmingham. The next SelPhyS workshop is just around the corner!


ICAART accepted

Together with Mirgita Frasheri and Alessandro V. Papadopoulos (both Mälardalen University, Sweden), I wrote a paper on the willingness to interact. This paper was accepted as a full paper for presentation at next year's International Conference on Agents and Artificial Intelligence. The paper is entitled “Modeling the Willingness to Interact in Cooperative Multi-Robot Systems”.

Abstract: When multiple robots are required to collaborate in order to accomplish a specific task, they need to be coordinated in order to operate efficiently. To allow for scalability and robustness, we propose a novel distributed approach performed by autonomous robots based on their willingness to help each other. This willingness, based on their individual state, is used to inform a decision process of whether or not to interact with other robots within the environment. We study this new mechanism to form coalitions in the online multi-object k-coverage problem, and compare it with six other methods from the literature. We investigate the trade-off between the number of robots available and the number of potential targets in the environment. We show that the proposed method is able to provide comparable performance to the best method in the case of static targets, and to achieve a higher level of coverage with respect to the other methods when the targets are moving.
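The decision process described in the abstract, each robot computing a willingness score from its own state and using it to decide whether to interact, can be sketched as follows. The state variables, formula, and threshold here are illustrative assumptions, not the model from the paper:

```python
def willingness(battery, task_load):
    """Toy willingness score in [0, 1], derived from a robot's own
    state. High battery and low current task load raise the
    willingness to help others; the formula is purely illustrative."""
    return max(0.0, min(1.0, battery * (1.0 - task_load)))

def should_help(battery, task_load, threshold=0.5):
    """Decide whether to interact with (help) another robot."""
    return willingness(battery, task_load) >= threshold

should_help(0.9, 0.2)  # → True  (idle robot with charged battery)
should_help(0.4, 0.8)  # → False (busy robot with low battery)
```

Because each robot evaluates this locally, coalitions form in a fully distributed way, which is what gives the approach its scalability and robustness.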

Article in Transactions on Cyber-physical Systems accepted for publication

Together with John NA Brown, I have been working on Networked Self-awareness for several years now – today, our article on the subject was accepted for publication in the Transactions on Cyber-physical Systems. The paper is entitled “I Think Therefore You Are: Models for Interaction in Collectives of Self-Aware Cyber-physical Systems”.


SISSY 2019 Papers accepted

Both papers submitted to the Workshop on Self-Improving System Integration (SISSY) have been accepted for publication.

The first paper, entitled “CHARIOT – Towards a Continuous High-level Adaptive Runtime Integration Testbed”, was a collaborative effort of Chloe Barnes and Peter Lewis (both Aston University), Kirstie Bellman and Chris Landauer (both Topcy House Consulting), Jean Botev (University of Luxembourg), Ada Diaconescu (Telecom ParisTech), Christian Gruhl and Sven Tomforde (both University of Kassel), Phyllis Nelson (California State Polytechnic University), Anthony Stein (University of Augsburg), and Christopher Stewart (Ohio State University).

The second paper, entitled “‘When you believe in things that you don’t understand’: the effect of cross-generational habits on self-improving system integration”, was written together with Chloe Barnes (Aston University) and John NA Brown (LinkedIn).

“CHARIOT – Towards a Continuous High-level Adaptive Runtime Integration Testbed” – ABSTRACT: Integrated networked systems sense a common environment, learn to navigate the environment and share their experiences. Sharing experiences simplifies learning, reducing costly trial and error in complex environments. However, integration produces dependencies that make constituent systems less robust to failures, unexpected outputs and performance anomalies. Even with APIs and reflective, self-aware techniques, system integration still requires expert programming and tuning. Self-integrating systems proposed in recent research automate integration, but can be challenging to validate at scale. We therefore propose CHARIOT, a common test environment to allow for different approaches and systems to be deployed, assessed and compared on a shared platform for the development of self-integrating systems. In this paper, we discuss the underlying requirements and challenges, potential metrics, and a system metamodel to accommodate these.

“‘When you believe in things that you don’t understand’: the effect of cross-generational habits on self-improving system integration” – ABSTRACT: Humans who experience unexpected feedback to certain actions that they are not able to explain might develop superstitious behaviour. In this paper, we discuss how similar behaviour might also occur in engineered systems. We provide a thought experiment regarding such behaviour in computational systems. This is a first step towards an awareness of others and their effect on the system itself, as described in networked self-awareness.