Together with Kristof Van Moffaert, Tim Brys, and Ann Nowe from the Vrije Universiteit Brussel; Arjun Chandra from the University of Oslo; and Peter Lewis from the University of Birmingham (now at Aston University), I wrote a paper on adaptively selecting weights for multi-objective reinforcement learning.
The paper, entitled “A Novel Adaptive Weight Selection Algorithm for Multi-Objective Multi-Agent Reinforcement Learning”, was submitted to the IEEE World Congress on Computational Intelligence (WCCI) and was accepted today!
Besides our accepted paper, my simulator, called CamSim, has also been selected for presentation in the demo track at SASO 2013.
I submitted the following short video to explain the simulator:
[vimeo http://www.vimeo.com/70176909 w=500&h=281]
Like last year, I co-supervised interns over the summer here at the Institute of Networked and Embedded Systems.
This year we had five interns: three women and two men. Their project was to control a marble maze (Labyrinth) by interpreting human motion.
To recognise motion, the interns used a Microsoft Kinect. They calculated the positions and displacements of the arms on a laptop and sent the computed values to a Netduino board, which controlled the servomotors.
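The core of such a pipeline is mapping a tracked arm displacement to a servo command. The sketch below is purely illustrative, not the interns' actual code: the displacement range, neutral angle, and scaling are my assumptions.

```java
// Illustrative sketch: map an arm displacement (as tracked via Kinect
// skeleton data) to a servo angle to forward to the Netduino board.
// All ranges and constants here are assumptions, not the interns' code.
public class ServoMapping {
    // Map a displacement in metres, clamped to [-0.3, 0.3], onto a
    // standard hobby-servo angle in degrees (0..180, 90 = neutral).
    static int displacementToAngle(double dMetres) {
        double clamped = Math.max(-0.3, Math.min(0.3, dMetres));
        return (int) Math.round(90 + clamped / 0.3 * 90);
    }

    public static void main(String[] args) {
        System.out.println(displacementToAngle(0.0));   // prints 90 (neutral)
        System.out.println(displacementToAngle(0.3));   // prints 180
        System.out.println(displacementToAngle(-0.5));  // prints 0 (clamped)
    }
}
```

In practice the laptop would send each computed angle to the Netduino over a serial or network link, and the board would translate it into a PWM signal for the servo.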
They also extended their gesture-recognition work to control Parrot AR.Drones.
They created a short video about their work and results:
We just released the first official version of the CamSim simulation tool.
Over the last few months, the pre-release of CamSim underwent several major refactorings.
The key changes and benefits of CamSim are:
1) Ease of generating test scenarios, with the number of cameras and objects limited only by computer memory.
2) Camera behaviour using an economic, pheromone-inspired approach is implemented, as well as several communication policies. These approaches and communication policies are described in ‘Socio-Economic Vision Graph Generation and Handover in Distributed Smart Camera Networks’ by Lukas Esterle, Peter R. Lewis, Xin Yao and Bernhard Rinner.
3) Several bandit solvers are implemented to provide meta-management at the camera level, selecting between communication policies and auction strategies dynamically at runtime. This behaviour is described in ‘Learning to be Different: Heterogeneity and Efficiency in Distributed Smart Camera Networks’ by Peter R. Lewis, Lukas Esterle, Arjun Chandra, Bernhard Rinner and Xin Yao.
4) The motion behaviour of objects can be replaced via a reflection mechanism. Besides the predefined movements ‘straight’ (movement in a straight line with random bouncing off the simulation boundary) and ‘waypoints’ (movement along a predefined path), a movement behaviour based on Brownian motion has been added to the collection.
5) All aspects of camera behaviour, including bandit solvers, communication strategies and pheromone learning, can be replaced using reflection mechanisms.
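The reflection-based replacement in points 4 and 5 can be sketched as follows. The interface and class names (`Movement`, `BrownianMovement`) are illustrative assumptions for this post, not CamSim's actual API; the point is only that a behaviour class named in a configuration string can be loaded and instantiated at runtime.

```java
// Sketch of swapping in a movement behaviour by name via Java reflection.
// Names are hypothetical; CamSim's real interfaces may differ.
interface Movement {
    double[] nextPosition(double[] current);
}

class BrownianMovement implements Movement {
    private final java.util.Random rng = new java.util.Random(42);

    public double[] nextPosition(double[] current) {
        // Brownian-style motion: small Gaussian step in each dimension.
        return new double[] {
            current[0] + rng.nextGaussian() * 0.1,
            current[1] + rng.nextGaussian() * 0.1
        };
    }
}

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // The class name would normally come from a scenario/config file.
        Class<?> cls = Class.forName("BrownianMovement");
        Movement m = (Movement) cls.getDeclaredConstructor().newInstance();
        double[] next = m.nextPosition(new double[] {0.0, 0.0});
        System.out.println(next.length);  // prints 2
    }
}
```

A new behaviour then only needs to implement the expected interface (or extend the provided abstract class) and be named in the scenario file; no simulator code has to change.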
To give prospective developers a low barrier to entry, abstract classes are available for extension. Furthermore, introductory tutorials are available online in the GitHub repository.
The first release can be downloaded from GitHub; see also the official page.
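To make the bandit-based meta-management from point 3 concrete, here is a minimal epsilon-greedy sketch: each "arm" is a communication policy, and the solver picks between them at runtime based on observed rewards. The policy names, reward model, and solver choice are illustrative assumptions, not CamSim's actual implementation (the paper cited above covers several solver variants).

```java
// Minimal epsilon-greedy bandit sketch of camera-level meta-management:
// selecting between communication policies at runtime. Illustrative only.
import java.util.Random;

public class EpsilonGreedyDemo {
    static class EpsilonGreedy {
        final double epsilon;
        final double[] value;  // running mean reward per arm
        final int[] count;     // pulls per arm
        final Random rng;

        EpsilonGreedy(int arms, double epsilon, long seed) {
            this.epsilon = epsilon;
            this.value = new double[arms];
            this.count = new int[arms];
            this.rng = new Random(seed);
        }

        int select() {
            // Explore with probability epsilon, otherwise exploit the
            // arm with the highest estimated value.
            if (rng.nextDouble() < epsilon) return rng.nextInt(value.length);
            int best = 0;
            for (int a = 1; a < value.length; a++)
                if (value[a] > value[best]) best = a;
            return best;
        }

        void update(int arm, double reward) {
            count[arm]++;
            // Incremental update of the mean reward estimate.
            value[arm] += (reward - value[arm]) / count[arm];
        }
    }

    public static void main(String[] args) {
        // Hypothetical communication policies as bandit arms.
        String[] policies = {"BROADCAST", "SMOOTH", "STEP"};
        EpsilonGreedy solver = new EpsilonGreedy(policies.length, 0.1, 7);
        Random env = new Random(3);
        for (int t = 0; t < 1000; t++) {
            int arm = solver.select();
            // Toy reward model: pretend arm 1 pays best on average.
            double reward = (arm == 1 ? 0.8 : 0.4) + env.nextGaussian() * 0.05;
            solver.update(arm, reward);
        }
        int best = 0;
        for (int a = 1; a < solver.value.length; a++)
            if (solver.value[a] > solver.value[best]) best = a;
        System.out.println(policies[best]);
    }
}
```

In CamSim, the reward would come from the utility of handovers achieved under the chosen policy, and the learned choice can differ per camera, which is exactly the heterogeneity studied in the ‘Learning to be Different’ paper.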
This year’s summer school of the Aware initiative (AWASS 2013) is about to end. I was asked to mentor one of the case studies, entitled ‘Computational Self-awareness in Smart-Camera Networks’, which used the open-source simulation tool CamSim. The group I mentored came up with three ideas to improve on our previous work:
1. Object trajectory prediction
2. Probabilistic auctioning for out-of-FOV objects
3. Clustering the field of view (FOV) so that a camera maintains a separate vision graph per cluster
Our paper, entitled ‘Learning to be Different: Heterogeneity and Efficiency in Distributed Smart Camera Networks’, has been accepted as a full paper for presentation and publication at this year’s Conference on Self-Adaptive and Self-Organizing Systems (SASO).