Projects

Research projects I am or have been involved in:

Project Year Funding
Method development: Analyzing real life eye tracking data using realistic computational models of cognition 2021 – 2022 eSSENCE
Ethics for autonomous systems/AI 2020 – 2025 WASP-HS
Modeling using Deep Neural Networks 2017 – 2020 eSSENCE
A Plurality of Lives 2016 – 2017 The Pufendorf Institute
Modelling Cognitive Development in Robots 2015 – 2016 The Crafoord Foundation
A New Principle for Robot Hands 2013 – 2014 The Crafoord Foundation
IMPROVE 2013 – 2014 The Pufendorf Institute
Goal-Leaders 2011 – 2014 EU, FP7
The Meaning of Actions 2011 – 2012 The Pufendorf Institute
Bimanual Object Manipulation 2010 – 2011 The Crafoord Foundation
Shape Perception for Object Manipulation 2008 – 2010 The Crafoord Foundation
Thinking in Time 2008 – 2018 The Swedish Research Council (VR)
SmartHand 2006 – 2010 EU, FP6
Mind RACES 2004 – 2007 EU, FP6
Learning for Adaptable Visual Assistants 2002 – 2005 EU, FP5
Ikaros 2001 – Lund University
The Artificial Hand Project 1997 – 2005 NUTEK, SSF
Cognitive Aspects of Conditioning and Habituation 1997 – 2000 HSFR
Robot with Autonomous Spatial Learning 1994 – 1997 The Carl Trygger Foundation

The imperfect creator creating the perfect: Ethics for autonomous systems/AI (2020-2025)

Which values and norms should be applied when we assess the function of autonomous AI systems from an ethical perspective? Even if technology is universal, morality is not. Ethical theories are already driving the debate regarding what AI should and should not do.

In this project, researchers will carry out a consequence analysis of various ethical theories to describe the mechanisms required for autonomous AI systems to act morally. The work is grounded in modern learning theory, which assumes that AI systems can learn about consequences from experience and from observations of human behaviour and emotions.
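
As a purely illustrative sketch (not part of the project's methodology), the following Python fragment shows how two ethical theories can rank the same set of consequences differently; the actions and utility numbers are invented for the example.

```python
# Hypothetical sketch: comparing how two ethical theories rank the same actions.
# Each action leads to an outcome described by the utilities it gives to the
# individuals affected. All names and numbers here are illustrative only.

outcomes = {
    "action_a": [9, 9, 1],   # high total utility, but one person is badly off
    "action_b": [6, 6, 6],   # lower total, but equal for everyone
}

def utilitarian(utilities):
    # Utilitarian aggregation: maximize the sum of utilities.
    return sum(utilities)

def egalitarian(utilities):
    # Maximin aggregation: judge an outcome by its worst-off individual.
    return min(utilities)

for name, rule in [("utilitarian", utilitarian), ("egalitarian", egalitarian)]:
    best = max(outcomes, key=lambda a: rule(outcomes[a]))
    print(f"{name} theory prefers {best}")
# utilitarian theory prefers action_a
# egalitarian theory prefers action_b
```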

Central concepts are utility, equality and superiority over human beings. The ability of robots to take over simple tasks and execute them at a lower cost, with better accuracy and without rest makes the utility obvious. The ability of robots and AI systems to take over more complex tasks and execute them as if they were almost human illustrates the concept of equality. The next leap in AI development, when systems become self-teaching yet lack human faults and shortcomings, and can perhaps make better decisions than humans, may make them superior to us. But how will they act when faced with ethical dilemmas? Should the benefit of the many come at the expense of the few? What consideration should be given to human rights and integrity? What algorithms should be in control? What does a state, or a society, hold as good? Who decides? This is an interdisciplinary project conducted by researchers in robotics, philosophy and cognitive science.

Method development for analysis and modelling of large scale electrophysiological recordings using deep artificial neural networks (2017-2020)

This project aims at integrating advanced technologies for: 1) the analysis of recorded brain activity, 2) mathematical image analysis of sensory cues and the outcomes of actions, and 3) real-world applications of artificial intelligence in humanoid robots. The computational platform needed to achieve this will be built on deep artificial neural networks. The project's two main objectives are the following:

Aim 1: To develop plausible systems-level network models that can reproduce observed neurophysiological data and generate testable hypotheses about policy/value functions. Method: To extend and connect existing computational models of cortex and basal ganglia and to enhance them with data on network functional connectivity estimated from electrophysiological measurements.
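
A minimal sketch of the kind of connectivity estimate mentioned above, using simulated data and pairwise correlations as a stand-in for the project's actual estimation methods (which are not specified here):

```python
# Hypothetical sketch: estimating network functional connectivity as the
# pairwise correlation between simultaneously recorded channels. The data
# are simulated; channel count and coupling strength are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000

# Simulate recordings where channels 0-3 share a common latent drive,
# standing in for a functionally coupled sub-network.
latent = rng.standard_normal(n_samples)
signals = rng.standard_normal((n_channels, n_samples))
signals[:4] += 0.8 * latent

# Functional connectivity matrix: correlation between every channel pair.
connectivity = np.corrcoef(signals)

# Such a matrix could then constrain the coupling weights of a
# systems-level network model of cortex and basal ganglia.
print(np.round(connectivity[:4, :4], 2))  # coupled block: high off-diagonal values
```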

Aim 2: To evaluate action-based learning in autonomous humanoid robots. Method: Humanoid robots learn to control an artificial hand in a skilled reaching task through action-based learning and sensory feedback; systems for intelligent perception are combined with autonomous robotic systems, and direct comparisons to brain processes are made.
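
As a toy illustration of action-based learning from sensory feedback, the following sketch lets a simulated 2-D "hand" improve its reach from reward alone; the task, learning rule and parameters are illustrative assumptions, not the project's setup:

```python
# Hypothetical sketch of action-based learning in a reaching task: an agent
# adjusts a 2-D hand position using reward (negative distance to target) alone.
import random

target = (0.7, -0.3)
hand = [0.0, 0.0]

def reward(pos):
    # Sensory feedback: higher reward the closer the hand is to the target.
    return -((pos[0] - target[0]) ** 2 + (pos[1] - target[1]) ** 2)

for _ in range(2000):
    # Try a small random action and keep it only if the outcome improves,
    # i.e. learn from the consequences of self-generated movements.
    action = (random.uniform(-0.05, 0.05), random.uniform(-0.05, 0.05))
    candidate = [hand[0] + action[0], hand[1] + action[1]]
    if reward(candidate) > reward(hand):
        hand = candidate

print([round(x, 2) for x in hand])  # converges near (0.7, -0.3)
```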

Modelling Cognitive Development in Robots (2015-2016)

The project will let a robot reproduce the main developmental stages of a child's first year. We will investigate a number of key questions: (1) How important is the body for cognitive development? (2) What is the role of social interaction? (3) What is the role of goal-direction in learning? (4) What specific innate abilities are necessary? The project is financed by the Crafoord Foundation.

A new principle for tactile sensation in robot hands (2013-2014)

The goal of the project is to develop robot hands with a novel type of touch sensor that makes the whole hand sensitive to touch. The project is financed by the Crafoord Foundation.

IMPROVE (2013-2014)

The ultimate goal of the IMPROVE Theme is to define and develop solutions for improving the quality of life of organisms that rely on visual information. In an urban space, this applies not only to healthy individuals but also to elderly and visually impaired individuals as well as animals.

In an interdisciplinary effort, IMPROVE aims at determining optimum levels of visual information necessary for living creatures to navigate the visual world, while preserving biodiversity in a shared environment.

IMPROVE is intended as a means to integrate the know-how available at Lund University (LU), and lay the foundations for new cross-disciplinary research focusing on the pioneering development of a new generation of visual optimization techniques. The Theme engages ten scientists from various research areas at LU, including Biology, Cognitive Sciences, Environmental Psychology, Medicine, Psychophysics, and Physics.

Goal-Leaders (2011–2014)

Goal-Leaders is a 3-year project funded by the European Union within the "Cognitive Systems and Robotics" initiative. Grant agreement no: FP7 270108 (STREP)

The Goal-Leaders project aims at developing biologically constrained architectures for the next generation of adaptive service robots, with unprecedented levels of goal-directedness and proactivity. Goal-Leaders will develop builder robots able to carry out externally assigned tasks (e.g., fetching objects, composing building parts) while keeping their homeostatic drives within a safe range (e.g., never running out of energy or getting hurt), operating autonomously and for prolonged periods of time in open-ended environments. To this aim, Goal-Leaders will pursue a combined neuroscientific, computational and robotic study of three key sets of competences.
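
As an illustration of how homeostatic drives can be kept within a safe range alongside externally assigned tasks, a minimal behaviour-arbitration sketch is shown below; the drive variables and thresholds are invented for the example and do not reflect the actual Goal-Leaders architecture:

```python
# Hypothetical sketch of drive arbitration: the robot pursues an assigned
# task only while its homeostatic variables stay in a safe range; otherwise
# it switches to a corrective behaviour. Variables and thresholds are
# illustrative assumptions.
def select_behaviour(energy, temperature, task):
    # Homeostatic drives take priority over externally assigned goals.
    if energy < 0.2:
        return "recharge"
    if temperature > 0.9:
        return "cool_down"
    return task  # safe range: continue the assigned goal-directed task

print(select_behaviour(energy=0.8, temperature=0.5, task="fetch_object"))  # fetch_object
print(select_behaviour(energy=0.1, temperature=0.5, task="fetch_object"))  # recharge
```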

The Goal-Leaders achievements beyond the state of the art will be assessed against behavioral and neuroscientific data, and by realizing three demonstrators in which robots will perform autonomous navigation and construction tasks, and will readapt without reprogramming to novel task allocations or changes in their environment.

The Meaning of Actions (2011-2012)

The Meaning of Actions: Motor Functions, Intentions and the Brain is an eight-month project at the Pufendorf Institute. The subject area of the project is highly interdisciplinary, involving researchers from psychology, cognitive science, computer science, rehabilitation engineering and neuroscience. The project runs from September 2011 to April 2012. The project leaders are Professor Christian Balkenius and Professor Peter Gärdenfors.

The overall goal of the project is to study how actions and intentions are represented in the human brain. The standard view of actions is that they are driven by goals or intentions in combination with beliefs. Yet, the relations between actions and goals are deeply problematic. Theoretical and empirical results show that actions are sometimes not driven by goals at all or by goals that are quite different from what we imagine.

We have an intuitive conviction that most common actions are guided by our intentions, but this is probably to some extent an illusion. A phenomenon called choice blindness has been discovered that can be used to investigate the cognitive architecture of intention and cognitive control. Choice blindness is the failure to detect mismatches between intention and outcome in simple decision tasks. The phenomenon can reveal the mechanisms of how we ascribe intentions for actions to ourselves and to others. In this context, the wider social context of action will also be considered. Humans seem to be alone in forming joint intentions in the sense that the intentional actions of one individual are coordinated with those of another individual. Such alignments allow humans to achieve more advanced forms of cooperation than other animals.

Another way to investigate how actions are represented in the brain is to design robots that can interact with humans. To do this, it is necessary for the robot to understand the goals and intentions of the humans. Currently, great efforts are made to develop companion robots. So far the research has focussed on technology, making the robots perform actions such as taking things out of a refrigerator. In the future, a central part of a robot’s social capacity will be its ability to read the intentions of its user.

To accomplish this we need a better understanding of how actions are controlled in humans. We will investigate the requirements of a robot architecture for the perception of human intentional movements. We will also look at movements that communicate an intention to act, in particular, in the context of human-robot communication.

Ikaros (2001–)

The goal of the Ikaros project is to develop an open infrastructure for system-level modeling of the brain, including databases of experimental data, computational models and functional brain data. The system makes heavy use of emerging standards for Internet-based information and will make all information accessible through an open web-based interface. In addition, Ikaros can be used as a control architecture for robots, which in the longer term will lead to the development of a brain-inspired robot architecture.

Ikaros has been used in several EU-funded projects including Goal-Leaders, Mind RACES and LAVA.
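
For illustration only, the sketch below mimics the module-and-connection style of system-level modelling in Python; Ikaros itself is implemented in C++ with its own module interface, so this is not the actual Ikaros API:

```python
# Illustrative sketch of module-and-connection system-level modelling.
# This mimics the idea in Python and is NOT the actual Ikaros API.
class Module:
    def tick(self, inputs):
        raise NotImplementedError

class Retina(Module):
    def tick(self, inputs):
        return {"image": [0.1, 0.9, 0.4]}        # stand-in sensory data

class SaliencyMap(Module):
    def tick(self, inputs):
        image = inputs["image"]
        peak = max(range(len(image)), key=image.__getitem__)
        return {"focus": peak}                   # index of most salient input

# A "system" is modules plus connections, executed once per simulated tick.
modules = {"retina": Retina(), "saliency": SaliencyMap()}
connections = [("retina", "image", "saliency", "image")]

inputs = {name: {} for name in modules}
outputs = {"retina": modules["retina"].tick({})}
for src, out, dst, inp in connections:
    inputs[dst][inp] = outputs[src][out]         # route data along a connection
outputs["saliency"] = modules["saliency"].tick(inputs["saliency"])
print(outputs["saliency"])  # {'focus': 1}
```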

Bimanual Object Manipulation (2010–2011)

Bimanual Object Manipulation was a one year project financed by The Crafoord Foundation.

Shape Perception for Object Manipulation (2008 – 2010)

Shape Perception for Object Manipulation was financed by The Crafoord Foundation.

Thinking in Time (2008 – 2018)

Thinking in Time: Cognition, Communication, and Learning (CCL) is a multidisciplinary research center at Lund University. We study the physiological and cognitive bases of language and communication with a special focus on temporal processing. Our goal is to provide a description of the timing and sequencing of language and cognition, from millisecond-level perception at the cellular level to the long-term development of words and concepts.

SmartHand (2006–2010)

SmartHand is a 3-year project funded by the European Union within Nanosciences, Nanotechnologies, Materials and new Production Technologies (NMP). Grant agreement: NMP4-CT-2006-0033423 (STREP).

The Smart Bio-Adaptive Hand Prosthesis (SmartHand) was a highly innovative, interdisciplinary project, combining forefront research from material sciences, bio- and information technologies with cognitive neuroscience to address a major societal problem: the development of an artificial hand displaying all the basic features of a real human hand. The successful realisation of this highly visionary project required crossing the boundaries of distinct scientific fields, merging the forefront expertise of the consortium with state-of-the-art research results from relevant fields, to improve quality of life for people with disabilities by improving mobility and diminishing the phantom pains associated with amputation.

The SmartHand prosthesis could have major impacts on the rehabilitation of amputees. People who have lived through a traumatic amputation often suffer severe depression as a result of a distorted self-image and fear of social rejection. Phantom pains are also common, forcing the amputee to take heavy painkillers and complicating a return to the labour market. However, it has been shown that electrical stimulation of the nerves has a positive, pain-relieving effect. We believed that a neural interface with recording and stimulating capability could significantly improve quality of life by relieving phantom pains. Furthermore, a functional artificial hand could help restore the user's self-image and social acceptance. An artificial hand that restores functionality could be of great importance for rehabilitating disabled amputees back to work.

The SmartHand smart bio-adaptive hand prosthesis does more than just replicate the physical functionality of a real human hand. The SmartHand also uses a unique technology to provide the user with a measure of sensation when using it. The robotic hand has forty sensors that are activated when pressed against an object. These sensors are connected to the patient's remaining nerves in the upper arm, so that the stimulation can be interpreted by the brain as coming from the SmartHand.
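
A minimal sketch of the sensory-feedback idea: pressure readings are mapped to per-site stimulation amplitudes. The sensor count follows the text, but the mapping, ranges and threshold are illustrative assumptions:

```python
# Hypothetical sketch: map touch-sensor pressures to stimulation intensities
# for the remaining nerves. Only the sensor count (forty) comes from the text;
# the threshold, scaling and ranges are invented for the illustration.
N_SENSORS = 40

def to_stimulation(pressures, threshold=0.05, max_amplitude=1.0):
    # Ignore sub-threshold noise, then scale pressure linearly into a
    # bounded stimulation amplitude per electrode site.
    return [0.0 if p < threshold else min(p, 1.0) * max_amplitude
            for p in pressures]

pressures = [0.0] * N_SENSORS
pressures[3] = 0.6          # object pressing on one fingertip sensor
print(to_stimulation(pressures)[3])  # 0.6
```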

Mind RACES (2004–2007)

Mind RACES is a three-year EC-funded project (Sixth Framework Programme, Information Society and Technologies, Cognitive Systems) involving 8 partners. It is mainly focused on the concept of anticipation. The project started on October 1st, 2004 and was formally completed in December 2007. The Mind RACES website will nevertheless be maintained and continuously updated, and it will include new work of the consortium on prediction and anticipation.

The general goal of the Mind RACES project is to investigate different anticipatory cognitive mechanisms and architectures in order to build Cognitive Systems endowed with the ability to predict the outcome of their actions, to build a model of future events, to control their perception anticipating future stimuli and to emotionally react to possible future scenarios. Such Anticipatory Cognitive Systems will contribute to the successful implementation of the desired ambient intelligence.
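
As a toy example of an anticipatory mechanism, the sketch below lets an agent act on a learned forward model of action outcomes rather than waiting for the stimulus; the "world" and learning rate are invented for the illustration:

```python
# Hypothetical sketch of an anticipatory mechanism: a learned forward model
# predicts the outcome of each action, and the agent acts on the prediction.
world = {"left": 0.2, "right": 0.9}          # true (hidden) outcome of each action
forward_model = {"left": 0.5, "right": 0.5}  # agent's predicted outcomes

for _ in range(50):
    # Act on anticipated outcomes, then correct the prediction from experience.
    action = max(forward_model, key=forward_model.get)
    outcome = world[action]
    forward_model[action] += 0.1 * (outcome - forward_model[action])

print(max(forward_model, key=forward_model.get))  # right
```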

After three years of research, the partners of the Mind RACES consortium produced more than 80 publications and a range of robotic and software artefacts, and organised important events, such as ABIALS 2008 and the Fall Symposium 2005, that show the importance of anticipation in various cognitive contexts.

LAVA (2002–2005)

Learning for Adaptable Visual Assistants (LAVA) was a 3 year EC funded Research and Technology Development project in the Information Society Technologies programme of the 5th Framework. Grant agreement: IST-2001-34405.

Xerox Research Centre Europe is the coordinating partner in this project. The LAVA project began in May 2002. The main objective of the project is to devise machine learning technologies for the reliable categorisation of generic object classes in real-world images, and for the interpretation of events in video data.

The goal is to create fundamental enabling technologies for cognitive vision systems and to understand the systems- and user-level aspects of their applications. Technologically, the objectives are the robust and efficient categorisation and interpretation of large numbers of objects, scenes and events, in real settings, and automatic online acquisition of knowledge of categories, for convenient construction of applications. Categorisation is fundamentally a generalisation problem, which we shall solve using measures of distance between visual descriptors known as "kernels". We aim to dramatically improve generalisation performance by incorporating prior knowledge about the behaviour of descriptors within kernels, and by exploiting the large amounts of unlabelled data available to vision systems. Finally we aim to exploit this technology in integrated systems that employ vision for information retrieval in a mobile setting, and systems that derive symbolic representations from video.
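
A minimal sketch of kernel-based categorisation in this spirit, using an RBF kernel over toy descriptor vectors rather than real image features, with a nearest-class-similarity rule standing in for the project's actual methods:

```python
# Hypothetical sketch of kernel-based categorisation: visual descriptors are
# compared through an RBF kernel, and a new image is assigned to the class
# with the highest mean kernel similarity. Descriptors here are toy vectors.
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Kernel = similarity measure between two visual descriptors.
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Toy labelled descriptors for two object classes.
cars = np.array([[0.9, 0.1], [0.8, 0.2]])
bikes = np.array([[0.1, 0.9], [0.2, 0.8]])

def classify(descriptor):
    sims = {label: np.mean([rbf_kernel(descriptor, x) for x in examples])
            for label, examples in [("car", cars), ("bike", bikes)]}
    return max(sims, key=sims.get)

print(classify(np.array([0.85, 0.15])))  # car
```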

The Artificial Hand Project (1997–2005)

The artificial hand project was funded by The Swedish National Board of Industrial and Technical Development (NUTEK) and The Swedish Foundation for Strategic Research (SSF).

The overall objective was to develop a novel strategy for motor control of functional hand prostheses based on electrical signals generated from multiple muscle electrodes or microchips implanted in the peripheral or central nervous system. The use of Artificial Neural Networks (ANNs) was essential to fulfil this purpose. A further purpose was to develop systems for artificial sensibility to be applied to such hand prostheses and to patients with loss of sensory nerve function. The overall goal was to create new possibilities for the rehabilitation of amputees and paralysed patients.

The project was multidisciplinary and involved several subprojects, including a demonstration that rat sciatic axons are capable of regenerating through the via holes of an implanted silicon sieve electrode. Furthermore, we were able to register nerve signals via the chip after electrical stimulation of the nerve roots. An in vitro model was set up and used to demonstrate that certain chip designs can reduce the problem of crosstalk. We also demonstrated that central nervous axons are capable of growing into a chip if attracted by pieces of peripheral nerve.

ANNs were used to recognize complex muscle signals from multiple surface electrodes in order to associate specific signal patterns with specific movements of a virtual hand. By recording from several surface-mounted electrodes on the arm, we were able to predict the corresponding motion of the hand using an artificial neural network.
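
A minimal sketch of this pattern-recognition step on simulated data, with scikit-learn's MLPClassifier standing in for the project's own neural networks:

```python
# Hypothetical sketch: classify which hand movement a multi-electrode
# muscle-signal pattern corresponds to. The data are simulated, and
# scikit-learn stands in for the ANNs actually used in the project.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_electrodes = 6

# Simulate rectified EMG feature vectors for two movements: each movement
# activates a different subset of electrodes, plus noise.
def sample(movement, n=100):
    base = np.zeros(n_electrodes)
    base[:3] = 1.0 if movement == 0 else 0.0   # "open hand" pattern
    base[3:] = 1.0 if movement == 1 else 0.0   # "close hand" pattern
    return base + 0.2 * rng.standard_normal((n, n_electrodes))

X = np.vstack([sample(0), sample(1)])
y = np.array([0] * 100 + [1] * 100)

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
net.fit(X, y)
print(net.predict(sample(1, n=1)))  # [1] -> predicted "close hand"
```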

Cognitive Aspects of Conditioning and Habituation (1997–2000)

The project was financed by HSFR. Its goal was to construct a computational model of the cognitive components in conditioning and habituation.

Conditioning and habituation are two interacting psychological phenomena with a number of similarities. In conditioning, an animal is exposed to some events, and as a consequence, it learns to associate a certain behavior with a specific situation. In habituation too, an event occurs repeatedly, but in this case, the reaction of the animal wanes with repeated exposure.

The dynamics of habituation is very similar to the extinction of a response that has previously been learned during conditioning. In both cases, the response becomes less probable or weaker with each occurrence of the event. There is one large difference between the two situations, however. In extinction, a learned response is weakened, but in habituation the reaction that dies away is typically an innate orienting reaction.

Conditioning, extinction and habituation have traditionally been described as fairly simple phenomena in which a change in a single variable, the associative strength or an attention factor, carries the whole explanatory burden. However, a large range of empirical results gathered over the years shows that a single variable is not sufficient for the job. This is especially salient in situations where earlier experiences influence learning.

Both habituation and extinction can be interrupted if a novel stimulus is presented. These two phenomena, disinhibition and dishabituation, show very similar properties. Another similarity is that both extinguished and habituated responses spontaneously recover with time in a similar way. This is yet another reason to suspect that the mechanisms behind the two processes may be the same.

A central goal of the project was to investigate the apparent similarities between extinction and habituation and to try to give a coherent explanation for the two phenomena in terms of expectations.
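
As a toy version of such an expectation-based account, the sketch below drives the orienting response by the mismatch between expected and actual stimulation, so the response to a repeated stimulus wanes; the learning rate is an illustrative assumption:

```python
# Hypothetical sketch of the expectation idea: the orienting response is
# driven by the mismatch between what is expected and what occurs, so a
# repeated stimulus provokes a waning reaction.
expectation = 0.0
rate = 0.3

for trial in range(1, 9):
    stimulus = 1.0                           # the same event occurs repeatedly
    response = abs(stimulus - expectation)   # react to the unexpected part
    expectation += rate * (stimulus - expectation)
    print(f"trial {trial}: response {response:.2f}")
# The printed response decreases trial by trial: habituation. Withholding
# the stimulus (stimulus = 0.0) would make the now-unexpected omission
# drive the expectation back down, mirroring extinction.
```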

Robot with Autonomous Spatial Learning (1994–1997)

The project Robot with Autonomous Spatial Learning was financed by the Carl Trygger Foundation.

The goal of the project was to develop a robot that was able to solve various problems of spatial navigation. The sensory inputs of the robot were a combination of tactile, ultrasonic and visual information. We strived for a robot that could solve the following problem types in increasing order of difficulty:

Reactive obstacle avoidance using tactile and ultrasonic sensors; place recognition based on ultrasonic information only; exploratory behavior; visual obstacle avoidance; visual place recognition; goal-seeking behavior using both ultrasonic and visual information; attention focusing on changes in the environment; and linguistic production of information concerning such changes, using either speech synthesis or written output on a monitor.

The main result of the project was a robot that could learn to navigate in natural environments using visual information. It used the elastic template-matching algorithm developed within the project to recognize spatial locations and to navigate towards goals using a sequence of visual subgoals.
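
For illustration, a minimal template-matching sketch for place recognition, using normalised correlation over toy "views"; the elastic deformation that gave the project's algorithm its name is omitted, so this shows only the matching core:

```python
# Hypothetical sketch of template-based place recognition: a stored view is
# compared with the current view by normalised correlation, and the best
# matching template names the place. Views here are toy 1-D patterns.
import numpy as np

def similarity(view, template):
    # Normalised cross-correlation between two (flattened) views.
    v = (view - view.mean()) / view.std()
    t = (template - template.mean()) / template.std()
    return float(np.mean(v * t))

places = {
    "corridor": np.array([0.1, 0.9, 0.1, 0.9, 0.1]),
    "doorway":  np.array([0.9, 0.9, 0.1, 0.1, 0.1]),
}

current = np.array([0.2, 0.8, 0.1, 0.9, 0.2])
best = max(places, key=lambda p: similarity(current, places[p]))
print(best)  # corridor
```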