Saturday, August 30, 2008

Thesis --- Infrastructure Development for a Mind Attention Interface

Chapter 1
Introduction
For a long time, people have dreamt of using their mind as direct input to control
machines. Researchers working on Brain-Computer Interfaces (BCIs) have been building
practical systems for use by disabled people, and there have been significant results
showing that computers can be slowly but effectively controlled by the human mind
[1, 63]. "Mind" here refers to the combination of thought, emotion, perception,
imagination, and other brain states. In the Mind Attention Interface (MAI), we intend
to measure mind states using electroencephalography (EEG).
Attention has long been a research focus in psychology and neuroscience. Attention
is the cognitive process of selectively concentrating on things that the mind is
interested in [35]. It is one of the cognitive processes considered to have the most
concrete association with the human mind, and it is closely linked with perception [40].
Audio and visual cues for attention are another source that can be used to improve
communication with computers. In the MAI, measurements of attention such as gaze,
head or hand movements are obtained from dedicated recording systems.
The idea of the Mind Attention Interface is to create a platform for interacting
with a virtual reality theatre using EEG and measures of attention. This thesis
describes the construction of such an extensive hardware and software platform,
which is called the MAI System. The rest of this chapter introduces the most
important concepts behind the Mind Attention Interface.
1.1 Brain-Computer Interfaces
1.1.1 Background
A Brain-Computer Interface (BCI) is an interface between a person's brain and
a computer. It is a computer interface that uses only signals from the brain and
does not require any user motor activity [2], such as eye movements or other body
potentials. Research on BCIs started in the 1970s and developed rapidly in the
1990s [25]. BCIs can be classified into two major categories: invasive and
non-invasive. In invasive BCIs, electrodes are implanted into the brain during
neurosurgery, and these systems are used to diagnose and repair damaged brain functions.
(Invasive BCIs will not be discussed further in this thesis.) Non-invasive BCIs, on
the other hand, detect brain signals from the surface of the head, such as the
scalp of a user, using an electrode cap. Electroencephalography (EEG) is the
most commonly used non-invasive BCI technology due to its fine temporal resolution,
ease of use and cost [63]. By converting the acquired analogue signals to digital form
and sending them to a computer for processing, a BCI can be used to control certain
electronic devices.
1.1.2 Electroencephalographic BCIs
The first electroencephalogram (EEG) recording was obtained by Hans Berger
in 1929 [19]. An EEG is a graph of measurements of the electrical activity of
the brain as recorded from electrodes placed on the scalp or on the cortex itself.
Because electrical activity is closely involved in the way the brain works,
EEG provides "direct measurements" of brain function, in comparison with technologies
which rely on blood flow or metabolism, which may be decoupled from
brain activity. Traditionally, this non-invasive technology has several limitations:
Firstly, scalp electrodes are not sensitive enough to pick out the action potentials
of individual neurons; instead, the EEG picks up signals from synchronised
groups of neurons. Secondly, identifying the source locations of measured EEG
potentials is an inverse problem, which means that the estimated locations are very
non-specific. In contrast, other brain-imaging techniques such as functional
magnetic resonance imaging (fMRI)* can achieve a spatial resolution of less than
3 millimetres. Thirdly, due to its susceptibility to noise, this method
can be strongly affected by the surrounding environment, such as 50 Hz AC
noise (50 Hz is the mains power frequency in Australia). On the other hand, EEG
has several positive aspects as a tool for exploring brain activity. As mentioned
above, it is an affordable system for most research groups. The setup and scalp
preparation for modern EEG systems is relatively straightforward. In particular,
its temporal resolution is very high: EEG has a time resolution down to
sub-milliseconds, whereas other methods for researching brain activity have a
time resolution in the order of seconds or even minutes.
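To give a concrete sense of how the 50 Hz mains interference mentioned above is commonly suppressed, the following is a minimal sketch of a notch filter applied to one EEG channel. It assumes NumPy and SciPy are available; the sampling rate and the synthetic signal are purely illustrative and do not correspond to the MAI hardware.

# A minimal sketch of removing 50 Hz mains interference from an EEG channel.
# The sampling rate and the synthetic data below are illustrative assumptions.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 256.0                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1.0 / fs)  # 10 seconds of synthetic data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # alpha + mains noise

# Design a narrow notch filter centred on the 50 Hz mains frequency.
b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)

# Zero-phase filtering avoids shifting the EEG waveform in time.
clean = filtfilt(b, a, eeg)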
The types of waves measured in the EEG signal are studied together with
mental states. The major types of continuous, sinusoidal EEG waves are delta, theta,
alpha, beta, and gamma. The only difference between these signal types is their
frequency range†. They are listed in Table 1.1 with associated mental states.
Electroencephalographic BCI applications can provide disabled users with a
new channel for sending messages to the outside world [66]. In [23], the slow cortical
potential (SCP) was used to control cursor movements on a computer screen.
This rhythm shows up as large drifts in the cortical voltage which last from a few
hundred milliseconds up to several minutes. In [66], a similar cursor-movement BCI
is implemented by controlling the amplitude of sensorimotor activity signals
such as the mu rhythm in the alpha frequency band (8-12 Hz).
* Functional MRI (fMRI) is the use of magnetic resonance imaging to measure the haemodynamic
response related to neural activity in the brain. MRI is a non-invasive method used to render
images of the inside of an object.
† There is no precise agreement on the frequency ranges for each type.
Wave     Frequency range   Associated mental states
Delta    0-4 Hz            certain encephalopathy, deep sleep
Theta    4-8 Hz            trances, hypnosis, lucid dreaming, light sleep
Alpha    8-12 Hz           relaxation, calmness, abstract thinking
Beta     12+ Hz            anxious thinking, active concentration
Gamma    26-100 Hz         higher mental activity including perception, problem solving, fear, and consciousness

Table 1.1: EEG waves associated with mental states [13]
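As an illustration of how the bands in Table 1.1 can be quantified, the sketch below estimates the power in each band for a single EEG channel using Welch's method. It assumes NumPy and SciPy; the band edges follow the table, with an assumed 26 Hz upper limit for beta, and, as noted above, there is no precise agreement on these ranges.

# A minimal sketch of estimating power in the classical EEG bands of Table 1.1.
# Band edges (including the 26 Hz beta/gamma boundary) are illustrative assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 26), "gamma": (26, 100)}

def band_powers(eeg, fs):
    """Return the power in each band for a single EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))      # 2-second windows
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])      # integrate the PSD
    return powers

# Example with synthetic 10 Hz (alpha-band) activity.
fs = 256.0
t = np.arange(0, 30, 1.0 / fs)
print(band_powers(np.sin(2 * np.pi * 10 * t), fs))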
Other commonly used signals in BCIs are Event-Related Potentials (ERPs), Visual Evoked
Potentials (VEPs), and Steady State Visual Evoked Responses (SSVERs) [64, 65].
Among these signals, an ERP response to unpredictable stimuli called the
P300 (or simply P3) is one of the most robust [11]. The P300 ERP appears as
a positive deflection of the EEG voltage at approximately 300 ms after the stimulus.
Based on this phenomenon, a P300 character recognition system is designed in [14] to
choose characters for disabled users: a 6 × 6 grid containing letters and numbers is
displayed on screen, and users are asked to count the number of times the row or
column containing the character they want to select flashes. Every twelve
flashes form a set, in which all rows and columns have flashed once. The P300
amplitude is measured, and the average response to each row and column is used
to determine the selected character.
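The averaging step described above can be summarised in a short sketch. The epoching, artifact handling and scoring window are simplified illustrations rather than the procedure of [14]; the 250-450 ms window and the array layout are assumptions.

# A minimal sketch of the row/column averaging step of a P300 speller.
import numpy as np

def select_character(epochs, flashed_rc, fs, grid):
    """Determine the attended character from averaged P300 responses.

    epochs: array of shape (n_flashes, n_samples), one EEG epoch per flash.
    flashed_rc: integer array of length n_flashes; 0-5 = rows, 6-11 = columns.
    grid: 6x6 nested list of the displayed characters.
    """
    # Mean voltage in an assumed 250-450 ms window after each flash,
    # used here as a crude stand-in for P300 amplitude.
    lo, hi = int(0.25 * fs), int(0.45 * fs)
    scores = epochs[:, lo:hi].mean(axis=1)

    # Average the response over every flash of each row and column (all sets).
    avg = np.array([scores[flashed_rc == k].mean() for k in range(12)])

    # The attended character lies at the intersection of the strongest row and column.
    row, col = int(np.argmax(avg[:6])), int(np.argmax(avg[6:]))
    return grid[row][col]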
Although there are many BCI systems in use at present, the extensive training
required before users can operate such technology and the slowness of communication
are substantial barriers to using BCI systems. One implementation by Wolpaw in [65]
using the mu rhythm achieved 10.88 binary decisions per minute after two months of
training. In other words, a typical cursor move from the centre of a video screen to
a target located at the top or bottom edge takes about 3 seconds. Another example is
the P300 character recognition system implemented by Donchin in [11], which achieved
an online classification rate of 9.23 binary decisions per minute. The classification
rate is the measure BCI researchers use to compare the speed of systems. In the P300
speller, for example, more than five correct binary decisions are needed to choose a
single symbol, since there are 36 symbols in total. To summarise, users are normally
trained through biofeedback for several weeks to months before they can make use of
these systems, and online classification rates are less than 15 binary decisions per
minute [47].
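The figure of "more than five binary decisions per symbol" follows directly from the size of the symbol set, since selecting one of 36 symbols carries log2(36) ≈ 5.17 bits. A small check using the rates quoted above:

# A small check of the speed figures quoted above: with 36 symbols,
# each selection carries log2(36) bits, i.e. just over five binary decisions.
import math

bits_per_symbol = math.log2(36)          # about 5.17
decisions_per_minute = 9.23              # online rate reported for the P300 speller [11]
symbols_per_minute = decisions_per_minute / bits_per_symbol
print(f"{bits_per_symbol:.2f} binary decisions per symbol, "
      f"roughly {symbols_per_minute:.1f} symbols per minute")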
1.2 Attention and the Mind as a Computer Interface
1.2.1 Attentive Interface
The so-called "Attentive Interface" is a relatively new category of user interface,
which dynamically prioritises the information it presents to its users [58]. In
[46], Selker gives a definition of Attentive Interface as: "Context-aware human-
computer interfaces that rely on a person's attention as the primary input". Here
attention is epitomized by eye gaze which is one of the key features that have
been widely used in Attentive Interfaces. Figure 1.1 shows a typical eye-gaze
tracking system. In recent years, interactive applications using eye tracking have
improved to the stage that they now allow people to type, draw, and control
an environment[26]. In these interfaces, the synchronised rotation of the eyes is
often used as a pointing device to point to and select a graphical object located
at the intersection of the eye-gaze vector and the screen (see [52] for example).
Blinking, eye gestures and movements, as well as eye dwell-time can all be used
for selection and control [56]. At present, training is essential to use attentive
interfaces because none of the above methods is natural. Some experts believe
that ongoing research may bring a generation of interfaces that use more natural eye
movements for selection and control [38].
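The intersection of the eye-gaze vector and the screen, used for pointing in the interfaces above, amounts to a ray-plane intersection. The following is a minimal sketch; the coordinate frame, units, and plane definition are illustrative assumptions rather than those of any particular gaze system.

# A minimal sketch of mapping an eye-gaze ray onto a screen plane.
import numpy as np

def gaze_on_screen(eye_pos, gaze_dir, screen_point, screen_normal):
    """Intersect the gaze ray (eye_pos + t * gaze_dir) with the screen plane."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(screen_normal, gaze_dir)
    if abs(denom) < 1e-6:
        return None                      # gaze is parallel to the screen
    t = np.dot(screen_normal, screen_point - eye_pos) / denom
    if t < 0:
        return None                      # screen is behind the viewer
    return eye_pos + t * gaze_dir        # 3D point on the screen plane

# Example: eye 60 cm in front of a screen lying in the z = 0 plane.
hit = gaze_on_screen(np.array([0.0, 0.0, 0.6]), np.array([0.1, 0.0, -1.0]),
                     np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(hit)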
Figure 1.1: Gaze system by SeeingMachinesTM
1.2.2 Mind and Attention for Interaction with a Virtual
Environment
In virtual environments, real-world human movements can be mapped to actions
in a virtual world. The effect of these actions often involves the manipulation
of entire objects in 3D space. Although most emphasis has been placed
on physical controllers together with tracking systems, eye gaze has also been
considered as an interface to a virtual environment [18]. Tanriverdi and Jacob,
in [52], claim that eye-movement-based interaction offers the potential for easy,
natural, and fast ways of interacting in virtual environments. A comparison of
the performance of eye-movement-based and pointing-based interactions in virtual
environments was undertaken, and the results show that the eye-gaze-based technique
has a speed advantage, especially for distant objects in the virtual environment.
Bayliss in [3, 5] demonstrates examples of EEG-based BCI systems controlling
virtual environments. Her earlier research in [4] also indicates how an eye tracker
and a virtual environment affect EEG recording: the results show that neither the
eye tracker nor the virtual environment introduces significantly more noise into the
EEG signal than a computer monitor does. The Mind Attention Interface idea therefore
seems realistic, in that all of these devices should be able to work together without
interfering with one another.
My infrastructure goal was to build a fundamental framework for the Wedge virtual
reality theatre that supported mind and attention data from different sources,
in different formats, and which would go beyond a simple BCI. Firstly,
the real-time analysis of brain states from EEG was planned to be quite sophisticated
and to go beyond a simple classification of signals into frequency bands.
Secondly, it was planned to provide natural feedback of a user's brain states to
the evolution of an immersive virtual experience which included 3D graphics and
surround sound. Thirdly, the measurements of eye-gaze-based attention from a
user in the virtual environment were intended to provide feedback on the user's
brain state. Eye gaze can also play a role in filtering EEG signals for interference
from oculomotor events. Head position, orientation, and other control devices can
also be used to interact with the virtual world. Figure 1.2 shows the feedback
loop that we intend to create in the MAI.
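One common way in which eye data can help filter oculomotor interference out of the EEG, as mentioned above, is to regress a recorded ocular (EOG) reference channel out of each EEG channel. The sketch below illustrates that idea only; it is not the MAI's actual filtering method, and the channel layout is an assumption.

# A minimal sketch of regression-based removal of ocular artifacts from EEG.
import numpy as np

def remove_eog(eeg, eog):
    """eeg: (n_channels, n_samples) array; eog: (n_samples,) ocular reference.
    Returns the EEG with the EOG contribution subtracted from each channel."""
    eog = eog - eog.mean()
    eeg = eeg - eeg.mean(axis=1, keepdims=True)
    # Least-squares propagation coefficient from the EOG to each EEG channel.
    coeffs = eeg @ eog / np.dot(eog, eog)          # shape (n_channels,)
    return eeg - np.outer(coeffs, eog)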
1.3 This Thesis
This thesis describes infrastructure development for a "Mind Attention Interface".
This Mind Attention Interface enables sensor measurements of a subject's mind
states and attention states to be used as control and feedback data for a virtual
environment. Sensor devices measure EEG brain activity, eye gaze, and head and
hand movements. The MAI has been built for use in the Wedge, a two-walled
immersive virtual environment which combines stereoscopic computer graphics
with surround sound. Its goal is to contribute to Human-Computer Interface
(HCI) and Brain-Computer Interface (BCI) research in virtual environments,
and, by so doing, provide new insights into human behaviour and the human
mind.

Figure 1.2: Feedback loop in the MAI
The infrastructure of the MAI described here consists of four major layers.
The first layer is the Data Acquisition (DAQ) layer, which handles the device
drivers, data transmission over the network, and timing. The second layer is a
central server that handles peer connections and header information, and provides the
application-layer clients with a gateway to access data over the network. The third
layer is the signal-processing layer, which processes raw data and provides high-level
information for further use. The Spectral Engine Connection Kit (SECK)
enables multiple processing modules to be used in this layer and thus distributes
processing over the network. The last layer in the MAI is the application layer,
which connects to the MAI and displays information in the virtual environment.
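To make the role of the first two layers more concrete, the sketch below shows the kind of time-stamped, header-prefixed message a DAQ-layer module might push towards the central server. The port, header fields and packing format are hypothetical illustrations and not the MAI System's actual wire protocol.

# A minimal sketch of a DAQ-layer module sending one frame of samples to the
# central server. All names, ports and formats here are hypothetical.
import socket
import struct
import time

SERVER_ADDRESS = ("localhost", 9000)   # assumed address of the central server

def send_samples(device_id, channel_values):
    """Send one time-stamped frame of sensor samples with a small header."""
    # Header: device id (unsigned short), timestamp (double), channel count (int).
    header = struct.pack("!Hdi", device_id, time.time(), len(channel_values))
    payload = struct.pack("!%df" % len(channel_values), *channel_values)
    with socket.create_connection(SERVER_ADDRESS) as sock:
        sock.sendall(header + payload)

# Example (requires a listening server): one frame of four EEG channel values.
# send_samples(1, [12.5, -3.1, 0.7, 8.9])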
My major contributions are the MAI System, which includes the first and second
layers, and the software libraries created for the fourth layer. Although the
signal-processing layer is largely the work of others, its interfacing components
are also a contribution of the present thesis.
When I commenced this project, the MAI had just been funded and no MAI
infrastructure existed. In the last one and a half years, my research has included
the investigation and purchase of hardware, and the development of software for
the MAI framework. This thesis describes my research process: system hardware
and associated software are described in Chapter 2. Chapter 3 specifies
the MAI System software platform requirements and presents a study of a
set of comparable systems. Chapter 4 contains a discussion of the MAI software
design considerations based on those requirements. Chapter 5 describes the
implementation of the system's enabling components; utilities that create supportive
environments for carrying out experiments are also described in Chapter 5.
Subsequently, profiling results for the current system's performance, in comparison
with another similar system, are shown in Chapter 6. An overview of this project and
future work are given in Chapter 7.
