network models with particular characteristics (such as the amygdala and
hippocampus as “Hebbian associators” and the pulvinar as a topographically organized array of neurons
forming a saliency map). Activation
of the hypothalamus releases virtual
hormones. Cortical regions use recurrent and multi-layer neural networks
and self-organizing maps. Due to the
Lego-like nature of BL, simple models
can be replaced by more sophisticated
models as they become available.
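As a rough illustration of the kind of simple model such a module might start from, the following is a minimal Hebbian associator sketch: weights grow when pre- and post-synaptic activity coincide, so a paired stimulus later evokes the associated response on its own. The class name, sizes, and learning rate are illustrative assumptions, not BL internals.

```python
import numpy as np

class HebbianAssociator:
    """Toy Hebbian associator (illustrative; not the BL implementation)."""

    def __init__(self, n_in, n_out, eta=0.1):
        self.w = np.zeros((n_out, n_in))  # association weights
        self.eta = eta                    # learning rate

    def respond(self, x):
        # Response is a simple linear readout of the learned associations.
        return self.w @ x

    def learn(self, x, y):
        # Classic Hebbian outer-product update: dw = eta * post * pre
        self.w += self.eta * np.outer(y, x)

assoc = HebbianAssociator(n_in=3, n_out=2)
stimulus = np.array([1.0, 0.0, 1.0])
response = np.array([1.0, 0.0])
for _ in range(5):
    assoc.learn(stimulus, response)

# After repeated pairing, the stimulus alone evokes the associated response.
print(assoc.respond(stimulus))
```

The same update rule, wrapped in a common interface, is the sort of simple component that the Lego-like architecture allows to be swapped for a richer model later.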
One of the goals of BabyX is to visually represent functional neural-circuit
models in their appropriate anatomical
positions. For example, our Basal Ganglia model (based on Redgrave et al.)
controls motor actions and has an appropriate 3D location and geometry,
as in Figure 1, and the activity of its
specific neurons provides inputs to the
shaders, showing the circuit in action
as it processes.
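The core of a Redgrave-style basal ganglia model is selection by disinhibition: competing action “channels” submit salience values, and tonic inhibition is released only for the most salient channel. A minimal sketch of that selection step, with invented channel names and thresholds:

```python
# Illustrative selection-by-salience sketch (not the actual BabyX circuit).
def select_action(saliences, selection_threshold=0.3):
    """Return the winning channel, or None if nothing is salient enough."""
    winner = max(saliences, key=saliences.get)
    if saliences[winner] < selection_threshold:
        return None  # tonic inhibition holds: no action is released
    return winner

saliences = {"reach": 0.7, "look": 0.4, "vocalize": 0.2}
print(select_action(saliences))  # the highest-salience channel wins: reach
```

In the full model this competition is implemented by interacting neural populations rather than a single `max`, but the input/output behavior is the same: one action released, the rest suppressed.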
Emotions in BabyX are, in fact, coordinated brain-body states that modulate
activity in other circuits (such as increasing the gain on perceptual circuits).
Emotional states modulate the sensitivity of behavioral circuits. For example,
stress lowers the threshold for triggering
a brainstem central pattern generator
that, in turn, generates the motor pattern of facial muscles in crying.
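That threshold-lowering mechanism can be sketched in a few lines. All names and numbers below are illustrative assumptions about how an emotional state might gate a central pattern generator (CPG), not the actual BabyX parameters:

```python
# Hypothetical sketch: stress lowers the effective trigger threshold
# of a brainstem CPG, so the same distress signal is more likely to
# release the crying motor pattern.
def cpg_triggered(distress, stress, base_threshold=0.8, gain=0.5):
    effective_threshold = base_threshold - gain * stress
    return distress > effective_threshold

# The same distress level fails to trigger crying when calm...
print(cpg_triggered(distress=0.5, stress=0.0))  # False
# ...but triggers it under stress.
print(cpg_triggered(distress=0.5, stress=0.8))  # True
```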
Neurotransmitters and neuromodulators play many key roles in BabyX’s
learning and affective systems.
An example of a physiological variable
that affects both the internal and external state of BabyX is dopamine, which
provides a good example of how modeling at a low level interlinks various
phenomena. In BabyX, virtual dopamine plays a key role in motor activity and reinforcement learning. It can
also modulate plasticity in the neural
networks and have subtle behavioral
effects such as pupil dilation and blink
rate. The use of such low-level models
means the user can adjust BabyX’s behavioral dynamics, sensitivities, and
even temperament by adjusting virtual neurotransmitter and hormone levels.
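One appeal of a single low-level variable is that several phenomena fall out of it at once. The sketch below shows how one virtual dopamine signal could both scale plasticity (a reinforcement-learning rate) and drive a visible correlate such as pupil diameter; the constants and function names are assumptions for illustration, not BabyX internals:

```python
# Illustrative sketch: one virtual dopamine signal with two effects.
def plasticity_gain(dopamine, base_rate=0.05):
    # Reward-related dopamine bursts amplify weight updates.
    return base_rate * (1.0 + dopamine)

def pupil_diameter_mm(dopamine, baseline=3.0, dilation_gain=1.5):
    # The same signal produces a subtle behavioral correlate.
    return baseline + dilation_gain * dopamine

burst = 0.6  # a phasic dopamine burst after an unexpected reward
print(plasticity_gain(burst))    # learning is transiently boosted
print(pupil_diameter_mm(burst))  # and the pupils dilate slightly
```

Tuning `base_rate` or `dilation_gain` in a model like this is what “adjusting behavioral dynamics and temperament” amounts to at the implementation level.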
Sensory input. BabyX takes audiovisual input from a Web camera and
microphone, and “touch” from a keyboard or
touchscreen, and is designed to work
without special hardware. BL can inter-
face to different devices, and the BabyX
project exists separately from choice
could be as a virtual agent in AR. Such
an additional level of engagement
would enhance the experience but also
benefit from the tight emotional sig-
naling feedback we have developed.
Learning through interaction. One
motivation for the BabyX project is to ex-
of display systems (such as virtual real-
ity, or VR, or augmented reality, or AR).
Advances in AR, particularly in systems
that allow for facial-expression track-
ing, accurate eye tracking, and depth
gaze registration of the user, mean
an obvious possible implementation
Figure 3. BabyX (version 1). Detailed biomechanical face model simulating expressions
generated from muscle activations.
Figure 4. BabyX (version 4, under development). Screenshot from real-time interactive
psychobiological virtual infant simulation.