AFFORDING DIFFERENCE
Different Bodies / Different Cultures / Different Expressions / Different Abilities

     

TABLE of CONTENTS

 

INTRODUCTION

THE STUDIES

THE TOOLS

FORMAT AND PROCEDURE

RESULTS

Theme 1 -- Guidance to get Started

Theme 2 -- Body Part Extrapolation and Engagement

Theme 3 -- Designing for Dis-alignment

Theme 4 -- Inclusive Performing

Theme 5 -- Speech Stimulation

Theme 6 -- MotionComposer/MetaBody Box

CONCLUSIONS

FUTURE WORK

CREDITS

PUBLICATIONS

 

 

Introduction

This collaborative project involved co-organizers and associate partners: Palindrome, Universidad de Valladolid, Reverso, IMM, STEIM, InfoMus, Instituto STOCOS and AMBER Platform.

The project concerned the development and implementation of interactive technologies for different(ly abled) bodies. It challenged the notion of ability/disability, considering all bodies as different(ly abled), while paying particular attention to bodies that are usually considered disabled and affording them new tools of expression in which their difference acquires a positive value.

- Tools for understanding and raising an awareness of cultural diversity
- Tools for fostering embodied communication for a sustainable culture

Using motion tracking technology (including video-based and controller-based systems), dance and music, the questions the research poses include: How can we remove barriers to expression? Technology tends to reduce gestural expression; how can we expand it -- expand range of movement, range of expression? How can we promote a more positive awareness of difference, through disalignment from normative conceptions of ability or intelligible expression? How do we generate affordances that invite deviant (alternative) behaviors which foster plurality and not homogenization? What are the cultural differences in the perception of difference? How do these differences relate to acceptance and integration (inclusion)? How do different societies (including governmental and non-governmental organizations) approach inclusion?

What if the language of gestures and movements could either overcome the obstacles of spoken language or even create new languages in which all kinds of bodies with diverse abilities (some of them usually considered "disabled") could communicate?
The research involved several interactive technologies and their continuous adaptation in relation to other technological developments within MetaBody, in particular in view of their integration into the future architecture of the MetaBody Project.

The Studies

From July 2013 to January 2016, a team of workshop leaders, engineers, composers, choreographers and assistants sought evidence that interactive digital movement-to-music technologies can play a role in affording dance and music engagement among highly diverse individuals and, in so doing, provide new methods of promoting inclusion and the acceptance of diversity. The work took place in the form of 28 workshops in 6 European countries. A total of 242 persons with disabilities took part, as well as 119 therapists, teachers and caretakers. Detailed records are available.

The types and severity of disability varied widely, as did age and demographics. Disabilities included Rett Syndrome, Autism Spectrum Disorder, Cerebral Palsy, Quadriplegia, Parkinson's, Alzheimer's and others. Most workshops also included "non-disabled" participants (including professional dancers). Ages of participants ranged from 8 to 85. The work was organized through various institutions for persons with disabilities and participation was free.

The Tools

The technology we used included sensors, motion tracking software, mapping, and systems for sound and music generation. While the exact hardware and software varied from site to site, it always included video-based (including so-called 3D-video) motion tracking; in a much smaller number of cases, electrode-based systems were used to sense body contact. The motion tracking included EyesWeb and EyeCon software. Sound and music generation were the result of programming in SuperCollider, PureData, Max/MSP and similar systems (technical details are available).

Together these devices provided new affordances in music-making -- new relationships of body to sound. This concerns not only the greater range of body parts and gestures that can be used to play music; the devices also contribute "open affordances" -- features that allow a more open form of exploration, in which searching, discovering and playing are basic afforded actions. The dance and music are still dependent on the users' capabilities, of course, but less in the form of measurable skills and more in the form of sensory and attentive focusing, which may be amplified by qualities such as openness, playfulness and creativity.

Format and Procedure

While there was some variation in procedure, sessions generally alternated individual and collective exercises. The work began with the entire group doing a warm-up/body-work of 30-45 minutes. This was followed by a demonstration of the interactive system, and the opportunity for participants to get an individual taste of the experience. Next, we divided the group into smaller groups of 3-6 persons. In this section, we tried to let the individual needs of participants guide the workshop. This included storytelling scenarios and little performances. At the end we would bring everyone together for a finale, which was followed by a de-briefing -- a discussion which sometimes included the persons of other abilities, and sometimes only workshop leaders, therapists, family members and caretakers. In most, though not all, cases videos were made and records of experiences were taken.

Results

While some of the movement-music systems we used worked exactly as planned, in a surprising number of cases users upended our intentions. They did not use the equipment as we intended, but rather imposed their own creative ideas and impulses -- crashing our preconceptions concerning creative movement and music. Perhaps more than anything else, it was this "breaking the rules" that has guided our project. It has led to a profound re-thinking and re-designing of interactive environments for human expression for persons of other abilities.

There exists little research in this field, and thus we were excited by the richness and variety of our results. Some publications have appeared.

What follows is an overview of some of the key themes which were explored.

 

 

   

Theme 1 -- Guidance to get Started

Making music through free movements in space is not something most of us -- with or without other abilities -- are used to. The absence of haptic controllers can be confusing to some, and indeed it takes a little guidance to join in. For persons with good speech and cognitive ability, one can simply explain it, but for others we might begin by holding hands and essentially guiding their movements. For persons in wheelchairs, we would do this first from in front, then from behind, and finally, by our holding very still, the user can "hear their own movements". Thus, the introduction is made step-by-step, beginning with touch; ultimately they are left on their own to explore.

 

source video_300310 / length_0:52

 

 

source video_300205 / length_0:18

 

 

 

Theme 2 -- Body Part Extrapolation and Engagement

What is it that engages the human psyche in a movement/music experience? This varies from person to person. Among other factors, it depends on the movements used as well as the music and mapping. An important element is a sense of exploration and discovery. At first the user might discover that the bending of the knees generates a sound; next, that the intensity of the movement changes the intensity of the sound. Typically, soon after this, users begin to extrapolate the interaction onto other parts of the body -- exploring what different body parts they can use. Somewhat confused by the absence of any haptic responder, notice how Maria stamps her foot on the floor at 00:10 in order to see what effect this action might have on the music. We see this extrapolation take many forms -- in a creative spirit, people naturally go from body part to body part, engaged in the adventure of discovery.

 

source video_300125 / length_0:16

 

 

 

 

Theme 3 -- Designing for Dis-alignment

We have noticed, in increasing detail and sophistication, the aspects of human movement which, when sonified, are most meaningful to movers in their movement-music expression. While there is important diversity in range of expression, ability and body type, we found the disalignment(1) context important in designing systems that accommodate aberrant behavior. Specifically, this means:

1. systems with the broadest possible range of mappings
2. systems that are equivocal, employing for example fuzzy logic rather than strictly 1-to-1 mappings (a minimal sketch follows this list)
3. focusing on activity-based parameters (as opposed to position-oriented parameters), since many people cannot, or do not want to, be limited by position-fixing controllers
4. systems for which there is no "wrong" way to play them
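
To make point 2 concrete, here is a minimal sketch of the difference between a strict 1-to-1 mapping and an equivocal one. This is our own Python illustration, using weighted randomness as a stand-in for fuzzy behavior; the actual environments were written in systems such as SuperCollider and PD:

    import random

    def strict_mapping(arm_height):
        # 1-to-1: a given input value always yields the same note
        return 48 + int(arm_height * 24)          # MIDI notes 48-72

    def equivocal_mapping(arm_height):
        # Equivocal: the input biases, but does not determine, the output.
        # Each candidate note is weighted by its closeness to the
        # "suggested" note, so that nearby notes remain possible and
        # no movement is ever simply "wrong".
        suggested = 48 + arm_height * 24
        candidates = list(range(48, 73))
        weights = [1.0 / (1.0 + abs(n - suggested)) for n in candidates]
        return random.choices(candidates, weights=weights)[0]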

This implies a profound re-thinking of system design. As one of the designers, Andreas Bergsland, put it: "The concept of affordance can be useful when designing interactive environments, because it invites thinking about users, technology and audience as an ecosystem where reciprocal interchange of information and sensation take place. It highlights the fact that both thinking and sensing are distributed and embodied processes, where environment, technology and users constantly feed back on each other." This dynamic looping process enriches the experience and contributes to the creation of scenarios helpful in integrating the experience on a collective level.

In several parts of his session, Frederick's arms weren't necessarily reaching out to the sides so as to be adequately tracked by the system, but would often be positioned in his lap, pointing towards the camera or backwards away from it. This would frequently make the system replace the user input with "default" values -- in this case, the absence of one or both arms for the tracking software would generate a value corresponding to minimum arm height (arm way down). This affects the environment so that the choice of notes will be from the low-pitched end of the piano. When he had only one arm to the side, something which happened quite often, the insistent bass voice would be accompanied by a voice in the treble range of the register. Together, this was in fact one reason why Frederick's playing occasionally took on the flavour of late romantic piano music, à la Chopin and Liszt. Frederick's non-adherence to the "rules" of the interaction highlighted for us how "glitches" of the design or "mis-use" could in fact have aesthetically pleasing results. Together with similar incidents, we have come to embrace faults and failures of different sorts as a positive thing that can lead to new design innovations and new ways of thinking about movement, technology and interaction.
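
The fallback behavior described above can be sketched as follows. This is an illustrative Python fragment with invented names and a normalized height scale; the actual tracking was built in EyesWeb:

    # When the tracker loses an arm, it reports a default minimum height,
    # which the mapping reads as "arm way down" -- the low end of the piano.
    LOWEST, HIGHEST = 21, 108                  # MIDI range of a piano

    def arm_height_or_default(tracked_height):
        # the tracker yields None when an arm is out of view
        return 0.0 if tracked_height is None else tracked_height

    def height_to_pitch(height):               # height normalized 0.0-1.0
        return int(LOWEST + height * (HIGHEST - LOWEST))

    # Frederick's arms in his lap: both default to 0.0 -> insistent bass
    print(height_to_pitch(arm_height_or_default(None)))   # 21
    # One arm raised to the side: a treble voice joins the bass
    print(height_to_pitch(arm_height_or_default(0.8)))    # 90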

 

source video_300201 / length_0:16   No Permission for Public Viewing

 

 

In the following example -- not for reasons of physical limitation, but for purely artistic reasons -- Damien elected to reach towards his audience. Again, this "wrong" way of playing limits the range of sounds, yet can nevertheless enhance the personal value of the experience.

source video_300372 / length_0:22

 

The children we worked with in Istanbul surprised us repeatedly with their clever, alternative methods of playing. The metaphor of "playing a musical instrument" is quickly overtaken by freer and more complex forms of physical expression.

source video_300351 / length_1:43

 

Daniel has a mental disability. His movements tend to be quite stiff and limited. We used a pitch-bend effect and he discovered his body in a new way. He stretched his fingers, rotated his arms, used foot movements and even small jumps. Perhaps most striking, however, were the pauses he employed, freezing in place to delineate the effect he was having on the music. He would smile broadly, in satisfaction at that effect.

This demonstrates key principles in interactive dance-music experiences:

1. Stillness-to-action. This is the most basic and perhaps most powerful of movement-music mappings, namely that movement causes sound and stillness causes silence. Notice: stillness is not a passive experience -- people do not normally freeze! It is a special task. When frozen, one tends to listen. Thus, by repeatedly stopping and starting, one very quickly gains a clear sense that the body is linked causally to the sound. (A small code sketch of principles 1 and 3 follows this list.)

2. Small vs. large body-movements. Daniel not only used small finger movements to control sound, but also large body movements. This alternation is based on two basic metaphors: small controlled movements (musician), and large body movements (dancer). While the former develops a fine sense of control and causality, the latter leads to increased physicality (breathing faster), with its inherent excitement and stimulation.

3. Height-to-pitch. For reasons that are not entirely clear, stretching up tall implies higher-pitched sounds, and bending low implies low ones. (Obviously, even the words "high" and "low" embody this parallel.) In any case, the mapping is highly intuitive.

4. Body-part extrapolation. As he explored the sound environment, Daniel began more and more to use different body parts. After hearing what his fingers were doing, he tried the head, torso and feet, even jumping on one occasion. This extrapolation from one body part to another is intuitive, though not altogether logical, since not all of these body parts were actually being tracked. This freedom from defined goals, and the individualization of the experience, is an example of disalignment. His movements grew, not only in size and body part, but in originality. We saw a progression from defined, mapped experiences to freer, non sequitur (artistic) elements.
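
Principles 1 and 3 can be sketched in a few lines. The following Python fragment is hypothetical (the threshold and ranges are invented); the workshop environments themselves were written in SuperCollider, PD and CSound:

    ACTIVITY_THRESHOLD = 0.05     # below this, we treat the body as still

    def map_frame(activity, body_height):
        # activity: quantity of motion, normalized 0.0-1.0
        # body_height: height of the tracked silhouette, normalized 0.0-1.0
        if activity < ACTIVITY_THRESHOLD:
            return None                        # stillness -> silence
        pitch = 36 + int(body_height * 48)     # MIDI 36 (low) to 84 (high)
        volume = activity                      # intensity follows movement
        return (pitch, volume)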

source video_300351 / length_1:43

 

Daniel participated in a full-day workshop together with a small group. He loves music and dance, especially Latin and classical, which he enjoys on a regular basis. People who know him characterize him as active and full of humour, although he can be timid and need time to adjust to new sensory impressions and social settings. Daniel can react in a pertinent manner in various situations, but has problems with comprehension and communication, especially on the verbal level; his abilities in that area can be compared to those of a 3-4 year old.

What was most striking with Daniel was how absorbed he became in the music and in the exploration of his body's role in the interaction. In his first session he played the Fields environment together with a close other. At first he acted a little insecure, seeking the safety of eye contact with his close other, but after a little initial hesitation he seemed to gain confidence and started to engage actively in the interaction. When he moved and made sound he seemed at first quite surprised that he was actually producing the sounds with his body. After he had established the causal relationship between his own movements and the sounds, he started to explore which parts of the body corresponded with which sounds. Whereas his movements moments earlier had seemed quite stiff and limited, he now began to stretch his fingers, rotate his arms, use foot movements and even small jumps. Perhaps most striking, however, were the pauses he employed, freezing in place to delineate the effect that he was having on the music. Precisely this freezing and moving again is perhaps the strongest way of establishing the causal relationship between movement and sound, at least when the system responds without noticeable latency. While many users need overt instruction and a bit of training to do this, Daniel seemed to do it spontaneously. Subsequently, he would smile broadly in satisfaction at the effect he was having on the music.

All the time he seemed to listen intently and respond immediately to the sounds he was making. We learned from his close others that this stood in stark contrast with how Daniel responds verbally in everyday communication settings -- he can sometimes answer a question or request only after as much as 30-40 seconds. The psychological absorption, acuity and presence we observed in Daniel suggest a state of mind thoroughly described and studied by Csikszentmihalyi, called flow (see e.g. Csikszentmihalyi 2014). In this state, there is a "merging of action and awareness; a concentration that temporarily excludes irrelevant thoughts, feelings from consciousness" and there is clear feedback in the interaction.

 

 

 

Theme 4 -- Inclusive Performing

Persons of other abilities are often excluded from participating in dance and music traditions and social events. This happens not only because of practical matters such as wheelchair access, but also because of the lack of variability in the tools involved. Motion detecting sensors and digital technology offer...

 

 

 

 

 

 

Theme 5 -- Speech Stimulation

When Aneta was moving she was triggering vocal sounds. In this scene, her movements were triggering a "meh" sound, and so she began to use mouth movements to trigger the sounds. Similarly, bird sounds cause people to move their arms (as wings), and so on. Such physicalization, or embodiment, of sounds plays a powerful role in our relationship to body-sound.

source video_300105 / length_0:29

 

The triggering of vocal sounds through movement can lead to increased vocalizations in persons who have difficulty speaking. We observed this on a number of occasions. Kristina normally does not speak at all, but during a session in which her movements were triggering sounds, she started vocalizing more and more. Her mother told us later that she had continued vocalizing on into the evening, long after our session was over.


Kristina / source photograph_66154

 


Theme 6 -- MotionComposer / MetaBody Box

Showcased at:

IMF Genoa 2014
IMF Madrid 2014
IMF Amsterdam 2014
IMF Weimar 2015
IMF Berkeley 2015
The Fifth International Congress on Tourism for All, Madrid, Spain, 9.2015. Invitation through Organización Nacional de Ciegos Españoles (ONCE), and MetaBody.

MotionComposer is an interactive device that turns movement into music. It is based on motion tracking technology and is being designed for persons of all abilities and all types of bodies. It interprets the body and its movements in a non-Cartesian way. That is, instead of analysing the position of body parts in 3D space, it looks for tendencies of movement and shape that allow for randomness and unpredictable behaviors. This makes it an extremely adaptable tool, which can be used with persons with severe mental and physical challenges.

As MetaBody began, a number of the participating groups, including Palindrome, InfoMus, Stocos, STEIM, Kdanse and others, had already been working with interactive technology and persons with disabilities. Palindrome had been developing a video-based motion tracking device that turns movement into music, which they named the "MotionComposer".

The MotionComposer was conceived as an easy-to-use, but therefore relatively inflexible, platform. Thus, in order to facilitate the collaboration with MetaBody partners, a second device was conceived: the "MetaBody Box". It is similar from a hardware standpoint, but its software is "open", giving artists and engineers access to the human movement data and allowing them to integrate their own music or other media.

Brief Technical Description

There are two versions: the MotionComposer 2.0, pictured above, and the MotionComposer 3.0, which is in development. (Neither version is commercially available at this time.)

MOTIONCOMPOSER_2.0

HARDWARE

small format (ATX) computer (i7-level CPU)
sense-box (pictured above)
TOF (time-of-flight) sensor
high-resolution, low-latency Ethernet (CMOS) video camera
standard screen, mouse and keyboard user interface

SOFTWARE

motion tracking based on EyesWeb (developed by InfoMus)

six interactive music environments:

    "Particles" by Andreas Bergsland (written in CSound)
    "Drums" by Andrea Cera (written in PD)
    "Techno" by Marcello Lussana (written in SuperColider)
    "Accents" by Pablo Palacio (written in SuperColider)
    "Fields" by Giacomo Lepri (written in PD)
    "Tonality by Adrien Garcia and Ives Schachtschnabel (written in PD)

 

MOTIONCOMPOSER_3.0 (planned)

HARDWARE

Integrated Chassis
stereo vision camera system
2x high resolution, low latency ethernet (CMOS) video cameras
Tablet controller

SOFTWARE

motion tracking by FusionSystems

six interactive music environments:

    "Particles" by Andreas Bergsland (written in CSound)
    "Drums" by Andrea Cera (written in PD)
    "Techno" by Marcello Lussana (written in SuperColider)

    "Import Your Music" by Andreas Bergsland (written in CSound)
    "Fields" by Giacomo Lepri (written in PD)
    "Tonality by Adrien Garcia and Ives Schachtschnabel (written in PD)

 

Design principles

During the four-year development of the MC, a number of design principles have emerged, motivated by three overriding goals:

Inclusion - the device can be used equally by persons with and without disabilities
Synaesthesia - the device is both a musical instrument and a dance device
Artistically satisfying and/or entertaining - a quality experience and not a toy or gimmick

Inclusion has to be seen as our primary goal. We want users with and without disabilities to be able to make music -- alone or with others -- on an equal, or nearly equal, footing. While this may sound utopian, it is not. In the simplest terms it means: 1) allowing many different body parts and kinds of movement to be used; 2) making the device easy to operate, both for user and therapist; 3) ensuring it sounds good however it is played; and 4) providing highly intuitive mapping and clear causality. Inclusion is also the main motivator behind our development of three different interaction modes, which we will discuss in detail below.

Synaesthesia refers to the confusion or overlapping of our senses. It is what happens when we "feel the music inside us" when we dance, or when our movements and the sound they cause become one and the same thing (this is discussed below in the section "Musical Instrument or Dance Device"). In our experience, synaesthesia strengthens users' engagement, their feeling of involvement in and focus on the here-and-now, and the causal connection between their own bodies and the sounds they hear. There are technical issues that can add to, or detract from, a synaesthetic experience. One of these is latency. With a sufficient lag from movement to resulting sound, the user will tend to feel that movement and sound are two separate events instead of one. First-person shooter computer games can tolerate quite a high latency between, say, shooting and when the monster is blown to bits. This is because what is important there is causality, not synaesthesia.

One of the continuing challenges we have faced is to create experiences that are artistically satisfying and/or entertaining, and which can remain so over time. There are different reasons why a user might want to spend time with a musical environment. One is that, with practice, she or he develops skill and is better able to shape her or his movements to achieve a desired effect; if the system continuously presents adequate challenges, the experience of mastery can drive and retain interest and involvement. A second is that there is a variety of both music and interaction metaphors to explore. Our take on this has been to develop a set of environments which, from the user's point of view, are characterized by each playing different types of sounds and being based on different interaction metaphors. But we have also aimed to ensure that, even without switching environments, the musical responses have variation and interest in themselves, so that an identical movement will not necessarily produce an identical sound. For instance, the exact repetition of a sound sample enabled by digital audio technology will quickly feel tiring and even irritating to the user, so introducing variants or avoiding repetition has been important. A third is that the music is beautiful to hear and can be made with appropriately beautiful-feeling movements -- in other words, we are seeking aesthetic experiences. While it is not always easy to pinpoint what triggers aesthetic experiences in each user, we are constantly aiming for this, thus setting goals that are just as much artistic as therapeutic in nature.

Mapping
Mapping deals with how body-movement parameters, as analysed by the tracking software, are linked to sound parameters as part of interactive design. It is generally divided into two parts, action and output; it is their combination that gives a mapping its particular efficacy in practice. The body-movement parameters we track are common in video-based interactive systems, and fall roughly into four categories:

1. activity
2. shape, including height, width and arm height
3. position in the room (centerX)
4. gesture (particular combinations of the first three, for example, an arm extended quickly overhead, or a jump)
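
As an illustration of the first category: activity (the quantity of motion) can be derived from video alone by frame differencing. The following NumPy sketch is a simplification offered only for clarity; our actual tracking used EyesWeb and, later, FusionSystems software:

    import numpy as np

    def activity(prev_frame, frame, noise_floor=12):
        # prev_frame, frame: grayscale video frames as 2-D uint8 arrays.
        # Count the pixels that changed by more than the noise floor and
        # normalize by image size: 0.0 (still) to 1.0 (everything moving).
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return float((diff > noise_floor).mean())

Because such a measure is global, it responds to a finger, a foot or the whole torso alike -- one reason body-part extrapolation "works" even when a given body part is not explicitly tracked.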

Of particular importance to us are:

1. Stillness-to-action. We see this as most basic: movement causes sound and stillness causes silence. Stillness is not a passive experience, as we are almost never still. When we hold still, we tend to listen and this reinforces the causal relationship. It is also a gesture, if I may call it that, that almost everyone can do. The onset of movement/sound after stillness is highly latency critical, whereas the transition the other way around is somewhat less affected by latency.

2. Small vs. large body-movements. We have a very different sense when we use finger movements to control sound than when we use the entire body. Not only are our expectations in the sound world different, but the way we concentrate changes as well. These are based on two metaphors: the musician, who very accurately controls small movements, and the dancer, who uses full-body movement to physicalize an artistic intent. Both seem valid to us, and in combining them we seek a rich and varied user experience.

3. Body-part extrapolation. When users hear the sounds that their fingers make, they often, and without instruction, will try out head, torso, feet and other body parts, looking for additional games to play. This sense of exploration through the body is one of the hallmarks of the MotionComposer.


The Three Modes of MotionComposer

Room - Chair - Bed

Throughout the development process of the MC, we have faced fundamentally conflicting design criteria. For example, while there is a high priority on simplicity of operation, at the same time, in order to ensure the inclusion of truly all users, different modes of use needed to be provided. This became clear to us through workshops with persons of other abilities, including those with quadriplegia, aphasia, dementia and Rett syndrome, to name a few. A one-size-fits-all solution, or a machine that somehow "intelligently" adapts to users, is not easy to implement. Instead, we adopted a compromise design solution in which the MC has three modes of use, labeled room, chair and bed.

When we started out, the basic mode of interaction was open in the sense that we allowed the user to move anywhere inside the area tracked by the camera system. In consequence, most of our environments at that time used the position perpendicular to the camera (centerX) as a central parameter in the interaction. But not only are there users who cannot move around the room on their own; even for those who can, this parameter can be difficult to follow. Thus, although moving around the room is extremely important, and reinforces the dance-is-music metaphor, we also needed a mode of interaction in which the instrument could be played from a stationary position -- i.e. a chair mode, in which the continuously-variable centerX data is transferred to the height of the arms beside the body (only now there are two data streams: left arm to left music channel, right arm to right music channel). The choice of which to use, room or chair, is deceptively complex. We tend to apply metaphors when we move to make music. For example, when we hear piano notes, we might extend our arms and wiggle our fingers. Another type of sound may make us feel more like using mouth, head, torso or feet movements.

After doing a workshop at a children's hospital, it became clear to us that there are many people who can neither move around the room nor raise their arms beside their bodies, and we developed bed mode for them. In bed mode, activity, or the quantity of movement, is tracked in two areas of the body. Admittedly, this mode leaves quite a lot of the musical decisions to the system (i.e. the composer), but we have put a great deal of effort into maintaining variation and interest even for this interaction mode. And indeed activity, unlike shape- or position-based parameters, still retains the powerful component of timing. Activity is thus not only the most trackable of human movement parameters, but also the most important from the standpoint of physical (dance) and music expression.
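
The three modes can be thought of as reducing the tracking data to the parameters a given user can realistically control. A hypothetical Python sketch (the parameter names are our own, not those of the actual software):

    def route(mode, tracking):
        if mode == "room":
            # position across the room (centerX) drives the interaction
            return {"centerX": tracking["centerX"],
                    "activity": tracking["activity"]}
        if mode == "chair":
            # centerX is replaced by two streams of arm height:
            # left arm -> left music channel, right arm -> right channel
            return {"left": tracking["left_arm_height"],
                    "right": tracking["right_arm_height"],
                    "activity": tracking["activity"]}
        if mode == "bed":
            # only the quantity of movement, tracked in two body areas
            return {"activity_zone1": tracking["activity_upper"],
                    "activity_zone2": tracking["activity_lower"]}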

Six Musical Environments

Another consequence of our principle of inclusion, considering how different people enjoy different types of music, is our offer of varied environments: the chances are that most users can find something they find interesting, beautiful or engaging. So, in addition to the three modes of use, the user selects among six musical environments. Each environment offers different mappings and different styles of music. In addition, several of the environments have variants, with several sound banks or other settings, so that the overall musical potential of the device is rich and varied. In terms of musical genres or styles, we have implemented elements from classical, jazz, techno, latin, soundscape and electroacoustic music. We will now give an outline of the basic mappings, metaphors and musical content of the six environments.

Tonality

The prevailing metaphor used in this environment is that of playing an instrument, and for most users, one that they are familiar with. The choice of instrument -- in the current version you can choose between piano, vibraphone and harpsichord -- can be set by the user or the therapist in the GUI (sitar, guitar and a Moog-like synthesizer will be added to the next generation of the device). The choice of notes from these instruments comes about through a combination of user input and features built into the system: although the user chooses the approximate note value, or whether the notes are ascending or descending, the exact selection is controlled by the system, using algorithms to ensure that the notes are in accordance with an underlying musical logic, thus rendering a strong sense of tonality. On top of this musical intelligence, the user affects the dynamics (soft/loud), pitch range, whether chords are played or single notes, and various kinds of articulation (arpeggios, scales, chords). The result is an environment that is "musical" in a relatively traditional manner, reminiscent of the classical and jazz idioms, but where the user can also feel that s/he is in some sense "playing" the music.
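
The kind of "musical intelligence" described here can be sketched as a scale quantizer: the user supplies only an approximate pitch, and the system snaps it to the nearest tone of an underlying scale. This is our own Python illustration; the actual environment was written in PD:

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]      # pitch classes of the scale

    def quantize_to_scale(approx_midi_note, scale=C_MAJOR):
        # Snap the user's approximate note to the nearest scale tone,
        # so that whatever the movement, the result stays tonal.
        octave, pc = divmod(round(approx_midi_note), 12)
        nearest = min(scale + [scale[0] + 12], key=lambda s: abs(s - pc))
        return octave * 12 + nearest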

Particles

The Particles environment is perhaps the most sonically complex of the six. It is based on four sound worlds, each consisting of a large number of short samples, in which the user can orient him/herself: the user's movements in different zones trigger the sounds. Within each sound world, the samples are organized so that sounds sharing a similar characteristic, or belonging to the same source category, are contiguous. Moreover, the transitions between different groups or categories of sounds are made continuous, so that even if there is a pronounced change in quality, this change will still come about as a smooth and sonically continuous transition. The nature of the different sound worlds from which the user can choose suggests different metaphors. In one, Materials, the user can play the sounds of materials like glass, metal, water, wood and skin by navigating to different areas of the interaction space. In another sound world, Songshan Mountain, the user plays vocal sounds from a Chinese opera singer. The environment generally reacts in a very dynamic manner, letting the size of movement control the density of the samples, so as to vary from playing single particles to chained sequences or even dense clouds. These massed sounds take on a relatively abstract quality that can be far removed from the original. All in all, the large number of sounds gives the environment a sonic richness that can evoke interest and curiosity.
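
Two aspects of Particles lend themselves to a sketch: the contiguous ordering of samples within a sound world, and movement size controlling density. This is hypothetical Python with invented ranges; the environment itself was written in CSound:

    import random

    def grains_per_second(activity):
        # movement size controls density: single particles for small
        # movements, chained sequences and dense clouds for large ones
        return 1 + int(activity ** 2 * 50)

    def choose_sample(zone_position, samples):
        # samples are ordered so that similar sounds are contiguous;
        # a position within the zone selects a neighborhood, and moving
        # through the zone yields smooth, continuous transitions
        index = int(zone_position * (len(samples) - 1))
        jitter = random.randint(-2, 2)          # avoid exact repetition
        return samples[max(0, min(len(samples) - 1, index + jitter))]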

Techno

This environment is based on a contemporary popular dance metaphor, where the user is given an underlying beat to which s/he can dance. The system reacts to the user’s movement by making the music more active and engaging, so as to invite the user to keep dancing. This takes place in a few ways.
One of the most basic aspects of the techno genre is the groove. While it must never stop, at the same time it must be modulated, and these modulations can be user-activated. When the user stands motionless before the MotionComposer, a beat is heard; but when they "groove to the music", the kick (bass drum) comes in. This effect generally keeps the user in motion, bobbing up and down or shifting weight between legs. Bending low applies a low-pass filter to the music, a recognizable effect from the techno genre. Stretching high similarly applies a high-pass filter. Finally, melodic layers can be added and removed by extending the arms.
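
The layer logic just described might be sketched as follows (hypothetical Python; the threshold values are invented for illustration, and the environment itself was written in SuperCollider):

    def techno_layers(activity, body_height, arms_extended):
        layers = ["beat"]                  # the groove never stops
        if activity > 0.15:
            layers.append("kick")          # comes in when the user grooves
        if arms_extended:
            layers.append("melody")        # melodic layers via the arms
        if body_height < 0.35:
            filter_mode = "low-pass"       # bending low
        elif body_height > 0.85:
            filter_mode = "high-pass"      # stretching high
        else:
            filter_mode = None
        return layers, filter_mode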

Fields

This environment is relatively diverse, since it includes metaphors of narrativity/impersonation as well as of playing a musical instrument and causing sonic events. The logic of the environment rests on a division of the interaction space into two side-by-side zones that can be preset by the user/therapist. In some of the fields the user plays animal sounds, enabling a game of impersonation or role-playing in which the user "becomes" the animal. In others, the user can "play" different instruments or objects like drums and glass, or weather phenomena like "wind" and "rain". Musically, this environment therefore has an affinity with soundscape composition and an expanded notion of what sounds can be musical. The division of the interaction space into two distinct areas, which can be played simultaneously by two users, makes this environment ideal for duets, enabling, for instance, a "conversation" between a chicken and a frog. In fact, developing Fields made us aware of a number of benefits of having more than one user, something we will discuss further below.

Fields, as mentioned earlier, is based on the human movement parameter we call activity, which we consider to be the most important. From the continuous activity parameter we derive an additional analysis of which of four levels it falls into:

Activity level 1. very small, discrete movements, such as of the fingers or eyes.
Activity level 2. typical gestures of the hands, head, shoulders, feet, etc.
Activity level 3. bursts of large, high-energy movement.
Activity level 4. jumps, where both feet leave the floor.

Here, levels 1, 3 and 4 are Boolean (on/off triggers), while level 2 is a continuously variable controller.

These levels are used when choosing among four categories of sounds in the Fields environment, following the logic that small movements cause small events, creatures or objects to be heard, whereas large movements cause big ones. For instance, in the case of the bird sounds:

Activity level 1: plays individual chirps, or tweets
Activity level 2 in low range: plays a trill, or continuous singing of one bird
Activity level 2 in high range: plays more than one bird, volume also increases.
Activity level 3: plays a larger bird, such as a crow
Activity level 4: plays how we imagine an archaeopteryx might have sounded

Thus, even if this follows a "logic", the logic is interpreted in a creative, playful and sometimes joking manner. For the frog, for example, level 1 will render gentle "quacks", while when the user moves really big, thus reaching level 4, a "splash" will be heard, suggesting to the user that the frog has jumped in the water. Each kind of sound requires a unique strategy for its implementation.
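
Combining the level analysis with the bird-sound logic above gives a compact picture of the whole chain. This Python sketch is illustrative only (the thresholds are invented; the environment was written in PD). Levels 1, 3 and 4 would fire as one-shot triggers on being reached, while level 2 streams continuously:

    def activity_level(activity, jumped):
        # classify the continuous activity parameter into four levels
        if jumped:              return 4   # both feet leave the floor
        if activity > 0.6:      return 3   # burst of large movement
        if activity > 0.05:     return 2   # typical gesture (continuous)
        if activity > 0.005:    return 1   # very small, discrete movement
        return 0                           # stillness

    def bird_sounds(level, activity):
        if level == 1: return "a single chirp or tweet"
        if level == 2: return ("a trill from one bird" if activity < 0.3
                               else "several birds, louder")
        if level == 3: return "a larger bird, such as a crow"
        if level == 4: return "our imagined archaeopteryx"
        return "silence"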

Drums

The "drums" environment creates a small set of percussive instruments around the body of the user. The user can freely play them by hitting with hands and feet. If the energy and the rate of the user's gestures go above a first threshold, the environment starts to quantize (align on a regular rid) the rhythmic structures the user is playing, creating the feeling of a more groovy performance. If the user's energy goes beyond a second threshold, the environment introduces a pre-composed rhythm, which further reinforces the impression of producing a groove.
The user is caught in a loop in which his/her movements produce rhythms, while the subtle interventions of the system increase the desire to dance.
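
The two-threshold behavior can be sketched as follows (hypothetical Python with invented threshold values; the environment itself was written in PD):

    GRID = 60.0 / 120 / 4      # sixteenth-note grid at 120 BPM, in seconds

    def process_hit(hit_time, energy):
        # above the first threshold, snap the hit to the nearest grid
        # point; above the second, bring in the pre-composed rhythm
        if energy > 0.3:
            hit_time = round(hit_time / GRID) * GRID
        backing = energy > 0.7
        return hit_time, backing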

Accents

Accents is built around musical rhythms. When a user moves a body part, the movement causes the drum sound to be much louder, thus accenting each gesture they make.

Single vs. Multi-User

Even when we do them by ourselves, music and dance are in some sense concerned with performance; sharing the experience heightens the enjoyment. In the current version of the MC, only one of the six music environments, Fields, is implemented for multiple users. Based on positive experiences with having two users together in an environment in many of the later workshops, we have seen the need to port the two-person mode to all environments, and this is currently in development.

Allowing two-person interaction also has the advantage of creative social and musical interaction, whether involving a friend, colleague or therapist. As Eide (2014) points out, the dialogical perspective in music has become important to music therapists in recent decades, emphasizing co-experience and co-creation (p. 122). In our work, we have experienced that games of imitation, mirroring and dialogue heighten the enjoyment for many users. The challenges that two-person interaction presents to users -- most often relating to problems of hearing who does what -- are often easily solved through focus and conscious guidance, and can offer the pedagogical benefit of making space in the interaction and listening to the other.

 

Research on mapping is an ongoing process. Choreographer Robert Wechsler is shown here investigating a new environment by composer Andreas Bergsland.

source video_300122 / length_1:50


 

 

Misc. Videos

 

There are over 50 cataloged videos from the various sites where we worked. 
These are some edited examples:

 

 

 

source video_300122 / length_1:50

 

 

 

 

   
 

Conclusions

As we described under Theme 3, we have noticed, in increasing detail and sophistication, the aspects of human movement which, when sonified, are most meaningful to movers in their movement-music expression. While there is important diversity in range of expression, ability and body type, we found the disalignment context important in designing systems that accommodate aberrant behavior. Specifically, this means:

1. systems with the broadest possible range of mappings
2. systems that are equivocal, employing for example fuzzy logic rather than strictly 1-to-1 mappings
3. using activity-based parameters (as opposed to position- and shape-based parameters), since many people cannot (or do not want to) use fixed-measurement controllers
4. systems for which there is no "wrong" way to play them

On a deeper level, a profound re-thinking of system design may be needed. As one of the designers, Andreas Bergsland, put it: "The concept of affordance can be useful when designing interactive environments, because it invites thinking about users, technology and audience as an ecosystem where reciprocal interchange of information and sensation take place. It highlights the fact that both thinking and sensing are distributed and embodied processes, where environment, technology and users constantly feed back on each other."(19) This dynamic looping process enriches the experience and contributes to the creation of scenarios helpful in integrating the experience on a collective level.

Performing, that is, the showing of what one can do, also plays a role in the process of raising awareness of diversity. Indeed, interactive motion tracking, with its "play area", offers a unique stage for this process to unfold. The workshop leader assumes the role of director/conductor, orchestrating the set through storytelling, theater and dance.

Rather than leading to exclusion, awareness of our differences in a creative setting can have the opposite effect. Listening to and observing the other, imitating the workshop leader and following the same rules together allow a freedom of expression on a shared, equal footing. Everyone is differently the same. The disability no longer exists, or is perceived as a poetic difference. "Listen to my body talking" promotes diversity through original movement and sound.

In designing music-movement tools for persons with disabilities, we face large, but also very interesting, challenges. This user group is not only incredibly diverse, but also incredibly open. One of our main challenges has been to ensure inclusion for users of all abilities, so that all types of movements can in fact render musically interesting and pleasing results for the user. The overriding strategy we have taken in that respect has been to strive for variation and richness in mapping strategies, interaction metaphors, and sound and music. At the same time, we have maintained activity, something truly universal across abilities, as the central parameter for all our environments. For Frederick, Anna and Daniel alike, feeling the music follow their activity level seemed sufficient to generate a rewarding experience. We have realized that its counterpart, stillness, is also very important and, as for Daniel, can be a crucial component in perceiving the causality between movement and sound.

We have made many surprising discoveries in our workshops as users played the MotionComposer "incorrectly", and in doing so revealed brilliant creativity, inventiveness and musicality. To wit: Frederick (described above) played the Tonality environment in chair mode, in which arm height along the vertical axis is tracked. But Frederick was almost horizontal in his special wheelchair, and thus his arm movements did not follow the intended trajectories. This led to unintentional, yet interesting, consequences. Other examples include persons who reach both arms to one side of their body (fairly common), twist around in their wheelchairs, or reach towards the floor or towards the audience instead of upwards. From a choreographic standpoint, these ways of playing are expressive and completely justified, even though the logic of the system as a musical instrument is not what was intended.

The question for us, then, is how to design dance-music systems which offer "rules" for their control and yet, for those who cannot or choose not to follow those rules, allow alternative mappings and modes of playing. This dichotomy -- rules and freedom -- cannot be resolved through compromise: a system that sometimes does what you want will always be frustrating. The question is how to accommodate multiple modes of playing. They can be concurrent or alternating; if the latter, with a switching back and forth, how is the choice made of when to switch?

    by the therapist (or other person pressing the buttons)
    by the user (for example, through a particular gesture)
    randomly
    via an intelligent system, which analyses the style, range of movement, etc. of the user

Finding good answers to these questions depends on amassing experiences with users who appropriately reflect an extremely broad range of abilities. Disability Studies professor Devva Kasnitz spoke up during a seminar we gave at the University of California at Berkeley to say, "we are not interested in having you develop tools for us, but we do want you to develop tools with us" (Block 2015). Different users can contribute to the development in different ways, some more actively and directly with their own suggestions and ideas, and others more indirectly, by showing what they like and what they don't like so much.


Future Work

With MetaBody partners InfoMus and STEIM, we are looking into extending the range of human movement properties tracked, which we believe could improve the technology and make for a richer user experience. For example, higher-order movement qualities, such as softness, lightness, tension and so on, are important to how we feel when we move, and yet are largely out of reach to the technologies used in this study. Shape-based aspects -- twists of the torso, twists of limbs, bending of torso and limbs, extension and contraction -- represent a similarly out-of-reach area.(20) The dancer Muriel Romero pointed this out at the 2015 MetaBody Conference in Madrid when she said, "As soon as I do something interesting with my body, the technology gets confused".

Finally, the way we look at interactive technologies could lead to a new perspective on the body and, by extension, society. Soft skills, as artistic and creative expressions, promote the vision of a sustainable and inclusive culture. Beyond therapeutic and pedagogical interests, we observed that the joy and pleasure felt by participants has a universal echo concerning the perception of the body. What is imperfect, what we call dis-abled, falls away in this universal perspective.

Credits

Workshop leaders and researchers:  Josepha Dietz, Alicia Penalba.

Principal authors: Robert Wechsler, Andreas Bergsland, Delphine Lavau, Marcello Lussana, Ekmel Ertan.

Additional contributors: Josepha Dietz, Annika Dörr, Pablo Palacio, Alicia Penalba, Marije Baalman, and Jaime Del Val.

Videographer:  Anna Pfannstiel

NOTE: If we have accidentally omitted your name, apologies, and please let us know.

 

   
 

 

 

 

 

   
 

1.  "Using motion tracking technology (including video-based and controller-based systems), dance and music, the questions the research poses include: How can we remove barriers to expression? Technology tends to reduce gestural expression, how can we expand it -- expand range of movement, range of expression? How can we promote a more positive awareness of difference, through disalignment from normative conception of ability or intelligible expression? How do we generate affordances that invites deviant (alternative) behaviors which foster plurality and not homogenization? What are the cultural differences in the perception of difference? How do these differences relate to acceptance and integration (inclusion)? How do different societies (including governmental and non-governmental organizations) approach inclusion?"  Jaime Del Val

 

   
 

 

Publications:

Wechsler, R., Bergsland, A. and Lavau, D. "Affording Difference: Different Bodies / Different Cultures / Different Expressions / Different Abilities". MetaBody Journal (citation from final MetaBody Journal needed).

Bergsland, A. and Wechsler, R. (2016). "Interaction design and use cases for MotionComposer, a device turning movement into music". SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience, special edition on Sound and Listening in Healthcare and Therapy, Vol. 5, No. 1.

Dietz, J. "MotionComposer....". Unterstützte Kommunikation, Kongress 2015; ISAAC (International Society for Augmentative and Alternative Communication), Technische Universität Dortmund, Germany, September 2015.

Bergsland, A. "Aspects of digital affordances: Openness, skill and exploration". Presentation at Affordance of Symbiosis - International Metabody Forum, Weimar, March 2015.

Torres, R. "Musicoterapia, el poder curativo de la expresión artística interdisciplinar". SALUD-Musicoterapia, November 2014.

Bergsland, A. and Wechsler, R. "Composing Interactive Dance Pieces for the MotionComposer, a device for Persons with Disabilities". Proceedings of NIME 2015, Baton Rouge, LA, USA, 2015.

Strecker, N. "Tanz Dich Frei". TANZ Zeitschrift, Berlin, May 2015.

Peñalba, A., Valles, M.-J., Partesotti, E., Castañón, R. and Sevillano, M.-Á. "Types of interaction in the use of MotionComposer, a device that turns movement into sound". Proceedings of ICMEM - The International Conference on the Multimodal Experience of Music, University of Sheffield, England, 2015.

 

 

 

 

 

 

 

* This is not a public website. One of the videos has not been cleared for public release (marked "No Permission for Public Viewing"). All names of persons in the videos have been changed.

   
 

 


 

 

 

 

 

with the support of the culture programme of the European Union