This collaborative project involved Co-organizers and Associate partners: Palindrome, Universidad de Valladolid, Reverso, IMM, STEIM, InfoMus, Instituto STOCOS, AMBER Platform.
The project concerned the development and implementation of interactive technology tools for different(ly abled) bodies, challenging the notion of ability/disability and considering all bodies as different(ly abled). It paid specific attention to bodies that are usually considered disabled, affording them new tools of expression in which their difference acquires a positive value.
Using motion tracking technology (including video-based and controller-based systems), dance and music, the questions the research poses include: How can we remove barriers to expression? Technology tends to reduce gestural expression, how can we expand it -- expand range of movement, range of expression? How can we promote a more positive awareness of difference, through disalignment from normative conceptions of ability or intelligible expression? How do we generate affordances that invite deviant (alternative) behaviors which foster plurality and not homogenisation? What are the cultural differences in the perception of difference? How do these differences relate to acceptance and integration (inclusion)? How do different societies (including governmental and non-governmental organizations) approach inclusion?
What if the language of gestures and movements could either overcome the obstacles of spoken language or even create new languages in which all kinds of bodies with diverse abilities (some of them usually considered "disabled") could communicate?
From July 2013 to January 2016, a team of workshop leaders, engineers, composers, choreographers and assistants sought evidence that interactive digital movement-to-music technologies can play a role in affording dance and music engagement among highly diverse individuals and, in so doing, provide new methods of promoting inclusion and the acceptance of diversity. The work took place in the form of 28 workshops in 6 European countries. A total of 242 persons with disabilities took part, as well as 119 therapists, teachers and caretakers. Detailed records are available.
The types and severity of disability varied widely, as did age and demographics. Disabilities included Rett Syndrome, Autism (Autism Spectrum Disorder), Cerebral Palsy, Quadriplegia, Parkinson's, Alzheimer's and others. Most workshops also included "non-disabled" participants (including professional dancers). Ages of participants ranged from 8 to 85. The work was organized through various institutions for persons with disabilities and participation was free.
The technology we used included sensors, motion tracking software, mapping software, and systems for sound and music generation. While the exact hardware and software varied from site to site, it always included video-based (including so-called 3D-video) motion tracking (in a much smaller number of cases, electrode-based systems were used to sense body contact). The motion tracking included EyesWeb and EyeCon software. Sound and music generation were the result of programming in SuperCollider, Pure Data, Max/MSP and similar systems (technical details are available).
Together these devices provided new affordances in music-making -- new relationships of body to sound. This concerns not only a greater range of body parts and gestures which can be used to play music; they also contribute "open affordances" -- features that allow a more open form of exploration, where searching, discovering and play are basic afforded actions. The dance and music are still dependent on the users' capabilities, of course, but much less in the form of measurable skills and more relying on sensory and attentive focusing, which might be amplified by qualities such as openness, playfulness and creativity.
Format and Procedure
While there was some variation in procedure, sessions generally alternated individual and collective exercises. The work began with the entire group doing a warm-up/body-work of 30-45 minutes. This was followed by a demonstration of the interactive system, and the opportunity for participants to get an individual taste of the experience. Next, we divided the group into smaller groups of 3-6 persons. In this section, we tried to let the individual needs of participants guide the workshop. This included storytelling scenarios and little performances. At the end we would bring everyone together for a finale, which was followed by a de-briefing -- a discussion which sometimes included the persons of other abilities, and sometimes only workshop leaders, therapists, family members and caretakers. In most, though not all, cases, videos were made and records of experiences were taken.
While some of the movement-music systems we used worked exactly as planned, in a surprising number of cases, users up-ended our intentions. They did not use the equipment as we intended, but rather imposed their own creative ideas and impulses -- crashing our preconceptions concerning creative movement and music. Perhaps more than anything else, it was this "breaking the rules" that has guided our project. It has led to a profound re-thinking and re-designing of interactive environments for human expression for persons of other abilities.
There exists little research in this field, and thus we were excited by the richness and variety of our results. Some publications have appeared.
What follows is an overview of some of the key themes which were explored.
Theme 1 -- Guidance to get Started
Making music through free movements in space is not something most of us -- with and without other abilities -- are used to. The absence of haptic controllers can be confusing to some, and indeed it takes a little guidance to join in. For persons with good speech and cognitive ability, one can simply explain it, but for others, we might begin by holding hands and essentially guiding their movements. For persons in wheelchairs, we would do this first from in front, then from behind, and finally, by holding very still, the user can "hear their own movements". Thus, the introduction is made step-by-step, beginning with touching; ultimately users are left on their own to explore.
source video_300310 / length_0:52
source video_300205 / length_0:18
Theme 2 -- Body Part Extrapolation and Engagement
What is it that engages the human psyche in a movement/music experience? This varies from person to person. Among other factors, it depends on the movements used as well as the music and mapping. An important element is a sense of exploration and discovery. At first the user might discover that the bending of the knees generates a sound. Next, that the intensity of the movement changes the intensity of the sound. Typically, soon after this, users begin to extrapolate the interaction onto other parts of the body -- exploring what different body parts they can use. Somewhat confused by the absence of any haptic responder, notice how Maria stamps her foot on the floor at 00:10 in order to see what effect this action might have on the music. We see this extrapolation take many forms -- in a creative spirit, people naturally go from body part to body part, engaged in the adventure of discovery.
source video_300125 / length_0:16
Theme 3 -- Designing for Dis-alignment
We have come to notice, in increasing detail and sophistication, the aspects of human movement which, when sonified, are most meaningful to movers in their movement-music expression. While there is important diversity in range of expression, ability and body type, we found the disalignment(1) context important in designing systems that accommodate aberrant behavior.
This implies a profound re-thinking of system design. As one of the designers, Andreas Bergsland, put it: "The concept of affordance can be useful when designing interactive environments, because it invites thinking about users, technology and audience as an ecosystem where reciprocal interchange of information and sensation take place. It highlights the fact that both thinking and sensing are distributed and embodied processes, where environment, technology and users constantly feed back on each other." This dynamic looping process enriches the experience and contributes to the creation of scenarios helpful in integrating the experience on a collective level.
In several parts of his session, Frederick’s arms weren’t necessarily reaching out to the sides so as to be adequately tracked by the system, but would often be positioned in his lap, pointing towards the camera or backwards away from it. This would frequently make the system replace the user input with “default” values -- in this case, the absence of one or both arms for the tracking software would generate a value corresponding to minimum arm height (arm way down). This affects the environment so that the choice of notes will be from the low pitched end of the piano. When he had only one arm to the side, something which happened quite often, the insistent bass voice would be accompanied by a voice in the treble range of the register. Together, this was in fact one reason why Frederick’s playing occasionally took on the flavour of late romantic piano music, à la Chopin and Liszt. Frederick’s non-adherence to the “rules” of the interaction highlighted for us how “glitches” of the design or “mis-use” could in fact have aesthetically pleasing results. Together with similar incidents, we have come to embrace faults and failures of different sorts as a positive thing that can lead to new design innovations and new ways of thinking about movement, technology and interaction.
source video_300201 / length_0:16 No Permission for Public Viewing
In the following example, not for reasons of physical limitation, but for purely artistic reasons, Damien elected to reach towards his audience. Again, this "wrong" way of playing limits the range of sounds, yet can nevertheless enhance the personal value of the experience.
source video_300372 / length_0:22
The children we worked with in Istanbul surprised us repeatedly with their clever, alternative methods of playing. The metaphor of "playing a musical instrument" is quickly overtaken by freer and more complex forms of physical expression.
source video_300351 / length_1:43
Daniel has a mental disability. His movements tend to be quite stiff and limited. We used a pitch-bend effect and he discovered his body in a new way. He stretched his fingers, rotated his arms, used foot movements and even small jumps. Perhaps most striking, however, were the pauses he employed, freezing in place to delineate the effect that he was having on the music. He would smile broadly in satisfaction.
This demonstrates key principles in interactive dance-music experiences:
1. Stillness-to-action. This is the most basic and perhaps most powerful of movement-music mappings, namely that movement causes sound and stillness causes silence. Notice: stillness is not a passive experience -- people do not normally freeze! It is a special task. When frozen, one tends to listen. Thus, by repeatedly stopping and starting, one very quickly gains a clear sense that the body is linked causally to the sound.
2. Small vs. large body-movements. Daniel not only used small finger movements to control sound, but also large body movements. This alternation is based on two basic metaphors: small controlled movements (musician), and large body movements (dancer). While the former develops a fine sense of control and causality, the latter leads to increased physicality (breathing faster), with its inherent excitement and stimulation.
3. Height-to-pitch. For reasons that are not entirely clear, stretching up tall implies higher pitched sounds, and bending low implies low ones. (Obviously, even the words embody this parallel). In any case, the mapping is highly intuitive.
4. Body-part extrapolation. As he explored the sound environment, Daniel began more and more to use different body parts. After hearing what his fingers were doing, he tried the head, torso and feet, even jumping on one occasion. This extrapolation from one body part to another is intuitive, though not altogether logical, since not all of these body parts were actually being tracked. This freedom from defined goals, and the individualization of the experience, is an example of disalignment. His movements grew, not only in size and body part, but in originality. We saw a progression from defined, mapped experiences, to freer, non sequitur ones.
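The first and third principles above -- stillness-to-action and height-to-pitch -- can be sketched in a few lines of code. This is an illustrative sketch only, not the actual MotionComposer implementation; the normalized tracker values, the activity threshold and the MIDI note range are all assumptions.

```python
# Sketch of two basic movement-music mappings (hypothetical tracker values).
# `body_height` and `activity` are assumed to be normalized to 0.0-1.0.

def height_to_pitch(body_height, low_note=36, high_note=84):
    """Height-to-pitch: stretching up tall gives high notes,
    bending low gives low notes (returns a MIDI note number)."""
    return round(low_note + body_height * (high_note - low_note))

def movement_to_sound(activity, threshold=0.05):
    """Stillness-to-action: movement causes sound, stillness causes silence."""
    return activity > threshold  # gate: sound only while moving

# A mover rises from a crouch while moving, then freezes in place.
frames = [(0.1, 0.30), (0.5, 0.40), (0.9, 0.35), (0.9, 0.01)]
for height, activity in frames:
    if movement_to_sound(activity):
        print("note on:", height_to_pitch(height))
    else:
        print("silence")  # freezing immediately silences the system
```

Repeatedly crossing the activity threshold is what gives the user the clear, causal start-stop experience described above.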
Daniel participated in a full-day workshop together with a small group. Daniel loves music and dance, especially Latin and classical, which he enjoys on a regular basis. People who know him characterize him as active and full of humour, although he can be timid and needs time to adjust to new sensory impressions and social settings. Daniel can react in a pertinent manner in various situations, but has problems with comprehension and communication, especially on the verbal level, and his abilities in that area can be compared to those of a 3-4 year old.
What was most striking with Daniel was that he got very absorbed in the music and in the exploration of his body’s role in the interaction. In his first session he was playing the Fields environment together with a close other. At first he acted a little insecure, seeking the safety of eye contact with his close other, but after a little initial hesitation he seemed to gain confidence and started to engage actively in the interaction. When he moved and made sound he seemed at first quite surprised that he was actually producing the sounds with his body. After he had established the causal relationship between his own movements and the sounds, he started to explore what parts of the body corresponded with what sounds (see figure 4). Whereas his movements moments earlier had seemed quite stiff and limited, he now began to stretch his fingers, rotate his arms, use foot movements and even small jumps. Perhaps most striking, however, were the pauses he employed, freezing in place to delineate the effect that he was having on the music. Precisely this freezing and moving again is maybe the strongest way of establishing the causal relationship between movement and sound, at least when the system responds without noticeable latency. While many users need overt instruction and a bit of training to do this, Daniel seemed to do this spontaneously. Subsequently, he would smile broadly in satisfaction of realizing the effect he was having on the music.
All the time he seemed to listen intensely and respond immediately to the sounds he was making. We learned from his close others that this stood in stark contrast with how Daniel responds verbally in everyday communication settings -- he can sometimes answer a question or request after as much as 30-40 seconds. The psychological absorption, acuity and presence we observed in Daniel suggests a state of mind thoroughly described and studied by Csikszentmihalyi, called flow (see e.g. Csikszentmihalyi 2014). In this state, there is a “merging of action and awareness; a concentration that temporarily excludes irrelevant thoughts, feelings from consciousness” and there is a clear feedback in the interaction.
Persons of other abilities are often excluded from participating in dance and music traditions and social events. This happens not only because of the practical matters such as wheelchair access, but also because of the lack of variabilities of the tools involved. Motion detecting sensors and digital technology offer.............
Theme 5 -- Speech Stimulation
When Aneta was moving she was triggering vocal sounds. In this scene, her movements were triggering a "meh" sound, and so she began to use mouth movements to trigger the sounds. Similarly, bird sounds cause people to move their arms (as wings), and so on. Such physicalization, or embodiment of sounds plays a powerful role in our relationship to body-sound.
source video_300105 / length_0:29
The triggering of vocal sounds through movement can lead to increased vocalizations in persons who have difficulty speaking. We observed this on a number of occasions. Kristina normally does not speak at all but during a session in which her movements were triggering sounds, she started vocalizing more and more. Her mother told us later that she had continued vocalizing on into the evening, long after our session was over.
MotionComposer is an interactive device that turns movements into music. It is based on motion tracking technology and is being designed for persons of all abilities and all types of bodies. It interprets the body and its movements in a non-Cartesian way. That is, instead of analysing the position of body parts in 3D space, it looks for tendencies of movement and shape that allow for randomness and unpredictable behaviors. This makes it an extremely adaptable tool, which can be used with persons with severe mental and physical challenges.
As MetaBody began, a number of the participating groups, including Palindrome, InfoMus, Stocos, STEIM, Kdanse and others, had already been working with interactive technology and persons with disabilities. Palindrome had been developing a video-based motion tracking device that turns movement into music and they named it "MotionComposer".
The MotionComposer was conceived as an easy-to-use, but therefore relatively inflexible, platform. Thus, in order to facilitate the collaboration with MetaBody partners, a second device was conceived: the "MetaBody Box". It is similar from a hardware standpoint, but the software is "open", allowing, for example, artists and engineers access to the human movement data -- allowing the integration of their own music or other media.
Brief Technical Description
There are two versions: The MotionComposer 2.0, pictured above, and the MotionComposer 3.0, which is in development. (Neither version is commercially available at this time.)
During the four-year development of the MC, a number of design principles have emerged motivated by three overriding goals:
Inclusion has to be seen as our primary goal. We want users with and without disabilities to be able to make music -- alone or with others -- on an equal, or nearly equal, footing. While this may sound utopian, it is not. In the simplest terms it means: 1) allowing many different body parts and kinds of movement to be used; 2) the device must be easy to operate, both for user and therapist; 3) it should sound pretty good however it is played; and 4) it must provide highly intuitive mapping and clear causality. Inclusion is also the main motivator behind our development of three different interaction modes, which we will discuss in detail below.
Synaesthesia refers to the confusion or overlapping of our senses. It is what happens when we “feel the music inside us” when we dance, or when our movements and the sound they cause become one and the same thing (this is discussed below in the section, “Musical Instrument or Dance Device”). In our experience, synaesthesia strengthens users’ engagement, their focus on the here-and-now, and the causal connection between their own bodies and the sounds they hear. There are technical issues that can add to, or detract from, a synaesthetic experience. One of these is latency. With a sufficient lag from movement to resulting sound, the user will tend to feel that movement and sound are two separate events, instead of one. First-person shooter computer games can tolerate quite a high latency between, say, shooting, and when the monster is blown to bits. This is because what is important there is causality and not synaesthesia.
One of the continuing challenges we have faced is to create experiences that are artistically satisfying and/or entertaining, and which can remain so over time. There are different reasons why a user might want to spend time with a musical environment. One is that, with practice, she or he develops skill and is better able to shape her or his movements to achieve a desired effect; if the experience continuously presents adequate challenges to a user, this sense of mastery can drive and retain interest and involvement. A second reason is that there is a variety of both music and interaction metaphors to explore. Our approach has been to develop a set of environments which, from the user's point of view, each play different types of sounds and are based on different interaction metaphors. But we have also aimed to ensure that, even without switching environments, the musical responses have variation and interest in themselves, so that an identical movement will not necessarily produce an identical sound. For instance, the exact repetition of a sound sample enabled by digital audio technology will quickly feel tiring and even irritating to the user, so introducing variants or avoiding repetition has been important. A third reason is that the music is beautiful to hear and can be made with movements that feel appropriate and beautiful -- in other words, we are seeking aesthetic experiences. While it is not always easy to pinpoint what triggers aesthetic experiences in each user, we are constantly aiming for this, thus setting goals that are just as much artistic as therapeutic in nature.
Of particular importance to us are:
Room - Chair - Bed
Throughout the development process of the MC, we have faced fundamentally conflicting design criteria. For example, while simplicity of operation is a high priority, at the same time, in order to ensure the inclusion of truly all users, different modes of use needed to be provided. This became clear to us through workshops with persons of other abilities, including those with quadriplegia, aphasia, dementia and Rett syndrome, to name a few. A one-size-fits-all solution, or a machine that somehow “intelligently” adapts to users, is not easy to implement. Instead, we adopted a compromise design solution in which the MC has 3 modes of use labeled room, chair and bed.
When we started out, the basic mode of interaction was open in the sense that we allowed for the user to move anywhere inside the area which was tracked with the camera system. In consequence, most of our environments at that time used the position perpendicular to the camera (centerX) as a central parameter in the interaction. Not only are there users who cannot move around the room on their own, but even for those that can, this parameter can be difficult to follow. Thus, although moving around the room is extremely important, and reinforces the dance-is-music metaphor, we also needed a mode of interaction where the instrument could be played from a stationary position -- i.e. a chair mode, in which the continuously-variable centerX data is transferred to the height of the arms beside the body (now with two data streams: the left arm mapped to the left music channel, the right arm to the right). The choice of which to use, room or chair, is deceptively complex. We tend to apply metaphors when we move to make music. For example, when we hear piano notes, we might extend our arms and wiggle our fingers. Another type of sound may make us feel more like using mouth, head, torso or feet movements.
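The room/chair distinction can be pictured as two alternative mappings from tracking data to control streams. The sketch below is a hedged illustration under assumed data shapes (normalized 0-1 values, simple dictionaries), not the actual MotionComposer code.

```python
# Illustrative sketch of the two mobile/stationary interaction modes.
# All tracker values are assumed to be normalized to 0.0-1.0.

def room_mode(center_x):
    """Room mode: one control stream, the user's position across
    the room perpendicular to the camera (centerX)."""
    return {"center": center_x}

def chair_mode(left_arm_height, right_arm_height):
    """Chair mode: the role of centerX is transferred to arm height,
    yielding two streams: left arm -> left music channel,
    right arm -> right music channel."""
    return {"left": left_arm_height, "right": right_arm_height}

# A seated user raising the right arm higher than the left:
print(chair_mode(0.2, 0.8))  # {'left': 0.2, 'right': 0.8}
```

The point of the two-stream design is that a user who cannot cross the room still gets a continuously variable parameter per arm, and the stereo separation makes it audible which arm is doing what.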
After doing a workshop at a children's hospital it became clear to us that there are many people who can neither move around the room nor raise their arms beside their bodies, and we developed bed mode for them. In bed mode, activity, or the quantity of movement, is tracked in two areas of the body. Admittedly, this mode leaves quite a lot of the musical decisions to the system (i.e. the composer), but we have put a great deal of effort into maintaining variation and interest even for this interaction mode. And indeed, activity, unlike shape or position-based parameters, still retains the powerful component of timing. Activity is thus not only the most trackable of human movement parameters, but is also the most important from the standpoint of physical (dance) and music expression.
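Quantity of movement of this kind is commonly computed by differencing successive video frames. The sketch below shows one plausible way to do it for two regions of the body; it illustrates the general technique, with assumed region bounds and frame format, and is not the device's actual tracking code.

```python
# Activity as the mean absolute difference between successive frames,
# computed separately for two regions of the body (bounds are assumptions).

def region_activity(prev_frame, curr_frame, region):
    """Mean absolute pixel difference within region (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = region
    total = 0
    count = (r1 - r0) * (c1 - c0)
    for r in range(r0, r1):
        for c in range(c0, c1):
            total += abs(curr_frame[r][c] - prev_frame[r][c])
    return total / count

# Two tiny 4x4 grayscale frames: movement only in the upper half.
prev = [[0] * 4 for _ in range(4)]
curr = [[8] * 4 for _ in range(2)] + [[0] * 4 for _ in range(2)]
upper = region_activity(prev, curr, (0, 2, 0, 4))  # movement detected
lower = region_activity(prev, curr, (2, 4, 0, 4))  # stillness
```

Because the result depends only on change over time, not on where the body is or what shape it makes, this parameter remains meaningful for users who can move only one part of the body, while still preserving the expressive component of timing.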
Six Musical Environments
Another consequence of our principle of inclusion, considering how different people tend to enjoy various types of music, is our offer of varied environments. Consequently, the chances are that most users can find something they find interesting, beautiful or engaging. So, in addition to the three modes of use, the user selects between six musical environments. Each environment offers different mappings and different styles of music. In addition, several of the environments have variants, with several sound banks or other settings, so that the overall musical potential of the device is rich and varied. In terms of musical genres or styles, we have implemented elements from classical, jazz, techno, latin, soundscape and electroacoustic music. We will now give an outline of basic mappings, metaphors and musical content of the four musical environments relevant in this context.
Single vs. Multi-User
Even when we do them by ourselves, music and dance are in some sense concerned with performance; sharing the experience heightens the enjoyment. In the current version of the MC, only one of the six music environments, Fields, is implemented for multiple users. Based on positive experiences with having two users together in an environment in many of the later workshops, we have seen the need for porting the two-person mode to all environments, and this is currently in development.
Allowing two-person interaction also has the advantages of creative social and musical interaction, either involving a friend, colleague or therapist. As Eide (2014) points out, the dialogical perspective in music has become important to music therapists in recent decades, emphasizing co-experience and co-creation (p.122). In our work, we have experienced that games of imitation, mirroring and dialogue heighten the enjoyment for many users. The challenges that two-person interaction present to users -- most often this relates to problems of hearing who does what -- are often easily solved through focus and conscious guidance, and might offer the pedagogical benefit of making space in the interaction and listening to the other.
Research on Mapping is an on-going process. Choreographer Robert Wechsler is shown here investigating a new environment by composer Andreas Bergsland.
source video_300122 / length_1:50
There are over 50 cataloged videos from the various sites where we worked.
Performing, that is, the showing of what one can do, also plays a role in the process of raising awareness of diversity. Indeed, the interactive motion tracking with its "play area" offers a unique stage for this process to unfold. The workshop leader assumes the role of director/conductor, orchestrating the set through storytelling, theater and dance.
Rather than leading to exclusion, awareness of our differences in a creative setting can have the opposite effect. Listening and observing the other, imitating the workshop leader and following the same rules together allows a freedom of expression in a co-footing environment. Everyone is differently the same. The disability doesn't exist anymore or is perceived as a poetic difference. "Listen to my body talking" promotes diversity through original movement and sound.
In designing music-movement tools for persons with disabilities, we face large, but also very interesting challenges. This user group is not only incredibly diverse, but also incredibly open. One of our main challenges has been to ensure inclusion for users with all abilities, so that all types of movements can in fact render musically interesting and pleasing results for the user. The overriding strategy we have taken in that respect has been to strive for variation and richness in mapping strategies, interaction metaphors and in sound and music. Simultaneously, we have maintained activity, being something truly universal across abilities, as the central parameter for all our environments. For Frederick, Anna and Daniel, feeling the music follow their activity level seemed to be sufficient to generate a rewarding experience. We have realized that its counterpart, stillness, is also very important, and as for Daniel, can be a crucial component in perceiving the causality between movement and sound.
We have made many surprising revelations in our workshops as users would play the MotionComposer “incorrectly”, and in doing so discover brilliant creativity, inventiveness and musicality. To wit, Frederick (described above) played the tonality chair environment, in which arm height along the vertical axis is tracked. But Frederick was almost horizontal in his special wheelchair and thus his arm movements did not follow the intended trajectories. This led to unintentional, yet interesting consequences. Other examples include persons who reach both arms to one side of their body (fairly common), twist around in their wheelchairs, or who reached towards the floor or towards the audience instead of upwards. From a choreographic standpoint, these ways of playing are expressive and completely justified even though the logic of the system as a musical instrument is not what was intended.
With MetaBody partners InfoMus and STEIM, we are looking into extending the range of human movement properties, which we believe could improve the technology and create a richer user experience. For example, higher-order movement qualities, such as softness, lightness, tension and so on, are important to how we feel when we move and yet are largely out of reach to the technologies used in this study. Shape-based aspects -- twists of the torso, twists of limbs, bending of torso and limbs, extension and contraction -- represent a similarly out-of-reach area.(20) The dancer Muriel Romero pointed this out at the 2015 MetaBody Conference in Madrid when she said, "As soon as I do something interesting with my body, the technology gets confused".
Finally, the way we look at interactive technologies could lead to a new perspective of the body and by extension, society. Soft skills, as artistic and creative expressions, promote the vision of a sustainable and inclusive culture. Beyond therapeutic and pedagogical interests, we observed that the joy and pleasure felt by participants has a universal echo concerning the perception of the body. What is imperfect, what we call dis-able falls away in this universal perspective.
Workshop leaders and researchers: Josepha Dietz, Alicia Penalba.
Principal authors: Robert Wechsler, Andreas Bergsland, Delphine Lavau, Marcello Lussana, Ekmel Ertan.
Additional contributors: Josepha Dietz, Annika Dörr, Pablo Palacio, Alicia Penalba, Marije Baalman, and Jaime Del Val.
Videographer: Anna Pfannstiel
NOTE: If we have accidentally omitted your name, apologies, and please let us know.
1. "Using motion tracking technology (including video-based and controller-based systems), dance and music, the questions the research poses include: How can we remove barriers to expression? Technology tends to reduce gestural expression, how can we expand it -- expand range of movement, range of expression? How can we promote a more positive awareness of difference, through disalignment from normative conception of ability or intelligible expression? How do we generate affordances that invites deviant (alternative) behaviors which foster plurality and not homogenization? What are the cultural differences in the perception of difference? How do these differences relate to acceptance and integration (inclusion)? How do different societies (including governmental and non-governmental organizations) approach inclusion?" Jaime Del Val
Bergsland, A. and Wechsler, R. (2016). Interaction design and use cases for MotionComposer, a device turning movement into music. SoundEffects -- An Interdisciplinary Journal of Sound and Sound Experience, special edition on Sound and Listening in Healthcare and Therapy, Vol 5, No 1.
Dietz, J. (2015). MotionComposer.... Unterstützte Kommunikation, Kongress 2015, ISAAC (International Society for Augmentative and Alternative Communication), Technische Universität Dortmund, Germany, September 2015.
Peñalba, A., Valles, M.-J., Partesotti, E., Castañón, R. and Sevillano, M.-Á. (2015). Types of interaction in the use of MotionComposer, a device that turns movement into sound. Proceedings of ICMEM -- The International Conference on the Multimodal Experience of Music, University of Sheffield, England.
* - this is not a public website. One of the videos has not been cleared for public release. (marked in red). All names of persons in the videos have been changed.
With the support of the Culture Programme of the European Union.