Day 2, Stream 1
Chair
Caleb Kelly: College of Fine Arts, University of NSW, Australia
Presenters
Suzon Fuks: Waterwheel, Australia/Belgium
James Cunningham: Igneous, Australia
Ian Winters: Independent artist, US
Judith Doyle: OCAD University, Canada
Fei Jun: China Central Academy of Fine Arts
Wynnie (Wing Yi) Chung: Simon Fraser University, Canada
Emily Ip: Simon Fraser University, Canada
Thecla Schiphorst: Simon Fraser University, Canada
Megan Heyward: University of Technology, Sydney
Michael Finucan: SAE, Australia
Stahl Stenslie: Aalborg University, Denmark
ABSTRACTS
Suzon Fuks, James Cunningham, Ian Winters & Inkahoots
Waterwheel Patch
Our current research uses mobile devices to integrate remote physical movement and sound into the online structure of the Tap, allowing participation away from keyboard/mouse-based computers. Taking a cue from research using sensors in dance, we use phones carried by or attached to participants to collect sensor information on their movement (geo-location / accelerometer / motion / rhythm) as a control source for the Tap. We are also working with ‘low-bandwidth’ ways of integrating audio from old-school mobile phones as a content, feedback and control source.
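As an illustration of the kind of mapping described above, the minimal Python sketch below forwards phone accelerometer samples to a web mixer as a simple control parameter. The endpoint, message format and read_accelerometer() helper are assumptions made for illustration only, not Waterwheel’s actual API or protocol.

```python
# Hypothetical sketch: forwarding phone accelerometer samples to a Tap-style
# web interface as a control source. The endpoint, message format and the
# read_accelerometer() helper are illustrative assumptions, not Waterwheel's API.
import asyncio
import json
import random

import websockets  # pip install websockets


def read_accelerometer() -> dict:
    """Stand-in for a real sensor read on the phone (values in m/s^2)."""
    return {"x": random.uniform(-1, 1), "y": random.uniform(-1, 1), "z": 9.8}


async def stream_motion(uri: str = "ws://example.org/tap-control") -> None:
    async with websockets.connect(uri) as ws:
        while True:
            sample = read_accelerometer()
            # Map raw acceleration to a 0..1 "energy" parameter a mixer could
            # use to move, fade or scale a visual object in real time.
            energy = min(1.0, (sample["x"] ** 2 + sample["y"] ** 2) ** 0.5)
            await ws.send(json.dumps({"type": "motion", "energy": energy}))
            await asyncio.sleep(0.1)  # ~10 control messages per second


if __name__ == "__main__":
    asyncio.run(stream_motion())
```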
Ultimately we see the Tap ‘plug-in’ as a structure that invites creative/innovative projects rather than a creative project per se. Since March 2013, Inkahoots have been working to integrate the results of the research into the Tap interface.
For ISEA2013, we will present highlights of our research and demonstrate the Tap ‘plug-in’ in a brief live performative presentation exploring tracks and soundscapes of remote participants’ local waterways and waterscapes in Australia, Asia, Europe and North America.
About Waterwheel & the Tap: Waterwheel, launched in August 2011, is a rapidly growing collaborative online venue for streaming, mixing media, and sharing ideas about Water (http://water-wheel.net). The Tap opens up new ways of presenting, and new choreographic and kinetic possibilities, by letting performers mix and edit, live, up to six webcams, media and drawing, with an audience in text chat, all in one web page. It now offers a palette of tools that allow any visual object to be moved, rotated, resized or faded in real time with the mouse.
See http://blog.water-wheel.net
Judith Doyle & Fei Jun
GestureCloud: gesture, surplus value and collaborative exchange
GestureCloud is a micro-collaborative formation of artists working in Canada and China. Founded by Judith Doyle (Toronto) and Fei Jun (Beijing), we investigate art, gesture and the politics of labour exchange. In our project, 3D depth cameras are adapted for motion capture and gesture representation. We ask: how can these tools best be leveraged for use in the art studio and for collecting documentary gestures on location?
We are interested in the syntactic structures of gesture, and consider culturally-situated, historically-informed theoretical models grounded in gesture studies and other interdisciplinary fields (performativity, art history, neuroscience). Key research themes include technological mediation, post-internet conditions, and the changing definition of physical versus immaterial labour. GestureCloud addresses how our ubiquitously networked present impacts conceptions of embodiment, subjectivity, and agency. Our project probes changing modes of artistic production and issues of labour.
Over six years and four China-Canada artist exchanges, GestureCloud has moved from traditional motion capture to Kinect-based skeleton tracking of our own construction to make artworks. The .bvh files we generate are used to control avatars, mobile devices, and robotic systems. We will discuss studio-based tele-collaboration in research, using examples of GestureCloud projects in virtual and physical exhibitions.
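To make the .bvh pipeline concrete, the short Python sketch below reads the two standard sections of a BioVision Hierarchy file (joint names from HIERARCHY, per-frame channel values from MOTION) so that downstream systems such as avatars or robots can be driven from them. The file name and downstream use are illustrative assumptions, not GestureCloud’s own tooling.

```python
# Minimal sketch of reading a BioVision Hierarchy (.bvh) motion-capture file,
# the kind of data GestureCloud describes feeding to avatars and robotic systems.
from pathlib import Path


def load_bvh(path: str) -> tuple[list[str], list[list[float]]]:
    """Return joint names from the HIERARCHY block and per-frame channel values."""
    joints: list[str] = []
    frames: list[list[float]] = []
    in_motion = False
    for line in Path(path).read_text().splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            joints.append(tokens[1])          # named joints in the skeleton
        elif tokens[0] == "MOTION":
            in_motion = True                  # frame data follows this keyword
        elif in_motion and tokens[0] not in ("Frames:", "Frame"):
            frames.append([float(v) for v in tokens])
    return joints, frames


if __name__ == "__main__":
    names, motion = load_bvh("capture.bvh")   # hypothetical capture file
    print(f"{len(names)} joints, {len(motion)} frames")
```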
At ISEA, we will demonstrate for the first time a portable 3D depth camera and a suite of applications we programmed for motion capture in the field. The depth camera is made with an XBOX Kinect, embedded computing, and a 3D-printed shell. Its applications include field-based documentary motion capture, gesture recognition, and interactive media installations. We consider gesture as a meeting point between different discourses and embodied experience, where meaning can be identified and generated.
See http://www.readingpictures.com/gesturecloud.html
Wynnie (Wing Yi) Chung, Emily Ip & Thecla Schiphorst
WO.DEFY – designing wearable technology in the context of historical cultural resistance practices
WO.DEFY is an interactive kinetic garment that acts as a memorial to the Self-Comb Sisters (自梳女), a female collective based in Canton Province, China, during the early 20th century. Their pursuit of celibacy was a movement of resistance against the traditional expectations of marriage and women’s role in Chinese society. The women’s decision to take on this identity was formalised through a hair-tying ceremony, a symbolic gesture of concealing their beauty and projecting a stance of self-independence and self-sufficiency. Like feminists, the Self-Comb Sisters mustered great resolve to stand against their emotional struggles and against society.
This provocative work resonates with, recreates and reinforces the intrapersonal struggles of these women through poetic visual representations and physical contractions driven by the physiological breathing pattern and kinetic motions of the wearer. WO.DEFY uses silk fibres and human hair as anthropomorphous materials that afford an emotional dialogue through familiarity. Inspired by hair’s capacity to document health conditions, the hair is incorporated within the artifact metaphorically, as memory capsules; this defines WO.DEFY as a wearable story that encapsulates the unspoken emotional journey of the Self-Comb Sisters.
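As a rough illustration of a breathing-driven mapping of the kind described above, the Python sketch below smooths a chest-expansion reading and converts it into an actuation intensity. The sensor source, thresholds and actuator interface are assumptions for illustration, not WO.DEFY’s actual hardware or firmware.

```python
# Generic sketch: smooth a raw chest-expansion reading and map it to a 0..1
# actuator intensity, as a breathing-driven garment might. Values and ranges
# here are illustrative assumptions, not WO.DEFY's implementation.


def smooth(previous: float, raw: float, alpha: float = 0.2) -> float:
    """Exponential smoothing to remove jitter from the breath signal."""
    return previous + alpha * (raw - previous)


def breath_to_intensity(value: float, rest: float = 0.3, full: float = 0.8) -> float:
    """Map the smoothed expansion reading (0..1) to a 0..1 contraction intensity."""
    span = max(full - rest, 1e-6)
    return min(1.0, max(0.0, (value - rest) / span))


if __name__ == "__main__":
    level = 0.0
    for raw in (0.32, 0.45, 0.61, 0.78, 0.55, 0.34):  # example breath samples
        level = smooth(level, raw)
        print(f"expansion={raw:.2f} -> actuation={breath_to_intensity(level):.2f}")
```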
The explicit projection of the wearer’s “emotion as is”, without resistance, allows for a reflective evaluation between one’s situated and invented ethos. Hence, WO.DEFY brings forth a critical reflection on the meaning of the application and opens new opportunities for wearable technologies in human expression and communication. Revealing the emotional data in its entirety contributes to a richer understanding of self and an alternative way of sharing perspectives and expressing feelings without the use of semantics.
See a short video of WO.DEFY at https://vimeo.com/54584637
Megan Heyward & Michael Finucan
Notes for Walking the space in between time: media art and augmented landscapes
In a pervasive media culture featuring connectivity and content available at every step, can augmented spaces and location-based media offer the potential for fresh experiences and engagement with the physical environment?
In this creator session, Megan Heyward will discuss Notes for Walking, a media artwork staged at Middle Head National Park and Mosman Art Gallery, Sydney, from January 5-27, 2013, as part of Sydney Festival 2013. Developed in association with Mosman Art Gallery, UTS and the National Parks and Wildlife Service, Notes for Walking allows visitors with smartphones to download an app and discover a set of thirteen short video ‘notes’ tagged to locations at Middle Head, a heritage site containing decommissioned naval forts in a harbourside landscape of sandstone tunnels, bushland, cliffs and abandoned lookout posts. The project brings together Megan Heyward’s ongoing research into locative media, spatial narrative, augmented spaces and landscape practices.
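A simple way to picture how location-tagged ‘notes’ of this kind can be unlocked is a geofence check: compare the phone’s GPS fix against each note’s tagged coordinates. The Python sketch below uses the standard haversine formula; the coordinates, radius and note list are illustrative assumptions, not data from the published app.

```python
# Sketch of geofenced unlocking for location-tagged media 'notes'.
# Coordinates, radius and note IDs are illustrative assumptions.
import math


def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points (haversine)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def notes_in_range(fix: tuple[float, float],
                   notes: dict[str, tuple[float, float]],
                   radius_m: float = 25.0) -> list[str]:
    """Return the note IDs whose tagged location is within radius_m of the fix."""
    return [name for name, (lat, lon) in notes.items()
            if distance_m(fix[0], fix[1], lat, lon) <= radius_m]


if __name__ == "__main__":
    tagged = {"note_01": (-33.8270, 151.2570), "note_02": (-33.8285, 151.2590)}
    print(notes_in_range((-33.8271, 151.2572), tagged))
```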
The project works deeply with elements of landscape and is layered to acknowledge the contentious histories of the site. The video notes involve site-specific video and audio, including hydrophone and ELF (extremely low frequency) recordings of Middle Head, merged with short textual sequences that encourage audience engagement with the area. The gallery installation integrates social media photos and responses as people explore the site and experience Notes for Walking.
In this session, Heyward and sound designer Michael Finucan will share key elements of the work and discuss technical and creative issues, including the complexities of delivering video and audio content via smartphone. The session may be presented on site at Middle Head, so that ISEA2013 attendees can experience the work on location, or otherwise in an atypical setting. More broadly, the session will explore challenges and paradoxes of augmented and locative technologies, and whether they can encourage new experiences of landscape and environment.
Stahl Stenslie, David Cuartielles, Andreas Göransson & Tony Olsson
Virtual Touch
The proposal/paper focuses on the use and experience of touch as an artistic material in multimodal and computer-based environments. It presents i) how it presently feels to touch and be touched in such environments, and ii) our somatosensory future within mixed, augmented and diffuse realities.
The main aim is to give an overview of the role of haptic stimulation and corporeal interaction in interactive art, showing how touch can be used to construct meaningful experiences. Touch is analytically investigated through a phenomenological approach to how the world of our experience is constituted for us. A phenomenology of touch allows us to understand the interplay between subjective, felt embodiment and psychophysically contextualised touch designs.
Touch is approached through practice-based research. Functional prototypes of bodysuits and related systems that both touch and react to touch are presented and evaluated. Specifically, the proposal investigates how bodysuits can function as two-way tactile displays, conveying haptic, vibrotactile feedback to the body and interfacing the human to the computer through touch.
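One way to picture the two-way mapping described above is converting an incoming touch position into per-motor drive levels for a grid of vibrotactile actuators on a suit. The Python sketch below does this with a simple distance falloff; the grid layout and falloff curve are assumptions for illustration, not the authors’ firmware.

```python
# Illustrative sketch: map a normalised touch position on the suit surface to
# per-actuator intensities on a rows x cols vibrotactile grid. Layout and
# falloff are illustrative assumptions, not the project group's implementation.
import math


def actuator_levels(touch_x: float, touch_y: float,
                    rows: int = 4, cols: int = 4,
                    falloff: float = 0.25) -> list[list[float]]:
    """Return a rows x cols grid of 0..1 intensities, strongest nearest the touch."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Centre of this actuator cell in normalised suit coordinates.
            cx, cy = (c + 0.5) / cols, (r + 0.5) / rows
            dist = math.hypot(touch_x - cx, touch_y - cy)
            row.append(max(0.0, 1.0 - dist / falloff))
        grid.append(row)
    return grid


if __name__ == "__main__":
    for row in actuator_levels(0.6, 0.3):
        print(" ".join(f"{v:.2f}" for v in row))
```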
The project group can also present a workshop/demonstration/exhibition of its latest multi-user, mobile, smartphone-based wearable/smart-clothing system for geolocative haptic experiences.
A central contribution of the proposal/paper is the demonstration of how touch can be content in itself and can form so-called haptic storytelling. What is new in this approach is the combination of various theories of touch, from phenomenology to somaesthetics, and their application to interactive arts, where touch appears as a genuine medium. The proposal thus aims to contribute to the definition of new practices of inquiry and knowledge-making within electronic/media art.
New uses of touch as an artistic material converge our various, living realities, but simultaneously diverge from common ethical norms and practices. How do we want to touch and be touched? Where? By whom? And why?