I have always been fascinated, perhaps even obsessed, with my eyes. I have often felt them looking into things, as if they had their own embodied consciousness that I was entirely, simultaneously, conscious of. It was as if we, my eyes and I, saw the world separately and together, possessing a double vision, one set within the meaty windows of my sockets, and the other looking outside, grasping the world with a replete hapticity, sending shivers across my pupils and retinas as they did so.
I have found myself trying to catch my eyes out, to second-guess their movements, their sightlines, and their interests. I must be a sight for sore eyes on the rush hour train, wrestling with what I will allow my eyes to see. I often try to resist my conforming eyes, to make them look towards the cultural periphery, to the aesthetic margins, to the haphazard shards of broken, refracted light on oily windows that few others see as they go about their busy, and sometimes dreary, lives. I want to see my eyes politicised, I want to turn them completely into organs of touch, and to feel them wander freely across the intricate layers of the film and television screen. I want Sherlockian eyes.
I hold a rather romantic notion about my eyes, and the eyes of some viewers: that they sometimes wander freely across the spaces, objects, lights, colours, bodies, movements and sounds of the diegetic world they are presented with. Narrative action may be centre frame, and all the elements of the mise-en-scene may be attempting to draw my eyes to this interaction, but I will catch myself looking to the far left of the screen, to hold my sight on an obscure pattern on a wall, or to search for the origins of a distant minor or irrelevant sound just off-screen. I want to see inside and outside the narrative simultaneously. I imagine my eyes as Sherlock-like, searching for narrative clues, new plot developments, and for the sensuous expression of character, mood and feeling, while also loosing or freeing themselves, to (unconsciously) float within all the elements of filmic or televisual material as they happen on the screen.
I see in Sherlock’s eyes this double vision: the ability to have foresight, to see into the margins of things, and to be consciously aware of the vision within me, and all around me. As Sherlock sees into the finest grain of things, so do my eyes and I. My Sherlockian eyes are forensic, haptic, self-processing and are blessed with twenty-twenty vision – they have the power to see into all things clearly.
But is this so, or just a fictive longing? What evidence do I have that my eyes do what I say they do? What evidence do I have that viewers possess a double vision? I wanted, then, to test this romantic, phenomenological notion of the viewing, carnal, haptic eyes: to explore it, to see it in action. And in deep conversation (and much debate!) with Jodi Sita, a neuroscientist at La Trobe University, the idea for The Eye Tracking and the Moving Image Research Group was born…
Introducing the Eye Tracking and the Moving Image Research Group
Jodi and I set up the ETMI group at the end of 2012. We had two central goals in bringing the group together: we wanted to utilise eye tracking technology more centrally in the analysis and examination of the moving image, and we wanted to draw together scholars and practitioners from the Sciences and the (Creative) Arts and Humanities, so that different modes of enquiry and theoretical and methodological apparatus were placed in the same analytical arena.
Our desire was to build upon existing research that drew these disparate disciplines together, and to extend the type of work being conducted in arts-science research centres such as the NeuroArts Lab in the Department of Psychology, Neuroscience & Behaviour at McMaster University in Hamilton, Ontario. Behind the formation of the group, then, was a strong commitment to cross-disciplinary and cross-institutional relationships, and to what was considered to be the necessary dialogue and interaction between different disciplines united by a shared desire to investigate vision regimes in relation to the affecting power and beauty of the moving image.
The utilisation of eye-tracking technology was thus not born out of a technological determinism; rather, the technology serves as a tool to bridge and fuse different approaches and methodologies so that new findings, new knowledge, new ways of understanding seeing and sensing can emerge. Our approach drew upon those scholars who have crossed the line, so to speak, and is particularly indebted to the work of Uri Hasson, Ohad Landesman, Barbara Knappmeyer, Ignacio Vallines, Nava Rubin, and David J. Heeger, who introduced to the field the idea of neurocinematics, the neuroscience of film, and the ‘inter-subject correlation analysis (ISC)…used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching’ (2008:1).
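For readers unfamiliar with ISC, the core computation can be sketched very simply: given one response time course per viewer (for instance, activity in a single brain region recorded while every viewer watches the same clip), ISC is the average pairwise correlation between viewers. The function below is an illustrative sketch of that idea only, not the analysis pipeline Hasson and colleagues used.

```python
import numpy as np

def intersubject_correlation(responses):
    """Mean pairwise Pearson correlation across viewers.

    responses: array of shape (n_viewers, n_timepoints), one response
    time course per viewer, recorded while all viewers watch the same
    moving-image segment.
    """
    n = responses.shape[0]
    corrs = []
    for i in range(n):
        for j in range(i + 1, n):
            # Correlation between viewer i's and viewer j's time courses.
            r = np.corrcoef(responses[i], responses[j])[0, 1]
            corrs.append(r)
    return float(np.mean(corrs))
```

High ISC values indicate that a segment drives viewers’ responses in lockstep; low values indicate that each viewer responds idiosyncratically.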
Invitations were sent out to leading, Melbourne-based neuroscientists, visual ethnographers, eye tracking experts, film and television practitioners, and film and television scholars at the end of October 2012. A list of members of the group with their research specialisms can be found at the end of this blog (the membership of the group will be extended to include national and international scholars interested in this field of enquiry). Research clusters (see below) were formed in January 2013, and texts for analysis (pilot case studies) chosen. Data will be generated and first findings produced during April/May of this year. The group will present these findings in numerous contexts, including the international eye-tracking conference to be held in Noosa, Australia, later this year, with the short-term ambition that the work will lead to fully funded research clusters and the establishment of a national body with international relationships.
Aims and Objectives
The group will explore the way viewers look at moving images, concentrating initially on the examination of film and television before extending its work to, for example, gaming and installation art. The group will be concerned with all platforms, interfaces, and portals through which the moving image is distributed and consumed, including the television (set), the cinema, the computer, and the mobile device.
The analysis of viewers’ engagement with the moving image will include assessing where they look, why and how they look within determined visual fields, and what they are feeling or experiencing as they look. At the core of the group’s work, then, is a dual concern: with where and how viewers look, and with emotion, affect, feeling, and embodiment. To this end a range of supportive investigative and methodological tools will also be used, including measures of pupil dilation, heart rate and breathing rate.
The pilot case studies will involve tracking the eyes of 20 participants while they view four film or television sequences. The sequences will be between 5 and 10 minutes long and will be viewed on a 22-inch high-definition Dell computer screen positioned 60-70 cm away from the participant, with each viewing session lasting between 25 and 40 minutes in total. Participants will view the segments in the eye tracking research facility at La Trobe University. Eye movements will be recorded with an infrared eye tracker (Tobii x120) using Tobii Studio software version 2.1.14. The eye tracker will be positioned between the screen and the participant, at a distance of 54-62 cm from the participant depending on height. Each participant will be calibrated to the set-up using a 9-point on-screen reference grid and will be confirmed to have less than 0.5 degrees of visual angle viewing accuracy during calibration with at least one eye. Eye fixations will be defined using the Tobii fixation filter, set to a velocity threshold of 30 pixels/window and a distance threshold of 15 pixels.
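Tobii’s fixation filter is proprietary, but the general principle of a velocity-threshold filter can be illustrated with a toy example: consecutive gaze samples that move less than some threshold distance are grouped into a fixation, and runs that are too short are discarded. The sketch below shows that general idea only; it is not a reimplementation of the Tobii filter, and the threshold and minimum-duration values are illustrative assumptions.

```python
import numpy as np

def detect_fixations(gaze, velocity_thresh=30.0, min_samples=5):
    """Toy velocity-threshold fixation detector.

    gaze: (n_samples, 2) array of on-screen (x, y) positions in pixels.
    A sample belongs to a fixation when its displacement from the
    previous sample is below velocity_thresh pixels; runs shorter than
    min_samples are discarded. Returns (start, end, centroid) tuples,
    where start/end are sample indices and centroid is the mean (x, y).
    """
    disp = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    slow = np.concatenate([[True], disp < velocity_thresh])
    fixations = []
    start = None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i          # a slow run (candidate fixation) begins
        elif not s and start is not None:
            if i - start >= min_samples:
                fixations.append((start, i, gaze[start:i].mean(axis=0)))
            start = None       # a fast movement (saccade) ends the run
    if start is not None and len(slow) - start >= min_samples:
        fixations.append((start, len(slow), gaze[start:].mean(axis=0)))
    return fixations
```

A gaze trace that dwells on one screen region, saccades away, and dwells on another would yield two fixations, each with its own centroid and duration.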
For the gaze data analysis, each cluster group will define several important areas of interest (AOIs) and, using a moving window technique in Tobii Studio 2.0.2, will extract information related to gaze behaviour, including fixation locations and durations and related pupillary changes, for each film or television segment and each participant. Gaze sequences will also be examined.
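To illustrate what an AOI analysis extracts, here is a minimal sketch: given detected fixations (start and end sample indices plus a centroid) and rectangular AOIs, it sums the dwell time of fixations whose centroid falls inside each AOI. The data structures and the 120 Hz sampling rate (the Tobii x120’s rate) are assumptions for illustration; Tobii Studio performs this kind of calculation internally.

```python
def aoi_dwell(fixations, aois, sample_rate=120.0):
    """Total dwell time (seconds) per area of interest (AOI).

    fixations: (start, end, (x, y)) tuples, with start/end as sample
               indices and (x, y) the fixation centroid in pixels.
    aois: dict mapping AOI name -> (left, top, right, bottom) in pixels.
    sample_rate: eye tracker sampling rate in Hz.
    """
    dwell = {name: 0.0 for name in aois}
    for start, end, (x, y) in fixations:
        for name, (left, top, right, bottom) in aois.items():
            # Count the fixation towards an AOI when its centroid
            # lies inside the AOI's rectangle.
            if left <= x <= right and top <= y <= bottom:
                dwell[name] += (end - start) / sample_rate
    return dwell
```

For a Sherlock sequence, AOIs might be drawn around a character’s face, the writing on the wall, or a shop window sign, and dwell times compared across participants.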
In addition to eye movements, other physiological measures will be recorded during the viewing process: breathing rate and heart rate, both collected using piezoelectric devices. A pulse transducer is a piezoelectric device that converts pressure changes produced by the finger’s blood pressure pulse into an electrical signal; measurements obtained from the pulse transducer will be used to derive heart rate (beats per minute). A respiratory belt, also a piezoelectric device, will be used to measure changes in thoracic or abdominal circumference during respiration; measurements obtained from the belt will be used to derive breathing rate (breaths per minute). These data will be recorded and analysed using a Powerlab/4Sp AD Instrument PC using Chart version 3.6.3/s software.
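Deriving a per-minute rate from either periodic signal is, in principle, a matter of counting peaks and scaling to a minute. The sketch below illustrates this on a clean signal; real pulse and respiration traces are noisier, and software such as Chart applies filtering and more robust peak detection than this simple local-maximum test.

```python
import numpy as np

def rate_from_signal(signal, sample_rate, min_height=0.5):
    """Derive a per-minute rate (heart or breathing) from a periodic
    piezoelectric signal by counting its peaks.

    A sample counts as a peak when it exceeds min_height and is a
    local maximum relative to its two neighbours. The rate is the
    number of peaks per second, scaled to a minute.
    """
    peaks = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > min_height
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks += 1
    duration_s = len(signal) / sample_rate
    return 60.0 * peaks / duration_s
```

The same function serves both measures: fed the pulse transducer signal it yields beats per minute, and fed the respiratory belt signal it yields breaths per minute.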
The analysis of the initial data will be made in relation to a number of key criteria:
- Gaze (scanning, glimpsing, surveying) viewing behaviour: analysis of gaze behaviour while viewing relevant aspects of the moving image
- Emotional register: analysis of the emotional qualities of the same (pupillary changes etc., and if possible measures such as HR, BR, EEG)
- Demographics: description of any areas of interest pertaining to differences in subject age/gender/ experience etc.
- Other information: as identified within cluster discussions.
The three research clusters were formed on the basis of locating three distinct analytical strands that could readily examine the data and yield patterns and differences that could then be extrapolated for further investigation.
The Stillness and Movement Cluster
In broad terms, this cluster will look at how stillness and movement create intensified regimes of looking, and/or fleeting glimpses, and restricted and unrestricted panoramas or vistas, affecting viewer perception, understanding, and identification. Stillness and movement will be examined within the frame and between frames, in terms of people, objects, camera, editing, special effects (if any), and sound. The data generated will allow the cluster to examine eye behaviour in relation to a flurry of movement (from a still scene to movement), or to compare segments containing little movement with scenes containing a great deal of movement, for example. This cluster’s analytical strand will be placed within the context of notions of fast and slow film and television, intensified continuity, and new theoretical ideas on attention and reception.
Narrative and Performance Cluster
In broad terms, this cluster will look at the way ‘eyes’ see, recognise, and respond to story structures and arcs. It will also look at the function and importance of performance and character in terms of looking and seeing, identification and feeling. The data generated will allow the cluster to examine how stories and performances command and direct viewer attention, create modes of feeling, and foster modes of identification.
Colour, Lighting, Bodies, Landscapes Cluster
In broad terms, this cluster will look at moving image aesthetics; at the precise details of visual material, to see how, why, when and where they direct viewers to look at a scene/sequence/image, and the emotional consequences of such attention. The data generated will allow the cluster to examine phenomena such as eye movements (and any emotional data) for areas of interest (AOIs) on features such as faces, objects, specific features in a landscape, and the onset or location of a specific colour.
Our Sherlockian Eyes
So, what have we begun (very tentatively) to see? From the early stages of a pilot case study using the ‘Holmes Saves Mrs Hudson’ sequence from the episode A Scandal in Belgravia (Sherlock, BBC, 2012), we observed a number of trends across the viewings analysed so far.
First, eyes followed movement and directional cues and signs. This included camera and character movement, such as locked-on fixations following Mrs Hudson’s fingers scraping along the wall at the beginning of the sequence (from 0.03-0.10 seconds).
Second, we observed an alignment in vision; in particular we saw that Sherlock’s point of view in the scene produced a close proximity in focus and attention (respondents looked where Sherlock looked, and with the same overall gaze patterning; see from 1.05 to 1.14, and the illustration directly below).
Third, we saw evidence that viewers searched for narrative information and cues: this included scanning the background wall before Sherlock first enters the scene (from 0.33-0.36 seconds), moving between the image of the smile on the wall and Holmes’ face, and spending time ‘reading’ the shop window signs, and the note on the front door, as Watson arrives at the scene (from 2.22 to 2.37, see illustration directly below).
Fourth, we found that viewers heavily focused on the eyes, face, and mouth of the central characters, in an ocular triangulation, and followed faces in line with the dialogue exchanges.
Finally, we observed that certain viewers looked at elements of the mise-en-scene, including the interior lights, the computer, and furniture, even as the more dramatic moments of the scene were taking place. It should be noted that these observations come from only a very small sample (7 people to date), which will be increased, and which still needs to undergo full data analysis and interpretation.
What do we see in these results so far? Less of the dreamed-of double vision, and more of the eyes being held to attention by narrative cues, by camera and character movement, point of view and performance. We guess this is to be expected: it equates with the results of other studies into narrative-centred visual texts, and speaks to the way viewers are pulled seamlessly into the diegetic worlds they believe and invest in. It does also seem, though, that certain viewers look to the margins of the screen, to the more ‘insignificant’ elements of the mise-en-scene, and the reasons they do this remain of great interest. Shortly, we will combine eye tracking technology with instruments that can measure such things as heart rate and skin conductance to delve deeper into the bodily reactions viewing creates in us. We would like to present more of our findings in a future blog.
It is 7.10am and I am on the slow, sluggish train into town. I use my Sherlockian eyes to scan the carriage I am sitting in. Heads are down or stare into deep space, papers are being read, feet are being shuffled, and sounds bleed out from numerous portable devices. The traces of last year’s graffiti shout their way through the new paint, spelling out the words ‘red rum’, a damp blood spot has formed on the shirt of the man sitting next to me, and I can see a reflection of myself doubled, trebled, quadrupled along the run of glowing windows.
Sean Redmond and Jodi Sita are convenors of the group.
Sean Redmond is an Associate Professor of Media and Communication at Deakin University, Melbourne, Australia. He writes on stardom and celebrity, science fiction, screen aesthetics, and authorship. His latest book is The Cinema of Takeshi Kitano (Columbia University Press, 2013).
Sean Redmond, Deakin University, Melbourne: firstname.lastname@example.org
Jodi Sita is an anatomist and neuroscientist who uses eye tracking to study human behaviour in sports, coaching, forensic science, and with the moving image. She currently works at La Trobe University, Melbourne, Australia.
Eye Tracking and the Moving Image Group Membership
Craig Batty, Senior Lecturer in Creative and Professional Writing, RMIT University.
James Breeze, the Director of Objective Digital Eye Tracking.
Dirk de Bruyn, Senior Lecturer in Animation and Digital Culture at Deakin University.
Adrian Dyer, Associate Professor, vision scientist and photographer, seeking to understand how the representation of an image is created. Currently holds positions with both RMIT & Monash Universities.
Wendy Haslem, Lecturer in Film History and New Media at Melbourne University.
Angela Ndalianis, Associate Professor of Cinema and Cultural Studies at Melbourne University.
Claire Perkins, Lecturer in Film & Television Studies at Monash University.
Sarah Pink, Professor in Media and Communication, working across the Design Research Institute and the School of Media and Communications at RMIT University, with an interest in digital visual and sensory ethnography methodologies.
Jenny Robinson, Lecturer in Communication at RMIT University, with specialist interest in eye-tracking research.
Darrin Verhagen, Lecturer and researcher in the perception and cognition of sound, RMIT University.
Kim Vincs, Professor in Dance and Motion Capture at Deakin University, and director of the Deakin Motion Lab, Deakin University’s motion capture studio and research centre.