Blind Landing is an interactive installation inspired by the novel "Night Flight". Its pilot, Fabien, is forced to make "instrument landings," trained to rely on technology rather than his own senses for flight control. As a metaphor, the work is installed as a control room in which each audience member becomes an individual pilot tester, predicted and guided by an AI mass-viewer. The participant's gaze and attention-level data are displayed alongside the AI viewer's on a gradient scale of five shades of red, ordered by the priority of the prediction. Using a custom-built computer, an EEG helmet, and an eye tracker, the installation takes the search results for "most viewed videos on YouTube" as its case study, letting participants see how predictable they are under the recommendation algorithm. As SNS platforms with content-promotion algorithms seek to expand their business models through radical symbiotic techniques such as brain-computer interfaces, those techniques risk biasing mass media at the micro level of cognition, based on customers' decision-making habits. The work captures the points at which viewing content becomes daunting and the viewer becomes predictable. Blind Landing reveals how our minds are made to follow the line drawn by communicative capitalism's subjugation, submitting the mind to explosively expanding perceptual stimuli that not only generate panic and anxiety but also destroy all subjectivation. The interactive work reveals the danger of biased economic, political, and social outcomes that could occur in the near future.
What did you create?
Blind Landing is an interactive installation that lets the audience see how predictable humans are, via a brain-computer interface and an eye tracker. The work shows participants the most viewed videos on YouTube, highlighted with the gaze of a virtual viewer that predicts the typical gaze pattern of most viewers. Once the participant's data have been collected, the participant's viewing record is overlaid with the AI viewer's predictive gaze, masking the points where the participant's gaze fell according to the priority of each predicted fixation position. On one hand, the work aims to reveal the mechanism of one's unique way of appreciating content. On the other, its goal is to show how our preferences fall into line with the predetermined system of automated connection in internet services, which automates users' subjectivities with a flood of perceptual stimuli that not only generates panic and anxiety but also confines users within their own preferences.
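The masking step described above can be sketched as follows. This is a minimal illustration only, assuming a five-level priority scale and RGB colors; the function names and exact shade values are ours, not the installation's:

```python
def priority_to_shade(priority: int) -> tuple:
    """Map a predicted fixation's priority rank (1 = highest) to one of
    five shades of red; a higher priority yields a deeper red."""
    if not 1 <= priority <= 5:
        raise ValueError("priority must be in 1..5")
    fade = (priority - 1) * 50  # 0, 50, 100, 150, 200
    return (255, fade, fade)

def predictive_mask(predictions):
    """Turn (x, y, priority) fixation predictions into colored mask
    patches that overlay the participant's recorded gaze points."""
    return [{"x": x, "y": y, "color": priority_to_shade(p)}
            for x, y, p in predictions]
```

A rendering layer would then draw each patch over the recorded viewing history, so the participant sees their own gaze progressively blocked by the AI viewer's highest-priority predictions.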
Why did you make it?
Today the total number of daily users of YouTube has reached 63 million, more than 50% of whom view content on their mobile devices. These streaming services and video-centric social networking services profit from 'being seen'. To hold users' attention constantly, images are structured in a sensual configuration that demands our attention and, ultimately, our time. What's more, the streaming services provide content in accordance with our own preferences, making users even more eager to dive into a system designed by the content provider to control our gaze. As SNS platforms with content-promotion algorithms seek to expand their business models through radical symbiotic techniques such as brain-computer interfaces, those techniques risk biasing mass media at the micro level of cognition, based on customers' decision-making habits.
How did you make it?
Blind Landing shows participants how the visual stimuli of algorithmically promoted content affect their attention and behavior patterns. The installation's software uses participants' EEG brain signals to identify the attended scene while they watch YouTube. For comfortable wear, the EEG cap was sewn inside a 1970s pilot helmet. To allow 360 degrees of rotation, a hanger with an iron pivot was installed on the ceiling. In the EEG analysis stage, we used a wireless EEG system to measure the participant's cognitive state in real time. In this process, pre-trained mathematical models, created prior to the viewing of the YouTube clips, were used for state classification. Technically, raw EEG signals are pre-processed with g.HIsys to remove noise, and then features such as gamma-band power are extracted. A state-of-the-art machine-learning model (e.g., a convolutional neural network) is adopted as the classifier. Lastly, the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture is used to simulate the virtual viewer. The vision model is presented with five YouTube video clips as test stimuli, and is programmed to attend to each object in the video, located with an object-detection model (YOLO9000). As a result, the participant's gaze and attention-level data are displayed alongside the AI viewer's on a gradient scale of five shades of red, ordered by the priority of the prediction.
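The feature-extraction step above can be illustrated with a minimal, NumPy-only sketch of gamma-band power for one EEG channel. The installation itself uses g.HIsys for preprocessing; the sampling rate and band limits here are common defaults assumed for illustration, not the project's actual settings:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Average spectral power of `signal` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= low) & (freqs <= high)
    return float(psd[band].mean())

def gamma_power(signal: np.ndarray, fs: float = 256.0) -> float:
    """Gamma band is commonly taken as roughly 30-100 Hz (capped at Nyquist)."""
    return band_power(signal, fs, 30.0, min(100.0, fs / 2))
```

In a pipeline like the one described, such band-power values, computed over short sliding windows per channel, would form the feature vectors fed to the pre-trained classifier.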
Your entry’s specification
Blind Landing was installed in a 4 x 3 x 3 m open-lab room. The front wall holds the control-room buttons that symbolize the algorithmic system. On the left wall, a wide-screen TV plays the documentation video. On the right, the research paper and notes on the artwork are archived. In the middle of the room stand a table and chair; on the table sits a custom-built PC with an eye tracker. The EEG helmet is suspended from the ceiling and can be pulled down to 175 cm with 360 degrees of rotation, to accommodate each participant's sitting height.