ReDissonance is a net art work that generates real-time compositions based on an artificial intelligence’s interpretation of the emotion in Tweets.
Why did you make it?
Drawing on Foucault’s statement that systems of discourse are self-generating ‘practices that … form the objects of which they speak’, the net artwork ReDissonance self-generates arrays of objects, text messages and sonic compositions based on an AI’s interpretation of the emotions expressed in tweets, re-writing, re-imaging and re-encrypting itself. That our social media posts are regulated, categorised and encrypted belies the increasing amount of control such ambiguous networks have over gestural and affective exchanges in everyday life. Ambiguity and affect are interlinked because both take place in the liminal network spaces in between. Likewise, according to Barthes (2005), affect is ambiguous because it is a dynamic state of transition, an exchange that involves an “inventory of shimmers, of nuances, of states, of changes”. In other words, affective states have the capacity to confuse the prerequisite of clear communication and to encrypt social media forms as meaningfully indecipherable.
How did you make it?
ReDissonance was built using the game engine Unity3D and artificial intelligence algorithms:
- A tweet that references an emotion is sourced from Twitter
- The AI then reconstructs the sentence structure in virtual object form (selecting colour, type of objects and sound) according to the emotional category
- The tweet is then added to a collective emotional category and algorithmically remixed with tweets from the same category
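The pipeline above can be sketched in outline. This is a minimal, hypothetical illustration, not the artwork’s actual implementation: the emotion keyword lexicon, the colour/object/sound mappings and the word-shuffle remix strategy are all assumptions standing in for the AI and Unity3D components.

```python
import random

# Hypothetical emotion lexicon standing in for the AI classifier (assumption).
EMOTION_KEYWORDS = {
    "joy": {"happy", "joy", "delighted"},
    "anger": {"angry", "furious", "rage"},
    "sadness": {"sad", "lonely", "grief"},
}

# Illustrative mapping from emotional category to visual/sonic parameters
# (colour, type of object, sound), as described in the second step above.
CATEGORY_STYLE = {
    "joy": {"colour": "yellow", "object": "sphere", "sound": "chime"},
    "anger": {"colour": "red", "object": "spike", "sound": "drone"},
    "sadness": {"colour": "blue", "object": "cube", "sound": "pad"},
}


def classify(tweet: str):
    """Return the first emotional category whose keywords appear in the tweet."""
    words = set(tweet.lower().split())
    for category, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return category
    return None


def add_and_remix(tweet: str, archive: dict, rng: random.Random):
    """Classify a tweet, style it, add it to its collective emotional
    category, then remix its words with archived tweets of that category."""
    category = classify(tweet)
    if category is None:
        return None
    pool = archive.setdefault(category, [])
    pool.append(tweet)
    # Remix: shuffle the combined word pool of the whole category.
    words = " ".join(pool).split()
    rng.shuffle(words)
    return {
        "category": category,
        "style": CATEGORY_STYLE[category],
        "remix": " ".join(words),
    }
```

Each call both extends the category’s archive and produces a fresh remix, so the composition keeps re-writing itself as new tweets arrive.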
Your entry’s specification
The artwork is built in WebGL and can be viewed full screen in any browser at http://www.markcypher.com/twitexchange/build/, or viewed online as a video at https://vimeo.com/369477196