Sean: Computational Research

Project 1:

Project Name/Creators: Blur Braincoat by Diller Scofidio + Renfro

Project URL:

Project Description: This project reinvents the way humans interact by removing their ability to see one another clearly. It connects participants through a network and has them interact without being able to judge each other based on visual appearance.

Input: The user of the braincoat acts as the input by completing a survey of questions before interacting with other participants. The user's survey answers, rather than the user's own expressions, determine the responses the braincoat emits to other users.

Output: After completing the survey, the user interacts with other participants through the coat, which gives off simple digital signals, such as lights, to show how the user would respond. These responses can convey affinity, embarrassment, or even shock.

Process: The coat connects users through a network and matches people based on their survey results. If a user is usually shy when interacting face to face, the coat emulates that trait through its light components or even sounds. The coat's central design idea is that it clouds the user's features, keeping each interaction anonymous.
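The survey-based matching described above might be sketched roughly as follows. The similarity metric and the greedy pairing are my own assumptions for illustration, not the artists' actual algorithm:

```python
from itertools import combinations

def similarity(a, b):
    """Fraction of survey questions two participants answered identically
    (a hypothetical metric, not the braincoat's real scoring)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def match_pairs(surveys):
    """Greedily pair the participants with the most similar survey answers.

    `surveys` maps a participant's name to their list of answers.
    """
    # Score every possible pair, most similar first.
    scored = sorted(
        combinations(surveys, 2),
        key=lambda p: similarity(surveys[p[0]], surveys[p[1]]),
        reverse=True,
    )
    matched, pairs = set(), []
    for a, b in scored:
        if a not in matched and b not in matched:
            pairs.append((a, b))
            matched.update((a, b))
    return pairs

# Toy data: four participants, four yes/no survey answers each.
surveys = {
    "ann":  [1, 0, 1, 1],
    "ben":  [1, 0, 1, 0],
    "cara": [0, 1, 0, 0],
    "dev":  [0, 1, 0, 1],
}
print(match_pairs(surveys))  # pairs the two most alike, then the remainder
```

The point of the sketch is only that matching happens on answers, never on appearance, which is the anonymity the coat is built around.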

Statement: The braincoat works off the concept of social relations and how we form preconceived ideas about people based simply on their physical features. The artists explain that we judge a person's gender, age, race, and even social standing at a glance. Acting as a kind of physical social network, the braincoat matches individuals not on appearance but on the compatibility of their answers.

Critique: The concept of the braincoat is revolutionary for social networking, but it lacks practicality in that the artists need to administer a questionnaire and have users wear a prosthetic suit for it to function. The idea could be improved if anonymity could somehow be upheld without an external garment or questionnaire.

Project 2:

Project Name/Creators: Face Visualizer by Daito Manabe

Project URL:

Project Description: This project recreates human facial expression and emotion through computer programming and electrical stimulation of the facial muscles. The design allows the muscles of the face to be controlled externally, producing unusual expressions that might not be possible to recreate naturally.

Input: For this project, the input is the synchronized electrical signals given off by a computer program. These can be modulated into different patterns; in the artist's demonstration, the computer emitted signals in time with music.

Output: The output of the project is the way the face reacts to the electrical stimulation. Manabe explained that the signals hurt, feeling like needles on the skin, but the larger issue was that the stimulation would sometimes inhibit his sight and breathing.

Process: The project generates an array of digital signals and translates them into facial movements through electrodes attached to the user's major facial muscles. Linked to music, it gives off the illusion that the face is dancing, moving in many ways that do not seem natural.
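One way the music link could work is to let louder moments in the audio recruit more muscle channels. The channel names and thresholds below are purely illustrative, a sketch of the idea rather than Manabe's actual hardware mapping:

```python
# Hypothetical mapping from audio loudness to stimulation channels.
# Each named facial-muscle channel fires once the amplitude (0.0-1.0)
# crosses its threshold -- louder beats recruit more of the face.
CHANNELS = {
    "frontalis": 0.2,     # forehead (illustrative threshold)
    "zygomaticus": 0.5,   # cheek
    "orbicularis": 0.8,   # around the eye
}

def pulse_pattern(envelope):
    """For each time step of an amplitude envelope, return the set of
    channels whose threshold that amplitude meets or exceeds."""
    return [
        {name for name, thr in CHANNELS.items() if amp >= thr}
        for amp in envelope
    ]

# A toy amplitude envelope for one beat of music.
beat = [0.1, 0.3, 0.6, 0.9, 0.4, 0.0]
for amp, active in zip(beat, pulse_pattern(beat)):
    print(f"{amp:.1f} -> {sorted(active)}")
```

Because the pattern is driven by the sound and not by any intent of the wearer, the resulting movements look synchronized to the music but disconnected from emotion, which matches Manabe's finding below.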

Statement: Although the original concept was to capture Manabe's emotional expressions and transfer them to another person, the hypothesis proved incorrect: the artist found it impossible to recreate emotional expressions such as a smile without human intervention. The project was instead used to show synchronized motion between two users' faces.

Critique: The concept is interesting, but it strays from its original purpose. I believe that if Manabe could actually find a way to match his expressions to another user's face, the result would look far more synchronized than random facial muscle movement.

Project 3:

Project Name/Creators: XSense by Adam Danielsson, Per Nilsson, Melvin Ochsmann, Koen Van Mol, Robert Winters, Tamara Klein, and Andreas Nertlinge

Project URL:

Project Description: This project allows users to experience a natural phenomenon called synesthesia, the cross-association of senses such as color, sound, and sight, through a modified helmet. Synesthesia is a rare condition in which sensory pathways in the brain cross, so that stimulation of one sense involuntarily triggers another. The result can include hearing visuals or associating a color with a taste.

Input: The input in this project is the physical space and the events going on around the user. As the user experiences sensory stimulation, the helmet converts the signals or crosses them with other signals; for example, it gives the user the ability to receive sound and visualize it through the helmet's embedded LED lights.

Output: The output of the project is the user's reaction and response, which retains a sense of spatial awareness even with the sensory inputs crossed. The best description of the project is that it is similar to a sonar or echolocation system, in which animals send out sound waves and build an awareness of what is around them.

Process: The process of this project takes in environmental stimuli and translates those external signals into the helmet's internal components. Even with their inputs crossed, users can still feel as if they recognize the space around them.
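The sound-to-light crossing could be sketched as a mapping from a sound's pitch to an LED color, with low frequencies rendered red and high ones violet. The frequency band and the color scale here are assumptions of mine, not XSense's actual mapping:

```python
import colorsys

# Assumed frequency band covered by the helmet's LEDs (illustrative).
F_LOW, F_HIGH = 100.0, 4000.0

def freq_to_rgb(freq_hz):
    """Map a sound frequency to an 8-bit RGB color along the hue wheel,
    so pitch becomes something the wearer sees instead of hears."""
    # Clamp to the band, then normalize to [0, 1].
    f = min(max(freq_hz, F_LOW), F_HIGH)
    t = (f - F_LOW) / (F_HIGH - F_LOW)
    # Sweep hue from red (0.0) toward violet (0.8) as pitch rises.
    r, g, b = colorsys.hsv_to_rgb(0.8 * t, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(freq_to_rgb(100))    # low pitch  -> red
print(freq_to_rgb(4000))   # high pitch -> violet
```

A continuous mapping like this is what would let the brain relearn the crossed signal: the same sound always produces the same light, so the wearer can still build a consistent picture of the space.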

Statement: The statement of this project is to give users feelings and sensations that could not otherwise be experienced without synesthesia. It seems to show that we can remain aware even with our senses crossed, as the brain finds ways to interpret these new environmental inputs.

Critique: I find the concept appealing, but I would like to see how the device could be used in real-world scenarios, such as providing sensory vision to the blind or even visual hearing to the deaf.


