No. 13

Understand _V.T.S

By: 俊廷 賴

Entrant’s location: Taiwan


What did you create?

An interactive wearable device that combines real-time YOLO v3 (You Only Look Once) object detection with the brain's own plasticity, so that an artificial algorithm and a biological one work together. The detection results are delivered to the skin as tactile signals, and the adapting neural network comes to treat them as a kind of vision-to-touch sense. The wearer also controls a vision-extension robot to explore the world, building new cognitive pathways and allowing us to experience new kinds of feelings and new ways of "seeing" the world. It offers another way to explore and re-understand the world, and to become aware of signals, meditation, learning, practice, life, play, and self.

Why did you make it?

The characteristics of "brain plasticity" give our senses more ways to perceive the world. This project experiments with collaboration between an artificial recognition algorithm and the brain's own algorithms. Ordinary skin "vision" relies on the brain parsing raw information into a kind of feeling, so I bring in YOLO v3 object detection and convert its output into braille-style messages; the experimental hybrid system can then understand objects and text even at very low resolution. The detection model's weights are easy to extend: open-source weights and datasets on the Internet can be used to enlarge, upload, and update the database, which substantially increases how much of the world this kind of assistive vision can identify. The device also stages a situation in which "machines that detect pixels process and predict information in order to identify the world." The operation and awareness of this self-extension blur the brain's predictive machinery between reality and the virtual world, just as a machine that collects pixels cannot recognize anything that is not pixels. Will humans keep producing stories out of labels and symbols forever and miss the truth? If we want to step outside this life game, perhaps the key is not in identifying, but in simply experiencing it and feeling it.
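To illustrate the label-to-braille step, here is a minimal Python sketch of how a YOLO class label could be turned into six-dot braille cells and then into on/off commands for a row of tactile actuators. The alphabet table is only partial, and `drive_cell` is a hypothetical stand-in for whatever servo or vibration interface the build actually uses.

```python
# Minimal sketch: convert a detected class label into 6-dot braille cells
# and push them to a row of tactile actuators, one cell at a time.
# The actuator call (drive_cell) is hypothetical; swap in the real servo driver.

import time

# Partial Grade-1 braille table: letter -> set of raised dots (1..6).
BRAILLE = {
    "a": {1}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "g": {1, 2, 4, 5}, "n": {1, 3, 4, 5}, "o": {1, 3, 5},
    "p": {1, 2, 3, 4}, "r": {1, 2, 3, 5}, "s": {2, 3, 4},
    "t": {2, 3, 4, 5},
}

def label_to_cells(label):
    """Map a YOLO class label (e.g. 'person') to a list of 6-bit cells."""
    cells = []
    for ch in label.lower():
        dots = BRAILLE.get(ch)
        if dots is None:
            continue                      # skip characters we cannot render
        cells.append([1 if d in dots else 0 for d in range(1, 7)])
    return cells

def drive_cell(cell, hold_s=0.8):
    """Hypothetical actuator call: raise/lower the six pins for one cell."""
    print("pins:", cell)                  # replace with servo/PWM commands
    time.sleep(hold_s)

def present_label(label):
    for cell in label_to_cells(label):
        drive_cell(cell)

if __name__ == "__main__":
    present_label("cat")   # e.g. a detection result from YOLO v3
```

Presenting one cell at a time keeps the actuator count low: six pins are enough to spell out any label, at the cost of reading speed.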

How did you make it?

In the current version, the system is divided into three main parts. In the first part, a Raspberry Pi camera collects image data and transfers it to a motor array that transmits it directly to the wearer's back. In the second part, image data collected by a second Raspberry Pi camera is processed by an Nvidia Jetson Nano; the YOLO v3 object-detection results are converted into braille information and transmitted to the thigh by servo motors. In the last part, I developed a walking machine that imitates eye movement to aim the Raspberry Pi camera, with a DualShock 4 controller used to operate the Arduino-based robot. In addition, a Raspberry Pi collects the noise from the motor array and converts it into an LED signal in real time.
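A rough sketch of the second part's detection loop on the Jetson Nano, assuming the standard Darknet YOLO v3 files (`yolov3.cfg`, `yolov3.weights`, `coco.names`) and OpenCV's `dnn` module; `send_to_braille_stage` is a hypothetical hook where the braille/servo output described above would be driven, and the camera index is a placeholder for the actual Pi camera device.

```python
# Sketch of the Jetson Nano part: grab frames, run YOLO v3 with OpenCV's dnn
# module, and forward the most confident label to the braille/servo stage.
# File paths and send_to_braille_stage() are placeholders for the real build.

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().splitlines()
out_layers = net.getUnconnectedOutLayersNames()

def best_label(frame, conf_threshold=0.5):
    """Return the most confident class label in the frame, or None."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    best = (None, 0.0)
    for output in net.forward(out_layers):
        for det in output:                 # det = [x, y, w, h, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > conf_threshold and conf > best[1]:
                best = (classes[class_id], conf)
    return best[0]

def send_to_braille_stage(label):
    print("detected:", label)              # placeholder for the servo braille output

cap = cv2.VideoCapture(0)                  # camera index is a placeholder
while True:
    ok, frame = cap.read()
    if not ok:
        break
    label = best_label(frame)
    if label:
        send_to_braille_stage(label)
```

Running only the single most confident detection per frame keeps the tactile channel from being flooded; the same loop could instead queue several labels if the wearer can read faster.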
