Musicality Written Documentation

The ultimate goal of this project was to create an Arduino-based interface that would allow a user wearing an Emotiv headset to simply think of a note and have that note reproduced by a speaker. We accomplished this much, but were unable to get the Arduino to deal with more complex musical ideas, such as tempo, rhythm, or note duration. Before we get too involved with what did and didn't work, here are the materials and software we used:

– Processing to control the Arduino and Emotiv headset

– Emotiv headset

– Arduino Leonardo (thanks, Remy!)

– 8-ohm speaker

– 100-ohm resistor

– USB cable

– Mind Your OSCs and Emotiv EPOC software to monitor the Emotiv headset

– Computer

As you can see, the materials themselves were not difficult to obtain or use, but wrangling the computer into doing what we wanted took time. We spent a couple of classes on a program called Goodbye World, which would have used Emotiv brain waves to turn off an LED, but that never came to fruition, and we eventually decided to dive right into the goal of our project. We ran into several roadblocks right away. The first was that we had installed a version of Processing that the Arduino and Emotiv could not handle; through much trial and error, we figured out that we needed the 32-bit version of Processing 1.5.1 to get it to work. Then, even with the proper version of Processing installed, we were still receiving error messages because we hadn't imported the Arduino Firmata library, which allows users to generate tones and do other music-related activities. With the software itself ready to go, we were ready to start fiddling with the code. However, Firmata ultimately wasn't able to talk to Mind Your OSCs, so we had to import the Arduino OSC library to get things to work.
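The data path, then, was Emotiv EPOC software → Mind Your OSCs → Processing → Arduino, with each Emotiv "action" arriving as an OSC message. A minimal sketch of pulling the action name out of an OSC-style address, in plain C++ so it can be run off-hardware; the `/COG/LEFT`-style address format and the helper name `actionFromAddress` are illustrative assumptions, not Mind Your OSCs' documented output:

```cpp
#include <string>

// Extract the last path segment of an OSC-style address,
// e.g. "/COG/LEFT" -> "LEFT". The address format here is an
// assumption; check Mind Your OSCs' console for the real
// addresses it emits for each trained Cognitiv action.
std::string actionFromAddress(const std::string& address) {
    std::size_t slash = address.rfind('/');
    if (slash == std::string::npos) return address;  // no '/': use as-is
    return address.substr(slash + 1);
}
```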

The tuneMelody Leonardo example was the basis for our final code, used in conjunction with the Arduino OSC library and modified to play not a random set of four notes, but the four notes used in the first phrase of "Jingle Bells." Once the code was compiling and running properly, the major problem was tuning it to the brain-wave strength our Emotiv wearer could actually produce. For example, the code expected the wearer to be thinking a certain note with approximately thirty percent of his brain power, but in practice he often didn't get above ten or twenty percent. By tweaking the sensitivity of the Emotiv in the code and playing around with the connection of the 8-ohm speaker, we were able to have the speaker reproduce the sounds the wearer was thinking of. However, we had to train him not to think of the sound itself, since it's quite difficult for someone without perfect pitch to remember a note exactly, but to pick a directional association instead, such as thinking of the action of moving left whenever he wanted the starting note of "Jingle Bells."
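The shape of that trigger logic can be sketched in plain C++ (off-hardware, so it is testable). The frequencies are the standard Arduino pitches.h values for the pitches in the first phrase of "Jingle Bells"; the "left" mapping comes from the description above, but the other action names, the helper names, and the exact threshold handling are hypothetical stand-ins for our actual code. On the real Leonardo the selected frequency would be passed to tone():

```cpp
#include <string>

// pitches.h values for the four pitches of the phrase.
const int NOTE_C4 = 262, NOTE_D4 = 294, NOTE_E4 = 330, NOTE_G4 = 392;

// Map each trained Cognitiv action to one pitch. "left" -> starting
// note is from our training; the other three mappings are made up.
int noteForAction(const std::string& action) {
    if (action == "left")  return NOTE_E4;  // starting note of the phrase
    if (action == "right") return NOTE_G4;
    if (action == "push")  return NOTE_C4;
    if (action == "pull")  return NOTE_D4;
    return 0;                               // unknown action: stay silent
}

// Fire only when the wearer's action power clears the threshold.
// We started around 0.30 and had to tweak it down in practice.
int noteToPlay(const std::string& action, float power,
               float threshold = 0.30f) {
    if (power < threshold) return 0;        // too weak: no tone
    return noteForAction(action);
}
```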

This is as far as our group got; at this point we ran into two larger problems that could not be solved in the limited time we had left: the delay between Processing recognizing that the Emotiv wearer was thinking of a certain note and the speaker actually playing that note, and the unreliable duration and number of repetitions of whatever was played. Because of the delay, the wearer did not know whether his brain waves had been picked up, so it was difficult for him to judge at what strength and frequency to send the signal, which often resulted in many more "beeps" from the Arduino than we wanted. The next step for this project would be to eliminate or greatly reduce the delay, so that we could experiment with using the Emotiv to create rhythmic patterns with the notes.
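One possible fix for the repeated-"beep" problem, sketched in plain C++: only retrigger once a cooldown window has elapsed since the last accepted reading. This is an idea we did not implement, not our actual code; shouldTrigger() would be called for each above-threshold OSC reading, with nowMs playing the role of Arduino's millis():

```cpp
// Rate-limits tone triggers: readings that arrive inside the
// cooldown window after an accepted trigger are ignored.
struct Debouncer {
    unsigned long lastTriggerMs = 0;
    unsigned long cooldownMs;
    bool armed = false;  // false until the first trigger is accepted

    explicit Debouncer(unsigned long cooldown) : cooldownMs(cooldown) {}

    // Returns true if this reading should produce a tone.
    bool shouldTrigger(unsigned long nowMs) {
        if (armed && nowMs - lastTriggerMs < cooldownMs)
            return false;            // still inside the cooldown window
        lastTriggerMs = nowMs;
        armed = true;
        return true;
    }
};
```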

VIDEO: http://youtu.be/k5jghl4hfHM


One Response to Musicality Written Documentation

  1. rusnuvol says:

    The code includes Firmata, but we did not use Firmata
