AI Project Final

So in the end, the final prototype of the AI is not fully functional, but it should recognize some words. The hard part at first was learning how voice recognition works and how to use the software I had chosen. I was using an Arduino library called uSpeech, which runs the microphone signal through an algorithm that recognizes phonemes and matches them to letters.

When I first used it, it wasn’t working properly (mostly because I was using an outdated version), but now that it’s been updated (and it gets updated constantly) it works for the most part. All you need is a pre-amplified microphone (it doesn’t have to be a huge mic like the one seen above) and you’re set to go.
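
To give a sense of how little code it takes to get started, here is a minimal phoneme-echo sketch. It only uses the uSpeech calls that already appear in the full example further down (signal, calibrate(), getPhoneme()) and simply prints whatever phoneme class the library reports to the serial monitor:

#include <uspeech.h>

signal voice(A0);        // pre-amped microphone on analog pin A0

void setup(){
  voice.calibrate();     // sample the background noise first
  Serial.begin(9600);
}

void loop(){
  char phoneme = voice.getPhoneme();
  Serial.println(phoneme);   // echo the detected phoneme so you can see what it hears
}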

[video]

It’s not perfect, but at least it works. What I ultimately wanted was for it to recognize full words and respond with an action. To tie it into the spider robot, here’s an example sketch that would simulate turns/movement:

#include <uspeech.h>

signal voice(A0);       // pre-amped microphone on analog pin A0
String collvoice;       // phonemes collected so far for the current word

void setup(){
  voice.calibrate();    // calibrate against background noise
  Serial.begin(9600);
}

void left(){
  voice.calibrate();
  Serial.println("left");
}

void right(){
  voice.calibrate();
  Serial.println("right");
}

void loop(){
  char phoneme = voice.getPhoneme();

  if(phoneme != 'h'){
    // keep collecting phonemes until the word is finished
    collvoice = denoise(phoneme, collvoice);
  }
  else {
    // score the collected string against each word template;
    // the lowest score is the closest match
    int i[3], j, best, x;
    i[0] = umatch(collvoice, "sop"); // stop
    i[1] = umatch(collvoice, "ez");  // left
    i[2] = umatch(collvoice, "i");   // right

    // find the lowest number
    best = i[0];
    x = 0;
    for(j = 1; j < 3; j++){
      if(i[j] < best){
        x = j;
        best = i[j];
      }
    }

    if(x == 0){
      // stop: no movement
    }
    if(x == 1){
      left();
    }
    if(x == 2){
      right();
    }

    collvoice = ""; // reset for the next word
  }
}

Lastly, for the video-tracking part, Kim would explain how it would have worked if we had more time and I wasn’t so fixated on the voice part. It would use a webcam and OpenCV.
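
Since we never got to build that part, here is only a rough sketch of the idea, assuming OpenCV’s C++ API and a simple color-threshold tracker; the marker color, the threshold values, and the left/right decision printed to the console (which would really be sent to the Arduino over serial) are all placeholders, not code we actually wrote:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cam(0);              // default webcam
    if (!cam.isOpened()) return 1;

    cv::Mat frame, hsv, mask;
    while (cam.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // threshold for a red-ish marker (ranges chosen only for illustration)
        cv::inRange(hsv, cv::Scalar(0, 120, 120), cv::Scalar(10, 255, 255), mask);

        // the centroid of the marker tells the robot which way to turn
        cv::Moments m = cv::moments(mask, true);
        if (m.m00 > 0) {
            int cx = int(m.m10 / m.m00);
            std::cout << (cx < frame.cols / 2 ? "left" : "right") << std::endl;
        }

        cv::imshow("tracking", mask);
        if (cv::waitKey(30) == 27) break; // Esc quits
    }
    return 0;
}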
