Here are links to my preliminary research for the 'Imagine' project. I am researching the use of sonar (Sound Navigation and Ranging) to map the ocean floor. I am conceptualizing an invention that would allow us to see by interpreting sound waves through our ears, giving the mind an imprint of our surroundings and minimizing our reliance on our eyes.
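The core trick sonar uses is simple: time how long a ping takes to echo back, and halve the round trip. A minimal sketch, assuming a typical speed of sound in seawater of about 1500 m/s (it actually varies with depth, temperature, and salinity):

```python
# Hypothetical sketch of sonar ranging: turning an echo delay into a distance.
# The 1500 m/s figure is an approximation for seawater.

SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second (approximate)

def echo_distance(round_trip_seconds, speed=SPEED_OF_SOUND_SEAWATER):
    """Distance to a reflector from the round-trip time of a ping.

    The pulse travels out and back, so the one-way distance is
    half the total path: d = v * t / 2.
    """
    return speed * round_trip_seconds / 2.0

# A ping that returns after 4 seconds implies a seafloor about 3000 m down.
print(echo_distance(4.0))  # → 3000.0
```

Mapping the ocean floor is essentially this calculation repeated across many beams and directions.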
To further my preliminary research, I am looking into human haptic perception, which was brought to my attention as a possible resource for my 'Imagine' project. 'Haptic' refers to tasks involving active motor exploration of stimuli, while 'tactile' refers to tasks in which stimuli are applied to a skin surface. Research in multisensory processing has revealed that many cortical areas of the brain that process specific aspects of visual input are also activated during analogous haptic or tactile tasks. Positron emission tomography (PET) has revealed that a specific region of the visual cortex responds to a tactile analog of the relevant visual task, indicating that visual cortical activity might be independent of the sensory modality in which a task is presented; to me, that activity seems to be the physical act of seeing itself.

What I propose is a device that uses sound waves reflecting and echoing off the environment to stimulate this region of the visual cortex, where frequencies of the sound spectrum gather and process information in a visual form that is independent of the optic nerve. The device would be implanted somewhere between the inner ear's fluid and the region of the brain where sound waves are processed, attached in a three-way interconnection to the visual cortex region mentioned above. I believe this mode of 'seeing' could be as effective as our eyes, if not more so, or that our optical vision could be enhanced into a form of "ultra-sight," for lack of a better term. Perhaps a cure for blindness could be obtained through this radical technology, so that blind people could not only see again but view everything in a superior way that is not restricted to the shortcomings of optic sight.
Perhaps the device could also haptically convert reverberant frequencies into color information, or degrees of light and darkness, and even take in the infrared and ultraviolet spectrums that our eyes cannot see. I am basing all of this on information I have gathered from the following links:
The electromagnetic spectrum covers a wide range of wavelengths and photon energies. Light used to “see” an object must have a wavelength about the same size as or smaller than the object. The ALS (Advanced Light Source) generates light in the far ultraviolet and soft x-ray regions, which span the wavelengths suited to studying molecules and atoms.
Look at the picture of the electromagnetic spectrum. See if you can find answers to these questions:
What kind of electromagnetic radiation has the shortest wavelength? The longest?
What kind of electromagnetic radiation could be used to “see” molecules? A cold virus?
Why can’t you use visible light to “see” molecules?
Some insects, like bees, can see light of shorter wavelengths than humans can see. What kind of radiation do you think a bee sees?
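The rule behind these questions can be checked with rough numbers: the wavelength has to be comparable to or smaller than the object. A tiny sketch, using order-of-magnitude sizes I am assuming for illustration:

```python
# Sanity check on "wavelength must be about the object's size or smaller."
# All sizes below are rough, illustrative order-of-magnitude figures.

VISIBLE_MIN_NM = 400      # shortest visible wavelength (violet), in nanometres
MOLECULE_SIZE_NM = 1      # a small molecule is around a nanometre across
COLD_VIRUS_SIZE_NM = 30   # a rhinovirus is a few tens of nanometres

def can_resolve(wavelength_nm, object_nm):
    """True only if the wavelength is no larger than the object."""
    return wavelength_nm <= object_nm

print(can_resolve(VISIBLE_MIN_NM, MOLECULE_SIZE_NM))    # visible light vs. molecule
print(can_resolve(VISIBLE_MIN_NM, COLD_VIRUS_SIZE_NM))  # visible light vs. virus
print(can_resolve(0.1, MOLECULE_SIZE_NM))               # x-ray-scale light vs. molecule
```

Even violet light, at 400 nm, is hundreds of times too long to resolve a 1 nm molecule, which is why x-ray wavelengths are needed.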
The human central nervous system (CNS) can be subdivided into the spinal cord, medulla oblongata, pons, midbrain, diencephalon, cerebellum, and the two cerebral hemispheres. All of these parts have their own functions, and each region is interconnected with other parts of the brain. The CNS is also connected by cranial nerves and spinal nerves with other regions of the human body. The peripheral nervous system (PNS) includes sensory endings and effector endings; the brain's interaction with the environment is realized through the PNS. The sensory nerves (afferent fibers) provide the CNS with information from the environment, while the efferent fibers (motor neurons) control the muscles.
It is possible to divide the anatomic regions of the CNS further. The topology of the cerebral cortex is shown in Figure 4.3, from [kala92, p. 123]. Different regions of the cortex carry out different higher brain functions, but the modules should not be regarded as isolated parts; they are highly interconnected.
The human brain appears to have no localized center of conscious control. The brain seems to derive consciousness from interaction among numerous systems within the brain. Executive functions rely on cerebral activities, especially those of the frontal lobes, but redundant and complementary processes within the brain result in a diffuse assignment of executive control that can be difficult to attribute to any single locale.
Midbrain functions include routing, selecting, mapping, and cataloguing information, including information perceived from the environment and information that is remembered and processed throughout the cerebral cortex. Endocrine functions housed in the midbrain play a leading role in modulating arousal of the cortex and of autonomic systems.
Nerves from the brain stem complex where autonomic functions are modulated join nerves routing messages to and from the cerebrum in a bundle that passes through the spinal column to related parts of a body. Twelve pairs of cranial nerves, including some that innervate parts of the head, follow pathways from the medulla oblongata outside the spinal cord.
INNER EAR: http://images.google.com/imgres?imgurl=http://www.hearingcarecenter.com/images/ear.gif&imgrefurl=http://www.hearingcarecenter.com/hearing_inner.htm&usg=__VPXhXQWO_r-TsTzM-bL2oRU9DrE=&h=292&w=350&sz=74&hl=en&start=9&um=1&tbnid=SE3rVVEsIr_RiM:&tbnh=100&tbnw=120&prev=/images%3Fq%3Dinner%2Bear%26hl%3Den%26sa%3DG%26um%3D1
‘Imagine’ Project Research
At 3:57 in this video, we can see that Trent Reznor of Nine Inch Nails is using a live drum sequencer that he manipulates while it is projected onto the stage. It is as though the sequencer is actually there on the stage with him; in reality it is controlled through a series of sensors. In the future of nonlinear music editing, I hope to see rooms with multiple interactive walls that we can physically move around in, seamlessly activating devices that do not sacrifice CPU power and are constantly available for preferred routing, rather than saying, “I think I'm going to use some guitar now, let me go tune it up and plug it in.” In the future of nonlinear editing this step will be removed, both saving time and preserving ideas.
Invention Idea: Perhaps this method of performance can be adapted into a method of creating and composing music. Rather than having a virtual drum machine or virtual synthesizer controlled by a mouse and keyboard, the controls of the virtual instruments could be synced to sensors, and possibly to controls on the very walls of the room, while the images of the virtual instruments are projected. Even more, the sensors could be linked to separate software that allows users to create their own oscillators, filters, and attack-decay-sustain-release (ADSR) envelopes, so that a synth can be constructed without bending over a computer for hours at a time. This would give the user an extra advantage: a smaller interface, with the controls and sensors, could be adapted and easily transported to live events. A component of this machine must be able to store in memory the sound patch that the user constructed in the studio.
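To make the ADSR idea concrete, here is a minimal sketch of the kind of control curve the wall-based synth builder would let a user shape by hand. All the names and the simple linear ramps are my own illustration, not any particular synth's implementation; segment lengths are in samples:

```python
# Minimal ADSR (attack-decay-sustain-release) envelope sketch.
# Linear segments only; a real synth would offer curved stages too.

def adsr(attack, decay, sustain_level, release, hold):
    """Return a list of amplitude values (0.0-1.0) for one note.

    attack/decay/release: segment lengths in samples
    sustain_level: amplitude held while the note is down (0.0-1.0)
    hold: how many samples the sustain portion lasts
    """
    env = []
    # Attack: ramp up from 0 toward full amplitude
    for i in range(attack):
        env.append(i / attack)
    # Decay: ramp down from full amplitude to the sustain level
    for i in range(decay):
        env.append(1.0 - (1.0 - sustain_level) * (i + 1) / decay)
    # Sustain: hold flat while the note is down
    env.extend([sustain_level] * hold)
    # Release: ramp from the sustain level back to silence
    for i in range(release):
        env.append(sustain_level * (1.0 - (i + 1) / release))
    return env

env = adsr(attack=4, decay=4, sustain_level=0.5, release=4, hold=3)
```

Multiplying an oscillator's output by this curve, sample by sample, is what gives a patch its pluck, swell, or pad character.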
Week 5- ‘Imagine’ Project- Further Research
After further investigation, one technology that could support this graphically is the multi-touch interface, or multi-touch input device. Multi-touch is already being used by military leaders around the world to view specifics about maps and locations. Here it could present the user with options for what he or she might want the synthesizer to sound like. After that, a 16-64 step sequencer is introduced, looping the sequence in drum machine fashion; the person then has the option of stopping the loop to continue on to further composition, or keeping it looping to make different edits. With that said, here is some more research I have found regarding the use of multi-touch input devices.
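The looping behavior described above can be sketched in a few lines. This is a toy 16-step sequencer of my own invention for illustration, not any real drum machine's API; each step holds a set of drum hits and the loop wraps until stopped:

```python
# Bare-bones 16-step sequencer sketch: steps loop in "drum machine order."
import itertools

STEPS = 16
pattern = [set() for _ in range(STEPS)]

# Program a basic pattern: kick on steps 0 and 8, snare on 4 and 12,
# hi-hat on every even step.
for step in (0, 8):
    pattern[step].add("kick")
for step in (4, 12):
    pattern[step].add("snare")
for step in range(0, STEPS, 2):
    pattern[step].add("hat")

def play(pattern, bars=1):
    """Yield (step_index, hits) in loop order for the given number of bars."""
    for step in itertools.islice(itertools.cycle(range(len(pattern))),
                                 len(pattern) * bars):
        yield step, sorted(pattern[step])

events = list(play(pattern, bars=2))
```

On a multi-touch surface, tapping a cell would simply add or remove a hit from the corresponding step's set while the loop keeps running.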
I also had to look further into how much control people would want over these devices. Do they want a program that comes with preloaded sounds and a virtual device whose interface stays the same? To explore the idea, I visited a page that explains a virtual synth called Reaktor 5 (by Native Instruments), which lets you construct your own synthesizer graphically in a way that resembles a hypertext map. I also explored a music composing program called 'FlexiMusic Composer' to see what challenges might come into play when editing Digital Audio Workstation tracks, and whether that is really something I want to include in this project at all.
Native Instruments Reaktor 5: http://www.native-instruments.com/index.php?id=reaktor5_us
FlexiMusic Composer: http://www.fleximusic.com/composer/overview.htm
Screenshots of Reaktor 5:
http://cachepe.zzounds.com/media/fit,325by400/brand,zzounds/Reaktor5_Screenshot_skrewel-3259baecf267c0c43483ca9b517f1751.gif (The graphics follow the sound in a striking way. Editing the sound is as easy as dragging the mouse over the yellow green bar display. Fantastic for space opera sounds.)
More information on Reaktor 5:
Making Role Playing Games with RPG Maker XP:
I tried the first version of this in high school (when RPG Maker 2000 came out), and it's a great, practical way of making fun role-playing games.
I suggest you guys check it out. There is NO PROGRAMMING involved.
Grand Theft Auto 4:
Researching Technologies for Imaginary Entertainment Invention
The De-Scheduler: Virtual Rehearsal System
Coordinating artists' schedules is difficult at best. However, it is critical that artists rehearse in the same place at the same time in order to “feel” each other's timing, visual cues, and kinesthetic rhythm. To alleviate this annoying and ubiquitous scheduling problem, I propose The De-Scheduler Virtual Rehearsal System. The De-Scheduler is a device that allows each artist to project a holographic image of him- or herself into a common space occupied by other holographic images or real persons. One space is reserved for a rehearsal. Each participant “logs on” and projects his or her image into that space at a convenient time. Each log-on adds more interaction and information to the rehearsal; the rehearsal itself becomes a changing entity, just as it would be if all were in one place at one time. Each participant accesses the compounded information from the rehearsal session saved in the De-Scheduler's memory. All review the material simultaneously and then project their holographic images into the rehearsal space at the same time. A dancer in a studio in France is joined by the image of a musician in Senegal, and they are joined by five actors in the US. Their holographic images allow for sensory input such that they are able to see and to “feel” each other as they would if their bodies were together. The performers show up for tech rehearsals and perform the material as a unit, with the same accuracy that would have been obtained in a traditional rehearsal.
Current Holographic Technology
Musion® Eyeliner™ System is a high definition holographic video projection system allowing spectacular three-dimensional moving images to appear within a live stage setting. Live or virtual stage presenters can appear alongside and interact with virtual images of humans or animated characters. Just about the only thing you can’t do is shake their hands – or at least, only virtually. www.eyeliner3d.com – Getting close, but you can’t “feel” each other.
CNN's “hologram” interview uses technology by Vizrt and SportVu, with the help of forty-four HD cameras and twenty computers (article on this technology found at http://gizmodo.com/5076663/how-the-cnn-holographic-interview-system-works)
o On the subject’s side:
• 35 HD cameras pointed at the subject in a ring
• Different cameras shoot at different angles (like the matrix), to transmit the entire body image
• The cameras are hooked up to the cameras in home base in NY, synchronizing the angles so perspective is right
• The system is set up in trailers outside Obama and McCain HQ
• Not only is it mechanical tracking via camera communication, there’s infrared as well
• Correspondents see a 37-inch plasma where the return feed of the combined images is fed back to them
• Twenty “computers” are crunching this data in order to make it usable
o On the HQ side:
• Only two of CNN's 40-something total camera feeds were used
• The delay is either minimal, or we've gotten so used to satellite delay that we don't even notice it now
• An array of computers takes the crunched info feed from the subject’s side in order to mesh it with the video from Wolf’s side.
• Unfortunately, it doesn't look like the images are actually “projected” onto the floor of the CNN studio, so Wolf can't actually talk to the person, you know, face to face; it's not quite Star Wars just yet. Only after computers merge the video feeds together do you get a coherent hologram-plus-person scenario
Infosys, a huge technology conglomerate in India known as “the Taj Mahal of training engineers,” says it is developing 3-D imaging handsets that will be able to project free-standing holographic environments and photos that you'll be able to rotate, move through, and dissect. They mention applications such as analyzing crash sites, helping medical students practice surgery, and gaming. Practicing surgery sounds a lot more like true interaction than the other technologies. (Snarky article on Infosys technology found at http://www.cracked.com/blog/hologram-technology-by-2010-laser-swords-to-follow/)
(1) MyKey by Ford: Designed to provide a safer driving experience for new drivers. MyKey allows parents to limit the speed and stereo volume of the car: volume is restricted to 44% of total volume, and the speed limit is 88 mph.
Hits the market in 2010
(2) Video Goggles: Apple and Sony plan to team up to develop video goggles that let you view video files from your iPod or any portable media player.
(3) LG Solar Cellphone:
-Eco friendly mobile phone
-Equipped with solar panel battery
-By pointing the phone's solar panel toward natural light, solar power is converted into electricity.
-If left in natural light for a long period of time, the solar panel generates enough energy to act as a charging device.
LG plans to release this eco-friendly product in the European market by the end of 2009.
LG phone pics
I have broken down my invention into a variety of component parts, each of which serves a specific function, and I am currently detailing the specifics of each. For example, the Frequency Calculators, located in the east and west quadrants of the HSNVCA, are designed to process all soundwave information that has been separated into individual frequencies by the Hertz Filter, applying formulas that determine the properties and dimensions of the physical environment, including every object in it that has interacted with or contributed in some way to these sonic waves.

What this means is that the general ambient sound of a natural environment, which includes at the most basic level the sound of air in motion, interacts with the environment, moving through trees, around buildings, and through doorways and windows. In addition, there may be birds chirping, car engines throttling, children playing in a schoolyard, etc. As all of these diverse sounds propagate through airspace and reflect off of, bend around, are absorbed by, or refract through objects, these interactions give us information about the physical properties of the environment.

So if all of the frequencies interacting with the environment in this vast multitude of ways are fed into the Frequency Calculators, all of this sonic information can be run through formulaic calculations to determine the shape, structure, substance, texture, mass, dimension, and proximity of each object. This information is then delivered to the Multibeam Bathymetric Calibrator to actuate a preliminary 3D rendering, or mapping, of the physical environment and everything in it at any given moment in time.
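The first step such a Hertz Filter would need, separating a sound into its component frequencies, is something real signal processing already does with a Fourier transform. A toy pure-stdlib sketch (a real device would use a fast FFT at far higher resolution; all names here are my own illustration):

```python
# Toy "Hertz Filter": split a sampled sound into frequency components
# with a discrete Fourier transform (DFT). Educational sketch only.
import cmath
import math

def dft_magnitudes(samples):
    """Magnitude of each frequency bin of a real-valued signal."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # only bins up to the Nyquist frequency
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

# Synthesize one second (64 samples at 64 Hz) of a pure 5 Hz tone.
rate, n = 64, 64
tone = [math.sin(2 * math.pi * 5 * t / rate) for t in range(n)]

mags = dft_magnitudes(tone)
dominant_bin = max(range(len(mags)), key=mags.__getitem__)
print(dominant_bin)  # → 5, the 5 Hz component dominates
```

Once each frequency's strength is known this way, the environment-inference formulas described above would operate on those per-frequency magnitudes rather than on the raw waveform.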
Going back to the Frequency Calculator: its function is instrumental to the overall process involved with the Sonic element of the Haptic Sonic Nano-Vision Conversion Actuator. As its name implies, there are three primary elements to the HSNVCA, each of which contains multiple component parts; these are ultimately integrated at the Central Processing stage and funneled out to the optic nerves through two cylindrical tubes, a recent expansion to my overall design.
Here are some links to additional research for my project design.
MORE SOLIDIFIED CONCEPT
Upon further thought, I realized that a camera embedded in the human eye that records to a brain-like hard drive is unnecessary, since that is what the brain already does; it would function solely as a backup, unless it were removed when the person dies and the memories were then encapsulated forever. I think the better way to proceed is a camera, first attached to lenses and later implanted in the eyeball, that wirelessly records to a storage device in a central location (i.e., server farms in Jersey or Texas). This could have many positive side effects. If a person goes missing or is lost, one could access their memories from this server, see their last memories, and locate them. Upon death, the person's memories would be saved forever. Without getting completely morbid, I think this works best for my original concept of logging concert and theatrical memories. One can choose to selectively record events/memories to the storage device/server and access them at any time, from anywhere. Users can have their own memories play back in their own head, or access them from a computer or internet device anywhere in the world. The memories should be complete with all senses, and the user will have the option of viewing them as an outsider or 'jumping into' them in first person, as the person experiencing them.
I still think the issues of intellectual property and ethical issues should be considered, especially after the user dies, and what happens with those memories – who holds the rights to them, what becomes of them, etc.
Eventually, one could share their memories with others, as we now share a Google calendar. This would give people the opportunity to experience situations from various viewpoints.
Biohuman research/eye implants/eye surgery:
Japanese invention to find your keys- Video: http://www.liveleak.com/view?i=00b_1207503927
Human Memory Prosthesis:
journals and journal programs require you to be actively involved in the recording process; no device or invention exists where the recording is handled for you for later access
3.31.09 – change name from huDVR to huMR?? (humor?) human memory recorder? huDMR human digital memory recorder
Look up movie:
robin williams – 'The Final Cut' – movie
arthur c. clarke – 'The City and the Stars' – memories are collected and then regenerated….
consider abuse to the system, people stealing others’ memories,
are these real memories before they’re processed and emotions are involved…yes.
for proposal —-why this is different than just visual perception…