- Parent Category: Computer Science
- Category: Software/Programming
When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement still activate as if trying to make the immobile limb work again. Despite neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.
In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons. After passing those signals through a mathematical decoding algorithm, these devices can use them to control computer cursors with thoughts. The work is part of a field known as neural prosthetics.
Read more: A leap forward in brain-controlled computer cursors
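The decoding step described above can be sketched in a few lines. This is an illustrative toy only, assuming a simple linear decoder that maps firing rates to cursor velocity; the algorithms used in actual neural prosthetics are more sophisticated, and every name and weight below is hypothetical.

```python
# Illustrative sketch of a linear neural decoder (not any real system's code).
# Assumption: each neuron's firing rate contributes linearly to cursor velocity.

def decode_velocity(firing_rates, weights):
    """Map a vector of firing rates (spikes/s) to a 2-D cursor velocity.

    weights[i] is an (wx, wy) pair: neuron i's contribution to x and y motion.
    """
    vx = sum(r * w[0] for r, w in zip(firing_rates, weights))
    vy = sum(r * w[1] for r, w in zip(firing_rates, weights))
    return vx, vy

def update_cursor(position, firing_rates, weights, dt=0.05):
    """Advance the cursor by the decoded velocity over one time step."""
    vx, vy = decode_velocity(firing_rates, weights)
    return (position[0] + vx * dt, position[1] + vy * dt)

# Hypothetical tuning: neuron 0 pushes right, neuron 1 pushes up,
# neuron 2 pushes up-and-left.
weights = [(1.0, 0.0), (0.0, 1.0), (-0.5, 0.5)]
pos = (0.0, 0.0)
pos = update_cursor(pos, [20.0, 10.0, 4.0], weights)
```

In practice the weights would be fit from recorded data (for example, while the person imagines tracking a moving target), but the decode-then-integrate loop above is the core idea.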
- Parent Category: Computer Science
- Category: Software/Programming
A new computer simulation of the brain can count, remember and gamble. And the system, called Spaun, performs these tasks in a way that’s eerily similar to how people do.
Short for Semantic Pointer Architecture Unified Network, Spaun is a crude approximation of the human brain. But scientists hope that the program and efforts like it could be a proving ground to test ideas about the brain.
- Parent Category: Computer Science
- Category: Software/Programming
Image-processing software is a hot commodity: Just look at Instagram, a company built around image processing that Facebook is trying to buy for a billion dollars. Image processing is also going mobile, as more and more people are sending cellphone photos directly to the Web, without transferring them to a computer first.
At the same time, digital-photo files are getting so big that, without a lot of clever software engineering, processing them would take a painfully long time on a desktop computer, let alone a cellphone. Unfortunately, the tricks that engineers use to speed up their image-processing algorithms make their code almost unreadable, and rarely reusable. Adding a new function to an image-processing program, or modifying it to run on a different device, often requires rethinking and revising it from top to bottom.
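The readability problem described above can be seen even in a tiny example. The sketch below is purely illustrative, not code from any real image-processing library: two versions of a one-dimensional box blur, one written for clarity and one hand-optimized with a sliding-window sum the way an engineer might tune it for speed. Both compute the same result, but the optimized version no longer reads like "average each pixel with its neighbors."

```python
# Illustrative only: a 3-tap horizontal box blur written two ways, showing how
# performance tricks obscure a simple algorithm.

def blur_clear(row):
    """Readable version: average each pixel with its neighbors (edges clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, n - 1)]
        out.append((left + row[i] + right) / 3.0)
    return out

def blur_tricky(row):
    """Hand-optimized version using a running window sum. Same output as
    blur_clear, but the intent is much harder to see at a glance."""
    n = len(row)
    out = [0.0] * n
    s = row[0] + row[0] + row[min(1, n - 1)]  # initial window, left edge clamped
    out[0] = s / 3.0
    for i in range(1, n):
        # Slide the window: add the entering sample, drop the leaving one.
        s += row[min(i + 1, n - 1)] - row[max(i - 2, 0)]
        out[i] = s / 3.0
    return out
```

Real optimizations (tiling, vectorization, fusing blur stages) entangle the code far more than this, which is exactly the maintenance problem the MIT work targets.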
- Parent Category: Computer Science
- Category: Software/Programming
PITTSBURGH—Computer graphics artists who produce computer-animated movies and games spend much of their time creating subtle movements such as expressions on faces, gestures of bodies and the draping of clothes. A new way of modeling these dynamic objects, developed by researchers at Carnegie Mellon University, Disney Research, Pittsburgh, and the LUMS School of Science and Engineering in Pakistan, could greatly simplify the process of creating and editing such animations.
Read more: Carnegie Mellon and Disney Develop New Model for Animated Faces and Bodies
- Parent Category: Computer Science
- Category: Software/Programming
MIT research uses information about how frequently objects are seen together to refine the conclusions of object recognition systems.
by Larry Hardesty
Today, computers can’t reliably identify the objects in digital images. But if they could, they could comb through hours of video for the two or three minutes that a viewer might be interested in, or perform web searches where the search term was an image, not a sequence of words. And of course, object recognition is a prerequisite for the kind of home assistance robot that could execute an order like “Bring me the stapler.” Now, MIT researchers have found a way to improve object recognition systems by using information about context. If the MIT system thinks it’s identified a chair, for instance, it becomes more confident that the rectangular thing nearby is a table.
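The chair-and-table idea can be sketched in a few lines. This is an illustrative toy, not the MIT system's actual model, and the co-occurrence values below are invented: each confident detection lends a little support to the objects it frequently appears alongside.

```python
# Illustrative sketch of context-based rescoring (not the MIT system's code).
# Hypothetical co-occurrence strengths, as if learned from labeled scenes:
# COOCCUR[a][b] is how strongly seeing object a supports seeing object b.
COOCCUR = {
    "chair": {"table": 0.3, "monitor": 0.1},
    "table": {"chair": 0.3},
    "car":   {"road": 0.4},
}

def rescore(detections, cooccur=COOCCUR):
    """detections: {label: confidence in [0, 1]} from a base recognizer.

    Each detection boosts its frequent companions in proportion to its own
    confidence; boosted scores are capped at 1.0.
    """
    out = dict(detections)
    for label, conf in detections.items():
        for other, strength in cooccur.get(label, {}).items():
            if other in out:
                out[other] = min(1.0, out[other] + conf * strength)
    return out

# A confident chair makes the nearby "table" hypothesis more believable.
scores = rescore({"chair": 0.9, "table": 0.5})
```

The real system learns these relationships from data and reasons over spatial layout as well, but the rescoring loop captures the basic intuition.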
- Parent Category: Computer Science
- Category: Software/Programming
by Larry Hardesty
New research could enable computer programming based on screenshots, not just code
Until the 1980s, using a computer program meant memorizing a lot of commands and typing them in a line at a time, only to get lines of text back. The graphical user interface, or GUI, changed that. By representing programs, program functions, and data as two-dimensional images — like icons, buttons and windows — the GUI made intuitive and spatial what had been memory-intensive and laborious.
- MU Research Leads to Improved Human, Object Detection Technology
- Berkeley Lab Scientists' Computer Code Gives Astrophysicists First Full Simulation of Star's Final Hours
- Programming Tools Allow Use of Video Game Processors for Defense Needs
- Dryad Lets You Intuitively Create Beautiful Trees for Your Virtual World or Game