neural network

Latest

  • Robotic legs simulate our neural system, lurch along in the most human-like way so far

    by Steve Dent
    07.06.2012

    We've seen some pretty wonky bipedal robots before, but scientists at the University of Arizona have gone straight to the source -- us -- to make one with a more human-like saunter. It turns out it's not just our skull-borne computer that controls gait: a simple neural network in the lumbar region of our spine, called the central pattern generator (CPG), also fires to provide the necessary rhythm. By creating a basic digital version of that and connecting some feedback sensors in the legs, the team produced a more natural human stride (balance aside) -- and on top of that, it didn't require the tricky processing used in other striding bots. Apparently this sheds light on why babies can make that cute walking motion even before they toddle in earnest, since the necessary CPG system comes pre-installed at birth. That means the study could lead to new ways of stimulating that region to help those with spinal cord injuries re-learn to walk, and produce better, less complex walking robots to boot. Judging by the video, it's a good start, but there's still a ways to go before they can mimic us exactly -- you can watch it after the break.
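CPGs like this are commonly modeled as coupled oscillators that settle into the alternating left-right rhythm of walking. As a loose illustration (a generic sketch, not the Arizona team's actual controller), two phase oscillators coupled to lock in antiphase could drive a pair of hip joints:

```python
import math

def cpg(n_steps, dt=0.01, omega=2 * math.pi, k=5.0):
    """Two phase oscillators coupled so they lock in antiphase,
    producing the alternating left/right rhythm of a walking gait."""
    phase = [0.0, 1.0]  # arbitrary, unsynchronized starting phases
    left, right = [], []
    for _ in range(n_steps):
        # each oscillator is pulled toward being half a cycle from the other
        d0 = omega + k * math.sin(phase[1] - phase[0] - math.pi)
        d1 = omega + k * math.sin(phase[0] - phase[1] - math.pi)
        phase[0] += d0 * dt
        phase[1] += d1 * dt
        left.append(math.sin(phase[0]))   # joint-angle commands
        right.append(math.sin(phase[1]))
    return left, right

left, right = cpg(2000)
```

In a fuller model, the feedback sensors mentioned above would perturb these phases, letting the rhythm adapt to the terrain instead of running open-loop.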

  • DNA-based artificial neural network is a primitive brain in a test tube (video)

    by Terrence O'Brien
    07.30.2011

    Many simpler forms of life on this planet, including some of our earliest ancestors, don't have proper brains. Instead they have networks of neurons that fire in response to stimuli, triggering reactions. Scientists from Caltech have actually figured out how to create such a primitive pre-brain using strands of DNA. Researchers, led by Lulu Qian, strung together DNA molecules to create biochemical circuits. By sequencing the four bases of the genetic code in a particular way, they were able to program the network to respond differently to various inputs. To prove their success, the team quizzed the organic circuit, essentially playing 20 questions: they fed it clues to the identity of a particular scientist using more DNA strands, and the artificial neural network nailed the answer every time. Check out the PR and the pair of videos that dig a little deeper into the experiment after the break.
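The guessing game works like an associative memory: given a partial pattern of clues, the network settles on whichever stored pattern matches best. A minimal software sketch of that idea -- a Hopfield-style recall with made-up 4-bit "scientist" encodings; the real network ran on DNA strand-displacement reactions, not code:

```python
def train(patterns):
    """Hebbian weights for a Hopfield-style associative memory (+1/-1 bits)."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, sweeps=5):
    """Fill in a partial cue (0 = unknown bit) by repeatedly nudging each
    bit toward agreement with its weighted neighbors."""
    s = list(cue)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s

# two stored "scientists", each a hypothetical 4-bit answer pattern
patterns = [[1, 1, -1, -1], [-1, 1, 1, -1]]
w = train(patterns)
print(recall(w, [0, 1, -1, 0]))  # partial clues -> [1, 1, -1, -1]
```

Given only two of the four answers, the memory completes the rest -- the same trick the DNA circuit pulls off chemically.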

  • British researchers design a million-chip neural network 1/100 as complex as your brain

    by Jesse Hicks
    07.11.2011

    If you want some idea of the complexity of the human brain, consider this: a group of British universities plans to link as many as a million ARM processors in order to simulate just a small fraction of it. The resulting model, called SpiNNaker (Spiking Neural Network architecture), will represent less than one percent of a human's gray matter, which contains 100 billion neurons. (Take that, mouse brains!) Yet even this small-scale representation, researchers believe, will yield insight into how the brain functions, perhaps enabling new treatments for cognitive disorders, similar to previous models that increased our understanding of schizophrenia. As these neural networks increase in complexity, they come closer to mimicking human brains -- perhaps even developing the ability to make their own Skynet references.
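A spiking architecture like this time-steps many simple neuron models on each core. As a rough sketch of what one such neuron computes -- a leaky integrate-and-fire model with illustrative parameters, not SpiNNaker's actual implementation:

```python
def lif_spikes(current, dt=1.0, tau=20.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates input current, and fires on crossing threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(current):
        v += dt * ((v_rest - v) / tau + i_in)  # leak + input (Euler step)
        if v >= v_thresh:
            spike_times.append(t)              # emit a spike...
            v = v_reset                        # ...and reset the membrane
    return spike_times

# a constant drive produces a regular spike train
spikes = lif_spikes([1.0] * 200)
```

At SpiNNaker's scale, each emitted spike becomes a small packet routed to thousands of downstream neurons on other chips -- which is where the million processors come in.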

  • Schizophrenic computer may help us understand similarly afflicted humans

    by Sean Buckley
    05.11.2011

    Although we usually prefer our computers to be perfect, logical, and psychologically fit, sometimes there's more to be learned from a schizophrenic one. A University of Texas experiment has doomed a computer with dementia praecox, saddling the silicon soul with symptoms that normally only afflict humans. By telling the machine's neural network to treat everything it learned as extremely important, the team hopes to aid clinical research in understanding the schizophrenic brain -- following a popular theory that suggests afflicted patients lose the ability to forget or ignore frivolous information, causing them to make illogical connections and paranoid leaps in reasoning. Sure enough, the machine lost it, and started spinning wild, delusional stories, eventually claiming responsibility for a terrorist attack. Yikes. We aren't hastening the robot apocalypse by intentionally programming machines to go mad, right?
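Treating everything as extremely important amounts to learning without forgetting. A toy illustration of that idea (a made-up Hebbian co-occurrence model, not the study's actual network): with a normal learning rate and some decay, a one-off coincidence fades away, but under "hyperlearning" it sticks around as strongly as a genuine association.

```python
def learn(events, rate, decay):
    """Hebbian co-occurrence learning with forgetting: every (cue, outcome)
    pair strengthens its link, while all links fade a little each step."""
    w = {}
    for pair in events:
        for k in w:
            w[k] *= 1 - decay      # forget a little on every event
        w[pair] = w.get(pair, 0.0) + rate
    return w

def recalled(w, threshold=0.5):
    """Associations strong enough to be retrieved later."""
    return {pair for pair, strength in w.items() if strength >= threshold}

# "storm -> rain" co-occurs 40 times; "storm -> bomb" once, by coincidence
events = [("storm", "rain")] * 20 + [("storm", "bomb")] + [("storm", "rain")] * 20

healthy = learn(events, rate=0.1, decay=0.05)  # normal: fleeting links fade
hyper = learn(events, rate=1.0, decay=0.0)     # everything is important
```

In the hyperlearning regime the spurious storm-to-bomb link survives retrieval just like the real one -- very loosely, the kind of illogical connection the theory describes.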