If the ability to make mistakes is indeed a crucial element that AI needs in order to behave like a human (as some experts believe), then mankind might be witnessing the dawn of a human-like AI era. Rachel Wood (University of Sussex, Brighton, UK) has created a computer program that makes mistakes. What is more, it learns from its mistakes!
Wood's program commits a famous cognitive error known as the A-not-B error, which babies between 7 and 12 months old make and which is seen as one of the hallmarks of fledgling human intelligence. The error arises when a toy is placed under a box labelled A while the baby watches. After the baby has found the toy several times, it is shown the toy being put under another nearby box, B. When the baby searches again, it persists in reaching for box A. As New Scientist reports, to test whether software could make the same mistake, Wood and her colleagues designed an experiment in which A and B were alternative virtual locations at which a sound could be played. A simulated robot, existing in a virtual space, was instructed to wait a few seconds and then move to the location of the sound. The process was repeated six times at A, then switched and repeated six times at B.
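New Scientist does not publish Wood's code, but the structure of a session is easy to sketch. The Python snippet below is purely illustrative: the six-trials-per-side protocol follows the article, while the coordinates, the run_session harness and the little ReactiveRobot (which simply heads for wherever it last heard the sound) are stand-ins invented for this sketch, not anything taken from the Sussex experiment.

    # Illustrative sketch of the trial protocol (not Wood's actual code).
    LOCATION_A = -1.0   # virtual position of sound source A
    LOCATION_B = +1.0   # virtual position of sound source B

    def run_session(robot, trials_per_side=6):
        """Play the sound six times at A, then six times at B, and record
        whether the robot heads for A or for B on each trial."""
        choices = []
        targets = [LOCATION_A] * trials_per_side + [LOCATION_B] * trials_per_side
        for target in targets:
            robot.hear(target)                    # the sound is played at the target
            position = robot.move_after_delay()   # the robot waits, then moves
            # classify the move by which source the robot ends up nearer to
            choices.append("A" if abs(position - LOCATION_A)
                                < abs(position - LOCATION_B) else "B")
        return choices

    class ReactiveRobot:
        """A trivially correct robot: it just goes to wherever it last heard
        the sound, so it never perseverates."""
        def __init__(self):
            self.last_heard = 0.0
        def hear(self, location):
            self.last_heard = location
        def move_after_delay(self):
            return self.last_heard

    print(run_session(ReactiveRobot()))
    # ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B']

A robot built this way is "right" every time, and that is essentially how the first network Wood tried behaved.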
The first time the team carried out the test, the robot's brain was a standard neural network, which is designed to simulate the way a brain learns. (...) That robot successfully navigated to A and then, when the source was switched, simply moved to B. Next Wood used a kind of neural network called a homeostatic network, which gives the programmer control over how the network evolves. She programmed it to decide for itself how often its neurons would fire in order to locate sound A, but then to stick with those firing patterns when it later tried to locate sound B, even though they might not be the most efficient for that task. This is analogous to giving the network some memory of its past experiences. And this time the results were different: Wood found that the simulated robot persisted in moving towards A even after the source of the sound had switched to B. In other words, it was making the same error as a human baby.
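Wood's homeostatic network itself is not spelled out in the article, so the following is only a toy illustration of the idea it describes: an internal setting tuned during the A trials is carried over into the B trials and only slowly re-tuned. The HabitRobot class, its habit value and its learning rate are assumptions made for this sketch, not Wood's implementation; plugged into the run_session harness above, it reproduces the qualitative pattern.

    # Toy illustration of perseveration (not Wood's homeostatic network):
    # movement is dominated by a slowly adapting "habit" tuned on earlier trials.
    class HabitRobot:
        def __init__(self, learning_rate=0.2):
            self.habit = 0.0          # internal motor bias built up by experience
            self.last_heard = 0.0
            self.learning_rate = learning_rate
        def hear(self, location):
            self.last_heard = location
        def move_after_delay(self):
            # The fresh perception nudges the habit a little, but the movement
            # is still dominated by what was learned on earlier trials.
            self.habit += self.learning_rate * (self.last_heard - self.habit)
            return self.habit

    print(run_session(HabitRobot()))
    # ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
    # The robot keeps heading for A on the first trials after the switch
    # (the A-not-B error) and then gradually corrects itself.

The only point of the toy is that the error falls out of carrying internal state across trials instead of reacting afresh each time, which is the same "memory of past experiences" analogy the researchers draw, and it also hints at why the error should fade with repetition.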
What's more, as the robot went through a series of 100 identical trials, the A-not-B error faded away, just as it does in infants after they have made the wrong choice enough times.
Rachel Wood and her team are very excited about their fallible machine. After all, homeostatic networks, even if they make mistakes, might turn out to be the best way to build robots that have both a memory of their physical experiences and the ability to adapt to a changing environment.