What do bees' habits and Internet servers have in common?

The communication patterns of bees have inspired a way to optimize Internet servers so that they can manage massive loads without being overwhelmed by requests.

Georgia Tech researchers have developed a communication system inspired by the honeybee dance to help single-task Internet servers move between tasks as needed, reducing the chances of a Web site being overwhelmed and locking out potential visitors. The team, led by Prof. Craig Tovey, designed the system after recognizing that bees and Internet servers have one thing in common: limited resources that must be deployed for the best results. The researchers studied the bees' strategies for distributing resources in a constantly changing environment to see how those strategies could be applied to Internet servers.

The research draws on a technique called swarm intelligence, a branch of artificial intelligence based on collective behavior. So far, the honeybee method has improved service by 4 to 25 percent in tests based on real Internet traffic.
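The bee analogy maps naturally onto a probabilistic allocation scheme. The sketch below is not the Georgia Tech system, just a minimal illustration of the idea: each server occasionally "watches a dance", i.e. re-samples its assigned service with probability proportional to that service's current demand, so heavily loaded services attract more servers over time. The service names and parameters are made up for illustration.

```python
import random

def bee_allocate(servers, demand, rounds=200, follow_prob=0.2, seed=0):
    """Toy honeybee-style allocation: idle 'forager' servers re-sample
    a service in proportion to its advertised demand, the way bees
    follow stronger waggle dances. Not the Georgia Tech algorithm."""
    rng = random.Random(seed)
    tasks = list(demand)
    weights = [demand[t] for t in tasks]
    # Start from a random assignment of servers to services.
    assignment = {s: rng.choice(tasks) for s in servers}
    for _ in range(rounds):
        for s in servers:
            if rng.random() < follow_prob:  # this server "watches a dance"
                assignment[s] = rng.choices(tasks, weights=weights)[0]
    return assignment

servers = [f"srv{i}" for i in range(30)]
demand = {"news": 6.0, "shop": 3.0, "blog": 1.0}
assignment = bee_allocate(servers, demand)
```

Over many rounds, the share of servers on each service tracks its demand, with no central scheduler, which is the property that makes the bee strategy attractive for fluctuating traffic.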

Can you make your computer laugh?

Can a computer have a sense of humor? Physicist Igor Suslov of the Kapitza Institute for Physical Problems in Moscow suggests that a computer program based on his mathematical model could actually tell amusing jokes.

Suslov says that a computer model he has designed explains the evolution of humor. Our ability to experience humor, he suggests, ultimately depends on quirks in how the brain handles information. The physicist explains that verbal jokes work by drawing the mind into error: it first settles on one meaning, and then has to correct itself and see another, as in this joke: Father (reprovingly): "Do you know what happens to liars when they die?" Johnny: "Yes sir, they lie still". The wit here rests on how the brain flips between two meanings of "lie". Suslov's goal is to create a brain-like computer, called a neural net, that can mimic this process, errors included. It may not laugh, but it could both react to and tell simple jokes that hinge on ambiguous words and meanings.

Jokes produced by computer programs are mostly primitive, but sometimes can be surprisingly funny. Here are two jokes generated by a computer program developed at the University of Edinburgh by Graeme Ritchie and Kim Binsted:

What do you call a ferocious nude?
A grizzly bare.

What kind of murderer has fibre?
A cereal killer.
Now, are your jokes as funny?
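Programs like Ritchie and Binsted's generate such riddles from templates plus a lexicon of homophones and word relations. The following is only a toy sketch of the template idea, with a hand-made two-entry lexicon, and is not their actual machinery:

```python
# Each entry: a setup phrase plus the two-word punning answer, where the
# second word is a homophone that collides the two meanings.
LEXICON = [
    ("a ferocious nude", "grizzly", "bare"),        # bear / bare
    ("a murderer with fibre", "cereal", "killer"),  # serial / cereal
]

def make_riddle(setup, modifier, noun):
    """Drop a setup and a punning answer into a fixed riddle template."""
    return f"What do you call {setup}?\nA {modifier} {noun}."

riddles = [make_riddle(*entry) for entry in LEXICON]
print("\n\n".join(riddles))
```

The real research systems do the hard part this sketch skips: mining dictionaries for homophone pairs and building setups whose surface reading matches the wrong word.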

Avatar surgery comes next?

Would you, in the future, choose your family doctor after having checked his practical skills by watching him during a virtual operation on an avatar? Think about it. Nursing students at Tacoma Community College (Washington), before practicing medicine on real patients, already get to practice on virtual ones in Second Life.

During a live demonstration at the League for Innovation in the Community College’s technology conference, John Miller, a nursing instructor at Tacoma, played the role of the patient lying on a hospital bed in the virtual emergency room. The avatars of his two students, both of whom were participating remotely, entered the room to treat the patient. Mr. Miller’s avatar was suffering from chest pains. The students asked typical medical questions concerning their instructor’s condition, while their avatars on the screen hooked up an IV and attached a blood-pressure cuff. Mr. Miller fed information to the program to provide the blood-pressure reading and an electrocardiogram readout. His avatar then went into cardiac arrest, and the students had to react by giving CPR and providing electrical defibrillation.

Although a virtual world is not the best place to, for example, learn how to start an IV, it gives nursing students a chance to practice medical procedures. Second Life training won’t replace traditional learning or live simulations at the college, but it provides another method of practice, says John Miller. A safe one, it has to be added.


A computer programmed to make mistakes...

If the ability to make mistakes is indeed a crucial element that AI would need to really behave like humans (as some experts believe), then mankind might be witnessing the dawn of a human-like AI era. Rachel Wood (University of Sussex, Brighton, UK) has created a computer program that makes mistakes. What is more, it learns from them!

Wood's program commits a famous cognitive error known as the A-not-B error, which is made by babies between 7 and 12 months old and is seen as one of the hallmarks of fledgling human intelligence. The A-not-B error is made by infants when a toy is placed under a box labelled A while the baby watches. After the baby has found the toy several times, it is shown the toy being put under another nearby box, B. When the baby searches again, it persists in reaching for box A. As New Scientist reports, to test whether software programs could make the same mistake, Wood and her colleagues designed an experiment in which A and B were alternate virtual locations at which a sound could be played. A simulated robot, which existed in a virtual space, was instructed to wait a few seconds and then to move to the location of the sound. The process was repeated six times at A, then switched and performed six times at B.
The first time the team carried out the test, the robot's brain was a standard neural network, which is designed to simulate the way a brain learns. (...) That robot successfully navigated to A and then, when the source was switched, simply moved to B. Next Wood used a form of neural program called a homeostatic network, which gives the programmer control over how the neural network evolves. She programmed it to decide for itself how often its neurons would fire in order to locate sound A, but then to stick to those times when it later tried to locate sound B, even though they might not be the most efficient for that task. This is analogous to giving the network some memory of its past experiences. And this time the results were different. Wood found that the simulated robot persisted in moving towards A even after the source of the sound had switched to B. In other words, it was making the same error as a human baby.
What's more, as the robot went through a series of 100 identical trials, the A-not-B error faded away, just as it does in infants after they have made the wrong choice enough times.
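Wood's homeostatic network is far subtler than this, but the qualitative behaviour (perseverating at A, then shedding the habit with experience) can be caricatured in a few lines. In the toy model below, each reach is scored by the current cue plus a habit trace that is reinforced on success and weakened on failure; all parameters are invented for illustration:

```python
def run_trials(schedule, habit_gain=0.8, lr=0.3):
    """Toy caricature of the A-not-B error (not Wood's homeostatic
    network): each reach is scored by the current cue plus a habit
    trace, reinforced when the reach succeeds, weakened when it fails."""
    habit = {"A": 0.0, "B": 0.0}
    choices = []
    for cue in schedule:
        # Perceptual evidence (1 for the cued location) plus habit bias.
        score = {loc: (1.0 if loc == cue else 0.0) + habit_gain * habit[loc]
                 for loc in habit}
        choice = max(score, key=score.get)
        choices.append(choice)
        habit[choice] += lr if choice == cue else -lr
    return choices

# Six reaches to A build the habit; the first B trials still go to A.
choices = run_trials(["A"] * 6 + ["B"] * 6)
print(choices)  # the error appears on early B trials, then fades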

Rachel Wood and her team are very excited about their fallible machine. After all, homeostatic networks, even if they make mistakes, might turn out to be the best way to build robots that have both a memory of their physical experiences and the ability to adapt to a changing environment.

...and a human with a computer brain

French "mathlete" Alexis Lemaire, a 27-year-old doctoral student in artificial intelligence, calculated the 13th root of a 200-digit number in 72 seconds and claimed new world calculation record. That’s about five seconds faster than the previous record, which Lemaire also held.

Lemaire was presented with the randomly picked number by a computer, which displayed the figure over 17 lines on the screen. It took him just over a minute to identify 2,397,207,667,966,701 as the 13th root. Yes – that’s 2 quadrillion, 397 trillion, 207 billion, 667 million, 966 thousand, 701.
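The claim is easy to sanity-check: Python integers have arbitrary precision, so raising Lemaire's answer to the 13th power should give back a number with exactly 200 digits.

```python
root = 2_397_207_667_966_701  # Lemaire's 16-digit answer
n = root ** 13
print(len(str(n)))  # → 200, so the original number indeed had 200 digits
```

It also shows why the root must have 16 digits: 200 / 13 ≈ 15.4, so the 13th root of any 200-digit number lies between 10^15 and 10^16.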

How did he do it? “I use an artificial intelligence system which I use on my own brain instead of on a computer. Personally, I believe most people can do it but I have also a high-speed mind. My brain works sometimes very, very fast”, he says.

InteliWISE AVATAR awarded again

InteliWISE virtual assistant has received an award as the most interesting broadband product in the Computerworld contest – BROADBAND 2007.
The head of the competition jury, Mr Zbigniew Kądzielski, said that the jury decided in favour of the InteliWISE AVATAR because of its technological advancement and the fact that it was created in Poland.
The InteliWISE AVATAR makes browsing Internet sites easier, even for users not familiar with the World Wide Web. The core of the product, which uses broadband, is a visual module combining video and audio files. The InteliWISE AVATAR's knowledge comes from separate knowledge bases: the client's own, a basic one, RSS channels and Internet resources. The solution has already been implemented, for example on LOT Polish Airlines' website.


Avatars can inhabit your First Life too

You’re a high-tech maniac and want to move into a new house? A house where, say, any relevant events that may have happened are announced and summarized on a huge plasma TV as well as on 8 wireless PC tablets that are located throughout the place? Here’s something suitable for you...

Imagine a 4,500 sq. ft. house that takes home automation to a whole other level using a combination of a home control system, audio distribution system, home lighting system, and a security system. This complex system is all tied together by a really attractive electronic butler whose name is Cleopatra. She is a voice-activated avatar who, according to Brian Conte, the owner of this Seattle-area house, “provides a home personality and a friendly interface to the home’s automation system”.

Cleopatra appears on a 42-inch Panasonic plasma screen that faces the front door, but can also roam throughout the house, appearing on other screens and numerous wireless PC tablets. She gives status reports on the home’s electronic systems, greets everyone by name, shows pictures of people who have approached the front door throughout the day, announces missed phone calls, voice-mails, package deliveries, stock quotes, news, and even weather. Microphones built into the home’s ceilings allow the inhabitants to interact with Cleopatra by requesting information and controlling any aspect of the house. The system also keeps track of how many people are in each room, so that it can intelligently adjust the lights, music and ventilation in order to maximize overall comfort in your home.

Oh, by the way, if you’re still not convinced whether you should start saving money for a similar house: Cleopatra resembles Angelina Jolie...

So: is Second Life eco-friendly or not?

Remember this blog note titled Avatars consume as much electricity as Brazilians? It raised a question whether Second Life was sustainable ecologically. Well, according to Anuradha Vittachi and Peter Armstrong, founders of OneWorld (the international network for global justice), it is.

During her presentation at the United Nations OCHA +5 Symposium, Vittachi demonstrated the potential of Second Life to cut down on air travel by holding meetings in sims, and showcased OneClimate Island, which will run virtual events in parallel with the United Nations Climate Change Conference in Bali, 3 - 14 December 2007. Armstrong explains: “We will be opening a virtual window on events in Bali for anyone in the world who can access Second Life. But unlike its Real Life equivalent - and appropriately for a climate change conference - it will produce no travel-related carbon emissions”. In other words: OneClimate Island is a carbon-free way to meet other people.

There are many simple ways to reduce your CO2 emissions, which you can find here and here. Maybe we should add another one to the list: meeting our friends in virtual reality rather than taking a plane.

InteliWISE awarded for avatars

InteliWISE AVATAR – the flagship product of a renowned Polish company InteliWISE - was awarded a medal in the INNOVATION 2007 Competition during the Industrial Technology, Science and Innovation Fair, organized by Gdansk International Fair Co. The company was also awarded a special prize, the Cup from the President of the Polish Agency for Enterprise Development, in recognition of its solutions presented there. The InteliWISE show box, where InteliWISE STAND and InteliWISE AVATAR were presented, drew the attention of many visitors.

InteliWISE provides innovative AI solutions aiming to support on-line transactions and friendly customer care. Making use of intelligent technologies, the company supports online customer service and information search, and boosts e-marketing. The InteliWISE AVATAR - one of the world's most advanced "virtual human" applications - is a virtual advisor allowing for "almost natural" contact between the Web user and the Web site. InteliAssistant is based on a multi-source knowledge base and Natural Language Processing, which allows it to recognize users' commands written in natural language. The interactivity of the Assistant is supported by multimedia: dynamic animation and multi-language speech synthesis. The virtual agent can easily be added to a company or personal Web site within one day and then customized to the company's needs and Web site content.


“If the computer knew a little more about you, it could behave better”

Tufts University researchers are developing techniques that could allow computers to respond to users’ frustration (too much work) or boredom (too little work). Such sensitive machines would adjust their user interfaces based on measurements of brain activity.

The researchers have launched a three-year research project that will use light to measure blood flow in the brain, which can help identify feelings of work overload, frustration or distraction among computer users. Applying non-invasive and easily portable imaging technology in new ways, the scientists hope to gain real-time insight into the brain’s more subtle emotional cues and help provide a more efficient way to get work done.

“If the computer knew a little more about you, it could behave better” said Robert Jacob, computer science professor and researcher. “If it knew your workload was increasing, maybe it could adjust the layout of the screen”. Who knows, maybe in time it could do a lot more than that?
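The Tufts prototype is built around optical brain imaging, but the adaptation loop itself can be caricatured in a few lines: smooth a noisy workload signal and switch to a simpler layout when the smoothed level stays high. The signal values, threshold and layout names below are invented for illustration, not real sensor output:

```python
def choose_layout(samples, alpha=0.5, threshold=0.7):
    """Exponentially smooth a 0-1 'workload' signal and pick a layout.
    Toy sketch only; not the Tufts system or real brain-imaging data."""
    level = 0.0
    for s in samples:
        level = alpha * s + (1 - alpha) * level  # exponential smoothing
    return "simplified" if level > threshold else "full"

print(choose_layout([0.9, 0.8, 0.95, 0.9]))  # sustained overload
print(choose_layout([0.2, 0.3, 0.2]))        # calm user
```

The smoothing step matters: physiological signals are noisy, so the interface should react to a sustained trend rather than to every spike.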

Second Lifers, you’re being tested!

UK researchers are studying the way people act in virtual worlds compared to the real world. They use software called "SL-bot" to examine how people behave inside the virtual world of Second Life and to investigate its inhabitants' psychology.

British scientists use the SL-bot that masquerades as an ill-mannered human user. It starts a conversation with real Second Lifers and deliberately invades their personal space to see how they will react. NewScientist.com describes how it all works: In one experiment, SL-bot was sent on a mission to find other avatars that were alone. As soon as it did, it greeted them by first name, waited two seconds then moved to the virtual equivalent of within 1.2 metres away. It then recorded the other avatar's reaction for 10 seconds afterwards, and sent the data to the researchers. Out of 28 avatars approached this way, 12 simply moved away and 20 also responded via text chat. On a previous mission, SL-bot observed pairs of normal avatars as they interacted. It found that users are, on average, six times more likely to shift position when someone comes to within 1.2 m. That backs up the idea that people also value their virtual personal space, say the researchers.
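The statistic the researchers quote (users are several times more likely to shift position when approached within 1.2 m) is just a ratio of movement rates split by approach distance. Here is a sketch over hypothetical log data of the kind SL-bot records; the distances and outcomes are made up:

```python
def shift_rates(events, radius=1.2):
    """Split logged approaches by whether they came within `radius`
    metres and return the fraction of avatars that moved in each group."""
    inside = [moved for dist, moved in events if dist <= radius]
    outside = [moved for dist, moved in events if dist > radius]
    rate = lambda group: sum(group) / len(group) if group else 0.0
    return rate(inside), rate(outside)

# Hypothetical log: (approach distance in metres, did the avatar shift?)
log = [(0.8, True), (1.0, True), (1.1, True), (0.9, False),
       (2.5, False), (3.0, False), (2.0, True), (4.0, False)]
near, far = shift_rates(log)
print(near / far)  # → 3.0: how much likelier a shift is at close range
```

With enough logged approaches, the same ratio computed on real data gives the "six times more likely" figure the researchers report.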

Nick Yee of Stanford University, California, has done similar investigations of personal space in Second Life. He believes, however, that the ethics of experimenting in virtual worlds remain under negotiation. SL users would probably share this view.

British Telecom futurologist says AI entities will win Nobel prizes by 2020

British Telecom futurologist Ian Pearson predicts that people will probably build conscious machines smarter than humans sometime between 2015 and 2020. According to the BT 2005 Technology Timeline (Pearson was one of its authors), within some ten to twenty years such AI entities will be given the vote, gain PhDs or win Nobel Prizes.

Here are some other BT predictions for years to come:

2006-2010: Synthetic voices pop band gets in top 20
2006-2010: AI chatbots indistinguishable from people by 95% of population
2006-2010: First artificial electronic life
2008-2012: Mood-sensitive home décor
2011-2015: AI Entity passes A Level
2011-2015: 25% of TV celebrities synthetic
2013-2017: AI technology imitating thinking processes of the brain
2013-2017: AI teachers get better results than most human teachers
2016-2020: Electronic pets outnumber organic pets
2016-2020: Electronic life form given basic rights
2016-2020: AI Member of parliament
2020s: AI Entity gains PhD
2020s: AI Entity awarded Nobel Prize
2020s: AI entities given vote
2030s: Robots physically and mentally superior to humans
2050s: Humanoid robots beat England football team (naah, this ain’t never gonna happen!)