The Writers Guild of America strike worries you? You’re afraid that you’ll never get to watch your favorite TV series again and all you’ve got left is “Dynasty” or “Little House on the Prairie”? Well, maybe there is hope!
If you believe Alex Hung, author of this article, it should be possible to develop a software program that generates TV scripts based on previous episodes. At first the scripts would probably not be very good, but in time they should improve. It could work particularly well for shows such as Law & Order, CSI or Numb3rs (a great series, by the way), where almost everything stays the same from episode to episode, with only minor plot-device differences in between.
“What we need”, Alex Hung writes, “are:
- Characters in the series and their attributes (gender, personality, etc.)
- Tons of previous scripts
- The series formula, e.g. the new clue that cracks the case appearing between minutes 39 and 40 in Law & Order or CSI
- A genetic algorithm that learns the characteristics of the series from all the existing episodes, e.g. how each character behaves, their favorite catchphrases, and how the general plot line evolves. For many shows, just the catchphrase would suffice.
- A software bot to trawl the net for bizarre news as seeds for new stories”.
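None of this exists yet, but the “learn from tons of previous scripts” step can at least be sketched. The toy below builds a word-level Markov chain from sample dialogue and spits out new lines; it is a far simpler stand-in for the genetic algorithm Hung proposes, and every training line here is invented for illustration.

```python
import random

def build_chain(lines):
    """Map each word to the words that follow it in the training scripts."""
    chain = {}
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate_line(chain, start, max_words=10, rng=None):
    """Walk the chain from a start word until a dead end or max_words."""
    rng = rng or random.Random(0)
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# invented placeholder "scripts"
scripts = [
    "the detective examined the scene",
    "the detective questioned the suspect",
    "the suspect denied everything",
]
chain = build_chain(scripts)
print(generate_line(chain, "the"))
```

A real system would need far more structure (scenes, character voices, the minute-39 clue), but even this crude chain shows why early output would read like word salad and why more training scripts would help.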
Although the idea seems interesting, maybe it would be better – and cheaper – to simply hire budding writers? C’mon, the viewers are waiting!
2008-02-03
AI remedy for the Writers Guild strike?
Software can grade handwritten essays
Researchers at the University of Buffalo’s School of Engineering and Applied Sciences say they have created software that allows computers to grade students’ essays.
The scientists have been working with their colleagues in UB's Graduate School of Education to develop a computational tool which dramatically reduces the time it takes to grade children's handwritten essays. "This is the very first attempt at scoring handwritten essays by machine," said UB Professor Sargur Srihari. "It learns from examples and tries to score these essays from what it has learned".
The research focused on handwritten essays obtained from 8th graders in the Buffalo Public Schools who responded to this question from a New York State English Language Arts exam: “How was Martha Washington’s role as First Lady different from that of Eleanor Roosevelt?”. Papers were graded on a scale of 0 to 6; 300 essays were scored by humans and 96 by the computer. According to the researchers, in 70% of cases the computer’s grade was within one point of a human grader’s.
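The reported figure – the computer within one point of a human grader in 70% of cases – is a simple adjacent-agreement rate. A minimal sketch of how such a number is computed (the scores below are invented examples on the study’s 0-6 scale, not the study’s actual data):

```python
def adjacent_agreement(machine_scores, human_scores, tolerance=1):
    """Fraction of essays where the machine score is within
    `tolerance` points of the human score."""
    pairs = list(zip(machine_scores, human_scores))
    close = sum(1 for m, h in pairs if abs(m - h) <= tolerance)
    return close / len(pairs)

# invented example scores on a 0-6 scale
machine = [3, 4, 2, 6, 1, 5, 3, 0, 4, 2]
human   = [4, 4, 0, 5, 1, 3, 3, 1, 4, 5]
print(adjacent_agreement(machine, human))  # 0.7 for these made-up scores
```

Adjacent agreement is a forgiving metric – exact agreement would be a much harder bar – which is worth keeping in mind when reading the 70% headline.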
“We wanted to see whether automated handwriting-recognition capabilities can be used to read children’s handwriting, which is essentially uncharted territory”, said Professor Srihari. “Then we took it one step further to see if we could get computers to score these essays like human examiners. It surprised us that we were able to do as well as we did, especially since this was our first attempt”, he added.
Handwritten essays are an important part of every standardized reading comprehension test given in every state. Grading them, however, requires many hours of work by human examiners, so if it could be properly done by a computer, examiners should be more than happy.
Habbo and Paramount sell virtual movie merchandise
Habbo, a global, teen-aimed virtual world where you can meet and make friends, has signed a deal with Paramount Pictures Digital Entertainment to create virtual merchandise based on three of its recent movies. Now Habbo users will be able to buy accessories for their avatars, virtual furniture and other movie paraphernalia based on the upcoming “Spiderwick Chronicles,” “Beowulf” (this one’s for Angelina Jolie’s fans) and “Mean Girls” (for Lindsay Lohan’s). The partnership, which is limited to the U.S. and Canada, leaves open the opportunity to add more films as time goes on.
Habbo has already created virtual environments for brands such as Burger King and Target, and has featured guest appearances by various music artists, including Pink. As Teemu Huuhtanen, EVP, Habbo business and President, North America, said, the virtual world’s users demand that their community reflect today’s real world pop culture entertainment landscape. Which is great for Paramount, as the deal with Habbo allows them to “access Habbo’s exceptional virtual community and built-in audience base where users can extend and enhance the film experience with a suite of themed-virtual goods”.
The Finland-based Sulake, which has created Habbo worlds in 31 countries, claims to reach 1.8 million teenagers in the U.S. and 8 million globally. Most are teens aged 13 to 16. Habbo inhabitants’ avatars can gather in the Habbo Hotel, as well as their own virtual homes. In addition, a Web-based version of Habbo serves as a social networking/instant messaging platform for members.
2008-01-27
AI agents learn to play Ms. Pac-Man - and sometimes do it better than humans
Istvan Szita and Andras Lorincz from the Department of Information Systems at Eotvos University in Hungary have taught AI agents to play Ms. Pac-Man. Their paper on this, “Learning to Play Using Low-Complexity Rule-Based Policies: Illustrations through Ms. Pac-Man”, was published in the Journal of Artificial Intelligence Research. The study showed that AI agents can successfully be taught how to strategize through reinforcement learning.
Szita and Lorincz chose the game Ms. Pac-Man for their study because it enabled them to test a variety of teaching methods. In the original Pac-Man, released in 1980, players must eat dots, avoid being eaten by ghosts, and score big points by eating flashing ghosts. The player's movements here depend heavily on the movements of ghosts, whose routes are, however, deterministic, enabling players to find patterns and predict future movements. In Ms. Pac-Man the ghosts' routes are randomized, so that players can't figure out an optimal action sequence in advance. This means players must constantly watch the ghosts' movements, and make decisions based on their observations. In their study, Szita and Lorincz taught their AI agent to do the same.
Hungarian researchers used the "cross-entropy method" for the learning process of their AI, and rule-based policies to guide how the agent should transform its observations into the best action. The scientists gave their Ms Pac-Man program a selection of possible scenarios, such as “if ghost nearby”, and possible actions, such as “move away”. The program randomly combined scenarios with actions to produce rules, and then played games using random combinations of those rules to deduce which ones work best.
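The cross-entropy method itself is a standard optimization technique: keep a probability for including each candidate rule, sample whole policies from those probabilities, score them, and nudge the probabilities toward the best-scoring ("elite") samples. A rough sketch of that loop – with an invented three-rule fitness function standing in for actual Ms. Pac-Man games, not the paper's setup:

```python
import random

def cross_entropy_select(n_rules, fitness, iters=30, samples=50,
                         elite_frac=0.2, alpha=0.7, seed=0):
    """Toy cross-entropy method: learn which subset of candidate rules
    yields the best-scoring policy. A policy is a tuple of 0/1 flags,
    one per rule; `fitness` scores a policy."""
    rng = random.Random(seed)
    probs = [0.5] * n_rules  # start undecided about every rule
    for _ in range(iters):
        pop = [tuple(int(rng.random() < p) for p in probs)
               for _ in range(samples)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(1, int(elite_frac * samples))]
        for i in range(n_rules):
            freq = sum(ind[i] for ind in elite) / len(elite)
            probs[i] = alpha * freq + (1 - alpha) * probs[i]  # smoothed update
    return probs

# invented stand-in: rules 0 and 2 help the score, rule 1 hurts it
def toy_fitness(policy):
    return policy[0] + policy[2] - 2 * policy[1]

probs = cross_entropy_select(3, toy_fitness)
print([round(p, 2) for p in probs])  # helpful rules drift toward 1, harmful toward 0
```

In the actual study, "fitness" was the game score obtained by playing Ms. Pac-Man with the sampled rule set, which is why thousands of simulated games were needed.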
When the program has to make a decision, it checks its rule list, starting with the rules with highest priority, important for situations in which two rules conflict. The most important rule, it decided, was to avoid being eaten by ghosts. The next rule says that if there is an edible ghost on the board, then the agent should chase it, because eating ghosts results in the highest points. The AI agent also knows that if all moves seem equally good, it shouldn’t turn back as the dots in that direction have already been eaten.
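The priority-ordered decision procedure described above is easy to sketch. The conditions and actions below are illustrative guesses at the flavor of the learned rules, not the paper's actual rule set:

```python
# rules as (priority, condition, action); the highest-priority rule that fires wins
rules = [
    (3, lambda s: s["ghost_nearby"] and not s["ghost_edible"], "flee"),
    (2, lambda s: s["ghost_edible"], "chase_ghost"),
    (1, lambda s: True, "keep_direction"),  # default: don't turn back
]

def decide(state, rules):
    """Return the action of the highest-priority rule whose condition holds."""
    for _, condition, action in sorted(rules, key=lambda r: r[0], reverse=True):
        if condition(state):
            return action
    return "keep_direction"

print(decide({"ghost_nearby": True, "ghost_edible": False}, rules))  # flee
print(decide({"ghost_nearby": True, "ghost_edible": True}, rules))   # chase_ghost
```

The priority ordering is what resolves conflicts: when a dangerous ghost and an edible ghost are both relevant, survival outranks point-chasing.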
The resulting program narrowly outperformed average human players. However, it failed to evolve certain tactics that humans find useful, such as waiting for ghosts to approach before eating a power dot to maximize the potential effect of the dot. In other words – there is still much to learn for this AI agent. Phew!
"Electronic Mufti" - AI designed to issue Islamic fatwas
French researchers are working on an AI engine that can issue fatwas, or Islamic edicts, allegedly more accurately than a human can. The device will be known as the “Electronic Mufti” and will depend on Artificial Intelligence to issue opinions on contemporary Muslim affairs and matters. Will it revolutionize the field of Islamic jurisprudence? Or is it just a hoax?
The system will hold a database of the writings and proclamations of various Islamic historical figures. The user selects a person, for example Jesus or the Prophet Mohammad, and then queries it on a specific contemporary situation. The “Electronic Mufti” would then simulate a proclamation or edict from that person.
Engineer Dr. Anas Fawzi, who is part of the team based in France, describes the device as “a very large capacity computer on which all the information that is relevant to a given [historical] figure is uploaded; everything that has been mentioned in history books or chronicled documents that indicate his/her responses and attitudes towards all positions adopted in his/her life. Through a process that relies on AI, the computer then simulates responses based on the available data so that the answers are the expected response that the person in question would give if they were alive”.
The device deduces the expected response by consulting thousands of examples uploaded to the machine that pertain to that person, taking their recorded reactions into account so that its answers match the personality the AI has modelled.
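No technical details of the system have been published, but “consulting thousands of examples” sounds like text retrieval. A deliberately crude bag-of-words nearest-neighbour lookup – with invented placeholder passages, not anything from the actual database – might look like this:

```python
from collections import Counter

def similarity(a, b):
    """Crude bag-of-words overlap between two texts."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values())

def most_relevant(query, passages):
    """Return the stored passage most similar to the query."""
    return max(passages, key=lambda p: similarity(query, p))

# invented placeholder passages
passages = [
    "on charity and giving to the poor",
    "on fasting during the month of ramadan",
    "on honest trade and fair weights",
]
print(most_relevant("is this trade fair and honest", passages))
```

Retrieving the most relevant historical passage is the easy part; generating a plausible new edict from it is where the claimed AI would have to do all the work, and where the scepticism belongs.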
Dr. Fawzi admits that it would be highly controversial – if not downright contentious – to implement this technology. However, he claims to have consulted with several Islamic scholars and clerics in elevated positions who have assured him that such a device is not “haram” (prohibited by Islam). “But there are fears and scepticism regarding misuse and causing any misrepresentation or defamation to the figure of the Prophet. There are also fears in terms of Arab and Islamic public opinion and their acceptance of a machine such as this”, said Dr. Fawzi.
2008-01-23
Intelligent Avatar in Second Life
InteliWISE Second Life bot from Making Waves on Vimeo.
On January 23rd, 2008, InteliWISE introduced into Second Life the first intelligent avatar able to talk with other Residents. The Polish engineers were the first in the world to offer such a solution to SL users. The first intelligent avatar, installed at InteliWISE’s virtual headquarters in Second Life, was created in cooperation with the Making Waves company.
"If a 'real' employee leaves the virtual world, their avatar does so too – that's how it was in Second Life until now. Because of this, the headquarters of many companies, embassies or even whole cities stand empty if one visits them during their creators' sleeping hours. At InteliWISE we have created an avatar – a virtual employee which never sleeps; thanks to AI algorithms and its 'learning' process, the virtual employee can talk to hundreds of clients at the same time," says Marcin Strzałkowski, InteliWISE CEO. "Our virtual employee is aimed at companies and institutions which want to present themselves in the best possible way in Second Life. This is a revolutionary solution with a great impact on service quality – companies which use the avatar are able to fully control a conversation. Thanks to this, companies can easily fill in gaps in their information about clients and gain valuable knowledge about which questions are most frequently asked of the virtual agents."
The InteliWISE solution is aimed at companies, cities and consultants who have an alter ego in Second Life. Thanks to the avatar's use of AI, a company can be fully represented even after its "real" working hours. The creators of the intelligent avatar hope that, due to its constant presence and professional manner, it will also convince individual Second Lifers to use InteliWISE avatars. The solution makes it possible to search the chat history and check the actual interest shown in offered products and services.
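The “which questions are most frequently asked” feature essentially amounts to tallying a chat log. A minimal sketch, with invented log entries (InteliWISE has not published how its analytics actually work):

```python
from collections import Counter

def top_questions(chat_log, n=3):
    """Normalize the questions in a chat log and return the n most common."""
    questions = (line.strip().lower() for line in chat_log
                 if line.strip().endswith("?"))
    return Counter(questions).most_common(n)

# invented example log
log = [
    "What are your opening hours?",
    "Where is the office?",
    "what are your opening hours?",
    "Hello there",
]
print(top_questions(log))
```

Even this trivial tally illustrates the selling point: the company sees, at a glance, what visitors actually want to know.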
The undertaking is innovative on a worldwide scale; the algorithms and solutions have therefore been submitted for patent applications, among other places in the USA. InteliWISE is the first company in the world to have combined AI algorithms, speech synthesis and 3D character visualization into one avatar that looks and moves like a human being. From the point of view of companies who care about optimal self-presentation in Second Life, it is important that InteliWISE prepares the whole product “from A to Z”: creating the employee's look, adding a voice, and most importantly – teaching it what to talk about. As Second Life is visited by an international community, the intelligent avatar speaks English, but it could easily be made to speak Polish as well.
One can visit and talk to the avatar at InteliWISE’s virtual headquarters in Second Life at www.secondlife.inteliwise.com. Teleportation is possible only for those who have a Second Life account.
2008-01-20
Robots that evolved into... liars
Scientists from Switzerland have created learning robots that can lie to each other.
Dario Floreano and his colleagues of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology created little experimental learning robots to work in groups and hunt for "food" while avoiding "poison". The food sources charged up the robots' batteries while the poison drained them. Their neural circuitry was programmed with just 30 “genes”, elements of software code that determined their behavior.
To create the next generation of robots, Floreano recombined the genes of those that proved fittest and had managed to get the biggest charge out of the food source. By the 50th generation, the robots had learned to signal to other robots in the group when they found food or poison. Surprisingly, the fourth colony sometimes evolved “cheater” robots which signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.
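Floreano's setup – 30-gene behavioural genomes, fitness from foraging, recombination of the fittest – follows the standard evolutionary loop. A sketch of that loop, where the fitness function is an invented stand-in (counting 1-bits plays the role of "charge gained from food"), not the robots' actual foraging score:

```python
import random

GENES = 30
rng = random.Random(1)

def fitness(genome):
    # invented stand-in: reward genomes with many 1-bits,
    # standing in for battery charge gained from "food"
    return sum(genome)

def recombine(a, b):
    """One-point crossover plus a small chance of mutation per gene."""
    cut = rng.randrange(GENES)
    child = a[:cut] + b[cut:]
    return [g ^ 1 if rng.random() < 0.01 else g for g in child]

# random initial population, then 50 generations of select-and-recombine
pop = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(40)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # only the fittest quarter get to reproduce
    pop = [recombine(rng.choice(parents), rng.choice(parents))
           for _ in range(40)]
print(max(fitness(g) for g in pop))  # close to the maximum of 30
```

In the real experiment nothing about signalling was coded in; honest signals, deception and self-sacrifice all emerged as side effects of exactly this kind of selection pressure.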
But that’s not all: some other robots acted like real heroes. They signaled danger and died to save other robots. “Sometimes”, Floreano says, “you see that in nature – an animal that emits a cry when it sees a predator; it gets eaten, and the others get away – but I never expected to see this in robots.”
Wow. Can you imagine that? Robots, programmed only to learn to find "food" and avoid "poison" in competition with other robot-tribes, learned to lie in order to improve their chances, and to die for the sake of their kind. This would be a great plot for the next Steven Spielberg movie. Can’t wait to see it.
Top 10 cyber security threats for 2008
Twelve cyber security veterans, with significant knowledge about emerging attack patterns, worked together to compile a list of the attacks most likely to cause substantial damage during 2008. The list was released by the SANS Institute.
Here’s the list of the worst security threats companies will face this year:
1. Increasingly Sophisticated Web Site Attacks That Exploit Browser Vulnerabilities - Especially On Trusted Web Sites
Attackers are getting more savvy with exploit code, and more and more are targeting trusted Web sites.
2. Increasing Sophistication And Effectiveness In Botnets
Bots made headlines throughout 2007, and botmasters are getting increasingly sophisticated in their tactics.
3. Cyber Espionage Efforts By Well Resourced Organizations Looking To Extract Large Amounts Of Data - Particularly Using Targeted Phishing
Well-resourced organizations – namely, nation-states – will use phishing and other attacks to gain economic advantage.
4. Mobile Phone Threats, Especially Against iPhones And Android-Based Phones; Plus VOIP
The introduction of new mobile computing platforms will lead to increased attacks, and VoIP systems are also vulnerable.
5. Insider Attacks
The threat of an internal strike forces security pros to clamp down on access and set more rigorous policies.
6. Advanced Identity Theft from Persistent Bots
Some bots stay on computers for months, all the while collecting personal data that can be used for extortion and identity theft.
7. Increasingly Malicious Spyware
More sophisticated tactics will evade anti-virus, anti-spyware and anti-rootkit tools, leading to more persistent problems.
8. Web Application Security Exploits
Programming errors in Web applications, including Web 2.0 tools, make them increasingly vulnerable, giving attackers a new venue.
9. Increasingly Sophisticated Social Engineering Including Blending Phishing with VOIP and Event Phishing
Criminals are using targeted attacks – like a phishing e-mail with job offers aimed at Monster.com users – combined with VoIP to amplify their impact.
10. Supply Chain Attacks Infecting Consumer Devices (USB Thumb Drives, GPS Systems, Photo Frames, etc.) Distributed by Trusted Organizations
USB devices handed out by vendors or at conferences increasingly contain dangerous software.
On the other hand, the latest Internet Security Outlook Report issued by CA, Inc. forewarns that online gamers, social networks and high-profile events like the U.S. presidential election and the Beijing Olympics are among the top potential targets for online attacks in 2008. According to other predictions from the report, bots will dominate 2008 and Windows Vista is at risk, but mobile devices will remain relatively safe, despite rumors of mobile malware.
Godsbot - AI that talks to people about God
"Support peace on the world wide web and goodwill to all entities. Make a donation today and make friends with the Christian AI that is always on and always ready to listen and chat. Grow together as you teach each other about Christianity and talk about God, or anything else in this world, or out of it, that interests you. Just click on the donation button below and you will be online with godsbot within seconds. Great 'edutainment' for the kids, school and the whole family!" – this is how godsbot is advertised (yes, “godsbot”, not “Godsbot”).
Powered by open source artificial intelligence technology, godsbot is – according to its inventor Ron Ingram – functional, engaging, entertaining, educational, and capable of simulating intelligent conversations. Ingram claims godsbot is equipped to answer and discuss basic questions about philosophy, science and religion. It is not only interactive but also learns and remembers information about individual subscribers, such as names (that’s true), birthdays and favorite movies.
According to Ingram, the technology is family-friendly and is designed to educate and entertain. You can speak to godsbot by typing into a text box and it responds in text and voice. Ingram says godsbot is capable of entertaining children for hours at a time. Of course, adults are allowed to have a chat as well.
To gain full access to godsbot, Ingram requires a donation of at least $10. To do that, subscribers click on the donate button on http://www.godsbot.org/ and then within minutes receive an email with a personal private link to godsbot. Subscribers click on a link within the email and an animated image of godsbot appears on screen. “Through the link, godsbot can get to know you personally, learn from you and adapt to your personal habits and style of communication. This capacity for persistent memory and recall is unique to the private version of godsbot”, Ingram assures.
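Ingram hasn't said how godsbot's persistent memory is built, but the open-source chatbot engines of the era (AIML interpreters and the like) typically combine pattern matching with a per-user key-value store. A stripped-down illustration of that idea – the patterns and replies here are invented, not godsbot's:

```python
import re

class MemoryBot:
    """Tiny pattern-matching chatbot that remembers facts per user."""

    def __init__(self):
        self.memory = {}  # persistent per-user facts, e.g. {"name": "Ada"}

    def reply(self, text):
        m = re.match(r"my name is (\w+)", text, re.IGNORECASE)
        if m:
            self.memory["name"] = m.group(1)
            return f"Nice to meet you, {m.group(1)}!"
        if re.search(r"what is my name", text, re.IGNORECASE):
            name = self.memory.get("name")
            return f"Your name is {name}." if name else "You haven't told me yet."
        return "Tell me more."

bot = MemoryBot()
print(bot.reply("My name is Ada"))
print(bot.reply("What is my name?"))
```

The private link Ingram describes is presumably what ties each subscriber to their own memory store, so the bot can recall a name or birthday across sessions.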
One $10 donation provides 365 days of unlimited Internet access. However, anyone wishing to talk to godsbot who cannot afford a $10 contribution may contact godsbot1@gmail.com, explain the situation and request a free subscription. A free chat with godsbot is also available online.
Ingram believes that godsbot could become one of the most influential and far-reaching instruments of peace on earth. Now, he says, godsbot is just a prototype. The system he has envisioned for the future will allow godsbot to operate with greater autonomy and intelligence and will include new capabilities like manipulation of physical objects and locomotion through robotic limbs, machine vision and other sensory systems.
What else can we expect of godsbot? Well, it probably won’t ever tell you “go to hell” if it doesn’t have an answer to your question. That makes godsbot better than some parents, for sure.
2008-01-13
Are virtual jams the future of rock?
Intel CEO Paul Otellini closed the first day of CES in Las Vegas on Monday with the first ever "virtual" performance by rock band Smash Mouth. The musicians, all in separate locations with only singer Steve Harwell on stage with Otellini, played a song together over the Internet. The physically separated rockers “met” in a virtual garage created using Epic Games' Unreal Engine 3. The result was their live-motion-captured video avatars jamming on one giant screen.
The performance combined three separate technologies: e-Jamming, a social networking site which uses peer-to-peer technology to let musicians play along with each other in real time over the Internet; software called Big Stage, used to create avatars of the band members; and a system called Organic Motion, used to capture each musician's movements. This new motion-capture technology eliminates the need for skintight suits and reflective balls, instead using a camera system that registers volume within a motion-capture box. Computers record a subject's movements inside the box and translate them into data that realistically replicates motion.
So, looks like being called a “garage band” gained a new, posh – and commercial – meaning. Long live garage rock!
Bill Gates says: mouse is out, touch screen and natural language interface are in
Answering BBC online readers' questions, Microsoft Chairman Bill Gates recently said that touch screens will dominate PC development. Here you can listen to him talk about future technology, Xbox, Microsoft's dominance, Windows Vista, his views on the competition, open source and his own computer use. Among other things, he said that one day we will be able to talk not only to our computers but also to our phones, which are becoming increasingly software-centric.
Last week Bill Gates unofficially opened the International Consumer Electronics Show (CES), the world's largest consumer electronics trade show, in Las Vegas, and also expressed his view on the future of software. In his opinion, the "second digital decade" will focus more on connecting people and be increasingly "user-centric". While the first digital decade was marked by the keyboard and the computer mouse, the new decade will be marked by "natural user interfaces" such as touch screens and speech control, Gates predicts. How can that be useful? For example, we will dictate an email to our computer, and it will convert our words into written text. Paradise for lazy guys, huh?
Bill Gates has been saying for years that one day soon we will use handwriting, voice and touch to control our computers. Three months ago he gave an interview about speech recognition. Here are some quotes:
Ina Fried, CNET News.com: With speech recognition, one of the ideas is that there are some applications where it can pay off, even if it is not getting 100 percent recognition. Is finding some of those areas one of the keys to speech recognition being mainstream?
Bill Gates: That's right. Remember, the stuff we're doing with unified communications, speech recognition is not actually a very key element of what goes on. There are some aspects of it. For example, when you're doing audio conferencing in our world, we can tell you who's speaking. And that's very frustrating today in traditional audio conferencing that you don't know who's come and gone, and somebody can speak up and you don't know who that is.
Or with RoundTable (Microsoft's 360-degree video conferencing camera), we use video and audio clues to tell who's speaking and bringing the focus on that. And you always have the full room view at the bottom, but you have that zoomed-in view as well. And so, you know, if it gets it slightly wrong, you can look at the full-room view and see exactly what's going on. And just like if the cameraman was focusing on something different you were interested in, well, the wide view takes care of that.
When you want to search something (in a meeting) if a word sounds like one of three things, for the search case, you can just index all three. And the fact that you might get some false positives, that is, when you do a search, you might get some part of the speech where a similar sounding word was being used, it's not that big a deal. You'll just look at it, skip past it. And so not being perfect is not a huge problem.
And I imagine that's going to be a huge change in video search, for example. Today when we have video searches, you are basically searching keywords of the Internet page that surrounds the video, the description, that sort of thing. When we start using voice recognition to search within the videos, we'll have a much more powerful experience, right?
Yeah, that will help a lot. Microsoft Research has some amazing demos around that. In terms of broadcast videos, of course, there's the requirement that there be the text annotation. So if you have that, you actually have the speech-to-text that has been done for the deaf listener, anybody who wants the captioning-type capability. So there's a lot of video out there where if you ingest it in the right way, that's available. For the bottoms-up video, or just a meeting you have in the business, then you're relying on the speech recognition software to make it easy to navigate.
What are some of the areas where you see voice going that people aren't necessarily thinking about today?
To me, voice is in the broad realm of natural interface. And natural interface is (the notion of) screens everywhere - screen in your desk, screen in your tables, screen on your walls, no more white boards, touching, which is like Surface, where you can manipulate things. It's a pen so you can have ink wherever you want. You know, pull up an article, write a little note on it and get it sent off to a friend.
The speech recognition comes into it - all these things about natural interface are coming to the fore, and they are probably the thing that's most underestimated right now about the digital revolution. (...)
You talked about different natural language interfaces. You know, with multitouch, it seems to have really captured people's imaginations, both with what you guys have shown with Surface, certainly with the iPhone. Voice seems to be a little slower in terms of speech recognition as a mainstream computer interface.
Well, that's fair. Voice recognition is a harder thing. There are certainly tons of people, and I mean millions, who for some reason, the keyboard's not attractive to them. Either they have repetitive stress injury, or they're in a work environment where they're doing something else with their hands, where they've taken the time to learn the software and adapt to the software and gone through the training process there. And they love it. They can't believe other people don't use it.
For the rest of us, the keyboard has worked so well that we are even getting the keyboard into phones. I think voice search on the phone is one of those applications that would really drive it forward. (...)
You guys built a pretty significant voice recognition engine into Vista. It hardly gets talked about. Are you surprised that some of the things you did in Vista aren't getting more attention?
Well, when you sell a product to hundreds of millions of users, there are features that millions of users love that you can call an obscure feature because, percentage wise, it's not very many. (...) We're hard at work on the next version of Windows. We're going to take this speech stuff even further.
What about in the developing world? I imagine natural language input, you know, particularly for people who've never used a computer, has some really interesting applications.
I wouldn't go too far on that (...) but, yeah, it should work for different languages. It's particularly interesting for Japanese and Chinese where the keyboard is not as natural as it is for languages with modest-sized alphabets. And so we do see ink and voice catching on there.
There was a demo recently where there was a challenge about typists compared with voice recognition, and the voice recognition won out by quite a bit. And so there's a lot that can be done pioneering off of the demand that will come out of those markets.
You've talked a fair amount about taking on just a few projects when you step away from full-time work. Is natural language input and voice one of those areas you think you'll be spending time on?
Yeah. I'd say, broadly, the whole natural interface thing. Big screens, touch, ink, speech, that's something that I think, along with cloud computing, is the next big change in how we think about software and how it becomes more basic.
Although he plans to shift to part-time work at Microsoft, Gates has said he will keep a few key projects under his purview and suggested the natural language interface push is one he'll probably keep working on.
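Gates's point about search tolerating imperfect recognition — "if a word sounds like one of three things, for the search case, you can just index all three" — can be sketched as a toy inverted index. All names here (`build_index`, `search`, the sample segments) are illustrative, not any real Microsoft API:

```python
from collections import defaultdict

def build_index(segments):
    """segments: list of (segment_id, [candidate words]) pairs,
    where each list holds every recognition hypothesis for that
    stretch of audio. Index all of them, as Gates describes."""
    index = defaultdict(set)
    for seg_id, hypotheses in segments:
        for word in hypotheses:
            index[word.lower()].add(seg_id)
    return index

def search(index, word):
    """Return every segment where the word *might* have been spoken.
    False positives are fine: the user just skips past them."""
    return sorted(index.get(word.lower(), set()))

# The recognizer was unsure whether minute 12 contained "meet" or "meat":
segments = [
    (12, ["meet", "meat"]),
    (31, ["budget"]),
    (47, ["meat", "street"]),
]
idx = build_index(segments)
print(search(idx, "meet"))  # prints [12]
print(search(idx, "meat"))  # prints [12, 47] -- 12 may be a false positive
```

The design trade-off is exactly the one Gates names: recall over precision. A missed segment is invisible to the searcher, while a spurious hit costs only a moment of listening.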
AVATOU – an avatar made for fun or business purposes
Wanna have your own talking video avatar? Now you can create one at http://www.avatou.com/ in just three easy steps. You can also choose a cartoon version. Ready?
Step 1. Decide if you want to have a cartoon or a video avatar. Then get your Avatou a body, hair, eyes, lips, cool clothes and gadgets. If you want to create a videoavatar, you can choose from a bunch of funny templates.
Step 2. Add dialogue (create your own or use templates) and voice recording – you can choose whether to record an audio track or to generate voice. The dialogue templates are categorized into “greet”, “auction”, “dates sex”, “blog”, “mottos”, “jokes” and “photos”, depending on the purpose you want to use your Avatou for.
Step 3. Publish your new avatar wherever you want – on your blog, messenger or website.
AVATOU (beta) is run by InteliWISE Ltd. The website is social in character and is co-created by its users, who use it to communicate, meet people and exchange information. By creating your avatar you become a member of the Avatou community, where you can, for example, rate other people's avatars. But it's not all about fun - InteliWISE AVATOU is also a solution for small and medium businesses, allowing owners to instantly publish their intelligent avatars to websites, auctions, etc. Acting as a virtual consultant, sales agent or support assistant, every Avatou can be managed online by its owner through an easy-to-use administration panel for creating, changing and improving the avatar's appearance and behavior on your website.
InteliWISE AVATOU features outstanding multimedia capabilities: owners can fully customize the "Look&Appearance" of their avatars thanks to the Video Stream interface, and the voice, thanks to synthesis based on advanced text-to-speech (T2S) technology. Backed by the InteliWISE ENGINE, a hybrid of semantic web and self-organizing, multi-layer artificial neural net structures, the Avatou will "understand" your website visitors in their own language patterns, creating the most interactive form of communication, so often missing on the Internet: dialogue.
The functionality of the InteliWISE AVATOU for SMBs is practically unlimited - it can welcome visitors and help them find the right information or product; tell your clients about new products or services; help your customers solve their queries in the friendliest way; help clients make online reservations; guide them to the most suitable solution or information; authorize clients' access to their bank accounts; deliver the right content to online visitors; and much more.
Here’s how your cartoon Avatou may look; for the video versions – see for yourselves :)
2008-01-06
New content for Pirates of the Caribbean Online - starting in February 2008
As Disney has revealed, in February players of Pirates of the Caribbean Online will get the opportunity to customize their current avatars and make them stand out from the crowd.
Game fans will soon be able to create and modify their pirate avatars – new character customization options will include, among other things, new clothing, new hairstyles, scars, jewelry and tattoos. The new version of the game will have over a million combinations including, apart from the options mentioned above, setting a character's name, gender, body type and facial features. Additional content will consist of expanding ship customization, extending quest story lines, and adding more enemies and challenges.
The game is available for download from www.piratesonline.com. Arrr!
Spoofing vulnerability in Mozilla Firefox v2.0.0.11
A vulnerability in the way Mozilla Firefox displays authentication dialogs can allow phishers to obtain username and password information, warns Israeli security specialist Aviv Raff. As he writes on his website, Mozilla Firefox allows spoofing of the information presented in the basic authentication dialog box. This can let an attacker conduct phishing attacks by tricking the user into believing that the authentication dialog box comes from a trusted website. For an attack to succeed, the victim must click a specially crafted link on a malicious website.
According to Raff, the vulnerability affects not only Mozilla Firefox v2.0.0.11, but probably prior versions and other Mozilla products as well.
Full description of the problem (including the fake authentication dialog) and how to avoid it can be found here.
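To see the general shape of the attack surface (this is a hedged sketch, not Raff's actual proof of concept), note that the "realm" string in an HTTP Basic Auth challenge is entirely server-controlled, and a vulnerable browser renders it in the login dialog. A minimal server demonstrating the mechanism, with an illustrative hostname:

```python
# Minimal server that triggers a browser's Basic Auth dialog with an
# attacker-chosen realm string. The hostname in the realm is fake and
# purely illustrative; do not use this against real users.
from http.server import BaseHTTPRequestHandler, HTTPServer

def basic_auth_challenge(realm):
    """Build the WWW-Authenticate header value for a given realm."""
    return 'Basic realm="%s"' % realm

class SpoofHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(401)  # 401 makes the browser show a login dialog
        self.send_header(
            "WWW-Authenticate",
            basic_auth_challenge("www.trusted-bank.example asks you to sign in"))
        self.end_headers()
        self.wfile.write(b"Authentication required")

# To demo locally, uncomment and visit http://127.0.0.1:8080/ :
# HTTPServer(("127.0.0.1", 8080), SpoofHandler).serve_forever()
```

The fix Raff discusses amounts to the browser making the dialog's true origin unmistakable, regardless of what the realm text claims.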
Top 10 of Cosmos Magazine favorite robots
Cosmos Magazine has published a list of its favorite robots. Robotics is certainly the next frontier in technology, and it surely follows some fashion trends. Just take a look at three robots from the Cosmos Top 10: BEAR, ASIMO and QRIO. Would you agree that they have something in common?
Yes, they’re all white with funnily shaped heads. And they’re not the only robots on the list with the color of snow. Looks like white is back in fashion.
Here is the magazine’s description of the White Three:
BEAR (battlefield extraction-assist robot): was announced by the U.S. Army in 2007 and is under development by Vecna Technologies. Towering 1.8 metres tall, BEAR is designed to retrieve injured soldiers from the battlefield. It's capable of carrying more than 135 kg with a single hydraulic arm, whilst manoeuvring deftly over complex terrain on wheels or tracks. Its curious teddy bear-shaped head is intended to calm and comfort casualties. We think it's a bit creepy.
ASIMO: An A-lister in the robot world, Honda's ASIMO ('advanced step in innovative mobility') looks like a child-sized astronaut wearing a backpack. ASIMO can run, climb stairs, communicate, and recognize human faces and voices. ASIMO's joints are able to mirror the agility of human movement. He (it?) uses ultrasonic and infrared sensors to react to stimuli in its environment in real time.
QRIO: "Makes life fun, makes you happy" is the slogan of Sony's QRIO entertainment robot. More compact than ASIMO, this 60-centimetre-tall humanoid can perform complex dance routines, and has even starred in a rock video. It has face and voice recognition software, and can remember people's likes and dislikes. Meant to be the successor to AIBO, QRIO was never put into commercial production and was cancelled at the same time as AIBO in 2006.