05 February 2014

Google Artificial Intelligence and the Supercomputer (Video)


It is not a new question; the idea of “evil” supercomputers is all too familiar to fans of science fiction, but what about fans of reality? According to the predictions of some in the field of artificial intelligence, the point at which computers will be as capable as humans is quite near, and beyond that point there is no guarantee that our power-plug peers will hold a favorable opinion of us, or even want us around. On the bright side, though, the purportedly non-evil company known as Google is setting up an ethics board to guide its forays into AI.
Ray Kurzweil, a prominent inventor and futurist who was recently recruited by Google to work on computers that comprehend natural language, has drawn a great deal of attention lately for his predictions about the future of AI. Speaking at an online event, Kurzweil suggested that “using these biologically inspired models” could unlock the potential of AI, placing it almost on a “human” scale of interpretation.
He bases these claims on the trends in technology often discussed under the name “Moore’s Law,” and on the idea of the singularity: a feedback loop in which increased technological capacity makes it possible to design and integrate even more advanced technology, which in turn promotes still more advanced design, compressing the pace of innovation into shorter and shorter periods.
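The intuition behind that compounding is plain exponential growth. As a rough illustration only (the two-year doubling period and the normalized baseline below are illustrative assumptions, not figures from this article), a short Python sketch shows how quickly a fixed doubling period snowballs:

```python
# Illustrative sketch of Moore's Law-style exponential growth.
# Assumptions (not from the article): capacity doubles every 2 years,
# starting from a normalized baseline of 1.0.

DOUBLING_PERIOD_YEARS = 2.0

def capacity(years_elapsed, baseline=1.0):
    """Relative capacity after a given number of years of doubling."""
    return baseline * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

for years in (2, 10, 20, 40):
    print(f"After {years:2d} years: ~{capacity(years):,.0f}x baseline")

# Output: 2x, 32x, ~1,000x, and after 40 years roughly a
# million-fold increase over the starting point.
```

Under these assumptions, the same trend implies a thousand-fold gain in twenty years and a million-fold gain in forty, which is the kind of compounding that singularity arguments lean on.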
The field of artificial intelligence is getting a lot of attention right now, especially with Google recently acquiring the somewhat secretive AI company DeepMind, which specializes in computers that learn to recognize patterns and apply them in novel situations. In the past year Google has also acquired several robotics companies, and it has partnered with NASA on the Quantum Artificial Intelligence Lab.

Most artificial intelligence technologies are functionally isolated, which is to say they focus on a particular area of intelligence, be it language, learning, spatial awareness, or communication, but they are not well enough developed to be used in an integrated manner. Initially these technologies will be applied to seemingly mundane tasks such as retrieving better Google search results and optimizing user personalization, but once sufficiently developed they will quickly begin to be integrated in the hope of creating the mythical beast: strong AI, a computer with human-like consciousness and intelligence.
Some scientists are skeptical, or at the very least wary of unbridled optimism. There are those who doubt that computer intelligence can ever reach a level comparable to human intelligence. Others ask a different question: even if such a feat is possible, what evidence do we have that the result will be friendly to humans?
This consideration has been deeply mined by science fiction, but on a more practical level, many AI researchers worry that unforeseen situations involving programming flaws, poorly chosen goal priorities, or simple indifference toward humans who need the same resources a machine does could pose serious risks to humanity.
There are some efforts to nip the problem in the bud: researchers such as Eliezer Yudkowsky of the Machine Intelligence Research Institute are promoting preemptive work on what is called “friendly AI,” in the hope that the necessary control parameters will already be developed by the time they are needed.
According to unidentified sources cited by the website The Information, Google has, as part of the deal to acquire DeepMind, agreed to create an ethics board to monitor the use of its AI advancements. While “ethics board” may not sound like a mighty bulwark against the possibility of human obsolescence, it is at least an indication of how seriously even the companies pursuing the dream of strong AI are taking the unpredictability of advancing technology to the point of singularity.
