
Author: 归海一刀
Published: 2014/5/3 9:07:20
Stephen Hawking: Artificial intelligence could lead to human extinction

Renowned physicist Stephen Hawking, discussing Transcendence, the new film starring Johnny Depp, said that artificial intelligence may be not only the greatest event in human history, but also the last one. What he means is: artificial intelligence could lead to the extinction of mankind.

When you cheer for the rapid development of artificial intelligence technology, have you ever considered that it might not be a good thing?

After watching Transcendence, the new film starring Depp, Hawking wrote an article for The Independent in which he clearly expressed his concern about this issue.

Hawking's concern is not limited to future artificial intelligence technology; it extends to companies working on it today, such as Google and Facebook. He said: "The short-term impact of artificial intelligence depends on who controls it, while the long-term impact depends on whether it can be controlled at all."

From both a short-term and a long-term perspective, artificial intelligence carries enormous potential risks. In fact, Hawking seems to have little confidence in the so-called artificial intelligence experts.

He said: "Whether the situation that emerges in the future turns out good or bad is completely unpredictable. Faced with that, the experts will surely do everything possible to ensure the best outcome, right? Wrong! If a superior alien civilization sent us a message saying, 'We will arrive on Earth in a few decades,' would we reply, 'All right, let us know when you get here, and we will leave the door open for you'? We probably would not, but if artificial intelligence technology continues to develop, something very much like that will happen."

In fact, pressing ahead with technological development seems to be treated as a way to eliminate or avoid any threat the final product might pose to humans. Driverless cars are a good example: the engineers seem entirely unconcerned with the pleasure people take in driving.

Hawking acknowledged that robots and other artificial intelligence devices might bring great benefits to mankind. If those devices are designed successfully and do deliver such benefits, he said, it will be the biggest event in human history. But he also cautioned that artificial intelligence could be the final event in human history.

In fact, humanity has done far too little research into assessing the potential risks and benefits of artificial intelligence technology, a fact Hawking finds both worrying and lamentable. He said: "When artificial intelligence technology develops to its fullest extent, we will face either the best or the worst thing ever to happen in the history of mankind."

Hawking has previously tried to draw attention to the way the allure of science fiction can pull the wool over our eyes. It sometimes hides an important issue: the final outcome could bring about a catastrophe. He once said that alien civilizations might hate Earth's civilization.

Human beings have a narcissistic tendency: we blindly believe we are clever and never consider that things can sometimes go wrong. Perhaps, as Hawking puts it, the tech industry should focus more of its attention on planning ahead for these concerns.



