
Author: 归海一刀
Published: 2013/11/14 3:23:56
When the computer conks out, can you hold up?

(By Nicholas Carr) On the evening of February 12, 2009, a Continental Airlines flight set out in gusty weather from Newark, New Jersey, to Buffalo, New York. On today's commercial flights the pilots don't actually have much to do, and this one was no exception. During the roughly one-hour trip, Captain Marvin Renslow flew by hand only briefly, lifting the Bombardier Q400 turboprop into the sky on takeoff, then switched on the autopilot and let the software fly the plane. As the aircraft cruised steadily northwest at 16,000 feet, Renslow and his first officer, Rebecca Shaw, chatted about their families, their jobs, and the personalities and tempers of air-traffic controllers. The plane reached the Buffalo airport without incident, but just after the landing gear and flaps were lowered, the control yoke began to shake violently, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disengaged, and the captain took over to fly by hand. Renslow reacted quickly, but he did exactly the wrong thing: he jerked back on the yoke, lifting the plane's nose (which reduces airspeed), without at the same time pushing the throttle forward to gain speed. Far from defusing the crisis, his inputs made the plane decelerate sharply. It then lost control and dropped out of the sky like a brick. "We're done for!" were the captain's last words before the Q400 slammed into a house in a Buffalo suburb.

To know how to do something, you have to actually do it. The drudgery that computer automation spares us is precisely the effort we must expend in order to learn. Image: Kyle Bean/The Atlantic

Professionals reduced to computer operators?

The accident, which killed all 49 people aboard and one person on the ground, should never have happened. The U.S. National Transportation Safety Board concluded that the cause was pilot error. The captain's response to the stall warning, the investigation report noted, "should have been automatic, but his actual flight-control inputs were inconsistent with his training," revealing "panic and confusion." An executive at Colgan Air, the regional carrier operating the route, acknowledged that the pilots lacked situational awareness when the emergency arose.

The Buffalo crash was not an isolated incident. A few months later an eerily similar disaster occurred, this time with even greater loss of life. On the night of May 31, 2009, an Air France Airbus A330 took off from Rio de Janeiro bound for Paris. About three hours into the flight, the jet ran into a storm over the Atlantic. Ice built up on the plane's airspeed sensors, which began giving false readings, and the autopilot disengaged. Bewildered, the pilot flying, Pierre-Cédric Bonin, yanked back on the stick. The plane began to climb and the stall warning sounded, yet Bonin kept pulling back. As the aircraft climbed steeply and bled off speed, the airspeed sensors began working again, feeding the crew accurate numbers. Bonin nevertheless continued to slow the plane. Eventually the jet stalled and began to fall. Had Bonin simply released the stick, the A330 would likely have righted itself, but he did not. Three minutes later the plane fell from 35,000 feet and struck the surface of the ocean. All 228 passengers and crew were killed.

The first autopilot in history consisted of two gyroscopes; a 1930 Popular Science article dubbed it the "metal aviator." One gyroscope was mounted horizontally, the other vertically, each connected to the plane's controls and powered by a wind-driven generator behind the propeller. The horizontal gyroscope kept the wings level; the vertical one handled steering. The modern autopilot bears almost no resemblance to that crude device. Controlled by onboard computers running extremely sophisticated software, it gathers information from electronic sensors and continuously adjusts the aircraft's attitude, speed, and heading. Today's pilots work in what they call a "glass cockpit": the old analog dials and gauges are mostly gone, replaced by digital displays. Automation has become so capable that on a typical passenger flight, the human pilot holds the controls for a grand total of about three minutes. What pilots spend most of their time doing is monitoring screens and keying in data. It is hardly an exaggeration to say they have become computer operators.
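To give a concrete, if toy-scale, picture of what "gathering sensor data and continuously adjusting attitude, speed, and heading" looks like in software, here is a deliberately simplified sketch of a sense-compare-correct loop in Python. Every name, target value, and gain below is invented for illustration; real flight-control software is incomparably more elaborate and safety-critical.

```python
# Minimal, hypothetical sketch of an autopilot-style feedback loop:
# read sensors, compare against targets, output small corrections.
# All names, targets, and gains are invented for illustration only.

from dataclasses import dataclass

@dataclass
class SensorReadings:
    pitch_deg: float      # nose-up/nose-down attitude
    airspeed_kts: float   # indicated airspeed
    heading_deg: float    # compass heading

@dataclass
class Targets:
    pitch_deg: float = 2.0
    airspeed_kts: float = 250.0
    heading_deg: float = 315.0  # roughly northwest

def control_step(sensors: SensorReadings, targets: Targets) -> dict:
    """One pass of a proportional controller: each output is a correction
    proportional to the gap between target and measurement.
    (Heading wrap-around at 360 degrees is ignored in this toy example.)"""
    k_pitch, k_thrust, k_roll = 0.05, 0.8, 0.1   # made-up gains
    return {
        "elevator": k_pitch * (targets.pitch_deg - sensors.pitch_deg),
        "throttle": k_thrust * (targets.airspeed_kts - sensors.airspeed_kts),
        "aileron": k_roll * (targets.heading_deg - sensors.heading_deg),
    }

if __name__ == "__main__":
    # A real autopilot would run a loop like this many times per second.
    readings = SensorReadings(pitch_deg=1.2, airspeed_kts=243.0, heading_deg=312.0)
    print(control_step(readings, Targets()))
```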

And that, according to many aviation and automation experts, has become a problem. Overuse of automated systems erodes pilots' expertise and dulls their reflexes. Jan Noyes, an ergonomics expert at the University of Bristol in the UK, calls it "the de-skilling of the crew." No one doubts the contribution autopilots have made to flight safety over the years. They reduce pilot fatigue, provide advance warning of failures, and can keep a plane flying in an orderly way when the crew is unable to. But the steady overall decline in aircraft accidents has masked a recent series of "startling new kinds of accidents," observes Raja Parasuraman, a psychology professor at George Mason University and a leading authority on automation. When an autopilot fails, too many pilots, suddenly confronted with responsibilities they now rarely shoulder, make mistakes. In a 2011 interview with the Associated Press, Rory Kay, a veteran United Airlines captain and former top safety official of the Air Line Pilots Association, put it bluntly: "We're forgetting how to fly." The Federal Aviation Administration grew concerned enough that in January 2013 it issued a "safety alert" urging airlines to have their pilots do more manual flying. Relying too heavily on automation, the agency warned, could put both planes and passengers in jeopardy.

Rely on automation, and you may lose more than you gain

The airlines' hard lesson is worth reflecting on. These events reveal that automation, for all its benefits, can erode the skills and performance of the very people who rely on it. And the implications reach well beyond safety. Because automation changes how we act, how we learn, and what we come to know, it has a moral dimension as well. The decisions we make, or fail to make, about which tasks to hand off to machines shape our lives and the place we carve out for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has seeped into every corner of life, and its workings have grown more hidden. In our pursuit of convenience, speed, and efficiency, we have rushed to offload our workload onto computers without pausing to ask what the trade-off costs us.

Doctors use computers to make diagnoses and perform surgery; Wall Street bankers use them to assemble and trade financial instruments; architects use them to design buildings; lawyers use them to record evidence. And computerization is not confined to professional work. With smartphones and other ever smaller, cheaper computers, we rely on software for many of the chores of daily life. We open apps to help us shop, cook, socialize, even raise our children. We follow GPS instructions junction by junction. We ask recommendation engines what movies to watch, which books to read, what songs to hear. We put our questions to Google and hand our errands to Siri. In work and in life, more and more of us are living in a glass cockpit.

A hundred years ago, the British mathematician and philosopher Alfred North Whitehead wrote, "Civilization advances by extending the number of important operations which we can perform without thinking about them." It is hard to imagine a statement that better expresses our confidence in automation. Implicit in Whitehead's words is a belief that human activity forms a hierarchy: every time we hand a task or a tool over to a machine, we free ourselves to pursue work that ranks higher, demands more dexterity or intelligence, or opens broader horizons. Each step up may cost us something, but in the long run what we gain will far outweigh what we lose.

History offers ample evidence to support Whitehead. Ever since humans invented the lever, the wheel, and the abacus, we have been handing off countless physical and mental chores to machines. But Whitehead's observation should not be taken as a universal truth. When he wrote those words, automation was mostly confined to specific, well-defined, repetitive tasks: weaving cloth on a steam loom, tallying numbers on a mechanical calculator. Automation today is different. Computers can be programmed to carry out very complex work, weighing many variables to complete a tightly coordinated series of tasks. Much software now takes on mental labor as well, such as observation and detection, analysis and judgment, even decision-making, functions that until recently were considered uniquely human. The person operating the computer risks being reduced to a kind of high-tech odd-job worker who types in data, monitors the output, and watches for mishaps. Rather than freeing us to open new frontiers of thought and action, much of our software actually narrows our horizons. We have traded finer, more specialized talents for a set of rules.

Most of us like to believe that automation lets us spend our time on higher pursuits without changing the way we act or think. That is a fallacy, what automation scholars call the "substitution myth." A labor-saving device does not merely replace some isolated component of a job or other activity; it alters the whole task, including the roles, attitudes, and skills of the people involved. As Raja Parasuraman, the George Mason psychology professor and automation authority, and a colleague put it in a 2010 paper, "Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers."

Psychologists have found that when we work with computers we often slip into complacency and bias, two cognitive failings that undermine our performance and lead to errors. Automation complacency sets in when a computer lulls us into a false sense of security. Trusting that the machine will do its work flawlessly and handle any problem that arises, we let our attention scatter. We disengage from the task at hand, and our awareness of what is happening around us fades. Automation bias is our tendency to place too much faith in the accuracy of the information on the screen. Our trust in the software becomes so strong that we ignore or discount other sources of information, including our own eyes and ears. When a computer gives us incorrect or incomplete data, we remain oblivious.

Complacency and bias in high-risk settings, in cockpits, on battlefields, in factory control rooms, are well documented. But recent research suggests these problems afflict everyone who works with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually the highlights aid in the discovery of disease. But they can also have the opposite effect. Swayed by the software's suggestions, a radiologist may pay less attention to the areas that have not been highlighted and sometimes miss an early-stage tumor. Most of us have experienced automation complacency at the computer as well. Using e-mail or word-processing software, we proofread our grammar less carefully because we know the spell checker is on the job.

Automation turns people from actors into observers

The way computers can weaken our awareness and attentiveness points to a deeper problem. Automation turns us from actors into observers. We put down the stick and watch the screen. The shift may make our lives easier, but it can also stunt the development of expertise. Since the late 1970s, psychologists have been documenting a phenomenon known as the generation effect. It was first observed in studies of vocabulary: researchers found that people remember a word far better when they actively generate it in their own minds, when learning it "passes through the brain", than when they merely read it.

The phenomenon has since been studied thoroughly and shown to influence learning in many different settings. When you engage actively in a task, you set in motion complex mental processes that let you retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain builds specialized neural circuits dedicated to the activity. It assembles a rich store of information and organizes that knowledge so you can call on it in an instant. Whether it is Serena Williams on a tennis court or Magnus Carlsen at a chessboard [see note], the virtuoso can spot patterns in a situation, evaluate signals, and respond to changing circumstances with what looks like uncanny speed and precision. What appears to be instinct is in fact hard-won, painstakingly honed skill, forged in exactly the kind of strenuous effort that modern software is designed to spare us.

Note: Magnus Carlsen is a Norwegian chess grandmaster who became the world's top-ranked player at the age of 19. On April 26, 2004, at 13 years and 148 days old, Carlsen earned the title of chess grandmaster.

Christof van Nimwegen, a cognitive psychologist in the Netherlands, began an experiment in 2005 to investigate how software affects our mastery of know-how. He recruited two groups of people to play a computer game called Missionaries and Cannibals, whose rules are based on a classic logic puzzle. Players must ferry five cannibals and five missionaries (represented in van Nimwegen's experiment by five yellow balls and five blue balls) across a river in a boat that holds at most three passengers. Whether in the boat or on shore, the cannibals must never outnumber the missionaries (or the missionaries get eaten). One group worked with tutoring software that provided step-by-step guidance, highlighting which moves were allowed and which were not. The other group used a bare-bones program that offered no assistance at all.
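For readers curious about the structure of the puzzle itself, here is a small breadth-first-search solver for the five-and-five, boat-of-three version described above. It is only a sketch of the underlying state space, not the software van Nimwegen used; the state encoding and function names are invented for illustration.

```python
# Breadth-first search over the Missionaries and Cannibals state space:
# 5 missionaries, 5 cannibals, a boat that carries at most 3 people,
# and missionaries may never be outnumbered in the boat or on either bank.
# Illustrative only; not the experimental software described in the article.

from collections import deque

TOTAL = 5      # 5 missionaries and 5 cannibals
BOAT_CAP = 3   # the boat holds at most 3 passengers

def safe(m_left: int, c_left: int) -> bool:
    """Missionaries may never be outnumbered on either bank."""
    m_right, c_right = TOTAL - m_left, TOTAL - c_left
    return (m_left == 0 or m_left >= c_left) and (m_right == 0 or m_right >= c_right)

def solve():
    start = (TOTAL, TOTAL, 1)   # (missionaries on left, cannibals on left, boat on left?)
    goal = (0, 0, 0)
    # Legal boatloads: 1-3 people, missionaries not outnumbered on board.
    moves = [(m, c) for m in range(BOAT_CAP + 1) for c in range(BOAT_CAP + 1)
             if 1 <= m + c <= BOAT_CAP and (m == 0 or m >= c)]
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []                      # walk back through parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m_left, c_left, boat = state
        sign = -1 if boat else 1           # boat on the left moves people to the right
        for dm, dc in moves:
            nm, nc = m_left + sign * dm, c_left + sign * dc
            nxt = (nm, nc, 1 - boat)
            if 0 <= nm <= TOTAL and 0 <= nc <= TOTAL and safe(nm, nc) and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

if __name__ == "__main__":
    for step in solve():
        print(step)   # sequence of (missionaries left, cannibals left, boat on left) states
```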

As you might have guessed, the group using the tutoring software made faster progress at first. They could simply follow the prompts instead of pausing at every step to recall the rules and work out how to apply them to the new situation. But as the test went on, the group working without the assistive software gained the upper hand. They developed a clearer conceptual understanding of the game, devised better strategies, and made fewer mistakes. Eight months later, van Nimwegen had the same people play the game again. The group that had gone without the assistive software finished almost twice as fast as their rivals. Thanks to the positive workings of the generation effect, they had developed a stronger "imprinting of knowledge."

What van Nimwegen observed in the laboratory, that automation can hinder our ability to turn information into knowledge, can also be seen in the real world. In many companies, managers and other professionals rely on decision-support systems to analyze information and propose courses of action. Accountants, for example, use such systems when auditing businesses. The applications speed up the work, but there are signs that as the software grows more capable, the accountants grow less so. A recent study by Australian researchers examined the effects of decision-support systems at three multinational accounting firms. Two of the firms used highly sophisticated software that, based on an accountant's answers to some basic questions about a client, recommends which business risks to include in the client's audit file. The third firm used simpler software that requires the accountant to assess a list of possible risks and select the relevant ones by hand. The researchers then gave the accountants at each firm a test measuring their expertise. The accountants from the third firm scored markedly higher than the others, showing a much stronger grasp of different forms of risk.

Who needs humans? That may become a real question

What is most startling, and most unsettling, is that computer automation is still in its early stages. Experts used to assume that programmers' ability to automate complex tasks was limited, especially for tasks involving sensory perception, pattern recognition, and conceptual knowledge. They pointed to driving a car, which requires not only interpreting a fast-moving stream of visual signals in a split second but also responding fluidly to unexpected situations. "Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine a set of rules that could replicate a driver's behavior," two prominent economists wrote in 2004. Just six years later, in October 2010, Google announced that it had built a fleet of seven "self-driving cars" that had already logged more than 140,000 miles (over 200,000 kilometers) on roads in California and Nevada.

Self-driving cars offer a preview of how robots will navigate and perform tasks in the physical world, taking over from human hands the work of environmental awareness, coordinated movement, and fluid decision-making. The automation of mental tasks is advancing just as quickly. Only a few years ago, the idea of a computer competing on a quiz show like Jeopardy! sounded absurd, but in a celebrated 2011 match, IBM's Watson supercomputer trounced the show's long-reigning champion, Ken Jennings. Watson does not think the way humans do; it has no understanding of what it is doing or saying. Its advantage lies in the extraordinary speed of modern computer processors.

Race Against the Machine, published in 2011, explores the impact of computerization on the economy. In the book, MIT researchers Erik Brynjolfsson and Andrew McAfee argue that Google's driverless cars and IBM's Watson represent a new wave of automation that, by drawing on the "exponentially growing" power of computers, will change the nature of work in almost every job and profession. Today, they write, "computers are advancing so rapidly that applications once confined to science fiction are entering everyday life ... a few years is all it takes."

Who needs humans, anyway? That question, asked sometimes earnestly and sometimes rhetorically, comes up often in discussions of automation. If computers' capabilities are expanding so quickly, and humans by comparison seem slow, clumsy, and error-prone, why not build self-contained systems that perform their tasks flawlessly, without any human oversight or interference? Why not take the human factor out of the loop altogether? Commenting on the link between automation and pilot error, the technology theorist Kevin Kelly argued that the obvious solution is to develop a fully autonomous autopilot: "In the long run, human pilots should not be flying planes." The Silicon Valley venture capitalist Vinod Khosla recently suggested that health care will improve dramatically when medical software, which he jokingly calls "Doctor Algorithm," evolves from assisting physicians with diagnoses to replacing them entirely. The cure for imperfect automation is total automation.

The idea is seductive, but no machine is infallible. Sooner or later even the most advanced technology will fail or backfire, and the most advanced computer system will encounter circumstances its designers never anticipated. As automation technology grows more complex, and the couplings among algorithms, databases, sensors, and mechanical parts grow tighter, the potential sources of failure multiply and become harder to detect. Every component may be running perfectly, yet a single small error in the system's design can still produce a catastrophe. And even if a perfect system could be designed, it would still have to operate in an imperfect world.

In a classic paper published in the journal Automatica in 1983, Lisanne Bainbridge, an engineering psychologist at University College London, described a quandary of computer automation. Many system designers assume that human operators are "unreliable and inefficient," at least compared with computers. So designers try to assign people as small a role as possible. The human ends up as a monitor, passively watching screens. That is a job our species, famously prone to mind-wandering, is particularly unsuited to. Studies of radar operators dating back to World War II showed that people find it very difficult to keep their attention fixed on a screen for more than half an hour. "This means," Bainbridge noted, "that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities." And because a person's skills "deteriorate when they are not used," even an experienced operator who does nothing but stare at a screen will eventually be no different from a raw novice. Flagging attention combined with a fading grasp of how the system works raises the odds that when an incident occurs, the operator will be unable to respond. Thus the assumption that humans are the weakest link in the system becomes a self-fulfilling prophecy.

Blunting the ill effects of automation

Psychologists have identified some straightforward ways to temper automation's adverse effects. Software can be set to hand control back to the human operator at irregular intervals; knowing they may have to take over at any moment keeps operators attentive, improves their situational awareness, and strengthens their learning. We can also limit the scope of automation, ensuring that people working with computers still perform challenging tasks rather than merely look on. Giving people more to do helps keep the generation effect in play. And we can build educational routines into software, requiring users to repeat difficult manual and mental tasks so that memories form and skills develop.
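As a toy illustration of the first of those suggestions, the sketch below alternates long stretches of automatic control with short, unpredictably timed windows in which control reverts to the human operator. The class, its parameters, and the step-based scheduling are hypothetical stand-ins for whatever hand-back mechanism a real system designer might choose.

```python
# Hypothetical sketch: hand control back to the human at irregular intervals
# so that attention and skills stay exercised. All parameters are invented.

import random

class SharedControlLoop:
    def __init__(self, min_auto_steps=50, max_auto_steps=200,
                 manual_steps=20, seed=None):
        self.rng = random.Random(seed)
        self.min_auto_steps = min_auto_steps
        self.max_auto_steps = max_auto_steps
        self.manual_steps = manual_steps
        self._schedule_next_handover(step=0)

    def _schedule_next_handover(self, step):
        # Pick an unpredictable moment for the next handover to manual control.
        self.next_handover = step + self.rng.randint(self.min_auto_steps,
                                                     self.max_auto_steps)

    def mode(self, step):
        """Return 'manual' during a handover window, 'auto' otherwise."""
        if self.next_handover <= step < self.next_handover + self.manual_steps:
            return "manual"
        if step >= self.next_handover + self.manual_steps:
            self._schedule_next_handover(step)
        return "auto"

if __name__ == "__main__":
    loop = SharedControlLoop(seed=1)
    modes = [loop.mode(step) for step in range(500)]
    print("manual steps:", modes.count("manual"), "out of", len(modes))
```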

Some software designers have taken such advice to heart. In schools, the best instructional programs help students master a subject by focusing their attention, demanding effort, and reinforcing newly learned skills through repetition. Their design reflects recent findings about how the brain stores memories, blending conceptual knowledge with hands-on practice. Most software and apps, however, do not encourage learning and engagement; some do quite the opposite. The reason is that building and maintaining expertise almost inevitably comes at the expense of speed and efficiency. Learning requires inefficiency, and businesses bent on maximizing productivity and profit rarely make that concession. Individuals are no different; we all chase efficiency and convenience. We choose the program that lightens our workload over the one that makes us work harder and longer.

Whether it is the pilot in the cockpit or the doctor in the examining room, to know how to do something you have to actually do it. One of the greatest and most easily overlooked things about human beings is that each collision with reality deepens our understanding of the world and draws us more fully into it. Struggling with a hard task may make us dread the labor, but it is precisely that labor that defines us as people. Computer automation gets this backwards: it makes it easier for us to get what we want, while widening the distance between us and the work through which we come to understand the world. As we turn ourselves into creatures of the screen, we face an existential question: do we want to be defined by what we can do, or by what we want? If we do not manage to answer that question ourselves, our gadgets will be happy to answer it for us.

Nicholas Carr is the author of The Shallows: What the Internet Is Doing to Our Brains.



