
Author: delv
Published: 2014/1/8 7:48:09
Not a science fiction film: this is what the next-generation operating system interface will look like

Generally, when we talk about the "user interface," the first things that come to mind are the graphical interfaces, text, and audio cues that operating systems and applications present to us. We could hardly be more familiar with operating system user interfaces: all we need to do is click the icons arranged on a Windows or Mac OS X desktop.


In fact, early operating system interfaces were not the "graphical interfaces" we know today but were mainly text-based; that only changed after Apple co-founder Steve Jobs launched the Macintosh operating system in 1984. In recent years the user interface has leapt forward several more times, including the touch interfaces now common on smartphones, voice interfaces such as Siri's, and the gestural interface of the Xbox Kinect. Of course, most of these interfaces are still in the early stages of their development.


In any case, the emergence of these different styles of user interface gives us a rough preview of what comes next. Recently, a US technology outlet compiled a list of eight features the operating system interface of the future will have:



Gestural interface


In the 2002 science-fiction film Minority Report, we saw many scenes in which a computer system is operated entirely by gesture. Wearing a pair of futuristic electronic gloves, the film's star Tom Cruise could effortlessly process pictures, video, and data on his computer. This operating-system concept was reportedly designed by John Underkoffler, chief scientist at Oblong, who also served as the film's technical advisor.


Ten years ago such technology was indeed futuristic, even somewhat unrealistic, but that is no longer the case. Devices such as the Wii and Kinect already offer similar gesture-control features.


Gesture control has a unique advantage: it adds a z axis to the traditional two-dimensional operating space, so many functions that were previously hard to achieve become possible. Underkoffler believes the gesture control shown in Minority Report could become reality within the next five years.
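What the extra z axis buys can be sketched in a few lines of code: with depth available, a "push" toward the screen becomes distinguishable from a planar swipe. This is a toy illustration only; the coordinates, thresholds, and gesture names are invented for the example and do not correspond to any real gesture SDK.

```python
def classify_gesture(start, end, threshold=0.1):
    """Classify a hand movement from 3D start/end points (x, y, z in metres)."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    dz = end[2] - start[2]
    # The z axis is what a 2D touch surface lacks: motion toward the display.
    if abs(dz) > max(abs(dx), abs(dy)) and abs(dz) > threshold:
        return "push" if dz < 0 else "pull"
    if abs(dx) >= abs(dy) and abs(dx) > threshold:
        return "swipe_right" if dx > 0 else "swipe_left"
    if abs(dy) > threshold:
        return "swipe_up" if dy > 0 else "swipe_down"
    return "none"

# Hand moving from 0.5 m to 0.2 m in front of the screen: a push.
print(classify_gesture((0.0, 0.0, 0.5), (0.0, 0.0, 0.2)))  # → push
```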


Brain control


When people think, the brain produces a variety of electrical signals, and these signals can be harnessed to handle specific tasks. Tan Le, co-founder and president of the San Francisco neurotechnology company Emotiv Lifesciences, led a team that developed a neuroheadset: put it on, and a mere thought can control the computer in front of you.


For now, the technology is still in its early stages of development and is not yet ready for commercial rollout. Still, just the thought of one day switching off the lights or appliances with our "minds" is enough to get us excited.
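The "switch off the lights by thinking" scenario boils down to mapping a signal stream to a discrete command. The sketch below is purely illustrative and uses no real neuroheadset API: it smooths a stream of normalized "concentration" readings so that a stray spike does not toggle anything.

```python
from collections import deque

class MindSwitch:
    """Toy thought-to-command gate: fire only when a smoothed signal holds high."""

    def __init__(self, threshold=0.7, window=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # rolling window of recent readings

    def feed(self, value):
        """Feed one normalized signal sample; return a command or None."""
        self.samples.append(value)
        if len(self.samples) == self.samples.maxlen:
            avg = sum(self.samples) / len(self.samples)
            if avg > self.threshold:
                self.samples.clear()  # reset so the command fires only once
                return "toggle_lights"
        return None
```

A sustained high reading fires the command once; isolated spikes are averaged away.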



Flexible OLED display


If you find a smartphone's touchscreen insufficiently responsive, a flexible OLED display may suit you well. OLED stands for organic light-emitting diode, also known as organic electroluminescent display (OELD). OLED display technology is self-emissive and flexible: when a current passes through, the organic materials emit light, and an OLED screen offers a very wide viewing angle.


If this technology reaches practical use, users will be able to zoom a picture in or out by bending the OLED screen, or raise the volume by folding a corner. Combined with smartphones, it would also end the winter nuisance of operating a touchscreen while wearing gloves.
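As a rough illustration of the bend-to-control idea described above, the toy function below maps a fold angle to a volume level. The sensor reading, range, and scale are assumptions made for the example; no real flexible-display API is being modeled.

```python
def bend_to_volume(angle_deg, max_angle=45.0):
    """Convert a fold angle (degrees) into a volume level from 0 to 100."""
    # Clamp to the usable bend range, then scale linearly to 0-100.
    clamped = max(0.0, min(angle_deg, max_angle))
    return round(100 * clamped / max_angle)

print(bend_to_volume(22.5))  # half-fold → 50
```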



Augmented reality


If you are familiar with this concept, you may already have experienced augmented reality in applications such as Wikitude and Drodishooting. For now, the most mature product applying the technology is the much-anticipated Google Glass.


In fact, the applications of augmented reality go far beyond that. For example, when we see a sign in a foreign language abroad, the technology could project the corresponding translation for us. Augmented reality can also be combined with projection to display images in real time within the current environment. MIT has reportedly combined augmented reality with gesture control to build a next-generation user interface prototype.
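The sign-translation scenario described above can be sketched as a simple recognize-translate-overlay pipeline. The stub functions and the tiny phrasebook below are stand-ins: a real AR application would call an actual OCR engine and translation service at those points.

```python
def recognize_text(frame):
    """Stand-in for an OCR step (a real app would call an OCR engine here)."""
    return frame.get("text", "")

def translate(text, target="en"):
    """Stand-in for a translation service; the phrasebook is illustrative."""
    phrasebook = {"出口": "Exit", "入口": "Entrance"}
    return phrasebook.get(text, text)

def overlay_translation(frame):
    """AR loop body: read the sign in the frame and attach its translation."""
    frame["overlay"] = translate(recognize_text(frame))
    return frame

print(overlay_translation({"text": "出口"})["overlay"])  # → Exit
```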



Voice user interface


The voice user interface we know best is Apple's Siri, a service that pairs a speech-recognition system with a natural-language user interface to walk users through specific tasks. You can also see voice user interfaces in products such as Google Glass, which requires users to say "OK, Glass" to activate its task system.
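The "OK, Glass" activation pattern amounts to hotword gating: nothing is treated as a command until the wake phrase is heard. The sketch below fakes the recognizer with plain strings, since real systems match audio rather than text, and the one-command-per-activation rule is an assumption for the example.

```python
HOTWORD = "ok glass"

def handle_utterances(utterances):
    """Return only the commands spoken immediately after the hotword."""
    active = False
    commands = []
    for text in utterances:
        text = text.lower().strip()
        if not active:
            active = (text == HOTWORD)  # everything before the hotword is ignored
        else:
            commands.append(text)
            active = False              # one command per activation
    return commands

print(handle_utterances(["take a picture", "ok glass", "take a picture"]))
# → ['take a picture']  (the first utterance came before the wake phrase)
```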


Today's voice user interfaces still have one fatal flaw, however: a lack of speech-recognition accuracy. But given how rapidly smartphones are advancing and spreading, solving this problem is probably only a matter of time.



Tangible user interface


A tangible user interface combines the physical environment with digital tools, and Microsoft's PixelSense is the best embodiment of the idea. In short, PixelSense is a Microsoft technology combining hardware and software that lets users issue commands to the screen directly by hand or voice, rather than relying on a mouse and keyboard.


For the latest PixelSense hardware, Microsoft and Samsung jointly built the interactive display "SUR40". The SUR40 senses the user's touch with built-in sensors rather than cameras. The system can also recognize objects placed on the screen by their size and shape, or even by their own electronic tags. For example, place a smartphone on the SUR40 and part of the screen will play the phone's pictures as a slideshow.
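The object-recognition behavior can be pictured as a lookup from a recognized object to a tabletop action, as in the smartphone-to-slideshow example. This is purely illustrative; the tag names and action strings below are invented and are not part of any SUR40 or PixelSense API.

```python
# Hypothetical mapping from a recognized object tag to a surface action.
ACTIONS = {
    "smartphone": "start_photo_slideshow",
    "paintbrush": "open_canvas",
}

def on_object_placed(tag):
    """Pick a tabletop action for a recognized object; unknown objects are ignored."""
    return ACTIONS.get(tag, "ignore")

print(on_object_placed("smartphone"))  # → start_photo_slideshow
```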



Wearable computing


As the name suggests, wearable computing refers to technological devices worn on the body, whether gloves, glasses, or even a garment built from high-tech materials. But whatever form a wearable device takes, a key feature of its built-in user interface is that it frees the user's hands without interfering with normal activities.


Sony, for one, released an Android-based smartwatch earlier this year. The device connects to the user's smartphone over Bluetooth and notifies the wearer when new push messages arrive. And, just as with any smartphone, users can download and install applications for the Sony SmartWatch.


As science and technology in this field advance, we expect to see more microchips and smart technologies emerge in the future, which in turn will drive wearable computing further forward.



Sensor network user interface


The example in the picture nicely shows how three small color-LCD devices interact using built-in sensors, infrared, and accelerometers, which is also the most common scenario for a "sensor network user interface." Users can even shake, tilt, or bump the three little devices together to produce different interaction effects.
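The shake and tilt interactions described above rest on reading each device's accelerometer. The toy classifier below shows the general idea for a single sample; the thresholds and axis conventions are assumptions made for the example, not any real device's API.

```python
import math

def classify_motion(ax, ay, az, g=9.81):
    """Classify one accelerometer sample (m/s^2) as shake, tilt, or rest."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if abs(magnitude - g) > 5.0:        # far from 1 g overall: being shaken
        return "shake"
    if abs(ax) > 3.0 or abs(ay) > 3.0:  # gravity leaning into x/y: tilted
        return "tilt"
    return "rest"

print(classify_motion(0.0, 0.0, 9.81))  # device lying flat → rest
```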


Compared with the other user interfaces, the sensor network user interface is a more "crowded" one, because this mode usually requires users to exercise control across multiple devices and screens.



