The role of humans in automated systems

The role of humans in automated systems is shifting as advances in machine learning and cognitive computing lead to increasing levels of automation. In such systems, human operators often take the role of monitoring or supervisory control rather than active control. This shift can result in difficulties when quick control actions are necessary. To accommodate these changed roles, new types of human-computer interaction are needed that allow for intuitive, fluid coordination between humans and automated systems. VIRTUAL VEHICLE is developing the methods and computational tools that will allow designers and engineers to create the necessary advanced automation interfaces in the automotive and industrial domains.

Recent advances in artificial intelligence and cognitive computing allow for automation that alters the traditional boundaries between humans and computers. The highest levels of automation are often referred to as “autonomous systems”. The scope of automation differs in terms of how much of an operation is automated, ranging, for example, from automated self-parking to a start-to-finish driving operation. Full end-to-end automation includes highly adaptive task aspects, such as planning, initiating, terminating, and transitioning an operation. Such tasks are often complex and costly to automate. Instead, automation typically targets tasks that exhibit sufficient regularities, such as maintaining a constant distance or even steering a vehicle in certain environments. As a result, most end-to-end operations, especially in complex environments, still require human involvement.

Humans in automated systems

One reason why humans remain essential, even in highly automated systems, is their relative flexibility to adapt to new and unplanned situations and to find creative solutions. Although the powerful intelligent learning algorithms that designers and engineers utilise today can extract increasingly complex behavioural patterns, predicting the future state in a dynamic environment such as a city or public airspace remains extremely difficult.

Figure 1: Model of interconnection between human and automation

While overly complex environments such as road or air traffic could theoretically be simplified through radical new standards that reduce complexity (e.g. making air traffic more similar to rail traffic), this is rarely feasible: the necessary national and international harmonisation efforts and costs are substantial and often incompatible with the commercial need for product differentiation (the aviation innovation programmes NextGen in the U.S. and SESAR in the E.U. are examples). Instead, allocating highly adaptive functionality to the ‘intelligent’ human is often more practical, less costly, and easier to certify. However, when the allocation of functions is driven primarily by technological capabilities and economic interests rather than by human constraints and abilities, the resulting systems are often difficult to use and can lead to unsafe operations, or simply remain unsold. Current research is thoroughly investigating the human role in automated systems. The results indicate that supervisory control can lead to boredom and difficulties in maintaining sufficient situation awareness. In addition, humans respond more slowly when they are ‘out of the loop’.

Aviation automation has shown that the increased use of flight automation can lead to a loss of manual piloting skill and increased pilot confusion about what the automation is actually doing. This leads to the “automation conundrum”: the more and the better automation works, the less likely it is that human operators will be able to intervene effectively and take over manually when necessary. This multifaceted interconnection between human and automation has been well described, for example in Endsley’s HASO model [2] (see Figure 1). In essence, basic automation performance and human interaction performance are mutually dependent.

Human-system integration

As a result, human-automation interaction has become a critical enabler for the successful adoption of automation. The challenge is that the interface must provide sufficient information to a human who is less aware of what is transpiring, while at the same time enabling a quick response. When this fails, the result can be information overload and an inability to respond effectively, as in the accidents of Air France flight 447 in 2009 and Asiana flight 214 in 2013. The guiding principle for developing effective human-automation interaction has been well recognised for many decades: technological and human research should collaborate from the early design phases to jointly optimise the allocation of functions to automation and humans. This helps prevent problems such as the need for human operators to fix what automation cannot do, or the creation of systems that fail because of unrealistic expectations of the human operator. While the principle is straightforward, its implementation is more difficult because it requires collaboration between rather diverse team members (e.g. technical engineers and human-social researchers), who possess different knowledge, backgrounds, and goals. This requires a shared language, methods, and tools to mutually contextualise technical and human characteristics.

Methods and tools to integrate humans and automation

Connecting human and technological research to design effective automation requires methods and tools to collaborate, to exchange or share critical information, and to iterate designs. To support this process, VIRTUAL VEHICLE is developing methods and tools that centre around a new cognitive modelling architecture, the Graz Model Human Processor (GMHP), see Figure 2. The GMHP enables the simulation of the cognitive processes that underlie human performance and the subsequent integration of such processes into system simulations.
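The GMHP itself is not publicly documented here, so the following is only a minimal sketch of the general idea behind Model-Human-Processor-style architectures: a task is decomposed into serial perceptual, cognitive, and motor processor stages with nominal cycle times (the classic values from Card, Moran and Newell), and the stage times sum to a predicted response time. All class names and parameter values are illustrative assumptions, not GMHP code.

```python
# Illustrative sketch only: the GMHP is not publicly documented, so this follows
# the classic Model Human Processor (Card, Moran & Newell, 1983) on which such
# architectures build. All names and numbers here are hypothetical.
from dataclasses import dataclass


@dataclass
class ProcessorStage:
    name: str
    cycle_ms: float   # nominal cycle time of this processor stage (ms)
    cycles: int       # number of cycles the task demands from this stage


def predict_response_time(stages: list[ProcessorStage]) -> float:
    """Sum the serial stage times to predict a simple response time in ms."""
    return sum(s.cycle_ms * s.cycles for s in stages)


# Example task: the operator perceives a take-over request, decides on a
# response, and moves a hand towards the steering wheel.
takeover = [
    ProcessorStage("perceptual", cycle_ms=100, cycles=1),  # detect the alert
    ProcessorStage("cognitive",  cycle_ms=70,  cycles=2),  # recognise + select response
    ProcessorStage("motor",      cycle_ms=70,  cycles=1),  # initiate hand movement
]
print(f"Predicted response time: {predict_response_time(takeover):.0f} ms")
```

Even such a simple stage model shows how a cognitive architecture can turn a task description into a quantitative performance prediction that a system simulation can consume.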

Benefit

As part of an ongoing internal strategic project, the GMHP architecture makes it possible to model processes such as visual scanning, response selection, and distraction, and to quantify human performance across large numbers of simulation scenarios. In addition, the GMHP will be used to design and implement adaptive user interfaces that learn from observing human behaviour. The GMHP is based on more than 30 years of cognitive modelling research and is being developed in collaboration with the Computer Science department of the University of Illinois at Urbana-Champaign. The GMHP will soon be available to enable the development of effective automation and adaptive computer interfaces in research projects and commercial applications.
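To illustrate what quantification over large numbers of scenarios might look like, the sketch below samples variability in stage times, plus an occasional distraction delay, across many simulated take-over runs and reports the resulting distribution. The function names, distributions, and parameters are assumptions chosen for illustration and do not reflect the actual GMHP implementation.

```python
# Hypothetical sketch of per-scenario human-performance quantification:
# sample stage-time variability around nominal MHP values over many runs
# and summarise the distribution of predicted take-over times. Not GMHP code.
import random
import statistics


def simulate_takeover_time(rng: random.Random) -> float:
    """One simulated run: stage times (ms) drawn around nominal MHP values."""
    perceptual = rng.gauss(100, 20)           # detect the take-over request
    cognitive = rng.gauss(70, 15) * 2         # recognise situation + choose action
    motor = rng.gauss(70, 15)                 # move hand to the wheel
    # Assumed 20% chance the operator is momentarily distracted.
    distraction = rng.expovariate(1 / 300) if rng.random() < 0.2 else 0.0
    return perceptual + cognitive + motor + distraction


rng = random.Random(42)
times = [simulate_takeover_time(rng) for _ in range(10_000)]
print(f"mean = {statistics.mean(times):.0f} ms, "
      f"95th percentile = {sorted(times)[int(0.95 * len(times))]:.0f} ms")
```

Distributions of this kind, rather than single point estimates, are what make it possible to compare interface or automation variants across many simulated scenarios before any hardware is built.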

 

Get in touch with the authors:


Dr. Peter Mörtl

Key Researcher for Human-Systems Integration

 


Dr. Wai-Tat Fu

Associate Professor for Human-Computer Interaction
Department of Computer Science, University of Illinois at Urbana-Champaign