AUTOMATION PSYCHOLOGY: Automation: [i]Noun:[/i] Any sensing, detection, information processing, decision making or control action that could be performed by humans but is actually performed by machine. [i]Verb[/i]: The designing, engineering, and implementation process resulting in the above. Types of Automation: [u]Types and Levels of Automation[/u]: A model to describe automation. Parasuraman, R., Sheridan, T.B. & Wickens, C.D. (2000). A Model for Types and Levels of Human Interaction with Automation. [i]IEEE Transactions on Systems, Man, and Cybernetics -- Part A: Systems and Humans[/i], 30 (3), 286-297. Types: What is being automated? (Following a simple four-stage model of human information processing.)[list][*][b]Information acquisition, selection, filtering.[/b] (Sensory processing.)[/*][*][b]Information integration.[/b] (Perception, working memory.)[/*][*][b]Action selection and choice.[/b] (Decision making.)[/*][*][b]Control and action execution.[/b] (Response selection.)[/*][/list] Levels: Levels of automation of decision and action selection. 10. The computer decides everything, acts autonomously, ignoring the human. 9. ... informs the human only if it decides to. 8. ... informs the human only if asked for. 7. ... executes automatically, then necessarily informs the human. 6. ... allows the human a restricted time to veto before automatic execution. 5. ... executes its suggestion if the human approves. 4. ... suggests one alternative. 3. ... narrows the selection down to a few. 2. ... offers a complete set of decision/action alternatives. 1. ... offers no assistance: human must take all decisions and actions. [u]Supervisory Control[/u]: Delegating the active executive tasks to automation leaves the human operator as a passive, out-of-the-loop supervisor, with the risk of complacency as a major concern. Operator tasks: [list][*][b]Planning[/b] what needs to be done, based on a model of the physical system to be controlled, objectives and possibilities.[/*][*][b]Teaching[/b] the automation, i.e. deciding on a desired control action and communicating the necessary commands to the automation.[/*][*][b]Monitoring[/b], i.e. allocating attention among the appropriate displays or other sources of information about task progress and estimating the current state of the system, to maintain situation awareness.[/*][*][b]Intervening[/b], i.e. reprogramming or taking over manual control from the automation in case of a diagnosed abnormality of sufficient magnitude.[/*][*][b]Learning[/b] from knowledge of the results, like planning, is an out-of-the-loop human function and feeds back into planning the next phase of supervision.[/*][/list] Monitoring strategy: Moray, N. & Inagaki, T. (2000). Attention and Complacency. [i]Theoretical Issues in Ergonomics Science[/i], 1 (4), 354-365. Complacency should measure defective monitoring, not the number of missed signals, as these could be caused by other variables such as system design, training, workload, and monitoring strategy. A monitoring strategy is a [b]plan of action to most effectively sample multiple sources[/b], and is influenced by system knowledge (reliability, range limits, possible consequences) and the task at hand. It is not always possible to catch all signals; an optimal strategy may therefore still result in misses. "Complacency" may in fact be part of an appropriate monitoring strategy, as highly reliable processes, or currently high-divergence meters, require less sampling. 
As a reference to measure complacency, an optimal monitoring strategy should be defined. On a scale from [b]scepticism[/b] (under-trust, over-sampling) to [b]complacency[/b] (over-trust, under-sampling), [b]eutactic behaviour[/b] would hold the middle, where optimal monitoring occurs with the right amount of trust and sampling. Experimentally, this means monitoring strategy (e.g. through eye-tracking) needs to be investigated, not (just) detection rate. [u]Decision Support Systems[/u]: Decision support systems are a type of [i]interactive[/i] system that is generally automated up to the third stage (action selection and choice), but leaves the execution to the operator. The operator is assisted in his choice of action. [u]Alarms and Warnings[/u]: Meyer, J. (2004). Conceptual Issues in the Study of Dynamic Hazard Warnings. [i]Human Factors[/i], 46 (2), 196-204. Dynamic warnings are [b]sensor-based signaling systems[/b] that present, at any moment in time, one of two or more messages (including inactivity) based on input from sensors, people, or computational devices. Some messages may alert the user about a situation requiring closer monitoring or intervention. [i]Compliance[/i] and [i]reliance[/i]: In response to a warning, [b]compliance[/b] denotes acting as the warning implies, even when the warning is false (error of commission), whereas [b]reliance[/b] denotes not responding when the warning does not explicitly imply action, even when action is required (error of omission). Studies suggest that these are independent constructs, influenced separately by various factors. For example, reliance may decrease with experience while compliance remains unchanged. At least one study challenges that (see below). Rational response: Expectancy-Value Theory can be used to describe the rational response, which would be to maximise expected value. Where a failure is either present (F) or not present (N), an operator can act as if it is present (f) or not act (n). [b]The operator should act when EVf > EVn[/b]. This may depend on the probability of F. A threshold probability pF* can then be calculated, above which the operator should respond with f (see the numeric sketch further below). Warning systems effect a change in pF: The probability of a failure is supposedly greater when the warning is given (pF|W). [b]The operator should act when pF|W > pF*, but a system may not be that well-calibrated[/b]. According to EV analysis, the correct response does not depend on the outcome, but on the a priori computation. Acting was the correct response when its EV was higher than that of not acting, even when the action led to negative effects. Performing many unnecessary actions (i.e. commission errors) may be appropriate when the cost of missing a necessary action is high. [b]Errors of omission and commission are therefore not necessarily incorrect actions[/b]. Determinants of response: Assuming a multitask environment and an experienced operator (who has noticed and understood the warning, and samples multiple sources of information), the following factors influence response behaviour.[list][*][b]Normative factors[/b]: Characteristics of the situation.[list][*][b]Situational factors[/b]: Related to properties of the situation in which a warning is used, e.g. probability of failures, payoff structure.[/*][*][b]Diagnostic factors[/b]: Related to the warning system (e.g. 
SDT sensitivity, conditional probabilities) and the available additional information.[/*][/list][/*][*][b]Task factors[/b]: Related to the characteristics of the task during which a warning is encountered.[list][*][b]Task structure factors[/b]: E.g. the number of different variables/systems that need to be controlled, the presence of other people.[/*][*][b]Interface factors[/b]: Related to the way the task-relevant information is displayed and the display/control design, e.g. [b]signal urgency[/b] (flashing, coloured).[/*][/list][/*][*][b]Operator factors[/b]: Variables that characterise the individual operator in the particular situation.[list][*][b]General operator factors[/b]: E.g. abilities, training, skills, risk-taking tendency, strategies.[/*][*][b]System-specific operator factors[/b]: E.g. the operator's trust in the system, knowledge of the system.[/*][/list][/*][/list] [img]http://files.noctifer.net/alarm_response.jpg[/img] Note the various interaction effects (e.g. operator factors and task demands) and closed-loop dynamics (e.g. learning effects). Orientation: [b]Reliance-oriented automation[/b] indicates that the system is operating normally. The operator needs to act when the automation fails to provide this indication (implicit indicator of the need to act). [b]Compliance-oriented automation[/b] indicates an abnormal situation, and the operator needs to act in response to this indicator (explicit indicator of the need to act). False alarms: False alarms or false positives are signals given off that do not reflect the true state of the world as intended. [i]Traditional false alarm[/i]: A state is being signalled that is not actually there. [i]Nuisance alarm[/i]: The alarm is being given in the wrong context or caused by irrelevant factors. [i]Inopportune alarm[/i]: The alarm is being given at an inopportune point in time, usually too early, leaving the operator unable to correctly interpret it. [i]Cry-wolf effect[/i]: Too many false alarms lead operators to ignore these alarms, increasing the chance of missing a true positive, or, when they do respond, to respond more slowly. Countermeasures: Technology-based countermeasures include [b]likelihood alarm displays[/b] and optimal calibration. Operator-based strategies include training. Task-based strategies include workload reduction. Ironies of Automation: Bainbridge, L. (1983). Ironies of Automation. [i]Automatica[/i], 19 (6), 775-779. [u]Human error[/u]: Automation is implemented to prevent human error by replacing the human; however, [b](human) design error is a major source of automation problems. A human supervisor remains necessary[/b] to make sure the automation itself does not commit any errors. [u]Operator tasks[/u]: A human operator remains necessary to see to those tasks which could not be automated, potentially giving him an [b]arbitrary list of unconnected tasks[/b]. Additionally, his intervention is required when the automation fails to manage its tasks, i.e. in particularly critical situations, meaning [b]the "error-prone" human must take over in particularly "error-prone" situations[/b]. Potential Problems with Automation: [u]Vigilance[/u]: Humans are notoriously bad at vigilant monitoring, a task that increases in relevance with increasing automation. [u]Deskilling[/u]: Due to the operator's inactivity as perceiver, decision maker, actor, and controller, skills that are no longer used may be lost -- but are no less required in critical situations. 
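Returning to the expectancy-value analysis of warning response and the cry-wolf effect above: a minimal numeric sketch of why frequent "false" compliance can still be the rational response. All payoff values and the function name are hypothetical, chosen only for illustration, and are not taken from Meyer (2004).
[code]
# Expectancy-value threshold for responding to a warning (toy numbers).
# States: failure present (F) or not (N); actions: act (f) or not act (n).
# V[(action, state)] is the payoff of that combination.
V = {
    ("f", "F"): -10,    # acted, failure present: cost of handling the failure
    ("f", "N"): -10,    # acted, no failure: cost of an unnecessary action
    ("n", "F"): -1000,  # did not act, failure present: cost of a miss
    ("n", "N"): 0,      # did not act, no failure: nothing happens
}

def expected_value(action, p_failure):
    """EV of an action given the probability that a failure is present."""
    return p_failure * V[(action, "F")] + (1 - p_failure) * V[(action, "N")]

# The operator should act when EVf > EVn.  Solving for the probability at
# which the two expected values are equal gives the threshold pF*:
p_star = (V[("f", "N")] - V[("n", "N")]) / (
    (V[("f", "N")] - V[("n", "N")]) - (V[("f", "F")] - V[("n", "F")])
)
print(f"pF* = {p_star:.3f}")  # 0.010 with these payoffs

# Even if most warnings are false (say pF|W = 0.05), acting still maximises
# expected value because pF|W > pF*: the commission errors are "correct".
for p in (0.005, 0.05):
    print(f"pF|W = {p}: act -> {expected_value('f', p) > expected_value('n', p)}")
[/code]
With a high miss cost relative to the cost of an unnecessary action, the threshold pF* is very low, which is exactly the point made above: many false alarms can be the rational price of never missing a failure.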
[u]Workload[/u]: Automation is intended to reduce workload, yet it does not reduce the operator's tasks but rather [b]changes[/b] them, and may in fact spread workload disproportionately across situations. [i]Clumsy automation[/i]: Automation that reduces workload in situations with already low workload, and increases workload in already highly demanding situations. [u][i]Situation Awareness[/i][/u]: "The perception of elements in the environment within a volume of time and space ([b]level 1[/b]), the comprehension of their meaning ([b]level 2[/b]), and the projection of their status in the near future ([b]level 3[/b])". In connection with automation, the following issues can be found at these levels. Level 1 [list][*][b]Lacking supervision[/b], e.g. caused by overtrust or vigilance problems.[/*][*][b]Changed feedback channels[/b]. Automation can change and/or reduce the flow of information to the operator.[/*][*][b]Lacking system transparency[/b]. The operator may not be sufficiently informed about what the automation is currently doing.[/*][/list] Levels 2 and 3 [list][*][b]Complexity[/b]. Complex systems are difficult to understand, leading to an[/*][*][b]incorrect mental model[/b], leaving the operator unable to properly understand the automation even when it is working properly.[/*][/list] Additionally, a number of "demons" of SA have been formulated. Lack of situation awareness can lead to automation surprises and mode errors. Demons of Situation Awareness: [list][*][b]Attentional Tunneling[/b]: SA is particularly important in complex environments, where shared attention is required. Due to limited attentional resources, selective attention is employed. SA is largely dependent on the strategy of selective attentional sampling.[/*][*][b]Working memory limitations[/b]: Especially problematic for novices who cannot yet "chunk" situations and system states.[/*][*][b]Workload, Anxiety, Fatigue and Other factors (WAFOS)[/b] further limit available resources and lead to more troublesome information acquisition (e.g. attentional tunneling) and generally less well structured, more error-prone information processing, leading e.g. to [b]premature closure[/b], "jumping to conclusions". [/*][*][b]Information overload[/b] as caused by the increase of information, the way it is presented, and the speed with which it is presented. "It's not a problem of volume, but of bandwidth!"[/*][*][b]Misplaced salience[/b]. Saliently presented elements, while advantageous in some cases, can be distracting in others.[/*][*][b]Complexity creep[/b] and feature escalation, the continuous addition of functionality and complexity.[/*][*][b]Incorrect mental model[/b] can be particularly problematic, as persons do not realise that their mental model is faulty.[/*][*][b]Out Of The Loop Unfamiliarity (OOTLUF)[/b] as characterised by complacency and deskilling.[/*][/list] [i]Automation surprises[/i]: When the automation does not act or respond as was expected or as can be understood. [i]Mode errors[/i]: The operator uses the system as if it were in a different mode, and is therefore surprised by/does not understand the resulting feedback. Measuring Situation Awareness: There are a number of ways in which one might attempt to measure SA. However, many of these cannot measure SA [i]specifically[/i]. [b]Eye-tracking[/b], for example, indicates what information has been looked at, but not what information has been [i]seen[/i]. 
[b]EEG[/b], then, allows registration of perception and processing, but leaves out the critical element of correct interpretation. [b]Global performance measures[/b] only measure the result of a long cognitive process, providing little information about what exactly caused this performance -- particularly, poor performance can be the result of any number of factors that are not SA-related. [b]External task measures[/b] (e.g. changing a specific element on the display and timing how long it takes to be noticed) may better differentiate this, but are highly intrusive and assume that the operator will act in a certain way, which he very likely will not. [b]Embedded task measures[/b] are performance measures on subtasks which may provide inferences about SA related to that task, but this would not capture when, for example, improvement on this subtask leads to a decrement on another, and it may be difficult to select relevant tasks. A number of subjective techniques have been developed. An [b]observer[/b] (either separately or, unknown to the subject, as a confederate), especially a trained one, may have good situational knowledge but cannot know what exactly the operator's concept of the situation is. For the [b]Subjective Workload Dominance metric (SWORD)[/b], subjects make pair-wise comparative ratings of competing design concepts along a continuum that expresses the degree to which one concept entails less workload than the other, resulting in a linear ordering of the design concepts. The exact role of SA in this, however, is unclear. Specifically for SA measurement, the [b]Situation Awareness Rating Technique (SART)[/b] is a subjective rating scale that considers the operators' workload in addition to their perceived understanding of the situation. However, someone who is aware of what he does not know (and with that, may be said to have better understanding) may therefore rate his understanding lower than someone who knows less but is not aware of it. The [b]Situation Awareness Global Assessment Technique (SAGAT)[/b] is specifically designed to [b]objectively[/b] measure SA across all three levels. To this end, a simulation is frozen, all displays blanked out, and a number of questions are asked regarding the state of the situation before the freeze. These questions are designed with the operators' goals in mind, working down past the decisions and judgements required to achieve those goals, to what needs to be known, understood, and foreseen. The questions need to be posed in a cognitively compatible manner, preventing the operator from having to perform any transformations of the information, which may influence the answer or may cause him to reconstruct, rather than recall, the information. [u]Trust[/u]: Trust is influenced by the system's [b]perceived reliability[/b], the human's [b]predisposition[/b] towards technology, his [b]self-efficacy[/b] and many other aspects (see dynamic model). The extent to which the operator's trust is appropriate is referred to as its calibration. [i]Calibration[/i]: Ideally, the operator's trust is [b]well-calibrated[/b], i.e. in direct relation to the automation's actual reliability. When this is not the case, we speak of mistrust. Two aspects of calibration: [list][*][b]Resolution[/b] refers to how precisely a judgement of trust differentiates levels of automation capability. 
With low resolution, large changes in automation capability are reflected in small changes in trust.[/*][*][b]Specificity[/b] refers to the degree to which trust is associated with a particular component or aspect of the automation. [b]Functional specificity[/b]: The differentiation of functions, subfunctions and modes. Low functional specificity means trust is based on the system as a whole. [b]Temporal specificity[/b] describes changes in trust over time. High temporal specificity means that trust reflects moment-to-moment changes.[/*][/list] Dynamic Model: Based on Lee, J.D. & See, K.A. (2004). Trust in Automation: Designing for Appropriate Reliance. [i]Human Factors[/i], 46 (1), 50-80. [img]http://files.noctifer.net/trust_dynamic_model_lee_see.png[/img] Some notes:[list][*]Context[list][*]Individual: Predisposition to trust as a general personality trait, e.g. as operationalised through the Interpersonal Trust Scale (Rotter, 1980) or the Complacency Scale (Singh, Molloy & Parasuraman, 1993). A higher general level of trust is not correlated with gullibility; it is positively correlated with correct assessment of the trustworthiness of others.[/*][*]Individual: Trust as a dynamic attitude evolves over time during a relationship, based on experience.[/*][*]Organisational: Represents interactions between people, e.g. when informing one another about others' trustworthiness. [/*][*]Cultural: The set of social norms and expectations based on shared education and experience influences trust. Collectivistic (as compared to individualistic) cultures have a lower general level of trust, as "trust" as a concept is less needed due to a clear division of roles.[/*][*]Environmental: Specific situational factors can influence the automation's capability, e.g. when a situation occurs for which the automation was not designed.[/*][/list][/*][*]Trust is [b]ultimately an affective response[/b], but is influenced by analytic and analogical considerations (though not as much as vice versa). [/*][*]Reliance: Even after the operator has formed an intention to rely on the automation based on how much it is trusted (while, for example, also taking workload into consideration), contextual factors such as time constraints may still prevent the automation from actually being relied upon.[/*][*]Automation[list][*]Information about the automation can be described along two dimensions. [b]Attributional abstraction[/b] refers to attributions on which trust can be based. [b]Performance[/b] refers to [i]what[/i] the automation does: Current and historical operation of the automation, or competence as demonstrated by its ability to achieve the operator's goals. [b]Process[/b] information describes [i]how[/i] the automation operates: The degree to which the algorithms are appropriate and capable. [b]Purpose[/b] describes [i]why[/i] the automation was developed. The [b]level of detail[/b] refers to the functional specificity of the information.[/*][*]The display is often the only way to communicate this information. A number of aspects of display design can influence trust. [b]Structure and organisation[/b] are important for all three paths of trust evolution. For the analytic and analogic aspects specifically, the exact [b]content[/b] needs to be considered, as well as its [b]amount[/b]: Too much information may take too long to interpret. This needs to be weighed against aspects of specificity. Affectively, [b]good design[/b] (aesthetics, use of colour etc.) can increase trust -- [b]even if this is inappropriate[/b]. 
Other things known to increase trust are a "[b]personality[/b]" of the automation that is compatible with the operator's (as e.g. communicated by human-like voice output), the display's [b]pictorial realism[/b], its [b]concreteness[/b], and its "[b]real world feel[/b]" (being more than merely virtual).[/*][/list][/*][*][b]Closed-loop dynamics[/b]: The output of each process influences the input of the other. For example, a low level of initial trust can mean that the automation is not used at all, whereas a higher level of initial trust can lead to subsequent experience and further increased trust. Reliance is an important mediator.[/*][/list] [i]Mistrust[/i]: When trust is [b]not well-calibrated[/b], either too high or too low. [i]Distrust[/i]: The operator has [b]less trust[/b] in the automation than is warranted. Could lead to [b]disuse[/b], using the system less than is warranted. Even rare failures, especially [b]failures that cannot be understood[/b], can lead to a reduction in perceived reliability, and with that, to distrust and disuse. [i]Overtrust[/i]: The operator has [b]more trust[/b] in the automation than is warranted. Long [b]experience[/b] with often highly reliable systems can lead to too high an estimation of its reliability. Overtrust can lead to [b]misuse[/b], or specifically, complacency and automation bias. [i]Complacency[/i]: No one clear definition; e.g. "(self-)satisfaction, little suspicion based on an unjustified assumption of satisfactory system state leading to non-vigilance." Lacking supervision of automated systems leads to [list][*][b]detection[/b] issues: Errors will be overlooked, which may in turn increase complacency, as the number of errors wrongly appears to be low.[/*][*]loss of [b]situation awareness[/b], as the operator is no longer sufficiently vigilant to keep track of all processes.[/*][/list] [i]Automation bias[/i]: "Automated cues as heuristic replacement for more vigilant and complete information search processing": The operator trusts that the automation will always indicate critical situations and [b]no longer surveys the relevant information[/b] himself (error of omission; the operator does not take action because he is not explicitly informed), and/or assumes the automated indications are correct and follows them [b]without verifying them or while ignoring contradictory information[/b] (error of commission; taking action on the basis of a false cue). Particularly relevant to decision support systems. [i]Assimilation bias[/i]: Interpreting other indicators, insofar as they are processed, as being more consistent with the automated information than they really are. [i]Discounting bias[/i]: Discounting information inconsistent with the automated information. [i]Confirmation bias[/i]: Overattending to, or seeking out, consistent information and ignoring other data, or processing new information in a manner that confirms one's preexisting beliefs. Using Automation: [u]Use[/u]: Whether or not automation is used depends on the operator's general attitude towards technology, his trust in the automation, his current workload, the required cognitive overhead, his self-efficacy and situational factors. [u]Misuse[/u]: Using the system more than is warranted. This result of overtrust can lead to e.g. erroneous decision making, use of heuristics, premature closure, automation bias and complacency. [u]Disuse[/u]: Using the system less than is warranted. This result of distrust is caused e.g. 
by general mistrust of the automation's capabilities, general mistrust of (new) technology, and false alarms. [u]Abuse[/u]: Any inappropriate use of the system, or inappropriate implementation of automation, as for example caused by technical or economic factors. Selected Literature and Empirical Findings: [u]General[/u]: MABA-MABA or Abracadabra?: Dekker, S.W.A. & Woods, D.D. (2002). MABA-MABA or Abracadabra? Progress on Human-Automation Co-ordination. [i]Cognition, Technology & Work[/i], 4, 240-244. Dekker and Woods criticise MABA-MABA lists and the TALOA model, saying they lead automation developers to believe that these provide simple answers to the question of what to automate. The "simple answers" provided by these tools are misleading because of the following.[list][*][b]Arbitrary granularity[/b]. The four-stage division of TALOA follows only one of many possible models, all of which are probably incorrect.[/*][*][b]Technology bias through misleading metaphors[/b]. When it's not a model of information processing that determines the categories, the technology itself often does so, as e.g. the original MABA-MABA lists distinguish aspects such as "information capacity" and "computation". These are technology-centred aspects, biased against humans and leaving out typically human abilities such as anticipation, inference, collaboration etc.[/*][*][b]Ignoring qualitative effects[/b]. The "substitution myth" assumes automation can simply replace humans and that's that. However, capitalising on some strength of automation does not replace a human weakness. It creates [i]new[/i] human strengths and weaknesses as it qualitatively changes the human's tasks and environment.[/*][/list] Engineers relying on these models are therefore unaware of the consequences of their decisions, being only familiar with those explicitly mentioned by the models, e.g. complacency, deskilling, etc. Engineers need to be aware that it is not the case that new technology simply changes the role of the operator, who then needs to adapt. New technology changes the entire situation, leading people to adapt the [i]technology[/i] again, and so forth. Engineers need to accept that their technology transforms the workplace and need to learn from these effects. To this end, the authors suggest that automation should be conceived of as a "[b]teamplayer[/b]", capitalising on human strengths such as pattern recognition and prediction, and making communication (programming) easy. Human factors and folk models: Dekker, S. & Hollnagel, E. (2004). Human factors and folk models. [i]Cognition, Technology & Work[/i], 6, 79-86. The authors argue that human factors constructs such as complacency show characteristics of folk models, namely[list][*][b]Explanation by substitution[/b]: Substituting one label for another rather than decomposing a large construct into more measurable specifics. For example, none of the definitions of complacency actually explain it, but "define" it by substituting one label for another (complacency is, for example, "self-satisfaction", "overconfidence", "contentment" etc.), making claims of complacency as a causal factor immune to critique. None of the definitions explain how complacency produces vigilance decrements or leads to a loss of SA. In fact, replacing an arbitrary set of factors by the umbrella term "complacency" represents a significant [b]loss of actual information[/b] and the inverse of scientific progress. A better alternative would be to use better-known and preferably measurable concepts. 
[/*][*][b]Immunity against falsification[/b], due to being underspecified and open to interpretation. For example, if the question "where are we headed?" between pilots is interpreted as loss of SA, this claim is immune to falsification. Current theories of SA are not sufficiently articulated to explain why this question should or should not represent a loss of SA.[/*][*][b]Overgeneralisations[/b]. Overgeneralisations take narrow laboratory findings and apply them uncritically to any broad situation where behavioural particulars bear some resemblance to the phenomenon that was investigated under controlled circumstances. The lack of precision of folk models and their inability to be falsified contribute to this.[/*][/list]Models and theories define what measurements are applicable. Folk models describe measures that reflect hypothetical intermediate "cognitive" states rather than actual performance. Workload and situation awareness are examples. The authors suggest we focus on measuring performance rather than cognition or mental states. Situation Awareness, Mental Workload, and Trust in Automation: Parasuraman, R., Sheridan, T.B. & Wickens, C.D. (2008). Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs. [i]Journal of Cognitive Engineering and Decision Making[/i], 2 (2), 140-160. The authors counter the arguments in the above two papers by DWH. Abracadabra: DWH are said to be uninformed of the current literature, dominant attitudes in the scientific community, and engineering processes. TALOA was never intended as a method, and current "MABA" lists do include the typical human strengths that DWH missed in Fitts' original lists, which are known to be deprecated. There was no advocacy of substitution, only guidance in what to consider. Folk models[list][*][b]Situation awareness[/b] is [b]diagnostic[/b] of different human operator states and therefore [b]prescriptive[/b] as to different remedies. It represents a continuous diagnosis of the state of a dynamic world, and has an [b]objective[/b] truth against which its accuracy can be measured (namely, the actual state of that world). In focusing on performance measures, DWH fail to note the [b]distinction between psychological constructs and performance[/b]. Different displays, automation levels and procedures are required for supporting performance than for supporting SA. SA becomes particularly relevant when things go wrong, i.e. fall outside of normal performance. The authors note that conclusions regarding SA measurement and its implications have been supported by strong empirical research.[/*][*][b]Mental workload[/b]. Workload, like SA, also generates [b]specific diagnoses with prescriptive remedies[/b] for overload and underload. The distinction between construct and performance is again important: Two operators may have the same performance, but markedly different workload (e.g. as available residual attention). Workload has proven relevant to understanding the relation between objective task load and task strategies. In driving, high workload is a good [b]predictor[/b] of a future performance drop, not predictable by current performance measures. The concept of mental workload has been supported by strong science.[/*][*][b]Trust in automation and complacency[/b]. A large body of research has clearly established the importance of trust in the human use of automation. The existence of various definitions for complacency is not a valid argument. 
Empirically, numerous studies indicate that operator monitoring in systems with imperfect automation can be poor. These findings are based on [b]objective[/b] measures.[/*][/list]The authors note that these concepts [i]are[/i] [b]falsifiable[/b], namely [b]in terms of their usefulness in prediction[/b]. Constructs are not statements of fact and can therefore not be falsified in and of themselves. Additionally, the "right or wrong" question may, in the field of human factors, not be as relevant as the question of in what situations the theories can account for what amount of performance variation. In general, DWH are accused of a blame-by-association approach (misapplication of theories by some does not mean the theories themselves are incorrect) and of using strawman arguments. [u]Situation Awareness[/u]: Measurement of Situation Awareness: Endsley, M.R. (1995). Measurement of Situation Awareness in Dynamic Systems. [i]Human Factors[/i], 37 (1), 65-84. Two experiments address the question whether or not SAGAT is a valid measure -- whether or not it indeed measures what it is supposed to measure. Two reasons why it may not are mentioned. [list][*][b]Memory issues[/b]: If the information is processed in a [b]highly automated fashion[/b], it may not enter into consciousness. If it does, [b]short-term memory span[/b] may prevent it from being reported after the simulation has frozen. Even if it enters long-term memory, [b]recall issues[/b] may still prevent it from being retrieved.[/*][*][b]Intrusiveness[/b] of the freeze may alter the operators' behaviour.[/*][/list] The first study deals with the memory issues by, in multiple trials, administering a full battery of SAGAT in random order. Due to the random order, the time at which each question was asked after the freeze varied by up to six minutes. Endsley found that the majority of absolute error scores were within acceptable limits of error (meaning [b]information can indeed be reported[/b]), and that [b]no effect of time on error score was found[/b] (there is no sign of short-term memory decay). To address the technique's potential intrusiveness, during a similar study, the number of freezes (0-3) and their duration (half, one, or two minutes) were varied across trials. [b]No performance difference was registered[/b] in any of the conditions. A number of issues related to these experiments can be identified.[list][*]A substantial number of [b]training missions[/b] in which SAGAT was also administered were flown before the actual trials. This is highly unusual: One does not usually train SAGAT at all. Data on the pilots' scores across missions is not available.[/*][*]The simulation only allowed ten out of twenty-six questions to be verified. These related to [b]variables of high importance[/b] to flight. The fact that these can be reported does not necessarily mean that other parameters can be as well. Additionally, these mostly related to Level 1 SA, meaning these findings cannot necessarily be extended towards other levels of SA.[/*][*]The subjects used in these studies were [b]expert fighter pilots[/b]. The fact that experts are able to accurately report the requested information does not necessarily mean that other subjects are able to do so as well.[/*][*]Endsley set out to investigate whether or not there was an effect, and claims to have [b]proven a negative[/b], which is fundamentally impossible.[/*][/list] [u]Trust[/u]: The Dynamics of Trust: Lewandowsky, S., Mundy, M. & Tan, G.P.A. (2000). The Dynamics of Trust: Comparing Humans to Automation. 
[i]Journal of Experimental Psychology: Applied[/i], 6 (2), 104-123. The authors developed the following tentative framework primarily describing the role of trust and self-confidence in an operator's decision to delegate tasks to an "auxiliary", usually automation. [img]http://files.noctifer.net/dynamicsoftrust.jpg[/img] A trade-off between trust in the auxiliary and self-confidence is said to predict auxiliary use. In case of automation being the auxiliary aid, the ultimate responsibility is felt by the operator to be his and his alone. Especially because of that, faults affecting his performance are thought to affect his self-confidence. In case of the auxiliary being a human co-operator, however, a perceived diffusion of responsibility is hypothesised to render self-confidence more resilient. In a first experiment, subjects supervised a simulated pasteurisation process with (between subjects) either an automated auxiliary aid, or a thought-to-be human auxiliary aid. Subjects had the option to delegate the relevant task either to the auxiliary aid, or operate it manually. Within subjects, operating errors were introduced either (first) during manual control (manual fault) or during automated control (auxiliary fault). Switching to the other control would solve the problem. Analysis focused on differences between human-human and human-automation cooperation situations regarding the following aspects.[list][*]Performance. Faults did reduce performance, but the only significant difference was between pre-fault (where no faults were introduced) and auxiliary fault trials.[/*][*]Allocation strategy. Subjects developed [b]strategies to combat the faults[/b]. In auxiliary fault trials, control was predominantly allocated manually, and vice versa. Additionally, it appeared (not enough subjects for significance) that the subjects' perception of automation is more volatile, indicated by automation trials leading to more "extreme" allocation strategies (higher percentage of either manual or automation control).[/*][*]Subjective ratings and auxiliary use. In the automation condition, [b]periods of manual faults were associated with low self-confidence and high trust, whereas automation faults engendered low trust but high self-confidence[/b]. In the human condition however, [b]self-confidence was found to be fairly stable[/b] across all trials. [b]Trust minus self-confidence appeared to indeed be a good predictor[/b] for auxiliary use, but less so in the human condition, hinting at the relevance of another, as yet unidentified variable.[/*][/list] In a second experiment, this unidentified variable was hypothesised to be the operator's presumed own trustworthiness as perceived by the auxiliary partner. They also introduced risk as an additional independent variable, in the form of process speed (higher speed leading to more performance loss in case of errors). [list][*]Performance. Again, performance declined when faults were introduced. An interaction effect, possibly due to learning as the relevant order was not randomised, was seen in auxiliary faults impairing performance more in slow conditions than in fast ones. No such effect was seen in the human condition.[/*][*]Allocation strategy. Results were as above in low-speed conditions. This [b]pattern was more accentuated in high-speed conditions[/b]: No more than one subject continued to favour using the auxiliary after it developed faults. 
There was no hint of any differences between auxiliary type conditions.[/*][*]Subjective ratings and auxiliary use. Independent of auxiliary type condition, auxiliary faults led to lower trust and higher self-confidence, and manual faults led to higher trust and lower self-confidence. The [b]failure to replicate self-confidence resilience in human conditions[/b] might have been [b]due to the faults appearing more extreme[/b] in this second experiment, causing self-confidence to decline regardless of auxiliary partner. Trust minus self-confidence again was a good predictor of auxiliary use, again a bit better in the automation condition. [b]Presumed trustworthiness declined in manual fault trials. In human conditions, subjects were likely to delegate tasks to their partner when their presumed trustworthiness was low, and vice versa. Presumed trustworthiness played no role, however, in predicting auxiliary use in the automation condition[/b].[/*][/list] [u]Monitoring, Complacency[/u]: Performance Consequences of Automation-Induced "Complacency": Parasuraman, R., Molloy, R. & Singh, I.L. (1993). Performance Consequences of Automation-Induced "Complacency". [i]International Journal of Aviation Psychology[/i], 3 (1), 1-23. Langer's (1989) concept of [b]premature cognitive commitment[/b] describes an attitude that is formed upon a person's first encounter with a system, and reinforced when the device is re-encountered in the same way. Variability in automation reliability, then, may interfere with producing this type of attitude, and with that, reduce complacency. The authors examined the effect of reliability variations on operator detection of automation failures using a flight simulation that included manual tracking and fuel management, and an automated system-monitoring task. Four conditions: constantly low/high reliability, and variable reliability switching every ten minutes, starting low/high. The following hypotheses were investigated.[list][*][b]Complacency is higher in constant reliability conditions: Confirmed[/b], in that detection probability (the assumed measure of complacency) was significantly higher in variable conditions.[/*][*][b]The higher the initial reliability, the higher the complacency: Not confirmed[/b]. No significant detection difference found within constant/variable conditions.[/*][*][b]An operator's trust in and reliance on automation weakens immediately after failure: Partly confirmed.[/b] Detection rate did improve after total automation failure in both conditions, but the constant reliability groups did not reach the same level as the variable reliability groups.[/*][*][b]The above predictions only hold in a multitask environment: Confirmed[/b]. A second experiment was conducted with the same training etc., but where only the previously automated system monitoring task was to be done. Detection rates were uniformly high across groups and sessions.[/*][/list] According to these findings, [b]variable reliability may increase monitoring efficiency[/b]. In practice, this may be achieved by artificially introducing automation failures, which is risky, or by adaptive function allocation. One might ask whether or not detection rate is an appropriate indicator for complacency and whether or not the chosen "high" (87.5%) and "low" (57.25%) reliability figures are acceptable. Context-related Reliability: Bagheri, N. & Jamieson, G.A. (2004). The Impact of Context-related Reliability on Automation Failure Detection and Scanning Behaviour. 
[i]Proceedings of the IEEE International Conference on Systems, Man and Cybernetics[/i], 212-217. Studies generally model automation failures as random, unpredictable events, and neglect to take into account, or [b]inform the operators of, the conditions that might make automation failures more likely[/b] to happen. The authors investigated the effect of providing operators with information about the context-related nature of automation reliability. The same procedure was used as in Parasuraman (1993) above. However, constant-high subjects were informed that automation would operate invariably at "slightly under 100%", constant-low subjects at "slightly more than 50%", and the same terms were used to describe the variable conditions, with the additional rationale that maintenance occurring every 20 minutes would improve reliability. The following effects were found on the dependent variables.[list][*]Performance measures. No significant difference was found between conditions: Detection rate was uniformly high, meaning an improvement compared to Parasuraman (1993). No significant differences were found between other performance measures. [b]Providing information about the automation seemed to enhance participants' detection performance without affecting their performance on other tasks[/b].[/*][*]Mean time between fixations. MTBF in constant high conditions was significantly smaller for those who received context information compared to those who did not. The information, then, did seem to [b]influence monitoring strategy[/b].[/*][*]Subjective trust in the automation in context conditions appeared to be [b]more stable[/b] even when failures were discovered. [/*][/list] It appears [b]increased understanding of the system can decrease complacency[/b]. [u]Decision Support Systems[/u]: Misuse of automated decision aids: Bahner, J.E., Hüper, A. & Manzey, D. (2008). Misuse of automated decision aids: Complacency, automation bias and the impact of training experience. [i]International Journal of Human-Computer Studies[/i], 66, 688-699. The authors investigated complacency effects towards decision aids in process control, and the role of prior experience with automation faults. In a simulated process of providing oxygen to astronauts, an automated system helped to diagnose errors. Initially, all participants were trained in manual control and error diagnosis, being explicitly told what information needs to be checked for which errors (normative model). Later, an automated diagnostic aid was added, and all participants were told that failures might occur in this system, but only half of the participants actually experienced them during the training. In the experimental trials, after nine correct diagnoses, the system gave a faulty one. For two subsequent faults, the system broke down entirely and no diagnostic support was available. Results:[list][*]Correct diagnoses. During those trials where automated diagnoses were correct, the [b]experience group took longer to identify the fault[/b] (time from fault indication to issuing correct repairs) than the information group. The [b]experience group also sampled a higher percentage of the information[/b] that needed to be sampled for each fault (as learnt in the training phase), but neither group sampled all relevant information, i.e. [b]all were complacent to some extent[/b] by this definition.[/*][*]False diagnosis. Only 5 out of 24 followed the false recommendation, i.e. showed a commission error. 
These were almost equally distributed across both groups; [b]experience did not influence this[/b]. These five, compared to the other nineteen participants, however, [b]took almost half as much time to identify failures[/b] in the correct-diagnoses trials, and [b]sampled significantly less information[/b] (relevant and overall). [/*][*]No diagnoses. No significant effects were found. [/*][/list] The study suggested a new way to [b]measure complacency by comparing actual behaviour to a normative model[/b]. All subjects showed complacency to some extent, extending the significance of complacency to decision support systems. Subjects in the [b]information group were more complacent[/b] and took [b]less time to verify diagnoses[/b], hinting at a role of time pressure. Participants who committed a commission error also showed particularly high levels of complacency. The impact of function allocation: Manzey, D., Reichenbach, J. & Onnasch, L. (2008). Performance consequences of automated aids in supervisory control: The impact of function allocation. [i]Proceedings of the 52nd Meeting of the Human Factors and Ergonomics Society[/i], New York, September 2008. Santa Monica: HFES, 297-301. The authors investigate the effects of automated decision aids in terms of performance, automation bias, and deskilling, depending on the level of automation. In a simulated process of providing oxygen to astronauts, an automated system helped to diagnose errors. [b]Information Analysis support[/b] provided just a diagnosis, [b]Action Selection support[/b] provided additional action recommendations, and [b]Action Implementation support[/b] also implemented these actions if the operator so requested. All were informed that the automation was imperfect and its diagnoses should be verified. Additionally, there was a full-manual control group. After training, the first experimental block was manual, the next two were automated without faulty diagnoses, the fourth was automated with an additional misdiagnosis, and the last block was again manual. Effects are discussed on the following aspects.[list][*]Primary task performance. Fault identification time, percentage of correct diagnoses, and out-of-target errors [b]improved in automation blocks, with a significant difference between the low plus middle versus the high level of automation[/b].[/*][*]Secondary task performance. The prospective memory task showed [b]improvement in middle and high support[/b] groups. No differences were found in a secondary reaction time task. [/*][*]Automation bias. Up to half of the participants followed the misdiagnosis in block 4, but [b]no differences were found for the different support types[/b]. A comparison of information sampling behaviour between those who did, and those who did not, commit the error revealed no differences, suggesting the error was not due to a lack of verification, but [b]due to a misperception or discounting of contradictory information[/b]. The effect being independent of level of automation suggests that medium levels of automation do not represent an efficient countermeasure.[/*][*]Return-to-manual performance. No effect was found on identification time or percentage of correct diagnoses. A [b]slight effect[/b] was found in the highest-level automation group performing worse in terms of out-of-target errors than the other two support types.[/*][/list] Compared to a manual control group, [b]support by an automated aid led to an increase in performance (faster and better), dependent on level of automation[/b]. 
Secondary task performance also showed improvement. However, up to half of the participants using the automated aid [b]committed a commission error[/b] upon an automated misdiagnosis, [b]regardless of level[/b]. A weak indication of (selective) [b]deskilling[/b] has also been found. [u]Automation Bias[/u]: Electronic Checklists: Mosier, K.L., Palmer, E.A. & Degani, A. (1992). Electronic Checklists: Implications for Decision Making. [i]Proceedings of the Human Factors Society 36th Annual Meeting[/i], 7-11. The improper use of checklists has been cited as a factor in several aircraft accidents. Solutions to checklist problems include the creation of electronic checklist systems. The authors compare the paper checklist to two types of electronic checklist systems. The automatic-sensed checklist automatically checked all items that the system could sense were completed. The only manual control was to confirm the completed checklist at the end. The manual-sensed checklist required that the pilots manually touch the display for each item, which would only then be sensed and coloured accordingly (yellow for not completed, green for completed). Another independent variable was the instruction to perform immediate action items, i.e. the first few items of certain emergency checklists, either from memory or by following these items from the checklist. Performing critical actions from memory allows a faster response, but also decreases the amount of thought that precedes the action; time savings gained may be overshadowed by an increase in errors. The participating crews were brought into an ambiguous situation, needing to determine whether or not the #1 engine was on fire. Based on various cues, shutting down either engine could be "justified" -- #1 recovered from salient and serious warnings to stable, albeit somewhat reduced thrust, whereas #2 did not recover but did not have any salient initial warnings. The "engine fire" checklists recommended shutting down the #1 engine, although the best choice was to leave both running. Sample size was too small for statistical significance, but the following data is reported. [b]Crews that shut down #1 (bad) tended to be those with electronic checklists initiating the shutdown from memory. Crews that left both engines running (optimal) tended to be those whose shutdown procedures were less automated[/b] -- i.e. traditional paper checklists and/or not from memory. No crew shut down #2. Apparently, [b]the salience of the #1 cues overrode the less salient but more informative #2 cues[/b]. The number of [b]informational items discussed by crews decreased as the checklist became more automated[/b]. The authors mention the [b]importance of salience[/b] and the risk of perceptual tunneling when an item is too salient, especially when combined with stress, time pressure and information overload. Checklists can serve as a means to [b]focus attention where it is needed[/b]. However, the checklist itself may then [b]become the focus of attention[/b] and encourage crews not to conduct their own system checks, but rather rely on the checklist as primary system indicator (as with automatically sensed checks, or e.g. "already-checked items must be okay"). Accountability: Mosier, K.L., Skitka, L.J., Heers, S. & Burdick, M. (1998). Automation Bias: Decision Making and Performance in High-Tech Cockpits. [i]The International Journal of Aviation Psychology[/i], 8 (1), 47-63. 
Accountability is known to be able to mitigate classical decision-making biases and increase the tendency to use all available information for situation assessment. The authors investigated whether or not these results would extend to experienced pilots using automation in the cockpit. In the accountable condition, pilots were told they would be asked to justify their performance and strategies in the use of automation afterwards. The nonaccountable group was only told general information, and informed that due to a malfunction, no performance data could be recorded. The primary task was flying two legs; the secondary was a tracking task. The primary task involved automated loading of new flight parameters given by ATC. The secondary task was fully automated in the first leg; in the second leg, it was manual above 5000 ft. Omission errors could be committed as primary task automation failed thrice, and secondary automation failed once during the first leg. Additionally, a false automated warning of an engine fire (contradicted by other indicators pointed out to the pilots during training) created the opportunity for a commission error. A checklist would recommend shutting down the engine. 5 out of 25 pilots made no errors. About [b]half committed errors related to automation failures most critical to flight[/b]; none of them failed to notice the irrelevant secondary task automation failure. There was no difference between the two groups, but [b]increased actual flight experience decreased the likelihood of catching the automation failures[/b]. Separating those who committed 2-3 errors from those who committed 0-1 showed that [b]those who felt more accountable (regardless of experimental group) were less likely to make omission errors[/b]. All pilots who received the engine warning ultimately shut down the engine, making a commission error. During the debriefing, 67% of them reported a [b]phantom memory[/b] of at least one additional cue that was not actually present. Because the results match those of an earlier, similar study with non-experts, the authors conclude [b]expertise does not protect against automation bias[/b]. In fact, expertise was related to a greater tendency to use automated cues. Externally imposed accountability was not found to be a factor; however, a subjectively reported higher [b]internalised sense of accountability[/b], as gathered from the debriefing, did seem relevant. Advantages of Teamwork: Mosier, K.L., Skitka, L.J., Dunbar, M. & McDonnell, L. (2001). Aircrews and Automation Bias: The Advantages of Teamwork? [i]The International Journal of Aviation Psychology[/i], 11 (1), 1-14. From social psychology, a number of effects are known to be attributable to the presence of others.[list][*][b]Social facilitation[/b] is the tendency for people to do better on simple tasks when in the presence of others. Complex tasks, however, show the opposite effect. [b]Drive theory[/b] explains this by a higher arousal in the presence of others, causing the individual to enact behaviours that form [b]dominant responses[/b]. If the dominant response is correct, performance increases.[/*][*][b]Social loafing[/b] is the tendency of individuals to exert less effort when participating as a member of a group than when alone, e.g. 
caused by [b]deindividuation[/b] (dissociation from individual achievement and decrease of personal accountability).[/*][/list] The multilevel theory of team decision making says that several constructs influence decision-making accuracy in hierarchical teams.[list][*][b]Team informity[/b]: The degree to which the team as a whole is aware of all the relevant cues or information.[/*][*][b]Staff validity[/b]: The degree to which each member of the team can produce valid judgements on the decision object.[/*][*][b]Hierarchical sensitivity[/b]: The degree to which the team leader effectively weights team members' judgements in arriving at the team's decision.[/*][/list]The level of autonomy and authority of modern automation systems may give them the character of independent agents or team members (influencing staff validity, even hierarchical sensitivity), rather than merely sources of cues and information (team informity). The authors investigated the effect of the presence of a second crewmember on automation bias and associated errors. A previous study with students found that having a second crewmember, being trained to verify automated directives, and being visually prompted to verify automated directives did not have an effect on errors. Training participants on the phenomenon of automation bias, however, produced a significant reduction. The current study investigated whether or not these findings could be extended towards professional crews. Independent variables: Crew size (alone or with another person), training (systems-only, additional emphasis on verification, additional explanation of biases and errors), display design (receiving a prompt to verify automated functioning or not). Dependent variables: The number of omission and commission errors, as caused by failed automated loading of new flight directives and a false automated warning of an engine fire contradicted by other parameters (touching the warning screen brought up a checklist recommending engine shutdown). [b]No significant effect was found of crew size, training, or display design on omission errors[/b]. 21% of automation (omission) errors were verbally acknowledged without corrective actions being taken (possibly due to a simulator effect, i.e. no real risk was present, or because crews are wary of correcting automation and instead question their own interpretation, or because they waited for a criticality, expecting that otherwise the error would correct itself). All solo-fliers and all but two crews (out of twenty) shut down the engine. In the debriefing, phantom memories were reported. [b]The presence of a second crewmember did not reduce automation bias and associated errors, nor did training or display prompts[/b]. The authors assume individual characteristics such as trust in automation, self-confidence, and internalised accountability play a more important role. Automation Bias in Mammography: Alberdi, E., Povyakalo, A., Strigini, L. & Ayton, P. (2004). Effects of Incorrect Computer-aided Detection (CAD) Output on Human Decision-making in Mammography. [i]Academic Radiology[/i], 11, 909-919. CAD searches for typical patterns and highlights these to support human decision making. [b]Sensitivity[/b] is the number of true positives over the number of actual positives (true positives + false negatives); [b]specificity[/b] is the number of true negatives over the number of actual negatives (true negatives + false positives). 
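A minimal arithmetic sketch of these two definitions; the confusion-matrix counts below are hypothetical, chosen only for illustration.
[code]
# Sensitivity and specificity from a hypothetical confusion matrix
# for a CAD-style detector (counts are made up for illustration).
true_positives, false_negatives = 80, 20    # actual cancer cases: 100
true_negatives, false_positives = 150, 50   # actual no-cancer cases: 200

sensitivity = true_positives / (true_positives + false_negatives)   # 0.80
specificity = true_negatives / (true_negatives + false_positives)   # 0.75

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
[/code]
Note that neither number depends on how common cancer is in the sample; that base rate only enters once predictive values (PPV/NPV, see the alarm literature below) are computed.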
The authors investigated the effects of incorrect CAD output on the reliability of the decisions of human users. Three studies are described. In the base study, groups of more and less qualified practitioners indicated on mammograms which areas were indicative of cancer, and whether or not the patient should (probably) return. All readers reviewed each case twice, once with and once without CAD, with a week in between. In the CAD condition they were instructed to first look at the films themselves, and were told about possible CAD errors. The goal of the first follow-up study was to further investigate the human response to CAD's false negatives (either by failing to place a prompt or by placing prompts away from the actual cancer area). Only 16 out of 180 cases of the base study were false negatives, so a new data set was compiled. There was no no-CAD condition. The second follow-up study had only a no-CAD condition, for comparison. The base study showed that in conditions [b]with CAD, readers were more confident in their decision[/b] ("no recall" instead of "discuss but probably no recall") than for the same cases without CAD. The reader sensitivity in this study was 76% in the CAD condition. In study 1, however, this was only 52%. [b]Where CAD incorrectly marked, or did not mark, cancer cases, the percentage of correct decisions was low[/b]; it was high in other cases. In study 2, sensitivity was still low, but significant differences were found between studies 1 and 2 in the percentage of correct decisions in the unmarked and incorrectly marked cases, with worse performance in study 1 (the CAD condition). Analyses suggest that the output of the CAD tool, in particular [b]the absence of prompts, might have been used by the readers as reassurance for "no recall" decisions for no-cancer cases[/b]. One could argue that [b]readers tended to assume that the absence of prompting was a strong indication of no cancer, paying less attention to these cases (error of omission, reliance)[/b].
Automation Bias and Reader Effectiveness in Mammography: Alberdi, E., Povyakalo, A.A., Strigini, L., Ayton, P. & Given-Wilson, R. (2008). CAD in mammography: lesion-level vs. case-level analysis of the effects of prompts on human decisions. [i]International Journal of Computer Assisted Radiology and Surgery[/i], 3 (1-2), 115-122. Previous studies have used data at the level of cases (e.g. recall/no recall). The current study used data at the level of mammographic features, allowing them to e.g. identify recalls based on mammographic features that CAD had not prompted. Additionally, differences between readers are investigated, as the absence of significant effects in other studies may be caused by a balancing effect. The data used were, for each mammogram, the areas known to be cancer regions, the areas marked by CAD, and the areas marked by readers. Additionally, the readers indicated the type of abnormality they spotted, their degree of suspicion for each mark, and their recall/no recall decision. Mammograms of 45 "non-obvious" cancer cases were used. The following analyses were conducted.[list][*]Recalls for non-cancer areas. False-target recalls are recalls based on erroneous features, as opposed to true-target recalls. 13.5% of recall decisions were false-target recalls in both the CAD and no-CAD conditions, with some overlap of cases between conditions.[/*][*]Effects of prompts on feature classification.
[b]The lack of a correct CAD prompt significantly decreased the probability of a reader marking a cancer area[/b] at all, and the probability of a reader indicating the area as suspicious (>3). [b]The presence of a prompt had significant positive effects[/b] on both measures.[/*][*]Readers' reactions to prompts. [b]Prompts increased the probability of a reader marking that feature[/b]; false positive prompts even more so than true positive prompts. The same effects were found for the probability of marking the feature as suspicious.[/*][*]Predictors of reactions to prompts. Analysis suggests that a [b]CAD prompt significantly reduces[/b], by up to 10%, [b]the probability of a false negative error for the less effective readers[/b] on moderately difficult-to-detect features. For the 10-20% [b]most effective readers[/b], there is a [b]significant negative estimated effect[/b], i.e. an increase in the probability of false negative errors on features of moderate difficulty.[/*][/list]
[u]Alarms[/u]: Why Better Operators Receive Worse Warnings: Meyer, J. & Bitan, Y. (2002). Why Better Operators Receive Worse Warnings. [i]Human Factors[/i], 44 (3), 343-353. The authors suggest that the occurrence of warnings in complex systems often depends on the operators' characteristics, and that the diagnostic value of a warning decreases for better operators. An informative way to evaluate a system is in terms of [b]positive predictive value[/b] and [b]negative predictive value[/b] (the percentage of true positives/negatives out of all indicated positives/negatives). [b]Expert operators are likely to decrease the probability of a failure, necessarily also decreasing the PPV and increasing the NPV[/b] (see the numeric sketch after this study summary). In extreme cases, all failures are prevented by the operator, and the only alarms given are false positives. This was experimentally tested. Participants had to monitor three stations, each reflecting a numerical value that diminished over time at a different rate of change. To see the value of a station, it had to be inspected; only one station could be inspected at a time. Upon inspection, the value could be reset. Inspection cost 5 points, intervention 20. Participants lost points for negative station values and gained points for positive station values. Optimal scheduling required intervention every 22, 33 and 66 seconds for the three stations (order randomised). A warning system indicated whether a station value was positive or negative, with some probability of false alarms and misses. The warning systems differed across conditions in their sensitivity and response criteria. PPV was higher for more sensitive and more cautious warning systems, as is to be expected. Additionally, PPV decreased over subsequent trials purely as the result of an improvement in operator performance, which reduced the proportion of periods with negative values. Similarly, NPV increased over trials.
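A hedged numeric sketch of the base-rate argument above (the numbers are invented, not the experiment's data): holding the warning system's sensitivity and specificity fixed, a better operator who lowers the failure probability also lowers the system's PPV and raises its NPV.
[code]
# Sketch with invented numbers: a fixed warning system (sensitivity 0.9,
# specificity 0.9) monitored by a poorer vs. a better operator. The better
# operator prevents more failures, so the failure base rate p(F) is lower,
# which by Bayes' rule lowers PPV and raises NPV.

def ppv_npv(p_failure: float, sens: float = 0.9, spec: float = 0.9):
    p_alarm = sens * p_failure + (1 - spec) * (1 - p_failure)
    ppv = sens * p_failure / p_alarm              # P(failure | alarm)
    npv = spec * (1 - p_failure) / (1 - p_alarm)  # P(no failure | no alarm)
    return ppv, npv

print(ppv_npv(p_failure=0.30))  # poorer operator: PPV ~ 0.79, NPV ~ 0.95
print(ppv_npv(p_failure=0.05))  # better operator: PPV ~ 0.32, NPV ~ 0.99
[/code]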
On the Independence of Compliance and Reliance: Dixon, S.R., Wickens, C.D. & McCarley, J.S. (2007). On the Independence of Compliance and Reliance: Are Automation False Alarms Worse Than Misses? [i]Human Factors[/i], 49 (4), 564-572. Previously, it had been implied that compliance and reliance are separate, possibly even independent constructs -- that an increase in false alarms should affect only compliance and an increase in misses should affect only reliance. Participants performed two tasks concurrently: a continuous compensatory tracking task and a cognitively demanding system monitoring task. The second task was performed either unaided or with perfectly reliable, FA-prone, or miss-prone automation. The authors hypothesised that[list][*]perfect automation would benefit both tasks;[/*][*][b]miss-prone automation would reduce reliance[/b], shift attention away from the tracking task to catch misses, and therefore harm the tracking task;[/*][*][b]an increase in misses should not affect compliance[/b] measures (speed of response to an alert);[/*][*][b]FA-prone automation should reduce compliance[/b] and therefore harm the system monitoring task;[/*][*][b]FAs should also reduce reliance[/b] and therefore harm the tracking task.[/*][/list]All hypotheses were confirmed. [b]Compliance and reliance are not entirely independent, and FAs are more "damaging" than misses in that they decrease both reliance and compliance[/b].
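As a rough, hypothetical sketch of how the two constructs can be operationalised separately from the same interaction data (the event log, field names, and simple rate measures below are invented for illustration and are not the measures used by Dixon et al.): compliance is estimated from behaviour when the automation raises an alert, reliance from behaviour while it stays silent.
[code]
# Hypothetical sketch: separating compliance from reliance in an event log.
# Each trial records whether an alert was raised, whether the operator
# responded to it, and whether they cross-checked the raw system displays.

trials = [
    {"alert": True,  "responded": True,  "checked_raw": False},
    {"alert": True,  "responded": False, "checked_raw": False},
    {"alert": False, "responded": False, "checked_raw": True},
    {"alert": False, "responded": False, "checked_raw": False},
]

alerted = [t for t in trials if t["alert"]]
silent  = [t for t in trials if not t["alert"]]

# Compliance: how readily the operator acts when the automation says "act".
compliance_rate = sum(t["responded"] for t in alerted) / len(alerted)

# Reliance: how readily the operator leaves the raw displays alone
# (does not cross-check) while the automation stays silent.
reliance_rate = sum(not t["checked_raw"] for t in silent) / len(silent)

print(compliance_rate, reliance_rate)  # 0.5 0.5
[/code]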