Abstract
The necessity of considering human factors in the early phases of intelligent systems engineering is increasing in tandem with overall system complexity and size. With this increased need, the capabilities and limitations of human-systems integration (HSI) are becoming more of a focus in the design of intelligent systems. Systems have grown progressively more complex as they are required to perform complicated tasks in increasingly complex environments. However, current system design and engineering processes often address human-intelligent systems integration insufficiently or too late in the system life cycle. As a result, emergent problems related to human factors arise late in the system life cycle, often even after system deployment, resulting in additional cost, time, and liability. This research proposes a method for including human factors early in and throughout the systems engineering process by utilizing use case definitions and associated diagrams that show relationships with external actors, including humans. Human performance models, task analysis, and Goals, Operators, Methods, and Selection rules (GOMS) models produce quantitative metrics for human factors that can be included in the system design process. System use case definitions are a natural pathway for the inclusion of human factors early in and throughout the system life cycle, enabling the consideration of quantitative human-systems integration metrics in the system design process.
Introduction
Including human factors in system design and requirements is a challenge for most engineering teams and is frequently overlooked until late in the engineering process (Madni 2009). This is especially true with the increase in the development of intelligent systems. The spectrum of work done by the human, the work done by the intelligent system, and the interactions between the two leads to many emergent system properties (Illankoon et al. 2019). These emergent properties result in problems that often arise after system deployment and can incur additional cost, time, and liability (Naranji et al. 2015).
Model-based systems engineering (MBSE) has been proposed by several different authors as a method for resolving some of these crucial issues facing the design of systems with a heavy human component (Watson et al. 2017a, b; Handley 2012). However, there are several challenges to an actual implementation of MBSE for these systems. Traditionally, systems engineering as a field has been less focused on the human component of the system, preferring to treat the human as an external actor rather than an integral part of the system (Madni 2009; Delligatti 2013). As a result, human considerations tend to be integrated into the system design late in the system life cycle at great expense and risk (Algarin 2016). Often, the resulting solution is to adjust the training requirements of the user, forcing the human to adapt to the system and resulting in decreased performance and increased cost. Additionally, the need for these adjustments is often not realized until after a critical system failure that risks life, cost, and reputation; recent examples are the Boeing 737 Max crashes (Johnston and Harris 2019). Properly integrating human factors into the beginning of the systems engineering process would result in designing the system for human needs in the same manner as it is designed for other stakeholder requirements. This is an improvement over the current practice of training or otherwise adapting the human to the system after much of it has been designed or built.
The Systems Modeling Language (SysML) is the language of MBSE, and as such, it provides an effective means to communicate system design elements in a well-understood manner (Walden et al. 2015). However, a common modeling language is only one of several parts necessary to successfully implement MBSE (Delligatti 2013). Utilizing SysML is an important first step towards an MBSE approach to HSI, but it is only part of the potential value that fully implementing MBSE techniques might bring. Without the full utilization of MBSE techniques, human-systems integration will still face its current problem: a persistent difficulty in incorporating human-systems considerations early in and throughout the system design process. This research proposes a systems engineering modeling methodology based on use case definitions as a natural architectural pathway for the inclusion of human factors early in and throughout the system life cycle. Implementing this methodology enables the consideration of quantitative HSI metrics in the system design process.
Previous research
The need to better incorporate human interaction with systems is a well-documented problem, with examples including the famous Patriot friendly fire accidents in 2003, Three Mile Island, and other major disasters (Madni 2009; Sheridan 2008). However, there is still no detailed strategy for including the human element in the system design. As pointed out in Algarin (2016), the absence of HSI and its terminology from a program is correlated with program failure (demonstrated on DoD projects). Madni (2009) encourages specific research focus areas, including the need for more cost-effective human performance modeling early in the system life cycle. If HSI is brought forward in the system design process, it allows analysis of how the system design impacts the human element and vice versa. There is a clear need for more cost-effective human performance modeling at the beginning of systems development and for an architecture that can support analysis of changes in critical human parameters. While some fields within systems engineering attempt to address this (for example, agile systems engineering), even these approaches tend to treat the user as an external actor to the system, rather than a key and integrated component of a successful system (Douglass 2016).
HSI metrics have been well developed, but much of the work in the HSI community (such as user-centered design) involves either human in the loop (HITL) testing or less quantitative models of human behavior (Madni 2009). For example, Branstrom et al. (2019) used HITL testing to determine the linkage between a lack of reliability in intelligent systems and physiological stress on the human. They also link this stress to the cognitive workload on the human and the resulting effectiveness. This testing was performed using an example autocorrect system (Fig. 1) built for that purpose.
Fig. 1 Example autocorrect system (reproduced with permission from Branstrom et al. (2019))
When the available metrics involve expensive (either in time or money) testing or are difficult to quantify, it becomes difficult (though not impossible) for a systems engineer to properly account for human factors early in the system design. Instead, these more expensive tests and metrics can be supplemented (not replaced) using lower fidelity representations to produce quantifiable metrics (Evans and Karwowski 1986). These models can be linked to the system model created with MBSE to provide the systems engineer with a faster and more frequent consideration of the full system performance.
With MBSE, the primary artifact of the life cycle process is a single coherent model as opposed to a series of document-based products (e.g., a requirements specification) (Friedenthal et al. 2008; Douglass 2016; Delligatti 2013). These documents may be produced using MBSE, but they are not the primary focus. MBSE is composed of three primary pillars: a language, a modeling methodology, and a modeling tool (Delligatti 2013). In modern MBSE, the language used is SysML (Walden et al. 2015). The modeling method is a “documented set of design tasks that a modeling team performs to create a system model” (Delligatti 2013). The final MBSE pillar, the modeling tool, provides a means to build a system model that is consistent with the modeling language. These three pillars, when combined, provide powerful benefits to the system modeler—especially in determining the impact of the inevitable changes in a system design (Douglass 2016).
While MBSE is a useful technique, typical systems engineering techniques tend to treat the human user of the system as an external actor rather than a detailed portion of the system that also needs to be incorporated in the system design (Delligatti 2013; Madni 2009). This tendency often results in the human factor not being considered until later in the system design process and contributes to systems engineering's focus on the technical aspects of the system that are easy to represent quantitatively. A significant amount of work has gone into representing human interaction with the system in a manner more visible to the systems engineers designing the system. There has been some promising work in human task analysis; for example, an open source GOMS analysis tool called Cogulator has been used to provide analysis of alternatives (Stanley et al. 2017). Similarly, the work done by Watson et al. (2017a, b) uses SysML to better describe HSI and translates SysML into GOMS using IMPRINT in order to better understand the expected human performance for a designed system. However, as implemented, this consideration of the human element occurs later in the system development life cycle and does not have the opportunity to influence system design from the beginning to better accommodate the human element. As a result, this modeling information is used to drive training requirements for the human instead of influencing the initial system design (Watson et al. 2017a, b).
While describing these activities using SysML is an important step in integrating human factors with MBSE, it is not sufficient by itself. As mentioned above, a modeling language is only one of three pillars for MBSE. Another pillar, the modeling tool, can be satisfied by an existing system modeling tool (e.g., Cameo Systems Modeler™). However, the second pillar, a modeling methodology, requires a deliberate roadmap to fully integrate human factors in the system model. Without such a roadmap, the incorporation of human factors remains incomplete and is unable to fully influence the system design or fully utilize the value available in MBSE (Delligatti 2013). Instead, much of the literature refers to influencing training requirements, fitting the human to the system as opposed to designing the system to work with the human from the beginning.
The lack of inclusion of human factors in a system design is a well-documented problem, but there is a persistent need for a comprehensive strategy to account for HSI early in the system design.
Supporting methodologies
This section discusses several of the supporting techniques that contribute to the final proposed methodology discussed in Section 4. Broadly, these methodologies are drawn from model-based systems engineering and GOMS modeling.
Model-based systems engineering
Most MBSE approaches utilize SysML. The primary artifacts created within SysML are diagrams. There are several types of diagrams including use case diagrams, block definition diagrams, and activity diagrams (Weilkiens 2008; Delligatti 2013). For the purposes of this paper, a few of the more relevant diagrams and their purposes will be discussed here.
Use case diagrams
Use case diagrams (UCDs) provide a method to represent how an outside actor or user interacts with the system in order to achieve a goal (or use case) (Delligatti 2013; Douglass 2016). A UCD can contain a description of more than one use case, typically grouping three to five similar or related use cases together on one diagram.
The purpose of a UCD is not to unambiguously specify exactly how a system accomplishes each use case, but rather to provide a simple, high-level representation of what the system is designed to accomplish or be used for. A large, complex system may have thousands of use cases, only a subset of which may be captured in UCDs (Douglass 2016).
A UCD shows the interaction between actors (systems or people outside of the system who interact with or use the system) and the system itself (Delligatti 2013). From a human-systems integration perspective, this makes UCDs exceptionally useful, as a human actor’s interaction with the system will be documented in a UCD. Additionally, humans who act as part of the system may also be captured in a use case diagram—although it may be in a UCD describing behavior of a lower level subsystem.
Behavioral diagrams
Once a UCD has been defined, the next step is to elaborate on the behavior required for the system to perform the use cases within the UCD. Within SysML, there are three types of behavioral diagrams that can be used for this purpose (Delligatti 2013):
Activity diagrams
Sequence diagrams
State machine diagrams
Any or all of these three diagram types may be used to define the system behavior.
The purpose of these diagrams is to provide iteratively more detail about the system processes. This elaboration continues until the requisite level of system detail is captured. The use of these diagrams allows the system designer to start with a simple, user-based view of the system (i.e., the UCD) and to continue elaborating all of the required system behavior in new diagrams. This allows each diagram to be only as complex as necessary, while still capturing all the requisite behavior.
Block definition diagrams
Block definition diagrams (BDDs) define blocks, which provide a means to describe components, elements, or entities used within or interacting with the system (Delligatti 2013). Blocks are defined along with their properties, behaviors, and interfaces. These blocks can describe components of the system and can also be used to define additional blocks (a new block type can inherit the parameters of a parent block) (Friedenthal et al. 2008). Blocks define objects and types within a system and can be used within other diagrams, such as an activity diagram.
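As a loose analogy (illustrative only; the class names and value properties below are hypothetical and not drawn from any SysML model), block specialization behaves much like class inheritance in an object-oriented language, with a child block inheriting the parent's properties and adding or overriding its own:

```python
from dataclasses import dataclass

# Illustrative analogy only: SysML-style blocks rendered as Python classes.
# The block names and value properties are hypothetical.

@dataclass
class User:
    """Parent block: properties common to any user of the system."""
    skill_level: str = "average"
    typing_speed_wpm: float = 40.0

@dataclass
class ExpertUser(User):
    """Specialized block: inherits the parent's properties and overrides
    or extends them with its own."""
    skill_level: str = "expert"
    typing_speed_wpm: float = 70.0
    uses_keyboard_shortcuts: bool = True

if __name__ == "__main__":
    print(ExpertUser())  # inherited and overridden properties are both present
```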
External models and simulations
Most MBSE tools (e.g., Cameo, Core, Rhapsody) have the ability to utilize external models or simulation tools (“Integration with External Evaluators—Cameo Simulation Toolkit 18.5—Documentation” n.d.; “Integrating Rational Rhapsody with Other Design Tools” n.d.). For example, activity diagrams may make calls to MATLAB, C++, or other tools with plugins for integration. These techniques allow a tool designed to model human-systems integration to be fully integrated with the system model. This integration is crucial to a full MBSE approach to system design.
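As a sketch of what the external-model side of such an integration might look like (the tool-specific plugin wiring is not shown, and the function name, parameters, and rate constants are illustrative assumptions), the external model can simply be a function that the modeling tool calls with values drawn from the system model and whose result flows back into the simulation:

```python
# Hypothetical external evaluator that an activity diagram action might invoke
# during model execution. The plugin mechanism itself is tool-specific and is
# not shown; parameter names and rate constants are illustrative assumptions.

def correction_time_s(words_to_review: int, typing_speed_wpm: float) -> float:
    """Rough estimate of the time for a user to re-read a sentence and retype
    one word, returned to the simulation as a single numeric value."""
    reading_rate_wps = 3.5                 # assumed words read per second
    review_time = words_to_review / reading_rate_wps
    retype_time = 60.0 / typing_speed_wpm  # seconds to retype a single word
    return review_time + retype_time

if __name__ == "__main__":
    # In practice these argument values would be pulled from the system model.
    print(f"{correction_time_s(words_to_review=12, typing_speed_wpm=40):.1f} s")
```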
GOMS modeling
The concept of GOMS was first introduced by Card, Moran, and Newell in their book The Psychology of Human-Computer Interaction (1983). They describe the user's cognitive structure in terms of four components: a set of Goals, a set of Operators, a set of Methods for achieving the goals, and a set of Selection rules for deciding between methods. Together, these four components make up the GOMS model.
Goals: Symbolic structures that define a desired state to be achieved and point to potential methods for achieving it.
Operators: Elementary perceptual, motor, or cognitive acts required to accomplish desired changes in the task environment or the user's mental state.
Methods: Conditional sequences of operators and goals, involving tests on the user's immediate memory and the task environment, that describe how a goal is accomplished.
Selection rules: Straightforward rules for deciding between alternative methods for accomplishing a goal.
A task that a human must accomplish can be broken down into these components, decomposing the task into subtasks (“Cogulator: A Cognitive Calculator” n.d.). These subtasks all correspond to a set of customizable metrics that can be used to calculate the estimated time and workload of the full task being analyzed (Estes 2015).
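A minimal sketch of this decomposition follows; the operator names, durations, and workload weights are assumptions made for illustration rather than Cogulator's built-in values.

```python
from dataclasses import dataclass

# Illustrative GOMS-style operator table. The durations and workload weights
# are assumptions for this sketch, not published GOMS/KLM or Cogulator values.

@dataclass(frozen=True)
class Operator:
    name: str
    time_s: float    # estimated execution time in seconds
    workload: float  # arbitrary workload units

OPERATORS = {
    "look":      Operator("look", 0.6, 1.0),
    "think":     Operator("think", 1.2, 3.0),
    "point":     Operator("point", 1.1, 1.0),
    "keystroke": Operator("keystroke", 0.28, 0.5),
}

def analyze(task: list[str]) -> tuple[float, float]:
    """Sum per-operator estimates into a task time (s) and a mean workload."""
    ops = [OPERATORS[name] for name in task]
    total_time = sum(op.time_s for op in ops)
    mean_workload = sum(op.workload for op in ops) / len(ops)
    return total_time, mean_workload

# A task decomposed into subtasks expressed as operators, then analyzed:
review_one_suggestion = ["look", "think", "look", "think", "point", "keystroke"]
print(analyze(review_one_suggestion))  # -> (4.98, ~1.58) with these assumptions
```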
GOMS tools
Cogulator was used to execute the GOMS modeling for this research, providing the timing, memory, and workload analyses demonstrated in the following sections. Cogulator was chosen in part because it provides a simple human performance calculator that can approximate human task time and difficulty, including workload and memory estimates, and because it is open source, flexible, and easily accessible (“Cogulator/Cogulator” n.d.). However, another GOMS tool, such as IMPRINT, could easily be used in place of Cogulator, as demonstrated in the work by Watson et al. (2017a, b).
Proposed use case methodology
The following proposed method, use case–based design, is the focus of this research and is built upon the supporting methodologies in Section 3. While both MBSE and GOMS modeling are very useful in their own areas, there is a demonstrated need within the systems engineering community to integrate the type of user-oriented consideration that GOMS provides. The methodology presented here combines techniques from MBSE and GOMS to make designing a system with the human user as a key component easier and more intuitive for a systems engineer. By utilizing GOMS modeling (for example, using a tool like Cogulator), systems engineers can incorporate human factors throughout their system design with quantifiable metrics that can be used during the entire design process.
The following steps are meant to be a general methodology, not an exhaustive list. While the steps are listed in an order, many of them (stakeholder engagement, for example) do not have definitive start and stop points; instead, they should continue throughout and beyond the design process. The following sections describe each step in the process.
The use case–based design process:
Stakeholder engagement
Use case design
Requirement generation
Use case behavior elaboration
Continue elaboration until needed detail level is reached
Evaluate system design
Make system modifications as necessary
Employ human in the loop testing of the system as the design is determined
Stakeholder engagement
Stakeholder engagement, including engagement with future or current users, management, and customers, is a vital first step. Stakeholders help determine what needs to be built, requirements for it to be useful, desired functionality, the current state, and much more. While stakeholder engagement is listed first here, it should not end when moving on to the next stage; instead, it should persist throughout the system’s life cycle to continue getting feedback from the stakeholders as the system progresses (Walden et al. 2015).
Use case design
After sufficient stakeholder engagement, the system designers can begin to determine some of the high-level goals of the system. These can be captured in several places, but organizing them as use cases provides a simple, easily understood method to communicate and document them.
System requirements
Modeling the human as a part of the system allows human performance requirements (e.g., regarding workload on the human) to be included as part of the system requirements. Using parametric models for the human allows these requirements to be continuously verified as the system is being designed, in the same way that other system requirements are verified. This continuous, model-based testing is done in addition to standard testing with a human in the loop but can be performed more frequently, at lower cost, and with fewer time constraints.
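One way to picture this continuous, model-based check is as an automated test that re-runs whenever the system model changes; in the sketch below, the estimates are assumed to come from a GOMS analysis of the current design, the 20 s limit anticipates the example stakeholder requirement used later in the paper, and the workload limit and example values are assumptions.

```python
# Sketch of a continuous, model-based requirement check intended to re-run on
# every design iteration. The estimates would come from a GOMS analysis of the
# current design; the workload limit is an assumption, while the 20 s limit
# mirrors the example stakeholder requirement discussed later in the paper.

TIME_LIMIT_S = 20.0   # e.g., an average user corrects an error in under 20 s
WORKLOAD_LIMIT = 3.0  # assumed limit on estimated mental workload

def human_requirements_met(estimated_time_s: float, estimated_workload: float) -> bool:
    """Return True if the current design's human-performance estimates
    satisfy the human-related system requirements."""
    return estimated_time_s <= TIME_LIMIT_S and estimated_workload <= WORKLOAD_LIMIT

# Fed with estimates produced for the current design iteration:
assert human_requirements_met(estimated_time_s=18.0, estimated_workload=2.5)
```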
Use case behavior elaboration
As the system design progresses, elements are defined to elaborate on the use cases. This stage is where the bulk of the system design work takes place. In order to elaborate on the system, at a minimum, block definition diagrams (as described in Section 3.1.3) and behavior diagrams (as described in Sections 3.1.2–3.1.4) are needed. These provide the ability to describe the system in terms of its components and their behaviors.
Continue elaboration
The use case behavior elaboration step provides a rough idea of how a use case will be carried out, but not yet at a level of detail sufficient to complete the system design. To provide this additional detail, the system is iteratively decomposed until enough detail has been captured.
Evaluate system design
As the system design process continues, it is necessary to test the model to ensure that it performs as designed and that the design meets the stakeholders' intent. There are many ways to test the model itself for errors, many of which are built into the MBSE tools. A more important evaluation for the focus of this paper is whether the model incorporates the human component as designed. For this, once the GOMS-level design has taken place, a tool such as Cogulator (as demonstrated in Section 5.6) can be used.
Many different design choices will be required when designing an intelligent system. Good systems engineering promotes postponing those decisions until required—leaving the system design space as open as possible (McKenney et al. 2011). This allows for the system designer to perform trade space analysis or an analysis of alternatives with as much information about the system as possible, but before the designer is locked into a particular configuration.
Make system modifications as necessary
As the system evaluation stage produces results, it may be necessary to adjust the system design. Utilizing MBSE in this fashion helps the designer understand all the impacts of adjusting the design, as the requirements, dependencies, tests, etc. are all linked and made available within the system model (Bjorkman et al. 2013; Huldt and Stenius 2019). A change high in the system design will ripple down throughout the model, notifying the designer of required changes or considerations, including any implications for the human component of the system (Madni and Sievers 2018).
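As a schematic illustration of that ripple effect (the element names and dependency links below are hypothetical and far simpler than a real system model's relationship graph), the model can be viewed as a dependency graph that is traversed whenever an element changes, flagging everything affected, including the human model:

```python
# Hypothetical dependency graph for a handful of model elements. A real SysML
# model's relationships are richer; this sketch only shows the traversal idea.

DEPENDS_ON = {
    "Correct Error (UC1)":    ["Autocorrect Program"],
    "Analyze Sentence (UC2)": ["Autocorrect Program", "User Model"],
    "Workload Requirement":   ["User Model"],
    "User Model":             ["Autocorrect Program"],  # e.g., UI changes alter user steps
}

def affected_by(changed: str) -> set[str]:
    """Return every element that directly or transitively depends on `changed`."""
    hits: set[str] = set()
    frontier = {changed}
    while frontier:
        frontier = {elem for elem, deps in DEPENDS_ON.items()
                    if any(d in frontier for d in deps)} - hits
        hits |= frontier
    return hits

print(affected_by("Autocorrect Program"))
# Flags UC1, UC2, the User Model, and (transitively) the Workload Requirement.
```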
HITL testing
As mentioned several times previously, an MBSE or other model-based approach does not lessen the benefit of performing human in the loop testing as often as is feasible during the design process. HITL testing provides many benefits, including reducing the uncertainty inherent in human models, providing a more open-ended test of the system, and producing more accurate parameters for the human model for future testing.
Use case methodology example application
The following example applies each of the steps in the proposed use case–based design methodology by modeling an auto-proofreading system similar to that used by Branstrom et al. (2019). Illuminating portions of the model are shown for a familiar example system, text autocorrect, to demonstrate how to apply the proposed methodology.
Stakeholder engagement
Stakeholder engagement is a vital part of the methodology. Information from stakeholder feedback informs every other stage in the process. As the system design progresses, stakeholder input allows the system designers to adjust the system to meet requirements; the more often the stakeholders are engaged, the sooner such design choices can be made.
Use case design
Once a sufficient understanding of the stakeholder needs and requirements has been established, the systems engineers can document some of the major use cases for the system. These are identified and documented in part through use case diagrams.
An important distinction between the documentation of these example use cases and that which might be done in a less human-focused systems engineering process is the inclusion of use cases that might be thought of as the user's responsibility. For example, Fig. 2 shows a use case diagram that might be created by a more traditionally focused systems engineer (Delligatti 2013; Douglass 2016). Here, the user is treated as completely separate from the autocorrect system. None of the activities that would be performed by the user is listed as a use case; instead, only those that are actively performed by the system for the user are shown.
Fig. 2 Context-level use case diagram
Taking a more human-systems integration focused perspective allows the creation of use cases that consider the whole system, including the user. As shown in Fig. 3, an example of this is use case 2: analyze sentence. As will be discussed below, this use case is primarily focused on the user's ability to work with the information presented and to complete the required task with that information.
Fig. 3 Human-systems integration context-level use case diagram
Use case diagrams present the high-level goals of the example autocorrect system and the relevant actors (stakeholders and system elements) associated with the system meeting those goals. As the system gets decomposed into subsystems, new use cases for these subsystems may arise and be defined as necessary. Additionally, use cases provide a good location for understanding system requirements, the behaviors required to meet those requirements, and the systems and external actors involved in those usages (Douglass 2016).
System requirements
As stakeholder engagement continues, more system requirements are developed and elaborated. Some of these requirements may come directly from the stakeholder (e.g., the system shall be able to detect spelling errors or the average time for an average user to correct an error shall be less than 20 s), while others may come from use cases or be derived from higher level stakeholder requirements (e.g., the system shall include the Merriam Webster’s 2018 English Spelling Dictionary). Having the human as a part of the system means the human component’s requirements can be included. For example, since an estimate of the workload on the human can be calculated, one requirement may be to keep the estimated workload below a certain threshold.
Use case behavior elaboration
As more of the system is designed and stakeholder requirements determined, the system’s use cases can be elaborated on. In general, this can be accomplished by breaking a use case up into its component behaviors. For example, UC 1 and UC 2 from Fig. 3 are elaborated on in Figs. 4 and 5, respectively. These two activity diagrams allow the system designer to specify how the use cases are carried out by the system. Additionally, block definition diagrams are used to provide information on the components, objects, information, messages, etc. that are being used, or flowing through the system.
Fig. 4 Use case 1 activity diagram
Fig. 5 Use case 2 activity diagram
For example, Fig. 6 shows how the autocorrect system is made up of an autocorrect program and a user. The user, in turn, is defined in terms of a few relevant properties, such as skill level.
Fig. 6 Block definition diagram
These diagrams show only the information required by the system designer or that which the system designer thought would be instructive to display. Most properties or objects can be elided by the system designer to create a simpler, easier to understand diagram. These elided properties will still exist within the model, even if not displayed on the diagram.
Continue elaboration
The behavior diagrams in Figs. 4 and 5 give an idea of how to complete the use cases, but not in enough detail. For instance, the action “Decide if Autocorrect is correct” is further broken down, this time to GOMS-level detail, in Fig. 7.
Fig. 7 Activity diagram elaboration
Using the activity diagram in Fig. 7, there is sufficient detail to construct a GOMS model of the user deciding whether autocorrect is correct. At this point, given the system requirements, the designers have likely reached the required level of detail for this activity.
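A code-level sketch of such a GOMS breakdown is shown below; the step wording and per-step times are assumptions made for illustration and are not reproduced from Fig. 7 or from the Cogulator models in Figs. 8 and 10.

```python
# Illustrative GOMS-style breakdown of the "Decide if Autocorrect is correct"
# action. Step names and per-step times (seconds) are assumptions, not values
# taken from Fig. 7 or from the paper's Cogulator models.

steps = [
    ("Look at the highlighted word",      0.6),
    ("Read the recommended correction",   0.9),
    ("Recall the intended word",          1.2),
    ("Compare recommendation to intent",  1.2),
    ("Decide to accept or reject",        1.2),
    ("Click to accept (or dismiss)",      1.4),
]

total_s = sum(t for _, t in steps)
print(f"Estimated decision time: {total_s:.1f} s")  # 6.5 s with these assumptions
```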
Evaluate system design
To evaluate the example autocorrect system, two possible cases allowed by the system design are considered: (1) a correctly highlighted error with a correction recommendation and (2) an incorrectly highlighted error without a correction recommendation. These scenarios are a subset of those considered by Branstrom et al. (2019).
These types of scenarios can be easily run with a system model such as the one defined above. Since GOMS-level detail has been applied for UC 2, a scenario can be directly translated and analyzed using a tool like Cogulator (Figs. 8, 9, 10, and 11).
Fig. 8 Cogulator model—path 1—error identification is correct with a recommended correction
Fig. 9 Cogulator Gantt model—path 1—error identification is correct with a recommended correction
Fig. 10 Cogulator model—path 2—error identification is incorrect with no recommended correction
Fig. 11 Cogulator Gantt model—path 2—error identification is incorrect with no recommended correction
As expected, the results from Cogulator demonstrate that an incorrectly highlighted autocorrect decision with no correction recommendation causes a significant increase in the time to complete the task, especially as compared to a correctly highlighted autocorrect decision with a correction recommendation. The time to complete a review of the same sentence increased from approximately 18 s to almost 40 s, and the estimated mental workload rose from “low” to “medium.”
While it may be unrealistic to expect an autocorrect system to be correct all the time, the differences in these performance levels may help the system designer decide, for example, what accuracy is necessary from the intelligent algorithms used by the natural language processor, especially when comparing the performance times to the stakeholder requirement of an average of 20 s per usage.
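Expressed in code, that design check amounts to comparing path-level estimates against the stakeholder requirement; the sketch below reuses the approximate totals reported above rather than recomputing them.

```python
# Approximate path-level totals, as reported above from the Cogulator runs,
# checked against the example 20 s stakeholder requirement.

path_estimates_s = {
    "path 1: correct highlight with a recommendation": 18.0,
    "path 2: incorrect highlight with no recommendation": 40.0,
}

REQUIREMENT_S = 20.0

for name, total in path_estimates_s.items():
    status = "meets" if total <= REQUIREMENT_S else "violates"
    print(f"{name}: ~{total:.0f} s ({status} the 20 s requirement)")
```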
It is of interest to note that the increased workload in the incorrect scenario is confirmed by the study performed by Branstrom et al. (2019), who measured physiological responses and correlated them with increased workload using human in the loop testing.
While tools such as Cogulator or IMPRINT provide low-fidelity estimates of time or workload, it is important that the results be treated as quick-turn estimates, allowing the systems engineer to better account for human performance in between human in the loop testing cycles.
Make system modifications as necessary
An advantage of utilizing this proposed methodology is that changes from necessary system modifications are propagated throughout the system model. In practice, this means that a change to one component of the system that impacts another component is flagged and presented to the engineer for review. Since this methodology includes the human user as a system component early in the system life cycle, changes that impact the user are flagged for review just as changes to any other system component would be.
HITL testing
HITL testing is a necessary and important part of any system design process and should be performed as often as realistically possible. However, this method also recognizes that HITL testing can be expensive and time consuming, and it therefore recommends that the human component of the system be represented in the design process between HITL test events. HITL testing also provides more accurate data to inform and update the human models (for example, the average user typing speed may be adjusted based on data from a HITL test).
Conclusion
The proposed method is designed to give an approximation of human interaction with intelligent systems in the early development phases, before any physical testing with real humans is possible, and far more frequently than such testing could be performed. As the system design progresses to an iteratively more detailed design, so does the modeling of the human element. Quantitative metrics from these models can be used to derive system requirements and constraints early in the system design, building human-systems integration into the standard design process from the beginning as a seamless extension of current practices.
These methods can be combined to supplement other user-oriented design techniques, such as user-centered design or agile systems engineering. The methodology detailed in Sections 4 and 5 provides practitioners of either technique with accessible, quantifiable methods for including the user throughout the system design. For example, in user-centered design, this method could be used during the design of a prototype that will be used to collect user feedback. Agile systems engineering, which is often already built on MBSE, can easily integrate this technique as a more persistent and quantifiable representation of the user, as opposed to treating the user as an external actor (Douglass 2016).
Unfortunately, there are plenty of examples where the human element was inadequately considered in the system design, some with major consequences. By implementing some of the proposed processes, system performance can be greatly improved, as demonstrated on the example autocorrect system model. Increased inclusion of human factors in system design and requirements can help prevent costly system failures, lost time, and increased liability.
Utilizing use cases to isolate and define the human interaction with the system provides a clear and useful path to aid in human-systems integration. While the methods described here do not replace techniques such as human in the loop testing, they provide quantitative metrics that the system designer can use to understand the impact of system changes on the human element.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Algarin L (2016) Human systems engineering and program success: a retrospective content analysis 23(1):24
Bjorkman EA, Sarkani S, Mazzuchi TA (2013) Test and evaluation resource allocation using uncertainty reduction. IEEE Trans Eng Manag 60
Branstrom C, Jeong H, Park J, Lee B-C, Park J (2019) Relationships between physiological signals and stress levels in the case of automated technology failure. Human-Intelligent Systems Integration 1
Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. L. Erlbaum Associates Inc., Hillsdale
Cogulator/Cogulator (n.d.) GitHub. https://github.com/Cogulator/Cogulator
Cogulator: A Cognitive Calculator (n.d.) http://cogulator.io/primer.html
Delligatti L (2013) SysML distilled: a brief guide to the systems modeling language, 1st edn. Addison-Wesley Professional
Douglass BP (2016) Agile systems engineering. Morgan Kaufmann Publishers Inc., San Francisco
Estes S (2015) The workload curve. Human Factors: The Journal of the Human Factors and Ergonomics Society 57
Evans GW, Karwowski W (1986) A perspective on mathematical modeling in human factors. In: Karwowski W, Mital A (eds) Applications of fuzzy set theory in human factors, Advances in Human Factors/Ergonomics, vol 6. Elsevier, pp 3–27. https://doi.org/10.1016/B978-0-444-42723-6.50006-4
Friedenthal S, Moore A, Steiner R (2008) A practical guide to SysML: systems modeling language. Morgan Kaufmann Publishers Inc., San Francisco
Handley HAH (2012) Incorporating the NATO human view in the DoDAF 2.0 meta model. Syst Eng 15
Huldt T, Stenius I (2019) State-of-practice survey of model-based systems engineering. Systems Engineering 22
Illankoon P, Tretten P, Kumar U (2019) Modelling human cognition of abnormal machine behaviour. Human-Intelligent Systems Integration 1
Integrating Rational Rhapsody with Other Design Tools (n.d.) https://www.ibm.com/support/knowledgecenter/en/SSB2MU_8.4.0/com.ibm.rhp.integ.designtools.doc/topics/c_integ_rhp_with_otherdesigntools.html
Integration with External Evaluators - Cameo Simulation Toolkit 18.5 - Documentation (n.d.) Accessed October 27, 2019. https://docs.nomagic.com/display/CST185/Integration+with+External+Evaluators
Johnston P, Harris R (2019) The Boeing 737 MAX saga: lessons for software organizations. Software Quality Professional 21
Madni AM (2009) Integrating humans with software and systems: technical challenges and a research agenda. Systems Engineering. https://doi.org/10.1002/sys.20145
Madni AM, Sievers M (2018) Model-based systems engineering: motivation, current status, and research opportunities. Syst Eng 21
McKenney TA, Kemink LF, Singer DJ (2011) Adapting to changes in design requirements using set-based design. Nav Eng J 123
Naranji E, Mazzuchi T, Sarkani S (2015) Reducing human/pilot errors in aviation using augmented cognition and automation systems in aircraft cockpit. AIS Transactions on Human-Computer Interaction 7
Sheridan TB (2008) Risk, human error, and system resilience: fundamental ideas. Hum Factors 50
Stanley RM, Kelley D, Wilkins S, Castillo A (2017) Modeling the effects of new automation capabilities on air traffic control operations. In: 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), pp 1–7. https://doi.org/10.1109/DASC.2017.8102009
Walden DD, Roedler GJ, Forsberg K, Hamelin RD, Shortell TM (2015) Systems engineering handbook: a guide for system life cycle processes and activities, 4th edn. Wiley, Hoboken, NJ
Watson ME, Rusnock CF, Colombi JM, Miller ME (2017) Human-centered design using system modeling language. Journal of Cognitive Engineering and Decision Making 11
Watson M, Rusnock C, Miller M, Colombi J (2017) Informing system design using human performance modeling. Syst Eng 20
Weilkiens T (2008) Systems engineering with SysML/UML: modeling, analysis, design. Morgan Kaufmann Publishers Inc., San Francisco
© Springer Nature Switzerland AG 2020.