DYNAMIC INTERACTION: A MEASUREMENT DEVELOPMENT AND EMPIRICAL
EVALUATION OF KNOWLEDGE BASED SYSTEMS AND WEB 2.0 DECISION SUPPORT MASHUPS
Brandon Alan Beemer
B.S., DeVry University, 2002
M.S., University of Colorado at Denver, 2005
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Computer Science and Information Systems
© 2010 by Brandon A. Beemer
All Rights Reserved.
This dissertation for the Doctor of Philosophy degree by
Brandon A. Beemer
has been approved by
Dawn Gregg, Ph.D.
Peter Bryant, Ph.D.
Ilkyeun Ra, Ph.D.
Steve Walczak, Ph.D.
Beemer, Brandon A. (Ph.D., Computer Science and Information Systems)
Dynamic Interaction: A Measurement Development and Empirical Evaluation of Knowledge Based
Systems and Web 2.0 Decision Support Mashups
Dissertation directed by Associate Professor Dawn Gregg
The research presented in this dissertation focuses on the organizational and consumer need
for knowledge based support in unstructured domains by developing a measurement scale for dynamic
interaction. Addressing this need is approached and evaluated from two different perspectives. The
first approach is the development of Knowledge Based Systems (KBS) designed to operate in
unstructured domains. By meshing the system attributes of KBS and Decision Support Systems
(DSS), researchers and developers have begun designing KBS for unstructured domains that track with
the user's iterative decision process and allow the user to evaluate alternative solutions. The second
approach to providing knowledge support in unstructured domains is through knowledge
synthesization. The latest trend in Web 2.0 research focuses on mashup applications that are designed
to synthesize knowledge by semantically connecting disjointed information and knowledge sources
(Blake and Nowlan, 2008).
The focus of this dissertation is on developing and empirically evaluating a new IS construct
that can be used by researchers to quantify dynamic interaction. Dynamic interaction is empirically
evaluated in two IS domains: KBS and eCommerce mashups. The nomological net of dynamic
interaction is initially quantified in the KBS domain, and then is expanded by evaluating this new
construct in the mashup domain.
This dissertation consists of two parts and four chapters. The first part is titled "Knowledge
Based Systems Support of Unstructured Domains Through Dynamic Interaction" and has two
chapters that cover KBS designed for unstructured domains by including dynamic interaction. The 1st
chapter is titled "Knowledge Based Systems to Support Unstructured Decisions: A Literature Review"
and provides a literature review of recent KBS, which are designed for unstructured domains and
include an iterative user interface. The 2nd chapter is titled "Dynamic Interaction in Knowledge Based
Systems: An Exploratory Investigation and Empirical Evaluation." This study empirically investigates
the KBS discussed in the previous chapter, which are designed for unstructured domains and include
dynamic interaction.
The second part of the dissertation is titled "Decision Support Mashups: Knowledge
Synthesization Through Dynamic Interaction" and is composed of two chapters that discuss knowledge
synthesization in unstructured domains through dynamic interaction in mashups. The 3rd chapter is
titled "Mashups: A Literature Review and Classification Framework." It presents a mashup literature
review, a mashup classification framework, and provides a literary foundation for the final chapter of
the dissertation. The final and 4th chapter is titled "Decision Support Mashups in Unstructured
Domains: An Empirical Evaluation of Dynamic Interaction" and performs an empirical evaluation of
dynamic interaction in decision support mashups.
This abstract accurately represents the contents of the candidate's dissertation. I recommend its publication.
I would like to thank the Information Systems faculty for giving me the opportunity to be
a part of this program and for teaching me the skills and techniques required for academic research. I
would also like to give a special thanks to my advisor, Dawn Gregg, for coaching and encouraging me
along the way. Without you, this would not have been possible.
TABLE OF CONTENTS
1 Introduction................................................................. 1
1.1 Importance of the Topics..................................................... 1
1.2 Research Problem and Scope................................................... 2
1.3 Research Questions........................................................... 3
1.4 Research Contribution........................................................ 4
1.5 Outline of Dissertation...................................................... 5
2 Knowledge Based Systems to Support Unstructured Decisions: A Literature Review
2.1 Introduction................................................................. 9
2.2 Expert Systems.................................................................. 10
2.2.1 Expert Systems Decisions........................................................ 10
2.2.2 Expert Systems Architecture.................................................. 11
2.2.3 Expert Systems Limitations...................................................... 13
2.3 Advisory Systems................................................................ 15
2.3.1 Advisory Systems Decision Support............................................... 15
2.3.2 Advisory Systems Architecture................................................... 16
18.104.22.168 Process 1: Knowledge Acquisition................................................ 19
22.214.171.124 Process 2: Cognition............................................................ 19
126.96.36.199 Process 3: Interface
Comparing, Contrasting, and Classifying Expert and Advisory Systems
Comparing Expert and Advisory Systems
Contrasting Expert and Advisory Systems
Classifying Current Expert and Advisory Systems
3 Dynamic Interaction in Knowledge Based Systems: An Exploratory Investigation and Empirical Evaluation
Measurement Development and Pretest
Conceptualization of Dynamic Interaction Substrata
Initial Item Selection
Scale Pretest and Refinement
Limitations and Future Research
4 Mashups: A Literature Review and Classification Framework
Access Control and Cross Communication
End User Programming
Mashup Classification Framework
5 Decision Support Mashups in Unstructured Domains: An Empirical Evaluation of Dynamic Interaction
Decision Mashups in Unstructured Domains
End-user Programming in Mashups
Review of eCommerce Mashup Functionality
Dynamic Interaction and Control Theory
5.5 Experiment Design and Measurement Scales................................... 98
5.6 Experiment and Data Analysis................................................. 103
5.7 Discussion, Limitations, and Future Research................................. 107
5.8 Conclusion................................................................... 112
6 Conclusions.................................................................. 113
A. Literature classification table............................................ 118
LIST OF FIGURES
1. Expert system architecture...................................................... 12
2. Advisory systems architecture................................................... 18
3. Dynamic interaction bridges gap between KBS and DSS............................. 30
4. Control theory feedback loop........................................................ 32
5. Research model for dynamic interaction.............................................. 38
6. Substrata of dynamic interaction as derived from control theory..................... 39
7. Google ads used to solicit participation in the exploratory study.................. 45
8. Revised 2nd order factor PLS model with 9 scale items............................ 47
9. PLS SEM results................................................................. 51
10. Potential expansions of dynamic interaction's nomological net................... 53
11. Google cross-communication warning.................................................. 66
12. Classifications of mashup user interfaces....................................... 77
13. MTV's Linkin Park and Jay-Z mashup album........................................ 87
14. End user's iterative mashup process............................................. 88
15. Screen print from bestsportdeals.com, which includes multiple sellers........... 92
16. Screen print from pricegrabber.com.............................................. 92
17. Screen print from shopper.com's incremental comparison mashup functionality... 93
18. Control theory feedback loop.................................................... 94
19. Substrata of dynamic interaction................................................ 95
20. Research model.................................................................. 97
21. PLS SEM results.................................................................... 107
22. New posture of dynamic interaction's nomological net
LIST OF TABLES
Expert and advisory system benefits
Advisory and expert system classification table
Recent expert system and advisory system research
Initial scale items for dynamic interaction
Pretest results for perceived dynamic interaction
Loadings and cross-loadings
Internal consistency and discriminant validity
Summary of hypothesis tests
Mashup research categories
Classifications for cross communication and access control
Classifications for mashup integrations
Mashup agent classifiers
Mashup classification framework
Mashup classification framework (applied to literature)
Derived measurement items for diagnosticity, confidence, satisfaction, and intention to use
Derived dynamic interaction scale items
Response bias analysis and descriptive statistics
Loadings and cross-loadings
22. Internal consistency and discriminant validity........................................ 106
23. Summary of hypothesis test............................................................ 107
24. Dissertation research questions and conclusions...................................... 114
25. Literature classification table....................................................... 118
The research presented in this dissertation addresses the organizational and consumer need for
knowledge based support in unstructured domains. Addressing this need is approached and evaluated
from two different perspectives. The first approach is the development of Knowledge Based Systems
(KBS) designed to operate in unstructured domains. By meshing the system attributes of KBS and
Decision Support Systems (DSS), researchers and developers have begun designing KBS for
unstructured domains that track with the user's iterative decision process and allow the user to evaluate
alternative solutions. The second approach to providing knowledge support in unstructured domains is
through knowledge synthesization. The latest trend in Web 2.0 research focuses on mashup
applications that are designed to synthesize knowledge by allowing end-users to semantically connect
disjointed information and knowledge sources (Blake and Nowlan, 2008).
1.1 Importance of the Topics
The competitive capabilities of an organization are grounded in its organizing principles by
which individual and functional expertise is structured, coordinated, and communicated (Zander and
Kogut, 1995). This discipline is commonly referred to as Knowledge Management (KM) and includes
converting data, organizational insight, experience and expertise into reusable and useful knowledge
that is distributed and shared with the people who need it (Alavi and Leidner, 2001). KM addresses
business challenges by: A) creating and delivering innovative products and services, B) managing
relationships with customers, partners and suppliers, and C) making work practices and processes
more efficient and effective (Aronson and Turban, 2001; Alavi and Leidner, 2001). Some of the
benefits that organizational knowledge provides include streamlining communication, increasing
efficiency, and fostering competitive advantage. As such, the corporate sector, government agencies,
and organizations have continually demonstrated an increasing interest in the topic of KM (Pollock,
2001). Likewise, KM has also received much attention from academic researchers in the development
of KBS like Expert Systems (ES) and Advisory Systems (AS) (e.g. Rapanotti, 2004; Lin et al, 2006;
Vanguard, 2006; Magni et al, 2006; Moynihan et al, 2006). As KBS functional advancements have
been made over the years, organizations and researchers have begun to identify the opportunity and
need to apply knowledge based support to unstructured decision domains (Pollock, 2001; Lau and
1.2 Research Problem and Scope
The architectural design of KBS is very different from that of traditional systems because the
problems they are designed to solve have no algorithmic solution; instead, they utilize codified
heuristics, or decision-making rules of thumb, which have been extracted from the domain expert(s), to
make inferences and determine a satisfactory solution (Beemer and Gregg, 2008; McGraw and
Harbison-Briggs, 1989). For unstructured decision domains, DSS are developed, which are
collaborative systems that use various types of formulas and algorithms to synthesize information from
various data sources (Aronson and Turban, 2001). The architectural design of DSS differs
significantly from the KBS heuristics approach; instead, DSS are designed to interactively track with
the user's nonlinear cognition process in unstructured decision domains (Guerlain et al, 2000; Kim et al,
2007; Subsom and Singh, 2007). As such, a functional gap has existed between KBS support of
structured decisions and DSS support of unstructured decisions (Aronson and Turban, 2001).
Despite this gap, the need to apply knowledge based support to unstructured domains persists;
this need is the research problem addressed by this dissertation, from two different perspectives. The first is by
empirically evaluating KBS designed to operate in unstructured domains by incorporating dynamically
interactive user interfaces that track with the user's iterative decision-making process. The second is by
empirically evaluating decision support mashups that are designed to synthesize knowledge in
unstructured domains by allowing the end-user to combine disparate data and information sources.
1.3 Research Questions
Generally speaking, when knowledge based systems are applied to unstructured domains they
tend to experience low user acceptance resulting from mistrust of the system because of its inability to
justify solutions when uncertainty exists (Furtado, 2004). There are two main schools of thought
addressing this low user acceptance rate. The first focuses on developing more robust explanation
facilities to justify the system's solution in unstructured domains (Arnold et al, 2006). The second
declares that "the need for interaction between knowledge-based systems (KBS) and the user has
increased, mainly, to enhance the acceptability of the reasoning process and of the solutions proposed
by the KBS" (Furtado, 2004, pg. 1). Through dynamic interaction with the user, KBS are able to track
with the user's iterative cognition process in solving unstructured decisions and involve the user's
opinion in the system's logic, which gives the user a sense of ownership (and ultimately trust) in the
solution (Kim et al, 2007; Mintzberg et al, 1976).
1) In unstructured domains, when KBS are designed to track with the user's iterative decision
making process by incorporating a dynamically interactive user interface, how are Perceived
Reliability, Perceived Usefulness, and Intention to Use affected?
Over the past few years mashups have become a popular topic amongst both research and
industry communities with the idea behind them being to synthesize new content by reusing and
combining existing content from heterogeneous information sources (Koschmider, 2009). Mashups
allow end-users to combine information and knowledge from a plethora of sources, and integrate them
into customized, goal-oriented applications (Albinola et al, 2009). However, the current posture of
mashup literature is rather disjointed, which can partially be explained by the newness of this topic.
To date, mashup literature lacks an articulation of the different subtopics of mashup research and the
different technological components that mashups are composed of.
2) What are the boundaries and subtopics of mashup literature?
3) What technological components are mashups composed of, and what characteristics can be used
to classify mashups and frame future research?
From an academic perspective, decision support mashups have received little attention. To
date most of the mashup literature has been from the applied sector. However, as mashups are
maturing as a technology, and are beginning to be applied to business domains like decision support,
there is now an opportunity to evaluate them from an academic perspective that offers both relevancy
and novelty (e.g. Walczak et al, 2009). Similar to KBS being applied to unstructured domains,
intention to use will likely be an important acceptance factor in decision support mashups. Other
important constructs in unstructured decision domains are diagnosticity (amount of information
available to the decision maker) and cognitive calibration (confidence and satisfaction), which refers to
the aligning of the decision maker's confidence in the decision with the actual decision quality.
4) How is diagnosticity affected when mashups are designed with dynamic interaction?
5) What influence does decision confidence have on decision satisfaction?
6) When mashups are designed to incorporate dynamic interaction functionality, do they experience
increased intention to use?
1.4 Research Contribution
The research included in this dissertation has three main contributions. The first contribution
investigates the recent trend where system developers are meshing the traditional attributes of KBS and
DSS to provide knowledge based support for unstructured domains. An exploratory and confirmatory
study was conducted to evaluate the effects of dynamic interaction (KBS intelligent interface) on
perceived reliability, perceived usefulness, and intention. This study supported the hypothesized
relationships between Dynamic Interaction, Perceived Reliability, Perceived Usefulness, and Intention,
and provides a foundation for future research.
The second is a literary contribution to the mashup body of knowledge. Given that the
concept of mashups is only a couple of years old, the current posture of the mashup body of knowledge
lacks coherent continuity. In an effort to outline the scope, subtopics, and technical components
encompassed and addressed by mashup literature, a thorough review of mashup literature was
conducted. The resulting contribution was the identification of mashup subtopics (Access Control and
Cross Communication, Mashup Integration, Mashup Agents, Mashup Frameworks, End-User
Programming, and Enterprise Mashups), and a mashup classification framework that can be used by
developers in requirements development and researchers to frame future research.
From conducting the mashup literature review an observation was made, namely that much of
mashup literature resides in the practitioner sector. There seems to be a need, and opportunity, to
begin evaluating decision support mashups from an academic perspective (e.g. Walczak et al, 2009).
As such, the final contribution of this dissertation is an empirical evaluation of decision support
mashups, to determine whether, like KBS in unstructured domains, mashups should be designed around the
user's iterative decision-making process. It is hypothesized that when a mashup possesses dynamic
interaction functionality, it will positively influence diagnosticity, confidence, satisfaction, and intention to use.
1.5 Outline of Dissertation
The focus of this dissertation is on knowledge based support in unstructured domains through
dynamic interaction. It consists of two parts and four chapters:
The first part is titled "Knowledge Based Systems Support of Unstructured Domains
Through Dynamic Interaction" and has two chapters that cover KBS designed for unstructured
domains by including dynamic interaction. The 1st chapter is titled "Knowledge Based Systems to
Support Unstructured Decisions: A Literature Review" and provides a literature review of recent
KBS, which are designed for unstructured domains and include an iterative user interface. The 2nd
chapter is titled "Dynamic Interaction in Knowledge Based Systems: An Exploratory Investigation
and Empirical Evaluation." This study empirically investigates the KBS discussed in the previous
chapter, which are designed for unstructured domains and include dynamic interaction.
The second part of the dissertation is titled "Decision Support Mashups: Knowledge
Synthesization Through Dynamic Interaction" and is composed of two chapters that discuss
knowledge synthesization in unstructured domains through dynamic interaction in mashups. The 3rd
chapter is titled "Mashups: A Literature Review and Classification Framework." It presents a
mashup literature review, a mashup classification framework, and provides a literary foundation for the
final chapter of the dissertation. The final and 4th chapter is titled "Decision Support Mashups in
Unstructured Domains: An Empirical Evaluation of Dynamic Interaction" and performs an
empirical evaluation of dynamic interaction in decision support mashups.
Part 1: Knowledge Based Systems Support of Unstructured Domains Through Dynamic Interaction
2. Knowledge Based Systems to Support Unstructured Decisions: A Literature Review
(Published in: F. Burstein and C.W. Holsapple, Eds., Handbook on Decision Support Systems 1,
Springer Berlin Heidelberg (2008), 361-377.)
Both advisory systems and expert systems are knowledge based systems designed to provide expertise
to support decision making in a myriad of domains. Expert systems are used to solve problems in well
defined, narrowly focused problem domains, whereas advisory systems are designed to support
decision making in more unstructured situations which have no single correct answer. This paper
provides an overview of advisory systems, which includes the organizational needs that they address,
similarities and differences between expert and advisory systems, and the supportive role advisory
systems play in unstructured decision making.
Keywords: Advisory Systems, Expert Systems, Knowledge Based Systems, Intelligent Assistants
Advisory systems provide advice and help to solve problems that are normally solved by
human experts; as such, advisory systems can be classified as a type of expert system (e.g.
Vanguard, 2006; Forslund, 1995). Both advisory systems and expert systems are knowledge-based
problem-solving packages that mimic a human expert in a specialized area. These systems are
constructed by eliciting knowledge from human experts and coding it into a form that can be used by a
computer in the evaluation of alternative solutions to problems within that domain of expertise.
While advisory systems and expert systems share a similar architectural design, they do differ
in several significant ways. Expert systems are typically autonomous problem solving systems used in
situations where there is a well-defined problem and expertise needs to be applied to find the
appropriate solution (Aronson and Turban, 2001). In contrast, advisory systems do not make decisions
but rather help guide the decision maker in the decision making process, while leaving the final
decision making authority up to the human user. The human decision maker works in collaboration
with the advisory system to identify problems that need to be addressed, and to iteratively evaluate
the possible solutions to unstructured decisions. For example, a manager of a firm could use an
advisory system that helps assess the impact of a management decision on firm value (e.g. Magni et al,
2006) or an oncologist can use an advisory system to help locate brain tumors (e.g. Demir et al, 2005).
In these two examples, the manager and the oncologist are ultimately (and legally) accountable for any
decisions/diagnoses made. The advisory system, for ethical reasons, only acts as a tool to aid in the
decision making process (Forslund, 1995).
This paper provides an overview of both advisory systems and expert systems, highlighting
their similarities and differences. It provides a background on both expert and advisory systems and
describes the architectures and the types of decisions each system supports. It distinguishes between
advisory systems which utilize the case-based reasoning methodology and traditional expert systems
that use rule-based reasoning. A review and classification of recent advisory/expert systems research
is included to show how both types of systems are currently being utilized. The paper concludes with
recommendations for further advisory system research.
2.2 Expert Systems
In response to the organizational need of intelligent decision support, expert systems were
developed by coupling artificial intelligence (AI) and knowledge management techniques. Expert
systems are designed to encapsulate the knowledge of experts and to apply it in evaluating and
determining solutions to well-structured problems.
2.2.1 Expert Systems Decisions
Expert systems have applications in virtually every field of knowledge. They are most
valuable to organizations that have a high-level of knowledge and expertise that cannot be easily
transferred to other members. They can be used to automate decision-making or used as training
facilities for non-experts (Aronson and Turban, 2001). Expert systems were designed to deal with
complex problems in narrow well-defined problem domains. If a human expert can specify the steps
and reasoning by which a problem may be solved, then an expert system can be created to solve the
same problem (Giarratano and Riley, 2005).
Expert systems are generally designed very differently from traditional systems because the
problems they are designed to solve have no algorithmic solution. Instead, expert systems utilize
codified heuristics or decision-making rules of thumb, which have been extracted from the domain
expert(s), to make inferences and determine a satisfactory solution. The decision areas expert
systems are typically applied to include configuration, diagnosis, interpretation, instruction,
monitoring, planning, prognosis, remedy, and control (Giarratano and Riley, 2005). Expert systems
research and development encompasses several domains, which include but are not limited to:
medicine, mathematics, engineering, geology, computer science, business, and education (Carroll and
Researchers and developers of the initial expert systems tried to address the problem of lost or
hard to transfer expertise by capturing the expert's knowledge and replicating their decision-making
capacity. An example of this objective is found in CATS-1, a pioneering expert system that addressed
General Electric's eventual loss of their top expert in troubleshooting diesel electric locomotive
engines (Bonissone and Johnson, 1983). The structural design of expert systems reflects this ambition
to completely replace the expert, and is inspired by the human information processing theory (Waugh
and Norman, 1968).
2.2.2 Expert Systems Architecture
Expert systems have been defined as "a system that uses human knowledge captured in a
computer to solve a problem that ordinarily needs human expertise" (Aronson and Turban, 2001). As
is shown in Figure 1, expert system architecture distinctly separates knowledge and processing
procedures in the knowledge base and inference engine, respectively (Bradley et al, 1995; Waterman,
1986; Aronson and Turban, 2001).
The knowledge base of expert systems contains both tacit and explicit knowledge. Tacit
knowledge exists in the mind and is difficult to articulate; it governs explicit knowledge through
mechanisms such as intuition (McGraw et al., 1989), while explicit knowledge is context specific and
is easily captured and codified (Bradley et al, 2006). A knowledge engineer is often needed to help
elicit tacit knowledge from the expert and then to codify it into the knowledge base. The knowledge
engineer uses various methods in structuring the problem solving environment; these include
interpreting and integrating the expert's answers to questions, drawing analogies, posing counter
examples, and bringing conceptual difficulties to light (Aronson and Turban, 2001).
Figure 1: Expert system architecture (Aronson and Turban, 2001; Bradley et al, 1995; Waterman, 1986).
The knowledge representation formalizes and organizes the knowledge so that the
inference engine can process it and make a decision. One widely used knowledge representation in
expert systems is an IF-THEN rule. The IF part of the rule lists a set of conditions the rule applies to.
If the IF part of the rule is satisfied, the THEN part of the rule can be executed, or its problem-solving
action taken. Expert systems whose knowledge is represented in rule form are called rule-based systems.
In expert systems, the inference engine organizes and controls the steps taken to solve the
problem. It uses rule-based reasoning to navigate through the rules, which are stored in the knowledge
base (Aronson and Turban, 2001). When the knowledge base is structured in this way to support
rule-based reasoning, it is referred to as a decision tree. Each unique branch in the
decision tree represents a correct answer to the situational antecedents that lead to it. If the inference
engine starts from a set of conditions and moves toward some conclusion, the method is called forward
chaining. If the conclusion is known (for example, a goal to be achieved) but the path to that
conclusion is not known, then the inference engine reasons backwards using backward chaining
(Giarratano and Riley, 2005). Once the inference engine determines a solution to the problem, it is
presented to the user through the user interface. In addition, explanation facilities in expert systems
trace the line of reasoning used by the inference engine to help end users assess the credibility of the
decision made by the system (Feigenbaum et al, 1988).
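The IF-THEN rules and forward-chaining procedure described above can be sketched in a few lines of code. This is a minimal illustration only, not the architecture of any particular expert system shell; the rules and facts are invented for the example, and a backward-chaining engine would instead reason from a goal back to its supporting conditions.

```python
# Minimal forward-chaining sketch: rules are IF-THEN pairs, and the
# inference engine keeps firing rules whose conditions are satisfied
# until no new facts can be derived. Illustrative example only.

class Rule:
    def __init__(self, conditions, conclusion):
        self.conditions = set(conditions)  # the IF part
        self.conclusion = conclusion       # the THEN part

def forward_chain(rules, facts):
    """Start from known facts and move toward conclusions."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            # Fire the rule if all conditions hold and it adds a new fact.
            if rule.conditions <= facts and rule.conclusion not in facts:
                facts.add(rule.conclusion)
                changed = True
    return facts

# Invented diagnostic rules in the spirit of a troubleshooting system.
rules = [
    Rule({"engine cranks", "no spark"}, "ignition fault"),
    Rule({"ignition fault"}, "check coil"),
]
print(sorted(forward_chain(rules, {"engine cranks", "no spark"})))
# → ['check coil', 'engine cranks', 'ignition fault', 'no spark']
```

Note how the derived facts form the "line of reasoning" that an explanation facility could play back to the user.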
Often the decisions made by expert systems are based on incomplete information about the
situation at hand. Uncertainty increases the number of possible outcomes to all possible solutions,
making it impossible to find a definable best solution to the problem. For example, in the medical
domain there are constraints of both time and money. In many cases, running additional tests may
improve the probability of finding an appropriate treatment but the additional tests may cost too
much money or take time the patient does not have (Giarranto and Riley, 2005). In an attempt to
accommodate uncertainty, many expert systems utilize methods to perform inexact reasoning,
which allows them to find an acceptable solution to an uncertain problem. Two popular methods used
to perform reasoning under uncertainty are Bayesian probability (Castillo et al, 1997) and fuzzy theory
(Bellman and Zadeh, 1970; Negoita, 1985).
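As a worked illustration of the Bayesian option (the probabilities below are hypothetical and not drawn from the cited works), Bayes' rule updates a prior belief given uncertain evidence:

```python
# Hedged sketch of Bayesian updating for reasoning under uncertainty.
# Hypothetical inputs: prior P(disease), test sensitivity
# P(positive | disease), and false-positive rate P(positive | no disease).

def bayes_update(prior, sensitivity, false_positive_rate):
    """Posterior P(hypothesis | positive evidence) via Bayes' rule."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# With a 1% prior, a positive result from a 90%-sensitive test with a 5%
# false-positive rate raises the belief to roughly 15%, not 90%.
posterior = bayes_update(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
```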
2.2.3 Expert Systems Limitations
Many expert systems are based on the notion that the process of solving unstructured
decisions consists of five sequential phases: 1) problem identification; 2) assimilating necessary
information; 3) developing possible solutions; 4) solution evaluation; 5) solution selection (Brim et al,
1962; Dewey, 1910). These expert systems perform the last four decision-making steps for the user
and have been applied successfully in a wide variety of highly specialized domains. Traditionally,
rule-based expert systems operate best in structured decision environments, since solutions to structured
problems have a definable right answer, and the users can confirm the correctness of the decision by
evaluating the justification provided by the explanation facility (Gefen et al, 2003a). However, researchers
have identified many limitations to current expert systems, which include (Luger, 2005):
1. Difficulty in capturing the "deep" knowledge of the problem domain.
2. Lack of robustness and flexibility.
3. Inability to provide in-depth explanations of solution logic (instead, expert system
explanations are generally restricted to a description of the steps taken in finding a solution).
4. Difficulties in solution verification.
5. Little learning from experience.
The inflexibility of traditional expert systems reduces their ability to handle unstructured,
more loosely defined problems. In the 1970s, decision theorists discovered that the phases within the
decision process are executed iteratively until an acceptable decision is reached (Mintzberg et al, 1976;
Witte, 1972). When a decision maker gathers and assimilates information, they subconsciously begin
to comparatively evaluate it with previously gathered information (Mintzberg et al, 1976). This
comparative evaluation of information, coupled with an understanding of the information's contextual
relevancy, results in decisions sufficient for unstructured problems, which have no definable right
solution because of the existence of outcome uncertainty (Mintzberg et al, 1976; Witte, 1972). One
reason that the rule-based inference engines used in traditional expert systems have limited capacity to
handle unstructured decisions is that they usually do not support the required iterative process of
decision making (Mintzberg et al, 1976; Witte, 1972).
While many researchers agree with the preceding description of expert systems and their
limitations (e.g. Turban and Watkins, 1986; Aronson and Turban, 2001), there is disagreement in the
research community regarding the scope of expert system functionality. For example, Quinlan (1980;
1988) describes expert systems that incorporate the capability of addressing unstructured decision
environments.
2.3 Advisory Systems
Advisory systems are advice giving systems as opposed to systems that present a solution to
the decision maker (Aronson and Turban, 2001). Research in advisory systems has found that for
many problems decision makers need the problem to be identified and framed so that they can make
decisions for themselves (e.g. Forslund, 1995; Miksch et al, 1997; Gregg and Walczak, 2006).
2.3.1 Advisory Systems Decision Support
Advisory systems support decisions that can be classified as either intelligent or unstructured,
and are characterized by novelty, complexity, and open-endedness (Mintzberg et al, 1976). In addition,
contextual uncertainty is ubiquitous in unstructured decisions; when combined, these characteristics
exponentially increase the complexity of the decision making process (Chandler and Pachter, 1998).
Because of the novel antecedents and lack of definable solution, unstructured decisions require the use
of knowledge and cognitive reasoning to evaluate alternative courses of action to find the one that has
the highest probability of desirable outcome (Chandler and Pachter, 1998; Mintzberg et al, 1976). The
more context specific knowledge acquired by the decision maker in these unstructured decision
making situations, the higher the probability that they will achieve the desirable outcome (Aronson and
Turban, 2001).
The decision making process that occurs when users utilize advisory systems is similar to that
which is used for the Judge-Advisor model developed in the organizational behavior literature
(Sniezek, 1999; Sniezek and Buckley, 1995; Arendt et al, 2005). Under this model, there is a principal
decision maker who solicits advice from many sources; however, the decision maker holds the
ultimate authority for the final decision and is made accountable for it (Sniezek, 1999). The Judge-
Advisor model suggests that decision makers are motivated to seek advice from others for decisions
that are important, unstructured, and involve uncertainty. Similarly, advisory systems help to
synthesize knowledge and expertise related to a specific problem situation for the user; however, the
ultimate decision-making power and responsibility lies with the user not the system.
Advisory systems support decisions related to business intelligence, health diagnostics,
mechanical diagnostics, pharmaceutical research, autonomous aviation systems, infrastructure
procurement and many more (Chandler and Pachter, 1998; Rapanotti, 2004; Sniezek, 1999). Advisory
systems can also support problem identification in unstructured decision-making environments.
Without expert levels of knowledge, most unstructured decisions often remain unidentified because
most strategic decisions do not present themselves to the decision maker in convenient ways;
problems and opportunities in particular must be identified in the streams of ambiguous, largely verbal
data that decision makers receive (Mintzberg et al, 1976; Mintzberg, 1973; Sayles, 1964).
Additionally, decision makers who lack access to the proper expertise are constrained by cognitive
limits to economically rational behavior that induce them to engage in heuristic searches for
satisfactory decisions, rather than comprehensive searches for optimal decisions (Blanning, 1987;
March and Simon, 1958; Simon, 1972).
2.3.2 Advisory Systems Architecture
Advisory systems differ from expert systems in that classical expert systems can solve a
problem and deliver a solution, while advisory systems are designed to help and complement the
human's problem-solving process (Forslund, 1995; Mintzberg et al, 1976). In unstructured situations,
which have no single correct answer, cooperative advisory systems that provide reasonable answers
to a wide range of problems are more valuable and desirable than expert systems that produce correct
answers to a very limited number of questions (Forslund, 1995).
The changes in advisory systems, relative to expert systems, include giving the final decision back
to the user, and utilizing the case-based reasoning methodology in the inference engine (Forslund,
1995). In contrast to the rule-based reasoning used in traditional expert systems, which uses Boolean
logic, case-based reasoning accommodates uncertainty by using algorithms to compare the current
situation to previous ones and assigning probabilities to the different alternatives (Watson, 1999). Once
probabilities have been assigned, the advisory system inference engine is then able to evaluate the
alternatives; this iterative evaluation functionality resembles and supplements the cognitive process
used by humans when making unstructured decisions, and thus it is more effective in supporting the
users of the system (Lau and Tsui, 2006). Case-based reasoning is often mistakenly referred to as a
technology, but in fact is a methodology, which is implemented through various technologies; these
technologies include nearest neighbor distance algorithms, induction, fuzzy logic, and Structured Query
Language (SQL) Online Analytical Processing (OLAP) tools (Watson, 1999). These intelligent
suggestions, which are the result of the case-based reasoning inference engine, are then incorporated
into the iterative decision making process of the human decision maker, the user (Forslund, 1995;
Witte, 1972; Mintzberg et al, 1976). Figure 2 illustrates the iterative support of advisory systems in the
decision making process; this functionality contrasts with expert systems, which only provide a final
answer with supportive justification.
Figure 2: Advisory systems architecture, adapted from Forslund (1995) & Lau and Tsui (2006)
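The nearest neighbor retrieval step that underlies this kind of case-based reasoning can be sketched as follows; the case base, features, and weights are hypothetical, and real systems use richer similarity measures:

```python
# Hedged sketch of case retrieval via a nearest-neighbor distance algorithm,
# one of the technologies the text names for implementing case-based
# reasoning. Features are assumed to be pre-scaled to [0, 1].

def similarity(case_a, case_b, weights):
    """Weighted similarity in [0, 1] over shared numeric features."""
    total = sum(weights.values())
    score = 0.0
    for feature, weight in weights.items():
        diff = abs(case_a[feature] - case_b[feature])
        score += weight * (1.0 - min(diff, 1.0))
    return score / total

def retrieve(case_base, query, weights, k=2):
    """Return the k past cases most similar to the current situation."""
    ranked = sorted(case_base, key=lambda c: similarity(c, query, weights),
                    reverse=True)
    return ranked[:k]

# Hypothetical case base: past situations with known outcomes.
weights = {"severity": 2.0, "duration": 1.0}
case_base = [
    {"severity": 0.9, "duration": 0.8, "outcome": "treatment_A"},
    {"severity": 0.2, "duration": 0.1, "outcome": "treatment_B"},
    {"severity": 0.8, "duration": 0.7, "outcome": "treatment_A"},
]
query = {"severity": 0.88, "duration": 0.78}
best = retrieve(case_base, query, weights)
```

The similarity scores returned here play the role of the probabilities assigned to alternatives in the text: the user can inspect the ranked precedents rather than receive a single Boolean answer.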
In addition to iterative user interaction, advisory systems include a monitoring agent to help
address the need for identifying unstructured decisions that need to be addressed; this is displayed in
Figure 2 as the flow of information from domain variables to the inference engine (Mintzberg et al,
1976; Mintzberg, 1973; Sayles, 1964; Forslund, 1995). If these domain variables exceed expected
norms, then the system shell will notify the user that there is a situation which needs to be addressed
and will begin the iterative decision making process by offering a suggested course of action.
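A minimal sketch of this monitoring behavior, with hypothetical domain variables and norms, might look like:

```python
# Hedged sketch of the monitoring agent described above: each domain
# variable is compared against its expected norm, and any value outside
# that norm produces an alert that would trigger user notification and a
# suggested course of action. Variable names and bounds are hypothetical.

def monitor(domain_variables, norms):
    """Return the variables whose values fall outside their expected norms."""
    alerts = {}
    for name, value in domain_variables.items():
        low, high = norms[name]
        if not (low <= value <= high):
            alerts[name] = value
    return alerts

norms = {"heart_rate": (50, 110), "temperature": (36.0, 38.0)}
readings = {"heart_rate": 128, "temperature": 37.1}
alerts = monitor(readings, norms)
# A non-empty alert set is the cue for the shell to notify the user and
# begin the iterative decision making process.
```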
The three main processes of expert systems are knowledge acquisition, inference, and
interface (Aronson and Turban, 2001); similarly, the three main processes in advisory systems are
knowledge acquisition, cognition, and interface. Because of the monitoring functionality that is
adopted by advisory systems, the term cognition better describes the middle process. To provide a
visual aid, the main processes of advisory systems have been labeled in Figure 2 and are described in
the following sections.
2.3.2.1 Process 1: Knowledge Acquisition
The process of knowledge acquisition in advisory systems is similar to that of traditional
expert systems, but it can be much more complicated because the unstructured nature of the problem
domain can make the knowledge more difficult to capture and codify. In general, advisory systems are
designed to support a broad category of problems, too broad to exactly specify all of the knowledge
necessary to solve the problem (Forslund, 1995). The eventual success or failure of an advisory
system is dependent upon the effectiveness of knowledge acquisition; the measure of effectiveness lies
in the structure and quality of the encoded knowledge, not the quantity. The knowledge base structure
and codification must be conducive to the inference engine design. The knowledge representation
scheme used in advisory systems formalizes and organizes the knowledge so that it can be used to
support the type of case-based reasoning implemented in the system.
2.3.2.2 Process 2: Cognition
The term cognition describes this process in advisory systems better than inference does,
because it encapsulates the added functionality of active monitoring and problem
recognition, which was introduced in the transition from expert systems. Most unstructured decisions
do not present themselves to the decision maker in convenient ways, so advisory systems supplement
the task of problem identification by monitoring environmental variables (Mintzberg et al, 1976;
Mintzberg, 1973; Sayles, 1964). There are various methods used by advisory systems to perform this
task, and the method used is dependent upon the environment that the advisory system operates in.
Advisory systems can either monitor for problems (e.g. mortgage credit checks) or for opportunities
(e.g. pharmaceutical drug discovery) (Rapanotti, 2004). In addition to monitoring for potential
problems or opportunities, advisory systems support the decision maker in the iterative process of
determining a solution to the problem. The inference engine uses the environmental variables, user
input, and the knowledge base to evaluate different courses of action and make suggestions to the
decision maker. Unlike expert systems, the suggestions made by advisory systems do not always
represent the final answer to the problem. Instead, they represent advice used by the decision maker as
a part of the iterative problem solving process.
2.3.2.3 Process 3: Interface
This process encapsulates all sub-processes that facilitate information exchange between the
inference engine and the end-user. This includes the automated input of environmental parameters that
are used in monitoring functionality, the iterative communication with the user throughout the decision
making process, and the reasoning process the advisory system used in making the recommendation
(the explanation) as well as some expression indicating the advisory system's evaluation of the quality
of the advice (Sniezek, 1999). Unlike more traditional expert systems, user interaction with advisory
systems can involve much more than entering the initial problem conditions and waiting for the system
recommendation and explanation. Advisory systems can have multiple intermediate stages in the decision
process, which require user input to guide the overall advisor decision-making process.
Since the inception of advisory systems, there has been little research or design
literature concerning the new iterative functionality of the user interface. While much attention is
given to the cognition process components, the user interface is equally important because these types
of systems are prone to a lack of user acceptance. This problem was initially realized with the
development of expert systems, because such a system was perceived as a potential threat by an employee who
perceives that his or her most valuable skill is embodied within the system and that job security is
accordingly threatened as a result of system use (Liker and Sandi, 1997). While this is not quite the
concern with advisory systems, it is still prudent to design the user interface in such a way as to foster
feelings of perceived usefulness and ease of use by users (Davis et al, 1989).
2.4 Comparing, Contrasting, and Classifying Expert and Advisory Systems
The distinction between advisory systems and expert systems has historically not been
explicitly made by researchers (e.g. Negoita, 1985). Advisory systems are an evolutionary extension
of expert systems; evidence of this is found in the similarities between their architectural designs. Yet
despite these similarities, there are critical differences between the two system architectures, which
merit their distinction.
2.4.1 Comparing Expert and Advisory Systems
Both expert systems and advisory systems provide numerous benefits to users operating in
complex decision making environments; some of these benefits are summarized in Table 1. The main
factor that affects the realization of these benefits is whether users accept, trust, and use the systems (Davis
et al, 1989).
2.4.2 Contrasting Expert and Advisory Systems
Although advisory and expert systems do share some commonalities in their shell structures,
Table 2 highlights the major differences, such as the decisions they are each designed for (unstructured
vs. structured), the AI methodologies that each uses (case-based vs. rule-based), and the role they each play
in the decision making process (decision support vs. decision maker). In addition to these differences,
advisory systems incorporate new advancements in the active monitoring functionality highlighted in
Figure 2, and are designed to further supplement the human cognitive problem solving process by
incorporating iterative interaction with the user.
Decreased Decision Making Time: Using the system's recommendations, a human can make decisions much faster. This property is important in supporting frontline decision makers who must make quick decisions while interacting with customers.
Enhancement of Problem Solving and Decision Making: Enhances the problem solving process by incorporating the knowledge of top experts into the decision making process.
Improved Decision Making Process: Provides rapid feedback on decision consequences, facilitates communication among decision makers on a team, and allows rapid response to unforeseen changes in the environment, thus providing a better understanding of the decision making environment.
Improved Decision Quality and Reliability: Advisory systems consistently monitor all the details and do not overlook relevant information, potential problems, or potential solutions.
Ability to Solve Complex Problems: Some advisory and expert systems are already able to solve problems in which the required scope of knowledge exceeds that of any one individual. This allows decision makers to gain control over complicated situations and improve the operation of complex systems.
Table 1: Expert and advisory system benefits, (Aronson and Turban, 2001)
Attribute | Advisory System | Expert System
Decision Structure | Unstructured | Structured
AI Methodology | Case-Based Reasoning | Rule-Based Reasoning
Role in Decision Process | Decision Support | Decision Maker
Query Direction | Human <-> System | Human <- System
Problem Identification | User or System | User
Table 2: Advisory and expert system classification table, adapted from Turban and Watkins (1986)
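To illustrate how the Table 2 criteria could be applied mechanically, the following sketch scores a system against the advisory-system column; the majority-vote rule is our own simplification, not part of Turban and Watkins' scheme:

```python
# Hedged sketch: classify a system as advisory or expert by comparing its
# attributes to the advisory-system profile from Table 2. Attribute names
# mirror the table; the majority-vote threshold is an illustrative choice.

ADVISORY_PROFILE = {
    "decision_structure": "unstructured",
    "ai_methodology": "case-based",
    "role": "decision support",
    "problem_identification": "user or system",
}

def classify(system_attributes):
    """Majority vote against the advisory-system profile in Table 2."""
    matches = sum(
        1 for key, value in ADVISORY_PROFILE.items()
        if system_attributes.get(key) == value
    )
    return "Advisory System" if matches > len(ADVISORY_PROFILE) / 2 else "Expert System"

# A system matching three of the four advisory attributes is classified
# as an advisory system.
label = classify({
    "decision_structure": "unstructured",
    "ai_methodology": "case-based",
    "role": "decision support",
    "problem_identification": "user",
})
```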
An example of a current expert system is a deicing system being developed by the Colorado
Department of Transportation (9news, 2006). In an effort to reduce costs and wasted chemicals, the
system is designed to decide the optimal amount of magnesium chloride (a liquid deicer) to distribute
on the roads based on automated humidity and temperature inputs from sensors in the road, and
manual inputs from the truck drivers, which are entered via laptops in their trucks. These inputs are
sent wirelessly to a central computer, which uses its artificial intelligence and knowledge base
components to provide snow removal truck drivers with the appropriate amount of deicer to use. In
this system, the system ultimately has the ability to make better decisions than the snow removal
truck drivers.
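A purely hypothetical rule sketch for a deicing decision of this kind (the actual CDOT system's rules, thresholds, and units are not described in the source) might look like:

```python
# Illustrative IF-THEN rules mapping sensor inputs (road temperature and
# humidity) to a deicer dosage level. All thresholds are invented for the
# sketch and do not reflect the real CDOT system.

def deicer_rate(road_temp_c, humidity_pct):
    """Map hypothetical sensor readings to a dosage recommendation."""
    if road_temp_c > 2:
        return "none"    # road too warm for ice to form
    if humidity_pct < 40:
        return "low"     # cold but dry conditions
    if road_temp_c > -5:
        return "medium"
    return "high"        # very cold and humid

rate = deicer_rate(road_temp_c=-3, humidity_pct=70)
```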
An example of a current advisory system is a system developed to support hospital operations
called HELP (Health Evaluation through Logical Processes). This system performs
various functions to aid physicians in providing effective and expedient health care to hospital patients.
The HELP system's functionality includes: 1) reviewing manually inputted lab results and identifying
patient issues which need to be addressed, 2) using the knowledge base and case based reasoning to
provide physicians with a preliminary diagnosis, 3) monitoring vitals for ICU patients and identifying
when urgent care is needed, 4) assisting physicians with complex diagnoses. This advisory system
incorporates a knowledge base which works harmoniously with an artificial intelligence component,
but unlike traditional expert systems the system is designed to be used as an advisor and not a decision
maker. Also, this system incorporates a monitoring capacity and provides problem identification. One
area where this is performed is in monitoring patients' vital signs and lab result inputs, and proactively
identifying suggested courses of action for evolving problems. The query flow in the HELP system is
bidirectional, meaning that the system or the user can initiate the iterative decision making process.
Unlike the Colorado Department of Transportation deicing system, the HELP system works with
physicians, providing them with additional information and insights into the problem at hand. However,
the physician still retains a great deal of control over the ultimate decision that is reached, and ethics
demands that a human decision maker assume responsibility for the outcome of care given.
2.4.3 Classifying Current Expert and Advisory Systems
Many advisory systems, which are designed to apply human expertise in supporting the
decision maker (but not to solve the problem for them), are being classified as expert systems by IS
researchers. Table 3 contains a brief review of systems classified as expert systems by IS researchers;
along with our own classification of these systems using the criteria in Table 2. A high percentage
of these systems are actually advisory systems, not truly expert systems. 41% of the systems (7/17)
System Name | Description of Functionality | System Type
AccuStrat (Rapanotti, 2004) | A predictive model for disease management; the model suggests which patients need additional care. | Advisory System
PlacementPlus 4.0 (Rapanotti, 2004) | A business rule management application that is used to match delinquent accounts with collection agencies. | Expert System
ClassPharmer (Rapanotti, 2004) | A knowledge-based system designed to assist computational chemists in the drug discovery process. | Advisory System
Auction Advisor (Gregg and Walczak, 2006) | Online auction recommendation and bidding decision support system. | Advisory System
Decision Script (Vanguard, 2006) | Allows the capture of business rules and builds complex applications. | Advisory System
JRules 4.6 (Rapanotti, 2004) | Business rules management that lets policy managers and business analysts manage complex rule sets. | Expert System
Pathways Analysis (Rapanotti, 2004) | Generates multiple biological networks with functional analysis to facilitate understanding of experiments. | Advisory System
EZ-Xpert 3.0 (AI Developers, 2006) | A rule-based expert system that is designed for quick development and deployment. It generates code. | Expert System
Buffer Overflow Control (Lin et al, 2006) | Uses neural network and fuzzy logic controllers to mitigate internet buffer overflow at the user/server level. | Expert System
Intelligent Tutoring (Rapanotti, 2004) | An interactive knowledge based system used for distributing circuit analysis knowledge to non-experts. | Advisory System
Firm Evaluation (Magni et al, 2006) | Couples fuzzy logic with rule-based reasoning to support firm evaluation. | Advisory System
Software Design Assistant (Moynihan et al, 2006) | Prototype system demonstrating the feasibility of using expert system technology to aid software design. | Advisory System
Ultrasonography System (Hata et al, 2005) | Uses an anatomical knowledge base to diagnose brain diseases and trauma. | Expert System
Memory Controller (Rubio and Lizy, 2005) | A server memory controller which decomposes database queries into simple operations to foster efficiency. | Expert System
Recycling Management (Fonseca, 2005) | Helps manufacturers assess and analyze their industrial residuals as potential road construction material. | Advisory System
Reservoir Management (Karbowski et al, 2005) | Combines a rule-based inference engine and algorithmic calculations for reservoir flood gate control. | Expert System
Design Advisor (Chau, 2004) | Advises engineers in the design of liquid-retaining structures. | Advisory System
Table 3: Recent expert system and advisory system research
were expert systems according to our classification criteria, and 59% (10/17) were actually advisory
systems. Thus, the majority of new systems in our limited survey were actually advisory systems,
which highlights the transition to the new advisory system paradigm, and motivates the distinction
between advisory and expert systems.
2.5 Future Research
The majority of current advisory systems research has consisted of applied studies that
developed advisory systems in specific application domains (e.g. Gregg and Walczak, 2006; Magni et
al, 2006; Butz et al, 2006 etc.). While there is certainly an ongoing need to explore the diverse array of
potential applications of advisory systems, there is also a need for basic research on the advisory
system paradigm. This includes research related to improving user interaction with advice giving
systems, defining the objectives and types of advice given by these systems, and improving the ability
to acquire and represent the knowledge used in these systems (Roldan et al, 2003).
Over the past few decades both successful and unsuccessful expert and advisory systems have
been developed; improving user interaction with these systems is necessary in order for them to be
trusted, accepted, and to contribute to the decision making process (Carroll and McKendre, 1987).
Improving user interaction with advisory systems requires additional understanding and research on
the role of advice in decision-making, facilitating the iterative interaction between decision makers and
the system, and the impact of the advice given on the final decision that is made (Sniezek 1999).
Specifically, there is a need to determine how systems can best give advice which is adaptive and
conducive to the cognitive decision making process of the user(s) (Sniezek 1999). Research is also
needed to examine how to enhance the iterative decision support functionality of advisory systems
(Brim et al, 1962; Mintzberg et al, 1976).
There is also a need for additional research in knowledge acquisition and representation. The
process of eliciting tacit knowledge obtained by an expert and coding it into explicit knowledge that is
congruent with the AI technology in the inference engine is a very complicated process which spans
across the following research disciplines: psychology, information systems, and computer science
(Bradley et al, 2006). This process differs from that found in traditional expert systems because the
tacit knowledge which is necessary for the system is much more difficult to define, codify, evaluate,
and represent than is rule-based explicit knowledge (Bellman and Zadeh, 1970; Bradley et al, 2006;
McGraw and Harbison-Briggs, 1989).
The goal of this paper is to extend previous publications that suggest and discuss the advisory
systems paradigm (Aronson and Turban, 2001; Forslund, 1995), by incorporating insight from decision
theory into the design of this emerging system architecture (Mintzberg et al, 1976). For the past
decade many advisory systems have been classified as expert systems by IS researchers, even though
they provide advice instead of making a decision. It is our hope that this review of advisory systems
provides insight and fosters the acceptance of advisory systems as a unique research paradigm, apart
from expert systems. There is a distinct organizational need for advice giving systems (Forslund,
1995). However, additional research is needed to better define the role advisory systems should play
in supporting decision making and how best to improve their effectiveness and acceptance within
organizations.
3. Dynamic Interaction in Knowledge Based Systems: An Exploratory Investigation and Empirical Evaluation
(Forthcoming in Decision Support Systems)
In response to the need for knowledge-based support in unstructured domains, researchers and
practitioners have begun developing systems that mesh the traditional attributes of knowledge based
systems (KBS) and decision support systems (DSS). One such attribute being applied to KBS is
dynamic interaction. In an effort to provide a mechanism that will enable researchers to quantify this
system attribute, and enable practitioners to prescribe the needed aspects of dynamic interaction in a
specific application, a measurement scale was derived from previous literature. Control theory was
used to provide the theoretical underpinnings of dynamic interaction and to identify its conceptual
substrata. A pretest and exploratory study was conducted to refine the derived scale items, and then a
confirmatory study was conducted to evaluate the nomological validity of the measurement scale.
Keywords: Dynamic Interaction, Decision Support Systems, Knowledge Based Systems, Advisory Systems
3.1 Introduction
Information Systems (IS) literature contains two main system architectures designed for
decision support: knowledge based systems (KBS) and decision support systems (DSS). KBS have
been defined as systems that use human knowledge captured in a computer to solve problems that
ordinarily require human expertise, and they have applications in virtually every field of knowledge (Aronson
and Turban, 2001). KBS are designed to deal with complex problems in narrow, well-defined problem
domains. If a human expert can specify the steps and reasoning by which a problem may be solved,
then a KBS can be created to solve the same problem (Giarranto and Riley, 2005). The architectural
design of KBS is very different from that of traditional systems, because the problems they are designed to
solve have no algorithmic solution; instead, they utilize codified heuristics, or decision-making rules of
thumb, which have been extracted from the domain expert(s), to make inferences and determine a
satisfactory solution (Beemer and Gregg, 2008; McGraw and Harbison-Briggs, 1989). For
unstructured decision domains, DSS are developed; these are collaborative systems that use various
types of formulas and algorithms to synthesize information from various data sources (Aronson and
Turban, 2001). The architectural design of DSS differs significantly from the KBS heuristics approach;
instead, DSS are designed to interactively track with the user's nonlinear cognition process in
unstructured decision domains (Guerlain et al, 2000; Kim et al, 2007; Subsom and Singh, 2007).
A functional gap has existed between KBS's support of structured decisions and DSS's
support of unstructured decisions (Aronson and Turban, 2001). Despite this, the need to apply
knowledge to unstructured domains exists, and recently system developers have begun meshing the
traditional characteristics of these two system architectures to provide knowledge based support of
unstructured decisions. This is being accomplished by incorporating dynamic interaction (which is
traditionally a DSS trait) into KBS (Forslund, 1995; Lau and Tsui, 2006; Walczak et al, 2009). Figure
3 provides an illustration of emerging system designs that are bridging the described functional gap
between knowledge based systems and decision support systems (e.g. iterative expert systems,
advisory systems, web 2.0 mash-ups).
Figure 3: Dynamic interaction bridges gap between KBS and DSS
As Figure 3 illustrates, a common feature shared by these three systems is dynamic interaction
(Beemer and Gregg, 2008; Lau and Tsui, 2006; Walczak et al, 2009). Iterative expert systems closely
resemble traditional expert systems but include the added functionality of an iterative decision process
that allows the user to revisit and revise their inputs and consider alternative solutions (Lau and Tsui,
2006). Advisory systems are also driven by an iterative system process, but unlike iterative expert
systems which are typically rule-based, advisory systems couple rule-based reasoning with other forms
of logic such as case-based reasoning (Beemer and Gregg, 2008). Another type of system that is
bridging the gap between knowledge based systems and decision support systems is mashups, which
provide an enhanced interactive web experience that allows a variety of users to share in the generation,
organization, distribution, and utilization of knowledge (Walczak et al, 2009).
Generally speaking, when knowledge based systems are applied to unstructured domains they
tend to experience low user acceptance resulting from mistrust of the system because of its inability to
justify solutions (Furtado, 2004). There are two main schools of thought addressing this low user
acceptance rate. The first focuses on developing more robust explanation facilities to justify the
systems' solutions in unstructured domains (Arnold et al, 2006). The second declares that "the need for
interaction between knowledge-based systems (KBS) and the user has increased, mainly, to enhance
the acceptability of the reasoning process and of the solutions proposed by the KBS" (Furtado, 2004;
pg. 1). Through dynamic interaction with the user, KBS are able to track with the user's iterative
cognition process in solving unstructured decisions and involve the user's opinion in the system's logic,
which gives them a sense of ownership (and ultimately trust) in the solution (Kim et al, 2007;
Mintzberg et al, 1976).
The purpose of this study is to derive a measurement scale to quantify dynamic interaction in
knowledge based systems. It measures dynamic interaction as a construct that is perceived by the user,
because a user's initial attitude towards the system may positively or negatively influence
the actual dynamic interaction that takes place during system use. A review of system control theory
literature is conducted, which provides the theoretical underpinnings for the substrata and nomological
model (related constructs) of dynamic interaction. Upon this foundation, a multi-item measurement
scale is derived for dynamic interaction, which is then evaluated in an exploratory and a confirmatory
study. Lastly, a discussion is presented that covers the implications of this study for future research.
3.2 Theoretical Background
Dynamic interaction is neither a new concept nor is it limited to the decision support
architectures previously discussed; in fact, it has appeared in artificial intelligence literature for
several decades. In the 1990s the World Wide Web began to take on dynamic characteristics with
database-driven web applications that serve custom web pages designed to meet the specific criteria and
interests of unique users (Keyes, 1998). Web applications then evolved into virtual working
environments, and various studies evaluated groupware architectures in terms of their ability to
synthesize and support the dynamic interaction between colleagues and between teachers and students in distance
education (Lin and Lee, 2004). Most recently, dynamic interaction is being expanded to bring
browser-based interaction much closer to application-based interaction (McCarthy, 2005;
Royale, 2008; Smith, 2008). Despite the popularity of dynamic interaction as a system attribute, IS
literature lacks a measure to quantify the role of this construct within the information system
nomological net. To address this need, the focus of this paper is to derive such a measurement for
dynamic interaction.
Figure 4: Control theory feedback loop, derived from Raudsepp (2007)
Dynamic interaction's underpinnings are found in system control theory, which spans
many academic disciplines ranging from engineering to economics and is primarily concerned with
influencing the behavior of dynamic systems (Lewis, 1992). Specifically stated, "control theory is the
area of application-oriented mathematics that deals with the basic principles underlying the analysis
and design of control systems. To control an object means to influence its behavior so as to achieve a
desired goal" (Sontag, 1998; pg. 1). The majority of control theory applications incorporate some
variation of a feedback loop; as Figure 4 illustrates, a feedback loop has three general phases: A) input
values, B) process input and calculate output, C) evaluate output and, if necessary, iterate back to step A)
and adjust input values (Raudsepp, 2007).
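The three-phase loop just described can be sketched in a few lines of Python. This is a minimal illustration, not anything prescribed by the control theory literature: the function names, the target-seeking example, and the 0.01 tolerance are all invented for demonstration.

```python
# A minimal sketch of the three-phase control feedback loop:
# (A) input values, (B) process input and calculate output,
# (C) evaluate output and, if necessary, iterate back to (A)
# with adjusted inputs.

def feedback_loop(initial_input, process, evaluate, adjust, max_iterations=100):
    """Run a generic control feedback loop until the output is acceptable."""
    value = initial_input                          # phase A: input values
    for _ in range(max_iterations):
        output = process(value)                    # phase B: process and calculate
        if evaluate(output):                       # phase C: evaluate output
            return output
        value = adjust(value, output)              # iterate back to phase A
    return output

# Illustrative use: drive a value toward a target of 10.0.
result = feedback_loop(
    initial_input=0.0,
    process=lambda x: x,                           # identity "plant" for the sketch
    evaluate=lambda out: abs(out - 10.0) < 0.01,
    adjust=lambda x, out: x + 0.5 * (10.0 - out),  # proportional correction
)
```

The `adjust` step is where a concrete controller (or, later in this chapter, the user) closes the loop by revising the inputs.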
A common application of control feedback loops is machine learning, which includes but is
not limited to autonomous robots, fuzzy logic, intelligent systems, neural networks, and database
autonomies (self-tuning databases). For example, database autonomies have become more popular as
databases have become increasingly large and complex and the human resource cost of administering
these databases has also grown. To help ease the cost of ownership of large databases, researchers
have begun developing self-managing databases (ADBMS) that automatically configure and manage
their resources (Devraj, 2007). Essentially, an ADBMS is a control feedback loop that oversees the
database: it collects and analyzes statistics, determines whether performance is satisfactory, and
then takes appropriate action to resolve performance issues if they exist (Martin et al, 2006).
The control feedback loop is also being applied in the context of dynamic KBS, related to the
brain-computer interaction subset of machine learning. While there is no physical connection
involved, the KBS includes the user's cognition process in developing a solution through dynamic
interaction. KBS can be effective in supporting unstructured decisions when they are designed with
feedback loops that allow the user to influence the behavior of the system so as to achieve the desired
solution by evaluating alternative solutions (Sontag, 1998). A good example of knowledge based
support being applied to unstructured decisions via dynamic interaction is an iterative expert
system built for solving shipment consolidation problems. Lau and Tsui (2006) developed such a
system that adopted rule-based reasoning to provide expert advice for cargo allocation and included an
iterative improvement mechanism that evaluates different outcomes until an optimal solution is reached.
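The iterative-improvement pattern used by systems such as Lau and Tsui's can be sketched as a greedy local search: start from a candidate solution, repeatedly try neighbouring solutions, and keep any improvement until none is found. The neighbourhood (adjacent swaps of a loading order), the cost function, and the toy shipment data below are invented stand-ins for illustration, not the published heuristics.

```python
# Sketch of an iterative-improvement mechanism: accept the first
# improving neighbour of the current candidate, and repeat until
# no neighbour improves the cost.

def iterative_improvement(candidate, neighbours, cost):
    """Greedy local search over candidate solutions."""
    improved = True
    while improved:
        improved = False
        for alternative in neighbours(candidate):
            if cost(alternative) < cost(candidate):
                candidate = alternative
                improved = True
                break
    return candidate

def adjacent_swaps(order):
    # Neighbourhood: every single swap of adjacent shipments.
    for i in range(len(order) - 1):
        swapped = list(order)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        yield tuple(swapped)

# Toy consolidation example: loading heavier shipments earlier is cheaper.
weights = {"a": 3, "b": 9, "c": 5}
loading_cost = lambda order: sum(i * weights[s] for i, s in enumerate(order))

best = iterative_improvement(("a", "b", "c"), adjacent_swaps, loading_cost)
# The search converges to a descending-weight loading order.
```

A real consolidation system would use domain rules to generate candidate allocations, but the loop structure is the same.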
3.3 Hypothesis Model
Examining the predictive ability of a measurement scale and its nomological validity
(relationship with other relevant constructs) requires identifying the constructs within a nomological
network of consequential variables (Bhattacherjee, 2002). Figure 5 illustrates the hypothesized
nomological network for dynamic interaction, which has been constructed from prior literature, and
serves as the research model for this study. The hypothesized consequential constructs of dynamic
interaction are perceived reliability (trust in the predictions made), perceived usefulness, and
behavioral intention to use. The hypothesized relationships between dynamic interaction and these
consequential constructs are discussed below.
The primary conceptualization of DSS/KBS trust is based on the assumption that users
generally adapt their trust levels to accommodate different levels of recommendation quality. That is,
the more reliable the system is in providing appropriate decision recommendations, the higher the level
of trust the user will have in the system. For example, one study examining the relationship between
system trust and the reliability of diagnostic and decision support aids found they were sensitive to
different levels of aid reliabilities (Wiegmann et al, 2001). When knowledge based systems are
applied in unstructured domains a second constraint on user trust comes into play. Users distrust
system recommendations because proposed solutions cannot be exhaustively justified, that is, a lack of
system transparency negatively impacts trust in the reliability of the recommendations made by the
systems (Furtado, 2004). The majority of KBS research covering this reliability concern is focused on
developing more robust explanation facilities for the KBS (e.g. Arnold et al, 2006), however some
researchers are calling for KBS to be more dynamically interactive (Furtado, 2004). Dynamic
interaction serves to increase the user's involvement in the decision-making process and awareness of
the processes employed by the system when arriving at a decision recommendation. Therefore, it is
hypothesized that dynamic interaction will induce trust through enhanced system transparency and thus
have a positive effect on the user's perception of the reliability of the recommendations made by the system.
H1: Dynamic interaction will have a positive influence on perceived reliability.
In unstructured decision domains there is often a particular person that is held legally
accountable for the decisions being made (e.g. doctors, stock brokers, executives). In these situations
the KBS no longer performs autonomously but rather performs as a decision support tool like a DSS
(e.g. Rapanotti, 2004). As such, recent studies have investigated the user's
cognition process in unstructured domains, with the premise that, since the system is used to
support the user's decision process, it should be designed around that process. Various
studies have shown that the user's decision process in unstructured domains is iterative, which
suggests that when dynamic interaction is built into the system to support the user's iterative thought
process, it will have a positive influence on user perceptions of the decision support tool (Kim et al,
2007; Mintzberg et al, 1976; Witte, 1972). In these cases, the addition of dynamic interaction serves to
improve user perceptions of how well a system is performing its tasks, and thus serves to enhance
perceptions of the output quality of the system (Furtado, 2004). A theoretical extension to the
technology acceptance model (TAM2) found that output quality has a positive effect on the perceived
usefulness of systems (Venkatesh and Davis, 2000). Thus it is hypothesized:
H2: Dynamic interaction will have a positive influence on perceived usefulness.
The relationship between Trust and TAM2 is well established in IS literature (e.g. Gefen et al,
2003a; Tung et al, 2008; Wu and Chen, 2005). Perceived reliability is a dimension of trust that is
commonly investigated when a system's ability is a concern (Madsen and Gregor, 2000). In these
cases practitioners are directed to include trust-building mechanisms in the application (Gefen et al,
2003a). The degree to which these trust-building mechanisms enhance perceived reliability determines
the degree to which the user's behavioral intention to use the system (TAM2) is positively affected
(Gefen et al, 2003a).
H3: Perceived reliability will have a positive influence on behavioral intention to use.
Both perceived usefulness and behavioral intention to use belong to TAM, and the
relationship between these two constructs has been confirmed in a plethora of different domains (e.g.
Email, Lotus Notes, Online Auctions) (Davis, 1986; Davis, 1989; Gefen et al, 2003a; Gregg and
Walczak, 2006). Therefore, it is hypothesized that in the domain of dynamic interaction being applied
to KBS, perceived usefulness will have a positive influence on behavioral intention to use (Davis,
1986; Davis, 1989).
H4: Perceived usefulness will have a positive influence on behavioral intention to use.
To evaluate the nomological validity of dynamic interaction within the research model in
Figure 5, measurements were derived from previous studies and were reworded to fit the context of the
NCAA knowledge based prediction system developed for this study. The measurements for perceived
usefulness and behavioral intention were derived from Davis's (1989) measurement
development paper for TAM, and the measurement for perceived reliability was derived from Madsen
and Gregor's (2000) study on various aspects of trust. The scale items for trust were all worded
positively, since negative items reflect distrust, which the literature describes as an entirely different
construct rather than a polar opposite of trust (Bhattacherjee, 2002; Lewicki et al, 1998). The
exact wording of the measurements can be found in Table 4.
(Perceived Usefulness) Derived From
The system would be helpful to you in filling out your tournament bracket. (Davis, 1989)
The system would make it easier for you to make your tournament predictions. (Davis, 1989)
The system would make the time you spent on your bracket more effective. (Davis, 1989)
Overall, using the system would be better than not using the system. (Davis, 1989)
(Perceived Reliability)
The system performs reliably. (Madsen and Gregor, 2000)
The system analyzed the problem domain consistently. (Madsen and Gregor, 2000)
The system provided the advice I required to make my decision. (Madsen and Gregor, 2000)
I feel that I can rely on the system to function properly. (Madsen and Gregor, 2000)
(Behavioral Intention to Use)
I intend to use this tool to gather information for the NCAA basketball tournament. (Davis, 1989)
I intend to use this tool to help me fill out a tournament bracket. (Davis, 1989)
I intend to use this tool to help me predict the tournament outcomes. (Davis, 1989)
I intend to use this tool to review potential outcomes of the tournament. (Davis, 1989)
Table 4: Derived measures
Figure 5: Research model for dynamic interaction
3.4 Measurement Development and Pretest
Despite the long history of dynamic interaction as a system attribute, and its growing
popularity via control feedback loops in emerging technologies (e.g. mashups), IS literature lacks a
measurement scale to quantify dynamic interaction. The focus of this section is to derive such a
measurement in the context of KBS being applied to unstructured decision domains, in an effort to
provide a measurement scale that academicians can use to quantify dynamic interaction and that
practitioners can use to prescribe the aspects of dynamic interaction that are warranted in different
environments. The validity of a measurement scale begins with the initial item construction (Nunnally,
1978); "Rather than test the validity of measures after they have been constructed, one should ensure
the [content] validity by the plan and procedures for [instrument] construction" (Davis, 1989; pg. 323).
Therefore, to promote construct validity from the onset, a top-down substrata specification and bottom-
up item matching approach was used to promote adequate coverage of the domain by the scale items
(Bhattacherjee, 2002). First, the substrata of dynamic interaction were conceptualized in the KBS
domain from control theory (section 3.4.1). Next, an extensive review of IS literature was conducted
to derive scale items from studies focused on the particular substrata of dynamic interaction (section
3.4.2). Lastly, a Q-sort was conducted to pretest and refine the initial measurement scale (section 3.4.3).
3.4.1 Conceptualization of Dynamic Interaction Substrata
The first step in measurement development consisted of using previous literature to
hypothesize the substrata of dynamic interaction. Despite the vast array of differing contexts that
control theory exists in (e.g. atmospheric science, biology, economics), the basic process used to
control these dynamic systems remains quite true to the basic feedback loop prescribed by control
theory (Cox, 2000; North and Macal, 2007; Thomas et al, 1995). The same is true when control theory
is applied to dynamic information systems through dynamic interaction. Therefore, the control process
prescribed by control theory provides theoretical grounding for developing the substrata of dynamic
interaction as an information system construct. These substrata are illustrated in Figure 6 and include
inclusive, incremental, and iterative system traits (Lewicki et al, 1998; Sontag, 1998).
Figure 6: Substrata of dynamic interaction as derived from control theory
Traditional knowledge based systems are designed with a user interface component that
enables the user to input the information the system requests (Aronson and Turban, 2001).
Since these systems are designed for structured domains (e.g. training facilities, diagnostic
applications), the user's opinion is generally not included in the system's cognition process, because
the problem has a definable right solution that is codified in the system knowledge base and
then explained by the system's explanation facility (Luger, 2005). However, as knowledge based
systems are beginning to be designed for unstructured environments, the user's inclusion in the
system's cognition process becomes warranted "to enhance the acceptability of the reasoning process
and of the solutions proposed by the KBS" (Furtado, 2004).
System architects often design incremental systems when dealing with domains that are ever
changing, exceedingly complex, or contain uncertainty (Aist et al, 2007; Lin and Lee, 2004; Newkirk
and Lederer, 2006). In the context of data mining, maintaining accurate query cost statistics becomes
extremely difficult because of continuous changes being made to the database (inserts, updates, and
deletes), and the large cost (time) of reanalyzing extremely large databases. For example, Lin and Lee
(2004) present an approach to maintaining these query cost statistics by breaking them into smaller
statistics, which are incrementally updated, and then aggregated together. Similar to domains that are
ever changing (e.g. database cost statistics), decisions in domains that contain uncertainty are
extremely difficult to quantify; researchers have found that systems designed with an incremental
process are able to identify solutions more quickly and accurately in these domains (Aist et al, 2007;
Newkirk and Lederer, 2006).
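The incremental-statistics idea attributed above to Lin and Lee (2004) can be sketched simply: rather than re-scanning a large table after every modification, keep small per-partition statistics that are updated in O(1) on each insert and aggregated on demand. The partitioning scheme and the chosen statistic (row count and mean query cost) are illustrative assumptions, not the published design.

```python
# Sketch of incrementally maintained statistics: small per-partition
# counters are updated on each insert and aggregated on demand,
# avoiding a full re-scan of the underlying data.

class PartitionStats:
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def insert(self, value):
        # Incremental update: O(1) per modification, no re-scan.
        self.count += 1
        self.total += value

class IncrementalCostStats:
    def __init__(self, num_partitions):
        self.partitions = [PartitionStats() for _ in range(num_partitions)]

    def insert(self, key, value):
        self.partitions[hash(key) % len(self.partitions)].insert(value)

    def mean_cost(self):
        # Aggregate the small per-partition statistics.
        count = sum(p.count for p in self.partitions)
        total = sum(p.total for p in self.partitions)
        return total / count if count else 0.0

stats = IncrementalCostStats(num_partitions=4)
for key, cost in [("q1", 2.0), ("q2", 4.0), ("q3", 6.0)]:
    stats.insert(key, cost)
# mean_cost() now reflects all inserts without re-reading the rows.
```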
Decision theorists long ago determined that the human cognition process in unstructured
environments is incremental and iterative in nature (Mintzberg et al, 1976; Witte, 1972). Recent
literature has found that when decision support systems are designed to track with the user's iterative
cognitive process, decision time and accuracy improve (Kim et al, 2007). Kim et al (2007) did
call for further study investigating both linear and non-linear decision models in unstructured
environments, which is addressed by this study.
3.4.2 Initial Item Selection
The item selection process used was designed to achieve two main goals. The first goal was
to determine the appropriate number of items to quantify the influence of the inclusion, incremental,
and iterative factors on the decision making process without producing redundant scale items. The
next goal was to select items that represented the actual dimensions of dynamic interaction as applied
to information systems. To account for scale refinement 15 scale items (5 per substrata) were
generated from past IS studies concerning different aspects of dynamic interaction. One observation
that should be noted is the extent to which IS literature covers the substrata of dynamic interaction
(inclusive, incremental, and iterative) identified in the previous section. This provides
justification for the use of control theory as the theoretical foundation of dynamic interaction in the
realm of IS. Table 5 contains the five scale items chosen for each substratum, along with details concerning
the study from which each item was derived.
3.4.3 Scale Pretest and Refinement
Pretest participants consisted of 10 database application developers, who were asked
to perform a prioritization task and then a categorization task. For prioritization, they were given a
Scale Item Substrata Source Synopsis
(1) The system progressively interacted with you Inclusive (Akoumianakis et al, 2000) Developed a methodology to dynamically enhance the interaction with multiple artifacts to suit different usage patterns, user groups, or contexts of use.
(2) The system included you in the decision process Inclusive (Sakai and Masuyama, 2004) Developed a search summarization system that included dynamic interaction; their system performed better than systems without interaction.
(3) The system involved you in the decision process Inclusive (Furtado, 2004) Investigated a new methodology for interactive KBS that include the user in the process.
(4) Your opinion influenced the system's suggestion Inclusive (Furtado, 2004) Investigated a new methodology for interactive KBS that include the user in the process.
(5) The system incorporated your ranking into the solution Inclusive (Snidaro and Foresti, 2007) Provided an interactive knowledge representation model for ambient intelligence computing.
(6) The system used a phased approach to develop a solution Incremental (Lin and Lee, 2004) Demonstrated phased, incremental updates of database statistics as a method of increasing performance.
(7) The system added to the proposed solution until it was satisfactory. Incremental (Newkirk and Lederer, 2006) Discussed incremental strategic information systems planning in an uncertain environment.
(8) The system developed the solution step by step until it was satisfactory. Incremental (Liu and Lin, 2004) Developed an incremental text mining technique to efficiently identify the users current interest by mining the user's information folders.
(9) The system developed the solution in increments Incremental (Aist et al, 2007) Developed an incremental dialogue system faster and preferred over non-incremental counterpart.
(10) The system incrementally worked with you to develop your final solution Incremental (Kim et al, 2007) Discussed the incremental human decision-making behavior in unstructured environments.
(11) The system allowed you to go back and change your rankings if you wanted to. Iterative (Anick and Tipimeni, 1999) Developed a terminological feedback query composer for iterative information seeking.
(12) The system adjusted the computer's prediction if you changed your team rankings. Iterative (Huang, 2003) Developed an iterative dynamic programming approach.
(13) The system allowed you to review and revise your candidate solution. Iterative (Koyuturk and Aykanat, 2005) Proposed an Iterative-improvement-based de- clustering method that utilizes the available information on query distribution.
(14) The system iteratively worked with you until you were satisfied with your solution. Iterative (Berente and Lyytinen, 2007) Discussed various aspects of iteration in a complex decision domain.
(15) The system continually allowed you to revise your solution until you were satisfied. Iterative (Lau and Tsui, 2006) An Iterative Heuristics Expert System for Enhancing Consolidation Shipment Process in Logistics Operations.
Table 5: Initial scale items for dynamic interaction
definition for dynamic interaction and then asked to rank each statement by how well it matches the
definition. For the categorization task, participants were given 3 envelopes, each containing the name
and definition of one of dynamic interactions three substrata. Next they were given 15 index cards
with each containing one of the scale items and were instructed to match each item with one of the
substrata definitions. A fourth envelope labeled "Does not fit anywhere" was provided so that
participants could discard items they felt did not match anywhere. The categorization data were then
cluster analyzed by placing in the same cluster items that 6 or more respondents placed in the same
category (Davis, 1989). "The clusters are considered to be a reflection of the domain substrata for each
construct and serve as a basis of assessing coverage, or representativeness, of the item pools" (Davis,
1989; pg. 325).
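The clustering rule just described is simple enough to state in code: an item joins a category's cluster when 6 or more of the 10 pretest respondents placed it in that category, and falls out otherwise. The respondent votes below are invented for illustration; they are not the actual pretest data.

```python
# Sketch of the Q-sort clustering rule: an item is assigned to a
# category if at least `threshold` respondents placed it there;
# items below the threshold are left unclustered (candidates for removal).

from collections import Counter

def cluster_items(placements, threshold=6):
    """placements: {item: list of categories chosen by each respondent}."""
    clusters = {}
    for item, votes in placements.items():
        category, count = Counter(votes).most_common(1)[0]
        if count >= threshold:
            clusters.setdefault(category, []).append(item)
        else:
            clusters.setdefault("unclustered", []).append(item)
    return clusters

# Hypothetical categorization votes from 10 respondents.
placements = {
    "item2": ["Inclusive"] * 8 + ["Iterative"] * 2,
    "item6": ["Incremental"] * 7 + ["Inclusive"] * 3,
    "item1": ["Inclusive"] * 4 + ["Incremental"] * 3 + ["Iterative"] * 3,
}
clusters = cluster_items(placements)
# item1 falls below the 6-vote threshold and is left unclustered.
```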
Item Substrata Rank Cluster Removed
(1) The system progressively interacted with you Inclusive 15 *
(2) The system included you in the decision process Inclusive 10 A
(3) The system involved you in the decision process Inclusive 2 A
(4) Your opinion influenced the system's suggestion Inclusive 7 A
(5) The system incorporated your ranking into the solution Inclusive 12 *
(6) The system used a phased approach to develop a solution Incremental 4 B
(7) The system added to the proposed solution until it was satisfactory. Incremental 9 B
(8) The system developed the solution step by step until it was satisfactory. Incremental 14 *
(9) The system developed the solution in increments Incremental 6 B
(10) The system incrementally worked with you to develop your final solution Incremental 8 B
(11) The system allowed you to go back and change your rankings if you wanted to. Iterative 13 *
(12) The system adjusted the computer's prediction if you changed your team rankings. Iterative 1 C
(13) The system allowed you to review and revise your candidate solution. Iterative 11 C
(14) The system iteratively worked with you until you were satisfied with your solution. Iterative 5 C
(15) The system continually allowed you to revise your solution until you were satisfied. Iterative 3 C
Table 6: Pretest results perceived dynamic interaction
The results of both the prioritization and categorization are provided in Table 6. Cluster A
contained 3 scale items representing the inclusive substrata, cluster B contained 4 scale items
representing the incremental substrata, and cluster C contained 4 scale items representing the
iterative substrata. Scale items 1, 5, 8, and 11 were ranked 15th, 12th, 14th, and 13th respectively; they
fell outside of the three substrata clusters and thus were removed from the measurement.
3.5 Exploratory Study
The pretest procedure refined the initial measurement scale from 15 to 11 items, with 3 items
remaining in the inclusive substrata, and 4 items in both the incremental and iterative substrata. An
exploratory study was conducted to further refine the measurement scale. The decision domain
selected for the exploratory study was that of predicting the outcome of the National Collegiate
Athletic Association (NCAA) men's basketball tournament. It is very popular amongst office
colleagues in the United States to conduct pools where participants fill out a tournament bracket with
the goal of predicting the outcome with the highest accuracy. Participants pay an entrance fee and the
person with the highest prediction accuracy receives a monetary award comprised of the aggregated
entrance fees. There are numerous variables that can potentially impact the outcome of the NCAA
tournament (e.g. win%, conference, experience, average height, ranking, seed), which adds a
considerable amount of uncertainty to this decision domain. Additionally, the nature of a tournament
bracket (e.g. IF team-A beats team-B AND team-D beats team-C THEN team-A plays team-D) is
similar to that of a KBS heuristic decision tree. Therefore, since this decision domain contains
uncertainty and is also conducive to a KBS inference engine, it was selected for this study.
To conduct the experiment, two KBS were developed to aid users in predicting the outcome
of the NCAA tournament. The first system was a traditional KBS which included an explanation
facility that explained the most significant predictors of past tournaments and the current values of
these statistics. The second system was built upon the first, but included a dynamically interactive
interface that had inclusive, incremental, and iterative functionality. To include the user's opinion in
the decision process, the system allowed users to input their own rankings of the tournament teams.
Next, the system presented an incremental solution that was composed of a (user selected) combination
of the KBS's prediction and the user's opinion (e.g. the user could select a solution based 50% on the
KBS and 50% on their own rankings). Finally, the system was designed to be iterative by allowing the
user to revisit one, or both, of the previous decision steps (inputting rankings and selecting a
combination ratio). The iterative decision process of the second KBS would continue until the user
was satisfied with their solution.
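The second system's interaction loop can be sketched as follows: the user supplies rankings (inclusive), the system blends its own prediction with those rankings at a user-chosen ratio (incremental), and the user may revisit either step until satisfied (iterative). The team names, scores, and the linear blending rule are illustrative assumptions, not the actual implementation.

```python
# Sketch of the dynamically interactive KBS loop described above.

def blend_scores(kbs_scores, user_scores, kbs_weight):
    """Combine system and user team scores, e.g. 50% KBS / 50% user."""
    return {
        team: kbs_weight * kbs_scores[team] + (1.0 - kbs_weight) * user_scores[team]
        for team in kbs_scores
    }

def interactive_prediction(kbs_scores, get_user_scores, get_weight, satisfied):
    """Iterate: collect inputs, blend, present, repeat until accepted."""
    while True:
        user_scores = get_user_scores()        # inclusive step
        weight = get_weight()                  # incremental step
        solution = blend_scores(kbs_scores, user_scores, weight)
        if satisfied(solution):                # iterative step
            return solution

# One-pass illustration with fixed inputs and immediate acceptance.
kbs = {"Kansas": 0.9, "Memphis": 0.6}
solution = interactive_prediction(
    kbs,
    get_user_scores=lambda: {"Kansas": 0.5, "Memphis": 0.9},
    get_weight=lambda: 0.5,
    satisfied=lambda s: True,
)
# At a 50/50 blend: Kansas 0.7, Memphis 0.75.
```

In the deployed system the three callbacks would be backed by UI controls, but the loop structure mirrors the control theory feedback loop.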
March Mayhem Advisor | FREE NCAA BBall Bracket Tool | NCAA Pool Prediction Tool
Need help filling out your bracket? | Free March Madness Predictor | Need help filling out your bracket?
FREE Prediction Tool | Fill out your bracket FREE | FREE Prediction Tool
www.cudenver.edu | www.cudenver.edu | www.cudenver.edu
Figure 7: Google ads used to solicit participation in the exploratory study
To solicit participation, a Google AdWords campaign was run for four consecutive days
leading up to the seeding of the 2009 NCAA basketball tournament. Figure 7 contains the ads that
were used to solicit participation in the study. Ad clicks were bid at $0.10 per click, with a $25.00
daily budget limit. Over the four-day period that the ad campaign ran, the ads produced 1000
clicks (a number restricted by the daily budget limit), with 133 individuals participating in the study,
yielding a 13.3% response rate.
For scale validation, the Partial Least Squares (PLS)1 Structural Equation Modeling (SEM)
method was performed on 67 randomly selected observations from the exploratory test data (a holdout
sample of 66 was retained for further analysis). The 11-item scale produced by the pretest in section
3.4.3 was modeled as a second order factor model, with items incl2-4, incr1-2 & 4-5, and iter2-5 as
reflective indicators of latent first order factors of the Inclusive, Incremental, and Iterative substrata,
and these three first-order factors as formative indicators of the Dynamic Interaction construct. A
second order formative model is reasonable for this study because a change in one aspect of dynamic
interaction does not necessarily imply a change in the others (Chin and Gopal, 1995). For example,
one can include users' opinions without allowing the user to iterate on those opinions throughout the
decision making process.
As with other multivariate statistical packages (e.g. AMOS), PLS-Graph does not have built-in
functionality to model second order constructs; however, there are methods that can be applied to
accomplish a second order construct model in PLS-Graph, the easiest of which is the repeated
indicators approach, known as the hierarchical component model (Chin, 1998; Chin, 2000; Wold, 1989).
When using the hierarchical component model, researchers suggest that it is ideal to have an equal
number of indicators for each 1st order construct (Chin, 1998). Therefore, items incr5 and iter3 were
removed because they were the lowest loading indicators on the Incremental and Iterative substrata,
which each had four indicators, compared to Inclusive's three. The revised second order factor model
illustrated in Figure 8 has an equal number of indicators for each 1st order factor. The relation
between the Inclusive, Incremental, and Iterative substrata and Dynamic Interaction is modeled using a
formative measurement model because causality flows from the indicators to the dynamic interaction
construct (Chin, 1998).
1 PLS Graph 3.0 was used to perform the PLS analysis
Figure 8: Revised 2nd order factor PLS model with 9 scale items
Removing the incr5 and iter3 scale items reduced the T-statistic for the Inclusive path coefficient from
28 to 20, with Incremental's remaining unchanged at 30 and Iterative's being reduced from 24 to
20. Despite the nominal variations in the path coefficient T-statistics after these items were removed,
all three path coefficients remained significant at the .01 level. When modeling the paths from the first
order to the overall second order construct using PLS, the R-square for the second order construct is
always 1.0. However, the reason for using a formative (or molar) model is to examine the relative path
weights as this "molar construct" is used to predict other constructs in the model (Chin and Gopal,
1995). The results of the second order PLS analysis suggest that the Inclusive, Incremental and
Iterative substrata contribute almost equally to the Dynamic Interaction construct.
3.6 Confirmatory Study
To investigate the nomological validity of the proposed measurement scale, a confirmatory
study was conducted to evaluate the hypothesis model developed in section 3.3. The same two NCAA
knowledge based prediction systems used in the exploratory study were used in the confirmatory
study, however instead of soliciting participation via a Google ad campaign, email solicitations were
used. Five listservs were obtained which together contained 316 email addresses of individuals who
participated in various NCAA March Madness pools in 2008. An email was sent to each of the
individuals in the aggregated listserv that invited them to participate in the study. Since the study was
conducted during the weeks prior to the seeding of the 2009 tournament, hypothetical teams were
seeded into each of the two knowledge based systems. As an incentive for participating in the study,
participants were given access to the system loaded with the actual tournament teams once the
tournament seeding took place.
Of the 316 invitations sent, 86 individuals participated, yielding a 27% response rate for the
confirmatory study. 62% of the respondents were male, and the overall participant population had an
average age of 35. Respondents were asked to rate their overall familiarity with the NCAA men's
basketball tournament on a scale of 1 to 4 (1 = Not at all familiar, 2 = Vaguely familiar, 3 = Familiar,
4 = Very familiar); the average self-evaluation of contextual familiarity was 2.6. Descriptive statistics
of the dataset are presented in Table 7. For the system lacking dynamic interaction the means of the
constructs ranged from 2.04 to 2.17; conversely, the means of the constructs for the system that
included dynamic interaction ranged from 6.06 to 6.33.
Construct                        | Mean (non-DI) | Mean (DI) | Standard Deviation
(DI) Dynamic Interaction         | 2.08          | 6.06      | 2.29
(PR) Perceived Reliability       | 2.04          | 6.23      | 2.26
(PU) Perceived Usefulness        | 2.17          | 6.33      | 2.25
(BI) Behavioral Intention to Use | 2.08          | 6.25      | 2.19
All items were measured on a 7-point Likert scale (1 = Strongly Disagree to 7 = Strongly Agree).
Table 7: Descriptive statistics
Item  | DI  | PR  | PU  | BI
incl2 | .97 | .74 | .77 | .75
incl3 | .97 | .76 | .78 | .77
incl4 | .97 | .76 | .76 | .79
incr1 | .95 | .74 | .76 | .75
incr2 | .96 | .76 | .79 | .77
incr4 | .97 | .76 | .78 | .77
iter2 | .98 | .77 | .81 | .78
iter4 | .97 | .77 | .81 | .78
iter5 | .99 | .78 | .81 | .78
pr1   | .79 | .97 | .78 | .81
pr2   | .75 | .99 | .77 | .76
pr3   | .78 | .98 | .81 | .80
pr4   | .75 | .98 | .77 | .76
pu1   | .81 | .78 | .99 | .80
pu2   | .81 | .78 | .99 | .80
pu3   | .81 | .80 | .98 | .82
pu4   | .81 | .81 | .98 | .79
bi1   | .78 | .80 | .79 | .99
bi2   | .80 | .80 | .81 | .99
bi3   | .79 | .79 | .82 | .99
bi4   | .81 | .81 | .81 | .99
Table 8: Loadings and cross-loadings
The psychometric properties of the research model were evaluated by examining item
loadings, internal consistency, and discriminant validity. Researchers suggest that item loadings and
internal consistencies greater than .70 are considered acceptable (Agarwal and Karahanna, 2000). As
can be seen by the shaded cells in Table 8, all item loadings surpass this threshold. Internal
consistency is evaluated by a construct's composite reliability score. The composite reliability scores
are located in the leftmost column of Table 9 and are more than adequate for each construct. There are
two parts to evaluating discriminant validity; firstly, each item should load higher on its respective
construct than on the other constructs in the model, and secondly, the Average Variance Extracted
(AVE) for each construct should be higher than the inter-construct correlations (Agarwal and
Karahanna, 2000). In Table 8, by comparing the shaded cells to the non-shaded cells, we can see that
all items load higher on their respective construct than the other constructs in the research model.
Likewise, in Table 9, by comparing the shaded cells to the non-shaded cells, we can see that the AVE
for each construct is higher than the inter-construct correlations without exception. These two
comparisons suggest that the model has good discriminant validity.
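The reliability and validity checks described above can be sketched numerically. The following is an illustrative computation, with hypothetical loadings and correlations rather than the study's actual data:

```python
# Illustrative internal-consistency and discriminant-validity checks
# from standardized PLS item loadings. All numbers are hypothetical
# examples, not the study's dataset.

def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 /
    ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings for a four-item reflective construct
pr_loadings = [0.97, 0.99, 0.98, 0.98]

cr = composite_reliability(pr_loadings)
ave = average_variance_extracted(pr_loadings)
print(f"Composite reliability: {cr:.2f}")  # well above the .70 threshold
print(f"AVE: {ave:.2f}")

# Discriminant-validity check in the convention used here: the
# construct's AVE should exceed its correlations with the other
# constructs (hypothetical correlations below).
inter_construct_corrs = [0.86, 0.89, 0.87]
print(all(ave > r for r in inter_construct_corrs))  # True
```

Item loadings above .70, composite reliabilities above .70, and AVE values exceeding the inter-construct correlations together support the convergent and discriminant validity claims made in the text.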
Composite Reliability | Construct | DI  | PR  | PU  | BI
.99                   | DI        | .93 |     |     |
.98                   | PR        | .86 | .94 |     |
.99                   | PU        | .89 | --  | --  |
.99                   | BI        | .87 | .87 | .87 | .98
Table 9: Internal consistency and discriminant validity
The results of the PLS SEM analysis are presented in Figure 9. Perceived reliability had an
R-Square of .88, with R-square values of .93 and .94 for perceived usefulness and behavioral intention
to use respectively. This means that 88% of the variance in perceived reliability and 93% of the
variance in perceived usefulness is explained by dynamic interaction, and 94% of the variance in
behavioral intention to use is explained by perceived reliability and perceived usefulness (Agarwal and
Karahanna, 2000). The path coefficients between dynamic interaction, perceived reliability, and
perceived usefulness were significant at the .001 level, while the path coefficients between perceived
reliability, perceived usefulness, and behavioral intention to use were significant at the .01 level. As
summarized in Table 10, all four hypotheses were supported, which supports the nomological validity
of the dynamic interaction measurement scale.
Figure 9: PLS SEM results
H1: DI -> PR | Yes
H2: DI -> PU | Yes
H3: PR -> BI | Yes
H4: PU -> BI | Yes
Table 10: Summary of hypothesis tests
3.7 Limitations and Future Research
While the immediate focus of this paper concerns the quantification of dynamic
interaction as a system attribute, the domain which was investigated (KBS in unstructured domains)
raises some interesting questions that are not addressed by this study, namely: do users really know
what is important when making complex (and potentially nonlinear) decisions? Traditionally, in
structured domains, KBS perform as autonomous decision makers and outline the relevant factors
involved in the decision through the explanation facility. However, unstructured domains often require
a human to be held legally responsible for the final decision (e.g., a doctor's diagnosis). This study
could benefit from further investigation of this middle ground, where the role of KBS is limited to
decision support and to framing the problem for the user by identifying relevant decisional
antecedents. Additionally, further study regarding how decision quality is affected when the user's
opinion is included in the system's cognition process is warranted.
It is common practice to include a mix of negatively worded and positively worded items in
measurement scales to help reduce response bias. However, this study included several items related to
trust. The literature describes trust and distrust as entirely different constructs and not simply
polar opposites (Bhattacherjee, 2002; Lewicki et al, 1998). Therefore the scale items for perceived
reliability (trust) were all worded positively, to ensure that trust, and not distrust, was being measured.
For the purposes of this study, it was decided that all items should be positively worded so that all
scales were measured the same way. Future research may need to examine the impact of including
negatively worded scale items, particularly for the dynamic interaction scale.
The contribution of this study was limited to the development of a measurement scale for
dynamic interaction as a system attribute. To accomplish this, an experiment was developed where
dynamic interaction was controlled for between two NCAA prediction KBS. While the context of the
NCAA tournament was interesting and attracted participation in the study, the measurement needs to
be applied to a business information system context so that the measurement's nomological net can be
further investigated. Additionally, the measurement scale's nomological net should be expanded to see
which other constructs are potentially impacted by dynamic interaction.
As illustrated by Figure 10, this study validated the relationships between dynamic
interaction, perceived reliability, perceived usefulness, and behavioral intention to use. There are a
number of different IS constructs that may be impacted as well. While perceived reliability is an
aspect of trust, IS literature could benefit from further investigation into the other trust related
constructs that are influenced by dynamic interaction. In unstructured decision domains, humans
iteratively assimilate new information and compare it to existing knowledge (Mintzberg et al, 1976;
Witte, 1972).
Figure 10: Potential expansions of dynamic interaction's nomological net
Since dynamic interaction is an effort to build the KBS around the user's iterative
decision process, it would be interesting to investigate the role dynamic interaction plays on cognitive
absorption. From a similar perspective, since dynamic interaction enables the KBS to track with the
user's decision process, it would be interesting to investigate how self-efficacy and personal
innovativeness are affected.
Another limitation of this research concerns the decision domain that was used to evaluate the
hypothesis model. Considering that predicting the NCAA basketball tournament is an unstructured
domain, a traditional KBS (without dynamic interaction) is not a suitable decision aid. As such, an
observation made was that participants gave high ratings to the KBS with dynamic interaction, and low
ratings to the KBS without dynamic interaction. This could explain the high R-square values reported
in Figure 9 for perceived reliability, perceived usefulness, and behavioral intention to use. As such,
to address this limitation, there is an opportunity for future research to evaluate systems with
varying levels of dynamic interaction.
Lastly, while this study focused on the domain of knowledge based systems being applied to
unstructured domains, the measurement should be evaluated in other architectural domains that model
control feedback loops. For instance, autonomic database systems would be a suitable architecture to
apply this measurement scale to, because they are viewed as control feedback loops, which is the model
from which this measurement was derived (Martin et al, 2006).
The use of knowledge based systems in unstructured domains is expanding; thus it is
important that researchers understand the factors that impact the use of these systems. Past researchers
have hypothesized that including dynamic interaction in these systems will improve trust in the
reliability of these systems (e.g. Furtado, 2004). This study supports this, finding that including
dynamic interaction in a KBS operating in an unstructured domain will increase both the
perceived reliability and the perceived usefulness of the system, ultimately leading to an increased
intention to use the system.
One of the contributions of this study is the development of a measurement scale for dynamic
interaction based upon the theoretical underpinnings of control theory. This phase of the research
found that the substrata of dynamic interaction include the ability of the system to incorporate the
opinions of the user, to incrementally adjust the solution during the decision process, and to iterate
through the decision process until an acceptable solution is reached. Results suggest that when
modeling dynamic interaction, a second-order formative measurement scale is appropriate. As
originally suggested by Chin (1995), formative second-order models are useful in cases where the goal
is to assess how much an individual factor contributes to the overall factor. This study also agrees with
Chin (1998), who argues that to understand these factors they must be demonstrated by embedding
the model within a nomological network. In this case, we used the second-order dynamic interaction
model to predict perceived reliability, perceived usefulness, and behavioral intention to use.
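As a toy illustration of the second-order formative idea, the second-order construct can be expressed as a weighted composite of its first-order substrata, where the weights indicate each substratum's relative contribution. The weights and scores below are hypothetical, not study data:

```python
# Toy second-order formative composite: Dynamic Interaction modeled as
# a weighted sum of its three substrata. All values are hypothetical.

# Hypothetical respondent scores on the three substrata (1-7 scale)
substrata = {"inclusive": 6.1, "incremental": 6.0, "iterative": 6.2}

# Hypothetical formative weights, near-equal to echo the finding that
# the three substrata contribute almost equally to the construct
weights = {"inclusive": 0.34, "incremental": 0.33, "iterative": 0.33}

dynamic_interaction = sum(weights[k] * substrata[k] for k in substrata)
print(round(dynamic_interaction, 2))  # composite score on the same scale
```

In an actual PLS estimation the weights are estimated from the data as the construct predicts its outcomes; the near-equal weights here simply mirror the reported finding.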
Future studies investigating use of knowledge based systems in unstructured domains may use
the proposed scale to evaluate the role played by dynamic interaction in their domains. The proposed
scale may be useful in examining the improvements dynamic interaction provides in trust in the
recommendations provided by the system, as well as in the perceived usefulness of the system. From a
practitioner standpoint, the dynamic interaction scale presented here provides a convenient means for
developers to assess their users' perceptions of their KBS and allows them to assess whether the levels
of inclusion, presentation of incremental solutions, and ability to iterate are appropriate to the problem
domain.
Part 2: Decision Support Mashups - Knowledge Synthesization Through Dynamic Interaction
4. Mashups: A Literature Review and Classification Framework
(Published in Future Internet Journal, 2009, 1(1), 59-87)
The evolution of the Web over the past few years has fostered the growth of a handful of new
technologies (e.g. Blogs, Wikis, Web Services). Recently mashups have emerged as the newest Web
technology and have gained lots of momentum and attention from both academic and industry
communities. Current mashup literature focuses on a wide array of issues, which can be partially
explained by how new the topic is. However, to date, mashup literature lacks an articulation of the
different subtopics of mashup research. This study presents a broad review of mashup literature to help
frame the subtopics in mashup research.
Keywords: Mashups, End-User Programming, Data Integration, Mashup Agents, Enterprise Mashups
In the past five years the web has experienced a surge in growth, a phenomenon described by
O'Reilly (2005) as the emergence of Web 2.0, a new trend for web applications that emphasizes
services, participation, scalability, remixability, and collective intelligence. While some argue that the
new applications emerging on the Web represent a gradual evolution of the Internet and not a new
version of the Web (Berners-Lee, 2009), the term Web 2.0 is commonly used to refer to the current
generation of social web applications being developed today. For example, the first metric used to
evaluate eCommerce sites simply emphasized page views, but now eCommerce sites are evaluated by
their cost per click (Fain and Pederson, 2006). In addition to eCommerce applications, other Web 2.0
applications include Blogs and Wikis, both of which foster communication, collaboration, work
processes, and knowledge sharing (O'Reilly, 2005; Cayzer, 2006; Dearstyne, 2007; Hester, 2009).
Since Web 2.0 applications facilitate user involvement in contributing to information sources, there has
been a vast increase in the amount of information and knowledge sources on the web. To foster the
aggregation of differing information sources, a content-syndication protocol (Really Simple
Syndication, or RSS) was developed that enables web sites to share their content with other
applications (Cold, 2006). While RSS feeds and web services provide the medium for aggregating
differing information sources, the latest trend in Web 2.0 research focuses on mashup applications that
are designed to synthesize knowledge by semantically connecting disjointed information sources to
simulate new knowledge (Blake and Nowlan, 2008).
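As a concrete sketch of the syndication and aggregation mechanism described above, the following parses and merges RSS 2.0 items using only the Python standard library; the feed content is a made-up example, not a real data source:

```python
# Minimal sketch of mashup-style RSS aggregation: parse RSS 2.0 XML
# and merge items from several feeds. The feed below is hypothetical.
import xml.etree.ElementTree as ET

RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def aggregate(*feeds):
    """Mashup-style aggregation: merge items from several feeds."""
    items = []
    for feed in feeds:
        items.extend(parse_items(feed))
    return items

print(aggregate(RSS, RSS))  # four items when the same feed is merged twice
```

A real mashup would fetch each feed over HTTP and add semantic matching across sources; this sketch shows only the content-syndication layer that makes such aggregation possible.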
Mashup research is gaining a lot of momentum in both the academic and industry
communities. However, to date, mashup literature lacks an articulation of the different categories of
research. To address this literary shortcoming, a methodology was developed and conducted to review
mashup literature. A review of 60 publications revealed the following six categories of mashup
research: access control and cross communication, mashup integration, mashup agents, mashup
frameworks, end user programming, and enterprise mashups. The remainder of this paper is structured
as follows. First, the methodology used to review the literature will be discussed. The literature
composing each of the 6 research categories is then discussed, including related research and
foundational concepts that mashups are built upon. Following the literature review, the characterizing
attributes from each section are aggregated into an overall mashup classification framework that can be
used by researchers to frame future research, and by web developers to aid in anticipating the
appropriate design attributes of mashup environments.
4.2 Review Methodology
Over the past few years mashups have become a popular topic amongst both research and
industry communities. However, the current posture of mashup literature is rather disjointed, which
can partially be explained by the newness of this topic. A possible explanation for this is that mashup
literature lacks a thorough literature review to identify its boundaries, common issues, and subtopics.
Therefore, in an effort to address this need and help frame future research, a four step methodology
was developed and conducted to review mashup literature and identify its boundaries and subtopics.
The first step in the review process was to gather as many mashup related publications as
possible. To do this Google Scholar was used to search for publications with the following phrases in
their title: mashup, mash-up, mashups, web mash-ups, Web 2.0 mashups. The IEEE Xplore and
ACM Portal libraries were also searched with the same keywords used for Google Scholar.
Additionally, the references of each mashup publication retrieved were also reviewed to identify
additional mashup publications that fell outside of our search criteria. As a result, 60 mashup publications
were gathered, which are summarized in Appendix A. The second step in the review process was to
identify the number of subtopics within mashup literature. To accomplish this, all of the publications
were reviewed and were grouped based on the research questions addressed, key terms describing the
research, and on research methods used. This resulted in six groupings of mashup papers. The third
step of the review process consisted of developing names and definitions for each grouping of
literature. The publications within each group of literature were reviewed a second time to help
develop names (access control, integration, agents, frameworks, end user programming, enterprise) and
definitions of each group. The final step of the review process was to ensure that each publication
appropriately fit into the group it was originally placed in, after the group names and definitions were
given. In this final review, 3 and 4 articles were moved from the Frameworks group to the Access
Control and End-User Programming groups, respectively. Both authors discussed this change in
classification and it was decided that while the 7 articles moved were framework oriented, the focus of
the frameworks and subsequent research was primarily on the groups they were moved to. One
interesting observation made was that of the 60 mashup publications, 19 of them came from the
International World Wide Web Conference.
4.3 Literature Review
Currently there is a lot of buzz surrounding mashups in industry and academic communities.
The term mashup was actually derived from the music industry, where disc jockeys remix original
content from various artists to create new material (Murugesan, 2007). Therefore the idea behind a
mashup is to synthesize new information by reusing and combining existing content from disparate
information sources. Mashups allow end-users to combine information and knowledge from a plethora
of sources, and integrate them into customized, goal-oriented applications (Albinola et al, 2009).
Mashups have emerged as a trend in website design that focuses on synthesizing new content
by combining data and information from multiple sources in a unique way. As such, mashups have
proliferated because of how quickly they can be created, given that they're composed of pre-existing
Access Control & Cross Communication: Mashups connect disjointed applications to provide unified
services; however, access control to legacy systems is difficult and hinders a mashup's potential while
increasing security risks to users who have to give their credentials to mashup sites.
Integration: Mashups aggregate various different types of data sources (e.g. databases, legacy systems,
XML, dynamic web pages, and RSS feeds); this area of research addresses the data extraction
obstacles presented by these different data sources.
Agents: A promising potential of mashups is the ability to semantically determine information sources
that are relevant to the user, and autonomously include them in the mashup.
Frameworks: While mashups are gaining exponential popularity, their individual applications tend to
be ad hoc, partially because of the vast differences in data sources and purpose. Researchers have
identified the need for mashup frameworks, to provide developers with a set of ...
End User Programming: Enabling the end user to create their own custom mashups is a major reason
why mashups have gained such popularity. However, this presents an obstacle in that most end users
do not possess the technical expertise to develop mashups; to address this, researchers are developing
end-user programming languages and tools to enable non-technical users to easily create mashups.
Enterprise Mashups: Organizations have identified the potential strategic advantage that could be
provided by mashups in terms of business intelligence. As such, business researchers are focusing on
various enterprise mashup related issues such as accountability, design principles, and intranet ...
Table 11: Mashup research categories
Hasan et al, 2008
Recordon and Reed, 2006
Keukelaere et al 2008
Jackson and Wang, 2007
Blau et al, 2009
Thor et al, 2007
Stonebraker & Hellerstein, 2001
Murthy et al, 2006
Hornung et al, 2008
Tatemura et al, 2007
Kukka et al, 2008
Ennals et al, 2007
Wang et al, 2008
Blake and Nowlan, 2008
Lu et al, 2009
Ikeda et al, 2008
Cetin et al, 2007
Mostarda and Palmisano, 2009
Vancea et al, 2008
Ankolekar et al, 2007
Diego et al, 2007
Raza et al, 2008
Chen and Ilkeychi, 2008
Le-Phuoc et al, 2009
Albinola et al, 2009
Tuchinda et al, 2008
Wong and Hong, 2007
Tatemura et al, 2008
Ennals and Garofalakis, 2008
Sabbouh et al, 2007
Jones et al, 2008
Huynh et al, 2007
Brandt and Klemmer, 2006
Ennals and Gay, 2007
Wang et al, 2009
Lizcano et al, 2008
Hoyer et al, 2008
Makki and Sangtani, 2008
Simmen et al, 2008
Hoyer et al, 2009
Altinel et al, 2007
Majer et al, 2009
Siebeck et al, 2009
data (Hornung et al, 2008; Merrill, 2006; Tatemura et al, 2008). For example, Huynh et al (2007) found
that all users were able to learn their mashup interface and complete a mashup within 45 minutes. The
expansion of mashups can be viewed from three different perspectives. The first area of expansion is
simply in the number of new mashups that are continually being created. Since their building blocks
are existing data sources, building new mashups is a relatively quick process. Secondly, the number of
sources being used to build mashups has also greatly expanded. Initially, many of the resources used in
mashups were publicly available data (e.g. web services and RSS feeds), but now mashups are also
being composed of databases, data warehouses, and even legacy systems (Sneed, 2006; Vancea et al,
2008). The number of mashup research domains is another area of expansion found in mashup
literature; for example, Hoyer and Fischer (2008) distinguish between consumer and enterprise mashups.
The remainder of this section presents the results of the literature review and categorization
process described in the previous section. Table 11 contains category names and descriptions for the 6
different literary groupings of mashup literature. The first 5 categories (access control and cross
communications, integration, agents, frameworks, and end user programming) represent the
interrelated technological challenges common to all mashup applications, with the 6th category
covering organizational topics that are present in the domain of enterprise mashups.
4.3.1 Access Control and Cross Communication
The first area of mashup research found in prior literature examines issues surrounding access
control to mashup data and cross communication between backend resources. Mashups connect
disparate applications and data resources to provide their services. This requires mashup applications
to gain access to disparate data sources that have a variety of different sensitivity levels, from RSS
feeds (open access) to legacy systems (restricted access). This creates security risks for both mashup
users, who have to give their credentials to mashup sites, and legacy system managers, who must
open their systems to external access.
From a logistical and security perspective, the technical challenges involved in mashing
legacy systems are much different from those of mashing multiple RSS feeds. As is the case with enterprise
mashups, where the back-end resources being utilized by the mashup are legacy systems or databases,
access control becomes a legitimate concern for several reasons (Hasan et al, 2008). First, it can be
difficult to include back-end systems in the mashup because of the access control systems that they
have in place. Oftentimes the workaround is for the user to give the mashup their credentials and for
the mashup to impersonate the user. In situations where the legacy system or database is not
designed to permit provisional mashup access, the mashup receives all access. If a malicious mashup
was developed and gained access to a back-end system there would be great risk to the owners and
other users of the back-end system. Similarly, if a legitimate mashup was developed by a user and
included the user's credentials to one or many back-end systems, and the mashup was compromised by
a malicious attack, the user's credentials could be leaked to many different sources. This access control
problem has been addressed by several publications and continues to receive attention (Jackson and
Wang, 2007; Keukelaere et al, 2008; Recordon and Reed, 2007; Hasan et al, 2008).
There are three primary mechanisms being used to provide access control for mashups. The
first approach is referred to as a strawman approach, in which a series of access checks are made to
authenticate the user and then the mashup application is provided the users credentials to allow it to
access the system (e.g. acts as the user when interacting with the application). Two examples of this
approach are AuthSub (2009) and OAuth (2009). AuthSub is Google's authentication protocol for web
services. AuthSub requires users of web applications to complete an "access consent" page which provides the user with
an authentication token. This token allows the web application to interface with Google, acting as an
agent for the user. OAuth (2009) is an open authentication specification that provides consumer
applications requesting restricted data with an unauthorized token that is converted to an authorized
access token when the user successfully logs in. Once the service provider receives the authorized token
it redirects the user to the consumer application where the authorized token is exchanged for an access
token that contains a password. One complexity of the OAuth (2009) approach is that it requires that
the service maintain the state for all previously issued tokens. There are a number of drawbacks to
strawman schemes for mashup delegation (Hasan et al, 2008). First, the mashup receives all of the
user's privileges to the back-end system. Since the user cannot restrict the mashup's access within the
back-end system, the user must completely trust the mashup. Additionally, a compromised mashup can
leak user credentials, making the user, who completely trusts the mashup, extremely vulnerable.
One alternative to the strawman approach is an approach that mimics real-life permit-based
authorization schemes. Hasan et al (2008) propose a delegation permit model that allows a user to
grant a mashup delegation permits, which specify limited access rights to specific services. This
approach to mashup delegation addresses two of the shortcomings of other methodologies. First, the
user can restrict the scope of access available to the mashup in the back-end system. It also limits the
length of time that the mashup has access to the system.
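The scope and time restrictions of such a permit can be sketched as follows. The class and field names here are hypothetical illustrations of the limited-delegation idea, not Hasan et al's (2008) actual implementation:

```python
# Sketch of a limited-delegation permit: access is granted only to a
# named mashup, only for listed services, and only until the permit
# expires. All names are hypothetical.
import time

class DelegationPermit:
    def __init__(self, mashup_id, allowed_services, ttl_seconds):
        self.mashup_id = mashup_id
        self.allowed_services = set(allowed_services)  # restricted scope
        self.expires_at = time.time() + ttl_seconds    # limited lifetime

    def authorizes(self, mashup_id, service):
        """Authorize only the permit holder, for an in-scope service,
        before the permit expires."""
        return (mashup_id == self.mashup_id
                and service in self.allowed_services
                and time.time() < self.expires_at)

permit = DelegationPermit("score-mashup", ["read_schedule"], ttl_seconds=3600)
print(permit.authorizes("score-mashup", "read_schedule"))  # True
print(permit.authorizes("score-mashup", "update_roster"))  # False: out of scope
print(permit.authorizes("other-mashup", "read_schedule"))  # False: wrong holder
```

Contrast this with the strawman approach, where the back-end system sees only the user's own credentials and therefore cannot restrict the mashup's scope or lifetime of access at all.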
A second alternative for providing access control to mashup applications is Open ID 2.0
(Recordon and Reed, 2007). There are four entities in the Open ID 2.0 model: the user, mashup,
claimed identification, and identity provider. First, the user visits the mashup and provides a claimed
identification which specifies which identity provider they wish to use to access the mashup (e.g. users
can log into many online mashups using Twitter, Facebook, Yahoo or Google credentials). The
mashup redirects the user to the identity provider where the user can log in. The identity provider then
redirects the user back to the mashup with cryptographic proof that the user has been authenticated and
also provides the mashup with any profile information the user chooses to release. Similar to the
delegation permit model of Hasan et al (2008), Open ID 2.0 allows users to control the amount of time
the mashup is authorized to access the identity provider (Recordon and Reed, 2007).
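A highly simplified simulation of this flow is sketched below. Real OpenID 2.0 uses HTTP redirects and association-based signatures; the shared-key HMAC here is a stand-in for the cryptographic proof, and all names and data are hypothetical:

```python
# Simplified simulation of the OpenID-style flow described above:
# the identity provider authenticates the user and returns a signed
# assertion; the mashup verifies the proof without ever seeing the
# user's password. The shared HMAC key stands in for OpenID's
# association mechanism; all names are hypothetical.
import hashlib
import hmac

SHARED_KEY = b"mashup-and-provider-shared-secret"  # established beforehand

def identity_provider_login(claimed_id, password, passwords):
    """Provider side: check the password, then sign the claimed id."""
    if passwords.get(claimed_id) != password:
        return None  # authentication failed; no assertion issued
    sig = hmac.new(SHARED_KEY, claimed_id.encode(), hashlib.sha256).hexdigest()
    return {"claimed_id": claimed_id, "signature": sig}

def mashup_verify(assertion):
    """Mashup side: verify the cryptographic proof of authentication."""
    expected = hmac.new(SHARED_KEY, assertion["claimed_id"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

passwords = {"alice.example.org": "s3cret"}  # held by the provider only
assertion = identity_provider_login("alice.example.org", "s3cret", passwords)
print(mashup_verify(assertion))  # True: user authenticated, no password shared
```

The key property mirrored here is that the user's password never reaches the mashup; the mashup only ever handles a verifiable assertion from the identity provider.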
As Table 12 illustrates, access control methods can fall into one of three different categories:
anonymous, full delegation, and limited delegation. Anonymous access control would be a situation
where publicly published data sources are being accessed (e.g. web services, RSS feeds). Situations
where the mashup is given the user's credentials, and the backend system cannot differentiate between
the user and the mashup, would be categorized as full delegation (e.g. Authsub, 2009; OAuth, 2009).
Finally, approaches that provide the user some control over what access the mashup has to the data
sources (e.g. Hasan et al, 2008; Recordon and Reed, 2007) would be considered limited delegation
because the amount of access and the length of time that it is granted to the mashup is restricted.
Back End (Access Control) | Front End (Cross Communication)
(e.g. public web services)
Hasan et al (2008)
Recordon and Reed (2006)