Citation
Decision performance using spatial decision support systems

Material Information

Title:
Decision performance using spatial decision support systems: a geospatial reasoning ability perspective
Creator:
Erskine, Michael A. ( author )
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
1 electronic file (213 pages)

Subjects

Subjects / Keywords:
Decision making ( lcsh )
Decision support systems ( lcsh )
Geospatial data ( lcsh )
Geographic information systems ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
As many consumer and business decision makers are utilizing Spatial Decision Support Systems (SDSS), a thorough understanding of how such decisions are made is crucial for the information systems domain. This dissertation presents six chapters encompassing a comprehensive analysis of the impact of geospatial reasoning ability on decision-performance using SDSS. An introduction to the research is presented in Chapter I. Chapter II provides a literature review and research framework regarding decision-making using geospatial data. Chapter III presents the constructs of geospatial reasoning ability and geospatial schematization. Chapter IV explores the impact of geospatial reasoning ability on the technology acceptance of SDSS. Chapter V presents results of an experiment exploring the impact of geospatial reasoning ability on decision-making performance. Finally, Chapter VI presents a conclusion. Together these chapters contribute to a greater understanding of the impact of geospatial reasoning ability in relation to business and consumer decision-making.
Thesis:
Thesis (Ph.D.)--University of Colorado Denver. Computer science and information systems
Bibliography:
Includes bibliographic references.
System Details:
System requirements: Adobe Reader.
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Michael A. Erskine.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
891658739 ( OCLC )
ocn891658739



Full Text
DECISION PERFORMANCE USING SPATIAL DECISION SUPPORT SYSTEMS:
A GEOSPATIAL REASONING ABILITY PERSPECTIVE
by
MICHAEL A. ERSKINE
B.S., Metropolitan State University of Denver, 2004
M.S., University of Colorado Denver, 2007
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Computer Science and Information Systems
2013


This thesis for the Doctor of Philosophy degree by
Michael A. Erskine
has been approved for the
Computer Science and Information Systems Program
by
Jahangir Karimi, Chair
Dawn G. Gregg, Advisor
Judy E. Scott
Ilkyeun Ra
November 8, 2013


Erskine, Michael A. (Ph.D., Computer Science and Information Systems)
Decision Performance Using Spatial Decision Support Systems:
A Geospatial Reasoning Ability Perspective
Thesis directed by Associate Professor Dawn G. Gregg.
ABSTRACT
As many consumer and business decision makers are utilizing Spatial Decision
Support Systems (SDSS), a thorough understanding of how such decisions are made is
crucial for the information systems domain. This dissertation presents six chapters
encompassing a comprehensive analysis of the impact of geospatial reasoning ability on
decision-performance using SDSS. An introduction to the research is presented in
Chapter I. Chapter II provides a literature review and research framework regarding
decision-making using geospatial data. Chapter III presents the constructs of geospatial
reasoning ability and geospatial schematization. Chapter IV explores the impact of
geospatial reasoning ability on the technology acceptance of SDSS. Chapter V presents
results of an experiment exploring the impact of geospatial reasoning ability on decision-
making performance. Finally, Chapter VI presents a conclusion. Together these chapters
contribute to a greater understanding of the impact of geospatial reasoning ability in
relation to business and consumer decision-making.
The form and content of this abstract are approved. I recommend its publication.
Approved: Dawn G. Gregg


TABLE OF CONTENTS
CHAPTER
I. AN INTRODUCTION TO DECISION-MAKING USING GEOSPATIAL DATA 1
Abstract................................................................1
Introduction............................................................1
II. BUSINESS DECISION-MAKING USING GEOSPATIAL DATA: A RESEARCH
FRAMEWORK AND LITERATURE REVIEW..............................................6
Abstract................................................................6
Introduction............................................................7
Literature Review......................................................11
Theoretical Background..............................................11
Information Presentation............................................12
Task Characteristics................................................16
User Characteristics................................................23
Decision-Making Performance.........................................30
Conceptual Model.......................................................33
Discussion.............................................................35
Limitations of Reviewed Literature..................................35
Future Research.....................................................37
Conclusion.............................................................39
III. GEOSPATIAL REASONING ABILITY: CONSTRUCT AND SUBSTRATA
DEFINITION, MEASUREMENT AND VALIDATION........................................42
Abstract...............................................................42
Introduction...........................................................43
Literature Review......................................................45


Cognitive Fit Theory.....................................................45
Geovisualization and Decision-Performance................................46
User-Characteristics.....................................................47
Task-Characteristics.....................................................49
Scale Development Procedure..................................................51
Construct Conceptualization (Step 1).........................................53
Factor One: Examination of Prior Research................................53
Factor Two: Identification of Construct Properties and Entities.........55
Factor Three: Specification of the Conceptual Theme......................56
Factor Four: Definition of the Construct and Substrata...................56
Measurement Item Generation (Step 2).........................................58
Content Validity (Step 3)....................................................63
Measurement Model Specification (Step 4).....................................67
Reflective and Formative Constructs..........................................69
Visual Representation of Measurement Model...................................70
Mathematical Notation of Measurement Model...................................71
Data Collection (Step 5).....................................................72
Pretest A................................................................72
Pretest B................................................................73
Scale Purification and Refinement (Step 6)...................................73
Pretest A................................................................73
Pretest B................................................................74
Pretest B Reduced Items................................................82
Data Collection and Reexamination of Scale Properties (Step 7)...............87
Collinearity.............................................................94


Gender Concerns....................................................94
Discussion............................................................96
Implication to Research............................................99
Implication to Industry............................................99
Limitations and Future Research...................................100
Conclusion...........................................................104
IV. USER-ACCEPTANCE OF SPATIAL DECISION SUPPORT SYSTEMS:
APPLYING UTILITARIAN, HEDONIC AND COGNITIVE MEASURES.........................106
Abstract.............................................................106
Introduction.........................................................106
Perceived Enjoyment: Hedonic Measures.............................108
Technology Acceptance Model: User-Acceptance and Utilitarian Measures
.....................................................................109
Geospatial Reasoning Ability: Cognitive Measures.....................110
Research Model..........................................................111
Research Methodology....................................................117
Research Sample......................................................117
Research Instrument..................................................118
Analysis................................................................122
Discussion..............................................................131
Implications for Industry............................................133
Implications for Scholarship.........................................134
Limitations..........................................................134
Future Research......................................................135
Conclusion..............................................................135


V. INDIVIDUAL DECISION-PERFORMANCE OF SPATIAL DECISION SUPPORT
SYSTEMS: A GEOSPATIAL REASONING ABILITY AND PERCEIVED TASK-
TECHNOLOGY FIT PERSPECTIVE...................................................137
Abstract..............................................................137
Introduction..........................................................138
Literature Review.....................................................139
Cognitive Fit Theory...............................................140
Decision Performance...............................................141
Perceived Task-Technology Fit......................................141
User Characteristics...............................................142
Task Characteristics...............................................143
Research Model........................................................146
Research Methodology..................................................149
Experiment Design..................................................151
Subjects...........................................................151
Geospatial Reasoning Ability Measurement Items.....................152
Perceived Task-Technology Fit Measurement Items....................153
Problem Complexity.................................................154
Visualization Complexity...........................................155
Analysis..............................................................156
Measurement Model..................................................156
Structural Model...................................................161
Heterogeneity......................................................164
Findings and Discussion...............................................166
Limitations........................................................168
Implications.......................................................170


Conclusion..............................................................171
VI. CONCLUSION: TOWARD A COMPREHENSIVE UNDERSTANDING OF
GEOSPATIAL REASONING ABILITY AND THE GEOSPATIAL DECISION-
MAKING FRAMEWORK..........................................................172
Abstract...........................................................172
Introduction.......................................................172
Conceptual Model...................................................173
Implication to Research.........................................175
Implication to Industry.........................................175
Theoretical Frameworks.............................................176
Findings and Discussion............................................178
Future Research....................................................182
REFERENCES................................................................184


LIST OF TABLES
Table
1. Common Theories Related to Spatial Decision-Making............................12
2. List of Complexity Frameworks (Adapted from Gill and Hicks, 2006)............. 20
3. User-Characteristic Measurement Instruments used in Examined Research, not
including spatial ability measures (as shown in Table 4).........................27
4. Spatial Reasoning Instruments Used in Examined Research.......................30
5. Research Population Groups.....................................................36
6. Spatial Reasoning Instruments Used in Examined Research.......................49
7. Proposed Substrata of GRA......................................................57
8. Definition of SPGS.............................................................58
9. Initial Self-Perceived Geospatial Orientation and Navigation Measurement Items.... 60
10. Initial Self-Perceived Geospatial Memorization and Recall Measurement Items..61
11. Initial Self-Perceived Geospatial Visualization Measurement Items............62
12. Initial Self-Perceived Geospatial Schematization Measurement Items...........63
13. Content Validity Results of Geospatial Orientation and Navigation.............64
14. Content Validity Results of Geospatial Memorization and Recall................65
15. Content Validity Results of Geospatial Visualization..........................66
16. Content Validity Results of Geospatial Schematization.........................66
17. Pretest A Reliability Statistics..............................................74
18. Pretest B Descriptive Statistics of Demographics Variables...................76
19. PLS Factor Analysis Loadings..................................................79
20. Latent Variable Correlation..................................................80
21. Path Coefficients............................................................80


22. Construct Reliability......................................................81
23. PLS Factor Analysis Loadings...............................................83
24. PLS Latent Variable Correlation............................................83
25. Path Coefficients..........................................................84
26. Construct Reliability......................................................85
27. PLS Factor Loading and Cross Loading.......................................86
28. Inter-Construct Correlations and Square Root of AVE........................87
29. Study 1 Descriptive Statistics of Demographic Variables....................88
30. PLS Factor Analysis Loadings and Weights...................................90
31. Latent Variable Correlation................................................90
32. Path Coefficients..........................................................91
33. Construct Reliability......................................................92
34. PLS Factor Loading and Cross Loading.......................................93
35. Inter-Construct Correlations and Square Root of AVE........................94
36. SPGON GRA: Gender Group Differences........................................95
37. SPGMR GRA: Gender Group Differences........................................95
38. SPGV GRA: Gender Group Differences.........................................96
39. Final GRA Measurement Items................................................98
40. Final SPGS Measurement Items...............................................98
41. Perceived Enjoyment Measurement Items....................................118
42. Perceived Usefulness Measurement Items....................................119
43. Perceived Ease-of-Use Measurement Items...................................120
44. Attitude Measurement Items................................................120
45. Behavioral Intent Measurement Items.......................................121
46. Geospatial Reasoning Ability Measurement Items............................122


47. Descriptive Statistics of Demographic Variables............................123
48. Item Reliability...........................................................124
49. Item Loadings and Cross Loadings, highest loadings shown in bold...........126
50. Square Root of Loadings and Cross-Loadings, highest loadings shown in bold.127
51. Inter-Construct Correlations and Square Root of AVE........................128
52. Construct AVE..............................................................129
53. Path Coefficients..........................................................131
54. Hypothesis Tests...........................................................131
55. Summary of Geospatial Decision Performance Research........................145
56. Summary of Geospatial Decision Performance Research........................150
57. Descriptive Statistics of Experiment Subjects..............................152
58. Geospatial Reasoning Ability Measurement Items, Adapted from Erskine and Gregg
(2011, 2012, 2013).............................................................. 153
59. Perceived Task-Technology Fit Measurement Items, Adapted from Karimi et al.
(2004) and Jarupathirun and Zahedi (2007)....................................... 154
60. Problem Complexity.........................................................155
61. Cronbach's Alpha and Composite Reliability.................................157
62. Measurement Item Loadings..................................................158
63. Average Variance Extracted (AVE) by Construct..............................159
64. Measurement Item Cross-Loadings............................................160
65. Latent Variable Correlations and Square Root of AVE (shown in bold)................161
66. R2 Values of Endogenous Latent Variables...................................161
67. Path Coefficients and Significance Levels..................................162
68. Path Coefficients and Significance Levels..................................163
69. R2 and Q2 Values of Endogenous Latent Variables............................164
70. Path Coefficients - Male Only..............................................165


71. Path Coefficients - Female Only.........................................165
72. Result of Gender Group Comparison.....................................166
73. Hypotheses Test.......................................................168


LIST OF FIGURES
Figure
1. Proposed Research Model.....................................................34
2. Overview of Scale Development Procedure from MacKenzie et al. (2011).......52
3. First-Order Reflective, Second-Order Formative Measurement Model including
Propositions...................................................................71
4. Pretest B, Path Analysis including SPGS....................................77
5. Pretest B, Path Analysis including SPGS.....................................78
6. Pretest B, Path Analysis....................................................82
7. Test, Path Analysis.........................................................89
8. Proposed Research Model....................................................112
9. Initial Nomological Network of GRA, along with Path Coefficients and R Square
Values........................................................................129
10. Proposed Research Model...................................................146
11. Experiment Workflow.......................................................150
12. Apartment Finder Experiment Tool Developed using GISCloud and Bing Maps...151
13. Visual result of SEM-PLS Algorithm (using SmartPLS).......................162
14. Visual result of SEM-PLS Algorithm (using SmartPLS).......................163
15. Research Model with Relationship Significance.............................167
16. Conceptual Model..........................................................174
17. Current Nomological Network of GRA Construct..............................182


LIST OF ABBREVIATIONS
ATM       Automated Teller Machine
AVE       Average Variance Extracted
CFT       Cognitive Fit Theory
DSS       Decision Support System
GIS       Geographic Information System
GPS       Global Positioning System
GRA       Geospatial Reasoning Ability
INSPIRE   Infrastructure for Spatial Information in the European Community
IS        Information Systems
NASA      National Aeronautic and Space Administration
NFC       Need for Cognition
NSDI      National Spatial Data Infrastructure
PDA       Personal Digital Assistant
PLS       Partial Least Squares
PPGIS     Public Participation Geographic Information Systems
PTTF      Perceived Task-Technology Fit
RFID      Radio Frequency Identification
SDSS      Spatial Decision Support System
SEM       Structural Equation Modeling
SMW       Subjective Mental Workload
SPGON     Self-Perceived Geospatial Orientation and Navigation
SPGMR     Self-Perceived Geospatial Memorization and Recall
SPGV      Self-Perceived Geospatial Visualization
SPGS      Self-Perceived Geospatial Schematization
TAM       Technology Acceptance Model
TLX       Task Load Index
USGS      United States Geological Survey
UTAUT     Unified Theory of Acceptance and Use of Technology
VGI       Volunteered Geographic Information


CHAPTER I
AN INTRODUCTION TO DECISION-MAKING USING
GEOSPATIAL DATA
Abstract
This chapter provides an overview of the motivation of this dissertation, a discussion of
key industry sectors that currently use geospatial data for decision-making and a brief
history of geospatial decision-making presented through well-known cases.
Introduction
Consumer, business and government decision-makers increasingly rely on
geospatial data to make critical decisions. Recent developments, particularly the advent
of global positioning systems, expansion of advanced mobile communications networks,
prevalence of powerful mobile devices, and systems such as location-based services have
allowed individuals and organizations to collect and share vast quantities of geospatial
data. Furthermore, due to the increased access to geospatial data, as well as tools to assess
such information, many decision-makers utilize geospatial criteria in their decision-
making tasks. Modern Spatial Decision Support Systems (SDSS) have simplified such
analyses, yet there is little understanding regarding how such decisions are made, what
presentation methods best facilitate geospatial decision-making, as well as what specific
factors influence decision-making performance. While multi-criterion decision-making
using geospatial data was once only practical for expert analysts, today consumers,
business and government agencies are increasingly reliant on online mapping services,


location-based services and SDSS to assist in their procedural, organizational, and strategic
decision-making processes.
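To make multi-criterion decision-making with a geospatial criterion concrete, the sketch below ranks hypothetical housing alternatives by combining rent with distance to a workplace using a simple weighted sum. The alternatives, weights, and min-max normalization are illustrative assumptions only and are not drawn from any system or study discussed in this dissertation.

```python
# Minimal multi-criteria scoring sketch (hypothetical data and weights).
candidates = {
    "Apt 1": {"rent": 1400, "distance_km": 2.5},
    "Apt 2": {"rent": 1100, "distance_km": 9.0},
    "Apt 3": {"rent": 1250, "distance_km": 5.0},
}
weights = {"rent": 0.6, "distance_km": 0.4}  # assumed decision-maker preferences

def normalize(values):
    """Min-max rescale so the best (lowest) raw value maps to 1 and the worst to 0."""
    lo, hi = min(values.values()), max(values.values())
    return {name: (hi - v) / (hi - lo) for name, v in values.items()}

scores = {name: 0.0 for name in candidates}
for criterion, weight in weights.items():
    norm = normalize({name: attrs[criterion] for name, attrs in candidates.items()})
    for name in candidates:
        scores[name] += weight * norm[name]

# Higher weighted score indicates a better overall alternative.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```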
For instance, consumer tasks, such as locating the nearest bank, finding a home or
apartment, or even finding friends at an amusement park, are decision-making tasks that
are commonly aided using SDSS and associated geospatial visualization (MasterCard,
2013; Zillow, 2013; Apple, 2013). Additionally, online mapping services have become
increasingly popular with consumers, leading to over 100 million mobile device owners
accessing Google Maps each month (Gundotra, 2010). Coinciding with the increasing
usage of such mapping services, there has been an increasing public awareness of
geospatial tools and information for business and consumers in mainstream media (e.g.,
Griggs, 2013; Lavrinc, 2013; Versace, 2013).
While mobile devices have simplified everyday geospatial decision-making for
consumers, technology advances have also enhanced geospatial decision-making for
businesses, government and non-profit organizations. For instance, businesses can use
SDSS to facilitate retail site location tasks, government entities can quickly locate
citizens that will be impacted by an impending disaster, and non-profit organizations can
target individuals based on precise demographic and location criteria. More specifically,
the insurance industry can benefit from enhanced risk analytics and crisis management.
The manufacturing sector can benefit from logistics, supply chain and asset management.
The banking and finance industry can benefit from the identification of areas with high
risk potential as well as tracking consumer financial behaviors. Marketers benefit through
the use of SDSS in numerous ways as well, such as by targeting customers based on far


more granular locations than mailing codes alone (Hess, Rubin, & West, 2004; ESRI,
2013).
Government agencies, economic development groups and tourism authorities can
also leverage advances in SDSS. For instance, Makati City in the Philippines used SDSS
to make urban planning decisions more efficient (Africa, 2013). In the tourism industry,
SDSS has been used to perform visitor flow management, build visitor facility
inventories and assess the impacts of tourism (Chen, 2007). Economic development
groups can use SDSS to better understand specific commercial and residential markets
(ESRI, 2013).
Problem-solving using geospatial data has demonstrated its value throughout
history. A well-known example of the use of geospatial data was that of Snow (1849,
1855) who explored geospatial relationships between public wells and cholera outbreaks
to confirm that some wells were indeed contaminated and contributed to cholera
outbreaks (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000). Previously, Seaman
(1798) had presented an analysis of yellow fever in New York using similar thematic
maps. More recently, SDSS have been applied to better understand and mitigate various
diseases, such as malaria (Kelly, Tanner, Vallely, & Clements, 2012), cancer (Rasaf,
Ramezani, Mehrazma, Rasaf, & Asadi-Lari, 2012) and diabetes (Noble, 2012). The
prevalence of the use of SDSS in medical research has even brought about a sub-domain
of healthcare research called spatial epidemiology (Elliot, Wakefield, Best, & Briggs,
2000).
Recently, geospatial data gathered and shared by large crowds using social media
has brought about exciting benefits. Such crowd-sourced data allows emergency


resources to be distributed more effectively following natural and man-made disasters.
For example, a United States Geological Survey (USGS) program called the Twitter
Earthquake Detector uses geospatial data and keyword filtering from Twitter data to
determine if an earthquake occurred. It has been reported that the Twitter Earthquake
Detector was able to detect a disaster within seconds, while traditional scientific alerts
took far longer to reach experts (Department of the Interior: Recovery Investments,
2012).
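The description above suggests a simple detection pattern: filter geotagged messages by earthquake-related keywords and flag a possible event when enough reports cluster in one area within a short time window. The Python sketch below illustrates that general idea only; the field names, keyword list, grid-cell size, and thresholds are assumptions for illustration and do not describe the actual USGS implementation.

```python
from collections import defaultdict
from datetime import timedelta

KEYWORDS = ("earthquake", "terremoto", "sismo")  # assumed keyword list

def possible_events(tweets, window_minutes=5, min_reports=50, cell_deg=1.0):
    """Return (grid cell, time bucket) pairs with an unusual burst of keyword tweets.

    `tweets` is an iterable of dicts with keys "text", "lat", "lon", and a
    datetime under "time" (hypothetical schema for this sketch).
    """
    counts = defaultdict(int)
    for t in tweets:
        if not any(k in t["text"].lower() for k in KEYWORDS):
            continue
        # Bucket reports into coarse spatial grid cells and short time windows.
        cell = (round(t["lat"] / cell_deg), round(t["lon"] / cell_deg))
        ts = t["time"]
        bucket = ts - timedelta(minutes=ts.minute % window_minutes,
                                seconds=ts.second, microseconds=ts.microsecond)
        counts[(cell, bucket)] += 1
    return [key for key, n in counts.items() if n >= min_reports]
```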
Academic researchers have explored the use of geospatial data for decision-
making in regard to multi-criterion decision-making, geospatial visualization, geospatial
decision-making performance, as well as group decision-making using geospatial data
(e.g., Jankowski & Nyerges, 2001; Skupin & Fabrikant, 2003; Malczewski, 2006).
However, research in these areas has significantly lagged behind technology
developments and the increasing prevalence of SDSS, and therefore, no comprehensive
decision-making models, research constructs, measurement scales and well-established
benchmarks for geospatial decision-making exist.
Due to the increasing prevalence and access to geospatial data and an increasing
trend of individual, business and governmental decision-making being performed using
SDSS and associated information systems, it is essential that the information systems (IS)
scholarship develop a more comprehensive understanding of how such decisions are
made. This current lack of understanding within the IS scholarship, as well as the
significance of geospatial decision-making, provided the primary motivation for the
development of this dissertation.


The next five chapters will provide initial steps toward a more comprehensive
understanding of geospatial decision-making. More specifically, Chapter II provides a
thorough literature review and research framework for geospatial decision-making
research. Chapter III presents the development of a comprehensive construct defining
individual geospatial reasoning ability, a key user-characteristic that has provided mixed
results in previous empirical studies. Chapter IV presents an extension of the Technology
Acceptance Model in the context of geospatial visualization and the use of online
mapping services. Chapter V explores decision-making within the research framework
presented in Chapter II. Together Chapter IV and Chapter V work toward extending the
external validity of the proposed geospatial reasoning ability construct. Finally, Chapter
VI presents a conclusion to the overall empirical tests of the research framework
introduced in Chapter II and defines a nomological network of the geospatial reasoning
ability construct presented in Chapter III.


CHAPTER II
BUSINESS DECISION-MAKING USING GEOSPATIAL DATA: A
RESEARCH FRAMEWORK AND LITERATURE REVIEW1
Abstract
Following the brief introduction presented in Chapter I, Chapter II will present a
detailed introduction and a conceptual model for future research related to geospatial
decision-making. More specifically, Chapter II provides a thorough literature review and
framework for geospatial decision-making research.
Organizations that leverage their increasing volume of geospatial data have the
potential to enhance their strategic and organizational decisions. However, literature
describing the best techniques to make decisions using geospatial data and the best
approaches to take advantage of geospatial data's unique visualization capabilities is very
limited. This chapter reviews the use of geovisualization and its effects on decision
performance, which is one of the many misunderstood components of decision-making
when using geospatial data. Additionally, this chapter proposes a comprehensive model
allowing researchers to better understand decision-making using geospatial data and
provides a robust foundation for future research. Finally, this chapter makes an argument
for further research of information presentation, task-characteristics, user-characteristics
and their effects on decision performance utilizing geovisualized data.
1 An early version of this chapter was submitted as a 1st Year Paper and approved by Dawn G. Gregg in
November 2010. A subsequent version of this chapter is currently under review with Axioms.


Introduction
While geospatial data permeates business computing, there is only a limited
understanding of how geographic information is utilized to make strategic and
organizational business decisions, as well as how to effectively visualize geographic data
for such decision-making.
The utility of database technologies, as well as that of spreadsheets, has been
taught in most business school courses for many years, so most business professionals
have had a clear understanding of utilization and benefits of such technologies. However,
as geospatial data has become more prevalent within IS, computing researchers and
business professionals have tried to better understand all aspects of decision-making
using geospatial data. One of these areas is the ability to understand decision-making
processes as they relate to the unique abilities of geovisualization, or the ability to
represent, understand and utilize geospatial data in map-like projections for decision-
making. This paper provides an analysis of current research concerning geovisualization
and decision-making performance. Additionally, this chapter responds to Vessey's (1991)
call for additional analysis in areas of conflicting research results.
The ability to interpret geographic information and make decisions based on
geographic data is essential for business decision makers, because over 75 percent of all
business data contains geographic information (Tonkin, 1994) and 80 percent of all
business decisions involve geographic data (Mennecke, 1997). While geospatial data can
be presented utilizing traditional methods such as tables, often unique relationships
contained within geospatial data are only apparent through geovisualization (Reiterer,
Mann, Mußler, & Bleimann, 2000).


As organizations have collected vast amounts of geospatial or geo-referenced
data, two technologies have been developed to interpret these data in support of decision-
making. These systems are Spatial Decision Support Systems (SDSS) and Geographic
Information Systems (GIS).
While traditional DSS have been implemented successfully for production
planning, forecasting, business process reengineering and virtual shopping (Subsom &
Singh, 2007), such systems poorly utilize geospatial data. Thus, SDSS were developed to
aid decision-making when utilizing complex geospatial data. Such technologies operate
much like DSS, but are tailored to handle the unique complexities of geospatial data.
SDSS provide capabilities to input and output geospatial data, provide analytic
capabilities unique to geospatial data, and allow complex geospatial representations to be
presented (Densham, 1991). Although IS researchers are familiar with DSS concepts,
many IS researchers are not yet familiar with key SDSS concepts such as
"georeferencing, geocoding and spatial analysis" (Pick, 2004, p. 308).
While SDSS provide methods for geospatial decision-making, Geographic
Information Systems, commonly referred to as GIS, often allow geospatial experts to
analyze and report geographic data. GIS can be used to populate information to SDSS as
well as to perform complex geospatial analyses. While there are numerous aspects of GIS
relevant to IS researchers, this chapter examines the decision-making aspects of GIS
(Mennecke & Crossland, 1996). Such an understanding is critical for IS researchers
because the global GIS adoption rates continue to increase and research has shown that a
strong understanding of geospatial data can lead to enhanced decision-making (Pick,
2004).


SDSS and GIS that leverage geovisualization are provided to professionals and
consumers through a variety of sources. Prominent consumer tools for the search of
information using geovisualization include Google Maps and Bing Maps, which allow a
user to visually locate geo-referenced information, such as addresses, businesses and
even people (Google Maps, 2013; Bing Maps, 2013). Other domain-specific examples
include the capability to determine wireless signal strength at street level, automated
banking kiosk location information and interactive real estate search tools (T-Mobile,
2013; American Express, 2013; RE/MAX, 2013). However, the use of geospatial data is
not limited to only consumers and business organizations. Government agencies leverage
geospatial data for decision-making when solving large societal problems. More
specifically, immense geospatial-specific infrastructure systems have been implemented
such as the National Spatial Data Infrastructure (NSDI) in the United States and the
Infrastructure for Spatial Information in the European Community (INSPIRE) in the
European Union (Yang, Wong, Yang, & Li, 2005; Yang, Raskin, Goodchild, & Gahegan,
2010). Goals of such systems are to enable nearly every agency of a government to share
large volumes of geographic data locally, nationally and globally (Yang et al., 2005).
Early demonstration projects of the NSDI included a geospatial crime tracking system for
a metropolitan police department as well as a regional system to help communities
perform effective master planning activities (Federal Geographic Data Committee, 1999).
As more devices and technologies become networked through technologies such
as portable navigation devices, mobile computing platforms and radio-frequency
identification (RFID), more and more collected data will consist of geographic or geo-
referenced data. With the increase in the amount of geospatial data available to decision-


makers, it is crucial that IS professionals and researchers expand their knowledge of
geospatial systems and understand their unique characteristics, inherent potential and
limiting drawbacks. Smelcer and Carmel (1997) identified four research streams
contributing to the effectiveness of geovisualization, including information
representation, task difficulty, geographic relationship and cognitive skill. This research
chapter will expand on these findings.
Additionally, Pick (2004) points to promising research in the area of visualization,
which this research chapter addresses. Particularly, this chapter will attempt to clarify the
unique aspects that geovisualization brings to business decision-makers and will suggest
specific future research goals.
This chapter begins with a literature review emphasizing the theoretical
backgrounds of existing research, then analyzes existing research related to decision-
making utilizing geovisualization, including task-characteristics, such as task complexity,
collaboration, and task type and user-characteristics, such as cognitive fit, task
complexity perceptions, mental workload, goal setting, self-efficacy, spatial reasoning
ability as well as decision-making performance, all of which are identified as key
research streams relevant to the visualization of geographic data. The goal of this section
is to review relevant research in the IS, geography and psychology realms in order to
develop a comprehensive model that can be used to increase the understanding of the
impact geographic data visualization has on decision performance. Following this, a
conceptual model based on existing literature will be presented. Then, limitations of the
reviewed literature and future research suggestions will be discussed. Finally, a
conclusion is presented.


Literature Review
This section provides an in-depth analysis of reoccurring themes found in
literature related to task- and user-characteristics, as well as decision-making
performance.
Theoretical Background
Research reveals the importance of information presentation, task-characteristics and
user-characteristics for decision-performance. An emphasis is placed on exploring
theories that have been suggested to explain these four themes. Specifically, literature
related to information visualization and its effects on decision-making often cites
Cognitive Fit Theory (CFT), Complexity Theory, Task Fit Theory, Image Theory as well
as research on task-technology fit, self-efficacy, motivation, goal-setting and spatial
abilities (see Table 1). Of the four research streams Smelcer and Carmel (1997)
identified, each potentially relates to an existing theory, including Task Fit (information
representation and geographic relationship), Complexity Theory (task difficulty), and
CFT (cognitive skill). This research will expand on these findings.


Table 1. Common Theories Related to Spatial Decision-Making.
Theory - Study
Cognitive Fit Theory - Vessey, 1991 (posited); Smelcer and Carmel, 1997; Dennis and Carte, 1998; Mennecke et al., 2000; Speier and Morris, 2003; Speier, 2006
Complexity Theory - Smelcer & Carmel, 1997; Swink & Speier, 1999
Task Fit/Task-Technology Fit - Smelcer & Carmel, 1997; Jarupathirun & Zahedi, 2007
Self-Efficacy - Jarupathirun & Zahedi, 2007
Motivation Theory - Jarupathirun & Zahedi, 2007
Goal-Setting Theory - Jarupathirun & Zahedi, 2007
Image Theory - Crossland, Wynne, & Perkins, 1995
The following section explores reoccurring themes found in literature related to
visualization of geospatial data. These themes include information presentation, task-
characteristics, user-characteristics and decision-performance.
Information Presentation
Numerous researchers have explored the importance of visual information
presentation on decision performance (e.g., Vessey, 1991; Smelcer & Carmel, 1997;
Dennis & Carte, 1998; Mennecke et al., 2000; Speier & Morris, 2003; Speier, 2006). For
example, in her work, Vessey posits the CFT, which suggests that there are two types of
information presentation, as well as two types of problem-solving tasks. Furthermore, it
is suggested that when the problem representation matches the problem-solving task,
higher quality decisions are made. In Vessey's research, the objective measures of
decision time and decision accuracy, as well as interpretation accuracy, are measured as
antecedents of performance; however, it is noted that confidence in the solution also


could play a role. Additionally, Vessey points out that while often-analyzed tasks from
prior research utilized simple graphs and tables, actual business problems are far more
complex and not as well defined. Furthermore, prior research may have included, for
example, numbers along with graphical representations actually presenting a mix of
spatial and symbolic data.
Vessey's (1991) CFT has been referenced as a theoretical background, extended
into other domains and validated in numerous empirical studies, such as Speier (2006)
and Smelcer and Carmel (1997). Speier presented a review of eight empirical research
papers that tested for cognitive fit. Their research discovered that all but one paper either
fully or partially supported the CFT. The author of the paper that did not support CFT,
Frownfelter-Lohrke (1998), explained that the lack of support most likely resulted due to
the complex nature of tasks involving the examination of financial statements.
Extensions of CFT include work performed by Dennis and Carte (1998) who
demonstrated that when map-based presentations are coupled with appropriate tasks,
decision processes and decision performance are influenced. Additionally, Mennecke et
al. (2000) expand on the CFT by determining the effects of subject characteristics and
problem complexity on decision efficiency and accuracy. Also, CFT has been extended
from information presentation to query interface design in order to explain how one's
ability to understand data visualizations will influence decision outcomes (Speier &
Morris, 2003).
In addition to CFT, Task-Technology Fit has been utilized to explain the
importance of appropriate information presentation methods. For example, Ives (1982)
articulates the importance of visual information presentation. Ives states that while


researchers have responded to calls for additional research into data and information
visualization techniques, there is still potential for additional research into cartographic
data visualization, particularly through SDSS, GIS or other digital map-based
presentations. Specifically, Ives calls for a more in-depth understanding of how multi-
dimensional graphics could display complex information through simplified information
or charts that overlay information, both of which are technologies inherent to even basic
geovisualization systems.
Densham (1991) suggests that a SDSS user interface must be both powerful and
easy-to-use. Also, a SDSS must provide information in both graphical, or map space, and
tabular formats, or objective space, while providing the capability to move between these
representations or view these representations simultaneously to determine the most
appropriate to facilitate problem solving. However, even with multiple display options, it
is not yet understood if a decision-maker would know which of the output options
provides the best visualization method for a particular decision-making process. To
support a problem-solver who is unsure of how to select the most appropriate
visualization method, several authors have suggested the inclusion of an expert system to
provide such suggestions (Densham, 1991; Yang et al., 2005).
Additionally, relevant studies regarding the visualization of cartographic
information include Crossland et al. (1995), Smelcer and Carmel (1997), Speier and
Morris (2003) and Dennis and Carte (1998). For example, Crossland et al. performed a
study in which some participants were provided with a paper map and tabular
information while others had access to a SDSS. They were able to confirm that the
addition of a GIS-based SDSS contributed significantly toward two measures of decision-


making performance, decision time and decision accuracy. Speier and Morris tested the
use of text- and graphical-based interfaces to determine the effects on decision-making.
Smelcer and Carmel tested whether spatial information is best represented through
geovisualization and found that maps representing geographic relationships allowed for
faster problem solving. The authors concluded that while "low difficulty tasks can be
solved quickly regardless of representation" (p. 417), more difficult tasks should be
represented using maps "to keep problem-solving times and errors from rising rapidly"
(p. 418). Dennis and Carte determined that geographically adjacent/spatial information was
best presented using spatial visualization, while non-adjacent/symbolic information tasks
were best presented using tables.
Finally, some researchers suggest that reducing the amount of information
presented to only include essential information could improve decision-making
performance (e.g., Agrawala & Stolte, 2001; Klippel, Richter, Barkowsky, & Freksa,
2005). For example, while early maps presented geospatial information with little
precision, they were still able to convey relevant information. The benefit of such
simplified maps is demonstrated by Agrawala and Stolte, who collected feedback from
over 2,000 users of a technology that emulates hand-drawn driving directions, which
often emphasize essential information while eliminating nonessential details.
Additionally, Klippel et al. suggest that modern cartographers can successfully develop
schematic maps that are simplified, yet present "cognitively adequate representations of
environmental knowledge" (p. 68). Comprehensive, yet easy-to-read, transit maps used in
large metropolitan cities demonstrate a good example of the benefit of schematization.
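A common geometric building block for this kind of simplification is the Ramer-Douglas-Peucker algorithm, which discards route points that deviate from the overall shape by less than a tolerance. The sketch below is illustrative only: the cited studies do not state that they used this particular algorithm, the coordinates are treated as planar, and the route and tolerance are hypothetical.

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b (planar)."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tolerance):
    """Ramer-Douglas-Peucker: keep only points that deviate more than `tolerance`."""
    if len(points) < 3:
        return list(points)
    # Find the interior point farthest from the chord joining the endpoints.
    dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i, d = max(enumerate(dists, start=1), key=lambda t: t[1])
    if d <= tolerance:
        return [points[0], points[-1]]  # the whole run is close enough to a straight line
    # Otherwise split at the farthest point and simplify each half recursively.
    return simplify(points[: i + 1], tolerance)[:-1] + simplify(points[i:], tolerance)

route = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(simplify(route, tolerance=0.5))  # retains only the points that define the shape
```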


Task Characteristics
In addition to information presentation, research has shown that the specific
characteristics of the task being performed can play a vital role in decision-making
performance. Complexity Theory has been extended to demonstrate that key aspects of
geovisualization, including data aggregation, data dispersion and task complexity,
influence decision-making performance (Swink & Speier, 1999). For example,
Complexity Theory posits that as task complexity increases so too does the need for
information presentation to match problem-solving tasks. Additionally, Complexity
Theory was validated by Smelcer and Carmel's (1997) research, which confirmed that
increased task difficulty led to decreased decision-making performance. Moreover,
Crossland and Wynne (1994) discovered that decision-making performance decreased
less significantly with the use of electronic maps, versus paper maps.
Jarupathirun and Zahedi (2001) state that, based on research by Vessey (1991),
Payne (1976), Campbell (1988) and Zigurs and Buckland (1998), tasks can be classified
into simple and complex groups based on task characteristics. Characteristics of complex
tasks include multiple information attributes, multiple alternatives to be evaluated,
multiple desired outcomes, solution scheme multiplicity, conflicting interdependence and
uncertainty.
Several empirical studies have addressed task complexity (e.g., Crossland et al.,
1995; Smelcer & Carmel, 1997; Swink & Speier, 1999; Mennecke et al., 2000; Speier &
Morris, 2003). For example, Speier and Morris (2003) discovered that decision-making
performance increased by utilizing a visual query interface when working with complex
decisions. Additionally, Swink and Speier defined task characteristics to include the


problem size, data aggregation and data dispersion. Their findings revealed that decision
performance, as measured by decision-quality and decision-time, was superior for smaller
problems. In the context of data aggregation, there was no effect on decision quality;
however, there was a significant effect on decision time indicating that more time was
required for disaggregated problems. Additionally, it was discovered that decision quality
for problems with greater data dispersion improved, but there was no significant effect on
decision time. Smelcer and Carmel confirmed what had been discovered in previous
research (others mentioned herein) in that more difficult tasks require additional problem
solving time. In their work, Mennecke et al. discovered that as task complexity increases,
accuracy is lowered, yet found only partial support for task efficiency being lowered.
Research conducted by Crossland et al. (1995) on the effects of SDSS on
decision-making performance included a measure of task-complexity. In their work, it
was discovered that the use of a SDSS versus data tables and paper maps significantly
improved decision-making time, while there was no significant effect on decision
accuracy. The authors pointed out that there may have been too much similarity between
the task complexity levels to ensure that decision accuracy would not be improved
through the use of an SDSS. The authors also suggested that there may be levels of
problem complexity that can only be solved through the use of an SDSS.
In their work, Albert and Golledge (1999) developed three paper and pencil tests
to assess task complexity across experience levels and gender. One of the findings was
that subjects were better at performing map overlay tasks involving "or" (inclusive
disjunction) and "xor" (exclusive disjunction) operators versus those utilizing "and" and
"not" operators. The researchers also discovered that the boundary complexity of a


visualized entity did not affect performance, whereas the quantity of visualized entities
did. Boundary complexities can be quite varied, as for example the target radius of a
retail location may be represented by a simple circle, yet the high water boundary of a
river would be represented by a very complex boundary representing elevation, water
flow and other essential qualities.
Additional research suggests that the perception of complexity may be essential to
better understanding the effects of task characteristics on decision-making performance.
For example, Huang (2000) performed an experiment of 10 popular web-based shopping
sites and determined that increased complexity decreased the desire to explore the site,
but slightly increased the desire to purchase. Perhaps, when applied to SDSS decision-
making, increased complexity decreases the desire to explore additional solutions
while encouraging a decision to be made quickly. This could explain some of the
variances discovered in past research particularly within the task-complexity and
decision-making performance realm. Huang's research utilized the General Measure of
Information Rate developed by Mehrabian and Russell (1974) as a measure of the
perceived complexity.
Speier (2006) proposed a framework with four levels of complexity, which are,
in order: (1) trivial decision-making, (2) optimal decision-making, (3) satisficing
decision-making, and (4) aided decision-making. Speier's empirical study furthered
Vessey's (1991) CFT by comparing the outcomes of
spatial and symbolic information presentation with spatial or symbolic tasks, moderated
by task complexity, on decision performance as measured by decision quality and
accuracy. In this empirical study the subjects completed tasks that all had optimal


outcomes. However, findings were inconsistent with theory, as the decision time of
symbolic tasks with low complexity was reduced when using spatial presentations.
However, there are several extensions to Vessey's work demonstrating that tables and
graphs are equal in decision performance at a low complexity and that graphs provide a
higher decision performance at high complexity.
Gill and Hicks (2006) presented a thorough list of complexity frameworks from
literature, which are presented in Table 2. Like Speier (2006), Gill and Hicks also
suggested that there are multiple classes of complexity: experienced complexity,
information processing complexity, problem space complexity, lack of structure
complexity and objective complexity.


Table 2. List of Complexity Frameworks (Adapted from Gill and Hicks, 2006).
Construct Type - Description
Degree of Difficulty - Perceived or observed difficulty
Sum of Job Characteristics Index/Job Diagnostic Survey - Task potential to induce a state of arousal or enrichment; measured using self-reporting instruments such as JCI and JDS
Degree of Stimulation - Task potential to induce a state of arousal or enrichment; measured using physiological responses
Information Load - Objective measure of throughput; total information processes or information processed per second
Knowledge - Amount of knowledge subject must possess in order to perform task
Size - Minimum theoretical size of problem space
Number of Paths - Number of alternative paths that are possible given a strategy
Degree of Task Structure - Lack of strategy or structure needed to move from initial state to goal state
Novelty of Task - Uniqueness of task to subject; routine tasks are not complex using this measure
Degree of Uncertainty - Degree that the outcome of the task cannot be predicted at initial state
Complexity of Underlying Systems/Environment - Number of objective attributes
Function of Alternatives and Attributes - Objective function of the number of alternatives and the task attributes
Function of Task Characteristics - Direct function of all possible task characteristics
Another measure of task complexity could be manipulated through the
represented geographic relationships. For example, Smelcer and Carmel (1997) compared
spatial information used in the decision-making process presented through tables and
maps. In both the tables and the maps, common geographic relationships used in business
decision-making using spatial data were utilized. These included proximity, adjacency
and containment. Examples of proximity in the context of geographic relationships


include route optimization, examples of adjacency include territory assignment, and
examples of containment include site selection.
In their study of geographic containment and adjacency tasks, Dennis and Carte
(1998) discovered that when users are presented with geographic data that represents
geographic containments, tabular data presentations might lead to better decision-making,
while adjacency tasks benefit from map-based visualizations.
While most research into Complexity Theory (as pertaining to decision-making)
and Task-Technology Fit Theory has focused on individual decision performance, recent
technological innovations have led to collaborative uses of geospatial data and
information that may require these theories to be revisited through a collaborative
perspective. Geospatial data and information can lead to collaborative decision-making
through two distinct ways. First, decision-making tools utilizing geospatial information
can be used for collaborative decision-making with geographically and temporally
distributed participants. Second, through the recent phenomenon of online social
networks, geospatial information can be shared and utilized ubiquitously through vast
online communities. Each of these methods is discussed next.
A large and varied stream of research exists in the area of collaborative decision-
making utilizing geographic data. For example, grassroots groups and community
organizations have adopted Public Participation Geographic Information Systems
(PPGIS) to address the need for collaborative decision-making utilizing complex
geospatial data (Sieber, 2006). In their work, Conroy and Gordon (2004) empirically
look at the ability of a software application to increase citizen involvement in complex
policy discussions and propose that geovisualization can offer citizen participants


opportunities to better envision scenarios and can provide additional communication
channels to decision makers. Through the ubiquity provided by networked computing and
recent technologies, it may be possible for groups of organizations to collaborate and
form virtual organizations (Grimshaw, 2001). Jankowski and Nyerges (2001) studied the
use of GIS in a collaborative decision-making environment and discovered that decision
outcomes such as participant agreements and shared understanding could be more
effectively reached through the use of PPGIS. While collaborative use of geographic data
is well understood, the effect of geospatial information presentation on group
decision-making performance remains largely unexplored. In their framework development
research, Mennecke and Crossland (1996) call for additional exploration of GIS and its
capabilities in collaborative decision-making.
In addition to collaborative decision-making, another area of current research in
the usage of geospatial data involves how such data is utilized within online social
networks. This is especially important with the increasing use of online social networks,
because large quantities of geospatially referenced data are increasingly being shared
through such networks. Goodchild (2007) labels the geographic data that is commonly
shared through online social networks as Volunteered Geographic Information (VGI).
Some online social networks have included geographic information as a core component
in their implementations. Such availability of geographic information through online
social networks has even allowed researchers to map online social networks (Khalili et
al., 2009). A geographic visualization of online social networks can provide decision
makers with a geospatial representation of a virtual phenomenon. From a business
perspective, a geospatial understanding of social networks can allow strategic decision
makers to target marketing campaigns or locate retail operations in geographic areas
appropriate for their target audiences. However, the successful interpretation of VGI
from online social networks is hampered by several drawbacks. These drawbacks exist
primarily because geographic data provided by members of online social networks varies
in quality and accuracy. For example, one image may be tagged only with the place name
"Denver" while another is tagged with precise geographic coordinates. Additionally, because
there are no validation processes, a user can easily misidentify a location or intentionally
provide incorrect geospatial tags (Khalili et al., 2010; Flanagin & Metzger, 2008).
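As a purely illustrative sketch of this variability (the function, regular expression and example tags below are assumptions for demonstration, not part of the cited studies), a volunteered geotag can be triaged by whether it carries machine-usable coordinates or only an ambiguous place name:

import re

def classify_geotag(tag: str) -> str:
    """Roughly classify a volunteered geotag as coordinates, a place name, or invalid."""
    match = re.match(r"^\s*(-?\d+(\.\d+)?)\s*,\s*(-?\d+(\.\d+)?)\s*$", tag)
    if match:
        lat, lon = float(match.group(1)), float(match.group(3))
        if -90 <= lat <= 90 and -180 <= lon <= 180:
            return "coordinates"   # precise and machine-usable
        return "invalid"           # out-of-range values suggest a tagging error
    if tag.strip():
        return "place name"        # e.g., "Denver": ambiguous, requires geocoding
    return "invalid"

print(classify_geotag("39.7392, -104.9903"))   # coordinates
print(classify_geotag("Denver"))               # place name
print(classify_geotag("999, 999"))             # invalid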
User Characteristics
In addition to task-characteristics, researchers suggest that the characteristics of
the user also play a role in decision-performance. Such characteristics include context-
based factors, experience level, self-efficacy, cognitive workload and spatial reasoning
ability.
Several researchers have discussed the importance of context as an extension to
the research into the usability of geo-visualization tools, such as SDSS and GIS (e.g.,
Albert & Golledge, 1999; Mennecke et al., 2000; Jarupathirun & Zahedi, 2001; Slocum,
Blok, Jiang, Koussoulakou, Montello, Fuhrmann, & Hedley, 2001; Speier & Morris,
2003). Slocum et al. reported that context-based factors influence the ability to interpret
geo-visualized information. For example, expertise, culture, sex, age, sensory
disabilities, education, ethnicity, physiology and anatomy, and socioeconomic status
(Slocum et al., 2001, p. 10) are identified by the authors as influencing the ability to
interpret geospatial information. Additionally, Zipf (2002) posits that geovisualized maps
must address user contexts such as pre-existing knowledge of the area presented in the
map as well as physical and cognitive impairments and abilities. Specifically, an example
was demonstrated where a young child was presented with a geovisualization tool in
which abstract information was removed and a map that more closely represents reality
was presented to facilitate better interpretation. Zipf further posits that a
user's cultural context can influence the interpretation of the colors used in a map. For
example, in some cultures the color green may represent parkland or forests on maps,
while in others it represents bodies of water. In their work, Albert and Golledge tested
tasks and measured gender as a control variable. One of their conclusions was that men
performed significantly better in operations involving "not" operators. An example of a
"not" operation in the geospatial context would be to select all neighborhoods that are
near a public school, but not near a prison. Additionally, the authors found that there were
no significant differences in test scores between subjects with GIS experience versus
those that had none. This is an essential observation as GIS and SDSS technologies are
often implemented as web-based technologies and can have users with limited
geographic visualization and information knowledge. Finally, both Zipf and Slocum et al.
point out the importance of considering sensory disabilities when developing
geovisualization technologies.
Speier and Morris (2003) discovered that task experience, database experience,
gender and computer self-efficacy were non-significant in their analysis of query
interface design on decision performance. Other user context variables, such as those
related to information learning and fatigue from working through multiple
tasks, were controlled for by Swink and Speier (1999). Mennecke et al. compared
subjects with previous SDSS experience to subjects with limited SDSS experience to
determine if experience influenced efficiency and accuracy of the solutions. In their
experiments, the cognitive effort required in the decision-making process was measured
using a condensed version of the Need for Cognition (NFC) instrument. However, their
research found only marginal support for an increase in solution accuracy and no
difference between subject groups in regard to solution efficiency. Additionally,
Mennecke et al. discovered that experience only presented significant improvement on
solution accuracy when working with paper maps. They discovered that students were
more efficient than professionals in solving geographic problems. While this may seem
surprising, it may be due to professionals incorporating multiple levels of analysis that
students, with limited experience, may not be able to draw upon.
In addition to research into the importance of user context and user experience as
related to geovisualization, other theories and constructs, particularly those from
psychology and organizational behavior, are also utilized, including self-efficacy,
motivation, goal-setting and Image Theory. For example, Jarupathirun and Zahedi (2007)
introduced a perceived performance construct consisting of decision satisfaction, SDSS
satisfaction, perceived decision quality and perceived decision efficiency. In their
findings, it was discovered that perceived decision efficiency was the greatest motivator
for goal commitment. While decision quality is likely more important than efficiency, the
authors proposed that there might be a perception that SDSS improves decision quality
inherently.
Additionally, Jarupathirun and Zahedi (2001) posit that based on empirical
research into the theories associated with goal-setting, users who set a higher goal level
will be motivated to expend more effort toward reaching the desired goals. Jarupathirun
and Zahedi also argue that intrinsic incentives, such as perceived effort and perceived
accuracy, can influence goal commitment levels, which are known to moderate the
effects of goal levels on performance (Hollenbeck & Klein, 1987). Finally, in an effort to
combat the lack of motivation and/or expertise, some researchers have provided financial
incentives and used experiment tasks from domains familiar to the subjects (Speier,
2006).
Crossland et al. (1995) extended Image Theory into the realm of decision-making
by proposing that the efficiencies gained through the use of electronic maps, versus paper
maps, would improve decision performance. Their study revealed that decision-performance,
as measured through both decision-accuracy and decision-time, improved with
the use of electronic maps versus paper maps at two different complexity levels.
Additionally, Jarupathirun and Zahedi (2007) discovered that self-efficacy had
strong positive influences on task-technology fit and other expected outcomes, as well as
a strong negative influence on perceived goal difficulty. It is suggested that repeated,
successful completion of tasks could improve self-efficacy, which could be accomplished
through training and learning as well as tutorials and support systems.
Another user-characteristic explored was the mental workload exhibited by
subjects performing geospatial decision-making tasks. Speier and Morris (2003)
measured Subjective Mental Workload (SMW) using the NASA Task Load Index
(NASA-TLX) after each completed task and discovered that when comparing visual- and
text-based interfaces, with low and high complexity decisions, the use of visual interfaces
carried a reduced SMW. Speier and Morris suggest that research into the SMW could
benefit from additional investigation. In particular, the NASA-TLX measure could use
additional validation, as the user-reported cognitive loads might not represent actual
cognitive loads that could be measured utilizing actual physiological responses.
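For reference, the NASA-TLX combines six subscale ratings into an overall workload score; the sketch below illustrates the commonly used raw and weighted scoring procedures (the ratings and pairwise-comparison weights shown are hypothetical and are not taken from Speier and Morris):

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def weighted_tlx(ratings: dict, weights: dict) -> float:
    """Weighted TLX: ratings are 0-100; weights come from 15 pairwise comparisons and sum to 15."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

def raw_tlx(ratings: dict) -> float:
    """Raw TLX: the unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(weighted_tlx(ratings, weights))   # 59.0
print(raw_tlx(ratings))                 # 45.0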
Table 3. User-Characteristic Measurement Instruments Used in Examined Research, not including spatial ability measures (as shown in Table 4).
Study: Additional User Context Instrument/Cognitive Load Test Used
Speier and Morris, 2003: NASA-TLX
Mennecke et al., 2000: NFC [modified]
Jarupathirun and Zahedi, 2007: Self-Efficacy [as recommended by Marakas et al., 1998]
Huang, 2000: General Measure of Information Rate
Finally, the spatial reasoning ability of decision-makers utilizing geovisualized
information must be further explored, as research on whether spatial reasoning aids
decision-making with spatial data has produced conflicting findings. Some research has
presented no evidence, or conflicting evidence, of the effects of
spatial ability on decision performance. For example, Smelcer and Carmel (1997)
discovered no statistical significance between spatial ability and the effects of
information representation, task difficulty and geographic relationships on decision
performance. The researchers speculated that due to the nature of the tasks, which did not
involve the need to navigate spatial problems, spatial visualization techniques were not
required (Smelcer & Carmel, 1997). Swink and Speier (1999) call for "more in-depth
investigations of visual skills related to decision-making performance" (p. 189).
Additionally, while in their early work Jarupathirun and Zahedi (2001) questioned whether
spatial ability has any impact on system utilization and decision-making performance,
they later determined that spatial ability, as measured through spatial orientation ability
and visualization ability, had no significant effect on perceived task-technology fit.
These findings are of value because they suggest that high spatial ability does
not influence a user's perception of the technology. This is essential when developing a
technology for the public Internet, where it is impractical to ensure that all users of a
technology have a prerequisite spatial ability (Jarupathirun & Zahedi, 2007).
However, other research has discovered that there are effects between spatial
ability and decision performance. For example, Swink and Speier (1999) determined that
higher spatial orientation skills produced a higher decision quality and required less
decision time; however, this finding was only significant for large problems with low
data dispersion. Additionally, Speier and Morris (2003) found that spatial reasoning
ability alone had no significant effects on decision outcomes. However, when combined
with interface design, spatial reasoning ability had a significant effect on decision-
accuracy.
Furthermore, research in other domains has identified a connection between
spatial ability and geovisualization tools. For example, Rafi, Anuar, Samad, Hayati, and
Mahadzir (2005) discuss the use of online virtual environments to facilitate the
instruction of spatial thinking skills. In their study of 98 pre-service undergraduate
students, only seven, or about 7%, were found to have any previous spatial
experience. Rafi et al. imply that such a gap is a crucial issue and creates a hurdle for
students pursuing careers that require qualitative spatial reasoning. As students with no
pre-existing spatial thinking had difficulties in courses requiring spatial thinking ability, it
is likely that users lacking spatial thinking skills would have difficulties utilizing
geovisualization tools.
Additionally, students who have participated in courses that utilize
geovisualization tools, such as computerized cartography or geographic information
systems, have demonstrated improvement in their spatial thinking ability (Lee &
Bednarz, 2009). In their research, Lee and Bednarz point out that psychometric testing
designed to assess spatial abilities, such as spatial visualization and spatial orientation,
were generally focused on small-scale spatial thinking and thus not necessarily valid to
test large-scale geographic spatial abilities. However, Lee and Bednarz discovered that
recently, new spatial analysis tests have been developed which considered spatial abilities
in a geographic context (e.g., Audet & Abegg, 1996; Meyer, Butterick, Olkin, & Zack,
1999; Kerski, 2000; Olsen, 2000).
Based on these insights, the development of an updated measurement instrument to
specifically determine an individual's geospatial aptitude, which may be an essential
component of decision-making with geospatial data, is recommended. Furthermore, it is
suggested that this measurement instrument be based on the work of Lee and Bednarz
(2009), who developed and validated a spatial skills test specifically designed to
overcome shortcomings of previous spatial skills tests.
Table 4. Spatial Reasoning Instruments Used in Examined Research.
Study: Spatial Reasoning Test Used
Smelcer & Carmel, 1997: VZ-2 (Spatial Visualization)
Albert & Golledge, 1999: Three paper/pencil tests to assess spatial ability
Swink & Speier, 1999: S-1 (Spatial Orientation)
Speier & Morris, 2003: S-1 (Spatial Orientation)
Jarupathirun & Zahedi, 2007: VZ-2 (Spatial Visualization), S-1 (Spatial Orientation)
Lee & Bednarz, 2009: New Spatial Skills Test
Decision-Making Performance
Another key component of geospatial decision-making is the measure of decision-
making performance. To determine decision-making performance, most researchers
utilize objective measures of decision-time and decision-accuracy as indicators of
decision-making performance, including Crossland et al. (1995), Dennis and Carte (1998)
and Speier (2006). However, in their measure of decision performance, Smelcer and
Carmel (1997) simply examined the length of time required to make a decision. Others
have proposed various additional indicators, such as decision-concept and regret (Sirola,
2003; Hung, Ku, Ting-Peng & Chang-Jen, 2007), which among other indicators, are
discussed below.
While decision-time and decision-accuracy are indicators of decision-
performance, Sirola (2003) posits that the use of an appropriate decision-analysis
methodology will undoubtedly influence decision-performance metrics and could modify
the decision maker's perceptions of the decision-making process and result. These
decision-analysis methodologies include cost-risk comparisons, knowledge-based
systems, cumulative quality function, chained paired comparisons, decision trees,
decision tables, flow diagrams, pair-wise comparison, cost functions, expected utility,
information matrices, multi-criteria decision aids and logical inference/simulation.
In their vignette-based research, Speier and Morris (2003) identified decision
performance as a decision outcome, which consisted of subjective mental workload,
decision-accuracy and decision-time constructs. In their research it was discovered that
there were significant interaction effects between interface type (text/visual) and task
complexity on Subjective Mental Workload (SMW), as well as interface type and task
complexity individually.
While CFT and Complexity Theory focus on the relationships between user- and
task-characteristics and decision-performance, Goodhue's Task-Technology Fit Theory
posits that a technology will improve task performance if the capability of the technology
matches the task to be performed (Goodhue, 1995; Goodhue & Thompson, 1995).
Smelcer and Carmel (1997) demonstrate the importance of utilizing the most appropriate
types of geographic relationships for the tasks at hand through their extension of Task-
Technology Fit Theory. Geographic relationships often found in business data include
proximity, adjacency and containment tasks. Additionally, Jarupathirun and Zahedi
(2007) synthesized research on task-technology fit with the psychology-based constructs
of goal setting and self-efficacy to further explain and determine success factors of SDSS.
The use of task-technology fit allows researchers to examine user-satisfaction, which
assesses beliefs about a system and has been shown to impact adoption and intention to
use the technology (Rogers, 1983; Taylor & Todd, 1995; Karimi, Somers & Gupta,
2004).
In their work, Jarupathirun and Zahedi (2007) explored users' perceived decision
quality, perceived decision performance, decision satisfaction and SDSS satisfaction and
suggest that these be further incorporated into the Technology Acceptance Model (TAM) or the
Unified Theory of Acceptance and Use of Technology (UTAUT). Additionally, Dennis
and Carte (1998) discovered that when using map-based presentations, users were more
likely to utilize a perceptual decision process, while tabular data presentations induced an
analytical decision process. In their work, time and accuracy were used as measurements
of decision performance. While numerous researchers utilize perceived measures or
objective decision performance measures to determine the results of decision-making
performance, Hung et al. (2007) suggest that perceived regret should also be considered,
as many decision makers consider potential regret when making decisions. Their study
discovered significant reduction in regret for subjects who utilized a DSS over those who
did not. Other constructs and theories, particularly those from psychology and
organizational behavior, are also utilized, including self-efficacy, motivation,
goal-setting and Image Theory. As noted earlier, Jarupathirun and Zahedi's perceived
performance construct, consisting of decision satisfaction, SDSS satisfaction, perceived
decision-quality and perceived decision-efficiency, identified perceived decision-efficiency
as the greatest motivator for goal commitment, with the authors proposing that SDSS may be
perceived as inherently improving decision-quality. These findings are consistent with other
studies in which task-characteristics have been shown to impact user satisfaction as
measured through task-technology fit (Karimi et al., 2004).
In their study of visual-query interfaces, Speier and Morris (2003) discovered that
decision-making performance increased by utilizing a visual query interface when
working with complex decisions. In addition, Swink and Speier (1999) discovered that
moderate amounts of data dispersion required longer decision-times than tasks with low
data dispersion. In their work, Albert and Golledge (1999) developed three paper-and-pencil
tests to examine task complexity across experience levels and gender.
Based on existing literature it is evident that the most common measures of
decision-making performance are the objective measures of decision-time and decision-
accuracy. However, other research has suggested the use of perception of the decision-
making process and performance could also be utilized, particularly as user perceptions
often play a role in technology acceptance.
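As a minimal illustration of how these two objective measures are typically operationalized in laboratory tasks (the log structure and field names below are hypothetical), decision-time and decision-accuracy can be computed directly from task logs:

from datetime import datetime

# Hypothetical task log: when each trial started and ended, and what was answered.
trials = [
    {"start": datetime(2013, 5, 1, 10, 0, 0), "end": datetime(2013, 5, 1, 10, 2, 30),
     "answer": "Site B", "correct_answer": "Site B"},
    {"start": datetime(2013, 5, 1, 10, 5, 0), "end": datetime(2013, 5, 1, 10, 9, 15),
     "answer": "Site A", "correct_answer": "Site C"},
]

# Decision-time: elapsed seconds per task.
times = [(t["end"] - t["start"]).total_seconds() for t in trials]

# Decision-accuracy: proportion of tasks answered correctly.
accuracy = sum(t["answer"] == t["correct_answer"] for t in trials) / len(trials)

print(sum(times) / len(times))   # mean decision-time in seconds (202.5)
print(accuracy)                  # decision-accuracy (0.5)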
Conceptual Model
Based on the reviewed literature and associated theoretical frameworks, a
conceptual model of business decision-making using geospatial data is proposed. This
model consists of four distinct constructs, including information presentation, task-
characteristics, user-characteristics and decision-performance. Information presentation
was determined to be a key antecedent of decision-performance as suggested through
Vessey's (1991) CFT. Literature has shown that different information presentation
methods may be required based on the geospatial problem being solved. Additionally,
task-characteristics have demonstrated an impact on decision-performance. Specifically,
task complexity, problem type, data dispersion, group decision-making and data quality
have been shown to define task-characteristics. In addition to information presentation
and task-characteristics, user-characteristics have also been shown to influence decision-
performance. Such user-characteristics include age, gender, prior experience, culture,
sensory ability, education, self-efficacy, task motivation, goal-setting, mental workload,
and geospatial reasoning ability. Finally, while decision-accuracy and decision-time were
the most common measures of decision-performance, decision-satisfaction, decision-
regret and decision-methodology could be valid measures of decision-performance.
The relationships between these constructs are presented through the following
three propositions:
Proposition P1: Information presentation impacts decision-performance.
Proposition P2: Task-characteristics impact decision-performance.
Proposition P3: User-characteristics impact decision-performance.
Figure 1 presents this conceptual model visually. This dissertation will explore
these propositions and the conceptual model through an empirical study presented in
Chapter V.
Figure 1. Proposed Research Model (Information Presentation, Task Characteristics, and User Characteristics as antecedents of decision-performance).
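Although the propositions are tested formally in Chapter V, the following sketch indicates one way they could be examined empirically; it uses synthetic data and hypothetical variable names purely for illustration and does not represent the dissertation's actual analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the three antecedent constructs and one performance measure.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "info_presentation": rng.integers(0, 2, n),   # 0 = tabular, 1 = map-based
    "task_complexity": rng.integers(1, 4, n),     # 1 = low ... 3 = high
    "user_gra": rng.normal(0, 1, n),              # standardized geospatial reasoning score
})
df["decision_accuracy"] = (0.5 * df["info_presentation"] - 0.3 * df["task_complexity"]
                           + 0.4 * df["user_gra"] + rng.normal(0, 1, n))

# Regress decision-performance on the three antecedents (Propositions P1-P3).
model = smf.ols("decision_accuracy ~ info_presentation + task_complexity + user_gra",
                data=df).fit()
print(model.summary())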
Discussion
The following chapters will include the design of measurement instruments and
development of laboratory experiments to test the proposed conceptual model and
propositions.
Limitations of Reviewed Literature
Four key limitations were discovered in the reviewed literature, including (1) the
choice of research subjects, (2) the selection of task types, (3) the motivation of subjects
to successfully complete the problem solving experiment and (4) a lack of experiments
testing the full conceptual model.
First, in the majority of the literature reviewed for this chapter, undergraduate
students were utilized as research subjects (see Table 5), who may not accurately
represent business decision makers who utilize geospatial data (e.g., Speier & Morris,
2003; Swink & Speier, 1999; Smelcer & Carmel, 1997). However, research conducted by
Mennecke et al. (2000) discovered that there were few differences between the results of
university students and business professionals when performing their study. Jarupathirun
and Zahedi (2007) cited Mennecke et al. (2000) in their research as a strong validation
that university students are a valid proxy for professionals. However, Jarupathirun and
Zahedi (2007) also point out that many university students are of a younger demographic
population, which might have had previous experiences with web-based SDSS such as
geovisualized search tools (e.g., restaurant and ATM locators) and online mapping tools
(e.g., Google Maps), providing them with a significant amount of prior SDSS experience.
Thus, it is recommended that future studies, which elect to use student populations,
provide a justification of the sample choice and clearly discuss limitations of
generalizability per the recommendations of Compeau, Marcolin, Kelley, and Higgins
(2012).
Table 5. Research Population Groups.
Population: Study
Undergraduate College Students: Crossland et al., 1995
Undergraduate Students: Smelcer & Carmel, 1997
Graduate Business Students: Dennis & Carte, 1998
Undergraduate Students in Geography, Psychology and an Introductory GIS Class: Albert & Golledge, 1999
Undergraduate Students with Previous GIS Coursework: Swink & Speier, 1999
University Students and Professionals: Mennecke et al., 2000
Undergraduate Students: Speier & Morris, 2003
Undergraduate Students: Speier, 2006
Undergraduate Business Students: Jarupathirun & Zahedi, 2007
University Students: Lee & Bednarz, 2009
Second, the problem types examined by existing research have presented some
additional limitations. While some researchers chose real estate/home finding as their
task method (e.g., Speier & Morris, 2003), others chose more domain specific tasks. This
is a concern as, for example, the task of locating an automated banking kiosk would
reflect a fundamentally different problem than determining properties that may be
impacted by a natural disaster. It is suggested that future research carefully select
problem types to facilitate an improved comparison of research results.
Third, the motivation for completing the research tasks accurately can be
questioned. To address this issue, some researchers provided monetary incentives to
participants plus additional monetary incentives for higher decision performance (Hung
et al., 2007). Furthermore, simulated experiments may not adequately reveal how
individuals make real world decisions, as the motivations might be different.
Additionally, group decision-making may have different motivations than individual
decision-making.
Finally, few studies measured task- and user-characteristics simultaneously with
information presentation to determine moderating impacts of each construct. Thus, it is
suggested that the entire conceptual model be tested empirically to determine the effects
of each antecedent construct on decision-performance.
In addition to addressing these limitations, numerous future research
opportunities have presented themselves in the course of this literature review.
Future Research
It is suggested that additional work be performed to test the proposed
comprehensive model utilizing varying task and user characteristics which could help
identify further moderating and mediating variables.
For example, Swink and Speier (1999) suggest that complexity levels could be
increased to determine if their findings still hold true. Albert and Golledge (1999) call for
more research into specific tasks and how groups of individuals are able to make
decisions using geovisualization.
In their work, Jarupathirun and Zahedi (2007) measured effects of perceptual
constructs including perceived efficiency and perceived accuracy; however, a comparison
to objective measures was not made and the authors suggested such an experiment as
potential future research. Additionally, the authors suggested that users' perceptions over
time could be measured to develop a more comprehensive understanding. Crossland et al.
(1995) suggest that future research assess "decision-maker confidence, user process
satisfaction, and individual level of motivation" (p. 232).
The surveyed literature suggested numerous future research possibilities within
the four research themes presented within this paper: information presentation,
task-characteristics, user-characteristics, and their effects on decision-performance
using geospatial data. Of these, the theme of user characteristics seems most
promising, particularly in regard to a deeper understanding of the importance of
geospatial ability. Some general research questions, based on the conceptual model
include:
Which geovisualization techniques improve decision-performance?
Which specific user-characteristics impact decision-performance?
Can specific geovisualization techniques overcome user-characteristics
that negatively impact decision-performance?
Which specific task-characteristics impact decision-performance?
Can specific geovisualization techniques overcome task-characteristics
that negatively impact decision-performance?
Potential research questions related to geospatial ability include:
Do current spatial ability measurement instruments properly identify the
ability to analyze and interpret geospatial data as would be defined in a
geospatial ability construct? If not, what key antecedents can measure
geospatial ability?
Does geospatial ability influence decision-making performance? And if so,
can inconsistencies in past empirical studies be explained?
Do actual geospatial ability and perceived geospatial ability differ?
To answer these questions, it is proposed that a geospatial ability construct be
developed along with a comprehensive measurement instrument. Such a construct would
allow researchers to more easily validate research results, which could provide business
leaders with a better understanding of the need for geospatial ability in the workforce.
Furthermore, an understanding of the importance of geospatial ability could lead to
refined geovisualization tools which may overcome potential gaps in a user's geospatial
ability. Additionally, it is suggested that future geospatial decision-making research
emphasize the importance of including measures for each of the three antecedent
constructs (information presentation, task-characteristics, user-characteristics) to
determine their combined effects on decision-performance. Future research into decision-making
using geospatial data should continue to validate existing theory as well as
provide business decision-makers with sound best practices and tools for decision-making.
Such an understanding could also lead design science researchers to develop refined
geovisualization tools that compensate for potentially negative task- and user-characteristics,
including limited geospatial ability.
This dissertation will include the development of a geospatial reasoning ability
construct, measures of such a construct, as well as empirical research to determine the
potential benefits of such a construct.
Conclusion
As organizations collect large amounts of geospatial data, there is a need to
effectively utilize the collected data to make strategic and organizational decisions.
However, literature describing the best techniques to make decisions using geospatial
data as well as the best approaches for geovisualization is limited. This literature review
revealed that existing research can provide a strong foundation for future exploration of
how business decision-making using geospatial data occurs. Additionally, a conceptual
model for the study of effects of geovisualization on decision-performance is presented
and defined through existing theory. Along with the conceptual model, numerous
applicable research methods, existing constructs, potential limitations, validity concerns
and potential future research questions were presented.
While business school curricula often include basic computing courses, as well
as courses designed to convey an understanding of database and spreadsheet tools, these
curricula have largely neglected to incorporate even a basic understanding of the importance
of geospatial data and of the unique tools used to interpret such data. Although in the last
ten years some business schools have recognized the need to include GIS by offering a required
course, an elective course, or even a degree emphasis or research center (Pick, 2004),
following the insights presented in this paper, it is recommended that business
curricula include problem-solving exercises utilizing geospatial data.
Based on the numerous contradictions in past research, particularly pertaining to
task complexity, the development of a common framework for determining task
complexity levels in problem-solving scenarios is recommended. Such a framework
could then be used to benchmark research and allow researchers to more easily compare
results. Additionally, based on the discrepancies in previous research into geospatial
ability, the importance of spatial ability in problem solving using geovisualized
information must be better understood in order to ensure that businesses and individuals
are able to make better decisions using geographic data. This research will
allow information system practitioners to effectively utilize geovisualization tools to
organize and present large quantities of geospatial data.
This chapter reviewed the use of geospatial visualization and its effects on
decision performance, which is one of the many components of decision-making using
geospatial data. Additionally, this chapter proposed a comprehensive model allowing
researchers to better understand decision-making using geospatial data and provided a
robust foundation for future research. Finally, this chapter made an argument for further
research into task- and user-characteristics and their effects on decision-performance
when utilizing geospatial data.
CHAPTER III
GEOSPATIAL REASONING ABILITY: CONSTRUCT AND
SUBSTRATA DEFINITION, MEASUREMENT AND VALIDATION2
Abstract
The preceding chapter suggested several potential research questions related to
geospatial reasoning ability. The first of these questions relates to determining how
geospatial reasoning ability can be measured, as well as what the key antecedents of
geospatial reasoning ability are. Thus, Chapter III presents the development of a
comprehensive construct defining individual geospatial reasoning ability, a key user-
characteristic that has provided mixed results in previous empirical studies.
Today's organizations often gather large quantities of geographic and geospatially
referenced data to support business decision-making. Prior research investigating the
significance of user-characteristics on decision-performance, when working with
geospatial data, has presented conflicting results. This is particularly true in regard to the
impact of geospatial reasoning ability on the ability to perform efficient and effective
decisions making. As the amount of geographic- and geospatially-referenced data grows,
it is essential to develop a comprehensive understanding of how user characteristics, such
as geospatial reasoning ability, influence decision-performance. Furthermore, such an
understanding is an essential component within the human-computer interaction domain.
This research introduces two new constructs, Geospatial Reasoning Ability and
2 An earlier version of this chapter was submitted as a Comprehensive Examination Paper and approved in
September 2012. A subsequent version of this chapter was presented at the Americas Conference on
Information Systems (AMCIS) 2011 and is currently under review with Decision Support Systems.
Geospatial Schematization, and presents validated measurement scales to accompany
these constructs.
Introduction
Geospatial data permeates business computing. Many business decisions, from
transport and logistics to marketing and product development, now rely on geospatial
data. This highlights the need for an improved understanding of ways to best utilize
geographic data when making such decisions. Scholars have stated that over 75 percent
of all business data contains geographic information (Tonkin, 1994) and that 80 percent
of all business decisions utilize geographic data (Mennecke, 1997). It is very likely that
these percentages are now even greater due to the prevalence of devices that can collect
geo-referenced data, particularly GPS-enabled mobile computing platforms, such as
smartphones and tablets.
Researchers have found that a clear understanding of such geospatial data can
lead to improved business decision-making (Pick, 2004). Specialized tools have been
developed to help businesses utilize the immense body of geospatial knowledge that is
continuously being collected. These tools include Spatial Decision Support Systems
(SDSS) and Geographic Information Systems (GIS).
SDSS function much like Decision Support Systems (DSS), but are designed to
enable business decision-makers to better understand the complexities of geospatial data.
GIS allow experts to analyze and report geographic data. As various GIS and SDSS tools
evolve, many of their functionalities and capabilities are converging. Of additional
interest is that these tools have evolved to allow non-geospatial experts to easily perform
decision-making functions using geospatial data. For example, many banks provide
online and mobile tools allowing consumers to locate automated banking kiosks with
particular criteria, such as hours of operation and types of cards accepted.
As the volume of geographic and geographically-referenced data grows and
organizations rely on geovisualization to present such data to business decision-makers, it
is imperative that all aspects of the decision-making process are fully understood in order
to help researchers and practitioners develop optimal systems. This chapter provides the
first step toward improving our understanding of how geospatial reasoning ability
impacts the capability of business decision-makers to use geovisualization when making
decisions.
There are several research questions related to geovisualization, decision
performance and geospatial reasoning ability. First, can geospatial reasoning ability be
measured practically? Second, does geospatial reasoning ability moderate decision-
making performance? Third, can specific geovisualization techniques allow a decision
maker without strong geospatial reasoning ability to make sound decisions? The first of
these questions is addressed in this chapter.
This chapter begins with a literature review emphasizing the theoretical
background and the current understanding of user characteristics, particularly spatial
reasoning, on geospatial decision performance. Then, a section describing the scale
development procedure is presented. This is followed by a formal definition of the
proposed construct and substrata, along with a measurement model. Next, an exploratory
study is presented and discussed. Finally, limitations, future research suggestions and a
conclusion are presented.
Literature Review
This section provides an analysis of recurring topics found in literature related
to geovisualization and decision performance, along with a review of literature which
explores the effects of user-characteristics on decision-making performance, with an
emphasis on research related to spatial reasoning. As business leaders are faced with
tasks such as the interpretation of large quantities of business data containing geospatial
information, it is crucial for researchers to understand how such decisions are made.
Particularly, which presentation formats are most appropriate, which individual
characteristics make for stronger decision-makers, and which specific decision tasks can be
performed utilizing geospatial data? Applying an existing theoretical lens to this problem will
allow researchers to draw upon established research in an attempt to answer these
questions.
Cognitive Fit Theory
Numerous researchers have explored the importance of visual information
presentation. For example, Vessey (1991) posits the CFT, which suggests that when
information presentation matches the problem-solving task, higher quality decisions are
made. The CFT has been referenced as the theoretical background, extended into other
domains and validated in numerous empirical studies, such as Mennecke et al. (2000),
Smelcer and Carmel (1997) and Speier and Morris (2003). Extensions of the CFT include
work performed by Dennis and Carte (1998), which demonstrated that when map-based
presentations are coupled with appropriate tasks, decision processes and decision
performance are influenced. Additionally, Mennecke et al. expand on the CFT by
determining the effects of subject characteristics and problem complexity on decision
efficiency and accuracy. Speier (2006) presented a review of eight empirical research
papers that tested for cognitive fit. This research discovered that all but one paper
(Frownfelter-Lohrke, 1998) either fully or partially supported the CFT.
Geovisualization and Decision-Performance
While geospatial data can be presented utilizing traditional methods such as
tables, unique relationships contained within geospatial data are often only apparent
through geovisualization (Reiterer et al., 2000). Geovisualization, or the visualization of
geospatial or geospatially-referenced data, and its effect on decision performance has
been explored in several key works, including Crossland et al. (1995), Smelcer and
Carmel (1997), Speier and Morris (2003) and Dennis and Carte (1998).
Research has revealed that decision performance was improved through the use of
GIS-based SDSS (Crossland et al., 1995), graphical-based interfaces for geospatial data
(Speier & Morris, 2003), geovisualization for tasks with a high task-complexity (Smelcer
& Carmel, 1997) and for presenting adjacent information (Dennis & Carte, 1998).
Comparisons were made with paper maps (Crossland et al., 1995), text-based interfaces
to geospatial data (Speier & Morris, 2003) and tabular presentation (Dennis & Carte,
1998).
Research that examines information presentation and its effects on decision
performance utilizes numerous measurement indicators; however, objective measures,
such as decision-time and decision-accuracy, are the most common (Crossland et al.,
1995; Dennis & Carte, 1998; Smelcer & Carmel, 1997; Speier, 2006). Other indicators
have included perceptions of decision-performance and even decision-regret (Hung et al.,
2007; Sirola, 2003). Researchers exploring the relationship of geovisualization on
decision-performance also often include task characteristics, user characteristics, or both,
in their work.
User-Characteristics
There is growing research into the effects of user characteristics, particularly
context-based (Albert & Golledge, 1999; Slocum et al., 2001; Zipf, 2002) and cognitive-
based (Mennecke et al., 2000; Speier & Morris, 2003). Furthermore, user-characteristics
explored in research have included gender (Albert & Golledge, 1999), previous
experience with SDSS tools (Mennecke et al., 2000), self-efficacy and motivation
(Jarupathirun & Zahedi, 2001, 2007).
While the existing work on user characteristics has revealed important insights,
research into the importance of spatial reasoning ability has produced conflicting results
(Smelcer & Carmel, 1997; Swink & Speier, 1999; Speier & Morris, 2003; Rafi et al.,
2005; Jarupathirun & Zahedi, 2007; Lee & Bednarz, 2009). For example, Smelcer and
Carmel discovered no significant relationships between spatial ability and decision-
performance. However, the researchers did point out that this might be because the task-
characteristics may not have required the use of spatial ability from the research subjects.
Additionally, Swink and Speier noted inconsistencies in prior research measuring spatial
ability and called for additional research. Finally, Jarupathirun and Zahedi revealed that spatial
ability, as measured through spatial orientation ability and visualization ability, had no
effect on decision performance.
In contrast, another corpus of research has found relationships between spatial
ability and decision-performance. For instance, Swink and Speier (1999) revealed that
experiments utilizing task-characteristics involving large problems with low data
dispersion and user-characteristics of high spatial orientation improved decision
performance. Rafi et al. (2005) stated that students who participated in virtual
environments had improved spatial abilities, which could help overcome difficulty in
courses that require spatial skills, such as engineering graphics or cartography. Whitney,
Batinov, Miller, Nusser and Ashenfelter (2011) determined that address-verification
performance was significantly associated with spatial ability. Similarly, Lee and
Bednarz (2009) discovered that students who completed courses utilizing geospatial tools
gained greater geospatial reasoning ability as measured by a geospatial ability
measurement exam developed by the researchers. Rusch, Nusser, Miller, Batinov and
Whitney (2012) discovered that spatial ability has an effect on decision-performance
when testing for three dimensions of spatial ability: spatial visualization, logical
reasoning and perspective taking. These conflicting results may be due to the nature of
the spatial reasoning tests used (see Table 6).
Table 6. Spatial Reasoning Instruments Used in Examined Research.
Study: Spatial Reasoning Test(s) Used; Effect of Spatial Ability on Outcome
Smelcer et al., 1997: VZ-2 (Spatial Visualization); No Effect
Albert et al., 1999: Three Paper/Pencil Tests; Partial Effect
Swink et al., 1999: S-1 (Spatial Orientation); Effect
Speier et al., 2003: S-1 (Spatial Orientation); Effect
Jarupathirun et al., 2007: VZ-2 (Spatial Visualization), S-1 (Spatial Orientation); No Effect
Lee et al., 2009: Spatial Skills Test; Effect
Whitney et al., 2011: VZ-2 (Spatial Visualization), MV-2 (Visual Memory), PT (Perspective Taking); Effect
Rusch et al., 2012: VZ-2 (Spatial Visualization), Logical Reasoning, PT (Perspective Taking); Effect
Lee and Bednarz (2009) emphasize that most of the spatial ability tests utilized in
research involving geospatial ability are based on "table top" psychometric exams, such
as those that measure spatial orientation and spatial visualization. Indeed, many of the
reviewed studies utilized various components of popular spatial reasoning tests (see
Table 6). To overcome this limitation, Lee and Bednarz (2009) developed a test to
measure spatial reasoning within a geographic context.
Task-Characteristics
Research has shown that characteristics of the task being performed can play a
vital role in decision-making performance. For example, Complexity Theory (Campbell,
1988) posits that as task complexity increases so too does the need for information
presentation to match problem-solving tasks. Complexity Theory has been extended to
demonstrate that key aspects of geovisualization, including data aggregation, data
dispersion and task complexity, influence decision-making performance (Swink and
Speier, 1999). Additionally, Complexity Theory was validated by Smelcer and Carmel's
(1997) research, which confirmed that increased task difficulty led to decreased decision-
performance. Moreover, Crossland and Wynne (1994) discovered that decision-making
performance decreased less significantly with the use of electronic maps versus paper
maps.
Jarupathirun and Zahedi (2001) state that, based on research by Vessey (1991),
Payne (1976), Campbell (1988) and Zigurs and Buckland (1998), tasks can be classified
into two groups, simple and complex, based on task-characteristics. Characteristics of
complex tasks include multiple information attributes, multiple alternatives to be
evaluated, multiple desired outcomes, solution scheme multiplicity, conflicting
interdependence and uncertainty.
Several empirical studies have addressed task complexity. For example, Speier
and Morris (2003) discovered that decision-making performance increased by utilizing a
visual query interface when working with complex decisions. Additionally, Swink and
Speier (1999) defined task characteristics to include the problem size, data aggregation
and data dispersion. Their findings indicated that decision-performance, as measured by
decision-quality and decision-time, was superior for smaller problems. In the context of
data aggregation, there was no effect on decision quality; however, there was a
significant effect on decision-time, indicating that more time was required for
disaggregated problems. Additionally, it was discovered that while decision-quality for
problems with high data dispersion improved, there was no significant effect on decision-
time. Smelcer and Carmel (1997) confirmed what previous research had revealed: more
difficult tasks require greater decision-time. In their
work, Mennecke et al. (2000) discovered that as task complexity increases, accuracy is
lowered, yet found only partial support for task efficiency being lowered.
Scale Development Procedure
As many of the reviewed research projects explored different dimensions of
spatial ability, and most did not measure spatial reasoning with regard to geographic
context, this could partially explain the contradictory results found in these studies. This
finding suggests a need for an improved measure of spatial ability that is sensitive to
both geographic scale (task aware) and business decision-making (user aware).
Thus, a new construct and measurement scale, Geospatial Reasoning Ability
(GRA), are introduced in the following sections as a measure of an individual's cognitive
geospatial reasoning ability within the context of performing business decision-making.
This research utilized the construct development and validation procedures
recommended by MacKenzie, Podsakoff, & Podsakoff (2011) to define and measure
geospatial reasoning ability and to develop an appropriate measurement scale (Figure 2).
In their work, MacKenzie et al. present a ten-step scale development procedure. These
steps are classified into six categories: conceptualization, development of measures,
model specification, scale evaluation and refinement, validation, and norm development.
This chapter completes the first seven steps in the scale development process.
Completing these steps allows for an initial GRA scale to be developed and validated.
Future research studies will be necessary to validate this scale in other contexts and to
develop norms for the scale. This is consistent with other initial scale development
studies (e.g. Balog, 2011; Brusset, 2012).
Figure 2. Overview of Scale Development Procedure from MacKenzie et al. (2011).
Construct Conceptualization (Step 1)
MacKenzie et al. (2011) recommend that the first step of the scale development
procedure is to complete four factors of construct conceptualization. These four factors
include: (1) establishing how the construct has been used in prior research or by
practitioners, (2) identifying the properties of the construct as well as the entities to which
the construct applies, (3) specifying the conceptual theme of the construct and (4)
defining the construct in clear, concise and unambiguous terms.
Factor One: Examination of Prior Research
Factor one of construct conceptualization, as suggested by MacKenzie et al.
(2011), is to conduct a comprehensive literature review to determine how geospatial
reasoning and similar concepts have been defined and measured. The majority of these
findings were described in the literature review section above.
Prior research has shown conflicting results when the relationship between spatial
ability and decision performance was assessed (e.g. Jarupathirun & Zahedi, 2007; Rafi et
al., 2005; Smelcer & Carmel, 1997; Speier & Morris, 2003; Swink & Speier, 1999), as
summarized in Table 6. The majority of these studies have had two major limitations: (1)
they only assessed certain dimensions of spatial ability and (2) they do not specifically
address the geographic context. Furthermore, existing measurement tools are often
complicated to administer and may not provide comparative results as only certain
dimensions are measured. For example, some of these tests require prior GIS knowledge
in order to be completed successfully.
To address these concerns, Lee and Bednarz (2009) developed a new spatial skill
test. Their test was designed to overcome the aforementioned limitations, by measuring
multiple dimensions of spatial thinking and by placing these questions in a geospatial
context. Their 30-minute test consisted of seven sets of question items, including
performance tasks and multiple-choice questions. While this assessment addressed many
of the initial concerns, it too had some limitations. For example, the scale was designed to
evaluate, and be tested on, students enrolled in a geography department and thus the
questions specifically refer to many common GIS tasks, such as site selection and
topography. Concepts and terms such as these are ones that a business decision-maker
may not be very familiar with, requiring training in order to clarify the test tasks to
subjects outside the geography domain. Furthermore, a post-study questionnaire revealed
that the subjects perceived many of the questions as too simple.
We address these limitations through the development of a measurement scale
that (1) addresses known dimensions of geospatial reasoning, (2) emphasizes the
geographic context of the cognitive ability measured and (3) assesses GRA in subjects
not necessarily familiar with advanced cartographic and geographic terms. Doing so will
address the major limitations of the existing measurement tools.
The limitations found in the current body of research are the primary motivation
for the development of a new construct and measurement tool. Such a scale could provide
a more accurate measure as it would be sensitive to geographic-scale (task aware) and
business decision-making (user aware). Furthermore, such a scale would allow future
researchers to more easily incorporate geospatial reasoning ability in their work. Thus, a
new construct, Geospatial Reasoning Ability (GRA) of business decision makers, is
introduced as a measure of an individual's geospatial reasoning ability within the context
of performing business decision-making, along with an easy-to-use questionnaire-based
measurement scale.
Factor Two: Identification of Construct Properties and Entities
MacKenzie et al. (2011) suggest that the second factor of the construct
conceptualization step should include the identification of the construct properties and
identification of the entities to which it applies. As previous studies measuring spatial
ability were focused on individual decision-making, the proposed construct is also
applied to an individual entity. However, since teams normally make many business
decisions, the impact of GRA on group decisions should be examined in the future.
While spatial reasoning has been measured using cognitive tests (Speier &
Morris, 2003) and psychometric tests (Jarupathirun & Zahedi, 2001, 2007), this research
utilizes psychometric measures to estimate GRA by measuring self-perceived traits.
These self-perceived traits are referred to as substrata in this paper. Cognitive tests are
often used in laboratory experiments where activities can be more easily timed and
controlled, while psychometric tests can often be administered in a variety of settings. As
one of the goals of this research was to develop an easy-to-administer measure of GRA,
psychometric measures were utilized.
Thus, in the context of this research, the proposed GRA construct specifies a type
of cognition that applies to an individual person, and is measured utilizing self-perceived
substrata.
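As a simple illustration of how such self-perceived substrata could be scored from questionnaire responses (the item keys and 7-point ratings below are hypothetical and are not the dissertation's actual measurement items), each substratum can be summarized as the mean of its items:

# Hypothetical 7-point Likert responses keyed by substratum prefix and item number.
responses = {
    "SPGON_1": 6, "SPGON_2": 5, "SPGON_3": 6,   # orientation and navigation items
    "SPGMR_1": 4, "SPGMR_2": 5,                 # memorization and recall items
    "SPGV_1": 7,  "SPGV_2": 6,                  # visualization items
}

def substratum_score(responses: dict, prefix: str) -> float:
    """Average the Likert responses belonging to one substratum, identified by item prefix."""
    items = [value for key, value in responses.items() if key.startswith(prefix)]
    return sum(items) / len(items)

for prefix in ("SPGON", "SPGMR", "SPGV"):
    print(prefix, substratum_score(responses, prefix))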
Factor Three: Specification of the Conceptual Theme
The next factor of the construct conceptualization step, as suggested by
MacKenzie et al. (2011), is the specification of the conceptual theme. We propose that
the GRA construct specifies a cognitive ability that consists of multiple dimensions based
on self-perception. While prior research has focused on specific psychometric
components of spatial ability (e.g., Smelcer & Carmel, 1997, Swink & Speier, 1999), this
study attempts to establish a model encompassing all known and measurable substrata of
GRA as identified through a comprehensive literature review.
MacKenzie et al. (2011) also suggest specifying if a construct changes or remains
stable over time. We suspect GRA to remain stable over time; however, specialized
training or careers that utilize geospatial thinking may influence a person's GRA. For
example, it has been shown by Rafi et al. (2005) that spatial intelligence can be improved
through specific training.
Factor Four: Definition of the Construct and Substrata
As suggested by MacKenzie et al. (2011), the final factor is to unambiguously
define the construct. GRA can be defined as a business decision maker's cognitive ability
to assess and utilize geospatial information in order to make sound business decisions. To
measure this construct, the various substrata of GRA are identified and defined next.
In order to identify and define the substrata of the GRA construct, a literature
review of published works that specifically explored spatial and geospatial reasoning
ability was conducted. During this process, scholarly articles were reviewed for key
semantic content (e.g. spatial ability, spatial thinking, spatial reasoning, spatial skills,
geospatial ability, geovisualization, maps, and geographic information systems) by an
iterative approach using the search capabilities of EBSCO Publishing's Business Source
Premier and Thomson Reuters Web of Science. This review revealed several potential
substrata of the GRA construct. To validate the results of this step, two academic and two
industry experts were asked to validate each of the proposed substrata and their
definitions for agreement. The two industry experts, who work with and develop
geovisualization systems, had over 20 years of combined experience. The academic
experts perform research on behavioral/cognitive topics in the information systems
scholarship. Each of these experts suggested only minor updates to the definitions and
stated an overall agreement with the proposed substrata. The final substrata and
definitions are presented in Table 7.
Table 7. Proposed Substrata of GRA.
Proposed Substrata Definition Developed From
Self-Perceived Geospatial Orientation and Navigation (SPGON) The self-perceived ability to determine one's position and direction in geographic space. Kozlowski & Bryant, 1977; Swink & Speier, 1999; Cherney, Brabec & Runco, 2008; etc.
Self-Perceived Geospatial Memorization and Recall (SPGMR) The self-perceived ability to commit geographic concepts to memory and the ability to reconstruct these concepts. Lei, Kao, Lin & Sun, 2009; Miyake, Friedman, Rettinger, Shah & Hegarty, 2001; etc.
Self-Perceived Geospatial Visualization (SPGV) The self-perceived ability to form mental images of geographic space. Eisenberg & McGinty, 1977; Velez, Silver & Tremaine, 2005; etc.
A second construct, which we have labeled Self-Perceived Geospatial
Schematization (SPGS), has been identified in the literature as important for decision-making
using geospatial data; yet, no research was found that specifically identified
schematization ability as an indicator of spatial reasoning ability. However, several
studies indicated its importance for communicating geospatial information (e.g., Klippel
et al., 2005). The definition of SPGS is presented in Table 8. Because SPGS may be a
substratum of GRA, it was tested both as an independent construct and as a potential
substratum of GRA.
Table 8. Definition of SPGS.
Construct Definition Developed From
Self-Perceived Geospatial Schematization (SPGS) The self-perceived ability to reduce complexity of geographic elements by converting to a schema or outline. Agrawala & Stolte, 2001; Klippel et al., 2005; etc.
Following the completion of the conceptualization step of the scale development
procedure outlined by MacKenzie et al. (2011), the second step, measurement item
generation, is performed.
Measurement Item Generation (Step 2)
While existing measurement tools often explore spatial abilities, there are distinct
differences between the general spatial abilities measured by these tools and spatial
reasoning abilities in a geographic context. Furthermore, the existing spatial ability
measurement tools that were reviewed only address some components of spatial ability,
such as visualization, not the proposed spectrum of GRA. Additionally, many of these
existing measurement tools require the subjects to complete physical or visual
tasks, making the tests difficult to administer. Such measurement tools usually require
subjects to be physically present in a laboratory setting. The goal of this research was to
develop a set of measurement items that correctly measure the various dimensions of
GRA and that are easy to administer. Given that measurement items for each of these
substrata do not exist in literature, the author developed each item. The measurement
items were developed utilizing semantic terminology derived from the substrata
definitions and existing literature. The items were further refined through the opinions of
the experts mentioned above.
The following tables (Table 9-11) present the initial measurement items of the
proposed substrata of the GRA construct, which include self-perceived geospatial
orientation and navigation (SPGON), self-perceived geospatial memorization and recall
(SPGMR) and self-perceived geospatial visualization (SPGV).
The SPGON substratum is defined as the self-perceived ability to determine
one's position and direction in geographic space. The following initial measurement
items (see Table 9) are expected to measure the SPGON substratum.
Table 9. Initial Self-Perceived Geospatial Orientation and Navigation Measurement
Items.
Item ID Item
SPGON1 In most circumstances, I feel that I could quickly determine where I am based on my surroundings.
SPGON2 I can usually determine the cardinal directions by looking at the sky.
SPGON3 I rarely get lost.
SPGON4 Examining my surroundings allows me to easily orient myself.
SPGON5 At any given point, I usually know where north, south, east and west lie.
SPGON6 I find it difficult to orient myself in a new place.
SPGON7 Knowing my current location is rarely important to me.
SPGON8 I feel that I can easily orientate myself in a new place.
SPGON9 Using a compass is easy.
SPGON10 I rarely consider myself lost.
SPGON11 I have a great sense of direction.
SPGON12 I could easily navigate a course of 500 meters north and 500 meters east using a magnetic compass.
SPGON13 Knowing my current location is essential to determine where I am going.
SPGON14 When driving or walking home, I am likely to choose a route that I have never taken.
Author Developed, Source for concepts: Kozlowski & Bryant, 1977; Swink & Speier, 1999; Cherney et al., 2008; Meilinger & Knauff, 2008
The SPGMR substratum is defined as the self-perceived ability to commit
geographic concepts to memory and the ability to reconstruct these concepts. The initial
SPGMR measurement items, shown in Table 10, are expected to measure the SPGMR
substratum.
Table 10. Initial Self-Perceived Geospatial Memorization and Recall Measurement
Items.
Item ID Item
SPGMR1 I am good at giving driving directions from memory.
SPGMR2 I am good at giving walking directions from memory.
SPGMR3 I can usually remember a new route after I have traveled it only once.
SPGMR4 I don't enjoy giving directions, as I have trouble recalling the details needed.
SPGMR5 I don't remember routes.
SPGMR6 When revisiting a place I don't frequent often, I usually can remember how to get around.
SPGMR7 After studying a map, I can often follow the route without needing to look back at the map.
SPGMR8 When someone gives me good verbal directions, I can usually get to my destination without asking for additional directions.
SPGMR9 After being shown a map, it would be easy for me to recreate a similar map from memory.
SPGMR10 After seeing a city map once, I am usually able to commit key landmarks and their locations to memory.
SPGMR11 It would be easier to memorize a tabular list of states, instead of a map showing the states.
SPGMR12 It would be easier to memorize capital cities from a map, instead of a table.
SPGMR13 The best way to memorize the layout of a college campus is to study a map.
SPGMR14 I could draw an approximate outline of my home country from memory.
Author Developed, Source for concepts: Lei et al., 2009; Miyake et al., 2001
The SPGV substratum is defined as the self-perceived ability to form mental
images of geographic space. The initial SPGV measurement items, shown in Table 11,
are expected to measure the SPGV substratum.
Table 11. Initial Self-Perceived Geospatial Visualization Measurement Items.
Item ID Item
SPGV1 It is easy for me to visualize a place I have visited.
SPGV2 I find it difficult to visualize a place I have visited.
SPGV3 I can visualize a place from information that is provided by a map without having been there.
SPGV4 When someone describes a place, I form a mental image of what it looks like.
SPGV5 I can visualize geographic locations.
SPGV6 When someone describes a place, I have a difficult time visualizing what it looks like.
SPGV7 When viewing an aerial photograph, I often visualize what the area looks like on the ground.
SPGV8 I can visualize a place from an aerial photograph.
SPGV9 I can visualize a place from a verbal description.
SPGV10 I can visualize a place from a map.
SPGV11 I can visualize what a future building might look like on an empty lot.
SPGV12 When viewing a map, I often visualize what the area looks like on the ground.
SPGV13 While reading written walking directions, I often form a mental image of the walk.
SPGV14 Generally I prefer to memorize the mental images of a walk or drive, versus the written directions.
Author Developed, Source for concepts: Eisenberg & McGinty, 1977; Velez et al., 2005
Table 12 presents the initial measurement items of the proposed self-perceived
geospatial schematization (SPGS) construct. The SPGS construct is defined as the self-
perceived ability to reduce complexity of geographic elements by converting to a schema
or outline.
Table 12. Initial Self-Perceived Geospatial Schematization Measurement Items.
Item ID Item
SPGS1 I prefer maps that display key information clearly, such as transit maps.
SPGS2 I prefer maps that include full-color aerial photography.
SPGS3 I prefer simple, sketch-like maps.
SPGS4 When looking at subway or transit maps, I can usually quickly find the routes I need to take in order to reach my destination.
SPGS5 When giving directions it is easy for me to decide what is important enough to include and what to exclude.
SPGS6 I prefer maps that only provide key information, even if they are not to scale.
SPGS7 I prefer maps that only provide information necessary to accomplish my tasks.
SPGS8 I prefer written walking directions that only include the most essential navigational elements.
SPGS9 I prefer maps that show only the most essential information.
SPGS10 I am better at interpreting maps that only provide necessary information.
SPGS11 It is easy for me to ignore irrelevant information on a map and to focus only on necessary information.
SPGS12 I prefer verbal driving directions that only include the most essential information to reach my destination.
SPGS13 I like simple, clear maps.
SPGS14 I prefer highly detailed maps that show more than just the basic information.
Author Developed, Source for concepts: Agrawala & Stolte, 2001; Klippel et al., 2005
Content Validity (Step 3)
Following the initial generation of items, MacKenzie et al. (2011) recommend
establishing content validity. In order to establish content validity, a categorization and
prioritization exercise was conducted. Ten business decision makers, who have worked
as functional or project managers for ten years or more, were selected to perform this
exercise. Due to their professional experience, these business decision-makers were ideal
candidates for establishing content validity. Each of the business decision makers was
given five envelopes: one stating the name and definition of each of the four substrata, as well as an
envelope for items that did not fit any substratum. Additionally, the participants were given
index cards with each prospective measurement item and asked to both sort and
categorize the items into the appropriate envelope. Based on the results of this exercise,
measurement items that had a majority agreement in categorization were retained and
ordered based on the averaged ranks. The items resulting from this exercise were
combined into a single instrument and a 7-point Likert scale (Likert, 1932) was added to
measure agreement with each item. See Table 13-16 for results of the categorization and
prioritization exercise.
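To make the retention rule concrete, the following Python sketch illustrates the majority-agreement and average-rank logic described above. It is an illustration only, not the procedure's original implementation; the rater data shown are hypothetical and abbreviated to three raters.

from collections import Counter

def retain_and_rank(sorts, ranks, intended, n_raters=10):
    """Keep items that a majority of raters placed in their intended envelope,
    ordered by their average assigned rank (lower average rank = higher priority)."""
    retained = []
    for item, envelopes in sorts.items():
        agreement = Counter(envelopes)[intended[item]]
        if agreement > n_raters / 2:                      # majority agreement rule
            avg_rank = sum(ranks[item]) / len(ranks[item])
            retained.append((avg_rank, item))
    return [item for _, item in sorted(retained)]

# Hypothetical data, abbreviated to three raters for readability
sorts = {"SPGON1": ["SPGON", "SPGON", "SPGON"],
         "SPGON7": ["SPGON", "Other", "Other"]}
ranks = {"SPGON1": [2, 1, 3], "SPGON7": [9, 10, 8]}
intended = {"SPGON1": "SPGON", "SPGON7": "SPGON"}
print(retain_and_rank(sorts, ranks, intended, n_raters=3))  # -> ['SPGON1']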
Table 13. Content Validity Results of Geospatial Orientation and Navigation.
Initial Item Sort Results New Item Rank Rank Results
SPGON1 SPGON3
SPGON2 SPGON11 Removed
SPGON3 SPGON7
SPGON4 SPGON1
SPGON5 SPGON2
SPGON6 SPGON8
SPGON7 Removed
SPGON8 SPGON5
SPGON9 SPGON9
SPGON10 SPGON6
SPGON11 SPGON4
SPGON12 SPGON10
SPGON13 Removed
SPGON14 Removed
Table 14. Content Validity Results of Geospatial Memorization and Recall.
Initial Item Sort Results New Item Rank Rank Results
SPGMR1 SPGMR2
SPGMR2 SPGMR4
SPGMR3 SPGMR1
SPGMR4 SPGMR8
SPGMR5 SPGMR6
SPGMR6 SPGMR9
SPGMR7 SPGMR3
SPGMR8 SPGMR5
SPGMR9 SPGMR7
SPGMR10 SPGMR10
SPGMR11 SPGMR13 Removed
SPGMR12 SPGMR12 Removed
SPGMR13 SPGMR11 Removed
SPGMR14 Removed
Table 15. Content Validity Results of Geospatial Visualization.
Initial Item Sort Results New Item Rank Rank Results
SPGV1 SPGV4
SPGV2 SPGV7
SPGV3 SPGV2
SPGV4 SPGV6
SPGV5 SPGV1
SPGV6 SPGV13 Removed
SPGV7 SPGV12 Removed
SPGV8 SPGV8
SPGV9 SPGV5
SPGV10 SPGV3
SPGV11 SPGV14 Removed
SPGV12 SPGV11 Removed
SPGV13 SPGV10
SPGV14 SPGV9
Table 16. Content Validity Results of Geospatial Schematization.
Initial Item Sort Results New Item Rank Rank Results
SPGS1 SPGS7
SPGS2 SPGS11 Removed
SPGS3 SPGS1
SPGS4 Removed
SPGS5 SPGS6
SPGS6 SPGS10
SPGS7 SPGS9
SPGS8 SPGS8
SPGS9 SPGS4
SPGS10 SPGS12 Removed
SPGS11 SPGS2
SPGS12 Removed
SPGS13 SPGS5
SPGS14 SPGS3
Measurement Model Specification (Step 4)
The fourth step in the scale development procedure recommended by MacKenzie
et al. (2011) is to formally specify the measurement model. In this case, the relationships
between each proposed substratum and the GRA construct are visually and formally
presented.
Prior research has demonstrated that orientation and navigation are indicators of
spatial ability. For example, Kozlowski and Bryant (1977) revealed that perceptions of an
individual's sense of direction reflected their spatial orientation ability. Also, Swink and
Speier (1999) discovered a relationship between spatial orientation and decision-
performance. Furthermore, Cherney et al. (2008) reported that spatial task performance
might be influenced by self-perceptions of navigation ability. As spatial orientation and
navigation appear to be indicators of spatial ability within a geospatial context, we
propose that:
Proposition P1: Self-perceived geospatial orientation and navigation ability
(SPGON) forms geospatial reasoning ability (GRA).
Additionally, research has demonstrated that memorization and recall are
indicators of spatial ability. In their study, Lei et al. (2009) discovered that subjects who
were familiar with specific landmarks were more likely to locate these landmarks using a
geospatial tool than with landmarks unfamiliar to them. Furthermore, Miyake et al.
(2001) discovered that the ability to memorize visuospatial concepts might influence
spatial ability. Additionally, subjects who had to retrace directions from a map reported
that during memorization they first translated the map into walking directions (Meilinger
& Knauff, 2008). Existing spatial tests include a visual memory test, which asks
participants to memorize an array of shapes on a page, and then identify these shapes
from recall (Velez et al., 2005). Thus, we propose:
Proposition P2: Self-perceived geospatial memorization and recall ability
(SPGMR) forms geospatial reasoning ability (GRA).
Furthermore, research has demonstrated that visualization ability is an indicator of
spatial ability. For example, Eisenberg and McGinty (1977) utilized a spatial
visualization test to measure spatial reasoning ability. Additionally, Velez et al. (2005)
discovered that spatial ability is correlated with three-dimensional visualization ability.
Thus, we propose:
Proposition P3: Self-perceived geospatial visualization ability (SPGV) forms
geospatial reasoning ability (GRA).
Finally, researchers suggest that reducing the amount of information presented in
geovisualization to only include essential information can improve decision-making
performance. The benefits of such simplified maps are demonstrated by Agrawala and
Stolte (2001), who collected feedback from over 2,000 users of a technology that emulates
hand-drawn driving directions, which often emphasize essential information while
eliminating nonessential details. Additionally, Klippel et al. (2005) suggest that modern
cartographers can successfully develop schematic maps that are simplified, yet sufficient,
representations. In their study, Tversky and Lee (1998) asked students to provide both
written directions and a route map; 86% of the students were able to provide sufficient
written directions and 100% were able to create sufficient route maps. Furthermore, Meilinger
and Knauff (2008) suggested that one reason subjects have difficulty regaining their
orientation, once lost, is that highly schematized information does not contain
enough information for immediate orientation. No research was identified that
specifically recognized schematization ability as an indicator of reasoning ability;
however, several studies indicated its importance for communicating geospatial
information (e.g., Klippel et al., 2005). As the relationship between SPGS and spatial
reasoning has not been established in literature, this construct was tested as a substratum
of GRA, as well as an independent construct. As the ability to effectively interpret
schematized geospatial information may influence GRA, we propose:
Proposition P4: Self-perceived geospatial schematization ability (SPGS) forms
geospatial reasoning ability (GRA).
As the CFT suggests that certain presentation modes facilitate decision-making,
based on an individual's cognitive capability (Vessey, 1991; Speier, 2006), the substrata
presented above were essential to fully determine the effects of geovisualization on
decision-performance.
Reflective and Formative Constructs
While self-perception measurement items have commonly been used as reflective
measures of first-order constructs (e.g., Davis, 1989), there is a continuing debate
concerning the use of reflective or formative measurements for second-order
multidimensional constructs, such as GRA (e.g., Vlachos & Theotokis, 2009; Polites,
Roberts & Thatcher, 2012). Furthermore, Vlachos and Theotokis (2009) discovered that
incorrectly specifying a second-order measurement model's relationships can lead to
differing research conclusions.
In their criteria for distinguishing between reflective- and formative-indicator
models, MacKenzie, Podsakoff and Jarvis (2005) and MacKenzie et al. (2011) suggest
that constructs should be specified as reflective when (1) indicators are manifestations of
the construct, (2) changes in the indicator should not cause changes in the construct, (3)
changes in the construct do cause changes in the indicators, (4) dropping an indicator
should not alter the conceptual domain of the construct, (5) indicators are viewed as
affected by the same underlying construct and are parallel measures that co-vary, and (6)
indicators are required to have the same antecedents and consequences and to have a high
internal consistency and reliability. Based on this definition of a reflective construct, we
propose that the first-order constructs be modeled as reflective.
MacKenzie et al. (2011) further suggest that a construct should be specified as
formative when (1) the indicators are defining characteristics of the construct, (2)
changes in the indicators should cause changes in the construct, (3) changes in the
construct do not cause changes in the indicator, (4) dropping an indicator may alter the
conceptual domain of the construct, (5) it is not necessary for indicators to co-vary with
each other, and (6) indicators are not required to have the same antecedents and
consequences, nor have high internal consistency or reliability. Based on this definition
of a formative construct, we propose that the second-order construct be modeled as
formative.
Thus, the measurement model is subsequently referred to as a first-order
reflective, second-order formative model.
Visual Representation of Measurement Model
The measurement model is presented visually in Figure 3, demonstrating the ten
proposed measurement items for each first-order factor in addition to the propositions
that SPGON, SPGMR and SPGV are indicators of the GRA construct. Note that SPGS
has been included as a potential substratum of GRA in this model. While no previous
research has utilized schematization as an indicator of spatial ability, there is evidence
that SPGS could contribute to GRA. Thus, the authors felt that its inclusion was essential
to test its relationship to GRA.
Figure 3. First-Order Reflective, Second-Order Formative Measurement Model
including Propositions.
Mathematical Notation of Measurement Model
Each substratum is considered to be a latent (i.e., unobservable) variable,
measured using ten manifest (i.e., observable) variables. Thus, the relationship between
each measurement item and the substrata can be formally expressed as:
x_{1i} = \lambda_{1i} X_1 + \varepsilon_{1i}
x_{2i} = \lambda_{2i} X_2 + \varepsilon_{2i}
x_{3i} = \lambda_{3i} X_3 + \varepsilon_{3i}
x_{4i} = \lambda_{4i} X_4 + \varepsilon_{4i}
In the expressions above, x_{1i}–x_{4i} represent the measurement items for each
substratum, X_1–X_4 represent the four substrata, \lambda_{1i}–\lambda_{4i} represent the effect of X_1–X_4 on
x_{1i}–x_{4i}, while \varepsilon_{1i}–\varepsilon_{4i} represent the measurement error for each indicator.
Furthermore, the GRA construct is a second-order formative construct defined by
the substrata. Thus, this relationship can be formally expressed as:
Y = \sum_{k=1}^{4} \gamma_k X_k + \zeta
In this expression, Y represents GRA, \gamma_1–\gamma_4 represent the weights associated with
each indicator substratum, X_1–X_4 represent the indicator substrata, and \zeta represents
the common error term.
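The following NumPy sketch simulates the formal model stated above: reflective indicators generated from each latent substratum, and GRA formed as a weighted sum of the substrata plus a disturbance. The sample size, loadings, and weights are arbitrary illustrative values, not estimates from this study.

import numpy as np

rng = np.random.default_rng(0)
n = 300                                        # hypothetical sample size

# Four standardized latent substrata X1..X4 (SPGON, SPGMR, SPGV, SPGS)
X = rng.standard_normal((n, 4))

# Reflective first-order part: x_ki = lambda_ki * X_k + epsilon_ki, ten items each
lam = rng.uniform(0.6, 0.9, size=(4, 10))      # illustrative loadings
eps = 0.5 * rng.standard_normal((n, 4, 10))
x = X[:, :, None] * lam[None, :, :] + eps      # indicator scores, shape (n, 4, 10)

# Formative second-order part: Y = sum_k gamma_k * X_k + zeta
gamma = np.array([0.35, 0.35, 0.30, 0.05])     # illustrative weights
zeta = 0.3 * rng.standard_normal(n)
Y = X @ gamma + zeta                           # simulated GRA scores

print(x.shape, Y.shape)                        # (300, 4, 10) (300,)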
Data Collection (Step 5)
The next step of the scale development procedure recommended by MacKenzie et al.
(2011) was to perform the data collection. As this study performed two distinct pretests,
two data collection methods are presented.
Pretest A
Pretest A consisted of a small sample pretest, allowing the researchers to test
reliability and to gain initial feedback on the measurement items. For this pretest, 33
student subjects from an urban research university were asked to complete an online
survey and provide feedback.
Pretest B
In an attempt to reduce response bias, a variety of methods were used to recruit
subjects for Pretest B. These included contacting subjects in person, as well as through
social networks and e-mail. Since a variety of solicitation methods were used, there is no
meaningful way to report a response rate.
For a traditional factor analysis, Hair, Black, Babin and Anderson (2010) suggest
a five-to-one ratio of the number of observations to number of variables. However, it is
further suggested that a more preferable sample would utilize a ten-to-one ratio. Knowing
this, and that our study contained 40 initial variables, our goal was to utilize a sample size
of at least 200, but preferably 400, subjects. Throughout a five-month period, 624
responses were collected. Of these, 24 incomplete observations were removed, retaining
600 responses. This number meets the minimum requirements for traditional factor
analysis and exceeds the minimum number necessary for a successful Partial Least
Squares analysis. Using Microsoft Excel 2010, a random number was generated for each
row and the data were sorted on this column. The sample was then split: 300 responses
were retained for a validation study, while the other 300 were used for the
Pretest B analysis.
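For readers who prefer a scriptable alternative to the Excel procedure described above, the following pandas sketch performs an equivalent random split; the file name is hypothetical.

import pandas as pd

# Hypothetical file holding the 600 complete responses (40 items plus demographics)
responses = pd.read_csv("pretest_b_responses.csv")

# Randomly reorder the rows (the scripted equivalent of sorting on an Excel random-number column)
shuffled = responses.sample(frac=1, random_state=42).reset_index(drop=True)

validation = shuffled.iloc[:300]    # held out for the later validation study
pretest_b = shuffled.iloc[300:]     # used for the Pretest B analysis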
Scale Purification and Refinement (Step 6)
The next step recommended by MacKenzie et al. (2011) is to purify and refine the
scale. This step was accomplished by performing two pretests.
Pretest A
For Pretest A, a small sample pretest of the research instrument was conducted. For
this pretest, 33 subjects were asked to complete an online survey, measuring the results of
all 40 substrata measurement items. Additionally, the subjects were given an opportunity
to provide input via open-ended questions designed to elicit feedback on the actual
measurement items. The open-ended question responses were reviewed for additional
opportunities for scale refinement. An item analysis performed using IBM SPSS
Statistics 19 revealed either good or acceptable levels of Cronbach's alpha (as shown in
Table 17) for each substratum and for SPGS (Cronbach, 1951; Gliem & Gliem, 2003).
Furthermore, the open-ended feedback demonstrated that the questions were clear and
interesting to the participants. This was crucial as one of the goals identified for the
measurement scale was that it could successfully determine GRA in subjects who do not
necessarily have a familiarity with geographic concepts.
Table 17. Pretest A Reliability Statistics.
Substrata Cronbach's Alpha No. of Items No. of Subjects
SPGON .830 10 33
SPGMR .846 10 33
SPGV .835 10 33
SPGS .735 10 33
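The item analysis reported in Table 17 can be reproduced outside of SPSS with a short routine such as the sketch below, which computes Cronbach's (1951) alpha for one block of items; the data frame and column names are hypothetical.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's (1951) alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage with the ten SPGON items from the Pretest A responses:
# pretest_a = pd.read_csv("pretest_a_responses.csv")
# print(cronbach_alpha(pretest_a[[f"SPGON{i}" for i in range(1, 11)]]))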
Pretest B
Based on the success of Pretest A, an exploratory data analysis using Partial Least
Squares (PLS) Structural Equation Modeling (SEM) was performed.
Given that the selected analytical methodology can influence the results, great
care was taken to select the most appropriate statistical analysis method. Polites et al.
(2012) determined that between the years 2000 and 2009, 84% of studies testing
aggregate constructs utilized PLS, indicating that the use of PLS in IS research has
become widely accepted. Furthermore, Ringle, Gotz, Wetzels and Wilson (2009)
suggested the use of PLS path modeling over maximum likelihood covariance-based
structural equation modeling for tests of second-order constructs when methodological
requirements are not met. As the primary goal of this study was to define a new construct
on an untested measurement model, PLS path modeling was utilized to ensure consistent
results. As such, SmartPLS (Ringle, Wende, & Will, 2005) was utilized for subsequent
measurement model testing.
The results of this analysis helped establish construct validity and provided an
opportunity for refinement of the measurement scale. The first goal of Pretest B was to
identify parsimonious sets of variables. Furthermore, a second goal was to reduce the
number of items significantly and to identify only the most representative items for each
construct. The pretest procedure resulted in a reduction in measurement items from 40 to
12. Additionally, the pretest revealed that SPGS was not a valid indicator of GRA.
The Pretest B sample population (n=300) included demographic variables
measuring age, gender, education, profession and cultural background. The results of
these demographic items are presented in Table 18.
Table 18. Pretest B Descriptive Statistics of Demographics Variables.
Question Variables Percentage
Age 18-25 30.7
26-35 32.3
36-45 12.0
45-55 12.3
56-65 9.0
66+ 3.7
Gender Female 60.0
Male 40.0
Education 2 Year/Associate Degree 24.7
4 Year/Bachelor Degree 30.7
Doctor/JD/PhD 2.7
Elementary/Middle School 0.7
High School 30.7
Master Degree 10.7
Profession Business Professional 43.7
Student 37.7
Geospatial Professional 0.7
Other 18.0
Culture African 3.7
Asian 8.0
Australian 0.3
European 38.7
Middle Eastern 4.3
North American 43.7
South American 1.3
The first analysis of Pretest B involved a PLS factor analysis using SmartPLS
(Ringle et al., 2005). For this analysis the measurement items of each first-order
substratum were modeled as reflective. The first step was to test the full research model,
which included SPGS as a substratum of GRA. This model revealed that the path
coefficient between SPGS and GRA was 0.047, while the remaining path coefficients
ranged from 0.355 to 0.376 (see Figure 4). This agreed with prior research, which had not
included schematization as an indicator of spatial reasoning. However, as there is
evidence that SPGS could contribute to GRA in some way (Agrawala & Stolte, 2001;
Klippel et al., 2005), it was further refined using an IBM SPSS Statistics 19 factor
analysis. Another reason to continue the development of the SPGS construct is that
it is similar to the GRA measure, so it can be used to establish discriminant
validity in future testing.
Figure 4. Pretest B, Path Analysis including SPGS.
As prior research has shown that an equal number of indicator variables is
preferred for first-order factors using multiple indicators, only the four highest-loading
items were retained. Figure 5 presents the path analysis findings and Table 19 presents
cross-loadings for each of the substrata.
Figure 5. Pretest B, Path Analysis including SPGS.
Table 19. PLS Factor Analysis Loadings.
Factor Loading by Substratum and GRA
Item SPGMR SPGON SPGV GRA
SPGMR1 0.8045 0.6625 0.5896 0.748
SPGMR2 0.8043 0.6242 0.556 0.7233
SPGMR3 0.8233 0.7056 0.6486 0.7896
SPGMR4 0.8473 0.7 0.6772 0.8055
SPGMR5 0.7597 0.6439 0.5544 0.7126
SPGMR6 0.5875 0.4979 0.4109 0.5457
SPGMR7 0.776 0.6584 0.7327 0.778
SPGMR8 0.5193 0.4697 0.3074 0.477
SPGMR9 0.7306 0.6572 0.5596 0.7078
SPGMR10 0.7485 0.6817 0.6481 0.7507
SPGON1 0.551 0.6938 0.5459 0.6465
SPGON2 0.6474 0.7882 0.5283 0.7155
SPGON3 0.6569 0.8212 0.6022 0.754
SPGON4 0.757 0.8682 0.601 0.8112
SPGON5 0.7406 0.8392 0.6156 0.798
SPGON6 0.6611 0.7449 0.5025 0.6967
SPGON7 0.4304 0.5098 0.3377 0.4662
SPGON8 0.4832 0.616 0.5218 0.5827
SPGON9 0.539 0.6179 0.5253 0.6066
SPGON10 0.7298 0.8104 0.5752 0.7707
SPGV1 0.6616 0.6594 0.791 0.7505
SPGV2 0.7018 0.6204 0.7772 0.7474
SPGV3 0.675 0.6072 0.7939 0.7367
SPGV4 0.5709 0.5281 0.697 0.6363
SPGV5 0.5091 0.462 0.698 0.5869
SPGV6 0.3526 0.337 0.5861 0.4437
SPGV7 0.2337 0.2504 0.2978 0.2773
SPGV8 0.4771 0.4669 0.6959 0.5754
SPGV9 0.3752 0.3499 0.558 0.4492
SPGV10 0.4642 0.4675 0.7024 0.5725
Highest Item Loadings shown in bold.
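Cross-loadings such as those in Table 19 are estimated by SmartPLS from its latent variable scores. As a rough, illustrative approximation only (not the PLS algorithm), each item can be correlated with unit-weighted composites of every block, as sketched below; the data frame and column names are hypothetical.

import pandas as pd

def approximate_cross_loadings(data, blocks):
    """Correlate every item with a unit-weighted composite of each block of items.
    `blocks` maps a construct name (e.g., 'SPGON') to the list of its item columns."""
    composites = {name: data[cols].mean(axis=1) for name, cols in blocks.items()}
    all_items = [col for cols in blocks.values() for col in cols]
    return pd.DataFrame(
        {name: data[all_items].corrwith(composite) for name, composite in composites.items()}
    )

# Hypothetical usage with the Pretest B responses:
# blocks = {"SPGON": [f"SPGON{i}" for i in range(1, 11)],
#           "SPGMR": [f"SPGMR{i}" for i in range(1, 11)],
#           "SPGV":  [f"SPGV{i}" for i in range(1, 11)]}
# print(approximate_cross_loadings(pretest_b, blocks))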
The latent variable correlation, as shown in Table 20, shows the correlations
between GRA and its substrata. Each substratum correlates with GRA between 0.8852
and 0.9528, while there is also a high correlation between each of the substrata.
Table 20. Latent Variable Correlation.
SPGMR SPGON SPGV GRA
SPGMR 1 - - -
SPGON 0.8499 1 - -
SPGV 0.776 0.7298 1 -
GRA 0.9528 0.9369 0.8852 1
As demonstrated in Table 21, the path coefficients are relatively equal and range
between 0.299 and 0.396. Relatively equal path coefficients are a suggested requirement
of research models using repeating indicators (Beemer & Gregg, 2010). Additionally, all
first-order constructs achieve t-statistic values greater than 1.96, demonstrating
convergent validity (Gefen & Straub, 2005). Furthermore, a comparison of the first-order
latent variable correlations reveals that all are below 0.9, indicating no common method
bias (Bagozzi, Yi, & Phillips, 1991).
Table 21. Path Coefficients.
Original Sample Sample Mean Standard Deviation Standard Error T Statistics
SPGMR -> GRA 0.396 0.395 0.0129 0.0129 30.7691
SPGON -> GRA 0.382 0.3813 0.0112 0.0112 34.1738
SPGV -> GRA 0.299 0.2995 0.0143 0.0143 20.9492
Next, construct reliability and validity were established. The Average Variance
Extracted (AVE) for a first-order construct should be greater than 0.50 (Fornell &
Larcker, 1981), which SPGMR and SPGON achieve, thus supporting construct validity.
However, SPGV only reached an AVE of 0.4554, which is a concern. Once non-essential
or incorrectly loading items are removed, this value may improve and will need to be
re-evaluated. For second-order constructs with formative indicators, such as GRA, an R²
value greater than 0.5 can indicate construct validity (Diamantopoulos, Riefler, & Roth,
2008; MacKenzie et al., 2011). In the measurement model, the path analysis revealed an
R² value of 1.0, as all indicator variables also defined the second-order construct. The
composite reliability scores were greater than 0.8888 for each of the substrata, which
meant that reliability, as defined by Cronbach (1951) and Fornell and Larcker (1981) for
reflective, first-order constructs, was achieved. Gliem and Gliem (2003) suggest a
minimum alpha of 0.8 as a reasonable goal, which each of the tested substrata exceeded.
Per MacKenzie et al. (2011), as the measurement model does not predict correlation of the sub-
dimensions for the second-order, formative construct of GRA, reliability is not relevant
for this construct. See Table 22 for more detail.
Table 22. Construct Reliability.
AVE Composite Reliability R Square Cronbach's Alpha Communality Redundancy
GRA 0.4468 0.9588 1 0.9546 0.4468 0.2671
SPGMR 0.5578 0.9253 0 0.909 0.5578 0
SPGON 0.5468 0.9218 0 0.9037 0.5468 0
SPGV 0.4554 0.8888 0 0.86 0.4554 0
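For reference, the two statistics reported in Table 22 can be computed directly from a vector of standardized indicator loadings, as sketched below. The example loadings are simply the SPGV column of Table 19 rounded to two decimals, and the results land close to the AVE of 0.4554 and composite reliability of 0.8888 reported above; the sketch is illustrative and not the SmartPLS computation itself.

import numpy as np

def ave(loadings):
    """Average Variance Extracted: the mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# SPGV loadings from Table 19, rounded to two decimals
spgv = [0.79, 0.78, 0.79, 0.70, 0.70, 0.59, 0.30, 0.70, 0.56, 0.70]
print(round(ave(spgv), 3), round(composite_reliability(spgv), 3))   # approx. 0.457 and 0.889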
Next, items that did not load highly upon their corresponding substrata were
removed. As indicated by prior research, an equal number of indicators should be utilized for
all first-order constructs; therefore, only the four highest-loading measurement items were
retained. Care was taken to ensure that an equal number of measurement items remained
for each substratum, as this would ensure the integrity of the model. The item reduction
for SPGS was performed with a traditional factor analysis in IBM SPSS Statistics 19, and
five items were retained.
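The item-reduction rule just described amounts to keeping the four highest-loading items per substratum. A minimal sketch, using the SPGON loadings from Table 19, is shown below; it recovers the same four items (SPGON3, SPGON4, SPGON5 and SPGON10) that appear in Table 23.

def top_items(loadings, keep=4):
    """Return the `keep` items with the highest loadings on their own substratum."""
    return sorted(loadings, key=loadings.get, reverse=True)[:keep]

# SPGON loadings on SPGON from Table 19
spgon = {"SPGON1": 0.6938, "SPGON2": 0.7882, "SPGON3": 0.8212, "SPGON4": 0.8682,
         "SPGON5": 0.8392, "SPGON6": 0.7449, "SPGON7": 0.5098, "SPGON8": 0.616,
         "SPGON9": 0.6179, "SPGON10": 0.8104}
print(top_items(spgon))  # -> ['SPGON4', 'SPGON5', 'SPGON3', 'SPGON10']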
Pretest B Reduced Items
Once the items were reduced, an additional path analysis of Pretest B was
performed using SmartPLS (Ringle et al., 2005). Again, the measurement items of each
first-order substratum were modeled as reflective, while the second-order construct of
GRA was modeled using the hierarchical component model. This model, along with the
updated path coefficients, is presented in Figure 6. The reduced items for each substratum
are presented in Table 23 along with factor loadings.
Figure 6. Pretest B, Path Analysis.
Table 23. PLS Factor Analysis Loadings.
Factor Loading by Substratum and GRA
Item SPGMR SPGON SPGV GRA
SPGMR1 0.8492 0.6507 0.5601 0.7658
SPGMR2 0.876 0.6396 0.5372 0.7639
SPGMR3 0.8443 0.6756 0.6837 0.8155
SPGMR4 0.8766 0.6685 0.6396 0.81
SPGON3 0.6287 0.8439 0.5732 0.7619
SPGON4 0.6893 0.9201 0.6166 0.8293
SPGON5 0.6722 0.8772 0.6332 0.8116
SPGON10 0.6835 0.8539 0.5812 0.7892
SPGV1 0.6167 0.63 0.8509 0.7678
SPGV2 0.6579 0.6128 0.9018 0.7939
SPGV3 0.6291 0.6059 0.9076 0.7822
SPGV10 0.4078 0.4147 0.6902 0.5332
The latent variable correlation, as shown in Table 24, shows the correlations
between GRA and each of its substrata after the items for each substratum were reduced.
Each substratum correlates with GRA between 0.8742 and 0.9165, while there is also a
high correlation between each of the substrata.
Table 24. PLS Latent Variable Correlation.
SPGMR SPGON SPGV GRA
SPGMR 1 - - -
SPGON 0.765 1 - -
SPGV 0.7045 0.688 1 -
GRA 0.9165 0.9133 0.8742 1
After reducing the number of items from ten to four for each substratum, the path
coefficients, as shown in Table 25, continue to be relatively equal and range between
0.3382 and 0.3901. As such relative similarity is a suggested requirement of research
models using repeating indicators (Beemer & Gregg, 2010), we deem the results
acceptable. Additionally, convergent validity can be demonstrated when t-statistic values
are greater than 1.96 (Gefen & Straub, 2005), which all of the first-order constructs
achieve. To ensure that there was no systematic influence biasing the data, two common
method bias tests were performed. The first involved the addition of a theoretically
dissimilar marker construct to which each of the substrata was compared. Using this
method, the greatest squared correlation revealed only 2.87% shared variance, below the
3% suggested threshold (Lindell & Whitney, 2001). Furthermore, a comparison of the
first-order latent variable correlations reveals that all are below 0.9, indicating no
common method bias (Bagozzi et al., 1991).
Table 25. Path Coefficients.
Original Sample Sample Mean Standard Deviation Standard Error T Statistics
SPGMR -> GRA 0.3798 0.3795 0.0114 0.0114 33.3878
SPGON -> GRA 0.3901 0.3899 0.0096 0.0096 40.6855
SPGV -> GRA 0.3382 0.3387 0.011 0.011 30.8816
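The marker-variable check described above can be sketched as follows, approximating each construct with a unit-weighted composite and comparing the largest squared correlation with the marker construct to the 3% threshold. This is an illustration under those simplifying assumptions, not the exact procedure used in the analysis; all variable names are hypothetical.

import pandas as pd

def max_shared_variance_with_marker(data, substrata_blocks, marker_cols):
    """Largest squared correlation (shared variance) between any substratum composite
    and the composite of a theoretically unrelated marker construct."""
    marker = data[marker_cols].mean(axis=1)
    shared = {name: data[cols].mean(axis=1).corr(marker) ** 2
              for name, cols in substrata_blocks.items()}
    return max(shared.values())

# Hypothetical usage:
# if max_shared_variance_with_marker(df, blocks, ["MARKER1", "MARKER2", "MARKER3"]) < 0.03:
#     print("Below the 3% threshold suggested by Lindell and Whitney (2001)")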
Next, construct reliability and validity were further established. The Average
Variance Extracted (AVE) for a first-order construct should be greater than 0.5 (Fornell
& Larcker, 1981), which all substrata achieve. The concern encountered when analyzing
the AVE of the substrata containing all measurement items in Pretest B was resolved
through the item reduction. For second-order constructs with formative indicators, such
as GRA, an R² value greater than 0.5 indicates construct validity (Diamantopoulos et al.,
2008; MacKenzie et al., 2011). In the measurement model, the path analysis revealed an R²
value of 1.0, as all indicator variables defined the second-order construct. The composite
reliability scores were greater than 0.8986 for each of the substrata, which meant that
reliability as defined by Cronbach (1951) and Fornell and Larcker (1981) for reflective,
first-order constructs, was achieved. Gliem and Gliem (2003) suggest a minimum alpha
of 0.8 as a reasonable goal, which each of the tested substrata exceeded. Per MacKenzie
et al. (2011), as the measurement model does not predict correlation of the sub-dimensions for
the second-order, formative construct of GRA, reliability is not relevant for this
construct. See Table 26 for more detail.
Table 26. Construct Reliability.
AVE Composite Reliability R Square Cronbach's Alpha Communality Redundancy
GRA 0.5964 0.9461 1 0.9372 0.5964 0.3291
SPGMR 0.7425 0.9202 0 0.8843 0.7425 0
SPGON 0.7643 0.9283 0 0.8968 0.7643 0
SPGV 0.6927 0.8986 0 0.8465 0.6927 0
Finally, discriminant and convergent validity were established. Convergent
validity is demonstrated as all items that should load on GRA did so at greater than 0.6,
except for SPGV10, which loaded slightly below 0.6 at 0.5779. Particular attention should
be given to this item in future uses of this measurement scale to ensure continued
validity. Additionally, all items that should load on SPGS, which is now treated as an
independent construct, did so at greater than 0.6. Furthermore, discriminant validity was
further established, as GRA substrata items that should not load on SPGS based on theory
only do so at less than 0.3. See Table 27 for more detail.
Table 27. PLS Factor Loading and Cross Loading.
GRA SPGS
GRA1 (SPGMR1) 0.8038 0.2268
GRA2 (SPGMR2) 0.7827 0.1875
GRA3 (SPGMR3) 0.7878 0.0983
GRA4 (SPGMR4) 0.8091 0.1407
GRA5 (SPGON3) 0.7864 0.1957
GRA6 (SPGON4) 0.8204 0.1299
GRA7 (SPGON5) 0.8004 0.1253
GRA8 (SPGON10) 0.7948 0.1656
GRA9 (SPGV1) 0.7454 0.1167
GRA10 (SPGV2) 0.7351 0.0574
GRA11 (SPGV3) 0.7263 0.0829
GRA12 (SPGV10) 0.5779 0.1938
SPGS4 0.2026 0.8631
SPGS5 0.1982 0.7975
SPGS8 0.1578 0.7485
SPGS9 0.1258 0.7748
SPGS10 0.0848 0.7398
Using the two-step procedure outlined by Gefen and Straub (2005), discriminant
validity was further established by comparing inter-construct correlations with the square
root of each construct's AVE. Gefen and Straub state that the square root of the AVE
should be much larger than the construct correlations. Chin (1998) suggests that
discriminant validity can be inferred when the variance of each construct is larger than
the variance shared with any other construct. Furthermore, Fornell and Larcker (1981)
suggest that all AVEs should exceed a 0.50 threshold, which occurs in this analysis.
Based on the results of this analysis, as seen in Table 28, the square root of the AVE for
each substratum is larger than any correlation.
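The comparison summarized in Table 28 can be verified with a few lines of code: place the square root of each construct's AVE next to the latent variable correlations and confirm that it exceeds every correlation involving that construct. The sketch below uses the substrata values from Tables 24 and 26; it is illustrative only.

import numpy as np

# Latent variable correlations among the substrata (Table 24) and their AVEs (Table 26)
names = ["SPGMR", "SPGON", "SPGV"]
corr = np.array([[1.0,    0.765,  0.7045],
                 [0.765,  1.0,    0.688],
                 [0.7045, 0.688,  1.0]])
ave = np.array([0.7425, 0.7643, 0.6927])

sqrt_ave = np.sqrt(ave)
largest_corr = (corr - np.eye(len(names))).max(axis=1)   # largest off-diagonal correlation per row

for name, root, biggest in zip(names, sqrt_ave, largest_corr):
    status = "supported" if root > biggest else "not supported"
    print(f"{name}: sqrt(AVE) = {root:.3f} vs. largest correlation = {biggest:.3f} -> {status}")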
Full Text

PAGE 1

DECISION PERFORMANCE USING SPATIAL DECISION SUPPORT SYSTEMS: A GEOSPATIAL REASONING ABILITY PERSPECTIVE by M ICHAEL A. ERSKINE B.S. Metropolitan State University of Denver 2004 M.S. University of Colorado Denver 2007 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy Computer Science and Information Systems 2013

PAGE 2

ii This thesis for the Doctor of Philosophy degree by Michael A. Erskine has been approved for the Computer Science and Information Systems Program by Jahangir Karimi Chair Dawn G. Gregg Advisor Judy E. Scott Ilkyeun Ra November 8, 2013

PAGE 3

iii Erskine, Michael, A., ( Ph.D., Computer Science and Information Systems ) Decision Performance Using Spatial Decision Support Systems: A Geospatial Reasoning Ability Perspective Thesis directed by Associate Professor Dawn G. Gregg. ABSTRACT As many consumer and business decision makers are utilizing Spatial Decision Support S ystems (SDSS), a thorough understanding of how such decisions are made is crucial for the information syste ms domain. This dissertation presents six chapters encompassing a comprehensive analysis of the impact of geospatial reasoning ability on decision performance using SDSS. An introduction to the research is presented in Chapter I Chapter II provides a lite rature review and research framework regarding decision making using geospatial data. Chapter III presents the constructs of geospatial reasoning ability and geospatial schematization. Chapter IV explores the impact of geospatial reasoning ability on the technology acceptance of SDSS. Chapter V presents results of an experiment exploring the impact of geospatial reasoning ability on decision making performance. Finally, Chapter VI presents a concl usion. Together these chapters contribute to a greater understanding of the impact of geospatial reasoning ability in relation to business and consumer decision making. The form and content of this abstract are approved. I recommend its publication. App roved: Dawn G. Gregg

PAGE 4

iv TABLE OF CONTENTS CHAPTER I AN INTRODUCTION TO DECISION MAKING USING GEOSPATIAL DATA ..... 1 Abstract ................................ ................................ ................................ ................... 1 Introduction ................................ ................................ ................................ ............. 1 II BUSINESS DECISION MAKING USING GEOSPATIAL DATA: A RESEARCH FRAMEWORK AND LITERATURE REVIEW ................................ ............................... 6 Abstract ................................ ................................ ................................ ................... 6 Introduction ................................ ................................ ................................ ............. 7 Literature Revie w ................................ ................................ ................................ .. 11 Theoretical Background ................................ ................................ .................. 11 Information Presentation ................................ ................................ ................. 12 Task Characteristics ................................ ................................ ........................ 16 User Characteristics ................................ ................................ ........................ 23 Decision Making Performance ................................ ................................ ....... 30 Conceptual Model ................................ ................................ ................................ 33 Discussion ................................ ................................ ................................ ............. 35 Limitations of Reviewed Literature ................................ ................................ 35 Future Research ................................ ................................ .............................. 37 Conclusion ................................ ................................ ................................ ............ 39 III GEOSPATIAL RE ASONING ABILITY: CONSTRUCT AND SUBSTRATA DEFINITION, MEASUREMENT AND VALIDATON ................................ ................ 42 Abstract ................................ ................................ ................................ ................. 42 Introduction ................................ ................................ ................................ ........... 43 Literature Review ................................ ................................ ................................ .. 45

PAGE 5

v Cognitive Fit Theory ................................ ................................ ....................... 45 Geovisua lization and Decision Performance ................................ .................. 46 User Characteristics ................................ ................................ ........................ 47 Task Characteristics ................................ ................................ ........................ 49 Scale Development Procedure ................................ ................................ .............. 51 Construct Conceptualization (Step 1) ................................ ................................ ... 53 Factor One: Examination of Prior Research ................................ ................... 53 Factor Two: Identification of Construct Properties and Entities .................... 55 Factor Three: Specification of the Conceptual Theme ................................ ... 56 Factor Four: Definition of the Construct and Substrata ................................ .. 56 Measurement Item Generation (Step 2) ................................ ................................ 58 Content Validity (Step 3) ................................ ................................ ...................... 63 Measurement Model Specification (Step 4) ................................ ......................... 67 Reflective and Formative Constructs ................................ ................................ .... 69 Visual Representation of Measurement Model ................................ ..................... 70 Mathematical Notation of Measurement Model ................................ ................... 71 Data Collection (Step 5) ................................ ................................ ........................ 72 Pretest A ................................ ................................ ................................ .......... 72 Pretest B ................................ ................................ ................................ .......... 73 Scale Purification and Refinement (Step 6) ................................ .......................... 73 Pretest A ................................ ................................ ................................ .......... 73 Pretest B ................................ ................................ ................................ .......... 74 Pretest B Reduced Items ................................ ................................ .............. 82 Da ta Collection and Reexamination of Scale Properties (Step 7) ........................ 87 Collinearity ................................ ................................ ................................ ..... 94

PAGE 6

vi Gender Concerns ................................ ................................ ............................. 94 Discussion ................................ ................................ ................................ ............. 96 Implication to Research ................................ ................................ .................. 99 Implication to Industry ................................ ................................ .................... 99 Limitations and Future Research ................................ ................................ .. 100 Conclusion ................................ ................................ ................................ .......... 104 IV USER ACCEPTANCE OF SPATIAL DECISION SUPPORT SYSTEMS: APPLYING UTILITARIAN, HEDONIC AND COGNTIVE MEASURES ................. 106 Abstract ................................ ................................ ................................ ............... 106 Introduction ................................ ................................ ................................ ......... 106 Perceived Enjoyment: Hedonic Measure s ................................ .................... 108 Technology Acceptance Model: User Acceptance and Utilitarian Measures ................................ ................................ ................................ ....................... 109 Geospatial Reasoning Ability: Cognitive Measures ................................ ..... 110 Research Model ................................ ................................ ................................ .. 111 Research Met hodology ................................ ................................ ....................... 117 Research Sample ................................ ................................ ........................... 117 Research Instrument ................................ ................................ ...................... 118 Analysis ................................ ................................ ................................ ............... 122 Discussion ................................ ................................ ................................ ........... 131 Implications for Industry ................................ ................................ ............... 133 Implications for Scholarship ................................ ................................ ......... 134 Limitations ................................ ................................ ................................ .... 134 Future Research ................................ ................................ ............................ 135 Conclusion ................................ ................................ ................................ .......... 135

PAGE 7

vii V INDIVIDUAL DECISION PERFORMANCE OF SPATIAL DECISION SUPPORT SYSTEMS: A GEOSPATIAL REASONING ABILITY AND PERCEIVED TASK TECHNOLOGY FIT PERSPECTIVE ................................ ................................ ........... 137 Abstract ................................ ................................ ................................ ............... 137 Introduction ................................ ................................ ................................ ......... 138 Literature Review ................................ ................................ ................................ 139 Cognitive Fit Theory ................................ ................................ ..................... 140 Decisi on Performance ................................ ................................ ................... 141 Perceived Task Technology Fit ................................ ................................ .... 14 1 User Characteristics ................................ ................................ ...................... 142 Task Characteristics ................................ ................................ ...................... 143 Research Model ................................ ................................ ................................ .. 146 Research Methodology ................................ ................................ ....................... 149 Experi ment Design ................................ ................................ ........................ 151 Subjects ................................ ................................ ................................ ......... 151 Geospatial Reasoning Ability Measurement Items ................................ ....... 152 Perceived Task Technology Fit Measurement Items ................................ ... 153 Problem Complexity ................................ ................................ ..................... 154 Visualization Complexity ................................ ................................ ............. 155 Analysis ................................ ................................ ................................ ............... 156 Measurement Model ................................ ................................ ..................... 156 Structural Model ................................ ................................ ........................... 161 Heterogeneity ................................ ................................ ................................ 164 Findings and Discussion ................................ ................................ ..................... 166 Limi tations ................................ ................................ ................................ .... 168 Implications ................................ ................................ ................................ ... 170

PAGE 8

viii Conclusion ................................ ................................ ................................ .......... 171 VI CONCLUSION: TOWARD A COMPREHENSIVE UNDERSTANDING OF GEOSPATIAL REASONING ABILITY AND THE GEOSPATIAL DECISION MAKING FRAMEWORK ................................ ................................ ............................. 172 Abstract ................................ ................................ ................................ ............... 172 Introduction ................................ ................................ ................................ ......... 172 Conceptual Model ................................ ................................ ............................... 173 Implication to Research ................................ ................................ ................ 175 Implication to Industry ................................ ................................ .................. 175 Theo retical Frameworks ................................ ................................ ..................... 176 Findings and Discussion ................................ ................................ ..................... 178 Future Research ................................ ................................ ................................ .. 182 REFERENCES ................................ ................................ ................................ ............... 184

PAGE 9

ix LIST OF TABLES Table 1. Common Theor ies Related to Spatial Decision Making. ................................ ............. 12 2. List of Complexity Frameworks (Adapted from Gill and Hicks, 2006). ...................... 20 3. User Characteristic Measurement Instruments used in Examined Research, not including spatial ability measures (as shown in Table 4). ................................ ................ 27 4. Spatial Reasoning Instruments Used in Examined Research. ................................ ...... 30 5. Research Po pulation Groups. ................................ ................................ ........................ 36 6. Spatial Reasoning Instruments Used in Examined Research ................................ ....... 49 7. Proposed Substrata of GRA. ................................ ................................ ........................ 57 8. Definition of SPGS. ................................ ................................ ................................ ..... 58 9. Initial Self Perceived Geospatial Orientation and Navigation Measurement Items. ... 60 10. Initial Self Perceived Geospatial Memorization and Recall Measurement Items. .... 61 11. Initial Self Perceived Geospatial Visualization Measurement Items. ........................ 62 12. Initial Self Perceived Geospatial Schematization Measurement Items. .................... 63 13. Content Validity Results of Geospatial Orientation and Navigation. ........................ 64 14. Content Validity Results of Geospatial Memorization and Recal l. ........................... 65 15. Content Validity Results of Geospatial Visualization. ................................ .............. 66 16. Content Validity Results of Geospatial Schematization. ................................ ........... 66 17. Pretest A Reliability Statistics. ................................ ................................ ................... 74 18. Pretest B Descriptive Statistics of Demographics Var iables. ................................ ..... 76 19. PLS Factor Analysis Loadings. ................................ ................................ .................. 79 20. Latent Variable Correlation. ................................ ................................ ...................... 80 21. Path Coefficients. ................................ ................................ ................................ ....... 80

PAGE 10

x 22. Construct Reliability. ................................ ................................ ................................ 81 23. PLS Factor Analysis Loadings. ................................ ................................ .................. 83 24. PLS Latent Variable Correlation. ................................ ................................ .............. 83 25. Path Coefficients. ................................ ................................ ................................ ....... 84 26. Construct Reliability. ................................ ................................ ................................ 85 27. PLS Factor Loading and Cross Loading. ................................ ................................ ... 86 28. Inter Construct Correlations and Square Root of AVE. ................................ ............ 87 29. Study 1 Descriptive Statistics of Demographic Variables. ................................ ........ 88 30. PLS Factor Analysis Loadings and Weights. ................................ ............................ 90 31. Latent Variable Correlation. ................................ ................................ ...................... 90 32. Path Coefficients. ................................ ................................ ................................ ....... 91 33. Construct Reliability. ................................ ................................ ................................ 92 34. PLS Factor Loading and Cross Loading. ................................ ................................ ... 93 35. Inter Construct Correlations and Square Root of AVE. ................................ ............ 94 36. SPGON GRA: Gender Group Differences. ................................ ........................... 95 37. SPGMR GRA: Gender Group Differences. ................................ .......................... 95 38. SPGV GRA: Gender Group Differences. ................................ .............................. 96 39. Final GRA Measurement Items. ................................ ................................ ................ 98 40. Final SPGS Measurement Items. ................................ ................................ ............... 98 41. Perceived Enjoyment Measurement Items. ................................ .............................. 118 42. Perceived Usefulness Measurement Items. ................................ .............................. 119 43. Perceived Ease of Use Measurement Items. ................................ ........................... 120 44. Attitude Measurement Items. ................................ ................................ ................... 120 45. Behavioral Intent Measurement Items. ................................ ................................ .... 121 46. Geospatial Reasoning Ability Measurement Items. ................................ ................. 122
47. Descriptive Statistics of Demographic Variables ... 123
48. Item Reliability ... 124
49. Item Loadings and Cross Loadings, highest loadings shown in bold ... 126
50. Square Root of Loadings and Cross Loadings, highest loadings shown in bold ... 127
51. Inter Construct Correlations and Square Root of AVE ... 128
52. Construct AVE ... 129
53. Path Coefficients ... 131
54. Hypothesis Tests ... 131
55. Summary of Geospatial Decision Performance Research ... 145
56. Summary of Geospatial Decision Performance Research ... 150
57. Descriptive Statistics of Experiment Subjects ... 152
58. Geospatial Reasoning Ability Measurement Items, Adapted from Erskine and Gregg (2011, 2012, 2013) ... 153
59. Perceived Task Technology Fit Measurement Items, Adapted from Karimi et al. (2004) and Jarupathirun and Zahedi (2007) ... 154
60. Problem Complexity ... 155
61. ... 157
62. Measurement Item Loadings ... 158
63. Average Variance Extracted (AVE) by Construct ... 159
64. Measurement Item Cross Loadings ... 160
65. Latent Variable Correlations and Sq. of AVE (shown in bold) ... 161
66. R² Values of Endogenous Latent Variables ... 161
67. Path Coefficients and Significance Levels ... 162
68. Path Coefficients and Significance Levels ... 163
69. R² and Q² Values of Endogenous Latent Variables ... 164
70. Path Coefficients (Male Only) ... 165
71. Path Coefficients (Female Only) ... 165
72. Result of Gender Group Comparison ... 166
73. Hypotheses Test ... 168
LIST OF FIGURES

Figure
1. Proposed Research Model ... 34
2. Overview of Scale Development Procedure from MacKenzie et al. (2011) ... 52
3. First Order Reflective, Second Order Formative Measurement Model including Propositions ... 71
4. Pretest B, Path Analysis including SPGS ... 77
5. Pretest B, Path Analysis including SPGS ... 78
6. Pretest B, Path Analysis ... 82
7. Test, Path Analysis ... 89
8. Proposed Research Model ... 112
9. Initial Nomological Network of GRA, along with Path Coefficients and R-Square Values ... 129
10. Proposed Research Model ... 146
11. Experiment Workflow ... 150
12. ... 151
13. Visual result of SEM PLS Algorithm (using SmartPLS) ... 162
14. Visual result of SEM PLS Algorithm (using SmartPLS) ... 163
15. Research Model with Relationship Significance ... 167
16. Conceptual Model ... 174
17. Current Nomological Network of GRA Construct ... 182
LIST OF ABBREVIATIONS

ATM      Automated Teller Machine
AVE      Average Variance Extracted
CFT      Cognitive Fit Theory
DSS      Decision Support System
GIS      Geographic Information System
GPS      Global Positioning System
GRA      Geospatial Reasoning Ability
INSPIRE  Infrastructure for Spatial Information in the European Community
IS       Information Systems
NASA     National Aeronautics and Space Administration
NFC      Need for Cognition
NSDI     National Spatial Data Infrastructure
PDA      Personal Digital Assistant
PLS      Partial Least Squares
PPGIS    Public Participation Geographic Information Systems
PTTF     Perceived Task Technology Fit
RFID     Radio Frequency Identification
SDSS     Spatial Decision Support System
SEM      Structural Equation Modeling
SMW      Subjective Mental Workload
SPGON    Self Perceived Geospatial Orientation and Navigation
SPGMR    Self Perceived Geospatial Memorization and Recall
SPGV     Self Perceived Geospatial Visualization
SPGS     Self Perceived Geospatial Schematization
TAM      Technology Acceptance Model
TLX      Task Load Index
USGS     United States Geological Survey
UTAUT    Unified Theory of Acceptance and Use of Technology
VGI      Volunteered Geographic Information
CHAPTER I
AN INTRODUCTION TO DECISION MAKING USING GEOSPATIAL DATA

Abstract

This chapter provides an overview of the motivation for this dissertation, a discussion of key industry sectors that currently use geospatial data for decision making, and a brief history of geospatial decision making presented through well-known cases.

Introduction

Consumer, business and government decision makers increasingly rely on geospatial data to make critical decisions. Recent developments, particularly the advent of global positioning systems, the expansion of advanced mobile communications networks, the prevalence of powerful mobile devices, and systems such as location-based services, have allowed individuals and organizations to collect and share vast quantities of geospatial data. Furthermore, due to the increased access to geospatial data, as well as tools to assess such information, many decision makers utilize geospatial criteria in their decision-making tasks. Modern Spatial Decision Support Systems (SDSS) have simplified such analyses, yet there is little understanding regarding how such decisions are made, what presentation methods best facilitate geospatial decision making, and what specific factors influence decision-making performance. While multi-criterion decision making using geospatial data was once only practical for expert analysts, today consumers, businesses and government agencies are increasingly reliant on online mapping services,
location-based services and SDSS to assist in their procedural, organizational and strategic decision-making processes. For instance, consumer tasks such as locating the nearest bank, finding a home or apartment, or even finding friends at an amusement park are decision-making tasks that are commonly aided using SDSS and associated geospatial visualization (MasterCard, 2013; Zillow, 2013; Apple, 2013). Additionally, online mapping services have become increasingly popular with consumers, leading to over 100 million mobile device owners accessing Google Maps each month (Gundotra, 2010). Coinciding with the increasing usage of such mapping services, there has been an increasing public awareness of geospatial tools and information for business and consumers in mainstream media (e.g., Griggs, 2013; Lavrinc, 2013; Versace, 2013).

While mobile devices have simplified everyday geospatial decision making for consumers, technology advances have also enhanced geospatial decision making for businesses, government and non-profit organizations. For instance, businesses can use SDSS to facilitate retail site location tasks, government entities can quickly locate citizens that will be impacted by an impending disaster, and non-profit organizations can target individuals based on precise demographic and location criteria. More specifically, the insurance industry can benefit from enhanced risk analytics and crisis management. The manufacturing sector can benefit from logistics, supply chain and asset management. The banking and finance industry can benefit from the identification of areas with high risk potential as well as from tracking consumer financial behaviors. Marketers benefit through the use of SDSS in numerous ways as well, such as by targeting customers based on far
more granular locations than mailing codes alone (Hess, Rubin, & West, 2004; ESRI, 2013).

Government agencies, economic development groups and tourism authorities can also leverage advances in SDSS. For instance, Makati City in the Philippines used SDSS to make urban planning decisions more efficient (Africa, 2013). In the tourism industry, SDSS has been used to perform visitor flow management, build visitor facility inventories and assess the impacts of tourism (Chen, 2007). Economic development groups can use SDSS to better understand specific commercial and residential markets (ESRI, 2013).

Problem solving using geospatial data has demonstrated its value throughout history. A well-known example of the use of geospatial data was that of Snow (1849, 1855), who explored geospatial relationships between public wells and cholera outbreaks to confirm that some wells were indeed contaminated and contributed to cholera outbreaks (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000). Previously, Seaman (1798) had presented an analysis of yellow fever in New York using similar thematic maps. More recently, SDSS have been applied to better understand and mitigate various diseases, such as malaria (Kelly, Tanner, Vallely, & Clements, 2012), cancer (Rasaf, Ramezani, Mehrazma, Rasaf, & Asadi-Lari, 2012) and diabetes (Noble, 2012). The prevalence of the use of SDSS in medical research has even brought about a sub-domain of healthcare research called spatial epidemiology (Elliot, Wakefield, Best, & Briggs, 2000).

Recently, geospatial data gathered and shared by large crowds using social media has brought about exciting benefits. Such crowd-sourced data allows emergency
resources to be distributed more effectively following natural and man-made disasters. For example, a United States Geological Survey (USGS) program called the Twitter Earthquake Detector uses geospatial data and keyword filtering from Twitter data to determine if an earthquake occurred. It has been reported that the Twitter Earthquake Detector was able to detect a disaster within seconds, while traditional scientific alerts took far longer to reach experts (Department of the Interior: Recovery Investments, 2012).

Academic researchers have explored the use of geospatial data for decision making in regard to multi-criterion decision making, geospatial visualization, geospatial decision-making performance, as well as group decision making using geospatial data (e.g., Jankowski & Nyerges, 2001; Skupin & Fabrikant, 2003; Malczewski, 2006). However, research in these areas has significantly lagged behind technology developments and the increasing prevalence of SDSS, and therefore, no comprehensive decision-making models, research constructs, measurement scales or well-established benchmarks for geospatial decision making exist.

Due to the increasing prevalence of and access to geospatial data, and an increasing trend of individual, business and governmental decision making being performed using SDSS and associated information systems, it is essential that the information systems (IS) scholarship develop a more comprehensive understanding of how such decisions are made. This current lack of understanding within the IS scholarship, as well as the significance of geospatial decision making, provided the primary motivation for the development of this dissertation.
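The crowd-sourced detection approach described above, exemplified by the Twitter Earthquake Detector, can be illustrated with a minimal sketch; it is not the USGS implementation, and the message format, keyword list, grid size, time window and report threshold below are all hypothetical choices made only for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical geotagged message structure; real social-media APIs differ.
@dataclass
class GeoMessage:
    text: str
    lat: float
    lon: float
    timestamp: datetime

KEYWORDS = {"earthquake", "quake", "temblor"}  # illustrative keyword list

def possible_events(messages, window=timedelta(minutes=2), min_reports=25):
    """Return coarse grid cells (about 0.1 degree) in which a burst of
    keyword-matching reports occurred within the given time window."""
    hits = [m for m in messages if KEYWORDS & set(m.text.lower().split())]
    buckets = defaultdict(list)
    for m in hits:
        cell = (round(m.lat, 1), round(m.lon, 1))  # coarse spatial bucket
        buckets[cell].append(m.timestamp)
    alerts = []
    for cell, times in buckets.items():
        times.sort()
        start = 0
        # sliding window over timestamps to find a dense burst of reports
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= min_reports:
                alerts.append(cell)
                break
    return alerts
```

A production system would additionally need to handle messages without coordinates, reposts and false alarms, and would corroborate any detected burst against seismic sensor data.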
The next five chapters will provide initial steps toward a more comprehensive understanding of geospatial decision making. More specifically, Chapter II provides a thorough literature review and research framework for geospatial decision-making research. Chapter III presents the development of a comprehensive construct defining individual geospatial reasoning ability, a key user characteristic that has provided mixed results in previous empirical studies. Chapter IV presents an extension of the Technology Acceptance Model in the context of geospatial visualization and the use of online mapping services. Chapter V explores decision making within the research framework presented in Chapter II. Together, Chapter IV and Chapter V work toward extending the external validity of the proposed geospatial reasoning ability construct. Finally, Chapter VI presents a conclusion to the overall empirical tests of the research framework introduced in Chapter II and defines a nomological network of the geospatial reasoning ability construct presented in Chapter III.
CHAPTER II
BUSINESS DECISION MAKING USING GEOSPATIAL DATA:
A RESEARCH FRAMEWORK AND LITERATURE REVIEW¹

Abstract

Following the brief introduction presented in Chapter I, Chapter II will present a detailed introduction and a conceptual model for future research related to geospatial decision making. More specifically, Chapter II provides a thorough literature review and framework for geospatial decision-making research. Organizations that leverage their increasing volume of geospatial data have the potential to enhance their strategic and organizational decisions. However, literature describing the best techniques to make decisions using geospatial data and the best ways to leverage geovisualization capabilities is very limited. This chapter reviews the use of geovisualization and its effects on decision performance, which is one of the many misunderstood components of decision making when using geospatial data. Additionally, this chapter proposes a comprehensive model allowing researchers to better understand decision making using geospatial data and provides a robust foundation for future research. Finally, this chapter makes an argument for further research of information presentation, task characteristics, user characteristics and their effects on decision performance utilizing geovisualized data.

¹ An early version of this chapter was submitted as a 1st Year Paper and approved by Dawn G. Gregg in November 2010. A subsequent version of this chapter is currently under review with Axioms.
Introduction

While geospatial data permeates business computing, there is only a limited understanding of how geographic information is utilized to make strategic and organizational business decisions, as well as how to effectively visualize geographic data for such decision making. The utility of database technologies, as well as that of spreadsheets, has been taught in most business school courses for many years, so most business professionals have had a clear understanding of the utilization and benefits of such technologies. However, as geospatial data has become more prevalent within IS computing, researchers and business professionals have tried to better understand all aspects of decision making using geospatial data. One of these areas is the ability to understand decision-making processes as they relate to the unique abilities of geovisualization, or the ability to represent, understand and utilize geospatial data in map-like projections for decision making. This paper provides an analysis of current research concerning geovisualization and decision making and presents a call for additional analysis in areas of conflicting research results.

The ability to interpret geographic information and make decisions based on geographic data is essential for business decision makers, because over 75 percent of all business data contains geographic information (Tonkin, 1994) and 80 percent of all business decisions involve geographic data (Mennecke, 1997). While geospatial data can be presented utilizing traditional methods such as tables, often unique relationships contained within geospatial data are only apparent through geovisualization (Reiterer, Mann, Muler, & Bleimann, 2000).
As organizations have collected vast amounts of geospatial or geo-referenced data, two technologies have been developed to interpret these data in support of decision making. These systems are Spatial Decision Support Systems (SDSS) and Geographic Information Systems (GIS). While traditional DSS have been implemented successfully for production planning, forecasting, business process reengineering and virtual shopping (Subsorn & Singh, 2007), such systems poorly utilize geospatial data. Thus, SDSS were developed to aid decision making when utilizing complex geospatial data. Such technologies operate much like DSS, but are tailored to handle the unique complexities of geospatial data. SDSS provide capabilities to input and output geospatial data, provide analytic capabilities unique to geospatial data, and allow complex geospatial representations to be presented (Densham, 1991). Although IS researchers are familiar with DSS concepts, many IS researchers are not yet familiar with key SDSS concepts such as spatial analysis (Pick, 2004, p. 308).

While SDSS provide methods for geospatial decision making, Geographic Information Systems, commonly referred to as GIS, often allow geospatial experts to analyze and report geographic data. GIS can be used to populate information to SDSS as well as to perform complex geospatial analyses. While there are numerous aspects of GIS relevant to IS researchers, this chapter examines the decision-making aspects of GIS (Mennecke & Crossland, 1996). Such an understanding is critical for IS researchers because global GIS adoption rates continue to increase and research has shown that a strong understanding of geospatial data can lead to enhanced decision making (Pick, 2004).
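To make the analytic capabilities that Densham (1991) attributes to SDSS more concrete, the sketch below performs one of the simplest such analyses, a proximity query over candidate sites using the haversine distance. The site coordinates and the 2 km threshold are invented for illustration and are not taken from the reviewed studies.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical candidate retail sites (name, latitude, longitude).
sites = [("Site A", 39.7392, -104.9903),
         ("Site B", 39.7508, -104.9997),
         ("Site C", 39.6133, -105.0166)]
customer_hub = (39.7447, -104.9920)

# Proximity query: which sites lie within 2 km of the customer hub?
nearby = [name for name, lat, lon in sites
          if haversine_km(lat, lon, *customer_hub) <= 2.0]
print(nearby)
```

A full SDSS would combine many such primitive queries with attribute data, visualization and what-if analysis, but the underlying geospatial operations are of this kind.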
SDSS and GIS that leverage geovisualization are provided to professionals and consumers through a variety of sources. Prominent consumer tools for the search of information using geovisualization include Google Maps and Bing Maps, which allow a user to visually locate geo-referenced information, such as addresses, businesses and even people (Google Maps, 2013; Bing Maps, 2013). Other domain-specific examples include the capability to determine wireless signal strength at street level, automated banking kiosk location information and interactive real estate search tools (T-Mobile, 2013; American Express, 2013; RE/MAX, 2013). However, the use of geospatial data is not limited to only consumers and business organizations. Government agencies leverage geospatial data for decision making when solving large societal problems. More specifically, immense geospatial-specific infrastructure systems have been implemented, such as the National Spatial Data Infrastructure (NSDI) in the United States and the Infrastructure for Spatial Information in the European Community (INSPIRE) in the European Union (Yang, Wong, Yang, & Li, 2005; Yang, Raskin, Goodchild, & Gahegan, 2010). Goals of such systems are to enable nearly every agency of a government to share large volumes of geographic data locally, nationally and globally (Yang et al., 2005). Early demonstration projects of the NSDI included a geospatial crime tracking system for a metropolitan police department as well as a regional system to help communities perform effective master planning activities (Federal Geographic Data Committee, 1999).

As more devices and technologies become networked through technologies such as portable navigation devices, mobile computing platforms and radio frequency identification (RFID), more and more collected data will consist of geographic or geo-referenced data. With the increase in the amount of geospatial data available to
decision makers, it is crucial that IS professionals and researchers expand their knowledge of geospatial systems and understand their unique characteristics, inherent potential and limiting drawbacks.

Smelcer and Carmel (1997) identified four research streams contributing to the effectiveness of geovisualization, including information representation, task difficulty, geographic relationship and cognitive skill. This research chapter will expand on these findings. Additionally, Pick (2004) points to promising research in the area of visualization, which this research chapter addresses. Particularly, this chapter will attempt to clarify the unique aspects that geovisualization brings to business decision makers and will suggest specific future research goals.

This chapter begins with a literature review emphasizing the theoretical backgrounds of existing research. It then analyzes existing research related to decision making utilizing geovisualization, including task characteristics (such as task complexity, collaboration and task type), user characteristics (such as cognitive fit, task complexity perceptions, mental workload, goal setting, self-efficacy and spatial reasoning ability), as well as decision-making performance, all of which are identified as key research streams relevant to the visualization of geographic data. The goal of this section is to review relevant research in the IS, geography and psychology realms in order to develop a comprehensive model that can be used to increase the understanding of the impact geographic data visualization has on decision performance. Following this, a conceptual model based on existing literature will be presented. Then, limitations of the reviewed literature and future research suggestions will be discussed. Finally, a conclusion is presented.
Literature Review

This section provides an in-depth analysis of reoccurring themes found in literature related to task and user characteristics, as well as decision-making performance.

Theoretical Background

Research reveals the importance of information presentation, task characteristics and user characteristics on decision performance. An emphasis is placed on exploring theories that have been suggested to explain these four themes. Specifically, literature related to information visualization and its effects on decision making often cites Cognitive Fit Theory (CFT), Complexity Theory, Task Fit Theory and Image Theory, as well as research on task-technology fit, self-efficacy, motivation, goal setting and spatial abilities (see Table 1). Of the four research streams Smelcer and Carmel (1997) identified, each potentially relates to an existing theory, including Task Fit (information representation and geographic relationship), Complexity Theory (task difficulty), and CFT (cognitive skill). This research will expand on these findings.
Table 1. Common Theories Related to Spatial Decision Making.

Theory                          Study
Cognitive Fit Theory            Vessey, 1991 (posited); Smelcer and Carmel, 1997; Dennis and Carte, 1998; Mennecke et al., 2000; Speier and Morris, 2003; Speier, 2006
Complexity Theory               Smelcer & Carmel, 1997; Swink & Speier, 1999
Task Fit/Task Technology Fit    Smelcer & Carmel, 1997; Jarupathirun & Zahedi, 2007
Self-Efficacy                   Jarupathirun & Zahedi, 2007
Motivation Theory               Jarupathirun & Zahedi, 2007
Goal Setting Theory             Jarupathirun & Zahedi, 2007
Image Theory                    Crossland, Wynne, & Perkins, 1995

The following section explores reoccurring themes found in literature related to visualization of geospatial data. These themes include information presentation, task characteristics, user characteristics and decision performance.

Information Presentation

Numerous researchers have explored the importance of visual information presentation on decision performance (e.g., Vessey, 1991; Smelcer & Carmel, 1997; Dennis & Carte, 1998; Mennecke et al., 2000; Speier & Morris, 2003; Speier, 2006). For example, in her work, Vessey posits the CFT, which suggests that there are two types of information presentation, as well as two types of problem-solving tasks. Furthermore, it is suggested that when the problem representation matches the problem-solving task, higher quality decisions are made. In Vessey's research, the objective measures of decision time and decision accuracy, as well as interpretation accuracy, are measured as antecedents of performance; however, it is noted that confidence in the solution also
could play a role. Additionally, Vessey points out that while tasks analyzed in prior research often utilized simple graphs and tables, actual business problems are far more complex and not as well defined. Furthermore, prior research may have included, for example, numbers along with graphical representations, actually presenting a mix of spatial and symbolic data.

Vessey's CFT has been referenced as a theoretical background, extended into other domains and validated in numerous empirical studies, such as Speier (2006) and Smelcer and Carmel (1997). Speier presented a review of eight empirical research papers that tested for cognitive fit. This review discovered that all but one paper either fully or partially supported the CFT. The author of the paper that did not support CFT, Frownfelter-Lohrke (1998), explained that the lack of support most likely resulted from the complex nature of tasks involving the examination of financial statements. Extensions of CFT include work performed by Dennis and Carte (1998), who demonstrated that when map-based presentations are coupled with appropriate tasks, decision processes and decision performance are influenced. Additionally, Mennecke et al. (2000) expand on the CFT by determining the effects of subject characteristics and problem complexity on decision efficiency and accuracy. Also, CFT has been extended to suggest that a user's ability to understand data visualizations will influence decision outcomes (Speier & Morris, 2003).

In addition to CFT, Task Technology Fit has been utilized to explain the importance of appropriate information presentation methods. For example, Ives (1982) articulates the importance of visual information presentation. Ives states that while
researchers have responded to calls for additional research into data and information visualization techniques, there is still potential for additional research into cartographic data visualization, particularly through SDSS, GIS or other digital map-based presentations. Specifically, Ives calls for a more in-depth understanding of how multi-dimensional graphics could display complex information through simplified information or charts that overlay information, both of which are technologies inherent to even basic geovisualization systems.

Densham (1991) suggests that a SDSS user interface must be both powerful and easy to use. Also, a SDSS must provide information in both graphical, or map space, and tabular formats, or objective space, while providing the capability to move between these representations or view these representations simultaneously to determine the most appropriate to facilitate problem solving. However, even with multiple display options, it is not yet understood if a decision maker would know which of the output options provides the best visualization method for a particular decision-making process. To support a problem solver who is unsure of how to select the most appropriate visualization method, several authors have suggested the inclusion of an expert system to provide such suggestions (Densham, 1991; Yang et al., 2005).

Additionally, relevant studies regarding the visualization of cartographic information include Crossland et al. (1995), Smelcer and Carmel (1997), Speier and Morris (2003) and Dennis and Carte (1998). For example, Crossland et al. performed a study in which some participants were provided with a paper map and tabular information while others had access to a SDSS. They were able to confirm that the addition of a GIS-based SDSS contributed significantly toward two measures of
decision-making performance: decision time and decision accuracy. Speier and Morris tested the use of text- and graphical-based interfaces to determine the effects on decision making. Smelcer and Carmel tested whether spatial information is best represented through geovisualization and found that maps representing geographic relationships allowed for faster problem solving. The authors concluded that while low-difficulty tasks can be solved efficiently regardless of representation, more difficult tasks should be represented using maps to keep problem-solving times and errors from rising rapidly (p. 418). Dennis and Carte determined that geographically adjacent/spatial information was best presented using spatial visualization, while non-adjacent/symbolic information tasks were best presented using tables.

Finally, some researchers suggest that reducing the amount of information presented to only include essential information could improve decision-making performance (e.g., Agrawala & Stolte, 2001; Klippel, Richter, Barkowsky, & Freksa, 2005). For example, while early maps presented geospatial information with little precision, they were still able to convey relevant information. The benefit of such simplified maps is demonstrated by Agrawala and Stolte, who collected feedback from over 2,000 users of a technology that emulates hand-drawn driving directions, which often emphasize essential information while eliminating nonessential details. Additionally, Klippel et al. suggest that modern cartographers can successfully develop schematic maps that are simplified, yet present environmental knowledge. Comprehensive, yet easy-to-read transit maps used in large metropolitan cities demonstrate a good example of the benefit of schematization.
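One way the schematization discussed above could be automated is through classic polyline simplification; the sketch below implements the Douglas-Peucker algorithm on a toy route. This is only one of many possible simplification techniques and is not drawn from Agrawala and Stolte's or Klippel et al.'s systems; the coordinates and tolerance are illustrative.

```python
def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # distance = |cross product| / length of the base segment
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping only points that deviate from a straight
    line between the endpoints by more than the tolerance."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], start, end)
        if d > dmax:
            idx, dmax = i, d
    if dmax > tolerance:
        left = douglas_peucker(points[: idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right
    return [start, end]

# A toy route with small wiggles that a schematic map would omit.
route = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(route, tolerance=1.0))
```

Schematic transit maps go further by also straightening angles and equalizing spacing, but the underlying idea of discarding detail below a relevance threshold is the same.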
Task Characteristics

In addition to information presentation, research has shown that the specific characteristics of the task being performed can play a vital role in decision-making performance. Complexity Theory has been extended to demonstrate that key aspects of geovisualization, including data aggregation, data dispersion and task complexity, influence decision-making performance (Swink & Speier, 1999). For example, Complexity Theory posits that as task complexity increases, so too does the need for information presentation to match problem-solving tasks. Additionally, Complexity Theory was validated in research showing that increased task difficulty led to decreased decision-making performance. Moreover, Crossland and Wynne (1994) discovered that decision-making performance decreased less significantly with the use of electronic maps versus paper maps.

Jarupathirun and Zahedi (2001) state that, based on research by Vessey (1991), Payne (1976), Campbell (1988) and Zigurs and Buckland (1998), tasks can be classified into simple and complex groups based on task characteristics. Characteristics of complex tasks include multiple information attributes, multiple alternatives to be evaluated, multiple desired outcomes, solution scheme multiplicity, conflicting interdependence and uncertainty. Several empirical studies have addressed task complexity (e.g., Crossland et al., 1995; Smelcer & Carmel, 1997; Swink & Speier, 1999; Mennecke et al., 2000; Speier & Morris, 2003). For example, Speier and Morris (2003) discovered that decision-making performance increased by utilizing a visual query interface when working with complex decisions. Additionally, Swink and Speier defined task characteristics to include the
problem size, data aggregation and data dispersion. Their findings revealed that decision performance, as measured by decision quality and decision time, was superior for smaller problems. In the context of data aggregation, there was no effect on decision quality; however, there was a significant effect on decision time, indicating that more time was required for disaggregated problems. Additionally, it was discovered that decision quality for problems with greater data dispersion improved, but there was no significant effect on decision time. Smelcer and Carmel confirmed what had been discovered in previous research (others mentioned herein) in that more difficult tasks require additional problem-solving time. In their work, Mennecke et al. discovered that as task complexity increases, accuracy is lowered, yet found only partial support for task efficiency being lowered.

Research conducted by Crossland et al. (1995) on the effects of SDSS on decision-making performance included a measure of task complexity. In their work, it was discovered that the use of a SDSS versus data tables and paper maps significantly improved decision-making time, while there was no significant effect on decision accuracy. The authors pointed out that there may have been too much similarity between the task complexity levels to ensure that decision accuracy would not be improved through the use of an SDSS. The authors also suggested that there may be levels of problem complexity that can only be solved through the use of an SDSS.

In their work, Albert and Golledge (1999) developed three paper-and-pencil tests to assess task complexity across experience levels and gender. One of the findings was that the boundary complexity, or disjunction, of the
visualized entity did not affect performance, whereas the quantity of visualized entities did. Boundary complexities can be quite varied; for example, the target radius of a retail location may be represented by a simple circle, yet the high-water boundary of a river would be represented by a very complex boundary representing elevation, water flow and other essential qualities.

Additional research suggests that the perception of complexity may be essential to better understanding the effects of task characteristics on decision-making performance. For example, Huang (2000) performed an experiment on 10 popular web-based shopping sites and determined that increased complexity decreased the desire to explore the site, but slightly increased the desire to purchase. Perhaps, when applied to SDSS decision making, an increased complexity decreases the desire to explore additional solutions while encouraging a decision to be made quickly. This could explain some of the variances discovered in past research, particularly within the task complexity and decision-making performance realm. Huang's research utilized the General Measure of Information Rate developed by Mehrabian and Russell (1974) as a measure of perceived complexity.

Speier (2006) proposed a framework of complexity with four levels, which are, in order of complexity: (1) trivial decision making, (2) optimal decision making, (3) satisficing decision making and (4) aided decision making. Speier also tested Vessey's (1991) CFT by comparing the outcomes of spatial and symbolic information presentation with spatial or symbolic tasks, moderated by task complexity, on decision performance as measured by decision quality and accuracy. In this empirical study, the subjects completed tasks that all had optimal
outcomes. However, findings were inconsistent with theory, as the decision time of symbolic tasks with low complexity was reduced when using spatial presentations. However, there are also findings suggesting that tables and graphs are equal in decision performance at a low complexity and that graphs provide a higher decision performance at high complexity.

Gill and Hicks (2006) presented a thorough list of complexity frameworks from literature, which are presented in Table 2. Like Speier (2006), Gill and Hicks also suggested that there are multiple classes of complexity: experienced complexity, information processing complexity, problem space complexity, lack of structure complexity and objective complexity.
Table 2. List of Complexity Frameworks (Adapted from Gill and Hicks, 2006).

Construct Type                                  Description
Degree of Difficulty                            Perceived or observed difficulty
Sum of Job Characteristics Index/Job            Task potential to induce a state of arousal or enrichment; measured using self-reporting instruments such as the JCI and JDS
  Diagnostic Survey
Degree of Stimulation                           Task potential to induce a state of arousal or enrichment; measured using physiological responses
Information Load                                Objective measure of throughput; total information processed or information processed per second
Knowledge                                       Amount of knowledge the subject must possess in order to perform the task
Size                                            Minimum theoretical size of the problem space
Number of Paths                                 Number of alternative paths that are possible given a strategy
Degree of Task Structure                        Lack of strategy or structure needed to move from the initial state to the goal state
Novelty of Task                                 Uniqueness of the task to the subject; routine tasks are not complex using this measure
Degree of Uncertainty                           Degree to which the outcome of the task cannot be predicted at the initial state
Complexity of Underlying Systems/Environment    Number of objective attributes
Function of Alternatives and Attributes         Objective function of the number of alternatives and the task attributes
Function of Task Characteristics                Direct function of all possible task characteristics

Another measure of task complexity could be manipulated through the represented geographic relationships. For example, Smelcer and Carmel (1997) compared spatial information used in the decision-making process presented through tables and maps. In both the tables and the maps, common geographic relationships used in business decision making using spatial data were utilized. These included proximity, adjacency and containment. Examples of proximity in the context of geographic relationships
include route optimization, examples of adjacency include territory assignment, and examples of containment include site selection. In their study of geographic containment and adjacency tasks, Dennis and Carte (1998) discovered that when users are presented with geographic data that represents geographic containments, tabular data presentations might lead to better decision making, while adjacency tasks benefit from map-based visualizations.

While most research into Complexity Theory (as pertaining to decision making) and Task Technology Fit Theory has focused on individual decision performance, recent technological innovations have led to collaborative uses of geospatial data and information that may require these theories to be revisited through a collaborative perspective. Geospatial data and information can lead to collaborative decision making in two distinct ways. First, decision-making tools utilizing geospatial information can be used for collaborative decision making with geographically and temporally distributed participants. Second, through the recent phenomenon of online social networks, geospatial information can be shared and utilized ubiquitously through vast online communities. Each of these methods is discussed next.

A large and varied stream of research exists in the area of collaborative decision making utilizing geographic data. For example, grassroots groups and community organizations have adopted Public Participation Geographic Information Systems (PPGIS) to address the need for collaborative decision making utilizing complex geospatial data (Sieber, 2006). In their work, Conroy and Gordon (2004) empirically look at the ability of a software application to increase citizen involvement in complex policy discussions and propose that geovisualization can offer citizen participants
opportunities to better envision scenarios and can provide additional communication channels to decision makers. Through the ubiquity provided by networked computing and recent technologies, it may be possible for groups of organizations to collaborate and form virtual organizations (Grimshaw, 2001). Jankowski and Nyerges (2001) studied the use of GIS in a collaborative decision-making environment and discovered that decision outcomes such as participant agreements and shared understanding could be more effectively reached through the use of PPGIS. While there is a deep understanding of how collaborative decision makers can utilize geographic data, one aspect not fully explored in current research related to geospatial information presentation and its effects on decision-making performance is its effect on group decision-making performance. In their framework development research, Mennecke and Crossland (1996) call for additional exploration in the areas of GIS and its capabilities in collaborative decision making. Decision-making tools utilizing geospatial information can be used for collaborative decision making with geographically and temporally distributed participants.

In addition to collaborative decision making, another area of current research in the usage of geospatial data involves how such data is utilized within online social networks. This is especially important with the increasing use of online social networks, because large quantities of geospatially referenced data are increasingly being shared through such networks. Goodchild (2007) labels the geographic data that is commonly shared through online social networks as Volunteered Geographic Information (VGI). Some online social networks have included geographic information as a core component in their implementations. Such availability of geographic information through online
social networks has even allowed researchers to map online social networks (Khalili et al., 2009). A geographic visualization of online social networks can provide decision makers with a geospatial representation of a virtual phenomenon. From a business perspective, a geospatial understanding of social networks can allow strategic decision makers to target marketing campaigns or locate retail operations in geographic areas appropriate for their target audiences. However, the successful interpretation of VGI from online social networks is hampered by several drawbacks. These drawbacks exist primarily because the geographic data provided by members of online social networks varies in precision; for example, one item may be tagged with only an approximate location, while another is tagged with precise geographic coordinates. Additionally, as there are no validation processes, a user can easily misidentify or intentionally provide incorrect geospatial tagging (Khalili et al., 2010; Flanagin & Metzger, 2008).

User Characteristics

In addition to task characteristics, researchers suggest that the characteristics of the user also play a role in decision performance. Such characteristics include context-based factors, experience level, self-efficacy, cognitive workload and spatial reasoning ability.

Several researchers have discussed the importance of context as an extension to the research into the usability of geovisualization tools, such as SDSS and GIS (e.g., Albert & Golledge, 1999; Mennecke et al., 2000; Jarupathirun & Zahedi, 2001; Slocum, Blok, Jiang, Koussoulakou, Montello, Fuhrmann, & Hedley, 2001; Speier & Morris, 2003). Slocum et al. reported that context-based factors influence the ability to interpret
geospatial information (Slocum et al., 2001, p. 10). Additionally, Zipf (2002) posits that geovisualized maps must address user contexts such as pre-existing knowledge of the area presented in the map, as well as physical and cognitive impairments and abilities. Specifically, an example was demonstrated where a young child was presented with a geovisualization tool in which abstract information was removed and a map that more closely represents reality was presented to facilitate better interpretation. Additionally, Zipf further posits that a user's cultural context must be considered; for example, in some cultures, the color green may represent parkland or forests on maps, while in others, it represents bodies of water.

In their work, Albert and Golledge tested spatial tasks and measured gender as a control variable, and one of their conclusions concerned gender differences in performance on such tasks. An example of such a task is identifying neighborhoods that are near a public school but not near a prison. Additionally, the authors found that there were no significant differences in test scores between subjects with GIS experience versus those that had none. This is an essential observation, as GIS and SDSS technologies are often implemented as web-based technologies and can have users with limited geographic visualization and information knowledge. Finally, both Zipf and Slocum et al. point out the importance of considering sensory disabilities when developing geovisualization technologies.

Speier and Morris (2003) discovered that task experience, database experience, gender and computer self-efficacy were non-significant in their analysis of query
interface design on decision performance. Other user context variables, such as those related to information learning as well as fatigue related to working through multiple tasks, were controlled for by Swink and Speier (1999). Mennecke et al. compared subjects with previous SDSS experience to subjects with limited SDSS experience to determine if experience influenced the efficiency and accuracy of the solutions. In their experiments, the cognitive effort required in the decision-making process was measured using the Need for Cognition (NFC) instrument. However, their research found only marginal support for an increase in solution accuracy and no difference between subject groups in regard to solution efficiency. Additionally, Mennecke et al. discovered that experience only presented significant improvement on solution accuracy when working with paper maps. They discovered that students were more efficient than professionals in solving geographic problems. While this may seem surprising, it may be due to professionals incorporating multiple levels of analysis that students with limited experience may not be able to draw upon.

In addition to research into the importance of user context and user experience as related to geovisualization, other theories and constructs, particularly those from psychology and organizational behavior, are also utilized, including self-efficacy, motivation, goal setting and Image Theory. For example, Jarupathirun and Zahedi (2007) introduced a perceived performance construct consisting of decision satisfaction, SDSS satisfaction, perceived decision quality and perceived decision efficiency. In their findings, it was discovered that perceived decision efficiency was the greatest motivator for goal commitment. While decision quality is likely more important than efficiency, the
authors proposed that there might be a perception that SDSS improves decision quality inherently. Additionally, Jarupathirun and Zahedi (2001) posit that, based on empirical research into the theories associated with goal setting, users who set a higher goal level will be motivated to expend more effort toward reaching the desired goals. Jarupathirun and Zahedi also argue that intrinsic incentives, such as perceived effort and perceived accuracy, can influence goal commitment levels, which are known to moderate the effects of goal levels on performance (Hollenbeck & Klein, 1987). Finally, in an effort to combat the lack of motivation and/or expertise, some researchers have provided financial incentives and used experiment tasks from domains familiar to the subjects (Speier, 2006).

Crossland et al. (1995) extended Image Theory into the realm of decision making by proposing that the efficiencies gained through the use of electronic maps, versus paper maps, would improve decision performance. Their study revealed that decision performance, as measured through decision accuracy and decision time, improved with the use of electronic maps versus paper maps at two different complexity levels. Additionally, Jarupathirun and Zahedi (2007) discovered that self-efficacy had strong positive influences on task-technology fit and other expected outcomes, as well as a strong negative influence on perceived goal difficulty. It is suggested that repeated, successful completion of tasks could improve self-efficacy, which could be accomplished through training and learning as well as tutorials and support systems.

Another user characteristic explored was the mental workload exhibited by subjects performing geospatial decision-making tasks. Speier and Morris (2003)
measured Subjective Mental Workload (SMW) using the NASA Task Load Index (NASA TLX) after each completed task and discovered that when comparing visual and text-based interfaces, with low and high complexity decisions, the use of visual interfaces carried a reduced SMW. Speier and Morris suggest that research into the SMW could benefit from additional investigation. In particular, the NASA TLX measure could use additional validation, as the user-reported cognitive loads might not represent actual cognitive loads that could be measured utilizing actual physiological responses.

Table 3. User Characteristic Measurement Instruments used in Examined Research, not including spatial ability measures (as shown in Table 4).

Study                            Additional User Context Instruments Used/Cognitive Load Test Used
Speier and Morris, 2003          NASA TLX
Mennecke et al., 2000            NFC [modified]
Jarupathirun and Zahedi, 2007    Self-Efficacy [as recommended by Marakas et al., 1998]
Huang, 2000                      General Measure of Information Rate

Finally, the importance of the spatial reasoning ability of the decision maker utilizing geovisualized information must be further explored, as there has been conflicting research into the ability of spatial reasoning to aid in decision making using spatial data. Some research has presented no or conflicting evidence of the effects of spatial ability on decision performance. For example, Smelcer and Carmel (1997) discovered no statistical significance between spatial ability and the effects of information representation, task difficulty and geographic relationships on decision performance. The researchers speculated that due to the nature of the tasks, which did not involve the need to navigate spatial problems, spatial visualization techniques were not required (Smelcer & Carmel, 1997), and they called for more in-depth
investigations of visual skills related to decision-making performance. Additionally, while in their early work Jarupathirun and Zahedi (2001) question whether spatial ability has any impact on system utilization and decision-making performance, they follow up with a determination that spatial ability, as measured through spatial orientation ability and visualization ability, had no significant effect on the perceived task-technology fit. These findings are of value, as they suggest that a high spatial ability is not required for a favorable perception of the technology. This is essential when developing a technology for the public Internet, where it will be impractical to ensure that all users of a technology have a prerequisite spatial ability (Jarupathirun & Zahedi, 2007).

However, other research has discovered that there are effects between spatial ability and decision performance. For example, Swink and Speier (1999) determined that higher spatial orientation skills produced a higher decision quality and required less decision time; however, this finding was only significant for large problems with low data dispersion. Additionally, Speier and Morris (2003) found that spatial reasoning ability alone had no significant effects on decision outcomes. However, when combined with interface design, spatial reasoning ability had a significant effect on decision accuracy.

Furthermore, research in other domains has identified a connection between spatial ability and geovisualization tools. For example, Rafi, Anuar, Samad, Hayati, and Mahadzir (2005) discuss the use of online virtual environments to facilitate the instruction of spatial thinking skills. In their study of 98 pre-service undergraduate students, only seven students, or about 7%, were found to have any previous spatial experience. Rafi et al. imply that such a gap is a crucial issue and creates a hurdle for
students pursuing careers that require qualitative spatial reasoning. As students with no pre-existing spatial thinking skills had difficulties in courses requiring spatial thinking ability, it is likely that users lacking spatial thinking skills would have difficulties utilizing geovisualization tools. Additionally, students who have participated in courses that utilize geovisualization tools, such as computerized cartography or geographic information systems, have demonstrated improvement in their spatial thinking ability (Lee & Bednarz, 2009). In their research, Lee and Bednarz point out that psychometric tests designed to assess spatial abilities, such as spatial visualization and spatial orientation, were generally focused on small-scale spatial thinking and thus not necessarily valid for testing large-scale geographic spatial abilities. However, Lee and Bednarz discovered that, recently, new spatial analysis tests have been developed which considered spatial abilities in a geographic context (e.g., Audet & Abegg, 1996; Meyer, Butterick, Olkin, & Zack, 1999; Kerski, 2000; Olsen, 2000). Based on these insights, the development of an updated measurement instrument to assess a decision maker's geospatial reasoning ability, a key component of decision making with geospatial data, is recommended. Furthermore, it is suggested that this measurement instrument be based on the work of Lee and Bednarz (2009), who developed and validated a spatial skills test specifically designed to overcome shortcomings of previous spatial skills tests.
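To illustrate how a self-report instrument of this kind might be scored once developed, the sketch below averages hypothetical Likert-scale responses within the four self-perceived sub-dimensions named in this dissertation (orientation and navigation, memorization and recall, visualization, and schematization). The item values, the 7-point scale and the unweighted aggregation are assumptions for illustration only, not the final instrument or its scoring rule.

```python
from statistics import mean

# Hypothetical Likert responses (1-7) keyed by sub-dimension of the
# proposed geospatial reasoning ability construct.
responses = {
    "orientation_navigation": [6, 5, 7],
    "memorization_recall": [4, 5, 4],
    "visualization": [6, 6, 5],
    "schematization": [3, 4, 4],
}

# Average items within each sub-dimension, then average the sub-scores
# to obtain an overall, unweighted score.
subscores = {dim: mean(items) for dim, items in responses.items()}
overall = mean(subscores.values())
print(subscores, round(overall, 2))
```

In the dissertation itself, the relative contribution of each sub-dimension is estimated empirically (for example, through PLS weights) rather than assumed to be equal as in this toy aggregation.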
Table 4. Spatial Reasoning Instruments Used in Examined Research.

Study                           Spatial Reasoning Test Used
Smelcer & Carmel, 1997          VZ-2 (Spatial Visualization)
Albert & Golledge, 1999         Three paper/pencil tests to assess spatial ability
Swink & Speier, 1999            S-1 (Spatial Orientation)
Speier & Morris, 2003           S-1 (Spatial Orientation)
Jarupathirun & Zahedi, 2007     VZ-2 (Spatial Visualization); S-1 (Spatial Orientation)
Lee & Bednarz, 2009             Spatial skills test developed by the authors

Decision Making Performance

Another key component of geospatial decision making is the measure of decision-making performance. To determine decision-making performance, most researchers utilize objective measures of decision time and decision accuracy as indicators of decision-making performance, including Crossland et al. (1995), Dennis and Carte (1998) and Speier (2006). However, in their measure of decision performance, Smelcer and Carmel (1997) simply examined the length of time required to make a decision. Others have proposed various additional indicators, such as decision concept and regret (Sirola, 2003; Hung, Ku, Ting-Peng, & Chang-Jen, 2007), which among other indicators are discussed below.

While decision time and decision accuracy are indicators of decision performance, Sirola (2003) posits that the use of an appropriate decision analysis methodology will undoubtedly influence decision performance metrics and could modify the decision-making process and result. These decision analysis methodologies include cost-risk comparisons, knowledge-based systems, cumulative quality function, chained paired comparisons, decision trees,
decision tables, flow diagrams, pair-wise comparison, cost functions, expected utility, information matrices, multi-criteria decision aids and logical inference/simulation.

In their vignette-based research, Speier and Morris (2003) identified decision performance as a decision outcome, which consisted of subjective mental workload, decision accuracy and decision time constructs. In their research, it was discovered that there were significant interaction effects between interface type (text/visual) and task complexity on Subjective Mental Workload (SMW), as well as effects of interface type and task complexity individually.

While CFT and Complexity Theory focus on the relationships between user and task characteristics and decision performance, Task Technology Fit Theory posits that a technology will improve task performance if the capability of the technology matches the task to be performed (Goodhue, 1995; Goodhue & Thompson, 1995). Smelcer and Carmel (1997) demonstrate the importance of utilizing the most appropriate types of geographic relationships for the tasks at hand through their extension of Task Technology Fit Theory. Geographic relationships often found in business data include proximity, adjacency and containment tasks. Additionally, Jarupathirun and Zahedi (2007) synthesized research on task-technology fit with the psychology-based constructs of goal setting and self-efficacy to further explain and determine success factors of SDSS. The use of task-technology fit allows researchers to examine user satisfaction, which assesses beliefs about a system and has been shown to impact adoption and intention to use the technology (Rogers, 1983; Taylor & Todd, 1995; Karimi, Somers, & Gupta, 2004).
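As a concrete illustration of the containment relationship mentioned above (for example, checking whether a candidate site or customer falls inside a sales territory), the sketch below uses a standard ray-casting point-in-polygon test. The territory polygon and the test points are toy data, not drawn from the reviewed studies.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: return True if (x, y) lies inside the polygon,
    which is given as a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count how many polygon edges a horizontal ray from (x, y) crosses.
        crosses = ((yi > y) != (yj > y)) and \
                  (x < (xj - xi) * (y - yi) / (yj - yi) + xi)
        if crosses:
            inside = not inside
        j = i
    return inside

# Toy sales territory (a square) and two candidate locations.
territory = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, territory))   # True: contained
print(point_in_polygon(5, 1, territory))   # False: outside
```

Proximity and adjacency relationships are handled with analogous primitives (distance computations and shared-boundary tests), which GIS packages expose as built-in operations.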


In their work, Jarupathirun and Zahedi (2007) explored perceived decision quality, perceived decision performance, decision satisfaction and SDSS satisfaction, and suggest further inclusion of these constructs into the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT). Additionally, Dennis and Carte (1998) discovered that when using map-based presentations, users were more likely to utilize a perceptual decision process, while tabular data presentations induced an analytical decision process. In their work, time and accuracy were used as measurements of decision performance. While numerous researchers utilize perceived measures or objective decision performance measures to determine the results of decision-making performance, Hung et al. (2007) suggest that perceived regret should also be considered, as many decision makers consider potential regret when making decisions. Their study discovered a significant reduction in regret for subjects who utilized a DSS over those who did not. Other constructs and theories, particularly those from psychology and organizational behavior, are also utilized, including research in self-efficacy, motivation, goal setting and Image Theory. For example, Jarupathirun and Zahedi introduced a perceived performance construct consisting of decision satisfaction, SDSS satisfaction, perceived decision quality and perceived decision efficiency. In their findings, perceived decision efficiency was the greatest motivator for goal commitment. While decision quality is likely more important than efficiency, the authors proposed that there might be a perception that SDSS improves decision quality inherently. These findings are consistent with other studies in which task characteristics have been shown to have an impact on user satisfaction as measured through task-technology fit (Karimi et al., 2004).


In their study of visual query interfaces, Speier and Morris (2003) discovered that decision-making performance increased by utilizing a visual query interface when working with complex decisions. In addition, Swink and Speier (1999) discovered that moderate amounts of data dispersion required longer decision times than tasks with low data dispersion. In their work, Albert and Golledge (1999) developed three paper-and-pencil tests to test task complexity across experience levels and gender. Based on existing literature it is evident that the most common measures of decision-making performance are the objective measures of decision time and decision accuracy. However, other research has suggested that perceptions of the decision-making process and performance could also be utilized, particularly as user perceptions often play a role in technology acceptance.

Conceptual Model

Based on the reviewed literature and associated theoretical frameworks, a conceptual model of business decision making using geospatial data is proposed. This model consists of four distinct constructs: information presentation, task characteristics, user characteristics and decision performance. Information presentation was determined to be a key antecedent of decision performance, as suggested through Vessey's (1991) CFT. Literature has shown that different information presentation methods may be required based on the geospatial problem being solved. Additionally, task characteristics have demonstrated an impact on decision performance. Specifically, task complexity, problem type, data dispersion, group decision making and data quality have been shown to define task characteristics. In addition to information presentation and task characteristics, user characteristics have also been shown to influence decision performance. Such user characteristics include age, gender, prior experience, culture, sensory ability, education, self-efficacy, task motivation, goal setting, mental workload, and geospatial reasoning ability. Finally, while decision accuracy and decision time were the most common measures of decision performance, decision satisfaction, decision regret and decision methodology could also be valid measures of decision performance. The relationships between these constructs are presented through the following three propositions:

Proposition P1: Information presentation impacts decision performance.
Proposition P2: Task characteristics impact decision performance.
Proposition P3: User characteristics impact decision performance.

Figure 1 presents this conceptual model visually. This dissertation will explore these propositions and the conceptual model through an empirical study presented in Chapter V.

Figure 1. Proposed Research Model.


Discussion

The following chapters will include the design of measurement instruments and the development of laboratory experiments to test the proposed conceptual model and propositions.

Limitations of Reviewed Literature

Four key limitations were discovered in the reviewed literature: (1) the choice of research subjects, (2) the selection of task types, (3) the motivation of subjects to successfully complete the problem-solving experiment and (4) a lack of experiments testing the full conceptual model. First, in the majority of the literature reviewed for this chapter, undergraduate students were utilized as research subjects (see Table 5) who may not accurately represent business decision makers who utilize geospatial data (e.g., Speier & Morris, 2003; Swink & Speier, 1999; Smelcer & Carmel, 1997). However, research conducted by Mennecke et al. (2000) discovered that there were few differences between the results of university students and business professionals when performing their study. Jarupathirun and Zahedi (2007) cited Mennecke et al. (2000) in their research as a strong validation that university students are a valid proxy for professionals. However, Jarupathirun and Zahedi (2007) also point out that many university students are of a younger demographic population, which might have had previous experience with web-based SDSS such as geovisualized search tools (e.g., restaurant and ATM locators) and online mapping tools (e.g., Google Maps), providing them with a significant amount of prior SDSS experience. Thus, it is recommended that future studies which elect to use student populations provide a justification of the sample choice and clearly discuss limitations of generalizability per the recommendations of Compeau, Marcolin, Kelley, and Higgins (2012).

Table 5. Research Population Groups.

Study | Population
Crossland et al., 1995 | Undergraduate college students
Smelcer & Carmel, 1997 | Undergraduate students
Dennis & Carte, 1998 | Graduate business students
Albert & Golledge, 1999 | Undergraduate students in geography, psychology and an introductory GIS class
Swink & Speier, 1999 | Undergraduate students, previous GIS coursework
Mennecke et al., 2000 | University students and professionals
Speier & Morris, 2003 | Undergraduate students
Speier, 2006 | Undergraduate students
Jarupathirun & Zahedi, 2007 | Undergraduate business students
Lee & Bednarz, 2009 | University students

Second, the problem types examined by existing research have presented some additional limitations. While some researchers chose real estate/home finding as their task method (e.g., Speier & Morris, 2003), others chose more domain-specific tasks. This is a concern as, for example, the task of locating an automated banking kiosk would reflect a fundamentally different problem than determining properties that may be impacted by a natural disaster. It is suggested that future research carefully select problem types to facilitate an improved comparison of research results. Third, the motivation for completing the research tasks accurately can be questioned. To address this issue, some researchers provided monetary incentives to participants plus additional monetary incentives for higher decision performance (Hung et al., 2007). Furthermore, simulated experiments may not adequately reveal how individuals make decisions in real-world settings. Additionally, group decision making may have different motivations than individual decision making. Finally, few studies measured task and user characteristics simultaneously with information presentation to determine moderating impacts of each construct. Thus, it is suggested that the entire conceptual model be tested empirically to determine the effects of each antecedent construct on decision performance. In addition to addressing these limitations, numerous future research opportunities have presented themselves in the course of this literature review.


38 (1995) maker confidence, user process satisfaction, and indiv idual level of motivation (p. 232) The surveyed literature suggested numerous future research possibiliti es within the four research themes presented within this paper. These themes include information presentation, task characteristics and user characteristics and their effects on decision performance using geospatial data. Of these, the theme of user charac teristics seems most promising, particularly in regard to a deeper understanding of the importance of geospatial ability. Some general research questions, based on the c onceptual model include: Which geovisualization techniques improve decision performance ? Which specific user characteristics impact decision performance? Can specific geovisualization techniques overcome user characteristics that negatively impact decision performance? Which specific task characteristics impact decision performance? Can specific geovisualization techniques overcome task characteristics that negatively impact decision performance? Potential research questions related to geospatial ability include: Do current spatial ability measurement instruments properly identify the abi lity to analyze and interpret geospatial data as would be defined in a geospatial ability construct? If not, what key antecedents can measure geospatial ability? Does geospatial ability influence decision making performance? And if so, can inconsistencies in past empirical studies be explained?


39 Do actual geospatial ability and perceived geosp atial ability differ? To answer these questions, it is proposed that a geospatial ability construct be developed along with a comprehensive measurement instrument. Suc h a construct would allow researchers to more easily validate research results which could provide business leaders with a better understanding of the need of geospatial ability in the workforce. Furthermore, an understanding of the importance of geospatia l ability could lead to ability. Additionally it is suggested that future geospatial decision making research emphasize the importance of including measures for each o f the three antecedent constructs (information presentation, task characteristics, user characteristics) to determine their combined effects on decision performance. Future research into decision making using geospatial data should continue to validate exi sting theory as well as provide business decision makers with sound best practices and tools for decision making. Furthermore, an understanding of the importance of geospatial decision making could lead design science researchers to develop refined geovisu alization tools which may overcome potential negative task and user This dissertation will include the development of a geospatial reasoning ability construct, measures of such a construct, as well as empirical research to determine the potential benefits of such a construct. Conclusion As organizations collect large amounts of geospatial data, there is a need to effectively utilize the collected data to make strategic and organizational decisions. However, literature describing the best techniques to make decisions using geospatial


40 data as well as the best approaches for geovisualization is limited. This literature review revealed that existing research can provide a strong foundation for future exploration of how business decision making using geospatial data occurs. Additionally, a conc eptual model for the study of effects of geovisualization on decision performance is presented and defined through existing theory. Along with the conceptual model, numerous applicable research methods, existing constructs, potential limitations, validity concerns and potential future research questions were presented. While business school curriculums often include basic computing courses as well as courses designed to convey understanding of database and spreadsheet tools, there has been neglect in incor porating even a basic understanding of the importance of geospatial data as well as the unique tools to interpret such data. While in the last ten years some business schools have recognized th e need for the inclusion of GIS by offering a required course, an elective course or even a degree emphasis or research center (Pick, 2004), following the insights represented in this paper, it is recommended that business curriculums include problem solving exercises utilizing geospatial data. Based on the numerous c ontradictions in past research, particularly pertaining to task complexity, the development of common framework for determining task complexity levels in problem solving scenarios is recommended Such a framework could then be used to benchmark research an d provide researchers with the ability to more easily compare research results. Additionally, based on discrepancies of previous research into geospatial ability, the importance of spatial ability in problem solving using geovisualized information must be better understood in order to ensure that businesses and individuals are able to make better decision using geographic data. This research will


41 allow information system practitioners to effectively utilize geovisualization tools to organize and present lar ge quantities of geospatial data. This chapter reviewed the use of geospatial visualization and its effects on decision performance, which is one of the many components of decision making using geospatial data. Additionally, this chapter proposed a compreh ensive model allowing researchers to better understand decision making us ing geospatial data and provided a robust foundation for future research. Finally, this chapter made an argument for further research into task and user characteristics and their eff ects on decision performance when utilizing geospatial data.


CHAPTER III

GEOSPATIAL REASONING ABILITY: CONSTRUCT AND SUBSTRATA DEFINITION, MEASUREMENT AND VALIDATION 2

2 An earlier version of this chapter was submitted as a Comprehensive Examination Paper and approved in September 2012. A subsequent version of this chapter was presented at the Americas Conference on Information Systems (AMCIS) 2011 and is currently under review with Decision Support Systems.

Abstract

The preceding chapter suggested several potential research questions related to geospatial reasoning ability. The first of these questions relates to determining how geospatial reasoning ability can be measured, as well as what the key antecedents of geospatial reasoning ability are. Thus, Chapter III presents the development of a comprehensive construct defining individual geospatial reasoning ability, a key user characteristic that has produced mixed results in previous empirical studies of the use of geospatially referenced data to support business decision making. Prior research investigating the significance of user characteristics on decision performance when working with geospatial data has presented conflicting results. This is particularly true in regard to the impact of geospatial reasoning ability on the ability to perform efficient and effective decision making. As the amount of geographic and geospatially referenced data grows, it is essential to develop a comprehensive understanding of how user characteristics, such as geospatial reasoning ability, influence decision performance. Furthermore, such an understanding is an essential component within the human-computer interaction domain. This research introduces two new constructs, Geospatial Reasoning Ability and Geospatial Schematization, and presents validated measurement scales to accompany these constructs.

Introduction

Geospatial data permeates business computing. Many business decisions, from transport and logistics to marketing and product development, now rely on geospatial data. This highlights the need for an improved understanding of ways to best utilize geographic data when making such decisions. Scholars have stated that over 75 percent of all business data contains geographic information (Tonkin, 1994) and that 80 percent of all business decisions utilize geographic data (Mennecke, 1997). It is very likely that these percentages are now even greater due to the prevalence of devices that can collect geo-referenced data, particularly GPS-enabled mobile computing platforms, such as smartphones and tablets. Researchers have found that a clear understanding of such geospatial data can lead to improved business decision making (Pick, 2004). Specialized tools have been developed to help businesses utilize the immense body of geospatial knowledge that is continuously being collected. These tools include Spatial Decision Support Systems (SDSS) and Geographic Information Systems (GIS). SDSS function much like Decision Support Systems (DSS), but are designed to enable business decision makers to better understand the complexities of geospatial data. GIS allow experts to analyze and report geographic data. As various GIS and SDSS tools evolve, many of their functionalities and capabilities are converging. Of additional interest is that these tools have evolved to allow non-geospatial experts to easily perform decision-making functions using geospatial data. For example, many banks provide online and mobile tools allowing consumers to locate automated banking kiosks that match particular criteria, such as hours of operation and types of cards accepted.

As the volume of geographic and geographically referenced data grows and organizations rely on geovisualization to present such data to business decision makers, it is imperative that all aspects of the decision-making process are fully understood in order to help researchers and practitioners develop optimal systems. This chapter provides the first step toward improving our understanding of how geospatial reasoning ability impacts the capability of business decision makers to use geovisualization when making decisions. There are several research questions related to geovisualization, decision performance and geospatial reasoning ability. First, can geospatial reasoning ability be measured practically? Second, does geospatial reasoning ability moderate decision-making performance? Third, can specific geovisualization techniques allow a decision maker without strong geospatial reasoning ability to make sound decisions? The first of these questions is addressed in this chapter. This chapter begins with a literature review emphasizing the theoretical background and the current understanding of the influence of user characteristics, particularly spatial reasoning, on geospatial decision performance. Then a section describing the scale development procedure is presented. This is followed by a formal definition of the proposed construct and substrata, along with a measurement model. Next, an exploratory study is presented and discussed. Finally, limitations, future research suggestions and a conclusion are presented.
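Before turning to the literature review, the consumer-facing SDSS example above (locating banking kiosks that match particular criteria) can be illustrated with a short sketch. It is a minimal, hypothetical example: the kiosk records, function names and thresholds are invented, and the great-circle (haversine) distance stands in for whatever routing or proximity logic a real system would use.

```python
# Minimal sketch (hypothetical data and names): filter ATM kiosks by
# attribute criteria and haversine distance, as a consumer-facing SDSS might.
from math import radians, sin, cos, asin, sqrt

KIOSKS = [  # (name, lat, lon, open_24h, accepts_debit)
    ("Main St Branch", 39.7392, -104.9903, True, True),
    ("Airport Kiosk", 39.8561, -104.6737, True, False),
    ("Campus Kiosk", 39.7447, -105.0052, False, True),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def find_kiosks(user_lat, user_lon, max_km=5.0, open_24h=None, accepts_debit=None):
    """Return kiosks matching the attribute criteria, nearest first."""
    matches = []
    for name, lat, lon, is_24h, debit in KIOSKS:
        if open_24h is not None and is_24h != open_24h:
            continue
        if accepts_debit is not None and debit != accepts_debit:
            continue
        distance = haversine_km(user_lat, user_lon, lat, lon)
        if distance <= max_km:
            matches.append((distance, name))
    return sorted(matches)

print(find_kiosks(39.7420, -104.9915, max_km=3.0, open_24h=True))
```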


45 Literature Review This section provides an analysis of reoccurring topics found in literature related to geovisualization and decision performance, along with a review of literature which expl ores the effects of user characteristics on decision making performance, with an emphasis on research related to spatial reasoning. As business leaders are faced with tasks such as the interpretation of large quantities of business data containing geospati al information, it is crucial for researchers to understand how such decisions are made. Particularly, what presentation formats are most appropriate, w hich individual characteristics form stronger decision makers, and what specific decision tasks can be m ade utilizing geospatial data ? Applying an existing theoretical lens to this problem will allow researchers to draw upon established research in an attempt to answer these questions. Cognitive Fit Theory Numerous researchers have explored the importance o f visual information presentation. For example, Vessey (1991) posits the C FT which suggests that when information presentation matches the problem solving task, higher quality decisions are made. The C FT has been referenced as the theoretical background, extended into other domains and validated in numerous empirical studies, such as Mennecke et al. (2000), Smelcer and Carmel (1997) and Speier and Morris (2003). Extensions of the C FT include work performed by Dennis and Carte (1998 ), which demonstrated that when map based presentations are coupled with appropriate tasks, decision processes and decision performance are influenced. Additionally, Mennecke et al. expand on the C FT by determining the effects of subject characteristics an d problem complexity on decision


46 efficiency and accuracy. Speier (2006) presented a review of eight empirical research papers that tested for cognitive fit. This research discovered that all but one paper (Frownfelter Lohrke, 1998) either fully or partiall y supp orted the C FT Geovisualization and Decision Performance While geospatial data can be presented utilizing traditiona l methods such as tables, unique relationships contained within geospatial data are often only apparent though geovisualization (Reit erer et al. 2000). Geovisualization, or the visualization of geospatial or geospatially referenced data, and its effect on decision performance has been explored in several key works, including Crossland et al. (1995), Smelcer and Carmel (1997), Speier an d Morris (2003) and Dennis and Carte (1998). Research has revealed that decision performance was improved through the use of GIS based SDSS (Crossland et al. 1995), graphical based interfaces for geospatial data (Speier & Morris 2003), geovisualization f or tasks with a high task complexity (Smelcer & Carmel 1997) and for presenting adjacent information (Dennis & Carte 1998). Comparisons were made with paper maps (Crossland et al. 1995), text based interface s to geospatial data (Speier & Morris 2003) a nd tabular presentation (Dennis & Carte 1998). Research that examines information presentation and its effects on decision performance utilizes numerous measurement indicators; however, objective measures, such as decision time and decision accuracy, are the most common (Crossland et al., 1995; Dennis & Carte, 1998; Smelcer & Carmel, 1997; Speier, 2006). Other indicators have included perceptions of decision performance and even decision regret (Hung et al. 2007; Sirola, 2003). Researchers exploring the r elationship of geovisualization on


47 decision performance also often include task characteristics, user character istics, or both, in their work. User Characteristics There is growing research into the effects of user characteristics, partic ularly context based (Albert & Golledge, 1999; Slocum et al. 2001; Zipf, 2002) and cognitive based (M ennecke et al., 2000; Speier & M orris, 2003). Furthermore, user characteristics explored in research have included gender (Albert & Golledge, 1999), previous experience with SDSS tools (Men necke et al., 2000) self efficacy and motivation (Jarupathirun & Zahedi, 2001, 2007). While the existing work on user characteristics has revealed important insights, research into the importance o f spatial reasoning ability has produced conflicting results ( Smelcer & Carmel, 1997; Swink & Speier, 1999; Speier & Morris, 2003; Rafi et al., 2005; Jarupathi run & Zahedi, 2007; Lee & Bednarz, 2009). For ex ample, Smelcer and Carmel discovered no significa nt relationships between spatial ability and decision performance. However, the researchers did point out that this might be because the task characteristics may not have required the use of spatial ability from the research subjects. Addit ionally, Swink a nd Speier noted inconsistencies in prior research measuring spatial ability and called for additional research. Fin ally, Jarupathirun et al. revealed that spatial ability, as measured through spatial orientation ability and visualization ability had no ef fect on decision performance. In contrast another corpus of research had found relationships between spatial ability and decision performance. For instance, Swink and Speier (1999) revealed that experiments utilizing task characteristics involving large p roblems with low data


48 dispersion and user characterist ics of high spatial orientation improved decision performance. Rafi et al. (2005) stated that students who participated in virtual environments had improved spatial abilities, which could help overcome difficulty in courses that require spatial skills, such as engineering graphics or cartography. Whitney, Batinov, Miller, Nusser and Ashenfelter (2011) determined that address verification performance was significantly associated with spatial ability. Alte rnately, Lee and Bednarz (2009) discovered that students who completed courses utilizing geospatial tools gained greater geospatial reasoning ability as measured by a geospatial ability measurement exam developed by the researchers Rusch, Nusser, Miller, Batinov and Whitney (2012) discovered that spatial ability has an effect on decision performance when testing for three dimensions of spatial ability : spatial visualization, logical reasoning and perspective taking These conflicting results may be due to the nature of the spatial reasoning tests used (see Table 6 ).


Table 6. Spatial Reasoning Instruments Used in Examined Research.

Study | Spatial Reasoning Test Used | Effect of Spatial Ability on Outcome
Smelcer et al., 1997 | VZ-2 (Spatial Visualization) | Effect
Albert et al., 1999 | Three paper/pencil tests | Partial effect
Swink et al., 1999 | S-1 (Spatial Orientation) | Effect
Speier et al., 2003 | S-1 (Spatial Orientation) | Effect
Jarupathirun et al., 2007 | VZ-2 (Spatial Visualization); S-1 (Spatial Orientation) | No effect
Lee et al., 2009 | | Effect
Whitney et al., 2011 | VZ-2 (Spatial Visualization); MV-2 (Visual Memory); PT (Perspective Taking) | Effect
Rusch et al., 2012 | VZ-2 (Spatial Visualization); Logical Reasoning; PT (Perspective Taking) | Effect

Lee and Bednarz (2009) emphasize that most of the spatial ability tests utilized in prior research measure small-scale spatial abilities, such as spatial orientation and spatial visualization. Indeed, many of the reviewed studies utilized various components of popular spatial reasoning tests (see Table 6). To overcome this limitation, Lee and Bednarz (2009) developed a test to measure spatial reasoning within a geographic context.

Task Characteristics

Research has shown that characteristics of the task being performed can play a vital role in decision-making performance. For example, Complexity Theory (Campbell, 1988) posits that as task complexity increases, so too does the need for information presentation to match problem-solving tasks. Complexity Theory has been extended to demonstrate that key aspects of geovisualization, including data aggregation, data dispersion and task complexity, influence decision-making performance (Swink & Speier, 1999). This is consistent with Smelcer and Carmel's (1997) research, which confirmed that increased task difficulty led to decreased decision performance. Moreover, Crossland and Wynne (1994) discovered that decision-making performance decreased less significantly with the use of electronic maps versus paper maps. Jarupathirun and Zahedi (2001) state that, based on research by Vessey (1991), Payne (1976), Campbell (1988) and Zigurs and Buckland (1998), tasks can be classified into two groups, simple and complex, based on task characteristics. Characteristics of complex tasks include multiple information attributes, multiple alternatives to be evaluated, multiple desired outcomes, solution scheme multiplicity, conflicting interdependence and uncertainty. Several empirical studies have addressed task complexity. For example, Speier and Morris (2003) discovered that decision-making performance increased by utilizing a visual query interface when working with complex decisions. Additionally, Swink and Speier (1999) defined task characteristics to include problem size, data aggregation and data dispersion. Their findings indicated that decision performance, as measured by decision quality and decision time, was superior for smaller problems. In the context of data aggregation, there was no effect on decision quality; however, there was a significant effect on decision time, indicating that more time was required for disaggregated problems. Additionally, it was discovered that while decision quality for problems with high data dispersion improved, there was no significant effect on decision time. Smelcer and Carmel (1997) confirmed what had been revealed by previous research (others mentioned herein): that more difficult tasks require greater decision time. In their work, Mennecke et al. (2000) discovered that as task complexity increases, accuracy is lowered, yet found only partial support for task efficiency being lowered.

Scale Development Procedure

As many of the reviewed research projects explored different dimensions of spatial ability and most did not measure spatial reasoning with regard to geographic context, this could partially explain the contradictory results found in these studies. This finding suggests a need for an improved measure of spatial ability which is context sensitive to both geographic scale (task aware) and business decision making (user aware). Thus, a new construct and measurement scale, Geospatial Reasoning Ability (GRA), are introduced in the following sections to capture geospatial reasoning ability within the context of performing business decision making. This research utilized the construct development and validation procedures recommended by MacKenzie, Podsakoff, & Podsakoff (2011) to define and measure geospatial reasoning ability and to develop an appropriate measurement scale (Figure 2). In their work, MacKenzie et al. present a ten-step scale development procedure. These steps are classified into six categories: conceptualization, development of measures, model specification, scale evaluation and refinement, validation, and norm development. This chapter completes the first seven steps in the scale development process. Completing these steps allows for an initial GRA scale to be developed and validated. Future research studies will be necessary to validate this scale in other contexts and to develop norms for the scale. This is consistent with other initial scale development studies (e.g., Balog, 2011; Brusset, 2012).

Figure 2. Overview of Scale Development Procedure from MacKenzie et al. (2011).


53 Construct Conceptualization (Step 1) M acKenzi e et al. (2011) recommend that the first step of the scale development procedure is to complete four factors of construct conceptualization. These four factors include: (1) establishing how the construct has been used in prior research or by practitioners, (2) identifying the properties of the construct as well as the entities to which the construct applies, (3) specifying the conceptual theme of the construct and (4) defining the construct in clear, concise and unambiguous terms. Factor One: Examination of Prior Research Factor one of construct conceptualization as suggested by MacKenzie et al. (2011) is to conduct a comprehensive literature review to determine how geospatial reasoning and similar concepts have been defined and measured. The majority of these findings were described in the literature review section above. Prior research has shown conflicting results when the relationship between spatial ability and decision performance was assessed (e.g. Jarupathirun & Zahedi, 2007; Rafi et al., 2005; Smelcer & Carmel, 1997; Speier & Morris, 2003; Swink & Speier, 1999), as summarized in Table 6 The majority of these studies have had two major limitations: (1) they only assessed certain dimensions of spatial ability and (2) they do not specifically address the geographic context. Furth ermore, existing measurement tools are often complicated to administer and may not provide comparative results as only certain dimensions are measured For example, some of these tests require prior GIS knowledge in order to be completed successfully. To address these concerns, Lee and Bednarz (2009) developed a new spatial skill test. Their test was designed to overcome the aforementioned limitations, by measuring


54 multiple dimensions of spatial thinking and by placing these questions in a geospatial conte xt. Their 30 minute test consisted of seven sets of question items, including performance tasks and multiple choice questions. While this assessment addressed many of the initial concerns, it too ha d some limitations. For example, the scale was designed to evaluate, and be tested on, students enrolled in a geography department and thus the questions specifically refer to many common GIS tasks, such as site selectio n and topography. C oncepts and terms such as these are ones that a business decision maker may not be very familiar with, requiring training in order to clarify the test tasks to subjects outside the geography domain. Furthermore, a post study questionnaire revealed that the subjects perceived many of the questions as too simple. We address these l imitations through the development of a measurement scale that (1) addresses known dimensions of geospatial reasoning, (2) emphasizes the geographic context of the cognitive ability measured and (3) assesses GRA in subjects not necessarily familiar with ad vanced cartographic and geographic terms. Doing so will address the major limitations of the existing measurement tools. The limitations found in the current body of research are the primary motivation for the development of a new construct and measuremen t tool. S uch a scale could provide a more accurate measure as it w ould be sensitive to geographic scale (task aware) and busine ss decision making (user aware). Furthermore, such a scale would allow future researchers to more easily incorporate geospatial r easoning ability in their work Thus, a new construct, Geospatial Reasoning Ability (GRA) of business decision makers, is


55 of performing business decision making, alo ng with an easy to use questio nnaire based measurement scale. Factor Two: Identification of Construct Properties and Entities MacKenzie et al. (2011) suggest that the second factor of the construct conceptualization step should include the identification of the construct properties and identification of the entities to which it applies. As previous studies measuring spatial ability were focu s ed on individual decision making, the proposed construct is also applied to an individual entity However, since tea ms normally make many business decisions, the impact of GRA on group decisions should be examined in the future. While spatial reasoning has been measured us ing cognitive tests (Speier & Morris, 2003) and psy chometric test (Jarupathirun & Zahedi, 2001, 20 07), this research utilizes psychometric measures to estimate GRA by measuring self perceived traits. These self perceived traits are referred to as substrata in this paper. Cognitive tests are often used in laboratory experiments where activities can be m ore easily timed and controlled, while psychometric tests can often be administered in a variety of settings As one of the goals of this research was to develop an easy to administer measure of GRA, psychometric measures were utilized. Thus, in the conte xt of this research, the proposed GRA construct specifies a type of cognition that applies to an individual person, and is measured utilizing self perceived sub strata.


Factor Three: Specification of the Conceptual Theme

The next factor of the construct conceptualization step, as suggested by MacKenzie et al. (2011), is the specification of the conceptual theme. We propose that the GRA construct specifies a cognitive ability that consists of multiple dimensions based on self-perception. While prior research has focused on specific psychometric components of spatial ability (e.g., Smelcer & Carmel, 1997; Swink & Speier, 1999), this study attempts to establish a model encompassing all known and measurable substrata of GRA as identified through a comprehensive literature review. MacKenzie et al. (2011) also suggest specifying whether a construct changes or remains stable over time. We suspect GRA to remain stable over time; however, specialized training or careers that utilize geospatial thinking may influence it. For example, it has been shown by Rafi et al. (2005) that spatial intelligence can be improved through specific training.

Factor Four: Definition of the Construct and Substrata

As suggested by MacKenzie et al. (2011), the final factor is to unambiguously define the construct. GRA is defined here as the ability to assess and utilize geospatial information in order to make sound business decisions. To measure this construct, the various substrata of GRA are identified and defined next. In order to identify and define the substrata of the GRA construct, a literature review of published works that specifically explored spatial and geospatial reasoning ability was conducted. During this process, scholarly articles were reviewed for key semantic content (e.g., spatial ability, spatial thinking, spatial reasoning, spatial skills, geospatial ability, geovisualization, maps, and geographic information systems) in order to identify the substrata of the GRA construct. To validate the results of this step, two academic and two industry experts were asked to review each of the proposed substrata and their definitions for agreement. The two industry experts, who work with and develop geovisualization systems, had over 20 years of combined experience. The academic experts perform research on behavioral/cognitive topics in the information systems scholarship. Each of these experts suggested only minor updates to the definitions and stated an overall agreement with the proposed substrata. The final substrata and definitions are presented in Table 7.

Table 7. Proposed Substrata of GRA.

Proposed Substrata | Definition | Developed From
Self-Perceived Geospatial Orientation and Navigation (SPGON) | The self-perceived ability to determine one's location and direction in geographic space. | Kozlowski & Bryant, 1977; Swink & Speier, 1999; Cherney, Brabec and Runco, 2008; etc.
Self-Perceived Geospatial Memorization and Recall (SPGMR) | The self-perceived ability to commit geographic concepts to memory and the ability to reconstruct these concepts. | Lei, Kao, Lin & Sun, 2009; Miyake, Friedman, Rettinger, Shah & Hegarty, 2001; etc.
Self-Perceived Geospatial Visualization (SPGV) | The self-perceived ability to form mental images of geographic space. | Eisenberg & McGinty, 1977; Velez, Silver & Tremaine, 2005; etc.

A second construct, which we have labeled Self-Perceived Geospatial Schematization (SPGS), has been identified in literature as important for decision making using geospatial data; yet, no research was found that specifically identified schematization ability as an indicator of spatial reasoning ability. However, several studies indicated its importance for communicating geospatial information (e.g., Klippel et al., 2005). The definition of SPGS is presented in Table 8. Given the potential of SPGS to be a substratum of GRA, it was tested as both an independent construct and a potential substratum of GRA.

Table 8. Definition of SPGS.

Construct | Definition | Developed From
Self-Perceived Geospatial Schematization (SPGS) | The self-perceived ability to reduce the complexity of geographic elements by converting them to a schema or outline. | Agrawala & Stolte, 2001; Klippel et al., 2005; etc.

Following the completion of the conceptualization step of the scale development procedure outlined by MacKenzie et al. (2011), the two-step process of measurement item generation is performed.

Measurement Item Generation (Step 2)

While existing measurement tools often explore spatial abilities, there are distinct differences between the general spatial abilities measured by these tools and spatial reasoning abilities in a geographic context. Furthermore, the existing spatial ability measurement tools that were reviewed only address some components of spatial ability, such as visualization, not the proposed spectrum of GRA. Additionally, many of these existing measurement tools require the subjects to complete physical or visual tasks, making the tests difficult to administer. Such measurement tools usually require subjects to be physically present in a laboratory setting. The goal of this research was to develop a set of measurement items that correctly measure the various dimensions of GRA and that are easy to administer. Given that measurement items for each of these substrata do not exist in literature, the author developed each item. The measurement items were developed utilizing semantic terminology derived from the substrata definitions and existing literature. The items were further refined through the opinions of the experts mentioned above. The following tables (Tables 9-11) present the initial measurement items of the proposed substrata of the GRA construct, which include self-perceived geospatial orientation and navigation (SPGON), self-perceived geospatial memorization and recall (SPGMR) and self-perceived geospatial visualization (SPGV). As SPGON reflects the self-perceived ability to determine one's location and direction in geographic space, the items in Table 9 are expected to measure the SPGON substratum.


Table 9. Initial Self-Perceived Geospatial Orientation and Navigation Measurement Items.

Item ID | Item
SPGON1 | In most circumstances, I feel that I could quickly determine where I am based on my surroundings.
SPGON2 | I can usually determine the cardinal directions by looking at the sky.
SPGON3 | I rarely get lost.
SPGON4 | Examining my surroundings allows me to easily orient myself.
SPGON5 | At any given point, I usually know where north, south, east and west lie.
SPGON6 | I find it difficult to orient myself in a new place.
SPGON7 | Knowing my current location is rarely important to me.
SPGON8 | I feel that I can easily orientate myself in a new place.
SPGON9 | Using a compass is easy.
SPGON10 | I rarely consider myself lost.
SPGON11 | I have a great sense of direction.
SPGON12 |
SPGON13 | Knowing my current location is essential to determine where I am going.
SPGON14 | When driving or walking home, I am likely to choose a route that I have never taken.
Author developed; source for concepts: Kozlowski & Bryant, 1977; Swink & Speier, 1999; Cherney et al., 2008; Meilinger & Knauff, 2008.

As SPGMR reflects the self-perceived ability to commit geographic concepts to memory and to reconstruct them, the SPGMR measurement items, shown in Table 10, are expected to measure the SPGMR substratum.


Table 10. Initial Self-Perceived Geospatial Memorization and Recall Measurement Items.

Item ID | Item
SPGMR1 | I am good at giving driving directions from memory.
SPGMR2 | I am good at giving walking directions from memory.
SPGMR3 | I can usually remember a new route after I have traveled it only once.
SPGMR4 | needed.
SPGMR5 |
SPGMR6 | remember how to get around.
SPGMR7 | After studying a map, I can often follow the route without needing to look back at the map.
SPGMR8 | When someone gives me good verbal directions, I can usually get to my destination without asking for additional directions.
SPGMR9 | After being shown a map, it would be easy for me to recreate a similar map from memory.
SPGMR10 | After seeing a city map once, I am usually able to commit key landmarks and their locations to memory.
SPGMR11 | It would be easier to memorize a tabular list of states, instead of a map showing the states.
SPGMR12 | It would be easier to memorize capital cities from a map, instead of a table.
SPGMR13 | The best way to memorize the layout of a college campus is to study a map.
SPGMR14 | I could draw an approximate outline of my home country from memory.
Author developed; source for concepts: Lei et al., 2009; Miyake et al., 2001.

As SPGV reflects the self-perceived ability to form mental images of geographic space, the SPGV measurement items, shown in Table 11, are expected to measure the SPGV substratum.


Table 11. Initial Self-Perceived Geospatial Visualization Measurement Items.

Item ID | Item
SPGV1 | It is easy for me to visualize a place I have visited.
SPGV2 | I find it difficult to visualize a place I have visited.
SPGV3 | I can visualize a place from information that is provided by a map without having been there.
SPGV4 | When someone describes a place I form a mental image of what it looks like.
SPGV5 | I can visualize geographic locations.
SPGV6 | When someone describes a place, I have a difficult time visualizing what it looks like.
SPGV7 | When viewing an aerial photograph, I often visualize what the area looks like on the ground.
SPGV8 | I can visualize a place from an aerial photograph.
SPGV9 | I can visualize a place from a verbal description.
SPGV10 | I can visualize a place from a map.
SPGV11 | I can visualize what a future building might look like on an empty lot.
SPGV12 | When viewing a map, I often visualize what the area looks like on the ground.
SPGV13 | While reading written walking directions, I often form a mental image of the walk.
SPGV14 | Generally I prefer to memorize the mental images of a walk or drive, versus the written directions.
Author developed; source for concepts: Eisenberg & McGinty, 1977; Velez et al., 2005.

Table 12 presents the initial measurement items of the proposed Self-Perceived Geospatial Schematization (SPGS) construct, the self-perceived ability to reduce the complexity of geographic elements by converting them to a schema or outline.


Table 12. Initial Self-Perceived Geospatial Schematization Measurement Items.

Item ID | Item
SPGS1 | I prefer maps that display key information clearly, such as transit maps.
SPGS2 | I prefer maps that include full color aerial photography.
SPGS3 | I prefer simple, sketch-like maps.
SPGS4 | When looking at subway or transit maps, I can usually quickly find the routes I need to take in order to reach my destination.
SPGS5 | When giving directions it is easy for me to decide what is important enough to include and what to exclude.
SPGS6 | I prefer maps that only provide key information, even if they are not to scale.
SPGS7 | I prefer maps that only provide information necessary to accomplish my tasks.
SPGS8 | I prefer written walking directions that only include the most essential navigational elements.
SPGS9 | I prefer maps that show only the most essential information.
SPGS10 | I am better at interpreting maps that only provide necessary information.
SPGS11 | It is easy for me to ignore irrelevant information on a map and to focus only on necessary information.
SPGS12 | I prefer verbal driving directions that only include the most essential information to reach my destination.
SPGS13 | I like simple, clear maps.
SPGS14 | I prefer highly detailed maps that show more than just the basic information.
Author developed; source for concepts: Agrawala & Stolte, 2001; Klippel et al., 2005.

Content Validity (Step 3)

Following the initial generation of items, MacKenzie et al. (2011) recommend establishing content validity. In order to establish content validity, a categorization and prioritization exercise was conducted. Ten business decision makers, who have worked as functional or project managers for ten years or more, were selected to perform this exercise. Due to their professional experience, these business decision makers were ideal candidates for establishing content validity. Each of the business decision makers was given five envelopes, each stating the name and definition of the substrata, as well as an envelope for items that did not fit the substrata. Additionally, the participants were given index cards with each prospective measurement item and asked to both sort and categorize the items into the appropriate envelope. Based on the results of this exercise, measurement items that had a majority agreement in categorization were retained and ordered based on the averaged ranks. The items resulting from this exercise were combined into a single instrument, and a 7-point Likert scale (Likert, 1932) was added to measure agreement with each item. See Tables 13-16 for results of the categorization and prioritization exercise.

Table 13. Content Validity Results of Geospatial Orientation and Navigation.

Initial Item Sort Results | New Item Rank | Rank Results
SPGS1 | SPGON3
SPGS2 | SPGON11 | Removed
SPGS3 | SPGON7
SPGS4 | SPGON1
SPGS5 | SPGON2
SPGS6 | SPGON8
SPGS7 | Removed
SPGS8 | SPGON5
SPGS9 | SPGON9
SPGS10 | SPGON6
SPGS11 | SPGON4
SPGS12 | SPGON10
SPGS13 | Removed
SPGS14 | Removed


65 Table 14 Content Validity Results of Geospatial Memorization and Recall. Initial Item Sort Results New Item Rank Rank Results SP GMR 1 SPGMR2 SPGMR 2 SPGMR4 SPGMR 3 SPGMR1 SPGMR 4 SPGMR8 SPGMR 5 SPGMR6 SPGMR 6 SPGMR9 SPGMR 7 SPGMR3 SPGMR 8 SPGMR5 SPGMR 9 SPGMR7 SPGMR 10 SPGMR10 SPGMR 11 SPGMR13 Removed SPGMR 12 SPGMR12 Removed SPGMR 13 SPGMR11 Removed SPGMR 14 Removed


66 Table 15 Content Validity Results of Geospatial Visualization. Initial Item Sort Results New Item Rank Rank Results SPGV1 SPGV4 SPGV2 SPGV7 SPGV3 SPGV2 SPGV4 SPGV6 SPGV5 SPGV1 SPGV6 SPGV13 Removed SPGV7 SPGV12 Removed SPGV8 SPGV8 SPGV9 SPGV5 SPGV10 SPGV3 SPGV11 SPGV14 Removed SPGV12 SPGV11 Removed SPGV13 SPGV10 SPGV14 SPGV9 Table 16 Content Validity Results of Geospatial Schematization. Initial Item Sort Results New Item Rank Rank Results SPGS1 SPGS7 SPGS2 SPGS11 Removed SPGS3 SPGS1 SPGS4 Removed SPGS5 SPGS6 SPGS6 SPGS10 SPGS7 SPGS9 SPGS8 SPGS8 SPGS9 SPGS4 SPGS10 SPGS12 Removed SPGS11 SPGS2 SPGS12 Removed SPGS13 SPGS5 SPGS14 SPGS3
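The retain-and-rank results in Tables 13-16 follow a simple tally rule: an item is kept only when a majority of sorters placed it in its intended category, and kept items are ordered by their average assigned rank. The sketch below shows that rule on hypothetical judge responses; the data, dictionary layout and function name are illustrative assumptions, not the study's actual records.

```python
# Minimal sketch (hypothetical responses): tally a card-sorting exercise.
# Items with majority agreement on the intended category are retained and
# then ordered by average rank, mirroring the procedure described above.
from collections import Counter
from statistics import mean

# judge -> {item: (category, rank)}  -- illustrative toy data only
sorts = {
    "judge1": {"SPGON1": ("SPGON", 2), "SPGON6": ("SPGON", 9), "SPGON7": ("NONE", 14)},
    "judge2": {"SPGON1": ("SPGON", 1), "SPGON6": ("SPGON", 8), "SPGON7": ("SPGON", 13)},
    "judge3": {"SPGON1": ("SPGON", 3), "SPGON6": ("SPGV", 10), "SPGON7": ("NONE", 12)},
}

def content_validity(sorts, intended_category):
    items = {item for judge in sorts.values() for item in judge}
    results = {}
    for item in items:
        cats = [judge[item][0] for judge in sorts.values() if item in judge]
        ranks = [judge[item][1] for judge in sorts.values() if item in judge]
        majority_cat, votes = Counter(cats).most_common(1)[0]
        retained = majority_cat == intended_category and votes > len(cats) / 2
        results[item] = {"retained": retained, "avg_rank": mean(ranks)}
    # retained items, best (lowest) average rank first
    kept = sorted((i for i, r in results.items() if r["retained"]),
                  key=lambda i: results[i]["avg_rank"])
    return kept, results

kept, detail = content_validity(sorts, "SPGON")
print(kept)    # ['SPGON1', 'SPGON6'] for this toy data; SPGON7 is removed
```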


67 Measurement Model Specification (Step 4) The fourth step in the scale development procedure recommended by MacKenzie et al. (2011) is to formally specify the measurement model. In this case, the relationship between each proposed substratum and the GRA construct are visually and formally presented. Prior research has demonstrated that orientation and navigation are indicators of spatial ability. For example, Kozlowski and Bryant (1977) revealed th at perceptions of an Speier (1999) discovered a relationship between spatial orientation and decision performance. Furthermore, Cherney et al. (2008) reported that spatial task performance might be influenced by self perceptions of navigation ability. As spatial orientation and navigation appear to be indicator s of spatial ability within a geospatial context, we propose that: Proposition P1: Self perceived geospatia l orientation and navigation ability (SPGON) forms geospatial reasoning ability (GRA). Additionally, research has demonstrated that memorization and recall are indicators of spatial ability. In their study, Lei et al. (2009), discovered that subjects who w ere familiar with specific landmark s were more likely to locate these landmarks using a geospatial tool than with landmarks unfamiliar to them. Furthermore, Miyake et al. (2001) discovered that the ability to memorize visiospatial concepts might influence spatial ability. Additionally, subjects who had to retrace directions from a map reported that during memorization they first translated the map into walking directi ons (Meilinger & Knauff, 2008). Existing spatial tests include a visual memory test, which asks

PAGE 82

68 participants to memorize an array of shapes on a page, and then identify these shape s from recall (Velez et al. 2005). Thus, we propose: Proposition P2: Self perceived geospatial memorization and recall ability (SPGMR) forms geospatial reasoning abil ity (GRA). Furthermore, research has demonstrated that visualization ability is an indicator of spatial ability. For example, Eisenberg and McGinty (1977) utilized a spatial visualization test to measure spatial reasoning ability. Additionally, Velez et a l. (2005) discovered that spatial ability is correlated to the ability of 3 dimensional visualization. Thus, we propose: Proposition P3: Self perceived geospatial visualization ability (SPGV) forms geospatial reasoning ability (GRA) Final ly, researchers s uggest that reducing the amount of information presented in geovisualization to only include essential information can improve decision ma king performance The benefits of such simplified maps is demonstrated by Agrawala and Stolte (2001) who collected fee dback from over 2,000 users of a technology that emulates hand drawn driving directions, which often emphasize essential information while eliminating nonessential details. Additionally, Klippel et al. (2005) suggest that modern cartographers can succe ssfu lly develop schematic maps that are simplified, yet sufficient representations. In their study, Tversky and Lee (1998) asked students to provide both written directions and a route map, of which 86% were able to provide sufficient written directions and 1 00% were able to create sufficient route maps. Furthermore, Meilinger and Knauff (2008) suggest ed that a reason subjects have difficu lty regaining their orientation, once lost, is that highly schematized information that does not contain

PAGE 83

69 enough information for immediate orientation. No research was identified that specifically recognized schematization ability as an indicator of reasoning ability, however several studies indicated its importance for communicating geospatial information (e.g. Klippel et al. 2005). As the relationship between SPGS and spatial reasoning has not been established in literature, this construct w as t ested as a substratum of GRA, as well as an independent construct. As the ability to effectively interpret schematized geospatial in formation may influence GRA, we propose: Proposition P4: Self perceived geospatial schematization ability (SPGS) forms geospatial reasoning ability (GRA). As the C FT suggests that certain present ation modes facilitate decision making, based on an individua 1991; Speier, 2006), the substrata presented above w ere essential to fully determine the effects of geovisuali zation on decision performance. Reflective and Formative Constructs While self perception measurement items have commonly been used as reflective measures of first order constructs (e.g. Davis, 1989), there is a continuing debate concerning the use of reflective or formative measurements for second order multidimensional constructs, such as GRA (e.g. Vlachos & The otokis, 2009; Polites, Roberts & Thatcher, 2012). Furthermore, Vlachos & Theotokis have discovered that incorrectly specifying a second differing research conclusions. In their criteria for distinguishin g between reflective and formative indicator models, MacKenzie, Podsakoff and Jarvis (2005) and MacKenzie et al. (2011) suggest

PAGE 84

70 that constructs should be specified as reflective when (1) indicators are manifestations of the construct, (2) changes in the i ndicator should not cause changes in the construct, (3) changes in the construct do cause changes in the indicators, (4) dropping an indicator should not alter the conceptual domain of the construct, (5) indicators are viewed as affected by the same underl ying construct and are parallel measures that co vary, and (6) indicators are required to have the same antecedents and consequences and to have a high internal consistency and reliability. Based on this definition of a reflective construct, we propose tha t the first order constructs be modeled as reflective. MacKenzie et al. (2011) further suggest that a construct should be specified as formative when (1) the indicators are defining characteristics of the construct, (2) changes in the indicators should cause changes in the construct, (3) changes in the construct do not cause changes in the indicator, (4) d ropping an indicator may alter the conceptual domain of the construct, (5) it is not necessary for indicators to co vary with each other, and (6) indicators are not required to have the same antecedents and consequences, nor have high internal consistency or reliability. Based on this definition of a formative construct, we propose that the second order construct be modeled as formative Thus, subsequently the measurement model is referred to as a first order reflective second order formative model. Vi sua l Representation of Measurement Model The measurement model is presented visually in Figure 3 demonstrating the ten proposed measurement items for each first order factor in addition to the propositions that SPGON, SPGRM and SPGV are indicators of the GRA construct. Note that SPGS

has been included as a potential substratum of GRA in this model. While no previous research has utilized schematization as an indicator of spatial ability, there is evidence that SPGS could contribute to GRA. Thus, the authors felt that its inclusion was essential to test its relationship to GRA.

Figure 3. First-Order Reflective, Second-Order Formative Measurement Model including Propositions.

Mathematical Notation of Measurement Model

Each substratum is considered to be a latent (i.e., unobservable) variable, measured using ten manifest (i.e., observable) variables. Thus, the relationship between each measurement item and the substrata can be formally expressed as:

x_1i = λ_1i·X_1 + ε_1i
x_2i = λ_2i·X_2 + ε_2i
x_3i = λ_3i·X_3 + ε_3i
x_4i = λ_4i·X_4 + ε_4i,   i = 1, ..., 10

In these expressions, x_1i through x_4i represent the measurement items for each substratum, λ_1i through λ_4i represent the effect of X_1 through X_4 on the corresponding items, and ε_1i through ε_4i represent the measurement error for each indicator. Furthermore, the GRA construct is a second-order formative construct defined by the substrata. Thus, this relationship can be formally expressed as:

GRA = γ_1·X_1 + γ_2·X_2 + γ_3·X_3 + γ_4·X_4 + ζ

In this expression, γ_1 through γ_4 represent the weights associated with X_1 through X_4, and ζ represents the common error term.

Data Collection (Step 5)

The next step of the scale development procedure recommended by MacKenzie et al. (2011) was to perform the data collection. As this study performed two distinct pretests, two data collection methods are presented.

Pretest A

Pretest A consisted of a small-sample pretest, allowing the researchers to test reliability and to gain initial feedback on the measurement items. For this pretest, 33 student subjects from an urban research university were asked to complete an online survey and provide feedback.

Pretest B

In an attempt to reduce response bias, a variety of methods were used to recruit subjects for Pretest B. These included contacting subjects in person, as well as through social networks and e-mail. Because a variety of solicitation methods were used, a meaningful response rate cannot be reported. For a traditional factor analysis, Hair, Black, Babin and Anderson (2010) suggest a five-to-one ratio of the number of observations to the number of variables; however, it is further suggested that a preferable sample would utilize a ten-to-one ratio. Given this, and that our study contained 40 initial variables, our goal was to utilize a sample size of at least 200, but preferably 400 subjects. Throughout a five-month period, 624 responses were collected. Of these, 24 incomplete observations were removed, retaining 600 responses. This number meets the minimum requirements for a traditional factor analysis and exceeds the minimum number necessary for a successful Partial Least Squares analysis. Using Microsoft Excel 2010, random numbers were generated for each row and the data were sorted based on this column. The sample was then split: 300 responses were retained for a validation study, while the other 300 responses were used for the Pretest B analysis.

Scale Purification and Refinement (Step 6)

The next step recommended by MacKenzie et al. (2011) is to purify and refine the scale. This step was accomplished by performing two pretests.

Pretest A

For Pretest A, a small-sample pretest of the research instrument was conducted. For this pretest, 33 subjects were asked to complete an online survey, measuring the results of

all 40 substrata measurement items. Additionally, the subjects were given an opportunity to provide input via open-ended questions designed to elicit feedback on the actual measurement items. The open-ended question responses were reviewed for additional opportunities for scale refinement. An item analysis performed using IBM SPSS revealed acceptable Cronbach's alpha values (as shown in Table 17) for each substratum and for SPGS (Cronbach, 1951; Gliem & Gliem, 2003). Furthermore, the open-ended feedback demonstrated that the questions were clear and interesting to the participants. This was crucial, as one of the goals identified for the measurement scale was that it could successfully determine GRA in subjects who do not necessarily have a familiarity with geographic concepts.

Table 17. Pretest A Reliability Statistics.
Substrata   Cronbach's Alpha   No. of Items   No. of Subjects
SPGON       .830               10             33
SPGMR       .846               10             33
SPGV        .835               10             33
SPGS        .735               10             33

Pretest B

Based on the success of Pretest A, an exploratory data analysis using Partial Least Squares (PLS) Structural Equation Modeling (SEM) was performed. Given that the selected analytical methodology can influence the results, great care was taken to ensure the most appropriate statistical analysis method. Polites et al. (2012) determined that between the years 2000 and 2009, 84% of studies testing aggregate constructs utilized PLS, indicating that the use of PLS in IS research has become widely accepted. Furthermore, Ringle, Götz, Wetzels and Wilson (2009)

suggested the use of PLS path modeling over maximum-likelihood covariance-based structural equation modeling for tests of second-order constructs when methodological requirements are not met. As the primary goal of this study was to define a new construct on an untested measurement model, PLS path modeling was utilized to ensure consistent results. As such, SmartPLS (Ringle, Wende, & Will, 2005) was utilized for subsequent measurement model testing. The results of this analysis helped establish construct validity and provided an opportunity for refinement of the measurement scale. The first goal of Pretest B was to identify a parsimonious set of variables; the second was to reduce the number of items significantly, retaining only the most representative items for each construct. The pretest procedure resulted in a reduction in measurement items from 40 to 12. Additionally, the pretest revealed that SPGS was not a valid indicator of GRA. The Pretest B sample population (n = 300) included demographic variables measuring age, gender, education, profession and cultural background. The results of these demographic items are presented in Table 18.
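Before turning to the Pretest B demographics in Table 18, note that the item analysis behind Table 17 rests on a single closed-form statistic, Cronbach's alpha, which is straightforward to reproduce outside of SPSS. The sketch below assumes the ten SPGON item responses sit in the columns of a pandas DataFrame; the data are synthetic placeholders rather than the actual Pretest A responses.

# Cronbach's alpha (Cronbach, 1951) for one substratum's items.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=33)                      # shared trait driving the items
spgon_items = pd.DataFrame(
    {f"SPGON{i}": latent + rng.normal(scale=0.8, size=33) for i in range(1, 11)}
)
print(round(cronbach_alpha(spgon_items), 3))      # compare with the .830 reported in Table 17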

Table 18. Pretest B Descriptive Statistics of Demographic Variables.
Question     Variables                  Percentage
Age          18-25                      30.7
             26-35                      32.3
             36-45                      12.0
             45-55                      12.3
             56-65                      9.0
             66+                        3.7
Gender       Female                     60.0
             Male                       40.0
Education    2-Year/Associate Degree    24.7
             4-Year/Bachelor Degree     30.7
             Doctor/JD/PhD              2.7
             Elementary/Middle School   0.7
             High School                30.7
             Master Degree              10.7
Profession   Business Professional      43.7
             Student                    37.7
             Geospatial Professional    0.7
             Other                      18.0
Culture      African                    3.7
             Asian                      8.0
             Australian                 0.3
             European                   38.7
             Middle Eastern             4.3
             North American             43.7
             South American             1.3

The first analysis of Pretest B involved a PLS factor analysis using SmartPLS (Ringle et al., 2005). For this analysis, the measurement items of each first-order substratum were modeled as reflective. The first step was to test the full research model, which included SPGS as a substratum of GRA. This model revealed that the path

coefficient between SPGS and GRA was 0.047, while the remaining path coefficients ranged from 0.355 to 0.376 (see Figure 4). This agreed with prior research, which had not included schematization as an indicator of spatial reasoning. However, as there is evidence that SPGS could contribute to GRA in some way (Agrawala & Stolte, 2001; Klippel et al., 2005), it was further refined using an IBM SPSS Statistics 19 factor analysis. Another reason to continue the development of the SPGS construct is that it is similar to the GRA measure, so it can be used to establish discriminant validity in future testing.

Figure 4. Pretest B, Path Analysis including SPGS.

As prior research has shown that an equal number of indicator variables is preferred for first-order factors using multiple indicators, only the four highest-loading items were retained. Figure 5 presents the path analysis findings and Table 19 presents cross-loadings for each of the substrata.

Figure 5. Pretest B, Path Analysis including SPGS.

Table 19. PLS Factor Analysis Loadings (Factor Loading by Substratum and GRA).
Item      SPGMR    SPGON    SPGV     GRA
SPGMR1    0.8045   0.6625   0.5896   0.748
SPGMR2    0.8043   0.6242   0.556    0.7233
SPGMR3    0.8233   0.7056   0.6486   0.7896
SPGMR4    0.8473   0.7      0.6772   0.8055
SPGMR5    0.7597   0.6439   0.5544   0.7126
SPGMR6    0.5875   0.4979   0.4109   0.5457
SPGMR7    0.776    0.6584   0.7327   0.778
SPGMR8    0.5193   0.4697   0.3074   0.477
SPGMR9    0.7306   0.6572   0.5596   0.7078
SPGMR10   0.7485   0.6817   0.6481   0.7507
SPGON1    0.551    0.6938   0.5459   0.6465
SPGON2    0.6474   0.7882   0.5283   0.7155
SPGON3    0.6569   0.8212   0.6022   0.754
SPGON4    0.757    0.8682   0.601    0.8112
SPGON5    0.7406   0.8392   0.6156   0.798
SPGON6    0.6611   0.7449   0.5025   0.6967
SPGON7    0.4304   0.5098   0.3377   0.4662
SPGON8    0.4832   0.616    0.5218   0.5827
SPGON9    0.539    0.6179   0.5253   0.6066
SPGON10   0.7298   0.8104   0.5752   0.7707
SPGV1     0.6616   0.6594   0.791    0.7505
SPGV2     0.7018   0.6204   0.7772   0.7474
SPGV3     0.675    0.6072   0.7939   0.7367
SPGV4     0.5709   0.5281   0.697    0.6363
SPGV5     0.5091   0.462    0.698    0.5869
SPGV6     0.3526   0.337    0.5861   0.4437
SPGV7     0.2337   0.2504   0.2978   0.2773
SPGV8     0.4771   0.4669   0.6959   0.5754
SPGV9     0.3752   0.3499   0.558    0.4492
SPGV10    0.4642   0.4675   0.7024   0.5725
Highest item loadings shown in bold.
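The item-reduction step described above (retaining the four highest-loading items per substratum) is mechanical once a loadings table such as Table 19 is available. A minimal sketch follows; the small DataFrame holds only the first six SPGMR rows of Table 19 for brevity, and the function name is ours.

# Retain the k highest-loading items per substratum from a loadings table.
import pandas as pd

loadings = pd.DataFrame(
    {
        "SPGMR": [0.8045, 0.8043, 0.8233, 0.8473, 0.7597, 0.5875],
        "SPGON": [0.6625, 0.6242, 0.7056, 0.7000, 0.6439, 0.4979],
        "SPGV":  [0.5896, 0.5560, 0.6486, 0.6772, 0.5544, 0.4109],
    },
    index=["SPGMR1", "SPGMR2", "SPGMR3", "SPGMR4", "SPGMR5", "SPGMR6"],
)

def top_items(loadings: pd.DataFrame, substratum: str, prefix: str, k: int = 4) -> list:
    """Items belonging to a substratum, ranked by their loading on that substratum."""
    own = loadings.loc[loadings.index.str.startswith(prefix), substratum]
    return own.sort_values(ascending=False).head(k).index.tolist()

# The four SPGMR items retained in Table 23 fall out directly:
print(top_items(loadings, "SPGMR", "SPGMR"))  # ['SPGMR4', 'SPGMR3', 'SPGMR1', 'SPGMR2']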

The latent variable correlation matrix, shown in Table 20, presents the correlations between GRA and its substrata. Each substratum correlates with GRA between 0.8852 and 0.9528, while there is also a high correlation between each of the substrata.

Table 20. Latent Variable Correlation.
        SPGMR    SPGON    SPGV     GRA
SPGMR   1
SPGON   0.8499   1
SPGV    0.776    0.7298   1
GRA     0.9528   0.9369   0.8852   1

As demonstrated in Table 21, the path coefficients are relatively equal and range between 0.299 and 0.396. Relatively equal path coefficients are a suggested requirement of research models using repeating indicators (Beemer & Gregg, 2010). Additionally, all first-order constructs achieve t-statistic values greater than 1.96, demonstrating convergent validity (Gefen & Straub, 2005). Furthermore, a comparison of the first-order latent variable correlations reveals that all are below 0.9, indicating no common method bias (Bagozzi, Yi, & Phillips, 1991).

Table 21. Path Coefficients.
               Original Sample   Sample Mean   Standard Deviation   Standard Error   T Statistics
SPGMR -> GRA   0.396             0.395         0.0129               0.0129           30.7691
SPGON -> GRA   0.382             0.3813        0.0112               0.0112           34.1738
SPGV -> GRA    0.299             0.2995        0.0143               0.0143           20.9492

Next, construct reliability and validity were established. The Average Variance Extracted (AVE) for a first-order construct should be greater than 0.50 (Fornell & Larcker, 1981), which SPGMR and SPGON achieve, thus supporting construct validity. However, SPGV only reached an AVE of 0.4554, which is a concern. Once non-essential

or incorrectly loading items are removed, this value may improve and will need to be re-evaluated. For second-order constructs with formative indicators, such as GRA, an R² value greater than 0.5 can indicate construct validity (Diamantopoulos, Riefler, & Roth, 2008; MacKenzie et al., 2011). In the measurement model, the path analysis revealed an R² value of 1.0, as all indicator variables also defined the second-order construct. The composite reliability scores were at least 0.8888 for each of the substrata, which meant that reliability, as defined by Cronbach (1951) and Fornell and Larcker (1981) for reflective, first-order constructs, was achieved. Gliem and Gliem (2003) suggest a minimum alpha of 0.8 as a reasonable goal, which each of the tested substrata exceeded. Per MacKenzie et al. (2011), as the measurement model does not predict correlation of the sub-dimensions for the second-order, formative construct of GRA, reliability is not relevant for this construct. See Table 22 for more detail.

Table 22. Construct Reliability.
        AVE      Composite Reliability   R Square   Alpha    Communality   Redundancy
GRA     0.4468   0.9588                  1          0.9546   0.4468        0.2671
SPGMR   0.5578   0.9253                  0          0.909    0.5578        0
SPGON   0.5468   0.9218                  0          0.9037   0.5468        0
SPGV    0.4554   0.8888                  0          0.86     0.4554        0

Next, items that did not load highly on their corresponding substrata were removed. As indicated by prior research, an equal number of indicators should be utilized for all first-order constructs; therefore, only the four highest-loading measurement items were retained. Care was taken to ensure that an equal number of measurement items remained for each substratum, as this would ensure the integrity of the model. The item reduction

for SPGS occurred using a traditional factor analysis in IBM SPSS Statistics 19, and five items were retained.

Pretest B Reduced Items

Once the items were reduced, an additional path analysis of Pretest B was performed using SmartPLS (Ringle et al., 2005). Again, the measurement items of each first-order substratum were modeled as reflective, while the second-order construct of GRA was modeled using the hierarchical component model. This model, along with the updated path coefficients, is presented in Figure 6. The reduced items for each substratum are presented in Table 23, along with factor loadings.

Figure 6. Pretest B, Path Analysis.
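The path coefficients and t-statistics reported in the following tables are produced by PLS bootstrapping. The sketch below illustrates only the general idea of a bootstrap t-statistic (the original coefficient divided by the standard deviation of its bootstrap distribution); the unit-weighted composites and standardized OLS betas are simplifying stand-ins for the SmartPLS estimation, and the data are synthetic.

# Illustrative bootstrap t-statistics for inner-model path coefficients.
import numpy as np
import pandas as pd

def standardized_betas(df, outcome, predictors):
    """Standardized regression coefficients of the outcome on the predictors."""
    z = (df - df.mean()) / df.std(ddof=0)
    X = np.column_stack([np.ones(len(z))] + [z[p].to_numpy() for p in predictors])
    coef, *_ = np.linalg.lstsq(X, z[outcome].to_numpy(), rcond=None)
    return dict(zip(predictors, coef[1:]))

def bootstrap_t(df, outcome, predictors, n_boot=500):
    """t = original coefficient / bootstrap standard deviation (|t| > 1.96 per Gefen & Straub, 2005)."""
    original = standardized_betas(df, outcome, predictors)
    draws = {p: [] for p in predictors}
    for _ in range(n_boot):
        boot = df.sample(len(df), replace=True)
        betas = standardized_betas(boot, outcome, predictors)
        for p in predictors:
            draws[p].append(betas[p])
    return {p: original[p] / np.std(draws[p], ddof=1) for p in predictors}

# Hypothetical composite scores for 300 respondents.
rng = np.random.default_rng(7)
base = rng.normal(size=300)
scores = pd.DataFrame({
    "SPGMR": base + rng.normal(scale=0.6, size=300),
    "SPGON": base + rng.normal(scale=0.6, size=300),
    "SPGV":  base + rng.normal(scale=0.6, size=300),
})
scores["GRA"] = scores.mean(axis=1)  # repeated-indicators style composite

print(bootstrap_t(scores, "GRA", ["SPGMR", "SPGON", "SPGV"]))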

Table 23. PLS Factor Analysis Loadings (Factor Loading by Substratum and GRA).
Item      SPGMR    SPGON    SPGV     GRA
SPGMR1    0.8492   0.6507   0.5601   0.7658
SPGMR2    0.876    0.6396   0.5372   0.7639
SPGMR3    0.8443   0.6756   0.6837   0.8155
SPGMR4    0.8766   0.6685   0.6396   0.81
SPGON3    0.6287   0.8439   0.5732   0.7619
SPGON4    0.6893   0.9201   0.6166   0.8293
SPGON5    0.6722   0.8772   0.6332   0.8116
SPGON10   0.6835   0.8539   0.5812   0.7892
SPGV1     0.6167   0.63     0.8509   0.7678
SPGV2     0.6579   0.6128   0.9018   0.7939
SPGV3     0.6291   0.6059   0.9076   0.7822
SPGV10    0.4078   0.4147   0.6902   0.5332

The latent variable correlation matrix, shown in Table 24, presents the correlations between GRA and each of its substrata after the items for each substratum were reduced. Each substratum correlates with GRA between 0.8742 and 0.9165, while there is also a high correlation between each of the substrata.

Table 24. PLS Latent Variable Correlation.
        SPGMR    SPGON    SPGV     GRA
SPGMR   1
SPGON   0.765    1
SPGV    0.7045   0.688    1
GRA     0.9165   0.9133   0.8742   1

After reducing the number of items from ten to four for each substratum, the path coefficients, as shown in Table 25, continue to be relatively equal and range between 0.3382 and 0.3901. As such relative similarity was a suggested requirement of research models using repeating indicators (Beemer & Gregg, 2010), we deem the results

acceptable. Additionally, convergent validity can be demonstrated when t-statistic values are greater than 1.96 (Gefen & Straub, 2005), which all of the first-order constructs achieve. To ensure that there was no systematic influence biasing the data, two common method bias tests were performed. The first involved the addition of a theoretically dissimilar marker construct to which each of the substrata was compared. Using this method, the greatest squared variance revealed only a 2.87% shared variance, below the 3% suggested threshold (Lindell & Whitney, 2001). Furthermore, a comparison of the first-order latent variable correlations reveals that all are below 0.9, indicating no common method bias (Bagozzi et al., 1991).

Table 25. Path Coefficients.
               Original Sample   Sample Mean   Standard Deviation   Standard Error   T Statistics
SPGMR -> GRA   0.3798            0.3795        0.0114               0.0114           33.3878
SPGON -> GRA   0.3901            0.3899        0.0096               0.0096           40.6855
SPGV -> GRA    0.3382            0.3387        0.011                0.011            30.8816

Next, construct reliability and validity were further established. The Average Variance Extracted (AVE) for a first-order construct should be greater than 0.5 (Fornell & Larcker, 1981), which all substrata achieve. The concern encountered when analyzing the AVE of the substrata containing all measurement items in Pretest B was resolved through the item reduction. For second-order constructs with formative indicators, such as GRA, an R² value greater than 0.5 indicates construct validity (Diamantopoulos et al., 2008; MacKenzie et al., 2011). In the measurement model, the path analysis revealed an R² value of 1.0, as all indicator variables defined the second-order construct. The composite reliability scores were at least 0.8986 for each of the substrata, which meant that reliability, as defined by Cronbach (1951) and Fornell and Larcker (1981) for reflective,

first-order constructs, was achieved. Gliem and Gliem (2003) suggest a minimum alpha of 0.8 as a reasonable goal, which each of the tested substrata exceeded. Per MacKenzie et al. (2011), as the measurement model does not predict correlation of the sub-dimensions for the second-order, formative construct of GRA, reliability is not relevant for this construct. See Table 26 for more detail.

Table 26. Construct Reliability.
        AVE      Composite Reliability   R Square   Alpha    Communality   Redundancy
GRA     0.5964   0.9461                  1          0.9372   0.5964        0.3291
SPGMR   0.7425   0.9202                  0          0.8843   0.7425        0
SPGON   0.7643   0.9283                  0          0.8968   0.7643        0
SPGV    0.6927   0.8986                  0          0.8465   0.6927        0

Finally, discriminant and convergent validity were established. Convergent validity is demonstrated as all items that should load on GRA did so at greater than 0.6, except for SPGV10, which loaded slightly below 0.6 at 0.5779. Particular attention should be given to this item in future uses of this measurement scale to ensure continued validity. Additionally, all items that should load on SPGS, which is now treated as an independent construct, did so at greater than 0.6. Furthermore, discriminant validity was further established as GRA substrata items that, based on theory, should not load on SPGS only do so at less than 0.3. See Table 27 for more detail.

Table 27. PLS Factor Loading and Cross-Loading.
                 GRA      SPGS
GRA1 (SPGMR1)    0.8038   0.2268
GRA2 (SPGMR2)    0.7827   0.1875
GRA3 (SPGMR3)    0.7878   0.0983
GRA4 (SPGMR4)    0.8091   0.1407
GRA5 (SPGON3)    0.7864   0.1957
GRA6 (SPGON4)    0.8204   0.1299
GRA7 (SPGON5)    0.8004   0.1253
GRA8 (SPGON10)   0.7948   0.1656
GRA9 (SPGV1)     0.7454   0.1167
GRA10 (SPGV2)    0.7351   0.0574
GRA11 (SPGV3)    0.7263   0.0829
GRA12 (SPGV10)   0.5779   0.1938
SPGS4            0.2026   0.8631
SPGS5            0.1982   0.7975
SPGS8            0.1578   0.7485
SPGS9            0.1258   0.7748
SPGS10           0.0848   0.7398

Using the two-step procedure outlined by Gefen and Straub (2005), discriminant validity was further established by comparing inter-construct correlations with the square root of the AVE of each construct. The square root of the AVE should exceed the inter-construct correlations; that is, discriminant validity can be inferred when the variance each construct shares with its own indicators is larger than the variance it shares with any other construct. Furthermore, Fornell and Larcker (1981) suggest that all AVEs should exceed a 0.50 threshold, which occurs in this analysis. Based on the results of this analysis, as seen in Table 28, the square root of the AVE for each substratum is larger than any correlation.
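Before turning to Table 28: the reliability and validity statistics used here have standard closed-form definitions, so they can be checked directly from the reported numbers. The sketch below uses the four retained SPGON loadings from Table 23 and the correlation and AVE values from Tables 24 and 26; the AVE and composite-reliability formulas follow Fornell and Larcker (1981), the function names are ours, and this is an illustration of the formulas rather than an excerpt of the SmartPLS code.

# AVE, composite reliability, and the Fornell-Larcker comparison.
import numpy as np
import pandas as pd

def ave(loadings) -> float:
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2)))

def fornell_larcker_ok(ave_by_construct: dict, corr: pd.DataFrame) -> bool:
    """sqrt(AVE) of every construct must exceed its correlations with all other constructs."""
    for c in corr.columns:
        if not (np.sqrt(ave_by_construct[c]) > corr[c].drop(c).abs()).all():
            return False
    return True

spgon_loadings = [0.8439, 0.9201, 0.8772, 0.8539]       # Table 23
print(round(ave(spgon_loadings), 4))                     # ~0.7643, cf. Table 26
print(round(composite_reliability(spgon_loadings), 4))   # ~0.928, cf. Table 26

corr = pd.DataFrame(                                     # Table 24
    [[1.0, 0.765, 0.7045], [0.765, 1.0, 0.688], [0.7045, 0.688, 1.0]],
    index=["SPGMR", "SPGON", "SPGV"], columns=["SPGMR", "SPGON", "SPGV"],
)
aves = {"SPGMR": 0.7425, "SPGON": 0.7643, "SPGV": 0.6927}  # Table 26
print(fornell_larcker_ok(aves, corr))                       # True, cf. Table 28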

Table 28. Inter-Construct Correlations and Square Root of AVE.
        SPGON    SPGMR    SPGV     Square Root of AVE
SPGON   -        0.765    0.688    0.8742
SPGMR   0.765    -        0.7045   0.8617
SPGV    0.688    0.7045   -        0.8323

The analyses performed allowed construct reliability and validity, discriminant validity, indicator reliability and convergent validity to be established. Next, the scale will be reassessed using a second data set.

Data Collection and Reexamination of Scale Properties (Step 7)

The seventh step, as recommended by MacKenzie et al. (2011), is to collect additional data and to use these data to reexamine the scale derived in the previous steps. The three hundred responses retained from the initial data collection effort were utilized for this step. The demographic information for the second data set is provided in Table 29.

Table 29. Study 1 Descriptive Statistics of Demographic Variables.
Question     Variables                  Percentage
Age          18-25                      32.7
             26-35                      32.0
             36-45                      12.0
             45-55                      12.7
             56-65                      6.7
             66+                        4.0
Gender       Female                     52.0
             Male                       48.0
Education    2-Year/Associate Degree    26.3
             4-Year/Bachelor Degree     39.3
             Doctor/JD/PhD              3.0
             Elementary/Middle School   1.0
             High School                30.3
             Master Degree              10.0
Profession   Business Professional      51.7
             Student                    33.3
             Geospatial Professional    0.7
             Other                      14.3
Culture      African                    3.7
             Asian                      13.7
             Australian                 0
             European                   33.0
             Middle Eastern             4.0
             North American             44.7
             South American             1.0

As with Pretest B, the Test data were analyzed using SmartPLS (Ringle et al., 2005). The path coefficients of the first-order constructs (substrata) on the second-order construct appeared to be relatively equal, an important consideration for a multi-dimensional second-order construct. Again, the measurement items of each first-order

substratum were modeled as reflective, while the second-order construct of GRA was modeled using the hierarchical component model. This model is presented in Figure 7, and the factor loadings are presented in Table 30.

Figure 7. Test, Path Analysis.

Table 30. PLS Factor Analysis Loadings and Weights (Factor Loading by Substratum and GRA).
Item      SPGMR    SPGON    SPGV     GRA
SPGMR1    0.8746   0.6908   0.5779   0.8106
SPGMR2    0.9186   0.6642   0.5629   0.8125
SPGMR3    0.8666   0.609    0.6846   0.8125
SPGMR4    0.8498   0.598    0.5576   0.7579
SPGON3    0.58     0.8684   0.5419   0.751
SPGON4    0.6798   0.9361   0.6128   0.8409
SPGON5    0.6621   0.8942   0.5275   0.7884
SPGON10   0.6405   0.8099   0.5577   0.7576
SPGV1     0.6551   0.6011   0.8575   0.7856
SPGV2     0.5753   0.5513   0.9077   0.7521
SPGV3     0.5554   0.5174   0.8977   0.7277
SPGV10    0.499    0.4794   0.7018   0.6234

The latent variable correlation matrix, shown in Table 31, demonstrates that each substratum correlates with GRA between 0.8586 and 0.9102, while there is also a high correlation (greater than 0.63) between each of the substrata.

Table 31. Latent Variable Correlation.
        SPGMR    SPGON    SPGV     GRA
SPGMR   1
SPGON   0.7304   1
SPGV    0.6794   0.6384   1
GRA     0.9102   0.8944   0.8586   1

The path coefficients, as shown in Table 32, continue to be relatively equal and range between 0.3437 and 0.3948 (they ranged from 0.3382 to 0.3901 in Pretest B). Relatively similar path coefficients were a suggested requirement of second-order formative research models using repeating indicators. Additionally, convergent validity is demonstrated when t-statistic values are greater than 1.96 (Gefen & Straub, 2005), which was exceeded by all first-order constructs. To ensure that there was no systematic

influence biasing the data, two common method bias tests were performed. The first involved the addition of a theoretically dissimilar marker construct to which each of the substrata was compared. Using this method, the greatest squared variance revealed only a 2.87% shared variance, below the 3% suggested threshold (Lindell & Whitney, 2001). Furthermore, a comparison of the first-order latent variable correlations reveals that all are below 0.9, indicating no common method bias (Bagozzi et al., 1991).

Table 32. Path Coefficients.
               Original Sample   Sample Mean   Standard Deviation   Standard Error   T Statistics
SPGMR -> GRA   0.3938            0.3948        0.013                0.013            30.2249
SPGON -> GRA   0.3874            0.3865        0.0131               0.0131           29.5689
SPGV -> GRA    0.3437            0.3438        0.0128               0.0128           26.8764

Next, construct reliability and validity, as well as convergent validity, were established. The AVE for a first-order construct should be greater than the 0.5 threshold (Fornell & Larcker, 1981), which all three substrata exceed with values greater than 0.7. For second-order constructs with formative indicators, such as GRA, an R² value greater than 0.5 indicates construct validity (Diamantopoulos et al., 2008; MacKenzie et al., 2011). In the measurement model, the path analysis revealed an R² value of 1.0, as all indicator variables defined the second-order construct. The composite reliability scores were greater than 0.9 for each of the substrata, which was an improvement over the pretest results. See Table 33 for the full results.
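Before the full reliability results in Table 33, note that the marker-variable check used above reduces to a simple squared-correlation comparison. A minimal sketch follows, assuming composite scores for the substrata and a theoretically dissimilar marker construct are available; the data and column names are synthetic.

# Correlation marker technique for common method bias (Lindell & Whitney, 2001).
import numpy as np
import pandas as pd

def max_shared_variance_with_marker(scores: pd.DataFrame, marker: str) -> float:
    """Largest squared correlation between the marker and any substantive construct."""
    corr = scores.corr()[marker].drop(marker)
    return float((corr ** 2).max())

rng = np.random.default_rng(3)
scores = pd.DataFrame(
    rng.normal(size=(300, 4)), columns=["SPGMR", "SPGON", "SPGV", "marker"]
)
shared = max_shared_variance_with_marker(scores, "marker")
print(f"max shared variance with marker: {shared:.2%}")  # compared against the 3% threshold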

Table 33. Construct Reliability.
        AVE      Composite Reliability   R Square   Alpha    Communality   Redundancy
GRA     0.5933   0.9457                  1          0.937    0.5933        0.3333
SPGMR   0.7705   0.9306                  0          0.9004   0.7705        0
SPGON   0.7715   0.9309                  0          0.9001   0.7715        0
SPGV    0.7144   0.9083                  0          0.8629   0.7144        0

Finally, discriminant and convergent validity were established. All items that should load on GRA did so at greater than 0.6, which helped establish convergent validity. SPGV10, which was a potential concern in the previous test, improved to a more acceptable 0.632. Additionally, all items that should load on SPGS did so at greater than 0.6. Furthermore, the SPGS items that should not load on GRA (based on the lack of such findings in existing literature) did so at less than 0.2, which was also an improvement over the pretest results and contributes to discriminant validity. See Table 34.

Table 34. PLS Factor Loading and Cross-Loading.
                 GRA      SPGS
GRA1 (SPGMR1)    0.8197   0.1321
GRA2 (SPGMR2)    0.8305   0.1064
GRA3 (SPGMR3)    0.8274   0.1778
GRA4 (SPGMR4)    0.7927   0.1762
GRA5 (SPGON3)    0.7539   0.1429
GRA6 (SPGON4)    0.8276   0.0747
GRA7 (SPGON5)    0.7903   0.1309
GRA8 (SPGON10)   0.7461   0.0967
GRA9 (SPGV1)     0.7726   0.12
GRA10 (SPGV2)    0.7071   0.0701
GRA11 (SPGV3)    0.677    0.0401
GRA12 (SPGV10)   0.632    0.1181
SPGS4            0.1462   0.7953
SPGS5            0.0597   0.6452
SPGS8            0.1482   0.745
SPGS9            0.097    0.8084
SPGS10           0.1355   0.7702

Using the two-step procedure outlined by Gefen and Straub (2005), discriminant validity was further established by comparing inter-construct correlations with the square root of the AVE of each construct. The square root of the AVE should exceed the inter-construct correlations; that is, discriminant validity can be inferred when the variance of each construct is larger than the variance shared with any other construct. Furthermore, Fornell and Larcker (1981) suggest that all AVEs should exceed a 0.50 threshold, which occurs in this analysis. Based on the results of this analysis, as seen in Table 35, the square root of the AVE for each substratum is larger than any correlation. However, this preliminary analysis should be confirmed in a future study by comparing correlations with a conceptually similar construct and ideally discovering a correlation of less than 0.71, as was suggested by MacKenzie et al. (2011).

Table 35. Inter-Construct Correlations and Square Root of AVE.
        SPGON    SPGMR    SPGV     Square Root of AVE
SPGON   -        0.7304   0.6384   0.8784
SPGMR   0.7304   -        0.6794   0.8778
SPGV    0.6384   0.6794   -        0.8452

Collinearity

Finally, due to the formative nature of the construct, an assessment of collinearity was performed. Specifically, each of the substrata measurement items was examined for tolerance and the variance inflation factor (VIF), the reciprocal of tolerance. This test was performed using IBM SPSS Statistics 19 and revealed that no VIF crosses the threshold of 5.0 (Hair, Ringle & Sarstedt, 2011; Hair, Hult, Ringle & Sarstedt, 2014), above which a potential collinearity problem would be indicated.

Gender Concerns

As gender has been shown to cause differences in spatial cognitive abilities (Albert & Golledge, 1999), it is important to determine whether gender differences influence the outcome of the test sample; thus, additional analyses were performed using subgroups consisting of only female (n = 166) or male (n = 134) subjects. The first test addressed the effects of SPGON on GRA based on gender. This test revealed regression weights of 0.435 and 0.326 for female and male subjects, respectively. To determine if this difference was statistically significant, a t-statistic and p-value were calculated, revealing that the difference between female and male subjects was indeed statistically significant at a p-value of 0.000. See Table 36 for more detail.

Table 36. SPGON -> GRA: Gender Group Differences.
                     Female Subjects   Male Subjects
Sample Size          166               134
Regression Weight    0.435             0.326
Standard Error       0.019             0.0220
t-statistic          3.757
p-value (2-tailed)   0.000

The second test addressed the effects of SPGMR on GRA based on gender. This test revealed regression weights of 0.410 and 0.398 for female and male subjects, respectively. While these regression weights are very similar, a t-statistic and p-value confirmed that the difference between female and male subjects is not statistically significant. See Table 37 for more detail.

Table 37. SPGMR -> GRA: Gender Group Differences.
                     Female Subjects   Male Subjects
Sample Size          166               134
Regression Weight    0.410             0.398
Standard Error       0.0167            0.0178
t-statistic          0.491
p-value (2-tailed)   0.624

Finally, the effects of SPGV on GRA based on gender were analyzed. This test revealed regression weights of 0.310 and 0.399 for female and male subjects, respectively. A t-statistic and p-value were calculated, revealing that the difference between female and male subjects was statistically significant at a p-value of 0.005. See Table 38 for more detail.
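Before Table 38, a brief note on how group-comparison statistics of this kind can be formed from the group regression weights, standard errors and sample sizes. The sketch below shows one common parametric approach that assumes independent groups and pooled degrees of freedom; it closely reproduces the Table 36 value (t of roughly 3.75 against the reported 3.757) but is an illustration, not necessarily the exact procedure used to produce the tables.

# Approximate two-group comparison of regression/path weights:
# t = (b1 - b2) / sqrt(se1^2 + se2^2), df = n1 + n2 - 2.
from math import sqrt
from scipy import stats

def compare_weights(b1, se1, n1, b2, se2, n2):
    t = (b1 - b2) / sqrt(se1 ** 2 + se2 ** 2)
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    return t, p

# Values from Table 36 (SPGON -> GRA, female vs. male subjects).
t, p = compare_weights(0.435, 0.019, 166, 0.326, 0.0220, 134)
print(f"t = {t:.3f}, p = {p:.3f}")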

Table 38. SPGV -> GRA: Gender Group Differences.
                     Female Subjects   Male Subjects
Sample Size          166               134
Regression Weight    0.310             0.399
Standard Error       0.0194            0.0254
t-statistic          2.843
p-value (2-tailed)   0.005

These findings demonstrate that the SPGON and SPGV substrata show statistically significant differences based on gender, which aligns with previous studies indicating that gender differences influenced the outcomes of spatial reasoning tests. This finding further indicates that the gender differences present in spatial reasoning also find their way into geospatial reasoning. Of additional interest is that SPGMR was not affected by gender.

Discussion

Based on the above analysis, it appears that the propositions indicating that SPGON, SPGMR and SPGV are substrata of GRA are in agreement with the proposed measurement model. However, it was also revealed that SPGS is not an indicator of GRA, but instead an independent construct. This follows existing literature, as the GRA substrata capture the perceived ability to navigate space using existing environmental knowledge or tools such as maps, while the SPGS construct essentially focuses on the communication of such knowledge. While it could be expected that the average business decision maker would utilize the concepts included in SPGON, SPGMR and SPGV regularly, the SPGS concepts could be more applicable to a geospatial professional, such as a cartographer. Such an individual would

use a process consisting of abstraction, idealization, and selection to develop schematized geospatial information (Klippel et al., 2005; Herskovits, 1998) more often than a business decision maker. Further research will be essential to better understand the importance of SPGS, as defined in this study, to business decision making using geospatial data and how it can best be applied to communicate information to decision makers. While we made some progress toward better understanding and defining SPGS, Klippel et al. (2005) suggest additional research to further demonstrate the need for a comprehensive definition of schematization, and ask the following questions:

Is schematization a process or the result of a process? Are concepts or conceptualizations the result of a schematization process? For the map domain this may be easier as we could claim that the result of a schematization process is a schematic map, which is a non-deniable fact, but what is the schematic map composed of? Another question that should be answered is: do we schematize spatial relations or do we schematize spatial objects? Do we schematize intersections or do we schematize angles between streets? (p. 59)

Furthermore, unlike SPGON, SPGMR and SPGV, whose underlying concepts related to spatial ability have been successfully applied in numerous previous studies, SPGS will require further research, particularly to determine how much impact, if any, it has on decision performance. Additionally, it was shown that gender causes statistically significant differences in the measurement model, as was the case in previous literature. Table 39 presents the final 12 items of the GRA measurement instrument utilizing the three substrata.

Table 39. Final GRA Measurement Items.
Item ID   Item
SPGMR1    I can usually remember a new route after I have traveled it only once.
SPGMR2    I am good at giving driving directions from memory.
SPGMR3    After studying a map, I can often follow the route without needing to look back at the map.
SPGMR4    I am good at giving walking directions from memory.
SPGON3    In most circumstances, I feel that I could quickly determine where I am based on my surroundings.
SPGON4    I have a great sense of direction.
SPGON5    I feel that I can easily orientate myself in a new place.
SPGON10   I rarely get lost.
SPGV1     I can visualize geographic locations.
SPGV2     I can visualize a place from information that is provided by a map without having been there.
SPGV3     I can visualize a place from a map.
SPGV10    While reading written walking directions, I often form a mental image of the walk.

The final measurement items for SPGS are presented in Table 40.

Table 40. Final SPGS Measurement Items.
Item ID   Item
SPGS4     I prefer maps that show only the most essential information.
SPGS5     I like simple, clear maps.
SPGS8     I prefer written walking directions that only include the most essential navigational elements.
SPGS9     I prefer maps that only provide information necessary to accomplish my tasks.
SPGS10    I prefer maps that only provide key information, even if they are not to scale.

Expected implications for industry and research, as well as expected limitations of the research, are presented next.

Implication to Research

As prior research has revealed conflicting results on the importance of spatial ability to decision performance, particularly when utilizing geographic data, the results of this research more narrowly define geospatial reasoning ability for business decision makers and provide an easy-to-administer instrument, giving researchers a stronger, context-specific measure to be extended in future research. Ideally, the GRA construct will be compared to existing measures, such as VZ-2, during a decision-making task experiment. The measurement developed during this study will allow future researchers to more easily incorporate geospatial reasoning ability into their experiments, as laboratory settings and trained evaluators are not necessary for the evaluation of research results.

Implication to Industry

In addition to providing researchers with a valid construct and measurement scale, this research could provide business leaders with a measurement tool to determine current and future employees' geospatial reasoning ability. The researchers feel that this ability may be an essential component for the successful interpretation of geovisualized data and could allow organizations to make better decisions from such data. When building teams of decision makers who will utilize geospatial data to make their decisions, such as market researchers or site-location specialists, organizations could rely on future versions of the GRA scale to qualify candidates. Furthermore, some industries, such as aviation and sea transport, rely on the expectation that candidates possess strong geospatial reasoning skills. Using a measure,

such as GRA, these industries may be able to pre-qualify candidates before administering additional laboratory testing.

Limitations and Future Research

While the current study represents only initial steps toward establishing a refined measurement instrument, future studies will need to be conducted in order to further establish the validity of GRA and its substrata. Future steps should explore the scale validity further by performing experimental manipulation of the construct, establishing nomological validity, establishing criterion validity, performing a known-groups comparison, and further assessing the discriminant validity. Each of these tests is essential to strengthening the validity of the GRA construct and its measurement scale. As the construct proposed in this research project will most likely first be tested in highly controlled experiments to ensure strong internal validity, it will be essential that the construct be extended to ensure greater generalizability and provide external validity. The topic of generalizability, or the ability to generalize research from one setting to another, has been debated heavily in recent IS journals (e.g., Compeau et al., 2012; Lee & Hubona, 2009; Lee & Baskerville, 2003, 2012; Tsang & Williams, 2012). Lee and Baskerville (2003) suggest that a theory that lacks generalizability also lacks usefulness, which is essential in a field that also has professional implications, such as IS. Tsang and Williams (2012) suggested defining the intended meaning of generalizability, as well as indicating to what population a study is being generalized. In 2012, Lee and Baskerville recommend (1) a (2) a judgment of conditions between the study

setting and the generalization, (3) a judgment of successfully having included all necessary variables to generalize, and (4) assessing whether a theory is indeed true, which would be a requirement to generalize a theory. Compeau et al. present, as a humorous rhetorical device, the observation that many authors address generalizability by including a paragraph similar to the following during the third round of revisions:

The use of students, rather than working professionals, is one potential limitation of the results. It may be that the results are unique to a student population and would not generalize to a population of working professionals. Nonetheless, we are testing a broad theory of behavior so we see no reason to expect a difference. Moreover, our goal in this paper was to develop theory and measures, rather than [...] on internal validity rather than external validity. (p. 1094)

Furthermore, Compeau et al. (2012) provide recommendations for generalizability, including (1) explicitly presenting the goal of the research, (2) explicitly defining the population to which the study should generalize, (3) justifying the sample choice, (4) discussing limitations of the sample choice, and (5) providing consistency between each of these elements. Based on these recommendations, this chapter's research goal was to provide a general measure of GRA that applies to a global population of information technology users. The sample choice of Internet users applies to this population group; however, a clear limitation is that the sample may not include sufficient representation of experience levels, cultures, etc. Thus, the results of this study

are expected to generalize to a larger group of technology users; however, this cannot be stated as a fact. To limit the potential of common method bias impacting construct validity, several procedural techniques were utilized in the scale development. For example, as suggested by Podsakoff, MacKenzie, Lee, & Podsakoff (2003), subject anonymity was ensured, respondents were informed that there were no right or wrong answers, and measurement items were sourced from varied literature and sorted by both industry and academic experts. Furthermore, the recommendations of Tourangeau, Rips, and Rasinski (2000) were used, including, but not limited to, labeling all scale values, using simple and concise questions, avoiding unfamiliar terms, and avoiding complicated syntax. While such procedural remedies were utilized to minimize common method bias during the scale development, another limitation of this study is the limited ability to test for common method bias statistically. Various techniques exist, such as Harman's Single Factor Test (Podsakoff et al., 2003), the Partial Correlation Technique (Podsakoff et al., 2003), the Multitrait-Multimethod Technique (Shadish, Cook, & Campbell, 2002), the Correlation Marker Technique (Lindell & Whitney, 2001), the CFA Marker Technique and the Unmeasured Latent Marker Technique (Liang, Saraf, Hu, & Xue, 2007), but all have limitations in their ability to detect common method bias. For example, while the Unmeasured Latent Marker Construct common method bias test was demonstrated for use in PLS by Liang et al., more recent evidence has demonstrated that the technique does not detect common method bias in PLS analyses (Chin, Thatcher, & Wright, 2012). In addition to the procedural remedies, the suggested test of first-order latent variable correlations being less than 0.9 was met in both

the Pretest and the Test (Bagozzi et al., 1991). Additionally, the Correlation Marker Technique demonstrated only a 2.87% shared variance on the Test data, which is less than the 3% threshold. However, the potential of common method bias exists, as data were collected using an identical method and during a common timeframe. To reduce the potential of this bias, it is recommended that the scale be applied to a different population and at a different time in the future (Podsakoff et al., 2003). Furthermore, while studies utilizing self-reported measures are suspected to contain common method bias, recent research has demonstrated that self-reported measures can offset such bias by attenuating the effects of other errors, such as measurement error (Conway & Lance, 2010). Due to these limitations, further refinement and validation of the measurement items must be performed. Examples of suggested future research include the application to real-world cases, as well as measuring the construct on business decision makers who utilize geospatial data and comparing these results to those of average business decision makers. Furthermore, the GRA scale could be employed to extend studies related to the usability of SDSS and GIS, as well as to empirically measure the effectiveness of various geovisualization techniques. To establish nomological validity, a confirmatory study should be conducted to empirically test the GRA construct within the context of a literature-based research model. Additionally, as suggested by MacKenzie et al. (2011), an essential part of the scale development process is to cross-validate the scale. Furthermore, as suggested by MacKenzie et al., the development of norms to facilitate the interpretation of a measurement scale is essential. It is suggested that a scale be tested on various population

groups to determine its generalizability to distinct groups. Additionally, we suggest that a cluster analysis be performed to determine the feasibility of utilizing the GRA measurement scale to classify subjects by their GRA. As this study was conducted in a general context, and since geospatial reasoning has been found to be highly task-dependent, the study will need to be replicated in a variety of domains to assess the impact of task characteristics on geospatial reasoning ability. Also, as the study was presented in the context of the impact of geospatial reasoning on decision performance, the resulting construct and instrument will need to be applied in a decision-making domain, so that the overall effectiveness of the scale for predicting the outcome variable can be assessed. Finally, additional research should be performed to assess the SPGS construct that was eliminated from the measurement model, particularly since SPGS appeared to have strong content, convergent and discriminant validity, as well as reliability. While SPGS did not appear to be predictive of the GRA construct, the analysis performed indicated high reliability for this construct. Perhaps SPGS will play an important role in moderating decision performance; thus, additional research is warranted.

Conclusion

There has been rapid growth in the geospatial information collected by business organizations, and advanced tools have been developed to visualize such data for decision making. However, while there have been several attempts to clarify what cognitive abilities are necessary to make such decisions, there have been conflicting results.

IS research has generally measured only one dimension of spatial ability, instead of testing for a spectrum of spatial abilities. As such, IS scholarship lacks an empirically validated, multi-dimensional measurement of GRA; thus, a multi-step measurement scale development process, based on the work of MacKenzie et al. (2011), was utilized to develop a multi-dimensional scale with reliability and validity. The result of this research project was the development of a 12-item scale to measure the three proposed substrata of GRA: (1) self-perceived geospatial orientation and navigation, (2) self-perceived geospatial memorization and recall, and (3) self-perceived geospatial visualization. As such, this measurement scale addresses multiple dimensions of GRA, a limitation of previous studies addressing spatial ability. Furthermore, the scale focuses on the geographic context of spatial reasoning. Finally, the scale was developed to assess psychometric responses regardless of the past geospatial experience of research subjects. We suggest that future research further establish the validity of these substrata and the GRA construct. Finally, it was revealed that SPGS is an independent construct that is different from GRA, but one that may also have an impact on decision performance. It was suggested that future research expand on the importance of SPGS in business decision making using geospatial data.

CHAPTER IV

USER ACCEPTANCE OF SPATIAL DECISION SUPPORT SYSTEMS: APPLYING UTILITARIAN, HEDONIC AND COGNITIVE MEASURES

(A subsequent version of this chapter is under review at Information & Management.)

Abstract

Following the scale development of GRA, Chapter IV presents an extension of the Technology Acceptance Model in the context of geospatial visualization and the use of online mapping services. The purpose of this is to further validate the GRA construct and scale through the establishment of norms, external validity and nomological validity. SDSS have become an important tool for consumer and business evaluation of geospatial information. This paper demonstrates that the user acceptance of such tools is influenced by utilitarian, hedonic and cognitive measures. Furthermore, the nomological validity of geospatial reasoning ability is expanded through the use of well-established constructs, including perceived enjoyment and perceived ease of use. Responses from 577 subjects were evaluated using a partial least squares analysis.

Introduction

Recent technological innovations related to collecting, analyzing and presenting geospatial data have provided both consumer and business decision makers with advanced decision support capabilities. For instance, the implementation of positioning technologies, such as global satellite navigation systems and terrestrial navigation capabilities provided by wireless network access points and mobile communication

networks, allows devices to easily determine their location in physical space. Contemporary mobile devices, including tablets and smartphones, maximize the capabilities of such positioning technologies by providing the ability to transmit location information in real time. In addition, analytic systems such as GIS and SDSS have provided professionals and consumers with the capability to analyze geospatial information more easily. Consumers have benefited from the capabilities provided by online mapping services, location-based services and web-based SDSS. In particular, online mapping services, such as Google Maps and Bing Maps, have experienced tremendous usage growth in recent years (Google Maps, 2013; Bing Maps, 2013). Globally, over 100 million mobile users access Google Maps each month (Gundotra, 2010). In the United States alone, about 48 million users accessed an online mapping service from a mobile device, while nearly 94 million users accessed such services from a fixed Internet access point (ComScore, 2011). Most online mapping services and location-based services utilize geovisualization to present geospatial information to the user. Thus, an understanding of user perceptions and attitudes toward geovisualization is essential for future industry development and academic research. In addition to its direct impact on consumers, geovisualization has also become an important presentation technique for conveying complex geospatial relationships contained in enterprise data to business decision makers. As over 75 percent of all business data contains geographic information and 80 percent of all business decisions utilize geographic data (Mennecke, 1997; Tonkin, 1994), it has become essential to develop geovisualization tools and techniques to maximize decision performance for

business leaders. Furthermore, location analytics, which include social, emotional, geographic and physical indicators, are providing business decision makers with even more advanced insights into their geospatial data (Ferguson, 2012). Often such aggregated data is presented in a geovisualized format designed to allow decision makers to easily identify complex geospatial interactions. As in the consumer context, a better understanding of geovisualization in the business context is essential. The lack of research addressing user perceptions and attitudes regarding geovisualization provided the motivation for this study. In particular, this study measured utilitarian, hedonic and cognitive user perceptions to better understand user acceptance of geovisualization. The utilitarian perceptions were measured through the inclusion of the perceived usefulness and perceived ease of use constructs. The hedonic perceptions were provided by measures of the perceived enjoyment construct. Finally, cognitive perceptions were measured through the implementation of the geospatial reasoning ability construct. These constructs, along with their appropriate theoretical background, are described next.

Perceived Enjoyment: Hedonic Measures

Numerous researchers have found that hedonic measures, like fun or enjoyment, are relevant for predicting user acceptance and use of information technologies (e.g., Brown & Venkatesh, 2005; Childers, Carr, Peck & Carson, 2002; Gerow, Ayyagari, Thatcher & Roth, 2013; Thong, Hong & Tam, 2006; van der Heijden, 2004). Studies have found that hedonic aspects of systems are improved in systems that include appealing visual layouts like graphics and colors (Ives, 1982; Klein, Moon, & Picard,

2002; van der Heijden, 2004). The inherently visual nature of geovisualization systems suggests that understanding the hedonic attributes might be especially important to understanding user acceptance of these systems. For this study, the hedonic measures included in the perceived enjoyment (PE) construct developed by Lee, Fiore and Kim (2006) were utilized.

Technology Acceptance Model: User Acceptance and Utilitarian Measures

Davis, Bagozzi and Warshaw (1989) extended Fishbein and Ajzen's (1975) Theory of Reasoned Action to develop the Technology Acceptance Model (TAM). The Theory of Reasoned Action posits that beliefs influence attitudes. The initial TAM assumed that utilitarian views about perceived usefulness (PU) and perceived ease of use (PEOU) influence technology acceptance. The TAM has been empirically applied to better understand the acceptance of various technologies, such as online shopping (Gefen, Karahanna & Straub, 2003), enterprise technologies (Amoako-Gyampah & Salam, 2004; Amoako-Gyampah, 2007) and even telemedicine (Hu, Chau, Sheng & Tam, 1999). However, TAM has not been applied to the concept of geovisualization or to technologies that rely on geovisualization. This study addresses this gap and provides an initial understanding of which factors influence the user acceptance of SDSS. An understanding of the utilitarian perceptions of SDSS will allow industry to begin the development of more effective geovisualization techniques. In addition to measuring PU and PEOU, TAM traditionally also includes measures of attitude toward a technology (A) and the behavioral intent to use a technology (BI), which are included as well.

Geospatial Reasoning Ability: Cognitive Measures

For the cognitive measures, the (2011) measure of geospatial reasoning ability (GRA) was utilized, which captures subjects' self-perceived ability to reason about geospatial information. Geospatial reasoning ability is measured using a 12-item scale encompassing the three substrata of geospatial reasoning: (1) self-perceived geospatial orientation and navigation, (2) self-perceived geospatial memorization and recall, and (3) self-perceived geospatial visualization. While most prior information systems research evaluated spatial ability using only one or two dimensions of spatial ability, geospatial reasoning ability evaluates a spectrum of spatial abilities. Furthermore, the measurement scale focuses on the geographic context of spatial reasoning. Additionally, the scale was developed to assess psychometric responses regardless of prior geospatial experience, making it appropriate for expert and non-expert subjects. This research project was established to investigate the following questions: (1) Do cognitive measures, such as geospatial reasoning ability, impact utilitarian perceptions of SDSS? (2) Do cognitive measures, such as geospatial reasoning ability, impact hedonic perceptions of SDSS? (3) Do utilitarian perceptions, such as perceived ease of use of SDSS, impact attitude toward SDSS and behavioral intent of using SDSS? (4) Do hedonic perceptions, such as perceived enjoyment of SDSS, impact attitude toward SDSS and behavioral intent of using SDSS? In addition to answering these research questions, another goal of this study was the initial development of a nomological network, via well-established constructs, for the relatively new geospatial reasoning ability construct.

In the following section, a research model and related hypotheses are introduced. This is followed by an explanation of the research methodology applied in this study. Then, an analysis of the dataset is presented. Finally, a discussion of findings, disclosure of limitations, suggestions for future research and a conclusion are presented.

Research Model

This section introduces the theoretical lens, a proposed research model and hypotheses developed to better understand the user acceptance of geovisualization. The theoretical lens is Davis et al.'s (1989) TAM, a widely cited extension of Fishbein and Ajzen's (1975) Theory of Reasoned Action. This extension of TAM combines utilitarian, hedonic and cognitive perceptions to determine attitudes and the extent of behavioral intent. Furthermore, this research model utilizes TAM to better understand user acceptance within the geovisualization domain; geovisualization conveys complex geospatial relationships and is utilized in various contemporary technologies, including online mapping services and advanced GIS and SDSS technologies. Figure 8 introduces the proposed research model and its associated hypotheses visually; each is then explained in turn.

Figure 8. Proposed Research Model.

Prior research has shown that technologies such as location-based services and virtual reality platforms, which require spatial reasoning abilities, also have an effect on the hedonic perceptions of subjects. For instance, Winn, Hoffman, Hollander, Osberg, Rose and Char (1997) highlight the influence of spatial ability on enjoyment and presence within virtual environments. Ho (2010) found a significant relationship between the use of location-based services and perceived enjoyment. In addition, Strohecker (2000) notes that subjects who utilized a graphical software tool that allowed for the simulation of scale, perception and frame of reference often expressed enjoyment. In their study, Hwang, Jung and Kim (2006) found a significant relationship between the use of hand-held, motion-based virtual reality platforms and enjoyment. While the aforementioned studies demonstrate effects on hedonic perceptions, other information technologies, such as customer relationship management tools, did not appear to have a statistically significant impact on enjoyment (Al-Momani & Noor, 2009). Thus, the author posits:

H1: Geospatial Reasoning Ability (GRA) positively influences Perceived Enjoyment (PE).

In addition to influencing hedonic measures, geospatial reasoning ability is likely to impact utilitarian measures. While geospatial reasoning ability has not been used as a direct indicator of PEOU or PU in previous studies, several existing studies provide evidence of a significant positive relationship between geospatial reasoning ability and perceived ease of use, and also between geospatial reasoning ability and perceived usefulness (Arning & Ziefle, 2007; Campbell, 2011; Gilbert, Lee-Kelley & Barton, 2003). In their study, Arning and Ziefle determined that spatial ability had a significant effect on performance when using a personal digital assistant (PDA) for information retrieval. In two experiments, spatial ability, measured by spatial visualization and spatial orientation, predicted effectiveness and efficiency, respectively, which are measures of usability (Campbell, 2011). Furthermore, it is suggested that the implications of spatial ability for psychological attitude and behavior should not be ignored (Gilbert et al., 2003). Thus, the author posits:

H2: Geospatial Reasoning Ability (GRA) positively influences Perceived Ease of Use (PEOU).

H3: Geospatial Reasoning Ability (GRA) positively influences Perceived Usefulness (PU).

The positive and significant effect of PEOU on PE has been demonstrated in numerous empirical studies (e.g., Balog & Pribeanu, 2010; Chesney, 2006; van der Heijden, 2004). van der Heijden identified a significant effect of perceived ease of use on perceived enjoyment in an evaluation of a cinema website, which would be considered a highly hedonic information system. Chesney discovered a positive relationship between perceived ease of use and perceived enjoyment in a study investigating Lego Mindstorms


development environments. Balog and Pribeanu also discovered a significant relationship between perceived ease of use and perceived enjoyment in their study of augmented reality teaching platforms. Based on the numerous studies that identified a significant and positive relationship between perceived ease of use and perceived enjoyment, the author posits:

H4: Perceived Ease of Use (PEOU) positively influences Perceived Enjoyment (PE).

PEOU has been found to have a significant relationship with attitude in numerous empirical studies (e.g., Chen, Gillenson & Sherrell, 2002; Chen & Tan, 2004; Lin & Lu, 2000; O'Cass & Fenech, 2003). In their study, Chen et al. (2002) discovered that PEOU is a key antecedent of A. Chen and Tan (2004) reported a significant relationship between PEOU and A. In their study of web sites, Lin and Lu (2000) discovered that PEOU had a significant relationship with A, and O'Cass and Fenech (2003) discovered that PEOU had a significant positive effect on web retail A. Vijayasarathy (2004) also added that significant relationships exist between the PEOU and A of online shopping. Furthermore, Davis et al. (1989) stated that PEOU and PU jointly determine the attitude toward a system. Thus, we posit:

H5: Perceived Ease of Use (PEOU) positively influences Attitude (A).

Additionally, numerous studies have demonstrated a relationship between PEOU and PU (e.g., Davis, 1993; Subramanian, 1994). Furthermore, Legris, Ingham and Collerette (2003) cited twenty-one studies that indicated a positive relationship between PEOU and PU. It is clear, based on prior research, that PEOU influences perceived


usefulness across a wide variety of domains (e.g., Davis, 1989, 1993; Davis et al., 1989). Thus, we posit that:

H6: Perceived Ease of Use (PEOU) positively influences Perceived Usefulness (PU).

Hedonic aspects of technology adoption have been considered in numerous technology adoption studies (e.g., Davis, Bagozzi & Warshaw, 1992; van der Heijden, 2004). Several papers specifically indicate relationships between PE and BI within the context of mobile and online mapping services (e.g., Abad, Díaz & Vigo, 2010; Ho, 2010; Novak & Schmidt, 2009; Verkasalo, López-Nicolás, Molina-Castillo & Bouwman, 2010). Novak and Schmidt discovered a significant relationship between hedonic measures and the use of a large collaborative display that provided information contextualized on a map in a travel advisory scenario. Ho found a significant relationship between the use of location-based services and PE. Abad et al. found significant relationships between PE and BI in an outdoor content generation exercise that utilized Google Maps along with other mobile device features. Verkasalo et al. also found that BI was impacted by the PE of mobile map applications. Based on the importance of hedonic aspects, such as PE, to user acceptance of technologies, we posit:

H7: Perceived Enjoyment (PE) positively influences Attitude (A).

PU has been found to significantly influence A in numerous empirical studies (e.g., Chen et al., 2002; Chen & Tan, 2004; Lin & Lu, 2000; O'Cass & Fenech, 2003; Vijayasarathy, 2004). In their study regarding online retailers, Lee et al. (2006) discovered a significant relationship between the PU of online shopping and A toward an


online retailer. Chen et al. (2002) discovered a significant positive relationship between PU and A toward the store. In Chen and Tan (2004), a significant relationship between PU and A was reported. As reported in their extension of the TAM, Gefen and Straub (1997) noted that PU and PEOU affect attitude toward use. In their study of websites, Lin and Lu (2000) discovered that perceived usefulness had a significant relationship with attitude as measured by preferences for a web site. O'Cass and Fenech (2003) discovered that perceived usefulness had a positive effect on attitude toward web retail. Vijayasarathy (2004) also discovered a significant relationship between perceived usefulness and attitude toward online shopping. Furthermore, Davis et al. (1989) stated that perceived ease of use and perceived usefulness jointly determine the attitude toward a system. Thus, we posit:

H8: Perceived Usefulness (PU) positively influences Attitude (A).

While the early version of the TAM did not include attitude (Davis et al., 1989), later works (Davis, 1993; Mathieson, 1991) included attitude as an antecedent of behavioral intent. Furthermore, Davis (1993) reported that the overall attitude toward a technology is a significant determinant of whether someone will use it. Liker and Sindi (1997) reported that positive attitudes toward a technology result in usage, while negative attitudes result in rejection. Positive attitude, positive/negative feelings, and evaluated beliefs toward an object are all related to behavioral intent (Karahanna, Straub & Chervany, 1999). Furthermore, the relationship between attitude and behavioral intent has been widely tested in the literature (e.g., Anderson & Agarwal, 2010; Herath & Rao, 2009; Venkatesh, Morris, Davis & Davis,


2003). As a result, we expect that attitude regarding location-aware mobile applications is positively related to intentions of using SDSS. Thus, we posit that:

H9: Attitude (A) positively influences Behavioral Intent (BI).

Next, the research methodology used to test the aforementioned hypotheses is presented.

Research Methodology

To assess the above research model, an online survey instrument comprised of measurement items adapted from prior research was utilized. Subjects who were familiar with online mapping services were solicited to complete the survey. A paragraph explaining geovisualization and online mapping services was provided, and only subjects who had previously used online mapping services were allowed to complete the survey.

Research Sample

Subjects who were familiar with online mapping services were solicited to complete a survey. All subjects were provided a paragraph explaining geovisualization and online mapping services and then asked a screening question to determine whether they had ever used an online mapping service. Only subjects who had previously used such an SDSS and who were over the age of 18 were permitted to participate. Five hundred and seventy-seven usable responses were obtained from subjects over a six-month period. Numerous techniques were utilized to contact potential subjects, including e-mail, social network services and personal contact. The goal of the recruitment process was to obtain subjects from a wide variety of backgrounds. One limitation of the recruitment method is that it provided no meaningful way to determine a response rate.
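The screening rule described above is simple enough to express directly. The following sketch is illustrative only; the field names (used_online_mapping, age) are hypothetical and are not taken from the survey implementation.

```python
def is_eligible(used_online_mapping: bool, age: int) -> bool:
    """Illustrative screening rule: prior use of an online mapping service
    and a minimum age of 18 were both required to participate."""
    return used_online_mapping and age >= 18

# Example: a 17-year-old respondent who has used online maps is still excluded.
print(is_eligible(used_online_mapping=True, age=17))  # False
```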


Research Instrument

The instrument first explained some basic concepts of geovisualization, geospatial data and related terms. After learning about these concepts, demographic information was collected from each subject, including age, gender, education, and cultural background. Next, the participants completed measurement items for hedonic, utilitarian and cognitive perceptions, attitude and behavioral intent. The instruments used are presented in Tables 41-46. The survey instrument was distributed online using SurveyMonkey.

Several studies have included measures of hedonic perceptions in relationship to information technologies. Table 41 lists the six PE measurement items utilized in this study, which were adapted from Lee et al. (2006).

Table 41. Perceived Enjoyment Measurement Items (all adapted from Lee et al., 2006).
PE1: Viewing spatial information presented in a geovisualized format would be entertaining.
PE2: Viewing spatial information presented in a geovisualized format would be enjoyable.
PE3: Viewing spatial information presented in a geovisualized format would be interesting.
PE4: Viewing spatial information presented in a geovisualized format would be fun.
PE5: Viewing spatial information presented in a geovisualized format would be exciting.
PE6: Viewing spatial information presented in a geovisualized format would be appealing.

Two established constructs were used to measure utilitarian perceptions of geovisualization. The first of these constructs, PU, consisted of six measurement items derived from Davis (1989), Davis et al. (1989), Venkatesh and Davis (2000), van der


Heijden (2004), and Fagan, Neill and Wooldridge (2008). These measurement items are shown in Table 42.

Table 42. Perceived Usefulness Measurement Items (source/adapted from in parentheses).
PU1: Viewing spatial information presented in a geovisualized format would improve my decision making productivity. (Davis, 1989; Davis et al., 1989; Fagan et al., 2008; Venkatesh & Davis, 2000)
PU2: Viewing spatial information presented in a geovisualized format would enhance my effectiveness in decision making. (Fagan et al., 2008; Venkatesh & Davis, 2000)
PU3: Viewing spatial information presented in a geovisualized format would be useful. (Davis, 1989; Davis et al., 1989; Fagan et al., 2008; Venkatesh & Davis, 2000)
PU4: Viewing spatial information presented in a geovisualized format would increase my decision making performance. (Davis, 1989; Davis et al., 1989; Fagan et al., 2008; Venkatesh & Davis, 2000)
PU5: Viewing spatial information presented in a geovisualized format would make my decision making easier. (Davis, 1989; Davis et al., 1989)
PU6: Viewing spatial information presented in a geovisualized format would allow me to be more informed. (van der Heijden, 2004)

The second of these constructs, PEOU, consists of six measurement items derived from Davis (1989), Davis et al. (1989), Venkatesh and Davis (2000) and Fagan et al. (2008). These measurement items are shown in Table 43.


Table 43. Perceived Ease of Use Measurement Items (source/adapted from in parentheses).
PEOU1: Learning to operate a system that geovisualizes information would be easy for me. (Davis, 1989; Davis et al., 1989; Fagan et al., 2008)
PEOU2: I would find it difficult to get a system that geovisualizes information to do what I want it to. (Davis, 1989; Davis et al., 1989)
PEOU3: My interaction with a system that geovisualizes information would be clear and understandable. (Davis, 1989; Davis et al., 1989; Venkatesh & Davis, 2000)
PEOU4: I would find a system that geovisualizes information to be flexible to interact with. (Davis, 1989; Davis et al., 1989)
PEOU5: It would be easy for me to become skillful at using a system that geovisualizes information. (Davis, 1989; Davis et al., 1989)
PEOU6: I would find a system that geovisualizes information easy to use. (Davis, 1989; Davis et al., 1989; Fagan et al., 2008; Venkatesh & Davis, 2000)

The attitude construct consists of six measurement items that were adapted from Lee et al. (2006). These items are shown in Table 44.

Table 44. Attitude Measurement Items (all adapted from Lee et al., 2006).
A1: Viewing spatial information using geovisualization would be good.
A2: Viewing spatial information using geovisualization would be superior.
A3: Viewing spatial information using geovisualization would be pleasant.
A4: Viewing spatial information using geovisualization would be excellent.
A5: Viewing spatial information using geovisualization would be interesting.
A6: Viewing spatial information using geovisualization would be worthwhile.
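For readers who want to assemble a comparable instrument, the construct-to-item mapping defined in Tables 41 through 44 can be captured as a simple data structure before analysis. The sketch below is illustrative only and is not part of the survey software used in this study; the BI and GRA items introduced in Tables 45 and 46 would be added in the same way.

```python
# Illustrative mapping of constructs to their item IDs (Tables 41-44).
MEASUREMENT_MODEL = {
    "PE":   [f"PE{i}" for i in range(1, 7)],
    "PU":   [f"PU{i}" for i in range(1, 7)],
    "PEOU": [f"PEOU{i}" for i in range(1, 7)],
    "A":    [f"A{i}" for i in range(1, 7)],
}

# Example: list every item expected in a respondent's record so far.
all_items = [item for items in MEASUREMENT_MODEL.values() for item in items]
print(len(all_items))  # 24 items across the four constructs shown above
```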


The BI construct consists of six measurement items that were adapted from Venkatesh and Davis (2000). These items are shown in Table 45.

Table 45. Behavioral Intent Measurement Items (all adapted from Venkatesh & Davis, 2000).
BI1: If I have access to a geovisualization technology, I intend to use it.
BI2: I intend to utilize a geovisualization technology.
BI3: Assuming I have access to a geovisualization technology, I intend to use it.
BI4: Given that I have access to a geovisualization technology, I predict that I would use it.
BI5: I predict that I would utilize a geovisualization technology.
BI6: I plan to utilize a geovisualization technology.

The GRA construct consists of twelve items sourced from Erskine and Gregg (2011). These items are shown in Table 46.


Table 46. Geospatial Reasoning Ability Measurement Items (all adapted from Erskine & Gregg, 2011).
GRA1: I can usually remember a new route after I have traveled it only once.
GRA2: I am good at giving driving directions from memory.
GRA3: After studying a map, I can often follow the route without needing to look back at the map.
GRA4: I am good at giving walking directions from memory.
GRA5: In most circumstances, I feel that I could quickly determine where I am based on my surroundings.
GRA6: I have a great sense of direction.
GRA7: I feel that I can easily orientate myself in a new place.
GRA8: I rarely get lost.
GRA9: I can visualize geographic locations.
GRA10: I can visualize a place from information that is provided by a map without having been there.
GRA11: I can visualize a place from a map.
GRA12: While reading written walking directions, I often form a mental image of the walk.

Upon completion of the data collection, an analysis was performed using SmartPLS (Ringle et al., 2005).

Analysis

The measurement and structural models of the proposed research were assessed by a partial least squares analysis performed using SmartPLS, which analyzed the measurement (outer) and structural (inner) models. The demographic statistics of the sample can be found in Table 47.


Table 47. Descriptive Statistics of Demographic Variables (question, variable, percentage).
Age: 18-25, 31.4%; 26-35, 32.4%; 36-45, 11.8%; 45-55, 12.5%; 56-65, 8.0%; 66+, 4.0%
Gender: Female, 55.6%; Male, 44.4%
Education: 2-Year/Associate Degree, 25.5%; 4-Year/Bachelor Degree, 30.5%; Doctor/JD/PhD, 2.8%; Elementary/Middle School, 0.9%; High School, 30.3%; Master Degree, 10.1%
Profession: Business Professional, 42.2%; Student, 35.9%; Geospatial Professional, 0.5%; Other, 18.4%
Culture: African, 3.8%; Asian, 11.3%; Australian, 0.2%; European, 35.9%; Middle Eastern, 4.2%; North American, 43.5%; South American, 1.2%

The measurement model explains the relationship between each of the constructs and its corresponding measurement items. We assessed the measurement model by evaluating item reliability, internal consistency and discriminant validity. Establishing an acceptable measurement model allowed the structural model to be tested.

The first task involved assessing the reliability of the measurement items, which called for the removal of any item with a loading of less than 0.5. All items except PEOU2 were deemed acceptable. A detailed examination of the data set revealed that the results of


PEOU2 correctly, if to a lesser extent, inversely mirrored similar responses and that most responses were neutral. We feel that the use of a negatively worded question may have fundamentally changed the meaning of the item. For example, a person may not find a tool easy to use, but that does not necessarily mean that the tool is inherently difficult for them. The use of negatively and positively worded items in a research study has been a topic in prior research, with mixed results (e.g., Ibrahim, 2001; Mook, Kleijn & van der Ploeg, 1992; Schmitt & Stults, 1985; Spector, Van Katwyk, Brannick & Chen, 1997). Thus, PEOU2 was removed from the analysis. The reliability of the measurement items is shown in Table 48. As composite reliability and Cronbach's (1951) alpha values of greater than 0.7 are deemed acceptable, all values appear excellent, which demonstrates measurement item reliability.

Table 48. Item Reliability (construct, composite reliability, Cronbach's alpha).
GRA: 0.9457, 0.9371
PE: 0.9652, 0.9567
PU: 0.9606, 0.9506
PEOU: 0.9452, 0.9274
A: 0.9450, 0.9301
BI: 0.9667, 0.9586

Generally, accepted thresholds state that items should load at greater than 0.7 on their own constructs, that cross-loadings should be less than 0.5 on other constructs, that loadings and cross-loadings should differ by more than 0.2, and that all correlations between factors should not exceed 0.7. While all measurement items except GV10 exceed the 0.7 threshold, many cross-loadings were higher than the suggested 0.5 threshold (Hair et al., 2010). However, each item loaded more highly on its own construct, and constructs appear to share more variance with their


respective measures. Table 49 demonstrates the item loadings and cross-loadings of the outer model. Per Vinzi, Chin, Henseler, and Wang (2010), the loading and cross-loading values can be misleading unless they are squared; thus a second table (Table 50), which reports the square roots of the loadings and cross-loadings, is presented.


Table 49. Item Loadings and Cross-Loadings; columns: A, BI, GRA, PE, PEOU, PU (in the original, the highest loading for each item is shown in bold).
A1 0.8625 0.6435 0.3518 0.5812 0.6259 0.6814
A2 0.8321 0.6045 0.3142 0.5357 0.5623 0.6353
A3 0.8683 0.6347 0.3287 0.6637 0.5827 0.6036
A4 0.8786 0.6312 0.3618 0.6511 0.5958 0.6658
A5 0.8657 0.6041 0.3494 0.6953 0.5741 0.6652
A6 0.8576 0.6603 0.3534 0.6111 0.5984 0.7002
BI1 0.6197 0.8664 0.3511 0.5083 0.5820 0.5841
BI2 0.6461 0.9165 0.3944 0.5084 0.6244 0.5853
BI3 0.6796 0.9282 0.3729 0.5735 0.6073 0.6174
BI4 0.6687 0.9011 0.3526 0.5129 0.5869 0.5762
BI5 0.7111 0.9354 0.3922 0.5607 0.6450 0.6278
BI6 0.6676 0.9131 0.3996 0.5214 0.6233 0.5964
GMR1 0.2755 0.2961 0.7826 0.3564 0.3900 0.2448
GMR2 0.2332 0.2798 0.7779 0.3439 0.3661 0.2462
GMR3 0.3223 0.3413 0.8141 0.3841 0.4613 0.3286
GMR4 0.3077 0.3424 0.7790 0.3861 0.3645 0.3441
GON10 0.2624 0.2853 0.7580 0.3209 0.3823 0.2548
GON3 0.3366 0.3238 0.7496 0.3371 0.4144 0.2789
GON4 0.3054 0.3146 0.8227 0.3382 0.4172 0.2569
GON5 0.2485 0.2790 0.7874 0.2646 0.3809 0.2246
GV1 0.3733 0.3248 0.7941 0.4276 0.4453 0.3833
GV10 0.3138 0.3160 0.5977 0.3524 0.3704 0.3272
GV2 0.3080 0.3242 0.7879 0.3678 0.4276 0.2966
GV3 0.3544 0.3705 0.7696 0.4061 0.4610 0.3422
PE1 0.6280 0.5204 0.4377 0.9042 0.5120 0.5631
PE2 0.6456 0.5306 0.4568 0.9416 0.5241 0.5819
PE3 0.6713 0.5459 0.4410 0.8976 0.5241 0.6216
PE4 0.6610 0.5344 0.4091 0.9210 0.5164 0.5735
PE5 0.6482 0.4987 0.4149 0.8962 0.4783 0.5771
PE6 0.6839 0.5430 0.3926 0.8788 0.5229 0.6561
PEOU1 0.5601 0.5773 0.4973 0.5259 0.8479 0.5223
PEOU3 0.6111 0.6086 0.4824 0.4852 0.9008 0.5377
PEOU4 0.6339 0.5892 0.4311 0.4982 0.8635 0.5263
PEOU5 0.5933 0.5950 0.4832 0.4892 0.9032 0.5211
PEOU6 0.6185 0.5873 0.4492 0.4934 0.8858 0.5489
PU1 0.6991 0.5871 0.3454 0.5807 0.5595 0.9195
PU2 0.6877 0.5697 0.3569 0.5909 0.5395 0.9252
PU3 0.6630 0.5821 0.3330 0.5831 0.5083 0.8663
PU4 0.6723 0.5809 0.3466 0.5811 0.5293 0.9042
PU5 0.6828 0.5817 0.3193 0.5795 0.5395 0.9062
PU6 0.7058 0.6266 0.3807 0.6161 0.5641 0.8513
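As its title indicates, each entry of Table 50 is the square root of the corresponding entry in Table 49. A minimal numerical check of that transform, using the A1 row copied from Table 49, is shown below; it is purely illustrative.

```python
import math

# A1 row of Table 49: loadings/cross-loadings on A, BI, GRA, PE, PEOU, PU.
a1_loadings = [0.8625, 0.6435, 0.3518, 0.5812, 0.6259, 0.6814]

# Square roots, rounded to four decimals, reproduce the A1 row of Table 50.
print([round(math.sqrt(x), 4) for x in a1_loadings])
# [0.9287, 0.8022, 0.5931, 0.7624, 0.7911, 0.8255]
```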


Table 50. Square Root of Loadings and Cross-Loadings; columns: A, BI, GRA, PE, PEOU, PU (in the original, the highest loading for each item is shown in bold).
A1 0.9287 0.8022 0.5931 0.7624 0.7911 0.8255
A2 0.9122 0.7775 0.5605 0.3719 0.7499 0.7971
A3 0.9318 0.7967 0.5733 0.8147 0.7633 0.7769
A4 0.9373 0.7945 0.6015 0.8069 0.7719 0.8160
A5 0.9304 0.7772 0.5911 0.8338 0.7577 0.8156
A6 0.9261 0.8126 0.5945 0.7817 0.7736 0.8368
BI1 0.7872 0.9308 0.5925 0.7130 0.7629 0.7643
BI2 0.8038 0.9573 0.6280 0.7130 0.7902 0.7650
BI3 0.8244 0.9634 0.6107 0.7573 0.7793 0.7857
BI4 0.8177 0.9493 0.5938 0.7162 0.7661 0.7591
BI5 0.8433 0.9672 0.6263 0.7488 0.8031 0.7923
BI6 0.8171 0.9556 0.6321 0.7221 0.7895 0.7723
GMR1 0.5249 0.5442 0.8846 0.5970 0.6245 0.4948
GMR2 0.4829 0.5290 0.8820 0.5864 0.6051 0.4962
GMR3 0.5677 0.5842 0.9023 0.6198 0.6792 0.5732
GMR4 0.5547 0.5851 0.8826 0.6214 0.6037 0.5866
GON10 0.5122 0.5341 0.8706 0.5665 0.6183 0.5048
GON3 0.5802 0.5690 0.8658 0.5806 0.6437 0.5281
GON4 0.5526 0.5609 0.9070 0.5815 0.6459 0.5069
GON5 0.4985 0.5282 0.8874 0.5144 0.6172 0.4739
GV1 0.6110 0.5699 0.8911 0.6539 0.6673 0.6191
GV10 0.5602 0.5621 0.7731 0.5936 0.6086 0.5720
GV2 0.5550 0.5694 0.8876 0.6065 0.6539 0.5446
GV3 0.5953 0.6087 0.8773 0.6373 0.6790 0.5850
PE1 0.7925 0.7214 0.6616 0.9509 0.7155 0.7504
PE2 0.8035 0.7284 0.6759 0.9704 0.7239 0.7628
PE3 0.8193 0.7389 0.6641 0.9474 0.7239 0.7884
PE4 0.8130 0.7310 0.6396 0.9597 0.7186 0.7573
PE5 0.8051 0.7062 0.6441 0.9467 0.6916 0.7597
PE6 0.8270 0.7369 0.6266 0.9374 0.7231 0.8100
PEOU1 0.7484 0.7598 0.7052 0.7252 0.9208 0.7227
PEOU3 0.7817 0.7801 0.6946 0.6966 0.9491 0.7333
PEOU4 0.7962 0.7676 0.6566 0.7058 0.9292 0.7255
PEOU5 0.7703 0.7714 0.6951 0.6994 0.9504 0.7219
PEOU6 0.8764 0.7664 0.6702 0.7024 0.9412 0.7409
PU1 0.8361 0.7662 0.5877 0.7620 0.7480 0.9589
PU2 0.8293 0.7548 0.5974 0.7687 0.7345 0.9619
PU3 0.8142 0.7630 0.5771 0.7636 0.7130 0.9308
PU4 0.8199 0.7622 0.5887 0.7623 0.7275 0.9509
PU5 0.8263 0.7627 0.5651 0.7612 0.7345 0.9519
PU6 0.8401 0.7916 0.6170 0.7849 0.7511 0.9227
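The measurement-model checks applied to loading matrices such as Table 49 can be scripted. The sketch below uses a small, hypothetical two-construct example (the loadings are invented for illustration, not taken from the study) and the standard formulas for AVE and composite reliability; the helper names are not part of SmartPLS.

```python
import pandas as pd

# Hypothetical standardized loadings/cross-loadings (rows: items, columns: constructs).
loadings = pd.DataFrame(
    {"PE": [0.90, 0.94, 0.52], "PEOU": [0.51, 0.49, 0.88]},
    index=["PE1", "PE2", "PEOU1"],
)
item_construct = {"PE1": "PE", "PE2": "PE", "PEOU1": "PEOU"}

# 1) Each item should load highest on its own construct.
for item, own in item_construct.items():
    assert loadings.loc[item].idxmax() == own, f"{item} cross-loads higher elsewhere"

# 2) Convergent validity: AVE = mean of squared own-construct loadings (> 0.5 expected).
# 3) Composite reliability = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
for construct in loadings.columns:
    own_items = [i for i, c in item_construct.items() if c == construct]
    lam = loadings.loc[own_items, construct]
    ave = (lam ** 2).mean()
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    print(f"{construct}: AVE={ave:.3f}, CR={cr:.3f}")
```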


Next, discriminant validity was examined. Discriminant validity demonstrates the extent to which one construct or variable is distinct from another (Hair et al., 2010). While loadings and cross-loadings can be used to establish discriminant validity, such validity can also be examined by comparing inter-construct correlations with the square root of the average variance extracted (AVE). In such an examination, the square root of the AVE should be equal to or greater than any of the inter-construct correlations. See Table 51 for a comparison of the inter-construct correlations and the square root of each construct's AVE.

Table 51. Inter-Construct Correlations and Square Root of AVE (columns: GRA, PE, PU, PEOU, A, BI, Square Root of AVE; diagonal cells shown as -).
GRA: -, 0.4692, 0.3879, 0.5323, 0.3991, 0.4144; 0.7703
PE: 0.4692, -, 0.6575, 0.5661, 0.7243, 0.5837; 0.9068
PU: 0.3879, 0.6575, -, 0.6036, 0.7655, 0.7655; 0.8959
PEOU: 0.5323, 0.5661, 0.6036, -, 0.6855, 0.6719; 0.8805
A: 0.3991, 0.7243, 0.7655, 0.6855, -, 0.7318; 0.8609
BI: 0.4144, 0.5837, 0.6570, 0.6719, 0.7318, -; 0.9104

Next, convergent validity is examined using the AVE and latent variable correlations. When a construct's AVE exceeds a threshold of 0.5, convergent validity can be established (Fornell & Larcker, 1981). Table 52 demonstrates that the construct AVE values exceed this threshold.


Table 52. Construct AVE.
Geospatial Reasoning Ability (GRA): 0.5934
Perceived Enjoyment (PE): 0.8223
Perceived Usefulness (PU): 0.8026
Perceived Ease of Use (PEOU): 0.7753
Attitude (A): 0.7412
Behavioral Intent (BI): 0.8288

Upon completion of the assessment of the measurement model, an analysis of the structural model was performed. To accomplish such an analysis, each of the inner-model relationships was analyzed. The first step of analyzing the structural model was to perform an assessment of the effects, as shown in Figure 9.

Figure 9. Initial Nomological Network of GRA, along with Path Coefficients and R-Square Values (* significant at alpha of 0.001, ^ significant at alpha of 0.05).

The relationships between GRA and PE, PEOU and PU were positive, with path coefficients of 0.234, 0.532 and 0.093, respectively. These associations are consistent with Hypotheses H1, H2 and H3. Furthermore, the GRA construct accounts for 28.3% of the variance of PEOU in the structural model. The relationships between PEOU and PE, A and PU were positive, with path coefficients of 0.441, 0.269 and 0.554, respectively. These relationships are consistent


with Hypotheses H4, H5 and H6. Furthermore, the combined effects of GRA and PEOU account for 36.0% of the variance of PE in the structural model, and the combined effects of GRA and PEOU account for 37.1% of the variance of PU. The relationships between PE and A, as well as PU and A, were positive, with path coefficients of 0.309 and 0.400, respectively. These relationships are consistent with Hypotheses H7 and H8. Furthermore, the combined effects of PE, PEOU and PU account for 71.4% of the variance found in A. The relationship between A and BI was positive, with a path coefficient of 0.732, which is consistent with Hypothesis H9. Furthermore, the effect of A accounts for 53.6% of the variance of BI.

Finally, the significance of each of the construct relationships was tested using the SmartPLS bootstrap re-sampling procedure with 200 re-samples and 577 cases. The purpose of the test statistic was to determine the significance of each relationship seen in the PLS model. It is recommended that t-statistic values be greater than 1.96 for an alpha of 0.05 or 2.56 for an alpha of 0.001 (Gefen & Straub, 2005). The alpha of 0.001 threshold was met by all relationships except GRA -> PU, which only met the alpha of 0.05 threshold. See Table 53 for the path coefficient table, which includes the t-statistic results.


Table 53. Path Coefficients (relationship: original sample, sample mean, standard deviation, standard error, t-statistic).
A -> BI: 0.7318, 0.7344, 0.0215, 0.0215, 34.1083
GRA -> PE: 0.2342, 0.2394, 0.0488, 0.0488, 4.7985
GRA -> PEOU: 0.5323, 0.5386, 0.0359, 0.0359, 14.8165
GRA -> PU: 0.0929, 0.0950, 0.0471, 0.0471, 1.9728
PE -> A: 0.3090, 0.3109, 0.0382, 0.0382, 8.0914
PEOU -> A: 0.2693, 0.2725, 0.0440, 0.0440, 6.1130
PEOU -> PE: 0.4415, 0.4405, 0.0480, 0.0480, 9.1947
PEOU -> PU: 0.5542, 0.5569, 0.0484, 0.0484, 11.4416
PU -> A: 0.3998, 0.3976, 0.0430, 0.0430, 9.2905

Discussion

The results of the analysis reveal that all hypotheses are supported. These results are summarized in Table 54.

Table 54. Hypothesis Tests (* significant at alpha of 0.001; ^ significant at alpha of 0.05).
H1: Geospatial Reasoning Ability positively influences Perceived Enjoyment (Significant/Positive*)
H2: Geospatial Reasoning Ability positively influences Perceived Ease of Use (Significant/Positive*)
H3: Geospatial Reasoning Ability positively influences Perceived Usefulness (Significant/Positive^)
H4: Perceived Ease of Use positively influences Perceived Enjoyment (Significant/Positive*)
H5: Perceived Ease of Use positively influences Attitude (Significant/Positive*)
H6: Perceived Ease of Use positively influences Perceived Usefulness (Significant/Positive*)
H7: Perceived Enjoyment positively influences Attitude (Significant/Positive*)
H8: Perceived Usefulness positively influences Attitude (Significant/Positive*)
H9: Attitude positively influences Behavioral Intent (Significant/Positive*)
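The significance labels in Tables 53 and 54 follow directly from the bootstrap t-statistics and the two thresholds cited above (1.96 for alpha = 0.05, 2.56 for alpha = 0.001, per Gefen & Straub, 2005). The sketch below reproduces only that thresholding step, applied to two t-values from Table 53; it does not re-run the PLS bootstrap.

```python
def significance_label(t_stat: float) -> str:
    """Classify a bootstrap t-statistic using the thresholds cited in the text."""
    if abs(t_stat) >= 2.56:
        return "significant at alpha of 0.001"
    if abs(t_stat) >= 1.96:
        return "significant at alpha of 0.05"
    return "not significant"

# Two t-statistics taken from Table 53.
print(significance_label(14.8165))  # GRA -> PEOU: significant at alpha of 0.001
print(significance_label(1.9728))   # GRA -> PU:   significant at alpha of 0.05
```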


Based on the results of the hypothesis tests, an updated research model (Figure 9) is presented. Additionally, this model serves as an initial nomological network of the geospatial reasoning ability construct.

This study examines the impact of geospatial reasoning ability on users' perceived enjoyment, perceived ease of use and perceived usefulness associated with using an online mapping tool. Our hypotheses were all supported, as shown in Table 54. Users' geospatial reasoning abilities have a significant impact on their hedonic and utilitarian perceptions. This is consistent with prior research investigating technology acceptance in other domains, which has found that an individual's self-efficacy and personality traits strongly influence user perceptions of technological tools (e.g., Devaraj, Easley & Crant, 2008; Djamasbi, Strong & Dishaw, 2010; Luo, Li, Zhang & Shim, 2010).

Consistent with the TAM, this research suggests that utilitarian and hedonic measures are indicators of technology acceptance (Davis et al., 1992). However, this model is extended by evaluating the impact of geospatial cognition on technology acceptance, as measured by GRA. This has important implications in today's environment, where geospatial data is increasingly being added to commonly used devices, such as smartphones and automotive navigation systems.

In addition to extending the TAM, this research also extends the CFT. This theory suggests that decision performance improves when information presentation matches a problem-solving task (Vessey, 1991). While decision performance measures were not directly utilized, this study indicates that hedonic and utilitarian perceptions of a decision-making


tool are impacted by the presentation method. In the case of this study, the use of geovisualization to present geospatial data was examined. Furthermore, our findings regarding the effects of PEOU, PU and PE on A were consistent with prior research (e.g., Chen et al., 2002; Davis et al., 1992; Vijayasarathy, 2004). Finally, this research examines the impact of A on BI. Consistent with research in other domains, this research finds that A toward geospatial tools, such as an SDSS, is a significant predictor of BI to use those tools (e.g., Davis, 1993; Liker & Sindi, 1997; Mathieson, 1991).

Implications for Industry

As BI has been shown to be a strong indicator of actual use (e.g., Ajzen, 1985; Ajzen & Fishbein, 1980; Yi, Jackson, Park & Probst, 2006), these results indicate that the effect of GRA may ultimately influence actual use of systems requiring GRA, such as SDSS. As many business decisions are made using geospatial data, the ability to effectively use this type of data for decision making is essential. This research suggests that businesses need to understand employee skills in this area and be prepared to provide support, training and/or more advanced geovisualization tools to help support individuals with lower GRA. While a demographic understanding of GRA, including the percentages of the population with low or high GRA as well as cultural, education or gender effects, is yet to be established, the knowledge that GRA impacts technology adoption could ultimately encourage providers of SDSS technologies to provide visualization alternatives for specific users.


Implications for Scholarship

This research project provided an initial nomological network for the GRA construct. The establishment of such a network will allow future research to perform comparative analyses. Furthermore, the TAM was extended to include GRA as an external variable. While TAM has been applied to numerous technologies and situations, it had not yet been applied to the conceptual domain of geovisualization. Prior research has shown that utilitarian measures are a stronger indicator of user acceptance of utilitarian systems, while hedonic measures are a stronger indicator of user acceptance of hedonic systems (van der Heijden, 2004). The indication that utilitarian and hedonic measures nearly equally influence attitude toward geovisualization provides a better understanding of technologies that encompass both utilitarian and hedonic attributes. Furthermore, this research expands the knowledge of information systems that combine utilitarian and hedonic aspects.

Limitations

This study has several limitations. First, the GRA construct is a new construct that has yet to be further explored and exposed to repeated empirical validation. Second, the concept of geovisualization is broad, and subjects may have had differing interpretations of its meaning. While examples of geovisualization were provided to subjects, some may have had positive, pleasurable experiences with such technologies, while other subjects may have used geovisualization exclusively in their workplace. Furthermore, limited empirical research has been conducted on information systems with both utilitarian and hedonic aspects, so this study functions as an exploratory study. Finally, the research domain of this study was limited to the use of online mapping


services, such as Google Maps. Additional research specifically addressing more advanced GIS and SDSS tools will need to be conducted. Finally, only individual decision making, not group decision making, was explored in this study.

Future Research

Future research will be essential to better understand geospatial reasoning ability and its impact on technology acceptance and usage. Research is needed to further validate the geospatial reasoning ability construct and to develop norms for geospatial reasoning ability with respect to various population groups. In addition to identifying such norms, it will be essential to determine measurable impacts of geospatial reasoning ability on outcomes such as decision performance. The effect of geospatial reasoning ability on an individual's performance of specific tasks should also be explored. This analysis of geospatial reasoning ability can include business tasks as well as other areas that heavily utilize geospatial information, such as logistics, civil aviation, and military applications. Finally, the effect of geospatial reasoning ability on group decision making should be explored.

Conclusion

The TAM, as originally developed by Davis (1989), has been demonstrated to be an effective method to evaluate the acceptance of various information systems. This research expands this model to include geospatial reasoning ability and measures the adoption of SDSS. As both consumers and business decision makers leverage the benefits of SDSS to analyze and interpret complex geospatial information, a better understanding of what factors impact the adoption of SDSS is essential to industry and scholarship. This study


provides an exploratory analysis of the user acceptance of SDSS and identifies several important findings. First, cognitive ability plays a role in the utilitarian and hedonic perceptions of SDSS. Second, geospatial reasoning ability can be a predictor of the behavioral intent to use SDSS, which is an indicator of actual use. Implications of this research include emphasizing the combined utility of measuring hedonic and utilitarian perceptions for a technology that contains both hedonic and utilitarian attributes. The findings of this study inform industry and scholarship of the significance of geospatial reasoning ability for the hedonic and utilitarian perceptions of information systems, which ultimately affects the adoption of systems designed to improve decision making using geospatial data.


CHAPTER V

INDIVIDUAL DECISION PERFORMANCE OF SPATIAL DECISION SUPPORT SYSTEMS: A GEOSPATIAL REASONING ABILITY AND PERCEIVED TASK TECHNOLOGY FIT PERSPECTIVE 4

4 An early version of this chapter was presented at the Americas Conference on Information Systems (AMCIS) 2013. A subsequent version of this chapter is under review at Decision Support Systems.

Abstract

Chapter V continues the validation process of the GRA construct and scale by incorporating these into an experiment based on the research framework presented in Chapter II. Increasingly, spatial decision support systems (SDSS) help consumers, businesses and governmental entities make decisions involving geospatial data. Understanding if, and how, user and task characteristics impact decision performance will allow SDSS developers to maximize decision-making performance. Furthermore, scholars can benefit from a more comprehensive understanding of what specific characteristics influence decision making using an SDSS. CFT is used as the theoretical framework of this study, along with research in decision performance and geospatial reasoning ability. This paper provides a synthesis of research investigating user characteristics, task characteristics and decision performance when using SDSS. Two hundred subjects participated in a two-factor experiment designed to measure the impact of user and task characteristics on decision performance. Specifically, geospatial reasoning ability is investigated as a user characteristic, while problem complexity and presentation complexity are used as treatments of task characteristics. Decision time and decision accuracy are used as


measures of decision performance, while task technology fit is applied as an indicator of user satisfaction. A partial least squares analysis revealed statistically significant effects of user and task characteristics on decision performance. Theoretical and managerial implications are discussed in detail.

Introduction

Consumer, government and business decision makers increasingly rely on SDSS. For example, consumers often use web-based SDSS for a variety of decision-making tasks, including locating a nearby bank, selecting the best route to an airport, or even more complex tasks, such as selecting an ideal neighborhood for a new home. In addition, business decision makers utilize such tools to assign sales territories, determine sites for outdoor advertising campaigns and achieve supply chain efficiencies. Government and community groups apply such tools to communicate complicated geospatial concepts, to determine areas that could be impacted by civic projects, and to spatially correlate social and natural phenomena with their possible causes. These examples are just a few of the many ways that entities use SDSS to support geospatial decision making.

Decision makers often have access to large quantities of geospatial data, which is continuously collected using mobile devices and shared from a variety of sources. Furthermore, several free or low-cost tools allow organizations to easily develop and provide SDSS technology to decision makers. As many business decisions utilize geographic data, understanding how such decisions are made and how such decision making can be improved provides an important benefit to organizations (Tonkin, 1992; Mennecke, 1997).


While prior research papers address aspects of decision performance, user characteristics and task characteristics (Jarupathirun & Zahedi, 2007; Ozimec, Natter & Reutterer, 2010), none of these specifically measure geospatial reasoning, problem complexity and visualization complexity together with decision performance simultaneously. Understanding how user and task characteristics influence decision performance when solving geospatial problems provides an important extension to existing knowledge in this area. Furthermore, industry will benefit by learning how user and task characteristics can be augmented to enhance decision performance.

Three primary research questions provide the motivation for this study: (1) Does geospatial reasoning ability impact geospatial decision-making performance? (2) Does the complexity of the visualization impact geospatial decision-making performance? (3) Does the complexity of the problem impact geospatial decision-making performance? Each of these questions is addressed in this research project.

The following section presents a literature review and theoretical framework for this study. Subsequently, a comprehensive research model and accompanying research method are presented. These sections are followed by findings derived through a partial least squares analysis. Finally, a discussion, limitations, suggestions for future research and a conclusion are presented.

Literature Review

This research extends the CFT by exploring the effects of user and task characteristics on decision performance. Specifically, the impact of the user characteristic of geospatial cognitive ability on decision performance is examined.


Cognitive Fit Theory

CFT provides the theoretical framework for this study. CFT suggests that higher quality decisions are made when the information presentation matches the problem-solving task. The CFT has been highly cited within information systems scholarship and has been extended into several decision-making studies involving geospatial data (e.g., Smelcer & Carmel, 1997; Swink & Speier, 1999; Speier & Morris, 2003; Mennecke et al., 2000).

Several studies explore the impact of task complexity on decision-making performance, particularly when examining a problem involving geospatial data (e.g., Smelcer & Carmel, 1997; Swink & Speier, 1999; Jarupathirun & Zahedi, 2007; Ozimec et al., 2010). For instance, Swink and Speier validate that decision-making performance, as measured by decision quality and decision time, is superior for less complex problems. Additionally, while Mennecke et al. (2000) confirm that as task complexity increases, accuracy is lowered, only partial support was found for task efficiency being lowered. A 2006 review of research examining cognitive fit noted that seven of the eight papers examined provided full or partial support of the CFT. While there have been extensions of CFT related to geospatial decision making, Kelton, Pennington, and Tuttle (2010) state that "in order to improve the generalizability of future research, researchers should attempt to isolate and identify the manner in which problem representations differ and examine the cognitive effects resulting from these format factors and interactions among them" (p. 89).


Decision Performance

Numerous studies have examined objective decision performance, measured by decision accuracy and decision time, when making decisions using geographic information (e.g., Swink & Speier, 1999; Crossland & Wynne, 1994; Crossland et al., 1995; Smelcer & Carmel, 1997; Dennis & Carte, 1998; Ozimec et al., 2010). For example, Crossland et al. examined decision time and decision accuracy when evaluating the use of SDSS versus paper maps. Additionally, Dennis and Carte utilized decision time and decision performance when comparing map-based and tabular geospatial information presentation. Other decision performance indicators have included perceptions of the decision outcome or process, such as perceived decision quality and perceived decision confidence (Jarupathirun & Zahedi, 2007; Ozimec et al., 2010). Objective measures of decision-making time and decision accuracy are the most commonly validated measures of decision-making performance. However, research also suggests incorporating perceptions of the decision-making process and performance, particularly as user perceptions have been shown to be significant aspects of technology acceptance (Jarupathirun & Zahedi, 2007; Ozimec et al., 2010).

Perceived Task Technology Fit

Additionally, the user satisfaction measure of perceived task technology fit has been tested in relation to geospatial decision performance with significant results (Jarupathirun & Zahedi, 2007). The perceived task technology fit measure allows user satisfaction to be assessed, revealing the potential likelihood of adoption and the intention to use the technology (Rogers, 1983; Taylor & Todd, 1995).


User Characteristics

Several studies have investigated individual user characteristics and their impact on successfully processing geographic information and making use of it in decision making. For example, user characteristics such as sex, age, culture, cognitive ability, mental workload, spatial visualization, spatial orientation, and general spatial ability have been examined (Albert & Golledge, 1999; Slocum et al., 2001; Zipf, 2002; Speier & Morris, 2003; Jarupathirun & Zahedi, 2007). Numerous studies utilize spatial visualization ability as a measure of user characteristics (Smelcer & Carmel, 1997; Whitney et al., 2011). Other studies have included spatial orientation (Swink & Speier, 1999), self-efficacy (Jarupathirun & Zahedi, 2007), and concepts including visual memory and perspective taking (Whitney et al., 2011). Lee and Bednarz (2009) raise the concern that many cognitive evaluation tools intended to evaluate geospatial reasoning are not grounded in a geospatial context. This concern was addressed through the development of a multi-dimensional construct designed to measure geospatial reasoning ability (Erskine & Gregg, 2011). Understanding the user characteristics that impact individual decision-making performance is essential, as such knowledge will allow researchers to develop tools and presentation techniques that improve decision-making performance for those with lower cognitive ability without impacting those with high cognitive ability (Smelcer & Carmel, 1997).

Lawton (1994) discovered that there were significant differences in spatial perception, mental rotation, route strategy, orientation strategy and spatial anxiety among the different gender groups. Specifically, males scored higher on mental rotation, spatial


perception and orientation strategy, while females scored higher on route strategy and spatial anxiety. In their meta-analysis of spatial ability, Linn and Petersen (1985) reported that mental rotation and spatial perception ability scores were higher for male subjects. Additionally, it was reported that gender accounts for up to 5% of performance differences in most spatial tasks. However, Evans (1980) reported that the impact of gender had mixed significance depending on the task. The author also noted that most reviewed papers suffered from the limitation that paper-and-pencil spatial ability tests were primarily conducted. Yet, Kozlowski and Bryant (1977) reported that subjects who provided a self-rating of sense of direction showed no significant differences due to gender. Furthermore, spatial ability, as measured through mental rotation and mediated by gender, has been found to influence multitasking (Mäntylä, 2013). Rusch et al. (2012) found a significant relationship between gender and analysis accuracy, with male subjects having higher accuracy. Furthermore, specific factors such as menstrual cycles and genetics have been suggested as influencing the impact of gender differences in this research (Linn & Petersen, 1985; Mäntylä, 2013). In addition to user characteristics, task characteristics have also been explored in similar studies.

Task Characteristics

Numerous task characteristics have been evaluated in previous studies, including map types and map symbolization (Ozimec et al., 2010) and information presentation (Dennis & Carte, 1998). However, one of the most common task characteristic measures is that of task difficulty (Smelcer & Carmel, 1997; Jarupathirun & Zahedi, 2007; Ozimec et al., 2010). Task difficulty can be manipulated through the complexity of the


relationships of the data analyzed (Smelcer & Carmel, 1997), as well as through the number of possible solutions and the functionality of the tools provided (Jarupathirun & Zahedi, 2007). Table 55 provides an overview of relevant decision performance studies that explore task characteristics, user characteristics and decision performance.


Table 55. Summary of Geospatial Decision Performance Research.
Crossland et al. (1995). Task characteristics: Visualization Tool. Decision performance: Decision Time, Decision Accuracy.
Smelcer & Carmel (1997). Task characteristics: Task Difficulty, Geographic Relationships, Data Representations. User characteristics: Spatial Visualization. Decision performance: Decision Time, Decision Accuracy.
Dennis & Carte (1998). Task characteristics: Information Presentation. Decision performance: Decision Time, Decision Accuracy.
Swink & Speier (1999). Task characteristics: Problem Size, Data Aggregation, Data Dispersion. User characteristics: Spatial Orientation. Decision performance: Decision Quality, Decision Time.
Jarupathirun & Zahedi (2007). Task characteristics: SDSS Functionality, Site Selection, Task Complexity, Goal Difficulty. User characteristics: Visualization, Spatial Orientation, Self-Efficacy. Decision performance: Perceived Decision Quality, Perceived Decision Efficiency, Decision Satisfaction, SDSS Technology Satisfaction.
Ozimec et al. (2010). Task characteristics: Map Type, Map Symbolization, Task Complexity, Goal Difficulty. User characteristics: Spatial Ability, Map Experience. Decision performance: Decision Efficiency, Decision Accuracy, Decision Confidence, Perceived Ease of Task.
Whitney et al. (2011). Task characteristics: Address Verification. User characteristics: Spatial Visualization, Visual Memory, Perspective Taking. Decision performance: Field Travel Distance, Total Time, Number of Errors.
Note: Only measures directly relevant to this study are shown in Table 55; see the original works for further information.


Research Model

This research project utilizes geospatial reasoning ability and perceived task technology fit as user characteristics, presentation complexity and problem complexity as task characteristics, and decision time and decision accuracy as measures of decision performance. See Figure 10 for a visual representation of the proposed research model. This research builds upon the previous experiments shown in Table 55 by including the multi-dimensional measure of geospatial reasoning ability to measure spatial cognition within the context of a geographic scale. As demonstrated by the literature review, objective measures of decision making are widely used measures of decision-making performance.

Figure 10. Proposed Research Model.

Prior research has shown conflicting results when measuring the effects of spatial ability on objective and subjective decision-making performance measures. For example, Albert and Golledge (1999) and Jarupathirun and Zahedi (2007) reported partial or no


significant effect of spatial ability on their experiment outcomes. However, Smelcer and Carmel (1997), Swink and Speier (1999), Speier and Morris (2003), Lee and Bednarz (2009), Whitney et al. (2011) and Rusch et al. (2012) discovered a significant effect of spatial ability on the outcomes of their experiments. Many of these studies utilize measurement instruments that examine cognitive reasoning outside of the geospatial context, measure only one or two dimensions of spatial reasoning and often require some previous SDSS experience. However, the geospatial reasoning ability (GRA) scale examines such reasoning using three dimensions within the geospatial context and allows expert as well as non-expert responses (Erskine & Gregg, 2011). Furthermore, as decision time and decision accuracy have been commonly used as measurements of decision performance, we suggest measuring the impact of GRA on these two measures. Based on these findings, we posit:

H1a: Higher geospatial reasoning ability (GRA) leads to lower decision time (T).

H1b: Higher geospatial reasoning ability (GRA) leads to increased decision accuracy (A).

H1c: Higher geospatial reasoning ability (GRA) leads to higher perceived task technology fit (PTTF).

In addition to user characteristics, task characteristics such as task complexity have been demonstrated to impact decision-making performance. Indeed, one of the most common measures of task characteristics is that of task difficulty (Smelcer & Carmel, 1997; Jarupathirun & Zahedi, 2007; Ozimec et al., 2010). This notion is grounded in Campbell's (1988) Task Complexity Theory, which suggests that task types have attributes that influence complexity, as well as suggesting that objective characteristics, psychological


experience and task-person interaction also influence complexity (Speier & Morris, 2003; Jarupathirun & Zahedi, 2007). Finally, support was found linking decision-making performance and cognitive fit (Vessey, 1991; Jarupathirun & Zahedi, 2007). For instance, a perceived task technology fit (PTTF) scale was used to measure individual perceptions of SDSS performance using a seven-item scale, with a statistically significant impact (Jarupathirun & Zahedi, 2007). Similarly, as decision time and decision accuracy have been commonly used as measurements of decision performance, we suggest measuring the impact of PTTF on these two measures. Based on these findings, we posit:

H2a: Higher perceived task technology fit (PTTF) leads to lower decision time (T).

H2b: Higher perceived task technology fit (PTTF) leads to increased decision accuracy (A).

Finally, research has identified that increases in complexity lead to decreased decision-making performance (Smelcer & Carmel, 1997). Specifically, increases in visualization complexity and task complexity have been shown to increase decision-making time. Smelcer and Carmel examined three levels of task difficulty, moderated by the number of sub-tasks each problem required, revealing that increased task complexity leads to increased decision time. Furthermore, Dennis and Carte (1998) tested decision time and decision accuracy on geographic containment and geographic adjacency tasks, revealing decision performance improvements when map-based presentations were used (versus tabular presentations) in all cases, except for decision accuracy when performing geographic containment tasks. Additionally, Swink and Speier (1999) found that decision


performance, as measured through decision quality and decision time, decreased as problem complexity increased. Based on these findings, we posit that increasing task complexity leads to degraded decision performance, or, more specifically:

H3a: Higher problem complexity (ProbC) leads to increased decision time (T).

H3b: Higher problem complexity (ProbC) leads to decreased decision accuracy (A).

H4a: Higher presentation complexity (PresC) leads to increased decision time (T).

H4b: Higher presentation complexity (PresC) leads to decreased decision accuracy (A).

Research Methodology

This research project was evaluated using an experiment with a two-by-two treatment design. Specifically, subjects were asked to perform a geospatial decision-making task in which the problem complexity and the visualization complexity were manipulated. In addition to the experiment, participants were asked to provide demographic information and complete the geospatial reasoning ability and perceived task technology fit measurement scales. Figure 11 demonstrates the experiment workflow as perceived by the research subjects.


Figure 11. Experiment Workflow.

First, the subject was presented with a consent form. Upon agreement, demographic information was collected, including age, gender, education and cultural background. Next, the subject was asked to complete the 12-item GRA measurement scale. Then the experiment was presented, which collected decision time and decision accuracy data. One of four experiment modes was randomly selected. Presentation complexity was manipulated by controlling the number of location items (e.g., businesses) that appeared on the map. Problem complexity was controlled based on the number of decision criteria the subjects were asked to apply to the problem. See Table 56 for a tabular comparison of the experiment modes; Table 60 provides additional detail regarding each of the modes. Finally, the 7-item PTTF measurement was presented and the data collection was completed.

Table 56. Summary of Experiment Modes (visualization complexity, task complexity).
Mode 1: Low/Easy, Low/Easy
Mode 2: Low/Easy, High/Difficult
Mode 3: High/Difficult, Low/Easy
Mode 4: High/Difficult, High/Difficult
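The two-by-two assignment summarized in Table 56 amounts to drawing one of the four modes and recording the resulting decision time and accuracy. The sketch below illustrates that flow with hypothetical field names; the actual experiment was administered online and balanced 50 subjects per mode, which a simple random draw does not guarantee.

```python
import random

# The four experiment modes from Table 56: (visualization complexity, problem complexity).
MODES = {
    1: ("low", "low"),
    2: ("low", "high"),
    3: ("high", "low"),
    4: ("high", "high"),
}

def assign_mode() -> int:
    """Randomly assign a subject to one of the four treatment modes."""
    return random.choice(list(MODES))

def record_trial(subject_id: int, mode: int, decision_seconds: float,
                 chosen: str, correct: str) -> dict:
    """Illustrative per-subject record combining treatment and performance measures."""
    visualization, problem = MODES[mode]
    return {
        "subject": subject_id,
        "mode": mode,
        "visualization_complexity": visualization,
        "problem_complexity": problem,
        "decision_time": decision_seconds,            # decision time in seconds
        "decision_accuracy": int(chosen == correct),  # 1 if the best apartment was chosen
    }

# Example usage with hypothetical values.
print(record_trial(subject_id=1, mode=assign_mode(),
                   decision_seconds=142.5, chosen="Apt 7", correct="Apt 7"))
```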


Experiment Design

The experiment asks participants to evaluate a decision-making tool in a hypothetical scenario in which they must select the ideal apartment for a friend moving to another country. Detailed evaluation criteria, which include spatial and non-spatial criteria, are provided (e.g., cost and location preferences). The complexity of the decision criteria and the problem scale are manipulated to create realistic scenarios, yet still allow variations in the treatment. Figure 12 presents an overview of the apartment finder tool.

Figure 12. Apartment Tool Developed using GISCloud and Bing Maps.

Subjects

Responses from 200 subjects were collected from January through May of 2013. Various methods were employed to solicit subjects, including e-mails and social network participant recruitment. Participation in the study was voluntary. Some subjects received a nominal amount of extra credit for participating in the study. The experiment consisted


of four experiment modes, with 50 subjects included in each mode. Descriptive statistics of the 200-subject pool are presented in Table 57.

Table 57. Descriptive Statistics of Experiment Subjects (question, variable, percentage).
Age: 18-25, 48.00%; 26-35, 37.50%; 36-46, 10.00%; 46-55, 3.00%; 56-65, 1.00%; 65+, 0.50%
Gender: Female, 44.00%; Male, 56.00%
Education: Elementary/Middle School, 0.00%; High School, 33.00%; 2-Year/Associate Degree, 34.00%; 4-Year/Bachelor Degree, 22.00%; Doctor/JD/PhD, 10.00%
Cultural Background: African, 1.00%; Australian, 0.00%; Asian, 13.00%; European, 36.00%; Middle Eastern, 20.5%; North American, 25.5%; South American, 2.00%

Geospatial Reasoning Ability Measurement Items

To assess geospatial reasoning ability, the twelve-item geospatial reasoning ability scale developed by Erskine and Gregg (2011, 2012, 2013) is utilized. This scale addresses three dimensions of geospatial reasoning ability: (1) geospatial memorization and recall, (2) geospatial orientation and navigation, and (3) geospatial visualization.


These measurement items, as shown in Table 58, are presented prior to the experiment along with a seven-point Likert scale (Likert, 1932).

Table 58. Geospatial Reasoning Ability Measurement Items, Adapted from Erskine and Gregg (2011, 2012, 2013).

Item ID    Item
SPGMR1     I can usually remember a new route after I have traveled it only once.
SPGMR2     I am good at giving driving directions from memory.
SPGMR3     After studying a map, I can often follow the route without needing to look back at the map.
SPGMR4     I am good at giving walking directions from memory.
SPGON3     In most circumstances, I feel that I could quickly determine where I am based on my surroundings.
SPGON4     I have a great sense of direction.
SPGON5     I feel that I can easily orientate myself in a new place.
SPGON10    I rarely get lost.
SPGV1      I can visualize geographic locations.
SPGV2      I can visualize a place from information that is provided by a map without having been there.
SPGV3      I can visualize a place from a map.
SPGV10     While reading written walking directions, I often form a mental image of the walk.

Perceived Task Technology Fit Measurement Items

In addition to decision time and decision accuracy, perceived task-technology fit was measured as an indicator of user satisfaction with decision performance. To assess perceived task-technology fit, measurement items developed by Karimi et al. (2004) and Jarupathirun and Zahedi (2007) were adapted for this study. These items were presented to the research subjects upon completion of the decision-making experiment along with a seven-point Likert scale. Table 59 presents the seven-item perceived task-technology fit measurement scale used for this study.

Table 59. Perceived Task Technology Fit Measurement Items, Adapted from Karimi et al. (2004) and Jarupathirun and Zahedi (2007).

Item ID    Item
PTTF1      The functionalities of the [tool] were adequate for the task given.
PTTF2      The functionalities of the [tool] were appropriate for the task given.
PTTF3      The functionalities of the [tool] were useful for the task given.
PTTF4      The functionalities of the [tool] were compatible with the task given.
PTTF5      The functionalities of the [tool] were helpful in solving the task given.
PTTF6      The functionalities of the [tool] were sufficient.
PTTF7      The functionalities of the [tool] made the task easy.

Problem Complexity

Problem complexity was manipulated for each treatment to include either a low- or high-complexity mode. For the modes using low problem complexity, the apartment finder experiment provided three proximity criteria and one attribute criterion for the decision task. Alternately, for the modes using high problem complexity, the apartment finder experiment required a decision to be made using four proximity criteria and two attribute criteria.

In the experiment, subjects were asked to find an apartment for a friend in the small town of Bad Tölz, Germany, using the online apartment finder tool. To guide the decision-making process, a series of spatial and attribute-based apartment criteria were provided. The spatial criteria consisted of statements regarding visual proximity. An example of a spatial criterion used in both the high and low problem complexity treatments was the requirement that a grocery store be nearby (see Table 60). Because the tool did not display exact proximities between the map elements, subjects had to estimate the proximity visually.
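Although the experiment deliberately required subjects to judge proximity visually, the notion of a proximity criterion can be made concrete with a short, illustrative calculation. The Python sketch below is not part of the experiment or the tool; the function and the coordinates (hypothetical points near Bad Tölz) are assumptions used only to show how a straight-line distance between two map nodes could be computed.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two WGS84 coordinates.
        r = 6371000.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical coordinates for an apartment and a grocery store near Bad Toelz.
    apartment = (47.760, 11.555)
    grocery = (47.762, 11.560)
    print(round(haversine_m(*apartment, *grocery)))  # straight-line proximity in meters

In the experiment itself no such distances were shown; subjects relied entirely on the visual arrangement of the map symbols.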

The attribute-based criteria required subjects to select an individual point on the map and then review its associated attribute table. An example of an attribute criterion, as used in the high complexity treatment, was the requirement that the laundry offer coin laundry and dry cleaning services (see Table 60).

Table 60. Problem Complexity.

Problem Specification
Assume that a friend has asked you to find an apartment in Bad Tölz, Germany. The city is testing an online map tool, which should help you find the best apartment for your friend. Please find the best apartment based on the following criteria:

Low Problem Complexity Criteria
1) The location should be as close to the Business School Building (American International University) as possible
2) There must be a grocery store nearby
3) There must be a laundry nearby
4) Additionally, the apartment rent must be less than 350 Euros/month

High Problem Complexity Criteria
1) The location should be as close to the Business School Building (American International University) as possible
2) There must be a grocery store nearby
3) There must be a laundry nearby
4) The laundry must offer coin laundry and dry cleaning services
5) Additionally, the apartment rent must be less than 300 Euros/month
6) An apartment with a bar, or other nightlife, nearby would be preferred

Visualization Complexity

In addition to problem complexity, visualization complexity was also manipulated to include a low- or high-complexity treatment. Specifically, the visualization complexity modified the number of points, or nodes, represented on the map. The low visualization complexity treatment consisted of forty-two nodes (comprising twelve apartments, six grocery stores, six pubs, six coin laundries, six university buildings and six restaurants). The high visualization complexity treatment consisted of eighty-four nodes (comprising twenty-four apartments, twelve grocery stores, twelve pubs, twelve coin laundries, twelve university buildings and twelve restaurants).

Analysis

To perform the statistical analysis, a structural equation modeling/partial least squares (SEM-PLS) analysis was conducted using SmartPLS (Ringle et al., 2005). Both the measurement and structural models of the proposed research model were evaluated. The PLS algorithm was set to use a path weighting scheme and a maximum of 300 iterations. Upon successfully completing the algorithm, an evaluation of the stop criterion changes revealed that the algorithm converged after Iteration 3 of the SmartPLS analysis. To perform the analysis, both the measurement model and structural model were assessed. For the measurement model assessment, the composite and indicator reliability as well as the convergent and discriminant validity were assessed.

Measurement Model

The first step of the measurement model evaluation involved testing the construct reliability using Cronbach's (1951) alpha and composite reliability. It is recommended that both of these measures have values above 0.70, as higher values indicate reliability (Cronbach, 1951; Gliem & Gliem, 2003). However, it is also noted that values above 0.95 can indicate that measurement items are too similar. Previous studies have demonstrated that the various items comprising the GRA construct are designed to measure three distinct substrata of GRA, including geospatial memorization and recall, geospatial orientation and navigation, as well as geospatial visualization (Erskine & Gregg, 2011, 2012). Furthermore, while the PTTF measurement items are quite similar, they have been successfully used in previous studies (Karimi et al., 2004; Jarupathirun & Zahedi, 2007). The reliability values of the two multi-item constructs are shown in Table 61.

Table 61. Construct Reliability.

Construct    Cronbach's Alpha    Composite Reliability
GRA          0.9463              0.9534
PTTF         0.9782              0.9817

To further evaluate the measurement model, both indicator reliability and convergent validity were assessed. To determine indicator reliability, the loading of each measurement item on its respective construct was evaluated. Generally, it is suggested that measurement items have a loading above 0.70, a threshold exceeded by all items of PTTF and by all items of the GRA construct with the exception of GRA12. The measurement item loadings on each construct are shown in Table 62.

Table 62. Measurement Item Loadings.

Item       GRA       PTTF
GRA1       0.7458
GRA2       0.8269
GRA3       0.8101
GRA4       0.8025
GRA5       0.8153
GRA6       0.8959
GRA7       0.8176
GRA8       0.8095
GRA9       0.8000
GRA10      0.8371
GRA11      0.8010
GRA12      0.5333
PTTF1                0.9484
PTTF2                0.9476
PTTF3                0.9497
PTTF4                0.9451
PTTF5                0.9431
PTTF6                0.9290
PTTF7                0.9205

Following the establishment of indicator reliability, convergent validity was tested by evaluating the average variance extracted (AVE). For this test, the AVE of each construct should exceed the 0.50 threshold (Fornell & Larcker, 1981). The AVE for GRA and PTTF were 0.6332 and 0.8846, respectively, indicating convergent validity. See Table 63 for the AVE of each multi-item, reflective construct.
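How these statistics are obtained can be made explicit with a short computational sketch. The Python code below is illustrative only (SmartPLS performs these calculations internally); it recomputes composite reliability and AVE directly from the standardized GRA item loadings reported in Table 62.

    import numpy as np

    # Standardized outer loadings of the twelve GRA items, as reported in Table 62.
    gra_loadings = np.array([0.7458, 0.8269, 0.8101, 0.8025, 0.8153, 0.8959,
                             0.8176, 0.8095, 0.8000, 0.8371, 0.8010, 0.5333])

    def composite_reliability(loadings):
        # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
        squared_sum = loadings.sum() ** 2
        return squared_sum / (squared_sum + (1 - loadings ** 2).sum())

    def average_variance_extracted(loadings):
        # AVE = mean of the squared standardized loadings
        return (loadings ** 2).mean()

    print(round(composite_reliability(gra_loadings), 4))       # ~0.9534, as reported in Table 61
    print(round(average_variance_extracted(gra_loadings), 4))  # ~0.6332, as reported in Table 63

Applying the same two functions to the seven PTTF loadings in Table 62 reproduces the corresponding reported values of approximately 0.9817 and 0.8846.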

Table 63. Average Variance Extracted (AVE) by Construct.

Construct    AVE
GRA          0.6332
PTTF         0.8846
T            N/A (Formative, Single Item)
A            N/A (Formative, Single Item)
ProbC        N/A (Formative, Single Item)
PresC        N/A (Formative, Single Item)

Finally, the measurement model was tested for discriminant validity using two methods. First, the cross-loadings of measurement items were compared between the GRA and PTTF constructs. Generally, measurement items of a construct should have a loading of above 0.50 (Hair et al., 2010), while above 0.70 would be ideal (Hair, Hult, Ringle, & Sarstedt, 2014). Using this test, all items indicate discriminant validity. See Table 64 for measurement item cross-loadings.

Table 64. Measurement Item Cross-Loadings.

Item       GRA       PTTF
GRA1       0.7458    0.3516
GRA2       0.8269    0.3299
GRA3       0.8101    0.3479
GRA4       0.8025    0.3040
GRA5       0.8153    0.4032
GRA6       0.8959    0.3882
GRA7       0.8176    0.3326
GRA8       0.8095    0.2947
GRA9       0.8000    0.3219
GRA10      0.8371    0.2560
GRA11      0.8010    0.2495
GRA12      0.5333    0.0697
PTTF1      0.3627    0.9484
PTTF2      0.4152    0.9476
PTTF3      0.3695    0.9497
PTTF4      0.3700    0.9451
PTTF5      0.3493    0.9431
PTTF6      0.3791    0.9290
PTTF7      0.3692    0.9205
(Each item loads highest on its own construct.)

A second test of discriminant validity compares the square root of each construct's AVE with the correlations among the latent variables. The square root of the AVE of GRA was 0.7957 (i.e., √0.6332) and that of PTTF was 0.9405 (√0.8846). As these values are larger than any of the construct correlations, discriminant validity was further indicated. See Table 65 for the complete results of discriminant validity testing.

Table 65. Latent Variable Correlations and Square Root of AVE (on the diagonal).

          A         GRA       PTTF      PresC     ProbC     T
A         N/A
GRA       0.5086    0.7957
PTTF      0.5265    0.3974    0.9405
PresC     0.1908    0.0176    0.0240    N/A
ProbC     0.1305    0.0720    0.0343    0.0000    N/A
T         0.4803    0.5579    0.5702    0.1069    0.1555    N/A

Structural Model

Following the measurement model evaluation, the structural model was evaluated. The first step of this evaluation was the examination of collinearity using each predictor construct's tolerance (1 - R²) and variance inflation factor (VIF), where the VIF is defined mathematically as 1/(1 - R²). Scholars have recommended maximum VIF values of 5, equivalent to tolerance values above 0.20 (Hair et al., 2014). Based on the analysis of VIF, the structural model does not exhibit collinearity. See Table 66 for the R², tolerance and VIF values of the three endogenous latent variables (Hair et al., 2014), and see Figure 13 for the visual results of the SmartPLS SEM-PLS algorithm.

Table 66. R² Values of Endogenous Latent Variables.

Construct    R²       Tolerance (1 - R²)    VIF (1/(1 - R²))
PTTF         0.158    0.842                 1.188
A            0.433    0.567                 1.764
T            0.486    0.514                 1.946
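To make the collinearity calculation explicit, the following sketch derives the tolerance and VIF values in Table 66 from the reported R² values. The code simply applies the definitions given above and is illustrative rather than part of the SmartPLS output.

    # Tolerance and VIF for the endogenous constructs, from the R^2 values reported in Table 66.
    r_squared = {"PTTF": 0.158, "A": 0.433, "T": 0.486}
    for construct, r2 in r_squared.items():
        tolerance = 1.0 - r2
        vif = 1.0 / tolerance
        print(f"{construct}: tolerance = {tolerance:.3f}, VIF = {vif:.3f}")
    # Prints PTTF: 0.842 / 1.188, A: 0.567 / 1.764, T: 0.514 / 1.946, matching Table 66.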

Figure 13. Visual result of SEM-PLS Algorithm (using SmartPLS).

Next, the significance of each path coefficient was evaluated using the bootstrapping method. For this analysis, the number of cases was set to 200, which is identical to the total number of valid observations, and the number of samples was set to 5,000, as recommended by Hair et al. (2014). All hypothesis tests fell above Gefen and Straub's (2005) recommended t-statistic threshold of 1.96. See Table 67 for complete significance results of the path coefficients.

Table 67. Path Coefficients and Significance Levels.

Hypothesis    Path           Path Coefficient    T Value    Significance Level
H1a (-)       GRA → T        0.3823              6.7321     p < .01
H1b (+)       GRA → A        0.3495              6.9855     p < .01
H1c (+)       GRA → PTTF     0.3974              6.2971     p < .01
H2a (-)       PTTF → T       0.4207              6.5972     p < .01
H2b (+)       PTTF → A       0.3871              6.8840     p < .01
H3a (+)       ProbC → T      0.1424              2.8014     p < .01
H3b (-)       ProbC → A      0.1187              2.2055     p < .05
H4a (+)       PresC → T      0.1035              1.9859     p < .05
H4b (-)       PresC → A      0.1876              3.4969     p < .01
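The bootstrap logic behind these t-values can be illustrated with a simplified sketch. The code below is a hypothetical stand-in: it uses synthetic data and an ordinary standardized regression slope in place of a PLS path coefficient, since the study's raw data are not reproduced here. The essential steps, resampling the 200 cases with replacement 5,000 times and dividing the original estimate by the bootstrap standard error, mirror the procedure described above.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200
    x = rng.normal(size=n)
    y = -0.4 * x + rng.normal(scale=0.9, size=n)  # toy stand-in for a GRA -> decision time path

    def path_coef(x, y):
        # Standardized simple-regression slope as a stand-in for a PLS path coefficient.
        xs = (x - x.mean()) / x.std()
        ys = (y - y.mean()) / y.std()
        return float(np.mean(xs * ys))

    original = path_coef(x, y)
    boot = []
    for _ in range(5000):                 # 5,000 bootstrap samples
        idx = rng.integers(0, n, size=n)  # draw n cases with replacement
        boot.append(path_coef(x[idx], y[idx]))

    t_value = original / np.std(boot, ddof=1)  # bootstrap standard error in the denominator
    print(round(original, 4), round(t_value, 2))  # significant if |t| > 1.96 at alpha = .05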

Next, significance testing of the total effects is shown. Here again, all paths are significant at the 0.01 level, except H3b and H4a, which are significant at the 0.05 level. See Table 68 for complete significance results of the total effects and Figure 14 for the visual results of the SmartPLS bootstrapping algorithm.

Table 68. Total Effects and Significance Levels.

Hypothesis    Path           Total Effect    T Value     Significance Level
H1a (-)       GRA → T        0.5495          10.2458     p < .01
H1b (+)       GRA → A        0.5034          10.6981     p < .01
H1c (+)       GRA → PTTF     0.3974          6.2971      p < .01
H2a (-)       PTTF → T       0.4207          6.5972      p < .01
H2b (+)       PTTF → A       0.3871          6.8840      p < .01
H3a (+)       ProbC → T      0.1424          2.8014      p < .01
H3b (-)       ProbC → A      0.1187          2.2055      p < .05
H4a (+)       PresC → T      0.1035          1.9859      p < .05
H4b (-)       PresC → A      0.1876          3.4969      p < .01

Figure 14. Visual result of the SmartPLS bootstrapping algorithm.
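The relationship between the direct effects in Table 67 and the total effects in Table 68 can be verified arithmetically: a total effect equals the direct effect plus any indirect effects through mediating constructs. The short sketch below uses the magnitudes reported in Table 67 (it is a worked check, not additional analysis).

    # Total effect of GRA on decision time (T) = direct effect + indirect effect via PTTF
    direct_gra_t = 0.3823              # GRA -> T (Table 67)
    indirect_gra_t = 0.3974 * 0.4207   # (GRA -> PTTF) * (PTTF -> T)
    print(round(direct_gra_t + indirect_gra_t, 4))  # 0.5495, the total effect reported in Table 68

    # Likewise for decision accuracy (A): 0.3495 + 0.3974 * 0.3871 = 0.5033
    # (Table 68 reports 0.5034; the small difference is rounding in the reported coefficients.)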

The next examination involved an analysis of the coefficients of determination, or R² values. GRA, PTTF, ProbC and PresC jointly explain 48.6% of the variance of T and 43.3% of the variance of A. These values can be considered close to moderate. Finally, GRA alone explains 15.8% of the variance of PTTF, which is weak. Following this, the blindfolding procedure of SEM-PLS was used to determine the predictive relevance, or Q², of the endogenous constructs. The Q² values revealed large predictive relevance for A and T, yet only approximately medium relevance for PTTF (relative to commonly used benchmarks of roughly 0.02, 0.15 and 0.35 for small, medium and large predictive relevance). The R² and Q² values of the endogenous latent variables are shown in Table 69.

Table 69. R² and Q² Values of Endogenous Latent Variables.

Latent Variable    R²       Q²
PTTF               0.158    0.1387
A                  0.433    0.4321
T                  0.486    0.4749

Heterogeneity

The final test analyzes the data set for heterogeneity, which is of concern because gender has previously been shown to impact spatial reasoning. First, statistical differences between male (n = 112) and female (n = 88) participants were estimated. To do so, responses were grouped into male and female population selections. The path coefficients and t-values for the population selection consisting of only male subjects are shown in Table 70.

Table 70. Path Coefficients, Male Only.

Hypothesis    Path           Path Coefficient    t Value    Standard Error
H1a (-)       GRA → T        3.1076              9.4045     0.0588
H1b (+)       GRA → A        1.2333              7.0510     0.0655
H1c (+)       GRA → PTTF     0.4078              3.4545     0.0765
H2a (-)       PTTF → T       1.4922              3.8256     0.0896
H2b (+)       PTTF → A       0.6853              4.1034     0.0788
H3a (+)       ProbC → T      0.2035              1.4306     0.0712
H3b (-)       ProbC → A      0.1355              1.8151     0.0767
H4a (+)       PresC → T      0.2279              1.6553     0.0687
H4b (-)       PresC → A      0.1355              2.0429     0.0749

Next, the female participant values were retrieved. The path coefficients and t-values of the population selection consisting of only female subjects are shown in Table 71.

Table 71. Path Coefficients, Female Only.

Hypothesis    Path           Path Coefficient    t Value    Standard Error
H1a (-)       GRA → T        1.5434              2.5473     0.1054
H1b (+)       GRA → A        0.8158              3.7583     0.0780
H1c (+)       GRA → PTTF     0.7141              5.2503     0.0892
H2a (-)       PTTF → T       2.0264              5.4614     0.0984
H2b (+)       PTTF → A       0.8544              5.7269     0.0817
H3a (+)       ProbC → T      0.3967              2.7068     0.0712
H3b (-)       ProbC → A      0.0926              1.2248     0.0759
H4a (+)       PresC → T      0.1501              0.9448     0.0767
H4b (-)       PresC → A      0.2229              2.8430     0.0782

Finally, the groups are compared. A test for equality of standard errors between the male and female groups revealed significant (alpha = 0.10) results for all but one hypothesis, H1a. See Table 72 for the complete results.
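The group comparison underlying Table 72 can be sketched with a parametric test that pools the two subsamples' standard errors, a common approach for comparing PLS path coefficients across groups. The dissertation does not spell out the formula, so the code below is a reconstruction under that assumption; using the H1b (GRA → A) estimates from Tables 70 and 71, it reproduces the equal-variance t-value reported in Table 72.

    import math

    n_m, n_f = 112, 88            # male and female subsample sizes
    p_m, se_m = 1.2333, 0.0655    # male path coefficient and standard error (Table 70, H1b)
    p_f, se_f = 0.8158, 0.0780    # female path coefficient and standard error (Table 71, H1b)

    # Pooled standard error assuming equal variances across groups.
    pooled = math.sqrt(((n_m - 1) ** 2 / (n_m + n_f - 2)) * se_m ** 2 +
                       ((n_f - 1) ** 2 / (n_m + n_f - 2)) * se_f ** 2)
    t = (p_m - p_f) / (pooled * math.sqrt(1 / n_m + 1 / n_f))
    print(round(t, 3))  # ~4.147 with df = n_m + n_f - 2 = 198, matching Table 72 for H1b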

Table 72. Result of Gender Group Comparison.

                              Equal standard errors assumed     Unequal standard errors assumed    Test for equality of
Hypothesis    Path            t value     df     p value        t value     df     p value         standard errors
H1a (-)       GRA → T         13.728      198    0.000          13.031      137    0.000           1.000
H1b (+)       GRA → A         4.147       198    0.000          4.120       180    0.000           0.702 ^
H1c (+)       GRA → PTTF      2.630       198    0.009          2.620       182    0.010           0.629 ^
H2a (-)       PTTF → T        4.021       198    0.000          4.035       187    0.000           0.394 ^
H2b (+)       PTTF → A        1.482       198    0.140          1.497       191    0.136           0.203 ^
H3a (+)       ProbC → T       1.901       198    0.059          1.929       193    0.055           0.118 ^
H3b (-)       ProbC → A       0.393       198    0.695          0.400       194    0.690           0.099 ^
H4a (+)       PresC → T       0.758       198    0.449          0.759       186    0.449           0.458 ^
H4b (-)       PresC → A       0.804       198    0.423          0.811       191    0.418           0.222 ^
(^ alpha = 0.10)

Findings and Discussion

The analysis revealed several key findings. The user characteristic of geospatial reasoning ability impacts decision performance positively, as measured using decision time and decision accuracy. Furthermore, the task characteristics of problem complexity and presentation complexity impact decision performance, as measured using decision time and decision accuracy. Additionally, a statistically significant relationship between geospatial reasoning ability and perceived task technology fit exists; however, the magnitude of this relationship is modest, with GRA explaining only 15.8% of the variance of PTTF. See Figure 15 for a visual representation of the significant relationships within the research model.

Figure 15. Research Model with Relationship Significance (**p<0.05, ***p<0.01).

The hypotheses are well supported, as shown in Table 73. These findings suggest that geospatial reasoning ability and perceived task technology fit do indeed have an effect on decision performance. Thus, these two measures could provide value to those evaluating SDSS technologies. The impact of geospatial reasoning ability on decision performance aligns with the results of Smelcer and Carmel (1997), Swink and Speier (1999), Speier and Morris (2003), Lee and Bednarz (2009), Whitney et al. (2011) and Rusch et al. (2012), who all discovered a significant effect of spatial ability on the outcomes of their experiments. Also, the significant effects of perceived task technology fit align with the findings of Jarvenpaa (1989) and Vessey (1991), who found support for a relationship between decision-making performance and task-technology fit.

Table 73. Hypotheses Test.

Hypothesis    Path           Finding
H1a (-)       GRA → T        Significant/Negative***
H1b (+)       GRA → A        Significant/Positive***
H1c (+)       GRA → PTTF     Significant/Positive***
H2a (-)       PTTF → T       Significant/Negative***
H2b (+)       PTTF → A       Significant/Positive***
H3a (+)       ProbC → T      Significant/Positive***
H3b (-)       ProbC → A      Significant/Negative**
H4a (+)       PresC → T      Significant/Positive**
H4b (-)       PresC → A      Significant/Negative***
**p<0.05, ***p<0.01

In addition, the results support that there are significant differences in the relationship between geospatial reasoning ability and task complexity for subjects of different gender in all areas except decision time. For the most part, the significant differences between gender groups align with previous findings (Linn & Petersen, 1985; Lawton, 1994). Various reasons could explain why no significance was found for H1a between gender groups (Linn & Petersen, 1985; Mäntylä, 2013). For instance, the many experiment and subject factors, such as the spatial tests used in previous studies, which were often one-dimensional as opposed to the three-dimensional GRA construct, could account for the lack of significant differences between GRA and decision time based on gender groups.

Limitations

This study includes several limitations. For instance, the experiment design specifically addresses a consumer-oriented decision problem that may be too simple for generalization to large-scale business and governmental decision making using SDSS. Furthermore, problem complexity is manipulated only through the number of proximity relationships and attributes required for each problem. There are many additional methods for manipulating complexity in geospatial problem solving, such as by including tasks to determine whether points or areas are equal to, contain, overlap, or are located within other areas. The DE-9IM model developed by Clementini, Di Felice, and van Oosterom (1993) provides an overview of the various geospatial relationships that exist (an illustrative sketch of such topological predicates appears at the end of this section). Future extensions of this study should address these geospatial relationships in addition to the proximity relationships used in this study. Furthermore, only point data was included in the visual presentation, whereas line and polygon data could have provided a richer problem-solving experiment.

Additionally, this study only addresses individual perceptions related to SDSS decision-making performance. While this is suitable for most consumer geospatial decision making, much business and governmental geospatial decision making may require group participation. While this study provides an initial assessment of the impact of geospatial reasoning ability on decision-making performance, future studies will need to be conducted in order to better understand geospatial reasoning ability, geospatial decision making and their interaction. Additionally, subjects included in this study were primarily students at an American research institution, so specific cultural and regional aspects of decision making may not have been captured. Furthermore, while geospatial reasoning ability and perceived task technology fit play a role in the decision-making process, there are numerous other factors that may influence the process, which have not been addressed.
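As referenced above, the DE-9IM framework describes topological relationships such as equality, containment and overlap. The minimal sketch below uses the open-source Shapely library (an assumption made purely for illustration; neither the library nor these predicates were part of the experiment's SDSS) with hypothetical geometries to show how such relationships could be evaluated for map features in future extensions of the task.

    from shapely.geometry import Point, Polygon

    district = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # hypothetical neighborhood boundary
    apartment = Point(1, 1)                               # hypothetical apartment location
    park = Polygon([(3, 3), (6, 3), (6, 6), (3, 6)])      # hypothetical park polygon

    print(district.contains(apartment))  # True: the apartment lies within the district
    print(apartment.within(district))    # True: the inverse containment predicate
    print(district.overlaps(park))       # True: the two polygons partially overlap
    print(district.relate(park))         # the nine-character DE-9IM intersection matrix string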

Implications

Scholarly researchers can benefit from this study for three key reasons. First, the decision sciences research domain benefits from the additional perspectives regarding decision making using SDSS. Second, no prior research utilizing both visualization complexity and problem complexity as measures of task complexity was found; thus, a research gap is addressed. Furthermore, a better understanding of the impact of problem and presentation complexity of geovisualized information extends CFT. Third, the GRA measurement scale is further empirically validated, making it a viable alternative to other spatial tests that may not measure multiple dimensions, may not provide a geographic context, or that require previous SDSS experience. These contributions could be of tremendous benefit to future decision performance research in the context of SDSS. Furthermore, such knowledge will provide a foundation for information systems researchers to develop tools and techniques that improve the decision performance of individuals with low geospatial reasoning ability.

Industry can greatly benefit from this study as it highlights the importance of considering geospatial reasoning ability when developing and designing SDSS tools. Such knowledge could allow managers to allocate individuals with high geospatial reasoning ability to tasks involving problem solving using SDSS. Furthermore, a more comprehensive understanding of the benefits and potential drawbacks of task complexity (moderated herein through visualization and problem complexity) can guide systems designers and developers in selecting the most appropriate method for visualizing complex geospatial relationships.

Conclusion

Consumer, business and governmental entities increasingly rely on SDSS for decision making involving geospatial data. Understanding the user and task characteristics that impact decision performance will allow developers of such systems to maximize decision-making performance. While several studies have explored the impact of task and user characteristics on decision performance, they have produced inconsistent results. This study explores the potential reasons for these inconsistencies and provides a comprehensive design to eliminate such problems in future research. For instance, a two-factor experiment design was implemented to determine the role of problem complexity and presentation complexity on decision performance. We feel that a stronger understanding of the characteristics that influence decision performance when using SDSS can guide future research in the decision sciences domain. Finally, this paper extends the nomological network of the geospatial reasoning ability construct, which had not yet been applied to decision performance. A further understanding of the statistically significant relationships of this construct will allow it to be applied in future studies with greater confidence in its ability to measure geospatial reasoning ability.

CHAPTER VI
CONCLUSION: TOWARD A COMPREHENSIVE UNDERSTANDING OF GEOSPATIAL REASONING ABILITY AND THE GEOSPATIAL DECISION MAKING FRAMEWORK

Abstract

As consumer, business and governmental entities increasingly utilize geospatial data when making procedural, organizational and strategic decisions, it is essential to better understand how such decisions are made. This chapter summarizes research involving the conceptual decision-making framework, in addition to exploring the nomological network of geospatial reasoning ability (GRA). This summary presents benefits to industry and academic research. For instance, understanding user and task characteristics that impact decision performance allows developers of spatial decision support systems (SDSS) to maximize decision-making performance. Additionally, information systems scholars will benefit from a comprehensive understanding of what specific characteristics influence decision making, in this case geospatial decision making.

Introduction

The previous five chapters provided the initial steps toward developing a more comprehensive understanding of geospatial decision making within the information systems scholarship. Chapter I provided a background and motivation for this dissertation. Chapter II provided a complete literature review and research framework for geospatial decision-making research. Chapter III suggested the development of a comprehensive construct defining individual GRA, a key user characteristic that has provided mixed results in previous empirical studies. Chapter IV presented an extension of the Technology Acceptance Model in the context of geospatial visualization and the use of online mapping services. Chapter V explored decision making within the research framework presented in Chapter II. This chapter presents a conclusion to the overall empirical testing of the research framework presented in Chapter II, as well as presenting a nomological network of the GRA construct presented in Chapter III.

While a large body of literature explores geographic visualization, only limited research explores how such visualizations impact decision performance, and within those studies, conflicting results are often presented. Thus, a simple, yet comprehensive model for the study of the effects of information presentation on decision performance was developed. The core elements of this conceptual model are discussed next.

Conceptual Model

To better understand how decision making using geospatial data occurs, a broad conceptual model was developed in Chapter II (see Figure 16). This model was derived from existing theory, particularly CFT (Vessey, 1991). While the model is initially applied to geospatial decision making, it could also be applied to any decision-making task. The model consists of three propositions:

Proposition P1: Information presentation impacts decision performance.
Proposition P2: Task characteristics impact decision performance.
Proposition P3: User characteristics impact decision performance.

Figure 16. Conceptual Model.

The model suggests that when task characteristics, user characteristics and information presentation align, decision performance is maximized. As this model was initially applied to geospatial decision making, measures that have previously been shown to impact geospatial decision making were used. In the case of this dissertation, geospatial visualization using interactive thematic maps is used for information presentation. Multi-criteria tasks including proximity information are used for task characteristics. GRA is used as a representation of user characteristics. Finally, objective measures of decision time and decision accuracy, as well as perceived task technology fit, are used to assess decision performance. While these measures are appropriate for geospatial decision making, other measures should be applied as appropriate based on relevant theory and literature.

Implication to Research

This dissertation presents several important implications for research. First, it provided a literature review of research concerning decision making using geospatial data. This literature review can be referenced by scholars interested in exploring this important area, and it highlights limitations of past studies in addition to several potential research ideas. Second, CFT was extended to develop a comprehensive research model for decision making using geospatial data. This model will allow future research regarding decision making using geospatial data to apply an appropriate theoretical frame. Third, a new construct and its associated measurement scale were introduced. This scale was designed to address limitations of previous scales designed to measure geospatial reasoning. Furthermore, this scale was subjected to comprehensive statistical testing to ensure reliability and validity. Fourth, the TAM was extended to include GRA as an antecedent, demonstrating that cognitive user characteristics influence technology adoption. Fifth, the hedonic and utilitarian nature of geovisualization was discovered, providing future research with another applied domain in which to further explore technologies that incorporate both of these aspects. Sixth, the GRA construct and the conceptual model of geospatial decision making were tested empirically using an experiment. This study revealed important insights, including the impact of GRA on decision performance, and provided the first test of the conceptual model developed by extending CFT.

Implication to Industry

This dissertation also presents numerous benefits to industry. For instance, the cognitive ability of GRA has demonstrated itself to be important to geovisualization technology adoption as well as to decision making using geospatial data. As geovisualization becomes a component of everyday technology interaction, such as locating a restaurant, accommodating users with low GRA will be essential for application developers. In addition to an understanding of GRA, this dissertation also revealed several concepts that will be essential to developers of location-based services. For instance, significant gender-based differences in problem solving could influence the marketing and development of such tools. Also, the specific application of an apartment-finding exercise reveals specific implications for providers of such services, but most importantly that, even with multiple criteria, individuals were able to quickly locate ideal apartments within minutes. Prior to recent geovisualization technology innovation, such a task would have taken hours if not days. Additionally, as numerous industries perform such decision making, these findings are of great value. For instance, organizational leaders can assemble teams that have members with greater GRA through the administration of the GRA scale during the team member selection process. Furthermore, human resources departments could assess the GRA of individuals applying for positions that require a great deal of individual spatial ability, such as in logistics, maritime and aviation.

Theoretical Frameworks

In the course of this dissertation, the conceptual model was linked to three theoretical frameworks: CFT, Task Complexity Theory and the Technology Acceptance Model. First, CFT (Vessey, 1991) provided the initial theoretical framework for the model. This theory suggests that higher quality decisions are made when the information presentation matches the problem-solving task. CFT has been referenced as a theoretical background, extended into other domains and validated in numerous empirical studies involving geospatial data, including Smelcer and Carmel (1997), Mennecke et al. (2000), and Speier and Morris (2003). For instance, Mennecke et al. extended CFT in an exploration of how user characteristics and task complexity impact decision performance. Second, Campbell's (1988) Task Complexity Theory is applied when addressing the task characteristic relationship within the conceptual model. Task Complexity Theory suggests that each type of task has attributes that influence complexity and that complexity is further influenced by user characteristics, such as psychological experience (Larson & Czerwinski, 1998). Swink and Speier (1999) applied Task Complexity Theory to explain that different task types may be more or less complex, ultimately influencing decision performance. Third, the Technology Acceptance Model (TAM) is applied when addressing the user characteristic relationship within the conceptual model. This model, which is an adaptation of the Theory of Reasoned Action (Fishbein & Ajzen, 1975), is widely used in the information systems scholarship with the intent to explain why individuals adopt and use technologies. While the TAM has been revised several times, the original model, along with the hedonic measure of perceived enjoyment, was applied to the use of online mapping services. Specifically, the TAM was applied to determine whether GRA influences the acceptance of SDSS, as well as whether SDSS users adopt such systems based on hedonic or utilitarian properties or a combination of both.

Findings and Discussion

Following the brief introduction presented in Chapter I, Chapter II presented a detailed introduction to the conceptual model described above. Specifically, an argument was made for use of the model when exploring decision making using geospatial data. The literature review revealed that existing research in information systems as well as other domains provides a strong foundation for the exploration of decision making using geospatial data. In addition to the literature review, a research framework was presented along with a conceptual research model. In addition to the conceptual research model, a discussion of research questions, theoretical lenses, existing constructs, potential limitations, and validity concerns appropriate to future research was presented. It is suggested that the framework could be used as a benchmark for future research, provide a stronger theoretical background, and facilitate improved comparison and understanding between studies. This would be tremendously beneficial due to the numerous contradictions in past research, particularly surrounding task complexity and spatial reasoning. Additionally, while business school curriculums include basic computing courses and courses that address databases and advanced spreadsheets, many of these courses do not include geospatial problem-solving exercises. Thus, another key proposal of Chapter II was the suggestion that business school curriculums offer, at a minimum, problem-solving exercises utilizing geospatial data. Pick (2004) learned that some business schools included a study of GIS as either an elective or required course; however, with the prevalence of geospatial data and its increased use in business problem solving, perhaps this is not enough.

Due to prior research suggesting that spatial ability plays an important role in decision-making ability when working with geospatial data, a new construct measuring GRA was introduced in Chapter III. While research into the role of cognitive abilities has been extensive, there have been conflicting results. It is suggested that the discrepancies are related to the specific dimensions of spatial ability assessed, the lack of geospatial context, and the need for previous geospatial or tool experience. As IS research has generally only measured one or two dimensions of spatial ability in each study, the new GRA construct assesses a spectrum of spatial abilities. The scale development procedures provided by MacKenzie et al. (2011) were utilized to develop a multi-dimensional scale with demonstrated reliability and validity. Furthermore, Chapter III revealed an independent construct that is different from GRA, but one that may also have an impact on decision performance. This construct, called self-perceived geospatial schematization, is developed and presented along with several suggestions for future research.

In Chapter IV, the newly established GRA construct was further validated. For this validation step, the Technology Acceptance Model (TAM) was selected as a theoretical lens, as it is one of the most widely cited information systems models. For instance, as of August 4, 2013, Google Scholar lists 16,760 citations of the original Davis (1989) paper alone. The initial GRA construct developed in Chapter III was tested along with an existing construct measuring perceived enjoyment (PE). Perceived enjoyment was included due to previous research suggesting that certain technologies contain both utilitarian and hedonistic attributes that could affect technology adoption. While TAM has been demonstrated to be an effective method to evaluate the acceptance of various information systems, it had not been applied to SDSS or GIS, nor had GRA been used as an antecedent. Thus, Chapter IV extended TAM in two important ways: 1) GRA was used as an antecedent of utilitarian and hedonistic perceptions, and 2) TAM was applied in the context of SDSS. This research chapter revealed two key findings. First, cognitive ability plays a role in the utilitarian and hedonic perceptions of SDSS. Second, GRA can be a predictor of the behavioral intent to use SDSS, which is an indicator of actual use. These findings demonstrate that users of SDSS may perceive such tools as having both hedonic and utilitarian attributes. Scholarly implications of this research include a confirmation of the value of measuring the combined hedonic and utilitarian perceptions when assessing a technology that contains both hedonic and utilitarian attributes. The findings of this study inform industry and scholarship of the significance of GRA for the hedonic and utilitarian perceptions of information systems, which ultimately affects the adoption of systems designed to improve decision making using geospatial data.

In Chapter V, the conceptual model proposed in Chapter II was applied to a geospatial decision-making experiment. This experiment explored the potential reasons for inconsistencies in studies that explored the impact of task characteristics and user characteristics on decision performance. Furthermore, great care was taken in the study design to eliminate external influences. For instance, a two-factor experiment design was implemented to determine the moderating role of problem complexity and presentation complexity on decision performance. A stronger understanding of the characteristics that influence decision performance with SDSS can guide future research in the decision sciences domain. Understanding the user and task characteristics that impact decision performance will allow developers of such systems to maximize decision-making performance. Two hundred subjects participated in a two-factor experiment designed to measure the impact of user and task characteristics on decision performance. Specifically, GRA and perceived task technology fit were investigated as user characteristics, while problem complexity and map complexity were used as treatments of task characteristics. Decision time and decision accuracy were used as measures of decision performance. A partial least squares analysis revealed statistical significance of user and task characteristics on decision performance. Chapter V also extended the nomological network of the GRA measurement scale, which had not yet been applied to decision performance. A complete nomological network presenting the empirical results of Chapters IV and V is presented in Figure 17. This further validation of the GRA measurement scale will allow it to be applied in future studies with greater confidence in its ability to truly detect GRA. This nomological network should be expanded to include antecedents of GRA in order to allow tests of predictive efficiency and mediating efficiency to be performed (Liu, Li, & Zhu, 2012).

Figure 17. Current Nomological Network of GRA Construct.

Future Research

Finally, this chapter concludes with a discussion of the limitations and recommendations for future research related to both the GRA construct and the conceptual decision-making research model. Future scholarly research should include further validation and the establishment of norms for the GRA construct. It is suggested that GRA measurements be compared with well-known spatial reasoning tests, such as VZ1. Such future research will be essential to better understand geospatial reasoning ability and its impact on technology acceptance and usage.

In addition to the future research investigating GRA, it is suggested that the conceptual decision-making model presented in Chapter II be further evaluated in various decision-making scenarios, but particularly geospatial decision making. For example, future extensions should address geospatial relationships in addition to proximity relationships. Furthermore, in addition to the simple point data used in this study, line and polygon data could provide a richer and more realistic problem-solving experiment. Additionally, this dissertation only addresses individual perceptions related to SDSS decision-making performance. While this is suitable for most consumer geospatial decision making, much business and governmental geospatial decision making requires group participation.

REFERENCES

& Vigo, M. (2010). Acceptance of mobile technology in hedonic scenarios. In Proceedings of the 24th BCS Interaction Specialist Group Conference (pp. 250-258). British Computer Society.
Africa, C. (2013). Making cities more liveable with GIS. FutureGov. Retrieved from http://www.futuregov.asia/articles/2013/jul/30/making-cities-more-liveable-gis/
Agrawala, M., & Stolte, C. (2001). Rendering effective route maps: Improving usability through generalization. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (pp. 241-249). ACM.
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11-39). New York: Springer-Verlag.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice Hall.
Albert, W. S., & Golledge, R. G. (1999). The use of spatial cognitive abilities in geographical information systems: The map overlay operation. Transactions in GIS, 3(1), 7-21.
Al-Momani, K., & Noor, N. A. M. (2009). E-service quality, ease of use, usability and enjoyment as antecedents of e-CRM performance: An empirical investigation in Jordan mobile phone services. The Asian Journal of Technology Management, 2(2), 50-63.
American Express (2013). Express Cash ATM Finder. Retrieved from http://amex.via.infonow.net/locator/cash/
Amoako-Gyampah, K. (2007). Perceived usefulness, user involvement and behavioral intention: An empirical study of ERP implementation. Computers in Human Behavior, 23(3), 1232-1248.
Amoako-Gyampah, K., & Salam, A. F. (2004). An extension of the technology acceptance model in an ERP implementation environment. Information & Management, 41(6), 731-745.
Anderson, C. L., & Agarwal, R. (2010). Practicing safe computing: A multimethod empirical examination of home computer user security behavioral intentions. MIS Quarterly, 34(3), 613-643.
Apple (2013). Find my friends! Retrieved from http://www.apple.com/icloud/features/find-my-friends.html
Arning, K., & Ziefle, M. (2007). Barriers of information access in small screen device applications: The relevance of user characteristics for a transgenerational design. In Universal Access in Ambient Intelligence Environments (pp. 117-136). Berlin, Heidelberg: Springer.
Audet, R. H., & Abegg, G. L. (1996). Geographic information systems: Implications for problem solving. Journal of Research in Science Teaching, 33(1), 21-45.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36, 421-458.
Balog, A. (2011). Testing a multidimensional and hierarchical quality assessment model for digital libraries. Studies in Informatics and Control, 20(3), 233-246.
Balog, A. & acceptance of an augmented reality teaching platform: A structural equation modeling approach. Studies in Informatics and Control, 19(3), 319-330.
Beemer, B. A., & Gregg, D. G. (2010). Dynamic interaction in knowledge based systems: An exploratory investigation and empirical evaluation. Decision Support Systems, 49(4), 386-395.
Bing Maps (2013). Retrieved from http://www.bing.com/maps.
Brody, H., Rip, M. R., Vinten-Johansen, P., Paneth, N., & Rachman, S. (2000). Map-making and myth-making in Broad Street: The London cholera epidemic, 1854. The Lancet, 356(9223), 64-68.
Brown, S. A., & Venkatesh, V. (2005). Model of adoption of technology in households: A baseline model test and extension incorporating household life cycle. MIS Quarterly, 29(3), 399-426.
Brusset, X. (2012). Supply chains: Agile, robust or both? In Colloquium on European Retail Research Book of Proceedings, 65-101.
Campbell, D. J. (1988). Task complexity: A review and analysis. Academy of Management Review, 13(1), 40-52.
Campbell, S. G. (2011). Users' spatial abilities affect interface usability outcomes (Doctoral dissertation). University of Maryland, College Park.
Chen, L. D., Gillenson, M. L., & Sherrell, D. L. (2002). Enticing online consumers: An extended technology acceptance perspective. Information & Management, 39(8), 705-719.

Chen, L. D., & Tan, J. (2004). Technology adaptation in e-commerce: Key determinants of virtual stores acceptance. European Management Journal, 22(1), 74-86.
Chen, R. J. C. (2007). Geographic information systems (GIS) applications in retail tourism and teaching curriculum. Journal of Retailing and Consumer Services, 14(4), 289-295.
Cherney, I. D., Brabec, C. M., & Runco, D. V. (2008). Mapping out spatial ability: Sex differences in way-finding navigation. Perceptual and Motor Skills, 107(3), 747-760.
Chesney, T. (2006). An acceptance model for useful and fun information systems. Human Technology: An Interdisciplinary Journal on Humans in ICT Environments, 2(2), 225-235.
Childers, T. L., Carr, C. L., Peck, J., & Carsons, S. (2002). Hedonic and utilitarian motivations for online retail shopping behavior. Journal of Retailing, 77(4), 511-535.
Chin, W. W. (1998). The partial least squares approach to structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 295-336). Mahwah, NJ: Laurence Erlbaum Associates.
Chin, W. W., Thatcher, J. B., & Wright, R. T. (2012). Assessing common method bias: Problems with the ULMC technique. MIS Quarterly, 36(3), 1003-1019.
Clementini, E., Di Felice, P., & van Oosterom, P. (1993). A small set of formal topological relationships suitable for end-user interaction. In D. Abel & B. C. Ooi (Eds.), Advances in Spatial Databases: Third International Symposium, SSD '93, Singapore, Proceedings, Lecture Notes in Computer Science, 692, 277-295.
Compeau, D., Marcolin, B., Kelley, H., & Higgins, C. (2012). Generalizability of information systems research using student subjects: A reflection on our practices and recommendations for future research. Information Systems Research, 23(4), 1093-1109.
ComScore (2011). U.S. mobile map audience grows 39 percent in past year as fixed internet map audience softens slightly. Retrieved from http://www.comscore.com/Press_Events/Press_Releases/2011/7/U.S._Mobile_Map_Audience_Grows_39_Percent_in_Past_Year
Conroy, M. M., & Gordon, S. I. (2004). Utility of interactive computer-based materials for enhancing public participation. Journal of Environmental Planning and Management, 47(1), 19-33.
Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25(3), 325-334.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.
Crossland, M. D., & Wynne, B. E. (1994). Measuring and testing the effectiveness of a spatial decision support system. In Proceedings of the 27th Annual Hawaii International Conference on System Sciences (Volume IV, pp. 542-551). Los Alamitos, CA: IEEE Computer Society Press.
Crossland, M. D., Wynne, B. E., & Perkins, W. C. (1995). Spatial decision support systems: An overview of technology and a test of efficacy. Decision Support Systems, 14(3), 219-235.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Davis, F. D. (1993). User acceptance of information technology: System characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38(3), 475-487.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111-1132.
Dennis, A. R., & Carte, T. A. (1998). Using geographical information systems for decision making: Extending cognitive fit theory to map-based presentations. Information Systems Research, 9(2), 194-203.
Densham, P. J. (1991). Spatial decision support systems. In D. J. Maguire, M. F. Goodchild, & D. W. Rhind (Eds.), Geographical Information Systems: Principles and Applications (pp. 403-412). New York: John Wiley and Sons.
Department of the Interior (2012). Twitter Earthquake Detector. Retrieved from http://recovery.doi.gov/press/us-geological-survey-twitter-earthquake-detector-ted/
Devaraj, S., Easley, R. F., & Crant, J. M. (2008). Research note: How does personality matter? Relating the five-factor model to technology acceptance and use. Information Systems Research, 19(1), 93-105.

Diamantopoulos, A., Reifler, P., & Roth, K. P. (2008). Advancing formative measurement models. Journal of Business Research, 61(12), 1203-1218.
Djamasbi, S., Strong, D. M., & Dishaw, M. (2010). Affect and acceptance: Examining the effects of positive mood on the technology acceptance model. Decision Support Systems, 48(2), 383-394.
Eisenberg, T. A., & McGinty, R. L. (1977). On spatial visualization in college students. Journal of Psychology, 95(1), 99-104.
Elliott, P., Wakefield, J. C., Best, N. G., & Briggs, D. J. (2000). Spatial Epidemiology: Methods and Applications. Oxford University Press.
Erskine, M. A., & Gregg, D. G. (2011). Geospatial reasoning ability of business decision makers: Construct definition and measurement. In Proceedings of the Seventeenth Americas Conference on Information Systems, Detroit, Michigan.
Erskine, M. A., & Gregg, D. G. (2012). The effects of geospatial website attributes on eImage: An exploratory study. In Proceedings of the 2012 International Conference on Information Resources Management, Vienna, Austria.
Erskine, M. A., & Gregg, D. G. (2013). Impact of geospatial reasoning ability and perceived task-technology fit on decision performance: The moderating role of task characteristics. In Proceedings of the Nineteenth Americas Conference on Information Systems, Chicago, Illinois.
ESRI (2013). Retrieved from http://www.esri.com.
Evans, G. W. (1980). Environmental cognition. Psychological Bulletin, 88(2), 259-287.
Fagan, M. H., Neill, S., & Wooldridge, B. R. (2008). Exploring the intention to use computers: An empirical investigation of the role of intrinsic motivation, extrinsic motivation, and perceived ease of use. Journal of Computer Information Systems, 48(3), 31-37.
Federal Geographic Data Committee (1999). NSDI Community Demonstration Projects. Retrieved from http://www.fgdc.gov/nsdi/library/factsheets/documents/demofactsheet.pdf
Ferguson, R. B. (2012). Location analytics: Bringing geography back. MIT Sloan Management Review. Retrieved from http://sloanreview.mit.edu/feature/location-analytics-bringing-geography-back/
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, Massachusetts: Addison-Wesley.

Flanagin, A. J., & Metzger, M. J. (2008). The credibility of volunteered geographic information. GeoJournal, 72(3-4), 137-148.
Fornell, C., & Larcker, D. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.
Frownfelter-Lohrke, C. (1998). The effects of differing information presentations of general purpose financial statements. Journal of Information Systems, 12(2), 99-107.
Gefen, D., Karahanna, D., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51-90.
Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16(1), 91-109.
Gefen, D., & Straub, D. W. (1997). Gender differences in the perception and use of e-mail: An extension to the technology acceptance model. MIS Quarterly, 21(4), 389-400.
Gerow, J. E., Ayyagari, R., Thatcher, J. B., & Roth, P. L. (2013). Can we have fun @ work? The role of intrinsic motivation for utilitarian systems. European Journal of Information Systems, 22(3), 360-380.
Gilbert, D., Lee-Kelley, L., & Barton, M. (2003). Technophobia, gender influences and consumer decision-making for technology-related products. European Journal of Innovation Management, 6(4), 253-263.
Gill, T. G., & Hicks, R. C. (2006). Task complexity and informing science: A synthesis. Informing Science Journal, 9, 1-30.
Gliem, J. A., & Gliem, R. R. (2003). Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales. In Proceedings of the 2003 Midwest Research to Practice Conference in Adult, Continuing and Community Education, Columbus, OH.
Goodchild, M. (2007). Citizens as sensors: The world of volunteered geography. GeoJournal, 69, 211-221.
Goodhue, D. L. (1995). Understanding user evaluations of information systems. Management Science, 41(12), 1827-1844.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213-236.

190 Google Maps (2013). Retrieved from http://maps.google.com Griggs, B. (2013) New Google Maps can help you avoid t raffic, CNN International Retrieved from http://www.cnn.com/2013/08/20/tech/mobile/google waze mobile maps Grimshaw, D.J. (2001). Harnessing the power of g eo graphical knowledge: t he potential for data integration in an SME. International Journal of Information Management 21, 183 191. Gundotra, V. (2010). To 100 million and beyond with Google Maps for mobile, Google Mobile Blog Retrieved from http://googlemobile.blogspot.com/2010/08/to 100 million and beyond with google.html Hair Jr., J. F., Black, W. C., Babin, B, J., & Anderson, R. E. (2010). Multivariate data analysis Seventh edition. USA: Prentice Hall. Hair Jr., J. F., Hult, G. T. M., Ringle, C. M. & Sarstedt, M. (2014). A primer on partial least squares structural equation modeling (PLS SEM) Los Angeles, California: SAGE Publications. Hair Jr., J. F., Ringle, C. M. & Sarstedt, M. (2011). PLS SEM: indeed a silver bullet, J. of Marketing Theory and Practice 19, 139 151. Herath, T. & Rao, H. R. (2009). Protection motivation and deterrence: A framework for security policy compliance in organizations, European J ournal of Information Systems 18 ( 2 ) 1 06 125. Herskovits, A. (1998). Schematization In P. Olivier & K. P. Gapp ( Eds .) Representation and processing of spatial expressions pp. 149 162. Mahwah, N ew J ersey : Lawr ence Erlbaum Associates Hess, R. L., Rubin, R. S., & West, L. A. (2004). Geograph ic information systems as a marketin g information system technology Decision Support Systems 38 ( 2 ) 197 212. Ho, S. Y. (2010). The effects of location behavior, In Proceedings of Pacific Asia Conference on Information Systems Taipei, Taiwan. Hollenbeck, J. R., & Klein, H. J. (1987). Goal commitment and the goal setting process: p roblems, prospects, and proposals for future research. Journal of Applied Psychology 72(2), 212 220 Hu, P. J., Chau, P. Y. K., Sheng O. R. L., & Tam, K. Y. (1999). Examining the

PAGE 205

191 technology acceptance model u sing physician acceptance of telemedicine technology Journal of Management Information Systems 16 ( 2 ) 91 112. Huang, M. H. (2000 ). Information load: i ts relationship to o nline exploratory and shopping b ehavior International Journal of Information Management 20 (5) 337 437. Hung, S. Y., Ku, Y. C., Ting Peng, L., & Chang Jen, L. ( 2007 ) Regret avoidance as a measure of DSS success: An exploratory study De cision Support Systems 42 (4) 2093 2106. Hwang J., Jung J. & Kim G. J. (2006). Hand held virtual reality: a feasibility study, In Proc eedings of ACM Virtual Reality Software and Technology (VRST0 6) Limassol, Cyprus 356 363. Ibrahim, A.M. (2001). Differential r esponding to positive and negative items: the case of a negative item in a questionnaire for course and faculty evaluation Psychological Reports 88 (2) 497 500. Ives, B. (1982). Graphical user interfaces for business information s ystems MIS Quarterly 6 1982 15 47. Jankowski, P., & Nyerges, T. (2001). GIS supported collaborative decision making: Results of an e xperiment Annals of the Association of American Geographers 91 (1) 48 70. Jarupathirun, S. & Zahedi, F. ( 2001 ) A theoretical framework for GIS based spatial decision support systems: u tilization and performance evaluation, In Proceedings o f the Americas Conference on Informatio n Systems 2001 245 248. Jarupathirun, S. & Zahedi, F. M. (2007) Exploring the influe nce of perceptual factors in the success of web based spatial DSS Decision Support Systems 43 ( 3 ) 933 951. Jarvenpaa, S. L. (1989) The effect of task demands and graphical format on information processing strategies, Management Science 25 (3) 285 303. Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information technology adoption across time: a cross sectional comparison of pre adoption and post adoption beliefs MIS Quarterly 23 (2) 183 213. Karimi, J., Somers, T. M., & Gupta, Y. P. (2004). Impact of environmental uncertainty and task characteristics on user satisfaction with data Information Systems Research 15 (2) 175 193. Kelly, G. C., Tanner, M., Vallely, A., & Clements, A. (2012). Malaria elimination: moving forward with spatial decision support systems Trends in Parasitology

PAGE 206

192 28 (7) 297 304. Kelton, A. S., Pennington, R. R. & Tuttle, B. M. (2010) The effects of information presentation format on judgment and decision making: a review of the information systems research J ournal of Inf ormation Systems 24 (2) 79 1 05. Kerski, J. J. (2000). The implementation and effectiveness of geographic information systems technology and methods in secondary education (Unpublished doctoral d issertation), University of Colorado, Boulder. Khalili, N., Wood, J. & Dykes, J. (2009) Mapping geography of social n etworks, In D. Fairbairn (Ed.), Proceedings of the GIS Research UK 17th Annual Conference University of Durham, Durham, UK 311 315 K halili, N., Wood, J. & Dykes, J. (2010) Analyzing uncertainty in home location information in a large volunteered geographic information d atabase, In Proceedings of the GIS Research UK 18th Annual Conference Univer sity College London, London, UK, 57 63. Klein J Moon Y & Picard R W (2002) This computer responds to user frustration: theory, design, and results. Interacting with Computers 14 (2) 119 140. Klippel, A., Richter, K. F., Barkowsky, T. & Freksa, C. ( 2005 ). The cognitive reality of schematic m aps. In L. Meng, A. Zipf, & T. Reiche nbacher (E ds.) Map based Mobile Services Theories, Methods and Impleme ntations Springer, Berlin, 57 74. Kozlowski, L. T, & Bryant, K. J. ( 1977 ) Sense of direction, spatial orientation, and cognitive maps Journal of Experimental Psychology: Human Perc eption and Performance 3 (4) 590 598. Larson, K. & Czerwinski, M. (1998 ). Web page design: i mplications of memory, structure and scen t for information retrieval. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems ACM Pre ss/Addison Wesley Publishing Co, 25 32. Lavrinc, D. (2013). Google needs to make maps for motorcycles. Wired Retrieved from http://www.wired.com/autopia/2013/11/google maps for motorcyclists/ Lawton, C. A. (1994) Gender differences in way finding strategies: r elationship to spatial ability and spatial anxiety Sex Roles 30 ( 11/12 ) 765 779. Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information S ystems R esearch 14(3), 221 243. Lee, A. S. & Baskerville, R. L. (2012). Conceptualizing generalizability: new

Lee, J., & Bednarz, R. (2009). Effect of GIS learning on spatial thinking. Journal of Geography in Higher Education, 33(2), 183-198.
Lee, H. H., Fiore, A. M., & Kim, J. (2006). The role of the technology acceptance model in explaining effects of image interactivity technology on consumer responses. International Journal of Retail & Distribution Management, 34(8), 621-644.
Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in information systems research. MIS Quarterly, 33(2), 237-262.
Legris, P., Ingham, J., & Collerette, P. (2003). Why do people use information technology? A critical review of the technology acceptance model. Information & Management, 40(3), 191-204.
Lei, P.-L., Kao, G. Y.-M., Lin, S. S. J., & Sun, C.-T. (2009). Impacts of geographical knowledge, spatial ability and environmental cognition on image searches supported by GIS software. Computers in Human Behavior, 25(6), 1270-1279.
Liang, H., Saraf, N., Hu, Q., & Xue, Y. (2007). Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Quarterly, 31(1), 59-87.
Liker, J. K., & Sindi, A. A. (1997). User acceptance of expert systems: A test of the theory of reasoned action. Journal of Engineering and Technology Management, 14(2), 147-173.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 1-55.
Lin, J. C., & Lu, H. (2000). Towards an understanding of the behavioral intention to use a web site. International Journal of Information Management, 20(3), 197-208.
Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114-121.
Linn, M. C., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56(6), 1479-1498.
Liu, L., Li, C., & Zhu, D. (2012). A new approach to testing nomological validity and its application to a second-order measurement model of trust. Journal of the Association for Information Systems, 13(12), 950-975.
Luo, X., Li, H., Zhang, J., & Shim, J. P. (2010). Examining multi-dimensional trust and multi-faceted risk in initial acceptance of emerging technologies: An empirical study of mobile banking services. Decision Support Systems, 49(2), 222-234.

MacKenzie, S. B., Podsakoff, P. M., & Jarvis, C. B. (2005). The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions. Journal of Applied Psychology, 90(4), 710-730.
MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing techniques. MIS Quarterly, 35(2), 293-334.
Malczewski, J. (2006). GIS-based multicriteria decision analysis: A survey of the literature. International Journal of Geographical Information Science, 20(7), 703-726.
Mäntylä, T. (2013). Gender differences in multitasking reflect spatial ability. Psychological Science, 24(4), 514-520.
Marakas, G. M., Johnson, R. D., & Clay, P. F. (2007). The evolving nature of the computer self-efficacy construct: An empirical investigation of measurement construction, validity, reliability and stability over time. Journal of the Association for Information Systems, 8(1), 16-46.
MasterCard (2013). ATM Locator. Retrieved from http://www.mastercard.us/cardholder-services/atm-locator.html
Mathieson, K. (1991). Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Information Systems Research, 2(3), 173-191.
Mehrabian, A., & Russell, J. A. (1974). An Approach to Environmental Psychology. Cambridge, MA: The MIT Press, 88-94.
Meilinger, T., & Knauff, M. (2008). Ask for directions or use a map: A field experiment on spatial orientation and wayfinding in an urban environment. Journal of Spatial Science, 53(2), 13-23.
Mennecke, B. E. (1997). Understanding the role of geographic information technologies in business: Applications and research directions. Journal of Geographic Information and Decision Analysis, 1(1), 44-68.
Mennecke, B. E., & Crossland, M. D. (1996). Geographic information systems: Applications and research opportunities for information systems researchers. In Proceedings of the 29th Hawaii International Conference on System Sciences, 537-546.
Mennecke, B. E., Crossland, M. D., & Killingsworth, B. L. (2000). Is a map more than a picture? The role of SDSS technology, subject characteristics, and problem complexity on map reading and problem solving. MIS Quarterly, 24(4), 601-629.

Meyer, J. W., Butterick, J., Olkin, M., & Zack, G. (1999). GIS in the K-12 curriculum: A cautionary note. Professional Geographer, 51(4), 571-578.
Miyake, A., Friedman, N. P., Rettinger, D. A., Shah, P., & Hegarty, M. (2001). How are visuospatial working memory, executive functioning, and spatial abilities related? A latent-variable analysis. Journal of Experimental Psychology: General, 130(4), 621-640.
Mook, J., Kleijn, W. C., & van der Ploeg, H. M. (1992). Positively and negatively worded items in a self-report measure of dispositional optimism. Psychological Reports, 71(1), 275-278.
Noble, D. J. (2012). Predicting the epidemic: A study of diabetes risk profiling in a multi-ethnic inner city population (Doctoral dissertation).
Novak, J., & Schmidt, S. (2009). When joy matters: The importance of hedonic stimulation in collocated collaboration with large displays. In Proceedings of Interact 2009, Uppsala, Sweden.
O'Cass, A., & Fenech, T. (2003). Web retailing adoption: Exploring the nature of internet users' web retailing behavior. Journal of Retailing and Consumer Services, 10(2), 81-94.
Olsen, T. P. (2000). Situated student learning and spatial informational analysis for environmental problems (Unpublished doctoral dissertation). University of Wisconsin-Madison, Madison.
Ozimec, A. M., Natter, M., & Reutterer, T. (2010). Geographical information systems-based marketing decisions: Effects of alternative visualizations on decision quality. Journal of Marketing, 74(6), 94-110.
Payne, J. W. (1976). Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior & Human Performance, 16(2), 366-387.
Pick, J. B. (2004). Geographic information systems: A tutorial and introduction. Communications of the Association for Information Systems, 14(1), 307-331.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
Polites, G. L., Roberts, N., & Thatcher, J. (2012). Conceptualizing models using multidimensional constructs: A review and guidelines for their use. European Journal of Information Systems, 21(1), 22-48.

Rafi, A., Anuar, K., Samad, A., Hayati, M., & Mahadzir, M. (2005). Improving spatial ability using a Web-based Virtual Environment (WbVE). Automation in Construction, 14(6), 707-715.
Rasaf, M. R., Ramezani, R., Mehrazma, M., Rasaf, M. R. R., & Asadi-Lari, M. (2012). Inequalities in cancer distribution in Tehran: A disaggregated estimation of 2007 incidence by 22 districts. International Journal of Preventive Medicine, 3(7), 483.
Reiterer, H., Mann, T. M., Müller, G., & Bleimann, U. (2000). Visualisierung von entscheidungsrelevanten Daten für das Management [Visualization of decision-relevant data for management]. HMD, Praxis der Wirtschaftsinformatik, 212, 71-83.
Ringle, C. M., Götz, O., Wetzels, M., & Wilson, B. (2009). On the use of formative measurement specifications in structural equation modeling: A Monte Carlo simulation study to compare covariance-based and partial least squares model estimation methodologies. In Research Memoranda from Maastricht (METEOR).
Ringle, C. M., Wende, S., & Will, S. (2005). SmartPLS (M3) Beta, Hamburg. Retrieved from http://www.smartpls.de
Re/Max (2013). Property search. Retrieved from http://www.remax.com/advancedsearch/
Rogers, E. M. (1983). Diffusion of Innovations (3rd ed.). New York: The Free Press.
Rusch, M. L., Nusser, S. M., Miller, L. L., Batinov, G. I., & Whitney, K. C. (2012). Spatial ability and map-based software applications. In Proceedings of the Fifth International Conference on Advances in Computer-Human Interactions, 35-40.
Schmitt, N., & Stults, D. M. (1985). Factors defined by negatively keyed items: The result of careless respondents? Applied Psychological Measurement, 9(4), 367-373.
Seaman, V. (1798). An inquiry into the cause of the prevalence of yellow fever in New York (1st ed.). New York: The Medical Repository.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.
Sieber, R. (2006). Public participation geographic information systems: A literature review and framework. Annals of the Association of American Geographers, 96(3), 491-507.

Sirola, M. (2003). Decision concepts. In IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 59-62.
Skupin, A., & Fabrikant, S. I. (2003). Spatialization methods: A cartographic research agenda for non-geographic information visualization. Cartography and Geographic Information Science, 30(2), 99-119.
Slocum, T. A., Blok, C., Jiang, B., Koussoulakou, A., Montello, D. R., Fuhrmann, S., & Hedley, N. R. (2001). Cognitive and usability issues in geovisualization. Cartography and Geographic Information Science, 28(1), 61-75.
Smelcer, J. B., & Carmel, E. (1997). The effectiveness of different representations for managerial problem solving: Comparing tables and maps. Decision Sciences, 28(2), 391-420.
Snow, J. (1849). On the mode of communication of cholera (1st ed.). London: J. Churchill.
Snow, J. (1855). On the mode of communication of cholera (2nd ed.). London: J. Churchill.
Spector, P. E., Van Katwyk, P. T., Brannick, M. T., & Chen, P. Y. (1997). When two factors don't reflect two constructs: How item characteristics can produce artifactual factors. Journal of Management, 23(5), 659-677.
Speier, C. (2006). The influence of information presentation formats on complex task decision-making performance. International Journal of Human-Computer Studies, 64(11), 1115-1131.
Speier, C., & Morris, M. G. (2003). The influence of query interface design on decision-making performance. MIS Quarterly, 27(3), 397-423.
Strohecker, C. (2000). Cognitive zoom: From object to path and back again. In Spatial Cognition II. Berlin: Springer-Verlag.
Subramanian, G. H. (1994). A replication of perceived usefulness and perceived ease of use measurement. Decision Sciences, 25(5/6), 863-874.
Subsorn, P., & Singh, K. (2007). DSS applications as a business enhancement strategy. In Proceedings from the 3rd Annual Transforming Information and Learning Conference.
Swink, M., & Speier, C. (1999). Presenting geographic information: Effects of data aggregation, dispersion, and users' spatial orientation. Decision Sciences, 30(1), 169-196.

Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144-176.
Thong, J. Y., Hong, S. J., & Tam, K. Y. (2006). The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance. International Journal of Human-Computer Studies, 64(9), 799-810.
T-Mobile (2013). Check your coverage. Retrieved from http://www.t-mobile.com/coverage.html
Tonkin, T. (1994). Business geographics impacts corporate America. Business Geographics, 2(2), 27-28.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge, England: Cambridge University Press.
Tsang, E. W., & Williams, J. N. (2012). Generalization and induction: Misconceptions, clarifications, and a classification of induction. MIS Quarterly, 36(3), 729-748.
Tversky, B., & Lee, P. U. (1998). How space structures language. In C. Freksa, C. Habel, & K. F. Wender (Eds.), Spatial Cognition: An Interdisciplinary Approach to Representation and Processing of Spatial Knowledge (pp. 157-175). Berlin: Springer-Verlag.
van der Heijden, H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28(4), 695-704.
Velez, M. C., Silver, D., & Tremaine, M. (2005). Understanding visualization through spatial ability differences. In Proceedings of IEEE Visualization (pp. 23-28), Minneapolis, MN, USA.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186-204.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Verkasalo, H., López-Nicolás, C., Molina-Castillo, F. J., & Bouwman, H. (2010). Analysis of users and non-users of smartphone applications. Telematics and Informatics, 27(3), 242-255.
Versace, C. (2013). Mapping heats up as Apple buys Embark, Google integrates Waze, what's next? Forbes. Retrieved from http://www.forbes.com/sites/chrisversace/2013/08/26/mapping-heats-up-as-apple-buys-embark-google-integrates-waze-whats-next/

Vessey, I. (1991). Cognitive fit: A theory-based analysis of the graphs vs. tables literature. Decision Sciences, 22(2), 219-240.
Vijayasarathy, L. R. (2004). Predicting consumer intentions to use on-line shopping: The case for an augmented technology acceptance model. Information & Management, 41(6), 747-762.
Vinzi, V. E., Chin, W. W., Henseler, J., & Wang, H. (2010). Handbook of Partial Least Squares: Concepts, Methods and Applications. Berlin: Springer.
Vlachos, P. A., & Theotokis, A. (2009). Formative versus reflective measurement for multidimensional constructs. Social Science Research Network. Retrieved from http://ssrn.com/abstract=1521095
Whitney, K. C., Batinov, G. J., Miller, L. L., Nusser, S. M., & Ashenfelter, K. T. (2011). Explo In The Fourth International Conference on Advances in Computer-Human Interactions (pp. 63-68), Gosier, Guadeloupe, France.
Winn, W., Hoffman, H., Hollander, A., Osberg, K., Rose, H., & Char, P. (1997). The effect of student construction of virtual environments on the performance of high- and low-ability students. In Annual Meeting of the American Educational Research Association, Chicago.
Yang, C., Raskin, R., Goodchild, M., & Gahegan, M. (2010). Geospatial cyberinfrastructure: Past, present and future. Computers, Environment and Urban Systems, 34(4), 264-277.
Yang, C., Wong, D. W., Yang, R., & Li, Q. (2005). Performance-improving techniques in web-based GIS. International Journal of Geographical Information Science, 19(3), 319-342.
Yi, M. Y., Jackson, J. D., Park, J. S., & Probst, J. C. (2006). Understanding information technology acceptance by individual professionals: Toward an integrative view. Information & Management, 43(3), 350-363.
Zigurs, I., & Buckland, B. (1998). A theory of task/technology fit and group support systems effectiveness. MIS Quarterly, 22(3), 313-344.
Zillow (2013). Retrieved from http://www.zillow.com/mobile/
Zipf, A. (2002). User-adaptive maps for location-based services (LBS) for tourism. In Proceedings of the 9th International Conference for Information and Communication Technologies in Tourism, ENTER 2002, Innsbruck, Austria.