Wednesday, April 16, 2008

Instructional design

From Wikipedia, the free encyclopedia

Instructional Design is the practice of arranging media (communication technology) and content to help learners and teachers transfer knowledge most effectively. The process consists broadly of determining the current state of learner understanding, defining the end goal of instruction, and creating some media-based "intervention" to assist in the transition. Ideally the process is informed by pedagogically tested theories of learning and may take place in student-only, teacher-led or community-based settings. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed.
As a field, Instructional Design is historically and traditionally rooted in cognitive and behavioural psychology. However, because it is not a regulated field, and therefore not well understood, the term 'instructional design' has been co-opted by or confused with a variety of other ideologically-based and/or professional fields. Instructional Design, for example, is not Graphic Design, although graphic design (from a cognitive perspective) could play an important role in Instructional Design. Preparing Instructional Text by E. Misanchuk, and publications by James Hartley, are useful in informing the distinction between Instructional Design and Graphic Design.
Contents
1 History
2 Cognitive load theory and the design of instruction
3 Learning Design
4 Instructional design models
5 Influential researchers and theorists
6 See also
7 External links
8 References

History
Much of the foundation of the field of instructional design was laid in World War II, when the U.S. military faced the need to rapidly train large numbers of people to perform complex technical tasks, from field-stripping a carbine to navigating across the ocean to building a bomber. Drawing on the research and theories of B.F. Skinner on operant conditioning, training programs focused on observable behaviors. Tasks were broken down into subtasks, and each subtask treated as a separate learning goal. Training was designed to reward correct performance and remediate incorrect performance. Mastery was assumed to be possible for every learner, given enough repetition and feedback. After the war, the success of the wartime training model was replicated in business and industrial training, and to a lesser extent in the primary and secondary classroom. [1]
In 1955 Benjamin Bloom published an influential taxonomy of what he termed the three domains of learning: Cognitive (what we know or think), Psychomotor (what we do, physically) and Affective (what we feel, or what attitudes we have). These taxonomies still influence the design of instruction. [2]
During the latter half of the 20th century, learning theories began to be influenced by the growth of digital computers.
In the 1970s, many Instructional design theorists began to adopt an "information-processing" approach to the design of instruction. David Merrill for instance developed Component Display Theory (CDT). Component Display Theory concentrates on the means of presenting instructional materials (presentation techniques)[3].
Later, in the 1980s and throughout the 1990s, cognitive load theory began to find empirical support for a variety of presentation techniques [4].

Cognitive load theory and the design of instruction
Cognitive load theory developed out of several empirical studies of learners as they interacted with instructional materials [5]. It is emblematic of the historical roots of cognitive psychology in Instructional Design. Sweller and his associates began to measure the effects of working memory load and found that the format of instructional materials has a direct effect on the performance of the learners using those materials [6][7][8].
While the media debates of the 1990s focused on the influence of media on learning, cognitive load effects were being documented in several journals. These effects, it seems, were based on the design of the instructional materials rather than on the media being used. Finally, Mayer [9] asked the Instructional Design community to reassess the media debate and refocus its attention on what was most important: learning.
By the late 1990s, Sweller and his associates had discovered several learning effects related to cognitive load and the design of instructional materials (e.g. the split attention effect, redundancy effect, and the worked example effect). Later, other researchers such as Richard Mayer began to attribute other learning effects to cognitive load [9]. Mayer and his associates soon developed a Cognitive Theory of Multimedia Learning [10][11][12].
In the past decade, cognitive load theory has begun to be internationally accepted [13] and to revolutionize how Instructional Designers view instruction. Recently, human performance experts have taken notice of cognitive load and begun to promote this theory base as the Science of Instruction, with Instructional Designers as the practitioners of the field [14]. Finally, Clark, Nguyen and Sweller [15] published an important text describing how Instructional Designers can promote efficient learning using evidence-based guidelines from cognitive load theory.

Learning Design
The IMS Learning Design [16] specification supports the use of a wide range of pedagogies in online learning. Rather than attempting to capture the specifics of many pedagogies, it provides a generic and flexible language in which many different pedagogies can be expressed. This approach has the advantage over alternatives that only one set of learning design and runtime tools needs to be implemented in order to support the desired wide range of pedagogies. The language was originally developed at the Open University of the Netherlands (OUNL), after extensive examination and comparison of a wide range of pedagogical approaches and their associated learning activities, and after several iterations of the developing language to obtain a good balance between generality and pedagogic expressiveness.
A criticism of Learning Design theory is that it treats learning as an outcome that can be designed. Instructional Design focuses on outcomes but, while properly accounting for a multi-variate context that can only be predictive, acknowledges that (given the variability in human capability) a guarantee of reliable learning outcomes is improbable. We can only design instruction; we cannot design learning (an outcome). Automotive engineers can design a car that, under specific conditions, will achieve 50 miles per gallon. Those engineers cannot guarantee that drivers of the cars they design will (or have the capability to) operate those vehicles according to the prescribed conditions. The former is the metaphor for Instructional Design; the latter is the metaphor for Learning Design.

Instructional design models
Perhaps the most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the 5 phases contained in the model:
Analyze - analyze learner characteristics, task to be learned, etc.
Design - develop learning objectives, choose an instructional approach
Develop - create instructional or training materials
Implement - deliver or distribute the instructional materials
Evaluate - make sure the materials achieved the desired goals
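The five phases above lend themselves to a simple programmatic sketch. The Python below is purely illustrative: the phase names and purposes come from the list above, but the `Phase` dataclass and `run_addie` driver are hypothetical names of our own, not part of any ADDIE standard.

```python
# Illustrative sketch only: the five phase names come from the ADDIE model,
# but the Phase dataclass and run_addie() driver are hypothetical.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    purpose: str

ADDIE = [
    Phase("Analyze", "analyze learner characteristics and the task to be learned"),
    Phase("Design", "develop learning objectives and choose an instructional approach"),
    Phase("Develop", "create instructional or training materials"),
    Phase("Implement", "deliver or distribute the instructional materials"),
    Phase("Evaluate", "make sure the materials achieved the desired goals"),
]

def run_addie(phases):
    """Walk the phases in order, returning a simple textual log.
    In practice each phase's outputs feed the next (and, in iterative
    variants such as rapid prototyping, loop back to earlier phases)."""
    return [f"{p.name}: {p.purpose}" for p in phases]

for line in run_addie(ADDIE):
    print(line)
```

The strictly linear list is the point of the sketch: variations of ADDIE, such as rapid prototyping, differ mainly in adding iteration between these phases.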
Most of the current instructional design models are variations of the ADDIE model. One frequently used adaptation of the model is the practice known as rapid prototyping.
But even rapid prototyping is considered a somewhat simplistic type of model. At the heart of Instructional Design is the analysis phase; after conducting a thorough analysis, you can choose a model based on your findings. That is where most people get snagged: they simply do not do a thorough enough analysis. (Part of an article by Chris Bressi on LinkedIn)
Proponents suggest that iterative verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains, including software design, architecture, transportation planning, product development, message design and user experience design.[17][18]
Some other useful models of instructional design include: the Dick/Carey Model, the Smith/Ragan Model, the Morrison/Ross/Kemp Model.
Instructional theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning and cognitivism help shape and define the outcome of instructional materials.

Influential researchers and theorists
Lev Vygotsky - Learning as a social activity - 1930s
B.F. Skinner - Radical Behaviorism, Programmed Instruction - late 1930s-
Benjamin Bloom - Taxonomies of the cognitive, affective, and psychomotor domains - 1955
R.F. Mager - ABCD model for instructional objectives - 1962
Jean Piaget - Cognitive development - 1960s
Seymour Papert - Constructionism, LOGO - 1970s
Robert M. Gagné - Nine Events of Instruction - 1970s
Jerome Bruner - Constructivism
Dick, W. & Carey, L. "The Systematic Design of Instruction" - 1978
Michael Simonson - Instructional Systems and Design via Distance Education - 1980s
M. David Merrill and Charles Reigeluth - Elaboration Theory / Component Display Theory / PEAnets - 1980s
Robert Heinich, Michael Molenda, James Russell - Instructional Media and the new technologies of instruction 3rd ed. - Educational Technology - 1989
Roger Schank - Constructivist simulations - 1990s
David Jonassen - Cognitivist problem-solving strategies - 1990s
Ruth Clark - Theories on instructional design and technical training - 1990s
Charles Graham and Curtis Bonk - Blended learning - 2000s

See also
Since instructional design deals with creating useful instruction and instructional materials, there are many other areas that are related to the field of instructional design.
assessment
Confidence-Based Learning
DACUM
educational animation
educational psychology
educational technology
e-learning
electronic portfolio
evaluation
instructional technology
instructional theory
learning object
learning science
m-learning
Multimedia learning
online education
instructional design coordinator
Storyboarding
training
interdisciplinary teaching
rapid prototyping
Lesson study

External links

American Society for Training & Development (ASTD)
International Society for Performance Improvement (ISPI)
Association for Educational Communications and Technology (AECT)
Instructional Design - An overview of Instructional Design
ISD Handbook
International Journal for Instructional Technology and Distance Learning

Progressive Teaching of Mathematics with Tablet Technology

Birgit Loch, University of Southern Queensland, Australia, lochb@usq.edu.au
Diane Donovan, University of Queensland, Australia, dmd@maths.uq.edu.au

Abstract
We investigate a multimodal approach to lecture presentation built on tablet technology. This innovative approach presents a framework which provides organizational structure to the lecture and facilitates incorporation of additional detail through electronic ink. Diagrams, solutions and concept maps are developed spontaneously in real time, thus engaging and promoting student directed learning, through a creative, interactive and dynamic process. Our study analyzes benefits and drawbacks of this approach, through evaluation of instructor experiences and student feedback for three undergraduate mathematics courses held over three consecutive semesters.
Keywords: Tablet, undergraduate, multimodal, electronic ink
Introduction
Non-interactive computer technologies, such as PowerPoint slides, are standard tools used in modern lectures, yet, if slides are prepared entirely before delivery, we must question the effectiveness of these methods for the cognition of mathematical concepts. Such formats limit flexibility and encourage passive learning, and do not engage students since lectures cannot be adjusted based on audience reaction.
In response to this perceived problem the authors have investigated a multimodal approach to lecture presentation built on tablet technology. This new technology promotes a dynamic and interactive learning environment which fully engages students.
This paper reports on results of a study into teaching tertiary undergraduate mathematics using tablet technology and presents an analysis of students’ responses to this multimodal approach. Data has been acquired from students taking three different mathematics courses in three consecutive semesters; two of these courses were first year classes and one a second year class.
Lecturers’ experiences with this technology are discussed, and implementation issues, including benefits and difficulties, are analyzed. Benefits of tablet technology can include high-quality output and an enhanced ability to store and cross-reference all presented material. Electronic ink notes can readily be posted on a website, providing easy access for on-campus and distance students. Problem-solving techniques can be demonstrated, and cognition is enhanced by reading, listening, writing and thinking during a lecture.
Related work
Students need to learn mathematical symbols as well as mathematical explanation (Lowerison et al., 2006; Townsley, 2002); they need careful step by step instruction on how to work through a problem and how to present a solution in clear and precise, mathematical language (Loch, 2005). Consequently, real time delivery of problems and their solutions is an important aspect of any mathematics lecture.
The usefulness of electronic ink for lecture delivery has been discussed in a number of educational research papers. While some authors use a Tablet PC (Simon et al., 2003), others describe experiences with considerably cheaper graphics tablets (Loch, 2005; Loomes et al., 2002). Software packages have been written to perform certain tasks, many of which are tailored for distance education.
One example is Classroom Presenter, developed by Simon et al. (2003) in collaboration with Microsoft Research. Classroom Presenter is used widely in the US tertiary teaching sector, and is available for free download from the authors’ website. PowerPoint presentations are delivered in Classroom Presenter, which offers the potential to add space for live handwritten material. Classroom Presenter was first tested for computer science courses, then taken up by other disciplines such as engineering (Anderson et al., 2005). Its distinguishing feature is the use of the projector as an extended desktop, where the computer screen displays an instructor version of the presentation, including slide overview and personal comments which may differ from the projected image.
Recent releases of MS PowerPoint offer a pen mode, which allows adding ink to a slide during presentation. This feature can only be used to its full potential if a Tablet PC with Windows XP Tablet Edition is available, or if at least the latest version of MS Office is installed, since handwritten material may be lost during slide transition otherwise. However, as Microsoft Equation Editor is sometimes awkward to handle, PowerPoint presentations using electronic ink are more useful for courses which do not involve many typed mathematical formulae.
While it is clear that the concept of using electronic ink for lecture delivery is not new, our approach differs from previous approaches because it is compatible with mathematical typesetting in a natural way and is more suitable for demonstrating mathematical formulae than PowerPoint based ink approaches.
Background and Implementation
The Department of Mathematics at the University of Queensland developed lecture workbooks for all first and some second year courses (see Donovan (2000) for example). These workbooks contain background material, but also include blank boxes for addition, during lectures, of proofs, worked examples and student comments. The workbooks are written in LaTeX, a mathematical typesetting language, and offered as high-quality PDF files for download.
Apart from its portability, an advantage of the PDF format is that it can be used for dynamic lecture presentations run in Adobe Acrobat Standard, which allows comments to be added anywhere on a page: typed, pasted as images, imported from a file or written as electronic ink. These comments can be saved separately or merged with the original document.
During a lecture, the PDF file can be projected on the screen, giving organizational structure to the lecture and a framework which supports subsequent discussions. Missing as well as extra detail can be added with electronic ink. Solutions to problems may be developed spontaneously in real time thus promoting student directed learning and creating an interactive and dynamic learning process which fully engages students.
Environment
In our study, we experimented with tablet technology and the PDF workbook for teaching three different courses over three consecutive semesters, all offered to on campus students only. Two of these courses are first year courses, and one a second year course. Details are as follows:
Course 1 – semester 2, 2004 – Calculus and Linear Algebra I – taken by about 320 Engineering and Science students.
Course 2 – semester 1, 2005 – Calculus and Linear Algebra II – taken by about 600 Engineering and Science students.
Course 3 – semester 2, 2005 – Discrete Mathematics – taken by about 120 IT, Science and Electrical Engineering students.
While a graphics tablet was used for courses 1 and 2, a Tablet PC was available for course 3. Course 1 was taught entirely by one of the authors, while the other author taught the linear algebra component of course 2 as well as the logic and proof sections of course 3. The remainder of courses 2 and 3 was taught by different lecturers using traditional printed overhead slides.
Feedback and results
Towards the end of the semester, students in all three courses were asked to fill out a survey form. Some of the questions were the same for all three courses, while others were tailored to the specific situation. The contexts and results are presented as follows; an overview of answer distribution for similar questions can be found in Table 1.

Question                                          Course 1   Course 2   Course 3
I prefer if lecturer writes on computer              80%        12%        24%
I prefer if lecturer writes on OHP                    3%        60%        42%
Writing during lectures helps my understanding       89%        65%        95%
I cannot read lecturer’s writing                     12%        38%        11%
I can decipher/it is easy to read                    79%        30%        71%

Table 1: Distribution of answers for similar questions for all three courses
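For readers who want to compute with the survey figures, the contents of Table 1 can be transcribed into a small data structure. The Python below simply restates the table's percentages; the dictionary layout, shortened question labels and example query are ours, for illustration only.

```python
# Survey percentages transcribed from Table 1; the shortened question
# labels (keys) are ours. Values map course -> percentage agreeing.
table1 = {
    "prefer lecturer writes on computer": {"Course 1": 80, "Course 2": 12, "Course 3": 24},
    "prefer lecturer writes on OHP":      {"Course 1": 3,  "Course 2": 60, "Course 3": 42},
    "writing during lectures helps":      {"Course 1": 89, "Course 2": 65, "Course 3": 95},
    "cannot read lecturer's writing":     {"Course 1": 12, "Course 2": 38, "Course 3": 11},
    "writing is easy to read":            {"Course 1": 79, "Course 2": 30, "Course 3": 71},
}

# Example query: in which course was writing on the computer best received?
computer_pref = table1["prefer lecturer writes on computer"]
best = max(computer_pref, key=computer_pref.get)
print(best)  # Course 1
```

The query reproduces the pattern discussed below: course 1 (graphics tablet, annotated notes posted online) received by far the strongest preference for computer-based writing.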
In the first week of the semester, lectures in course 1 were presented with Acrobat Reader on the computer, with additional/missing material written on OHP (overhead projector). Due to too little projection space, the OHP was difficult to see. Computer-only projection, and writing with the graphics tablet, was introduced from week 2. No major technical problems occurred from this point onwards. Out of 65 students who responded, about 85% said they preferred writing in the workbook during lectures to receiving a complete workbook. Nearly 89% thought that writing helped their understanding. Asked if they preferred if the lecturer wrote on the computer, 80% agreed, while 3% responded that they preferred lecture presentation with an OHP. More than three quarters (79%) responded that they could decipher the lecturer’s writing. The completed lecture material, with all additions, was made available on the course website shortly after each lecture. Furthermore, 92% responded that they were in favour of computer-generated lecture notes being available on the website. About 46% said they knew students who “never go to lectures because the material is on the web” (see also Loch (2005)).
The linear algebra component of course 2 was taught entirely with an A3-size graphics tablet. This tablet was more difficult to handle (and carry) than the smaller tablet used in course 1 (A6 size), and the lecturer encountered a number of technical difficulties with the software. While all materials were available on the web, annotated notes from lectures were not. Out of 160 students, 65% agreed that additional comments written during lectures enhanced their understanding of the course material. Only 30% of students said they could read the lecturer’s writing easily. Keeping in mind that students had a direct comparison with the OHP from the calculus component, 12% preferred writing on the computer as the mode of presentation while 60% preferred the OHP.
Course 3 was taught with a Tablet PC. Although some technical problems occurred, they were quickly fixed and did not eat into lecture time. Base lecture notes, but not annotations, were made available on the web. Out of 38 student responses, 95% agreed that examples written during lectures enhanced their understanding. 71% said they could read the lecturer’s writing easily, and 42% thought the lecturer appeared to be comfortable with the technology. A surprisingly large number of students responded that they prefer the blackboard as mode of presentation (31%), up from 2% in course 2, while 24% said they prefer writing on the PC and 42% writing on the OHP.
A focus group of two students, both enrolled in courses 2 and 3, was interviewed to establish why students’ perception of tablet technology had changed from course 2 to course 3. They responded that the graphics tablet in course 2 was too big and difficult to handle and that they preferred the Tablet PC, but acknowledged that the differences in technologies were not related to course material. The lecturer was more confident with the technology in course 3, and set-up time was shorter. The writing on the Tablet PC in course 3 was easier to read and understand, while the graphics tablet in course 2 created a distraction. The focus group said that “everyone was frustrated” in course 2; this was not the case in course 3. However, the two students found that from the side of the lecture theatre it was difficult to read the OHP projection of the calculus component in course 2. Material seemed to be covered more quickly with the tablet, and the lecture appeared more organized compared to the OHP. Asked whether they thought tablet technology was just another teaching tool or whether it improved their learning, they agreed with the latter, responding that material presented on the Tablet PC was easier to understand than that presented on the OHP.
Discussion
After trialing tablet technology together with lecture workbooks for three semesters, we believe that, despite initial setbacks, this technology is a useful tool for the modern mathematics lecture. It combines computer-generated slides and activities with handwriting to emphasize key concepts or to facilitate modifying a path to a problem solution in response to student questions. Students can actively contribute to the lecture and may find their question or answer recorded on the lecture slide. Students appreciate that the lecture material is given to them in the form of the workbook, which provides writing space and organization of the material in one location.
Survey results and student comments suggest that the lecturer’s competency and dexterity with the tablet is a key factor in the successful teaching with this tool. Any benefits of tablet technology such as being able to refer back to previous material, keeping an exact high quality copy of lecture material and being able to post on a website were outweighed by technical issues in course 2. As stated by Anderson et al. (2005), “a risk inherent in using new technology in the classroom is that the technology becomes a distraction rather than a complement”. The enormous size of the graphics tablet used in course 2 was one of the reasons contributing to frustration in students and lecturer.
While the Tablet PC is more versatile, easier to use (writing takes place on the screen and requires less hand-eye coordination) and allows additional features, the use of the graphics tablet was very well received in course 1.
Writing on a graphics tablet or Tablet PC is not difficult. Anderson et al. (2005) nonetheless offer practical advice on legibility, layout, colour and contrast, periodically cleaning up a slide, pacing, and space for inking and student note-taking. This advice applies equally to writing on overhead slides.
The seriousness of technical problems and reliability of the equipment are major factors impacting on the successful use of tablet technology. In course 1, OHP technology together with computer projection appeared to be useless due to a lack of projection space, while this problem did not exist in courses 2 and 3. Students were able to directly compare tablet and OHP based teaching in course 2, and preferred the more familiar (and reliable) overhead approach. Their attitude was very negative towards tablet technology as they thought it was wasting valuable teaching time. This was clearly reflected in survey responses. Interestingly this attitude improved as the lecturer’s competency with the technology improved.
Students were able to download lecture material with electronic ink notes in course 1, which made the handwriting feature important as they were not told that full typed solutions existed.
Conclusion
The current study investigates an alternate multimodal approach built on tablet technology and electronic ink. This innovative approach proposes a framework compatible with mathematical typesetting which supports subsequent discussions and provides organizational structure to the lecture. Additional detail is incorporated through technologies such as electronic ink. Diagrams, concept maps and solutions are developed spontaneously in real time, thus promoting student-directed learning and creating an interactive and dynamic learning process which fully engages students. As the material is developed spontaneously, the process remains flexible and builds on students’ abilities. Tablet technology facilitates backtracking, the redefinition of ideas, the refinement of solutions and the investigation of alternate paths. It allows for the conceptualization and comprehensive understanding of complex mathematical ideas. Students are not overwhelmed by impenetrable solutions but may interact with the development process.
References
Anderson, R., Anderson, R., McDowell, L. and Simon, B. (2005). “Use of classroom presenter in engineering courses”, 35th ASEE/IEEE Frontiers in Education conference, October 19-22, Indianapolis, IN.
Donovan, D. (2000). “Interactive discrete mathematics” in Proc. of the Fourth Biennial International Conference of the Engineering Mathematics and Applications Conference EMAC 2000, The Institute of Engineers, RMIT Melbourne, 131-133.
Loch, B. (2005). “Tablet Technology in First Year Calculus and Linear Algebra Teaching”, in Proc. of the Fifth Southern Hemisphere Conference on Undergraduate Mathematics and Statistics Teaching and Learning (Kingfisher Delta’05), 231-237.
Loomes, M., Shafarenko, A., and Loomes, M. (2002). “Teaching mathematical explanation through audiographic technology”, Computers & Education, vol.38, 137-149.
Lowerison, G., Sclater, J., Schmid, R.F., and Abrami, P.C. (2006). “Student perceived effectiveness of computer technology use in post-secondary classrooms”, Computers & Education, in press.
Simon, B., Anderson, R., and Wolfman, S. (2003). “Activating computer architecture with Classroom Presenter”, WCAE2003.
Townsley, L. (2002). “Multimedia Classes: Can there ever be too much technology?” in Proc. of the Vienna International Symposium on Integrating Technology into Mathematics Education, Vienna, Austria.


e-Journal of Instructional Science and Technology (e-JIST) Vol. 9 No. 2
© University of Southern Queensland

TNT- IDS: Showcasing an Educational Informatics Project Development Guide Prototype



Ann M. Shortridge, Ed.D., University of Oklahoma, Schusterman Center, United States, ann-shortridge@ouhsc.edu
Toby De Loght, University of Antwerp, Belgium, toby.deloght@ua.ac.be
Benay Dara-Abrams, Ph.D., University of San Francisco and BrainJolt, United States, benay@dara-abrams.com

Abstract:
Today many organizations and university systems are adopting policies that require their professional staff or faculty to create technology-based educational opportunities. However, these individuals often lack the theoretical background for such endeavors. Therefore, the primary goal of this paper is to showcase the most recent stage of an online, research-based educational informatics project development guide prototype. This paper highlights our newly propagated knowledge base and many of the principles that determine quality levels within instructional products. These principles are common to a variety of learning environments, including academic courseware, consumer health informatics applications and just-in-time training modules.
Introduction
Few practical, multi-dimensional educational informatics project development guides exist that are based upon applicable research evidence (Schwier, 2001; Wiske, Sick & Wirsig, 2001; Shortridge & De Loght, 2004). Therefore, several members of an ongoing international collaboration began developing the prototype described in this paper in June 2004. This collaboration has focused on innovative action research and development projects in e-learning, faculty development and change management for the past three years.
We began showcasing our prototype when we presented the background theory for its user interface in a paper at the ED-MEDIA 2005 Conference in Montreal. Our unique Entry Point Framework user interface was based upon Howard Gardner’s (1983) seminal work, Frames of Mind: The Theory of Multiple Intelligences and various mechanisms for fostering online communities of professional practice.
Gardner delineated eight specific human intelligences: Linguistic, Logical-Mathematical, Musical, Bodily-Kinesthetic, Spatial, Interpersonal, Intrapersonal, and Naturalist (Gardner 1983; 1993). Further, Gardner proposed that individuals possess multiple intelligences rather than one single intelligence with which they process information and solve problems (Gardner 1983; 1993; 1999). We have integrated these various intelligences into our user interface Entry Points. The focus of this paper is to describe the content within our knowledge base as it relates to each of these Entry Points.
Background Motivation for Our Knowledge Base
In the late 1980s a growing body of literature began to emerge that assigned poor quality ratings to technology-based instruction. In response, a number of researchers began to look for possible causes and solutions. We drew upon this body of research in order to provide our users with research-based guidelines and tools. A brief description of some of the findings we believed significant follows.
In 1996, McNeil sought to develop a practitioner-validated list of competencies to guide individuals as they authored technology-based education, and identified ten critical skills. Seven of these skills were specific to instructional design processes while only three were technical in nature. Further, in 2000, Mioduser et al. conducted a seminal study that reviewed the quality of online instruction and concluded that it is usually poorly designed. Of the 486 educational products sampled, 52.5% left students with few options other than rote memorization, whereas only 4.6% provided opportunities for creation and invention and 5.0% provided students with opportunities to solve real problems (Mioduser, Nachmias, Lahav and Oren, 2000, p. 63). This led us to conclude that the authors of these products either did not possess the competencies outlined by McNeil or ran into constraints and limitations when attempting to apply them. Continuing to search the literature, we found that in 2001 Gibbs et al. sought to establish a matrix that could be used to accurately evaluate the quality of multimedia e-courseware. This study indicated that 16 significant evaluation guidelines ought to be included within such a matrix. Of these guidelines, five were specific to usability, three were specific to instructional design, and three related to the accuracy and timeliness of content and the credibility of its sources. The remaining five related directly to the types of media elements (such as graphics, animation, and so on) that need to be incorporated into such materials.
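The category counts reported for the Gibbs et al. matrix (five usability, three instructional design, three content/credibility and five media guidelines, for 16 in total) could serve as weights in a simple scoring rubric. The sketch below is a hypothetical illustration of that idea only; the category names follow the text, but the scoring function and example ratings are ours, not the actual matrix from the study.

```python
# Hypothetical rubric sketch based on the guideline categories attributed
# to Gibbs et al. (2001) in the text. Weights reflect how many of the 16
# guidelines fall in each category (5 + 3 + 3 + 5 = 16).
GUIDELINE_COUNTS = {
    "usability": 5,
    "instructional_design": 3,
    "content_credibility": 3,
    "media_elements": 5,
}

def courseware_score(ratings):
    """Weighted average of per-category ratings in [0.0, 1.0],
    weighted by the number of guidelines each category contains."""
    total = sum(GUIDELINE_COUNTS.values())
    return sum(GUIDELINE_COUNTS[c] * ratings[c] for c in GUIDELINE_COUNTS) / total

# Example ratings (invented for illustration):
ratings = {"usability": 0.8, "instructional_design": 0.5,
           "content_credibility": 0.9, "media_elements": 0.6}
print(round(courseware_score(ratings), 3))  # 0.7
```

Weighting by guideline count is one plausible reading of the matrix; the original study may well have scored categories differently.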
Regarding the types of media elements that need to be included in quality educational materials, a number of other studies and theory-based overviews have been published. For example, in 1993, Laurillard described a conversational framework, including a media analysis, that could guide productive discussions/interactions between students and teachers. Laurillard’s conversational framework is made up of 12 steps of discourse responsibility that set the stage for students to engage in high-level cognitive activities (such as problem solving). Laurillard’s media analysis evaluated how and to what extent a particular medium could support such activities. In some cases, Laurillard’s ideas also strongly suggested that a particular medium could actually become a virtual teacher. The media that received the highest scores in her evaluation were tutorial systems, simulations, micro-worlds, electronic collaboration or teamwork tools, and multimedia and audio resources.
Further, in 1994, Lloyd Rieber prepared an insightful guide for how to effectively design or choose and incorporate various types of visualizations in instructional materials. In his book, Computers, Graphics & Learning, Rieber proposed that Gagné’s events of instruction support five applications of graphics in teaching that lie within the cognitive-constructivist continuum: “cosmetic, motivation, attention-gaining, presentation and practice” (p. 45). Further, he asserted that cosmetic and motivational graphics impact the affective domain, while attention-gaining, presentation and practice graphics impact the cognitive domain. Additionally, effective cognitive-domain graphics gain and focus student attention selectively. That is, effective attention-gaining and presentation graphics help students to filter out external distractions, but should not themselves become a source of distraction or cognitive overload. (Cognitive overload occurs when too much information is presented at once.) Effective practice graphics support students in higher-level cognitive processes such as the integration of new information into prior knowledge. Finally, Rieber highlighted that the function of a graphic may be arbitrary, or it may be used to create a realistic representation or an analogy. Representational graphics depict reality or a simplified version of it; arbitrary graphics may be graphs, charts, and the like, or may depict a mental model of a system or a scientific phenomenon.
In addition, in 2003, Clark and Mayer compiled the findings of a number of experimental studies that demonstrated the impact of various media on student learning outcomes. Mayer also conducted studies on the psychological effects of omitting extraneous text and of personalizing tone. These studies indicated that materials presented in a concise and informal manner resulted in higher learning outcomes for students.
Further, our review of the literature also revealed three other major constructs pertinent to the development of high-quality technology-based instruction: (a) social context, (b) social presence, and (c) learner control. Interestingly, however, rather than standing alone, each of these constructs supports interactions between teachers and students. Social context and social presence are both integral parts of teacher-student and student-student interaction, whereas learner control influences whether or not these interactions will ever take place. Successful technology-based instruction that encourages high levels of interaction will provide students with social context, social presence, and learner control (Hill, 2002; Rotter, 1989).
Overview of the TNT-IDS Prototype Knowledge Base
The TNT-IDS Knowledge Base includes research-based guidelines on topics such as why and how to develop concise content, the impact of conversational tone, why and how to use graphics and other types of visualizations, how to use worked examples and/or practice exercises to model and teach knowledge transfer and problem-solving techniques, and how to manage discussion forums.
As shown in Figure 1, our user interface accommodates individuals by engaging them with these topics in multiple ways through the Entry Point Framework.

Figure 1: Educational Project Development Guide Framework
The Entry Point Framework offers seven points of entry into a learning experience, which activate a combination of the eight different intelligences (Gardner 1999):
The Narrative Entry Point invites people into a learning experience by relating a story, in the Learning Theory & Instructional Strategies and Index & Keyword Search, Glossary sections of our prototype. (The Index & Keyword Search, Glossary allows faculty to search for specific topics when they try to link different topics together, moving beyond the available structures.)
The Quantitative Entry Point provides an introduction through measuring, counting, listing, or determining statistical attributes in the Project Building Block section of our prototype.
The Logical Entry Point offers the opportunity to understand relationships among different factors by applying deductive reasoning in the Project Building Block section of our prototype.
The Aesthetic Entry point engages the senses through an examination and discussion of the visual and aesthetic properties of concepts in various sections of our prototype, including Best Practices, Learning Theory & Instructional Strategies and Tips & Checklists.
The Experiential (“Hands-On”) Entry Point allows learners to construct their own experiments with physical materials or through computer simulations in the Using Technology for Learning and Web Design & Tools sections of our prototype. As many organizations and institutions use an LMS (Blackboard, WebCT, etc.), the Using Technology for Learning section engages users by offering a number of roadmaps for effectively using the technologies available in an LMS. These approaches demonstrate how instructional strategies can be implemented technically. The Web Design & Tools section also contains tips for working around the limitations of LMS systems.
The Existential/Foundational Entry Point allows individuals to consider a subject based on its fundamental characteristics and underlying principles in the Literature section of our prototype. The Literature section offers overviews of relevant literature, giving educational product authors a jump-start and providing them with guidance as they further explore their interests.
The Interpersonal/Collaborative Entry Point engages learners in interactive, cooperative, and collaborative projects with others, or alternately in situations in which they can debate or argue with each other in the Discussion Forum areas of our prototype. The discussion forums extend the reach of the existing prototype. Using the Interpersonal Entry Point, they provide a safe environment to launch new and creative ideas and to obtain advice from participating colleagues as they implement their projects (Kahn 2005).
Highlights of Knowledge Base in the TNT-IDS Prototype & Related Entry Points
Figures 2–8 show several screen shots that illustrate how our knowledge base activates particular intelligences via specific Entry Points to elaborate on specific topics and to teach and model specific techniques for our users.
Figure 2 is a screen shot of one of the steps in Project Building Blocks. For the Project Building Blocks section of our knowledge base, we chose a generic instructional design model (ADDIE – Analyze, Design, Develop, Implement, Evaluate) that is also tightly linked to project management, in order to engage users through the Quantitative and Logical Entry Points. Our goal was to provide a structured approach for dividing large projects into small, manageable chunks by posing a basic set of reflection and design questions for our users to ask themselves.

Figure 2: Screen Shot of TNT IDS Prototype Project Building Block Steps
Figure 3 is a screen shot taken from the Learning Theory/Instructional Materials section of our knowledge base. This topic discusses why and how concise content supports learning in technology-based courseware. It also models how graphics can be used to highlight key concepts in text-based instructional materials: users click on a link and a graphic of the Inverted Pyramid Writing Style pops up.
Most people interested in building innovative courses are not in the field of education. Thus, when they attempt to base their courses/projects on a particular theoretical approach, they often lack, and cannot find, practical ways to implement it. We built the Learning Theory & Instructional Strategies section by asking people in the field to describe how they developed their courses/projects. As we documented how these individuals had built their courses/projects, we made the underlying theory transparent. This section engages users through the Narrative Entry Point.

Figure 3: Screen Shot of TNT IDS Prototype Concise Content Development Overview
Figure 4 includes screen shots from two different sections within our knowledge base. Taken from Learning Theory/Instructional Materials, the first topic provides a snapshot of a number of studies that discuss the impact of visualization techniques on learning outcomes. Taken from the Best Practices section, the second topic shows our users an exemplary case of how animations can be used to teach invisible scientific phenomena. Users view a static graphic and text-based explanation of the technique and can then navigate to the real course from which the example was taken and view the animation in real time. The example from Best Practices engages users through the Aesthetic Entry Point.

Figure 4: Screen Shot of TNT IDS Prototype Visualization Overview & Animation Technique
Figure 5 is a screen shot taken from the Tips/Check Lists section of our knowledge base. Two topics were taken from this section: the first provides our users with important insights regarding the importance of accounting for color blindness when designing or choosing graphics; the second discusses how font selection can be used to further influence the structure and tone of text. These two examples also engage users through the Aesthetic Entry Point. However, the Tips & Checklists section was designed to engage users through the Logical Entry Point; it contains both tools for planning and practical guidelines.

Figure 5: Screen Shot of TNT IDS Prototype Color Blindness Guidelines and Font Usage
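Guidelines of this kind can also be made operational. As one hypothetical check (our own illustration, not part of the TNT-IDS prototype), the relative-luminance formula from the W3C's Web Content Accessibility Guidelines can verify that a foreground/background color pair keeps enough contrast to remain legible for color-blind and low-vision users:

```python
# Hypothetical helper (not from the TNT-IDS prototype): computes the contrast
# ratio between two sRGB colors using the WCAG relative-luminance formula.
# WCAG recommends a ratio of at least 4.5:1 for normal body text.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (R, G, B) in 0-255."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (1.0 to 21.0) between foreground and background colors."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white: 21.0
```

A designer choosing graphic or font colors could run candidate palettes through such a check before committing to them.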
Figure 6 is another screen shot taken from the Learning Theory/Instructional Materials section of our knowledge base. This topic illustrates different options that users can choose when converting a course from a traditional face-to-face format into a technology-based learning environment. There are two sub-sections provided under this topic: choosing a re-design option and addressing new expectations. Again, links to real examples are included so that our users can see high-quality worked examples of each option. These examples engage users through the Aesthetic Entry Point.

Figure 6: Screen Shot of TNT IDS Prototype Worked Examples for User Design Options

Figure 7 includes screen shots taken from the Learning Theory/Instructional Materials and Best Practices sections of our knowledge base. Taken from Learning Theory/Instructional Materials, the first topic describes the benefits and potential of using instructional games and simulations. Taken from the Best Practices section, the second topic illustrates this technique by providing links to real examples so that our users can see high-quality worked examples of each option. This example from Best Practices also engages users through the Aesthetic Entry Point.

Figure 7: Screen Shot of TNT IDS Prototype Games/Simulation Overview & Best Practice Examples
Figure 8 is another screen shot taken from the Tips/Check Lists section of our knowledge base. This section provides our users with basic online moderating skills and important insights on how to structure and manage discussion forums. Well-structured discussion forums provide social context and help students establish social presence. Higher levels of social presence increase student interest and participation. Social context enables students to perceive virtual spaces, such as discussion forums, as real places; social presence enables students to present themselves, or take on a role, in a way that other participants perceive as real.

Figure 8: Screen Shot of TNT IDS Prototype Discussion Forum Insights
Future Directions
Building and sustaining the knowledge base within an educational informatics project development guide requires ongoing support and fine-tuning to maintain the currency and applicability of the knowledge provided. Therefore, the authors are in the process of defining specific approaches, initiatives and instruments to support this effort. The TNT-IDS prototype is currently published on the Web within the University of Oklahoma’s WebCT and the University of Antwerp’s Blackboard learning management systems. WebCT and Blackboard both secure the prototype and its content through password-protected access and afford the authors access from independent locations so that each site can be managed. Required electronic tools, including instant messaging and e-mail, are also available. Interested parties may explore the TNT-IDS Prototype by going to: https://webct.ouhsc.edu/webct/public/home.pl
User ID = tntproto; password = 2005.
References
Clark, R. C. & Mayer, R. E. (2003). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. San Francisco, CA: Pfeiffer.
Davis, J. (1996). The MUSE Book and Guide. Cambridge, MA: Harvard College.
De Loght, T., Dara-Abrams, B. & Shortridge, A. M. (2005). The online water cooler: Inviting faculty into professional development through the entry point framework. In Kommers P. & Richards G. (Eds.), ED-MEDIA 2005: World Conference on Educational Multimedia, Hypermedia & Telecommunications (pp 2622-2627). Charlottesville VA: Association for the Advancement of Computing in Education.
Gardner, H. (1983/1993). Frames of Mind: The Theory of Multiple Intelligences. NY: Basic Books.
Gardner, H. (1999). The Disciplined Mind: What all students should understand. NY: Simon and Schuster.
Gibbs, W., Graves, P. & Bernas R. (2001). Evaluation guidelines for multimedia courseware. Journal of Research On Technology in Education, 34 (1), 2 -17.
Hill, J. R. (2002). Strategies and techniques for community building in web- based learning environments. Journal of Computing in Higher Education, 14(1), 67-86.
Kahn, T. (2005). Designing Virtual Communities for Creativity and Learning. The George Lucas Educational Foundation, Retrieved March 10, 2005, from, http://www.edutopia.org/php/print.php?id=Art_483&template=printarticle.php.
Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London and New York: Routledge.
McNeil, S. (1996). A practitioner validated list of competencies needed for courseware authoring. In B. Robin, J. Price, J. Willis, & D. Willis (Eds.), Technology and Teacher Education Annual 1996 (pp. 338-343). Charlottesville, VA: Association for the Advancement of Computing in Education.
Mioduser, D., Nachmias, R., Lahav, O. & Oren, A. (2000). Web-based learning environments: Current pedagogical and Technological State. Journal of Research on Computing in Education, 33(1), 55-76.
Rieber, L. P. (1994). Computers, Graphics & Learning. Dubuque, Iowa: Brown & Benchmark.
Rotter, J. (1989). Internal versus external control of reinforcement. American Psychologist, 45(4), 489-493.
Schwier, R. (2001). Catalysts, Emphases and Elements of Virtual Learning Communities: Implications for Research and Practice. Quarterly Review of Distance Education, 2(1), 5-18.
Shortridge, A. M. & De Loght, T. Quality e-Education: Project management & content development strategies. In Nall J. & Robson R. (Eds.), E-Learn 2004: World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education (pp. 202-207). Charlottesville, VA: Association for the Advancement of Computing in Education.
Wiske, S. M., Sick, M., & Wirsig, S. (2001). New technologies to support teaching for understanding. International Journal of Educational Research, 35, 483-50.


e-Journal of Instructional Science and Technology (e-JIST) Vol. 9 No. 2
© University of Southern Queensland

Simulating a Mass Election in the Classroom



Dr. Andrew Biro
Acadia University, Nova Scotia, Canada
andrew.biro@acadiau.ca

Abstract
This article discusses the use of election simulation software in a course on American Government taught by the author in the Fall of 2004. The use of a computer-mediated election simulation allows for the experiential learning of certain features of mass elections in general, and US presidential elections in particular, that could not be done with “live” or smaller-scale electoral simulations. While the limitations of the technology do entail some precautions, overall the use of the simulation software proved to be a valuable pedagogical exercise.
Introduction
Teaching an American Government course at a small Canadian university, I wanted to use an election simulation that would teach students about the peculiarities of American elections, and American presidential elections in particular, with the complexities introduced by the electoral college system. Considerable evidence exists that simulations can be valuable exercises in political science courses. (Endersby and Webber, 1995; Kathlene and Choate, 1999; Pappas and Peaden, 2004; Princen and Stayaert, 2003; Smith and Boyer, 1996; Taylor, 2003) Simulations are valuable because they provide experiential learning (frequently characterized by participants as “realistic” or “authentic” experiences of political life) that contrasts with more traditional lecture formats. (Endersby and Webber, 1995) However, it should be clear that simulations in and of themselves are not necessarily realistic or authentic, nor are all the lessons learned from the simulation universally generalizable to the “real world” outside the classroom.
For example, in most election simulations designed for classroom use, the dynamics are necessarily those of a relatively small-scale electoral contest.i While valuable lessons about elections are no doubt learned by the participants, it is far from clear that all of these lessons can be applied to the understanding of elections taking place on a larger scale. In order to see the qualitative differences that emerge in mass-scale elections, students need to travel – at least virtually – beyond the bounds of a face-to-face community. More specifically, in teaching American Government (particularly outside the United States), there is a need to teach students how the electoral college system makes American presidential elections – with resources focused on a few hotly contested states – unlike elections in which candidates seek a majority from an undifferentiated electorate.
In order to address these issues, I decided to use a freely available presidential election simulation software – making some changes so that it could be used as a class, rather than individual exercise – to simulate the final month of a presidential election campaign. This allowed students to learn first-hand about the effects of the electoral college system, as well as some of the features of mass elections more generally (a geographically dispersed electorate, capital-intensive campaigning), which could not be easily reproduced using smaller-scale, “live” electoral simulations.
Simulation Software
The election was run using the Election Day (version 3.02) election simulation software, developed by John Gastil, which is described as “a semi-realistic simulation based on actual campaign laws, census data, public opinion surveys, voting patterns, and historical campaign environments.”ii In the simulation undertaken in this class, players controlled a fictitious candidate in a simulated contemporary US presidential election.iii
In this simulation, the main tasks for the candidates are to set a budget and schedule for each week of the campaign. The simulation provides players with a wealth of data: information on population, racial make-up, median income, education level, and party identification is provided by city and state; public opinion data on each of 15 campaign issues is also provided for each state; and each candidate’s organizational strength in each state is rated (1-100) and can vary throughout the campaign, being strengthened, for example, by a candidate visit. Planning the campaign thus requires a number of decisions: how much to spend, when, and on what; where to travel and what to do there; what issues to focus on; constructive versus negative campaign messages; how much energy to invest in particular events (more high-energy events will require the candidate to take more frequent rest days); and so on. And making effective decisions requires both research into the underlying data and strategic thinking about the campaign as it evolves.
Once the data for each candidate’s weekly budget and schedule is entered, the simulation proceeds through the week, reporting on the results of scheduled events, and also providing unscheduled events, such as endorsement offers or news stories, to which the candidates must respond. Then, at the end of the week, the simulation processes all the new information, and provides updated candidate polling data, in the form of a red vs. blue national electoral map familiar to observers of American politics (see figure 1), with electoral vote tallies for each candidate.
Figure 1: Map of end of week election simulation results
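The winner-take-all tally behind those weekly electoral maps can be sketched in a few lines. The following is our own hypothetical illustration, not the Election Day software's actual code; the state names and electoral-vote counts are simply the 2004 values for three example states. Each state's electoral votes go, in a block, to the candidate leading that state's poll.

```python
# Hypothetical sketch of a winner-take-all electoral-college tally,
# the mechanism the simulation reports at the end of each week.

def tally_electoral_votes(state_polls, electoral_votes):
    """Award each state's electoral votes to the candidate
    leading the popular vote in that state (winner-take-all)."""
    totals = {}
    for state, polls in state_polls.items():
        winner = max(polls, key=polls.get)  # candidate leading this state's poll
        totals[winner] = totals.get(winner, 0) + electoral_votes[state]
    return totals

# Illustrative data: candidates' shares of the popular vote by state.
state_polls = {
    "Ohio":    {"Dem": 49.0, "Rep": 51.0},
    "Florida": {"Dem": 50.5, "Rep": 49.5},
    "Texas":   {"Dem": 43.0, "Rep": 57.0},
}
electoral_votes = {"Ohio": 20, "Florida": 27, "Texas": 34}

print(tally_electoral_votes(state_polls, electoral_votes))
```

Note how the 50.5% Florida lead delivers all 27 of that state's electoral votes: it is precisely this all-or-nothing aggregation that makes a handful of close states worth so much candidate attention.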

Running the Simulation in a Class
Such a detailed and complex simulation can be a valuable learning tool. Inevitably there will be quibbles about the realism of certain features of the simulation (as well as, in this case, some stability issues – relatively frequent software crashes – which seem at least in part to be a function of developing a complex simulation with limited programming resources). Nevertheless, computer information-processing capacity allows a level of detail that provides players a sense of the scope of decision-making involved in planning and executing a presidential campaign. Unlike classroom simulations where local students are the electoral constituency, a computer simulation allows players to appreciate the increased complexity that comes with a national-scale campaign: managing a large budget, juggling voluminous voter data, appealing to a geographically dispersed constituency, and so on. Just as highly staffed and funded campaigns might test different campaign strategies and messages (via focus groups or polling), a computer simulation also allows users to test different electoral strategies (running the simulation multiple times with different choices). Because of the element of randomness built into the simulation, there is – again, realistically – no guarantee that strategies that users test on their own will yield the same results with the “real” campaign in the classroom.
One immediate problem with using this software package (and several others that were examined prior to the course) in a classroom setting, however, is that it is designed for use by only a small number of “players.” The simulation’s realism (its reliance on historical voter data) also means that, in presidential elections, any third (or fourth or fifth) player controls a candidate with little chance of electoral success. While experiencing the obstacles to third-party success might be a useful lesson in itself, running a campaign with no chance of winning may also lead to disillusionment and disengagement.
Both the large volume of data that can be assimilated in this simulation and the reality of two-party dominance in the American system thus pointed toward dividing the class into two large teams (one Democrat, one Republican). While most students in the class had some previous experience with working on small group projects, few had experienced working on a project with such a large group (18 students on each team). This presented challenges both for the students (who had to coordinate the activities of a large number of people) and for the instructor (who had to devise techniques to discourage “free riders” – see below).
At the start of the simulation, one entire class period (75 minutes) was devoted to having the two teams caucus for the first time. During this time, the teams divided themselves into smaller groups, each with specific responsibilities (e.g., budget, scheduling, planning events). The following week, the campaign began. Just before each class meeting devoted to the simulation (one per week for five weeks), each team had to submit their candidate’s weekly budget and travel schedule. I entered this data into the program on my computer. Then, in class, with the program running and projected onto a screen for all to see, we went through the week’s events. Teams had to respond on the spot to unscheduled events as they occurred. After class, I posted the updated data on a course website, which students could then download and use to plot out their strategy for the next week.
Dividing the class into large teams allowed students to divide up the research (exploring the data and testing different strategies), and thus to get a handle on an amount of data that would be unmanageable for a single individual. But they also quickly learned that plotting an overall campaign strategy required the coordination of individuals’ research efforts: event planners had to know where events were going to take place, travel schedulers had to know which states the campaign was targeting, and so on. In order to be successful, students had to work individually, in small groups, and in larger teams.
In discussing their electoral simulation (which similarly divided students into campaign teams), Pappas and Peaden note the familiar problem of free riders: “simulations allow for varying degrees of participation so it is difficult to ensure that all group members are pulling their weight.” (2004: 862) Given the size of the groups in this case, there was some danger that a significant number of students would act as free riders, and that a few students would do all or most of the work. Accordingly, I used two techniques to attempt to maintain broad student involvement. First, in addition to their team’s goal of winning the presidential election, each student was given an individual goal to achieve. These goals were of three types. Some students were assigned a congressional or state-wide race in addition to their role on the national campaign. In this case, their goal was to try to get the candidate to visit, and more generally to try to ensure victory (preferably by a wide margin), in their particular city or state. Other students were each assigned an interest group supporting the candidate: their goal was to try to maximize the exposure of a particular issue, and to ensure that the candidate did not moderate his/her stance on this issue. A final group of students were prospective appointees for particular cabinet positions. Their goal was to maximize the exposure of two or three issues related to their portfolio, and also to refuse tied endorsement offers from related groups, so as not to jeopardize their capacity to act once in office. In having differing – and sometimes conflicting – individual goals, the aim was both to develop a mechanism to penalize free riders,iv and also to give students a sense of the conflictual dynamics that are inherent to a large-scale campaign (or indeed any large-scale organization), and which are structural rather than personality conflicts.
The second technique for maintaining broad involvement was that students individually had to submit regular reports discussing the progress of the campaign (both the team’s campaign and progress towards their individual goal): three “interim” reports (after weeks two, three, and four), and one final report after the conclusion of the campaign. The reports were intended to be short (150 words for the interim reports, 300-400 words for the final report), and the interim reports were graded only on a pass/fail basis, to keep the marking load to a manageable level. But students generally seemed to take these assignments relatively seriously, and often exceeded the recommended length. Kathlene and Choate note that in their simulations, “grades are based predominantly on the ingenuity, creativity and quality of the written assignments. The exceptional students generally rise to the occasion.” (1999: 71) In this case, grades were based largely on completing written assignments, and only the final report was graded in terms of quality. Nevertheless, interest in the simulation was sufficiently high that a number of students produced high quality reports, with a few also providing the sorts of extra touches (e.g., submitting reports on personalized campaign letterhead) found by Kathlene and Choate (1999: 71-2).
Simulation Results
At the end of the term, a survey of students was conducted to assess the results of the simulation. An online (rather than in-class) survey was chosen in order to preserve anonymity (respondents were asked, however, to identify their team and gender). Unfortunately, the medium, combined with the timing (end of term), produced a relatively low response rate (n=15). The data from this survey was, however, supplemented with data from anonymous course evaluations (n=33), a class devoted to “debriefing” at the conclusion of the campaign, and my observations of, and conversations with, individual students during and after the simulation.
A number of students in the course evaluations listed the simulation as a highlight of the course. Colleagues also reported that discussion of the simulation was spilling over into other courses. In both the course evaluations and the online survey, a few students suggested more time should have been devoted to the simulation. Only one student (in the course evaluation) suggested that too much time was devoted to the simulation. All students who completed the online survey agreed that the simulation should be repeated in future offerings of the course (the question was not asked in the overall course evaluation).
Teams and smaller groups met or communicated frequently outside of class time. Interestingly, in spite of a campus environment that emphasizes technological connectivity, survey respondents reported relying more heavily on face-to-face meetings than on electronic or phone communication (mean = 4.14/5 vs. 3.67/5). On the other hand, both teams developed websites where campaign information was posted, and one team went so far as to have the website password protected.
While most students did participate actively in the simulation, some participated more intensively than others. On average, survey respondents reported spending 9 hours outside of class time working on the simulation (simulation participation and the written reports were worth a total of 15% of the final mark). While it seems likely that students responding to the survey were those more heavily involved in the simulation, it is worth noting that, for the question “On a scale of 1 – 5 [1 = much less; 5 = much more], how much time did you spend working on the simulation compared to other members of your team?” the mean response was 3.6. And while a few students complained of free riders on their team, one survey respondent felt the problem was not free riders, but the development of party elites: “there were a bunch of people on each team who really overshadowed the others simply with loud statements and false promises of success. ‘Trust me, trust me’ they cried, and they were trusted without positive result.”
One reason for using a simulation was to induce class participation and to engage students in experiential learning. The desire to increase class participation, however, had to be balanced against other outcomes. The incentive for winning the simulated election was fairly modest: one percent of the student’s final grade in the course. In part this was because I was wary of producing a dynamic of “extraordinarily intense” competition (Kathlene and Choate 1999: 72) that might spill over into the rest of the class’s activities. As well, the software appeared to produce a fairly high degree of randomness in its results (so that greater effort or smarter strategy would not necessarily guarantee winning the election).
As it turned out, the decisive turn in the campaign came in week 3 (of 5), when a close race turned into a strong, across-the-board lead for the Democratic candidate. There was no immediately obvious reason for this shift in party fortunes, which the Republicans overwhelmingly explained as a software glitch. Democrats, on the other hand, felt that at least part of the reason was their campaign strategy, so even this simulation “flaw” provided the means for a discussion of the operation of ideology in interpreting political events. In any case, once it became clear that the contest was effectively over (or that careful strategizing was less important than random computer variables), the intensity of student participation ebbed. Thus, while survey respondents indicated that having to write reports was more influential than having individual goals in getting students to participate in the simulation (mean = 3.87/5 vs. 3.2/5), at least some participants may have been even more strongly motivated by competitiveness, or by a desire not to let down their team members.
What did the students learn from this exercise? Table 1 shows the results of three survey questions, which asked respondents for their perceptions of specific learning outcomes. The average results for all three were closer to “very much” (5/5) than “very little” (1/5) – good, but not spectacular, results. On the other hand, a number of students commented that the simulation gave them an understanding of the electoral college system, and why presidential races are typically focused on a handful of battleground states, in a way that they thought reading a text or listening to a lecture would not have. In this sense, the idea of a simulation as a social science “laboratory” that provides “a deeper understanding of institutions, their successes and failures” (Smith and Boyer 1996, 690) seems to have been confirmed.
Table 1: Survey of student learning outcomes (n = 15)
“How much do you think you learned from the simulation exercise about…”

                                  Mean (1 = very little, 5 = very much)
The US electoral system           3.53
Election campaigns                3.87
Working in large groups           3.53
Lessons Learned: The benefits and drawbacks of the computer-simulated campaign
One of the main benefits of a computer-simulated campaign is that it provides the capacity to simulate a large (i.e. national) scale election. The most obvious benefit of this in an American Government course is that it allows students to experience first-hand the complexities and peculiarities of the electoral college system. Simulating a presidential (as opposed to local or state-wide) election thus allows students to understand the importance of “swing” or “battleground” states, and why resources in presidential contests are often highly geographically focused. Another benefit, which may be specific to teachers of American government outside of the United States, is that the Presidential race provides a more familiar frame of reference for students. The option of simulating, for example, a state-wide race that uses live local issues (Pappas and Peaden, 2004), is not available to teachers and students of American Government courses outside of the United States.
But another, perhaps more subtle, benefit of a computer simulation is that it may more realistically reproduce the experience of a mass election campaign, which is in essence a mediated affair. “Live” election simulations, by contrast, necessarily rely on a relatively small electorate, whom candidates and campaign workers deal with on a face-to-face basis. It is thus more difficult in a live simulation to demonstrate, for example, the importance of access to financial resources: an important factor in large-scale electoral contests. In Pappas and Peaden’s simulation, a survey of the student electorate showed that voting choice was most strongly affected by candidate speeches and debates rather than by mediated campaign messages (commercials, campaign literature, posters, and press releases). In mass elections, however, it is not just that voters’ impressions of the candidates are largely inflected by media messages, but also that saturation advertising and news coverage may affect voters’ impressions of a candidate at an unconscious level. For obvious reasons, the media environment of a contemporary presidential campaign (particularly in intensely contested battleground states) – and hence the importance of a well-funded campaign – is impossible to simulate in a classroom.
Finally, it should be noted that the setting in which this simulation was conducted may make it more difficult to reproduce elsewhere. The simulation was conducted at a relatively small school (3700 students) in a small town. Most students live on or close to campus, and many of the students in the class already knew each other at the beginning of the term. This undoubtedly made the coordination of large group meetings much easier than it would be, for example, at a large university in a metropolitan setting. Such obstacles of scale could be overcome, but would probably necessitate devoting a larger amount of class time to the simulation.
The other local peculiarity that made running the simulation here easier is a university-wide compulsory laptop leasing program (the “Acadia Advantage”). This ensured that all students had a common computing platform, as well as institutionally embedded and relatively extensive tech support. It was thus relatively easy to ensure that all students were able to load and run the software on their own computers, which was crucial for ensuring that everyone could participate in the simulation, and (if comments in the software’s user forum are any guide) is not something to be taken for granted.
More generally, there are a number of requirements for the software to be used for this simulation. Along with technical features – stability, user-friendliness – the software also should deliver the “realism” or “authenticity” that students expect of experiential learning. Election Day delivers better on some of these dimensions than others, although it is also worth noting that there are plans to upgrade this particular software, and as computer-mediated teaching becomes more common, other similar packages are likely to be developed.
If it is feasible, however, running such a simulation can be a worthwhile experience. It has the potential to engage students in a way that teaches them about the peculiarities of the American presidential election system, as well as the impact of scale on electoral processes and campaign strategies and dynamics. Given the contemporary state of communication technology, role of the media in society, and nature of mass democracy (in the United States and elsewhere), it arguably provides students with a more realistic experience of contemporary elections.
References:
Endersby, James W., and David J. Webber. 1995. “Iron Triangle Simulation: A role-playing game for undergraduates in Congress, interest groups, and public policy classes.” PS: Political Science & Politics 28, 3 (Sept.): 520-522.
Kathlene, Lyn, and Judd Choate. 1999. “Running for elected office: A ten-week political campaign simulation for upper-division courses.” PS: Political Science & Politics 32, 1 (Mar.): 69-76.
Pappas, Christine, and Charles Peaden. 2004. “Running for Your Grade: A Six-Week Senatorial Campaign Simulation.” PS: Political Science & Politics 37, 3 (Sept.): 859-863.
Princen, Thomas, and Karl Steyaert. 2003. “Water Trade: What Should the World Trade Organization Do?” In Encountering Global Environmental Politics: Teaching, Learning, and Empowering Knowledge, ed. Michael Maniates. Lanham: Rowman and Littlefield.
Smith, Elizabeth T., and Mark A. Boyer. 1996. “Designing In-Class Simulations.” PS: Political Science & Politics 29, 4 (Dec.): 690-694.
Taylor, Peter. 2003. “Nonstandard Lessons from the ‘Tragedy of the Commons.’” In Encountering Global Environmental Politics: Teaching, Learning, and Empowering Knowledge, ed. Michael Maniates. Lanham: Rowman and Littlefield.
i For examples, see Pappas and Peaden (2004), which relies on a debate open to the campus community, and Kathlene and Choate (1999), in which a large introductory political science class constitutes the electorate.
ii For more information on the simulation, see the Election Day website: http://www.election-day.info/ (accessed June 28, 2005). This phrase appears to have disappeared from the current site (accessed Dec. 6, 2005), which states that “the game lacks realism in a few respects, but that will improve.…” The realism of the simulation is discussed further, below.
iii The software also allows for lower-scale elections (state-wide and local), and the use of selected actual historic candidates.
iv Although the penalty was not particularly significant: achieving their individual goals was worth one percent of the final course grade.


e-Journal of Instructional Science and Technology (e-JIST) Vol. 9 No. 2
© University of Southern Queensland

Looking for Critical Thinking in Online Threaded Discussions




Paula San Millan Maurino
Farmingdale State University
pmaurino@optonline.net

Abstract
Threaded discussion forums have been a popular topic in distance education research for the past few years, studied as a factor in student participation, satisfaction, learning outcomes, social presence, and interaction. Only recently have they been considered as a potential vehicle for the development of critical thinking skills and deep learning. Thirty-seven current studies on critical inquiry, deep learning, presence, and interaction in distance education were synthesized and compared for findings about participation quality, participation quantity, critical thinking skills and deep learning, and recommendations. The synthesis revealed that the current literature touts the potential for developing deep learning and critical thinking skills through online threaded discussions. For the most part, however, research does not show this happening at a high level or to any great extent. Confounding the issue is the fact that current research is dominated by examinations of education and graduate-level online classes and is focused mainly on student perceptions and outcomes. This is at odds with the profile of today’s “typical” distance education student. The need for more instructor involvement and effort is indicated in much of the research, but the bulk of that research has focused on students rather than teachers.
Introduction
Learning through discussions or conversations is a fundamental part of teaching and learning, particularly in higher education. New communication technologies enable discussions to be held online as well as in the classroom. These discussions may form a component of a totally online distance education class or be used as a supplement to a traditional face-to-face class. The discussions can be synchronous, with participants “talking” at the same time, or asynchronous, where communication turnaround can be delayed by hours or days.
Online threaded discussions provide students with access to the forum twenty-four hours a day, seven days a week. Students can thus participate whenever they have the time and desire and at their own pace. This online “talk” can be more thoughtful since it offers the chance for reflection. Students have time to read each other's contributions and to think carefully about their own contributions. Messages can be composed and revised as needed and this writing may encourage discipline and rigor in thinking and communicating.
The relative anonymity of online discussion may also serve to promote enhanced and more intensive discussion. Students can concentrate on the content of the message instead of the presenter and may be more open and honest about themselves. They may divulge information that is more personal and revealing, which will, in turn, encourage others to do the same.
On the other hand, threaded discussions are written discussions and lack the affordances of oral conversation. Some students feel that these discussions are just a series of messages with no sense of community. The lack of facial expressions and voice makes the process less human, and the absence of nonverbal cues can lead to misunderstandings and misinterpretations. Asynchronous discussions may lack the speed, spark, and energy of a face-to-face conversation and hinder the development of dynamic and interactive discussion [1]. Fewer teacher prompts online and the “out of sight, out of mind” adage may serve to increase student procrastination. Further, multiple simultaneous threads can be confusing to follow and to respond to. Some students may overpost, while others suffer from “communication anxiety”: they feel detached and are not sure who is really out there, when to expect a response, and what kind of response it will be [2].
Discussion
Threaded discussion forums have been a popular topic in distance education research for the past few years, studied as a factor in student participation, satisfaction, learning outcomes, social presence, and interaction. Only recently have they been considered as a potential vehicle for the development of critical thinking skills and deep learning. In an effort to determine the efficacy of threaded online discussions in this regard, thirty-seven current research studies were analyzed and synthesized. The volume of research within these areas in recent years is substantial, so in an effort to condense and summarize it, a chart was constructed; the chart is shown at the end of this section of the paper. The research studies are listed alphabetically by author, followed by the date of the study. The next column indicates whether the study was conducted with a graduate, undergraduate, professional, or high school level group. The purpose of the study as stated in the journal article is shown next, followed by the methodology used. The next column indicates whether the class was totally online or whether just the discussions were online as part of a face-to-face class. The last column contains the major findings of the study.
Of the thirty-seven studies reviewed, nineteen studies evaluated classes at the graduate level and eleven at the undergraduate level. Although this paper deals with college level distance education courses, several other studies were included because they were cited frequently within other studies and considered valuable literature. Of these seven, two were on a high school level and five were on a professional level.
The majority of studies were performed with education classes. Of the thirty studies involving college classes, thirteen were education classes. Five were business related classes and four were computer related classes. The other classes varied across discipline.
The majority of the education classes were at the graduate level; only one undergraduate education class was researched. It is assumed that this is because education professionals are more interested in distance education research than are researchers in other disciplines, and because they have ready access to education classes and students as subjects. Why so many researchers have chosen graduate rather than undergraduate education classes is not known. The predominance of graduate-level classes in the research, however, is at variance with the statistical profile and demographics of current distance education students. The changing nature and demographics of the distance education student are discussed later in this paper.
As stated previously, studies were selected for review if the article indicated that the purpose of the research was to investigate critical inquiry, deep learning, presence, and interaction. The methodology varied and a number of studies used triangulation. Content analysis of class transcripts, discussion threads, or listservs was a popular method. These archived records have only been available for research the last five or ten years and as a result, are a popular newer method of data collection and analysis. It was used to some extent in 22 of the 37 studies. This content analysis was generally performed in an effort to analyze student responses. These student responses were then often categorized for quantity or quality. Some studies ranked student responses using a scale or taxonomy such as Bloom’s Taxonomy, Biggs’ SOLO Taxonomy, or Garrison’s Four Cognitive Processing Categories.
Another common research design was to compare student conversations online with face-to-face classes. Seven studies followed this methodology. Student interviews and questionnaires were also popular, frequently in addition to other methods. Of the 37 studies, fifteen interviewed or questioned students.
2.1 Research Findings
Kreijns, Kirschner, and Jochems in a 2002 study stated that there is a concomitant body of research that reports low participation rates, varying degrees of disappointing collaboration, low learning performance and quality of learning in distance education [3]. The analysis of these 37 studies supports some of these findings.
2.2 Participation Quantity
Some studies did report low participation rates [4] [5] [6] [7] [8] [9] [10]. Other studies focused specifically on “lurkers” or low participants [4] [11] [12], but found that these “lurkers” do learn by observing others. Hung and Nichani in a 2002 study further stated that lurking is a necessary step in becoming familiar with a particular culture [12]. A 2002 study by Picciano found that there was no difference in learning outcomes for low, moderate, and high participants [10].
Chen and Zimitat in a 2004 study found that online classes had more participation than in-class discussions [13], but the more common finding was widely varying degrees of participation by students in the same class [14] [10]. Hara, Bonk, and Angeli in a 1998 study reported that online participation by students was limited to the mandatory number required by the instructor [15].
2.3 Participation Quality
Online discussions were described as less personal than face-to-face discussions [16], perfunctory [5], less interactive and lacking in speed, spontaneity and energy [5] [17] [1]. However, some studies reported more honest reflective discussion online [18] [14] [15] [17] [19]. Online participation was described as good for information exchange [20] [21], but not for creative problem exploration and idea generation [22].
Other studies reported that threaded discussions do not encourage team building or group processes [8] [23]. Some online environments culturally condition students to agree with each other, and challenging each other’s ideas in discussion is considered a personal affront; there is little social discord [24] [25] [26]. Vonderwell in a 2002 study found that students claimed they all had similar ideas and thus there was nothing to really talk about [16].
2.4 Critical Thinking Skills and Deep Learning
Chen and Zimitat in a 2004 study reported that deeper understanding was shown in face-to-face classes than online classes [27]. On the other hand, similar amounts of critical thinking were found in face-to-face and online classes by Newman et al. in 1997 [22]. Hara, Bonk and Angeli in 1998 did find cognitively deep, lengthy postings with peer references, but still noted that students posted only the required number of postings and that comments were highly dependent on the directions of the discussion starter [15]. Heckman and Annabi in 2002 stated that based on their work, online discussions can generate cognitive levels equal to a face-to-face discussion [28].
When critical inquiry or deep learning was categorized in hierarchical levels, most messages or responses were ranked at the lower cognitive levels [1] [20] [21].
2.5 Literature Recommendations
Despite the difficulties cited above, most of the studies stated that online discussions have the potential to develop and foster critical thinking skills and deep learning. Overwhelmingly, however, they also stated that this was not yet happening at a high level or to any great extent.
Recommendations and suggestions to improve critical thinking skills development and deep learning included combining online discussions with other activities such as collaborative group work [26], case studies [26], production of tangible products [8], and problem and project based learning activities [19]. Other recommendations included developing more appropriate teaching and social presence [29] [24].
Mentioned most often as needed for improving deep learning in online discussions was better instructor effort [30] [5] [6] [26] [18] [31] [1] [32]. Along the same lines, setting clearer goals for discussion topics was frequently mentioned [14] [33]. Problems relating to a lack of clear goals or a shared purpose for discussion were discussed in a number of studies [34] [35] [20] [14].
Most researchers placed responsibility for social interaction squarely on the back of instructors. It is up to the instructor to create a sense of online community and make a space for social interaction to take place [36]. This space must foster intimacy, openness and connectedness. The teacher then must direct online discussions, influence the discussion by entering new topics, share new material and redirect conversational patterns as necessary [3].
It was stated that an interactive teaching style is the best pedagogical approach to Internet-based learning [37] [30], and that the types of questions the instructor asks are extremely important. The questions must be interesting as well as probing and prodding; they must elicit self-explanations from the learner, along with critical clarification and refinement [38].
Instructors are also responsible, according to the literature, for providing the scaffolding that allows students to advance from passive to deep learning. Teachers are the content experts and must guide and assist students in their quest for knowledge. They must diagnose misconceptions, inject knowledge from diverse sources, and respond to technical concerns [39]. On the other hand, some researchers recommend a “guide on the side” role, with a laissez-faire approach to moderating student discussions. There is some conflict between these two approaches and disagreement about whether the teacher in an online class should be a facilitator or a content provider. Further disagreement exists about which of the two approaches is more student-centered.
It is interesting to note that although better instructor efforts were mentioned frequently, there were not many studies that actually interviewed or focused on instructors. Mortera-Gutierrez in a 2002 study conducted three unstructured interviews with three instructors and found that the pragmatic approach of the instructor affects class interaction, skills, and strategies [40]. In 2003, Trollip and Blignaut categorized instructor postings and classified them as affective, administrative, other, corrective, informative and Socratic [32]. Li, in a 2003 study, interviewed one teacher to learn of problems of first time online students [9].
2.6 Other Factors
Some of the studies did not take place in entirely online classes. Students taking a blended class where they have some face-to-face meetings with the instructor and other students may not require the same level of social and teacher presence online. Students in these blended classes may have more time to devote to developing in-depth conversations since less time is needed for developing social connections. Also, the face-to-face discussions may stimulate idea generation for later online discussions. These opportunities are not available for students taking classes that are totally online.
Three of the studies also used synchronous discussions. Synchronous and asynchronous conversations have their own advantages and disadvantages and are not comparable in many ways. As mentioned with the blended classes above, students in classes with synchronous discussions may not have the same needs for development of teacher and student presence.
Lastly, some of the studies did mention that other factors affect critical thinking. Bullen in a 1998 study and LaPointe in a 2003 study mentioned the importance of learner characteristics [25] [41]. Guzdial et al. in 2002 and Rourke et al. in 1999 discussed the influence of discipline and culture [6] [24]. Students enrolled in technical disciplines are accustomed to a more didactic lecture approach and are not accustomed to discussing controversial or ethical issues. These students have been taught correct procedures and how to apply them, not how to discuss those procedures.
In summary, perhaps the most consistent finding was that deep learning does NOT happen spontaneously [41] and that when it does happen, it is difficult to measure [43].
2.7 The Changing Distance Education Student
The original target group of distance education was adults with occupational, social, and family commitments who wanted to improve and update their professional knowledge. Distance education has traditionally been interwoven with adult learning theory and lifelong learning. In 1991, Verduin and Clark described distance education as a form of adult education traditionally offered through extension units of colleges and universities, offering a choice of time and location, and designed for adults with the adult learning traits of self-direction and internal motivation [44]. The typical online student has generally been described as over 25, employed, a caretaker, and someone who has already completed some higher education. These learner demographics may have been true in the past, but are no longer valid.
The National Center for Education Statistics reports that online enrollment now spans all age groups. As of December 31, 1999, 65% of 18-year-olds had enrolled in an online course. It was also reported that 57% of traditional undergraduates aged 19 to 23 had been enrolled in an online course. These students take online classes at the same time as face-to-face classes. Online classes are not replacing face-to-face classes; they are being offered as supplements or alternatives within traditional college certificate and degree programs. Combining distance education with traditional degree programs is becoming a dominant theme [44].
The National Center for Education Statistics also reported that over one-half of the increase in distance education classes from 1997-98 to 2000-01 was attributable to public two-year colleges. This is particularly impressive, since general enrollment in four-year colleges has been outpacing enrollment in two-year colleges [45].
Schools granting associate degrees had the largest number of students taking at least one online course, representing about half of all the students studying online. Strong increases were predicted by all classes of schools offering associate degrees [45].
Fourteen years ago, Verduin and Clark described three major types of programs for adult learning and distance education: adult basic education for acquiring basic skills needed to function in a changing, increasingly technology based society; career education; and leisure and enrichment education [46]. The nature of online education has changed as well as the typical (if there is a typical) online student. A more common online student today may well be a young, full time associate degree student taking college courses online as well as in the classroom.