What is Education Technology?
Educational technology is most simply and comfortably defined as an array of tools that might prove helpful in advancing student learning. Educational Technology relies on a broad definition of the word "technology". Technology can refer to material objects of use to humanity, such as machines, hardware or utensils, but it can also encompass broader themes, including systems, methods of organization, and techniques.
Those who employ educational technologies to explore ideas and communicate meaning are learners or teachers.
In the sister fields of Educational and Human Performance Technology, the word technology means "applied science": any valid and reliable process or procedure derived from basic research using the scientific method is considered a technology. Educational or Human Performance Technology may be based purely on algorithmic or heuristic processes, and neither necessarily implies physical technology. The word technology comes from the Greek techne, meaning craft or art. The related word technique, with the same origin, is also used when considering the field of Educational Technology, so the field may be extended to include the techniques of the educator.
An Educational Technologist, then, is someone who transforms basic educational and psychological research into an evidence-based applied science (or technology) of learning or instruction, although the term can seem stuffy and almost arrogant to those who work with the tools. Educational Technologists typically have a graduate degree (Master's, Doctorate, Ph.D., or D.Phil.) in a field related to educational psychology, educational media, experimental psychology, cognitive psychology or, more directly, in the fields of Educational, Instructional or Human Performance Technology or Instructional (Systems) Design. Few of the theorists listed below, however, would ever use the term "educational technologist" to describe themselves, preferring less stuffy terms such as educator.
Objectives of Education Technology (Macro Level, Micro Level)
One objective at the micro level is the development of individual evaluations of different aspects of the NSDL program. The framework described here is based on the concept of the digital resource. A digital resource is defined as any servable electronic file retrieved by a user after a search of NSDL, including text files, pictures, films, animations, audio files, software applications, applets, and so on. The resource lifecycle framework describes the various stages in the 'lifecycle' of a resource, from the moment of creation through to the moment of educational use, and beyond to the moment of redesign and improvement. In doing so, it identifies appropriate points within this lifecycle at which NSDL's past and current capabilities may be evaluated. This model provides a set of general guiding principles and research questions, which can then be tailored to individual contexts in order to generate specific research questions and strategies. It is not offered as the definitively correct or prescriptive description of NSDL, but rather as a flexible, reconfigurable template within which various locally relevant evaluation activities can be planned and implemented.
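To make the idea of a resource moving through lifecycle stages concrete, here is a minimal sketch in Python, assuming a simple data model; the class and field names are illustrative assumptions and not part of the NSDL framework itself.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class LifecycleStage(Enum):
    """Illustrative stages of the resource lifecycle described above."""
    CREATION = auto()
    COLLECTION_AND_ACCESSIONING = auto()
    RETRIEVAL = auto()
    USE_AND_REUSE = auto()
    REDESIGN_AND_IMPROVEMENT = auto()

@dataclass
class DigitalResource:
    """Any servable electronic file retrieved after a search of NSDL
    (text, image, film, animation, audio, software, applet, ...)."""
    identifier: str
    media_type: str                      # e.g. "text/html", "video/mp4"
    stage: LifecycleStage = LifecycleStage.CREATION
    history: list = field(default_factory=list)

    def advance(self, next_stage: LifecycleStage) -> None:
        """Record a transition to the next lifecycle stage."""
        self.history.append(self.stage)
        self.stage = next_stage

# Example: a resource is created, collected, and later retrieved by a user.
resource = DigitalResource("res-001", "text/html")
resource.advance(LifecycleStage.COLLECTION_AND_ACCESSIONING)
resource.advance(LifecycleStage.RETRIEVAL)
```

The point of such a model is simply that each transition is a place where an evaluation question can be asked, which is the idea the lifecycle framework builds on.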
2 Focus, outcomes and audience
The focus of the evaluation is on a program-level, rather than an individual project-level, evaluation of NSDL. The central aims of the evaluation will be to assess how NSDL works as a program, particularly with regard to the efficacy of NSDL's program-wide organizational communication, organizational knowledge processes, and organizational integration. The evaluation will thus look at how well the efforts of various NSDL projects are supported by and thus contribute to the overall NSDL program. The evaluation findings will be used to inform the NSDL Core Integration report due at NSF at the end of 2006. The findings will also be used to inform a series of smaller and more focused evaluation reports that will be released to the NSDL community in the intervening period.
3 The resource lifecycle model as a ‘meta-framework’
A meta-theory is a general theoretical framework that can be used to develop specific, individual, locally relevant theories (Giddens, 1984). An evaluation meta-framework is thus a general evaluation framework that can be used to develop specific, locally relevant evaluation strategies. The particular meta-theory chosen to guide NSDL evaluation is a sociotechnical one. A sociotechnical evaluation framework is useful for NSDL because NSDL is
- a complex organization distributed over space and time
- composed of various interest groups with different definitions and understandings of NSDL and digital libraries
- in need of a coherent narrative to represent its structures, both to internal community members engaged in discussions over the organization's future and to external funding partners such as NSF.
A successful NSDL evaluation will have to address each of these points.
NSDL is a complex program that has funded over 200 projects covering a wide range of activities, including library architecture, database and search engines, web site design and usability, resource creation, collection development, and community and outreach activities.
Heterogeneous Community
An NSDL evaluation also has to address the fact that NSDL is a distributed, federated and heterogeneous organization, which includes a wide range of personnel and 'communities of practice' (Wenger, 1998). Different communities of practice will have different forms of digital library knowledge that will be partly tacit and taken for granted by each group. They will have different definitions of NSDL as an organization and as a digital library, and as a consequence, an NSDL evaluation will have to address the concerns of, and be tailored to the interests of, these different groups.
Finally, an NSDL evaluation report should paint a coherent and compelling picture of NSDL's past achievements and future goals. Reeves et al. (2003) stress that digital library evaluation strategies and reports should, at a foundational level, be informed by questions that can inform and support the future development of that digital library. Evaluation should not just measure and describe what is there, but provide a clear pointer to what might be, particularly in the form of data that can inform future development strategies. An NSDL evaluation strategy should be able to address the complexity and heterogeneity outlined in the previous two points, and also synthesize a coherent narrative that is useful for all NSDL stakeholders. An evaluation meta-framework can provide such a bridge from complexity to coherence.
A sociotechnical theoretical approach
As NSDL projects vary widely in form and function, there is no single evaluation strategy suitable and applicable to the whole of NSDL. An NSDL evaluation strategy will therefore have to be 'multi-faceted' (Marchionini, Plaisant & Komlodi, 2003), in order to bring to bear a range of evaluation strategies appropriate to a range of contexts and questions. In order to address these issues, NSDL can be modeled as a sociotechnical artifact (van House, Bishop & Buttenfield, 2003; on sociotechnical artifacts, see also Bijker et al., 1987; Bijker, 1995).
That is, NSDL can be modeled as a mixture of projects, technologies, organizations, people and practices that are connected, mutually constitutive, and emergent and evolving.
Using a small set of basic sociotechnical concepts to address NSDL's organizational complexity, an evaluation meta-framework can therefore tie multi-faceted evaluation efforts into a coherent narrative useful for future NSDL development efforts, and generate a coherent series of specific research questions that address major areas of NSDL activity.
4 Identifying NSDL’s educational impact
A crucial question to be addressed in the CI evaluation work is the extent of NSDL's educational impact. At present, the classroom impact of digital libraries is understudied, and compared with research into digital library architectures, tools, and services, relatively little is known about how digital libraries and associated technologies are actually used in educational settings (e.g., Arms, 2005). There are, however, a number of theoretical and practical obstacles that stand in the way of developing an evaluation strategy that could remedy this deficit. One significant obstacle is that the use of digital libraries is a complex phenomenon that is theoretically underdetermined. That is, we lack detailed understandings and models of all the variables associated with digital library use that would allow us to isolate and study the impact of the technologies themselves (Kozma & Quellmalz, 1995). While the quality of a library's resources and services is crucially important, to evaluate this in context we need to know about a wide range of attendant technological, economic and social contingencies, such as a school's intranet and bandwidth, the number and age of a school's computers, the presence or absence of technological support staff, the attitudes of teachers and policies of school administrators towards new technologies, the impact of new and unfamiliar technologies on existing work patterns and practices, the impact of new educational policies on teaching practices, and so on. Further, we also need to know how these variables are related. Without a thorough understanding of all these issues, it will be impossible to design controlled experiments and instruments that can successfully isolate the impact of one variable amongst many (in this case, NSDL).
This is not to say that 'impact' studies are impossible, but rather that evaluation research in this direction will have to begin by identifying useful impact models and variables, rather than assuming 'NSDL' and 'educational impact' exist as unambiguous and well-articulated phenomena that can be studied with relative ease.
NSDL-CI evaluation strategies are therefore shifting the emphasis of the 'impact question' away from the macro-level and towards the micro-level. This involves a corresponding shift in the unit of analysis away from macro-level educational metrics, such as test scores, and towards individual teachers and their micro-level practices. The 'impact question' then becomes not 'How has NSDL impacted test scores?', but rather 'How has NSDL impacted teaching and learning practices?', such as the use of technology in classrooms to demonstrate scientific concepts (Kozma & Quellmalz, 1995; Sumner & Marlino, 2005).
A central aim of the evaluation will therefore be to work towards the development of models of educational technology use that can describe and measure how educational technologies impact teacher and pupil practices, based on the understanding that those practices and variables are embedded in complex educational and technological environments (Oliver & Harvey, 2002).
5 NSDL’s organizational complexity
A central task of the NSDL program has been to develop organizational structures to support
distributed interdisciplinary teams in the creation and use of networked interoperable educational digital libraries. The organizational structures that have been developed so far have been complex, emergent, and distributed in time and space.
While pilot organizational structures were proposed at the start of the project, the final form of these structures was not pre-determined. In practice, NSDL’s organizational structures have evolved over time, both in response to the lessons learned from earlier stages of the project, and also in response to the technological, pedagogical and organizational challenges that have emerged since the project’s inception. Recently, these emerging organizational structures and relationships have begun to be replaced by more formal institutional relationships, as exemplified by the Memoranda of Understanding established with the Pathways Projects.
NSDL’s organizational structures are also distributed in space. Some two NSDL hundred projects have been funded, located across the United States. These projects were and are linked together in a variety of face-to-face and electronic arenas, including the Annual Meeting, workshops, telephone conferences, e-mail lists, newsletters, and wikis. Levels of participation in these arenas vary greatly.
NSDL’s distributed structure over both time and space means that the task of evaluating its
organizational structure is not a straightforward one. Information and data are spread out amongst its member projects, a number of which are no longer operational. Further, NSDL’s status as a grant- rather than contract-awarding institution means that there has, historically, been no direct obligation for funded projects to report progress and evaluation results to any central NSDL office.
Projects are expected to submit final reports directly to NSF, but from NSDL CI's point of view it is unknown what these reports might contain, if they contain any individual evaluation results, or even whether they were submitted at all. NSDL Core Integration does not have any automatic access to individual project policies, rubrics, metadata schema, and so on. One significant evaluation task will therefore involve contacting various NSDL projects and asking them systematically to codify and submit their organizational knowledge, lessons learned, etc., for collation and review. A second significant task will involve developing organizational models that can both account for and also evaluate how well NSDL's organizational components integrate and work together.
The complex structure of NSDL thus raises a number of questions for NSDL evaluation, including:
- Scale of the inquiry: What organizational scale should an NSDL evaluation focus on? Currently funded NSDL projects? All historically funded NSDL projects?
- Granularity of the inquiry: NSDL practices are often locally contingent to particular projects. Should an evaluation therefore attempt to validate all the individual processes that exist within NSDL? Or should it focus on the organizational processes whereby NSDL coordinates its practices?
- Representation of the inquiry: Given NSDL's heterogeneity, conceptual coherence is crucial for the presentation of evaluation results to internal and external partners. Clear description of NSDL's progress towards its goals, and the lessons learned over the past years, will help define and support the opportunities that NSDL will pursue in its next stage.
6 The ‘resource lifecycle’ evaluation framework
To address these and other issues, the NSDL evaluation is being carried out within a meta-framework that models the sociotechnical landscapes within which NSDL activities are conducted. This framework permits the development of individual and specific frameworks applicable to different parts of NSDL. The meta-framework adopted for NSDL evaluation is the 'resource lifecycle' model. The model focuses on digital resources in general rather than digital libraries in particular, and the core feature of the model involves tracking a digital resource through various stages from the moment of creation through to the moment of use, and then beyond to the moment of redesign and improvement.
The meta-framework is an idealized model that permits the generation of more precise NSDL evaluation models and questions, including models that integrate disparate dimensions of NSDL such as resource quality, metadata quality, usability, and GUI quality. It thus provides coherence and focus for a range of diverse and heterogeneous evaluation concepts and practices. The meta-framework identifies five basic areas of NSDL activity.
Four of these areas deal sequentially with the process of digital educational resource creation,
collection, and use:
- resource creation
- resource collecting and accessioning
- resource retrieval
- resource use and reuse
The fifth area deals with NSDL’s organizational knowledge and communication.
The model can be further broken down into sub-stages, including: resource creation and review; resource aggregation; item- and collection-level metadata; outreach; the NSDL web site (nsdl.org); resource search/discovery; resource use/support; resource sharing; and resource reuse. Taken together, these stages constitute a 'production line' model, the successive stages of which involve transformations of the digital resource, in the process adding value and utility to that resource. For example, a resource that has been reviewed for pedagogical effectiveness, scientific accuracy, and technological functionality is more valuable than a resource that has not; a resource described by accurate metadata is more valuable than a resource that is not; a resource with metadata embedded within a powerful search and discovery tool is more valuable than one that is not; and so on. While the activities of individual NSDL projects may not directly cover all the stages of the resource lifecycle, all stages of the model do affect NSDL's activities in one way or another.
This resource lifecycle model has several advantages for NSDL evaluation work. First, the model's stages provide useful conceptual foci and boundaries for evaluation efforts. Second, it provides a coherent overview of how disparate evaluation activities – such as webmetrics and ethnography – may be integrated into an overall evaluation plan. Third, the model provides a framework for making recommendations for improving organizational communication and knowledge processes. Finally, the model extends NSDL evaluation to include areas where future NSDL development may be directed. These new areas will be strategically important for NSDL, as NSF embraces a cyberinfrastructure model in which digital libraries act as institutional and informational 'pipes' between the scientists on the one hand and the classroom and society on the other. In this model, envisioning a digital library as only one of a number of network components connecting science and society unnecessarily limits the potential of digital libraries to develop digital services across these networks. In cyberinfrastructure terms, NSDL's future interests lie at least partly in bridging the gaps between data exposure and classroom use, and here the resource lifecycle model speaks particularly towards cyberinfrastructure as a digital publishing model, and NSDL's potential to support such a model.
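As an illustration of how the five lifecycle areas can serve as evaluation measuring points, the sketch below maps each area to one example question. The mapping is a hypothetical outline paraphrasing foci mentioned in this section; it is not an instrument used by NSDL.

```python
# A minimal sketch: one illustrative evaluation question per lifecycle area.
EVALUATION_FOCI = {
    "resource creation": [
        "Are resources reviewed for pedagogical effectiveness, "
        "scientific accuracy, and technological functionality?",
    ],
    "resource collecting and accessioning": [
        "Is item- and collection-level metadata accurate and complete?",
    ],
    "resource retrieval": [
        "How usable are the nsdl.org search and discovery tools?",
    ],
    "resource use and reuse": [
        "How has NSDL affected teaching and learning practices?",
    ],
    "organizational knowledge and communication": [
        "How well do NSDL projects share lessons learned with the program?",
    ],
}

def report_outline() -> None:
    """Print a skeleton evaluation plan, one block per lifecycle area."""
    for area, questions in EVALUATION_FOCI.items():
        print(f"== {area} ==")
        for question in questions:
            print(f"  - {question}")
```

In practice each area would carry many such questions, tailored to the local context as the meta-framework intends.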
7 Some concerns with and limitations of the approach
The model is too resource-centric and thus not applicable to all NSDL projects
NSDL has funded a range of projects in different tracks, including collections, services, and targeted research. Many of these projects are not directly involved in resource creation and review, so how will an evaluation that focuses primarily on resources adequately assess the contributions of these tracks to NSDL?
The resource lifecycle model is not intended as a strict ‘one-size fits all’ description of NSDL
projects and their activities, but as a heuristic that will guide the development of individual
evaluation initiatives. The model concentrates not on the resource itself, but on the resource
lifecycle. The focus is thus on examining the contributions that various projects make to the
resource lifecycle, rather than specifically to the resources that they may (or may not) create. The design of outreach workshops and community services, for instance, is just as suitable a subject for the evaluation program as is the design of metadata schema; and here, the evaluation will deploy more integrated, program-oriented evaluation strategies and metrics.
Granularity and the unit of analysis
How exactly is a resource defined in order to track it through the lifecycle?
The purpose of the resource lifecycle is not strictly to define 'resource,' but to act as a focal point for the identification and implementation of evaluation questions appropriate to each stage of the lifecycle. For instance, in the early stages of the lifecycle, the focus will be on collecting resource review rubrics and the evaluation of NSDL support to its members in this area. In the latter stages of the lifecycle, the emphasis will be on evaluating outreach and workshop activities through questionnaires and surveys. Thus, what constitutes a resource, and the level of granularity at which it is defined, may vary through the lifecycle, with the high-level aim of the evaluation being to ascertain how well NSDL as an organization provides an institutional framework within which resources can be valued, adopted, used, shared, and re-created.
The theoretically underdetermined nature of digital library use remains
The resource lifecycle model provides a useful framework to address the issue of the underdetermined nature of digital library use, but on its own it does not resolve this issue. The question of how to evaluate digital library use-in-context is not answered directly, but is deferred by the model, re-emerging in the stage of use and reuse. Does the question of NSDL's educational impact therefore remain unanswered?
The evaluation will involve not just capturing 'views from the center' through surveys, web metrics, etc., but the development of models and instruments for the study of teachers using NSDL collections and services in the classroom. As was described above (section 4), these studies will focus on the impact of NSDL on the micro-level practices of teachers, with the question becoming not 'How has NSDL impacted test scores?', but rather 'How has NSDL impacted teaching and learning practices?' A significant outcome of the evaluation here will be the development of metrics that can assess these changes in practices. It is likely that these studies will be more resource intensive, and will take a longer time to complete, than 'one shot' instruments such as questionnaires and lab-based usability tests.
NSDL is a complex sociotechnical artifact that is composed of multiple and heterogeneous technologies, member organizations, policies, and practices. Any evaluation of NSDL will not only have to address this complexity, but will also have to be meaningful to a wide range of audiences, including project PIs and administrators, NSF managers, and users. To address these complexities, NSDL Core Integration has adopted a program evaluation model – the 'resource lifecycle' – that acknowledges this complexity, and also models it, providing a series of evaluation measuring points about which a coherent narrative concerning NSDL's performance as an organization and as a program may be woven.
Aspects of Individualization.
Model of Learning
Views of learning discussed in the literature include the behaviorist, cognitive, developmental, humanist, and constructivist views. Three main theoretical schools or philosophical frameworks have been present in the educational technology literature: Behaviorism, Cognitivism and Constructivism. Each of these schools of thought is still present in today's literature, but each has evolved as the psychology literature has evolved.
This theoretical framework was developed in the early 20th century with the animal learning experiments of Ivan Pavlov, Edward Thorndike, Edward C. Tolman, Clark L. Hull, B.F. Skinner and many others. Many psychologists used these theories to describe and experiment with human learning. While still very useful, this philosophy of learning has lost favor with many educators.
B.F. Skinner wrote extensively on improvements of teaching based on his functional analysis of Verbal Behavior, and wrote "The Technology of Teaching", an attempt to dispel the myths underlying contemporary education and to promote the system he called programmed instruction. Ogden Lindsley also developed the Celeration learning system, similarly based on behavior analysis but quite different from Keller's and Skinner's models.
Cognitive science has changed how educators view learning. Since the beginning of the Cognitive Revolution of the 1960s and 1970s, learning theory has undergone a great deal of change. Much of the empirical framework of Behaviorism was retained even as a new paradigm began. Cognitive theories look beyond behavior to explain brain-based learning. Cognitivists consider how human memory works to promote learning.
After memory theories like the Atkinson-Shiffrin memory model and Baddeley's Working memory model were established as a theoretical framework in Cognitive Psychology, new cognitive frameworks of learning began to emerge during the 1970s, 80s, and 90s. It is important to note that Computer Science and Information Technology have had a major influence on Cognitive Science theory. The Cognitive concepts of working memory (formerly known as short term memory) and long term memory have been facilitated by research and technology from the field of Computer Science. Another major influence on the field of Cognitive Science is Noam Chomsky. Today researchers are concentrating on topics like Cognitive load and Information Processing Theory.
Constructivism is a learning theory or educational philosophy that many educators began to consider in the 1990s. One of the primary tenets of this philosophy is that learners construct their own meaning from new information, as they interact with reality or others with different perspectives.
Constructivist learning environments require students to utilize their prior knowledge and experiences to formulate new, related, and/or adaptive concepts in learning. Under this framework the role of the teacher becomes that of a facilitator, providing guidance so that learners can construct their own knowledge. Constructivist educators must make sure that the prior learning experiences are appropriate and related to the concepts being taught. Jonassen (1997) suggests "well-structured" learning environments are useful for novice learners and that "ill-structured" environments are only useful for more advanced learners. Educators utilizing technology when teaching with a constructivist perspective should choose technologies that reinforce prior learning perhaps in a problem-solving environment.
Taxonomies of Educational Objectives
Instructional techniques and technologies
Problem-based learning and inquiry-based learning are active-learning educational technologies used to facilitate learning. Technology, which includes both physical and process applied science, can be incorporated into project-, problem-, and inquiry-based learning, as they all share a similar educational philosophy. All three are student-centered, ideally involving real-world scenarios in which students are actively engaged in critical thinking activities. The process that students are encouraged to employ (as long as it is based on empirical research) is considered to be a technology. Classic examples of technologies used by teachers and Educational Technologists include Bloom's Taxonomy and Instructional Design.
Educational Psychology provides many opportunities for study in quantitative methods, with the student's program of course work focused in one of three concentrations. Two concentrations lead to a PhD degree, while the third results in an MA degree. In addition to course work, opportunities for on-the-job experience, either in the form of part-time positions or practicum or internship placements, exist within the University and in other educational institutions and agencies in the Austin area.
Persons with degrees in Quantitative Methods find employment in colleges and universities, professional testing organizations, educational research and development agencies, industrial psychology agencies, interdisciplinary research projects, governing agencies of higher education, state departments of education, and research and program evaluation divisions of large school systems. Here is a sample list of career possibilities:
University teaching and research;
Measurement specialist or data analyst in a personnel department of a corporate or government agency;
Test director or staff member at a state education agency or university testing center;
Measurement/statistical specialist for a testing company;
Testing specialist for a certification examining board, particularly in medical or allied health areas;
Test director or evaluation specialist for a school district.
Statistics and Research Methodology
Students who specialize in educational research methodology and statistics learn how to plan and execute research, as well as how to analyze and evaluate the research carried out by others. They acquire knowledge and skill in utilizing a wide array of statistical techniques, develop understanding of the relationship between research design and statistical analysis, acquire skill in choosing designs and analysis techniques that are appropriate for specific problems, and develop skill as consultants on problems of research methodology. They also acquire an understanding of the ways in which mathematics and logic can be used to reduce the uncertainty associated with conclusions reached in the study of human behavior. Typically, such students are motivated to expand the range and increase the depth of their understanding of problems that can be approached through the use of statistical methodology. In the context of a given problem situation, such students tend to focus more on the process used to arrive at the decision than on the substantive conclusions reached. An academic background with considerable exposure to formal mathematics is highly desirable.
Students who specialize in psychological and educational measurement study the theory and methods of measuring variables important in psychological and educational research, practice, and evaluation, such as aptitude, achievement, attitude, personality, and other cognitive and non-cognitive characteristics. Students in psychometrics are prepared to develop and use measuring instruments that are appropriate for specific educational and psychological purposes. The psychometric specialist also develops a high level of understanding of measurement principles and procedures, as well as extensive knowledge of statistical procedures and methods of experimental design necessary to conduct applied psychometric research. Auxiliary skills that are developed include use of computers and related mathematics.
Educational institutions and human service agencies evaluate the effectiveness of their programs and products. Evaluators are needed to apply specialized tools and techniques for measuring an array of outcomes; for example, they conceptualize program models, clarify problem objectives, design data collection schemes, construct instruments, analyze data, report results, and work with area personnel to facilitate use of evaluation findings to improve program effectiveness. Specialization in program evaluation consists of courses and field experience leading to the MA degree. Educational and human service issues and programs, and quantitative and qualitative evaluation methodologies, are emphasized. The MA evaluation concentration is open only to doctoral students in Educational Psychology and is relevant to those who have an interest in professional accountability, effective management of human service and educational programs, and applied research on organizational operations and effectiveness. The MA component includes a practicum and a Master's Thesis.
Quantitative Methods Courses for the PhD degree
Required for all EDP doctoral students:
Experimental Design and Statistical Inference and Lab
Psychometrics: Theory and Methods
Two other (secondary) courses in Quantitative Methods
Required for Quantitative Methods majors:
Colloquium in Quantitative Methods
Correlation and Regression Methods
Evaluation Models and Techniques
Survey Research Methods
Item Response Theory
Survey of Multivariate Methods
Special Emphasis Courses (Electives)
Advanced Psychometric Research
Computerized Adaptive Testing
Test and Scale Construction
Seminar in Advanced Psychometrics
Statistics and Research Methodology
Educational Research Methodology
Hierarchical Linear Models
Qualitative Research Methods
Structural Equation Models
Educational technology applications and lessons, such as user-authored software systems, simulations, and learning modules, face substantial difficulties being adopted and implemented in public schools and post-secondary institutions. The exponential growth in ownership of personal computers (the most common medium for applications) hides the fact that computers are under-utilized in schools (Becker, 1998; Cuban, 2002; National Center for Educational Statistics, 2000). Reasons for slow adoption and implementation include limited or problematic access to resources (e.g., appropriate hardware), cost, and poor technical support in schools, as well as reluctance on the part of teachers to change tried-and-true instructional practices and take risks with new technology (Ertmer, 1999). Given these formidable barriers to technology integration, good design may tip the balance in the decision to use and implement an application.
Difficult-to-use products are commonly ignored by teachers, while simple, compatible and effective designs have a better chance of classroom integration. Formative evaluation, one way of improving the design of learning technology, has typically concentrated on design modifications for the usability and curricular content of computer applications. Evaluators help debug software and gather feedback to make applications work better. With formal lessons, such as integrated learning systems, distance education, and web modules, evaluators help coordinate expert review and conduct field studies in the classroom with learning products. Both usability and curricular evaluation make valuable contributions to well-designed, usable learning products, but data collection tends to be limited to answering the question: Does the application or lesson work? The discussion of formative evaluation presented in this article examines a question not typically covered in many evaluation efforts or related literature, namely: Does the application or lesson fit into the context of schools?
Formative evaluation of implementation examines the technical, curricular, and practical factors that inhibit the implementation and compatibility of an application. Evaluators analyzing classroom use can help developers more closely match designs to conditions found in schools, thus making it easier for instructors to implement new applications and lessons.
Model of teaching
EDUCATIONAL TECHNOLOGY APPLICATIONS AND LESSONS
Before examining the formative evaluation of contextual obstacles to use, it helps to understand the forms and purposes of present day educational technology. The educational technology discussed in this article includes applications, such as computer software (and applications delivered by the Internet) and computer delivered lessons used in schools, colleges, and universities.
In general, these technologies provide students easier access to information, help students develop knowledge and skills, and link people in different locations (Brown, Bransford, & Cocking, 2000; Knapp & Glenn, 1996). In classrooms, applications and lessons support lectures, discussion, individual and group learning, and other class activities. Outside the classroom and in computer labs, the same technology is used to support independent study, homework, tutoring, skills development, and communication, or to aid in class-related research. The Internet and its applications are also used in distance education, both as a means of delivery and communication, and as an aid to learning and instruction. Designers of educational technology commonly take advantage of existing non-educational technologies (mostly hardware and systems) as vehicles for educational applications. The use of radio, television, and personal computers all started out in non-educational settings but eventually found a place in schools. In recent years, developers and instructors have adapted the use of email and the World Wide Web for instructional purposes. Presently, innovations in learning technology come from the areas of artificial intelligence, voice recognition, virtual reality, and other emerging technologies.
Three primary types of learning technology applications are used by instructors in secondary and post-secondary institutions: formalized lessons, activities, and user-authored systems. While instructors use technology for other purposes both inside and outside of classrooms, these three categories of educational technology applications and lessons are commonly developed by instructional designers, content specialists, engineers, and scientists with the goal of sustained use by teachers and students.
Knowledge, skill and attitude
Pre-packaged lessons, programs, modules, and curricula are products developed by publishers, universities, academic laboratories, or other organizations. In the past, these were often limited to distance education and were delivered by radio, television, or as “programmed instruction” on computers (Cuban, 1986). In recent years, the web has made it possible to deliver lessons and modules through the Internet, not only for distance education, but also as an adjunct to classroom instruction. Modules include not only text and pictures, but also exercises, electronic bulletin boards for communication, and other features that take advantage of the online format and interactive capabilities of computers (Weston & Barker, 2001).
Integrated Learning Systems (ILS), another type of formalized lessons, are “comprehensive software systems” that deliver instruction and include management systems for teachers (Mills & Ragan, 2000).
Hands-on applications, such as computer-based exercises, simulations, and games, are often offered as stand-alone activities, either through the Internet or as software packages. Most of these activities are highly interactive and provide immediate feedback to users based on their input. They differ from formalized lessons in that most do not have extensive text, formalized learning objectives, or sequenced lessons.
User-authoring systems (Locatis & Al-Nuaim, 1999), such as presentation software, authored simulations, and authored assessments, allow instructors to use "on-screen tools (menus, prompts, icons) that let users enter text, compose graphics, and prescribe branching . . ." (p. 66). In an instructional context, instead of formal content being written by developers, these systems allow teachers to design their own lesson content with forms and templates, select different functions of an application based on need, and choose content from reference tools. Because authoring is in the hands of the user, the eventual instructional product experienced by students varies widely. The instructional design for user-authored systems concentrates on creating a usable "environment" where instructors are guided to produce lessons.
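As a rough illustration of the kind of structure a user-authoring environment might expose to an instructor, the sketch below models an authored lesson as a list of prompts with simple branching. The data model is hypothetical and is not drawn from Locatis & Al-Nuaim.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Prompt:
    """One screen of an instructor-authored lesson."""
    text: str
    choices: List[str] = field(default_factory=list)
    # Branching: for each choice, the index of the prompt to jump to next.
    branch_to: List[Optional[int]] = field(default_factory=list)

@dataclass
class AuthoredLesson:
    """A lesson assembled by a teacher from templates and reference content."""
    title: str
    author: str
    prompts: List[Prompt] = field(default_factory=list)

    def add_prompt(self, text, choices=None, branch_to=None) -> int:
        """Append a prompt and return its index for use in branching."""
        self.prompts.append(Prompt(text, choices or [], branch_to or []))
        return len(self.prompts) - 1

# Example: a two-step authored quiz with a simple branch.
lesson = AuthoredLesson("Photosynthesis basics", "Ms. Rivera")
review = lesson.add_prompt("Review: plants use light to make sugar.")
lesson.add_prompt("Where does photosynthesis occur?",
                  choices=["Chloroplast", "Mitochondrion"],
                  branch_to=[None, review])
```

Because the instructor fills in the content, the eventual product experienced by students varies widely, which is exactly the point made above.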
Formalized lessons, activities, and user-authored systems are all designed for instruction and learning, but designers do not always anticipate the way in which applications and lessons fit—or fail to fit—into the context of schools. Like many other products, educational technology applications and lessons face challenges in implementation. In schools, the benefits of an application, real or perceived by teachers, may be overshadowed by hesitancy to change existing practices. Moreover, applications and lessons function in complex organizational environments where preexisting motivational, organizational, and cultural factors may work against their adoption and sustained use. Fortunately, many educational applications and lessons are highly modifiable and open to alterations in design that address these implementation problems.
LINKING FORMATIVE AND IMPLEMENTATION EVALUATION
Formative evaluation of educational technology applications and lessons reflects broader concerns about implementation found in other areas. Plans made during the early stages of policy-making or program design rarely translate into the reality envisioned by policy makers and developers. Initiatives in prevention, health, mental health, recidivism, and a host of other programs have commonly met with implementation problems. When program administrators, and the intended beneficiaries of programs, implement and respond to interventions, programs change in unanticipated ways because of political considerations, capacity limitations, and incompatibility between a program's design and the entrenched practices of individuals and groups. Almost 30 years ago, Pressman and Wildavsky (1974) described the breakdown between policy and outcomes in a case study of a large federally funded minority assistance program. The program failed to create jobs because of logistical obstacles, politics, and a lack of understanding of how program initiatives fit into local organizational structures and practices.
Wildavsky and Pressman pointed out that programs must adapt to their surroundings in order to survive, and recommended that policy makers take the realities of local bureaucracies, conflicting motivations, and other contextual conditions into account when planning programs. A decade later, McLaughlin (1987) summarized two generations of policy implementation studies, illustrating how "implementation dominates outcomes" in a wide range of areas. McLaughlin added that policies succeed or fail not only because of organizational and structural factors, but because of individuals, and that those responsible for implementing programs "do not always do as told," responding instead in "idiosyncratic, frustratingly unpredictable, if not downright resistant ways" to the efforts of program administrators (p. 172).
Because of the unanticipated responses of stakeholders, both in schools and in other areas, policy implementation involves extensive bargaining among stakeholders, iteration between policy actions and responses, and a thorough understanding of the context where the policy must function.
Evaluators examining implementation processes take on summative and formative roles. McLaughlin (1987) reminded evaluators that any generalization made from observed outcomes must be linked to the details of the program's implementation. Patton (1978) advocated framing evaluation questions "in the context of implementation," calling for evaluators to examine the substance and everyday workings of programs that lead to observed outcomes. Implementation evaluation describes, first, whether the program reaches implementation (is there a program?), and if so, what it looks like in practice. Describing and analyzing variations in implementation across sites, program strengths and weaknesses, as well as the responses of individuals and groups to program initiatives (and how the program adapted to these local conditions), are all part of what an evaluator does when examining implementation processes. The same information about the context of implementation that is used to help in assigning summative judgments about a program's merit can also be employed to improve programs.
Scriven (1967) first defined formative evaluation as the use of information to develop and improve programs and products. McClintock (1984) narrowed the definition to a systematic process using "empirical procedures" for providing "ongoing information to influence decision making and action" in policy and programmatic areas (p. 77). Dehar, Casswell, and Duignan (1993) described numerous health care evaluations (e.g., Edwards, 1987) in which evaluators worked during developmental stages of programs to improve their focus, methods, and orientation. For many efforts, evaluators help programs adapt themselves to their environments by making suggestions for altering program design. Chen (1996) formally linked process/outcomes and improvement/assessment distinctions with a two-by-two matrix of evaluation functions. "Process-improvement," one category in the matrix, is defined as evaluation aimed at providing information for program improvement through analysis of implementation strengths and weaknesses. Chen described a process-improvement evaluation of a family planning program in which "more married couples [were] persuaded to utilize birth control devices in an underdeveloped country if the service providers or counselors [were] local people, rather than outside health workers" (p. 124). Contextual information such as this is found only when a program meets with unanticipated obstacles in its ultimate setting. Design modifications for improving implementation (i.e., "using local people") are generated out of contact with local conditions, an activity at the heart of formative evaluation.
FORMATIVE/PROCESS EVALUATION OF EDUCATIONAL TECHNOLOGY
The formative/summative distinction in evaluation roles is clearly delineated in the field of educational technology. Evaluation of educational technology has traditionally focused upon summative outcomes over descriptions of the processes leading to outcomes. Movement away from “black-box” studies, more common in other areas, has been slow to reach the area of educational technology. Bullock and Ory (2000) discussed the types of evaluation models used in higher education to evaluate educational technology, stating that “comparative studies are probably the most often used approach to evaluating technological innovation . . . ” (p. 317).
The huge number of studies about distance education collected by Russell (2003) have in common not only their results ("no significant difference"), but also their comparative black-box design. While summative outcome studies are still used, more attention is now being paid to implementation factors that account for outcomes. In the evaluations of Apple Classrooms of Tomorrow (ACOT) (Baker, Herman, & Gearhart, 1994) and the GLOBE project (Means et al., 1997), implementation was described and linked to outcomes. Others (Baker, 2001) have also called for inclusion of implementation evaluation for educational technologies as a method of explaining outcomes, given implementation variation among teachers using the same application, lesson, or instructional approach.
The evaluation of educational technology applications and lessons has a strong tradition of formative evaluation. Authors writing about formative evaluation in educational technology (Flagg, 1990; Gustafson & Branch, 1997; Maslowski & Visscher, 1999) agree that the practice is concerned with feedback, revision, review, and improvement of product designs.
Evaluators systematically collect and communicate data (often preliminary user feedback) to developers, and iteration and prototyping occur with different versions of an application. Additionally, designs are compared to standards (e.g., International Standards Organization, 2003) and undergo expert review. Data collection methods are generally qualitative, with extensive use of observation, interviews, and surveys, but also embrace quantitative measures related to achievement and satisfaction and measures of use, such as frequency and duration. Ideally, the evaluator works as an independent mediator between end-users and developers, gathering suggestions for design improvement, synthesizing data, and communicating recommendations to designers.
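To illustrate the quantitative "measures of use" mentioned above, here is a minimal sketch that computes per-user frequency and duration from a session log. The log format, field names, and example values are assumptions made for the example, not a format prescribed by the sources cited.

```python
from collections import defaultdict
from datetime import datetime

def usage_measures(sessions):
    """Return per-user frequency (session count) and total duration in minutes."""
    frequency = defaultdict(int)
    minutes = defaultdict(float)
    for user_id, start, end in sessions:
        frequency[user_id] += 1
        minutes[user_id] += (end - start).total_seconds() / 60.0
    return frequency, minutes

# Hypothetical session log: (user, session start, session end).
log = [
    ("teacher_01", datetime(2005, 3, 1, 9, 0), datetime(2005, 3, 1, 9, 40)),
    ("teacher_01", datetime(2005, 3, 3, 9, 0), datetime(2005, 3, 3, 9, 25)),
    ("teacher_02", datetime(2005, 3, 2, 13, 0), datetime(2005, 3, 2, 13, 15)),
]
frequency, total_minutes = usage_measures(log)
# frequency["teacher_01"] == 2 sessions; total_minutes["teacher_01"] == 65.0
```

Measures like these complement, rather than replace, the qualitative observation and interview data described above.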
While much of formative evaluation for usability takes place before an application reaches the classroom, formative evaluation of educational technologies also involves field-testing, the closest that formative evaluation comes to examining contextual obstacles. Sanders and Cunningham (1974) were early advocates of field-testing for educational materials, calling "contextual information" a distinct category of inquiry about products that incorporates data from "the conditions under which the materials are expected to function" (p. 221). Scriven (1991) outlined four stages of formative evaluation for technology projects, from early "in house critiques," through controlled and then "hands-off" field tests, culminating in expert review.
Two types of clients typically hire formative evaluators; each has different approaches and priorities that have traditionally limited the scope of formative efforts. Software engineers (and other technical specialists) have largely concentrated on feedback about usability and interface design, with less attention given to design issues that impact the curricular and practical integration of technology. Formative evaluation of usability and interface design examines whether an educational application “works” in a purely instrumental sense, is usable, and has an attractive design (Seidel & Perez, 1994). In contrast, instructional designers and content experts evaluate lessons “delivered” by technology. Evaluation focuses on content and format to make sure that lessons effectively teach and assess their objectives (Dick, 1996).
Evaluators typically make recommendations for altered wording of text, clearer directions, more appropriate lesson length, and altered lesson sequence. Many projects combine elements of the two approaches as lessons are debugged, user interfaces are improved, and content and format are altered. The strategy of improving interface quality and "tweaking" of content starts early in development and continues through field-testing.
Formative evaluation of usability and lesson content/format is critical to any evaluation effort, but each set of practices may ignore how the design of applications and lessons interacts with the context of the classroom. A product with "bug-free" programming, a usable interface, and attractive graphics may still fail to be implemented because its instructional approach is irrelevant to teachers' needs, demands a large amount of an instructor's time, or requires unwieldy or unavailable technical infrastructure. Implementation concerns are generally only found during extensive and "hands-off" testing, a step that is often skipped or shortened by developers (Scriven, 1991). Even when field tests are conducted, many tests borrow from a simple product model: like field tests of automobiles, applications are tested to learn whether they hold up under stress. Developers commonly bring with them their own artificial environments for field tests, determining where, how, and for what function an application or lesson is used. This limits the ability of teachers to fit an application or lesson into their everyday class environment, and does not allow the evaluator to report on any implementation challenges and possible fixes.
New applications and lessons in educational technology are regularly met with widespread apathy by teachers. While teachers may adopt a new application or lesson mandated for use by districts or principals, they may subsequently hide the new product in the closet after only nominal use (Cuban, 2002). Becker (1998) estimated that students averaged only 40 minutes per week of computer time, and that most computer use in the classroom primarily involves "conservative" practices, such as "basic skills" programs and games, instead of more meaningful forms of curricular integration.
Conditions for successful implementation depend partly on the beliefs, motivations, and practices of teachers. Ely (1999) called implementation a “change process” and described eight general conditions that must be in place for instructors to adopt and implement educational technology. Instructors exhibiting characteristics, such as commitment, participation, and leadership, were more likely to use educational technology. Rewards and incentives were also an important condition noted by Ely. Teachers receiving extrinsic rewards, such as extra pay, or intrinsic incentives, such as satisfaction with the accomplishment of their students, were more apt to integrate applications into the classrooms. MacArthur (1998) also found that teachers must perceive a powerful benefit from technology either in terms of learning, or in terms of improved productivity, automation, or efficiency, or both, for implementation to occur. Ertmer (1999) examined “intrinsic” implementation barriers, such as negative attitudes on the part of teachers toward technology, and found these barriers inhibit implementation even more than limited resources and access. Ertmer also found that some instructors feared the adoption of new teaching methods that incorporate the use of new technology and thus were unwilling to change entrenched teaching practices.
Organizational, physical/technical, and practical factors also encourage and inhibit implementation. Easy and non-problematic access to hardware and applications is a first step to implementation, as are the timely and competent provision of technical support and effective professional development (Becker, 2000). Above and beyond these obvious external factors, Becker (1994) identified other conditions for implementation in the teaching environment of "exemplary computer users," such as smaller class sizes and collegiality among teachers at the same school. The existence of a culture of technological use at a school also helps implementation; researchers (Bates, 2000; Roschelle, Pea, Hoadley, Gordin, & Means, 2000) found that adoption and implementation of technology became more likely when a critical mass of teachers use technology in their classrooms. Inhibiting implementation are traditional cultural practices and the organization of schools. The exigencies of classroom management and the organization of time were found to discourage teachers from changing practices to include technology (Cuban, 2002). A lack of time to master technology, author lessons, and arrange logistics is another important inhibitor listed by Cuban.
While a wide range of implementation facilitators and obstacles are present in classrooms, many factors are not under the control of developers and evaluators of educational technology applications. Evaluators working for developers cannot change conditions found in schools, or directly change the behavior or attitudes of teachers. However, they can examine conditions that enable and inhibit successful implementation, and observe where these factors interact with product design. The remainder of the article examines technical, curricular, and practical implementation obstacles regularly found when teachers attempt to use educational technology applications and lessons. These obstacles are primarily found in the external environment of teachers, but interact with teachers’ internal perceptions of benefit, risk, and practicality of implementation. Examples of typical obstacles and recommended design changes are drawn from the evaluation practice of the author in both K-12 and higher education technology implementation.
The compatibility of an application with its ultimate setting is a major determinant of use. Designers of educational technology applications incorporate principles addressing compatibility, but the translation of a design principle into implementation tends to be incomplete because the designer cannot fully anticipate how his or her creation will function in its ultimate environment.
Design that looks good on paper, in a laboratory, or in a controlled classroom environment functions differently when used outside of these contexts. Design elements can also be a determinant of the willingness of an instructor to use new technology, and of the ease with which he or she can bring the technology into a classroom.
Compatibility, in strictly technical terms, involves software fitting operating platforms, hardware, software plug-ins, and other technical infrastructure; however, technical incompatibility (in the traditional sense) is sometimes only a small part of why innovations are not implemented. Other areas of compatibility, such as the organizational context of technical infrastructure, the curriculum and instruction embedded in an application or lesson, and the practical details of integrating new technology into a classroom, may have a greater effect on whether or not an application or lesson reaches students.
Three areas of compatibility problems are presented below, with descriptions of common implementation problems and typical design recommendations. The role of the evaluator as an independent advisor is separate from that of an instructional designer and involves extensive observation of use, interviews with teachers and students, and collection of logged data related to duration, frequency, and other records of use. The evaluator actively analyzes and synthesizes this evidence, and then makes design recommendations. Evaluation occurs as teachers and students use applications and lessons as "naturally" as possible for instructional purposes initiated by the instructor.
Contextual Technical Compatibility
For the developers of new educational applications and lessons, assessing technical compatibility is usually the easiest area of formative evaluation to understand because it is similar to commonly practiced user interface and usability evaluation. Assessment of technical compatibility focuses upon the ability of applications and lessons to fit in widely varied settings, on varied machines, and with varied operating systems and browsers. Many compatibility problems are invisible if usability evaluation is only conducted in a controlled setting with ideal hardware and operating systems, but become glaringly obvious when the application or lesson is placed in a school. These problems are typically related to (1) programming and (2) mismatches between the capabilities of schools and the characteristics of an application or lesson.
Common Implementation Problems
Programming. Uncovering errors in programming is generally the job of software engineers and occurs during preliminary stages of development and analysis of infrastructure. Evaluators can help remove bugs later on in the process by reporting them to developers and encouraging them to try out their applications on as many varied machines, platforms, and operating systems as possible. Evaluators can also encourage users to report bugs directly to developers through email or website forums, and help set up and monitor this communication. When the program is in pilot use, the evaluator can document bugs and describe the conditions present when they occurred. One difficulty with this type of formative evaluation is that programmers may insist that technology interactions with outside machines and platforms are essentially chaotic in nature. In fact, some problems are unpredictable and unrepeatable, but others will reoccur with regularity.
Capabilities. Obvious technical obstacles include exceptional hardware demands and problems in compatibility with existing infrastructure brought about by the design features of an application or lesson. School computer labs or home users may need specialized video cards, monitors, or processing capabilities to run applications; when faced with these demands, institutions or individuals may decide to skip implementation altogether. Highly advanced prototypical applications are most likely to pose obstacles to use in real classrooms. Teachers and school systems may reject a new innovation if they have to make substantial purchases or upgrades in order to run the application. Many newer web-based applications run into bandwidth compatibility problems when integration is attempted in rural or economically disadvantaged school districts. At the university level, web-based modules designed for use in broadband campus labs run into the same problem of slow connectivity and limited access when used by students at home. Evaluators can identify infrastructure variability across a collection of variables for settings where the innovation will potentially be used, and evaluate the match between the capabilities of representative schools and the features of the innovation. Variables, such as SES, school level, school setting, and school history, can be examined during early stages of design. Fortunately, some of this work is already being conducted by national survey organizations (NCES, 2000) that provide general overviews of computer-to-student ratios, types of machines in use, and other aggregate profiles of technical capabilities in public schools. However, these survey data are very general and only useful for designers making initial decisions about the features of an innovation. More customized information can be provided by the evaluator through surveys or interviews with representative technical experts in districts and focus groups with representative teachers and institutions. Most importantly, the technology needs to be frequently piloted in varied settings that represent authentic use.
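As a rough illustration of this kind of capability matching (not an actual evaluation instrument), the sketch below compares a hypothetical application's minimum requirements against invented infrastructure profiles for representative schools; all field names, thresholds, and figures are assumptions for the example.

```python
# Hypothetical sketch: comparing an application's minimum requirements against
# infrastructure profiles gathered from representative schools. Field names and
# thresholds are illustrative, not drawn from NCES or any actual survey instrument.

from dataclasses import dataclass

@dataclass
class SchoolProfile:
    name: str
    bandwidth_mbps: float      # typical available classroom bandwidth
    ram_mb: int                # RAM on typical lab machines
    has_projector: bool

@dataclass
class AppRequirements:
    min_bandwidth_mbps: float
    min_ram_mb: int
    needs_projector: bool

def compatibility_gaps(school: SchoolProfile, app: AppRequirements) -> list[str]:
    """Return a list of mismatches between a school's capabilities and the app."""
    gaps = []
    if school.bandwidth_mbps < app.min_bandwidth_mbps:
        gaps.append("insufficient bandwidth")
    if school.ram_mb < app.min_ram_mb:
        gaps.append("insufficient memory on lab machines")
    if app.needs_projector and not school.has_projector:
        gaps.append("no classroom projector")
    return gaps

schools = [
    SchoolProfile("Rural District A", bandwidth_mbps=1.5, ram_mb=512, has_projector=False),
    SchoolProfile("Suburban District B", bandwidth_mbps=50.0, ram_mb=4096, has_projector=True),
]
app = AppRequirements(min_bandwidth_mbps=5.0, min_ram_mb=1024, needs_projector=True)

for s in schools:
    print(s.name, compatibility_gaps(s, app) or "compatible")
```

A profile like this would be refreshed as the surveys, interviews, and pilots described above accumulate, so the match is evaluated against authentic rather than idealized settings.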
Typical Design Recommendations
Typical design recommendations arising out of evaluation of technical compatibility include stripped-down versions of software that run on older computers and/or can be delivered to computers with lower rates of connectivity. The ability to add features optionally, given the capabilities of schools, is also a common design recommendation. While these are generally good design principles, they are often overlooked in preliminary development efforts and controlled field tests. Many specific modifications arising from interactions with limited capacity are also not foreseeable in the early design phase. For instance, an application designed for projecting scripted presentations of human anatomical structures worked smoothly on newer projectors used at larger universities, but needed design modifications to work on older computer projectors common in other, less well-funded area schools. Another application based its operation on ready access to cheap palm devices (a system that worked in controlled field tests), but proved impractical because of expense, security concerns (the devices were easily stolen), and their unreliability. In response, the developers altered the design to work on standard computers.
Design challenges in technical infrastructure and usability can be overcome, but what can be more potentially devastating to the adoption and implementation of innovative learning technology is poor curricular and instructional fit. Challenges in implementation and adoption were addressed by Roschelle et al. (2000), who contended that successful integration of technology occurs in combination “with new instructional approaches and new organizational structure” (p. 90). This is especially true in elementary and secondary schools, where the introduction of technology is regularly accompanied by curricular reform brought about by systematic initiatives (e.g., state standards and assessments). In this environment any new infusion of technology must serve the higher goal of changing entrenched and ineffective instructional practices.
Common Implementation Problems
Content assessment. Critical to any effort is the assessment of the relevance of lesson material (if the material is curricular in the traditional sense) to the core content of standards and assessments, content narrowness or breadth, and content difficulty. Decisions about content are usually made by instructional designers or content experts early in the design process, but the ultimate assessment of fit does not occur until a lesson is implemented. Designs that incorporate embedded instructional methods and pedagogy (e.g., “drill and practice” or “inquiry-based learning”) must also fit with current philosophies and practices emphasized by extant reform efforts. Assessment of curricular compatibility for formalized curricular products comes through direct analysis of curricular frameworks (e.g., state standards), review with curriculum experts, as well as interviews and focus groups with teachers. In cases where no instructional designer is included on the development project, information about how the innovation fits existing standards and practices can be vital to the project’s success, and it may become the evaluator’s role to provide this information through needs analysis and literature review.
Compatibility with instructional practice. Analysis of instructional compatibility in practice involves observation of the application or lesson as it is used with students, and interviews with teachers as they assess utility and compatibility with instructional goals. Applications (especially activities, simulations, and games) implicitly structure student activities, calling upon students to respond to prompts, enter answers, or conduct interactive activities. These learning activities, partly determined by the design of the application, may or may not fit with the goals of the instructor, or the way in which lessons are structured in everyday classrooms. The design of the application may also resist efforts to integrate complementary instructional activities where the application is used in an instructional sequence.
Typical Design Recommendations
Examples of design recommendations for curriculum limitations include basic alterations of content to better reflect standards, as well as the incorporation of flexibility into a design so that instructors can add or modify content to fit local conditions. For example, developers of a science module for middle school students focused extensive effort on lessons in one area of content that, while usable, only fit into a limited area of typical course curricula. Instructors who previously responded positively to the content of the lesson and found the lesson usable said ultimately that the effort involved in implementing the lesson was not worth its payoff because of its narrow focus. The recommendation in this case was to expand the content of the module to cover more topics. Examination of how the application or lesson fits into instructional practices can lead to changes in structured lesson activities that mirror exercises and assessments found in classrooms. For instance, interviews with instructors using the anatomy software described above found that the application would work well as a stand-in for laboratory assessments using animal cadavers. While this general use was anticipated, the exact manner in which the application would be used in the classroom was not, and involved alterations to the design of the application’s interface to accommodate this use. Other design modifications related to instruction arise from the ability to modify and adapt materials. Teachers using an application allowing for automated feedback on student summaries of written text were satisfied with the functioning of the application in field tests. However, they were limited by the amount of time and work needed to implement new material in the application’s database. Design recommendations included broadening the range of preexisting texts available to teachers, and implementing a systematic and accessible method of entering texts into the database.
Cuban has conducted a great deal of research about the practical realities of classrooms and technology and is a leading critic of the rush to wire American classrooms. In his history of technology use in the classroom (1986) and in other writings (2002), he has discussed factors in the way in which teachers’ work is organized that inhibit the use of learning technology. In Cuban’s model, teachers’ traditional practices have developed in rational ways in order to manage groups of students and use time efficiently; sometimes the use of new technology does not fit comfortably with these practices, or the limitations placed upon teachers by school structure.
Common Implementation Problems
Teachers contend with practical conditions placed upon them by the structural and organizational contexts of schools. Common practical obstacles include problematic access to technology, limited time, and the challenges of classroom management. Access is sometimes confused with the number of computers in a school, but is in reality much more complex (Becker, 2000). While a school may have a high student-to-computer ratio, the computers may be heavily scheduled, lack specific capabilities for easy use at a specific time, or be in the wrong place. Limited time, another practical constraint, is a fact of life for teachers. Any use of technology necessitates time spent in familiarization with the application or lesson, and for user-authored applications, significant time is spent creating and finalizing lessons. Teachers may also have to devote time to arranging support technology for implementation. Classroom management is concerned with how students behave when an application or lesson is introduced and used in an educational context. Lessons or activities must fit into existing class periods, must be easy to start, and must allow work conducted on one day to be easily continued the next. Most importantly, applications must be durable and reliable so the teacher is not left to scramble for alternative lesson plans if the technology breaks down.
The ultimate goal of formative evaluation is to create a more usable, compatible, and effective product. To generate design recommendations, the application or lesson must be observed in its natural setting, but this step is often skipped in many formative evaluations of educational technology applications and lessons. Information and recommendations made from formative evaluation of implementation regularly focus on non-technological phenomena, such as how teachers structure their day. Even when the focus is on the technology itself, compatibility problems usually seem almost incidental to the stated functions of an application or product, one reason why this aspect of design is so overlooked. While a product “does what it is supposed to,” aspects of the product’s design and its ability to be implemented can make functionality almost irrelevant.
Evaluators in other product development contexts, especially those where a product faces challenges in implementation, could benefit from the same general approach to formative evaluation of implementation context described in this article. The field of medical technology faces many of the same concerns as education. Lowenhaupt (2003) described failed efforts by a hospital staff to implement a new records system, and suggested those wishing to implement new systems consult with practitioners, such as nurses, to learn how the technology works in its ultimate setting. Telemedicine implementation problems were described by Stumpf, Zalunardo, and Chen (2001), who noted that most implementation problems were found in unexpected interactions between the technology and its users. Other non-educational technological applications, such as assistive technology for the disabled (NATRI, 2003), air-traffic control software (Paterno, 1998), and business communication systems (Lewis, 2003), face similar challenges with resistant practitioners, incompatibility of the product with the environment arising from preexisting practices, and modifiability in design that addresses implementation obstacles.
This article has presented a model for the formative evaluation of implementation for educational technology applications and lessons. Formative evaluation in this area has traditionally focused on usability and curricular uses of an application, and not upon optimizing design for sustained and effective use in context. The evaluation practices presented in this article focus on generating design suggestions by learning about the technical, curricular and practical compatibility problems of an application or a lesson in its environment. Potential fixes in design are communicated to developers in an effort to make technology more compatible with the authentic conditions in schools.
Using Educational Technology to Enhance Learning and Teaching
Information technology (IT) offers tremendous promise for enhancing the academic experience.
Educational technologies include not only the Internet, which provides access to university websites directly tied to courses as well as to resources around the world, but also innovations in recording, collaborating, and responding technologies that offer enhanced environments for scholarly interaction and intellectual pursuit. These technologies are valuable when they serve the larger educational goals of the university: to create active learners who not only master the content of their chosen fields, but also develop techniques and modes of critical thought that will enable them to be informed and discerning citizens and contributors to their professions.
Most UCLA students are immersed in information technology in their daily lives. They expect that their academic lives will be similarly rich in technology, and that they will leave UCLA as
technology-savvy graduates. Both faculty and students are end users of educational technology, and from it they gain vastly improved access to course materials and to one another. But crucially, the technology landscape now includes a rich mixture of new kinds of course materials: discipline-specific multi-media content, simulations, and applications, as well as tools for communication, collaboration, writing, and research. Educational technology holds the promise of creating more interactive classes, engaging students more deeply and more actively in the course content, and contributing to a student’s learning of complex concepts by adapting to the student’s level and progression of understanding. To improve the learning experience significantly and consistently across the undergraduate and graduate curricula, however, UCLA, like comparable institutions, faces many challenges in developing practices, policies, and resources to adapt to ever-changing educational technology. These challenges are not merely financial, though they are obviously that; they also include a leadership challenge. In
this essay, we focus on our capacity to build on our diverse experiences and to develop a more
cohesive approach to leadership, infrastructure, and services based on a shared understanding of the uses of technology that will have the greatest impact on student learning and faculty teaching.
Reflecting on Past Successes: Three Examples
1) Support for Technology in Instruction. For over two decades, the Office of Instructional
Development (OID) has provided a broad range of services in support of undergraduate instruction. Innovation grants, many of which include the use of technology, are awarded directly to faculty each year. OID’s Teaching Enhancement Center provides training and consultation in the use of technology. Its Teaching Assistant Technology Training Program, initially funded as a national model by the Fund for the Improvement of Post-Secondary Education, includes modules on the effective use of technology by graduate students. Most recently, OID has provided such innovations as video streaming, podcasting, and classroom personal-response systems. Other support for innovation by faculty and teaching assistants occurs in units across the campus, at the level of either the division (e.g. the Center for Digital Humanities and Social Sciences Computing) or the department, program, or individual faculty (e.g. Virtual Office Hours in the Department of Chemistry and Biochemistry). The new Institute for Digital Research and Education and the NSF-funded AccessGrid support graduate education in the use of technology for computation and simulation across units and campuses. The largest educational technology impetus at UCLA in recent years has been the Instructional Enhancement Initiative (IEI), which is both a program and a funding mechanism for providing some components of the educational technology infrastructure at the department and division level. In 1997, the College began to charge a per-unit fee for all regular undergraduate courses and became an early adopter of the now-standard practice of universal course websites. College IEI money is distributed to departments or other units in its four academic divisions. As detailed in a recent report, these resources (~$5.5 M/year) support the development and maintenance of course web sites, course management systems, student computer laboratories, the computing commons in the library (CLICC), the web portal to individualized course information (MyUCLA), and assistance to faculty in the use of educational technology. The Henry Samueli School of Engineering and Applied Sciences now
similarly assesses a per-unit fee to provide computing resources for all its undergraduate courses. The implementation of the IEI has been a noteworthy success in meeting educational technology challenges specific to UCLA, in part by forming a consensus around the model of a common enterprise that is implemented and administered locally. The IEI builds on UCLA’s culture of distributed innovation by placing resources as close as possible to the point where support and services for students and faculty are needed. However, IEI resources arise from and are dedicated to undergraduate courses; there is no equivalent general support for graduate education.
2) Governance. Under the leadership of the Associate Vice Chancellor-Information Technology
(UCLA’s CIO), who heads the Office of Information Technology (OIT), UCLA has made significant progress in establishing a governance structure for deciding institutional IT direction, policy, and investment. The Information Technology Planning Board (ITPB)—a joint Academic Senate-Administrative board responsible for strategic planning and policy recommendations for academic and administrative applications—was established in 2001. Because of the importance of technology for education, the Faculty Committee on Educational Technology (FCET) was established soon after to provide advice to the ITPB and to the (then) College Provost. Now, with a broader membership, it serves that role for the CIO and the Vice Provost for Undergraduate Education. Local units’ governance models vary, with some having very active faculty advisory committees. The Campus Computing Council (CCC) brings together the IT directors from local campus units.
3) Educational Technology Leadership. The ITPB developed a campus-wide vision for educational
technology with two goals: 1) to integrate students into an educational technology-enhanced teaching, learning, and research environment, and 2) to use the Internet to support scholarly interaction, both to engage students and to enhance external access to UCLA. This vision for Educational Technology has been continuously reviewed and refined through the IT governance structure. Over the past six years the FCET has developed a strategic vision and recommended educational technology services and initiatives, as demonstrated in the Annual FCET Report. In 2003, it established the Brian P. Copenhaver Award for Innovation in Teaching with Technology, an award given annually to honor faculty who successfully experiment with new educational technology, to help faculty share their experiences with others, and to build a UCLA community of educational technology innovators. More recently, the FCET recommended that the campus converge on a Common Collaboration and Learning Environment (CCLE), both to support instruction with a common environment and to provide a platform for interdisciplinary research and other collaborations. The CCLE will thus further integrate
research and teaching, serving undergraduates, graduate students, and faculty.
Current Challenges in Advancing Educational Technology
To some degree, the early launch and success of a broad range of services and programs throughout the campus has created a culture and set of practices that make it costly and difficult for UCLA to achieve significant systemic change and broad educational technology improvement. Although intertwined, the challenges for UCLA can be sorted into three major categories.
1) Educational Innovations. How can we build a research-rich educational environment for
undergraduate and graduate students, using educational technology-enabled pedagogy to achieve
clearly articulated learning outcomes? UCLA does not lack ideas about how to do this. In addition to
the efforts of individual faculty such as those recognized by the Copenhaver Award, and individual graduate students such as those teaching through the Collegium of University Teaching Fellows, there have been many studies and pilot projects, e.g. an OID pilot project on Blended Instruction, an Academic Senate study of online instruction, and an FCET recommendation for an Open Course Ware project. There are several “islands of excellence” where students benefit from such innovations. However, these benefits are generally not realized beyond individual classes to the broader campus community. There is no systematic process for assessing impact beyond standard course evaluation forms. Other than the Copenhaver Award, there is little in the current resource and reward system for faculty that fosters investing the time required to incorporate innovative educational technology. The challenge for UCLA is thus to engage systematically in 1) assessing pilot efforts in terms of learning outcomes; 2) disseminating these successful ideas and encouraging adopters; 3) continuing assessment throughout larger scale implementations; and 4) rewarding innovators.
2) Building a Cohesive Instructional Technology Environment. Because responsibility and funding for educational technology programs is at the school, division, or department level, each unit has its own infrastructure, including its own choice of course management system. The many such systems deployed across UCLA create a problem for students who must use different systems across courses, and for faculty and graduate students who teach in more than one unit. And for administrative functions such as the Library and Registrar, unnecessary complexity is added to an already highly technological and rapidly changing environment. More generally, core educational technology services are uneven across campus, with some units providing models of excellence and others lagging behind.
The CCLE initiative, mentioned earlier, is intended to help remedy this situation, and it has already become a catalyst for bringing the campus together to develop more effective governance and service delivery approaches, and fostering a spirit of cooperation. In 2006, an innovative campus-wide process to define requirements and assess options resulted in a widely applauded decision to adopt the Moodle course management system for the CCLE. In 2007, with the support of key campus groups (OIT, OID, CCC, the Library), a cohort of staff members from units across campus contributed extensively to an alpha-phase implementation of Moodle. The EVC/Provost then allocated seed funds to facilitate a second phase of planning (Fall 2007) designed to determine the scale, scope, and architecture of, and to develop a funding model for, a wider implementation of the CCLE for 2008 and beyond.
A related challenge concerns three of our campus’s learning spaces. First, according to OID’s
Classroom Technology Plan, UCLA lags behind other UC campuses in furnishing classrooms with the newest educational technology equipment. Currently, only 50% of UCLA’s 200 general assignment classrooms are adequately outfitted. In response to OID’s plan, the Acting Chancellor has committed $800,000 in permanent funds to be allocated over a two-year period, 2008-2010. These newly allocated funds will ensure that all general classrooms are equipped by 2011. Second, while much of UCLA’s general public space has wireless coverage, the campus is involved in debates about the need for providing wireless connectivity within its academic buildings. And third, the UCLA Library must consider how to provide students more access to its digital resources, as well as more workspaces.
3) Leadership. At UCLA, leadership in implementing educational technology currently follows the fully distributed structure of instruction on campus, and coordinating our decentralized institution to produce a federated environment requires creative leadership. Unlike some of our peer universities, UCLA has no single position or office solely concerned with advancing the use of educational technology. Responsibility is shared among key organizations (i.e., OIT, OID, CCC, the Library) through active, robust governance processes. While the benefits of a federated environment are significant, connecting and leveraging local and institutional efforts is a challenge, not just for educational technology but for all aspects of IT. UCLA is pursuing a model of “Coordinated Autonomy” in which IT infrastructure and services are neither centralized nor decentralized but
“layered”, meaning that local components are on top of shared, co-owned, institutional components. This strategic vision is summarized by UCLA’s CIO in a recent Educause essay.
Next Steps: Assessing the Use of Technology to Enhance Learning and Teaching
In approaching the report for the WASC Educational Effectiveness Review, the FCET will be working with faculty and others to develop an extended essay that will give an update on UCLA’s further development of a common collaboration and learning environment (i.e., the CCLE) and the issues of centralization and leadership that it raises. The report will also focus on three projects initiated by faculty, selected to illustrate the challenges to students and faculty in using educational technology to:
1) engage students more deeply and actively in course content; 2) incorporate information literacy
instruction to develop basic research skills; and 3) use feedback about student performance obtained in a blended instruction model to inform the redesign of a large introductory course.
Project 1. Student Engagement. Current technology makes it feasible for a wide range of courses to include multi-media student projects, which facilitate active learning of course content while also enhancing students’ technology skills. An example that will be highlighted is Professor Tim
Groeling’s course on Political Communication (Communication Studies 160), the core class on media and politics. Professor Groeling received a 2004 Copenhaver Award for introducing a video project in which students make political campaign ads, and then evaluate fellow students’ ads. This enhancement to the course was developed without significant university support and uses computing resources available to all students. Professor Groeling has done some informal assessment of the educational technology components of his course in the context of overall course evaluation. For the Educational Effectiveness report, we will consider how to introduce a more formal assessment of the educational technology component, and how to encourage others to adopt this sort of innovation, with the important goal of minimizing any new burden on the instructor.
Project 2. Information Literacy. Broadly defined, information literacy is the set of skills students need to locate, evaluate, and use information effectively and ethically. Students need these essential skills throughout their careers, and early information literacy experiences are foundational for advanced capstone experiences (Essay 5); yet many undergraduates come to UCLA with critical gaps in this skill set. To address this problem, the College Library (1) has developed a comprehensive Information Literacy Program for all undergraduates, (2) assigns each freshman cluster team (Essay 4) its own reference librarian to work with the faculty and TAs to design information literacy exercises tied to writing assignments, and a research guide for the students’ seminar projects, and (3) offers a Fiat Lux seminar on information literacy for cluster freshmen wanting more intensive training. For the Educational Effectiveness report, we will assess the partnership between the librarians and the Freshman Cluster Program, documenting how it benefits cluster freshmen, TAs and faculty, as well as strategies for extending it to other general education and lower-division courses.
Project 3. Student Learning and Course Design. A course in introductory statistics is essential to a large number of majors at UCLA, and students may enroll with widely varying skill levels, unrealistic impressions of their own competence, and different needs for using statistical tools and measures. The traditional model for Statistics 10 included three hours of lecture and one hour of a TA-taught section each week for the 1,700 students enrolled. This educational technology project, coordinated by Senior Lecturer Mahtash Esfandiari, focuses on course redesign to address the contextual issues above and to introduce statistics as a science of data. A blended instruction model with a significant online component using Moodle was developed to maximize the role of the students as active learners and to provide detailed information to students and faculty alike on their skill levels. Each week, students participate in online quizzes, and lectures are immediately tailored to address issues identified by quiz results. For the Educational Effectiveness report, we will examine the impact of the Statistics 10 redesign on student learning, as well as faculty and student satisfaction.
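As a purely illustrative sketch of the feedback loop described above (not the actual Statistics 10 or Moodle tooling), the snippet below aggregates weekly quiz results by topic and flags low-scoring topics for attention in the next lecture; the topics, data, and threshold are invented.

```python
# Illustrative sketch only: aggregating weekly quiz results by topic so an
# instructor can see which concepts need attention in the next lecture.
# Topic names, records, and the threshold are hypothetical.

from collections import defaultdict

# each record: (student_id, topic, answered_correctly)
quiz_results = [
    ("s1", "sampling", True), ("s1", "correlation", False),
    ("s2", "sampling", False), ("s2", "correlation", False),
    ("s3", "sampling", True), ("s3", "correlation", True),
]

totals = defaultdict(lambda: [0, 0])   # topic -> [correct, attempted]
for _, topic, correct in quiz_results:
    totals[topic][1] += 1
    if correct:
        totals[topic][0] += 1

THRESHOLD = 0.7  # flag topics where fewer than 70% of answers were correct
for topic, (correct, attempted) in totals.items():
    rate = correct / attempted
    flag = "review in lecture" if rate < THRESHOLD else "ok"
    print(f"{topic}: {rate:.0%} correct -> {flag}")
```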
Organizing Course materials
Technology in Education
Many people warn of the possible harmful effects of using technology in the classroom. Will children lose their ability to relate to other human beings? Will they become dependent on technology to learn? Will they find inappropriate materials? The same was probably said about the invention of the printing press, radio, and television. All of these can be used inappropriately, but all of them have given humanity unbounded access to information which can be turned into knowledge. Appropriately used -- interactively and with guidance -- they have become tools for the development of higher order thinking skills.
Inappropriately used in the classroom, technology can be used to perpetuate old models of teaching and learning. Students can be "plugged into computers" to do drill and practice that is not so different from workbooks. Teachers can use multimedia technology to give more colorful, stimulating lectures. Both of these have their place, but such use does not begin to tap the power of these new tools.
In this area, you will find descriptions of how computers can be used to stimulate and develop writing skills, collaborate with peers in foreign countries, do authentic kinds of research that are valuable to the adult world, and do complex kinds of problem solving that would otherwise be impossible.
by Patricia Armstrong, Assistant Director, Center for Teaching
• Background Information
• The Original Taxonomy
• The Revised Taxonomy
• Why Use Bloom's Taxonomy?
• Further Information
In 1956, Benjamin Bloom with collaborators Max Englehart, Edward Furst, Walter Hill, and David Krathwohl published a framework for categorizing educational goals: Taxonomy of Educational Objectives. Familiarly known as Bloom's Taxonomy, this framework has been applied by generations of K-12 teachers and college instructors in their teaching.
The framework elaborated by Bloom and his collaborators consisted of six major categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as "skills and abilities," with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice.
While each category contained subcategories, all lying along a continuum from simple to complex and concrete to abstract, the taxonomy is popularly remembered according to the six main categories.
The Original Taxonomy (1956)
Here are the authors' brief explanations of these main categories from the appendix of Taxonomy of Educational Objectives (Handbook One, pp. 201-207):
• Knowledge "involves the recall of specifics and universals, the recall of methods and processes, or the recall of a pattern, structure, or setting."
• Comprehension "refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications."
• Application refers to the "use of abstractions in particular and concrete situations."
• Analysis represents the "breakdown of a communication into its constituent elements or parts such that the relative hierarchy of ideas is made clear and/or the relations between ideas expressed are made explicit."
• Synthesis involves the "putting together of elements and parts so as to form a whole."
• Evaluation engenders "judgments about the value of material and methods for given purposes."
The 1984 edition of Handbook One is available in the CFT Library in Calhoun 116. See its ACORN record for call number and availability.
While many explanations of Bloom's Taxonomy and examples of its applications are readily available on the Internet, this guide to Bloom's Taxonomy is particularly useful because it contains links to dozens of other web sites.
Barbara Gross Davis, in the "Asking Questions" chapter of Tools for Teaching, also provides examples of questions corresponding to the six categories. This chapter is not available in the online version of the book, but Tools for Teaching is available in the CFT Library in Calhoun 116. See its ACORN record for call number and availability.
The Revised Taxonomy (2001)
A group of cognitive psychologists, curriculum theorists and instructional researchers, and testing and assessment specialists published in 2001 a revision of Bloom's Taxonomy with the title A Taxonomy for Learning, Teaching, and Assessing. This title draws attention away from the somewhat static notion of "educational objectives" (in Bloom's original title) and points to a more dynamic conception of classification.
The authors of the revised taxonomy underscore this dynamism, using verbs and gerunds to label their categories and subcategories (rather than the nouns of the original taxonomy). These "action words" describe the cognitive processes by which thinkers encounter and work with knowledge:
• Remember
• Understand
• Apply
• Analyze
• Evaluate
• Create
In the revised taxonomy, knowledge is at the basis of these six cognitive processes, but its authors created a separate taxonomy of the types of knowledge used in cognition:
• Factual Knowledge
o Knowledge of terminology
o Knowledge of specific details and elements
• Conceptual Knowledge
o Knowledge of classifications and categories
o Knowledge of principles and generalizations
o Knowledge of theories, models, and structures
• Procedural Knowledge
o Knowledge of subject-specific skills and algorithms
o Knowledge of subject-specific techniques and methods
o Knowledge of criteria for determining when to use appropriate procedures
• Metacognitive Knowledge
o Strategic Knowledge
o Knowledge about cognitive tasks, including appropriate contextual and conditional knowledge
An Encyclopedia of Educational Technology guide to the revised version provides a brief summary of the revised taxonomy and a helpful table of the six cognitive processes and four types of knowledge.
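As a rough illustration of that two-dimensional structure, the sketch below encodes the six cognitive processes and four knowledge types and places a learning objective in one cell of the grid; the helper function and example objectives are invented for illustration.

```python
# A small sketch of the two-dimensional structure of the revised taxonomy:
# six cognitive processes crossed with four knowledge types. Classifying an
# objective means picking one cell; the example objectives are invented.

COGNITIVE_PROCESSES = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]
KNOWLEDGE_TYPES = ["Factual", "Conceptual", "Procedural", "Metacognitive"]

def classify(objective: str, process: str, knowledge: str) -> str:
    """Place a learning objective in one cell of the process x knowledge grid."""
    assert process in COGNITIVE_PROCESSES and knowledge in KNOWLEDGE_TYPES
    return f"{objective!r} -> ({process}, {knowledge})"

# hypothetical objectives, invented for illustration
print(classify("List the parts of a cell", "Remember", "Factual"))
print(classify("Critique a statistical argument", "Evaluate", "Conceptual"))
```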
Why Use Bloom's Taxonomy?
The authors of the revised taxonomy suggest a multi-layered answer to this question, to which the author of this teaching guide has added some clarifying points:
1. Objectives (learning goals) are important to establish in a pedagogical interchange so that teachers and students alike understand the purpose of that interchange.
2. Teachers can benefit from using frameworks to organize objectives because:
o Organizing objectives helps to clarify objectives for themselves and for students.
o Having an organized set of objectives helps teachers to:
o "plan and deliver appropriate instruction";
o "design valid assessment tasks and strategies";and
o "ensure that instruction and assessment are aligned with the objectives."
Citations are from A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives.
An Information Architecture for Validating Courseware
Mark Melia and Claus Pahl
School of Computing, Dublin City University, Dublin 9, Ireland
Abstract. Courseware validation should locate Learning Objects inconsistent with the courseware instructional design being used. In order for validation to take place it is necessary to identify the implicit and explicit information needed for validation. In this paper, we identify this information and formally define an information architecture to model courseware validation information explicitly. This promotes tool-support for courseware validation and its interoperability with courseware specifications.
1 Introduction
The assembly of Learning Objects (LOs) into courseware is increasingly becoming the norm in courseware construction. LO reuse offers course creators an efficient methodology to create courseware from tried and tested components. This methodology also has challenges. One of the most pressing challenges the course creator faces is placing the right LO in the right place in courseware. This is difficult as course creators typically only have high-level knowledge of the content and internal behaviour of each LO, due to 3rd party construction and increasing internal complexity.
Quality courseware requires a consistent and suitable instructional design. Due to a lack of understanding, a course creator may place LOs in courseware that are not compliant with the courseware's instructional design. This causes inconsistent pedagogy, which can confuse, demotivate and isolate the learner and could ultimately lead to the rejection of the course by the learner.
In this paper, we detail the courseware information that must be captured in order to accurately validate courseware developed using LOs, and define a layered information architecture to that effect. The formal definition of the information architecture allows for a validation engine to be built around it, providing tool support to the course creator when validating courseware. Our formal definition also facilitates translation to and from external courseware representation specifications such as SCORM and IMS LD, allowing for interoperability.
2 Identification of Information Needs
Courseware validation is a complex task which involves evaluating courseware in the context of its scope (the knowledge the courseware wishes to teach), instructional approach (how it teaches this knowledge) and content used (educational materials used in teaching). In the real world this may mean delegating validation tasks to experts. Validation of course structure may be delegated to an instructional design expert, while content-related issues may be delegated to a subject matter expert.
In order for the courseware validation process to be automated, the implicit knowledge and information used to validate courseware, known as the "courseware validation concerns", must be identified and defined. Because courseware information changes over time (e.g. a more detailed learner model is possible post-delivery), definitions must focus on one stage of the courseware life-cycle. Here we define courseware information at the post-construction/pre-delivery stage of the courseware life-cycle (i.e. after the courseware has been developed by the course creator but before it is delivered to learners).
Fig. 1 outlines the courseware validation concerns for the post-construction/pre-delivery stage of the courseware life-cycle. Three expert roles are represented in the figure: the domain expert, the instructional designer and course accreditation. These roles may be embodied by one course creator, or may represent several specialised course creators. Domain information outlines the knowledge to be taught to the learner. The domain expert can also define basic rules of thumb on how particular domain knowledge is taught (e.g. concept A always before B). The scope of the courseware is defined in the context of the domain information, specifying the course pre-requisites and the course goals. The course scope is usually defined by the course accreditation body. The instructional design defines a strategy for the transfer of knowledge to the learner and is derived from general instructional design principles.
Courseware is defined as instructional logic combined with LOs. LOs are the content which teach aspects of domain knowledge to the learner and are described using metadata, outlining the material covered by the LO and the method of instruction. Instructional logic specifies how the learner proceeds through the courseware, from one LO to another.
3 Layered Information Architecture
3.1 Information Architecture Overview
By explicitly representing the courseware validation concerns identified in the previous section it is possible to automatically validate constructed courseware. The Courseware Authoring Validation Information Architecture (CAVIAr) allows for the representation of the validation concerns using a system of layered models. CAVIAr extends the LAOS model, used in Adaptive Educational Hypermedia (AEH) authoring. LAOS simplifies AEH authoring by separating AEH concerns, allowing the course creator to deal with each individually. Our model replicates the benefits of using the LAOS model in AEH authoring by separating the courseware validation concerns.
The bottom three layers of the validation model, the domain model, the goal and constraint model and the learner model, are replicated from the LAOS model but adapted for validation. On top of these base layers the courseware model and validation model are defined.
3.2 The Domain Model
The domain model is a formalism of knowledge in the form of a concept map, where a node represents a concept and an edge represents the relationship from one concept to another. A conceptual graph for domain modeling in AEH authoring has been formally defined in the AEH literature. In AEH the domain model serves as the main navigational tool, while in courseware validation it serves as a semantic point of reference for LO annotation (section 3.5).
We can define the courseware validation domain model requirements as follows:

Definition 1. A concept map CM is determined by the tuple CM = (C, L), where C is a set of concepts and L is a set of links.
– A concept c ∈ C is an abstract notion which is described using attributes. There must be at least one attribute for each concept, the name of the concept, described as nc.
– A link l ∈ L is a tuple l = (c1, c2, nl, tl), where c1 and c2 are the source and target concept respectively and nl is the name of the link. There are many types of links; the link type adds semantics to the link, indicating what is meant by the link. tl refers to the link type (e.g. "is-a", "has-a").
– Each concept c must have at least one link, the hierarchical link, which links a concept to its parent concept. An exception to this is the root concept.
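As a rough companion to Definition 1, here is a minimal Python sketch of a concept map with typed links and the hierarchical-link constraint; the class names (Concept, Link, ConceptMap) and the assumption that the child concept is the source of its hierarchical link are ours, not CAVIAr's.

```python
# Minimal sketch of the Definition 1 domain model: concepts, typed links, and
# a check of the hierarchical-link constraint. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    name: str                          # nc, the required name attribute

@dataclass(frozen=True)
class Link:
    source: Concept                    # c1 (assumed here to be the child concept)
    target: Concept                    # c2 (assumed here to be the parent concept)
    name: str                          # nl
    link_type: str                     # tl, e.g. "is-a", "has-a", "hierarchical"

@dataclass
class ConceptMap:
    concepts: set[Concept] = field(default_factory=set)
    links: set[Link] = field(default_factory=set)

    def is_well_formed(self, root: Concept) -> bool:
        """Every non-root concept must have a hierarchical link to a parent."""
        parented = {l.source for l in self.links if l.link_type == "hierarchical"}
        return all(c in parented or c == root for c in self.concepts)

root = Concept("Statistics")
sampling = Concept("Sampling")
cm = ConceptMap({root, sampling}, {Link(sampling, root, "part-of", "hierarchical")})
print(cm.is_well_formed(root))   # True
```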
3.3 The Goal and Constraints Model
The purpose of the goal and constraints model in CAVIAr is to specify domain pedagogic information and course conceptual goals. The course creator can express which concepts from the domain model are to be taught to the learner and conceptual pre-requisite constraints between concepts.
We distinguish between two types of knowledge, deep knowledge and shallow knowledge, where deep knowledge implies an understanding of a concept's underlying concepts or sub-concepts and shallow knowledge implies a passing knowledge or a mere familiarity with a concept. To this effect, when defining a model in terms of knowledge (i.e. the domain model), we must distinguish whether we mean deep knowledge or shallow knowledge. To do this, goals and constraints have a strength, weak or strong, which specifies shallow knowledge or deep knowledge respectively. This accommodates the needs of real courses.

We now give a more formal definition of the elements which make up the goal and constraint model.
Definition 2. The goal and constraint model is made up of the tuple (P, G), where P is a set of pre-requisite constraints, which specify a relationship between concepts where one concept must be understood before the other concept, and G is a set of goals.
– A pre-requisite constraint p ∈ P is described using the tuple p = (c1, c2, ps), where c1 ∈ C, c2 ∈ C and ps describes the strength of the pre-requisite constraint, weak or strong. Weak pre-requisites require only passing knowledge of the pre-requisite concept; a strong pre-requisite requires deep knowledge of the pre-requisite concept and its underlying concepts.
– A goal g ∈ G is described by the tuple g = (GC, alt, gg), where GC is a set of goal concepts, alt defines an alternative goal and gg is a nested goal within the goal.
– A goal concept gc ∈ GC is made up of the tuple gc = (c, gs), where c refers to a concept in the domain model (c ∈ C) and gs is the strength of the goal, a weak goal or a strong goal.
We can infer that the goal and constraint model is an overlay model on the domain model in that the goal and constraint model is specified in terms of the domain model. This is because each goal concept refers to a concept c ∈ C, meaning that GC ⊆ C, and pre-requisite constraints are expressed using concepts from the domain model.
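As a companion sketch to Definition 2 (simplified: goals are reduced to goal concepts, and nested and alternative goals are omitted), the snippet below represents pre-requisites and goals with weak/strong strengths and checks the overlay property; all names and data are illustrative.

```python
# Simplified sketch of the goal and constraint model with an overlay check.
# Concept names are plain strings; nested and alternative goals are omitted.

from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    WEAK = "weak"      # shallow, passing knowledge
    STRONG = "strong"  # deep knowledge of the concept and its sub-concepts

@dataclass(frozen=True)
class Prerequisite:
    before: str        # c1: concept that must be understood first
    after: str         # c2
    strength: Strength

@dataclass(frozen=True)
class Goal:
    concept: str       # gc, a concept name from the domain model
    strength: Strength

def is_overlay(domain_concepts: set[str], prereqs: set[Prerequisite],
               goals: set[Goal]) -> bool:
    """Overlay property: every concept referenced must exist in the domain model."""
    used = {p.before for p in prereqs} | {p.after for p in prereqs} | {g.concept for g in goals}
    return used <= domain_concepts

domain = {"variables", "correlation", "regression"}
prereqs = {Prerequisite("correlation", "regression", Strength.STRONG)}
goals = {Goal("regression", Strength.STRONG)}
print(is_overlay(domain, prereqs, goals))   # True
```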
3.4 The Learner Model
In CAVIAr, the learner model is used to represent the stereotypical learner knowledge or pre-requisite knowledge for a courseware. The learner model is a necessary layer of CAVIAr as concepts not covered in the courseware may be pre-requisites of concepts in the courseware, but this knowledge is assumed learner knowledge. In order for a validation engine to acknowledge the learner's initial assumed knowledge, it must be modelled.
As we specify the learner model in terms of conceptual knowledge, the learner model elements also have a strength, weak or strong, as is the case for goals and constraints. We can also describe the learner model formally as follows:

Definition 3. The learner model is described by a set of tuples of the form k = (kc, ks), where kc refers to a concept in the domain model (kc ∈ C) and ks is the strength of the assumed knowledge, weak or strong.

The learner model is also an overlay model on the domain model, as each kc ∈ C, again meaning that the learner model's concepts form a subset of C.
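A small illustrative check in the same spirit, assuming plain sets of concept names rather than CAVIAr's structures: find pre-requisite concepts that are neither taught by the courseware nor assumed known in the learner model.

```python
# Hypothetical sketch: the learner model as assumed prior knowledge, used to
# check that every pre-requisite concept is either taught by the courseware
# or already assumed known. Names and data are illustrative only.

def uncovered_prerequisites(prereq_concepts: set[str],
                            taught_concepts: set[str],
                            assumed_known: set[str]) -> set[str]:
    """Concepts required but neither taught nor assumed as prior knowledge."""
    return prereq_concepts - taught_concepts - assumed_known

prereqs = {"basic algebra", "correlation"}
taught = {"correlation", "regression"}
learner_model = {"basic algebra"}          # assumed strong prior knowledge
print(uncovered_prerequisites(prereqs, taught, learner_model))   # set()
```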
3.5 The Courseware Model
A course structure can be modeled as a directed graph. Learning resources are nodes on the graph. A learner's traversal through the graph's edges depends on variables, such as learner choice, assessment results, learning styles and feedback, which are assessed at run-time. Learning resources in the course are annotated using some metadata standard such as IEEE LOM. The metadata maps the learning resource to the concept(s) it addresses in the domain model.
Fig. 3 demonstrates how each LO in a course is associated with at least one concept in the domain model, specified in the LO annotation (LO conceptual annotation is indicated in the diagram with an arrow from a LO to a concept). LO conceptual associations can be used to group LOs. Groupings are made up of LOs concerned with the same concept. This type of LO grouping is illustrated in the diagram in Fig. 3 (a dotted circle on the course model indicates conceptual groupings). Once we can group LOs by concept we can discriminate between pedagogical strategies concerned with inter-conceptual pedagogy (strategy concerning sequencing of the conceptual groupings) and intra-conceptual pedagogy (pedagogical strategy concerning the LOs within each conceptual grouping).
We now give a more formal definition of the course model.
Definition 4. We consider courseware CW to be determined by the tuple CW = (LO, LP, SP, EP), where LO represents a set of Learning Objects and LP is the set of learning paths. SP and EP represent the start points and end points respectively in a given courseware model.
– A learning path lp ∈ LP is defined by the tuple lp = (lo1, lo2, G), where lo1 ∈ LO and lo2 ∈ LO are the start and end learning objects respectively. G refers to a boolean gate condition, which must be true for the learner to proceed down that learning path.
– A learning object lo ∈ LO is defined by the tuple lo = (lc, Alo), where lc is the learning content and Alo is the LO's annotation.
– A learning object's annotation Alo is a tuple Alo = (M, CA), where M is the metadata set used to describe a LO and CA is a set of conceptual annotations.
– A metadata element m ∈ M is a tuple m = (att, val), where att refers to the metadata attribute and val refers to its value.
– A conceptual annotation ca ∈ CA is a tuple ca = (pa, c, s), where pa is the purpose of the annotation (i.e. to specify a competency in a particular concept), c ∈ C is a reference to a domain concept and s is the strength of the link to the concept.
The connection between the courseware model and the lower layers is in the LO's annotation: each LO's annotation points to a concept c, where c ∈ C. In our definition we have defined a learning path with a boolean gate condition, G. This gate condition allows the course creator to specify what learning path the learner should take based on various variables found in, for example, the learner model.
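As a rough sketch of Definition 4 (not the CAVIAr implementation), the snippet below models LOs carrying metadata and conceptual annotations, and learning paths guarded by boolean gate conditions evaluated against an invented run-time learner state.

```python
# Minimal sketch of the Definition 4 courseware model: LOs with metadata and
# conceptual annotations, and learning paths guarded by gate conditions.
# The learner state and gate predicates are invented for illustration.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LearningObject:
    name: str
    metadata: dict[str, str] = field(default_factory=dict)   # M: attribute -> value
    concepts: set[str] = field(default_factory=set)          # CA: conceptual annotations

@dataclass
class LearningPath:
    start: LearningObject          # lo1
    end: LearningObject            # lo2
    gate: Callable[[dict], bool]   # G: condition over run-time variables

intro = LearningObject("Intro lecture", {"Type": "Lecture"}, {"sampling"})
quiz = LearningObject("Sampling quiz", {"Type": "Assessment"}, {"sampling"})
advanced = LearningObject("Advanced material", {"Type": "Lecture"}, {"inference"})

paths = [
    LearningPath(intro, quiz, gate=lambda state: True),
    LearningPath(quiz, advanced, gate=lambda state: state.get("quiz_score", 0) >= 0.6),
]

learner_state = {"quiz_score": 0.8}
for p in paths:
    print(f"{p.start.name} -> {p.end.name}: open = {p.gate(learner_state)}")
```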
3.6 The Validation Model
The Validation Model captures pedagogical rules that courseware must adhere to. In the validation model layer the course creator can express undesirable properties of courseware and also properties which must be present in the courseware. Validation can be split into two distinct parts: validation which is concerned with the learning content of one domain concept (i.e. intra-conceptual validation), and validation which looks at how the course proceeds from one concept to another in the course (i.e. inter-conceptual validation). Typical intra-conceptual validation will ensure that each concept is taught in a uniform manner, while inter-conceptual sequencing might ensure that the strategy undertaken in the course is depth-first (covering each concept in detail before moving on to the next concept).
The formal definition of the lower four layers of CAVIAr allows for the formation of pedagogical rules. Here we specify a rule which states that for every concept c that is a goal concept, there must exist a LO annotated with that concept which is of "Type" "Lecture":

∀x ∃y, z : (c(x) ∧ x ∈ gc_i) ∧ (CW_i.LO(y).A.CA.c = x) ∧ (CW_i.LO(y).A.M(z).Att = "Type") ∧ (CW_i.LO(y).A.M(z).value = "Lecture")
In the example below, we demonstrate how the rule above can be implemented using the JESS rule language, a Java-based, Lisp-like rule language. The rule states that each concept in the goal model, "concepts-in-gm", must have a LO which is annotated to have a "resource type" of "lecture". If this is not the case, the rule prints out the violating concept together with an explanatory message.
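The JESS listing itself is not reproduced in this excerpt. Purely as a stand-in (not the authors' code), the sketch below restates the same check in Python over plain dictionaries: flag any goal concept that has no LO whose metadata "Type" is "Lecture".

```python
# Illustrative re-statement of the rule above in Python, not the paper's JESS
# listing: every goal concept must have at least one LO annotated with
# metadata "Type" = "Lecture" that covers it.

def violating_concepts(goal_concepts: set[str], learning_objects: list[dict]) -> set[str]:
    """Goal concepts that have no lecture-type LO annotated with them."""
    violations = set()
    for concept in goal_concepts:
        has_lecture = any(
            concept in lo["concepts"] and lo["metadata"].get("Type") == "Lecture"
            for lo in learning_objects
        )
        if not has_lecture:
            violations.add(concept)
    return violations

los = [
    {"name": "Sampling lecture", "metadata": {"Type": "Lecture"}, "concepts": {"sampling"}},
    {"name": "Inference quiz", "metadata": {"Type": "Assessment"}, "concepts": {"inference"}},
]
print(violating_concepts({"sampling", "inference"}, los))   # {'inference'}
```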
In this paper we have given an overview of the information needs for the validation of courseware. We have then taken the information needs identified and defined a layered information architecture (CAVIAr), which allows for the explicit representation of each information need. The definition of each layer allows for the formulation of pedagogical rules, which can be validated in the context of our information architecture.
The formal definition of CAVIAr allows for tool support to be built around it; in our future work, we will design and implement a validation engine based on CAVIAr. Another advantage of CAVIAr's formal definition is that it allows for translation from external courseware representations to CAVIAr, allowing for interoperability with existing courseware specifications, thus embracing the state of the art in courseware construction.
A shortage of skilled labor exists in the construction industry. Fortunately, advancements in construction equipment and material technologies, along with modularized components and estimating and scheduling strategies, have offset the shortage of skilled construction labor. The construction industry has witnessed a drop in real wages since 1970. The decline in real wages may be attributed to a combination of socioeconomic factors like migrant laborers, fringe benefits, safety procedures, union membership and worker skills. Another factor that may be impacting construction real wages is technological change over the past couple of decades, including technological changes in construction equipment. There is a growing need to understand how changes in technology are affecting employment conditions in construction. If more could be known about how technology affects wages, the industry could formulate better strategies for future workforce needs. This paper examines the relationship between changes in equipment technology and changes in construction wages with the help of five factors of equipment technology change: control, energy, ergonomics, functionality and information processing. Furthermore, data from the U.S. Bureau of Labor Statistics' Current Population Survey (CPS) are used to examine the effects of computer usage on wages among hourly workers in construction.
Keywords: construction; labor; equipment technology; wages; computer.
The U.S. construction industry contributes significantly to the U.S. economy. When one includes construction related business involving design, equipment and materials manufacturing, and supply, the construction industry accounts for 13% of the GDP, making it the largest manufacturing industry in the U.S. (BEA 2000). The shortage of skilled workers is considered to be one of the greatest challenges facing the U.S. construction industry. Not since the early 1970s and post World War II has the U.S. construction industry experienced such low unemployment rates (BLS 2002). Advances in construction equipment and material technologies, modularized components, and estimating and scheduling strategies have offset the shortage of skilled construction labor. However, there is a perception among industry leaders that the skilled worker shortage is getting worse. A survey of facility owners showed that 78% thought the skilled worker shortage had increased during the past 3 years (Rosenbaum 2001).
Although real wages in general in the U.S. began to outpace inflation in the late 1990s, there has been a long-term decline in construction real wages since the 1970s (Allmon et al. 2000; Oppedahl 2000). Other industries, such as manufacturing, have also experienced declines in real wages; however, the declines have typically been greater in construction. This greater decline may be due to a combination of socioeconomic factors, including an increase in migrant labor in construction, fringe benefits, and construction safety, and a decrease in union membership and worker skills (Oppedahl 2000; Goodrum 2002).
Another factor that may be impacting construction real wages is technology. Over the past couple of decades, there has been a wide array of technological changes in construction equipment and material technology. Construction equipment has become more powerful, more automated, more precise, safer, and more functional, allowing workers to be more productive in construction activities. In many instances, technology has made construction equipment easier to use. One example is heavy machinery: advancements in hydraulic controls and microprocessors have automated and simplified the operation of earthmoving machinery. Other advancements in construction equipment have introduced new technologies that require skill sets outside those traditionally needed in construction. For example, the use of Global Positioning Systems onboard earthmoving equipment now requires equipment operators to be proficient in the use of computers.
This paper examines the effect of equipment technology on construction wages in two parts. First, the effects of changes in equipment technology on real wages from 1976 to 1998 are examined. This involves examining the changes in five technology factors (Amplification of Human Energy, Level of Control, Functional Range, Ergonomics, and Information Processing) and the change in the average wage of workers in crews for 100 construction activities. Second, the effects of computer usage on construction wages are examined for 470 individual hourly construction workers.
2.1. Equipment Technology Defined
This research examines the effect of changes in equipment technology on construction wages, specifically the equipment technologies of hand tools, machinery, and computers. Hand tools include pneumatic nail guns, electric drills, circular saws, and similar types of tools. Machinery includes cranes, grout pumps, bulldozers, and similar types of implements.
2.2. Technology Factors
To examine how different mechanisms of equipment technology change have influenced construction wages, five factors (defined below, with examples discussed later) were identified to characterize changes in technology.
Amplification of Human Energy: technology designed to make an activity easier to perform physically. In its simplest terms, it can be regarded as the shift in energy from human to machine, bringing an increase in energy output.
Level of Control: advances in machinery and hand tools that transfer control from human to machine.
Functional Range: changes that expand a tool or machine's range of capabilities.
Ergonomics: technology that alleviates physical stresses imposed on a worker and helps the worker cope with the work environment.
Information Processing: over time, construction equipment has been designed to provide greater and more accurate information regarding internal and external processes. This factor includes the incorporation of computers into the work processes.
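The paper does not publish its coding scheme, but as a rough illustration of how each activity's technology change can be recorded, the sketch below codes one activity as binary indicators across the five factors. The class and the example activity are hypothetical; only the welding details (higher wattage output, remote-controlled amperage adjusters) come from the discussion later in the paper.

from dataclasses import dataclass

@dataclass
class ActivityTechChange:
    """Whether an activity saw a change in each of the five
    equipment-technology factors between 1976 and 1998."""
    activity: str
    energy: bool             # Amplification of Human Energy
    control: bool            # Level of Control
    functional_range: bool   # Functional Range
    ergonomics: bool         # Ergonomics
    info_processing: bool    # Information Processing

# Hypothetical example activity from the metals division.
welding = ActivityTechChange(
    activity="Structural steel welding",
    energy=True,             # welding machines with increased wattage output
    control=True,            # remote-controlled amperage adjusters
    functional_range=False,
    ergonomics=False,
    info_processing=False,
)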
3. DATA SOURCES
3.1. Estimation Manual
The data for this research came from the estimation handbook Means Building Construction Cost Data (Means) and from the 2001 Computer and Internet Use Supplement data files of the U.S. Bureau of Labor Statistics' Current Population Survey (CPS). Wage data from the 1976 and 1998 Means estimation handbooks on 100 activities was collected to examine the effects of changes in equipment technology (as defined by the technology factors) on construction wages. Data from the CPS was used specifically to examine the effects of the use of computers on construction wages.
These estimation handbooks provide wage data, unit labor costs, unit equipment costs, physical output data, and work-hour requirements for construction activities. While the handbooks are a valuable source of information about construction cost and productivity across time, there are some limitations to the data. The contractors who provide the figures for the manuals are not
required to build a project using their estimations; this leads some contractors to submit inflated estimates of construction costs (Pieper 1989).
Three criteria were used to select activities for inclusion in the study. The first criterion was that the same activity be found in both the 1976 and 1998 estimation manuals. Due to changes in methodology or materials, or lack of use in construction, a number of activities included in the 1976 manual were not included in the 1998 manual. Likewise, a number of new activities were included in the 1998 manual because of new methodology or materials. Second, activities representing a diverse range of technological changes were selected. Third, activities were selected to represent a wide range of activity types from different divisions of the Construction Specification Institute (CSI) master format.
3.2. CPS September 2001 Computer and Internet Use Supplement
To further examine the effects of computer usage on construction wages, data was collected from the September 2001 Computer and Internet Use Supplement of the U.S. Bureau of Labor Statistics' (BLS) Current Population Survey (CPS). The CPS is a monthly survey of approximately 50,000
households conducted by the U.S. Census Bureau for the U.S. Department of Labor. With
the survey being conducted for more than 50 years, CPS data provides information on
economic indicators, which influence U.S. governmental policy. Data from the CPS is
available to the public via their website.
Each month, the CPS randomly selects 59,000 housing units (e.g. single family homes,
townhouses, condominiums, apartment units, and mobile homes) for the sample, and
approximately 50,000 are occupied and eligible for the survey. The other units are found
ineligible because they have been destroyed, are vacant, have been converted to nonresidential use, or
contain persons whose usual place of residence is elsewhere. Respondents are asked questions
about the employment information and demographic characteristics of each member of
the household over 14 years of age. In September 2001, the Computer and Internet
usage survey was added as a supplement to that month’s CPS. In addition to the demographic
data collected each month, the Computer and Internet Supplement contained questions about
the respondent's use of computers, including the use of computers at work, which was used in this analysis.
A number of criteria were used to select cases (each case representing an individual
respondent) from the September 2001 CPS Computer Supplement data. First, only
individuals listing their primary industry of employment as construction were selected.
Next, each case had to meet the following series of additional selection criteria:
1. Full-time hourly workers;
2. Male construction workers;
3. Non-supervisory construction workers;
4. Hourly wage greater than or equal to the U.S. minimum wage of $5.15/hour.
The use of these selection criteria resulted in 470 cases.
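As a rough sketch of this case-selection step, the criteria above can be expressed as a filter over a CPS supplement extract. The file name and column names here are hypothetical placeholders, not the actual CPS variable codes.

import pandas as pd

# Hypothetical file and column names; the real CPS extract uses coded variable names.
cps = pd.read_csv("cps_sep2001_computer_supplement.csv")

cases = cps[
    (cps["industry"] == "construction")   # primary industry of employment
    & cps["full_time"]                    # full-time workers
    & cps["paid_hourly"]                  # hourly workers
    & (cps["sex"] == "male")              # male workers
    & ~cps["supervisory"]                 # non-supervisory workers
    & (cps["hourly_wage"] >= 5.15)        # at or above the federal minimum wage
]
print(len(cases))  # the paper reports 470 cases after applying these criteria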
4.1. Effects of Changes in Equipment Technology on Real Wages from 1976 to 1998
4.1.1. Measured Change in Equipment Technology
The authors identified and examined 43 types of hand tools and 31 types of machinery in
the 100 construction activities. Obviously, many hand tools and machinery were used in
several activities. Equipment technology changes were identified using equipment
catalogs, handbooks and specifications. Figure 1 shows the number of activities that experienced
a change in equipment technology in at least one tool or item of machinery for each of the
technology factors. As shown in Figure 1, more than 70% of the activities experienced an increase in energy output. Prior related research indicates that the metals, wood and plastic, and site-work
divisions experienced the greatest amount of change in tool and machinery energy output
(Goodrum and Haas 2002). One example of change in energy output in the metals division
involves welding machines, which offer increased wattage output. The powder-actuated
systems used in metal decking in the metals division offer greater depth penetration for
installed studs. In addition, by 1998 cranes offered more lifting capacity than was available in
1976. In the wood and plastic division, circular saws operated at higher RPMs, and the
pneumatic nail gun required less human energy than a hand-held hammer. Most site-work
machinery increased in horsepower output including front-end loaders, dump trucks,
backhoes, bulldozers, graders, asphalt pavers, and scrapers.
As seen in Figure 1, almost half of construction activities experienced a change in
the amount of human control needed from 1976 to 1998. Welding machines in the metals
division, for instance, are now equipped with remote controlled amperage adjusters and
powder-actuated systems have semi-automatic loading capability. The pneumatic nail gun has
replaced the hand-held hammer in the wood and plastic division and in formwork installation in
the concrete division. Also in the concrete division, pump trucks are now equipped with
remote controlled booms, and concrete vibrators automatically adjust the vibration frequency to
match the concrete’s slump.
Changes in functional range occurred in slightly less than half of the activities (Figure 1).
Through advancements in hydraulic controls and microprocessors, site-work machinery now has
greater precision and a longer reach for booms and buckets. Excavators and backhoes are
capable of digging deeper.
Figure 1 shows that exactly half of the construction activities experienced some change
in ergonomics. For example, by 1998 many hand tools, such as circular saws, hand drills,
pneumatic nail guns, and caulking guns, were lighter and operated with less noise and
vibration than their predecessors.
Almost all of the advances in information processing occurred in heavy machinery (Goodrum and Haas 2002). This finding explains why most construction activities did not experience such an improvement in equipment technology. For example, some heavy machinery now offers self-monitoring and self-diagnostic systems.
4.1.2. Measured Change in Real Wages
Daily crew wages as reported in Means were divided by the number of crewmembers in each activity to estimate each individual worker's daily wage. In order to measure real wages (wages adjusted for inflation), the Census Construction Cost Index was used to normalize wages to 1990 levels. A description of the Census Construction Cost Index can be found on the Department of Commerce website.
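A minimal sketch of this normalization step, assuming a table of daily crew wages, crew sizes, and yearly cost-index values, is shown below. The index values used here are placeholders for illustration only, not the published Census Construction Cost Index series.

def real_daily_wage(daily_crew_wage, crew_size, cost_index_year, cost_index_1990):
    """Per-worker daily wage deflated to 1990 dollars using a construction cost index."""
    nominal_wage = daily_crew_wage / crew_size
    return nominal_wage * (cost_index_1990 / cost_index_year)

# Hypothetical crew wages and index values for illustration only.
wage_1976 = real_daily_wage(daily_crew_wage=560.0, crew_size=4,
                            cost_index_year=45.0, cost_index_1990=100.0)
wage_1998 = real_daily_wage(daily_crew_wage=980.0, crew_size=4,
                            cost_index_year=128.0, cost_index_1990=100.0)
change_in_real_wage = wage_1998 - wage_1976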
The overall average change from 1976 to 1998 in a worker's daily real wage was -$19.97, with a 95% confidence interval of ±$6.97. This confirms other findings that show a long-term decline in construction real wages (Allmon et al. 2000; Oppedahl 2000). Figure 2 illustrates the average changes in daily real wages for each division of the CSI master format. On average, concrete activities experienced the largest decline in daily real wages, while masonry activities experienced little change. Further research is needed to determine the reasons behind the various sector changes.
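The reported ±$6.97 interval is consistent with a standard t-based confidence interval on the mean change across the 100 activities. The sketch below shows that calculation on synthetic stand-in data (the per-activity wage changes are not published in the paper).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the change in daily real wage of each of the 100 activities.
delta_wage = rng.normal(loc=-20, scale=35, size=100)

mean_change = delta_wage.mean()
sem = stats.sem(delta_wage)
ci_low, ci_high = stats.t.interval(0.95, df=len(delta_wage) - 1,
                                   loc=mean_change, scale=sem)
half_width = (ci_high - ci_low) / 2
print(f"mean change: {mean_change:.2f} $/day, 95% CI half-width: +/- {half_width:.2f}")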
4.1.3. Relation Between Equipment Technology and Partial Factor Productivity Change
Analysis of Variance (ANOVA) is used to test whether two or more groups have statistically significantly different means. The ANOVA test estimates the statistical significance of the difference between the means (the F-value), and it measures the amount of variation in the dependent variable that is explained by the independent variable (eta squared). The ANOVA analyses compared the daily real wage changes from 1976 to 1998 for (1) activities that experienced a change according to the technology factor and (2) activities that did not. Figure 3 shows the ANOVA results.
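A minimal sketch of the one-way ANOVA used here, comparing wage changes for activities with and without a change in a given technology factor and reporting eta squared as the share of variance explained, follows. The group sizes and wage changes are synthetic placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
changed = rng.normal(-12, 30, size=48)    # synthetic wage changes, activities with a factor change
unchanged = rng.normal(-28, 30, size=52)  # synthetic wage changes, activities without a change

f_value, p_value = stats.f_oneway(changed, unchanged)

# Eta squared: between-group sum of squares divided by total sum of squares.
all_obs = np.concatenate([changed, unchanged])
grand_mean = all_obs.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (changed, unchanged))
ss_total = ((all_obs - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F = {f_value:.2f}, p = {p_value:.3f}, eta^2 = {eta_squared:.3f}")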
With the exception of energy and ergonomics, activities that experienced a change in equipment technology showed a statistically significantly different decline in daily real wages. Activities with an equipment change in functional range or information processing experienced over 60% less of a decline in daily real wages compared to activities without such changes. One possible explanation for these differences is the added skills required for workers to adopt these types of equipment technology changes, which may result in higher wages. Activities experiencing a change in level of control actually experienced over 150% more of a decline in real wages compared to activities without such a change. A possible explanation for this added decline is that many changes in level of control serve to simplify work processes, which may result in lower wages. Further research is needed to examine other possible reasons.
4.2. Effects of Computer Usage on Construction Wages
One result of the previous set of analyses was that activities with a change in information processing saw substantially and significantly less of a decline in daily real wages compared to activities that did not experience such a change. Because this phase of the study was limited to examining changes in equipment technology that were widely diffused in construction, most of the changes in information processing were found only in heavy machinery. To further examine how changes in information processing affect construction wages, data from the CPS September 2001 Computer Supplement was analyzed.
4.2.1. Measured Computer Usage Among Non-Supervisory Construction Workers
Of the 470 cases analyzed in the CPS September 2001 Computer Supplement, 49 (10.4%) indicated that they used a computer at work. The top three occupations that used computers were: (1) electricians, (2) electrical power installers and repairers, and (3) plumbers. Occupations in which no respondents indicated they used computers at work included roofers, concrete and terrazzo finishers, electrician apprentices, hard and soft tile setters, insulation workers, and sheet metal duct installers. Unfortunately, the Computer Supplement data did not measure how the computers were used at work.
4.2.2. Relation Between Computer Usage and Wages in Construction
Data from the CPS September 2001 Computer Supplement was analyzed to examine the effects of computer usage on construction wages by comparing hourly wages between construction workers who use a computer at work and those who do not (Figure 4). The differences in education, work experience, and age between those who do and do not use a computer at work were also examined.
Information from the CPS is used to create more than 350 variables. The CPS, however, does not ask respondents about their work experience, an important consideration in a study on wage differentials. One method for estimating work experience, used by the BLS, is to calculate potential experience from CPS data using equation (1) (U.S. Department of Labor 1993). The units of potential experience are years.

Potential Experience = Age - 6 - Years of School    (1)

The variable for education was recoded by the researchers to represent the number of years of education completed at school. Women's work experience has been found to be substantially influenced by being married and having children. To avoid these influences, this study focused on men.
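A small sketch of equation (1) and the education recoding is given below, with illustrative values only. The actual CPS education variable is categorical, and the mapping used here is a hypothetical simplification rather than the researchers' recoding.

def potential_experience(age: int, years_of_school: int) -> int:
    """Equation (1): potential experience in years = age - 6 - years of school."""
    return age - 6 - years_of_school

# Hypothetical simplified recoding of a categorical education level to years of schooling.
EDUCATION_YEARS = {
    "less than high school": 10,
    "high school diploma": 12,
    "some college": 14,
    "bachelor's degree": 16,
}

worker_age = 41
worker_schooling = EDUCATION_YEARS["high school diploma"]
print(potential_experience(worker_age, worker_schooling))  # 41 - 6 - 12 = 23 years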
These analyses show that non-supervisory construction workers who use computers at work are paid significantly more than workers who do not use computers at work (the average hourly wage among workers who use computers was $18.43, compared to $15.56 for those who did not). At the same time, workers who use computers at work are statistically significantly more experienced (on average 22 years of experience, compared to 18 years for those who did not); more educated (on average 12.8 years of education, compared to 11.6); and older (on average 40.8 years old, compared to 35.7). Although this analysis indicates a relation between higher wages and the use of computers for non-supervisory construction workers, it is not clear whether the higher average hourly wage is due to computer usage or merely a reflection of already established relations with experience, education, and age.
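The comparison between computer users and non-users can be sketched as a two-sample test on hourly wages, as below. The wage arrays are synthetic stand-ins built around the reported means ($18.43 vs. $15.56) and group sizes (49 and 421); disentangling the wage gap from experience, education, and age would require a regression, which the paper leaves to further research.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wages_computer = rng.normal(18.43, 5.0, size=49)      # synthetic, 49 computer users
wages_no_computer = rng.normal(15.56, 5.0, size=421)  # synthetic, 421 non-users

t_stat, p_value = stats.ttest_ind(wages_computer, wages_no_computer, equal_var=False)
print(f"mean gap: {wages_computer.mean() - wages_no_computer.mean():.2f} $/hour, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")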
The findings reported here indicate that:
1. The decline in real wages exists throughout all sectors and divisions in construction.
2. Activities that experienced a change in Functional Range and Information Processing experienced less of a decline in real wages compared to activities that did not.
3. Not all changes in equipment technology are related to lessened declines in real
wages. Activities that experienced a change in Level of Control actually experienced greater declines in real wages.
5. Non-supervisory construction workers who use computers at work earn higher hourly wages, although further research is needed to account for the effects of experience, education, and age.
Technology in Education
• The Web of Knowledge: Vision, Design, and Practice
• Intercultural Education and Virtual Reality
• Advancement of Science Knowledge In Language Learning ( ASKILL )
• Learning with the Internet
• Changing the Face of Education in Missouri
• Generation Y: Student Inclusion = Technology Infusion
• Technology and MI
• Linking Students with Their World: A Good Day in French Class
• Technology in Environmental Education
• Listen Up!: Using Audio Files in the Curriculum
• A New Generation Meets the Ancient Mariner
• Harnessing the Best of Technology for an Exceptional Information Literacy Library Program
• Working Together: Students with Disabilities and Computer Technology
• What's ONADIME? (Onadime Composer is a software toolkit for making multimedia, multi-sensory, real-time interactive computer programs for teaching, learning, and entertainment.)
• Questions for Potential Online Instructors
• Lessons on Teaching Writing from Website Design
• Clickers, Be Aware!
• Mr. Coulter's Internet Tendency: to Infinity and Beyond
• Instant Messaging: Friend or Foe of Student Writing?
• Chaim Potok's My Name is Asher Lev, Art History and Images From the World Wide Web
• The Learning Space: A Unique Online Community of Teachers
• Releasing the Isolated Warrior
• People Are the Only Thing that Matter
• The Future of Learning in a New Free World and how to Build a World Wide Learning Web
• Americans All: Searching for Sponsors for a History and Civics Data Base System
• The Guilds: A New Curriculum for Education and Internet Reform
Virtual and Augmented Reality:
• Virtual Reality In Education
• Learning Through Virtual Reality
• Augmented Reality in Education
• Augmented Reality and Education: Current Projects and the Potential for Classroom Learning
• Multimedia Technology and Children's Development
• Technology As the Catalyst
• Learning by Design: Integrating Technology into the Curriculum Through Student Multimedia Design Projects
• Multimedia Encourages New Learning Styles
Beyond the classroom:
• Using New Educational Technologies to Empower Youth: The Power of Youth-Adult Partnerships in e-Learning
• Inventing Workshops: Hands on Technology
• Giant Campus: Experience Based Technology Learning
• Technology Access Foundation (TAF)
• WildTech Learning
• Learning to Do: Students Develop IT Projects that Deliver Service
• A Call to Action: A Global Youth Empowerment Society (YES)
• Campaign Against American E-Partheid
• The Knowledge Web
• The New Basics: Education and the Future of Work in the Telematic Age
• Teaching Every Student in the Digital Age: Universal Design for Learning
• The Internet and the Law: What Educators Need to Know
• Using the Internet to Strengthen Curriculum
• Project-Based Learning Using Information Technology
• Making Technology Standards Work for You--A Guide for School Administrators
• Telecosm: How Infinite Bandwidth Will Revolutionize Our World
• NETS-S Curriculum Series: Multidisciplinary Units for Grades 3-5
• National Educational Technology Standards for Teachers: Preparing Teachers to Use Technology
• Visual Literacy: Learn to See, See to Learn
Development of Educational Aids for the Parents of Children Having Colostomy
Aim: To develop educational aids for parents of children having colostomy and to test their effectiveness.
Methods: Two educational aids, in the form of a booklet and a video film (computer disk), were developed and used to teach colostomy care to parents (n = 120) of children having anorectal malformations or Hirschsprung's disease.
Results: The developed educational aids were found to be effective in providing knowledge and skill to the parents (p < 0.05).
Conclusions: There is a great need to develop educational aids for parents; these can subsequently be used to teach procedures for the long-term home management of children born with congenital anomalies.
Key words: Colostomy care, parent education, congenital anomalies, anorectal malformations, Hirschsprung's disease.
The care of children with colostomy is a complex, challenging, and lengthy process, even though a colostomy in a child is often temporary. Because it alters the external appearance of the child, the psychological impact on the child and the family can be profound. Sometimes all care must be provided by the parents after the child is discharged from the hospital. Subsequent to colostomy, a large number of patients in India do not return for follow-up treatment. Probable reasons for this include colostomy complications, which contribute to a high infant mortality rate and probably compel the parents to delay or postpone follow-up visits. Such situations affect management and have an impact on the child's prognosis. The parents need ongoing education and support, commencing with pre-operative teaching and continuing through discharge from the hospital and home care. Nurses can help the parents of such children by teaching them about home care and the proposed treatment protocols. Moreover, they can direct the parents to appropriate resource agencies and provide them with guidance and support.
The present study was undertaken to develop educational aids for parents of children having colostomy and to test their effectiveness.