Issues such as the validation of learning and the verification of student assessment have been areas of contention in distance education since its inception. These issues have slowed the acceptance of distance education delivery by accrediting bodies and higher education institutions. It was the advent of advanced telecommunication and computer systems, combined with the paradigm shift in education and the ability to measure outcomes creatively, that brought credibility to degrees received through distance education.
Just as there is not one "correct" model for distance education delivery, there is also not a singular best way of assessing student progress. Perhaps the most important element in utilizing traditional and alternative assessments in distance education programs is to ensure that the tool fits the mode of delivery. Allowing students to complete a final examination in the privacy of their home and mail it back to the instructor, for example, may raise ethical questions about the validity of the results and the legitimacy of the educational program. At the other extreme, requiring students to appear in person at a testing center may not be a practical alternative either. In order to preserve the value of the assessment results, academicians have had to create some unique testing alternatives. This paper evaluates several of those synchronous and asynchronous assessment methods.
Synchronous Assessment Methods
Synchronous assessment methods encompass any form of testing where the instructor and students are interacting in real-time during the assessment. In general, distance education programs are very student focused and utilize some form of competency-based instruction. According to the Western Governor's Virtual University design team, "creating a competency-based approach to assessing and certifying learning at the postsecondary level is paramount. This means carefully selecting among available assessment tools, fostering the development of new tools, and engaging in the difficult task of equating competencies of various kinds with certifications and degrees having acceptance in the worlds of work and academe" (1997, p. 10).
The use of competency-based models of instruction lends itself to the use of criterion-referenced assessments. Worthen et al. (1993) define a criterion-referenced instrument as one that "interprets an individual's score by describing how well or to what extent a student can perform the tasks in some well-defined domain of valued tasks" (p. 73). When used in a synchronous arena, there are many effective criterion-referenced assessment models.
The author's company uses NetMeeting to facilitate a semistructured interview that could be classified as a synchronous criterion-referenced assessment. In this format, the instructor and student have live audio carried through the Internet. The instructor can ask a student a series of questions, or can display a diagram on the screen and ask the student to identify various components. This form of assessment is also effective using just telephone communication, as seen in Capella's method for dissertation defenses.
Programs that use sophisticated instructional video (IV) technology have access to high-tech assessment methods. Dr. Wilson (1997) explains that with IV, "the instructor can launch a QNA (question and answer) application containing pre-loaded questions for students to answer. The percentage of various answers are displayed on the instructor's screen, providing a completely anonymous method to verify student progress" (p. 15). In addition, as long as there is two-way video, the instructor could conceivably just email traditional examinations to students and watch them complete and email the exams back. This would help verify that the student who was enrolled in the program actually took the assessment.
The availability of real-time conferencing tools has made the delivery of exams for remote students possible. One typical exam format is where students are asked one question at a time, similar to oral exams, and are required to type in answers within a limited time-frame (Kouki & Wright, 1996, p. 9). Accreditation agencies prefer this method of synchronous testing because the teacher has significant interaction with the remote students during the examination.
Some community college-based distance education programs have opted for an on-site student-service center approach. In this design, all of the courses are completed remotely, with the testing done at a regional exam center. The advantage of this method is that rural students can attend a state university without relocating and only need to travel to complete final exams. The testing applicant is identified and monitored during the exam, and the opportunity for cheating is therefore minimized. In addition, students receive interpersonal contact from testing center staff, tutors, and financial aid personnel, thereby making them feel more connected to the institution. The downside to this component of certain distance education programs is that it increases the cost to students who need to travel to take exams, and to the institutions that have to staff the testing facility. It is also time consuming, and many distance education students may not enroll under those conditions.
Synchronous assessment models play an important role in legitimizing the distance education process because dishonesty is minimized and the instructor has continual management of the testing environment. Many recent studies regarding adult learning paradigms, however, tout the importance of group learning. "Group projects are an increasingly essential part of assignments. The working world is one of working groups, and student exposure to the benefits and pitfalls of group work is assumed to be beneficial for all students" (Becker & Dwyer, 1998, p. 1). One of the challenges facing distance education involves the incorporation of group assessment into distance learning programs. Recent advances in groupware, such as Lotus' LearningSpace and IBM's ConferenceWare, have simplified issues such as scheduling, monopolizing of the conversation, and impersonalization that have plagued the distance education group process and slowed its acceptance.
Asynchronous Assessment Measures
The majority of assessments used in distance education are conducted in an asynchronous environment, where the assessment is completed outside the presence of an instructor. These assessments can take many different forms, from traditional examinations to alternative measures such as portfolios or student diaries. Regardless of the format, the tool must legitimately and honestly measure the desired outcome. Institutions that carelessly administer inappropriate assessments will further the notion that distance education has no place in mainstream higher education.
As seen in the Capella model, Asynchronous Learning Networks (ALNs) are a very effective method for the instructor to evaluate student progress throughout the course. Winiecki (1997) explains that ALNs "are a system of distance education in which the instructor and students interact through computer conferencing software and modem or network connections. An ALN is characterized by interactions that follow a many-to-many pattern - students and instructors sending messages to the entire class and to individual students at the same time" (p. 4). The benefit of this method of assessment is that, unlike a traditional exam, the student has time to absorb the question and post a response. The resulting answer is more representative of the learning and understanding that has occurred. When postings are evaluated over the length of a course, the pattern of learning becomes evident. The difficulty in using ALNs is that the message threads are sometimes difficult to follow. When this occurs, it can affect the quality of the student responses, thus biasing the results (Hutchby & Wooffitt, 1998).
Another common form of asynchronous assessment found in all academic venues is the use of writing assignments. Also known as "learner-content interaction", research papers allow students to synthesize material and course objectives in a way that is meaningful to them. "This form of interaction, some would argue, provides the most important interaction relationship to actually build knowledge through the learning process" (Lane et al., 1998, p. J7). Moore and Kearsley (1996) continue by saying that "it is interacting with content that results in these changes in learner's understanding, what we sometimes call a change in perspective, when the learners construct their own knowledge" (pp. 128-129).
There are many benefits to using writing assignments as a source of student evaluation. One of the primary reasons is that research papers can be tailored to meet the individual interests of the student while still providing evidence of learned course objectives. Student knowledge is expanded beyond the sometimes-narrow scope of the course, and new areas of interest are discovered. For all these positive factors, however, the use of writing assignments as assessment tools in distance education is not without controversy. The subjectivity associated with grading reports and research papers always presents a challenge, as does the possibility that the student did not write the document.
Demonstration of learning through the use of portfolios is another asynchronous alternative method of assessment that is gaining in popularity because of its ability to assess higher-order abilities. Worthen et al. (1993) sum up the value of the portfolio by stating that "the involvement of students in assessing their progress actively by reviewing and analyzing the performances documented in their portfolios is one of the greatest strengths of this method" (p. 436).
Portfolio use is common among some distance education institutions. Schools such as the University of Phoenix and Webster University evaluate student portfolios to help assess learning that has occurred prior to admittance. These portfolios, while not constructed as collaboratively as suggested by Worthen et al. (1993), do contain writings, certificates, articles, and other material essential to identifying the student's unique, lifelong educational experiences worthy of evaluation. Due to their extensive, time-consuming nature, portfolios are still not widely used as assessment instruments during the course of distance education programs.
Assessment instruments come in all shapes and sizes. Their fundamental purpose, especially in a higher education setting, is to ensure that students have comprehended the material at a deep level. Marton and Saljo (1976) explain that "a deep understanding is the capacity to use explanatory concepts creatively and leads to the ability of people to think about problem situations and devise new solutions to those problems" (p. 11). If this occurs, it means that the educational objectives, method of delivery, and assessment tools were all properly integrated. The next section of this paper evaluates methods of measuring progress in objectives relating to the cognitive, affective, and psychomotor domains in a distance education setting.
Cognitive, Affective, and Psychomotor Evaluation in Distance Education
All of the assessments discussed thus far measure cognitive objectives -- which evaluate understanding, thinking, and problem solving (Worthen et al., 1993). The cognitive domain is the most common domain assessed in higher education because targeted assessments are easily created and interpreted. The advances in telecommunications and computer systems, and the flood of high quality test generating software, make cognitive assessments a logical choice in distance education. "Online tests are administered by the student tracking software and draw upon all media available. Test items are matched to the lesson objectives. Objectives may be tested in multiple ways using multiple media to ensure complete mastery of the objectives" (Stubbs, 1997, p. 7).
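The item-bank approach described in the quotation above, in which test items are tagged with the lesson objectives they measure and delivered through more than one medium, can be sketched in a few lines of Python. The field names and selection rules below are illustrative assumptions, not a description of any actual test-generation software:

```python
# Minimal sketch of objective-matched item selection, assuming a flat
# item bank where each item records the objective it tests and the
# medium it uses (text, audio, diagram, ...). All names are hypothetical.

def build_exam(item_bank, objectives, items_per_objective=2):
    """Draw items so each objective is tested the requested number of
    times, preferring distinct media for each objective."""
    exam = []
    for obj in objectives:
        candidates = [it for it in item_bank if it["objective"] == obj]
        chosen, seen_media = [], set()
        # First pass: at most one item per medium, for variety.
        for it in candidates:
            if len(chosen) == items_per_objective:
                break
            if it["medium"] not in seen_media:
                chosen.append(it)
                seen_media.add(it["medium"])
        # Second pass: top up if media variety ran out.
        for it in candidates:
            if len(chosen) == items_per_objective:
                break
            if it not in chosen:
                chosen.append(it)
        exam.extend(chosen)
    return exam
```

Under this sketch, an objective with items in several media is probed through different media first, and only repeats a medium when the bank offers no alternative.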
Trinity Learning Solutions uses a variation of this method in its Guided Independent Study format. Instead of the software correcting the mistakes and re-cueing the student, the completed exam is graded by an instructor who gives personal guidance to the student in their weak areas. National testing facilities such as Sylvan Prometric, which administer computer examinations for medical examiners, CPAs, real estate agents, Microsoft, and the FAA, also use a cognitive online testing approach.
A recent development in the area of cognitive assessment in distance education is the use of adaptive tests. As opposed to traditional assessments, which have a set number of questions, an adaptive test starts at a basic cognitive level and then increases in difficulty. As soon as the appropriate level of competency is reached, the exam is concluded, sometimes after only 10 or 15 questions (as opposed to the usual 80 questions). The feedback to the author from Trinity students is that the adaptive tests are less confusing since there are fewer items to compare, as well as less stressful because the student either passes quickly or has the exam concluded if it is evident that competence has not been achieved. The students also say that the items are more relevant to the objectives learned. Since Microsoft introduced adaptive testing for their national certification exams, Trinity's pass rates have risen 9%.
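The stopping behavior the students describe can be sketched as a simple loop: difficulty rises as the student answers correctly, and the exam ends early on either a clear pass or a clear fail. The thresholds, item-bank layout, and function names below are hypothetical, not Microsoft's or Trinity's actual adaptive algorithm:

```python
# Illustrative sketch of adaptive-test logic; all parameters are
# assumptions chosen for the example, not a published specification.

def administer_adaptive_test(item_bank, answer_fn,
                             start_level=1, max_level=5,
                             passes_to_advance=2, fails_to_stop=3):
    """Ask items of rising difficulty; stop early on a clear pass
    (top level mastered) or a clear fail (a run of misses)."""
    level = start_level
    correct_streak = miss_streak = asked = 0
    while True:
        item = item_bank[level].pop(0)      # next item at this difficulty
        asked += 1
        if answer_fn(item):                 # student answered correctly
            correct_streak += 1
            miss_streak = 0
            if correct_streak >= passes_to_advance:
                if level == max_level:
                    return ("pass", asked)  # competency demonstrated
                level += 1
                correct_streak = 0
        else:
            miss_streak += 1
            correct_streak = 0
            if miss_streak >= fails_to_stop:
                return ("fail", asked)      # competency clearly not reached
```

With these example thresholds a consistently correct student finishes in ten items and a consistently incorrect one in three, which mirrors the short exams the students report.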
In contrast to cognitive tests, which measure some form of knowledge, affective tests measure variables such as attitudes, interests, and values (Worthen et al., 1993). The vague and general nature of affective assessments makes them problematic. However, distance education delivery does not exacerbate the problematic nature of affective measures, and in some ways, might even increase the validity of the results.
Common ways of measuring affective objectives, in traditional higher education or distance education, are the use of questionnaires, attitude essays, and student diaries. In Trinity's Microsoft Certified Systems Engineer program, an affective objective could be that the student becomes interested in investigating new networking alternatives. This could be measured by reviewing student notes to see if there are more references made to network solutions. Interviews with the instructor, where the student describes new network designs, would also demonstrate progress. Lastly, objective competency could be identified through the use of questionnaires that use attitude scales.
The anonymity of distance education may actually improve the results of affective measurements. Attitude scales tend to use questions that are highly reactive. As Worthen et al. (1993) point out, "the respondent can determine a question's purpose from reading it and can structure his response to fit the impression he wants to make" (p. 354). The lack of face-to-face peer and instructor interaction makes distance education students less threatened by affective measures and increases the likelihood that the answers are not skewed or dishonest.
Psychomotor objectives are the most difficult to measure in a distance education setting. Programs that require students to travel to a regional testing facility have the easiest time in assessing progress in psychomotor skills. Self-contained distance education programs have established creative and academically valid alternatives.
Expert evaluation is one way to measure psychomotor skills in a distance education setting. "When manual dexterity or technique is involved, peer evaluation is required. An expert is enlisted to perform the evaluation and the results are tracked and matched with a validity-checking cognitive test to ensure accuracy of the peer review" (Stubbs, 1997, p. 7). Oftentimes the reviewer is scrutinized by the institution and must sign an affidavit stating that the assessment was conducted according to specified guidelines.
Due to the lack of literature regarding psychomotor assessment in distance education, the author contacted two distance education facilities, First Institute and Eastern University, and discussed their techniques. According to Ron Beier, the President of First Institute, the school requires students to turn in videotapes that demonstrate the completion of various computer hardware-related tasks. He states that students "ham it up" and learn not only the objectives but also valuable lessons about presentation techniques that are often lacking in distance education programs.
When evaluating Trinity's use of cognitive, affective, and psychomotor assessments, it becomes painfully obvious that the institution lacks originality. Cognitive instruments are used almost exclusively, and the few affective objectives that are used are too vague to be useful. In order to improve the program, Trinity should evaluate affective assessments that have been validated and found to be reliable. In addition, many of the program's components would be ideal for psychomotor assessment. In keeping with the high-tech, high-touch philosophy of the institution, students could be issued a video card upon enrollment, which would enable them to create computer-generated videos instead of videotapes. Upon course completion, the videos could be placed online for the benefit of other students.
Both synchronous and asynchronous assessment methods can be used effectively in distance education. When choosing an assessment method, an institution must evaluate several criteria, including the subject matter, the consequences of dishonesty, the cost associated with the method, and the layout of the course. Instructors should be given a repertoire of assessment alternatives to ensure that distance education students are challenged and stay involved with the course. Probably the most important factor in choosing an assessment method, however, is that it matches the personality of the course and institution. For some degree granting schools this might mean proctored examinations at regional sites, while for other institutions it could include completion of a project or portfolio. Regardless of the method chosen, assessment in distance education must be a consistent measure of competence so that all educators will be encouraged to make alternative education systems a viable option.
References

Becker, D. & Dwyer, M. (1998, September). The Impact of Student Verbal/Visual Learning Style Preference on Implementing Groupware in the Classroom. JALN, pp. 1-9.
Hutchby, I. & Wooffitt, R. (1998). Conversation Analysis. Cambridge, UK: Polity Press.
Kouki, R. & Wright, D. (1996, July). Internet Distance Education Applications: Classification and Case Examples. ED, pp. 10-12.
Marton, F. & Saljo, R. (1976, August). On Qualitative Differences in Learning. British Journal of Educational Psychology, pp. 4-11.
Moore, M. G. & Kearsley, G. (1996). Distance Education: A Systems View. United States: Wadsworth Publishing Company.
Stubbs, P. F. (1997, July). Just-in-Time Training in a Virtual Classroom. ED, p. 7.
Willis, B. (1993). Distance Education: A Practical Guide. Englewood Cliffs, NJ: Educational Technology Publications.
Worthen, B. R., Borg, W. R. & White, K. R. (1993). Measurement and Evaluation in the Schools. White Plains, NY: Longman, Inc.
WGU Virtual University Design Team. (1997, January). The Western Governor's: A Proposed Implementation Plan. ED, p. 10.
Wilson, J. M. (1997, January). Expanding the Boundaries of the Virtual University. ED, pp. 15-16.
Winiecki, D. J. (1997). Becoming a Student in an Asynchronous, Computer-Mediated Classroom. Paper presented at the Third International Conference on Asynchronous Learning Networks: New York City.
About the Author
Ms. Jamie Morley has a BS in Business Administration and an MA in Organizational Management from the University of Phoenix. She is currently enrolled at Capella University, where she is pursuing a Ph.D. in Education, with completion expected in May 2000. Her dissertation evaluates the role of learning style accommodation and curriculum design in distance education. Ms. Morley also owns a school that specializes in teaching high level computer engineering and networking classes through distance education. She is on the board of the International Distance Educators Association. She may be reached at: 11711 Herman Roser SE, Albuquerque, NM 87123; 505-292-4180; email@example.com or firstname.lastname@example.org