Association for Educational Communications and Technology
Presentation Number | 2108
Title | ACCOUNTING FOR INDIVIDUAL DIFFERENCES IN LEARNING: Where do we start and what are the implications for online instruction?
Program Area | Research and Theory
Date | 11/8/01
Location | Dorval
Length and Type of Session | 60 Minutes: Presentation, then Discussion
Presenter(s) | Joanne Bentley, Utah State University
Short Description | A comparison of two instruments designed to measure individual differences in learning, and a discussion of the implications for online instruction.
Abstract |

BACKGROUND

Over the years there have been many attempts to account for individual differences in learning. However, the difficulty of obtaining a stable measure of these differences has led many to conclude that they are indeterminable. Lacking consideration of the dominant influence of emotions and intentions on learning, both Cronbach (1957, 1975) and Snow (1987; Snow et al., 1990) were unable to find stable aptitude-treatment interactions at the cognitive level. However, both Snow and Cronbach found more stable attribute-treatment interactions at the conative level (Cronbach, 1975). In the late eighties, Snow (1987) described how cognitive psychology had demoted conation as a learning factor: since it seemed not to be a separable function, it was merged with affect, and together these factors were viewed as mere associates or products of cognition, and then ignored. He warned that individual difference constructs, or aptitude complexes, needed greater consideration of the joint functioning of cognitive, conative, and affective processes. Snow was in search of an information-processing model of cognition that would include (still as a secondary consideration) possible cognitive-conative-affective intersections. He was looking for a way to fit realistic aspects of mental life, such as mood, emotion, impulse, desire, volition, and purposive striving, into instructional models. According to Snow (1989), the best instruction involves treatments that differ in structure and completeness, interacting with high or low general ability. Highly structured treatments (e.g., high external control, explicit sequences and components) seem to help students with low ability but hinder those with high ability (relative to low-structure treatments). However, by treating individual differences in learning as a predominantly cognitive phenomenon, researchers may have unwittingly ignored a key element in the equation. More recent research (Snow & Jackson, 1993; Snow & Jackson, 1997; Jackson, 1998; Martinez, 2000) suggests that may well be the case.

PURPOSE

The purpose of this study was to discover how the Learning Orientation Questionnaire (LOQ) and the Herrmann Brain Dominance Instrument (HBDI) are related, in an attempt to sharpen and elaborate their respective score meanings and theoretical interpretations in accounting for individual learning differences. This study was the foundation for my dissertation and is a portion of the validation argument for the LOQ.

REVIEW OF LITERATURE

Understanding individual
differences in learning has been a major research
interest since World War I. Over the ensuing years there
have been many attempts to account for individual
differences in learning (Gagné, 1967; Glaser, 1972,
1976; Ackerman, Sternberg, & Glaser, 1989; Jonassen
& Grabowski, 1993). In the fifties, Cronbach (1957) optimistically challenged the field to find "for each individual the treatment to which he can most easily adapt"; however, perhaps due to the systematically cognitive approach used by researchers of the time, this challenge proved more complex than originally anticipated. Problems with obtaining a stable measure of these differences in learning, problems with finding stable interactions with treatment alternatives, and limited, expensive technology made creating computerized instruction that accommodated a broad range of individual differences very costly and time-intensive. During the era of media studies, it was common to assume that most people learned in a similar fashion. However, if we are intent on avoiding the "no significant difference" trap that Russell (1997) documents in his review of numerous media-impact studies, we should ask whether lumping together different types of learners may have confounded earlier research.

METHODS
Cronbach (1988) introduced the term "validation argument" to describe the process of establishing validity, which he described as an argument that must "link concepts, evidence, social and personal consequences, and values. . . . The 30-year-old idea of three types of validity, separate but equal, is an idea whose time is gone . . . validation is never finished." Building on Cronbach (1988), Martinez, Bunderson, & Wiley (2000) propose that the verification procedure in design experiments is a design process for establishing the various aspects of construct validity and the other aspects of a validity argument, thereby taking the idea of "constructing construct validity" one step further. Martinez, Bunderson, & Wiley (2000) go on to describe how convergent and discriminant studies add to the verification process by finding alternative measures of the same construct and comparing measurement outcomes across instruments, people, and occasions.

SUMMARY OF RESULTS

Based on expert judgment, items on the HBDI are primarily cognitive and items on the LOQ are primarily conative, confirming that the HBDI is more cognitively oriented and the LOQ more conative and affective. As experts sharpen distinctions between constructs, the clarity of their substantive processes increases, leading to improvements in the construct validity of the instruments. Of practical importance is that experts found the LOQ to measure constructs different from those of the HBDI. Although the HBDI is one of the broadest measures of individual differences, it does not significantly overlap with the conative and affective constructs measured by the LOQ. The correlations between the LOQ and the HBDI carry significance for the substantive processes operating in both instruments. The HBDI and the LOQ do converge around measures of high intentionality. Intentionality appears to be associated with HBDI scores in the Upper Right quadrant, right mode, cerebral mode, whole-brainedness, cerebral left whole-brained (CLWB), and cerebral right whole-brained (CRWB). LOQ total scores were more likely to correlate with multiple-quadrant combinations (or whole-brainedness) of HBDI scores than with single-quadrant HBDI scores. The Upper Right quadrant was the HBDI score most likely to correlate with high LOQ scores. However, high LOQ scores were also likely to correlate with HBDI multiple-quadrant combinations (or whole-brainedness) such as CRWB. (Additional results from the study, and the significance of these results, will be explained more fully in the paper and presentation.)

CONCLUSIONS & IMPLICATIONS

Assessing individual
differences in learning and then tailoring instruction to
fit students' needs is less challenging when you can interact face-to-face with your students over time. If one strategy doesn't work, you can try another, using verbal and non-verbal feedback to refine the delivery process. Over time, a student's preference for certain content-delivery styles becomes evident. The ability to identify students'
individual differences in learning and the opportunity to
dynamically tailor instruction for an individual has
always been possible in a traditional classroom but has
seldom, if ever, truly existed in computer-based
instruction (CBI). Convergent and discriminant validation
studies have been lacking in the past for both
instruments. This study has begun to address issues of
overlap and redundancy among individual difference
instruments important in teaching and learning
situations. Common areas in accounting for individual
learning differences have been highlighted while drawing
attention to distinctly different concepts for further
consideration by authors of both instruments.

REFERENCES

ADLNet. Advanced Distributed Learning Network (ADLNet) [Online]. Available: http://www.adlnet.org/ [June 26, 2000].
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
American Psychological Association. (1954). Technical recommendations for psychological tests and diagnostic techniques. Psychological Bulletin, 51(2, Pt. 2).
American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1974). Standards for educational and psychological tests. Washington, DC: American Psychological Association.
American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological tests. Washington, DC: American Psychological Association.
Ackerman, P. L., Sternberg, R. J., & Glaser, R. (1989). Learning and individual differences: Advances in theory and research. New York: W. H. Freeman.
Anderson, L. W., & Krathwohl, D. R. (Eds.). (in press). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Allyn & Bacon.
Babbie, E. R. (1986). The practice of social research (4th ed.). Belmont, CA: Wadsworth Publishing.
Bailey, K. D. (1982). Methods of social research (2nd ed.). New York: The Free Press.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: Inquiry into the nature and implications of expertise. Chicago: Open Court.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives. Handbook I: Cognitive domain. New York: David McKay.
Boyle, G. J. (1995). Myers-Briggs Type Indicator (MBTI): Some psychometric limitations. Australian Psychologist, 30(1), 71-74.
Bunderson, C. V. (1988). The validity of the Herrmann Brain Dominance Instrument. In N. Herrmann (Ed.), The creative brain. Lake Lure, NC: Brain Books.
Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.
Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 3-17). Hillsdale, NJ: Lawrence Erlbaum.
Curry, L. (1990). A critique of the research on learning styles. Educational Leadership, 48(2).
Felder, R. (1996). Matters of style. ASEE Prism, 6, 18-23.
Gagné, R. (1967). Learning and individual differences. Columbus, OH: Merrill.
Gall, J. P., Gall, M. D., & Borg, W. R. (1996). Applying educational research: A practical guide (4th ed.). New York: Longman.
Gay, L. R. (1996). Educational research: Competencies for analysis and application (5th ed.). Columbus, OH: Merrill.
Gardner, H. E. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.
Gardner, H. E. (1993). Multiple intelligences: Theory in practice. New York: Basic Books.
Gardner, H. E. (1984). Art, mind, and brain: A cognitive approach to creativity. New York: Basic Books.
Gardner, W. L., & Martinko, M. J. (1996). Using the Myers-Briggs Type Indicator to study managers: A literature review and research agenda. Journal of Management, 22(1), 45-83.
Glaser, R. (1976). Components of a psychology of instruction: Toward a science of design. Review of Educational Research, 46(1), 1-24.
Glaser, R. (1972). Individuals and learning: The new aptitudes. Educational Researcher, 1(6), 5-13.
Gredler, M. E. (1997). Learning and instruction: Theory into practice (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Hall, J. P., & Gottfredson, C. A. (in press). Evaluating web-based training: The quest for the information-age employee. In B. H. Khan (Ed.), Web-based training.
Herrmann, N. (1990). The creative brain. Lake Lure, NC: Brain Books.
Ho, K. T. (1988). The dimensionality and occupational discriminating power of the Herrmann Brain Dominance Instrument. Unpublished doctoral dissertation, Brigham Young University, Utah.
Jackson, D. (1998). An exploration of selected conative constructs and their relation to science learning (CRESST CSE Technical Report 467). Palo Alto: Stanford University, Department of Education.
Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning and instruction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Martinez, M. Learning Orientation Construct (LOC) [Online]. Available: http://www.trainingplace.com/source/research/learningorientations.htm#loc [July 31, 2000a].
Martinez, M. Learning Orientation Questionnaire [Online]. Available: http://www.trainingplace.com/source/research/questionnaire.htm#manual [July 31, 2000b].
Martinez, M., Bunderson, C. V., & Wiley, D. (2000, April). Verification in a design experiment context: Validity argument as design process. Symposium session at the annual meeting of the American Educational Research Association, New Orleans, LA.
Martinez, M., & Bunderson, C. V. (1999). Development of a self-report instrument for measuring learning orientations and sources for individual differences: Instrument testing and hypothesis refinement. Unpublished manuscript.
Martinez, M. (1999a). A mass customization approach to learning. ASTD's Technical Training Magazine, 10(4), 24-26.
Martinez, M. (1999b). An investigation into successful learning: Measuring the impact of learning orientation, a primary learner-difference variable, on learning. (University Microfilms No. 992217)
Martinez, M., Bunderson, C. V., Nelson, L., & Ruttan, J. P. (1999). Successful learning in the new millennium: A new web learning paradigm. Proceedings CD for the Association for the Advancement of Computing in Education WebNet 99 World Conference, Honolulu, HI.
Martinez, M. (1998). Development and validation of the intentional learning orientation questionnaire. Unpublished manuscript, Brigham Young University, Utah.
Martinez, M., & Bunderson, C. V. (1998). Transformation: A description of intentional learning. The Researcher, 13(1), 27-35. (ERIC Document Reproduction Service No. ED 408 260)
Martinez, M. (1997). Designing intentional learning environments. Proceedings of the ACM SIGDOC 97 International Conference on Computer Documentation, Salt Lake City, UT, 173-180.
Merrill, M. D. (1975). Learner control: Beyond aptitude-treatment interactions. AV Communication Review, 23, 217-226.
Messick, S. (1976). Individuality in learning: Implications of cognitive styles and creativity for human development. San Francisco: Jossey-Bass.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed.). New York: American Council on Education/Macmillan.
Messick, S. (1995). Validity of psychological assessment. American Psychologist, 50(9), 741-749.
The Ned Herrmann Group. (1989). Herrmann Brain Dominance Instrument profile interpretation package [Brochure]. Lake Lure, NC: Author.
Reeves, T. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20(2), 39-46.
Rock, D. (1983). The issues and concerns related to developing a construct validation program. Unpublished report. Princeton, NJ: Educational Testing Service.
Russell, T. (1997). Technology wars: Winners and losers. Educom Review, 32(2), 44-46.
Snow, R. E. (1989). Toward assessment of cognitive and conative structures in learning. Educational Researcher, 18(9), 8-14.
Snow, R. E. (1987). Aptitude complexes. In R. E. Snow & M. Farr (Eds.), Aptitude, learning, and instruction: Vol. 3. Conative and affective process analysis (pp. 11-34). Hillsdale, NJ: Lawrence Erlbaum Associates.
Snow, R. E., & Jackson, D., III. (1993). Assessment of conative constructs for educational research and evaluation: A catalogue (CRESST CSE Technical Report 354). Palo Alto: Stanford University, Department of Education.
Snow, R. E., & Jackson, D., III. (1997). Individual differences in conation: Selected constructs and measures (CRESST CSE Technical Report 447). Palo Alto: Stanford University, Department of Education.
Snow, R. E., Mandinach, E., & McVey, M. (1990). The topography of mastery assessment in instructional domains. Princeton, NJ: Educational Testing Service.
Sperry, R. W. (1977). Bridging science and values: A unifying view of mind and brain. American Psychologist, 32(4), 237-245.
Displayed with written
permission from Phil Harris/AECT (April 04, 2005). Presentation on
this site is © 2001 by AECT
Association for Educational Communications and Technology
Bloomington, IN 47408