

[Published originally in the May 2003 edition of Computing Research News, Vol. 15/No. 3, pp. 3, 9.]

Computer Science, Meet Learning Science

By Randy J. Hinrichs

The opportunities of the digital age have irreversibly disturbed the educational universe. Computer science envisions a connected world of rich, human-centric interfaces. The National Research Council's prominent study "How People Learn" (2000) tells us how experts learn, and calls for learners to be connected to outside experts, to use visualization and analysis tools, and to have learning opportunities with feedback, reflection, and revision.[1] So why are classrooms and lectures still our predominant learning environments? Where is the breathtaking mobile, adaptive learning software that has been promised? Where is the revolution in educational technology? Bringing together the expertise of computer science and learning science research in a partnership with industry, government, and academia may be just what is needed to achieve that revolution.

The global call to action has been sounded. The PITAC report [2] calls for using information technology to transform the way we learn; an upcoming CRA Grand Research Challenge report envisions providing a teacher for every learner; and the "No Child Left Behind" initiative makes assessment and accountability the vanguard of improving education. Industry, too, is playing its hand, investing in learning science and technology (LST) research to find the sweet spot for educational networks, tools, and platform design, and to nudge the transformation along for its information workers. Bottom line: surviving in a highly digitized global environment requires learning at the speed of a neuron firing.

Limitations of the Current Approach

Education and learning R&D are dramatically underfunded, both on an absolute basis and compared with other domains. According to the President's Committee of Advisors on Science and Technology (PCAST), R&D in K-12 education is funded at only 0.03 percent of total K-12 expenditures.[3] The Federation of American Scientists' study of international funding for research in educational technology found that the United States spent only $33 million on non-defense-related education technology R&D in 2000, compared with $95.6 million in Canada (where distance education is a key player) and $65 million in the European Union.[4]

Current funding levels are insufficient to teach us how to design and implement technology-enabled learning. Bloom (1984) found that 2-sigma gains could be achieved by giving each student a 1:1 experience with a tutor.[5] But we have not yet been able to use technology to scale that 1:1 tutoring experience. Preliminary results on effective uses of technology in education vary with the evolution of the technology and with societal adaptation to it. Venezky and Davis (2002)[6] report that technology, especially the web, can be a catalyst for improvement and innovation in education; but where transformative vision and inspiration lead, technology serves only as an additional resource, not as a catalyst. We have far to go, and our evolving global workforce is looking for a solution.

Setting a Vision for Change

There are attempts to set the vision for an educational transformation. The summit on The Use of Advanced Technology in Education and Training, convened by Commerce Secretary Don Evans and Education Secretary Rod Paige, identified 10 visionaries who explored the potential for technology to enhance education and highlighted the research needed to get there.[7] Other organizations, such as DARPA, the Learning Federation, the National Academy of Sciences, and the IEEE Learning Technology Task Force, continue to push a global LST research agenda.[8] Currently, LST research is scattered, and it is still quite difficult to find scalable environments grounded in learning science research, with a rich set of methods and tools for creating substantive change.

We need a call to action, a grand challenge. We can continue on this path of underfunded, small-scale, poorly coordinated investments in research, or we can create alliances between industry, government, and universities to develop a comprehensive, focused research and management plan. Such a plan can create the partnership needed to reduce redundancy, build significant open architectures and tools for distributed, qualified academic content, and integrate our research into a stream of prototypes and solutions that aim at advancing our noble cause of education and workforce preparedness.

To get there, a review of the state of the art in LST research is necessary. Many projects and programs exist, of course; to gain an appreciation for research directions, I looked at a few projects falling into three categories: Access and Navigation, Distributed User Modeling and Assessment, and Networked Simulations. The categories reflect a trend as I see it: first, build the infrastructure so people have easy access to quality educational materials; next, contextualize the content for the user to make it relevant and actionable; and then create immersive learning environments that increase time on task and intrinsic motivation.

Access and Navigation

Much of the research conducted to date has focused on getting content (libraries, lectures, and laboratories) online and making it available to learners. Everyone has their own mom-and-pop solution, with some attempts at reining in the chaos of the web. Self-assessment predominates, and the technology focuses on mobile, data-driven access and navigation. Results include collective course management environments (OKI),[9] federated databases that turn the web into an instrument (Sloan Digital Sky Survey,[10] iLabs [11]), standards for design, communication, and data reporting (SCORM [12]), and free courseware online (Merlot,[13] NEEDS,[14] OCW [15]). Baker (2002) suggests that, moving forward, we need to dynamically generate content and adapt to, and compensate for, limits in users' expertise, interest, or time.[16] Access and navigation are fundamental to building digital education; interactivity based on who you are is the next evolutionary step.

Distributed User Modeling and Assessment

Learning environments with rich interaction and collaboration increase the chances for deep learning. A central claim of Knowing What Students Know (2001) [17] is that creating meaningful, assessed learning environments requires a model for observing student behavior and a method for drawing inferences about student knowledge from that behavior. Computers can act on these inferences in two ways: they can use AI or Bayesian adaptive models to feed information back to the student based on the student's answers, or they can use peer-to-peer networks and real-time protocols to enable human-to-human interaction, creating dynamic collaboratories that engage learners in working together both synchronously and asynchronously. The first approach scales; the second does not.
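To make the Bayesian adaptive route concrete: one widely studied instance is Bayesian knowledge tracing, the kind of student model used in the Cognitive Tutors family. The sketch below is illustrative only; the slip, guess, and learn probabilities are hypothetical values, not parameters from any cited system.

```python
# Illustrative Bayesian knowledge tracing (BKT) update.
# The tutor maintains P(student has mastered the skill) and revises it
# after each observed answer, then folds in the chance that the student
# learned the skill on this practice opportunity.
# All parameter values are hypothetical.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Return the updated probability of mastery after one answer."""
    if correct:
        # Bayes' rule: P(mastered | correct answer)
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # Bayes' rule: P(mastered | incorrect answer)
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # Transition: an unmastered skill may be learned on this step.
    return posterior + (1 - posterior) * p_learn

# A short answer sequence drives the belief up, down, and back up.
belief = 0.5  # prior probability of mastery
for answer in [True, True, False, True]:
    belief = bkt_update(belief, answer)
```

A tutor can use the running `belief` to decide when to stop drilling a skill or what feedback to give; this is the sense in which the AI approach scales, since each update is a few arithmetic operations per answer rather than a human tutor's time.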

AI environments that adjust to the user and provide iterative feedback through problem solving include Cognitive Tutors,[18] BioLogica,[19] Andes,[20] and the IMMEX Project [21] at UCLA. Findings suggest that in these environments we can observe and augment students' problem-solving strategies with embedded assessment over the network. Prominent peer-to-peer environments include the Learning Experience Project;[22] Learning by Design,[23] which studies shared understanding through whiteboarding; Livenotes,[24] which analyzes networked note-taking; and ActiveCampus,[25] which uses location-based mobile learning to track discussion groups, polling, and voting. Findings suggest that learning increases with both AI and human-intelligence (HI) feedback models. Understanding the user and the context of the user's learning needs, creating challenging activities that require the user to use the content, and assessing the user's performance in situ will advance the effectiveness of digital environments for education.

Networked Simulations

Networked simulations make the invisible visible. The intent of this nascent research is to create a sense of presence in the learning environment: the student focuses on interactions, immersed in the content as an environment, role-playing among visual objects, constructing solutions by solving problems and thinking critically with others, and manipulating real-time equipment or scenarios that simulate the real world. The objective is to encourage decision-making based on experimentation, thus amplifying learning. The challenge is enabling scalable and affordable models. Several good examples of simulations for teaching have been implemented.

The Minimally Invasive Surgery Training System (MIST) [26] teaches laparoscopic surgery. The ICT Games Project [27] is building emotional reactions into gaming environments, while MIT's Games to Teach [28] and CMU's BioHazard research are producing models for teaching science and engineering in the university classroom with game-based technology. Cave technologies for immersing students deeply in visual environments, such as the Round Earth [29] work at the University of Illinois, show equal promise. Networked simulations hold the most promise for continuous learning in a rich environment of contextualized content, with the ability to adapt to the user at the level of the user's interaction. They also motivate students much as networked console games, such as those on the Xbox, already do, consuming hours of user attention. Imagine such learning environments extended with digitally instrumented technologies, persistent content, and multi-expert participation, leveled to the user's ability and focused on learning by doing.

Conclusion

If we start with this great base of learning research and couple it with computer science research, we can stimulate a revolution in learning. Computer science research has much to offer: human-computer interfaces, adaptive behaviors, interoperable geometries, 3D operating systems, and dynamic databases. This partnership of research agendas can help ensure quality access to learning, enrichment of the teacher and student experience, scalability across interoperable systems, standardization in tool and content development, new forms of meaningful interactivity, and the educational enchantment so many have hoped for. We do not need to imitate the classroom; we already have good ones. We need to make the device an alternative classroom, call people into it, and scale the classroom by uniting our knowledge of computers and learning.

At Microsoft Research we are addressing technology-based learning as a distributed lifelong learning challenge. Our efforts look at building real-time collaborative environments and web services for education as a platform for conducting research in education. We are using a shared source model for collaborations and partnering with universities worldwide in an invited RFP process. We are working with our government and private partners to define and develop a detailed learning science and technology roadmap to describe a research plan, along with a research management plan for implementing the roadmap. Our goal is to rally industry, government, and educators to use information to transform the way we learn, to provide a teacher for every learner, and to bring together learning science and computer science to serve the educational needs of our emerging digital workforce and lifelong learners.

The Computing Research Association can play a key role in enabling this fundamental transformation in education and training by focusing its intellectual laser beam on the partnerships needed to build momentum for a national program in learning science and technology R&D, uniting the discoveries of computer science and learning science, and aligning missions.


Randy J. Hinrichs, randyh [at] microsoft.com, is a Group Research Manager for Learning Science and Technology at Microsoft Research. He has worked as an educational technology researcher for 25 years and is one of the pioneers of the Learning Federation, a consortium of industry, government, and universities focused on an international research agenda for LST.

1. Bransford, John D., Ann L. Brown, and Rodney R. Cocking (Eds.), How People Learn (2000), see http://www.nap.edu/catalog/9853.html
2. http://www.hpcc.gov/pubs/pitac/pitac-tl-9feb01.pdf 
3. President's Committee of Advisors on Science and Technology, Report to the President on the Use of Technology to Strengthen K-12 Education in the United States, Section 8.4, March 1997, at http://www.ostp.gov/PCAST/k-12ed.html
4. http://www.fas.org/learn/intl_rev/ 
5. Bloom, B.S. (1984). The 2-sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4-16.
6. Venezky, Richard, and Davis, Cassandra. Quo Vademus? The Transformation of Schooling in a Networked World. OECD/CERI, Version 8c, March 6, 2002.
http://waldorf.eds.udel.edu/oecd/cases/CS_SummaryDraft8c.pdf 
7. http://www.ta.doc.gov/reports/TechPolicy/2020Visions.pdf 
8. Report of a Workshop on a Proposed Learning Federation, National Science Foundation (November 28-29, 2000): http://www.learningfederation.org/docs/NSF_2000_workshop.pdf  and IEEE Computer Society Learning Technology Task Force (LTTF): http://lttf.ieee.org/ 
9. http://web.mit.edu/oki/
10. http://www.sdss.org/
11. http://i-lab.mit.edu/
12. http://www.adlnet.org/
13. http://www.merlot.org
14. http://www.needs.org
15. http://ocw.mit.edu
16. Baker, Eva L. (2002). "Design of Automated Authoring Systems for Tests." In "Technology and Assessment: Thinking Ahead-Proceedings from a Workshop." Center for Education (CFE) http://books.nap.edu/books/0309083206/html/84.html#pagetop
17. Pellegrino, James W. (Editor), 2001. "Knowing What Students Know: The Science and Design of Educational Assessment." National Academy Press. http://www.nap.edu/catalog/10019.html
18. http://www-2.cs.cmu.edu/~pact/
19. http://biologica.concord.org/webtest1/about_biologica.htm
20. http://www.pitt.edu/~vanlehn/andes.html
21. http://www.immex.ucla.edu/IMMEXMainFrame.htm
22. http://www.conferencexp.net
23. http://www.cc.gatech.edu/edutech/projects/lbdview.html
24. http://www.cs.berkeley.edu/~mattkam/livenotes/index.html
25. http://activecampus.ucsd.edu/
26. http://www.hmc.psu.edu/simulation/Equipment/MIST%20VR%20Trainer/mist%20vr%20trainer.html
27. http://www.ict.usc.edu/disp.php?bd=proj_games
28. http://cms.mit.edu/games
29. http://www.evl.uic.edu/roundearth/


