Can tutoring make a dramatic difference in how much a child learns? A famous study says it can. In 1984, the University of Chicago’s Benjamin Bloom published a paper showing that being tutored was a more effective way of learning than conventional classroom instruction. Not just a little more effective, but tremendously more effective. Bloom found that tutored children outperformed children receiving conventional classroom instruction by a whopping two standard deviations, or two sigmas. According to Bloom, “the average tutored student was above 98% of the students in the control class.” (http://web.mit.edu/bosworth/MacData/afs.course/5/5.95/readings/bloom-two-sigma.pdf) Moreover, “90% of the tutored students … attained the summative achievement reached by only the highest 20% of the students under conventional instructional conditions.”
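That “98%” figure is simply the two-sigma point on a normal curve. A few lines of Python (purely illustrative, not from Bloom’s paper) confirm it:

```python
import math

def percentile_from_sigma(z):
    """Fraction of a normal population scoring below a point z
    standard deviations above the mean (the normal CDF)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# A two-sigma gain puts the average tutored student above roughly
# 98% of the control class, just as Bloom reported.
print(round(percentile_from_sigma(2.0), 3))  # → 0.977
```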

Unfortunately, there’s an obvious problem with conventional tutoring: it’s labor intensive, and therefore very expensive. Almost immediately after Bloom’s study was published, academics began experimenting with a cheaper, more practical alternative: computer tutoring. There are now many commercial tutoring products, and there’s been a great deal of new research. Where do we stand, 30 years after Bloom?

A major recent paper by Kurt VanLehn of Arizona State University looked at a large selection of studies—over 100—to see what could be learned about tutoring today. (http://www.tandfonline.com/doi/full/10.1080/00461520.2011.611369#.U0BMP61dXwo) VanLehn divided instruction into categories: no tutoring, or conventional classroom instruction; computer-based intelligent tutoring systems (ITS) that were based on a simple question-and-answer format; computer-based ITSs that were based on a step-wise progression; and human tutoring.

The assumption had long been that computer tutoring lags well behind human tutoring, so what VanLehn found was surprising. Answer-based ITSs produced a 0.31 sigma improvement over conventional classroom instruction, ITSs based on step-wise progression produced a 0.75 sigma improvement, and human tutoring produced a 0.79 sigma improvement. Human tutoring does seem to be the most effective form of tutoring, but only by a tiny margin.
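To see what those sigma figures mean in practice, here’s a sketch of how an effect size is typically computed, using Cohen’s d. The quiz scores below are made up for illustration; they are not data from VanLehn’s review:

```python
import statistics

def effect_size(treatment, control):
    """Cohen's d: difference in group means divided by the pooled
    standard deviation (equal-group-size form)."""
    mean_t = statistics.mean(treatment)
    mean_c = statistics.mean(control)
    var_t = statistics.variance(treatment)   # sample variance
    var_c = statistics.variance(control)
    pooled_sd = ((var_t + var_c) / 2) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical test scores for two small groups:
tutored = [82, 88, 75, 90, 85, 79]
classroom = [70, 74, 68, 77, 72, 66]
print(round(effect_size(tutored, classroom), 2))
```

An effect size of 0.79, like the one VanLehn found for human tutoring, means the two groups’ averages sit about four-fifths of a standard deviation apart.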

The improvements found with tutoring are quite substantial, though not as large as the improvements Bloom found. To find out why, VanLehn took a closer look at Bloom’s data. In the original studies, students took quizzes after each unit of work. They moved on to the next unit if their scores were adequate; otherwise, they studied the original unit further. Classroom students moved on when their scores were at least 80%, while tutored students moved on only when their scores were 90% or above. VanLehn surmises that this higher threshold for mastery may be one reason for the astonishing results Bloom observed.
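The mastery rule in Bloom’s studies is easy to sketch. This toy Python function (an illustration, not Bloom’s actual procedure) shows how the two thresholds treat the very same quiz score:

```python
def next_unit(quiz_score, mastery_threshold):
    """Advance to the next unit only when the quiz score meets the
    mastery threshold; otherwise restudy the current unit."""
    return "advance" if quiz_score >= mastery_threshold else "restudy"

# The same 85% quiz under the two regimes Bloom's studies used:
print(next_unit(85, 80))  # classroom threshold -> advance
print(next_unit(85, 90))  # tutoring threshold  -> restudy
```

Under the tutoring regime, an 85% score sends the student back for more practice; in the classroom regime, the same score moves them ahead with gaps intact.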

In his blog Of That, Brandt Redd says there’s an emerging consensus about how to make computer-aided tutoring approach or maybe even achieve a two-sigma level of improvement. One of the most critical factors is that students should only advance when they’re truly prepared for the next level. Asynchronous learning, or advancing at an individual rate, is a related and important technique. (http://www.ofthat.com/2011/09/tackling-blooms-2-sigma-problem.html)

What makes computer tutoring so effective? It’s theorized that immediate feedback is one of the most important reasons it works so well. Feedback keeps students from getting completely off track and wasting time.

Clearly, the ability to work at a pace that’s comfortable for each individual is another critical piece of the puzzle. This “work at your own pace” dictum has become central to many tutoring platforms, such as Salman Khan’s super-successful online math instruction platform, Khan Academy.

Some theorists, such as Ken Koedinger of Carnegie Mellon, feel that computers may just be inherently better than humans as teachers. Data collected by a computer can be analyzed to determine how to maximize learning for each individual. In fact, Koedinger thinks learning is so complex that it requires a computer to thoroughly explore it.

For instance, Koedinger points to the spacing effect as one area where computers can be extremely helpful. It’s long been known that people learn best if they revisit the information to be assimilated at spaced intervals, rather than trying to learn information all at once. Coupled with the fact that people learn at different rates, tracking an optimum learning schedule for all the children in an entire classroom would represent an overwhelming amount of work for a teacher. Fortunately, this is the kind of chore a computer is happy to do.
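Here’s what such a schedule might look like in code: a minimal Leitner-style sketch (an assumption for illustration, not any particular product’s algorithm) that doubles the review interval after each correct answer and resets it after a miss:

```python
def next_review(last_interval_days, answered_correctly):
    """Leitner-style spacing sketch: double the review interval after
    a correct answer, reset to one day after a miss."""
    if answered_correctly:
        return min(last_interval_days * 2, 60)  # cap the interval
    return 1

# One student, one fact, over several review sessions:
interval = 1
for correct in [True, True, True, False, True]:
    interval = next_review(interval, correct)
    print(interval)  # intervals go 2, 4, 8, then reset to 1, then 2
```

Multiply that little loop by every fact and every student in a classroom and the bookkeeping quickly becomes impossible for a human, but trivial for software.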

With the new Common Core curriculum, it’s more important than ever that children be given tools that will enable them to succeed, and the advantages of computer tutoring shouldn’t be overlooked. One interesting approach currently in testing uses computers to analyze essays, looking specifically for the features that yield better test results under the Common Core rubric. WriteTecNYC (yes, based in New York City) has developed proprietary algorithms that predict how well or how poorly a student will perform. The feedback is immediate, and a student can accurately gauge their improvement over time. WriteTecNYC’s software at www.writetecnyc.com is currently available for free in beta, though it’s not clear how long that will last. Whether or not parents embrace analytical test-prep tools like this, computer grading will continue to be rolled out. Students who need an edge will likely use the most effective tools to maximize test results and improve their testing abilities over time.

Lani Carroll is a writer living in Colorado Springs. She loves Colorado microbrews and those rare moments when she can find the perfect word without resorting to a thesaurus. She's long been interested in the correlations between test scores and achievement. She can be found at her Google+ Profile.
