
Personalized Learning: What Does the Research Say? (Benjamin Herold) by larrycuban

Benjamin Herold is a staff writer for Education Week. He covers education technology and writes for the Digital Education blog. This post appeared on October 18, 2016.

The K-12 sector is investing heavily in technology as a means of providing students with a more customized educational experience.
So far, though, the research evidence behind "personalized learning" remains thin.
The U.S. Department of Education has given half a billion dollars to districts that embrace the trend, with limited findings to date. Since 2009, the Bill & Melinda Gates Foundation has committed $300 million to support research and development around personalized learning, but officials there say it's still "early days" for the field. School and district leaders have helped turn personalized learning into a multimillion-dollar market, but evaluations of their efforts remain scattered. (The Gates Foundation helps support Education Week's coverage of personalized learning.)
One big problem: proponents have struggled to define personalized learning, let alone demonstrate its effectiveness. The purpose, tools, and instructional techniques that make up the notion vary considerably, depending on whom you ask.
While a fair amount of research exists on specific personalization strategies, such as the use of adaptive math software, the literature includes very little on personalized learning as a comprehensive approach.
There are some bright spots. Researchers have found promising early signs at some schools, and some software programs have been associated with significant improvements in student learning and engagement.
But so far, at least, such encouraging results are often highly dependent on local context and how well a particular approach was implemented. That makes it hard to draw sweeping conclusions.
For skeptics, those dynamics reflect a larger problem. Given the unclear findings around personalization via technology, critics argue, schools would be much better off investing in proven strategies that rely on increased teacher-to-student interaction, such as smaller class sizes.
To better understand what the research on personalized learning does—and doesn't—say, Education Week reviewed dozens of studies and talked with experts from a range of fields.
The takeaway for school and district leaders?
Don't believe the hype—at least not yet.
"Personalized learning holds promise, but there's still a lot of work to do to figure out how well this is working," said John F. Pane, a senior scientist and the distinguished chair in education innovation at the RAND Corp. "People who are thinking from a programmatic and implementation point of view should not necessarily buy into the advocacy around how great this is."
The Gates/RAND Studies
In 2015, Pane and his RAND colleagues undertook the field's most comprehensive study to date. They found that 11,000 students at 62 schools trying out personalized-learning approaches made greater gains in math and reading than similar students at more traditional schools. The longer students experienced "personalized-learning practices," the greater their achievement growth.
Those results captured the attention of such luminaries as Facebook founder Mark Zuckerberg, who cited the RAND study as one of the reasons he's willing to bet billions on personalized learning's future.
Also enthusiastic: Brad Bernatek, a senior program officer who oversees research for the Gates Foundation, which funded RAND's research and gave grants to the schools in their study.
"The results were encouraging, promising, and academically meaningful for the students in these schools," Bernatek said. But, he quickly added, "they were by no means definitive."
Indeed, some observers suggest the Gates/RAND study doesn't actually say much about whether the approach can work in typical K-12 environments. One reason: The schools in the study employed a wide range of instructional practices, many of which are also used at more traditional schools (such as grouping students based on performance data).
Furthermore, the schools in the study were mostly charters that won competitive grants. Did students gain academically because their schooling was "personalized," or did they gain because they were in high-functioning schools that received extra resources?
"I think it's still early days," Bernatek concluded. "That's the biggest takeaway."
Implementation Studies
The broadest look at how schools have implemented personalized-learning strategies comes via the federal Education Department, which gave more than $500 million to 21 districts, consortia, and charter networks in 2012 and 2013 as part of its Race to the Top-District program.
Unfortunately, the research findings to date from those grantees are not particularly deep, and the department's own reports have sometimes drawn conclusions that don't seem warranted by the available evidence.
For example, a set of recent case studies claimed that personalized learning had sparked "cultural shifts and transformed student learning" in such places as Miami-Dade County, Fla., and New Haven, Calif. To back that up, the report cited largely soft and self-reported findings, such as teachers saying they are now more comfortable taking classroom risks.
Department officials do say that many of their grantees show signs of promise on "leading indicators," including lower suspension rates for middle and high school students. Some grantees have also seen student-achievement growth, both on state exams and local interim assessments. The department expects more substantial evaluation reports to be released, beginning next year.
Other implementation studies of note include an ongoing look at a personalized-learning initiative in the Baltimore County district, where early results are mixed, and a generally positive case-study examination of California's Summit Schools charter network. Earlier this year, the Center on Reinventing Public Education, a nonprofit research center, also looked at the financial impact of implementing personalized-learning models, finding that money often went more to salaries, facilities, and operations than to technology and that schools often do a poor job of anticipating their costs.
Overall, though, the state of research around real-world implementations of personalized-learning models remains muddled and contentious.
Just look at New York City, where a nonprofit group called New Classrooms has been spreading a blended-learning approach to middle school math called School of One. In the model, up to 90 children share one large classroom with multiple teachers. Students work primarily on computers, progressing at their own pace through algorithm-generated "playlists" tailored to their individual needs.
Two studies by different researchers at Columbia University reached different conclusions.
The first covered 22 sites, but had a weak approach to comparing School of One students with children elsewhere. The results looked quite encouraging: By their second year in the program, School of One students' math skills had improved at a rate 47 percent faster than the national average, and those who started out the furthest behind made the largest gains. New Classrooms trumpeted the results aggressively.
The following September, though, another evaluation reached far less enthusiastic conclusions. That study used a much more rigorous methodology, but only included a handful of School of One sites. It found "neither very large positive nor very large negative effects relative to the math instruction that students would have otherwise received."
Students in the program also reported consistently feeling like they learned more when working directly with a teacher.
New Classrooms has barely mentioned those results, leading some observers to accuse the group of being more interested in positive public relations than helping the field learn what actually works.
For educators and administrators on the ground, it all adds up to continued uncertainty about who and what to trust.
Specific Software Programs
Perhaps the most-cited research into a particular digital tool used to support personalized learning is RAND's 2013 research on the effectiveness of Cognitive Tutor Algebra 1, an adaptive-software program for teaching math, originally developed at Carnegie Mellon University in Pittsburgh. The study was a rigorous randomized controlled trial, undertaken in a variety of real-world schools, putting it pretty close to the gold standard for education research.
Overall, the study found that compared with textbook-based curricula, Cognitive Tutor "significantly improved algebra scores for high school students," but that positive effect emerged only in the second year of schools' implementation.
Less-rigorous recent studies have examined other popular math software programs, including DreamBox, Khan Academy, and ST Math. Researchers generally found encouraging signs of positive effects on student achievement, and teachers and students typically gave positive reports on their experiences. But actual usage of the programs varied considerably, and researchers were unable to definitively attribute positive outcomes to the software alone.
Beyond that, the research base behind specific products is often very thin, with far more poorly designed studies done by companies themselves than robust evaluations conducted by independent third parties.
As a result, school and district leaders who want to pursue personalized learning need to be particularly savvy consumers, said Bart Epstein, the CEO of the Jefferson Education Accelerator, a network of researchers, educators, and entrepreneurs based at the University of Virginia.
That means asking hard questions about any claims a vendor is making, he said.
It also means understanding that context matters—just because a software program appears to have worked in one district doesn't mean it's going to work in another.
And perhaps most importantly, Epstein said, those on the ground need to understand that they're responsible for helping generate good research about personalized learning, too. Small districts might not be able to contract with RAND, but they can often reach out to a local university or engage in what's called "short cycle" evaluation to get formative feedback as they go.
"Any district that is bringing in a major new program should absolutely be budgeting for real research support," Epstein said. "Everyone wants someone else to spend the time and money to study [personalized learning.] But it's all of our responsibility."
larrycuban | May 21, 2017
