The website of Carnegie Learning, a company started by scientists at Carnegie Mellon University that sells classroom software, trumpets this promise: "Revolutionary Math Curricula. Revolutionary Results."
The pitch has sounded seductive to thousands of schools across the country for more than a decade. But a review by the U.S. Department of Education last year would suggest a much less alluring come-on: Undistinguished math curricula. Unproven results.
The federal review of Carnegie Learning's flagship software, Cognitive Tutor, said the program had "no discernible effects" on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive Tutor, "did not have statistically significant effects on test scores."
Amid a classroom-based software boom estimated at $2.2 billion a year, debate rages over technology's effect on learning, and how best to measure it. But it is hard to tell that from the technology companies' promotional materials.
Many companies ignore well-regarded independent studies that test their products' effectiveness. Carnegie's website, for example, makes no mention of the 2010 review, by the Education Department's What Works Clearinghouse, which analyzed 24 studies of Cognitive Tutor's effectiveness but found that only four of them met high research standards. Some firms misrepresent research by cherry-picking results, and promote surveys or limited case studies that lack the scientific rigor required by the clearinghouse and other authorities.
"The advertising from the companies is tremendous oversell compared to what they can actually demonstrate," said Grover Whitehurst, a former director of the Institute of Education Sciences, the federal agency that includes What Works.
School officials, confronted with a morass of complicated and sometimes conflicting research, often buy products based on personal impressions, marketing hype or faith in technology for its own sake.
"They want the shiny new one," said Peter Cohen, chief executive of Pearson School, a leading publisher of classroom texts and software. "They always want the latest, when other things have been proven the longest and demonstrated to get results."
Carnegie, one of the most respected of the educational software firms, is hardly alone in overpromising or misleading. The website of Houghton Mifflin Harcourt says that, "Based on scientific research, Destination Reading is a powerful early literacy and adolescent literacy program," but fails to mention that it was one of the products the Department of Education found in 2009 not to have statistically significant effects on test scores. Similarly, Pearson's website cites several studies of its own to support its claim that Waterford Early Learning improves literacy, without acknowledging the same 2009 study's conclusion that it had little impact.
And Intel, in a Web document urging schools to buy computers for every student, acknowledges that "there are no longitudinal, randomized trials linking eLearning to positive learning outcomes," yet nonetheless argues that research shows technology can lead to more engaged and economically successful students, happier teachers and more involved parents.
"To compare this public relations analysis to a carefully constructed research study is laughable," said Alex Molnar, professor of education at the National Education Policy Center at the University of Colorado. "They are selling their wares."
Carnegie officials say 600,000 students in 44 states use its products, many taking teacher-led classes three times a week with Carnegie-provided workbooks, and spending the other two class periods in computer labs using Cognitive Tutor. Officials declined to release annual revenue figures, but Carnegie Learning was acquired in August for $75 million by the parent of the for-profit University of Phoenix.