A Problem with the Yellow Pages
Karen Lochbaum earned her Ph.D. in computer science at Harvard University. At the time, she was at the forefront of new research about using computers to interpret language—but she didn’t want to follow many of her peers into academia.
“I wanted to write software and make stuff,” she says.
Karen stayed true to those aspirations, putting her talents as an up-and-coming software developer to work at Pearson. Today, she is Vice President of Technology Services at Pearson Knowledge Technologies.
During her academic studies, Karen worked for a time at Bell Laboratories. There, researchers were working with something called Latent Semantic Analysis, an early form of search technology, applied to the dense information packed into thousands of yellow pages listings. Telephone companies wanted their customers to have a better, easier, faster experience than thumbing through all those pages.
“Before this kind of technology,” Karen says, “you had to know that ‘doctors’ were listed in the ‘physician’ section, or ‘drugstores’ were listed as ‘pharmacies.’”
“Latent Semantic Analysis helped us develop ways for computers to learn about words and recognize other words and phrases that mean the same thing,” Karen says.
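The core idea behind Latent Semantic Analysis can be sketched with a toy example: build a term-document matrix, reduce it with a truncated singular value decomposition, and compare words in the resulting latent space. Words that never appear together but share context (like “doctor” and “physician,” which both appear alongside “clinic”) end up close together. The tiny corpus below is invented for illustration and is nothing like the scale of a real yellow pages system.

```python
# Minimal sketch of Latent Semantic Analysis (LSA).
# The corpus and vocabulary are invented for illustration only;
# real systems index thousands of documents and terms.
import numpy as np

terms = ["doctor", "physician", "clinic", "pizza", "restaurant"]
docs = [
    "doctor clinic",        # medical listing
    "physician clinic",     # medical listing
    "pizza restaurant",     # food listing
    "restaurant pizza",     # food listing
]

# Term-document count matrix: rows are terms, columns are documents.
A = np.array([[d.split().count(t) for d in docs] for t in terms], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]  # each term as a vector in the latent space

def similarity(w1, w2):
    """Cosine similarity between two terms in the latent space."""
    v1 = term_vecs[terms.index(w1)]
    v2 = term_vecs[terms.index(w2)]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# "doctor" and "physician" never co-occur in the same document,
# yet LSA places them far closer together than "doctor" and "pizza".
print(similarity("doctor", "physician"), similarity("doctor", "pizza"))
```

This is the sense in which the technology “learns” that two different words mean the same thing: shared context, captured in the reduced matrix, pulls synonyms toward the same region of the latent space.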
Fueling Everyday Technology
Today, the same technology helps companies like Google perform internet searches. Other companies like Amazon use it to create algorithms that suggest books based on what customers previously purchased.
It’s also the technology used today for automated scoring. Developed through decades of research by cognitive and language scientists, automated scoring technology provides consistent, accurate, and timely feedback on online tests and writing assignments. It can be used for both formative tests (which gauge student learning during a lesson) and summative tests (which assess, at the end of a year or course, how much content students learned overall).
Better Scoring, More Teaching
“It’s not just that we can take student essays and score them in a couple of seconds,” Karen says. “It’s all about consistency.”
“Computers apply the same standards to every essay every time,” Karen says. “The process gives students immediate, detailed feedback—and it allows teachers to do more teaching.”
While trained human scorers may be able to cross-reference student essays against a handful of standard essays called “anchor sets,” computers can reference thousands of essays.
“And human scorers are always performing spot checks to see if the automated process is producing what’s expected,” Karen says.
Scores Today, Complex Feedback Tomorrow
Karen and her colleagues have plans to make the automated process even more beneficial to learning.
“We don’t just want to score essays and point out grammar errors or spelling mistakes,” she says. “We want this system to give students more detailed feedback in the future. How can they improve the way they organize their thoughts on the page? What content was left out of an essay?”
When used in the classroom setting, there are many benefits to explore. “With automated scoring, kids receive instant feedback and can practice their writing a whole lot more,” Karen says.
“And teachers can focus on teaching.”
Automated Scoring, An Overview
Educators, students, and parents have asked for faster results on student performance in standardized testing to help inform teaching and learning. Automated scoring, which is built on and used in combination with human scoring, can help deliver on that goal.
So, what exactly is it? Automated scoring uses a computer to score open-ended test questions like essays. Experts train a computer, drawing on human inputs, to create a learning algorithm that can score an assessment as accurately as human scorers.
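The training step can be sketched in miniature: given essays that humans have already scored, fit a model that maps simple features of each essay to its human score, then apply that model to new essays. The essays, scores, and features below are invented for illustration; a real system draws on far richer features, such as LSA-based similarity to reference essays, and on much larger sets of human-scored examples.

```python
# Toy sketch of training an automated scorer on human-scored essays.
# All essays, scores, and features here are invented for illustration.
import numpy as np

# (essay text, score assigned by a human scorer)
training_essays = [
    ("short essay", 1.0),
    ("a somewhat longer essay with more words", 2.0),
    ("a fairly long essay that develops several distinct ideas in detail", 3.0),
]

def features(text):
    """Very simple features: a bias term and the essay's word count."""
    return [1.0, float(len(text.split()))]

X = np.array([features(t) for t, _ in training_essays])
y = np.array([s for _, s in training_essays])

# Least-squares fit: weights that best map features to the human scores.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def score(text):
    """Score a new essay with the fitted weights."""
    return float(np.array(features(text)) @ w)

# The fitted model lands near the human score for each training essay.
print([round(score(t), 2) for t, _ in training_essays])
```

In practice the human scorers never leave the loop: their scores supply the training data, and their spot checks verify that the automated process keeps producing what's expected.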
Because this technology is new to many, we understand there may be some uncertainty about automated scoring. That’s why we’ve put together a list of five things you should know about automated scoring when it’s used for assessments: