While automated essay-grading machines can tackle 16,000 essays in 20 seconds, are they any good compared to a human? Online course administrators in the machine camp think so, but retired MIT professor Les Perelman has shown that such bots aren't about to pass a Turing test anytime soon. His proof is the Babel Generator, aka the Basic Automatic BS Essay Language Generator. In less than two seconds, the software can spit out an essay capable of scoring 90 percent or better on automated tests. The only problem is that while grammatically correct, the papers are utter gibberish. Here's an example he gave the Chronicle of Higher Education: "Privateness has not been and undoubtedly never will be lauded, precarious, and decent."
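Perelman hasn't published the generator's internals here, but the basic trick it exploits is easy to picture: graders reward fluent structure and lofty vocabulary, so grammatically valid templates stuffed with random academic-sounding words read as "good writing." Here's a toy sketch of that idea (the word lists and template are purely illustrative, not the actual Babel Generator):

```python
import random

# Illustrative vocabulary only -- NOT the Babel Generator's real word lists.
SUBJECTS = ["Privateness", "Curiosity", "Veracity", "Solitude"]
VERB_PHRASES = ["has not been and undoubtedly never will be",
                "will always remain", "may never become"]
ADJECTIVES = ["lauded", "precarious", "decent", "insidious", "salient"]

def babble(seed=None):
    """Fill a fixed grammatical template with random lofty words,
    producing a fluent-looking but meaningless sentence."""
    rng = random.Random(seed)
    subject = rng.choice(SUBJECTS)
    verb = rng.choice(VERB_PHRASES)
    a, b, c = rng.sample(ADJECTIVES, 3)
    return f"{subject} {verb} {a}, {b}, and {c}."

print(babble())
```

Every output is syntactically flawless and semantically empty, which is exactly the weakness the Babel Generator probes: the scoring software checks form, not meaning.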
You can stare at that as long as you want; it'll never make sense -- but such writing could still net you a high score on certain tests. The point is not to help cheaters, obviously, and Perelman isn't completely opposed to grading software. But he thinks it's far from ready to be relied on alone ("maybe in 200 years") and needs to be supplemented by actual teachers. In the end, he fears it's a money-saving shortcut that could actually harm students -- especially underprivileged ones who can't afford a traditional education.