Prepare for another set of judgmental eyes.
Now in its fifth year, the Large Scale Visual Recognition Challenge just wrapped up this weekend. With nearly 40 entrants from over a dozen countries, AI researchers attempted to mimic what you and I do every day: see the world. Teams from the National University of Singapore, the University of Oxford, Adobe Systems, the Center for Intelligent Perception and Computing at the Chinese Academy of Sciences and Google all used advanced software to identify different objects in a giant data set.
And they're making progress. Accuracy doubled and error rates were cut in half. But there's still a long way to go. Machines remain terrible at "scene recognition," which is deciphering what's happening in an image rather than simply what something is. "Human-level understanding is much deeper than machine image classification," said Olga Russakovsky, a Stanford University graduate researcher and the contest's lead organizer. "I can easily find an image that will fool the algorithm and I can't do it with humans, but we're making significant progress."
Perhaps the Large Scale contest will succeed where the Kinect ultimately failed in the marketplace. Devices that can read the world, instead of just your face, open up whole new possibilities, such as cars that recognize pedestrians or medical devices that can deliver a prognosis. Imagine a Kinect or PS4 Camera that could read the room you're in and give you fashion tips.
James Bridle famously predicted this direction, and our response to it, in his closing keynote at Web Directions in 2011. Titled "Waving at Machines," it laid the groundwork for what would become the New Aesthetic and highlighted how computers are trying to make sense of our world:
You can foresee a future when in entering a room this is what you’ll do, to identify yourselves not just to the people but to the computers and the machines who are watching us too. We’ll have entered into this dialogue with them, and we’re already doing it like this. We already share our world with these things that are watching us. And it can be creepy and it can be surveillance, or it can be a shared vision.
This weekend marked a big step along that path.