One of the stumbling blocks to the creation of what I've termed the "participatory panopticon" is the need to organize and structure terabytes of data. While it's possible to add tags and metadata by hand, nobody would want to take that kind of time -- the system itself needs to handle it. Marc Davis and his team at Yahoo! Research Labs in Berkeley, California, may have brought that capability a bit closer with a new way to allow cell phones to identify who you've just snapped a picture of.
The concept... is based on a central server that registers details sent by the phone when the photo is taken. These include the nearest cellphone mast, the strength of the call signal and the time the photo was taken. [...]
...in tests Davis and his team found that by combining [facial recognition software] with context information the system could correctly identify people 60 per cent of the time. The context information can also be combined with image-recognition software to identify places within photos.
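In code, the idea from the excerpt might look something like this: a per-photo context record (nearest mast, signal strength, capture time) whose score gets blended with the face-recognition score. This is a hedged sketch, not Davis's actual system; the field names, the 50/50 weighting, and `combine_scores` are all my assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoContext:
    """Hypothetical record the phone sends to the central server."""
    cell_mast_id: str        # nearest cellphone mast
    signal_strength: float   # call signal strength
    taken_at: datetime       # time the photo was taken

def combine_scores(face_score: float, context_score: float) -> float:
    """Toy fusion of a face-recognition score with a context prior
    (e.g. how often this person shows up near this mast at this hour).
    A real system would learn the weighting; 50/50 is a placeholder."""
    return 0.5 * face_score + 0.5 * context_score

ctx = PhotoContext("mast-0412", -67.0, datetime(2005, 4, 14, 21, 30))
# A weak face match can be rescued by strong context, and vice versa:
print(combine_scores(0.4, 0.9))
```

The point of the blend is exactly what the article claims: neither signal alone hits 60%, but context narrows "everyone on Earth" down to "people who were plausibly standing next to you".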
60% recognition? Not useful, yet -- but that's why it's still in the labs and not on your phone. This is just the sort of thing that will get much better, much faster than some might expect. Get ready.
Actually, I think 60% facial recognition might be very useful for pictures taken on a camera phone, especially if it can give a useful indication of confidence.
Say you take 100 pictures at a big event (party, family reunion, whatever). If the camera phone could identify the people in even 50% of them (with a pretty high level of confidence), you'd already be half done with the task of identifying who was in the pictures.
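That workflow is easy to sketch: auto-accept identifications above some confidence threshold and queue the rest for manual tagging. The threshold, the `triage` helper, and the sample data below are illustrative assumptions, not anything from the article.

```python
def triage(predictions, threshold=0.9):
    """Split (name, confidence) pairs into auto-tagged photos and
    a needs-review pile for manual tagging."""
    auto = [(name, c) for name, c in predictions if c >= threshold]
    review = [(name, c) for name, c in predictions if c < threshold]
    return auto, review

preds = [("Alice", 0.97), ("Bob", 0.62), ("Carol", 0.93), ("Dave", 0.40)]
auto, review = triage(preds)
print(len(auto), len(review))  # half tagged automatically, half left to do
```

Even a 60%-accurate recognizer is useful here as long as its confidence scores are honest: you only accept the sure bets and do the rest by hand, which is still a big savings over tagging everything yourself.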
I have no particular interest in using my camera phone to identify strangers, but having a feature like that to collect metadata for my photo library could save me a lot of time and effort.