Apple’s “Social Flash” Patent Could Lead to Innovative Mobile Social Video Capture

Recently, it was revealed that Apple has applied for a patent on “social flash” functionality in its iDevice cameras. Not a surprise, considering how popular iPhones have become in the last few years as photo-capturing devices. But far more intriguing than the possible professional applications of this function (master-slave relationships between devices that coordinate their flash and exposure to capture one well-defined image) are the possible applications in an active, fluid social setting. The app Vyclone has already taken the important first step in delivering the “social capture” concept to the public; its UI is quite impressive, and its functionality is fairly reliable. But if Apple and/or Google move quickly into development (or follow Yahoo and Facebook’s lead and simply buy and assimilate Vyclone into their platforms), social flash could be merged with social capture and adopted as part of their devices’ native camera function. That would improve on Vyclone’s product in two big ways.
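To make the master-slave idea a little more concrete, here is a minimal Swift sketch of one way a “master” device might broadcast a shared fire time that nearby “slave” devices schedule against. Everything here, including the message fields, the names, and the clock-offset handling, is an illustrative assumption on my part, not anything from Apple’s patent filing.

```swift
import Foundation

// Hypothetical message a master device might broadcast to coordinate
// a shared flash/exposure moment across nearby devices.
struct FlashTrigger: Codable {
    let fireAtUnixNanos: UInt64   // agreed wall-clock fire time
    let exposureSeconds: Double   // exposure duration each device should use
    let masterID: String          // which device actually fires its flash
}

// A slave receiving the trigger schedules its capture for the agreed
// instant, compensating for whatever clock offset it has measured
// against the master's clock.
func schedule(_ trigger: FlashTrigger,
              clockOffsetNanos: Int64,
              capture: @escaping () -> Void) {
    let localFire = Int64(trigger.fireAtUnixNanos) - clockOffsetNanos
    let nowNanos  = Int64(Date().timeIntervalSince1970 * 1_000_000_000)
    let delay = Double(max(0, localFire - nowNanos)) / 1_000_000_000
    DispatchQueue.main.asyncAfter(deadline: .now() + delay, execute: capture)
}
```

In practice the hard part is the clock synchronization itself, but the scheduling side really can be this simple once an offset is known.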

First: It’s evident when viewing clips made with Vyclone that a major drawback is the difference in camera quality across devices. There’s no way to guarantee that app users collaborating on a video have devices with equal ability to capture images and sound. The end result is a video that doesn’t flow well, because differences in resolution and frame rate are hard to ignore (for those of us acutely interested in aesthetics, anyway). Were social flash/capture a native function, it could be limited to devices that meet minimum hardware performance standards. Imagine twenty users wearing Google Glass watching the same event, each streaming to a network where other users edit and patch together their own clips from all of those feeds.
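As a rough illustration of that gating idea, a native iOS implementation could check the camera’s active format before letting a device join a session. This is a speculative sketch under my own assumptions; the 720p/30fps floor and the meetsCaptureFloor name are invented for the example.

```swift
import AVFoundation

// Hypothetical "minimum hardware standard" gate: only devices whose
// camera clears a resolution and frame-rate floor may join a session.
func meetsCaptureFloor(minWidth: Int32 = 1280,
                       minHeight: Int32 = 720,
                       minFPS: Double = 30) -> Bool {
    guard let camera = AVCaptureDevice.default(for: .video) else { return false }
    let dims = CMVideoFormatDescriptionGetDimensions(camera.activeFormat.formatDescription)
    let fpsOK = camera.activeFormat.videoSupportedFrameRateRanges
        .contains { $0.maxFrameRate >= minFPS }
    return dims.width >= minWidth && dims.height >= minHeight && fpsOK
}
```

A shipped version would presumably negotiate a common resolution and frame rate across the group rather than just rejecting devices, but a simple floor like this would already smooth out the worst mismatches.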

Second: Conceivably, if a social-flash app can detect other devices’ locations with precision (which we know Apple wants to do, as evidenced by its acquisition of the indoor-GPS startup WifiSlam), then it may also process the positions of several app users relative to one another in real time. One of the biggest complaints I and many others have with Vyclone is that you have no control over who your collaborators are; with social flash as a native function, users would simply need to share a common device, not an app (and naturally, iPhone or Android owners are far more numerous than Vyclone users). With native, position-aware coordination across devices, the camera could precisely synchronize exposure times and even sequence the shot. Imagine several social-flash users capturing the same subject in motion (spectacular sports plays, silly YouTube hijinks, concerts and recitals, etc.) and think of how easy it would be to freeze and “move through” that scene in crowdsourced, Matrix-style “bullet time.” It would turn your life event experience into something straight out of Minority Report.
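Here is one way that sequencing step could work, sketched in Swift: order the collaborators by their bearing around the subject, then stagger each device’s trigger time so successive frames sweep the scene. The Collaborator type, the bearing input, and the nanosecond timing are all hypothetical; a real system would feed this from the indoor-positioning layer and would still need tight clock synchronization across devices.

```swift
import Foundation

// Hypothetical collaborator record produced by the positioning layer.
struct Collaborator {
    let id: String
    let bearing: Double   // angle around the subject, in radians
}

// Assign each device a fire time so that captures sweep around the
// subject in bearing order: the raw material for bullet-time playback.
func bulletTimeSchedule(_ devices: [Collaborator],
                        startUnixNanos: UInt64,
                        interFrameNanos: UInt64) -> [(id: String, fireAtUnixNanos: UInt64)] {
    devices
        .sorted { $0.bearing < $1.bearing }   // sweep order around the subject
        .enumerated()
        .map { (i, d) in
            (id: d.id, fireAtUnixNanos: startUnixNanos + UInt64(i) * interFrameNanos)
        }
}
```

The interesting design choice is interFrameNanos: set it to zero and every device fires at once (a frozen orbit of the subject); set it to a few milliseconds and the “camera” appears to fly around the action as it unfolds.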

Watch for this concept to really take off within the next six months. Social flash/capture could change the very nature of user interaction in video and image capture. Users would no longer have to rely on other users downloading an app to create the experience; they’d only need similar devices and a shared interest in capturing a sports play, concert performance, or similar compelling moment in time. Couple the social flash feature with Apple’s and Google’s reliability and user-experience design, and it could be the biggest thing to happen to smartphone videography since Vine.