What is this project about?
The proliferation of cheap digital cameras in mobile and wearable devices, together with Internet-based publication platforms (e.g., blogs and social networks), has led to a massive increase in published photos of unintentionally or involuntarily photographed people. These photos also accumulate additional metadata, among other things through automatic face recognition or "tagging" on social networks. Since there is often no contact between the affected person and the photographer or publisher (an analog control gap), affected persons are often unable to exercise their legal personality rights (e.g., the right to one's own picture). With P3F, an understandable, personal "privacy policy" is inconspicuously encoded in the clothing and can convey self-chosen restrictions to be handled automatically upon publication or indexing by search engines. Simple licenses, similar to Creative Commons, encode attributes such as "Do not publish" or "Do not index". If required, the affected face can be rendered unrecognizable automatically.
What is the analog gap?
Many countries define legal rights regarding a person’s own image. However, they are not easy for a person to enforce. The image of a person might have been unintentionally captured by a photographer without the person noticing that his/her picture was taken, the person may simply not know the photographer, or the person may not know when and where his/her picture was published and in which context. This lack of knowledge can hinder the person from exercising his/her legal rights. Moreover, the person has no way to inform potential or actual picture takers of their self-chosen restrictions on how their image shall be handled. Likewise, a conscientious photographer might not have the chance to ask all the people whose image he/she captured for their consent to use their images. In any case, the person’s right to control how his/her image is used is lost due to a gap in the communication and control path from the person to the photographer and/or publisher of the photo.
How do you encode this information?
A modular visual coding system is used to convey the policy information across the communication gap described above. The policy is embedded in the visual information of the photograph (e.g., as part of the clothing), making it an inseparable part of the picture so that it is highly likely to survive along the publishing path. Under favorable conditions, this information is hidden in such a way that it goes unnoticed by the human eye.
What does this encoding look like?
Many garments are made with some pattern or print. By varying the appearance of this pattern slightly, information can be encoded. This information can then be extracted automatically by social networks, publishing websites, and search engines.
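As an illustration only (not P3F's actual coding scheme), the following sketch shows the basic idea: a repeating pattern is treated as a grid of cells, and each cell's brightness is nudged slightly up or down to carry one bit. All names and values here are hypothetical.

```python
# Illustrative sketch: encode/decode bits by slightly varying the
# brightness of cells in a repeating pattern. The real P3F coding
# scheme is more robust; this only demonstrates the principle.

BASE = 128   # nominal gray value of a pattern cell (assumed)
DELTA = 4    # subtle, near-invisible brightness offset per bit (assumed)

def encode(bits):
    """Map each bit to a cell value: slightly brighter for 1, darker for 0."""
    return [BASE + DELTA if b else BASE - DELTA for b in bits]

def decode(cells):
    """Recover bits by comparing each cell against the nominal value."""
    return [1 if c > BASE else 0 for c in cells]

policy_bits = [1, 0, 1, 1, 0]   # e.g., five restriction flags
pattern = encode(policy_bits)
assert decode(pattern) == policy_bits
```

A real implementation would additionally need synchronization marks, error correction, and robustness against perspective distortion, lighting changes, and fabric deformation.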
So you are using DRM techniques for the privacy of ordinary people instead of for the benefit of big content industries?
Yes, if you like.
Can the big internet companies be forced to obey my encoded privacy restrictions?
Maybe. There are a few examples in the past where exactly this has been done.
After a public outcry shortly after the introduction of Google Street View, the service started to blur faces and license plates. In Germany, Google additionally agreed to provide an opt-out feature after the Minister of Justice of Rhineland-Palatinate, the data protection supervisor for Schleswig-Holstein, and Germany’s Federal Consumer Protection Minister threatened the company with legal action. Since 2009, German homeowners have been able to have the image of their home blurred.
Another example is the integration of a banknote detection algorithm into popular software (e.g., Photoshop and PaintShop Pro), several printers, several scanners, and most color copying machines. In 2004, the Central Bank Counterfeit Deterrence Group (founded by the G10) published a Counterfeit Deterrence System software module for detecting banknotes, which has subsequently found its way into many products, even though it is only available as a closed-source module and there is no legal obligation for companies to include it.
Regardless, if P3F becomes an accepted standard, no one handling pictures professionally will be able to claim that he/she did not know about your wishes regarding your own picture.
So, is this a robots.txt for real-world objects?
With P3F you can restrict the usage of your personal image in more ways than just excluding it from search engines. Our framework consists of three simple person-related restrictions and two picture-wide restrictions.
The Do not Search flag specifies that the user does not want to be found through an internal or external search engine using a person-specific keyword. This includes the person’s real name, user name, birth date, and any other indexable data. Furthermore, it includes other images (e.g., "find similar faces," "find other pictures of the same user") or joined data (e.g., "other customers who bought this product," "friend of the person"). In the case of Facebook, the user accepts being identified ("tagged") in a photo but does not want this photo to show up if someone searches on his/her name or visits his/her timeline.
The Do not Identify flag specifies that the user does not want to be identified in a picture. This includes automatic face identification as well as manual name tagging by other users. If this information should become available by other means despite this specification, it is not to be included in a search index.
The Do not Publish flag specifies that the user does not want to have any pictures of him or her published. If the person is not the main subject (e.g., his/her image was unintentionally captured) his or her face should be blurred, pixelated, or covered to make identification impossible. The publisher (e.g., newspaper editor, blog writer, or uploading social network user) can also crop the picture to exclude the person in question. A modern publishing system can blur faces automatically in accordance with P3F policy.
Two additional picture-related flags complete the privacy policy. The No Geolocation data flag specifies that geographic location should not be added, displayed, or indexed for this picture, and the No Timestamp flag specifies that a timestamp should not be processed for this picture.
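The five restrictions above can be modeled as a simple bit field. The sketch below is one possible in-memory representation; the flag names and bit positions are illustrative assumptions, not the normative P3F encoding.

```python
from enum import IntFlag

class P3FPolicy(IntFlag):
    """Illustrative bit-field model of the five P3F restrictions."""
    DO_NOT_SEARCH   = 1 << 0  # person-related: exclude from search results
    DO_NOT_IDENTIFY = 1 << 1  # person-related: no face recognition / tagging
    DO_NOT_PUBLISH  = 1 << 2  # person-related: blur/crop before publishing
    NO_GEOLOCATION  = 1 << 3  # picture-wide: strip geographic location
    NO_TIMESTAMP    = 1 << 4  # picture-wide: strip timestamp

# A person who accepts publication but refuses indexing and geotagging:
policy = P3FPolicy.DO_NOT_SEARCH | P3FPolicy.NO_GEOLOCATION

assert P3FPolicy.DO_NOT_SEARCH in policy
assert P3FPolicy.DO_NOT_PUBLISH not in policy
```

A publishing system could evaluate such a policy after decoding it from the pattern, for example blurring the face when the publish bit is set or dropping Exif location data when geolocation is restricted.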
Under what license are the artefacts of this project distributed?
Artefacts will carry a Creative Commons (CC BY-SA) license; source code will be published under GPLv2 or similar. Scientific publications will be offered for free download under the preprint/open-access/self-publishing license of the appropriate conference. Citations are welcome.
