be misidentified and wrongly associated with items that are not relevant to them (note that even if face-recognition accuracy is high and the false-positive rate is low, the number of items and users is so large that many misidentifications would still occur). Interestingly, this seems to open a new type of privacy trade-off beyond the well-known privacy-utility trade-off: multiparty vs. individual privacy. Note, however, that a multiparty-individual privacy trade-off will not be needed if MP tools use privacy-preserving face-recognition methods,27 so that parties are recognized while their privacy is preserved.
Beyond photos, party recognition may be easier for some content types, such as events (people invited or attending are explicitly listed), and more challenging for others, such as text posts, in which affected users may not always be explicitly tagged.
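To make the scale concern concrete, the following back-of-the-envelope sketch plugs the 97.35% accuracy figure reported for DeepFace into assumed upload volumes; the daily photo count and faces-per-photo figures are purely illustrative assumptions, not sourced data:

```python
# Back-of-the-envelope: even a highly accurate face recognizer
# misidentifies many faces at social-media scale.
accuracy = 0.9735              # DeepFace's reported accuracy
error_rate = 1.0 - accuracy    # ~2.65% of decisions are wrong

photos_per_day = 100_000_000   # assumed upload volume (illustrative)
faces_per_photo = 2            # assumed average (illustrative)

wrong_per_day = photos_per_day * faces_per_photo * error_rate
print(f"{wrong_per_day:,.0f} misidentified faces per day")  # 5,300,000
```

Even under these assumptions, millions of faces would be misattributed every day, which is why the multiparty-individual privacy trade-off matters in practice.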
Support for inferential privacy. Another issue not previously considered in an MP context is that of inferential privacy. That is, it may not only be about what your friends say about you online, but also about what may be inferred from what they say, regardless of the type of content.
For instance, Sarigol et al.27 have demonstrated the feasibility of constructing shadow profiles of sexual orientation for both users and non-users, using data from more than three million accounts of a single OSN. Note that negotiations or agreements for the case of inferential privacy may be more complex, as the reasons not to publish content may concern not the content itself but the consequences in terms of the information that may be inferred from it. Solutions to this type of MPC might therefore be more difficult for users to comprehend, which would also challenge the usability and understandability of MP tools. Also, we are unaware of any social media site that provides users with any sort of controls for inferential privacy, let alone of any research that considers both MP and inferential privacy together.
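As a deliberately simplified sketch of the inference risk (not the actual method of Sarigol et al.; the homophily-based majority vote and all names here are illustrative assumptions), an attribute a user never disclosed can be guessed from friends' disclosures:

```python
from collections import Counter

def infer_attribute(target, friendships, disclosed):
    """Guess an undisclosed attribute of `target` as the majority
    value among friends who disclosed theirs (homophily assumption).
    Returns None if no friend disclosed anything."""
    friend_values = [disclosed[f] for f in friendships.get(target, ())
                     if f in disclosed]
    if not friend_values:
        return None
    value, _count = Counter(friend_values).most_common(1)[0]
    return value

# Toy network: "dave" discloses nothing, yet his friends' disclosures
# make his attribute guessable.
friendships = {"dave": ["alice", "bob", "carol"]}
disclosed = {"alice": "X", "bob": "X", "carol": "Y"}
print(infer_attribute("dave", friendships, disclosed))  # -> X
```

The point is that "dave" never published anything, so no per-item control he could set would prevent this inference, which is why MP tools would need controls that operate on what others reveal.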
Privacy-preservation guarantees. Last but not least, MP tools should provide some sort of individual privacy guarantees. This is particularly important when a multiparty agreement is not possible. For instance, a user may deliberately post content that defames another user. In these cases, there may be room for enforcing individual
with at most 50 participants.2,12–14,32,36 This is in part due to a distinct lack of systematic and repeatable methods and/or protocols to evaluate MP tools and compare them with each other. In order for evaluations to be more conclusive and generalizable, MP tools should be evaluated considering wider and more varied populations. Also, evaluation protocols should be developed with a view to maximizing ecological validity, which is particularly challenging
in this domain. Firstly, participants in user studies tend to be reluctant to share sensitive information with researchers37 (for example, photos they feel embarrassed about and prefer not to share online), which would bias any
evaluations toward non-sensitive issues
only, leaving out the scenarios where
the adequate performance of MP tools
would be critical. An alternative could
be evaluations with fake data/scenarios
where participants self-report how they
would behave, but the results may not
match participants’ actual behavior in
practice due to the well-known dichotomy between privacy attitudes and behavior.1 Secondly, conducting MP evaluations in the wild is very difficult, as it
would require all the users affected by a
particular piece of content to be studied
together to understand the conflicts and whether their solutions are optimal. A possible way forward
could be methodologies based on living
labs, which would integrate and validate
research in evolving real-life contexts.
Privacy-enhanced party recognition. Given a particular uploaded item, MP tools should derive the users who are affected by it. For instance, if a user uploads a photo and tags in it all the other users who appear in it, MP tools can use the tags directly to know which users are involved. However, users often either do not tag all the people clearly identifiable in a photo or incorrectly tag people who do not actually appear in it. Face-recognition software could be used for this task, such as DeepFace,34 developed by Facebook researchers, which achieves 97.35% accuracy.
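As a sketch of how an MP tool might reconcile explicit tags with recognizer output (the helper, names, and confidence threshold are all illustrative assumptions, not an actual API):

```python
def affected_parties(explicit_tags, recognizer_candidates, threshold=0.9):
    """Combine the uploader's explicit tags with face-recognition
    candidates above a confidence threshold. A real MP tool would
    also need to handle incorrect tags and recognizer false positives."""
    recognized = {user for user, confidence in recognizer_candidates
                  if confidence >= threshold}
    return set(explicit_tags) | recognized

tags = {"alice"}                     # uploader tagged only alice
candidates = [("alice", 0.99), ("bob", 0.95), ("carol", 0.40)]
print(sorted(affected_parties(tags, candidates)))  # ['alice', 'bob']
```

Here "bob" is recognized although the uploader never tagged him, while low-confidence "carol" is excluded; choosing the threshold is itself a trade-off between missing affected users and misidentifying uninvolved ones.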
The question that arises is whether using face-recognition software could be too privacy-invasive for individuals; that is, the social media provider would be able to identify individuals in any photo, even photos outside the social media infrastructure, or individuals could