Primitives Secure Against
Rubber Hose Attacks
By Hristo Bojinov, Daniel Sanchez, Paul Reber, Dan Boneh, and Patrick Lincoln
Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot
resist coercion attacks where the user is forcibly asked by
an attacker to reveal the key. These attacks, known as
rubber hose cryptanalysis, are often the easiest way to defeat
cryptography. We present a defense against coercion attacks
using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns
without any conscious knowledge of the learned pattern.
We use a carefully crafted computer game to allow a user to
implicitly learn a secret password without them having any
explicit or conscious knowledge of the trained password.
While the trained secret can be used for authentication, participants cannot be coerced into revealing it since they have
no conscious knowledge of it. We performed a number of
user studies using Amazon’s Mechanical Turk to verify that
participants can successfully re-authenticate over time and
that they are unable to reconstruct or even robustly recognize the trained secret.
Consider the following scenario: a high security facility
employs a sophisticated authentication system to check
that only persons who know a secret key, possess a hardware
token, and have an authorized biometric can enter. Guards
ensure that only people who successfully authenticate can
enter the facility. Suppose a clever attacker captures an
authenticated user. The attacker can steal the user's hardware token, fake the user's biometrics, and coerce the victim, by threatening with a weapon such as a rubber hose, into revealing the secret key. At this point, the
attacker can impersonate the victim and defeat the expensive authentication system deployed at the facility.
So-called rubber hose attacks have long been the bane of
security systems and are often the easiest way to defeat cryptography.12 The problem is that an authenticated user must possess authentication credentials, and these credentials can be extracted by force10 or by other means.
In this work, we present a new approach to preventing
a class of rubber hose attacks using the concept of implicit learning2, 7 from cognitive psychology. Implicit learning is
believed to involve the part of the brain called the basal
ganglia that supports skill learning for tasks such as riding a
bicycle and playing golf by repeatedly performing those tasks.
Experiments designed to depend primarily on implicit learn-
ing show that knowledge learned this way is not consciously
accessible to the person being trained.7 An everyday example
of this phenomenon is riding a bicycle: we know how to ride
a bicycle, but cannot explain how we do it. Section 2.1 gives more background on the relevant neuroscience.
Implicit learning presents a fascinating tool for designing coercion-resistant security systems. In this paper, we
focus on user authentication where implicit learning is used
to train the human brain on a password that can be detected
during authentication, but cannot be explicitly described by
the user. Such a system avoids the problem that people can
be persuaded to reveal their password. To use this system,
participants would be initially trained to perform a specific
task called Serial Interception Sequence Learning (SISL),
described in the next section. Training is done using a computer game that is known to depend largely on implicit
learning and results in knowledge of a specific sequence of
key strokes that functions as an authentication password. In
our experiments, training sessions last approximately 30 to
45 minutes and participants learn a random password that
has about 38 bits of entropy. We conducted experiments to
show that after training, participants cannot reconstruct the trained secret and cannot even robustly recognize it.
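The ~38-bit figure comes from choosing the password uniformly at random from a large space of admissible keystroke sequences: the entropy is simply the base-2 logarithm of the number of candidates. The sketch below is a hedged illustration of that computation only; the sequence length, number of keys, and no-immediate-repeat constraint are illustrative assumptions and do not reproduce the paper's exact SISL sequence structure.

```python
import math

# Illustrative model (an assumption, not the paper's exact scheme):
# length-30 keystroke sequences over 6 keys with no immediate repeats,
# chosen uniformly at random.
KEYS = 6
LENGTH = 30

# Number of such sequences: 6 choices for the first key,
# then 5 choices (anything but the previous key) for each of the rest.
num_sequences = KEYS * (KEYS - 1) ** (LENGTH - 1)

# Entropy of a uniform choice over N candidates is log2(N).
entropy_bits = math.log2(num_sequences)
print(f"{entropy_bits:.1f} bits")  # about 69.9 bits under this toy model
```

The real SISL sequences satisfy stricter structural constraints than the toy no-repeat rule above, which is why the paper's actual password space yields roughly 38 bits rather than the larger figure this simplified count produces.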
To be authenticated at a later time, a participant again
performs the SISL task with multiple embedded sequences,
including elements of the previously trained sequence. By
exhibiting reliably better performance on trained elements than on untrained ones, the participant validates his or her identity within 5 to 6 minutes. An attacker who
does not know the trained sequence cannot exhibit the
user’s performance characteristics measured at the end
of training. Note that the authentication procedure is an
interactive game in which the server knows the participant’s secret training sequence and uses it to authenticate
the participant. Readers who want to play with the system
can check out the training task at brainauth.com/testdrive.
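The authentication decision described above rests on comparing performance on trained versus untrained sequence elements. The sketch below is a minimal illustration of that idea only; the function names, hit-rate statistic, and threshold value are assumptions for illustration and are not the paper's actual decision rule.

```python
# Hedged sketch: authenticate by the gap between SISL hit rates on
# elements of the trained sequence vs. untrained (foil) elements.

def sisl_advantage(trained_hits, trained_total, foil_hits, foil_total):
    """Hit-rate on trained elements minus hit-rate on foil elements."""
    return trained_hits / trained_total - foil_hits / foil_total

def authenticate(trained_hits, trained_total, foil_hits, foil_total,
                 threshold=0.05):
    # A legitimate user reliably outperforms on trained elements;
    # an attacker, lacking the implicit knowledge, shows an advantage
    # near zero. The threshold here is an illustrative assumption.
    adv = sisl_advantage(trained_hits, trained_total,
                         foil_hits, foil_total)
    return adv > threshold

print(authenticate(700, 900, 600, 900))  # trained user, advantage ~0.11 -> True
print(authenticate(610, 900, 600, 900))  # attacker, advantage ~0.01 -> False
```

Because the server knows which embedded elements belong to the trained sequence, it can compute this gap without the user ever articulating the sequence itself.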
The original version of this paper appeared in the
Proceedings of the 21st USENIX Security Symposium, 2012.