4. USABILITY EXPERIMENTS
We report on preliminary experiments that demonstrate the
feasibility and promise of the SISL authentication system.
We carried out the experiments in three stages. First, we
established that reliable learning occurs with the new,
expanded version of the SISL task administered via Mechanical
Turk. Second, we verified that users retain the knowledge
of the trained sequence after delays of 1 and 2 weeks.
Finally, we investigated the effectiveness of an attack on
participants’ sequence knowledge based on sampling
the smallest fragments from which the original sequence
could potentially be reconstructed.
The experiments were carried out online on Amazon's Mechanical
Turk platform. The advantages of Mechanical Turk include a
practically unlimited pool of participants and relatively low cost.
One drawback of running the experiments online is that we had
relatively little control over whether users returned at a later
time for repeat evaluations.
4.1. Experiment 1: implicit and explicit learning
Our first experiment confirmed that implicit learning can
be clearly detected while explicit, conscious sequence knowledge
remains minimal. Experimental data from 35 participants were
included in the analysis.
The experiment used the training procedure described
in the previous section, in which the training phase contained
3780 total trials and took approximately 30–45 minutes to
complete. Recall that training consists of seven 540-trial
training blocks. After the training session, participants
completed a SISL authentication test that compares performance on the trained sequence to performance on two random test sequences.
Learning of the trained sequence is shown in Figure 3 as the
performance advantage (increase in percent correct responses)
for the trained sequence compared with the randomly occurring
noise segments. On the test block following training, participants
performed the SISL task at an average rate of 79.2% correct for
the trained sequence.
SISL authentication. To authenticate at a later time, a
trained user is presented with the SISL task where the structure
of the cues contains elements from the trained authentication
sequence and untrained elements for comparison.
By exhibiting reliably better performance on the trained
elements than on the untrained ones, the participant validates
his or her identity. Specifically, we experimented with the
following authentication procedure:
• Let k0 be the trained 30-item sequence and let k1, k2 be
two additional 30-item sequences chosen at random
from Σ. The same sequences (k0, k1, k2) are used for all
authentication sessions so that no additional information
about k0 is revealed.
• The system chooses a random permutation π of (0, 1, 2, 0, 1, 2)
(e.g., π = (2, 1, 0, 0, 2, 1)) and presents the user with a SISL
task with the following sequence of 540 = 18 × 30 items:
kπ(1), kπ(1), kπ(1), . . . , kπ(6), kπ(6), kπ(6).
That is, each of k0, k1, k2 is shown to the user exactly six
times (two groups of three repetitions), but ordering
is random. The task begins at the speed at which the
training for that user ended.
• For i = 0, 1, 2, let pi be the fraction of correct keys the
user entered during all plays of the sequence ki. The
system declares that authentication succeeded if

p0 > average(p1, p2) + s    (3.1)

where s > 0 is large enough to make it unlikely that this gap
arose by chance, but not so large as to cause authentication
failures for legitimate users (a sketch of this procedure
follows the list).
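As a concrete illustration, a minimal Python sketch of this assessment
procedure is given below. The function names (build_auth_sequence,
authenticate), the representation of responses as a list of booleans,
and the margin s = 0.05 are our own assumptions for illustration, not
parameters of the deployed system.

    import random

    def build_auth_sequence(k0, k1, k2):
        """Return the 540-item authentication stream and a parallel list
        of labels recording which of k0, k1, k2 each item came from.

        A random permutation of (0, 1, 2, 0, 1, 2) is drawn; each selected
        30-item sequence is presented three times in a row, so every k_i
        appears exactly six times (6 slots x 3 repetitions x 30 items = 540).
        """
        order = [0, 1, 2, 0, 1, 2]
        random.shuffle(order)              # the permutation pi
        keys = [k0, k1, k2]
        stream, labels = [], []
        for idx in order:
            for _ in range(3):             # three consecutive repetitions
                stream.extend(keys[idx])
                labels.extend([idx] * len(keys[idx]))
        return stream, labels

    def authenticate(correct, labels, s=0.05):
        """Apply the decision rule p0 > average(p1, p2) + s, where
        correct[i] is True iff the user hit item i of the stream and
        labels[i] says which sequence item i belonged to. The margin s
        is an assumed value for illustration."""
        p = []
        for i in range(3):
            hits = [c for c, lab in zip(correct, labels) if lab == i]
            p.append(sum(hits) / len(hits))
        return p[0] > (p[1] + p[2]) / 2 + s

A session would call build_auth_sequence once, present the 540 cues at
the user's final training speed, and pass the recorded per-item hits to
authenticate.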
In the above preliminary formulation, the authentication
process is potentially vulnerable to an attack in which an
untrained user deliberately degrades his performance on two of
the three blocks, hoping to produce an artificial performance
advantage for the trained sequence (and thereby obtaining a 1/3
chance of passing authentication). We discuss a robust defense against
this in Section 5, but for now we mention that two simple precautions offer some protection, even for this simple assessment procedure. First, verifying that the authenticator is a
live human makes it difficult to consistently change performance
across the foil blocks k1, k2. Second, the final training
speed reached during acquisition of the sequence is known
to the authentication server, and an attacker is unlikely to
match the corresponding performance difference between the trained
and foil blocks. A performance gap that is substantially different
from the one obtained at the end of training therefore indicates an attack.
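The second precaution can be operationalized as a simple consistency
check. The sketch below assumes the server stored the performance gap
measured at the end of training; both the stored gap and the tolerance
are illustrative assumptions rather than system parameters.

    def gap_is_consistent(p0, p1, p2, training_gap, tolerance=0.05):
        """Flag an authentication attempt whose trained-vs-foil
        performance gap differs substantially from the gap recorded at
        the end of training (assumed stored server-side)."""
        observed_gap = p0 - (p1 + p2) / 2
        return abs(observed_gap - training_gap) <= tolerance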
Analysis. The next two sections discuss two critical aspects
of this system:
• Usability: can a trained user complete the authentication
task reliably over time?
• Security: can an attacker who intercepts a trained user
coerce enough information out of the user to properly
authenticate?
Figure 3. Across training, participants gradually begin to express
knowledge of the repeating sequence by exhibiting a performance
advantage for the trained sequence compared to randomly
interspersed noise segments. Note that overall performance on the
task stays at around 70% throughout due to the adaptive nature of
the task by which the speed is increased as participants become
better at general SISL performance.
[Figure 3 plot: trained sequence advantage (%), 0–18%, plotted
against training block (540 trials each), blocks 1–7.]
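As the Figure 3 caption notes, overall accuracy stays near 70% because
the task adapts its presentation speed to the participant's general
performance. A minimal sketch of one such adaptive rule follows; the
target accuracy, step size, and update granularity are assumptions for
illustration, not the rule used in the actual SISL task.

    def update_speed(speed, recent_pct_correct, target=0.70, step=0.05):
        """Illustrative adaptive-speed rule: speed the task up when the
        participant's recent accuracy exceeds the target, slow it down
        when below, keeping overall accuracy near ~70%. All parameter
        values here are assumed."""
        if recent_pct_correct > target:
            return speed * (1 + step)
        if recent_pct_correct < target:
            return speed * (1 - step)
        return speed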