(taken immediately beforehand) with 100% accuracy. The fact
that the pulse-response reference is taken at the beginning
of the session and is used only during that session makes it
easier to overcome consistency issues that can occur when
the reference and test samples are days or months apart.
Unobtrusive. Users do not need to modify their behavior
at all when using the continuous authentication system.
Thus, user burden is minimal.
Difficult to Circumvent. With a true positive rate of 100%,
it is unlikely that the adversary can continuously
fool the classifier. Even if the adversary happens to have a
pulse-response biometric similar to the original user's, it must
evade the classifier on a continuous basis. We explore this
further in the security analysis section below.
6.3. Security
The adversary’s goal is to subvert the continuous authentication system by using the secure terminal after the original
user has logged in. In the analysis below, we assume that the
original user colludes with the adversary. This eliminates any
uncertainty that results from the original user “discovering”
that the adversary is using its terminal, which is hard to model
accurately. This results in a worst-case scenario, and the detection probability is a lower bound on the security provided by the
continuous authentication system.
An important measure of security is the detection time—
the number of times biometric acquisition is performed
between the adversary’s initial appearance and detection.
Obviously, longer inter-acquisition intervals imply slower
collection of measurements and thus slower detection of the adversary.
We model the probability of detecting an adversary using
two static probabilities derived from our experiments—an
initial probability α and a steady state probability β. A more
detailed model with several intermediate, decreasing probabilities could be constructed, but this simple model fits our
experiments quite well.
The probability α is the probability that the adversary is
detected immediately, that is, the very first time its
pulse-response is measured. However, if the adversary's
biometric is very close to that of the original user, the adversary might not be detected every time biometric capture is
performed. This is because the biometric is subject to measurement noise and the measurements from an individual
form a distribution around the “fingerprint” of that user.
If the adversary manages to fool the classifier once, it must
be because its biometric is close to that of the original user.
Thus, the adversary’s subsequent detection probability
must be lower:
P[X_i = adv | X_{i-1} = usr] ≤ P[X_i = adv]
We call this decreased probability β. The probabilities α
and β are approximations that model how similar two indi-
viduals are, that is, how well their probability distributions
overlap in about 100 dimensions. Using α and β we build
a Markov model, shown in Figure 2, with three states, to calculate the probability that the adversary is detected after i rounds.
When the adversary first accesses the keyboard, it is
either detected with probability α or not detected, with prob-
ability 1 − α. In the latter case, its pulse-response biometric
must be close to the original user's. Thus, β is used for the subsequent rounds. In each later round, the adversary is either
detected with probability β or not detected, with probabil-
ity 1 − β. To find the combined probability of detection after
i rounds, we construct the state transition matrix P of the
Markov model, as follows:

      [ 0   1 − α   α ]
P =   [ 0   1 − β   β ]
      [ 0     0     1 ]
Each row and each column in P corresponds to a state.
The entry in row q and column r, p_{qr}, is the probability of
transitioning from state q to state r. To find the probabilities
of each state we start with a row vector ρ that represents the
initial probabilities of being in states 1, 2, and 3. Clearly, ρ = [1, 0, 0],
indicating that we always start in state 1. The probability of
being in each state after one round (or one transition) can
be represented by the vector–matrix product ρP. Probabilities for
each subsequent round are determined via another multiplication by P. The probabilities of being in each state after
i rounds (state transitions) are therefore:

[1, 0, 0] · P^i = [0, (1 − α)(1 − β)^(i−1), 1 − (1 − α)(1 − β)^(i−1)]
As expected, the probability of being in state 1 (the initial
state) is 0, since the first state transition forces a transition
from the initial state and there is no way back (see Figure 2).
The probability of being in state 2, that is, of escaping detection for
i rounds, is given by the second element of ρP^i: (1 − α)(1 − β)^(i−1).
The probability of detection is thus 1 − (1 − α)(1 − β)^(i−1).
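The closed form can be checked by stepping the three-state Markov chain numerically. The following sketch is an illustration only; the transition matrix mirrors the one described above, and the values of α and β here are placeholders, not the measured ones:

```python
def detection_prob(alpha, beta, i):
    """Probability that the adversary is detected within i rounds,
    computed by stepping through the three-state Markov model."""
    rho = [1.0, 0.0, 0.0]  # start in state 1 with certainty
    # Row q of P holds the transition probabilities out of state q:
    # state 1 = first measurement, state 2 = undetected, state 3 = detected.
    P = [
        [0.0, 1 - alpha, alpha],
        [0.0, 1 - beta,  beta],
        [0.0, 0.0,       1.0],   # state 3 is absorbing
    ]
    for _ in range(i):
        rho = [sum(rho[q] * P[q][r] for q in range(3)) for r in range(3)]
    return rho[2]  # probability of being in the "detected" state

# Agrees with the closed form 1 - (1 - alpha)(1 - beta)^(i-1):
alpha, beta, i = 0.8, 0.3, 5  # illustrative values
assert abs(detection_prob(alpha, beta, i)
           - (1 - (1 - alpha) * (1 - beta) ** (i - 1))) < 1e-12
```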
α roughly corresponds to the sensitivity of the classifier,
that is, the true positive rate reported in Section 7. We use
99% (rather than the 100% found in our experiments) in order
to model the possibility of making a classification mistake
at this point. We do not have enough data to say with absolute certainty if this is valid for very large populations, but we
continue under the assumption that our data is representative. β is harder to estimate, but we set β = 0.3 based on numbers from our experiments in Section 7.4. Using these values,
there is a 99.96% chance of detecting the adversary after 10 rounds.
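The 99.96% figure follows directly from the closed form with α = 0.99 and β = 0.3, as a quick check confirms:

```python
alpha, beta = 0.99, 0.3  # values chosen in the text

# Probability of detecting the adversary within 10 rounds:
p10 = 1 - (1 - alpha) * (1 - beta) ** 9
print(f"{p10:.2%}")  # → 99.96%
```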
Figure 2. Markov model of the continuous authentication detection
probability. States are numbered 1–3 for easy reference in text.