All relationships rel_ast are part of Rels, that is, rel_ast ⊆ Rels. As
discussed earlier, ⊙ ranges over binary operators. All relationships derived using the above grammar have exactly one occurrence of L and R. For a relationship r ∈ rel_ast, let r[x/L, y/R, e/_]
denote the expression where x is substituted for L, y is substituted for R, and the expression e is substituted for _. Then,
given two program elements a and b and a relationship r ∈ rel_ast,
a match is said to exist if r[a/L, b/R, [expr]/_] ∩ Exp(x) ≠ ∅. Here,
[expr] denotes all possible expressions in the programming
language and Exp(x) is the set of all expressions of program x. An edge
(a, b, r) ∈ Ex between two program elements a and b exists if
there is a match between a, b, and r.
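To make the match test concrete, the following is a minimal sketch assuming a string-based encoding of relationship templates and expressions (the actual system operates on ASTs; the function and variable names below are illustrative only):

```python
import re

def matches(r, a, b, program_exprs):
    """Return True if r[a/L, b/R, [expr]/_] intersects Exp(x),
    i.e., some instantiation of the template occurs in the program."""
    # Substitute the two program elements for L and R.
    pattern = r.replace("L", a).replace("R", b)
    # The wildcard "_" stands for an arbitrary expression, so turn
    # the template into a regex where "_" matches any non-empty string.
    regex = "^" + re.escape(pattern).replace("_", ".+") + "$"
    return any(re.match(regex, e) for e in program_exprs)

# Expressions Exp(x) of a tiny program x containing "k=i+j":
exprs = {"i", "j", "i+j", "k", "k=i+j"}

assert matches("L+R", "i", "j", exprs)       # edge (i, j, L+R) exists
assert matches("L=_+R", "k", "j", exprs)     # edge (k, j, L=_+R) exists
assert not matches("L*R", "i", "j", exprs)   # no such expression in x
```

A match thus only requires that some expression of the program is an instance of the instantiated template, which is why a single pair of elements can induce several differently labeled edges.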
Note that for a given pair of elements a and b there could
be more than one relationship which matches, that is, both
r1, r2 ∈ rel_ast may match where r1 ≠ r2 (therefore, there could be
multiple edges between a and b with different relationships).
The relationships described above are useful for both name
and type inference. For names, the expressions being related
are variables, while for types, they need not be restricted to
variables. For example, in Figure 6(c) there is a relationship
between the types of k and i+j via L<R. Note that our rules
do not directly capture relationships between [i] and [i+j],
but they are transitively dependent. Still, many important
relationships for type inference are present. For instance, in
classic type inference, the relationship L=R implies a constraint rule [L] ⊒ [R], where ⊒ is the supertype relationship
(indicated in Figure 5). Interestingly, our inference model can
learn such rules instead of having them provided manually.
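As an illustration of such a learned rule, the sketch below checks the constraint induced by L=R edges over a toy subtype lattice (the lattice, the edge encoding, and all names here are assumptions made for the example, not part of our system):

```python
# Hypothetical toy lattice: maps a type to its proper supertypes.
SUPERTYPES = {
    "int": {"number", "any"},
    "number": {"any"},
    "string": {"any"},
}

def is_supertype(t, u):
    """True if t is a supertype of (or equal to) u."""
    return t == u or t in SUPERTYPES.get(u, set())

def check(edges, typing):
    """Check the constraint [L] is-a-supertype-of [R] for every
    L=R edge: in an assignment, the left-hand side's type must be
    a supertype of the right-hand side's type."""
    return all(is_supertype(typing[a], typing[b])
               for a, b, rel in edges if rel == "L=R")

edges = [("x", "y", "L=R")]                       # models x = y
assert check(edges, {"x": "number", "y": "int"})  # int flows into number
assert not check(edges, {"x": "int", "y": "string"})
```

In our setting, such a rule need not be hard-coded: the model can assign high weight to type pairs that satisfy it and low weight to pairs that violate it.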
Other relations. We introduce three other types of semantic
relations: (i) a relationship capturing aliasing between expressions, (ii) a function calling relationship capturing whether a
function (represented by a variable) may call another function
(also represented by a variable), and (iii) an object access relationship specifying whether an object field (represented by a
string constant) may be accessed by a function. The last two
relationships are only used in name prediction and are particularly effective when predicting function names.
5. IMPLEMENTATION AND EVALUATION
We implemented our approach in an interactive system,
called JSNice, which targets name and type predictions for
JavaScript. We integrated JSNice into the Google Closure Compiler.
In standard operation, this compiler takes human-readable
JavaScript and returns an optimized, minified, and human-unreadable version.
In our system, we added a new mode to the compiler that
reverses its standard operation: given optimized, minified
JavaScript, the compiler produces output that is well annotated (with types) and as human-readable as possible (with useful identifier names). Our two applications for
names and types were implemented as two probabilistic
models that can be invoked separately.
5.1. Our dataset
To evaluate our approach, we collected two disjoint sets of
JavaScript programs to serve as training and evaluation data.
The training projects were obtained from GitHub, while for the
evaluation data we took the projects with the highest number of commits from BitBucket.
By taking projects from different repository hosting services, we decrease the
likelihood of overlap between training and evaluation data.
We also searched in GitHub to check that the projects in
the evaluation data are not included in the training data.
Finally, we implemented a simple checker to detect and filter out minified and obfuscated files from the training and
the evaluation data. After filtering minified files, we ended
up with training data consisting of 324,501 files and evaluation data of 2,710 files.
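Our checker's exact heuristics are omitted here; a minimal sketch of the kind of test such a filter might apply (the thresholds and the identifier-length heuristic are assumptions for illustration) is:

```python
import re

def looks_minified(source, avg_id_len=2.0, max_line_len=500):
    """Heuristic: minified JavaScript tends to have very short
    local identifiers and very long lines."""
    idents = re.findall(r"\bvar\s+([A-Za-z_$][\w$]*)", source)
    avg = sum(map(len, idents)) / len(idents) if idents else 0.0
    longest = max((len(line) for line in source.splitlines()), default=0)
    return bool(idents and avg <= avg_id_len) or longest >= max_line_len

readable = "var counter = 0;\nvar total = counter + 1;\n"
minified = "var a=0,b=a+1;" * 60  # a single very long line
assert not looks_minified(readable)
assert looks_minified(minified)
```

Filtering such files matters because training on already-minified code would teach the model the very short, meaningless names we are trying to eliminate.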
5.2. Precision
To evaluate the precision of JSNice, we first minified all
2,710 files in the evaluation data with UglifyJS (other sound
minifiers should produce input that is equivalent for the
purposes of using JSNice). As mentioned, we focus on a particularly popular form of obfuscation called layout obfuscation.
It works by renaming local variables to meaningless
short names and removing whitespace and type annotations (other types of obfuscation, such as encryption, are not
considered in this work). Each minified program is semantically equivalent (except when using with or eval) to the
original. Then, we used JSNice on the minified programs to
evaluate its capabilities in reconstructing name and type
information. We compared the precision of the following systems:
• The most powerful system works with all of the training
data and performs structured prediction as described earlier.
• Two systems using a fraction of the training data — one
on 10% and one on 1% of the files.
• To evaluate the effect of structure when making predictions, we disabled relationships between unknown
properties and performed predictions on that network
(the training phase still uses structure).
• A naïve baseline which does no prediction: it keeps
names the same and sets all types to the most common type.
Name predictions. To evaluate the accuracy of name predictions, we took each of the minified programs and used
the name inference in JSNice to rename its local variables.
Then, we compared the new names to the original names
(before obfuscation) for each of the tested programs. The
results for the name reconstruction are summarized in Table 1.
System                      Names precision (%)   Types precision (%)   Types recall (%)
all training data                  63.4                  81.6                 66.9
10% of training data               54.5                  81.4                 64.8
1% of training data                41.2                  77.9                 62.8
all data, no structure             54.1                  84.0                 56.0
baseline – no predictions          25.3                  37.8                100

Table 1. Precision and recall for name and type reconstruction of minified programs.
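The name-precision figures in Table 1 correspond to an exact-match rate between predicted and original identifiers; a minimal sketch of that measurement (the helper name is hypothetical) is:

```python
def name_accuracy(original, predicted):
    """Fraction of local variables whose predicted name exactly
    matches the name used in the unobfuscated program."""
    assert len(original) == len(predicted)
    hits = sum(o == p for o, p in zip(original, predicted))
    return hits / len(original)

# Example: three of four predicted names match the originals.
orig = ["length", "index", "callback", "result"]
pred = ["length", "i", "callback", "result"]
assert name_accuracy(orig, pred) == 0.75
```

Note that exact matching is a conservative measure: a predicted name may be semantically adequate (e.g., idx for index) yet still count as a miss.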