ON THE ROBUSTNESS OF AUTHORSHIP ATTRIBUTION 431

set by modifying t. In this study, the following frequency
threshold values were used: 500, 300, 200, 100, 50, 30, 20, 10,
5, 3, 2, and 1.
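The thresholding step above can be sketched in Python. This is a minimal illustration only; the function name and the toy token counts are invented, not taken from the study:

```python
from collections import Counter

def filter_by_frequency(tokens, t):
    """Keep only features whose corpus frequency is at least the threshold t."""
    counts = Counter(tokens)
    return {feature for feature, n in counts.items() if n >= t}

# Toy corpus: raising t from 1 to 3 prunes the rare feature.
corpus_tokens = ["the"] * 6 + ["of"] * 4 + ["rare"]
vocab = filter_by_frequency(corpus_tokens, t=3)   # {'the', 'of'}
```

Sweeping t over the listed values shrinks the feature set monotonically: at t = 1 every observed feature is kept, while high thresholds retain only the most frequent ones.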
The well-known Support Vector Machine (“SVM”)
classifier^23 is used. It is a powerful classification model that can
handle high-dimensional, sparse data, and it is considered
one of the best algorithms for text categorization tasks. The
linear kernel (which produces a linear decision boundary
between the classes) is used, since the dimensionality of the
representation is usually high, comprising several hundred or
even thousands of features.^24 No attempt is made to optimize the
classification model by trying different classification algorithms,
since our aim is to highlight the capability of text representation
features to remain robust in cross-topic and cross-genre
conditions.
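A linear SVM over a sparse document–term representation can be sketched with scikit-learn's `LinearSVC`, a stand-in for whatever SVM implementation the study actually used; the toy texts and author labels below are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Invented toy data: two candidate authors, four short training texts.
train_texts = ["tax policy budget vote", "goal match striker league",
               "election campaign vote policy", "season transfer striker club"]
train_authors = ["A", "B", "A", "B"]

vectorizer = CountVectorizer()                  # builds a sparse document-term matrix
X_train = vectorizer.fit_transform(train_texts)
clf = LinearSVC()                               # linear kernel: suits sparse, high-dimensional text
clf.fit(X_train, train_authors)

prediction = clf.predict(vectorizer.transform(["budget vote policy"]))
```

The linear kernel keeps training and prediction fast even when the vocabulary (and hence the feature space) grows into the thousands, which is one reason it is the default choice for text categorization.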
In each experiment, we follow the procedure described
below:
• An attribution model is learned based on SVM and texts
from a single topic category of TGC (e.g., Politics). At
most ten texts per author are used in the training phase,
which yields an imbalanced training corpus.
• The learned classifier is applied to the texts of a category of
TGC, again using at most ten texts per author. If the
selected category is Politics, it is the same as the topic
category used in the training phase (intratopic attribution);
in that case, the first ten texts are skipped, so there is no overlap
with the texts used in the training corpus. If the selected
category is U.K., World, or Society (cross-topic attribution) or
Books (cross-genre attribution), then an imbalanced test
corpus is compiled. Note that the distribution of the training
corpus over the candidate authors is not necessarily the same
as the corresponding distribution of the test corpus. This
ensures that, if the attribution model favors the authors
with the most training texts, it will produce many errors.
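The corpus-construction protocol above can be sketched as follows. The function name and the nested-dict layout (author → topic → texts) are illustrative assumptions, and the toy corpus is invented:

```python
def build_corpora(texts_by_author, train_topic, test_topic, cap=10):
    """Illustrative sketch of the protocol: at most `cap` texts per author
    for training; for intratopic tests, skip the first `cap` texts so the
    training and test corpora never overlap."""
    train, test = [], []
    for author, topics in texts_by_author.items():
        train += [(author, t) for t in topics.get(train_topic, [])[:cap]]
        pool = topics.get(test_topic, [])
        if test_topic == train_topic:
            pool = pool[cap:]              # intratopic: avoid the training texts
        test += [(author, t) for t in pool[:cap]]
    return train, test

# Invented toy corpus: author A has 15 Politics texts; B has 12, plus 2 Books texts.
tgc = {"A": {"Politics": [f"pA{i}" for i in range(15)]},
       "B": {"Politics": [f"pB{i}" for i in range(12)], "Books": ["b1", "b2"]}}
train, intra_test = build_corpora(tgc, "Politics", "Politics")
_, genre_test = build_corpora(tgc, "Politics", "Books")
```

Because each author contributes however many texts survive the cap, both corpora come out imbalanced, and the per-author proportions in training need not match those in testing, exactly as the protocol requires.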


(^23) See Corinna Cortes & Vladimir Vapnik, Support-Vector Networks, 20
MACHINE LEARNING 273, 274–75 (1995).
(^24) See Thorsten Joachims, Text Categorization with Support Vector
Machines: Learning with Many Relevant Features, MACHINE LEARNING:
ECML-98: 10TH EUR. CONF. ON MACHINE LEARNING, 1998, at 137.
