Support Vector Machines

Support Vector Machines (SVMs) can also be applied to multiclass classification tasks through techniques such as one-vs-one or one-vs-all. In the one-vs-one strategy, the SVM constructs a binary classifier for every pair of classes. In the one-vs-all strategy, it constructs one classifier per class, trained to distinguish that class from all other classes.
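To see how the two strategies scale, the following sketch (illustrative, not part of the tutorial code) counts the binary classifiers each strategy needs for five classes; note that libsvm, which backs `classif.svm` via the e1071 package, uses one-vs-one internally.

```r
# Number of binary classifiers needed for K classes:
K <- 5                 # e.g., five education levels
n_ovo <- choose(K, 2)  # one-vs-one: one classifier per pair of classes
n_ova <- K             # one-vs-all: one classifier per class
c(one_vs_one = n_ovo, one_vs_all = n_ova)  # 10 and 5
```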

Description of data set

We use a version of the bfi dataset from class to predict the level of education from the Big-5 personality traits. A subset of observations is chosen from the original dataset such that the educational levels are balanced, because classifiers often struggle with imbalanced classes (e.g., a majority of education values being 3).

For simplicity, we treat education as a categorical variable here, although it is actually an ordinal variable (i.e., 1 < 2 < 3 < 4 < 5).
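As a side note, R distinguishes unordered from ordered factors; a minimal sketch of the difference, using illustrative values rather than the actual data:

```r
edu <- c(2, 5, 1, 3, 4)
f_nom <- factor(edu)                  # nominal: level order carries no meaning
f_ord <- factor(edu, ordered = TRUE)  # ordinal: encodes 1 < 2 < 3 < 4 < 5
is.ordered(f_nom)  # FALSE
is.ordered(f_ord)  # TRUE
```

Treating education as nominal is what `factor(dat$education)` below does; an ordinal model would require a different learner.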

Type ?psych::bfi into your console for more information on the dataset. Note that the Big-5 traits agree, conscientious, extra, neuro, and open were created by averaging each participant’s responses to the five survey items per trait (e.g., A1-A5).
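The averaging step can be sketched as follows (hypothetical item responses; in the actual preprocessing, reverse-keyed items such as A1 would first need to be recoded):

```r
# Two hypothetical respondents, items A1-A5 in columns
items <- matrix(c(2, 4, 5, 4, 4,
                  6, 5, 6, 5, 6),
                nrow = 2, byrow = TRUE,
                dimnames = list(NULL, paste0("A", 1:5)))
agree <- rowMeans(items, na.rm = TRUE)  # one trait score per respondent
agree  # 3.8 and 5.6
```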

Tasks

  1. Read the data file module2-bfi.csv into R (assign it to a variable called “dat”).
library(tidyverse)
── Attaching core tidyverse packages ──────────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.0     ✔ tibble    3.2.1
✔ lubridate 1.9.3     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (http://conflicted.r-lib.org/) to force all conflicts to become errors
dat <- read.csv('module2-bfi.csv', header = TRUE)
  2. Transform the education variable to a factor and assign the data set “dat” to an mlr3 classification task called “tsk” with education as target and agree and conscientious as features.
dat$education <- factor(dat$education)

library(mlr3verse)
Loading required package: mlr3
Registered S3 method overwritten by 'data.table':
  method           from
  print.data.table     
tsk <- as_task_classif(education ~ agree + conscientious, data = dat)
  3. Randomly separate the dataset into 80% training and 20% testing data (Hint: Set the seed to ensure reproducibility of your results).
set.seed(42)
row_ids <- partition(tsk, ratio = 0.8)
row_ids
$train
 [1]  1  2  4  5  6  7  8  9 10 14 15 16 17 18 19 20 21 22 23 24 25 27 28 29 30 31
[27] 33 34 35 37 38 40 41 42 43 44 45 46 48 49 51 52 53 54 55 56 57 58 61 62 63 64
[53] 65 66 67 68 71 73 74 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94
[79] 97 98

$test
 [1]   3  11  12  13  26  32  36  39  47  50  59  60  69  70  72  75  95  96  99 100
  4. Use the training sample to build an SVM (with default settings) to predict the target education with agree and conscientious as features.
mdl <- lrn("classif.svm")
mdl$train(tsk, row_ids = row_ids$train)
summary(mdl$model)

Call:
svm.default(x = data, y = task$truth(), probability = (self$predict_type == 
    "prob"))


Parameters:
   SVM-Type:  C-classification 
 SVM-Kernel:  radial 
       cost:  1 

Number of Support Vectors:  80

 ( 16 16 16 16 16 )


Number of Classes:  5 

Levels: 
 1 2 3 4 5
  5. Visualize the classifier for agreeableness on the x-axis and conscientiousness on the y-axis.
autoplot(mdl, task = tsk)

  6. Now use the training sample to build an SVM (with default settings) for education as target and all Big-5 traits as features.
tsk <- as_task_classif(education ~ agree + conscientious + extra + neuro + open, data = dat)
mdl$train(tsk, row_ids = row_ids$train)
summary(mdl$model)

Call:
svm.default(x = data, y = task$truth(), probability = (self$predict_type == 
    "prob"))


Parameters:
   SVM-Type:  C-classification 
 SVM-Kernel:  radial 
       cost:  1 

Number of Support Vectors:  80

 ( 16 16 16 16 16 )


Number of Classes:  5 

Levels: 
 1 2 3 4 5
  7. Predict the educational levels of the observations in the training sample as well as in the held-out test sample. Also calculate the in-sample training classification error and compare it to the out-of-sample testing classification error. Why is the former likely (much) smaller than the latter?
mes <- msrs("classif.ce")

# In-sample performance:
pred <- mdl$predict(tsk, row_ids = row_ids$train)
pred$confusion
        truth
response  1  2  3  4  5
       1  9  0  2  0  0
       2  0 11  1  0  2
       3  2  1 12  2  6
       4  4  3  1 11  1
       5  1  1  0  3  7
pred$score(mes)
classif.ce 
     0.375 
# Out-of-sample performance:
pred <- mdl$predict(tsk, row_ids = row_ids$test)
pred$confusion
        truth
response 1 2 3 4 5
       1 2 0 1 1 0
       2 1 1 1 0 0
       3 0 2 1 1 3
       4 0 0 1 1 1
       5 1 1 0 1 0
pred$score(mes)
classif.ce 
      0.75 

The in-sample training classification error is likely (much) smaller than the out-of-sample testing classification error due to overfitting the training data. Cross-validation (CV) helps to address this issue by partitioning the data into multiple subsets, allowing the model to be trained and evaluated on different combinations of training and validation sets, providing a more robust estimate of its performance on unseen data.

  8. Assess the expected out-of-sample performance of your learner from task 6 using 10-fold cross-validation (CV). Does CV improve the estimate of your model’s out-of-sample classification performance? (Hint: Set the seed to ensure reproducibility of your results)
# 10-fold CV:
set.seed(42)
cv <- rsmp("cv", folds = 10)
mdl_cv <- resample(learner = mdl, task = tsk, resampling = cv)
mdl_cv$aggregate(mes)
classif.ce 
       0.8 

The classification error estimated by cross-validation (0.8) is much closer to the out-of-sample classification error observed in task 7 (0.75) than the in-sample error (0.375) is.

  9. Bonus: Using 10-fold cross-validation, choose a value for the tuning parameter \(C\) (cost) from the set (1, 10, 50, 100). (Hint: Set the seed to ensure reproducibility of your results)
set.seed(42)

# Define the set of cost parameter values to be tested
C_cv <- c(1, 10, 50, 100)

# Set up the conditions for the hyperparameter tuning
mdl_cv <- auto_tuner(
  learner = lrn("classif.svm", type = 'C-classification', cost = to_tune(levels = C_cv)),
  resampling = rsmp("cv", folds = 10),
  measure = msr("classif.ce"),
  tuner = tnr("grid_search"),
  terminator = trm("none")
)

# Actually tune the hyperparameter (i.e., cost) and fit the final model
invisible({capture.output({ # suppress console output in the html document
  mdl_cv$train(tsk)
})})

# Print the output of the tuning
mdl_cv$archive %>%
  as.data.table() %>%
  select(cost, classif.ce) %>%
  arrange(as.numeric(cost))
mdl_cv$tuning_result

# Final model:
summary(mdl_cv$learner$model)

Note that it is not possible (in mlr3; and quite complex in general) to plot classifiers using more than two features. Therefore, we cannot plot the classification of the final (best) model.
