Partitioning Around Medoids Clustering Learner
Source: R/LearnerClustPAM.R
mlr_learners_clust.pam.Rd

A LearnerClust for PAM clustering implemented in cluster::pam().
cluster::pam() has no default value for the number of clusters.
Therefore, the k parameter, which corresponds to the number
of clusters here, is set to 2 by default.
The predict method uses clue::cl_predict() to compute the
cluster memberships for new data.
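For example, a minimal sketch (using the usarrests task shipped with mlr3cluster and the row_ids arguments of train() and predict()) of fitting the medoids on one subset of rows and then assigning cluster memberships to held-out rows:

# Sketch: held-out observations are assigned to clusters via clue::cl_predict()
library(mlr3cluster)
task = tsk("usarrests")
learner = lrn("clust.pam", k = 2)
learner$train(task, row_ids = 1:40)
prediction = learner$predict(task, row_ids = 41:50)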
Dictionary
This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn():
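For example, both of the following return the learner (using the key "clust.pam" shown in the examples below):

mlr_learners$get("clust.pam")
lrn("clust.pam")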
Meta Information
Task type: “clust”
Predict Types: “partition”
Feature Types: “logical”, “integer”, “numeric”
Required Packages: mlr3, mlr3cluster, cluster
Parameters
| Id | Type | Default | Levels | Range |
|---|---|---|---|---|
| k | integer | - | - | \([1, \infty)\) |
| metric | character | euclidian | euclidian, manhattan | - |
| medoids | untyped | NULL | - | - |
| stand | logical | FALSE | TRUE, FALSE | - |
| do.swap | logical | TRUE | TRUE, FALSE | - |
| pamonce | integer | 0 | - | \([0, 5]\) |
| trace.lev | integer | 0 | - | \([0, \infty)\) |
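For example, a minimal sketch of setting some of the hyperparameters above at construction time; values can also be changed later through the learner's param_set:

# Sketch: construct the learner with non-default settings from the table above
learner = lrn("clust.pam", k = 3, metric = "manhattan", stand = TRUE)
# update a parameter after construction
learner$param_set$values$pamonce = 5L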
References
Reynolds, A P, Richards, Graeme, de la Iglesia, Beatriz, Rayward-Smith, V J (2006). “Clustering rules: a comparison of partitioning and hierarchical clustering algorithms.” Journal of Mathematical Modelling and Algorithms, 5, 475–504.
Schubert, Erich, Rousseeuw, P J (2019). “Faster k-medoids clustering: improving the PAM, CLARA, and CLARANS algorithms.” In Similarity Search and Applications: 12th International Conference, SISAP 2019, Newark, NJ, USA, October 2–4, 2019, Proceedings 12, 171–187. Springer.
See also
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
Package mlr3extralearners for more learners.
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
mlr3pipelines to combine learners with pre- and postprocessing steps.
Extension packages for additional task types:
mlr3proba for probabilistic supervised regression and survival analysis.
mlr3cluster for unsupervised clustering.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Other Learner:
mlr_learners_clust.MBatchKMeans,
mlr_learners_clust.SimpleKMeans,
mlr_learners_clust.agnes,
mlr_learners_clust.ap,
mlr_learners_clust.bico,
mlr_learners_clust.birch,
mlr_learners_clust.cmeans,
mlr_learners_clust.cobweb,
mlr_learners_clust.dbscan,
mlr_learners_clust.dbscan_fpc,
mlr_learners_clust.diana,
mlr_learners_clust.em,
mlr_learners_clust.fanny,
mlr_learners_clust.featureless,
mlr_learners_clust.ff,
mlr_learners_clust.hclust,
mlr_learners_clust.hdbscan,
mlr_learners_clust.kkmeans,
mlr_learners_clust.kmeans,
mlr_learners_clust.mclust,
mlr_learners_clust.meanshift,
mlr_learners_clust.optics,
mlr_learners_clust.xmeans
Super classes
mlr3::Learner -> mlr3cluster::LearnerClust -> LearnerClustPAM
Examples
# Define the Learner and set parameter values
learner = lrn("clust.pam")
print(learner)
#>
#> ── <LearnerClustPAM> (clust.pam): Partitioning Around Medoids ──────────────────
#> • Model: -
#> • Parameters: k=2
#> • Packages: mlr3, mlr3cluster, and cluster
#> • Predict Types: [partition]
#> • Feature Types: logical, integer, and numeric
#> • Encapsulation: none (fallback: -)
#> • Properties: complete, exclusive, and partitional
#> • Other settings: use_weights = 'error'
# Define a Task
task = tsk("usarrests")
# Train the learner on the task
learner$train(task)
# Print the model
print(learner$model)
#> Medoids:
#> ID Assault Murder Rape UrbanPop
#> [1,] 22 255 12.1 35.1 74
#> [2,] 16 115 6.0 18.0 66
#> Clustering vector:
#> [1] 1 1 1 1 1 1 2 1 1 1 2 2 1 2 2 2 2 1 2 1 2 1 2 1 2 2 2 1 2 2 1 1 1 2 2 2 2 2
#> [39] 2 1 2 1 1 2 2 2 2 2 2 2
#> Objective function:
#> build swap
#> 46.10633 38.41780
#>
#> Available components:
#> [1] "medoids" "id.med" "clustering" "objective" "isolation"
#> [6] "clusinfo" "silinfo" "diss" "call" "data"
# Make predictions for the task
prediction = learner$predict(task)
# Score the predictions
prediction$score(task = task)
#> clust.dunn
#> 0.1033191
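As a variation, the same prediction could be scored with another internal measure from mlr3cluster, e.g. the silhouette coefficient (a sketch; output not shown):

# Sketch: score with the silhouette measure, which also requires the task
prediction$score(msr("clust.silhouette"), task = task)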