Overview

Chung-hong Chan (University of Mannheim)

The validation test is called an “oolong test” (after “reading tea leaves”; see Chang et al. 2009). Creating an oolong test for topic models and for dictionary-based methods uses the same function: create_oolong(). The most important parameters are input_model and input_corpus. Different combinations of NULL and non-NULL values generate different tests:

input_model | input_corpus | output
----------- | ------------ | ------
Not NULL    | NULL         | oolong test for validating a topic model with word intrusion test
Not NULL    | Not NULL     | oolong test for validating a topic model with word intrusion test and topic intrusion test
NULL        | Not NULL     | oolong test for creating gold standard
NULL        | NULL         | error
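
For example (abstracts_stm, abstracts$text, and trump2k are the example objects introduced later in this overview):

library(oolong)
create_oolong(input_model = abstracts_stm)                                 # word intrusion test
create_oolong(input_model = abstracts_stm, input_corpus = abstracts$text) # word and topic intrusion tests
create_oolong(input_corpus = trump2k)                                      # gold standard generation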

Installation

Because the package is constantly changing, we suggest using the development version from GitHub:

# install.packages("devtools")
devtools::install_github("chainsawriot/oolong")

You can also install the “stable” (but slightly older) version from CRAN:

install.packages("oolong")

Validating Topic Models

Word intrusion test

abstracts_stm is an example topic model trained with the data abstracts using the stm package. Currently, this package supports structural topic models / correlated topic models from stm, Warp LDA models from text2vec, LDA/CTM models from topicmodels, Biterm Topic Models from BTM, and Keyword Assisted Topic Models from keyATM.

To create an oolong test, use the function create_oolong().
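
A minimal sketch, using the example model abstracts_stm:

library(oolong)
oolong_test <- create_oolong(input_model = abstracts_stm)
oolong_test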

As instructed, use the method $do_word_intrusion_test() to start coding.
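
For example, with the oolong_test object created above:

oolong_test$do_word_intrusion_test()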

After the coding, you need to first lock the test. Then, you can look at the model precision by printing the oolong test.
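
For example:

oolong_test$lock()
oolong_test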

Topic intrusion test

For example, abstracts_stm was generated with the corpus abstracts$text.

Creating the oolong test object with the corpus used for training the topic model will generate topic intrusion test cases.
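
A sketch, supplying both the model and the training corpus:

oolong_test <- create_oolong(input_model = abstracts_stm, input_corpus = abstracts$text)
oolong_test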

Similarly, use the $do_topic_intrusion_test() method to code the test cases, lock the test with $lock(), and then you can look at the TLO (topic log odds) value by printing the oolong test.
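
For example:

oolong_test$do_topic_intrusion_test()
oolong_test$lock()
oolong_test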

Suggested workflow

The test makes more sense if more than one coder is involved. A suggested workflow is to create the test, then clone the oolong object. Ask multiple coders to do the test(s) and then summarize the results.

1. Train a topic model.
2. Create a new oolong object.
3. Clone the oolong object to be used by other raters.
4. Ask different coders to code each object and then lock the object.
5. Get a summary of the two objects.
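
A sketch of this workflow with two raters; clone_oolong() and summarize_oolong() are assumed here as the helpers for cloning and summarizing oolong objects:

oolong_rater1 <- create_oolong(abstracts_stm, abstracts$text)
oolong_rater2 <- clone_oolong(oolong_rater1)

# each rater codes his or her own copy and locks it
oolong_rater1$do_word_intrusion_test()
oolong_rater1$do_topic_intrusion_test()
oolong_rater1$lock()

oolong_rater2$do_word_intrusion_test()
oolong_rater2$do_topic_intrusion_test()
oolong_rater2$lock()

# combine the locked objects into one summary
summarize_oolong(oolong_rater1, oolong_rater2)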

About the p-values

The test for model precision (MP) is based on a one-tailed, one-sample binomial test for each rater. In a multiple-rater situation, the p-values from all raters are combined using Fisher’s method (a.k.a. Fisher’s omnibus test).

H0: MP is not better than 1/n_top_terms

H1: MP is better than 1/n_top_terms
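
As an illustration of how Fisher’s method combines the per-rater p-values (the p-values below are hypothetical):

# Fisher's method: -2 * sum(log(p)) follows a chi-squared distribution
# with 2k degrees of freedom under H0, where k is the number of raters
p_values <- c(0.01, 0.03, 0.2)
chisq_stat <- -2 * sum(log(p_values))
pchisq(chisq_stat, df = 2 * length(p_values), lower.tail = FALSE)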

The test for the median of TLO is based on a permutation test.

H0: Median TLO is not better than random guess.

H1: Median TLO is better than random guess.

Note that both statistical tests are testing only the bare minimum. A significant result indicates merely that the topic model allows the rater(s) to perform better than random guessing; it is not an indication of good topic interpretability. Also, one should use a very conservative significance level, e.g. \(\alpha < 0.001\).

About Warp LDA

There is a subtle difference between the support for stm and for text2vec.

abstracts_warplda is a Warp LDA object trained with the same dataset as abstracts_stm:

abstracts_warplda
#> <WarpLDA>
#>   Inherits from: <LDA>
#>   Public:
#>     clone: function (deep = FALSE) 
#>     components: 0 1 0 46 0 95 0 20 42 8 31 36 50 23 0 0 0 58 0 43 0 0 0  ...
#>     fit_transform: function (x, n_iter = 1000, convergence_tol = 0.001, n_check_convergence = 10, 
#>     get_top_words: function (n = 10, topic_number = 1L:private$n_topics, lambda = 1) 
#>     initialize: function (n_topics = 10L, doc_topic_prior = 50/n_topics, topic_word_prior = 1/n_topics, 
#>     plot: function (lambda.step = 0.1, reorder.topics = FALSE, doc_len = private$doc_len, 
#>     topic_word_distribution: 0 9.41796948577887e-05 0 0.00446992517733942 0 0.0086837 ...
#>     transform: function (x, n_iter = 1000, convergence_tol = 0.001, n_check_convergence = 10, 
#>   Private:
#>     calc_pseudo_loglikelihood: function (ptr = private$ptr) 
#>     check_convert_input: function (x) 
#>     components_: 0 1 0 46 0 95 0 20 42 8 31 36 50 23 0 0 0 58 0 43 0 0 0  ...
#>     doc_len: 80 68 85 88 69 118 99 50 57 88 70 67 53 62 66 92 89 79 1 ...
#>     doc_topic_distribution: function () 
#>     doc_topic_distribution_with_prior: function () 
#>     doc_topic_matrix: 0 0 0 0 0 3 111 0 0 0 0 0 90 134 0 174 0 321 0 0 109 38  ...
#>     doc_topic_prior: 0.1
#>     fit_transform_internal: function (model_ptr, n_iter, convergence_tol, n_check_convergence, 
#>     get_c_all: function () 
#>     get_c_all_local: function () 
#>     get_doc_topic_matrix: function (prt, nr) 
#>     get_topic_word_count: function () 
#>     init_model_dtm: function (x, ptr = private$ptr) 
#>     internal_matrix_formats: list
#>     is_initialized: FALSE
#>     n_iter_inference: 10
#>     n_topics: 20
#>     ptr: externalptr
#>     reset_c_local: function () 
#>     run_iter_doc: function (update_topics = TRUE, ptr = private$ptr) 
#>     run_iter_word: function (update_topics = TRUE, ptr = private$ptr) 
#>     seeds: 135203513.874082 471172603.061186
#>     set_c_all: function (x) 
#>     set_internal_matrix_formats: function (sparse = NULL, dense = NULL) 
#>     topic_word_distribution_with_prior: function () 
#>     topic_word_prior: 0.01
#>     transform_internal: function (x, n_iter = 1000, convergence_tol = 0.001, n_check_convergence = 10, 
#>     vocabulary: explor benefit risk featur medic broker websit well type ...

All the API endpoints are the same, except for the creation of topic intrusion test cases: you must also supply the input_dfm.
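
A sketch; abstracts_dfm stands for the document-feature matrix used to train abstracts_warplda (the object name is an assumption):

oolong_test <- create_oolong(abstracts_warplda, abstracts$text, input_dfm = abstracts_dfm)
oolong_test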

About Biterm Topic Model

Please refer to the vignette about BTM.

Validating Dictionary-based Methods

Creating gold standard

trump2k is a dataset of 2,000 tweets from @realdonaldtrump.

Suppose you are interested in studying the sentiment of these tweets. One can use tools such as AFINN to extract sentiment automatically. However, oolong recommends generating a gold standard by human coding on a subset first. By default, oolong selects 1% of the original corpus as test cases. The parameter construct should be an adjective, e.g. positive, liberal, populistic, etc.
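
A minimal sketch, using “positive” as the construct:

oolong_test <- create_oolong(input_corpus = trump2k, construct = "positive")
oolong_test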

As instructed, use the method $do_gold_standard_test() to start coding.
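
For example:

oolong_test$do_gold_standard_test()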

After the coding, you need to lock the test first; only then is the $turn_gold() method available.
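
For example:

oolong_test$lock()
gold_standard <- oolong_test$turn_gold()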

Example: Validating AFINN using the gold standard

A locked oolong test can be converted into a quanteda-compatible corpus for further analysis. The corpus carries the human-coded answers in the docvar ‘answer’.

In this example, we calculate the AFINN score for each tweet using quanteda. The dictionary afinn is bundled with this package.
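
A hedged sketch of the dictionary lookup with quanteda; the step that converts the matched dictionary categories into one numeric AFINN score per tweet is omitted here because it depends on the structure of the afinn dictionary:

library(quanteda)
# tokenize the gold-standard corpus and look up the AFINN dictionary
afinn_dfm <- dfm_lookup(dfm(tokens(gold_standard, remove_punct = TRUE)), afinn)
afinn_dfm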

Put the vector of AFINN scores back into the respective docvars and study the correlation between the gold standard and AFINN.
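
For example, assuming afinn_score is a numeric vector with one AFINN score per test case (derived from afinn_dfm above):

# attach the target value and correlate it with the human-coded answers
docvars(gold_standard, "target_value") <- afinn_score
cor(docvars(gold_standard, "answer"), docvars(gold_standard, "target_value"))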

Suggested workflow

1. Create an oolong object and clone it for another coder. According to Song et al. (2020), you should draw at least 1% of your data.
2. Instruct two coders to code the tweets and lock the objects.
3. Calculate the target value (in this case, the AFINN score) by turning one object into a corpus.
4. Summarize all oolong objects with the target value.
5. Read the results. The diagnostic plot consists of 4 subplots. It is a good idea to read Bland & Altman (1986) on the difference between correlation and agreement.
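
A sketch of steps 1 to 4, with the same assumed helpers as above (clone_oolong(), summarize_oolong()); afinn_score is the vector of target values computed in the previous section:

trump_rater1 <- create_oolong(input_corpus = trump2k, construct = "positive")
trump_rater2 <- clone_oolong(trump_rater1)

trump_rater1$do_gold_standard_test()
trump_rater2$do_gold_standard_test()
trump_rater1$lock()
trump_rater2$lock()

# calculate the target value from one object turned into a corpus,
# then summarize all objects with that target value
gold_standard <- trump_rater1$turn_gold()
res <- summarize_oolong(trump_rater1, trump_rater2, target_value = afinn_score)
res
plot(res)  # diagnostic plot discussed in step 5, assuming a plot method for the summary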

The textual output contains Krippendorff’s alpha of the codings by your raters. In order to claim validity of your target value, you must first establish the reliability of your gold standard. Song et al. (2020) suggest Krippendorff’s alpha > 0.7 as an acceptable cut-off.

References

  1. Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J. L., & Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems (pp. 288-296).
  2. Song et al. (2020). In validations we trust? The impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis. Political Communication.
  3. Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet, 327(8476), 307-310.
  4. Chan et al. (2020). Four best practices for measuring news sentiment using ‘off-the-shelf’ dictionaries: a large-scale p-hacking experiment. Computational Communication Research.
  5. Nielsen, F. Å. (2011). A new ANEW: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.

