Textplumber

Textplumber provides pipeline components for scikit-learn that make it easier to extract relevant features from text data, including tokens, parts of speech, lexicon scores, document-level statistics and embeddings.

Introduction to Textplumber

The Textplumber library is intended to make it easier to build text classification pipelines with scikit-learn. Scikit-learn provides a powerful suite of machine learning tools, including built-in support for text. Textplumber extends this functionality, leveraging libraries like spaCy and newer feature extraction techniques like Model2Vec, and provides easy access to a range of text feature types.

Development status

Textplumber is in active development and is currently released for beta testing. The GitHub repository may be ahead of the PyPI release: for the latest functionality, install from GitHub (see below), keeping in mind that this code is pre-release and may change; for the latest stable release, install from PyPI (pip install textplumber). The documentation reflects the most recent functionality. See the CHANGELOG for notes on releases.

Development Team

The developers of Textplumber are:

  • Dr Geoff Ford, Senior Lecturer, Faculty of Arts, University of Canterbury
  • Dr Christopher Thomson, Senior Lecturer in English and Digital Humanities, University of Canterbury
  • Karin Stahel, PhD Candidate, Data Science, University of Canterbury

Dr Geoff Ford is leading development of Textplumber and is the main contributor to date.

Some Textplumber functionality grew out of collaborations between team members to develop teaching resources for DIGI405, Text, Discourses and Data, a course offered through the Digital Humanities and Master of Applied Data Science programmes at the University of Canterbury. The entire team is contributing to testing and will contribute to the development of Textplumber documentation.

Acknowledgements

Dr Ford’s work on Textplumber has been made possible by funding from the Royal Society of New Zealand’s Marsden Fund, Grant 22-UOC-059 “Into the Deep: Analysing the Actors and Controversies Driving the Adoption of the World’s First Deep Sea Mining Governance”. Textplumber is an output of that project.

The developers of Textplumber are researchers with Te Pokapū Aronui ā-Matihiko | UC Arts Digital Lab (ADL). Thanks to the ADL team and the ongoing support of the University of Canterbury’s Faculty of Arts who make work like this possible.

Installation

Install via pip

You can install Textplumber from PyPI using this command:

$ pip install textplumber

To install the latest development version of Textplumber, which may be ahead of the version on PyPI, install from the repository:

$ pip install git+https://github.com/polsci/textplumber.git

Install a language model

Many of Textplumber’s pipeline components require a spaCy language model. After installing Textplumber, install a model. For example, to install spaCy’s small English model:

python -m spacy download en_core_web_sm

If you are working with a different language or want to use a different ‘en’ model, check the spaCy models documentation for the relevant model name.

Using Textplumber

A good place to start is the quick introduction and an example notebook, which allows you to use Textplumber with different datasets and different kinds of text classification problems.

The documentation site provides a reference for Textplumber functionality and examples of how to use the various components. The current Textplumber components are listed below.

Component                Functionality                                                         Requires
TextCleaner              Cleans text data                                                      -
SpacyPreprocessor        Preprocesses text using spaCy                                         -
TokensVectorizer         Extracts individual token or token n-gram features                    SpacyPreprocessor
POSVectorizer            Extracts individual part-of-speech or POS n-gram features             SpacyPreprocessor
TextstatsTransformer     Extracts document-level statistics                                    SpacyPreprocessor
LexiconCountVectorizer   Extracts features based on lexicons (i.e. counts of lists of words)   SpacyPreprocessor
VaderSentimentExtractor  Extracts sentiment features using VADER                               -
VaderSentimentEstimator  Predicts sentiment using VADER                                        -
Model2VecEmbedder        Extracts embeddings using Model2Vec                                   -
CharNgramVectorizer      Extracts character n-grams                                            -
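As a rough illustration of the kind of features a character n-gram extractor produces, here is a standard-library sketch of the general technique — not Textplumber’s CharNgramVectorizer implementation, whose API may differ:

```python
from collections import Counter

def char_ngram_counts(text, n=3):
    """Count overlapping character n-grams in a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# "banana" yields the trigrams: ban, ana, nan, ana
counts = char_ngram_counts("banana", n=3)
print(counts["ana"])  # 2
```

Character n-grams like these are useful for classification tasks sensitive to spelling, morphology or authorial style, since they capture sub-word patterns that token-level features miss.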

Textplumber also includes some helpful functions for working with text pipelines:

Function                                         Functionality
preview_dataset                                  Outputs information about a Hugging Face dataset
plot_confusion_matrix                            Plots an SVG confusion matrix with counts, row-wise proportions and appropriate labels
plot_logistic_regression_features_from_pipeline  Plots the most discriminative features for a logistic regression classifier
plot_decision_tree_from_pipeline                 Plots the decision tree of the classifier from a pipeline using SuperTree
preview_pipeline_features                        Outputs the features at each step in a pipeline
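To illustrate the row-wise proportions that a confusion matrix plot reports, here is a plain-Python sketch of that calculation — not the implementation of plot_confusion_matrix itself:

```python
def row_proportions(matrix):
    """Normalise each row of a confusion matrix so it sums to 1.

    Each cell then shows the proportion of a true class (row)
    that was predicted as each class (column)."""
    return [
        [cell / total if total else 0.0 for cell in row]
        for row in matrix
        for total in [sum(row)]
    ]

# Rows are true classes, columns are predicted classes.
cm = [[8, 2],   # class A: 8 correct, 2 misclassified as B
      [1, 9]]   # class B: 1 misclassified as A, 9 correct
print(row_proportions(cm))  # [[0.8, 0.2], [0.1, 0.9]]
```

Row-wise proportions make per-class recall easy to read off the diagonal, which is especially helpful when classes are imbalanced and raw counts are hard to compare.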

Developer Guide

The instructions below are only relevant if you want to contribute to Textplumber. Textplumber is developed using the nbdev library. If you are new to nbdev, here are some useful pointers to get you started (or visit the nbdev website).

Install textplumber in Development mode

# make sure textplumber package is installed in development mode
$ pip install -e .

# make changes under nbs/ directory
# ...

# compile to have changes apply to textplumber
$ nbdev_prepare