Learning to Negate Adjectives with Bilinear Models

This repository replicates the results of the paper "Learning to Negate Adjectives with Bilinear Models" (2017) by Laura Rimell, Amandla Mabona, Luana Bulat, and Douwe Kiela.

Thanks to Laura Rimell for supplying the test data for the original experiments.

A pre-trained standard model, trained for 175 epochs (the original paper used 200), can be downloaded here.

Requirements

```shell
pip3 install -r requirements.txt
python3 -c "import nltk; nltk.download('wordnet')"
```

Implementation

Example invocations for preprocess.py, train.py, evaluate.py, and test.py are given at the top of each respective file.

preprocess.py produces adjective_embeddings.txt from the 300-dimensional GoogleNews word2vec embeddings.
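The output follows the standard word2vec text format (one word per line, followed by its vector components). A minimal sketch of writing such a file, with made-up words and random vectors standing in for the real GoogleNews embeddings:

```python
import numpy as np

def write_embeddings(path, vectors):
    """Write a {word: vector} mapping in word2vec text format: 'word v1 v2 ... vN'."""
    with open(path, "w", encoding="utf-8") as f:
        for word, vec in vectors.items():
            f.write(word + " " + " ".join(f"{v:.6f}" for v in vec) + "\n")

# Hypothetical stand-ins for the GoogleNews vectors (the real ones are 300-dim).
rng = np.random.default_rng(0)
vectors = {w: rng.standard_normal(300) for w in ["hot", "cold", "happy"]}
write_embeddings("adjective_embeddings.txt", vectors)
```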

Logic for parsing WordNet and the test data, and for building the variants of the training data ('standard', 'restricted', 'unsupervised'). Also contains AdjectiveModel, the wrapper structure that keeps track of each adjective's antonyms/cohyponyms and supports k-nearest-neighbour queries.
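The k-nearest-neighbour query can be sketched as a cosine-similarity search over the embedding matrix; the function name and toy data below are illustrative assumptions, not the repository's exact API:

```python
import numpy as np

def k_nearest(query, words, embeddings, k=3):
    """Return the k words whose embeddings are most cosine-similar to the query vector."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = emb @ q                      # cosine similarity against every word
    return [words[i] for i in np.argsort(-sims)[:k]]

words = ["hot", "cold", "warm", "happy"]
rng = np.random.default_rng(42)
embeddings = rng.standard_normal((4, 300))
neighbours = k_nearest(embeddings[0], words, embeddings, k=2)
print(neighbours)  # "hot" ranks first: its similarity with itself is 1
```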

Implementation of the PyTorch modules Encoder and Decoder, and their wrapper EncoderDecoder.
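A minimal sketch of how such an encoder/decoder pair might be wired up in PyTorch, using torch.nn.Bilinear for the bilinear interaction. The layer sizes, the context input, and the exact interaction are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=300, hidden=600):
        super().__init__()
        # Bilinear interaction between the adjective vector and a context vector.
        self.bilinear = nn.Bilinear(dim, dim, hidden)

    def forward(self, adjective, context):
        return torch.tanh(self.bilinear(adjective, context))

class Decoder(nn.Module):
    def __init__(self, dim=300, hidden=600):
        super().__init__()
        self.linear = nn.Linear(hidden, dim)

    def forward(self, hidden_state):
        return self.linear(hidden_state)

class EncoderDecoder(nn.Module):
    def __init__(self, dim=300, hidden=600):
        super().__init__()
        self.encoder = Encoder(dim, hidden)
        self.decoder = Decoder(dim, hidden)

    def forward(self, adjective, context):
        return self.decoder(self.encoder(adjective, context))

model = EncoderDecoder()
adj = torch.randn(8, 300)   # batch of adjective vectors
ctx = torch.randn(8, 300)   # batch of context vectors
out = model(adj, ctx)       # antonym prediction in embedding space
print(out.shape)            # torch.Size([8, 300])
```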

Loads, saves, and trains models. Supports multi-GPU training via DataLoader and nn.DataParallel. See python train.py --help for options.
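The multi-GPU setup can be sketched as follows; the dataset and model here are placeholders for the real adjective/antonym training pairs, and nn.DataParallel simply falls back to a single device when no GPUs are present:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensors standing in for the real training data.
inputs = torch.randn(64, 300)
targets = torch.randn(64, 300)
loader = DataLoader(TensorDataset(inputs, targets), batch_size=16, shuffle=True)

model = nn.DataParallel(nn.Linear(300, 300))  # replicates across available GPUs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```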

Runs and prints evaluation results for Experiments 1 and 2 as described in Rimell et al. (2017), using the test data in the data directory.

An easy way to predict and print antonyms for adjectives, given either as command-line arguments or added to the Python file itself. Predicted antonyms are marked with [] if they are present in the gold standard.
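The gold-standard marking described above can be sketched as a simple formatting helper; the predictions and gold set below are made-up examples:

```python
def format_predictions(predictions, gold):
    """Bracket each predicted antonym that appears in the gold standard."""
    return " ".join(f"[{p}]" if p in gold else p for p in predictions)

gold = {"cold", "cool"}
print(format_predictions(["cold", "warm", "cool"], gold))  # [cold] warm [cool]
```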

About

PyTorch reimplementation of the paper "Learning to Negate Adjectives with Bilinear Models": https://www.aclweb.org/anthology/E17-2012.pdf
