Code and model files for the paper "A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction" (in AAAI 2018). If you use any part of this work, make sure you include the following citation:
```
@InProceedings{chollampatt2018mlconv,
  author    = {Chollampatt, Shamil and Ng, Hwee Tou},
  title     = {A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction},
  booktitle = {Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence},
  month     = {February},
  year      = {2018},
}
```
- Clone this repository.
- Download the pre-requisite software:
  - Fairseq-py
  - N-best Reranker (requires the KenLM Python module)

  NOTE: For training and evaluation of the models, we suggest that you use the exact revisions of the above software. Go to the `software/` directory and run `download.sh` to download them.
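From the repository root, this amounts to:

```bash
cd software
./download.sh
cd ..
```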
- Compile and install Fairseq-py.
- Go to the `data/` directory and run the `prepare_test_data.sh` script to download and process the CoNLL-2014 test dataset.
- Go to the `models/` directory and run `download.sh` to download the required model files.
- To run the system, use the `run.sh` script with the following format:
```bash
./run.sh <input-file> <output-directory> <gpu-device-number> <models-path>
```
- `<input-file>`: path to the tokenized input data.
- `<gpu-device-number>`: typically 0, 1, 2, etc.; used with the environment variable `CUDA_VISIBLE_DEVICES`.
- `<models-path>`: path to a single model file, or to a directory containing model files only.
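A hypothetical invocation (the input path and model directory below are placeholders; substitute the files produced by the earlier steps):

```bash
# Correct a tokenized CoNLL-2014 test file with the downloaded models on GPU 0.
# Paths are illustrative, not files guaranteed by this repository.
./run.sh data/conll14st-test.tok.src outputs/ 0 models/mlconv/
```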
You can also run the script with optional arguments to enable re-ranking:
```bash
./run.sh <input-file> <output-directory> <gpu-device-number> <models-path> <weights-file> <features>
```
- `<weights-file>`: path to trained feature weights for the re-ranker (within `models/reranker_weights/`).
- `<features>`: use `eo` for edit operation features, or `eolm` for both edit operation and language model features.
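For instance, using placeholder paths as before (the exact layout under `models/reranker_weights/` is an assumption):

```bash
# Re-rank the n-best outputs with edit operation + language model features ('eolm').
./run.sh data/conll14st-test.tok.src outputs/ 0 models/mlconv/ \
    models/reranker_weights/weights.txt eolm
```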
- Update the paths to `NUCLE_TAR` and `LANG8V2` within `prepare_data.sh` (see the sketch after this list).
- Run the `prepare_data.sh` script from within the `data/` directory. (NOTE: To reproduce the exact data, you may need to use langid.py v1.1.6 for language filtering and NLTK v2.0b7 for tokenization. The prepared training data, `data/train.tok.{src,trg}`, will have 2,210,277 sentence pairs with 26,557,233 source tokens and 30,028,798 target tokens.)
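The corresponding variable assignments in `prepare_data.sh` might look like this (the archive and directory names are hypothetical; point them at your licensed copies of NUCLE and Lang-8 v2):

```bash
# Hypothetical locations of the licensed corpora; adjust to your local copies.
NUCLE_TAR=/path/to/nucle/release3.2.tar.bz2
LANG8V2=/path/to/lang-8-corpus-v2
```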
- For training, use the exact revision of Fairseq-py downloaded earlier (see the NOTE above).
- In the `training/` directory, set the paths to the training and development datasets within the `preprocess.sh` script; the source and target files must be tokenized. A sketch of these path variables follows this list.
- Go to the `training/` directory.
- Run the `preprocess.sh` script.
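The dataset variables inside `preprocess.sh` might then be set as follows (the variable names and development-set paths are assumptions; match them to the actual variables in the script):

```bash
# Hypothetical variable names; train.tok.{src,trg} are produced by prepare_data.sh.
TRAIN_SRC=../data/train.tok.src
TRAIN_TRG=../data/train.tok.trg
DEV_SRC=../data/dev.tok.src
DEV_TRG=../data/dev.tok.trg
```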
- To train the models without pre-trained embeddings, use the `train.sh` script.
- To train the models with pre-trained word embeddings, use the `train_embed.sh` script. (NOTE: The pre-trained embeddings were trained on Wikipedia data segmented using the released BPE model. If your training data and BPE model are different, we suggest that you pre-train fastText embeddings on Wikipedia text segmented with your own BPE model and modify the paths within the script accordingly. A sketch of such pre-training follows.)
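A minimal sketch of such pre-training, assuming the fastText command-line tool is installed and `wiki.bpe.txt` is your BPE-segmented Wikipedia text (both the filename and the embedding dimension are assumptions; the dimension must match the embedding size expected by the training scripts):

```bash
# Pre-train skip-gram fastText embeddings on BPE-segmented Wikipedia text.
# wiki.bpe.txt is a hypothetical filename; -dim must match the model's embedding size.
fasttext skipgram -input wiki.bpe.txt -output wiki-bpe-embed -dim 500
```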
- To train the re-ranker, you will additionally need a compiled Moses installation. Run the `train_reranker.sh` script with the following arguments:

```bash
./train_reranker.sh <output-dir> <gpu-device-number> <models-path> <path-to-moses>
```

- `<output-dir>`: directory to store temporary files and the final `weights.txt` output file.
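A hypothetical invocation (the Moses path and output directory are placeholders):

```bash
# Trains re-ranker feature weights; writes weights.txt into reranker-out/.
./train_reranker.sh reranker-out/ 0 models/mlconv/ /path/to/mosesdecoder
```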
- Run the trained model from within the `training/` directory using the `run_trained_model.py` script.
The code and models in this repository are licensed under the GNU General Public License Version 3. For commercial use of this code and models, separate commercial licensing is also available. Please contact:
- Shamil Chollampatt (shamil@u.nus.edu)
- Hwee Tou Ng (nght@comp.nus.edu.sg)