Project 1


Overview

Impact of dummy-token insertion on the robustness of BERT pre-training


Objective

BERT pre-training is based on masked language modeling: the model learns to predict tokens that have been replaced with a [MASK] token. This project attempted to improve pre-training robustness by mixing ‘dummy tokens’ in alongside the masked tokens (a sketch of the idea follows below).

Environment

AWS (GPU instances), Python, NumPy, pandas, TensorFlow