📢 KaoGPT: Studying the Performance of Text Generating Models

We developed several text-generation models, including an RNN, a decoder-only Transformer stack, an encoder-decoder Transformer, and a fine-tuned GPT-2, to emulate Professor Kao's lectures. Through experimentation, we found that fine-tuning GPT-2 produced the model that outperformed all others. However, given the limited dataset, the decoder stack trained from scratch performed surprisingly well. Our results offer insight into the strengths and limitations of these text-generation models, helping researchers select the one best suited to their needs.
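
For context, a minimal sketch of what GPT-2 fine-tuning on lecture transcripts can look like with the Hugging Face `transformers` library is shown below. The file path `lectures.txt` and all hyperparameters here are illustrative assumptions, not the exact configuration used in this project.

```python
# Minimal GPT-2 fine-tuning sketch (illustrative; not the project's exact setup).
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the transcript file into fixed-length blocks for causal LM training.
# "lectures.txt" is a hypothetical path to plain-text lecture transcripts.
dataset = TextDataset(tokenizer=tokenizer, file_path="lectures.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="kaogpt-gpt2",          # assumed output directory name
        num_train_epochs=3,                # assumed hyperparameters
        per_device_train_batch_size=4,
    ),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```

After training, text in the style of the transcripts can be sampled with `model.generate`.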

This paper was written as the final project for UCLA's ECE C147: Neural Networks and Deep Learning (Winter 2023), taught by Professor Jonathan Kao.