Stanford U’s Language Model Leverages Stochastic Processes to Improve Efficiency and Coherence in Long Text Generation

Writing a few paragraphs is a relatively simple task for most humans, but even experienced novelists often run into problems when trying to develop their second chapter. A similar issue hinders today’s large-scale pretrained language models such as GPT-2,