Reading Notes | Language Models are Few-Shot Learners

Info: T. B. Brown et al., “Language Models are Few-Shot Learners,” 2020, doi: 10.48550/arXiv.2005.14165.

A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” 2018.

A. Radford et al., “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019.