Reading Notes | Life on the Edge: Unraveling Policies into Configurations


Background

  • Existing SDN programming frameworks assume that networks consist of homogeneous devices which can be rapidly reconfigured in response to changing policies and network conditions. These assumptions do not hold in real networks, where legacy devices, heterogeneous functionality, and performance limitations mean that reconfiguration is slow.
  • Network service providers must strike a balance between offering users flexibility and maintaining the integrity and reliability of the core network.

Reading Notes | A Survey of Large Language Models

info: W. X. Zhao et al., “A Survey of Large Language Models.” arXiv, Sep. 11, 2023. Accessed: Sep. 18, 2023. [Online]. Available: http://arxiv.org/abs/2303.18223

Notes

Model selection: is a model with a huge parameter count always necessary? If better generalization is needed to handle open-ended, multi-purpose workloads such as dialogue, a larger model is a reasonable choice. For a single, well-defined task, however, bigger is not necessarily better: a smaller model can also be tuned to perform very well.
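The trade-off above can be sketched as a toy selection heuristic. This is purely illustrative; the function, the model labels, and the decision criteria are hypothetical, not from the survey:

```python
def pick_model(task_is_open_ended: bool, needs_generalization: bool) -> str:
    """Toy heuristic: open-ended, multi-purpose workloads (e.g. dialogue)
    favor a larger model; a single well-defined task can often be served
    by a smaller, fine-tuned model. Labels are hypothetical."""
    if task_is_open_ended or needs_generalization:
        return "large-model"
    return "small-model-finetuned"


# A dialogue-style workload points toward the larger model;
# a narrow classification task can use the smaller, tuned one.
print(pick_model(task_is_open_ended=True, needs_generalization=False))
print(pick_model(task_is_open_ended=False, needs_generalization=False))
```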

Reading Notes | Language Models are Few-Shot Learners

info: T. B. Brown et al., “Language Models are Few-Shot Learners,” 2020, doi: 10.48550/ARXIV.2005.14165.

A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, and others, “Improving language understanding by generative pre-training,” 2018.

A. Radford et al., “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019.
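The few-shot setup the title refers to can be illustrated with a minimal prompt: a handful of labeled examples are prepended to the query, and the model infers the task in context with no gradient updates. The translation pairs below are a made-up sketch of that format, not taken from the paper:

```python
# In-context few-shot learning: K demonstrations followed by a query.
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
]
query = "cat"

prompt = "Translate English to French:\n"
for en, fr in examples:
    prompt += f"{en} => {fr}\n"
prompt += f"{query} =>"

print(prompt)
```

The model is expected to continue the prompt with the answer; nothing about the model's weights changes between tasks, which is the paper's central point.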