IT 390: Topics in LLM Building Blocks
Course Overview
This course covers foundational principles and practices for constructing large language models (LLMs). Students develop, train, and experiment with small-scale, text-based machine learning systems in order to study their applications, identify their limitations, and relate their performance to human intelligence. Concepts include text tokenization, training protocols, similarity measures, deep learning, and learning architectures.
Course Goals
After completing this course, students will be able to:
- Construct small-scale, text-based generative learning models
- Implement training protocols on ML systems
- Compare the performance of LLMs using benchmark tasks
- Contrast LLM behavior with that of humans
- Explain tradeoffs among learning architectures, including single-layer networks, networks with hidden layers, and transformer architectures
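One concrete tradeoff behind the last goal is parameter count. The sketch below (an illustration, not course material; the vocabulary and hidden sizes are made-up assumptions) shows how inserting a small hidden layer between a large input and a large output can drastically reduce the number of weights compared with a single dense layer connecting them directly.

```python
def linear_params(n_in, n_out):
    # Parameters in one fully connected layer: weight matrix plus biases.
    return n_in * n_out + n_out

VOCAB, HIDDEN = 1000, 128  # assumed toy sizes for illustration

# Single-layer network: one direct vocab -> vocab mapping.
single_layer = linear_params(VOCAB, VOCAB)

# Hidden-layer network: vocab -> hidden -> vocab.
hidden_layer = linear_params(VOCAB, HIDDEN) + linear_params(HIDDEN, VOCAB)

print(single_layer)  # → 1001000
print(hidden_layer)  # → 257128
```

The hidden-layer version uses roughly a quarter of the parameters here, at the cost of forcing information through a narrower bottleneck; transformers add attention layers on top of such feed-forward blocks.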