Learning about Large Language Models

I am reading Sebastian Raschka’s book Build a Large Language Model (From Scratch) with a few colleagues at work. The following video is mentioned in chapter 1 of the book.

Developing an LLM: Building, Training, Finetuning

We have quizzes to see how well we understand the material. These are the questions & notes I have jotted down from my reading so far.

Chapter 1

  1. What is an LLM? p2
  2. What are 2 dimensions that “large” refers to?
  3. Which architecture do LLMs utilize? p3
  4. Why are LLMs often referred to as generative AI/genAI?
  5. What is the relationship between AI, ML, deep learning, LLMs, and genAI?
  6. Give a difference between traditional ML and deep learning.
  7. What are other approaches to AI apart from ML and deep learning? p4
  8. List 5 applications of LLMs.
  9. What are 3 advantages of custom built LLMs? p5
  10. What are the 2 general steps in creating an LLM? p6
  11. What is a base/foundation model? Give an example. p7
  12. What are the few-shot capabilities of a base model?
  13. What are 2 categories of fine-tuning LLMs?
  14. Which architecture did Attention Is All You Need introduce?
  15. Describe the transformer architecture.

Part II

  1. What are the 2 submodules of a transformer? p7
  2. What is the purpose of the self-attention mechanism?
  3. What is BERT? What do the initials stand for? p8
  4. What does GPT stand for?
  5. What is the difference between BERT and GPT? Which submodule of the original transformer does each focus on?
  6. List a real-world application of BERT. p9
  7. What is the difference between zero-shot and few-shot capabilities?
  8. What are applications of transformers (other than LLMs)? p10
  9. Give 2 examples of architectures (other than transformers) that LLMs can be based on.
  10. See Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research for a publicly available training dataset (may contain copyrighted works) p11
  11. Why are models like GPT-3 called base or foundation models?
  12. What is an estimate of the cost of training GPT-3? See https://www.reddit.com/r/MachineLearning/comments/h0jwoz/d_gpt3_the_4600000_language_model/
  13. What type of learning is next-word prediction? p12
  14. What is an autoregressive model? Why is GPT one? (sketched after this list)
  15. How many transformer layers and parameters does GPT-3 have? p13
  16. When was GPT-3 introduced?
  17. Which task was the original transformer model explicitly designed for? p14
  18. What is emergent behavior?
  19. What are the 3 main stages of coding an LLM in this book?
  20. What is the key idea of the transformer architecture? p15
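
Relating to questions 13 and 14: next-word prediction is a form of self-supervised learning, and GPT is autoregressive because each generated token is appended to the input and fed back in to predict the next one. Below is a minimal sketch of that loop; ToyLM is a hypothetical, untrained stand-in for a real language model.

    import torch
    import torch.nn as nn

    torch.manual_seed(123)

    class ToyLM(nn.Module):
        """Hypothetical stand-in: maps token IDs to next-token logits."""
        def __init__(self, vocab_size=10, emb_dim=8):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.out = nn.Linear(emb_dim, vocab_size)

        def forward(self, ids):               # ids: (batch, seq_len)
            return self.out(self.emb(ids))    # logits: (batch, seq_len, vocab)

    model = ToyLM()
    ids = torch.tensor([[1, 2, 3]])           # the prompt

    # Autoregressive loop: the model consumes its own previous outputs.
    for _ in range(5):
        with torch.no_grad():
            logits = model(ids)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
        ids = torch.cat([ids, next_id], dim=1)

    print(ids)   # the prompt plus 5 generated token IDs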

GPT References

  1. Improving Language Understanding by Generative Pre-Training p12
  2. Training language models to follow instructions with human feedback

Chapter 2

  1. What is embedding? p18
  2. What is retrieval-augmented generation? p19
  3. Which embeddings are popular for RAG?
  4. What is Word2Vec? What is the main idea behind it?
  5. What is an advantage of high dimensionality in word embeddings? A disadvantage?
  6. What is an advantage of optimizing embeddings as part of LLM training instead of using Word2Vec? (sketched after this list)
  7. What is the embedding size of the smaller GPT-2 models? The largest GPT-3 models?
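
A minimal sketch of embeddings as a trainable lookup table (questions 1, 6, and 7); the toy sizes are mine. For reference, the smaller GPT-2 models use an embedding size of 768, and the largest GPT-3 model uses 12,288.

    import torch
    import torch.nn as nn

    torch.manual_seed(123)
    vocab_size, emb_dim = 6, 3   # toy sizes (768 in GPT-2 small, 12,288 in the largest GPT-3)

    # nn.Embedding is a trainable lookup table: row i holds the vector for
    # token ID i. Unlike fixed Word2Vec vectors, these weights keep being
    # updated during LLM training, so they adapt to the task and the data.
    embedding = nn.Embedding(vocab_size, emb_dim)

    token_ids = torch.tensor([2, 3, 5, 1])
    print(embedding(token_ids).shape)   # (4, 3): one vector per token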

Chapter 3

  1. Why can’t we simply translate a text from one language to another word by word? p52
  2. How can this challenge be addressed using a deep neural network?
  3. What is a recurrent neural network?
  4. What was the most popular encoder-decoder architecture before the advent of transformers?
  5. Explain how an encoder-decoder RNN works. p53
  6. What is the big limitation of encoder-decoder RNNs?
  7. What is the Bahdanau attention mechanism? p54
  8. What is self-attention? p55
  9. What serves as the cornerstone of every LLM based on the transformer architecture?
  10. What does the self in self-attention refer to? p56
  11. What is a context vector? p57
  12. Why are context vectors essential in LLMs?
  13. Why is the dot product a measure of similarity? p59
  14. Give 2 reasons why the attention scores are normalized.
  15. Why is it advisable to use the softmax function for normalization in practice? p60
  16. Why is it advisable to use the PyTorch implementation of softmax in particular (instead of your own)?
  17. What is the difference between attention scores and attention weights? p62
  18. How are context vectors computed from attention weights? p63 (first sketch below)
  19. Which are the 3 weight matrices in self-attention with trainable weights? p65
  20. How are these matrices initialized? How are they used?
  21. What is the difference between weight parameters (matrices) and attention weights?
  22. How are the attention scores computed in the self-attention with trainable weights technique?
  23. What about the attention weights? p68
  24. What is scaled dot-product attention? p69 (second sketch below)
  25. Why do we scale by the square root of the embedding dimension?
  26. How does the softmax function behave as the dot products increase?
  27. How is the context vector computed?
  28. What is nn.Module? p71
  29. What is a significant advantage of using nn.Linear instead of nn.Parameter(torch.rand(…))?
  30. What is causal attention?
  31. How can the tril function be used to create a mask where the values above the diagonal are 0?
  32. Explain a more efficient masking trick for computing the masked attention weights. (third sketch below)
  33. What is dropout in deep learning?
  34. At which two specific points is dropout typically applied in the transformer architecture?
  35. Why does nn.Dropout scale the remaining values? p79-80 (fourth sketch below)
  36. What are some advantages of using register_buffer? p81 (noted in the third sketch below)
  37. What is multi-head attention? p82
  38. How can multiple heads be processed in parallel? p85 (fifth sketch below)
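
The five sketches below are my own minimal PyTorch illustrations of the techniques the questions refer to, not the book’s listings; all tensor sizes are toy values. First, simplified self-attention without trainable weights (questions 11 to 18): dot products give the scores, softmax turns them into weights, and weighted sums give the context vectors.

    import torch

    torch.manual_seed(123)
    inputs = torch.rand(6, 3)   # 6 tokens, embedding dim 3 (toy values)

    # Attention scores: dot product of every token with every other token.
    # A larger dot product means the two embeddings point in a more similar
    # direction, which is why the dot product works as a similarity measure.
    attn_scores = inputs @ inputs.T              # (6, 6)

    # Attention weights: softmax normalizes each row to sum to 1 while
    # keeping all values positive and handling extreme inputs gracefully.
    attn_weights = torch.softmax(attn_scores, dim=-1)

    # Context vectors: each row is a weighted sum over all token embeddings.
    context_vecs = attn_weights @ inputs         # (6, 3)
    print(context_vecs.shape)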
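
Second, self-attention with trainable weights (questions 19 to 29), assuming bias-free nn.Linear projections; the class name SelfAttention and the toy dimensions are my own.

    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Scaled dot-product self-attention with trainable W_q, W_k, W_v."""
        def __init__(self, d_in, d_out):
            super().__init__()
            # nn.Linear has a better weight-initialization scheme than
            # torch.rand and an optimized matmul, one reason it is preferred
            # over nn.Parameter(torch.rand(...)).
            self.W_query = nn.Linear(d_in, d_out, bias=False)
            self.W_key = nn.Linear(d_in, d_out, bias=False)
            self.W_value = nn.Linear(d_in, d_out, bias=False)

        def forward(self, x):                    # x: (num_tokens, d_in)
            queries = self.W_query(x)
            keys = self.W_key(x)
            values = self.W_value(x)
            attn_scores = queries @ keys.T       # (num_tokens, num_tokens)
            # Scale by sqrt(d_k): large dot products push softmax toward a
            # near one-hot distribution with tiny gradients, so divide first.
            attn_weights = torch.softmax(
                attn_scores / keys.shape[-1] ** 0.5, dim=-1
            )
            return attn_weights @ values         # context vectors

    torch.manual_seed(123)
    x = torch.rand(6, 3)
    print(SelfAttention(3, 2)(x).shape)          # (6, 2)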
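
Third, causal masking (questions 30 to 32, with a note on question 36). A tril mask of 1s on and below the diagonal can zero out future positions, but the more efficient trick is to fill positions above the diagonal with -inf before the softmax, since exp(-inf) = 0 and the rows then need no renormalization.

    import torch

    torch.manual_seed(123)
    attn_scores = torch.rand(6, 6)   # unnormalized scores for 6 tokens

    # tril-based mask (question 31): 1s on and below the diagonal, 0s above.
    # Multiplying the attention weights by this requires renormalizing rows.
    simple_mask = torch.tril(torch.ones(6, 6))

    # More efficient trick: set positions above the diagonal to -inf
    # *before* softmax; exp(-inf) = 0, so each row still sums to 1.
    mask = torch.triu(torch.ones(6, 6), diagonal=1).bool()
    masked = attn_scores.masked_fill(mask, float("-inf"))
    attn_weights = torch.softmax(masked, dim=-1)
    print(attn_weights)

    # Inside a module, such a mask is typically stored with
    # self.register_buffer(...) so it moves to the right device along with
    # the model and is saved in the state dict without being treated as a
    # trainable parameter.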
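
Fourth, dropout scaling (questions 33 to 35): with p = 0.5, the surviving values are scaled by 1 / (1 - p) = 2 so the expected magnitude of the activations stays the same between training (dropout on) and inference (dropout off).

    import torch

    torch.manual_seed(123)
    dropout = torch.nn.Dropout(0.5)   # a fresh module defaults to training mode
    x = torch.ones(2, 4)
    # Roughly half the values are zeroed; the rest become 2.0, i.e. they
    # are scaled by 1 / (1 - p) to keep the expected row sum unchanged.
    print(dropout(x))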
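
Finally, processing multiple heads in parallel (questions 37 and 38): instead of looping over heads, split the last dimension into (num_heads, head_dim) and move the head axis forward so one batched matmul covers every head. The tensors here are placeholders standing in for the query/key projections.

    import torch

    torch.manual_seed(123)
    b, num_tokens, d_out, num_heads = 1, 6, 4, 2
    head_dim = d_out // num_heads

    # Stand-ins for the outputs of the W_query / W_key projections.
    queries = torch.rand(b, num_tokens, d_out)
    keys = torch.rand(b, num_tokens, d_out)

    # Split the embedding dimension into heads, then move the head axis
    # forward so heads behave like an extra batch dimension.
    q = queries.view(b, num_tokens, num_heads, head_dim).transpose(1, 2)
    k = keys.view(b, num_tokens, num_heads, head_dim).transpose(1, 2)

    # One batched matmul computes the attention scores for all heads at once.
    attn_scores = q @ k.transpose(2, 3)
    print(attn_scores.shape)   # (1, 2, 6, 6): batch, heads, tokens, tokens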
