Code with the Author of Build an LLM (From Scratch)

  • Category Other
  • Type Tutorials
  • Language English
  • Total size 3.2 GB
  • Uploaded By freecoursewb
  • Downloads 310
  • Last checked 3 days ago
  • Date uploaded 6 months ago
  • Seeders 8
  • Leechers 4

Infohash : 1B0592C4D2DBAA7DC770E95FE0C0A773F8A659DD


https://WebToolTip.com

May 2025 | MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 kHz
Language: English | Size: 3.15 GB | Duration: 13h 35m

Master how large language models like GPT really work, through hands-on coding sessions led by bestselling author Sebastian Raschka.

These companion videos to Build a Large Language Model (From Scratch) walk you through real-world implementation, with each session ending in a "test yourself" challenge to solidify your skills and deepen your understanding.


Files:

[ WebToolTip.com ] Code with the Author of Build an LLM (From Scratch)
  • Get Bonus Downloads Here.url (0.2 KB)
  • ~Get Your Files Here !
    • 001. Chapter 1. Python Environment Setup.mp4 (90.2 MB)
    • 002. Chapter 2. Tokenizing text.mp4 (99.1 MB)
    • 003. Chapter 2. Converting tokens into token IDs.mp4 (40.0 MB)
    • 004. Chapter 2. Adding special context tokens.mp4 (34.8 MB)
    • 005. Chapter 2. Byte pair encoding.mp4 (69.7 MB)
    • 006. Chapter 2. Data sampling with a sliding window.mp4 (91.9 MB)
    • 007. Chapter 2. Creating token embeddings.mp4 (32.8 MB)
    • 008. Chapter 2. Encoding word positions.mp4 (49.2 MB)
    • 009. Chapter 3. A simple self-attention mechanism without trainable weights Part 1.mp4 (173.9 MB)
    • 010. Chapter 3. A simple self-attention mechanism without trainable weights Part 2.mp4 (55.0 MB)
    • 011. Chapter 3. Computing the attention weights step by step.mp4 (63.5 MB)
    • 012. Chapter 3. Implementing a compact self-attention Python class.mp4 (33.6 MB)
    • 013. Chapter 3. Applying a causal attention mask.mp4 (56.4 MB)
    • 014. Chapter 3. Masking additional attention weights with dropout.mp4 (16.8 MB)
    • 015. Chapter 3. Implementing a compact causal self-attention class.mp4 (41.5 MB)
    • 016. Chapter 3. Stacking multiple single-head attention layers.mp4 (45.5 MB)
    • 017. Chapter 3. Implementing multi-head attention with weight splits.mp4 (127.1 MB)
    • 018. Chapter 4. Coding an LLM architecture.mp4 (62.1 MB)
    • 019. Chapter 4. Normalizing activations with layer normalization.mp4 (84.0 MB)
    • 020. Chapter 4. Implementing a feed forward network with GELU activations.mp4 (102.1 MB)
    • 021. Chapter 4. Adding shortcut connections.mp4 (44.3 MB)
    • 022. Chapter 4. Connecting attention and linear layers in a transformer block.mp4 (64.1 MB)
    • 023. Chapter 4. Coding the GPT model.mp4 (67.0 MB)
    • 024. Chapter 4. Generating text.mp4 (65.7 MB)
    • 025. Chapter 5. Using GPT to generate text.mp4 (71.6 MB)
    • 026. Chapter 5. Calculating the text generation loss cross entropy and perplexity.mp4 (97.6 MB)
    • 027. Chapter 5. Calculating the training and validation set losses.mp4 (94.7 MB)
    • 028. Chapter 5. Training an LLM.mp4 (138.8 MB)
    • 029. Chapter 5. Decoding strategies to control randomness.mp4 (20.1 MB)
    • 030. Chapter 5. Temperature scaling.mp4 (42.2 MB)
    • 031. Chapter 5. Top-k sampling.mp4 (26.3 MB)
    • 032. Chapter 5. Modifying the text generation function.mp4 (33.5 MB)
    • 033. Chapter 5. Loading and saving model weights in PyTorch.mp4 (22.0 MB)
    • 034. Chapter 5. Loading pretrained weights from OpenAI.mp4 (106.6 MB)
    • 035. Chapter 6. Preparing the dataset.mp4 (103.8 MB)
    • 036. Chapter 6. Creating data loaders.mp4 (54.3 MB)
    • 037. Chapter 6. Initializing a model with pretrained weights.mp4 (42.3 MB)
    • 038. Chapter 6. Adding a classification head.mp4 (73.7 MB)
    • 039. Chapter 6. Calculating the classification loss and accuracy.mp4 (64.5 MB)
    • 040. Chapter 6. Fine-tuning the model on supervised data.mp4 (162.7 MB)
    • 041. Chapter 6. Using the LLM as a spam classifier.mp4 (35.9 MB)
    • 042. Chapter 7. Preparing a dataset for supervised instruction fine-tuning.mp4 (47.2 MB)
    • 043. Chapter 7. Organizing data into training batches.mp4 (79.8 MB)
    • 044. Chapter 7. Creating data loaders for an instruction dataset.mp4 (32.3 MB)
    • 045. Chapter 7. Loading a pretrained LLM.mp4 (24.7 MB)
    • 046. Chapter 7. Fine-tuning the LLM on instruction data.mp4 (98.2 MB)
    • 047. Chapter 7. Extracting and saving responses.mp4 (42.3 MB)
    • 048. Chapter 7. Evaluating the fine-tuned LLM.mp4 (102.1 MB)
    • Bonus Resources.txt (0.1 KB)

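Two of the chapter 5 decoding topics listed above (030. Temperature scaling and 031. Top-k sampling) can be sketched in a few lines of plain Python. This is a minimal illustration of the general techniques, not code from the book or videos; the function names are assumptions made for this sketch:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by 1/temperature before normalizing:
    # temperature < 1 sharpens the distribution, > 1 flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_probs(logits, k, temperature=1.0):
    # Keep only the k largest logits; mask the rest to -inf so
    # they receive zero probability after the softmax.
    threshold = sorted(logits, reverse=True)[k - 1]
    masked = [x if x >= threshold else float("-inf") for x in logits]
    return softmax(masked, temperature)

# Example: a 4-token vocabulary, sampling restricted to the top 2 tokens.
probs = top_k_probs([2.0, 1.0, 0.1, -1.0], k=2)
```

In practice an LLM applies this to the logits over its full vocabulary at each generation step, then samples the next token from the resulting distribution.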

Trackers:

  • udp://tracker.torrent.eu.org:451/announce
  • udp://tracker.tiny-vps.com:6969/announce
  • http://tracker.foreverpirates.co:80/announce
  • udp://tracker.cyberia.is:6969/announce
  • udp://exodus.desync.com:6969/announce
  • udp://explodie.org:6969/announce
  • udp://tracker.opentrackr.org:1337/announce
  • udp://9.rarbg.to:2780/announce
  • udp://tracker.internetwarriors.net:1337/announce
  • udp://ipv4.tracker.harry.lu:80/announce
  • udp://open.stealth.si:80/announce
  • udp://9.rarbg.to:2900/announce
  • udp://9.rarbg.me:2720/announce
  • udp://opentor.org:2710/announce