
How AI is trained: Pre-training, mid-training, and post-training explained | Lex Fridman Podcast

via Lex Clips

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=EV7WhVT270Q

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/cv9603-sb

See below for guest bio, links, and to give feedback, submit questions, contact Lex, etc.

*GUEST BIO:*
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).

*CONTACT LEX:*
*Feedback* - give feedback to Lex: https://lexfridman.com/survey
*AMA* - submit questions, videos or call-in: https://lexfridman.com/ama
*Hiring* - join our team: https://lexfridman.com/hiring
*Other* - other ways to get in touch: https://lexfridman.com/contact

*EPISODE LINKS:*
Nathan's X: https://x.com/natolambert
Nathan's Blog: https://interconnects.ai
Nathan's Website: https://natolam

