Dwarkesh Podcast

Dwarkesh Patel

Deeply researched interviews https://link.chtbl.com/dwarkesh

www.dwarkeshpatel.com
Technology

Episodes

Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus
18-04-2024
Mark Zuckerberg on:
- Llama 3
- open sourcing towards AGI
- custom silicon, synthetic data, & energy constraints on scaling
- Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more

Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human edited transcript with helpful links here.

Timestamps
(00:00:00) - Llama 3
(00:08:32) - Coding on path to AGI
(00:25:24) - Energy bottlenecks
(00:33:20) - Is AI the most important technology ever?
(00:37:21) - Dangers of open source
(00:53:57) - Caesar Augustus and metaverse
(01:04:53) - Open sourcing the $10b model & custom silicon
(01:15:19) - Zuck as CEO of Google+

Sponsors
If you’re interested in advertising on the podcast, fill out this form.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes, and grow their revenue. Learn more at stripe.com.
* V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.
* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Paul Christiano - Preventing an AI Takeover
31-10-2023
Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out!

We discuss:
- Does he regret inventing RLHF, and is alignment necessarily dual-use?
- Why he has relatively modest timelines (40% by 2040, 15% by 2030)
- What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
- Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon
- His current research into a new proof system, and how this could solve alignment by explaining a model's behavior
- and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Open Philanthropy
Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations. For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/ The deadline to apply is November 9th; make sure to check out those roles before they close.

Timestamps
(00:00:00) - What do we want the post-AGI world to look like?
(00:24:25) - Timelines
(00:45:28) - Evolution vs gradient descent
(00:54:53) - Misalignment and takeover
(01:17:23) - Is alignment dual-use?
(01:31:38) - Responsible scaling policies
(01:58:25) - Paul’s alignment research
(02:35:01) - Will this revolutionize theoretical CS and math?
(02:46:11) - How Paul invented RLHF
(02:55:10) - Disagreements with Carl Shulman
(03:01:53) - Long TSMC but not NVIDIA

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don’t Work
12-07-2023
A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook.

Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was.

So I asked if we could record a conversation about how he learns and a bunch of other topics:
* How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace)
* Why memorization is essential to understanding and decision-making
* How some people (like Tyler Cowen) can integrate so much information without an explicit note-taking or spaced repetition system
* How LLMs and video games will change education
* How independent researchers and writers can make money
* The balance of freedom and discipline in education
* Why we produce fewer von Neumann-like prodigies nowadays
* How multi-trillion dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users, etc.) into new products designed by tens of thousands of people

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

To see Andy’s process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material.

You can check out his website and personal notes, and follow him on Twitter.

Cometeer
Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life!

If you want to sponsor an episode, contact me at dwarkesh.sanjay.patel@gmail.com.

Timestamps
(00:00:52) - Skillful reading
(00:02:30) - Do people care about understanding?
(00:06:52) - Structuring effective self-teaching
(00:16:37) - Memory and forgetting
(00:33:10) - Andy’s memory practice
(00:40:07) - Intellectual stamina
(00:44:27) - New media for learning (video, games, streaming)
(00:58:51) - Schools are designed for the median student
(01:05:12) - Is learning inherently miserable?
(01:11:57) - How Andy would structure his kids’ education
(01:30:00) - The usefulness of hypertext
(01:41:22) - How computer tools enable iteration
(01:50:44) - Monetizing public work
(02:08:36) - Spaced repetition
(02:10:16) - Andy’s personal website and notes
(02:12:44) - Working at Apple
(02:19:25) - Spaced repetition 2

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
26-06-2023
The second half of my 7-hour conversation with Carl Shulman is out!

My favorite part! And the one that had the biggest impact on my worldview.

Here, Carl lays out how an AI takeover might happen:
* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.

Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:
* what is the far future best case scenario for humanity
* what it would look like to have AI make thousands of years of intellectual progress in a month
* how do we detect deception in superhuman models
* does space warfare favor defense or offense
* is a Malthusian state inevitable in the long run
* why markets haven't priced in explosive economic growth
* & much more

Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Catch part 1 here.

Timestamps
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
14-06-2023
In terms of the depth and range of topics, this episode is the best I’ve done.

No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl’s model of an intelligence explosion, which integrates everything from:
* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe