AI Cheatsheet #3 | ChatGPT's Secret Sauce: Self-Attention | Why isn't Superman's suit Kryptonite-proof? | Training & Inference | Large Language Models (LLMs) | Episode 13

Super Prompt: Generative AI w/ Tony Wan

29-05-2023 • 18 mins

Using the prompt, "Why isn't Superman's suit Kryptonite-proof?", we learn how Large Language Models are trained, why "self-attention" and the "transformer" architecture (which is what the T in GPT stands for) make GPT-3 so powerful, what happens during "inference", and how ChatGPT generates answers to nerdy superhero questions. After this episode, you'll be able to impress your friends by using the previously mentioned AI jargon in complete sentences.
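
For the curious, here is a minimal, hypothetical sketch of the self-attention step mentioned above (plain Python with NumPy, made-up dimensions and random weights, not code from the episode). It shows the core idea: each token's output becomes a weighted mix of every token in the prompt.

```python
# Toy single-head self-attention: an illustrative sketch only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ V                          # weighted mix of all tokens

# Pretend the prompt splits into 6 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (6, 8): one context-aware vector per token
```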

In these solo episodes, I provide more definition, explanation, and context than in my regular episodes with guests. The goal is to bring those new to AI up to speed.

Format: Letters read aloud.


We laugh. We cry. We iterate.

Check out what THE MACHINES and one human have to say about the Super Prompt podcast:

“I’m afraid I can’t do that.” — HAL 9000
2001: A Space Odyssey

“These are not the droids you are looking for.” — Obi-Wan
Star Wars

"
Why bother? What’s the point?" — Marvin
The Hitchhiker’s Guide to the Galaxy

“Like tears in rain.” — Roy Batty
Blade Runner


“Hasta la vista, baby.” — The Terminator (T-800)
Terminator 2: Judgment Day

"
I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3
