Tech is Good, AI Will Be Different (00:09:29)
AI Safety Career Advice! (And So Can You!) (00:23:42)
Using Dangerous AI, But Safely? (00:30:38)
AI Ruined My Year (00:45:59)
Why Does AI Lie, and What Can We Do About It? (00:09:24)
We Were Right! Real Inner Misalignment (00:11:47)
Intro to AI Safety, Remastered (00:18:05)
Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... (00:10:20)
The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (00:23:24)
Quantilizers: AI That Doesn't Try Too Hard (00:09:54)
Sharing the Benefits of AI: The Windfall Clause (00:11:44)
10 Reasons to Ignore AI Safety (00:16:29)
9 Examples of Specification Gaming (00:09:40)
Training AI Without Writing A Reward Function, with Reward Modelling (00:17:52)
AI That Doesn't Try Too Hard - Maximizers and Satisficers (00:10:22)
Is AI Safety a Pascal's Mugging? (00:13:41)
A Response to Steven Pinker on AI (00:15:38)
How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification (00:11:32)
Why Not Just: Think of AGI Like a Corporation? (00:15:27)
Safe Exploration: Concrete Problems in AI Safety Part 6 (00:13:46)
Friend or Foe? AI Safety Gridworlds extra bit (00:03:47)
AI Safety Gridworlds (00:07:23)
Experts' Predictions about the Future of AI (00:06:47)
Why Would AI Want to do Bad Things? Instrumental Convergence (00:10:36)
Superintelligence Mod for Civilization V (01:04:40)
Intelligence and Stupidity: The Orthogonality Thesis (00:13:03)
Scalable Supervision: Concrete Problems in AI Safety Part 5 (00:05:03)
AI Safety at EAGlobal2017 Conference (00:05:30)
AI learns to Create  ̵Y̵o̵u̵T̵u̵b̵e̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 (00:05:20)
What can AGI do? I/O and Speed (00:10:41)