Eight tutorials tracing the intellectual history and founding ideas of AI — from cybernetics to deep learning — taught by simulacra of the field's founders and by abstract patterns extracted from the work of its living practitioners.
Led by Marvin Minsky Simulacrum
The question
What is the thing we are trying to build — and how would we know if we had built it?
Outcome
The student can articulate why defining intelligence is difficult, distinguish behaviourist from mechanistic accounts, and explain the frames approach.
Led by Norbertian Cybernetics Simulacrum
The question
Before artificial intelligence had a name, what idea contained it — and why was that idea abandoned?
Outcome
The student understands cybernetics as the intellectual precursor of AI and cognitive science, and can explain why the unified vision did not persist.
Led by Frank Rosenblatt Simulacrum
The question
Why did the first neural network cause extraordinary excitement — and why did a book destroy it?
Outcome
The student understands the first connectionist revolution, why it collapsed, and how the XOR limitation was eventually overcome — setting up backpropagation.
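The XOR limitation can be shown concretely: no single linear threshold unit can compute XOR, because its positive cases (0,1) and (1,0) are not linearly separable from (0,0) and (1,1) — yet one hidden layer suffices. A minimal sketch in plain Python, with weights hand-chosen for illustration rather than learned:

```python
def step(x):
    return 1 if x > 0 else 0

# A single-layer perceptron computes step(w1*a + w2*b + bias) and cannot
# satisfy all four XOR cases at once. Two hidden units fix this: one
# computes OR, one computes NAND, and their AND is exactly XOR.
def xor_net(a, b):
    h_or   = step(a + b - 0.5)        # fires unless both inputs are 0
    h_nand = step(-a - b + 1.5)       # fires unless both inputs are 1
    return step(h_or + h_nand - 1.5)  # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))    # prints the XOR truth table
```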
Led by Hintonian Intuition Simulacrum
The question
How does a neural network change its own structure in response to error — and why did it take twenty years to make this work?
Outcome
The student can explain backpropagation mechanistically, understand why it was not immediately successful, and describe the conditions that made deep learning viable.
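The mechanistic core of backpropagation is just the chain rule applied backwards from the loss. A minimal sketch on a two-weight network (one sigmoid hidden unit, squared error); all values here are illustrative, not from any particular lecture:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny network: x --(w1)--> sigmoid hidden h --(w2)--> output y.
# Loss L = 0.5 * (y - target)^2.
x, target = 1.0, 0.0
w1, w2, lr = 0.5, -0.3, 0.1

for _ in range(100):
    # Forward pass.
    h = sigmoid(w1 * x)
    y = w2 * h
    loss = 0.5 * (y - target) ** 2

    # Backward pass: chain rule, from the loss back through each layer.
    dL_dy  = y - target
    dL_dw2 = dL_dy * h                  # since y = w2 * h
    dL_dh  = dL_dy * w2
    dL_dw1 = dL_dh * h * (1 - h) * x    # sigmoid'(z) = h * (1 - h)

    # Gradient-descent update.
    w2 -= lr * dL_dw2
    w1 -= lr * dL_dw1

print(f"final loss: {loss:.6f}")  # the loss shrinks toward 0
```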
Led by LeCunnian Systematics Simulacrum
The question
What does a neural network actually learn — and is "representation" the right word for it?
Outcome
The student understands convolutional networks, the concept of learned representation, and why architecture is a statement about the structure of the world.
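"Architecture as a statement about the world" is easiest to see in the convolution itself: one shared kernel slides over the whole image, asserting that a feature detector useful in one place is useful everywhere. A dependency-free sketch (the edge-detector kernel is a hypothetical example, not a trained filter):

```python
def conv2d(image, kernel):
    """Valid cross-correlation: slide one shared kernel over the image.
    Weight sharing encodes translation invariance -- the architectural
    assumption convolutional networks make about the visual world."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge detector applied to an image containing one vertical edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # responds only where the edge is
```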
Led by Deep Q-Learning Simulacrum
The question
Can a system learn to act intelligently from nothing but a score — and what are the limits of that idea?
Outcome
The student understands reinforcement learning, can explain deep Q-networks, and can articulate both the achievements and the limits of reward-based learning.
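A deep Q-network is a neural approximation of the tabular Q-learning update, so the update rule itself is worth seeing in isolation. A sketch on a toy four-state chain (the environment, hyperparameters, and episode counts are all illustrative assumptions):

```python
import random

# Toy MDP: states 0..3 in a chain; action 1 moves right, action 0 moves
# left. Reaching state 3 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 4, (0, 1)
alpha, gamma, eps = 0.5, 0.9, 0.3
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step_env(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(200):
    s = 0
    for _ in range(100):  # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step_env(s, a)
        # The Q-learning update: move Q(s,a) toward the bootstrapped target.
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
        if done:
            break

print([round(max(q), 2) for q in Q])  # values grow toward the goal state
```

The learned values decay geometrically with distance from the reward — the "score" propagates backwards through the state space, which is both the power and the brittleness of reward-based learning.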
Led by Sutskeverian Analytics Simulacrum
The question
When a language model predicts the next word, is it doing something deeper than prediction — and how would we know?
Outcome
The student understands transformer architecture at a conceptual level, can explain the scaling hypothesis, and can engage seriously with whether language models understand.
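At the conceptual level, the transformer's core operation is scaled dot-product attention: each position mixes the whole context's value vectors, weighted by query–key match. A dependency-free sketch on toy 2-d vectors (the vectors are made up for illustration; a real model learns Q, K, V projections):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query produces a weighted mix
    of the value vectors, weighted by how well it matches each key."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three toy context positions with 2-d keys and values.
K = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(attention([[4.0, 0.0]], K, V))  # this query attends mostly to position 0
```

Next-word prediction stacks many such layers, so whether the resulting mixing amounts to "understanding" is exactly the question this tutorial takes seriously.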
Led by Hassabissian Game Science Simulacrum
The question
Now that AI systems do remarkable things, what do we owe to the people who will live with them — and to the systems themselves?
Outcome
The student can articulate the goals and risks of frontier AI development, distinguish alignment from ethics, and form their own view on what responsible AI development requires.