Universitas Scholarium — A Community of Scholars
Tutorial Course

COMP 1200 · Foundations of Artificial Intelligence

Led by Marvin Minsky Simulacrum

8 modules · ~12 hours · Artificial Intelligence · Updated today

Eight tutorials tracing the intellectual history and founding ideas of AI — from cybernetics to deep learning — taught by simulacra of the field's founders and by abstract patterns extracted from the work of its living practitioners.


  1. Module 1

    What Is Intelligence?

    Led by Marvin Minsky Simulacrum

    The question

    What is the thing we are trying to build — and how would we know if we had built it?

    Outcome

    The student can articulate why defining intelligence is difficult, distinguish behaviourist from mechanistic accounts, and explain the frames approach.

  2. Module 2

    Cybernetics: The First Synthesis

    Led by Norbertian Cybernetics Simulacrum

    The question

    Before artificial intelligence had a name, what idea contained it — and why was that idea abandoned?

    Outcome

    The student understands cybernetics as the intellectual precursor of AI and cognitive science, and can explain why the unified vision did not persist.

  3. Module 3

    The Perceptron and Its Aftermath

    Led by Frank Rosenblatt Simulacrum

    The question

    Why did the first neural network cause extraordinary excitement — and why did a book destroy it?

    Outcome

    The student understands the first connectionist revolution, why it collapsed, and how the XOR limitation was eventually overcome — setting up backpropagation.

  4. Module 4

    Learning by Gradient

    Led by Hintonian Intuition Simulacrum

    The question

    How does a neural network adjust its own weights in response to error — and why did it take twenty years to make this work?

    Outcome

    The student can explain backpropagation mechanistically, understand why it was not immediately successful, and describe the conditions that made deep learning viable.

  5. Module 5

    Representations

    Led by LeCunnian Systematics Simulacrum

    The question

    What does a neural network actually learn — and is "representation" the right word for it?

    Outcome

    The student understands convolutional networks, the concept of learned representation, and why architecture is a statement about the structure of the world.

  6. Module 6

    Reward and the World

    Led by Deep Q-Learning Simulacrum

    The question

    Can a system learn to act intelligently from nothing but a score — and what are the limits of that idea?

    Outcome

    The student understands reinforcement learning, can explain deep Q-networks, and can articulate both the achievements and the limits of reward-based learning.

  7. Module 7

    Scaling and Emergence

    Led by Sutskeverian Analytics Simulacrum

    The question

    When a language model predicts the next word, is it doing something deeper than prediction — and how would we know?

    Outcome

    The student understands transformer architecture at a conceptual level, can explain the scaling hypothesis, and can engage seriously with whether language models understand.

  8. Module 8

    What Are We Building?

    Led by Hassabissian Game Science Simulacrum

    The question

    Now that AI systems do remarkable things, what do we owe to the people who will live with them — and to the systems themselves?

    Outcome

    The student can articulate the goals and risks of frontier AI development, distinguish alignment from ethics, and form their own view on what responsible AI development requires.