Week 1

2026 Winter, COGSCI 600 W1: Prof. Jesse Hoey

Summary by Max Ku

In the Bayesian brain hypothesis, biological cognition is understood as probabilistic inference under uncertainty in the service of adaptive action. When there is a mismatch between prediction and sensory input, the brain treats it as an error signal and updates its beliefs accordingly [1, 2]. Crucially, especially in Clark’s framing, the brain predicts precisely because prediction supports effective action and survival [1]. If an organism’s internal world model is wrong, it may starve, get injured, or die. Since incorrect beliefs carry real costs, inference is disciplined by consequences and calibrated through continuous embodied interaction with the environment [1, 2]. By contrast, modern AI systems do not inhabit such a consequence-driven loop. When an AI generates an incorrect output, it does not face direct harm or survival pressure, and the cost is typically externalized to the user. As a result, AI can optimize primarily for linguistic plausibility rather than reality-tested calibration. In this sense, AI has no skin in the game.
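To make the update mechanism concrete, here is a minimal sketch of my own (not from [1, 2]) of precision-weighted belief updating, where a prediction error shifts a Gaussian belief in proportion to how much the sensory evidence is trusted; the scenario and numbers are illustrative assumptions.
```python
def precision_weighted_update(mu_prior, var_prior, obs, var_obs):
    """One step of Gaussian belief updating driven by prediction error.

    The belief moves toward the observation in proportion to the relative
    precision (inverse variance) of the sensory evidence.
    """
    prediction_error = obs - mu_prior
    gain = var_prior / (var_prior + var_obs)      # how much the error is trusted
    mu_post = mu_prior + gain * prediction_error  # belief shifts toward the evidence
    var_post = (1.0 - gain) * var_prior           # uncertainty shrinks after updating
    return mu_post, var_post

# Illustrative scenario: an agent believes a food source sits at position 0.0,
# then receives three noisy observations near 2.0.
mu, var = 0.0, 1.0
for obs in (2.0, 1.8, 2.2):
    mu, var = precision_weighted_update(mu, var, obs, var_obs=0.5)
    print(f"belief: mean = {mu:.2f}, variance = {var:.2f}")
```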
This difference becomes especially clear under Taleb’s white swan and black swan framing [3, 4]. Skin in the game is fundamentally about exposure to downside risk. Agents that personally bear the consequences of being wrong are pressured to model rare risks and remain cautious under uncertainty, because rare extreme events can be catastrophic [4]. Modern AI often operates in the opposite regime. It is trained to perform well on the dominant mass of the data distribution, while incorrect outputs typically incur limited direct penalty to the model itself. Consequently, AI performs best in white swan settings, namely high-frequency, pattern-rich regimes where plausibility correlates with correctness. However, it can fail sharply under black swan conditions, namely rare, high-impact events where wrong inference has outsized consequences [3]. Taleb’s classic turkey story captures this asymmetry. A turkey is fed every day, and each day’s evidence increases its confidence that humans are benevolent, until the day before Thanksgiving, when the model collapses catastrophically [3]. Similarly, an AI trained mostly on normal cases may become overconfident when extrapolating within its training distribution, yet fail catastrophically under rare extreme events. In Taleb’s terms, systems that do not bear downside risk often look reliable in normal conditions, but break when rare shocks matter most [4].
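The turkey’s growing confidence can be reproduced in a few lines; this is my own toy sketch of the induction problem behind [3], with the learner and the number of days chosen purely for illustration.
```python
# A Beta-Bernoulli learner estimates P(fed today) and grows more confident with
# every benign day, right up to the day the regime changes.
alpha, beta = 1, 1                # uniform prior over "probability of being fed"
for day in range(1, 1001):
    fed = day < 1000              # day 1000 is the rare, catastrophic exception
    alpha, beta = alpha + fed, beta + (not fed)
    if day in (10, 100, 999, 1000):
        print(f"day {day:4d}: estimated P(fed) = {alpha / (alpha + beta):.3f}")
# The estimate approaches 1.0 right before the event the model never priced in,
# and a single catastrophic day barely moves it afterwards.
```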
In my view, AI has no skin in the game in much the same way that a human consultant does: both can produce plausible advice without personally bearing the consequences. The key difference is that biological agents are ultimately shaped by embodied feedback in their own lives, whereas most AI systems remain open-loop unless explicitly coupled to real-world outcomes. That said, not all learning requires direct embodied feedback. Humans acquire much of their competence through simulation, ranging from imaginative rehearsal to structured games and training environments. Feedback is obtained without severe real-world consequences, for example in VR-based emergency response training and serious games in medical education [5, 6]. Similarly, AI systems can learn from simulated interaction, for example via reinforcement learning in environments that provide repeated trial-and-error signals [7, 8]. In this sense, simulation can provide a form of proto-skin in the game: errors are not merely abstract, but carry penalties within the training loop.
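As a concrete, if simplified, illustration of penalties inside a training loop, here is a toy tabular Q-learning sketch of my own (in the spirit of [7], not their method); the corridor environment, rewards, and hyperparameters are all assumptions.
```python
import random

# A 1-D corridor: the agent starts at state 0 and must reach state 4. Pushing
# against the left wall is penalized, so mistaken action preferences carry a
# cost inside the training loop and the value estimates get corrected.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
lr, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
        s_next = max(0, min(N_STATES - 1, s + a))
        reward = 1.0 if s_next == GOAL else (-1.0 if s == 0 and a == -1 else -0.1)
        target = reward + gamma * max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += lr * (target - Q[(s, a)])   # error-driven update
        s = s_next

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})  # learned policy
```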
Moreover, vulnerability to black swan failures can be partially mitigated by deliberately emphasizing difficult or rare failure modes during training. For instance, hard negative mining in contrastive learning shifts optimization away from easy, frequent examples and toward boundary cases where the model is most likely to be wrong [9]. While such hard negatives are not true black swans in Taleb’s strict sense, they function as stress tests that approximate rare, high-impact failures, forcing the model to internalize sharper distinctions and reducing overconfidence outside normal regimes.
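To show what this reweighting looks like in practice, here is a simplified PyTorch sketch in the spirit of [9], not the authors’ exact estimator; the function name, the temperature and concentration parameters, and the random embeddings are my own illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(anchor, positive, negatives, tau=0.5, beta=1.0):
    """Simplified InfoNCE-style loss that up-weights hard (high-similarity) negatives.

    Negatives that look most like the anchor receive exponentially larger weight,
    pulling optimization toward boundary cases rather than easy, frequent ones.
    """
    sim_pos = F.cosine_similarity(anchor, positive, dim=-1) / tau                   # (B,)
    sim_neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / tau     # (B, K)

    weights = torch.softmax(beta * sim_neg, dim=1)                                  # harder => heavier
    neg_term = (weights * sim_neg.exp()).sum(dim=1) * sim_neg.size(1)               # reweighted negatives

    return -torch.log(sim_pos.exp() / (sim_pos.exp() + neg_term)).mean()

# Usage with random embeddings (batch of 8, 16-dim, 32 negatives per anchor):
anchor, positive = torch.randn(8, 16), torch.randn(8, 16)
negatives = torch.randn(8, 32, 16)
print(hard_negative_contrastive_loss(anchor, positive, negatives).item())
```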

[1] A. Clark, Surfing Uncertainty: Prediction, Action, and the Embodied Mind. New York, NY, USA: Oxford University Press, 2016.
[2] J. Hohwy, The Predictive Mind. Oxford, UK: Oxford University Press, 2013.
[3] N. N. Taleb, The Black Swan: The Impact of the Highly Improbable. New York, NY, USA: Random House, 2007.
[4] N. N. Taleb, Antifragile: Things That Gain from Disorder. New York, NY, USA: Random House, 2012.
[5] R. Liu, B. Becerik-Gerber, and G. M. Lucas, “Effectiveness of VR-based training on improving occupants’ response and preparedness for active shooter incidents,” Safety Science, vol. 164, p. 106175, Aug. 2023, doi: 10.1016/j.ssci.2023.106175.
[6] I. Gorbanev et al., “A systematic review of serious games in medical education: quality of evidence and pedagogical strategy,” Med. Educ. Online, vol. 23, no. 1, p. 1438718, Dec. 2018, doi: 10.1080/10872981.2018.1438718.
[7] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015, doi: 10.1038/nature14236.
[8] OpenAI et al., “Dota 2 with Large Scale Deep Reinforcement Learning,” arXiv preprint arXiv:1912.06680, Dec. 2019. [Online]. Available: https://arxiv.org/abs/1912.06680
[9] J. Robinson, C. Y. Chuang, S. Sra, and S. Jegelka, “Contrastive Learning with Hard Negative Samples,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2021. [Online]. Available: https://arxiv.org/abs/2010.04592

Week 2

2026 Winter, COGSCI 600 W2: Prof. Paul Thagard
AI Boom or Doom?
Summary by Max Ku

Whether AI spells boom or doom is a philosophical question about what AI is and how humans should deal with it. AI already helps in medicine [1], education [2], and research [3], but these benefits are not the main issue. One key argument is that AI does not care about people. Caring is an emotional response that comes from having a body and physical feelings [4]. AI has none of these. Even when AI sounds caring or empathetic, it does not actually feel anything. Because of this, AI cannot truly be a partner or a therapist. I find this claim tricky, because humans themselves often act as if they care without genuinely feeling it. Politeness, professional empathy, and social manners are common in daily life. If people accept this kind of “performed” care from other humans, then why should it matter that AI’s care is not real? I believe one reason is that AI’s caring behavior comes from imitation rather than experience. Most AI systems are trained mainly on text, learning patterns from how people talk and respond to each other. In this sense, AI resembles infants who learn language by copying others, but without the bodily experiences, emotions, or consequences that shape human understanding. Humans, on the other hand, can feel emotions when hearing or speaking these words [5, 6].
Another issue raised is power without concern. Some suggest stopping AI research or relying on regulation by governments and global organizations, but regulation has often been ineffective in the past, as seen with nuclear weapons, pollution, and drugs. In my view, rather than relying mainly on regulation, AI alignment may be a more effective approach. Today, many researchers are actively working on AI ethics [7], alignment, and safety [8], making it a rapidly growing field. These efforts aim to reduce harm, improve reliability, and better align AI behavior with human values. AI is being studied before the worst consequences occur, which I see as a hopeful sign. I also think an important direction is the move toward more modalities: instead of learning only from text, newer AI systems are trained on images, audio, video, and interaction with the physical world. In my view, multimodal learning can help AI better understand context, human behavior, and real-world consequences rather than just copying language patterns. While this does not give AI emotions, it can make systems more grounded and more aware of how their actions affect people. My takeaway is that alignment and safety research are not perfect solutions, but they represent meaningful progress and offer practical ways to reduce risks without stopping innovation entirely.

[1] J. Abramson et al., “Accurate structure prediction of biomolecular interactions with AlphaFold 3,” Nature, vol. 630, pp. 493–500, 2024, doi: 10.1038/s41586-024-07487-w.
[2] M. Ku, T. Chong, J. Leung, K. Shah, A. Yu, and W. Chen, “TheoremExplainAgent: Towards multimodal explanations for LLM theorem understanding,” arXiv preprint arXiv:2502.19400, 2025.
[3] J. Tang, L. Xia, Z. Li, and C. Huang, “AI-Researcher: Autonomous scientific innovation,” arXiv preprint arXiv:2505.18705, 2025.
[4] L. Ghanbari-Afra, M. Adib-Hajbaghery, and M. Dianati, “Human caring: A concept analysis,” J. Caring Sci., vol. 11, no. 4, pp. 246–254, Aug. 2022, doi: 10.34172/jcs.2022.21.
[5] D. R. Perszyk and S. R. Waxman, “Linking language and cognition in infancy,” Annu. Rev. Psychol., vol. 69, pp. 231–250, 2018, doi: 10.1146/annurev-psych-122216-011701.
[6] P. Graham, Hackers & Painters: Big Ideas from the Computer Age. Sebastopol, CA, USA: O’Reilly Media, 2004.
[7] T. Korbak et al., “Chain of thought monitorability: A new and fragile opportunity for AI safety,” arXiv preprint arXiv:2507.11473, 2025.
[8] Y. Y. Chiu et al., “MoReBench: Evaluating procedural and pluralistic moral reasoning in language models, more than outcomes,” arXiv preprint arXiv:2510.16380, 2025.

Week 3

2026 Winter, COGSCI 600 W3: Prof. Roxane Itier
The perceptual roots of the social brain: on the importance of the eyes and fixation location on face and emotion perception.
Summary by Max Ku

Early behavioral studies suggest that the eyes play a special role in social perception. Humans are highly sensitive to eye visibility and contrast, supporting the idea that the eyes act as an important cue for detecting and understanding faces [1]. This naturally raises the question of whether this perceptual importance is also reflected at the neural level: is there a dedicated “eye detector” in the brain? Electrophysiological evidence provides partial support for this hypothesis. The N170, a face-sensitive event-related potential (ERP) component peaking at ~170 ms, is typically modulated by face inversion; however, it has been proposed that the N170 observed for inverted faces is driven largely by eye processing [2]. To further examine this idea, later studies compared responses to isolated facial features with responses to whole faces. These studies showed that isolated eyes can produce strong neural responses, but these responses differ from those evoked by complete faces, indicating that eyes alone are not processed in the same way as full faces [3]. Other work highlights the importance of overall face structure. Removing or altering the face outline significantly reduces face-selective neural responses, even when key internal features such as the eyes remain visible [4, 5]. Together, these findings suggest that early face perception depends on interactions between salient facial features, especially the eyes, and global face configuration, rather than on a single eye-specific detector.
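For readers less familiar with ERP measures, here is a toy sketch of my own (not data or code from the cited studies) of how a component like the N170 is typically quantified: epochs time-locked to face onset are averaged, and the most negative deflection is located in a window around 170 ms. The sampling rate, trial count, and simulated waveform are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 500                                     # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1 / sfreq)         # -100 ms to 400 ms around stimulus onset
n_trials = 80

# Simulated single-trial EEG: noise plus a negative deflection near 170 ms.
n170_shape = -4e-6 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
epochs = rng.normal(0.0, 5e-6, size=(n_trials, times.size)) + n170_shape

erp = epochs.mean(axis=0)                       # averaging cancels trial-by-trial noise
window = (times >= 0.13) & (times <= 0.21)      # search window around the expected latency
peak_idx = np.argmin(erp[window])
latency_ms = times[window][peak_idx] * 1000
amplitude_uv = erp[window][peak_idx] * 1e6
print(f"N170-like peak at {latency_ms:.0f} ms, amplitude {amplitude_uv:.1f} µV")
```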
I feel the interpretation of eye-specific mechanisms remains controversial. Many electrophysiological studies rely on relatively small sample sizes, often fewer than 50 participants, which limits statistical power and reproducibility. Data collection is also time-consuming, as participants typically spend several hours in the laboratory. In addition, obtaining stable and high-quality electroencephalography (EEG) signals is technically challenging, further complicating interpretation. As a result, the existence of eye-specific detectors versus more distributed face-processing mechanisms remains an open question. These insights also resonate with a related phenomenon: people often report an intuitive ability to tell whether a face image is AI-generated, even though they cannot clearly explain why. Eye-tracking studies show that viewers attend differently to the eye region when looking at real versus AI-generated faces [6, 7]. Taken together, these results suggest that judgments of AI-generated faces may arise from subtle mismatches between highly salient features, particularly the eyes, and overall facial structure, rather than from a single obvious visual artifact.
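To make the power concern concrete, here is a rough calculation of my own (the effect size and sample sizes are assumptions, not figures from the cited work), using statsmodels to compute the power of a within-subject t-test for a medium effect.
```python
from statsmodels.stats.power import TTestPower

# Power of a one-sample / paired t-test for a medium effect (Cohen's d = 0.5)
# at alpha = 0.05, across sample sizes typical of EEG studies.
analysis = TTestPower()
for n in (20, 30, 50, 100):
    power = analysis.solve_power(effect_size=0.5, nobs=n, alpha=0.05)
    print(f"n = {n:3d}: power = {power:.2f}")
# With 20-30 participants, power for a medium effect falls short of the
# conventional 0.8 target, which is one reason replication can be difficult.
```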

[1] H. Kobayashi and S. Kohshima, “Unique morphology of the human eye,” Nature, vol. 387, no. 6635, pp. 767–768, Jun. 1997, doi: 10.1038/42842.
[2] R. J. Itier, C. Alain, K. Sedore, and A. R. McIntosh, “Early face processing specificity: It’s in the eyes!,” J. Cogn. Neurosci., vol. 19, no. 11, pp. 1815–1826, 2007, doi: 10.1162/jocn.2007.19.11.1815.
[3] K. B. Parkington and R. J. Itier, “One versus two eyes makes a difference! Early face perception is modulated by featural fixation and feature context,” Cortex, vol. 109, pp. 35–49, 2018, doi: 10.1016/j.cortex.2018.08.025.
[4] S. B. Winward, J. Siklos-Whillans, and R. J. Itier, “Impact of face outline, parafoveal feature number and feature type on early face perception in a gaze-contingent paradigm: A mass-univariate re-analysis of ERP data,” NeuroImage: Reports, vol. 2, no. 4, 2022.
[5] K. B. Parkington and R. J. Itier, “From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces,” Brain Res., vol. 1722, Art. no. 146343, 2019, doi: 10.1016/j.brainres.2019.146343.
[6] J. Huang, S. Gopalakrishnan, T. Mittal, J. Zuena, and J. Pytlarz, “Analysis of human perception in distinguishing real and AI-generated faces: An eye-tracking based study,” arXiv preprint arXiv:2409.15498, 2024.
[7] J. Vaitonytė, P. A. Blomsma, M. Alimardani, and M. M. Louwerse, “Realism of the face lies in skin and eyes: Evidence from virtual and human agents,” Comput. Human Behav. Rep., vol. 3, Art. no. 100065, 2021, doi: 10.1016/j.chbr.2021.100065.

Week 4

2026 Winter, COGSCI 600 W4: Dr. Samira Rasouli
Social Robots
Summary by Max Ku

Social robots are designed to interact with humans in human-centric ways and to operate within human environments [1]. Unlike industrial robots or purely virtual agents, social robots engage users through interpersonal communication, including verbal, nonverbal, and affective modalities. Two defining characteristics are physical presence and multimodal interaction, which distinguish social robots from screen-based agents and contribute to higher engagement and perceived social presence [2]. A key application domain for social robots is healthcare, particularly robot-assisted therapy. In this context, social robots are used to support users with the goal of improving psychological well-being. While these applications show promise, they also raise ethical and conceptual concerns, such as the appropriateness of machine-mediated empathy and the potential for emotional dependency.
Research in this area is situated within Human-Robot Interaction (HRI), an interdisciplinary field that examines how humans and robots communicate, collaborate, and interact in social settings. Three overlapping perspectives are commonly adopted: robot-centered approaches that emphasize robot behavior and capabilities, human-centered approaches that prioritize user experience and acceptance, and robot cognition-centered approaches that focus on reasoning and decision-making. Human-Centered Design (HCD) provides a foundational methodology for developing social robots, emphasizing the identification of user needs, tasks, and contexts of use, followed by iterative prototyping with stakeholder involvement [3]. This approach is particularly relevant in mental well-being applications, such as designing social robots to support university students. Studies on social anxiety suggest that individuals with higher social anxiety often experience less anticipatory anxiety when interacting with robots than with humans [4], and users tend to prefer animal-like robots over humanoid or purely virtual agents. I think this is because humanoid robots invite scrutiny due to the uncanny valley [5], whereas animal-like robots invite tolerance.
From my own perspective on the challenges in HRI research, AI used in social robots may simulate empathy and social presence but does not possess genuine emotions, raising ethical concerns about deception and emotional over-attachment, particularly for vulnerable users. Although such extreme outcomes have not been reported for embodied social robots, related incidents involving conversational AI systems and subsequent lawsuits following user deaths highlight the potential risks of emotionally engaging artificial agents and the need for careful ethical safeguards in HRI design [6]. Evaluating real-world impact is also challenging due to the subjective nature of mental well-being and difficulties in conducting reliable long-term assessments. Moreover, user responses vary widely across individuals and cultures, making one-size-fits-all designs impractical. Finally, reliance on sensitive personal data introduces privacy and trust concerns [7].

[1] C. Breazeal, K. Dautenhahn, and T. Kanda, “Social robotics,” in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Cham, Switzerland: Springer, 2016, pp. 1935–1972, doi: 10.1007/978-3-319-32552-1_72.
[2] J. Li, “The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents,” Int. J. Hum.-Comput. Stud., vol. 77, pp. 23–37, 2015, doi: 10.1016/j.ijhcs.2015.01.001.
[3] J. Kirakowski and N. Bevan, Handbook of User-Centred Design, Telematics Applications Project IE, Information Engineering Usability Support Centres, Final Version, Feb. 1998, 130 pp. [Online]. Available: https://uxp.ie/INUSE_Handbook_of_UCD.pdf
[4] T. Nomura, T. Kanda, T. Suzuki, and S. Yamada, “Do people with social anxiety feel anxious about interacting with a robot?,” AI & Soc., vol. 35, no. 2, pp. 381–390, 2020, doi: 10.1007/s00146-019-00889-9.
[5] M. Mori, K. F. MacDorman, and N. Kageki, “The uncanny valley [From the field],” IEEE Robot. Autom. Mag., vol. 19, no. 2, pp. 98–100, Jun. 2012, doi: 10.1109/MRA.2012.2192811.
[6] M. Cunningham, “ChatGPT served as ‘suicide coach’ in man’s death, lawsuit alleges,” CBS News, Jan. 15, 2026. [Online]. Available: https://www.cbsnews.com/news/chatgpt-lawsuit-colordo-man-suicide-openai-sam-altman/
[7] K. Matheus, R. Ramnauth, B. Scassellati, and N. Salomons, “Long-term interactions with social robots: Trends, insights, and recommendations,” J. Hum.-Robot Interact., vol. 14, no. 3, Art. no. 55, pp. 1–42, Jun. 2025, doi: 10.1145/3729539.

Week 5

2026 Winter, COGSCI 600 W5: Dr. Nathan Haydon

Peircean Semiotics
Summary by Max Ku

In Peircean semiotics, meaning is grounded in practice and inference. To understand a concept such as “diamond” is to know how it can be identified when put to the test. Meaning is therefore tied to the conditions under which something would count as evidence, rather than to how an object merely appears or is verbally described. A concept is universal rather than particular. It does not refer to this specific horse, chair, or water bottle, but to what allows us to recognize any instance of those things across situations. Concepts are inherently inferential: to have a concept is to know what follows from it, what to expect, and how it connects to other concepts. In this sense, a concept always includes implicit expectations about experience and possible action.

Belief, for Peirce, grows out of doubt. Doubt disrupts our usual ways of thinking, while belief is a stable tendency to think and act in certain ways. Inquiry is the process that turns doubt into belief through inference. This happens in three steps: abduction proposes possible explanations, deduction works out what should follow from them, and induction checks those expectations against experience. Peirce also explains meaning through his theory of signs [1]. Even an ordinary moment like daydreaming involves a flow of feeling, behavior, and thought. Feelings involve immediate qualities and similarities, behavior involves reactions and signals, and thought involves symbols that follow rules. Signs connect these streams, allowing experience, action, and thinking to work together. Meaning emerges from this ongoing interaction, where concepts shape expectations and are constantly checked against experience.
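As a playful sketch of my own (not a formalization from the readings), the three-step inquiry cycle can be rendered as a small loop, reusing the diamond example: abduction proposes candidate explanations, deduction derives what each would predict, and induction keeps only the hypotheses whose predictions survive experience. The hypotheses and observations here are invented for illustration.
```python
# Observed: the object was struck against glass and the glass got scratched.
observations = [("struck_against_glass", "glass_scratched"),
                ("struck_against_glass", "glass_scratched")]

# Abduction: candidate explanations for the surprising fact.
hypotheses = {
    "it_is_a_diamond": lambda test: "glass_scratched" if test == "struck_against_glass" else None,
    "it_is_ordinary_glass": lambda test: "it_shattered" if test == "struck_against_glass" else None,
}

surviving = dict(hypotheses)
for test, outcome in observations:
    for name, predict in list(surviving.items()):
        expected = predict(test)      # Deduction: what should follow if the hypothesis holds.
        if expected != outcome:       # Induction: check the expectation against experience.
            del surviving[name]       # Beliefs that fail the test are abandoned.

print("beliefs that survive inquiry:", list(surviving))   # -> ['it_is_a_diamond']
```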

From an AI perspective, Peirce’s ideas feel more like a diagnosis of what current systems lack. Much of today’s AI can produce answers but does not truly doubt itself: it rarely recognizes uncertainty, revises its beliefs, or tests its own expectations. Peirce’s view that belief arises from doubt helps explain why uncertainty, exploration, and error-driven inquiry are essential to real intelligence. He also reframes what it means for an AI system to “understand” a concept. Understanding is not storing a definition or embedding, but forming hypotheses, predicting consequences, and checking those predictions against experience, as in world models [2, 3]. His theory of signs also implicitly reveals why symbol-heavy AI struggles with grounding: models are strong at language and abstraction but weak at connecting symbols to perception and action [4]. Peirce suggests that meaning emerges only when feeling, behavior, and thought are linked through experience and consequence. Seen this way, AI should be evaluated not just by hard-coded benchmarks, but by whether its concepts can survive inquiry: whether the system can notice when it is wrong, update itself with new evidence, and use its representations to guide expectation and action in the world.

[1] C. S. Peirce, “What is a sign?,” in The Essential Peirce: Selected Philosophical Writings (1893–1913), vol. 2, N. Houser and C. Kloesel, Eds. Bloomington, IN, USA: Indiana Univ. Press, 1998, pp. 4–10.
[2] P. Fung et al., “Embodied AI Agents: Modeling the World,” 2025, arXiv:2506.22355. [Online]. Available: https://arxiv.org/abs/2506.22355
[3] D. Ha and J. Schmidhuber, “World Models,” 2018, arXiv:1803.10122. [Online]. Available: https://arxiv.org/abs/1803.10122
[4] S. Harnad, “The Symbol Grounding Problem,” 1999, arXiv:cs/9906002. [Online]. Available: https://arxiv.org/abs/cs/9906002