Thursday, 24 July 2025

Affective Image Classification


This paper by Machajdik and Hanbury presents a pioneering approach to the field of affective computing, focusing specifically on the task of classifying images based on the emotions they evoke. Unlike traditional models that rely heavily on low-level image features such as color or texture, the authors propose a framework that integrates both low-level visual features (colorfulness, saturation, edge density) and high-level semantic features derived from image content and composition, including objects, scenes, and artistic elements. Using a combination of support vector machines (SVMs) and a carefully constructed dataset labeled with emotional categories such as happiness, sadness, fear, and disgust, they demonstrate that this dual-feature approach significantly enhances classification accuracy.

The research leverages psychological models of emotion, particularly Ekman’s basic emotions and Plutchik’s wheel, to anchor the categories, ensuring conceptual robustness. A key insight is that emotional perception is shaped not just by raw visual data but also by contextual and symbolic meanings, which machines must learn to interpret. For example, the presence of nature scenes correlates strongly with positive emotions, while dark indoor settings are often linked to negative affect.

The study's results open new avenues for emotion-aware multimedia applications, from personalized content delivery to mental health monitoring, suggesting that machines can be trained to recognize and respond to human affective states through visual cues. Ultimately, the paper underscores the necessity of bridging computational vision and psychological theory to advance affective computing as a discipline.
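To make the low-level side of the pipeline concrete, here is a minimal sketch of how features like colorfulness, mean saturation, and edge density could be computed from an RGB image array. This is not the authors' exact implementation: the colorfulness formula assumed here is the Hasler–Süsstrunk metric, the saturation is a simple HSV-style (max − min)/max, and the edge density is a gradient-magnitude threshold; the function names and the threshold value are illustrative choices, not from the paper.

```python
import numpy as np

def colorfulness(img):
    # Hasler-Suesstrunk colorfulness metric (assumed here as the concrete
    # definition of the paper's "colorfulness" feature).
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def mean_saturation(img):
    # HSV-style saturation per pixel: (max - min) / max, averaged over the image.
    mx = img.max(axis=-1).astype(float)
    mn = img.min(axis=-1).astype(float)
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    return float(s.mean())

def edge_density(img, thresh=30.0):
    # Fraction of pixels whose gray-level gradient magnitude exceeds a
    # threshold (thresh=30 is an arbitrary illustrative value).
    gray = img.astype(float).mean(axis=-1)
    gy, gx = np.gradient(gray)
    return float((np.hypot(gx, gy) > thresh).mean())

def features(img):
    # Stack the three low-level features into one vector; in a full pipeline
    # these vectors would be fed to a classifier trained on emotion labels.
    return np.array([colorfulness(img), mean_saturation(img), edge_density(img)])

# Two toy images: random saturated colors vs. a flat mid-gray image.
rng = np.random.default_rng(0)
colorful = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
gray = np.full((64, 64, 3), 128, dtype=np.uint8)

print(features(colorful))
print(features(gray))   # all three features are 0 for a flat gray image
```

In the paper's setup, feature vectors like these (together with the high-level semantic features) would be the input to the emotion classifier; the toy comparison above just shows that the features separate a vivid image from a flat one.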


Machajdik, J. and Hanbury, A. (2010) ‘Affective image classification using features inspired by psychology and art theory’, Proceedings of the 18th ACM International Conference on Multimedia, pp. 83–92. https://doi.org/10.1145/1873951.1873965