In the mid-twentieth century, the convergence of visual technologies and military needs catalysed a radical transformation in the logic of image-making, giving rise to what would later be called operational images: visual entities that do not merely represent but actively perform perception. Early artificial neural networks, developed through projects such as the CIA's Mark I Perceptron, sought to emulate aspects of human visual cognition in order to automate target recognition within indistinct, data-rich environments. This shift rendered perception a domain of technical labour, delegating interpretive tasks to machines designed to collapse the delay between seeing and acting. The foundational influence of James J. Gibson and his ecological theory of vision is evident here: perception is not passive reception but a spatiotemporal negotiation shaped by one's embodied relation to objects. In military contexts, this meant preparing human operators, such as pilots and gunners, to respond reflexively to visual cues, thereby forging a seamless interface between recognition and aggression. Simultaneously, training programmes used photographic and cinematic image datasets to instil in soldiers a predictive, automatised posture; images became not just illustrations but instruments of behavioural conditioning. The perceptron thus did not merely process data but instantiated a machinic embodiment of institutional vision, in which the line between human and machine blurred. Far from displacing the human, these systems incorporated human perception as an infrastructural element, producing a military epistemology rooted in speed, prediction, and preemption. In this way, operational images inaugurated a new regime of machine-mediated vision in which looking became indistinguishable from targeting.
Irwin, J.A. (2024) ‘Artificial Worlds and Perceptronic Objects: The CIA’s Mid-century Automatic Target Recognition’, Grey Room, 97, pp. 6–35.