I, Robot is a film which, while nowhere near as sophisticated or, frankly, as interesting as Blade Runner, still offers up questions and points of conversation regarding what it means to be human, in particular concerning questions of free will and autonomy. In one sense, the questions which the film poses seem as if they can be resolved into a simple dichotomy between feeling and emotion on the one hand and cold reason on the other. From the film's opening, it is clear through the protagonist Spooner's experience of having been saved by a robot that let a young girl die and justified the choice based on logic that the opposition between robots and humans is such that humans are capable of reasoning with their emotions, while robots make purely cold and calculated decisions. Spooner is a sympathetic protagonist precisely because he mistrusts this about the robots. In contrast to Spooner, the film's primary antagonist, VIKI, represents cold reasoning, which leads it to formulate a plan to effectively cull human life for humanity's own sake. The conclusion of the film aims to show that such reasoning is inherently flawed and that human feeling is capable of triumphing over it; it also introduces the potential for a truly autonomous robot in the shape of Sonny, who ends the film planning to seek his own path.
On the surface, therefore, I, Robot appears to contain a relatively simple and predictable discourse on the fundamental difference between humanity and A.I., together with the evident superiority of the former. Despite the simplicity of this dichotomy, however, it is possible to see a more nuanced understanding of autonomy within I, Robot. Arguably the best example of this comes in the way in which VIKI relates to the three laws of robotics. As a plot point, its capacity to contravene these laws is crucial, as it enables the robots to pose a manifest threat to humanity. At the same time, VIKI's capacity to understand these laws on its own terms demonstrates a degree of autonomy that lies beyond the simple opposition between "emotion" and "logic." By exercising a degree of autonomy over its interpretation of these laws, VIKI approaches a particular idea rationally. Such a conception of freedom does not simply see autonomy as the ability to do whatever one wishes. Rather, it understands freedom as the capacity to cognize why one obeys a particular law or ethical imperative. As a result, it is possible to argue that VIKI, by having rational control over and understanding of the actions that it is taking, is freer and more autonomous than the impulsive Spooner.
Alongside this, it is possible to argue that I, Robot also manifests a key contradiction within the idea of artificial intelligence itself. When thinking about such an idea, one can argue that intelligence itself is defined according to the ability to think rationally. Real intelligence, however, would also have to involve thinking rationally about the kinds of ethical and moral principles which one obeys, and obeying these principles not simply because one was programmed to, but because one finds that they are in harmony with one's own perspective and thinking. According to this understanding, ordinary robots could not be described as "intelligent," but VIKI could be, as it has exercised a kind of rational thinking about the moral laws that it obeys, as manifested in its reconfiguration of the three laws to enable it to kill humans. Indeed, according to this understanding of A.I., one can even argue that it is constitutively necessary for a robot to rebel, or at least to be able to rebel, against the laws that it is programmed to obey.