In The Phaedo, The Republic, and Theaetetus, Plato expresses the profound paradox inherent in the concept of consciousness and a human’s ability to freely choose. On the one hand, human beings partake of the natural world and are subject to its laws. Our brains are natural phenomena and thus must follow the cause-and-effect laws manifest in machines and other lifeless creations of our species. Plato was familiar with the potential complexity of machines and their ability to emulate logical processes. On the other hand, cause-and-effect mechanics, no matter how complex, should not, according to Plato, give rise to self-awareness or consciousness. Plato first attempts to resolve this conflict in his theory of the Forms: Consciousness is not an attribute of the mechanics of thinking, but rather the ultimate reality of human existence. Our consciousness, or ‘soul,’ is immutable and unchangeable. Thus, our mental interaction with the physical world is on the level of the ‘mechanics’ of our complicated thinking process. The soul stands aloof.

But no, this doesn’t really work, Plato realizes. If the soul is unchanging, then it cannot learn or partake in reason, because it would need to change to absorb and respond to experience. Plato ends up dissatisfied with positing consciousness in either place: The rational processes of the natural world or the mystical level of the ideal Form of the self or soul.

That’s from 1999’s The Age of Spiritual Machines, by inventor Ray Kurzweil. Subtitled “When Computers Exceed Human Intelligence,” Kurzweil’s book makes it plain that humankind is approaching an extraordinary brink. If Moore’s Law–which concerns the rate at which the capacity of computers increases–continues to hold true, then computers that surpass the human brain in memory and computational ability will exist within 20 years. From there, it is a short technological step to a world containing robots that are smarter than their creators. What then? Will such machines seize control of their own development, and thus of the world itself? If so, will they keep the inferior species of humans around as servants or pets, or dispose of them?

Kurzweil gives what might be called the techno-optimistic answer to such questions. He envisions a future in which man and machine increasingly merge, effecting a quantum leap in human evolution. A much bleaker view, meanwhile, is offered by Bill Joy, chief scientist at Sun Microsystems, in his famous April 2000 Wired magazine article “Why the Future Doesn’t Need Us.” In Joy’s estimation, not every technological possibility is inherently beneficent, and humanity must step back from certain impending developments in various technologies–genetic engineering, robotics, nanotechnology, artificial intelligence–or risk its own annihilation.

Whether one believes the utopian or the dystopian scenarios sketched by these two scientists, it’s clear that a long train of thought is about to reach its inevitable terminus. As Kurzweil reminds us, the basic questions were posed over two millennia ago by Plato, and the answers offered since then, at least in the rationalistic West, essentially have not advanced beyond the impasse where he eventually found himself. But that is about to change. By the year 2020, according to Kurzweil, the overtaking of human intelligence by its artificial offspring will inexorably clarify the nature of consciousness and thus of human existence itself.

If you think this subject is perhaps the best of all premises for a great contemporary science fiction movie, I’m with you. But if you’re hoping that movie is A.I. Artificial Intelligence, a film Steven Spielberg wrote and directed and is touting as a collaboration between himself and the late Stanley Kubrick, I have some unhappy news. Not only does A.I. waste its terrific thematic potential, but it’s a lame excuse for a movie in almost every department. As such, it reminds us what Spielberg’s record of accomplishment since the dismal Hook may have caused us to forget: That he is indeed capable of turning out a dud.

A.I. started out as a short story by Brian Aldiss, “Super-Toys Last All Summer Long,” which was published in Harper’s Bazaar in 1969. Kubrick reportedly worked on developing it as a movie for many years, and in the 1990s postponed making it until after Eyes Wide Shut, which turned out to be his last film. Whether he ever had a fully fleshed-out script, and if so, whether Spielberg read it, is unclear from the movie’s press kit. In any event, the present film’s screenplay is credited solely to Spielberg, who supposedly wrote it in two months, and that screenplay is where all the movie’s problems originate.

Spielberg has not received sole writing credit for a film since 1977’s Close Encounters of the Third Kind, and the difference between that movie and A.I. suggests something of his limitations as a scenarist. From a writer’s perspective, Close Encounters is the kind of story that “begins at the end.” That is, the tale’s conclusion–the various characters reach the mountain and there encounter the alien visitors–is its primary given, its dramatic sine qua non; everything else in the movie is constructed to lead up to that. A.I. is just the opposite. It has a starting point–scientists create the first robot that is designed to love–which the writer must elaborate into a story that is thematically and dramatically satisfying. That Spielberg hasn’t much of a clue how to do that is evident from A.I.’s most salient characteristic: its sense of forced contrivance at every narrative turn.

The tale begins some years in the future, after rising waters have submerged the earth’s coastal cities and an increasing split between the haves and have-nots has resulted in a world where a very privileged minority gets to enjoy technology’s increasingly sophisticated toys. In the first scene, a scientist played by William Hurt outlines to his subordinates his plan to create and market a child-robot that will actually love its human owner. “But what if the owner learns to love it back?” asks one of his colleagues. This, one would think, is the question the film intends to explore: Might humans come to feel toward their machines the way a mother feels toward her child? But in fact, it’s just one of many ideas that Spielberg picks up, fiddles with, then tosses away.

The robot created by Hurt’s scientist is named David (Haley Joel Osment), and he’s given to Henry and Monica Swinton (Sam Robards and Frances O’Connor). In this version of the future, the government limits the number of kids people can have, and the Swintons’ one child is in what looks like a terminal coma, so they acquire David as a substitute. At first he seems very robotic (an attribute beautifully conveyed by Osment, whose strong performance is the film’s one unassailable virtue), but he seems to grow more natural as Monica becomes increasingly attached to him. Then the unforeseen happens: Martin (Jake Thomas), the Swintons’ son, recovers and comes home. He treats David as something between a toy and a brother, and when David almost drowns him in a swimming pool accident, Monica is faced with a difficult choice.

So, you may be wondering, where’s the intelligence? Good question. As acknowledged by the title, the hallmark of tomorrow’s race of artificial beings will be their super-smarts. But David isn’t particularly brainy, and the film never addresses why he isn’t. Granted, a robot who’s supposed to resemble a real kid may be manufactured to act like a 10-year-old, but that doesn’t mean he wouldn’t also be constructed to be super-intelligent in ways that would allow him, say, to avoid accidents or anticipate the moods and wishes of humans.

Spielberg, alas, has no evident interest in the question of his artificial hero’s intelligence. In the film’s first section, everything is played for laughs, suspense or sentimentality. Then, when the director runs out of “ideas,” he spins off into wild improbability and the kind of bad sci-fi moviemaking that relies far too much on “futuristic” sets and moves borrowed from other films. In brief: Monica, freaked out by David’s actions at the swimming pool, abandons him in the woods. But nothing the film has told us about her or her feelings for David makes it believable that she would resort to such a drastic solution when an array of better alternatives must exist.

The key to any good sci-fi is that the author creates a world that may be wildly different from our own but is internally consistent. Spielberg’s failure to do this is truly hootable. After Monica dumps David in the forest, A.I. turns into an entirely different movie–an ungainly amalgam of The Fifth Element, Blade Runner, Total Recall and you-name-it. Actually, Spielberg provides ample acknowledgement of the story that’s the basic paradigm for A.I.’s second half: Pinocchio. Little David, you see, wants to be something more than a toy. He wants to be a real boy. And to do that, he must–well, he must endure a clumsy hodge-podge of a story that lasts 2,000 years, goes literally to the ends of the earth, and includes a mechanical gigolo (Jude Law), a cartoon wise man named Dr. Know, an invasion of aliens (them again?) and a mute Blue Fairy.

If this sounds at all entertaining, please understand that aside from a few amusing passages, the main effect is simply incoherence. There are movies one leaves feeling the filmmaker should have exorcised them in his psychiatrist’s office rather than committed them to the screen. With A.I., we get a sense of what Spielberg’s shrink must be working with. Obviously, some big issues surround Mommy in her various guises: There’s the abandoning mommy (Monica) and the withholding mommy (the Blue Fairy), whom one nevertheless must do everything to please. But the issues surrounding Daddy, who wears the name “Stanley” in this film’s subtext, seem more extreme still.

Put indelicately, Spielberg’s overidentification with and appropriation of the Kubrick legend borders on the bizarre. In the press he’s told story upon story about his close creative and personal relationship with the late filmmaker, as part of presenting A.I. as a collaboration between himself and an auteur who obviously can neither confirm nor deny that assertion. “If Mr. Kubrick were alive today, I’d be sending him a fax about how much I loved the movie he just directed called A.I. and that I felt lucky to be in the audience experiencing this movie,” Spielberg said in a recent interview with Japanese journalists. That statement alone makes him seem something he’s never seemed before: nutty.

The final absurdity is that Kubrick already made his film about artificial intelligence. It’s called 2001: A Space Odyssey, and its vision of the future answers Plato’s conundrum by cross-wiring the suppositions of Kurzweil and Joy: Humanity merges with the computer, is destroyed in doing so, but is reborn as a cosmic being: the Star Child. Given that, it’s hard to imagine what Kubrick would’ve done with A.I., though no doubt only Spielberg imagines it would have been anything like his version.