The Orwellian Future of Script Analysis: Would You Trust the Fate of Your Screenplay to AI?

By Staff · May 23, 2017

Can a character’s likability be quantified? Think about that for a second. Imagine running your screenplay through a piece of software designed to break a story down into its component parts.

What would you expect this software to be able to reliably tell you? The number of scenes, perhaps? The number of locations from beginning to end? A potential budget – something gleaned, perhaps, from the number of “booms” and “pows” littered throughout your script?

What about the elements of a story that we tend to think of as slightly more subjective? A character’s emotional range, for example. Their level of happiness, compared to, say, their fear. Would you trust an artificial intelligence with that kind of value judgment over a professional reader or script consultant?

As it happens, this Orwellian hypothetical is already truth stranger than fiction. Just last month, The Black List, in partnership with ScriptBook, introduced an innovative AI capable of processing and evaluating screenplays against a series of metrics.

Some of these areas were practical – genre and rating, for example – while others, like character likability, seemed a considerable stretch. What are the criteria? How can artificial intelligence, regardless of sophistication, account for that? In fact, how can it make any objective claim about something as subjective as creativity? The answer is “it doesn’t.” At least, not in the traditional sense.

“I very much enjoyed your mid-point culmination, Dave.”

The AI introduced by The Black List relies on data culled from thousands of screenplays as the basis for its analysis. In other words, screenplays are essentially judged by how much or how little they deviate from the norm – though this still doesn’t explain how the quality of those norms is determined in the first place. And besides, are algorithmic, mathematical norms really the standard of judgment we ought to be applying to cinema?
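To see how this kind of norm-based scoring could work in principle – and this is a toy sketch, not a description of ScriptBook’s proprietary system – imagine extracting a few crude features from a script and flagging any that stray too far from the corpus average. Every feature name, statistic, and threshold below is invented for illustration:

```python
# Toy illustration of norm-deviation scoring. NOT ScriptBook's actual
# method (which is proprietary); all features and numbers are invented.
from statistics import mean, stdev

# A tiny stand-in for a reference corpus of analyzed screenplays.
CORPUS = [
    {"scenes": 110, "inciting_page": 12, "dialogue_ratio": 0.55},
    {"scenes": 95,  "inciting_page": 10, "dialogue_ratio": 0.60},
    {"scenes": 130, "inciting_page": 15, "dialogue_ratio": 0.48},
    # ... a real corpus would hold thousands of screenplays
]

def deviation_report(script: dict) -> dict:
    """Score each feature by its z-score against the corpus norm."""
    report = {}
    for feature, value in script.items():
        values = [s[feature] for s in CORPUS]
        mu, sigma = mean(values), stdev(values)
        report[feature] = (value - mu) / sigma if sigma else 0.0
    return report

my_script = {"scenes": 160, "inciting_page": 25, "dialogue_ratio": 0.30}
for feature, z in deviation_report(my_script).items():
    flag = "atypical" if abs(z) > 2 else "typical"
    print(f"{feature}: z = {z:+.2f} ({flag})")
```

Run against this toy corpus, the script above is flagged as atypical on every axis – which neatly captures the circularity at issue: deviation from the norm can be measured, but the “norm” is simply whatever the reference corpus happens to contain, and that tells us nothing about quality.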

Introduced as a $100 alternative to traditional screenplay coverage, the service was pitched as a way to provide standardized analysis similar to the measures increasingly employed by film studios, rather than as a tool to mathematically determine a film’s overall quality.

For writers with a genuine desire to take their screenplays to the next level, a human touch – one that combines skill with experience – is still the best option. It’s also why professional coverage services, like TheScriptLab’s own TSLNotes, exist in the first place. After all, what would an AI have to say about a screenplay like Taxi Driver? Travis Bickle isn’t exactly the most likable guy, yet the film is a classic – which is why putting too much stock in objective storytelling norms is likely to have a leveling effect on the medium as a whole.

Feedback is an essential part of the process, of course, but the highly subjective, nuanced nature of storytelling demands a more imperfect, human approach. Sure – a sophisticated algorithm might correctly identify that your inciting incident occurs 10 pages too late, but unless you’re solely in the business of making formulaic movies, this sort of feedback isn’t particularly useful. 
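For what it’s worth, the “inciting incident arrives 10 pages too late” critique is exactly the kind of check that is trivial to automate – which is part of the problem. A hypothetical version might look like the sketch below; the target page and tolerance are screenwriting rules of thumb, not anything drawn from the actual product:

```python
# Hypothetical structural beat check. The target page and tolerance are
# screenwriting folklore, not details of The Black List's actual tool.
EXPECTED_INCITING_PAGE = 12   # common rule-of-thumb target
TOLERANCE = 3                 # pages of slack before we complain

def check_inciting_incident(actual_page: int) -> str:
    """Flag an inciting incident that strays from the formula."""
    drift = actual_page - EXPECTED_INCITING_PAGE
    if abs(drift) <= TOLERANCE:
        return "Inciting incident lands within the conventional window."
    direction = "late" if drift > 0 else "early"
    return f"Inciting incident is {abs(drift)} pages too {direction}."

print(check_inciting_incident(22))  # "Inciting incident is 10 pages too late."
```

A dozen lines of code can enforce the formula; no amount of code can tell you when breaking the formula is the better choice.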

Still, it’s hard to escape the feeling that we’re living through some sort of dystopian scenario in which “good art” is determined by a team of computer overlords. A dramatic reaction, perhaps, but one that explains the severe backlash that prompted The Black List to dismantle the service a day after its introduction.

In an interview with Vice Motherboard, filmmaker Steven Tsapelas called the algorithmic approach into question, instead emphasizing the natural instincts of screenwriters.

“Nobody knows what’s going to hit or work. And when you talk about the likability of the character, how do you quantify that? There Will Be Blood has the least likable character of all time. But you love him, and you can’t quantify a Daniel Day-Lewis performance when you enter a script into this software.”

Logic dictates that a low score in a category as seemingly important as “protagonist likability” could result in an automated pass from a studio – especially if the overall quality of the screenplay isn’t taken into account. Is that really an innovation worth pursuing in an industry increasingly stifled by creative risk-aversion?

“Subjective decisions lead to box office failure,” declares a line from ScriptBook’s website. Fair enough, but shouldn’t there be more to a creative industry than a maxed-out bottom line?