Interview

Stargate | How Does Artificial Intelligence Write New Stargate?


“Don’t explain computers to laymen. Simpler to explain sex to a virgin.”

Robert A. Heinlein, The Moon Is a Harsh Mistress (1966)

As Lawrence explained in his mission report/captain’s log from the very heart of our Stargate AI Project, “I kind of understand it, but now we have this amazingly ambitious brief to try and push the boundaries [of artificial intelligence].”

Lawrence is way ahead of me on this. I used to run two science magazines, How It Works and All About Space, but the actual nuances of their content made my eyes glaze over. My strategy was pretty much to leave the ridiculously smart editors and their ridiculously smart teams to do the capital-s Science, whilst I supplied the capital-m Magazine. (Seriously, at the time there were two ongoing master’s degrees and one PhD on-site.)

I suspect I’m not alone in trying to understand exactly what it is that artificial intelligence is capable of generally, how it would approach writing a scene specifically, and just how impossibly hard it’s going to be to realize Brad Wright’s dream of a brand new Stargate script for his stars to perform this Fall.

[See also: Podcast | Brad Wright’s Conversations in Sci-Fi with David Hewlett]

Bot to the Future

“Robots will neither be common nor very good in 2014, but they will be in existence.”

Isaac Asimov, ‘Visit to the World’s Fair of 2014’, The New York Times (August 16, 1964)

“The most basic difference between AI and the kinds of computer programs that we’ve been using for longer is that AI does not follow specific instructions on how to solve a problem,” explains Janelle Shane, research scientist, custodian of the fascinating aiweirdness.com, and author of You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place.

“Instead, you give it the goal and it comes up with an approach to solving this problem. And you know, you may not even be able to tell exactly what its approach is, which can be useful if it’s a problem that you don’t know how to solve. If you don’t know how to write down specific instructions for, you know, ‘What does a bird look like?’ The beak, okay, well, what does the beak look like? How do you differentiate a beak from anything else that’s pointy, like almonds or something? And so the nice thing about working with AI for some of these applications is that you give it the goal – these are birds, these are not birds – and it can figure out these internal rules that help it do the task.”

This machine learning process lurks behind a number of apps or devices we use on a daily basis, most often focused on learning our habits to improve our customer experience like sycophantic pocket butlers (Siri, Alexa, Nest), or to flog us a load of unnecessary tat that matches the idle whimsy of our browsing history (Amazon, Facebook, eBay).

Netflix is a pretty good example of this that we’re all likely to have encountered, not only putting together ever more nuanced recommendations of things you might want to watch based on your habits but tweaking how it presents those recommendations to emphasize different elements. If you watch a ton of ‘80s comedies, you’ll get a goofy thumbnail for Stranger Things showing the lads in their Ghostbusters get-up. Watch a lot of superhero shows, you might get an intense Eleven with a nosebleed and Professor-X style telekinesis hands. It’s smart, but it’s ultimately a simple task only made challenging by the sheer volume of users and their data to wrestle with.

A selection of Netflix thumbnails from Stranger Things Season 1. | Netflix, 2016.

“These are the types of nuanced problems that we have a lot of data for, but we don’t have a good description that we can write for them. So one of the first commercial applications of AI was in language translation. Google Translate was able to figure out how to do this language stuff without a human having to sit down and write a detailed [decision] tree of how to do it. But then we also have AI doing really well in playing games. You may have heard of AI players beating humans in chess and Go and certain video games. And the thing about tasks like games is that they tend to be really, really narrow tasks. And that turns out to be kind of the differentiator between where AI is going to behave pretty seamlessly, and where it’s going to struggle and start to show mistakes.

“The narrower the chunk of a world that you can carve off for it to work with, the better.”

[See also: Stargate | How the New Script Project Began – A.I. Diary 1 by Lawrence Kao]

Rise of the Machines Learning

“It’s your fiction that interests me. Your studies of the interplay of human motives and emotion.”

Isaac Asimov, I, Robot (1950)

Whilst it’s an open secret in the industry that Riverdale is written entirely by predictive text, having artificial intelligence create a Stargate script is far more challenging. AI doesn’t just conjure something from nothing; it needs a diet of examples to remix.

“Like in other AI examples, you have to give it the goal first,” says Janelle. “And in this case, when you’re trying to generate text, usually the goal you give it is ‘Here is a bunch of text that we have from somewhere; try to predict which letters or which words come next in the sequence’. If you’ve given it a bunch of scripts as training data, then it may be fairly good at figuring out what comes after that in the script. If you give it cooking recipes instead as training data, it will definitely start adding vanilla and sautéing everything.”
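To make that training goal concrete, here is a toy sketch of the smallest possible version of “predict which words come next”: it simply counts which word most often follows each word in a few invented lines of training text. Real models like GPT-3 learn vastly richer statistics, but the objective has the same shape.

```python
from collections import Counter, defaultdict

# Invented training text, standing in for a pile of scripts or recipes.
training_text = (
    "jack opens the gate . sam opens the laptop . "
    "jack opens the gate . daniel reads the tablet ."
)

# Count, for every word, which words follow it and how often.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("opens"))  # "the" — the only word ever seen after it
print(predict_next("the"))    # "gate" — seen twice, vs. once for the others
```

Swap scripts for recipes as the training text and, exactly as Janelle says, the predictions swap too.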

For our Stargate AI project, coherence – or perhaps the illusion of coherence – is the Holy Grail. AI has a reputation for generating a surreal word salad and the title of Janelle’s book (You Look Like a Thing and I Love You) is inspired by one of a series of perplexing chat-up lines she had an AI generate. Without explaining the rules of innuendo or suggestion, the poor thing had no real comprehension of what made a chat-up line a chat-up line. 

“The idea is you don’t manually teach it anything, except by giving it lots of examples where the words ‘Oh, hello, how are you?’ are followed by another character saying, ‘Oh, I’m doing all right,’ or something like this. These kinds of formulaic things that happen all the time, it tends to pick up more quickly than more nuanced things like ‘No, this character is always angry’, or ‘This character is always uptight’. If there are enough examples of this character being angry or uptight, then the AI tends to start learning the words and phrases that go along with that, but you have to have a pretty sophisticated state-of-the-art AI to handle that at this point.”

I attempted to broach the subject with likable chat bot Kuki.

This is effectively what predictive text or chatbots do. They calculate the most likely next word in the sentence or the most likely response to a question, but there’s no real understanding of what is actually important about a sentence or story. That’s why you get things like Corridor Crew’s bizarre Avatar 2 script.

Obviously, we don’t want that, we’re here to celebrate Stargate, not drag it through a river of open sewage.

“You can sometimes tweak the settings. If it’s playing it safe, it’s just going with what it thinks is really likely to come next, and you may have a couple of characters having a circular conversation, repeating the same thing over and over again to each other. You can also tweak a setting that lets them depart into less likely territory, and the topics get more interesting and varied. But you know, it can tend to lose the thread of conversation, or introduce things that make no sense.”
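The setting Janelle describes is usually called temperature. A minimal sketch, with made-up word scores (a real model produces one score for every word in its vocabulary at each step): low temperature makes the sampler play it safe; high temperature lets it wander.

```python
import math
import random

# Invented scores for candidate next words; higher means "more likely".
next_word_scores = {"fine": 5.0, "complicated": 2.0, "glowing": 0.5}

def sample(scores, temperature):
    """Pick a word: low temperature plays it safe, high temperature wanders."""
    # Softmax with temperature: dividing scores before exponentiating
    # sharpens (low T) or flattens (high T) the distribution.
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    r = random.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

random.seed(0)
safe = [sample(next_word_scores, temperature=0.1) for _ in range(20)]
wild = [sample(next_word_scores, temperature=5.0) for _ in range(20)]
print(set(safe))  # low temperature: essentially always "fine"
print(set(wild))  # high temperature: a varied mix
```

The circular-conversation failure lives at the low end of that dial; the nonsense lives at the high end.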

[See also: Podcast | Brad Wright’s Conversations in Sci-Fi with Ben Browder]

Droid Rage

“You and I are real, Teddy, aren’t we?”
The bear’s eyes regarded the boy unflinchingly. “You and I are real, David.” It specialized in comfort.

Brian Aldiss, Super-Toys Last All Summer Long (1969)

“These AIs tend to have limited memory,” Janelle continues. “And back in the early days of generating text, when I started doing it for AIweirdness, they could only keep track of maybe a few words at a time. And so when I generated cookbook recipes, I would find that it would add the salt and pepper over and over again, having no idea it had done that before, or a recipe that might start out as a casserole at the beginning would turn into a cake by the end. It didn’t remember the title of the recipe or anything. This happens because the AI has extra stuff to keep track of when it’s trying to predict the next little chunk of text, and the more stuff it’s got to keep track of, the more memory-intensive that tends to be.

Kuki can’t keep her story straight and wouldn’t last five minutes in Making a Murderer.

“And so there are AIs nowadays that are getting better at that. There are different ways, like building in structured memory, where you’re looking at a bigger chunk in less detail and smaller chunks in more detail, and trying to use that to predict what comes next. And that’s where these AIs that I’m working with now, like GPT-2 and GPT-3, can start to use information that happened sentences and paragraphs ago, so the coherence has really improved.”
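That “limited memory” is a context window: when predicting the next word, the model only sees the most recent stretch of text. A toy illustration (the scene dialogue is invented; the eight-word window is for demonstration, though the principle matches real systems like GPT-2, whose window is 1,024 tokens):

```python
CONTEXT_WINDOW = 8  # illustrative; real models use hundreds or thousands of tokens

def visible_context(story_so_far, window=CONTEXT_WINDOW):
    """The only words the model 'remembers' when predicting the next one."""
    words = story_so_far.split()
    return words[-window:]

story = ("CARTER Sir the gate is dialing on its own "
         "O'NEILL Can you shut it down "
         "CARTER Not from here")
print(visible_context(story))
# The opening of the scene (the gate dialing on its own) has already
# fallen outside the window, so the model can no longer "see" it.
```

Hence casseroles turning into cakes: once the title slides out of the window, it may as well never have existed.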

Without getting trapped in Escher’s existential stairwell of wondering whether an AI could ever write anything that’s truly original (I mean, can humans ever write anything that’s truly original?), given a suitably sophisticated AI with a ton of memory, is it at least capable of writing a scene with a beginning, middle, and end, rather than an exchange of well-wishes?

Oh Kuki

“What it’s got to work with, most likely, are the sorts of opening and closing formulas. Like it knows kind of what the opening of a scene sounds like. If it’s got enough memory, it might know approximately how long it’s been in the scene. And [know that] it’s time to wrap it up and what kind of things you wrap up a scene with, and maybe the characters do walk off or say ‘I gotta go’ and they walk off stage. But making the words actually correlate with reality… that is a tough thing because it is trained to generate text that sounds likely based on what it’s seen on the internet. And it’s not being rewarded for generating things that match reality. So you may end up having the same door open twice without being shut because it’s not rewarded for keeping track. It just knows opening doors is likely.”
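The door problem can be made concrete with a toy world-state checker, which is exactly the kind of bookkeeping a pure next-word predictor doesn’t do. This is an invented illustration, not anything from the project:

```python
def find_state_errors(events):
    """Flag events that contradict the tracked world state (doors only)."""
    door_open = False
    errors = []
    for i, event in enumerate(events):
        if event == "door opens":
            if door_open:
                errors.append(f"line {i}: door opened while already open")
            door_open = True
        elif event == "door closes":
            door_open = False
    return errors

# A plausible failure: the model emits "door opens" twice because
# opening doors is likely text, not because any door is shut.
generated = ["door opens", "jack enters", "door opens", "sam exits"]
print(find_state_errors(generated))  # flags the second opening
```

The model is rewarded for likely words, not for keeping this register, so nothing in training ever pushes it to close the door.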

For our Stargate AI Project, this throws up a ton of challenges. Just as having AI parachute platitudes into the mouths of Jack and Sam doesn’t pass as ‘new Stargate’, nor does a script with no internal logic. We’ll sit through some nonsense in the name of fandom (Stargate Origins, we see you), but we at least expect our characters to be consistent with what we know of them and for them to advance a plot in a half-consistent manner. For us to consider this a success, it has to go well beyond everything we’ve seen artificial intelligence do up to this point.

“There are people trying to build in this kind of real-world correlation. But usually it’s in a really limited context, because of course there’s so much stuff that happens in the real world. Maybe you could train something to be a little bit more sensible about recipes, keeping track of the state of each of these ingredients. But trying to extend that to starships and planets and characters and their mental states? That’s a big problem.

“There are tricks that people are working with. That’s an active area of research: how do you get an AI to keep track of the story? And one of the ways of making that tell-a-story problem more narrow is to abstract away a lot of the story and say, ‘Okay, now you’re working with Main Character 1; he or she does Plot Generation Action 2.’ You pick from a list of different kinds of actions and you fill in the details later. It tends to work better, you tend to get more coherent stories, but you lose a lot of this freedom because now you’ve sort of restricted the number of characters and the kinds of outcomes that can happen…”
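The abstraction Janelle describes can be sketched as a template generator: the story is produced at the level of character slots and plot actions, with details filled in afterwards. The characters and actions below are invented for illustration:

```python
import random

# The story world, boiled down to slots the generator can combine.
characters = ["MAIN CHARACTER 1", "MAIN CHARACTER 2"]
plot_actions = [
    "discovers a hidden device",
    "argues with {other} about the plan",
    "repairs the malfunctioning gate",
]

def generate_beat(rng):
    """Produce one abstract story beat: a character performs a plot action."""
    who = rng.choice(characters)
    other = rng.choice([c for c in characters if c != who])
    action = rng.choice(plot_actions).format(other=other)
    return f"{who} {action}."

rng = random.Random(42)
for _ in range(3):
    print(generate_beat(rng))
```

Every beat is guaranteed coherent, because the generator can only recombine pieces that make sense, and that guarantee is exactly the lost freedom: it can never produce a character or an outcome that isn’t already on the list.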

You can follow our Stargate AI Project weekly on The Companion. Check back next week to see Janelle’s Stargate scene written by GPT-3 along with an explanation for how it works and why it’s not the answer to our prayers.


James is editor of The Companion. He has been “working in publishing” since the early 1990s when he made his own Doctor Who fanzine to sell in the school playground.

You can find him on Twitter @JDHoare
