The AI-Augmented Author. Writing With GPT-3 With Paul Bellow

How can authors use AI writing tools like GPT-3? What’s the best way to prompt the models to output usable text? Are there copyright issues with this approach?

Author Paul Bellow explains how he is using the tools and how authors need to embrace the possibilities rather than reject them.


In the intro, I talk about getting access to the OpenAI GPT-3 beta shortly after interviewing Paul and how I used ideas from this interview to generate prompts, plus my thoughts on the tool. You can find some of the sites built on top of GPT-3 at TheCreativePenn.com/AIWriting, plus ‘Will Artificial Intelligence Ever Write a Novel’ from thriller author Andrew Mayne. You can find all the AI-related episodes and book recommendations at TheCreativePenn.com/future.

Paul Bellow is a LitRPG author. He’s also the publisher of LitRPG Forum, LitRPG Reads, and LitRPG Adventures. A writer for over four decades, he’s currently tinkering with GPT-3 to create tools for authors and help his own writing too.

You can listen above or on your favorite podcast app, or read the notes and links below. The highlights and full transcript follow.

Show Notes

  • What is LitRPG?
  • What is OpenAI’s GPT-3?
  • Shifting your mindset to embrace AI tools rather than be scared of them
  • Ways to use prompts with GPT-3 and other Natural Language Generation tools in order to output coherent and useful text
  • Is it cheating to use AI tools to help write your book?
  • Copyright issues related to GPT-3 — more in Episode 519: Copyright law and blockchain for authors in an age of AI
  • How might AI tools for writing be used in the near future?

You can find Paul Bellow at LitRPGForum.com and on Twitter @LitRPGforum and you can find the D&D character backstory generator at LitRPGAdventures.com

Transcript of Interview with Paul Bellow

Joanna: Paul Bellow is a LitRPG author. He’s also the publisher of LitRPG Forum, LitRPG Reads, and LitRPG Adventures. A writer for over four decades, he’s currently tinkering with GPT-3 to create tools for authors and help his own writing too. Welcome to the show, Paul.

Paul: Hi. Thanks for having me. It’s good to be here.

Joanna: I’m so excited to talk to you today. As I was saying before, you’ve really helped me shift my own mindset around AI writing. We’re going to get into so much today.

Tell us a bit more about you and how you got into writing and, also, what is LitRPG?

Paul: I’ll start with the LitRPG first. It’s basically a genre. The term was coined back in 2010, I believe, by Russian authors who were putting out an anthology. And stories of this type, the basic trope being you’re trapped in a video game, or real people going inside a video game, have been around since at least 1978, I think, with ‘Quag Keep’ by Andre Norton.

It was probably around 2015 or so that American authors were kind of a little bit fed up with having to read translated works from Russian to English, which caused some problems sometimes. So, they figured, ‘Well, I can write this. I’m a gamer.’ And thus began the American LitRPG scene.

I missed the first year, 2015 to 2016; it was late 2016 when I came in. I was writing romance under a pen name at the time, and that’s when I discovered the genre. And I went all in, creating my community website at LitRPG Forum, doing a blog, and so on.

Then I was writing books in the genre and I got sidetracked with some video games and then this AI stuff. I started writing personally in the fourth grade. Pac-Man was big in the ’80s, and I was actually writing a Pac-Man fan fiction where he goes on a time travel adventure.

My fourth-grade teacher was so impressed she helped me bind the story into a book with… I think we used yarn and hole-punched pages and whatnot. I wish I still had it, but I lost it in the last 40 years during one of my moves or something.

But that kicked off my joy of writing. I grew up in a household where my father read a lot of science fiction. And, before he traded those in, I would go through those and read them and whatnot. So, over the years, I got interested in technology too.

I’ve always had, in the back of my mind, ‘What if there was an AI tool that could help my writing?’ I did a lot of research on various ideas that people have had, over the years, for natural language generation. It was 2019 that GPT-2 came out.

I got it up and running on a cloud server, and I think there was a data set of like 170 megabytes of old science-fiction pulp stories. I trained GPT-2 on that. Then I was playing around with other things. That’s how I actually stumbled across Janelle Shane. She’s really big in the AI space. And she did a blog post on using AI to create D&D character backstories.

Joanna: That’s Janelle Shane, who has a great blog at AI Weirdness. Her book is You Look Like a Thing and I Love You, which is fantastic; it talks a bit about this and was written around the GPT-2 time, which I think was 2019.

You’ve used some terms there that other people might not understand.

Can you talk about what GPT-2 was, and what GPT-3 now is, in case people haven’t really come across it before?

Paul: It stands for Generative Pre-trained Transformer, version 3. And basically, it’s a language model that uses a neural net. It basically, I guess, ‘reads’ a whole bunch of data, written words. And then it comes up with a matrix where it’s able to guess, based on all of the content that it’s read, what word is going to come next in a sequence. I don’t know if that helps any or if it makes it more confusing.

Joanna: Essentially, it is a massive autocomplete based on a data set. So, you mentioned there a data set of science-fiction novels. I believe GPT-3’s data set is, essentially, the whole of the internet.

Paul: Yeah. A lot of Reddit in there. I think they did some fiction novels. Now, the cool thing about GPT-2 was that you could fine-tune it. They have their general model that’s just built on all the text of the internet.

You could fine-tune it on a smaller data set, say, just fiction, or what I did was I just trained it on D&D character backstories. And then, if you fine-tune it, it becomes even better.
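[For the technically curious, here is a rough sketch of what fine-tuning the open GPT-2 model on a small custom text file can look like, using the Hugging Face transformers library. This is illustrative only, not Paul’s exact setup; the file name backstories.txt and the training settings are invented placeholders.]

```python
# Rough sketch: fine-tuning the open GPT-2 model on a small plain-text corpus
# with Hugging Face transformers. Illustrative only; "backstories.txt" and the
# training settings are placeholders, not Paul's actual pipeline.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the corpus into fixed-length training examples.
dataset = TextDataset(tokenizer=tokenizer, file_path="backstories.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="gpt2-backstories",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    save_steps=500,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("gpt2-backstories")
```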

Now, the problem with GPT-3 is that it’s so big there’s no way that I can fine-tune it on my own. And they’re not allowing everybody to do that yet, just the projects that are bigger and have a wide audience and stuff like that.

So, something I definitely want to look at in the future is actually fine-tuning GPT-3 to get it even better at either writing fiction or the D&D character backstories. Because right now, when I was setting up LitRPG Adventures, which is basically a series of generators that creates D&D character backstories… It does magic items, cities, taverns. It generates all this stuff for world-building.

What I had to do was actually write out a whole bunch of examples. GPT-3 works on a system where you can give it a zero-shot prompt. That means you basically just tell it, ‘Give me a list of 10 funny names for movies,’ and it will list them out.

You can do a one-shot where you give GPT-3 one example, ‘Here is a character backstory. Write me a new one.’ What I’ve found for the longer text, usually, you want to do a two-shot or even three-shot where you give it two or three different examples, and then it’s able to better recreate what you’ve shown it with the examples.
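[To make the ‘shots’ concrete, here is a rough sketch using the OpenAI Python client as it existed around the time of this interview. The example backstories and the API key are invented placeholders, not Paul’s actual prompts.]

```python
# Rough sketch of zero-shot vs. two-shot prompting against the GPT-3
# completions endpoint (OpenAI Python client, circa 2021). The backstories
# below are placeholders, not Paul's actual examples.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Zero-shot: just describe the task and let the model run with it.
zero_shot = "Give me a list of 10 funny names for movies:\n1."

# Two-shot: show two complete examples, then leave the third open.
two_shot = (
    "Character backstory:\n"
    "Thalia, an elf ranger, grew up guarding the border forests of her "
    "homeland and still distrusts cities.\n\n"
    "Character backstory:\n"
    "Borin, a dwarf cleric, lost his faith after a cave-in and now wanders "
    "in search of a sign.\n\n"
    "Character backstory:\n"
)

# The same call works for either prompt; the few-shot version usually gives
# longer, more consistent output for this kind of task.
response = openai.Completion.create(
    engine="davinci",
    prompt=two_shot,
    max_tokens=200,
    temperature=0.8,
)
print(response.choices[0].text)
```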

Joanna: Right. We’re going to get into some more sort of, ‘How do we use it?’ things in a minute. But I want to take it up a level because I’ve had people email me and I think people say,

‘Why would you even bother using these tools? You’re not a real writer if you’re going to use AI.’ What mindset do we need to approach this with?

Paul: I’ve run into a little bit of the same. I think people are just worried right now that AI is going to take over completely. There will be no more need for writers or anything like that.

But I, personally, think that we have at least another 5 or 10 years before the AI evolves enough that it’s able to do a novel, or even a short story, on its own. So, I think, over the next 5 or 10 years, you’re going to see a merging of human creativity enhanced by AI tools. And that’s how I see GPT-3.

It’s not like some magic button where I just press it and I get 1,000 words of perfectly clean text. It’s more that I guide GPT-3 along, and it’s more of a productivity tool, I think, at this point.

Joanna: I totally agree. I’ve tried a number of the tools that are based on it and it’s so interesting. This is really what I wanted to talk to you about: how do we basically talk to the machine?

How do we talk to GPT-3 or these tools to get the best output?

You mentioned there a list or a one-shot or a two-shot. Can we get into that in a bit more detail? Because, for example, you’ve mentioned character backstories, and I’ve had a go at your LitRPG one. And I was like, ‘Wow, this is amazing. I can get this whole backstory to a character.’

That’s something that I personally struggle with for my characters. I write thrillers. How much backstory do I really need? This is moving forward really fast, and all this stuff. So, that’s something I’m really interested in.

If I do have access to GPT-3, or I use a different tool, like ShortlyAI or InferKit, what would I type in order to get the type of thing I want?

Paul: What you’re talking about is what we’re calling a prompt. Basically, you feed a prompt into GPT-3, and then it gives output based on that.

I’ve been using it different ways for the character backstories, like I said. I give it two, sometimes three, examples. And then it’s just able to look at those two examples, base it on the gigabytes of data that it’s ingested, and it comes out with a really good copy.

It comes down to the old computer term too, ‘Garbage in, garbage out.’ You really have to be careful with the prompts because sometimes, it can go a little crazy if the prompt is off a little bit.

Now, for writing fiction, there are a couple of tools that are pretty interesting and will probably improve in the future. I’ve been just using the Playground application on the OpenAI website, and that’s just a big text box where you can type in a prompt, hit the button, and then you get an output.

How I’m using it is I usually set up a prompt with what I’m writing. So, that will be things like the characters in the scene, a little bit of an outline of what I want to be in this chapter. And then, usually, I will write about 100 to maybe 300 words on my own.

By giving GPT-3 about 500 words of prompt, I can just hit the button and it will continue writing the story using whatever I gave it.

One of the things I notice is that, if you don’t list the characters, sometimes GPT-3 will just introduce George. ‘Oh, George walked in.’ Who’s George? Wait a minute. So, you really do have to watch it.

This is probably going to improve in the future, as more specific author tools come on the market. But, for now, it’s basically been able to double my word output for the day.
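[As a rough illustration of the prompt structure Paul describes: a character list and scene outline, followed by 100 to 300 words of the author’s own prose for the model to continue. The scene details below are invented, and the call assumes the circa-2021 OpenAI Python client.]

```python
# Sketch of a fiction-continuation prompt in the shape Paul describes.
# All scene details are invented for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Pinned setup: who is in the scene and what should happen in this chapter.
scene_header = (
    "Characters: Mira (a thief), Captain Holt (a city guard)\n"
    "Scene outline: Mira is caught picking the captain's pocket and talks "
    "her way into a job instead of a cell.\n\n"
)

# 100-300 words of the author's own prose (shortened here for the example).
my_opening = (
    "Mira felt the coin purse slip free just as a gloved hand closed around "
    "her wrist. She looked up into the captain's unamused face and smiled. "
    "'Funny story,' she said."
)

response = openai.Completion.create(
    engine="davinci",
    prompt=scene_header + my_opening,
    max_tokens=150,  # roughly 50-100 words of continuation
    temperature=0.8,
)
print(response.choices[0].text)
```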

Joanna: I just want to come back on the prompt because I really want to be specific because this is what I found. When I first went in there, so, I’m not a programmer, you’re obviously a programmer. So, you’re more technical, but you don’t need to be a programmer to use these tools.

As you say, you’re typing in English, just some prompts. But I don’t write, ‘Who is Joe?’ and then expect it to come up with a story or anything; I might have to say, ‘Joe met Paul down the pub and they had a drink. And then they opened the box and found…’ and then hit Enter, whatever.

Paul: Yeah, that’s one thing I found is that I really have no problem with writer’s block anymore. If I got to a place where I’m not sure what’s going to happen, I’ll just hit the AI button. I might not agree with what direction it takes me in, but, sometimes, it has some really good ideas, like, ‘Oh, I didn’t even think about that.’

Joanna: I think that’s what I found really interesting. They actually have a slider where you can make it more creative, as in more unusual, or less creative and less unusual, so really more standard.
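[The ‘creativity’ slider most likely corresponds to the model’s temperature setting. A small sketch, assuming the circa-2021 OpenAI client, of running the same excerpt at a low and a high temperature; the excerpt is a placeholder.]

```python
# The "creativity" slider maps (roughly) to the temperature parameter: lower
# values stay close to predictable phrasing, higher values produce the more
# unusual word choices and metaphors described here. Sketch only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Stand-in for the ~100 words of existing prose pasted into the tool.
excerpt = "The rain had not stopped for three days, and the river was rising."

for temp in (0.2, 1.0):
    response = openai.Completion.create(
        engine="davinci",
        prompt=excerpt,
        max_tokens=100,
        temperature=temp,
    )
    print(f"--- temperature {temp} ---")
    print(response.choices[0].text)
```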

What I’ve done is copy and paste 100 words of one of my stories in and then just press Continue. And what it’s generated with the more unusual one, I guess a bit like what some of Janelle Shane’s done, is some metaphors and some language that I’ve been stunned by, in a way.

Things that I never would have thought of because it’s so different to the way I think. So that, I think, is kind of what I’m interested in: using it for prompts, or having it prompt me back, almost. When you said it has doubled your word output, I think the question people have is this: you’re putting this stuff in.

Are you just copying and pasting those output lines and that’s it?

Paul: I have a process because we don’t have tools yet and I haven’t had time to build one on my own. But, you have this text box on the website, and there is what’s known as a context window of 2,000 tokens. And because of the way they break down words, individual words, 2,000 tokens is actually about 1,600 words.

So, you have to limit your prompt, and the output can’t be more than the 2,000 tokens. What I usually do is try to have at least 500 to 700 words of a prompt for the fiction writing, and then usually only have it create maybe 50 to 100 words at a time.

This did start to get a little expensive. I think last month I spent $110 and I was able to write, I think, about 120,000 words in the month. The pricing is not that bad but I think it will come down in the future.

Joanna: There are lots of sites now being built on top of GPT-3, which are the ones that most people will be able to get access to because, of course, at the moment, GPT-3 itself is still in closed beta. But, again, just on those words: once it’s generated a couple of hundred words, are you copying those words and pasting them into your manuscript and just publishing it like that?

Paul: No. What I’ll do is, once it gets to the 2,000-token context window, I’ll go back to the top. I will keep my scene outline and character list and stuff like that, but then I will cut out maybe 500 words and put that into a Word document.

And then that frees up space in the context window on GPT-3. It still has all the other content that it wrote, plus my description of the scene and whatnot. And then you hit the button again and keep writing, with my writing and the AI’s kind of merging together.

But it does need editing at the end. That’s a good point. I have tried to sneak one or two small things out on another pen name, just to see how they would be accepted. But like anything in writing, it really comes down to the second draft, third draft, fourth draft.

GPT-3 really helps with that first draft and really banging through, making sure you don’t get stopped or blocked or anything like that.
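[A rough sketch of the rolling-window process Paul describes: keep the scene header pinned at the top, generate 50 to 100 words at a time, and move the oldest prose out to the manuscript as the prompt approaches the roughly 2,000-token limit. The token counting below uses the open GPT-2 tokenizer as a stand-in, and all the numbers and names are illustrative.]

```python
# Sketch of a rolling context window: the scene header stays pinned, and the
# oldest finished prose is appended to `manuscript` (the Word document in
# Paul's workflow) whenever the prompt nears the token budget.
import openai
from transformers import GPT2TokenizerFast

openai.api_key = "YOUR_API_KEY"  # placeholder
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # stand-in tokenizer

CONTEXT_LIMIT = 2000     # approximate budget for prompt + output (illustrative)
GENERATE_TOKENS = 130    # roughly 50-100 words per button press
TRIM_WORDS = 500         # how much finished prose to move out at a time


def count_tokens(text):
    return len(tokenizer.encode(text))


def generate_next_chunk(scene_header, story_so_far, manuscript):
    # If the working window is near the limit, move the oldest ~500 words
    # out of the prompt and into the manuscript.
    while count_tokens(scene_header + story_so_far) + GENERATE_TOKENS > CONTEXT_LIMIT:
        words = story_so_far.split()
        manuscript.append(" ".join(words[:TRIM_WORDS]))
        story_so_far = " ".join(words[TRIM_WORDS:])

    response = openai.Completion.create(
        engine="davinci",
        prompt=scene_header + story_so_far,
        max_tokens=GENERATE_TOKENS,
        temperature=0.8,
    )
    return story_so_far + response.choices[0].text
```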

Joanna: So, even as an idea generator. I did one prompt where Thomas is in the graveyard and he’s digging into the ground and he finds a box. And then he opens the box and finds…

You hit Enter and it comes up with something. And then you can just do it again and again and again. Every single time, it’s going to come up with something different, some of which will be ridiculous and some of which might be interesting. As you say, it’s the ideas that might be interesting.

I did have a couple of times where it did come out with text I could’ve just copied and pasted on its own because it made sense. But I don’t think that’s how most people are going to use it really.

It’s more like a tool for helping you come up with ideas and move on to the next phase, at the moment anyway.

Paul: Exactly. Now, I did use it. I am working on my next novel, called Shadow Ranger. The main character is trapped in a video game that’s a cheap knockoff of a popular game, and it has to do with guilds. I could have spent time, and it would have been fun, too, coming up with guild names and attributes for these guilds.

But what I did was just write maybe 3 or 4 of them and put that as a prompt and hit the Enter button on GPT-3. And it spit out 200 guilds. Not all of them were great but some of them were pretty hilarious or clever or the name had to do with what the guild was about and stuff like that.

It’s really a good tool, I’ve found, at least for me and LitRPG, for generating treasures and describing monsters and stuff like that. I think it’s better on the smaller tasks right now rather than a big project, just because of the 2,000-token context limit.

Joanna: Right. And when you want to generate a list, are you literally just entering a list? So, if I put ‘Joe’ and then, on the next line ‘Paul’ and the next line ‘Mary,’ and the next line ‘Fred,’ and then hit Enter, is it going to carry on generating a list of names, for example?

Paul: Usually. But what I did is I put, ‘The following is a list of 20 fantasy elf names for male rangers.’ And then I put ‘1)’ and a name, ‘2)’ and a name, then ‘3)’ and just stop.

And then by telling it, ‘The following is a list of,’ whatever you want a list of, sometimes it can infer, ‘Okay, this is going to be a list of names,’ because the first two are Paul and John. But, if you give it a little more, like ‘these are fantasy elf names,’ I find it’s a little better at understanding exactly what you want.
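[A sketch of the list-style prompt Paul describes: state what the list is, seed it with a couple of numbered entries, and stop mid-list so the model continues the pattern. The seed names are placeholders; the call assumes the circa-2021 OpenAI client.]

```python
# Sketch of a numbered-list prompt: describe the list, seed a pattern, and
# stop mid-entry so the model keeps going. Seed names are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "The following is a list of 20 fantasy elf names for male rangers:\n"
    "1) Faelar Nightbreeze\n"
    "2) Theren Oakwhisper\n"
    "3)"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=200,
    temperature=0.9,
)
print(prompt + response.choices[0].text)
```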

Joanna: That’s really helpful to me. What I find really interesting is that I know how to use Google to find what I want. And I think we know how to do that now, although it’s funny because my mom, who’s in her mid-70s, will sometimes not use Google in a way that I think makes sense. I’m like, ‘Why aren’t you using this tool properly?’

And I feel, at the moment, when I’m playing with these various tools, ‘Why can’t I use this properly? Why am I not getting it?’ which is why I wanted to talk to you.

You’ve used the words ‘fun’ and ‘double word output.’ And Yudhanjaya, who’s been on the show talking about co-writing with AI, also used the word ‘joy.’ And I think this is what’s so cool. It’s actually fun. It’s like playing a game.

I’m not a gamer but I’m getting a kind of gaming idea when I’m playing with this stuff. And so, that to me is what’s important. But some people are saying, ‘Well, isn’t this cheating?’

Is it cheating to use AI language generation?

Paul: I would ask those authors if they hand write all of their manuscripts or if they use a typewriter or if they use a modern word processor. In some ways, I think GPT-3 can be viewed as basically an enhanced word processor.

I know Google Docs, right now, they have a feature built-in where if you’re typing something, it’ll give a one or two-word suggestion sometimes. And you just have to hit Tab and it will auto-complete that. I think that is going to be built into more tools as we move forward.

I know OpenAI just signed a big contract with Microsoft, and, of course, Microsoft has Word and all the other fun programs that they have. I’m thinking that probably this year, GPT-3 will be rolled into a product like Word, where you can basically access the power of GPT-3 through a normal word processor.

Joanna: I completely agree with you. There is no way Microsoft did this big deal, which was like a billion dollars or something, a lot of money essentially to be able to use GPT-3, and won’t use it. We had GPT-2 and then GPT-3 a year later; maybe we’re going to have GPT-4 this year, or whatever it is. GPT-X will go into the Microsoft suite of tools.

It’s funny because it’s enough to make me consider going back into Microsoft. I’ve been Mac for years but now I’m like, ‘Oh, does that mean I’ll need a PC to kind of access this stuff?’ Which kind of is scary, but there you go!

I did want to come back on a couple of questions around copyright. I see that people have issues from two angles.

The first one is, obviously, the model is trained on data. Now, the assumption is that the data is publicly available and, therefore, is not under copyright. But I just can’t see how there isn’t some breach of copyright somewhere in that model. It just seems impossible to avoid. So, that’s one thing, the data that’s fed in.

The second one is the stuff that comes out. Do you have the right to publish and to own the words that come out of GPT-3 or any of the other tools?

[I go into these issues more in Episode 519: Copyright law and blockchain for authors in an age of AI]

What do you think about these two copyright issues — the data that it’s trained on, and the ownership of the output?

Paul: For the data that it’s trained on, I did a little research on this, and I don’t know if you remember. Google got into trouble a few years back when they decided to go to all the libraries and scan every single book ever made and put them into the cloud.

They weren’t worrying about copyright or anything or if people wanted their book to be sucked up into this database that they were building. I think they eventually used it for their book search website. But they got sued and it went back and forth. It went all the way to the Supreme Court.

The Supreme Court ruled on Google’s side, which basically set a precedent for AI being used not for generation but for NLP, natural language processing. They sided with Google. Now, if that’s going to change in the future, I don’t know. It’s something I will definitely be watching closely.

For the generative side, I know OpenAI had some really loose language in their agreement that you had to sign when getting access to the tool. But they listened to the community and changed that. So, basically anything I create with GPT-3, according to them, I own the copyright.

Personally, I would think that with all the editing and input that I have to put into the works… Like I said, it’s not just pressing a button and getting something out. I don’t feel bad for saying that I own the copyright of something that I wrote using this tool.

It would be the same as saying, ‘Does Microsoft own your work because you used Microsoft Word to write it?’

Joanna: I think that’s where my position has changed from maybe even three months ago, when I was really thinking that there were just more problems with it. But, as I’ve got into some of the World Intellectual Property Organization’s material and the UK government’s, which is very pro doing more with AI, and attended AI seminars and read books, I have also come to this feeling that, ‘Yes, I mean, I would like to license my work.’

Paul: I loved your blockchain idea.

Joanna: Oh, you did? Oh, good. I’m not in the same genre as you but I gave the example of what if lots of indie thriller writers created a data set that we called Indie Thriller Writers 2021, and then we were able to license that data set to people? Like you talked about tuning GPT-3.

If we could get paid for licensing our data, that would be one thing. And I had suggested, as you say, using the blockchain to get a percentage of the output. But I’m now more with you on this, in that I feel that what comes out is not going to be just click-and-publish. And maybe it never will be, and that’s not how we create anyway.

We’re writers because we enjoy creating. And even the very first thing you must do is put in a prompt; it’s not like you click a button and anything’s going to happen.

You have to prompt this machine to do something. So, the initial spark is always from you. And then, you’re going to edit it and shape it and turn it into something else.

Do you think we’ll be able to license our data in the future?

Paul: I don’t know. That barn door might already be open.

Joanna: That is kind of how I feel. Anyone could, obviously, suck our books out of wherever and use them without us knowing. How would we ever know?

Paul: Right. For LitRPG Adventures, talking about writing the prompts and stuff, most of those are based on a two-shot method. I ended up having to write maybe 100,000 words over the summer of really good examples of, ‘This is an elf fighter. This is an elf mage. This is an elf thief.’

I did have to do a lot of work to actually come up with examples for it to copy, rather than just saying, ‘Hey, write me a really cool D&D adventure.’ Enter. It doesn’t work that way yet.

Joanna: And as I said, I tried that and I was really impressed by the quality of the output.

Paul: Well, thank you.

Joanna: So, clearly, that’s because you did a lot of work to train it.

Paul: Yes, basically.

Joanna: I think this is the thing. I think the people who would think this is cheating, in some way, feel like it is just a magic button where you just go, ‘Write me a thriller.’ Click. But that’s just so not true at all. As you say, it’s more like it gives prompts back to you.

Is there anything else that you think people need to know? Where are we right now, and where do you think people are going to be within the next year? Because, at the moment, you’ve got access to GPT-3 and I haven’t, and I’m pretty into it. And yet, I still can’t really access it.

How is this going to become more easily accessible for writers, and how long will it take?

Paul: It scares me that I’m saying this but I think Microsoft might actually help with this.

Joanna: Who would’ve thought?!

Paul: They have the infrastructure and everything to get this out to the people, I think.

From what I’ve heard, there were 250,000 people that applied for GPT-3, and I was one of maybe 1,500 that got access. And what I did, basically, was show them all the work that I had done on GPT-2, creating my own data sets and stuff like that.

But, from what I hear, they’re going to be opening up access a little more this year. I’m not sure if it’s going to happen this month or next month, but I think OpenAI is going to release their tool to more people. And then, like I said, hopefully, Microsoft will use what they’ve built up, over the years, to get this out to more people too.

Joanna: If people are interested, go to OpenAI.com; you can actually apply for the beta, and it’s just a form that you fill in.

I also read about this thing called GPT-Neo from an open-source group called EleutherAI, who’ve got some funding. The problem with building this kind of thing is that it’s incredibly expensive to train.

Paul: Millions and millions of dollars.

Joanna: I have read about some technical possible solutions that don’t involve training it with so much data.

Are we just going to be exploding with possibilities by this time next year?

Paul: I think so. It’s interesting you bring up the smaller models because, in some of my tests, the smaller GPT-2 models did perfectly fine, as long as you fine-tuned them, say with hundreds of megabytes of examples of fiction. I think I only had like 2 or 3 megabytes of character backstories.

But even that was enough to get it to figure out exactly what I wanted. So, there’s a future, I think, in the smaller models, and in them actually open-sourcing a tool, which is really neat. I don’t know if you heard, but Google, I think last week or the week before, announced they have a model that is, I think, 20 or 30 times bigger than GPT-3.

Joanna: I think it was six times, but that’s still a lot!

Paul: It was over a trillion, not synapses, but connections in the neural net. If you think about human beings, I think we have multiple trillions of synapses in our brains. But as these models get larger, it’s going to be interesting to see how much the output actually improves.

In the meantime, I think there’s going to be other companies, like the one you mentioned and others, that are coming out with smaller models that are easier to manage but can do just as good output, depending on what task that you want the AI to do.

Joanna: Absolutely. We’re recording this in February 2021, and I’m in my mid-40s and I certainly expect to be doing this for, let’s hope, another 45 years at least. You mentioned 5 to 10 years, but even if it’s 20 years, it’s still within our creative lifetimes that this stuff is going to become pretty extraordinary.

At the same time, we’ve got AI translation getting better and better. I feel the fear that people have, and in a way I have myself, is that we’ve had so many problems with Kindle, in particular, being swamped by bad-quality work or badly translated work that drowns out the voices of other writers.

How much more will this happen if people can generate books or auto-translate books very quickly? I expect that to absolutely happen.

How can we differentiate ourselves and use these tools in a way that helps us be more creative and make more money in an abundantly creative world?

What happens if we’re swamped with lots of content generated by GPT-X that people just whack a cover on and put up on KU?

Paul: Quality control. One of the interesting things about GPT-2 was that since you could create text with it, you could reverse engineer that and discover if text was written with GPT-2 or not because, basically, the probability of the words being used would match up and you could tell if it was machine-generated.

This is why I think that authors are going to be an integral part of the process for the next 5, 10, 20 years. And I think it would be easy for Amazon to automate something, to have a quality-control gate, maybe.
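[A rough sketch of the detection idea Paul mentions, in the spirit of tools like GLTR: score a passage under the open GPT-2 weights and treat unusually low perplexity, meaning the text looks very predictable to the model, as a weak hint that it may be machine-generated. This is a heuristic illustration, not a reliable detector or the specific tool Paul refers to.]

```python
# Sketch of probability-based detection: text generated by a model tends to
# look unusually "predictable" to that same model. Here we score a passage's
# perplexity under the open GPT-2 weights; a suspiciously low score is only a
# weak hint, not proof, that the text is machine-generated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text):
    # Average predictability of `text` under GPT-2 (lower = more predictable).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()


sample = "The knight rode into the valley as the sun set behind the hills."
print(f"Perplexity under GPT-2: {perplexity(sample):.1f}")
```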

Even without AI, the market on Amazon has been growing. I started back in 2012 and that wasn’t even the golden age of Kindle publishing. I missed it by X number of years. But, since then, we’ve had more and more and more people putting content on Amazon.

I would hope that Amazon would put some kind of quality-control measure in place but we will see if that happens or not.

I think of Amazon these days as basically monetizing the slush pile from traditional publishing. That might be mean, but I think it’s kind of true that, before, in traditional publishing, not all these books got to market. You had to be the best of the best, the top 10%, to get through that editorial gateway.

Whereas now, if you have 10 minutes, you can throw it up on Amazon and have it available for sale all throughout the world. There are good things to that, but there are also bad things, like you said: a lot of garbage gets through, and it becomes harder and harder for people to find good-quality content.

And that’s where, I think, AI might help too, with projects like BookLamp, although Apple bought them out and we haven’t heard anything about them since. But just the idea of using AI to analyze a text and say, ‘Okay, if you liked Paul Bellow’s Tower of Gates book, you will probably also like X, Y, and Z.’

And that’s not based on sales data or anything like that. It would be based on the fact that, basically, the genome snapshot of a book is really similar to this other book. They kind of went into that in The Bestseller Code, the fact that The Da Vinci Code and, I think, Fifty Shades of Grey had the same basic emotional arc through the story.

Joanna: But in different genres. Yes.

Paul: Exactly. But the AI was able to suss out that these are similar books. Even though they’re in different genres, they use that same emotional arc through to the end. So, I don’t know.

I think it’s going to come down to Amazon working to be the gatekeeper. But I’m not sure if I see that happening because they basically took down the gateway when they opened up online publishing.

Joanna: Exactly. But it may be that that curation task is increasingly done by some of the other sites, as we find now. Like BookBub, you can’t always get on BookBub, for example.

I’ve actually been saying, for probably a decade, that the book itself should be the metadata. We have to type in these categories and keywords and this type of stuff, but why do we have to do that? If you upload the text of your book, the text of the book should be your metadata. That should be the thing.

I think what you’re saying there is that might be possible in a more AI-driven discoverability space, where it can almost parse the emotional journey and the tone and everything within the book and match it to people even more effectively than we do now. I’m actually looking forward to that because I don’t want to enter categories and keywords anymore.

Paul: Right, exactly. And that means, too, that the good content is going to rise to the top, which I think Amazon wants: that cream at the top. Give everybody access to publish, but not everybody is going to sell on Amazon.

Joanna: That’s certainly true now.

Paul: I think, as the discoverability tools improve in quality, the next big thing in self-publishing is going to be not the creation side but deciding what books a person should read next, a recommendation engine, something like that.

Joanna: Absolutely. Well, we live in very exciting times.

Where can people find you and everything you do online?

Paul: Right now, I’m usually found on litrpgforum.com. That’s been running, I think, 3 or 4 years now, and we’ve got about 1,000 members signed up. I’m also on Facebook, Twitter, and Discord. But if you go to LitRPG Forum and sign up, if you have any questions or want to pick my brain a little bit, you can do that, and I will likely find you there.

Joanna: Brilliant. Well, thanks so much for your time, Paul. That was great.

Paul: Well, thank you.


Author: Joanna Penn


Date: February 25, 2021