AI Valley: How Big Tech, Ethics & Journalism Collide in the Age of AI | Gary Rivlin | Glasp Talk #53

This is the fifty-third session of Glasp Talk!

Glasp Talk features in-depth conversations with leading thinkers, creators, and storytellers to uncover the ideas, insights, and intentions behind their work.

Today’s guest is Gary Rivlin, a Pulitzer Prize-winning journalist and author of "AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash in on Artificial Intelligence." Drawing from his decades of tech reporting, Gary shares insights into the current AI boom, comparing it to past tech revolutions and discussing the intense competition between tech giants and startups.

In this episode, Gary explores the ethical concerns surrounding AI safety, its potential impact on journalism, and his unique process of writing with AI as a co-pilot. From the challenges of scaling AI startups to the future of human-AI collaboration, this conversation delves into the opportunities and threats of our AI-driven world.





Transcripts

Glasp: Hi everyone, welcome back to another episode of Glasp Talk. Today, we are excited to have Gary Rivlin with us, a Pulitzer Prize-winning journalist and acclaimed author whose work has appeared in the New York Times, Wired, and Mother Jones, among many others. Gary has written 11 books, and his latest, “AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash in on Artificial Intelligence,” offers a gripping behind-the-scenes look at the AI gold rush unfolding in Silicon Valley. A former tech correspondent for the New York Times and a contributor to the Panama Papers, Gary has spent decades on the front lines of tech reporting, chronicling everything from the rise of Google and Facebook to the challenges facing small businesses in times of crisis. With his sharp eye for storytelling and a deep understanding of technology's societal impact, Gary brings a powerful lens to one of the most important shifts of our time. And today, we will dive into the world of AI Valley to discuss the growing power of big tech and explore what this moment means for the future of innovation in society. Gary, thank you so much for joining the show today.

Gary: Oh, thanks for having me. My pleasure.

Glasp: So first of all, we love your book, AI Valley. It's really insightful. But could you tell us what inspired you to write the book? I mean, I heard the main focus in the book is Reid Hoffman, the founder of LinkedIn, and also co-founder of Inflection AI. But I heard you were going to someone else in the beginning. But could you tell us the behind-the-scenes story a little bit?

Gary: Yeah, you know, it was interesting. So it's 2022, and I had taken a break from tech for a while. I wrote a book about small businesses surviving COVID, but I was really drawn back to tech. And actually, I was thinking of a biography of a prominent figure in tech, a controversial figure in tech. And, you know, I knew it would be hard because he was going to be hostile to me writing it. Every interview was going to be hard to get. But then, kind of randomly, I just received an email from Reid Hoffman. I mean, not to me personally, but, you know, one of those dear-friend blasts and all. I checked; like 2,000-plus people, you know, get that. But in the email, he mentioned, I'm co-founding my first startup since LinkedIn, and it's going to be in AI. And, you know, at the start of my career, I actually went to college as an engineer. I programmed, you know, I took Fortran, a long-dormant language. But, you know, I was always frustrated. I'd make a small syntax mistake, and, of course, my program wouldn't work. And he had this one line in the email: instead of us learning the machine's language, the machine was going to learn our language. You know, this idea that the new programming language is English or, you know, a human language. And that just kind of gave me this idea of speaking human. I will confess to you, I had not thought about artificial intelligence since I was in college. So that had been a lot of decades. But it's just like, huh, this is interesting. So one of the people I was talking to about doing this other book, I said, you know, this is like early December 2022. It was literally within four or five days of ChatGPT being released. It wasn't even a big deal yet. There was kind of a bit of a delay. It took like a week or two before the whole world was talking about ChatGPT.
So I called this guy I knew, a source. I'm like, what do you think of it? And he just said, like, it's the perfect time. And I just, you know, I guess a lesson for people listening: I try to follow my instincts, and I try to listen to smart people. Reid Hoffman is a smart person, and the fellow I was calling, a well-known venture capitalist, you know, he really knew what was going on. And so I just kind of jumped on it. It just seemed like this was the time to write about artificial intelligence, just as things were starting to go crazy, just as the whole world was starting to talk about AI. You know, I had this idea and was working on a proposal to sell to a publisher. You know, it's not like you can just decide to write a book; I have to write a proposal for a publisher: are you interested? Will you pay me money for it? So I just had really, really good timing because of this random email. On a different day, I might've hit delete. I get two or three of these blasts from Reid Hoffman a year, and I confess, I didn't always read them. This one I read, and it just really struck me.

Glasp: Nice. And in the book, you portray the current AI boom as a trillion-dollar race. You have seen the tech cycles many times before; what do you think makes this AI wave different? We went through so many AI winters. Do you think this time is different?

Gary: I definitely think this time is different. I mean, you know, like any boom, there'll be a slowdown. There'll be a fallout and stuff. But no, I think AI is here to stay. This is the real deal. I think a couple of things, you know, comparing. So I started, I mean, I kind of worked on a mainframe, I guess. You know, when I was in college, I used punch cards, then a terminal. And then, you know, I'm old enough to have bought a personal computer early on in the PC revolution. And of course, I covered, starting in the mid-90s, the internet revolution. And, you know, we'll throw in another one 14 years later: 1994 was the start of the internet era, and 2008 or so was the arrival of mobile as a big category, as a big platform shift. And, you know, to me, what's so interesting about AI is that it builds on all of them. It builds on, you know, the network effects of the internet, on mobile, on chips. And so to me, it's the culmination. You know, I've been at this for decades, and so AI really struck me as a culmination. And there are similarities to the internet era. Like, you want to compare 2023 to 1997, 1998. Like, someone comes up with an idea in a memo, and one week later, you know, they have a $100 million valuation. You know, that kind of thing, like instant company and stuff. But I'll tell you, this is moving faster. I mean, you know, some of the stats we've seen on the adoption rate: ChatGPT got to a million users much faster than any tech product ever. It got to 100 million much faster than any product ever. You know, Facebook, Google, whatever example you want to give. And it's just moving so fast. Like, we're trying to say what's going to be in six months, but the world is changing. Like, you know, everything was big, big, big, and then DeepSeek hits. And, you know, suddenly we're rethinking things.
You know, it's just like all of a sudden it's reasoning models, but now people are casting doubt on that. So to me, it's the speed at which this is happening and the decisions that are being made, even though we don't know where we're going to be a year from now. I mean, we'll have a sense that, you know, AI will be better and stronger and all. But, you know, to me, it's almost like the PC era probably felt very quick to people, but then the internet came along and that accelerated the speed. And now AI, to me, since 2022, has caused an even greater acceleration. Things are moving that fast.

Glasp: And to me, like, OpenAI is winning this game right now, you know, with everything happening. But, I mean, after the transformer paper came out in 2017, I think many companies had a chance to win this game in AI, right?

Gary: Including Google, which came up with the transformer. That was kind of a quaint time when, you know, researchers were sharing, you know, their research. I'm sure Google, for the next transformer paper, if they come up with one, won't be sharing it with the wider world. Yeah. I mean, it was, you know, it's interesting. AI, I make the argument in the book, and a lot of people make this argument, really favors the incumbents. It requires so much money and so much data. You know, Apple keeps on stumbling, but I wouldn't count Apple out, right? They have a billion-plus users. Facebook is hardly at the leading edge, but they have billions of loyal users. But with that said, you know, along with the advantage to the incumbents, there's a disadvantage to the incumbents. They're more cautious. They're putting more at risk. So, you know, Google, I write about this in the book, they had Meena, M-E-E-N-A. And it was ChatGPT before ChatGPT. They had it. It was like 2020, you know, a couple of years before ChatGPT came out, but they were scared of it. You know, PR, the marketing people, were fearful, like, oh my goodness, what if it says something inappropriate? There was a Microsoft release called Tay in 2016, and within 24 hours, it was spewing Nazi-saluting, white supremacist, anti-Semitic garbage, and Microsoft had to pull it. And I think all the big companies, Google, Microsoft, Meta, were frightened by that. So it had to be a startup. It had to be OpenAI that came out with something this bold. Let's just put it out there. Let's just put ChatGPT out there. It's not quite perfect yet. Of course, it will say things that we're not gonna like, but we're gonna take that risk. And they really haven't lost that lead. There was a time about a year ago, it seemed, when it was really narrowing. Like, Anthropic's Claude was catching up. But I 100% agree with you. This is OpenAI's fight to lose.
And I do think there's a very good chance they will, given they're up against Google, Microsoft, Meta, and Anthropic, which I'm impressed with as a company. But it's also different, because if we're looking at the large language models, then we could throw Anthropic in the mix. But another thing that's amazing about Google is they're competing with very strong products not only on large language models, but also text-to-video and text-to-image. They are across the board. They have a strong offering of AI. So again, Google got beat. They blew it. Google was so far ahead of everyone in the 2010s in AI. They had all this talent. It's not a coincidence to me that the Transformer paper was born inside of Google, but they were just scared of it, and all. But they're not scared of it anymore. And so it is an interesting battle. OpenAI is the favorite, but the favorites don't always win.

Glasp: Yeah, a very interesting time we live in. And now Anthropic and OpenAI are doing very well, but do you think they will keep winning this game? As you mentioned, the big tech companies, Google, Facebook, and Apple, have a bunch of money and resources; will they catch up eventually, or acquire those startups eventually? I wanna know the dynamics of startups versus big tech companies in this AI era.

Gary: Yeah, so I mean, obviously, no one knows the answer to that question, but I do think about the money. So Dario Amodei, the Anthropic CEO, a very well-regarded figure in AI, said that, so when I started in 2023, you needed millions of dollars to train and fine-tune one of these large language models. By the time I was done reporting, let's say, the end of 2024, it was hundreds of millions of dollars into the billions. And Amodei says that by 2027, it could be $100 billion to train and fine-tune one of these things. And of course, that's only some of the cost. You then have to operate it. It's very energy-intensive. It takes a lot of capital just to answer the queries and all that. In fact, one of the startups I talked to for the book had to cut back on their product. They had to stop spreading the word because it was too expensive. They didn't have enough cash to operate. They had an amazing model, but they had too many users, and they were gonna go bankrupt and stuff. So it's a very expensive thing. How does a startup raise a hundred billion dollars? I mean, you give away a piece of yourself for each tranche: the seed, the A, the B, the C. Eventually, there's not that much more to give away. So playing it out, OpenAI is remarkable. They have a $300 billion-plus valuation. They've raised tens of billions of dollars, but what happens when you get to the hundreds of billions? So I'm not ready to count them out, but I do wonder if it's inevitable that an Anthropic or an OpenAI has to merge, has to be acquired by a giant. But I have no idea. I'm just sort of playing it out in my head. And I do think that it's harder to be a startup if you wanna do something foundational, like Anthropic and OpenAI. Runway, here in New York City, a startup I spent some time with for the book, they do text-to-video, they do AI video. They have clients in Hollywood. They have revenue. They're doing very, very well.
So there are a few that are doing it, but they're now up against Google and OpenAI, both of which have their own very impressive text-to-video models, and other companies, and I just don't know how it plays out. All I know is I went looking for the next Google, the next Meta, and I fear, and I'm not sure I'm right, but I fear that the next Google is Google, that the next Meta is Meta. And there's plenty of room to make money for a startup. I mean, I'm not saying venture capitalists stop investing, but I'm talking about the next trillion-dollar company. I'm talking about the next Google, Meta, Microsoft, and Apple. The giant. So there's plenty of room to do apps and do a million interesting things. But if we're talking about those companies that define the era, I'm not sure if Anthropic, Runway, and OpenAI are gonna make it. I do wonder if they're gonna have to be bought.

Glasp: I see, I see. And actually, when Google started and became dominant, I mean, like in 1997 or 2008, I was too young, and I don't remember what that time looked like. But you've seen how Google grew over time and became one of the biggest ones. Don't you think OpenAI can go down the same path? You said the next Google is Google, but-

Gary: I said I worry that the next Google is Google. So, actually, I was around then. I think it was 1998 when Google was founded. I was working for this internet magazine called the Industry Standard, a very interesting magazine that only lasted a few years. But we were all using the AltaVista search engine. And when people heard about another search engine: why? This one's good. Why do we need another one? But once Google was released, let's say 1999, I think maybe it was 2000 when it really started to gain popularity, it was almost like one day, every single computer in the newsroom had AltaVista, and one week later, virtually every computer in the newsroom had Google. It was a better product and stuff. It was head and shoulders above the others. Can OpenAI make it? Yes, I'm not betting against that company. I'm not betting against Sam Altman, astounding founder, astounding leader. What he's pulled off is remarkable. But I will say I like ChatGPT, but I don't think it's any better than Claude. I like Claude, but I don't think it's head and shoulders above Google's Gemini or Microsoft's Copilot and all. So for the foreseeable future, they're all a commodity. They're leapfrogging each other, like, oh, now this one's more powerful, now that one's more powerful and stuff. And I don't think one has that product advantage that Google had. And again, that second factor: Google raised money from, at the time, the two best-known venture capital outfits in Silicon Valley. Maybe they raised $20 million combined. I can't remember exactly, but something on that order. It might've even been less and all. And they didn't really have to raise that much money afterwards. The advertising model rather quickly meant, like, oh, we don't have to raise more funds. OpenAI's problem is that they have to raise astonishing sums of money.
And again, like Google, the breadth of their offering means that they have to raise money for everything: for their large language model, for their text-to-video, for their image generation. They have a range of products, which is very expensive to create, to improve, to fine-tune, and to operate. So again, I'm not gonna count OpenAI out. They're an astonishing startup. I just worry about the amount of cash they're gonna have to raise.

Glasp: I see. Yeah, really interesting and insightful. And in that sense, how does an AI startup compete, or find the opportunity, in this market? So, if you were the founder and CEO of an AI startup, how would you compete, or how would you find the opportunity in the market?

Gary: Yeah, I mean, you go up the stack. Like I'm saying, on the foundational front, I'm not sure there are many venture capitalists out there looking for anyone doing a foundational company. I could be wrong. There might be, like Google with AltaVista, a completely different way of doing it, and folks might find that promising. But again, go up the stack. Look for AI in the law, AI in medicine, AI in education. There are lots of opportunities to build on top. It's far less expensive. I mean, that's even Perplexity, which we haven't mentioned, a very impressive AI answer engine. They're not doing the foundational stuff. They're operating on top of other large language models. And that, to me, is more the approach I would be taking if I were in that world. Like, what's the expression? Like, do the wrapper. You know, I mean, then you have the problem, like, okay, well, how do you differentiate yourself? What's your moat? But I would argue that there is no moat for anybody. On the foundational side, you know, does Google, does OpenAI, does Anthropic? No one really has a moat right now. That's the problem. But, you know, again, I think the opportunity to create an extraordinary company that goes from hundreds of millions to billions to tens of billions of dollars, maybe more, in revenue, that's a hard battle to fight. But the idea of creating interesting companies that make millions and tens of millions and ultimately hundreds of millions, you know, in SaaS and kind of middleware and data labeling, I mean, there are a million opportunities out there. I am just dubious of the number of opportunities to create the next huge, great company.

Glasp: And in the book, I mean, to write the book, I think you interviewed so many key figures and checked out so many AI tools and products and so on. But is there anything you wanted to publish in the book but couldn't? Any hidden story, any hidden gem?

Gary: Yeah, no, actually, it's funny. So when I started the book, again, it was the end of 2022, and the world wasn't as clear then as it is today. So I had this idea. And, you know, I'll confess, I jumped in without having the foundational research. I kind of knew AI had been around, I kind of had heard of the Transformer paper, I had played a little bit with OpenAI's image generator. DALL-E, yes, yes, yes. But, you know, I didn't have much depth to the knowledge. So I started off saying, like, oh, I want to follow three startups. So I had Reid Hoffman, co-founder, and Mustafa Suleyman, you know, a star of AI; he's a co-founder of DeepMind, and he went on to be an executive at Google. And, you know, a third co-founder, who is a very well-regarded figure in AI. And I felt like, okay, they have star power. You know, they raised money from Bill Gates, and Will.i.am, and Ashton Kutcher. Before they even had a product, they had raised hundreds of millions of dollars. They brought out their product, it's called Pi, and it kind of differentiated itself because it was high in EQ, emotional intelligence, and not just, you know, IQ intelligence. And, you know, one month after they released it, they raised another $1.3 billion. So they raised, like, over a billion and a half dollars. You know, they're well-known founders, and they got all this attention in the media. So then I went looking for, like, an underdog. It was a company that got into YC, but they were two outsiders. One was, you know, kind of a community college dropout, rather than a regular college dropout, and he hooked up with a really genius founder who, like, in 2018, came up with AI search long before almost anyone else. So he was working on AI search in 2018. I thought, oh, that'd be an interesting company.
So, you know, one: Inflection, Reid Hoffman's company, going for the chatbot market, which seemed the most central market. And, you know, the second company, an underdog no one had really heard of, they raised only a few million dollars, taking on Google, right? I mean, there's no bigger tech company in Silicon Valley than Google. And then the third company, Runway. You know, I thought they were really interesting. They were also outsiders. They were three artists who got involved in the mid-2010s in doing text-to-video. And, you know, again, they were earlier than most. In 2018, they founded Runway, and they were getting a lot of momentum and stuff. So I had this idea in my head that I'd follow these three startups as they all tried to cash in on this AI moment. But then, you know, I realized, like, so much of the story is around Google, so much of the story is around Microsoft, so much of the story is about Meta, and it kind of crowded out those other two smaller startups. So they're in there. But, you know, I wrote several chapters about Andi, A-N-D-I, that's the AI search company that was accepted into YC. But, you know, I ended up publishing a few paragraphs about them instead of a few chapters. Runway, there are only a couple of lines in the book about Runway. I ended up spinning that out into an article for The Information about Runway. But, you know, whenever you write a book, I kind of write long. I have that story in my head, and I commit it to paper, to the screen, and stuff. But I always write long. Every book of mine, and that was my 11th book, you know, I'll cut by 20 or 25 percent.
You know, with this book, I tried to avoid that as much as possible, so I ended up not writing the Runway chapters, because it took me, like, six months to realize, oh, okay, I have to devote a lot more attention to Google and Microsoft than I originally thought. There's so much interesting history. You know, at the beginning of the book, I kind of do a quick run through the history. How did we get here? You know, like, in the 1950s, someone coined the term artificial intelligence, and they were such optimists. You know, for 70 years, AI has been just around the next corner. It's always been a decade away. And, of course, as you mentioned earlier, there have been AI winters, like, oh, we think it's here; oh, no, we don't. Like, you know, back then, almost all of the resources, almost all the attention, was on the rules-based approach, not the machine learning approach. The idea was that, through sheer muscle, line by line, you'd anticipate every single scenario, and that's what artificial intelligence was going to be. It obviously didn't work, or, you know, perhaps we'll have to go back to that and merge the two, but, you know, that was kind of this 50-year wrong turn. And so, you know, I ended up cutting out, not writing or just tossing aside and not publishing, a lot of the material, the reporting I did for the book.

Glasp: Very interesting. And since you touched on the process of writing and editing, I was always curious: what does the writing process look like? I mean, do you have an editor? And first of all, do you structure it ahead of time, like, oh, this is chapter one, two, three, four, or is it more like you interview someone, then write a chapter? How does it work?

Gary: I mean, right? So, you know, every writer has his or her methods and styles. So I have mine, but, you know, it's like, how do you program? There are different approaches to writing. But I always work with an editor. A, I really can't afford to, like, oh, I'm going to spend the next year and a half, two years of my life writing about AI without somebody, you know, kind of helping me pay for that; you get an advance from a publisher. So, HarperCollins, a big publisher, gave me an advance, which allowed me to spend a year-plus researching and then, you know, six months writing. I spent a couple of months rewriting. But, you know, with my editor, we have a good working relationship. This is our third book together. And so, from the start, I would say, like, here are my potential characters. What do you think of, you know, Andi, the search company I mentioned before, the underdog and stuff? And so we're kind of, like, not constantly, but every couple, few months, you know, while I'm doing reporting: oh, I just spent a week out in Silicon Valley. Here's how I'm thinking about things. You know, she really pushed me to include some of my own story in it. I've been covering this since the mid-1990s, and she really wanted, like, okay, you've been writing about this for 25-plus, 30 years. How is this different? How is this the same? So, she's kind of helping me shape it. But, you know, who's kidding who, it's a very lonely process, right? I mean, like, I go out and, you know, I do my interviews, but then I come back, and, you know, I use AI to transcribe the interviews. But even then, you just have a mass of words, and I have to go through the transcripts. And, you know, it's interesting, that second listen. Obviously, I sat through the first interview. But, you know, during the interview you're thinking of different things.
What's my next question? That's when you can listen back to the tape, especially with the transcript, and you're like, oh, wow, that was a good insight. I didn't quite catch that. So, you're constantly reshaping it. So, you know, I don't do a formal outline. There are many writers who do; I do not. But I have a very clear idea. I have all these index cards on my wall. And, you know, kind of: chapter one, history. You know, oh, I want to have these characters, you know, Marvin Minsky; what characters do I have there? What do I want to say in that chapter? Chapter two, let's introduce Reid Hoffman as a character. That was one of those chapters where, you know, Reid has a very interesting life. I mean, not only is he co-founder of LinkedIn, but he was the first investor, with two others, in Facebook. So the $37,500 he invested in Facebook was worth $400 million when it went public. He leaves LinkedIn, finds a CEO to take over, and becomes a venture capitalist. His first-ever investment as a venture capitalist was Airbnb, which returned more than 1,000x for Greylock, his firm, the biggest hit in the firm's history. So he just has a really, plus he's just an interesting character. A lot of founders, and I understand this, they're kind of monomaniacal. They're focused on their company and maybe the realm they're in, AI, whatever realm they're in. But Hoffman's always been, I mean, some of it's just that he's constantly a workaholic, doesn't really take breaks. But he just thinks about the world. He's kind of like Silicon Valley's philosopher. He's one of the best-connected people in Silicon Valley. Everyone knows him. He's a liberal fellow, but even the conservatives, despite the nastiness, or whatever you want to call it, of the Trump versus non-Trump folks, everyone seems to like Reid. And he seems to know everyone and get along with more or less everyone. So that chapter started off probably as 10,000 words.
And in the book, it's probably like 4,000 or 5,000 words. So a lot of it is distillation. I write it, then, OK, do I need that? Do I need that? I shrink it down. I shrink it down. So I have a general idea of chapter by chapter, at least through the first half. And then the second half is like, OK, that's going to be my reporting. So a lot of what I did was follow Inflection, Reid Hoffman and Mustafa Suleyman's company. Like, they're trying to teach EQ to the model. How do you do that? I spend time like, OK, now we're giving it voice. Now we're connecting it to the internet. OK, it's too agreeable. It's just too, oh, that was a really good question. How do you tone that down? It's a kind of whack-a-mole: you wanted it to have a little bit more personality, a little bit more edge, and, oh, OK, now it has too much edge. So that was going to come through my reporting. The reporting was going to dictate the second half of the book. And so again, there are a lot of writers who plot it all out. I just kind of have almost like a map. Like, how do I get from New York to San Francisco? Well, OK, I'll cross this bridge, and I'll hit New Jersey, and then I'll go through Pennsylvania. You know what I mean? I have a general idea, but what does Pennsylvania look like? I don't know. It's just like once I really delve into the material, then I'll have a good sense of who the main characters are, what the main point is, all of that kind of stuff. And then I hand in a draft, and this is where my editor, Hollis Heimbouch of HarperCollins, is invaluable. She doesn't really line-edit at that point, but she gives me feedback. I really like this, but there's too much about this person; I want to hear more about that. I kind of describe Silicon Valley as a place. You guys know it well. It's not really a place, right? It's just highways. But the way I write is through characters, and scenes, and places.
And like, oh, I had that in chapter 16 or 18, or something like that. We need that much earlier. You need to show us the characters and make them real. And so a lot of it was just her giving me notes, giving me feedback. And then I take a month, and I rewrite it. And then it's more she edits it; the publisher has a copy editor, just making sure all the grammar and all that stuff is right, a proofread, a legal review, and all. So it's a collaborative process, except the writer's doing like 90% of the work until the very end.

Glasp: Did you use AI in the process? I mean, you said you use AI to transcribe the interviews. But did you use AI for writing, for editing?

Gary: So the first thing I did, the first thing I did when I had this idea, when I got Reid Hoffman's email, is I went to ChatGPT and said, "Write me a 5,000-word book proposal on AI." And I did more than that. I gave it like, "I want Reid Hoffman to be a large character. I want to make these points," and yada, yada, yada. And it spat it out. It was kind of the first time I used it as a tool. Again, like I said, I had played with DALL-E. But it was like it was magic. It was sorcery. Like, I hit Enter, and within a few seconds, it's starting to spit out the answer. So it really opened my eyes. Like, wow, this is something. I've got to pay attention to this. But two things. One, it's far better read than I am, and has a far better memory than I do. And it was interesting. It had all these little factoids and insights. Like, "Oh, that's good. I should use that." But the second thing is, it can't write for crap. It was GPT-3.5, I guess, at that point. I don't have to worry about my job. I don't have to worry about GPT-4.5, or whatever they're on, replacing me. GPT-10, I don't know. We could talk about that in 10 years. I'll come back on. But it really couldn't write for anything. Still, it was very useful to me. A, it taught me. And you know what really impressed me is watching what other people did with it. Generative AI is not a regurgitation machine. It's original. Like, explain Karl Marx's economic theory in the form of a Taylor Swift song. And it was like, OK, that's original. It's not like that's sitting out there. And it was really good at it. During the dot-com era, I was kind of more of a skeptic. You can't get rich overnight. The internet's interesting, but it's not instant companies, that kind of thing. But here I was not a skeptic. And I used AI every day, sometimes, especially when I was writing, 20 times a day.
Like, I'm struggling. I'd give it a paragraph. I'm struggling with this paragraph. I just feel like I'm not getting to the main point. Or it's too wordy. Cut it. Or I'd give it a section. Like, I need a transition here. And it was never like, I'm going to cut and paste, oh, that's perfect. I'd go through that process. Or, give me five different ways of saying this same sentence that I was struggling with. And it'd be interesting, because, oh, I hadn't thought of it that way. Flip the sentences. Change this. That's the right word. So it was almost like breaking writer's block. Like, I'm struggling. I'm struggling. I'm not making progress. I give it to the machine, and it's giving me some good ideas back. Again, it really can't write the way I write. It's too formulaic. It's too flat. But it was a very useful tool. And then, I should have said this first, a great research assistant. I'm about to meet with this venture capitalist. Tell me, give me their background. What's the specialty of this venture firm? That kind of thing. I'd always use Perplexity for that, though, because they would footnote. And as a journalist, early on, I did one of those lookups for a venture capitalist, and it said she worked at blah, blah, blah firm. Well, she didn't. And I kind of went, OK, watch out for hallucinations, because I don't want to put that into my book. So I made sure that whenever I used it as a research assistant, I could click, oh, OK, that quote is from a CNBC article on the CNBC site. I can trust that it didn't make up that quote. While I was doing my research early on, two lawyers in New York handed in a pleading to a court, and all five of the cases cited in the pleading were made up. They feared they'd be disbarred; they only got fined. And that really was like, OK, watch out. Don't be those lawyers who are on page one because you trusted this thing. I've had interns I've paid in the past, and they make mistakes. I have to check their work.
This is just an amazing research assistant. And it can iterate ideas, like, hey, here's this interview I did. Help me out. What's significant here? What's novel here? So I would use it for that. And then I kind of got in the habit of, read this chapter. Correct any mistakes in bold. Make any suggestions you want. And stuff like, oh, right, I missed this misspelling. Sometimes it was as simple as that. Other times, it had a suggestion for rewriting. Again, it wasn't like I would just take that suggestion. It showed me, oh, there's something wrong with this. So then I had to put it through my human brain. But it would call it out. And then Google's NotebookLM, which is a really interesting product, was just starting to become popular by the time I was finished. And so I got in the habit of, hey, I've written this thing. Here are the two interviews it's based on. Did I miss anything important? And that was really interesting, too. For my next book, I wonder if I'm going to routinely take every single interview I do, put it into NotebookLM or whatever product is the one I use, and have it help me highlight, OK, these are the top five things. Here's the original thing. It's, to me, an amazing, amazing tool. Now, I know some writers who would be horrified when I talk about how I used it as a co-pilot. I used it as a tireless, well-read assistant that's amazing at summarizing stuff. I would use it that way too, like, OK, here's a 47-page paper written in much more technical language than I can understand. Help me understand this. Summarize this for someone who doesn't have a deep technical background. So I find AI an enormous tool. And in fact, this idea that AI is going to replace humans, that's going to happen.
But I think in the short run, maybe the medium term, humans who use AI are going to best humans who don't use AI, because I'm more productive. It amplifies my intelligence. It amplifies my ability to write, again, if you use it right. If I just used it to write something and handed it in, I think my career would be over. I mean, it'd be flat and not very good. And I think you'd get caught doing that, because it is something of a plagiarizing engine; at least, that's what the lawsuit the New York Times filed against OpenAI alleges. So I'm scared of that, too. If it gives me a few lines, how original is that? So again, I use it as almost source material, and then I put it through the human brain, my brain, to rework it and put it in my words.

Glasp: Interesting. And I recently realized the danger in using AI. As you mentioned, if Perplexity or other AI tools show a citation, people tend to trust it. But here's a funny story. We are a startup based in San Francisco, and recently a good friend of mine, an investor, reached out to me: "Oh, I searched about Glasp, and did you guys raise 8.5 million or something?" Because Perplexity showed, oh, Glasp raised 8.5 million from this top-tier investor. But we never did. So yeah.

Gary: Maybe they're anticipating the future. Well, I actually had the same experience. Was it Claude? No, it was another one. I went to it, like, tell me about Gary Rivlin. And it told me that I won an Emmy. And I asked, why did Gary Rivlin win an Emmy? And of course, it couldn't tell me, because no, I have not won an Emmy. And so again, that was kind of an early good lesson, like, OK, these things are sorcery, but double-check. I experienced in a very personal way the hallucination problem. And that was an important lesson to me: don't overly trust these things.

Glasp: Yeah. And regarding AI safety, I remember when Google acquired DeepMind, there was a condition, a clause, like, have an AI safety and ethics board, and don't use the technology for military or surveillance purposes. But eventually, Google walked that back. What are your thoughts on AI safety and these things?

Gary: You know, when I started off in 2023, I was really impressed that AI safety was front and center. All the big companies, at least, and some of the startups, had their trust and safety teams. There have been three international meetings now around AI. The first two were about AI safety. But at the third one, in Paris in January 2025, AI safety took a backseat to winning this competition. I really do fear that with the stakes, again, as I described before, like, oh, Google's model is now the most powerful, nope, now Anthropic's, I do fear that AI safety is taking a backseat. And my worry, beyond just being a human being and worrying that these things are powerful: I tend to be optimistic. I think they'll do extraordinary things in education and medicine and science across the board. But any new technology is pro and con. Television's amazing. Television also means people staring at a box for six or eight hours. A car is amazing, but 35,000, 40,000 Americans die every year in car accidents, and there's pollution. So all technologies cut both ways. I'm convinced that if we're deliberate about it, if we're smart about it, AI will be a net positive. But if we just toss aside AI safety, responsible AI, I fear that it'll be a net negative. And another thing, too, is I just think it's a bad strategy by Silicon Valley. You look at polling, and the majority of people, at least in our country, in the U.S., are mistrustful. They're fearful of AI. Only a minority are excited about AI. And AI had really bad timing. It hit in a big-time kind of way at the end of 2022 and 2023, when mistrust in big tech was at its height.
You know, Google, Meta, Amazon, there's a lot of mistrust of all those companies. And I fear those who are saying full speed ahead. Marc Andreessen's famous for saying the trust and safety teams are an enemy of AI, because AI could do so many amazing things; if you slow it down, you might be responsible for deaths because of all the lives AI could save. I understand the argument he's making, but I want to say back: be careful about getting too far ahead, because I guarantee you something bad is going to happen with AI. I don't know what it is going to be. Maybe a trillion dollars is siphoned off from the global economic system before a human even sees what's happening. I don't know what it is, but something's going to happen. And I think people are going to be really turned off by AI. I wish Silicon Valley were holding the public's hand a little bit more and just reassuring us, like, don't worry, we're testing this. And a lot of that does happen, red-team testing, like, we have a new powerful model, we're not just releasing it. I mean, OpenAI, Anthropic, they all test this stuff. I just wish they were talking about that more. And I wish they were taking it more seriously. Google is a perfect example. You brought it up. They're like, oh, we swear, we pinky swear, we're never going to use this for military reasons or for surveillance and stuff. And then in 2025, they changed that. They're in a race for their life. Their core search business is being threatened by AI. And you read reports all the time from inside Google that folks who had been working on trust and safety were brought over to a different part of the company to get this stuff out there faster. So "just don't worry about it" is a bad strategy.

Glasp: Yes. And it seems like the power is going into a few human hands. Also, there are so many ways AI could impact our daily life and work, and humanity. But which threat should we care about the most? There are so many aspects, as you mentioned: privacy, fake news, and so on. And power concentrating in a few huge hands.

Gary: There's the big energy thing. Within five years, we're going to have to double the number of data centers we have just to power this stuff. It's a huge burden on the electric grid, and there's the climate change angle. Yeah, there is a long list of worries. A lot of these can be managed. But you mentioned the one that is on top of my list, and that's the concentration of too much power in too few hands. This is so powerful, it's so transformative, that it can't be in the hands of a few people in Silicon Valley. I admire Sam Altman. I think he's an extraordinary human being. I don't really trust Sam Altman. Sam Altman is hyper-competitive and wants to win more than anything else. We're talking about AI touching every language, every culture, every everything. It needs to be a global effort. And that's a big fear of mine, that the same few big tech companies are going to dominate this. Again, there'll be lots of other smaller companies, but the large language model is foundational to a lot of what's going to happen. So that's one big fear. I'm also scared of something that probably excites a lot of people in Silicon Valley, and that's autonomous AI. I want a human in the loop. In 20 years, I don't know. But for the foreseeable future, we need a human in the loop. These models don't understand a thing. They're a parrot. They're just repeating. They don't have common sense. We call them reasoning models. They emulate human reasoning. They don't have reasoning. And so that worries me, especially for the more important tasks. An autonomous AI that you talk with as a chatbot, OK, that's one thing. But AI in charge of any essential system is another thing entirely. There are so many things where humans need to be in the loop. So I get really worried about autonomous AI.
What's the term for where it self-learns?

Glasp: Agent.

Gary: Yeah, well, I mean, there are agents, and I think agents will be great in five or 10 years. But if 2025 is the year of the agent, it's a lot of hype. I joke that I think AI is both overhyped and underhyped. And it's this idea, and I've seen this since the mid-1990s, that we tend to overstate the short-term impact of a new technology and understate the long-term impact. We saw that with the internet. We see that with a lot of things. And I think that's AI. And so the underhyped argument is, like, in 10 years, in 15 years, AI is going to be at the center of everything, just like the internet is at the center of everything, just like the phone is at the center of everything. But the overhype is companies today saying this is the year of the AI agent. It's companies today saying that AGI is just around the corner, it's coming next year. Like, OK, I get it. You've raised a billion dollars in venture capital. Your venture capitalists want to hear you say that and stuff. But I do think we're still a breakthrough or two away from artificial general intelligence. So that's the other side. To me, it's individual companies, individual founders, and individual investors who are overhyping this. They're kind of optimistically crossing their fingers: I hope this comes sooner rather than later, because I have to start showing revenue. But the broader point about AI, in a way, I think it's being underhyped, just in the sense that I think it's going to be transformational. I think it's going to be at the center of everything. Again, in 1995, 1998, the internet was Craigslist. The internet was static. It wasn't really that impressive. In 2000, when PayPal started to become popular, you know the problem? 90% of the people were buying something online and then mailing a paper check. This stuff takes a long time. Companies are cautious. People, you have to fight for them to change their habits. And so over time, though, people are going to see the benefit.
Like, oh, wow, OK, I could plan a trip in an hour working with a personal agent rather than spending a day or two. People are figuring it out in the work world. Like, oh, wait, I have a basic report to write. Why don't I just have the AI do the draft, and then I'll work with it, and I could do in a day what would take me two weeks. So I think there are actual use cases where this thing is useful today. But I do think people, businesses, organizations, and society tend to embrace technology much slower than we in our heads imagine they will.

Glasp: Totally. And how does human society address those risks and threats, the power issue, and AGI? How should human society address them?

Gary: I think we need to be thoughtful about it. I think we have to be deliberate. I wish we were talking about it more. I wish there were an international body that was seriously taking this on. I mean, it's hard, because if you say, let's test these models more, let's require them to share information with the government, people are going to say, you're slowing them down. But China, China, China, we're in a race with China. It's hard. But I really do think that we need to be deliberate. Things can break humanity's way if we're deliberate about it, if we're talking about it. And that's a big fear I have about the Trump administration in Washington. The Biden administration was taking baby steps. They were doing modest proposals just so there was more of a discussion, more sharing of information. One of the things they were requiring is, if you have a new powerful model, a model more powerful than the ones that exist right now, you need to red-team it. You need to let the government know this is going on, just so we know what's going on here. And within two days of Trump being in office, they got rid of that. It was an executive order; they could just undo it. And at that Paris meeting in early 2025, J.D. Vance said, let's stop with the hand-wringing, and let's just win this race. And I get it. I mean, there are exciting things to come. This is a race between Silicon Valley and China; they're the ones leading the way here. I get the stakes here. But I think we could do both. I think we could do cutting-edge stuff. I think we could do amazing stuff. And I just wish we would think about it, talk about it, have some basic rules in place, some basic boundaries in place, and say, as long as you're doing this, you're fine. But that's not the approach right now. And I fear that in this country, at least, that's not going to be the approach during the four years that Trump is president.

Glasp: I see. And I'm curious about the future of journalism, because you were a journalist for decades. And do you see the impact? I think AI could have a lot in journalism. But first of all, what is journalism to you, in your words? And also, how do you see the impact of AI on these things?

Gary: Yes. So one basic thing that journalists do is they go out and they talk to people. And so can an AI model create a basic news story? Yes, very well. There's something formulaic about a news article. However, who's doing the interviewing? I suppose you could have AI listening to, watching a city council meeting, and coming up with quotes. But I don't think, with the way the technology is now, it's going to do a very good job of that. But again, what I did was interview 150 human beings. Who is doing that? And that's core to journalism. What's journalism? You're going out and explaining a little piece of the world, whether it's around politics or technology, whatever it is. I'm more of a feature writer, so I'm bringing you to a world. I'm creating something that brings you into that world so you understand what the issues are, what the stakes are, those kinds of things. I guess I am worried for concrete reasons. I've seen different news outlets, CNET, Sports Illustrated, rely on AI and get in trouble for it. At least they got grief for it. CNET backed off a lot of what they were doing. But we're seeing copy editors cut. Because, like I was saying, AI is a pretty good copy editor, meaning the piece has been written, a human editor has gone over it, and the copy editor is just making sure everything's just so. Your verbs agree, if you know what I'm talking about, and all of that kind of stuff. It's very precise. And so I do fear, and this is kind of a general fear for journalism and many other professions, shrinkage. If you used to need 200 people on the editorial side of a newspaper, well, OK, we could cut corners and have more of the copy editing done by AI. It's obviously much, much cheaper. I will say, though, that AI for an investigative journalist is a great tool. It could go through reams, a vast amount of data, finding trends, finding the things you ask it to find, in a way that would take a human forever.
For some of it, it would be impossible for a human. But you can use it as a tool. Again, my construct for AI is as a co-pilot. It's a tool. Can a human go to Runway and have it make a movie like Martin Scorsese would make? No, it can't do that. You have to come up with the storyline. You have to come up with the characters. You have to come up with the dramatic moments. There's a lot of iteration. It's just like, OK, that's close, but it needs to be this, this, and that. The human is still the creator, the driver, and all. But with that said, it's a really powerful tool. So instead of journalism, let's use marketing. A company has a marketing department of 40 people. Well, I'm imagining in two or three years, that's gonna be a marketing department of 20 people, because of the basic research the AI could do, the basic report writing the AI could do. Instead of having three or four illustrators, why don't we just have one illustrator and use AI? It could produce a hundred different versions, and you could choose and fine-tune and stuff. So to me, journalism, just like many, many other professions, is gonna have a shrinkage, where in the past it took X people to do something, now half that number is gonna be able to do it.

Glasp: I see. At the same time, Google and Facebook's business models rely on advertising, meaning the more users engage with the content, the more money they can make, right? It's part of the problem, yes. So how should we balance this kind of business model with journalism? We should deliver the true story to people, the important news, not fake news.

Gary: Well, that's part of the reason you need humans, to help discern fake news. And by fake news, that term has become so loaded. You're talking about deepfakes. You're talking about fabricated, truly fake news, not "I don't like your coverage" fake news. But I've kind of lost sight of the question. I'm sorry, ask the question again.

Glasp: Oh, the question was about the mindset of journalism, what journalism is, and also how the advertisement-based business models of Google and Facebook cut against that, and how we should balance them in the future.

Gary: No, it's interesting. I don't know. I've spent time on this. So Google, when it first came along, was a godsend for journalism sites, right? It would send people to the news sites and stuff. And the news site would get a cut of the advertising. Over time, Google, and Meta doing the same thing, took a greater share of that money. But at least there was some cut of advertising revenue. At least they were sending some traffic, right? If you have your 10 blue links, like, oh, let me click on the New York Times story. Now I'm on the New York Times site reading about it, seeing an ad. And so there was an upside for the New York Times. The real problem with AI is that it's amazing. I don't have to go to the New York Times site anymore. The AI is gonna sum it up. It's not delivering me blue links. It's delivering me the answer. It's kind of what many people love about it. It gives you your individually crafted Wikipedia answer. Whatever your question is, I don't have to painstakingly go through all these links. It just gives me the answer. Again, with footnotes, because I'm a journalist and stuff. And so AI is a great threat to journalism. How will a Gemini, how will ChatGPT make money? Like, OK, there's a subscription service, and a small portion of people are paying for the subscription service. I pay for a couple, but the truth is I don't pay for ChatGPT. It's still terrific. The almost-as-good free version is still terrific. And so I don't know how these things are gonna monetize. I mean, there are a lot of different AI companies out there doing interesting stuff. And unless they're able to say, OK, I'm doing this thing for law firms, and law firms will pay us a lot of money for this, that's how they're gonna do it. But a lot of it, I think, just like in the internet age, the business model is the big question.
This is astonishing. We have users, but how are we ever gonna actually make profits? How are we actually gonna make more revenue than this thing costs? I mean, OpenAI ain't there yet. Anthropic isn't there yet. Gemini is, excuse me, Google is a profitable company, but I'm sure they're not making money on any of their AI. They're not making profits on any of their AI. So to me, that's just an open question. That's what makes me fear for any startup that's competing with a Microsoft or a Google. I mean, Google has a hundred billion dollars in cash lying around. They generate tens of billions of dollars in profits. They can fund themselves in a way a startup can't. And in fact, in my book, Inflection, Reid Hoffman and Mustafa Suleyman's company, they end up, Suleyman and most of his folks at Inflection went to Microsoft for a payout, because Suleyman believed that none of these AI companies, he said, like, I just saw into the future. That Anthropic, maybe OpenAI, I didn't ask him specifically, that's kind of a fraught one with him, given Microsoft's stake in OpenAI, I'm not sure he would have answered. But it was his feeling that none of these freestanding models, none of these models at an Anthropic or an OpenAI, are gonna survive, because it's a commodity for the foreseeable future. No one's making money on this. And they're figuring out interesting ways, APIs, whatever, kind of thing. And so it was his judgment, like, no matter how great a product we have, we're still years away from actually making profits, i.e., we're gonna have to raise five or 10 billion dollars next year, maybe 20, 50, 100 billion dollars a year or two afterwards. So he thought, that's just not worth it. I'll just use Microsoft's money to create my product.
I'll just use Microsoft's data and take advantage of Microsoft's reach; they're on a billion-plus machines and stuff. So it's kind of an open question. Will these things start making ample money, where they could be a freestanding business, or will an Anthropic or an OpenAI have to retreat and be bought by a larger company?

Glasp: Nice. And that reminded me of the double-edged sword of using AI. I've seen this story somewhere on the internet: AI is helping a lot with productivity, but law firms usually charge for the hours they work, so if AI helps them work so fast, they can't make as much money. So how should we balance the use of AI, or something like that?

Gary: And then look at Meta, or Mistral. There's a funny expression in Silicon Valley: my company's business model is destroying your company's business model. And that's Meta, right? Like, we'll just offer this stuff for free and just create an environment where we'll make money in other ways and stuff. And so arguably this stuff will be free. I don't think anyone in 2022 thought, oh, wait, open source is gonna make these models, LLMs, text-to-video, whatever, that are almost as good. They're almost as powerful. In some ways, they're better because they're smaller, whatever. But the idea that you'd have open source as a competitor so early on, I mean, Mustafa Suleyman brought that up, others bring that up. So it's like, you might even be competing with free. And by the way, if I were a large corporation, maybe I'd rather use open source. I wanna tailor it for me. It's not this closed thing. I can look at the source code, and I can make it work for me. I have the talent, I have the people here. So AI is at such an interesting period right now. I'll kind of go back to where I started. This stuff is moving so fast, and there are all these curveballs being thrown in. Like, who knows where we'll be a year from now?

Glasp: Totally, yes. So you've written an amazing book, AI Valley now. So, what's next for you? And are you thinking about writing another AI-related book, or what are you into now?

Gary: Yeah, no, I'm always thinking about writing my next book. I've been a daily newspaper writer, I've been a magazine writer, I've written for trade publications. Books are my favorite thing to do. I'm not happy unless I have another book idea in my head. I have a couple of ideas. I'm fascinated by AI. I'd love to write another AI book. I mean, I spent the last two and a half years getting smarter about it, so take advantage of that. I'm thinking of doing something kind of AI-adjacent. We brought it up during this conversation, around energy and AI, nuclear, whatever it is. There is this idea in Silicon Valley, especially, where there's a lot of optimism, that AI is gonna help us get to abundant energy, energy that's near free. And then the set of issues we've spoken about a couple of times during this conversation would no longer be an issue. So I'm very interested in AI and climate change, AI and energy and stuff. But I don't know, I've got a few ideas. You got any good ideas for me? What should my next book be about?

Glasp: Interesting. Sorry, one quick question, because you've written so many interesting books. The last one was Saving Main Street, and there was also the Katrina book. How do you pick those ideas?

Gary: Yeah, I mean, that's changed over the years. Initially, I always had a beat; in journalism, that's your topic area. So for half a dozen years or so, five years, I wrote about Chicago politics. And my first book grew out of that, like, oh, there's an interesting book to be written about this moment in Chicago history, for this set of reasons, and it's interesting to the wider country. In fact, that's where Barack Obama got his political start, in Chicago during this period. And the same thing happened when I was in California, in Oakland. My second book, around youth violence, I was writing about that for two or three years. And I saw, like, oh, wait, this is an important story, kids killing kids; I was kind of too soon on that, because at that point it wasn't a very big issue. And with Katrina, I was working for the New York Times, and they sent me to New Orleans after Hurricane Katrina. That book came out at the 10-year point. All these folks, 80% of the city was covered in water, which means roughly 80% of the people, their home was destroyed. Should they rebuild? Could they rebuild? Would anyone help them rebuild? So I followed these families that I had met as a reporter for the New York Times, and followed their journey. But now I'm independent; the last 10 years I've been independent. So I'm not, in quotes, working a beat and stuff. So with Saving Main Street, my dad was a small businessman his whole working life, and it was like, wow, how are small businesses going to survive this?
So that's just kind of an idea that came to me early on in COVID, you know, AI, I just, you know, it's sort of random, it was sort of serendipity, I just happened to receive an email at the right moment. So you know, I, this part of me that longs for having, longs for covering a beat, because then it more naturally grows. Now I'm just sort of almost waiting for inspiration, you know, kind of, kind of thing. So.

Glasp: Thank you. Our audience includes writers, aspiring writers, founders, and so on. Do you have any advice for them?

Gary: I love the idea of 10,000 hours, the notion that to get good at anything, including writing, you have to put in the time. Ten thousand hours is basically five years: 40 hours a week times 50 weeks is 2,000 hours a year, so five years gets you there. I was not very good when I first started. It's just practice, and I had a couple of mentors who helped me along the way. People always say follow your passion, and okay, it's important to be passionate about something, but maybe I'm passionate about rap music; I'm not sure there's a career there for me. Find what you're good at and what you like to do, and then practice, practice, practice. I turned myself into a journalist. I welcome anybody to read my early articles; you wouldn't think, oh, this guy's going to write 11 books. So stick with it. The other thing is that I write for a living, but I joke that I rewrite for a living. By that I mean: get the rough draft down. Ten thousand pages of interview notes and articles is overwhelming, but once I've written a 400- or 500-page draft, now I just have to make that draft better. I call it shrinking the world. I've probably read every word in that book four or five times, rewriting as I cut. Can I say in 75 words what I took 100 words to say? It's almost the Zen of editing: saying it in fewer words is a good thing. So I rewrite a lot. Again, if you read my drafts, you'd think, oh, this guy can't write very well.
But the good thing is you don't read my drafts. I get to go through it three or four times, and I have an editor who weighs in on what's working and what's not and gives me advice. And even if you put something down on paper and think, this is no good, you shouldn't judge yourself that way. Work it, rework it, rework it. And then if you still think it's no good, then we should talk.

Glasp: But how do you decide you're done? You could edit infinitely. How do you decide it's okay? Is it your gut feeling saying it's 95 percent there, or 85 percent?

Gary: I'm so sick of it after reading it four or five times. I'm pretty confident once I've gotten through it several times and gotten feedback from my editor. There's a long-deceased writer who joked that he was disappointed to learn he could no longer make corrections once the book was in the bookstore. It's funny, right before I go on the publicity circuit and do a book tour, I'll reread the book with a pencil and make changes. Toward what end, I guess for the paperback. You're right, you're never done. But in this case, I got that AI is a hot topic, and mine was among the first books out about this new AI wave, so I had a big incentive to say, okay, this is ready. I'm very happy with it; I think it's my best book. I'm not saying I shortchanged it and put slop out there. But I was keenly aware that this is a competitive space, there was a hunger for AI right now, so get this out as soon as I can.

Glasp: Yeah, I was always wondering, why can't we change a book after it's published? I mean, we can for Kindle ebooks and other ebooks.

Gary: Well, articles do that; you make corrections online. It's funny, I actually wrote one of the first ebooks ever, for Random House in 2001, and there were two things I thought were amazing about an ebook. One, you could write a 30,000-word book. In print that would be 100 or 120 pages, and no one's going to buy a 120-page book; they'd feel ripped off. There's an article, and there's a book, and there's a lot in between. A 10,000-word article is a really long article, and a 60,000-word book is on the shorter side for a book. So I liked the idea that in an ebook you write the right number of words: if it's a 37,000-word topic, write 37,000 words. The other thing I liked is exactly what you said. I know there are a few small mistakes in my book, nothing that changes the meaning of anything, but it irritates me and I wish I could change them. In an ebook, you can. With this book, I have to wait for a second printing or the paperback to come out to make changes. So I'm with you; I always thought it was an enticing idea that you can update an ebook.

Glasp: Yeah. Thank you. Okay, this is the last question. Since Glasp is a startup where people share what they're reading and learning as their digital legacy, we want to ask: what impact or legacy do you want to leave behind for future generations?

Gary: Okay, I'll leave aside the fact that I have two sons, and that raising them is the most important thing I'm doing in life. As far as my work, I always want to have an impact. By the time I was writing this book, I had strong feelings about AI, some of which I've shared during this conversation. I'd like to think I'm having an impact, putting a spotlight on some injustice, dramatizing a set of issues so people can think more thoughtfully about them. As far as my legacy, I'd like to think that for future generations I help define what they think of Hurricane Katrina and what happened there. I'd like to think that for people interested in Chicago politics, long after I'm gone, my books and my articles are still here to help them understand that interesting moment in the city's history, or what COVID was like for small businesses. In fact, there's an AI angle to this. A lot of writers are very angry, needless to say, that their intellectual property was used to train these models without permission and without compensation. The permission part doesn't bother me; I would like a little bit of money for the work I did. But the truth is, if I found out that someone was getting an answer about Chicago politics, or about New Orleans after Hurricane Katrina, from a model that hadn't been trained on my books, I'd be disappointed. In fact, I was very pleased: The Atlantic magazine put out a link where you could plug in your name, I think it was for Meta's model, and find out if your work or your book might have been used to train it. I'll be honest with you, I was really relieved that every one of my books was listed.
Again, I'd like a little compensation for my work, but more importantly, I really wanted to think, okay, if you ask that model what happened in New Orleans after Hurricane Katrina, I helped inform that. That's my legacy. I'd like to be informing the conversation long after I'm gone. I wrote a lot about the internet and the dot-com era, and a couple of books around that, and I'd like to think I'm helping to shape that conversation even after I'm done on this earth.

Glasp: Yeah, totally, and a beautiful one. What you said reminded me of a famous quote attributed to the Dalai Lama: share your knowledge, it's a way to achieve immortality. It's an interesting way to see your life and legacy. Again, thank you so much for joining today. We really enjoyed learning a lot from you.

Gary: This was a lot of fun.

Glasp: Thank you.


Follow Gary Rivlin on social

Twitter
