How to Build a Safer Future: AI’s Role in Cybersecurity and Beyond | John Whaley | Glasp Talk #25

This is the twenty-fifth session of Glasp Talk!

Glasp Talk delves deep into intimate interviews with luminaries from various fields, unraveling their genuine emotions, experiences, and the stories behind them.

Today's guest is John Whaley, a visionary in cybersecurity and AI. John is the founder of Inception Studio, a nonprofit accelerator empowering the brightest minds in AI, and Red Code AI, a company dedicated to combating AI-enhanced social engineering attacks. With a PhD from Stanford and a track record of founding successful companies like UnifyID and Moka5, John stands at the forefront of cutting-edge technologies in programming, cybersecurity, and AI.

In this interview, John shares insights into the evolution of AI, his journey from academia to entrepreneurship, and how he’s fostering innovation through Inception Studio. He delves into the implications of AI advancements for cybersecurity, his perspective on AI’s future, and the challenges of social engineering threats. Join us as we explore John Whaley’s inspiring career and his thoughts on building impactful AI-driven companies for the future.


Read the summary

How to Build a Safer Future: AI’s Role in Cybersecurity and Beyond | John Whaley | Glasp Talk #25 | Video Summary and Q&A | Glasp
- John Whaley, founder of Inception Studio and Red Code AI, emphasizes the importance of AI in modern cybersecurity, especially in combating social engineering attacks.
- He highlights the unique model of Inception Studio, a nonprofit accelerator focused on fostering AI innovations without taking equity.


Transcripts

Glasp: Welcome back to another episode of Glasp Talk. Today, we are excited to welcome John Whaley, a visionary in the world of cybersecurity and AI. John is the founder of Inception Studio, a nonprofit, community-driven accelerator focused on empowering the brightest minds in AI, as well as Red Code AI, a company dedicated to protecting people from AI-enhanced social engineering attacks. With a PhD from Stanford and a track record of founding multiple successful companies like UnifyID and Moka5, John has been at the forefront of cutting-edge technologies, from programming, program analysis, and optimization to virtualization and cybersecurity. John is also a passionate educator, teaching courses at Stanford on building apps with LLMs and sharing his insights on next-gen AI technologies. So today, we'd like to ask him about what's happening in AI nowadays and how AI will impact our lives in the future. Thank you for joining today's show!

John: Yeah, great to be here, and thanks for the introduction. I've been working in this space... I've been here in the Bay Area for almost 25 years, since I came to Stanford, and it's very interesting to see all the different waves of things that have happened since then. I remember working in AI back when it was still the "AI winter." Everything is coming full circle, and now AI has become such a hot thing, which is very interesting to see. But I've always been very technical and very much a builder. Even when I was doing my PhD at Stanford and planning to be a professor, all of my research was about building things. My research areas were compilers and program analysis, but it was really about how you help software engineers build things faster, better, and provably correct, those sorts of things.

So even my research was very much engineering-focused and always has been. That's always been my background: very much being a builder, starting with writing code and being a software engineer, then eventually doing research and helping other people do that, and ultimately building companies. And now I'm at a meta-level of sorts, because through Inception Studio I help other people build companies. So, it's been a lot of fun so far.

Glasp: Cool. So, first of all, I'm curious about Inception Studio. I know what Inception Studio is, but could you explain, for people who don't know, what it is? And why did you start the project?

John: Yeah, the backstory there is that my last company got acquired almost four years ago, and I was looking for my next thing to do and not making a ton of progress, because my life was just too comfortable. This was in 2021, going into 2022, still pandemic times and everything, and I was just kind of stuck. I knew I wanted to do another company, and all of these things were happening with GPT-3, which had actually come out in 2020 and seemed to be able to do some pretty interesting things. So we knew this was going to be big, but I wasn't making much progress toward a company. And part of that is that I know myself: I'm very deadline-driven. If I don't have a deadline, nothing happens; I just end up procrastinating and not getting much done. So I needed to have a deadline.

So, I knew what I needed: to go away somewhere, remove all my distractions, be surrounded by very smart, creative people, and, most importantly, have a deadline, right? That was the genesis of the very first event we did, back in November 2022. The topic area was LLMs and generative AI. We got a bunch of amazing founders there, and quite a few companies were either launched there or ended up growing out of it, which was really exciting. Part of the reflection on why that first event went so well was that we kept the quality bar high and were able to attract good people. So when we thought about doing more of these events, we wanted to keep that quality bar high and avoid the problem of adverse selection, where the best people would choose not to go. And that's what we did. We've run 12 of these events so far, these cohorts of founders, and 144 founders have come through the program. It's been very successful, because it turns out that when you curate a group of really amazing people who are ready to start companies and put them all together, they end up starting companies, and those companies end up doing extremely well. That's been the premise of what we're doing. We run this as a nonprofit and don't take any equity in the companies, again because we want to focus on quality and get the best people together. We're a 501(c)(3) nonprofit, so we just ask people to donate, if they're able to, to cover their costs for room and board.

And if they can't, that's fine as well; we can waive the cost. Because of that, we've been able to attract some great founders, and some really good success stories have come out of Inception so far. But this is a different model than you see from most other accelerators, which most of the time are just looking for people who will take some type of deal, like "I'll give up 7% of my company, and they'll pay $125K for it." That's a typical deal, or sometimes even more, up to 10%. That might work well for early-stage, first-time founders, but if you work in a really hot space, or you're a serial entrepreneur who's had success in the past, or you already have a lot of connections to investors, etc.

People like that don't need to join that type of accelerator program where they're giving up a lot. So those accelerators have a problem with adverse selection: they're not able to attract the very best people. We wanted to avoid that. That's why we don't take any equity, that's why we're a nonprofit, and that's why we've been able to attract some amazing people. There's inherent value in curating a group of amazing people and putting them all together. So now we've run 12 of these retreats, and there's also a really strong founder community there as well. I never pegged myself as a community-organizer type at all, but it just happened over time, and that's been a really exciting part of the journey for me, exploring that side of how to organize a community of founders, right?

But yeah, everything has been great so far. Our next event is coming up at the end of September 2024, we've been running these events every six to eight weeks now, and we're talking about expanding internationally as well; we're planning an event in Japan and in other locations too. The truth is that there are a lot of great founders from many different areas and many different walks of life, and I'm a big believer that the best teams are the most diverse teams. This is something we put conscious effort into and think about when we're putting together cohorts of people: not just having all the same type of person, because if we did, the event would not be nearly as successful as it is, right?

If it were all just a bunch of engineers, it would just be a hackathon: people would hack up some solutions, but they would never really turn into anything because they wouldn't become viable businesses, right? On the other hand, if it's too many product managers together, that's not going to turn into anything either, or business people, or any of these other areas. The worst case would be if it were all CEOs; that would be a disaster! Maybe that's more like a reality TV show or something. Because the truth is, you need all types. You need people who can be CEOs and people who are not CEOs. You need people who are technical. You need people who understand the product. You need people who can sell or who understand go-to-market and all of this. Ultimately, the teams that have that good mixture of skills among the founders or early employees are much more likely to be successful.

And this is borne out statistically. There's a book called The Founder's Dilemmas that goes into this in depth, analyzing what types of founding teams are more likely to be successful, and it shows that teams with some diversity early on are statistically more likely to succeed. So I hate situations where somebody is talented and ambitious and, for the good of the world, should be starting a company, but because of circumstances outside of their control, discrimination or just society and those types of things, they're not able to fulfill their dream and their destiny.

I just hate those situations, right? And it makes me very much want to fight for and support people in that type of situation. So far, I think Inception has been 25% women, and we're working to increase that, because I believe there are so many women who can and should be starting companies and, for a variety of reasons, have not been able to; we want to help support that. It's similar in regions like Japan and other places where entrepreneurship is maybe not as common, but there are a lot of very talented, ambitious people, and I want to do everything we can to support people in those circumstances as well.

Glasp: Yeah, listening to this, I totally understand. And by the way, I went to the first pilot cohort, and that was amazing. But I don't know what's been happening in the recent cohorts. Have you found any interesting projects or ideas there?

John: So many! I mean, this is why I love doing this: you get to interact with so many interesting people with different backgrounds, but they're all very ambitious, they're all very accomplished, and they've all been thinking about this for a while. And just hearing about some of the very interesting things that people are doing... Now, the very first event we did, the one that you went to, explicitly had a topic area of large language models and generative AI. That was because we did it in November 2022, about two weeks before ChatGPT came out, right?

Glasp: Yeah. Very good timing.

John: Yeah, very good timing there, right? And the reality is that it's not like we're trying to restrict and focus only on AI founders. It's more that it's just a very hot area right now. There's a lot of interest in it, and there's this new capability, large language models, that didn't exist in the past and now does. It opens up so many new opportunities that I think it's natural that a lot of founders are very interested in building with these tools and in these areas. So it's not...

John: So, it's not... because the danger is that if you start from generative AI, from "We have this amazing tool called GPT-4" or whatever large language model or generative AI model, then you end up with a hammer looking for nails, and you end up not solving real problems for customers, right? I think it's much better to start from the problem, and then if AI is a solution, that's great. But there are a lot of cases where AI is not the best solution, and that's fine as well. That being said, I would say most of the companies we deal with through Inception are now fully embracing AI, and not only in terms of the product they're building but also internally, for their company.

John: Because there's an opportunity now, by utilizing the latest tools, to act like a much larger company with far fewer people, leveraging these generative tools to handle a lot of different things about your company. For the people who are fully fluent in these tools, it's almost like a superpower right now: you end up being five or ten times more productive than people who aren't using those tools or don't understand how to use them. That has been a really interesting evolution recently. It used to be that if you wanted to do this, you had to scale up a pretty big team and hire a bunch of people. That's a lot less true now. You can get away with a much smaller team, just a handful of people, and effectively punch way above your weight if you know how to use these tools effectively, right?

John: So when we talk about AI-native companies, it's both: yes, they're building products that are AI-native and they have a data strategy that understands the value of data and all of this, but they're also utilizing AI tools themselves within their companies, which gives them a huge competitive advantage over the larger companies that are not embracing these types of tools as closely.

Glasp: Yeah, and I've seen many interesting projects come out of Inception Studio in the past, like Friend or Frame, and that's impressive.

John: Yeah, we have a lot of great companies. I mean, I could go through each of them, and I'm very excited about each of them for a variety of reasons. Friend was one of the first ones where we actually started to get into the hardware space as well. There have been some high-profile failures in this space, where products that were released were just not very good. That being said, I still think someone's going to crack it; there's a lot of promise in that area of AI wearables. Frame is another one you mentioned, very interesting and exciting. This is the type of thing where Josh Payne, the founder, came to our cohort three, and when we were talking about different ideas, he described what he's doing, which is: I have a webpage and I want to do A/B testing, and you use generative AI to generate your different variants, and then reinforcement learning to automatically improve your website. I was like, "Of course, that makes so much sense," right? Not only that, he took it in an even further direction, talking about the future of user interfaces being these kinds of generative-AI-driven, living user interfaces. And that's the benefit we have working with founders who are high caliber and ambitious: they're not thinking small, they're thinking big, and they're thinking about things that have a big impact.
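To make the "generate variants, then let reinforcement learning pick the winner" idea concrete, here is a minimal sketch of one common approach: an epsilon-greedy multi-armed bandit over pre-generated copy variants. The variants, reward signal, and numbers are illustrative assumptions, not Frame's actual algorithm.

```python
# Minimal sketch: epsilon-greedy bandit choosing among generated page variants.
# The variants and the reward signal (e.g., conversions) are illustrative only.
import random

variants = [
    "Try Acme free for 30 days",            # e.g., produced by a generative model
    "Start your free Acme trial today",
    "See why teams switch to Acme",
]
counts = [0] * len(variants)      # times each variant was shown
rewards = [0.0] * len(variants)   # accumulated conversions per variant
EPSILON = 0.1                     # fraction of traffic used for exploration

def choose_variant() -> int:
    """Pick a variant: usually the best-performing one, sometimes a random one."""
    if random.random() < EPSILON or 0 in counts:
        return random.randrange(len(variants))
    return max(range(len(variants)), key=lambda i: rewards[i] / counts[i])

def record_outcome(i: int, converted: bool) -> None:
    """Update statistics after observing whether the visitor converted."""
    counts[i] += 1
    rewards[i] += 1.0 if converted else 0.0

# Simulated traffic: variant 1 secretly converts best in this toy example.
true_rates = [0.05, 0.11, 0.07]
for _ in range(5000):
    i = choose_variant()
    record_outcome(i, random.random() < true_rates[i])

best = max(range(len(variants)), key=lambda i: rewards[i] / max(counts[i], 1))
print("Winning variant:", variants[best])
```

In practice, the same loop keeps running as new variants are generated, so the page keeps improving instead of stopping after a single A/B test.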

A lot of times, when you have an early-stage, first-time founder, they're thinking, "Oh well, look, I can make a viable business doing this," and it's like, "Okay, that's fine," but... I learned this early on: just because you can solve a problem doesn't mean you should, because your time is very valuable, right? Even though you can say, "I can solve this problem" or "I can build a viable business in this area," the truth is that if you're talented, if you're a good engineer or you have other skills, there are a lot of things you could do, right?

And so what separates highly successful people from people who don't achieve that level of success is often a little bit about talent, but even more so about your taste in the problems you work on, right? If you have good taste in problems to work on, you're much more likely to hit those levels of success. That's often what separates the people who merely have talent from the people who reach that next level of success, right?

And so I think that's some of the benefit again. What I mentioned about Inception: maybe around 70% or so are serial entrepreneurs, people who have started at least one company in the past, but almost 30% are first-time founders as well, right? We're looking for people who have that combination of talent and ambition. They're not just looking to make a viable business in some area; they're looking to build the next great company or change the world in very particular ways. That's what we look for and filter for.

By curating a group of people who are all like-minded in that way, it ends up being a lot of fun, and there are a lot of synergies, because you're getting people together who are all at about the same stage, working in similar areas, all ambitious, all trying to start companies, right? There are a lot of positive synergies that happen between those people.

Glasp: Yeah.

John: And some of this I've been learning over time as well; I didn't really anticipate it. Whenever you're in any type of program where there's a cohort, a group of people together, there's an opportunity. Sometimes you get this kind of sibling-rivalry thing, where you see other people doing well, and you know them, and you don't want to fall behind. It gives you that little bit of extra push: "Well, I want to keep up, I don't want to fall behind, so I'm going to push forward as well," right?

This was something I did not anticipate at all, but there's some aspect of that as well. It's not so much directly competitive, like, "Hey, we're all being graded on a curve and only a small number of us are going to get A's and everyone else is going to get B's or C's." It's not like that. It's much more like being on a sports team together, right? We're all pushing each other in the same way because we're all part of the same experience, right?

And to do that in a kind of environment where it's more collaborative than competitive, but it's kind of like a gentle competition, in terms of people inspiring other people to kind of, "Oh, well I should do this too." I've seen this happen so many times where one of the Inception companies launches on Product Hunt, and another one is like, "I got to launch on Product Hunt too." And this happens again and again, and it becomes this positive feedback cycle, which is just really amazing to be part of. So yeah.

Glasp: Yeah, and did that happen to you as well? Because you started Red Code AI out of Inception Studio, right? Could you tell us what Red Code AI does and why you decided to start it?

John: Yeah, so Red Code AI... here's the backstory. My first two companies were in cybersecurity, so I had a lot of experience there. But I was pretty convinced that my next company would not be in cybersecurity; I was looking to broaden out because I knew all the downfalls of starting a cybersecurity company. But what happened was that at one of these Inception cohorts, in the early days, I was still looking for my next thing as well. So I would join some of the teams and work with them, because part of it is fun, and part of it is that I want to explore different ideas.

It was actually in cohort three that we formed a team and began working on the idea behind Red Code AI. At the beginning, I thought, "Maybe there's not that much here," but as we talked about it more, it became pretty interesting. So, basically, what we do at Red Code AI: LLMs and generative AI are the biggest change I've seen in my career in cybersecurity, in terms of their implications, and nowhere more so than in the area of social engineering.

Social engineering in the past was really easy to detect. If somebody sends you a scam message, it has misspellings, bad grammar, and things like that. You could usually tell pretty easily. Now, with generative AI, you can make highly targeted and perfectly fluent attacks—not only with text but also with voice, and now even video. This has become very possible, and of course, with every new wave of technology, it's always scams and porn that are the early adopters. AI is no exception there, right? So this is... yeah.

A lot of scammers have started doing this. You've probably seen smishing and other types of messages; they're up 1700% year over year. It's become a real problem, and the quality is getting better and better. These attacks always existed, but now you can do them at scale. It's like the equivalent of a "script kiddie": using just a few tools, they can run highly targeted attacks at scale. People fall for this stuff all the time.

So this is where we thought, "Somebody needs to solve this problem." It's a tricky problem to solve. We came up with a two-part solution. One is a product we call Defender, which helps prevent and detect social engineering attacks. It uses LLMs for good, using them to analyze text from emails, text messages, WhatsApp, Telegram, LinkedIn, and anything else. We can then classify very accurately whether something is a social engineering attempt or not, not just by looking at keywords but by understanding the intent behind the message.
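As an illustration of the general approach John describes, here is a minimal sketch of using an LLM to judge the intent of a message rather than match keywords. The OpenAI Python SDK, the model name, and the label set are illustrative assumptions; this is not Red Code AI's Defender.

```python
# Minimal sketch of LLM-based intent classification for incoming messages.
# Assumptions: the OpenAI Python SDK (v1.x), the "gpt-4o-mini" model name,
# and the label set are illustrative choices, not a vendor's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You classify messages for social-engineering risk. "
    "Judge the INTENT of the message, not just keywords. "
    "Reply with exactly one label: BENIGN, SUSPICIOUS, or MALICIOUS, "
    "followed by a one-sentence reason."
)

def classify_message(text: str, channel: str = "email") -> str:
    """Return a risk label and a short rationale for a single message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Channel: {channel}\nMessage:\n{text}"},
        ],
        temperature=0,  # keep the labels as deterministic as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_message(
        "Hi, this is your CEO. I'm in a meeting and need you to buy "
        "$500 in gift cards right now and send me the codes."
    ))
```

The key point is the prompt: the model is asked about intent (urgency, impersonation, requests for money or secrets), which is what lets it catch fluent, well-written attacks that keyword filters miss.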

That's what generative AI gives us: the ability to revolutionize the way we detect these attacks. So that's Defender. Then we also have a product called Pretender, which is the offensive version. You give it anyone's profile, and it will generate a fake version of that person that you can use to send text messages, reach out on LinkedIn, or even make phone calls. Soon, we'll have video there as well.

The intent here is to inoculate your workforce. You want to protect them against these types of next-generation threats. The best way to do that is to show them what’s possible. So the next time they receive a phone call that sounds like the CEO asking them to buy gift cards, give up company secrets, or wire money, they’ll know, “Oh, I should go through the proper processes because just hearing the voice doesn’t mean it’s real.” Deepfakes exist and all this stuff, right?

So both parts are important. We just saw this tsunami coming for cybersecurity, and the industry is completely unprepared for it. We understand what’s possible and how to deploy these things for defense. We almost felt compelled, like, “We have to start this company because somebody needs to solve this problem.” We didn’t see any incumbents being in a position to solve it because cybersecurity is a very conservative field. It’s good that security should be a little conservative, but none of the vendors were anywhere close to having the right approach to solve this problem.

We knew what it took, so we started the company with co-founders who met at Inception as well. It's very meta, right? We started the accelerator to find companies and then started our own company out of the accelerator program.

Glasp: That’s amazing. But as a normal person like me, how can I recognize, “Oh, this is a scam?” How can I proactively defend against it?

John: Yeah, how do you know? How do you know this is the real me, right? That’s an interesting question. The truth is, today, for video, the generation technology isn’t quite perfect, although it’s getting so good. About a year ago, I don’t know if you’ve seen this thing—generative AI for the “Will Smith eating spaghetti” video. If you haven’t, Google it. It’s really funny. That was the state of the art around 2023. Now, compare that to Sora and other technologies, especially for virtual avatars. There are off-the-shelf services now where all you need is a single image, and for voice cloning, just three seconds of audio. With that, you can create a very convincing deepfake of someone talking. It mimics the expressions, everything—it’s astonishing.

The trajectory we’re on is that it’s not perfect yet, but it’s getting there. Not too long ago, the best practice for detecting deepfakes was to wave your hand in front of your face or turn to the side, because the software would glitch. It's like how people used to count fingers in generative AI images to spot fakes, but now the latest models get that right. Any trick you use to detect a deepfake will soon become obsolete.

What we focus on is not whether something is real or fake, but rather the intent behind it. Whether it’s AI-generated or human-generated is just one data point. What matters is: is this malicious or not?

John: So, like I said, whether something is real or fake isn't always the core problem. The core problem is, "Is this malicious?" because sometimes even real humans are doing these attacks. There's something called shallow fakes, where it's real media—real video, real audio, real text—but it's taken out of context. So it passes all the checks, and it’s like, “Yes, this is real,” but the context has been manipulated, which can still trick a lot of people.

Before deepfakes, shallow fakes were the primary technique scammers used, and they’re still very effective. That’s why focusing on whether something is AI-generated or not isn’t necessarily solving the problem. The problem is, “What is the intent behind this message, and is it trying to manipulate you into doing something harmful?”

Glasp: Yeah, that makes sense.

John: Right. And while we do have some detection technology that can identify AI-generated content, that’s an arms race. The attackers will always find new ways to evade detection, so trying to outsmart them on whether something is AI-generated or human-generated won’t win the game in the long run. That’s why focusing on malicious intent is a better approach.

Glasp: That's interesting. And I saw today that Ilya Sutskever, the co-founder of OpenAI, raised $1 billion from notable investors for his new company focused on safety and superintelligence. Have you seen that?

John: Yeah, I did see that. I mean, with Ilya and the others on that team, it’s kind of like... it almost doesn’t matter what they’re working on—they were going to raise money. With a star team like that, especially now, there’s a lot of capital and interest in AI. Things are as hot as they’ve ever been, and there’s a lot of interest in AI safety, AGI (Artificial General Intelligence), and even ASI (Artificial Superintelligence).

There’s a lot of promise around those ideas. My personal view hasn’t changed much in the last few years, even with the incredible advances in AI models. A lot of what we’re seeing is still kind of like parlor tricks. The models are just doing recall—spitting back things they’ve memorized from the vast datasets they’ve been trained on.

For example, you ask it a question, and somewhere in its training data, something similar existed, so it’s not really reasoning in a human sense. But then people are amazed because the model can output something coherent. But that's because it’s trained on the entire internet, every book, every document—so it's bound to give impressive answers sometimes.

Glasp: Yeah, I get that.

John: But when you try to push these models beyond memorization—like asking them to do something more complex, such as performing math calculations in a different base system—they often fail because there isn’t enough training data on that kind of problem-solving.

That being said, there’s also this argument where people say, “Well, if it walks like a duck and quacks like a duck, then it is a duck.” If the AI is consistently passing tests, whether it’s doing memorization or not, people say, “Isn’t that intelligence?”

Glasp: Yeah, I’ve heard that too.

John: Right. I get the logic behind that. There’s some truth to it, but at the same time, I think we’re due for a correction. A lot of hype around AI promises more than what’s currently possible, and I think we'll see some recalibration in expectations. But there’s no doubt that what AI can already do is pretty incredible.

Glasp: Yeah, I’ve seen some amazing use cases already.

John: Exactly. So, while AGI is an exciting concept, a lot of what we see now is still just sophisticated pattern matching. It’s great for certain tasks, but we’re not quite at the level where machines can truly think like humans or reason independently.

Glasp: Makes sense. Thanks for breaking that down.

John: Absolutely.

Glasp: Yeah, that's super helpful. I feel like there's so much hype, and it's hard to know where we actually are in the AI landscape versus what people think is possible right now.

John: Exactly. There's always a lot of hype with new technologies. I mean, don't get me wrong, the potential is real, but it's important to understand where we are versus where we’re headed. People often conflate things like AGI and the very real applications of current AI, which are powerful in their own right but not "superintelligence." When you hear about all these breakthroughs, it can seem like we're just a step away from machines being able to think like humans, but that’s still quite a way off, at least in terms of practical applications.

Glasp: Yeah, that totally makes sense. It’s also interesting how AI, especially in cybersecurity, has kind of flipped things on its head, both for good and for bad. As you were saying earlier, scammers are using AI to create more sophisticated attacks, but at the same time, companies like Red Code AI are using it to prevent those very attacks.

John: That’s right. It's a constant battle, and technology like generative AI is a double-edged sword. On one side, bad actors are leveraging AI to scale their efforts in ways we’ve never seen before—whether that’s automating phishing attacks, creating deepfake videos or audio to impersonate people, or generating more convincing scam emails. The worst part is that these techniques used to require significant skill and effort, but now, with tools and models readily available, practically anyone can do it.

But on the other hand, we can use the same tools to defend against these attacks. That's why we built Defender—to proactively detect and mitigate social engineering threats by analyzing the intent behind the communication, not just the content. It’s about turning the tables on the attackers by leveraging the same technology for good.

Glasp: Yeah, it’s interesting to see how these same technologies are being used on both sides. Do you think we'll ever get to a point where it’s impossible to tell if something is real or fake, like if a deepfake will be indistinguishable from reality?

John: I think we’re getting closer to that point every day, especially with the advancements in AI-generated content. As I mentioned earlier, voice cloning with just three seconds of audio, and generating convincing video from a single image—are things that would’ve sounded like science fiction not too long ago.

We’re already at a place where, with some effort, a well-crafted deepfake can fool most people. But soon, it might not even take that much effort. The challenge will be less about detecting whether something is fake, and more about understanding the intent behind the content. Is someone trying to manipulate you? Are they trying to trick you into doing something that could harm you or your organization? That’s what we need to focus on.

Glasp: Right, that makes sense. So, you’re saying that detection might not always be reliable, but understanding the intent behind communication will be key.

John: Exactly. Deepfake detection will always be an arms race. Attackers will keep getting better at making convincing fakes, and while we can try to develop better tools to detect them, it’s a race that’s hard to win in the long term. Instead, focusing on whether the content, regardless of how it was created, has malicious intent is where we need to shift our efforts.

That’s what we’re doing with Red Code AI. We're trying to go beyond the superficial detection of whether something was AI-generated. We want to understand the deeper implications of the communication, like whether it’s trying to manipulate or deceive someone into doing something harmful.

Glasp: Yeah, I think that's a really important shift in mindset. So, with Red Code AI, you're not just focusing on the technical side of detection, but also on understanding human behavior and intent behind these attacks.

John: Exactly. Cybersecurity is no longer just a technical problem—it’s also a human problem. Attackers are exploiting human psychology, and that’s something that technology alone can’t always fix. We’re trying to build tools that bridge that gap, helping people recognize when they’re being manipulated, regardless of whether the threat comes from a human or an AI. It’s about combining the best of both worlds—leveraging advanced AI models while understanding the social dynamics at play in these attacks.

Glasp: That’s fascinating. So, what do you think the future holds for both cybersecurity and AI in general? Where do you see things going in the next few years?

John: Well, in cybersecurity, I think the arms race between attackers and defenders will continue. Attackers will keep finding new ways to exploit AI, but defenders will have to stay ahead by using AI to outsmart them. We’ll also see more collaboration between AI and humans—AI handling the large-scale data analysis and detection, and humans providing the context and judgment to make the final decisions.

In terms of AI more broadly, I think we’re going to see more integration of AI into everyday tools and workflows. We’re already seeing that with things like ChatGPT being used to assist with customer support, content generation, coding, and more. AI will become more of a partner to humans, augmenting our abilities rather than replacing them. But with that will also come more challenges around ethics, bias, and misuse, so we’ll need to be vigilant.

Glasp: Yeah, that’s going to be important as AI continues to grow. Thanks for sharing all of this—it’s been insightful.

John: And with all of this, AGI and everything else, the reality is not going to match what was promised. That, I can pretty much guarantee. Anybody who's worked with these systems knows that they're simultaneously amazing and really stupid at the same time. You can ask things like how many R's are in the word "strawberry," and it gives you the wrong answer. That's just one little piece of trivia, but there are a bunch of examples like this, where the language model isn't even self-consistent within the same answer.

Because it's just spitting out tokens that are statistically likely, based on what it's seen in the dataset, right? And there's a lot of structure in language that gives it the ability to do these amazing things. But I still think we're a long way away from full artificial general intelligence or superintelligence. Even when we do have it, the first versions are going to be in very particular domains. I don't think we're going to have a breakout moment where, suddenly, we have millions of virtual beings that are all smarter than humans.
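A common explanation for the "strawberry" failure is tokenization: the model operates on subword tokens rather than individual letters, so character-level questions map poorly onto what it actually sees. Here is a tiny illustration, assuming the tiktoken library and one of its standard encodings:

```python
# Minimal illustration of why letter-counting questions trip up LLMs:
# the model sees subword tokens, not individual characters.
# Requires the `tiktoken` package; the encoding name is an illustrative choice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print("Actual letter count:", word.count("r"))   # 3
print("Token pieces the model sees:", pieces)    # subword chunks, not letters
```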

Even if you look at the capacity of a human brain, how big a neural network you need, how much power you need, all of that: with the current state of technology, you'd need to build an entire data center and power it with an entire power plant to simulate one of these at the level you'd want for human-like intelligence. I mean, the first step is getting it to be as intelligent as a cat or something. We're not even there yet, and that'll take some time. But I do think there's a big gap between the promise that's been made and reality. If people are, with a straight face, saying, "We're investing a billion dollars in this because we firmly believe that ASI is going to happen within the next two or three years," they're going to be disappointed.

But if they're putting a billion dollars into it because, yes, they're going to shoot for that, but in the meantime, they'll have all these other amazing use cases that are going to solve problems and create a huge amount of value—then yes, that makes total sense. I agree that this kind of thing is going to happen. I think the danger of focusing the conversation so much on AGI or ASI is that it distracts us from the real dangers of using this technology.

The actual danger isn't some superintelligence trying to eradicate humanity and that we need a magic off-switch to turn it off in case it goes rogue. That's sci-fi stuff. The real danger is that humans will use these tools to do bad things—things they couldn’t do before at scale. Like creating a million sock-puppet accounts to influence elections, deepfakes that change public perception, or even things that aren't quite as nefarious but are still harmful.

Glasp: Right.

John: Yeah, LLMs are just a tool. I saw Andrew Ng making a parallel, saying, "An LLM is like an engine." You can put engines in things and use them for good things or bad things. The good uses overwhelmingly outweigh the bad ones, but you could use it for either. I think that's very true. It doesn’t make sense to unnecessarily restrict the development of this technology when it has so many positive outcomes.

Usually, the issue comes down to people's perceived likelihood of a catastrophic event—where AI becomes sentient and decides to kill its creators or whatever. Some well-known people in the industry believe that's a real risk. If you believe that, then you’d go down a particular path of caution.

But most people who work in this space—not political scientists or philosophers, but people who know the technology—know that we are very far away from that scenario. I've heard it described as "I don't worry about that for the same reason I don't worry about overpopulation on Mars." Yes, theoretically, it could happen, but there are so many other problems, and the chance of that happening anytime soon is really low.

So yeah, that's certainly the case on some of the safety concerns, but I also think that the current approach toward LLMs—this next-token prediction method—is not going to bring us to AGI or anything close to it. It’s going to require a fundamental reimagining of how these systems work, with talented teams who understand things at that level. That’s the kind of work that needs to happen to break through and get closer to human intelligence.

Glasp: Yeah, thanks. We need to prepare for it, and we love what you're doing, so we need to be prepared by doing things like that. Thank you for talking about Inception Studio and generative AI. We're also interested in your career. We saw your biography, and it says you started coding when you were five years old. Is that true? Why were you interested in computer science and AI at such a young age?

John: Yeah, I wasn’t interested in computer science back then, for sure. Mostly what happened was that we had a computer at home, and we didn’t have many games—just a few. I liked playing computer games, but I got bored of the ones we had. There used to be these magazines that had source code listings for games, and you could type them in and play. So, I started typing them in, line by line. I was five, I was in kindergarten, doing this because I wanted to play the games.

And of course, when I typed them in, I made typos. I didn’t understand what I was typing, so I had to learn how to debug it. At first, I’d look character by character for mistakes, but over time, I got a bit of intuition for where the bugs might be. Then, I wasn’t satisfied with just playing the games as written, so I started making changes—like putting my name in, or adding new features. That’s how it started. I wasn’t writing code, just making small changes. But over time, I got better and better at it.

So, that’s how I got started. Eventually, I started to understand more and more. I used to run this thing called a BBS—Bulletin Board System—which was pre-internet. You had a modem, you’d dial in, and you could send messages and talk to people. I got into that scene early on, when I had a 1200 baud modem, which was super fast at the time. Most people had 300 baud modems.

I started running my own BBS, and some of my friends were building games for their BBSs. So, I thought, "I’m going to make my own game for this BBS." That’s what got me into coding games. I built this RPG game for my BBS, and that was around middle school and into high school. By that time, I had learned languages like Pascal and C, and that’s how I got good at coding.

Then, in high school, I took AP Computer Science as a sophomore. That was the first year they let me take it; they wouldn’t let me take it as a freshman. I did well in the class and got a 5 on the AP test, and my teacher told me about this thing called the USA Computing Olympiad, which was like competitive programming. She knew I was good at coding because I had been doing it for a while, but I hadn’t done algorithmic problem-solving before.

The Olympiad problems were really hard, way harder than anything I had worked on before, but I got really into it. I ended up going to the USA Computing Olympiad as one of the top 15 in the country. I didn’t make it to the International Olympiad, where only the top 4 go, but making it to the top 15 was my first indication that I might be good at this.

Glasp: That’s impressive.

John: Yeah, it was a big confidence boost. Until that point, I never really thought of myself as particularly good at computer science—I just enjoyed doing it. But that was the first time I thought, “Maybe there’s something here.” That was also my first exposure to the more theoretical side of computer science. Before that, I was just coding, not thinking much about algorithms or efficiency. But in preparing for the Olympiad, I started studying algorithms, and I got into it.

After that, I applied to MIT, and I got in, mainly because of the Computing Olympiad. My grades and SAT scores were okay, but nothing spectacular. I think it was the Olympiad that helped me stand out. Getting into MIT was a miracle for me—it was my dream school. Once I got there, I was able to apply myself much better than I had in high school. I did much better academically.

At MIT, I got really into compilers. I have always been fascinated by how you go from source code to machine code. It’s amazing how that works, and I wanted to learn all about it. So I started diving deep into compilers, and I still love them. They’re one of my favorite areas of computer science.

Glasp: That’s amazing! So it sounds like from a young age, you had this natural curiosity that drove you to learn all these things, even when it was challenging.

John: Yeah, I think it was mostly driven by boredom and a desire to play games initially, but it evolved into a genuine curiosity. I just enjoyed figuring things out and making things work. Once I started getting into the deeper aspects of computer science, like algorithms and compilers, it became more about understanding how things worked behind the scenes.

Glasp: That’s a great story. It’s inspiring to hear how you started from just wanting to play games to becoming one of the top computer science students in the country and going to MIT.

John: Favorite topic here—this is why I taught the compilers class at Stanford multiple times. It’s my first love, whatever, so I can talk about compilers for days with anybody because I really love the area of compilers. I think working on compilers forces you to work at a meta-level, and you have to be strong on both algorithms and implementation together, right? You can’t just hack together a compiler—it will never work reliably enough for people to use it. But you also can’t work entirely in a theoretical domain with compilers because, ultimately, these are running on real computers and real hardware with real programs.

So, you have to understand how people write programs, how architecture works—all that kind of stuff. It’s like a wedding of these two areas, and that really attracted me to that problem space. I don’t teach the compilers class currently because now I teach the LLMs class, you know, the CS 224G, which is about building applications using large language models. Although I used to teach the compilers class in the past, one of the most amazing experiences was that I got to co-teach the compiler class with Jeff Ullman, who won the Turing Award for his work in compiler education. He’s one of the authors of the Dragon Book and multiple other textbooks.

And so I got to co-teach a class with him. He had won the Turing Award, and I thought, "He’s not going to come and co-teach this class with me. There’s no way—he already hit the pinnacle; why would he come back?" But he did because he’s a great guy and he loves this stuff as well. I mean, he’s obviously getting up there in years, but he came back the year after he won the Turing Award, and we co-taught the class together. That was an amazing experience. I never thought I’d have the chance to do that, ever. That was definitely a high point—to be able to teach the compiler class at Stanford with Jeff, which was very cool.

Glasp: Yeah, that's very cool. Thank you for sharing your life story and how you started coding. Now, about hardware: right now, the field is so competitive, not only with LLMs but also with chips, and the competition is severe. Do you have any big-picture thoughts on how it's going, or what we should focus on, whether specifically around hardware or in software in general?

John: Yeah, I mean, look, it’s so interesting to see, especially in this machine-learning space. A lot of the topic areas you hear about—like systolic arrays, wafer scale, and parallelization—are things we worked on in the late '80s in the compiler space. A lot of it overlaps with scientific computing and other areas. There weren’t that many compelling use cases back then, but the ideas really haven’t changed much since then.

Even on the architecture side, or on the compiler code generation side, for a long time the state of the art, with TensorFlow and PyTorch, was abysmal. The utilization of GPUs was really low. Trying to get them to perform well was embarrassing. As a compiler person, I could look at it and say, "Oh my gosh, just use basic techniques we’ve known for 20 or 30 years, and we can do a much better job!" But the truth is, there weren’t that many good compiler people working in machine learning early on.

Now that’s changed. Now, all the smartest people are working on these problems, which is why we’re seeing these leaps in efficiency, both on the hardware and software sides. The new capabilities are much better because there was a lot of catch-up that needed to happen.

Nvidia is certainly the de facto leader in this space—by far. They just released their latest numbers, completely blew out their targets, and still their stock price went down because people expected more. Nvidia was even flirting with being the most valuable company in the world at one point, and for good reason. It’s not just their hardware—it’s their entire software stack. Look at CUDA and everything else they own in that stack. It’s hard for someone else to come in and displace all that. You have to build great hardware, but you also need the tools, compilers, debugging systems, and everything else.

That’s a ton of work, and Nvidia has been doing it much longer than anyone else. But that being said, some interesting new companies are making specialized hardware, particularly optimized for inference and other things. These are 100x or 1000x more efficient, and that makes sense. It’s not like Nvidia has all the answers.

Some of these companies are going to start to eat into Nvidia’s market. They’re at the top right now, but the only way is down. These smaller competitors will find their niche, and Nvidia won’t compete there. From that niche, these companies can grow into other use cases.

It’s like the old saying, “Nobody gets fired for buying IBM.” Now it’s, “Nobody gets fired for buying Nvidia,” because it’s a safe bet. But if you buy hardware from a smaller company and it fails, that’s a catastrophe. So, those competitors need to be significantly better—10x better, not just 10%—for companies to take that risk.

Glasp: Yeah, I see what you mean. It makes sense. And I saw so many players working in this space, like Google and OpenAI exploring chips. Do you think OpenAI will keep its position in, say, five or ten years? Or do you think another company might come out on top?

John: I don’t think OpenAI will maintain the same position. They were the undisputed leader for a long time, but now they have real competition. In the beginning, we thought maybe Gemini would be a competitor, but they flubbed the launch, and it didn’t work very well. But you can’t ignore Google—they have a lot of resources, and they’ll figure it out. Meta is also in the mix, and Anthropic too. Recently, Anthropic’s Claude beat GPT-4 in some benchmarks. So, OpenAI has real competitors now.

There’s also a shift happening. Once upon a time, the best students at Stanford would go to Google or Facebook because those were the cool companies. Now, OpenAI is one of the most sought-after companies for top AI talent, but I’m starting to see signs of that change. People are leaving OpenAI because of growing pains and other internal issues. For example, Greg Brockman is on leave, and there are others facing challenges.

This is an indicator of what could happen in three, four, or five years. If you’re not attracting the best talent, it’s going to be hard to keep your innovation edge. I’ve started to see this with OpenAI. A lot of top talent is now going to startups or other early-stage companies. They want to join the next OpenAI, the next big thing.

It’s funny to think of OpenAI as the incumbent, but in this space, they are. And the top talent is no longer flocking to the incumbent but to challengers and up-and-coming companies. If OpenAI can’t maintain its edge, five years from now, it may not be the dominant player anymore.

In various ways, they have stuff that they haven't released yet that they will release, and then they'll leapfrog again. So, it'll be competitive for a while. But if they're not in a position where they're attracting the very best talent, they're not going to be able to maintain that. And so this is why I think, like, five years from now—is OpenAI still going to be the dominant player? Maybe not.

Glasp: Yeah, I see; that makes sense. And so, especially for students, would you recommend joining the next OpenAI or starting their own company? Because, thanks to AI, we can keep teams small and delegate so much to AI. But sometimes students don't have enough skills yet to build something, right? So joining a company like the next OpenAI could help them gain great experience. What would you recommend?

John: So, I mean, given the fact that I run an early-stage AI startup accelerator, and I'm a huge proponent of entrepreneurship and startups, I will also have to say—it’s not for everyone. There are people for whom the right thing is to join OpenAI, or join Google, or join a larger, more established, more stable company. Especially if you're in a situation where work is not your life, I guess I’d say. If you want to have a good work-life balance and you feel like you want to prioritize things that are not your work, then yeah, join a bigger company. Your life is going to be a lot easier there if that's what's important to you right now.

That being said, if you're ambitious, if you are driven, if you really want to make an impact, then either join an early-stage company that’s on this rocket ship that you can be part of the whole time, or just strike out and start your own company. You're going to learn way more from that. It’s far better in the early part of your career to optimize for learning versus optimizing for salary or other things right now. But this is all couched in the question of, "What do you actually want?" Like, what's your ambition, and what do you want to do with your life?

Yes, if you feel like work is not the most important thing in your life and you want to do other things, then optimize for that. You’re not going to be happy starting a company where you're working 80 or 100 hours a week to keep the company afloat and survive, right?

But what I would say is, if you're just starting out, don't optimize for building the ideal resume or anything like that. Optimize for figuring out what you actually want to do. If you're successful in figuring out what you like to do and what you don't like to do, then that’s the actual path to happiness and fulfillment. You don’t want to end up on someone else’s career path and then realize that’s not what you want to do with your life.

Glasp: Yeah.

John: So, the first step is to figure out what excites you—what do you want to spend your valuable, limited time on Earth doing? Once you understand that, then you can think about, “How do I get there?” And usually, that’s about optimizing for learning. You want to put yourself in a position where you’re learning the most you possibly can. Working in a big company, you’ll learn some things, but you’re not going to learn many things about entrepreneurship. The larger the organization, the more narrow your role is within it, so you learn how to do one thing well but not others.

By the way, the skills needed to be successful in a company like Google are very different from the skills needed to be successful at a startup. They're totally different. In a bigger company, it's about navigating politics, getting people on your side, and fighting for resources. You can't step on other people's toes, because that will make them mad and cause problems. That's what it takes to be successful in a bigger company.

Being successful as a startup founder or in an early-stage company requires totally different skills. So, just think about that and optimize for learning and skill development early on. A very good way to learn is by joining an early-stage company and being a key employee there, growing with that company. It’s a great way to get a firsthand view of things like marketing, sales, products, and all these other areas.

Or, you could just jump in the deep end and start your own company. You’ll be forced to learn quickly. There’s a benefit to being in a position where you've seen greatness in some area. It’s hard to be great unless you’ve seen great. Different organizations are great at different things. Having an opportunity to work with someone great in an area aligned with your long-term goals is an amazing opportunity, whether that person is in a big company or a startup.

Glasp: Yeah, that’s great advice.

John: Yeah, and there’s no perfect resume or LinkedIn profile. After your first job, no one cares. What matters is your performance in the real world, not where you’ve worked. People care less and less about that as you progress in your career. They don’t care about your school or GPA after a certain point. What matters is what you’ve done. It’s good because it means that, even if you didn’t have all the opportunities early on, you can still be successful by working hard and proving yourself.

Glasp: Yeah, that’s encouraging.

John: Definitely. Just look at people like Greg Brockman. He didn't finish his degree—he left MIT to join Stripe in its early days, and it was the right decision. And he's doing pretty well without that degree, right? There are a lot of examples like that, so it shows you that it's possible.

Glasp: Yeah, that makes a lot of sense. It’s encouraging to hear that you don’t need the perfect background or resume to succeed. So, considering you’ve founded three successful cybersecurity companies, looking back now, is there anything you would’ve done differently? What’s the biggest lesson or challenge you’ve learned through your journey as a founder?

John: Oh my gosh, I have so many things I would’ve done differently—especially with my first company. I made so many mistakes and have all these battle scars from doing things the wrong way. I would say the biggest lesson I learned is this: just because someone is willing to give you money for an idea doesn’t mean it’s a good idea. You can't rely on investors to know what’s a good idea or not.

There’s this image of, "Oh, well, this famous investor invested in this, so it must be a great opportunity." But that’s not always true. More often than not, investors don’t have the answers. The founders are the ones living in that space; they know it better than anyone. Founders are the world’s experts in their domain, or at least they should be. So, don’t get too enamored by big-name investors or assume that just because they’re on board, you’re working on a brilliant idea.

Your time is worth more than your money, so think about that when deciding what to pursue.

Glasp: That’s a really important point. Sometimes, it’s easy to get caught up in the validation that comes from investors, but ultimately, it’s about what the founders believe in, right?

John: Exactly. Another big lesson I learned is the importance of having aligned and supportive investors. If your investors don’t truly understand what you’re doing, it’s going to lead to problems down the road. They need to be aligned with your vision and strategy. If they’re not, it will create tension, and life’s too short to spend time working with people who aren’t supportive. You should find investors who really understand and believe in what you’re doing.

They don’t need to have all the answers, but you want to be able to go to them and say, "Hey, we need help with X," and have them jump at the opportunity to help out. Those are the people you want on your side.

And also, be careful about who your co-founders are. Make sure you have good, complementary skill sets, and that there’s mutual respect. Because if there’s a question about who’s responsible for what, it leads to all sorts of problems and issues down the road. There has to be mutual respect. You have to be able to trust that, “I’m going to fully delegate this part of the business or these founding roles to this person,” and to fully trust that they are going to do it not only better than you could, but better than anyone you would hire, right?

If you can’t answer or don’t believe that to be the case, it’s unlikely to work out. Sometimes you have a situation where you have a senior founder and a junior founder, and there’s a clear hierarchy, but those situations can still get difficult. The best scenario is that you have complementary skills and mutual respect, where each person respects the other for their contributions.

A lot of companies struggle with co-founder issues, and that's why it’s important to find someone a bit different from you. There are benefits to doing that. It’s easier if you have a diverse set of DNA in the company early on because the more homogeneous it is, the harder it becomes to adjust down the line.

The best companies aren’t made by cloning the founder 50 times. Sometimes it feels like, "Oh, I wish I had a bunch of clones of me to get all this work done," but that’s not the path to building a strong company. You want to build something bigger than you could do yourself, and the only way to do that is by incorporating people with different skill sets and perspectives. That’s how you build a company and a product that’s better than what you could do on your own.

Glasp: That makes sense.

John: Yeah, so being conscious of that and going outside of your comfort zone by saying, "This person is different from me, but they’re good at what they do," is important. Even if it’s a bit uncomfortable, it’s good to work with people who have skills and perspectives that are complementary to yours. It's easy to stick with people who are like you—say, engineers talking to engineers—but that won’t help the company grow in the long run.

And one last thing: there’s a stigma against solo founders, but I don’t know why. There are pros and cons to being a solo founder. Sure, it’s easier to have more diverse DNA with more than one person, but being a solo founder avoids a lot of drama and issues that come with co-founder relationships. Yes, it’s a lonely job, and you don’t have someone to bounce ideas off of, but there are ways to supplement that.

So, if you don’t have a co-founder, that’s fine. It’s better to have no co-founder than to have a bad co-founder. Even a mediocre co-founder can cause problems. If the right answer is to be a solo founder, don’t be afraid of it. There are plenty of highly successful companies that were started by solo founders. You can be successful, raise money, and do everything you need as a solo founder. It has its own set of challenges, but it also avoids a whole other set of problems.

Glasp: That’s a really interesting point.

John: Yeah, and the critical path in a company is often the communication bandwidth between the co-founders. If you’re a solo founder, you don’t have that issue because the communication is in your own brain! You can’t get more efficient than that. So, that problem becomes much easier when there’s only one founder.

These are some of the things I’ve learned over time, and being in the position I am now with Inception Studio, I get to work with a lot of founders. That has accelerated my learning because I see a wide range of experiences and challenges through working with them.

Glasp: Yeah, it sounds like Inception Studio is an amazing place to learn and grow.

John: Absolutely. I’m fortunate to be in a position where I get to work with so many talented founders, and it’s been a lot of fun. It’s helped me think more deeply about these issues and learn from others’ experiences as well as my own.

Glasp: Thank you for sharing that. As we wrap up, I have one last big question for you. Since Glasp is a platform where people share what they’re reading and learning as part of their digital legacy, I’d love to know—what legacy or impact do you want to leave behind for future generations?

John: Wow, that is a big question. I’ve always felt that because I’ve been given certain talents and had access to great educational opportunities, I have an obligation to do something positive with them. It would feel like a waste if I didn’t. I went to MIT, I did a PhD, and not many people go through that many years of schooling. So, I feel like I have to use those talents and the investment others made in me to contribute to the world in a meaningful way.

I feel an obligation to help others and to try to make the world a better place, in whatever ways I can. I’m fortunate to be in a life stage now where I can do that through things like Inception Studio, teaching at Stanford, and other endeavors. I’m still a long way from where I want to be in terms of my legacy, but that’s my goal—to have a positive impact on the world.

Glasp: That’s a beautiful way to think about it.

John: Thanks. It’s really important to me, and I hope that by the end of my life, I can look back and feel like I fulfilled that obligation. It’s not just about business success, but also about making a difference for my family, my community, and the world. That’s what will ultimately matter to me.

Glasp: Thank you so much for sharing your thoughts, your experiences, and your insights. This has been an incredible conversation.

John: Thank you. I enjoyed it, and I’m looking forward to seeing how the recording turns out!


Follow John Whaley on social

Twitter
