Isabelle Hau: We Learn Socially, Why AI Can’t Replace Human Connection in Education | Glasp Talk #59

This is the fifty-ninth session of Glasp Talk.
Glasp Talk delves into intimate interviews with luminaries from various fields, unraveling their genuine emotions, experiences, and the stories behind them.

Today’s guest is Isabelle Hau, Executive Director of the Stanford Accelerator for Learning and a leading voice in reimagining education for the AI era.

Formerly a venture capitalist at Omidyar Network, Isabelle now advocates for human-centered educational approaches that prioritize enhancing human connections through AI rather than replacing them. She is also the author of "Love to Learn: The Transformative Power of Care and Connection in Early Education," a book that examines how relational intelligence, empathy, and curiosity can revolutionize the learning process.

In this conversation, Isabelle explains why the future of learning must strike a balance between AI’s potential and the irreplaceable value of human relationships. She shares how educators and policymakers can integrate AI tools to complement teachers, and offers insights into rethinking assessment, equity, and engagement in the digital age.

From early childhood through lifelong learning, Isabelle offers practical ideas for creating education systems that empower learners to thrive in a rapidly changing, technology-driven world. Whether you are an educator, parent, policymaker, or simply passionate about the future of education and AI, this episode will inspire you to imagine an educational landscape where technology serves humanity.



Transcripts

Isabelle: We, as human beings, have social brains. So we learn socially, especially in the earliest years of life, but I would say throughout life.

Glasp: Hi, everyone. Welcome back to another episode of Glasp Talk. Today, we are honored to have Isabelle Hau with us. Isabelle is Executive Director of the Stanford Accelerator for Learning, where she leads efforts to accelerate solutions to the most pressing challenges facing learners, from early childhood education to digital and working learners. Previously, she was a partner and the US education lead at Imaginable Futures and Omidyar Network, helping scale innovative initiatives that expanded access to education for millions. A passionate advocate for equity in early learning, Isabelle is dedicated to ensuring that every child, regardless of gender, race, or zip code, has the opportunity to thrive. She also writes the popular weekly newsletter Small Talks, contributes regularly to Forbes and EdSurge, and is the author of Love to Learn: The Transformative Power of Care and Connection in Early Education, released earlier this year. Today, we will explore Isabelle's journey, her vision for the future of learning, and how care, connection, and innovation can transform education for generations to come. Thank you for joining us today, Isabelle.

Isabelle: Thank you, Kazuki, for having me on the Glasp podcast. Really excited to be with you.

Glasp: Thank you so much. And so we are big fans of you and your book. We really enjoyed reading your book, Love to Learn.

Isabelle: Thank you, Kei.

Glasp: So you mentioned that you immigrated to the US over 20 years ago, but could you share your personal journey from France to the US and what shaped your passion for education and equity with our audience?

Isabelle: Yeah. So I moved to the US, indeed, about 25 years ago. I started my career in a different field from education. I started in finance, where I had the privilege of learning one key skill, which was how to think about scale. And then I applied those skills to impact, starting about 15 to 20 years ago, when I had the opportunity of first joining an organization focused on building the field of impact investing, which was very nascent at the time, and then joining Omidyar Network, which has been a pioneer in the field of impact investing. And while at Omidyar Network, as the organization started to grow and become more and more specialized, I had the opportunity to focus more and more on the education work and started to establish an education initiative that my colleague Amy Klement and I ended up spinning off into what exists today, called Imaginable Futures. And then COVID hit. And that was a big moment for me of saying, OK, what is my role in this ecosystem, and how can I continue to have impact, maybe in a different way? So I decided that I wanted to be a lot more reflective by writing. That's when I started my newsletter, started to think about writing a book, and began writing in many other formats, some academic, some non-academic. And as I was doing this, the opportunity to lead the Stanford Accelerator for Learning came about, and I have now been running this incredible initiative at Stanford University for the past three years. So that's a little bit of my professional journey. And I'm happy to also answer any other questions on why I'm so passionate about education.

Glasp: And now, as you mentioned, you're working at the Stanford Accelerator for Learning, and on the website you say the team is tackling some of the biggest challenges in education. What are the most pressing challenges you and your team are focused on right now?

Isabelle: Yeah, so in education, we have a really interesting and obvious problem, which is that a lot of the efforts over the past 100 to 150 years, both in the US and globally, have been optimized around access. We have focused a lot of our efforts on increasing access to public education in elementary, middle, and high school, college, and beyond, OK? However, we have not optimized our education systems around another key principle of the Stanford Accelerator for Learning, which is learning itself. So, for example, we have very large classes with a sage on the stage, a teacher or an educator. That's not optimizing for individual learning; that's optimizing for access. So the key problem, or key opportunity, that the Stanford Accelerator for Learning is trying to address is both simple and super ambitious in many ways: what if we were to take all the knowledge that we have accumulated from research on how humans learn and apply it to education, so that every child, every adult, actually learns? So, at the highest level, we're centering everything around learning and ensuring people learn, moving from schooling to learning, OK? And learning is lifelong, so it starts in the earliest years of life and continues to the grave, because we keep learning into old age. It's also life-wide. Of course, we learn in schools. Schools are a great mechanism for everyone to learn. However, we also learn outside of school. Children learn on playgrounds, they learn at home, they learn in libraries, they learn in so many other settings outside of schools. So the way we are organized at the Accelerator for Learning is around those two themes of lifelong and life-wide. We have six initiatives, three that are lifelong: one in early childhood education, one around the K-12 age groups, and one in adult and workforce learning. And then we have three life-wide themes: one on digital learning and AI, one on learning differences for learners with disabilities, and one on equity, a big theme that cuts across all our activities.

Glasp: Thank you. And so, your work spans early childhood, digital learning, and working learners, but I was wondering: which of these areas do you think holds the most untapped potential for impact, in your opinion?

Isabelle: Yeah, so let me just pick one. I love all of them, so it's a little bit like choosing one of your children. But one thing is very clear to me: what has a profound impact, both at the individual level and at the societal level, is investing early. So the early childhood education work that we do at Stanford University is very special. It's very rare, by the way; only very, very few universities have a center dedicated to early childhood education. So I'm very proud that Stanford does have a dedicated effort there. Because if you start with the foundations of learning and thriving, children are much better off long term. It's a little bit like a house: if the foundation is not strong enough, it may shake over time. It's the same for all of us human beings. So that's one. And then the other one, where we are spending extraordinary time right now, is digital learning and AI. Because while AI carries many opportunities and risks by itself as a technology, what it has done for the education field is something quite unprecedented, which is raising big questions around how children and adults learn, what pedagogies are needed, and how we assess. What are the future skills and competencies that all of us need? So it's framing some huge questions that we have needed to ask for a long time. And it's accelerating some major transformation. Teachers and administrators in schools are innovating with those technologies at unprecedented rates of adoption. At this point, for example, just over the past two years, we now have 26 states in the US that have adopted AI frameworks. This is unprecedented in just two years. Some people may say, oh, it's too slow, we still have 24 states that need to adopt. But the positive and incredible side is that 26 states have adopted frameworks on AI and education in just two years, which is very fast in terms of policy adoption. So we are seeing this incredible transformation of education, with huge opportunities for positive change and also major concerns. Of course, there are always two sides to any tool or any transformation. But those would be the two areas, early childhood and digital learning, that I would highlight, though I love all of them.

Glasp: Thank you. And since you mentioned AI and its impact: we read your book, and there was a section about AI, I remember, in chapter seven or eight or so. The book was launched this February, so it's been six months or so. Do you have any updates on the impact of AI in education and learning? On one hand, it has promising potential for children, or anyone who uses AI, to learn something better. But on the other hand, as you mentioned, there are major concerns. For example, you mentioned tools like Replika AI, or AI robots, replacing human-human interaction with human-machine interaction. We'd love to learn more about the impact of AI in education, if you have any updates.

Isabelle: Yeah. So what has been fascinating, and maybe a surprise for a lot of us, including tech companies, is that the education sector has been the fastest adopter of AI. And we have multiple data points on this. One came recently from a report from Microsoft that drew on multiple sources on this trend. Another one that I like a lot is even more recent than the Microsoft report on education; it came out this summer, when OpenAI saw a decline in the use of ChatGPT of about 10%, which is attributed to students being on summer break. So, students and the education field are very big adopters of AI. Okay. Now, to your question, Kazuki, what are we learning from research? Research is actually quite mixed at the moment. Some research shows that those tools can actually hinder learning. What I mean by this is that there is one widely quoted report on brain studies done recently by MIT, which showed very little brain activity, and certainly meaningfully lower brain activity, when learners were using ChatGPT versus being actively engaged in other forms of learning. However, the back end of the report also showed that when a learner is asked to tackle a particular problem, and then uses the tool after having thought about that problem on their own, their brain activity is quite high. So there is a big question that this report raised, which is: when is the optimal time to introduce this technology to learners? Okay. There is other research that raised concerns about children performing better on tests with those AI tools, but when you take the tools away, they perform less well than if they had been unaided by any AI tools throughout their learning. Okay. Which would suggest that AI acts more as a crutch, what people call a crutch effect, which is a little bit like when you have had a physical accident and your muscles get atrophied after walking with a crutch. That particular effect may be appearing in learning in some ways. However, the interesting question I think we all have to resolve is the second piece of this MIT study, which is: what are the optimal times to introduce this tool, and how should it be used? That's an area where we are doing a lot of work at Stanford University, and we are finding much more positive outcomes when learners are exposed to this tool to create and are empowered with this technology; we're seeing a lot of beneficial effects. So the technology is no longer a crutch, but rather a catalyst for additional learning. For example, we have a really fun study underway right now on environmental studies for middle schoolers, where a middle schooler is asked to use a comic book format to learn about, let's say, volcanoes, and has to use those AI tools to generate new information to build a comic book. Because the format is quite engaging and creative, the middle schoolers actually tend to learn more than in an unaided situation. And by the way, the research is still very emerging, because at the end of the day we are only two-plus years past the introduction of those generative AI tools, following the launch of ChatGPT in November 2022. So it's still very recent.
And the research cycle is such that there is right now an acceleration of new research emerging, but it is still very early; for a researcher to set up an experiment and then publish takes time. So we are going to see more and more research being published, but we are still very, very early in the cycle of research on the impact on learning outcomes.

Glasp: I see. And I saw that OpenAI launched Study Mode today or yesterday. Do you think that helps students use AI better, or do we not know the results yet? I remember there's also a collaboration between Stanford and OpenAI.

Isabelle: Yes. So we have not studied it; it was just launched. A few of us had access to those tools on a pilot basis, but we have not studied the effect of this particular new modality. We are about to do research on ChatGPT for education, which is very exciting for the field at large, because we are going to do the study at a large scale, a national scale. We are going to do the study for the country of Estonia, which is one of the highest-performing countries in the world on PISA scores, with an excellent education system overall. And we are going to be able to look at students who are using ChatGPT for education and those who are not, and contrast the two groups. So that will give the field at large a lot of very interesting information on the impact of access to those tools for a meaningful, at-scale number of students. I'm very excited about this upcoming study, which starts in a month, in September.

Glasp: Exciting. Yes. And I see. Also, have you seen any interesting research about AI's impact in early childhood care and education? The research you mentioned was MIT's research on students using ChatGPT, but now it's not only students; children are using AI too. Have you seen any updates around that?

Isabelle: So, for very young students, I would say the research is very, very limited to date. Very limited. For the youngest students, very limited. There is some research on elementary and middle school that has started to be conducted, but certainly more on high school and college students. There is an obvious reason behind all of this: when those technologies were initially introduced, they were only accessible to adults. So while you could study them with educators in early childhood education, and we have one project at Stanford going on with adult caregivers in early childhood education, for children it's been much more limited to date. We'll see more and more of that research emerge. Several of my colleagues are studying the impact of this technology with increasingly younger learners.

Glasp: Yeah, thank you. Sorry for the back and forth, but we were talking about the book you wrote, Love to Learn, and we love it. Could you tell us what the book is about and the core message you hope readers, especially parents and educators, will take away? Just in case, for our audience.

Isabelle: Yeah, so the book is called Love to Learn: The Transformative Power of Care and Connection in Early Education. And the big topic is quite simple, profound, but something that most people certainly know, which is the importance of relationships and the fact that we as human beings have social brains. So we learn socially, especially in the earliest years of life, but I would say throughout life. There's a huge body of science, the science of relationships, that makes this point very well and has made it for years and years. Now, we have many, many studies that emphasize that key point that we learn socially. The problem, and this is why I wrote this book, is that the circles of relationships around children are shrinking, and they are shrinking for a variety of reasons. Some are demographic: families are getting smaller as fertility rates go down, grandparents are living further and further away, those types of societal demographic trends. But there are also other trends, such as play declining in our society, resulting in fewer connections between children and their peers. So there are different trends, whether in school or outside of school, and then there's technology, which can be an amazing connector and enhance connections, but which, if we are not intentional about it, can isolate young learners even more. This is leading to children having fewer and weaker human connections around them. And I had one particular moment at the beginning of COVID, when one of my colleagues at Columbia University, a neuroscientist named Dani Dumitriu, shared some very, very concerning data she was studying, based on a full cohort of moms and babies at one of the largest hospitals in New York City, the Morgan Stanley Children's Hospital, where she was observing that 80% of moms didn't have a strong emotional connection with their little one. And she also shared data from before COVID showing that 60% didn't. So that led me, Kazuki and Kei, to write this book, because the stats are quite concerning, and because we know that these emotional connections are a huge driver of future learning outcomes and future success. So if the babies currently born in this big hospital in New York City, or in other places in the US or the world, do not feel connected to their primary caregiver, we have meaningful implications ahead of us for how these little ones will fare socially, emotionally, and also from a learning perspective.

Glasp: And you've mentioned a different model of intelligence for the future, not just IQ, or IQ and EQ. Could you tell us a little bit about it?

Isabelle: Yeah. So certainly, when I think about human intelligence, my first thought is always IQ. This is what we have all been conditioned to think human intelligence is: for the most part, cognitive intelligence, which is what IQ measures. And we've seen over the past 20-plus years a rise of EQ, especially starting from the workplace; the workforce has been pushing for EQ more and more over the past 20 years. That's another form of intelligence, emotional intelligence, which has been translated in schools into social-emotional learning and those types of concepts, which has been great. I think, and it's not only me, it's several others, including the chief economist at LinkedIn, that the future of our economies will look different, because artificial intelligence is so good at replicating both cognitive intelligence and, increasingly, empathy and emotional intelligence. For example, there are big studies that were done in health, not in education, showing that patients have a preference for AI relative to a doctor from an empathy perspective. So both IQ and EQ are at risk of essentially being substituted in some ways by artificial intelligence. So the belief, increasingly, is that we need to focus on something else, which is our ability to connect and relate with other human beings, which is what I call relational intelligence. And other people are calling the future of our economies relational economies. We already see this now, where, for example, the number one skill that employers want is communication skills. And communication skills are crucial, because part of being relational is our ability to connect with other humans. So employers are already speaking about those relational skills. But I think those are only going to rise in importance as artificial intelligence rises in our lives.

Glasp: And in the book, you mentioned that between 2006 and 2018, after the smartphone, IQ scores had been declining. Do you think scores will keep declining in the post-AI era, over the next five or ten years? Because the more students and people rely on AI, or outsource their intelligence, the IQ aspect, to AI, the less time we will spend thinking. So, do you think IQ scores will decline in this era?

Isabelle: My concern is also that AI could have an impact on IQ. But I'm also concerned about the declining size of our social networks and its impact on the evolution of human intelligence and our brains. So let me bring you back into our past, to the evolution of humankind, and to amazing work from anthropologists, including one of the most famous modern anthropologists, Robin Dunbar at Oxford University, who has studied social networks and published a very famous thesis connecting the brain size of different animal species, particularly the neocortex, with the size of their social networks. His thesis is that the size of animal brains, including human brains, is optimised for the size of our social networks. And human brains are optimised for what is called Dunbar's number, which is 150. Which is why the average size of weddings, for example, is about 150. The size of our close ties on social media, not the size of all our networks, is also around 150. He has a lot of different data points that all consistently show that we as humans are optimised for that number of 150, which is fascinating. But if the size of our social networks starts declining, which is what I'm showing with different data points for children in my book, and what I'm concerned about, what will it do to our brains, to the size of our brains and our brain activity? So even outside of AI, and AI may have some additional impact, what I'm also concerned about is our social activity and its impact, from an anthropological, evolutionary perspective, on the brain.

Glasp: This is a random question, but if we had 150 or 200 AI robots interacting with children, would that increase brain activity or brain capacity? Or is that a different topic?

Isabelle: Excellent question, and one that we need to start studying a lot more. We have very little so far on the biological impact of interacting with machines. We have a few studies on social robots. We have one recent study that I was looking at on the impact of reading aided or not aided by an AI agent. So we are starting to see some data points on this, but it's still very early. And that will become a critical question for our future: can some of those machines replicate or substitute for some human relationships, and where are they helpful and where are they not? I have a lot of concerns right now. We have a rapid, rapid rise of this category called AI companions. A recent study from Common Sense Media showed that 72% of the young people they interviewed had used an AI companion, and slightly above 50% said they were regularly using one. A parallel study in the UK showed that 15% of the young people surveyed preferred the machine relationship to a human relationship. At Stanford University, we have studied one of those tools, Replika AI, which is one of the large AI companions, and we published a study on it last year. We showed four different things. One is that lonelier young adults were more likely to use those AI companions. Two, which I thought was fascinating, 90% of the users of Replika AI were confused about the machine being human-like, which suggests to me that when we humans encounter an anthropomorphic manifestation, whether it's a 2D avatar or a social robot in 3D, we very quickly become confused about whether it's a machine or a human. And we know from research that children are even more confused than adults, so if the number is 90% for young adults, it's probably close to 100% for young children. The third conclusion was actually slightly positive. It showed that people who were using Replika AI had slight benefits from a mental health perspective; there was a reduction in suicidal ideation as a result of using Replika AI. I would be very cautious about that particular conclusion and how to explain it, because we know that there are children who have died by suicide while using some of those AI companions; for example, there is a lawsuit ongoing right now involving Character.AI. So I would be very careful about generalizing that finding, but it's interesting and slightly positive. And then a more negative one, the fourth conclusion from this study, was that there was a slight displacement of human relationships. People were lonelier at the beginning, and they were even lonelier after using those tools, because they had fewer real friends after using Replika AI. Fascinatingly, MIT published with OpenAI a study on the use of ChatGPT, so not an AI companion, but a general chatbot, for companionship and therapy. And they have a very similar conclusion to our study on Replika AI: they show that people who use ChatGPT more tend to think that the chatbot is more friend-like. So there is also a sense of confusion that grows the more you use it. And they also showed, in a parallel study, some fascinating effects of voice.
So, as those AI systems become more and more multi-modal, using voice or video or other characteristics that are beyond text, the confusion gets even stronger.

Glasp: Yes. One eye-opening moment for me when reading your book was something I hadn't thought about. AI can answer anyone's question 24/7, and it's patient and friendly. But humans respond differently; sometimes people are upset, impatient, or annoyed at being asked a question. So if children's expectations are set by their relationship with AI, meaning, oh, usually if I ask something, I get a patient answer 24/7, then when they interact with humans who respond in a different way, might they get confused about building relationships with humans? Do you have any thoughts around this?

Isabelle: Absolutely. So that's a critical question going forward: where and when is this AI used for companionship or relationship building, and where are those social elements beneficial or not? There are instances where it's probably very helpful. For example, someone who has more social anxiety, or difficulty connecting, may be able to learn through those machines how to connect and how to relate, and then can maybe rehearse with a machine how to have a difficult conversation or establish a friendship, whatever it might be, as a learning pathway to an in-real-life relationship. It might also be helpful for certain groups, like children on the autism spectrum, or people like me who were not raised with English as a native language, different groups of people for whom those tools may be very, very useful for rehearsing or learning social skills, language, or cultural nuances. But what's becoming concerning is that right now, a lot of those tools are monetized by private companies, and the monetization is either through data or through advertising. In either case, those companies have an incentive to keep people on those platforms. The ideal would be that it's more of a training ground for real-life connections. But those companies don't really have incentives to do this. So I love the space where companies are thinking about that transfer from the machine to the outside world, to in-real-life connections. I think that's a beautiful exploration for a lot of edtech companies and big tech: if we are using those tools to help children and adults be better in real life at social connection, amazing. If, on the contrary, this is really about mimicking the entirety of a relationship with machines, I have a lot more reservations, and we don't know what the long-term impact will be on humans. It is a fascinating area and is rapidly rising, so we'll see a lot more research on it. But right now, I would say it's an area where we ideally need better frameworks, better guardrails, better protections, especially for young children. Because what is feeding those AI companions at the moment is large language models that are not necessarily appropriate for children, and certainly not always appropriate for recommendations on social connections. Humans may not be perfect all the time either, but right now, relying on machines and systems that are very imperfect and certainly never built for children is very, very concerning.

Glasp: Yeah, definitely, totally, yes. As a parent, I really resonated with the whole concept of this book and how we should nurture love and relationships with children. Now I understand the importance of those relationships, so I can change my behavior when I interact with children. And I feel like anyone who reads this book will be able to address the problem, maybe. But the real challenge, I think, is changing the behavior of people who won't read it, right? There are so many people, parents, educators, who don't have a habit of reading books, who won't read your book and learn about the importance of relationships. People who read your book understand that importance, but people who don't, don't. So how should we spread this information, and how should we reach the people who need your lessons and learnings?

Isabelle: Yeah, if you have ideas, let me know, certainly. But I'm trying to make two simple arguments for families and parents especially, and maybe educators, to care about this. One is that love and relationships are critical for brain development. Children who have fewer relationships, or no relationships, have smaller brains, for example, based on multiple studies. So, for anyone who cares about their children's future success, and so many parents do, they should care about this: it is critical for a child to have a bigger brain and lifelong success. OK, so that's number one. And then, to the point we were chatting about earlier, if we think about the future and where we are heading, especially with artificial intelligence, if we, like you, Kazuki, as a dad, or me as a mom, want our children to have successful careers, they need very strong relational intelligence. This is what will make them successful in the future economy, simply because what will remain strictly human is our capacity to relate and connect with others.

Glasp: Yes, thank you. I think your time is running out, and you've already shared so many lessons and so much advice, but what advice would you give to young people or parents listening today who want to nurture a lifelong love of learning? If you could pick one.

Isabelle: Yeah, there is one thing we have not discussed, which is a concept a lot of my colleagues in research are calling technoference, which is technology interfering with human relationships. Where it comes up is when we pick up our phone or get a notification on our device, OK? The average American adult now picks up their device 205 times per day. 205 times. That means that for any one of us, and you can check your exact number for yesterday or the day before on your phone, and to be very clear, no judgment on anyone, my average is a little bit higher than 205, it's an extraordinary number when you think about it. All of us are so conditioned to live with those devices that they interfere, or risk interfering, with a beautiful play or reading moment, or even a discussion with a child. So, my advice is very simple: put away our devices and have moments together. I'm certainly not saying let's ignore technology altogether; that is not realistic for most of us. But rather, let's have some moments in our family time that are fully tech-free, so that we can focus on those very special relationships and make family time relational time, at least for those special moments during the day, whether it's dinner, playtime, or bath time, whatever time might be ideal for each family.

Glasp: Yes, I totally resonate with that. Can I ask one more question I've always been curious about? You focus on early childhood education, but let's say I'm someone who missed out on relationships in my early days. For example, say my parents didn't give me much love, so I missed the opportunity to build relational intelligence and relationships, and now I'm 20 or 30. Can we catch up and fill in this gap as adults?

Isabelle: Yeah, so the science is also very clear on this. The short answer is yes. Yes. However, it gets more and more difficult. The easiest time is really in the earliest years of life, even the first months and days, when building loving, strong, nurturing relationships with a child is critical. But those relationships are also essential throughout life. So, yes, catch-up can happen and continue throughout life. But it's much easier to do well in the earliest hours, days, weeks, months, and years of life.

Glasp: Yeah. We wanted to keep asking questions, for example about how caregivers are underpaid, earning less than baristas, and so on. We had so many topics we wanted to discuss today, but we're limited on time. So, one very last question: since Glasp is a platform where people share what they're reading and learning, we want to ask this. What legacy or impact do you want to leave behind for future generations?

Isabelle: The impact and legacy that I would like to leave is very simple, very simple, Kazuki. The title of the book is Love to Learn. But in so many ways, I would also like to reverse that phrase and say, I would like my legacy to be for people to also learn how to love.

Glasp: Amazing. Yes, so beautiful. And again, thank you so much for joining us today. We learned a lot, and I hope our audience learned from this conversation. Thank you.

Isabelle: Thank you for having me.


Follow Isabelle Hau on social

Twitter
