Patch #25: Learning is about to change forever
Notes from a roadtrip conversation about the future of AI and education
Preamble
Humans are inherently bad at comprehending exponential change, and there’s a pretty good chance things are about to go exponential. Ian Bremmer put it this way in a recent newsletter about AI, “We are poised to experience more technological and, accordingly, more social, political and geopolitical change in the next decade than we’ve experienced in the last half century.”
As humans, we face fundamental obstacles trying to reconcile our expectations with a statement like the above. Suddenly, it feels like the evolution of humanity, society, and civilization is a bit more uncertain. Vertigo, excitement, anxiety, and abject terror are all reasonable responses.
Amidst all of these big picture shifts, one useful strategy is to focus on what could change in a single domain. I recently did this over the course of a drive from Phoenix to Denver when, due to the imposed constraints of vehicular confinement and the judicious handing of iPads to children, I settled into a long conversation with my wife, Michelle, about AI and the future of education.
Michelle and I met teaching fifth grade in Los Angeles in 1999. But whereas I’ve had a winding path across design and technology jobs since then, Michelle has stayed in education: transitioning from the classroom to eventually leading a number of Teach For America’s regions. Recently, she founded Teach For America’s Reinvention Lab to “power innovation” at the organization and “fuel the future of learning.” She’s someone who not only has deep expertise about the classroom, but has long worked to effect change in complex systems. Michelle, in other words, is uniquely positioned to judge the gap between hype and utility in this space.
Zeroing in on AI and education felt practical and clarifying as Michelle and I imagined near-term solutions to long-established problems. Given how much early discourse about AI and learners has focused on fears about cheating, it feels worthwhile to spend some time imagining some benefits.1
1. Reframing our relationship with productivity
One mystery of the modern era is why the arrival of new technologies hasn’t done more to improve productivity. Despite decades of technological advancement, investment, and management consultants, most OECD countries have seen labor productivity growth slow markedly since 2005.2
A notable thing about new AI tools like ChatGPT is that early studies show eye-popping productivity gains for tasks like coding and business writing. As I’ve mentioned previously in my essay about the future of work, I don’t think we’re spending nearly enough time discussing how we allocate the benefits of this productivity to unlock human creativity.
This dynamic has a parallel in the classroom: an outsized focus on improving achievement on standardized tests. In order to show measurable improvements, teachers have dedicated more and more time to covering what will be tested. But what if AI tools gave learners a boost akin to the one we’re seeing for knowledge workers?
If it happened, we could shift our focus a few degrees away from test achievement and use the additional time for whole-child learning objectives, giving classrooms more room to prioritize what it means to be human. In a landscape of escalating youth struggles with anxiety, depression, and a general lack of optimism about the future, the need for this shift is pressing.
Imagine a class where you spend time exploring content through one vantage point (customized for your learning style and ability level via AI-powered tools) and then spend even more time building on that understanding through conversation and connection. Imagine incorporating outside perspectives, lived experiences, and competing interpretations in a much richer way than we have time for now. On Michelle’s current team, for example, she has students who represent high school, college, and Indigenous perspectives informing the team’s outputs.
The skills of connecting to other people, developing empathy for their perspectives, and communicating will all be more important in a world where AI automates away vast swaths of computational work. As SNHU President Paul LeBlanc points out here, “Our society has endless need for people to do distinctly human jobs that algorithms cannot do, jobs that only work when people are in relationship with each other.”
These high-touch jobs are often understaffed and underpaid, but it’s easy to foresee a future where that changes as society realizes the value of people who can navigate complex emotional terrain. To take but one example, look at the recent emphasis here in Colorado on the use of highly trained social workers and paramedics in what used to be considered police functions. There will be a lot more demand for these kinds of high-touch, high-EQ jobs going forward — we should begin thinking about how to best prepare people for them.
2. Engaging worthwhile, complex issues through building
One of the most mind-blowing aspects of the explosion of AI tools is the falling barrier between thinking and building across modalities. We’ve seen a lot of early examples of people building apps, animating storyboards, and customizing workflows through the deployment of autonomous agents that leverage AI to achieve a specified outcome by chaining together multiple prompts.
While current functionality is still limited, it’s easy to see how students could use the next generation of these tools to better turn their thoughts into tangible outputs. This is inspiring in and of itself, but Michelle and I also spent some time discussing how students might utilize new tools to actually stand up solutions to problems of local relevance.
For example, imagine a classroom where students observe a lack of fresh produce available in their community, or a lack of green space, or any number of other pervasive issues that hinder the quality of life in communities across America. What if, instead of just discussing the issues, they could sketch concepts, work at raising funds, find partners, and learn from community groups that are already engaged? What if they could develop empathy for the people they are designing for by incorporating field research and learning how to engage in user testing? In a world where it’s suddenly much easier to make things real, it feels far more possible to incorporate design, building, and production into learning from the get-go.
3. Cultivating curiosity and lifelong learning
In my own interactions with ChatGPT, one of the biggest benefits is the ability to tune my inquiries, using natural language, to a suitable learning level. This trick alone enables me to go far deeper into subjects than something like Wikipedia, where I’m forced to engage at the level of the general audience (which may or may not be suitable given my knowledge, or lack of knowledge, about a given subject). So if I engage a complex subject like, say, nuclear fusion, I can ask ChatGPT to talk to me like I’m twelve years old. This allows me to ask question after question, guiding my own learning journey as I build fluency over time. What if you could build a link between ChatGPT and Wikipedia to get the best of both worlds? It’s already here.
I happen to love chat, but it may not be accessible or preferable for many learning styles. At the recent ASU + GSV Summit, Michelle saw a lot of examples of emerging edtech products focused on anticipating what a learner needs and when, delivering open-ended exploration and expert guidance at the same time.
What if we could expand this approach to make information relevant to those who are accessing it, no matter where they’re from and who they are, but especially if they are part of the vast population of learners who are locked out or marginalized in school? Imagine a world where students steer their own learning to make it as relevant as possible to their particular context (both in school and far beyond). Learners can start with points of interest (robots, anime, insects, video games, astronomy, etc.) and pull learning toward them.
In a classroom setting, the benefits of these capabilities are obvious. Suddenly, every student will have the ability to guide their own learning journey. Not only will this boost engagement across a range of levels and abilities; it will train all of us to get better at asking questions to get the results we want. For the record, this aligns with the well-established pedagogical concept of elaborative interrogation, in which learners strengthen their memory of new material by interrogating it (asking why? and how?). Done correctly, this represents a marked departure from our current system’s emphasis on passive reception of knowledge and demonstration of mastery. It opens the way to a world where learners are actively engaged in a far more open-ended, inquiry-based acquisition of knowledge—one where students learn how to learn instead of learning to demonstrate proficiency on a predetermined outcome.
As Sal Khan pointed out in a recent talk on this subject, educational psychologist Benjamin Bloom convincingly showed how dramatic the results of individualized instruction can be. In Bloom’s “2 Sigma Problem” study, students who received one-to-one tutoring performed about two standard deviations above the conventionally taught control group, which means the average tutored student scored around the 98th percentile of the control distribution. Tutored students also outperformed those taught with mastery learning techniques.3 Bloom argued that if educators could find methods to replicate personalized learning on a larger scale, it could revolutionize education. It seems highly plausible that, thanks to AI, we’re about to test that hypothesis.
Early signals on this front show tantalizing glimpses of what’s coming. For a more polished product example, look at Khan Academy’s own Khanmigo. Or go ahead and build your own tutor and customize it to your style and ability, then add plug-ins to improve functionality and scope. However this all shakes out, personalized coaching and instruction are coming, and suddenly anyone with the grit and determination to learn will have the ability to do so at their own pace and in the style that works best for their particular mind and interests.
4. Producing unique creative artifacts
In the previously mentioned piece by Paul LeBlanc, there are two great questions:
What is the role of knowledge when all knowledge is immediately available?
If much of the basic cognitive work we now do is being done by an algorithmic coworker—a “cobot”—how do we rethink the levels of cognitive ability our students must now possess?4
Michelle and I discussed how the power of these tools can help students find their unique communication styles and bring more of their own stories, passions, and personal histories into the way that they develop their ideas. In a world where it’s much easier to produce outputs, why would we want the things we produce to look like everyone else’s?
The best storytellers already bring themselves into their work as a way to create connection with audiences and communicate authentically. My mind immediately goes to the way Brené Brown uses stories about her family to get across larger points about vulnerability and connection. Last year, I gave a talk on the future of technology and trust with my colleagues Daniel and Mick. Daniel opened with a story about visiting mainland China for the first time as a child and experiencing the effects of censorship. This story instantly made our talk better and more credible. When we have better tools to express ourselves across mediums, we have far more opportunities to develop a storytelling style that fits our unique perspective, interests, and strengths as communicators. We also, to the point memorably made by novelist Chimamanda Ngozi Adichie, mitigate the risk of critical misunderstandings that arise from “relying on a single story.”
In a future where content outputs and analysis are abundant, the ability to communicate directly and authentically won’t just be important in the kind of high-touch jobs I mentioned earlier; it’ll be important in all jobs.
Mathematician and entrepreneur Stephen Wolfram points out here that super-specialized knowledge and expertise will be commodified when everyone has access to the ability to go deep on any subject. He argues people will need to do more ‘big-picture thinking’ and develop more creative and arbitrary approaches that don’t presume to know the endpoint before the work begins. One of the ways to get such unique outputs will be to bring diverse perspectives into collaboration and provide people with the space and ability to share not only their ideas, but how they arrived at them.
Closing Thoughts
Talking about all of this with Michelle reminded me of something I hadn’t thought about for years. In the resource-strapped elementary school where Michelle and I taught together in Compton, there was an elaborate computer lab funded due to the noble belief that access to the internet would democratize technology and give our students the tools they needed to succeed in the rapidly changing, newly networked world.
And yet the internet, to my recollection, never seemed to quite connect. There was no one in place to explain why the internet was important, how to use it, and how to turn access into opportunity (or at least curiosity). It’s not enough to hand out tools; we need to seriously start thinking about how we help people use them (and use them responsibly).
In trying to wrap my mind around the current moment, there is a framing a colleague shared with me via an article in the MIT Technology Review by David Rotman:
Simply put, we are at a juncture where either less-skilled workers will increasingly be able to take on what is now thought of as knowledge work, or the most talented knowledge workers will radically scale up their existing advantages over everyone else.5
If we don’t quickly incorporate these tools and a thoughtful consideration of how to use them into classrooms, inequality will worsen.
While personalized learning at scale is certainly exciting, it’s when we think of it in combination with whole-child learning objectives, the ability to turn ideas into tangible outputs, and more ways for students to cultivate relationships with their communities and each other that we really start to see the outlines of what’s possible. We should be responsible with these tools, but we should also spread them as far and wide as possible if we want to see what a true revolution in access to education looks like.
This week’s recommendations:
Reading: Annie Ernaux Has Broken Every Taboo of What Women Are Allowed to Write, by Rachel Cusk in NYT
Listening: With a Hammer, by Yaeji
Music credits for article audio:
Opening Theme: “Friendly Evil Gangsta Synth Hip Hop” by mesostic via Wikimedia Commons
Closing Theme: “Hopes” by Kevin MacLeod via Wikimedia Commons
1. Since our initial conversation, Michelle has spoken on a panel on this topic at the ASU + GSV Summit and has continued evolving her thinking with her team.
2. Robert J. Gordon, “Why Has Economic Growth Slowed When Innovation Appears to Be Accelerating?,” National Bureau of Economic Research, Working Paper 24552; David Adler and Laurence B. Siegel, The Productivity Puzzle: Restoring Economic Dynamism (New York, NY: CFA Institute Research Foundation, 2019).
3. Benjamin S. Bloom, “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher, vol. 13, no. 6, 1984, pp. 4-16.
4. Paul LeBlanc, “We’re Asking the Wrong Questions About AI,” Inside Higher Ed, March 12, 2023.
5. David Rotman, “ChatGPT Could Revolutionize the Economy - But Who Decides What That Looks Like?,” MIT Technology Review, March 25, 2023.