Erica Orange on constant evolution, lifelong forgetting, robot symbiosis, and the power of imagination (AC Ep59)
“We all have to acquire new information to stay relevant. But if we’re piling new information onto outdated thinking, we need to become more comfortable with lifelong forgetting.”
– Erica Orange
About Erica Orange
Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx and keynoted over 250 conferences around the world, and has been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024.
Website: www.ericaorange.com
LinkedIn: @ericaorange
YouTube: @EricaOrangeFuture
X: @ErOrange
Book: AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth
What you will learn
- Lifelong learning vs. lifelong forgetting
- The intersection of humans and technology
- The importance of imagination in the future of work
- The role of judgment in an AI-driven world
- Navigating the blurred lines between reality and AI
- Rethinking education for a digital age
- The evolving workplace and redefining workspaces
Episode Resources
Transcript
Ross Dawson: Erica, it’s a true delight to have you on the show.
Erica Orange: Ross, thank you so much for having me, I’m so happy to be here.
Ross: So you have been a futurist for a very long time, and I think it’s pretty fair to say that you’ve also been a believer in humans all along the way.
Erica: Yes, I have to say I’ve been a believer in humans for far longer than I have been a futurist, but I have been doing this work, my goodness, for the better part of close to two decades at this point, really knowing that so much is moving really quickly, with obviously the biggest thing today being the pace of technological change. But when you strip back the layers, I’ve always come back to the one kind of central thesis and the one very central and core understanding that we are inextricably linked with all of these trends; whether it’s technological trends or sociocultural trends, we cannot really be extricated from that equation. My interest has always been in more of the psychological component to the future, right? I was a psychology major in college, and I never really knew exactly how that was going to serve me, and never in a million years did I think that it would be applied to this world of futurism that I didn’t even know existed when I was 18 years old, but that thinking has really informed much of how I do what I do.
Ross: Yes, it’s always this aspect of ‘humans are inventors’. We create technologies of various kinds which change who we are. So this is a wonderful self-reinforcing loop, the classic idea that ‘we create our tools, and our tools create us’, and this cycle of growth.
Erica: Right? Everything is always a constant evolution. It’s just that that piece of evolution is very different depending on who or what it’s applied to. So at this moment of our history, technological evolution is outpacing human evolution, but the biggest question mark is, will we be able to catch up? Will we be able to double down on those things that make us uniquely human? Will we be able to, even economically, and when it comes to the future of work, be able to reprioritize what those unique human skill sets are going to be? And basically, for the sake of not putting it very poetically, will we be able to get our heads screwed on right now and for the indeterminate future, so that we are not in a position where technology has passed us by, but one where we actually have a very unique role to play, and we know how we can really compete and thrive and succeed in this world that is just full of so many unknowns?
Ross: Absolutely. I agree that these are questions we can’t know whether we’ll be able to get through, but I always say, ‘let’s start with the premise that we can’. And if so, how do we do it? What are the things that will allow us to be masters of, amongst other things, the tools we’ve created, and to make them boons for who we are, who we can be, who we can become?
Erica: That is such a great question. I think it comes down to something that I talk a lot about, which is really the difference between lifelong learning and lifelong forgetting. And it seems the most cliche nowadays to talk about lifelong learning. I always say, of course, it’s important to become a lifelong learner, right? We all have to become lifelong learners and acquire all of the new information that’s going to keep us relevant. But if we’re piling new information onto outdated thinking, we have to become more comfortable becoming lifelong forgetters.
I tell so many of my clients or my audiences that it can be just a very simple exercise of identifying one or two things. It could be a heuristic, it could be a value judgment, it could even be just a way that we approach our work. It doesn’t have to be anything more complex than that. What is it that we are holding on to that no longer serves us? Once we are able to get up on the forgetting curve as quickly as we’re able to get up on the learning curve, neurologically we can free up some of that space such that we are able to compete against some of these ever evolving technologies. And then the other thing that I would add to that: one of the first things I learned in doing this work, as I said about 20 years ago, was that the future was never about ‘or’; the future is about ‘and’. Again, such a simple word that carries a lot of weight today, because we tend to view things in this world of hyperpolarization and in the social media echo chamber. With tribalism, we’re so bifurcated that we tend to think of it as this future or that future, this reality or that reality. When the simplest of lessons kind of goes back to the most basic of math, which is the Venn diagram: it’s looking for those intersections in the middle and knowing that everything operates from an environment of ‘and’.
This goes back, Ross, to what we were talking about with humans and technology, right? When it comes to the future of AI, there’s this narrative out there of us versus them, and that is where a lot of the anxiety and the fear is coming from, because of that whole narrative of a robotic takeover. When we view it as something that is collaborative and something that is symbiotic and something that is augmentative, which it really is, humans and technology and those polar opposites always coexist and coevolve, just like progress and stagnation. It’s chaos and creativity, right? It’s imagination and inertia. It’s all of these things that are happening simultaneously. So to both forget and to view things in an ‘and’ way are two of the ways that we can kind of start getting our heads around some of this.
Ross: Wow, I love that. We are very aligned. You know, that goes to my framing of ‘humans plus AI’, whereas others have framed it somewhat more confrontationally. But the other thing which comes to me is improv. In improvisation, it’s always ‘and’. Whatever you’re offered, it’s always ‘and’, as opposed to contradicting, or the ‘but’s or the ‘no’s and so on. And the improvisational approach to life is always ‘alright, and? What can we add?’
Erica: Yep, that is exactly right. And again, I go back to my early days of doing this work, and the Yogi Berra quote of ‘when you get to a fork in the road, take it’ always comes up. And now the challenge is that that fork in the road, which used to be kind of bifurcated in a couple of directions, is now going off in multiple directions, so that trying to navigate this roadmap is like, oh my goodness, I feel like I’m almost kind of schizophrenic, because life in each moment, and even each minute, is a constant arc of improvisation. We’re all at each moment just trying to figure it out and apply a little bit of control to the chaos.
Ross: Yes. Going back to the first point, this lifelong forgetting, which I totally love, I think this requires metacognition. What is it that I need to forget? And then how do I forget it? Let’s get a little tactical here. How is it you work out what you need to forget, and how do you then forget it in order to make space for the new?
Erica: I would say one of the best ways to even start forgetting, and I say this as the mother of a seven year old who is wise beyond his years, is reverse mentorship. We know so much about these corporate mentor programs and learning from institutional knowledge. And again, that is great. That is the learning component, but the reverse mentorship component goes into the forgetting part of this. And I tell a lot of people, whether it’s parents or anybody, it doesn’t really matter: don’t have your own kid be your reverse mentor, because they’re gonna have no patience for your questions or your learnings. And for anyone internal to an organization or a business, don’t even have it be an intern; they’re already too old. So it could be a neighbor, a niece, a nephew, whoever it is, and just be open to what it is that you hear. It could be what platform they’re using, what gaming system they’re using, how they make friends, what they talk about, what they’re learning in school that they find interesting. Do they use ChatGPT for their homework? Do they like to play outside? I don’t know, a whole laundry list of questions.
They’re kind of like aliens from another planet. They can allow us to see these completely unbiased perspectives of how the world operates, and the biggest thing is using what it is that we learn, and being open to those learnings, to inform some of our own value judgments and ways of seeing the world, where it’s like, ‘oh, I never thought of it from that perspective’. It’s kind of like bringing back the lost art of nuance; we just don’t have those nuanced conversations anymore, because we operate in these silos around viewpoints that kind of support ours and that are comfortable, right? It’s like the security blanket in a world of uncertainties. So tapping into some of these different ways of thinking is one very practical way to achieve this.
Ross: Now we live in a world of AI on all fronts, and this is a particularly pointed illustration, I suppose, of the technologies that we have created, which are changing us, which are changing the world, and where we’re trying to move to a world where it is a complement to us, it supports us, it makes us more. What does that journey look like?
Erica: I mean, there are so many facets to this, because it is an ‘and’ reality, meaning in many ways human creativity is being unleashed through AI. It is augmenting a lot of our cognitive abilities. At the same time, it is distorting many of our realities. It is leading to a world where it is getting harder and harder to differentiate between what is real, fake, true and false, with the rise of deep fakes, and we know that AI is manipulating a lot of our electoral systems. It’s putting into question even the whole nature of democracy itself in some ways. We have this very dark rabbit hole on one end, and then we have a very exciting possibility on the other. Kind of like the two DNA strands, right? It’s like we have to decouple a lot of the hype from the reality. We also have to know what the tools can do, what the tools can’t do, and what they are unable to do now, but will be able to do in the future. That’s where we have to do away with the cliches, because it’s not just augmenting us and it’s not just taking our jobs. Just like a rubber band, right? There’s a lot in that middle part where there’s a lot of tension, but there’s also a lot of opportunity there.
Ross: So how do we start? What’s the framing of it, given, as you say, the deep challenges? There are unquestionably many negative consequences of not just the tools themselves, but in particular, how they are used or misused.
Erica: I would say it’s a time of learning and it’s a time of experimentation, and it’s a time of implementation, right? And each one of these has a different set of strategies. So for those at the top of an organization, they have to think of what problem they’re looking to solve, what strategic problem they’re looking to solve, what time based efficiencies they want to create, whether they want to glean insights or crunch data in new ways. That’s the implementation piece. The experimentation and learning piece goes back to, I think, one of the biggest future proof skill sets going into the future, which, again, sounds so simple, but it still is very complex. We all have to get better at asking the questions that are going to matter. It’s not just going to be in the purview of the prompt engineer. It’s going to be that we have to question the output of absolutely everything, knowing that a lot of the generative AI systems right now can be used as tremendous tools, but they are still just that. They are still a tool, and they are subject to their own flaws and their own biases; it’s no wonder that a lot of people are talking about them as kind of these mysterious black boxes, or we hear the word hallucinations thrown out there. While the output generated can be great, it still is so deeply reliant on human oversight and human judgment and even human decision making. Those are really, I think, three of the fundamental pillars. If we double down on those three things, then I think we will emerge as the winners in this new reality. But when we forget those three things, including ethics and ethical frameworks, then we will cede too much control to systems that still have not been ironed out yet.
Ross: Absolutely. So everything’s very fast moving at the moment, and that is probably going to continue to be the case. So drilling down on, say, judgment, which I think is really central to all of these, and which is part of decision making as well. We are in a fast paced world, and it’s very hard to judge what is going on in, amongst other things, what the AI tools are doing and so on. But that’s what our role is; obviously we need to be the reference point from which we make judgments. So how do we develop and apply and refine our ability to judge in this very fast paced, discombobulating world?
Erica: It really underpins something that is so basic to human nature, which is also doubling down on human to human relationships and human to human trust. And not just have human judgment be something that you do on your own, but have it be a more human centric exercise that is much more collaborative, and honestly it doesn’t even start when you are using AI for your job. These are things that have to be instilled at the earliest of ages, which is why the conversations from an educational perspective are not the right ones that we need to be having. We’re having all these conversations about cheating and ChatGPT for research, like, you know, what? Time out. That is an educational system that was appropriate four economies ago, in an industrialized age. Are we actually preparing the young mind to tackle a lot of these digital challenges, or are we just spitting out a whole bunch of what we would hope to be smart learners, when the future, I always say, is not about smart, it’s about intelligence. Right now, ‘artificial intelligence’ is actually ‘artificial smart’.
We need to think through the lens of judgment, decision making, oversight, and how we can instill these values, even if it comes down to a new civic space framework for the younger generations who are going to interface with these systems, who are going to build these systems. But it can’t just be a plug and play sort of thing for a 50 year old who has never used these systems; we need to reverse engineer a lot of this so that we have the thinkers, the critical thinkers, the analytical thinkers, who are able to decipher the output of these systems. And think about it even from an organizational perspective: if they are bringing in all these young hires who don’t know how to view things through the lens of however judgment is defined, there are the reputational and the industry and enterprise risks that could actually be created, and the second and third order risks, from putting in people who don’t understand how these technologies can really tend to play out in unforeseen ways.
Ross: You have a book coming out?
Erica: I do, Ross. I do.
Ross: Which is rather on point with all of our conversation. So tell us about the book that is coming out. Tell us what you say in the book in a few words.
Erica: Well, ‘in a few words’ has never been my forte, but the book is called ‘AI and the New Human Frontier: Reimagining the Future of Time, Trust and Truth’, and a lot of the book is based on that central thesis that we’ve already talked about, right? How do we double down on those things that make us uniquely human in an age of accelerating AI? The subtitle’s ‘time, trust and truth’ are three of the really critical components here, because time is based on the fact that things are happening at such an exponential rate, right? How do we even get our thinking aligned with something that could be outdated a month from now or even a few weeks from now? And then the trust and truth go back to what I said earlier in our conversation, about how much of our reality is being manipulated, and the bigger question that AI poses is that it’s not even about real, false, true, fake. The bigger thing is, how do we in a world of AI prove any of these things and these realities to be true? So it takes us down a very important rabbit hole, and then really brings about the clarion call, which is, how do we really refocus on imagination? How do we reimagine our own value? How do we reimagine what work is going to look like without any preconceived notions or any constraints of how we’ve done any of these things before, knowing that those frontiers are all going to shift and evolve?
Ross: So how do we support our ability to imagine and reimagine better than we did before?
Erica: It’s one of those things that is just like play, or just like whimsy. A lot of these things, as we become adults, have been coached out of us, but it’s so core. It is so central to humans throughout time, right? Ancient civilizations wouldn’t have created unbelievable technologies of their own making had they not imagined, and had they not imagined even the universe and our connection to the stars, right? We all have the ability to imagine, and a lot of it comes down to just channeling our inner child, playing around with things in the physical world, playing around with things in the digital or the virtual world, doubling down on those human connections, and just kind of getting out of our own way. This goes back to the thinking piece, so that it’s not just linear extrapolation based on what it is that we know or where we think the future is going, but allowing ourselves to just imagine new possibilities and new ways that we can really survive and thrive in a world that many years from now is going to look increasingly unfamiliar in some ways and just as familiar in others.
Ross: Yes, that reminds me of one of my favorite quotes from Keith Johnstone, the father of improvisational theater, who said that rather than thinking of children as undeveloped adults, we should think of adults as atrophied children.
Erica: Yes, I love that. Was it George Bernard Shaw who said we don’t stop playing because we get old; we get old because we stop playing?
Ross: Yes, yes.
Erica: So yeah, same sort of thing, right? And just kind of going back and tapping into the wisdom and the empathy and the connection. And again, we hear so much about how AI will augment certain things, but it really is about that intersection of imagination and the biggest thing that can galvanize all of us, which is hope, right? A lot of these things just seem very scary and outside of our control, but these things are very much in our control. And it’s not kind of a rah-rah cheerleading for humanity, but I do believe in humanity’s ability to kind of catapult ourselves into a new age and a new way of being without those constraints of the past, because we know that we can’t apply all of the old and outdated thinking to new problems or ultimately new solutions. It has to be unfettered and it has to be based and really rooted in imagination.
Ross: I love that. The positive potential is absolutely there, but it still remains for us to take it.
Erica: Part of the book comes with a disclaimer, like, things are about to get a little dark in these following chapters. Let’s ride it out like the roller coaster it is, and as we come out the other end, let’s really talk about what those possibilities are. Let’s talk about bringing back the lost art of storytelling and telling the stories about the future, right? Hollywood is depicting a lot of the stories about the future in this dark, post apocalyptic or even superhero way. Where are the stories about the positive imagination, the H. G. Wells and the Asimov, the old science fiction writers of the past? That was pure imagination, and we don’t really think of how we can apply those really powerful stories to solve a lot of those existential issues.
Ross: So let’s round out with advice for two audiences: individuals and leaders of organizations. Today, what is it that individuals can and should do to chart their own course for themselves and for their families, to prosper and to contribute in this world where AI and technologies are shaping who we are in society?
Erica: It goes back to ‘and’. We have to be aware of what these tools are. We have to be aware of how they are evolving. We have to question our relationship with them. We even need to just have those conversations with that next generation of responsible youth. Issues of bullying, issues of deep fakes, right? We hear so much about the creation of a digital literacy framework, but what is it really? What really are we teaching that next generation, and how do we also just, you know, as I said, I have a seven year old son, and people are very surprised when I say that I am a futurist who studies technology, but my son is very analog, and it’s done deeply on purpose, because we know that at the same time, a lot of these technologies are rewiring the brain. They are rewiring the brains of young people, and longitudinally we don’t quite know how any of that is going to play out. So all of this, in many ways, is ‘we are putting a new generation into a petri dish’. And you can say that that has happened in the past, but it really is happening more than ever with the massively multiplayer online games, with virtual reality, with constant connectivity, with putting them in front of iPads from the time that they can basically see, and we don’t really question what it is doing from an attention perspective, from a learning perspective, so we also need to have more of those conversations.
If young people are accessing AI and ChatGPT, what is it doing to critical thinking? How is it changing their own neural wiring, knowing that those in the earliest part of Gen Z are the first ever in history to have different neural wiring than the preceding generation? There aren’t really enough conversations yet about that, and I think more families need to view these technologies less as band aids and really think of what the appropriate use is, and how we can also cultivate that human to human relationship given the fact that we also have all of these tools. Now, the other thing that I would just add is that it’s a different conversation for business leaders, right? I think this goes back to the difference, really, and this is something I talk to a lot of clients about, which is the difference between vision and strategy. With strategies now, the time horizon is so shortened, so don’t have just one set of strategies when it comes to AI, when it comes to digital, when it comes to talent management, when it comes to anything today. Be so nimble and flexible and adjustable and adaptable in those strategies, and have a different set for different timelines, but your north star has to be your vision, right? What you stand for, who you are, what you represent, what matters to you as a company, a brand. Have that be so clear, and so clear in the articulation of it, that it trickles down through that institution or that organization, so that everyone knows that the strategies are then in service of that vision. Not enough organizations, I think, are really going back to the drawing board to say, what is my vision in an age of AI, and how can I use these tools in the service of that, versus in the service of my strategies?
Ross: So on that point, I absolutely agree. We need to create and use our vision, and sometimes re-form that vision. But part of that is, what are the roles of people in that future organization? Technology has always changed the nature of work, and it’s continuing to do so at an increasing pace. As leaders look to the vision and the future of their organizations, I don’t think we have the answers now, but what’s the journey to imagining the future role of people within that, and how they are complemented by AI?
Erica: Part of the role of the imagination in this is in the vocabulary. Because work, what is work? Work is such an open book. When people talk about the future of work, what does that even mean? How is that even defined anymore, in a world where one thing matters, and one thing is defining all of this, and that is a sense of boundarylessness, right? Time and space have different definitions. When people talk about the evolving workplace, workplace also is a word that has no meaning anymore, because it’s all about the workspace. So part of that reimagining is in the vocabulary that we use to even define these things, and we haven’t even begun to really get our heads around what a workspace is. So many leaders are still struggling with even words that have been out there for the last 15 years: distributed, virtual, flexible, and hybridized work. They’re like, ‘oh my goodness, how do I get my head around this?’ When we’ve always known it’s not one size fits all. Part of imagination is just a blank slate. It is just a completely blank canvas. But we tend to think of imagination as taking all of those preconceived notions and kind of rejiggering them, when really we should just do away with the words that don’t work and create new ones to describe completely new ways of tapping into talent, whether that talent is carbon based, as in humans, or non carbon based, as in AI based systems.
Ross: We just need to take that one step further. What’s that process of bringing together those AI and those humans? What does that look like? Of course, it’ll be different across many organizations, but what might it look like? Let’s paint a positive vision.
Erica: It is going to be different for every single organization, every single functional capacity, every single individual, every single geography, and every single generation. When we say there’s no one size fits all, that is exactly what we mean, and that is why, again, it’s in service of a vision. AI might be a useful tool for accounts payable and accounts receivable, because it helps streamline the payment process. It could be deeply helpful for someone in a research capacity, because they’re able to glean insights in new ways. It could be helpful for a doctor in a completely different capacity, because it could be used as a tool to help diagnose. Whatever problem you are looking to solve, there is a facet of AI that can come in and help streamline that process. But ultimately, it gets down to the one thing that is the biggest value proposition in our economy, which is time. What are you trying to solve for from a time based perspective? And that is completely dependent on not just one set of variables, but dozens upon dozens of different variables.
Ross: Just to round out, give us a piece of wisdom. Tell us what we should do; what’s a big insight from all of this that we should take away?
Erica: Well, there’s no silver bullet, right? And I wish there was, and I wish that there was one thing that could just solve all of the world’s problems and ameliorate any concern, right? But I always kind of go back to one of my favorite adages, which is ‘a diamond is merely a lump of coal that did well under pressure’. And I love that, and I end so many of my presentations that way, just because it’s human nature to view whatever it is, a problem, a change, anything, right, as just this lump of coal that we really don’t know what to do with. Then we might try to hammer away at it, but we really know that with time and pressure and compression, right, it can turn into that diamond. And what is that diamond? That diamond is the future proofed opportunity, but it’s something that doesn’t come from just pure innovation, right? Coal doesn’t turn to diamonds because of innovation. Coal turns to diamonds because of imagination.
Ross: Fabulous. So where can people find out more about your work and your book?
Erica: Yeah, so my book is available for presale. It launches on September 18. It is on Amazon as ‘AI and the New Human Frontier’, and there’s lots more information on my website, ericaorange.com, and also my business website, thefuturehunters.com.
Ross: Thank you so much for your time and your insight. You are truly inspiring, Erica.
Erica: Well, this has been so fantastic, Ross, thank you so much for having me on. I really appreciate it.