Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66)


“The craft of corporate development and training has always been very specialized in providing the right skills for workers, but that provision of support is being totally transformed by AI. It’s both an incredible opportunity and a challenge because AI is exposing whether we’ve been doing things right all along.”

– Marc Steven Ramos

About Marc Steven Ramos

Marc Ramos is a highly experienced Chief Learning Officer, having worked in senior global roles with Google, Microsoft, Accenture, Novartis, Oracle, and other leading organizations. He is a Fellow at Harvard’s Learning Innovation Lab, with his publications including the recent Harvard Business Review article, A Framework for Picking the Right Generative AI Project.

LinkedIn: Marc Steven Ramos

Harvard Business Review Profile: Marc Steven Ramos

What you will learn

  • Navigating the post-pandemic shift in corporate learning
  • Balancing scalable learning with maintaining quality
  • Leveraging AI to transform workforce development
  • Addressing the imposter syndrome in learning and development teams
  • Embedding learning into the organizational culture
  • Utilizing data and AI to demonstrate training ROI
  • Rethinking the role of L&D as a driver of innovation

Episode Resources

Transcript

Ross: Marc, it is wonderful to have you on the show.

Marc Steven Ramos: It is great to be here, Ross.

Ross: Your illustrious career has been framed around learning, and I think today it’s pretty safe to say that we need to learn faster and better than ever before. So where do you think we’re at today?

Marc Steven: I think from the lens of corporate learning or workforce development, not the academic, K-12, higher ed stuff, even though there’s a nice bridging there that I think is necessary and occurring, it’s a tough world. If you’re running a learning and development function of any size, in any region or country and in any sector or vertical, these are tough times. And I think they’re tough times in particular because we’re still coming out of the pandemic, and what was in the past live, in-person, instructor-led training has got to move into this new world of all-virtual or maybe blended or whatever. In terms of the adaptation of learning teams to move into this new world post-pandemic, and thinking about different ways to provide ideally the same level of instruction or training or knowledge gain or behavior change, whatever, it’s just a little tough. So I think a lot of people are having a hard time adjusting to the proper modality or the proper blend of formats. That’s one area where it’s tough. The other area that is tough is related to the macroeconomics of things, whether it’s inflation. I’m calling in from the US, and the US inflation story is its own interesting animal. But whether it’s inflation or tighter budgets and so forth, the impact on the learning function and other support functions in general is that it’s tighter, it’s leaner, and I think for many good reasons, because if you’re a support function in legal or finance or HR or learning, the time has come for us to really, really demonstrate value and provide that value in different forms of insights and so forth.

So that’s the second point in terms of where I think it is right now, the temperature, the climate, and how tough it is. The macroeconomic piece is one, and then clearly there’s this buzzy, brand-new character called AI, and I’m being a little sarcastic, but not really, when you look at it from a learning lens. I think a lot of folks are trying to figure out the good side first, right? How can I really make my courses faster and better and cooler, create videos faster, this text-to-XYZ media is cool, so that’s there, but it’s still kind of hypey, if that’s even a word.

But what’s really interesting, and I’m framing this just as a person who has managed a lot of L&D teams, is that there’s this drama below the waterline of the iceberg of pressure, in the sense that, because AI can do all this stuff, it’s kind of exposing whether or not the stuff the human training person has been doing all this time was being done correctly. So there’s this newfound-ish imposter syndrome that I think is occurring within a lot of support functions, again, whether it’s legal or HR, but I think it’s more acute in learning, because the craft of corporate development, of training, has always been very specialized in the sense of providing the right skills for workers, and that provisioning of support for skills, man, it is benefiting enormously from AI, but it’s also being challenged by AI. So there’s a whole new sense of pressure, I think, for the L&D community, and I’m just speaking from my own perspective rather than representing, obviously, all these other folks. But those are some perspectives in terms of where I think the industry is right now, and again, I’m looking at it more from the human perspective rather than AI’s perspective. But we can go there as well.

Ross: Yeah. Well, there’s lots to dig into there. First point: the do-more-with-less mantra has been in place for a very long time. As I’ve always said, business is going to get tougher; you’re always going to have to do more. But the thing is, I don’t think of learning as a support function, or it shouldn’t be. Okay, yes, legal has got its role, HR has got a role. But we are trying to create learning organizations, and we’ve been talking about that for 30 years or so, and now more than ever, the organization has to be a learning organization. I think that any leader who tries to delegate learning to the L&D function is entirely missing their own role, which is to transform the organization into one where learning is embedded into everything. And I think there’s a real danger in separating out L&D as, all right, they’re doing their job, they’ve got all their training courses, and we’re all good now, as opposed to a transformation of the organization where, as you’re alluding to, we’re trying to work out, well, what can AI do and what can humans do? And can humans be on the journey where they need to do what they need to do? So we need to think of this from a leadership frame, I’d say.

Marc Steven: Yeah, I totally agree. I think you have three resonating points. The first one you mentioned is the need to get stuff out faster, more efficiently and so forth, and make sure that you’re abiding by the corporate guidelines of scale, right? And that’s a very interesting dilemma, I think, just setting aside the whole AI topic. What’s interesting, and I think a lot of L&D folks don’t talk about this, particularly at the strategy level: yes, it’s all about scale. Yes, it’s about removing duplication and redundancy. Yes, it’s about reach. Yes, it’s about making sure that you’re efficiently spending the money in ways where your learning units can reach as many people as possible. The dilemma is, the more you scale a course or a program with the intention of reaching as many people as possible, frankly, the more you have to dumb down the integrity of that course to reach all those people. That’s the concern I’ve had about scale, and you need scale, there’s no doubt. But the flip side of the scale coin, if I can say that, is how do you still get that reach at scale, the efficiencies at scale, but in such a way that you’re not providing vanilla training for everyone? Because what happens is, when you provide too much scaled learning, you do have to, forgive the term, dumb it down to a lowest-common-denominator reach. And when that happens, all you’re basically doing is building average workers in bulk. And I don’t really think that’s the goal of scalable learning.

Ross: But that’s also not going to give you, well, it gives you competitive disadvantage, as opposed to competitive advantage, if you’re just churning out people with defined skill sets, even if you’re doing that well or at scale. The point is, for competitive advantage you need a bunch of diverse people who think differently, who have different skills, and you bring them together in interesting ways. That’s where competitive advantage comes from. It’s not from L&D churning out a bunch of people with skill sets X, Y, and Z.

Marc Steven: Yeah, and I think you’re so right. The dilemma might not be in terms of the internal requirements of the training team’s strategic approach, whatever; it’s just getting hit from different angles. I mean, when you look at a lot of the large learning content and course providers, without naming names, they’re in a big, big, big dilemma because AI is threatening their wares, their stuff, and they’re trying to get out of that. There’s something, as you mentioned too, and this is not verbatim, Ross, but something about making sure that building the right knowledge and skills and capabilities for a company is everyone’s responsibility, and if anything, what is L&D’s role in making that happen? The way I’ve been framing this with some folks, and this is maybe not the best metaphor, analogy, example, whatever: within the L&D function, the support functions, talent, HR, whatever, we’ve been striving to gain the seat at the table for years, right? What’s interesting now is that because of some of the factors I mentioned beforehand, coming out of COVID, macroeconomics, there’s a lot more pressure on the L&D team to make sure they are providing value. What’s happening now is that expectation of more duty, more responsibility, showing the return, has peaked, and I think in good ways, so much so that I don’t think we are striving to get the seat at the table anymore. The responsibilities have been raised so high that L&D is the table. We are a new center of gravity. I’m not saying we’re the be-all, end-all, but there’s so much, and I think necessary and responsible, scrutiny of learning, particularly related to cultural aspects, because everyone is responsible to contribute, to share, to learn. What was the old statement? Teaching is learning twice. So everyone has that responsibility to unleash their own expertise and help lift each other, without getting called all soft and corporate mushy. But that’s just the basic truth.

The other thing is this whole transformation piece. Whether we are the table, whether we are a new center of gravity, we have that responsibility. And my concern, as I speak with a lot of other learning leaders and get a general temperament of the economics of learning, in other words, how much money and support are you actually receiving, is that it is tough. But now is actually the time when some companies are being super smart, because they are enabling the learning function to find new mechanisms and ways to actually show the return. Learning analytics, learning insights, learning reporting and dashboards back to the executives have been fairly immature, whether it’s AI or not, but now they’re getting a lot more sophisticated and correct. The evidence is finally there, and I think a lot of companies get that. They’re basically saying, wow, I’ve always believed in the training team, the training function, and training our employees, but I’ve never really figured out a way for those folks to actually show the return, right? I don’t mind giving them the money because I know, I can tell. But now there are really justified, evidence-based ways to show, yeah, for this program that costs $75,000, I know now that I can take the learner data from the learning management system, correlate that with the ERP or CRM system, extract the data showing that the learning did have an impact on sellers being able to sell faster or bigger or whatever, and use that as a correlation, so to speak, it’s not real causation, but use that as evidence, maybe with a small e, back to the people managing your budgets. And that’s the cool part, but that’s what I was saying beforehand: it’s time, I think, that we collectively step up. And part of that stepping up means that we have the right evidence of efficacy, that the stuff we’re building is actually working.
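To make the kind of LMS-to-CRM correlation Marc describes concrete, here is a minimal sketch. It is an illustration only, not anything from the episode: the file names, table columns, and program ID are all assumptions, and as Marc notes, the comparison shows correlation, not causation.

```python
import pandas as pd

# Hypothetical LMS export: one row per learner completion
# columns assumed: employee_id, program_id, completed_at
lms = pd.read_csv("lms_completions.csv")

# Hypothetical CRM export: quarterly revenue per seller
# columns assumed: employee_id, quarter, revenue
crm = pd.read_csv("crm_sales.csv")

# Flag sellers who completed the (hypothetical) $75,000 program "SALES-201"
completed = set(lms.loc[lms["program_id"] == "SALES-201", "employee_id"])
crm["took_training"] = crm["employee_id"].isin(completed)

# Compare average revenue for trained vs untrained sellers.
# Correlation only: tenure, territory, and other factors are not controlled for.
summary = crm.groupby("took_training")["revenue"].agg(["mean", "count"])
print(summary)
```

A real analysis would go further (matching comparable sellers, looking at change over time), but even this simple cut is the sort of small-e evidence Marc suggests bringing back to whoever manages the budget.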

Ross: I think that is very valuable, and you want to support appropriate investment in learning, absolutely. Though, actually, when was it, 27 years ago or something, I did a certificate in workplace training, and I was getting very frustrated because the whole course was saying, okay, these are your outcomes from the learning, and this is how you then set all your learning objectives to achieve that outcome. I was saying, well, why don’t you want to get beyond what you’ve defined as the outcome, to have open-ended learning, as opposed to having a specific bar and getting to that bar? And I think today, again, there’s this idea of a person in a box: that was the organization in the past. This is the person, this is the job function, these are all the definitions of this thing, that person fits in that box, and they’ve got all this learning to be able to do that. Now we’ve got to create people who can respond on the fly to a different situation, where the world is different, and where we can not just reach bars but go beyond them, people who hunger to learn and to create and to innovate. So I think we absolutely want to show ROI to justify investment in learning, but we also need to have an open-endedness to it, because we’re going into areas where we don’t even know what the metrics are, because we don’t know what we’re creating. This obviously requires leaders who are prepared to go there. I have similar conversations with technology functions, where the sense is that if you’re a CIO, you have to go to the board and the executive team and say, this is why you should be investing in technology: partly because we are part of the transformation of the organization, we’re not just a function to be subsumed. And it’s the same thing with learning. Learning has to be part of what the organization is becoming, and so that goes beyond anything you can necessarily quantify completely. I think this takes us a little bit to the AI piece, and I’d love to get your thoughts on that. You’ve kind of kept saying, let’s keep that out of the conversation for now, so let’s bring it in, because you’ve been heavily involved in it. I’d love to hear your big-picture thoughts to start, and we can dig in from there. What’s the role of AI in organizational learning?

Marc Steven: That’s a big question. Yeah, it’s a big question, and it’s an important question, but it’s also a question that’s flavored with, I think, some incredible levels of ambiguity and vagueness, for lack of better words. So maybe a good way to frame it is actually circling back to your prior comment about people in a box, to a certain degree, right? I mean, you have the job architecture of a role: here are the things that the individual has got to do. I get it. This whole metaphorical concept of a box, of a container, is super fascinating to me, and there’s an AI play here I’ll share in a second, in the way I’m going to think about this as an old instructional designer fella. We’ve always been trained, conditioned, whatever, to build courses that could be awesome. But in general, the training event is still bound by a duration. Here’s your two-hour class, here’s your two-day event, here’s your 20-week certification program. It’s always contained by duration. It’s always contained by fixed learning objectives. It’s typically contained by a fixed set of use cases, in other words, by the time you exit this training, you’ll be able to do XYZ things a lot better. This whole container thing just really boggles me, and maybe I’m thinking too much about this.

There’s a great movie, one of my favorite movies, called Sideways. It’s about a couple of guys who go to wine country in California, and they’re drinking a lot of wine and meeting some people. There’s one great scene where one of the characters is talking to someone else, and he’s trying to figure out, why did you get so enticed by and in love with wine? What she says is just really, really remarkable to me. She basically says that she loves wine because she always felt that when you open up a bottle of wine, you’re opening up something that’s living, that’s alive. When you open up a wine and really think about it from that perspective, you think about the people who were actually tending the grapes when they were gathered. You might be thinking about what the humidity was, what the sunshine was like. So I’m going to come back to the whole container thing, but with AI, I just think that’s a really interesting way to look at learning now, in the sense that what has been in that container, in truth, has been alive. It’s an organic, living thing that becomes alive once the interaction with the learner occurs. What you want to do is think about extending the learning outside of the box, outside of the container. So getting back to your question, Ross, about the intersection, so to speak, of AI and learning, that’s one way I think about it sometimes: how can we recreate the actual learning event so that it’s constantly alive, where if you take a course, the course is something that is everlasting, is prolonged, and it’s also unique to the amount of time you might have, the context in which you’re working, blah, blah, blah. I’m not going to talk about learning styles. I think it’s fascinating, because of what large language models are doing now, and the whole agentic AI piece where these agents can go off and do multiple tasks against multiple use cases and against multiple systems, and then you’ve got the RAG piece here too. That’s really interesting now, right? Because if somebody wants to learn something on XYZ subject, and let’s just say that you work for a company that has 50,000 people, and let’s just say that, I don’t know, half of those folks probably know something related to the course that you’re taking. But it’s not in the learning management system; it’s in a whole bunch of Excel spreadsheets, or it’s in your Outlook emails, it’s in the terabytes of stuff. Well, if AI and its siblings, GPTs, LLMs, agents, whatever, can now tap into that missing information on an ongoing, dynamic basis and feed it back to Ross or to Marc or whomever, you’re literally tapping into this living organism of information.

AI is becoming smart enough to shape that living, breathing information into instruction: to give it shape, to give it structure, to give it its own kind of appeal, and then to tailor it, personalize it, and adapt it for the individual. So if that occurs, and I don’t know if it’s 2024 or 2034, this whole concept of learning where the true benefits are organic, where it’s alive and constantly being produced in the beautiful sunshine of everyone else’s unleashed expertise, that’s a really, really fun dream state to think about, because there’s a significant AI play there. What it really does is change, frankly, the whole philosophy of how corporate learning is supposed to operate. If we see some companies heading in that direction, or some correlate of it, which is probably going to happen, that’s going to be super, super fascinating.
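As a rough illustration of the retrieval pattern Marc gestures at, the sketch below pulls snippets from scattered internal sources, ranks them against a learner’s question, and assembles the top matches into a prompt for a language model. Everything here is assumed for illustration: the document names, the toy keyword-overlap scoring (a stand-in for embeddings or a proper RAG pipeline), and the prompt shape are not from the episode.

```python
from collections import Counter

# Hypothetical snippets harvested from spreadsheets, emails, wikis, etc.
knowledge_base = [
    {"source": "sales_playbook.xlsx", "text": "Discounting above 15% requires regional approval."},
    {"source": "onboarding_notes.docx", "text": "New sellers shadow a senior rep for their first two weeks."},
    {"source": "q3_retro_email.msg", "text": "Deals stalled when pricing questions were escalated late."},
]

def score(query: str, text: str) -> int:
    """Toy relevance score: count shared lowercase words."""
    q_words = Counter(query.lower().split())
    t_words = Counter(text.lower().split())
    return sum((q_words & t_words).values())

def build_lesson_prompt(query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant internal snippets and wrap them in a prompt for an LLM."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d["text"]), reverse=True)
    context = "\n".join(f"- ({d['source']}) {d['text']}" for d in ranked[:top_k])
    return f"Using only the internal context below, teach me about: {query}\n{context}"

print(build_lesson_prompt("how should I handle pricing and discounting questions"))
```

A production version would use embeddings, access controls, and an agent to refresh the index continuously, which is what keeps the "living" material flowing back to the learner.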

Ross: Yeah, that’s fantastic. It goes back to Arie de Geus and his living company metaphor, in the sense that it is self-feeding; that’s autopoiesis, the definition of life as something that feeds on itself, in a way. I think that’s a beautiful evocation of the organization as alive, because it is dynamic; it’s taking its own essence and using it to feed itself. Is there anything in the public domain about organizations that are truly on this path? Because what you describe is compelling, and I’m sure there are plenty of organizations, you’re not the only person to think of something like this. Are there any companies that are showing the way on this, that are able to put it into place?

Marc Steven: Definitely. It’s interesting, I’m trying to finish a book on AI, but I’m not really talking about AI. Frankly, I’m talking about the importance of change management. But my slant is: is there any greater function or team that can drive the accelerated adoption of AI in your company than the L&D team? The clickbaity title that I think about is, is L&D the new R&D? Is learning and development the new research and development? That’s just one kind of crazy perspective. The way I’m thinking about it is that I’ve been interviewing some folks for a piece I’m doing, CLOs of major, major, major companies, and with that change management framing, there are so many incredibly awesome stories I’m hearing related to how to really drive adoption and what L&D’s role is. To your question about whether anybody is doing it: some of these companies that really, really get it totally see the value of human-driven change management. By that, I mean the more successful deployments, at least the ones I’ve come across, are ones where you’re not thinking, well, identify those 24 use cases that have a higher probability of AI doing X, Y, and Z. The smarter companies, I think, and this is my own take, don’t even ask that question. They go a level higher. They basically say, can we put together a dedicated, and I didn’t say senior, a dedicated, cross-functional group of folks to figure out question number one.

Question number one is: what the heck do we do with this? They’re not talking about use cases. They’re not talking about the technology, so to speak. They’re just trying to figure out, okay, what’s the plan here, people? That’s an interesting way to do this. You’re not hiring Accenture, you’re not hiring whoever to bring in the bazillions of billable hours to figure that out. They want a grassroots way of figuring out how to deal with AI: what does it mean to us, good, bad, right or wrong? That’s one thing I see a lot of companies doing. They’re taking a much more forward, people-first perspective of figuring out the ball game, and then if the ball game says, hey, we understand this, thinking about risk, thinking about responsibility, whatever, then yeah, here are the three places we’ve got to start. I think that’s just a really, really smart way to do it. On the vendor side, there are a lot of really cool vendors now thinking about enabling companies for the betterment of AI. The ones that I think are really sharp are getting it. They’re not like the really big content and course providers that say, hey, this is AI 101, here’s your list of acronyms, we’re going to talk through every single dang acronym and blah, blah, blah. That’s necessary, that’s great stuff. But some of the vendors that are really cool are the ones not focusing on those basics, so to speak. They’ll go into an enterprise, name any company anywhere, and they’ll say, what are your concerns? What are your needs? What are your requirements related to this AI thing? Have you, the customer, identified the areas where you think AI can best benefit yourselves and the company? Then they shape the instruction to blend in those clients’ needs very specifically. They literally customize the instruction to do that. That way, when the learner goes through the learning, they’re talking about the stuff they really focus on, on a day-in and day-out basis. It’s not this generic stuff off the shelf. The other thing they’re doing is, no surprise, they’re embedding agents, LLM processes, and proper prompting into the instruction itself. If you want to know Gemini, then use Gemini to learn Gemini. They really go deep. That blending requires a different instructional design method as well, but that kind of blending is really, really smart for the companies, the corporates.

Ross: Are there any companies you can name that you would say are doing a good job of this?

Marc Steven: Yeah, so among the folks I’ve interviewed and companies I’m aware of, I think what DHL is doing is just remarkable, because, to use my prior example, they’re taking a people-first approach to the question of what do we do about this. It’s kind of a given, you know there’s an efficiencies play, there’s a speed play, there’s a building-stuff-more-efficiently play, whatever. But I think DHL is really smart about looking at it from that grassroots perspective, while still at the same time having a balanced approach related to responsibility and risk. I think what Ernst and Young, EY, is doing is really, really sharp too, because they’re focusing a lot on making sure they’re providing the basics and following, I think, the basic corporate capability guidance: give them the 101 training, make sure they’re tested, make sure people have the opportunity to become certified in the right ways. Maybe the higher level of certification can affect their level, which affects their compensation, yada yada yada. So I think that’s really, really great. What’s really cool is that they’ve also created a collection point, in Slack, for people to contribute what they think are phenomenal prompts. It’s not gamification, but they’re creating a mechanism, because Slack is very social, right? People can now chime in to say, wow, that prompt was so great; if I just changed this and added three adjectives, this is my result, and then somebody else can chime in and go, whoa, that’s great. What’s interesting is that you’re building this bottom-up collection of super valuable prompts without the corporate telling you to do it. Again, it really speaks to the culture of the company, which I think is just fantastic as well. Then obviously there are the big, big provider players, the Microsofts, Salesforce.com, ServiceNow. What ServiceNow is doing is just phenomenal. I’m really glad to see this. It’s just a matter of keeping track of what’s truly working. It’s not all about data. Data is there to inform; ultimately, it’s the combination of AI’s data provisioning and a human being, the Johnny and Jane, the Ross and the Marc, saying, well, yeah, but..., which I think is, again, super important.

Ross: So, you mentioned in passing that you’re writing a book. Can you tell us anything about that? What’s the thesis, and is there a title and launch date?

Marc Steven: The book is what I was highlighting beforehand: it’s really thinking about change management, and what the learning function’s role is in driving more accelerated adoption of AI. That’s why I’ve been interviewing a whole bunch of these folks. I want to give a perspective on what’s really happening, rather than observational, theoretical stuff. I’m interviewing a ton of folks, and my dilemma right now, to be honest with you, and maybe you can help me, Ross, because I know you’re a phenomenal author, is that I don’t know if this is going to be a collection of case studies versus some sort of blue book, or maybe a playbook is a better description. I’m still on the fence, and maybe, in good ways, it should be a combination. How do you take some of these really cool things that people are doing, the quote-unquote case studies or whatever, and find a way to operationalize them in a very sensible way that might align to certain processes or procedures you might already have, but with maybe a different spin, thinking about this socially minded intelligence where you have to work with an agent to make sure you’re following the guidelines of the playbook correctly? I don’t know. Maybe the agent is the coach of all your plays. Maybe that’s not the best example, well, maybe it is a good example; it depends on what the person’s coaching. But yeah, that’s the book. I don’t have a good title yet. It could be the really campy L&D is the new R&D. I get feedback from friends that that is a really great way to look at it because there’s so much truth in it. Then I get other buddies who say, oh, geez, Marc, that’s the worst thing I’ve ever heard.

Ross: You’ll have to do some market testing. But I’m very much looking forward to reading it, because it’s frustrating for me sitting on the outside: I want to know what the best people are doing, and I see bits and pieces from my clients and various other work, but sharing as you are, obviously uncovering the real best of what’s happening, I think is going to be a real boon. So thank you so much for your work and your time and your insights today, Marc. It has been a real treat.

Marc Steven: No, the treat, Ross, has been mine. I really appreciate the invitation, and hopefully this has been helpful to our audience. Great.

