Artwork

Content is provided by Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Michael Helbling, Tim Wilson, Moe Kiss, Val Kroll, and Julie Hoyer or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hu.player.fm/legal.

#254: Is Your Use of Benchmarks Above Average? with Eric Sandosham

1:04:34
 
 


It’s human nature to want to compare yourself or your organization against your competition, but how valuable are benchmarks to your business strategy? Benchmarks can be dangerous. You can rarely put your hands on all the background and context since, by definition, benchmark data is external to your organization. And you can also argue that benchmarks are a lazy way to evaluate performance, or at least some co-hosts on this episode feel that way! Eric Sandosham, founder and partner at Red & White Consulting Partners (and prolific writer), along with Moe, Tim, and Val break down the problems with benchmarking and offer some alternatives to consider when you get the itch to reach for one!

Links to Items Mentioned in the Show

Photo by Noah Silliman on Unsplash

Episode Transcript

[music]

0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:14.1 Tim Wilson: Hi everyone. Welcome to the Analytics Power Hour, where all the data is strong, all the models are good looking, and all the KPIs are above average. This is episode number 254, and listeners to this show who are also public radio nerds of a certain age will absolutely get that reference. I don’t even know if my co-hosts will get that reference. Val Kroll, did you get my public radio deep cut there?

[laughter]

0:00:39.2 Val Kroll: I’m sorry to disappoint him, but absolutely not.

[laughter]

0:00:42.7 TW: Okay. Your dad totally would have. So this will be a conversation to have with him afterwards. [laughter] And Moe, you’re in Australia, so I don’t even know if the ABC or any other service ever carried Prairie Home Companion. Do you have any idea what I’m talking about?

0:00:56.4 Moe Kiss: I’m totally and utterly baffled. [laughter] Right now.

0:01:00.8 TW: Oh right.

0:01:00.9 VK: Great place to start.

[laughter]

0:01:03.4 TW: That’s how things go for me in most of my social interactions. [laughter] So I’m Tim, and I just did a little benchmarking right there on my Garrison Keillor knowledge relative to my co-hosts. So, for listeners who don’t get it, I’m referencing a radio show that ran for years in the States that included a segment called The News from Lake Wobegon, a fictitious town in Minnesota. And the segment always ended with the host, Garrison Keillor, noting that in Lake Wobegon, all the women are strong, all the men are good looking, and all the children are above average, with those exact beats.

0:01:49.2 TW: So, that last bit is sort of a lead-in to this episode, because we’re gonna be talking about averages, specifically benchmarks, this oft-requested comparison metric that we get so often from our business partners. Personally, these sorts of requests tend to trigger me to curse profusely. [chuckle] For now though, I’m just gonna introduce our guest, who wrote a pretty thoughtful article on the subject on Medium as part of his first year of penning one post a week on the platform, which is impressive.

0:02:21.7 TW: Eric Sandosham is a founder and partner at Red & White Consulting Partners, where he works with companies across a range of industries to help them improve their business decisioning and operating processes. Eric is also on the adjunct faculty at Nanyang Technological University, Singapore Management University, and the Wealth Management Institute. He was previously the customer intelligence practice lead for North Asia for SAS, and before that was the managing director and head of Decision Management for the Asia Pacific Consumer Bank at Citibank Singapore. And today he is our guest. So welcome to the show, Eric.

0:02:43.6 Eric Sandosham: Thank you very much. Thank you. Thank you for having me, Tim.

0:02:46.7 TW: I should ask you, have you ever heard of Prairie Home Companion, or.

0:02:49.6 ES: I’ve heard that phrase.

0:02:52.4 TW: Okay.

[laughter]

0:02:52.8 ES: Yeah, but I’ve not heard the radio broadcast, obviously.

0:02:58.1 TW: Or you’re at least being polite. I mean, I’ve seen Garrison Keillor live, so I’m telling you, I’m gonna get some Slack messages, people saying, I can’t believe they hadn’t heard of Prairie Home Companion. [laughter] But I will not belabor that any further. Trust me, it killed in a certain group. So, Eric, I noted in the introduction that we, or actually Val, came across you because of a post that you wrote on Medium, and the post was titled The Problem with Benchmarks, and it had a subtitle, which was, Why Are We Obsessed with Comparisons? So maybe we can start there, Eric: why are we obsessed with comparisons?

0:03:38.9 ES: I think it’s such a built-in phenomenon as a human species to always compare. As we’re growing up, I’m sure our parents always tell us, don’t compare with your neighbors, don’t compare with your friends and colleagues. Just compare with yourself, as long as you’re making that onward progress. But I don’t think any of us really stick to that. It’s just so natural: when you step into a room, you get into a new place of work, or in anything that you do, you’re trying to size yourself in relation to someone else.

0:04:13.4 ES: And I think it maybe starts with trying to understand our place in the larger scheme of things. And this carries over into the business world; it’s the first thing typically most organizations ask for. And as you mentioned, I run my own consulting practice, and at the very start of many of the engagements, the clients would ask, can you help me with some benchmarks? I’m trying to get some information and reference points and all of that. And once we get there, we can go into the deeper stuff. But it seems always top of mind for them to want to have a sense of almost like a yardstick, or a placeholder to know where they are on that map.

0:04:57.0 TW: Moe, does your team get hit with requests for tracking down benchmarks? Creating benchmarks, internal, external?

0:05:05.5 MK: Yeah, I have some pretty strong thoughts that are gonna come out in today’s episode, I suppose. Yeah, pretty often. And one of the points that I think is quite interesting, I don’t know, and I’m really going to, yeah, interested to see how the conversation goes, Eric, because I think it is maybe different when you are like a startup or in an earlier stage of a business, and particularly like trying to understand opportunity sizing. So, I’m curious to kind of get your perspective on that, of like where there are useful comparisons to make.

0:05:41.0 ES: Okay. Yeah. So, I think in the article that I wrote, I’m a big fan. Maybe let, let me take a step back. I’m a big fan of the way we look at data in terms of information signals as opposed to just data as data. And, I constantly ask myself, when I look at any piece of data or any report, what is the information signal or signals contained, within there? And so, the same thing I would apply to benchmarks. What kind of information signal is the client looking for when they make such a request for a benchmark.

0:06:19.2 ES: And to keep things simple, I sort of think of it as both a front-end and a back-end information signal. You’re either thinking of benchmarks as an input to a decision. You’ve got certain uncertainties about your decision-making process and say, well, if I know some stuff that I didn’t know before, I would make a little bit of a better decision. Or you’re looking at it on the back-end, where you say, look, I’ve already taken the decision, but I don’t know whether I’m on track. And can the benchmark therefore give me that sense of my place and whether I’m still keeping to the path that I intended to go on? And so, simplistically, front-end, back-end. And I think most organizations, when they look at benchmarks, tend to look at it from a back-end process, as an evaluation information signal. And therein lies the problem, because is it the right way to evaluate whether you are on the path? Is it the right way to evaluate your actions and your strategy in comparison to others who may or may not be doing the same thing?

0:07:23.8 MK: Because the business strategy is different or the customer set is different or.

0:07:29.2 ES: Exactly.

0:07:30.0 TW: Well, and I liked that front-end versus back-end framing, ’cause the front-end, that was a little bit of an “ah, I hadn’t thought of that” for me, because I’m so used to just raging against them. I’m way less kind than saying it’s human nature. I attribute it more to: it’s a way to duck out of saying, well, I don’t need to figure out what I’m expecting to achieve. Just do the thing and then find me a benchmark to compare it to. And if I’m above the benchmark, I’ll say, yay. So that falls under that back-end. The front-end part was actually pretty interesting. Like you called out salary benchmarks, if you’re an HR department and trying to figure out, where are we relative to the market?

0:08:16.0 TW: And, I’ve worked at companies that have said we wanna pay at market. ’cause we think the quality of the work and the living is way better. So we have the secondary benefits are worthwhile or hey, it sucks to work here. We’ve gotta pay above benchmarks. So that, to me, I was like, oh, okay, so not that those are perfect like that there’s still all the noisiness in trying to get those sorts of benchmarks. But you brought up pricing as another one to say, if I’m selling something, I need to figure out what’s kind of the normal sell rate because I need to think is what I’m offering higher value or lower value and adjust accordingly. So that I thought was actually pretty useful. It’s just that back-end just feels like lazy, to me.

0:09:10.9 ES: Yeah, I like the point you’re making about being lazy, in the sense that, well, the consultant is here anyway; since we’re gonna pay the consultant, why don’t we get them to do all the measurement for us? And by just doing the external measurement, I’ve sort of absolved myself of even doing any internal metric collection. Everything is just gonna be evaluated by something outside of the organization, and it just smacks of poor business management thinking. I mean, if you want to try and complement the external with some internal, I think great. But very often, many of these companies that ask for benchmarks are just looking for the external evaluation. And at the end of it, whether you are up or down, so what?

0:09:58.8 S1: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:10:04.9 TW: Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.

0:10:10.1 S1: They sure have. They’ve got an easy to use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking and a customer data platform.

0:10:21.0 TW: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:10:28.2 S1: Yeah. Head over to piwik.pro and check them out for yourself. You can get started with their free plan. That’s piwik.pro. And now let’s get back to the show.

0:10:41.4 MK: Okay. So what if we blend them? This is not where I thought I was gonna go. But what if you are using those external benchmarks as an input to help you set your own target, as one of, say, many inputs? Like, one of your other inputs might be previous performance, or another department’s performance, or something like that. And then this is one of those inputs you are choosing to help you set whatever your target is going to be. Like, what’s the…

0:11:13.4 ES: View of that [laughter]

0:11:14.2 MK: Yeah. View of that?

0:11:16.4 ES: No, I think that’s valid. To say, look, can it be an additional supplementary metric? The challenge, when you’re using external metrics, would be attribution. Whether you can make that direct connection and attribute the change, let’s say in your market share or a satisfaction score, to a particular action or set of actions that you took. And typically the answer will be: it’s difficult. Because there’s so much noise, there are so many things happening, both internally and in the market, which is doing its own thing. So at best you can sort of hand-wave and say, looks like it’s affecting it, but you can’t say it with any certainty. So even if you’re supplementing, you will still have to take it with a pinch of salt. Yeah.
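One way to make the “one of many inputs” idea concrete is a simple weighted blend. This is only a sketch with invented weights and numbers (none come from the episode), but it shows how an external benchmark can inform a target without dictating it:

```python
# Hypothetical target-setting blend (all weights and values made up):
# the external benchmark is just one input, deliberately down-weighted
# because its context (strategy, customer mix) differs from ours.
inputs = {
    "last_quarter_actual": (0.5, 3.2),  # (weight, candidate target)
    "sister_team_rate":    (0.3, 3.8),
    "external_benchmark":  (0.2, 5.0),  # taken with a pinch of salt
}

# Weights should sum to 1 so the blend stays on the same scale.
assert abs(sum(w for w, _ in inputs.values()) - 1.0) < 1e-9

target = sum(w * v for w, v in inputs.values())
print(round(target, 2))  # 3.74
```

The point of the weights is exactly Eric’s caveat: the noisier and less attributable the input, the less say it gets in the final number.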

0:12:08.0 VK: And I think that that’s a really good perspective, Moe, like the blending and using that as potentially an anchor to think about, like, okay, so here’s what’s happening, but we can set our own target, whether it’s below or above. But I think what you were just calling out, Eric, is an important consideration, because the strategies are different, because the consumers are different, because you think you’re playing in this space because that’s where you’re strongest. So there’s all that context, and that kind of devalues that number a little bit, even if you were to average it out with the target-setting activity from the voices in the room, just because you don’t have that context. And so that actually might undercut some of the value of the discussion that’s happening internally about what success looks like.

0:12:54.7 ES: Yeah. That’s true. Yeah. And again, you can define the market or the reference point in so many different ways, which many organizations do to always come out smelling roses, as they say. So, I’m benchmarking myself to just one other person. Well somebody’s gotta win. And they are part of the challenge also.

0:13:17.6 MK: One other thing, though: there’s the information that comes along with the request, if you were trying to break that down, like, what is someone actually craving? I like the way you were talking about that earlier too, Eric. I think sometimes people are worried that if we were to set a target ourselves, would that be aggressive enough? Like, maybe we’re setting the bar too low, or we’re not thinking about a larger piece of the puzzle. And so they’re doing it as a way to make sure that they’re being aggressive and that they’re not gonna be outpaced by just setting targets internally.

0:13:50.9 MK: Again, if you get to that conversation… if you only could see Tim’s face. Tim is calling absolute bullshit on this. I can see it in his face. I mean, I’m talking about, if you’re already getting to a place where your organization is comfortable with talking about those targets. And so this is a special beast. I totally agree with you wholeheartedly, Tim, about laziness. I think that’s a shorthand way of describing some resistance against the finality of setting, you know, or drawing a line in the sand. But I do think that sometimes it comes from a good place of making sure that we’re not setting something that’s just too easy of a target, or then we achieve it, and then what?

0:14:36.8 TW: But you’re doing something to achieve a result. And if you’re saying, this is gonna cost me something in time and money, and I would be happy if I spent this amount of time and money and achieved this result. To me, that’s the core of what a target is. And if you said, well, yeah, but we compared that target to some super unattainable, messy AF benchmark, and that’s not aggressive enough. Like, to me, that’s super distracting from the conversation. If I say, if we put $1,000 into this and we get $8,000 back, are we cool? And everybody’s like, sure. You’re like, well, I actually sandbagged. If I found these benchmarks, which by the way, don’t really exist, then I should have set it at $10,000.

0:15:30.3 MK: I’m not saying it’s a fruitful exercise, especially when we get into conversations like, well, Netflix runs this many tests a year, so I think we should… Okay, great. ’Cause you’re definitely built to run Netflix. So I’m not saying it’s a fruitful exercise, I’m just saying that’s the place where that comes from sometimes.

0:15:48.1 VK: Can I just touch on… there was something as I was digesting Eric’s blog post, and I did actually try to figure out which one it was, but I can’t remember off the top of my head, and I’m hoping Tim, who’s smarter than me, will remember. But there is this bias that people have to ask, what has previously been done, so I can use that as a frame of reference? Like, we’re saying benchmarks aren’t great used this way. But this is human intuition: what previous experience, or what have other companies done? What is the data? Whether the data is right or not, or useful or not, is a different point. But we’re talking about a human bias to seek what previous experience I can rely upon to understand what good is. And so we’re talking about it like it’s the worst thing ever, but this is our natural intuition as humans.

0:16:45.8 TW: I mean, my quick take is that I would rather just sit down and think about everything that I’ve learned or seen or known before, and what I’m doing and what I expect to get out, and start with that. I’m gonna be way, way more emotionally tied to and invested in that than if I just had the analyst or the consultant go pull me a number. ’Cause if the consultant comes and gets a benchmark and then we wildly miss it, then it’s like, oh, you must’ve pulled a bad benchmark. I’ve circumvented any ability to say, did this thing do what I expected it to do or not? If I come up with that myself, now I have information about what I did. And now, when I’m considering the next thing, I can say, well, I gotta lower my target a ton, which hopefully may make me ask, is it worth doing or not? But I don’t know. What do you think, Eric?

0:17:52.3 ES: Actually, this is cutting at, I feel, different perspectives. So the word benchmark, I think, is a loaded word, because it can mean many different things to many different people. And as you’re talking, Tim, you’re saying someone has this external reference and you’re using it as a target, for example: am I doing well enough to achieve a particular reference target? Yeah, you’re right, that’s a benchmark, but it’s a different kind of benchmark from, say, if I’m looking at relative market share, where I’m looking at a ranking. Or even pay salary: it’s not a target we use for reference of how we wanna pay, but you wanna know your relative standing, your relative rank, versus your competitor.

0:18:41.0 ES: I think Moe raised a good point, to say there is stuff we aim to achieve, for example, in certain practices where there’s a long duration or history of maturity to it, and it’s become almost commoditized. And if you are a startup, then you’re saying, look, if I’m gonna do this, then I actually have to get to the similar state and run rate of whatever everyone else is doing. So, for example, I come from the retail banking side. If you take a credit card business, an interesting benchmark would be CPA, cost per acquisition. And you say, look, everybody has to acquire new credit card customers. We can’t have widely divergent costs of acquisition.

0:19:25.4 ES: At some point, if you run this well and you’re matured, logically all of us will begin to converge around a very narrow range of value, right? And if you say, is my business healthy? Am I doing all that I can to take out the waste and being effective in my targeting, then logically I should be within that narrow range of CPA. And if I’m not, then there is something I’m missing perhaps. And it can be an interesting way as a diagnostic measure to say what am I not seeing, right? What am I not getting right in my process? Because if everyone’s converging, then logically I should be.
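Eric’s CPA example is the rare back-end use that works as a diagnostic: if a commoditized process has converged, a large gap from the converged band is a prompt to investigate your own process. A minimal sketch of that check (the peer numbers and the two-standard-deviation band are my assumptions, not from the episode):

```python
import statistics

def cpa_gap_flag(our_cpa, peer_cpas, k=2.0):
    """Diagnostic, not verdict: flag when our cost-per-acquisition sits
    outside the narrow band a mature, commoditized market has converged
    to -- a prompt to ask 'what am I not seeing in my process?'"""
    mean = statistics.mean(peer_cpas)
    band = k * statistics.stdev(peer_cpas)  # sample standard deviation
    return abs(our_cpa - mean) > band

# Hypothetical peer CPAs in dollars (illustrative only).
peers = [48.0, 52.0, 50.0, 49.5, 51.0]
print(cpa_gap_flag(68.0, peers))  # True  -> worth examining our targeting
print(cpa_gap_flag(51.0, peers))  # False -> within the converged range
```

Note the flag only says “look closer”; as the hosts point out next, segment mix alone can legitimately put two healthy businesses at very different CPAs.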

0:20:02.5 TW: But that, I mean… that gets to… I mean, in the banking sector, I feel like I’ve had questions where people have asked me, what’s a benchmark e-commerce conversion rate for retailers? And I asked them, have you shared what yours is publicly? Have you shared that with a trade group? And they’re like, oh my God, no, I wouldn’t share that. It’s like, well, but you think your competitors are sharing that? And I even think in banking, for a cost per acquisition, your customer lifetime value for different customers in financial services varies hugely.

0:20:46.8 TW: So, like, an average CPA, if you’re selling to a super high-end segment, or you’ve got another product that’s selling to the super low-end… even within your company, you may say, well, we don’t look for the same CPA for the different products, because these are much more valuable, and these are much lower value. But I mean, it just feels like such a slippery slope that you go down to try to get those… I mean, it happens with salary information, too. It works for very specifically, clearly defined roles, where there are regional differences, and there’s enough scale, and there’s enough data collected. But I think we’ve all watched companies where HR struggles when they say, well, the service we subscribe to for our salary benchmarking doesn’t have an analytics engineer role.

0:21:42.7 TW: So we’re just gonna name our people data scientists, and then we’ll compare to that. I mean, it’s a bizarre… So that I agree, it’s a really good use case. But it still runs into this mode of there’s this idea that the benchmark is much cleaner, has much less uncertainty around it than it does. And so, where does it come in?

0:22:13.9 VK: And can I just add to that, which is hysterical, because I feel I’m now arguing against myself. But a really good example, right, is iPhone users versus Android users. You might be like, oh, pull a benchmark report on mobile usage. And I know from looking at those that the customer set is very different. The same is true of Amex and Visa: very different customer lifetime value, that sort of thing. But you’re like, oh, look at credit card usage. And they’re totally different audiences, and yeah, lifetime value. So I think that is where it can get dangerous.

0:22:46.7 VK: But hypothetical situation for you, which I love. Let’s say you are… I don’t know, you have an iPhone app, you’re trying to decide if you should invest in an Android app. And you don’t have any historical data on the Android app, because it doesn’t exist. So you need to make a decision about… And this is probably getting in the front end side of Eric’s thinking, you need to make a decision about whether you should invest in Android is the comparison to the iPhone usage that you have internally correct? Probably not. Because as I just said, different customer groups. So in that situation, I can see that some type of benchmarking from competitors might be useful as one of the inputs into helping you make a decision.

0:23:29.5 ES: But I think in that perspective, if you’re looking for information to sort of bushwhack your way forward.

[laughter]

0:23:43.2 ES: That’s a bit of market research. It’s called collecting benchmarks, but actually you’re collecting information signals for the purpose of market research, right? And you’re completely right, it’s on the front-end, because you’re trying to decide which path I should take to forge ahead. Yeah. But often my struggle in the consulting is that the organizations or the clients asking for benchmarks are actually not clear exactly what they would do with it. And the so-what question, if you ask them, of course, they get a little bit offended and defensive, right? So we’ve had clients say, stop asking, we just want our benchmarks. But the reality is that I don’t think they really know what they would do with it. But it’s a great piece of information to bring up to senior management and the boards and all that: here are the various benchmarks and where we stand. It has its appeal, obviously. But I don’t see many people use it to take better decisions.

0:24:46.0 VK: Can I ask a controversial question?

0:24:49.0 ES: Sure.

[laughter]

0:24:51.3 VK: I don’t know if you’re familiar with net promoter score. [laughter]

0:24:54.3 ES: I am.

[laughter]

0:24:54.9 VK: I mean, that is the ultimate benchmark, right? And there is lots of research from consulting that says it’s very closely tied to the revenue performance of a company. What are your thoughts on using that as a benchmark?

0:25:12.7 ES: Okay, so in the academic space, the Net Promoter Score, NPS for short, has actually been debunked. In case people are not aware, it’s been academically debunked, because it doesn’t hold up to scrutiny. I mean, it looks like a great, nice shorthand. And the reality, as with a lot of stuff that gets into the business world, is that it intuitively feels familiar, it’s easy to run with, and sometimes takes on a life of its own. And then the actual research about validation happens much later, and it sort of never sees the light of day. But academically, the paper has been debunked; that’s one.

0:25:50.8 ES: Two, they’ve found that it doesn’t actually give any sharp diagnostic or measure versus how organizations used to do it with their customer satisfaction surveys, where they had multiple questions and you could slice and dice; there was no lift against the previous methods that people employed. But it was a nice shorthand. And of course, with things that are shorthand, you introduce noise. I mean, when I first encountered it in my corporate life at Citibank in Asia… it came out of the US with Bain, right, working with professors. It was like, really?

0:26:26.9 ES: You benchmark to say anybody above seven or eight is good, anything below that isn’t. Shouldn’t, you know, five be the average? But no, the bar is much higher than the median point, you’re saying. But in Asia, everyone’s conservative. No one’s going to tell you you are good. I mean, in Asia we sort of understand this: never give good feedback, you always criticize, right? Because otherwise people’s heads will get big and they’ll all feel great about themselves. In Asia we take the opposite stance: everyone’s not good enough, right? So even customers will never give you that good feedback.

[laughter]

0:27:04.5 MK: And that’s such a good point, actually, about benchmarks being quite dangerous: they don’t account for cultural differences, or many of the other differences, right? Cultural just being one of them. When you take an average like that and try to apply it broadly.

0:27:21.6 VK: I have.

0:27:22.1 ES: Agree, agree.

0:27:22.3 VK: NPS has a special place in my heart. When I was in market research at the beginning of my career, there was a cable provider that we did a lot of customer satisfaction and also transaction-based satisfaction work for, so after you had an interaction with customer service. And when NPS came out and everyone was reading the book and talking about how it’s like a miracle cure, [laughter] they embarked on an NPS study where it was only five questions, and it was just getting at NPS for all 70-plus divisions, and they ran [laughter] it on a monthly basis. But think about the skew of the fit, right? It started with airlines and hospitality, where there was a lot of choice and lower switching costs. But we’re talking [laughter] about how likely are you to recommend your cable provider to a friend or colleague. Like, first of all, whoever is having that conversation at a party…

[laughter]

0:28:14.0 VK: Like, that’s curious behavior. [laughter] But also, especially at the time, this is like 2010, you actually couldn’t switch to all the different providers, and they were benchmarking themselves against Dish Network. And that was one of our favorites, ’cause they were like, oh, people like Dish better than us. And it’s like, they had to think your service was so terrible that they would pay to have a, you know, 20-foot dish installed on their house to try to circumvent [laughter] the service you could provide them, right? So it just ended up being this exercise of, how do we explain the volatility month over month across these 70-plus divisions? It was the craziest, wildest ride. And I remember our in-house statistician would have to breathe into a paper bag [laughter] when we’d get on the calls with the client to try to explain, well, why is this one up and this one down? And it’s ’cause it means nothing. It means nothing. It was just the wildest experience. I’ll just never forget it.

[laughter]

0:29:07.4 TW: Well, the volatility, I mean, it kind of goes to the bane of analytics in general: when a metric is noisy and it moves, people look at it and ask, why’d it go up? Noise. Why’d it go down? Noise. I’ve twice worked at mid-sized agencies that were doing NPS, you know, B2B, small sample. You’re taking a 10-point scale and chunking it into three buckets and then doing subtractive math on it. It’s taking this one thing and making it so crude. And then they were doing it with a small sample size: every quarter, some percentage of the clients would get hit up with it, and some smaller percentage would respond to it. It was just a noise generator. But by golly, if it went up, we’d hear about it. And if it was down low, nobody talked about it. But there would be that number thrown out, that an NPS above… I don’t even know what the number is.
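For anyone who hasn’t done the “subtractive math” Tim describes: standard NPS buckets a 0–10 “how likely are you to recommend us” scale into promoters (9–10), passives (7–8), and detractors (0–6), then subtracts percentages. A sketch with made-up responses shows why small B2B samples turn it into a noise generator:

```python
def nps(scores):
    """Net Promoter Score on the standard 0-10 scale:
    % promoters (9-10) minus % detractors (0-6); passives (7-8) drop out."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Two hypothetical quarters of a small B2B survey: two respondents drift
# from detractor to passive, and the "score" jumps 25 points.
q1 = [9, 10, 8, 6, 9, 7, 10, 3]
q2 = [9, 10, 8, 7, 9, 7, 10, 7]
print(nps(q1))  # 25.0
print(nps(q2))  # 50.0
```

With eight responses, each respondent moves the score by 12.5 points on their own, which is exactly the month-over-month volatility the hosts describe trying to explain away.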

0:30:05.5 VK: Like, airlines got a 70, you know? Yeah.

0:30:09.0 MK: What does that mean? That’s not a helpful benchmark.

0:30:12.8 TW: But on that, like, asking that question, especially in a D2C context… asking how likely are you to recommend, I mean, you’re selling at scale to consumers, so it seems like you could get volume. It seems like a reasonable question: would you recommend us? Especially if we’re in a growth area, how many of our customers would say they would recommend us? That actually feels like, in certain contexts, depending on what our strategy is, a fair question to ask. It’s when it gets jumped to, now subtract the detractors and do this, and compare yourself to a benchmark… all of a sudden, now it feels like it’s gone into the, nope, I’m just trying to, you know, get a number that makes me sound good.

0:31:06.7 VK: Okay.

0:31:08.6 TW: Maybe.

0:31:09.3 VK: Now that we’ve had our NPS deviation, can we please talk more about, I guess the front end benchmarking side? Because, okay, can I throw another scenario out there?

[laughter]

0:31:26.5 VK: Let’s say you’re looking at, I’m like thinking about this on the fly: marketing budget, right? And you’re trying to basically determine which markets are worth investing in. You’ve got a finite budget, you can’t go into all of the markets. So maybe you look at total addressable market, you look at TAM, you look at GDP, and you might also use your own internal, like, monetization rate or something like that. Like, it seems like in that scenario, using benchmarks is appropriate. But I noted, Eric, in your article, you said, we should never start with comparisons unless they help shape our decision inputs. Most don’t. So is this scenario shaping our decision inputs?

0:32:10.7 ES: I would say yes. But I would also then challenge are these kinds of information really benchmarks?

0:32:16.7 VK: Yeah.

0:32:17.0 ES: Right.

0:32:17.4 VK: Ooh.

0:32:17.7 ES: Yeah, because again, the word, the word is all loose, right?

0:32:21.3 VK: Yeah.

0:32:21.6 ES: So, so you say. Oh, I, you know, give me GDP or of the, the various country or options that I want to go invest in. Is that a benchmark?

0:32:30.6 VK: So you’re saying no?

0:32:31.9 TW: Or is that just market research?

0:32:33.5 VK: Okay. So we’re saying it’s market research.

0:32:36.4 MK: It’s primary and secondary research. It feels like, just ’cause it’s desk research, just because you’re not going to your customers or prospects, that’s still, like, a research input.

0:32:46.9 VK: I would agree with that.

0:32:48.1 ES: Yeah.

0:32:49.3 VK: Okay. So then let’s rewind the thing that I should have asked [laughter] right at the very start. How would you define a benchmark then, Eric?

0:32:57.4 ES: Okay. Yeah. So for me, a benchmark, strictly speaking in the space that I sort of rant about, is the relative difference to a competitor or to a space that I operate in. So, if I have a way to compare myself with somebody, as opposed to saying, is the market a good market or a bad market? That’s not necessarily about me comparing with somebody. So to me, anything that has some facet of competitor comparison would be a benchmark.

0:33:35.6 MK: And how is that different from a baseline? ‘Cause I think that that’s, that would be good to tease apart.

0:33:41.7 ES: Okay, okay. So, a baseline for me is a sort of a hurdle that you wanna get over. So, if you’re starting out and you’re building some capability, the baseline should be something that you try to get over. Again, like the CPA. So, if you think of CPA, I think CPA has this sort of two sides to it. Because at some point, if you’re gonna compete like a mass credit card with everyone, then, when you first launch your product, obviously the CPA is gonna be high because you’re trying to win over market share, trying to get your brand out there and awareness and all of that.

0:34:20.2 ES: But once you’ve got a mature business, logically, you need to get over some kind of, baseline CPA. Now, that baseline CPA may be very different from a benchmark comparison CPA, where you have a variety of different competitors at various levels. Again, different segments they go after and so on and so forth. But a good operating business would say that, if I can’t get a CPA of X dollars, then I’m not gonna be running a profitable business regardless. And I think to me, that baseline is about some kind of minimum hurdle that you wanna get over so that the business makes sense.

0:34:58.6 TW: This is another axe I have to grind is like, I feel like the other thing I will hear in the CPA example, it would say, well, what CPA do we need? And it’d be great if there was like math done to say, we’ve got to at least, the CPA has to be below this, or we’re not gonna be profitable. I feel like I’ve run into more often, we’ve never done this before. So let’s just run it out there and let’s get a baseline for our CPA. And then we’ll know going forward, what it needs to be, which also sends me kind of around the bend, ’cause the way you just articulated is saying, no, you’re setting a baseline as opposed to, I’m just gonna do whatever and kind of do a good job and gather some data.

0:35:46.3 TW: And ’cause that’s a can that can be kicked down the road again, and again, and again, there’s always an excuse to say, well, I don’t have a baseline. I don’t have a historical internal data. So I kind of tend to think of like an internal benchmark and a baseline as being kind of comparable, but I’d get irritated with them as well. ‘Cause it’s also like, you can’t possibly expect me to set a target for what good is. And because I’ve never done this before and I’m like, bullshit, I can’t, but…

0:36:22.4 MK: So wait, sorry, Tim, are you saying that an internal benchmark and a baseline are kind of interchangeable? Am I following that?

0:36:33.7 TW: I feel like that’s how I see it used and tend to use it. I don’t know that that’s a hill I’m gonna die on.

0:36:42.6 MK: I like it like that because I do think, ’cause a lot of times inside an organization, you do have the context that you don’t have when it’s an external benchmark. Like, oh, well this one had like half the marketing budget. So you have to take that in consideration. It’s kind of like you have the use with caution, kind of like the baked in assumptions or the things that are kind of really different about that point of comparison. So you understand how valuable it can be to helping you set context. And so that is how, one of the reasons why I think about that too, just because it gives you, you do have that background. You can source the information again to assess how helpful it is.

0:37:18.7 VK: Couldn’t you do the same for external benchmarks too? Like I get that maybe you don’t have as much context, but you could still have like use with caution warnings of like, this is what we do know about how they were created or whatever. Or is it like, you just feel like it’s way too black boxy? Oh, everyone’s shaking their heads. So I’m gonna assume I’m super wrong on this.

0:37:43.4 ES: Yeah. So for me, most information, most data, are equivocal, in the sense that there are multiple interpretations, and often conflicting interpretations, and that’s the issue. When it goes up, someone can say, oh, you’ve done a good job, or, it’s just noise, and vice versa. I think we should start with, look, whatever benchmark exists, how much equivocality does this benchmark have? And if, hand on heart, we’re honest and say, look, actually there’s quite a lot of equivocality in this benchmark, then is it going to be useful? Because ultimately, even as an evaluation metric, we are biased, in the sense that we reward ourselves for all the successes, whether they’re on us or not, and then we’ll try to find excuses and reasons for why we fail. And if we can make a valid excuse because the metric is equivocal, then does it really help you chug along?

0:38:54.5 VK: Such a good point.

0:38:57.3 TW: And that is like, if you see it and you exceed it, you’re like, look, this magical thing, we did better. And if you see it and we did worse, then you say, ah, it must be garbage.

0:39:09.8 ES: The market looked against me.

0:39:12.0 TW: I mean, Val and I had the same client, this was a few years ago, that had an agency with multiple clients. When they would report their media results, absent CPA, just the nature of where they were, they would look at cost per click or CPM, which is super, super common. And they were constantly reporting that they beat benchmark. And they would say, and guess what, because we have so much data, you’re beating benchmark for your sector. And the client would just take it and say, look, this campaign was great, our CPM was below benchmark, or our CPC was below benchmark. And it’s like, yeah, but that’s such a noisy thing. No, no, no, they told us the data they were using was totally apples to apples. Which is, all the kids are above average.

0:40:11.7 ES: So just sorry to interrupt and jump in. So saying, okay this client, it says that my CPM is below benchmark and look how well I’ve done. You can also flip the narrative and simply say, did we under invest? Did we leave money on the table? Because if we were at benchmark, couldn’t we make more?
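For reference, the arithmetic behind the two metrics being benchmarked here is simple; a quick sketch with invented spend and volume figures:

```python
def cpm(spend, impressions):
    """Cost per thousand impressions (cost per mille)."""
    return 1000 * spend / impressions

def cpc(spend, clicks):
    """Cost per click."""
    return spend / clicks

# Invented campaign: $5,000 buys 2M impressions and 4,000 clicks.
print(cpm(5000, 2_000_000))  # 2.5
print(cpc(5000, 4_000))      # 1.25
```

Both are pure price metrics: a below-benchmark CPM says nothing about whether those impressions produced value, which is exactly why the same number can be read as great buying or as money left on the table.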

0:40:29.4 MK: Can I flip this? So in my mind, and again, people might violently challenge me on this. There tend to be kind of like two trains of thought. I have found, when you’re working with executives, one does tend to be the like, how are we doing against our competitors? And I do find then there are also the execs that are like, I don’t care what our competitors are doing. We’re running our own race. How are we comparing year on year or like to the last time we did like very much about internal comparisons. If you have got the one that is very focused on, how are we doing against our competitors? I feel this benchmarking discussion is something you would need to bring up. How do you think you do that in a constructive way that would get them, I guess, to see the like, I don’t wanna say like the errors of their way, ’cause that sounds super patronizing, but, how do you start to educate them about this?

0:41:28.6 ES: I think in business, you definitely have to have competitive information, whether it is in the form of a benchmark or not. I mean, business is not a one-man race; you’re obviously competing in a space. And so to say that I’m just going to isolate myself and just look at internal metrics, and then, yay, I’m successful or not, I think that’s not wise and definitely not realistic.

0:41:54.2 ES: But to run a business entirely based on competitor evaluation and where I am at each point in time, it’s also meaningless because then you don’t have a mind of your own and making a decision whether I wanna stick to it or not. I think it’s really the collection of information that you would use. So, if you’re saying, look, I want some competitor benchmark, then it is because I have some kind of evaluation or decision uncertainty that I can fill in with that. Recognizing also that the minute I go out of the organization with external information, then there is a lot more noise. And I don’t think people realize that because they are thinking internal metrics and external metrics, yeah, they all have variance. They’re not the same kind of variance. The internal metrics in many instances, you can control the variance. Even if you say there’s noise, I can always isolate it because I know something about my process. But with the external one, you don’t even know the nature of the noise, let alone wanting to try and control that.

0:43:00.2 MK: And also to say there is a lot of value in getting competitor information in the context of a decision you’re going to make. And so one area where I’ve seen a lot of clients do this, especially with my bias coming from market research, is understanding sentiment or attitudes. And so sometimes shifting away even from NPS: to understand word of mouth, if that’s what it’s really trying to get at, then let’s ask some questions about that, or how likely are you to do XYZ behaviors. And I think some of those are helpful to capture against competitors too. And that can be informative of where you play, or how closely you are delivering on your value proposition or differentiation from key competitors. But again, I don’t necessarily consider those benchmarks, because you’re still saying we’re gonna have a separate conversation to evaluate our own performance and the choices we’re making; that can still just be more on the input side. But Eric, you can let me know if [laughter] I misinterpreted that.

0:44:00.2 ES: I would agree. I thought you were saying, look, I need this sentiment analysis. Then, of course, the challenge would be who does that best, and how do they do it so that it’s comparable and they’ve sort of normalized the noise in it. I think where the rub is, as a sort of startup consultancy and all of that, it’s strange. I know, when I talk to clients and they know that I’m a boutique consulting business and say, well, can you get me benchmarks? Well, yeah, I mean, I’ve consulted for a range of clients, but I’m not a McKinsey. I’m not an Accenture or a Deloitte, where you work with everyone and you’ve sort of seen the ins and outs of those businesses. And approaching a small startup for external benchmarks, even though you can say maybe they’re prepared to do it because they need your business and all of that, they don’t really have the kind of methodology that would stabilize the noise.

0:45:02.7 MK: Yeah, that’s a good way to put it.

0:45:03.8 ES: And so you can get a number that ultimately, and again, you can fiddle a number to make the client happy and that’s not going to be useful.

[laughter]

0:45:14.4 TW: Well, and that’s, I mean, you take the large-scale consultancies that say, we have a massive customer database, we therefore have access, and we are going to obfuscate it and develop benchmarks for you. That’s what Boston Consulting Group or McKinsey or Deloitte is trying to sell you. So the metrics they’re gonna be the tightest and cleanest on gathering benchmarks for just happen to be the metrics that they say their services will help you with. So there’s a little bit of a fox in the henhouse there. They may factually be accurate, and they’re probably behaving pretty well, like they’re not out there being malicious, but they do have sort of perverse incentives to have the new client or the prospect be performing below the benchmark, because that’s how they’re going to get paid.

0:46:31.7 TW: So even considering who’s doing the aggregation, like the National Retail Foundation or Federation, NRF, whatever that is. They would gather like three metrics, conversion rate and, you know, whatever, from their members. But what was their incentive? Well, that was so they could publish a book once a year that would have these three metrics in it, and that would be part of their justification for their members to re-up. But it doesn’t seem like there’s a totally objective and altruistic market out there in the business world saying, we’re gonna go through all this work to minimize the noise in benchmarks around a handful of metrics. Like, cui bono, who benefits from that? So that just goes back to questioning the usefulness of them. Yeah.

0:47:31.7 VK: Well, we’re definitely not going to get through this episode without me being able to have a little bit of a cathartic moment about my most hated, least helpful benchmark, which, in my previous role, when I was very focused on experimentation, a client didn’t go by where we didn’t have to address it. So let’s see if anyone can finish this sentence: even best-in-class experimentation programs have a win rate of...

0:48:03.9 MK: 30.

0:48:04.0 VK: It’s low. 30.

0:48:05.2 MK: Oh, 30. That’s not close at all. Sorry I thought you were gonna say oh anyway.

0:48:10.9 VK: No no yeah.

0:48:11.4 MK: A win rate of 30%.

0:48:11.7 VK: Best-in-class optimization and experimentation programs, that’s the line. And I don’t know who said it first, but the industry kind of rallied around it. I’m telling you, Google it, you’ll find everyone references that point. But there is no relationship between win rate and how much smarter you’re making your organization by taking that hypothesis-led mindset, or using controlled experimentation to de-risk decisions. And so it would just irk me to the umpteenth degree: well, let’s put this down as a benchmark against the 30%. Okay, let’s have a more meaningful...

0:48:45.4 MK: I’ve never heard that.

0:48:45.4 VK: Really? Moe I’m so surprised.

0:48:50.7 MK: Am I doing it wrong? Like the thing I always hear is like you need to have a 95% confidence interval. Like that’s the thing I always hear.

0:48:55.8 VK: Yes you always hear that too for sure. But the benchmark of win rate.

0:49:00.7 MK: But I’ve never heard the 30% win rate. But I don’t know maybe I haven’t been doing enough experimentation lately.

0:49:05.6 VK: Not helpful.

0:49:09.8 MK: So, Val, how did you handle that? How did you address that with all the clients?

0:49:14.9 VK: It was all about, like, do we think that really has any relationship with the more meaningful metrics about why you’re making this choice to invest in experimentation? It’s the same thing as the false relationship between NPS and revenue; like, there was no predictability or relationship between those two things. And so, let’s decouple those concepts and see how we can make sure we’re putting smart inputs into the machine, to make sure that, again, we’re testing what matters to the business and things that are going to help move things forward. Versus, well, you know, I can test these button colors over here without getting legal approval, so let’s push those 30 tests through.
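Val’s point, that win rate and program value are decoupled, can be made concrete with a toy sketch; the two programs and every lift figure below are invented for illustration:

```python
def program_summary(results):
    """Summarize a testing program as (win_rate, total_realized_lift).

    `results` is a list of (won, lift_if_shipped) pairs; only winning
    variants ship. Identical win rates can hide wildly different value.
    """
    wins = [lift for won, lift in results if won]
    return len(wins) / len(results), sum(wins)

# Two invented programs, both at the "benchmark" 30% win rate:
# one tests big strategic bets, the other reshuffles button colors.
bold  = [(True, 500_000), (True, 250_000), (True, 150_000)] + [(False, 0)] * 7
timid = [(True, 2_000), (True, 1_500), (True, 500)] + [(False, 0)] * 7
print(program_summary(bold))   # (0.3, 900000)
print(program_summary(timid))  # (0.3, 4000)
```

Both programs hit the benchmark, but one produced roughly 200 times the value, which is why win rate alone is a poor target.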

0:49:54.0 TW: I briefly before we started this show I thought are we gonna be able to talk for a whole show about benchmarks? And we have.

0:50:04.3 MK: Mainly because I clearly did not understand what benchmarks are. So that’s been a helpful place.

0:50:10.8 TW: But I think Eric nailed it. Like, it is a word where you think, oh, this is a plain word, and it does get contorted; different people can mean different things by it, which is another whole area where we can get in trouble if we’re not talking about the same thing. I could get labeled as the person who hates benchmarks when somebody’s actually thinking I hate market research. So.

0:50:34.8 MK: I’ve realized, to be honest, through the course of this conversation, that when I talk to finance and they say benchmarks, they mean market research. That has been my epiphany in this conversation. And we are often working on things together, and now I’m like, oh, I need to reframe this. So this has been very helpful, Eric.

0:50:54.4 TW: Well, there are more things I would love to chat about, but I am sitting in Michael Helbling’s seat and he wants it back, so we’re gonna have to start to wrap. Before we close out, we always like to do a last call: go around and have everyone share a thing or two that they found interesting, related to benchmarks or not. Hopefully it’s an above-baseline-quality last call, but if not, that’s okay too. So Eric, you’re our guest, do you wanna share the first last call?

0:51:32.1 ES: Sure, sure. Okay. But it’s not related to benchmarks [0:51:34.9] ____.

0:51:36.9 VK: That’s okay. Mine’s not either.

0:51:38.1 TW: Talk a little bit about…

0:51:39.6 ES: Expected, yeah. So this was an article I read on Medium, which I post my articles on as well. And it’s all the rage now with generative AI and artificial general intelligence; everyone’s worried that we are all going to hell in a handbasket.

0:52:00.9 ES: It’s a terminal event, right, where the AI wakes up and all of that. And this person on Medium wrote... I don’t know the person’s name at all, because they write under a handle, a pen name, and the pen name is From Narrow to General AI; that’s all I see for the author. And the title of the blog, or the article, is actually a very long theory of intelligence that denies teleological purpose. Okay, so the title was so odd that when it popped up in my inbox on Medium, I said, okay, let’s check it out. It’s a pretty long article, a little bit philosophical, but one of the points they were making, about why we won’t get to artificial general intelligence in the near term, really resonated with me. When we think of AI today, we think that it will be able to reason and solve, and of course there are arguments both ways, but clearly we’re making some progress. But the author here makes a very nice, succinct argument to say, look, all of AI ultimately comes down to the space called problem solving. And you can push for it.

0:53:14.8 ES: You can even say, well, at some point maybe the AI will be able to reason well enough and all of that, but it is still in the space of problem solving. But the author says, actually, the human experience is not defined by problem solving. In fact, a big chunk of it is defined by problem finding. And that was a huge aha moment for me. It’s like, it’s true, I mean, we make our own problems. Look at this conversation on benchmarks: we didn’t have a problem before, and then we define it, shape it, we argue it. And this idea of problem finding, problem defining, was a huge aha moment for me.

0:54:01.7 VK: I love it.

0:54:02.4 ES: It says, no, AI isn’t built to do that. Yeah.

0:54:05.6 MK: Oh, I love that.

0:54:06.4 TW: That’s good. Val and I are salivating because that’s kind of core to the facts and feelings process is identifying problems and then thinking through how they might be solved. So… I like it. Very good. Nicely done. Val, what’s your last call?

0:54:26.5 VK: Sure. So mine’s a twofer, but both of them are relatively quick. One, I just have to give a shout out to Eric. I know you mentioned in the intro, Tim, that Eric had been doing a publis…

0:54:36.6 TW: That was gonna be my twofer was gonna be a shout out to Eric. Okay.

0:54:38.5 VK: Well, guess who got to go first?

0:54:40.4 TW: I guess I’m just gonna have one then.

0:54:45.7 VK: Well, maybe you’ll call out some different pieces, but I love the way you write too, Eric. Like the, there’s a whole section in there about the problem with dashboards, the problem with data visualization, the problem with data literacy. And I just like, love the stance that you take and the way that you break it down. And it’s always like really succinct. And so it’s really a fun read. So I’ve enjoyed following you and so glad that you could be our guest today.

0:55:05.8 ES: Thank you. Thank you for that.

0:55:07.1 VK: So that’s one. And Tim, if you have some specific ones, I didn’t go too deep, so you can throw some out too. And then the second one is an upcoming conference that you all might have heard of: Experimentation Island. So February 26th through 28th of next year, it is its inaugural year, so Kelly Wortham and Ton Wesseling are bringing to the US the best parts of Conversion Hotel that happened over in Europe years ago.

0:55:36.3 MK: Is it on an island?

0:55:37.6 VK: It is on an island.

0:55:41.4 MK: What? Maybe I need to go to this.

0:55:44.5 VK: It’s gonna be awesome. They’re doing a lot to really make sure that the experience of the attendees is gonna be great, but it’s on St. Simons Island, off the coast of Georgia, which...

0:55:53.1 TW: There’s a keynote about benchmarking your win rate for your experimentation program.

0:56:00.8 MK: Triggered. Yeah.

0:56:00.9 VK: Tim and I are speakers. So I’m super excited. Oh, performers. They’re called performers, but yeah, there’s some good programming.

0:56:05.7 TW: I did not know that.

0:56:08.0 VK: Yeah, Tim, get into it. But I’m excited. So we’re just starting doing some of the planning. And so what was front of mind, I just wanted to drop that for our listeners so they could plan for that.

0:56:19.7 MK: To be clear, I was thinking like Hawaii.

0:56:23.8 ES: When you’re thinking island, right?

0:56:26.2 MK: Yeah. I’ve just looked it up. I’m like, I’m gonna have to mention this to Ton and Kelly.

0:56:33.8 ES: Benchmarks, yeah. What islands?

0:56:38.7 VK: There you go.

0:56:43.1 TW: Well, Moe, what’s your last call that we can then tear to shreds?

0:56:48.1 MK: To shreds, yeah. Okay, look, I always bang on about the Acquired podcast, but obviously I traveled not too long ago and I got a snippet of time to listen to a couple of things. And it’s two particular episodes that just blew my mind: one is the episode on Costco, and the other one is the one on Hermes. And I just love how these guys really get into the history of a company. There was so much stuff about Costco that I didn’t know that now makes me probably an even bigger Costco lover. And likewise, I now have this obsession with wanting to buy something from Hermes, which I never had any desire to do ever. But that’s not actually my real last call, because I’ve mentioned that podcast many, many times. I found this Instagram called To You From Steph, and it’s really about, like, growth and personal development. And you’ll see some of, like, quite common, I guess, sentences and posts about growth and personal development.

0:57:48.1 MK: But she’s such a beautiful designer. And I don’t know, I’m still trying to figure out where and how I can use it. Like it’s comments like talking about like the heaviness of the load or like what today’s progress feels like. It’s very personal developing, but like her posts are just so beautiful that it kind of makes you revisit some of these sentiments. And I’m trying to figure out like how I can adopt, I don’t know, I’m not creative or artistic, so I have a lot of admiration for her page and just how she’s getting, just making, revisiting some of the thoughts really nice just because of how beautiful they are. So yeah, that’s a bit of a random one.

0:58:26.6 VK: Does she post it from an island?

0:58:29.8 MK: No, but it would be better if it came from Hawaii, like obviously.

0:58:36.0 VK: I’m excited to check it out. All right.

0:58:37.8 MK: And over to you, Tim, what’s your last call?

0:58:40.3 TW: So I promise we do not consistently, like, log-roll the guests, but same thing, I was also gonna note, Eric, your weekly posts, and they’re very consumable. But one specifically, because we had a listener who had submitted an idea. So please, listeners, continue to submit ideas. We have a long list, and I swear the quality of the show ideas has gone up, like, markedly in the last 12 months.

0:59:09.9 TW: But somebody had actually chimed in and asked, like, how am I using data at, like, a small company, like, small data. And literally the next day, Eric had a post, it was how smaller organizations can build data analytics capabilities, sort of turning on its head how you approach that. So it wasn’t exactly what that listener was asking for, but... so that was, again, I’m now kind of hooked on your writing.

0:59:39.4 ES: Thank you.

0:59:41.4 TW: But my other one is, it’s like an oldie that’s new again, and I don’t think I have brought it up on here, but tylervigen.com, Spurious Correlations, the OG. I saw him years ago speak at eMetrics. I mean, he’s a fascinating guy, ’cause he’s like a BCG consultant in supply chain stuff, but he has completely revamped, and this was maybe six months ago, he redid tylervigen.com. Same Spurious Correlations, the same, you go there, it shows whatever two metrics that are trending together, but what he added was the LLM-generated academic paper that supports it. And I mean, it is fully academic-paper formatted, abstract, two columns, totally auto-generated. And I mean, you read them, they’re like maybe three or four pages, and like the level of rationale explaining why these two metrics...

1:00:36.1 MK: Shut the front door.

1:00:37.5 TW: Yeah, it’s, I mean, a lot of times you see that, you’re like, oh, that’s cute, like you think like, oh, that’s cute, like it’s the idea, no, I’ve actually read a few of these, because I’m like, these are so delightful to read. And the, I don’t know where he finds the time, I’m like, that wasn’t like, oh, I’m just coming up with a little, making a little ChatGPT app, the thing’s like formatted, and somehow he’s got it actually pulling rationalizations for theories that kind of…

1:01:14.4 MK: Well, what was one of your favorites? They always make me laugh.

1:01:16.5 TW: Yeah, and then, of course, you were going to ask, I logged it a while back, and now, of course, I cannot remember.

1:01:21.6 MK: Nicolas Cage movies and drownings, and, like, people who eat cheese and divorce, or something.

1:01:27.7 VK: I mean, my sister would be very supportive of that as a non-cheese eater, she’s like, obviously, that’s the end of every marriage.

1:01:37.3 TW: Yeah, so with that, so, I’ll do my final housekeeping, and I realize I did not take my notes, ’cause Michael can usually just rattle these off, but Eric, thank you again for coming on the show, this has been a really fun discussion, and I picked up, what was it, it was bushwhacked, what?

1:01:56.6 MK: Bushwhacking your way through, yeah, so good.

1:01:58.6 TW: Bushwhacking your way through, yeah, other good stuff, too, but this was great, so.

1:02:05.3 ES: Thank you, thank you for having me. It was such a wonderful conversation, yeah.

1:02:11.3 TW: Awesome. Listeners, we love to hear from you, so reach out to us on the Measure Slack, on LinkedIn, or, if you wanna submit a topic idea, with or without a proposed guest, you can do that at analyticshour.io. You can also request yourself a free sticker there. So, thank you for listening. If you really are motivated and want to go onto your podcast listening platform and leave us a review or a rating, that’d be kind as well too. No show would be complete without thanking Josh Crowhurst, our mostly-behind-the-scenes producer, who makes the audio sound normal and less incoherent than it would if we published it raw. He also was kind of the engine behind our presence on YouTube, which we now have, if that’s your preferred consumption. And with that, regardless of whether you are listening to podcasts at a normal speed, at an above-benchmark speed, at a below-benchmark speed, at two-and-a-half speed for Val and for Moe: keep analyzing.

1:03:25.1 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

1:03:42.8 Charles Barkley: So smart guys wanted to fit in. So they made up a term called analytics. Analytics don’t work.

1:03:49.1 Speaker 7: Do the analytics. Say go for it, no matter who’s going for it? So if you and I were on the field, the analytics say, go for it. It’s the stupidest, laziest, lamest thing I’ve ever heard for reasoning in competition.

1:04:03.1 S1: Hi everyone. Welcome to the analytics Power Out. You know what? We’re gonna start over one more time. Wow.

1:04:14.8 MK: Wow, that tiredness really gets you.

1:04:18.9 TW: Word number seven.

1:04:21.5 MK: That’s a record.

1:04:22.6 TW: One more time.

1:04:24.7 S1: Rock, flag, and NPS rules.

The post #254: Is Your Use of Benchmarks Above Average? with Eric Sandosham appeared first on The Analytics Power Hour: Data and Analytics Podcast.


It’s human nature to want to compare yourself or your organization against your competition, but how valuable are benchmarks to your business strategy? Benchmarks can be dangerous. You can rarely put your hands on all the background and context since, by definition, benchmark data is external to your organization. And you can also argue that benchmarks are a lazy way to evaluate performance, or at least some co-hosts on this episode feel that way! Eric Sandosham, founder and partner at Red & White Consulting Partners (and prolific writer), along with Moe, Tim, and Val break down the problems with benchmarking and offer some alternatives to consider when you get the itch to reach for one!

Links to Items Mentioned in the Show

Photo by Noah Silliman on Unsplash

Episode Transcript

[music]

0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:14.1 Tim Wilson: Hi everyone. Welcome to the Analytics Power Hour, where all the data is strong, all the models are good looking, and all the KPIs are above average. This is episode number 254, and listeners to this show who are also public radio nerds of a certain age will absolutely get that reference. I don't even know if my co-hosts will get that reference. Val Kroll, did you get my public radio deep cut there?

[laughter]

0:00:39.2 Val Kroll: I’m sorry to disappoint him, but absolutely not.

[laughter]

0:00:42.7 TW: Okay. Your dad totally would have. So this will be a conversation to have with him afterwards. [laughter] And Moe like you’re in Australia, I don’t even know if the A, B, C or any other service ever carried Prairie Home Companion. Do you have any idea what I’m talking about?

0:00:56.4 Moe Kiss: I’m totally and utterly baffled. [laughter] Right now.

0:01:00.8 TW: Oh right.

0:01:00.9 VK: Great place to start.

[laughter]

0:01:03.4 TW: That's how things go for me in most of my social interactions. [laughter] So I'm Tim, and I just did a little benchmarking right there on my Garrison Keillor knowledge relative to my co-hosts. So, for listeners who don't get it, I'm referencing a radio show that ran for years in the States that included a segment called The News from Lake Wobegon, which was a fictitious town in Minnesota. And the segment always ended with the host, Garrison Keillor, noting that in Lake Wobegon, all the women are strong, all the men are good looking, and all the children are above average, with those exact beats.

0:01:49.2 TW: So, that last bit is sort of a lead-in to this episode, because we're gonna be talking about averages, specifically benchmarks, this oft-requested comparison metric that we get so often from our business partners. Personally, these sorts of requests tend to trigger me to curse profusely. [chuckle] For now, though, I'm just gonna introduce our guest, who wrote a pretty thoughtful article on the subject on Medium as part of his first year of penning one post a week on the platform, which is impressive.

0:02:21.7 TW: Eric Sandosham is a founder and partner at Red & White Consulting Partners, where he works with companies across a range of industries to help them improve their business decisioning and operating processes. Eric is also on the adjunct faculty at Nanyang Technological University, Singapore Management University, and the Wealth Management Institute. He was previously the customer intelligence practice lead for North Asia for SAS, and before that was the managing director and head of decision management for the Asia Pacific Consumer Bank at Citibank Singapore. And today he is our guest. So welcome to the show, Eric.

0:02:43.6 Eric Sandosham: Thank you very much. Thank you. Thank you for having me, Tim.

0:02:46.7 TW: I should ask you, have you ever heard of A Prairie Home Companion, or...

0:02:49.6 ES: I’ve heard that phrase.

0:02:52.4 TW: Okay.

[laughter]

0:02:52.8 ES: Yeah, but I’ve not heard the radio broadcast, obviously.

0:02:58.1 TW: Or you're at least being polite. I mean, I've seen Garrison Keillor live, so I'm telling you, I'm gonna get some Slack messages, people saying, I can't believe they hadn't heard of A Prairie Home Companion. [laughter] But I will not belabor that any further. Trust me, it killed in a certain group. So, Eric, I noted in the introduction that we, or actually Val, came across you because of a post that you wrote on Medium, and the post was titled "The Problem with Benchmarks," and it had a subtitle, which was, "Why Are We Obsessed with Comparisons?" So maybe we can start there, Eric: why are we obsessed with comparisons?

0:03:38.9 ES: I think it's such a built-in phenomenon as a human species to always compare. As we're growing up, I'm sure our parents always tell us, don't compare with your neighbors, don't compare with your friends and colleagues, just compare with yourself, as long as you're making that onward progress. But I don't think any of us really stick to that. It's just so natural: when you step into a room, you get into a new place of work, or in anything that you do, you're trying to size yourself in relation to someone else.

0:04:13.4 ES: And I think it maybe starts with trying to understand our place in the larger scheme of things. And this carries over into the business world. It's the first thing typically most organizations ask for. And as you mentioned, I run my own consulting practice, and at the very start of many of the consulting engagements, the clients would ask, can you help me with some benchmarks? I'm trying to get some information and reference points and all of that. And once we get there, we can go into the deeper stuff. But it seems always top of mind for them to want to have a sense of almost like a yardstick, or a placeholder to know where they are on that map.

0:04:57.0 TW: Moe, does your team get hit with requests for tracking down benchmarks? Creating benchmarks, internal, external?

0:05:05.5 MK: Yeah, I have some pretty strong thoughts that are gonna come out in today’s episode, I suppose. Yeah, pretty often. And one of the points that I think is quite interesting, I don’t know, and I’m really going to, yeah, interested to see how the conversation goes, Eric, because I think it is maybe different when you are like a startup or in an earlier stage of a business, and particularly like trying to understand opportunity sizing. So, I’m curious to kind of get your perspective on that, of like where there are useful comparisons to make.

0:05:41.0 ES: Okay. Yeah. So, I think in the article that I wrote, I’m a big fan. Maybe let, let me take a step back. I’m a big fan of the way we look at data in terms of information signals as opposed to just data as data. And, I constantly ask myself, when I look at any piece of data or any report, what is the information signal or signals contained, within there? And so, the same thing I would apply to benchmarks. What kind of information signal is the client looking for when they make such a request for a benchmark.

0:06:19.2 ES: And to keep things simple. I sort of think of it as both a front-end and back-end sort of information signal. You’re either thinking of benchmarks as an input to a decision. You’ve got certain uncertainties about your decision making process and say, well, if I know some stuff that I didn’t know before, I would make a little bit of a better decision. Or you’re looking at it on the back-end where you say, look, I’ve already taken the decision, but I don’t know whether I’m on track. And can the benchmark therefore give me that sense of, my place and whether I’m still keeping to the path that I intended to go on too. And so simplistically front-end, back-end, and I think most organizations when they look at benchmarks tend to look at it from a back-end process as an evaluation, information signal. And therein lies the problem because is it the right way to evaluate whether you are on the path? Is it a right way to evaluate your actions and your strategy in comparison to others who may or may not be doing the same thing.

0:07:23.8 MK: Because the business strategy is different or the customer set is different or.

0:07:29.2 ES: Exactly.

0:07:30.0 TW: Well, and I think you had that front-end versus back-end, 'cause the front-end, that was a little bit of, ah, I thought, ah, I'm so used to just raging against them, because I feel like I'm generally being asked... I'm way less kind than saying it's human nature. And I attribute it more to, it's a way to duck out of saying, well, I don't need to figure out what I'm expecting to achieve. Just do the thing and then find me a benchmark to compare it to. And if I'm above the benchmark, I'll say, yay. So that falls under that back-end. That front-end part was actually pretty interesting. Like, you called out salary benchmarks: if you're an HR department and trying to figure out, where are we relative to the market?

0:08:16.0 TW: And I've worked at companies that have said, we wanna pay at market, 'cause we think the quality of the work and the living is way better, so the secondary benefits are worthwhile. Or, hey, it sucks to work here, we've gotta pay above benchmarks. So that, to me, I was like, oh, okay. Not that those are perfect; there's still all the noisiness in trying to get those sorts of benchmarks. But you brought up pricing as another one, to say, if I'm selling something, I need to figure out what's kind of the normal sell rate, because I need to think, is what I'm offering higher value or lower value, and adjust accordingly. So that I thought was actually pretty useful. It's just that the back-end just feels lazy to me.

0:09:10.9 ES: Yeah. I like the point you're making about being lazy, in the sense that, well, the consultant is here anyway, and since we're gonna pay the consultant, why don't we get them to do all the measurement for us? And by just doing the external measurement, I've sort of absolved myself of even doing any internal metric collection. Everything is just gonna be evaluated by something outside of the organization, and it just smacks of poor business management thinking. I mean, if you wanna try and complement the internal with some external, I think great. But very often, many of these companies that ask for benchmarks are just looking for the external evaluation. And at the end of it, whether you are up or down... so what?

0:09:58.8 S1: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:10:04.9 TW: Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.

0:10:10.1 S1: They sure have. They’ve got an easy to use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking and a customer data platform.

0:10:21.0 TW: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:10:28.2 S1: Yeah. Head over to piwik.pro and check them out for yourself. You can get started with their free plan. That’s piwik.pro. And now let’s get back to the show.

0:10:41.4 MK: Okay. So what if we blend them? This is not where I thought I was gonna go. But what if you are using those external benchmarks as an input to help you set your own target, as one of, say, many inputs? Like, one of your other inputs might be previous performance, or another department's performance, or something like that. And then this is one of those inputs you are choosing to help you set whatever your target is going to be. Like, what's the...

0:11:13.4 ES: View of that [laughter]

0:11:14.2 MK: Yeah. View of that?

0:11:16.4 ES: No, I think that's valid, to say, look, can it be an additional supplementary metric? The challenge, when you're using external metrics, would be attribution: whether you can make that direct connection and attribute the change, let's say in your market share or a satisfaction score, to a particular action or set of actions that you took. And typically the answer will be, it's difficult. Because there's so much noise, there are so many things happening, both internally, and then the market is doing its own thing. So at best you can sort of hand-wave and say, looks like it's affecting it, but you can't say it with any certainty. So even if you're supplementing, you will still have to take it with a pinch of salt. Yeah.

0:12:08.0 VK: And I think that that's a really good perspective, Moe, like the blending, and using that as potentially an anchor to think about, okay, so here's what's happening, but we can set our own target, whether it's below or above. But I think what you were just calling out, Eric, is an important consideration, because the strategies are different, because the consumers are different, because you think you're playing in this space because that's where you're strongest. So there's all that context where that kind of devalues that number a little bit, even if you were to average it out with the target-setting activity from the voices in the room, just because you don't have that context. And so that actually might undercut some of the value of the discussion that's happening internally about what success looks like.

0:12:54.7 ES: Yeah, that's true. Yeah. And again, you can define the market or the reference point in so many different ways, which many organizations do, to always come out smelling of roses, as they say. So, if I'm benchmarking myself to just one other person, well, somebody's gotta win. And that's part of the challenge also.

0:13:17.6 MK: One other thing, though. I think, like, the information that comes along with the request, if you were trying to break that down: what is someone actually craving? I like the way you were talking about that earlier too, Eric. I think sometimes people are worried that if we were to set a target ourselves, would that not be aggressive enough? Like, maybe we'd be setting the bar too low, or we're not thinking about a larger piece of the puzzle. And so they're doing it as a way to make sure that they're being aggressive and that they're not gonna be outpaced by just setting targets internally.

0:13:50.9 MK: Again, if you get to that conversation... Tim, if you only could see Tim's face. Tim is calling absolute bullshit on this. I can see it in his face. I mean, I'm talking about, if you're already getting to a place where your organization is comfortable with talking about those targets. And so this is a special beast. 'Cause I totally agree with you wholeheartedly, Tim, about laziness. I think that that's a shorthand way of saying, like, some resistance against the finality of setting, you know, or drawing a line in the sand. But I do think that sometimes it comes from a good place of making sure that we're not setting something that's just too easy of a target, or then we achieve it, and then what?

0:14:36.8 ES: But you’re doing something to achieve a result. And if you’re saying, this is gonna cost me something in time and money, and I would be happy if I spent this amount of time and money and achieve this result. To me, that’s the core of what a target is. And if you said, well, yeah, yeah, but we compared that target to some super unattainable, messy AF benchmark and that’s not aggressive enough. Like, to me, that’s like super distracting from the conversation. If I say, if we put $1,000 into this and we get $8,000 back, are we cool? And everybody’s like, sure. You’re like, well, I actually sandbagged. If I found these benchmarks, which by the way, don’t really exist, then I should have set it at 10,000.

0:15:30.3 MK: I'm not saying it's a fruitful exercise, especially when we get into conversations like, well, Netflix runs this many tests a year, so I think we should... okay, great, 'cause you're definitely built to run like Netflix. So I'm not saying it's a fruitful exercise; I'm just saying that's the place where that comes from sometimes.

0:15:48.1 VK: Can I just touch on... there was something as I was digesting Eric's blog post, and I did actually try to figure out which one it was, but I can't remember off the top of my head, and I'm hoping Tim, who's smarter than me, will remember. But there is this bias that people have to ask, what has previously been done, so I can use that as a frame of reference? Like, we're saying benchmarks aren't great used this way, but this is human intuition: what previous experience, or what have other companies done? Whether the data is right or not, or useful or not, is a different point. But we're talking about a human bias to seek what previous experience I can rely upon to understand what good is. And so we're talking about it like it's the worst thing ever, but this is our natural intuition as humans.

0:16:45.8 TW: I mean, my quick take is that I would rather just sit down and think about everything that I've learned or seen or known before, and what I'm doing, and what I expect to get out, and start with that. I'm gonna be way, way more emotionally tied to and invested in that than if I just had the analyst or the consultant go pull me a number. 'Cause if the consultant comes and gets a benchmark and then we wildly miss it, then it's like, oh, you must've pulled a bad benchmark. I've circumvented any ability to say, did this thing do what I expected it to do or not? If I pull that together myself, now I have information on what I did. And now when I'm considering the next thing, I can say, well, I gotta lower my target a ton, which hopefully may make me say, is it worth doing or not? But I don't know. What do you think, Eric?

0:17:52.3 ES: Actually, this is cutting it at, I feel, different perspectives. So the word benchmark, I think, is a loaded word, because it can mean many different things to many different people. And as you're talking, Tim, you're saying someone has this external reference and you're using it as a target, for example: am I doing well enough to achieve a particular reference target? Yeah, you're right, that's a benchmark, but it's a different kind of benchmark from, say, if I'm looking at relative market share, where I'm looking at a ranking. Or even pay and salary: it's not a target we use as a reference for how we wanna pay, but you wanna know what's your relative standing, what's your relative rank, with your competitor.

0:18:41.0 ES: I think Moe raised a good point, to say there is stuff we aim to achieve, for example, in certain practices where there's a long duration or history of maturity to it, and it's become almost commoditized. And if you are a startup, then you're saying, look, if I'm gonna do this, then I actually have to get to the similar state and run rate of whatever everyone else is doing. So for example, I come from the retail banking side. If you take a credit card business, an interesting benchmark would be CPA, cost per acquisition. And you say, look, everybody has to acquire a new credit card customer; we can't have very widely divergent costs of acquisition.

0:19:25.4 ES: At some point, if you run this well and you're matured, logically all of us will begin to converge around a very narrow range of values, right? And if you ask, is my business healthy? Am I doing all that I can to take out the waste and be effective in my targeting? Then logically I should be within that narrow range of CPA. And if I'm not, then there is something I'm missing, perhaps. And it can be an interesting diagnostic measure to say, what am I not seeing, right? What am I not getting right in my process? Because if everyone's converging, then logically I should be too.
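Eric's convergence diagnostic can be sketched in a few lines of Python. The function names, the peer numbers, and the 20% tolerance band here are illustrative assumptions, not anything specified in the episode:

```python
def cpa(marketing_spend: float, new_customers: int) -> float:
    """Cost per acquisition: total acquisition spend divided by
    the number of new customers acquired."""
    return marketing_spend / new_customers


def outside_peer_band(own_cpa: float, peer_cpas: list[float],
                      tolerance: float = 0.2) -> bool:
    """Flag when our CPA falls outside the narrow range that, per
    Eric's argument, matured peers converge on. A True result is a
    prompt to diagnose the process, not a verdict on the business."""
    low, high = min(peer_cpas), max(peer_cpas)
    return own_cpa < low * (1 - tolerance) or own_cpa > high * (1 + tolerance)


# A hypothetical issuer spending $100,000 to acquire 500 cardholders:
print(cpa(100_000, 500))                          # 200.0
print(outside_peer_band(200.0, [180, 190, 210]))  # False: within the band
print(outside_peer_band(400.0, [180, 190, 210]))  # True: something to diagnose
```

Used this way, the benchmark acts as a front-end diagnostic trigger (an input to the decision about where to look) rather than a back-end scorecard.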

0:20:02.5 TW: But that... I mean, that gets to... I mean, in the banking sector, I feel like I've had questions where people have asked me, what's a benchmark e-commerce conversion rate for retailers? And I asked them, have you shared what yours is publicly? Have you shared that with a trader? And they're like, oh my God, no, I wouldn't share that. It's like, well, but you think your competitors are sharing that? And I even think in banking, with a cost per acquisition, your customer lifetime value for different customers in financial services varies hugely.

0:20:46.8 TW: So an average CPA, when you're selling to a super high end, or you've got another product that's selling to a super low end... even within your company, you may say, well, we don't look for the same CPA for the different products, because these are much more valuable, and these are much lower value. I mean, it just feels like such a slippery slope that you go down to try to get those... It happens with salary information, too: it works for very specifically, clearly defined roles, where there are regional differences, and there's enough scale, and there's enough data collected. But I think we've all watched companies where HR struggles when they say, well, the service we subscribe to for our salary benchmarking doesn't have an analytics engineer role.

0:21:42.7 TW: So we're just gonna name our people data scientists, and then we'll compare to that. I mean, it's bizarre. So I agree, it's a really good use case. But it still runs into this idea that the benchmark is much cleaner, has much less uncertainty around it, than it actually does. And so, where does it come in?

0:22:13.9 VK: And can I just add to that, which is hysterical, because I feel I'm now arguing against myself. But a really good example, right, is iPhone users versus Android users. You might be like, oh, pull a benchmark report on mobile usage. And I know from looking at those that the customer sets are very different. The same is true of Amex and Visa: very different, like, customer lifetime value, that sort of thing. But you're like, oh, look at credit card usage, and they're totally different audiences, and, yeah, lifetime value. So I think that is where it can get dangerous.

0:22:46.7 VK: But, hypothetical situation for you, which I love. Let's say you have an iPhone app, and you're trying to decide if you should invest in an Android app. And you don't have any historical data on the Android app, because it doesn't exist. So you need to make a decision... and this is probably getting into the front-end side of Eric's thinking... you need to make a decision about whether you should invest in Android. Is the comparison to the iPhone usage that you have internally correct? Probably not, because, as I just said, different customer groups. So in that situation, I can see that some type of benchmarking from competitors might be useful as one of the inputs into helping you make a decision.

0:23:29.5 ES: But I think in that perspective, if you’re looking for information to sort of bushwhack your way forward.

[laughter]

0:23:43.2 ES: That's a bit of market research: nominally it's collecting benchmarks, but actually you're collecting information signals for the purpose of market research, right? And you're completely right, it's on the front-end, because you're trying to decide which path I should take to forge ahead, right? Yeah. But often my struggle in the consulting is that the organizations or the clients asking for benchmarks are actually not clear exactly what they would do with it. And if you ask them the "so what" question, of course, they get a little bit offended and defensive, right? We've had clients say, stop asking, we just want our benchmarks. But the reality is that I don't think they really know what they would do with it. But it's a great piece of information to bring up to senior management and to the boards and all that: here are the various benchmarks and where we stand. It has its appeal, obviously, right? But I don't see many people use it to take better decisions.

0:24:46.0 VK: Can I ask a controversial question?

0:24:49.0 ES: Sure.

[laughter]

0:24:51.3 VK: I don’t know if you’re familiar with net promoter score. [laughter]

0:24:54.3 ES: I am.

[laughter]

0:24:54.9 VK: I mean, that is the ultimate benchmark, right? And there's lots of research from consultancies that says that it's very closely tied to the revenue performance of a company. What are your thoughts on using that as a benchmark?

0:25:12.7 ES: Okay, so in the academic space, actually, the Net Promoter Score, NPS for short, has been debunked, right? In case people are not aware, it's been academically debunked, because it doesn't hold up to scrutiny, right? I mean, it looks like a great, nice shorthand. And the reality, as with a lot of stuff that gets into the business world, is that it intuitively feels familiar, it's easy to run with, and sometimes it takes on a life of its own. And then the actual validation research happens much later and sort of never sees the light of day. But academically, the paper has been debunked. One.

0:25:50.8 ES: Two, they've found that it doesn't actually give any sharper diagnostic or measure versus how organizations used to do it with their customer satisfaction surveys, where they had multiple questions and you could slice and dice; there was no lift against the previous methods that people employed. But it was a nice shorthand. And of course, with things that are shorthand, you introduce noise. Then, obviously, when I first encountered it in my corporate life at Citibank in Asia... it came out of the US with Bain, right, working with professors... it was like, really? You...

0:26:26.9 ES: You benchmark to say anybody above a seven or eight is good, anything below that isn't. Shouldn't, you know, five be the average? But no, the bar is much higher than the median point, is what we're saying. But in Asia, everyone's conservative. No one's going to tell you you are good. I mean, in Asia we sort of understand this. Yeah. Never give good feedback; you always criticize, right? Because people's heads will get big, you know, and they'll all feel great about themselves. In Asia we take the opposite stance: everyone's not good enough, right? So even customers will never give you that good feedback.

[laughter]

0:27:04.5 MK: And that's such a good point, actually, about benchmarks being quite dangerous: they don't account for cultural differences, or many of the other differences, right? Cultural just being one of them. When you take an average like that and try and apply it broadly.

0:27:21.6 VK: I have.

0:27:22.1 ES: Agree, agree.

0:27:22.3 VK: NPS has a special place in my heart. When I was in market research at the beginning of my career, there was a cable provider that we did a lot of customer satisfaction and also transaction-based satisfaction for, so after you had an interaction with customer service. And when NPS came out, and everyone was reading the book, and everyone was talking about it like it's a miracle cure, [laughter] they embarked on an NPS study where it was only five questions, and it was just getting at NPS for all 70-plus divisions, and they ran [laughter] it on a monthly basis. But think about the skews of the fit, right? Like, it started with airlines and hospitality, where there was a lot of choice, and it was a lot easier, like, lower switching costs. But we're talking [laughter] about, how likely are you to recommend your cable provider to a friend or colleague? Like, first of all, whoever is having that conversation at a party...

[laughter]

0:28:14.0 VK: Like, that's curious behavior. [laughter] But also, especially at the time, this is like, you know, 2010, you actually couldn't switch to all the different providers, and they were benchmarking themselves against Dish Network. And that was one of our favorites, 'cause they were like, oh, people like Dish better than us. And it's like, they had to think your service was so terrible that they would pay to have a, you know, 20-foot dish installed on their house to try to circumvent [laughter] the service you could provide them, right? So it just ended up being this: how do we explain the volatility, month over month, across these 70-plus divisions? It was the craziest, wildest ride. And I remember our in-house statistician would have to, like, breathe into a paper bag [laughter] when we'd get on the calls with the client to try to explain, well, why is this one up and this one down? And it's like, 'cause it means nothing. It means nothing. It was just the wildest experience. I'll just never forget it.

[laughter]

0:29:07.4 TW: Well, the volatility, I mean, it kind of goes to the bane of analytics in general: when a metric is noisy and it moves, people look at it. It goes up, and they're like, why'd it go up? Noise. It went down. Why'd it go down? Noise. I've twice worked at mid-sized agencies that were doing NPS: B2B, small sample, you're taking a 10-point scale, chunking it into three buckets, and then doing subtractive math on it. I mean, it's taking this one thing and making it so crude. And then they were doing it with a small sample size, and every quarter some percentage of the clients would get hit up with it, and some smaller percentage would respond to it. It was just a noise generator. But by golly, if it went up, we'd hear about it. And if it was down low, nobody talked about it. And there would be that thrown out, that an NPS above... I don't even know what the number is.
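For readers who haven't run the math Tim is describing: the standard NPS calculation buckets a 0-10 scale into promoters (9-10), passives (7-8), and detractors (0-6), then subtracts. A minimal sketch, with hypothetical survey responses, and the small-sample volatility he's complaining about called out in a comment:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a -100..+100 scale. Passives (7-8) count only in the denominator."""
    if not scores:
        raise ValueError("NPS is undefined with zero responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)


# The crudeness Tim describes: with n responses, a single customer moving
# from promoter to detractor swings the score by 200/n points. At n = 10
# (a plausible small B2B sample), that's a 20-point quarterly jump that
# means nothing.
quarter_1 = [10, 10, 9, 8, 8, 7, 6, 5, 9, 10]  # hypothetical responses
quarter_2 = [10, 10, 9, 8, 8, 7, 6, 5, 9, 6]   # same, but one 10 became a 6
print(nps(quarter_1), nps(quarter_2))  # 30.0 10.0
```

The bucketing throws away most of the information in the 10-point scale, which is why small samples make the quarter-over-quarter number so jumpy.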

0:30:05.5 VK: The airlines got a 70, you know? Yeah.

0:30:09.0 MK: What does that mean? That’s not a helpful benchmark.

0:30:12.8 TW: But on that, like, asking a question, especially in a D2C context, like asking, how likely are you to recommend... I mean, if you're selling at scale to consumers, it seems like you could get volume. It seems like a reasonable question: would you recommend us? Especially if we're in a growth area: how many of our customers would say they would recommend us? That actually feels like, in certain contexts, depending on what our strategy is, a fair question to ask. It's when it gets jumped to, now subtract the detractors and do this, and compare yourself to a benchmark, that all of a sudden it feels like it's gone into the, nope, I'm just trying to, you know, get a number that makes me sound good.

0:31:06.7 VK: Okay.

0:31:08.6 TW: Maybe.

0:31:09.3 VK: Now that we’ve had our NPS deviation, can we please talk more about, I guess the front end benchmarking side? Because, okay, can I throw another scenario out there?

[laughter]

0:31:26.5 VK: Let's say you're looking at... I'm, like, thinking about this on the fly... marketing budget, right? And you're trying to basically determine which markets are worth investing in. You've got a finite budget; you can't go into all of the markets. So maybe you look at total addressable market, you look at TAM, you look at GDP, like, you look at a bunch of things. You might also use your own internal, like, monetization rate or something like that. It seems like in that scenario, using benchmarks is appropriate. But I noted, Eric, in your article, you said: we should never start with comparisons unless they help shape our decision inputs; most don't. So is this scenario shaping our decision inputs?

0:32:10.7 ES: I would say yes. But I would also then challenge are these kinds of information really benchmarks?

0:32:16.7 VK: Yeah.

0:32:17.0 ES: Right.

0:32:17.4 VK: Ooh.

0:32:17.7 ES: Yeah, because again, the word is used loosely, right?

0:32:21.3 VK: Yeah.

0:32:21.6 ES: So you say, oh, give me the GDP of the various countries or options that I want to go invest in. Is that a benchmark?

0:32:30.6 VK: So you’re saying no?

0:32:31.9 TW: Or is that just market research?

0:32:33.5 VK: Okay. So we’re saying it’s market research.

0:32:36.4 MK: It's primary and secondary research. Just because it's desk research, just because you're not going to your customers or prospects, it's still a research input.

0:32:46.9 VK: I would agree with that.

0:32:48.1 ES: Yeah.

0:32:49.3 VK: Okay. So then let’s rewind the thing that I should have asked [laughter] right at the very start. How would you define a benchmark then, Eric?

0:32:57.4 ES: Okay. Yeah. So for me, a benchmark, at least speaking in the space that I rant about, is the relative difference to a competitor or to the space that I operate in. So, if I have a way to compare myself with somebody, as opposed to asking, is the market a good market or a bad market? That's not necessarily about me comparing with somebody. So to me, anything that has some facet of competitor comparison would be a benchmark.

0:33:35.6 MK: And how is that different from a baseline? ‘Cause I think that that’s, that would be good to tease apart.

0:33:41.7 ES: Okay, okay. So, a baseline for me is sort of a hurdle that you wanna get over. So, if you're starting out and you're building some capability, the baseline should be something that you try to get over. Again, take the CPA. If you think of CPA, I think CPA has sort of two sides to it. Because at some point, if you're gonna compete, like a mass-market credit card, with everyone, then when you first launch your product, obviously the CPA is gonna be high, because you're trying to win market share, trying to get your brand out there and awareness and all of that.

0:34:20.2 ES: But once you've got a mature business, logically, you need to get over some kind of baseline CPA. Now, that baseline CPA may be very different from a benchmark-comparison CPA, where you have a variety of different competitors at various levels, with different segments they go after and so on and so forth. But a good operating business would say that if I can't get a CPA of X dollars, then I'm not gonna be running a profitable business regardless. And I think, to me, that baseline is a minimum hurdle that you wanna get over so that the business makes sense.
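Eric's minimum-hurdle framing of a baseline CPA reduces to a back-of-the-envelope calculation. The function and every number in this sketch are illustrative assumptions, not figures from the episode:

```python
# A baseline CPA as a break-even hurdle: acquisition cost must stay below
# the gross margin a customer is expected to generate over their lifetime.
def breakeven_cpa(lifetime_revenue, gross_margin_pct):
    return lifetime_revenue * gross_margin_pct

# Hypothetical card product: $400 lifetime revenue at a 30% gross margin.
max_cpa = breakeven_cpa(lifetime_revenue=400.0, gross_margin_pct=0.30)
print(max_cpa)  # 120.0 -> paying more than $120 per acquisition loses money
```

Note that this is a different question from how competitors' CPAs compare; the hurdle comes entirely from your own unit economics.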

0:34:58.6 TW: This is another axe I have to grind. The other thing I will hear in the CPA example is, well, what CPA do we need? And it'd be great if there was math done to say the CPA has to be below this or we're not gonna be profitable. More often, I feel like I've run into, we've never done this before, so let's just run it out there and get a baseline for our CPA, and then we'll know going forward what it needs to be. Which also sends me kind of around the bend, 'cause the way you just articulated it is saying, no, you're setting a baseline, as opposed to, I'm just gonna do whatever, kind of do a good job, and gather some data.

0:35:46.3 TW: And that's a can that can be kicked down the road again, and again, and again. There's always an excuse to say, well, I don't have a baseline, I don't have historical internal data. So I tend to think of an internal benchmark and a baseline as being kind of comparable, but I'd get irritated with them as well, 'cause it's also, you can't possibly expect me to set a target for what good is, because I've never done this before. And I'm like, bullshit, I can't, but…

0:36:22.4 MK: So wait, sorry, Tim, are you saying that an internal benchmark and a baseline are kind of interchangeable? Am I following that?

0:36:33.7 TW: I feel like that’s how I see it used and tend to use it. I don’t know that that’s a hill I’m gonna die on.

0:36:42.6 MK: I like it like that, because a lot of times inside an organization you do have the context that you don't have when it's an external benchmark. Like, oh, well, this one had half the marketing budget, so you have to take that into consideration. It's like you have the use-with-caution warnings, the baked-in assumptions, the things that are really different about that point of comparison. So you understand how valuable it can be in helping you set context. And that's one of the reasons why I think about it that way too: you do have that background, and you can source the information again to assess how helpful it is.

0:37:18.7 VK: Couldn’t you do the same for external benchmarks too? Like I get that maybe you don’t have as much context, but you could still have like use with caution warnings of like, this is what we do know about how they were created or whatever. Or is it like, you just feel like it’s way too black boxy? Oh, everyone’s shaking their heads. So I’m gonna assume I’m super wrong on this.

0:37:43.4 ES: Yeah. So for me, most information, most data, are equivocal, in the sense that there are multiple, and often conflicting, interpretations, and that's the issue. When it goes up, someone can say, oh, you've done a good job, or, it's just noise, and vice versa. I think we should start with, look, whatever benchmark exists, how much equivocality does this benchmark have? And if we put hand to heart and are honest, and, look, actually there's quite a lot of equivocality in this benchmark, then is it going to be useful? Because ultimately, even as an evaluation metric, we are biased, in the sense that we reward ourselves for all the successes, whether they're on us or not, and then we'll try to find excuses and reasons for why we failed. And if we can make a valid excuse because the metric is equivocal, then does it really help you chug along?

0:38:54.5 VK: Such a good point.

0:38:57.3 TW: And that is like, if you see it and you exceed it, you’re like, look, this magical thing, we did better. And if you see it and we did worse, then you say, ah, it must be garbage.

0:39:09.8 ES: The market moved against me.

0:39:12.0 TW: I mean, Val and I had the same client, this was a few years ago, that had an agency with multiple clients. But when they would put up their media results, rather than CPA, just given the nature of where they were, they would look at cost per click or CPM, which is super, super common. And they were constantly reporting that they beat benchmark. And they would say, and guess what, because we have so much data, you're beating benchmark for your sector. And I'm like, that's it? And the client would just take it and say, look, this campaign was great, our CPM was below benchmark, or our CPC was below benchmark. And it's like, yeah, but that's such a noisy thing. No, no, no, they told us the data they were using was totally apples to apples. Which is, all the kids are above average.

0:40:11.7 ES: Sorry to interrupt and jump in. So this client says, my CPM is below benchmark, look how well I've done. You can also flip the narrative and simply say, did we underinvest? Did we leave money on the table? Because if we were at benchmark, couldn't we make more?

0:40:29.4 MK: Can I flip this? So in my mind, and again, people might violently challenge me on this, there tend to be two trains of thought I've found when you're working with executives. One tends to be, how are we doing against our competitors? And then there are also the execs that are like, I don't care what our competitors are doing, we're running our own race. How are we comparing year on year, or to the last time we did this? Very much about internal comparisons. If you've got the one that is very focused on how we're doing against our competitors, I feel this benchmarking discussion is something you would need to bring up. How do you think you do that in a constructive way that would get them to see, I don't wanna say the error of their ways, 'cause that sounds super patronizing, but how do you start to educate them about this?

0:41:28.6 ES: I think in business, you definitely have to have competitive information, whether it is in the form of a benchmark or not. I mean, business is not a one-man race; you're obviously competing in a space with others. And so, to say that I'm just going to isolate myself and just look at internal metrics, and then, yay, I'm successful or not, I think that's not wise and definitely not realistic.

0:41:54.2 ES: But to run a business entirely based on competitor evaluation, and where I am at each point in time, is also meaningless, because then you don't have a mind of your own in deciding whether I wanna stick to something or not. I think it's really about the collection of information that you would use. So, if you're saying, look, I want some competitor benchmark, then it is because I have some kind of evaluation or decision uncertainty that I can fill in with that. Recognizing also that the minute I go outside the organization for external information, there is a lot more noise. And I don't think people realize that, because they're thinking internal metrics and external metrics both have variance. But it's not the same kind of variance. With internal metrics, in many instances, you can control the variance. Even if there's noise, I can always isolate it, because I know something about my process. But with the external one, you don't even know the nature of the noise, let alone wanting to try and control it.

0:43:00.2 MK: And also to say, there is a lot of value in getting competitor information in the context of a decision you're going to make. One area where I've seen a lot of clients do this, especially with my bias coming from market research, is understanding sentiment or attitudes. So sometimes shifting away even from NPS: if word of mouth is what it's really trying to get at, then let's ask some questions about that, or how likely are you to do XYZ behaviors. And I think some of those are helpful to capture against competitors too. That can be informative about where you play, or how closely you are delivering on your value proposition or differentiation from key competitors. But again, I don't necessarily consider those benchmarks, because you're still saying we're gonna have a separate conversation to evaluate our own performance and the choices we're making. That can still just be more on the input side. But Eric, you can let me know if [laughter] I misinterpreted that.

0:44:00.2 ES: I would agree. I thought you were saying, look, I need this sentiment analysis. Then, of course, the challenge would be who does that best, and how do they do it so that it's comparable and they've sort of normalized the noise in it. I think that's where the rub is with sort of boutique consulting and all of that. It's strange. When I talk to clients and they know that I'm a boutique consulting business, they say, well, can you get me a benchmark? Well, yeah, I mean, I've consulted for a range of clients, but I'm not a McKinsey. I'm not an Accenture or a Deloitte, where you work with everyone and you've sort of seen the ins and outs of those businesses. And approaching a small startup for an external benchmark, even though you can say maybe they're prepared to do it because they need your business and all of that, they don't really have the kind of methodology that would stabilize the noise.

0:45:02.7 MK: Yeah, that’s a good way to put it.

0:45:03.8 ES: And so you can get a number that ultimately, and again, you can fiddle a number to make the client happy and that’s not going to be useful.

[laughter]

0:45:14.4 TW: Well, and that's, I mean, you take the large-scale consultancies that say, we have a massive customer database, we therefore have access, and we are going to obfuscate it and develop benchmarks for you. That tends to be what Boston Consulting Group or McKinsey or Deloitte is trying to sell you. So the metrics that they're going to be the tightest and cleanest on when gathering their benchmarks just happen to be the metrics that they say their services will help you with. So there's a little bit of a fox in the henhouse there. They may factually be accurate, and they're probably behaving pretty well, like they're not out there being malicious, but they do have sort of perverse incentives for the new client or prospect to be performing below the benchmark, because that's how they're going to get paid.

0:46:31.7 TW: So even considering who's doing the aggregation, like the National Retail Federation, NRF, whatever that is: they would gather, like, three metrics, conversion rate and whatever, from their members. But what was their incentive? Well, it was so they could publish a book once a year that would have these three metrics in it, and that would be part of the justification for their members to re-up. It doesn't seem like there's a totally objective and altruistic party out there in the business world saying, we're gonna go through all this work to minimize the noise in benchmarks around a handful of metrics. Like, cui bono, who benefits from that? So that just goes back to questioning the usefulness of them. Yeah.

0:47:31.7 VK: Well, we're definitely not going to get through this episode without me having a little bit of a cathartic moment about my most hated, least helpful benchmark, the one that, in my previous role when I was very focused on experimentation, not a client went by where we didn't have to address it. So let's see if anyone can finish this sentence: even best-in-class experimentation programs have a win rate of…

0:48:03.9 MK: 30.

0:48:04.0 VK: It’s low. 30.

0:48:05.2 MK: Oh, 30. That's not close at all. Sorry, I thought you were gonna say… oh, anyway.

0:48:10.9 VK: No no yeah.

0:48:11.4 MK: A win rate of 30%.

0:48:11.7 VK: Best-in-class optimization, experimentation programs. And I don't know who said it first, but the industry kind of rallied around it. I'm telling you, Google it, you'll find everyone references that point. But there is no relationship between win rate and how much smarter you're making your organization by taking that hypothesis-led mindset, or by using controlled experimentation to de-risk decisions. And so it would just irk me to the umpteenth degree: well, let's put this down as a benchmark against the 30%. Okay, let's have a more meaningful…

0:48:45.4 MK: I’ve never heard that.

0:48:45.4 VK: Really? Moe I’m so surprised.

0:48:50.7 MK: Am I doing it wrong? Like the thing I always hear is like you need to have a 95% confidence interval. Like that’s the thing I always hear.

0:48:55.8 VK: Yes you always hear that too for sure. But the benchmark of win rate.

0:49:00.7 MK: But I’ve never heard the 30% win rate. But I don’t know maybe I haven’t been doing enough experimentation lately.

0:49:05.6 VK: Not helpful.

0:49:09.8 MK: So about, Val, how did you handle that? How did you address that with all the clients?

0:49:14.9 VK: It was all about, do we think that really has any relationship with the more meaningful metrics behind why you're making this choice to invest in experimentation? It's the same thing as the false relationship between NPS and revenue. There was no predictability or relationship between those two things. So let's decouple those concepts and see how we can make sure we're putting smart inputs into the machine, to make sure that, again, we're testing what matters to the business and things that are going to help move things forward, versus, well, you know, I can test these button colors over here without getting legal approval, so let's push those 30 tests through.
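Val's point that win rate is decoupled from the value of a program can be made concrete with a toy expected-value calculation. Both programs and every number below are hypothetical:

```python
# Two hypothetical experimentation programs. "Safe" tests trivial changes
# (high win rate, tiny lifts); "bold" tests risky ideas (low win rate,
# occasional large lifts). Lifts are in percentage points of revenue.
safe = {"tests": 100, "win_rate": 0.60, "lift_per_win": 0.2}
bold = {"tests": 20, "win_rate": 0.15, "lift_per_win": 5.0}

def expected_total_lift(program):
    # Expected wins times the lift each win delivers.
    return program["tests"] * program["win_rate"] * program["lift_per_win"]

print(expected_total_lift(safe))  # 12.0
print(expected_total_lift(bold))  # 15.0
```

The bold program has a quarter of the win rate yet creates more expected lift, so benchmarking programs on win rate alone rewards exactly the button-color testing Val describes.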

0:49:54.0 TW: Briefly, before we started this show, I thought, are we gonna be able to talk for a whole show about benchmarks? And we have.

0:50:04.3 MK: Mainly because I clearly did not understand what benchmarks are. So that’s been a helpful place.

0:50:10.8 TW: But I think Eric nailed it. It is a word that seems like a plain word, but it does get contorted; different people can mean different things by it, which is another whole area where we can get in trouble if we're not talking about the same thing. I could get labeled as the person who hates benchmarks when somebody's actually thinking I hate market research. So.

0:50:34.8 MK: I've realized, to be honest, through the course of this conversation, that when I talk to finance and they say benchmarks, they mean market research. That has been my epiphany in this conversation. And we are often working on things together, and now I'm like, oh, I need to reframe this. So this has been very helpful, Eric.

0:50:54.4 TW: Well, there are more things I would love to talk about, but I am sitting in Michael Helbling's seat and he wants it back, so we're gonna have to start to wrap. Before we close out, we always like to do a last-call go-around and have everyone share a thing or two that they found interesting, related to benchmarks or not. Hopefully it's an above-baseline-quality last call, but if not, that's okay too. So, Eric, you're our guest. Do you wanna share the first last call?

0:51:32.1 ES: Sure, sure. Okay. But it’s not related to benchmarks [0:51:34.9] ____.

0:51:36.9 VK: That’s okay. Mine’s not either.

0:51:38.1 TW: Talk a little bit about…

0:51:39.6 ES: Expected, yeah. So this was an article I read on Medium, which I post my articles on as well. It's all the rage now with generative AI and artificial general intelligence; everyone's worried that we are all going to hell in a handbasket.

0:52:00.9 ES: It's a terminal event, right, where the AI wakes up and all of that. And this person on Medium wrote, I don't know the person's name at all, because they write under a handle, a pen name, and their pen name is From Narrow To General AI; that's all I see for the author. And the title of the blog, of the article, is actually a very long "theory of intelligence that denies teleological purpose." Okay, so the title was so odd that when it popped up in my inbox on Medium, I said, okay, let's check it out. It's a pretty long article, a little bit philosophical, but one of the points they were making, about why we won't get to artificial general intelligence in the near term, really resonated with me. When we think of AI today, we think it will be able to reason and solve, and of course there are arguments both ways, but clearly we're making some progress. But the author here makes a very nice, succinct argument: look, all of AI ultimately comes down to the space called problem solving. And you can push for it.

0:53:14.8 ES: You can even say, well, at some point maybe the AI will be able to reason well enough, and all of that, but it is still in the space of problem solving. But the author says, actually, the human experience is not defined by problem solving. In fact, a big chunk of it is defined by problem finding. And that was a huge aha moment for me. It's true; I mean, we make our own problems. Look at this conversation on benchmarks: we didn't have a problem before, and then we define it, shape it, we argue it. And this idea of problem finding, problem defining, was a huge aha moment for me.

0:54:01.7 VK: I love it.

0:54:02.4 ES: It says, no, AI isn’t built to do that. Yeah.

0:54:05.6 MK: Oh, I love that.

0:54:06.4 TW: That's good. Val and I are salivating, because that's kind of core to the facts-and-feelings process: identifying problems and then thinking through how they might be solved. So… I like it. Very good. Nicely done. Val, what's your last call?

0:54:26.5 VK: Sure. So mine’s a twofer, but both of them are relatively quick. One, I just have to give a shout out to Eric. I know you mentioned in the intro, Tim, that Eric had been doing a publis…

0:54:36.6 TW: That was gonna be my twofer was gonna be a shout out to Eric. Okay.

0:54:38.5 VK: Well, guess who got to go first?

0:54:40.4 TW: I guess I’m just gonna have one then.

0:54:45.7 VK: Well, maybe you'll call out some different pieces, but I love the way you write too, Eric. There's a whole series in there about the problem with dashboards, the problem with data visualization, the problem with data literacy. And I just love the stance that you take and the way that you break it down. And it's always really succinct, so it's a fun read. I've enjoyed following you, and I'm so glad that you could be our guest today.

0:55:05.8 ES: Thank you. Thank you for that.

0:55:07.1 VK: So that's one. And Tim, if you have some specific ones, I didn't go too deep, so you can throw some out too. And then the second one is an upcoming conference that you all might have heard of: Experimentation Island. So February 26th through 28th of next year, in its inaugural year, Kelly Wortham and Ton Wesseling are bringing to the US the best parts of Conversion Hotel that happened over in Europe over the years.

0:55:36.3 MK: Is it on an island?

0:55:37.6 VK: It is on an island.

0:55:41.4 MK: What? Maybe I need to go to this.

0:55:44.5 VK: It's gonna be awesome. They're doing a lot to really make sure that the experience for the attendees is gonna be great. But it's on St. Simons Island off of Georgia, which…

0:55:53.1 TW: There’s a keynote about benchmarking your win rate for your experimentation program.

0:56:00.8 MK: Triggered. Yeah.

0:56:00.9 VK: Tim and I are speakers. So I’m super excited. Oh, performers. They’re called performers, but yeah, there’s some good programming.

0:56:05.7 TW: I did not know that.

0:56:08.0 VK: Yeah, Tim, get into it. But I’m excited. So we’re just starting doing some of the planning. And so what was front of mind, I just wanted to drop that for our listeners so they could plan for that.

0:56:19.7 MK: To be clear, I was thinking like Hawaii.

0:56:23.8 ES: When you’re thinking island, right?

0:56:26.2 MK: Yeah. I’ve just looked it up. I’m like, I’m gonna have to mention this to Ton and Kelly.

0:56:33.8 ES: Benchmarks, yeah. What islands?

0:56:38.7 VK: There you go.

0:56:43.1 TW: Well, what's your last call that we can then denigrate?

0:56:48.1 MK: To shreds, yeah. Okay, look, I always bang on about the Acquired podcast, but I traveled not too long ago and got a snippet of time to listen to a couple of things, and two particular episodes just blew my mind. One is the episode on Costco, and the other is the one on Hermès. I just love how these guys really get into the history of a company. There was so much stuff about Costco that I didn't know that now makes me probably an even bigger Costco lover. And likewise, I now have this obsession with wanting to buy something from Hermès, which I never had any desire to do, ever. But that's not actually my real last call, because I've mentioned that podcast many, many times. I found this Instagram called To You From Steph, and it's really about growth and personal development. And you'll see some quite common, I guess, sentences and posts about growth and personal development.

0:57:48.1 MK: But she's such a beautiful designer. And I don't know, I'm still trying to figure out where and how I can use it. It's things like talking about the heaviness of the load, or what today's progress feels like. It's very personal-development-y, but her posts are just so beautiful that it kind of makes you revisit some of these sentiments. And I'm trying to figure out how I can adopt it. I don't know, I'm not creative or artistic, so I have a lot of admiration for her page, and for how she makes revisiting some of these thoughts really nice, just because of how beautiful they are. So yeah, that's a bit of a random one.

0:58:26.6 VK: Does she post it from an island?

0:58:29.8 MK: No, but it would be better if it came from Hawaii, like obviously.

0:58:36.0 VK: I’m excited to check it out. All right.

0:58:37.8 MK: And over to you, Tim, what’s your last call?

0:58:40.3 TW: So, I promise we do not consistently logroll the guests, but same thing: I was also gonna note, Eric, that your weekly posts are very consumable. But one specifically, because we had a listener who had submitted an idea. So please, listeners, continue to submit ideas. We have a long list, and I swear the quality of the show ideas has gone up markedly in the last 12 months.

0:59:09.9 TW: But somebody had actually chimed in and said, what about using data at, like, a small company, with small data? And literally the next day, Eric had a post on how smaller organizations can build data analytics capabilities, sort of turning on its head how you approach that. So it wasn't exactly what that listener was asking for, but, again, I'm now kind of hooked on your writing.

0:59:39.4 ES: Thank you.

0:59:41.4 TW: But my other one is an oldie that's new again, and I don't think I have brought it up on here: tylervigen.com, Spurious Correlations, the OG. I saw him speak years ago at eMetrics. He's a fascinating guy, 'cause he's like a BCG consultant in supply chain stuff, but he has completely revamped, and this was maybe six months ago, he redid tylervigen.com. Same Spurious Correlations: you go there, it shows whatever two metrics that are trending together. But what he added was the LLM-generated academic paper that supports each one. And I mean, it is fully academic-paper formatted: abstract, two columns, totally auto-generated. You read them, they're maybe three or four pages, and the level of rationale explaining why these two metrics…

1:00:36.1 MK: Shut the front door.

1:00:37.5 TW: Yeah, I mean, a lot of times you see that and you're like, oh, that's cute, it's just the idea. No, I've actually read a few of these, because they're so delightful to read. And I don't know where he finds the time. That wasn't just, oh, I'm making a little ChatGPT app; the thing's formatted, and somehow he's got it actually pulling rationalizations for theories that kind of…

1:01:14.4 MK: Well, what was one of your favorites? They always make me laugh.

1:01:16.5 TW: Yeah, and then, of course, you were going to ask, I logged it a while back, and now, of course, I cannot remember.

1:01:21.6 MK: Nicolas Cage movies and drownings, and, like, people who eat cheese and divorce, or something.

1:01:27.7 VK: I mean, my sister would be very supportive of that as a non-cheese eater, she’s like, obviously, that’s the end of every marriage.

1:01:37.3 TW: Yeah, so with that, I'll do my final housekeeping, and I realize I did not take my notes, 'cause Michael can usually just rattle these off. But Eric, thank you again for coming on the show. This has been a really fun discussion, and I picked up, what was it, it was bushwhacked, what?

1:01:56.6 MK: Bushwhacking your way through, yeah, so good.

1:01:58.6 TW: Bushwhacking your way through, yeah, other good stuff, too, but this was great, so.

1:02:05.3 ES: Thank you, thank you for having me. It was such a wonderful conversation, yeah.

1:02:11.3 TW: Awesome. Listeners, we love to hear from you, so reach out to us on the Measure Slack or on LinkedIn. If you wanna submit a topic idea, with or without a proposed guest, you can do that at analyticshour.io. You can also request a free sticker there. So, thank you for listening. If you really are motivated and want to go onto your podcast listening platform and leave us a review or a rating, that'd be kind as well. No show would be complete without thanking Josh Crowhurst, our mostly behind-the-scenes producer, who makes the audio sound normal and less incoherent than it would be if we published it raw. He also was kind of the engine behind our presence on YouTube; we now have a presence on YouTube, if that's your preferred consumption. And with that, regardless of whether you are listening to podcasts at a normal speed, at an above-benchmark speed, at a below-benchmark speed, at two and a half speed, for Val and for Moe, keep analyzing.

1:03:25.1 Announcer: Thanks for listening. Let's keep the conversation going with your comments, suggestions, and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

1:03:42.8 Charles Barkley: So smart guys wanted to fit in. So they made up a term called analytics. Analytics don’t work.

1:03:49.1 Speaker 7: Do the analytics say go for it, no matter who's going for it? So if you and I were on the field, the analytics say, go for it. It's the stupidest, laziest, lamest thing I've ever heard for reasoning in competition.

1:04:03.1 S1: Hi everyone. Welcome to the Analytics Power Out. You know what? We're gonna start over one more time. Wow.

1:04:14.8 MK: Wow, that tiredness really gets you.

1:04:18.9 TW: Word number seven.

1:04:21.5 MK: That’s a record.

1:04:22.6 TW: One more time.

1:04:24.7 S1: Rock, flag, and NPS rules.

The post #254: Is Your Use of Benchmarks Above Average? with Eric Sandosham appeared first on The Analytics Power Hour: Data and Analytics Podcast.
