Mitch Weiss of Harvard Business School was recently in Medellín, Colombia as a keynote speaker during the 2025 WOBI event, which brings ideas and inspiration from some of the world’s greatest business minds. The global event rotates between major business hubs and capitals throughout the world, connecting thought leadership with local audiences from the business community, government, and academia.

Weiss gave an engaging, interactive presentation, evangelizing not just the adoption of generative artificial intelligence (AI) but how to make the best use of the technology, using it to work alongside humanity rather than supplanting us.

After an initial dialogue with Loren Moss, executive editor of Cognitive Business News, Weiss answered questions from the local press corps, with Moss moderating and serving as interpreter for the local journalists attending the event.

Cognitive Business: I’m here with Mitch Weiss of Harvard Business School. You just gave a really interesting talk at this year’s WOBI, with great advice and counsel on using artificial intelligence and its ramifications.

Now I know that you have a history working with innovation in the public sector, so one of the things I wanted to ask is this: We’re here in Medellín, and you had a chance to tour the city, and you see that we’re kind of in a valley. Most of the people here today probably already have some kind of experience with generative AI; that’s the audience that we have here. But if we go to the other side of this mountain range behind us, you have people who still struggle using an ATM. Fraud is a big concern, taking advantage of people and things like that. How can, let’s say, governments, because you have experience in the public sector, help protect the consumer? Because now we have things like deepfakes. I’m experimenting with ElevenLabs, and they can replicate my voice and things like that.

How do we promote innovation, but at the same time protect not just the consumer broadly, but when we have a big digital divide between well-versed people, let’s say the techno elite, and then people who are struggling to even catch up? What’s the role that you see the public sector and academia can play in maybe policing this wild west that we’re in?

Mitchell Weiss: Well, I think, frankly, policing is going to be hard. As you mentioned, there are so many ways these tools can be used for bad ends, and there are also lots of ways to circumvent whatever government regulations are put into place, with open source technologies and otherwise. So I do think there’s a place for prudent government regulation, but I think it’d be a mistake to rely on that entirely.

I think we should instead, or in addition, imagine how we can actually empower people with these tools. It’s true that there’s the ability to use them for fraud and all the rest of it, but it’s also true that anybody with it now has a teacher in their pocket. Anybody with it now has some version of a doctor in their pocket.

Anybody with it, if it’s reliable, now has some version of a fraud protector in their pocket as well. So I think my posture would be: how do we make the tools available to people so that they are empowered? There will need to be a kind of massive upskilling, or reskilling, given these tools, which government can help bring about. But they’re not complicated to use at a first pass. One aspect of these tools is that if you can talk, you can use them; if you can see, if you can type, if you can do any one of those things, not even all of them, you can use the tools.

And so I think the government’s role, yes, is regulation, but also democratization and upskilling of people so they can make the best use of these as teachers and doctors and all the rest.

Cognitive Business: You know, with you and me both being from the US, the US kind of has a philosophy of “you can do anything until it’s proven dangerous, and then we’re going to step in and regulate it,” which as a libertarian I like. But Europe has the opposite extreme: you have to prove that it’s not dangerous before you can do it. And most countries, including Colombia, are somewhere in the middle of that.

And again, if we look at how rapidly this is moving… I mean, I’m old enough to remember when the internet came into common use, but this is so much more rapid, so much more disruptive. And I would say, to show my bias: resist the temptation to regulate. But how can it not just be governments and regulatory entities, but academia? I went to Ohio State, and Ohio State just announced that they’re not going to fight AI; they said students are now expected to use it, which I thought was very interesting. I know a lot of other colleges are doing the same. But how do we balance the, I guess, temptation to regulate, or the fear, with, on the other hand, the desire to foment innovation and to break boundaries? There has to be a balance there somewhere. How do we find that balance?

“You can use AI to market to potential customers better. You can use AI to manage your teams potentially more effectively,” – Mitch Weiss

Mitchell Weiss: Well, the first thing is, I would agree that there is a balance. In academia, I’m a professor at Harvard Business School, and we want to make sure that our students have exposure to these tools. We’ve been very AI forward. We were probably the first leading business school in the country to make sure that all of our entering students had access to the most advanced versions of ChatGPT when it came out. All of our students, staff, and faculty have access to ChatGPT Edu and a whole host of other tools as well. We want to make sure that they have access to these tools to help them become able leaders in a world that’s going to be full of them, and to give them a professional edge so that they can get jobs and lead, and lead well, with these tools. And at the same time, make sure that they can continue to forge and shape the tool that’s on top of their neck. That is the balance.

I think there are ways to do that. I wouldn’t say that they’re foolproof. Some of those ways include setting out expectations, so people know what’s expected inside an organization or a school. Expectations aren’t everything, but it matters to tell people what to expect. You can make work for students harder, or you can demand more metacognition, so they have to think more and work more, given that they have access to these tools. You can use them to teach better, so that you’re pushing and challenging students more; you can become a better teacher yourself on the other side. And at the end of the day, I think you constantly have to ask students: How do you know it’s true? How do you know it’s true? How do you know? You have to keep helping them cultivate this part of their brain, too. So I think achieving that balance is a matter of expectations. It’s also a matter of training and pedagogy. I think that translates from schools to the world beyond, but it’s not going to be easy.

Cognitive Business: Great, great. I appreciate your time. If you could, go a little deeper on how we can work with artificial intelligence while keeping that human sense in it. Should we humanize it?

Mitchell Weiss: So if the question is about humanizing our AI, I’m not sure that I would do that. I think it’s important to still recognize that it is a piece of technology. Even as it starts to talk more like a human or listen more like a human, it isn’t a human. But you can work with it like it’s a coworker. For example, when I use my AI, I will often say ‘por favor’ or ‘gracias’, for me, ‘please’ and ‘thank you’. And people will say, “Why would you do that?” This happens a lot; other people are being polite to their AI, too. Why do they do that? Some people think it will work better, and some papers suggest that it helps, others that it hurts. Some people say it’s because, just in case these things take over, at least they’ll have been nice to them. The reason that I do it is because I want to remember to interact with it like it’s a coworker or research assistant.

And so I’m not trying to make it a human, but I am trying to engage in an ongoing, iterative conversation with it. I think that’s how you get the most out of it. So I’m not trying to humanize it, but I’m trying to remind myself that I can work with it as a teammate. There are colleagues of mine who are writing papers about what it means to have a team that’s made up of people and AI together, and showing that such teams are more productive than people by themselves. So I think it’s useful to think about it like a teammate.

I really believe, and my colleagues are writing and doing some work on this, that you can use AI to understand your customer base, or potential customer base, better. We talked about synthetic market testing and all the rest. You can use AI to prototype your new products and services better. You can use AI to market to potential customers better. You can use AI to manage your teams potentially more effectively. Entrepreneurs can do all these things.

And one of my colleagues has written a book called The Experimentation Machine, where he tells the story of a founder who found that he was spending more time with his customers, not less, because of AI, since the AI was helping with lots of other things and, by the way, helping him prepare for those meetings so they could be more engaging. So I would say entrepreneurs can embrace the tools and the toolkit and stay heavily invested and involved in their companies, with their workers, with their customers, all at the same time.
