In this rebroadcast of the Motley Fool Money podcast, Wharton Professor Ethan Mollick walks through his four rules for using AI, how he pushes students to use the technology, and the research from his book Co-Intelligence: Living and Working with AI.
And Reddit CEO Steve Huffman explains how the company is using AI to move forward.
To catch full episodes of all The Motley Fool's free podcasts, check out our podcast center. To get started investing, check out our beginner's guide to investing in stocks. A full transcript follows the video.
This video was recorded on Dec. 20, 2024.
Dylan Lewis: We're looking back on some of our favorite interviews from 2024. This week's Motley Fool Money Radio show starts now.
It's the Motley Fool Money radio show. I'm your host, Dylan Lewis. Listeners, today, we are coming to you with our special episode as we head into the holidays. Each week on Motley Fool Money, we air an interview segment. It's our chance to go outside the Fool and get some perspectives on where the world is heading, straight from the people that are helping shape it. Today, we are going back to two of my favorite interviews from the past year. Both of them touch on the undeniable topic of 2024, Artificial Intelligence. It's new, it's interesting, it's a little bit scary, but it isn't going away. The best time to dig into the technology, other than yesterday is now. Today we are going to do that.
Our first guest, Ethan Mollick, talked to me about how folks like you and I can use AI as a companion for everyday tasks, work, and more. He's a professor at Wharton, focused on entrepreneurship and innovation, who has brought AI into his classroom with his students. When we spoke earlier this year, he walked me through his four rules for using AI and how he puts those rules into practice in his classroom, and how they helped him write his book, Co-Intelligence: Living and Working with AI.
As I noted in your intro, you're a professor at Wharton, and I know you've been focused on innovation, entrepreneurship, a lot of the major themes of the business world over the last decade plus. That's had you focusing on things like crowdfunding. When did AI begin to come more into focus for you?
Ethan Mollick: I've always been AI adjacent. At the MIT Media Lab, I worked with Marvin Minsky, who was one of the founding fathers of AI, but I was always the non-technical person in the room, the entrepreneur and connection maker. But I've long been interested in the idea of, how do we use education at scale? How do we teach lots of people, especially entrepreneurship and business lessons? I'm a business school professor, I teach entrepreneurship. I've been using AI for doing that work for a long time. Then when ChatGPT came out, I just happened to be ready for a world that was already using those tools, and knew what was happening a little before other people. I like to think I'm a couple of months ahead.
Dylan Lewis: I think you're probably more than a couple months ahead of a lot of people. What I liked about the book, reading through it is it was a great exploration of the space and a foundation, but also in a lot of ways, a very practical user guide for getting up to speed very quickly and going from, I don't know anything, to beginner, to intermediate. I think that it's really useful for people in that sense. Knowing how quickly the AI landscape has changed, what was the process like for writing the book, and was it an accelerated timeline?
Ethan Mollick: I wrote the book and edited it through the end of December. I wrote it knowing GPT-5 is coming; it's not out yet, but it will be, and all these other tools were on their way. I did write it pretty quickly. This is my third book, but I couldn't have written it without AI. Actually, there's almost no AI writing in the book; it's not AI writing. There are little AI segments, but they're clearly marked. The interesting thing is, the AI did all the other stuff that made writing books horrible for me, on my behalf. Say I got stuck on a paragraph.
Sometimes you work on a sentence for a long time. I'm like, give me 30 versions of this sentence, I'll use that as inspiration. There's a lot of work showing that AI works well as a marketer, and as a persona to market to. I asked it to read my book as various personas to give me feedback on what I was doing and advice. I asked it to summarize research papers that I turned into parts of the book. It was very helpful in accelerating this process. It's what AI does; the co-intelligence idea, it's an accelerator.
Dylan Lewis: I think that tees up nicely for some of the rules of using AI that you talk about in the book. I want to run through them because I think there are probably some listeners out there that are avid users of ChatGPT, and probably some other folks who maybe aren't as familiar or have never interacted with an LLM before. How do you structure how people should be using AI?
Ethan Mollick: I recommend four rules to get started with. The first rule is invite AI to everything you do. Basically nobody knows how AI is most useful for your field. Nobody does. I think people are waiting for instructions. I talk to OpenAI all the time, I talk to Microsoft, I talk to Google. There is no instruction manual. Nobody has a book that they haven't shown you yet, there's no consultant who knows anything, nobody knows anything. The way to figure out what this does is just to use it a lot and see what it does, and I strongly recommend trying to use it for everything you legally and ethically can. Then the second piece of advice is that you should learn to be the human in the loop.
The AI is really good at a lot of things. We can talk about the studies and results on this, but it's really good at innovation, it's really good at analysis. It out-invents most people. You want to think about what you're actually really good at. Whatever you're best at, you're definitely better than the AI, and I think that there's going to be a real benefit to thinking about what you want to do and what you want to delegate. The third principle in the book is one where I say you should treat the AI like a person. It's considered a sin in the world of AI; you're not supposed to anthropomorphize it. But the fact is, it's trained on human language and human interactions, so it works best when you work with it like a human.
In fact, one of the mistakes people make is assuming that software developers are the people who should be using AI, but they're actually not. It's really managers, writers, journalists, teachers who often do a much better job using AI, because they can take the perspective of it as a person, even though it isn't, and that helps you do great work. The fourth is, this is the worst AI you're ever going to use, and we are in the early days of this revolution.
Dylan Lewis: I know you really have your students focus on using AI. I believe it's a requirement of your classes. What do those rules that you laid out there look like in practice for them using it as part of the classroom experience?
Ethan Mollick: I initially went viral a while ago for my AI rules for the class right after ChatGPT came out, where I required use and made people accountable for the outcomes. None of that works anymore. That was great for GPT-3.5, the free version of ChatGPT. GPT-4 writes better than most of my students. I teach at an Ivy League school, my students are amazing, but it writes better than most of them at homework assignments. It makes fewer errors than an average student. How do you deal with the fact that that's the case? You can't just say, use it and you're responsible for the outcome, because I can't find the errors as easily anymore. Instead, I've adapted how we use AI in the class. AI helps me co-teach classes, it helps me provide assignments. One of my assignments a couple of weeks ago was that people had to replace themselves at their own jobs. You're going for a job interview; you have to use the AI to do your job for you and hand a GPT that does this to your employer and say, I'm ready for a raise now. I had students who were everywhere from navy pilots to financial analysts to hip-hop promoters, and they all found ways of automating their jobs. Three of them got jobs that week, by the way, as a result of this.
Dylan Lewis: Is this a successful trial?
Ethan Mollick: I think so, yes.
Dylan Lewis: It's interesting to hear you say that. I think where most people have lived with AI usage, certainly for me as someone who works in content, it is a co-pilot for brainstorming, it is something that can be very helpful to get the ball rolling as a creative process. I know with education, you've really focused on the way that can help simulate experiences for people and the way that you're able to mimic real world situations that people might be in.
Ethan Mollick: That's one of many uses. I've been building simulators for a decade. They're very expensive and hard to build. I built realistic sims where we built a fake Gmail, a fake chat, a fake Dropbox, a fake Zoom, and you literally run a fake start-up in real time over the course of six weeks. Those took a big team of people, a lot of money, a lot of resources. I can get almost the same effect from two paragraphs. Not quite as good, but pretty good. Simulation is one of the areas that AI is really good at.
Another one is that it's very effective as a tutor under certain circumstances. Actually, the default way of using it as a tutor doesn't work very well, which is asking the AI to explain something like you're 10. That's great for getting an explanation, but we don't remember that. A real tutor asks you questions and interrogates you, and we can make the AI do that. It works really well as a tutor, and there's a whole bunch of assignments we have around this for integrating knowledge and helping test you. There are a lot of uses.
Dylan Lewis: You mentioned the asking to be a tutor, and that leads us into some of the ideas around prompting and the way to set AI up well to give you what you're looking for and maybe what's most helpful for you. For some folks who maybe haven't spent as much time prompting, what would be some of your tips for interacting with an LLM?
Ethan Mollick: The mental model you want to have for an LLM is that it knows a lot of things about the world. It has a huge web of connections, but it's going to give you the average, median answer all the time. Your job as a prompter is to knock it away from that average answer to something more interesting. You do that by providing it with context; that gives it a different place to start from than just its default. The easiest way to give it context is a persona: you are blank, you're a very good marketer, you are a marketer for consumer products. That's an easy way to provide context, and then there are more advanced ways of doing that too, like asking it to think step by step or providing examples, but your goal is to provide additional context and information.
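[Editor's note: Mollick's persona-plus-context advice maps naturally onto the message format most chat-style LLM APIs use, where a system message carries the persona and steering instructions. The sketch below is purely illustrative; the `build_prompt` helper and its parameter names are our own invention, not something from the interview or any particular vendor's API.]

```python
def build_prompt(persona, task, context=None, step_by_step=False):
    """Assemble a chat-style message list that nudges an LLM away from
    its default, average answer: give it a persona, optional extra
    context, and optionally a step-by-step instruction."""
    system_parts = [f"You are {persona}."]
    if context:
        system_parts.append(f"Context: {context}")
    if step_by_step:
        system_parts.append("Think step by step before answering.")
    return [
        {"role": "system", "content": " ".join(system_parts)},
        {"role": "user", "content": task},
    ]

# Example: the "very good marketer" persona Mollick mentions.
messages = build_prompt(
    persona="a very good marketer for consumer products",
    task="Draft three taglines for a reusable water bottle.",
    step_by_step=True,
)
```

The resulting `messages` list can be passed to any chat-completion-style endpoint; the point is simply that the persona and context live in the system message, giving the model a different starting place than its default.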
Dylan Lewis: It's similar in a lot of ways to the way that people use Google. Early on with search, the more specific you could be, tools like using quotation marks to match specific text rather than just a general query, helped you get closer to what you were looking for. Over time, search results have gotten better and better because they've learned more and more about what we're looking for when we provide specific queries. Does that feel like a good parallel for how people should be thinking about prompting and interacting with AI?
Ethan Mollick: Well, Google went the way that AI is going. You can't do all the things you used to do with Google. They removed a lot of the specialized controls that used to make you good at Googling. It used to be called Google-fu, being good at Googling. Right now, you can be good at prompting. I actually don't think it's going to be that important in the long term, because I think the AI already knows your intent. If you want to write a novel, ask the AI, help me write a novel, and you'll get a surprisingly large part of the way there from just that.
Dylan Lewis: I know we saw a lot of companies putting out roles that they were hiring for, prompt engineers. Do you think that that's a short-lived career?
Ethan Mollick: I think prompt engineering will be useful if you're building prompts for other people, but for most of us using these systems, these systems are getting smart enough. I don't know a single person who's an insider at one of these organizations who doesn't think that the AI will itself be able to be self-prompting. Most of the people I talk to at OpenAI or Google or Anthropic don't think prompt engineering is a long-term thing to learn for most people.
Dylan Lewis: Does that just become a skill that most people are generally bringing into their work rather than it being a specialized trade that we're hiring out?
Ethan Mollick: No, I just think that the AI is smart enough to handle the prompting itself. We already know that the AI can figure out intent better than most of us can. You'll just say what you want, and you're not going to need to prompt it because it'll know.
Dylan Lewis: As you've been bringing students in, I imagine some folks have some familiarity with AI, others maybe don't. Have you gotten pushback on bringing it into the classroom?
Ethan Mollick: I think people expect a lot more pushback in the world than there is. I think there's a lot of theoretical pushback, but people want to learn how to use these systems. There's a lot to discuss about ethics and privacy and other sets of concerns, but I think people also want to figure out how to make these things useful. They're here. There isn't really a choice anymore. Occasionally, I have a Google DeepMind person in one of my talks; they'll raise their hand and say, what do you think about the ethics of releasing AI? I always say to them, look, you made the decision to release large language models. This is not a conversation we get to have anymore; you made this choice. We should know what their limitations are and what their ethical compromises are, but it's out there in the world, so we'd better figure out how to use it.
Dylan Lewis: Listeners, more from Professor Ethan Mollick on how individuals and companies are using AI after the break. We'll be back in a minute. You're listening to Motley Fool Money. Welcome back to Motley Fool Money. I'm Dylan Lewis. This is our annual Best of Interview show for 2024. We are zooming in on the most zeitgeist of zeitgeist things this week, Artificial Intelligence. Earlier this year, I caught up with Ethan Mollick. He is a Wharton professor and author of the AI handbook, Co-Intelligence: Living and Working with AI. What I loved about our conversation is that it brought AI down to the everyday person and the implications of things like AI at work, and how companies might use it well and also misread its impact.
Let's dive in. I want to dig into AI at work in particular. Studies you mention in the book show, I think, that something like 95% of job categories have some overlap with AI, including professor. You know it yourself; you're right up there at the top of jobs with AI crossover. How do you personally feel and think about how that shapes your work?
Ethan Mollick: The thing to think about with jobs is that jobs are not just jobs; jobs are bundles of tasks. We do many things. You do podcast interviews. I'm sure you do five other things, plus you have to fill in an expense report. You have to email me about this, have a pre-conversation, do research, all this stuff. Some of those tasks you probably really enjoy and you're really good at; some of those tasks you're probably mediocre with, but they just got bundled into your job. The first place you want to start with AI is thinking about, what parts of my job bundle do I want to hand off to the AI? It's like accountants: they used to spend almost all of their time doing math by hand, and then spreadsheets came along, and now accountants don't do that anymore. There's still room for accountants. Their job has shifted and moved upmarket. If you're conscious about how you bundle it, you can shift and move upmarket as well.
Dylan Lewis: AI is putting you in a spot where you can maybe focus on more of the things that you'd like to be focused on and hand off some of the more rote things that you aren't as interested in.
Ethan Mollick: Expense reports are something I'm happy to hand over to AI. I would love to hand over grading, but I don't do it because ethically, I don't feel comfortable doing that, but there's a lot of tasks that the AI can do that are things I don't want to do. When we survey people who use AI, we get the same two answers, which is they are both a little nervous about the future and also really happy because the AI's taking the worst parts of their job.
Dylan Lewis: You talk about research showing that AI can really help close performance gaps when it's provided to people. What does that look like and what is some of the research showing in that?
Ethan Mollick: There is a universal result from all the cases where AI does work with you, which is that it improves the performance of the lowest performers more than the highest performers. Now, there are a couple of caveats that are really important. One of those is that this is naive AI use, which everyone goes through initially. We don't know in the long term whether the top performers also get a 10-times boost. It starts off as a leveler, an elevator, but it may not work that way in the long term. There are little bits of evidence around that. There's a really cool controlled study in Kenya. They found small business owners who got the AI. If you were a top small business owner, you got a 20% profit improvement by getting AI advice, not help, but actual advice, while the lowest performers weren't able to implement the advice of the AI. I think it depends on the circumstances, but we're definitely seeing that leveling, upskilling effect almost everywhere.
Dylan Lewis: Are you seeing that in the classroom?
Ethan Mollick: Yes, of course. There's no bad writers anymore.
Dylan Lewis: Knowing that everyone's writing gets better, how do you start looking for top performers and weaker students?
Ethan Mollick: Welcome to everyone's giant problem. Classes are easier; we have short-term disruptions, but we can test. We've got lots of options. But if you think about middle management in most companies, what most managers produce is words. They do a whole bunch of tasks, but what they produce is words. They write reports, they write documents, they do presentations, and the number of words they write is an indicator of effort. Big document, lots more effort. The quality of the words they produce is their intelligence or ability. The lack of errors is their conscientiousness. All of that just broke. I can produce an infinite number of words that are all high quality, that seem good enough, at scale. What does that mean for organizations?
Dylan Lewis: I think one of my favorite sections in the book is where you talk about this notion of the button, and it exists in basically any productivity software application, anything people would use for Slacking or emailing people: this button that automagically drafts responses for you. It's a very visible, very clear extension of the technology, I think one that's easy for people to wrap their heads around. But when we start drafting a lot of things out with AI and then maybe tinkering a little bit, what do you think that does to the value of communication and work? How do you wrap your head around that?
Ethan Mollick: That's the problem. We're about to just break how all of work and communication operates; you're an idiot not to use AI for stuff. The clearest example to me is, as a professor, I'm supposed to write letters of recommendation for people. The whole point of a letter of recommendation is not the letter. It's the fact that I'm setting my time on fire as a signal to people that I care about this student. They send me the resume and the job they're applying for, and I spend a good 45 minutes working on a letter for them. But if I just give the AI the resume, the job they're applying for, and a thumbs up or thumbs down and say, I'm Ethan Mollick, I will get a much better letter in 35 seconds, especially if I go a round or two with the AI, maybe two minutes. Do I send the ethical letter that's less likely to get them the job, or do I send them the unethical letter that's more likely to get them the job?
Dylan Lewis: Maybe you split the difference? I don't know.
Ethan Mollick: It's an open question, although I did just have a student send me the prompt they want me to use to write their letter.
Dylan Lewis: Wow. That's a first. I want to extend that line of thinking a little bit. The Motley Fool is a business of intellectual property. That's what we do. We provide premium stock newsletters. We provide model portfolios. We have coverage. We have this very podcast, as well as a lot of articles. As you see people in content making investments in AI, what are some of the things you start to wonder about when it comes to information, when it comes to the way we consume things?
Ethan Mollick: Right now, AI is at the 80th percentile of many kinds of human performance. There's no way you're working at The Motley Fool unless you're in the top 1% or 0.1% of ability in whatever you're doing, because you wouldn't be there otherwise. To me, the biggest danger is organizations, especially content organizations, thinking that this is fungible and replaceable. I think the idea that this is going to do automated content, and that's the value, is not really the point. The point is, how do I get my writers and my IP creators to do the stuff that uses their 0.1% ability, their high-end ability, and let the AI help with the other stuff, the stuff that keeps you from doing that interesting high-end task? You can get the AI to write reasonably good portfolio articles, I'm sure, and with a little bit of tuning it would do a reasonably good job, but you're not in the reasonably-good business. That's not why people are signing up for your organization.
I think that there's a danger of a race to the bottom, of not realizing the main advantages, and this is true with every company. The most dangerous thing you can do is view AI as a productivity tool for cost-cutting. The idea is like, OK, it increases performance by three times. That's great, I can fire two-thirds of my staff. In a moment where we're actually going through transformational change, that's a really dangerous viewpoint.
Dylan Lewis: That's a wrap on my conversation with Ethan Mollick, but we've got plenty more AI wisdom ahead from the CEO of one of the market's hottest stocks in 2024. Stay right here. You're listening to Motley Fool Money.
Ricky Mulvey: Hey, it's Ricky, and I've got a podcast that I want to recommend to you. It's called Think Fast, Talk Smart, and it can help you become a more effective communicator. Every Tuesday, host and Stanford lecturer Matt Abrahams sits down with experts to find out their best advice to help you hone your communication skills, whether you want to have better small talk at work or stay calm during a large presentation. I enjoyed a recent episode titled Fix Meetings: Transform Gatherings into Meaningful Moments. I picked up some nuggets that are going to be useful for the Motley Fool Money programming meetings, where we figure out what segments we want to do for the week. Matt shares tips, tricks, and science-based strategies to boost your confidence and clarity. Become a better communicator by listening to Think Fast, Talk Smart wherever you get your podcasts, and find additional content to level up your communication at FasterSmarter.io.
Dylan Lewis: Welcome back to the Motley Fool Money Radio Show. I'm Dylan Lewis, and this is our year-end holiday special, where we bring forward some of our favorite conversations from the past year. Earlier in the show, we focused on how individuals can interact with artificial intelligence and how the technology is shaping the workplace. Now we're going to turn our gaze over to how companies are using the technology in tangible ways to make their products better for end users.
If you spend a lot of time online, you probably know Reddit as the front page of the Internet. If you don't, maybe you know the online community company as one of the best performing IPOs of 2024. Shares have tripled since the business came public this past March, and its growing user base and monetization efforts are a big part of the reason why.
Back in September, Reddit CEO Steve Huffman walked me through how a 20-year-old business continues to find new users and how the company is harnessing AI to localize content and reach new markets internationally. It's a treat to talk to you because I'm a long-time user of Reddit. I first started using Reddit in college, and that was, I don't want to date myself too much, over 10 years ago. When I started using it, it was a mix. It was for the memes, but also I was learning how to dress myself. I was going to r/malefashionadvice and trying to pick up some tips there. I was going to school in Boston, and so I was trying to figure out what was going on in the city and what I needed to know about events, and so I was going to r/Boston. I'm guessing that some of our listeners of the show are also long-time Reddit users like me. There are probably also some folks who are part of the 300 million-plus folks who come to you weekly. For folks who do not know Reddit very well, how do you describe it to them?
Steve Huffman: Starting with the hard questions. Look, thanks for being a user for so long. It sounds like you've been on this journey with us a little bit over the last while. Depending on what I sense their context is, I explain Reddit in a couple of ways. If I was explaining it from the ground up, I'd say Reddit is communities. These communities can be about anything and everything: every interest, passion, hobby, whatever you're into, whatever you're going through, it's on Reddit somewhere. Then what I would say is, look, if you're pretty much between the ages of, like, 17 and 70, whether you're a nerd or a normie, man or woman, you have a home on Reddit.
There's something there for literally everybody. Other times I explain it in contrast to social media. Social media is powered by algorithms; Reddit is powered by people. Every piece of content that becomes popular on Reddit is made popular by people voting, and voting in the context of a community, up or down. Users can make things popular, but they can also disappear things. By definition, polarizing content doesn't do as well on Reddit. What you get as a result is that Reddit is the most human place on the Internet, because it's powered by people. Everybody has comments, but if you look at the comments on Reddit, if you look at, like, the object of the sentences, you'll see that they're talking to each other about whatever it is, as opposed to social media, where they're often talking at, or past, the poster. They're either super effusive or maybe the opposite, but there's a lack of connection there. On Reddit, it's people organized around things they love, talking about those things like real human beings.
Dylan Lewis: Where do you guys think you are in the grand scheme of Reddit's potential? I guess you can take that in the platform direction. You can take that in the business direction, wherever you want to go with that?
Steve Huffman: It's such an interesting idea to contemplate, because on one hand, Reddit has been bigger than I ever thought it would be since August 2005. Look, by some measures, we're big now. We have about 90 million people visit Reddit every day, 360 million people visit Reddit every week. It's big in terms of absolute numbers, but in social media, the biggest platforms have a billion, two billion users every day. There is, I think, huge opportunity there. Reddit is about 50-50 US versus non-US. I'd say other major platforms are more like 80-90% non-US. I think there's a lot of opportunity to grow more users. Then on the business side, I think we've gotten out of the beginning phase. We're in the ads business. That's our primary business model, though we license data, and we do some other stuff as well. We're primarily ads.
It's growing. It grew last quarter; we reported 50% growth, a little more than 50% growth. That's great. Our ads are working. Our customers are happy. We're continuing to deepen relationships there. But I still think, so on one hand, we IPO'd in March, and it feels like, OK, we've gotten to a certain level of stability and scale where, like, this feels real, and it's working. On the other hand, it almost feels like we're at the very beginning. I have a lot of the same feelings today as I did almost 20 years ago, which is, we've barely scratched the surface of this thing, and it can be so special, and I think really great on the platform side and the business side. I'm really of two minds about it, but the Jeff Bezos idea of Day 1 is really something I feel like we're living right now. It feels like the beginning.
Dylan Lewis: I do want to dig into some of the company numbers a little bit and talk through some of that. I was impressed, as a longtime follower of the business, to see the growth that you guys put up in 2024, because the platform has been around for a very long time. The revenue growth of 50%, to me, was not crazy surprising for where you guys are in the monetization story. The user growth of 50% was a surprise. What was behind that growth?
Steve Huffman: There are a couple of specific things, which I'll get to, but the big-picture idea goes back to what I said when I was describing Reddit. Everybody's got a home on Reddit. Then that raises the question, if everybody has a home on Reddit and, Steve, if you say the content is so great and it's so unique and it's such a great experience, then why isn't everybody on Reddit already? I think there are two possible answers for that. Well, actually, three. The first: they haven't heard of Reddit.
Well, certainly in the US, that's increasingly less likely. Number 2: they tried Reddit, and it didn't work for them. That's the group we've really been focused on, making it so the new users who are coming to our front page or opening the app for the first time, i.e., they're primed to experience Reddit as opposed to coming from search or something like that, find a community that speaks to them. We made sign-up much easier, and the community onboarding, helping you find your home on Reddit, much more effective. We made both the website and the app much faster. We just redesigned it in a lot of little ways, so it's easier on the eyes. There are fewer bugs, and our home feed has gotten much better at making recommendations of communities that you might like. Really getting people into their home on Reddit and then finding all of their interests much more effectively. That's been working very well.
Steve Huffman: Then at the same time, we made our website substantially faster, 2-5 times faster. We launched this in May of 2023. Google likes speed, and so faster pages rank higher, and faster pages also get indexed faster. Google search works in mysterious ways; as close a partner as we are with Google, we have no idea how search works.
Dylan Lewis: Nobody does, right?
Steve Huffman: Right. Nobody does. But speed matters. When our website got a lot faster, we started ranking higher. Then combine that with the product improvements, and users are having a better experience on Reddit. Now it creates this flywheel that we're really benefiting from, as we see a lot of new and core users coming from search, and we're much more effective at getting them into their home on Reddit. I said there were two things: either you haven't heard of Reddit or it didn't work for you. Number two is the one we're really focused on. There's a third one, which is, you don't speak English. That's the next frontier of Reddit. Reddit's corpus today is still mostly English, but growing outside of the United States, outside of English, that's part of the next chapter of Reddit, unlocking that.
Dylan Lewis: What does tackling that look like? What are some of the challenges and things that you guys are working through to make Reddit localize to some of the other big international markets?
Steve Huffman: Sure. All the things I just mentioned around speed and performance matter, so that's the foundation. There are other parts of the foundation, too. Safety is a big part of it as well. The foundation helps everybody. But on top of the foundation, there's a chicken-and-egg problem. You need content to attract users, and you need users to create content. We come at this from two directions. One is just programmatic work. We target a market. There are users in every market.
We're not starting from zero anywhere, so we go to the communities there. We reach out to the mods, and we figure out what communities probably should exist that don't, like cities, sports teams, local passions, things like that. We work with mods to try to bring those communities to life, make sure they're in discovery, make sure everything's humming there. The second thing we're doing, which is working very well, or at least off to a great start, is machine translation. With the new technology here, large language models, we can actually translate the existing Reddit corpus into other languages at human quality. Now, not all the content is relevant, but a lot of it is. We have been testing this in French in the first half of this year, and it's gone very well. Now we're adding more languages. We're doing German, Portuguese, and Spanish.
That will give us a bigger content foundation. Then from there, we need to see the next step, which is organic growth, or call it native organic growth, on top of that. International is real work. One of the differences between Reddit and social media is that Reddit is communities. People don't just join communities overnight, let alone create them. We can't force it. What we try to do is create the conditions for growth, but we can't actually force anything. We're getting a little bit better at that, but there's a lot of finesse required to get it right.
Dylan Lewis: Folks we've got more from Reddit CEO Steve Huffman ahead, including how the company is fueling LLM efforts by licensing its corpus of data to companies like OpenAI and also how artificial intelligence is improving the site and leading to a better, safer Internet. That's next here on Motley Fool Money.
As always, people on the program may have interests in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don't buy anything based solely on what you hear. All personal finance content follows Motley Fool editorial standards. It's not approved by advertisers. The Motley Fool only picks products it would personally recommend to friends like you.
This is the Motley Fool Money Radio Show. I'm your host, Dylan Lewis. Today, we are tackling the topic of 2024, Artificial Intelligence, from all angles, how people can put it to use, how companies are thinking about it. On that note, that is where we will pick up my conversation with Reddit CEO Steve Huffman.
We dive into how the company is looking at AI to open up revenue opportunities in data licensing with AI leaders like OpenAI, and also some of the ways it is using the tech to improve the site and user experience. Outside of the ad business, I know it's a small piece of the pie for you guys right now, but you do have a data licensing business. I think it was about $28 million in the recent quarter. That is, as I understand it, allowing other companies to use platform data for LLM training and AI applications. What was the decision there, and what are you seeing?
Steve Huffman: Yeah, so Reddit is one of the largest corpuses of human conversation on the Internet. For better or for worse, but I think overwhelmingly for better, Reddit has been an open platform. Reddit's content was used for training these AIs. Now, our terms of service are like, Reddit's open. You can use Reddit's content for non-commercial use. But for commercial use, you need a license. I want Reddit to stay open. I also want to be practical. Reddit's content is useful for these things. Look, these AIs, these large language models, they help our business. I think they advance humanity. It's one of the most important technologies of the last generation. They help us make Reddit safer, and they help make the whole Internet safer. We like these technologies existing, and we're proud that Reddit's content can be used to create these technologies or advance them.
But I think, just as a matter of business practicality, commercial use of Reddit's public content requires a commercial agreement. That's what we've been working on. We still do non-commercial agreements, so we'll give Reddit content away to researchers or other non-profits like the Internet Archive. Now, there are terms on all of these things. We created a public content policy. We released this earlier this year. Every platform, more or less, has a privacy policy that basically says, this is what we do with your private information. We have one of those too. Now, we don't have a whole lot of private information, but what we do have, we don't share. It doesn't leave Reddit. The public content policy basically says, the content that you put on Reddit in a public community is on the public Internet. You should know that.
If you don't want it on the public Internet, or in search indexes, or showing up in research, or potentially being used for training, don't put it on Reddit. We want the terms of engagement to be clear there. Then we said, look, if you want to use this content, you have to have an agreement with us. Even then, you can't do certain things, like reverse engineer the identity of our users or use it to target ads against users, things like that. I think under those terms and those policies, we've been able to strike a few deals that I think are important. Google and OpenAI are the biggest ones on the training side. Then we've done others, like with Cision and Sprinklr on the social listening side, what are people saying about these brands, that sort of thing. Yeah, it's a new business for us. It's off to a good start. But I'd say we're still in the early days of what is, quite frankly, a developing market.
Dylan Lewis: I'm curious how AI fits into the picture for Reddit itself and the product, the app, the site experience that users interact with.
Steve Huffman: It's very exciting. There's so much hype around large language models, but one thing that is undeniable is they are very good at jobs involving text and words. Reddit has a lot of text and words. One of the things I'm most excited about is this idea we've been playing around with called post guidance. Say you're a new user. You've grown up on social media, you've come to Reddit, and you're submitting your first post, but you've never submitted to Reddit before, so maybe you don't understand that this is a community space and this community has very specific rules. What used to happen is you'd submit this post, and then a moderator would be like, this violates a rule.
In the science community, no joking allowed, for example. Then you get banned. That's not a good user experience, so now we can use things like LLMs to detect, hey, this is a joke, and tell the user. They click something and it says, hey, that's funny, but funny isn't allowed here. We know you mean well, but try again. Much better experience. The user can adapt their post to something that should be a better fit for the community. The community gets a new user, and Reddit gets a happy user. I think that sort of thing is really, really powerful. Obviously, for safety, LLMs can help detect things like harassment and bullying, or whatever idiosyncratic rules a subreddit has, like the one I was just giving you as an example. I think that's really powerful. I was a moderator for a little bit last year. We have a program called Adopt-an-Admin, where our employees guest-moderate subreddits, and so I did it with AITA, Am I the A-hole. It's a large community on Reddit.
Dylan Lewis: Can you explain the community?
Steve Huffman: You submit a post as a user. I don't know, I wore a cream-colored dress to my sister's wedding and everybody got mad at me. Am I the A-hole? To pick a real example from my life. Then the community debates and gives you feedback. Yes, you are, or you're not. It's a really interesting community of people basically debating these social situations. But they have a rule. You can't use the word Karen, and you can't use the word man-child. Now, I've been thinking about rules on the Internet for a long time. I don't like word rules, you can't say this word, because I think they're too brittle. There's always context.
Indeed, on that subreddit, we'd spend a lot of time adjudicating uses of the word Karen. Were they saying it meanly, or were they correcting somebody else, or is the story literally about a person named Karen? There's just so much time spent on that rule. Now, they eventually convinced me it's an important rule, because it sets a tone for the conversation and a tone for that community. I came around to their viewpoint: this is important, and it's had a good effect on this community. But I'm looking forward to when an LLM can do that work, so that the human moderators can do something else, because some of the rules are really complex. I think LLMs will make Reddit safer. Honestly, for that matter, they'll make the whole Internet safer. I think that's very exciting.
Dylan Lewis: Listeners, that's a wrap on our annual best-of interview show, but it's not a wrap on 2024 from Motley Fool Money. Next week, Asit Sharma and Ron Gross will be on with me to preview what's ahead for investors in the new year and the corners of the market they are paying attention to in particular. We'll be back with that next week. If you don't want to wait, check out our daily show wherever you listen to your podcasts. A special shout-out to Rick Engdahl, the magic behind the glass this week. I'm Dylan Lewis. Thank you for listening. We'll see you next time.