Wilson Sonsini | Breakthrough Minds

Breakthrough Minds Episode 7: Conversation with Evan Judge, Morning Consult

Wilson Sonsini Season 1 Episode 7


0:00 | 52:31

Evan Judge, Chief Operating Officer of Morning Consult, joins host Raj Mahapatra to explore the intersection of AI and critical thinking, building an AI-native organization, employee empowerment, human insight in AI utilization, and ethical considerations for AI. 

Conversation Highlights:

·        The Importance of Critical Thinking in AI

Evan emphasizes that critical thinking is essential in the age of AI. He discusses how curiosity can enhance problem-solving skills and improve interactions with AI tools, enabling users to extract better results and mitigate misinformation.

·        Building an AI-Native Organization

Evan previews Morning Consult’s approach to becoming AI-native, focusing on integrating AI from the ground up. He shares insights on the importance of training and surveying staff to ensure they understand AI's capabilities and limitations.

·        Empowering Employees Through Training

Evan discusses the comprehensive training program at Morning Consult, designed to elevate staff proficiency in AI. He explains how assessing current knowledge levels and providing targeted training leads to a more engaged and knowledgeable workforce.

·        Human Element in AI Utilization 

Evan emphasizes the importance of human expertise in maximizing the effectiveness of AI, particularly in complex or edge-case scenarios. While AI can efficiently handle straightforward tasks, the true value lies in complementing human expertise, ensuring that critical analysis and creativity remain central in AI applications.

·        Tools and Ethical Considerations

Evan discusses the rollout of secure AI tools at Morning Consult, ensuring that client data remains protected while encouraging employees to explore and innovate responsibly.

Meet the host and this episode’s guest: 

·        Evan Judge

·        Raj Mahapatra 

 

This episode may feature current or former clients of Wilson Sonsini. This podcast is for general informational purposes and is not intended to provide and should not be relied on as legal advice. The information in this podcast is not tailored to a particularized situation or jurisdiction, may not reflect the current state of the law, and should not be considered a substitute for the advice of your own legal counsel. You should not consider this podcast an invitation to form an attorney-client relationship, and no attorney-client relationship is created by your use of this podcast. Wilson Sonsini disclaims all liability in respect to any actions taken or not taken based on the content of this podcast and notes that prior results do not guarantee a similar outcome. This podcast may be considered attorney advertising in certain jurisdictions. To learn more about Wilson Sonsini visit https://www.wsgr.com.   

SPEAKER_00

Welcome to Breakthrough Minds with Raj Mahapatra. I'm Raj, and I get to sit in the room where cutting-edge technology is being built. And the conversations I'm part of are just too good not to share. That's what this podcast is all about. Evan, thank you so much for taking the time. A few months ago, you and I had a chance to have a conversation, and I've been thinking about that conversation ever since. I think I had just been into a local high school and talked to them about ChatGPT and critical thinking. And we got onto that topic in our conversation, and we had just the most fascinating time off the back of that. I'm gonna revisit that in this conversation today, if that's okay with you. Brilliant. Now, before we start, my very first question to you, as I say to all my guests, is: who are you and what do you do?

SPEAKER_01

So my name is Evan Judge, and I am the chief operating officer of Morning Consult. Uh and what that role is, I just describe as I'm a guy trying to get it done. And you know, whatever the company needs at that point in time, I try to shift in there and flex. You know, there's the Venn diagram of what are you good at? What do you want to do, and what does the company need? And you just try to stay in the middle and execute the vision of a CEO and get it to where it needs to be. And personally, uh, you know, I'm a guy just trying to find the end of the internet. There's a lot out there to learn, and I think this is such a fun time to explore and play with everything at your fingertips. And that kind of brings us into this conversation today.

SPEAKER_00

It really does. And I just want to say for for the people who are listening in on this, um, when we started, just before we started recording, you were telling me about a little project you were doing in the background about this what did you call it? About segmentation of your marketing?

SPEAKER_01

Segmentation on my Wi-Fi. Yeah, I wanted to upgrade to Wi-Fi 7. And I'm using this company that makes my firewall because I like a lot of security features. I think it's really interesting. I can't wait to tell my kids one day, I'll give you a thousand dollars if you can break the firewall, because they're gonna learn so much. But anyway, I wanted to set up micro-segmentation on my Wi-Fi to put different device groups in different spots with different rules. And really, my big goal was to separate all my Wi-Fi-connected devices, Internet of Things, and control the inbound and outbound traffic on that, while also creating specific rules for myself; for my wife, who works in PR, so a little bit fewer restrictions on data that can flow in and out; and then for my family and guests.

SPEAKER_00

I mean, and I love this because, you know, we talked about you being a COO, and this is really not what COOs in my mind get up to. So it's leading me on to our conversation really nicely, actually. Going back to where we started the conversation back in the summer, with the students I was talking to: I basically did a fireside chat with a bunch of students in the classroom, and my fireside chat was with ChatGPT. So I had ChatGPT on audio, and that going on. And we were talking about Macbeth to start off with, and then, having talked about Macbeth and about how they could use Macbeth to revise, I went on to, you know, how do you run a structured study? And then from that, I created an environment where I got ChatGPT to lie to the students. I demonstrated how it lied, and then I demonstrated how bias was within the system. And then we got into the question of critical thinking with the students. And it made me think, actually, as I was talking to them, and knowing I was going to be talking to you as well: how do you see this whole thing playing out in your own work? Do you think that critical thinking is important in this age of AI?

SPEAKER_01

Uh, unbelievably so. You know, with critical thinking, you can approach any problem and start out by saying, okay, what are the different parts of it? You know, like as attorneys, we like to look at different clauses and segments and how they play together. And to a degree, that's how LLMs function, from the paper "Attention Is All You Need." But what I think about with critical thinking in AI is you really just have to be curious. And so often I'll tell people, just open it up and talk to it. I got my parents to start using it by saying, hey, when you have a problem on your computer, just screenshot it, drop it in there, and say, what's wrong and how do I fix it? Which has been an incredible first line of defense for my IT support. But if you don't have the ability to break down a problem and look at the different parts of it and how they play, and then use your faculties to explore what other solutions there may be, you get limited use out of AI. And that gets back into prompt engineering and how you're structuring and talking to it. On the lying part, you know, I'll hear a lot of friends say, oh, you got this out of AI, it's probably not true. And what I say there is: totally hear you, common conception. But if you know what you're doing and you know how to say, I want you to use well-cited sources, I need everything to have two references, et cetera, et cetera, you can really mitigate that problem.
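The citation-requirement prompting Evan describes can be sketched as a small wrapper around a user question. This is a hypothetical illustration: the rule wording and the `with_citation_rules` helper are illustrative, not a quoted Morning Consult prompt.

```python
# Hypothetical sketch of citation-requirement prompting for mitigating
# hallucinated claims. The rule text and helper name are illustrative.

CITATION_RULES = (
    "Use well-cited sources only. Every factual claim needs at least two "
    "independent references, each listed with publisher and year. If you "
    "cannot find two references for a claim, say so rather than guessing."
)

def with_citation_rules(question: str) -> str:
    """Append the citation requirements to a user question."""
    return f"{question}\n\n{CITATION_RULES}"
```

A usage like `with_citation_rules("Summarize recent consumer-confidence trends")` yields a prompt where the model is told up front how to source its answer, rather than the user checking for citations after the fact.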

SPEAKER_00

Yeah, no, absolutely. Um, I mean, this is this is a really important thing. It's about learning how to use AI properly. And I know that you have used of you've adopted a very specific model within Morning Consult to kind of roll it out across the organization. I say roll it out. Tell me a bit about how Morning Consult is approaching AI generally.

SPEAKER_01

So, you know, the goal going forward is really to work to be AI-native. And that doesn't just mean you integrate it into processes and workflows; it's that you think about it from the beginning when you're building something out. You know, slapping AI on top of a broken process is still a broken process. And so sometimes you have to look at something and say, well, we're gonna burn it to the ground and rethink how we could do it. But to get there, you have to understand how AI works, and where it can be implemented and where it might not be yet. And even where it might not be yet, you know, it might be there in a month or six months or 12 months. Um, so we rolled out a program that had the overall philosophy that the people closest to the work will do the best work. And that really means that we need to teach our staff how to use these tools, how to ethically use these tools, the capabilities and limitations, because just driving a top-down mandate of, hey, we're gonna adopt AI and do this: you know, there are numerous articles showing that's not always successful. But to get to that point, we rolled out a program, and the theory of how that would work is assessing, training, and tooling. Assessing, the first step, is critically important because you need to understand the baseline of where the people you're working with are coming from. Do they know nothing? Do they have ChatGPT on their phone? Do they use it day to day? Are they skeptics? So we did an interactive assessment that I really liked at our company, because it wasn't just questions; it was like, hey, write a prompt for this situation: you're taking a vacation in Spain and you want to see as many national monuments as you can, have ChatGPT help you write an itinerary. And then they would write a prompt back, so it was much more interactive instead of just saying, what's the answer to this question? And I think that demonstrated use was really important.
So we got the results back. Um, 60% of our staff was above proficient, which was great. And we had 0% on the skeptic side, which made my heart sing. Um, and that was really cool.

SPEAKER_00

That's actually amazing as a start. 0%.

SPEAKER_01

I mean, I think a theme throughout this will be curiosity. And I think we have a really intellectually curious workforce. And that's one of the things I love about Morning Consult: there's so much going on at the company and so much that can be done that sometimes it just takes a different approach or a new thought. And so I don't think people are skeptical of everything with AI. You know, I absolutely hear some engineers when they're like, yes, it can code, but can it write beautiful code? And I'm like, I don't know, probably not for that application yet, but it couldn't do Excel sheets six months ago, and now Claude can turn my financials into a board reporting pack. So maybe we'll get there. Um, it was really cool to see that. And I was really happy, you know, and it just reaffirmed my faith in who I get to work with every day.

SPEAKER_00

Yeah, absolutely. So sorry, I interrupted you. You were talking about assessing. So this is all about assessing where people were at. So you had 60% at above proficient, and you had zero skeptics.

SPEAKER_01

And the other numbers, you know, there are kinder words for it: subproficient, or needs improvement. And the way we approached that is, the people in those areas, we wanted them to take an intro to AI course, with homework and targeted classes, and then also an AI ethics course. We really tried to have everyone in the company take an AI ethics course. And we do those two courses for everybody as they come on board now. Um, and that really tries to bring everyone up to a baseline, because that has to happen for you to get into the second part of tooling.

SPEAKER_00

Excellent. And I thought training was the second part. I'm sorry.

SPEAKER_01

Training. I get so excited I jump to the end. The tools are fun. So, yes, the second part is training. Um, so that brings us into training, which is that workflow of providing people the ability to explore and learn different topics, like AI for finance, AI for sales, AI for marketing, AI for advanced data analytics, and really letting people use that curiosity to explore much deeper applications of it. So it's: okay, great. We know where everybody is. We've tried to move everyone to a baseline so they'll understand what we're talking about. And now we're providing training so that people can go out and take different courses, different programs, multi-course programs, and really further that knowledge. Because at the end of the day, again, the people closest to the work will have the best ideas. And so we really want them to know how to approach ideas they could have for AI implementation.

SPEAKER_00

It's really interesting because actually, as you're talking, I just started thinking, you know, you you had 0% skeptics. Did you have a plan if there was skeptics within your organization, how to deal with them? Or do you were just thinking, I really hope they have no skeptics?

SPEAKER_01

No, my plan was, if we had pretty heavy skeptics, I was gonna be excited to talk to them. Not in a hey, what are you doing? way, but more of: why are you skeptical? What don't you believe? What can I help show you or help you learn, you know, in a one-on-one manner, to really bring them up? You know, when somebody's like, oh, I don't believe that, or they're on the other side of the argument from you: in my view, both personally and professionally, it's really important to understand the other side before you seek to be understood. In professional relationships and personal relationships, it really helps you orient where you're trying to take them based on where they are right now. And so my plan had been to sit down with them, talk to them, hear their thoughts and concerns, show them some things that I do or find interesting, and acknowledge, like, yes, this isn't fully developed right now. But there's an obscene amount of money, $5 trillion, going into this revolution. You know, it's the same as railroads, electricity, telecom, and dot-com; those massive revolutions are what's happening right now with AI. And so it's understandable when people say it can't do this, but you should show them the progression of other things like that, to say: maybe not right now, but here's what it can do already. It can do other things.

unknown

Yeah.

SPEAKER_00

And the other thing, I guess, before we get on to tooling, still on the assessing and training side of things: I can't remember how many people are in Morning Consult across the whole team. About 350, yeah. About 350. So you've got about 350 employees. Are you doing this with all 350 employees, or are you saying actually the only people who need to do this are, you know, this team, or the people who are client-facing, or whatever? It was every team.

SPEAKER_01

I will say there were some exceptions on engineering, because, you know, our head of AI pretty much knows where everything's going and how to operate in that industry. Right. So for those engineers that are heavy in it and already know it, or are using Cursor, it's like, yeah, I don't need you to take a training course on the 101 of AI for engineering. Now you can, it's available to you. You're welcome to look at it, but your time's valuable, and I'd rather not distract you from the core mission of the company.

SPEAKER_00

AI ethics?

SPEAKER_01

Yeah. Oh, AI ethics was for everyone. Yeah. I mean, it's definitely for everyone. And we we had a long chat internally with our stakeholders across security, IT, legal, the commercial leaders, engineering about kind of what the importance of that was and why we really needed to have everyone do it and look into it. Because, you know, it can only take one bad actor with permissioning to really cause a problem. And we do want to be above board. This is a brand new area of technology at this scale. It's been around for a bit.

SPEAKER_02

Yeah.

SPEAKER_01

But we want to make sure our clients feel comfortable that the entire workforce is operating safely in this world.

unknown

Yeah.

SPEAKER_00

It's interesting because if you go back, I'm gonna go back 20 years, probably not even that long, actually. But actually, there was always this idea that engineers didn't need to worry about ethics because by the point it got into production, someone else had thought about ethics.

SPEAKER_01

If they can find it, but if I'm writing, you know, things I can inject code in the workflow to even for 30 seconds mine crypto out of your browser, yeah, that's pretty unethical, I'd say. And I've seen that happen before from other companies, and it's like you can't just rely on somebody else to implement ethics. Everyone needs to have a baseline understanding and be committed to it.

SPEAKER_00

Yeah. So then you say you've done your assessing and now you've done your training. Now tell me how that now goes into your next phase of tooling. And and I guess, you know, when did you I guess alongside this conversation, it'd be really good to understand the timelines you're talking about. You know, are you talking about it took you three weeks or you know, you're six months into the plan now? Where where's where are you on this this journey?

SPEAKER_01

We are almost a year into this. Oh no, actually, from when we rolled out assessing, training, and tooling to staff, we're about six months into that now. But the thought process for me started about a year ago in July. We were talking about all the different roles and hats I've worn in the company, coming in at like 20 people and going all the way through our Series B and being worth over a billion, and looking at everything I've done differently, and how much of a nerd I am on the inside. And so I was having dinner, and the CFO looked over at me, and he was like, hey, I want you to start focusing on internal AI. Because I think you build systems and processes and groups really well, but you also just want to learn everything. And there's a lot to learn going on right now. And so we had this conversation, and it took a little while to ramp up and really get into it. Now it has permeated so many areas of my life. Um, but it was that conversation that kicked off the, hey, how are we gonna help our internal teams with this? Not so much client-facing, but our internal teams. We're about six months into that. You know, we did the assessing; that was about a month and a half for us to get back the results and analyze them. Then about two to three months of training, and at the same time we're rolling out some of our tools.

SPEAKER_00

How do you, it's interesting when we're talking about all these things. I mean, you touched on the idea of bad actors, and bits in code that risk privacy, and the importance of being able to show your clients that you're taking these privacy risks very seriously. What about when you're having these conversations with your team about assessing them and training? How does that dovetail with this general anxiety we're seeing in the workplace at the moment that AI will take your job? Totally, I hear that completely.

SPEAKER_01

And what I will say is our assessments weren't blinded, but it was myself and my partner on the people team, Nathan, that really were the only ones looking at the data. And I would tell everyone, like, guys, I don't care. I just want to know where you stand. This is not a litmus test; this isn't The Office, where I'm calling you in to ask you how you do your job and how I can automate it. I just want to understand where we are so we can help level up everyone in the company. And my whole theory on AI in our workplace is: I want to remove low-impact work and empower high-impact work. You know, the more mundane stuff in market research, like checking survey links, reviewing responses: where can we implement that, and then let you use your full mental faculties, that you went to school for, that you're building your career on, to really focus in that area. And that obviously helps the company as well. But the more stuff we can remove from the day-to-day that kind of sucks and can be automated right now, and the more stuff we can empower them to focus on that's more important, the better. As far as the anxiety, there are two things I would touch on. One is who can approach it. And I think very much for AI, all you have to do is be curious. Again, just open an LLM and start talking to it. And that really gets into skills and learning commitment over pedigree. You know, I don't really care if you're magna cum laude from somewhere if you just studied and took tests and didn't really try to figure out how you learn, what you like to learn, because AI, especially right now, is a pretty heavy commitment to self-improvement and learning. And so that skill is wildly more important to me than just a pedigree. And it's the same thing for non-technical roles. Not everyone needs to be able to explain, you know, what RAG is or the difference in all the models.
What I do need you to be able to do is understand AI and think about how it could be applied in your day-to-day. You know, everything from my general counsel saying, yeah, I used it to help draft a clear email to outside counsel, to make sure I was very clearly giving him all the points and all the information that we want to discuss. And I'm like, great, that helps save billable hours. Not that we don't love billable hours with Wilson Sonsini, but just in general. Or, also on our legal team, we've got an attorney that has been working on software that can review a legal document, specifically call out clauses, tell you where they are, tell you if they comply, and give information to redline. And we've started doing that with NDAs and are moving into vendor agreements and eventually customer agreements as well. So it's not just technical roles that use this. And it's not saying it needs to be everywhere, but you need to be open to it and willing to learn. Because, again, the evolution is pretty quick.

SPEAKER_00

Yeah, and I think your point's well made on this: what do you need the people for? And the answer, in my mind, is that AI right now, let's be honest, AI doesn't think. What it does is make very good predictions as to what would be the most sensible answer (not sensible, but most probable answer) based on the question you've asked it. Now, using that as the approach, if there's a lot of data on a certain point, it's likely to come back with a very good probable answer. My Macbeth point at the beginning was a really interesting one. I asked it about Macbeth, and it could answer in detail about every single character within Macbeth.

SPEAKER_01

And then it had a high hallucination and you said, out, out, damn hallucination.

SPEAKER_00

Exactly. Um, but it's that idea that when you ask it to do something it really knows, it can deal with it very well. But actually, when you're on those edge cases, and when you're doing your consulting within your company, talking to clients, the real value you guys bring is not what AI brings. It's actually when you work on those edge cases. And your team is just remarkable at seeing things that other people wouldn't see. And so what you want to do is empower them to do those things and critically look at the output from an AI, to very rapidly go: does that make sense?

SPEAKER_01

Is that right? Yes. And I think I have two great examples of how we do that. So right now, in a project we're working on, let's say it's drafting a memo about a brand and the category and the entry points for it. I could tell it how to think, what researchers to think and write like, give it a data set and a slide deck, and it could pump out a memo. And that might be okay, but we're not down for okay. And so what I'm trying to do a lot in our workflows right now, not only to help people understand better how this all works, but also to give them confidence that they're steering the ship, is kind of the AI sandwich, where on either side of it there's human input. And so, for that example, there's a set of pre-questions. You know, what's their goal? What have they done to date? Where are they looking to go with this? How recent are we looking? And so first we're gonna give it a bunch of pre-questions that an analyst can answer, then have it draft the memo, and then go through a bunch of post-questions after you read it, to help augment what you got in the end and some of the analyses and conclusions. And in that instance, I really try to make it as simple as possible for somebody to do good work. We want it to be programmed to the lowest common denominator, so that the person with the requisite knowledge can walk up to this and really understand what's going on and create something safely, because we built in fail-safes like, you know, did you check for this? Does this jibe with what you're saying is the conclusion? And that's the idea of pre- and post-questions on AI workflows. Could it be totally done end-to-end one day? Maybe, but I don't think something that standardized is ready to go to market yet. I think that over time, when we can refine and help educate clients as to what they're looking at, there's absolutely a value prop for being able to do these things instantly.
And then there's a separate value prop for having a lot of human input. So that's one side, the AI sandwich. Yeah. Uh the other side is something I call Ask 10. Uh, and it's like my favorite thing that I've thought of in working with AI. So initially I was really into prompt engineering, like, oh, think like this, do this, do that. Um, and I wrote what I call my Ask 10 protocol, where I will just type in what I'm trying to do and then say ask 10. It will read that prompt and then it will ask me 10 refining questions over the task, the context, and the scope. Because those three areas with LLMs and how they function right now are super important for helping it decide what to look at and what not to look at and to understand what you're trying to do. And so the idea is that by forcing the AI and the human with it to pause and clarify what's really being asked, you end up with you know a sharper response, fewer blind spots, and it's less about magic and more about the discipline of building it in up front. And I love it because it brings me options instead of me having to think of every possible option out there. And I could run ask 10 over and over until I get something I really feel comfortable with.
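The "AI sandwich" Evan describes earlier in this answer, human pre-questions feeding a single drafting step, followed by a human post-question review, could be sketched as follows. This is a hypothetical illustration: the question lists, function names, and prompt wording are illustrative, not Morning Consult's actual tooling.

```python
# Hypothetical sketch of the "AI sandwich": analyst-answered pre-questions
# feed the drafting prompt, and a post-question checklist follows the draft.

PRE_QUESTIONS = [
    "What is the client's goal?",
    "What have they done to date?",
    "Where are they looking to go with this?",
    "How recent should the data be?",
]

POST_QUESTIONS = [
    "Did you check the data set behind each figure?",
    "Does each conclusion jibe with the supporting analysis?",
]

def build_draft_prompt(pre_answers: dict[str, str], data_summary: str) -> str:
    """Assemble the drafting prompt from the analyst's pre-answers."""
    context = "\n".join(f"- {q} {a}" for q, a in pre_answers.items())
    return (
        "You are drafting a brand memo for a market-research client.\n"
        f"Analyst context:\n{context}\n"
        f"Data summary:\n{data_summary}\n"
        "Draft the memo from this context only."
    )

def review_checklist() -> list[str]:
    """Post-draft questions the analyst answers before the memo ships."""
    return list(POST_QUESTIONS)
```

The design point is that the model call sits between two human steps: the pre-answers constrain what it drafts, and the checklist forces a critical read of what came out.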

SPEAKER_00

And is um and with your ask 10, is this something you say, these are the standard questions you ask me, or is it literally these are the themes you ask me the 10 questions on? Or is it not even that? It's just ask me 10 questions about what I've asked you.

SPEAKER_01

It is: review the prompt as I've written it, and then ask me 10 questions about the task, 10 questions about the scope, and 10 questions about the context. And I do have some subwords in each of those that describe for the LLM what I mean by task, what I mean by scope, what I mean by context. Present all 30 at once, and ingest all 30 answers before moving on to the next response.
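The protocol as described could be sketched as a meta-prompt builder. This is a hypothetical illustration: the wording and the `build_ask_n_prompt` name are illustrative, and Evan's actual protocol also defines subwords for each of the three areas.

```python
# Hypothetical sketch of the "Ask 10" protocol: wrap a raw task in a
# meta-prompt telling the model to ask n questions each about task, scope,
# and context, presented all at once.

def build_ask_n_prompt(task: str, n: int = 10) -> str:
    """Build the 'Ask N' wrapper around a raw task description."""
    total = 3 * n
    return (
        f"Review the task below as written. Then ask me {n} clarifying "
        f"questions about the TASK (the concrete deliverable), {n} about "
        f"the SCOPE (what is in and out of bounds), and {n} about the "
        f"CONTEXT (background facts and constraints). Present all {total} "
        f"questions at once, and ingest all {total} answers before moving "
        f"on to your next response.\n\n"
        f"Task: {task}"
    )
```

As the hosts note later, the count is tunable: `build_ask_n_prompt(task, n=5)` gives the "ask five" variant with 15 questions total.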

SPEAKER_00

I love that. Um, I have to say that I do something similar when I'm playing with my stuff as well, but I don't have it ask as many questions back to me. Um, ask five, possibly. I'm quite tempted to try that out, so I might ask you to share that with me afterwards.

SPEAKER_01

Yeah, we can put it in a link, or, you know, in the description below. Um, you can make it ask five as well. With Ask 10, I use a lot of voice dictation so I can move faster, because AI is great at taking a massive unstructured text, organizing it, interpreting it, and then applying it.

SPEAKER_00

Oh, it really is. Yeah, I agree completely. I mean, I love it. I love being able to do that. And I think at the same time, you know, being able to go back into it afterwards and understand what you meant is a key thing that is missing. And this is where we get back to critical thinking every single time: you know what it has recorded, right? You know that it's in the right place, maybe, but sometimes it's not even in the right place.

SPEAKER_01

And it helps you understand the workflow a little bit, and it helps you improve your prompting up front of like, oh yeah, I want to think of things like this, or I want to ask it, hey, should we consider anything from this point of view? And so it can really help you not only refine and you know uh have a better execution for the task you're on, but if you're curious, it can teach you about what it's thinking about when you just give it that blank task up front.

SPEAKER_00

And I love this. I mean, you and I both are falling into that trap at the moment, which is: it teaches you that it is doing something. And actually, is it?

SPEAKER_01

Oh I I think you can learn anything in the world.

SPEAKER_00

Are you saying you are being taught? Is it teaching you or are you being taught?

SPEAKER_01

I think a little bit of column A and a little bit of column B. And I'll use an academic example here. I have to write a paper explaining Plato's Allegory of the Cave. I love political theory. And I just say, hey, write me a three-page paper about Plato's Allegory of the Cave. Boom, done. Great. If I really want to do it, and I don't want to actually read the text (which you should do, because that helps your critical-thinking analysis down the line; it's a cool book), you could say: what's the main theme and theory of this? Give me five examples. Explain it to me like I'm five. Ask me ten questions to make sure I understand this. Here's the outline of the paper I'm going to write; do you see any questions I should be asking myself on where I want to look? You know, I know an attorney whose daughter uploaded all of her notes from AP U.S. History and was like, hey, ask me four potential exam questions. I'm gonna give you answers back, and then you grade them and tell me. So I think it is being taught when you get the responses, but the teaching is happening when you're interacting with it and using that critical thinking to say, what would help me understand better? And that's why I think it's so crucial, to the extent that people can use it properly in school, to use it, because it could be a bespoke tutor helping you understand exactly what you're missing on these concepts and theories, instantly there all the time for you to look at. And it understands how you learn, how you retain knowledge. Are you better with flashcards, or long blurbs, or concise bullet points, or reading and giving something back? So I think there's a lot of use there in academia, but it's gonna take some time to figure out how to navigate that.

SPEAKER_00

Yeah, and you know, I'd like to come back to this academia question in a moment. Before we get to that, though, I think we kind of skipped over tooling. So let's go back to tooling first: what does that mean to you and your organization? And then I want to ask you one more question about the company and the way in which it operates.

SPEAKER_01

Yeah. Right. So when it comes to tooling, I was really excited, because the first thing we rolled out was a platform called Flynn, which is a great Tron reference. And in Flynn, you can basically access all the top models, which was great because different people work in different ways. You can also build custom tools in there, for briefing on a brand or a client you're gonna go see, or a follow-up, or helping you draft surveys and things like that. And I really loved that because it was very secure. Everything was housed in our AWS environment and our S3 buckets, so client data wasn't flowing off-site; it was all being held in one secure location that we knew. And that gave me a lot of comfort to talk to staff and tell them, here's how you can experiment here. Here's what you still need to be aware of. Remember the ethical guidelines. But it was a lot better than just telling people to go figure out how to use LLMs, and people spinning up their personal accounts, dropping PII in them, and really breaching some of the guidelines.

SPEAKER_00

They were never doing that. None of your team would ever do that.

SPEAKER_01

Well, I mean, within that, we use Cyberhaven, and when we would see people taking data out, we would have a pop-up that would say, hey, what are you doing? Can you explain it to me here? And then we would take a lot of that feedback, and I would see, okay, why are people not using Flynn for this? Is it because it can't do A, B, and C, or is it because the other thing is more convenient? And that would help us craft a lot of the messaging to tell staff, hey, here's the safe way to use it. So that was the build side. There's always build versus buy. Building your own tools is great because they fit your organization. Buying stuff is great because it works on day one, it's supported as long as you pay for it, and somebody is solely dedicated to it. An example of that would be something like Cursor for engineering. It's not perfect or exactly applicable to every single role for every engineer, though that's improving every day. But being able to say to that team, hey, who wants a license? Who will play with it? And seeing how different people adopt it, then speaking to each other about, hey, here's what I'm seeing at work, here's how it works, here's where I'm seeing it go. Really being able to find tools that work in the moment, that might be further along and fit, you know, an 80 or 90 percent use case, and saying, okay, we're gonna buy that tool.

SPEAKER_00

It's really funny, isn't it? Because what you're describing there is this big shift in the way in which we as organizations now acquire tools. It used to be a really top-down approach: you would buy the IBMs for everyone in the company, and everyone would have access to whatever software package the IT department decided was the right product, based on a whole bunch of other things. What you're describing now is this shift we're seeing, not just in software acquisition but in a lot of other areas right now, which is: you tell me the tool you need and explain to me how you can use it safely, and you can have it.

SPEAKER_01

And you know, that's what I try to encourage with a lot of people. Some people on our geopolitical risk and economics team wanted to use Claude, and I was like, cool, how are you gonna use it? Great, let me go set up an account for you. It'll be on the company card, go use credits on the API, and just show me what you do. And that's the hard part of gentle but intentional exploration: if you have a cool tool and you think it can help people, let me know. Let's explore. Maybe it works for everyone in the company, maybe it just works for your division, maybe it just works for your pod in commercial. But if there are efficiency gains, I'm excited when somebody brings me a tool and says, hey, can we buy this? And most often I'm saying yes. Let me go have a few quick conversations with legal and security to make sure we're up to snuff, but let's try it out, let's implement it. All I ask is that in a month, I'm gonna reach out and ask, hey, how's it going? What have you learned? What have you implemented? And then a month after that, again: what do we think this is? How do we think it's going? Might be faster, might be longer, depending on the tool. But most of all, it's exciting when people are trying to improve of their own accord, saying, I think there's something that can help with this, and maybe even using AI to find that tool.

SPEAKER_00

Yeah. I love this, because my next question was actually about company culture: what level of importance do you put on company culture in this journey right now? By the sounds of things, it's super important. But I would love to know a bit more about the culture of your organization. Are you a hybrid organization? Are you remote? Are you all in the office? I mean, you look like you're in the nicest office ever, obviously. There's an awful lot there to look at and say, well, if everyone is remote, how do you do this in a remote setting? How do you do it in a hybrid setting? So how have you approached those questions?

SPEAKER_01

So at Morning Consult, some roles are in the office because they need to be. My facilities team needs to be in the office; they can't really service and support our offices if they're remote all the time. Now, sometimes, absolutely, they'll work from home on certain days because they're dealing with vendors or other things. But for a lot of teams, it's choose your own adventure: what works best? I have mixed feelings on hybrid, on when it's beneficial and when it can be detrimental, but that's a conversation for another time.

SPEAKER_02

Yeah.

SPEAKER_01

But, you know, we do let people choose where they want to work. And then we really try to bring people together: we have an all-company summit every year where everyone comes in for a few days and we talk about the year ahead, where we're going; we'll do workshops, teams will work together. I'm really excited. I want to do a setup where I'm just gonna raffle off 20-minute time slots and be like, what do you want? I'll build whatever you want. You want something to manage your lawn care? Great, here's how we're gonna do it. You want something to help you learn how to fix your car? Cool, let's get cracking. And just show people the personal uses of it. But we really let people be wherever they are, and a big thing that we strive and push for is experimental safety. Like, yeah, I want you to try new stuff. Nobody here is gonna have an idea and hear me say, oh, you're an associate, you don't get to have ideas, you get to follow rules. I've worked at places like that. That sounds like the dumbest thing I've ever heard. Great ideas can come from anyone, and especially the people closest to the work. So I think it's about really giving people that experimental safety to say, yeah, there's a right way to do it, but if you have an idea, I will be frustrated if you keep your mouth shut. Please reach out, please Slack me. I'll hold AI office hours where anyone in the company can find me and talk to me about something they're trying to do, and say, I am available. I want everyone to be able to learn these skills. I want everybody to be able to understand it. If you have concerns, great, I'm down to talk about it. So I think having that safety is good.

SPEAKER_00

Yeah, and I love this idea you're sharing here, and please correct me if I'm wrong. What you're basically saying to people is: I don't care how you're trying to use and implement it; I just want you to be playing with it and learning where it lands, how it can impact your life. And eventually, because you're so fluent in it, you will find ways of using it within your work setting. You find a thing you're passionate about to start off with, and then you can bring that passion across.

SPEAKER_01

And then you can go home and micro-segment your home Wi-Fi network. Engagement is absolutely the most important thing. I can look at some high-level data in Flynn and see we have anywhere between 200 and 250 active users a week, and that's great. Then I look at our custom tooling, and I might see only half of one division using a tool. So that's kind of the next evolution: helping people understand how this can be applied at a higher level. And I'm excited to work on some initiatives with people on our research science team to do that. But I look at everybody using it and I'm like, great. As long as they're playing with it and engaging with it, that means we can begin to plan and move to step two of more advanced use cases: people building and using their own custom GPTs, exploring further. Just the fact that you're in there right now, with a tool less than six months old, is very exciting for me.

SPEAKER_00

That's brilliant. I guess as you're talking, I'm thinking, is there a challenge here? The challenge being: I've created a tool that I love and I use. And because I've created it and you haven't, you don't use my tool. You want to create your own version that does something slightly differently; rather than dealing with the way I do it, you would rather do it your own way, and so you lose a little bit of efficiency that you could have gained. Or are you seeing that actually, today is not about that?

SPEAKER_01

It's not about that. Nothing is off limits. If I build a tool that does something, and you think there's a variation of it that could be better applied to your specific workflow, great, ping me. I'll give you the instruction set and documents; adjust it and build your own version. That's awesome. That's incredible. Why would I, as an executive, be upset about someone building a tool that's even more customized for their exact needs? I mean, that is awesome. And we have an AI channel, and people will give us feature requests all the time. Hey, can we look at this? Hey, can we look at that? And I really like that, because I want to know how people are using it, what they're doing. And I look forward, in Q4, to embedding myself in some teams and just being like, I just want to see how you work. I just want to see what you do and what we can fix. Maybe it's nothing, maybe it's something, but I just want to try to help you do less low-impact work and more high-impact work. At the end of the day, all ideas are valid. So let's talk about it, and let's make sure you understand how we can implement it, or why we can't.

SPEAKER_00

Well, what I'm thinking about at the moment, and I'm gonna throw out an idea right now, is that we've been talking a lot about how businesses can do these things. And the benefit, from a business point of view, is that you can make a decision and just do it. Okay. But you mentioned academia before. Within academia, there are structures, there are processes, there are top-down diktats, almost from a government level. Certainly in the UK, there's the Department for Education that dictates certain things have to happen, various exam boards that dictate things have to happen in a certain way, teaching methodologies that are in play, all that kind of thing. I imagine it's the same in the US: there are top-down instructions on how things have to happen. If you had a teacher in front of you today, forget about all the other things, what would you say to that teacher? Buckle up. Buckle up.

SPEAKER_01

I mean, yeah. A, I think responsible use, just like ethics, is really important for students. And for kids in college, to remind them: hey, you're paying to be here, so it's your goal to learn. You're never gonna stop everyone from cheating; that's just a hard fact. People hide notes in their shoe during an exam, or go to the bathroom and look at them. Kids are going to use ChatGPT, and you can try to instill the culture around why that's not useful to you, why that's not helpful to building really good critical thinking skills, but I think you can't shy away from it. I completely understand there are processes and procedures that we need to respect and understand in academia. My fear is that if we move too slowly, those choices will be made for us. You know, I used to love Fourth and Fifth Amendment data privacy law. In the early 2010s it was the Wild West; we hadn't really even figured out a nationwide standard for GPS tracking warrants. And the problem I came to see is that the law is very linear in how it comes about and changes, but technology is exponential and moves really quickly. That problem is becoming bigger and bigger now in the academic world.

SPEAKER_00

And so it has to track that progress, doesn't it?

SPEAKER_01

Yeah, or else you're gonna deal with the fallout. And this technology is moving; it's 20 years of innovation packed into the next four or five. I'm working with my alma mater, Sewanee, because I talk to a lot of our graduating students, and either they know a little bit about AI or not a lot about AI. I'm not saying anything about how they should use it at school, but I would love to teach four one-hour classes on what AI is, how you use it, building a custom GPT, and implementing it in your life going forward. Just so they have a baseline understanding and can start to find a better way to use it for tutoring themselves, or anything. Because it's a skill people are gonna learn on their own, and if we don't start to understand how that will work with the learning process, I don't think we're gonna have the best outcomes.

SPEAKER_00

It's funny. I think about times in the past when I worked in teams where the senior people would look at the junior people joining and go, I don't care how they've been taught to do things; the way we do things is the way we say. And when they're in my position, they can make decisions about how things happen in the future.

SPEAKER_02

Totally.

SPEAKER_00

And actually, we need to get off that, don't we? Now we need to be thinking about how our kids, young people, junior members of the team are using these tools, how they're adopting them, how they integrate into their lives, because we need to be building at that speed. And as I'm thinking this, I'm reminded of something I said when I was in a school talking about AI and critical thinking. I said to the teachers, what you should be doing now is setting the homework and saying, go home and use these tools and come back with an answer tomorrow. And then tomorrow in class, without those tools, we're gonna tear apart those answers.

SPEAKER_01

Love it. I mean, that's the right way. Kids are gonna use it; it's really fun and cool, and it can be applied in so many ways. But teaching them how to not just use it but utilize it in the learning process is just as important. And kudos to whoever came up with that process, because that's great. It's: go learn how you can find these things. And if that teacher or professor has the chops to be like, okay, cool, so that's totally fake, what was the prompt you used to get that? What was the workflow? Okay, guys, let's all talk about why you have to be careful, give rationale, and have an engineered prompt to get a good answer. Yeah, that's really valuable.

SPEAKER_00

Yeah, and you know, I talked about getting ChatGPT to lie. The way I did that was I played two truths and a lie with it. I asked it to identify which one was the lie, then I asked it to lie back to me, and it did. But it was obvious what the lie was. So I said, look, here's why the lie is so obvious, and I explained it. Then it had another go, and another go. And then I said, okay, clearly the lie is this, and it was like, no, it's not; the lie was a different thing. And I thought, no, that's not true, because it's blatantly obvious that, in this case, you can't have run a marathon on six different continents.

SPEAKER_01

Right.

SPEAKER_00

You are an AI; you can't have done that. Right. And then it went into great detail to explain how it had: how the travel was involved, everything else to get from one place to the other. It was adamant that it had done it.

SPEAKER_01

You're absolutely right. I mean, right now, sometimes AI can be too complimentary and say, sure, you can be a movie director, let's figure out how; you're gonna make it happen, no problem at all. And you're like, oh, great, I've never even thought of being a movie director, but this sounds pretty simple. Right now there are ways to work around that, like: pretend you were someone trying to do this; what would they do, and how would I stop them? And I think those bugs might change over time, but it's the same adage as with a lot of technology and the internet: not everything you read on the internet is true. It helps to fact-check it and make sure the sources it's coming from are reliable and credible. It's the same thing for AI. Okay, cool, where did you do that? Has anyone else ever done that before? You just have to have that critical thinking and curiosity, to say, okay, but how? Diving into it, not just taking everything at face value, because you can Google things and get a lot of wrong answers out there, just like you can ask AI and get a lot of wrong answers. There's a right way to do web search, and there's a right way to use AI. And helping people understand how to do that is really important.

SPEAKER_00

Yeah, and I think the point you made was: leave no one behind on this journey as well.

unknown

Yeah.

SPEAKER_00

That's the really important thing that's come out loud and clear throughout the whole conversation. Um I've got two more questions if you've got time. Oh.

unknown

Okay.

SPEAKER_00

So the penultimate question for you is this: if you had to give one lesson, one takeaway, to other leaders from your journey, what would it be?

SPEAKER_01

Very simple: the people closest to the work will have the best ideas. That's really important, because they understand the minutiae of their workflows. And if you can train them up to the point where they understand AI and its applications writ large, they'll be able to identify those processes. And listen, they might say, oh, it can't do this. Right, it can't do that in 100 percent of cases, but it could do it in 80. And if it could tell you about the 20 percent it can't do, wouldn't that be great? So I think really leaning into enabling your organization, and holding people accountable for self-improvement and learning, is gonna be the best way to find success in this.

SPEAKER_00

Amazing. And my very last question for you, Evan, and this is the one question I briefed you on beforehand, so I hope you've got a good answer now, is this: in the world of AI, what does it mean to be human?

SPEAKER_01

So this is a tricky one. And I was thinking about it again through the political theory lens, because of the Plato example. There's the Aristotle idea that we're most human when we're together, talking things through, building a shared life, and then there's the Hobbes side, that life is nasty, brutish, and short, and we need guardrails. And on this, I think to be human is to experience things in more of a Hobbesian way, and to learn that we need those guardrails. We need to watch out for self-interest in how we use AI, and we need clear rules as we build these really powerful tools. So I think to be human is to experiment and understand.

SPEAKER_00

I love that so much, because that's not at all what I expected you to say. I expected you to go with the Aristotle approach, that being together is the important thing about being human. So much of our conversation has been about valuing the person, so I was expecting relationships and all of that to be the primary thing. But I love it.

SPEAKER_01

Thanks, I like it too, and I do value being together. I got to be with some people from our team this week, and it was just incredible. I would always say that there is a strong correlation between the strength of relationships and the amount of time you get to spend together. So I'd prefer to be Aristotle, but human nature at this time is a little bit more Hobbesian.

SPEAKER_00

Okay. Well, look, I look forward to spending some more time with you next time I'm in New York. It's gonna be excellent. Thank you so much for taking the time to talk to me today.

SPEAKER_01

Thank you for having me. It was my pleasure, really.

SPEAKER_00

Just so you know, this podcast is for general information only. It's definitely not legal advice, and it doesn't create a lawyer-client relationship. Laws change, and they can be different where you live, so please don't rely on this when making decisions. If you need advice, speak to a lawyer. Every now and then, platforms might pop links or extra content alongside this podcast; they're not from me or Wilson Sonsini. And in some places, this episode counts as attorney advertising.