Episode 256

Using Generative AI to Unlevel the Playing Field

Frederick Vallaeys - Optmyzr
October 18, 2023
SUBSCRIBE: iTunes | Stitcher

Are you dabbling with generative AI or using it to gain a competitive advantage? 

Are you getting better at using AI or staying the same? 

If you're like some professionals I know, maybe you've gotten frustrated with only slight gains using AI, so you've moved on.

I'll admit, I'm still just dabbling compared to my buddy Fred.

Fred Vallaeys is one of the smartest dudes I know. His perspective on AI is inspiring and sure to get your wheels turning. 

But first, take a peek at Fred's resume! 

  • One of the first 500 employees at Google
  • He was on the team that acquired Urchin, the precursor to Google Analytics 
  • He was on the original Quality Score team
  • He was one of the original 6 people who built the AdWords editor

Now, he's the co-founder and CEO of Optmyzr and the author of 2 amazing books.

Here's a look at the tasty AI morsels we chew on in this episode:

  • Practical ways to use Generative AI that you might be missing.
  • How to ground the AI so it doesn't "hallucinate."
  • Understanding the best LLMs and how they impact generative AI.
  • How generative AI is different from Machine Learning.
  • Embedding, vectors, and chain prompting

Ready to step up your AI game? I am!

Transcript:

Fred:

The machine continues to learn. And so the answer it gives you today is not going to be the answer it gives you tomorrow. It basically gives them a function, a Python function, to calculate some probability of next month's budget. And it's like, oh, great, it's able to do it. So now they come back every week and they plug in the new numbers and they ask the GPT system to do it again and again and again. But it's not because it wrote the correct Python code last week that there's a guarantee it's going to do so again or do it better this week. So every time you use generative, you kind of have to fact-check, and that's to your point. I mean, I wish I could go sit on the beach, but no, I keep having to validate what it's saying because if I don't, I'm going to get into a lot of trouble.

Brett:

Today we're talking about how to unlevel the playing field with generative AI. We don't want a level playing field. We want to slant things in our favor. Now, my guest today is one of the smartest dudes that I know, Fred Vallaeys. Take a listen to this resume. One of the first 500 employees at Google. The team he was on acquired Urchin. Urchin is the U in UTM parameters, by the way, but Urchin was the precursor to Google Analytics. Also, his team designed Quality Score. Quality Score is one of the early innovations that set Google Ads apart from all the rest by rewarding ads for their good quality. So legendary. Also, he was on the team of six that invented AdWords Editor. Now he's an author of multiple books, one called Unlevel the Playing Field, and the other called Digital Marketing in an AI World.

And in this episode we talk about how to understand the difference between generative AI and traditional AI, how to use it in very practical ways for your business, where people get it wrong, how to get the AI to stop hallucinating if it is, plus some really nerdy things like embeddings, vectors, and chain prompting. It does get a little bit nerdy, but it's also super fun and super practical. So please enjoy this interview with Fred Vallaeys. I'm here with Fred Vallaeys talking generative AI and how to leverage it to grow your business, improve your PPC, and do other wildly cool things. So Fred, man, how's it going? It's been a long time. Welcome back to the show, and thanks for taking the time.

Fred:

Yeah, thanks for having me back on, Brett. It's been too long indeed. It's been a weird couple of years and just got back on the road seeing a lot of the industry people. So good to see you again as well.

Brett:

Yeah, it's so fun to get back in person more. We used to see each other probably at least once a year, right, at a Google Marketing Live or something. And of course that has all dramatically shifted over the last several years, but we're getting back out there, and I'm just thrilled to have you here. So I love the topic of AI, and you literally wrote a book on AI, so why don't you tell us a little bit about that book?

Fred:

Yeah, I mean, so it's funny because I wrote a book called Digital Marketing in an AI World, and this was published in 2019, after I'd been writing on Search Engine Land since 2017 on the topic of how AI would change the landscape and the PPC professional's job and the digital marketer's job. But that AI we were talking about back then is so different from the AI we're talking about now, the generative AI. So I wrote a second book, which is called Unlevel the Playing Field, and the premise was to take that concept to the next level. So if you believe that the human is still necessary to produce better results, even in conjunction with the AI, what are some of the techniques you can put in play to make your team perform better? Because as an agency, as an in-house marketing team, sure, you can use all of Google's automations and you can get average results.

Now, anyone can get average results, which is really cool because before it wasn't possible for everyone. But if this is your job, if this is what you do, then average is not good enough. You've got to prove your additional value. And that's sort of what the book is about: how do you take these really cool technologies and make them your own? And then when it comes to generative, I think it's hugely misunderstood, used in many incorrect ways. And so I'm kind of on a quest right now to teach people: how is it different from traditional AI? What does that mean? What's it good at? What's it not good at? How could you use it, and how do you get it to basically get you a promotion as opposed to getting you fired because someone else used it better than you did?

Brett:

And that's the topic of today, really understanding what generative AI is and understanding how we can use it to unlevel the playing field to your advantage. And I'm still of the mindset that it's better to have AI plus smart humans. We don't want exclusively one or the other. We don't want to give the stiff arm to AI and say, no, no, no, just smart humans, that's all we need, scared of the AI. But we also just can't turn things over to the machine and then go hang out at the beach. It's more about how do we leverage AI to do more, to be more strategic, to get more leverage out of what's going on. And so I'm really excited to dive into that topic. We'll certainly talk about a few of the things that we're doing on our end as it pertains to AI, but as I mentioned before we hit record, I'm more in the observing, learning, watching, dabbling stage. I know you're going really hard into the generative AI space, so I can't wait to get your perspective on that. But yeah, talk about what generative AI is. How is it different from traditional AI, and where does that trip people up?

Fred:

Yeah, I mean, so when you think about traditional AI, it's really been about machine learning. And we've been using that for a very, very long time. And when I say "we," I mean everyone listening who advertises on Google. You've had Quality Score for your keywords, and that has been artificial intelligence. That thing has existed for over 15 years. So I joined Google in 2002, and fairly shortly after I came there, Google started looking at ad rank. CTR was a factor of it, but then it was like, we're not just going to use historical CTR, we're going to use predicted CTR. And how do you get predicted CTR? Well, that was a big machine learning system, and it was crazy because these machine learning systems back then, they took months to train. We'd feed the data. Okay, it wasn't months, but it was like weeks.

And so the machine would be learning, and then eventually we'd get something out of it, and then we'd be able to have the humans validate that the machine predictions were somewhat valid, and then we'd deploy it. Nowadays you do the same thing in minutes. But that's sort of the traditional machine learning: pattern detection. Feed it a bunch of your historical data and make predictions about the future outcomes of similar things. Generative is about starting with a blank slate: come up with headlines, come up with keywords, come up with songs, make videos. What's interesting is the way generative AI does this is super mathematical, and it is based on machine learning. It's based on predicting the next logical word in the sequence. But it's not about giving you the number, it's about giving you the text, the verbose, beautiful description of something that happened.

So that's a big difference. But then sometimes people think about it and they're like, hey, well, it seems like I can go to ChatGPT. And by the way, when we talk generative and ChatGPT, it's often interchanged, right? But ChatGPT is just one of the vendors in that space. But you can go to it and you can give it a mathematical question, and then sometimes you get the right answer. And if you get the right answer, you're very lucky, let me tell you that, because it's a prediction. It's not an actual mathematical equation that it's solving for. It's just predicting what's the likely next word. And if you're lucky, it sends your question to the actual arithmetic and then you get the right answer. But there's no guarantee of that. So that's one scary thing, because I've seen people go to it and be like, here's a list of keywords, can you tell me the predicted cost per click for these? It's just guessing. It's not telling you anything actual from a database.

Brett:

Yeah. I really appreciate that breakdown there. And we have been using traditional AI for some time, even going back to Quality Score, and kudos to you and the team that developed Quality Score. I believe that was one of the original innovations that really made Google what it is today, in addition to measuring backlinks and stuff like that in the really, really early days. But this idea of giving ads a quality score and then rewarding good ads, the whole ad rank and quality score system, brilliant game changer. I think it changed the industry for the better. And then of course, as we look at Smart Bidding, target return on ad spend, target CPA, even Performance Max, all the things that are in there, that still leans more on machine learning and predictive AI. But I like how you laid that out. Even generative AI is still predicting. It's still predicting what should come next in a sequence of words, but it shows up different and it functions a little bit different.

Fred:

Exactly. I mean, it's all based on the transformer technology, which, by the way, was not invented by OpenAI. It was invented by Google Brain. And then OpenAI, which was a nonprofit, took that technology, did really well with it, and then all of a sudden they were like, hey, we're going to become a commercial company. And then Elon Musk, who was part of it, got so pissed, he was like, oh, I'm not doing this anymore. So he left, and now he's building his own version of transformer generative AI. It's all fascinating. If Quality Score hadn't existed...

Brett:

So just to double-click on that: the transformer that's sitting at the core of OpenAI, that was actually developed by Google?

Fred:

So it was Google Brain that did the academic research that developed that, like you're saying, right? I mean, it's all very mathematical, prediction-driven models. I can't fully explain it because it's way beyond my capabilities, but again, it's about the sequence. So if you give generative AI a very simple phrase like "the cat," in the most basic example of generative AI, it's trying to complete your sentence. You're basically saying, continue writing from this point forward. And so now it says, well, I could put in the word "meows" or "barks," but how does it know which one is the better option? Well, it uses predictions and it says, well, when I've looked historically, and this is a large language model, so when we say historically, it means all the historical text that it's considered to learn, is it more likely that the word "meows" appears close to the word "cat," or is it more likely that the word "barks" appears close to the word "cat"?

And so obviously now it says, well, it's 85% likely it should be "meows" and only 15% that it's "barks." So it's going to say "the cat meows," but it could have equally well said "the cat purrs." That's another fine answer. And that's fine when you're talking about being creative, writing headlines, coming up with keywords. But if you're saying "one plus one equals," well, likely it's seen "two" appear very close to that combination of words in the past. But again, it's guessing. And there's this fascinating study that came out of Stanford recently, and it talks about the concept of drift in AI. So what they did is they asked, is the number 17,077 a prime number? It is a prime number. But they asked GPT-4 in March of 2023, and about 84% of the time it got the answer correct. And then three months later, by June of 2023, it was down to around 50%. And that's interesting, right? Because it used to be really good at answering that, or quite good.
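
For readers who want to see the idea in code, here's a toy sketch of that next-word prediction. The probabilities are invented for illustration, and real models score tens of thousands of candidate tokens, but the weighted-sampling mechanics are the same, and they're why the same prompt can come back different on another run.

```python
import random

# Invented probabilities for illustration only; a real model scores
# tens of thousands of candidate tokens, not three.
next_word_probs = {"meows": 0.85, "purrs": 0.10, "barks": 0.05}

def complete(prompt, probs):
    # Sample the continuation weighted by probability. This sampling is
    # why the same prompt can yield a different answer on another run.
    word = random.choices(list(probs), weights=list(probs.values()))[0]
    return prompt + " " + word

print(complete("the cat", next_word_probs))  # usually "the cat meows"
```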

Brett:

In theory, it should get better. If it was at 84%, then several months later it should be getting better. Ideally, you would

Fred:

Think, but this is what's happening. So they call this the concept of drift. The machine continues to learn, and so the answer it gives you today is not going to be the answer it gives you tomorrow. And that's quite scary, because I've talked to digital marketing professionals and they say, oh, I've been using the advanced analytics capabilities in GPT, and we can talk more about exactly what that is, but it basically gives them a function, a Python function, to calculate some probability of next month's budget. And it's like, oh, great, it's able to do it. So now they come back every week and they plug in the new numbers and they ask the GPT system to do it again and again and again. But it's not because it wrote the correct Python code last week that there's a guarantee it's going to do so again or do it better this week. Every time you use generative, you kind of have to fact-check. And that's to your point. I mean, I wish I could go sit on the beach, but no, I keep having to validate what it's saying, because if I don't, I'm going to get into a lot of trouble.
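
To make that fact-checking habit concrete, here's a minimal sketch in Python. The projection function is a hypothetical stand-in for the kind of code GPT might hand back, not actual model output; the spot checks underneath it are the point.

```python
# Hypothetical stand-in for a function GPT might generate, not actual
# model output. The validation below is the part that matters.
def project_next_month_spend(daily_spend):
    """Project a month's spend from a list of recent daily spend figures."""
    avg_daily = sum(daily_spend) / len(daily_spend)
    return avg_daily * 30

# Re-validate every regenerated version against cases whose answers you
# already know: last week's code being right doesn't make this week's right.
assert project_next_month_spend([100.0] * 7) == 3000.0
assert project_next_month_spend([0.0, 200.0]) == 3000.0
print("spot checks passed")
```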

Brett:

Yeah, totally makes sense. We can't be totally unplugged and unengaged; you've got to fact-check and you've got to spot-check. So how can we use generative AI from a marketing perspective? What are some of the ways you are using it now? What are some of the clever ways you're seeing it used? Let's get practical for a minute.

Fred:

I mean, so there's the basic stuff in PPC and search marketing, which is: give me some headlines. We have a responsive search ad, so we're being asked to provide a whole bunch of different headline variations, and as humans, we get bored and we sort of run out of steam. And so we have our 10 headlines and it's like, hey, GPT, can you suggest five more kind of like this? So it's really good at stuff like that, and that makes sense. Now, one thing to keep in mind is most people, again, use ChatGPT, and ChatGPT is this one interface where you can have a conversation, but there are not that many tweaks and settings that you can control. Once you start deploying these solutions at scale, you're probably going to look at some API capability or a plugin for Google Sheets or a plugin for Microsoft Excel. So yeah, one thing that I think is really interesting, especially when you're thinking about creatives and headlines, is how creative or non-creative you should be. Like, say you're in a regulated industry and you can't really make stuff up, or the AI can't make stuff up.

Brett:

Finance or health-related or things like that, you've got to be really buttoned up.

Fred:

Exactly right. And so now you have this factor called the temperature in generative AI that you can control. If you are using a Sheets plugin or you're using the API, you actually have access to this. And so you can say the temperature is zero, which means the model has to be very deterministic, and it can only say things that it's heard somewhere before or seen somewhere before. Or you can set the temperature all the way to one, which is the highest, and that says, just be as creative as you want. And maybe in the example of the cat, it's going to say at some point, sure, the cat barks. Let me try something new and see how that lands. So that's basically one of the things people can do there.
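
Here's a minimal sketch of what that looks like through the OpenAI API, assuming the openai Python package and an OPENAI_API_KEY environment variable; the prompt text is just a placeholder. (Fred describes a zero-to-one scale; the OpenAI API itself also accepts values above one.)

```python
from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY env var

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Suggest 5 more headlines in the style of these 10: ..."}],
    # 0 = deterministic, the safe choice for regulated industries;
    # raise it toward 1 (or above) when you want more creativity.
    temperature=0,
)
print(response.choices[0].message.content)
```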

Brett:

Yeah, it's really interesting. We're using ChatGPT in a couple of ways. Of course, we run a lot of Google Ads, and so we've got specialists that are writing headlines, and now there's kind of the generative experience inside of Google Ads, which is pretty exciting. But we've been using it for a while because, as you said, if we're trying to write 20, 30, 50, a hundred headlines, that becomes difficult. If we're running a thousand headlines, that becomes difficult. And so utilizing generative AI for that is great. I'm writing copy for the podcast and bullet points, and I'm creating other content. I've found that when I want ChatGPT to rewrite bullet points for me or rewrite headlines for a podcast or headlines for an article or something like that, I'm only using the suggestions a pretty small amount of the time. But it's still worth it. Even if I only use 10% of what it's giving me, sometimes it sparks a thought or leads me in a different direction, or it's just a lever. It's a way to not start at zero. It's a way to jumpstart the next idea. And so yeah, it is generally quite helpful.

Fred:

It is. And I think the more you start using it, and the more you use capabilities like custom instructions, and you start doing prompt chaining, the better results it's going to give you, right? Because I find the same thing. If I give it a pretty simple prompt, I often have to rewrite it extensively because it might use words that I don't really like, even though I say write in my voice. But maybe I don't like my own voice; I want it to be different. So I've started using these custom instructions in GPT. If you don't know what that is, it's basically under settings. You can say, this is my context, this is who I am, this is how I'd like you to respond. So as opposed to me having to have that interaction every time I start a new chat, it already knows this about me. And so one song that my kids are listening to a lot right now is "ABCDEFU."

Brett:

Yes,

Fred:

I've heard that song. And yesterday, my son was playing a variation of it, which is actually "A, B, C, D, E, F, G, H, I love you."

That's kind of interesting. Somebody took that song, and what if I took something and did a PPC-related one: A, B, C, D, E, F, and then PPC, right? So I'm like, I'm going to go to GPT and I tell it, I like this song and I want you to write something like it. Are you ready to do this? And I'm thinking I'm going to have to give a follow-up prompt with the seed word that I want it to start with. And before I even do that, it's like, oh yeah, here's a chorus. Because it knows my custom instructions, it knows that I'm super into PPC. And it wrote me a chorus right there about PPC. I was like, wow, that's spot on.

Brett:

We should share, we should drop that chorus into the show notes. If you're able to share it, that would be super.

Fred:

Right now I'm working on producing it. So I have the lyrics, the full lyrics. They're quite good, I think.

Brett:

You're going to produce the...

Fred:

Song, yeah. I have to figure it out, but I'm going to produce this song. And so this is not a PPC example, of course, but from a marketing perspective, yeah, that's kind of cool if we can put a video out and do something cool along those lines. Now, I'm also... so I'm a Captain America fan.

Brett:

I see that.

Fred:

So I'm trying to produce a comic book with some superheroes about PPC. And again, I'm not a good artist. I can't really draw, but I figured out ways to get generative AI to draw characters in a certain style and to draw images in a certain style. So a lot of what I'm working on these days is: make this generative AI your own, make it follow your brand guidelines. And that's really cool, right? Because once you get it to that point, now I could imagine, as opposed to only keeping 10% of the headlines and saying these are good, what if we could get that to 20%, 30%? And that, again, is unleveling the playing field, because you're using really cool technology better than anyone else.

Brett:

And it really makes sense. So you're taking your ideas that maybe you previously couldn't execute on, either efficiently or maybe even at all, right? To use the art example: I've got this idea for a comic book, I'm not an artist, I can't draw it, but I can explain my ideas to generative AI and it will create it for me. And so yeah, I love that. I love finding those little improvements, and ways to go from 10% to 20% to 30% can be a real game changer. We're also using generative AI to do competitor research. We look at, hey, this is a product that we're competing against on Amazon. And so we feed all the reviews to the AI and say, hey, what do people like about this? What do they not like about this? Synthesize the top five, top ten topics for me. And then we can also use it for landing page copy, product detail pages, things like that. So all of that, again, is stuff you could do on your own, but this unlevels the playing field and gets you to a great place much faster.

Fred:

I don't want to get too tactical, but these are really great examples that you're putting out there. I think a lot of advertisers or marketers kind of get stuck at the level of: how do I input this? Where do I even get these reviews? And then once I have the reviews, how do I give them to the machine? Because every time you talk to GPT and you put in a long blob of text, it's like, oh, sorry, I can only read the first 2,000 words or whatever. Now you have to figure out chunking it up. So I've done sequences where I'm like, I'm going to sit here in the chat and I'm going to give you 10 sections; process each of these and then give me the output. So a lot of the time savings are kind of lost in me having to give it that many examples.

I've been using Claude from Anthropic. That's a Google-backed LLM, at claude.ai. What's really cool about that one is you can do five uploads with every chat that you have, and each of these uploads can have 10 megabytes of text. If you think about 10 megabytes of pure text with no formatting, not a PDF, that's a lot of text. This is really good. And I've even had conversations there. One thing that I like to do in terms of blog production: a new topic comes out and I want to write about it. What I do is I turn on my iPhone, put in my AirPods, go outside for a walk around the block, 15 minutes, and I'm just rambling and recording myself the whole time. I'm like, oh, well, here's what I think about it, and it's really cool that this new report includes the cost metric.

And then I'm like, oh wait, does it include a cost metric? So I pause my recording, I go on Google, and I'm like, oh no, it doesn't include the cost metric. So I turn my recording back on and I'm like, scratch that, it doesn't actually include cost, so that's not that cool, so don't mention that. But here are the other things that are cool about it. And then I take that voice recording and I transcribe it. There's a lot of transcription software that doesn't cost a lot of money nowadays, and it's really good. So you get this transcript of just my stream of consciousness, which by itself is useless, but I give it to Claude and I'm like, what was I talking about? Summarize. It does a really good job. And even the cost point where I misspoke, it understands that I misspoke and it doesn't make that part of the final summary.

And then I'm like, okay, well, so that's what I think, but now here is the article, the help center article from Google about the topic, or here's the blog announcement from the actual place that built this thing, what they say. And then maybe here are a couple of blog posts from other people who've talked about it, and I've validated that the blog posts are good, they're factual. And then I'm like, okay, so based on what I think and the factual nature of this, propose some bullet points for a blog post, write me an outline. And it's that chain of prompting, the prompt chaining, that's really worked well for me.
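
If you'd rather script the chunking Fred mentioned than paste sections by hand, here's a minimal sketch. The 2,000-word limit and the reviews.txt file are illustrative assumptions; real context limits are counted in tokens, not words.

```python
def chunk_words(text, max_words=2000):
    """Split a long blob of text into pieces that fit a context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

reviews = open("reviews.txt").read()  # hypothetical file of collected reviews
chunks = chunk_words(reviews)
for n, chunk in enumerate(chunks, start=1):
    # Send one section at a time, then ask for the combined summary at the end.
    print(f"Section {n} of {len(chunks)}. Process this and wait for the next:")
    print(chunk)
```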

Brett:

Yeah, it's almost like having a personal assistant, personal writer, and researcher all rolled into one. But again, it's still relying on your prompts and your direction and your input. It's fascinating. So Claude, claude.ai. Fascinating. I'll have to use that voice recording idea, because I'm a verbal processor; I think better as I'm speaking sometimes. And I do like to be outside. I've heard some people talking about that, never used it myself, but I love it. Just pick a topic, start talking. I could see then using Claude to help you write a blog post or write social media posts or whatever. Just lots of options there.

Fred:

Exactly. And the one thing for people to keep in mind too is we've all had voice dictation for a very long time, and it's quite good. You can go into a Google Doc and you can start speaking and it writes your blog post. But I think where I always get stuck on that is I can't process in a logical manner what I'm going to speak. So that's why I prefer writing because I can go back, I can take a pause,

Brett:

You're chasing rabbit trails, you're looping, you're coming back. It's pretty convoluted. And sometimes when you're just speaking, and I think Google is better at this than Siri by a mile, but you start talking, then you're like, oh, wait, no, you didn't get that right. Well, okay, so lemme stop. Lemme correct that. Then lemme get back to it. And then now you've kind of lost your train of thought, right? So

Fred:

Exactly. What I'm saying is, don't do any of that. Just speak into your iPhone, into your voice memos, and then transcribe the whole thing. And then don't even read the transcript. If there are mistakes in there, GPT is good; it understands the context of what you were talking about. It's the same grammatical correction that Grammarly would do by looking at a word in context and knowing it's misspelled. Well, GPT does the same thing. It knows, oh, you probably meant CPC, as in cost per click, instead of CBC with a B as in boy, which may be what the transcription produced, right? It doesn't matter. It picks that up and it's going to fix it for you.

Brett:

Really, really cool. Super helpful. You mentioned something before. You talked about drift with AI and how it can just progressively get worse in certain areas. How do you ground the AI so that you get better results?

Fred:

Yeah, I mean, so the easiest way to do grounding is what I just talked about, right? It's chain prompting and feeding it in: here's the actual thing that I want you to transcribe, or here's the article I want you to summarize. Then it becomes focused in that area. Now, one project that we did as well: I took the two books that I've written and we wanted to build a chatbot around them, and we also wanted to bring a sidekick into Optmyzr, where you could start asking questions about how your account is doing. And maybe we say, well, your budget is not fully spent, but your results are good and you have budget available, so what should you do next? Well, in that case, we would recommend that you maybe increase your bids a little bit, or you find new keywords for more coverage so that you more fully spend that budget.

So that's the advice that we want to give. So how do we ground it, right? Well, there are a couple of ways of grounding. The first way is, in GPT, there's a thing called function calling. So keep in mind, GPT by and large has data from a couple of years ago, and anything specific to your account or your situation, it just does not know. But what you can do is you can say, here's the structure of an API call, the JSON to do an API call, that's going to give you back the information you need to do a good job. So if you go and say, tell me something about the budget, it will know: oh, I have a function which allows me to query for the budget for this account.

And so it constructs the JSON, the JSON then gets sent on to whatever API needs it, and the JSON comes back with the answer. Now you've grounded it by saying, this is the actual budget, or this is the actual amount of money that you've spent; make that part of the answer. And then it can do its construction of, okay, well, it seems like you spent less than you wanted to, and now it needs to give advice. Okay, so how do we give advice? And this is where we get into embedding. So embedding is this kind of advanced concept of vectors, where basically the question that you ask is turned into a mathematical number, and then that mathematical number is compared to every page of the book. And how do you get mathematical numbers for every page of the book?

Well, you have to embed the book, and this is actually not that hard. So if you look at OpenAI, they have an API that's called embeddings. You feed it one page of the book and you say, give me the mathematical representation of that one page. And then you store these mathematical representations in a vector database. Pinecone is one example; they have a free plan available. So you put it into Pinecone, and now what happens is, if a user comes to my chatbot and wants to have a conversation about their account or get advice from the book, we take their question and we turn it into the mathematical representation through OpenAI; they do that for us. Then we compare that mathematical number against the vector database, and it says, well, here are five pages that seem to be similar in their mathematical representation.

So these five pages come back and get given to the large language model, and now the large language model constructs a response from those five pages. And so what comes out of it is not based on all of the text that the model's ingested; it's based on five pages in my book. So it is grounded, and it is factual through function calling, and that is a way to make it your own. And of course, there are many marketing applications to this as well. You could build a model that says, here's my style guideline, here's every page on our website and how we describe products. So when you go and make ads for these products, it's grounded in how you speak.
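
For the technically curious, here's a condensed sketch of the two grounding techniques Fred describes, using the openai and pinecone Python clients. The function name get_account_budget, the index name, and the page text are all hypothetical, and the sketch assumes a Pinecone index already created to match the embedding model's dimensions; treat it as an outline of the flow, not Optmyzr's actual implementation.

```python
from openai import OpenAI      # assumes OPENAI_API_KEY is configured
from pinecone import Pinecone

client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("book-pages")  # hypothetical index

# 1) Function calling: describe an internal API the model may ask you to
#    call, so answers use real account numbers instead of guesses.
tools = [{
    "type": "function",
    "function": {
        "name": "get_account_budget",  # hypothetical internal endpoint
        "description": "Fetch budget and actual spend for an ad account.",
        "parameters": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
}]

# 2) Embed each page of the book once and store the vectors.
def embed(text):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return resp.data[0].embedding

book_pages = ["...page 1 text...", "...page 2 text..."]  # your own content
for i, page in enumerate(book_pages):
    index.upsert(vectors=[{"id": f"page-{i}", "values": embed(page),
                           "metadata": {"text": page}}])

# 3) At question time, pull the five closest pages and hand only those
#    to the model as its source of truth.
question = "My budget isn't fully spent but results are good. What next?"
results = index.query(vector=embed(question), top_k=5, include_metadata=True)
context = "\n\n".join(m["metadata"]["text"] for m in results["matches"])

answer = client.chat.completions.create(
    model="gpt-4",
    tools=tools,
    messages=[
        {"role": "system", "content": "Answer only from these pages:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```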

Brett:

Dude, that is next level. That may be one of those moments where you have to pause, back up the recording a little bit, listen to that again, and start making notes. There's a lot to that. Super, super interesting. And I confess this to you before we started recording: I'm more in the learning phase with AI. You are getting after it, and a lot of this is next-level stuff. So, super interesting. I want to talk a little bit about large language models. I know you talked about Claude, which is kind of backed by Google, and you talked about the transformer, which came out of Google Brain. So what do we need to understand about these large language models? Do you have preferences on which ones are better, which ones are better for different circumstances? What do we need to understand here?

Fred:

Yeah, I mean, so large language models have bias, is sort of the first thing, because they've been taught on a certain set of text. And one fascinating example to me: if you ask a large language model a question about gun control, you're going to get a liberal-sounding answer. And if you ask that same large language model a question about religion, you're going to get a conservative-sounding answer. Why is that? Does it have a political affiliation? Well, no, it doesn't. It's just that more liberals have written about gun control and more conservatives have written about religion, so it takes that tone of voice; it mimics that. So then the question becomes: can I build a large language model that doesn't talk about the things we don't want it to talk about? Could I build a large language model just based on my corpus of data: the books that I've written, the FAQs on the Optmyzr website, the support questions that we've had? The answer is no, you probably shouldn't, because you're not going to get to the volume of text needed to teach a good large language model. That's

Brett:

Not a large language model at that point. That's just too small of a set of data.

Fred:

Exactly. It's pretty small. And there are pretty interesting studies that show there's really an inflection point, and it needs to be very, very large. We're talking, on the smallest side, Meta has a large language model which is based on 70 billion parameters. 70 billion parameters, and that's small. Five times bigger is Google's model, which is called PaLM 2, so that's about 340 billion parameters. And then five times larger again is GPT-4, and it's split eight ways, but basically when you combine it all together, it's five times as large. So it's in the trillions of parameters, 1.4-ish trillion, and we're just not going to get there. So we have to look at these different models. Meta's is good. It's called LLaMA, LLaMA 2. It is free for commercial use. So it's not the biggest model, but it's free, which is really appealing. As with anything from Google, I think their model is a little bit too factual. It lacks creativity, in my mind. That's always the frustration I have, and it's the frustration I have with Bard as well. If I ask it to write something, it's like, yeah, it's kind of correct, but it doesn't read nicely, so I have to teach it too much to take on a certain tone of voice. GPT-4, I mean, what comes out of that is beautiful. I really love how GPT-4 writes, sort of the inferences it makes.

Brett:

You can get it to write in any style. We've got a guy at the office who's always getting ChatGPT to write the memos that he sends out. I want this to be in a snarky third-grade English teacher's voice, or whatever. And it is pretty good. It's pretty good at adopting that tone.

Fred:

Exactly. And so those are sort of the three models that I would look at, the three primary ones. But then even within the large language model, if you say you go with OpenAI, you have to start thinking about costs, right? Because ChatGPT, sure, you pay $20 a month, but you're not going to scale your business with that. You're not going to scale an agency or a big in-house project with that. You're going to have to use the API in some capacity to do things for 10,000 products or for 15 advertisers that you're working with. And when it comes to the API, now you have choices. You can use model 3.5, you can use model 3, model 4. And sort of the trend is every model becomes three times as expensive as the previous one. And then if you get into the really old models, it's like a hundred times cheaper, a thousand times cheaper, but that cheapness comes at a cost: it's just not very good at writing headlines.

So at the very minimum, you probably want to use model 3. And then speed is the other consideration. I love that you can go and talk to GPT-4 and you can sit there reading the response as it's generating. So by the time it's done generating, I've already processed it, and I'm like, yeah, this is good, or no, this is bad, it needs a tweak. GPT-3.5, on the other hand, is like, boom, here's the response, here are two pages of answer about the thing you just asked, which is amazing. But if speed is of the essence, which it often isn't in business, then 3.5 may be the better model to stick to. And then you sort of ask, what's up with these different large language models? So training your own model, that's question number one: should you be doing that? And then if you want, we can talk about fine-tuning and embedding, which are sort of prompt engineering, those next two levels that are probably a bit more accessible to the average user.
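
That read-it-as-it-generates behavior is available through the API too. Here's a minimal sketch assuming the openai Python package; the prompt is just an example.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is configured

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Suggest 5 PPC ad headlines for running shoes."}],
    stream=True,  # yields the answer in chunks as it generates
)
for chunk in stream:
    # Each chunk carries the next slice of text (or None when done).
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```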

Brett:

So we talked a little bit about embedding already, but how does that apply in this context and kind of the prompt engineering? What advice would you give there? Where do we maybe get it wrong naturally? What say you on that topic?

Fred:

Yeah, exactly. So prompt engineering is often about things like in-context learning, and it's about providing the thing that you wanted to summarize, or that you wanted to talk about, as the source of truth for what it's going to verbalize. And so again, it's about function calling, it's about turning things into vectors and storing them in a vector database. But it can also be about simple prompt chaining. The other thing people often don't really understand about generative AI is that OpenAI's initial models were not chat-based. Now, chat is nice because it becomes an interaction, and that interaction has memory. So the thing you asked five questions ago, that's part of the memory of that conversation. And so it brings that back and keeps grounding things in what you asked at that point. Whereas the original forms of generative AI were much more in completion mode.

So it was like, here's a list of five bullet-point headlines, and then you would say, now complete this. And it'd be like, okay, here are 6, 7, 8, 9, and 10 in that same style. So it would use your one prompt to come up with the next thing. But prompt chaining is probably one of the easiest ways to, without going into embedding, get better answers based on what you've built up to. And the other thing, like you said, this is like having an assistant, right? You can't just come into the office and say, hey, write me a blog post. I mean, sure, but how long? You have to give it very specific instructions. Take a few minutes to come up with a really good instruction: what would you have put in the email to your employee to help you with that? That's probably a good instruction for a large language model. So you still have to do that work. But what's cool, too, is you can chain tools. We use Midjourney for image generation, and there's a very specific way of prompting Midjourney. So you can go to GPT and you can say, write me five prompts to get this sort of an output from Midjourney. And now one AI is telling the other AI what to do. And that works really well too.
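
Here's a minimal sketch of that chaining through the OpenAI API, assuming the openai Python package; the transcript placeholder and the prompts are illustrative. Each call's output is appended to the conversation so the next step builds on it, and the last step has one AI writing prompts for another tool.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is configured

client = OpenAI()

def ask(history):
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    return reply.choices[0].message.content

# Step 1: summarize the raw material.
history = [{"role": "user",
            "content": "Summarize this transcript:\n<walk-and-talk transcript here>"}]
summary = ask(history)

# Step 2: build on the model's own output.
history += [{"role": "assistant", "content": summary},
            {"role": "user",
             "content": "Turn that summary into a bullet-point blog outline."}]
outline = ask(history)

# Step 3: one AI writes prompts for another tool.
history += [{"role": "assistant", "content": outline},
            {"role": "user",
             "content": "Write 5 Midjourney prompts for a hero image that fits this outline."}]
print(ask(history))
```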

Brett:

If you want to get better at prompt engineering so that AI gives you better answers, use AI to help with your prompt engineering. It's all very meta.

Fred:

And then, you know, at the end of the day, Google's AI is reading the content that you've written to decide whether it's good enough to recommend to a user. And by the way, that's the other thing: my mind was blown when Google and Microsoft started putting out generative results. Because when I looked at ChatGPT and people would ask it questions, like, what's the tallest mountain in the world? Okay, it's Mount Everest. Or maybe it's K2, I think; I'm not sure if that debate ever got resolved. But it's like, okay, so it seems to know things, but it's often also making things up. There's this great story. I was talking to a friend who's a PM at Google, and he was in a meeting and they were debating: what's the average conversion rate in B2B PPC?

And they were not agreeing with him that it was around 30%. So he goes back, and OpenAI had just come out with ChatGPT. So he goes and asks the question, and ChatGPT comes back and says, yeah, it's about 30%. So he feels super validated, but then he asks, well, can you give me some examples of reliable CRM companies that talked about average conversion rates in the past year? It's like, okay, Salesforce, HubSpot, Oracle, with links to PDFs of these amazing-sounding reports. So he clicks on the links, and it's 404 error after 404 error. What the large language model had done was, well, you told me reliable CRM companies, so HubSpot, Salesforce, Oracle. And it knows it should be something that was written in the last year, so, oh, let me put 2022 in the title of the link.

And by the way, GPT doesn't even look at content after 2021, or some of the models don't, right? So how would it even know that? So he was smart enough to click through and not lose his job as a result of making stuff up. But then, all of a sudden, Google and Microsoft are doing generative results. So how do they make sure that those are correct? That's exactly where they use embedding. It's not like they have a large language model that magically knows the right answer to anything. No, they're still running it through Google's ranking algorithms, and they're saying, well, look, here are 20 high-ranking pages about whatever you asked. And they use vectors and they use semantic search to do that. And then they give those 20 results to a generative system and say, summarize this. And that's why the answer is usually fairly correct: because it's not making stuff up, it's grounded by in-context learning that says, here are the 20 articles that I want you to go and do something with for the user.

Brett:

It's amazing. It's amazing. So, we're running out of time. I want to talk about just a couple of things here as we wrap up, to get your perspective. How do you see chat-based AI changing Google's core, which is search? There are all kinds of debates and articles online saying, hey, ChatGPT is going to destroy Google search, and will Bard be enough, and Bard's going to upend Google's economic model, and so on. How do you see chat influencing search and search ads?

Fred:

Yeah, I mean, it's a big unknown. And the question is, how do users interact with this chat? And I've been fascinated, because I think Microsoft's approach to generative is much less aggressive, at least on the search results page, than Google's. And you would think, as the number two player, Microsoft would be incented to really change things up. But for them, it's a fairly small section that runs on the right side, and then you can expand it to full page in the Edge browser. Whereas with Google, I've turned that capability on, and half the time I don't see the organic search results anymore because they get pushed down by the generative answer. I think I've already seen improvements where Google is getting quite good at knowing what deserves a generative answer and opens that up by default, and for something where it's more debatable whether that's helpful, it stays closed until I say, go and give me the generative answer.

But what you do have to realize, and you see this within Google, is that, like I'm saying, what's in that generative answer still came out of the top-ranked organic results; the top-ranked shopping ads show up in there. It's just summarizing them. It's just providing a different interface to interact with them. But then the big question is, how does the user interact? Does this become a zero-click search event where they got their answer? And if that's the case, then it probably wasn't commercial. It was probably not going to be lead gen anyway. So at some level, who cares, right? But for those things where the user does need to click through to buy something or to get more information from your company, those links are appearing in generative. So I would say, keep doing the things you're doing, and use generative AI to be more creative, to produce better content.

But if you're just turning it loose and saying, generate me 10,000 landing pages about different cities for my hotels, that's probably pretty risky. You need to ground it, you need to train it, you need to fine tune it so that it speaks your language, and then you still need to have human quality control on top of that, and that's going to produce good content. Google might appreciate that. That might become part of the rankings, but at the end of the day, this is Google's cash cow. So if the cash cow dies, then I think we're in a lot more trouble in general.

Brett:

Yeah, yeah. They're going to find a way. Google's always good at figuring out, how do we still monetize this? How do we make sure there are plenty of ads to click? Because you've got to keep the machine going and you've got to keep advertisers happy. And let's face it, we all love Google search. If we're searching for something that has commercial intent, if I'm looking for a product of some type or a place to stay or a place to go, people click on ads. People click on ads even more often than they click on organic results. And I love what you said: even the generative results, if it's something product-related, are pulled from a shopping feed, pulled from a website. And so having the right structure, the right SEO, the right feed optimization, all of those details are really, really important. And if you have that, then you're likely not going to get left in the dust by the generative results.

Fred:

Exactly. And I think this whole track of having more authority, more influence, is really going to continue to matter a lot. So, for what it's worth, if it's something that I've written, because I've written a lot about this topic and been linked back to, Google's going to say, well, something that he produces is probably going to be better than something from a no-name author that is probably just generative AI. So build your brand, build authority. It's the same things we've been doing before. And then we also have to think about multimodality, right? We're seeing fewer and fewer clicks from text-based search results. We do more video. I mean, we're doing this podcast, right? It's because people like listening to people, they like seeing people. That's how people get a lot of content these days, so do more of that.

Brett:

But it also sort of goes back to one of Google's original theses: what if great ads are just answers to questions? And so then it's a matter of, okay, how do I answer the question in a really great way, whether that's through text or video or through my feed or through the page or whatever? And yeah, it comes down to just building a great brand, being great at merchandising, and creating a great experience. And if you do that, the AI is going to help you, not hurt you, in the long run. Fred, this has been amazing, and we could keep going; we've just barely scratched the surface, and a lot of people's heads are spinning. Mine was spinning at several points in this conversation. But if people want to dig in, read your books, read your blog, check out Optmyzr, how can they best do that?

Fred:

Yeah, all of those ways. And you can connect with me on LinkedIn: Frederick Vallaeys. And at optmyzr.com, go take a look at our blog. We produce PPC Town Hall; every month we do a video episode where we talk to interesting people, and I think you've been on it, so we have great conversations there. But yeah, thanks everyone for listening. And if I made your head spin, I'm sorry. I hope I at least gave you some nuggets that are worth digging deeper into.

Brett:

In a good way, in a really good, inspiring way. Yeah, you're generating ideas, man. People are going to be able to listen to this and put it into practice. I also feel smarter just by listening to you, so that's always good. So, awesome. Fred Vallaeys, ladies and gentlemen. I'll link to the books, I'll link to everything in the show notes, so check that out. Check out Optmyzr. And also, can you spell Optmyzr for us, Fred? Because that's an area where people get tripped up sometimes.

Fred:

Yeah, O-P-T-M-Y-Z-R.

Brett:

Which just turned 10 years old, by the way. So congrats on that. It's an awesome piece of software, top rated. Check it out if you need some help with your PPC optimization. And with that, until next time, thank you for listening.
