That’s the question at the heart of “Life Science Keyword Research: Is ChatGPT Smarter than a PhD Scientist?”, the debut episode of The Supreme Pod: Life Science & Healthcare Marketing. Our host, Supreme Group’s CMO Eric Southwell, pits ChatGPT against Account Director Nikki Koudis, PhD, in planning a life science Google Ads campaign. ChatGPT and Nikki go head to head on keyword research, ad copy, and account setup using a real product and simulated budgets.
To kick the episode off, Nikki selected a product she knows well from her doctoral research: a Cas9 nuclease from Sigma-Aldrich. Cas9 is an enzyme used in CRISPR gene editing to cut DNA. It’s a powerful, high-demand product in genomics labs and one that comes with plenty of technical nuance.
Nikki’s chosen product, a Cas9 protein from Sigma-Aldrich.
So how did ChatGPT’s Google Ads expertise stack up against a PhD scientist with years of experience in life science and marketing?
Let’s find out.
Round One: Keyword Research
The first task was to suggest keywords for promoting the Sigma-Aldrich Cas9 nuclease using Google Search Ads. ChatGPT generated a structured list of terms, grouped into primary (high intent), secondary (application-based), and informational categories. At first glance, the output looked comprehensive and organized. But as Nikki quickly pointed out, things fell apart on closer inspection.
Nikki’s first query to ChatGPT: help her come up with some keywords for the Sigma-Aldrich Cas9.
Firstly, many of the keywords selected by ChatGPT were too long-tailed to drive real traffic. For example, the phrase “Cas9 protein CRISPR gene editing” might sound descriptive, but it is generally longer than what a scientist would actually type into Google. These kinds of hyper-specific queries often register as “low search volume,” meaning they’re deprioritized by Google and the platform won’t show ads for them.
At the same time, the more general terms suggested by ChatGPT (e.g. “CRISPR Cas9”) were too broad to be useful. While relevant, these keywords attract information seekers rather than buyers. Bidding on them would likely waste budget on unqualified traffic, such as students or curious readers, rather than actual customers looking to purchase reagents.
According to Nikki and Eric, it’s important to strike the right balance and target the “Goldilocks zone” of keywords: terms that are specific enough to show commercial intent, but broad enough to capture measurable search volume. Think “Cas9 enzyme” or “order Cas9,” not academic phrases or general informational terms.
As Eric put it, “It's the Goldilocks zone in life science Google ads, where you want it to be broad enough where you're getting searches and your ads are showing up, but not too broad where you're wasting money on an undergraduate or just some person at their house.”
Another issue with ChatGPT’s list was intent mismatch. Many of the secondary keywords it suggested, such as “Cas9 protein delivery methods” or “Cas9 protein transfection,” would likely be used by researchers looking for protocols, not products. These aren’t inherently bad keywords, but in a paid search campaign, they don’t indicate purchase behavior and could be a waste of budget.
ChatGPT also lumped branded terms in with unbranded ones, which Nikki identified as another red flag. Branded searches like “Sigma Cas9” deserve their own campaign with separate budget allocation and bidding strategy. Keeping these terms separate ensures clearer performance insights, tighter budget control, and messaging that matches the higher intent behind branded queries.
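The branded/unbranded split Nikki describes can be enforced mechanically before keywords ever reach a campaign. A minimal sketch (the brand tokens and sample keywords here are illustrative, not taken from the episode):

```python
# Separate branded from unbranded keywords so each group can live in its
# own campaign, with its own budget allocation and bidding strategy.
BRAND_TOKENS = {"sigma", "sigma-aldrich", "aldrich"}  # assumed brand markers

def split_branded(keywords):
    """Return (branded, unbranded) keyword lists."""
    branded, unbranded = [], []
    for kw in keywords:
        words = kw.lower().split()
        if any(tok in words for tok in BRAND_TOKENS):
            branded.append(kw)
        else:
            unbranded.append(kw)
    return branded, unbranded

keywords = ["Sigma Cas9", "Cas9 enzyme", "order Cas9", "sigma-aldrich cas9 nuclease"]
branded, unbranded = split_branded(keywords)
```

In practice this kind of check is a pre-flight filter: anything landing in the branded bucket gets routed to the dedicated brand campaign rather than mixed in with generic product terms.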
To its credit, ChatGPT did include some solid tips, such as using phrase and exact match types, building out negative keyword lists, and even suggesting single keyword ad groups (SKAGs). These are all good ideas for life science marketing, a niche industry where precision matters.
Overall, Nikki and Eric both gave ChatGPT’s keyword research a score of three out of ten. Not because it was completely wrong, but because launching a campaign with ChatGPT’s list would have led to limited ad delivery, low ROI, and wasted spend on informational traffic.
The AI tool applied generic digital marketing logic to a space that demands scientific intuition and contextual awareness. Without access to search volume data and without understanding the behaviors of scientific users, ChatGPT’s keyword research just didn’t work very well.
Round Two: Ad Copy
Next up was writing responsive search ads. ChatGPT generated headlines like “Buy Cas9 Protein Online” and “Trusted by Researchers,” along with matching descriptions such as “High-quality recombinant Cas9 protein for genome editing. Fast shipping available.”
Nikki was quick to point out a key issue. The copy wasn’t inaccurate or inappropriate; it just wasn’t compelling. The headlines were vague and repetitive. The descriptions sounded like they were pulled from a generic template.
Unfortunately, nothing in the ads highlighted what made this particular Cas9 product different or why a researcher should trust the brand behind it. This matters a lot in the life sciences. Researchers are skeptical by nature and are looking for proof, not fluff. A headline like “Trusted by Researchers” doesn’t carry any weight unless it's backed up by specifics: how many publications cite the product? Is it GMP-grade?
In this light, ChatGPT’s copy was missing credibility. Strategically using the Sigma-Aldrich brand name in phrases outlining how long they’ve been in business or how many product units they’ve sold would have immediately strengthened the ads. Instead, the copy leaned on vague terms like “high-quality” and “reliable,” without any evidence to back them up.
Making things worse, there was also no consideration for actual ad formatting. Google Ads caps responsive search ad headlines at 30 characters (and descriptions at 90), and some of the suggested headlines exceeded those limits.
Still, there were a few positives to highlight. Nikki noted that “High Purity Cas9 Protein” and “In Stock and Ready to Ship” were good headlines that she could likely use. These phrases are specific, relevant, and signal value to researchers who care about timelines and consistency.
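Because Google enforces hard character limits on responsive search ads (30 characters per headline, 90 per description), a quick length check catches oversized copy before upload. A minimal sketch using the headlines quoted in the episode:

```python
HEADLINE_LIMIT = 30  # Google Ads responsive search ad headline cap

headlines = [
    "High Purity Cas9 Protein",
    "In Stock and Ready to Ship",
    "Buy Cas9 Protein Online",
]

# Flag any headline that would be rejected by the Google Ads editor.
for h in headlines:
    status = "ok" if len(h) <= HEADLINE_LIMIT else "too long"
    print(f"{len(h):>2} chars  {status}  {h}")
```

All three of these happen to fit within the cap; the value of the check is catching the ones that don't before they bounce at upload time.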
Ultimately, Nikki gave the ad copy a three out of ten. Eric was slightly more generous, scoring it a five, not because the copy was strong, but because it was usable in a pinch. As he put it, “If you uploaded this ad copy, it wouldn't really be a catastrophe. It's just like, meh.”
In life science marketing, good ad copy reflects a deep understanding of the product, the buyer, and the competitive context. While ChatGPT can offer a quick starting point, it still takes a human marketer with scientific fluency to make the message work.
Round Three: Account Structure
For the next battle, Nikki asked ChatGPT how it would organize a Google Ads account for Cas9 with a monthly budget of $10,000. The response was logical on the surface:
Segment campaigns by intent;
Allocate spend across branded, high-intent, competitor, and informational campaigns;
Use a mix of match types.
This isn’t a bad framework. Both Nikki and Eric praised the inclusion of a dedicated brand campaign in particular. Bidding on branded terms like “Sigma Cas9” is often the lowest-cost, highest-ROI activity for this type of campaign, and assigning it a $500 budget made sense.
ChatGPT’s recommendations for structuring a Google Ads campaign for the Sigma-Aldrich Cas9 nuclease.
But as they moved down the list, cracks started to show.
Firstly, the high-intent product campaign received $3,000, which was reasonable enough, but the keywords chosen repeated the same issues from round one: they were too long-tailed or simply not phrases a real scientist would type into Google. Without better keywords, that budget wouldn’t translate into high-quality conversions or a strong return on investment.
Even worse were the next two categories: a $1,500 competitor campaign and a $1,500 educational campaign. Both Nikki and Eric flagged these as problematic.
Competitor campaigns can work, but they’re typically reserved for more advanced strategies or brands looking to scale. That money is generally better spent on product-focused keywords with clearer purchase intent. Additionally, in the case of Cas9, users often stick to the exact supplier they’ve validated in past experiments, so competitor campaigns may offer limited value.
The educational campaign was a guaranteed waste. Phrases like “What is Cas9 protein?” or “How does Cas9 protein work?” may get high search volume, but they rarely lead to conversions. As Nikki put it, “You’ll just end up spending budget and not getting any purchases.”
ChatGPT did recommend a $1,000 remarketing campaign, an idea Nikki found useful, though at half the proposed budget: for a $10,000 account, $500 is usually enough remarketing spend to keep recent visitors engaged.
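Tallying the figures mentioned in the episode shows why the allocation drew criticism: the two campaigns least likely to convert (competitor and educational) account for a full 40% of the itemized spend. A quick sketch (the line items are only those stated on the show; how the remainder of the $10,000 was allocated wasn't broken out):

```python
MONTHLY_BUDGET = 10_000

# ChatGPT's proposed line items, as discussed in the episode
proposed = {
    "branded": 500,
    "high_intent_product": 3_000,
    "competitor": 1_500,
    "educational": 1_500,
    "remarketing": 1_000,
}

itemized = sum(proposed.values())           # total of the stated line items
unspecified = MONTHLY_BUDGET - itemized     # spend not broken out on the show
low_intent = proposed["competitor"] + proposed["educational"]
low_intent_share = low_intent / itemized    # share of itemized spend at low intent
```

Reallocating most of that low-intent share toward product-focused keywords, and trimming remarketing to around $500, is essentially the correction Nikki proposed.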
In total, both Eric and Nikki gave ChatGPT’s account setup strategy a four out of ten. The correct ideas were there (e.g. brand segmentation, match types) but the prioritization and budget allocation reflected a lack of strategic nuance.
What does this tell us? AI can recommend a campaign structure that looks smart, but ChatGPT doesn’t always understand where the money should go. Without someone to evaluate keyword intent, segment mindfully, and tie each dollar to a likely outcome, your campaign won’t deliver the results you’re looking for.
Final Verdict: AI Is a Capable Assistant, Not a Replacement
After walking through all three categories (keyword research, ad copywriting, and account setup), Nikki and Eric delivered their overall verdict: ChatGPT scored an average of three out of ten.
Ultimately, ChatGPT is not a plug-and-play replacement for a Google Ads strategist. Without scientific context and marketing judgment, ChatGPT lacks the precision needed to succeed in life science advertising.
That precision, as Nikki explained, comes from a mix of intuition and experience; a real PhD-level life science marketer knows which keywords signal purchase intent, how scientists actually search for products, and when to lean into brand credibility instead of generic claims. ChatGPT also doesn’t (currently) have access to real-time search volume or performance data, which makes it nearly impossible for it to prioritize keywords or structure campaigns to deliver a strong ROI.
Where ChatGPT does shine is in brainstorming and onboarding. For marketers in highly technical spaces like life sciences, ChatGPT can be an excellent starting point for ideation, helping generate first drafts of copy variants, exploring campaign structures, or creating onboarding materials. It can even surface fresh angles on familiar topics, which is particularly helpful when you're up against a tight deadline or fighting a mental block.
In fact, Nikki noted that she regularly uses ChatGPT to get up to speed on unfamiliar products, especially when working with a new client. She finds it helpful for quick summaries, market context, or basic product descriptions. It’s also useful for generating early drafts of content, which she then reviews and humanizes before publishing.
Most of all, it’s a thinking partner. “I sort of treat ChatGPT like an intelligent colleague,” Nikki said. “It won’t get sick of listening to me, and it’s always available.”
But most of ChatGPT’s value lies upstream in the ideation process. When it comes to deciding how to proceed (i.e. actually selecting keywords, refining copy, or allocating budget), AI simply doesn’t have the depth. It can’t evaluate search volume. It doesn’t understand the nuance behind user intent. And it lacks the contextual awareness that’s required to market to scientists and researchers, who expect specificity, clarity, and proof.
That being said, this technology isn’t going away; it’s becoming more and more integrated across all of the tools that marketers use on a daily basis. Nikki and Eric even discussed how ads could one day appear in platforms like ChatGPT itself, through integrations with Bing. That’s why life science marketers need to evolve alongside AI, rather than trying to outsource their thinking to it.
The future isn’t AI versus marketers. It’s AI plus marketers (who know how to guide it). Marketers who combine AI with their own scientific fluency and campaign expertise will have a clear advantage over those who don’t. They’ll move faster, onboard quicker, and test ideas more efficiently, without losing the depth or rigor that makes their marketing effective in the first place.
Want to hear the full debate?
Then tune in to the podcast episode to hear Nikki and Eric walk through the challenge step by step, with real examples, honest reactions, and practical takeaways.
Listen to “Life Science Keyword Research: Is ChatGPT Smarter than a PhD Scientist?” now on Spotify, Apple, and YouTube to see how your own workflows compare.