AI-Generated Titles: A Better Way to Use AI for Blog Title Ideas
Everyone’s tried it by now. You throw a keyword into ChatGPT and ask for blog title ideas. Out comes a stream of clever-sounding post titles… which are really just nicely written randomness.
The problem is that the AI knows nothing about the competing sites in the search engine results page (SERP) or the true keyword intent as Google sees it.

In January, even Google’s John Mueller weighed in on AI title generation, saying it may be worth trying “if you’re running out of ideas…or to try new things out.” But he also warned against “blindly following” ChatGPT for blog post title ideas. And for good reason…
ChatGPT Is Blind To The (SERP) Facts
If you plan to write a post for a given keyword or topic area, one of the first things you do is to look at the SERP. You want to see what pages are ranking, what type of articles were written and how they are crafting their titles. You check out the competition.
So why are you so quick to take these blog title ideas from ChatGPT? It doesn’t see what you see.
But if ChatGPT could know what page titles were ranking for the keyword, it could potentially provide better title ideas.
Spoiler Alert
It does.
I performed a simple experiment that shows what most people would already guess.
Giving ChatGPT more data about the competitors in the SERP yields better, more targeted title ideas. These titles were both subjectively and objectively better than the keyword-only output.
Get SERP Enriched AI-Generated Title Ideas, Meta Description Suggestions and Search Intent Analysis Now With The SERP Sonar Browser Extension
Testing The Title Theory
Below are the parameters of this brief experiment as well as the results.
The idea was to run multiple queries using two different types of prompts. For one, only the keyword would be provided to ChatGPT, along with some basic framing language (like telling it who it is and how to format the answers).
For the second prompt, it would be given the keyword and the same framing language. It would also get the page titles for the top n ranking results. It would be instructed to read all the text provided in order to inform its title-writing process.
I also added language to the prompt that prioritized the titles higher in the list, effectively instructing the AI to give higher weighting to higher ranking page titles.
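The author doesn’t share the exact prompts (see the note below), but the two prompt styles described above might be sketched like this. The framing text, formatting instructions, and function names here are all illustrative assumptions, not the real prompts:

```python
# Illustrative sketch only -- not the author's actual prompts.
# Two styles: keyword-only vs. SERP-enriched with rank weighting.

FRAMING = (
    "You are an SEO copywriter. Return exactly 10 blog post title ideas, "
    "one per line, with no numbering."
)

def keyword_only_prompt(keyword: str) -> str:
    # Prompt style 1: the model sees only the keyword plus framing.
    return f"{FRAMING}\n\nTarget keyword: {keyword}"

def serp_enriched_prompt(keyword: str, ranking_titles: list[str]) -> str:
    # Prompt style 2: the model also sees the ranking page titles,
    # listed best-first, with an instruction to weight higher-ranking
    # titles more heavily.
    titles = "\n".join(
        f"{rank}. {title}" for rank, title in enumerate(ranking_titles, start=1)
    )
    return (
        f"{FRAMING}\n\nTarget keyword: {keyword}\n\n"
        "These page titles currently rank for the keyword, best first. "
        "Read them all before writing, and give more weight to the ideas "
        "and phrasing of the higher-ranking titles:\n"
        f"{titles}"
    )
```

The only structural difference between the two tests is the ranked title list appended to the second prompt; everything else is held constant.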
The comparison would also be run on both text-davinci-003 and ChatGPT v3.5 to see if there are differences. Finally, some additional prompt and setting variations were included at the end, which also yielded some interesting results.
NOTE: In case you were wondering, I will NOT be sharing the exact prompts. I spent the most time in this exercise fine-tuning the multi-title prompt. However, it is now incorporated as a plugin feature, so you can still use it for free as part of the SERP Sonar AI Content Report. 🙂
Data Decisions
The first step was to consider how much information should be fed into the AI. Most tools on the market that do SERP analysis or data scraping tend to be limited to only the first 10 ranking results.
SERP Sonar makes it easy to export data for up to 100 SERP results. But sometimes more information isn’t better.
In the case of title ideas, the user will presumably just pick one or two. Of course, there’s no harm in requesting more (other than spending tokens). It would be easy to feed the AI 20, 30, 50 or more ranking page title results, and ask for just as many title ideas.
For this first test, I decided to start with just 10, in and out. Also, note that I used the original publisher page titles, as opposed to the SERP titles (which may be rewritten by Google).
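The publisher page title lives in the page’s own `<title>` tag, while the SERP title is whatever Google chooses to display (sometimes a rewrite). SERP Sonar exports the former directly, but as a minimal illustration of the distinction, here is a standard-library sketch of pulling the publisher title out of a page’s raw HTML (the fetching step is omitted):

```python
# Minimal illustration: extract the publisher's own <title> tag from
# raw HTML, as opposed to the (possibly rewritten) SERP title.
# Standard library only; downloading the HTML is left out.
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        # Accumulate text only while inside the <title> element.
        if self._in_title:
            self.title += data

def publisher_title(html: str) -> str:
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()
```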
The test keyword was: free serp analysis tool. Readers may recognize this from an in-progress case study; it is a keyword I am currently trying to rank for.
The 10 ranking competitor titles used in the test are listed below.

Results: More Information Is Better (Sometimes)
Ultimately, whether a title is good or not is largely subjective. The output is all below for you to see. You be the judge.
The search intent of the test keyword is a bit of a challenge (that was intentional). If you look at the ranking pages in the SERP it’s clear that Google isn’t 100% sure what to do with it.
I did ~10 runs for each of these 4 main test variations, with only one sample output result for each shown here (not including additional variations below). But all lists are intact and shown as they were delivered.
Test Results – Text-Davinci-003
One thing is clear from the “Test 1” list immediately below. When given a keyword only, the davinci-003 AI version was more likely to use the exact match keyword form, and it often was put at the end of the title.

However, giving the AI more contextual information definitely generates richer results, as seen in the “Test 2” list above. The title ideas have more ‘spark’ and variation. The absence of the exact match keyword was most pronounced in this Test 2 sample list, and none of the davinci-003 Test 2 samples used it every time (as Test 1 did).
I selected this sample list because it also was a good demonstration of competing title ‘inspiration’. Words like “rank/ranking”, “checker”, “fresh”, “instant”, “best” and number use (“11”) all came from the extra input data (see table above).
These AI generated titles were also more likely to include phrasing structure similar to the competitor titles in the SERP. This is how the AI can give you title ideas that incorporate words and elements that Google is actually ranking.
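One quick way to see this borrowing effect for yourself is to check which words in a generated title appear in the competitor titles but not in the target keyword itself. A rough sketch (the example titles below are illustrative, not from the test output):

```python
# Rough check of competitor-title 'inspiration': words in a generated
# title that come from the competitor titles rather than the keyword.
import re

def borrowed_words(generated: str, competitors: list[str], keyword: str) -> set[str]:
    # Lowercase word tokens only; punctuation is ignored.
    tokenize = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    competitor_vocab = set().union(*(tokenize(t) for t in competitors))
    # Keep words shared with competitors, minus the keyword's own words.
    return (tokenize(generated) & competitor_vocab) - tokenize(keyword)
```

Run over a full output list, this gives a crude but objective measure of how much each prompt style borrows from the SERP.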
Test Results – ChatGPT 3.5
When given only the keyword, and no extra context, ChatGPT v3.5 did generate subjectively better blog title ideas, as can be seen in the “Test 1” list below. It still can’t resist using the exact match keyword, but at least there’s more variety in word order.

Giving ChatGPT 3.5 the ranking title data arguably yielded the best results of all. As seen in the Test 2 list above these, again, include elements of the competitor titles but present as more nuanced and less wordy than the earlier output. They don’t all make sense, but that’s an entirely different (AI) problem.
Btw, my prompts make no mention of title length. Neither minimum nor maximum limits are placed on the AI.
Test Results – Prompt and Setting Refinement
In the initial tests above I did not make use of my usual prompt methods for coaxing more randomness and surprise (think words like “perplexity” and “burstiness”). I have a go-to phrase that I incorporate into many prompts, but I wanted to see clearly the impact of including the competitor title information.
In the final rounds, I did include these additional prompt instructions. And something surprising happened.
This was the first appearance of the ‘All-in-title’ keyword configuration (marked in blue below).

The second list above includes the additional prompting plus increased Temperature (from 0.7 to 1). Based on this limited test, if I were ever stuck using the Davinci-003 model and Playground settings, I would likely keep the Temperature at 0.7.
For those curious about the technical settings, all the other Davinci-003 tests above used a Temperature of 0.7 (fairly ‘creative’) and a Top_P of 1 (consider all possible guesses/options).
Unsurprisingly, the added prompting also led to improved AI title ideas with the v3.5 model as well. Below, we see good balance in how (or if) the target keyword appears, and competitor title element usage is pronounced but not overt.

Other Test Takeaways
Another minor variation I applied was in how I did subsequent prompt runs. I alternated both new (clean) chats/threads and using the “Regenerate Response” feature.
The regenerate button lets the AI see its previous attempts, which theoretically will inform its next attempts. For this kind of ideation I think regenerate might yield slightly better results. Try it yourself and see.
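If you wanted to emulate the regenerate-versus-fresh-chat distinction via the API, one plausible sketch (an assumption about how to reproduce the behavior, not a description of ChatGPT’s internals) is to keep or drop the previous attempt in the message history:

```python
# Sketch of the fresh-chat vs. regenerate distinction: regeneration can
# keep the prior attempt in context, so the model may vary its next try.
# This is an approximation of the UI behavior, not ChatGPT internals.

def fresh_chat(prompt: str) -> list[dict]:
    # A new thread: the model sees only the prompt.
    return [{"role": "user", "content": prompt}]

def regenerate(history: list[dict], prompt: str, last_answer: str) -> list[dict]:
    # Regeneration: the previous attempt stays visible in the history,
    # followed by the same request again.
    return history + [
        {"role": "assistant", "content": last_answer},
        {"role": "user", "content": prompt},
    ]
```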
As noted above, this was a challenging keyword with a mixed-intent SERP (as many are). This shows that Google is unclear on how best to serve searchers.
So, the exact match keyword will likely not impact rankings as effectively. The keyword-only titles read well (if a bit click-baity) but, for this SERP, they completely miss the mark.
By incorporating the ranking results, the title ideas were quite different and varied. Some of the resulting blog titles did not even contain the keyword terms. They did, however, include words from the competitor titles.
I think that’s good. If I ask an AI for ideas, then I want truly novel and fresh ideas. This method does that.
Intend To Add Intent
Another area we are focused on at SERP Sonar, with regard to AI, is keyword intent. Currently, as with title ideas, asking ChatGPT to confirm search intent based only on the keyword is pointless.
I have already done a similar test as above, feeding ChatGPT the keyword as well as competitor title and meta-description data. The results are compelling. Look for a separate post on that experiment soon.
In many cases, the output from these large language models is fairly vanilla. With meticulous prompt tweaking it’s possible to get perfectly acceptable (but painfully average) AI generated titles.
But, in SEO, the title of a post is one of the most important pieces of the puzzle. Your blog title ideas need to really pop off the screen for readers and also do the job with the search engines.
This method of feeding the AI competing blog titles along with the target keyword yields much better results. And soon AI generated title ideas will be a standard part of the SERP Sonar Report. Watch this space!
I am super excited about this process. The keyword-only titles fall flat, but the AI does a much better job when given more information and SERP context. What do you think?
Love this article. I don’t think I have seen somebody go so in-depth about Google titles. As a matter of fact, I’ve only seen SEOs say to include your main keyword phrase in your title and make it catchy… Well, sometimes you can’t include the keyword phrase and make it catchy or clickable without breaking up the phrase. Really enjoyed this.
Thanks for your comment, Conray! Yes, despite all the many factors that (probably) do contribute to the overall ranking results, the title tag remains a key part of the mix.
I actually agree that it’s best to include the keyword (exact or variation) if possible. But as you say, it’s not always practical. Using a keyword variation in the H1 can be a fallback. After all, Google seems to prefer the H1 sometimes!