How to Optimize Your Content for LLMs in 2025

SEO
September 8, 2025
13 mins

As technology develops and the demands of online audiences change, content is evolving as well. In the past, we competed over keyword density; today, content succeeds by being clear and helpful to AI systems such as Google AI, Perplexity, and Claude.

Promodo experts explained how to optimize content for LLMs, why AI systems evaluate it differently than traditional search, and how to move from outdated methods to creating content that will be indexed and cited by artificial intelligence.

How Content is Evaluated in Traditional Search

From the very beginning, the core principle was set: search results should correspond to the user’s query. The logic of selecting an answer (a website page in search) was quite primitive, which made it possible to manipulate the rankings. At that time, it was enough to add more keywords to the text or even spam the keywords field — and just like that, you were number one in the results. This was fixed rather quickly.

Vector models used by Google include:

- GloVe (Global Vectors for Word Representation) – a word embedding model developed at Stanford University. It creates vector representations of words that capture semantic relationships, based on word co-occurrence statistics collected in a matrix.
- Word2vec – a natural language processing method. The word2vec algorithm uses a neural network model to learn word associations from a large text corpus.
- and others.


Google has built a complex system of machine learning and numerous algorithms that function and evaluate content independently, while constantly influencing one another.

Timeline of Google Algorithm Evolution

2011: Fight against “content farms” (Panda Update) – the first blow, when simply adding keywords was no longer enough.

Google Panda is part of Google’s search algorithm designed to downgrade websites with low-quality, duplicate, or spammy content. The main goal of the update was to filter out sites that didn’t add value to users and reduce their visibility in search results.


2015: Introduction of RankBrain.
A machine learning system that helps better understand user queries and page context. In fact, this can be considered the first step toward the future of artificial intelligence.

2019: BERT Update. Google introduced BERT (Bidirectional Encoder Representations from Transformers), a model that allows for a deeper understanding of the context of words in queries. The key difference was that simply adding keywords was no longer enough — one now had to “predict” the user’s final intent behind the search query.

2023: Helpful Content Update. From this point on, the era of detailed evaluation of overall content quality begins, rather than just counting keyword usage.

This shaped the following algorithm for analyzing and identifying “ideal content” for the search engine:

  1. Content length: pages with 1,000–2,000 words have a better chance of ranking higher, as they usually cover the topic more comprehensively.

  2. Use of keywords: search engines look for matches between the query and the page content.

  3. For keyword integration to appeal to the search engine, additional indicators are considered:

  • TF (Term Frequency) — the frequency of a specific term on the page. A higher TF may indicate that the term is important for the context, but it should not be overused.

  • DF (Document Frequency) — the number of documents in the entire index that contain the term. A high DF means the term is common and likely generic rather than specific.

  • IDF (Inverse Document Frequency) — the inverse of Document Frequency. It boosts rare terms, which is important for determining how significant a word is for a particular context or page.

  4. Use of LSI phrases (Latent Semantic Indexing) made text more relevant and authoritative compared to competitors. LSI is a method used by search engines, including Google, to understand the context and meaning of text, not just count keywords.

  5. Using keyword phrases in meta tags gave the search engine a clearer indication of the main purpose for which the page was created.

  6. Working with text structure. Primarily, this means using H1–H6 headings, which not only highlight subtopics but also indicate the importance of specific intents on the page and correlate with relevance.
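The interplay of term frequency, document frequency, and inverse document frequency can be sketched in a few lines of Python. This is a toy illustration with an invented three-document corpus, not how any production search engine computes relevance:

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Score a term: frequent in this document (TF), rare across the corpus (IDF)."""
    tf = Counter(doc)[term] / len(doc)                  # Term Frequency
    df = sum(1 for d in corpus if term in d)            # Document Frequency
    idf = math.log(len(corpus) / df) if df else 0.0     # Inverse Document Frequency
    return tf * idf

corpus = [
    "seo content optimization guide".split(),
    "content marketing tips".split(),
    "python tutorial for beginners".split(),
]

# "optimization" appears in only one document, so it scores higher there
# than the generic "content", which appears in two documents.
doc = corpus[0]
print(tf_idf("optimization", doc, corpus) > tf_idf("content", doc, corpus))
```

The takeaway for writers: a page is rewarded for terms that are prominent on the page but uncommon across the index, which is exactly why stuffing generic keywords stopped working.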


How AI Bots Analyze Content Differently

The GPT-3 system was introduced in 2020, and within just two years this technology became available to the general public.

We no longer “optimize” — we “engineer.”
— Mike King (iPullRank), SMX Advanced 2025 Conference


Today, LLMs are offered not only by major players but are also available in custom-built formats for individual companies, businesses, or even users. You can generate text, images, videos, or simply maintain communication in chat.


Different models have different operating logics and use various sources of information for training. Let’s look at some examples to help you better understand how AI works from the inside.

Being the intelligence for AI means being the source of its answers.
— Will Scott, SMX Advanced 2025 Conference

AI Overviews

Google confirms the use of the “query fan-out” technique in AI Overviews: the system launches multiple parallel queries to construct a single response. This means that for a single query, your page has a chance to be included even if it is not exactly relevant to the original query — the key is that it answers the sub-queries. This hypothesis was confirmed by the DEJAN experiment.

“Query fan-out” is a technique in AI systems, particularly in search systems, that involves breaking down the user’s initial query into several related sub-queries.


If you want to understand exactly how it works, it is recommended to review Google’s patent.
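To make the fan-out idea concrete, here is a minimal Python sketch. The expansion rules, the mini "index", and all URLs are invented; real systems generate sub-queries with an LLM and retrieve from a full search index:

```python
# Toy illustration of "query fan-out": one user query is expanded into
# several sub-queries, each retrieved independently, and the results
# are merged into one candidate pool.

def expand(query):
    """Stand-in for the LLM step that generates related sub-queries."""
    return [query, f"best {query}", f"{query} comparison", f"{query} reviews"]

def retrieve(sub_query, index):
    """Stand-in for a search call: pages whose text contains every term."""
    terms = sub_query.lower().split()
    return [url for url, text in index.items() if all(t in text for t in terms)]

index = {
    "site-a.com/guide": "best running shoes guide",
    "site-b.com/compare": "running shoes comparison table",
    "site-c.com/blog": "marathon training plan",
}

candidates = set()
for sq in expand("running shoes"):
    candidates.update(retrieve(sq, index))

# site-b.com/compare is pulled in via the "comparison" sub-query even though
# it is not the strongest match for the original query — the fan-out effect.
print(sorted(candidates))
```

This is why a page can surface in an AI Overview without being the best answer to the literal query: it only needs to answer one of the generated sub-queries well.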

Perplexity

A recent study by Metehan Yesilyurt identified 59 ranking patterns used in Perplexity, along with cryptographic schemes for content evaluation.

However, it is worth noting that some aspects of this system may remain closed or insufficiently verified due to limited access to the details of its internal workings.

Perplexity uses a complex three-level (L3) re-ranking system for object retrieval, which allows it to fundamentally alter search results. The system includes security mechanisms that can completely discard result sets if they do not meet quality thresholds, ensuring that users only see highly reliable matches.

Let’s take a closer look at the parameters that have been identified and how they affect the content evaluated by the AI-powered search system.

| Parameter | Function | Impact on Content |
| --- | --- | --- |
| l3_reranker_enabled | Enables/disables the advanced re-ranking system | When enabled, adds an additional quality evaluation layer on top of standard ranking |
| l3_xgb_model | Specifies the version of the XGBoost model used for re-ranking (likely) | Different models may prioritize different content features and quality signals |
| l3_reranker_drop_threshold | Sets the quality threshold for keeping/discarding results | Content below this threshold is completely removed from the results |
| l3_reranker_drop_all_docs_if_count_less_equal | Minimum threshold for viable results | If too few results pass the quality check, the entire result set is discarded |


In conclusion, the L3 re-ranking system does more than re-order generated results: it can discard an entire result set outright if the results fail quality verification.

Successfully displaying your content requires not only keyword optimization but also topical authority and quality signals that satisfy machine learning evaluation.
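Based on the parameter names above, the drop logic plausibly works like the following sketch. The threshold values and scores are invented; only the two-stage behavior (drop low-scoring results, then discard the whole set if too few survive) is taken from the leaked parameters:

```python
# Speculative sketch of the drop logic implied by l3_reranker_drop_threshold
# and l3_reranker_drop_all_docs_if_count_less_equal. The real model and
# values are unknown; the numbers here are invented for illustration.

DROP_THRESHOLD = 0.4      # stand-in for l3_reranker_drop_threshold
MIN_VIABLE_COUNT = 2      # stand-in for l3_reranker_drop_all_docs_if_count_less_equal

def l3_rerank(results):
    """results: list of (url, quality_score) pairs."""
    kept = [(url, s) for url, s in results if s >= DROP_THRESHOLD]
    if len(kept) <= MIN_VIABLE_COUNT:
        return []                       # entire result set rejected
    return sorted(kept, key=lambda r: r[1], reverse=True)

scored = [("a.com", 0.9), ("b.com", 0.7), ("c.com", 0.5), ("d.com", 0.2)]
print(l3_rerank(scored))       # d.com is dropped; three survivors, set kept
print(l3_rerank(scored[2:]))   # only one survivor, so the whole set is discarded
```

The practical implication: mediocre content does not just rank lower in such a system, it can be excluded entirely.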

The researcher also found that the Perplexity ranking system includes manually configured authoritative domains.

Key authoritative domains by category (not a complete list):

  1. eCommerce and shopping:

amazon.com, ebay.com, walmart.com, bestbuy.com
etsy.com, target.com, costco.com, aliexpress.com

  2. Productivity and professional tools:

github.com, notion.so, slack.com, figma.com
jira.com, asana.com, confluence.com, airtable.com

  3. Communication platforms:

whatsapp.com, telegram.org, discord.com
messenger.com, signal.org, microsoftteams.com

  4. Social and professional networks:

linkedin.com, twitter.com, reddit.com
facebook.com, instagram.com, pinterest.com

  5. Educational resources:

coursera.org, udemy.com, edx.org
khanacademy.org, skillshare.com

  6. Travel and booking:

booking.com, airbnb.com, expedia.com
kayak.com, skyscanner.net

A strong connection has also been found between Perplexity and YouTube: when YouTube videos use titles that exactly match trending Perplexity queries, they gain significant ranking advantages on both platforms.

To form the ranking of content that will be included in results, a complex system of categorizing user intent is used:

[Image: user intent funnel]


Content that matches these pre-programmed suggestion categories gains better visibility since it aligns with predefined valuable user intents.

Thus, the content goes through the following verification:

[Image: content visibility verification stages]


It is important to note that these content verification stages involve manual configurations and predefined optimization patterns.

Here is a summary table of the ranking factors:

| Factor Category | Key Parameters | Impact on Ranking | Optimization Strategy |
| --- | --- | --- | --- |
| New Publication | new_post_impression_threshold, new_post_published_time_threshold_minutes, new_post_ctr | Critical for initial visibility | Launch with maximum distribution, monitor early CTR |
| Topic Classification | subscribed_topic_multiplier, top_topic_multiplier, default_topic_multiplier, restricted_topics | Exponential differences in visibility | Focus on AI, tech, and science topics; avoid entertainment/sports |
| Time Decay | time_decay_rate, item_time_range_hours | Rapid decline in visibility | Publish frequently, update existing content |
| Semantic Relevance | embedding_similarity_threshold, text_embedding_v1 | Essential gateway for ranking | Create semantically rich, comprehensive content |
| User Engagement | discover_engagement_7d, historic_engagement_v1, discover_click_7d_batch_embedding | Long-term ranking boost | Optimize for clicks, dwell time, and repeat visits |
| Memory Networks | boost_page_with_memory, memory_limit, related_pages_limit | Rewards interconnected content | Build topical clusters, cross-link to previous work |
| Feed Distribution | persistent_feed_limit, feed_retrieval_limit_topic_match | Controls content reach | Understand feed mechanics, optimize publishing time |
| Negative Signals | dislike_filter_limit, dislike_embedding_filter_threshold, discover_no_click_7d_batch_embedding | Can significantly limit visibility | Monitor feedback, maintain quality |
| Content Diversity | diversity_hashtag_similarity_threshold, hashtag_match_threshold | Prevents gaming/spam | Vary hashtags, maintain thematic breadth |
| Domain Restrictions | blender_web_link_domain_limit, blender_web_link_percentage_threshold | Limits the dominance of a single source | Diversify content sources, limit outbound links |
| Technical Systems | enable_ranking_model, enable_union_retrieval, calculate_matching_scores | Core ranking infrastructure | Ensure compliance with technical requirements |

A detailed study of the factors and their impact can significantly help you improve your ranking in the Perplexity system.

Claude

At the end of June 2025, a leak of internal documentation from the Claude AI system was announced. Well-known Western experts, Hans Kronenberg and Aleyda Solis, analyzed the information and highlighted key takeaways for all of us.

Key Findings:

  • Claude uses 4 search modes:
    never_search,
    do_not_search_but_offer,
    single_search,
    research.

  • Claude applies the never_search mode when answering questions that contain commonly known, unchanging information.

  • The single_search mode is used to look up specific facts when forming answers to simple questions.

“LLMs don’t just search — they reason. You need to ‘own expertise’ to appear in reasoning models. Deep search = your deep expertise.”
— Crystal Carter, SMX Advanced 2025 Conference

  • For complex queries, Claude generates answers in research mode, using 2–20 calls to search tools.


[Image: search mode use in Claude]

What this information means for us as SEO specialists:

  • According to the researchers, neither authority nor brand plays a decisive role. What matters is having a clearly structured answer that an AI bot can easily analyze and break down into parts.

  • Create content that users search for outside AI-generated answers — such as tables, tools, and editorial analytics.

  • Shift from traditional SEO to optimization for citation: write in a way that makes your phrases easy to integrate into AI responses.

ChatGPT

Another researcher, Jérôme Salomon, actively pressed the ChatGPT support team to uncover details about how its search works. We now know that the process happens in the following stages:

  • First, ChatGPT transforms the user query into one or several requests to Bing.
  • Then, Bing returns a list of search results.
  • Next, the AI bot scans a selection of relevant sources.
  • Finally, ChatGPT generates a response, incorporating content from the most relevant citations.



[Image: ChatGPT search process]

The author of the study asked ChatGPT support how the system chooses URLs from the long list of search results — and they provided an answer:

“The decision about which pages to scan is primarily influenced by the relevance of the title, the content in the snippet, the freshness of the information, and the reliability of the domain.”
— ChatGPT Support


If you’d like to see for yourself how ChatGPT selects its sources when generating an answer, here’s a quick guide:

  1. Enter your question (prompt) into ChatGPT with Search enabled.
  2. Open DevTools (Right-click → Inspect or press F12).
  3. Go to the Network tab → refresh the page with Ctrl+R, for example.
  4. Look for a request with /c/{code} in the URL. This is the one that contains the JSON response.
  5. Click on it → open the Response tab.

What you will see in the JSON response:

  • “thoughts” — a description of the logic ChatGPT applied to expand the query. Very useful for understanding LLM reasoning.

  • “search_queries” — the exact queries that were sent to Bing. If you want to appear in the results, you need to rank for these phrases.

  • “search_result_groups” — the sources that were retrieved. For each result, you get:

    • URL address
    • Title
    • Snippet (usually based on the description)
    • Position in the ranking
    • Metadata (e.g., publication date)
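Once you have located the JSON response in DevTools, a short script can pull out the queries and sources. The sample payload below is invented and heavily simplified; the field names follow those listed above, but the real response contains many more fields and may nest them differently:

```python
import json

# Minimal parser for the fields described above ("search_queries" and
# "search_result_groups"). The payload here is a hypothetical, simplified
# stand-in for what you would copy out of the DevTools Response tab.

payload = json.loads("""
{
  "search_queries": ["best crm for small business"],
  "search_result_groups": [
    {"entries": [
      {"url": "https://example.com/crm", "title": "CRM Guide",
       "snippet": "A comparison of popular CRMs...", "rank": 1}
    ]}
  ]
}
""")

for q in payload["search_queries"]:
    print("Bing query:", q)

for group in payload["search_result_groups"]:
    for e in group["entries"]:
        print(e["rank"], e["title"], e["url"])
```

Collecting these fields across several prompts gives you a list of the exact Bing queries you need to rank for, which is far more actionable than guessing.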

Note: At the time of writing, it was only possible to access one parameter — “search_result_groups.” But here another expert comes into play — Mark Williams-Cook.



Recommendations for Optimizing Website Content for AI Search

The first thing to note: in traditional SEO, we optimized an entire website page, but LLMs are not interested in the whole page or all of its content — only in specific parts of the content, known as chunks.

Chunks are small, self-contained pieces of content that come together to form a larger article or page.

To ensure your content appears in AI responses, it’s important to consider this new structure of interaction with texts optimized for LLMs. This means you need to create short, logically complete text fragments that can be easily integrated into AI-generated answers.



How does it work?

A chunk is a separate, logically complete text fragment of about ~100–300 tokens (75–225 words) that an LLM (ChatGPT, Claude, Gemini) can extract, analyze, and use when generating its response.

These fragments aren’t “assembled manually” — they are processed automatically. If you want to appear in an AI answer, you need to create short, logically complete ideas.

Even with massive context windows (GPT-4 Turbo — 128K tokens, Gemini 1.5 — up to 2M), these systems still work with individual semantic parts, not the entire text.
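A naive chunker illustrates the principle. Here word count stands in for tokens, and the 225-word cap mirrors the upper bound mentioned above; production retrieval pipelines use real tokenizers and overlapping windows, so treat this only as a sketch:

```python
# Naive chunker: split text on blank lines (paragraphs), then pack
# paragraphs into chunks of at most ~225 words. This illustrates the
# "one chunk = one self-contained idea" principle, not a production system.

MAX_WORDS = 225

def chunk(text):
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > MAX_WORDS:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Five ~82-word paragraphs pack into three chunks under the cap.
article = "\n\n".join(f"Paragraph {i}: " + "word " * 80 for i in range(5))
pieces = chunk(article)
print(len(pieces), [len(p.split()) for p in pieces])
```

Notice that the splitter never cuts a paragraph in half: a paragraph that would overflow the current chunk starts a new one, which is exactly why logically complete paragraphs survive chunking intact while rambling ones get stranded.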

How to Optimize On-Page Content for AI

Structure your text into chunks. To do this, first consider the following points:

  • Use entities. Unlike keywords, entities rely on contextual connections to help search algorithms understand the intent behind a query. That’s why entity optimization is far more important than traditional keyword optimization.

An entity is a person, object, place, or any other concept that search engines and LLMs can understand.

  • Divide all text into sections with structured H2–H3 headings. But unlike traditional SEO, your headings should describe a specific intent that is revealed in the paragraphs below. It’s no longer enough to just insert keywords into a heading — if the text doesn’t deliver on that intent, the heading will be considered irrelevant.

  • Keep the text in blocks. Follow the rule: “one block = one idea.” Today, AI bots cannot extract an entity if it’s diluted with “fluff” or if information unrelated to the definition (question) and the explanation (answer) is placed in between. If you start a topic in a paragraph or section, provide the answer immediately, without “lyrical” digressions.

Embedding algorithms require each chunk to have a clear, consistent meaning.

  • LLM systems understand clear, direct sentences best. Metaphors, jokes, and digressions reduce the quality of semantic analysis. Remember: LLMs only retrieve fragments directly related to the user’s query.

“We live in the era of open-book AI retrieval. What’s needed is high-quality, semantically structured content with clear topics and chunks.”
— Dawn Anderson, SMX Advanced 2025 Conference

  • Structuring content is crucial: use not only headings but also tables and separate callout blocks with definitions.

  • With the advent of LLMs, a new concept has emerged — the “Key Takeaways” block, which serves as a summary for the LLM. Add it after each section or as a brief preview at the beginning.

  • LLMs select chunks that match the natural phrasing of the query. Therefore, use blocks from traditional search tools: AlsoAsked, AnswerThePublic, People Also Ask.
  • Today, LLMs not only hallucinate but are often forgetful. Therefore, it’s important to spell out abbreviations and provide explanations for terms.

  • Repeat important ideas in your text using different formulations. Unlike traditional SEO, where the presence of exact keywords matters, LLMs don’t search for or count identical keywords — they work with vectors. Therefore, it’s worth mentioning a key idea several times throughout the article using different descriptions.
  • Across your entire site, it’s now important not just to interlink content but to build an internal knowledge graph. Each internal link should guide users and AI bots not to entirely new content, but to continuations and expansions of ideas introduced on the original page. Your internal linking should reflect a content map for the entire site.
  • Use FAQ sections for direct answers. Today, a FAQ block is not just a collection of “keyword queries” in question form, but actual questions that users are asking. Provide concise answers of 50–100 words, phrased the way users would ask them to an LLM. Research confirms the effectiveness of using FAQs.
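The point about vectors above can be illustrated with cosine similarity over toy embeddings. The three-dimensional vectors below are invented; real models embed text into hundreds of dimensions, but the effect is the same: a paraphrase with no shared keywords still scores close to the original.

```python
import math

# Toy illustration of vector matching: two differently worded sentences
# can sit close together in embedding space. The 3-d vectors are invented
# stand-ins for real learned embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

emb = {
    "how to speed up a website":       [0.9, 0.1, 0.2],
    "improving page load performance": [0.8, 0.2, 0.3],  # paraphrase, zero shared keywords
    "chocolate cake recipe":           [0.1, 0.9, 0.1],  # unrelated topic
}

query = emb["how to speed up a website"]
for text, vec in emb.items():
    print(f"{cosine(query, vec):.2f}  {text}")
```

This is why restating a key idea in varied wording helps: every reformulation is another vector pointing at the same region of meaning, giving the retriever more chances to match a user’s phrasing.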

Conclusion

Today, LLMs are still in the process of development and constantly evolving. All LLMs share many similarities, but they follow different paths of evolution. Some AIs rely more on semantic analysis, while others use manual configurations and take into account a range of additional factors that are independent of content quality. That’s why what works for one AI search system may not yield results in another.

Take your content to the next level!
Optimize for AI search with Promodo.
Written by
Oleksandr Kovalchuk

SEO Specialist at Promodo
