{"id":372,"date":"2026-04-14T09:56:15","date_gmt":"2026-04-14T09:56:15","guid":{"rendered":"https:\/\/www.promptposition.com\/blog\/robot-txt-noindex\/"},"modified":"2026-04-14T09:56:27","modified_gmt":"2026-04-14T09:56:27","slug":"robot-txt-noindex","status":"publish","type":"post","link":"https:\/\/www.promptposition.com\/blog\/robot-txt-noindex\/","title":{"rendered":"Why Robot TXT Noindex Fails &#038; What to Use Instead"},"content":{"rendered":"<p>If your team still thinks <strong>robot txt noindex<\/strong> is a valid way to keep pages out of Google, you&#039;re working from outdated advice.<\/p>\n<p>That mistake shows up in real marketing workflows all the time. A team launches a campaign hub, adds <code>noindex<\/code> to <code>robots.txt<\/code>, assumes the job is done, and then wonders why the URL still appears in search. The problem isn&#039;t subtle. It&#039;s a mismatch between the instruction you&#039;re giving and the system you&#039;re trying to control.<\/p>\n<p>That matters even more now because visibility is no longer just about Google&#039;s index. A page can disappear from search results and still surface in AI-generated answers, training data, or retrieval systems. If you manage brand, content, or SEO, you need tighter control than old forum advice can give you.<\/p>\n<h2>The Enduring SEO Myth of Robot TXT Noindex<\/h2>\n<p>The myth survives because it used to be sort of true.<\/p>\n<p>Google supported <code>noindex<\/code> in <code>robots.txt<\/code> unofficially for years. That history left a long tail of blog posts, agency playbooks, and inherited technical setups that still treat it like a valid option. 
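<\/p>\n<p>For reference, the retired pattern that still circulates in old playbooks looked like this. It&#039;s shown here as history only, and the path is a placeholder:<\/p>\n<pre><code class=\"language-plaintext\">User-agent: *\nNoindex: \/campaign-hub\/\n<\/code><\/pre>\n<p>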
But that support ended years ago, and many teams never updated their mental model.<\/p>\n<p><figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.promptposition.com\/blog\/wp-content\/uploads\/2026\/04\/robot-txt-noindex-misconception-scaled.jpg\" alt=\"A concerned person looks at a computer screen showing the incorrect use of noindex in robots.txt.\" \/><\/figure><\/p>\n<p>A lot of SEO mistakes persist because they once worked, or seemed to work. That&#039;s why it&#039;s useful to revisit lists of <a href=\"https:\/\/foureyes.com\/21-seo-myths-debunked\/\" target=\"_blank\" rel=\"noopener\">other common SEO myths<\/a> that still shape decisions long after the underlying platforms changed.<\/p>\n<p>The practical issue is simple. <strong><code>robots.txt<\/code> is not where you control indexing anymore.<\/strong> If your team is still relying on robot txt noindex, you&#039;re using a retired method and expecting current systems to honor it.<\/p>\n<p>That creates three risks:<\/p>\n<ul>\n<li><strong>Search risk<\/strong> because pages you meant to hide can remain indexed.<\/li>\n<li><strong>Measurement risk<\/strong> because your reporting becomes unreliable. Teams think the rule is in place, so they stop investigating.<\/li>\n<li><strong>AI visibility risk<\/strong> because old SEO advice says nothing about how modern AI systems may ingest or reuse content.<\/li>\n<\/ul>\n<p>For teams trying to understand search and AI discoverability together, this is exactly the kind of technical misunderstanding that spills into strategy. That&#039;s part of why AI-focused marketers are now treating indexing controls as part of a broader visibility framework, not just a cleanup task. 
A useful starting point is this guide to <a href=\"https:\/\/www.promptposition.com\/blog\/ai-search-engine-optimization\/\">https:\/\/www.promptposition.com\/blog\/ai-search-engine-optimization\/<\/a>.<\/p>\n<blockquote>\n<p><strong>Practical rule:<\/strong> If the goal is &quot;don&#039;t show this in Google,&quot; don&#039;t start with <code>robots.txt<\/code>. Start with a method Google can actually read at the page or header level.<\/p>\n<\/blockquote>\n<h2>Understanding Crawling vs Indexing<\/h2>\n<p>Most confusion around robot txt noindex comes from treating <strong>crawling<\/strong> and <strong>indexing<\/strong> like they&#039;re the same thing. They aren&#039;t.<\/p>\n<p>A simple way to explain it to a marketing team is the library model. A crawler is the librarian walking through rooms and examining books. The index is the public catalog. One action is about access. The other is about inclusion.<\/p>\n<p><figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.promptposition.com\/blog\/wp-content\/uploads\/2026\/04\/robot-txt-noindex-crawling-indexing.jpg\" alt=\"An infographic explaining web crawling and indexing using a digital librarian analogy with descriptive text labels.\" \/><\/figure><\/p>\n<h3>What robots.txt actually does<\/h3>\n<p><code>robots.txt<\/code> tells crawlers where they shouldn&#039;t go.<\/p>\n<p>It can discourage crawling of folders, files, or URL patterns. That&#039;s useful when you want to reduce wasted crawling on low-value areas like internal search results, faceted combinations, or staging sections that should never be explored.<\/p>\n<p>But <code>robots.txt<\/code> doesn&#039;t function as a guaranteed &quot;keep this out of search results&quot; command. If a search engine knows a URL exists from links, references, or prior crawling, that URL can still be associated with the index.<\/p>\n<h3>What noindex actually does<\/h3>\n<p><code>noindex<\/code> is different. 
It&#039;s an indexing instruction.<\/p>\n<p>You place it where the crawler can read it: on the page itself or in the HTTP response headers. That way the crawler can access the resource, process the instruction, and decide not to keep it in the index.<\/p>\n<p>This is why the library analogy matters. Telling the librarian &quot;don&#039;t walk into Room B&quot; is not the same as handing the librarian a visible note that says &quot;don&#039;t catalog this book.&quot;<\/p>\n<h3>Why the old approach died<\/h3>\n<p>Google formally ended support for <code>noindex<\/code> in <code>robots.txt<\/code> on <strong>September 1, 2019<\/strong>, after announcing the shift in <strong>July 2019<\/strong>. Google had supported the behavior unofficially for more than a decade, with Matt Cutts documenting the practice in <strong>2008<\/strong>, but retired it after analysis showed misuse was widespread. Gary Illyes said <strong>&quot;the number of sites that were hurting themselves was very high&quot;<\/strong> (<a href=\"https:\/\/www.lumar.io\/blog\/best-practice\/robots-txt-noindex-the-best-kept-secret-in-seo\/\" target=\"_blank\" rel=\"noopener\">Lumar<\/a>).<\/p>\n<p>That quote explains why this topic isn&#039;t a tiny technical footnote. 
Teams were damaging their own visibility because they used a crawl control to solve an indexing problem.<\/p>\n<h3>The working mental model<\/h3>\n<p>Keep this split clear:<\/p>\n<ul>\n<li><strong>Use <code>robots.txt<\/code><\/strong> when you want to limit crawler access.<\/li>\n<li><strong>Use <code>noindex<\/code><\/strong> when you want a page or file removed from search indexes.<\/li>\n<li><strong>Don&#039;t swap them<\/strong> because they solve different problems.<\/li>\n<\/ul>\n<blockquote>\n<p>If the crawler can&#039;t see the page, it can&#039;t read the indexing instruction on that page.<\/p>\n<\/blockquote>\n<p>That one sentence eliminates a lot of bad implementations.<\/p>\n<h2>The Right Tools for Indexing Control<\/h2>\n<p>If you need a page out of the index, you have two supported options. That&#039;s it.<\/p>\n<p><strong>The meta robots tag<\/strong> and <strong>the X-Robots-Tag HTTP header<\/strong> are the two equivalent, Google-supported ways to block indexing. The meta tag belongs in the <code>&lt;head&gt;<\/code> of an HTML page. The header version is sent in the HTTP response, which makes it the right fit for non-HTML assets like PDFs or images (<a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTML\/Reference\/Elements\/meta\/name\/robots\" target=\"_blank\" rel=\"noopener\">MDN on robots meta directives<\/a>).<\/p>\n<h3>Choosing the correct method<\/h3>\n<p>The choice isn&#039;t philosophical. 
It&#039;s based on file type and implementation control.<\/p>\n\n<figure class=\"wp-block-table\"><table><tr>\n<th>Criterion<\/th>\n<th>Meta Robots Tag<\/th>\n<th>X-Robots-Tag HTTP Header<\/th>\n<\/tr>\n<tr>\n<td>Best fit<\/td>\n<td>HTML pages<\/td>\n<td>Non-HTML files and server-level control<\/td>\n<\/tr>\n<tr>\n<td>Where it lives<\/td>\n<td>Inside the <code>&lt;head&gt;<\/code> of the document<\/td>\n<td>In the HTTP response header<\/td>\n<\/tr>\n<tr>\n<td>Typical use cases<\/td>\n<td>Blog posts, category pages, thin landing pages<\/td>\n<td>PDFs, images, video files, generated documents<\/td>\n<\/tr>\n<tr>\n<td>Needs page access to read?<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<\/tr>\n<tr>\n<td>Good for bulk rules?<\/td>\n<td>Sometimes, if templated<\/td>\n<td>Often, if applied by server logic<\/td>\n<\/tr>\n<tr>\n<td>Example directive<\/td>\n<td><code>&lt;meta name=&quot;robots&quot; content=&quot;noindex&quot;&gt;<\/code><\/td>\n<td><code>X-Robots-Tag: noindex<\/code><\/td>\n<\/tr>\n<\/table><\/figure>\n<h3>Meta robots tag example<\/h3>\n<p>Use this on an HTML page you don&#039;t want indexed:<\/p>\n<pre><code class=\"language-html\">&lt;meta name=&quot;robots&quot; content=&quot;noindex&quot;&gt;\n<\/code><\/pre>\n<p>If you also want to stop link following from that page, use:<\/p>\n<pre><code class=\"language-html\">&lt;meta name=&quot;robots&quot; content=&quot;noindex, nofollow&quot;&gt;\n<\/code><\/pre>\n<p>Other directives can be stacked when needed. 
For example:<\/p>\n<pre><code class=\"language-html\">&lt;meta name=&quot;robots&quot; content=&quot;noindex, noarchive, nosnippet&quot;&gt;\n<\/code><\/pre>\n<p>That&#039;s useful when the page shouldn&#039;t appear in search and you also don&#039;t want a cached result or snippet treatment.<\/p>\n<h3>X-Robots-Tag example<\/h3>\n<p>Use this when the asset isn&#039;t an HTML page, or when your server rules are the cleanest place to manage indexing:<\/p>\n<pre><code class=\"language-http\">HTTP\/1.1 200 OK\nX-Robots-Tag: noindex\n<\/code><\/pre>\n<p>This is the better method for PDFs, media files, and documents generated outside your CMS template layer.<\/p>\n<h3>When teams usually choose wrong<\/h3>\n<p>The common implementation mistake isn&#039;t syntax. It&#039;s scope.<\/p>\n<p>Marketing teams often add a meta tag manually to one page and assume the problem is solved across a whole page set. Or engineering applies a header too broadly and accidentally suppresses indexation for files that should rank. The fix is to decide first whether you&#039;re controlling:<\/p>\n<ul>\n<li>a <strong>single page<\/strong><\/li>\n<li>a <strong>template group<\/strong><\/li>\n<li>a <strong>file class<\/strong><\/li>\n<li>or a <strong>pattern generated outside the CMS<\/strong><\/li>\n<\/ul>\n<blockquote>\n<p><strong>Use the page for page decisions. Use the server for file and pattern decisions.<\/strong><\/p>\n<\/blockquote>\n<p>If your work increasingly spans both search indexing and AI discoverability, the tooling conversation gets broader than Google Search Console. 
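<\/p>\n<p>Returning to the header option for a moment: at the server level, the header is usually applied by pattern rather than edited into individual responses. A minimal sketch, assuming an Apache setup with mod_headers enabled and using the PDF pattern purely as an example:<\/p>\n<pre><code class=\"language-plaintext\">&lt;FilesMatch &quot;\\.pdf$&quot;&gt;\n  Header set X-Robots-Tag &quot;noindex&quot;\n&lt;\/FilesMatch&gt;\n<\/code><\/pre>\n<p>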
Teams evaluating platforms that span both search and AI visibility often start with resources like <a href=\"https:\/\/www.promptposition.com\/blog\/ai-seo-software\/\">https:\/\/www.promptposition.com\/blog\/ai-seo-software\/<\/a> to understand what should be monitored.<\/p>\n<h2>Why Combining Disallow and Noindex Backfires<\/h2>\n<p>This is one of the most common technical SEO own goals.<\/p>\n<p>A team blocks a URL in <code>robots.txt<\/code> with <code>Disallow<\/code>, then adds a <code>noindex<\/code> tag to the page template. On paper, it looks extra safe. In practice, it&#039;s contradictory.<\/p>\n<h3>The logic problem<\/h3>\n<p>Go back to the librarian analogy.<\/p>\n<p>You&#039;re telling the librarian not to enter the room. Then you&#039;re taping a note inside the room that says &quot;don&#039;t catalog this book.&quot; The note exists, but the librarian never sees it.<\/p>\n<p>That&#039;s exactly what happens when a crawler is blocked from accessing a page that contains the <code>noindex<\/code> directive. The crawler skips the page, which means it can&#039;t read the instruction that would have removed the URL from the index.<\/p>\n<h3>What this looks like in search<\/h3>\n<p>When this happens, the URL can linger in search in an awkward state.<\/p>\n<p>You may see the address still known to Google, sometimes with limited presentation because Google has restricted information about the page. Teams often describe these as zombie listings. 
They&#039;re not fully useful in search, but they also haven&#039;t gone away.<\/p>\n<h3>What to do instead<\/h3>\n<p>Pick one objective first.<\/p>\n<p>If the goal is <strong>removal from the index<\/strong>, allow crawling long enough for the crawler to read <code>noindex<\/code>.<\/p>\n<p>If the goal is <strong>crawl restriction<\/strong>, use <code>robots.txt<\/code>, but don&#039;t expect that alone to act as a guaranteed deindexing mechanism.<\/p>\n<p>A cleaner way to frame this is:<\/p>\n<ul>\n<li><strong>Need the URL gone from search?<\/strong> Make it crawlable and apply <code>noindex<\/code>.<\/li>\n<li><strong>Need to reduce crawling of low-value sections?<\/strong> Use <code>robots.txt<\/code>.<\/li>\n<li><strong>Need both at different stages?<\/strong> Sequence the changes, don&#039;t stack contradictory instructions at the same time.<\/li>\n<\/ul>\n<blockquote>\n<p>A blocked page can&#039;t deliver a page-level instruction.<\/p>\n<\/blockquote>\n<p>That single implementation detail explains a large share of indexing tickets.<\/p>\n<h2>Beyond Google Controlling Visibility in AI Search<\/h2>\n<p>The old SEO playbook assumes Google is the whole battlefield. It isn&#039;t anymore.<\/p>\n<p>A page can be excluded from Google&#039;s index and still influence how your brand appears in AI systems. 
That&#039;s where the robot txt noindex conversation gets more interesting, because the usual search guidance doesn&#039;t fully answer the next question: <strong>will an LLM respect the same controls?<\/strong><\/p>\n<p><figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/www.promptposition.com\/blog\/wp-content\/uploads\/2026\/04\/robot-txt-noindex-ai-paradox-scaled.jpg\" alt=\"A conceptual illustration of a brain with the word NOINDEX overlaid, representing AI data indexing limitations.\" \/><\/figure><\/p>\n<h3>The AI indexing paradox<\/h3>\n<p>Current SEO documentation focuses on search engine indexing behavior, but it doesn&#039;t provide clear rules for whether LLMs respect <code>noindex<\/code> when collecting content for training datasets or retrieval workflows. That creates an <strong>AI Indexing Paradox<\/strong> where a page blocked from Google may still appear in ChatGPT-style environments. A <strong>2025 study suggested that 60 to 70 percent of websites have not updated their policies for AI crawlers<\/strong> (<a href=\"https:\/\/developers.google.com\/search\/docs\/crawling-indexing\/block-indexing\" target=\"_blank\" rel=\"noopener\">Google documentation context<\/a>).<\/p>\n<p>That gap matters for brand and communications teams.<\/p>\n<p>If you remove a page from Google because it&#039;s outdated, legally sensitive, off-message, or partner-only, you may assume it&#039;s no longer part of your discoverability footprint. That assumption may be wrong in AI environments.<\/p>\n<h3>Search visibility and AI access are not identical<\/h3>\n<p>Strategy gets more nuanced than standard SEO checklists.<\/p>\n<p>A <code>noindex<\/code> directive is designed to affect search indexing for engines that support it. It is not a universal &quot;erase this from all machine-readable systems&quot; instruction. 
That distinction is one reason more teams are studying adjacent frameworks like <a href=\"https:\/\/nanopim.com\/post\/what-is-generative-engine-optimization\" target=\"_blank\" rel=\"noopener\">Generative Engine Optimization (GEO)<\/a>, which focus on how brands surface in AI-generated answers rather than in blue-link search alone.<\/p>\n<p>For teams trying to set policy, a few practical principles help:<\/p>\n<ul>\n<li><strong>Separate goals clearly<\/strong>. Search suppression and AI suppression may require different controls.<\/li>\n<li><strong>Audit sensitive content classes<\/strong>. Pricing pages, executive bios, gated thought leadership, internal PDFs, and old campaign assets often need different treatment.<\/li>\n<li><strong>Document crawler policies intentionally<\/strong>. AI-specific crawler handling is no longer an edge concern.<\/li>\n<\/ul>\n<h3>A tactical nuance most guides skip<\/h3>\n<p>There is an under-discussed scenario where <strong>disallow without noindex<\/strong> can be strategically useful.<\/p>\n<p>If a page must remain visible in traditional search but you want to limit access by specific crawlers that honor <code>robots.txt<\/code>, a crawler-specific robots policy may be worth evaluating. That is very different from using <code>disallow<\/code> as a deindexing tool.<\/p>\n<p>It won&#039;t solve every AI visibility problem. It also won&#039;t guarantee behavior across all systems. 
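<\/p>\n<p>As a sketch of that crawler-specific idea, assuming GPTBot as the AI crawler you want to restrict (any crawler that honors <code>robots.txt<\/code> could be named instead):<\/p>\n<pre><code class=\"language-plaintext\">User-agent: GPTBot\nDisallow: \/\n\nUser-agent: *\nDisallow:\n<\/code><\/pre>\n<p>Here the content stays crawlable and indexable for search engines, while one specific crawler is asked to stay out.<\/p>\n<p>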
But it reflects a distinct split between search presence and machine access.<\/p>\n<p>A strong primer on this policy layer is <a href=\"https:\/\/www.promptposition.com\/blog\/llms-txt\/\">https:\/\/www.promptposition.com\/blog\/llms-txt\/<\/a>.<\/p>\n<p>One useful explainer on the broader shift is below.<\/p>\n<iframe width=\"100%\" style=\"aspect-ratio: 16 \/ 9\" src=\"https:\/\/www.youtube.com\/embed\/l1Tht0a4RqI\" frameborder=\"0\" allow=\"autoplay; encrypted-media\" allowfullscreen><\/iframe>\n\n<blockquote>\n<p><strong>Watch for this trap:<\/strong> teams often apply classic SEO rules to AI visibility problems and assume the outcome is the same. It often isn&#039;t.<\/p>\n<\/blockquote>\n<h2>Troubleshooting Common Indexing and Robots TXT Issues<\/h2>\n<p>When a page won&#039;t leave the index, don&#039;t guess. Check the implementation in the order Google encounters it.<\/p>\n<h3>The page still appears in Google<\/h3>\n<p>Start with the live URL, not your CMS settings.<\/p>\n<p>Use Google Search Console&#039;s URL Inspection tool and check the current crawled version. The most common problem is that the intended <code>noindex<\/code> was added in a draft, in a JavaScript layer Google didn&#039;t process as expected, or on a different URL variant than the one indexed.<\/p>\n<p>Then verify:<\/p>\n<ul>\n<li><strong>Canonical alignment<\/strong>. Make sure you&#039;re testing the exact URL Google is showing, including protocol, subdomain, trailing slash, and parameters where relevant.<\/li>\n<li><strong>Rendered source<\/strong>. Confirm the meta robots tag appears in the <code>&lt;head&gt;<\/code>, not just in a component preview.<\/li>\n<li><strong>Header output<\/strong>. 
For PDFs or files, inspect the actual response headers and confirm the <code>X-Robots-Tag<\/code> is present.<\/li>\n<\/ul>\n<h3>The tag exists, but nothing changes<\/h3>\n<p>A correct tag doesn&#039;t help if the page is blocked from being crawled.<\/p>\n<p>Check <code>robots.txt<\/code> and look for accidental <code>Disallow<\/code> rules that prevent access to the URL. Also check whether internal links or sitemaps are still reinforcing that URL as a normal site page.<\/p>\n<p>If the goal is removal, keep the URL accessible long enough for the crawler to process the <code>noindex<\/code>.<\/p>\n<h3>The wrong pages got noindexed<\/h3>\n<p>This usually comes from template logic, not search engine behavior.<\/p>\n<p>Review whether a global rule was added to a page type that includes pages you want indexed. This happens often on filtered collections, campaign templates, and headless implementations where one component controls many outputs.<\/p>\n<p>Use a simple audit checklist:<\/p>\n<ol>\n<li><strong>List affected URLs<\/strong> and identify the shared template or header logic.<\/li>\n<li><strong>Compare indexable and non-indexable examples<\/strong> from the same content type.<\/li>\n<li><strong>Inspect one live page manually<\/strong> before changing rules sitewide.<\/li>\n<li><strong>Resubmit key URLs for validation<\/strong> after the fix.<\/li>\n<\/ol>\n<p>For teams dealing with broader discoverability issues, including pages that don&#039;t surface when they should, this guide can help frame the diagnosis: <a href=\"https:\/\/www.promptposition.com\/blog\/why-doesnt-my-website-show-up-on-google\/\">https:\/\/www.promptposition.com\/blog\/why-doesnt-my-website-show-up-on-google\/<\/a><\/p>\n<h2>Frequently Asked Questions on Noindex Strategies<\/h2>\n<h3>How long does it take for Google to remove a page after adding noindex<\/h3>\n<p>It depends on when Google recrawls the page and reads the directive.<\/p>\n<p>A documented case study found that after proper 
implementation, a site achieved <strong>an 85 percent de-indexing rate within one month<\/strong>, and reached <strong>approximately 95 percent removal of the submitted URLs shortly after<\/strong> (<a href=\"https:\/\/www.tldrseo.com\/robots-txt-noindex\/\" target=\"_blank\" rel=\"noopener\">TLDR SEO case study<\/a>). That&#039;s a useful benchmark for large-scale cleanup, not a guarantee for every site.<\/p>\n<h3>Is noindex the same as a canonical tag<\/h3>\n<p>No.<\/p>\n<p>A canonical tag is a consolidation hint. It tells search engines which version of similar content you prefer to be treated as the main URL. A <code>noindex<\/code> directive tells search engines not to keep that page in the index. They solve different problems and shouldn&#039;t be treated as substitutes.<\/p>\n<h3>Can I use noindex on files that aren&#039;t HTML pages<\/h3>\n<p>Yes, if you use the <strong>X-Robots-Tag<\/strong> HTTP header.<\/p>\n<p>That&#039;s the right method for assets like PDFs or images where you can&#039;t place a meta robots tag in an HTML <code>&lt;head&gt;<\/code>.<\/p>\n<h3>Can I use robot txt noindex today<\/h3>\n<p>No, not as a supported Google deindexing method.<\/p>\n<p>If your goal is to remove content from search, use a page-level meta robots tag or an X-Robots-Tag header that the crawler can access and process.<\/p>\n<hr>\n<p>If your team needs to track not just Google visibility but also how AI systems describe your brand, <a href=\"https:\/\/www.promptposition.com\">promptposition<\/a> gives you a practical way to monitor LLM visibility, sentiment, competitor presence, and the sources shaping those answers. It&#039;s built for marketers who need to turn AI search from a black box into something they can measure and improve.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>If your team still thinks robot txt noindex is a valid way to keep pages out of Google, you&#039;re working from outdated advice. 
That mistake shows up in real marketing&#8230;<\/p>\n","protected":false},"author":1,"featured_media":371,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[48,195,194,196,197],"class_list":["post-372","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-ai-search-optimization","tag-noindex-directive","tag-robot-txt-noindex","tag-seo-best-practices","tag-technical-seo"],"_links":{"self":[{"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/posts\/372","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/comments?post=372"}],"version-history":[{"count":1,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/posts\/372\/revisions"}],"predecessor-version":[{"id":376,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/posts\/372\/revisions\/376"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/media\/371"}],"wp:attachment":[{"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/media?parent=372"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/categories?post=372"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.promptposition.com\/blog\/wp-json\/wp\/v2\/tags?post=372"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}