The promise of AI website builders is seductive: describe your dream site, and watch it appear in seconds. For internal tools, dashboards, and prototypes, this is a revolution. But for marketing websites—properties whose primary function is to be discovered—there is a hidden architectural flaw in many popular tools.
The issue is not the quality of the copy or the design. The issue is how the HTML is delivered to the browser.
1. The "View Source" Litmus Test
To understand the problem, we need to look at what a search engine crawler sees. Humans browse visually; crawlers browse code.
What a client-side rendered (CSR) builder typically ships:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>My App</title>
    <script src="/main.js" defer></script>
  </head>
  <body>
    <div id="root"></div>
    <!-- Content is missing! -->
  </body>
</html>
```

The crawler sees an empty page. Indexing is deferred until JavaScript executes (if it executes at all).
What a statically generated (SSG) page ships:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Pricing - My Startup</title>
    <meta name="description" content="...">
  </head>
  <body>
    <nav>...</nav>
    <main>
      <h1>Straightforward Pricing</h1>
      <p>Get started for free...</p>
      <!-- Content is immediate -->
    </main>
  </body>
</html>
```

The crawler sees all content, links, and metadata immediately: instant indexing.
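You can automate this litmus test. The sketch below is a minimal script, assuming Node 18+ with the built-in fetch and no rendering engine: it requests a URL and checks whether a phrase you expect to rank for appears in the raw HTML, which approximates what the first crawl wave sees. The URL and phrase are placeholders.

```ts
// litmus-test.ts - a minimal sketch: does the raw HTML (no JavaScript
// execution) already contain the content you want indexed?
// Assumes Node 18+ for the global fetch; URL and phrase are examples.

async function rawHtmlContains(url: string, phrase: string): Promise<boolean> {
  const res = await fetch(url, {
    // Identify as a generic script; servers may vary output by user agent.
    headers: { "User-Agent": "litmus-test/1.0" },
  });
  const html = await res.text();
  return html.toLowerCase().includes(phrase.toLowerCase());
}

const url = process.argv[2] ?? "https://example.com/pricing";
const phrase = process.argv[3] ?? "Straightforward Pricing";

rawHtmlContains(url, phrase).then((found) => {
  console.log(
    found
      ? `OK: "${phrase}" is present in the initial HTML.`
      : `WARNING: "${phrase}" only appears after JavaScript runs (or not at all).`
  );
});
```

If the warning fires for your homepage headline, you are relying entirely on the render queue described in the next section.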
2. Google's "Two-Wave" Indexing
It is a myth that Google cannot render JavaScript. It can. The problem is cost and priority.
Rendering JavaScript requires significantly more compute than parsing HTML. Because the web is effectively infinite and Google's resources are finite, it processes pages in two waves:
Wave 1: The Instant Crawl
The bot fetches the HTTP response. It parses the HTML immediately to find links and content. If the HTML is empty (as in CSR), no content is indexed and no new links are followed yet.
Wave 2: The Render Queue
Pages that require JavaScript are added to a separate "render queue". Depending on Google's available resources, rendering may happen hours, days, or even weeks later. For fresh content, that delay is fatal.
Crawl Budget Impact: If your site relies entirely on the render queue, Google may decide it is too expensive to crawl deeply. Pages end up stuck at "Discovered - currently not indexed" in Search Console.
3. The Metadata Duplicate Problem
Beyond indexing content, social sharing and rich results rely on <meta> tags. In a pure Single Page Application (SPA), the server returns the same index.html shell for every route, so the <head> is effectively static: every URL carries the same title, description, and Open Graph tags, and the real values are only swapped in by JavaScript after load. Crawlers and social scrapers that read the initial response therefore see identical metadata on every page.

Result: Google sees duplicate titles and descriptions across your entire site, and every shared link renders the same generic preview. Rankings tank.
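The fix is to stamp out a unique <head> per page at build time. The sketch below is illustrative only, not any specific framework's API; the Page type, escapeHtml helper, and example values are assumptions for this example.

```ts
// build-head.ts - illustrative sketch: emit a unique <head> per page at build
// time so every URL ships its own title, description, and Open Graph tags.

interface Page {
  path: string;        // e.g. "/pricing"
  title: string;
  description: string;
  url: string;         // canonical absolute URL
}

// Basic HTML escaping so titles/descriptions can't break the markup.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderHead(page: Page): string {
  const t = escapeHtml(page.title);
  const d = escapeHtml(page.description);
  return [
    `<title>${t}</title>`,
    `<meta name="description" content="${d}">`,
    `<link rel="canonical" href="${page.url}">`,
    `<meta property="og:title" content="${t}">`,
    `<meta property="og:description" content="${d}">`,
    `<meta property="og:url" content="${page.url}">`,
  ].join("\n  ");
}

// Example usage with a hypothetical pricing page:
console.log(renderHead({
  path: "/pricing",
  title: "Pricing - My Startup",
  description: "Straightforward pricing. Get started for free.",
  url: "https://example.com/pricing",
}));
```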
4. Performance & Core Web Vitals
Google uses Core Web Vitals (CWV) as a ranking factor. Client-side rendering directly degrades Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS).
| Metric | Typical AI/CSR Site | Static SSG Site |
|---|---|---|
| LCP (time until the largest content is visible) | Slow (2.5s - 4.0s+): waits for the JS bundle + API fetch | Fast (< 1.0s): HTML arrives ready to paint |
| CLS (visual stability) | Unstable: elements pop in as data loads | Stable (0.0): layout defined in CSS/HTML |
| TTFB (server response time) | Fast, but of minimal utility (empty shell) | Fast: served from the CDN edge |
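You do not have to take the table on faith: the browser exposes both metrics through PerformanceObserver. The sketch below is a minimal field measurement for Chromium-based browsers; the layout-shift entry fields are read via a cast because they are not yet in TypeScript's standard DOM typings, and the simple running sum is an approximation of CLS rather than the session-window definition Google now uses (the web-vitals library implements the exact one).

```ts
// Minimal field measurement of LCP and CLS via PerformanceObserver.
// Run early in page load in a Chromium-based browser.

// Largest Contentful Paint: the last reported entry is the final LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log(`LCP candidate: ${Math.round(last.startTime)} ms`);
}).observe({ type: "largest-contentful-paint", buffered: true });

// Cumulative Layout Shift: sum layout-shift entries not caused by user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // LayoutShift fields are missing from the default DOM typings, hence the cast.
    const shift = entry as unknown as { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) cls += shift.value;
  }
  console.log(`CLS so far: ${cls.toFixed(3)}`);
}).observe({ type: "layout-shift", buffered: true });
```

Run it on a CSR page and on a pre-rendered page and compare the numbers for yourself.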
Architecture, not just Optimization
Fixing this isn't about installing an SEO plugin or adding keywords. It requires changing how the site is built.
This is why Pagesmith uses Static Site Generation (SSG) by default.
- Pre-rendered HTML: We generate the final HTML at build time. When a request comes in, the CDN serves a complete file.
- Zero-JS by Default: Pagesmith sites work perfectly without JavaScript enabled. We only hydrate interactive "islands" like forms or calculators (a generic sketch of the island pattern follows after this list).
- Perfect Metadata: Every page is an individual file with its own title, description, and Open Graph tags.
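The island pattern itself is not magic. The sketch below is a generic illustration of the idea, not Pagesmith's internals: the page is plain static HTML, and a small loader imports a component's JavaScript only when its placeholder element scrolls into view. The data-island attribute, module paths, and mount() contract are assumptions for this example.

```ts
// islands.ts - generic sketch of "islands" hydration (illustrative only).
// Static HTML marks interactive regions with data-island="name"; their JS
// loads only when the element becomes visible, so the rest of the page
// never pays a JavaScript cost.

const loaders: Record<string, () => Promise<{ mount: (el: HTMLElement) => void }>> = {
  // Hypothetical modules; each exports a mount(el) function.
  "contact-form": () => import("./islands/contact-form.js"),
  "pricing-calculator": () => import("./islands/pricing-calculator.js"),
};

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const el = entry.target as HTMLElement;
    observer.unobserve(el);

    const name = el.dataset.island;
    const load = name ? loaders[name] : undefined;
    if (load) {
      // Hydrate this island only; everything else stays plain HTML.
      load().then((mod) => mod.mount(el));
    }
  }
});

document.querySelectorAll<HTMLElement>("[data-island]").forEach((el) => {
  observer.observe(el);
});
```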