We Tested Google’s New 2MB Limit. Search Console Won’t Warn You.

Google quietly updated its Googlebot documentation, dropping the stated crawl limit from 15MB down to 2MB. We immediately started asking questions:

  • Will Google start ignoring some of our web pages?
  • Is this only about HTML, or does it hit CSS and JS too?
  • And what about images? We serve high-quality visuals on many of our projects, and those files can easily exceed 2MB.

The documentation alone didn’t give us clear answers. So we built a set of test files, submitted them to Google, and watched what happened. You can review all our test files and results yourself. Below is the full breakdown.

The Background: What Changed?

On February 3, 2026, Google reorganized its crawler documentation and clarified file size limits for Googlebot:

File Type | Crawl Limit
HTML & supported formats | 2MB
PDF files | 64MB
Other Google crawlers (default) | 15MB

At first glance, the headline looks alarming: a drop from 15MB to 2MB sounds massive. So, should you worry? For most sites, probably not.

According to the Web Almanac 2025, the median HTML page is roughly 33KB on mobile. That’s about 60 times smaller than the 2MB limit. Even at the 90th percentile, pages are only around 151KB.

You’d need an exceptionally bloated page to even approach the threshold. But if you do have pages near that size, know that Google won’t warn you when it starts cutting content.
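Since Google won't flag it for you, the check is easy to do yourself. Here's a minimal sketch (the function name and report fields are our own, not anything Google provides) that takes a page's raw HTML bytes and reports how much would fall past the 2MB cap:

```python
LIMIT = 2 * 1024 * 1024  # Googlebot's documented 2MB cap for HTML

def truncation_report(html: bytes, limit: int = LIMIT) -> dict:
    """Report whether a page would exceed the cap and how many bytes fall past it."""
    over = max(0, len(html) - limit)
    return {
        "size_bytes": len(html),
        "over_limit": over > 0,
        "bytes_dropped": over,
        "kept_fraction": min(1.0, limit / len(html)) if html else 1.0,
    }

# A hypothetical ~3MB page: everything past the first 2MB would be dropped.
page = b"<html>" + b"x" * (3 * 1024 * 1024) + b"</html>"
print(truncation_report(page))
```

Feed it the bytes you get from fetching your own URL (e.g. with `urllib.request`) and compare against the limit before Google does it for you, silently.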

As John Mueller clarified: “Googlebot is one of Google’s crawlers, but not all of them.” The 15MB default still applies to other Google crawlers (Image, Video, etc.).

He also confirmed on Bluesky that this isn’t a behavioral change: “None of these recently changed, we just wanted to document them in more detail.”

That’s all well and good. But documentation says one thing. Real-world behavior can say something else. We wanted to know what actually happens when Google encounters files above 2MB. So we tested it.

Summary of Our Test Results

Before diving into the details, here’s a quick overview of what we found. All test files are available at spotibo.com/test-google-sources/.

Resource Type | Fetched? | Indexed/parsed? | GSC Inspection Warning? | Notes
HTML (3MB) | Yes | Only the first 2MB | None | Silent truncation, no errors shown
HTML (16MB) | No | No | Generic error | “Something went wrong”, all data N/A
Images (2.5MB) | Yes | Yes | None | Separate limit, not affected by 2MB cap
CSS/JS (>2MB) | Yes | Likely truncated | None | Same silent behavior expected

The most important column is “GSC Inspection Warning?” Notice that it says “None” for every file Google actually fetched. That turned out to be the biggest surprise of our testing. But more on that below.

Test 1: HTML Files

We started with the most fundamental question. What happens when an HTML file exceeds 2MB?

We created test HTML pages at various sizes, including a 3MB file (seo-guide-3mb.html) and a 16MB file (test-16mb.html), and submitted them to Google for indexing. Then we waited for Google to process them and checked the results in Google Search Console.
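If you want to reproduce the setup, a padded test file is trivial to generate. This is a sketch, not our exact generator; the filler text and helper name are illustrative:

```python
import pathlib

def make_test_html(path: str, target_bytes: int) -> int:
    """Write a valid HTML file padded with filler paragraphs to roughly target_bytes."""
    head = "<!doctype html><html><head><title>Size test</title></head><body>\n"
    tail = "\n</body></html>"
    filler = "<p>Filler paragraph used only to inflate the document size.</p>\n"
    body_budget = target_bytes - len(head) - len(tail)
    paragraphs = filler * max(1, body_budget // len(filler))
    pathlib.Path(path).write_text(head + paragraphs + tail)
    return pathlib.Path(path).stat().st_size

size = make_test_html("seo-guide-3mb.html", 3 * 1024 * 1024)
print(f"wrote {size} bytes")
```

Upload the result to a crawlable URL, request indexing in GSC, and wait for Google to process it.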

The first result confused us. When we opened the URL Inspection tool in GSC and ran a live test on our 3MB file, it showed the complete source code. All 23,825 lines. The entire 3MB document loaded without a problem.

[Screenshot: GSC URL Inspection live test for the 3MB HTML file. The left panel shows a green checkmark with “URL is available to Google”; the right panel displays the full HTML source extending to line 23,825, confirming the live test loaded the entire 3MB document without truncation.]

So we thought: maybe the 2MB limit doesn’t actually apply? Maybe Google is more generous in practice?

We were wrong, and it took us a while to figure out why. The URL Inspection tool doesn’t use Googlebot. The Google-InspectionTool crawler probably operates under the general 15MB fetch limit, not the 2MB indexing limit. That’s why it shows you everything. It’s fetching the page like any other Google crawler would, but it has nothing to do with how Googlebot indexes your page for search.

This is a critical detail that can easily lead you to wrong conclusions. The tool most SEOs reach for to verify crawling is, in this case, actively misleading.

So we looked at the actual indexed source code instead:

[Screenshot: GSC showing the actual indexed version of the same 3MB HTML file. The left panel shows “URL is on Google” and “Page is indexed,” crawled as Googlebot smartphone on Feb 7, 2026. The right panel shows the HTML source abruptly ending at line 15,210, cut mid-word at “Prevention is b” followed by closing HTML tags. Annotation: “Truncated after 2MB.”]
  • Our 3MB HTML file was truncated after 2MB. The source code cuts off mid-word around line 15,210. The text literally stops at “Prevention is b” and then the closing </html> tag. Everything beyond the 2MB mark was silently dropped.
  • The GSC status showed “URL is on Google” and “Page is indexed.” Everything looked perfectly normal. No warning, no error, no indication of truncation whatsoever.

Our 16MB test file fared even worse. Google didn’t just truncate it. It refused to process it entirely. When we tried to request indexing, GSC returned an error: “Oops! Something went wrong. We also had a problem submitting your indexing request.” All crawl data showed N/A.

[Screenshot: GSC URL Inspection for the 16MB HTML test file. The status reads “URL is not on Google” with a blue info icon; a popup error says “Something went wrong. If the issue persists, try again in a few hours.” All crawl data fields, including Last crawl, Crawled as, Crawl allowed, Page fetch, and Indexing allowed, show N/A.]

This is the most important finding from our entire test. For files above 2MB but within the 15MB fetch limit, Google silently truncates the content with zero warning. For very large files like our 16MB test, Google can’t even process the indexing request. And in neither case does GSC give you a clear explanation of what went wrong.

Test 2: Images

After seeing the HTML results, this was the test we were most anxious about. On many of our projects, we deliberately serve high-quality images: product shots, editorial photos, and detailed infographics. These files regularly push past 2MB. If Google started ignoring them, it would be a serious problem.

The good news: images are completely unaffected.

We tested with a 2.5MB image and submitted it for indexing. Within two days, it appeared in Google Image Search without any issues. The image rendered correctly and was fully accessible.

[Screenshot: the 2.5MB test image appearing in the Google Images index.]

We also checked Google Image Search more broadly and found that images well over 2MB appear there regularly. This isn’t surprising once you understand the architecture.

Images are not handled by the same Googlebot that indexes HTML. They’re fetched by Googlebot-Image, which operates under its own separate constraints.

Even if Google didn’t crawl the full original file, it could still serve cached or resized versions in Image Search. So even in the worst case, your images would still show up.

Test 3: CSS and JavaScript

This was the trickiest test to interpret, and it produced the most ambiguous results.

We tested CSS and JS files exceeding 15MB. Google Search Console reported no errors on any of them; everything showed as OK.

But after what we learned from the HTML test, we no longer trust “no errors” as confirmation. We saw the exact same “everything is fine” behavior with HTML files that were, in reality, being silently truncated. There’s a strong reason to believe the same pattern applies to CSS and JS.

In practice, this is unlikely to affect most sites. Well-optimized projects keep their total bundled code well under 1MB. On our own heavily loaded projects, we stay under 1MB. If your bundles are approaching 2MB, the crawl limit is probably the least of your problems. You likely already have serious page speed issues.

But if you’re running a site with exceptionally large inline scripts or stylesheets, this silent truncation is worth investigating.
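A quick local audit is enough to rule this out. The sketch below (the helper name and demo directory are ours, purely illustrative) walks a build output directory and flags any CSS or JS asset that exceeds the 2MB cap:

```python
import pathlib
import tempfile

LIMIT = 2 * 1024 * 1024  # Googlebot's documented 2MB cap

def oversized_assets(build_dir, limit: int = LIMIT):
    """Return (path, size) pairs for CSS/JS files larger than the limit."""
    flagged = []
    for path in pathlib.Path(build_dir).rglob("*"):
        if path.suffix in {".css", ".js"} and path.is_file():
            size = path.stat().st_size
            if size > limit:
                flagged.append((str(path), size))
    return flagged

# Demo against a throwaway directory standing in for a real build folder.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "app.js").write_bytes(b"x" * (3 * 1024 * 1024))  # oversized bundle
(demo / "site.css").write_bytes(b"x" * 1024)             # well under the cap
print(oversized_assets(demo))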

Don’t Panic, but Don’t Ignore It Either

Every time Google updates its documentation, a wave of fear-mongering sweeps through the SEO industry.

But our testing revealed something that goes beyond the documentation. The limit is real, it’s enforced, and Google gives you zero warning when you hit it.

The URL Inspection tool masks the problem by using a different crawler. Search Console shows no errors. Your content gets silently cut off, and unless you manually compare the indexed source against your original page, you’ll never know.
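That manual comparison can be semi-automated. A rough heuristic (our own, with an illustrative function name) is to check whether the tail of your original page still appears in the indexed copy that GSC shows you:

```python
def is_truncated(indexed_html: str, original_html: str, probe_len: int = 200) -> bool:
    """Guess whether the indexed source is missing the end of the original page.

    Takes the last probe_len non-whitespace characters of the original and
    checks whether that tail survives in the indexed copy; if not, the page
    was likely cut off before the end.
    """
    tail = "".join(original_html.split())[-probe_len:]
    return tail not in "".join(indexed_html.split())

original = "<html><body>" + "content " * 1000 + "Prevention is better than cure.</body></html>"
indexed = original[: len(original) // 2] + "</html>"  # simulate a mid-document cut
print(is_truncated(indexed, original))  # → True
```

Paste the source from GSC's indexed view as `indexed_html` and your live page as `original_html`; whitespace is stripped on both sides so minifier differences don't cause false alarms.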

For most sites, 2MB of HTML is more than enough. Keep your code clean, externalize your assets, and put important content early in the document. Do these things not because of a documentation change, but because they make your site better for both users and search engines.
