Core Web Vitals

Performance

Core Web Vitals are Google’s standardised user experience metrics for loading, responsiveness, and visual stability. As of 12 March 2024, the set comprises Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS), measured from real users (field data) and assessed at the 75th percentile across device types. They inform product and SEO decisions because slow or unstable pages correlate with higher bounce and lower conversions, and Google factors them into overall page experience considerations in Search. For image-heavy pages, images commonly determine LCP and many layout shifts, and image gallery scripts can influence INP.

Core Web Vitals (CWV) are a standardised subset of Google’s Web Vitals that quantify real‑world user experience in three areas: loading performance, responsiveness, and visual stability. As of 12 March 2024, CWV consist of Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS).

These metrics are designed to reflect what users actually experience rather than synthetic benchmarks alone. They are derived from field data collected in Chrome from opted-in users and reported over a rolling 28‑day window, segmented by device type. A page or origin is assessed at the 75th percentile of observed loads and interactions, which helps account for less‑than‑ideal networks, devices, and variability in real traffic. Passing the assessment requires all three metrics to be within their “good” thresholds for the chosen segment (mobile or desktop).
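To make the assessment rule concrete, the sketch below computes the 75th percentile of some hypothetical field samples and applies the all-three-must-pass rule. The nearest-rank percentile method, the `p75` and `passesCWV` names, and the sample values are illustrative assumptions rather than CrUX’s exact implementation; only the “good” thresholds come from the published guidance.

```javascript
// "Good" thresholds: LCP and INP in milliseconds, CLS unitless.
const GOOD = { lcp: 2500, inp: 200, cls: 0.1 };

// 75th percentile using the nearest-rank method (one common convention).
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[Math.max(rank, 0)];
}

// A segment passes only if ALL three metrics are within "good" at p75.
function passesCWV(field) {
  return Object.keys(GOOD).every((m) => p75(field[m]) <= GOOD[m]);
}

// Hypothetical mobile field samples gathered via RUM.
const mobile = {
  lcp: [1800, 2100, 2400, 3900, 1500],
  inp: [90, 140, 180, 450, 120],
  cls: [0.02, 0.05, 0.09, 0.3, 0.01],
};
// true: one slow outlier per metric is absorbed at the 75th percentile.
console.log(passesCWV(mobile));
```

This also illustrates why p75 assessment is forgiving of occasional bad loads: each metric above contains one "poor" sample, yet the segment still passes.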

The focus on user‑centred outcomes aligns product, performance, and SEO teams around a common set of goals. Faster rendering of meaningful content, reliable interactivity, and stable layouts are strongly associated with lower abandonment, improved engagement, and better conversion rates. Image optimisation plays an outsized role because images are often the largest elements on a page, can trigger layout shifts when dimensions are unspecified, and can interact with JavaScript in galleries or carousels that affect responsiveness. Addressing CWV therefore sits at the intersection of front‑end performance engineering, image delivery, and UX design.

Overview of the Core Web Vitals metrics

Largest Contentful Paint (LCP)

LCP measures the render time of the largest content element within the viewport, typically a hero image, a CSS background image, or a large block of text. It reflects when the main content appears to be loaded and useful. Common LCP bottlenecks include slow server responses, render‑blocking CSS or JavaScript, unoptimised image delivery, and failure to prioritise critical assets. Strategies that improve LCP usually involve server‑side improvements (TTFB), inlining or minimising critical CSS, preloading the LCP resource, and serving a compressed, appropriately sized image in a modern format.
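A minimal markup sketch of these tactics, with hypothetical image paths: the preload hint lets the browser’s preload scanner fetch the hero image early, and fetchpriority raises its request priority.

```html
<head>
  <!-- Let the preload scanner fetch the LCP image early. -->
  <link rel="preload" as="image" href="hero-1200.avif"
        imagesrcset="hero-800.avif 800w, hero-1200.avif 1200w"
        imagesizes="100vw">
</head>
<body>
  <!-- fetchpriority raises request priority; width/height reserve space. -->
  <img src="hero-1200.avif"
       srcset="hero-800.avif 800w, hero-1200.avif 1200w"
       sizes="100vw"
       width="1200" height="600"
       fetchpriority="high"
       alt="Product hero">
</body>
```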

Interaction to Next Paint (INP)

INP evaluates a page’s overall responsiveness by observing the latency of user interactions throughout a visit and reporting a single value (the worst, or near‑worst, interaction). It captures the time from an interaction (tap, click, or keyboard) until the next frame is painted in response. High INP values often stem from long main‑thread tasks, heavy event handlers, layout thrashing, or excessive third‑party scripts. Reducing JavaScript execution cost, splitting long tasks, deferring non‑critical work, and keeping image decoding and carousels lightweight tend to improve INP.
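One common INP tactic, splitting a long task, can be sketched as below. The `processInChunks` helper and chunk size are illustrative assumptions; the point is the pattern of yielding back to the main thread between chunks (via `scheduler.yield()` where supported, otherwise a `setTimeout` fallback) so that pending interactions can be handled and painted sooner.

```javascript
// Yield control back to the main thread between chunks of work.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in small chunks instead of one long task.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    await yieldToMain(); // pending taps/clicks can paint here, improving INP
  }
  return results;
}
```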

Cumulative Layout Shift (CLS)

CLS quantifies unexpected visual instability by summing layout shift scores that occur without user input. It is typically caused by images or embeds without reserved dimensions, late‑loading ads, dynamic content injected above existing content, and custom fonts that swap late. Stable layouts benefit from reserving space for media via width/height or aspect‑ratio, careful ad slot management, avoiding lazy‑loading above‑the‑fold images that need immediate rendering, and minimising reflow‑inducing DOM changes during initial load.
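A short sketch of space reservation, with hypothetical class names and paths: explicit width/height (or a CSS aspect‑ratio) and a fixed‑height ad slot keep surrounding content from shifting when media arrives late.

```html
<style>
  /* aspect-ratio keeps the box stable even before the image arrives. */
  .card img { width: 100%; height: auto; aspect-ratio: 4 / 3; }
  /* Fixed-size slot for an ad or embed that loads after first paint. */
  .ad-slot { min-height: 250px; }
</style>

<div class="card">
  <!-- width/height give the browser the intrinsic ratio immediately. -->
  <img src="photo.jpg" width="800" height="600" alt="Gallery photo">
</div>
<div class="ad-slot"><!-- ad injected later --></div>
```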

Background

Core Web Vitals evolved from Google’s broader Web Vitals initiative to foreground a small set of metrics with strong UX correlation and implementation clarity. The composition has changed as understanding and measurement have matured: First Input Delay (FID) was replaced by Interaction to Next Paint (INP) on 12 March 2024 to capture responsiveness more comprehensively across an entire session. The programme continues to iterate as browsers and device capabilities change, but stability and clear guidance remain priorities so that product teams can plan roadmaps with predictable targets.

CWV are reported through multiple channels. Field data appears in PageSpeed Insights, Chrome UX Report (CrUX), and Search Console’s Core Web Vitals reports. Lab tools such as Lighthouse, Chrome DevTools, and WebPageTest provide diagnostic views and synthetic conditions for reproducibility. Real‑user monitoring (RUM) can be instrumented with the open‑source Web Vitals library to capture per‑page and per‑segment performance at scale. Combining field and lab perspectives is common: field data validates impact, while lab diagnostics isolate regressions and point to code‑level causes.
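A minimal RUM reporter might batch metric callbacks and flush them with sendBeacon when the page is hidden. The endpoint, queue shape, and helper names below are assumptions for illustration; the `onCLS`/`onINP`/`onLCP` callbacks shown in the trailing comment are the Web Vitals library’s entry points.

```javascript
// Sketch of a RUM reporter that batches Web Vitals metrics.
// ENDPOINT and the queued fields are assumptions, not a standard schema.
const ENDPOINT = '/analytics';
const queue = new Set();

function addMetric(metric) {
  // Keep only the fields the backend needs (names match web-vitals output).
  queue.add({ name: metric.name, value: metric.value, id: metric.id });
}

function flushQueue() {
  if (queue.size === 0) return null;
  const body = JSON.stringify([...queue]);
  queue.clear();
  // sendBeacon survives page unload; guarded so the logic runs anywhere.
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon(ENDPOINT, body);
  }
  return body;
}

// In the browser, wire this to the web-vitals library and flush when the
// page is hidden:
//   import { onCLS, onINP, onLCP } from 'web-vitals';
//   onCLS(addMetric); onINP(addMetric); onLCP(addMetric);
//   document.addEventListener('visibilitychange', () => {
//     if (document.visibilityState === 'hidden') flushQueue();
//   });
```

Flushing on `visibilitychange` rather than `unload` matters in practice, because CLS and INP values can still change until the page is backgrounded.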

Role in ranking: Core Web Vitals (CWV) are part of Google’s broader “page experience” signals used in Search ranking. They are one of many signals; content relevance and intent match remain primary. Better page experience can help when pages have similar relevance, but it does not override high‑quality content.

Google treats page experience, including CWV, as a set of signals rather than a standalone ranking system. Improvements can assist competitiveness when pages are otherwise comparable on intent and relevance, and they align with user‑centric outcomes like engagement and satisfaction. However, high‑quality, relevant content continues to outweigh performance differences, particularly for queries where topical authority and intent satisfaction are clear. In short, CWV act as meaningful but supporting signals within a broader ranking context.

From an SEO workflow perspective, CWV provide measurable, auditable targets that can be owned by engineering and monitored by marketing. Organisations commonly integrate CWV into technical SEO audits, deploy guardrails in CI/CD (e.g., Lighthouse or Web Vitals budgets), and track origin‑level progress in Search Console. For publishers, the removal of AMP requirements from Top Stories made performance and UX quality more universally important, with CWV offering shared language between content, ad‑ops, and engineering stakeholders when balancing monetisation with stability and speed.

Role of images in Core Web Vitals

Images frequently dominate transfer size and render cost, which makes them central to CWV outcomes. The LCP element is often a hero image, meaning server response time, resource prioritisation, and the image’s format, dimensions, and compression directly influence the LCP timestamp. For CLS, missing width/height or aspect‑ratio on images, responsive images that adopt unexpected sizes, and late‑loading placeholders can trigger noticeable shifts. For INP, heavy image carousels, lightboxes, and synchronous decoding on the main thread can delay the next paint after a tap or click, especially on mid‑range mobile devices.

  • LCP impacts: compress and resize hero images, use modern formats (WebP/AVIF), set fetchpriority="high" or preload for the LCP resource, minimise render‑blocking CSS/JS, and ensure fast TTFB via caching/CDN.
  • INP impacts: avoid heavy carousels; reduce long tasks in interaction handlers; prefer passive event listeners; perform image decoding off the main thread (decoding="async"); defer non‑critical third‑party scripts.
  • CLS impacts: always include width/height or CSS aspect‑ratio; match intrinsic dimensions to delivered size; reserve ad and embed slots; avoid layout‑shifting placeholders; do not lazy‑load above‑the‑fold images unnecessarily.

Thresholds and evaluation

  • LCP: good ≤ 2.5 s; needs improvement > 2.5 s to 4.0 s; poor > 4.0 s.
  • INP: good ≤ 200 ms; needs improvement > 200 ms to 500 ms; poor > 500 ms.
  • CLS: good ≤ 0.10; needs improvement > 0.10 to 0.25; poor > 0.25.

Assessment uses field data at the 75th percentile of page views for each metric, evaluated separately for mobile and desktop over a rolling 28‑day period. A URL passes the Core Web Vitals assessment only if all three metrics are within the good range. Origin‑level reporting aggregates similar pages when URL‑level data is sparse. In practice, improvements often surface first in lab tools; they appear in field datasets once sufficient traffic accrues through the updated experience. Seasonality, traffic mix, and geo distribution can influence observed values, so segment‑aware monitoring is valuable.

Implementation notes

Measuring and monitoring

  • Use PageSpeed Insights for a combined lab/field snapshot; Search Console for sitewide origin/URL groups; CrUX API/BigQuery for custom field analyses; and RUM with the Web Vitals library for granular segmentation.
  • Track mobile and desktop separately; set performance budgets in CI; correlate changes with deployments; and validate regressions with reproducible lab runs (Lighthouse, WebPageTest, DevTools performance traces).

Improving LCP, INP, and CLS in practice

  • LCP: reduce server latency (caching, CDN, SSR/edge rendering), inline critical CSS, eliminate render‑blocking resources, preload or set fetchpriority="high" for the LCP image, serve responsive images (srcset/sizes) in WebP/AVIF with efficient compression.
  • INP: trim JavaScript bundles, prioritise interactivity, split long tasks, defer non‑critical hydration, minimise heavy carousels/lightboxes, prefer CSS transitions where possible, and keep main thread free during input by offloading work (Web Workers, async decode()).
  • CLS: reserve media and ad space with fixed dimensions or aspect‑ratio, stabilise fonts (font-display, preloading), avoid inserting content above existing content, and limit layout‑shifting placeholders for above‑the‑fold elements.
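Font stabilisation from the CLS bullet above can be sketched as follows, with a hypothetical font path: preloading fetches the font early so any swap happens sooner, and font-display: swap shows fallback text immediately rather than hiding it.

```html
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/body.woff2" crossorigin>
<style>
  @font-face {
    font-family: "Body";
    src: url("/fonts/body.woff2") format("woff2");
    /* swap renders fallback text immediately; pairing this with a
       metric-matched fallback font reduces reflow when the swap occurs. */
    font-display: swap;
  }
</style>
```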

Comparisons

Core Web Vitals vs Web Vitals (broader set)

Web Vitals is an umbrella initiative that includes a variety of UX metrics (e.g., FCP, TTFB, TBT) for diagnostics. Core Web Vitals is the curated subset Google emphasises for a consistent, comparable user‑experience bar in Search and product decisions. Teams often use the broader metrics in debugging while tracking CWV for accountability and reporting.

INP vs FID (responsiveness metrics)

FID measured the delay before event handlers ran on first input only, missing processing, presentation, and subsequent interactions. INP observes all interactions and measures until the next paint following input, making it more representative of real responsiveness. With INP’s adoption, optimising for consistently quick interactions across the session is more impactful than solely improving the first input delay.

LCP vs FCP (loading metrics)

First Contentful Paint (FCP) marks when any content renders, which may be a spinner or small text. LCP focuses on when the largest, most meaningful element appears, correlating better with perceived readiness. FCP remains useful diagnostically, but LCP is preferred for judging when a page becomes useful to users.

Lab vs field data for CWV assessment

Lab tests run under controlled conditions and are ideal for regression testing and debugging; they provide approximations of CWV and additional diagnostics like Total Blocking Time. Field data captures real‑world variability, networks, and devices, and is authoritative for passing or failing the CWV assessment. Both perspectives are complementary, and differences often point to network, device, or traffic‑mix factors that lab tests do not emulate.

FAQs

Do all three Core Web Vitals need to be in the green to pass?

Yes. A URL passes the Core Web Vitals assessment when the 75th‑percentile values for LCP, INP, and CLS are all within their good thresholds for the selected device category. If even one metric sits in the needs‑improvement or poor range, the page does not pass for that segment. Origin‑level reporting follows the same principle across grouped pages.

What happens if a page has insufficient field data?

If an individual URL lacks enough samples, tools may fall back to origin‑level data that aggregates similar pages. PageSpeed Insights will still show lab diagnostics from Lighthouse, but the definitive pass/fail assessment relies on field data. Implementing RUM on your site can provide URL‑level visibility independent of public datasets when traffic volumes are modest or highly segmented.

How often do Core Web Vitals values update in Search Console and CrUX?

Both rely on a rolling 28‑day collection window. Search Console refreshes data regularly (typically daily updates reflecting the latest window), while the CrUX dataset is updated on a scheduled basis. After deploying performance changes, expect a lag before improvements are fully reflected in field reports, depending on traffic volumes and user exposure to the updated experience.

Are Core Web Vitals different on mobile and desktop?

Thresholds are the same, but data is segmented by device class because networks, CPUs, and input methods differ. It is common for a site to pass on desktop and fail on mobile due to heavier CPU and network constraints on phones. Planning and reporting should consider both segments, with a particular focus on mobile if that represents the majority of traffic or revenue.

Does lazy‑loading help or hurt Core Web Vitals?

Lazy‑loading offscreen images can reduce network contention and improve LCP by prioritising critical assets, and lower CLS by avoiding late insertions. However, lazy‑loading above‑the‑fold images often delays the LCP element and can introduce shifts if placeholders are not sized. Native loading="lazy" is generally safe for below‑the‑fold media; above the fold, consider eager loading with reserved dimensions and appropriate priority signals.
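In markup, with hypothetical file names, the distinction looks like this:

```html
<!-- Above the fold: eager, prioritised, with reserved dimensions. -->
<img src="hero.webp" width="1200" height="600"
     fetchpriority="high" alt="Hero">

<!-- Below the fold: native lazy-loading defers the fetch until near view. -->
<img src="gallery-01.webp" width="800" height="600"
     loading="lazy" decoding="async" alt="Gallery item">
```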

Synonyms

  • CWV
  • Google Core Web Vitals
  • Core web-vitals metrics
  • Page experience metrics
  • Web Vitals (Core subset)