The three Core Web Vitals (LCP, CLS, INP) are what Google uses for ranking. Other metrics — TTFB, FCP, TBT, TTI — are diagnostic supports that help explain why a Core Web Vital is slow. This reference covers all of them.
Core Web Vitals
LCP (Core Web Vital)
Largest Contentful Paint
Time from page load start to when the largest visible element finishes rendering. Measures perceived load speed.
Thresholds (75th percentile of real users)
Good: ≤ 2.5s
Needs improvement: ≤ 4.0s
Poor: > 4.0s
Common bottlenecks
Slow server (high TTFB), large unoptimised hero image, render-blocking JavaScript, lazy-loaded LCP element.
Full LCP guide →
CLS (Core Web Vital)
Cumulative Layout Shift
How much visible content shifts unexpectedly over the page's full lifespan, not just during load. Measures visual stability.
Thresholds
Good: ≤ 0.1
Needs improvement: ≤ 0.25
Poor: > 0.25
Common causes
Images without dimensions, web fonts swapping, ads/embeds without reserved space, JS-injected content above existing content.
Full CLS guide →
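The scoring rule behind CLS can be sketched as a pure function. This is an illustrative reconstruction, not browser code: the `LayoutShift` type and `cumulativeLayoutShift` name are ours, standing in for the `layout-shift` entries a `PerformanceObserver` would deliver in a real page. CLS groups shifts into "session windows" (shifts less than 1s apart, windows capped at 5s) and reports the largest window total, excluding shifts that follow recent user input.

```typescript
// Hypothetical stand-in for the browser's layout-shift performance entry.
interface LayoutShift {
  startTime: number;       // ms since navigation start
  value: number;           // score for this single shift
  hadRecentInput: boolean; // shifts soon after input are excluded from CLS
}

// CLS = the largest session window: shifts < 1s apart, window capped at 5s.
function cumulativeLayoutShift(shifts: LayoutShift[]): number {
  let max = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const s of shifts) {
    if (s.hadRecentInput) continue; // user-initiated shifts don't count
    const startsNewWindow =
      s.startTime - prevTime > 1000 || s.startTime - windowStart > 5000;
    if (startsNewWindow) {
      windowSum = 0;
      windowStart = s.startTime;
    }
    windowSum += s.value;
    prevTime = s.startTime;
    max = Math.max(max, windowSum);
  }
  return max;
}
```

The windowing is why one big shift late in a session can dominate the score even when the load itself was stable.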
INP (Core Web Vital)
Interaction to Next Paint
How long the page takes to visually respond to user interactions, reported as roughly the worst interaction latency observed across the whole visit. Measures responsiveness. Replaced FID in March 2024.
Thresholds
Good: ≤ 200ms
Needs improvement: ≤ 500ms
Poor: > 500ms
Common bottlenecks
Long JavaScript tasks blocking main thread, heavy event handlers, third-party scripts, slow render after handler.
Full INP guide →
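The thresholds for all three vitals fold into a small classifier. A minimal sketch (the `rate` and `THRESHOLDS` names are ours; the `web-vitals` JavaScript library attaches an equivalent `rating` to each metric it reports):

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// [good upper bound, needs-improvement upper bound] per metric,
// matching the thresholds listed above.
const THRESHOLDS: Record<string, [number, number]> = {
  LCP: [2500, 4000], // milliseconds
  CLS: [0.1, 0.25],  // unitless score
  INP: [200, 500],   // milliseconds
};

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= needsImprovement) return "needs-improvement";
  return "poor";
}
```

Remember these cutoffs are judged at the 75th percentile of real users, not the average.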
Supporting Performance Metrics
TTFB
Time to First Byte
Time from request to first byte of response. Affected by server speed, network latency, and CDN coverage. The foundation of LCP.
Thresholds
Good: ≤ 800ms
Needs improvement: ≤ 1.8s
Poor: > 1.8s
Fixes
Add a CDN, enable full-page caching, optimise database queries, choose hosting closer to users.
FCP
First Contentful Paint
Time from page load start to when any text or image first paints. Earlier than LCP — useful for diagnosing whether the page is starting to render or stuck on TTFB.
Thresholds
Good: ≤ 1.8s
Needs improvement: ≤ 3.0s
Poor: > 3.0s
Fixes
Reduce TTFB, eliminate render-blocking resources, inline critical CSS.
FID
First Input Delay (deprecated)
The previous Core Web Vital for responsiveness. Measured only the delay between first input and handler running. Replaced by INP in March 2024 because it didn't capture full interaction latency or subsequent interactions.
Still reported in tooling for reference; no longer a ranking signal.
TBT
Total Blocking Time
Lab metric — total time the main thread was blocked by long tasks (over 50ms) between FCP and Time to Interactive. Strongly correlated with INP in field data.
Thresholds
Good: ≤ 200ms
Needs improvement: ≤ 600ms
Poor: > 600ms
Fixes
Same as INP — break long tasks, defer scripts, audit third-party code.
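The TBT formula above can be written out directly: only the portion of each long task beyond 50ms counts as blocking, and only tasks between FCP and TTI are included. A sketch assuming a pre-collected task list (the `LongTask` type is illustrative; in Lighthouse these tasks come from the trace):

```typescript
interface LongTask {
  startTime: number; // ms since navigation start
  duration: number;  // ms
}

// Sum the blocking portion (duration beyond 50ms) of every long task
// that falls inside the FCP-to-TTI window.
function totalBlockingTime(tasks: LongTask[], fcp: number, tti: number): number {
  return tasks
    .filter((t) => t.startTime >= fcp && t.startTime + t.duration <= tti)
    .reduce((tbt, t) => tbt + Math.max(0, t.duration - 50), 0);
}
```

For example, a 120ms task and a 200ms task contribute 70ms + 150ms = 220ms of TBT, while a 40ms task contributes nothing.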
TTI
Time to Interactive
Time until the page is reliably responsive — the main thread has been quiet for long enough to handle input. Older diagnostic metric; INP is now the user-facing version.
SI
Speed Index
How quickly the visible content of a page is populated. Lower is better. Useful for comparing two versions of the same page; less actionable than the Core Web Vitals.
Page-Level Diagnostics
Total Page Weight
Total size of all resources downloaded. Median web page is around 2.3MB; recommended target is under 1MB. Larger pages take longer to load on slow networks regardless of optimisation.
Number of Requests
How many HTTP requests the page makes. Each request has overhead (DNS, TCP, TLS, queue time). Median page is around 70 requests; targets vary by site.
JavaScript Execution Time
Total time spent parsing, compiling, and executing JavaScript. Affects INP directly. Lighthouse reports per-script breakdown so you can identify the biggest offenders.
Main Thread Work
Total time the main thread was busy across all tasks. Affects INP. Lighthouse breaks this down by category (script evaluation, style/layout, rendering, parsing HTML).
Lab Data vs Field Data
Two flavours of measurement:
- Lab data — synthetic measurement on a controlled test environment. Lighthouse, Site Speed Check, WebPageTest, etc. Reproducible; useful for debugging and CI.
- Field data — what real users actually experience, aggregated from Chrome telemetry (CrUX). What Google uses for ranking. Updates over a 28-day rolling window.
Lab data is faster to iterate on; field data is the source of truth. Always validate lab improvements against field data before declaring victory.
Where to See These Metrics
- Site Speed Check — instant lab measurement of all the metrics above.
- PageSpeed Insights — both lab and field data for any URL.
- Google Search Console → Core Web Vitals — field data per URL group, sourced from CrUX.
- Chrome DevTools Performance panel — detailed per-interaction profiling.
- Lighthouse — comprehensive lab audit, including actionable recommendations.