
How to Audit and Improve Your Website's UI/UX

Last modified: April 29, 2026 · 17 min read

I've sat through this meeting more times than I'd like. A redesign ships, conversion stays flat or dips a little, and a few weeks later there's a thirty-minute back-and-forth about whether the new homepage is the culprit. Nobody can really answer, because nobody collected the evidence before the redesign went out the door.

The audit is the boring fix for that. You spend a couple of focused days going over the site, cross-checking against what users are actually doing in session recordings, and at the end you've got a ranked list of things to address. There is no big reveal. The deliverable is a spreadsheet. It tends to move the metric more than the redesign that got it called in the first place, though, so I keep doing it.

This is roughly the order I run one in. The sequence matters, because doing it backwards (visual first, data last) is how you end up redesigning things that were never the problem.

Start by defining what "good" means

Skip this and the audit turns into a list of things that personally annoy you, which is not useful to anyone but you. So write down, on paper or in a doc, the one or two things the site is actually meant to do. Sign-ups, trial starts, articles read, tickets deflected, whichever applies. Next to each one, the metric you'd watch to know it's working: conversion rate, completion rate, bounce on the landing page, task success on the primary flow.

That short list becomes the filter for everything that comes later. If a finding does not connect back to something on it, park it. Otherwise the report fills up with twenty things that sound important and nothing actually ships.

Run a heuristic evaluation first

Heuristic evaluation is the cheapest, fastest part of an audit. One reviewer with a checklist can cover an entire site in a day. Jakob Nielsen's ten usability heuristics are still the standard reference, and they hold up well in 2026:

  1. Visibility of system status (loading states, confirmations, progress)
  2. Match between system and the real world (plain language, familiar metaphors)
  3. User control and freedom (undo, back, escape hatches)
  4. Consistency and standards (within the product and with platform conventions)
  5. Error prevention (constraints, defaults, confirmations on destructive actions)
  6. Recognition rather than recall (visible options instead of memorized ones)
  7. Flexibility and efficiency of use (shortcuts, bulk actions for power users)
  8. Aesthetic and minimalist design (no decoration that competes with content)
  9. Help users recognize, diagnose, and recover from errors (clear messages, suggested fixes)
  10. Help and documentation (searchable, contextual, only as much as you need)

Walk the main flows of your site with the list open. Sign up. Buy. Search. Cancel. Reset password. For each step, note where the system fails one of the ten. You'll end up with twenty to fifty observations on a typical site, and roughly a third of them will be quick wins.

Look at what users actually do, not what they say

Heuristics catch the obvious stuff. For the rest you have to look at what people actually do on the site, and that means three different sources of data, used in roughly this order.

Start with whatever analytics you've got plugged in. GA4 if that's the company default, Plausible if you've gone the lighter route, Mixpanel if someone in product spent a quarter setting up funnels. Pull the funnel for your primary outcome and look at where the biggest drop is. If a single step is losing 60 percent of the people who entered it, that step is broken in a UX sense, not a marketing one. I cannot remember the last time that turned out to be a copy issue.
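
If you want the drop-off per step laid out side by side rather than eyeballed in the funnel report, the arithmetic takes a dozen lines to script. A minimal TypeScript sketch, with made-up step names and counts standing in for whatever your analytics tool exports:

  // Sketch: find the worst step in a funnel from exported step counts.
  // Step names and numbers here are hypothetical.
  type FunnelStep = { name: string; users: number };

  const funnel: FunnelStep[] = [
    { name: "Landing page", users: 10_000 },
    { name: "Sign-up form viewed", users: 4_200 },
    { name: "Form submitted", users: 1_600 },
    { name: "Email confirmed", users: 1_350 },
  ];

  // Drop-off at each step relative to the step before it.
  const dropOffs = funnel.slice(1).map((step, i) => ({
    step: step.name,
    dropOff: 1 - step.users / funnel[i].users,
  }));

  const worst = dropOffs.reduce((a, b) => (b.dropOff > a.dropOff ? b : a));
  console.log(dropOffs);
  console.log(`Worst step: ${worst.step} (${Math.round(worst.dropOff * 100)}% drop)`);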

Then heatmaps and scroll maps. Microsoft Clarity is free and good enough for almost everyone, which is why I default to it now instead of Hotjar. The two things heatmaps are genuinely useful for: catching rage-clicks on stuff that isn't actually a link (a styled heading, a non-interactive icon), and confirming nobody is scrolling far enough to hit your primary CTA on a long landing page. Beyond that, heatmaps tend to confirm what you already suspected, so do not over-invest.
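
For what it's worth, a rage click is not a precisely defined thing; every tool draws the line a little differently. A rough sketch of the idea, with thresholds that are my own guesses rather than Clarity's actual definition:

  // Rough rage-click heuristic: several clicks on the same element
  // within a short window. WINDOW_MS and MIN_CLICKS are arbitrary.
  const WINDOW_MS = 1000;
  const MIN_CLICKS = 3;

  const recentClicks = new Map<EventTarget, number[]>();

  document.addEventListener("click", (e) => {
    if (!e.target) return;
    const now = performance.now();
    // Keep only clicks on this element that fall inside the window.
    const times = (recentClicks.get(e.target) ?? []).filter((t) => now - t < WINDOW_MS);
    times.push(now);
    recentClicks.set(e.target, times);

    if (times.length >= MIN_CLICKS) {
      console.warn("Possible rage click on", e.target);
    }
  });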

Session recordings are the slowest tool in the stack and the one I learn the most from. Pick ten recordings of users who fell out of the funnel at the step your analytics flagged, and watch them at normal speed, not 2x. The pattern almost always repeats. Someone tabs back up the form because a label was unclear. Someone clicks a button three times waiting for a response because there's no loading state. Someone hits a modal that swallowed focus and they bail out of the tab. You see it once, you see it ten times, you have a fix.

Resist the urge to jump to user interviews before you have these three. People tell you what they remember doing, which is not the same thing as what they did. Recordings settle that argument in your favor every time.

Audit the visual layer and the design system

This is where most so-called redesigns stop, and it is only one slice of the audit. Open the site and squint at it for ten seconds. You should still be able to tell what the primary action is on each page. If you can't, the hierarchy is off and that is usually a spacing problem before it is a color problem. Then go page by page checking that buttons actually look pressable, links are obviously links, and there's a visible focus ring on everything you can tab to. The number of production sites that ship without focus rings because someone added outline: none in a global reset and never put one back is genuinely depressing.
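
If you want to find the missing focus styles without tabbing through every page by hand, a rough TypeScript sketch of that check is below. It only compares outline and box-shadow, and programmatic focus() does not always trigger :focus-visible rules, so treat anything it flags as a lead to verify rather than a verdict:

  // Console sketch: flag focusable elements with no obvious focus style.
  const selector = "a[href], button, input, select, textarea, [tabindex]";

  document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
    const style = getComputedStyle(el);
    // Snapshot the unfocused values before moving focus.
    const unfocused = { outline: style.outlineStyle, shadow: style.boxShadow };

    el.focus({ preventScroll: true });
    const focused = getComputedStyle(el);
    const noOutline = focused.outlineStyle === "none";
    const sameShadow = focused.boxShadow === unfocused.shadow;

    if (document.activeElement === el && noOutline && sameShadow) {
      console.warn("No visible focus style:", el);
    }
    el.blur();
  });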

The other half of this is design-system drift. If your team has a Button component but you can find five different ad-hoc buttons in the codebase, the system is not really being used. On larger projects I'll lean on a visual diff (Chromatic, Percy, or a Storybook-based pipeline) to catch this automatically. On smaller projects an hour with screenshots and a coffee covers it.
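
If the project is not on Chromatic or Percy, Playwright's built-in screenshot assertion gives you a rougher version of the same safety net. A minimal sketch, with placeholder routes and the assumption that baseURL is set in the Playwright config:

  // visual.spec.ts — minimal visual regression check with Playwright.
  // Routes are placeholders; the first run writes the baseline images,
  // later runs fail when pixels drift past the threshold.
  import { test, expect } from "@playwright/test";

  const routes = ["/", "/pricing", "/signup"];

  for (const route of routes) {
    test(`visual: ${route}`, async ({ page }) => {
      await page.goto(route);
      await page.waitForLoadState("networkidle");
      await expect(page).toHaveScreenshot(`${route.replace(/\//g, "_")}.png`, {
        fullPage: true,
        maxDiffPixelRatio: 0.01,
      });
    });
  }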

It also helps to get a second pair of eyes at this stage, ideally before you've already convinced yourself the page is fine. pixelait.com is one I've started using as that second opinion: feed it a URL, get back a structured read on hierarchy, contrast, and consistency in a few minutes. It also catches the things you stop seeing after staring at a page for a week, like a typo in a section heading, a CTA that reads at a college reading level when the audience is broader than that, or two buttons whose labels say almost the same thing in slightly different words. It is not a replacement for a real heuristic pass, and it will occasionally flag stuff that's intentional, but for a quick before-and-after check on a redesign it has saved me from shipping a couple of obvious mistakes (including an actual typo in a hero, more than once).

Check accessibility seriously, not as a checklist

Most audits go thin here. People run axe DevTools, fix the red items, and call it done. That gets you maybe a third of the real issues. The rest only surface when you actually try to use the site the way someone with a different setup would.

The fastest version that actually works: unplug your mouse and complete the primary flow with the keyboard. If the focus order skips around, or the focus indicator disappears halfway through, or you can't escape a modal, those are bugs. Then run axe DevTools and Lighthouse and fix what they find, because they catch the easy stuff cheaply. Color contrast is worth a separate pass against WCAG 2.2 AA (4.5:1 for body text, 3:1 for large text and UI components like icon buttons), since the contrast checker built into DevTools only flags some of these.
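
The math behind those ratios is short enough to script when you want to spot-check brand colors outside the browser. A sketch of the WCAG 2.x formula, for 6-digit hex colors:

  // WCAG 2.x contrast ratio between two sRGB hex colors.
  // 4.5:1 is the AA threshold for body text, 3:1 for large text and UI parts.
  function luminance(hex: string): number {
    const [r, g, b] = [0, 2, 4].map((i) => {
      const c = parseInt(hex.replace("#", "").slice(i, i + 2), 16) / 255;
      return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
    });
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
  }

  function contrastRatio(fg: string, bg: string): number {
    const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
    return (hi + 0.05) / (lo + 0.05);
  }

  console.log(contrastRatio("#767676", "#ffffff").toFixed(2)); // ~4.54, just passes AA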

The part most people skip is actually opening the page in a screen reader. VoiceOver on macOS is two keystrokes (Cmd+F5). NVDA on Windows is free. TalkBack on Android is built in. Listen to the first thirty seconds of the page being announced. If you cannot tell what the page is about, your heading structure, landmarks, or image alt text are doing the wrong job. Last thing, bump the browser text to 200 percent in settings and see what breaks. Layouts that collapse or hide content under that condition fail WCAG and they also fail anyone reading on a 4K monitor at native scaling.

The thing nobody says often enough about accessibility work: it makes the site faster for everyone. A logical tab order is faster for power users filling out forms. Visible focus rings help any keyboard user, not just screen-reader users. Plain language and good headings help anyone who is in a hurry, which is most of your traffic.

Treat performance as part of UX

Slow is broken. There is no UX rating separate from "this page took six seconds to become usable"; that is the rating. Core Web Vitals are the closest thing to a shared baseline right now: LCP under 2.5 seconds, INP under 200ms, CLS under 0.1. Run the page through PageSpeed Insights for the lab numbers, then check the Core Web Vitals report in Search Console for field data from actual visitors. When the two disagree (and they often will), trust the field data. Synthetic tests from a fast machine on a fiber connection rarely reflect what someone is getting on a mid-tier Android phone over LTE.
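
If you would rather collect your own field data than wait on the Search Console report, Google's web-vitals package does the measuring; where you send the numbers is up to you. A sketch, with /vitals as a placeholder endpoint:

  // Field measurement sketch using the web-vitals package.
  // "/vitals" is a placeholder; send the data wherever you already
  // collect analytics events.
  import { onLCP, onINP, onCLS, type Metric } from "web-vitals";

  function report(metric: Metric) {
    const body = JSON.stringify({
      name: metric.name,     // "LCP" | "INP" | "CLS"
      value: metric.value,   // ms for LCP/INP, unitless for CLS
      rating: metric.rating, // "good" | "needs-improvement" | "poor"
      page: location.pathname,
    });
    // sendBeacon survives the page being closed mid-report.
    if (!navigator.sendBeacon?.("/vitals", body)) {
      fetch("/vitals", { method: "POST", body, keepalive: true });
    }
  }

  onLCP(report);
  onINP(report);
  onCLS(report);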

The culprits that come up over and over:

  • A render-blocking webfont, often loaded from a third-party CDN with no font-display: swap
  • A massive hero image served at full resolution to mobile
  • Layout shift from ads, embeds, or images without intrinsic dimensions
  • A monolithic client bundle that should have been split with next/dynamic or route-level code splitting

Fixing the first three is usually a one-day job and moves LCP and CLS by enough to clear the threshold.
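
The fourth is more work, but the first cut is often just deferring whatever heavy component sits below the fold. A sketch with next/dynamic, where AnalyticsChart is a stand-in for whatever is actually bloating the bundle:

  // Sketch: split a heavy, below-the-fold component out of the main bundle.
  import dynamic from "next/dynamic";

  const AnalyticsChart = dynamic(() => import("../components/AnalyticsChart"), {
    ssr: false,                           // client-only widget, skip server rendering
    loading: () => <p>Loading chart…</p>, // visible system status while the chunk loads
  });

  export default function DashboardPage() {
    return (
      <main>
        <h1>Dashboard</h1>
        <AnalyticsChart />
      </main>
    );
  }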

Pull it all together with a prioritized issue list

By the end of the audit you'll have something like fifty to a hundred observations. The output that actually drives change is a prioritized list, not a deck.

I use a simple two-axis ranking:

Priority | Impact                      | Effort
P0       | Blocks the primary outcome  | Any
P1       | High impact on a key flow   | Low to medium
P2       | Medium impact               | Low
P3       | Polish, low impact          | Any

Anything in P0 ships this sprint. P1 and P2 fill the next two sprints. P3 lives in a backlog that gets revisited next quarter, not this one. The discipline is in being honest about impact: if you can't connect an issue to one of the metrics you defined at the start, it probably belongs in P3.
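
If the observation list lives in a spreadsheet export rather than in someone's head, the mapping is easy to make explicit. A sketch of how I'd encode it; the field names are just my own habit, not a standard:

  // Sketch: rank audit findings with the two-axis scheme from the table above.
  type Impact = "blocks-primary" | "high" | "medium" | "low";
  type Effort = "low" | "medium" | "high";

  interface Finding {
    title: string;
    metric: string; // which metric from the "define good" step it maps to
    impact: Impact;
    effort: Effort;
  }

  function priority(f: Finding): "P0" | "P1" | "P2" | "P3" {
    if (f.impact === "blocks-primary") return "P0";
    if (f.impact === "high" && f.effort !== "high") return "P1";
    if (f.impact === "medium" && f.effort === "low") return "P2";
    return "P3";
  }

  const ranked = (findings: Finding[]) =>
    [...findings].sort((a, b) => priority(a).localeCompare(priority(b)));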

What I actually keep installed

For the heuristic pass I genuinely just use a printed sheet of Nielsen's ten heuristics taped above my monitor and a notebook. There is no software better than that for the job. Analytics is whatever the company has already paid for, which is usually GA4, sometimes Plausible if the team cares about lighter loading, and once in a blue moon Mixpanel. For heatmaps and session recordings I default to Microsoft Clarity because it is free and it works, and I only switch to Hotjar when I need real segmentation that Clarity does not handle well.

The visual review is where my stack has actually changed in the last year. I still do a manual walk-through, but I now also drop the URL into pixelait.com and read what it flags before committing to my own conclusions. The two passes catch slightly different things, which is the whole point of doing both.

For accessibility, axe DevTools and Lighthouse cover the easy half and either NVDA or VoiceOver covers the rest. Performance is PageSpeed Insights for the quick read, WebPageTest when I need to see filmstrips, and the Chrome DevTools Performance panel when I am actually fixing something. User testing, when I bother, is UserTesting.com for moderated sessions or Maze for the unmoderated kind. And if the site is big enough to justify it, Chromatic or Percy will keep an eye on visual regressions between releases so the audit findings do not silently come back.

The minimum viable kit for a small site is shorter than that. Clarity, axe DevTools, PageSpeed Insights, and a quick pass through pixelait or any similar analyzer will surface most of what a paid agency audit would, and you can do it in an afternoon between meetings.

Wrap-up

If there is one thing I'd want you to take from all of this, it is that an audit is an evidence-collection exercise, not a taste exercise. Pick the outcome that matters, walk the site against the ten heuristics with that outcome in mind, and then go look at what users are actually doing on the recordings before you trust your own opinion. Fix the visual, accessibility, and performance issues that are clearly between the user and the outcome. Park everything else.

The teams I've seen actually move conversion through this kind of work are not necessarily the ones with the strongest designers. They are usually the ones who run the audit on a calendar, every quarter, and ship the top three or four items before someone gets the urge to commission another redesign. The list of tools up there is enough to start. The hard part, in my experience, is getting the audit on the calendar in the first place and keeping it there once the next priority shows up.