AVIF Encoder Comparison 2025 — SVT‑AV1 vs libaom vs rav1e (Quality and Speed)
Published: Sep 21, 2025 · Reading time: 3 min · By Unified Image Tools Editorial
Introduction
AVIF offers strong compression, but encoders (SVT‑AV1 / libaom / rav1e) differ in quality/speed/stability. Don’t chase “smallest file” alone — consider visual artifact character, decode time, CI runtime, and parameter stability. This article documents a comparison workflow over a representative dataset and recommends presets for practical use.
TL;DR
- Compatibility/stability: libaom; speed: SVT‑AV1; qualitative comparison: rav1e
- Start with WebP as a baseline and add AVIF if artifacts are acceptable
- For LCP candidates, factor in decode time and be prepared to adjust q by roughly 5 points in either direction
Background: Ultimate Image Compression Strategy 2025 – A Practical Guide to Preserving Quality While Optimizing Perceived Speed, Compression Artifact Audit 2025 — What to look for, when it worsens, and how to avoid
Dataset and evaluation axes
Dataset (example):
- Portrait (skin/bokeh) ×3
- Text/UI (fine lines/high contrast) ×3
- Landscape (foliage/bricks) ×3
- Gradients (sky/background) ×3
Metrics:
- Visual degradation (skin banding/edges/texture retention)
- File size (4 widths: 640/960/1280/1536)
- Encode time (avg per file; CI throughput)
- Decode time (perceived for LCP candidates)
Use SSIM/PSNR as aids; decide by visual checks.
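To keep runs reproducible, it helps to define the grid as data and version it alongside the dataset. A minimal TypeScript sketch; the BenchGrid type and its field names are illustrative, not tied to any tool:
// Benchmark grid as versioned data (names are illustrative).
type EncoderId = 'libaom' | 'svt-av1' | 'rav1e';
interface BenchGrid {
  widths: number[];      // output widths in px
  quantizers: number[];  // q / CRF values to sweep
  speeds: number[];      // encoder speed presets
  encoders: EncoderId[];
}
const grid: BenchGrid = {
  widths: [640, 960, 1280, 1536],
  quantizers: [30, 32, 34, 36, 38],
  speeds: [6],
  encoders: ['libaom', 'svt-av1', 'rav1e'],
};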
Comparison lenses
- Visual artifacts (edges/skin/gradients/noise)
- File size (3–5 widths)
- Encode/decode time (server/client)
Additional:
- Parameter stability (variance with the same q/speed)
- Deployability (CLI/libraries/hosting support)
Commands (examples)
# libaom (avifenc)
avifenc --min 28 --max 32 --speed 6 input.png out-libaom.avif
# SVT-AV1
SvtAv1EncApp -i input.y4m --preset 6 --crf 35 -b out-svt.ivf
# → package the resulting IVF bitstream into an AVIF container with appropriate tooling
# rav1e
rav1e input.y4m -s 6 -q 35 -o out-rav1e.ivf
In practice, you’ll often run bulk jobs via GUI/libraries; if your libavif build was compiled with SVT‑AV1 or rav1e support, avifenc can also select them directly via --codec, which avoids the manual packaging step. Either way, include toolchain stability and CI runtime in your evaluation.
Suggested ranges (still images)
- libaom (avifenc): --min 28 --max 32 --speed 6 (quality/stability)
- SVT‑AV1: --crf 34–38 --preset 6 (balance of speed/size)
- rav1e: -q 34–38 -s 6 (good for qualitative checks)
UI/text may need 4:4:4/lossless; photos are usually fine with 4:2:0.
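As a rule of thumb in code: a small, hypothetical helper that picks avifenc arguments by content type, assuming a libavif build whose avifenc supports --yuv and --lossless:
// Hypothetical helper: choose avifenc arguments by content type.
type Content = 'photo' | 'ui-text';
function buildAvifencArgs(content: Content, input: string, output: string): string[] {
  if (content === 'ui-text') {
    // Fine lines and hard edges: keep full chroma, consider lossless.
    return ['--yuv', '444', '--lossless', input, output];
  }
  // Photos: 4:2:0 with a moderate quantizer range is usually enough.
  return ['--yuv', '420', '--min', '28', '--max', '32', '--speed', '6', input, output];
}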
Bench pipeline (automation sketch)
- Define dataset: 12–20 fixed scenes (version when updating)
- Generate grid: encoder × widths × q/speed
- Measure: size/encode time/SSIM/Butteraugli
- Visualize: auto‑built comparison pages for human inspection
- Recommend: pick presets by thresholds and apply in CI
// Minimal promise wrapper around child_process.execFile for the bench scripts.
import { execFile } from 'node:child_process';

function run(cmd: string, args: string[]): Promise<string> {
  return new Promise((resolve, reject) =>
    execFile(cmd, args, (err, stdout) => (err ? reject(err) : resolve(stdout)))
  );
}
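Building on that helper, a rough sketch of the “generate grid” step: sweep a quantizer range for one source image via avifenc. File naming and the q values are illustrative, and avifenc is assumed to be on PATH.
// Encode one source at several quantizer settings (libaom via avifenc).
async function sweepQuantizers(input: string): Promise<void> {
  for (const q of [30, 32, 34, 36, 38]) {
    const out = input.replace(/\.png$/, `-libaom-q${q}.avif`);
    await run('avifenc', ['--min', String(q), '--max', String(q), '--speed', '6', input, out]);
  }
}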
Operational notes (pitfalls)
- rav1e quality/speed can fluctuate across versions; pin versions
- SVT‑AV1 is fast, but artifact character can shift noticeably between adjacent q steps
- libaom is steady; watch CI time and config complexity
- Confirm whether your image CDN re‑encodes/re‑packages (avoid double compression)
QA checklist
- [ ] No visible artifacts on skin/text/gradients/high‑frequency textures
- [ ] Widths and sizes align with layout
- [ ] LCP candidates tuned for decode time vs visual quality
- [ ] CI runtime acceptable
FAQ
Q. Which encoder first?
A. Establish a baseline with libaom; if speed is an issue, try SVT‑AV1; keep rav1e for qualitative comparisons.
Q. Minor artifacts but big file savings — worth it?
A. Consider decode time and end‑user perception. Choose wins that benefit users, not just benchmarks.
Selection guidelines
- “When unsure, ship WebP; add AVIF when it passes visual checks” — a safe migration path
- Differences are content‑dependent. If AVIF struggles on skin/text/gradients, prefer WebP
- Include CI time/hosting compatibility/decode time in the overall decision
Finally, document screenshots and settings in a shared log for reuse.
Summary
Encoders differ, but the final decision is about visual quality and operational constraints. Run WebP as the stable base, and roll out AVIF gradually where it helps.
Related Articles
Compression Artifact Audit 2025 — What to look for, when it worsens, and how to avoid
A practical, fast inspection routine for JPEG/WebP/AVIF artifacts. Where they appear, what worsens them, and concrete mitigations.
Ultimate Image Compression Strategy 2025 – A Practical Guide to Preserving Quality While Optimizing Perceived Speed
A comprehensive, field-tested image compression and delivery strategy for Core Web Vitals: format choices, presets, responsive workflows, build/CDN automation, and troubleshooting.
PNG Optimization in 2025 — Palettization and Lossless Squeeze
A practical workflow to reduce PNG size while preserving transparency and sharp edges: palettization, redundant chunk removal, and final lossless squeeze.
Animation UX Optimization 2025 — Improve Experience, Cut Bytes
Retire GIFs in favor of video/animated WebP/AVIF. Practical patterns for loops and motion design, balancing performance and accessibility.
AVIF vs WebP vs JPEG XL in 2025 — A Practical, Measured Comparison
We benchmark AVIF, WebP, and JPEG XL for real-world use: visual quality, file size, decode speed, and browser support. Get an actionable rollout strategy, fallback design, and integration guidance.
Format Conversion Strategies 2025 — When to Use WebP/AVIF/JPEG/PNG and Why
Decision trees and workflows for choosing the right format per content type, balancing compatibility, size, and fidelity.