AI Image Quality Metrics: LPIPS & SSIM Practical Guide 2025

Published: Sep 26, 2025 · Reading time: 15 min · Unified Image Tools Editorial Team

Image-processing quality evaluation is moving away from traditional numeric metrics toward AI-based evaluation grounded in human perception. This article provides implementation-level detail on current evaluation methods, including LPIPS (Learned Perceptual Image Patch Similarity) and SSIM (Structural Similarity Index Measure).

The Evolution of AI Image Quality Evaluation

Limitations of Traditional Methods

Problems with PSNR (Peak Signal-to-Noise Ratio)

  • Evaluates only pixel-level differences
  • Diverges sharply from human perception
  • Ignores structural similarity
  • Cannot properly assess compression artifacts
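The divergence from human perception is easy to demonstrate: two distortions with identical mean squared error, and therefore identical PSNR, can look completely different. A minimal sketch in pure NumPy (the `psnr` helper below is illustrative):

```python
import numpy as np

def psnr(a, b, data_range=1.0):
    # PSNR = 20 * log10(data_range / sqrt(MSE))
    mse = np.mean((a - b) ** 2)
    return float('inf') if mse == 0 else 20 * np.log10(data_range / np.sqrt(mse))

rng = np.random.default_rng(0)
img = 0.1 + 0.8 * rng.random((64, 64))  # values stay inside [0.1, 0.9]

# Distortion 1: uniform brightness shift (structure fully preserved)
shifted = img + 0.05
# Distortion 2: per-pixel noise with exactly the same energy (structure damaged)
noisy = img + rng.choice([-0.05, 0.05], size=img.shape)

# Both have MSE = 0.05**2, so PSNR cannot tell them apart
print(psnr(img, shifted), psnr(img, noisy))  # both ≈ 26.02 dB
```

Any metric that depends only on the pixel-wise error is blind to this distinction, which is exactly the gap that perceptual metrics target.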

Why a New Approach Is Needed

  • Mimics the human visual system
  • Extracts features through deep learning
  • Quantifies perceptual similarity
  • Adapts evaluation to content

Internal links: Image Quality Budgets and CI Gates 2025 — Operations That Proactively Prevent Regressions; Complete Image Compression Strategy 2025 — A Practical Guide to Optimizing Perceived Speed While Preserving Quality

LPIPS: A Learning-Based Perceptual Metric

Theoretical Basis of LPIPS

LPIPS (Learned Perceptual Image Patch Similarity) is a perceptual similarity metric built on feature representations from deep neural networks.
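As introduced by Zhang et al. (2018), LPIPS extracts activations from several layers of a pretrained network, unit-normalizes them in the channel dimension, scales each channel by a learned weight $w_l$, and averages the squared differences spatially:

```latex
d(x, x_0) = \sum_{l} \frac{1}{H_l W_l} \sum_{h,w}
\left\lVert \, w_l \odot \left( \hat{y}^{l}_{hw} - \hat{y}^{l}_{0,hw} \right) \right\rVert_2^2
```

where $\hat{y}^{l}$ and $\hat{y}^{l}_{0}$ are the normalized layer-$l$ activations of the two patches. Lower values mean the patches are perceptually closer.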

import torch
import torch.nn as nn
import lpips
from torchvision import models, transforms

class LPIPSEvaluator:
    def __init__(self, net='alex', use_gpu=True):
        """
        Initialize the LPIPS model
        net: choose from 'alex', 'vgg', 'squeeze'
        """
        self.loss_fn = lpips.LPIPS(net=net)
        self.device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
        self.loss_fn.to(self.device)
        
        # Preprocessing pipeline: lpips.LPIPS expects inputs scaled to
        # [-1, 1], not ImageNet-normalized tensors
        self.transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5, 0.5, 0.5],
                                 std=[0.5, 0.5, 0.5])
        ])
    
    def calculate_lpips(self, img1, img2):
        """
        Compute the LPIPS distance between two images
        """
        # Preprocessing
        tensor1 = self.transform(img1).unsqueeze(0).to(self.device)
        tensor2 = self.transform(img2).unsqueeze(0).to(self.device)
        
        # LPIPS computation
        with torch.no_grad():
            distance = self.loss_fn(tensor1, tensor2)
        
        return distance.item()
    
    def batch_evaluate(self, image_pairs):
        """
        Evaluate LPIPS over a batch of image pairs
        """
        results = []
        
        for img1, img2 in image_pairs:
            lpips_score = self.calculate_lpips(img1, img2)
            results.append({
                'lpips_distance': lpips_score,
                # Rough similarity heuristic; LPIPS is not strictly bounded by 1
                'perceptual_similarity': 1 - lpips_score,
                'quality_category': self.categorize_quality(lpips_score)
            })
        
        return results
    
    def categorize_quality(self, lpips_score):
        """
        Classify the quality category from the LPIPS score
        """
        if lpips_score < 0.1:
            return 'excellent'
        elif lpips_score < 0.2:
            return 'good'
        elif lpips_score < 0.4:
            return 'acceptable'
        else:
            return 'poor'

Building a Custom LPIPS Network

class CustomLPIPSNetwork(nn.Module):
    def __init__(self, backbone='resnet50'):
        super().__init__()
        
        # Backbone network selection
        if backbone == 'resnet50':
            self.features = models.resnet50(pretrained=True)
            self.features = nn.Sequential(*list(self.features.children())[:-2])
        elif backbone == 'efficientnet':
            self.features = models.efficientnet_b0(pretrained=True).features
        
        # Indices of the backbone children to tap for features; for the
        # sliced resnet50 these yield 64, 256, and 512 channels
        # (conv1, layer1, layer2), matching the linear layers below
        self.feature_layers = [0, 4, 5]
        
        # Linear transformation layers (one per tapped feature map)
        self.linear_layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(64, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            ),
            nn.Sequential(
                nn.Conv2d(256, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            ),
            nn.Sequential(
                nn.Conv2d(512, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            )
        ])
    
    def extract_features(self, x):
        # Collect activations at the configured layer indices
        features = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.feature_layers:
                features.append(x)
        return features
    
    def forward(self, x1, x2):
        # Feature extraction
        features1 = self.extract_features(x1)
        features2 = self.extract_features(x2)
        
        # Distance computation at each layer
        distances = []
        for i, (f1, f2) in enumerate(zip(features1, features2)):
            # L2 normalization along the channel axis
            f1_norm = f1 / (torch.norm(f1, dim=1, keepdim=True) + 1e-8)
            f2_norm = f2 / (torch.norm(f2, dim=1, keepdim=True) + 1e-8)
            
            # Squared difference
            diff = (f1_norm - f2_norm) ** 2
            
            # Linear transformation
            if i < len(self.linear_layers):
                diff = self.linear_layers[i](diff)
            
            # Spatial average
            distance = torch.mean(diff, dim=[2, 3])
            distances.append(distance)
        
        # Average across layers (uniform weights)
        total_distance = sum(distances) / len(distances)
        return total_distance

SSIM: Structural Similarity Index

Mathematical Definition of SSIM
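For two image windows $x$ and $y$, SSIM combines luminance, contrast, and structure comparisons, which collapse to the familiar closed form:

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad C_1 = (k_1 L)^2, \quad C_2 = (k_2 L)^2
```

where $L$ is the data range and $k_1 = 0.01$, $k_2 = 0.03$ are the defaults used by the evaluator below.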

import numpy as np
from skimage.metrics import structural_similarity
from scipy.ndimage import gaussian_filter

class SSIMEvaluator:
    def __init__(self, window_size=11, k1=0.01, k2=0.03, sigma=1.5):
        self.window_size = window_size
        self.k1 = k1
        self.k2 = k2
        self.sigma = sigma
    
    def calculate_ssim(self, img1, img2, data_range=1.0):
        """
        Basic SSIM computation
        """
        return structural_similarity(
            img1, img2,
            data_range=data_range,
            # channel_axis replaces the deprecated multichannel flag
            channel_axis=-1 if img1.ndim == 3 else None,
            gaussian_weights=True,
            sigma=self.sigma,
            use_sample_covariance=False
        )
    
    def calculate_ms_ssim(self, img1, img2, weights=None):
        """
        Multi-Scale SSIM (MS-SSIM) implementation.
        Simplified: the reference MS-SSIM applies the luminance term only
        at the coarsest scale, whereas plain SSIM is used at every scale here.
        """
        if weights is None:
            weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]
        
        levels = len(weights)
        mssim = 1.0
        
        for i in range(levels):
            ssim_val = self.calculate_ssim(img1, img2)
            mssim *= ssim_val ** weights[i]
            
            if i < levels - 1:
                # Downsample for the next scale
                img1 = self.downsample(img1)
                img2 = self.downsample(img2)
        
        return mssim
    
    def downsample(self, img):
        """
        Gaussian filtering + 2x downsampling (spatial axes only)
        """
        sigma = (1.0, 1.0, 0) if img.ndim == 3 else 1.0
        filtered = gaussian_filter(img, sigma=sigma)
        return filtered[::2, ::2]
    
    def ssim_map(self, img1, img2):
        """
        Generate a per-pixel SSIM map (assumes data range [0, 1])
        """
        # Convert to grayscale
        if len(img1.shape) == 3:
            img1_gray = np.mean(img1, axis=2)
            img2_gray = np.mean(img2, axis=2)
        else:
            img1_gray = img1
            img2_gray = img2
        
        # Local means
        mu1 = gaussian_filter(img1_gray, self.sigma)
        mu2 = gaussian_filter(img2_gray, self.sigma)
        
        mu1_sq = mu1 ** 2
        mu2_sq = mu2 ** 2
        mu1_mu2 = mu1 * mu2
        
        # Local variances and covariance
        sigma1_sq = gaussian_filter(img1_gray ** 2, self.sigma) - mu1_sq
        sigma2_sq = gaussian_filter(img2_gray ** 2, self.sigma) - mu2_sq
        sigma12 = gaussian_filter(img1_gray * img2_gray, self.sigma) - mu1_mu2
        
        # SSIM computation (data_range fixed at 1.0)
        c1 = (self.k1 * 1.0) ** 2
        c2 = (self.k2 * 1.0) ** 2
        
        ssim_map = ((2 * mu1_mu2 + c1) * (2 * sigma12 + c2)) / \
                   ((mu1_sq + mu2_sq + c1) * (sigma1_sq + sigma2_sq + c2))
        
        return ssim_map
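As a sanity check on the definition, a single-window variant that computes one set of statistics over the whole image behaves as expected: identical images score 1.0 and distortions score lower. This sketch is pure NumPy and illustrative only, since the evaluator above uses Gaussian-weighted local windows:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # One set of statistics over the whole image (no local windows)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = img + 0.1 * rng.standard_normal(img.shape)

print(global_ssim(img, img))    # 1.0 for identical images
print(global_ssim(img, noisy))  # strictly below 1.0
```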

Advanced Evaluation Metrics

DISTS: Deep Image Structure and Texture Similarity

import torch
import torch.nn as nn
import torchvision.models as models

class DISTSEvaluator:
    def __init__(self, use_gpu=True):
        self.device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
        
        # Use the feature-extraction part of VGG16, split into disjoint
        # stages (overlapping slices would re-apply the early layers to
        # already-processed activations)
        vgg = models.vgg16(pretrained=True).features
        self.stages = nn.ModuleList([
            vgg[:4],     # -> conv1_2
            vgg[4:9],    # -> conv2_2
            vgg[9:16],   # -> conv3_3
            vgg[16:23],  # -> conv4_3
            vgg[23:30]   # -> conv5_3
        ]).to(self.device)
        
        for param in self.stages.parameters():
            param.requires_grad = False
    
    def preprocess(self, img):
        """
        Convert an HxWx3 float array in [0, 1] to an ImageNet-normalized
        NCHW tensor (VGG expects ImageNet statistics)
        """
        tensor = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0)
        mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
        std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
        return (tensor - mean) / std
    
    def extract_features(self, x):
        features = []
        for stage in self.stages:
            x = stage(x)
            features.append(x)
        return features
    
    def calculate_dists(self, img1, img2):
        """
        Compute a simplified DISTS-style score (Deep Image Structure and
        Texture Similarity); in this variant higher means more similar
        """
        # Preprocessing
        tensor1 = self.preprocess(img1).to(self.device)
        tensor2 = self.preprocess(img2).to(self.device)
        
        # Feature extraction
        feats1 = self.extract_features(tensor1)
        feats2 = self.extract_features(tensor2)
        
        structure_score = 0
        texture_score = 0
        
        for f1, f2 in zip(feats1, feats2):
            # Structure similarity (similarity of means)
            struct_sim = self.structure_similarity(f1, f2)
            structure_score += struct_sim
            
            # Texture similarity (similarity of covariances)
            texture_sim = self.texture_similarity(f1, f2)
            texture_score += texture_sim
        
        # Average over stages, then combine with fixed weights
        structure_score /= len(feats1)
        texture_score /= len(feats1)
        
        alpha = 0.8  # structure weight
        beta = 0.2   # texture weight
        
        dists_score = alpha * structure_score + beta * texture_score
        return dists_score.item()
    
    def structure_similarity(self, feat1, feat2):
        """
        Compute structure similarity
        """
        # Mean along the channel axis
        mean1 = torch.mean(feat1, dim=1, keepdim=True)
        mean2 = torch.mean(feat2, dim=1, keepdim=True)
        
        # Structural similarity
        numerator = 2 * mean1 * mean2
        denominator = mean1 ** 2 + mean2 ** 2
        
        structure_map = numerator / (denominator + 1e-8)
        return torch.mean(structure_map)
    
    def texture_similarity(self, feat1, feat2):
        """
        Compute texture similarity
        """
        # Covariance matrices of the feature maps
        b, c, h, w = feat1.shape
        feat1_flat = feat1.view(b, c, -1)
        feat2_flat = feat2.view(b, c, -1)
        
        # Covariance computation
        cov1 = torch.bmm(feat1_flat, feat1_flat.transpose(1, 2)) / (h * w - 1)
        cov2 = torch.bmm(feat2_flat, feat2_flat.transpose(1, 2)) / (h * w - 1)
        
        # Similarity via the Frobenius norm
        diff_norm = torch.norm(cov1 - cov2, 'fro', dim=[1, 2])
        max_norm = torch.maximum(torch.norm(cov1, 'fro', dim=[1, 2]),
                                 torch.norm(cov2, 'fro', dim=[1, 2]))
        
        texture_sim = 1 - diff_norm / (max_norm + 1e-8)
        return torch.mean(texture_sim)

FID: Fréchet Inception Distance
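FID models the Inception features of the real and generated image sets as Gaussians $\mathcal{N}(\mu_r, \Sigma_r)$ and $\mathcal{N}(\mu_g, \Sigma_g)$ and measures the Fréchet distance between them:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
+ \mathrm{Tr}\!\left( \Sigma_r + \Sigma_g - 2 \left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```

Lower is better; identical feature distributions give a FID of 0. Note that unlike LPIPS or SSIM, FID compares distributions of images, not individual pairs.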

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from scipy.linalg import sqrtm

class FIDEvaluator:
    def __init__(self):
        # Inception v3 model (for feature extraction)
        self.inception = models.inception_v3(pretrained=True, transform_input=False)
        self.inception.fc = nn.Identity()  # Drop the classification head
        self.inception.eval()
        
        for param in self.inception.parameters():
            param.requires_grad = False
    
    def extract_features(self, images):
        """
        Feature extraction with Inception v3 (expects normalized CHW tensors)
        """
        features = []
        
        with torch.no_grad():
            for img in images:
                # Resize to the expected input size (299x299)
                img_resized = F.interpolate(img.unsqueeze(0), 
                                            size=(299, 299), 
                                            mode='bilinear',
                                            align_corners=False)
                
                feat = self.inception(img_resized)
                features.append(feat.cpu().numpy())
        
        return np.concatenate(features, axis=0)
    
    def calculate_fid(self, real_images, generated_images):
        """
        Compute FID (Fréchet Inception Distance)
        """
        # Feature extraction
        real_features = self.extract_features(real_images)
        gen_features = self.extract_features(generated_images)
        
        # Statistics
        mu_real = np.mean(real_features, axis=0)
        sigma_real = np.cov(real_features, rowvar=False)
        
        mu_gen = np.mean(gen_features, axis=0)
        sigma_gen = np.cov(gen_features, rowvar=False)
        
        # Fréchet distance
        diff = mu_real - mu_gen
        covmean = sqrtm(sigma_real.dot(sigma_gen))
        
        # Discard imaginary components caused by numerical error
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        
        fid = diff.dot(diff) + np.trace(sigma_real + sigma_gen - 2 * covmean)
        
        return fid

Building a Comprehensive Evaluation System

Multi-Metric Evaluator

class ComprehensiveQualityEvaluator:
    def __init__(self):
        self.lpips_evaluator = LPIPSEvaluator()
        self.ssim_evaluator = SSIMEvaluator()
        self.dists_evaluator = DISTSEvaluator()
        self.fid_evaluator = FIDEvaluator()  # for distribution-level checks
        
        # Weight configuration (per-pair metrics only; FID is a set-level
        # statistic and does not enter the per-pair composite)
        self.weights = {
            'lpips': 0.35,
            'ssim': 0.35,
            'dists': 0.2,
            'psnr': 0.1
        }
    
    def evaluate_single_pair(self, img1, img2):
        """
        Comprehensive quality evaluation of one image pair
        """
        results = {}
        
        # LPIPS
        results['lpips'] = self.lpips_evaluator.calculate_lpips(img1, img2)
        
        # SSIM
        results['ssim'] = self.ssim_evaluator.calculate_ssim(img1, img2)
        
        # DISTS
        results['dists'] = self.dists_evaluator.calculate_dists(img1, img2)
        
        # PSNR (reference value)
        results['psnr'] = self.calculate_psnr(img1, img2)
        
        # Composite score
        composite_score = self.calculate_composite_score(results)
        results['composite_score'] = composite_score
        
        # Quality level
        results['quality_level'] = self.determine_quality_level(composite_score)
        
        return results
    
    def calculate_psnr(self, img1, img2):
        """
        PSNR computation (assumes inputs scaled to [0, 1])
        """
        mse = np.mean((img1 - img2) ** 2)
        if mse == 0:
            return float('inf')
        return 20 * np.log10(1.0 / np.sqrt(mse))
    
    def calculate_composite_score(self, metrics):
        """
        Composite score from multiple metrics
        """
        # Normalize each metric onto the 0-1 range
        normalized_scores = {
            'lpips': 1 - min(metrics['lpips'], 1.0),  # lower is better
            'ssim': metrics['ssim'],                   # higher is better
            'dists': metrics['dists'],                 # higher is better (similarity variant)
            'psnr': min(metrics['psnr'] / 50, 1.0),   # squash dB onto [0, 1]
        }
        
        # Weighted combination
        composite = sum(
            self.weights[metric] * score 
            for metric, score in normalized_scores.items()
            if metric in self.weights
        )
        
        return composite
    
    def determine_quality_level(self, score):
        """
        Map the composite score to a quality level
        """
        if score >= 0.9:
            return 'excellent'
        elif score >= 0.8:
            return 'very_good'
        elif score >= 0.7:
            return 'good'
        elif score >= 0.6:
            return 'acceptable'
        elif score >= 0.5:
            return 'poor'
        else:
            return 'very_poor'
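To make the weighted combination concrete, here is a worked example with illustrative numbers. The weights and metric values below are hypothetical, chosen only to show the arithmetic, not as tuning advice:

```python
# Illustrative weights (must sum to 1) and example metric values
weights = {'lpips': 0.35, 'ssim': 0.35, 'dists': 0.2, 'psnr': 0.1}
metrics = {'lpips': 0.10, 'ssim': 0.90, 'dists': 0.85, 'psnr': 40.0}

normalized = {
    'lpips': 1 - min(metrics['lpips'], 1.0),  # lower is better -> invert
    'ssim': metrics['ssim'],                  # already in [0, 1]
    'dists': metrics['dists'],                # similarity variant, higher is better
    'psnr': min(metrics['psnr'] / 50, 1.0),   # squash dB onto [0, 1]
}

composite = sum(weights[m] * s for m, s in normalized.items())
print(round(composite, 4))  # 0.88
```

A composite of 0.88 falls in the 'very_good' band (at least 0.8, below 0.9) of the classification above.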

Batch Processing System

import asyncio
import aiofiles
import numpy as np
from pathlib import Path

class BatchQualityEvaluator:
    def __init__(self, evaluator, max_workers=4):
        self.evaluator = evaluator
        self.max_workers = max_workers
        self.semaphore = asyncio.Semaphore(max_workers)
    
    def get_image_pairs(self, original_path, processed_path):
        """
        Pair images by matching filenames across the two directories
        """
        pairs = []
        for orig in sorted(original_path.iterdir()):
            candidate = processed_path / orig.name
            if candidate.exists():
                pairs.append((orig, candidate))
        return pairs
    
    async def evaluate_directory(self, original_dir, processed_dir, output_file):
        """
        Batch evaluation of a directory
        """
        original_path = Path(original_dir)
        processed_path = Path(processed_dir)
        
        # Collect the image file pairs
        image_pairs = self.get_image_pairs(original_path, processed_path)
        
        # Batch evaluation with parallel processing
        tasks = [
            self.evaluate_pair_async(orig, proc) 
            for orig, proc in image_pairs
        ]
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        # Generate the report
        report = self.generate_report(image_pairs, results)
        
        # Save the results
        await self.save_report(report, output_file)
        
        return report
    
    async def evaluate_pair_async(self, original_path, processed_path):
        """
        Asynchronous evaluation of one image pair
        """
        async with self.semaphore:
            # Load the images
            img1 = await self.load_image_async(original_path)
            img2 = await self.load_image_async(processed_path)
            
            # Run the evaluation
            result = self.evaluator.evaluate_single_pair(img1, img2)
            result['original_path'] = str(original_path)
            result['processed_path'] = str(processed_path)
            
            return result
    
    async def load_image_async(self, path):
        """
        Asynchronous image loading
        """
        async with aiofiles.open(path, 'rb') as f:
            data = await f.read()
        
        # Decode the image with PIL
        from PIL import Image
        import io
        img = Image.open(io.BytesIO(data))
        return np.array(img) / 255.0
    
    def calculate_quality_distribution(self, results):
        """
        Count how many results fall into each quality level
        """
        from collections import Counter
        return dict(Counter(r['quality_level'] for r in results))
    
    def generate_report(self, image_pairs, results):
        """
        Generate the evaluation report
        """
        successful_results = [r for r in results if not isinstance(r, Exception)]
        
        # Statistics
        stats = {
            'total_images': len(image_pairs),
            'successful_evaluations': len(successful_results),
            'average_composite_score': np.mean([r['composite_score'] for r in successful_results]),
            'average_lpips': np.mean([r['lpips'] for r in successful_results]),
            'average_ssim': np.mean([r['ssim'] for r in successful_results]),
            'quality_distribution': self.calculate_quality_distribution(successful_results)
        }
        
        report = {
            'summary': stats,
            'detailed_results': successful_results,
            'failed_evaluations': [str(r) for r in results if isinstance(r, Exception)]
        }
        
        return report
    
    async def save_report(self, report, output_file):
        """
        Save the report as JSON
        """
        import json
        async with aiofiles.open(output_file, 'w') as f:
            await f.write(json.dumps(report, indent=2, default=str))

Real-Time Quality Monitoring

Real-Time Quality Monitor

import threading
import queue
import time
from collections import deque

import numpy as np

class RealTimeQualityMonitor:
    def __init__(self, evaluator, window_size=100):
        self.evaluator = evaluator
        self.window_size = window_size
        self.quality_history = deque(maxlen=window_size)
        self.alert_queue = queue.Queue()
        self.is_running = False
        
        # Alert thresholds (composite_score degrades as it falls,
        # LPIPS degrades as it rises)
        self.thresholds = {
            'composite_score': {
                'warning': 0.6,
                'critical': 0.4
            },
            'lpips': {
                'warning': 0.3,
                'critical': 0.5
            }
        }
    
    def start_monitoring(self, input_queue):
        """
        Start real-time monitoring
        """
        self.is_running = True
        monitor_thread = threading.Thread(
            target=self.monitor_loop, 
            args=(input_queue,)
        )
        monitor_thread.start()
        return monitor_thread
    
    def monitor_loop(self, input_queue):
        """
        Main monitoring loop
        """
        while self.is_running:
            try:
                # Fetch an image pair from the queue
                img_pair = input_queue.get(timeout=1.0)
                
                if img_pair is None:  # Termination signal
                    break
                
                # Evaluate quality
                result = self.evaluator.evaluate_single_pair(*img_pair)
                
                # Append to the history
                self.quality_history.append(result)
                
                # Check for alerts; statistics are computed on demand
                # via get_current_statistics
                self.check_alerts(result)
                
            except queue.Empty:
                continue
            except Exception as e:
                print(f"Monitoring error: {e}")
    
    def check_alerts(self, result):
        """
        Check alert conditions. composite_score degrades as it falls,
        while LPIPS degrades as it rises, so the comparison direction
        depends on the metric.
        """
        for metric, thresholds in self.thresholds.items():
            if metric not in result:
                continue
            value = result[metric]
            
            if metric == 'lpips':
                breached = lambda t: value > t
            else:
                breached = lambda t: value < t
            
            if breached(thresholds['critical']):
                level = 'critical'
            elif breached(thresholds['warning']):
                level = 'warning'
            else:
                continue
            
            self.alert_queue.put({
                'level': level,
                'metric': metric,
                'value': value,
                'threshold': thresholds[level],
                'timestamp': time.time()
            })
    
    def get_current_statistics(self):
        """
        Get the current statistics
        """
        if not self.quality_history:
            return {}
        
        recent_scores = [r['composite_score'] for r in self.quality_history]
        recent_lpips = [r['lpips'] for r in self.quality_history]
        
        return {
            'window_size': len(self.quality_history),
            'average_quality': np.mean(recent_scores),
            'quality_trend': self.calculate_trend(recent_scores),
            'average_lpips': np.mean(recent_lpips),
            'quality_stability': np.std(recent_scores)
        }
    
    def calculate_trend(self, scores):
        """
        Slope of a least-squares fit over the window;
        positive means quality is improving
        """
        if len(scores) < 2:
            return 0.0
        return float(np.polyfit(np.arange(len(scores)), scores, 1)[0])

Automatic Quality Optimization

Dynamic Parameter Tuning

class AdaptiveQualityOptimizer:
    def __init__(self, evaluator, target_quality=0.8):
        self.evaluator = evaluator
        self.target_quality = target_quality
        self.parameter_history = []
        
        # Parameters targeted by the optimization
        self.parameters = {
            'compression_quality': {'min': 50, 'max': 100, 'current': 85},
            'resize_algorithm': {'options': ['lanczos', 'bicubic', 'bilinear'], 'current': 'lanczos'},
            'sharpening_strength': {'min': 0.0, 'max': 2.0, 'current': 1.0}
        }
    
    def optimize_parameters(self, test_images, max_iterations=50):
        """
        Optimize parameters toward the target quality.
        process_with_parameters and evaluate_batch_quality are
        pipeline-specific hooks, not shown here.
        """
        best_params = self.parameters.copy()
        best_quality = 0
        
        for iteration in range(max_iterations):
            # Process with the current parameters
            processed_images = self.process_with_parameters(
                test_images, self.parameters
            )
            
            # Evaluate quality
            avg_quality = self.evaluate_batch_quality(
                test_images, processed_images
            )
            
            print(f"Iteration {iteration + 1}: quality = {avg_quality:.3f}")
            
            # Track the best result
            if avg_quality > best_quality:
                best_quality = avg_quality
                best_params = self.parameters.copy()
            
            # Check whether the target has been reached
            if avg_quality >= self.target_quality:
                print(f"Target quality {self.target_quality} reached!")
                break
            
            # Update the parameters
            self.update_parameters(avg_quality)
            
            # Record the history
            self.parameter_history.append({
                'iteration': iteration,
                'parameters': self.parameters.copy(),
                'quality': avg_quality
            })
        
        return best_params, best_quality
    
    def update_parameters(self, current_quality):
        """
        Update parameters based on the current quality
        """
        quality_gap = self.target_quality - current_quality
        
        # Use more conservative settings while quality is low
        if quality_gap > 0.1:
            # Raise the compression quality
            self.parameters['compression_quality']['current'] = min(
                100, 
                self.parameters['compression_quality']['current'] + 5
            )
            
            # Reduce sharpening
            self.parameters['sharpening_strength']['current'] = max(
                0.0,
                self.parameters['sharpening_strength']['current'] - 0.1
            )
        
        # Shift focus to efficiency once quality is comfortably above target
        elif quality_gap < -0.05:
            self.parameters['compression_quality']['current'] = max(
                50,
                self.parameters['compression_quality']['current'] - 2
            )

Implementation and Deployment

Dockerized Evaluation Service

FROM pytorch/pytorch:1.9.0-cuda10.2-cudnn7-runtime

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code
COPY src/ ./src/
COPY models/ ./models/

# Entry point
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh

EXPOSE 8080

ENTRYPOINT ["./entrypoint.sh"]
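The Dockerfile copies an entrypoint.sh that is not shown above; a minimal sketch could look like the following. The src.api:app module path and the WORKERS variable are assumptions about the project layout, not part of the original setup:

```shell
#!/bin/sh
# Hypothetical entrypoint: start the FastAPI evaluation service.
# WORKERS falls back to 1 when the variable is unset.
exec uvicorn src.api:app --host 0.0.0.0 --port 8080 --workers "${WORKERS:-1}"
```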

Web API Implementation

import io
from typing import List

import numpy as np
import uvicorn
from fastapi import FastAPI, File, UploadFile, HTTPException
from fastapi.responses import JSONResponse
from PIL import Image

app = FastAPI(title="Image Quality Evaluation API")

# Global evaluator
quality_evaluator = ComprehensiveQualityEvaluator()

async def load_upload_image(upload: UploadFile):
    """
    Decode an uploaded file into a float array in [0, 1]
    """
    data = await upload.read()
    img = Image.open(io.BytesIO(data)).convert('RGB')
    return np.array(img) / 255.0

def calculate_quality_distribution(results):
    """
    Count results per quality level
    """
    from collections import Counter
    return dict(Counter(r['quality_level'] for r in results))

@app.post("/evaluate/single")
async def evaluate_single_image(
    original: UploadFile = File(...),
    processed: UploadFile = File(...)
):
    """
    Evaluate a single image pair
    """
    try:
        # Load the images
        original_img = await load_upload_image(original)
        processed_img = await load_upload_image(processed)
        
        # Run the evaluation
        result = quality_evaluator.evaluate_single_pair(
            original_img, processed_img
        )
        
        return JSONResponse(content=result)
    
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/evaluate/batch")
async def evaluate_batch_images(
    files: List[UploadFile] = File(...)
):
    """
    Batch evaluation
    """
    if len(files) % 2 != 0:
        raise HTTPException(
            status_code=400, 
            detail="An even number of files is required (original + processed pairs)"
        )
    
    results = []
    for i in range(0, len(files), 2):
        original_img = await load_upload_image(files[i])
        processed_img = await load_upload_image(files[i + 1])
        
        result = quality_evaluator.evaluate_single_pair(
            original_img, processed_img
        )
        results.append(result)
    
    # Statistics
    summary = {
        'total_pairs': len(results),
        'average_quality': np.mean([r['composite_score'] for r in results]),
        'quality_distribution': calculate_quality_distribution(results)
    }
    
    return JSONResponse(content={
        'summary': summary,
        'results': results
    })

@app.get("/health")
async def health_check():
    return {"status": "healthy"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)

Summary

AI image quality metrics make it possible to evaluate results in a way that reflects human perception far more faithfully than traditional numeric indicators. The techniques introduced in this article can significantly strengthen quality management for image-processing systems.

Key Points:

  1. Multi-faceted evaluation: comprehensive quality assessment by combining LPIPS, SSIM, and DISTS
  2. Real-time monitoring: early detection of problems through continuous quality monitoring
  3. Automatic optimization: dynamic parameter adjustment toward a quality target
  4. Scalability: support for large-scale operation through batch processing and an API layer

Internal links: Image Quality Budgets and CI Gates 2025 — Operations That Proactively Prevent Regressions; Complete Image Compression Strategy 2025 — A Practical Guide to Optimizing Perceived Speed While Preserving Quality; Format Conversion Strategy 2025 — A WebP/AVIF/JPEG/PNG Usage Guide

Related articles