AI Image Quality Metrics: LPIPS & SSIM Practical Guide 2025

Published: Sep 26, 2025 · Reading time: 14 min · Unified Image Tools Editorial

Quality evaluation for image processing is evolving from traditional numerical metrics toward AI-based evaluation grounded in human perception. This article provides an implementation-level walkthrough of the latest evaluation methods, including LPIPS (Learned Perceptual Image Patch Similarity) and SSIM (Structural Similarity Index Measure).

The Evolution of AI Image Quality Evaluation

Limitations of Traditional Methods

Problems with PSNR (Peak Signal-to-Noise Ratio)

  • Evaluates only pixel-level differences
  • Diverges significantly from human perception
  • Ignores structural similarity
  • Cannot properly assess compression artifacts
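To make the first two limitations concrete, here is a minimal, standalone sketch (synthetic images, purely illustrative): two distortions with essentially identical MSE, and therefore identical PSNR, that a viewer would judge very differently.

```python
import numpy as np

def psnr(img1, img2, data_range=1.0):
    # Peak signal-to-noise ratio over pixel-wise MSE
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float('inf')
    return 20 * np.log10(data_range / np.sqrt(mse))

# Smooth synthetic test image (sinusoidal gradient)
x = np.linspace(0, np.pi, 256)
img = np.outer(np.sin(x), np.sin(x))

rng = np.random.default_rng(0)
noisy = img + rng.normal(0, 0.05, img.shape)  # visible grain everywhere
brighter = img + 0.05                         # barely noticeable brightness shift

# Both distortions have (nearly) the same MSE, hence the same PSNR,
# even though they look completely different to a human
print(round(psnr(img, noisy), 1), round(psnr(img, brighter), 1))
```

Perceptual metrics such as LPIPS and SSIM are designed to separate exactly these cases.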

The Need for New Approaches

  • Mimic the human visual system
  • Feature extraction through deep learning
  • Quantification of perceptual similarity
  • Content-adaptive evaluation

Internal links: Image Quality Budgets and CI Gates 2025 — operations for proactively preventing failures; Image Compression Complete Strategy 2025 — a practical guide to optimizing perceived speed while preserving quality

LPIPS: Learned Perceptual Metrics

Theoretical Foundations of LPIPS

LPIPS (Learned Perceptual Image Patch Similarity) is a perceptual similarity metric that leverages the feature representations of deep neural networks.
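Concretely, LPIPS computes a weighted L2 distance between unit-normalized deep features, summed over layers and averaged over spatial positions (notation follows the original Zhang et al. 2018 formulation):

```latex
d(x, x_0) = \sum_{l} \frac{1}{H_l W_l} \sum_{h,w}
\left\| w_l \odot \left( \hat{y}^{\,l}_{hw} - \hat{y}^{\,l}_{0,hw} \right) \right\|_2^2
```

Here \(\hat{y}^{\,l}\) are channel-wise unit-normalized activations of spatial size \(H_l \times W_l\) from layer \(l\), and \(w_l\) are learned per-channel weights; a lower \(d\) means higher perceptual similarity.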

import torch
import torch.nn as nn
import lpips
from torchvision import models, transforms

class LPIPSEvaluator:
    def __init__(self, net='alex', use_gpu=True):
        """
        LPIPS मॉडल initialization
        net: 'alex', 'vgg', 'squeeze' में से चुनें
        """
        self.loss_fn = lpips.LPIPS(net=net)
        self.device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
        self.loss_fn.to(self.device)
        
        # Preprocessing pipeline: the lpips library expects inputs in [-1, 1],
        # so scale [0, 1] tensors with mean/std 0.5 rather than ImageNet stats
        self.transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5, 0.5, 0.5],
                                 std=[0.5, 0.5, 0.5])
        ])
    
    def calculate_lpips(self, img1, img2):
        """
        दो इमेजों के बीच LPIPS distance की गणना
        """
        # Preprocessing
        tensor1 = self.transform(img1).unsqueeze(0).to(self.device)
        tensor2 = self.transform(img2).unsqueeze(0).to(self.device)
        
        # LPIPS calculation
        with torch.no_grad():
            distance = self.loss_fn(tensor1, tensor2)
        
        return distance.item()
    
    def batch_evaluate(self, image_pairs):
        """
        Batch processing के साथ LPIPS मूल्यांकन
        """
        results = []
        
        for img1, img2 in image_pairs:
            lpips_score = self.calculate_lpips(img1, img2)
            results.append({
                'lpips_distance': lpips_score,
                'perceptual_similarity': 1 - lpips_score,  # express as a similarity score
                'quality_category': self.categorize_quality(lpips_score)
            })
        
        return results
    
    def categorize_quality(self, lpips_score):
        """
        LPIPS स्कोर के आधार पर गुणवत्ता श्रेणी वर्गीकरण
        """
        if lpips_score < 0.1:
            return 'excellent'
        elif lpips_score < 0.2:
            return 'good'
        elif lpips_score < 0.4:
            return 'acceptable'
        else:
            return 'poor'

Building a Custom LPIPS Network

class CustomLPIPSNetwork(nn.Module):
    def __init__(self, backbone='resnet50'):
        super().__init__()
        
        # Backbone network selection
        if backbone == 'resnet50':
            self.features = models.resnet50(pretrained=True)
            self.features = nn.Sequential(*list(self.features.children())[:-2])
        elif backbone == 'efficientnet':
            self.features = models.efficientnet_b0(pretrained=True).features
        
        # Feature extraction layers
        self.feature_layers = [1, 4, 8, 12, 16]  # layer indices to extract
        
        # Linear transformation layers
        self.linear_layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(64, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            ),
            nn.Sequential(
                nn.Conv2d(256, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            ),
            nn.Sequential(
                nn.Conv2d(512, 1, 1, bias=False),
                nn.GroupNorm(1, 1, affine=False)
            )
        ])
    
    def extract_features(self, x):
        # Collect intermediate activations at the selected layer indices
        features = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.feature_layers:
                features.append(x)
        return features
    
    def forward(self, x1, x2):
        # Feature extraction
        features1 = self.extract_features(x1)
        features2 = self.extract_features(x2)
        
        # Distance calculation at each layer
        distances = []
        for i, (f1, f2) in enumerate(zip(features1, features2)):
            # L2 normalization
            f1_norm = f1 / (torch.norm(f1, dim=1, keepdim=True) + 1e-8)
            f2_norm = f2 / (torch.norm(f2, dim=1, keepdim=True) + 1e-8)
            
            # Distance calculation
            diff = (f1_norm - f2_norm) ** 2
            
            # Linear transformation
            if i < len(self.linear_layers):
                diff = self.linear_layers[i](diff)
            
            # Spatial averaging
            distance = torch.mean(diff, dim=[2, 3])
            distances.append(distance)
        
        # Weighted average
        total_distance = sum(distances) / len(distances)
        return total_distance

SSIM: Structural Similarity Index

Mathematical Definition of SSIM
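Given local means μ_x, μ_y, variances σ_x², σ_y², and covariance σ_xy computed under a Gaussian window, SSIM takes the standard form:

```latex
\mathrm{SSIM}(x, y) =
\frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad C_1 = (k_1 L)^2, \quad C_2 = (k_2 L)^2
```

L is the dynamic range of the pixel values, and k₁ = 0.01, k₂ = 0.03 are the usual stabilizing constants, matching the defaults in the evaluator code.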

import numpy as np
from skimage.metrics import structural_similarity
from scipy.ndimage import gaussian_filter

class SSIMEvaluator:
    def __init__(self, window_size=11, k1=0.01, k2=0.03, sigma=1.5):
        self.window_size = window_size
        self.k1 = k1
        self.k2 = k2
        self.sigma = sigma
    
    def calculate_ssim(self, img1, img2, data_range=1.0):
        """
        Basic SSIM calculation
        """
        return structural_similarity(
            img1, img2,
            data_range=data_range,
            channel_axis=-1,  # replaces the deprecated multichannel=True
            gaussian_weights=True,
            sigma=self.sigma,
            use_sample_covariance=False
        )
    
    def calculate_ms_ssim(self, img1, img2, weights=None):
        """
        Multi-Scale SSIM (MS-SSIM) implementation
        """
        if weights is None:
            weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]
        
        levels = len(weights)
        mssim = 1.0
        
        for i in range(levels):
            ssim_val = self.calculate_ssim(img1, img2)
            mssim *= ssim_val ** weights[i]
            
            if i < levels - 1:
                # Downsample before the next scale
                img1 = self.downsample(img1)
                img2 = self.downsample(img2)
        
        return mssim
    
    def downsample(self, img):
        """
        Gaussian filtering + downsampling
        """
        # Blur only the spatial axes; a per-axis sigma avoids the `axes`
        # argument, which requires SciPy >= 1.11
        spatial_sigma = (1.0, 1.0) + (0,) * (img.ndim - 2)
        filtered = gaussian_filter(img, sigma=spatial_sigma)
        return filtered[::2, ::2]
    
    def ssim_map(self, img1, img2):
        """
        Generate an SSIM map
        """
        # Convert to grayscale
        if len(img1.shape) == 3:
            img1_gray = np.mean(img1, axis=2)
            img2_gray = np.mean(img2, axis=2)
        else:
            img1_gray = img1
            img2_gray = img2
        
        # Mean
        mu1 = gaussian_filter(img1_gray, self.sigma)
        mu2 = gaussian_filter(img2_gray, self.sigma)
        
        mu1_sq = mu1 ** 2
        mu2_sq = mu2 ** 2
        mu1_mu2 = mu1 * mu2
        
        # Variance and covariance
        sigma1_sq = gaussian_filter(img1_gray ** 2, self.sigma) - mu1_sq
        sigma2_sq = gaussian_filter(img2_gray ** 2, self.sigma) - mu2_sq
        sigma12 = gaussian_filter(img1_gray * img2_gray, self.sigma) - mu1_mu2
        
        # SSIM calculation
        c1 = (self.k1 * 1.0) ** 2
        c2 = (self.k2 * 1.0) ** 2
        
        ssim_map = ((2 * mu1_mu2 + c1) * (2 * sigma12 + c2)) / \
                   ((mu1_sq + mu2_sq + c1) * (sigma1_sq + sigma2_sq + c2))
        
        return ssim_map
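For intuition about the formula, here is a standalone sketch that evaluates SSIM once over whole-image statistics (a single window); it is an illustration only, not a substitute for the windowed SSIM map above.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # SSIM evaluated over whole-image statistics (single window)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
print(global_ssim(a, a))      # ≈ 1.0 for identical images
print(global_ssim(a, 1 - a))  # negative: the structure is inverted
```

The covariance term is what lets SSIM go negative when structure is inverted, something pixel-wise MSE cannot express.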

Advanced Evaluation Metrics

DISTS: Deep Image Structure and Texture Similarity

import torch
import torch.nn as nn
import torchvision.models as models

class DISTSEvaluator:
    def __init__(self, use_gpu=True):
        self.device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
        
        # Use the feature-extraction portion of VGG16.
        # The slices must not overlap: each stage continues from the previous
        # one, so cumulative slices like vgg[:9] would re-apply early layers.
        vgg = models.vgg16(pretrained=True).features
        self.stages = nn.ModuleList([
            vgg[:4],     # conv1_2
            vgg[4:9],    # conv2_2
            vgg[9:16],   # conv3_3
            vgg[16:23],  # conv4_3
            vgg[23:30]   # conv5_3
        ]).to(self.device)
        
        for param in self.stages.parameters():
            param.requires_grad = False
    
    def extract_features(self, x):
        features = []
        for stage in self.stages:
            x = stage(x)
            features.append(x)
        return features
    
    def preprocess(self, img):
        """
        Convert an HxWx3 float array in [0, 1] to a normalized 1x3xHxW tensor
        (ImageNet normalization, as VGG expects)
        """
        tensor = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0)
        mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
        std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
        return (tensor - mean) / std
    
    def calculate_dists(self, img1, img2):
        """
        Compute DISTS (Deep Image Structure and Texture Similarity)
        """
        # Preprocessing
        tensor1 = self.preprocess(img1).to(self.device)
        tensor2 = self.preprocess(img2).to(self.device)
        
        # Feature extraction
        feats1 = self.extract_features(tensor1)
        feats2 = self.extract_features(tensor2)
        
        structure_score = 0
        texture_score = 0
        
        for f1, f2 in zip(feats1, feats2):
            # Structure similarity (mean similarity)
            struct_sim = self.structure_similarity(f1, f2)
            structure_score += struct_sim
            
            # Texture similarity (covariance similarity)
            texture_sim = self.texture_similarity(f1, f2)
            texture_score += texture_sim
        
        # Weighted composition
        alpha = 0.8  # structure weight
        beta = 0.2   # texture weight
        
        dists_score = alpha * structure_score + beta * texture_score
        return dists_score.item()
    
    def structure_similarity(self, feat1, feat2):
        """
        Compute structure similarity
        """
        # Mean along the channel dimension
        mean1 = torch.mean(feat1, dim=1, keepdim=True)
        mean2 = torch.mean(feat2, dim=1, keepdim=True)
        
        # Structural similarity
        numerator = 2 * mean1 * mean2
        denominator = mean1 ** 2 + mean2 ** 2
        
        structure_map = numerator / (denominator + 1e-8)
        return torch.mean(structure_map)
    
    def texture_similarity(self, feat1, feat2):
        """
        Compute texture similarity
        """
        # Compute the covariance matrices of the feature maps
        b, c, h, w = feat1.shape
        feat1_flat = feat1.view(b, c, -1)
        feat2_flat = feat2.view(b, c, -1)
        
        # Covariance calculation
        cov1 = torch.bmm(feat1_flat, feat1_flat.transpose(1, 2)) / (h * w - 1)
        cov2 = torch.bmm(feat2_flat, feat2_flat.transpose(1, 2)) / (h * w - 1)
        
        # Frobenius norm से similarity
        diff_norm = torch.norm(cov1 - cov2, 'fro', dim=[1, 2])
        max_norm = torch.maximum(torch.norm(cov1, 'fro', dim=[1, 2]),
                                torch.norm(cov2, 'fro', dim=[1, 2]))
        
        texture_sim = 1 - diff_norm / (max_norm + 1e-8)
        return torch.mean(texture_sim)

FID: Fréchet Inception Distance

from scipy.linalg import sqrtm
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class FIDEvaluator:
    def __init__(self):
        # Inception v3 model (for feature extraction)
        self.inception = models.inception_v3(pretrained=True, transform_input=False)
        self.inception.fc = nn.Identity()  # Remove the classification layer
        self.inception.eval()
        
        for param in self.inception.parameters():
            param.requires_grad = False
    
    def extract_features(self, images):
        """
        Feature extraction using Inception v3
        """
        features = []
        
        with torch.no_grad():
            for img in images:
                # Resize to the expected input size (299x299)
                img_resized = F.interpolate(img.unsqueeze(0), 
                                          size=(299, 299), 
                                          mode='bilinear')
                
                feat = self.inception(img_resized)
                features.append(feat.cpu().numpy())
        
        return np.concatenate(features, axis=0)
    
    def calculate_fid(self, real_images, generated_images):
        """
        Compute FID (Fréchet Inception Distance)
        """
        # Feature extraction
        real_features = self.extract_features(real_images)
        gen_features = self.extract_features(generated_images)
        
        # Statistics calculation
        mu_real = np.mean(real_features, axis=0)
        sigma_real = np.cov(real_features, rowvar=False)
        
        mu_gen = np.mean(gen_features, axis=0)
        sigma_gen = np.cov(gen_features, rowvar=False)
        
        # Fréchet distance calculation
        diff = mu_real - mu_gen
        covmean = sqrtm(sigma_real.dot(sigma_gen))
        
        # Discard imaginary components introduced by numerical error
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        
        fid = diff.dot(diff) + np.trace(sigma_real + sigma_gen - 2 * covmean)
        
        return fid
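The Fréchet distance itself can be sanity-checked without Inception by fitting Gaussians to synthetic feature sets (the dimensions and sample counts here are arbitrary illustrations):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_a, feat_b):
    # Fréchet distance between Gaussians fitted to two feature sets
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical-error imaginary parts
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2 * covmean)

rng = np.random.default_rng(0)
same = frechet_distance(rng.normal(0, 1, (500, 8)), rng.normal(0, 1, (500, 8)))
far = frechet_distance(rng.normal(0, 1, (500, 8)), rng.normal(3, 1, (500, 8)))
# Features drawn from the same distribution score near 0; a shifted
# distribution scores roughly the squared distance between the means
print(round(same, 2), round(far, 1))
```

In the real metric the features are Inception v3 activations over whole image sets, so FID measures distribution-level similarity rather than per-pair fidelity.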

Building a Comprehensive Evaluation System

Multi-Metric Evaluator

class ComprehensiveQualityEvaluator:
    def __init__(self):
        self.lpips_evaluator = LPIPSEvaluator()
        self.ssim_evaluator = SSIMEvaluator()
        self.dists_evaluator = DISTSEvaluator()
        self.fid_evaluator = FIDEvaluator()
        
        # Weight configuration (fid applies only to dataset-level evaluation,
        # so per-pair composite scores use the remaining 0.9 of the weight)
        self.weights = {
            'lpips': 0.3,
            'ssim': 0.3,
            'dists': 0.2,
            'psnr': 0.1,
            'fid': 0.1
        }
    
    def evaluate_single_pair(self, img1, img2):
        """
        Comprehensive quality evaluation of an image pair
        """
        results = {}
        
        # LPIPS
        results['lpips'] = self.lpips_evaluator.calculate_lpips(img1, img2)
        
        # SSIM
        results['ssim'] = self.ssim_evaluator.calculate_ssim(img1, img2)
        
        # DISTS
        results['dists'] = self.dists_evaluator.calculate_dists(img1, img2)
        
        # PSNR (reference value)
        results['psnr'] = self.calculate_psnr(img1, img2)
        
        # Compute the composite score
        composite_score = self.calculate_composite_score(results)
        results['composite_score'] = composite_score
        
        # Determine the quality level
        results['quality_level'] = self.determine_quality_level(composite_score)
        
        return results
    
    def calculate_psnr(self, img1, img2):
        """
        PSNR calculation
        """
        mse = np.mean((img1 - img2) ** 2)
        if mse == 0:
            return float('inf')
        return 20 * np.log10(1.0 / np.sqrt(mse))
    
    def calculate_composite_score(self, metrics):
        """
        Composite score from multiple metrics
        """
        # Normalize each metric into the 0-1 range
        normalized_scores = {
            'lpips': 1 - min(metrics['lpips'], 1.0),  # lower is better
            'ssim': metrics['ssim'],                   # higher is better
            'dists': metrics['dists'],                 # higher is better
            'psnr': min(metrics['psnr'] / 50, 1.0),   # normalization (cap at 50 dB)
        }
        
        # Weighted composition
        composite = sum(
            self.weights[metric] * score 
            for metric, score in normalized_scores.items()
            if metric in self.weights
        )
        
        return composite
    
    def determine_quality_level(self, score):
        """
        Determine the quality level from the score
        """
        if score >= 0.9:
            return 'excellent'
        elif score >= 0.8:
            return 'very_good'
        elif score >= 0.7:
            return 'good'
        elif score >= 0.6:
            return 'acceptable'
        elif score >= 0.5:
            return 'poor'
        else:
            return 'very_poor'
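A worked example of the composite score with hypothetical metric values (the weights are those configured above, minus the dataset-level FID term):

```python
# Hypothetical metric values for a single image pair
metrics = {'lpips': 0.15, 'ssim': 0.92, 'dists': 0.88, 'psnr': 38.0}
weights = {'lpips': 0.3, 'ssim': 0.3, 'dists': 0.2, 'psnr': 0.1}

normalized = {
    'lpips': 1 - min(metrics['lpips'], 1.0),  # distance -> similarity
    'ssim': metrics['ssim'],
    'dists': metrics['dists'],
    'psnr': min(metrics['psnr'] / 50, 1.0),   # cap at 50 dB
}

composite = sum(weights[m] * s for m, s in normalized.items())
print(round(composite, 3))  # → 0.783, which falls in the 'good' band
```

Note that the maximum attainable per-pair score is 0.9 with these weights, since the 0.1 assigned to FID never contributes here.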

Batch Processing System

import asyncio
import aiofiles
from pathlib import Path

class BatchQualityEvaluator:
    def __init__(self, evaluator, max_workers=4):
        self.evaluator = evaluator
        self.max_workers = max_workers
        self.semaphore = asyncio.Semaphore(max_workers)
    
    def get_image_pairs(self, original_path, processed_path):
        """
        Pair files across the two directories by shared filename
        (a simple matching assumption)
        """
        pairs = []
        for orig in sorted(original_path.iterdir()):
            candidate = processed_path / orig.name
            if candidate.exists():
                pairs.append((orig, candidate))
        return pairs
    
    async def evaluate_directory(self, original_dir, processed_dir, output_file):
        """
        Batch evaluation over a directory
        """
        original_path = Path(original_dir)
        processed_path = Path(processed_dir)
        
        # Collect image file pairs
        image_pairs = self.get_image_pairs(original_path, processed_path)
        
        # Batch evaluation with parallel processing
        tasks = [
            self.evaluate_pair_async(orig, proc) 
            for orig, proc in image_pairs
        ]
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        # Generate the report
        report = self.generate_report(image_pairs, results)
        
        # Save the results
        await self.save_report(report, output_file)
        
        return report
    
    async def evaluate_pair_async(self, original_path, processed_path):
        """
        Asynchronous evaluation of an image pair
        """
        async with self.semaphore:
            # Load the images
            img1 = await self.load_image_async(original_path)
            img2 = await self.load_image_async(processed_path)
            
            # Run the evaluation
            result = self.evaluator.evaluate_single_pair(img1, img2)
            result['original_path'] = str(original_path)
            result['processed_path'] = str(processed_path)
            
            return result
    
    async def load_image_async(self, path):
        """
        Asynchronous image loading
        """
        async with aiofiles.open(path, 'rb') as f:
            data = await f.read()
        
        # Decode the image with PIL
        from PIL import Image
        import io
        img = Image.open(io.BytesIO(data))
        return np.array(img) / 255.0
    
    def generate_report(self, image_pairs, results):
        """
        Generate the evaluation report
        """
        successful_results = [r for r in results if not isinstance(r, Exception)]
        
        # Statistics calculation
        stats = {
            'total_images': len(image_pairs),
            'successful_evaluations': len(successful_results),
            'average_composite_score': np.mean([r['composite_score'] for r in successful_results]),
            'average_lpips': np.mean([r['lpips'] for r in successful_results]),
            'average_ssim': np.mean([r['ssim'] for r in successful_results]),
            'quality_distribution': self.calculate_quality_distribution(successful_results)
        }
        
        report = {
            'summary': stats,
            'detailed_results': successful_results,
            'failed_evaluations': [r for r in results if isinstance(r, Exception)]
        }
        
        return report
    
    async def save_report(self, report, output_file):
        """
        Save the report as JSON
        """
        import json
        async with aiofiles.open(output_file, 'w') as f:
            await f.write(json.dumps(report, indent=2, default=str))

Real-Time Quality Monitoring

Real-Time Quality Monitor

import threading
import queue
import time
from collections import deque

class RealTimeQualityMonitor:
    def __init__(self, evaluator, window_size=100):
        self.evaluator = evaluator
        self.window_size = window_size
        self.quality_history = deque(maxlen=window_size)
        self.alert_queue = queue.Queue()
        self.is_running = False
        
        # Alert thresholds
        self.thresholds = {
            'composite_score': {
                'warning': 0.6,
                'critical': 0.4
            },
            'lpips': {
                'warning': 0.3,
                'critical': 0.5
            }
        }
    
    def start_monitoring(self, input_queue):
        """
        Start real-time monitoring
        """
        self.is_running = True
        monitor_thread = threading.Thread(
            target=self.monitor_loop, 
            args=(input_queue,)
        )
        monitor_thread.start()
        return monitor_thread
    
    def monitor_loop(self, input_queue):
        """
        Main monitoring loop
        """
        while self.is_running:
            try:
                # Fetch an image pair from the queue
                img_pair = input_queue.get(timeout=1.0)
                
                if img_pair is None:  # Termination signal
                    break
                
                # Quality evaluation
                result = self.evaluator.evaluate_single_pair(*img_pair)
                
                # Add to the history window
                self.quality_history.append(result)
                
                # Check alerts; aggregate statistics are computed on
                # demand via get_current_statistics()
                self.check_alerts(result)
                
            except queue.Empty:
                continue
            except Exception as e:
                print(f"Monitoring error: {e}")
    
    def check_alerts(self, result):
        """
        Check alert conditions. The direction differs per metric:
        composite_score degrades as it FALLS, LPIPS degrades as it RISES.
        """
        for metric, thresholds in self.thresholds.items():
            if metric not in result:
                continue
            value = result[metric]
            
            # LPIPS is a distance, so a breach means exceeding the threshold
            if metric == 'lpips':
                breached = lambda t: value > t
            else:
                breached = lambda t: value < t
            
            if breached(thresholds['critical']):
                level = 'critical'
            elif breached(thresholds['warning']):
                level = 'warning'
            else:
                continue
            
            self.alert_queue.put({
                'level': level,
                'metric': metric,
                'value': value,
                'threshold': thresholds[level],
                'timestamp': time.time()
            })
    
    def calculate_trend(self, scores):
        """
        Direction of a least-squares fit over the recent scores
        """
        if len(scores) < 2:
            return 'stable'
        slope = np.polyfit(range(len(scores)), scores, 1)[0]
        if slope > 0.001:
            return 'improving'
        if slope < -0.001:
            return 'degrading'
        return 'stable'
    
    def get_current_statistics(self):
        """
        Get the current statistics
        """
        if not self.quality_history:
            return {}
        
        recent_scores = [r['composite_score'] for r in self.quality_history]
        recent_lpips = [r['lpips'] for r in self.quality_history]
        
        return {
            'window_size': len(self.quality_history),
            'average_quality': np.mean(recent_scores),
            'quality_trend': self.calculate_trend(recent_scores),
            'average_lpips': np.mean(recent_lpips),
            'quality_stability': np.std(recent_scores)
        }

Automated Quality Optimization

Dynamic Parameter Tuning

class AdaptiveQualityOptimizer:
    def __init__(self, evaluator, target_quality=0.8):
        self.evaluator = evaluator
        self.target_quality = target_quality
        self.parameter_history = []
        
        # Target parameters for optimization
        self.parameters = {
            'compression_quality': {'min': 50, 'max': 100, 'current': 85},
            'resize_algorithm': {'options': ['lanczos', 'bicubic', 'bilinear'], 'current': 'lanczos'},
            'sharpening_strength': {'min': 0.0, 'max': 2.0, 'current': 1.0}
        }
    
    def optimize_parameters(self, test_images, max_iterations=50):
        """
        Optimize parameters toward the quality target
        """
        best_params = self.parameters.copy()
        best_quality = 0
        
        for iteration in range(max_iterations):
            # Process with the current parameters
            processed_images = self.process_with_parameters(
                test_images, self.parameters
            )
            
            # Quality evaluation
            avg_quality = self.evaluate_batch_quality(
                test_images, processed_images
            )
            
            print(f"Iteration {iteration + 1}: Quality = {avg_quality:.3f}")
            
            # Track the best result
            if avg_quality > best_quality:
                best_quality = avg_quality
                best_params = self.parameters.copy()
            
            # Check whether the target was reached
            if avg_quality >= self.target_quality:
                print(f"Target quality {self.target_quality} achieved!")
                break
            
            # Update the parameters
            self.update_parameters(avg_quality)
            
            # Record history
            self.parameter_history.append({
                'iteration': iteration,
                'parameters': self.parameters.copy(),
                'quality': avg_quality
            })
        
        return best_params, best_quality
    
    def update_parameters(self, current_quality):
        """
        Update parameters based on the current quality
        """
        quality_gap = self.target_quality - current_quality
        
        # When quality falls short, switch to more conservative settings
        if quality_gap > 0.1:
            # Raise the compression quality
            self.parameters['compression_quality']['current'] = min(
                100, 
                self.parameters['compression_quality']['current'] + 5
            )
            
            # Reduce sharpening
            self.parameters['sharpening_strength']['current'] = max(
                0.0,
                self.parameters['sharpening_strength']['current'] - 0.1
            )
        
        # When quality has headroom, trade a little of it for efficiency
        elif quality_gap < -0.05:
            self.parameters['compression_quality']['current'] = max(
                50,
                self.parameters['compression_quality']['current'] - 2
            )

Implementation and Deployment

Dockerized Evaluation Service

FROM pytorch/pytorch:1.9.0-cuda10.2-cudnn7-runtime

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code
COPY src/ ./src/
COPY models/ ./models/

# Entry point
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh

EXPOSE 8080

ENTRYPOINT ["./entrypoint.sh"]

Web API Implementation

from fastapi import FastAPI, File, UploadFile, HTTPException
from fastapi.responses import JSONResponse
from typing import List
import numpy as np
import uvicorn

app = FastAPI(title="Image Quality Evaluation API")

# Global evaluator
quality_evaluator = ComprehensiveQualityEvaluator()

async def load_upload_image(upload: UploadFile):
    """
    Decode an uploaded file into a float RGB array in [0, 1]
    """
    import io
    import numpy as np
    from PIL import Image
    data = await upload.read()
    img = Image.open(io.BytesIO(data)).convert('RGB')
    return np.array(img) / 255.0

@app.post("/evaluate/single")
async def evaluate_single_image(
    original: UploadFile = File(...),
    processed: UploadFile = File(...)
):
    """
    Single image pair evaluation
    """
    try:
        # Load the images
        original_img = await load_upload_image(original)
        processed_img = await load_upload_image(processed)
        
        # Run the evaluation
        result = quality_evaluator.evaluate_single_pair(
            original_img, processed_img
        )
        
        return JSONResponse(content=result)
    
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/evaluate/batch")
async def evaluate_batch_images(
    files: List[UploadFile] = File(...)
):
    """
    Batch evaluation
    """
    if len(files) % 2 != 0:
        raise HTTPException(
            status_code=400, 
            detail="Even number of files required (original + processed pairs)"
        )
    
    results = []
    for i in range(0, len(files), 2):
        original_img = await load_upload_image(files[i])
        processed_img = await load_upload_image(files[i + 1])
        
        result = quality_evaluator.evaluate_single_pair(
            original_img, processed_img
        )
        results.append(result)
    
    # Statistics calculation
    summary = {
        'total_pairs': len(results),
        'average_quality': np.mean([r['composite_score'] for r in results]),
        'quality_distribution': calculate_quality_distribution(results)
    }
    
    return JSONResponse(content={
        'summary': summary,
        'results': results
    })

@app.get("/health")
async def health_check():
    return {"status": "healthy"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)

Summary

AI image quality evaluation metrics enable assessments that reflect human perception far more faithfully than traditional numerical indicators. The techniques presented in this article can bring substantial improvements to quality management in image processing systems.

Key points:

  1. Multi-faceted evaluation: comprehensive quality assessment by combining LPIPS, SSIM, and DISTS
  2. Real-time monitoring: early detection of problems through live quality monitoring
  3. Automated optimization: dynamic parameter adjustment toward quality targets
  4. Scalability: support for large-scale operations through batch processing and an API

Internal links: Image Quality Budgets and CI Gates 2025 — operations for proactively preventing failures; Image Compression Complete Strategy 2025 — a practical guide to optimizing perceived speed while preserving quality; Format Conversion Strategies 2025 — WebP/AVIF/JPEG/PNG selection guidelines


Related Articles

Compression

Image Quality Budgets and CI Gates 2025 — Operations for Proactively Preventing Failures

A systematic approach that uses both SSIM/LPIPS/Butteraugli metrics and human review to catch quality regressions, color shifts, and size increases through automated CI checks.

Comparison

Image Quality Metrics SSIM/PSNR/Butteraugli Practical Guide 2025

Practical procedures for using mechanical numerical indicators to objectively compare and verify image quality after compression and resizing. Usage patterns and caveats for SSIM/PSNR/Butteraugli, with examples of workflow integration.