Efficient Video Quality Assessment Leveraging Diverse PreTrained Models from the Wild (2024)

Kun Yuan1†, Hongbo Liu1,2†, Mading Li1†, Muyi Sun3,4,
Ming Sun1(✉), Jiachao Gong1, Jinhua Hao1, Chao Zhou1, Yansong Tang2(✉)
1Kuaishou Technology, 2Tsinghua University, 3School of AI, BUPT, 4CASIA
{yuankun03,limading,sunming03}@kuaishou.com, liuhbleon@gmail.com, tang.yansong@sz.tsinghua.edu.cn

Abstract

Video quality assessment (VQA) is a challenging problem due to the numerous factors that can affect the perceptual quality of a video, e.g., content attractiveness, distortion type, motion pattern, and level. However, annotating the Mean Opinion Score (MOS) for videos is expensive and time-consuming, which limits the scale of VQA datasets and poses a significant obstacle for deep learning-based methods. In this paper, we propose a VQA method named PTM-VQA, which leverages PreTrained Models to transfer knowledge from models pretrained on various pre-text tasks, enabling benefits for VQA from different aspects.

Specifically, we extract features of videos from different pretrained models with frozen weights and integrate them to generate a unified representation. Since these models possess various fields of knowledge and are often trained with labels irrelevant to quality, we propose an Intra-Consistency and Inter-Divisibility (ICID) loss to impose constraints on the features extracted by multiple pretrained models. The intra-consistency constraint ensures that features extracted by different pretrained models lie in the same unified quality-aware latent space, while the inter-divisibility constraint introduces pseudo clusters based on the annotations of samples and tries to separate the features of samples from different clusters. Furthermore, with a constantly growing number of pretrained models, it is crucial to determine which models to use and how to use them. To address this problem, we propose an efficient scheme to select suitable candidates: models with better clustering performance on VQA datasets are chosen as our candidates. Extensive experiments demonstrate the effectiveness of the proposed method.

† Equal contribution. ✉ Corresponding authors.

1 Introduction

In recent years, social network platforms that focus on videos have gained immense popularity. According to Cisco's Visual Networking Index (VNI), global IP video traffic was predicted to account for 82% of all IP traffic by 2022, in both the business and consumer sectors [2]. This surge in video consumption poses significant challenges for video providers to deliver better services. Since the perceptual quality of videos has a major impact on the Quality of Experience (QoE), identifying the quality of videos has become one of the most important issues [26, 11, 47, 18, 10]. Video quality assessment (VQA) aims to assess the perceptual quality of input videos automatically, imitating the subjective feedback of humans viewing a video. It has been extensively studied in the context of assessing compression artifacts, transmission errors, and overall quality [45, 34, 40, 27, 30]. Data-driven deep learning-based methods have been drawing more and more attention compared to conventional methods based on hand-crafted features, as they offer better performance [9, 8, 65, 28, 7, 68, 42, 31, 57].

[Figure 1: Example videos from the KoNViD-1k dataset, illustrating the correlation between VQA and other vision tasks.]

Compared with other high-level computer vision tasks, datasets for VQA are much smaller. Kinetics [5], one of the most popular datasets for human action classification, has 650,000 clips, while the popular VQA dataset KoNViD-1k [22] has only 1,200 videos. One reason is that VQA is a highly subjective task [61, 58]. To obtain an unbiased label, annotation guidelines [44] recommend that the subjective quality of a single video be measured in a laboratory test by computing the arithmetic mean of multiple subjective judgments, i.e., the Mean Opinion Score (MOS). Taking KoNViD-1k as an example, each video has 114 votes on average. This significantly raises the cost of labeling and limits the size of VQA datasets. Such a small amount of data limits the power of data-driven VQA methods. To deal with this problem, most existing VQA methods [68, 7, 28, 65] choose to finetune weights pretrained on common larger datasets (e.g., ImageNet [15]). However, existing works [29, 30, 57] show that the perceptual quality of a video is related to many factors, e.g., content attractiveness, aesthetic quality, distortion type, and motion pattern and level. Considering only content-based pretrained models may not be sufficient for VQA. Thus, in this work, we focus on how to better utilize the large number of available pretrained models to benefit VQA.

We first observe a correlation between VQA and other computer vision tasks. To illustrate, Fig.1 displays several examples from the KoNViD-1k dataset. It is reasonable to assume that models pretrained on datasets for various pre-text tasks are able to capture distinct characteristics related to video quality. Consequently, we conduct a simple clustering experiment using Large Margin Nearest Neighbor (LMNN) [59] to investigate the correlation between typical pretrained models and the VQA task. Based on the findings, we propose a practical approach, named PTM-VQA (PreTrained Models-VQA), which leverages pretrained models as feature extractors and predicts the quality of input videos based on their integrated features. As the parameters of the pretrained models remain fixed, we can introduce more pretrained models without exhausting computational resources.

Moreover, we notice that the labels in common pretraining datasets are largely quality-irrelevant. For instance, a clear, high-quality photo of a puppy and a blurred one may share the same object-level label, whereas their quality-level labels may differ significantly. This confuses the learning process for the VQA task. Therefore, we propose an Intra-Consistency and Inter-Divisibility (ICID) loss, which applies constraints on features extracted by multiple pretrained models from different samples. Specifically, the model-wise intra-consistency constraint requires features extracted by different pretrained models to lie in the same unified quality-aware latent space. Meanwhile, the sample-wise inter-divisibility constraint introduces pseudo clusters based on the MOS of samples and aims to separate features of samples from different clusters.

Furthermore, as the number of pretrained models continues to grow (e.g., the PyTorch image models library (timm) [60] supports over 700 pretrained models), finding models suitable for the VQA task through trial and error becomes infeasible. Therefore, we propose to use the Davies-Bouldin Index (DBI) [14] to evaluate clustering results and adopt it as the basis for model selection and for weighting features during integration. To summarize, the main contributions are as follows:

  • We explore and confirm the association between pretrained models built on various pre-text tasks and their effectiveness for VQA. Moreover, we present a practical no-reference VQA method named PTM-VQA, which exploits diverse cutting-edge pretrained models to benefit VQA effectively.

  • To constrain diverse features into a unified quality-aware space and eliminate the mismatch between objective and perceptual annotations, we propose an ICID loss. To avoid looking for a needle in a haystack, we propose an effective way to select candidate models based on DBI, which also determines the contributions of different pretrained models.

  • PTM-VQA achieves SOTA performance with a rather small number of learnable weights on three NR-VQA datasets, including KoNViD-1k, LIVE-VQC, and YouTube-UGC. Extensive ablations also prove the effectiveness of our method.

2 Related Work

VQA.

Based on whether a pristine reference video is required, VQA methods can be classified as Full Reference (FR), Reduced Reference (RR), and No Reference (NR). Our work focuses on NR-VQA. Traditional NR-VQA methods either measure video quality with rule-based metrics [66] or predict MOS with an estimator (e.g., a Multi-Layer Perceptron or Support Vector Machine) based on hand-crafted features [13]. In recent years, deep learning-based VQA methods have been studied and have surpassed traditional methods. STDAM [65] introduced a graph convolution to extract features and a bidirectional long short-term memory network to handle motion information. StarVQA [64] proposed to encode the spatiotemporal information of each patch on video frames and feed them into a Transformer. RAPIQUE [53] proposed to combine texture features and deep convolutional features. These works, however, neglected the correlation between VQA and other tasks and did not explore other datasets. BVQA [29] took one step further and proposed to transfer knowledge from IQA and action recognition to VQA. Our work further investigates the possibility of using more kinds of tasks.

Pretrained models.

Pretrained models have revealed great potential in deep learning. In Natural Language Processing (NLP), BERT [16] and GPT-3 [4] demonstrated substantial gains on many NLP tasks and benchmarks by pretraining on a large corpus of text followed by finetuning on a specific task. The advent of ViT [17] brought this capability into the visual realm, and subsequent literature [43, 21, 32] has shown that the same benefits can be achieved. For example, CLIP [43], trained on WebImageText, matched the accuracy of the original ResNet-50 on ImageNet in a zero-shot setting, without using any of the original labeled data. In the field of quality assessment (QA), there are also efforts [30, 8, 38, 29] to introduce pretrained models to improve performance. Among them, VSFA [30] extracted features from a pretrained image classification network for its inherent content-aware property, and BVQA [29] proposed transferring knowledge from IQA and from action recognition datasets with motion patterns. Recently, Ada-DQA [33] utilized diverse pretrained models to distill quality-related knowledge, but its training cost is relatively high. In this work, we aim to tap the potential of the pretrained models themselves and reduce the tuning process.

Metric learning.

Metric learning learns distance metrics from data to measure the difference between samples. It has been used in much research, including QA. RankIQA [35] trained a siamese network to rank synthesized images with different levels of distortion, constrained by a pairwise ranking hinge loss, and then finetuned the model on the target IQA dataset. UNIQUE [70] sampled ranked image pairs from individual IQA datasets and used a fidelity loss [51] and a hinge constraint to supervise the training process. FPR [6] extracted distortion/reference features from the input/reference, hallucinated a pseudo reference feature from the input alone, and used a triplet loss [46] to pull the pristine and hallucinated reference features closer while pushing the distortion feature away.

3 Method

3.1 Observations

In recent years, there has been a surge of research attention towards pretraining, as evidenced by a number of notable works [16, 4, 43, 21] that demonstrate the effectiveness of applying pretrained models to downstream tasks. This directly addresses the main obstacle of VQA, where the cost of annotation poses a significant challenge to scaling up datasets. The field of VQA has also witnessed efforts [30, 8, 29] to leverage pretraining to capture intrinsic content-aware or motion-related patterns, with a view to enhancing the representation of perceptual quality. However, the impact of various factors inherent to pretrained models (e.g., neural network architectures, pre-text tasks, and pretraining databases) on the performance of model transfer remains a subject of inquiry. To the best of our knowledge, there has been limited exploration and exploitation of these factors, as well as of newly released cutting-edge pretrained models, in VQA. Therefore, our objective is to investigate ways to fully leverage these models in VQA applications.

[Figure 2: Clustering results of features extracted by eight pretrained models with frozen weights on KoNViD-1k.]

In order to examine the relationship between pretrained models and VQA tasks, we designed a simple clustering experiment. Specifically, we selected a pretrained model and utilized its frozen weights as a feature extractor to obtain corresponding video features. We then clustered these features into multiple centers using LMNN [59], based on their range of MOS values. To this end, we selected eight models, including MAE [21] trained on ImageNet-1k [15], Swin-Base [36] trained on ImageNet-22k [15], X3D [19] trained on Kinetics-400 [24], ir-CSN-152 [50] trained on Sports-1M [23], CLIP [43] trained on WebImageText [43], ConvNeXt [37] trained on ImageNet-22k, TimeSformer [3] trained on Kinetics-400, and ViT-Base [17] trained on ImageNet-22k. In Fig.2, we show that some of these models display surprisingly discriminative results, despite never having been exposed to quality-related labels during pre-text task training. We hypothesize that some quality-aware representations were learned concurrently during the pre-text task training. For instance, CLIP, which learns visual concepts through natural language supervision, may encounter emotional descriptions relating to image quality in some texts. Similarly, models trained on action tasks (e.g., ir-CSN-152) may be sensitive to motion-related distortions [29] (e.g., camera shaking or motion blurriness). As such, these broader pretrained models may be useful for improving VQA performance.
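For concreteness, below is a minimal sketch of this kind of probing experiment, assuming a generic frozen image backbone (torchvision's ResNet-50 is used purely as a stand-in for the eight models listed above) and a simple 2-D projection in place of the LMNN-based visualization; the frame handling and MOS bin edges are illustrative, not the paper's exact settings.

```python
import numpy as np
import torch
import torchvision.models as tvm

# Frozen stand-in backbone (the paper probes MAE, Swin-B, X3D, ir-CSN-152,
# CLIP, ConvNeXt, TimeSformer, and ViT-B; ResNet-50 is used here only for brevity).
backbone = tvm.resnet50(weights=tvm.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()              # expose the 2048-d pooled feature
backbone.eval().requires_grad_(False)

@torch.no_grad()
def video_feature(frames: torch.Tensor) -> np.ndarray:
    """frames: (T, 3, H, W) uniformly sampled, ImageNet-normalized frames of
    one video; per-frame features are averaged into one video descriptor."""
    return backbone(frames).mean(dim=0).cpu().numpy()

def mos_bins(mos: np.ndarray, edges=(2.0, 3.0, 4.0)) -> np.ndarray:
    """Pseudo-labels from MOS ranges (an illustrative 4-bin split of the 1-5 scale)."""
    return np.digitize(mos, edges)

# Stacking video_feature(...) over a dataset gives feats of shape (num_videos, 2048);
# coloring a 2-D projection (e.g., sklearn.manifold.TSNE) by mos_bins(mos), or
# scoring the bins with the DBI of Sec. 3.4, probes how quality-aware the frozen
# features already are (analogous in spirit to Fig. 2).
```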

3.2 Pipeline of the proposed PTM-VQA

Assuming the availability of multiple pretrained models, the conventional approach to employing them for VQA is to finetune them on target datasets while integrating the extracted features for quality prediction. Nonetheless, this approach is computationally resource-intensive, making it less feasible as the number of pretrained models increases and their sizes grow. For instance, training the ViT model requires a TPUv3 with eight cores for 30 days [17], while the MAE model consumes 128 TPUv3 cores for 800 epochs of training [21]. This would be unaffordable for a VQA task. However, the findings illustrated in Fig.2 suggest that pretrained models can be applied to VQA tasks with their weights frozen. In this paper, we propose an effective framework, named PTM-VQA, to utilize the knowledge from diverse pretrained models efficiently.

As shown in Fig.3, given an input video $\mathbf{x}^{(i)}$, $N$ pretrained models, whose weights are frozen, are utilized to extract features, resulting in representations from different perspectives. Specifically, for video clip-based models, we uniformly sample frames in the temporal dimension to form the input clip, and the corresponding representations are generated by these models. Frame-based models are fed with the sampled frames, and their output features are averaged to form the spatiotemporal representation. The features extracted by the models are denoted as $\mathbf{z}_{n}^{(i)}$, where $n\in\{1,\dots,N\}$. To further distill quality-aware features and perform dimension alignment, we apply a learnable transformation module after each feature extractor. Structurally, the transformation module consists of two fully connected layers, each followed by a normalization layer and a GELU activation. The transformed features are denoted as $\mathbf{f}_{n}^{(i)}\in\mathbb{R}^{D}$, where $D$ is the aligned dimension. The features are then integrated into a unified representation through:

$$\mathbf{h}^{(i)}=\frac{\sum^{N}_{n=1}\omega_{n}\,\mathbf{f}^{(i)}_{n}}{\sum^{N}_{n=1}\omega_{n}}, \qquad (1)$$

where $\omega_{n}$ is the coefficient for each model. When $\omega_{n}$ equals $1/N$, this reduces to a plain average, with each model contributing equally to the final representation. Finally, $\mathbf{h}^{(i)}$ is used to obtain the quality prediction through a regression head, which is a single fully connected layer.
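A minimal PyTorch sketch of this learnable part of the pipeline is given below: one transformation module per extractor (two FC layers, each followed by a normalization layer and GELU; LayerNorm is an assumption, since the normalization type is not specified), the weighted integration of Eq. 1, and a single-layer regression head. Class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class TransformModule(nn.Module):
    """Learnable transformation after each frozen extractor: two FC layers,
    each followed by a normalization layer and GELU (LayerNorm assumed)."""
    def __init__(self, in_dim: int, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.LayerNorm(out_dim), nn.GELU(),
            nn.Linear(out_dim, out_dim), nn.LayerNorm(out_dim), nn.GELU(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

class PTMVQAHead(nn.Module):
    """Transforms per-model features, integrates them with fixed weights
    omega_n (Eq. 1), and regresses the quality score."""
    def __init__(self, in_dims, omegas, d: int = 128):
        super().__init__()
        self.transforms = nn.ModuleList([TransformModule(c, d) for c in in_dims])
        # omega_n = 1/N for plain averaging, or 1/psi_n from the DBI scores (Sec. 3.4)
        self.register_buffer("omega", torch.tensor(omegas, dtype=torch.float32))
        self.regressor = nn.Linear(d, 1)

    def forward(self, feats):
        # feats: list of N tensors z_n^(i), each of shape (B, C_n)
        f = torch.stack([t(z) for t, z in zip(self.transforms, feats)], dim=1)  # (B, N, D)
        w = self.omega / self.omega.sum()
        h = (w[None, :, None] * f).sum(dim=1)      # weighted integration of Eq. (1)
        pred = self.regressor(h).squeeze(-1)       # MOS prediction
        return pred, f, h                          # f and h also feed the ICID loss below
```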

[Figure 3: Overview of the proposed PTM-VQA framework.]

With this design, training is remarkably efficient and circumvents the computational overhead of the finetuning approach mentioned above. As shown in Tab.1, the entire training can be completed in approximately two hours on a single GPU. This preserves the information in the pretrained models well, but the reduced number of learnable parameters also makes it more difficult to obtain strong performance. Some concerns are as follows:

  1. Due to the various pre-text tasks of pretrained models, the features generated by different models are highly diverse and may be distributed over inconsistent feature spaces [62]. How to constrain these abundant features into a unified quality-aware space is important.

  2. Different from the objective categories in common classification tasks, the perceptual quality of a video is more implicit and related to various factors (e.g., content attractiveness, distortion type and level, and motion pattern and level); videos of the same quality often render completely different content, and vice versa. Therefore, it is difficult for models trained on objective annotations to distinguish samples of the same category but with a large perceptual quality difference. A more comprehensive contrast approach beyond sample-wise comparison is needed to deal with these outliers.

  3. There are hundreds of pretrained models available in public libraries. How to efficiently select the desired models, and how to determine their contributions to representing perceptual quality, is an urgent problem to be solved.

3.3 Intra-Consistency and Inter-Divisibility Loss

To address the above concerns and better suit VQA tasks, we constrain the features across different pretrained models and different samples using metric learning. The triplet loss, one of the most widely adopted metric learning objectives, can be formulated as follows:

$$\mathcal{L}_{\text{triplet}}(\mathbf{f}_{\hat{a}},\mathbf{f}_{\hat{p}},\mathbf{f}_{\hat{n}})=\max\big(\|\mathbf{f}_{\hat{a}}-\mathbf{f}_{\hat{p}}\|^{2}-\|\mathbf{f}_{\hat{a}}-\mathbf{f}_{\hat{n}}\|^{2}+\alpha,\ 0\big), \qquad (2)$$

where ๐Ÿa^subscript๐Ÿ^๐‘Ž\mathbf{f}_{\hat{a}}bold_f start_POSTSUBSCRIPT over^ start_ARG italic_a end_ARG end_POSTSUBSCRIPT,๐Ÿp^subscript๐Ÿ^๐‘\mathbf{f}_{\hat{p}}bold_f start_POSTSUBSCRIPT over^ start_ARG italic_p end_ARG end_POSTSUBSCRIPT,๐Ÿn^subscript๐Ÿ^๐‘›\mathbf{f}_{\hat{n}}bold_f start_POSTSUBSCRIPT over^ start_ARG italic_n end_ARG end_POSTSUBSCRIPT are features of an anchor sample a^^๐‘Ž\hat{a}over^ start_ARG italic_a end_ARG, a positive sample p^^๐‘\hat{p}over^ start_ARG italic_p end_ARG of the same class as a^^๐‘Ž\hat{a}over^ start_ARG italic_a end_ARG, and a negative sample n^^๐‘›\hat{n}over^ start_ARG italic_n end_ARG which has a different class of a^^๐‘Ž\hat{a}over^ start_ARG italic_a end_ARG. And ฮฑ๐›ผ\alphaitalic_ฮฑ is a margin between anchor-positive and anchor-negative pairs. Some previous studies [6, 20] in QA also applied triplet loss to measure the distance between the distorted feature and the reference feature of the same sample. Since the MOS values are continuous, the original triplet loss cannot be directly used to constrain the distance between arbitrary samples. We make some modifications to constrain features generated by different pretrained models and samples, as given in Fig.4.

Intra-consistency constraint.

To address the first concern and unify the features generated by different pretrained models into a single quality-aware latent space, we propose a model-wise intra-consistency constraint. Formally, it minimizes the distance between any two of the transformed features via cosine similarity, which is widely used in deep metric learning [55]:

$$\mathcal{L}_{\text{intra}}=\frac{2}{N\cdot(N-1)}\sum_{n=1}^{N}\sum_{m,\,m\neq n}^{N}\Big(1-\frac{\mathbf{f}^{(i)}_{n}\cdot\mathbf{f}^{(i)}_{m}}{\|\mathbf{f}^{(i)}_{n}\|_{2}\,\|\mathbf{f}^{(i)}_{m}\|_{2}}\Big). \qquad (3)$$
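A sketch of Eq. 3 for one sample is given below, taking the $N$ transformed features stacked into a single tensor; the vectorized implementation (a matrix of cosine similarities rather than an explicit double loop) is our choice and is not prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def intra_consistency_loss(f: torch.Tensor) -> torch.Tensor:
    """Eq. (3): pairwise cosine distance between the N transformed features
    of one sample. `f` has shape (N, D), one row per pretrained model."""
    n = f.size(0)
    f = F.normalize(f, dim=-1)
    sim = f @ f.t()                                            # (N, N) cosine similarities
    off_diag = sim[~torch.eye(n, dtype=torch.bool, device=f.device)]
    return (1.0 - off_diag).sum() * 2.0 / (n * (n - 1))        # 2/(N(N-1)) factor of Eq. (3)
```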
Table 1: Combinations of pretrained models and training configurations of PTM-VQA on each dataset.

| Dataset | Combination of Pretrained Models | Frames | Interval | LR | Params (M) | FLOPs (T) | Mem. (GB) | Training time (h) | Inference time (s) |
|---|---|---|---|---|---|---|---|---|---|
| KoNViD-1k | ConvNeXt, ir-CSN-152, CLIP | 16 | 2 | 1e-3 | 0.66 | 0.43 | 4.94 | 2.00 | 0.14 |
| LIVE-VQC | CLIP, TimeSformer | 16 | 4 | 5e-3 | 0.30 | 0.53 | 4.32 | 1.97 | 0.17 |
| YouTube-UGC | ConvNeXt, ir-CSN-152, CLIP, Video Swin-B | 32 | 8 | 1e-3 | 0.86 | 1.35 | 5.32 | 2.34 | 1.20 |

Inter-divisibility constraint.

To address the second concern, we split videos into distinct pseudo clusters over different numerical intervals according to the annotated MOS values (on a scale of 1.0 to 5.0). For example, videos with MOS in the range of 1.0 to 2.0 are generally considered to be of poor quality, whose content cannot be recognized normally due to various distortions, while videos with MOS in the range of 4.0 to 5.0 are of high quality, whose content is unambiguous, without noise, shaking, or blurring. We treat videos within the same range as the same category, thus dividing them into $K$ clusters. Each cluster can be written as $\mathcal{S}_{k}=\{\mathbf{x}^{(i)}\,|\,y^{(i)}\in(p_{k},q_{k}],\ q_{k}>p_{k}\in[1.0,5.0]\}$, where $y^{(i)}$ is the labeled MOS of the $i$-th input video, and $p_{k}$ and $q_{k}$ are the endpoints of the interval. With these pseudo clusters, the triplet loss can be utilized to pull samples of the same cluster closer and push samples of different clusters farther apart. Then Eq. 2 can be rewritten as:

$$\mathcal{L}_{\text{triplet}}(\mathbf{h}^{(i)},\mathbf{h}^{(j)},\mathbf{h}^{(l)}),\quad \text{where}~\mathbf{x}^{(i)},\mathbf{x}^{(j)}\in\mathcal{S}_{k},\ \mathbf{x}^{(l)}\notin\mathcal{S}_{k}. \qquad (4)$$

Besides, the original feature $\mathbf{f}$ extracted by an individual model is replaced by the integrated feature $\mathbf{h}$. As shown in Fig.4, the original triplet loss operates in a sample-to-sample form, which is highly affected by the sampling of triplets. When facing outliers that are of the same quality but render different content, or vice versa, it may lead to bad local minima and prevent the model from achieving top performance. Thus, we propose using the centroid of each cluster to represent the positive and negative points:

$$\mathcal{L}_{\text{inter}}=\mathcal{L}_{\text{triplet}}(\mathbf{h}^{(i)},\mathbf{c}_{k},\mathbf{c}_{t}),\quad \text{where}~\mathbf{c}_{k}=\frac{1}{|\mathcal{S}_{k}|}\sum_{\{i\,|\,\mathbf{x}^{(i)}\in\mathcal{S}_{k}\}}\mathbf{h}^{(i)},\quad \mathbf{c}_{t}=\frac{1}{|\mathcal{S}_{t}|}\sum_{\{j\,|\,\mathbf{x}^{(j)}\in\mathcal{S}_{t}\}}\mathbf{h}^{(j)}. \qquad (5)$$
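The sketch below shows one way to implement the pseudo-cluster assignment and the centroid-based triplet loss of Eqs. 4-5 within a batch. Since the paper does not fully specify how the negative cluster $t$ is chosen, this version simply averages the loss over all other clusters present in the batch; the interval edges follow the 6-split setting of Tab. 6.

```python
import torch

def mos_to_cluster(mos: torch.Tensor) -> torch.Tensor:
    """Pseudo-cluster ids from MOS, using the 6 intervals of Tab. 6:
    [1,2), [2,2.5), [2.5,3), [3,3.5), [3.5,4), [4,5]."""
    edges = torch.tensor([2.0, 2.5, 3.0, 3.5, 4.0], device=mos.device)
    return torch.bucketize(mos, edges, right=True)

def inter_divisibility_loss(h: torch.Tensor, cluster_ids: torch.Tensor,
                            alpha: float = 0.05) -> torch.Tensor:
    """Eqs. (4)-(5): triplet loss between each integrated feature h^(i), the
    centroid of its own pseudo cluster, and the centroid of another cluster."""
    losses = []
    for k in cluster_ids.unique():
        anchors = h[cluster_ids == k]
        c_pos = anchors.mean(dim=0)                       # centroid c_k
        for t in cluster_ids[cluster_ids != k].unique():
            c_neg = h[cluster_ids == t].mean(dim=0)       # centroid c_t
            d_ap = (anchors - c_pos).pow(2).sum(dim=-1)
            d_an = (anchors - c_neg).pow(2).sum(dim=-1)
            losses.append(torch.clamp(d_ap - d_an + alpha, min=0.0).mean())
    return torch.stack(losses).mean() if losses else h.new_zeros(())
```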

Given a batch consisting of $B$ inputs, the optimization objective during training can be summarized as:

minโขโ„’1+ฮฒโข(โˆ‘i=1Bโ„’intra+โ„’inter),minsubscriptโ„’1๐›ฝsubscriptsuperscript๐ต๐‘–1subscriptโ„’intrasubscriptโ„’inter\small\text{min}~{}\mathcal{L}_{1}+\beta\big{(}\sum\nolimits^{B}_{i=1}\mathcal%{L}_{\text{intra}}+\mathcal{L}_{\text{inter}}\big{)},min caligraphic_L start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_ฮฒ ( โˆ‘ start_POSTSUPERSCRIPT italic_B end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT caligraphic_L start_POSTSUBSCRIPT intra end_POSTSUBSCRIPT + caligraphic_L start_POSTSUBSCRIPT inter end_POSTSUBSCRIPT ) ,(6)

where $\beta$ is the coefficient balancing the smooth $\mathcal{L}_{1}$ regression loss and the proposed ICID loss.
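Putting the pieces together, the overall objective of Eq. 6 can be sketched as below, reusing the helper functions sketched earlier in this section; $\beta=0.2$ follows Sec. 4.2, and a batch mean is used here in place of the explicit sum over $B$, which only rescales the coefficient.

```python
import torch
import torch.nn.functional as F

def ptm_vqa_loss(pred, mos, f_per_model, h, cluster_ids, beta: float = 0.2):
    """Eq. (6): smooth-L1 regression loss plus the ICID terms.
    f_per_model: (B, N, D) transformed features; h: (B, D) integrated features."""
    l_reg = F.smooth_l1_loss(pred, mos)
    l_intra = torch.stack([intra_consistency_loss(f) for f in f_per_model]).mean()
    l_inter = inter_divisibility_loss(h, cluster_ids)
    return l_reg + beta * (l_intra + l_inter)
```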

3.4 Selection scheme through DBI

For the third concern, we observe an obvious difference between the clustering results of different pretrained models in Fig.2. Since the weights of the models are frozen both in the clustering test and in the subsequent training process, the divergence of clustering results can reflect their relevance to VQA tasks. We propose using the Davies-Bouldin Index (DBI) [14], which is commonly employed for evaluating clustering results, as the metric for model selection. In our setting, the DBI can be expressed as follows:

$$\psi=\frac{1}{K}\sum^{K}_{k=1}\max_{t\neq k}\frac{\mathbf{d}_{k}+\mathbf{d}_{t}}{\|\mathbf{c}_{k}-\mathbf{c}_{t}\|_{2}},\quad \text{where}~\mathbf{c}_{k}=\frac{1}{|\mathcal{S}_{k}|}\sum_{\mathcal{S}_{k}}\mathbf{z}^{(i)},\quad \mathbf{d}_{k}=\frac{1}{|\mathcal{S}_{k}|}\sum_{\mathcal{S}_{k}}\|\mathbf{z}^{(i)}-\mathbf{c}_{k}\|_{2}, \qquad (7)$$

where ๐œksubscript๐œ๐‘˜\mathbf{c}_{k}bold_c start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT is the centroid of cluster ๐’ฎksubscript๐’ฎ๐‘˜\mathcal{S}_{k}caligraphic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT for the set of extracted feature ๐ณ(i)superscript๐ณ๐‘–\mathbf{z}^{(i)}bold_z start_POSTSUPERSCRIPT ( italic_i ) end_POSTSUPERSCRIPT, dksubscript๐‘‘๐‘˜d_{k}italic_d start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT represents the average distance between each sample and its corresponding centroid. For the n๐‘›nitalic_n-th model, its DBI score can be noted as ฯˆnsubscript๐œ“๐‘›\psi_{n}italic_ฯˆ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT. A lower DBI indicates better clustering performance, which means that the pretrained model (e.g., ConvNeXt, Swin-Base, ir-CSN-152, CLIP in Fig.2) is more relevant to downstream VQA tasks. During training, the DBI scores computed offline can be used in the aggregation procedure as given in Equ.1, where ฯ‰nsubscript๐œ”๐‘›\omega_{n}italic_ฯ‰ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT can be replaced by 1/ฯˆn1subscript๐œ“๐‘›1/\psi_{n}1 / italic_ฯˆ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT. It means the models that are more relevant to the VQA task contribute more to the feature representation.

4 Experiments

4.1 Datasets and evaluation criteria

Datasets. Our method is evaluated on four public NR-VQA datasets: KoNViD-1k [22], LIVE-VQC [48], YouTube-UGC [56], and LSVQ [67]. In detail, KoNViD-1k contains 1,200 videos fairly filtered from the large public video dataset YFCC100M. The videos are 8 seconds long with 24/25/30 FPS and a resolution of 960×540; the MOS ranges from 1.22 to 4.64, and each video has 114 annotations to obtain a reliable MOS. LIVE-VQC consists of 585 videos with complex authentic distortions captured by 80 different users using 101 different devices, with 240 annotations for each video. YouTube-UGC has 1,380 UGC videos sampled from YouTube with a duration of 20 seconds and resolutions from 360P to 4K, with 123 annotations for each video. LSVQ is currently the largest VQA dataset (proposed in 2021), with 39,076 videos. None of the datasets contains pristine videos, so only NR methods can be evaluated on them. Following [65, 49], we randomly split each of the first three datasets into an 80% training set and a 20% testing set. For LSVQ, we follow the official split. We perform 10 repeated runs on each dataset using different splits and report the mean values of PLCC and SRCC.

Evaluation criteria. Pearson's Linear Correlation Coefficient (PLCC) and Spearman's Rank-Order Correlation Coefficient (SRCC) are selected as criteria to measure accuracy and monotonicity, respectively. They are in the range of [0, 1]. A larger PLCC means a more accurate numerical fit to the MOS scores, and a larger SRCC indicates a more accurate ranking between samples. Besides, the mean of PLCC and SRCC is also reported as a comprehensive criterion.
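Both criteria can be computed directly with SciPy, as in the small sketch below; note that no nonlinear (e.g., logistic) fitting step is applied before PLCC here, since none is mentioned above.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def plcc_srcc(pred: np.ndarray, mos: np.ndarray):
    """PLCC (accuracy of the numerical fit), SRCC (ranking monotonicity),
    and their mean, as reported in the result tables."""
    plcc = pearsonr(pred, mos)[0]
    srcc = spearmanr(pred, mos)[0]
    return plcc, srcc, (plcc + srcc) / 2.0

# e.g. plcc_srcc(np.array([3.1, 2.4, 4.0]), np.array([3.0, 2.7, 4.2]))
```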

Table 2: Comparison with existing VQA methods on KoNViD-1k, LIVE-VQC, and YouTube-UGC (PLCC / SRCC / Mean).

| Method | KoNViD-1k (PLCC/SRCC/Mean) | LIVE-VQC (PLCC/SRCC/Mean) | YouTube-UGC (PLCC/SRCC/Mean) | Weighted Avg (PLCC/SRCC/Mean) |
|---|---|---|---|---|
| VIIDEO [40] | 0.3030 / 0.2980 / 0.3005 | 0.2164 / 0.0332 / 0.1248 | 0.1534 / 0.0580 / 0.1057 | 0.2218 / 0.1444 / 0.1832 |
| NIQE [39] | 0.5530 / 0.5417 / 0.5473 | 0.6286 / 0.5957 / 0.6121 | 0.2776 / 0.2379 / 0.2577 | 0.4469 / 0.4365 / 0.4417 |
| BRISQUE [38] | 0.626 / 0.654 / 0.640 | 0.638 / 0.592 / 0.615 | 0.395 / 0.382 / 0.388 | 0.5275 / 0.5240 / 0.5257 |
| VSFA [30] | 0.744 / 0.755 / 0.749 | - | - | - |
| TLVQM [27] | 0.7688 / 0.7729 / 0.7708 | 0.8025 / 0.7988 / 0.8006 | 0.6590 / 0.6693 / 0.6641 | 0.7272 / 0.7325 / 0.7298 |
| RIRNet [8] | 0.7812 / 0.7755 / 0.7783 | 0.7982 / 0.7713 / 0.7847 | - | - |
| UGC-VQA [52] | 0.7803 / 0.7832 / 0.7817 | 0.7514 / 0.7522 / 0.7518 | 0.7733 / 0.7787 / 0.7760 | 0.7663 / 0.7732 / 0.7697 |
| CSPT [9] | 0.8062 / 0.8008 / 0.8035 | 0.8194 / 0.7989 / 0.8091 | - | - |
| RAPIQUE [53] | 0.8175 / 0.8031 / 0.8103 | 0.7863 / 0.7548 / 0.7705 | 0.7684 / 0.7591 / 0.7637 | 0.7925 / 0.7789 / 0.7857 |
| StarVQA [64] | 0.796 / 0.812 / 0.804 | 0.808 / 0.732 / 0.770 | - | - |
| BVQA* [29] | 0.8335 / 0.8362 / 0.8348 | 0.8415 / 0.8412 / 0.8413 | 0.8194 / 0.8312 / 0.8253 | 0.8352 / 0.8349 / 0.8351 |
| STDAM* [65] | 0.8415 / 0.8448 / 0.8431 | 0.8204 / 0.7931 / 0.8067 | 0.8297 / 0.8341 / 0.8319 | 0.8325 / 0.8337 / 0.8331 |
| Fast-VQA [63] | 0.855 / 0.859 / 0.857 | 0.844 / 0.823 / 0.834 | - | - |
| VQT [69] | 0.8684 / 0.8582 / 0.8633 | 0.8357 / 0.8238 / 0.8298 | 0.8514 / 0.8357 / 0.8436 | 0.8529 / 0.8421 / 0.8475 |
| PTM-VQA | 0.8718 / 0.8568 / 0.8643 | 0.8198 / 0.8110 / 0.8154 | 0.8570 / 0.8578 / 0.8574 | 0.8591 / 0.8454 / 0.8523 |

4.2 Implementation details

Our experiments are implemented with PyTorch [41] and MMAction2 [12], and are all conducted on one Nvidia V100 GPU, training for 60 epochs. For KoNViD-1k, we select ConvNeXt, ir-CSN-152, and CLIP as feature extractors. For LIVE-VQC, we use CLIP and TimeSformer. For YouTube-UGC, an extra Video Swin-Base is used together with the models selected for KoNViD-1k. For KoNViD-1k, we sample 16 frames with a frame interval of 2. As videos in LIVE-VQC and YouTube-UGC have longer durations, we use larger intervals for these two datasets. Since most augmentations would introduce extra interference to the quality of videos [25], we only use a center crop to produce inputs of size 224×224. During training, we use the AdamW optimizer with a weight decay of 0.02. Cosine annealing with a warmup of 2 epochs is adopted to control the learning rate. The dimension $D$ of the transformed features is set to 128, the margin $\alpha$ is set to 0.05, and $\beta$ is set to 0.2. By default, we select the checkpoint generated by the last iteration for evaluation. During inference, we follow a procedure similar to [1] and use 4×5 views: 4 clips are uniformly sampled from a video in the temporal domain, and for each clip we take 5 crops at the four corners and the center. The final score is computed as the average over all views. More details are given in Tab.1.
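A sketch of the 4×5-view inference procedure is given below, using torchvision's FiveCrop for the four-corners-plus-center crops; the uniform clip-sampling scheme and the `model` callable are illustrative assumptions.

```python
import torch
from torchvision.transforms import FiveCrop

@torch.no_grad()
def infer_4x5_views(video: torch.Tensor, model, num_clips: int = 4,
                    clip_len: int = 16, crop: int = 224) -> torch.Tensor:
    """4x5-view inference: 4 uniformly sampled clips, 5 crops each (four
    corners + center); the prediction is the mean over all 20 views.
    `video` is (T, C, H, W); `model` maps one cropped clip of shape
    (clip_len, C, crop, crop) to a scalar quality score."""
    T = video.shape[0]
    five_crop = FiveCrop(crop)
    scores = []
    for c in range(num_clips):
        start = int(c * (T - clip_len) / max(num_clips - 1, 1))  # uniform temporal sampling
        clip = video[start:start + clip_len]
        for view in five_crop(clip):                             # 5 spatial crops
            scores.append(model(view))
    return torch.stack(scores).mean()
```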

4.3 Comparison with SOTA methods

Table 3: Evaluation on the LSVQ dataset (PLCC / SRCC).

| Method | LSVQ-Test (PLCC/SRCC) | LSVQ-1080P (PLCC/SRCC) |
|---|---|---|
| BRISQUE [38] | 0.576 / 0.569 | 0.531 / 0.497 |
| VSFA [30] | 0.796 / 0.801 | 0.704 / 0.675 |
| TLVQM [27] | 0.774 / 0.772 | 0.616 / 0.589 |
| PVQ (w/o patch) [67] | 0.816 / 0.814 | 0.708 / 0.686 |
| PVQ (w patch) [67] | 0.828 / 0.827 | 0.739 / 0.711 |
| PTM-VQA-1k | 0.8536 / 0.8530 | 0.7784 / 0.7279 |
| PTM-VQA-VQC | 0.8637 / 0.8545 | 0.7817 / 0.7359 |
| PTM-VQA-UGC | 0.8443 / 0.8429 | 0.7769 / 0.7300 |

We compare with existing VQA methods on three datasets. As shown in Tab.2, our method obtains competitive results on all three. Compared with traditional methods that rely on statistical regularities (e.g., VIIDEO [40], NIQE [39], and BRISQUE [38]), PTM-VQA models outperform by large margins. Compared with deep learning-based methods that apply well-designed networks (e.g., TLVQM [27], StarVQA [64]), PTM-VQA still obtains higher performance. Notably, VSFA [30] and RIRNet [8] also adopt pretrained models that contain content-dependency or motion information and finetune them on VQA tasks; PTM-VQA demonstrates that features extracted directly from frozen pretrained models can achieve better results. Compared with the two best SOTA methods, BVQA [29] and STDAM [65], which utilize extra IQA datasets, PTM-VQA shows that transferring knowledge from pretrained models can achieve competitive results without training on additional data.

To assess the generalizability of the selected combinations, we evaluate on the largest LSVQ dataset using the three combinations utilized in KoNViD-1k, LIVE-VQC, and YouTube-UGC. As given in Tab.3, PTM-VQA models demonstrate a significant performance advantage over existing methods, indicating the benefits of leveraging pretrained models with a larger amount of data.

We also compare the inference time cost with open-source methods on a 1080P video (100 repeated runs). The inference time costs are 75s (BRISQUE), 248s (TLVQM), 117s (VSFA), 0.12s (StarVQA), and 2.45s (BVQA), respectively. Thanks to the reduced input dimensions (e.g., frame sampling, center cropping) and model selection via DBI, PTM-VQA models do not significantly increase inference time over StarVQA and BVQA, as given in Tab.1. Meanwhile, due to the different numbers and compositions of pretrained models, the computational cost of PTM-VQA models varies. Even so, the largest PTM-VQA can process a high-resolution video in about 1s, and the smaller models can process roughly 6 to 7 videos per second.

Cross-database comparison.

To emphasize the validity and generalizability of our method, we perform a cross-database evaluation in Tab.4. Models trained on LSVQ are tested directly on the much smaller KoNViD-1k and LIVE-VQC datasets. It can be seen that PTM-VQA transfers very well to both datasets, highlighting its general efficacy.

Table 4: Cross-database evaluation of models trained on LSVQ (PLCC / SRCC).

| Method | KoNViD-1k (PLCC/SRCC) | LIVE-VQC (PLCC/SRCC) |
|---|---|---|
| BRISQUE [38] | 0.647 / 0.646 | 0.536 / 0.524 |
| VSFA [30] | 0.794 / 0.784 | 0.772 / 0.734 |
| TLVQM [27] | 0.724 / 0.732 | 0.691 / 0.670 |
| PVQ (w/o patch) [67] | 0.781 / 0.781 | 0.776 / 0.747 |
| PVQ (w patch) [67] | 0.795 / 0.791 | 0.807 / 0.770 |
| PTM-VQA-1k | 0.8249 / 0.8164 | 0.7772 / 0.7215 |
| PTM-VQA-VQC | 0.8292 / 0.8243 | 0.7772 / 0.7231 |
| PTM-VQA-UGC | 0.8303 / 0.8223 | 0.7852 / 0.7367 |

4.4 Ablation studies

We conduct experimental analysis to evaluate the effectiveness of each component. By default, experiments are performed following the best configurations in Sec.4.2.

Table 5: Ablation on different loss constraints on KoNViD-1k (✓ indicates the loss term is enabled).

| $\mathcal{L}_1$ | $\mathcal{L}_{\text{intra}}$ | $\mathcal{L}_{\text{inter}}$ | $\mathcal{L}_{\text{tri}}$ | PLCC | SRCC |
|---|---|---|---|---|---|
| ✓ | ✓ | ✓ | | 0.8718 | 0.8568 |
| ✓ | ✓ | | | 0.7968 | 0.7850 |
| ✓ | | | | 0.7867 | 0.7655 |
| ✓ | | ✓ | | 0.8545 | 0.8299 |
| ✓ | | | ✓ | 0.8172 | 0.7707 |
K๐พKitalic_KintervalsPLCCSRCC
2๐’ฎ1subscript๐’ฎ1\mathcal{S}_{1}caligraphic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT=[1,3), ๐’ฎ2subscript๐’ฎ2\mathcal{S}_{2}caligraphic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT=[3, 5]0.82770.8066
4๐’ฎ1subscript๐’ฎ1\mathcal{S}_{1}caligraphic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT=[1,2), ๐’ฎ2subscript๐’ฎ2\mathcal{S}_{2}caligraphic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT=[2,3),0.84310.8012
๐’ฎ3subscript๐’ฎ3\mathcal{S}_{3}caligraphic_S start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT=[3,4), ๐’ฎ4subscript๐’ฎ4\mathcal{S}_{4}caligraphic_S start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT=[4,5]
6๐’ฎ1subscript๐’ฎ1\mathcal{S}_{1}caligraphic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT=[1,2), ๐’ฎ2subscript๐’ฎ2\mathcal{S}_{2}caligraphic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT=[2, 2.5), ๐’ฎ3subscript๐’ฎ3\mathcal{S}_{3}caligraphic_S start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT=[2.5, 3),0.87180.8568
๐’ฎ4subscript๐’ฎ4\mathcal{S}_{4}caligraphic_S start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT=[3, 3.5), ๐’ฎ5subscript๐’ฎ5\mathcal{S}_{5}caligraphic_S start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT=[3.5, 4), ๐’ฎ6subscript๐’ฎ6\mathcal{S}_{6}caligraphic_S start_POSTSUBSCRIPT 6 end_POSTSUBSCRIPT=[4, 5]

Ablation on different constraints.

As given in Tab.5, directly using the triplet loss does not obtain satisfying results, and when either or both constraints are absent, performance degrades significantly. This proves the effectiveness of the intra-consistency constraint in transferring knowledge from different pretrained models and of the inter-divisibility constraint in generating stable predictions.

Ablation on the clustering settings.

Tab.6 gives the results with different numbers of clusters. When $K$ is 2, videos are simply classified as low-quality and high-quality. When $K$ is 4, videos are evenly divided into four parts on a scale of 1.0 to 5.0. Since there is relatively little data at both endpoints of the MOS scale, the 6-split setting uses a finer-grained division in the middle range. Because the number of samples per cluster within a batch must be ensured, larger numbers of clusters are not attempted. The best result is obtained when $K$ is 6.

Table 7: Ablation on the aggregation coefficient $\omega_n$ in Eq. 1.

| Dataset | $\omega_n$ | PLCC | SRCC |
|---|---|---|---|
| KoNViD-1k | $1/N$ | 0.8631 | 0.8521 |
| KoNViD-1k | $1/\psi$ | 0.8718 | 0.8568 |
| LIVE-VQC | $1/N$ | 0.8205 | 0.8107 |
| LIVE-VQC | $1/\psi$ | 0.8198 | 0.8110 |
| YouTube-UGC | $1/N$ | 0.8427 | 0.8446 |
| YouTube-UGC | $1/\psi$ | 0.8570 | 0.8578 |

Ablation on the DBI strategy.

The effectiveness of DBI can be evaluated in two aspects. (1) Model selection strategy: we performed 10 experiments on KoNViD-1k with randomly selected pretrained models, resulting in a PLCC of 0.7917±0.0578 and an SRCC of 0.7583±0.0492. Compared with the DBI-based strategy, the performance is poor and the variance is high. (2) Feature integration: Tab.7 shows the effectiveness of DBI in guiding the integration of different models, allowing more relevant models to contribute more.

5 Conclusion

In this paper, we proposed PTM-VQA, which utilizes in-the-wild pretrained models as feature extractors for NR-VQA, transferring quality-related knowledge from diverse pre-text domains. DBI scores are used to select candidates from the large number of available pretrained models. To constrain features with large diversity into a unified quality-aware latent space and to handle outliers, we propose a new ICID loss. With a small computational cost, PTM-VQA obtains SOTA results on widely used benchmarks. Experiments on a larger dataset and cross-database evaluations further demonstrate its generalizability.

Acknowledgments

This work was partly supported by the Natural Science Foundation of China (NSFC) under Grant No. 62306309.

References

  • Arnab et al. [2021] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. CoRR, abs/2103.15691, 2021.
  • Barnett et al. [2018] Thomas Barnett, Shruti Jain, Usha Andra, and Taru Khurana. Cisco visual networking index (VNI) complete forecast update, 2017–2022. Americas/EMEAR Cisco Knowledge Network (CKN) Presentation, 2018.
  • Bertasius et al. [2021] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, pages 813–824. PMLR, 2021.
  • Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020.
  • Carreira et al. [2019] João Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. A short note on the Kinetics-700 human action dataset. CoRR, abs/1907.06987, 2019.
  • Chen et al. [2022a] Baoliang Chen, Lingyu Zhu, Chenqi Kong, Hanwei Zhu, Shiqi Wang, and Zhu Li. No-reference image quality assessment by hallucinating pristine features. IEEE Trans. Image Process., 2022a.
  • Chen et al. [2022b] Baoliang Chen, Lingyu Zhu, Guo Li, Fangbo Lu, Hongfei Fan, and Shiqi Wang. Learning generalized spatial-temporal deep feature representation for no-reference video quality assessment. IEEE Trans. Circuits Syst. Video Technol., 32(4):1903–1916, 2022b.
  • Chen et al. [2020] Pengfei Chen, Leida Li, Lei Ma, Jinjian Wu, and Guangming Shi. RIRNet: Recurrent-in-recurrent network for video quality assessment. In ACM Multimedia, pages 834–842. ACM, 2020.
  • Chen et al. [2022c] Pengfei Chen, Leida Li, Jinjian Wu, Weisheng Dong, and Guangming Shi. Contrastive self-supervised pre-training for video quality assessment. IEEE Trans. Image Process., 31:458–471, 2022c.
  • Chen et al. [2015] Yanjiao Chen, Kaishun Wu, and Qian Zhang. From QoS to QoE: A tutorial on video quality assessment. IEEE Commun. Surv. Tutorials, 17(2):1126–1165, 2015.
  • Chikkerur et al. [2011] Shyamprasad Chikkerur, Vijay Sundaram, Martin Reisslein, and Lina J. Karam. Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Trans. Broadcast., 57(2):165–182, 2011.
  • Contributors [2020] MMAction2 Contributors. OpenMMLab's next generation video understanding toolbox and benchmark. https://github.com/open-mmlab/mmaction2, 2020.
  • Culibrk et al. [2009] Dubravko Culibrk, Dragan Kukolj, Petar Vasiljevic, Maja Pokric, and Vladimir Zlokolica. Feature selection for neural-network based no-reference video quality assessment. In ICANN (2), pages 633–642. Springer, 2009.
  • Davies and Bouldin [1979] David L. Davies and Donald W. Bouldin. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell., 1(2):224–227, 1979.
  • Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE Computer Society, 2009.
  • Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), pages 4171–4186. Association for Computational Linguistics, 2019.
  • Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR. OpenReview.net, 2021.
  • Fan et al. [2019] Qiang Fan, Wang Luo, Yuan Xia, Guozhi Li, and Daojing He. Metrics and methods of video quality assessment: A brief review. Multim. Tools Appl., 78(22):31019–31033, 2019.
  • Feichtenhofer [2020] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. In CVPR, pages 200–210. Computer Vision Foundation / IEEE, 2020.
  • Golestaneh et al. [2022] S. Alireza Golestaneh, Saba Dadsetan, and Kris M. Kitani. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In WACV, pages 1220–1230, 2022.
  • He et al. [2021] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. CoRR, abs/2111.06377, 2021.
  • Hosu et al. [2017] Vlad Hosu, Franz Hahn, Mohsen Jenadeleh, Hanhe Lin, Hui Men, Tamás Szirányi, Shujun Li, and Dietmar Saupe. The Konstanz natural video database (KoNViD-1k). In QoMEX, pages 1–6. IEEE, 2017.
  • Karpathy et al. [2014] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
  • Kay et al. [2017] Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset. CoRR, abs/1705.06950, 2017.
  • Ke et al. [2021] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. MUSIQ: Multi-scale image quality transformer. In ICCV, pages 5128–5137. IEEE, 2021.
  • Klink and Uhl [2020] Janusz Klink and Tadeus Uhl. Video quality assessment: Some remarks on selected objective metrics. In SoftCOM, pages 1–6. IEEE, 2020.
  • Korhonen [2019] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE Trans. Image Process., 28(12):5923–5938, 2019.
  • Kossi et al. [2022] Koffi Kossi, Stéphane Coulombe, Christian Desrosiers, and Ghyslain Gagnon. No-reference video quality assessment using distortion learning and temporal attention. IEEE Access, 10:41010–41022, 2022.
  • Li et al. [2022] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE Trans. Circuits Syst. Video Technol., 32(9):5944–5958, 2022.
  • Li et al. [2019] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In ACM Multimedia, pages 2351–2359. ACM, 2019.
  • Li et al. [2021a] Dingquan Li, Tingting Jiang, and Ming Jiang. Unified quality assessment of in-the-wild videos with mixed datasets training. Int. J. Comput. Vis., 129(4):1238–1257, 2021a.
  • Li et al. [2021b] Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollár, Kaiming He, and Ross B. Girshick. Benchmarking detection transfer learning with vision transformers. CoRR, abs/2111.11429, 2021b.
  • Liu et al. [2023] Hongbo Liu, Mingda Wu, Kun Yuan, Ming Sun, Yansong Tang, Chuanchuan Zheng, Xing Wen, and Xiu Li. Ada-DQA: Adaptive diverse quality-aware feature acquisition for video quality assessment. In ACM Multimedia, pages 6695–6704. ACM, 2023.
  • Liu et al. [2018] Wentao Liu, Zhengfang Duanmu, and Zhou Wang. End-to-end blind quality assessment of compressed videos using deep neural networks. In ACM Multimedia, pages 546–554. ACM, 2018.
  • Liu et al. [2017] Xialei Liu, Joost van de Weijer, and Andrew D. Bagdanov. RankIQA: Learning from rankings for no-reference image quality assessment. In ICCV, pages 1040–1049. IEEE Computer Society, 2017.
  • Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, pages 9992–10002. IEEE, 2021.
  • Liu et al. [2022] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. CoRR, abs/2201.03545, 2022.
  • Mittal et al. [2012] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process., 21(12):4695–4708, 2012.
  • Mittal et al. [2013] Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik. Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett., 20(3):209–212, 2013.
  • Mittal et al. [2016] Anish Mittal, Michele A. Saad, and Alan C. Bovik. A completely blind video integrity oracle. IEEE Trans. Image Process., 25(1):289–300, 2016.
  • Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, pages 8024–8035, 2019.
  • Qian et al. [2021] Lihui Qian, Tianxiang Pan, Yunfei Zheng, Jiajie Zhang, Mading Li, Bing Yu, and Bin Wang. No-reference nonuniform distorted video quality assessment based on deep multiple instance learning. IEEE Multim., 28(1):28–37, 2021.
  • Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763. PMLR, 2021.
  • Rec [2006] ITU-T Rec. P.800.1, Mean opinion score (MOS) terminology. International Telecommunication Union, Geneva, 2006.
  • Saad et al. [2014] Michele A. Saad, Alan C. Bovik, and Christophe Charrier. Blind prediction of natural video quality. IEEE Trans. Image Process., 23(3):1352–1365, 2014.
  • Schroff et al. [2015] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, pages 815–823. IEEE Computer Society, 2015.
  • Shahid et al. [2014] Muhammad Shahid, Andreas Rossholm, Benny Lövström, and Hans-Jürgen Zepernick. No-reference image and video quality assessment: A classification and review of recent approaches. EURASIP J. Image Video Process., 2014:40, 2014.
  • Sinno and Bovik [2019] Zeina Sinno and Alan Conrad Bovik. Large-scale study of perceptual video quality. IEEE Trans. Image Process., 28(2):612–627, 2019.
  • Su et al. [2020] Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In CVPR, pages 3664–3673. Computer Vision Foundation / IEEE, 2020.
  • Tran et al. [2019] Du Tran, Heng Wang, Matt Feiszli, and Lorenzo Torresani. Video classification with channel-separated convolutional networks. In ICCV, pages 5551–5560. IEEE, 2019.
  • Tsai et al. [2007] Ming-Feng Tsai, Tie-Yan Liu, Tao Qin, Hsin-Hsi Chen, and Wei-Ying Ma. FRank: A ranking method with fidelity loss. In SIGIR, pages 383–390. ACM, 2007.
  • Tu et al. [2021a] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C. Bovik. UGC-VQA: Benchmarking blind video quality assessment for user generated content. IEEE Trans. Image Process., 30:4449–4464, 2021a.
  • Tu et al. [2021b] Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C. Bovik. RAPIQUE: Rapid and accurate video quality prediction of user generated content. CoRR, abs/2101.10955, 2021b.
  • van der Maaten [2009] Laurens van der Maaten. Learning a parametric embedding by preserving local structure. In AISTATS, pages 384–391. JMLR.org, 2009.
  • Wang et al. [2019a] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. Multi-similarity loss with general pair weighting for deep metric learning. In CVPR, pages 5022–5030. Computer Vision Foundation / IEEE, 2019a.
  • Wang et al. [2019b] Yilin Wang, Sasi Inguva, and Balu Adsumilli. YouTube UGC dataset for video compression research. In MMSP, pages 1–5. IEEE, 2019b.
  • Wang et al. [2021] Yilin Wang, Junjie Ke, Hossein Talebi, Joong Gon Yim, Neil Birkbeck, Balu Adsumilli, Peyman Milanfar, and Feng Yang. Rich features for perceptual quality assessment of UGC videos. In CVPR, pages 13435–13444. Computer Vision Foundation / IEEE, 2021.
  • Wang and Li [2007] Zhou Wang and Qiang Li. Video quality assessment using a statistical model of human visual speed perception. JOSA A, 24(12):B61–B69, 2007.
  • Weinberger and Saul [2009] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res., 10:207–244, 2009.
  • Wightman [2019] Ross Wightman. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
  • Winkler [1999] Stefan Winkler. Issues in vision modeling for perceptual video quality assessment. Signal Process., 78(2):231–252, 1999.
  • Wortsman et al. [2022] Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In ICML, pages 23965–23998. PMLR, 2022.
  • Wu et al. [2022] Haoning Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. FAST-VQA: Efficient end-to-end video quality assessment with fragment sampling. In ECCV (6), pages 538–554. Springer, 2022.
  • Xing et al. [2021] Fengchuang Xing, Yuan-Gen Wang, Hanpin Wang, Leida Li, and Guopu Zhu. StarVQA: Space-time attention for video quality assessment. CoRR, abs/2108.09635, 2021.
  • Xu et al. [2021] Jiahua Xu, Jing Li, Xingguang Zhou, Wei Zhou, Baichao Wang, and Zhibo Chen. Perceptual quality assessment of internet videos. In ACM Multimedia, pages 1248–1257. ACM, 2021.
  • Yang et al. [2005] Fuzheng Yang, Shuai Wan, Yilin Chang, and Hong Ren Wu. A novel objective no-reference metric for digital video quality assessment. IEEE Signal Process. Lett., 12(10):685–688, 2005.
  • Ying et al. [2021] Zhenqiang Ying, Maniratnam Mandal, Deepti Ghadiyaram, and Alan C. Bovik. Patch-VQ: 'Patching up' the video quality problem. In CVPR, pages 14019–14029. Computer Vision Foundation / IEEE, 2021.
  • You [2021] Junyong You. Long short-term convolutional transformer for no-reference video quality assessment. In ACM Multimedia, pages 2112–2120. ACM, 2021.
  • Yuan et al. [2023] Kun Yuan, Zishang Kong, Chuanchuan Zheng, Ming Sun, and Xing Wen. Capturing co-existing distortions in user-generated content for no-reference video quality assessment. In ACM Multimedia, pages 1098–1107. ACM, 2023.
  • Zhang et al. [2021] Weixia Zhang, Kede Ma, Guangtao Zhai, and Xiaokang Yang. Uncertainty-aware blind image quality assessment in the laboratory and wild. IEEE Trans. Image Process., 30:3474–3486, 2021.