418 research outputs found

    Calculation of thermodynamic properties of liquid alkali metals by the first-principle-pseudopotential and Weeks-Chandler-Andersen methods

    A study was conducted to demonstrate the calculation of thermodynamic properties of liquid alkali metals by the first-principles pseudopotential and Weeks-Chandler-Andersen (WCA) methods. The pseudopotential was used to calculate the thermodynamic properties of liquid alkali metals within the framework of the variational method. Using the true wave functions of the conduction electrons also made it possible to correctly account for the energy shifts of the core electrons caused by their interaction with the potential formed by the conduction-electron charge distribution. The results of the WCA calculations agreed well with the experimental data; for Li and Na, this agreement was better than that obtained with the variational method.
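
    For readers unfamiliar with the WCA construction, the sketch below shows the standard Weeks-Chandler-Andersen split of a pair potential into a purely repulsive reference part and an attractive perturbation. A Lennard-Jones potential is used only as an illustrative stand-in; the paper itself works with pseudopotential-derived effective pair potentials for the alkali metals, and all function names here are placeholders.

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Illustrative pair potential; the paper uses pseudopotential-derived ones."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def wca_split(u, r, r_min, u_min):
    """Standard WCA decomposition of a pair potential u(r).

    Reference (repulsive) part:  u0(r) = u(r) - u(r_min) for r <= r_min, else 0.
    Perturbation (attractive):   u1(r) = u(r_min)        for r <= r_min, else u(r).
    """
    u_r = u(r)
    inside = r <= r_min
    u0 = np.where(inside, u_r - u_min, 0.0)
    u1 = np.where(inside, u_min, u_r)
    return u0, u1

r = np.linspace(0.9, 3.0, 200)       # radii in units of sigma
r_min = 2.0 ** (1.0 / 6.0)           # position of the Lennard-Jones minimum
u0, u1 = wca_split(lennard_jones, r, r_min, u_min=-1.0)
# The two parts sum back to the full potential everywhere.
assert np.allclose(u0 + u1, lennard_jones(r))
```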

    Bit-depth enhancement detection for compressed video

    In recent years, display intensity and contrast have increased considerably. Many displays support high dynamic range (HDR) and 10-bit color depth. Since high bit depth is an emerging technology, video content is still largely shot and transmitted with a bit depth of 8 bits or less per color component. Insufficient bit depth produces distortions called false contours, or banding, which are visible on high-contrast screens. To deal with such distortions, researchers have proposed algorithms for bit-depth enhancement (dequantization). Such techniques convert videos with low bit depth (LBD) to videos with high bit depth (HBD). The quality of converted LBD video, however, is usually lower than that of the original HBD video, and many consumers prefer to keep the original HBD versions. In this paper, we propose an algorithm to determine whether a video has undergone bit-depth conversion before compression. This problem is complex: it involves detecting the outcomes of different dequantization algorithms in the presence of compression, which strongly affects the least-significant bits (LSBs) of the video frames. Our algorithm can detect bit-depth enhancement and demonstrates good generalization capability, as it is able to determine whether a video has been processed by dequantization algorithms absent from the training dataset.
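
    As a minimal illustration of what such a detector has to recognize, the sketch below reduces a toy 10-bit frame to 8 bits and then applies the simplest possible dequantization (bit replication). The function names and parameters are placeholders; this is not the paper's detection algorithm, which has to work on compressed video where the LSBs are further disturbed.

```python
import numpy as np

def reduce_bit_depth(frame_10bit: np.ndarray, drop_bits: int = 2) -> np.ndarray:
    """Simulate LBD content: drop the least-significant bits of a 10-bit frame."""
    return frame_10bit >> drop_bits          # 8-bit values stored in uint16

def dequantize_by_replication(frame_8bit: np.ndarray, add_bits: int = 2) -> np.ndarray:
    """Naive dequantization: shift up and replicate the top bits into the new LSBs.

    Real dequantization algorithms interpolate smooth gradients instead, and the
    detector's task is to recognize their traces after compression.
    """
    return (frame_8bit << add_bits) | (frame_8bit >> (8 - add_bits))

rng = np.random.default_rng(0)
hbd = rng.integers(0, 1024, size=(64, 64), dtype=np.uint16)   # toy 10-bit frame
lbd = reduce_bit_depth(hbd)
restored = dequantize_by_replication(lbd)
# The restored frame stays on a coarse grid of levels, which causes banding
# on smooth gradients.
print(np.unique(restored).size, "distinct levels vs", np.unique(hbd).size)
```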

    IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics

    No-reference image- and video-quality metrics are widely used in video-processing benchmarks, yet the robustness of learning-based metrics under video attacks has not been widely studied. Besides being successful, attacks that can be employed in video-processing benchmarks must be fast and imperceptible. This paper introduces an Invisible One-Iteration (IOI) adversarial attack on no-reference image- and video-quality metrics. We compared our method against eight prior approaches using image and video datasets via objective and subjective tests. Our method exhibited superior visual quality across various attacked metric architectures while maintaining comparable attack success and speed. We made the code available on GitHub: https://github.com/katiashh/ioi-attack. Comment: Accepted to ICML 2024.
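
    The abstract does not detail the attack's construction; conceptually, though, a one-iteration attack on a differentiable no-reference metric is close to a single gradient-sign (FGSM-style) step that pushes the predicted score up. The sketch below shows that generic step, not the IOI method itself (whose contribution is keeping the perturbation invisible); `quality_model` stands in for any differentiable NR metric and is an assumption of this example.

```python
import torch

def one_step_score_attack(frame: torch.Tensor,
                          quality_model: torch.nn.Module,
                          epsilon: float = 2.0 / 255.0) -> torch.Tensor:
    """Single gradient-sign step that raises a differentiable NR metric's score.

    `quality_model` is a placeholder for any no-reference metric mapping an image
    batch (N, C, H, W) in [0, 1] to a scalar quality score per image.
    """
    frame = frame.clone().detach().requires_grad_(True)
    score = quality_model(frame).mean()
    score.backward()
    # Move each pixel in the direction that increases the predicted quality.
    adversarial = frame + epsilon * frame.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```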

    Search for Environmentally Friendly Technology for Processing Molybdenum Concentrates

    At the Institute of Metallurgy, Ural Branch of the Russian Academy of Sciences, a search for an environmentally friendly technology has been carried out based on oxidative annealing of the molybdenum sulfide concentrate from the new Yuzhno-Shameiskoe deposit with calcium-containing additives. As a result, the sulfur dioxide is bound as calcium sulfate and does not evolve into the gas phase. In the calcine, molybdenum and rhenium are fully retained as calcium molybdate and calcium perrhenate. The principles of selectively leaching molybdenum and rhenium from the calcine have been studied, along with the processes of their recovery from the resulting solutions.

    Can No-Reference Quality-Assessment Methods Serve as Perceptual Losses for Super-Resolution?

    Perceptual losses play an important role in constructing deep-neural-network-based methods by increasing the naturalness and realism of processed images and videos. The use of perceptual losses is often limited to LPIPS, a full-reference method. Even though deep no-reference image-quality-assessment methods are excellent at predicting human judgment, little research has examined their incorporation into loss functions. This paper investigates direct optimization of several video-super-resolution models using no-reference image-quality-assessment methods as perceptual losses. Our experimental results show that straightforward optimization of these methods produces artifacts, but a special training procedure can mitigate them. Comment: 4 pages, 3 figures. The first two authors contributed equally to this work.
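
    A minimal sketch of the kind of training setup the paper investigates: the (negated) score of a differentiable no-reference quality model is added to an ordinary pixel loss. `sr_model`, `nr_metric`, and the loss weight are placeholders, and the paper's special training procedure for suppressing the resulting artifacts is not reproduced here.

```python
import torch
import torch.nn.functional as F

def perceptual_training_step(sr_model: torch.nn.Module,
                             nr_metric: torch.nn.Module,
                             optimizer: torch.optim.Optimizer,
                             lr_frames: torch.Tensor,
                             hr_frames: torch.Tensor,
                             nr_weight: float = 0.1) -> float:
    """One optimization step with an NR quality score used as an auxiliary loss."""
    optimizer.zero_grad()
    sr_frames = sr_model(lr_frames)
    pixel_loss = F.l1_loss(sr_frames, hr_frames)
    # A higher metric score means better predicted quality, so minimize its negative.
    nr_loss = -nr_metric(sr_frames).mean()
    loss = pixel_loss + nr_weight * nr_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```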

    BASED: Benchmarking, Analysis, and Structural Estimation of Deblurring

    This paper discusses the challenges of evaluating the quality of deblurring methods and proposes a reduced-reference metric based on machine learning. Traditional quality-assessment metrics such as PSNR and SSIM are common for this task, but not only do they correlate poorly with subjective assessments, they also require ground-truth (GT) frames, which can be difficult to obtain in the case of deblurring. To develop and evaluate our metric, we created a new motion-blur dataset using a beam splitter. The setup captured various motion types with a static camera, since most scenes in existing datasets are blurred by camera motion. We also conducted two large subjective comparisons to aid in metric development. Our resulting metric requires no GT frames, and it correlates well with subjective human perception of blur.
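
    The learned metric itself is not described in the abstract. As a baseline illustration of a reference-free blur estimate, the sketch below uses the classic variance-of-Laplacian sharpness measure, which likewise needs no GT frame but, unlike the proposed metric, correlates only loosely with perceived quality; it is not the paper's method.

```python
import numpy as np

# 3x3 Laplacian kernel used as a simple high-frequency detector.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def laplacian_variance(gray: np.ndarray) -> float:
    """Classic no-reference sharpness proxy: variance of the Laplacian response.

    `gray` is a 2-D float array; lower values indicate stronger blur.
    """
    h, w = gray.shape
    response = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            response += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(response.var())
```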