
    Cyclin D1 and p16 expression in recurrent nasopharyngeal carcinoma

    Abstract

    Background: Cyclin D1 and p16 are involved in the regulation of the G1 checkpoint and may play an important role in the tumorigenesis of nasopharyngeal carcinoma (NPC). Previous studies have examined the expression levels of cyclin D1 and p16 in primary untreated NPC, but no such information is available for recurrent NPC. In this study we set out to examine the expression levels of cyclin D1 and p16 in recurrent NPC that had failed previous treatment with radiation +/- chemotherapy.

    Patients and methods: A total of 42 patients underwent salvage nasopharyngectomy from 1984 to 2001 for recurrent NPC after treatment failure with radiation +/- chemotherapy. Twenty-seven pathologic specimens were available for immunohistochemical study using antibodies against cyclin D1 and p16.

    Results: Positive expression of cyclin D1 was observed in 7 of 27 recurrent NPC specimens (26%), while positive p16 expression was seen in only 1 of 27 (4%).

    Conclusion: While the level of cyclin D1 expression in recurrent NPC was similar to that of previously untreated head and neck cancer, the level of p16 expression in recurrent NPC samples was much lower than that reported for previously untreated cancer. The finding that almost all (96%) of the recurrent NPC specimens lacked p16 expression suggests that loss of p16 may confer a survival advantage by making cancer cells more resistant to conventional treatment with radiation +/- chemotherapy. Further research is warranted to investigate the clinical use of p16 both as a prognostic marker and as a potential therapeutic target.
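The reported positive-expression rates follow directly from the specimen counts; a minimal sketch of the arithmetic (counts taken from the abstract above):

```python
# Positive immunohistochemical staining counts among 27 recurrent NPC specimens
total = 27
cyclin_d1_positive = 7
p16_positive = 1

# Percentage of specimens staining positive, rounded to a whole percent
cyclin_d1_rate = round(100 * cyclin_d1_positive / total)  # 26
p16_rate = round(100 * p16_positive / total)              # 4
p16_negative_rate = 100 - p16_rate                        # 96, i.e. specimens lacking p16

print(cyclin_d1_rate, p16_rate, p16_negative_rate)  # 26 4 96
```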

    Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation

    Visually realistic GAN-generated facial images raise obvious concerns about potential misuse. Many effective forensic algorithms have been developed in recent years to detect such synthetic images. It is therefore important to assess the vulnerability of such forensic detectors to adversarial attacks. In this paper, we propose a new black-box attack method against GAN-generated image detectors. A novel contrastive learning strategy is adopted to train an encoder-decoder anti-forensic model under a contrastive loss function. GAN images and their simulated real counterparts are constructed as positive and negative samples, respectively. Leveraging the trained attack model, an imperceptible contrastive perturbation can be applied to input synthetic images to remove the GAN fingerprint to some extent. As such, existing GAN-generated image detectors are expected to be deceived. Extensive experimental results verify that the proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs. High visual quality of the attacked images is also achieved. The source code will be available at https://github.com/ZXMMD/BAttGAND.
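The abstract does not give the loss formula; as a hedged illustration, the positive/negative construction it describes resembles a standard InfoNCE-style contrastive loss, sketched below in plain NumPy (the function name, feature dimensions, and temperature value are assumptions, not details from the paper):

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss.

    anchor, positive: 1-D feature vectors (e.g. encoder outputs for a GAN
    image and its simulated real counterpart); negatives: 2-D array of
    feature vectors, one row per negative sample.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
a = rng.normal(size=8)
negs = rng.normal(size=(4, 8))
# A perfectly aligned positive yields a much lower loss than a random one,
# which is what drives the anchor toward its "real" counterpart in training.
loss_aligned = infonce_loss(a, a, negs)
loss_random = infonce_loss(a, rng.normal(size=8), negs)
print(loss_aligned < loss_random)
```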

    High pointwise emergence and Katok's conjecture for systems with non-uniform structure

    Recently, Kiriki, Nakano and Soma introduced a concept called pointwise emergence as a new quantitative perspective on the non-existence of averages for dynamical systems. In the present paper, we consider the set of points with high pointwise emergence for systems with non-uniform structure and prove that this set carries full topological pressure. For the proof of this result, we show that such systems have ergodic measures of arbitrary intermediate pressures.

    Trusted Video Inpainting Localization via Deep Attentive Noise Learning

    Digital video inpainting techniques have been substantially improved by deep learning in recent years. Although inpainting was originally designed to repair damaged areas, it can also be used maliciously to remove important objects and create false scenes and facts. It is therefore important to identify inpainted regions blindly. In this paper, we present a Trusted Video Inpainting Localization network (TruVIL) with excellent robustness and generalization ability. Observing that high-frequency noise can effectively unveil inpainted regions, we design deep attentive noise learning in multiple stages to capture the inpainting traces. First, a multi-scale noise extraction module based on 3D High-Pass (HP3D) layers creates the noise modality from the input RGB frames. Then, the correlation between these two complementary modalities is explored by a cross-modality attentive fusion module to facilitate mutual feature learning. Lastly, spatial details are selectively enhanced by an attentive noise decoding module to boost the localization performance of the network. To prepare enough training samples, we also build a frame-level video object segmentation dataset of 2500 videos with pixel-level annotations for all frames. Extensive experimental results validate the superiority of TruVIL over state-of-the-art methods. In particular, both quantitative and qualitative evaluations on various inpainted videos verify the remarkable robustness and generalization ability of TruVIL. Code and dataset will be available at https://github.com/multimediaFor/TruVIL.
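The abstract does not specify the HP3D filter kernels; as a hedged sketch of the kind of signal such a module operates on, a spatial high-pass noise residual can be obtained by subtracting a local average from each frame (the 3x3 box kernel here is an illustrative choice, not the paper's):

```python
import numpy as np

def highpass_residual(frames):
    """Per-frame spatial high-pass residual for a video clip.

    frames: array of shape (T, H, W), grayscale.  Subtracting a 3x3
    box-blurred copy from each frame keeps only high-frequency content,
    where inpainting traces tend to be most visible.
    """
    T, H, W = frames.shape
    padded = np.pad(frames, ((0, 0), (1, 1), (1, 1)), mode="edge")
    blurred = np.zeros((T, H, W), dtype=float)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[:, dy:dy + H, dx:dx + W]
    blurred /= 9.0
    return frames - blurred

clip = np.random.default_rng(1).normal(size=(4, 16, 16))
residual = highpass_residual(clip)
# A constant (flat) clip has no high-frequency content, so its residual is ~0.
flat = highpass_residual(np.ones((2, 8, 8)))
print(np.allclose(flat, 0))  # True
```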

    Video Inpainting Localization with Contrastive Learning

    Deep video inpainting is typically used as a malicious manipulation to remove important objects and create fake videos, so it is important to identify the inpainted regions blindly. This letter proposes a simple yet effective forensic scheme for Video Inpainting LOcalization with ContrAstive Learning (ViLocal). Specifically, a 3D Uniformer encoder is applied to the video noise residual to learn effective spatiotemporal forensic features. To enhance the discriminative power, supervised contrastive learning is adopted to capture the local inconsistency of inpainted videos by attracting/repelling positive/negative pristine and forged pixel pairs. A pixel-wise inpainting localization map is yielded by a lightweight convolutional decoder with a specialized two-stage training strategy. To prepare enough training samples, we build a video object segmentation dataset of 2500 videos with pixel-level annotations per frame. Extensive experimental results validate the superiority of ViLocal over state-of-the-art methods. Code and dataset will be available at https://github.com/multimediaFor/ViLocal.
    Comment: arXiv admin note: substantial text overlap with arXiv:2406.1357
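The pixel-pair attraction/repulsion the abstract describes follows the usual supervised contrastive pattern: same-label pixel embeddings are pulled together, different-label ones pushed apart. A hedged NumPy sketch (the function, feature sizes, and temperature are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pixel_supcon_loss(features, labels, temperature=0.1):
    """Toy supervised contrastive loss over pixel embeddings.

    features: (N, D) L2-normalized pixel embeddings; labels: (N,) with
    0 = pristine and 1 = forged.  Each anchor is attracted to pixels of
    the same label and repelled from the rest.
    """
    sims = np.exp(features @ features.T / temperature)
    np.fill_diagonal(sims, 0.0)                    # exclude self-similarity
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    denom = sims.sum(axis=1, keepdims=True)
    safe = np.where(sims > 0, sims, 1.0)           # diagonal is masked out below
    log_prob = np.log(safe / denom)
    per_anchor = (same * log_prob).sum(axis=1) / same.sum(axis=1)
    return -per_anchor.mean()

rng = np.random.default_rng(2)
# Two well-separated clusters mimic discriminative forged/pristine features.
forged = rng.normal(loc=+1.0, size=(8, 4))
pristine = rng.normal(loc=-1.0, size=(8, 4))
feats = np.vstack([forged, pristine])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = np.array([1] * 8 + [0] * 8)
wrong = np.tile([1, 0], 8)   # labels that ignore the cluster structure
print(pixel_supcon_loss(feats, labels) < pixel_supcon_loss(feats, wrong))
```

The loss is lower when labels agree with the feature clusters, which is exactly the gradient signal that sharpens the pristine/forged boundary during training.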

    Characterization of ternary derivation of strongly double triangle subspace lattice algebras

    Let $\mathcal{D}$ be a strongly double triangle subspace lattice on a nonzero complex reflexive Banach space. In this paper, we characterize the linear maps $\delta, \tau : \mathrm{Alg}\mathcal{D} \to \mathrm{Alg}\mathcal{D}$ satisfying $\delta(A)B + A\tau(B) = 0$ for any $A, B \in \mathrm{Alg}\mathcal{D}$ with $AB = 0$. This result can be used to characterize linear maps derivable (centralized) at the zero point and local centralizers on $\mathrm{Alg}\mathcal{D}$, respectively.
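The zero-product condition can be made concrete on a matrix algebra. As a hypothetical example outside the paper's lattice-algebra setting, the pair $\delta(A) = AM$, $\tau(B) = -MB$ for a fixed $M$ satisfies $\delta(A)B + A\tau(B) = AMB - AMB = 0$ for all $A, B$, hence in particular whenever $AB = 0$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))  # fixed matrix defining the pair (illustrative)

def delta(A):   # delta(A) = A M
    return A @ M

def tau(B):     # tau(B) = -M B
    return -M @ B

# delta(A) B + A tau(B) = AMB - AMB = 0 for ALL A, B,
# so in particular for every zero-product pair AB = 0.
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
print(np.allclose(delta(A) @ B + A @ tau(B), 0))  # True
```

Such "trivial" pairs show the condition alone does not force $\delta$ and $\tau$ to be derivations, which is why the characterization on $\mathrm{Alg}\mathcal{D}$ is nontrivial.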