
    Semitonal Relationships in Chopin's Music.

    This dissertation investigates Chopin’s chromatic harmony and its coordination with chromatic voice leading, which together produce modulations to remote tonal regions, by exploring his use of semitonal relationships. It builds on the work of previous scholars who have discussed the audacity of Chopin’s harmonic practice and his extensive use of third relationships, adding a focus on semitonal relationships to help account for the individuality of Chopin’s approach to chromaticism as a distinctive feature of his compositional style. One of the specific techniques identified in this dissertation, semitonal modulation, raises important issues in music theory, since such a modulation often involves unusual voice-leading events. These events disrupt a passage’s tonal focus, requiring both a reorientation on the listener’s part and an eventual reintegration of the event into a single-key framework. Combining a Schenkerian approach with ideas drawn from recent theories, the dissertation explains local chromatic events and the phenomenological aspects of modulation in order to consider this reinterpretive process of listening. By incorporating these approaches into a reading, analysts can effectively show listeners’ tonal disorientation in a local context and their retrospective understanding on a larger scale. The dissertation addresses two main types of modulation: one involving a transformation between scale degrees 1 and 7, the other involving a transformation between scale degrees ♭6 and 5. I call the first type a leading-tone modulation, since it occurs when the tonic of one key becomes the leading tone of the new key. The second type involves a semitonal shift that Chopin handles in distinctive ways by emphasizing a note involved in that scale-degree transformation.
The dissertation also sheds light on semitonal relationships as they affect musical parameters other than key areas, since Chopin’s use of semitonal relationships radiates into other elements of music; it thus offers analysts a new perspective for interpreting forms, motives, and large-scale pitch structuring as well. Analytical in orientation, the dissertation examines a large number of Chopin’s works, including several Preludes and Nocturnes, the Ballades in G Minor (Op. 23) and F Minor (Op. 52), the Second Scherzo (Op. 31), and the Fantasy (Op. 49).
PhD, Music: Theory, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/113349/1/hwchung_2.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/113349/2/hwchung_1.pd

    Encoding Rational Numbers for FHE-based Applications

    This work addresses a basic problem of security systems that operate on very sensitive information, such as healthcare data. Specifically, we are interested in the problem of privately handling medical data represented by rational numbers. Given the complicated computations performed on encrypted medical data, one of the natural and powerful tools for ensuring privacy is fully homomorphic encryption (FHE). However, because the plaintext domain of known FHE schemes is restricted to a set of quite small integers, it is not easy to obtain algorithms for encrypted rational numbers that are efficient in terms of space and computation costs. Our observation is that this inefficiency can be alleviated by using a different representation of rational numbers instead of naïve expressions. For example, the naïve decimal representation considerably restricts the choice of parameters in employing an FHE scheme, particularly the plaintext size. The starting point of our technique is to encode rational numbers using continued fractions. Because continued fractions enable us to represent rational numbers as sequences of integers, we can use a plaintext space of small size while preserving the same precision. However, this encoding requires performing very complex arithmetic operations, such as division and modular reduction. In theory, FHE allows the evaluation of any function, including modular reduction on encrypted data, but doing so requires constructing a Boolean circuit of very high degree. Hence, we primarily focus on developing an approach that solves this efficiency problem using homomorphic operations of small degree.
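As a plaintext illustration of the encoding idea (not the paper's implementation; the function names are ours), a rational number can be turned into a short sequence of small integers via its continued-fraction expansion and recovered exactly:

```python
from fractions import Fraction

def to_continued_fraction(q: Fraction) -> list:
    """Return the continued-fraction terms [a0; a1, a2, ...] of q."""
    terms = []
    num, den = q.numerator, q.denominator
    while den != 0:
        a, r = divmod(num, den)  # next term and remainder (Euclidean step)
        terms.append(a)
        num, den = den, r
    return terms

def from_continued_fraction(terms: list) -> Fraction:
    """Reconstruct the rational number from its continued-fraction terms."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value
```

For instance, 355/113 encodes as the three small terms [3, 7, 16], whereas a naïve fixed-point decimal encoding would force a much larger plaintext modulus; this is the precision-versus-plaintext-size trade-off the abstract refers to.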

    Amortized Large Look-up Table Evaluation with Multivariate Polynomials for Homomorphic Encryption

    We present a new method for efficient look-up table (LUT) evaluation in homomorphic encryption (HE), based on Ring-LWE-based HE schemes, including both integer-message schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) and complex-number-message schemes like the Cheon-Kim-Kim-Song (CKKS) scheme. Our approach encodes bit streams into codewords and translates LUTs into low-degree multivariate polynomials, allowing for the simultaneous evaluation of multiple independent LUTs with minimal overhead. To mitigate noise accumulation in the CKKS scheme, we propose a novel noise-reduction technique, accompanied by a proof demonstrating its effectiveness in asymptotically decreasing noise levels. We demonstrate our algorithm's effectiveness through a proof-of-concept implementation, showcasing significant efficiency gains, including a 0.029 ms per-slot evaluation for 8-input, 8-output LUTs and a 280 ms amortized decryption time for AES-128 using CKKS on a single GPU. This work not only advances LUT evaluation in HE but also introduces a transciphering method for the CKKS scheme utilizing standard symmetric-key encryption, bridging the gap between discrete bit strings and numerical data.
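The core observation that any finite LUT is a polynomial in its inputs can be sketched in plain Python (a toy illustration only; the paper's codeword encoding achieves lower degree than this): an n-bit Boolean LUT equals its multilinear extension, a multivariate polynomial of degree at most one in each input bit.

```python
def multilinear_lut(table, bits):
    """Evaluate a LUT as its multilinear extension: each entry x contributes
    table[x] times the indicator polynomial prod_i (b_i if x_i else 1 - b_i)."""
    total = 0
    for x, y in table.items():
        ind = 1
        for xi, bi in zip(x, bits):
            ind *= bi if xi == 1 else (1 - bi)
        total += y * ind
    return total

# XOR written as a 2-input LUT
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

On Boolean inputs this reproduces the table exactly; under HE the same polynomial can be evaluated on ciphertexts slot by slot, which is what makes amortized evaluation of many independent LUTs cheap.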

    Adaptive Successive Over-Relaxation Method for a Faster Iterative Approximation of Homomorphic Operations

    Homomorphic encryption is a cryptographic technique that enables arithmetic operations to be performed on encrypted data. However, word-wise fully homomorphic encryption schemes, such as the BGV, BFV, and CKKS schemes, support only addition and multiplication on ciphertexts. This limitation makes it challenging to perform non-linear operations directly on encrypted data. To address this issue, prior research has proposed efficient approximation techniques that use iterative methods, such as functional composition, to identify optimal polynomials. These approximations are designed to have low multiplicative depth and a reduced number of multiplications, as these criteria directly impact the performance of the approximated operations. In this paper, we propose a novel method, named adaptive successive over-relaxation (aSOR), to further optimize the approximations used in homomorphic encryption schemes. Our experimental results show that the aSOR method can significantly reduce the computational effort required for these approximations, achieving a reduction of 2–9 times compared to state-of-the-art methodologies. We demonstrate its effectiveness by applying it to a range of operations, including sign, comparison, ReLU, square root, reciprocal of the m-th root, and division. Our findings suggest that the aSOR method can greatly improve the efficiency of homomorphic encryption for non-linear operations.
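The relaxation idea behind aSOR can be illustrated on an ordinary fixed-point iteration (a plain-arithmetic sketch; the paper applies an adaptively chosen relaxation factor to polynomial iterations under HE, which this toy example does not model): replacing x ← g(x) with x ← (1−ω)x + ωg(x) and tuning ω can sharply cut the number of iterations.

```python
import math

def fixed_point(g, x0, omega=1.0, tol=1e-10, max_iter=1000):
    """Relaxed fixed-point iteration x <- (1 - omega)*x + omega*g(x);
    omega = 1 recovers the plain iteration. Returns (x, iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = (1 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Solve x = cos(x) with and without relaxation.
x_plain, n_plain = fixed_point(math.cos, 1.0, omega=1.0)
# Near the fixed point x*, the error contracts by |1 - omega*(1 - g'(x*))|;
# omega close to 1/(1 - g'(x*)) (about 0.6 here) nearly cancels it.
x_fast, n_fast = fixed_point(math.cos, 1.0, omega=0.6)
```

The plain iteration contracts by |g′(x*)| ≈ 0.67 per step, while the relaxed run stops after far fewer iterations; in the HE setting, fewer iterations translates directly into lower multiplicative depth.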

    Ghostshell: Secure Biometric Authentication using Integrity-based Homomorphic Evaluations

    Biometric authentication methods are gaining popularity due to their convenience. For authentication that does not rely on trusted hardware, biometrics or their hashed values must be stored on the server. Storing biometrics in the clear or in an encrypted form, however, raises a grave concern about biometric theft through hacking or man-in-the-middle attacks. Unlike an ID and password, biometrics, once lost, cannot practically be replaced. Encryption can protect them from theft, but encrypted biometrics must ordinarily be decrypted for comparison. In this work, we propose a secure biometric authentication scheme, named Ghostshell, in which an encrypted template is stored on the server and then compared with an encrypted attempt without decryption. The decryption key is stored only on a user's device, so the biometrics remain secret even against a compromised server. Our solution relies on a somewhat homomorphic encryption (SHE) scheme and a message authentication code (MAC). Because known techniques for SHE are computationally expensive, we develop a more practical scheme by devising a significantly more efficient matching function that exploits SIMD operations, together with a one-time MAC chosen for efficient homomorphic evaluation (of multiplicative depth 2). When applied to Hamming-distance matching on 2400-bit irises, our implementation shows that the computation time is approximately 0.47 and 0.1 seconds for the server and the user, respectively.

    Bulletproofs+: Shorter Proofs for Privacy-Enhanced Distributed Ledger

    We present a new short zero-knowledge argument for range proofs and arithmetic circuits without a trusted setup. In particular, the proof size of our protocol is the shortest in the category of proof systems with a trustless setup. More concretely, when proving that a committed value is a positive integer of fewer than 64 bits, up to negligible error at the 128-bit security level, the proof is 576 bytes long, which is 85.7% of the size of the previous shortest one due to Bünz et al. (Bulletproofs, IEEE Security and Privacy 2018), while the computational overheads of both proof generation and verification are comparable to those of Bulletproofs. Bulletproofs is established as one of the important privacy-enhancing technologies for distributed ledgers, due to its trustless setup and short proof size. In particular, it has been implemented and optimized in various programming languages for practical use by independent entities since it was proposed. The essence of Bulletproofs is the logarithmic inner product argument, which has no zero-knowledge property. In this paper, we revisit Bulletproofs from the viewpoint of the first sublinear zero-knowledge argument for linear algebra due to Groth (CRYPTO 2009) and then propose Bulletproofs+, an improved variant of Bulletproofs. The main ingredient of our proposal is the zero-knowledge weighted inner product argument (zk-WIP), to which we reduce both the range proof and the arithmetic circuit proof. The benefit of reducing to the zk-WIP is a minimal transmission cost during the reduction process. Note that the zk-WIP retains all the nice features of the inner product argument, such as aggregated range proofs and batch verification.
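For orientation, the object at the heart of the zk-WIP is a weighted inner product. A minimal sketch (the weighting convention here is illustrative, and this shows only the underlying map, not the zero-knowledge argument itself):

```python
def weighted_inner_product(a, b, y, p):
    """<a, b>_y = sum_i a_i * b_i * y^(i+1) mod p: an inner product whose
    i-th coordinate is scaled by a power of the weight y."""
    assert len(a) == len(b)
    return sum(ai * bi * pow(y, i + 1, p)
               for i, (ai, bi) in enumerate(zip(a, b))) % p
```

Setting y = 1 recovers the ordinary inner product used by Bulletproofs; the extra weight is what lets Bulletproofs+ fold the range-proof reduction into the argument with minimal transmission cost.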

    Doubly Efficient Fuzzy Private Set Intersection for High-dimensional Data with Cosine Similarity

    Fuzzy private set intersection (fuzzy PSI) is a cryptographic protocol for privacy-preserving similarity matching, one of the essential operations in various real-world applications such as facial authentication, information retrieval, and recommendation systems. Despite recent advancements in fuzzy PSI protocols, a huge barrier still remains to deploying them in these applications. The main obstacle is the high dimensionality of the data, e.g., from 128 to 512: many existing methods, such as those of Garimella et al. (CRYPTO’23, CRYPTO’24) or van Baarsen et al. (EUROCRYPT’24), suffer from exponential overhead in communication and/or computation cost. In addition, the dominant similarity metric in these applications is cosine similarity, which rules out several optimization tricks based on assumptions about the distribution of the data, e.g., the techniques of Gao et al. (ASIACRYPT’24). In this paper, we propose a novel fuzzy PSI protocol for cosine similarity, called FPHE, that overcomes these limitations simultaneously. FPHE features linear complexity in both computation and communication with respect to the dimension of the set elements, while requiring a much weaker assumption than prior works. Our basic strategy is to homomorphically compute cosine similarity and run an approximated comparison function, with a clever packing method for efficiency. In addition, we introduce a novel proof technique to harmonize the approximation error of the sign function with noise flooding, proving the security of FPHE in the semi-honest model. Moreover, we show that our construction can be extended to support various functionalities, such as labeled or circuit fuzzy PSI. Through experiments, we show that FPHE can perform fuzzy PSI over 512-dimensional data in a few minutes, which was computationally infeasible for all previous proposals under the same assumption as ours.
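As a plaintext reference for what FPHE computes under encryption (an illustrative sketch; the protocol evaluates this homomorphically on packed ciphertexts and replaces the final comparison with a polynomial approximation of the sign function, and the 0.85 threshold is ours, not the paper's):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length real vectors; linear in the dimension."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def fuzzy_match(u, v, threshold=0.85):
    """Declare a fuzzy-PSI match when the similarity clears the threshold."""
    return cosine_similarity(u, v) >= threshold
```

Both the dot product and the norms are single sums over the coordinates, which is why a packed homomorphic evaluation can stay linear in the dimension (e.g., 512) rather than exponential.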

    Systemic Blockage of Nitric Oxide Synthase by L-NAME Increases Left Ventricular Systolic Pressure, Which Is Not Augmented Further by Intralipid®

    Intravenous lipid emulsions (LEs) are effective in the treatment of toxicity associated with various drugs, such as local anesthetics and other lipid-soluble agents. The goals of this study were to examine the effect of LEs on left ventricular hemodynamic variables and systemic blood pressure in an in vivo rat model, and to determine the associated cellular mechanism, with a particular focus on nitric oxide. Two LEs (Intralipid® 20% and Lipofundin® MCT/LCT 20%) or normal saline were administered intravenously in an in vivo rat model following induction of anesthesia by intramuscular injection of tiletamine/zolazepam and xylazine. Left ventricular systolic pressure (LVSP), blood pressure, heart rate, maximum rate of intraventricular pressure increase, and maximum rate of intraventricular pressure decrease were measured before and after intravenous administration of various doses of LEs or normal saline, with or without pretreatment with the non-specific nitric oxide synthase inhibitor Nω-nitro-L-arginine methyl ester (L-NAME). Administration of Intralipid® (3 and 10 ml/kg) increased LVSP and decreased heart rate. Pretreatment with L-NAME (10 mg/kg) increased LVSP and decreased heart rate, whereas subsequent treatment with Intralipid® did not significantly alter LVSP. Intralipid® (10 ml/kg) increased mean blood pressure and decreased heart rate. The increase in LVSP induced by Lipofundin® MCT/LCT was greater than that induced by Intralipid®. Intralipid® (1%) did not significantly alter relaxation induced by the nitric oxide donor sodium nitroprusside in endothelium-denuded rat aorta. Taken together, systemic blockage of nitric oxide synthase by L-NAME increases LVSP, which is not augmented further by Intralipid®.

    HyperCLOVA X Technical Report

    We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, along with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction-tuning with high-quality human-annotated datasets while abiding by strict safety guidelines reflecting our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean backed by a deep understanding of the language and cultural nuances. Further analysis of the inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization ability to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries in developing their sovereign LLMs.

    Observation of γγ → ττ in proton-proton collisions and limits on the anomalous electromagnetic moments of the τ lepton

    The production of a pair of τ leptons via photon–photon fusion, γγ → ττ, is observed for the first time in proton–proton collisions, with a significance of 5.3 standard deviations. This observation is based on a data set recorded with the CMS detector at the LHC at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb⁻¹. Events with a pair of τ leptons produced via photon–photon fusion are selected by requiring them to be back-to-back in the azimuthal direction and to have a minimum number of charged hadrons associated with their production vertex. The τ leptons are reconstructed in their leptonic and hadronic decay modes. The measured fiducial cross section of γγ → ττ is σ_fid,obs = 12.4 +3.8/−3.1 fb. Constraints are set on the contributions to the anomalous magnetic moment (aτ) and electric dipole moment (dτ) of the τ lepton originating from potential effects of new physics on the γττ vertex: aτ = 0.0009 +0.0032/−0.0031 and |dτ| < 2.9 × 10⁻¹⁷ e·cm (95% confidence level), consistent with the standard model.