32 research outputs found
Design and development of a quantum circuit to solve the information set decoding problem
LAUREA MAGISTRALE
Cryptosystems based on linear codes are gaining momentum due to their stronger
resistance to quantum attacks. They rely on the hardness of finding a
minimum-weight codeword in a large linear code with an apparently random
structure.
In this work we designed and implemented several quantum circuits to
specifically solve the Information Set Decoding problem, which is currently
the most effective attack against code-based cryptoschemes. Relying on
Grover's algorithm, the proposed algorithms were shown capable of
effectively recovering the original error vector when validated on a quantum
computer simulator. Both an exhaustive search and a variant of Lee-Brickell's
algorithm are proposed, with the former relying only on a quantum circuit and
the latter using a hybrid classical-quantum approach. In both cases, two variants
have been analyzed and compared, showing how a proper preparation of the initial
state of the system can drastically reduce the number of iterations with respect
to the uniform superposition used by the standard Grover's algorithm.
We provide, for the proposed algorithms, a quantitative evaluation of their
computational complexity in terms of the number of involved quantum gates and
required storage in qubits.
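The iteration savings from careful initial-state preparation can be illustrated numerically: Grover's algorithm needs about (pi/4)*sqrt(N/M) oracle queries over N candidates containing M solutions, so preparing the initial state over only the plausible error patterns shrinks N. A minimal sketch with toy parameters (n = 20 and weight t = 2 are illustrative choices, not the circuit sizes from this work):

```python
import math

def grover_iterations(num_candidates, num_solutions=1):
    """Optimal Grover iteration count: floor((pi/4) * sqrt(N / M))."""
    return math.floor((math.pi / 4) * math.sqrt(num_candidates / num_solutions))

# Uniform superposition over every length-n bit pattern (toy n):
n, t = 20, 2
print(grover_iterations(2 ** n))           # → 804
# Initial state prepared only on the C(n, t) weight-t error patterns:
print(grover_iterations(math.comb(n, t)))  # → 10
```

Restricting the search from all 2^n patterns to the C(n, t) weight-t patterns cuts the iteration count by almost two orders of magnitude even in this toy case.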
Quantum circuits for information set decoding : quantum cryptanalysis of code-based cryptosystems
DOTTORATO
The emergence of quantum computing represents a profound challenge to the security of widely-adopted public-key cryptographic systems, which rely on the computational complexity of tasks such as factoring large integers or solving discrete logarithms. To confront this challenge, esteemed organizations like the U.S. National Institute of Standards and Technology (NIST), the Chinese Association for Cryptologic Research (CACR), and the European Telecommunications Standards Institute (ETSI) are actively engaged in the formulation of cryptographic primitives capable of withstanding both classical and quantum attacks. These novel cryptographic systems, collectively termed post-quantum cryptosystems, are at the forefront of standardization efforts. Among the leading contenders in this standardization endeavor, linear code-based cryptosystems, deriving their strength from the computational complexity of the Syndrome Decoding Problem (SDP), have gained significant recognition.
The SDP is defined as the task of retrieving an error vector when provided with the parity check matrix of a randomly generated linear block error correction code and the syndrome of the error, as computed through the same matrix. Classically, the most effective technique for solving the SDP is the Information Set Decoding (ISD) method, which, notably, exhibits exponential complexity with respect to the parameters of the cryptosystems. Current quantum approaches to the SDP, on the other hand, do not surpass the quadratic speedup offered by adapting Grover's algorithm to the ISD technique, and provide only asymptotic estimates of their computational cost, potentially hiding non-trivial constant and polynomial factors. The central focus of this study revolves around the precise computational complexity evaluation of quantum solvers for the SDP, tailored to cryptography-grade code parameters. Our approach introduces quantum circuits designed for universal quantum gate-based computing models, built upon the foundations laid by classic ISD techniques. Our scrutiny extends to both complete quantum solutions to the SDP and hybrid methodologies that effectively partition the computational load between classical and quantum computing resources. In our investigation, the approach stemming from Prange's formulation of the ISD technique stands out, as it displays a substantial enhancement in computational efficiency. Notably, it leads to a reduction in both the depth of quantum circuits and the depth-times-width metric by factors ranging from 2^{12} to 2^{24}, applicable to concrete cryptography-grade parameters. Surprisingly, our findings reveal that the gains achieved through the approach inspired by Lee and Brickell's ideas, which materialize as a hybrid classical-quantum algorithm, are somewhat modest.
These enhancements range from 2^{10} to 2^{20} for the same cryptographic parameters, a result contrary to expectations based on classical counterparts, where Lee and Brickell's approach prevails over Prange's. However, the hybrid approach substantially reduces the size and depth of quantum circuits, rendering the estimates more realistic and facilitating parallel execution on separate quantum computing platforms. Our quantitative analysis of computational costs brings forth a significant conclusion: all code-based cryptoschemes under the scrutiny of esteemed organizations such as NIST, particularly BIKE, HQC, and Classic McEliece, unequivocally surpass the predefined threshold for computational hardness. Put simply, they prove to be computationally more demanding than the task of breaking a corresponding symmetric cipher with appropriately-sized key lengths. Furthermore, a critical vulnerability in the Classic McEliece cryptoscheme is unveiled. Parallelizing this algorithm across multiple quantum processing units erodes its security, plunging it below the targeted security threshold by a factor of 16. An ancillary contribution of this research is the development of a set of quantum circuits capable of solving common algebraic and algorithmic problems, including Gauss-Jordan Elimination over finite fields, bit string sorting, and Hamming weight computation, which may be of independent interest in the field of quantum computing.
DIPARTIMENTO DI ELETTRONICA, INFORMAZIONE E BIOINGEGNERIA; Computer Science and Engineering; 35; SILVANO, CRISTINA; PIRODDI, LUIG
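As a concrete illustration of the SDP defined above, the following sketch brute-forces a toy instance over GF(2). The matrix, sizes, and helper names are hypothetical; the enumeration it performs is exactly the classical exponential search that Grover-based ISD circuits accelerate quadratically:

```python
import itertools

def syndrome(H_rows, e):
    """Syndrome of error e (a bitmask) w.r.t. H, given as row bitmasks, over GF(2)."""
    s = 0
    for j, row in enumerate(H_rows):
        s |= (bin(row & e).count("1") & 1) << j  # parity of the row/error overlap
    return s

def solve_sdp(H_rows, target, t, n):
    """Brute-force SDP: find an error of Hamming weight t matching the syndrome.
    Exponential in t -- only suitable for toy parameters."""
    for support in itertools.combinations(range(n), t):
        e = sum(1 << i for i in support)
        if syndrome(H_rows, e) == target:
            return e
    return None

# Toy instance: a 4x8 parity-check matrix (rows as bitmasks), planted weight-2 error.
H = [0b10110101, 0b01101110, 0b11011001, 0b00111100]
e_true = 0b00100010                 # bits 1 and 5 set
s = syndrome(H, e_true)
e_found = solve_sdp(H, s, 2, 8)     # any weight-2 solution matching the syndrome
print(bin(e_found), syndrome(H, e_found) == s)
```

Note that random codes can admit several weight-t preimages of the same syndrome, so the solver is only guaranteed to return some valid error, not necessarily the planted one.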
A Quantum Circuit to Execute a Key-Recovery Attack Against the DES and 3DES Block Ciphers
Quantum-computing-enabled cryptanalytic techniques can concretely reduce the security margin of existing cryptographic primitives. While this reduction is only polynomial for symmetric cryptosystems, it is nonetheless significant.
In this work, we propose a detailed quantum circuit designed to cryptanalyze both the Data Encryption Standard (DES) cryptosystem, and its successor Triple-DES (3DES), currently
standardized in ISO/IEC 18033-3 and still widely employed in satellite data and bank card encryption. To do so, we introduce the first quantum circuit implementation of the 8 substitution tables (a.k.a. S-boxes), applying a bitslicing strategy, currently the most efficient classical combinatorial circuit design in terms of the number of two-input Boolean gates. Second, we present the complete quantum circuits required to attack
both DES and 3DES leveraging Grover's algorithm. We provide finite-regime, closed-form equations delineating the circuits' complexities in terms of the number of qubits, gates, depth, and number of qubits multiplied by depth. The complexity analysis is based on two distinct gate sets: a NOT-CNOT-Toffoli (NCT) set extended with the Hadamard gate, and the fault-tolerant Clifford+T set. Finally, akin to the classical attack on 3DES, we introduce a meet-in-the-middle strategy relying on an exponential amount of Quantum Random Access Memory. Our findings show that 3DES with keying option 2, the most widely
employed variant of 3DES, can be attacked with a circuit depth of approximately 2^{67} and less than a thousand qubits. This is close to the 2^{64} value suggested by NIST for the depth achievable sequentially by a single quantum computer in a decade. Our technique can be further sped up by parallelizing the approach across multiple devices, pointing to the practicality of cryptanalyzing 3DES in such a scenario.
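The shape of such a depth estimate can be sketched as iteration count times per-iteration depth. The 2^11 per-iteration figure below is a placeholder chosen only to show how a 112-bit Grover search lands near the quoted 2^{67} order of magnitude; it is not a gate count taken from the paper:

```python
import math

def grover_depth(key_bits, iteration_depth):
    """Total depth of a Grover key search: (pi/4) * 2^(k/2) iterations,
    each costing `iteration_depth` layers of gates."""
    return (math.pi / 4) * 2 ** (key_bits / 2) * iteration_depth

# 3DES keying option 2 has a 112-bit key, hence about 2^55.65 Grover iterations.
print(round(math.log2(grover_depth(112, 1)), 2))        # → 55.65
# With a hypothetical per-iteration depth of 2^11 (placeholder), the total
# depth lands near the 2^67 order of magnitude quoted above.
print(round(math.log2(grover_depth(112, 2 ** 11)), 2))  # → 66.65
```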
Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
Post-quantum public key encryption (PKE) schemes employing Quasi-cyclic (QC) sparse
parity-check matrix codes are enjoying significant success, thanks to their
good performance profile and reduction to believed-hard problems from coding
theory.
However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes)
comes with a significant challenge: determining in closed-form their decoding
failure rate (DFR), as decoding failures are known to leak information on the
private key.
Furthermore, there is no formal proof that changing the (constant) rate of the
employed codes does not change the nature of the underlying hard problem, nor
is the hardness of decoding random QC codes formally related to the decoding
hardness of random codes.
In this work, we address and solve these challenges, providing a novel
closed-form estimation of the decoding failure rate for three-iteration bit
flipping decoders, and proving computational equivalences among the
aforementioned problems.
This allows us to systematically design a Niederreiter-style QC-MDPC PKE,
enjoying the flexibility granted by freely choosing the code rate and the
significant improvement in tightness of our DFR bound.
We report an improvement in public key and ciphertext size
w.r.t. the previous best cryptosystem design with DFR closed-form bounds,
LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield a smaller
public key size and smaller ciphertexts w.r.t. HQC,
the key encapsulation method employing a code-based PKE that was recently selected by the US NIST for standardization.
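The bit-flipping decoder underlying the DFR analysis can be sketched in miniature: compute the syndrome, count for each bit how many unsatisfied parity checks it touches, and flip the worst offenders. The toy below uses the tiny (7,4) Hamming code in place of a cryptographic-size sparse QC-MDPC matrix, with a fixed three-iteration budget mirroring the three-iteration decoders analyzed in this work; it is an illustrative sketch, not the paper's decoder:

```python
def bit_flip_decode(H, y, iterations=3):
    """Toy bit-flipping decoder over GF(2).
    H: parity-check matrix as a list of 0/1 rows; y: received vector.
    Each iteration flips the bits involved in the most unsatisfied checks."""
    n = len(y)
    y = y[:]
    for _ in range(iterations):
        syn = [sum(row[j] & y[j] for j in range(n)) % 2 for row in H]
        if not any(syn):
            break  # all parity checks satisfied
        # upc[j] = number of unsatisfied parity checks bit j participates in
        upc = [sum(s & row[j] for s, row in zip(syn, H)) for j in range(n)]
        t = max(upc)
        y = [bit ^ (upc[j] == t) for j, bit in enumerate(y)]
    return y

# Toy stand-in for a sparse QC-MDPC matrix: the (7,4) Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one flipped bit
print(bit_flip_decode(H, received))  # → [0, 0, 0, 0, 0, 0, 0]
```

A decoding failure occurs when the iteration budget runs out with a nonzero syndrome; bounding how often that happens over random error patterns is exactly the closed-form DFR estimation problem addressed above.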
Evaluation of a quality improvement intervention to reduce anastomotic leak following right colectomy (EAGLE): pragmatic, batched stepped-wedge, cluster-randomized trial in 64 countries
Background: Anastomotic leak affects 8 per cent of patients after right colectomy with a 10-fold increased risk of postoperative death. The EAGLE study aimed to develop and test whether an international, standardized quality improvement intervention could reduce anastomotic leaks. Methods: The internationally intended protocol, iteratively co-developed by a multistage Delphi process, comprised an online educational module introducing risk stratification, an intraoperative checklist, and harmonized surgical techniques. Clusters (hospital teams) were randomized to one of three arms with varied sequences of intervention/data collection by a derived stepped-wedge batch design (at least 18 hospital teams per batch). Patients were blinded to the study allocation. Low- and middle-income country enrolment was encouraged. The primary outcome (assessed by intention to treat) was anastomotic leak rate, and subgroup analyses by module completion (at least 80 per cent of surgeons, high engagement; less than 50 per cent, low engagement) were preplanned. Results: A total of 355 hospital teams registered, with 332 from 64 countries (39.2 per cent low and middle income) included in the final analysis. The online modules were completed by half of the surgeons (2143 of 4411). The primary analysis included 3039 of the 3268 patients recruited (206 patients had no anastomosis and 23 were lost to follow-up), with anastomotic leaks arising before and after the intervention in 10.1 and 9.6 per cent respectively (adjusted OR 0.87, 95 per cent c.i. 0.59 to 1.30; P = 0.498).
The proportion of surgeons completing the educational modules was influential: the leak rate decreased from 12.2 per cent (61 of 500) before intervention to 5.1 per cent (24 of 473) after intervention in high-engagement centres (adjusted OR 0.36, 0.20 to 0.64; P < 0.001), but this was not observed in low-engagement hospitals (8.3 per cent (59 of 714) and 13.8 per cent (61 of 443) respectively; adjusted OR 2.09, 1.31 to 3.31). Conclusion: Completion of globally available digital training by engaged teams can alter anastomotic leak rates. Registration number: NCT04270721 (http://www.clinicaltrials.gov).
InfraRed Thermography proposed for the estimation of the Cooling Rate Index in the remote survey of rock masses
Evidence of Increased Systemic Glucose Production and Gluconeogenesis in an Early Stage of NIDDM
To assess the mechanisms of fasting hyperglycemia in NIDDM patients with mild elevation of fasting plasma glucose (FPG) compared with NIDDM patients with overt hyperglycemia, we studied 29 patients with NIDDM, who were divided into two groups according to their fasting plasma glucose (<7.8 and ≥7.8 mmol/l for groups A and B, respectively), and 16 control subjects who were matched with NIDDM patients for age, sex, and body mass index. All subjects were infused with [3-3H]glucose between 10:00 P.M. and 10:00 A.M. during overnight fasting to determine glucose fluxes. In 27 subjects (17 diabetic and 10 control), [U-14C]alanine was simultaneously infused between 4:00 A.M. and 10:00 A.M. to measure gluconeogenesis (GNG) from alanine. Arterialized-venous plasma samples were collected every 30 min for measurement of glucose fluxes, GNG, and glucoregulatory hormones. In group A, plasma glucose, rate of systemic glucose production (SGP), and GNG were greater than in control subjects (7.2 ± 0.2 vs. 4.9 ± 0.1 mmol/l, 10.9 ± 0.2 vs. 9.5 ± 0.3 μmol · kg−1 · min−1, and 0.58 ± 0.04 vs. 0.37 ± 0.02 μmol · kg−1 · min−1, respectively, for group A and control subjects; mean value 8:00 A.M.-10:00 A.M., all P < 0.05). Both increased SGP and GNG correlated with plasma glucose in all subjects (r = 0.77 and r = 0.75, respectively, P < 0.005). Plasma counterregulatory hormones did not differ in NIDDM patients compared to control subjects. The present studies demonstrate that SGP and GNG are increased in NIDDM patients without overt fasting hyperglycemia. Thus, these metabolic abnormalities primarily contribute to the early development of overnight and fasting hyperglycemia in NIDDM.
