Friday, April 28, 2017

Faster Homomorphic Function Evaluation using Non-Integral Base Encoding

Homomorphic evaluation of arithmetic circuits is exactly what FHE and SHE schemes were created for. In practice, these circuits operate on real numbers, which seems inconsistent with the fact that the plaintext space of the most efficient and commonly used FHE/SHE schemes is of the form
$$R_t = \mathbb{Z}_t[X]/(X^d+1).$$
Thus, we basically have to encode real values as polynomials, which is done by expanding them with respect to a base $b$ and then replacing all occurrences of $b$ by $X$. But then the question arises of how to choose the SHE parameters. Indeed, each homomorphic operation increases the encoding coefficients, so after a few consecutive multiplications they may wrap around modulo $t$, leading to invalid decoding results. To prevent this, $t$ must be chosen large enough; at the same time it should be small in order to suppress the growth of the inherent noise inside the ciphertexts, so that decryption remains correct. In practice, this tension is resolved via the Chinese Remainder Theorem (CRT), which splits a homomorphic algorithm into a number of threads, each with a plaintext space modulo a factor of $t$. Nevertheless, to reduce the number of threads it is desirable to take $t$ as small as possible.
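As a toy illustration of this encoding step, here is a minimal Python sketch (not the encoder used in the paper) that expands an integer with respect to an integral base $b$ and maps the expansion to polynomial coefficients; decoding simply evaluates the polynomial at $X = b$.

```python
def encode(n, b=3):
    """Expand the integer n in base b; the coefficient list [c_0, c_1, ...]
    corresponds to the plaintext polynomial c_0 + c_1*X + c_2*X^2 + ..."""
    coeffs = []
    sign, n = (-1, -n) if n < 0 else (1, n)
    while n:
        coeffs.append(sign * (n % b))
        n //= b
    return coeffs

def decode(coeffs, b=3):
    """Recover the value by evaluating the plaintext polynomial at X = b."""
    return sum(c * b ** i for i, c in enumerate(coeffs))

assert decode(encode(1234)) == 1234
assert decode(encode(-56)) == -56
```
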
To decrease the plaintext modulus one should choose an encoding scheme for which the encoding coefficients are as small as possible. The first such algorithm was presented by Dowlin et al. and studied by our colleagues Costache et al. This approach uses a balanced base $b=3$ representation in which each coefficient belongs to the set $\{-1,0,1\}$. However, this choice is not optimal, because the degrees of the outcomes of the homomorphic computations are much smaller than the ring degree $d$ required by the security recommendations of the underlying SHE scheme.

In the picture above, the grey zone denotes the available plaintext space while the dark one is the part that is actually used. The lion's share of the plaintext space stays untouched. To use that space more efficiently one might reduce the degree $d$, but that induces a significant decrease in the security level. Hence, informally, the optimisation problem consists in "squashing" the grey zone so that it almost coincides with the dark one, as shown in the picture below.
Our idea is to encode real numbers with long but very sparse polynomials. In this setting the non-zero coefficients group together to a lesser extent, thereby hopefully decreasing the required plaintext modulus. One such representation is already well known, namely w-NAF. It is a base $b=2$ representation with coefficients from the set $\{0, \pm1, \pm3, \dots, \pm(2^{w-1}-1)\}$ and the property that any $w$ consecutive coefficients contain at most one non-zero. Bigger $w$'s reduce the average number of non-zero digits but increase their size, which is problematic for our needs as it causes the coefficients to grow exponentially. To address this issue we switch from the base $b=2$ to a smaller non-integral base $b>1$. This makes it possible to construct a w-NAF-like encoding with the smallest possible digit set $\{-1,0,1\}$. We call this representation "w-NIBNAF", which can be read as "non-integral base w-NAF".
For any real number $a$ the w-NIBNAF algorithm generates a ternary sequence $a_0, \dots, a_n$ in the following way:

  1. Define the base $b_w$ as the unique positive root of the polynomial \[F_w(x) = x^{w+1} - x^w - x - 1.\]
  2. Find an exponent $r$ such that $t = b_w^r$ or $t = -b_w^r$ is the signed power closest to $a$.
  3. Set $a_r = \operatorname{sgn}(t)$.
  4. Go to step 2 with $a \leftarrow a - t$.  

The algorithm stops when $|t| < \varepsilon$ for some real $\varepsilon > 0$ which determines the precision of the encoding.
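The following Python sketch (our own illustrative implementation, not the code used for the experiments below; function and parameter names are made up) computes the base $b_w$ by bisection and then runs the greedy loop above, returning the sparse map from exponents $r$ to non-zero digits.

```python
import math

def positive_root(w, iters=80):
    """Unique positive root b_w of F_w(x) = x^(w+1) - x^w - x - 1, by bisection."""
    f = lambda x: x ** (w + 1) - x ** w - x - 1
    lo, hi = 1.0, 2.0
    while f(hi) < 0:        # grow the bracket until f(hi) > 0 (note f(1) = -2 < 0)
        hi += 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def wnibnaf_encode(a, w, eps=1.0):
    """Greedy w-NIBNAF sketch: returns ({exponent r: digit in {-1,+1}}, b_w).
    Digits not listed are zero, so a is approximated by sum(d * b_w**r)."""
    b = positive_root(w)
    digits = {}
    while a != 0:
        # exponent r whose signed power t = sign(a)*b**r is closest to a
        r0 = math.floor(math.log(abs(a), b))
        r = r0 if abs(a) - b ** r0 <= b ** (r0 + 1) - abs(a) else r0 + 1
        t = math.copysign(b ** r, a)
        if abs(t) < eps:                     # precision eps reached: stop
            break
        digits[r] = digits.get(r, 0) + (1 if t > 0 else -1)
        a -= t
    return digits, b
```

For instance, `wnibnaf_encode(10.0, 1)` with $\varepsilon = 1$ yields the digit $+1$ at exponent 3 and $-1$ at exponents 1 and 0, i.e. $b_1^3 - b_1 - 1 \approx 10.66$, where $b_1 = 1+\sqrt{2}$.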

Our w-NIBNAF representation drastically reduces the average number of non-zero coefficients per representation. We encoded 10 000 random integers in the range from $-2^{40}$ to $2^{40}$ with precision $\varepsilon = 1$ and found that even for 1-NIBNAF the percentage of non-zero digits is around 50 percent, compared with 67 percent for balanced ternary expansions. As a result, the w-NIBNAF encoding leads to smaller coefficients in the worst case, i.e. when the input values of an arithmetic circuit are chosen such that the circuit output contains the maximal possible coefficient.

To illustrate the practical impact of the new encoding scheme we applied 950-NIBNAF to a non-trivial arithmetic circuit containing multiplications and additions. The choice of circuit was motivated by the load forecasting use case of the HEAT project discussed in our previous paper. In a nutshell, the prediction was performed there by a neural-network-like structure that can be expressed by a polynomial with real coefficients and degree 16. To preserve correct evaluation of the algorithm with balanced ternary expansions, the plaintext modulus had to be around 104 bits long. Such a huge modulus forces a CRT decomposition of the plaintext space, which pushes the execution time of the homomorphic algorithm up to 32 seconds per run. Evaluated with 950-NIBNAF encodings, the same circuit needs only a 6-bit plaintext modulus and consequently does not require splitting the plaintext space. Hence, one run takes only 2.5 seconds, which is a speed-up by a factor of 13.

A more thorough explanation, with sharp bounds on the maximal coefficients, can be found on the IACR ePrint archive.

The KU Leuven and NXP teams of the HEAT project.

Tuesday, February 21, 2017

Homomorphic Encryption API Software Library

The Homomorphic Encryption Application Programming Interface (HE-API) software library is an open source software library being developed as part of the Homomorphic Encryption Applications and Technology (HEAT) project, and is available here. The main purpose of this software library is to provide a common easy-to-use interface for various existing Somewhat Homomorphic Encryption (SHE) libraries. Limited support for fixed-point arithmetic is also provided by this library. Note that the HE-API library is still a work in progress.

Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows meaningful manipulation of ciphertexts. In spite of several recent advances, FHE remains out of practical reach. Hence a reasonable restriction is to limit the set of evaluated circuits to a specified subclass, usually determined by the multiplicative depth of the circuit. Such encryption schemes are called SHE schemes. Various libraries that implement SHE schemes are already available, such as HElib, SEAL, FV-NFLlib and HElib-MP.

The purpose of this HE-API software library is to provide a common, generic, easy-to-use interface for various existing libraries that implement SHE schemes. The SHE libraries currently integrated in the HE-API library are HElib and FV-NFLlib. It may be noted that the FV-NFLlib library is itself an outcome of the HEAT project. At a high level, the HE-API software library abstracts away the technicalities present in the underlying SHE libraries. For instance, the HElib library implements the BGV SHE scheme, while FV-NFLlib implements the FV SHE scheme. Needless to say, the syntax for the various classes and routines in the individual libraries differs, though the underlying semantics are very similar. The HE-API library integrates the underlying SHE libraries under a single interface, thereby shielding the user from syntactic differences. Another feature of the HE-API library is that it contains a minimal, yet complete, set of routines to perform homomorphic computations. The design of this library is motivated by ease of use for non-experts.
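To give a flavour of what such an abstraction looks like (a purely illustrative Python sketch of the design idea; the actual HE-API class and routine names, and its implementation language, may differ), user code is written once against a common interface and each SHE library is plugged in as a back-end:

```python
from abc import ABC, abstractmethod

class SHEBackend(ABC):
    """Hypothetical common interface hiding the syntactic differences between
    underlying SHE libraries (e.g. HElib's BGV scheme, FV-NFLlib's FV scheme)."""

    @abstractmethod
    def keygen(self, params): ...

    @abstractmethod
    def encrypt(self, plaintext): ...

    @abstractmethod
    def decrypt(self, ciphertext): ...

    @abstractmethod
    def add(self, ct1, ct2): ...

    @abstractmethod
    def multiply(self, ct1, ct2): ...

def evaluate(backend: SHEBackend, ct_x, ct_y):
    """Application code depends only on the interface: homomorphically
    compute x*y + x, whichever library sits underneath."""
    return backend.add(backend.multiply(ct_x, ct_y), ct_x)
```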

Supported Data Types
The following application data types are supported by the HE-API software library. 
  • Boolean
  • Unsigned long integers
  • GMP's arbitrary precision integers class: mpz_class
  • Polynomials with coefficients of type unsigned long integer or mpz_class
  • Vectors of unsigned long integers or mpz_class
  • Fixed-point numbers
Note that not all of the data types and routines described above are currently supported by every underlying SHE library.

Tuesday, December 20, 2016

Privacy-friendly Forecasting for the Smart Grid using Homomorphic Encryption and the Group Method of Data Handling

Machine learning has become more and more popular in recent years. Every day new applications appear on the market, such as augmented reality, pattern recognition, intelligent search, personal recommendations, forecasting, etc. Big companies like Google, Amazon and Apple use comprehensive statistical models to improve their business. They rely on immense amounts of data that clients provide through different online services. This information can be highly privacy-sensitive, and its unapproved disclosure can cause immediate damage.
The question then arises: how can private data be encrypted and, at the same time, processed by machine learning algorithms? In our recent paper we tried to answer this question for the load forecasting problem in the smart grid using homomorphic encryption.

The idea behind smart-grid technology is very simple. It assumes that a client household and a service provider are equipped with communication devices called smart meters. These allow the transmission of up-to-date information about load consumption, weather conditions, financial data, the structure of the local utility grid, etc. A service provider can apply statistical techniques in order to forecast the energy demand for the next period of time. Given such predictions, clients can control their consumption more efficiently, and public utilities can optimise production and logistic costs in order to lower prices or avoid blackouts.

One popular class of prediction algorithms is based on artificial neural networks (ANNs). ANNs compute so-called activation functions; in practice a sigmoid is commonly used, with the logistic function $t \mapsto 1/(1 + e^{-t})$ a popular choice. Due to its highly non-linear nature, computing such a sigmoid function homomorphically is far from practical. To overcome this issue one can substitute the sigmoid with a polynomial function, as was done recently in the papers of Xie et al. and Dowlin et al. for pattern recognition purposes. But there exists an older statistical approach, built especially for prediction purposes by the Soviet computer scientist Alexey Ivakhnenko in 1970: the Group Method of Data Handling (GMDH).

GMDH takes a history of the system output $\{x_i\}$ and constructs a truncated Wiener series to approximate the next output value

$a_0 + \sum_{i=1}^{n_0} a_i x_i + \sum_{i=1}^{n_0} \sum_{j=i}^{n_0} a_{ij} x_i x_j + \sum_{i=1}^{n_0} \sum_{j=i}^{n_0} \sum_{k=j}^{n_0} a_{ijk} x_i x_j x_k + \dots$

In the original paper this polynomial is called a Kolmogorov-Gabor polynomial and it can be nicely illustrated in a neural network manner



where each node has only two input edges and its own activation function described by a quadratic polynomial over real numbers

$\nu_{ij} : \mathbb{R}^2 \rightarrow \mathbb{R}: (x, y) \mapsto b_{ij0} + b_{ij1} x + b_{ij2} y + b_{ij3} xy + b_{ij4} x^2 + b_{ij5} y^2.$

The learning phase determines the polynomial coefficients and the wiring of nodes while the number of layers and the number of nodes at each layer are fixed in the beginning. Actually, one should solve a linear regression problem at each node and decide whether this node is among the "best" ones in a layer in terms of error performance. As a result a GMDH neural network can be expressed by an arithmetic circuit of multiplicative depth equal to the number of layers. This construction is a perfect match for homomorphic evaluation.
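As a small illustration of how such a network reduces to a low-depth arithmetic circuit (a sketch with arbitrary made-up coefficients, not the trained model from the paper), each node evaluates the quadratic form $\nu_{ij}$ above and a forward pass chains the layers:

```python
def node(b, x, y):
    """One GMDH node: quadratic activation with coefficients b = (b0, ..., b5)."""
    return b[0] + b[1]*x + b[2]*y + b[3]*x*y + b[4]*x**2 + b[5]*y**2

def forward(layers, inputs):
    """Evaluate a GMDH network. Each layer is a list of (coeffs, i, j) triples
    wiring a node to outputs i and j of the previous layer; the multiplicative
    depth of the resulting arithmetic circuit equals the number of layers."""
    values = list(inputs)
    for layer in layers:
        values = [node(b, values[i], values[j]) for (b, i, j) in layer]
    return values[0]

# Toy 2-layer network on 3 inputs; coefficients are arbitrary for illustration.
layers = [
    [((0.1, 0.5, 0.4, 0.02, 0.0, 0.01), 0, 1),
     ((0.0, 0.3, 0.7, 0.00, 0.05, 0.00), 1, 2)],
    [((0.2, 0.6, 0.4, 0.01, 0.0, 0.0), 0, 1)],
]
prediction = forward(layers, [1.2, 0.8, 1.0])
```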

To perform experiments on real data we used a data source provided by the Irish government covering over 5000 households and commercial buildings. It turns out that the prediction performance of ANNs for individual households is quite low, but combining a few houses makes the forecasting error decrease drastically. We therefore considered the aggregated consumption of combinations of 10 houses, which corresponds to small apartment complexes where privacy issues still make sense.

Finally, we applied the FV SHE scheme on top of the forecasting algorithm, together with the fixed-point representation of real numbers recently studied by Costache et al. The simplest GMDH network, with 4 layers, was chosen as it already shows reasonable error performance. The resulting algorithm predicts the load demand for the next 30 minutes with a relative error of around 20%, which is comparable to modern ANNs. Computing one prediction takes 90 seconds in the sequential mode, or 4 seconds in the parallel implementation, on an average laptop equipped with an Intel Core i5-3427U CPU (running at 1.80GHz).

Please check http://eprint.iacr.org/2016/1117 for further details.
   

Monday, November 28, 2016

Interval arithmetic and practical lattice reduction

IALatRed in a Nutshell

Lattice reduction is fundamental in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra–Lenstra–Lovász reduction algorithm (called LLL or L3) has been improved in many ways over the past decades and remains one of the central tools for reducing lattice bases. In particular, its floating-point variants, where the long-integer arithmetic required by Gram–Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. Yet the running time of these floating-point versions is mostly determined by the precision needed to perform sound computations: theoretical lower bounds are large, whereas the precision actually needed on average is much lower. IALatRed is a proof-of-concept adaptive-precision version of LLL, of its variant Potential-LLL and of the BKZ reduction. In these algorithms, floating-point arithmetic is replaced by Interval Arithmetic. The certification property of interval arithmetic enables runtime detection of precision defects in numerical computations and accordingly makes it possible to run the reduction algorithms with guaranteed, nearly optimal precision. As such, these adaptive reduction algorithms run faster than the state-of-the-art implementations, while still being provable.

A brief history of lattice reduction

Lattices are defined as additive discrete subgroups of $\mathbb{R}^n$, i.e. the integer span $L(\mathbf{b_1}, \ldots, \mathbf{b_d}) = \bigoplus_{i=1}^d \mathbb{Z} \mathbf{b_i}$ of a linearly independent family of vectors $\mathbf{b_1}, \ldots, \mathbf{b_d}$ in $\mathbb{R}^n$. Such a family is called a basis of the lattice, and it is not unique. Nevertheless, all the bases of a given lattice have the same number of elements, $d$, which is called the *dimension* of the lattice. Among the infinitely many different bases of a lattice of dimension $d \geq 2$, some have interesting properties, such as having reasonably small vectors and low orthogonality defect. Finding such reduced bases is the goal of *lattice reduction* theory, which has been crucial in several fields of computer science and mathematics, especially in cryptology. For instance, lattices have been used to break many public-key cryptosystems in the last decades, including knapsack cryptosystems and RSA in specific settings, thanks to Coppersmith's method.

The problem of finding good bases goes back to the early works of Lagrange and Gauss on two-dimensional lattices and the introduction of the so-called Gauss algorithm (even though it was first described by Lagrange). This procedure can be seen as a two-dimensional extension of the well-known Euclidean algorithm for computing the greatest common divisor of two integers. In 1850, Hermite published the first reduction algorithm for arbitrary dimension. More than a century later, in 1982, Lenstra, Lenstra and Lovász designed the LLL algorithm, with the polynomial factorization problem as an application, after the celebrated work of Lenstra on integer programming. This algorithm is a milestone in the history of lattice reduction, being the first reduction algorithm with polynomial running time in arbitrary dimension. This work has since been improved in multiple ways, among others by Kaltofen, Schnorr, and Gama and Nguyen, either decreasing the time complexity or improving the quality of the reduction.

 

A bird's eye view on Interval Arithmetic

Interval arithmetic represents a real number by an interval that contains it, with floating-point endpoints. Arithmetic operations, in particular the basic operations $+$, $-$, $\times$, $\div$, can be redefined in this context. The main interest of this representation lies in its certification property: if real numbers are represented by intervals, the interval resulting from the evaluation of an algebraic expression contains the exact value of the evaluated expression.
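A minimal Python sketch of the idea (illustrative only; a rigorous library such as the one underlying IALatRed uses directed rounding, which is only crudely imitated here): each number is an interval, and every operation returns an interval guaranteed to contain the exact result.

```python
import math

class Interval:
    """Toy interval type: the exact real value lies in [lo, hi].
    After each operation the endpoints are pushed outward by one ulp,
    a crude stand-in for proper directed rounding."""

    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    @staticmethod
    def _outward(lo, hi):
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return Interval._outward(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval._outward(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval._outward(min(p), max(p))

    def width(self):
        return self.hi - self.lo   # a large width signals a lack of precision

x = Interval(1.0) - Interval(2.0 ** -60)   # a value slightly below 1
y = x * x - x                              # result interval encloses the exact value
```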

If the birth of lattice reduction is well dated, that is not the case for Interval Arithmetic. Some authors attribute its introduction to R. Moore in his 1962 PhD thesis. Others date it back to a 1958 article of T. Sunaga, which describes an algebraic interpretation of the lattice of real intervals, or even earlier, to a proposal in the 1931 PhD thesis of R.C. Young at Cambridge. Nonetheless, its development and industrial applications had to wait until 1980 and the momentum of U. Kulisch in Karlsruhe, which led IBM to develop a specific instruction set and a compiler natively integrating Interval Arithmetic. Its main asset, calculating directly on sets, is nowadays used to deterministically determine the global extrema of a continuous function, or to determine the zeroes of a function and prove their existence. Another application of Interval Arithmetic is the ability to detect a lack of precision at run time in numerical algorithms, thanks to the guarantees it provides on computations.

Mixing the two of them: IALatRed

This last application can lead to the design of adaptive-precision numerical algorithms. In particular, we transformed the celebrated LLL algorithm into an adaptive-precision version, leading to a provable reduction algorithm which in some cases even runs faster than the state-of-the-art FpLLL of D. Stehlé et al. We also applied this technique to a stronger variant of LLL, called Potential-LLL, first introduced by Fontein et al., as well as to the stronger BKZ algorithm, with similar results.



Wednesday, July 20, 2016

Full RNS Implementation of the Fan-Vercauteren Scheme

In a paper (https://eprint.iacr.org/2016/510), Jean-Claude Bajard, Julien Eynard, Anwar Hasan and Vincent Zucca explain how to perform the division and rounding steps occurring during decryption and homomorphic multiplication procedures of scale invariant schemes such as FV and YASHE without using any multi-precision library. 

Such schemes, based on RLWE, are usually defined over a power-of-two cyclotomic ring $\mathcal{R} = \mathbb{Z}[X]/(X^n+1)$ for efficiency reasons. In the FV scheme, a ciphertext $\texttt{c} = (\texttt{c}_0, \texttt{c}_1)$ is a polynomial of degree 1 with coefficients in $\mathcal{R}_q = \mathbb{Z}_q[X]/(X^n+1)$. Each $\texttt{c}_i$ is then represented as a polynomial of degree $n-1$ with coefficients in $\mathbb{Z}_q$, and $q$ is usually chosen as a product of several small prime moduli $q_i$. This allows the integer coefficients of the $\texttt{c}_i$'s to be represented, through the Chinese Remainder Theorem (CRT), by their residues modulo the smaller $q_i$'s. However, the use of the Residue Number System (a.k.a. non-positional CRT representation) is hardly compatible with procedures involving coefficient-wise division and rounding, which require costly comparisons to be performed. Thus, until now, one had to invert the CRT representation and then use multi-precision arithmetic to perform these steps.

The techniques presented in this paper allow these steps to be performed directly in RNS, avoiding a costly conversion between RNS and a positional system as well as the use of a multi-precision library. Compared with the FV-NFLlib implementation (https://github.com/CryptoExperts/FV-NFLlib), they enable speed-ups by a factor of 5 to 20 for decryption and of 2 to 4 for homomorphic multiplication. They may also lead to future implementations on highly parallel accelerator cards such as GPUs. The paper will be presented at SAC (http://www.engr.mun.ca/~sac2016/) in St John's, Canada, in August; this blog post briefly presents it.


Decryption
A plaintext $\texttt{m}$ is an element of $\mathcal{R}_t = \mathbb{Z}_t[X]/(X^n+1)$; after encryption it becomes a ciphertext $\texttt{c} = (\texttt{c}_0,\texttt{c}_1)\in (\mathcal{R}_q)^2$ with $t < q$. To recover the message, the decryption procedure computes $[\lfloor \frac{t}{q}(\texttt{c}_0+\texttt{c}_1 \texttt{s}) \rceil]_t$ with $\texttt{s}$ the secret key.
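For reference, here is a toy Python sketch of textbook FV encryption together with this decryption formula, computed naively over the integers (insecure toy parameters; variable names are ours). It is exactly this kind of coefficient-wise scaling and rounding that the RNS techniques below reproduce without multi-precision arithmetic.

```python
import random

n, q, t = 8, 2**20, 16                 # toy parameters, far too small to be secure
delta = q // t

def poly_mul(a, b):
    """Multiplication modulo X^n + 1 (negacyclic convolution), coefficients mod q."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if i + j >= n else 1
            res[(i + j) % n] = (res[(i + j) % n] + sign * ai * bj) % q
    return res

def small():
    """Polynomial with coefficients in {-1, 0, 1}, stored mod q."""
    return [random.randint(-1, 1) % q for _ in range(n)]

# key generation and encryption (textbook FV)
s = small()
a = [random.randrange(q) for _ in range(n)]
b = [(-(x + y)) % q for x, y in zip(poly_mul(a, s), small())]   # b = -(a*s + e)
m = [random.randrange(t) for _ in range(n)]
u, e1, e2 = small(), small(), small()
c0 = [(x + y + delta * mi) % q for x, y, mi in zip(poly_mul(b, u), e1, m)]
c1 = [(x + y) % q for x, y in zip(poly_mul(a, u), e2)]

# decryption: [round(t/q * (c0 + c1*s))]_t, done here with plain integers
v = [(x + y) % q for x, y in zip(c0, poly_mul(c1, s))]
assert [round(t * vi / q) % t for vi in v] == m
```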

There are two difficulties in performing this computation in RNS. First, $\texttt{c}'=t(\texttt{c}_0+\texttt{c}_1 \texttt{s})$ lies in $\mathcal{R}_q$ and is represented in the "native" RNS base $\mathcal{B}_q = \{q_1,\dots,q_k\}$ with $q = q_1\cdots q_k$; thus, within base $\mathcal{B}_q$, it is impossible to perform the division by $q$. Second, computing a rounding is unnatural in a non-positional system like RNS.

To overcome the first difficulty we need to convert the elements from base $\mathcal{B}_q$ to another base coprime to $\mathcal{B}_q$. As we only need the result modulo $t$, we can use the single-modulus base $\{ t \}$, as long as $t$ and $q$ are coprime (usually $t$ is a power of two while $q$ is odd). To achieve this conversion we use a fast base conversion, which adds a multiple of $q$ called the $q$-overflow. After the division by $q$ in the new base, this multiple of $q$ is cancelled and only an error term $\tau$ remains.
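The following toy Python sketch (integers instead of polynomial coefficients, with a made-up base) shows the fast base conversion and why it is only correct up to a $q$-overflow, i.e. an extra multiple of $q$ bounded by the number of moduli:

```python
from math import prod

B_q = [13, 17, 19]                        # toy RNS base, q = 4199
q = prod(B_q)

def fast_base_conversion(residues, new_modulus):
    """Compute sum_i |x_i * (q/q_i)^{-1}|_{q_i} * (q/q_i) and reduce it modulo
    the new modulus. The sum equals x + alpha*q with 0 <= alpha < len(B_q):
    the true value plus a q-overflow, which the later division by q turns
    into a small error term tau."""
    total = 0
    for x_i, q_i in zip(residues, B_q):
        q_hat = q // q_i
        total += (x_i * pow(q_hat, -1, q_i) % q_i) * q_hat
    return total % new_modulus, total

x = 3500                                   # 0 <= x < q
residues = [x % q_i for q_i in B_q]
_, lifted = fast_base_conversion(residues, 2**16)
assert lifted % q == x and 0 <= (lifted - x) // q < len(B_q)
```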

To solve the second difficulty, we choose to compute a flooring instead of the rounding, using the equality $\lfloor \frac{x}{q}\rfloor = \frac{x-|x|_q}{q}$. The computation of the residue modulo $q$ is achieved by a Montgomery reduction, which fits well in RNS. Of course, as flooring and rounding are not the same, an error may occur; in that case we simply consider this error as part of the previous error term $\tau$.
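A tiny numeric check of this identity, and of the fact that the flooring differs from the rounding by at most 1 (a toy Python snippet with scalar integers):

```python
q = 97
for x in [0, 48, 49, 96, 500, 12345]:
    floored = (x - x % q) // q            # floor(x/q) via the identity above
    rounded = (2 * x + q) // (2 * q)      # round(x/q), ties rounded up
    assert floored == x // q
    assert rounded - floored in (0, 1)    # the extra error absorbed into tau
```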

At this point, to recover the message we need to cancel the error $\tau$ due to the conversion and the flooring, as well as the inherent noise in the ciphertext. To do this, we add another modulus $\gamma$, coprime to $q$, to the base $\{t\}$: we multiply $\texttt{c}'$ by $\gamma$ in base $\mathcal{B}_q$ and run the computations explained above in base $\{\gamma, t\}$ instead of $\{t\}$. Then, for a well-suited $\gamma$, we can cancel the noise in the residue modulo $t$ by subtracting from it the centered residue modulo $\gamma$. Finally, we recover $[\texttt{m}]_t$.

Homomorphic multiplication

To perform a homomorphic multiplication of two ciphertexts $\texttt{c}^{(1)}$ and $\texttt{c}^{(2)}$, one first has to scale the product $\texttt{c}^{(1)}\times\texttt{c}^{(2)}$ (which is a degree-two polynomial with coefficients in $\mathcal{R}_q$) by $t/q$ and round the result coefficient-wise, as for decryption. Then one has to relinearize this degree-two polynomial to get back a degree-one polynomial. To limit the noise growth during this step, the coefficients of an element of $\mathcal{R}_q$ must be decomposed in some radix base $\omega$. Sadly, this is precisely the kind of operation we cannot do in a non-positional number representation.

One should understand that the product $\texttt{c}^{(1)}\times\texttt{c}^{(2)}$ has to be computed as if lifted to $\mathbb{Z}$, i.e. without reduction modulo $q$; it is only after scaling down by $t/q$ that the coefficients of the product fit back in $\mathcal{R}_q$. Instead of computing the product over $\mathbb{Z}$, we use a second RNS base $\mathcal{B}'$, coprime to $\mathcal{B}_q$ and of the same size, so that no information is lost.
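A toy Python sketch of why a second, coprime base of the same size is enough (plain integers instead of polynomial coefficients; bases and values are made up): the product is recovered exactly from its residues in the combined base, whereas base $\mathcal{B}_q$ alone only determines it modulo $q$.

```python
from math import prod

def crt(residues, moduli):
    """Recombine residues into the unique value modulo the product of the moduli."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

B_q = [7, 11, 13]                  # toy native base, q = 1001
B_p = [17, 19, 23]                 # toy auxiliary base, coprime to q
q = prod(B_q)

a, b = 940, 873                    # a*b = 820620 > q: too big for base B_q alone
combined = B_q + B_p
res = [(a % m) * (b % m) % m for m in combined]      # residue-wise product

assert crt(res, combined) == a * b                   # exact in the combined base
assert crt(res[:len(B_q)], B_q) == (a * b) % q       # B_q alone: only a*b mod q
```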

We then use a fast base conversion to convert $\texttt{c}^{(1)}$ and $\texttt{c}^{(2)}$ from $\mathcal{B}_q$ to $\mathcal{B}'$. We manage to reduce the error in base $\mathcal{B}'$ (due to the $q$-overflow) in order to limit its growth during the product, and then we perform the product in both bases.

Next, we use a fast conversion to convert the residues of the product from base $\mathcal{B}_q$ to base $\mathcal{B}'$. Then, as for decryption, we perform a flooring using a Montgomery reduction instead of a rounding, and we obtain $\lfloor t/q (\texttt{c}^{(1)}\times\texttt{c}^{(2)})\rfloor$ in base $\mathcal{B}'$. We consider the error due to the different conversions as part of the noise.

For the next step, we convert the result back to base $\mathcal{B}_q$, this time using an exact (and more costly) base conversion, because we cannot tolerate overflows from the base $\mathcal{B}'$. To overcome the last difficulty, the radix-$\omega$ decomposition needed for relinearization, we decompose the elements of $\mathcal{R}_q$ in a way very similar to the original one, but which can be performed in RNS. The end of the homomorphic multiplication procedure remains the same; we just replace the original decomposition function by our own.

Impact on the noise
Of course, using all these tricks adds errors which become part of the noise and thus make it bigger. One could wonder if, and how, the multiplicative depth is affected. Indeed, our methods modify the different bounds on the noise for homomorphic multiplication, as well as the decryption bound. Nevertheless, by running an analysis with our new bounds, we have shown that the multiplicative depth remains unchanged compared to the original scheme for all sets of parameters but one.

Thursday, July 7, 2016

WHEAT 2016: The Pre-history of Lattice-based Cryptanalysis

On the second day of the WHEAT workshop, Antoine Joux gave an invited talk on the history of cryptanalytic applications of lattice-basis reduction algorithms.

A lattice is a discrete subgroup of $\mathbb{R}^n$. Equivalently, it is the set of integer-linear combinations of a family of $\mathbb{R}$-linearly independent vectors in $\mathbb{R}^n$. Finding a short basis in $\mathbb{R}^2$ is relatively easy, and Gauss' lattice-basis reduction algorithm is effective for this purpose. As the speaker emphasised, things become more complicated when we move to higher dimensions. He then went on to discuss the details of the famous LLL algorithm, invented by Lenstra, Lenstra and Lovász in 1982, for lattice-basis reduction in arbitrary dimension. LLL is a polynomial-time algorithm but returns a short vector within an exponential approximation factor of the length of the shortest vector in the lattice. This algorithm, which performs well in practice, may be viewed as a combination of the Gauss lattice reduction and the Gram-Schmidt orthogonalisation methods.

An early application of lattice-basis reduction is the cryptanalysis of knapsack-based cryptosystems. The knapsack problem, a.k.a. the subset-sum problem, is the following: given integers $a_1, \ldots, a_n$ and $S$, find $\epsilon_1,\ldots,\epsilon_n \in \{0,1\}$ such that
\[S = \overset{n}{\underset{i=1}{\sum}} \epsilon_i \cdot a_ i.\]
This problem is NP-hard, though some cases are easy, and it served as a basis for cryptosystems such as the Merkle-Hellman cryptosystem. The basic idea is to hide an easy knapsack in a hard-looking one. However, at CRYPTO 1982, Shamir broke this cryptosystem using lattice-basis reduction. This attack exploits the hidden super-increasing knapsack, in which $a_i > \sum_{j=1}^{i-1} a_j$. Other works, starting with that of Lagarias and Odlyzko in 1985, led to improved attacks on low-density knapsacks.

The speaker also briefly spoke about the application of lattice-basis reduction to the problems of

  • reconstructing small linear dependencies: given a sequence of real numbers $(x_1,\ldots,x_n)$, find small coefficients $v_i$ such that $\sum v_i\cdot x_i = 0$,
  • constructing polynomials for use in the Number Field Sieve for solving the Discrete Logarithm Problem, and
  • finding small roots of univariate and bivariate integer polynomials modulo composite numbers: Coppersmith's attack.

Tuesday, July 5, 2016

WHEAT Workshop 2016, Paris

The second HEAT workshop on Fully Homomorphic Encryption (otherwise known as WHEAT) kicked off this morning in (disappointingly cloudy) Paris.

The first talk of the day was Martin Albrecht's invited talk on solving short-secret LWE instances. This is joint work with Rachel Player and Sam Scott. He provided a clear summary of the state of the art in terms of the algorithms available, and thus set the stage for the rest of the day. Attacks using parameter choices inspired by HElib and the LP model were discussed and, where possible, estimated running times were provided. A partial conclusion was that the latter (the LP model) should not simply be assumed. For a more detailed read, see https://eprint.iacr.org/2015/046.pdf.

Also discussed was RLWE security in the FHE case, in a talk given by Guillaume Bonnoron. This is joint work with Caroline Fontaine. They present an attack on specific FHE parameter choices and use the FV scheme because they wish to avoid a modulus-switching operation (see for example https://eprint.iacr.org/2015/889.pdf for a description). The conclusion of this talk is that their attack does work, but FHE and RLWE are not broken, because the attack does not scale up: a bigger parameter $n$ leads to bigger errors. Whilst they took one day to complete the attack versus a previous 3.5 days (https://eprint.iacr.org/2015/176.pdf), the talk's final conclusion was that more cryptanalysis is needed.

The second invited talk in the afternoon was Leo Ducas', on joint work with Martin Albrecht and Shi Bai. It discussed the subfield lattice attack on "overstretched NTRU assumptions"; for the full technical read, see https://eprint.iacr.org/2016/127.pdf. The attack is based on using a subfield of the field we are working in: they assume the latter is a power-of-two cyclotomic, but claim the attack can be carried out in any field which is Galois. We map our instances down to the subfield using the (partial) norm map in order to solve a (hopefully) easier problem there. Given the right conditions, we can then map the solution we find back up and recover a short enough solution to carry through the attack. The practicality of the attack grows with the modulus $q$, hence the "overstretched" condition. Although this does not seem (yet?) to affect the NTRU schemes used in practice, the talk's conclusion was to drop the NTRU assumption altogether.