
We model our system with the same players that already existed in the Norwegian e-voting project, namely the voter V with her computer and mobile phone, a ballot box B, a receipt generator R, a decryption service D and an auditor A. We quickly explain the existing motivation before proceeding. For more details, see Gjøsteen [14].

A premise for Gjøsteen's work is that the users may not be in control of their own equipment, for instance due to malware. One should therefore distinguish between the voter's intention and what the computer actually does. When the ballot box receives an encrypted ballot from the voter's computer, it transforms and partially decrypts the ballot, and forwards it to the receipt generator. The receipt generator then completely decrypts the transformed ballot, and the correct receipt code is sent by SMS to the voter's mobile phone.

Both the ballot box and the receipt generator give the auditor a log of everything they have seen, so that the auditor can compare the logs and make sure that neither of them is ignoring information seen by the other. Any information dropped by the ballot box should ideally be detected by the voter through the receipt text message. (The soundness of this protocol rests on the assumption that the phone is independent of the device used to vote. While this may have been an acceptable assumption when the system was first introduced, it is less so today. Finding another solution may be necessary, but is outside the scope of this paper.)

Next, consider what happens when the election closes. The ballot box then provides the ciphertexts to the decryption service, which outputs the public result of the election. The auditor verifies that the decryption service received the right ciphertexts from the ballot box, and that the output was correct. A sketch of these checks follows.
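To make the auditor's counting-time checks concrete, here is a minimal Python sketch under our own naming: audit_count and the verify_proof callback are hypothetical, not part of the protocol specification, and a real deployment would compare committed logs and verify zero-knowledge proofs of correct decryption rather than raw lists.

```python
# Minimal sketch of the auditor's two checks at counting time. The function
# names and the proof interface are illustrative assumptions.
def audit_count(ballot_box_log, decryption_input, result, proofs, verify_proof):
    # Check 1: the decryption service received exactly the ciphertexts
    # the ballot box claims to have forwarded.
    if sorted(ballot_box_log) != sorted(decryption_input):
        return False
    # Check 2: every claimed plaintext is backed by a verifiable proof
    # of correct decryption (verify_proof is a hypothetical callback).
    return all(verify_proof(c, m, p)
               for c, m, p in zip(decryption_input, result, proofs))
```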

5.1 Setup assumptions

We assume the existence of the following algorithms; a toy instantiation is sketched after the list.

K takes in a security parameter 1^λ, and outputs a tuple (pk1, sk1, sk2, evk, tk), where pk1 is the public encryption key, sk1 and sk2 are private decryption keys, evk is an evaluation key and tk is a transformation key to switch from decryption under sk1 to decryption under sk2.

S takes in all voters {vi} within the vote counting district and system key material, and outputs a family of personalised functions {hvi} and other receipt material described in Section 6.1.1.

E takes in pk1 and a ballot b, and outputs a ciphertext c.

T takes in a ciphertext c which can be decrypted by sk1 and a transformation key tk, and outputs a ciphertext c' that can be decrypted by sk2.

Eval takes in an allowed circuit C, ciphertexts c1, ..., cn and the evaluation key evk, and outputs a ciphertext c' which encrypts the output of C on the decryptions of c1, ..., cn.

D takes in a ciphertext and a corresponding decryption key, and outputs a message.
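To fix ideas, here is a minimal Python sketch of these interfaces, instantiated with textbook ElGamal so that the transformation key becomes concrete. The toy group parameters, the additive key relation sk2 = sk1 + tk, and the restriction of Eval to a single multiplicative "allowed circuit" are our illustrative assumptions, not the scheme the system actually uses.

```python
# Toy ElGamal instantiation of the Section 5.1 interfaces (illustration only).
import secrets

P = 2**127 - 1   # toy prime modulus; far too structured for real use
G = 3            # toy generator

def K():
    """Key generation; we assume the relation sk2 = sk1 + tk (mod P - 1)."""
    sk1 = secrets.randbelow(P - 1)
    tk = secrets.randbelow(P - 1)
    sk2 = (sk1 + tk) % (P - 1)
    pk1 = pow(G, sk1, P)
    evk = None   # a somewhat homomorphic scheme would publish a real evk
    return pk1, sk1, sk2, evk, tk

def E(pk1, b):
    """Encrypt a ballot b (encoded as an integer in 1..P-1) under pk1."""
    r = secrets.randbelow(P - 1)
    return (pow(G, r, P), pow(pk1, r, P) * b % P)

def T(c, tk):
    """Switch a ciphertext under sk1 to one decryptable under sk2."""
    x, w = c
    return (x, w * pow(x, tk, P) % P)

def Eval(C, cs, evk):
    """ElGamal is only multiplicatively homomorphic, so the single allowed
    circuit in this toy is the product of the encrypted ballots."""
    assert C == "mul"
    x, w = 1, 1
    for xi, wi in cs:
        x, w = x * xi % P, w * wi % P
    return (x, w)

def D(c, sk):
    """Decrypt (x, w) as w / x^sk mod P."""
    x, w = c
    return w * pow(pow(x, sk, P), -1, P) % P

# Sanity check: the transformation preserves the plaintext.
pk1, sk1, sk2, evk, tk = K()
c = E(pk1, 42)
assert D(c, sk1) == D(T(c, tk), sk2) == 42
```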

5.2 Security requirements for counting

The security requirements can be summarised in the following list. The definitions are close to those of the original protocol, in order to compare apples to apples. However, some properties must necessarily be changed to correspond to our updated model. The formal definitions are available in Gjøsteen [14].

D-privacy The decryption service should not be able to correlate its input to voter identities.

Generate parameters. Let V be a voter, let {ci} be all ciphertexts containing ballots, and let c' be the output of the tally circuit. The distribution of c' should be independent of V.

B-privacy An adversary that knows the transformation key should not be able to say anything about the content of the ciphertext he sees.
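Read as an indistinguishability game over the toy scheme sketched in Section 5.1, B-privacy could look as follows; the game structure and the choose/guess callbacks modelling the adversary's two phases are our own framing.

```python
# Sketch of the B-privacy game over the toy scheme (K, E from Section 5.1).
import secrets

def b_privacy_game(choose, guess):
    pk1, sk1, sk2, evk, tk = K()
    m0, m1 = choose(pk1, tk)        # adversary picks two ballots
    b = secrets.randbelow(2)
    c = E(pk1, m1 if b else m0)     # simulator encrypts one of them
    # The adversary holds tk but neither decryption key; its winning
    # probability should be close to 1/2.
    return guess(pk1, tk, c) == b
```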

R-privacy An adversary that controls the pre-code decryption key and sees many transformed encryptions of valid ballots should not be able to say anything non-trivial about the content of those encryptions.

One can formalise the notion by letting an adversary with access to the encryption key and the decryption key of transformed ballots submit a sequence of ballots to the simulator, who in turn encrypts either the sequence itself or the sequence under some permutation. The simulator transforms the ciphertexts, and sends both the original and the transformed ciphertexts back to the adversary, who must distinguish the two cases.
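A sketch of this game over the toy scheme, with one added assumption: the transformation also applies a secret blinding exponent s, so that the transformed ciphertext decrypts to a pre-code b^s rather than to the ballot b (pre-codes of this shape appear in Gjøsteen's construction). Without some such blinding, the pre-code decryption key would trivially reveal the permuted ballots and the game could not be hard.

```python
# Sketch of the R-privacy game; T_pre is our hypothetical blinding transform.
import random
import secrets

def T_pre(c, tk, s):
    """Transform and blind: the result decrypts under sk2 to b^s, not b."""
    x, w = c
    x2 = pow(x, s, P)
    return (x2, pow(w, s, P) * pow(x2, tk, P) % P)

def r_privacy_game(choose, guess):
    pk1, sk1, sk2, evk, tk = K()
    s = secrets.randbelow(P - 1)                # secret blinding exponent
    ballots = choose(pk1, sk2)                  # adversary picks the sequence
    b = secrets.randbelow(2)
    order = list(range(len(ballots)))
    if b == 1:
        random.shuffle(order)                   # ... or a permutation of it
    cs = [E(pk1, ballots[i]) for i in order]
    ts = [T_pre(c, tk, s) for c in cs]
    # The adversary holds sk2 and must distinguish; its winning
    # probability should be close to 1/2.
    return guess(pk1, sk2, cs, ts) == b
```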

A-privacy An adversary that sees the insertion of blank ballots, can correlate all ciphertexts to real identities, and verifies all decryptions should not be able to learn how anyone voted.

This definition differs considerably from previous work. We play the following game between a simulator and an adversary.

The simulator selects a random bit b, and runs K and S. The adversary gets pk1, and produces a set {Vi} of voters and two sets {m0i}, {m1i} of corresponding ballots that will create identical election outcomes. The simulator uses the set {(Vi, mbi)} to simulate an election with the adversary playing the part of the auditor. The adversary outputs a bit b' and wins if b = b'. The probability that the adversary wins should be close to 1/2.
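Structurally, the game can be sketched as follows; run_election is a hypothetical stand-in for the full protocol simulation with the adversary acting as auditor, and tally is a placeholder for the public outcome.

```python
# Structural sketch of the A-privacy game; run_election and tally are
# hypothetical placeholders, not part of the Section 5.1 interface.
import secrets

def tally(ballots):
    """Placeholder public outcome: the multiset of cast ballots."""
    return sorted(ballots)

def a_privacy_game(choose, run_election, guess):
    pk1, sk1, sk2, evk, tk = K()
    b = secrets.randbelow(2)
    voters, m0, m1 = choose(pk1)          # two ballot assignments ...
    assert tally(m0) == tally(m1)         # ... with identical outcomes
    ballots = m1 if b else m0
    view = run_election(list(zip(voters, ballots)),
                        pk1, sk1, sk2, evk, tk)
    # The adversary saw the full auditor view; it should still win with
    # probability close to 1/2.
    return guess(view) == b
```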

B-integrity An adversary that knows all the key material, and can choose the per-voter key material, should not be able to create an identity, a ciphertext and a transformed ciphertext such that the decryption of the transformed ciphertext is inconsistent with the decryption of the original ciphertext. The consistency relation is stated concretely below.
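In terms of the toy scheme, the relation such an adversary tries to violate is simply that the original and the transformed ciphertext decrypt consistently; the check below states the relation, though the toy scheme itself of course has no mechanism forcing a malicious ballot box to respect it.

```python
# The consistency relation targeted by the B-integrity adversary, stated
# over the toy scheme (D from Section 5.1). With pre-code blinding, the
# right-hand side would instead be the pre-code function of the ballot.
def consistent(c, c_t, sk1, sk2):
    return D(c_t, sk2) == D(c, sk1)
```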

D-integrity The decryption service must not be able to alter the election outcome.

The focus in this work will be on D/A-privacy and D-integrity.