
blockchains potentially offering better protection of data subjects' rights whilst also benefitting from the immutability and verifiability that blockchains offer.

4.3.1 Human intervention

A key component of automated individual decision-making is whether there is human intervention involved in the decision-making process. Processing that occurs automatically but which does not result in a decision being made (or where a human looks at the results of the processing and decides) does not fall foul of Article 22. An issue only arises if the decision is made without any human intervention. The Article 29 Working Party guidelines highlight that automated decision-making is “the ability to make decisions by technological means without human involvement”.

Therefore, with regard to smart contracts, one must ask the question: when does a decision take place? This requires an evaluation of the threshold of the term ‘solely’. The Article 29 Working Party assert that a decision based solely on automated processing is not negated by the trifling presence of a human.225 There must be meaningful influence from the human in the decision-making process.

The three major points of conceivable human interaction in smart contracts concern: the initial creation of a smart contract; the possible use of a human ‘oracle’ to provide information that a certain set of circumstances has occurred (thus potentially triggering part or parts of a smart contract); and a potential human review after a decision has been made.226

4.3.1.1 Human creator

With the exception of smart contracts created through an algorithm or machine learning technology,227 there is likely to be a human creator who sets the parameters and operational rules of a smart contract.228 However, whilst this is undoubtedly human involvement, it must concern the decision-making process. It is hard to argue that a decision has taken place when the entire notion of a smart contract is that different outcomes (or decisions) can arise contingent on certain criteria. Perhaps it should be seen that the role of human creator in the context of a smart contract does not cross the threshold of human intervention to prohibit a decision being solely based on automated processing. On the other hand, if one is to consider the decision to be ‘pre-loaded’ into the smart contract, then this could potentially satisfy the need for human intervention. Clarification on this issue from data protection authorities would be a welcome addition to the discourse regarding this specific point.

225 Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 WP251 (2018) 20.

226 M Finck, ‘Smart Contracts as a Form of Solely Automated Processing under the GDPR’ Max Planck Institute for Innovation & Competition Research Paper 19-01 (2021) 16.

227 AS Almasoud, MM Eljazzar and F Hussain, ‘Toward a Self-Learned Smart Contracts’ 2018 IEEE 15th ICEBE (2018) 269 Available at: <https://doi.org/10.1109/ICEBE.2018.00051> Accessed on: 12th August

228 (n 226)
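The ‘pre-loaded’ reading can be illustrated with a short sketch. The following Python model is purely hypothetical (the contract, parameters and thresholds are invented for illustration, and no real smart contract platform is assumed): the human creator fixes the decision rules once, at creation, and every later outcome follows mechanically from those pre-loaded parameters.

```python
# Hypothetical sketch: a human creator "pre-loads" decision rules into a
# smart contract at deployment time. All names and thresholds are invented
# for illustration; no real contract platform is assumed.

class FlightDelayContract:
    def __init__(self, payout_threshold_minutes: int, payout_amount: int):
        # Human intervention occurs here, once, at creation:
        # the creator chooses the parameters and operational rules.
        self.payout_threshold_minutes = payout_threshold_minutes
        self.payout_amount = payout_amount

    def decide(self, delay_minutes: int) -> int:
        # Every subsequent "decision" is a mechanical application of the
        # pre-loaded rules -- no human is involved at decision time.
        if delay_minutes >= self.payout_threshold_minutes:
            return self.payout_amount
        return 0

contract = FlightDelayContract(payout_threshold_minutes=120, payout_amount=400)
print(contract.decide(180))  # 400 -- delay exceeds threshold, payout made
print(contract.decide(30))   # 0 -- below threshold, no payout
```

On this sketch, the open question in the text is whether the creator’s one-off choice of parameters amounts to human involvement in each later ‘decision’, or whether each call to the decision logic is solely automated.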

4.3.1.2 Human oracle

A human acting as an oracle appears unlikely to pass the human intervention threshold stated by the Article 29 Working Party. Their intervention is limited to providing factual information to inform the rules set out in the smart contract. They cannot control the circumstances which attach themselves to a specific smart contract’s architecture. For example, a human oracle might register factual information to a life insurance smart contract that a person is now deceased, thereby triggering the smart contract to decide whether a financial distribution to a named beneficiary should be paid.229 Perhaps if the human oracle had discretion here, the ‘solely’ threshold may not necessarily be breached and there could potentially be human involvement. However, if the human involvement only amounts to a pure notification role (which he may be obliged to fulfil, due to a contractual or legal obligation) then this is likely to still fall under the definition of ‘solely’ automated individual decision-making. This is because the human oracle would have no actual influence in the decision-making process, but rather is a ‘cog’ in an automated ‘machine’.
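The ‘cog in a machine’ point can be sketched as follows. This Python model is hypothetical (the contract and its fields are invented for illustration, and no real oracle service or platform is assumed): the human oracle’s only input is a factual notification, and the distribution decision is computed entirely by the pre-programmed rules.

```python
# Hypothetical sketch of a human oracle with a pure notification role.
# The oracle reports a fact (the policyholder is deceased); the contract's
# pre-programmed rules then make the distribution decision without any
# human discretion. All names and values are invented for illustration.

class LifeInsuranceContract:
    def __init__(self, beneficiary: str, sum_assured: int, policy_in_force: bool):
        self.beneficiary = beneficiary
        self.sum_assured = sum_assured
        self.policy_in_force = policy_in_force
        self.paid_out = False

    def oracle_notify_death(self):
        # The human oracle merely triggers this notification; they cannot
        # influence whether or how much is paid -- that is decided entirely
        # by the rules below.
        if self.policy_in_force and not self.paid_out:
            self.paid_out = True
            return (self.beneficiary, self.sum_assured)
        return None

contract = LifeInsuranceContract("Alice", 100_000, policy_in_force=True)
print(contract.oracle_notify_death())  # ('Alice', 100000) -- rules decide the payout
print(contract.oracle_notify_death())  # None -- already paid; oracle has no say
```

The sketch makes the legal point visible: the oracle supplies an input to the decision logic but exercises no influence over the outcome, which is why the ‘solely’ threshold would likely still be met.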

4.3.1.3 Human reviewer

The necessity for a controller to put in place safeguards where automated individual decision-making is taking place on the basis of consent or contract creates a final situation in which human intervention can be present.230 It appears possible for a controller to establish human intervention after a decision has been made, so long as that review is “carried out by someone who has the appropriate authority and capability to change the decision”.231 From a smart contract perspective, it may be relatively easy for some smart contracts to ensure the potential for human review, particularly in heavily regulated industries such as finance or insurance.232 However, in more decentralised applications, smart contracts may struggle to meet this human intervention requirement (for example, where they operate within Decentralised Autonomous Organisations).233
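One way such a safeguard could be structured is sketched below in Python. The model is hypothetical (the reviewer roles, outcomes and class are invented for illustration): the automated decision is held pending, and only a reviewer with the appropriate authority can change it before it is finalised, mirroring the Working Party’s requirement quoted above.

```python
# Hypothetical sketch of post-decision human review: the automated decision
# is held in a pending state, and a reviewer with appropriate authority can
# change it before execution. Names and roles are invented for illustration.

AUTHORISED_REVIEWERS = {"claims_manager"}

class ReviewableDecision:
    def __init__(self, automated_outcome: str):
        self.outcome = automated_outcome   # produced by the automated rules
        self.finalised = False

    def review(self, reviewer: str, new_outcome=None) -> str:
        # Only someone with appropriate authority may change the decision --
        # otherwise the "review" would be a token gesture, not meaningful
        # human intervention.
        if reviewer not in AUTHORISED_REVIEWERS:
            raise PermissionError("reviewer lacks authority to change the decision")
        if new_outcome is not None:
            self.outcome = new_outcome     # meaningful human intervention
        self.finalised = True
        return self.outcome

decision = ReviewableDecision("deny_claim")
print(decision.review("claims_manager", "approve_claim"))  # approve_claim
```

A fully on-chain, decentralised deployment may have no natural place for such a privileged reviewer role, which is the difficulty the text notes for Decentralised Autonomous Organisations.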

229 A simplified example of ‘InsurTech’. See PricewaterhouseCoopers, ‘How insurers can seize InsurTech opportunities’ (2019) Available at: <https://www.pwc.com/us/en/industries/insurance/library/insurtech-innovation.html> Accessed on: 21st October 2021.

230 GDPR, Article 22(3).

231 Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 WP251 (2018) 27.

232 M Finck, ‘Smart Contracts as a Form of Solely Automated Processing under the GDPR’ Max Planck Institute for Innovation & Competition Research Paper 19-01 (2021) 16.

233 M Finck, ‘Smart Contracts as a Form of Solely Automated Processing under the GDPR’ Max Planck Institute for Innovation & Competition Research Paper 19-01 (2021) 27.

4.3.2 Decisions producing legal effects or similar

Not all individual automated decision-making is caught by Article 22. The processing must also produce legal effects concerning the data subject or similarly significantly affect them. Looking at the wording of Article 22(1), ‘legal effects’ appears somewhat easier to define than the broadly worded processing which ‘similarly significantly affects’ a data subject. The Article 29 Working Party provide the examples of being refused a credit application or a job rejection from automated e-recruitment. The InsurTech example, provided in this thesis at section 4.3.1.2, would also likely fall under the definition of legal or similarly significant effects. It is noted, however, that this is an issue that affects not only smart contracts on a blockchain.

4.3.3 Controller obligations in relation to Article 22

There are a number of obligations that may fall on a controller that is engaged in processing which results in automated individual decision-making. Naturally, controller identification234 may be an issue depending on the architecture of the smart contract and the blockchain it is operating on. Moreover, this may hinder the data subject’s rights to receive certain information.

In a situation where smart contracts are considered to be making decisions based solely on automated processing (pursuant to Article 22), the controller will be under an obligation to provide “meaningful information” on the logic used in addition to the “significance” of the consequences of the processing.235

4.3.4 Necessity for a DPIA

If smart contracts are considered to satisfy the requirements of automated individual decision-making so as to fall under Article 22, then the processing is likely to be considered high risk and to require the controller to conduct a Data Protection Impact Assessment. Under Article 35 of the GDPR, explicit reference is made to processing concerning ‘new technologies’ and ‘a high risk to the rights and freedoms of natural persons’.236 However, if there are difficulties in correctly identifying a controller, there may be an infringement of the obligation to conduct a DPIA.