The Moral Character of Cryptographic Work (2015)

Preamble

Part 1: Social responsibility of scientists and engineers
A famous manifesto
Two modes of behaving politically
Implicit politics
Overt politics
The ethic of responsibility
Historical events shaping the ethic of responsibility
The good scientist
The ethic of responsibility in decline
Technological optimism
Conclusion to part 1

Part 2: The political character of cryptographic work

Scientist or spy?
Academic cryptography used to be more political
Stripping out the politics
Children of [Chaum81] and [GM82]
The cypherpunks
Cryptography favors whom?
Encryption
IBE
Differential privacy
FHE and iO
Cryptanalysis
Unthreateningly engaged
Conclusion to part 2

Part 3: The dystopian world of pervasive surveillance

Law-enforcement framing
Surveillance-studies framing
Our dystopian future
Conclusion to part 3

Part 4: Creating a more just and useful field
Secure messaging in the untrusted-server model
Bigkey cryptography
More examples
Choose well
Practice-oriented provable security
Funding
Academic freedom
Against dogma
A more expansive view
Learn some privacy tools
No cutesy adversaries
A cryptographic commons
Communications
Institutional values
Conclusion to it all

Acknowledgements

Phillip Rogaway

Department of Computer Science

University of California, Davis, USA

rogaway@cs.ucdavis.edu

December 2015

(minor revisions March 2016)

Abstract. Cryptography rearranges power: it configures who
can do what, from what. This makes cryptography an inherently
political
tool, and it confers on the field an intrinsically moral dimension.
The Snowden revelations motivate a reassessment of the political and moral positioning
of cryptography. They lead one to ask if our inability to effectively address
mass surveillance constitutes a failure of our field. I believe that it does.
I call for a community-wide effort to develop more effective means to resist
mass surveillance. I plead for a reinvention of our disciplinary culture to attend
not only to puzzles and math, but, also, to the societal implications of our
work.

Keywords: cryptography · ethics · mass surveillance · privacy
· Snowden · social responsibility


Preamble. Most academic cryptographers seem to think that
our field is a fun, deep, and politically neutral game—a set of puzzles
involving communicating parties and notional adversaries. This vision of who
we are animates a field whose work is intellectually impressive and rapidly
produced, but also quite inbred and divorced from real-world concerns. Is
this what cryptography
should be like? Is it how we should expend the bulk of our
intellectual capital?

For me, these questions came to a head with the Snowden disclosures of 2013.
If cryptography’s most basic aim is to enable secure communications, how
could it not be a colossal failure of our field when ordinary people
lack even a modicum of communication privacy when interacting electronically?
Yet I soon realized that most cryptographers didn’t see it this way. Most seemed
to feel that the disclosures didn’t even implicate us cryptographers.

I think that they do. So I want to talk about the moral obligations of
cryptographers, and my community as a whole. This is not a topic
cryptographers routinely discuss. In this post-Snowden era, I think it needs
to be.

Part 1: Social responsibility of scientists and engineers

A famous manifesto. I’d like to begin with a story—a true
story.1 To set the stage, it is London, the summer of 1955. A
roomful of reporters have assembled for a press conference in Caxton Hall, a
red brick building in Westminster. The media have been summoned in a plan
hatched by Bertrand Russell, with some help from the editor of
The Observer
newspaper. The reporters don’t know just why they are here, having only
been told that a team of the world’s leading scientists was ready to release
something of world-wide significance. The press knows that Bertrand Russell is
involved. With Einstein’s recent death, Russell has become the world’s most famous
living intellectual.

Russell has been in his home, hiding, all week. All day long the phone
rings, the doorbell rings. Reporters are trying to find out what is this big
announcement. Russell’s wife and his housekeeper make excuses and shoo the
reporters away.

As the press conference begins, the reporters learn from Russell and
accompanying physicist Joseph Rotblat that they have not been
assembled to hear of some new scientific discovery, but to receive a
prepared, political statement. It’s a fairly brief statement, but it’s been
signed by eleven2 of the world’s leading scientists—nine of them Nobel
laureates. Albert Einstein is among the signatories, signing just days before
he became ill and died.

The document would become known as the Russell–Einstein manifesto.3 I hope that its contents are known to you. It speaks of the existential threat
to mankind posed by nuclear weapons. Its final passage sounds desperately plaintive
as Russell writes:

We appeal, as human beings, to human beings: Remember your humanity, and
forget the rest. If you can do so, the way lies open to a new Paradise; if
you cannot, there lies before you the risk of universal death.4

The reporters ask questions and soon warm to the manifesto’s importance. The
next day, the manifesto is carried as front-page news in most of the world’s
major newspapers. For the next several days, at least, it is the talk of the
world.

The Russell–Einstein manifesto galvanized the peace and disarmament
movements. It led to the Pugwash conferences, for which Joseph Rotblat and
the conference-series itself would eventually win the Nobel Peace Prize
(1995). Rotblat credits the manifesto for helping to create the conditions
that gave rise to the Nuclear Non-Proliferation Treaty (NPT, 1970).5
In his Nobel Peace Prize acceptance speech, Rotblat explains:

From my earliest days I had a passion for science. But science, the exercise
of the supreme power of the human intellect, was always linked in my mind
with benefit to people. I saw science as being in harmony with humanity. I
did not imagine that the second half of my life would be spent on efforts to
avert a mortal danger to humanity created by science.6


Two modes of behaving politically. I begin with the
Russell–Einstein manifesto to remind you of two things: first, that
technical work itself can implicate politics; and second, that some
scientists, in response, do take on overtly political roles. These two ways to
behave politically are different (even if, to people like Rotblat, they go hand-in-hand).
Let’s look at each.


Implicit politics. A scientist engages in what I’ll call
implicit
politics by influencing power relations as a byproduct of technical work. Politics
is about power—who has how much of it, and what sort. The nuclear bomb is the
ultimate expression of coercive power; it is politics incarnate. Had Rotblat
shunned every ostensibly political role in his life, his life’s work would still
have been political. Immensely, if implicitly, so.

But we don’t need the specter of mushroom clouds to be dealing with
politically relevant technology: scientific and technical work routinely
implicates politics. This is an overarching insight from decades of work at the
crossroads of science, technology, and society.7 Technological
ideas and technological things are not politically neutral: routinely, they
have strong, built-in tendencies. Technological advances are usefully
considered not only from the lens of how they work, but also
why
they came to be as they did, whom they help, and whom they
harm. Emphasizing the breadth of man’s agency and technological options, and
borrowing a beautiful phrase of Borges, it has been said that
innovation is a garden of forking paths.8

Still, cryptographic ideas can be quite mathematical; mightn’t this
make them relatively apolitical? Absolutely not. That cryptographic work is deeply
tied to politics is a claim so obvious that only a cryptographer could fail to
see it. Thus I’ll devote considerable time to this claim. But let me first speak
of the second way for a scientist to behave politically.


Overt politics. A scientist can engage in overt
politics through the mechanisms of activism and participatory democracy. In writing
the Russell–Einstein manifesto and in rolling it out the way he did, Russell
was working masterfully in this domain. Russell was not
only a mathematician: he had broad contributions across philosophy,
had won the Nobel prize in literature, and was a well-known social critic and
anti-war activist.


The ethic of responsibility. Bertrand Russell’s breadth was
extraordinary. But the mere existence of the politically engaged
intellectual doesn’t suggest that this pairing is at all representative. To
what extent are
scientists and engineers socially engaged? And to what extent do societal norms
demand that they be?9

Nowadays, an ethic of responsibility is preached in university
courses and advocated by professional organizations. It is the doctrinal
view. The putative norm contends that scientists and engineers have an
obligation to select work that promotes the social good (a
positive right), or, at the very least, to refrain from work that
damages mankind or the environment (a
negative right).10 The obligation stems from three basic
truths: that the work of scientists and engineers transforms society; that
this transformation can be for the better or for the worse; and that what we
do is arcane enough that we bring an essential perspective to public
discourse. The socially engaged scientist is expected to bring a normative
vision for how work in his or her field should impact society. He or
she aims to steer things in that direction.

To be sure, decision making under the ethic of responsibility is not easy.
It can be impossible to foresee if a line of work is going to be used for
good or for ill. Additionally, the for-good-or-for-ill dichotomy can be
simplistic and subjective to the point of meaninglessness. Still, despite
such difficulties, the socially engaged scientist is supposed to
investigate, think, and decide what work he will or will not do, and what
organizations he will or will not work for. The judgment should be made
without over-valuing one’s own self-interest.


Historical events shaping the ethic of responsibility. The ascendancy
of the ethic of responsibility was shaped by three historical events of World
War 2 and its aftermath.

1. The first, already touched on, was the experience of the
atomic scientists. After the war, with science left in a position both
revered and feared, prominent physicists became public figures. Some became
outspoken in their advocacy for peace, or their opposition to further
weapons development. Recall the widespread objections of physicists to
Reagan’s Strategic Defense Initiative (SDI)11 or Hans Bethe’s
famous letter to Bill Clinton where he argued against another round of U.S.
nuclear-weapons development.12 A willingness to
speak truth to power13 became a tradition among
physicists—one that, I think, continues to shape physicists’ identity.14

As an example, recall the pepper-spray incident of 2011 at my own campus,
the University of California, Davis.15 Carrying out the
Chancellor’s instructions to clear “Occupy” protesters, police officer John
Pike pepper-sprayed students who sat, arms linked, on the university’s
central quad. Videos of the event went viral,16 while memes of
Officer Pike casually pepper-spraying anything became a second
Internet sensation. But the observation I’d like to make is that, in the
aftermath of the incident, the
only UCD department outside the humanities to condemn the
Chancellor or call for her resignation was Physics.17 The
Chancellor was mystified. She understood the strong reaction from our
(underfunded and politically liberal) English department, but she didn’t
anticipate complaints from a (well-funded and generally conservative)
Physics department.18 What the Chancellor might not have internalized
is that physicists retain a post-war legacy not only of snuggling up close to
power, but also of nipping at its ankles.

2. A second historical event that helped shape the post-war
view of moral responsibility was the Nuremberg trials (1945–1946). While the
defense repeatedly proffered that the accused were simply following orders,
this view was almost universally rejected: following orders did not
efface legal or moral culpability. The Nuremberg trials began with the
Medical Case, the prosecution of 23 scientists, physicians, and other senior
officials for gruesome and routinely fatal medical experiments on prisoners.19

Years later, as though in sequel, the world would watch in nervous
fascination the trial of Adolf Eichmann (1961). Hannah Arendt’s
controversial portrayal of Eichmann would come to be formative in shaping
our understanding of what, ethically, had transpired during the Holocaust.
She wrote of the utter ordinariness of the man.20
Arendt’s book on the trial, memorably subtitled
The Banality of Evil, would be published the same year (1963) as
Stanley Milgram’s classic experiments on obedience, where Milgram produced
the stunning (and widely repeated) finding that a large fraction of
volunteers would follow a white-coated scientist’s gentle urging to deliver
apparently life-threatening shocks to someone they thought was a fellow test
subject.21

3. Finally, I would mention the rise of the environmental
movement as contributing to the rise of an ethic of responsibility. While
environmentalism dates to the mid-nineteenth century and before, as a
significant social movement, the 1962 publication of Rachel Carson’s
Silent Spring is a milestone. Her book painted a picture of the end
of life not by the drama of nuclear warfare, but by the disappearance of songbirds,
silenced by the routine if oversized activities of chemical manufacturing and
non-specific pesticides.


The good scientist. The three experiences I have just
described implied a
democratization of responsibility. Scientists had to assume
responsibility for what they did, for technology would take us to a very dark
place if they did not. Stripped of ethical restraint, science would bring a world
of nightmare bombs, gas chambers, and macabre human experiments. It would bring
a dying, poisoned world.

And so, in the decades following the war, the ethic of responsibility
became—at least rhetorically—the doctrinal norm. Increasing numbers of
scientists and engineers, as well as their professional organizations, began
to engage on issues of social responsibility. The Pugwash Conferences began
in 1955. The National Society of Professional Engineers adopted a code of
ethics in 1964 that gave primacy to social responsibility. As its first
imperative, the code says that “Engineers, in the fulfillment of their
professional duties, shall hold paramount the safety, health, and welfare of
the public.” Similar language would spread across other codes of ethics,
including those of the ACM and IEEE.22 The Union of Concerned
Scientists was formed at MIT in 1969—the same year a work stoppage at MIT,
coordinated with 30 other universities, enjoyed substantial student,
faculty, and administrative support. It called for a realignment of research
directions away from military pursuits and towards human needs. Computer
Professionals for Social Responsibility (CPSR) began its work opposing the
SDI in 1983.23
That same year, the IACR was founded, its self-described mission not only to
advance the theory and practice of cryptology but also, lest we forget, to serve
the public welfare.24 The Electronic Frontier Foundation (EFF) and
Privacy International (PI) were both formed in 1990, and became effective advocates
in such matters as the defeat of the Clipper Chip. All of this is but a sampling
of the overt politics from scientists and engineers.

Against this backdrop, the figure of the brilliant but humane scientist
became a cultural motif. Jonas Salk had wiped out polio. Einstein became a
cultural icon, one unfazed by the inconvenience of his death. The image of
him sticking out his tongue may be the most widely recognizable photograph
of any scientist, ever. Richard Feynman would be painted in equally colorful
ways, the no-nonsense genius pounding on bongo drums and shoving black
rubbery stuff into ice water. Gene Roddenberry’s Star Trek imagined
a future that featured the scientist–humanist–hero as one team, if not one
individual. Carl Sagan, speaking gently to the camera in episodes of
Cosmos
(1980), seemed the real-life embodiment of this aspirational package.


The ethic of responsibility in decline. And yet, for all I
have said, the scientist or engineer seriously concerned about the social
impact of his work is, I think, so rare as to be nearly a matter of myth.
Never during the cold war, nor in any of the subsequent US wars, did US
companies have difficulty recruiting or retaining the hundreds of thousands
of scientists and engineers engaged in building weapons systems.25
Universities like my own were happy to add their support; the University of California
would, for decades, run the USA’s nuclear weapons design laboratories.26
In nearly 20 years advising students at my university, I have observed that a
wish for right livelihood27
almost never figures into the employment decisions of undergraduate computer
science students. And this isn’t unique to computer scientists: of the five most
highly ranked websites I found on a Google search of
deciding among job offers, not one suggests considering the
institutional goals of the employer or the social worth of what they do.28

Nowadays I ask computer-science faculty candidates to explain their view on
the ethical responsibilities of computer scientists. Some respond like a
deer in headlights, unsure what such a question could even mean. One recent
faculty candidate, a data-mining researcher whose work seemed a compendium
of DoD-funded projects for socially reprehensible aims, admitted that she
felt no social responsibility. “I am a body without a soul,” she earnestly
explained. It was sincere—and creepy.

Stanley Fish, a well-known literary theorist, professor, and dean,
admonishes faculty not to pursue research programs rooted in
values. (His 2012 book is titled Save the World on Your Own Time.)
Fish advises professors to

do your job; don’t try to do someone else’s job…; and don’t let anyone
else do your job. In other words, don’t confuse your academic obligations
with the obligation to save the world; that’s not your job as an academic…

Marx famously said that our job is not to interpret the world, but to
change it. In the academy, however, it is exactly the reverse: our job is
not to change the world, but to interpret it.29

Perhaps such amorality, however revolting, is harmless in Fish’s
intellectual realm: one doesn’t particularly expect literary theory to
change the world. But scientists and engineers do just that. A refusal to
direct the change we do is both morally bankrupt and ungracious. Our work as
academics, we should never forget, is subsidized by society.30

So far I have not said why the post-war ethic-of-responsibility didn’t catch
on. I could give multiple answers, starting with the rise of radical
individualism.31 But I prefer to focus on something else: extreme
technological optimism.


Technological optimism. Technological optimists believe that
technology makes life better. According to this view, we live longer, have more
freedom, enjoy more leisure. Technology enriches us with artifacts, knowledge,
and potential. Coupled with capitalism, technology has become this extraordinary
tool for human development. At this point, it is central to mankind’s mission.
While technology does bring some unintended consequences, innovation itself will
see us through.

Technological optimism animates everyone from school children to Turing
Award winners. Accepting his 2012 Turing Award, Silvio Micali said that

Computer science is marking an epical change in human history. We are
conquering a new and vast scientific continent. … Virtually all areas of
human activity… [and] virtually all areas of human knowledge… are benefiting
from our conceptual and technical contributions. … Long live computer
science!32

If you’re a technological optimist, a rosy future flows from the wellspring
of your work. This implies a limitation on ethical responsibility. The
important thing is to do the work, and do it well. This even becomes a moral
imperative, as the work itself is your social contribution.

But what if computer science is not benefiting man? Technological
pessimists like Jacques Ellul, Herbert Marcuse, and Lewis Mumford certainly
didn’t think that it was. They saw modern technology as an interlocking,
out-of-control system that, instead of fulfilling human needs, engendered
pointless wants and deadlier weapons. Man is becoming little more than the
sex organs of the machine world.33

Taking a less “extreme” view,34 technological contextualists35 acknowledge the concerns of the pessimists, but emphasize man’s essential agency
and the malleability of technology. Contextualism dominates the dialectic of
technology studies.

The ethic of responsibility is always paired with the contextualist view of
sociotechnology. At some level, this must be so: a normative need vanishes
if, in the garden of forking paths, all paths lead to good (or, for that
matter, to bad). But it is technological optimism that most people buy into,
especially scientists and engineers. And unbridled technological optimism
undermines the basic need for social responsibility.


Conclusion to part 1. Ultimately, I think the post-war turn
towards social responsibility in science and engineering was less a turn
than a sideways glance. While the rhetoric of responsibility would provide
cover from technology’s critics, few scientists or engineers would ever come
to internalize that their work embodied socially relevant values. If
researchers like us were actually supposed to know or care about this stuff in
any operationally significant way, well, I think we didn’t get the memo.

So let me retransmit it. It says that your moral duties extend beyond the
imperative that you personally do no harm: you have to try to promote the
social good, too. Also, it says that your moral duties stem not just from
your stature as a moral individual, but, also, from the professional
communities to which you belong: cryptographer, computer scientist,
scientist, technologist.

With few exceptions, the atomic scientists who worked on disarmament were
not the same individuals as those who built the bomb. Their
colleagues—fellow physicists—did that. Cryptographers didn’t turn the
Internet into an instrument of total surveillance, but our colleagues—fellow
computer scientists and engineers—did that. And cryptographers have some capacity to help.

But you will only believe that claim if you recognize that cryptography can
influence power relations. I suspect that many of you see no real connection
between social, political, and ethical values and what you work on. You don’t
build bombs, experiment on people, or destroy the environment. You don’t spy
on populations. You hack math and write papers. This doesn’t sound ethically
laden. I want to show you that it is.

Part 2: The political character of cryptographic work

Scientist or spy? There’s an irony in discussing the claim that
cryptographic work is political, and it is this: to someone unconnected to the
field, and also to the crypto-hobbyist, the claim may seem obviously true. But
to the young researcher who spends his life writing papers in cryptography, the
claim may seem just as obviously false. What gives?

The outsider’s view of cryptography might be based on cinematic portrayals.
Films like Sneakers (1992), Pi (1998), A Beautiful Mind (2001), Enigma
(2001), Traveling Salesman (2012), Citizenfour (2014), and The Imitation
Game (2014) depict cryptography as a field intertwined with politics.
Cryptographers are the brilliant and handsome mathematicians that power
needs to have working on its side. We are, I am happy to report, heroic
geniuses. A little crazy, to be sure, but that just adds to the luster.

Similarly, the crypto hobbyist may have read historical accounts
dealing with cryptography, like the books of James Bamford or David Kahn.36
Such accounts demonstrate that, historically, cryptography is about
power. It’s a realm in which governments spend enormous sums of money,37
and maybe not unwisely: the work shapes the outcome of wars, and undergirds diplomatic
and economic maneuvering.38

Yet no academic cryptographer would confuse historical or fictional accounts
of cryptography with what we actually do. Our discipline investigates
academic problems that fall within our disciplinary boundaries. Pick up a
Springer proceedings volume or browse ePrint papers and our field looks utterly non-political. If power is anywhere in the picture, it is in the abstract
capacities of notional adversaries39 or, in a different branch of
our field, the power expenditure, measured in watts, for some hardware. We work
on problems that strike us as interesting or scientifically important. We’re
not aiming to advance the interests of anything but science itself (or, perhaps,
one’s own career).

So distinct claims about cryptography’s connectedness to power stem, at
least in part, from radically different archetypes of what the cryptographer
is: scientist or spy. The NSA/GCHQ employee who hacks Gemalto to implant
malware and steal SIM keys40 is just as deserving of being called
a “cryptographer” as the MIT-trained theorist who devises a new approach for
functional encryption. Both are dealing in questions of privacy, communications,
adversaries, and clever techniques, and we would do well to emphasize these commonalities
if we want to see our disciplinary universe in context or nudge it towards greater
relevance.


Academic cryptography used to be more political. The ascendance
of a new cryptographer archetype—the academic cryptographer—fails to really explain
our politically detached posture. For one thing, academic cryptographers were
once more concerned with our field’s sociopolitical dimensions. Some even came
to cryptography for such reasons. Consider, for example, this fragment of Whit
Diffie’s testimony at the Newegg trial. Speaking of his wife, Diffie says:

I told her that we were headed into a world where people would have
important, intimate, long-term relationships with people they had never met
face to face. I was worried about privacy in that world, and that’s why I
was working on cryptography.41

Diffie and his advisor, Martin Hellman, have long evinced a concern for
sociopolitical problems touching technology. You see it in their criticism
of DES’s key length,42 in Hellman’s activism on nuclear
disarmament,43 in Diffie’s book on the politics of wiretapping
with Susan Landau,44 and in his co-invention of forward secrecy.45
You see it in the New Directions paper:46
when the authors boldly begin “We stand today on the brink of a revolution in
cryptography,” the anticipated revolution was not, at least primarily, the theory
community bringing forth mind-altering notions of provable security, simulatability,
or multiparty computation.47 The authors were interested in
technological changes that were transpiring, and concomitant social
opportunities and needs.48

Still more ostensibly political is David Chaum’s body of scientific work,
which thoroughly embeds concerns for democracy and individual autonomy.
Chaum’s 1981 paper,49 Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms [Chaum81], suggests that a crucial privacy goal when sending an email is
to hide who is communicating with whom. The metadata, in modern
political parlance. The author offered mix nets as a solution.50

Chaum would go on to provide the founding ideas for anonymous electronic
cash and electronic voting. His papers would routinely draw on overtly
political motivations.51 In a recent conversation, Chaum
expressed surprise at the extent to which academics gravitated to a
field—cryptography—so connected to issues of power.52


Stripping out the politics. But as academics gravitated to cryptography,
they tended to sanitize it, stripping it of ostensible connectedness to power.
Applied and privacy-related work drifted outside of the field’s core venues,
the IACR conferences. It is as though a chemical synthesis had taken place,
transforming this powerful powder into harmless dust.

Consider that there is now a conference named “Real World Cryptography”
(RWC).53 There is humor—but maybe gallows humor—that a field with
a genesis and capability as real-world as ours should find reason to create
a venue so named.54 Ask a colleague in Graphics or Cloud
Computing how it would fly in their community if someone started a conference
called Real World Computer Graphics (RWCG 2015) or Real World Cloud Computing
(RWCC 2016). They will laugh.

An especially problematic excision of the political is the marginalization
within the cryptographic community of the secure-messaging problem,55 an instance of which was the problem addressed by [Chaum81].
Secure-messaging is the most fundamental privacy problem in cryptography:
how can parties communicate in such a way that nobody knows who said what.
More than a decade after the problem was introduced, Rackoff and Simon would
comment on the near-absence of attention being paid to it.56
Another 20-plus years later, the situation is this: there is now a mountain of
work on secure-messaging, but it’s unclear what most of it actually
does. A recent systematization-of-knowledge article57
paints a picture of a cryptographic task enjoying a flourishing of ad hoc solutions,
but little of it arising from the cryptographic community, as narrowly construed,
or tied to much theory.58 While one could certainly claim that this
is true for almost all practical security goals that employ cryptography, I think
the case is different for secure-messaging: here the work feels almost intentionally
pushed aside.


Children of [Chaum81] and [GM82]. Why would I make such a
claim? An illuminating case study is provided by comparing the venues of the
most cited papers citing [Chaum81] and [GM82], Goldwasser and Micali’s
Probabilistic Encryption.59 The two papers appeared
around the same time and have comparable citation counts.60

The [GM82] paper put forward the definitionally centered, reduction-based
approach to dealing with cryptographic problems. It became a seminal work of
the cryptographic community. The most cited papers that cite it appear in
Crypto and Eurocrypt, FOCS, STOC, and ACM CCS.61 The [Chaum81]
paper put forward the secure email problem and suggested a solution. This
paper would be just as seminal—but in spawning work mostly outside the core
cryptographic community. The ten most cited papers that cite Chaum’s paper
appear in venues that, mostly, I had never heard of. Venues not
crypto-focused, like MobiSys and SIGOPS. In fact, the venues for the ten
most cited papers citing [GM82] and the venues for the ten most cited papers
citing [Chaum81] have void intersection. I find this fairly remarkable. It
reflects a research community split into fragments that include a GM-derived
one and a Chaum-derived one, the second fragment not really being a part of
the cryptographic community at all.62

Why did this fragmentation occur? The most obvious explanation has to do
with rigor: [GM82] offered a mathematically precise approach to its subject,
while [Chaum81] did not. So a partitioning might seem to make sense:
cryptographic work that can be mathematically formal goes to the right; ad
hoc stuff, over to the left.

The problem with this explanation is that it’s wrong. The [Chaum81] paper
supports rigor just fine. Indeed provable security would eventually catch up
to mix nets, although the first definition would take more than 20 years to
appear (2003), in a paper by Abe and Imai.63 That the [Chaum81]
paper itself didn’t provide a formal treatment says nothing about the
formalizability of the problem or what communities would later embrace it;
after all, Diffie and Hellman’s paper64 only informally described
trapdoor permutations, public-key encryption, and digital signatures, but
all would be absorbed into the cryptographic fold. Now one might well
counter that the problem addressed by [Chaum81] is more difficult to
formalize than any of the examples just named. That’s true. But it’s simpler
to formalize than MPC,65 say, which would quickly gain entrée and
stature in the cryptographic community—even
without definitions or proofs. So, ultimately, neither
formalizability nor complexity goes far to explain why secure-messaging
has been marginalized.

A better answer (but by no means the only answer) is made obvious by
comparing the introductions to the most-cited papers citing [GM82] and the
most-cited papers citing [Chaum81]. Papers citing [GM82] frame problems
scientifically. Authors claim to solve important technical questions. The
tone is assertive, with hints of technological optimism. In marked contrast,
papers citing [Chaum81] frame problems socio-politically. Authors speak
about some social problem or need. The tone is reserved, and explicitly
contextualist views are routine. One observes the exact same distinction in
tone and stated motivations when comparing survey articles.66

In 2015, I attended PETS (Privacy Enhancing Technologies Symposium) for the
first time. Listening to people in this community interact is a bit like
watching the cryptographic community through a lens that magically inverts
most things. The PETS community attends closely to the values embedded in
work. They care about artifacts that support human values. They aim to serve
the users of those artifacts. They’re deeply concerned with the politics and
technology of surveillance. Where, after Chaum, did the moral soul of
academic cryptography go? Maybe it moved to PETS.

There is a lesson in all this. Some might think that a community’s focus is
mostly determined by the technical character of the topic it aims to study.
It is not. It is extra-scientific considerations that shape what gets
treated where.


The cypherpunks. Now there is a group that has long worked
at the nexus of cryptography and politics: the cypherpunks.67 The
cypherpunks emerged in the late 1980s, unified by a mailing list and some
overlapping values. The core belief is that cryptography can be a key tool
for protecting individual autonomy threatened by power.68

The cypherpunks believed that a key question of our age was whether state
and corporate interests would eviscerate liberty through electronic
surveillance and its consequences, or if, instead, people would protect
themselves through the artful use of cryptography. The cypherpunks did not
seek a world of universal privacy: many wanted privacy for the individual,
and transparency for the government and corporate elites. The cypherpunks
envisioned that one could hack power relations by writing the right code.
Cypherpunk-styled creations—think of Bitcoin, PGP, Tor, and WikiLeaks—were
to be transformative because they challenge authority and address basic
freedoms: freedom of speech, movement, and economic engagement.69

Exactly how such tools are to shape society is not always obvious.
Consider WikiLeaks. The hope is not just that a better informed public will
demand accountability and change. Rather, Assange sees governmental and
corporate abuse as forms of conspiracy that could be throttled by the mere
threat of leaks. Conspiracies are like graphs: the conspirators are the nodes; the
pairwise relations among them, the edges. Instead of removing nodes or
disrupting links, you can weaken any conspiracy by suffusing it in an
ever-present threat of leaks. The more unjust the conspiracy, the more
likely leaks will occur, and the more damage they will do. As elites become
fearful to conspire, they do so reservedly. The conspiratorial creature’s
blood thickens and it dies.70 It is a fascinating vision.

It is cypherpunks, not cryptographers, who are normally the strongest
advocates for cryptography. Julian Assange writes:

But we discovered something. Our one hope against total domination. A hope
that with courage, insight and solidarity we could use to resist. A
strange property of the physical universe that we live in.

The universe believes in encryption.

It is easier to encrypt information than it is to decrypt it.

We saw we could use this strange property to create the laws of a new
world.71

Similarly, Edward Snowden writes:72

In words from history, let us speak no more of faith in man, but bind him
down from mischief by the chains of cryptography.73

When I first encountered such discourse, I smugly thought the authors were
way over-promising: they needed to tone down this rhetoric to be accurate. I
no longer think this way. More engaged in implementing systems than I’ll
ever be, top cypherpunks understand more than I about insecure operating
systems, malware, programming bugs, subversion, side channels, poor
usability, small anonymity sets, and so on. Cypherpunks believe that despite
such obstacles, cryptography can still be transformative.


Cryptography favors whom? Cypherpunk discourse seems
sometimes to assume that cryptography will benefit ordinary people. But one
has to be careful here. Cryptography can be developed in directions that
tend to benefit the weak
or the powerful. It can also be pursued in ways likely to benefit nobody
but the cryptographer. Let’s look at some examples.


Encryption. One reason people might assume cryptography to benefit
the weak is that they’re thinking of cryptography as conventional encryption.
Individuals with minimal resources can encrypt plaintexts in a manner that even
a state-level adversary, lacking the key, won’t be able to decrypt.

But does it necessarily come out that way? To work, cryptographic primitives
must be embedded into systems, and those systems can realize arrangements of
power that don’t trivially flow from the nature of the tool. In his
typically pithy way, Schneier reminds people that “encryption is just a
bunch of math, and math has no agency.”74 If a content-provider
streams an encrypted film to a customer who holds the decryption key locked
within a hardware or software boundary she has no realistic ability to
penetrate,75 we’ve empowered content providers, not users. If we
couple public-key cryptography with a key-escrow system that the FBI and NSA
can exploit, we empower governments, not people.76

All that said, I do believe it accurate to say that conventional encryption
does embed a tendency to empower ordinary people.
Encryption directly supports freedom of speech. It doesn’t require expensive
or difficult-to-obtain resources. It’s enabled by a thing that’s easily
shared. An individual can refrain from using backdoored systems.77
Even the customary language for talking about encryption suggests a worldview
in which ordinary people—the world’s Alices and Bobs—are to be afforded the opportunity
of private discourse. And coming at it from the other direction, one has to
work
to embed encryption within an architecture that props up power, and one may encounter
major obstacles to success. The Clipper Chip completely failed. Trusted Computing
mostly did.78


IBE.79 What about identity-based encryption,
IBE? The setting was proposed by Shamir, with Boneh and Franklin, years
later, providing a satisfying, provably secure realization.80 The
aim is to allow a party’s email address, for example, to serve as his public
key. So if Alice already knows the email address for Bob, she won’t need to obtain
his public key to send him an encrypted message: she just encrypts under Bob’s
email address.

But this convenience is enabled by a radical change in the trust model:
Bob’s secret key is no longer self-selected. It is issued by a trusted
authority. That authority knows everyone’s secret key in the system. IBE
embeds key escrow—indeed a form of key escrow where a single entity
implicitly holds all secret keys—even ones that haven’t yet been issued. And
even if you do trust the key-generating authority, a state-level adversary
now has an extremely attractive locus to subpoena or subvert. In the end, from
a personal-privacy point of view, IBE might seem like an enormous leap backwards.
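
Concretely, here is a compressed sketch of the Boneh–Franklin “BasicIdent” scheme (a rendering with parameters, hash functions, and security details omitted; e is a bilinear pairing from G1 × G1 to G2, P a generator of G1, H1 and H2 hash functions, and s the PKG’s master secret):

  \begin{aligned}
  \textbf{Setup:}\quad   & \text{PKG picks master secret } s \text{ and publishes } P,\ P_{\mathrm{pub}} = sP\\
  \textbf{Extract:}\quad & d_{\mathrm{ID}} = s \cdot H_1(\mathrm{ID})\\
  \textbf{Encrypt:}\quad & U = rP,\qquad V = M \oplus H_2\big(e(H_1(\mathrm{ID}),\, P_{\mathrm{pub}})^{\,r}\big)\\
  \textbf{Decrypt:}\quad & M = V \oplus H_2\big(e(d_{\mathrm{ID}},\, U)\big)
  \end{aligned}

Correctness is just bilinearity: e(H_1(ID), P_pub)^r = e(s · H_1(ID), rP) = e(d_ID, U). The same equation shows that every user’s decryption key is a deterministic function of the single master secret s, which is exactly where the escrow lives.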

Descriptions of IBE don’t usually emphasize the change in trust model.81 And the key-issuing authority seems never to be named anything like that: it’s
just the PKG, for Private Key Generator. This sounds more innocuous than it is,
and more like an algorithm than an entity. In papers, the PKG further recedes
from view because it is the tuple of algorithms, not the entities that one imagines
to run them, that grounds formal definitions and proofs.

To be clear, I am not condemning IBE as some sort of fascist technology.
That sounds silly. Nor am I suggesting that IBE can’t be refined in ways to
make the trust model less authoritarian.82 Yet one can easily see
the authoritarian tendency built into IBE. And technologies, while adaptable,
are not infinitely so. As they evolve, they tend to retain their implicit orientations.


Differential privacy. Let’s consider differential privacy.83
Dwork says that ϵ-differential privacy "addresses concerns that
any participant might have about the leakage of her personal information:
even if the participant removed her data from the data set, no outputs…
would become significantly more or less likely."84
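
One standard mechanism achieving this guarantee adds noise calibrated to a query’s sensitivity. The sketch below is illustrative only (the data and names are hypothetical): it answers a counting query, which has sensitivity 1, by adding Laplace(1/ϵ) noise.

  import random

  def laplace(scale):
      # The difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale).
      return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

  def dp_count(records, predicate, epsilon):
      """Release a count under epsilon-differential privacy.

      Adding or removing one record changes a count by at most 1
      (sensitivity 1), so Laplace(1/epsilon) noise gives
      Pr[M(D) in S] <= exp(epsilon) * Pr[M(D') in S] for neighboring D, D'.
      """
      true_count = sum(1 for r in records if predicate(r))
      return true_count + laplace(1.0 / epsilon)

  # A hypothetical curator releases a noisy count instead of the true one.
  ages = [34, 29, 41, 52, 38, 61, 27]
  print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))

Nothing in the sketch constrains who the curator is or what data it holds; the mechanism itself is indifferent to that.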

At some level, this sounds great: don’t we want to protect
individuals from privacy-compromising disclosures from corporate or
governmental datasets? But a more critical and less institutionally friendly
perspective makes this definitional line seem off.85 Most
basically, the model implicitly paints the database owner (the curator) as
the good guy, and the users querying it, the adversary. If power would just
agree to fudge with the answers in the right way, it would be fine for it to
hold massive amounts of personal data about each of us. But the history of
data-privacy breaches suggests that the principal threat to us is from the
database owner itself, and those that gain wholesale access to the data (for
example, by theft or secret government programs). Second, the harm
differential privacy seeks to avoid is conceived of in entirely
individualistic terms. But privacy violations harm entire communities. The
individualistic focus presupposes a narrow conception of privacy’s value.
Finally,86 differential privacy implicitly presupposes that the
data collection serves some public
good. But, routinely, this is a highly contestable claim. The
alternative of less data collection, or no data collection at all, is rarely
even mentioned. In the end, one must compare the reduction in harm actually
afforded by using differential privacy with the increase in harm afforded by
corporations having another means of whitewash, and policy-makers believing,
quite wrongly, that there is some sort of cryptomagic to protect people from
data misuse.

I recently asked an expert in differential privacy, Ilya Mironov, for his
reaction to my harsh critique. He explained that the abstract roles in
differential-privacy settings need not correspond to business relationships
in the obvious way. For example, a privacy-conscious organization might
choose to make its own analysts access sensitive data through an API that
provides some differential-privacy guarantee, effectively treating its own
employees as “adversaries.” Mironov also explained that there are variant
notions of differential privacy that do not implicitly regard the
database owner as good, and those querying it as bad. He described
differential privacy in the local model,87 where everyone
keeps his data to himself. They can distributively compute the responses to queries.
Fundamentally, Mironov explained, the definition of differential privacy is agnostic
to the data model.
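
To make the local model concrete, here is a minimal sketch of randomized response, the classic local-model mechanism (an illustration of the idea, not a description of any deployed system): each user perturbs his own bit before it leaves his device, and the aggregator only debiases the noisy tally.

  import math
  import random

  def randomized_response(truth, epsilon):
      # Report the true bit with probability e^eps / (1 + e^eps), else flip it.
      # The two possible inputs yield any given report with probabilities whose
      # ratio is at most e^eps, so each report is epsilon-differentially private.
      p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
      return truth if random.random() < p_truth else not truth

  def estimate_fraction(reports, epsilon):
      # The aggregator never sees raw bits; it only corrects for the known noise.
      p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
      observed = sum(reports) / len(reports)
      return (observed - (1.0 - p)) / (2.0 * p - 1.0)

  # A hypothetical population: each user runs randomized_response locally.
  truths = [random.random() < 0.3 for _ in range(100_000)]      # true fraction ~0.3
  reports = [randomized_response(t, epsilon=1.0) for t in truths]
  print(estimate_fraction(reports, epsilon=1.0))                # close to 0.3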

While everything explained makes good sense, I don’t think it changes the
landscape. No actual mechanism can be agnostic to what data resides where.
And at the point when a data-mining architecture and mechanism is laid down,
considerations of efficiency, familiarity, and economics—not to mention
authorities’ fundamental desire to have and to hold the data—make it easy to
predict what will happen: almost always, a centralized design will emerge.
To me, differential privacy may be as authoritarian in its conceptual
underpinnings as IBE.


FHE and iO. Ever since Craig Gentry’s groundbreaking work,88
fully homomorphic encryption (FHE) has been a target of enormous intellectual
capital. In brief, FHE allows you to outsource your data, encrypted under a public
key of your choosing, to a service provider. Later, you can ask that party whatever
you’d like about your plaintext. The service provider computes the encrypted
answer, not knowing what it means. This is returned to you for decryption.
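
The workflow itself is easy to sketch. No production-grade FHE fits in a few lines, so the toy below (my own illustration, not Gentry’s construction) uses the Paillier cryptosystem, which is only additively homomorphic, as a stand-in; the point is merely the shape of the protocol, in which the server computes on ciphertexts it cannot read.

  import math
  import secrets

  def keygen(p=1009, q=1013):                        # toy primes; real use needs ~1024-bit primes
      n = p * q
      lam = math.lcm(p - 1, q - 1)
      g = n + 1
      mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
      return (n, g), (lam, mu, n)                    # (public key, secret key)

  def encrypt(pk, m):
      n, g = pk
      while True:
          r = secrets.randbelow(n - 1) + 1
          if math.gcd(r, n) == 1:
              break
      return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

  def decrypt(sk, c):
      lam, mu, n = sk
      return ((pow(c, lam, n * n) - 1) // n) * mu % n

  def add_encrypted(pk, c1, c2):
      # The "server" step: adds the underlying plaintexts without the secret key.
      n, _ = pk
      return (c1 * c2) % (n * n)

  pk, sk = keygen()                                  # the client keeps sk; the server sees only pk
  c1, c2 = encrypt(pk, 52_000), encrypt(pk, 48_000)  # the client outsources encrypted values
  c_sum = add_encrypted(pk, c1, c2)                  # the server computes on ciphertexts
  assert decrypt(sk, c_sum) == 100_000               # only the client can read the answer

FHE promises the same picture with arbitrary computation in place of addition; whether that promise will ever be made practical is exactly what is at issue below.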

From a political perspective, FHE sounds empowering—even utopian. The
powerful party, say a cloud service provider, is denied access to your data.
You sidestep the Faustian bargain that routinely underlies cloud computing.89

But the analysis above is specious. It is quite speculative whether FHE will ever
evolve into something practically useful. If you want to assess the
political leanings of something of speculative utility, you shouldn’t just
assume that it will give rise to the touted applications, and then try to
see who would win and who would lose. It’s too conjectural. It is better to
focus on how the pursuit changes us in the here and now.

And on that, I would say that FHE, along with iO,90 has
engendered a new wave of exuberance. In grant proposals, media interviews,
and talks, leading theorists speak of FHE and iO as game-changing
indications of where we have come.91 Nobody seems to emphasize just
how speculative it is that any of this will ever have any impact on practice.
Nor do people emphasize our vanishing privacy, our lousy computer security, or
how little modern cryptography has really done to change this landscape. And
this has consequences. (a) It misleads the public about where we stand. (b) It
shifts financial resources away from areas more likely to have social utility.
(c) It encourages bright young researchers to work on highly impractical directions.
(d) And it provides useful cover to the strongest opponents of privacy: defense
and intelligence agencies.

Let me expand on the last claim. Here is what DARPA Program Director Dan
Kaufman had to say about FHE in a 2014 interview:

Imagine a future that says: OK, I have to collect everything for big data
to work because if I knew what wasn’t relevant it wouldn’t be big data.
But I don’t want the government to just willy-nilly look through my
emails: that feels creepy. …

So this guy, Craig Gentry, … showed that you could … take a piece of data,
encrypt it, send it down the wire, never decrypt it, [but] still perform
[computation] … on it. It sounds crazy, except he showed you can do it ….

You could imagine the following: … [Organizations] collect … data but only
in … encrypted form …. Now let’s say you believe there is a bad guy hiding
somewhere in this encrypted data. So, I come up with a bunch of search
terms …. I could then go to a court … [and they] could say “yeah, that
looks reasonable.” I put the search into the engine but … all that comes
out is a number: how many people meet that criteria … You go back to the
FISA court, and say O.K. guys, we have 12. … I picture FISA putting in a
key, and then the Agency putting in a key, and they both turn it. And [at]
that point, for the first time, … are those 12 now revealed.92

Of course, it’s utter nonsense. To begin with, there’s no way to make sense
of who holds what key and what data for FHE to even apply. We’re also told:
that we need to collect everything because, if we didn’t, we wouldn’t have
enough data to have lots of data; that the government will be careful, as it
would be “creepy” if they weren’t; that they’ll get court orders—even,
apparently, to discover the number of people in datasets who satisfy some
specified search criteria; and that to get personally identifiable
information, they’ll need to have the cooperation of the NSA and the
FISA court.

Kaufman’s inchoate interview is but a tiny patch of discourse from an ocean
of misdirection on privacy. It doesn’t impugn FHE, but it does suggest how
power aims to use such work: to let them mumble words that sound
privacy-friendly. Providing strong funding for FHE and iO provides risk-free
political cover. It supports a storyline that cloud storage and computing is
safe. It helps entrench favored values within the cryptographic community:
speculative, theory-centric directions. And it helps keep harmless academics
who could, if they got feisty, start to innovate in more sensitive
directions.


Cryptanalysis. Finally, let me briefly mention
cryptanalysis. One might misinterpret the academic cryptanalytic undertaking
as an attack on the privacy of legitimate users—an attack on the inoffensive
Alice and Bob—which would thus seem to favor power.93 But this is
the opposite of the right view. The reason that academic
cryptographers do cryptanalysis is to better inform the designers and users
of cryptosystems about what is and what is not safe to do. The activity is
not done to surveil people, but to help ensure that people are not
surveilled—at least by cryptanalytic means. And the work routinely has exactly
that effect. The history of WEP provides a nice example.94

When the NSA or GCHQ engage in cryptanalysis, it is for a very different
purpose, and it has a very different effect. Does that mean that
cryptanalysis done by one group of people (spooks) will tend to favor
authority, while cryptanalysis done by another group of people (academics)
will tend in the exact opposite direction? It does. The specific work will
be different; its dissemination will be different; and its impact on human
rights will be different.


Unthreateningly engaged. Of course it hasn’t escaped the
notice of intelligence agencies that the majority of the academic
cryptographic community is unthreateningly engaged. In a declassified
trip-report about Eurocrypt 1992, the NSA author opines, for example:95

There were no proposals of cryptosystems, no novel cryptanalysis of old
designs, even very little on hardware design. I really don’t see how things
could have been better for our purposes.

The NSA’s newsletter in which this report appears would never again mention
the academic cryptographic community.96 Nor did any released
Snowden-derived document discuss anything of our community.97
It’s as though we progressed from a band of philosophers98 worth
a few pages of snarky commentary99 to an assemblage too insignificant
even for that.


Conclusion to part 2. A 2013 essay by Arvind Narayanan
suggests a simple taxonomy for cryptographic work:100 there’s
crypto-for-security
and
crypto-for-privacy. Crypto-for-security is crypto for commercial
purposes. It’s the crypto in TLS, payment cards, and cell phones.
Crypto-for-privacy has social or political aims. Here the author
distinguishes between pragmatic crypto—which is about trying to use
cryptography to retain our predigital privacy—and
cypherpunk crypto—the grander hope of using cryptography to
precipitate sweeping social or political reforms. The author suggests that
crypto-for-security has done well, but crypto-for-privacy has fared badly.

I think Narayanan’s division is illuminating, but he fails to mention that
most academic cryptography isn’t really crypto-for-security or
crypto-for-privacy: it is, one could say, crypto-for-crypto—meaning
that it doesn’t ostensibly benefit commerce or privacy, and it’s quite
speculative if it will ever evolve to do either. Perhaps every field
eventually becomes primarily self-referential. Maybe this is even necessary,
to some extent. But for cryptography, much is lost when we become so
inward-looking that almost nobody is working on problems we could help
with that address some basic human need. Crypto-for-crypto starves crypto-for-privacy,
leaving a hole, both technical and ethical, in what we collectively do.

Part 3: The dystopian world of pervasive surveillance

Mass surveillance has motivated the contents of this essay, but is it so
serious a thing? Before the Snowden revelations,101 I myself didn’t
really think so. Environmental problems seemed more threatening to man’s future,
and my country’s endless wars seemed more deserving of moral consternation. It
wasn’t until Snowden that I finally internalized that the surveillance issue
was grave, was closely tied to our values and our profession, and was being quite
misleadingly framed.


Law-enforcement framing. The framing of mass
surveillance determines what one thinks it is about.102
And mass surveillance has been brilliantly framed by authority so as to slant
discourse in a particular and predictable direction. Let me describe what I’ll
call the law-enforcement framing, as regularly communicated by
(U.S.) FBI Director James Comey:103

  1. Privacy is a personal good. It’s about your desire to control personal
    information about you.
  2. Security, on the other hand, is a collective good. It’s about living
    in a safe and secure world.
  3. Privacy and security are inherently in conflict. As you strengthen one,
    you weaken the other. We need to find the right balance.
  4. Modern communications technology has destroyed the former balance. It’s
    been a boon to privacy, and a blow to security. Encryption is especially
    threatening. Our laws just haven’t kept up.104
  5. Because of this, bad guys may win. The bad guys are terrorists,
    murderers, child pornographers, drug traffickers, and money launderers.105 The technology that we good guys use—the bad guys use it too, to escape detection.
  6. At this point, we run the risk of Going Dark.106 Warrants will be
    issued, but, due to encryption, they’ll be meaningless. We’re becoming a country
    of unopenable closets. Default encryption may make a good marketing pitch,
    but it’s reckless design. It will lead us to a very dark place.

The narrative is inconsistent with the history of intelligence gathering,
and with the NSA’s own mission statement.107 Yet the narrative’s
uneasy coexistence with reality hasn’t mattered. It is, in fact, beautifully
crafted to frame matters in a way guaranteed to lead discourse where
authority wants it to go. It is a brilliant discourse of fear: fear of
crime; fear of losing our parents’ protection; even fear of the dark. The
narrative’s well-honed deceptiveness is itself a form of
tradecraft.108


Surveillance-studies framing. Of course there are radically
different ways to frame mass surveillance. Consider the following way to do
so, which follows often-heard thoughts from cypherpunks and surveillance
studies.109

  1. Surveillance is an instrument of power.110 It is part of an apparatus
    of control. Power need not be in-your-face to be effective: subtle, psychological,
    nearly invisible methods can actually be more effective.
  2. While surveillance is nothing new, technological changes have given
    governments and corporations an unprecedented capacity to monitor
    everyone’s communication and movement. Surveilling everyone has become
    cheaper than figuring out whom to surveil, and the marginal cost is now
    tiny.111 The Internet, once seen by many as a tool for
    emancipation, is being transformed into the most dangerous facilitator for
    totalitarianism ever seen.112
  3. Governmental surveillance is strongly linked to cyberwar. Security
    vulnerabilities that enable one enable the other. And, at least in the
    USA, the same individuals and agencies handle both jobs. Surveillance is
    also strongly linked to conventional warfare. As Gen. Michael Hayden has
    explained, “we kill people based on metadata.”113 Surveillance and
    assassination by drones are one technological ecosystem.
  4. The law-enforcement narrative is wrong to position privacy as an
    individual good when it is, just as much, a social good. It is equally
    wrong to regard privacy and security as conflicting values, as privacy enhances security as often as it rubs against it.
  5. Mass surveillance will tend to produce uniform, compliant, and shallow
    people.114 It will thwart or reverse social progress. In a world
    of ubiquitous monitoring, there is no space for personal exploration, and no
    space to challenge social norms, either. Living in fear, there is no genuine
    freedom.
  6. But creeping surveillance is hard to stop, because of interlocking
    corporate and governmental interests.115 Cryptography offers at
    least some hope. With it, one might carve out a space free of power’s reach.

History teaches that extensive governmental surveillance becomes political
in character. As civil-rights attorney Frank Donner and the Church
Commission reports thoroughly document, domestic surveillance under U.S. FBI
director J. Edgar Hoover served as a mechanism to protect the status quo and
neutralize change movements.116 Very few of the FBI’s
surveillance-related efforts were directed at law enforcement: as the
activities surveilled were rarely illegal, unwelcome behavior was met
with sabotage, threats, blackmail, and inappropriate prosecutions instead.
For example, leveraging audio surveillance tapes, the FBI attempted to get
Dr. Martin Luther King, Jr., to kill himself.117
U.S. universities were thoroughly infiltrated with informants: selected students,
faculty, staff, and administrators would report to an extensive network of FBI
handlers on anything political going on on campus. The surveillance of dissent
became an institutional pillar for maintaining political order. The U.S. COINTELPRO
program would run for more than 15 years, permanently reshaping the U.S. political
landscape.118


Our dystopian future. Where mass surveillance leads has
been brilliantly explored in fictional accounts, starting with Yevgeny
Zamyatin’s 1921 novel We
(which inspired Orwell’s 1984). Set in a future of total
surveillance, the denizens of the “One State” have internalized lessons such
as: “we” is from God, and “I” is from the devil; that imagination is
illness; and that the key to ridding man of crime is ridding him of freedom.

But you don’t have to reach to fictional or historical accounts to
anticipate where we are headed. In a 2012 newsletter column, NSA’s “SIGINT
Philosopher,” Jacob Weber, shares his own vision. After failing an NSA
lie-detector test, he says:

I found myself wishing that my life would be constantly and completely
monitored. It might seem odd that a self-professed libertarian would wish
an Orwellian dystopia on himself, but here was my rationale: If people
knew a few things about me, I might seem suspicious. But if people knew
everything about me, they’d see they had nothing to fear.

… A target that119 has no ill will to the U.S., but which is
being monitored, needs better and more monitoring, not less. So if we’re
in for a penny, we need to be in for a pound.120

Shrouded in enormous secrecy and complexity, the basic contours of the
surveillance state are fundamentally unknowable. What is the individual to
do? With everyone’s communications machine-monitored, he knows that he’s a de facto
target. Millions of observations are made of his life. He is analyzed by techniques
he cannot remotely understand. He knows that today’s data, and yesterday’s, will
be scrutinized by tomorrow’s algorithms. These will employ sophisticated natural-language
processing, but probably won’t
actually understand human discourse. With all this, the rational individual
has no choice but to watch what he says, and to try to act like everyone else.

The film Citizenfour (2014) is at its best when it manages to sketch the
shape of this emerging world. One reviewer writes of the film

evoking the modern state as an unseen, ubiquitous presence, an abstraction
with enormous coercive resources at its disposal. …

It is everywhere and nowhere, the leviathan whose belly is our native
atmosphere. Mr. Snowden, unplugging the telephone in his room, hiding
under a blanket when typing on his laptop, looking mildly panicked when a
fire alarm is tested on his floor, can seem paranoid. He can also seem to
be practicing a kind of avant-garde common sense. It’s hard to tell the
difference, and [this] … can induce a kind of epistemological vertigo.
What do we know about what is known about us? Who knows it? Can we trust
them?121

To be more prosaic: I pick up the phone and call my colleague, Mihir
Bellare, or I tap out an email to him. How many copies of this communication
will be stored, and by whom? What algorithms will analyze it—now and in the
future? What other data will it be combined with in an attempt to form a
picture of me? What would trigger a human analyst to get involved? Might my
call or email contribute to a tax audit, a negative grant-funding decision,
some Hoover-style dirty tricks, or even an assassination? There is not a
single person who knows the answer to these questions, and those who know
most aren’t about to tell.


Conclusion to part 3. Ultimately, I’m not much interested in
individual grievances over privacy; I am far more concerned with what surveillance
does to society and human rights. Totalized surveillance vastly diminishes the
possibility of effective political dissent. And without dissent, social progress
is unlikely.

Consider an event like the 1971 burglary of the FBI branch office in Media,
Pennsylvania.122 With the degree of surveillance we now live
under, the whistleblowers—beginning with that feisty physics professor who
led the effort123—would have been promptly arrested, and even charged
with espionage. They would have spent years in prison, or even faced
execution. Facing such outcomes and odds, the activists would not have
attempted their daring burglary. In an essay that focuses on remedies for
excessive surveillance, Richard Stallman asks

Where exactly is the maximum tolerable level of surveillance, beyond which
it becomes oppressive? That happens when surveillance interferes with the
functioning of democracy: when whistleblowers (such as Snowden) are likely
to be caught.124

Online and telephone surveillance already results in the imprisonment of
political dissidents around the world,125 and it undergirds my
own country’s drone-assassination program.126 In the U.S.,
Miami-model policing127 has made attending political protests (or
just being near one in your car, or with your phone) an intimidating
proposition. With journalists’ communications routinely monitored,
investigative journalism is under attack.128 Is democracy or social
progress possible in such an environment?

But, despite all these arguments, I am skeptical about rationalist accounts
of ethical affronts, be it mass surveillance or anything else. If we behave
morally, it is not because of rational analyses, but because of an instinctual
preference for liberty, empathy, or companionship.129 As Schneier
points out, animals don’t like to be surveilled because it makes them feel
like prey, while it makes the surveillor feel like—and act like—a predator.130 I think people know at an instinctual level that a life in which our thoughts,
discourse, and interactions are subjected to constant algorithmic or human monitoring
is no life at all. We are sprinting towards a world that we know, even without
rational thought, is not a place where man belongs.

Part 4: Creating a more just and useful field

What can we cryptographers realistically do to collectively up our
contribution to crypto-for-privacy? I claim no easy answers. I can offer
only modest ideas.


Secure messaging in the untrusted-server model. Problem
selection is the most obvious factor in determining our community’s impact,
and secure messaging, in all its forms, remains the most outstanding problem
in crypto-for-privacy. While mix nets, onion routing, and DC nets have all
proven to be highly useful,131
it is not too late to be thinking about new architectures for secure communications.

Consider the following problem, which is inspired by Pond and the PANDA
protocol that it can use.132 The aim is similar to Adam Langley’s
Pond protocol: to create an alternative to email or instant messaging but where
“big brother” is unable to figure out who is communicating with whom. Unlike
Pond, I don’t want to rely on Tor, for we seek security in the face of a global,
active adversary (as well as a clean, provable-security treatment). Tor can always
be layered on top, as a heuristic measure, to hide system participants.

The intent is this. Pairs of people who want to communicate are assumed to
initially share a password. They won’t directly talk with one another;
rather, all communications will go through an untrusted server.
First, parties upgrade their shared password to a strong key with an
anonymous rendezvous
protocol. Thereafter, the sender can deposit a (constant-length) encrypted message
at the server. When a party wants to retrieve his ith message,
he’ll interact with the same server, which gives him a string computed from
the database contents. The value permits the receiver to recover the
intended message—or, alternatively, an indication that there is no such
ith message for him. But, throughout, all the server ever sees are
parties depositing random-looking strings to the server, and parties
collecting random-looking strings from the server, these computed by
applying some non-secret function to the server’s non-secret database.
Neither the server nor an active, global adversary can figure out who has
communicated with whom, or even whether a communication has taken place.
The goal is to do all this as efficiently as possible—in particular, much
more efficiently than the server just handing each recipient its entire
database of encrypted messages.
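
To make the shape of the scheme concrete, here is a minimal Python sketch of the
deposit side only. It is my illustration, not the protocol under development: the
anonymous rendezvous step is replaced by an ordinary password hash, HMAC-SHA256
stands in for a PRF, a toy keystream stands in for real authenticated encryption,
and the retrieval shown is the naive tag lookup that the actual goal, private
retrieval far cheaper than downloading the whole database, is meant to replace.

import hashlib
import hmac
import secrets

BLOCK = 1024  # every deposit is exactly this many bytes of ciphertext

def shared_key(password: bytes, salt: bytes) -> bytes:
    # Stand-in for the anonymous rendezvous protocol: an ordinary password hash.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

def tag(key: bytes, seq: int) -> bytes:
    # Pseudorandom label under which the seq-th message is filed at the server.
    return hmac.new(key, b"tag|%d" % seq, hashlib.sha256).digest()

def keystream(key: bytes, seq: int, n: int) -> bytes:
    # Toy keystream; a real scheme would use authenticated encryption.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hmac.new(key, b"ks|%d|%d" % (seq, ctr), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def deposit(key: bytes, seq: int, msg: bytes):
    # Pad to a constant length so all deposits look alike.
    assert len(msg) < BLOCK
    padded = msg + b"\x80" + b"\x00" * (BLOCK - len(msg) - 1)
    ct = bytes(a ^ b for a, b in zip(padded, keystream(key, seq, BLOCK)))
    return tag(key, seq), ct

def retrieve(key: bytes, seq: int, server_db: dict):
    # NOT private retrieval: asking for a tag reveals which deposit is read.
    # The actual goal is for the server's reply to hide this, at low cost.
    ct = server_db.get(tag(key, seq))
    if ct is None:
        return None
    padded = bytes(a ^ b for a, b in zip(ct, keystream(key, seq, BLOCK)))
    return padded[: padded.rfind(b"\x80")]

if __name__ == "__main__":
    salt = secrets.token_bytes(16)        # assume both parties already know this
    k = shared_key(b"correct horse", salt)
    server_db = {}                        # the untrusted server's entire state
    t, c = deposit(k, 0, b"meet at noon")
    server_db[t] = c
    print(retrieve(k, 0, server_db))

Everything the server stores, tags and blobs alike, looks random; the open problem
is the last step, letting a receiver learn its ith message without the server or a
global observer learning which deposit was read, and at a cost far below shipping
the whole database.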

In ongoing work, colleagues and I are working out a provable-security
treatment for the approach above. It uses conventional, game-based
definitions, not the fuzzy concepts or vocabulary from much of the anonymity
literature.133 We hope that the anonymous messaging in this untrusted-server
model will eventually prove practical for the high-latency setting. We will see.


Bigkey cryptography. Let me next describe some recent work
by Mihir Bellare, Daniel Kane, and me that we call
bigkey cryptography.134

The intent of bigkey cryptography is to allow cryptographic operations to
depend on enormous keys—megabytes to terabytes long. We want our keys so
long that it becomes infeasible for an adversary to exfiltrate them. Yet
using such a bigkey mustn’t make things slow. This implies that, with each
use, only a small fraction of the bigkey’s bits will be inspected.

The basic idea is not new: the concept is usually referred to as security in
the
bounded-retrieval model.135 But our emphasis is new: practical
and general tools, with sharp, concrete bounds. We have no objection to using
the random-oracle model to achieve these ends.

Suppose you have a bigkey $\mathbf{K}$. You want to use it for some protocol $P$
that has been designed to use a conventional-length key $K$. So choose a random
value $R$ (maybe 256 bits) and hash it to get some number $p$ of probes into the
bigkey:

$$i_1 = H(R, 1) \quad i_2 = H(R, 2) \quad \ldots \quad i_p = H(R, p).$$

Each probe $i_j$ points into $\mathbf{K}$: it’s a number between 1 and
$|\mathbf{K}|$. So you grab the $p$ bits at those locations and hash them, along
with $R$, to get a derived key $K$:

$$K = H'(R, \mathbf{K}[i_1], \ldots, \mathbf{K}[i_p]) = \mathrm{XKEY}(\mathbf{K}, R).$$

Where you would otherwise have used the protocol $P$ with a shared key $K$, you
will now use $P$ with a shared bigkey $\mathbf{K}$ and a freshly chosen $R$, these
determining the conventional key $K = \mathrm{XKEY}(\mathbf{K}, R)$.

We show that the derived key $K$ is indistinguishable from a uniformly random key
$K'$ even if the adversary gets $R$ and can learn lots of information about the
bigkey $\mathbf{K}$. The result is quantitative, measuring how good the derived
key is as a function of the length of the bigkey, the number of bits leaked from
it, the number of probes $p$, the length of $R$, and the number of random-oracle
calls.

At the heart of this result is an information-theoretic question we call the
subkey-prediction problem. Imagine a random key $\mathbf{K}$ that an adversary can
export $\ell < |\mathbf{K}|$ bits of information about. After that leakage, we
select $p$ random locations into $\mathbf{K}$, give those locations to the
adversary, and ask the adversary to predict those $p$ bits. How well can it do?

It turns out that the adversary can do better than just recording $\ell$ bits of
the key $\mathbf{K}$ and hoping that lots of probes fall there. But it can’t do
much better. Had nothing been leaked to the adversary, $\ell = 0$, then each probe
would contribute about one bit of entropy to the random variable the adversary
must guess. But if, say, half the key is leaked, $\ell \leq |\mathbf{K}|/2$, each
probe will now contribute about $0.156$ bits of entropy.136 The adversary’s chance
of winning the subkey-prediction game will be bounded by something that’s around
$2^{-0.156\,p}$. One needs about $p = 820$
probes for 128-bit security, or twice that for 256-bit security.
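
To make the probe-and-hash step concrete, here is a minimal Python sketch of an
XKEY-style derivation. It is my illustration only, with parameters chosen for the
demo rather than the construction or the exact probe encoding from the paper:
SHA-256 stands in for the random oracles $H$ and $H'$, and probes select bit
positions of a byte-string bigkey.

import hashlib
import math
import secrets

def probe_positions(R: bytes, p: int, bigkey_bits: int) -> list:
    # i_j = H(R, j), reduced to a bit position in the bigkey (stand-in for H).
    positions = []
    for j in range(1, p + 1):
        digest = hashlib.sha256(R + j.to_bytes(4, "big")).digest()
        positions.append(int.from_bytes(digest, "big") % bigkey_bits)
    return positions

def get_bit(bigkey: bytes, i: int) -> int:
    return (bigkey[i // 8] >> (i % 8)) & 1

def xkey(bigkey: bytes, R: bytes, p: int, dklen: int = 32) -> bytes:
    # K = H'(R, K[i_1], ..., K[i_p]); here H' is SHA-256 over R and the probed bits.
    bits = bytes(get_bit(bigkey, i) for i in probe_positions(R, p, len(bigkey) * 8))
    return hashlib.sha256(R + bits).digest()[:dklen]

if __name__ == "__main__":
    bigkey = secrets.token_bytes(1 << 20)   # a 1 MiB bigkey for the demo
    # About 0.156 bits of entropy per probe survive when half the bigkey leaks,
    # so roughly ceil(128 / 0.156) = 821 probes suffice for 128-bit security.
    p = math.ceil(128 / 0.156)
    R = secrets.token_bytes(32)             # fresh per use; may be sent in the clear
    K = xkey(bigkey, R, p)
    print(p, K.hex())

Each derivation reads only $p$ bits of the bigkey, which is what keeps bigkey
operations fast even when the key is gigabytes long.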

I think that the subkey prediction problem, and the key-encapsulation
algorithm based on it, will give rise to nice means for
exfiltration-resistant authenticated-encryption and pseudorandom generators.137 In general, I see bigkey cryptography as one tool that cryptographers can contribute
to make mass surveillance harder.


More examples. Here are a few more examples of crypto-for-privacy
work.

Consider the beautiful paper on Riposte, by Corrigan-Gibbs,
Boneh, and Mazières.138 A user, speaking with others on the
Internet, wants to broadcast a message, such as a leaked document, without
revealing his identity. The network is subject to pervasive monitoring. The
authors develop definitions, protocols, and proofs for the problem,
attending closely to efficiency.139
They implement their schemes. Combining all these elements is rare—and very much
needed.140

Or consider the work of Colin Percival in which he introduced the hash
function scrypt.141 Percival explained that,
when applying an intentionally slow-to-compute hash function to a password
and salt so as to up the cost of dictionary attacks,142 it is
better if the hash function can’t be sped up all that much with custom
hardware. To achieve this aim, computing the hash function shouldn’t just
take lots of time, but lots of (sequentially accessed) memory. This
insightful idea comes from Abadi, Burrows, Manasse, and Wobber, who wanted
to make sure that, for a variety of settings, computing an
intentionally-slow hash function on a high-end system would take roughly as
long as computing it on a low-end system.143 Quite recently, a
Password Hashing Competition (PHC) concluded having chosen a scheme,
Argon2,144 that follows this lead. Meanwhile,
the theory for this sort of hash function has nicely progressed.145
While we don’t yet have good bounds on schemes like scrypt and Argon2, I think
we’re getting there.146
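
For readers who want to experiment, Python’s standard library exposes scrypt
(via OpenSSL); the parameters below are illustrative choices for a demo, not a
recommendation.

import hashlib
import secrets

# Memory use is roughly 128 * n * r bytes, here about 64 MiB, which is what
# makes large speedups from custom hardware expensive.
password = b"correct horse battery staple"
salt = secrets.token_bytes(16)   # store alongside the resulting hash
digest = hashlib.scrypt(password, salt=salt, n=2**16, r=8, p=1,
                        maxmem=128 * 1024 * 1024, dklen=32)
print(salt.hex(), digest.hex())

The memory parameter is exactly Percival’s point: each password guess forces
roughly 128 * n * r bytes of memory traffic, so the attacker’s hardware advantage
shrinks.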

Or consider the paper on the susceptibility of symmetric encryption to mass
surveillance by colleagues and me.147 We discussed
algorithm-substitution attacks, wherein big brother
replaces a real symmetric encryption algorithm by a
subverted
one. Big brother’s aim is to surreptitiously decrypt all encrypted traffic. The
idea goes back to Young and Yung;148 all we did was to rigorously
explore the idea in the context of symmetric encryption. Yet what we found was
disturbing: that almost all symmetric encryption schemes can be easily subverted.
Still, we showed that it is easy to make schemes where this isn’t true.
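
To see how easy such subversion can be, consider the following toy, which is my
own illustration in the spirit of a biased-randomness attack and not a
construction from the paper. The subverted encryptor resamples its IV until a PRF
under big brother’s key, applied to that IV, equals the next bit of the user’s
key; anyone holding the subversion key can then read the user’s key out of the
IVs of otherwise ordinary-looking ciphertexts.

import hashlib
import hmac
import secrets

def prf_bit(key: bytes, iv: bytes) -> int:
    return hmac.new(key, iv, hashlib.sha256).digest()[0] & 1

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def subverted_encrypt(user_key: bytes, bb_key: bytes, counter: int, msg: bytes):
    # Leak bit `counter` of the user's key: resample the IV until the PRF
    # (under big brother's key) of the IV equals that bit.
    keybit = (user_key[counter // 8] >> (counter % 8)) & 1
    while True:
        iv = secrets.token_bytes(16)
        if prf_bit(bb_key, iv) == keybit:
            break
    ct = bytes(a ^ b for a, b in zip(msg, keystream(user_key, iv, len(msg))))
    return iv, ct

def big_brother_recover(bb_key: bytes, ivs: list) -> bytes:
    bits = [prf_bit(bb_key, iv) for iv in ivs]
    out = bytearray(len(bits) // 8)
    for i, b in enumerate(bits):
        out[i // 8] |= b << (i % 8)
    return bytes(out)

if __name__ == "__main__":
    user_key = secrets.token_bytes(16)   # 128-bit key: fully leaked after 128 messages
    bb_key = secrets.token_bytes(32)     # big brother's subversion key
    ivs = [subverted_encrypt(user_key, bb_key, i, b"hello")[0] for i in range(128)]
    print(big_brother_recover(bb_key, ivs) == user_key)

Under standard PRF assumptions, the sampled IVs are indistinguishable from fresh
random IVs to anyone who lacks big brother’s key, so black-box testing of the
subverted implementation reveals nothing amiss.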

And then there’s the Logjam paper, showing, for the
umpteenth time, that we must watch out for the cryptanalytic value of
precomputation.149 Attacks should routinely be regarded as a
two-step process: an expensive one that depends on widely shared parameters,
then a cheaper, individualized attack.150
Such thinking goes back to early time-memory tradeoffs,151 and to
many cryptographers’ preference for nonuniform adversaries. It occurs in
practical work, as in attacks on A5/1 in GSM phones.152 And it is
also the model that intelligence agencies seem to gravitate to, as suggested
by the NSA’s attack on FPE scheme FF2 and the fact that they regarded this
attack as serious.153
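
The economics of that two-step structure are easy to see in miniature. The toy
below is mine, and it is vastly simpler than Logjam’s discrete-log precomputation:
it attacks 4-digit PINs that were all hashed with one widely shared salt. The
expensive table is built once, for the shared parameter; each additional victim
then costs a single lookup.

import hashlib
import secrets

SHARED_SALT = b"one widely shared parameter"  # analogous to everyone using the same group

def stored_value(pin: str) -> bytes:
    return hashlib.sha256(SHARED_SALT + pin.encode()).digest()

# Step 1 (expensive, done once): precompute over the shared parameter.
table = {stored_value(f"{pin:04d}"): f"{pin:04d}" for pin in range(10000)}

# Step 2 (cheap, per target): each victim falls to a single table lookup.
victims = [f"{secrets.randbelow(10000):04d}" for _ in range(5)]
recovered = [table[stored_value(v)] for v in victims]
print(recovered == victims)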


Choose well. As I hope the examples I have given
illustrate, there are important crypto-for-privacy problems out there, and
they are quite diverse. Choose your problems well. Let values inform your
choice. Many times I have spoken to people who seem to have no real idea
why they are studying what they are. The real answer is often that they
can do it, it gets published, and that people did this stuff before. These are
lousy reasons for doing something.

Introspection can’t be rushed. In the rush to publish paper after paper, who
has the time? I think we should breathe, write fewer papers, and have them
matter more.

  • Attend to problems’ social value. Do anti-surveillance research.
  • Be introspective about why you are working on the problems you are.

In enumerating example directions for anti-surveillance research, I didn’t
include the kind of work, rather common in the PET (privacy-enhancing
technology) literature, that assumes that there will be pervasive
collection, and then tries to do what one can to minimize misuse.154
Since the immorality occurs at the point of data collection, the aim here is
to try to blunt the impact of the wrong already done. But it is hard to know
how this plays out. I am concerned that the work can play into the hands of those
who seek technical support for a position that says, in effect, “the collect-it-all
approach is inevitable and only temporarily problematic, for, once we figure
this all out, privacy will be handled downstream, when the data is used.” But
pervasive collection itself
chills free-speech and threatens liberal democracy, regardless of what one claims
will happen downstream.155


Practice-oriented provable security. It’s not just the
topics we work on, but how we execute on them that shapes our field’s
direction. For nearly 25 years Mihir Bellare and I have developed what we
call practice-oriented provable security. In a 2009 essay and talk,156
I discussed how various inessential choices engendered a theory of cryptography
that was less useful than it could have been. Today, I might number among the important
historical choices (1) a preference for asymptotic analyses
and theorems, and the correspondingly coarse conceptualizations of security
with which this is paired; (2) a preference towards
minimalism, aesthetically construed, as a starting point for reductions;
(3)
the dismissal of symmetric primitives and finite functions as targets of rigorous
inquiry; (4) a tradition of using nonconstructive language
for stating results; (5) the marginalization of secure
messaging; and (6) a condemnatory attitude towards the
random-oracle model, the random-permutation model, the ideal-cipher model,
Dolev-Yao models,157 and any other model deemed non-standard.

Practice-oriented provable security inverts such choices. It retains
provable-security’s focus on definitions and proofs, but these are
understood as tools that earn their value mostly by their utility to
security or privacy. The approach is equally at home in those two realms,
but it has been underused for privacy problems like secure messaging. Better
treatments of mix-nets and onion routing are an obvious place to start, and that
is something students and I are working on.

  • Apply practice-oriented provable security to anti-surveillance problems.

Funding.158 In the United States, it would seem
that the majority of extramural cryptographic funding may now come from the
military.159 From 2000 to 2010, fewer than 15% of the papers at
CRYPTO that acknowledged U.S. extramural funding acknowledged DoD funding.160
In 2011, this rose to 25%. From 2012 to 2015, it rose to 65%.161 Nowadays,
many cryptographers put together a large patchwork of grants, the largest of
which are usually DoD. The following funding acknowledgment isn’t so very atypical:

This work was supported by NSF, the DARPA PROCEED program, an AFOSR MURI
award, a grant from ONR, an IARPA project provided via DoI/NBC, and by
Samsung.162

The military funding of science invariably redirects it163 and
creates moral hazards.164 Yet suggesting to someone that they might
want to reconsider their taking DoD funding may anger even a placid colleague,
for it will be perceived as an assault both on their character and their ability
to succeed.

No matter what people say, our scientific work does change in
response to sponsors’ institutional aims. These aims may not be one’s own.
For example, the mission of DARPA is “to invest in the breakthrough
technologies that can create the next generation of [U.S.] national security
capabilities.” Having begun in the wake of Sputnik, the agency speaks of
avoiding technological surprise—and creating it for America’s
enemies.165 In the USA, the NSA advises other DoD agencies on crypto-related
grants. At least sometimes, they advise the NSF. Back in 1996, the NSA tried
to quash my own NSF CAREER award. I learned this from my former NSF program manager,
Dana Latch, who not only refused the NSA request, but, annoyed by it, told me.
An internal history of the NSA describes the agency’s mistake in allowing the
funding of the grant that led to RSA.

NSA had reviewed the Rivest [grant] application, but the wording was so
general that the Agency did not spot the threat and passed it back to NSF
without comment. Since the technique had been jointly funded by NSF and the
Office of Naval Research, NSA’s new director, Admiral Bobby Inman, visited
the director of ONR to secure a commitment that ONR would get NSA’s
coordination on all such future grant proposals.166

People are often happy to get funding, regardless of its source. But I would
suggest that if a funding agency embraces values inconsistent with your own,
then maybe you shouldn’t take their money. Institutions have values,
no less than men. Perhaps, in the modern era, they even have more.

Large organizations have multiple and sometimes conflicting aims. Military
organizations with both offensive and defensive roles in cybersecurity have conflicts of interest (COIs)
built into their design. Individuals are wrong to assume that their work is
non-military work errantly funded by the military.

In his farewell address of 1961, President Dwight D. Eisenhower introduced
the phrase, and concept, of the military-industrial complex. In an earlier
version of that speech, Eisenhower tellingly called it the
military-industrial-academic complex.167 If scientists wish
to reverse our complicity in this convergence of interests, maybe we need to
step away from this trough.

None of this was clear to me when I first joined the university. A few years
ago I joined in on a DoD grant proposal (fortunately, unfunded), which I
would not do today. It took me a long time to realize what eventually became
obvious to me: that the funding we take both impacts our beliefs and
reflects on them.

In the end, a major reason that crypto-for-privacy has fared poorly may be
that funding agencies may not want to see progress in this direction,168
and most companies don’t want progress here, either. Cryptographers have internalized
this. Mostly, we’ve been in the business of helping business and government keep
things safe. Governments and companies have become our “customers,” not some
ragtag group of activists, journalists, or dissidents, and not some abstract
notion of the people. Crypto-for-privacy will fare better when
cryptographers stop taking DoD funds and, more than that, start thinking of
a very different constituency for our output.

  • Think twice, and then again, about accepting military funding.169
  • Regard ordinary people as those whose needs you ultimately aim to satisfy.

Academic freedom. Those of us who are academics at
universities enjoy a tradition of academic freedom. This refers to your
right—and even obligation—to think about, speak about, and write about
whatever you want that is connected to your work, even if it goes against
the wishes of power: your university, corporations, or the state. While
academic freedom seems to be in decline,170 at least for now, it recognizably
persists.

Normally, scientists and other academics don’t actually need or use their
academic freedom: all they really need is funding and skill.171
But crypto-for-privacy may be a rare topic where academic freedom
is useful.172 I suggest that people use this gift. Unexercised,
academic freedom will wither and die.

Many nonacademics also have something akin to academic freedom: sufficient
autonomy to work on what they think is important, without losing their jobs,
even if it’s not what their employer really wants or likes.

  • Use the academic freedom that you have.

Against dogma. I think that many cryptographers would do
well to foster a more open-minded attitude to unfamiliar models, approaches,
and goals. The disciplinary narrowing within cryptography’s tier-1 venues
has been pronounced.173 Many people seem to hold rather strident
beliefs about what kinds of work are good. Sometimes it borders on
silliness, as when people refuse to use the word proof for proofs in
the random-oracle model. (Obviously a proof in the random-oracle model is no
less a proof than a proof in any other model.)

As cryptographers, we must always be sensitive, and skeptical, about the
relationship between our models and actual privacy or security.
This doesn’t mean that we should not take models seriously. It means that we
should see them as tentative and dialectical. There’s a lovely aphorism from
statistician George Box, who said that
all models are wrong, but some are useful.174

Cryptography needs useful models. But the assessment of a model’s utility
is itself problematic. We ask of definitions: How clean? How understandable?
How general? What aspects of the computing environment are covered? What does
and doesn’t it imply? The definitional enterprise sits at a juncture of math,
aesthetics, philosophy, technology, and culture. So situated, dogma is disease.

It has been claimed that the mission of theoretical cryptography is to
define and construct provably secure cryptographic protocols and schemes.175
But this is an activity of theoretical cryptography, not its mission. There are
many other activities. One might work on models and results that are completely
rigorous but fall outside of the provable-security framework.176
Or one can take an important protocol as fixed and then analyze it, in whatever
framework works best. The aim for my own work has been to develop ideas that
I hope will contribute to the construction of secure computing systems. In the
symbology of Amit Sahai’s lovely flower-garden,177 theory-minded cryptographers
can be gardeners, growing seeds (hardness assumptions) into flowers (cryptographic
goals); but they can do many other things as well. Which is fortunate, as cryptographic
practice hasn’t benefited all that much from our horticultural activities.

  • Be open to diverse models. Regard all models as suspect and dialectical.

A more expansive view. I would encourage cryptographers—especially
young people in our field—to try to get a systems-level view of what is going
on when cryptography is used. You need a way better view of things than a technophobe
like me will ever have.

I remember reading that 2012 paper of Dan Boneh and his coauthors, The Most Dangerous Code in the World,178 and feeling humbled by the fact that there was this entire
universe of code—this middleware—that I didn’t even know
existed, but that could, and routinely did, annul the cryptography
that was there. When the NSA revelations caused people to speculate as to
how Internet cryptography was being defeated, it occurred to me that perhaps
the NSA didn’t need any clever cryptanalysis—what they needed, most of all,
was to buy exploits and hire people with a systems-level view of the
computing ecosystem.

One approach that might be useful for gaining a good vantage is to take an
API-centric view of things.179 Not only are API misunderstandings
a common security problem, but gaps between cryptographic formalizations and
APIs can produce serious cryptographic problems.180 And in the
constructive direction, the notion of online-AE, for example,181 effectively
flows from taking an API-centric view. APIs and “serious” cryptography need stronger
bonds.

Research communities have a general tendency to become inward-looking. As a
community, we have fostered strong relationships to algorithms and
complexity theory, but have done less well attending to privacy research,
programming languages, or the law. We will play a larger social role if we
up our connections to neighbors.

I recently saw a nice talk by Chris Soghoian in which he described his
frustration in trying to get media to report on, or anyone else to care
about, the well-known fact (that is actually not well known) that
cell-phone conversations have essentially no privacy.182 Cryptographers
should be helping with such communications. But I wonder how much we have even
paid attention. For most of us, if it’s not what one’s working on, one doesn’t
really care. There isn’t time.

  • Get a systems-level view. Attend to that which surrounds our field.

Learn some privacy tools. I would like to gently suggest
that we cryptographers would do well to learn, and use, contemporary privacy
tools. Very few of us use tools like OTR, PGP, Signal, Tails, and Tor. It’s
kind of an embarrassment—and I suspect our collective work suffers for it.
Christopher Soghoian insightfully remarks: “It’s as if the entire academic
medical community smoked 20 cigarettes a day, used intravenous drugs with
shared needles, and had unprotected sex with random partners on a regular
basis.”183

I’m a bizarre person to advocate in this direction—it’s definitely a case of
the pot calling the kettle black. I am dispositionally uninterested in using
technology, and am incompetent at doing so if I try. I don’t even own a
smartphone. Yet I suspect that there is nothing like experience to motivate
cryptographers to identify and solve the privacy problems that will help us
to transform hard-to-use tools for nerds into transparently embedded
mechanisms for the masses. The first problem I suggested in Part 4 is
something I thought of within days of starting to use Pond.

  • Learn some privacy tools. Use them. Improve them.

No cutesy adversaries. There is a long tradition of cutesiness
in our field. People spin fun and fanciful stories. Protocol participants are
a caricatured Alice and Bob. Adversaries are little devils, complete with horns
and a pitchfork. Some crypto talks are so packed with clip-art you can hardly
find the content. I have never liked this, but, after the Snowden revelations,
it started to vex me like never before.

Cryptography is serious, with ideas often hard to understand. When we try to
explain them with cartoons and cute narratives, I don’t think we make our
contributions easier to understand. What we actually do is add in a
layer of obfuscation that must be peeled away to understand what has
actually been done. Worse, the cartoon-heavy cryptography can reshape our
internal vision of our role. The adversary as a $53-billion-a-year
military-industrial-surveillance complex and the adversary as a
red-devil-with-horns induce entirely different thought processes. If we see
adversaries in one of these ways, we will actually see a different set of
problems to work on than if we see things in the other. Whimsical
adversaries engender a chimerical field.184

As a graduate student, I wanted our field to feel fantastical. I wanted a
discipline full of space aliens and communicating millionaires. Not only was
it fun, but it stroked my ego, effectively embodying the sentiment: I am a scientist too smart to have to deal with small-minded concerns.

At this point, I think we would do well to put ourselves in the mindset of a
real adversary, not a notional one: the well-funded intelligence
agency, the profit-obsessed multinational, the drug cartel. You have an
enormous budget. You control lots of infrastructure. You have teams of
attorneys more than willing to interpret the law creatively. You have a huge
portfolio of zero-days.185 You have a mountain of self-righteous
conviction. Your aim is to
Collect it All, Exploit it All, Know it All.186 What would
frustrate you? What problems do you not want a bunch of super-smart academics
to solve?

  • Stop with the cutesy pictures. Take adversaries seriously.

A cryptographic commons. Many people see the Internet as some
sort of magnificent commons. This is a fantasy. There are some successful commons
within the Internet: Wikipedia, the free software movement, Creative Commons,
OpenSSL, Tor, and more. But most people turn almost exclusively to services mediated
by a handful of corporations that provide the electronic mail, instant messaging,
cloud storage, and cloud computing, for example, that most people use. And those
same corporations provide the hardware on which all this stuff sits.

We need to erect a much expanded commons on the Internet. We need to realize
popular services in a secure, distributed, and decentralized way, powered by
free software and free/open hardware. We need to build systems beyond the
reach of super-sized companies and spy agencies. Such services must be based
on strong cryptography. Emphasizing that prerequisite, we need to expand our
cryptographic commons.

Dreams for such a commons go back to the cypherpunks, who built remailers,
for example, as a communitarian service to enable secure communications.
More recently, Feigenbaum and Koenig articulate such a vision.187
After explaining that centralized cloud services play a central role in enabling
mass surveillance, they call for a grass-roots effort to develop new, global-scale
cloud services based on open-source, decentralized, configuration-management
tools.

We might start small by doing our piece to improve the commons we do have:
Wikipedia. It could become a routine undertaking at IACR conferences and
workshops, or at a Dagstuhl meeting, for folks to gather around for an
afternoon or evening to write, revise, and verify selected Wikipedia pages
dealing with cryptography. It’s the sort of effort that will pay off in many
unseen ways.

  • Design and build a broadly useful cryptographic commons.

Communications. In advancing our field, well-named notions
have always been important. One has only to think back to
zero-knowledge
(and the competing term minimal-disclosure) to recall how a
beautiful phrase could help catapult a beautiful idea into prominence.
Similarly, the six-letter phrase 33 bits does a remarkably good job
of embodying an important concept without going anywhere near contested
vocabulary.188 In both cryptography and privacy, language is both
formative and fraught.

The word privacy, its meaning abstract and debated, its
connotations often negative, is not a winning word. Privacy is for medical
records, toileting, and sex — not for democracy or freedom. The word
anonymity is even worse: modern political parlance has painted this
as nearly a flavor of terrorism. Security is a more winning word
and, in fact, I spoke of secure messaging instead of
private messaging or anonymous messaging because I think
it better captures what I want conveyed: that a communication whose
endpoints are manifest is not at all secure. A person needs to feel
insecure if using such a channel.

But even the word security doesn’t support a good framing of our
problem: we should try to speak of thwarting mass surveillance more than
enhancing privacy, anonymity, or security. As discussed before, we know
instinctively that ubiquitous surveillance is incompatible with freedom,
democracy, and human rights.189 This makes surveillance a thing against
which one can fight. The surveillance camera and data center make visual our
emerging dystopia, while privacy, anonymity, and security are so abstract as
to nearly defy visual representation.

Concretely, research that aims to undermine objectionable surveillance might
be called anti-surveillance research.190 Tools for this
end would be anti-surveillance technologies.191 And
choosing the problems one works on based on an ethical vision might be
called conscience-based research.

  • Choose language well. Communication is integral to having an impact.

Institutional values. This essay might seem to focus on the
ethical weight of each scientist’s personal, professional choices. But I am actually
more concerned about how we, as cryptographers and computer scientists, act in
aggregate. Our collective behavior embodies values—and the institutions we create
do, too.

I do not intend to criticize any particular individual. People should and
will work on what they think to be most valuable. The problem occurs when
our community, as a whole, systematically devalues utility or social worth.
Then we have a collective failure. The failure falls on no one in
particular, and yet it falls on everyone.


Conclusion to it all. Many before me have discussed the
importance of ethics, disciplinary culture, and political context in shaping
what we do. For example, Neal Koblitz asserts that the founding of the
CRYPTO conference in 1981 was itself an act of defiance. He warns of the
corrupting role that funding can play. And he concludes his own essay with
an assertion that drama and conflict are inherent in cryptography, but that
this also makes for some of the field’s fun.192
Susan Landau reminds us that privacy reaches far beyond engineering, and into
law, economics, and beyond. She reminds us that minimizing data collection is
part of the ACM Code of Ethics and Professional Conduct.193

As computer scientists and cryptographers, we are twice culpable when it
comes to mass surveillance: computer science created the technologies that
underlie our communications infrastructure, and that are now turning it into
an apparatus for surveillance and control; while cryptography contains
within it the underused potential to help redirect this tragic turn.194

Authors and filmmakers, futurists and scientists, have laid out many
competing visions for man’s demise. For example, Bill Joy worries about
nanotechnology turning the biosphere into gray goo, or super-intelligent
robots deciding that man is a nuisance, or a pet.195 I don’t lose
sleep over such possibilities; I don’t see them as our likely end. But a creeping
surveillance that grows organically in the public and private sectors, that becomes
increasingly comprehensive, entwined, and predictive, that becomes an instrument
for assassination, political control, and the maintenance of power—well, this
vision doesn’t merely seem possible, it seems to be happening before our eyes.

I am not optimistic. The figure of the heroic cryptographer sweeping in to
save the world from totalitarian surveillance is ludicrous.196 And
in a world where intelligence agencies stockpile and exploit countless vulnerabilities,
obtain CA secret keys, subvert software-update mechanisms, infiltrate private
companies with moles, redirect online discussions in favored directions, and
exert enormous influence on standards bodies, cryptography alone will be an ineffectual
response. At best, cryptography might be a tool for creating possibilities within
contours circumscribed by other forces.

Still, there are reasons to smile. A billion users are getting encrypted
instant messaging using WhatsApp and its embedded Axolotl protocol.197
Two million clients connect using Tor each day.198 Cryptography
papers inspired by the Snowden revelations are starting to come out apace.
More than 50 crypto and security researchers from the U.S.A. signed an open
letter I co-organized deploring society-wide surveillance.199 The
15-author Keys Under Doormats report200 is an explicit attempt
to have cryptographic expertise inform policy.

And it’s not as though crypto-for-privacy is something new or deprecated
within our community. Cryptographers like Ross Anderson, Dan Bernstein, Matt
Blaze, David Chaum, Joan Feigenbaum, Matt Green, Nadia Heninger, Tanja
Lange, Arjen Lenstra, Kenny Paterson, Ron Rivest, Adi Shamir, Nigel Smart,
and Moti Yung, to name just a few, have been attending to practical privacy
long before it started to get trendy (if this is happening). The RWC
(Real World Cryptography) conference is creating a new and healthy mix of participants.

Talks, workshops, and panel discussions on mass surveillance are helping
cryptographers see that dealing with mass surveillance is a problem
within our discipline. Bart Preneel and Adi Shamir have been going around
giving talks entitled Post-Snowden Cryptography, and there were
panel discussions with this title at Eurocrypt 2014 and RSA-CT 2015.

Articles are emerging with titles like “Cryptographers have an ethics
problem.”201 When an attack on Tor by CMU researchers was
allegedly used to provide bulk de-anonymized data to the FBI, CMU and the
researchers involved were publicly shamed.202 The IACR itself has
been getting more vocal, both with the Copenhagen Resolution203
and the statement on Australia’s Defence Trade Controls Act.204

While our community has embraced crypto-for-privacy less than I would like,
this has been a cultural issue—and culture can change.

I have heard it said that if you think cryptography is your solution, you
don’t understand your problem.205 If this quip is true, then our field
has gone seriously astray. But we can correct it. We need to make cryptography
the solution to the problem: “how do you make surveillance more expensive?”

Dan Bernstein speaks of interesting crypto and
boring crypto. Interesting crypto is crypto that supports plenty of
academic papers. Boring crypto is “crypto that simply works, solidly resists
attacks, [and] never needs any upgrades.” Dan asks, in his typically
flippant way,

What will happen if the crypto users convince some crypto researchers to
actually create boring crypto?

No more real-world attacks. No more emergency upgrades. Limited audience
for any minor attack improvements and for replacement crypto.

This is an existential threat against future crypto research.206

If this is boring crypto, we need to go do some.

Cypherpunk cryptography has been described as crypto with an attitude.207
But it is much more than that, for, more than anything else, what the cypherpunks
wanted was crypto with values. And values, deeply felt and deeply
embedded into our work, is what the cryptographic community needs most. And
perhaps a dose of that cypherpunk verve.208

It has been said that just because you don’t take an interest in politics,
doesn’t mean politics won’t take an interest in you.209 Since cryptography
is a tool for shifting power, the people who know this subject well, like it
or not, inherit some of that power. As a cryptographer, you can ignore this landscape
of power, and all political and moral dimensions of our field. But that won’t
make them go away. It will just tend to make your work less relevant or socially
useful.

My hope for this essay is that you will internalize this fact and recognize
it as the starting point for developing an ethically driven vision for what
you want to accomplish with your scientific work.

I began this essay speaking of the Russell–Einstein manifesto, so let me end
there as well, with Joseph Rotblat’s plea from his Nobel prize acceptance
speech:

At a time when science plays such a powerful role in the life of society,
when the destiny of the whole of mankind may hinge on the results of
scientific research, it is incumbent on all scientists to be fully conscious
of that role, and conduct themselves accordingly. I appeal to my fellow
scientists to remember their responsibility to humanity.210

Acknowledgements

My thanks go first to Mihir Bellare for countless discussions on the topic
of this essay. For years, not only have we collaborated closely on technical
matters, but we have also much discussed the values and sensibilities
implicitly embedded within cryptographic work. Without Mihir, not only would
I have done far less technically, but I would also understand far less about
who cryptographers are.

Ron Rivest not only provided useful comments, but has been much on my mind
as I have agonized over this essay. Many other people have given me
important suggestions and ideas. I would like to thank Jake Appelbaum, Ross
Anderson, Tom Berson, Dan Boneh, David Chaum, Joan Feigenbaum, Pooya
Farshim, Seda Gürses, Tanja Lange, Chip Martel, Stephen Mason, Chanathip
Namprempre, Ilya Mironov, Chris Patton, Charles Raab, Tom Ristenpart, Amit
Sahai, Rylan Schaeffer, Adi Shamir, Jessica Malekos Smith, Christopher
Soghoian, Richard Stallman, Colleen Swanson, Björn Tackmann, Helen Thom,
Jesse Walker, Jacob Weber, and Yusi (James) Zhang for their comments,
discussions, and corrections.

My view of what science is and what the scientist should be was strongly
shaped by watching Jacob Bronowski when I was a child.211

All original technical work mentioned in this essay (e.g., what is
described in the first pages of Part 4) was supported by NSF Grant CNS
1228828. But I emphasize that all opinions, findings, conclusions and
recommendations in this essay (and this essay is mostly opinions and
recommendations) reflect the views of the author alone, not necessarily the views
of the National Science Foundation.

Thanks to the Schloss Dagstuhl staff and to the participants of workshop
14401, Privacy and Security in an Age of Surveillance, where ideas
related to this essay were discussed.212

Some of the work on this essay was done while I was a guest professor at
ENS, Paris, hosted by David Pointcheval.

My thanks to the IACR Board for the privilege of giving this year’s IACR
Distinguished Lecture. It is an honor that happens at most once in a
cryptographer’s career, and I have tried my best to use this opportunity
wisely.

This essay owes its existence to the courage of Edward Snowden.
