Facing New Risks with Big Data and the Cloud
A Holistic Approach to Data Security
By Prof. Bryan Ford, AXA Chair in Information Security and Privacy, École Polytechnique Fédérale de Lausanne.
While exciting progress in computer science opens up so many new avenues, the growth of cloud computing also introduces new risks—and the well-known ones may be only the tip of the iceberg, Prof. Ford says. It’s not the problems tied directly to your cloud provider, like outages cutting off your access, that worry him, but the second-order risks. These include cloud services that may appear independent but actually share resources behind the scenes, undermining the safety usually conferred by redundancy in a system. “This could create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes,” Prof. Ford explains. Also requiring urgent attention is the challenge that cloud computing poses to the preservation of digital artifacts. Technology changes rapidly and versions become outdated, putting long-term availability at risk. With cloud-based applications, users never possess a complete, functional copy of the item to store in a repository—think of search engines or mapping applications, versus word processing software installed directly on your computer. How, then, can digital archivists file away historically significant cloud artifacts for long-term cultural preservation?
Prof. Ford’s research project will provide, first of all, a much deeper understanding of questions like these, which must be asked in a new era of cloud computing. In response to the risks exposed, he will also design new system architectures capable of facing the problems involved. He aims to develop methods of quantifying the risk of compromised privacy or failure in a system. Then, he’ll create prototypes capable of using this measurement to reconfigure cloud systems at risk. By getting started now, Prof. Ford hopes to understand the risks and devise solutions “before our socioeconomic fabric becomes inextricably dependent on a convenient but potentially unstable computing model,” he says.
Apple, FBI, and Software Transparency*
*From Freedom to Tinker, Princeton's Center for Information Technology Policy, March 10, 2016, https://freedom-to-tinker.com/2016/03/10/apple-fbi-and-software-transparency
The Apple versus FBI showdown has quickly become a crucial flashpoint of the “new Crypto War.” On February 16 the FBI invoked the All Writs Act of 1789, a catch-all authority for assistance of law enforcement, demanding that Apple create a custom version of its iOS to help the FBI decrypt an iPhone used by one of the San Bernardino shooters. The fact that the FBI allowed Apple to disclose the order publicly, on the same day, represents a rare exception to the government’s normal penchant for secrecy.
The reasons behind the FBI’s unusually loud entrance are important – but even more so is the risk that after the present flurry concludes, the FBI and other government agencies will revert to more shadowy methods of compelling companies to backdoor their software. This blog post explores these software transparency risks, and how new technical measures could help ensure that the public debate over software backdoors remains public.
The Decryption Assistance Order
Apple and other technology companies regularly comply with government orders for data in their possession. The controversial order’s key distinction, however, is that the data is not in Apple’s possession but on an encrypted iPhone, and the order requires Apple to create new software to help the FBI circumvent that iPhone’s security. While Apple is probably technically able to comply with the FBI’s order, it is fighting the order on the grounds that “the government demands that Apple create a back door to defeat the encryption on the iPhone, making its users’ most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance.”
Indeed, demanding that a private company create a new forensic instrument at the government’s order, weakening the security of Apple’s own devices and exposing their users’ innermost secrets, may violate the First Amendment. At any rate, the order is “like asking General Motors to build a new truck with a fifth wheel by next month.”
The FBI could probably create their own backdoored version of iOS. However, Apple’s devices accept only software updates digitally signed with a secret key that presumably only Apple controls. Presumably. We’ll come back to that.
Why All the Publicity?
One of the most interesting and unusual features of this particular case is how quickly we, the public, learned about it from Apple. The FBI could have quietly delivered this order under seal, as it has done with similar decryption-assistance demands to Apple – as well as to other companies such as Lavabit, the now-defunct E-mail provider that Edward Snowden used.
Apple even reportedly requested that the FBI’s order be sealed, but the FBI wanted the public showdown. The facts of the case undermine the FBI’s claims of urgently needing this iPhone’s contents: the killers were already long dead, the mountain of metadata the FBI already had about the killers revealed no hint of connections to other terrorists, and the iPhone in question was an employer-provided work phone that the killers did not bother to destroy as they did their two personal phones. The Occam’s Razor interpretation of the facts suggests that the FBI is far less interested in the data itself than in the court precedent a legal win would establish.
In short, it appears the FBI is “playing politics” via a “carefully planned legal battle…months in the making.” The iPhone in question represents a strategically-chosen battleground on which the FBI thinks it can win using the terrorism card – even if this particular iPhone in fact has little or no intelligence value.
Lining up in Apple’s defense are a plurality of the American public; public-interest organizations such as the ACLU, EFF, and CDT; many technology giants including Google, Intel, Microsoft, Cisco, and Amazon; newspapers such as the New York Times and the Wall Street Journal; the UN High Commissioner for Human Rights; and even the former NSA director and other former top US government officials.
The Secrecy Alternative, Past and Future
Important as this public battle is, the FBI and governments around the world can and often have pursued the same goals in secret: Apple versus FBI is more the exception than the rule. Recall the result of the first Crypto Wars, in which the US government attempted to mandate key escrow encryption embodied in the infamous Clipper Chip. While the government lost that public battle, they did not give up but merely shifted their efforts to compromise encryption back into the shadows.
For example, the NSA apparently slipped a backdoor into a NIST standard for random number generation, allowing the holder of a secret to compromise all cryptographic algorithms on a device. Demonstrating the perils of trying to keep a backdoor accessible only to “the good guys,” an unknown attacker recently managed to “re-key” and take over a latent copy of this backdoored random number generator in Juniper Networks routers.
Even if sanity prevails in this new round of the Crypto Wars, we can count on continued attempts by the US and governments around the world to acquire secret backdoors. Governments can of course exploit software bugs or physical vulnerabilities to break into personal devices, but secret backdoors will remain an attractive Siren song. It is easier, cheaper, and less risky to exploit a known backdoor than to “reach into the treasure chest…and engineer a custom exploit.”
The Software Update Backdoor
Nearly all of today’s personal devices, including Apple’s, already have a ready-made “backdoor” ripe for exploitation, in the form of automatic software updates validated by digital signatures. One way the US government could acquire a universal backdoor to Apple’s devices is simply by demanding a copy of Apple’s secret software signing keys. The government already showed a willingness to do exactly this, in demanding the master keys to Lavabit’s encrypted E-mail service while investigating Snowden. This might not be entirely trivial if Apple’s software signing keys are held in hardware security modules designed to thwart the extraction or cloning of secret keys. In that case, however, the government could still simply demand that Apple use its secret key to produce a valid digital signature for the FBI’s backdoored version of iOS, while keeping this process and the existence of this backdoored iOS secret.
Even if Apple wins this public battle, therefore, they will still face well-founded post-Snowden fears and suspicions – from companies and governments around the world – as to whether Apple can be coerced into secretly helping to sign backdoored software and firmware images. This risk is by no means specific to Apple, but faced by any organization that creates and releases software. Even open source software is not immune, because you cannot be certain whether a software update represents a correctly-compiled or backdoored version of a source release unless you build it yourself, which precious few users do.
Software Transparency via Decentralized Witness Cosigning
In IEEE Security & Privacy 2016 we will present a paper (preliminary draft available here) introducing decentralized witness cosigning, a technological means by which software makers such as Apple could protect their users from secretly backdoored versions of their software – and in turn protect themselves and their financial bottom lines from worldwide fears and suspicions about the possibility of backdoored software.
With conventional digital signatures, as currently used for most software and firmware signing processes, a single party (e.g., Apple) holds the secret key needed to produce valid software images that devices and their software update systems will accept. Any well-designed update system refuses to accept any software image unless it has been authenticated using a digital certificate embedded in the device, which cryptographically identifies the software maker via a mathematical relationship with the secret signing key. Best practice for software signing is already to keep particularly sensitive signing keys offline, perhaps in hardware security modules (HSMs) or even split across multiple HSMs, as ICANN does in its ornate DNSSEC key signing ceremony. But as noted above, such measures do not prevent the organization from being coerced into secret misuse of these signing keys.
With decentralized witness cosigning, a software maker imprints their devices and software update systems with a digital certificate corresponding not just to their own secret key but also to secret keys held by a group of independent witnesses. These witnesses might include other cooperating software companies, public-interest organizations such as the ACLU, EFF, or CDT, or major corporate customers or governments around the world desiring not just verbal but also technological assurances of the software maker’s commitment to transparency. In turn, before accepting any software image the device’s update system verifies that it has been signed not only by the software maker but also by a threshold number of the designated witnesses. In essence, the device does not accept any software image unless it arrives with a cryptographic “proof” that this particular software image has been publicly observed by – and thereby placed under the scrutiny of – a decentralized group of independent parties scattered around the world in different jurisdictions.
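The acceptance rule a device would enforce can be sketched as follows. This is a toy illustration only: it uses HMAC tags as a symmetric stand-in for real asymmetric signatures (a deployment would verify public-key signatures such as Ed25519 against embedded certificates), and the function names and threshold policy are assumptions for illustration, not the paper’s actual protocol.

```python
import hashlib
import hmac

# Toy stand-in for digital signatures: HMAC tags keyed by each party's
# secret. In reality the device would hold PUBLIC keys and verify
# asymmetric signatures; the structure of the policy is the point here.

def sign(key: bytes, image: bytes) -> bytes:
    return hmac.new(key, image, hashlib.sha256).digest()

def verify(key: bytes, image: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, image), sig)

def device_accepts(image, maker_key, maker_sig,
                   witness_keys, witness_sigs, threshold):
    """Accept an image only if the software maker AND at least
    `threshold` of the designated witnesses have signed it."""
    if not verify(maker_key, image, maker_sig):
        return False
    valid = sum(
        1 for wid, sig in witness_sigs.items()
        if wid in witness_keys and verify(witness_keys[wid], image, sig)
    )
    return valid >= threshold
```

The key property is conjunctive: the maker’s signature alone is no longer sufficient, so a coerced maker cannot produce a device-acceptable image without also pushing it past the witnesses.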
The Scalability of Witness Cosigning
Technically, it is quite easy to implement witness cosigning if the number of witnesses is small. A software maker could simply gather a list of individual signatures for each new software release, in much the same way people have handled public petitions for hundreds of years. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures for each software release could become painful and inefficient. Worse, every device needing to validate a software download or update would need to check all these signatures individually, causing delays and consuming battery power.
The key technical contribution of our research is a distributed protocol that automates this process and makes large, decentralized witness cosigning groups practical. I will spare you the details, but those interested can find them here. The oversimplified summary is that the protocol involves compressing hundreds or thousands of signatures into a single one that can be verified almost as simply and efficiently as verifying a normal individual signature. For illustration, a traditional many-signature petition handled this way might look as follows:
[Figure: What a classic petition might look like as a cryptographic multisignature]
Superposing standard pencil-and-paper signatures this way would of course offer little or no security, but such superposition can be made secure with modern digital signatures. This is one of the remarkable properties of modern cryptography, and is a well-understood property that long predates our work. Again, our main contribution is to make witness cosigning scale.
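The algebra behind this compression can be sketched with a toy collective Schnorr-style multisignature: each cosigner contributes a commitment and a response, and the contributions sum into a single (R, s) pair that verifies against the product of the public keys. The group parameters below are deliberately tiny and insecure, and the naive key aggregation ignores rogue-key attacks; the actual protocol in the paper runs over elliptic curves with a scalable communication tree. This is a sketch of the idea, not the implementation.

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; INSECURE toy modulus, for illustration
G = 3            # an arbitrary base for the toy group

def H(R: int, msg: bytes) -> int:
    """Challenge: hash of the aggregate commitment and the message."""
    return int.from_bytes(
        hashlib.sha256(R.to_bytes(16, "big") + msg).digest(), "big")

def keygen():
    x = secrets.randbelow(P - 2) + 1       # secret key
    return x, pow(G, x, P)                 # (secret, public)

def cosign(secret_keys, msg):
    """Each cosigner picks a nonce r_i; all contributions collapse
    into one aggregate commitment R and one aggregate response s."""
    rs = [secrets.randbelow(P - 2) + 1 for _ in secret_keys]
    R = 1
    for r in rs:
        R = R * pow(G, r, P) % P           # product of commitments
    c = H(R, msg)
    s = sum(r + c * x for r, x in zip(rs, secret_keys))
    return R, s

def verify(public_keys, msg, sig):
    """One check suffices, however many parties cosigned:
    G^s == R * (product of public keys)^c."""
    R, s = sig
    X = 1
    for Xi in public_keys:
        X = X * Xi % P                     # aggregate public key
    c = H(R, msg)
    return pow(G, s, P) == R * pow(X, c, P) % P
```

Note that verification cost is essentially that of a single signature check, regardless of whether ten or ten thousand witnesses contributed.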
How Does Anyone Know If There’s a Backdoor?
Unfortunately, independent witnesses cannot necessarily determine immediately, during the witness cosigning process, whether or not a particular software image actually contains a backdoor. This is especially true in the common case where the source code is proprietary and the software maker signs and releases only binary images. Nevertheless, the witnesses can still proactively ensure transparency by ensuring that every correctly-signed software image in existence has been disclosed, cataloged, and made subject to public scrutiny.
For example, if future Apple devices adopted decentralized witness cosigning, and a government attempted to coerce Apple secretly into signing a backdoored version of iOS version 11.2.1, then the only way Apple could do so would be to submit the backdoored iOS version to the independent witnesses for cosigning. Even though those witnesses could not necessarily recognize the backdoor, they could immediately notice that two different iOS images labeled “version 11.2.1” have been signed: the standard one and the backdoored one. This inconsistency alone should immediately raise alarms and draw the attention of security companies around the world, who could carefully inspect the differences between the two software images.
A government could of course coerce Apple to give the backdoored image a different version number that most of their customers never receive: e.g., “11.2.1fbi” – or a more anonymous “11.2.2.” However, the witnesses would still be able to tell that an iOS image exists that has been signed but not widely distributed, again likely drawing suspicion and careful scrutiny by security experts.
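The bookkeeping that lets witnesses raise such alarms is simple: record every image hash submitted for cosigning under its claimed version label, and flag any label that maps to more than one distinct image. The class and method names below are illustrative assumptions, not APIs from the paper.

```python
import hashlib

class WitnessLog:
    """Minimal sketch of a witness's transparency log: every signing
    request is recorded, and conflicting images under one version
    label are flagged for public scrutiny."""

    def __init__(self):
        self.seen = {}   # version label -> set of distinct image hashes

    def observe(self, version: str, image: bytes) -> list:
        """Record a signing request; return all version labels that
        now have more than one distinct signed image."""
        digest = hashlib.sha256(image).hexdigest()
        self.seen.setdefault(version, set()).add(digest)
        return [v for v, hashes in self.seen.items() if len(hashes) > 1]
```

Even a witness that cannot analyze the binary at all can run this check, which is why the scheme degrades gracefully: detection of a duplicate or undistributed image requires no reverse engineering, only consistent record-keeping.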
Of course, Apple – or a malicious Apple employee – could still slip a subtle backdoor or security “bug” into the standard iOS releases that everyone runs. Accidental bugs and backdoors alike can persist for years without being noticed, as the Juniper incident amply demonstrates. Open source software offers a transparency advantage, especially with reproducible builds – but even source-level backdoors can be devilishly tricky.
Nevertheless, techniques and tools for analyzing both source and binary software are constantly improving, and decentralized witness cosigning can ensure that all releases of a software distribution are publicly known and exposed to this analysis by talented security researchers and white-hat hackers around the world. An attacker who slips a backdoor into a public software release inherently faces a risk that the backdoor could be discovered at any time. Witness cosigning prevents attackers from sidestepping that risk of discovery, even by secretly deploying the backdoored software only on targeted devices under attacker-controlled conditions.
Proactive versus Retroactive Transparency Approaches
Decentralized witness cosigning is by no means the first cryptographic transparency mechanism. For example, the Public Key Infrastructure (PKI) used to secure Web connections has similar single-point-of-compromise weaknesses. PKI transparency mechanisms such as Convergence, Sovereign Keys, Certificate Transparency, AKI, and ARPKI chip away at this problem. Certificate Transparency is now standard in the Chrome browser. Application Transparency is a proposed variant of Certificate Transparency adapted to software downloads and updates. Related proposals such as Perspectives and CONIKS address closely-related problems for Secure Shell (SSH) connections and end-to-end encrypted messaging, respectively.
These prior transparency mechanisms have two crucial weaknesses, however: they do not significantly increase the number of secret keys an attacker must control to compromise any personal device, and personal devices cannot even retroactively detect such compromise unless they can actively communicate with multiple well-known Internet servers. For example, even with Certificate Transparency, an attacker can forge an Extended Validation (EV) certificate for Chrome after compromising or coercing only three parties: one Certificate Authority (CA) and two log servers. Since many CAs and log servers are in US jurisdiction, such an attack is clearly within reach of the US government. If such an attack does occur, Certificate Transparency cannot detect it unless the victim device has a chance to communicate or gossip the fake certificate with other parties on the Internet – after it has already accepted and started using the fake digital certificate.
Gossip Mechanisms Can’t Guarantee Software Transparency
These weaknesses are especially severe in the domain of software transparency, the central issue in the Apple versus FBI case. First, if a personal device accepts and starts running a backdoored software update before the device has had a chance to gossip information about the update with other parties on the Internet, then the backdoored software can evade transparency simply by disabling gossip in the updated code. Second, even if for some reason the attacker cannot or neglects to take this obvious step, the attacker can still evade transparency by controlling either the device or its Internet access paths. In the FBI versus Apple case, for example, the FBI could trivially evade gossip-based transparency, and keep its backdoored iOS image secret, by keeping the device disconnected from the rest of the Internet after installing their backdoored software update. (They probably plan to anyway, to ensure that no “cyber pathogens” escape.)
This weakness of gossip-based transparency also applies to attackers who may not control the device itself but control the device’s Internet access path. For example, a compromised Internet service provider (ISP) or corporate Internet gateway can defeat gossip-based transparency by persistently blocking a victim’s access to transparency servers elsewhere on the Internet. Even if the user’s device is mobile, a state intelligence service such as China’s “Great Firewall” could defeat gossip-based transparency by persistently blocking connections from a targeted victim’s devices to global transparency servers, in the same way that China already blocks connections to many websites and to the Tor anonymity network.
The noisy Apple versus FBI battle is merely the visible tip of a looming software integrity iceberg, illustrating both the importance of software transparency mechanisms and the technical challenges in securing them. Current gossip-based methods cannot actually guarantee transparency if an attacker is in control of the target device or its Internet access path, as in the current FBI versus Apple scenario. Even if software updates were guarded by Certificate Transparency or Application Transparency, the FBI could still secretly force Apple to sign a backdoored software update, coerce two US-based log servers to sign fake log entries while keeping both the software update and the fake logs secret, and isolate the targeted device offline so that it cannot gossip the fake update metadata with anyone.
Decentralized witness cosigning is currently the only known method of ensuring transparency and public accountability in such situations. Taking a proactive approach, witness cosigning provides devices with a standalone cryptographic proof that a software update has been observed by many independent parties, before the device accepts or runs the software. In this way, companies such as Apple could offer their customers a strong guarantee that every valid software image in existence has been publicly disclosed before any of their devices, anywhere, will consider it valid – even if the device and/or its network is controlled by an attacker who does not exhibit the FBI’s fleeting taste for publicity.
The present public debate over backdoors in personal devices is of critical importance to our security, privacy, and personal freedom. But equally important is ensuring that this time the debate stays public.