
December 13, 2017

The deal with Bitcoin

♪ Used to have a little now I have a lot
I'm still, I'm still Jenny from the block
          chain ♪

For all that has been written about Bitcoin and its ilk, it is curious that the focus is almost solely on what the cryptocurrencies are supposed to be. Technologists wax lyrical about the potential for blockchains to change almost every aspect of our lives. Libertarians and paleoconservatives ache for the return to "sound money" that can't be conjured up at the whim of a bureaucrat. Mainstream economists wag their fingers, proclaiming that a proper currency can't be deflationary, that it must maintain a particular velocity, or that the government must be able to nip crises of confidence in the bud. And so on.

Much of this may be true, but the proponents of cryptocurrencies should recognize that an appeal to consequences is not a guarantee of good results. The critics, on the other hand, would be best served to remember that they are drawing far-reaching conclusions about the effects of modern monetary policies based on a very short and tumultuous period in history.

In this post, my goal is to ditch most of the dogma, talk a bit about the origins of money - and then see how "crypto" fits the bill.

1. The prehistory of currencies

The emergence of money is usually explained in a very straightforward way. You know the story: a farmer raised a pig, a cobbler made a shoe. The cobbler needed to feed his family while the farmer wanted to keep his feet warm - and so they met to exchange the goods on mutually beneficial terms. But as the tale goes, the barter system had a fatal flaw: sometimes, a farmer wanted a cooking pot, a potter wanted a knife, and a blacksmith wanted a pair of pants. To facilitate increasingly complex, multi-step exchanges without requiring dozens of people to meet face to face, we came up with an abstract way to represent value - a shiny coin guaranteed to be accepted by every tradesman.

It is a nice parable, but it probably isn't very true. It seems far more plausible that early societies relied on the concept of debt long before the advent of currencies: an informal tally or a formal ledger would be used to keep track of who owes what to whom. The concept of debt, closely associated with one's trustworthiness and standing in the community, would have enabled a wide range of economic activities: debts could be paid back over time, transferred, renegotiated, or forgotten - all without having to engage in spot barter or to mint a single coin. In fact, such non-monetary, trust-based, reciprocal economies are still common in closely-knit communities: among families, neighbors, coworkers, or friends.

In such a setting, primitive currencies probably emerged simply as a consequence of having a system of prices: a cow being worth a particular number of chickens, a chicken being worth a particular number of beaver pelts, and so forth. Formalizing such relationships by settling on a single, widely-known unit of account - say, one chicken - would make it more convenient to transfer, combine, or split debts; or to settle them in alternative goods.

Contrary to popular belief, for communal ledgers, the unit of account probably did not have to be particularly desirable, durable, or easy to carry; it was simply an accounting tool. And indeed, we sometimes run into fairly unusual units of account even in modern times: for example, cigarettes can be the basis of a bustling prison economy even when most inmates don't smoke and there are not that many packs to go around.

2. The age of commodity money

In the end, the development of coinage might have had relatively little to do with communal trade - and far more with the desire to exchange goods with strangers. When dealing with an unfamiliar or hostile tribe, the concept of a chicken-denominated ledger does not hold up: the other side might be disinclined to honor its obligations - and get away with it, too. To settle such problematic trades, we needed a "spot" medium of exchange that would be easy to carry and authenticate, had a well-defined value, and a near-universal appeal. Throughout much of recorded history, precious metals - predominantly gold and silver - proved to fit the bill.

In the most basic sense, such commodities could be seen as a tool to reconcile debts across societal boundaries, without necessarily replacing any local units of account. An obligation, denominated in some local currency, would be created on the buyer's side in order to procure the metal for the trade. The proceeds of the completed transaction would in turn allow the seller to settle their own local obligations that arose from having to source the traded goods. In other words, our wondrous chicken-denominated ledgers could coexist peacefully with gold - and when commodity coinage finally took hold, it's likely that in everyday trade, precious metals served more as a useful abstraction than a precise store of value. A "silver chicken" of sorts.

Still, the emergence of commodity money had one interesting side effect: it decoupled the unit of debt - a "claim on the society", in a sense - from any moral judgment about its origin. A piece of silver would buy the same amount of food, whether earned through hard labor or won in a drunken bet. This disconnect remains a central theme in many of the debates about social justice and unfairly earned wealth.

3. The State enters the game

If there is one advantage of chicken ledgers over precious metals, it's that all chickens look and cluck roughly the same - something that can't be said of every nugget of silver or gold. To cope with this problem, we needed to fashion raw commodities into pieces of predictable weight and dimensions; a trusted party could then stamp them with a mark to indicate the value and the quality of the coin.

At first, the task of standardizing coinage rested with private parties - but the responsibility was soon assumed by the State. The advantages of this transition seemed clear: a single, widely-accepted and easily-recognizable currency could now be used to settle virtually all private and official debts.

Alas, in what deserves the dubious distinction of being one of the earliest examples of monetary tomfoolery, some States succumbed to the temptation of fiddling with the coinage to accomplish anything from feeding the poor to waging wars. In particular, it was common to stamp coins with the same face value but a progressively lower content of silver and gold. Perhaps surprisingly, the strategy worked remarkably well; at least in times of peace, most people cared about the value stamped on the coin, not its precise composition or weight.

And so, over time, representative money was born: sooner or later, most States opted to mint coins from nearly-worthless metals, or to print banknotes on paper and cloth. This radically new currency was accompanied by a simple pledge: the State offered to redeem it at any time for its nominal value in gold.

Of course, the promise was largely illusory: the State did not have enough gold to honor all the promises it had made. Still, as long as people had faith in their rulers and the redemption requests stayed low, the fundamental mechanics of this new representative currency remained roughly the same as before - and in some ways, were an improvement in that they lessened the insatiable demand for a rare commodity. Just as importantly, the new money still enabled international trade - using the underlying gold exchange rate as a reference point.

4. Fractional reserve banking and fiat money

For much of recorded history, banking was an exceptionally dull affair, not much different from running a communal chicken ledger of old. But then, something truly marvelous happened in the 17th century: around that time, many European countries witnessed the emergence of fractional-reserve banks.

These private ventures operated according to a simple scheme: they accepted people's coin for safekeeping, promising to pay a premium on every deposit made. To meet these obligations and to make a profit, the banks then used the pooled deposits to make high-interest loans to other folks. The financiers figured out that under normal circumstances and when operating at a sufficient scale, they needed only a very modest reserve - well under 10% of all deposited money - to be able to service the usual volume and size of withdrawals requested by their customers. The rest could be loaned out.

The very curious consequence of fractional-reserve banking was that it pulled new money out of thin air. The funds were simultaneously accounted for in the statements shown to the depositor, evidently available for withdrawal or transfer at any time; and given to third-party borrowers, who could spend them on just about anything. Heck, the borrowers could deposit the proceeds in another bank, creating even more money along the way! Whatever they did, the sum of all funds in the monetary system now appeared much higher than the value of all coins and banknotes issued by the government - let alone the amount of gold sitting in any vault.
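
To put rough numbers on this, here is a stylized sketch of the multiplier arithmetic - my own illustration, assuming the 10% reserve ratio mentioned above and every loan eventually flowing back into some bank as a fresh deposit:

    # Stylized money-multiplier arithmetic: a $100 coin deposit, a 10% reserve
    # ratio, and every loan re-deposited at some other bank down the line.
    reserve_ratio = 0.10
    deposit = 100.0
    broad_money = 0.0
    while deposit > 0.01:
        broad_money += deposit          # the depositor still sees this balance
        deposit *= 1 - reserve_ratio    # the rest is loaned out and re-deposited
    print(f"~${broad_money:,.0f} of broad money from $100 of coin")  # prints ~$1,000

In the limit, $100 of coin becomes $100 / 0.10 = $1,000 of money on the books - with the extra $900 existing only as ledger entries.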

Of course, no new money was being created in any physical sense: all the banks were doing was engaging in a bit of creative accounting - the sort that would probably land you in jail if you attempted it today in any other comparably vital field of enterprise. If too many depositors were to ask for their money back, or if too many loans were to go bad, the banking system would fold. Fortunes would evaporate in a puff of accounting smoke, and with the disappearance of vast quantities of quasi-fictitious ("broad") money, the wealth of the entire nation would shrink.

In the early 20th century, the world kept witnessing just that: a series of bank runs and economic contractions forced governments around the globe to act. At that stage, outlawing fractional-reserve banking was no longer politically or economically tenable; a simpler alternative was to let go of gold and move to fiat money - a currency implemented as an abstract social construct, with no predefined connection to the physical realm. A new breed of economists saw the role of the government not in trying to peg the value of money to an inflexible commodity, but in manipulating its supply to smooth out economic hiccups or to stimulate growth.

(Contrary to popular belief, such manipulation is usually not done by printing new banknotes; more sophisticated methods, such as lowering reserve requirements for bank deposits or enticing banks to invest their deposits in government-issued securities, are the preferred route.)

The obvious peril of fiat money is that in the long haul, its value is determined strictly by people's willingness to accept a piece of paper in exchange for their trouble; that willingness, in turn, is conditioned solely on their belief that the same piece of paper would buy them something nice a week, a month, or a year from now. It follows that a simple crisis of confidence could make a currency nearly worthless overnight. A prolonged period of hyperinflation and subsequent austerity in Germany and Austria was one of the precipitating factors that led to World War II. In more recent times, dramatic episodes of hyperinflation plagued the fiat currencies of Israel (1984), Mexico (1988), Poland (1990), Yugoslavia (1994), Bulgaria (1996), Turkey (2002), Zimbabwe (2009), Venezuela (2016), and several other nations around the globe.

For the United States, the switch to fiat money came relatively late, in 1971. To stop the dollar from plunging like a rock, the Nixon administration employed a clever trick: it ordered a freeze on wages and prices for the 90 days that immediately followed the move. People went about their lives and paid the usual for eggs or milk - and by the time the freeze ended, they were accustomed to the idea that the "new", free-floating dollar was worth about the same as the old, gold-backed one. A robust economy and favorable geopolitics did the rest, and so far, the American adventure with fiat currency has been rather uneventful - except perhaps for the fact that the price of gold itself skyrocketed from $35 per troy ounce in 1971 to $850 in 1980 (or, from $210 to $2,500 in today's dollars).

Well, one thing did change: now better positioned to freely tamper with the supply of money, the regulators, in concert with the bankers, adopted a policy of creating it at a rate that slightly outstripped the organic growth in economic activity. They did this to induce a small, steady degree of inflation, believing that it would discourage people from hoarding cash and force them to reinvest it for the betterment of society. Some critics like to point out that such a policy also functions as a "backdoor" tax on savings, one that happens to align with the regulators' less noble interests. Either way: in the US and most other developed nations, the purchasing power of any money kept under a mattress will drop at a rate of somewhere between 2 and 10% a year.
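
Compounded over a decade, that erosion is easy to underestimate; simple arithmetic for the two endpoints of the range gives:

    # Purchasing power of $100 kept under a mattress, per the 2-10% range above.
    for rate in (0.02, 0.10):
        value = 100.0
        for _ in range(10):
            value /= 1 + rate
        print(f"{rate:.0%} inflation: worth ${value:.2f} after 10 years")

At 2% a year, the $100 buys about $82 worth of goods after a decade; at 10%, barely $39.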

5. So what's up with Bitcoin?

Well... countless tomes have been written about the nature and the optimal characteristics of government-issued fiat currencies. Some heterodox economists, notably including Murray Rothbard, have also explored the topic of privately-issued, decentralized, commodity-backed currencies. But Bitcoin is a wholly different animal.

In essence, BTC is a global, decentralized fiat currency: it has no (recoverable) intrinsic value, no central authority to issue it or define its exchange rate, and no anchoring to any historical reference point - a combination that until recently seemed nonsensical and escaped any serious scrutiny. It does the unthinkable by employing three clever tricks:

  1. It allows anyone to create new coins, but only by solving brute-force computational challenges that get more difficult as time goes by (see the sketch after this list),

  2. It prevents unauthorized transfer of coins by employing public-key cryptography to sign transactions, with only the authorized holder of a coin knowing the correct key,

  3. It prevents double-spending by using a distributed public ledger ("blockchain"), recording the chain of custody for coins in a tamper-proof way.
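
To make tricks #1 and #3 more concrete, here is a minimal sketch - my own illustration, not Bitcoin's actual block format, hash construction, or difficulty rules - of mining by brute force and of chaining blocks so that each one commits to its predecessor:

    import hashlib

    def mine(block: bytes, difficulty_bits: int) -> int:
        # Brute-force a nonce until sha256(block || nonce) has enough leading zero bits.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while int.from_bytes(hashlib.sha256(block + nonce.to_bytes(8, "big")).digest(), "big") >= target:
            nonce += 1
        return nonce

    prev_hash = b"\x00" * 32  # make-believe genesis block
    for payload in (b"alice pays bob 1", b"bob pays carol 2"):
        block = prev_hash + payload
        nonce = mine(block, difficulty_bits=16)  # every extra bit doubles the expected work
        prev_hash = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
        print(payload.decode(), "->", prev_hash.hex()[:16])

Tampering with any earlier block changes its hash and invalidates every block that follows - that is what makes the ledger tamper-proof. Trick #2 would add a public-key signature over every transfer; it is omitted here for brevity.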

The blockchain is often described as the most important feature of Bitcoin, but in some ways, its importance is overstated. The idea of a currency that does not rely on a centralized transaction clearinghouse is what helped propel the platform into the limelight - mostly because of its novelty and the perception that it is less vulnerable to government meddling (although the government is still free to track down, tax, fine, or arrest any participants). On the flip side, the everyday mechanics of BTC would not be fundamentally different if all the transactions had to go through Bitcoin Bank, LLC.

A more striking feature of the new currency is the incentive structure surrounding the creation of new coins. The underlying design democratized the creation of new coins early on: all you had to do was leave your computer running for a while to acquire a number of tokens. The tokens had no practical value at the time, but obtaining them involved no substantial expense or risk. Just as importantly, because the difficulty of the puzzles would only increase over time, the hope was that if Bitcoin caught on, latecomers would find it easier to purchase BTC on a secondary market than to mine their own - paying with a more established currency at a mutually beneficial exchange rate.

The persistent publicity surrounding Bitcoin and other cryptocurrencies did the rest - and today, with the growing scarcity of coins and the rapidly increasing demand, the price of a single token hovers somewhere north of $15,000.

6. So... is it bad money?

Predicting is hard - especially the future. In some sense, a coin that represents a cryptographic proof of wasted CPU cycles is no better or worse than a currency that relies on cotton decorated with pictures of dead presidents. It is true that Bitcoin suffers from many implementation problems - long transaction processing times, high fees, frequent security breaches of major exchanges - but in principle, such problems can be overcome.

That said, currencies live and die by the lasting willingness of others to accept them in exchange for services or goods - and in that sense, the jury is still out. The use of Bitcoin to settle bona fide purchases is negligible, both in absolute terms and as a function of the overall volume of transactions. In fact, because of the technical challenges and limited practical utility, some companies that embraced the currency early on are now backing out.

When the value of an asset is derived almost entirely from its appeal as an ever-appreciating investment vehicle, the situation has all the telltale signs of a speculative bubble. But that does not prove that the asset is destined to collapse, or that a collapse would be its end. Still, the built-in deflationary mechanism of Bitcoin - the increasing difficulty of producing new coins - is probably both a blessing and a curse.

It's going to go one way or the other; and when it's all said and done, we're going to celebrate the people who made the right guess. Because the future is actually pretty darn easy to predict - in retrospect.

December 10, 2017

Weekend distractions, part deux: a bench, and stuff

Continuing the tradition of the previous post, here's a perfectly good bench:

The legs are 8/4 hard maple, cut into 2.3" (6 cm) strips and then glued together. The top is 4/4 domestic walnut, with an additional strip glued to the bottom to make it look thicker (because gosh darn, walnut is expensive).

Cut on a bandsaw, joined with a biscuit joiner and glue, then sanded - that's about it. I'm still applying the finish (nitrocellulose lacquer from a rattle can), but this was the last moment I could snap a photo (it was about to get dark), and it basically looks like the final product anyway. Pretty simple, but it turned out nice.

Several additional, smaller woodworking projects here.

November 04, 2017

Weekend distractions: a perfectly good dining table

I've been a DIYer all my adult life. Some of my non-software projects still revolve around computers, especially when they deal with CNC machining or electronics. But I've also been dabbling in woodworking for quite a while. I have not put much effort into documenting my projects (say, cutting boards) - but I figured it's time to change that. It may inspire some folks to give a new hobby a try - or help them overcome a problem or two.

So, without further ado, here's the build log for a dining table I put together over the past two weekends or so. I think it turned out pretty nice:

Have fun!

May 04, 2017

RFD: the alien abduction prophecy protocol

"It's tough to make predictions, especially about the future."
- variously attributed to Yogi Berra and Niels Bohr

Right. So let's say you are visited by transdimensional space aliens from outer space. There's some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what's to come.

But there's a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you're lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well... if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in the possession of this very file back in 2017; otherwise, you would not have known its hash. Voila - a commitment scheme!
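
In Python terms, the whole commitment fits in a couple of lines (a sketch; "prophecy.txt" stands in for your written account of the events):

    import hashlib

    # 2017: publish this value far and wide; reveal nothing else.
    with open("prophecy.txt", "rb") as f:
        commitment = hashlib.sha256(f.read()).hexdigest()
    print(commitment)

    # 2032: release prophecy.txt itself; anyone can recompute the digest
    # and check it against the value you posted fifteen years earlier.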

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover somewhere around 15 years - so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hash functions, or increase the computational cost of the hash by running it in a loop. There are also less-known designs, such as the hash-based signature scheme SPHINCS, that are built with different trade-offs in mind and may offer longer-term security guarantees.
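
A sketch of both kludges combined - with SHA-256 and SHA3-256 picked purely for illustration, and an arbitrary loop count:

    import hashlib

    def hardened_digest(data: bytes, rounds: int = 1_000_000) -> str:
        # Two unrelated hash families: a break in one shouldn't sink the proof.
        digest = hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
        # Iterating raises the cost of exploiting any future partial shortcuts.
        for _ in range(rounds):
            digest = hashlib.sha256(digest).digest()
        return digest.hex()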

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization - whether of the pen-and-paper or the PKI-based variety - offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make - especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 - thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time "price tag" to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the "prediction spam" attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight-of-hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

April 22, 2017

AFL experiments, or please eat your brötli

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say, having the fuzzer spontaneously synthesize JPEG files, come up with non-trivial XML syntax, or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs for identical input data; or whenever a single library produces different outputs when asked to encode or decode the same data several times in a row.
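
A sketch of the second trick - with zlib standing in for the library under test, since a real AFL target would normally be a small instrumented C program:

    import os
    import sys
    import zlib

    def harness(data: bytes) -> None:
        # Stability check: compressing the same input twice must be deterministic.
        first = zlib.compress(data, 9)
        second = zlib.compress(data, 9)
        if first != second:
            os.abort()  # crash loudly so the fuzzer flags this input
        # Round-trip check: decompressing our own output must recover the input.
        if zlib.decompress(first) != data:
            os.abort()

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:
            harness(f.read())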

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a bunch of fairly rudimentary flaws in common bignum libraries, with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in IJG jpeg and other widely-used image processing libraries, leaking data across web origins.

In one of my recent experiments, I decided to fuzz brotli, an innovative compression library used in Chrome. But since it's already been fuzzed for many CPU-years, I wanted to do it with a twist: stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful target for security research, because decompression code is written with well-formed inputs in mind - and tends to fall apart when fed the malformed data an attacker controls - whereas compression code is meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.

In this case, the library held up admirably - save for a handful of computationally intensive plaintext inputs (that are now easy to spot due to the recent improvements to AFL). But the output corpus synthesized by AFL, after being seeded with a single file containing just "0", featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML: <META HTTP-AAA IDEAAAA, DATA="IIA DATA="IIA DATA="IIADATA="IIA, </TD>.

  • Non-trivial numerical constants: 1000,1000,0000000e+000000, 0,000 0,000 0,0000 0x600, 0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences: them with them m with them with themselves, in the fix the in the pin th in the tin, amassize the the in the in the inhe@massive in, he the themes where there the where there, size at size at the tie.

  • Bogus but semi-legible URLs: CcCdc.com/.com/m/ /00.com/.com/m/ /00(0(000000CcCdc.com/.com/.com

  • Snippets of Lisp code: )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))).

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords - and then put them together in various semi-interesting ways. Not bad.

April 15, 2017

Shadow Brokers, or the hottest security product to buy in 2018

For the past three years and change, the security industry has been mesmerized by a steady trickle of leaks that expose some of the offensive tooling belonging to the Western world's foremost intelligence agencies. To some folks, the leaks are a devastating blow to national security; to others, they are a chilling peek at the inner workings of an intrusive security apparatus that could be used to attack political enemies within.

I find it difficult to get outraged at revelations such as the compromise of some of the banking exchanges in the Middle East, presumably to track the sources of funding for some of our sworn enemies; at the same time, I'm none too pleased about the reports of the agencies tapping overseas fiber cables of US companies, or indiscriminately hacking university e-mail servers in Europe to provide cover for subsequent C&C ops. Still, many words have been written on the topic, so it is not a debate I am hoping to settle here; my only thought is that if we see espionage as a legitimate task for a nation state, then the revelations seem like a natural extension of what we know about this trade from pre-Internet days. Conversely, if we think that spying is evil, we probably ought to rethink geopolitics in a more fundamental way; until then, there's no use complaining that the NSA is keeping a bunch of 0-days at hand.

But in a more pragmatic sense, there is one consequence of the leaks that I worry about: the inevitable shifts in IT policies and the next crop of commercial tools and services meant to counter this supposedly new threat. I fear this outcome because I think that the core exploitation capabilities of the agencies - at least to the extent exposed by the leaks - are not vastly different from those of a talented teenager: somewhat disappointingly, the intelligence community accomplishes their goals chiefly by relying on public data sources, the attacks on unpatched or poorly configured systems, and the fallibility of human beings. In fact, some of the exploits exposed in the leaks were probably not developed in-house, but purchased through intermediaries from talented hobbyists - a black market that has been thriving over the past decade or so.

Of course, the NSA is a unique "adversary" in many other ways, but there is no alien technology to reckon with; and by constantly re-framing the conversation around IT security as a response to some new enemy, we tend to forget that the underlying problems that enable such hacking have been with us since the 1990s, that they are not unique to this actor, and that they have not been truly solved by any of the previous tooling and IT spending shifts.

I think that it is useful to compare computer spies to another, far better understood actor: the law enforcement community. In particular:

  1. Both the intelligence agencies and law enforcement are very patient and systematic in their pursuits. If they want to get to you but can't do so directly, they can always convince, coerce, or compromise your friends, your sysadmins - or heck, just tamper with your supply chain.

  2. Both kinds of actors operate under the protection of the law - which means that they are taking relatively few risks in going after you, can refine their approaches over the years, and can be quite brazen in their plans. They prefer to hack you remotely, of course - but if they can't, they might just as well break into your home or office, or plant a mole within your org.

  3. Both have nearly unlimited resources. You probably can't outspend them and they can always source a wide range of tools to further their goals, operating more like a well-oiled machine than a merry band of hobbyists. But it is also easy to understand their goals, and for most people, the best survival strategy is not to invite their undivided attention in the first place.

Once you make yourself interesting enough to be in the crosshairs, the game changes in a pretty spectacular way, and the steps to take might have to come from the playbooks of rebels holed up in the mountains of Pakistan more than from a glossy folder of Cyberintellics Inc. There are no simple, low-cost solutions: you will find no click-and-play security product to help you, and there is no "one weird trick" to keep you safe; taping over your camera or putting your phone in the microwave won't save the day.

And ultimately, let's face it: if you're scrambling to lock down your Internet-exposed SMB servers in response to the most recent revelations from Shadow Brokers, you are probably in deep trouble - and it's not because of the NSA.

February 03, 2017

In defense of the doomsday-minded super-rich

Several days ago, The New Yorker ran a lengthy story titled "Doomsday prep for the super-rich". The article revealed that some of Silicon Valley's most successful entrepreneurs - including execs from Yahoo, Facebook, and Reddit - are planning ahead for extreme emergencies, "Doomsday Preppers" style.

The article made quite a few waves in the tech world - and invited near-universal ridicule and scorn. People sneered at the comical excess of blast-proof bunkers and bug-out helicopters, as if a cataclysm that kills off most of us would somehow spare the nouveaux riches. Hardcore survivalists gleefully proclaimed that the highly-paid armed guards would turn on their employers to save their own families. Conservatives rolled their eyes at the story of a VC who is preparing for the collapse of the civilized world but finds guns a bit too icky for his taste. Progressives were repulsed by the immorality of the world's wealthiest people wanting to hide when the angry underclass finally takes to the streets. In short, no matter where you stood, the story had something for you to hate.

My first instinct was to join the fray; if nothing else, it's cathartic to have fun at the expense of people who are far wealthier and far more powerful than we could ever be. Sure, I have written about the merits of common-sense emergency preparedness, but to me, it meant having a rainy day fund and a fire extinguisher, not holing up in a decommissioned ICBM silo with 10,000 rounds of ammo and a pallet of canned cheese. Now hold my beer and let me throw the first stone!

But then, I realized that the article in The New Yorker is a human interest piece; it is meant to entertain us and has no other reason to exist. The author is trying to show us a dazzling world that is out of the ordinary and out of reach. It may be that the profiled execs spend most of their time planning ahead for far more pedestrian risks, but no sane newspaper would publish a multi-page exposé about the brand of fire extinguishers or tarps favored by the ultra-rich. The readers want to read about helicopters and ICBM silos instead - and so the author obliges.

It is also a fallacy to look at the cost of purchases outlined in the article in absolute terms. For us, spending $5M on a luxury compound and a helicopter may seem insane - but for a person with hundreds of millions in the bank, such an investment would be just 1% of their wealth - a reasonable price to pay for insurance against unlikely but somewhat plausible risks. In terms of relative financial impact, it is no different than a person with $10k in the bank spending $100 on a fire extinguisher and some energy bars - hardly a controversial thing to do.

What's more, although we're living in a period of unprecedented prosperity and calm, there's no denying that in the history of mankind, revolutions happen with near-clockwork regularity. We had quite a few in the past one hundred years alone - and when the time comes, it's usually the heads of the variously-defined aristocracy that roll. Angry mobs are unlikely to torch down Joe Prepper's cookie-cutter suburban neighborhood, but being near the top of the social ladder carries some distinct risk. We can have a debate about the complicity of the elites, or the comparative significance of this risk versus the plight of other social classes - but either way, the paranoia of the rich may be more grounded in reality than it seems.

Of course, an argument can be made that preparing for the collapse of society is immoral when the money could be better spent on trying to bridge the income gap or otherwise make the world a more harmonious place. It is an interesting claim, but it rings a bit hollow to me. We would not deny the rich the right to buy a fire extinguisher or a bug-out bicycle; our outrage is rather conveniently reserved for the purchases we can't afford. But more importantly, prepping and philanthropy are not mutually exclusive; in fact, I suspect that some of the folks mentioned in the article spend far more on trying to help the less fortunate than they do on canned cheese. Whether this can make a difference, and whether they should be doing more, is a different story.

February 01, 2017

...or, how I learned not to be a jerk in 20 short years

People who are accomplished in one field of expertise tend to believe that they can bring unique insights to just about any other debate. I am as guilty as anyone: at one time or another, I aired my thoughts on anything from CNC manufacturing, to electronics, to emergency preparedness, to politics. Today, I'm about to commit the same sin - but instead of pretending to speak from a position of authority, I wanted to share a more personal tale.


The author, circa 1995. The era of hand-crank computers and punch cards.

Back in my school days, I was that one really tall and skinny kid in the class. I wasn't trying to stay this way; I preferred computer games to sports, and my grandma's Polish cooking was heavy on potatoes, butter, chicken, dumplings, cream, and cheese. But that did not matter: I could eat what I wanted, as often as I wanted, and I still stayed in shape. This made me look down on chubby kids; if my reckless ways had little or no effect on my body, it followed that they had to be exceptionally lazy and must have lacked even the most basic form of self-control.

As I entered adulthood, my habits remained the same. I felt healthy and stayed reasonably active, walking to and from work every other day and hiking with friends whenever I could. But my looks started to change:


The author at a really exciting BlackHat party in 2002.

I figured it's just a part of growing up. But somewhere around my twentieth birthday, I stepped on a bathroom scale and typed the result into an online calculator. I was surprised to find out that my BMI was about 24 - pretty darn close to overweight.

"Pssh, you know how inaccurate these things are!", I exclaimed while searching online to debunk that whole BMI thing. I mean, sure, I had some belly fat - maybe a pizza or two too far - but nothing that wouldn't go away in time. Besides, I was doing fine, so what would be the point of submitting to the society's idea of the "right" weight?

It certainly helped that I was having a blast at work. I made a name for myself in the industry, published a fair amount of cool research, authored a book, settled down, bought a house, had a kid. It wasn't until the age of 26 that I strayed into a doctor's office for a routine checkup. When the nurse asked me about my weight, I blurted out "oh, 175 pounds, give or take". She gave me a funny look and asked me to step on the scale.

Turns out it was quite a bit more than 175 pounds. With a BMI of 27.1, I was now firmly in "overweight" territory. Yeah yeah, the BMI metric was a complete hoax - but why did my passport photos look less flattering than before?


A random mugshot from 2007. Some people are just born big-boned, I think.

Well, damn. I knew what had to happen: from now on, I was going to start eating healthier foods. I traded Cheetos for nuts, KFC for sushi rolls, greasy burgers for tortilla wraps, milk smoothies for Jamba Juice, fries for bruschettas, regular sodas for diet. I'd even throw in a side of lettuce every now and then. It was bound to make a difference. I just wasn't gonna be one of the losers who check their weight every day and agonize over every calorie on their plate. (Weren't calories a scam, anyway? I think I read that on that cool BMI conspiracy site.)

By the time I turned 32, my body mass index hit 29. At that point, it wasn't just a matter of looking chubby. I could do the math: at that rate, I'd be in a real pickle in a decade or two - complete with a ~50% chance of developing diabetes or cardiovascular disease. This wouldn't just make me miserable, but also mess up the lives of my spouse and kids.


Presenting at Google TGIF in 2013. It must've been the unflattering light.

I wanted to get this over with right away, so I decided to push myself hard. I started biking to work, quite a strenuous ride. It felt good, but did not help: I would simply eat more to compensate and ended up gaining a few extra pounds. I tried starving myself. That worked, sure - only to be followed by an even faster rebound. Ultimately, I had to face the reality: I had a problem and I needed a long-term solution. There was no one weird trick to outsmart the calorie-counting crowd, no overnight cure.

I started looking for real answers. My world came crumbling down; I realized that a "healthy" burrito from Chipotle packed four times as many calories as a greasy burger from McDonald's. That a loaded fruit smoothie from Jamba Juice was roughly equal to two hot dogs with a side of mashed potatoes to boot. That a glass of apple juice fared worse than a can of Sprite, and that bruschetta wasn't far from deep-fried butter on a stick. It didn't matter if it was sugar or fat, bacon or kale. Familiar favorites were not better or worse than the rest. Losing weight boiled down to portion control - and sticking to it for the rest of my life.

It was a slow and humbling journey that spanned almost a year. I ended up losing around 70 lbs along the way. What shocked me is that it wasn't a painful experience; what held me back for years was just my own smugness, plus the folksy wisdom gleaned from the covers of glossy magazines.


Author with a tractor, 2017.

I'm not sure there is a moral to this story. I guess one lesson is: don't be a judgmental jerk. Sometimes, the simple things - the ones you think you have all figured out - prove to be a lot more complicated than they seem.

November 11, 2016

On Trump

I dislike commenting on politics. I think it's difficult to contribute any novel thought - and in today's hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.

I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of "filter bubbles" - a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogynism, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump's foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.

Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump's campaign. Nor do I desire to downplay the scale of discrimination and hatred that the societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump's unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the "like" button and retweet the news.

But here's the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump - and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to - say - the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones a few years back, but it's hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what's morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who "cling to guns or religion or antipathy to people who aren't like them", we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying "we are going to put a lot of coal miners out of business" to a cheering crowd, the scene does not strike us as thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?

Of course, liberals take pride in caring for the common folk, but I suspect that their leaders' attempts to reach out to the underprivileged workers in the "flyover states" often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong - but that it'd be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free - all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.

The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people's minds - but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent "post-truth" politics - but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern with the striking tilt to the left in the academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as "trumpkins" in print - but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of a mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee - but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies - but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.

Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so - and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech - and leaving the "flyover underclass" no option other than to revolt. And it wasn't just a revolt of the awful fringes. On the right, we had Mr. Trump - a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders - an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe - supposedly governed by the workers, and not by the exploitative elites.

I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously - but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire - a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn't so sure about LGBT rights. I think it all happened not because of Mr. Trump's character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to "drain the swamp" - and to promise hope, not handouts, to the lower middle class.

There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump's bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of - as cheesy as it sounds - one of the great nations of the world.

August 26, 2016

So you want to work in security (but are too lazy to read Parisa's excellent essay)

If you have not seen it yet, Parisa Tabriz penned a lengthy and insightful post about her experiences and what it takes to succeed in the field of information security.

My own experiences align pretty closely with Parisa's take, so if you are making your first steps down this path, I strongly urge you to give her post a good read. But if I had to sum up my lessons from close to two decades in the industry, I would probably boil them down to four simple rules:

  1. Infosec is all about the mismatch between our intuition and the actual behavior of the systems we build. That makes it harmful to study the field as an abstract, isolated domain. To truly master it, dive into how computers work, then make a habit of asking yourself "okay, but what if assumption X does not hold true?" every step along the way.

  2. Security is a protoscience. Think of chemistry in the early 19th century: a glorious and messy thing, chock-full of colorful personalities, unsolved mysteries, and snake oil salesmen. You need passion and humility to survive. Those who think they have all the answers are a danger to themselves and to people who put their faith in them.

  3. People will trust you with their livelihoods, but will have no way to truly measure the quality of your work. Don't let them down: be painfully honest with yourself and work every single day to address your weaknesses. If you are not embarrassed by the views you held two years ago, you are getting complacent - and complacency kills.

  4. It will feel that way, but you are not smarter than software engineers. Walk in their shoes for a while: write your own code, show it to the world, and be humiliated by all the horrible mistakes you will inevitably make. It will make you better at your job - and will turn you into a better person, too.

August 04, 2016

CSS mix-blend-mode is bad for your browsing history

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser's history.

After some deliberation, browser vendors closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs were also redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers - my own 2013 entry can be found here - but they generally depended on the ability to solicit one click or one keypress for every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn't supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2^n decoder circuit using just the background-color and opacity properties on overlaid <a href=...> elements - making it possible to probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

  • Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),
  • Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),
  • Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),
  • Region #4 is lit only when both links are visited (link_a ∧ link_b).

While the page couldn't directly query the visibility of the regions, it just had to convince the user to click the one lit region to learn the browsing history for both links - for example, under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)
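
The "decoder" framing is apt: the n visited-or-not bits select exactly one of 2^n screen regions, so a single click reads out all n bits at once. A sketch of the mapping for the four regions above:

    from itertools import product

    def lit_region(link_a: bool, link_b: bool) -> int:
        # link_a is the high bit, link_b the low bit - regions #1 through #4 above.
        return (int(link_a) << 1 | int(link_b)) + 1

    for a, b in product((False, True), repeat=2):
        print(f"link_a={a!s:<5} link_b={b!s:<5} -> region #{lit_region(a, b)}")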

This problem was eventually addressed by browser vendors simply by improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

"There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today."

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode - a feature that allows advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP - is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra (a quick simulation follows the table below). For example, to implement AND, all we need to do is use multiply:

  • black (0) x black (0) = black (0)
  • black (0) x white (1) = black (0)
  • white (1) x black (0) = black (0)
  • white (1) x white (1) = white (1)
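
To see why such operators are sufficient for arbitrary logic, consider a quick Python model of the blending math - my own sketch, treating black as 0 and white as 1. Multiply behaves as AND, lighten (a pointwise maximum) as OR, and difference against white as NOT; together, these form a functionally complete set:

  # Blend operators acting on normalized 0/1 pixel values:
  def multiply(a, b): return a * b           # behaves as AND
  def lighten(a, b): return max(a, b)        # behaves as OR
  def difference(a, b): return abs(a - b)    # against white (1): NOT a

  def negate(a): return difference(a, 1)

  # AND, OR, and NOT suffice to build any other gate - for instance, XOR:
  def xor(a, b):
      return lighten(multiply(a, negate(b)), multiply(negate(a), b))

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, '->', xor(a, b))       # prints the familiar XOR table

In the actual attack, every "input" is simply the color of a link styled through the :visited selector, and the gates are realized by stacking elements with the appropriate mix-blend-mode values.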

For a practical demo, click here. A single click in that whack-a-mole game will reveal the visited state of nine links to the JavaScript executing on the page. If this were an actual game and it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

May 11, 2016

Clearing up some misconceptions around the "ImageTragick" bug

The recent, highly publicized "ImageTragick" vulnerability had countless web developers scrambling to fix a remote code execution vector in ImageMagick - a popular bitmap manipulation tool commonly used to resize, transcode, or annotate user-supplied images on the Web. Whatever your take on "branded" vulnerabilities may be, the flaw certainly is notable for its ease of exploitation: it is an embarrassingly simple shell command injection bug reminiscent of the security weaknesses prevalent in the 1990s, and nearly extinct in core tools today. The issue also bears some parallels to the more far-reaching but equally striking Shellshock bug.

That said, I believe that the publicity that surrounded the flaw was squandered by failing to make one very important point: even with this particular RCE vector fixed, anyone using ImageMagick to process attacker-controlled images is likely putting themselves at serious risk.

The problem is fairly simple: for all its virtues, ImageMagick does not appear to be designed with malicious inputs in mind - and has a long and colorful history of lesser-known but equally serious security flaws. For a single data point, look no further than the work done several months ago by Jodie Cunningham. Jodie fuzzed IM with a vanilla setup of afl-fuzz - and quickly identified about two dozen possibly exploitable security holes, along with countless denial of service flaws. A small sample of Jodie's findings can be found here.

Jodie's efforts probably just scratched the surface; after "ImageTragick", a more recent effort by Hanno Boeck uncovered even more bugs; from what I understand, Hanno's work also went only as far as using off-the-shelf fuzzing tools. You can bet that, short of a major push to redesign the entire IM codebase, the trickle won't stop any time soon.

And so, the advice sorely missing from the "ImageTragick" webpage is this:

  • If all you need to do is simple transcoding or thumbnailing of potentially untrusted images, don't use ImageMagick. Make direct use of libpng, libjpeg-turbo, and giflib; for a robust way to use these libraries, have a look at the source code of Chromium or Firefox. The resulting implementation will be considerably faster, too.

  • If you have to use ImageMagick on untrusted inputs, consider sandboxing the code with seccomp-bpf or an equivalent mechanism that robustly restricts access to all user space artifacts and to the kernel attack surface. Rudimentary sandboxing technologies, such as chroot() or UID separation, are probably not enough.

  • If all other options fail, be zealous about limiting the set of image formats you actually pass down to IM. The bare minimum is to thoroughly examine the headers of the received files; a short header-checking sketch follows this list. It is also helpful to explicitly specify the input format when calling the utility, so as to preempt the auto-detection code. For command-line invocations, this can be done like so:

    convert [...other params...] -- jpg:input-file.jpg jpg:output-file.jpg

    The JPEG, PNG, and GIF handling code in ImageMagick is considerably more robust than the code that supports PCX, TGA, SVG, PSD, and the like.
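
For illustration, here is a minimal Python wrapper along these lines - a sketch of my own, with a hypothetical safe_thumbnail() helper and an arbitrary whitelist of three formats - that checks the magic bytes before handing the file to convert with an explicit format prefix:

  import subprocess

  # Magic-byte signatures for the only formats we are willing to accept.
  SIGNATURES = {
      b'\xff\xd8\xff': 'jpg',          # JPEG
      b'\x89PNG\r\n\x1a\n': 'png',     # PNG
      b'GIF87a': 'gif',                # GIF (two common variants)
      b'GIF89a': 'gif',
  }

  def detect_format(path):
      """Return the whitelisted format of a file, or None if it fails."""
      with open(path, 'rb') as f:
          header = f.read(8)
      for magic, fmt in SIGNATURES.items():
          if header.startswith(magic):
              return fmt
      return None

  def safe_thumbnail(src, dst):
      fmt = detect_format(src)
      if fmt is None:
          raise ValueError('unrecognized or disallowed image format')
      # Prefix the explicit format to preempt ImageMagick's auto-detection.
      subprocess.run(['convert', '-resize', '200x200', '--',
                      '%s:%s' % (fmt, src), '%s:%s' % (fmt, dst)],
                     check=True)

Needless to say, this is not a substitute for sandboxing - merely a way to keep the most exotic parsers out of reach.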

February 09, 2016

Automatically inferring file syntax with afl-analyze

The nice thing about the control flow instrumentation used by American Fuzzy Lop is that it allows you to do much more than just, well, fuzz stuff. For example, the suite has long shipped with a standalone tool called afl-tmin, capable of automatically shrinking test cases while making sure that they still exercise the same functionality in the targeted binary (or that they trigger the same crash). Another tool, afl-cmin, uses a related trick to eliminate redundant files in large testing corpora.

The latest release of AFL features another nifty new addition along these lines: afl-analyze. The tool takes an input file, sequentially flips bytes in the data stream, and observes the behavior of the targeted binary after every flip. From this information, it can infer several things (a toy re-implementation follows the list below):

  • No-op blocks that do not elicit any changes to control flow (say, comments, pixel data, etc).
  • Checksums, magic values, and other short, atomically compared tokens where any bit flip causes the same change to program execution.
  • Longer blobs exhibiting the same property - almost certainly corresponding to checksummed or encrypted data.
  • "Pure" data sections, where analyzer-injected changes consistently elicit differing changes to control flow.

This gives us quick and remarkable insight into the syntax of the file and the behavior of the underlying parser. It may sound too good to be true, but it actually seems to work in practice. For a quick demo, let's see what afl-analyze has to say about running cut -d ' ' -f1 on a text file:

We see that cut really only cares about spaces and newlines. Interestingly, it also appears that the tool always tokenizes the entire line, even if it's just asked to return the first token. Neat, right?

Of course, the value of afl-analyze is greater for incomprehensible binary formats than for simple text utilities; perhaps even more so when dealing with black-box parsers (which can be analyzed thanks to the runtime QEMU instrumentation supported in AFL). To try out the tool's ability to deal with binaries, let's check out libpng:

This looks pretty damn good: we have two four-byte signatures, followed by chunk length, a four-byte chunk name, some image metadata, and then a comment section. All in a matter of seconds: no configuration needed and no knobs to turn.

Of course, the tool shipped just moments ago and is still very much experimental; expect some kinks. Field testing and feedback welcome!

January 14, 2016

Show and tell: doomsday planning for less crazy folk

Yup. I've been quiet lately, but that's in part because I've been working on this piece:

http://lcamtuf.coredump.cx/prep/

It's a fairly systematic and level-headed approach to threat modeling and risk management - except not for computer systems, but for real life. There's not much I can add on top of what's already said on the linked page; have a look - you will probably find it an interesting read.

October 02, 2015

Subjective explainer: gun debate in the US

In the wake of the tragic events in Roseburg, I decided to briefly return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular belief, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime - say, robbery, assault, or rape - the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic - homicide - still registers at about three times the rate seen in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on American society. At the same time, the effects of high-profile violence seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns - but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason - angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians, blaming the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and classical liberalism that remains the national ethos of the United States - and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties - be it the freedom of speech or the natural right to self-defense - can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, the same notion certainly cannot be taken for granted today.

When it comes to firearm ownership in particular, the country is facing a fundamental choice between two possible realities:

  • A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, adopted by many European countries, firearms tend to be less available to common criminals - simply by virtue of limited supply and comparatively high prices in black-market trade. At the same time, it can be argued that any nation subscribing to this doctrine becomes more vulnerable to foreign invasion or domestic terror, should its government ever fail to provide adequate protection to all citizens. Disarmament can also limit civilian recourse against illegitimate, totalitarian governments - a seemingly outlandish concern, but also a very fresh memory for many European countries subjugated not long ago under the auspices of the Soviet Bloc.

  • A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. This is the model currently employed in the United States, where it arises from the straightforward, originalist interpretation of the Second Amendment - as recognized by roughly 75% of all Americans and affirmed by the Supreme Court. When following such a doctrine, a country will likely witness greater resiliency in the face of calamities or totalitarian regimes. At the same time, its citizens might have to accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would inevitably and critically depend on some form of confiscation; without such drastic steps, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but also to give consideration to the societal model they are likely to advance.

And herein lies the problem: many of the current "common-sense" gun control proposals have very little merit when considered in isolation. There is scant evidence that reinstating the ban on military-looking semi-automatic rifles ("assault weapons"), or rolling out the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, bans on open carry, or federal gun registration can have any impact on violent crime.

At the same time, by virtue of making weapons more difficult, expensive, and burdensome to own, many of the legislative proposals floated by progressives would probably gradually erode the US gun culture; intentionally or not, their long-term outcome would be a society less passionate about firearms and more willing to follow in the footsteps of Australia or the UK. Only once we crossed that line and confiscated hundreds of millions of guns would it be fathomable - yet still far from certain - that we would see a sharp drop in homicides.

This method of inquiry helps explain the visceral response from gun rights advocates: given the legislation's dubious benefits and its predicted long-term consequences, many pro-gun folks are genuinely worried that making concessions would eventually mean giving up one of their cherished civil liberties - and on some level, they are right.

Some feel that this argument is a fallacy, a tall tale invented by a sinister corporate "gun lobby" to derail the political debate for personal gain. But the evidence of such a conspiracy is hard to find; in fact, it seems that the progressives themselves often fan the flames. In the wake of Roseburg, both Barack Obama and Hillary Clinton came out praising the confiscation-based gun control regimes employed in Australia and the UK - and said that they would like the US to follow suit. Depending on where you stand on the issue, it was either an accidental display of political naivete, or the final reveal of their sinister plan. For the latter camp, the ultimate proof of a progressive agenda came a bit later: in response to the terrorist attack in San Bernardino, several eminent Democratic-leaning newspapers published scathing editorials demanding civilian disarmament while downplaying the attackers' connection to Islamic State.

Another factor that poisons the debate is that, despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw.

I'm reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a "series of tubes" as he was arguing against net neutrality. His analogy wasn't all that wrong - it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy - one of the key proponents of the ban on assault weapons - showed no understanding of the supposedly lethal firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns.

Yet another example comes from Representative Diana DeGette, the lead sponsor of a "common-sense" bill that sought to prohibit the manufacture of magazines with capacity over 15 rounds. She defended the merits of her legislation while clearly not understanding how a magazine differs from ammunition - or that the former can be reused:

"I will tell you these are ammunition, they’re bullets, so the people who have those know they’re going to shoot them, so if you ban them in the future, the number of these high capacity magazines is going to decrease dramatically over time because the bullets will have been shot and there won’t be any more available."

Treating gun ownership with almost comical condescension has become fashionable among a good number of progressive liberals. On a campaign stop in San Francisco, Mr. Obama sketched a caricature of bitter, rural voters who "cling to guns or religion or antipathy to people who aren't like them". Not much later, one Pulitzer Prize-winning columnist for The Washington Post spoke of the Second Amendment as "the refuge of bumpkins and yeehaws who like to think they are protecting their homes against imagined swarthy marauders desperate to steal their flea-bitten sofas from their rotting front porches". Many of the newspaper's readers probably had a good laugh - and then wondered why it has gotten so difficult to seek sensible compromise.

There are countless dubious and polarizing claims made by the supporters of gun rights, too; examples include a recent NRA-backed tirade by Dana Loesch denouncing the "godless left", or the constant onslaught of conspiracy theories spewed by Alex Jones and Glenn Beck. But when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not on other citizens. When folks such as Bloomberg prescribe sweeping changes to American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant - and deservedly so.

Given how controversial the topic is, I think it's wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it's also likely that such a debate wouldn't last very long. Progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association - but as I discussed in my earlier blog posts, the organization's financial resources and power are often overstated: it does not even make it onto the list of top 100 lobbyists in Washington, and its support comes mostly from member dues, not from shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years - even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture - and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold. For Democrats, it would be tantamount to singling out minorities. For Republicans, it would be just another expansion of the nanny state.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog. Or, if you prefer to read my entire series comparing the life in Europe and in the US, try this link.

August 31, 2015

Understanding the process of finding serious vulns

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims - and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow three basic principles:

  • For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher is easy to ascertain. For this reason, I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

  • I focused on widespread software - e.g., browsers, operating systems, network services - skipping many categories of niche enterprise products, WordPress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

  • I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn't skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

  How did you find this issue?

  ( ) Manual bug hunting
  ( ) Automated vulnerability discovery
  ( ) Lucky accident while doing unrelated work

If "manual bug hunting" is selected, several additional options appear:

  ( ) I was reviewing the source code to check for flaws.
  ( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
  ( ) I was doing black-box experimentation to see how the program behaves.
  ( ) I simply noticed that this bug is being exploited in the wild.
  ( ) I did something else: ____________________

Selecting "automated discovery" results in a different set of choices:

  ( ) I used a fuzzer.
  ( ) I ran a simple vulnerability scanner (e.g., Nessus).
  ( ) I used a source code analyzer (static analysis).
  ( ) I relied on symbolic or concolic execution.
  ( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

  Name of tool used (optional): ____________________

  Where does this tool come from?

  ( ) I created it just for this project.
  ( ) It's an existing but non-public utility.
  ( ) It's a publicly available framework.

  At what scale did you perform the experiments?

  ( ) I used 16 CPU cores or less.
  ( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

  Did you use any additional, automatic error-catching tools - like ASAN
  or Valgrind - to investigate this issue?

  ( ) Yes. ( ) Nope!

...and about the lengths to which the reporter went to demonstrate the bug:

  How far did you go to demonstrate the impact of the issue?

  ( ) I just pointed out the problematic code or functionality.
  ( ) I submitted a basic proof-of-concept (say, a crashing test case).
  ( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

  Did you coordinate the disclosure with the vendor of the affected
  software?

  ( ) Yes. ( ) No.

  How long have you waited before having the issue disclosed to the
  public?

  ( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
  ( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.

  In the end, did the vendor address the issue as quickly as you would
  have hoped?

  ( ) Yes. ( ) Nope.

...and the channel used to disclose the bug - an area where we have seen some stark changes over the past five years:

  How did you disclose it? Select all options that apply:

  [ ] I made a blog post about the bug.
  [ ] I posted to a security mailing list (e.g., BUGTRAQ).
  [ ] I shared the finding on a web-based discussion forum.
  [ ] I announced it at a security conference.
  [ ] I shared it on Twitter or other social media.
  [ ] We made a press kit or reached out to a journalist.
  [ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

  What motivated you to look for this bug?

  ( ) It's just a hobby project.
  ( ) I received a scientific grant.
  ( ) I wanted to participate in a bounty program.
  ( ) I was doing contract work.
  ( ) It's a part of my full-time job.

  How much effort did you end up putting into this project?

  ( ) Just a couple of hours.
  ( ) Several days.
  ( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don't have enough answers to draw particularly detailed conclusions from the data set - this should change over the next couple of months. Still, I'm already seeing several well-defined if preliminary trends:

  • The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the pack so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be unheard of - despite the undivided attention that such methods receive in academic settings.

  • Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular - and are an untold success story of vulnerability research.

  • Most public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

  • Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that's it! Watch this space for updates - and let me know if there's anything you'd change or add to the questionnaire.

July 15, 2015

Poland and the United States: all that begins must end

With my previous entry, I wrapped up an impromptu series of articles that chronicled my childhood experiences in Poland and compared the culture I grew up with to the American society that I'm living in today. For the readers who want to be able to navigate the series without scrolling endlessly, I wanted to put together a quick table of contents. Here it goes.

The entry that started it all:

  • "On journeys" - a personal story recounting my travels from Poland to the US.

Oh, the places you won't go:

Poland (and Europe) vs the United States:

And now, back to the regularly scheduled programming...

Poland vs the United States: American exceptionalism

This is the fourteenth article talking about Poland, Europe, and the United States. To explore the entire collection, start here.

This is destined to be the final entry in the series that opened with a chronicle of my journey from Poland to the United States, only to veer into some of the most interesting social differences between America and the old continent. There are many other topics I could still write about - anything from the school system, to religion, to the driving culture - but with my parental leave coming to an end, I decided to draw a line. I'm sure that this decision will come as a relief for those who read the blog for technical insights, rather than political commentary :-)

The final topic I wanted to talk about is something that truly irks some of my European friends: the belief, held deeply by many Americans, that their country is the proverbial "city upon a hill" - a shining beacon of liberty and righteousness, blessed by the maker with the moral right to shape the world - be it by flexing its economic and diplomatic muscles, or with its sheer military might.

It is an interesting phenomenon, and one that certainly isn't exclusive to the United States. In fact, expansive exceptionalism used to be a very strong theme in the European doctrine long before it emerged in other parts of the Western world. For one, it underpinned many of the British, French, Spanish, and Dutch colonial conquests over the past 500 years. The romanticized notion of Sonderweg played a menacing role in German political discourse, too - eventually culminating in the rise of the Nazi ideology and the onset of World War II. It wasn't until the defeat of the Third Reich that Europe, faced with unspeakable destruction and unprecedented loss of life, made a concerted effort to root out many of its nationalist sentiments and embrace a more harmonious, collective path as a single European community.

America, in a way, experienced the opposite: although it has always celebrated its own rejection of feudalism and monarchism - and in that sense, it had a robust claim to being a pretty unique corner of the world - the country largely shied away from global politics, participating only very reluctantly in World War I, then hoping to wait out World War II up until being attacked by Japan. Its conviction about its special role on the world stage solidified only after it paid a tremendous price to help defeat the Germans, to stop the march of the Red Army through the continent, and to build a prosperous and peaceful Europe; given the remarkable significance of this feat, the post-war sentiments in America may not be hard to understand. In that way, the roots of American exceptionalism differed from its European predecessors, being fueled by a fairly pure sense of righteousness - and not by anger, by a sense of injury, or by territorial demands.

Of course, the new superpower has also learned that its military might has its limits, facing humiliating defeats in some of the proxy wars with the Soviets and seeing an endless spiral of violence in the Middle East. The voices predicting its imminent demise, invariably present from the earliest days of the republic, have grown stronger and more confident over the past 50 years. But the country remains a military and economic powerhouse; and in some ways, its trigger-happy politicians provide a counterbalance to the other superpowers' greater propensity to turn a blind eye to humanitarian crises and to genocide. It's quite possible that without the United States arming its allies and tempering the appetites of Russia, North Korea, or China, the world would have been a less happy place. It's just as likely that the Middle East would have been a happier one.

Some Europeans show indignation that Americans, with their seemingly know-it-all attitudes toward the rest of the world, still struggle to pinpoint Austria or Belgium on the map. It is certainly true that the media in the US pays little attention to the old continent. But deep down inside, European outlets don't necessarily fare a lot better, often focusing their international coverage on the silly and the formulaic: when in Europe, you are far more likely to hear about a daring rescue of a cat stuck in a tree in Wyoming, or about the Creation Museum in Kentucky, than you are to learn anything substantive about Obamacare. (And speaking of Wyoming and Kentucky, pinpointing these places on the map probably wouldn't be the European viewer's strongest suit.) In the end, Europeans who think they understand the intricacies of US politics are probably about as wrong as the average American making sweeping generalizations about Europe.

And on that intentionally self-deprecating note, it's time to wrap the series up.