Bikelash Trailer

A true story by Chris Bucchere. On August 5th, 2018, the ten-week podcast series begins.

March 29th, 2012, 8:20am, Castro Street
When he regained consciousness, Chris found himself bloodied and bruised, being loaded into an ambulance. He had no idea that his life had permanently changed. Because, a few days later, an elderly man he had hit with his bicycle would die of injuries sustained in the accident. News would go viral internationally, including articles in the New York Times and Forbes magazine. The District Attorney of San Francisco would see Chris’s case as an opportunity to send a message to the city’s cyclists.

But this isn’t a story about cycling. It’s about criminal justice. It’s about prosecutors manipulating the press in order to deprive defendants of due process, where facts get misconstrued and inaccurate details leaked. It’s about social media whipping public opinion to a frenzy, giving DAs fodder for political gain. It’s about what really happens behind the headlines: who wins, who loses, who plays fair—and who doesn’t.

Bikelash is a ten-week podcast series chronicling Chris’s role in a fatal bicycle-pedestrian accident. These 107,000 words are based on court transcripts, emails exchanged with his attorney, and Chris’s in-the-moment journaling from immediately after the accident until he pleaded guilty to felony vehicular manslaughter eighteen months later.

Subscribe to Bikelash on Android, iTunes, Stitcher, or wherever you get your podcasts.

Bikelash

On Sunday, August 5th, 2018, I’ll publish the first installment of a ten-week blog and podcast series, Bikelash: How San Francisco created America’s first bicycle felon, chronicling my role in a bicycle/pedestrian accident that made international headlines in 2012.

Follow the story here on my blog, on medium, on Facebook, and on Twitter.

The Post-Facebook Internet

The recent Facebook scandal—in which Cambridge Analytica obtained 50 million Facebook users’ personal data—really shouldn’t have been such a big deal. By no means was it the biggest breach of data, nor a breach of the most sensitive kind of data. It wasn’t nearly as salacious as PRISM nor any of the other secret programs the NSA designed to siphon phone and internet data (which remained covert until whistleblower Edward Snowden famously told the Guardian about them in 2013).

The most—if not the only—potentially unlawful act in this saga was the deal struck to share the data between Aleksandr Kogan and Cambridge Analytica, a potential violation of the terms of service for the survey app that Kogan created to harvest the data in the first place. The act of collecting the data, though no longer permitted by Facebook, was perfectly lawful at the time.

So why then was this breach such a big deal?

First of all, we can universally agree that Facebook has amassed a trove of personal data larger than that of any other company on the planet. Given the obvious value of these data, Facebook is constantly targeted. When such a breach occurred on the world’s largest social network, millions of people became upset, rightfully so. Facebook got pressured to explain how it happened. Still seems like no big deal.

Their explanation, however, raised an important question: Why does Facebook need all these data in the first place? That in turn led to an interesting Catch-22: In explaining the data breach, Facebook had to draw attention to its business model, namely that it collects data, anonymizes them, and sells them—albeit indirectly—to their real customers. The real customers are not us, the Facebook/Instagram/WhatsApp users. No, Facebook’s customers are ad networks and advertisers, i.e. companies and people who pay to promote their products and services on Facebook.

This “revelation” should have come as no big surprise to anyone, or at least to anyone paying attention. Revenue through tailored advertising is the business model of Google, Facebook and nearly every media entity that operates online. Last year, Facebook reported that 98% of their revenue came from advertising.

Using your data (and mine and everyone else’s), Facebook built an incredibly powerful ad targeting platform, a platform we allowed them to build and deploy when we accepted their terms of service—all two (plus) billion of us.

It’s even possible—through Facebook’s publicly-available advertising platform—to target a 41-year-old man in San Francisco who speaks Spanglish, who has attended at least one Lindyhop event and who belongs to the Bay Area Esk8 group. In other words, I can target an ad so narrowly that it’s shown only to me. (I just tried this, and though the platform gave me a warning that my targeting parameters might be “too specific,” it didn’t stop me from setting up the ad.)

So this is how Facebook makes hay using our personal data. Along with paywalls/subscriptions (e.g. San Francisco Chronicle, Medium, New York Times) and donations (e.g. The Guardian, NPR, Wikipedia), selling ads targeted to people’s personal sensibilities is how hay gets made not just on Facebook, but all over the internet. If that means I get to consume ads for dance camps and wetsuits in lieu of celebrity plastic surgery disasters, then everybody wins. (Facebook infers, correctly, that I surf. O’Neill pays Facebook to advertise the wetsuit to me and other surfers, we buy the wetsuits from O’Neill. Repeat. Cha-ching.)

Somehow we got from selling wetsuits to throwing elections. To understand how our current internet failed us, and to frame where the new internet needs to take us, it’s worth doing a shallow dive into internet history.

A Brief History of the Internet (and Cats)

The internet was never intended to be a money-making machine. In the late 60s and early 70s, large universities wired their computers together in order to share research, primarily through email (of all things), on an early version of the internet known as ARPAnet, a project financed by the Department of Defense’s ARPA (later renamed DARPA). In the 80s, I’m sure sharing cat pictures (uuencoded as streams of text) started to become a thing, if it wasn’t already. Even so, the internet’s only “business model” was government-sponsored academic propeller-spinning.

In 1994, with the advent of the Netscape browser, non-academics flooded onto the internet in droves. Ten years prior, I got my first email account and dial-up access from AppleLink. I connected to and explored BBSs and started using protocols like Gopher and NNTP (Usenet). I read up on “netiquette,” learned how to keep my CAPS LOCK key off, how to spot an AOLer (hint: CAPS LOCK USUALLY ON) and how to construct some basic emoticons, something we once called “ASCII art.” |_|] ← That’s a coffee mug right there. Really, it is.

This early internet, on the precipice of becoming commercial, had the feel of a loosely-coupled collection of “expert communities”—for lack of a better term—scattered amongst BBSs, Usenet and AOL chatrooms. (Keep this notion of “expert communities” in mind as you continue reading; I’ll circle back to it later.)

From roughly 1994-2002, companies flocked to the internet to experiment with the web’s first “real” business model: ecommerce. For a few years, it seemed like every business needed a web storefront. However, when investors realized that selling cat food online wasn’t quite what it was cracked up to be, the bubble burst. The same market forces that quickly evaporated five trillion dollars of value also declared Amazon the clear “winner” of ecommerce, proving that centralized inventory (along with on-demand inventory) and centralized technology and fulfillment logistics were the best way—if not the only way—to sell cat food online and actually turn a profit.

After a brief moment of reckoning, from the second wave of the internet—what some call Web 2.0—emerged a new, more indirect business model, this one borrowed from traditional media companies. Like newspapers and magazines, “Web 2.0” sites and applications would also run ads, but instead of hiring professional photographers and journalists, everyday users would supply the cat photos and write the heartwarming cat stories. Sites like these could save money by letting amateurs create the content—called User Generated Content (or UGC for short)—while they collected money for every cat food ad impression (CPM), every cat photo click-thru (CPC) and every action, e.g. signing up for a site’s feline marketing content or taking a cat survey (CPA).
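The three pricing models above are simple arithmetic. Here’s a minimal sketch, in Python, of how a UGC site’s revenue adds up across CPM, CPC and CPA; every rate and count below is an invented illustration, not a real industry figure.

```python
# Illustrative sketch of the three UGC ad-revenue models (CPM/CPC/CPA).
# All rates and traffic numbers are made up for the example.

def ad_revenue(impressions, clicks, actions,
               cpm_rate=2.50,   # dollars per 1,000 impressions (CPM)
               cpc_rate=0.40,   # dollars per click-through (CPC)
               cpa_rate=5.00):  # dollars per completed action (CPA)
    """Total revenue from impressions, click-thrus and actions."""
    return (impressions / 1000) * cpm_rate + clicks * cpc_rate + actions * cpa_rate

# A month of cat-food ads: 2M impressions, 10k clicks, 500 survey sign-ups.
print(ad_revenue(2_000_000, 10_000, 500))  # 11500.0
```

The point of the arithmetic: impressions alone pay almost nothing, which is exactly why platforms chase ever-better targeting to drive up clicks and actions.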

Naturally, the sites with the most users and the most cat photos (predominantly Facebook and Twitter) could provide the richest ad targeting platforms. Facebook’s claim of making the world more connected belied another mission: to create the richest, most effective ad-targeting platform known to mankind.

(It’s worth noting that I’m glossing over huge swaths of the ad industry, including search ads/SEO/SEM and scores of networks that serve up ads on third-party sites and mobile applications. I’m also neglecting to talk about the mobile web in general terms, the Semantic Web, the Internet of Things and a whole host of other topics, just so we can stay focused on UGC.)

User Generated ~~Cats~~ Content

While it has been part of the technology toolkit and lingo for at least 15 years, many—if not most—people heard about UGC for the first time during the recent fallout from the Facebook/Kogan/Cambridge Analytica scandal. Until a few days ago, most people thought Facebook was free; in reality, it’s not. We pay for Facebook by bartering our personal information in exchange for the Facebook features we enjoy.

Perhaps “used to enjoy” would have been better phrasing, since this latest scandal left angry mobs of people joining the #DeleteFacebook movement. In many ways, they’re doing so in vain, because we would literally need to stop using our smartphones and the entire internet, change our names, addresses, hair/eye color, purchase history and a thousand other things to escape the personal data collection happening everywhere on the web.

On Facebook and elsewhere, UGC greases the gears of an enormous machine designed to turn cat photos into cash. And it works, or at least it works for a few massive companies, which seems to be a theme as far as internet companies go.

In fact, at least three times in the brief history of the internet, we’ve seen near-monopolies create—and consume—entire online business models: Amazon (for ecommerce), Google (for search advertising) and Facebook (for UGC advertising).

Through organic growth and acquisitions, Facebook alone has ended up with more than two billion people’s personal information, likes, preferences and social interactions stored inside what is effectively one enormous database.

And that finally explains why this scandal is important: because it has caused people to start asking some really good questions, like: Was it a good idea to allow companies like Facebook to give everyone a free microphone in exchange for harvesting, storing and mining everything everyone says?

It’s Not the Cat Photos; It’s the Cat Distribution

Facebook may be the biggest collector of data, but they certainly aren’t the only one. Plus, they’re not going to delete their data, as it’s the lifeblood of their company. So instead of focusing on Facebook, I want to ask a more fundamental question, one that will surely ignite the ire of free speech advocates everywhere, but one that needs to be asked regardless: Was it even a good idea to give everyone a free microphone in the first place?

Put another way, when is it a good idea—in the real, non-digital world—for us to tell something, instantly, to everyone we know: family, good friends, co-workers, acquaintances, people we just met and immediately befriended? Before Facebook, this wasn’t easily possible. We used to hide our reading materials and journals under the mattress and only send things like baby announcements to everyone we know (even then selectively skipping creepers like Uncle Charlie). Now Facebook has flipped that notion on its head. Your cat photo has more likes than my baby announcement? Does this make any kind of sense IRL? Then why should it be possible online?

But, what about free speech? Yes, in this country we are all free to say nearly anything without fear of repercussion. In another sense, however, speech isn’t really free at all. Our precious free speech is utterly worthless without distribution. Without distribution, our posts on the internet are nothing more than trees falling in the forest with no one around to listen for the sounds they might make. Distribution costs money—and that’s why we strike a Faustian promise with every word and click on Facebook. We provide the content; they provide the distribution. And we pay for the distribution, albeit indirectly, by allowing Facebook to broker our data to advertisers.

Too often and too easily is distribution confused with truth. If something is “widely reported,” that doesn’t make it factual. Therein lies the problem with the awesome distribution power of Facebook: It can be used to distribute facts just as efficiently as it can to spread, um, “alternative facts.” As a result, Facebook and Twitter and other UGC sites are heavily moderated both by people and by machines. The other day, Facebook’s censorship robots blocked my friend Tim from saying “trees cause global warming.” Many artists have had their work removed for showing a little too much nipple (or a little too much something). This introduces a whole new set of problems, the most important of which is: Do we trust Facebook to arbitrate “good” speech from “bad?” Under what or whose standards?

I had a revealing personal experience in 2012 when I helped Miso—a Google-backed venture conceived as a social media site for videos—build an application called Quips. This app would allow people to use their phones to take still images from TV shows and movies and create memes from them by adding the chunky white text we’ve come to associate with such artifacts.

Long story short: we didn’t build moderation (a common internet euphemism for censorship) into the first version of the platform. Rather, we gave people unfettered access to tools they could use to create potentially viral content. What could possibly go wrong? Within weeks, Quips had degenerated into the most profoundly hateful cesspool I’ve ever seen on the internet—and I even (sometimes) read YouTube video comments! Who knew Miso was actually short for misogyny—and racism, homophobia, xenophobia and a million other kinds of hate speech?

It was easy for us to sunset Quips and bury the steaming pile of dreck that Quippers created. It’s not so simple for Facebook.

They certainly can’t delete everything without destroying the data vital to their business model. Meanwhile trying to censor posts is an endless game of algorithmic Whack-a-Mole certain to offend the sensibilities of moles on the far-right, the far-left and every mole in between, including my friend Tim (who doesn’t actually believe that trees cause global warming; it was just a joke).

So distribution without moderation/censorship leads to a cesspool. We technologists all knew this already, but it hasn’t stopped a host of really smart people from trying to build a better moderation/censorship mousetrap. Ultimately they will fail because of (what I can only hope is merely a few) creative individuals with a lot of free time producing a seemingly-limitless supply of garbage. Or art. Or jokes! Sarcasm, something nearly impossible to detect on the web, can often be mistaken for hate speech, especially when the whole point of the sarcasm was to raise awareness of the hate speech in the first place.

When faced with an intractable situation like “stamping out misinformation on the internet,” it helps to reframe the problem by looking at the actual root cause. The cause is not fake news per se, nor ad networks, nor Facebook, Cambridge Analytica nor even UGC. Rather, the naive ideology of the internet coupled with the worst traits in humanity formed ideal grounds for a Tragedy of the Commons: If you create something open and free, some people will eventually find a way to exploit it for their own benefit and thereby ruin it for everyone else.

Emerging from the Cesspool

Even though it’s likely a very small segment of “bad actors” who are ruining the internet for everyone, I’m proposing a radical shift: let’s leave the internet for what it is (a cesspool) and build a better one. What if we could start over with the same lofty goals—connecting the world by sharing information—but this time build an internet with failsafes that would prevent us from creating yet another cesspool of misinformation and hate speech?

I’m not suggesting that we shut down the internet, but instead that we build something atop existing protocols that helps the world organize information, validate claims, and establish fact; in other words, we need to build an internet that lives up to its early design considerations, which, obviously, did not include building a cesspool of falsehoods and hate speech.

A recent NYT article really drove this point home for me: “The downgrading of experience and devaluing of expertise can be explained partly by the internet, which allows people to assemble their own preferred information and affords them the delusion of omniscience.”

Note it said “partly.” The internet is partly at fault. Humanity bears responsibility for the rest.

So yes, humanity is a big part of the problem. But it’s also the solution. For every bad actor, there are thousands and thousands of good ones.

What if we could build an internet wherein good actors could drive out bad?

What if we could create an internet consisting only of factual information? An internet devoid of corporate interests? An internet of real people wherein everyone could only interact with the system using a proven identity?

What if we could finally draw the line between private and non-private digital communications, such that private conversations could remain truly private?

What if all information was organized into silos, like the “expert communities” of the early internet, but codified into a meritocratic hierarchy where every claim needed to be vetted by an established community of experts? What if experts could delegate privileges to other experts who prove their worth through contributions? What if the information curated remained free to the consumer, but provided a basic income to its creators and gardeners for the work they put into curating the information? What if this internet could remain completely read-only to everyone not designated an expert in a particular silo?

Web X.0

Much of the technology we need to build something like this already exists. Signal, Keybase and scores of other platforms offer peer-to-peer (serverless) encrypted messaging. StackExchange already provides a model for curated expert communities, entirely based upon Q&A. Modeling the new internet off of StackExchange (or Quora or WhySaurus), each question response could be stored as a block in a blockchain with experts from the appropriate communities recruited to validate the responses, much like block validation already works today for cryptocurrencies.
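To make the block-validation idea above concrete, here’s a toy Python sketch: each answer becomes a block chained to the previous one by hash, and it’s only accepted once enough recognized experts from the silo vouch for it. The class, field names and two-validator threshold are my own inventions for illustration, not the design of StackExchange, any cryptocurrency, or the proposal itself.

```python
import hashlib
import json

# Toy model: an answer is a block, chained by hash, accepted only after
# a quorum of recognized experts validates it. Threshold is arbitrary.
REQUIRED_VALIDATIONS = 2

class AnswerBlock:
    def __init__(self, question, answer, prev_hash):
        self.payload = {"question": question, "answer": answer,
                        "prev_hash": prev_hash}
        self.validators = set()  # IDs of experts who vouched for this answer

    def hash(self):
        # Deterministic hash of the payload, used to chain the next block.
        raw = json.dumps(self.payload, sort_keys=True).encode()
        return hashlib.sha256(raw).hexdigest()

    def validate(self, expert_id, community_experts):
        # Only members of the silo's expert community may vouch.
        if expert_id in community_experts:
            self.validators.add(expert_id)

    def accepted(self):
        return len(self.validators) >= REQUIRED_VALIDATIONS

experts = {"alice", "bob", "carol"}  # the silo's expert community
genesis = AnswerBlock("What is ARPAnet?", "A 1960s research network.", "0" * 64)
genesis.validate("alice", experts)
genesis.validate("mallory", experts)  # not an expert: silently ignored
genesis.validate("bob", experts)
print(genesis.accepted())  # True: two real experts have vouched
```

The next answer in the silo would use `genesis.hash()` as its `prev_hash`, so tampering with an accepted answer would break every block after it.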

Every information silo would require a community of experts to curate it. But what good are these experts if we can’t check their credentials and contributions to validate that they really are experts? The missing piece here is global identity management, i.e. a way to prove that we are who we say we are. We need a biometric-seeded revocable cryptographic key that would allow us to conduct business using our IRL identities or with pseudonyms that the owners can prove are theirs (but not the other way around). The Human Unique Identifier (or HUID) described by the ambitious Cicada Project proposes a clever design for this.
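The one-way pseudonym property—owners can prove a pseudonym is theirs, but the pseudonym alone reveals nothing—can be illustrated with a simple keyed-hash commitment. To be clear, this is not the actual HUID construction from the Cicada Project; it’s a toy Python sketch of the property, with the secret standing in for a biometric-seeded key.

```python
import hashlib
import hmac
import os

# Toy commitment illustrating the one-way pseudonym property: without the
# secret, the pseudonym cannot be linked back to the real identity, but
# revealing (identity, secret) proves ownership. NOT the real HUID design.

def make_pseudonym(real_identity: str, secret: bytes) -> str:
    # Keyed hash (HMAC) commits to the identity under the secret.
    return hmac.new(secret, real_identity.encode(), hashlib.sha256).hexdigest()

def prove_ownership(pseudonym: str, real_identity: str, secret: bytes) -> bool:
    # Anyone can recompute the commitment and check the claimed link.
    return hmac.compare_digest(pseudonym, make_pseudonym(real_identity, secret))

secret = os.urandom(32)  # stand-in for a biometric-seeded, revocable key
alias = make_pseudonym("Chris Bucchere", secret)
print(prove_ownership(alias, "Chris Bucchere", secret))  # True
print(prove_ownership(alias, "Someone Else", secret))    # False
```

Revocability would come from rotating the secret: the old pseudonym stops being provable the moment its key is retired.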

Creating a secure, un-spoofable identity system is a fundamental challenge, but it’s surely not the only challenge. In building this new internet, our biggest enemy is what we don’t know—and what we won’t know until we’ve already written oodles of code and tests, as is often the case with software projects.

But we can’t let fear of the unknown stop us. The time has come—in fact it’s long overdue—to create a new internet, an internet that can’t be defeated by Nigerian scammers, Russian fake news bots or that 400-pound kid in his bed somewhere. Let’s leave the existing internet intact but teach our kids that they should assume that nearly everything they read there is either bullshit or sponsored bullshit. If vetted, cite-able, factual information is what they seek: They need to consult Web X.0.

And yes, this new internet would be read-only for 99.9999% of the world’s population. This would leave about 7,000 experts in control of all the world’s public factual information, with the ability to delegate more experts as needed. No corporations would be allowed; no corporate interests would be tolerated. In this way, the denizens of the new internet would maintain all the world’s information much like the denizens of the early internet “expert communities” on BBSs, Usenet and chatrooms, but this time with HUIDs and block validation keeping everyone honest.

People could still interact with corporations on the “old internet,” but we could use the Web X.0 HUID to dole out Basic Attention Tokens (or something like them) to allow people to decide for themselves which revocable personal information they want to share with commercial entities—and get compensated with cryptocurrency in return. In other words, corporations would pay consumers directly for paying attention to their messages, eliminating the layers of ad network middlemen who get paid for matching companies to consumers.

The Cicada Project takes this a step further by adding a secure direct democracy component, which would allow populations small and large to self-govern. Direct democracies are notoriously disastrous (e.g. Athens) but given that two of our last three presidents took office despite losing the popular vote, maybe it’s an idea worth considering again.

Then again maybe direct democracy is biting off more than we can chew. Maybe we should start by building and deploying the HUID on the existing internet and then go from there.

Maybe this is all hogwash.

But maybe—thanks to Facebook, Kogan and Cambridge Analytica—we’re finally starting to ask the right questions.

Make America Kittens Again

Adorable Tomasina, available for adoption at the SFSPCA.

America, lend me your ear.

This has got to stop. We’ve fallen prey to the greatest con in the history of mankind. We sold our liberty not to Putin, but to something far more sinister: a reality TV personality. He has turned our fragile democracy into a particularly bad episode of the Jerry Springer show. But times a billion. And a billion times worse.

There’s only one solution. Everyone needs to install Make America Kittens Again, a browser extension that replaces images of these shysters with kittens. We also need to build one that rewrites every Trump headline as: “Wow, Look How Fucking Cute This Kitten Is!”

Better yet, Dear Media: Just do this for us. Every time Trump says anything, just write a story about a really cute kitten or cat. Include lots of pictures.

In case you were wondering, this is why we put all those cats on the internet in the first place.

Let’s end this reality show by deploying the cats and showing this administration who’s really in charge: we, citizens of the internet.

Internet: 1, Trump et al.: 0

Let’s do this.

How I Quit Email (and You Can Too)


Today, email turns 44 years old.

If that doesn’t already sound odd, consider this: We upgrade our smartphones and laptops every few years, yet we’re using those very devices to communicate via a crusty old protocol that’s barely changed in half a century.

But there’s a more important, more existential problem: email consumes us. Adobe surveyed 400 American white-collar workers in 2015 and found that on average, we use email six hours a day (or 30+ hours a week).

Several months ago, I decided it was time to pull myself out of this quagmire. Today, on the 44th anniversary of its birth, I am declaring email dead. At least to me. If you’re willing to jump over a few hurdles, you too can free yourself from its clutches.

If you’re not already convinced that it’s time to say goodbye to email, here are a few reminders of why it sucks:

1. It’s not secure (and simply never can be)

Most email travels around the internet in clear text. Even when message bodies are encrypted, which is rare, the metadata still have to be sent in clear text.

Because it’s so prevalent, and because it’s easy, spearphishing attacks have caused dozens of major crises over the years: Sony, the DNC/Podesta and Hillary were all victims of simple, un-sexy email password theft. More recently, Reality Leigh Winner (an NSA whistleblower who allegedly smuggled classified documents out of a SCIF and snail-mailed them to The Intercept) was apprehended in Trump’s first major bust-the-leaker case. Why? Traces left behind by emails sent to the media from her work computer.

2. It’s chatty (and the chat logs live forever)

One email touches dozens of servers as it travels to and fro, leaving a digital trail a mile wide across the internet. The sender and the recipient have no way of knowing who has seen, captured or even altered the state of an email while in transit. Neither party has any control over the security of any of the logs, something that varies substantially from one data center/network to another.

3. It’s overrun by spam and near-spam

Despite heroic legislative efforts (e.g. CAN-SPAM) and heroic technical efforts (e.g. Gmail’s spam filters), we still get unsolicited email.

Even if we don’t get actual spam, we often inadvertently (or not) sign up for mailing lists and notifications while shopping online, reading news, etc., leaving our inboxes cluttered with junk, much like snail mail.

4. It’s a CC mishap waiting to happen

We’ve all been on email threads from hell where 20 people somehow end up on the CC line. We’ve all said the wrong thing, had it CC’d to the wrong person and had it come back to bite us. But it gets even more insidious: People can seamlessly add or remove other people from the CC line, either hastening the spread of foot-in-mouth disease or leaving key people out of an important conversation.

Even when we think we know who we’re communicating with, let’s not forget about the endless wonders of BCC.

Even when we’re aware of everything on the TO and CC lines, we have no way of authenticating that sending to someone’s email address will actually result in that someone receiving the message. (Perhaps not, because someone just fell victim to a phishing attack.)

5. It’s the worst possible way ever to share living documents

There are dozens of better ways to collaborate, yet somehow people still send documents as email attachments asking for feedback, creating untoward madness.

Email is a never-ending, relentless time-sink in which the important gets drowned out by the worthless screaming, “Look at me!”

Believe it or not, it wasn’t the above that pushed me to do away with email; rather, it was a conversation I had with my then-10-year-old daughter. At the time she was (and still is) an avid iMessage user. (I’ve never seen so many emoticons!) When I tried to describe email, she asked, “Why is it better than txt?”

And—despite my self-proclaimed mansplaining prowess—I didn’t have a good answer for her.

Why not? Because it’s not better than iMessage. In fact, it’s far, far worse.

On that day I started the process of moving away from email. Fast-forward several months and I’ve reduced my inbox to a healthy, manageable, non-urgent notification queue filled entirely with things I actually want to see, put there almost entirely by bots, some of my own design.

If you fancy the same or something similar, consider the following steps:

1. Verify your digital identity

Set up Keybase. It’s super geeky, so it might not be clear what you’re doing, but do it anyway. In layman’s terms you’re “signing” your digital identities (e.g. Facebook and Twitter) so that people have a way of knowing that when they’re talking to you, they’re really talking to you and not someone (or something) else.

2. Embrace a secure messaging app

Any of these can send encrypted messages: iMessage, FaceTime audio (or video), WhatsApp, Facebook Messenger, Google Phone/Messenger, Skype, Twitter DM or Slack. There are hundreds of others. Of course, YMMV based on how much you trust the companies responsible for these apps not to get hacked.

I’m trying to make Signal (by Open Whisper Systems) my go-to messaging app. The UI is a little rough around the edges, but the emphasis on security, disappearing messages and a really slick device onboarding flow more than make up for it. Give it a try.

As an added benefit, your conversations remain organized by person and not by message, which more accurately models the way people communicate IRL.

Ironically, you might get email notifications that you’ve received messages on some of the above platforms, which is okay (see #5).

3. Use Google Docs to Collaborate

Like with your choice of messaging app, you’re putting your trust in a vendor. Google, from any angle, is a pretty safe bet, especially if you’ve enabled TFA (Two-factor Authentication) for yourself and all your collaborators.

4. Set up an auto-responder

The auto-responder covers the edge case of someone actually trying to write me an email in the traditional sense. They get a short note asking them to find me on: 1. Facebook, 2. Twitter or 3. Signal (by phone number). That should work for, respectively: 1. people I know, 2. people I don’t know and 3. people who are close enough to me to already have my phone number. Of course nearly all of the auto-responders will end up getting sent to bots — and they certainly won’t mind.
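Composing the canned reply is the easy half of this step; here’s a minimal Python sketch of it using the standard library. Wiring it up to actually send (via smtplib, or your provider’s vacation-responder feature) is left out, and the addresses and wording are placeholders, not my actual auto-response.

```python
from email.message import EmailMessage

# Canned reply text: placeholder wording for the three-channel routing
# described above (Facebook / Twitter / Signal).
AUTOREPLY_BODY = """\
I no longer read email. Please find me instead on:
  1. Facebook  (people I know)
  2. Twitter   (people I don't)
  3. Signal    (people who already have my phone number)
"""

def compose_autoreply(sender: str, my_address: str = "me@example.com"):
    """Build the auto-reply message addressed back to the original sender."""
    msg = EmailMessage()
    msg["From"] = my_address
    msg["To"] = sender
    msg["Subject"] = "Auto-reply: I've quit email"
    msg["Auto-Submitted"] = "auto-replied"  # RFC 3834: flags automatic mail
    msg.set_content(AUTOREPLY_BODY)
    return msg

reply = compose_autoreply("friend@example.org")
print(reply["To"])  # friend@example.org
```

The `Auto-Submitted` header matters: it tells well-behaved mail systems (and other auto-responders) not to reply to your reply, which helps avoid bot-to-bot mail loops.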

5. Fine tune your notifications

I use IFTTT to filter out popular stories from the New York Times and email them to me (usually about five a day, unless Trump forgets to take his medications). I also get daily briefings from the Guardian and the WaPo. I get some mass emails from my daughter’s school, from the lindyhop community and from a few editorial sites I really enjoy (Tasting Table, Urban Daddy, Bold Italic and a few others).
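If you’d rather not rely on IFTTT, a rough stand-in for that recipe is a few lines of feed-parsing code. This Python sketch filters the top items out of an RSS feed using only the standard library; the sample feed is fabricated, and a real run would fetch something like the NYT most-popular RSS URL instead.

```python
import xml.etree.ElementTree as ET

# Fabricated sample feed; a real run would download an actual RSS URL.
SAMPLE_RSS = """<rss><channel>
  <item><title>Story One</title><link>https://example.com/1</link></item>
  <item><title>Story Two</title><link>https://example.com/2</link></item>
  <item><title>Story Three</title><link>https://example.com/3</link></item>
</channel></rss>"""

def top_stories(rss_text: str, limit: int = 5):
    """Return (title, link) pairs for the first `limit` items in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")][:limit]

for title, link in top_stories(SAMPLE_RSS, limit=2):
    print(title, link)
```

From there, a cron job could email the result to you—or skip email entirely and push it straight to a read-it-later service.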

Aside from communicating with bots (e.g. shuttling a NYT article delivered by IFTTT to Pocket so I can read it later), I’ve sent no more than two dozen emails this year. My inbox has become a dumping ground for notifications, none of which is urgent or terribly important. I can keep up with them most of the time. Once in a while, I get behind and I mass-delete everything in my inbox, something I can do with a high level of confidence that I haven’t missed anything important.

I’ve ceased using email for all important (and human!) communication and at the same time turned my inbox into a bespoke, bot-generated “daily briefing” of sorts.

Real conversations need authenticity, reliability and privacy. Bots don’t care about those things, so they get relegated to my once-sacrosanct inbox.

Let’s hand email over to the bots. Humans deserve a better way to communicate.

In Explaining Why He Sacked Comey, Trump Borrows From Mein Kampf

Be forewarned: I’m going to compare Trump to Hitler, again. Before accusing me of violating Godwin’s Law, please understand that his “law” refers to the odds of a Hitler reference approaching 100% in comment threads. Godwin doesn’t mention anything about the opening lines—let alone the entire premise—of a blog post.

So why Hitler? Why again? And why now? Pundits have already jumped on the liar-liar-pants-on-fire bandwagon, but they’re missing something crucial to understanding the latest balderdash to come from Trump, a literal font of nonsense and duplicity.

This time, he lied so bigly, so obviously and with such brazen impunity that his words qualify as a “big lie,” as defined by the Führer himself in Chapter 10 of Mein Kampf:

“All this was inspired by the principle—which is quite true within itself—that in the big lie there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus in the primitive simplicity of their minds they more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods.

It would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously. Even though the facts which prove this to be so may be brought clearly to their minds, they will still doubt and waver and will continue to think that there may be some other explanation. For the grossly impudent lie always leaves traces behind it, even after it has been nailed down, a fact which is known to all expert liars in this world and to all who conspire together in the art of lying.”

[Emphasis mine.]

On a number of occasions, I’ve heard the claim that a lie becomes true if repeated often enough. Some even quantify this: It must be repeated at least seven times, they say. Often the qualified and/or quantified versions of this sentiment get attributed—incorrectly—to Hitler.

Hitler never said anything about the importance of repeating the lie, to the best of my knowledge, though repetition surely also had to be part of his strategy (in an epoch before instant mass communication). His description of the evil genius of a “big lie” merely states that the lie’s likelihood of being believed grows proportionally with the level of said lie’s intrinsic preposterousness.

Hitler adds that “the grossly impudent lie always leaves traces behind it.” For evidence of this, one need look no further than Trump’s other attempts at big lies. He had a hand in the infamous birther lie, a big lie whose “traces behind it” literally birthed a movement unto itself. Others that come to mind? The size of the inauguration crowds. The alleged Obama wiretapping stunt. Now this.

Trump’s lie that Comey’s firing had something to do with Clinton’s emails is yet another “big lie.”

If Hitler was correct in his analysis of the efficacy of a “big lie” (and I’m afraid he was), then this lie—Trump’s biggest and most “grossly impudent” to date—is even more dangerous than all the others. Because “in the primitive simplicity of [our] minds” we are inclined to believe it.

Whether we believe it or not, we’ll be stuck with the “traces left behind it.”

Where will we find those “traces” this time around? In the selection process for the new head of the FBI. In the process—and eventual outcome—of the pending investigation into Trump’s alleged Russia connections. In his many, many conflicts of interest, not the least of which is firing the person investigating him. In more investigations of the Clintons, even.

After all, if Comey did get fired for bungling the Clinton email server investigation, we will of course want to know how exactly it was bungled so that the Clintons will finally be “brought to justice,” right?

That, of course, is a trap. If we fall into it, then we help manufacture the many “traces left behind” that will haunt us indefinitely.

An Unlikely Cure for Procrastination

“It always seems impossible until it’s done.” —Nelson Mandela

We all have tasks that—for whatever reason—we just don’t want to do.

They might be as mundane as organizing the garage or as grandiose as building the next Facebook. Small or large, easy or complex, rewarding in themselves or owed to others; regardless of what needs doing, I noticed something recently that consistently helps me break through cycles of procrastination and stay focused on the tasks that matter.

My “ah-ha” moment of introspection about procrastination came when a coworker said, “I’m addicted to working on this project.”

I didn’t doubt that he was telling the truth. People have been addicted to far stranger things than software projects. But the remark made me wonder: Can I improve my productivity by channeling my inner addict?

The answer was a resounding yes. I use and re-use “addiction training” (for lack of a better term) any time I find myself resisting some task that I don’t want to perform.

In order to understand why this works for me—and may also work for you—we need to understand how someone becomes addicted. The word addiction carries with it some serious baggage. Everyone knows how dependence on hard drugs or alcohol can lead to financial and emotional ruin, the destruction of relationships and sometimes even death.

Most people also know that addiction is not a character flaw; rather, it reflects changes in a person’s brain chemistry related to how “rewards” get processed. A shallow dive into neurology explains the chemical nature of addiction, beginning with the prefrontal cortex, a region of the brain associated with logic and decision-making. At first, we consciously set “goals” of getting drunk or high (or working out or having sex) because those things feel good. After a relatively short period of time—with some drugs, just a few doses; with “good” habits, some say 21 days—the motivation to continue the nascent behavior moves from a logical, conscious place to a more Pavlovian one. A new part of the brain takes over: the anterior dorsolateral striatum, wherein we process rewards-based learning.

“In rats seeking cocaine, additional evidence supports the hypothesis that seeking behavior is initially goal-directed, but after extended training becomes habitual and under the control of the anterior dorsolateral striatum (aDLS).” [source]

Once the aDLS has taken over, addicts will feed their addiction at all costs, even when they can plainly reason that “smoking is unhealthy” or “alcohol is ruining my life.” It’s literally beyond their logical control.

The chemistry of addictive drugs, stimulants in particular, facilitates the transition of using drugs from “goal-based” to “habitual.” But how does this apply to my software project—or cleaning my garage?

Here’s what I do when I find myself procrastinating:

  1. Set up an extremely small reward challenge (to trigger the aDLS), e.g. “I’m going to install RVM/ruby and create my Rails project, then I’m going to have a bowl of ice cream.”
  2. Do the extremely small task. (Okay, that was easy and it took less than five minutes.)
  3. Eat the ice cream. (That felt good.)
  4. Go back to procrastinating.
  5. Repeat.

By associating the smallest level of effort with a reward, we can begin to trigger the reward-processing module of our brain, effectively feeding our nascent addiction. (Bonus points for substituting “eat a bowl of ice cream” with “go for a run” or some other healthy habit.) After repeating these steps several times, you’ll likely find yourself autonomously attracted to the work you logically don’t want to do. There’s a lesson here for agile product owners, too: Stories reduced to their smallest atomic parts can give developers little “slam dunks” wherein the reward is baked into the process of moving the story along the agile board.

It’s important not to create additional negative addictions during this process—and equally important to keep the aDLS on its “toes.” Give yourself a huge reward for doing very little. Then give yourself a small reward for doing something huge. Sometimes, give no reward. Or flip a coin and if it’s heads, eat the ice cream; tails: Go back to work! This “random” nature of the rewards helps cement the working addiction using ideas from something (anecdotally) more addictive than cocaine: gambling.
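The coin-flip reward schedule above fits in a few lines of Python. This is just a playful sketch; the reward names are placeholders for whatever actually motivates you:

```python
import random

# Placeholder rewards; swap in whatever actually motivates you.
REWARDS = ["bowl of ice cream", "go for a run"]

def reward(rng: random.Random) -> str:
    """Flip a coin after each micro-task: heads earns a randomly chosen
    treat, tails sends you straight back to work. The unpredictability
    mimics gambling's variable-ratio reward schedule."""
    if rng.random() < 0.5:  # heads
        return rng.choice(REWARDS)
    return "back to work"  # tails

rng = random.Random()
print(reward(rng))
```

Over many micro-tasks, roughly half earn a treat and half don’t, and you never know which in advance; that uncertainty is what keeps the aDLS engaged.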

This method for training an addiction might work better for some than others. One study claimed that 47% of the population carried a genetic marker for addiction. Even so, we all have an aDLS and we can all learn to train it to our advantage.

Having trouble exploiting your addictive tendencies to become more productive? What other techniques have you tried when you need to break out of a procrastination rut?

Why We Shouldn’t Compare Vault 7 to Snowden’s Leaks

For seven years I worked as a government contractor developing software for CIA. Although I was not briefed into as many compartments as a systems administrator like Snowden, I held a TS/SCI clearance and had the same ability to access classified information as any “govie,” just with a different color badge.

Also unlike Snowden, I didn’t knowingly compromise any classified material. That being said, what Snowden did is ultimately good for civil liberties in this country. Moreover, the courage and bravery of his actions make him a true patriot, an American hero and the mother of all whistleblowers.

This is simply not the case for the anonymous leaker(s) behind Vault 7.

The reason for this lies not in the specific methods of cyberwarfare that were leaked today, but rather in who was the target and by whom were they targeted. In other words, CIA using cyber attacks against foreign nations is very different from NSA violating American citizens’ 4th Amendment rights with wholesale data collection from wireless carriers.

Spying on Americans is simply not in CIA’s charter. We have plenty of ways to fuck with Americans: NSA, FBI, DOJ, IRS, state and local police, meter maids and a million other authorities. But unless you’re communicating with ISIS, CIA couldn’t care less about what’s happening in your living room.

What CIA does care about is gathering intelligence around the world to keep Americans safe at home and abroad. Of course there are boundaries. Sometimes those boundaries get crossed. Cyber attacks, however, do not violate the Geneva Conventions or any other rules of engagement. It’s 2017, ffs. If our country weren’t exploiting hostile nations’ computer networks and systems, I would be disappointed in us. If Alan Turing hadn’t “hacked” the Enigma code during WWII, this post would probably be written in German.

There are two big arguments against this, two reasons why people are saying this release of information is good for America and her freedoms.

The first argument is that CIA did us a disservice by not sharing these exploits with the private sector, thereby leaving the doors open for bad guys.

That is true, but only in part. Hackers would need to independently find these same vulnerabilities and find ways to exploit them. It’s not like they’re gonna call CIA’s helpdesk for virus installation instructions. Furthermore, we in the open source community have a long history of whitehat hacking, the process of finding and reporting vulnerabilities back to vendors to make the digital world safer and more secure.

The second (and related) argument is that viruses and other malware could fall into the wrong hands. This is also true, just like it’s true for assault weapons, hard drugs and prostitution. They’re all illegal af, yet the bad guys still have ways to get them. This doesn’t mean we should stop cyber espionage, any more than it means we should stop making military assault rifles. Like with all our spying activities—and with spying activities in general—we should just do a better job covering them up, in much the same way we protect the real identities of (human) assets in the field.

In sharp contrast with what Snowden did, this release will have a net negative impact on our intelligence-gathering capabilities, weakening our ability to engage with potentially dangerous foreign powers.

Perhaps the worst part of this disclosure is that it further undermines CIA and erodes confidence in the intelligence community, already under fire from the so-called Trump Administration. It also comes, conveniently, just after Trump claimed he was inappropriately wiretapped.

Technically, this leak has no bearing on wiretapping, but it’s safe to assume that Trump will take it as an opportunity to further belittle CIA and the intelligence community’s claims about Russian interference in the election.

We will probably never know, but I strongly suspect a Russian source provided some if not all of these leaked materials. Let’s not forget: even though Snowden lives in exile in Russia, he’s as American as apple pie.

I Made my Wife a Bot for Valentine’s Day

This morning I rolled out Tink, a simple interactive chatbot I wrote for my wife as a gift for Valentine’s Day.

Every few days, Tink will text my sweetie a randomly selected yes-or-no question from a list of questions I wrote, e.g. Would you like to take hip-hop classes? At different random times, it will also text me random questions from the same list. When we both reply “Y” to the same question, it will notify us of that happy coincidence and suggest that we, say, finally enroll in those hip-hop classes.
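Tink’s code is open source (link below), but the core matching idea fits in a few lines. Here’s a minimal Python sketch, with illustrative question strings, not Tink’s actual implementation:

```python
# Record each partner's yes/no replies and surface the questions
# both answered "Y" -- the "happy coincidence" Tink texts us about.

QUESTIONS = [
    "Would you like to take hip-hop classes?",
    "Want to try that new ramen place?",
]

def matches(replies_a: dict, replies_b: dict) -> list:
    """Return the questions both partners answered 'Y' to."""
    return [q for q in QUESTIONS
            if replies_a.get(q) == "Y" and replies_b.get(q) == "Y"]

her = {"Would you like to take hip-hop classes?": "Y",
       "Want to try that new ramen place?": "N"}
him = {"Would you like to take hip-hop classes?": "Y"}
print(matches(her, him))  # prints ["Would you like to take hip-hop classes?"]
```

Questions neither or only one of us said yes to stay private, which is what makes answering honestly safe.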

Basically it’s Tinder, but for couples. But not in the way you’re thinking (you dirty dawg).

Instead it’s a fun way for two romantic partners (or just friends?) to discover shared interests they didn’t know they had. I suspect Tink will also become a motivator to actually do the things it suggests. (We’ve been meaning to sign up for hip-hop classes for months, but haven’t yet.)

The questions I wrote for Tink’s inaugural run mostly revolve around ideas for fun dates, outdoor activities, new restaurants we want to try, etc. However, there’s no reason why Tink questions couldn’t cover religion, politics, sex—or even topics actually fit for the dinner table.

With G-rated questions, Tink could serve families or even small friend groups, but right now it’s only a bicycle built for two.

Wanna take a peek under the hood? I made Tink open source under the MIT license.