
Pessimistic induction

One of my favorite ideas from recent memory is pessimistic induction. As usual, a quick search finds plenty of smarter people who thought of it before me and easily refuted it. Even so, it’s oddly compelling, a gem of a misconception.

Looking back at history, most of our ideas, even the best ones, have turned out to be wrong. Very few have stood the test of time. Newtonian physics, the rational economic actor, and fat vs sugar are just a few famous examples.

It’s easy to think that we’re at the final culmination of our entire historical arc of science, art, and civilization. We may have been wrong in the past, but we’ve rooted out our mistakes, corrected them, and we now have everything figured out. It’s such a common misconception that it has its own name: the end of history illusion.

That name is apt, though: it really is just an illusion. The present day may feel special, but it’s usually just like every day before it. Many things are good, some things are bad, and our currently accepted scientific ideas are almost certainly wrong, to one degree or another. This is the pessimistic induction.

It’s chillingly elegant, but it has a fatal flaw. Yes, today’s best science may well be wrong, but right and wrong are rarely black and white. Modern physics is famously incomplete, but working physicists would still say that the standard model, string theory, and the holographic principle are better ideas than Newtonian physics. We may not be perfectly right about everything, or maybe even anything, but we’re probably more right than we used to be.

That’s a comforting thought. The universe may be cold and indifferent, but we can admit our flaws and still hew ever closer to understanding. Onward.


What I work on

I had a conversation with a good friend recently that crystallized something I’d always felt strongly, at a gut level, but never thought through: how I choose what to work on.

When I look for a new job, I think about project, people, compensation, role, company, commute, etc. I’ve tried focusing on different factors over time, and I’ve found that for me, project is often the most important. I’ll suffer with low pay, long train rides, or a role I’m overqualified for if I’m working on something I care about and believe in.

I prefer tools over products. Systems over tools. Protocols over systems. Problems over users. Wicked over tame. Research over application. Many of these are stereotypical engineer cliches, but they boil down to an interesting theme: I prefer to work in areas where the goals and incentives don’t change much over time.

I don’t know where I developed this tendency toward the long term, but it’s a big personal motivation. The time scales I’m thinking about are centuries and millennia, not years or decades. I could just as well replace time with generations. I’m fine with not shipping code often, or not making any progress for longer stretches, if I know the problem will still be around and my work will still apply down the road.

What does this mean? Well, scratch most products – consumer, enterprise, or other. Some of them last centuries, but not many. Scratch applications and services in general. I’m happy to do work that’s used in a product or service, but usually only if there’s an underlying problem with a longer lifespan.

The two main areas that fit are research and infrastructure. Academic departments and conferences rise and fall, but the central goal of research has stayed the same forever: pursuit of truth, knowledge, and understanding. That won’t change anytime soon.

Infrastructure, on the other hand, is worlds removed. Construction workers in hard hats on building sites don’t overlap much with tweedy professors in ivory towers. They do have one thing in common, though: their goals are consistent over time. If you want to cross a river today, you build a bridge, just like a thousand years ago. We still need roads to get from one place to another. Plumbing to carry water and sewage. Electricity and communication grids may be newer, but we’ll need energy and communication in a thousand years just like we do now.

When I look at the projects I’ve enjoyed most in my career, they fit the bill. Sharding databases and, later, Paxos: classic infrastructure. Networking: infrastructure and applied research (OpenFlow). Color Genomics: applied research. App Engine: infrastructure as a product. Even the side projects I’m looking into now fit: climate change, p-hacking, the reproducibility crisis.

Why do I care if goals change over time? I’m not sure. Some of it may be the natural human desire to leave a legacy. If I work on big, long-standing problems, I’m more likely to be remembered after I die. I don’t spend much time thinking about legacy, but it could still be lurking in my subconscious.

A modern variation is “changing the world.” It’s a well-worn phrase here in Startupland, but for me personally, it’s always seemed hopelessly ambitious. I have no illusions that I’m personally going to change the world in any significant way. Maybe a little, if I’m lucky, but not a lot.

Another Silicon Valley buzzword is “impact.” Everyone wants to work on something impactful. Most people use it to mean a bold new product, or a big user base, or innovating and disrupting an industry. I want to have impact, sure, but I want to do it by moving the needle on a big, important, long-term problem. Growth hacking and TechCrunch coverage aren’t part of my personal equation.

Research and infrastructure aren’t unique. There are plenty of other areas where goals and incentives stay the same over time. Art, clearly. Philanthropy, education, entertainment, health care, public policy…the list goes on. I’d get restless if I were a teacher or actor or nurse and didn’t do anything new, but there’s plenty of opportunity to push on big problems on the front lines of those fields. I’d hate being a campaign manager, but I could easily do a stint as a policy wonk at a think tank.

This may not mean much to you, or even to me. After all, it was guiding my career decisions long before I thought it through and wrote it up. Still, now I know…and knowing is half the battle!


Don’t get the wrong idea, I’m still loving it at Color Genomics! I’m not going anywhere. On the contrary, we’re actively looking for good people. If you want to work on something meaningful and challenging, drop me a line!

Also, scratching my own itches is one big exception to this rule. If software is the tool of the knowledge worker, I’m lucky to be a toolsmith. I’ve written and modified lots of software over the years to solve my own problems. Some took significant time and effort, like granary and P4, and some have real user bases, like Bridgy and huffduff-video.

Even so, these tools have always felt practical, utilitarian, even a bit disposable. I don’t consider them a big part of my career or life’s work. I won’t need them forever, and they’ll all grow old and die eventually. That’s OK.


A medium is born

Artistic mediums are a small, rarefied lot. The spectrum from War and Peace to The Da Vinci Code is huge, but it’s all a single medium: books. Citizen Kane and Jackass may not have much in common, but they’re both movies. Art, music, theater, radio, games, maybe graphic novels…that’s pretty much it. We can haggle over subtypes like radio drama or journalism or stand-up comedy, but still. Thousands of years of civilization, and we can count the main artistic mediums on just two hands, with room to spare.

…except now. VR is here, and it’s a truly new medium. You and I have the awesome privilege of witnessing its birth firsthand. That doesn’t happen often, and it’s pretty damn cool.

Mediums don’t spring into existence fully formed. Music has come a long way since Gregorian chant. Modern movies may descend from Méliès and the Lumières, but they’re so different as to be almost unrecognizable.

For example, film buffs celebrate movies with extended single-take scenes like Children of Men and Birdman, but the first movies were all single takes. Early filmmakers thought cuts would confuse audiences or lose them entirely. Reality doesn’t have cuts, right? Moving cameras, close-ups, subtitles, and establishing shots all have similar origin stories. They didn’t happen automatically. Early filmmakers discovered them through slow, tedious experimentation. And for every technique that lasted, ten others were tried and discarded.

We’re already seeing the same thing happen with VR. Just to get mainstream devices out the door, developers had to find the right resolution, framerate, field of view, latency, and inter-pixel distance (aka screen door effect) to make the experience viable.

Once creators got their hands on dev kits, they faced an entirely new set of challenges. VR infamously makes some people nauseous, especially when they’re moving in the virtual world but not in the real world. Creators have tried all sorts of things to prevent this, and many are now settling on two that work: teleporting and tunnel vision. Similarly, filmmakers aim their cameras, but 360° films let viewers look anywhere, so VR filmmakers are learning to frame shots with sets and lighting instead of cameras.

Good artists borrow; great artists steal, as the saying goes, and artistic techniques are no exception. Video games have faced many of these problems for decades and developed their own solutions. Valve’s Half-Life 2 famously pioneered set direction and framing to get players with freedom of movement to look where the designers wanted. Likewise, user interface conventions for Nintendo Wii and Microsoft Kinect motion controllers laid the groundwork for many VR UIs.

I find all this fascinating. Technology is constantly evolving, so we often think about how new technology enables and shapes art, whether it sticks (like 35mm film) or not (like multimedia). But art also shapes technology. Set framing and teleporting in VR may be intangible, but they’re arguably still technology just like 35mm film.

Even when we do think about art shaping technology, it’s much harder to actually see it happen in real time. Evolution is an apt analogy here. When life first hits a new environment, there’s an explosion of differentiation as evolution finds all the species that can survive. Hence “Cambrian explosion.” Progress soon slows down to incremental improvement, though, as more and more niches in the ecosystem are filled in.

Artistic mediums are the same way. At the beginning, no one knows the rules – there aren’t any! – so creators try anything and everything to see what sticks. This experimentation slows down as winners emerge. Novels have been around for centuries, for example, so we don’t often see truly new structures or techniques. We may gush about unreliable narrators or experimental forms these days, but even those are rooted decades or centuries ago.

OK, so there aren’t many different artistic mediums, VR is a new one, and we get to witness its birth. So what? Should we watch for anything in particular? Should we nudge it in some specific direction? Are there unique opportunities opening up that we may not see again?

Probably not. VR has plenty of attention and press coverage, and a small army of developers and creators pushing it in every direction. We’ll be learning new things for decades to come, and each step along the way will be heavily documented. More importantly, as with all technology, we don’t have much control over how it evolves anyway.

VR isn’t even the first new medium in recent history. Art, music, theater, and the written word may predate the 20th century, but radio, television, film, graphic novels, and video games were all born in just the last 2% of recorded history. We watched creators grapple with them, documented their progress, and adopted them into the mainstream just like we will with VR.

So, my only call to action is to pay attention. There’s plenty of precedent, sure, but the birth of a wholly new artistic medium still feels unique and new to me. It’s the first time I’ve ever really seen it happen. I don’t know where creators will take VR next, but I can’t wait to find out.


Drawbridge up, drawbridge down

When populations are homogeneous, people see that other people are mostly like them – ethnically, culturally, socioeconomically, etc. This boosts trust broadly, which makes everyone more open to progressive social policies and safety nets.

When populations are diverse, people are more visibly different, at least on the surface, which leads to othering, dampens trust, and leads to more protectionist, socially conservative policies.

This is an oversimplification, and it happens only in our subconscious, but there may still be a nugget of truth to it. It might be one reason that Scandinavian countries have long been so open, progressive, and even socialist: their populations are extremely homogeneous. North America and western Europe are historically diverse, on the other hand, now more than ever, which has coincided with populist waves of nationalism, isolationism, and xenophobia.

Stephan Shakespeare, co-founder of pollster YouGov, has a famously evocative metaphor:

We are either “drawbridge up” or “drawbridge down.” Are you someone who feels your life is being encroached upon by criminals, gypsies, spongers, asylum-seekers, Brussels bureaucrats? Do you think the bad things will all go away if we lock the doors? Or do you think it’s a big beautiful world out there, full of good people, if only we could all open our arms and embrace each other?

Whichever way you feel personally, this raises the question: why do you feel that way? Homogeneity or diversity in your immediate surroundings could be one answer.

It’s a subtle, powerful point that I hadn’t fully appreciated until now. Thanks to Jonathan Haidt.


Channeling the hacker way

Also posted on the Color Genomics blog.

We have big dreams and ambitious plans. We want to push the state of the art in health and genetics, and we need broad, crazy moonshot product ideas to get us there. How do we find those ideas?

We’ve always been inspired by 20% time at Google and 3M and hack weeks at Twitter, so we decided to do our own hack week. We invited everyone to put normal work on hold for a full week to try out new ideas, no matter how crazy or tangential. We didn’t know how many people would participate, or whether we could ship any of the results, but it was worth a shot!


Our first task was to pick a theme. We chose engagement: how can we help people continue to engage with their genetics and health over time? We dragged everyone into a few big rooms and brainstormed project ideas. No judgment, no constraints, no requirements, no analysis or pro/con lists. Just ideas. By the end of these sessions, we had over 50 candidates to explore in the week to come.

We kicked off hack week proper on Monday morning with an example project: reincarnation detection. Could we identify who you’d been in past lives? With tongue firmly in cheek, we described how to recruit teammates with good karma, work with researchers to identify genetic markers across lives, partner with Buddhist monks, and design a “Talk to the Other Side” messaging UX.

We set everyone loose, and they hit the ground running. People wrote up pitches, formed teams with great names – Pink Duck was a personal favorite – and jumped on company mailing lists to recruit team members. Many people joined two or three teams each.

We emphasized that we wanted everyone to participate, across job roles. Design mockups and prototype code were nice, but not required! We encouraged people to start broad and high level, then find a vertical slice that they could flesh out in a single week. We put up screens around the office that rotated between team Slack channels, project docs, and mockups and prototypes as they came together.

The week culminated in an all hands Demo Day where each team got a few minutes to present their project. We saw live demos, skits, theme songs, and even a custom video. We voted on yearbook-style awards like Best Dressed, Most Likely to Succeed, and Crazy like a Fox. Everyone had a blast.

After the excitement wound down and the dust settled, we surveyed the results. Over half the company participated, working on 11 projects that all made significant progress toward proofs of concept.

Hack week infused us with new ideas to help our clients understand and act on their genetic data. We look forward to implementing some of the best ideas, and we hope they’ll help our clients lead longer, healthier lives. That’s the real prize.

We’re always looking for talented engineers. Join us!


Wanted: climate change project

I’m in the market for a new side project. I’m looking for something related to climate change. I don’t know if I’ll end up farming biochar, spraying aerosols into the sky, hacking solar panel trackers, or something else entirely, but those are the kinds of things I’m thinking about.

I’m leaning toward something technical. I’m open to coding, but ideally that won’t be my primary contribution. I’ve had a blast hacking on open source projects, and I’m sure I’ll do more, but right now I’m thinking about something different.

So far, I’ve just been reading a lot and learning as much as I can. There’s lots of great stuff going on, and I’ve found a few open communities here and there, but most substantial projects are either companies or academic research labs. Those are both great, but they’re hard to join part time, contributing just a handful of hours a week.

The next step is to talk to people who know more, or know other people who do. If you know the space and have an interesting problem I might be able to help with, or if you know someone who might, please drop me a line!


College is more than a job ticket

It’s become fashionable recently to second guess college. The ROI no longer works: it’s too expensive and doesn’t guarantee you a good job. It’s elitist and out of touch with reality. Student debt is predatory and out of control. Anyway, MOOCs’ unbundling model is the future of higher ed, so we might as well get on board, right?

That all may be true, but I think it’s too narrow. There’s a corollary to “you can’t optimize what you don’t measure”: measurement can give you tunnel vision. You can collect X University grads’ incomes, divide by tuition, and compare to College Y, but that doesn’t mean you can reduce either one to a simple financial investment you optimize to get the best salary.

College is an experience. It’s one of the most critical periods of your life: when you become an adult. You learn what to eat, what to drink, when to go to sleep and when to wake up. You manage your time (or not), juggle priorities (or not), make commitments and break them. You find substances, wonderful horrible tempting substances. You make friends and significant others, some more significant and some…um…less.

Most of us do this by making mistakes. No matter how mature we were already, every one of us slept through the History 101 final and flunked, or jumped into bed with someone we knew would break our heart, or woke up in a bush with no pants and the mother of all hangovers.

You can do this without college, of course, but the real world is harsh, and college has training wheels. Advisors, RAs, dorms, cafeterias, and built-in health care make for a forgiving place to learn to “adult.” Drunk bicycling is a lot less dangerous than drunk driving. Classes are the perfect practice for jobs. Hated one? Failed the midterm? Start fresh next semester, older and wiser, still in the same dorm and on the same meal plan.

Sadly, one big flaw with these safety nets is that they’re unequal. On-campus housing, student advisors, and extracurricular activities all cost money. They may be standard at expensive top-tier schools, but not at smaller state schools and community colleges. Maybe we should vote for Bernie next time.

I think a lot about how to prepare my daughter for the real world. I catch her if she’s about to fall off the bed, but I also show her the edge, let her look down, and say, “See? If you fall, it’ll hurt!” Sometimes I even let her fall a bit – not far, just enough to notice.

My college gave me the same kind of real world training wheels. The degree helped me get a job, sure, and the classical education made me a better person and citizen, but I treasure the safe space it gave me to grow. College is more than a financial investment. It’s a critical transition from childhood to adulthood. Don’t give that up.


Decentralized Web Summit

I spent the last few days at the Decentralized Web Summit, a small gathering of like-minded hackers, thinkers, and activists from all over. I don’t go to many conferences, but this one was inspiring and exciting. Even the mainstream press noticed. I’ll see if I can describe why.

I spent some time in the peer-to-peer community during the first dot com boom. I hung out with the p2p-hackers and CodeCon folks, co-created a toy P2P network and contributed to others, idolized Nullsoft and Bram Cohen and anonymous remailers, and generally yearned to be free of The Man in the middle. By the time BitTorrent hit it big, P2P seemed unstoppable, even inevitable.

It wasn’t, of course, but recently there’s been a resurgence of interest. The financial crisis, the NSA spying revelations, and online power concentrated into a few big silos rallied the internet’s venerable elders, Vint Cerf and Tim Berners-Lee and Brewster Kahle and Mitchell Baker. (And Richard Stallman!) They turned over a few rocks and found the P2P cypherpunks keeping the flame alive, developing projects like IPFS and Dat and Tor and falling all over themselves to sanctify Satoshi and create the Next Big Blockchain.

The MC called us OGs and New Gs, drawn together by a common desire to redecentralize the web. We discussed the past, present, and future, compared projects and protocols – including my own little ditty on the IndieWeb (video, slides) – and debated what to do next. It was great.

My first big takeaway was that the community seems to be maturing. Just by organizing the summit, the elders showed that they were paying attention and cared. That wasn’t really true last time around. And given TimBL’s position atop the W3C and Brewster’s irrepressible energy for catalyzing action, that definitely matters.

Second, there was a lot of talk about real world problems like UX and monetization. People widely acknowledged that P2P projects still aren’t usable or accessible enough and don’t always address real user problems, which may be why they haven’t hit the mainstream yet, Skype and BitTorrent notwithstanding.

Also, despite the usual appeals to micropayments and transaction fees, most people admitted that we still don’t know how to sustainably pay for systems without centralized control. The honesty was very encouraging. The first step is admitting we have a problem!

Finally – and feel free to take a drink here – the blockchain. I’d grokked it at a high level before, but I came away from #DWebsummit with a newfound appreciation.

Yes, it’s an entirely new consensus algorithm, and as Mike Burrows famously declared, those don’t come around very often. More importantly though, it’s the first open membership consensus algorithm we’ve ever seen.

Before Bitcoin and the blockchain, all consensus algorithms required a closed group of participants. They came from the distributed systems community, from people who built and ran self-contained clusters of servers they owned and controlled. Bitcoin changed all that. Anyone can mine a block, or send a bitcoin, and those transactions are just as consistent and durable as a Paxos round or ZAB broadcast.
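To make “open membership” concrete: in proof-of-work systems, anyone can join consensus just by hashing; there’s no membership list, no coordinator handing out identities. Here’s a toy sketch of the mining loop, purely illustrative – real Bitcoin hashes a structured block header with double SHA-256 and a much subtler difficulty target:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so the block's hash starts with `difficulty` zero hex digits.

    Anyone with a CPU can run this loop -- that's the open-membership part.
    The work itself is the admission ticket; no one has to grant you a seat.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Low difficulty keeps the toy example fast; each extra zero digit
# multiplies the expected work by 16.
nonce = mine("hello, consensus", difficulty=4)
digest = hashlib.sha256(f"hello, consensus:{nonce}".encode()).hexdigest()
assert digest.startswith("0000")
```

The point of the exercise: finding the nonce is expensive, but anyone can verify it with a single hash, which is what lets total strangers agree on a history without trusting each other.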

This is pretty huge. The internet depends on centralized human organizations like ICANN, CAs, and Tier 1 networks, and they’ve worked surprisingly well for a surprisingly long time, but failures like CA compromises and BGP hijacking have exposed their inherent flaws. Projects like Certificate Transparency are laudable, but they’re really just band-aids. They can contain damage, but they can’t prevent it entirely.

Human organizations regularly do superhuman things, and they may still be the right long term approach here. We’ve just never had a real alternative before. Now we do, and that’s pretty damn cool.

Don’t worry, I’m not dropping everything to create a cryptocurrency or drum up VC funding for a blockchain startup. I have downloaded a handful of papers to read, though, and I may drop by Friday lunch at the Archive now and then. DWeb renewed my interest in the once and future decentralized web, and that’s a good thing.


Judging campaign tactics

Donald Trump.

Admit it, you have half a lather worked up already. The man is incendiary; the campaign is astounding. He’s built a big base of support with tactics that we thought were off the table entirely and should have sunk him long ago.

What changed? Do we not have the grasp on campaign tactics we thought we did? Do we need to dive back into modern electioneering, break it down, and really understand it piece by piece if we want to get the leaders we think we’re electing?

Campaigning today is a complicated business. Fundraising, optics, opposition research, social media…there’s a lot to it, all far removed from the actual task of governing and policy making. (You may still spend much of your time fundraising and campaigning once you’re in office, but that’s a different issue.)

The founding idea of democracy is that citizens choose their representatives directly. For that to work, they need a clear picture of who each candidate is, what they stand for, and how they’d govern. So as a society, we’d ideally judge campaign tactics based on how well they describe a candidate. Facebook post on Alice’s home town? OK. Deceptive attack ad that lies about Bob’s voting record? Not OK.

Many common campaign tactics fail that test. We may not have large scale voter fraud, but we do have personal attacks, smears, undercover stings, and recently the rise of the “personal fact.” Social media targeting and astroturfing may be milder, but still misleading. Swinging right (or left) for the primary and back to center for the general is so common, we expect all candidates to do it.

We’ve had a bit of success outlawing and discouraging some of these tactics. Campaign funding reform has made some progress, Citizens United notwithstanding. Candidates routinely promise not to take money from certain groups (or anyone) and not to “go negative” and attack their opponents directly. Politifact has built a business determining whether campaign statements are true or pants-on-fire false.

That’s all well and good, but bad tactics persist. If we can’t prevent them, should we take them into account when we vote? Most candidates probably believe campaign tactics are for campaigns, and wouldn’t inform how they’d govern, but I’m not convinced. We routinely see elected officials smear opponents like WikiLeaks’s Julian Assange, for example.

Many candidates rationalize bad campaign tactics by saying the ends justify the means. “I’ll do good once I’m in office, so if I step on a few toes to get there, it’s worth it.” Utilitarian ethics and philosophy are beyond the reach of this little essay, but I will say that actions speak louder than words. If “ends justify the means” appeals to a candidate during a campaign, it will appeal to them afterward too, and next time those toes might be chests, or necks.

Predicting the future is hard, even if you’re creating it. If you don’t get elected – and you probably won’t! – it’s a lot harder to justify hurting people along the way. I constantly have to remind myself of this at work. It’s so tempting to add just one more thing to a product or codebase because I’m sure I’ll need it down the road, but I often don’t, and then I’m stuck with the extra complexity for no benefit.

There’s also the character question. It’s clearly unacceptable to call Megyn Kelly a bimbo or characterize Mexicans as rapists and criminals, full stop. Even if Trump might not do that kind of thing as President, is it a sign of the policies he’d pursue? How about the more abstract notions of character, integrity, and values?

Politicians’ personal lives have only been fair game in the press for half a century or so, at least in the US, and it took another few decades for the Clinton/Lewinsky scandal and impeachment to raise the question of whether politicians can compartmentalize their private lives and public responsibilities. We’re still figuring out how we feel about all this, campaign tactics included.

In the end, the “clear description” test might be the best measuring stick. Democracy needs voters to understand what each candidate stands for and how they’d govern. Fundraising and opposition research may make us squirm, but they don’t necessarily mislead the electorate. Smears, personal facts, and changing positions to suit polls do. We shouldn’t stand for them, and we shouldn’t vote for candidates who do.


What doesn’t kill my baby

Brooke used to have a painful head-butting habit. She’d be happily playing, then all of a sudden, she’d slam her head into my cheek…or the floor, or a toy, or anything else in range. She was just learning to use her neck, but she still cried and whimpered whenever she connected with anything solid. I cried too, when my nose or chin was in the line of fire.

We hated it every time we heard that *crack* and her shriek of pain. Our baby was hurting! We considered finding a helmet, or holding her at arm’s length, or even padding the house with bubble wrap, but they all seemed a bit overboard, even for us. Besides, she gradually strengthened her neck and stopped flailing quite so much. Life finds a way.

Brooke has decades of learning and life lessons ahead of her, most far harder than this one. She’ll handle most of them on her own, via trial and error. Others she’ll learn from her friends, or from what she sees us do as role models. She may absorb a handful of things we tell her, but they’ll be few and far between.

I know we can’t change those ratios, and I’m ok with that. We still have some impact on the trial and error, though, especially while she’s still young and we control her environment. It sounds crazy, but we really could have plastered the house in bubble wrap, or popped a junior Pee Wee Football helmet on her head, and she would have avoided those painful bruises. On the other hand, she wouldn’t have learned that head butting hurts. She might still be doing it today.

I believe in the trial and error thing. I want her to make little mistakes and learn from them. Human beings are antifragile; failing and falling down usually make us stronger over the long term. It’s how the real world works. There’s plenty of ink spilled on this over the years, from Dangerism and Nurture Shock to the hygiene hypothesis and arresting a mom for letting her kid play unsupervised.

The problem is, Brooke is our baby! We hate it when she hurts. Right now, we instinctively reach out and catch her when she’s about to fall over. In a few years, we may not let her gorge herself on candy to the point of a stomachache. We may force her to study so she doesn’t get bad grades. We may even warn her if she dates the wrong boy or girl. (I expect she’ll ignore us.)

All of those pain points are life lessons. We can prevent the pain, or she can learn the lesson, but not both. So what do we do? How do we rationally let her make mistakes and get hurt when our emotions are screaming Don’t let her make herself sick! Don’t let that boy break her heart! She’ll regret it! You’ll regret it!

Honestly, I don’t know. Kahneman himself says it’s a hard problem. His advice is to identify when you’re acting from instinct or emotion, slow down, and give your conscious brain a chance to weigh in. Not easy. I can do it, but I still wonder if there are any shortcuts.

Maybe I’m overreacting. Maybe we don’t have enough impact on Brooke’s development for any of this to matter. We may not be around for most of her learning moments, and even when we are, we may not be very good at intervening. Maybe this is all just an ego-driven illusion, and I should sit back and enjoy the ride. When she’s older, I know she’ll pay way more attention to her friends and the wider world than us…but for the moment, I think we still have some impact, even if just a little.

Brooke stopped head butting things a few months ago. She sits up on her own now, and she usually stays put, but every now and then she leans back too far and falls over. Sometimes we catch her. Sometimes we don’t, and she falls and bumps her head. There’s still no rhyme or reason to our choices, and we definitely don’t have any hard-won wisdom or master plan yet. If you know of any, I’m all ears!


Thoughts on having a baby

  1. You expect your world to narrow a bit. You can’t just bounce out the door whenever you want, and besides, you still need to figure out what the hell you’re doing.

    Even so, it’s a surprise when your thoughts are suddenly consumed by poop, and poop exclusively. Smell, color, texture, volume, frequency, how it arrives, how it departs, which clever literary references describe it best.

    Friends told you this would happen, but still. You realize you’ve hit rock bottom when you start gushing at a dinner party about soft, even grains in a thick yellow paste. Don’t fight it. Embrace it. (The talking, that is. Not the poop.)

  2. Asking other parents for advice is like reading the Bible. It’s a deep well, and there is some good stuff in there, but it’s couched in language you don’t understand, and when you do piece together a few bits, they all contradict each other.
  3. The advice you hear most often isn’t really advice at all, so much as a doomsday prophecy. “Your life will change forever,” they intone vacantly. “What should I do? How can I get ready?” They laugh, “There is no ready. Forsake all hope, ye who enter parenthood.”

    This is, of course, maddening. It’s the biggest change in your life, bar none – marriage isn’t even in the same league – and you can’t prepare for it. If you’re used to bending the world to your will, this will be tough. You can’t help it. You’ll read books, buy clothes, put together a nursery, and that’s all well and good, but it won’t matter. When she arrives, it’ll still hit you like a ton of bricks.

  4. Sleep deprivation. This is the other advice/doomsday prophecy you hear constantly. Know how you greet new parents with the gentle joke, “Getting any sleep?” It’s no joke. They’re not getting any. They’re up all night, feeding and rocking and pacifiering and white noising and not sleeping. Night after night after night.
  5. Having said that, we’re pretty lucky. Brooke was extremely chill for her first month or so. She’s more awake and vocal now, but still, she’s pretty damn easy. We have no idea which god smiled on us after all the goats we sacrificed, but we won’t argue.

    (I’m totally kidding, Baal, we know it was you. The goat’s in the mail.)

  6. The cats have been great. They took a few days to warm up to Brooke, but now they love her, and they’re very gentle. In general, if you worry about pets attacking your baby or smothering them, don’t. They’re more likely to get kidnapped by the chupacabra. Instead, worry about actual dangers: falling into the pool, getting hit by a car, developing allergies and metabolic syndrome (among others) due to oversanitizing.
  7. …but you won’t get much chance to worry at all, since one of the biggest things you lose is unbroken time. Love going for a long run, playing a great game until 2am, curling up and reading a trashy book cover to cover? Too bad. Free time now comes in 5-15 minute bursts. Get used to running laundry back and forth, playing Candy Crush, and reading Facebook posts. Pro tip: try podcasts or audiobooks, they’re hands free.
  8. The worst part might be the unrelenting sense of helplessness. If you’re accustomed to controlling your environment, being productive, getting things done, hoo boy get ready. A baby will smack you upside your head. The first time they scream for an hour straight and you can’t calm them down, no matter what you do? That’s tough. The tenth time? The hundredth? Straight up demoralizing.

    Take a step back, leave her in the crib and close the door and catch your breath. She may be pissed, but she’s also healthy, vibrant, and damn but she has a pair of lungs on her. She could be the next Lady Gaga.

    Repeat after me. The baby will cry. The milk will spill. The diapers will blow out. It is as it ever was, time and time again. Amen.

  9. I’m obviously not much for helpful tips, but I will happily shill for one product: Dr. Brown’s bottles and pacifiers. They’re great.

    I don’t know what it is about babies and burping and farting, but it’s kind of a love/hate relationship. There are techniques and positions galore to avoid swallowing air, but in the end, Dr. Brown’s bottles worked for us. Brooke stopped fighting and actually relaxed while she ate. Winner winner bottle dinner!


Happy 1000th, Bridgy

Bridgy, my little IndieWeb side project, hit a milestone a couple days ago: 1000 users! Congratulations Brett Glisson, you win the prize!

1000 isn’t a big number. We’re a long way from viral marketing, growth hacking, and customer acquisition tracking, and that’s fine with me. I built Bridgy to scratch my own itch, in fine open source and IndieWeb tradition, and only launched it publicly because I thought other people might have the same itch.

Milestones are good excuses to count blessings, and Bridgy has plenty to be grateful for. Kyle Mahan has stepped up in a major way, building substantial new functionality and supporting servers and users alike with aplomb. He’s basically a co-owner at this point. You rock, Kyle! Emma Kuo, Barnaby Walters, and Kartik Prabhu have also contributed great code, and the IndieWeb community’s support has been invaluable. Thanks everyone!

Milestones are also good excuses for navel gazing, so here are some graphs. They have pretty colors!

First, user growth. Clearly not exponential, but it is keeping up a steady clip. Also see the cumulative graph at the top.


Many people sign up for more than one silo – Facebook, Twitter, etc. – so it’s technically 1000 accounts, not users. I’d guess there are only 400-500 distinct users. It’s not always easy to tell the difference automatically, though, and 1000 is a nice round number, so I’m going with it.

Now, the stuff Bridgy actually does. Note that the second graph is log scale, since I threw in everything but the kitchen sink. Tufte would have a fit.


So much for the candy; now it’s time to eat my vegetables. What lessons have I learned so far? What has building Bridgy taught me?

Honestly, I’m not sure. I can’t think of anything particularly interesting or insightful. We need a balanced meal, though, so here are some thoughts.

First, the “scratch your own itch” thing really worked, at least in this case. I always knew what to build next, and how to prioritize, based on what I wanted for myself. Tellingly, the major features I added later – Bridgy Publish and webmentions for blogs – weren’t as strong itches for me personally, and correspondingly, each one has seen less uptake than the last. The graph ain’t lying.

Second, I want happy users, but tech support is no fun. When someone has a problem or a question, I try to find and fix (or automate away) the root cause, or at the very least update docs so the next person won’t need to ask. It’s not perfect, but I think it helps…

…or maybe it’s just that for most users, Bridgy is fire and forget. They sign up, poke at their user page, maybe skim the docs, and then never come back. They don’t need to. Comments and retweets and +1s start showing up on their web site automatically. This is my favorite kind of UI: none at all. And thank God for that, since I’m worthless at UI design.

Lastly, unit tests. I won’t go all rabid religious dogma on you, but man. The freedom to make big changes, refactor core logic, and then push that new code live, confident that it won’t break anything? It’s incredibly liberating. Not to mention that running them before deploying has saved my ass on more than one occasion.

Automated monitoring paged you at 3AM because the new release has a regression bug? Good. At least you caught it. Slept until morning because your tests caught the regression before it went live? Priceless.
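To make the “tests before deploy” habit concrete, here’s a minimal sketch of what that gate might look like. `safe_deploy` and the command names are my own hypothetical stand-ins, not Bridgy’s actual deploy script:

```python
# A minimal pre-deploy gate: run the test suite, and only deploy if it passes.
# Hypothetical sketch; the real commands would be project-specific.
import subprocess


def safe_deploy(test_cmd, deploy_cmd, run=subprocess.call):
    """Run the test suite; only run the deploy command if every test passes."""
    if run(test_cmd) != 0:
        print('Tests failed; aborting deploy.')
        return False
    print('Tests passed; deploying.')
    return run(deploy_cmd) == 0
```

The point is simply that the deploy command never runs when the test suite fails. In practice this can be as small as a shell one-liner like `run_tests && deploy`, but the principle is the same.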

Anyway. Contributors are still cranking away, and I’ve been pushing out a steady trickle of tweaks and bug fixes, but that’s slowed down recently. None of the remaining feature requests are above my itch threshold, so they may not happen anytime soon. I happily accept pull requests, but otherwise, Bridgy is basically on ops autopilot right now.

Keep it up, Bridgy! Here’s to 1000 more!


Your data, our data

Biotech is all the rage these days. Fitbit and friends burn VC money at one end, Craig Venter captures imaginations and press cycles and grant funding at the other, and an endless array of startups, big pharma, and research foundations fill the middle to bursting. Sequencing genomes is sexy. Folding proteins, stimulating neurons, and hunting superbugs are all sexy.

Even so, my favorite recent biotech project isn’t a startup, new product, or breakthrough in the lab. It’s a government bureaucracy, working on compliance, using billing data. No joke.

OK, that’s a bit unfair. It does at least have a Hollywood-killer-robot name: Mini-Sentinel. NPR did a great overview, but in short, it’s an FDA project that mines anonymized(ish) medical records from over half of the American population to discover unexpected drug side effects and reactions.

It’s a noble goal, but admittedly, it’s pretty straightforward big data. The bureaucracy itself is actually the part that gets me worked up. The research community salivates at meta-analyses like these, but the data has always been tied up in a straitjacket of regulation, liability, and trade secrecy FUD. Big providers like Kaiser and the VA do similar work on their own data, internally, but this is the first project I’ve seen that combines so much data from so many different sources.

It’s far from perfect, of course. The data itself is mainly billing codes, which is temptingly standardized but was never meant for medical research, and the methodology and results are still immature. Still, it’s a big deal.

At a higher level, Mini-Sentinel illustrates what I and others think may be the next defining question of our generation: how do we balance the individual’s right to privacy and control over their data with society’s need to use it for the public good?

Medical records have always skewed toward the individual. Projects like Mini-Sentinel may gradually erode that, but laws like HIPAA and valid privacy expectations will make progress slow and halting.

Other fields like advertising, and more recently national security, have skewed toward the group. The US has never enjoyed clear data privacy protection, unlike the EU, so credit bureaus and online ad networks have run rampant. Likewise, Snowden showed us just how brazenly the NSA has ignored the 4th Amendment in its epic, 9/11-fueled land (and budget) grab.

The waters are murkier elsewhere. Google’s web search, for example, is a public good that most of us depend on every day, but the EU has strong-armed it recently with a “right to be forgotten” doctrine for individuals. Similarly, Google’s environmental map projects are powerful forces for positive change, but Street View has to blur faces and license plates to protect privacy.

In particular, I can’t wait to see how we rethink public and semi-public spaces. US courts have consistently protected our right to take photographs and audio recordings from streets and other public places, but I don’t expect that to last forever. Drones may deliver our packages soon, but they can also hover outside our windows and record video, eventually even through curtains. (Hello, TSA backscatter scanners!)

This is nothing less than the tragedy of the commons in reverse. Everyone loves an underdog, and right now that underdog is individual privacy, under threat from big bad corporate and government wolves. I worry that our bloodlust may lead us to muzzle important, worthy projects like Mini-Sentinel. On the other hand, I also worry about our ongoing failure to rein in power-thirsty behemoths like the NSA and tech oligopolies and provide meaningful privacy and data rights for individuals.

The silver lining, at least, is that we’re talking about the question. We may not be framing it quite the way I’d like, as a balance between individual and group rights, but that’s ok. We’ll be struggling with it for a while to come.


Thought experiment

Say you have two apples. The first is local, organic, sustainable, grown on a family farm and delivered to your neighborhood farmer’s market by a smiling Johnny Appleseed. The second was grown with GMO seeds and pesticides by the farm-industrial complex and sold to a big box supermarket.

However. Nutritionally, chemically, down to the atom, the apples are identical. They had identical impacts on the environment. All workers involved were paid the same living wage and treated equally. As far as their effects on the world, and on your body when you eat them, the apples are indistinguishable.

Is the organic apple still “better” somehow? If so, how?

Yes, I know this is impossible. It’s a thought experiment. Not a politically correct one, granted, but humor me. I obviously have my own opinion, but I’m sincerely curious what you all think. Is there an inherent, ineffable righteousness to local-organic-sustainable? Or is it just the concrete differences to nutrition (if any), environment, and socioeconomics that matter? We’re working toward eliminating those differences, but will that ever be enough?

Which apple would you choose, and why?


Polytics

Stop me if you’ve heard this one. When the last presidential campaign rolled around, Barack Obama wasn’t excited about business as usual. Instead, his campaign hired a bunch of smart techies, whipped databases into shape and mined them until they cried uncle, raised more money and mobilized more volunteers than ever before, and coasted to victory while reporters fawned over his Internet savvy and hard-nosed number crunching.

Oh, you’ve heard it? How about this one: Obama was originally against gay marriage, but he famously changed his mind a couple years ago when public support gained critical mass. His opponents seized the opportunity to label him a “flip-flopper”, implying that he didn’t stand up for his beliefs.

I think they’re both symptoms of the same thing: the Internet. We all get excited that Obama, MoveOn.org, and other groups use social media and big data to mobilize and react to voters, volunteers, and donors. At the same time, legislators cry foul over gotcha journalism, ballot-box budgeting, and paper-thin politicians who change their positions at the slightest breeze of likes and retweets.

The criticisms resonate with me. We already worry that our politicians are gridlocked, inauthentic, and don’t follow through on their promises. A/B testing and instant online feedback will make this even worse, right?

I wonder. The whole point of representative democracy is for our elected officials to represent us, to make laws and interact with other officials in the ways we want. Historically, we’ve only had a handful of tools for this: the heavy sledgehammer of elections, the press grenade we can’t hope to aim, opinion polls about as accurate as rusted-out nails, and finally the tiny eyeglass screwdrivers: letters, phone calls, and town hall meetings with actual constituents. These tools may get the job done, but they’re not great.

Technology can clearly help. Online petitions and micro-elections, direct communication over social media, and instantaneous, 24/7 press cycles now give politicians a continuous stream of feedback. Groups like Code for America, Data.gov, and GovHack praise this new world of empowered citizens. They’re usually silent on the drawbacks, though, such as flip-flopping.

Take Obama’s gay marriage reversal. I don’t know if modern politicians change their minds more often due to technology, but if they do, would that be so bad? Say you supported gay marriage before Obama did. Aren’t you glad he came around eventually, instead of holding firm due to outdated public opinion polls – or worse, to avoid being accused of waffling?

Here’s a question: what if Obama didn’t actually change his mind at all? What if he changed his public position, but still personally opposed gay marriage? Would it matter whether he supported it in private or not, as long as he consistently supported it in public and voted for it?

The knee jerk reaction is, of course it would matter! But I don’t know. If his public actions were indistinguishable, then arguably it wouldn’t matter at all. Of course, that’s pretty unlikely. In reality, your personal beliefs motivate you to work hard and fight for what you believe in. Caring begets results.

Realistically though, most politicians care about getting re-elected more than any single issue. Modern technology lets them take our pulse faster, cheaper, and better than ever before. If that helps them hew more closely to our desires so they can win elections, all the better.

As usual, the conclusion isn’t that technology is inherently good or bad for politics, but that it amplifies and accelerates what we already want. In this case, though, I think that is inherently good. Representative democracy is all about getting us the government we want. If technology helps, so much the better! It may not solve the money part, or the gridlock or media echo chamber or any number of other problems, but I’ll take what I can get.

(Of course, that raises the question: is the government we want actually the government that’s best for us? I’m not convinced. But that’s a post for another day…)


B corps

Over the last few years, a third of the US states have passed laws creating a new type of company: the benefit corporation, or B corp. I love them. They rock. I want one when I grow up.

Traditional companies are expected to maximize shareholder profits as their primary goal, a maxim that has been upheld in the courts. This makes them reluctant to pursue social goods or other initiatives that might detract from shareholder value.

B corps, on the other hand, are measured by the general public benefit they provide. Shareholders and boards of directors judge them on whether they have a “material positive impact on society and the environment,” explicitly allowing them – requiring them! – to work toward social good instead of profit alone.

I’ve never had much patience for the “big companies are evil, capitalism is evil” dogma. It’s a shallow, intellectually lazy idea that we all fall for because it tells a good story. Who doesn’t love an easy scapegoat? Sadly, it’s all too easy to dismiss different perspectives on the world as “evil” instead of actually engaging with them. At a high enough level, I think we all want (very roughly) the same things; we just disagree on specifics like which things come first and how to approach them.

In the same vein, I think individual people and organizations matter less than we think, and environments and incentives matter way more. Unfortunately, complex systems are notoriously hard to get right, and even harder to change without causing more problems than you solve. Capitalism is one of the most successful systems the world has ever seen, but despite a steady stream of tweaks over the last couple centuries – central banks, antitrust laws, labor unions, investor regulations – it’s not out of the woods yet.

My knee jerk reaction to “big companies are evil” is to blame the system, but it’s always nagged at me. Fixing systems is hard, and capitalism is a beast. How the hell do we do it? Historical alternatives like socialism and fascism clearly don’t work. The Occupy movement’s General Assembly is just the latest incarnation of older ideas that are interesting but impractical. Modern China may be the most promising candidate yet, with its hierarchy and experimental meritocracy, but it’s far too early to tell.

In the interest of incremental development, B corps are my new favorite answer. They’re not perfect, to be sure. They’re not even strictly necessary! Corporations aren’t actually required to maximize shareholder profit as strictly as we think, and B corps could create a dangerous double standard. Regardless, perception is reality. If we think corporations are required to profit above all else, that chilling effect is hard to fight without addressing it explicitly. B corps are a blowtorch. Bring on the flames!


The paperless office arrived, and no one noticed

We used to hear a lot about the paperless office. Remember that? Back then we printed reports, filled out forms by hand, sent memos and shopped from catalogs and cursed junk mail. We used paper for pictures and greeting cards and other things we loved, too, but it was still a hassle sometimes. When the early days of the internet showed us a glimpse of a better way, we jumped at the idea.

That idea’s time has come. We now do business over email (or better), buy things online, download tickets to our phones, share pictures on screens and store them in the cloud. Computers may have spawned even more paper during their “terrible twos,” but that’s pretty clearly over. We can debate whether trading less paper for more energy is a net win for the environment, but it’s largely academic. We do need more sustainable energy, but we’re not going to give up technology and fire up the pulp mills again. That paper boat has sailed.

The part that interests me is the public awareness. For a while, tons of ink was spilled (onto paper!) heralding the arrival of the paperless office. Society didn’t quite cooperate, though, and the press gradually turned to other topics, looking back with scorn in year end top ten lists and “where are they now” pieces until the very phrase “paperless office” became a joke.

It wasn’t a joke, though. It arrived – we’re living in it! – but it didn’t arrive on the pundits’ schedule. It was a gradual societal shift, which is basically kryptonite for the press. Journalists understand events. They can dig up stories and report them. Slow, long term changes? Those are left to historians and intellectuals in musty, forgotten libraries. (Science reporting has the same problem, with an extra poison pill: individual papers and discoveries seem like meaningful scientific events, but often aren’t.)

This pattern isn’t unique to the paperless office, of course. Predicting the future is fraught in general, but predicting slow societal shifts is especially thankless. Even if you agree on the criteria, it usually takes a generation or more to see if you were right. Not many people have attention spans that long. I know I don’t.

Fortunately, people smarter than me are thinking about this too. The Long Now Foundation is one of my favorites. They encourage people to think on the scale of millennia, and they put their money where their mouth is with projects like Long Bets and the 10,000 Year Clock. They’re doing great work.

As for the paperless office, I’m writing this on a computer, you’re reading it on one, and if you know me, the last time we interacted was probably on a computer too. Like most societal shifts, it doesn’t matter too much if anyone noticed that the paperless office arrived, or how long it took. The important part is that it’s here.


Software in 2014

Tim Bray’s Software in 2014 is a great survey of the state of software engineering, particularly on the server (good) and the client (bad). I’ve spent my fair share of time in both places, and my experiences match up with his conclusions perfectly.

Beyond cheerleading, my main reaction is to consider why server side development is so much better than client side these days. On the client, we’ve seen a massive tectonic shift over the last 5-10 years. Win32 had a comfortable monopoly for decades, hangers-on notwithstanding, but that ocean is well and truly boiled, and clearly for the better. The new trio of web, iOS, and Android has done some pretty amazing things for end user technology. It takes time for tools and best practices to shake out of young new platforms, though. You know it’s saying something when the web is the most mature of the bunch.

The server side seas, on the other hand, have stayed cool and calm. Tools and languages have improved steadily, and the rise of APIs and app platforms and DVCSes and cloud computing has done wonders for code reuse and modularity. The spectre of multi core and concurrency still haunts us, but we have weapons to fight it with now, and they’ll only get better.

At a high level, these changes have clearly been good for users of both client and server software. The difference is that server developers are their own users, more or less, while client developers are not. Reinventing the client platform from scratch was an epic Spolsky’s Folly that client developers will be digging out of for decades, and general purpose computing may not make it out alive, but it’s already a big net win for end users, businesses, and most other constituencies. It’s not always all about us engineers, and that’s a good thing.


Understanding relativity

I think I finally understood relativity, just now, thanks to the first half hour of Prof. Nima Arkani-Hamed’s Future of Fundamental Physics lecture series. Space-time too. Quantum mechanics is up next, but I’m not holding my breath for that one. Let’s just say I’m not a full-fledged physicist!

I’m still curious about one thing. Arkani-Hamed begins his explanation of relativity at 21:20 with an assumption: if you believe there’s a limit on how far objects can affect each other, at least within a given amount of time, i.e. how fast they can send a signal or travel…then lots of implications, culminating in relativity.
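For the curious, here’s roughly where that assumption leads – my own paraphrase, not the lecture’s notation. Demanding that every observer agrees on the maximum signal speed c forces the spacetime interval to be frame-independent, and the only linear transformations that preserve it are the Lorentz transformations:

```latex
% All inertial observers agree on the interval:
%   s^2 = c^2 t^2 - x^2
% The linear transformations preserving s^2 are:
x' = \gamma \, (x - v t), \qquad
t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2 / c^2}}
```

Time dilation, length contraction, and the rest of special relativity fall out of these two equations.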

I understand the intuition behind this – it’s definitely something we’d naturally want to believe – but plenty (most?) of modern physics is counterintuitive. Why did they take this one intuition for granted?

I’m sure I’m just missing something simple, and it’s still a great talk series so far. Looking forward to the rest!


Nearsighted painting

If you’re nearsighted, this scene will look familiar. You may not think about it much, but I’d bet it’s a fairly intimate part of your life. It’s how you see the world without your contacts or glasses on, when you’re just waking up or just going to bed or shivering out in the street at 2AM because your apartment building’s fire alarm jolted you awake and you forgot to grab your glasses on your way out the door. In other words, at your most vulnerable.

