
What If: Ditching Social Security Numbers for Personal ID Keys

I’ve been thinking about this discussion on a National ID and the end of using Social Security Numbers. We’re used to having these 9 digit numbers represent us for loans, credit card transactions, etc., but in the modern age one would think we could do better.

Any replacement for Social Security Numbers would need to be secure, reduce the chances of identity theft, withstand fraud, and not be scannable without the owner’s knowledge (so that a person can’t be tracked as they go from place to place). The ACLU has a list of 5 problems with National ID cards, which I largely agree with (though some — namely the database of all Americans — already exist in some forms (SSN, DMV, Facebook) and are probably inevitable).

In an ideal world, we’d have a solution in place that offered a degree of security, and there are technical ways we could accomplish this. The problem with technical solutions is that not every person would necessarily benefit (there are still plenty of Americans without easy access to computers), and technical solutions tend to add complexity for many. However, generations are getting more technically comfortable (maybe not literate, but at least accustomed to being around smartphones and gadgets), and it should be possible to design solutions that require zero technical expertise, so let’s imagine what could be for a moment.

Personal ID Keys

Every year we have to renew the registration on our cars, and every so many years we have to renew our driver’s license cards. So we’re used to that sort of thing. What if we had just one more thing to renew: a Personal ID Key that went on our physical keychain, next to the car keys? Not an ID number to remember or a card that can be read by any passing security guard, police officer, or device with an RFID scanner, but a single physical key with a safe, private crypto key inside, a USB port on the outside, that’s always with us.

I’m thinking something like a Yubikey, a simple physical key without any identifiable information on the outside that can always be carried with you. It would have one USB port on the outside and a single button (more on this in a minute). You’d receive one along with a PIN. People already have to remember PINs for bank accounts and mobile phones, so it’s a familiar concept.

Under the hood, this might be based around PGP or a similar private/public key cryptography system, but for the purpose of this “What if,” we’re going to leave that as an implementation detail and focus on the user experience. (Though an advantage of using PGP is that a central government database of all keys is not needed for all this to work.)

When you receive your Personal ID Key and your PIN (which could be changed through your computer, DMV, or some other place), it’s all set up for you, ready to be used. So how is it used? What benefits does this really give? Well, there’s a few I can think of.

Signing Documents

When applying for a home loan or credit card agreement, or when otherwise digitally signing a contract online, you’d use your Personal ID Key. Simply place it in the USB port and press the activation button on the key. You’ll have a short period of time to type your PIN on the screen. That’s it, you’re done. A digital signature is attached to the document, identifying you, the date, and the time. That can be verified later, and can’t be impersonated by anyone else, whether by a malicious employee in the company or a hacker half-way across the world.
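To make that concrete, here’s a rough sketch, in Python, of the kind of detached signature a key like this might produce and verify. It uses the cryptography package; the helper names are mine, the key is generated in software only to keep the example self-contained (on a real Personal ID Key, the private key would never leave the hardware), and none of this is meant to be the actual scheme.

from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa


def sign_document(private_key, document: bytes) -> tuple[bytes, str]:
    # Sign the document plus a timestamp, recording when it was signed.
    timestamp = datetime.now(timezone.utc).isoformat()
    signature = private_key.sign(
        document + timestamp.encode(),
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    return signature, timestamp


def verify_document(public_key, document: bytes, signature: bytes,
                    timestamp: str) -> bool:
    # Anyone with the signer's public key can check this later.
    try:
        public_key.verify(
            signature,
            document + timestamp.encode(),
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False


# Generated in software here only for the example. A real key would live
# in tamper-resistant hardware and only ever hand back signatures.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
contract = b"I agree to the terms of this home loan."
sig, ts = sign_document(key, contract)
print(verify_document(key.public_key(), contract, sig, ts))  # True

The point is that the signature binds the signer, the document, and the time together, and only the holder of the private key could have produced it.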

Replacing Passwords

People are terrible when it comes to passwords. They’ll use their birthdate or their pet’s name for their computer and for every site on the Internet. More technical people try to solve this with password management products, but good luck getting the average person to do this. I’ve tried.

This can be largely addressed with a Personal ID Key and the necessary browser infrastructure. Imagine logging into your GMail account by typing your username, placing your key in the USB port on any computer, pressing the activation button, and typing your PIN. No simple passwords that can be cracked, and no complex passwords that you’d have to write down somewhere. No passwords!

Actually, for some sites, this is possible today with Yubikeys (to some degree). Modern browsers and sites supporting a standard called U2F (such as any service by Google) allow the use of keys like this to help authenticate you securely into accounts. It’s wonderful, and it should be everywhere. Granted, in these cases they’re used as a form of two-factor authentication rather than as a replacement for a password. However, server administrators using Yubikeys can set things up to log into remote servers using nothing but the key and a PIN, and this is the model I’d envision for websites of the future. It’s safe, it’s secure, it’s easy.
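For the curious, the core of that model is just a challenge and a signature. Here’s a deliberately stripped-down sketch in Python (using Ed25519 from the cryptography package); real U2F/FIDO adds origin checks, attestation, and counters on top of this, so treat it as the idea, not the protocol.

import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the site stores the public half of the key for your account.
device_key = Ed25519PrivateKey.generate()        # lives on the token
stored_public_key = device_key.public_key()      # lives on the server

# Login: the server sends a one-time random challenge...
challenge = os.urandom(32)

# ...the token signs it (after the button press and PIN)...
assertion = device_key.sign(challenge)

# ...and the server checks the signature against what it stored.
try:
    stored_public_key.verify(assertion, challenge)
    print("Login accepted")
except InvalidSignature:
    print("Login rejected")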

Replacing the Key If Things Go Wrong

Inevitably, someone’s going to lose their key, and that’s bad. You don’t want someone else to have access to it, especially if they can guess your PIN. So there needs to be a process for replacing your key at a place like the DMV. This is just one idea of how this would work:

  • Immediately upon discovering your key is gone, you could go online or call a toll-free number to report it lost. This would lead to an appointment at the DMV (or some other place) to get a new key, but in the meantime your old key would be flagged as lost, which would prevent documents from being signed and prevent logging into systems.
  • Marking your key as lost would give you a special, lengthy, time-limited PIN that could be used to re-activate your key (in case you found out you left it in your other pants).
  • The owner of the key would need to arrive at the DMV (or wherever), prove they are who they say they are, and fill out a form for a new key. This would result in a new private key, and would require going through a recovery process for any online accounts. It’s important here that a person cannot pretend to be someone else and claim a new key.
  • Once officially requested at the DMV, the old key would be revoked and could no longer be used for anything.

Replacing the Key If Standards Change

Technology changes, and a Personal ID Key will inevitably become out of date. We’ve gone through this with credit cards, though. Every so often, the credit card company sends out a new card with new information, and sites have to be updated. Personal ID Keys wouldn’t have to be much different. Get a new one in the mail, and go through converting your accounts. Sites would need to know about the new key, so there’d need to be a key replacement process, but that’s doable.

Back to Reality

This all could work, but in reality we have infrastructure problems. I don’t mean standards support in browsers or websites. That’s all fixable. I mean the processes by which people actually apply for loans, open bank accounts, etc. These are all still very heavily paper-based, and there’s not always going to be a USB port to plug into.

Standards on tablets and phones (in terms of port connectors and capabilities) would have to be worked out. iPads and iPhones currently use Lightning, whereas most phones use a form of USB. Who knows, in a year even Apple’s devices might be on USB 3, but then we’re still dealing with different types of USB ports across the market, with no idea what a future USB 4 or 5 would look like. So this complicates matters.

Some of this will surely evolve. Just as Square made it easy for anyone to start accepting credit card payments, someone will build a device that makes it trivial to accept and verify signatures, portably. If the country moved to a Personal ID Key, and there was demand for supporting it, devices would adapt. Software and services would adapt.

So I think we could get there, and I think such a key could actually solve a lot of problems, particularly compared to Social Security Numbers and a National ID Card. Whether people would accept it, and how difficult it would be to get everyone on-board with it, I have no idea, but if designed just right, we could take some major steps toward personal digital security and fraud protection in this country.

Remembering the Line Ride

I spent the holidays at Disneyland this year with my girlfriend and my family. We stood in numerous lines for hours on end during the busiest week of the year, waiting to see Disney’s take on classic rides such as the Haunted Mansion and Small World.

Their take was fantastic, but this post is not about that.

Standing in line for the Haunted Mansion, listening to people murmur about how agonizing the lines were, it dawned on me that not everybody understood nor appreciated the true origins of these amazing amusement parks. My sister certainly didn’t know, and neither did my girlfriend.

You may not either, so allow me to share a bit of history.

Back to the middle ages

Much of what we’ve come to enjoy in amusement parks originated from fairs in the Middle Ages [1]. The food, the shows. They were further inspired over time by other events and inventions throughout the centuries that followed. One of the innovations in amusement technology that really sparked the modern era of amusement park rides was a classic mechanical ride, the steam-powered carousel, built by Thomas Bradshaw at the Aylsham Fair in 1861 [2].

The problem with technological innovations is that they overshadow the simpler pleasures that came before them.

The Line Ride

Long before the carousel, in 1733, people enjoyed a simpler tradition. The humble fairgrounds in those days were unlike the marvels we have today, but were still full of events for children and adults of all ages.

One of the most beloved traditions in those days was known as the Rope Line Ride, or the Line Ride for short. Long lines of rope, attached to tall stakes in the ground, would be laid out in all sorts of patterns, forming paths for the kids to traverse. Common patterns included the spiral, the back-and-forth, and the weave.

Participating in the Line Ride was simple. A person would start at one end, following the line, seeing where it took them (by a garden, perhaps, or a wall of funny drawings), eventually coming out on the other side.

Remember, these were the days when Kick the Can and Hoop Rolling were all the rage. The Line Ride was so popular that it was often nearly full of people, but this gave them time to socialize and join together in admiration of their surroundings.

Evolution of the Line Ride

Times change, as they often do. While it was once a fun and common attraction, the younger generations began to grow weary of the Line Ride. In 1861, Thomas Bradshaw, the aforementioned inventor of the steam-powered carousel, forever changed the Line Ride by making it a means to an end. He put the carousel at the very end of the Aylsham Fair Line Ride.

Now, instead of simply enjoying the Line Ride for what it was, people were passing through it, with great impatience, just to get to the all-new steam-powered carousel.

A new tradition was born. The Line Ride was no longer an attraction in itself, but rather simply the Line, a way to control the flow of people leading up to an attraction. This was seen as a very controversial change in its day — after all, the Line Ride was a tradition going back over a hundred years — and with it came a distrust of the newer attractions by the older generations. Of course, time passed, and the Line became the norm.

The spirit carries on

While often forgotten as an attraction, the Line Ride’s spirit remains today in our terminology and our parks. We’re all familiar with celebrities walking down the rope line, or hearing about people “working the rope line.”

And, of course, the long, grueling lines leading up to the popular attractions at amusement parks and carnivals around the world.

Using Freshdesk with PagerDuty for Better Customer Support

At Beanbag, we’ve been using Freshdesk to handle customer support for Review Board, Power Pack, and RBCommons.

We’ve also been using PagerDuty to inform us on any critical events, such as servers going down, memory/CPU load, or security updates we need to apply to our servers.

Our customers’ problems are just as important to us as our servers’ problems, but we lacked a good way to really get our attention when our customers needed it most. After we started using PagerDuty, the solution became obvious: integrate PagerDuty with Freshdesk! But how? Neither side had any native integration with the other.

Enter Freshdesk webhooks

Freshdesk’s webhooks support is pretty awesome. Not only can you set them up for any custom condition you like, but the payload they send is completely customizable, allowing you to easily construct an API request to another service – like PagerDuty!

This is super useful. It means you won’t need any sort of proxy service or custom script to be set up on your server. All you need are your Freshdesk and PagerDuty accounts.

Deciding on your setup

There are probably many ways you can configure these two to talk, and we played with a couple configurations. Here are the general rules we settled on:

  • Only integrate PagerDuty for paying customers (with whom we have support contracts). We don’t want alerts from random people e-mailing us.
  • When a paying customer opens a new ticket or replies to an existing ticket, assign it to a “Premium Support” group, and create an alert in PagerDuty.
  • When an agent replies to a ticket in “Premium Support” or marks it as resolved, resolve the alert in PagerDuty.

We always resolve the alert instead of acknowledging it, in order to prevent PagerDuty from auto-unacknowledging it a period of time after the agent replies. When the customer replies again, it will just reuse the same alert ID, instead of creating a new alert.
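If you want to see what that looks like outside of Freshdesk, here’s a little Python sketch (using the requests package) of the same trigger/resolve pair the rules below will send. The ticket number and company are made up; the important part is that both calls share one incident_key, so PagerDuty treats them as the same alert.

import requests

PAGERDUTY_URL = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
SERVICE_KEY = "YOUR SERVICE KEY GOES HERE"
incident_key = "freshdesk_ticket_12345"

# Customer opens or replies to a ticket: trigger (or re-trigger) the alert.
requests.post(PAGERDUTY_URL, json={
    "service_key": SERVICE_KEY,
    "event_type": "trigger",
    "incident_key": incident_key,
    "description": "Ticket ID 12345 from Example Corp: Server won't start",
})

# Agent replies or marks the ticket resolved: resolve that same alert.
requests.post(PAGERDUTY_URL, json={
    "service_key": SERVICE_KEY,
    "event_type": "resolve",
    "incident_key": incident_key,
    "description": "Agent resolved ticket 12345",
})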

Also note that we’re setting things up to alert all of our support staff (all two of us founders) on any important tickets. You may wish to adjust these rules to do something a bit more fine-grained.

Okay, let’s get this set up.

Setting up PagerDuty

We’re going to create a custom Service in PagerDuty. First, log into PagerDuty and click “Escalation Policies” at the top. Then click the New Escalation Policy button.

Name this policy something like “Premium Support Tickets,” and assign your agents.

Next, click “Services” at the top. Then click the Add New Service button and set these fields:

[Screenshot: the new Service’s settings]

Click Add Service. Make a note of the Service API Key. You’ll need this for later.

Edit your service and uncheck the Incident Ack Timeout and Incident Auto-Resolution checkboxes. Click Save.

Optionally, configure some webhooks to point to other services you want to notify. For instance, we added Slack, so that we’ll instantly see any support requests in-chat.

Okay, you’re done here. Let’s move on to Freshdesk.

Setting up Freshdesk

Freshdesk is going to require four rules: One Dispatch’r and three Observers.

I’ll provide screenshots on this as a reference, along with some code and URLs you can copy/paste.

Start by logging in and going to the Administration page.

Dispatch’r Rule: Alert PagerDuty for important customer tickets

Click “Dispatch’r” and add a new rule.

[Screenshot: Dispatch’r rule settings]

Set the Rule name and Description to whatever you like. We added a little reminder in the description saying that this must be updated as we add customers.

For the conditions, we’re matching based on company names we’ve created in Freshdesk for our customers. You may instead want to base this on Product, From E-mail, Contact Name, or whatever you like.

For the Callback URL, use https://events.pagerduty.com/generic/2010-04-15/create_event.json. Keep note of this, because you’ll use it for all the payloads you’ll set.

Now set the Content to be the following.

{
    "service_key": "YOUR SECRET KEY GOES HERE",
    "event_type": "trigger",
    "description": "Ticket ID {{ticket.id}} from {{ticket.requester.company_name}}: {{ticket.subject}}",
    "incident_key": "freshdesk_ticket_{{ticket.id}}",
    "client": "Freshdesk",
    "client_url": "{{ticket.url}}",
    "details": {
        "ticket ID": "{{ticket.id}}",
        "status": "{{ticket.status}}",
        "priority": "{{ticket.priority}}",
        "type": "{{ticket.ticket_type}}",
        "due by": "{{ticket.due_by_time}}",
        "requester": "{{ticket.requester.name}}",
        "requester e-mail": "{{ticket.from_email}}"
    }
}

Make note of the whole YOUR SERVICE KEY GOES HERE part in line 2. Remember the service key in PagerDuty? You’ll set that here. You’ll also need to do this for all the webhook payloads I show you from here on out.

Go ahead and add any other actions you may want (such as adding watchers, or setting the priority), and click Save.

Click Reorder and place that rule at the top.

Setting up Freshdesk Observer

Now we need to set up a few observers. Go back to the Administration page and click “Observer.” We’ll be adding three new rules.

Observer Rule #1: Resolve PagerDuty alerts on close

Add an event. You’ll set:

[Screenshot: Resolve on Close rule settings]

Use the same Callback URL as earlier, and set the Content to:

{
    "service_key": "YOUR SERVICE KEY GOES HERE",
    "event_type": "resolve",
    "description": "{{ticket.agent.name}} resolved ticket {{ticket.id}}: {{ticket.subject}}",
    "incident_key": "freshdesk_ticket_{{ticket.id}}",
    "details": {
        "ticket ID": "{{ticket.id}}",
        "status": "{{ticket.status}}",
        "priority": "{{ticket.priority}}",
        "type": "{{ticket.ticket_type}}",
        "due by": "{{ticket.due_by_time}}",
        "requester": "{{ticket.requester.name}}",
        "requester e-mail": "{{ticket.from_email}}"
    }
}

Don’t forget that service key!

Observer Rule #2: Resolve PagerDuty alerts on agent reply

Let’s add a new event. This one will resolve your PagerDuty alert when an agent replies to it.

[Screenshot: Resolve on Reply rule settings]

Again, same Callback URL, with this Content (and your service key):

{
    "service_key": "YOUR SERVICE KEY GOES HERE",
    "event_type": "resolve",
    "description": "{{ticket.agent.name}} acknowledged ticket {{ticket.id}}: {{ticket.subject}}",
    "incident_key": "freshdesk_ticket_{{ticket.id}}",
    "details": {
        "ticket ID": "{{ticket.id}}",
        "status": "{{ticket.status}}",
        "priority": "{{ticket.priority}}",
        "type": "{{ticket.ticket_type}}",
        "due by": "{{ticket.due_by_time}}",
        "requester": "{{ticket.requester.name}}",
        "requester e-mail": "{{ticket.from_email}}"
    }
}

Observer Rule #3: Alert PagerDuty on customer reply

[Screenshot: Alert on Customer Reply rule settings]

Here’s your Content:

{
    "service_key": "YOUR SERVICE KEY GOES HERE",
    "event_type": "trigger",
    "description": "{{ticket.agent.name}} acknowledged ticket {{ticket.id}}: {{ticket.subject}}",
    "incident_key": "freshdesk_ticket_{{ticket.id}}",
    "details": {
        "ticket ID": "{{ticket.id}}",
        "status": "{{ticket.status}}",
        "priority": "{{ticket.priority}}",
        "type": "{{ticket.ticket_type}}",
        "due by": "{{ticket.due_by_time}}",
        "requester": "{{ticket.requester.name}}",
        "requester e-mail": "{{ticket.from_email}}"
    }
}

Done!

You should now be set. Any incoming tickets that match the conditions you set in the Dispatch’r rule will be tracked by PagerDuty.

Now you have no excuse for missing those important support tickets! And your customers will thank you for it.

A new adventure begins

Act 1, Scene 1

August 23rd, 2004. A young kid, not even 21, freshly dropped out of college, passionate about open source and programming. He walks into his new office at his new job at VMware, his first job, ready to start the day, eager to impress and meet his new co-workers.

Nobody was there. Thumbs twiddled.

10AM starts to roll around, and finally, the first sign of life. Over the next couple hours, more people show up.

Over the next week, he’s set up and learning the ropes. Working on his first bug, soon his first feature. Attending his first team get-togethers. Making his first Bay Area friends.

Over the next few months, his first birthday celebration at work. His first glass of champagne. His first real responsibilities.

Over the next few years, bigger roles, leadership roles. He began to get a feel for where he was truly going in this silly little world.

This, of course, was me, on my first adventure in the tech industry.

I was lucky to be placed in a fantastic team full of smart, hard-working, dedicated, and fun software engineers and managers. We’d discuss architecture, brainstorm ideas, joke around, watch YouTube videos, play poker, watch movies, go to events. The web of awesome people extended throughout the company as well.

Over the past nine years, I worked on a great many things.

  • Eight releases of VMware Workstation, including a three-year effort to build Workstation 8.0 (a major undertaking).
  • VMware Server 1.0. I was the primary Linux developer, pulling caffeine-fueled all-nighters to meet insane deadlines.
  • Player and VMRC, which powers the VM console for our enterprise products.
  • The core foundation used in Fusion and other products.
  • Icons and artwork for the Linux products.
  • I introduced Unity to Workstation. (Sorry, guys…)
  • Helped in the creation of the current generation of the View client for Linux.
  • More recently, I developed WSX, an experiment in developing a pure web client and console for accessing remote VMs anywhere, from desktops and tablets.

Not a bad run.

This Thursday, August 1st, 2013, I’ll be leaving VMware.

Revision 1: “Add the reviewboard”

Several years ago, I began working with my good friend David Trowbridge on an open source project for keeping track of patches and easing the review process. We spent many years in the open source world looking at raw diffs on bug trackers and in e-mails, and things weren’t that much better at VMware. As Mr. Wonderful says, “There has to be a better way!”

So we slaved away in the late nights and weekends, iterating and iterating until we had something we could use. We named this product “Review Board” (or “the reviewboard,” as our first commit says). We put it out there for people to play with, if anyone was interested.

There was interest. Review Board is now used around the world at companies big and small. We’ve continued to improve and grow the product and turn it into something that developers actually want to use.

We later built a startup around this. Beanbag.

It’s dangerous to go alone. Take this.

Earlier this year, we met a local entrepreneur as part of a program we participate in. We quickly developed a rapport, and he offered to help and advise us in our efforts to grow our business. It wasn’t long after that we started discussing funding, and where that could get us.

We started pitching, and he reached out to his contacts. Before long, we had what we needed to give this a try for a couple years.

Step 3: Profit?

There’s a lot of hard work ahead of us, but we’re up to the challenge. It’s both exciting and terrifying.

Leaving my team behind at VMware is hard, but everyone has been so supportive.

[Photo]

Basically.

In the coming months, Review Board’s going to grow in exciting new ways. We’ll be gearing up for a new 1.8 release, releasing our first commercial extension to Review Board, and improving our SaaS, RBCommons. We have a pretty good idea where we want to go from here, and now we can better focus on making it happen.

It’s going to be an awesome adventure.

VMware WSX 1.0.1, and the new Community Page

Last month, we released WSX 1.0. Those following along with the beta knew what to expect, as it was largely our latest Tech Preview release with some more fixes thrown in.

Unfortunately, we also threw in a regression that we’ve since been working to fix. The console would, at times, stop displaying anything, just appearing black. Clicking the little Refresh button would fix it, but it was annoying and, to me personally, quite embarrassing.

Today I’m happy to announce that we’ve released WSX 1.0.1, which has fixes for the black screen issue, and also support for Windows domains in usernames (indicated by “MYDOMAIN\username”) when logging in.

Along with the release, we’ve also introduced the new WSX Community Page, where you’ll be able to find the latest releases, documentation, and discussions on WSX. I’ll be on there, as will some of our QA, to answer questions.

WSX, Meet Retina.

On Friday the 16th, an angel in white, glowing robes delivered a shiny new iPad to my desk, as heavenly music played softly in the background. (I may be misremembering the details.)

The most talked about feature of the new iPad is, of course, the shiny new retina display (a 2048×1536 resolution). A few apps really show this off, and text is certainly crisp, but a few people wondered aloud, “Is it really that big of a difference?” Yes, it is.

Naturally, I had to play around with getting WSX to show a retina-friendly desktop. See, by default, everything is scaled up 2x to simulate the resolution of the original iPad (1024×768), but they have some support in there for loading higher-resolution images. Turns out, with some tricks, you can also make the canvas retina-friendly.

So let me show off what my desktop here looks like with some apps open on the iPad 1.

Okay, that’s a bit crowded, but it’s only a 1024×768 resolution (minus some UI at the top of the screen). How about with the retina display?

Wooo. Looks pretty awesome, right?

Of course, the problem is that everything is very tiny. This is usable if you increase the DPI a bit, but I’m thinking about some magnifying support now. Still, pretty cool.

A Proud Moment: VMware Workstation 8

Today is kind of a career highlight for me. A moment I’m especially proud of. We just released VMware Workstation 8. Code-named “Nitrogen,” this release has been in the planning stages since around the time I joined VMware 7 years ago. It has been in active development for the past 3 years. Easily the longest development cycle we’ve had for Workstation, but also easily the best release we’ve ever done.

Previous users of Workstation will notice quite a lot of improvements to this release. We have a lot of changes, but I want to go into a few that I’ve worked on over the past three years, which I think are of particular interest.

Remoting

Remote Server Connection

This is the big one.

Workstation 8 can share VMs with other Workstation 8 clients. You can run VMs on one system (say, a beefy desktop machine in the back room) and access them from another (say, a lightweight laptop). All the processing happens on the machine running the VMs. They can be made to start up along with the system, so you don’t even need Workstation running. You don’t even need X (on Linux).

Users of VMware Server or GSX should find this familiar. We’ve essentially superseded the Server product with this release, with more features than Server ever had. For instance, one client can connect to multiple servers at once, alongside all your existing VMs.

That’s not all, though. You can also connect to ESXi/vSphere. As a developer, this is something I take advantage of nearly every day. I have an ESXi box running in my back room with several VMs for testing, and a couple for in-home servers. By running on ESXi, I minimize the overhead of a standard operating system, and gain a bunch of management capabilities, but previously I had to use vSphere Client to connect to it. Now I can just talk to it with Workstation.

Hear that, Linux admins? You don’t need vSphere Client running on Windows to connect to your ESXi/vSphere box anymore. That’s a big deal. (Unless you need to do some more advanced management tasks — we’re more about using the VMs, and light customization).

VM Uploading

We also make it easy to upload VMs to an ESXi/vSphere box. Connect to another server, drag a local VM onto it, and the VM will convert and upload directly to it. Super easy. Developing a VM locally and putting it up on a server as needed is just a simple drag-and-drop operation now.

No More Teams

Thumbnail Bar

Teams was a feature that we’ve wanted to rework for a long time. For those who aren’t familiar with them, Teams was a way to group several related VMs together (say, parts of a test server deployment) such that they could be viewed at the same time with a little live thumbnail bar. It offered some support for private virtual networks between them, with each NIC being able to simulate packet loss and different bandwidth limits.

We felt that these features shouldn’t have been made specific to “special” VMs like they were, so we tore the whole thing apart while preserving all the features.

Now, every VM’s NIC can simulate packet loss and bandwidth limits. Any VMs already together in some folder or other part of the inventory can be viewed together with live thumbnails, just like Teams. Any VM on the local system can be part of any other VM’s private virtual network.

It’s much more flexible. The restrictions are gone, and we’re back to using standard VMs, not special “Team VMs.”

Inventory Improvements

Inventory Filtering

You may have noticed the search field in the inventory in my screenshots. You can now filter the listed VMs by different criteria. Show the powered on VMs, the favorites, or search for VMs. Searching will take into account their name, guest OS, or data in the Description field in the VM. The Description searching is particularly helpful, if you’re good at documenting/listing what’s in a VM that you may care about (IE6, for instance).

Favorites

Favorites was reworked. It used to be that every VM in the sidebar was a “favorite.” Now we list the actual local VMs, and we don’t call them favorites. Instead, you can mark one of the listed VMs as a favorite (by clicking a little star beside it) and filter on that.

UI Improvements

Folder Thumbnail View

We’ve streamlined the UI quite a bit. All our menus are smaller and better organized. Our summary pages are cleaner and highlight the major things you want to see.

We have new ways of navigating your VMs, which is especially handy on large servers. You now get a tab for any folder-like node in the inventory showing your VMs in either a list view (with info on power states) or a zoomable live thumbnail view showing what’s happening on each VM.

And Much More

That’s just a few of the major things. There’s many, many more things in this release, but the official release notes will cover that better than me. (Honestly, I’ve been developing and using this release for so long, it’s hard to even remember what was added!)

Tip of the Hat

A lot of great people worked on this release. The engineers who developed the various components across the company. The QA groups who provided valuable testing to make sure this was a solid release. The product marketing and management teams who kept us going and helped draft the goals of this release and market it. The doc writers who spent countless hours documenting all the things we’ve done. Upper management who allowed us to take a risk with this version. Our beta testers who went through and gave us good feedback and sanity checks. And many others who I’m sure I’m forgetting.

I said this already, but I’m so proud of this release and what we’ve accomplished. More effort went into this than you would believe, and I really think it shows.

And now that we’re done, we’re on to brainstorming the next few years of Workstation.

I Invented Port Knocking

Let me tell you about something that’s been bothering me for a while.

I invented Port Knocking. No, really. In 2002.

According to portknocking.org, it was invented by Martin Krzywinski in 2003. I’m not here to argue that he didn’t come up with the idea separately and choose the same name (it’s a pretty good name for the technology). But I do want to make it clear, for the record.

Wait, hold on, what’s Port Knocking?

Oh, got ahead of myself there.

Port Knocking is a security method where you can cloak a network completely (close all ports or put them in stealth mode) and yet still allow access from any computer in the world, by way of a sequence of “knocks” on a predefined list of ports.

The server can specify a list of ports (say, 53, 91, 2005, 2131, 7) and monitor to see if there are attempts to open them. If an outside computer accesses each of these ports in sequence, without hitting any other ports, and within a time period, the server can open a select set of ports (separate from the knock list) to that IP address only.

In my original design, after a successful knock sequence and before the real ports were opened, an authentication service would be opened on a predefined port. The client would have to connect to it and exchange credentials before the ports would open.
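Just to make the idea concrete, here’s what the client side of a knock can look like, as a small Python sketch. The ports and address are only examples, and a real server would watch its firewall logs (or raw packets) for the sequence; this only shows that “knocking” is nothing more than a series of doomed connection attempts in the right order.

import socket
import time

KNOCK_PORTS = [53, 91, 2005, 2131, 7]   # the predefined knock sequence
SERVER = "203.0.113.10"                  # example address only


def knock(server: str, ports: list[int], delay: float = 0.5) -> None:
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            # The ports are closed or stealthed, so this is expected to
            # fail; the server only cares that the attempt was seen.
            s.connect((server, port))
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(delay)


knock(SERVER, KNOCK_PORTS)
# If the server saw that exact sequence within its time window, it could
# now open the authentication port (and then the real ports) to this IP.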

And why the controversy?

First, some history.

In mid-2002, I was 18 and interested in security, amongst other things. Along with writing code for Pidgin (then Gaim), and a couple other projects, I was fooling around with firewalls and such.

I had this idea one morning while in the shower to add another layer of security. I really wanted to be able to completely close off my network, but still access it when out of town. I can’t tell you how it came into my head. Just a moment of inspiration. I wasn’t even really looking for another project, just brainstorming, but I liked the idea too much. I started writing code and made it work.

It was a while before I discussed it publicly on my old blog on Advogato. There are many posts, but I’ll highlight a couple here, where I introduce what I was working on:

The blog is full of lots of old teenage angst, so ignore most of it, but I spend the next few weeks going over my progress, answering questions from people who are asking for more information, etc. I was very open about it.

At one point a couple of months later, I realized this was stupid. I had a good idea. I should patent it. I took it down for a while. This was after I had already put up the source code, though, and many people had it.

Now, in retrospect, I should have made this into a full-on open source project, gained the recognition myself, and continued development. But I was too busy with other things and didn’t really want another major product on my hands. I remember at one point I thought, “maybe I can sell this to a security company, or patent it!”

And since then…

One day, I opened a magazine and saw “port knocking” on the cover. My heart skipped a beat. Somebody wrote an article on my port knocking! I opened the magazine and read through it. “Invented by… Martin Krzywinski? What?!” I re-read it to make sure. It was all my terminology, my methods. I was floored.

By that point, he made a name for himself as the inventor. And again, I’m not trying to discredit him, because he very well may have come up with the same thing separately. But it stung, because I had a great idea, a year before he wrote a paper on it, and I didn’t promote it the way he did.

Lesson learned

This is one of those life lessons. You always regret what happened, but you use it to make better decisions in the future. These days, I’m happy working on some awesome products. My day job at VMware and my highly successful code review software, Review Board (for which we’ve recently started a company).

Now, if I have a good idea, I make sure it’s heard, and demonstrated, far and wide. Truly great ideas don’t really come that often, so when you have one, make sure you do something with it, or you may end up regretting it for years to come.

Sentience discovered in the Linux kernel

Ladies and gentlemen, after much experimentation, I have made a remarkable discovery. Perhaps the very first case of a sentient AI has been discovered, sitting right under our noses, in the Linux Kernel. With such a complicated codebase that has evolved greatly over the years, there are certainly more surprising places for it to spring up, but it’s still quite unexpected.

And where, specifically, has this sentience manifested itself? The suspend/resume code.

See now, like many of you, I’ve dealt with the instabilities of suspend/resume. I’ve considered it to just be buggy, unreliable, and possibly incompatible with my hardware. That is, until I realized that there’s a pattern. One that began to make a sort of sense.

A couple months back, I gave suspend/resume another shot, and to my surprise it worked. I figured that Ubuntu 10.04 finally fixed it, but it still wasn’t perfect. I still noticed problems.

The first thing I noticed was that when I unsuspended at work, I couldn’t use my volume keys. Everything else was fine, but my laptop’s volume keys didn’t register as a key press on anything. If I suspended again and brought it back home, the keys would work fine. If I suspended at home and resumed at home, I wouldn’t have the volume key problem. Weird, but just buggy, right?

It was a couple of weekends ago when I suspended my laptop to take it somewhere. It wouldn’t suspend at all. Just hard-locked. This continued until the work week started, when it worked again. Last weekend? Same problem, couldn’t suspend. Monday, it worked fine.

It was then that I realized suspend/resume was breaking deliberately! See, my laptop feels more comfortable at home, less so at work (which it tolerates, with some complaining), and it absolutely doesn’t want to leave during the weekend. It’s like a cat that just wants to be in a familiar environment, selfishly vying for your attention through mischievous acts. Look at it hard enough and the pattern emerges. It’s undeniable.

That got me thinking. What other possible instances of AI have we been misconstruing as bugs or random glitches? All those inter-connected street lights that occasionally shut off as you walk underneath them? Maybe they’re just shy, or they hate you. Maybe NES cartridges just found being blown stimulating.

So remember guys. Windows suspend/resume may work just fine. Mac too. But Linux’s suspend/resume isn’t a buggy pile of crap. It’s an intelligent buggy pile of crap, that just wants to be loved.

Looking Back on Review Board

Just over 3.5 years ago, David Trowbridge and I spent some time discussing the annoyances of the typical patch submission and code review processes in the open source projects we participated in and at companies, and decided to play with some ideas for improving this. At the time, we knew very little about what we intended to do. We had a name for it pretty early on, but that was about all we had. We didn’t even know whether we’d get past an early prototyping stage. But here it is, over 3 years later, and we have the leading open source code review tool with an active support and development community, hundreds of companies using it, and exciting new innovations for aiding in the code review process.

I was thinking a few days ago about how far we’ve come and some of the decisions we made along the way. I went digging through our commit history in order to relive some of the past of our little project. Since so few people were even aware of Review Board’s existence at the time, I thought I’d share some of our history with you. Particularly the interesting and funny bits.

“Add the reviewboard.”

Commit #1. The very first thing we put in our Subversion tree on September 27, 2006. I don’t even remember what was in this change now. We transitioned to Git last year, and this commit is now just plain empty. Maybe it was just the directory structure? Who can say.

Early on, we didn’t refer to “Review Board” as a proper name. It was generally “the reviewboard” or something similar. The codebase was young. We didn’t actually do code review on the project at this point (and it shows!). The first few months are littered with odd or nonsensical commit messages, small breakages, and bad decisions.

A few of my favorite commit messages are:

  • “I suck. Make submitting of reviews.”
  • “Don’t stuff the list of files in the bug list. It’s impolite.”
  • “Avoid failing out with Christian’s wacko form”
  • “Gum.”
  • “Holy apple pancakes. It worked!”
  • “I suck… The array was empty… The tests never had a chance to fail. :(”
  • “‘This is a summary’ sucks. Now we use fortune for the summary, description, and testing done. ‘You’re ugly and your mother dresses you funny.’”
  • “Unbreak things before ChipX86 notices”
  • “I’m just… garhgh”

Nowadays, our commit messages look nothing like that, but that’s the fun of a new project. You get to go commit-crazy while you try to figure out what you’re building.

Dashboard, quips and fortunes

The UI of old looked quite different than the UI of today.

We had a dashboard from the very beginning (before the review request pages, even), but it wasn’t anything like the dashboard we have today. It was a simple page with a table containing all outgoing review requests and a table containing incoming review requests. But it also had one more thing: quips.

The beginnings of the quips functionality were being built. Quips are just little random quotes that are inserted into the UI. I think the plan was to put quips on certain pages, making Review Board a little more fun. We were using them in the dashboard for empty lists, with variations all saying something about the dashboard being empty. Quips are a neat feature that just never survived the early days of development.

Fortunes are similar. On Linux/Unix systems, there’s a little program called “fortune” that just displays a random quote. Since we at first had to test review request functionality without actually having a repository backend of any sort, and we didn’t want to input all the information each time, we just used fortune to generate the summary, description and testing done text. This made for some really funny review requests early on, but this is of course something that had no reason to survive initial development.

Sometimes we would create a bunch of review requests just to see what kind of quotes we’d get. 🙂

Multiple repositories? Almost didn’t happen.

One of the really critical parts of Review Board today is the ability to talk to a variety of different types of repositories in one instance. But, it turns out, this almost didn’t happen.

The initial goals were not that ambitious. Review Board talked to one repository per instance. Everything was basically hard-coded with one repository in mind. That type of repository, as well as its information, was customizable. You just couldn’t have more than one. At the time, this wasn’t a problem, but it didn’t take long until we had a need to talk to two repositories.

We discussed this and at first decided that if we needed to talk to two repositories, we could just set up two instances. It would have been a lot of work to update it for multiple repositories, after all. And really, this was a small project. Who would really need more than a couple repositories? This started to nag at me, though, and so I spent a couple nights rewriting all of the code as an experiment. It ended up working pretty nicely, and we were able to ditch the multi-instance model.

The importance of rewards

It’s always nice to have a little reward for milestones. Developers sometimes compete over cool bug numbers, revisions, etc. Initially, we were going to use quips to add some fun to the site, but we ended up settling on our current trophy system.

One of our first Review Board instances started to approach review request #1000, which was a huge milestone for us. I decided to commemorate the event by staying up and quickly hacking in a hidden feature for showing a trophy for review request #1000. The way we implemented it, you’d see the first ever trophy at 1,000, and from there you’d see it at every milestone number (1,000, 2,000, 3,000, 10,000, etc.). I didn’t want to stop there, though, so I added support for a second type of trophy, one that has confused people with its appearance to this day. Mission complete.
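If you’re wondering what “every milestone number” meant in practice, the rule was presumably something like this (a guess from the examples above, not the actual code): the ID has to be at least 1,000 and “round,” meaning everything after the first digit is a zero.

def is_milestone(review_request_id: int) -> bool:
    # A guess at the rule from the examples above: at least 1,000, and
    # every digit after the first is a zero (1,000, 2,000, ..., 10,000).
    digits = str(review_request_id)
    return review_request_id >= 1000 and set(digits[1:]) == {"0"}


print([n for n in (999, 1000, 1500, 2000, 10000) if is_milestone(n)])
# [1000, 2000, 10000]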

Of course, when we updated the server and someone finally hit 1000, it triggered a bug in the new trophy code and broke his review request. Oh well, I tried.

Diff viewers are hard

If I could pick one point during the whole history of Review Board where I was ready to completely give up, it would be during the creation of our diff viewer. All three diff viewers.

See, the first diff viewer was a complete and total hack. We generated a side-by-side diff using the diff tools and just parsed the output, basically generating a table of that. It was ugly, though, and limiting. It also caused problems where text on a row would either be truncated or would break the parser. I spent a long time working on this before I totally gave up and went on to try a new approach.

My second approach was closer to what we have today, but also limiting and very, very buggy. We were using Python’s built-in diff generation module, which implements a basic diff algorithm. It gave us insert and delete information, but not replace information. We had to hack that in ourselves, and it was really a hack. Try taking a bunch of inserts and deletes and find out which of those are really changed lines. No, really, try it. It’s harder than you think, and it’ll often be wrong.
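To give a flavor of what that hack involved (this is a from-memory reconstruction, not the actual Review Board code), here’s the kind of pairing we were stuck doing: take difflib’s plain “-” and “+” runs and guess that a delete immediately followed by an insert was really a changed line.

import difflib


def guess_replacements(old_lines, new_lines):
    """Split ndiff output into plain deletes, inserts, and guessed changes."""
    deletes, inserts, changes = [], [], []
    pending_deletes = []
    for line in difflib.ndiff(old_lines, new_lines):
        tag, text = line[:2], line[2:]
        if tag == "- ":
            pending_deletes.append(text)
        elif tag == "+ ":
            if pending_deletes:
                # Pair this insert with the oldest unmatched delete.
                changes.append((pending_deletes.pop(0), text))
            else:
                inserts.append(text)
        elif tag == "  ":
            # Hit an unchanged line: anything still pending really was deleted.
            deletes.extend(pending_deletes)
            pending_deletes = []
        # "? " hint lines are ignored in this sketch.
    deletes.extend(pending_deletes)
    return deletes, inserts, changes


old = ["def foo():", "    return 1", "", "print(foo())"]
new = ["def foo():", "    return 2", "", "print(foo())", "print('done')"]
print(guess_replacements(old, new))
# ([], ["print('done')"], [('    return 1', '    return 2')])

Even a toy version like this guesses wrong as soon as the inserts and deletes don’t line up one-to-one, which is exactly the problem we kept running into.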

Still, we stuck with this for a long time. It was slow, buggy, and didn’t generate the sort of output people expected from diff tools. Most people see diffs from GNU Diff, which implements the Myers diff algorithm (with a few additions and tweaks). These Myers diffs are much nicer to view than what Python gave us. Another problem we hit was that we didn’t have real line number information, so we had to output fake line numbers. They weren’t really line numbers so much as row numbers in the table. Ugh. Even getting this far was really hard and frustrating, and the result still wasn’t good.

Attempt #3. I decided to build our own diff parser and generator from scratch. What a project. I knew nothing about diff generation and hardly knew where to start. I spent probably a good month or so just trying to work on this new diff code, and was so close to giving up so many times. It ended up being completely worth it, though, as we ended up with a very nice, extensible diff parser.

Without that third attempt, we’d be in the stone age. Review Board would not be as nice to use. We wouldn’t have inter-line diffs (where we highlight what changed in a replace line), syntax highlighting, move detection (coming in 1.5), or function/class headers (where we show which function/class the part of the diff is in — also coming in 1.5).

What else…

Well, there’s a lot more I could talk about. Our initial attempts at JavaScript code for the UI, our trials and challenges with database migration, or our early problems storing diffs with different encodings in databases. This is getting long, though, so I’ll cover these in another post on lessons learned.
