XBox Live UX Fail

November 2, 2010

The Story

Today I received an email from Microsoft letting me know that in about a month, my XBox Live Gold membership was going to automatically renew. I really appreciated the notice, because I don’t own an XBox any more, so it was time to cancel.

I clicked the link provided in the email, navigated the site to the page with the link to manage my account billing, and clicked that link. “Oops, an error occurred.” I navigated back and tried something else. “Oops, an error occurred.” In fact, everything I tried to do on the site, aside from looking at the list of things I could (not) do, caused an error.

Fine, I thought, I’ll just call the phone number. I closed the window out of frustration and picked up the phone. As I navigated the phone tree, I was kindly reminded that I’d need details from my account, such as my gamertag. Well shit, I don’t remember that stuff, so I logged back in.

I was greeted with a page insisting I accept their new Terms of Service. I had no way to proceed to the rest of the site without checking a box saying I’d read them, and then clicking a button to proceed. So I did that, and a horde of trained mice swarmed over their database to update the bit on my user account record that said I’d agreed to the new terms.

Greeted, once again, with the page that offered me all the things I could do on their website, I clicked the link to manage my billing details. It worked just fine. After hanging up the phone, I went through the multi-page attempt to save me from cancellation (“But you won’t be able to watch Netflix on your XBox. Have you heard about these games? Maybe you want to go month-to-month?”) and turned off my auto-renewal setting.

The Fail

There were actually two parts to the UX fail. The first was the lame, and at one point seemingly never-ending, attempt to save me from cancelling my auto-renewal. I just wanted to turn off a setting, and I had to click through about five pages of completely unrelated bullshit to get to the switch that let me do that. The second part of the fail was a bit more subtle, and is probably a bug in the XBox Live site.

When companies change their terms of service, they like to ensure that you cannot do anything until you have accepted them. Products with a piece of client software (e.g. EVE Online, iTunes) can simply disallow access until you’ve clicked the appropriate widget. But websites are a bit more complicated. The vendor cannot control your web surfing experience; all they can control is what content they display to you. At first blush, that seems like a simple problem: just redirect if the user hasn’t accepted the TOS. But when your site has several possible paths a user can follow to enter, that simple solution becomes vastly more complex.

My theory of what happened is that the link I clicked in my email went through a route in their site that was not wrapped in the code to see if I’d accepted the new TOS. That allowed me access to my user portal. However, all of the functionality was coded to verify I had accepted the TOS and (probably) raise an exception if I had not. So, everything I tried to do gave me an unexpected error. But, when I logged out, and came in through the usual route, I was presented with the TOS, the flag was set, and everything worked.
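If that theory is right, the failure mode is easy to reproduce in miniature. Here is a minimal Python sketch, with every name invented (this is speculation about the shape of the bug, not Microsoft’s actual code): TOS enforcement bolted onto each handler individually, plus one entry point that was never wrapped.

```python
# Toy reconstruction of the theory: per-handler TOS checks, with one
# unguarded route. All names here are made up for illustration.

class TosNotAccepted(Exception):
    """Stand-in for the generic 'Oops, an error occurred' page."""

def requires_tos(handler):
    # Guard a handler behind the user's TOS-acceptance flag.
    def wrapped(user):
        if not user.get("accepted_tos"):
            raise TosNotAccepted("redirect to the TOS page")
        return handler(user)
    return wrapped

@requires_tos
def manage_billing(user):
    return "billing page"

def email_link_landing(user):
    # The bug: this route skips the check, so it shows the portal to a
    # user who hasn't accepted the new terms yet.
    return "portal home"

user = {"accepted_tos": False}
print(email_link_landing(user))      # the email link lets me in...
try:
    manage_billing(user)
except TosNotAccepted:
    print("Oops, an error occurred.")  # ...but every action errors

user["accepted_tos"] = True          # the front door makes me accept
print(manage_billing(user))          # and now everything works
```

In the sketch, the robust fix is to do the check at one chokepoint that every entry path passes through, instead of decorating handlers one by one.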

git svn fetch dies with "fatal: bad object"

November 1, 2010

At the day job, I use git-svn to interface with the SVN repository my team uses to manage its code. For the last several months that has gone along painlessly, but for the last couple of weeks I hadn’t been doing anything in the SVN repo, so I hadn’t been fetching. This morning, I wanted to stash a script in the repo, so the first thing I did was git svn fetch. But I was greeted with a beastly error:

Index mismatch: c2714646b0290e3e3c44a0d5f239e1849083aefe != e12bd4a7d89970506864ad2b1c5caeef0a6a0876
rereading f2260561581e98397c66b939d90efb2a351ce993

...SNIP: a bunch of SVN output...

fatal: bad object 5b32d4ac2e03a566830f30a702523c68dbdf715b
rev-list -1 4a9fb23c88083656f6019fdedd7dc3d61b12ea50 85d1c926ebf06809247ae33850a94641f784bb66 3cd61c68706584009288ffc6e0c51d9dffdf9e5c 94b0c26b28610ea96e8989f4be21c92a9d78ef96 d82148570448de76f11f6b47b64ca3d4ac69cd21 de7e70d23ae10c9025b66206962b0fe358752268 dd02022bac0c2127300c937d80dfa793503c484c 36e6caf0aec8ae79d1d40cb4687466cd2f9e3718 5b32d4ac2e03a566830f30a702523c68dbdf715b 796a816059111a495c8ca14836c9bf567cc19c2a a0c450588867bbe11363d7dd6eb8f610b21c7f6c 4c7b780c0369a17e9f166aaeb851d9e14ecaeeea 56215cbb64f791cd85bf6094c18033f8712960ea 2ac18c2a0f144e64ff03ffab0e8ab6e7c3dc08b5 ee3fefa770a98ef90c9696c7a635ffda8d762404 971afb2b2dbacb83ef96f68b3b84960ae97a6833 93900609d9ec96def83a43940c2722123ec5f73c 67edd1ed8ad06c79e39cbb631696077856961a84 --not f2260561581e98397c66b939d90efb2a351ce993: command returned error: 128

I tried it again, and got the same error. So, at least the behavior was deterministic. Indeed, running this pared down command yielded the same result:

$ git rev-list 5b32d4ac2e03a566830f30a702523c68dbdf715b
fatal: bad object 5b32d4ac2e03a566830f30a702523c68dbdf715b

Stepping through the code in git-svn, it became clear that the failing command was part of the code that tries to reconcile svn:mergeinfo properties with the git commit info that git-svn produces. So I began to develop a theory: I had recently run a git gc, and perhaps it had deleted a commit that was only referenced from git-svn’s information.
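That theory is easy to check in a throwaway repository (a toy example, nothing to do with the work repo): manufacture a commit that no ref points to, let git gc prune it, and rev-list fails on its name just like my fetch did.

```shell
# Toy repo: create an object nothing references, then prune it.
git init -q demo && cd demo
git config user.email you@example.com
git config user.name demo
git commit -q --allow-empty -m one

tree=$(git rev-parse 'HEAD^{tree}')
orphan=$(git commit-tree -m orphan "$tree")  # no ref ever points here

git rev-list -1 "$orphan"          # fine: the object still exists
git gc -q --prune=now              # unreachable objects get pruned
git rev-list -1 "$orphan" || true  # now fails: the object is gone
```

git-svn’s caches sit outside git’s ref graph, which is exactly why gc considered that commit fair game.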

From inside the .git directory, I ran this:

$ find . -exec grep -Hin 5b32d4ac2e03a566830f30a702523c68dbdf715b {} \;
Binary file ./svn/.caches/lookup_svn_merge.db matches
Binary file ./svn/.caches/check_cherry_pick.db matches

That pretty much confirmed my suspicion. I removed those files (they’re caches, right?), and then my fetch was able to proceed successfully.
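Spelled out, the fix was just this, run from the top of the working copy (the paths are the ones the find turned up, relative to .git):

```shell
# Both files are derived caches, so deleting them is safe; git-svn
# rebuilds them during the next fetch.
rm -f .git/svn/.caches/lookup_svn_merge.db \
      .git/svn/.caches/check_cherry_pick.db
# then re-run: git svn fetch
```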

Interview Question: Combinations

April 15, 2010

So I had a job interview yesterday with a great company, and I met a lot of awesome people. One question, presumably asked to test my approach to algorithm design, caught me off guard, and I didn’t give the best answer I was capable of. It bugged me the whole flight home, so I whipped up a better answer. It seemed like the perfect kind of thing to post here on my blog.

The problem: generate all combinations (without repetition) of length len of the numbers from 1 to max.

Hearing that problem, my mind immediately jumped to all of my combinatorics background, and I started thinking about how to count all of those combinations. I really just could not get my mind out of that little rat hole, but my interviewers coaxed me out. By the time we were done with that part of the interview, we had come up with something that was on the right track, but still wouldn’t work.

A few simplifying assumptions can be made. Since these are combinations, order doesn’t matter (e.g. [1,2,3] is the same as [3,2,1]). Also, since they do not have repetition, we can place an ordering constraint on the individual elements. So, for the 3-element combinations, we can say we want all combinations [z,y,x] where 1 ≤ x < y < z ≤ max (the tuple is backwards for convenience of implementation; you could do it the other way just as easily).

So with those two assumptions in mind, I stubbed out my function.

combinations :: Int -> Int -> [[Int]]
combinations len max = undefined

I had gotten three-quarters of the way toward a working implementation during my interview, so I was already leaning toward a recursive solution here. But, when I keyed in what I’d come up with earlier, it wasn’t working right. I was getting things like [2,2] which should just not show up. I also could not do something like combinations 3 3 and get anything back. I clearly had some boundary issues. So, I decided to actually write out the sets I was expecting and see if I saw any patterns.

combinations 0 3 => [[]]
combinations 1 3 => [[1],[2],[3]]
combinations 2 3 => [[2,1],[3,1],[3,2]]
combinations 3 3 => [[3,2,1]]
combinations 4 3 => []

The most obvious thing is that there’s a clear relationship between the length of the combination and the number of combinations available, which is pretty basic combinatorics. There’s only one way to choose 3 items from a set of 3 items. But looking at this, I’m trying to conceive of some way to devise a recursive algorithm to produce those lists. So I rewrite the output to show how I would expect those to get built recursively.

combinations 0 3 => [[]]
combinations 1 3 => [1:[]] ++ [2:[]] ++ [3:[]]
combinations 2 3 => [2:[1]] ++ [3:[1], 3:[2]]
combinations 3 3 => [3:[2,1]]
combinations 4 3 => []

Now it might be apparent why I chose the ordering constraint I did: it makes it easy to build these lists with conses. The most important observation to make from this data is which numbers actually get selected to be consed. At each level of recursion we select only the numbers between len and max to be added onto lists, and then we recurse with all the numbers less than those.

Here is the final implementation:

combinations :: Int -> Int -> [[Int]]
combinations 0 _ = [[]]
combinations len max = foldr reduce [] [len..max]
  where reduce x ys = recurse x ++ ys
        recurse x = prepend x (combinations (len - 1) (x - 1))
        prepend x = map (\xs -> x:xs)

you.would? like(:fries).with(that)

January 27, 2010

I was recently linked to an insightful article about how software is hard. The author makes several good points, but the one that sticks out most in my mind is the following:

Every software engineer has a low opinion of the way we develop software. Even the term “software engineering,” Rosenberg writes, is a statement of hope, not fact. He quotes the 1968 NATO Software Engineering Conference that coined the term: “We undoubtedly produce software by backward techniques.” “We build systems like the Wright brothers built airplanes–build the whole thing, push it off the cliff, let it crash, and start over again.” Certainly statements that could still be made forty years later.

That is so true, especially when it comes to the programming languages we use. Most of the languages with widespread use in industry today are essentially thirty years old, and even those are just the latest in a long lineage that leads back to FORTRAN and Lisp in the late fifties. Languages in today’s generation don’t do anything for you that their ancestors didn’t do forty years ago.

What’s worse is that more modern languages exist, and have existed for decades, yet they don’t find mainstream adoption. For the last twenty years the main thrust of innovation in the software industry has been focused on people and process rather than languages and tools. Modern languages are dismissed as academic, and industry experts are struggling to find new and better ways to build useful systems with ancient technology.

We are rapidly approaching a singularity after which the descendants of FORTRAN and Lisp will be grossly inadequate for writing programs. The current rise of concurrent programming is just the beginning. As time plods on and concurrency and distributed programming become more commonplace, we are going to have to develop new ways of writing software.

This really isn’t news. Software is hard. Concurrent software is harder. We know that. The solutions offered today are primitive, and not much different from the thinking over thirty years ago. Whether you use some shared-memory model or some message-passing model, writing concurrent programs is a mentally exhausting process with a lot of work centered around getting the parallelism right.

Another problem we face is the increasing ubiquity of software. In the modern, Western world it is nearly impossible to take ten steps without walking past at least a hundred semiconductors. Many of those little bits of manufactured silicon are running some sort of program (whether firmware or software). We passed the point where our lives became computerized a long time ago, and many people didn’t even notice. Your life relies on software in vehicles, hospitals and banks.

With software becoming so commonplace, the burden to produce quality software is greater than ever. Software verification is a critical area of the development process that has been shit on and forgotten for over forty years. Today’s state of the art in quality assurance is a combination of exploratory testing and developer-written regression suites. In other words, if you don’t think about verification, then none happens at all.

These problems are both huge. We must solve them, and wetware solutions will be woefully inadequate. We need to adopt and develop new languages and tools that directly address these issues. This is the area where we have regressed from forty years ago. Back then programmers quickly grew impatient with wetware solutions and would write software to automate things and introduce abstractions: programming languages, parser and scanner generators, build automation tools, build configuration tools, stream editors, and the list just goes on.

Somewhere in the last forty years, we just gave up. As an industry, we decided that languages and tool-chains were fundamentally solved problems, and all we needed to do was evolve the syntax and libraries. We have eschewed the virtuous path of software development in favor of a naively pragmatic alternative. Rather than working smarter, we try to work faster. Rather than finding smarter people, we just find more people. These days, there’s an alternative to working in food service: programming.

Layout, design, graphics, photography and text all © 2005-2010 Samuel Tesla unless otherwise noted.

Portions of the site layout use Yahoo! YUI Reset, Fonts & Grids.