Still Not Dead

November 19, 2008

So, the October ODYNUG meeting came and went, and it was great. My talk went well, despite my not having written my blogging tool.

It is amazing how busy a person can get with what seems like only a few things on his plate. Between my classes and work, I’ve been staying busy. I tried to do NaNoWriMo, but failed. I couldn’t even get a thousand words written because I am so busy.

I realized I hadn’t updated since September, and wanted to poke my head in and let you all know that in a month, I won’t be so busy. Come the middle of December, I’ll start working on blogerl again. I’ve made some design re-decisions. I’m basically going to be doing something similar to blosxom. It meshes well with my design goals, and I can even keep my posts in git if I want.

Not Dead Yet

September 22, 2008

I just wanted to poke my head in and say that I’m still around, and still working on/thinking about the erlang project. I am, after all, still giving the talk next month.

I am, however, getting super busy with school. So, I’m having to sit down and schedule all of my non-work programming time. It’s looking like the erlang project isn’t going to get much love until after the end of September.

That said, when I get a few free moments this week, I’ll post some code I’ve already written that makes a nice interface around shelling out from erlang.

Erlang Project: Storage Choice

September 11, 2008

It was suggested that I use CouchDB as the storage engine for blogerl. I’m not going to be using CouchDB for this project, and I would like to explain why.

My reason for choosing Git over CouchDB is really just a preference. I think Git is neat, and I want to use it in a novel way. I’d like to demonstrate how it is a very powerful tool and can be used to do more than just version software. I think CouchDB is cool, and at some point, I’ll probably do a project with it. But, for this project, it’s not what I want.

Edited: The suggestion mentioned above was in a comment, but as my blog has not supported comments for quite some time, I have removed the hyperlink.

Erlang Project: Stories

September 8, 2008

In my previous post, I discussed the goals and priorities for blogerl. This time I am going to brainstorm some stories and prioritize them. Then I’ll select the first few from the prioritized list to be my first iteration. The goal is to have something I can deploy so I can get the blog started as soon as possible.

My philosophy for stories is to give them titles that are short. The story names are simply mnemonics to assist with recalling conversations and other details. Since I’m a team of one, the only conversations will be those I have here with you. But after this post, I’ll probably just refer to the stories by their title.

Here are the stories in priority order:

Storage : I need some way to store my posts. I’m not very choosy about how this is done. But, because I am in an experimental mood, I’m going to opt for using Git. I’m probably going to end up using Grit via erlectricity. There’s already been a wiki built on top of Git, why not a blog?

RESTful interface : Like I said before, I don’t want to compose my posts in a <textarea> but I do need to be able to get content into my application somehow. So, I’m going to design a RESTful interface to my data store. This is going to have to include an authentication mechanism, since I don’t want just anybody to be able to update or edit my blog.

Emacs mode : My preferred program for editing anything is Emacs. So it seems only natural that I’d choose to write my first REST client as an emacs mode.

Index : A front page that shows the posts in reverse chronological order. Should be paginated with 10 posts to a page.

Single : Permalinked pages for each individual post.

RSS : Add an RSS feed for the main index.

Atom : Add an Atom feed for the main index.

Archives : Provide archives based on year and month.

Templates : Templates for each of the views. Not YAWS pages. This will involve creating my own template language. I don’t like ErlTL for this as it is too programmery.

Caching : Keep rendered content in a cache with a TTL. Expire the cache for the index page if a post is added, edited or deleted. Expire the cache for a single post if it is edited or deleted.

Tags : Be able to specify an arbitrary number of tags to associate with a post. Provide archive pages for each tag.

Comments : Provide comments. They do not need to be threaded. Plain, flat comments are sufficient.

Trackbacks : Provide the ability for other blogs to post trackbacks.

Post-dating : Give the ability to submit a post that will be published at a later date.

Markdown : Add support for formatting a post in the Markdown formatting language.

Textile : Add support for formatting a post in the textile formatting language.

Formatting default : Add a configuration option specifying the default formatting (which starts out as plain HTML).

The first six are going to be what I aim to complete before I tag an 0.1 and deploy to my webserver. That’ll give me a means to share my blog posts with people as a stream or individually via their browsers or via RSS feeds.

Erlang Project: Goals

September 6, 2008

First off, I’ve decided on a name for my project: blogerl. I will be hosting the source code here on GitHub.

With that bookkeeping out of the way, I’ll get to the meat of the post. I want to include you in my brainstorming process as I figure out what my goals are with this project. In my next post I’ll brainstorm features that will help me meet these goals and select a subset of those features to implement initially.

Before I can start brainstorming features, I need to figure out an overall vision for what I am trying to build. So the first thing I want to describe is the primary goal of this system, and possibly some secondary goals as well.

I want a system that will manage the storage and presentation of my blog content. That content will be primarily textual, and may be annotated with various pieces of meta-data (e.g. date, title, or tags).

That’s pretty vague, and I can design several extremely different systems that will deliver on that, so I’m going to provide some additional goals, in priority order, to narrow the design down.

Adding or editing content should be easy. : So many blogging tools are a pain to use. I’ll be honest, I’m not the biggest fan of the web application. Especially not for things like writing. I intend for Omniloquent to frequently have essays, and I don’t really like doing massive amounts of writing in a <textarea>.

Viewing the website should be fast. : This isn’t usually a problem in most modern blogging tools. However, I’m writing this one from scratch, so it’s going to be a little less modern at first. I don’t want people to be sitting around waiting for my content to load. I want it to be snappy. The closer to static content it feels, the happier I’ll be.

Adding or editing content should be fast. : The converse of the previous goal: I don’t want posting to take forever. This is why I hate Movable Type. The idea that I should have to rebuild dozens of pages when I just update a single post is ridiculous. However, I am willing to suffer a little, as I recognize that I’d rather it take me a second or two to post but have the blog be lightning fast for my readers. That said, the closer to instantaneous posting I can get, the happier I’ll be.

Prodigal Blogger Returns

September 5, 2008

Hi. I know it’s been a long time. Things have been crazy-busy for me all summer, and as usual, the blog fell by the wayside.

One of the things keeping me busy has been my new job. Yes, I got another new job. Since June, I’ve been working at Engine Yard as a developer on the Vertebra project, and recently I’ve taken over as development lead. It’s very exciting and fulfilling to be working on such an interesting problem.

I’ve also been speaking and traveling quite a bit. I spoke at both BarCampKC and BarCampOmaha, and gave my usual user group talks.

But the real reason I haven’t written since March is that I just haven’t had ideas about which I wanted to write. I finally decided that I needed to start writing even if I couldn’t think of anything to write about. Of course, at that moment, an idea occurred to me.

I’m going to be starting a series of posts over the next month. I’m working on a project for my Advanced Erlang talk next month at ODYNUG, and I’m going to blog about it.

Erlang Project: Kickoff

September 5, 2008

On October 14th, I will be giving a talk at ODYNUG about advanced erlang. Unlike my previous talks, this one is going to delve into the code of a real application. I’ll be discussing the architecture of the application and the reasons behind the design decisions.

Of course, in order to do that, I need an application. More specifically, I need an application for which I know all the reasons behind the design decisions. So, I’m going to write one, and I’m not a person to buck a tradition, so I’m going to write a blog.

I’ll be documenting the process here, and you can see the blog live here.

Don't Mandate Velocity

March 31, 2008

I came across this amusing (fictional) story about a broken watch, and its impact on the author’s development process.

I particularly liked one of the author’s comments in the comment thread:

Velocity is a measurement that comes out of development, not an input that goes into it.

[Uncle Bob] said . . . that when you are trying to measure something, it’s counter-productive to put pressure on it.

git-svn with svn:externals

March 8, 2008

I’ve really fallen head over heels in love with Git. But my original solution was really a hack. There is a better way to do it. In fact, it’s so much better, it comes with Git.

I took a look at git-svn when I was researching this a couple weeks ago, and the trouble that I had was that it didn’t fetch externals. Rather than figure out the problem, I just moved on with the working solution that I had. But, it bothered me. So I continued to research, and sure enough, Git has a way to do it just fine.

Nazar Aziz over at Panther Software posted an excellent guide for setting up a Rails app with plugins using git-svn and Git submodules. I am going to distill it and add a few notes about how to make your use of Git as unnoticeable to the other SVN users as possible.

Step One: Clone your externals

Git submodules are fantastic, but to use them you need Git repositories for each of your externals. Fortunately, you can easily clone them with git-svn. First, to list your externals:

$ svn propget svn:externals foo_plugin

Now you should make a directory to put your clones of these in.

$ mkdir ~/Projects/plugins

And then cloning them is as simple as this:

$ git svn clone <svn-url-of-foo_plugin> ~/Projects/plugins/foo_plugin

Step Two: Clone your SVN repository

The next step is to clone your repository sans-externals. We’ll use git-svn to do that, but we’ll use it in a slightly different manner. The Git folks recognize that there is a standard layout for SVN repositories. If you tell it where the trunk, branches and tags are kept relative to the URI you provide, it will try to preserve that information. It makes branches for each of the SVN branches, and it makes branches for the tags as well.

Just do this wherever you want your project to live. You may want to rename any SVN working copies you have so that there aren’t any naming conflicts.

$ git svn clone -T trunk -t tags -b branches <svn-url-of-app> app

That’ll give you a git repository named “app” in the current directory. The master branch will be a remote tracking branch that is set up to track trunk, and other branches are set up for any tags and branches.

Step Three: Hook up the submodules

Now that we’ve got Git repositories for all of the plugins, and a Git repository for our project, we can hook everything up. We will set up a submodule for the external we cloned above.

From within the top-level of your project repository do this:

$ git submodule add ~/Projects/plugins/foo_plugin vendor/plugins/foo_plugin

After you’ve added the submodule do this:

$ git submodule init
$ git submodule update

That should get the code from the plugin repository and into your project repository just like the external did.

Step Four: Cover your tracks

When you are using git-svn it commits all of your Git commits into SVN, and you don’t really want to commit anything into your Git that you don’t want finding its way into SVN (at least not on the branch that you commit to SVN from). But it is easy to set Git up to ignore all of the files.

First, let’s make sure Git ignores all the same things SVN was ignoring:

$ git svn show-ignore >> .git/info/exclude

Then open up .git/info/exclude and add these lines to it:

# .git/info/exclude
.gitignore
.gitmodules

That should prevent you from committing anything into SVN that is git-specific.

Step Five: Using this thing

So once you’re all set up, you’ll want to be able to interact with the SVN repository. Here are your two basic operations.

Update from SVN

$ git svn rebase

This works just like git-rebase, except it pulls from SVN instead of some other Git branch. It will not work if there are changes that have not been committed to Git. What it does is roll back all of the changes since the last time, and then update from SVN, then reapply the changes in order. If there are conflicts, you resolve them as you would if you were using git-rebase.

Commit to SVN

$ git svn dcommit

This will take all of the commits since your last time and commit them one at a time to SVN. This allows all those people still using SVN to see each individual commit instead of one monster commit.

I recommend using SVN to do anything more involved than simple adds, removes, renames and edits.

Something to be aware of with this set up is that your submodules are effectively frozen at whichever revision you cloned. If you want to update them, you’ll need to first update the cloned repository, and then run this command at the root of your repository:

$ git submodule update

Another caveat is that you need to keep your development as linear as you can. Don’t try to do anything crazy with lots of branches and merges between them. SVN can’t really make sense of it. The big deal here is you want to use git-rebase to pull in changes from SVN.

Here’s my workflow. I use a branch named work to do all of my work in. I will sync it up with SVN several times a day, just so it isn’t too stale. This is how I do that:

$ git checkout master
$ git svn rebase
$ git checkout work
$ git rebase master

Then, when I’ve committed all of my changes to my work branch, and I’m ready to commit to SVN:

$ git checkout master
$ git merge work
$ git svn rebase # Just to be safe
$ git svn dcommit

It works well, and it allows me to do my work disconnected from the network.

Broken Window in ActiveRecord: ActiveRecord::StatementInvalid

March 3, 2008

I love ruby, and I love Rails, but in some ways it really is a ghetto. It has a lot of broken windows that only serve to encourage bad coding from developers who should know better. Today I ran into an example of one of those broken windows and I was beside myself. I could not believe what I was reading.

One of the projects I work on for my employer is an import process that takes a long time. In order to make it resilient to database fail-overs, I wanted to catch the exception that is raised when the connection dies, wait a few seconds, and then try to reconnect. The idea is simple, and it works once I account for the broken window, but I am not pleased with the code I had to write.

When the database connection disappears, the database driver throws an exception. ActiveRecord::Base catches that exception and does this:

# Find this in Rails 2.0.2
# active_record/connection_adapter/abstract_adapter.rb:121

rescue Exception => e
  # Log message and raise exception.
  # Set last_verfication to 0, so that connection gets verified
  # upon reentering the request loop
  @last_verification = 0
  message = "#{e.class.name}: #{e.message}: #{sql}"
  log_info(message, name, 0)
  raise ActiveRecord::StatementInvalid, message

This is the exception handler that catches all exceptions raised during a query run by ActiveRecord. As you can see, it snags the class name, and the exception message off of the exception, and then throws the object away, reraising with ActiveRecord::StatementInvalid. So, if your database driver has hundreds of error codes which are provided in order for you to tell specifically what error occurred, such as Mysql::Error, you lost them.

So ActiveRecord provides one exception that covers everything from primary key violations to database connection errors, and the only way to distinguish them is by inspecting the message. Surely, that can’t be true, right? I dig further and find this:

# Find this in Rails 2.0.2
# active_record/connection_adapters/mysql_adapter.rb:244
# Note: I snipped the error message because it is very long

rescue ActiveRecord::StatementInvalid => exception
  if exception.message.split(":").first =~ /Packets out of order/
    raise ActiveRecord::StatementInvalid, snipped_error_message

That is just completely unacceptable. I can find it in my heart to forgive the abstract adapter for doing something that throws away implementation-specific information, but the Mysql adapter should remedy that. It willingly lets its exception information be cast aside and goes about inspecting what the abstract adapter had the decency to keep around.

“But that information is good enough to tell what the exception is,” you might say.

Until the Mysql folks change the error message. The Mysql API exposes numeric constants, and I’m sure they’re very careful to keep them the same, but do you think they take the same approach to error messages? I doubt it. They provide a function that will give you an error message given the numeric constant, and encourage you to use it. That’s what the Mysql bindings for ruby do.

Expecting developers to inspect the exception message is essentially promoting programming with magic numbers. Sure, they’re string literals, but they’re still duplicated information, and extremely brittle.

All I’d want is an inner_exception attribute available on ActiveRecord::StatementInvalid or maybe its parent, and then assign it when doing reraises. Is that too much to ask for?

SVN + Git + 1 = Still Awesome

February 22, 2008

Yesterday I wrote about using SVN and Git together to have version control away from the network your SVN server is on. Now, I’ll admit, I wrote that shortly after figuring it out and doing it. So, at that time, I hadn’t actually come back into the office and merged my changes with the repository and committed them. Having done that a couple of times now, I’m here to say that this setup is fantastic.

So using this system you’re either at the office, so you can use SVN, or you’re not, so you have to use Git. I’ll give steps to follow for each. You should probably read the documentation available from the Git website to familiarize yourself further with these commands.

These steps assume that you’ve made a Git branch using the following command:

$ git branch home

Taking your work home: SVN -> Git

Make sure your SVN working copy is as up to date as you want it. Ideally, commit any changes. But, if you’re in the middle of a change set, that’s fine.

$ git commit -a -m "Merging in changes from SVN since last commit"
$ git checkout home
$ git merge master

Now you’re ready to use Git to continue making your changes while you’re away from the office.

Getting back to work: Git -> SVN

Make sure your Git changes are all committed to the home branch.

$ git checkout master
$ svn up

Resolve any conflicts from SVN.

$ git merge home

Resolve any conflicts from Git.

Now you can continue with your changes using SVN, or commit them right away if they’re already perfect.

SVN + Git = Awesome

February 20, 2008

I’ve been a big fan of SVN for several years now. I even helped my former employer migrate from VSS a couple of years ago after I sold everybody on the idea. I have lots of love for SVN, but it has its limitations, especially the need to have network connectivity to a central repository. I know at least some people would love to have a way to still commit code when offline. So, here’s how I did it with SVN and Git.

Git is a DVCS that works differently than SVN. Instead of making changes in your working copy and submitting them to a central repository, your working copy is your repository. You can push changes to another repository or you can pull changes from another repository. It’s a nice way of working, and it’s what they use on the Linux kernel. The Git folks have a nice tutorial for SVN users.

The key feature of Git that makes it well suited for use alongside SVN is that it keeps all of its metadata in one folder at the top of your repository. It does not put one in each directory like SVN does. So you can make your SVN working copy into a Git repository and then ignore the folder and SVN knows nothing about it. Here’s what you do.

At the top level of your SVN working copy:

$ git init
$ echo .svn > .gitignore
$ git add *
$ git add .gitignore
$ git commit -m "Initial commit"

Now we just need to teach SVN to ignore the Git stuff. So open up your ~/.subversion/config file and find the [miscellany] section. You should see a commented out setting for global-ignores. Uncomment it and add .git* to it like this:

### Set global-ignores to a set of whitespace-delimited globs
### which Subversion will ignore in its 'status' output, and
### while importing or adding files and directories.

global-ignores = *.o *.lo *.la \#*\# .*.rej *.rej .*~ *~ .\#* .DS_Store .git*

And voila! When you’re able to connect to your SVN repo, you can use SVN. But when you’re offline and still want the ability to use version control to incrementally save your changes, you can use Git. They’re working on the same files, so they play together very nicely.

Crunch Mode

February 18, 2008

James Golick writes about crunch mode and how it can turn a team of even the best all-star code artists into mediocre programmers.

I have been on an agile team full of all-star programmers. Every one of them as bright as the sun, and every one of them dedicated to writing quality software. Sure, we inherited a legacy code base of nearly a million lines of not-so-all-star code, but that’s the same situation everybody else is in too, right? It took us a couple of years to really get the agile juices flowing, but once we did, it was great. Except for crunch mode.

I’ve seen what James is talking about first hand. I’ve been what James is talking about. What I’ve never understood is why crunch mode seems more appealing to managers than facilitating the things a team needs to truly deliver on the promise of constantly shippable code. To me, the value proposition of being able to ship after any week- or month-long iteration is a big win over going into crunch mode to hit a date.

But I’m a developer, what do I know about managing projects, right?

Beauty is Important

February 18, 2008

I was reminded of one of my favorite quotes today.

Beauty is more important in computing than anywhere else in technology because software is so complicated. Beauty is the ultimate defense against complexity.

Machine Beauty: Elegance and the Heart of Computing, David Gelernter

String transforms using Enumerable#inject

February 15, 2008

I love functional programming, and I love Ruby. One of the most awesome things about Ruby is how much it borrows from the functional programming mindset. One of the most powerful concepts that functional programming brings to the table is higher-order functions. Ruby’s Enumerable module is a great example of how it embraces the idea of higher-order functions to abstract out the various things you do with a collection and let you focus on the operation for each item.

One of the most mysterious methods on Enumerable is Enumerable#inject. The example that’s always given is this:

irb> [1, 2, 3, 4].inject(0) {|sum, i| sum + i}
=> 10

That’s fine, and usually makes sense. But when you try to branch out into more esoteric uses of inject, it can get confusing. So I’m going to give an example of accomplishing something useful with inject that you hopefully find useful.

I always find myself doing a sequence of substitutions on a string. For example, when I implement a Telnet client, I like to normalize the line endings I’m sending so that they’re sane. I accomplish that by translating “\r\n” to “\n”, then translating “\r” to “\n”, then translating “\n” to “\r\n”. It’s a simple thing to do, and I could do it like this:

string.gsub("\r\n", "\n").gsub("\r", "\n").gsub("\n", "\r\n")

But that’s not very extensible. I’d like to apply this idea of a sequence of substitutions in an abstract way so that I can do it dynamically. And while I could do something with Object#send, that’s like cheating. This is where inject comes to the rescue.

def normalize_line_endings(string)
  transforms = [proc {|s| s.gsub("\r\n", "\n")},
                proc {|s| s.gsub("\r", "\n")},
                proc {|s| s.gsub("\n", "\r\n")}]
  transforms.inject(string) {|s, transform| transform.call(s)}
end

Kernel#proc (or Kernel#lambda if you prefer) is Ruby’s way of making higher-order functions. It returns a block which you can then call with an argument. In the above code, I make an array of transforms that take a string and return a string. The call to inject at the end is where the magic happens. It calls the first transform with string which was provided as the argument to inject. Then it calls the second transform with the result of the first, and it calls the third transform with the result of the second. That list could be as big as you want. It could even be dynamically generated.

That’s nice, but it’s still a little verbose. I like to hide my use of Kernel#proc behind a declarative interface when I’m doing this sort of thing with it. So here’s how we can rewrite the method.

def transform(string, specifications = [])
  transforms = specifications.collect do |spec|
                 proc {|s| s.gsub(spec[:from], spec[:to])}
               end
  transforms.inject(string) {|s, transform| transform.call(s)}
end

def normalize_line_endings(string)
  transform(string, [{:from => "\r\n", :to => "\n"},
                     {:from => "\r", :to => "\n"},
                     {:from => "\n", :to => "\r\n"}])
end

Of course, at that point, we don’t really need to create the procs. We can just use inject right on the specifications array, so the final code I came up with for this was:

def transform(string, specifications = [])
  specifications.inject(string) do |s, spec|
    s.gsub(spec[:from], spec[:to])
  end
end

def normalize_line_endings(string)
  transform(string, [{:from => "\r\n", :to => "\n"},
                     {:from => "\r", :to => "\n"},
                     {:from => "\n", :to => "\r\n"}])
end

Now that can be used with any list of transformations. Those transformations can be dynamically generated, and it’s a very clean implementation. That is the power of Enumerable#inject.

OCaml Talk

February 6, 2008

So, I was going to give a talk on OCaml at ODYNUG last night. But, well, snow happened, and the meeting was canceled.

I will be giving the talk next month, on March 4th along with Brent Adkisson who will be giving a talk about Android.

Adding files to auto-mode-alist

January 31, 2008

One of the things that I require from an editor is decent syntax highlighting. Emacs provides me that, and it also provides me with awesome indentation (I’ve grown addicted to hitting TAB and having my text just go to the right place regardless of where my cursor is). Emacs has a mode for every occasion and typically will load the right one based on the name of the file or its content. Recently, mine stopped doing that.

At first it was annoying. I’d open a LaTeX file and it wouldn’t change to latex-mode, or I’d open a C file and it wouldn’t go into c-mode, and worst of all, I’d load Emacs lisp files and they wouldn’t load in emacs-lisp-mode. I would just switch the mode by hand, but my patience could only last for so long.

Yesterday I finally took the time to find the trouble, shoot it, and replace it with better code. In my .emacs I had this:

(setq auto-mode-alist
       '(("\\.dtd$" . xml-mode)
         ("\\.xml$" . xml-mode)
         ("\\.yml$" . conf-mode)
         ("bash_profile$" . sh-mode)
         ("bashrc$" . sh-mode)))

I’ve seen that same code all over the internet, and it used to work just fine. But that setq replaces the entire auto-mode-alist, throwing away all of the default mappings Emacs ships with, which is exactly why none of my modes were loading. So I did some research. The recommended way to put stuff in an alist like that is add-to-list.

So the above code becomes this:

(mapcar (lambda (mapping) (add-to-list 'auto-mode-alist mapping))
        '(("\\.dtd$" . xml-mode)
          ("\\.xml$" . xml-mode)
          ("\\.yml$" . conf-mode)
          ("bash_profile$" . sh-mode)
          ("bashrc$" . sh-mode)))

Putting that code in fixed my problems, and now all of my modes load correctly.

Living In the House That Rails Built

January 29, 2008

I wanted to share a snippet of code. This code will print a call stack to STDOUT every time a Ruby class definition is evaluated. It is particularly useful when you find that class constants are being mysteriously redefined.

class Foo
  puts "\nRequired from:\n #{Kernel.caller.join("\n ")}"
  # ...
end

What inspired me to write that code? Rails did. The key to writing Ruby on Rails is that you’re writing Ruby on Rails. You don’t follow the Rails best practices because they’re convenient. You follow the Rails best practices because your program won’t work unless you do. Just like trains, you stay on the track and everything is great. If you try to take your train off-track, then it’s gruesome enough to make the nightly news.

How did I derail my application such that I cared how and where a file was being required? I wrote a unit test that explicitly required a model object. Oops. Remember that the semantics of require is load-once based on the name. So:

require "foo"

and

require "models/foo"

are very different to require. Rails is super helpful and requires everything that it makes for you. So it requires models for you, even when you run your unit tests.

So take this code:

class Foo < ActiveRecord::Base

And then write a test for something that Rails didn’t generate (such as something in the lib directory like I did):

# Require some other stuff
require "foo"

class TestTruth < Test::Unit::TestCase
  def test_truth
    assert true
  end
end
If you rake test you will get an error complaining that RAILS_IS_A_GHETTO was reinitialized, and that’s because Rails loads it for you as “models/foo” and you load it as “foo” so it gets loaded twice.

The moral of the story is: let Rails load the things it built, and you load the things you built.

Making Terminal Bearable

January 3, 2008

I’ve been using Macintosh computers for several years now, but I came to them from the Unix world. There were several expectations I had from my terminal emulator. Among them were that the key that says Alt on it should be my Meta key for Emacs key-chords, and that the Page Up and Page Down keys should page up and page down in my application. None of those behaviors are the default, so fixing them is one of the first things I do when I get a Mac.

In Leopard these settings are all in the Settings section of Preferences (Cmd-, or Terminal > Preferences…), but in previous versions they are in the Window Settings screen (Terminal > Window Settings…).

First, to take care of the Alt key issue, go to the Keyboard section of the settings. There is a checkbox toward the bottom that says “Use option key as meta.” Check it.

With that problem solved, all that is left are the paging keys. The default behavior is for Shift-Page Up to send Page Up to the terminal while plain Page Up scrolls the buffer, and the same goes for Page Down. In the Keyboard Settings you should see a grid with keystrokes on the left and text on the right. In many cases it will be an ANSI escape sequence to send to the terminal, in some cases it will be a special action. Here’s what you want to set things to.

  • page down = send string to shell: \033[6~

  • page up = send string to shell: \033[5~

  • shift page down = scroll to next page in buffer

  • shift page up = scroll to previous page in buffer

That effectively just swaps the keys, so that you still have the ability to scroll your Terminal buffer with the keyboard.

After I fix the keyboard, I like to fiddle with colors and other things, which will all be in the same preferences dialogs. Then, after all the tweaks and fixes, Terminal is ready to use.

Layout, design, graphics, photography and text all © 2005-2010 Samuel Tesla unless otherwise noted.

Portions of the site layout use Yahoo! YUI Reset, Fonts & Grids.