Make SLIME load faster

December 29, 2007

I have joined up with some of the guys from ODYNUG who have started meeting for breakfast and learning Common Lisp together. We are all using some version of Emacs, SLIME, and SBCL.

Blaine shared a cool way to make SLIME load much faster by taking advantage of the fact that Lisp uses images like Smalltalk (or more accurately, Smalltalk uses images like Lisp). He posted it for the group to see, but I wanted to post it here for my readers.

SBCL allows you to specify an image, or, as they call it, a core, by passing the --core option along with (as far as I can tell) an absolute path to the core file (well, at least it doesn’t know that ~ means $HOME). It also, of course, provides a way to create these core files, so you can load a bunch of stuff in and then save a core file that has all of that already loaded.

So first, go into your SLIME directory and copy swank-loader.lisp to swank-loader-original.lisp. Then make swank-loader.lisp look like this (changing slime-dir to be wherever your SLIME is, of course):

(if (not (find-package 'swank-loader))
    ;; Edit SLIME-DIR to be where you have SLIME installed.
    (let ((slime-dir (merge-pathnames ".elisp/slime/" (user-homedir-pathname))))
      (load (merge-pathnames "swank-loader-original" slime-dir))))

Then, make a file called bootstrap.lisp with the following content:

;; Load Swank
(load (merge-pathnames ".elisp/slime/swank-loader" (user-homedir-pathname)))

;; Save image
(sb-ext:save-lisp-and-die "sbcl-with-slime.core")

And run this command:

$ sbcl --load bootstrap.lisp

Then copy sbcl-with-slime.core somewhere safe; I put mine in with my SLIME code to keep it all together. Then you just have to add the following to your .emacs:

(let* ((slime-dir (concat elisp-dir "/slime"))
       (core-file (concat slime-dir "/sbcl-with-slime.core")))
  (setq inferior-lisp-program (concat "sbcl --core " core-file)))

Then you can M-x slime and it will be super fast.

One config to rule them all

December 27, 2007

Yesterday I was reminded of the importance of familiarity and comfort with my tools. Over the years I have developed a set of configurations that work for me. I have configurations for Bash and I have configurations for Emacs, and they help me be productive. Yesterday I started configuring my new computer here at my new job (yes, I got a new job) and I couldn’t get to them because they were on my laptop at home.

Several years ago I had a system that involved keeping all of my config files in a Subversion repository and a shell script to make symlinks from the real locations to the ones in ~/.config. I eventually stopped using it, mostly because it was a little clunky and hard to get set up on new machines. Last night I devised a similar system but tweaked a few things and it has made it so much better.

The first thing I changed was the revision control tool. I’m using darcs now. It is a distributed version control system, and it is much simpler to use. It also does not put a directory in each directory I add to my repository; it just puts one _darcs folder at the top level. To top it all off, it’s written in Haskell, so it gets cool points for that.

The second thing I tweaked was that instead of using symbolic links I’m using hard links. This means that both ~/.bashrc and ~/.config/home/.bashrc are actually pointing to the same file on disk. So I can update the darcs repository and the linked files out in the rest of my home directory will get updated too, but if I delete the repository, I’ll still have copies of the config.

Last, instead of keeping a flat list of files like ~/.config/bashrc and ~/.config/ssh_config, I’m keeping the files in a directory with their exact file names and the directory structure that they’d be stored in under my home directory. This makes writing the linking script much easier.

So with this structure in place I wrote an update script that makes directories and hard links so that what’s in my home directory mirrors what’s in my config repository. I even protected against files already existing with a friendly prompt (courtesy of ln -i).
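A minimal sketch of such an update script in Python (the author’s actual script presumably shells out to ln -i; the repository layout and names here are assumptions): walk the repository tree, create any missing directories, and hard-link each file into place, skipping files that already exist.

```python
import os

def link_config(repo_home, home):
    """Hard-link every file under repo_home into the matching spot under home.

    repo_home mirrors the home directory layout, e.g. ~/.config/home/.bashrc
    maps to ~/.bashrc. Existing files are skipped (the real script prompts
    via ln -i instead).
    """
    for dirpath, dirnames, filenames in os.walk(repo_home):
        dirnames[:] = [d for d in dirnames if d != "_darcs"]  # leave repo metadata alone
        rel = os.path.relpath(dirpath, repo_home)
        target_dir = os.path.normpath(os.path.join(home, rel))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            dst = os.path.join(target_dir, name)
            if not os.path.exists(dst):
                os.link(os.path.join(dirpath, name), dst)  # hard link: one inode, two names
```

Because the links are hard, editing either path touches the same inode, and deleting the repository still leaves intact copies out in the home directory.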

A darcs repository of my config, including the update script, is available here.

The Lambda Calculus

December 23, 2007

Earlier this fall I wrote a little functional programming language. However, the guts of it were not based on the lambda calculus. I used more of a denotational semantics approach to the evaluation, which worked fine. But, I still wanted to implement an actual lambda calculus interpreter.

So, now that I am done with school and have some free time, I threw a little something together. I used it as an introductory project to OCaml, and really enjoyed writing it.

So what is the lambda calculus, you might ask?

There are three basic concepts in lambda calculus. There are variables:

x

There are abstractions:

fn x. x

And there are applications:

f x

Applications are left associative so:

f x y

is the same as:

(f x) y

So for a more complicated example from the REPL:

> (fn f. fn x. f x) (fn y. y) z;

The first part declares a function which binds f and returns a function which binds x and returns the application of f to x. We pass to that the identity function (fn y. y) and the variable z. That all reduces to just z.
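Since an abstraction behaves like Python’s lambda, that reduction can be spot-checked directly, with the string "z" standing in for the free variable z:

```python
# (fn f. fn x. f x) (fn y. y) z, with "z" standing in for the free variable
result = (lambda f: lambda x: f(x))(lambda y: y)("z")
# result is "z": applying the identity function to z just gives back z
```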

You can download my code here. I will be posting snippets of it in future posts. I will also be blogging as I extend it to add more features.

Currying Function Parameters

December 23, 2007

One of the first things I wanted to do to improve the readability of my language was to add the currying of function parameters. Since it is such a common pattern to have three or four abstractions right in a row to bind variables, there is a syntax for expressing them more concisely.

So this:

fn x. fn y. fn z. x y z

Becomes this:

fn x y z. x y z

Adding the code to do this was nearly trivial, and all in the parser. First I wrote a function that, given a list of variables and an expression for the body, would be able to construct the parse tree for a curried function:

let curry ids body =
  List.fold_right (fun id expr -> Abstraction(id, expr)) ids body
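For readers more at home in Python, the same right fold can be sketched with functools.reduce over the reversed id list (the tuples here are a stand-in for the OCaml Abstraction constructor):

```python
from functools import reduce

def curry(ids, body):
    # Fold from the right: wrap the body in one Abstraction per id,
    # so the first id ends up as the outermost abstraction.
    return reduce(lambda expr, i: ("Abstraction", i, expr), reversed(ids), body)
```

So curry(["x", "y", "z"], body) builds the same tree as fn x. fn y. fn z. body.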

Then I took the existing production for recognizing expressions:

expr:
    aexprs {apply $1}
  | FN VAR PERIOD expr {Abstraction ($2, $4)}

And turned it into this:

expr:
    aexprs {apply $1}
  | FN ids PERIOD expr {curry $2 $4}

ids:
    VAR {[$1]}
  | VAR ids {$1::$2}

That ids production is using the OCaml :: operator which performs the cons operation. So as I recurse on the right, I’m building up a list and consing each new id onto it all the way up.

And just like that I’ve added currying to my language.

Blog Lift

December 23, 2007

You may have noticed that my layout changed a little. I got rid of my Google ads.

I’ve noticed that when I paste code snippets or other bits of pre-formatted text that are narrower than 80 columns, I still get a horizontal scroll bar. That’s because my content area was simply too narrow. I intend to post more code, so I started looking for ways to get some space.

Changing the layout to only have one column on the right was a start. I was simply going to move the Google ads down, but then Erica asked me how much money I was making off of them. The answer is: not much at all. So I just got rid of them entirely.

A little more lambda

December 23, 2007

Alonzo Church invented the lambda calculus. He also figured out how to encode many kinds of data as lambda expressions. Take your simple booleans, for example.

This is true:

fn x y. x

And this is false:

fn x y. y

That makes the identity function the if-then-else construct:

> (fn p. p) (fn x y. x) a b;
> (fn p. p) (fn x y. y) a b;

And similarly you can get a logical and:

> (fn p. p) ((fn p q. p q p) (fn x y. x) (fn x y. x)) a b;
> (fn p. p) ((fn p q. p q p) (fn x y. x) (fn x y. y)) a b;
> (fn p. p) ((fn p q. p q p) (fn x y. y) (fn x y. x)) a b;
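These encodings are easy to play with in any language with first-class functions; here is a Python sketch, with plain strings standing in for the free variables a and b:

```python
TRUE  = lambda x: lambda y: x        # fn x y. x
FALSE = lambda x: lambda y: y        # fn x y. y
AND   = lambda p: lambda q: p(q)(p)  # fn p q. p q p
IF    = lambda p: p                  # the identity function selects the branch

IF(TRUE)("a")("b")                   # evaluates to "a"
IF(AND(TRUE)(FALSE))("a")("b")       # evaluates to "b"
```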

Fiddling around with these church booleans revealed several bugs in my code, which I’ve fixed. I’ve additionally added a new node to the parse tree to represent the () grouping that is typed into the code so that when it is formatted for display it looks better.

You can get the newest code here.

"Samba winbindd invalid request length: 2048"

November 19, 2007

An update came down from Gentoo for Samba, updating it to version 3.0.26a. We had been having trouble with a problem in 3.0.24 that had been fixed in an intervening release of Samba. So, naturally, I wanted to upgrade. But when I did, I got this mysterious error in my log.winbindd.

[2007/11/19 13:27:16, 0] nsswitch/winbindd.c:request_len_recv(517)
  request_len_recv: Invalid request size received: 2084

I have spent the entire day googling and yahooing and searching and grepping, to no avail. Nothing I have tried worked. At one point I was reading and somebody said “reboot, some other services are using stale references to”. I couldn’t imagine that was right, because if I rolled back to 3.0.24 everything worked again.

Well, through a course of events I ended up with the following line in /etc/nsswitch.conf:

shadow: shadow

As you can imagine, that didn’t work too well for the Unix user that I keep on the box for when ADS is hosed. So I broke out the trusty install CD, rebooted, and fixed the file. I then rebooted and re-emerged Samba. I thought I had put the mask in /etc/portage/package.mask to make sure it was 3.0.24 I was installing, but I hadn’t. Lo and behold, everything worked.

All I had to do was reboot.

That was it.

Remember ML-Yacc makes error-correcting parsers

October 9, 2007

So I got my language working. But there are still some things I want to add to it. One thing that was bothering me was that both this code:

fn x => x

and this code:

x => x

parsed to the same thing.

I banged my head against this. My grammar had the production right:

expr : ...
     | FN ident RARROW expr (T.FnDef (ident,expr))

and my lexer produced the tokens just fine:

<INITIAL> "fn" => (Tokens.FN(!pos, !pos));
<INITIAL> "=>" => (Tokens.RARROW(!pos, !pos));

So, I was confused. I downloaded the source for SML/NJ in hopes that their grammar and lexer would shed insight on what I was (obviously) doing wrong. But, inasmuch as SL is like SML, the grammar and lexer were the same.

Sleep beckoned, so I went. This morning I banged my head at it some more. Then once I started combing over the documentation, it hit me. ML-Yacc produces error-correcting parsers. It will perform single-token substitutions in order to get a valid parse. And, if you notice, it only has to make a single-token correction to get from the bad code to the good code.

My solution? The same as SML/NJ’s, set the lookahead to zero for interactive sessions and fail fast, so that if you are trying stuff interactively (or from unit tests) it will be relentless about grammar. On the other hand, if you are parsing a file, my interpreter will be forgiving. After all, why should it fail the whole file if all you’re missing is fn?

Samuel's Lambda and the Y Combinator

October 8, 2007

So I’m taking “Principles of Programming Languages” at UNO with Dr. Winter. There is a group project, and he is letting me do the project on my own. The project is to make a small imperative (and Turing-complete) language.

Well, as you may or may not know, I’m crazy about functional languages. I love them. So, while I have to write an imperative language for the project, I decided to spend some of the precious free time I have writing a functional one instead. As of now, my language (Samuel’s Lambda, or SL for short) is Turing-complete.

My test for this was that I could calculate the factorial using the most venerable of functional programming tools: the fixed point combinator (a.k.a. the Y combinator).

For my example of recursion, I’ll show you the factorial. What sort of discussion of recursion would this be if I didn’t?

let
  val y = fn f =>
      (fn g => g g) (fn g => f (fn x => g g x))
  val fac = fn f =>
            fn n =>
               if eq n 0 then 1
               else multiply n (f (subtract n 1))
in y fac 5
end

When run at the SML/NJ prompt:

- SLParser.evalPrintFile("examples/");
val it = () : unit
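The same construction carries over to any language with first-class functions; here is a Python rendering of y (the call-by-value fixed point combinator, often called the Z combinator) and fac:

```python
# fn f => (fn g => g g) (fn g => f (fn x => g g x))
y = lambda f: (lambda g: g(g))(lambda g: f(lambda x: g(g)(x)))

# fn f => fn n => if eq n 0 then 1 else multiply n (f (subtract n 1))
fac = y(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))

fac(5)  # computes 120 without fac ever naming itself
```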

It helps that I’m working with a functional language to start with. That makes implementing things such as closures and let nearly trivial. I’m going to add static type checking (sans the inferencing of ML, after which the syntax has been modeled) and then I’ll be done with SL. It has been a fun little exercise.

A Gentle Introduction to Erlang

October 4, 2007

I gave a talk at ODYNUG on Tuesday about one of my favorite dynamic languages: Erlang. It went pretty well, I think. Unlike my Lisp talk from last year, I don’t think I caused too many heads to explode.

I’ve posted my slides and some example code here.

I’ll be giving a talk on ML on February 5, 2008.

"Review Haiku: Shoot Em Up"

September 14, 2007

Movie: Shoot Em Up

Hates a pussy with a gun

After, I need a smoke

"Emacs on Mac: option key as meta"

August 16, 2007

I’m an Emacs user, and I run on a Mac. I just use GNU Carbon Emacs out of CVS.

As long as I can remember, my command key mapped to the meta key in Emacs. This was particularly bothersome as it got my fingers in the habit of typing command-w to yank some text. But when I’m editing on a remote server in a Terminal window, that closes the window. Sometimes I manage to do it twice before I hit the option key instead.

I used to have this in my .emacs, although I never really noticed it:

(when stesla-mac-p
  (setq mac-command-key-is-meta nil))

A week ago, when I wanted to finally fix this and make my command key do something other than be meta and make my option key be my meta key, I was baffled as to why it wasn’t that way already. See, Google told me that the code I had in my .emacs should have done what I want.

Well, it turns out that it used to be how to do it. Emacs is cooler now, and lets you specify the behavior of all three special keys. So now what I have is this:

(when stesla-mac-p
  (setq mac-command-modifier nil)
  (setq mac-option-modifier 'meta))

This makes Emacs not recognize the command key as a modifier at all and use the option key as meta, which is how I like it.

Base32 0.1.1 Released

June 29, 2007

Quickly on the heels of the initial release of my Base32 library, I have an update. I should have tried to compile it on Linux, as the GCC settings on my Gentoo box caught some silly things I had done.

It’s all better now, and the gem can install on both Mac OS X and Gentoo Linux. I assume other Linuxes are probably fine, as are BSDs and other *NIXes.

To download it go here.

Base32 0.1.0 Released

June 28, 2007

As you may know, I’ve been working with base32 encoding. Well, I decided to share my work with the world in the form of a library.

This first release simply contains the code I needed for my original project, but I’ve packaged it up as a nice Ruby extension.

You can visit the project page here.

You can download the release here.

For those about to base32

June 7, 2007

I’ve scribbled this down so many times, I thought you all might want to benefit a little from my troubles.

|0000 0|000 11|11 111|1 2222| 2222 3|333 33|33 444|4 4444
|0000 0|111 11|22 222|3 3333| 4444 4|555 55|66 666|7 7777

That is a chart to help understand how five octets turn into eight base32-encoded bytes. It should be fairly self-explanatory, but when I say that, I’m always wrong.

The top row is the octets: each digit stands for one bit, and its value tells you which octet that bit comes from. The bottom row represents the quintets, with the same numbering scheme. I spaced the digits the same, so you can see how they match up. The pipes are nice visual borders.
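The chart can be checked mechanically; here is a small Python sketch that packs five octets into one 40-bit integer and slices off eight quintets, most significant first:

```python
def quintets(octets):
    # Five octets are 40 bits, which divide evenly into eight 5-bit groups.
    assert len(octets) == 5
    n = int.from_bytes(bytes(octets), "big")
    return [(n >> shift) & 0b11111 for shift in range(35, -1, -5)]
```

Per the chart, quintet 1 takes the last three bits of octet 0 and the first two bits of octet 1.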

Base32 Encoded Freedom

June 5, 2007

So I’m writing the license-key generation code for the store-front for a shareware program my friend Tyler and I are preparing to release (more about that later). We’ve decided to use cryptography to reduce the likelihood that our licensing schema will be compromised (for relatively little effort on our part). We also decided to base32 encode the actual keys to make them easier to read.

Well, the store-front is going to be a Rails app, of course. Ruby has a module to base64 encode, but it doesn’t have one to base32 encode. So, I wrote one, and I did it test first (of course).

The first four tests were easy. Really short strings, but they worked out most of the kinks. But, I wanted something that would boost my confidence further. So I wrote the following test which ended up being quite patriotic.

def test_constitution_preamble
  plaintext =<<-EOT
    We the people of the United States, in order to form a more perfect union,
    establish justice, insure domestic tranquility, provide for the common
    defense, promote the general welfare, and secure the blessings of liberty
    to ourselves and our posterity, do ordain and establish this Constitution
    for the United States of America.
  EOT
  encoded = %W(
    ...
  )
  assert_equal(encoded, Base32.encode(plaintext))
end
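For building the expected values of such a test, it helps to cross-check against an existing implementation. Python’s standard library, for instance, produces RFC 4648 base32 (whether that matches this library’s alphabet is an assumption):

```python
import base64

# Every 5 input octets become 8 output characters; shorter tails are
# padded out with "=".
encoded = base64.b32encode(b"hello")   # b"NBSWY3DP"
padded = base64.b32encode(b"hi")       # b"NBUQ===="
```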

Three little, two little, one little-endian

April 24, 2007

I recently found myself wanting a Cocoa class that represents a set of 8-bit bytes. Cocoa has NSCharacterSet, but that is for unichar, not uint8_t. So I wrote one. It was easy enough: I gave it an array of UINT8_MAX + 1 booleans and said that if a particular element in the array was YES then that byte was in the set, and not if the element was NO.

Initially the class only knew how to answer questions of membership: is a byte in the set or not? But then I found a number of places where I was enumerating all possible values and testing for membership, so I figured adding a method that would return a NSData with just the bytes included in the set would be useful.

So I wrote this:

- (NSData *) dataValue
{
  NSMutableData *result = [NSMutableData data];
  for (unsigned i = 0; i <= UINT8_MAX; ++i)
    if (contains[i])
      [result appendBytes: &i length: 1];
  return result;
}

I had unit tests that proved it worked, and they all passed, so I checked in. All was good in the world.

Five days later, I flip open my laptop and decide to use the program this code is part of. I always try to eat my own dog food, and I prefer the freshest dog food I can get. So, whenever I want to use this application, I delete it, update from our Subversion repository, and build it.

Much to my surprise, when I built it on my laptop, some of those tests did not pass. I was expecting the NSData returned from -dataValue to have certain bytes in it. The NSData I actually got back did have the correct number of bytes, but they were all zeroes.

I banged my head against it for about twenty minutes, until I had a flash of insight. My desktop machine at home is an iMac, and inside it is an Intel Core Duo processor. My laptop is a PowerBook, and inside it is a Motorola G4 processor. The Core Duo, like most other Intel processors, stores numbers in the little-endian format, whereas the G4 stores them in big-endian format.

Endianness is a computer topic that makes a lot of programmers’ heads hurt. Unfortunately, Cocoa programmers do have to think about this now. Since Apple switched from their old, big-endian, Motorola platform to their new, little-endian, Intel platform, applications that are meant to run on both have to be aware of byte-order issues.

Computers store data in bytes, which are eight bits long. However, eight bits is only enough to store a number up to 255. In order to store larger numbers, computers just concatenate bytes together. A 16-bit number is made up of two bytes, and a 32-bit number is made up of four. The endianness of a system determines what order those bytes are stored in.

When you read a decimal number like 4242, you read it from left to right. The most significant digit is the left-most digit. Similarly, when you read a binary number like 1000010010010, the most significant digit is the left-most digit. If we divide that number into bytes, 00010000 10010010, the left-most byte is called the most significant byte, or the high-order byte. The right-most byte is called the least significant byte, or the low-order byte.

A big-endian processor, like the G4, stores numbers exactly like you’d read them. So if you read a 16-bit integer in big-endian order, the first byte you read is the high-order byte. Now, if the number is less than 255, for example 42, you’ll get this: 00000000 00101010.

A little-endian processor, like the Core Duo, stores numbers just the opposite of how you’d expect. The first byte you read is the least significant byte, followed by the next most significant byte, and so on. So when we read our binary number in, we’ll get 10010010 00010000 instead of what we expected. And if we look at that small number again, we’d get this: 00101010 00000000.
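Python’s struct module makes the two layouts easy to inspect: ">" packs big-endian and "<" packs little-endian.

```python
import struct

big = struct.pack(">H", 4242)     # high-order byte first: 0x10 0x92
little = struct.pack("<H", 4242)  # low-order byte first:  0x92 0x10
```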

So, to bring this back to my bug. The unsigned type is actually an unsigned 32-bit integer. Since my code was manipulating a set of 8-bit numbers, every single number would fit into the low-order byte of that unsigned, thus leaving the other three bytes all zero.

The line of code where I do this:

[data appendBytes: &i length: 1]

Is a clever little trick I’ve used to avoid having to actually declare a one-byte array when I want to append just one byte. It works great if i is actually a uint8_t. It also works great if i is an unsigned stored in little-endian format, since the first byte happens to be the byte I’m interested in. However, on a big-endian processor, that will reference the most significant byte of the number instead, and since i never gets any bigger than UINT8_MAX (which is 11111111 in binary), that byte will always be zero.
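The failure mode is easy to reproduce outside of Cocoa. Packing 42 as a 32-bit integer and reading only the first byte mirrors what appendBytes: did with &i:

```python
import struct

value = 42                              # always fits in the low-order byte
first_le = struct.pack("<I", value)[0]  # little-endian: the low-order byte comes first
first_be = struct.pack(">I", value)[0]  # big-endian: a zero high-order byte comes first
```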

So now the code looks like this:

- (NSData *) dataValue
{
  NSMutableData *result = [NSMutableData data];
  uint8_t byte[1];
  for (unsigned i = 0; i <= UINT8_MAX; ++i)
      if (contains[i])
      {
          byte[0] = i;
          [result appendBytes: byte length: 1];
      }
  return result;
}

The compiler knows to do the correct conversion between the 32-bit and 8-bit types when assigning from one to another, so the new code now works on both of my machines.

Update: The title is a joke that Erica made up when I told her about this bug. All blame for its terribleness should go to her, I just recognized how apropos it was for the post.

The true meaning of AJAX

April 5, 2007

I’m reading Agile Web Development With Rails and I just have to share this most amusing quote:

AJAX (which once stood for Asynchronous JavaScript and XML but now just means Making Browsers Suck Less)

Thank you Dave and Dave for calling things like they really are.

They Had Really Long Names

March 25, 2007

Two years ago when my employer adopted Extreme Programming, we began to write automated tests for our code. My memory is fuzzy at this point, but as I recall we wanted to be able to write tests and keep the source files anywhere in our source tree, and then have our test runner automagically find them, or something like that. So when we created it we called it MasterTestXMLReportApp because it was cool enough to deserve such a long name.

As time passed those tests got slower and slower, so we split them into two suites. One that was meant to be fast and one that was meant to be slow. The slow ones were run once a night. We called that suite MasterTestNightlyXMLReportApp because it ran every night.

Fast forward to yesterday. The “nightly” tests haven’t run nightly in a long time. They just run continuously, albeit very slowly. I can never remember the name of the projects, and neither can any of my teammates. So finally we got fed up. We renamed them to FastTests and SlowTests.

We held a little memorial service, and gave them a little plot on our 0.04 acres of whiteboard, complete with a headstone.

They had really long names

Hooking up a Delphi progress event to a .NET object

January 2, 2007

So we know how to create a DLL in C++ that exposes .NET code to the Win32 world. We also know how to consume that DLL from Delphi. We know how to instantiate an object, call methods on it, and destroy it. So now, let’s do something interesting. Let’s make a progress bar.

Since this is just an example, we’re going to do something really simple. We’ll make an object that runs for a number of cycles and calls our event handler each cycle after sleeping for a little bit. Here’s the code for Clock:

// Example.h

public ref class Clock
{
public:
  Clock():
    _progressCallback(gcnew ProgressCallback(NULL, NULL)) {}

  ~Clock() {}

  void SetProgressCallback(ProgressCallback ^ callback)
  {
    _progressCallback = callback;
  }

  void Run(int cycles);

private:
  ProgressCallback ^ _progressCallback;
};

// Example.cpp
void Example::Clock::Run(int cycles)
{
  for (int i = 1; i <= cycles; ++i)
  {
      Thread::Sleep(250); // A noticeable pause
      _progressCallback->Execute(i, cycles);
  }
}

Now the first thing to notice there is the ref keyword. This is how you let the compiler know this is a managed class. Next, you’re all probably wondering what ProgressCallback is. That is the class that takes care of all the magic behind simulating method pointers from Delphi.

A brief aside to talk about just what method pointers are. In C and C++ you can declare a pointer type like this:

typedef int (* CALLBACK)(int x, int y);

Then you can use that type like this:

int Apply(CALLBACK cb, int x, int y)
{
  return cb == NULL ? 0 : cb(x, y);
}

int Multiply(int x, int y)
{
  return x * y;
}

Apply(Multiply, 6, 7); // returns 42

You can do the exact same thing in Delphi like this:

type TCallback = function(X, Y: Integer): Integer;

But Delphi also offers another kind of function pointer called a method pointer. You declare it like this:

type TMethodCallback = function(X, Y: Integer): Integer of object;

Those two words of object make all the difference. What this does is it lets you use a pointer to a method on a specific instance of an object. When you call that method the Self pointer is set to the correct value so that you can access the state on the object. It is really powerful. This is typically how progress bars are driven in VCL applications. You just do something like this:

type TProgressEvent = procedure(ACurrent, AMax: Integer) of object;
// ...

type TMyForm = class(TForm)
  // ... stuff ...
  procedure Progress(ACurrent, AMax: Integer);
  // ... more stuff ...
end;

// ... then somewhere in the implementation ...
procedure TForm1.InitializeStuff;
begin
  // ... Initialize some things ...
  FThingWithProgressEvent.OnProgress := Progress;
  // ... Initialize more things ...
end;

And then that method can do something such as adjust a progress bar or log to a file. It’s really slick.

Well, we want to display a progress bar in our Delphi GUI that moves as our Clock ticks across. But we can only pass plain old procedure pointers (the kind without of object) to the DLL functions because the code in the DLL doesn’t know how to do the magic that makes method pointers so nice. So we’ll just have to make that magic happen ourselves by passing the object pointer in explicitly along with a procedure pointer that takes the object pointer in its parameter list. We can cast the pointer and then call a method on the object with the rest of the parameters.
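The strategy is language-agnostic. Here is a Python sketch of the same idea, where a "method pointer" becomes an (object, plain function) pair and the plain function re-dispatches onto the object (the names are illustrative, not from the post):

```python
class Form:
    """Stands in for the Delphi form with a private Progress method."""
    def __init__(self):
        self.position = 0

    def progress(self, current, maximum):
        self.position = current

def progress_callback(obj, current, maximum):
    # The plain-procedure half: it takes the object explicitly,
    # "casts" it, and forwards the remaining arguments to the method.
    obj.progress(current, maximum)

form = Form()
progress_callback(form, 3, 10)  # form.position is now 3
```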

So now that we have the basic strategy in mind, let me show you the code that encapsulates this method pointer idea:

// Example.h

public ref class ProcedureOfObject
{
public:
  ProcedureOfObject(void * object, void * procedure):
    _object((IntPtr) object), _procedure((IntPtr) procedure) {}

  property bool HasNullPointers
  {
    bool get()
    {
      return ObjectPointer == NULL ||
        ProcedurePointer == NULL;
    }
  }

  property void * ObjectPointer
  {
    void * get() { return _object->ToPointer(); }
  }

  property void * ProcedurePointer
  {
    void * get() { return _procedure->ToPointer(); }
  }

private:
  IntPtr^ _object;
  IntPtr^ _procedure;
};

typedef void (* PROGRESSEVENT)(void *, int, int);

public ref class ProgressCallback : public ProcedureOfObject
{
public:
  ProgressCallback(void * object, void * procedure): ProcedureOfObject(object, procedure) {}
  void Execute(int current, int max);
};

// Example.cpp

void Example::ProgressCallback::Execute(int current, int max)
{
  if (this->HasNullPointers)
    return;
  ((Example::PROGRESSEVENT) ProcedurePointer)(this->ObjectPointer, current, max);
}

Note that I store the pointers as IntPtr references. This is the type that all of the methods on System::Runtime::InteropServices::Marshal return pointers as. So, it’s useful to make your fields that way. You can always call ToPointer() on it.

Now, the last bit that we need is to export stuff in the DLL. But you’ll notice that all of the classes I’ve made so far have been managed classes. We can’t send a pointer to a managed object out of the DLL, but we can send a pointer to an unmanaged object that has a reference to our managed object. So we make this wrapper:

// Example.h

public class ClockWrapper
{
public:
  ClockWrapper(): _clock(gcnew Clock()) {}

  ~ClockWrapper() {}

  void SetProgressCallback(void * object, PROGRESSEVENT callback)
  {
    _clock->SetProgressCallback(gcnew ProgressCallback(object, callback));
  }

  void Run(int cycles)
  {
    _clock->Run(cycles);
  }

private:
  gcroot<Clock ^> _clock;
};

So all that’s left is to export the DLL functions like before. Just to keep them separate I’ll make another delete method, even though it’s identical in every way except the name.

// Exports.h

DLLAPI void * ClockCreate();
DLLAPI void ClockDelete(void * clock);
DLLAPI void ClockRun(void * clock, int cycles);
DLLAPI void ClockSetProgressCallback(void * clock, void * object, PROGRESSEVENT callback);

// Exports.cpp

#define C(p) ((ClockWrapper *) p)

DLLAPI void * ClockCreate()
{
  return new ClockWrapper();
}

DLLAPI void ClockDelete(void * clock)
{
  delete C(clock);
}

DLLAPI void ClockRun(void * clock, int cycles)
{
  C(clock)->Run(cycles);
}

DLLAPI void ClockSetProgressCallback(void * clock, void * object, PROGRESSEVENT callback)
{
  C(clock)->SetProgressCallback(object, callback);
}

Then on the Delphi side:

// interface
type TForm1 = class(TForm)
   edtCycles: TEdit;
   btnCycle: TButton;
   ProgressBar1: TProgressBar;
   procedure btnCycleClick(Sender: TObject);
private
   procedure Progress(ACurrent, AMax: Integer);
end;

// implementation
type
   TProgressEvent = procedure(AObject: Pointer; ACurrent, AMax: Integer); cdecl;
   PForm1 = ^TForm1;

function ClockCreate: Pointer;
cdecl; external 'Example';

procedure ClockDelete(AClock: Pointer);
cdecl; external 'Example';

procedure ClockRun(AClock: Pointer; ACycles: Integer);
cdecl; external 'Example';

procedure ClockSetProgressCallback(AClock: Pointer; AObject: Pointer; ACallback: TProgressEvent);
cdecl; external 'Example';

procedure ProgressCallback(AObject: Pointer; ACurrent, AMax: Integer); cdecl;
begin
   PForm1(AObject).Progress(ACurrent, AMax);
end;

procedure TForm1.btnCycleClick(Sender: TObject);
var
   Clock: Pointer;
begin
   Clock := ClockCreate();
   try
      ProgressBar1.Position := 0;
      ClockSetProgressCallback(Clock, @Self, ProgressCallback);
      ClockRun(Clock, StrToInt(edtCycles.Text));
   finally
      ClockDelete(Clock);
   end;
end;

procedure TForm1.Progress(ACurrent, AMax: Integer);
begin
   ProgressBar1.Max := AMax;
   ProgressBar1.Position := ACurrent;
end;

Some things to note. First off, the Progress method on TForm1 is private, and yet I call it from ProgressCallback. This is because of how scoping works in Delphi. Any code in the same unit as a private or protected method can call that method. In Delphi 2005, the keywords strict private and strict protected were introduced to prevent this ability, but in this case we actually want the behavior because we don’t want to expose that event to anybody else.

Next, notice that the procedure pointer also has cdecl on it. It has to be this way because it’s a pointer that will be passed into the DLL, so it needs to be declared with the same calling convention that’ll be used on the other side.

That’s how you hook up a Delphi progress bar to a .NET object. It isn’t much more work to use the ProcedureOfObject pattern and call that from a delegate on the .NET side, which is useful when you are hooking events on sub-objects.

One big gotcha with this pattern that isn’t demonstrated in this code is hooking the callback in a constructor. The Self pointer does not point to what you think it does. So if you send that along to the other side and call back to it later, you’re referencing offsets off of something else entirely, and you will get access violations.

So now we know how to make a .NET DLL that we can call into from the Win32 world. We know how to consume that in Delphi, and we know how to simulate Delphi’s method pointers. With these building blocks, you can map just about anything in the .NET world into your Delphi applications.

Calling into the C++ DLL from Delphi

January 1, 2007

Before I get to the meat of this post, I want to make some amendments and edits to the code from the last one. Today I was wrangling around and began to recall more of my C++, initializers in particular, so I’ve updated the Example1 class to use them.

public class Example1
{
public:
  Example1(const char * name) : _name(gcnew String(name)) {}
  ~Example1() {}
  void ShowName();

private:
  gcroot<String ^> _name;
};

I also realize that I forgot to show the implementation side of that class, so here it is:

// Example.cpp
#include "stdafx.h"
#include "Example.h"

using namespace System::Windows::Forms;

void Example::Example1::ShowName()
{
  MessageBox::Show(_name);
}
So there’s our DLL. Now, let’s use it from Delphi! I’m using Turbo Delphi for Win32 to do this. Go to File > New > “VCL Forms Application” and make your project. Make sure that it outputs to the same directory that the DLL does (or make the DLL output to the same directory this project does, which is what I do) for ease of edit-compile-run cycling.

I’m going to drop a TEdit and a TButton on the main form and hook it up so that when we click the button it creates an Example1, shows it, and then deletes it. Here is the implementation section from the main unit in the Delphi program:

function Example1Create(AName: PChar): Pointer;
cdecl; external 'Example';

procedure Example1Delete(AExample: Pointer);
cdecl; external 'Example';

procedure Example1ShowName(AExample: Pointer);
cdecl; external 'Example';

procedure TForm1.btnDoItClick(Sender: TObject);
var
  Example: Pointer;
begin
  Example := Example1Create(PChar(edtName.Text));
  try
    Example1ShowName(Example);
  finally
    Example1Delete(Example);
  end;
end;

The button handler is straightforward and normal. The only interesting thing there is to see how the calls from the DLL get used. The lifetime management works just like anything else; you just aren’t going to use FreeAndNil like you would for most things.

The interesting part is at the top where we import from the DLL, let’s look at one of those lines again:

function Example1Create(AName: PChar): Pointer;
cdecl; external 'Example';

Now this corresponds to the following line from Exports.h:

DLLAPI void * Example1Create(const char * name);

Since it has a return type that is not void, it becomes a function (the others became procedures). The void * becomes Pointer, and the const char * name becomes PChar in Delphi. So what’s the rest of that garbage? The cdecl flag is there to tell the compiler what calling convention to use. If you just create the DLL in Visual Studio, it defaults to using cdecl. You can also use something like stdcall, but it’s not necessary here. The other part just tells Delphi which DLL to look for this external function in.

So now we know how to call code from the DLL. Next time I’ll show you how to pass procedure pointers and even method pointers from Delphi into the DLL and have them get called properly for things like progress bars.

Layout, design, graphics, photography and text all © 2005-2010 Samuel Tesla unless otherwise noted.

Portions of the site layout use Yahoo! YUI Reset, Fonts & Grids.