GNU Guile on OS X via MacPorts

Despite having good success building other GNU packages, I have tried unsuccessfully for years to compile GNU Guile v2 for Mac OS X. Perhaps just as well, since OS X is proprietary software, but nevertheless, I wanted to get it running.

One blogger has written extensively about building Guile v2 on OS X, and following those directions helped move me along quite a bit. But ultimately I gave up when the compilation process failed to find GNU Readline, because the LLVM clang C compiler on OS X (masquerading as the GNU C compiler) was using some pseudo-Readline library provided by OS X instead of the real GNU Readline library. This problem, too, may well be surmountable, but I had had enough.
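In hindsight, one generic way to fight this kind of library shadowing is to point the configure script at the real library explicitly via the standard autoconf variables. This is only a sketch of that idea, not a verified fix for the Guile build; the /usr/local paths are a placeholder for wherever a genuine GNU Readline happens to be installed:

```shell
# Hypothetical sketch: steer configure toward a real GNU Readline
# rather than the system's pseudo-Readline. Adjust the paths to
# match your actual Readline installation prefix.
./configure CPPFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
make
```

Whether this would have been enough for Guile, I can't say; by that point I had moved on.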

I installed MacPorts and ran its installation utility to get GNU Guile 2.0.11. MacPorts chugged away for a while; I went out to dinner, and when I came back, Guile 2.0.11 was installed on the OS X system. Hooray!
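For anyone following along, the whole process boils down to a few commands once MacPorts itself is installed; `guile` here is the standard MacPorts port name for GNU Guile:

```shell
# Refresh the ports tree, then build and install Guile and its
# dependencies. Expect this to take a while (long enough for dinner).
sudo port selfupdate
sudo port install guile

# Confirm the installed version afterward.
guile --version
```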

Delighted, I headed back to the MacPorts website to donate money to their project; after all, I had surely burnt up hours of my own time fiddling with this, to no avail, and MacPorts solved the problem for me while I was eating vegetable stir fry. Alas, I could find no link on the site for making donations, so I will just publicly recommend them instead.

If you run into a similar problem, try using MacPorts — preferably sooner than I did!

Designing Xerox Star Software in 400 Pages

I’ve been reading Bringing Design to Software, an ancient tome (published in 1996) that collects interviews with a dozen software practitioners on the subject of software design. In modern parlance, what was called “software design” in 1996 might overlap with what today is called “user experience”, but in any event, it is an activity related to, but separate from, programming, that results in a well-planned specification for what is to be programmed.

The first interview is with David Liddle, who worked on the Xerox Star, an early desktop computer system aimed at business productivity use. How did Liddle and his colleagues go about designing the Star software?

We ended up writing a 400-page functional specification before we ever wrote one line of code. It was long, because it included a screen view of every possible screen that a user would see. But we did not just sit down and write it. We prototyped a little bit, did some testing with users to decide what made sense, and then wrote a spec for that aspect. Then we prototyped a bit more, tested it, and then spec’d it again, over and over until the process was done.

400 pages of software requirements may be commonplace in specialized applications like avionics systems, but it's far more planning than most software gets today. Not content even with that, Liddle's team hired Bill Verplank, a human-computer interface expert from MIT:

Verplank and his crew did 600 or 700 hours of video, looking at every single feature and function. From all these video recordings, we were able to identify and eliminate many problems. For example, we chose a two-button mouse because, in testing, we found that users demonstrated lower error rates, shorter learning times, and less confusion than when they used either one-button or three-button mice.

Being on the front line of developing early office applications, Liddle also addresses the misconception that the software model of files, folders, and desktops was meant to copy a real-world office environment:

It is a mistake to think that either spreadsheets or desktops were intended to imitate accounting pads, office furniture, or other physical objects. The critically important role of these metaphors was as abstractions that users could then relate to their jobs. The purpose of computer metaphors, in general, and particularly of graphical or icon-oriented ones, is to let people use recognition rather than recall. People are good at recognition, but tend to be poor at recall. People can see objects and operations on the screen, and can manage them quite well. But when you ask people to remember what string to type to perform some task, you are counting on one of their weakest abilities.

Curiously, a lot of software written for programmers to use puts heavy demands on recalling arbitrary strings of text…

Would it still make sense to write a 400-page specification for office application software today? Would it still make sense to record hundreds of hours of video to research the optimal way to use the software? Maybe not. Thirty-three years have passed since the Xerox Star, and along the way, many good software design concepts have been identified and established as common practice. If you’re building software for an Apple desktop or mobile platform, for example, you can simply follow Apple’s design guidelines and save yourself a great deal of fundamental human-computer interaction research.

Nevertheless, spending time to plan your application up front may still be a good idea. Thinking through the interaction experience and the needed functionality with a pad of paper and a pen can make writing the code more straightforward, and software is easier to test if you have a precise definition of what it’s supposed to do.

Thanks to people like David Liddle, we can draw on years of experience in good software design to get a head start on our own projects!

Recurrent Training for Software Developers?

Software developers who have been out of school for a while and apply for new jobs routinely bemoan that they have forgotten the details of algorithms, data structures, and other computer science topics that tend to come up for discussion in interviews. Even developers settled into their current jobs can stand to be reminded of things they were taught in the past but haven't thought about for years.

Pilots undergo intense training before getting certified to fly in the first place, but then also must undergo recurrent training on a regular basis.

Instead of checking off having learned algorithm design in college and never thinking about it again, would it be useful for software developers to engage in regular recurrent training to refresh themselves on things they might be letting slip from their thinking?

Taking a full semester-long class might be overkill, and too much to expect from the schedule of working professionals with families and responsibilities outside their jobs. Maybe twice a year, developers could spend a day or two being refreshed on things that should already have been studied in detail in the past.

A format of alternating between 15-minute lectures, 15-minute in-class exercises (done by individuals on their own laptops), and 15-minute review of solutions might be a good way to go.

How could such training be set up? Large companies could do it all in house, paying a small staff dedicated to administering such classes. Local community colleges could potentially offer this format of class to small companies and individuals in the area, as part of their alleged charter to foster continuing education.

In theory, it could also all be done online, with prerecorded lectures by especially great speakers, but spending the time to meet in person can sometimes be more motivating than watching videos.

Empty Index Entries in Texinfo

Working on a book formatted with GNU Texinfo, I ran into a mysterious error when processing the Texinfo source through TeX (via texi2pdf):

(Variable Index) [145] (Concept Index) [146] (./c.cps
./c.cps:1: Extra }, or forgotten \endgroup.
l.1 , 145}

I saw no related error or warning from Makeinfo, and TeX unhelpfully said nothing about which Texinfo source line the error came from. Other clues suggested that the error had to do with indexing, and that an index entry was pointing to page 145 or maybe 146.

Looking at the PDF output, I backtracked to the relevant portion of the Texinfo source, and saw something like this:

@cindex function pointers
@cindex

A normal-looking index entry followed by an empty index entry! I removed the empty index entry and reran TeX; all was fine.

Evidently, empty index entries are not handled well by Texinfo or texi2pdf or TeX or wherever the actual failure lies. A more helpful error message would be nice, but in the meantime, remember to avoid blank index entries!
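A blank entry like this is easy to catch mechanically before TeX chokes on it. Here's a sketch of a grep-based check over the standard Texinfo index commands; manual.texi is a stand-in name for your real source file, and the sample lines mimic the problem above:

```shell
# Create a stand-in Texinfo file reproducing the problem: a normal
# index entry followed by a blank one.
printf '@cindex function pointers\n@cindex\n' > manual.texi

# Flag any of the standard index commands appearing with no argument;
# grep -n reports the offending line number. Extend the alternation
# if your manual defines custom indices.
grep -nE '^@(cindex|findex|vindex|kindex|pindex|tindex)[[:space:]]*$' manual.texi
```

Here the check reports line 2 as the culprit; on a clean manual, grep finds nothing and exits nonzero, which also makes this easy to wire into a build script.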

Cedar Rapids IEEE Meeting on Real-time 3D Graphics Rendering

In one of those rare instances when my schedule and that of the local chapter of the IEEE aligned, tonight I enjoyed listening to a talk by Chris Wyman on the topic of real-time 3D graphics rendering. This blog post is just meant to capture some of the notes I made during the talk:

  • There are two kinds of wrong answers in computing: an answer can be so inaccurate that it is useless, or an answer can be so late as to be useless. Inaccurate answers might be timely, and late answers might be accurate, but in neither case is the answer helpful.
  • Some optimization techniques that made sense decades ago, such as caching trigonometric values in memory rather than recomputing them, no longer make sense, as computing has become faster than memory access.
  • There are two general approaches to improving real-time interactive graphics rendering: you can start with fast but poor-quality graphics and improve the quality, or you can start with slow but high-quality graphics and improve the rendering time. Relatively simple adjustments, coming from either direction, can make a big difference.

Graphics rendering is about as far away from what I do as you can get and still be within the field of computer science, but it was interesting to get some insight into this line of work.