CIPS CONNECTIONS
INTERVIEWS by STEPHEN IBARAKI, FCIPS, I.S.P., ITCP, MVP, DF/NPA, CNP

Phil Hord: Distinguished Developer, Award-Winning Innovator, Entrepreneur

This week, Stephen Ibaraki has an exclusive interview with Phil Hord. Phil is an award-winning, widely regarded developer with a 20-year history in software engineering. He is well respected for his in-depth knowledge, commitment to quality, and service to the developer community and to technology users. Phil has a BSc in computer science from Florida State University. He is a Certified Master C/C++ and Windows programmer, having outscored 99.7% of all C++ testers in the US.

Discussion:

Q: Phil, by earned reputation, you are widely regarded as one of the top developers. Thank you for taking the time to do this interview.

A: Thank you, and you're welcome. I work with a lot of other "top developers," so I really feel challenged on a daily basis. I often feel outclassed by the work I see from others. I have really benefited from strong peer environments. There's a lot of respect among senior engineers in most places, and a lot of support as well.

Q: What sparked your interest in computing initially? Can you describe your most memorable experiences at Gulf Coast College and then at Florida State?

A: When I was a kid in school, computers were rare. The first personal computers came out in the late '70s. Cheap calculators have more power than those original boxes. In the sixth grade I was in an experiments class in school that had a computer. Most of the time it sat at a "Ready>" prompt or ran games that other kids loaded from cassette tape. One day when I walked in, it was busy flashing a message on screen. It said "Brent Pristas is a programming god," or something like that, and it had an animated box drawn around it. It took about 10 lines of BASIC code, but it was far beyond my simple means. I asked Brent, a seventh-grader, how he "made the computer do that." He shrugged and said something about "4 loops," and then he listed the program. I was in awe.

My dad bought a computer for his office. I don't know if it was really useful there, but he used to bring it home on weekends. I didn't do anything else on weekends after that. I taught myself to code, and then I taught my dad. It was a great hobby, and it was all-consuming. It was several years before I found out I could get paid for it.

By the time I got to college, there was nothing I needed to learn. Or so I thought. I did learn plenty about the areas of computing that had eluded my interest thus far, like mainframes, (primitive) networking, terminals, microcode, and so on. I did pick up some stuff in data structures, statistics, and finite automata. The really good stuff didn't happen until grad school, where I learned about computability, operating systems, and some new thing called "C++". Most of this stuff just "made sense" to me, since I had spent most of my life, by then, exploring the inner workings of computers.

But there were two instances where the real world crept in. The first was the ACM computing challenge. We formed a four-person team and went to compete against other college teams in a sort of "computer brain bowl" of problem solving. We finished somewhere in the middle of the pack, with a respectable three out of seven problems solved. But it was really the first time I had written code under unrealistic deadlines. I didn't think much of it then, but it turns out that's how most code is written in the industry, so it really prepared me for professional life.
The other instance was a class, poorly organized and not very technical, where we all worked together on a class project for the school. We had a customer, we had assigned roles, and we had deadlines. What we didn't have was a clue. But this class showed me (and the others, I guess) what it was like to work with and rely on people who didn't really do their jobs. It was a bit like living in Dilbert-land, but it was six years before Dilbert was born. As much as I hated it, this prepared me for professional life much more than any of my technical classes.

Q: Having worked with BASIC early in your career, then dBASE and Clipper, what were your experiences with these languages, and what lessons still hold true today?

A: Those are some ancient languages. They were useful in their time, and it was exciting to live through their evolution. But today they're as useless as rotary-dial telephones. I learned a lot about optimization in those languages, and about finding more than one way to solve a problem.

The biggest lesson I learned from all these languages is to seek elegance. I have found over the years that when I write code to solve a problem, if the code lacks simplicity and elegance, then I probably designed it wrong. Imagine a Rube Goldberg machine (if you remember those things), where there's a long, complicated process involved to perform some simple action, like lighting a candle. There are so many "special cases" and unnecessary steps in the process that it's bound to fail. Having many potential "points of failure" in a process is an invitation for disaster.

Q: You moved into more C and C++ development in the early '90s. Share examples of some early projects and your thoughts on these languages then and today.

A: I taught myself C and later C++. At the university at the time, there were no classes for these languages. And languages are mostly the same anyway. It's just a matter of learning a different syntax and sometimes a different philosophy. But I've found that you really can't learn a language until you've developed a project in it. You can read about it and study it all you want, but until you are forced to make it compile, do what you want, and then debug it, you really don't have a good grasp of it. It's frighteningly easy to write bad code in any language.

C and C++ offer power and flexibility. The big job for a developer occurs in design. It helps to know the language well before designing the project, but writing the code is often the (relatively) easy part. I didn't want to write a "real" program as my first attempt at learning a language. So I invented a game (an idea I stole from Nintendo, actually) and started writing it in C. The last 20% of the code seems to take about 80% of the project time. That's where I really learned the details of C and, in this case, the Windows API. I got some acclaim for this game around the office, and pretty soon people were clamoring for "scoring" and new levels and such. But by then I wanted to learn C++, so I started all over from scratch. I changed the architecture, created C++ objects to hold the various game pieces, and so on. And again, I learned all the best lessons of C++ by being forced into corners to solve the minutest problems.

At my job in the early '90s we programmed almost exclusively in 8086 assembly language, the "machine language" code of the CPU. My project for several months had been to translate a high-level front-end from an expressive scripting language into the barren language of assembly.
I had not used C++ at all at this point, but I had read about this "object-oriented" programming idea that was catching on. I understood how it worked, and I wound up using it in my assembly language structures. To a modern programmer, that probably doesn't sound unusual, but at the time, figuring out how to dissect a concept like OOP and code it in terse assembly language on a PC was unheard of. I don't think anyone in the office really understood what I was doing at the time. But when it came time to implement all the actual front-end features on top of this back-end framework, everything fell into place quickly and easily. Almost all of it worked the first time it was in place. The QA guy even came by to compliment me on it. But more importantly to me, when I got around to exploring C++ as a language, I already had an understanding of how it all worked from the inside.

This is an important concept. I find that people who understand computers from the high-level language all the way down to the electrical signals on the circuit boards have a far easier time with computers. People who don't have this deep understanding seem always to be making elementary mistakes, writing suboptimal code, or setting off on bad designs. I rarely use my knowledge of the internals of computers directly in my work. But having it makes it seemingly easy to predict what a computer can do, should do, and will do. So one of the primary things I look for in a job candidate is an innate understanding of the internals of computers and languages. If it is not there, neither is the job offer.
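Phil doesn't show the assembly itself, but the technique he describes (object-oriented dispatch built by hand from plain structures and function pointers) can be sketched in C-style C++. The sketch below is purely illustrative; the Widget and button names and operations are invented for this example, not taken from his project:

    #include <cstdio>

    // A hand-rolled "vtable": a struct of function pointers. This is
    // roughly what OOP dispatch looks like once it is lowered to
    // assembly-level structures. All names here are invented.
    struct WidgetOps {
        void (*draw)(void* self);
        void (*resize)(void* self, int w, int h);
    };

    // An "object" carries a pointer to its ops table plus its own data.
    struct Widget {
        const WidgetOps* ops;  // the dispatch table
        int w, h;
    };

    static void button_draw(void* self) {
        Widget* b = static_cast<Widget*>(self);
        std::printf("button %dx%d\n", b->w, b->h);
    }

    static void button_resize(void* self, int w, int h) {
        Widget* b = static_cast<Widget*>(self);
        b->w = w;
        b->h = h;
    }

    static const WidgetOps button_ops = { button_draw, button_resize };

    int main() {
        Widget b{ &button_ops, 10, 4 };
        b.ops->resize(&b, 20, 8);  // a "virtual" call: one indirect jump,
        b.ops->draw(&b);           // which is exactly what the CPU executes
    }

This arrangement is essentially what a C++ compiler emits for virtual functions (a hidden table of function pointers and an indirect call through it), which is why the concept maps so directly onto assembly-language structures.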
Q: You have worked extensively in communications-related projects. Describe your key projects and how they have impacted your knowledge set.

A: Communications protocols always seemed very basic to me. They still do, though some of them are extraordinarily complicated. Essentially, they're all the same: they just send data back and forth in some controlled manner. The "controlled manner" part is the protocol. And it seemed to me that we should just pick one and go with it, instead of being forced to work with so many different ones. But no protocol ever fits all needs. We needed one for 8-bit data on modems, a different one for "legacy" networks, handshaking for fax transmission, store-and-forward message handling for email, management-data protocols for equipment monitoring and control, custom protocols for proprietary hardware, and so on. So I frequently wound up designing a new protocol, coding one to an existing standard, or working with a hardware engineer to meet his protocol needs. I wanted to get the intricacies of protocol code out of the way so I could get to the real coding. But I always found that there was another protocol to write, as if someone should have done this already and hadn't. After a while, I realized I was writing a lot of communication code to support other projects. I also realized I was good at it.

Protocol design is far more complicated than I originally imagined. There are so many issues a protocol might have to deal with. It makes a fascinating puzzle. Crafting an efficient and robust communications protocol for a particular need is such an intricate process that it is akin to writing an entire computer application. In fact, most modern protocols rely on several layers of protocols beneath them, much like modern languages rely on machine language and microcode beneath them. Working with them has led me to a greater understanding of the underpinnings of the network, just as writing software led me to a greater understanding of the computer.
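To make the layering idea concrete, here is a minimal sketch of the framing-and-integrity layer that sits near the bottom of most protocol stacks. It is an invented toy (STX/ETX delimiters, a one-byte length, an XOR checksum), not a protocol from any of Phil's projects:

    #include <cstdint>
    #include <vector>

    // A toy link-layer frame: [STX][len][payload...][checksum][ETX].
    // Payloads are limited to 255 bytes by the one-byte length field.
    constexpr uint8_t STX = 0x02, ETX = 0x03;

    static uint8_t checksum(const std::vector<uint8_t>& data) {
        uint8_t sum = 0;
        for (uint8_t b : data) sum ^= b;  // simple XOR checksum
        return sum;
    }

    // Wrap a payload in a frame for transmission.
    std::vector<uint8_t> encode(const std::vector<uint8_t>& payload) {
        std::vector<uint8_t> f;
        f.push_back(STX);
        f.push_back(static_cast<uint8_t>(payload.size()));
        f.insert(f.end(), payload.begin(), payload.end());
        f.push_back(checksum(payload));
        f.push_back(ETX);
        return f;
    }

    // Validate a received frame and extract the payload.
    // Returns false on any sign of corruption.
    bool decode(const std::vector<uint8_t>& f, std::vector<uint8_t>& out) {
        if (f.size() < 4 || f.front() != STX || f.back() != ETX) return false;
        uint8_t len = f[1];
        if (f.size() != len + 4u) return false;
        out.assign(f.begin() + 2, f.begin() + 2 + len);
        return checksum(out) == f[f.size() - 2];
    }

    int main() {
        std::vector<uint8_t> msg = { 'h', 'i' }, out;
        return decode(encode(msg), out) ? 0 : 1;  // round trip succeeds
    }

Real protocols stack sequence numbers, retransmission, escaping, and negotiation on top of a core like this, which is why a complete protocol implementation starts to resemble an entire application.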
Q: You have worked on many interesting projects as President of Phord Software, including writing award-winning utilities for the Windows environment. Detail your top systems. What lessons can you pass on to other developers?

A: I'm not sure how to classify my "top" systems. All of the projects Phord Software develops are tools that fill an immediate need. Shove-it, for example, was supposed to be a temporary "bug fix" for Windows 95. The problem it fixed still exists today in Windows XP and Server 2003. I'm sure it will be there in the next release of Windows too.

KeyBlock was a solution to the Ctrl-Alt-Delete "problem". You can't trap Ctrl-Alt-Delete in a Windows application because it's supposed to be like dialing 9-1-1: it should always work. But people who put their computers out for public use often need to exercise a bit more control over the machine. KeyBlock fills that need by disabling system keys, including Ctrl-Alt-Delete. I was surprised how successful KeyBlock was. These days I even have some competition.

One of my most profitable (in terms of hourly wage) projects has been www.shove-it.com (not actually related to Shove-it, the program). It's a horribly inept web site that I accidentally created, and it fills a dire need. It does that so successfully that it now requires a dedicated server simply to keep up with the traffic from daily visitors, in spite of my complete lack of attention to it or advertising for it.

Lessons to share:

1) If you don't know how something works (and it's your job to fix it), take the time to find out exactly how it is designed. The correct solution to the problem will only come from a good understanding of its source. Far too many bugs exist because someone "fixed" something without knowing what they were doing.

2) Someone once gave me a bit of advice on contract rates and salaries (granted, in the 1990s). He said, "Before you go in to meet the prospective client, think of the biggest number you can say out loud without laughing. That is your rate." And further, "Over time, you'll find that this number gets bigger." I offer this because it was useful to me for this reason: I so enjoy the work that I do that I forget that people are willing to pay me for it. Consequently, I undervalue my time. I think any job that you enjoy will suffer the same problem. First corollary to lesson number 2: in lean times, you may have to take your "subsistence" number instead of your "straight-face" number. Second corollary to lesson number 2: find a job that you enjoy so much that the salary is an afterthought.

3) Learn by doing. I don't think real lessons are learned from reading or studying. If you want to learn a new operating system, program, language, or tool, pick out a project to build with it and then go build it. Finish it. Completely. Then you'll know.

4) Explore new technology, but don't worship it. Before you rely on it, wait a while for the bugs to get worked out. In the meantime, play with it.

5) Find a need and fill it.

6) Live life unashamed. Be embarrassingly honest at all times.

Q: Amongst the various development languages, tools, resources, and frameworks, which are your favorites, and for what reasons?

A: I don't think I have any favorites anymore. I used to, but now I only have "currents". That is, whatever I'm working in currently is what I know and love. I've been working on Unix systems for a few years now. And I've been working on embedded controllers where the resources are extremely limited. I'm not a fan of the programming environment on either system, but I've learned from each of them.

Having said that, however, I'm a big fan of some systems. I like PHP for web development, because it's easy to write readable code in PHP. I like Perl for its flexibility, but it's so easy to write illegible Perl code that doing so intentionally has become one of its most basic tenets. I really like the C++ Standard Template Library (STL). The design and consistency of this framework is impressive. The time it saves (when I can use it) is wonderful.
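A small example of the consistency he is praising; the code is ours, not Phil's. The same algorithm works across different containers because everything communicates through iterators:

    #include <algorithm>
    #include <iostream>
    #include <list>
    #include <vector>

    int main() {
        std::vector<int> v{ 5, 1, 4, 2, 3 };
        std::list<int>   l{ 5, 1, 4, 2, 3 };

        // One algorithm, many containers: count_if neither knows nor
        // cares whether it is walking a vector or a list.
        auto odd = [](int n) { return n % 2 != 0; };
        std::cout << std::count_if(v.begin(), v.end(), odd) << '\n';  // 3
        std::cout << std::count_if(l.begin(), l.end(), odd) << '\n';  // 3

        std::sort(v.begin(), v.end());  // needs random-access iterators
        l.sort();                       // list supplies its own sort

        for (int n : v) std::cout << n << ' ';  // 1 2 3 4 5
        std::cout << '\n';
    }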
Q: It's hard to predict the evolution of program/system development; however, with your wide and proven background of innumerable successes, we would appreciate your thoughts in this area. Describe program/system development in five and ten years.

A: Sadly, I don't think it will change much. If you think about where we were 10 years ago, it seems we do many of the same things today. C++ is a bit stronger today, but it was in fairly wide use 10 years ago. For a while there was more focus on maintainability and defect tracking in the industry, but then the bubble burst in about March 2001 and everyone learned to "get lean" again. Defect tracking is making a comeback as we try to manage development more like a science and less like an art. I hope that the future will integrate it more into our development cycles.

Java and other "simpler" languages will allow more people to write code that doesn't crash. But they will also let more people write code without really understanding the system. That is a good thing in many ways. But it means the software industry will continue to have the image that its product is unreliable. Exponentially increasing processor speed and storage capacity will make us even lazier about optimization. So those mediocre programmers with C# and Java in their hands will be able to produce useful tools anyway.

There's a lot of bleeding-edge development in internet technologies, and open-source efforts have kept that alive. Linux will someday make a serious play for end-user desktops, but it will have the same battle to fight as Apple, and with less money.

Q: Which technology areas excite you the most, and for what reasons?

A: 1) VOIP: I was the designated "computer-telephony technology guru" at Hayes for several years while voice over IP was being born. Now, seven years later, it has finally reached the masses. I guess that's par for the course. I've been using a VOIP phone (from Vonage) exclusively for 18 months now. It hasn't been problem-free, but the trouble I had was related to my internet provider. These days, my broadband pipe is so fat and clear that nothing bothers my VOIP connections anymore. The phone companies can't seem to do anything right without the protection of monopoly status. It's about time that someone came along and showed them up. When I moved recently, I considered getting a landline phone again. But the cost was five times more than my VOIP connection and the features were fewer. I love my VOIP. I get my voicemails in my email, my calls are all free, and I can take my phone with me on trips.

Also in the mass-market category is TiVo. TiVo is the first computer my wife could use right away from day one. It's ironic that it was a Linux computer. I can't adequately explain TiVo to a non-TiVo owner, and I don't need to explain it to anyone else.

2) Nanotechnology: This field is still highly experimental, but it holds exciting promise. For me, its future is still too far off to be clear, but I'll bet it will be used in ways that are unimagined today. From a programmer's standpoint, this would be most interesting to work on if I could somehow model and build the machines logically. The really cool work here is being done in labs by mechanical engineering folks today. But there's no telling where it can go once we're able to put some intelligence into microscopic machines.

3) RSS: Blogs are the new media. In these early days there are plenty of crackpots and also-rans, but ultimately the dissemination of information will more commonly flow over something like RSS than anything else. It has a built-in immunity to spam, a natural popularity/relationship mechanism, and it is supremely democratizing. It will be belittled for a while longer, but in a few years we will wonder what we did without it.

4) Virtual machines: These are programs that let your computer pretend to be a different kind of computer. Java does this, and so does MAME, the Multiple Arcade Machine Emulator (see the toy example after this answer). I really have never been a big fan of Java. The VM idea is inherently flawed in that it immediately wastes a non-trivial portion of your resources. But the flaw is being mitigated as the technology progresses, by talented people who keep shrinking that last bit of loss out of the system. I think it may actually soon reach the point where the VM will always be running and applications will be able to use it as if it truly were part of the hardware. I don't know if it will be a VM for Java, C#, or some other abstraction. But the prospects for software are very interesting.

5) Computer-enhanced molecular biology: If I were going to change careers tomorrow, I would want to work in a research lab that focused on using computers to help analyze and model microbial machines. I think we are just a couple of years away from being able to do near-instantaneous DNA sequencing, and a few years more from being able to do large-scale DNA database comparisons to identify traits, flaws, species, and so on. I've always felt that not enough was being done in this area and that it would be exciting to work on. I have heard reports recently that make it sound like plenty of others thought the same thing. I think we're getting close. Also, we're starting to engineer biological viruses. This technique has the capacity to destroy us all, but it also has the potential to cure cancer, AIDS, and nearly every other systemic affliction known to man. I hope its potential for good use is realized before its potential for harmful use is. But I don't think, for example, that prohibiting research by the "good guys" will prevent it from being used against us in the future.

6) Molecular computing: If you think about it, DNA, the stuff we are all made of, is really a huge computer program. Granted, it's just the microkernel stuff; it doesn't control everything. But it is an amazing technology. We are starting to use this to design our own computer systems. This has nothing to do with biology or life creation; it has everything to do with speed and efficiency. But on the downside, it renders every known key-based encryption method to date utterly useless. I don't think DNA computers will be commonplace in the home in our lifetimes. But that's the funny thing about technological acceleration: it's recursive, and so it is practically impossible to predict the future beyond just a few decades.
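As referenced in item 4 above, here is a toy illustration of the virtual machine idea: the host CPU runs a loop that fetches and dispatches the guest machine's instructions. The three-instruction stack machine below is invented for this example (it is not Java's VM or MAME), and its fetch-and-dispatch loop is exactly where the "wasted portion of your resources" goes:

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // A toy "virtual machine": the host pretends to be a tiny stack
    // machine with three instructions. Real VMs (Java's, or the CPUs
    // MAME emulates) are this same loop, scaled up enormously.
    enum Op : uint8_t { PUSH, ADD, PRINT };

    void run(const std::vector<uint8_t>& code) {
        std::vector<int> stack;
        for (std::size_t pc = 0; pc < code.size(); ++pc) {
            switch (code[pc]) {
                case PUSH:  // next byte is a literal operand
                    stack.push_back(code[++pc]);
                    break;
                case ADD: {  // pop two values, push their sum
                    int b = stack.back(); stack.pop_back();
                    stack.back() += b;
                    break;
                }
                case PRINT:
                    std::cout << stack.back() << '\n';
                    break;
            }
        }
    }

    int main() {
        run({ PUSH, 2, PUSH, 3, ADD, PRINT });  // computes and prints 5
    }

Every guest instruction costs the host several real instructions of fetch-and-dispatch overhead, which is the resource loss that VM implementers keep working to shrink.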
Q: Here is where we turn it around. Pick three topic areas of your choosing and provide commentary.

A: These are pretty diverse topics. Thanks for leaving the floor so completely open.

Area 1: Debuggers

Our debuggers were pretty limited back then, but sometimes the debugger today is even more limited, depending on the environment. I have so many debuggers now, for different platforms, that I don't get to know any of them very well. These days, however, I find that I don't have to know the debugger well. I have reached the point where computer languages are like natural languages to me. I can read them, I can write them, and they make sense. If I design the system right, they even make logical sense. So when there's a problem, I find myself reading the source code, just like Bruce did, line by line, imagining what might go wrong. I can't recommend this technique for everyone. Certainly a good debugger is an essential tool. But I find myself with this talent for reading code now, after all. I didn't try to cultivate this ability, but I encourage other debugger-dependent programmers today to try it. I would have worked at it sooner if I had known it was feasible.

Area 2: Hobbies, computers, and kids today

It's not as bad as it sounds, though. My hobbies include digital photography, political debates, bioethics discussions, foreign languages, cultures, mathematics, and all forms of computer programming. (I also enjoy kites, boating, and tae kwon do, but they're not in this meme.) So I had a hard time explaining to my family just what it is that I do. But kids today just get it. They are immersed in it. Computers are all around them, they grew up this way, and they always have at least three microprocessors on them at any given moment. They are used to living life online. Dire predictions about the loss of social intimacy have not come true; there will always be customs and taboos. But the technological gap has widened for those who refuse to "get online". These days, my 86-year-old grandmother IMs me about once a week. Maybe it's not "kids today" but rather the proliferation of technology that did it. Either way, the future of technology and society is pretty exciting.

Area 3: Honesty

I started researching this a bit, and through some non-scientific polling, questioning, and small samples, I came to the conclusion that only about 30% of the US population is habitually honest. I know a lot of people won't see anything wrong with that, because they see nothing wrong with "little white lies" or inconsequential falsehoods in the name of tact. But I do, and I wish everyone did. I think it's insidious. The state of habitual dishonesty damages society and relationships. It leaves us unable to trust most of what we hear, even from those closest to us. This distrust tends to make us even less trustworthy ourselves. It's a vicious cycle. I have noticed some cues in people's speech and mannerisms which can suggest into which category they fit: chronically honest, chronically dishonest, or somewhere in between. I look for them as part of this research. I wish more people were aware of their own category and could imagine a world with more honesty and less embarrassment.

Q: Can you share your experiences with the Carputer, and what are your ultimate hopes for the system?

A: I could go on and on about my carputer. And I did, at my blog
(www.philhord.com/carputer/). But basically, it is a Windows 2000 PC installed inside the dash of my Honda Odyssey minivan. I have a touchscreen on the dash which I use for GPS navigation (with voice prompts) and primary control of the system. I also have a flip-down monitor for the rear passengers to watch. The second monitor has its own video output, independent from the touchscreen, so I can use the front display for navigation while the kids watch a movie on the rear monitor. I have about 20 movies on the hard drive at any given time. I can rip new ones from DVDs or from my TiVo. I can even surf the web on the road (at 90 kbps) by plugging in my cell phone.

The carputer is a custom-built system, and dozens or hundreds of others have built something similar. I looked at packaged systems, but none of them did exactly what I wanted mine to do. So I built my own. It really wasn't very hard to do. My wife, who hates computers, loves her carputer (she drives the van). She admits to relying on the nav system to find her way around our new neighborhood, and the kids love the movies for the long drives to events in the old neighborhood. When I disabled it for two weeks while I worked on a power problem, my wife actually pleaded with me to get it working again. I try not to break it anymore.

Q: Any new freeware utilities we can expect from you?

A: I have about a dozen that I want to write, but I have no time anymore. I always bite off more than I can chew. I have some digital photo sharing software that I started almost four years ago, but then life got in the way and I never finished it. I wrote some tools for my carputer, and I'll probably release those soon. I've got a TRS-80 BASIC emulator I wrote in a span of two weeks to see if I could. I'll give that away, source and all, as soon as I clean it up a bit. So I guess the answer is "yes, when I get around to it."

Q: Phil, we are indeed fortunate to have you share your considerable and valued experiences with us. Thank you!

A: Thanks for your interest. I enjoyed our chat.