Canadian Information Processing Society (CIPS)

CIPS CONNECTIONS


Interviews by Stephen Ibaraki, FCIPS, I.S.P., MVP, DF/NPA, CNP

Paul Bassett, I.S.P. (ret.): Leading Software Engineer and Computer Science Authority

This week, Stephen Ibaraki, FCIPS, I.S.P., DF/NPA, MVP, CNP has an exclusive interview with Paul Bassett.

Paul Bassett has given keynote addresses around the world, and was a member of the IEEE’s Distinguished Visitor Program from 1998 to 2001. He was General Chair for the Symposium on Software Reuse, held May 18-20, 2001, in conjunction with the International Conference on Software Engineering.

Ed Yourdon called Paul's book, Framing Software Reuse: Lessons from the Real World (Prentice Hall, 1997), "the best book about reuse I've seen in my career." DeMarco and Lister republished his 1987 IEEE paper, Frame-based Software Engineering, in their compilation of the 27 most significant papers of that decade. Paul also co-authored the IEEE’s Standard P1517: Software Reuse Lifecycle Processes.

Paul has over 35 years of academic and industrial software engineering experience. He taught computer science at York University for seven years, co-founded Sigmatics Computer Corporation and Netron Inc. (two ongoing software engineering companies), and for over twenty years he helped governments and businesses (a partial list: the US and Canadian federal governments, The Hudson’s Bay Co., IBM, Fiserv, TD Bank, Ameritech, Union Gas, Teleglobe Insurance, Noma Industries) to improve their software development tools and techniques. He has an M.Sc. (U. of Toronto Computer Science), and is a CIPS information systems professional (retired). He is currently a senior consultant with the Cutter Consortium http://www.cutter.com/index.shtml.

Paul received the Information Technology Innovation Award from the Canadian Information Processing Society (CIPS) for his invention of frame technology. He later co-chaired their Certification Council, was a member of their Accreditation Council, and now helps to accredit honours programs in computer science and software engineering, as well as chairing the CIPS Committee on Software Engineering Issues.

The latest blog on the interview can be found in the IT Managers Connection (IMC) forum where you can provide your comments in an interactive dialogue.
http://blogs.technet.com/cdnitmanagers/

Index and links to Questions
Q1   Paul, your latest article in IEEE Software, The Case For Frame-Based Software Engineering, is generating buzz. What's its essential message?
Q2   Hold on. What's wrong with set theory? It's a foundation for all of mathematics!
Q3   You're saying our ability to manage complex software gets overwhelmed by too many similar classes, causing too much confusion and additional complexity. So, how does frame technology come to the rescue?
Q4   That was quite a mouthful, Paul. Is frame technology similar to anything out there now?
Q5   So you're saying classes rigidly inherit all the properties of their ancestor classes, whereas frames are involved in a flexible manufacturing process. But it sounded like you also said frames customize frames. How about explaining that novelty a bit further?
Q6   I'm beginning to understand, but what happens if two frames in the same hierarchy both want to override the same detail?
Q7   Fascinating revelations, Paul, about how to manufacture custom software cost-effectively. Does frame technology affect other phases of the lifecycle?
Q8   If the technology is that powerful, it ought to spread virally. Is that what you expect?
Q9   So how do you propose to overcome these barriers?
Q10   My, you certainly think big! How would the supercharging process work?
Q11   What kind of payoffs should the various stakeholders expect?
Q12   That's an amazing future. How about the past? How did frames get started?
Q13   In conclusion, what are the most important ideas to take away?

Discussion:

Q1: Paul, your latest article in IEEE Software, The Case For Frame-Based Software Engineering, is generating buzz. What's its essential message?

A: The evidence, Stephen, is by now fairly conclusive: Object Orientation has been unable to deliver on its key promises, especially those concerning our industry's chronic issues: High quality systems are just too expensive, and take too long to build; they're too hard to maintain, and their components are too hard to reuse.

In a nutshell, OO suffers because classes mirror sets. To have any hope of building and evolving systems with tomorrow's complexity requirements, we must mirror software's infinite malleability. Sets just can't do this; my article explains why, and how frame technology, which is now available as open-source freeware, does fulfill OO's failed promises, a claim backed with plenty of hard evidence.

Q2: Hold on. What's wrong with set theory? It's a foundation for all of mathematics!

A: Yes, of course. Sets can model anything, at least in principle. But in practice, the problem boils down to this: two objects must belong to separate classes if their definitions differ in even the slightest detail. For mathematicians this is no problem - simply define as many classes as you need, even uncountably many! For object orienteers, however, this can become a show stopper:

  • complexity explodes when domain models evoke thousands of classes;
  • confusion arises when there are too many similar classes to choose from;
  • unique and subtle details are fragmented among common properties;
  • real-world fidelity suffers when objects model components that should be seamlessly integrated, not encapsulated - is your head an autonomous agent, dynamically linked to your torso?
  • last but not least, multiple inheritance failed because it cannot tolerate incompatibilities among parent classes. Set intersection is the mathematical analog, and the intersection of incompatible sets is the empty set!

Q3: You're saying our ability to manage complex software gets overwhelmed by too many similar classes, causing too much confusion and additional complexity. So, how does frame technology come to the rescue?

A: The key is to make the notion of "similarity" work for us, not against us. Think about it: everything is similar to something else, depending, of course, on what you mean by "similar." Two cars may be similar, but is a car similar to a truck? Maybe. Despite its tremendous potential to collapse complexity, similarity's subjective nature makes it hard to harness.

That's where frame technology comes in. It's a rather pure form of manufacturing that exploits similarities. Imagine assembling cars from just a few generic parts. That is, we would use just one standard bumper, one standard chassis, one standard fender, and so on. The assembly line would automatically replicate and adapt each generic part to custom fit each specific car. Too bad we can't actually morph physical parts into similar variants; if we could, inventories, costs, and complexities would collapse.

Well, software is soft, infinitely malleable. Frame technology can, and routinely does, build, maintain, and evolve custom systems from a few dozen frames, organized into nested subassemblies. Each frame is a component of a generic information model that is open to an infinity of possible variations. Frames can apply specific variations to the generic elements to produce not only files of compilable source, but any text, expressed in any language: design models, legal documents, bills-of-materials..., anything that is somehow similar to a model in the frame library. Reductions in costs and schedules can be so dramatic that companies have been known to ask for externally audited confirmations.

Q4: That was quite a mouthful, Paul. Is frame technology similar to anything out there now?

A: Great question, Stephen. Here is how frame technology is not similar to object orientation: Instead of thousands of look-alike classes we never need more than a few hundred generic frames, and often just a few dozen. Rather than obscure, tiny class definitions, frames are designed to model the way we think about various aspects of an information domain. Instead of multiple-inheritance breaking down, any "rough edges" that frames may have are "ground off" in the process of assembling them. The "grinding and polishing" instructions are stored in the frames that control each subassembly. Such instructions not only tell the frame processor what to do, but they also tell us exactly what it takes to integrate clashing components into a seamless whole. This is the process of adapting context-free parts to a context.

While frame technology is similar to conventional manufacturing, there are two key differences: (1) frames are highly adaptable parts whereas conventional parts must be used-as-is, and (2) the instructions for assembling and adapting parts are carried inside the parts themselves!

Frame technology (FT) is also similar to aspect-oriented programming (AOP). Both enable you to assure important engineering qualities that the current programming paradigm cannot, such as separation of concerns. But the approaches behind FT and AOP are different. To make programs easier to maintain, AOP factors out a limited class of functionalities that crosscut program modules. FT also does this, but its mechanisms, being more general, also enable non-redundant, highly changeable software representations.

Q5: So you're saying classes rigidly inherit all the properties of their ancestor classes, whereas frames are involved in a flexible manufacturing process. But it sounded like you also said frames customize frames. How about explaining that novelty a bit further?

A: Let's back up a little bit. Frames originate as already existing model examples: Pick any good algorithm or data structure, say in Java or C#, or pick a design model, your favourite recipe, a standard legal document - any specific example that is worthy of being an archetype for all things of a similar nature. Then simply take each detail or group of details in the example and make it the default value of a uniquely named parameter. Presto, you have framed the example - you now have a generic text that the processor can convert into an unlimited number of similar but arbitrarily different versions of the original model, what we call "same as, except…"
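To make the "same as, except…" idea concrete, here is a minimal sketch in Python. This is a toy illustration only, not the real frame processor, and all names in it are invented: a concrete archetype is "framed" by turning its details into named parameters whose defaults reproduce the original exactly, after which each variant states only its exceptions.

```python
# Toy sketch of framing an archetype (invented names; real frames are not
# simple string templates). Each detail of a concrete example becomes the
# default value of a uniquely named parameter.

ARCHETYPE = "REPORT: {title}\nSorted by: {sort_key}\nPage width: {width}\n"

# The defaults recover the original example verbatim.
DEFAULTS = {"title": "Monthly Sales", "sort_key": "region", "width": 80}

def generate(overrides=None):
    """Produce a variant: same as the archetype, except for the overrides."""
    params = {**DEFAULTS, **(overrides or {})}
    return ARCHETYPE.format(**params)

original = generate()                      # the archetype itself
variant = generate({"sort_key": "date"})   # same as, except sorted by date
```

Calling generate() with no overrides yields the archetype itself; each override expresses one exception while every other detail stays the same.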

A large model decomposes into an assembly of nested frame subassemblies of arbitrary depth, as shown in the schematic parts-explosion diagram. Every frame is the root of such a subassembly, and it stores two kinds of information: default pieces of the model (green bars), and commands to adapt its subassembly frames to its needs (orange bars). More specifically, it can select, add, delete, replace, and iterate any subassembly detail or group of details, using frame-commands to override corresponding frame-parameter defaults. Each subassembly, by itself, defaults to its piece of the original model, the archetype.
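As a rough sketch of that parts-explosion, assuming an invented in-memory representation (the real frame processor's command syntax differs), here is a root frame whose adapt command overrides a default detail in its subassembly, while every untouched detail falls back to the subassembly's own default:

```python
# Toy nested-frame assembly (invented representation). Each frame stores
# default details plus commands that adapt its child subassemblies.

invoice_line = {                       # leaf frame: defaults only
    "body": "{qty} x {item} @ {price}",
    "defaults": {"qty": "1", "item": "widget", "price": "$0.00"},
}

invoice = {                            # root frame: adapts its subassembly
    "body": "INVOICE for {customer}",
    "defaults": {"customer": "ACME Corp"},
    "children": [invoice_line],
    "adapt": {"price": "$9.95"},       # override a child's default detail
}

def assemble(frame, overrides=None):
    """Fill a frame's slots from its defaults, then apply parent overrides."""
    slots = {**frame["defaults"], **(overrides or {})}
    lines = [frame["body"].format(**slots)]
    for child in frame.get("children", []):
        lines.append(assemble(child, frame.get("adapt")))
    return "\n".join(lines)
```

Assembling the root produces the whole model; the child's $0.00 default never appears because the parent's adapt command replaced it.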

Q6: I'm beginning to understand, but what happens if two frames in the same hierarchy both want to override the same detail?

A: Another great question, Stephen. You'd almost think this interview was pre-scripted! Two frames can adapt the same detail differently as long as one is not an ancestor of the other - each frame adapts a separate copy. Otherwise the ancestor frame closest to the top of the hierarchy calls the shots… kind of like it is at your job!

There is a very good reason for this rule: the higher up you are the more responsibilities you have - you must be sensitive to potential opportunities and conflicts that are invisible to your subordinates. So it is with frames too. The lower the frame, the more context-free it is; the higher the frame, the more context-sensitive.
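The precedence rule can be sketched as follows, again with an invented representation: scanning a line of ancestry from the root downward, the first frame that overrides a detail wins, and a detail no frame touches falls back to the leaf frame's own default.

```python
# Toy illustration of override precedence (invented representation): the
# frame closest to the top of the hierarchy calls the shots.

def resolve(hierarchy, detail):
    """hierarchy is listed root-first; return the highest override found,
    falling back to the leaf frame's own default."""
    for frame in hierarchy:                      # root first: highest wins
        if detail in frame.get("overrides", {}):
            return frame["overrides"][detail]
    return hierarchy[-1]["defaults"][detail]

spec   = {"name": "spec",   "overrides": {"currency": "EUR"}}
middle = {"name": "middle", "overrides": {"currency": "USD"}}
leaf   = {"name": "leaf",   "defaults": {"currency": "CAD", "tax": "13%"}}

resolve([spec, middle, leaf], "currency")  # the spec frame's "EUR" wins
resolve([spec, middle, leaf], "tax")       # no override: leaf default "13%"
```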

The root of an entire assembly is called a specification frame. While it has the power to override any detail anywhere in its assembly, it is small - typically 5-15% of its equivalent program, and not infrequently, more like 1%. This is because a specification frame needs to specify only what is said nowhere else - what makes that program unique. All other details, including adapt commands, can be hidden in the subassembly frames, yet shareable with other programs.

Frames provide the keys to making the most of reuse. They are adaptable to multiple contexts, both within and across programs. Reuse across programs turns a family of similar programs into a Product Line. Within a program, adaptive-reuse eliminates redundancy, condensing each program to its novel essences while remaining easy to change. Source files become read-only transients because specification frames give programmers control over every symbol input to the compiler. Can you imagine the productivity boost from having to write and maintain only 5-15% of every program?

Q7: Fascinating revelations, Paul, about how to manufacture custom software cost-effectively. Does frame technology affect other phases of the lifecycle?

A: Indeed, it's been called a "paradigm shift," a greatly overused term. I'll let you judge if I'm entitled to use it.

Let's start with the requirements phase. In mature frame environments it's never acceptable to start from scratch, so-called green-fields. Almost for free, a frame engineer can quickly generate an industrial-strength prototype from frameworks that are relevant to broad-brush requirements. Non-technical users relate to WYSIWYG executables, not abstract technical specs. The prototype also incorporates high quality architectural, safety and security features that such users usually don't know enough to ask for, and in the current paradigm, are not implemented until much later.

"Test drivers" of the prototype express feedback in a "same as, except…" fashion, where the "exceptions" can range over the gamut of high and low-level, functional and non-functional features. Frame engineers design frames that extend and customize the prototype based on these informal, evolving requirements. Next, they regenerate and integration-test the system to ensure the new and revised executables interoperate properly with the rest of the system. The new version is test-driven again, and the cycle repeats. At some point, test-driving becomes acceptance testing, but refinement cycles continue indefinitely. The so-called maintenance phase, as a separate process and mind-set, disappears. The same tools and techniques used to create the system are used to fine-tune and evolve it.

Quality and testing deserve special mention. There is a natural division of labour:

Domain-expert frame engineers design and write frameworks of high quality - functionally rich, robust, efficient, and so on. Because such frameworks supply the bulk of every program, quality necessarily goes up, compared to conventional programs. Also, because frameworks are reused, hence tested in diverse contexts, they have fewer defects than un-reused code.

Application developers write specification frames using templates - each template already knows what frameworks to adapt, and explains the use of various parameters. Most errors are fast to find because all un-reused code is in specification frames. This combination of fewer defects, defect-localization, and functionally rich frameworks provides a powerful punch. Bottom line: faster better cheaper.

The anecdotal evidence sounded too good to be true. So I asked 15 corporations to hire an independent auditor, QSM Associates. They compared 30 frame-based projects, ranging up to 10 million lines of non-comment code. QSM confirmed that 85 to 95% is a normal reuse range. The auditor also found that a typical project was completed in 70% less time, with 84% less cost than their database of industry norms predicted. So, is this confirmation of a paradigm shift?

Q8: If the technology is that powerful, it ought to spread virally. Is that what you expect?

A: It's more of a hope, Stephen, than an expectation. As with all new paradigms there are significant barriers.

First of all, frames fly in the face of our mental conditioning: we know that modifying code is dangerous - one wrong symbol can cause chaos; no wonder OO biases us to "reuse as is", and compilers prohibit self-modifying programs. If we can't edit code reliably, how can we ever trust machines to do it? On the other hand, programs manipulate symbols much faster, more cheaply, and more reliably than we can ever dream of. Also, requirements changes cascade into edits whose interdependencies can be formalized. Whereas human editors are notoriously error-prone, frames never tire, never get sloppy, and they never forget all the places to edit. Yet the very idea of a machine beating us at our own game is a blow to our egos. We automate everyone else's jobs, never our own!

Second of all, middle managers find themselves outside their comfort zones. They know the organization and its infrastructure will change, and a new division of labour will emerge, with new gatekeepers of information and expertise. They fear a loss of power and prestige. They were promoted based on their competence with the current paradigm, so why would they prefer any other? And they were certainly never trained in paradigm shifting.

Third, senior management is skeptical. Is the gain really worth the risk? Can we really change our wheels while our train rolls down the track? Aren't our people already fully occupied with current priorities? Are we prepared to weed out change resistors? To ensure sharing across departments that haven't trusted each other for years? Are we willing to stake our careers on the outcome? If not, why should anyone else?

I could go on, but I think you get the picture of what stands in the way.

Q9: So how do you propose to overcome these barriers?

A: Well, here is my fantasy. The idea is to acquire a successful software vendor, "supercharge" it for market domination, then resell it and repeat the process in another market. After a few replications of the buy-supercharge-sell strategy, we reach a tipping point: the revolution takes off spontaneously; a supply-chain of framework vendors ultimately develops. Given the size and strategic nature of the software industry, we are talking $billions in capital-gains and dividends, not to mention the notoriety that comes with transforming a craft to an effective and respected software-engineering profession.

Q10: My, you certainly think big! How would the supercharging process work?

A: I certainly wouldn't run acquired companies myself. They'd already have decent management or my putative financier wouldn't touch them. I'd install a small team of superchargers, change agents who have a track record of turning companies around while minimizing disruptions. Their objective is to create a frame-based culture at all levels - technology, process, infrastructure, sales, and support.

Supercharging requires the following:

  1. Build trust through: education, opportunities to influence one's own destiny, expectation management, feedback, incentives for those who buy into the vision, and exit packages for those who can't or won't.
  2. Prioritize the company's strengths and weaknesses with a view to how and when to supercharge its various departments.
  3. Use systematic measures to improve software cost-effectiveness, from back-room development to marketing and sales.
  4. Frame the "hot spots" - places where software changes frequently. Improving hot-spot maintainability should produce immediate labour savings.
  5. Evolve a frame library to house the organization's intellectual assets in standardized, reusable capital-assets - frameworks. This is the "supercharge" that benefits all aspects of the business.
  6. Find opportunities for rapid growth, for market niches that are inaccessible to conventional vendors. One obvious opportunity is mass customization - frame technology makes it easy to tailor generic software packages to each customer's ever changing needs. In addition, customization patterns inevitably emerge that can be turned into new products.

Q11: What kind of payoffs should the various stakeholders expect?

A: Application developers will enjoy the speed with which they can design, create, test, and modify high quality systems. They will also like the fact that they have much less detail to look at. What they see are specifications synced to the unique features that made their programs worth writing. Frame engineers will gain the satisfaction of seeing their expertise captured in capital-assets that generate significant ROIs each time they are reused. Their contributions to the bottom line will be measured and rewarded.

Managers will do more with less. They will enjoy running professional organizations that pride themselves on their productivity/quality/responsiveness statistics. They will also enjoy trusting partnerships with their users and sponsors, relationships built on a track record of being responsive to ever changing business requirements, and delivering quality results on time and within budget. Managers will also share in bottom line results.

Enterprises will bring strategic, IT-based products and services to market faster and with more sophistication than their competitors can, at least long enough to gain a competitive edge. Package vendors will release new versions while remaining backwards compatible with previous versions. Even better, customers who have customized a previous version will upgrade much more easily, due to the automated nature of the recustomization process. Better bottom lines will benefit shareholders all around.

As I mentioned, the IT industry will eventually mature into tiered supply chains, much like the spokes of a wheel. Vendors at the hub will specialize in frameworks that are standard throughout an entire industry sector, such as the financial sector. Industry-specific vendors, say for life insurance, will integrate their frameworks with those of the financial-sector to provide a thick shell of features and functions that are common to all life insurance companies. At the radial end of this spoke will be vendors who specialize in packages and custom solutions that integrate with the rest of the supply-chain, and differentiate individual insurance companies to meet their custom needs. The coolest part of this scenario is that customizations can occur throughout a frame hierarchy without blurring the boundaries of who owns what, or has what intellectual property rights.

Last but not least, society benefits: from cost-effective software systems evolving cost-effectively; from reduced risk and harm due to program defects and project failures; and from increased innovation. In particular, frames increase our general understanding of adaptable systems. Artificial intelligence researchers might find a way to close the loop; that is, create a system that invents and refines its own frames based on interacting with its environment. Were that to happen, the world would quickly become a much different place!

Q12: That's an amazing future. How about the past? How did frames get started?

A: In a previous life I owned a small software company specializing in custom software for small businesses. I wrote my own code generators for reports and data entry. So, every time a layout or format changed, I was bound and determined to regenerate. But this meant reediting every regeneration with the same customizations. I also noticed that while each customer wanted something different, under the surface there were plenty of similarities. Trouble was it took more work to extract and re-customize them than to write a new version from scratch.

These two frustrations inspired an idea for how to automate cutting and splicing. I was teaching computer science at York University at the time, so I brainstormed my idea with my colleague, Prof. Gunnar Gotchalks, who also moonlighted for my company. Gunnar built a prototype frame processor while I altered my generators to emit frames; and the rest, as they say, is history.

The processor remained hidden in proprietary products until recently, when I helped another colleague, Prof. Stan Jarzabek at the National University of Singapore, to develop a freeware version called XVCL, which stands for XML-based Variant Configuration Language. Go to http://xvcl.comp.nus.edu.sg to download manuals and tools. Stan's new book, Effective Software Maintenance and Evolution, published by CRC Taylor & Francis Group, provides an in-depth treatment of how frame technology alleviates the problems of software maintenance. Stan's website and book, together with my IEEE article, and my earlier Prentice Hall book, Framing Software Reuse, should be more than enough to kick start software professionals into making a difference.

Q13: In conclusion, what are the most important ideas to take away?

A: First, frame technology is a flexible manufacturing paradigm in which the parts are frames that are adaptable by other frames. This way of automating the construction phase alters and benefits all phases of the software lifecycle. Second, implement incrementally. Learn how to use the tools by framing one or two small systems; then go after both your legacy hot spots and new development, always looking to frame components with the most reuse potential. The rest will follow.

Closing Comment: Thank you, Paul, for this tour de force.

A: You're welcome, Stephen. I've really enjoyed your questions.