Eidola Notation Gallery

ACCEL's Circuit Board Designer

Michael Balloni writes about a system for electrical engineers. I wish we had screen shots of the stuff he describes! It sounds cool.

The development model used by electrical engineers and electronics manufacturers could serve software developers and system implementers well. Consider the design of electronic circuits and printed circuit boards. I worked for a now-defunct company named ACCEL Technologies, Inc. that made three great products: Schematic, PCB, and Library Manager.

In Schematic, you would design the circuits. You could make use of existing electronic components from parts suppliers like Texas Instruments and Motorola, or you could specify all-new components for later fabrication. The process involved finding and choosing components to use, laying them down on sheets (representative of old sheets of drafting paper), and then connecting the pins of the components together. You could also specify modules, which were basically nested (abstracted) circuits -- kind of like function calls for EEs. The end result would be a schematic: sheets with component diagrams (picture capacitor and resistor symbols) and neatly arranged lines connecting the components and modules. You could run utilities that would assess the electrical properties of your design based on connectivity and component properties, but when all was said and done, you didn't really have an end product. Enter PCB.
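
To make the Schematic model concrete, here is a minimal sketch of how such a design might be represented as data. Every name here (Pin, Component, Net, Module) is invented for illustration; this is one plausible shape for the model in Python, not ACCEL's actual internals.

    from dataclasses import dataclass

    @dataclass
    class Pin:
        name: str                      # e.g. "VCC" or "OUT"

    @dataclass
    class Component:
        symbol: str                    # e.g. "resistor", "capacitor", "74HC00"
        pins: list[Pin]

    @dataclass
    class Net:
        # One electrical connection joining several pins -- drawn as a
        # neatly arranged line on the schematic sheet.
        pins: list[tuple[Component, Pin]]

    @dataclass
    class Module:
        # A nested (abstracted) sub-circuit: "function calls for EEs."
        name: str
        components: list[Component]
        submodules: list["Module"]
        nets: list[Net]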

PCB stands for Printed Circuit Board, and with this tool you were working with layers of real circuitry: real ICs and electrical components, real copper, real holes in boards. The situation was three-dimensional, since most real circuit boards use at least two layers (front and back), and many use internal layers to manage the routing of signals between components on opposite sides. The main mission in PCB was to go from a finished schematic to physical plans for a board. You'd start off with a rat's nest of components and virtual connections between them, and you'd move the components around on the board, replacing virtual connections with real routed copper and hopping between the front and back of the board as necessary. It was a really hard problem to solve, and the software did it well, handling all sorts of crazy issues: grid spacing for manufacturability, routing, and the graphical display of thousands of components and tens of thousands of little pieces of copper connecting them.
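
The board-layout side adds physical geometry and layers to that same connectivity. Again, this is a hypothetical Python sketch with invented names, not PCB's real internals:

    from dataclasses import dataclass
    from enum import Enum

    class Layer(Enum):
        TOP = "top"
        BOTTOM = "bottom"
        INNER_1 = "inner 1"            # internal layers route signals between sides

    @dataclass
    class Footprint:
        component: str                 # reference back to the schematic component
        x: int                         # position snapped to a manufacturable grid
        y: int
        side: Layer

    @dataclass
    class Trace:
        # A routed run of real copper that replaces one virtual
        # "rat's nest" connection from the schematic.
        net: str
        layer: Layer
        path: list[tuple[int, int]]

    @dataclass
    class Via:
        # A plated hole that hops a signal between layers.
        x: int
        y: int
        layers: tuple[Layer, Layer]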

And then there was Library Manager, which managed the mapping from electrical symbols (resistors, capacitors, etc.) to physical components (ICs, actual resistors, etc.). From within Schematic or PCB you would choose which libraries to use, and then you'd use Library Manager to browse them, looking for parts that would do the job. What might appear as several little drawings on the schematic would be folded into one big IC, and so on. This mapping was based on real parts libraries from vendors like Motorola and TI, and it included all the graphics and electrical-equivalency information needed by both Schematic and PCB.
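
Here is a rough sketch of what one library entry might look like. The 74HC00 really is a quad NAND gate in a 14-pin package, but the data structure itself is guesswork for illustration:

    from dataclasses import dataclass

    @dataclass
    class PartMapping:
        # One library entry: several schematic symbols fold into one
        # physical part, with the equivalency info both programs need.
        part_number: str               # e.g. "74HC00"
        vendor: str                    # e.g. "Texas Instruments"
        symbols: list[str]             # schematic symbols this part provides
        footprint: str                 # physical package for PCB to place
        pin_map: dict[str, int]        # symbol pin name -> package pin number

    library = [
        PartMapping(
            part_number="74HC00",
            vendor="Texas Instruments",
            symbols=["NAND", "NAND", "NAND", "NAND"],  # four gates, one chip
            footprint="DIP-14",
            pin_map={"1A": 1, "1B": 2, "1Y": 3},       # abridged
        ),
    ]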

One thing to note about these applications: any given person would typically operate only one of them regularly.

I see in software development and deployment a need for similar tools with similar feature sets: a design notation in which what is desired is laid out, and an implementation notation in which how it's going to happen is finalized, with a common notion of software components shared between the two -- classes being analogous to electrical functions, and software components to physical electrical components.

You may say "Silly Mr. Balloni, why should developers have to specify what they want twice?" Because there's a big difference between what we want (working software systems that do good stuff) and how we get it to work (distribution of workload and resource usage across farms of heterogeneous servers, use of specialized/optimized hardware for certain tasks, caching and concurrency issues, etc). Anyone who's written distributed software understands that no simple "write once and run anywhere" solution (think COM, CORBA, RMI, .NET, etc.) is going to do the trick: the algorithm and the implementation necessarily diverge, and it's the mapping and management of that divergence that is most underrepresented in today's computing environments.
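
To make the two-notation idea concrete for software, here is a hedged Python sketch -- every component name, host pool, and field below is invented, not a real system. A design notation would say what components exist and how they connect; a separate implementation notation would map those same components onto real machines:

    from dataclasses import dataclass

    # Design notation: WHAT we want (the software "schematic").
    @dataclass
    class SoftwareComponent:
        name: str
        provides: list[str]            # interfaces this component offers
        requires: list[str]            # interfaces it consumes

    design = [
        SoftwareComponent("OrderService", ["orders"], ["inventory", "cache"]),
        SoftwareComponent("InventoryService", ["inventory"], []),
        SoftwareComponent("Cache", ["cache"], []),
    ]

    # Implementation notation: HOW we get it (the software "PCB").
    @dataclass
    class Placement:
        component: str                 # which design component
        host_pool: str                 # which servers in the heterogeneous farm
        replicas: int                  # a workload-distribution decision

    deployment = [
        Placement("OrderService", host_pool="general-app-farm", replicas=8),
        Placement("InventoryService", host_pool="db-adjacent", replicas=2),
        Placement("Cache", host_pool="memory-optimized", replicas=4),
    ]

    # The mapping between the two lists -- which component runs where, and
    # in how many copies -- is exactly the divergence Balloni says today's
    # tools underrepresent.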

In this last paragraph, I think "algorithm and implementation" may be a misnomer -- this is at least as much about the divergence of development and deployment. It might be really interesting to have a deployment specification tool with a deep knowledge of, and strong binding to, the large-scale structure of a program -- sort of like the PCB app. Interesting.

Man, I wish we had screen shots! -PPC

Copyright 2001 Nick Weininger