Why Computer Interfaces Are Not Like Paintings: the user as a deliberate reader

Marian Petre* and Blaine A. Price+

*CALRG and +HCRL, The Open University, Milton Keynes, MK7 6AA, UK

+DGP, CSRI, The University of Toronto, Toronto, M5S 1A1, CANADA

E-mail: M.Petre or B.A.Price@open.ac.uk

[NOTE: This is an HTML version of a paper appearing in: J. Gornostaev (ed.), Proceedings of East-West HCI92: The St. Petersburg International Conference on Human-Computer Interaction, 1 (pp. 217-224), ICSTI: Moscow 125353, Russia]

Abstract: Designers seeking to improve human-computer interfaces, particularly those concerned with programming environments, often assume that "graphics" will always result in an improvement over "text." Such claims are especially difficult to assess, given that people have used the terms "text" and "graphics" in different and conflicting ways throughout the literature. This paper suggests a preliminary, consistent terminology for discussing "graphical interfaces" (including so-called "visual programming systems") to highlight some of the issues involved in using "graphics" in notations and interfaces. It discusses evidence from empirical studies showing that using "graphics" does not necessarily lead to improvement and may introduce its own problems. The paper concludes with a discussion of the successful integration of "graphics" and "text."

NOTE: Russian abstract omitted from HTML version

Introduction

Some writers on visual programming assume that graphical representations are self-evidently and universally superior to text, simply because they are graphical (Shu, 1988). We doubt that. Pictorial and graphic media can carry much information in what may be a convenient form, but incorporating graphics into computer systems requires us to understand the precise contribution that graphical representations might make to the job at hand.

It is unlikely that graphical representations are a panacea. Graphical representations are information structures, intended to emphasize some types of information, usually at the expense of other types of information. The implicit model behind at least some of the claims of graphical superiority is that the programmer takes in a program in the same way that a viewer takes in a painting: by standing in front of it and soaking in it, exploring first one part and then another, letting the eye wander from place to place, receiving a "gestalt" impression of the whole. We propose, instead, a model of the programmer as a deliberate reader: one who uses a graphical system with specific tasks or goals in mind. The success of a representation, graphical or not, depends on whether it makes the particular information the user needs accessible and on how well it copes with the different information requirements of the user's various tasks.

This paper sets out to identify some of the issues involved in using graphics--or text--in computer interfaces. It begins with issues of terminology.

Some initial definitions

Many of the terms in the literature associated with using "graphics" in computer interfaces are ambiguous and misused. "Visual" is a misleading term when applied to interfaces that use "graphics" because it implies both the pre-eminence of the visual mode and that these interfaces have a monopoly on the visual mode, i.e., that textual interfaces are somehow not visual. Information structures are concerned with the creation of a mental image, not a visual image, and all display-based interfaces and systems of expression, however textual or graphical, are perceived visually (among other modes). Internal "visualizations" (mental images) are certainly not derived solely from the use of the eyes. Therefore, we will not use "visual" in this paper to refer to interfaces or programming languages, although others have used "graphical" and "visual" interchangeably elsewhere in the literature.

Ambiguity is rife in the terminology of "graphical" interfaces, although a few patterns of basic usage recur.

In this paper, we will distinguish between "pictures" and "graphics." A "picture" is any visual image with an unlimited vocabulary. We will use "graphics" to refer to pictorial or diagrammatic representations used to express an information structure or to contribute to a notational system, such as "graphical programming languages" and "graphical interfaces." Hence, "graphic" is a subset of "picture."

The graphics-text continuum

Graphics and text are often treated as distinct opposites; however, they are not so easily separable. Each can be used to annotate or enhance the other (e.g., indented text, graphics with text labels), until they merge in a hybrid middle ground. Text representations can carry varying degrees of graphical enhancement, and graphical documents can likewise incorporate varying degrees of textual enhancement. As text and graphics converge in this middle ground, it becomes difficult to distinguish between text annotated with graphics and graphics annotated with text; what matters about such hybrids is that they take advantage of the qualities of both graphics and text, not the relative proportion of each. Once boxes, lines, and colour have been integrated into a thoughtfully designed text to diagram its structure, for example, it is difficult--indeed, irrelevant--to identify the document as primarily textual or graphical. In practice, "graphical" is commonly applied to systems that clearly incorporate both textual and non-textual elements. In this paper, we will distinguish between such hybrid "graphical" systems and "purely graphical" systems that are pictorial or diagrammatic and incorporate no textual elements.

The important difference between "textual" and "purely graphical" seems to be the trade-off between "descriptive" and "analog" representation (cf. Fish and Scrivener's (1990) distinction between description and depiction). Text, a descriptive representation, derives precision of expression from a small, fixed vocabulary, and it achieves range of expression by regular combination of vocabulary elements, whether at the word or phrase level. Similarly, readers learn rules of interpretation, so that they read plain text serially (although they may access it at random), and they can easily order and search text. Plain text does not rely on perceptual responses particular to a sensory mode; text is easily translated from the visual to other modes, as by reading aloud. Pure graphics, an analog representation, gains from the mapping of perceptual cues to information (e.g., the association of colour with temperature). It may draw on an unlimited vocabulary. Graphics may benefit from a "gestalt" response, an informative impression of the whole that provides insights into structure, but it lacks the precision of text, because much information is implicit in the analog mappings. The rules of interpretation are not as clearly defined as for text, and so graphics may suffer from ambiguity of interpretation. It is possible for text to be ambiguous, but we have many existing rules for resolving ambiguity in text. If we had a well-understood set of rules for interpreting graphics, then perhaps a higher degree of precision could be achieved; the writing systems of several East Asian languages are much more pictographic than the Roman, Greek, or Cyrillic alphabets, yet they can convey information with precision, often using less space than Western alphabets.

In some ways, many of the current "graphical" programming languages are more "textual" than "purely graphical"; they have replaced the familiar ASCII character set with an alternative fixed vocabulary of symbols and have not taken advantage of the analog mapping. And yet the conventional wisdom about the contributions of pure graphics emphasizes its analog qualities:

i. Graphics may provide a good overview of program structure (by inference from the use of diagrams to illustrate text).

It may be that, because people tend to use graphics to express higher-level or more abstract elements, graphical representations fit more overview information into the effective visual field. Myers (1990) writes that "... graphics tends to be a higher-level description of the desired actions (often de-emphasizing issues of syntax and providing a higher level of abstraction) and may therefore make the programming task easier even for professional programmers." (p. 100) Moreover, graphics take advantage of the perceptual cueing (texture, pop-out, foreground/background effects) suppressed by the density and learned associations of plain text. Hence, whereas text is limited in terms of effective viewing distance, graphics accommodates dramatic changes of scale (i.e., zooming and shrinking). Fish and Scrivener suggest: "...depictions [analog representations] have the important advantage that they facilitate the search for novel visual relationships not easy to represent descriptively or not easy to find because they are not explicit."(p. 118)

ii. Graphics might externalize the objects of thought and thus promote reasoning about the program and its relationship to the domain.

Pennington (1987) highlighted the importance of "cross-referencing" program structure and domain structure during comprehension tasks, and she suggested that a programmer's task goals influence mental representations. Kosslyn (1978) and Rohr (1986) suggest that relations among objects are visually/spatially grasped and that it is easier to derive a mental model of a system structure from a graphical representation than from a textual one. In general, if mental representations are visual, there will be less translation between external graphical and internal representation, at least in the early stages of comprehension and acquisition. Of course, not everything is grasped spatially/visually, just as not every entity has spatial components. Rohr concluded that highly abstract and more exclusively event-related functions ("existential functions") are ill-served by visual concepts; these may be grasped verbally.

iii. The local syntax of graphical languages might be easier to read.

There are several plausible reasons why this might be the case.

Given the trade-offs between text and graphics (as between descriptive and analog representations), it is unsurprising that each has its advocates. One might predict that, because precision matters in notation, text would be favoured in many notations. Indeed, Homo sapiens is often distinguished from other animals precisely by its ability to produce a "one-dimensional," syntactically regulated, general-purpose stream of language, rather than the multi-variate "iconic" cries and calls typical of other species. And yet, because people need help in grasping complex structures quickly, the "gestalt" properties and perceptual cueing of graphics make it appealing. The graphics advocacy manifest in the literature, although enthusiastic, seems not to recognize this trade-off. In many cases, graphical systems do not take advantage of the particular contribution that graphical representations might make to the job at hand (hence the appearance of so many homogeneous lines-and-boxes representations).

The "distinction" between pure graphics and text is best viewed as a continuum within which most information structures are represented as intermediate, or hybrid, forms. Pure graphics and text each have their places, and the determining factor in their use is the user's task.

The user as a deliberate reader

Text is essentially graphics with a very limited vocabulary. Each character is a pure graphic, with perceptual qualities, but readers have learned to see characters quite differently, as "non-graphical" text symbols. The plain text system has, through a process of abstraction and evolution, suppressed the perceptual qualities of the individual graphics that comprise it. Although familiar words and phrases can usually be perceived as a single unit, the learned associations with text disrupt automatic perceptual processing. In compensation, text--conventional natural-language printed text--has had some centuries in which to evolve into a well-tuned medium for the conveyance of technical information.

Studies of reading, as reviewed for instance by Bouwhuis (1988), clearly show that accomplished readers, reading for comprehension, are deliberate readers, goal-directed and hypothesis-driven, making great use of the typographic and semantic cues to be found in well-presented text. To support them in this activity, typographers have evolved ways--graphical enhancements--to make required information quickly accessible. Section headings, offset paragraphs, indented quotations, running titles, variations in type size and weight, rules and bullets, and the use of white space all help to guide the reader to requisite information. There is not much in common between a well-laid-out technical manual and a well-composed painting--and we propose that interfaces and programs serve functions more like those of technical manuals than of paintings.

The purpose of interfaces and programs is to present information clearly and unambiguously. Effective use requires purposeful perusal, not the unfettered, wandering eye of the casual art viewer. The aim is not poetic interpretation, but reliable interpretation. The user learns to approach the graphical representation as a deliberate reader and, while taking advantage of available perceptual cues, to apply acquired rules of interpretation. On this model, the task with which we are particularly concerned is accessing information: graphic cues must facilitate both navigation and understanding. In order for interfaces to make good use of graphics, their designers must understand the particular goals and tasks of the users.

Categories of use of graphics

The problem with graphical representations is that of any information structure: some information is made accessible at the expense of other information, while the many different tasks the users undertake require different information to be accessible. Graphical representations are no exception. The notion that there exists some kind of "cognitively natural" programming language, which is better for all possible purposes, has been debunked not once but many times. For example, the structured programming school asserted that hierarchically designed programs were always the easiest to develop and to comprehend. This was refuted by a number of papers from Sime, Green, and others, ably summarised by Curtis (1989).

Surprisingly few proponents of universal superiority for graphical representations have identified what "kind" of graphics are suitable for various tasks, even though there are some empirically validated metrics in the literature. Cleveland and McGill (1984) showed the range of accuracy in human perception when using seven different pure graphical techniques for displaying quantitative information. Mackinlay (1986) extended this to compare thirteen different techniques in perceptual accuracy for quantitative, ordinal, and nominal data, but the few heuristics that exist only work for numeric data.
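Such rankings lend themselves to simple selection heuristics. The sketch below is illustrative only: the orderings are a simplified paraphrase of the perceptual-accuracy rankings reported by Cleveland and McGill (1984) and Mackinlay (1986), not a faithful reproduction, and the function names are this sketch's own.

```python
# Hypothetical sketch of a Mackinlay-style selection heuristic. The orderings
# below paraphrase published perceptual-accuracy rankings; treat them as an
# assumption for illustration, not as the rankings themselves.
RANKINGS = {
    "quantitative": ["position", "length", "angle", "slope", "area",
                     "volume", "colour saturation"],
    "ordinal":      ["position", "colour saturation", "texture", "length"],
    "nominal":      ["position", "colour hue", "texture", "shape"],
}

def best_encoding(data_type, available):
    """Return the highest-ranked encoding the display can actually offer."""
    for encoding in RANKINGS[data_type]:
        if encoding in available:
            return encoding
    raise ValueError("no ranked encoding available for %s data" % data_type)

# Length outranks area and colour saturation for quantitative data:
print(best_encoding("quantitative", {"area", "length", "colour saturation"}))
```

Even so simple a heuristic makes the limitation plain: it chooses an encoding for a single data type, and says nothing about the mixed, task-dependent information structures of a notation.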

Certain pure graphical techniques have obvious uses for finding deviations in largely homogeneous masses of data (e.g., in meteorology or radiography), when the purely graphical representation of numerical data induces "pop-out" and foreground/background effects where deviations occur. Other analog cueing, e.g., the use of colour to highlight particular strands of data, can be used in addition to assist data analysis.

Since earlier work has shown that experts use beacons in tasks like program understanding (e.g., (Wiedenbeck, 1986)), enhanced text, or text annotated with graphics, could take advantage of the pop-out effect by associating perceptual cues with beacons. This is a case where enhanced text has an advantage over a pure graphical representation or annotated graphics: the perceptual cueing is used to augment the increased precision in the textual presentation. The demands of a notation (such as clarity and precision) are different from the demands of, say, presenting data. The uses of notations are complex and heterogeneous. Even when there is a single overall goal, such as program understanding, sub-tasks could include recognition/scanning, reading, comprehension, and searching, sometimes simultaneously. Thus the decision as to which graphical techniques to use, if any, is non-trivial and requires a detailed task analysis.
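The combination of beacons and perceptual cueing can be sketched in a few lines. The example below is hypothetical: the beacon list is invented for illustration, and ANSI bold/colour escapes stand in for whatever perceptual cue an environment provides.

```python
import re

# Enhanced text, sketched: associate a perceptual cue (ANSI bold red) with
# likely beacon terms so that they "pop out" of otherwise uniform program
# text. The beacon vocabulary here is a hypothetical example.
BEACONS = ["swap", "pivot", "partition"]
BOLD_RED = "\033[1;31m%s\033[0m"

def highlight_beacons(source):
    """Wrap each whole-word beacon occurrence in an ANSI escape sequence."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, BEACONS)) + r")\b")
    return pattern.sub(lambda m: BOLD_RED % m.group(1), source)

print(highlight_beacons("if a[i] > a[j]: swap(a, i, j)"))
```

The point of the sketch is that the precision of the text is untouched; the perceptual cue is purely additive.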

A common technique for introducing "graphics" at the interface is the use of icons. Although icons may begin as purely graphical images, their use usually resolves to a fixed set of symbols, just as a textual representation relies on an alphabet. The difference is that icons are not designed to be parsed serially by the reader: a novice is expected to study the icon and determine its meaning from its appearance, while the experienced reader is expected to know the meaning from experience and should require only a brief glance to parse it. Icons have the advantage of built-in mnemonics, i.e., they can make use of previously-learned associations between objects and functions, so that even if experienced readers forget the meaning, their glance at the icon has a chance of restoring their memory. This technique works well when the icon represents a concrete object, but Rohr (1986) showed that this breaks down as icons represent abstractions. The mnemonic effects are increased by better icon design, including the use of animation, as shown by Baecker et al. (1990; 1991). For small command sets the advantages of icons over text or text hybrids for command entry are clear, but there is a trade-off in scaling up. A large icon vocabulary will occupy increasingly large amounts of screen space, while textual interfaces require little space regardless of vocabulary size. There is also a swamping effect: when many icons are used, discriminability decreases (Green, Petre and Bellamy, 1991). Real estate problems are also evident when icons are used to display information, such as the visualization of a computer program.
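The real-estate argument is easy to make concrete. The figures in the sketch below are assumptions chosen only for illustration (a 32x32-pixel icon cell, an 8x16-pixel character cell, eight-character command names):

```python
# Back-of-the-envelope comparison of screen area under assumed cell sizes.
ICON_PX = 32 * 32    # assumed pixels per icon cell
CHAR_PX = 8 * 16     # assumed pixels per character cell
AVG_NAME = 8         # assumed average command-name length, in characters

def icon_palette_area(vocab_size):
    """An icon palette must display every icon to make it selectable."""
    return vocab_size * ICON_PX

def command_line_area(vocab_size):
    """A typed interface shows only the current command, whatever the vocabulary."""
    return AVG_NAME * CHAR_PX

for n in (10, 100, 1000):
    print(n, icon_palette_area(n), command_line_area(n))
# icon area grows linearly with vocabulary; the command line stays constant
```

The constants are arbitrary, but the shape of the result is not: palette area grows linearly with vocabulary size, while typed command entry is independent of it.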

Hybrid textual-graphical systems ranging from enhanced text to limited graphics have been used effectively in the visualization of computer software at various levels of abstraction, from algorithm to coded program. Limited graphical displays of running algorithms have been used effectively in the professional design and improvement of algorithms (Vitter, 1985) as well as in teaching (Baecker and Sherman, 1981; Brown, 1988), and there is some indication that enhanced text may be of use to professional programmers (Baecker and Marcus, 1990). Gaver et al. (1991) showed how perceptual cueing in another mode, audio, can also lead to interface improvements, while DiGiano and Baecker (1992) have demonstrated this for software visualization in their work on program auralization. For a complete discussion of the different uses of graphics in software visualization see Price, Small, and Baecker (1992). Thus far there has not been a proven use of graphics for "programming in the large," which is the area where proponents of graphics have promised the most.

Supplying extra information: analog mappings and secondary notation

A graphical representation can provide an "analog" mapping between the representation and the domain, as when mapping shape to function. Raymond (1991) argues that the possibility of analog mapping is the only specifically visual contribution of graphical programming languages, and that other characteristics of contemporary graphical programming languages can be realized just as well in textual languages. Good graphics usually means linking perceptual cues to important information. The strength of graphical representations--almost universally--is that they complement perceptually something also expressed symbolically. For instance, when functionally-related components are placed close together, which is typical practice in digital electronics design, an analog mapping is being used to supply extra information over and above the information explicitly represented by the components and their connections. Expert designers regard this secondary notation as being crucial to comprehensibility (Petre and Green, in press; Petre and Green, 1990).

But many of the determinants of "good" graphics are not part of the formal system. The mere presence of graphical features does not guarantee clarity in a representation. What is required in addition is good use of the secondary notation that is not formally part of the representation, of elements like adjacency, clustering, white space, labelling, and so on. Both secondary notation and good design are subject to personal style and individual skill. Poor use of secondary notation is one of the things that typically distinguishes a novice diagram from an expert one--a difference visible to experts (Petre and Green, in press). Moreover, in the available graphical representations known to us, the possibility for using layout in a controlled and useful way is dominated by the need to keep the visual picture reasonably clean. Placing components to minimise crossings of connection lines takes priority over placing them adjacent to functionally-related components.

Even a well-evolved notation is vulnerable to weaknesses in individual expressive skill. On the other hand, skill may help compensate for weakness in a notation. Design conventions attempt to reinforce meaningful use of secondary notation, but such conventions are rarely formalized or codified. We suggest that this is because, although it is possible to enumerate individual "rules of thumb" (e.g., that functionally-related elements should be clustered together), these will often conflict in practice, and the heuristics that resolve the conflicts within a particular context are buried deep in experience and expertise and are not easily externalized.
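The conflict between such rules of thumb can be made concrete with a toy model. The sketch below is entirely hypothetical: components sit on a one-dimensional row, with explicit lists of wires and functionally-related pairs; one score rewards adjacency of related components (secondary notation), the other penalizes crossing connection lines.

```python
from itertools import combinations

def adjacency_score(order, related):
    """Count functionally-related pairs placed next to each other."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(1 for a, b in related if abs(pos[a] - pos[b]) == 1)

def crossings(order, wires):
    """Count wire pairs whose endpoints interleave (and so must cross)."""
    pos = {c: i for i, c in enumerate(order)}
    spans = [tuple(sorted((pos[a], pos[b]))) for a, b in wires]
    return sum(1 for (a1, b1), (a2, b2) in combinations(spans, 2)
               if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1)

related = [("A", "D")]                        # pair that "belongs together"
wires = [("A", "B"), ("B", "C"), ("C", "D")]  # a chain of connections
for order in (("A", "B", "C", "D"), ("A", "D", "B", "C")):
    print(order, crossings(order, wires), adjacency_score(order, related))
```

In this toy case the crossing-free order separates the related pair, while placing the pair adjacently introduces a crossing: the heuristics pull in opposite directions, and resolving the conflict is exactly the kind of judgment that is buried in expertise.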

Training alters perception; viewing strategies are learned

Graphical representations, especially those used in notation, may require more training from both the originator and the reader to achieve the most effective communication. Arnheim (1969) argues that the accomplished art viewer also learns rules of interpretation, and that the effectiveness of art relies on the association of perceptual elements with semantic importance. Even so, the importance of precision is much lower in art criticism than in functional computer interfaces.

Novice users of graphics tend to lack reading and search strategies; their inspection strategies are more haphazard and take less account of the particular nature of the task or the representation. Far from guaranteeing clarity and superior performance, graphical representations may be more difficult to access. Unlike text, which is always amenable to a straight, serial reading, graphics require the reader to identify some appropriate inspection strategy, and there are few cues to navigation. In a recent experiment by Green, Petre and Bellamy (1991), which investigated reading comprehension using various graphical and textual representations, reading of graphics was significantly slower than reading of text. When comparable graphical and textual representations were presented side-by-side, experienced readers always used the text to guide their reading of the graphics. Experts displayed more effective strategies than novices, who suffered from mis-readings and confusions. Strategy differences were strongly related to prior experience; broader experience led to more flexible performance. Empirical studies of this nature are quite difficult to conduct because all of the subjects have been trained from an early age to use text and will have an automatic prejudice for the representation with which they are most familiar.

The correlation between experience and reading strategy is not exclusive to graphics. In a previous experiment on the reading of programs, Gilmore and Green (1988) had shown that effects for Pascal programmers did not apply to Basic programmers, interpreting the difference as a difference between notations. However, Davies (1989) showed that the results were caused by differential training backgrounds: when Basic programmers were taught the precepts of structured programming, the differences disappeared. The importance of training and experience is clear, although it seems to be underestimated in the case of graphics.

Differences between novice and expert use of graphical representations are readily observable. For example, novices typically have difficulty in determining what is important or relevant--apparently in contradiction to the common assumption that graphics makes such information obviously accessible. In the experiment by Green, Petre and Bellamy, the less experienced subjects were unable to exploit the secondary notation of the graphical representations which would have improved their reading performance, and they were more prone to misconceptions about what was important or relevant. This finding echoes the study by Davies. Apparently, for non-experts, a visible symbol is interpreted as a relevant symbol. It appears that "salience" is influenced by experience, and that what the readers see is largely a matter of what they have learned to look for.

Some issues and problems in using graphics

The preceding sections describe a number of important pitfalls in the conventional, unqualified assumptions about the use of pure graphics:

Navigation: The problems of navigating through graphical representations have been discussed, including: the need to find an appropriate entry point, the need to keep track of non-linear inspections, the need to find appropriate inspection strategies, and the problem of how much is visible at a given time. The computer environment can provide assistance, for example by highlighting significant portions of the display or automatically managing the windowing environment. However, in many systems (e.g., the Prograph programming environment), the problem is exacerbated by the dispersal of material into numerous small windows, so that the user can see only fragments at a time.

Scaling up: The scaling-up problem in graphics is profound. Graphics can make individual components more discriminable and seems to support recognition of simple components (e.g., Cunniff and Taylor, 1987; Curtis et al., 1989), but this advantage diminishes if the structure is dense or the components are repeated. In the Green, Petre and Bellamy (1991) experiment, which used the graphical programming language LabVIEW as its vehicle, all the subjects agreed on the advantages: they claimed that the well-designed icons for operators made them easy to recognize, but they added the qualification that recognizability is sharply diminished in large, complex programs or formulae employing many icons.

Searching and sorting: Automatic search and sorting are not supported by any non-textual system; all of the current graphical systems rely on the text component for these activities. There are tremendous technical problems with ordering a graphical symbol set. Designers will be forced to define "like" and "different" in order to discriminate between similar images that vary only subtly (e.g. small size differences). Similarly, rules of grouping must also be established in order to cope with composite images.
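To see why defining "like" is hard, consider even a naive attempt. The sketch below is hypothetical: tiny 0/1 bitmaps stand in for icons, and pixel-wise agreement stands in for similarity; it judges two icons that a human reads as the same shape at different sizes to be substantially "different."

```python
def hamming_similarity(icon_a, icon_b):
    """Fraction of matching pixels between two same-sized 0/1 bitmaps."""
    flat_a = [p for row in icon_a for p in row]
    flat_b = [p for row in icon_b for p in row]
    matches = sum(1 for a, b in zip(flat_a, flat_b) if a == b)
    return matches / len(flat_a)

# Two icons a human would call "the same shape, different sizes":
square_small = ((1, 1, 0),
                (1, 1, 0),
                (0, 0, 0))
square_big   = ((1, 1, 1),
                (1, 1, 1),
                (1, 1, 1))
print(hamming_similarity(square_small, square_big))   # 4/9: mostly "different"
```

Pixel counting is of course a straw man, but any sharper definition (shape matching, structural comparison) must still be designed and agreed before a graphical vocabulary can be ordered, whereas a textual alphabet comes with its ordering built in.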

Conclusion: graphics as a complement of text, and vice versa

Arguments that set pure graphics in opposition to plain text are missing the point: that each can be used to complement and enhance the other. The most successful graphical systems (like electronics schematics) are hybrids that make use of the particular advantages of each: the perceptual cueing afforded by pure graphics (and the recognition advantages of selective use of icons), and the precision afforded by text description. And, just as each representation has its advantages, each has its limitations: neither is a panacea.

Future research must clarify further the propensities of each type of representation and must identify the users' various information requirements in order to match representations to tasks. A good model of how people use each representation is needed in order to test the effectiveness of each in various situations. Given that different tasks are likely to favour different representations, automatic translation between representations will be desirable.

Acknowledgements

The first author thanks T.R.G. Green for the research and writing collaboration which contributed greatly to the ideas expressed here. The second author acknowledges support from UK MRC/SERC/ESRC Project 90/CS66 (Algorithm Visualization) and the Natural Sciences and Engineering Research Council of Canada.

References

Arnheim, R. (1969) Visual Thinking. University of California Press.

Baecker, Ronald M., and Aaron Marcus. (1990) Human Factors and Typography for More Readable Programs. Reading, MA: Addison-Wesley.

Baecker, Ronald M., and David Sherman. (1981) Sorting Out Sorting. Los Altos, CA: Morgan Kaufmann. Narrated colour videotape, 30 minutes, presented at ACM SIGGRAPH '81.

Baecker, Ronald M., and Ian S. Small. (1990) "Animation at the Interface." The Art of Human-Computer Interface Design. Ed. Brenda Laurel. Reading, MA: Addison-Wesley. pp. 251-267.

Baecker, Ronald M., Ian S. Small, and Richard Mander. (1991) "Bringing Icons to Life." In Proceedings of CHI'91, 1-6, New Orleans, LA, 27 April-2 May, ACM Press/Addison-Wesley: New York.

Bouwhuis, D.G. (1988) "Reading as goal-driven behaviour." Working Models of Human Perception. Ed. B.A.G. Elsendoorn and H. Bouma. London: Academic Press. pp. 341-362.

Brown, Marc H. (1988) "Exploring Algorithms Using Balsa II." IEEE Computer 21(5):14-36.

Cleveland, W.S., and R. McGill. (1984) "Graphical Perception: Theory, experimentation and application to the development of graphical methods." Journal of the American Statistical Association 79:531-554.

Cunniff, N., and R.P. Taylor. (1987) "Graphical versus textual representation: an empirical study of novices' program comprehension." Empirical Studies of Programmers: Second Workshop. Ed. G.M. Olson, S. Sheppard and E. Soloway. Norwood, NJ: Ablex.

Curtis, Bill. (1989) "Five Paradigms in the Psychology of Programming." Handbook of Human-Computer Interaction. Ed. M. Helander. Amsterdam: Elsevier (North-Holland).

Curtis, Bill, Sylvia B. Sheppard, Elizabeth Kruesi-Bailey, John Bailey, and Deborah A. Boehm-Davis. (1989) "Experimental Evaluation of Software Documentation Formats." The Journal of Systems and Software 9(2):167-207.

Davies, Simon P. (1989) "Skill levels and strategic differences in plan comprehension and implementation in programming." People and Computers V. Ed. A. Sutcliffe and L. Macaulay. Cambridge: Cambridge University Press.

DiGiano, Christopher J., and Ronald M. Baecker. (1992) "Program Auralization: Sound Enhancements to the Programming Environment." In Proceedings of Graphics Interface'92, 44-53, Vancouver, Canada, 11-15 May, Morgan Kaufmann: Palo Alto, CA.

Fish, Jonathan, and Stephen Scrivener. (1990) "Amplifying the Mind's Eye: Sketching and Visual Cognition." Leonardo 23(1):117-126.

Gaver, W., T. O'Shea, and R. Smith. (1991) "Effective Sounds in Complex Systems: The ARKola Simulation." In Proceedings of CHI'91, ACM Press: New York.

Gilmore, D.J., and T.R.G. Green. (1988) "Programming Plans and Programming Expertise." Quarterly Journal of Experimental Psychology 40A:423-442.

Green, Thomas R.G., Marian Petre, and R.K.E. Bellamy. (1991) "Comprehensibility of visual and textual programs: a test of Superlativism against the 'match-mismatch' conjecture." Empirical Studies of Programmers: Fourth Workshop. Norwood, NJ: Ablex.

Kosslyn, S.M. (1978) "Imagery and Internal Representation." Cognition and Categorization. Ed. E. Rosch and B.B. Lloyd. Lawrence Erlbaum. pp. 227-286.

Mackinlay, Jock. (1986) "Automating the Design of Graphical Presentations of Relational Information." ACM Transactions on Graphics 5(2):110-141.

Myers, Brad A. (1990) "Taxonomies of Visual Programming and Program Visualization." Journal of Visual Languages and Computing 1(1):97-123.

Pennington, N. (1987) "Stimulus Structures and Mental Representations in Expert Comprehension of Computer Programs." Cognitive Psychology 19:295-341.

Petre, Marian, and Thomas R.G. Green. (in press) "Requirements of graphical notations for professional users: electronics CAD systems as a case study." Le Travail Humain.

Petre, M., and T.R.G. Green. (1990) "Where to draw the line with text: some claims by logic designers about graphics in notation." In Proceedings of INTERACT'90 Conference on Computer-Human Interaction, Cambridge, England.

Price, Blaine A., Ian S. Small, and Ronald M. Baecker. (1992) "A Taxonomy of Software Visualization." In Proceedings of The 25th Hawaii International Conference on System Sciences, 597-606, Kauai, Hawaii, January 7-10, IEEE Computer Society Press: New York.

Raymond, D. (1991) "Characterizing Visual Languages." In Proceedings of The 1991 IEEE Workshop on Visual Languages, Kobe, Japan, IEEE Computer Society Press: New York.

Rohr, G. (1986) "Using Visual Concepts." Visual Languages. Ed. S.-K. Chang, T. Ichikawa and P.A. Ligomenides. Plenum Press.

Shu, Nan C. (1988) Visual Programming. New York: Van Nostrand Reinhold.

Vitter, J.L. (1985) "Design and Analysis of Dynamic Huffman Coding." In Proceedings of The 26th Annual Symposium on Foundations of Computer Science, 293-302, October, IEEE Computer Society Press: New York.

Wiedenbeck, Susan. (1986) "Cognitive Processes in Program Comprehension." Empirical Studies of Programmers. Ed. Elliot Soloway and Sitharama Iyengar. Human/Computer Interaction. Norwood, NJ: Ablex. pp. 48-57.