This hare was started by reading a suggestion in Sacks’ book at reference 1 about how writing systems, invented far too recently for (brain) evolution to have helped, may have leveraged existing visual machinery in eye and brain, in the sense that the various characters make use of distinctions which our visual machinery had already been programmed to deal with. That is to say, alphabets are constrained by the pre-existing machinery. This led to references 2 and 3, both around twenty years old, and then to the digression at reference 9.
The first paper is about the number of strokes needed to make a character. It makes two relatively modest claims, on the basis of a sample of 115 writing systems, summarised in the snap above: first, that the average number of strokes is about three; second, that there is a fair bit of redundancy, in the sense that most of the strokes have to be damaged or missing before the character becomes illegible. Furthermore, the number of characters in the writing system, which varies widely but is generally less than fifty, does not seem to matter as far as this goes. The exception is numerals, where the average is around two.
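By way of illustration, a minimal sketch of the kind of summary involved. The stroke counts below are made up for the purpose; the paper’s actual sample is 115 real writing systems:

```python
from statistics import mean

# Made-up per-character stroke counts for some invented systems;
# nothing here is taken from the paper's data.
stroke_counts = {
    "letters_a": [2, 3, 3, 4, 2, 3, 4, 3],
    "letters_b": [3, 2, 4, 3, 3],
    "numerals":  [1, 2, 2, 3, 2],
}

# Average strokes per character: around three for letters, two for numerals,
# mirroring the shape of the paper's claim.
averages = {name: mean(counts) for name, counts in stroke_counts.items()}
print(averages)
```

The interesting empirical point is that real systems, whatever their size, seem to cluster around that figure of three.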
The authors go on to speculate that these facts, if indeed they are facts, may be rooted in pre-existing facts about human vision and its tuning for the recognition of objects in a cluttered and noisy visual field.
To this end, the authors use a model of writing systems which is summarised in the snap above. Each writing system has a character set and a stroke set, with each character made up of a small number of strokes. They appear to have no trouble establishing the stroke set for any particular writing system, so we suppose that the stroke set is a reasonable construction, that characters are not just freestyled, any old how. They are also interested in the pairing of strokes within characters and draw network diagrams based on those pairings, those connections. They do not, however, provide much in the way of statistical analysis of any of this.
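As I read it, the model amounts to something like the following sketch, with invented stroke names standing in for a real stroke set:

```python
from collections import Counter
from itertools import combinations

# Toy version of the model: a character is a small set of strokes drawn
# from the system's stroke set. The stroke names here are my own invention.
characters = {
    "A": {"left_diagonal", "right_diagonal", "crossbar"},
    "H": {"left_vertical", "right_vertical", "crossbar"},
    "O": {"closed_arc"},
    "D": {"left_vertical", "closed_arc"},
}

# Edge weights for the pairing network: how often two strokes
# appear together within a single character.
pairings = Counter()
for strokes in characters.values():
    for pair in combinations(sorted(strokes), 2):
        pairings[pair] += 1
```

The `pairings` counter is then the sort of thing from which the paper’s network diagrams could be drawn, with strokes as nodes and co-occurrence counts as edge weights.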
Some simple examples of the decomposition of characters in the Calibri font, in this case numerals, into strokes. Other fonts, such as that used above the main panel – Castellar – are apt to be more elaborate, with greater variation of stroke shape and with the addition of serifs and other flourishes.
There is also the matter of Microsoft’s Character Map – a tool which offers a window onto the characters available on a computer – particularly the special characters not to be found on a regular keyboard – organised by font. The impression given there is that for these purposes Arabic, Chinese, Japanese and Korean are all stroke orientated. This despite a tradition of elaborate calligraphy, which I imagine is mostly brush rather than pen or pencil. For the underlying Unicode, see reference 10.
They also link the various parameters of this system in the model snapped above, lightly edited from the text of the paper. Some of what follows there is about estimating these parameters for various sets of data.
We are told at the top of Table 1 that, for these purposes, lower case letters were used. There is no further discussion of the matter and, confusingly, the examples in Figures 2, 3 and 4 are all upper case. I learn from references 4 and 5 that small letters evolved in European monasteries, long before the invention of printing, from which last the term ‘upper case’ is derived. But maybe this is all unimportant in the present context: what counts is that we have a recognised writing system which is used by enough people to inform our understanding of how strokes are assembled into characters.
I wonder about the absence of consideration of things like serifs. In the past typographers put a lot of effort into this sort of thing, considered important from the points of view of both legibility and attractiveness: just look at the number of fonts available in Microsoft Word. Maybe legibility was more important in the past, when there was more reading with poor eyesight and in poor light than is the case now, at least in the richer parts of the world. And thinking with my fingers, I wonder also whether these two things, legibility and attractiveness, are linked.
Are we sure that reduction to strokes is the best way forward? From there I associate to the ease with which mood can be expressed in the movements of stick men. Such men are clearly more capable than might at first appear, and there are a lot of them on YouTube.
In the discussion there are some suggestions as to the neural underpinnings of these findings, some of them linked to work on computer vision, for which see references 7 and 8, the latter being freely available, the former not.
Other matters
What difference does it make that the written word is nearly always a two dimensional business, whereas vision has evolved to deal with a three dimensional world?
On the way to reference 9, I digressed to reference 8, from which I offer a modest taster.
Reference 8 is all about labelling the line drawings one can generate from a digital image, using the rule which is summarised at reference 9. We have two sorts of interior label, plus (roughly convex) and minus (roughly concave), and two sorts of boundary label, smooth (DA or double arrow) and sharp (SA or single arrow), further qualified by left or right, according to where the object is relative to the arrow.
The blue spot in the figure above is ambiguous: there is not enough information in the drawing to know whether it is a plus or a minus. One needs to bring (top down) knowledge to bear.
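The label vocabulary just described can be written out explicitly; the names below are my own, not the paper’s:

```python
from enum import Enum

# The two interior labels plus the boundary labels, the latter qualified
# by which side of the line the object lies on.
class LineLabel(Enum):
    PLUS = "interior, roughly convex"
    MINUS = "interior, roughly concave"
    SMOOTH_LEFT = "smooth boundary (DA), object to the left"
    SMOOTH_RIGHT = "smooth boundary (DA), object to the right"
    SHARP_LEFT = "sharp boundary (SA), object to the left"
    SHARP_RIGHT = "sharp boundary (SA), object to the right"

# A labelling of a drawing assigns one label to each line; a catalogue
# of legal junctions then says which combinations may meet at a vertex.
labelling = {"line_1": LineLabel.PLUS, "line_2": LineLabel.SHARP_LEFT}
```
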
The paper goes on to exhibit a catalogue of junctions, very much the sort of thing which appears in reference 3, which I have yet to tackle properly.
Conclusions
An interesting introduction for me to the writing systems of the world – including a couple of global constraints on character complexity (in terms of strokes) and character redundancy (how much damage a character can take).
Which is consistent with Sacks’ notion that writing systems leverage pre-existing neural machinery for dealing with such stuff, but does not amount to corroboration. Perhaps that is to be found in the trickier reference 3 which followed.
References
Reference 1: The Mind's Eye – Oliver Sacks – 2010. Book_213
Reference 2: Character complexity and redundancy in writing systems over human history – Mark Changizi, Shinsuke Shimojo – 2005.
Reference 3: The Structures of Letters and Symbols throughout Human History Are Selected to Match Those Found in Objects in Natural Scenes – Mark A. Changizi, Qiong Zhang, Hao Ye, Shinsuke Shimojo – 2006.
Reference 4: https://en.wikipedia.org/wiki/Letter_case.
Reference 5: https://en.wikipedia.org/wiki/History_of_the_Latin_script.
Reference 6: https://www.omniglot.com/. Some useful background is to be found here. The work of one Simon Ager. Lots of advertisements.
Reference 7: A generalized line and junction labelling scheme with applications to scene analysis – I. Chakravarty – 1979. Paywalled at IEEE.
Reference 8: Interpreting Line Drawings of Curved Objects – Jitendra Malik – 1987. A proxy for reference 7.
Reference 9: https://psmv5.blogspot.com/2024/08/a-toric-problem.html.
Reference 10: https://en.wikipedia.org/wiki/Unicode.