Legibility And Reader-Friendly Font
by dbking
<b>The Problem</b>
As a computer programmer, I have to spend a lot of time reading technical documentation. Most of it comes from the Internet, so I read it from my computer screen. I used to adjust the text size and read comfortably most of the time. Unfortunately, the most comfortable text size is different for different groups of letters (“ftlji” vs. “vkxsz”), so now and then I stumble across phrases like:
“Illiterate jinn illicitly tilts lifts, fills filters and flitters in Fiji!”
Not only does it slow down my reading pace, but it also makes the text uncomfortable to read. At first I attributed this problem to the fact that the majority of web pages employ sans-serif fonts, which are considered more legible on computer screens. So whenever I came across a large text, I would copy it into Notepad and read it in one of my favourite serifed fonts.
But soon I discovered that texts in Cyrillic script are easier to read than texts in Latin, regardless of the font settings. This becomes especially obvious when I read mixed texts on the same page. Most of the minuscules (lowercase letters) in Cyrillic follow the shapes of the respective capitals, vary less in width and height than their Latin counterparts, and have more distinct shapes. Any text is composed mostly of minuscules, so Cyrillic appears to be more legible.
<b>History</b>
Originally texts were written entirely in capital letters, spaced between well-defined upper and lower bounds. When written quickly with a pen, letters tended to take on rounder and simpler forms, and it is from these forms that the lowercase letters developed. Thus Latin script was optimized for writing at the expense of legibility. But it is not the only script that suffered at the hands of the writers. Nearly all modern writing systems are thought to have descended, directly or indirectly, from a single source: the Phoenician script. It is the ancestor of nearly every alphabet in use today, including Arabic, Greek, Latin and many others. The Hebrew alphabet remains closest to its predecessor, as only the forms of the letters have been modified, while classical Mongolian script hardly bears any resemblance. The success of Phoenician was due in part to its phonetic nature: Phoenician was the first widely used script in which one sound was represented by one symbol. This simple system contrasted with the other scripts in use at the time, such as cuneiform and Egyptian hieroglyphs, which employed many complex characters and were difficult to learn. This one-to-one correspondence also made it possible for Phoenician to be employed in multiple languages. Its evolution took different directions, and many different alphabets emerged, all influenced by the writers and optimized for writing.
But Phoenician was not only the first proper alphabet (or rather abjad, as it contained only consonants) – it was also the only one optimized for readability! It used a system of acrophony to name the letters: their names are essentially the word values of the original pictogram for each letter. For example, the letter Aleph was derived from the Semitic word for “ox”, and its shape derived from a hieroglyph depicting an ox’s head, complete with horns and ears. Or Ayin (representing a sound that has no equivalent in European languages), derived from the word “ayn” (“eye”) – and it looked like an eye.
However, Phoenician did not fit the Indo-European languages very well: first, because of the difference in phonetics; second, because the Phoenician alphabet lacked vowels. In Semitic languages this did not matter so much, but in European languages vowels play an important role, and without them these languages would be unintelligible (compare: “fill”, “full”, “fall”).
The Greek alphabet originated around the 9th century B.C. as a radical modification of the Phoenician. It was the first alphabet in the narrow sense, that is, a writing system using a separate symbol for each vowel and each consonant. The vowels were made out of Semitic consonants that were superfluous in Greek; e.g., Aleph was transformed into Alpha, and hence the Latin ‘A’ and Cyrillic ‘А’. From then on, consonants would always be accompanied by vowel signs to form a pronounceable unit.
But still, people used to perceive the letters as hieroglyphs (Greek for “sacred signs”). Whenever people needed an additional letter, they would ask: “Where could we borrow one?” They could not even conceive of designing a new glyph (a sacrilege!), so they preferred to mutilate an existing one instead. Later this attitude contributed significantly to the development of diacritics.
The next revolution in writing systems took place more than a thousand years later, when for the first time a proper alphabet was created from scratch for a particular language – not inherited or borrowed from somewhere. In the 9th century A.D. St. Cyril and St. Methodius created the brand-new Glagolitic alphabet for the Slavic languages. There was a one-to-one correspondence between phonemes and graphemes, so the Glagolitic alphabet fitted the Slavic languages perfectly. Its descendant, Cyrillic, adapted to changes in the spoken language and developed regional variations to fit the features of national languages. Today, dozens of languages in Europe and Asia are written in Cyrillic.
And now, more than a millennium later, isn’t it time for another revolution?
<b>The Task</b>
Don’t get scared at this point! I don’t suggest developing a brand-new alphabet to replace Latin. Indeed, Latin script does not fit the European languages very well, unlike most other local alphabets. Some Eastern European languages employ diacritics so extensively that they look like rather pathetic attempts to adapt Latin script. Nations that don’t use diacritics are compelled to use digraphs and trigraphs – hardly a better option (e.g. German ‘tsch’). But all that is beyond the scope of this article, and besides, it’s not realistic.
And yet a new revolution is half a millennium overdue. Once Gutenberg invented the printing press, people no longer depended on letter-forms optimized for writing and could have developed legible ones. Unfortunately, the readers of Western Europe did not rise up against the writers’ tyranny. Only the Russians took advantage of the favourable circumstances and undertook a modernization of their alphabet: diacritics were abolished and letter-forms improved. Since then, Cyrillic uppercase and lowercase letter-forms have not been as differentiated as in Latin typography; in fact, Cyrillic lowercase letters were essentially small capitals (with very few exceptions). Other nations that used Cyrillic followed suit. The letter-forms have not changed much since then, but even 300 years later Cyrillic is still more legible than Latin.
And now, when changing a font on a computer screen takes a single mouse-click, I wonder: why were so many shorthand writing systems developed, but not a single reader-friendly one? The different fonts we have now are essentially attempts to improve the aesthetic perception of the same old letter-forms, with only minor improvements in legibility. Let’s be clear: aesthetic perception is very important, but when I read a large piece of technical documentation, I prefer legibility.
So I challenge scientists and designers to develop a new type of Latin font, specially and exclusively optimized for fast and comfortable reading. I emphasize that it should be the same old Latin alphabet – the graphics should be improved, but the letters must remain easily recognizable. The transition should be absolutely seamless: I just want to switch fonts now and then on my computer screen, and it should take no effort at all.
<b>Analysis</b>
<i>Once, when I was in North Africa, a little girl of about seven approached me, trying to sell something. I offered her a game of backgammon instead – I would pay her for each win. Apparently she was very smart: she was able to sustain a conversation in a couple of foreign languages, and she beat me four times in a row! But I was surprised to find that she could not read. She explained that her parents were poor and could not afford a school. I said: “But surely you can learn it, as you have learned the foreign languages without any teacher!” She laughed as though she had called my bluff, and answered: “Everybody knows that you must study in school in order to learn to read and write.”
Then I decided to learn the Arabic alphabet myself. When I had visited Israel, I taught myself the Hebrew alphabet in a couple of days, and I expected that Arabic would not take much longer, as the two are closely related.
To my surprise, it took me three weeks!</i>
In the case of Arabic script, its complexity has a clear impact on the literacy rate, and it was the main reason why the Turks adopted a new alphabet in place of Arabic eight decades ago. But there are better examples. Paradoxically, the Russians eventually benefited from the problems of their archaic alphabet, as they improved their script when the right time came along with the printing press. And now, at the dawn of the new computer era, it is the right time for us to do the same, or even better, since we can now employ scientific methods of analysis. I believe this is inevitable in the long run, so the sooner it is done, the better. Hoping that a better understanding of the problems will help to raise awareness, I will briefly outline the basic improvements I expect to see.
First, the dimensions (width and height) of individual glyphs should not vary so much. Currently, glyph identification involves an additional step of assessing the glyph’s position and dimensions. Letters in words like “killing” or “by” look disproportionate; their upper and lower bounds are unclear, and therefore they are more difficult to read.
Second, the integrity of glyphs should be preserved. Identifying a glyph comprised of two separate elements, such as ‘i’ or ‘j’, requires an additional step – assembling the elements into one glyph. But why should I do somebody else’s job? Every glyph should be one piece!
Third, a sequence of two glyphs should not look like a third glyph; there should be a clear difference, as adjacent letters tend to merge (e.g. “bum” vs. “burn”). It is sometimes difficult to distinguish ‘m’ from ‘rn’, ‘d’ from ‘cl’, etc., and people with sight problems are hit hardest by this flaw (a small detection sketch illustrating such confusable pairs follows this list).
The next point is closely related to the previous one: ligatures should be used only in decorative fonts. Identification involves an additional step – splitting the ligature into separate glyphs – and letters tend to merge in words like “tiff”, “flitter” and “fifth”.
Last, but not least: glyphs should not look too similar. Currently there are many glyphs that differ by only a tiny element, and juxtaposing some of them yields an annoying result – one glyph can be completely lost in another glyph’s shape (e.g. ‘c’ and ‘o’, ‘c’ and ‘e’, ‘i’ and ‘j’, ‘l’ and ‘k’, ‘n’ and ‘h’, ‘v’ and ‘y’, ‘i’ and ‘l’). Because of this, identification requires more effort than necessary. Glyphs should really differ from each other, and their crucial elements should not be too small. I believe that 1×1 pixel elements (like the dots in ‘i’ and ‘j’) will eventually be abolished, and even the punctuation marks will change their shape in the future (compare ‘:’ and ‘;’).
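As a programmer’s aside, the merging problem is easy to demonstrate mechanically. Below is a minimal Python sketch that flags words containing letter sequences which can be misread as a single glyph. The table of confusable pairs is only an illustrative assumption: ‘rn’/‘m’ and ‘cl’/‘d’ come from the examples above, ‘vv’/‘w’ is my own addition, and the list is in no way an established standard.
<pre>
# A minimal sketch: flag words containing two-letter sequences that can
# be misread as a single glyph. The CONFUSABLE table is an illustrative
# assumption built around the pairs mentioned above, not a standard list.
CONFUSABLE = {
    "rn": "m",   # "burn" can be misread as "bum"
    "cl": "d",   # "clay" can be misread as "day"
    "vv": "w",   # a double 'v' can be misread as 'w'
}

def ambiguous_words(text):
    """Yield (word, sequence, look-alike) for every risky word in the text."""
    for word in text.split():
        for seq, glyph in CONFUSABLE.items():
            if seq in word:
                yield word, seq, glyph

for word, seq, glyph in ambiguous_words("the jinn might burn the clay filters"):
    print(f"{word}: '{seq}' may be misread as '{glyph}'")
</pre>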
<i>One man ordered a number plate for his car reading something like “IO10IOI”. He hoped that policemen would fail to spell it properly, and thus he could dodge fines. But the number he received was already misspelt: “1010101”.</i>
I expect these improvements to have a significant impact on legibility. But even if the gain turns out to be minor, it would still be worth the effort. I am sure people with poor sight will appreciate it. Besides, the most valuable product of our civilization is knowledge, and the people who produce it must read a lot. Some people spend hundreds of hours every year reading – students and professors, scientists and scholars, programmers and lawyers, to name a few. The time saved on reading they spend on production – whatever their prime product is. Even a tiny increase in average reading speed would yield millions of dollars’ worth of annual gains around the world. I would not be surprised if a Nobel Prize in Economics were awarded for the best solution to this task. My motivation, however, is to make reading more comfortable for myself.
<b>Solution</b>
But what if nobody rushes to help me? I cannot wait another 300 years, so I decided to offer a provisional solution of my own.
I will boldly follow in the steps of St. Cyril and St. Methodius, and won’t hesitate to employ <b>any</b> means to facilitate reading. First suggestion: capitals should be abolished. Anyone surprised? They look like a medieval relic to me, and most writing systems don’t employ them at all. Since our system is based on a phonetic principle, it would be only honest to use determinative signs instead – to mark personal names, toponyms, abbreviations, etc. Some determinatives are already in use (e.g. quote marks), so one more would not make much difference to our system. I would prefer a single small glyph preceding the target word.
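To make the idea concrete, here is a small Python sketch of how such a determinative might work. The marker glyph ‘•’ and the crude heuristic (mark every word that carried a capital) are my own assumptions for illustration; a real scheme would distinguish sentence-initial capitals from actual names, toponyms and abbreviations.
<pre>
# A sketch of the proposed determinative sign: lowercase everything and
# precede each formerly capitalized word with a small marker glyph.
# The marker '•' and the capitalization heuristic are illustrative
# assumptions; note that this crude rule also marks sentence-initial words.
MARKER = "•"

def apply_determinative(text):
    words = []
    for word in text.split():
        if word[:1].isupper():
            words.append(MARKER + word.lower())
        else:
            words.append(word)
    return " ".join(words)

print(apply_determinative("Today dozens of languages in Europe and Asia are written in Cyrillic."))
# -> •today dozens of languages in •europe and •asia are written in •cyrillic.
</pre>
The point is only that the marking can be purely mechanical, so abolishing capitals need not lose any information.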
Minuscules should have the same height and about the same width, and each one should acquire some unique feature making it clearly distinct from the others. Should we be too timid to abolish the capitals, we might adopt some kind of “Russian solution”, i.e. use the same forms for both, as it is easier to design 26 good graphemes than 52. For example, a good solution for the problematic D-O-0-Q group is the letter-forms used on the new German car number plates – apparently they were specially developed to be recognizable from far away.
Here I must emphasize that the proposed solution will not be similar to WRITING IN ALL CAPS, which is quite uncomfortable to read because the text size and properties (vertical and horizontal spacing, etc.) are optimized for minuscules. A reader-friendly font, however, will come with the right properties. They will be even easier to optimize, as the letters will no longer differ so much in size (e.g. ‘W’ vs. ‘I’).
Imagine how much worse reading would be without these neat horizontal rows, if letters jumped up and down as they pleased! But what if the opposite happened: what if every glyph also had the same width? In that case vertical columns would mark the glyphs’ width the same way horizontal rows mark their height, and one additional step of glyph recognition – assessing the width – could be skipped. It might not seem so important because we are used to the current pattern, but when several improvements add up, they will make a significant impact on overall reading speed. Incidentally, most editors used by software developers already employ monospace fonts to make source code easier to read.
I believe it would be better if punctuation were obviously different from letters. For example, punctuation marks might occupy the same width but be smaller in height. And in any case, every stand-alone element of 1×1 pixel size should be modified (they might look like a tiny cross, for example), as such elements are too small and require too much effort to identify (e.g. ‘:’ and ‘;’).
<b>Conclusion</b>
<i>Feci quod potui, faciant meliora potentes!</i> (“I have done what I could; let those who can, do better!”)
No doubt a smarter man would have done much better, but where is he? Meanwhile, the proposed solution has its advantages: all the problems of our current script mentioned above are eliminated. The only thing one really needs to solve this problem is determination, and I hope this article will inspire somebody to develop the first ever reader-friendly font.