History of User Interfaces

The development towards today’s interfaces can be roughly divided into three phases: in the early days of computer technology the command line, from 1980 onwards the development of graphical user interfaces, which made the personal computer possible in the first place, and from the beginning of the 2000s the emergence of attentive user interfaces and assistance systems such as Google Glass, Alexa, and Siri. As the technology advanced, the requirements for the user interface changed as well. Whereas in the beginning we had a computer with a screen, today we have computers of many sizes, from smartphones to smartwatches. Some devices, such as Alexa, no longer have a graphical user interface at all and interact by means of voice control. And while the computer was initially a device of science, it is now increasingly integrated everywhere in our lives. [1]

The Morse telegraph service was a precursor of the command line. It already generated readable text, so it was not necessary to know Morse code: the entered text was translated into Morse code by the machine, sent, and output again as readable text by the receiving machine. The punch card also played an important role in computer programming; here, however, the contents had to be translated into machine-readable combinations. From 1960 until the early 1990s the command line was used to interact with the computer. Similar to the Morse telegraph, the entered text is translated into machine language, whereupon the machine executes the command and outputs the information as human-readable text. [2]

Punch cards and batch processing had the disadvantage that this type of interaction was very tedious. The computer processed the punched cards batch by batch and printed out the finished data, which could take up to an hour. Alongside the first approaches to time-sharing, J.C.R. Licklider at the Massachusetts Institute of Technology (MIT) came up with the idea for the first real interaction with the computer: entering data on a keyboard and receiving immediate feedback from the computer on an output device. This line of work resulted in the computer named Whirlwind, built between 1948 and 1951. It was the first computer that could be operated while processing and returned information immediately. [3]

The further development of time-sharing and the command line led to the next stage of interface evolution in the 1970s. Xerox PARC developed the first concept and, with the Xerox Alto in 1973, the first computer with a graphical user interface (GUI) that could be operated with a mouse. [4] This invention led to raster-graphics-based networked workstations and “point-and-click” WIMP GUIs, that is, graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse. [5] The concept was further developed by Steve Jobs at Apple in 1984 and later also adopted by Microsoft Windows; this type of user interface is still in use today. The main advantage of graphical user interfaces was that they were easy to learn and easier to use, which is why the personal computer gained popularity so quickly. [4] Andries van Dam, a member of Brown University’s faculty and one of the founders of its Computer Science Department, described post-WIMP user interfaces in 1997. These are controlled by gestures and voice and do not require manual input with a tool.
The first attempts were made in 1990, but it took some time before they were implemented. [5] Apple also gave its programmers Human Interface Guidelines from the beginning to address the needs of users. In 1989 Tim Berners-Lee proposed the World Wide Web, going on to create HTML and a first browser. The structure of the browser window (Mosaic 1, 1993), with an address line and forward and back buttons, is still used today. [6] The emergence of the first mobile devices, and later the development of smartphones and tablets, required different usability approaches and different user interfaces than the desktop computer. The touchscreen enables intuitive operation and a feeling of direct interaction, but it also places different demands on the design of the user interface. Many elements simply do not fit on the small screens, so many user flows must be structured differently than on a larger screen. The information architecture must take into account that not all information fits on one view: interfaces had to be reduced to their essential functions, with less frequently used features moved to other levels. [7]

1 User Interface Design, Alexander Florin, 2015, pp. 74–75.

2 User Interface Design, Alexander Florin, 2015, pp. 101–103.

3 https://www.britannica.com/technology/computer/Time-sharing-and-minicomputers; https://www.britannica.com/technology/Whirlwind-computer

4 User Interface Design, Alexander Florin, 2015, pp. 78–80.

5 van Dam, A. Post-WIMP user interfaces. Commun. ACM, 40(2):63–67, 1997.

6 User Interface Design, Alexander Florin, 2015, pp. 86–87.

7 User Interface Design, Alexander Florin, 2015, p. 101.
