Semiotics in UI Design

Semiotics is the study of signs and symbols as a means of communication. The word semiotics comes from the Greek sēmeĩon, meaning "sign" or "signal". Semiotics is concerned with signs and symbols in general, not just visual signs. Semiotic studies include the study of language, written texts, images, gestures, fashion or any other form of symbolic expression that can be interpreted through a system of codes or rules. Semiotics has its roots in linguistics but has since expanded to include all forms of human communication. [1] It is therefore also relevant for user interface design.

Signs carry a meaning, which is assigned to the sign and interpreted by the viewer. The social and cultural context plays a role in the interpretation of signs. In UI design, designers have a special role in choosing the signs for an interface: not only icons, but also signs expressed as words, graphics, or in some other form.

They must convey a message and enable an interaction, which is specified by user tasks but also by the goals of the stakeholders. Not only do designers have to take the user's context into account, they also have to convey the message that the stakeholders want to send via the user interface. [2] For example, an interface must reflect the brand identity and speak the language of the user. This underlines the importance of knowing and understanding users and their environment, in order to assess whether they understand the words, icons or metaphors used. When talking about semiotics it is also important to consider the socio-cultural preferences of the target group. Using language that is appropriate for the target audience, such as youth language or specialized terminology, or considering the codes of a specific industry can help deliver a better message. That is also true for the design of user interfaces.

For De Souza, the relationship between designer and user has a special role in the UI communication process. Essentially, the designer is in a conversation with the user: the designer is the sender of the message, while the user is the receiver. The message can be conveyed through words, images, graphics, explanatory texts and the behaviour of the UI. Designers must therefore also be aware of their own communicative behaviour. According to this theory, it is the designer who communicates with the user, not the system. The message that the designer conveys must be interpreted by users when they interact with the system. [3]

Complex software programs in particular can benefit from the semiotic approach. If designers only follow guidelines and rules, they do not communicate the true intellectual value of the software. They should communicate the value of the software solution to users instead of just showing them how to use it. If users are not aware that the software can offer them much more, because the designers do not communicate its real value, this can have serious consequences for the user experience. Why should a user learn a new technology or continue to use a program if it is less efficient than another method? [4]

In many cases of B2B software, where software products are commissioned by companies for reasons of digitization, the end users are very often not involved and the value of the new software is not communicated to them. Presenting users with a fait accompli is in many cases the reason why the acceptance of new tools and software fails. Furthermore, the success of the software is then measured not by the user experience, but by the satisfaction of the project managers and by usage numbers.
Looking at user interface design from the semiotic point of view, explaining the strategies of the application is more important than the handling itself. [5] This approach, with its focus on the communication between designer and user, could complement design patterns and produce better usability and user experience.
In order to achieve this, designers must have strong communication skills – they must communicate their intentions and reasons concisely and understandably, in a way that the user can absorb quickly and easily. [6] According to the semiotic process, users interpret the user interface according to their intentions. If these match the designer's intentions, then the communication has been successful. [7] Users interpret all the time. Sometimes their guess is correct, sometimes not, but every guess concerns either the "why" or the "what" of the interface. Evaluating these guesses leads to interaction patterns. [8] The understanding of metaphors must also be taken into account. Using metaphors to make a new kind of development understandable is a well-known technique: the "desktop" metaphor references the physical desktop to help users deal with documents and their filing system in folders. [9]

  • [1] https://en.wikipedia.org/wiki/Semiotics, https://de.wikipedia.org/wiki/Semiotik (accessed 26.12.2022)
  • [2] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, p. 5.
  • [3] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, p. 7.
  • [4] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, pp. 10-22.
  • [5] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, pp. 23-25.
  • [6] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, pp. 79-80.
  • [7] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, p. 84.
  • [8] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, p. 152.
  • [9] The Semiotic Engineering of Human-Computer Interaction, Clarisse Sieckenius de Souza, 2005, pp. 79-80.

History of User Interfaces

The development towards today's interfaces can be roughly divided into three phases: in the early days of computer technology the command line, from 1980 onwards the development of graphical user interfaces, which made the personal computer possible in the first place, and from the beginning of the 2000s onwards the emergence of attentive user interfaces and assistance systems such as Google Glass, Alexa and Siri. As technology advanced, the requirements for the user interface also changed. Whereas in the beginning we had a computer with a screen, today we have computers of different sizes, from smartphones to smartwatches. Some devices, such as Alexa, no longer have a graphical user interface at all, but interact by means of voice control. And while the computer was initially a device of science, it is now increasingly integrated everywhere in our lives. [1]

The Morse telegraph service was the precursor of the command line. It already generated readable text, so it was not necessary to know Morse code: the entered text was translated into Morse code by the machine, sent, and converted back into readable text by the receiving machine. The punch card also played an important role in computer programming; here, however, the contents had to be translated into machine-readable combinations. From 1960 until the early 1990s the command line was used to interact with the computer. Similar to the Morse telegraph, the entered text is translated into machine language, whereupon the machine can execute the command and output the information as human-readable text. [2] Punch cards and batch processing had the disadvantage that this type of interaction was very tedious: the computer processed the punched cards batch by batch and printed out the finished data, which could take up to an hour. In addition to the first approaches to time-sharing, J.C.R. Licklider at the Massachusetts Institute of Technology (MIT) came up with the idea for the first real interaction with the computer. His idea involved entering data on a keyboard and then receiving immediate feedback from the computer on an output device. It resulted in building the computer named Whirlwind between 1948 and 1951, the first computer that could operate in real time and return information immediately. [3]

The next stage of interface evolution, around 1970, grew out of time-sharing and the command line. Xerox PARC developed the first concept and, with the Xerox Alto in 1973, the first computer with a graphical user interface (GUI) that could be operated with a mouse. [4] This invention led to raster-graphics-based networked workstations and "point-and-click" WIMP GUIs, that is, graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse. [5] This concept was further developed by Steve Jobs at Apple in 1984 and later also adopted by Microsoft Windows. This type of user interface still exists today. The main advantage of graphical user interfaces was that they were easy to learn and easier to use, which is why the personal computer gained popularity so quickly. [4] Andries van Dam, a member of Brown University's faculty and one of the founders of its Computer Science Department, referred in 1997 to post-WIMP user interfaces. These are controlled by gestures and voice and do not require manual input with a tool.
The first attempts were made in 1990, but it would take some time before they were implemented. [5] Apple also gave its programmers Human Interface Guidelines from the beginning to address the needs of users. In 1989 Tim Berners-Lee created HTML and a first browser and thus invented the World Wide Web. The structure of the browser window (Mosaic 1, 1993), with address line and forward and back buttons, is still used today. [6] The emergence of the first mobile devices and later the development of smartphones and tablets required different usability approaches and different user interfaces than the computer. The touchscreen enables intuitive operation and the feeling of direct interaction, but also places different demands on the design of the user interface. Many elements simply do not fit on the small screens, so many user flows must be structured differently than on a larger screen. The information architecture must take into account that not all information fits on one view: many functions had to be reduced to the essentials, with less frequently used features moved to other levels. [7]

1 User Interface Design, Alexander Florin, 2015, pp. 74-75.

2 User Interface Design, Alexander Florin, 2015, pp. 101-103.

3 https://www.britannica.com/technology/computer/Time-sharing-and-minicomputers; https://www.britannica.com/technology/Whirlwind-computer

4 User Interface Design, Alexander Florin, 2015, pp. 78-80.

5 van Dam, A. Post-WIMP user interfaces. Commun. ACM, 40(2):63-67, 1997.

6 User Interface Design, Alexander Florin, 2015, pp. 86-87.

7 User Interface Design, Alexander Florin, 2015, p. 101.

Evaluation of design patterns in the context of time and technological progress

Design patterns – helpful concept or simple habit?
We live with patterns every day. In fact, our brains are hardwired for patterns. Pattern recognition is a basic human information-processing skill and an important process for perceiving our environment. Perception depends on the knowledge and experience that people already have: the brain compares stimuli to information previously stored in long-term memory in order to categorize them. Without previously acquired experience, humans cannot recognize patterns. [1]

Pattern is closely related to habit, because a habit is a behavioral routine that is repeated regularly and usually takes place automatically. We follow habits and patterns because they make things easier for us, but sometimes this behavior is even fatal, as common diseases of civilization prove. I was wondering if the same is true for UI/UX patterns.

In UI design, usability is one of the most important factors to consider, and usability is often critical for a good user experience. To ensure good usability, designers must not only carefully consider their design but also draw on common design patterns. Jenifer Tidwell, a renowned interface designer, is responsible for bringing pattern design into the world of UI design. She created many of the first patterns that are now used in modern web and app design. Since then, designers have built up design pattern libraries, which hold solutions for a variety of interface design problems. [2] But does that mean the UI will always turn out well if we follow design patterns and design principles? I therefore want to explore what distinguishes good from bad UI design, and whether following design patterns is always a good idea, or whether constantly repeating past solutions tends to prevent innovative ideas.

Design patterns – definition of the term
A pattern is something that repeats itself and has a predictable outcome. Even though people's experiences are tied to their past empirical experiences, the way the human brain and perception work makes behavior predictable. Design patterns are based on the fact that humans repeat behavior and act in a certain way. [3] Common patterns such as login and registration processes, social logins, user onboarding, breadcrumbs and date pickers have become established in digital products. One rule of good interface design is to repeat what is already in use. Indeed, using a tried and tested solution not only saves time in developing the concept and the UI design but also flattens the user's learning curve. [4]
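
To make the idea of a UI pattern concrete, here is a minimal sketch of the breadcrumb pattern mentioned above. It is only an illustration in TypeScript; the Crumb type and the renderBreadcrumbs function are hypothetical names of my choosing, and a real application would derive the trail from its routing data rather than hard-coding it.

```typescript
// Illustrative data shape for one step in the trail (hypothetical, not from any library).
interface Crumb {
  label: string;
  href: string;
}

// Minimal breadcrumb pattern: render the user's position in the page hierarchy
// as a trail of links, with the current page shown last and not linked.
function renderBreadcrumbs(trail: Crumb[]): string {
  const items = trail.map((crumb, index) =>
    index < trail.length - 1
      ? `<li><a href="${crumb.href}">${crumb.label}</a></li>`
      : `<li aria-current="page">${crumb.label}</li>`
  );
  return `<nav aria-label="Breadcrumb"><ol>${items.join("")}</ol></nav>`;
}

// Usage: the familiar "Home / Shoes / Sneakers" trail that users already know how to read.
console.log(
  renderBreadcrumbs([
    { label: "Home", href: "/" },
    { label: "Shoes", href: "/shoes" },
    { label: "Sneakers", href: "/shoes/sneakers" },
  ])
);
```

Because the pattern is so widely repeated, users do not have to learn it anew on each site; the learnability benefit comes from the repetition itself, not from any single implementation.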

The concept and the term were first defined by the Austrian-born U.S. architect and architectural theorist Christopher Alexander [5] in 1977, who created his own design patterns for architecture. [6] This approach was later adopted in software development, where patterns became proven, tested methods for developing software faster. The use of patterns can save time in the development process and reduce the potential for unexpected issues. It also makes code more understandable, which likewise applies to general understanding and communication in the field of UI/UX design. [7] Given the close relationship between the two disciplines, the leap to UI design was not far.
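
As a hedged illustration of what a software design pattern looks like in code, the sketch below shows the classic Observer pattern in TypeScript. The names Observable, Listener and the search-term example are illustrative and not taken from the cited sources; the point is only that a pattern is a reusable, named answer to a recurring problem.

```typescript
// Observer pattern: a reusable answer to the recurring problem
// "several parts of a program must react when one piece of state changes".
type Listener<T> = (value: T) => void;

class Observable<T> {
  private listeners: Listener<T>[] = [];

  // Register a listener and return a function that unsubscribes it again.
  subscribe(listener: Listener<T>): () => void {
    this.listeners.push(listener);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== listener);
    };
  }

  // Notify every registered listener of a new value.
  emit(value: T): void {
    this.listeners.forEach((listener) => listener(value));
  }
}

// Usage: two independent "views" react to the same state change
// without either of them knowing about the other.
const searchTerm = new Observable<string>();
searchTerm.subscribe((term) => console.log(`Update result list for "${term}"`));
searchTerm.subscribe((term) => console.log(`Update URL to ?q=${encodeURIComponent(term)}`));
searchTerm.emit("design patterns");
```

Because the pattern has a name, developers can communicate a whole design decision in one word, which is exactly the communication benefit the paragraph above describes.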

Design patterns – then, now, forever the same?
According to current knowledge about perception and human behavior, the use of design patterns is a good thing, as they help save time in development and improve UX. But is this really true, or are designers and developers simply clinging to a self-perpetuating habit, even though the solution may not be the best, just as some habits turn out not to be good? When we break free from constant repetition, do we gain a new perspective and find even better solutions to UI problems?

Another aspect worth considering is the innate human drive to progress. When do we reach the point where we get tired and bored of repeating the same thing over and over again? There are already movements that intentionally break patterns, such as brutalism in web design, which might be an indicator that we need some change. Many new trends in disciplines like architecture, art, and fashion have been created in opposition to conformity. Around 2019, brutalism emerged as a protest against the polished look of mainstream website design and fed the retro mood of the time. It did not catch on, but it is an interesting approach to making web design less conformist and to developing independent design styles. [8]

Design patterns – predicting the future
Once a pattern is invented, is it valid forever, or does it change over time, especially with technological change? In fact, design patterns naturally change over time as technology advances. When the mobile phone came along, designers had to find new ways to design concepts and interfaces; they had to adapt to voice-controlled devices like Alexa; and they will have to keep finding new approaches for future inventions. Another interesting aspect of future change is the evolution of AI: considering all the rules and patterns that already exist, can AI be useful for UI/UX engineers?

I would also like to make a short digression on the question: if we follow the don't-make-them-think paradigm, will users stop thinking at all in the future? Will designers, by relieving our cognitive processes, allow us to engage in more relevant thoughts, or will they impair the human ability to think in complex ways?

  1. Youguo Pi, Wenzhi Liao, Mingyou Liu and Jianping Lu (2008). Theory of Cognitive Pattern Recognition, in: Pattern Recognition Techniques, Technology and Applications, Peng-Yeng Yin (Ed.), InTech, pp. 434-435.
    Available from: http://www.intechopen.com/books/pattern_recognition_techniques_technology_and_applications/theory_of_cognitive_pattern_recognition
  2. Using Design Patterns in User Interface Design, Chelsea Chase, 2012, p. 2.
  3. Designing Interfaces, Jenifer Tidwell, 2005, p. 11.
  4. Dialogue principles, DIN EN ISO 9241-110.
  5. https://de.wikipedia.org/wiki/Christopher_Alexander
  6. Using Design Patterns in User Interface Design, Chelsea Chase, 2012, p. 3.
  7. https://en.wikipedia.org/wiki/Software_design_pattern
  8. The Rise of Brutalism and Antidesign, Ellen Brage, 2019, p. 2 (https://www.diva-portal.org/smash/get/diva2:1304924/FULLTEXT01.pdf)