Impulse #5 – Klanglicht

In October our group took part in the city festival “Klanglicht”. At this festival, new technologies were used to immerse visitors in a special atmosphere, to awaken feelings and emotions, and to make them think.

I like the concept of the festival as a whole: for a few days Graz becomes a huge exhibition platform where artists can realize their most daring ideas, and citizens and guests of the city can experience something beautiful and interesting without any special effort.

It was very interesting for me to see the organization of the process from the inside: to find out which technologies can be used, which places and locations are involved, and so on.

Impulse #4 – Article “Revolutionizing Education with AI: Exploring the Transformative Potential of ChatGPT”

Below is my analysis of the article, following the SQ3R method.

Survey.

I found the structure of the article simple and clear. It is divided into several parts, namely: Introduction, AI, and AI in Education. The titles of the main parts and sub-chapters are conveniently highlighted in color, so it doesn’t take much effort to get a quick overview.

Questions.

  1. How exactly can chatbots be used in education?
  2. How can they be implemented into the learning process in schools as part of UX design?
  3. What is the ethical side of the issue?
  4. How might the introduction of AI affect classical educational methods?

Read.

The article briefly reviews the history of the emergence of AI as well as its development, which is undoubtedly important for a general understanding of the issue. Separate subchapters are devoted to chatbots in general and ChatGPT in particular. Then there is a chapter devoted to a specific topic, namely AI in learning. Both pros and cons of the approach are discussed in detail, and the ethical side of the issue is not left without attention either: a separate subchapter is devoted to it. At the end there is a brief conclusion that summarizes the results.

I would say that the article did not disappoint me at all: I got my questions answered and was satisfied with what I read. A few new insights have definitely emerged in my mind, and I hope to be able to use this new information when writing my Master’s thesis.

Recite.

  • Chatbots can be used to personalize the learning process for each student as much as possible, not to mention that, unlike a teacher, a chatbot is always ready to answer any question at any time of the day or night.
  • The chatbot should behave as much like a real person as possible, and the interface should not differ much from a normal chat with a teacher, so that the introduction feels smooth for the student.
  • The article raises several concerns:
    • reliability and accuracy of the information it presents
    • potential biases in the data, resulting in discriminatory or misleading responses
    • privacy issues since it may collect and store personal information about students
    • questions about the role of teachers and the impact on the job market for educators
    • lack of human interaction, reducing the quality of the educational experience for students
    • over-reliance and dependency on technology
    • concerns about the rights of intellectual property
    • transparency and accountability, since it may be difficult to identify how the chatbot makes decisions
  • The article says chatbots can make learning a lot easier for teachers and administrators, but they can never replace a real person.

Review.

Revisiting the article and my analysis above, I can confidently say that I found the article very helpful. It expands my knowledge of AI and provides insights that will be useful to me when writing my paper. At first glance, it may seem that the topic is not that relevant to my Master’s thesis; however, the use of chatbots, specifically ChatGPT, in education is a great example of the smart use of high technology, and from the UX point of view it is also exciting to consider. What implementation methods are already in place, what can be improved, how to deal with ethical issues – all of this is relevant to any field where new technologies such as AI are or will be used.

Impulse #3 – Podcast “Sounds like the truth. How neural networks learned to mimic speech.”

I listened to a podcast on the history and development of neural networks and algorithms that work with language and speech. It started with the study of the structure of language itself, not any particular language but language in general, looking for common features in order to identify patterns and implement them in algorithms for recognizing and generating speech. Then came the simplest algorithms, for example the well-known “T9” predictive text on keypad phones. After that came the earliest machine learning: neural networks learned from huge arrays of text and became able to predict answers. For example, such an algorithm can solve the simplest arithmetic problem without knowing arithmetic at all; it simply knows that if “2”, “+” and “2” appear in the text, the most probable answer is “4”. At this stage a neural network could already be useful, but it was far from perfect. Such systems did not look at the whole sentence but predicted one word at a time, so they could produce an incoherent string of words if the sentence was constructed in an atypical way.
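To make this idea more concrete, here is a minimal, purely illustrative sketch of frequency-based next-word prediction. The toy corpus and function names are my own and not taken from the podcast; the point is that the program has no notion of arithmetic or grammar, it only counts which token most often follows another.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real systems were trained on huge archives of text.
corpus = "2 + 2 = 4 . 2 + 2 = 4 . 2 + 3 = 5 . the cat sat on the mat . the cat sat on the sofa"
tokens = corpus.split()

# Count which token most often follows each token (a simple bigram table).
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent next token; no understanding involved."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("="))    # '4': just the most frequent continuation, not arithmetic
print(predict_next("the"))  # 'cat'

# Because only the previous word is considered, chaining predictions
# can drift into repetitive or incoherent output for unusual sentences.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Even this toy version shows why such systems could be useful for simple completions yet fall apart on sentences that do not match the patterns in their training text.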

In 2015, a company called OpenAI appeared, which would later turn the idea of language models upside down, although in the beginning it was losing the technology race to Google. In 2022, OpenAI released ChatGPT, which became a turning point in how people perceive language models. People use ChatGPT for literally everything: to help write an essay, find a bug in code, formulate the right Google query; by now it can even generate images, and so on.

ChatGPT is a great example of how new technology can be used in real life by real people, and the best part is that it is actually being used.

Ontology and Epistemology

Ontology

At the beginning of my research, I can already make a few assumptions about the potential attitude of most typical users towards interfaces that use AI. Of course, these assumptions are based primarily on my own background, my experience, and my own perceptions of AI.

In my opinion, the average user is more likely not to trust AI in interfaces. So far, this area is too new for the majority of users to have gotten used to it.

Epistemology

When it comes to accurate measurement of satisfaction or dissatisfaction with AI user interfaces, a large number of different UX methods can come to the rescue. However, I am sure that measuring emotional state is in fact far from easy. At this stage it is important to understand exactly what the user is satisfied or dissatisfied with, and to clearly separate their emotions about the interface as a whole from their emotions about the AI specifically.

“What is this?” Task

What is this?

  • Drinking fountain
  • Technological device
  • Accessible water for a cat

How did it get here?

  • I ordered it online
  • It arrived at the store’s warehouse
  • The device was assembled at the factory
  • Components were delivered to the factory

Who uses it?

  • People who want the cat to drink more water
  • Actually, cats

What does it do?

  • Provides clean circulating water
  • Encourages the cat to drink more as these animals prefer running water
  • Makes the cat healthier
  • Prolongs the pet’s life as a result

When is it used?

  • Daily

What is it made of?

  • Plastic, rubber, technical filling, pump, filter

Who made it?

  • Me, when I change the filters and pour in fresh water
  • Manufacturers
  • People who developed the technology

Why does it exist?

  • This makes it easier to care for pets.

Impulse 8: “Dark Patterns: Manipulative UX Design and the Role of Regulation” webinar

Guest speaker Dr. Jen King, a Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence, provided a comprehensive overview of the Dark Pattern phenomenon in a recent webinar.

The webinar began by defining what constitutes a Dark Pattern and identifying common areas of occurrence, such as ecommerce, online shopping, privacy disclosures, and attention manipulation in gaming. Dr. King highlighted that Dark Patterns often emerge at decision points, where individuals must make choices. Notably, these manipulative techniques extend beyond the digital realm and manifest in the physical world, such as deceptive discount labels in supermarkets.

The evolution of Dark Patterns was discussed, citing A/B Testing as a key factor in their development. Through experiments designed to encourage increased purchasing behavior, companies refine the implementation of Dark Patterns. Dr. King also categorized different types of Dark Patterns, including asymmetric, conversion-focused, restrictive, and information-hiding patterns.

Practical examples were presented during the webinar, such as the automatic acceptance of privacy terms on platforms like Facebook when users click the “Create Account” button, illustrating the real-world implications of Dark Patterns.
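As a thought experiment, below is a minimal, hypothetical sketch of the mechanics behind that example. It is not Facebook’s actual implementation; the Account class and function names are invented for illustration only. It contrasts consent that is silently bundled into account creation with consent that is recorded as an explicit user decision.

```python
from dataclasses import dataclass

@dataclass
class Account:
    email: str
    privacy_terms_accepted: bool

# Dark-pattern variant: clicking "Create Account" silently bundles acceptance
# of the privacy terms into account creation; the user is never asked.
def create_account_bundled(email: str) -> Account:
    return Account(email=email, privacy_terms_accepted=True)

# More transparent variant: the flag comes from an explicit, unticked-by-default
# checkbox, so whatever is stored reflects a decision the user actually made.
def create_account_explicit(email: str, accepted_privacy_terms: bool) -> Account:
    return Account(email=email, privacy_terms_accepted=accepted_privacy_terms)

signup = create_account_bundled("user@example.com")
print(signup.privacy_terms_accepted)  # True, although the user never chose anything
```

The difference is not technical complexity but where the decision is made: in the bundled variant the system decides for the user, while in the explicit variant the stored value can only come from the user’s own choice.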

Personally, the webinar proved to be immensely helpful, complementing my prior research on the topic. While I was already familiar with some aspects, the session introduced new insights, particularly in recognizing Dark Patterns in the physical world. The realization that even discounted prices can fall under the umbrella of Dark Patterns was a valuable takeaway. I am confident that the knowledge gained from the webinar will significantly contribute to the theoretical portion of my master’s thesis.

Impulse 7: UXPodcast (Episode 316)

The guest speaker in this podcast episode is Kim Goodwin, a renowned design leadership expert, who reflects on her career in healthcare design.

The conversation is about the importance of accountability in healthcare design, touching on issues of integrations, configurations, and the need for traceability. Kim discusses the challenges of creating a more professional design industry and draws parallels with professions like medicine and hairdressing that require licensing. The hosts and Kim also address the lack of a formal certification process in design and the potential benefits of establishing one.

From a master’s thesis perspective on “dark patterns,” the podcast provides insights into the ethical considerations and accountability in design. Kim’s opinion on traceability, accountability, and the need for a more mature product development process aligns with the exploration of bad user interfaces in the context of dark patterns.

In conclusion, the podcast encourages designers to focus on enhancing user well-being, reflecting on their design decisions, and advocating for professional standards in the industry. This insight is valuable for my master’s thesis, especially in understanding the ethical dimensions of design.

Impulse 6: UXPodcast (Episode 319)

The podcast episode discussed two articles: “Don’t get stuck in discovery with insights no one asked for” by Martin Sandström and “UX strategy – What is it?” by Eddie Rich.

In the first article, Martin Sandström discussed the balance between research and action. He pointed out how designers often find themselves stuck in extensive research phases, causing delays in problem-solving. Martin talked about the importance of effective communication and prioritizing solutions to the problems presented by stakeholders.

The second article by Eddie Rich was about proposing a shift in terminology – from “UX strategy” to “experience strategy.” Eddie argued that the term “UX” can be misunderstood, especially by executives, and that reframing it could facilitate better communication.

The hosts of the podcast discussed the challenges associated with terminology in the field of UX. They highlighted the need for designers to adapt to the context of their organizations, listen actively, and build trusting relationships for successful collaboration.

For me, the key takeaway lies in the importance of effective communication and understanding the unique context of each organization. As I am going to create prototypes for my master’s thesis on app design, these insights might be helpful. Improving terminology and focusing on the customer experience will enhance my ability to communicate. In conclusion, the podcast offered practical insights that align with the real-world challenges faced by designers.

Impulse 5: Adobe MAX 2023

Adobe MAX 2023 presented a lot of AI advancements across the Adobe Creative Cloud suite, particularly with Adobe Firefly.

Adobe Firefly:
Adobe Firefly introduces the Image Model, Vector Model, and Design Model. Each plays a pivotal role in reshaping the creative process across Adobe’s suite.

Photoshop and Photoshop Web:
In Photoshop, the Generative Fill tool was introduced, which streamlines workflows and encourages creative exploration.

Illustrator:
Illustrator embraced the Firefly Vector Model with features like ‘Retype’ and ‘Text to Vector,’ offering enhanced control and efficiency in design iteration.

Premiere Pro:
Premiere Pro showcased Text-Based Editing, a feature streamlining transcription and revolutionizing video editing through the Transcript panel.

Adobe Express:
Adobe Express, a web-based tool launched in October 2023, integrates the Firefly Design Model in ‘Text to Template,’ showing the potential of AI in generating design templates.

Firefly Models:
Updates to Firefly’s Image Model grant users more control over generated images. Upcoming models like the Audio, Video, and 3D Models promise further creative possibilities.

Overall, I really liked the conference, especially the part about how AI is changing how we make things. But for my Master’s thesis on Dark Patterns, the conference might not have exactly what I need. Still, learning about how AI influences design is interesting. It may not directly connect with my thesis, but it gives me more understanding about the big picture of design and technology.

https://www.adobe.com/max/2023/sessions/opening-keynote-gs1.html