“What is this?” Task

What is this?

  • Drinking fountain
  • Technological device
  • Accessible water for a cat

How did it get here?

  • I ordered it online
  • It arrived at the store’s warehouse
  • The device was assembled at the factory
  • Components were delivered to the factory

Who uses it?

  • People who want the cat to drink more water
  • Actually, cats

What does it do?

  • Provides clean circulating water
  • Encourages the cat to drink more as these animals prefer running water
  • Makes the cat healthier
  • Prolongs the pet’s life as a result

When is it used?

  • Daily

What is it made of?

  • Plastic, rubber, internal electronics, a pump, and a filter

Who made it?

  • Me, when I change the filters and refill the water
  • Manufacturers
  • People who developed the technology

Why does it exist?

  • This makes it easier to care for pets.

Impulse 8: Dark Patterns: Manipulative UX Design and the Role of Regulation webinar

Guest speaker Dr. Jen King, a Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence, provided a comprehensive overview of the Dark Pattern phenomenon in a recent webinar.

The webinar began by defining what constitutes a Dark Pattern and identifying common areas of occurrence, such as e-commerce and online shopping, privacy disclosures, and attention manipulation in gaming. Dr. King highlighted that Dark Patterns often emerge at decision points, where individuals must make choices. Notably, these manipulative techniques extend beyond the digital realm and manifest in the physical world, for example as deceptive discount labels in supermarkets.

The evolution of Dark Patterns was discussed, citing A/B Testing as a key factor in their development. Through experiments designed to encourage increased purchasing behavior, companies refine the implementation of Dark Patterns. Dr. King also categorized different types of Dark Patterns, including asymmetric, conversion-focused, restrictive, and information-hiding patterns.

Practical examples were presented during the webinar, such as the automatic acceptance of privacy terms on platforms like Facebook when users click the “Create Account” button, illustrating the real-world implications of Dark Patterns.

Personally, the webinar proved to be immensely helpful, complementing my prior research on the topic. While I was already familiar with some aspects, the session introduced new insights, particularly in recognizing Dark Patterns in the physical world. The realization that even discounted prices can fall under the umbrella of Dark Patterns was a valuable takeaway. I am confident that the knowledge gained from the webinar will significantly contribute to the theoretical portion of my master’s thesis.

Impulse 7: UXPodcast (Episode 316)

This podcast episode features guest speaker Kim Goodwin, a renowned design leadership expert, who reflects on her career in healthcare design.

The conversation is about the importance of accountability in healthcare design, touching on issues of integrations, configurations, and the need for traceability. Kim discusses the challenges of creating a more professional design industry and draws parallels with professions like medicine and hairdressing that require licensing. The hosts and Kim also address the lack of a formal certification process in design and the potential benefits of establishing one.

From a master’s thesis perspective on “dark patterns,” the podcast provides insights into the ethical considerations and accountability in design. Kim’s opinion on traceability, accountability, and the need for a more mature product development process aligns with the exploration of bad user interfaces in the context of dark patterns.

In conclusion, the podcast encourages designers to focus on enhancing user well-being, reflecting on their design decisions, and advocating for professional standards in the industry. This insight is valuable for my master’s thesis, especially in understanding the ethical dimensions of design.

Impulse 6: UXPodcast (Episode 319)

The podcast episode discussed two articles: “Don’t get stuck in discovery with insights no one asked for” by Martin Sandström and “UX strategy – What is it?” by Eddie Rich.

In the first article, Martin Sandström discussed the balance between research and action. He pointed out how designers often find themselves stuck in extensive research phases, causing delays in problem-solving. Martin talked about the importance of effective communication and prioritizing solutions to the problems presented by stakeholders.

The second article, by Eddie Rich, proposed a shift in terminology – from “UX strategy” to “experience strategy.” Eddie argued that the term “UX” can be misunderstood, especially by executives, and that reframing it could facilitate better communication.

The hosts of the podcast discussed the challenges associated with terminology in the field of UX. They highlighted the need for designers to adapt to the context of their organizations, listen actively, and build trusting relationships for successful collaboration.

For me, the key takeaway lies in the importance of effective communication and understanding the unique context of each organization. As I am going to create prototypes for my master’s thesis on app design, these insights might be helpful. Improving terminology and focusing on the customer experience will enhance my ability to communicate. In conclusion, the podcast offered practical insights that align with the real-world challenges faced by designers.

Blog Post 6 – Brain Dump 2

After the semester had settled down and the exhibition was more or less successfully dismantled, I was treated to three one-on-one talks about what will happen after the FH – my portfolio, future career, life decisions, what I want to achieve in life and so on. But before I can get to all that, I will have to tackle my Master’s Thesis.

Both the talk with Roman Pürcher and the one with Ursula Lagger were about exactly that, albeit with slightly different focuses. As the title suggests, this blog post attempts to freeze my current headspace in time, because I feel like I got a lot of really useful input during the two sessions and I want to write down what is going through my head right now while it’s still fresh.

Roman’s talk

I talked to Roman about more general approaches to the thesis: when I would want to do what, how to go forward with the Design & Research blog posts and impulses, and so on. But we also talked specifically about the practical and theoretical parts of the thesis, only briefly touching on the latter, which was to be discussed with Ursula Lagger the following day.

The practical part

We talked about some technical approaches and I took some notes on those, but most notable for me was the question of what could happen if my practical part doesn’t work out like I keep assuming: if anime-style animation using AI simply is not possible at a level of quality I deem ‘good enough’ with the current tools, then what happens?

We came to the conclusion that that would be fine, too. In the unlikely event that the animation looks so bad that I cannot use it, the worst-case scenario would be that my anime video looks like the trailer I already made for StoryVis – featuring essentially no animation, yet brilliant backgrounds, amazing colors and a carefully art-directed aesthetic. That doesn’t sound too bad, I think, and I could still animate some things manually. This does mean, however, that my conclusion would have to be brutally honest: ‘AI Anime doesn’t work (yet)’. While disappointing, this doesn’t make the conclusion any less valid.

Going forward & next steps

Blog posts and impulses.

Just kidding – while the deadline of 22.FEB.24 approaches slowly but steadily, not to mention my holiday from the 8th til the 15th, I’m also thinking about the time after Design & Research. The next practical task I will need to tackle is of course the actual animation of my characters, since I don’t think I need to spend much more time on the backgrounds: Midjourney is already capable of producing essentially perfect backgrounds.

I want to use Pika to try and animate my already existing AI movie for StoryVis. This would allow me to use a story, art style, world and characters I already established and really like, and therefore saves me a lot of time, serving as an experimental playground to test out the AI. Having said that, I really have no experience with animation AI, and I don’t even know if you can give the AI images, or if it can only do prompt-based generation, or if Pika is even the tool to go with. I definitely want to talk to Kris van Hout about her amazing AI movie, which featured a lot of very convincing AI-generated motion built on top of the images from Midjourney.

Ursula Lagger’s talk

Talking with Roman, we came up with a list of questions for Ursula Lagger, mainly concerned with the theoretical and scientific part of the thesis. When I mentioned my questions about expert interviews and about writing research blog posts as quickly as possible before going on holiday, it quickly became apparent that the core concept and structure of my theoretical part still needed a lot more work.

The theoretical part: structure

My overall idea of writing a cultural/historical section about past paradigm shifts and how they affected work culture is a great approach, but it is a slippery slope that could cost me a lot of time. To make the section relevant, I would need to find out which example of past innovative technology bears the most resemblance to the current developments in AI, then compare the two and speculate on the future of AI using my findings. Each of these steps requires an immense amount of actual scientific and literary research. Just filtering out what I won’t write about because it isn’t relevant enough will require so much work that it honestly might not be worth it, which is why I will need to drastically reduce my aspirations for the cultural and historical part of the paper, so as not to get lost in the sauce.

However short this section is going to be, it necessitates a chapter in which I theorise about the future of AI and how it correlates with a past paradigm shift. For this, I will need to look into futures studies, maybe conduct interviews with ‘Zukunftsforschende’ (futures researchers is probably the closest translation), and finally give a prognosis or at least a personal opinion on the topic.

Something else I hadn’t considered up until my talk with Ursula Lagger was the inclusion of examples from works of other creatives and artists using AI technologies. According to Ursula Lagger, this is an absolutely essential part and cannot be left out of the thesis. A no-brainer, really, that contextualises my own findings and work in the current landscape of AI tools and possibilities. There are so many approaches and use-cases out there, out of which I have chosen a very niche combination: the creation of an anime.

  • How have other people tackled this?
  • Why am I not doing it the same way?
  • What AM I doing the same way?
  • What about ethical concerns of certain use cases?
  • What do I think of potentially dangerous use cases?
  • How are people using AI in the best and worst ways?
  • How can I compare this to new technologies in the past?

New technologies usually scare people and can cause shifts not only in work culture but also in the art form itself. Another obvious observation, if you think about it. Maybe I can write about early, awful Photoshop creations by people overusing layer styles, resulting in terrible artworks, or about how early photography was used completely differently from today? I could compare that to early AI creations: how we can usually tell when something is generated by an AI, how anatomy is weird, how text doesn’t work properly, or how scripts and company or movie names generated by ChatGPT usually sound very cheesy and almost have a style of their own.

It’s only a matter of time until young artists figure out how to make something genuinely new with these tools – genuinely good works of art that are not at all hindered by AI, but made possible because of it. Ultimately, that’s what I want to achieve with my practical part too: a genuinely good work of art that doesn’t scream ‘I WAS MADE WITH AI’.

Going forward & next steps

In any case, it seems like my theoretical part still requires a lot of thought and work to figure out how to weigh each of its parts, which at the moment seem to be:

  • Documentation of my practical work
  • Comparison to other approaches
  • Similarities to paradigm shifts in the past
  • Conclusions and Speculations

This list is what the theoretical chapters could look like, judging from my current state of mind. I want to use the time until the 22nd of February to figure things out further, continuing to write blog posts about my findings. I feel like Ursula Lagger’s input was as useful as it was abundant – so I need time to let all of it sit and see what I truly want to write about in the theoretical section of my thesis.

Blog Post – Tangible STEM-Education

In this blog post I want to give insights into the second of the two topics I researched this semester: a concept for a hands-on exhibit in the context of science education.

Concept

Using the microscope as a reference to centuries of research, visitors will explore hidden information about objects by placing them under a modified microscope and looking through it.

The microscope, an invention for enlarging and discovering details normally invisible to the human eye, is thereby transformed into a hidden screen that displays animations layered on top of the object in real time.
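To make the concept more tangible, here is a minimal technical sketch of how such a hidden-screen setup could recognize which slide has been placed on the stage – for instance via a small fiducial marker printed on each slide – and trigger the matching overlay animation. The marker ids, file names and camera setup are my own assumptions for illustration, not part of the actual exhibit design:

```python
# Sketch only: recognize a slide via an ArUco marker and pick its overlay.
# Requires opencv-contrib-python; marker ids and file names are made up.
import cv2

SLIDE_ANIMATIONS = {  # marker id -> overlay animation (hypothetical files)
    0: "leaf_cells.mp4",
    1: "circuit_board.mp4",
}

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

camera = cv2.VideoCapture(0)  # camera watching the slide stage
while True:
    ok, frame = camera.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None and int(ids[0][0]) in SLIDE_ANIMATIONS:
        # Hand the chosen animation off to the hidden screen here.
        print("Play:", SLIDE_ANIMATIONS[int(ids[0][0])])
    cv2.imshow("stage", frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break
camera.release()
cv2.destroyAllWindows()
```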

Explore

Visitors experience the research process in a playful way and explore hidden information about selected objects.

Physical Material

The objects will relate to STEM education and be selected to fit different educational levels. They will be designed as slides, like those normally placed under microscopes. Depending on their size, objects will be cut into slices and protected with epoxy resin.

Participation

As only one person at a time can look into the microscope, I also thought about alternative methods for displaying content. While still using the microscope as the primary display, one possibility would be to also project the content onto a second display, visible to visitors walking by. However, doing so makes the technology visible, which is what I try to avoid.

Blog Post – Tangible Data Visualization

In this blog post I want to give insights into the first of the two topics I researched this semester: a concept for a hands-on exhibit in the context of science education.

Concept

A physical representation of an iceberg that will change its shape and texture according to the ongoing, real-world melting of ice caused by climate change. Together with an interactive soundscape of cracking ice, visitors get a multisensory experience.

Interactivity will be realized either by controlling the melting process with a wheel that represents time in years, or by voting on climate-related questions by placing a hand on a reactive surface.
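As a rough sketch of the wheel variant, the wheel angle could be mapped to a year and the iceberg’s height map blended between two keyframe shapes. The year range and the height maps below are placeholder assumptions:

```python
# Minimal sketch: wheel angle -> year -> interpolated iceberg shape.
import numpy as np

YEAR_MIN, YEAR_MAX = 2000, 2100       # assumed range shown on the wheel
shape_start = np.random.rand(16, 16)  # placeholder height maps; real ones
shape_end = shape_start * 0.2         # would come from climate data

def shape_for_wheel(angle_deg: float) -> np.ndarray:
    """Turn a wheel angle (0-360 degrees) into an interpolated height map."""
    t = (angle_deg % 360.0) / 360.0              # normalized wheel position
    year = YEAR_MIN + t * (YEAR_MAX - YEAR_MIN)  # year displayed to visitors
    print(f"Year: {year:.0f}")
    return (1.0 - t) * shape_start + t * shape_end  # linear melt blend
```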

Texture

The texture projected onto the mesh will simulate the sun casting shadows on the lower parts of the iceberg. Furthermore, the texture will change as the shape changes, visualizing the transition from ice to water and cracking ice sheets. Based on my interviews, a grayscale texture will be the preferred method if rendered in real time. If the textures are pre-rendered, further research comparing the visibility of grayscale and color will be necessary.
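A minimal sketch of how such a real-time grayscale texture could be derived straight from the current height map, with low (melted) regions darkening toward water – the threshold is an assumed value:

```python
# Sketch: height field (0..1) -> 8-bit grayscale texture for the projector.
import numpy as np

WATER_LEVEL = 0.15  # assumed height below which ice reads as water

def grayscale_texture(heights: np.ndarray) -> np.ndarray:
    """Bright peaks, darkened water: grayscale image from the height map."""
    tex = np.clip(heights, 0.0, 1.0)
    tex[heights < WATER_LEVEL] *= 0.3  # darken melted areas toward 'water'
    return (tex * 255).astype(np.uint8)
```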

Projection

The projection will be made from the ceiling. However, as shown in figure 1, there are two possibilities for hiding the projector. While the first approach masks the projector with a lampshade, the second hides it in the ceiling and projects onto the mesh with the help of a mirror.

Physical Material

The physical representation will be realized with sticks and an elastic mesh on top. The sticks will be connected to small motors that push or pull each stick individually. With this approach, the shape can be changed to simulate the melting process.

A realization of the mechanics can be seen under the following link: https://vimeo.com/125111011#t=590s
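Purely as an illustration of this stick-and-motor idea (not the mechanism from the linked video), the sketch below converts a target height map into signed step counts for each stick’s motor; the steps-per-unit constant and the grid size are assumptions:

```python
# Sketch: per-stick motor commands from a target shape.
import numpy as np

STEPS_PER_UNIT = 200  # assumed motor steps per unit of stick travel

def motor_steps(current: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Signed step counts moving each stick from its current to target height."""
    return np.round((target - current) * STEPS_PER_UNIT).astype(int)

# Example: move a 4x4 stick grid from flat toward a random iceberg shape.
print(motor_steps(np.zeros((4, 4)), np.random.rand(4, 4)))
```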

Blog Post – Interview 2

For this impulse I decided to conduct another interview with a professor of the master’s program Multimedia and Visual Arts at the Universitat Politecnica de Valencia (UPV). This interview was with Prof. Francisco Giner Martínez. Before the interview, I summarized my project idea and questions in the following Miro board:

During this interview, I received the following feedback:

  • The most important question in terms of processing power is whether you want to render in real time or pre-render.
  • As you are focusing more on real time, texture processing takes a lot of processing power.
  • I would suggest concentrating on a matrix system with an alpha channel.
  • Try to focus on the black-white transition for good shadows, which is important in your case.
  • You could realize something like this with a matrix system that compares each cell’s height with its neighbors and calculates a corresponding alpha value (sketched below).
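My reading of that last suggestion, as a minimal sketch rather than anything Prof. Giner showed: each cell checks how far its highest neighbor rises above it and turns that difference into an alpha value, so cells next to steep drops receive stronger shadow:

```python
# Sketch: alpha matrix from height differences with the four neighbors.
import numpy as np

def alpha_from_heights(h: np.ndarray) -> np.ndarray:
    """Alpha per cell: how far the highest 4-neighbor rises above the cell."""
    padded = np.pad(h, 1, mode="edge")  # repeat edge values at the border
    rise = np.stack([
        padded[:-2, 1:-1] - h,  # neighbor above
        padded[2:, 1:-1] - h,   # neighbor below
        padded[1:-1, :-2] - h,  # neighbor to the left
        padded[1:-1, 2:] - h,   # neighbor to the right
    ]).max(axis=0)
    return np.clip(rise, 0.0, 1.0)  # higher neighbor -> stronger shadow

# Example: a single peak casts shadow onto the cells around it.
heights = np.zeros((5, 5))
heights[2, 2] = 1.0
print(alpha_from_heights(heights).round(2))
```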

My takeaway: Reduce complexity as much as possible, find the details that have the most impact and focus on them.

The following two photos were taken at an exhibition at the Universitat Politecnica de Valencia and demonstrate how a grayscale image is projected onto a three-dimensional terrain made of sand.

Impulse 5: Adobe MAX 2023

This year’s Adobe MAX 2023 presented a lot of advancements in AI across the Adobe Creative Cloud Suite, particularly with Adobe Firefly.

Adobe Firefly:
Adobe Firefly introduces the Image Model, Vector Model, and Design Model. Each plays a pivotal role in reshaping the creative process across Adobe’s suite.

Photoshop and Photoshop Web:
In Photoshop, the Generative Fill Tool was introduced, streamlining workflows and boosting creative exploration.

Illustrator:
Illustrator embraced the Firefly Vector Model with features like ‘Retype’ and ‘Text to Vector,’ offering enhanced control and efficiency in design iteration.

Premiere Pro:
Premiere Pro showcased Text-Based Editing, a feature streamlining transcription and revolutionizing video editing through the Transcript panel.

Adobe Express:
Adobe Express, a web-based tool launched in October 2023, integrates the Firefly Design Model in ‘Text to Template,’ showing the potential of AI for generating design templates.

Firefly Models:
Updates to Firefly’s Image Model grant users more control over generated images. Upcoming models like the Audio, Video, and 3D Models promise further creative possibilities.

Overall, I really liked the conference, especially the part about how AI is changing how we make things. But for my Master’s thesis on Dark Patterns, the conference might not have exactly what I need. Still, learning about how AI influences design is interesting. It may not directly connect with my thesis, but it gives me more understanding about the big picture of design and technology.

https://www.adobe.com/max/2023/sessions/opening-keynote-gs1.html