OFFF Barcelona Reflections

Last week, from March 23rd to 25th, we had the opportunity to attend the OFFF Conference in Barcelona. The conference featured a wide variety of speakers on topics ranging from graphic and media design to entrepreneurship, business, UX, and interaction design. My colleagues and I had a great experience in a beautiful and welcoming setting. As a group, we discussed many of the talks afterwards and found that quite a few had evoked strong reactions. Often, the discussion with my colleagues was almost more impactful than the talk itself.

In one of these discussions, we noticed that many of the speakers shared an insistence on the importance of experimentation. No matter how “successful” the designer was, they made sure to touch on past mistakes, moments of “selling out” to make ends meet, and their journey of trying anything and everything. They often also highlighted that they still don’t necessarily “know exactly what they’re doing”. Personally, I found these points very comforting. I really enjoyed the disarming transparency, particularly from Mexican designer and entrepreneur Rubén Alvarez. Alvarez began his presentation by sharing his “True Bio”, which included lines like “I write about what I feel” and “Sometimes I get angry if things don’t go my way”. He then used his life story to explain how he came to be the designer he is today, including all the mistakes and failed ventures. This candor made Alvarez’s talk the most impactful for me. Some of the other posts discuss the disappointment many of us felt at talks given by better-known designers, who arguably abused the time and attention we gave them. In contrast, Alvarez was a human first and a designer second.

I was very inspired by the honesty shown by many of the speakers at the OFFF Conference. It’s comforting when people share their mistakes and failures, as it makes us all less afraid to try and fail and try again. I would like to carry that thought with me as I move forward in my studies and career.

Feeling the Effort of Classical Musicians – A Pipeline from Electromyography to Smartphone Vibration for Live Music Performance

The article Feeling the Effort of Classical Musicians – A Pipeline from Electromyography to Smartphone Vibration for Live Music Performance provides an insightful overview of the live-stream MappEMG pipeline project, in which a mobile application was developed to mimic the muscle response of classical music performers for audience members. This project began with the notion that these “gestures”, or invisible muscle movements made by musicians while performing, are integral to the musician’s sense of place within a piece, and would increase the audience’s sense of immersion if they could be shared.

The project uses EMG sensors to track muscle activity: “From an artistic perspective, EMG gives a direct access to the performer’s intention in terms of implied musical effort, which is expressed through actual physiological effort”. These movements are then reproduced as vibrations on the audience’s mobile devices. Sound-based vibrotactile feedback has already been used for collaborative composition, audience interaction, and greater immersion for listeners with hearing impairments.
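The paper describes the pipeline at a high level rather than as code, but the core mapping idea, taking a rolling envelope of the EMG signal and scaling it to a vibration amplitude, can be sketched roughly as follows. The function names, the sample data, and the 0–255 amplitude range are my own assumptions, not the authors’:

```python
import math

def emg_envelope(samples, window=5):
    # Rolling RMS envelope of a raw EMG signal (a common way to
    # estimate muscle effort from the oscillating raw signal).
    env = []
    for i in range(len(samples)):
        w = samples[max(0, i - window + 1):i + 1]
        env.append(math.sqrt(sum(x * x for x in w) / len(w)))
    return env

def to_vibration(env, max_amp=255):
    # Normalize the envelope and scale it to a vibration amplitude,
    # e.g. 0-255 as used by typical smartphone vibration APIs.
    peak = max(env) or 1.0
    return [round(max_amp * e / peak) for e in env]

raw_emg = [0.0, 0.2, -0.4, 0.8, -0.6, 0.3, -0.1]  # made-up sample data
amps = to_vibration(emg_envelope(raw_emg))
```

In the real MappEMG pipeline these values are streamed live to audience devices; this sketch only shows the signal-to-amplitude mapping step.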

I found this case very compelling, as I had never heard of a similar project. It is also interesting to see a real-world use case involving tools we are using now, such as Max 8. I appreciate the drive to create a more immersive audience experience, and also to explore a new element of the performance in the musicians’ gestures.

References

Verdugo, F., Ceglia, A., Frisson, C., Burton, A., Begon, M., Gibet, S., & Wanderley, M. M. (2022). Feeling the Effort of Classical Musicians – A Pipeline from Electromyography to Smartphone Vibration for Live Music Performance. NIME 2022. https://doi.org/10.21428/92fbeb44.3ce22588

The Future of Design & AI – Some Thoughts Midway

For a long time now, the news has been filled with fear-mongering articles about AI taking human jobs. While it’s true that many jobs will be replaced by AI, human-AI partnerships will also create new ones, with humans and AI compensating for each other’s weaknesses and working towards a balanced future. Without downplaying the challenges posed by the jobs that will be lost, we should focus on finding opportunities to work together and create new roles in an evolving future.

For designers, AI streamlines our work and completes menial tasks for us, giving us back time for creativity. Personally, though, I am not convinced that greater productivity is always a good thing. During the COVID-19 pandemic, many of us gained working hours as we lost commute times, social breaks at university or the office, and other distractions that disappeared when working from home. However, it has since been widely reported and studied that those moments of pause keep us mentally well and also lead to greater creativity, as boredom generates breakthroughs. Of course I am thankful when the AI embedded in Adobe products auto-selects the part of the image I am trying to trace, but I would be remiss to say that all menial tasks should be removed from design by AI. Repetition can be meditative. I am neither entirely for nor against AI in design, as it is much more of a grey area, but I do believe it is important to carry some caution and reservations as I move forward in my research.

AI Case Study: Adobe Sensei

When I chose the topic of AI in Design, I was aware that AI was already all around me in ways both known and unknown, but I didn’t realize how much AI already impacted my work as a designer. Adobe Sensei is the AI and machine learning system implemented across Adobe’s creative products. Adobe Sensei’s mission statement is to “handle the time-consuming parts of your job, so you have more time to be creative”, noting that “74% of creatives recently surveyed by Pfeiffer Consulting said they spent more than 50% of their time on non-creative tasks — a huge opportunity for AI to help”.

AI and machine learning features from Adobe Sensei include recommended presets, content-aware fill for photos and videos, subject and object selection, character animation, body tracking, anomaly detection, sky replacement, neural filters, and many more. One feature I found particularly striking is the Face-Aware Liquify tool in Photoshop, which detects facial features and allows you to manipulate individual parts of the face: widening a smile, raising eyebrows, or completely changing a facial expression, with an entirely believable result.

Adobe Sensei is also used in Adobe Spark, a program that generates social graphics, webpages, and videos in 20 seconds or less. Once the initial content is created, the user can cycle through different layouts of text, imagery, and other elements, and customize with auto-crop, zoom, and scale sliders to maximize for dynamic graphics. Brands can upload their colours, logos, and assets, and use Adobe Spark to streamline the graphic design process, turning what used to be a two-day process of creating a social media post optimized for all platforms into a two-hour one.

A graphic generated with Adobe Spark

AI Case Study: Recommender Systems

Recommender systems are the AI that suggests YouTube videos or Netflix shows you might like, curates social media posts based on your interests, or creepily shows you ads for a book you were just discussing with a friend. These systems combine supervised and unsupervised learning: they find patterns in large data sets (unsupervised) and also learn from the decisions you make (supervised).

Most recommender systems combine three recommendation types: content-based, social, and personalized. Content-based recommendations ignore the user and recommend based on the quality or recency of the content. Social recommendations favour what is most popular, based on likes, subscriptions, number of purchases, etc. Personalized recommendations are based on what you, specifically, are interested in. For example, a YouTube video might be recommended to you because you have previously engaged with that channel, or because users similar to you watched it, or both.
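As a toy illustration of that blending (not any platform’s actual algorithm; the weights and scores below are invented), a hybrid recommender can combine the three signals as a weighted sum:

```python
def hybrid_score(content, social, personal, weights=(0.2, 0.3, 0.5)):
    # Weighted blend of content-based, social, and personalized
    # scores, each assumed to be normalized to the 0-1 range.
    wc, ws, wp = weights
    return wc * content + ws * social + wp * personal

# Invented scores for two candidate videos
videos = {
    "video_a": hybrid_score(content=0.9, social=0.4, personal=0.2),
    "video_b": hybrid_score(content=0.3, social=0.8, personal=0.9),
}
best = max(videos, key=videos.get)  # recommend the highest-scoring item
```

Real systems learn such weights per user rather than hard-coding them, which is part of why the same video can be recommended to different people for entirely different reasons.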

Recommender systems raise several issues, including the creation of ideological echo chambers – bubbles of people who think exactly like us – which can have serious social consequences. Recommendations can also be based on harmful stereotypes. Less serious issues include missing a show you would have liked because the AI thought it didn’t match your preferences, or seeing ads for products you just bought or websites you just visited.

Each of us experiences a different version of the internet. Recommender systems aren’t going anywhere anytime soon, and to coexist with AI in an ethical and knowledgeable way, it is our responsibility to understand how it influences our everyday lives.

AI Case Study: ChatGPT – Friend or Foe?

At the end of November last year, the San Francisco-based software giant OpenAI released ChatGPT – the most powerful chatbot yet – transforming our relationship with AI in a matter of days. ChatGPT can debug code (both providing and explaining its solutions), write a persuasive essay for your high school English class, compose lyrical poetry, or write alternate endings to your favourite books and movie scripts. Over 1 million people signed up to test it within the first five days.

A poem generated by ChatGPT

ChatGPT’s response to the prompt “What do pushMatrix() and popMatrix() functions do?”
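For context: pushMatrix() and popMatrix() in Processing save and restore the current transformation matrix, so that temporary transforms don’t leak into later drawing. The stack mechanism behind them can be sketched in Python (a hypothetical class, tracking translation only):

```python
class TransformStack:
    # Minimal model of Processing's matrix stack (translation only).
    def __init__(self):
        self.current = (0, 0)  # active transform
        self.stack = []        # saved transforms

    def push_matrix(self):
        self.stack.append(self.current)  # save the current transform

    def pop_matrix(self):
        self.current = self.stack.pop()  # restore the saved transform

    def translate(self, dx, dy):
        x, y = self.current
        self.current = (x + dx, y + dy)

t = TransformStack()
t.push_matrix()
t.translate(10, 5)  # draw something offset by (10, 5) here
t.pop_matrix()      # back to the original coordinate system
```

After pop_matrix(), the transform is back where it started, which is exactly the behaviour ChatGPT was asked to explain.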

Unlike previous “stateless” chatbots, ChatGPT remembers its conversation history with you, enabling more complex and personal interactions, and leading many to wonder if ChatGPT could replace Google. Some think that the software spells the end of the educational system as we know it. Mere weeks after ChatGPT’s launch, a Princeton University student developed GPTZero, a system to detect ChatGPT usage, but it’s far from perfect. New York Times technology columnist Kevin Roose argues that educators would be better off learning to work with ChatGPT and other AI, as software of this kind will only multiply and improve going forward. Roose also argues that students should be learning how to exist in the world they will graduate into, and living alongside powerful AI tools is a prerequisite.

A similar angle could be taken with regards to ChatGPT’s role in the design community. Some fear that the AI will put web developers out of a job, but others argue that developers who know how to work with GPT will simply become more efficient and employable, and that for all affected industries, keeping up to date and familiar with this kind of software is critical as it changes the face of our technological world.

I asked ChatGPT to rewrite the above blog post to be more concise and engaging

Some Food for Thought:

Roose, K. (2022, December 5). The Brilliance and Weirdness of ChatGPT. The New York Times. https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html

Roose, K. (2023, January 12). Don’t Ban ChatGPT in Schools. Teach With It. The New York Times. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html

Metz, C. (2023, January 20). How Smart Are the Robots Getting? The New York Times. https://www.nytimes.com/2023/01/20/technology/chatbots-turing-test.html

What is AI? A Brief History

The term Artificial Intelligence, or AI for short, was coined by computer scientist John McCarthy in 1956. AI can be defined most simply as a machine that thinks. More broadly, a machine is said to have artificial intelligence if it can interpret data, learn from the data, and use that knowledge to adapt and achieve specific goals.

AI can be separated into two categories – weak and strong. Most AI today is considered weak AI, meaning that it was created to focus on one specific task, mimicking some aspect of human intelligence. Examples of weak AI include Siri, Alexa, speech-to-text recognition, customer service chatbots, recommendation engines, and pre-screening for job or university applications.

On the other hand, strong AI is a machine that can think exactly like us, including a self-aware consciousness that can solve problems, learn, and plan for the future. In the 1950s, Alan Turing, considered to be the father of computer science, developed the Turing Test to determine whether a machine has developed the ability to think like us. Turing argued that if a machine could trick a human into thinking it was also human, it could be said to think. Contemporary thinkers argue that there is more to thinking like us than being able to fool us. A famous argument against the Turing Test is the Chinese Room thought experiment, developed by John Searle in 1980. If you were isolated in a room and given a codebook telling you how to respond to messages in Chinese passed under the door, your responses might convince the native speakers on the other side that you could speak Chinese, when really you were just manipulating symbols according to a set of rules. The Chinese Room suggests that passing for human isn’t enough to qualify as strong AI: strong AI requires actual understanding, something Searle believed a machine could never achieve.

Although AI is far from a new topic, progress between the 1950s and 2010 came in fits and starts, including periods of stalled funding and interest known as “AI winters”. A long accumulation of small improvements eventually led to the AI revolution that began around 2010 and continues today.

Some helpful links:

CrashCourse. (2016, August 8). Artificial Intelligence & Personhood: Crash Course Philosophy #23 [Video]. YouTube. https://youtu.be/39EdqUbj92U

CrashCourse. (2019, August 9). What Is Artificial Intelligence? Crash Course AI #1 [Video]. YouTube. https://youtu.be/a0_lo_GDcFw

IBM. (n.d.). What is Artificial Intelligence (AI)? IBM. https://www.ibm.com/topics/artificial-intelligence

What is the role of AI in Art and Design?

AI can mean a lot of things – chatbots, text generators, self-driving cars – but in recent years, the capabilities of AI technologies in the fields of art and design have come increasingly into the limelight. AI is turning open-source data into public art pieces, as in the case of Turkish media artist Refik Anadol; generating scarily realistic high-fashion images for Instagram; creating NFTs with just a few clicks; and recreating existing artists’ styles, calling into question the as-yet nonexistent ethical boundaries of AI visualization. A lot of AI imagery ends up on Twitter for its wacky output, but these algorithms are only getting smarter and more prominent, and their future is at once boundless and unknown.

The rise of AI visualization has been accompanied by fear and ridicule from the arts and design communities, which worry that AI will put “real” artists out of work and devalue art in general. But is this fear warranted? Although AI is undeniably powerful, it is still beholden to a real, live person telling it what to do and injecting the heart and emotion that (as of yet) only a human hand can provide.

What are the future possibilities for human-AI collaboration in art and design? Is there any merit to the outright rejection of such technologies? As someone with a background in interior architecture, I feel very much out of my depth when it comes to discussions around AI. The topic still conjures images of The Matrix, and I couldn’t really tell you when and where AI is currently being used, or how it affects our everyday lives. As an Interaction Design student, I am fascinated by the intersection between art, design, and technology, and I chose this topic to teach myself (and anyone else who is too scared to ask): what is AI doing out there, anyway?

Some interesting sources:

Baio, A. (2022, September 9). Online Art Communities Begin Banning AI-Generated Images. Waxy. https://waxy.org/2022/09/online-art-communities-begin-banning-ai-generated-images/

Herrman, J. (2022, September 19). AI Art is Here and the World is Already Different. Intelligencer. https://nymag.com/intelligencer/2022/09/ai-art-is-here-and-the-world-is-already-different.html

NYT Cooking. (2022, November 4). Can A.I. Generate the Perfect Thanksgiving? | Priya Krishna | NYT Cooking [Video]. YouTube. https://youtu.be/yT8KoWpqUgg

Paetzhold, M. (2022, September 4). Will DALL-E the AI Artist Take My Job? Intelligencer. https://nymag.com/intelligencer/article/will-dall-e-ai-artist-take-my-job.html

TED. (2020, August 19). Art in the age of machine intelligence | Refik Anadol [Video]. YouTube. https://www.youtube.com/watch?v=UxQDG6WQT5s

TED. (2020, April 6). Art that reveals how technology frames reality | Jiabao Li [Video]. YouTube. https://www.youtube.com/watch?v=tT8icNhydtg

TED. (2022, January 12). Jeff Dean: AI isn’t as smart as you think — but it could be | TED [Video]. YouTube. https://www.youtube.com/watch?v=J-FzHIQ7SOs

TED. (2019, November 14). The danger of AI is weirder than you think | Janelle Shane [Video]. YouTube. https://www.youtube.com/watch?v=OhCzX0iLnOc