Impulse 2 – OpenAI Dev Days

When I sat down to start writing my second impulse, I noticed that the OpenAI talk I had wanted to cover, about AI topics and the company itself, had been removed. As luck would have it, I also noticed that OpenAI had held multiple keynotes as part of their Dev Day on November 6th, which have since been uploaded to their YouTube channel.

I watched the opening keynote as well as part of their product deep dive. During the keynote, they discussed some updates concerning ChatGPT for enterprises, some general updates and performance improvements to the model and, most importantly to me, introduced GPTs. GPTs are a new part of ChatGPT that allows users to build specialised versions of ChatGPT for personal, educational and commercial use.

The user can prompt the model with natural language, telling it what it should specialise in, upload data the model should know about and reference, and connect it to external APIs. The user can also give the GPT certain “personality” traits, which the developers show off during the deep dive by creating a pirate-themed GPT. They jokingly admit that this specific demo is not particularly useful, but I believe it shows off the power behind the model and could come in handy for my potential use.

I could train a custom GPT for scriptwriting, feeding it scripts of movies I like (ones whose scripts I can actually find), train a different one on storyboarding, supplying it with well-made storyboards and utilising the built-in DALL-E 3, or train another model that just specialises in ideas for short films. I think this feature alone has further solidified ChatGPT's dominant position as the go-to text-based AI, and I will definitely use it for my Master's project.
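The same Dev Day also introduced an Assistants API, which exposes similar building blocks (custom instructions, file retrieval, tool calls) to developers. As a minimal sketch of the scriptwriting idea above, assuming the Python SDK and the API as announced in November 2023, and with a file path, assistant name and instructions that are placeholders of my own rather than anything shown in the keynote:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Upload a reference script for the assistant to draw on
# (hypothetical file path, purely for illustration).
script = client.files.create(
    file=open("scripts/favourite_movie.txt", "rb"),
    purpose="assistants",
)

# Create a specialised scriptwriting assistant with retrieval
# over the uploaded script.
assistant = client.beta.assistants.create(
    name="Scriptwriting Coach",
    instructions=(
        "You are a screenwriting assistant for short films. "
        "Reference the uploaded scripts for structure, pacing "
        "and dialogue style."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[script.id],
)
print(assistant.id)
```

A GPT built in the ChatGPT interface does the same thing without any code; the API route would simply let me script the storyboarding variant as well.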

Links

Dev Day Opening Keynote

Product Deep Dive

The Business of AI

Changing my Master’s thesis – Storytelling with mixed media in music videos with the underlying theme of attention “Triggers”

The reason for changing my thesis was my lack of interest in the first topic. The more I thought about it, the more I realised that it is not something I want to pursue further in life. Music, mixed media and video editing are closer to the path I want to follow, so in the absence of recent blog posts I have developed a new thesis that is more tailored to my future interests.

It is also worth mentioning that I produce the music myself with a colleague from Cologne, Germany, so the rights to the song shouldn't be an issue for later publishing.

The Vision

The core idea is to delve into the realm of mixed media and hybrid storytelling within the context of music videos. The primary focus revolves around dissecting how elements such as color, texture, and various experimental techniques can influence mood, capture attention, evoke emotions, and enhance memorability.

Attention “Triggers” – a Modern Social Media Phenomenon

In the era of fleeting attention spans, I aim to explore the phenomenon of attention triggers in videos, specifically tailored to the modern generation. The plan is to investigate triggers that occur as frequently as every 2-3 seconds and how they captivate the audience.

Framing the Narrative: “Wallpaper” Moments

An intriguing concept is to view the music video as a series of potential “wallpapers.” Each frame, when paused, should be a visual masterpiece, encapsulating the essence of the narrative and offering viewers a moment to reflect on the artistic brilliance within. The inspiration for this choice was the animated movie “Spider-Man: Into the Spider-Verse”.

Further steps

From the first feedback round I have gathered some points for the further development of my project.

  • Historical Inquiry: Investigate the historical evolution of mixed media, focusing on its emergence and key milestones.
  • Collage Techniques: Conduct a literature review on existing works and scholarly discussions about collage techniques in mixed media.
  • Michel Gondry Analysis: Analyze the works of Michel Gondry, particularly the video linked here, to extract innovative approaches and narrative strategies.
  • Best Practices Examination: Analyze best practices at the intersection of music videos and mixed media storytelling, seeking patterns and innovative approaches.
  • Psychological Triggers Exploration: Explore psychological triggers in video content, particularly via the psychology section of the library, to understand the cognitive and emotional dimensions influencing audience engagement.
  • Utilize Springer Link: Access scholarly resources on Springer Link to gather theoretical foundations, empirical studies, and insights related to mixed media and music video storytelling.
  • FH-Bibliothek Online Exploration: Leverage the online resources of the FH-Bibliothek, accessing scholarly databases and digital archives to broaden the source material for the research.

Impulse 2: Adobe MAX Presentation

Opening Keynote – GSI | 10.10.2023 (2h)

First, Shantanu Narayen, the Chair and Chief Executive Officer of Adobe, starts the show: “Our lives are becoming more and more digital. People are flooding every channel, every medium with their creativity. AI is accelerating this shift. It is making us even more creative and even more productive.”

This has been the year of AI, especially for Adobe: Adobe Firefly, Adobe Express and more. Adobe has always focused on advancing art and on developing the technologies for it, and small as well as large enterprises can use these technologies. He outlines that he thinks AI will never replace human creativity, but that it is a very inspiring time to tell your story the way you experience it.

They then show many reviews and TikToks made about the new Photoshop generative tool, how it blows people's minds and saves them time.


He then hands over to David Wadhwani, President of Adobe's Digital Media business, who starts talking about Adobe Firefly. It is a playground for experiencing new AI tools. They first launched it in March, guided by four rules:

  1. Deeply integrated into the Adobe tools
  2. Designed to be commercially safe
  3. Transparent with training data
  4. Support Content Credentials

By this time, the Adobe community had already generated 3 billion pictures. Some artists still share their concerns about whether AI will threaten their jobs. “Painting is dead,” Paul Delaroche declared when he saw his first photograph in the 1800s. So, as this example shows, new technologies don't have to replace others. As a matter of fact, while AI has also strongly influenced video production and is now implemented in Adobe Premiere Pro, demand for video producers and editors is higher than ever.

Adobe Firefly is a family of models. The first one arrived in March with the Firefly Image Model, which is used to generate pictures from a text prompt. Next came the Firefly Vector Model, which gives you the power to generate vector designs from a text prompt. And finally, the Firefly Design Model, which gives you the ability to generate design templates from text prompts that can be used within the Adobe applications.


Then Ashley Still came on stage and presented four approaches to AI.

First – exploration. Creative work is fueled by exploration: iterating and developing the right idea to bring a message to life. Iterating is time-consuming, and with Firefly it should become faster, whether working with colors, images, sketches, video or more.

Second – productivity. Mundane tasks can be done by AI so designers can focus their time on creative work.

Third – creative control. As a designer you are given precise control to create the project you had conceived in your head.

And fourth – community. The community has always been Adobe's source of inspiration; with the beta versions, users had the opportunity to give their own feedback.


Then the presentation of the new version of Photoshop started. Ashley outlined Photoshop's innovations along a timeline, with the Generative Fill tool as the innovation of the year 2023.

After only five months in existence, Generative Fill is already the most used feature in the application.

Afterwards, Anna McNaught showed the power of the features in Adobe Photoshop. She added and removed objects with the Generative Fill tool and said that the selection is as important as the text prompt. Editable gradients are also a new feature. She then went on to show how adjustment presets work and finished her talk with the quote: “Now I can spend less time pushing pixels and more time creating art.”

The Lightroom mobile app is absolutely exploding. What's new in Lightroom? Adjustable Lens Blur, HDR curves, Denoise and more.

They then went on to Adobe Illustrator. In June, Adobe launched the recolor feature; people started to ideate with recolor and then produced the final product manually. The next major Firefly feature coming to the Adobe applications is the Firefly Vector Model.

  • What font is used in this screenshot? Just use Retype! You can finally convert outlined text back to editable text.
  • When you have something to color, you can use generative recolor: type in what you are thinking and take the best option. You can then still rework the colors.
  • Text to vector: with this feature you can generate illustrations, icons, shapes and more by typing a text prompt. You can also define the colors, the style and so on, and it comes out with very clean vector lines.

They then went on to video editing. Premiere Pro gives you an automatic transcript, so you can, for example, find the keywords you are searching for more easily. With the new features you can now enhance the sound considerably!


What is very interesting is that you can now upload your content directly to Instagram or TikTok with the new Adobe Express features!

They then tried the text to template tool. Everything is layered and ready to work with. It opens right away in Adobe Express. 

You can now translate your design into over 40 different languages! You just choose the languages you want, and it automatically generates the design in each of them.

Experiments with Adobe Firefly

As I heard in the Adobe MAX presentation, Adobe Firefly gives us new opportunities to design faster and more efficiently. I also heard that it will remain completely free to use until January 2024; after that, you'll have a limited amount of credits. So I started to experiment with the new features. As you can see, there are different features you can try, which is why the Adobe MAX presenters called Adobe Firefly a sort of “playground”.

For now, you are able to generate pictures by typing in a prompt, experiment with generative fill (which we are already familiar with from the first Photoshop beta version this year), apply effects to typography and use generative recolor. 3D to picture and text to vector are still in development.

I first started with the typography effects feature and played around with different effects and font choices. You can decide the background color or even export the type with transparency, which I found really cool!

What I learned is that it all depends on the prompt (well, the same goes for all AI tools). But if you give the tool the right prompts and choose the right parameters in the right-hand column, you can really achieve some nice images. I found the tool especially interesting because just last year I experimented with Cinema 4D and tried to learn how to put textures on letters. So if this tool improves a bit more, or I learn to write better prompts, I won't have to learn more Cinema 4D to make these cool textured letters. I can just type in my prompt and let the AI do the work, which sounds amazing!

After that I tried the text to image tool. This is the tool I really expected to come next from Adobe when I was trying out Midjourney in spring this year. I was shocked at how easy it is to use, because with the earlier versions of Midjourney you had to know some sort of vocabulary to get the picture you were imagining generated. I know Midjourney has since been updated and is now more user-friendly, but I think this tool from Adobe is even easier to use. You just click on the style, the color or the texture you are imagining, so you don't need that much knowledge to get to a satisfying result.

I then also tried the recolor feature. I did not have any SVG at hand, so I used one of their example illustrations. The tool is easy to work with. I wished you could choose the colors on your own, but for now you are stuck with the few palettes they have already arranged for you; I am sure this will be fixed in the next updates. For me this was the least breathtaking tool, but I am certain it can save a lot of time. Especially in the concept phase, when trying different approaches, it can be used to try out first ideas quickly and efficiently.

Here we go again 3.0 

Over the summer I struggled with my topic. I questioned whether I really wanted to do this, whether it was the right topic, whether it was even relevant, and so on. So I needed to sit down and try to collect and sort everything in my head so that I could get a clear picture of what I want to do, what I want to examine and how I am going to do it.

I therefore created a Miro board where I put every brain dump and tried to build a mind map with all the important information and steps. The Miro board helped me quite a lot to visualize and frame my potential master's thesis. Even though it seems like a huge project to me, I am motivated to take on this challenge and realize it. The board itself is not really pretty and is still a work in progress, but here we are.

This is a screenshot of my Miro board:

What I also did in the last weeks was contact Petra Duhm from the WKO. In my previous blog entries I mentioned the application Berufe-VR; she leads the project, has already given me some information, and we will probably have a chat in December. Another interesting project I found online is the Job Explorer from the company XRCONSOLE in Graz. I am in contact with their CEO, and we are also going to have a quick interview about their project next year: https://xrconsole.net/xrc-job-explorer/

In one of my impulses I went to the BeSt Berufsmesse and checked out how companies present themselves to teenagers and what media they use to deliver information and catch attention. A few of them used VR applications as well, and the kids seemed to enjoy the experience. Further information can be found in my impulse blog post 😉

Impulse #3 – Audio Workshop

I got the chance to take part in a music workshop where we built three types of microphones:

  1. Binaural Microphone
  2. Piezo Microphone
  3. Electromagnetic Field Microphone

If you want to build one of the microphones on your own, you will need the following parts.

Binaural Microphone:

  • 2x electret capsule mod. CME-12 (or similar omnidirectional)
  • 1x solderable 3.5 mm male mini stereo jack (cable-mount)
  • 1m coaxial audio stereo cable

Piezo Microphone:

  • piezoelectric ceramic disc buzzer model 27EE41
  • 1x solderable 3.5 mm male mini mono jack (cable-mount)
  • 1m coaxial audio stereo cable (split the stereo cable into its two channels; we only need one)

Electromagnetic Field Microphone:

  • 1x magnetic inductor (choose the one with the highest power)
  • 1x solderable 3.5 mm male mini mono jack (cable-mount)
  • 1m coaxial audio stereo cable (split the stereo cable into its two channels; we only need one)

Additional Equipment:

  • soldering iron
  • solder wire
  • electrical tape

While the piezo and electromagnetic microphones are connected via a mono audio cable and jack, the binaural microphone needs a stereo cable and jack. The following soldering example refers to the piezo microphone but is the same for all three microphones.

To begin, remove a small part of the outer insulation at each end of the cable (see image below). Now you can see a red insulated wire with a loose shield wire around it. The loose shield wire, twisted together, acts as the negative pole, while the red insulated wire acts as the positive pole.

After this step you can start to solder one end of the cable onto the piezo disc. The red wire should be soldered onto the silver area and the other one onto the golden area. It is important that each wire is only connected to one of the areas and doesn't overlap with the other.

Note: For additional protection of the solder points you can cover the entire surface with hot glue, as the contact surface for the microphone is the back side.

The final step is to solder the other end of the audio cable onto the audio jack. The red wire (positive pole) needs to be soldered onto the inner contact of the jack, and the loose shield wire (negative pole) onto the outer contact (see image below).

Now put the cover back on the audio jack and test your new microphone.
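If you would rather sanity-check the new microphone from a script than from a DAW, a short recording test works too. Below is a minimal sketch, assuming Python with the sounddevice and soundfile packages installed and the microphone plugged in as the system's default input; the file name and duration are arbitrary choices of mine:

```python
import sounddevice as sd   # pip install sounddevice
import soundfile as sf     # pip install soundfile

SAMPLE_RATE = 44100  # CD-quality sample rate in Hz
DURATION = 5         # seconds to record
CHANNELS = 1         # use 2 for the binaural (stereo) microphone

print("Recording...")
recording = sd.rec(int(DURATION * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE,
                   channels=CHANNELS)
sd.wait()  # block until the recording is finished

sf.write("mic_test.wav", recording, SAMPLE_RATE)
print("Saved mic_test.wav")
```

If the WAV file stays silent, recheck the solder joints for bridges between the positive and negative contacts.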

Here you can listen to an example recording I made by scratching on a wooden plank.

Conversation with Gabi Lechner – Decisions in Research: From Doubt to Clarity

When embarking on a research journey, there are often moments of uncertainty. I recently found myself at a crossroads in my academic endeavour to explore augmented reality (AR) in the field of art. Unsure whether to continue down this path, I sought advice and had a stimulating conversation with Gabi Lechner, a mentor whose insights proved invaluable.

In our conversation, I expressed my reservations about engaging further with AR research, pointing to my wavering interest in the topic. Drawing on her viewpoint and experience, Gabi offered me a fresh perspective. She gave convincing reasons why my other potential research directions, illustration and sustainable design, might not be the ideal fit. The field of illustration, she stressed, may have lost its novelty due to an earlier hype, which could limit innovation. Sustainable design, while fascinating, also lacked the spark for a compelling research focus.

Over the course of our conversation, a new clarity emerged. We brainstormed and found a fascinating middle ground: AR in children's books. This concept represented an exciting fusion of my passion for illustration with the innovative potential of augmented reality. Gabi's encouragement stemmed from the realisation that illustration would remain part of the narrative without overshadowing the primary focus on AR technology in children's literature.

Introduction to Social Media Marketing

To get a rough overview of the topic of social media marketing, I borrowed and read the book “Social Media Marketing. Praxishandbuch für Facebook, Instagram, TikTok & Co.” by Corina Pahrmann and Katja Kupka. In this blog entry I want to summarise the most important contents of the first chapter.


While the internet is an information medium for networking, social media stands for rapid information exchange. The most important social media company is Meta, which, with its platforms Facebook, Instagram and WhatsApp, is the market leader not only in the DACH region but worldwide.

Social media serves many different purposes: people share, like and comment, inform themselves and even shop. Small businesses can use the enormous reach of social media at very little cost.

How can companies use social media?

The platforms make it possible to publish your own content without much technical knowledge and to maintain contact with customers and prospects. In addition, a company's visibility can be increased and its image improved. In contrast to classic PR and marketing, social media allows emotional, personal and, above all, authentic communication without any loss of objectivity or professionalism.

Social media also contributes to a change in corporate culture. Through social media, companies learn to listen to their customers, and they try to recruit young employees via new platforms.

Social Media Platforms

A study from 2021 found that the average German internet user is registered on six social networks and spends 1.5 hours a day on them. WhatsApp is also the most used platform among users aged 16 to 64, followed by YouTube, Facebook, Instagram, Facebook Messenger and Pinterest.

Video platforms are particularly popular: 98 percent of 14- to 29-year-olds watch videos on YouTube or similar platforms at least once a week. Even among the over-70s, it is still almost a third.

Social Media Marketing

Through successful social media marketing, companies can increase their visibility, improve their customer service, keep in touch with existing customers, reach new customers and learn from the exchange with them.

The first step towards successful social media marketing is to analyse the company's goals and target group. This later makes it easier to choose the right platforms and the content to be posted on them.

In contrast to other forms of advertising, social media marketing involves low costs and can therefore be handled in-house. To do so, companies must invest time in strategy development, target group analysis, content planning and the management of those responsible for social media.

Impulse #6: World Usability Day – Accessibility & Inclusion

In the digital age, creating products that cater to a diverse range of users has become paramount. World Usability Day serves as an ideal platform for delving into the nuances of accessibility and inclusive design. Recently, UX Graz organized a hybrid event that featured a talk by Steffi Susser, a freelance UX consultant, who shared invaluable insights on this essential topic. Her presentation emphasized the significance of enabling users with various abilities and disabilities to navigate and interact with products, and she was just one of the experts contributing to this enlightening event. In this blog post, we will explore the key takeaways from Steffi Susser’s talk and the broader discussions that took place during this celebration of World Usability Day.

A Glimpse into World Usability Day
The event, organized by UX Graz, celebrated World Usability Day, providing a platform for professionals and enthusiasts to come together and discuss the critical facets of design that revolve around usability, accessibility, and inclusion. The hybrid format enabled broader participation, ensuring a wide-reaching and inclusive conversation.

The Power of Inclusive Design: Insights from Steffi Susser’s Talk and More

Steffi Susser’s Talk
Steffi Susser’s talk was a highlight of the event. She passionately articulated the importance of inclusive design, emphasizing that it goes beyond merely complying with guidelines. Inclusion, she asserted, is about fostering an environment where all individuals, regardless of their unique differences, feel welcomed, respected, supported, and valued. Her insights on the topic shed light on how designers and creators can go the extra mile to ensure their products resonate with users on a deeper level.

Accessibility vs. Inclusive Design
Steffi Susser’s talk also drew a clear distinction between accessibility and inclusive design. While accessibility focuses on making a design usable by everyone, inclusive design takes it a step further. Inclusive design aims not only to be usable but to be so appealing that everyone desires to use it. It’s a journey that transcends the realm of objective measurements and delves into the subjective and emotional aspects of design, making it a complex and fascinating field.

The Complexity of Inclusive Design
Steffi’s presentation highlighted the intertwined nature of inclusive design. She pointed to real-world examples, such as web forms, which are commonly used online but can present exclusionary challenges. These forms can deter users by requesting unnecessary data or enforcing mandatory fields. Inclusive design, in such cases, means providing a spectrum of choices and considering the multifaceted dimensions of diversity, including culture, language, ethnicity, sexual orientation, family status, religion, and spiritual beliefs.

Diversity in Design
Diversity, Steffi emphasized, encompasses various facets of being human, and designers play a crucial role in promoting inclusivity. Factors such as contrast ratios, color blindness testing, resizable fonts, and support for screen readers were discussed as ways to ensure a design is inclusive. Avoiding autoplay, scrutinizing the necessity of animations, allowing sufficient time for user interactions, employing gender-fair language, collecting only essential data, and avoiding stereotypes all contribute to the overall inclusiveness of a design.

Inclusive Design: A Piece of a Larger Puzzle
Steffi Susser views accessibility as “just a piece of the broader puzzle” that is inclusive design. While fundamental, accessibility does not stand alone; it is part of a holistic approach that addresses the complex and multifaceted needs of users.

Steffi Susser’s talk on inclusive design holds particular relevance for me as an interaction designer, and more importantly for my research on a master’s thesis focused on eHealthcare app solutions. Her insights shed light on the importance of creating designs that resonate with diverse user groups, a vital consideration in the healthcare sector. By delving into the complexities of inclusive design and understanding the emotional aspects that drive user engagement, we can equip ourselves with valuable knowledge to enhance the usability and appeal of our eHealthcare app solution. This understanding will not only contribute to the success of my master’s thesis but also empower me to design a solution that is genuinely tailored to the needs and preferences of a wide range of healthcare app users.

A Glimpse into Research by Lukas Wohofsky
The event also featured research by Lukas Wohofsky, co-lead of the research unit ENABLE for health and inclusion care at FH Carinthia. His work, in collaboration with Daniela Kraine and Sascha Fink, showcased the application of human-centered design in research on their initiative for “inclusion through cooperation: potentials of participatory research in the field of autism”. They underscored the ethical principles and best practices for involving users, emphasizing the importance of valuing data, employing gender-sensitive research design, and building trust with research participants.

Panel Discussion on Accessibility and Inclusion
The event concluded with an engaging panel discussion on accessibility and inclusion. This panel brought together experts from various backgrounds, including Steffi Susser, Lukas Wohofsky, Thomas Grill, and Christiane Moser. The discussion, moderated by Johannes Lehner, provided a rich exchange of ideas and insights, offering a comprehensive perspective on the ever-evolving field of accessibility and inclusive design.

To gain further insights from this informative event, you can watch the recorded panel discussion on YouTube: https://www.youtube.com/watch?v=zw5MG2JP0W8

Blog post 2 – Adobe Illustrator’s AI features

As a light intro to my deep dive into many different new AI tools, I will start with Adobe Illustrator. I want to get Illustrator ‘out of the way’ quickly, since it is the program I will most likely get the least use out of for my master’s thesis, but I for sure wouldn’t want to ignore it.

To test it out, I will be putting it to work for a client who has requested a logo for their tech startup “SoluCore”, focussed on a sustainable alternative to traditional catalysts (I’m not entirely sure what they actually do, but I don’t need to understand anyway). The client has provided a mood board and a sample font they like the style of:

Text to Vector Graphic

Illustrator’s new AI centrepiece and counterpart to Photoshop’s Generative Fill gives the user a prompt textbox, the option to match the style of a user-specified artboard, artwork or image, as well as four generation types to choose from: Subject, Icon, Scene and Pattern.

Starting with the ‘Subject‘ mode, I prompted the AI to generate a ‘Smiling Sun, logo of sustainable and green tech startup’:

The results are fine, I suppose; they remind me of a similar Photoshop generation where I also wanted to create a smiling sun, which makes sense considering that both are trained on Adobe Stock images.

Here is the same prompt using the ‘Icon‘ generation type:

At first I was surprised at just how similar the results were to the previous ones, but I soon realised that the ‘Match active artboard style‘ option is switched on by default, which I find counterintuitive. I must say, though, that the AI did a fantastic job of matching the previous style. Having turned that off, the AI gave me the following results:

The decrease in detail and difference in style is immediately noticeable.

Though not remarkably applicable to my use case, here are the results for the ‘Scene‘ option:

What becomes apparent here is that the user can specify the region the AI should generate into using vector shapes or selections. Having specified no region on an empty artboard, the AI defaults to a roughly 512 px × 512 px square. Selecting the entire artboard and modifying the prompt to describe a scene, “Smiling sun setting over a field of sunflowers”, gives these results:

Terrifying, I agree. Here, some inaccuracies of the AI begin to show. Not only did it leave space on either side of my selection, but what should be a simply designed sun also shows imperfections. Isolating the sun and quickly rebuilding it by hand highlights this:

The artwork the AI generates usually has few anchor points, making it efficient, but these inaccuracies mean that cleanups will be needed frequently. Additionally, the AI has yet to generate strokes or smart shapes, instead relying entirely on plain vector shapes.

Reverting to the “Smiling Sun, logo of sustainable and green tech startup” prompt, I used the pattern mode for the next generation:

Worryingly, these results look the least promising by far, showing inaccuracies and generally unappealing output. Also, the AI does not generate artwork directly onto the artboard; instead, it adds the created patterns to the user’s swatches when a generated result is clicked.

Another counterintuitive behaviour, yet a useful one, I think. I personally have not been using the ‘Swatches’ feature, but I could definitely see it being valuable for people who rely on Illustrator daily. With a bit of work, or perhaps different prompts, this feature could have great potential.

Next, I wanted to use one of the client’s provided sample logos and the Style picker to tell the AI to generate a sun using its style.

The color is immediately recognised and I can definitely see the shape language carry through as well.

Generative Recolor

An older feature infused with AI that I still hadn’t gotten around to trying, Generative Recolor allows the user to provide the AI with a prompt, upon which Illustrator will, well, recolor the artwork. The user can also provide specific colors that the generations should definitely include using Illustrator’s swatches.

Retype

A feature I am particularly excited about, Retype, allows the user to analyse vector or rasterised graphics based on their text contents. Firefly will then attempt to match the font to an installed font as closely as possible and, in optimal scenarios, even allows the user to edit the text directly. For my example, I provided the AI with the font sample I received from the client.

The AI took surprisingly long to analyse the image, though only when compared to the rapid speeds of the other AI tools; we are talking about 30 seconds at most. The AI was not able to find the exact font, but it found 6-7 fonts that match the aesthetic of the original very well. In my opinion, it is not a problem at all that the AI could not find the exact font used in the image, since I have no way of knowing about the licensing of the original font.

After hitting ‘Apply’, the AI used the font I selected and made the text in the provided image editable. Strangely, the AI only activated the exact Adobe font it detected in the image, not the entire family, leaving it up to me to search for it manually and download the rest for more variety. This behaviour, too, should be changed in my opinion.

Getting a similar font to what the client requested is a fantastic feature, yet if I could wish for something, it would be a ‘Text to font‘ generative AI in which the user could input a prompt describing the style of font they want and have Illustrator provide suggestions based on it. I’m sure Adobe has the resources to train an AI on many different font types so that it understands styles and aesthetics beyond the sorting features that already exist inside Adobe Illustrator.

It is also counterintuitive how to get back to the Retype panel; it is very easy to lose, and upon reopening the project file, the alternative font suggestions are no longer shown in the Retype panel. The user can then no longer select text and have the Retype panel suggest similar fonts. A workaround, silly as it may sound, is to convert the text to vector shapes or a rasterised image and then run the Retype tool again to get similar fonts.

Getting to work

Using the aforementioned tools, I got to work on the client request for the logo of ‘SoluCore’ featuring either a smiling sun or a flexing sun. Here are the results:

Working with the AI for the first time was challenging but engaging and, dare I say, fun. It gave me access to many different ideas quickly. However, the inaccuracy of the AI forced me to recreate some of the results, especially the ones that use basic shapes and mimic accurate geometric forms. As for the more complex results like the flexing sun, I had to generate many iterations until I was happy with one. The arms looked great, but the smiling sun itself was rough; using the arms as a stylistic input for the AI led to better results. Once I was happy, there was still a lot of cleaning up to do. Generative Recolor also struggled with the more complex generations and still required a lot of human input.

Conclusion

This ultimately meant I did not save a lot of time, if any. The AI was helpful, of course, but the overall time spent on the designs was similar to regular logo design. Also, the font choices were still almost completely dependent on me, so this could be a place where a potential AI tool could help out a lot. In the end, the client asked me to remove the smile from the smiling suns and went with this result:

Luckily, this result was relatively clean, and I did not have to do much cleaning up in order to deliver the logo to the client. Had they chosen another option, I would have had to do a lot of tedious cleanup work, digging through the strangely grouped generated vector shapes. If I’m being honest, I would probably even have considered recreating the more complex designs from scratch had the client chosen one of those. A cleanup phase is needed in traditional logo work as well; it’s just that the amount of cleanup required with AI is hard to predict, since the quality and cleanliness of the results differ so much.

All in all, the experience was definitely more positive than negative. My main gripe concerning the cleanup phase could also be a result of my infrequent use of Illustrator and the rare occasions where I need to design logos, combined with being spoiled by other AI tools that need little work post-generation. Whenever I need to use Illustrator, I will definitely include AI in my workflow; even if I end up not using any of the results, it is so quick that it will likely never hurt to try.