Artificial Type: Usage of AI in the typographic process

During my previous research, I delved into the topic of web aesthetics and ended my last semester by conducting a small experiment to determine the effect of a small sample of aesthetic variations on users. In the meantime, however, my focus and interest have shifted, and I decided to take up a new topic to explore, namely future developments in the typographic world.

At the moment, new technologies as well as what might be regarded as “new values” in type design are causing ripples within the typographic universe. Decolonization efforts as well as the rise of multilingual typography are leading to a movement away from the rigid square of the glyph, which stems from the invention of the printing press (Kulkarni & Ben Ayed, 2022). In addition, type is becoming more flexible (keyword: variable type), fluid (keyword: kinetic type) or even three-dimensional. However, perhaps the most interesting technological development, and the topic I want to explore further in order to find a possible Master’s topic, is the continuous rise of artificial intelligence.

I am very intrigued to find out whether AI can be used at any point within the typographic process, and I began my research by asking myself the following key questions:

  • Can AI be used to design typefaces? If yes, how?
  • Can AI be used in other areas of the editorial production process, e.g. for typesetting (kerning, sizing, …)? If yes, how?
  • Can AI replace typographers?
  • Which tools can be used for this?

As a first step, I began to look at the field with an open perspective and started putting together what research has already been done. During a first session, I stumbled upon a couple of examples where designers have already tried to use machine learning or automation processes to aid at various points during the design process:

  • Thesis by Daniel Wenzel (HTWG Konstanz), “Automated Type Design”: Wenzel used various automated processes to design typefaces, creating over 100 fonts along the way. He used the following five automation approaches: “fonts by variation (comparable to Neville Brody’s FF Blur), fonts through limited tools (intentionally using the limitations of generators like FontARk or Prototype), fonts by “Art Direction” (using mathematical formulas to describe fonts rather than drawing curves by hand), fonts with the help of assistive processes (generating new weights, scripts and optical corrections using assistive tools like brush simulations), and fonts with the help of autonomous processes (using machine learning to generate new “AI fonts”)” (Ibrahim, 2019).
  • Andrea A. Trabucco Campos and Martin Azambuja have founded the publishing house “Vernacular”. Their first publication, “Artificial Typography”, showcases 52 typographic forms, created using AI, that are portrayed in the style of various iconic artists (Thaxter, 2022).
  • Thesis “Machine Learning of Fonts” by Antanas Kascenas (University of Edinburgh): Kascenas explores whether the kerning process can be automated using machine learning (Kascenas, 2017).

It appears to me that, although first trials have been run and a small number of designers have already used AI to create typefaces and set type, the area is still rather new. This is especially apparent when looking at tools and technologies: while AI seems fairly advanced when it comes to generating images based on text prompts, no fully developed tool for designing type exists yet.

In the upcoming weeks, I want to explore the topic further and see if it is going to provide me with a basis for a Master’s topic. Possibly, I will have to narrow the topic down or widen it, in case I do not find enough material. In addition, I also want to look into the option of using AI myself and apply it to the typographic process. However, this is something I have to research further…

References

  • Ibrahim, A. (2019, October 14). Daniel Wenzel faces the question of automation in creativity head-on in Automatic Type Design. It’s Nice That. Retrieved November 7, 2023, from https://www.itsnicethat.com/articles/daniel-wenzel-automated-type-design-digital-graphic-design-141019
  • Kascenas, A. (2017). Machine Learning of Fonts [MInf Project (Part 1) Report]. University of Edinburgh.
  • Kulkarni, A., & Ben Ayed, N. (2022, June 16). Decolonizing Typography. Futuress. https://futuress.org/learning/decolonizing-typography/
  • Thaxter, P. (2022, September 27). Vernacular’s Artificial Typography uses AI to boldly blend together type and the history of art. The Brand Identity. Retrieved November 7, 2023, from https://the-brandidentity.com/interview/vernaculars-ai-typography-is-an-a-to-z-in-typography-and-the-history-of-art-imagined-by-ai

Impulse #2 – fuse* Workshop

A workshop with fuse* design studio focused on generative art installations.

Through a lot of research in the field of machine learning and artificial images, I found a design studio from Modena (Italy) named fuse*, which hosts a Discord server for exchange. Not only do they encourage members to ask questions about their design process, but they also announce new projects there.

One week after I joined, they announced a workshop about one of their art installations called “Artificial Botany”. Since I already knew from my previous research which algorithms and tools they might have used, I knew this would be a good opportunity to get insights into the actual design process and, more importantly, the scale of complexity when applied in a museum-like environment.

To summarize, I gained insights into the complexity and sub-processes between data collection and the final video. From my first Impulse I already knew what the technical workflow looks like, but I clearly underestimated the process of tweaking and manipulating data sets to produce the desired video output instead of a random generation. As the creation of a single video already requires a lot of processing power, tweaking and manipulating requires many more cycles and regenerations. After this workshop, I see this point in a different way – I am more confused, owing to a complexity I simply had not seen before.

With this knowledge, I ask myself whether this complex, energy-hungry and time-consuming process suits my end goal. Are there other, simpler approaches to visualize cracking ice in an interactive environment? Is this part of my installation going to be the focus, to justify the time it takes to produce the needed video content with a StyleGAN algorithm?

Either way, the videos created with StyleGAN are truly impressive, and taking real iceberg pictures and bringing them to life through machine learning would greatly fit the dramaturgy of my installation.

After this workshop, I have strong concerns about the complexity of my concept. I think I need to get the opinion of an expert in the field of computer vision and maybe come up with simpler alternatives. So far, the following alternatives would greatly reduce the complexity of my project. The list is ordered from more abstract solutions to authentic representations.

  • Draw the cracks by hand and capture them frame by frame to make a stop-motion clip I could layer on top of a satellite photo of an ice texture.

  • Tweak a generative algorithm (for example Random Walk) to recreate a crack structure.

This alternative would animate a generative drawing algorithm that gradually expands. The algorithm should draw a random line that has a structure similar to a crack and grows over time. This approach is similar to my first proposal, but drawn by an algorithm.
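To make this idea concrete, here is a minimal Python sketch of such a random-walk crack generator. The function name, parameters and jitter values are my own hypothetical choices for illustration, not taken from any existing tool:

```python
import random

def random_walk_crack(steps=200, branch_chance=0.02, seed=42):
    """Generate crack-like polylines as lists of (x, y) points.

    Each walk advances mostly along one main direction with random
    sideways jitter; occasionally a shorter perpendicular branch is
    spawned, mimicking how cracks fork in ice.
    """
    rng = random.Random(seed)  # fixed seed for reproducible cracks
    cracks = []  # finished polylines
    # each pending walk: (start x, start y, direction dx, dy, steps left)
    pending = [(0.0, 0.0, 1.0, 0.0, steps)]

    while pending:
        x, y, dx, dy, n = pending.pop()
        line = [(x, y)]
        for _ in range(n):
            # advance along the main direction with sideways jitter
            x += dx + rng.uniform(-0.4, 0.4)
            y += dy + rng.uniform(-0.4, 0.4)
            line.append((x, y))
            # occasionally fork a shorter side crack at a right angle
            if rng.random() < branch_chance:
                pending.append((x, y, dy, -dx, n // 2))
        cracks.append(line)
    return cracks

cracks = random_walk_crack()
```

The resulting polylines could then be drawn point by point per frame (for example in Processing, p5.js or Blender's grease pencil), so the crack appears to grow over time, which is exactly the gradual expansion described above.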

  • Create a Blender animation with a premade texture.

For the crack structure, I have found the following tutorial showing how to produce a procedural cracked-earth effect. In a second step, I would need to replace the earth texture with an ice texture and modify the crack structure to show a water texture instead of a dark hole.

Tutorial: https://www.youtube.com/watch?v=oYEaJxw4pSo&list=PLOY1I6t4tn5CUFdRrko352uxnNTGYavV-&index=3

  • Create the complete ice texture with the help of Stable Diffusion.

A browser interface can be downloaded and run locally on the computer: https://github.com/AUTOMATIC1111/stable-diffusion-webui

  • Cut a plane displaying a satellite image of ice with a 3D object.

In this approach, I would create a 3D object and modify its surface with a texture modifier to produce a terrain structure. In the next step, I would cut the plane, textured with the satellite image, with the 3D object. By moving the 3D object up and down, I could animate a melting effect of the ice.

  • Import GIS data into Blender and animate it over time.

For this alternative, I could use a Blender add-on that can import Google Maps, Google Earth and GIS data. With this approach, I would be able to rebuild the structure of a real iceberg and its change over time.

Blender add-on: https://github.com/domlysz/BlenderGIS

Tutorial: https://www.youtube.com/watch?v=Mj7Z1P2hUWk

This add-on is extremely powerful, as it imports not only the 3D structure from NASA but also the texture. Finally, I could tweak the texture a little with Blender's shader editor and produce multiple renderings for different years.

Although Google Earth offers the option to view data from previous years, I am not sure if this will work with the Blender add-on.

Link to the studio: https://www.fuseworks.it/

Impulse #3 – Festival La Gacilly-Baden Photo 2023, Baden bei Wien

The exhibition addresses the theme of the ORIENT and puts photographers from Iran, Afghanistan and Pakistan in the spotlight. The artists include Abbas, Gohar Dashti, Hamed Noori, Ebrahim Noroozi, Maryam Firuzi, Hashem Shakeri, Paul Almasy, Véronique de Viguerie, Fatimah Hossaini, Shah Marai and Wakil Kohsar, as well as Sarah Caron.

Since its founding, the festival has never strayed from its mission: to show the beauty of nature as well as the need to protect it. The festival's photographers want to be resolute witnesses and to take an active part in the effort to preserve our most precious common good – planet Earth. Representatives of this mission include Mélanie Wenger, Bernard Descamps, Gabriele Cecconi, Stephan Gladieu, Money Sharma, Reporters Without Borders, Brigitte Kössner-Skoff and Gerhard Skoff, Antonin Borgeaud, Jérôme Blin, Alisa Martynova, Maxime Taillez and Chloé Azzopardi.

Photography undoubtedly remains the most striking tool for changing public opinion and capturing moments of humanity. The Austrian photographers Rudolf Koppitz and Horst Stasny also stand in this tradition. Gregor Schörg will present the second part of his work on the Dürrenstein-Lassingtal wilderness area at the festival. The festival will be complemented by an exhibition of photographs by professional Austrian photographers and the presentation of the winning photos of “Our World is Beautiful” by CEWE, the world's largest photo competition, with almost 700,000 images from 170 countries. There will also be a look back at the year 2022 in pictures by artist in residence Pascal Maitre. A photographic highlight is the commissioned work by Cathrine Stukhard, who visited the World Heritage Site of Vichy and placed it in the context of UNESCO's eleven “Great Spa Towns of Europe”, which also include Baden bei Wien.

Under the guiding theme “Culture of Solidarity”, the collaboration with the festival partners Garten Tulln, Celje in Slovenia and the Month of Photography Bratislava will continue in 2023.

A Stronger Focus on Board Game Design for Card Games

This semester I will continue to focus on board game design. Having already dealt with this topic last semester, I have come to the conclusion that developing, designing and producing a complete board game would probably be too elaborate and, above all, too expensive. The production in particular would not pay off for a one-off piece like the one I need and would only be unnecessarily costly.

Of course, I could build a prototype myself as part of my Master's thesis, but that would not meet my visual standards. That is why I will mainly focus on card games this semester – not only because they are easier to produce at high quality, but also because designing a complex game with nothing but cards is an entirely different kind of challenge.

UX Meetups

For the first time in my career, I got the courage to go to a networking event. To be honest, I had not gone to one before because I often felt like I did not have enough experience in the field, or I just had no idea what I could talk about, but this event changed my perspective. Yes, there is still some professional feeling to it, and most of the people have years of experience, but most of the time they are willing to share how they got their opportunities and how they built their careers.

I had the opportunity to attend the UX Graz meetup, right in the midst of the World Usability Congress, where I had the chance to meet amazing people in the field of design whom I find interesting.

There I met the organizers, and they told me about the community, how it was built and what their future plans are. I also talked to UX professionals based in Graz and Munich; they work in small product development companies, in big tooling and industrial companies, or on a freelance basis.

I would totally recommend attending a networking event, even if it is just to get out of your comfort zone; you will make amazing connections and engage with professionals who are always happy to guide you.

The need for light

Light is something that connects us. We all witness light regardless of where we are, but our experiences with it differ depending on the time, the season or the location. It is worth thinking about how something like light, which we take for granted, can change our moods, perspectives and thoughts.

From growing our food to sleeping or working, light plays such an important role in how we as beings develop in a society. Yet every community in the world has its own habits and customs that might be related to the amount and type of light in its context, whether that light is natural or artificial.

You do not have to be the same in order to share a space. Light changes the experience of every spectator; by playing with space and structure, you can make the spectator see a certain image, even if it is not real in the physical world but real for our minds to see.

In this new chapter of research, I want to find out more about light and color and their implications for human behavior, and how we can create a new experience in a space with just a basic necessity: the need for light.

References:  

Küller, R., Ballal, S., Laike, T., Mikellides, B., & Tonello, G. (2006). The impact of light and colour on psychological mood: A cross-cultural study of indoor work environments. Ergonomics, 49(14), 1496–1507. DOI: 10.1080/00140130600858142

Zeldes, J. (Director). (2019). Abstract: The Art of Design (Olafur Eliasson episode) [Streaming platform]. Netflix.

2nd Conversation with Gabriele Lechner

As part of Design and Research, we had a conversation with Gabriele Lechner, who discussed our choice of topic for the future Master's thesis with us. I talked to her about printing techniques and the integration of analog design methods into digital workflows.

Since she also shared her own ideas and doubts about the topic with me, I realized that I would rather keep looking for a different topic. Printing techniques are already a very thoroughly researched subject that leaves little room for independent research. Their integration with digital processes can be interesting but comes with a number of difficulties. A more precise definition of the topic would be important here in order to narrow down the research.

Even after some brainstorming, we have not yet come across a suitable topic.

Blog Post 1: New Topic: Freelancing as a Communication Designer

This semester I want to dedicate myself to a research topic that already ties into my Master's thesis topic and genuinely interests me. My Master's thesis will be a kind of guide for communication designers who decide to go freelance permanently. I will work through a wide range of positive and negative aspects, compare the differences, and research what it takes to become successful. I also want to examine and compare pricing policies in agencies versus freelancing. Furthermore, I want to focus on how to reach small business owners, since usually only large companies can afford really good, consistent branding; I have often been approached by small business owners with little budget who only needed a flyer, without any corporate design. I will also examine the value graphic design has for clients, and how often it is underestimated.

In addition, as my practical piece I want to build my own visual identity, including a website and social media, including content creation, etc.

For Design & Research, alongside my blog posts – in which I will keep recording my thoughts on the topic and continue researching to lay the first foundations for my Master's thesis – I will, as events, listen to many more podcasts, read articles and magazines, watch films, and research other freelance graphic designers.

New Ideas from a Conversation with Gabriele Lechner

Recently, we had feedback sessions about our Master's thesis topics with Gabriele Lechner.

In my conversation with Ms. Lechner, I received the feedback that storytelling is a very suitable topic for a Master's thesis. The only question was which area of design I would apply storytelling to. Ms. Lechner gave me tips on possible narrower subject areas, such as storytelling in branding or in magazines, for example specifically business or women's magazines. She also found my spontaneous idea of a guide or guidebook for storytelling in graphic design exciting.

I think I like this idea best and can easily imagine planning and designing a guidebook. However, I also think that I may still need to narrow the topic down to a smaller area of design, such as communication design, editorial design or poster design.

Blog Post 1 – AI Filmmaking #1

Intro

Artificial intelligence has been developing at a rapid pace over the last few years – so much so that I needed to reevaluate the direction my Master's thesis was going. Initially, I had planned on using simple text-to-image models to create style frames, mood boards and storyboards, and possibly even AI-generated image textures, to help create a short film project in a genuinely new way and breathe fresh air into the motion graphics and media design industry.

However, not only multibillion-dollar companies but also smaller teams and creatives around the world have beaten me to it in spectacular ways, with Adobe having implemented many AI-assisted tools directly into their software and companies like Curious Refuge having established fully fledged AI workflows for filmmaking.

What this is

For the aforementioned reasons, I have abandoned the idea of creating a genuinely new approach to AI filmmaking. I will therefore do my best to keep researching the technological state of AI going forward, and aim to create a short film project using cutting-edge AI tools.

This blog post is supposed to be a repository for the most advanced tools available at the moment. I want to keep updating this list, though I am unsure whether I should come back to this post or duplicate it; time will tell.

In any case, whenever I decide to start work on the practical part of my Master’s thesis, I will use whatever tools will be available at that time to create a short film.

List of tools

Text To Image

  • DALL-E 3
  • Firefly 2
  • Midjourney
  • Adobe Photoshop & Illustrator

Curious Refuge seems to recommend Midjourney for the most cinematic results. I will be following the development of ChatGPT, which can work directly with DALL-E 3, as well as Midjourney, to see what fits best.

Adobe Firefly also seems to be producing images of fantastic quality and even offers camera settings in its prompts – information that is crucial to the creative decisions behind shots. Moreover, Firefly is, in my opinion, the most morally sound option, since the AI was trained using only images Adobe themselves own the rights to; this is an important point for me to consider, since I am thinking about putting an emphasis on moral soundness in my paper.

Adobe's Photoshop and Illustrator tools are remarkable as well. I have already planned dedicated blog posts testing out their new features and will definitely implement them into my daily workflow as a freelance motion designer, but I am unsure how they could fit into my current plan of making a short film for my Master's thesis.

Scripting & Storyboarding

  • ChatGPT-4 directly integrated with DALL-E 3

At the moment, ChatGPT seems to be by far the most promising text-based AI. With the brand-new ChatGPT-4 working directly with DALL-E 3, this combination is likely to be the most powerful when it comes to the conceptualisation phase. This is also a tool that I would confidently use in its current state.

Prompt to Video

  • Pika Labs with Midjourney

Both work through Discord servers; I am not sure how well this can work as a specialised workflow, and Midjourney has since published a web application. On the other hand, sharing Discord makes the combination of Pika Labs and Midjourney quite efficient, as users do not need to switch applications as much. Results are still rough – Pika Labs is, after all, still in its early development stages – and a lot of post-processing and careful prompting needs to be done to achieve usable results.

3D Models (Image to 3D & Text to Image)

  • NVIDIA MasterpieceX
  • Wonder3D
  • DreamCraft3D

As far as 3D asset creation is concerned, a lot has happened since my last blog posts about the topic. There is a multitude of promising tools, the most notable of which is MasterpieceX by NVIDIA, as it seems to be capable of generating fully rigged character models that could work well with AI-powered animation tools. How well the rigs work needs to be tested, but visually all three models seem advanced enough to use for, at least stylised, filmmaking workflows.

3D Animation

  • ChatGPT-4 & Blender
  • AI Powered Motion capture
    • DeepMotion
    • Rokoko Vision
    • MOVE.AI

In line with the 3D models, it seems that many AI-assisted motion capture tools are already very capable of delivering usable results. I am not yet sure which one is the best, but time will tell. Animation that is not based on motion capture knows almost no limits with the use of ChatGPT, as it is able to program scripts, finish animations and create new ones from scratch in a variety of tools.

Gaussian Splatting

  • Polycam

A very new technology that will surely spawn many other iterations is Gaussian splatting. Using simple video footage, AI is able to determine and re-create accurate, photorealistic 3D environments and models. Some developments have even shown it working in real time. While I am excited to see what the future of this technology holds and am sure it will play a huge role in the world of VFX, I am not sure how I would use it in my short film project.

Post

  • Topaz Gigapixel Image / Video AI
  • Premiere Pro

Unfortunately, if I wanted to use video AI tools in their current state, a lot of post-processing would need to be done to the results to make them usable.

However, there is another, more traditional point to be made in favour of Topaz Labs: using its upscaling features saves a lot of time in almost any production phase, as working at lower resolutions will always speed up processes, regardless of application. Due to its price tag of 300 USD, I am not sure if I will use the AI for my educational purposes, but I am convinced it is a must when pursuing commercial projects, simply because of the time saved.

Premiere Pro's new features are impressive, to say the least, but I feel they work best in a production that uses real shot footage and a more traditional media design workflow. I am unsure how I could use Premiere's AI features to their fullest extent, but my work will need to go through an editing software of some kind, so I will see.

Conclusions

After today's research session, it has become even more apparent that the world of AI is developing at mind-boggling speeds. On the one hand, it is amazing what the technology is already capable of, and even more exciting to think about the future; on the other hand, the moral and legal implications of AI tools are increasingly concerning, and with AI having become a household name, I fear that the novelty of the technology will have worn off by the time I start to write my thesis.

So today I am left with a bittersweet mixture of feelings: excitement about the wonderful possibilities of AI, and concern that my thesis will be lacking in substance, uniqueness or, worst of all, scientific relevance. I will definitely need to spend some time thinking about the theoretical part of my paper.

As far as the practical part of the paper goes, I must not succumb to FOBO, and I need to decide how far I want to leave my comfort zone for this project. I fear that if I lean too much into the AI direction, my work will not only become harder, but also less applicable in real-world motion and media design scenarios. Whatever solution I come up with, I want to maximise real-world application.

Links