Exploring the Future of Digital Health and AI in 2024

In a recent captivating episode of “The Hustle Daily Show,” host Ben Berkley and guest speakers Juliet Bennett Rylah and Martina Bretous discussed two pivotal topics shaping our future: the revolutionary digital health technology involving IoT tracking pills and the dynamic evolution of AI in 2024.

The Future of Digital Health: The IoT Tracking Pill by Celero Systems

Juliet Bennett Rylah, during her conversation with Ben Berkley, brought to light an extraordinary innovation in healthcare – a digital pill developed by Celero Systems. This isn’t just any pill; it’s a technological marvel, about the size of a standard multivitamin, loaded with sensors, a radio antenna, a microprocessor, and a tiny battery.

  • Revolutionizing Diagnostics

Initially aimed at diagnosing sleep apnea, this pill represents a leap forward in medical diagnostics. Traveling through the digestive system, it monitors vital signs and transmits this crucial data for medical analysis. The ease and non-invasiveness of this method could potentially replace cumbersome traditional sleep studies.

  • Economic and Ethical Implications

However, Juliet expressed concerns about the pill’s cost-effectiveness, given its apparent single-use nature. There’s also the matter of ensuring data security and patient privacy, critical in our increasingly connected world. Addressing these challenges will be vital for the broader adoption of this technology.

Insights from Martina Bretous on AI in 2024
Martina Bretous provided an engaging overview of what 2024 holds for this dynamic field of AI.

  • Bridging Generational Gaps with AI

AI’s role in making technology accessible across generations is remarkable. Martina highlighted the widespread adoption of AI tools like ChatGPT, which have shown potential in various applications, from entertainment to business.

  • The Innovation vs. Caution Debate

A key theme in Martina’s insights was the need to balance the aggressive push for AI innovation with ethical and safety considerations. This balance is crucial for sustainable development in AI, avoiding pitfalls seen in other tech sectors.

  • AI’s Expanding Role

Looking forward, Martina anticipates AI’s role in enhancing accessibility for the disabled and improving the functionality of digital assistants. The integration of AI into everyday life, including at-home health monitoring, is a trend to watch in 2024.

One concern we ought to address: as AI advances, ethical and privacy issues remain a top priority. Ensuring responsible use and managing the balance between innovation and safety will be key challenges in the coming year.

Blog #9 – Can AI make my master thesis?

Can we use AI for our master’s thesis? The answer to that is no, but is that a good thing, or should we be allowed to use AI?
I came across this video on YouTube that had some pretty interesting insights about this topic.

At the beginning, not everyone knew what to use AI chatbots for. But for students it was pretty obvious to use them for school assignments to answer questions, and since GPT-4, ChatGPT can analyze data, read image files, and write at the college level.

What does this mean for education?
How do we know that students have learned?

Banning AI
To ban AI, schools have to block AI websites, use detection software to flag AI-generated texts, and shift more work into class hours and onto paper.
But there are a lot of pros and cons to banning AI completely. Teachers don’t want to be the ones policing measures like this. The detection software is really imperfect and can produce false positives. How can a teacher accuse a student of using AI when they are not 100% sure?
Does it make sense to ban chatbots when tech companies are inserting them everywhere else?
Grammarly, Notion and Google Docs have a “help me write” feature and other functions that count as AI. Do we now have to cite all the AI we are using?

Allowing AI
Allowing AI in school can help students understand how to use AI and how future generations will use it. It will become part of our everyday lives, just like spell checkers, translation software and calculators.
But ChatGPT and other AI chatbots are still not perfect. There are a lot of flaws, and you can’t just copy and paste. How do you know whether something is correct when you don’t know anything about the subject? Then again, we have that problem with humans as well.

The risk with AI is that we don’t learn enough from it, because it’s a passive way of gaining information. Learning is not about having the right answer; it is about the struggle and the growth. Students need to realize “this is where I can use ChatGPT” and “this is where I don’t need to use ChatGPT”.

I think for high schoolers it would be better to ban ChatGPT, since they didn’t choose to study and would use AI to quickly get rid of their homework. But college and university students chose to study a certain topic because they want to learn, and that hopefully makes them mature enough to use AI in a responsible way. As a master’s student you chose to be at university, so why would you waste that opportunity by letting AI do all your assignments and learning nothing from it?

Vox. (2023, December 12). AI can do your homework. Now what? [Video]. YouTube. https://www.youtube.com/watch?v=bEJ0_TVXh-I

Impulse #2 – fuse* Workshop

A workshop with fuse* design studio focused on generative art installations.

Through a lot of research in the field of machine learning and artificial images, I found a design studio from Modena (Italy) named fuse* which hosts a Discord server for exchange. Not only do they encourage members to ask questions about their design process, they also announce new projects there.

One week after I joined, they announced a workshop about one of their art installations called “Artificial Botany”. Since I already knew from my previous research which algorithms and tools they might have used, I knew this would be a good opportunity to get insights into the actual design process and, more importantly, the scale of complexity when applied in a museum-like environment.

To summarize, I got insights into the complexity and the sub-processes between data collection and the final video. From my first Impulse I already knew what the technical workflow looks like, but I clearly underestimated the process of tweaking and manipulating data sets to produce the desired video output instead of a random generation. As the creation of a single video already requires a lot of processing power, tweaking and manipulating requires many more cycles and regenerations. After this workshop I see this point in a different way – more confused, because of a complexity I simply had not seen before.

With this knowledge I ask myself whether this complex, energy-hungry and time-consuming process suits my end goal. Are there simpler approaches to visualizing cracking ice in an interactive environment? Is this part of my installation going to be the focus, justifying the time it takes to produce the needed video content with a StyleGAN algorithm?

Either way, the videos created with StyleGAN are truly impressive, and taking real iceberg pictures and bringing them to life through machine learning would greatly fit the dramaturgy of my installation.

After this workshop I have strong concerns about the complexity of my concept. I think I need to get the opinion of an expert in the field of computer vision and maybe come up with simpler alternatives. So far, the following alternatives would greatly reduce the complexity of my project. The list is ordered from more abstract solutions to authentic representations.

  • Draw the cracks by hand and capture them frame by frame to make a stop-motion clip I could layer on top of a satellite photo of an ice texture.

  • Tweak a generative algorithm (for example Random Walk) to recreate a crack structure.

This alternative would animate a generative drawing algorithm that gradually expands. The algorithm should draw a random line with a structure similar to a crack that grows over time. This approach is similar to my first proposal, but drawn by an algorithm.
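
To get a feel for this option, here is a minimal sketch of such a branching random walk, assuming Pillow for drawing; step size, branch probability and frame count are placeholder values I would still need to tune:

```python
# Minimal sketch: a biased random walk that occasionally branches, drawn frame by
# frame so the crack appears to grow. Assumes Pillow; all numbers are placeholders.
import math
import random
from PIL import Image, ImageDraw

WIDTH, HEIGHT, FRAMES = 1024, 1024, 120
canvas = Image.new("RGB", (WIDTH, HEIGHT), "white")
draw = ImageDraw.Draw(canvas)

# Each walker is a crack tip: (x, y, heading). Start with one tip in the centre.
walkers = [(WIDTH / 2, HEIGHT / 2, random.uniform(0, 2 * math.pi))]

for frame in range(FRAMES):
    grown = []
    for x, y, heading in walkers:
        heading += random.uniform(-0.6, 0.6)       # jitter so the line looks like a crack
        nx, ny = x + math.cos(heading) * 6, y + math.sin(heading) * 6
        draw.line([(x, y), (nx, ny)], fill="black", width=2)
        grown.append((nx, ny, heading))
        if random.random() < 0.05:                 # occasionally branch, like a propagating crack
            grown.append((nx, ny, heading + random.uniform(-1.2, 1.2)))
    walkers = grown
    canvas.save(f"crack_{frame:03d}.png")          # one image per growth step
```

Layered over a satellite ice texture, the saved frames would form the growing crack animation.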

  • Create a Blender animation with a premade texture.

For the crack structure I have found the following tutorial showing how to produce a procedural cracked-earth effect. In a second step I would need to replace the earth texture with an ice texture and modify the crack structure to show a water texture instead of a dark hole.

Tutorial: https://www.youtube.com/watch?v=oYEaJxw4pSo&list=PLOY1I6t4tn5CUFdRrko352uxnNTGYavV-&index=3

  • Create the complete ice texture with the help of Stable Diffusion.

A browser interface can be downloaded and run locally on the computer: https://github.com/AUTOMATIC1111/stable-diffusion-webui
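
Alternatively to the webui, the same kind of generation can be scripted. A minimal sketch, assuming the Hugging Face diffusers library and a CUDA GPU; the model id and prompt are only examples:

```python
# Minimal sketch, assuming the Hugging Face diffusers package and a CUDA GPU.
# The model id and the prompt are placeholders, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "aerial satellite photograph of a cracked arctic ice sheet, top-down view"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("ice_texture.png")
```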

  • Cut a plane displaying a satellite image of ice with a 3D object.

In this approach I would create a 3D object and modify its surface with a texture modifier to produce a terrain structure. In the next step I would cut the plane, which carries the satellite image as a texture, with the 3D object. By moving the 3D object up and down I could animate a melting effect of the ice.
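
A minimal sketch of this idea with Blender’s Python API (run in the Scripting tab); the objects, sizes and frame numbers are placeholders, and the satellite texture would still have to be assigned to the plane’s material:

```python
# Minimal sketch using Blender's Python API (bpy). The satellite image would be
# assigned to the plane's material separately; sizes and frames are placeholders.
import bpy

# Plane that will carry the satellite ice texture.
bpy.ops.mesh.primitive_plane_add(size=10, location=(0, 0, 0))
ice_plane = bpy.context.active_object

# Stand-in "terrain" object that cuts into the plane.
bpy.ops.mesh.primitive_ico_sphere_add(radius=4, location=(0, 0, 2))
terrain = bpy.context.active_object

# Boolean modifier: wherever the terrain intersects the plane, the ice is removed.
cut = ice_plane.modifiers.new(name="IceCut", type='BOOLEAN')
cut.operation = 'DIFFERENCE'
cut.object = terrain

# Keyframe the terrain sinking downwards so the cut-out area grows: a melting effect.
terrain.location.z = 2.0
terrain.keyframe_insert(data_path="location", frame=1)
terrain.location.z = -1.0
terrain.keyframe_insert(data_path="location", frame=120)
```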

  • Import GIS data into Blender and animate it over time.

For this alternative I could use a Blender add-on that can import Google Maps, Google Earth and GIS data. With this approach I would be able to rebuild the structure of a real iceberg and how it changes over time.

Blender add-on: https://github.com/domlysz/BlenderGIS

Tutorial: https://www.youtube.com/watch?v=Mj7Z1P2hUWk

This add-on is extremely powerful, as it imports not only the 3D structure from NASA but also the texture. Finally, I could tweak the texture a little with Blender’s shader editor and produce multiple renderings for different years.

Although Google Earth offers the option to view data from previous years, I am not sure whether this will work with the Blender add-on.
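
If the historical imports do work out, rendering one still per year could be scripted. A minimal sketch, assuming each year’s import ends up in its own collection; the naming scheme is purely hypothetical:

```python
# Minimal sketch, run inside Blender after importing the GIS data. It assumes one
# collection per year whose name ends with that year; this naming is hypothetical.
import bpy

for year in (2000, 2010, 2020):
    for coll in bpy.data.collections:
        coll.hide_render = not coll.name.endswith(str(year))   # show only this year's data
    bpy.context.scene.render.filepath = f"//renders/iceberg_{year}.png"
    bpy.ops.render.render(write_still=True)
```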

Link to the studio: https://www.fuseworks.it/

Impulse #1 – Artificial Images Lecture

As a big fan of NYU Tisch and its two programs, the Interactive Telecommunications Program and Interactive Media Arts, I often search for their teachers’ GitHub accounts, as many of them provide open access to lecture materials. By doing so they give people who are not studying at this institution the chance to get in touch with state-of-the-art research for artistic expression.

One of these teachers is Derrick Schultz, who not only shares his code and slides but also uploads his entire classes to YouTube. Thanks to this I was able to follow his Artificial Images class and learn the basics of artificial image creation and the state-of-the-art algorithms working under the hood.

In his class, Derrick Schultz introduced the following algorithms that are currently used to generate images:

  • Style Transfer
  • Pix2Pix Model
  • Next Frame Prediction (NFP)
  • CycleGAN / MUNIT
  • StyleGAN

He also gives his opinion on the difficulty of each model and orders them from easiest to hardest.

  1. Style Transfer
  2. SinGAN
  3. NFP
  4. MUNIT/CycleGAN
  5. StyleGAN
  6. Pix2PIX

After comparing the different models, I found two algorithms that could produce the needed video material for my installation.

  • Pix2Pix
  • StyleGAN

Unfortunately, those are also the ones rated as the most difficult. The difficulty lies not only in the coding but also in data quality and quantity, GPU power and processing time.

In the following section I will analyze the two algorithms and give my opinion on whether they could help me generate the visuals for the interactive iceberg texture of my project.

Pix2Pix

As already mentioned, this algorithm can take either an image or a video as input and, according to the training data, produces a fixed output. I could use images of icebergs from NASA or Google Earth as my data set and detect the edges of my images with the Canny edge algorithm. By doing so, I get the corresponding edge texture for every image in my data set and can therefore train the Pix2Pix algorithm to draw iceberg textures from edge textures given as input.

Source: Canny edges for Pix2Pix from the dataset-tools library: https://www.youtube.com/watch?v=1PMRjzd-K8E
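
A minimal sketch of this data-preparation step, assuming OpenCV and a folder of iceberg photos; paths, resolution and thresholds are placeholders:

```python
# Minimal sketch: pair every iceberg photo with its Canny edge map, which gives the
# input/target image pairs Pix2Pix is trained on. Paths and thresholds are placeholders.
import glob
import os
import cv2

os.makedirs("dataset/edges", exist_ok=True)
os.makedirs("dataset/targets", exist_ok=True)

for i, path in enumerate(sorted(glob.glob("iceberg_photos/*.jpg"))):
    img = cv2.imread(path)
    img = cv2.resize(img, (256, 256))                 # Pix2Pix commonly works on 256x256
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # edge texture = model input
    cv2.imwrite(f"dataset/edges/{i:04d}.png", edges)
    cv2.imwrite(f"dataset/targets/{i:04d}.png", img)  # original photo = model target
```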

Problems:

  • The need to develop an algorithm that generates interactive edges.
  • Depending on the training set, the output can look very different and in the worst case cannot be associated with an iceberg texture at all. A lot of training and tweaking results in many iterations of model training.

StyleGAN

In this scenario I could again use iceberg textures from NASA or Google Earth as training data and produce an animation of a cracking iceberg texture that shrinks. Since this algorithm produces no fixed output, one can generate endless variations of image material.
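
To illustrate how such video material would be produced, here is a minimal sketch of a latent interpolation, assuming a generator trained with NVIDIA’s stylegan2-ada-pytorch repository and run from inside that repository so its helper modules resolve; the pickle path is a placeholder:

```python
# Minimal sketch, assuming a generator trained with NVIDIA's stylegan2-ada-pytorch
# repo (run from inside the repo so its modules are importable). The pickle path is
# a placeholder. Interpolating between two latent vectors yields frames in which one
# generated texture slowly morphs into another.
import pickle
import torch
from PIL import Image

with open("iceberg-network.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()                # the trained generator

z_start = torch.randn([1, G.z_dim]).cuda()
z_end = torch.randn([1, G.z_dim]).cuda()

for i in range(120):                                  # 120 frames of interpolation
    t = i / 119
    z = (1 - t) * z_start + t * z_end
    img = G(z, None)                                  # unconditional model: no class label
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    Image.fromarray(img[0].cpu().numpy(), "RGB").save(f"frame_{i:03d}.png")
```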

A sample animation made by the teacher can be seen here:

Problems:

  • Getting control over the output of the algorithm is very difficult, as it produces random interpolations based on the training data.
  • Heavy manipulation of the training data might be needed to get the desired outcome. This results in many iterations of model training and therefore a lot of time, computing power, heavy GPU processing and cost.
  • A big data set of at least 1,000 images is recommended.

Link to the Lecture: https://www.youtube.com/@ArtificialImages

Prototyping Overview

Until now the focus of my research has been on communicating social and environmental problems with the help of tangible user interfaces. In this blog post I want to concretize the topic of my master’s thesis and focus on one specific environmental problem: melting ice sheets and their impact on sea level rise, starting a so-called “chain reaction”.

So how should such an exhibit be designed to not only offer a great experience but also state a call to action?

What is the perfect symbiosis between the digital and analog medium for communicating sensitive topics like climate change?

Staying true to the concept of multisensory experiences, I would like to create an exhibit with the following layers of abstraction:

Physical feedback

For the physical representation of an ice cap, I want to build a three-dimensional model out of sticks covered by an elastic mesh. The height of each stick can be controlled separately with a motor, giving the impression of a constantly changing shape.

© Edwin Lang

Audio feedback

By recording the cracking of an ice bucket with contact microphones, I want to create a sound similar to a cracking ice cap.

https://freesound.org/embed/sound/iframe/268023/simple/large/

Video feedback

In terms of video, I want to project morphing organic shapes onto the physical representation of the ice cap. The shapes could be generated with programs like openFrameworks and real data sets of melting ice caps [1]. With the help of machine learning algorithms, the complexity of the data sets could be drastically reduced – however, further research and testing are needed [2].
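
As a first test of the dimensionality-reduction idea from [1] and [2], here is a minimal sketch with the umap-learn package; the ice-sheet measurements are stand-in random data:

```python
# Minimal sketch, assuming the umap-learn package from reference [1]. The data set is
# a random stand-in; real melting-ice measurements would replace it.
import numpy as np
import umap

measurements = np.random.rand(500, 40)            # 500 samples, 40 variables each

reducer = umap.UMAP(n_components=2, random_state=42)
embedding = reducer.fit_transform(measurements)   # shape (500, 2)
print(embedding[:5])                              # 2D coordinates that could drive the projected shapes
```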

Image credits: https://pin.it/4OqiwA6


User Interface

Visitors should be able to see changes in the ice cap based on their ecological footprint. The ecological footprint is an indicator of greenhouse gas emissions, which can be converted to emitted energy and further linked to the capacity to melt ice.
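
To make this chain of conversions concrete, here is a minimal sketch; the footprint-to-energy factor is a purely hypothetical placeholder, and only the latent heat of fusion of ice (about 334 kJ per kg) is a fixed physical constant:

```python
# Minimal sketch of the footprint -> energy -> melted-ice chain. The conversion factor
# joules_per_kg_co2 is a hypothetical placeholder that would have to be calibrated
# against climate literature; 334 kJ/kg is the latent heat of fusion of ice.
LATENT_HEAT_FUSION_J_PER_KG = 334_000

def melted_ice_kg(footprint_kg_co2: float, joules_per_kg_co2: float = 1.0e6) -> float:
    """Map a visitor's yearly CO2 footprint to kilograms of melted ice."""
    trapped_energy = footprint_kg_co2 * joules_per_kg_co2   # placeholder conversion
    return trapped_energy / LATENT_HEAT_FUSION_J_PER_KG

print(melted_ice_kg(10_000))   # e.g. a footprint of 10 tonnes of CO2 per year
```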

The interface itself has yet to be explored, but it will be of great importance for successfully linking the individual ecological footprint with the ice cap and making visitors emotionally bonded to the exhibit.

References:

[1] https://umap-learn.readthedocs.io/en/latest/
[2] Algorithm: https://umap-learn.readthedocs.io/en/latest/