Task 2: Literature Research “Debris”

I decided to take the article “Debris: A playful interface for direct manipulation of audio waveforms” as my object of study.

The idea of creating such an interface struck me as very interesting; I enjoy experiments and unusual approaches to seemingly familiar things. However, after reading the article and reviewing the accompanying materials, I still have some doubts about the product.

In my opinion, this prototype can hardly be called an “interface”; I would describe it instead as a work of art, in a way. I am not a musician, so I can only judge from the outside, but it seems to me that composing any particular piece of music with such an interface would be difficult: music is linear in nature, while this product is anything but. The interface also struck me as too unintuitive to be called a tool.

But none of what I have said above means that I did not like the project; on the contrary, I liked it very much. I would love to try this innovation myself: perhaps I am wrong, and it will prove convenient for musicians and soon come into everyday use.

Reference:

Robinson, F. A. (2021). Debris: A playful interface for direct manipulation of audio waveforms. https://doi.org/10.21428/92fbeb44.02005035

Have you seen a 3D TV?

I mean it seriously: when was the last time you saw or heard anything about a 3D TV? At one time this technology rode a wave of popularity, but today few people even remember it. In this article, I’d like to look at a couple of examples of technologies that were, as they say, at the peak of the hype cycle but ended up not being needed.

Eywa is eternal and 3D TVs are not.

In the late 2000s, manufacturers of televisions, monitors and even mobile phones actively pursued 3D. It was even proclaimed that a new era of 3D television had dawned. Perhaps it all started before Avatar arrived in cinemas, but the film accelerated the trend as everyone became hooked on 3D. High prices did not deter consumers, even though any TV with a 3D function cost much more than a comparable model without it. Tablet and phone makers, sensing profit, started producing “3D devices” as well. But these were never very popular, and over time sales of 3D TVs began to fade too.

As a result, by 2017, many companies had stopped releasing new 3D TVs. What is the reason for this? It can be explained by several factors:

Untimeliness. In the years before 3D TVs entered the market, consumers had already purchased the then-new HDTVs, and not all potential customers wanted to splurge again so soon.

High cost. Owning the TV is not enough: you also have to buy 3D content for it! And that is an expensive proposition: Blu-ray players with 3D support, new 3D-enabled cable and satellite set-top boxes, and so on.

The need for additional equipment. One word: glasses. Or rather, a few words: glasses that fit only one type of TV and, again, cost money, especially if we are talking about active shutter glasses.

In short, 3D became over time an occasional entertainment: going to the cinema once a month to see a 3D movie is fine, but investing in a home 3D system simply was not worth it.

To wire or not to wire

In 2012, Nokia launched the Lumia 820 and Lumia 920, whose standout feature was wireless charging. I became a lucky owner of a Nokia Lumia 920 around 2014 and bragged terribly about being able to charge my phone without a wire. At the time, I thought it was the future of charging. However, almost ten years have passed, and wireless charging technology has not really made much headway. The approach has a number of design disadvantages that are hard to overcome.

The inability to use the phone while charging.
This is the aspect that particularly annoyed me while using the Nokia: you had to wait for the phone to charge before you could use it again. A charging stand lets you place the phone upright and keep using it, but even that is not very convenient, since one wrong move of the hand shifts the smartphone off the coil and interrupts the recharge. Wires are more reliable in this regard.

Extra cost.
Another obvious disadvantage of wireless charging is the need to purchase a docking station separately. It does not, of course, come with the phone, whereas most manufacturers include a USB cable in the box. Wireless charging therefore means extra expense: a quality wireless charger from a reputable manufacturer costs at least $20–25, and multi-device chargers mostly start at around $40. Not all users are willing to spend that much on an additional accessory; to save money, it is easier to use a regular cable, especially the one that came with the phone.

Low charging speed.
Wireless charging is significantly less powerful than wired charging. For example, the flagship Xiaomi 12 Pro (Dimensity Version) supports 67-watt wired and 50-watt wireless charging. On paper that seems a small difference, but in practice the smartphone charges much faster from the cable. It is a similar story with the new Google Pixel 7: a wired 30-watt charger fills the battery to 50% in just 30 minutes, while wireless charging takes about an hour. If you are in a hurry, you are better off using a cable.
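A back-of-the-envelope sketch can show why the gap in practice is wider than the rated wattages suggest: wireless pads lose a noticeable share of power as heat in the coils. The battery size and efficiency figures below are my own illustrative assumptions, not measured values for any particular phone.

```python
def charge_time_minutes(battery_wh: float, rated_watts: float, efficiency: float) -> float:
    """Rough time to fill a battery from empty, ignoring charge-curve taper.

    `efficiency` is the fraction of the charger's rated power that actually
    reaches the battery; wireless coils dissipate more of it as heat.
    """
    delivered_watts = rated_watts * efficiency
    return battery_wh / delivered_watts * 60

BATTERY_WH = 18.0  # hypothetical battery, roughly a 4,700 mAh cell at 3.85 V

wired = charge_time_minutes(BATTERY_WH, 67, 0.85)     # assumed 85% efficient
wireless = charge_time_minutes(BATTERY_WH, 50, 0.60)  # assumed 60% efficient
print(f"wired: {wired:.0f} min, wireless: {wireless:.0f} min")
# wired: 19 min, wireless: 36 min
```

Under these assumptions the nominally “small” 67 W vs. 50 W gap nearly doubles the full-charge time, which matches the Pixel 7 observation above in spirit.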

A funny observation: even complimentary articles about wireless charging acknowledge the advantage of the cable, in a tone that seems to urge the reader to appreciate that the creators at least tried. Truly necessary technology does not usually need such reassurance.

Big Brother is watching you. Or is he?

Back in 2012, Google Glass seemed like a real miracle, something we could only imagine in sci-fi movies. “The future is already here,” various publications wrote. But was it really so? From today’s perspective we can say definitely not. But why? What happened?

There are two main reasons, technical and social. The social one concerns privacy. While Google Glass was being heavily publicised, some people were already growing wary of the device; the glasses were even banned from bars and cinemas. And with growing concern about the protection of personal data, the ethics of such a device were called into serious question.

However, this is a minor point when you consider that in reality Google Glass was simply useless. Uncomfortable controls, a laggy interface, overheating problems, very short battery life (literally a couple of hours): what was Google Glass actually good for? For hands-free video it is easier to strap a GoPro to your head. The browser could only be used by people who enjoy pain, and while driving the glasses were distracting and even caused a few accidents.

The most interesting thing is that Google has not given up on the product. No one in the corporation’s top management has ever called Google Glass a failure or announced its cancellation; the commercial release has simply been postponed again and again. Tony Fadell, formerly of Apple, was brought on board and, five years ago, tasked with bringing the device to fruition. Rumours of an upgraded Google Glass circulate online from time to time, but whether it will ever reach users is a big question.

In conclusion, I have come to one interesting realisation while writing this article: marketing is a powerful thing, but it is not durable at all.

References:

3D TV Is Dead—What You Need To Know. (2021, April 17). Lifewire. https://www.lifewire.com/why-3d-tv-died-4126776

Proença, E. (2013, August 9). Review: Wireless Charger for Nokia Lumia 820 and Lumia 920. Showmetech. https://www.showmetech.com.br/en/review-wireless-charger-for-nokia-lumia-820-and-lumia-920/

Leow, V. (2021, July 14). Does Fast Wireless Charging Really Affect Your Phone Battery? https://chargeasap.com/blogs/news/does-fast-wireless-charging-really-affect-your-phone-battery

Srivastava, P. (2022, July 13). Why Google Glass Failed? Google Glass Failure Case Study. StartupTalky. https://startuptalky.com/google-glass-failure-case-study/

What is the price of high technology?

You might have assumed from the title that this text would be about the moral choices our society may face with the arrival of high technology. I must disappoint you: this time I want to talk about the prices of various modern technologies and their return on investment. The technologies that concern us in everyday life and are aimed at the end user are particularly interesting in this context, and they are what I want to reflect on in this article.

To begin with, it is important to talk about the profitability of products, and for that we need its definition and how it is calculated. Profitability measures the return on investment in a product relative to the capital invested in it. In its simplest form it is calculated quite simply: divide the product’s profit by the capital invested in it, and you get a percentage. Now that the calculation is clear, we can move on to practice: what does this mean for high-tech companies? According to Forbes, it is return on investment that will drive the tech startup market in the coming years. The author writes that the crises of recent years have changed the market dramatically: investment will be harder to come by, and younger companies will have to wait longer to bring their products to market. However, according to the same article, cloud services will remain popular despite the recession and are therefore a worthwhile investment in terms of product returns.
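The calculation described above can be sketched in a few lines. This is a minimal illustration of the simple profit-over-invested-capital formula; the function name and the dollar figures are mine, chosen purely for the example.

```python
def profitability(profit: float, invested_capital: float) -> float:
    """Simplest return on investment: profit divided by invested capital, as a percentage."""
    if invested_capital <= 0:
        raise ValueError("invested capital must be positive")
    return profit / invested_capital * 100

# Hypothetical product: $150,000 profit on a $600,000 investment.
roi = profitability(150_000, 600_000)
print(f"ROI: {roi:.1f}%")  # ROI: 25.0%
```

Real-world profitability analysis is of course more involved (time horizons, discounting, risk), but this is the basic ratio the rest of the discussion builds on.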

But the casual reader might ask: what about the metaverse and neural networks? It is they, not cloud services, that are being talked about all over social media; businessmen as big as Elon Musk and Mark Zuckerberg take them seriously enough to make bold announcements about their future. Are they not paying off? Is it all a trick? This is not an easy question to answer. On the one hand, such technologies from the entertainment industries create a lot of hype, attracting attention and investors. On the other hand, it must be understood that, as is usual with innovation, many celebrity statements describe a utopia or a higher idea that is far from being realised and has little to do with payback at the moment. Take, for example, the metaverse that Mark Zuckerberg spoke about so often in 2021:

“Metaverse isn’t a thing a company builds. It’s the next chapter of the internet overall.”
Mark Zuckerberg, Meta. (Carlson, 2022).

According to experts, the idea of the metaverse is 8 or even 10 years away from being realised. Its further growth, the author writes, will depend both on the clarity of the product’s return on investment and on the readiness of industry and technology talent. And despite predictions that the metaverse could be in use in some industry sectors by 2030, at this point it is hard to talk about payback within the next few years, because the technology is still at the minimum viable product stage.

While many people fear losing their jobs to neural networks and nervously check willrobotstakemyjob.com, the situation around artificial intelligence is not much different from that of the metaverse. According to the latest McKinsey data, adoption of such processes has doubled since 2017, yet only a small share of companies (10%) report a payback in business. Experts attribute this to the fact that neural networks require careful and voluminous data work, which some firms lack the resources for. That is why industry giants such as Google and Amazon are likely to offer their solutions and services to smaller businesses going forward, and why artificial intelligence is still at an early stage of development and still needs to find its applications.

What conclusions can be drawn from all this? A technology’s payback is not always a matter of press coverage or novelty; much more often it is simply a question of whether the technology can (probably) pay for itself within a decade or so. Far more often, the technologies that do pay off are not in the limelight but in niche areas that would not occur to the average person, such as cloud services, which, mind you, enjoyed immense popularity in the press 10 years ago.

References:

Raynovich, S. R. (2023, January 25). Inside the Trends Driving Top Cloud Startups In 2023. Forbes. https://www.forbes.com/sites/rscottraynovich/2023/01/25/inside-the-trends-driving-top-cloud-startups-in-2023/

Carlson K., Austin American-Statesman. (2022, March 16). At SXSW, Mark Zuckerberg says metaverse is “Holy Grail” of social experience. Austin American-Statesman. https://eu.statesman.com/story/business/2022/03/16/sxsw-facebooks-mark-zuckerberg-says-metaverse-future-internet/7051230001/

Desk, T. (2023, January 25). Widespread metaverse adoption still years away, despite strong early signals: NASSCOM. The Indian Express. https://indianexpress.com/article/technology/metaverse-adoption-nasscom-report-8402349

Author, G. (2023, January 7). AI goes mainstream, but return on investment remains elusive. SiliconANGLE. https://siliconangle.com/2022/12/29/ai-goes-mainstream-return-investment-remains-elusive/

“Flying machines heavier than air are impossible!”

This remarkable phrase, which I put in the title, was uttered in 1895 by Lord Kelvin, the eminent physicist and then president of the Royal Society (the unit of temperature, the kelvin, is named after him). The statement was made just eight years before Orville Wright performed the first controlled airplane flight. By 1904 there was already a more advanced model of the plane, capable of performing maneuvers, and a year later a third “modification” that could stay in the air for about thirty minutes.

Just a year ago, we merely laughed at AI and neural networks, finding their attempts to look “natural” funny. Siri was “born” only 12 years ago. The first iPhone was released in 2007. GSM appeared in 1992, just 30 years ago. We never know how far technology can go or what to expect from it. In this article I would like to recall some inventions that were once met with hostility but later found their place in the world.

Is it spinning or not?
Copernicus was a scientist of the early 16th century, the author of the heliocentric system of the world, which marked the beginning of the first scientific revolution. Scientists before Copernicus believed that the Earth was the center of the universe and that the world was divided into sublunary and superlunary realms. That held until Copernicus published his major work, On the Revolutions of the Heavenly Spheres, in 1543, outlining and justifying the heliocentric system. The Polish astronomer proposed that the Sun was at the center of the universe and that the Earth was only one of the planets moving around it. Copernicus also stated that the firmament in which we observe the stars does not revolve around the Earth, as previously thought, but is at rest. With his research, the scientist shattered the foundations of the traditional worldview, which provoked resentment and misunderstanding among ordinary people. His doctrine was officially condemned 73 years after publication, and only in time did astronomers recognize that Copernicus, and his “colleague” Galileo Galilei, were right: the Earth does revolve. By the way, many people mistakenly believe that Copernicus was burned at the stake for his bold claims, but this is not true; the scientist died of a stroke at the age of 70.

The Age of Steam or Sail?
Another telling story of a great discovery that was not welcomed with open arms is the invention of the steamship. In 1800, American engineer Robert Fulton began experiments to create a steam engine and modernize sailing ships. As is not difficult to guess, the engineer’s proposal met with a hostile reception.

“Mr. Fulton’s proposal to install a steam engine on seagoing vessels is sheer nonsense. A steam engine cannot replace sails.” – Fleet Commissioner François Le Moyne.

Despite the disapproval of colleagues and the public, Fulton brought his idea to life, and in 1803 built a steam-powered ship 20 meters long. It was tested on the River Seine, where it reached a speed of three knots against the current. But the successful tests did not help him convince people of the necessity of his invention; Napoleon Bonaparte did not believe in the project either, although it is worth noting that ten years later the emperor changed his mind. Several steamboats were later built on Fulton’s designs, including a warship with 44 cannons, but the inventor did not live to see it.

Space and journalists.
The era of rocket technology did not begin so long ago, and its early history was not easy either. In 1909, Robert Goddard proposed a project for a multistage rocket. The scientist argued that once the fuel in a stage’s tanks was consumed, the stage could be discarded, reducing the mass that had to be accelerated to higher speeds. When Goddard spoke about his project, many dismissed his words as fantasy; the editor of the technology news section of The New York Times even ridiculed the scientist and his idea in his column. But no one remembers that journalist’s name now, while Robert Goddard has gone down in world history. The scientist built a liquid-fuel rocket, which was tested in 1926: the first prototype was tiny by modern standards, flying for just two and a half seconds, reaching a height of about 12 meters and covering 56 meters. Goddard’s designs were later used to build dozens of real rockets. The New York Times, by the way, eventually retracted and apologized for its article.

So, in conclusion, I would like to emphasize again that none of us can tell which technology will be successful and which will remain in the dustbin of history. Sometimes technologies that cause skepticism turn out to be vital after a couple of decades, and sometimes loud and promising inventions are soon forgotten. But that’s a topic for another article…

References:

Misra, R. (2015, December 16). The greatest newspaper correction ever written (49 years too late). Gizmodo. https://gizmodo.com/the-greatest-newspaper-correction-ever-written-49-year-1491590487

Hartenberg, R. S. (1998, July 20). Robert Fulton | Biography, Inventions, & Facts. Encyclopedia Britannica. https://www.britannica.com/biography/Robert-Fulton-American-inventor

Westman, R. S. (1998, July 20). Nicolaus Copernicus | Biography, Facts, Nationality, Discoveries, Accomplishments, & Theory. Encyclopedia Britannica. https://www.britannica.com/biography/Nicolaus-Copernicus

The Wright Brothers and the Invention of the Airplane. (2021, April 25). ThoughtCo. https://www.thoughtco.com/airplanes-flight-history-1991789

The Place of High-Tech in Our World

On what device are you reading this article: your computer, your phone, or maybe a tablet? Perhaps you even have a plugin installed that reads it aloud while you drive or shop, so that you can do two things at once. Setting this particular article aside for a second: can you say exactly how much of your life is spent surrounded by various devices, or would it be easier to answer how much time you spend without them? Before answering, think carefully about whether you wear a smart watch, or a device that helps your heart beat, or perhaps one that helps you hear better. For many people today, life is so intertwined with technology that they struggle to answer these questions. And why should they have to?

As a student of Interaction Design, I understand that my profession, like no other, will deal not only with high and modern technology but also with its implementation in other people’s lives. That is why, at the very beginning of my career, I wanted to ask myself: how much technology do people need? It is great when technology helps doctors make better diagnoses, makes lives safer, or improves education. But I have also started to hear more and more about its flip side: it can reduce driving safety when used incorrectly (you may have heard of the ship whose crew lost control after its analog controls were replaced with a touchscreen), or it can displace people from creative professions they enjoy (more on that in the section on art below). Given these trends, it is not hard to see that there is a boundary between the right level of technology and an alarming level of it, an unspoken line that nobody marked before we entered the age of information technology. It is this line that interests me in this blog.

First of all, I want to briefly outline the advantages and disadvantages of technology in relation to particular industry sectors, because in my opinion the need for technology is not distributed equally across different areas.

Medicine:
Technology has made huge breakthroughs in this field over the past decades. Innovations such as AR, 3D printing, AI, robotic assistants and virtual healthcare have surely saved and improved the quality of more than one life. Recently, an experiment was conducted in which 15 experienced doctors from China competed against an artificial intelligence called BioMind AI in diagnosing neurological pathologies. The AI found 87% of pathologies across 225 cases, while the team of doctors achieved only 66% accuracy; similar results were obtained in a contest to identify brain hematomas. Even though it is already clear that such technology will push many extremely talented diagnosticians out of the labour market, many will say, “I do not care whether a human or a machine diagnoses me, as long as the disease is found.” Since human lives are at stake, in my opinion high technology is appropriate and necessary here if it can work more accurately than humans or improve the success of treatment. Naturally, this also demands patience, adaptability and a willingness to learn from the medical profession.

Education:
In my opinion, things are more complicated in this field. On the one hand, if every student had a tablet with the complete school curriculum, one that also adjusted to the learner’s abilities, it would be great: students could decide at what pace to study and when to attempt exams, homework could take their cognitive abilities and interests into account, and teachers could concentrate on supporting their class, freed from monotonous routine work. However, this sounds utopian enough, and any utopia inevitably leads to various dystopian scenarios. During the coronavirus pandemic it became clear that students, contrary to all the expectations of the adult generation, were not delighted by long sessions with computers and tablets: they complained of back pain, lack of communication and even depressive states. All this suggests that in education, technology should not be applied thoughtlessly or relied on in every aspect.

Art:
This is probably the most difficult area. The vast majority of designers working in the industry use technology in their work, be it Figma or the Adobe suite. In recent years, new Figma plugins and Photoshop brushes excited the community of illustrators and designers, but since the appearance of Midjourney many have become deeply skeptical of technological development in this area, even to the point of demanding the “cancellation of AI in art”. Such antipathy to progress has not been seen since the Industrial Revolution, so it seems to me that this topic is vast and will remain an open question for a long time to come.

(“The cover illustrations for our blog posts created by human designers are on the left, and those generated with Midjourney on the right”, Source: Kirdina, 2022)

As you can see, the topic of technology is thoroughly controversial and can be approached from different perspectives depending on the area. That is why I find it interesting and fascinating, and in my next articles I plan to explore it from different angles.

References:

China Focus: AI beats human doctors in neuroimaging recognition contest. (n.d.). Xinhua. http://www.xinhuanet.com/english/2018-06/30/c_137292451.htm

6 Ways Technology is Transforming the Healthcare Industry. (2020, January 6). The Manufacturer. https://www.themanufacturer.com/press-releases/6-ways-technology-transforming-healthcare-industry/

Digitalisierung & Gesundheit – GIVE. (n.d.). https://www.give.or.at/angebote/themen/digitalisierung-gesundheit/

Whiddington, R. (2022, December 16). Independent Artists Are Fighting Back Against A.I. Image Generators With Innovative Online Protests. Artnet News. https://news.artnet.com/art-world/independent-artists-are-fighting-back-against-a-i-image-generators-with-innovative-online-protests-2231334

Navy ditches touchscreens for knobs and dials after fatal crash. (2019, August 11). TechCrunch. https://techcrunch.com/2019/08/11/navy-ditches-touchscreens-for-knobs-and-dials-after-fatal-crash/

Kirdina, A. & Turner, T. (2022, December 13). Midjourney vs. human illustrators: has AI already won? Martian Chronicles, Evil Martians’ team blog. https://evilmartians.com/chronicles/midjourney-vs-human-illustrators-has-ai-already-won