Blog Post 10 – Pika into After Effects

Continuing my animation experiments from earlier, today I’m going to composite the first shot into the animated teaser, trying out different techniques and effects to achieve the best result possible with the somewhat limited quality of the Pika generation.

As a reminder – this is the scene I want to recreate:

And this is what I have currently:


To start off, Pika has reduced the already small 1680 × 720 resolution I got out of Midjourney to a measly 1520 × 608, and the result is quite blurry and still shows some flickering – though, as per my previous post, this is probably as good as it gets.

I first tried upscaling the generated video using Topaz Video AI, a paid application, whose sibling for photography, Topaz Photo AI, has given me great results in the past. Let’s see how it handles AI generated anime:

The short answer is that it pretty much doesn’t. I tried multiple settings and algorithms, but Video AI simply does not add or preserve any detail in my footage. I suspect the application is geared more towards real footage and struggles heavily with stylised video.

Next, perhaps more obviously, I tried Pika’s built-in upscaler, which I have only heard bad things about:

Immediately, we see a much better result. Overall the contrast is still low and I’m not expecting an upscaler to remove any flickering, but there is a noticeable increase in detail, sharpening and defining outlines and pen strokes that the illustrated anime style relies upon heavily.

This is great but expensive news, since Pika’s subscription model is extremely expensive at $10 per month for around 70 generated videos. I’ll have to see what I can do about that – maybe there’s a hidden student discount.

After Effects

Finally, familiar territory. After upscaling the footage somewhat nicely, I loaded it into After Effects and started by playing around with getting a usable alpha. I found that a simple Extract to get the rough alpha, followed by a Matte Choker to smooth things out and get a consistent outline, worked pretty well, although not perfectly.
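Conceptually, what this Extract + Matte Choker combo does can be sketched in a few lines of Python – a rough illustration under my own assumptions, not Adobe’s actual implementation; the function names and threshold values are made up:

```python
import numpy as np

def extract_alpha(luma, black=0.10, white=0.95, softness=0.05):
    """Rough luma-key in the spirit of AE's Extract effect (a sketch,
    not the real algorithm): map luminance in [0, 1] to an alpha matte.
    Pixels darker than `black` become transparent; `softness` ramps the edge."""
    alpha = np.clip((luma - black) / max(softness, 1e-6), 0.0, 1.0)
    alpha[luma > white] = 1.0  # anything bright enough is fully opaque
    return alpha

def choke(alpha, amount=1):
    """Crude stand-in for a Matte Choker: shrink the matte inward by
    taking the minimum of each pixel and its 4-neighbour average,
    repeated `amount` times, which eats away noisy edge speckle."""
    for _ in range(amount):
        padded = np.pad(alpha, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        alpha = np.minimum(alpha, neigh)
    return alpha
```

In After Effects itself this is of course just two effects stacked on the layer; the sketch only shows why the choke pass calms down the ragged edges that a plain luminance extract leaves behind.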

The imperfections become especially apparent when playing the animation back:

There are multiple frames where the alpha looks way too messy, the flickering is still a pain, and the footage is still scaling strangely, thanks to Pika’s insistence on always adding at least a little bit of camera motion.

At this point I took away two main techniques that seem to have the best effect and should be very versatile in theory: time remapping and hold keyframes. I recall speaking to my supervisor about a potential way to have Midjourney create keyframes, have the user space those out as needed, and then have AI interpolate between them – a traditional workflow, assisted by AI. But it seems that the AI works much better the other way around: having it create an animation with a ton of frames – many of which will probably look terrible – and then hand-picking the best ones and simply hold-keyframing your way through.
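The hand-picked hold-keyframe workflow can be sketched as a small helper – purely illustrative, with hypothetical names, standing in for what hold keyframes on a time-remapped layer do:

```python
def hold_keyframe(frames, picks, length=None):
    """Hold-keyframe a generated clip (a sketch of the workflow, not
    Pika's or After Effects' API): `frames` is the full AI-generated
    frame list, `picks` are the hand-selected frame indices, and each
    picked frame is held until the next pick fires, like AE hold keyframes."""
    length = length if length is not None else len(frames)
    picks = sorted(picks)          # copy, so the caller's list is untouched
    out, current = [], frames[picks[0]]
    for t in range(length):
        if picks and t >= picks[0]:
            current = frames[picks.pop(0)]  # jump to the next picked frame
        out.append(current)                 # otherwise hold the last one
    return out
```

For example, picking frames 0, 4 and 7 out of a ten-frame generation turns a jittery sequence into three held poses, which is exactly the limited “on threes” look of traditional anime.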

Here’s what that looks like:

Immediately, a much more “natural” result that resembles the look of an actual animated anime far more closely. What’s even better is that this gives me back a lot of the creative control that I give up during the animation process with Pika.

After some color corrections and grunging everything up, I’m pleased with the result. I think the dark setting also helps ground the figure into the background. Still, it’s not very clear exactly what the character is doing, so this is still something I will need to experiment with further using Pika. Then again, that is expensive experimenting but oh well.

Overall I think this test was very successful – the workflow in After Effects is a straightforward one and does not care in the slightest whether the video comes from Pika or any other software, which I am still open-minded about, given Pika’s pricing.

My next challenge will be getting a consistent character out of Midjourney, but I’m confident there will be a solution for that.

₁₅ Towards the Master’s Thesis

Since dealing with blog posts and the master’s thesis preparation course has been few and far between, this post is meant more for myself – to get a little further back on track and to collect my thoughts and findings up until now.

Preliminary title: “Effects of virtual reality training environments for large scale emergency operations in comparison to real life training”

Here is the exposé I created for class:

Although it did feel like I narrowed down my scope quite a bit while creating my exposé, I realise that it is still very broad and that my topic will need to be defined more clearly before I can send in the topic assignment / approval form. I know which general direction I am taking, though I am lacking an understanding of what actually constitutes a master’s thesis, how deep I can and should go, and consequently how much I am able to cover.

Next Steps:

1. Search through current literature, find the latest state of research in my relevant area, and organise my findings in a sorted collection for later use.

I have already set up something like my own personal blog and resource collection space in the form of a Notion page, which I plan on using as a guide throughout the process. I hope to expand on it and properly organise everything to speed up my workflow. On this page I will collect thoughts, links, images, contacts and whatever else I might need.

2. Engage in conversation with other professionals from my field who deal with the topic or something similar on a daily basis.

I already have a few connections from previous projects at work in mind, whom I will contact in order to find out more about the topic, but also to gain new perspectives and maybe input on different aspects I could cover in my thesis.

3. Contact research partners and institutions to interview them about the topic.

As an extension of the previous points, once I have a better idea of what exactly I will be tackling, I will interview research partners and ask for permission to reference their work in my own.

4. Find possible cooperation partners for projects through step 3.

If the opportunity arises during step 3, I could possibly find a project which I can analyse and work on during, and specifically for, my thesis.

5. Define a temporary structure of my work.

Self-explanatory – once all the previous steps are done, I would like to come back to my Notion page and organise everything into a preliminary structure to follow.

6. Define a timeline based on all my collected information.

Once that structure is defined I will lay out more concrete plans on how to move forward from that point.

Interesting literature:

Here are some sources I found that I will further investigate for a solid foundation.

₁₄ Data Concerns with VR

As is the case with all new technology, it is easy to lose oneself in the excitement of the most recent developments and the fascination with the possibilities when looking at VR/AR. However, it is important to also address aspects that might prove to have a negative impact, be it in terms of emotional and physiological responses, environmental concerns, data safety or something else.

So, for this blog post, I decided to look into safety concerns that come with the usage of VR hard- and software and possible dangers connected to it, as a fitting wrap-up to my previous ventures.

Since it is a relatively new sector, augmented reality is rather vulnerable to cyber attacks like spoofing, data manipulation or snooping.1

VR headsets are basically variations of a computer, and VR experiences are essentially software applications, which means that a VR system is just as susceptible to cybercrime as phones or computers. A VR headset can fall victim to a cyber attack just as easily as any other computer, possibly resulting in data breaches that leak personal information, lead to identity theft or cause damage to the hard- and software, amongst other things. One fact that stood out to me in particular was how severely personal information might be endangered when we are talking about virtual reality: VR systems need to track the user’s movements in order to even work properly. What most people don’t know, however, is that a person’s movement is just as individual as their fingerprints, possibly enabling companies to identify a person based on their movement data alone, without their consent.2

Due to a person’s unique movement pattern, it is almost impossible to anonymise VR and AR tracking data. Scientists have already managed to identify users very precisely, which would be a real issue if a VR system were to be hacked.3

VR applications collect a lot more data than conventional technologies: they can listen to all of the user’s conversations via the live microphone, collect biometric data and even record eye-tracking data, thus determining what the user might be looking at.4

So, one of the most imminent dangers of using augmented reality technologies is the fact that the applications and hardware collect a lot of information about who the user is and what they are doing. With this, questions such as “What do AR companies use the acquired user data for?” or “Does the company share the data with third parties?” arise.3

Generally, when big brands offer certain technologies, the public has a much higher level of trust in these applications or devices, without really addressing concerns about them or questioning the product. Over-trusting is a real issue, as it is important for users and designers of VR products to concern themselves with topics like the previously mentioned dangers of hacking, the impact of a user’s age on their experience, or a user’s response to unexpected issues such as hurting oneself while using a VR appliance.5


1. Thetechrobot. “What are the Security and Privacy Risks of VR and AR” Published September 21, 2023.

2. IEEE. “Virtual Reality Security” IEEE Digital Reality. 2022.

3. Kaspersky. “VR und AR: Risiken für die Sicherheit und Privatsphäre” Kaspersky. n.d.

4. Awais, Maham. “What are the potential risks of virtual reality?” n.d.

5. Kenwright, Ben. “Virtual Reality: Ethical Challenges and Dangers. Physiological and Social Impacts” IEEE Technology and Society Magazine, vol. 37, no. 4, p. 20-25, Dec. 2018, doi: 10.1109/MTS.2018.2876104.

Impulse 8 – Storytelling inputs

I’ve been pretty focussed on the production side of things for my master’s project, which I can’t really blame myself for, given that I’m a media design student. But regardless of how I will animate my project, or whether it is going to be a music video, short film or something else, it will need a story.

Now obviously the ChatGPT-generated movie trailers were quite cheesy and a bit unoriginal. But they did follow a story structure that we provided – for me, that was Dan Harmon’s story circle. When the trailer came together, it was clear to me that this style of story wasn’t going to work, but I didn’t really know why; it just felt wrong.

This is why I want to do some more research in the field of storytelling, specifically highlighting differences between western and eastern storytelling, thereby understanding more about eastern practices and applying those to my film.


The aforementioned story circle by Dan Harmon of course isn’t the only western structure for storytelling – there’s also Blake Snyder’s beat sheet, Freytag’s pyramid or even Shakespeare’s five-act structure. Originating in China and eventually making its way over to Korea and Japan, eastern storytelling often follows the ‘Kishotenketsu’ structure:

  • Ki (introduction)
    • Characters are introduced
    • Establishing the setting
  • Sho (development)
    • Adding context
    • Increasing complexity
  • Ten (twist)
    • Introducing a surprising turn or revelation
  • Ketsu (conclusion)
    • Resolving the story harmoniously

You’ll notice that none of the parts involve conflict, a quality most western stories rely on heavily. Kishotenketsu instead emphasises gradual unfolding and the beauty of unexpected connections, providing a unique narrative experience characterised by a balanced and contemplative progression. While conflict can definitely be a part of the stories told, it seldom serves as a structural component.


Characters in the west usually have a certain flaw that they have to overcome on their journey to beat some antagonist. That character then embarks on a journey that changes their beliefs. In eastern stories the journey tests the beliefs that the character already holds. This tends to lead to less or flatter character development, which can be uncanny for western audiences. In general, changes are less drastic and more nuanced in eastern storytelling.

Antagonists are also treated differently. They are rarely “defeated” in the way we understand it in the west, which seems like a byproduct of the decreased importance of conflict.

World building

Whatever story is going to be told, it will need a setting and a world. I want to briefly talk about the differences between hard and soft world building, a distinction that isn’t specific to eastern storytelling, but I do have some important points to make about the latter.


Hard world building involves a highly detailed and structured storytelling approach, where the world follows precise logic, rulesets and sometimes even politics, geography and history. This approach aims to immerse the audience by creating a believable world, one in which all rules and consequences make sense and every last detail about it and the story can be explained in some way.


Soft world building on the other hand is much more nuanced and plays with the viewer’s own imagination. Little is told about the world and its rulesets, giving small hints along the way to pique the viewer’s interests, making them wonder about what else there is to know. This style works very well in fantastic settings, where whimsical and unfamiliar worlds are explored, and the viewer’s wonder and lack of understanding drives their immersion.

Soft world building emphasises nuances, feelings, and imaginative involvement and therefore leaves more room for the viewer’s imagination while providing the author with more creative freedom. Not everything needs an explanation, and some authors even choose to defy logic in their soft world building approach.

Another advantage of soft world building is that the introductory phases of a story can be much shorter and focus on what’s essential – the characters, the mood and the essential lore of the world. For these reasons I want to aim for a soft world building approach and try to create a character-driven story in a world that plays with the viewer’s imagination, while providing enough information to understand any essential rules of the world if needed.


This impulse was a challenging one, since I feel like storytelling is one of my weakest skills and certainly the one I have the least experience with. But it was still kind of fun, because I was watching well-made videos. Unfortunately, I don’t think YouTube counts as reliable literature for a master’s thesis, so I’m dreading having to look up literature about all of this. But maybe this is just my modern way of researching – diving into a topic in a familiar way, choosing what I find interesting, and then looking up more reliable literature on it. I think it could work.

Something I will need to look into more regarding the production of the film is the issue of character continuity in Midjourney. This was already an issue during the production of my anime movie trailer, but will now become even more pressing, given that I believe character-driven storytelling is the way to go for my project.


₁₃ How does the body respond to VR?

I have looked at different aspects of VR and its uses, but this post will take a closer look at the basis for everything I have talked about up until now: how do our bodies even respond when exposed to virtual reality?

First of all, VR of course has its limitations. This is due to the fact that it mostly covers only one sense: sight. Oftentimes hearing is also involved, but senses such as touch, smell or even taste are almost always left out, as there is simply no mainstream-ready solution available yet. However, those who have already tried a VR headset know that the immersion is still considerable. But how does it compare?

Experiments on rats have shown that the frequency of electrical spikes between neurons drops by around two thirds when experiencing VR content as opposed to the real world. Activity of the cells responsible for navigation also drops to 30%, compared to 80% in real life. Furthermore, 60% of hippocampal neurons, responsible for information retention and learning, are simply switched off during VR sessions. Of course these numbers are not directly applicable to humans, as this has not been fully researched yet, but the results show interesting differences nonetheless.1

Of course, efforts are being made to research specific aspects of how humans interact with VR. For example, a study was conducted on the difference in impact between physical and visual perturbations on users’ balance. The participants were tasked with walking on a treadmill; one group had their balance physically disturbed, while the other group was disturbed visually. The results showed that the physical group managed to improve their balance by 10% through the training, while the visual group improved by 40%.2

Another interesting approach was used in a study for the Psychology of Sport & Exercise journal. During that study, participants had to lift a weight in the form of a dumbbell using one arm. One group looked at their real arm while doing so; the other group was placed in a virtual twin of the room, with a 3D hand holding a 3D weight, made to mimic the exact field of vision of the non-VR users. The arm movement was also tracked during the exercise in order to increase immersion. What is interesting is that, doing the exact same exercise, the VR group was found to have an 11% lower rating of perceived exertion, and also a 10–13% lower pain intensity rating than the people doing the “real” exercise. The study lists mental diversion from the pain stimuli, or a lack of visual signs of exertion on the virtual hand, which the users identified as their own, as possible reasons for these results.3

These first studies of how our bodies respond to VR, or what VR makes our bodies do, hold very promising implications for what the technology could be used for as it continues to develop rapidly. I will be sure to follow that evolution closely!


1. Virtual Times. “How does our brain respond to Virtual Reality?” Virtual Times. Published November 29, 2021.

2. J Crayton Pruitt Family Department of Biomedical Engineering. “Virtual Reality Trains The Mind To Balance The Body” University of Florida. Published September 17, 2018.

3. Matsangidou, Maria, Chee Siang Ang, Alexis R. Mauger, Jittrapol Intarasirisawat, Boris Otkhmezuri, and Marios N. Avraamides. “Is Your Virtual Self as Sensational as Your Real? Virtual Reality: The Effect of Body Consciousness on the Experience of Exercise Sensations.” Psychology of Sport and Exercise. 2019. Elsevier BV.

₁₂ Differences between real and VR education

After talking about where and how VR can be used, the logical next step is taking a look at what actual difference the technology makes.

In the past, companies have utilized in-person training as their primary way of educating employees. Traditionally, this method made sense, and oftentimes there was simply no alternative to experiencing things in person. However, this approach can be quite costly, as employees might need to be relocated to a different location and perhaps accommodated for multiple days during the training. This becomes more of an issue the further said employees need to travel to and from the training site. This is where VR training can help cut down costs, in both money and time. Industries with complex or very specific training scenarios in particular can benefit a lot from this, as such scenarios are proportionally much easier to simulate than to set up in real life.1

Due to this obvious benefit, a large number of professionals are already shifting budgets towards VR training. But the benefits are not only monetary, as there are many more soft-skill or employee-comfort related advantages. For example, a study assessing the impact of VR on training showed that managers who experienced training through VR were 40% more confident in applying new skills than the other two groups, which had consumed similar content through e-learning or in traditional classrooms. The study also showed that users were 3.75 times more emotionally connected to the content than classroom learners, which resulted in them completing the training four times faster.2

Though some people may certainly prefer traditional training, this aspect of user satisfaction cannot be ignored, as most users preferred virtual learning environments. The increase in attention, satisfaction and effectiveness is most likely a result of the entire experience being much more pleasant. Being able to learn at one’s own workplace, without having to travel for hours to training facilities, meetings or workshops, and maybe even having to stay there for multiple days, just sounds like a better alternative to most people. But of course, the two ways of teaching can always co-exist and complement each other in the areas where each is at its best.


1. Facilitate. “Virtual Reality vs. Traditional Training Methods: Which is More Effective?” LinkedIn. Published February 15, 2023.

2. Future Visual. “VR vs Traditional Training & When You Should Adopt it” FutureVisual. n.d.

₁₁ VR for students

In my previous post about use cases of VR I briefly touched on its educational aspect. In this next post I will further elaborate on this specific sector, and what VR can mean for the education of the future.

The factor that has always been a topic, and in this case also ends up being the most important one for VR, is immersion. When using VR glasses, everything is done while fully immersed in the digital environment, and when it comes to learning, that has a positive impact. Especially since the pandemic, remote teaching has struggled with student retention rates. This is where VR applications can be used to deliver content to students in a much more engaging manner, as visual learning is proven to be more effective for most students when compared to traditional teaching methods. A study at 3M Corporation, for example, found that humans process visual information 6000 times faster than plain text.1

Virtual reality classrooms foster active learning by enabling students to participate directly and learn by doing. This approach requires full attention and participation from students, but delivers the content in a fun way. Through such methods, VR can help students develop critical thinking and technological skills, which are sure to be useful in their later careers in an ever-evolving technological future.2

On top of that, VR engages more of the user’s senses, which again makes it more engaging and less boring: an important factor concerning young students. These virtual experiences can transform classrooms, or even students’ homes, into a laboratory or a museum, even going back in time for history classes. Through VR, access to scenarios that used to be cumbersome is easier than ever before, be it a day trip to a foreign country without leaving the room, or a visit to distant planets in physics class. The result stays the same: VR removes physical limitations.3

Of course, this level of technology and teaching comes with its own set of challenges. The first of these is certainly the cost of acquiring enough VR glasses for an entire classroom, which can add up very quickly. Another issue is the steep learning curve for teachers, who need to be well-versed and trained in this area to properly deliver educational VR content.4

Seeing how fast VR has developed in recent years, however, these problems are being tackled, and recognising the benefits of incorporating VR into classrooms can lead to proper steps being taken. In the same way students used to get excited about being shown a movie in class, in the not-so-distant future students can be excited to be taken on an educational virtual journey to anywhere imaginable.


1. Spilka, Dmytro. “How VR And AR Are Revolutionizing eLearning For Learners Of All Ages” eLearning Industry. Published May 18, 2023.

2. Intel Corporation. “Active Learning Fosters Technical and Innovative Learning” Intel. n.d.

3. Intel Corporation. “Virtual Reality (VR) in Education” Intel. n.d.

4. Siddiqui, Wahaj. “Virtual Reality (VR) in Education: The Future of Learning” LinkedIn. Published September 12, 2023.

₁₀ Use cases of VR

Now that we know what Virtual Reality is, it is time to dive into the broad world of VR and find out in which fields, jobs, situations or settings it can be used. The first thing that comes to mind when thinking of VR is probably the video gaming or entertainment industry, as those were initially responsible for the development of the technology towards public use. While that still holds true today, more and more companies have adopted this way of communicating, researching and testing different ways of utilising VR for promoting, showcasing or bringing their products to life. Through this, VR has, in recent years, made its entrance into the mainstream, revolutionising many sectors with its fresh and extremely involving take on user experience.1

But what are these use cases, precisely? This post aims to provide some overview and short descriptions of relevant cases.

Health Care

The first and arguably one of the most important uses for VR is health care. Surgeons are using VR during their education to simulate their future work spaces and get to know procedures comfortably in a repeatable manner. This way they can practice all kinds of surgeries without the need for real patients, something that was just not as easily possible before.2

The technology also facilitates medical procedures which might require expertise that cannot be provided locally, depending on a patient’s whereabouts and needs. This is where VR can be used to connect to doctors and specialists around the world for remote consultation and treatment.1

Lastly, VR is also being used in the treatment of patients. CBT (cognitive behavior therapy) profits immensely from this, as VR can provide a controlled and safe environment, which helps patients work through anxiety or phobias. Depending on the needs of the patient, the program can also be easily adjusted and personalised. This approach is also perfectly reproducible, recordable and able to be monitored by medical staff.1


Workplace Training

Similar to how it is being used in health care, VR can help provide virtual environments for almost all other fields as well. Many jobs involve dangerous environments, which makes it harder to train new employees. VR offers a safe solution to this, as it can simulate any environment or task, providing immersion, realism and, most importantly, risk-free training that is also cost-effective and easy to use. Employees can be taught and kept up to date in a very fast, iterative process, usable anywhere in the world.2


Architecture

In architecture, VR has two main use cases. It can provide a space for visualising and simulating future projects and buildings, which gives a much better insight into how the object might look and feel, making dimensions much more palpable than on a piece of paper. More advanced setups also offer the opportunity to make real-time changes in the planning phase. The second use case is to take these existing 3D models and visualisations and provide the customer with a realistic walk-through of their future home, without them having to physically be there, or even before the object is fully built.3

Online Shopping

In a similar vein to showcasing buildings, VR can also be used much closer to the customer, providing visualisations of products they might want to purchase online, from the comfort of their home. Such visualisations might even let the user place certain products in their real-life environment, to understand the dimensions of, say, a new wardrobe they are thinking of purchasing.4


Tourism

This virtual try-before-you-buy trend also extends to tourism. VR glasses can show a preview of what a planned holiday might look like in person. Beyond that, it can also serve as a full alternative, through guided virtual tours and visits to distant places at a much cheaper price. This approach also makes it easier to deal with accessibility issues, as it can be done from anywhere, with a much smaller carbon footprint as an added bonus.1

These were just some of the most common uses of VR in our daily lives. Hopefully this post provided a nice overview, and who knows what else is to come in the next few years of development.


1. Expert Panel, Forbes Councils Member. “17 VR Applications That Can Provide A Powerful User Experience” Forbes. Published August 22, 2023.

2. Expert Panel, Forbes Councils Member. “13 Productive And Creative Uses For VR That Impress Tech Experts”. Published December 5, 2022.

3. Alcanja, Daniel. “10 Industries Utilizing Virtual Reality in 2024”. Trio Blog. Published February 8, 2021.

4. Leonard, Kimberlee. “Top 5 Virtual Reality Business Use Cases” Last modified February 21, 2023.

₉ AR vs VR, mixed realities

This post will explain the differences between augmented and virtual reality. Post 7 already explained the basics of VR, which is why this one will start off by explaining augmented reality in detail.

Augmented reality is the process of, as the name suggests, augmenting the user’s real-world view by overlaying relevant data on some form of display. This is most easily understood by taking a look at the earliest application of augmented reality: heads-up displays (HUDs). As with most new technology, they were first utilised in a military setting, more specifically in airplanes and tanks, where information was projected onto the cockpit glass so that crews could read it while still observing their real surroundings. More recently, augmented reality has found its way into video game HUDs, where the player’s health, ammunition and other status information is displayed. The army is experimenting with bringing this technology to real-world soldiers as well, through the use of personal head-mounted visors.1

More everyday uses are, for example, AR glasses like the HoloLens, or just a smartphone with AR apps and filters. Using these, the real environment is expanded with mostly 3D objects that are superimposed to appear as they would if they were physically there. This has many uses outside of gaming or the military, as it can help visualise products or furniture during the shopping experience. AR is also used in medical settings, to practice operations or other procedures, as well as in architecture and archeology, to visualise buildings that don’t exist anymore, or buildings that are yet to be built.2

The main difference between augmented and virtual reality is the degree of immersion. Augmented reality keeps the user in the real world and only adds a few elements to it. This interaction with reality is often the key factor and main reason for choosing AR for a project. It can, however, also be a drawback, as the added visuals always have to compete with what we actually see, which can be visually jarring. On top of that, the AR geometry has to be tracked to our head movement so that everything stays in the right place relative to the real geometry it is placed upon, which can also result in some immersion-breaking visuals.

In comparison, virtual reality blocks the user’s vision of the real world and fully immerses them. This of course does not allow for the same use cases as stated above, but can have other benefits. When using VR, the environment is usually created in 3D and can represent whatever is needed, without anyone having to actually be there. This provides a clear advantage when thinking of large-scale operations and training scenarios for users such as the military, police and medical personnel. More on that in a future post.

As this post illustrated, AR and VR are not really in direct competition with each other, but rather serve different areas of use. For future posts working towards the topic of my master’s thesis, I will be focussing on VR applications.


1. Hosch, William L.. “augmented reality.” Encyclopedia Britannica. Last modified September 8, 2023.

2. Technikum Wien. “Wie funktioniert Augmented Reality?” Technikum Wien Academy. Accessed January 10, 2024.

₈ IMPULS: VR + Film

In my previous impulse, I briefly described an example of VR usage in theater, based on the description and review in a VR blog, and now I would like to talk about VR and film as another development in the usage of VR. I experienced a bit of VR film at an exhibition that I mentioned in a previous impulse, yet I didn’t know much about the basics of VR film. I found multiple blog posts that go into detail on specific film experiences; I would, however, like to write about the post that covers the basics of VR film. The author mentions a couple of VR film categories:

1. Blockbuster extensions: these are being produced by more and more large film studios in order to create an extension to their blockbusters or series, providing the audience with new possibilities to get in touch with the world of the respective movies.

2. Passive VR-films: these currently make up the majority of VR-movies. They are individual VR-films that the viewer can experience passively. I think this is also the type of film I experienced while visiting the exhibition, as I was just sitting on a chair and watching the film sequence with VR-glasses in a 3D environment. One challenge of these movies is that they need to be cut properly so the viewer doesn’t get lost. In my experience, it was an artificial environment that kept continuing from beginning to end, so I didn’t really have that issue since there weren’t really any cuts.

3. Interactive VR-films: these are films that experiment with elements of interaction.

What stuck with me through all of these descriptions is the vision, and at the same time dilemma, filmmakers face when wanting to create VR movie experiences. Ideally, some time in the future, we will be able to move around in movie worlds, feeling like we’re actually there as the audience and being fully immersed. On the downside, this freedom of movement in those worlds would make it more difficult to tell a linear story, seeing as we’re still inside a movie and its plot. Focussing the viewer’s attention in particular will prove to be very tricky, and can (in my current thinking) mainly be done with 3D audio cues that lead the attention somewhere.

I am incredibly curious what the future will hold for VR movies and how much influence or even freedom the viewer will be able to have within a set story. 

Further interest