GAM720 wk3.2

Unreal Engine 4.24.4, Oculus Quest and macOS Catalina

I have wanted to explore the Unreal Engine since playing with the Unity Engine in the first module and working in Flutter during the second. With the start of the third module, I felt it was time to take the plunge and finally build something with Unreal. It took quite a bit of work, but I finally got Unreal set up on my Mac using Visual Studio Code as my IDE so I could work on making things for the Oculus Quest. There are a few extra steps needed to compile and run C++ directly from Visual Studio Code, but it was much easier than setting up Unreal for Android. I have included the link for the .NET framework for macOS below the walkthrough videos. Being a standalone HMD with a cheaper price point, as well as being portable, makes the Quest all the more ideal for the classroom. There are quite a few hoops to jump through to get everything working on a Mac. My starting point was a recent video walkthrough for setting up Unreal on macOS for Quest development. I have also attached a reference video for Windows users.


Oculus Documentation for Quest Development:

.NET Core 3.1 SDK

GAM720 wk3.1

Exploring a Visual Novel in Unreal

At the moment I have set up a test project in the Unreal engine and am using a guide from the Unreal forums, How to Make a Flexible Visual Novel System in UE4. I'm using this project to explore the Unreal engine rather than Ren'Py or Unity. The project is in the bare-bones stage, with rough ideas bouncing around in my head and a few things underway:

  • basic sketches of character ideas (looking at artwork from Rad Sechrist and Dan Holland, and their work on Kipo)
  • playing with the colour scheme (characters and background) and shading style
  • drafting a story outline
  • thinking about sound

I had thought about using photos with a post-apocalyptic feel based on the photography of Ulric Collette, but I have decided to go with the hand-drawn look seen in most visual novels, though with an edge. Below is an idea for a character type based on a Collette photo. I'll put up my sketches in another post.

GAM720 wk3

Suspension of Disbelief in Videogames

In this module I needed to read through the PhD thesis of Dr. Douglas Brown, "fearless leader and head of Falmouth Games Academy", and review one of the chapters. I chose Chapter One – Making the Case for the Medium. Seeing there was a lot to unpack in the chapter, I am still going through it a second time after an initial reading, taking notes and researching some of the terminology, the scholars referenced, as well as some of the games cited.

Brown takes a qualitative approach to "comparing and contrasting games with film, theatre and literature" through the lens of ludonarrative dissonance, which "refers to the intersection in a video game of ludic elements and narrative elements" or, in simpler terms, "the conflict between a video game's narrative told through the story and the narrative told through the gameplay." I've attached a video below that can help explain the crux of the idea.

In this first chapter Brown is looking "to justify the value of games as storytelling vehicles" and explores modes of engagement between games and the aforementioned media. It is necessary to understand the ideas of gameplay gestalt, diegetic vs. non-diegetic elements, and the uniqueness of the suspension of disbelief in games. Brown is "investigating whether or not games require a hybrid mode of engagement made up of those required to engage with other media, or something unique and specific to their own form and construction." But there is a caveat: he states that "in this chapter for the purposes of comparison the media types which will be set opposite games are flattened out and only their most popular or prevalent instances are engaged with. Specific kinds of film, with an emphasis on Hollywood, of literature with an emphasis on the novel and of theatre with an emphasis on mainstream classical drama have been selected to represent these media. Audiences, too, are flattened out into an assumed receiver." I will be updating this post with answers to the following questions:

  1. Can I ‘deconstruct’ the argument – identify the gaps or jumps in the logic?
  2. What are the strengths and limitations of this study?
  3. How does this paper contribute to my own work?

GAM720 Wk1

Exploring Interactivity in VR

In an earlier module I had been exploring player interaction in VR using VRTK (a VR framework that adds interactivity without having to code the physics of those interactions from scratch). I had also explored voice as a method of interaction using IBM Watson.

Since then there have been interesting updates to the Unity and Unreal engines, and I am excited to explore new ideas that build on some of my earlier projects. I spent the time in between playing a few rogue-like RPGs and a really fun game, 60 Seconds! Reatomized.

This last game gave me the idea of building an educational app that could be played in the classroom to test vocabulary and language understanding while requiring quite a bit of interactivity. A student could use an Oculus Quest to tackle a learning task while fellow students and the teacher follow the action on a big-screen TV, adding an immersive element to the event.

My idea for this module is to create a VR app that explores interaction in a collaborative environment but can also be deployed to various platforms (Oculus Rift vs. Oculus Quest) without having to rewrite all the code. I have been following the new Unity XR input system and also experimenting with Unreal Blueprints. What matters is that both engines now support Vulkan, which can "make your graphics code clearer and faster, on top of allowing easier code sharing between PC and Mobile platforms".
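The appeal of the Unity XR input system is that input is queried through an abstract device layer, so the same script runs on Rift and Quest. A minimal sketch (the class name is my own, not from the Unity samples):

```csharp
// Minimal sketch of the Unity XR input system: the same code runs on
// Rift and Quest because we ask for "whatever device is the right hand"
// rather than a platform-specific controller.
using UnityEngine;
using UnityEngine.XR;

public class TriggerCheck : MonoBehaviour
{
    void Update()
    {
        // Resolve the device currently mapped to the right hand on this platform.
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        // CommonUsages gives named, cross-vendor input features.
        if (rightHand.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed) && pressed)
        {
            Debug.Log("Right trigger pressed");
        }
    }
}
```

The same `TryGetFeatureValue` pattern covers thumbsticks, grip buttons and tracking poses, which is what makes the write-once, deploy-to-both approach plausible.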

In terms of interaction, I will be looking at basic object manipulation, such as grabbing and throwing objects, playing with haptic feedback, and adding sounds when objects collide. At a "higher level" of interaction, I will look at doors, drawers and sliders, as well as collecting objects into an inventory. This is strongly tied into UI design.
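The collision sound and haptic feedback could be combined in one small component. A sketch, assuming the held object has an AudioSource and a Rigidbody (the component and field names here are my own):

```csharp
// Sketch: play a sound and fire a haptic pulse when a held object collides.
// Attach to a grabbable object with an AudioSource and a Rigidbody.
using UnityEngine;
using UnityEngine.XR;

public class CollisionFeedback : MonoBehaviour
{
    public AudioSource impactSound;               // assigned in the Inspector
    public XRNode holdingHand = XRNode.RightHand; // which hand is holding this object

    void OnCollisionEnter(Collision collision)
    {
        // Scale the feedback with how hard the objects hit each other.
        float strength = Mathf.Clamp01(collision.relativeVelocity.magnitude / 10f);

        if (impactSound != null)
            impactSound.PlayOneShot(impactSound.clip, strength);

        // Short haptic pulse on the hand holding the object, if supported.
        InputDevice hand = InputDevices.GetDeviceAtXRNode(holdingHand);
        if (hand.TryGetHapticCapabilities(out HapticCapabilities caps) && caps.supportsImpulse)
            hand.SendHapticImpulse(0u, strength, 0.1f);
    }
}
```

Tying the amplitude to the collision's relative velocity is what makes the feedback feel physical rather than like a fixed buzz.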

Lastly, I need to consider how movement can impact player interaction, as well as how to implement locomotion, physical vs. artificial, in a way that brings some excitement and motivates the player to keep playing. There are also a few miscellaneous things I need to consider, such as level transitions and detecting when the user has put on or removed the headset.
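For the last point, the headset's proximity sensor is exposed through the XR input layer as a user-presence feature, so detection can be a simple poll. A sketch (class name is my own; availability of the feature depends on the Unity version and headset):

```csharp
// Sketch: detect when the user puts on or removes the headset by polling
// the userPresence feature exposed by the head-mounted device.
using UnityEngine;
using UnityEngine.XR;

public class HeadsetPresence : MonoBehaviour
{
    bool wasPresent;

    void Update()
    {
        InputDevice headset = InputDevices.GetDeviceAtXRNode(XRNode.Head);

        if (headset.TryGetFeatureValue(CommonUsages.userPresence, out bool isPresent)
            && isPresent != wasPresent)
        {
            Debug.Log(isPresent ? "Headset put on" : "Headset removed");
            wasPresent = isPresent;   // good place to pause or resume the game
        }
    }
}
```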

60 Seconds! Reatomized



GAM710 wk12

GAM710 Reflection

I came into this course wanting to become an independent app developer and start a small studio making educational applications specialising in virtual reality (VR) and augmented reality (AR). My interest has always tended toward making platform-agnostic applications, as I already have a basic understanding of web-based technologies such as HTML, CSS, and JavaScript, but I wanted to learn to develop for the newer VR and AR technology coming to market because of the ability to create experiences not possible in the physical world. During the first weeks of my app development journey, it has become clear that I need to focus on a few key skills: the C# programming language, priority management, interaction in VR, and marketing are of particular importance to my personal growth as an app developer.

I chose C# as it is one of the more popular programming languages and was designed to be simple and easy to use. C# is also used in game development and is the primary scripting language of Unity. While now one of the most popular game engines, Unity has also broadened its development platform beyond games to include industries such as architecture, film and automotive, as well as VR and AR.

Also, C# is similar to other C-family languages like C, C++, and Java. Becoming technically proficient in C# makes learning these other languages more accessible and opens the door to future opportunities. My goal is to improve my C# skills using the Pluralsight courses and to make sprints for each chapter. The time frame of each sprint is eight hours over two weeks, which allows me time to work on the weekly course modules. I have added these sprints to my Trello board, allowing me to stay organised. The goal is to start intermediate-level C# courses by mid-September.

From the beginning of the programme, we were encouraged to explore and gather knowledge. By week three of the programme, I had already started using Unity but was still interested in the Unreal engine. I was also interested in how 3D characters worked, so I was trying to use the 3D program Maya with character rigging. I was beginning to spread myself thin and wasn't happy with the progress I had made. I was starting to feel a bit overwhelmed, thinking there was so much I didn't know.

After a weekly support session, I took the advice of my advisor and began to focus on Unity because of its tight integration with C#. My priority going forward is to hone my Unity skills, setting short sprints for the chapters from Pluralsight courses on Unity. By mid-September, my Unity skills should be firmly at the intermediate level. The goal is to build a working prototype of an interactive virtual reality character in Unity for the Oculus Rift headset.

Another skill I focused on was interaction with a virtual environment. I experimented with Unity's touch interaction system, using a virtual hand to pick up objects. Setting this up was a bit difficult because I didn't understand how to correctly implement touch using the Oculus SDK and the SteamVR SDK, from the two dominant market leaders in the VR domain. After some research, I found a quick way to implement touch using the Virtual Reality Toolkit (VRTK). Using this toolkit with C# scripting allows me to set up an interactive environment quickly, and the time saved can be spent on extending the depth of interaction by creating interesting scripts to handle haptic feedback. My goal is to create a Unity scene able to communicate with the Watson server. I have added it to my Trello board and set up a free IBM Cloud account to access Watson. During the session break I have planned to spend 10 hours on the initial setup, and the success of the goal hinges on communicating with the speech-to-text server and having it recognise verbal input through Unity.
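What makes VRTK quick is that interaction is driven by components rather than hand-written physics. A rough sketch, assuming the VRTK 3.x component names as I understand them (check against the toolkit's own docs):

```csharp
// Rough sketch: make an object grabbable with VRTK 3.x by adding components
// at runtime instead of scripting the physics by hand.
using UnityEngine;
using VRTK;

public class MakeGrabbable : MonoBehaviour
{
    void Start()
    {
        // VRTK drives interaction through its interactable component.
        var interactable = gameObject.AddComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = true;

        // A Rigidbody lets the object be thrown with the controller's velocity.
        gameObject.AddComponent<Rigidbody>();

        // Depending on the VRTK version, a grab-attach mechanic component
        // (e.g. a fixed-joint attach) may also need to be added.
    }
}
```

In practice these components are usually added in the Inspector rather than in code; the point is that no collision or joint maths has to be written at all.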

Marketing, I believe, is important not only from a professional standpoint but from a financial standpoint as well. Understanding who my audience is and how to reach users will require me to use qualitative methods, quantitative methods, or a mixture of both. Also, creating personas from the collected data will help me better understand the audience and their motives. I think it's essential to create a few personas, as not everyone is going to have the same motives for using an app. I will focus on who the person is and how the app will fit into their life. I also need to think about how they will use or access it, as well as what they expect to get out of the experience. Keeping the personas relevant and iterating on them should allow me to develop an app that my target audience will want to use. My goal is to create five user personas for a VR language-learning app I want to develop. I have budgeted 10 hours and will post this on my Trello board. I will focus on four areas: the person, behaviours and wants, use, and expectations. After creating the personas, I will gather feedback from friends and course mates, incorporating it into a finished persona deck.

Looking back on my personal and professional development, I think I have made quite a lot of progress from a personal perspective, and I can measure that by looking at what I have accomplished up to this point. I have made progress with C# and can make a basic Unity VR app and AR app for both Android and iOS. I have a much better understanding of the direction I want to take and am learning how to judge what is essential for my career path from a professional point of view. I have been using Trello to track my progress but have let my blog slip. The blog is essential to my development, and putting more effort into self-reflective practice will allow me to grow. I think the personal case study was beneficial; having to sit down, look at five critical skills challenges and reflect on them will help carry me through the more challenging parts of the programme to come.

GAM710 wk 11

Project organization

Project organisation is another critical skill that embraces agile and the iterative process, where ideas are only starting points from which the production moulds into the best possible result. I am using a Trello board to organise my sprints, as well as using the board's flexibility to prioritise which ideas move forward or, in some cases, are pushed back. This skill is closely linked to priority management: deciding whether it's worth moving forward with an idea not related to the core concept. I tend to broaden the scope of projects and must pay close attention to the sprints I have outlined and the individual tasks I have going on my Trello board.

I have also been using Git along with Unity Collaborate, because it is easier to save the project without having to leave the Unity editor. I also had an issue saving iOS projects to GitHub after they were built using Xcode. The project was over 1 GB, and Git recommends that repositories be no larger than 1 GB to preserve performance. I tried using Git's Large File Storage (LFS) client, but I was unable to push the files to GitHub. I was able to work around the problem by manually editing the project files before saving to GitHub, but it took a bit of time to filter out the unnecessary files. Saving to the cloud using Unity Collaborate saves me time over fighting with Git.
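The manual filtering can be automated. A sketch of the two config files involved, with assumed file lists: a `.gitignore` that drops Unity's regenerated folders (the usual size culprits) and a `.gitattributes` that routes large binaries through Git LFS.

```shell
# Ignore Unity's regenerated folders -- they can be rebuilt from Assets/
# and ProjectSettings/ and are the usual reason a repo balloons past 1 GB.
cat > .gitignore <<'EOF'
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/
[Ll]ogs/
EOF

# Route large binary assets through Git LFS instead of plain Git.
# (Requires `git lfs install` once per machine.)
cat > .gitattributes <<'EOF'
*.psd filter=lfs diff=lfs merge=lfs -text
*.fbx filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
EOF
```

With both files committed before the first large asset is added, the repository history stays small and the 1 GB problem never appears.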

For me, project organisation means having a clear idea at the outset of what needs to be done, while understanding that some parts are going to change to make a functional app. Iteration brings additions in terms of detail and makes sure all the parts work well together, but it can also bring simplification, the process of "cutting fat": removing unnecessary ideas or unfitting features.

My priority here is to use my Trello board to organise and track my two-week sprints for learning C# and Unity on Pluralsight. The goal also includes building Unity prototypes based on the Pluralsight exercises and saving them to Collaborate. The two-week time frame for the sprints allows me to stay focused on learning C# and Unity. By following the Trello board, I will realise my goal of getting my C# and Unity skills firmly to the intermediate level.


GAM710 wk10

This week’s sprint was more research on AI with voice in Unity. I had accomplished last weeks sprint by being able to communicate with Watson and having speech to text working. The next sprint will focus on getting the Watson Text-To-Speech service to receive the Speech-To-Text input from the Oculus headset and logging the event.The sprint goal is to have the Watson servers are able to translate the text from the Speech-To-Text(STT) service and output it as Text-To-Speech(TTS). I will also be logging the events to analyze the amount of delay between capturing the input and  generating the speech output. As in my previous sprint, I am keeping the task realistic because this approach works well for me and keeps me focused. I set have planned to spend 2 two hour blocks during the week and a four hour block of time on the weekend, for a total of ( hours. This will be a one week sprint. I will post the sprint to my Trello board and I am using Unity Collaborate again for my version tracking. With the upcoming freetime my in my evening schedule, I don’t see any problems with staying focused on getting the TTS communicating with the STT service.

At the moment, the only issue is that the API for the Watson Unity SDK is being updated and should be released in the next week or two, according to forum posts by one of the IBM developers. After the updated SDK is released, I will start the sprint.


I have also been looking into some alternatives to IBM Watson, such as Microsoft's Azure Cognitive Services. They released a beta in July for Unity, and it works with Unity 2018.3 and Unity 2019.1. The speech services they offer are STT, TTS, speech translation, and a preview of what they call voice-first virtual assistants. There are a few issues with training custom speech models with audio and transcription using UK English, and this could be a problem since many of my students speak with non-American accents. Many of them study in England or Australia on summer language programmes, as the United States is perceived to be too dangerous. I have added this to my Trello board as one of the next cards to explore.

In the meantime, I have also been doing some research for this week's module on communities of practice. Two years back, when I was in a Java programming bootcamp, I was involved in a few communities, such as the local maker scene in Detroit, as well as the local Android users group. Being involved in the user groups gave me a chance to meet other developers with similar interests and helped spark new ideas for projects. It also pushed me to attend meetups and talks on a wide range of technical topics and to learn new skills by attending workshops.

This is something I have been missing since returning to Japan. After starting the course, I have recently attended two meetings for Unity developers and met with a group that is working with Android and AR. A person I met at the Unity meeting invited me to this event. It was really an amazing time, and some of the people I talked with are working on some cool projects. The meetings have left me feeling inspired after listening to the members share their projects as well as their experience and knowledge. Some of the people I have met have sent me emails about upcoming hackathons and coding jams and asked me to join them. I'm a bit self-conscious about my ability but am thinking about joining them in October for the experience. I may turn this into a sprint goal if it relates to what I'm working on now.




GAM710 wk9

IBM Watson and Unity

After watching the Sprint Planning Meeting video and then reading through the Task Board article, I logged onto my Trello board, looked through my 'Explore Ideas' list and identified my next sprint goal: Speech and the IBM Watson Unity SDK. The goal is to explore how player and AI character speech can create a stronger sense of immersion within a VR space.

I started with a user story centred on a person who wants to interact within VR using speech, similar to Alexa or Google Assistant but a bit more robust. User W: "I want to talk with the characters I meet in the virtual world."

The first step is to set up a Unity scene and get IBM Watson to recognize speech and display the resulting text in a UI text field. The ultimate goal is triggering in-game logic with a virtual character that can respond to users' verbal actions. This could lead to a language-learning environment where students learn vocabulary simply by interacting with an AI character.
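The display side of that first step is small. A sketch of the receiving component; the `HandleTranscript` method is my own placeholder for wiring up the Watson Unity SDK's speech-to-text callback, whose exact signature depends on the SDK version:

```csharp
// Sketch: push a recognized transcript into a Unity UI text field.
// HandleTranscript is a placeholder hook for the Watson STT callback.
using UnityEngine;
using UnityEngine.UI;

public class TranscriptDisplay : MonoBehaviour
{
    public Text transcriptField;   // assigned in the Inspector

    public void HandleTranscript(string transcript, bool isFinal)
    {
        // Show interim (still-being-recognized) results in grey,
        // final results in black, so the user can see recognition happening.
        transcriptField.color = isFinal ? Color.black : Color.grey;
        transcriptField.text = transcript;
    }
}
```

Keeping the display decoupled from the recognition service like this means the same component can later show output from Azure or any other STT backend.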

Having moved the Trello card to the started list, I set up a free IBM Cloud account to access Watson. I have planned to spend 8 hours, within a one-week period, on the initial setup. Speech recognition is one of the core features of the app and will give me a basic structure to build on. The success of the sprint hinges on communicating with the speech-to-text server and having it recognize verbal input through Unity.

I will know I have achieved my sprint goal when the Watson servers are able to transcribe the speech as text and display the output in a text field within Unity. The goal is realistic because of the small-steps approach I have used in the past. I have planned to spend two two-hour blocks during the week and a four-hour block on the weekend. I will update my Trello board, and I am using Unity Collaborate as my version tracking system. Keeping to my time schedule, updating my Trello board and staying focused on simply communicating with Watson will allow me to successfully complete my sprint goal.


I was able to achieve my sprint goal within the time frame I set because I focused on just one basic part of having an AI assistant answer questions. I have posted a video below of the finished sprint. Now that I have Speech-To-Text displaying in a UI text field, I will start on the next step of getting Text-To-Speech to output the Speech-To-Text stream. Below is the working Speech-To-Text module in Unity.




GAM 710 wk8


After looking at the list of challenges, I decided on:

  • Spend 20 minutes undertaking the ‘I’ phase (inspiration) of the ICEDIP method for a new idea for an app, and then spend 10 mins evaluating the results.

and followed Geoff's advice to "Let yourself off the leash!". I've been thinking a lot about "presence" (spatial immersion for the technically minded) and the user's POV, so this activity sounded really interesting. I put the timer on and got started. After twenty minutes I had a paper full of ideas that started off safe but moved into the "what if" realm of thought.

A few of the interesting ones were:

  • Having an AI simulation that can interact and respond to your actions or reactions using your avatar or voice.
  • Learning a language by being dropped into a “virtual city” where everyone interacts with you in the language you’ve chosen.
  • A VR app that can “read” your dreams so you can explore them again.
  • An AI / VR app that lets you talk with people from the past – you could have a chat with Jean Baudrillard about reality.
  • Having mixed-reality pets that persist in the world and age.
  • A mixed-reality babysitter for young children.

Spending a few minutes looking back on them, it would be really fun to be able to breathe life into these ideas. I liked the idea of being able to sort through your dreams, or even of creating a dream scrapbook you could share with people. One or two may be possible in a very limited way, but overall it was fun to think about and gave me an idea or two to play with. The challenge now would be to come up with a business plan to pitch one of these ideas and the research needed to support it.

Having said that, I have started researching the idea of a VR language-learning app in which users could converse with an AI character. Having an AI character like Miquela Sousa that students could learn with would be a really intriguing experience. So, I started exploring just what speech-capable AIs are available and able to integrate into the Unity platform. Microsoft has its Azure client, and Amazon has Sumerian, but that service may only run on AWS servers. There are also IBM Watson and SmartBody from the University of Southern California. At the moment, IBM Watson looks the most promising because of IBM's recent partnership with Unity, bringing Watson's AI functionality to Unity's game engine, with built-in VR/AR features. I have added this to my Trello board for now and will continue to follow it.

GAM710 wk7.1

Player Movement Sprint

The challenge that I have set for the week deals with evaluating user movement: how the user moves through and interacts with the VR environment, and how this affects comfort and engagement. I will be gathering feedback from my students.

For the sprint, I will use a simple Unity environment and duplicate it, then implement teleportation and thumbstick walking in the two respective scenes for use with the new style of Oculus controllers. First, I'll start by reading the Oculus VR best-practices documentation and additional resources on the use of movement types, and then create the VR test environment. Finally, I'll explore the two movement types inside VR using an Oculus Rift S.

The sprint goal is to understand how movement affects user engagement and how VR cameras and Asynchronous Spacewarp (Oculus Rift) affect user perception. It also looks at how quickly moving or rotating the horizon line or large objects affects player comfort. By importing the Unity Locomotion sample scene as the base environment, it will be relatively easy to build out the test environment quickly. I have committed 8 hours to the project: 4 hours for research and the initial scene buildout, an additional 2 hours to implement the movement control types in the duplicated scenes and, finally, 2 hours for testing and recording the results.
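The thumbstick-walking half of the comparison can be sketched with the XR input layer and a CharacterController. Class and field names here are my own, not from the Locomotion sample:

```csharp
// Sketch: simple thumbstick ("smooth") locomotion for the teleport-vs-walk
// comparison. Attach to the rig root; a CharacterController is required.
using UnityEngine;
using UnityEngine.XR;

[RequireComponent(typeof(CharacterController))]
public class ThumbstickMove : MonoBehaviour
{
    public float speed = 2f;    // metres per second
    public Transform head;      // the camera, so movement follows gaze direction

    CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        InputDevice leftHand = InputDevices.GetDeviceAtXRNode(XRNode.LeftHand);
        if (!leftHand.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 axis))
            return;

        // Move relative to where the player is looking, ignoring head pitch
        // so looking down does not steer the player into the floor.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right   = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;
        controller.SimpleMove((forward * axis.y + right * axis.x) * speed);
    }
}
```

`SimpleMove` applies gravity automatically, which keeps the sketch short; the teleport scene can reuse the same rig with this component disabled.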

The projected 8 hours should be more than enough to accomplish the sprint goal. I have experience with controller mapping inside a Unity scene and will not be using additional software or add-ons. The sprint will conclude after the testing phase, and the next steps depend on reviewing the sprint outcomes.

Trello board

Link to my Trello card: