GAM710 wk12

GAM710 Reflection

I came into this course wanting to become an independent app developer and to start a small studio making educational applications specialising in virtual reality (VR) and augmented reality (AR). My interest has always tended toward platform-agnostic applications, as I already have a basic understanding of web-based technologies such as HTML, CSS, and JavaScript, but I wanted to learn to develop for the newer VR and AR technology coming to market because of its ability to create experiences not possible in the physical world. During the first weeks of my journey on this course, it has become clear that I need to focus on a few key skills: the C# programming language, priority management, interaction in VR, and marketing are of particular importance to my personal growth as an app developer.

I chose C# as it is one of the more popular programming languages and was designed to be simple and easy to use. C# is also used in game development and is the primary scripting language used by Unity. Now one of the most popular game engines, Unity has also broadened its development platform beyond games to include industries such as architecture, film, and automotive, as well as VR and AR.

Also, C# is similar to the other C-family programming languages: C, C++, and Java. Becoming technically proficient in C# makes learning these other languages more accessible and opens the door to future opportunities. My goal is to improve my C# skills using the Pluralsight courses and to set up sprints for each chapter. The time frame of each sprint is eight hours over two weeks, which allows me time to work on the weekly course modules. I have added these sprints to my Trello board, allowing me to stay organised. The goal is to start intermediate-level C# courses by mid-September.
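As a small, hypothetical illustration of that family resemblance (my own sketch, not taken from the Pluralsight material), a basic Unity behaviour in C# reads much like the equivalent Java class would:

```csharp
using UnityEngine;

// A minimal Unity behaviour: slowly rotates whatever object it is
// attached to. Anyone coming from Java will recognise the class syntax.
public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 45f;

    void Update()
    {
        // Time.deltaTime keeps the rotation speed frame-rate independent.
        transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime);
    }
}
```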

From the beginning of the programme, we had been encouraged to explore and gather knowledge. By the time we reached week three, I had already started using Unity but was still interested in the Unreal engine. I was also interested in how 3D characters worked and so was trying to use the 3D program Maya for character rigging. I was beginning to spread myself too thin and wasn’t happy with the progress I had made. I was starting to feel a bit overwhelmed, thinking there was so much I didn’t know.

After a weekly support session, I took the advice of my advisor and began to focus on Unity because of its tight integration with C#. My priority going forward is to hone my Unity skills, setting short sprints for the chapters from the Pluralsight courses on Unity. By mid-September, my Unity skills should be firmly at the intermediate level. The goal is to build a working prototype of an interactive virtual reality character in Unity for the Oculus Rift headset.

Another skill I focused on was interaction with a virtual environment. I experimented with Unity’s touch interaction system, using a virtual hand to pick up objects. Setting this up was a bit difficult because I didn’t understand how to correctly implement touch using the Oculus SDK and the SteamVR SDK, from the two dominant market leaders in the VR domain. After some research, I found a quick way to implement touch using the Virtual Reality Toolkit (VRTK). Using this toolkit with C# scripting allows me to set up an interactive environment quickly, and the time saved can be spent extending the depth of interaction by creating interesting scripts to handle haptic feedback. My goal is to create a Unity scene able to communicate with the Watson server. I have added it to my Trello board and set up a free IBM Cloud account to access Watson. I plan to spend 10 hours on the initial setup during the session break, and the success of the goal hinges on communicating with the speech-to-text server and having it recognise verbal input through Unity.
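As a sketch of the kind of haptic feedback script I have in mind — assuming VRTK v3-style component and event names, which I still need to verify against the version I install — triggering a pulse when an object is grabbed might look like this:

```csharp
using UnityEngine;
using VRTK;

// Fires a short haptic pulse on the grabbing controller whenever this
// object is picked up. The event and helper names below assume VRTK v3;
// they may differ in other versions.
public class GrabHaptics : MonoBehaviour
{
    void Start()
    {
        var interactable = GetComponent<VRTK_InteractableObject>();
        interactable.InteractableObjectGrabbed += OnGrabbed;
    }

    void OnGrabbed(object sender, InteractableObjectEventArgs e)
    {
        var controller =
            VRTK_ControllerReference.GetControllerReference(e.interactingObject);
        // Pulse at 0.6 strength for 0.2 seconds — starting values to tune later.
        VRTK_ControllerHaptics.TriggerHapticPulse(controller, 0.6f, 0.2f, 0.01f);
    }
}
```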

Marketing, I believe, is important not only from a professional standpoint but from a financial standpoint as well. Understanding who my audience is and how to reach users will require me to use qualitative methods, quantitative methods, or a mixture of both. Also, creating personas from the collected data will help me better understand the audience and their motives. I think it’s essential to create a few personas, as not everyone is going to have the same motives for using an app. I will focus on who the person is and how the app will fit into their life. I also need to think about how they will use or access it, as well as what they expect to get out of the experience. Keeping the personas relevant and iterating on them should allow me to develop an app that my target audience will want to use. My goal is to create five user personas for a VR language learning app I want to develop. I have budgeted 10 hours and will post this on my Trello board. I will focus on four areas: the person, behaviours and wants, use, and expectations. After creating the personas, I will gather feedback from friends and course mates, incorporating that into a finished persona deck.

Looking back on my personal and professional development, I think I have made quite a lot of progress from a personal perspective, and I can measure that by looking at what I have accomplished up to this point. I have made progress with C# and can make a basic Unity VR app and AR app for both Android and iOS. From a professional point of view, I have a much better understanding of the direction I want to take and am learning how to judge what is essential for my career path. I have been using Trello to track my progress but have let my blog slip. The blog is essential to my development, and putting more effort into self-reflective practice will allow me to grow. I think the personal case study was beneficial, and having to sit down, look at five critical skills challenges, and reflect on them will help carry me through the more challenging parts of the programme to come.

GAM710 wk11

Project organisation

Project organisation is another critical skill that embraces agile and the iterative process, where ideas are only starting points from which the production moulds into the best possible result. I am using a Trello board to organise my sprints, as well as using the board’s flexibility to prioritise which ideas move forward or, in some cases, are pushed back. This skill is closely linked to priority management: deciding whether it’s worth moving forward with an idea not related to the core concept. I tend to broaden the scope of projects and must pay close attention to the sprints I have outlined and the individual tasks on my Trello board.

I have also been using Git along with Unity Collaborate because it is easier to save the project without having to leave the Unity editor. I also had an issue saving iOS projects to GitHub after they were built using Xcode. The project was over 1GB, and Git recommends that repositories be no larger than 1GB to preserve performance. I tried using Git’s Large File Storage (LFS) client, but I was unable to push the files to GitHub. I was able to work around the problem by manually editing the project to save to GitHub, but it took a bit of time to filter out the unnecessary files. Saving to the cloud using Unity Collaborate saves me time over fighting with Git.
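For reference, this is the kind of filtering I ended up doing by hand — a minimal sketch of a Unity-oriented `.gitignore`; the exact folders to exclude depend on the Unity version, and the LFS commands assume `git-lfs` is installed:

```shell
# Keep Unity's generated folders out of the repository — they are
# rebuilt locally and account for most of the bloat.
cat > .gitignore <<'EOF'
Library/
Temp/
Obj/
Logs/
Build/
Builds/
UserSettings/
EOF

# Large binary assets can then go through Git LFS instead of plain Git:
#   git lfs install
#   git lfs track "*.fbx" "*.psd" "*.wav"
```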

For me, project organisation means having a clear idea at the outset of what needs to be done, while understanding that some parts are going to change to make a functional app. Iteration brings additions in terms of detail and makes sure all the parts work well together, but it can also bring simplification, the process of “cutting fat”: removing unnecessary ideas or unfitting features.

My priority here is to use my Trello board to organise and track my two-week sprints for learning C# and Unity on Pluralsight. The goal also includes building Unity prototypes based on the Pluralsight exercises and saving them to Collaborate. The time frame of two weeks for the sprints allows me to stay focused on learning C# and Unity. By following the Trello board, I will realise my goal of getting my C# and Unity skills firmly to the intermediate level.


GAM710 wk10

This week’s sprint was more research on AI with voice in Unity. I accomplished last week’s sprint by being able to communicate with Watson and having speech-to-text working. The next sprint will focus on getting the Watson Text-To-Speech service to receive the Speech-To-Text input from the Oculus headset and logging the event. The sprint goal is to have the Watson servers translate the text from the Speech-To-Text (STT) service and output it as Text-To-Speech (TTS). I will also be logging the events to analyse the amount of delay between capturing the input and generating the speech output. As in my previous sprint, I am keeping the task realistic because this approach works well for me and keeps me focused. I have planned to spend two two-hour blocks during the week and a four-hour block on the weekend, for a total of eight hours. This will be a one-week sprint. I will post the sprint to my Trello board, and I am using Unity Collaborate again for my version tracking. With the upcoming free time in my evening schedule, I don’t see any problems with staying focused on getting the TTS communicating with the STT service.
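The timing side of the sprint is the part I can already sketch. Below, the Stopwatch logic is concrete, but the Watson hand-off methods are placeholders for whatever the updated SDK ends up exposing:

```csharp
using System.Diagnostics;
using UnityEngine;

// Sketch of the latency logging planned for the STT -> TTS round trip.
// OnTranscriptReceived / OnAudioReceived are assumed hook points, not
// real Watson Unity SDK callbacks; only the timing logic is concrete.
public class SpeechLatencyLogger : MonoBehaviour
{
    readonly Stopwatch stopwatch = new Stopwatch();

    // Call this when the Speech-To-Text service returns a final transcript.
    public void OnTranscriptReceived(string transcript)
    {
        stopwatch.Restart();
        RequestSpeechOutput(transcript); // hand the text to the TTS service
    }

    // Call this when the Text-To-Speech audio comes back from Watson.
    public void OnAudioReceived(AudioClip clip)
    {
        stopwatch.Stop();
        UnityEngine.Debug.Log($"STT -> TTS delay: {stopwatch.ElapsedMilliseconds} ms");
    }

    void RequestSpeechOutput(string text)
    {
        // Placeholder for the Watson TTS request; the real call depends
        // on the updated Watson Unity SDK API.
    }
}
```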

At the moment, the only issue is that the API for the Watson Unity SDK is being updated and should be released in the next week or two, according to forum posts by one of the IBM developers. After the updated SDK is released, I will start the sprint.


I have also been looking into some alternatives to IBM Watson, such as Microsoft’s Azure Cognitive Services. They released a beta in July for Unity, and it works with Unity 2018.3 and Unity 2019.1. The speech services they offer are STT, TTS, speech translation, and a preview of what they call voice-first virtual assistants. There are a few issues with training custom speech models with audio and transcription using UK English, and this could be a problem since many of my students speak with non-American accents. Many of them study in England or Australia on summer language programmes, as the United States is perceived to be too dangerous. I have added this to my Trello board as one of the next cards to explore.

In the meantime, I have also been doing some research on this week’s module on communities of practice. Two years back, when I was in a Java programming bootcamp, I was involved in a few communities, such as the local maker scene in Detroit as well as the local Android users group. Being involved in the user groups gave me a chance to meet other developers with similar interests and helped spark new ideas for projects. It also pushed me to attend meetups and talks on a wide range of technical topics and to learn new skills by attending workshops.

This is something I have been missing since returning to Japan, and since starting the course I have attended two meetings for Unity developers and met with a group that is working with Android and AR. A person I met at the Unity meeting invited me to this event. It was really an amazing time, and some of the people I talked with are working on some cool projects. The meetings have left me feeling inspired after listening to the members share their projects as well as their experience and knowledge. Some of the people I have met have sent me emails about upcoming hackathons and coding jams and asked me to join them. I’m a bit self-conscious about my ability but am thinking about joining them in October for the experience. I may turn this into a sprint goal if it relates to what I’m working on now.




GAM710 wk9

IBM Watson and Unity

After watching the Sprint Planning Meeting video and then reading through the Task Board article, I logged onto my Trello board, looked through my ‘Explore Ideas’ list, and identified my next sprint goal: Speech and the IBM Watson Unity SDK. The goal is to explore how player and AI character speech can create a stronger sense of immersion within a VR space.

I started with a user story centred on a person who wants to interact within VR using speech, similar to Alexa or Google Assistant but a bit more robust. User story: “I want to talk with the characters I meet in the virtual world.”

The first step is to set up a Unity scene and get IBM Watson to recognise speech-to-text and display the text in a UI text field. The ultimate goal is to trigger in-game logic with a virtual character that could respond to users’ verbal actions. This could lead to a language learning environment where students could learn vocabulary simply by interacting with an AI character.

Having moved the Trello card to the ‘Started’ list, I set up a free IBM Cloud account to access Watson. I have planned to spend 8 hours within a one-week period on the initial setup. Speech recognition is one of the core features of the app and will give me a basic structure to build on. The success of the sprint hinges on communicating with the speech-to-text server and having it recognise verbal input through Unity.
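The display side of the scene is simple enough to sketch now. `OnRecognize` below is just a stand-in for whatever callback the Watson Unity SDK invokes with a transcript; only the UI wiring is concrete:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of pushing a Speech-To-Text result into a UI Text field.
// The Watson-specific streaming setup is omitted; OnRecognize stands in
// for the SDK's transcript callback.
public class TranscriptDisplay : MonoBehaviour
{
    public Text transcriptField; // assigned in the Inspector

    // Wire this up as the STT result handler.
    public void OnRecognize(string transcript)
    {
        transcriptField.text = transcript;
    }
}
```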

I will know I have achieved my sprint goal when the Watson servers are able to transcribe the speech as text and display the output in a text field within Unity. The goal is realistic because of the small-steps approach I have used in the past. I have planned to spend two two-hour blocks during the week and a four-hour block on the weekend. I will update my Trello board, and I am using Unity Collaborate as my version tracking system. Keeping to my time schedule, updating my Trello board, and staying focused on simply communicating with Watson will allow me to successfully complete my sprint goal.


I was able to achieve my sprint goal within the time frame I set because I focused on just one basic part of having an AI assistant answer questions. I have posted a video below of the finished sprint. Now that I have the Speech-To-Text displaying in a UI text field, I will start on the next step of getting the Text-To-Speech to output the Speech-To-Text stream. Below is the working Speech-To-Text module in Unity.




GAM710 wk8


After looking at the list of challenges, I decided on:

  • Spend 20 minutes undertaking the ‘I’ phase (inspiration) of the ICEDIP method for a new idea for an app, and then spend 10 mins evaluating the results.

and followed Geoff’s advice to “Let yourself off the leash!”. I’ve been thinking a lot about “presence” (spatial immersion for the technically minded) and the user’s POV, so this activity sounded really interesting. I put the timer on and got started. After twenty minutes, I had a paper full of ideas that started off safe but moved into the “what if” realm of thought.

A few of the interesting ones were:

  • Having an AI simulation that can interact and respond to your actions or reactions using your avatar or voice.
  • Learning a language by being dropped into a “virtual city” where everyone interacts with you in the language you’ve chosen.
  • A VR app that can “read” your dreams so you can explore them again.
  • An AI / VR app that lets you talk with people from the past – you could have a chat with Jean Baudrillard about reality.
  • Having mixed reality pets that have persistence in the world and age.
  • A mixed-reality babysitter for young children.

Spending a few minutes looking back on them, it would be really fun to be able to breathe life into these ideas. I liked the idea of being able to sort through your dreams, or even of creating a dream scrapbook you could share with people. One or two may be possible in a very limited way, but overall it was fun to think about and gave me an idea or two to play with. The challenge now would be to come up with a business plan to pitch one of these ideas and the research needed to support it.

Having said that, I have started researching the idea of a VR language learning app in which users could converse with an AI character. Having an AI character like Miquela Sousa that students could learn with would be a really intriguing experience. So, I started exploring just which speech-capable AIs are available and able to integrate with the Unity platform. Microsoft has Azure, and Amazon has Sumerian, but that service may only run on AWS servers. There are also IBM Watson and SmartBody by the University of Southern California. At the moment, IBM Watson looks to be the most promising because of the recent partnership bringing Watson’s AI functionality to Unity’s game engine, with built-in VR/AR features. I have added this to my Trello board for now and will continue to follow it.

GAM710 wk7.1

Player Movement Sprint

The challenge that I have set for the week deals with evaluating user movement: how the user moves through and interacts with the VR environment, and how this affects comfort and engagement. I will be gathering feedback from my students.

For the sprint, I will use a simple Unity environment and duplicate it, then implement teleportation and thumbstick walking in the two respective scenes for use with the new style of Oculus controllers. First, I’ll read the Oculus VR best practices documentation and additional resources on the use of movement types and then create the VR test environment. Finally, I’ll explore the two movement types inside VR using an Oculus Rift S.

The sprint goal is to understand how movement affects user engagement and how VR cameras and Asynchronous Spacewarp (Oculus Rift) affect user perception. It also looks at how quickly moving or rotating the horizon line or large objects affects player comfort. By importing the Unity Locomotion sample scene as the base environment, it will be relatively easy to build out the test environment quickly. I have committed 8 hours to the project: 4 hours for research and the initial scene buildout, an additional 2 hours to implement the movement control types in the duplicated scenes, and the final 2 hours for testing and recording the results.

The projected 8 hours should be more than enough to accomplish the sprint goal. I have experience with controller mapping inside a Unity scene and will not be using additional software or addons. The sprint will conclude after the testing phase, and the next steps are dependent upon reviewing the sprint outcomes.
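The thumbstick-walking variant can be sketched ahead of time. This assumes the Oculus Integration package’s `OVRInput` API; the exact axis names would need checking against the documentation:

```csharp
using UnityEngine;

// Sketch of the thumbstick-walking movement type for the comparison test.
// OVRInput comes from the Oculus Integration package; the axis name below
// is my assumption about its API and needs verifying.
public class ThumbstickLocomotion : MonoBehaviour
{
    public Transform head;      // the centre-eye camera transform
    public float speed = 2.0f;  // metres per second

    void Update()
    {
        Vector2 input = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);

        // Move relative to where the player is looking, but project onto
        // the horizontal plane to avoid flying and reduce discomfort.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;

        Vector3 move = (forward * input.y + right * input.x) * speed * Time.deltaTime;
        transform.position += move;
    }
}
```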

Trello board

Link to my Trello card:

GAM710 wk7

This week was spent looking at the importance of personas and why they are necessary before diving in and developing. Some things for guiding the persona were:

  • Who – the person using the app
  • Wants – how is this app beneficial to their life?
  • Discovery – how would they access this app?
  • Expectations

I hadn’t realised how important personas were to the success of an app and how they helped the development team make design decisions based on user needs. I also saw how important those stories were to the sprint process and to keeping people focused on the needs of the target audience. I see how this could keep a project on track and help prevent feature creep that could waste time and money.

I didn’t have a personal project in mind when thinking about this assignment, but I did have some ideas based on recent events at work dealing with students and SNS problems. Getting started, I put a real name to my persona along with what they wanted to accomplish and what motivated them. I looked for a picture that went well with the persona I had outlined and that would provide an emotional pull for the team using it.

The persona is tied to the idea of SNSs, and the app I chose is YouTube Kids, focused on the idea of giving working parents ‘peace of mind’ when it comes to their kids’ net viewing habits. This is something the majority of parents worry about while being unsure what their choices are in providing safe apps for their children to use. A multitude of websites advise parents on responsible tech usage for children, and Google’s YouTube Kids helps parents with age-appropriate video. Google wants to “make it safer and simpler for kids to explore the world through online video” and at the same time give parents “a whole suite of parental controls, so they can tailor the experience to their family’s needs.”

My persona is Lykke Li, a working mother worried about giving her children the freedom to watch YouTube but in a safe and controlled environment.


I also spent time reviewing Unit 8 again because I hadn’t put a lot of thought into gathering user research and the different qualitative methods used to gather users’ opinions about an app. I’m going to take Erik Geelhoed’s advice and use interviewing as my main approach to recording and analysing people’s feedback. I teach at a high school, and the students always have an opinion on just about everything, so when the time comes to test my app, I have a ready group of testers. This is something for a future sprint, where I will set aside some time and write out some simple questions, remembering to keep them short and to the point. I will add this sprint idea to my Trello board for the future.


YouTube Kids (persona PDF)

GAM710 wk6

New Goal, Recent Work, and Looking Back

The Unit 6 video with Rich Barham speaking on business plans and pitching really got me thinking about what kind of app I would like to make and how to go about presenting it. As of now, I know I’m looking to make an educational app using VR, and at the moment it is centred on the idea of creating a collaborative virtual workspace where two students could work together fabricating 3D objects that could be saved to a server and then printed on 3D printers. Inside the space, students would work with apps like Tilt Brush, Medium, and Blocks. Afterwards, they would be able to save their work to a cloud space like Poly, where they could download and print their work on a local 3D printer or send it to a print service. This is a very rough idea that I have mentioned in our weekly meeting. The challenge going forward is to spend time fleshing out the idea and then talking with friends for feedback before making a sprint goal for it.

Also, looking back on my latest sprint, VR movement using the VRTK framework, I was able to accomplish two of the three goals (teleportation and object interaction) within the time frame I had set. The third goal of triggering sound effects remains unfinished. I tied sound to actions but had an issue syncing the audio to the event action. I will revisit audio after I’ve had some time to dig a little deeper into how audio and audio effects work in Unity. Later, I can make interactions with audio effects part of a future sprint. Having a smooth integration between visual selection (when an object is selected) and an audio response to that interaction will help reinforce immersion in the environment. The core issue I encountered with audio was a lack of understanding of how audio in Unity really works. While adding ambient sounds to an environment was pretty straightforward, getting audio events to trigger at precise moments was a bit more involved when I was trying to match animations to event triggers. I had looked at how people implemented character interaction with game objects in RPG games and thought I could transfer something similar to a VR environment using hand controllers, but that didn’t really work.
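One approach I want to try in a follow-up sprint is firing the clip from the same code path as the interaction event, so the sound and the visual response land on the same frame. A rough sketch — the `OnSelected` hook is a stand-in for whatever event the interaction framework actually exposes:

```csharp
using UnityEngine;

// Plays a selection sound from the same code path as the interaction
// event, rather than a separate animation, so the audio and the visual
// feedback share a frame. OnSelected is a hypothetical hook point.
[RequireComponent(typeof(AudioSource))]
public class SelectionSound : MonoBehaviour
{
    public AudioClip selectClip;
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    // Call this from the object's selection event.
    public void OnSelected()
    {
        source.PlayOneShot(selectClip);
    }
}
```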

I am also thinking about how relevant a deep understanding of Unity audio is to my overall goals. This is quite similar to my experience with Maya. It is good to have a general understanding of the basic concepts, but there wasn’t any need to be building my own characters when it would be more productive to contract that out and stay focused on my core goals. I’m thinking that contracting out the audio effects would let me spend more time working on the fundamental interactions.

Recently, I have been looking back on what I’ve accomplished in the programme for my week 6 video reflection. I started the course with just basic knowledge of Unity and realised that I would need to spend some serious time learning it, because I didn’t understand how to use assets correctly or what a prefab was. I was also a bit clueless about using simple game object actions with scripts.

After our first app jam in week 3, I knew that I didn’t have the depth of knowledge I needed with Unity and some of the important addons it uses (ProBuilder, ProGrids, Maya integration, and VRTK). I also wasn’t managing my time efficiently while learning and making. Going forward, I am studying Unity using Pluralsight and Lynda. I am focused on just the fundamental training courses and their examples, without going off and exploring “too much.” To keep me on track, time management is the key: I have set up a Trello board and also use a web app called Clockify to track my time. I’ve kept it to two-hour blocks in the evenings, which can be increased on the weekend depending on other commitments. By the end of the week, my goal is to have finished a Pluralsight module and to have spent time building a working example in Unity. Keeping a daily rhythm, setting deadlines with the Pluralsight modules, and keeping my timeline realistic will keep me motivated over the rest of the course. I’m excited to put the new concepts I’ve gained to use in the upcoming weeks.

GAM710 wk5.1

VRTK Sprint

I have been on track with my Unity studies using Trello. I am about to start a short sprint goal of testing what I’ve learned with VRTK. I have completed a few of the tutorial videos and have successfully installed the toolkit into an example project. The challenge now is to be able to repeat the process on a new project and then tie user interactions to my own assets rather than those used in the tutorial.

The sprint will have a time frame of one week, with the total amount of time being 6 hours. This deadline for creating a simple working environment and importing the VRTK framework into the project is quite reasonable. The only issue I foresee at the moment is having to fine-tune the VRTK components to get the assets to work correctly with the framework. First, I’ll need to clone VRTK into Unity from its GitHub repository, then set up teleportation markers around the environment and test teleporting to them. After getting movement set up, I will enable the controllers and test whether I can interact with objects. Having practised with the examples, I expect these two goals to be easily accomplished within the time frame I have set, and they should be a good challenge. This sprint challenge is a test of what I have learned so far and a signal that it’s time to push myself a bit harder with Unity. I am keeping the task realistic because this approach works well for me and keeps me focused.
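For the object-interaction step, making one of my own assets grabbable should boil down to something like the following — a sketch assuming VRTK v3 component names, which I’ll confirm against the toolkit I clone:

```csharp
using UnityEngine;
using VRTK;

// Marks one of my own assets as grabbable, mirroring what the VRTK
// example scenes do with their props. Component and field names below
// assume VRTK v3 and may need adjusting.
public class MakeGrabbable : MonoBehaviour
{
    void Start()
    {
        var interactable = gameObject.AddComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = true;

        // VRTK expects grabbable objects to have a collider and rigidbody.
        if (GetComponent<Collider>() == null) gameObject.AddComponent<BoxCollider>();
        if (GetComponent<Rigidbody>() == null) gameObject.AddComponent<Rigidbody>();
    }
}
```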

I have planned to spend 9 hours split into three-hour blocks during the week, with some free time on the weekend in case I have to move one of the blocks. This will be a one-week sprint, and I will post it to my Trello board to keep track of my progress. The goal is to be able to build basic interactive environments without always having to reference my training materials, which means being able to iterate quicker and start working on projects with more confidence. I believe the goal is reasonable and easily achievable given the amount of time I have set for the project and my recent practice with the toolkit. The VRTK developers also have a very active Discord, giving me an additional resource for help should I need it.

I will update this post with a picture from the finished sprint.






GAM710 wk5

Maya and Character Rigging

I spent the weekend doing some research after getting Maya installed and finishing a few of the tutorials. It’s really amazing what you can do with this, and I’ve just scratched the surface. My goal is to get a basic understanding of building a character and rigging it. I also installed Mudbox and was interested in how you export files as .fbx, because Mudbox files save as .mud. There are some tutorials on Mudbox on the site, but I think I will play with that another day. I am also exploring Substance Alchemist, which can extract textures from scans, which is pretty cool. I need to stay focused and come back to the other Maya workflows later when I have more time to dive in.

I am working now on building a simple character, and it’s going pretty well, just taking a bit of time with the back and forth of moving through the tutorials.

Update: When I started exploring Maya, I thought I could somehow find the time to learn it, but after working with it for a week or so I realised that it was a lot harder to use than I imagined. I wasn’t able to translate the image in my mind’s eye into the character I created with Maya. Something I hadn’t realised until my adviser mentioned it was that it would be better to contract out work like that, rather than spend time I didn’t have on something that wasn’t contributing to my core goal of making a virtual reality app. In hindsight, it was good to have gained a basic understanding of how Maya works and general insight into how character movement works. Having this working knowledge provides an opportunity for team communication but, more importantly, gives me the knowledge needed to communicate what I am looking for when contracting out Maya work to a freelancer.