Inspiration

I set out to address the psychological need that people satisfy with pets, using HoloLens 2. I believed it would be possible to create an entity that provides the same sense of security as a pet by combining mixed reality with a variety of sensors.

What it does

HoloMon aims to be a new friend for the children of a new era.

For example, why do people keep pet dogs and cats? Perhaps because they were once useful animals. But not anymore: if you need a watchdog or a rat catcher, it is cheaper and more effective to use the right tools.

In other words, people no longer keep pets because they are useful. A being different from yourself is always nearby, and that presence makes you feel safe. That, in my opinion, is why people still keep pets today.

Recently, however, it has become progressively harder to keep living creatures due to various problems. If the underlying need is simply a non-human presence by one's side, I believe that pets need not be limited to dogs and cats.

So I am developing HoloMon, a mixed reality friend. I chose to build HoloMon in MR rather than AR or VR because the most important thing is for HoloMon to always be by your side.

The next most important thing is that HoloMon is a sympathetic presence. HoloMon takes full advantage of the tracking capabilities of HoloLens 2 to react to people in various ways.

If you bow, HoloMon bows as well. If you shake or tilt your head, HoloMon imitates you. It also mimics whole-body movement: when you bend down, HoloMon bends down too. HoloMon responds to hand gestures in a variety of ways; for example, it will sometimes play rock-paper-scissors when you hold out your hand.
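The imitation behavior above boils down to steering HoloMon's pose toward the user's tracked pose each frame. The actual app is a Unity/MRTK3 project in C#; the following is only a minimal language-agnostic Python sketch of the idea, where the function name and the smoothing factor are illustrative assumptions, not HoloMon's real code.

```python
def mirror_head_pose(user_pitch, user_yaw, user_roll,
                     current_pose, smoothing=0.2):
    """Move HoloMon's head pose one step toward the user's head pose.

    Angles are in degrees; `current_pose` is HoloMon's current
    (pitch, yaw, roll). Exponential smoothing keeps the imitation
    from looking jittery frame to frame.
    """
    target = (user_pitch, user_yaw, user_roll)
    return tuple(
        cur + smoothing * (tgt - cur)
        for cur, tgt in zip(current_pose, target)
    )

# Example: the user tilts their head 30 degrees; over successive
# frames HoloMon's roll converges toward 30 while pitch/yaw stay at 0.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = mirror_head_pose(0.0, 0.0, 30.0, pose)
```

The same pattern extends to bowing and bending: read a tracked quantity from the headset, then ease HoloMon's matching animation parameter toward it.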

HoloMon's lifestyle is the same as ours: it sleeps at night, eats in the morning, and plays with toys at noon. What I find most unique is that HoloMon does not have to be a useful AI.
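The daily rhythm can be driven by a simple time-of-day lookup. This is a hedged sketch in Python (the real app is C#), and the hour boundaries are illustrative assumptions, not HoloMon's actual schedule.

```python
def activity_for_hour(hour):
    """Pick HoloMon's activity from the hour of day (0-23).

    Illustrative boundaries: sleep at night, eat in the morning,
    play with toys around noon, and idle otherwise.
    """
    if hour >= 22 or hour < 6:
        return "sleep"
    if 6 <= hour < 10:
        return "eat"
    if 11 <= hour < 14:
        return "play"
    return "idle"
```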

If HoloMon had always been by my side when I was a child, that alone would have been enough to help me sleep peacefully on quiet nights. That is both the reason for creating HoloMon and its goal.

How we built it

To enable HoloMon to recognize the user's speech and hand gestures, I used the MRTK3 subsystems. For example, MRTK3 exposes 26 joints of the hand, allowing HoloMon to identify hand gestures in fine detail. By using the WindowsSpeech subsystem for speech recognition, I can easily define arbitrary voice commands for HoloMon to recognize. I also used other MRTK3 features, such as Spatial Mapping, so that HoloMon can walk around in the real world. As for the character itself, I created the HoloMon models and animations in Blender.
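One simple way to turn per-joint hand data into a rock-paper-scissors move is to count how many fingers are extended. In MRTK3 this would be done in C# against the hands subsystem; the sketch below is only an illustrative Python version of the classification step, with the finger-extension test assumed to happen upstream (e.g. by comparing fingertip-to-wrist distance against knuckle-to-wrist distance for each finger).

```python
def classify_rps(extended_fingers):
    """Classify a rock-paper-scissors hand shape.

    `extended_fingers` is the number of fingers judged extended by an
    upstream per-joint check. Illustrative mapping: a closed fist
    (0 extended) is rock, exactly 2 extended is scissors, an open
    hand (4 or more) is paper; anything else is ambiguous.
    """
    if extended_fingers == 0:
        return "rock"
    if extended_fingers == 2:
        return "scissors"
    if extended_fingers >= 4:
        return "paper"
    return "unknown"
```

Treating in-between counts as "unknown" rather than guessing keeps HoloMon from reacting to half-formed hand shapes mid-gesture.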

Challenges we ran into

I challenged myself to identify the user's actions in more detail using the HoloLens 2 sensors. For the user to feel that HoloMon is aware of them, HoloMon needed to react to a variety of user actions.

Accomplishments that we're proud of

As a result of this challenge, I was able to recognize various user actions. For example, by measuring the distance between the head and the ground, I can identify whether the user is standing or bending. By using information about the hand joints, I can also identify hand gestures in finer detail. HoloMon reacts to these user actions in a variety of ways.

What we learned

I found that reactions to gestures occur more intuitively, and in more situations, than reactions to words or UI actions. I learned that interactive communication within an app requires a mechanism that identifies detailed actions with a high degree of accuracy.

What's next for HoloMonApp

I would like to add a sharing feature so that multiple people can share the HoloMon experience. I want HoloMon to be able to recognize multiple users and their interactions, and react accordingly.
