Developing the Future

Discussing the future possibilities of AR/MR, UX design, and a universal interface model.

I'll be discussing ideas I have around AR/MR development and would love input and discussion about your ideas.

Let's build tomorrow together

Let’s talk about clipping. It is annoying the moment you run into it.
Clipping is basically when part of the scene appears cut off at the edge of the viewable box (containment area), and you have to move your head around to be able to see the rest of the scene.

So what can we do to make it less noticeable? Clipping is very apparent when you look at something contained by some kind of fencing around an area.

Let’s take the “Exit the app” dialog box that appears. It pops up right in front of your eyes without clipping, and under normal circumstances you will make a choice there and then. If you move your head around a little to look at the scene, you will immediately notice that the dialog box gets clipped, and it throws you out of the AR immersion.

The same thing happens with something like a board game you play in AR on a virtual board. Your brain knows that the board itself is a containment area and expects it to be a certain shape or structure.

The reason is that your brain knows the containment fence is supposed to continue there, but suddenly it cannot be seen, and your brain immediately severs from the scene to try and make sense of what is happening. The big issue here is the fence lines that your brain predicts should be there, not the scene itself. One must remember that there can be containment areas within containment areas; for instance, a house in a scene is in itself also a containment area.

Clipping is probably not something you will ever get rid of, but there might be some guidelines on how to minimise the number of times you get severed from the AR scene.

  1. Removing containment areas. Containment areas are used to group things together. In a dialog, the frame is the containment area, and all the text and buttons relate to the same thing. I’ll write some apps where the dialog containment areas from current UI designs are removed, to see if your brain can still figure out by itself what belongs together. I’ll also report back on how well it works to prevent severing.

For something like a board game you can remove the board if it does not serve a functional purpose. In chess the board marks where you can move to, so removing it is not an option. But for a real-time strategy game, a visual fence that only shows where your units are allowed to move serves little purpose.

An easy solution for the real-time strategy game is to just leave out the visual fence and simply not allow the units to move any further.
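As a minimal sketch of that, assuming a flat rectangular play area and 2D unit positions on the floor plane (all names and sizes here are made up for illustration), the fence can exist purely as a movement clamp with nothing drawn at all:

```kotlin
// Hypothetical rectangular play area on the floor plane, in metres.
data class PlayArea(val minX: Float, val maxX: Float, val minZ: Float, val maxZ: Float)

// A unit's position on the floor plane.
data class UnitPosition(val x: Float, val z: Float)

// Clamp a requested move so the unit never leaves the play area,
// without drawing any visible fence around it.
fun clampToPlayArea(requested: UnitPosition, area: PlayArea): UnitPosition =
    UnitPosition(
        x = requested.x.coerceIn(area.minX, area.maxX),
        z = requested.z.coerceIn(area.minZ, area.maxZ)
    )

fun main() {
    val area = PlayArea(minX = -2f, maxX = 2f, minZ = -2f, maxZ = 2f)
    // A move that would take the unit past the invisible boundary...
    val requested = UnitPosition(x = 3.5f, z = 1.0f)
    // ...is silently pulled back to the edge instead of being fenced off visually.
    println(clampToPlayArea(requested, area)) // UnitPosition(x=2.0, z=1.0)
}
```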

  2. Enlarging containment areas. What can be done where the board is actually needed, which also automatically creates a containment area?
I want to play around with the idea of enlarging containment areas beyond their functional space. So with chess you keep the chess board, but let the rim blocks actually run out into a field beyond the board instead of ending at a final fencing line. This might trick the brain into thinking the blocks are visually supposed to continue like that from the board, while you still know it’s a chess game.
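To make that concrete, here is a rough sketch that generates the 8x8 board plus a few rings of rim tiles whose alpha fades the further they sit outside the playable area, so there is no hard fencing line at the edge. The tile representation, rim width, and fade curve are all assumptions for illustration, not tied to any particular engine:

```kotlin
// A tile to render: board-grid coordinates plus an alpha used for fading.
data class Tile(val col: Int, val row: Int, val alpha: Float)

// Generate the 8x8 playable board plus `rim` extra rings of tiles around it.
// Tiles inside the board are fully opaque; rim tiles fade out with distance
// so the board has no final fencing line at its edge.
fun boardWithFadingRim(rim: Int): List<Tile> {
    val tiles = mutableListOf<Tile>()
    for (col in -rim until 8 + rim) {
        for (row in -rim until 8 + rim) {
            // How many tiles outside the playable 0..7 range this tile sits (0 = inside).
            val dCol = maxOf(-col, col - 7, 0)
            val dRow = maxOf(-row, row - 7, 0)
            val outside = maxOf(dCol, dRow)
            val alpha = if (outside == 0) 1f
                        else (1f - outside.toFloat() / (rim + 1)).coerceAtLeast(0f)
            tiles += Tile(col, row, alpha)
        }
    }
    return tiles
}

fun main() {
    // Two rings of fading rim tiles around the 8x8 board.
    boardWithFadingRim(rim = 2)
        .filter { it.row == 3 }                    // one row as a cross-section
        .forEach { println("col=${it.col} alpha=${it.alpha}") }
}
```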

  3. Creating artificial perception areas. Your eyes can only perceive small parts of the world sharply at any one instant. Your eyes look around constantly and stitch together a larger image of what you are seeing. If you look very far left, right, up, or down you will see that there are huge distortions, and your brain doesn’t bother trying to stitch them in.

It would be interesting to see what effect it has on your brain if you confine the perceivable area artificially. By blurring out the edges of your scene you might be able to trick the brain into thinking it cannot perceive anything beyond that and stop your eyes tracking beyond that border.
You will lose some screen real estate where you can present information, but if your brain can be tricked like this, it might not sever as badly when you move your head to perceive more.
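As a rough illustration of the kind of falloff that would need, here is a sketch that computes a fade factor from the normalised distance to the screen centre. The radii and the smoothstep shape are assumptions, and in a real app this would live in a fragment shader rather than in CPU code:

```kotlin
import kotlin.math.sqrt

// Fade factor for a point on screen given normalised coordinates in 0..1.
// 1.0 = fully visible, 0.0 = fully blurred/faded out at the border.
// innerRadius and outerRadius (both assumptions) define where the fade happens.
fun perceptionFade(
    u: Float, v: Float,
    innerRadius: Float = 0.6f, outerRadius: Float = 0.95f
): Float {
    // Distance from the screen centre at (0.5, 0.5), scaled so ~1.0 reaches the edges.
    val dx = u - 0.5f
    val dy = v - 0.5f
    val dist = sqrt(dx * dx + dy * dy) * 2f

    // Smoothstep from fully visible inside innerRadius to fully faded at outerRadius.
    val t = ((dist - innerRadius) / (outerRadius - innerRadius)).coerceIn(0f, 1f)
    val smooth = t * t * (3f - 2f * t)
    return 1f - smooth
}

fun main() {
    // Sample the fade along the horizontal centre line of the screen.
    for (u in listOf(0.5f, 0.7f, 0.85f, 0.95f, 1.0f)) {
        println("u=$u fade=${perceptionFade(u, 0.5f)}")
    }
}
```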

These 3 things are all just ideas I have right now. Over the next few weeks I’ll create apps exploring them and report back here on what works and how to get the best results in different situations.

If you have any other ideas or know about things that work, please let me know. I would love to test these ideas against other ideas, or against improvements to the ideas above.

Great ideas. Hope you get great apps too. :smiling_face_with_three_hearts:


Tested the idea of removing the containment area for dialogs, and it actually works very, very well, even if the containment area is only a border without a fill. Without it, it is almost as if the text that goes out of scope just fades away, but with the containment area present your brain fills in an artificial hard line where it goes out of scope.

If you deliberately try to see the text actually going out of scope you will see it, but if you first move your eyes to the left or right towards where you want to look and then move your head, the sever just fades away.

It is not perfect; you sometimes do see the break without wanting to. I noticed that I sometimes pick it up more easily when I move my head from right to left. 99% of the time when I move my head from left to right I do not notice it at all.

When I’m done with all the tests I want to do, I’ll post the APK file somewhere for anyone to play around with and see the results for themselves.

Tried out the fading of the edges with a board-type scene. No real success. Implemented fog of war, spatial density, and edged fog.

I did realise that the closer the focal distance is to the scene, the less apparent the clipping effect is. Up to, I would guess, 2-3 m away, when you view the scene continuously as you would in a real-time strategy game, you don’t notice the clipping. If you are further away, the clipping becomes really apparent.

Had another idea pop up while testing these. I decided to treat the FOV itself as a containment area: basically I drew a thickish border around the field of view. It worked perfectly. Your brain only focuses on details inside the containment area (the wide frame), and any other visual artefacts like clipping are completely ignored.
This is a very good solution for most cases, where the frame can also be used to display extra information about the scene, e.g. score and lives left in a game.
The only case where I can see it not working well is an open-world scene where you walk around in the scene. I think the containment area would be a bit of a distraction in such a case.
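For anyone who wants to try the frame idea, here is a small sketch of how one might size that thickish border so it just fills the field of view at a fixed distance in front of the camera. The FOV values, distance, and thickness fraction are placeholder assumptions; a real headset SDK would report its own FOV:

```kotlin
import kotlin.math.tan

// Outer and inner rectangle of a frame quad, in metres, centred on the view axis.
data class Frame(
    val outerWidth: Float, val outerHeight: Float,
    val innerWidth: Float, val innerHeight: Float
)

// Compute a frame that fills the field of view at `distance` metres in front of
// the camera, with the border taking up `thicknessFraction` of each side.
fun fovFrame(
    horizontalFovDeg: Float, verticalFovDeg: Float,
    distance: Float, thicknessFraction: Float = 0.1f
): Frame {
    // Visible width/height of the view frustum at the given distance.
    val width = 2f * distance * tan(Math.toRadians(horizontalFovDeg / 2.0)).toFloat()
    val height = 2f * distance * tan(Math.toRadians(verticalFovDeg / 2.0)).toFloat()
    return Frame(
        outerWidth = width,
        outerHeight = height,
        innerWidth = width * (1f - 2f * thicknessFraction),
        innerHeight = height * (1f - 2f * thicknessFraction)
    )
}

fun main() {
    // Placeholder FOV values; the space between inner and outer rectangles is
    // where score, lives, and other scene information could be drawn.
    println(fovFrame(horizontalFovDeg = 90f, verticalFovDeg = 90f, distance = 1.5f))
}
```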

Implemented the removal of the containment area where objects appear on the real floor instead of on a generated textured ground.

With a generated ground the clipping is obvious, but when the generated ground is removed and objects are moving on the real floor you rarely see the clipping. When objects are static and big, clipping becomes obvious on those objects.

Smaller objects have a lesser clipping effect.

With animated objects that move, a person’s focus also helps to minimise the clipping effect a little. If you are really focussed on one object for some reason, then other objects that move around do not appear to clip that much.

As soon as I have uploaded the APK I used for these tests, I’ll update this post with a link to it.

For the next set of tests I’ll investigate how to present information to a user so that the details are easy to recognise and do not obscure other parts of the scene unnecessarily.

APK for the tests run in the clipping experiments