Magic Leap 2 uses cameras and IMUs (Inertial Measurement Units) to accurately determine its relative pose (position and rotation) at any time. By knowing its pose, the device can make objects appear world-locked while the headset is moving in space. This means digital content remains in its initial position and orientation relative to the world while the user looks at it from different viewpoints.
The system that determines the pose of the device is called headpose. It works reliably in most indoor environments. Due to the nature of the technology, some conditions work better than others. This article's goal is to provide guidance on where headpose will work well and which environments might prevent the device from working at its best.
Environments
Different aspects of the environment can have an impact on headpose performance.
Texture
Headpose relies on recognizing and tracking features in the environment. Features are salient points such as corners, intersections of lines, and specks. Those features act as landmarks that can be recognized from different viewpoints. The more such features are visible and the more they are scattered, the better the headpose accuracy and robustness will be. Such environments are referred to as "texture-rich" environments. Although the device is designed to also track in relatively low texture environments, the more texture, the more accurate and robust the experience.
Ideal texture
Different kinds of objects in the scene. The more clutter, the better.
- Type: Anything that is not uniform and that is static in the room. Examples:
  - Pictures, posters, logos, signs, text, etc.
  - Furniture, interior, plants
  - Smaller installed (static) electronics like sensors, light switches, badge readers
  - Machinery that is mostly static
- Distribution: As distributed in the room as possible. Ideally the cameras can see features in every part of the room.
- Size: Much less important than distribution. Object size can be between 2 cm and 2 m.
- Distance: Texture needs to be close to the device, within 1-4 m.
Challenging environments:
- Environments with large uniform surfaces should be avoided, like a room with only uniform walls.
- Problems might also arise in the presence of many similar features, which may be confused with each other. This can happen, for example, when observing a repetitive structure on the ceiling without any other salient points in the field of view of the device.
- Texture only far away (> 4 m). Examples include looking out the window or large open spaces (like hallways) with little texture in close proximity.
- Many moving objects (see section Scene Dynamics), examples including:
- Height adjustable tables
- Non-static chairs
- Large screens with changing content or video
Recommendations for environment design:
- You can hang posters or signs on the walls, add plants or other feature-rich objects to the environment to improve tracking.
- If it's not possible to alter the environment, ensure that feature-rich parts of the environment are visible to the device's cameras at all times.
Good conditions
Challenging, uniform surfaces
Repetitive Texture
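The "distribution matters more than size" guidance above can be sketched as a toy metric. This is purely illustrative: the real headpose feature tracking is internal to the device, and the grid resolution and example coordinates below are arbitrary assumptions.

```python
# Toy metric for how well features are "scattered" across a camera view.
# A higher score means features are more evenly distributed, which the
# article says matters more for headpose than feature size.

def coverage_score(features, width, height, grid=4):
    """Fraction of grid cells that contain at least one feature point.

    features: list of (x, y) pixel coordinates of salient points.
    """
    cells = set()
    for x, y in features:
        cx = min(int(x * grid / width), grid - 1)
        cy = min(int(y * grid / height), grid - 1)
        cells.add((cx, cy))
    return len(cells) / (grid * grid)

# Clustered features: many points, but all in one corner of the view.
clustered = [(10 + i, 10 + i) for i in range(20)]
# Scattered features: fewer points, spread over the whole view.
scattered = [(80 + 160 * i, 60 + 120 * j) for i in range(4) for j in range(4)]

print(coverage_score(clustered, 640, 480))   # low: poor distribution
print(coverage_score(scattered, 640, 480))   # high: good distribution
```

Note that the clustered view has more feature points in total but scores far lower, matching the article's point that distribution across the view is what counts.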
Light
Headpose works in a large variety of light conditions and should perform well in most scenarios. As a general rule of thumb: the more light, the better.
Ideal conditions:
- Artificial uniform indoor illumination > 50 lux
Challenging conditions:
- Low light scenarios with < 10 lux. Reduced performance, leading to tracking loss and higher variations of the pose, can be observed in very low light conditions. Headpose will then not handle fast motions as well and may not be able to recover from tracking-loss events.
- Very strong brightness differences, e.g. single very bright spot lights.
- Changing Light: Massive lighting changes such as turning ceiling lighting on or off can pose challenges to headpose and should be avoided. Headpose is more robust against incremental lighting changes.
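The illuminance guidance above can be summarized as a simple classifier. The thresholds (above 50 lux ideal, below 10 lux challenging) come straight from this article; the function itself is an illustrative sketch, not a device API.

```python
# Classify an ambient illuminance reading against the article's guidance.
# Thresholds are from the article; everything else is illustrative.

def classify_illuminance(lux):
    if lux > 50:
        return "ideal"        # uniform artificial indoor lighting > 50 lux
    if lux < 10:
        return "challenging"  # reduced robustness, possible tracking loss
    return "workable"         # usable, but more light is better

print(classify_illuminance(300))  # typical office lighting -> "ideal"
print(classify_illuminance(5))    # dim room -> "challenging"
```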
Scene Dynamics
The environment (“scene”) consists of static and dynamic parts. Moving people, objects (doors, as an example), or textures (such as content on computer/tv screens) can cause tracking instabilities if they occupy a prominent portion of the field of view of the device. The negative impact of the dynamic objects on the tracking quality will grow stronger if more of the device’s field of view is covered by them (especially in low-texture environments such as corridors).
Ideal conditions:
- The best tracking performance can be achieved in completely static environments with rich texture. In environments with dynamic content, tracking performance can be improved by enhancing the environment with static objects, preferably ones with a lot of texture (posters on the wall, plants, etc), and keeping static content in the field of view as much as possible.
Challenging conditions:
- Crowded places can be especially challenging due to the amount of moving people. When navigating through a crowd, tracking quality can be improved by focusing on static parts of the environment ("looking above the people") instead of focusing directly on the crowd, so that the complete field of view of the device is not covered by moving people.
- Walking through doors can be another challenging scenario, especially in otherwise low-textured surroundings (see image). Looking away from the door (preferably onto an area with texture) while opening it will help to maintain the tracking quality. Also removing texture from the moving parts (like a poster from the moving door) can improve the experience.
Specular Reflections
When the cameras capture reflective surfaces such as shiny floors, glass walls, or mirrors, the reflections in those areas can make it appear as if the device is moving in a different direction than it actually is. Depending on the size of those areas and the rest of the environment, this can be a problem for tracking. Try to avoid areas with large reflective surfaces, or try to cover those surfaces at least partially.
Ideal conditions:
- No reflective surfaces present. The fewer specular reflections present, the better.
Challenging conditions:
- Large glass walls taking up a big part of the field of view of the cameras on the headset
- Large mirrors taking up a big part of the field of view of the cameras on the headset
Motions
Motion Speeds
The device is designed to support all kinds of user motion; however, very fast motions can be challenging and reduce performance.
Ideal conditions:
- Walking and looking around without sudden fast motions.
Challenging conditions:
- Shaking your head rapidly.
Device Handling
Headpose works best when the device is on the head due to the constrained motion (see Motions). To mitigate headpose issues when the device is off-head (taking it off the head, putting it down for later use, putting it back on, carrying it by hand), it's important to handle the device gently and adhere to the following recommendations. If you have standby mode enabled, make sure to read about best practices for when in Standby Mode.
Holding and Carrying the Device
- Never cover any of the cameras; hold the device by the temples in a horizontal or upright (pointing upwards) position to make sure that the cameras can see as much of the scene as possible.
- Left: Good - the device is horizontal and the cameras are not covered
- Middle: Bad - the device is pointing down to the floor
- Right: Bad - one camera is covered
- The device should be pointed towards a textured scene without dynamics (see Scene Dynamics)
- The device should have at least 30 cm / 1 ft distance to the closest surface in the viewing direction
Putting the Device Aside for Later Use
"Later use" can include scenarios like charging.
- Follow the recommendations for holding or carrying the device
- Gently place the device so that it's pointed towards a textured scene without dynamics (e.g. people, or a monitor with changing content) at a sufficient distance to any close-by objects in front of the device. The device can also be placed upside down so that the cameras can see more of the room.
- Left: Bad - the device is pointing towards the person on the desk
- Middle: Bad - the blue box in front of the device is too close, obstructing the field-of-view
- Right: Good - the device is looking at a textured scene with enough distance
Standby Mode
If standby mode is enabled, headpose will be shut down in standby. Standby mode is entered if your eyes are not detected for a period of ~5 to 10 seconds, such as when the device is taken off your head. Upon entering normal power mode (eyes detected for a few seconds), headpose attempts to regain tracking based on the session before standby. Follow the below recommendations to maximize the chances and speed for successful relocalization when returning from standby to normal mode:
- When taking the device off your head, keep it in your hands for an additional 5 to 10 seconds to allow it to enter standby mode in good conditions (looking at a textured and static scene)
- When putting the device on your head, look at a distinctive part of the scene (such as a richly textured area without repetitive patterns) that has already been looked at before entering standby mode to help headpose recognize the scene and successfully regain tracking
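The standby behaviour described above can be sketched as a small state machine: standby is entered after eyes go undetected for roughly 5 to 10 seconds, and normal mode resumes once eyes are detected again for a few seconds. The exact timings below (7 s to standby, 2 s to wake) are assumptions chosen inside the ranges this article gives; the real device logic is internal.

```python
# Illustrative state machine for standby mode, assuming timings
# within the ranges the article describes.

STANDBY_AFTER_S = 7.0   # inside the ~5-10 s window for eyes undetected
WAKE_AFTER_S = 2.0      # "eyes detected for a few seconds"

class StandbyTracker:
    def __init__(self):
        self.mode = "normal"
        self._since = 0.0  # seconds in the current (un)detected streak

    def update(self, eyes_detected, dt):
        if self.mode == "normal" and not eyes_detected:
            self._since += dt
            if self._since >= STANDBY_AFTER_S:
                self.mode = "standby"   # headpose is shut down here
                self._since = 0.0
        elif self.mode == "standby" and eyes_detected:
            self._since += dt
            if self._since >= WAKE_AFTER_S:
                self.mode = "normal"    # headpose attempts to relocalize
                self._since = 0.0
        else:
            self._since = 0.0           # streak broken, restart the timer

tracker = StandbyTracker()
for _ in range(8):              # 8 s without eye detection
    tracker.update(False, 1.0)
print(tracker.mode)             # -> "standby"
for _ in range(3):              # eyes detected again for 3 s
    tracker.update(True, 1.0)
print(tracker.mode)             # -> "normal"
```

The recommendation to keep the device in your hands for a few extra seconds maps to letting the first timer expire while the cameras still see a good scene.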
Outdoor Conditions
Outdoor conditions are not currently supported or tested by Magic Leap.
Moving Platforms
Moving platforms are challenging due to the fact that the IMU measurements are not in agreement with what cameras see. This is very similar to the nausea that humans experience when reading a book in a moving car. Magic Leap 2 does not currently support moving platforms. Magic Leap is continuously improving the capabilities of its products and might officially support certain moving platform scenarios with software updates in the future.
The following are examples of moving platforms:
- Elevators
- Cars
- Ships / vessels
- Airplanes
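The disagreement described above can be illustrated with a toy consistency check: on a moving platform the IMU senses the platform's motion while the cameras, seeing only the static cabin interior, report almost none. The function and tolerance below are illustrative assumptions, not how the device actually fuses its sensors.

```python
# Toy check for inertial/visual disagreement on a moving platform.
# Tolerance and displacement figures are arbitrary illustration values.

def sensors_disagree(imu_displacement_m, visual_displacement_m, tol_m=0.05):
    """True if inertial and visual motion estimates diverge beyond tol_m."""
    return abs(imu_displacement_m - visual_displacement_m) > tol_m

# Static room: both estimates agree within tolerance.
print(sensors_disagree(0.30, 0.28))  # -> False
# Elevator: the IMU feels 2 m of vertical travel, but the cameras see
# an unmoving cabin interior.
print(sensors_disagree(2.0, 0.01))   # -> True
```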
Handling Headpose Issues
Headpose Lost
Under low lighting or insufficient-texture conditions, the device may not be able to continue tracking its environment (see the earlier sections for descriptions of commonly encountered scenarios). As a result, the user will see the digital content "head-locked", meaning that the content follows the head movement and is not locked to the environment, along with a system notification indicating that headpose is lost.
The device attempts to relocalize itself from this lost state. This attempted re-localization makes use of the map created as the device was tracking the environment. To help the device re-localize (regain tracking), it needs to see a scene in the environment that was tracked earlier. The device is usually able to regain tracking within one second in most typical cases.
If re-localization is not successful after 15 seconds, headpose will automatically reset and a new tracking session will begin (see Headpose Reset). The following are the scenarios most likely to cause tracking loss:
- Low light (usually less than 5 lux)
- Insufficient texture in the environment like in corridors, or when the user is very near the walls.
- Dynamic objects covering the majority of the device's tracking camera view, e.g., in very crowded spaces.
- Very fast motions. Normal to brisk walking is fine; however, running might cause tracking to be lost more often.
Recommendation:
- Go back to a place where tracking used to work prior to losing track and look around there. In most cases tracking can then continue.
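The timeline above can be summarized in a small simulation: after headpose is lost, relocalization typically succeeds within about a second once a previously mapped scene is visible, and a reset with a new tracking session occurs after 15 seconds if it is not. The time constants come from this article; the function itself is an illustrative sketch of the behavior, not device code.

```python
# Simulate the lost -> relocalize / reset decision the article describes.
# RESET_TIMEOUT_S is from the article; the rest is illustrative.

RESET_TIMEOUT_S = 15.0

def relocalization_outcome(seconds_until_mapped_scene_visible):
    """Outcome if a previously mapped scene only becomes visible after
    the given number of seconds in the 'lost' state."""
    if seconds_until_mapped_scene_visible <= RESET_TIMEOUT_S:
        return "relocalized"   # tracking continues, content stays in place
    return "reset"             # new session; previous map and poses are lost

# User follows the recommendation and looks back at a known area quickly:
print(relocalization_outcome(1.0))   # -> "relocalized"
# User stays in an unmapped area past the timeout:
print(relocalization_outcome(30.0))  # -> "reset"
```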
Digital Content Shifted
Sometimes it can happen that the digital content does not remain locked to the environment but unexpectedly appears at a wrong location, i.e. slightly shifted from its original position.
There are mainly two types of "content shift" with different root causes:
- Virtual content may not stay exactly in the same location in the real world as you walk around it and may shift slightly. However, from the same viewpoint, it will always appear in the same location. To mitigate this issue, place the virtual content while you are at the location from which you intend to interact with it.
- Shift of the virtual content when walking further away from the content, and returning back to the original position. If the content is stuck at a shifted position in this case, the map internally built by the device may be damaged; in this case, try a forced relocalization:
- Cover all 3 cameras for a few seconds until the device loses track.
- If the problem persists after relocalization, reset the map completely by covering the cameras for at least 15 seconds. Note: this will require placing the virtual content again.
- Shift after a headpose reset. When headpose resets and a new tracking session is started, all information from the previous session is lost and the digital content will appear at a new place.
Digital Content Flying Away
Under challenging scenarios (low light, highly dynamic scenes), or when performing certain actions like opening doors or walking on areas with reflective surfaces, tracking might become unstable. In some rare cases, the virtual content might “fly-away”, resulting in a “headpose lost” issue. Your Magic Leap 2 will attempt to relocalize itself from this lost state.
This attempted re-localization makes use of the map that was being created as the device was tracking the environment. To help the device relocalize (and regain tracking when tracking is lost), let it see a scene in the environment that was tracked earlier.
Digital Content Appears Tilted
In order to correctly align the digital content with the world (i.e. align with gravity), headpose needs some motion to accurately estimate the alignment. The best practice is to move the device 0.5 m / 20 inches after headpose initialization to make sure the alignment can be computed accurately.
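The 0.5 m guideline above can be sketched as a simple check that accumulates travelled distance from headpose samples. The pose samples and helper below are hypothetical; only the 0.5 m figure comes from this article.

```python
# Accumulate path length over headpose position samples and report when
# the 0.5 m motion guideline has been met. Illustrative sketch only.

import math

MIN_TRAVEL_M = 0.5  # motion recommended after headpose initialization

def alignment_ready(positions):
    """positions: chronological (x, y, z) headpose samples in metres.
    Returns True once the accumulated path length reaches 0.5 m."""
    travelled = 0.0
    for p0, p1 in zip(positions, positions[1:]):
        travelled += math.dist(p0, p1)
        if travelled >= MIN_TRAVEL_M:
            return True
    return False

standing_still = [(0.0, 0.0, 0.0)] * 10
short_walk = [(0.1 * i, 0.0, 0.0) for i in range(8)]  # 0.7 m of travel

print(alignment_ready(standing_still))  # -> False: not enough motion yet
print(alignment_ready(short_walk))      # -> True: alignment can settle
```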
Headpose Reset
If headpose is lost and cannot relocalize within 15 seconds (for example, because the user is in a new environment), it will reset. All headpose information from the previous session is lost, and the digital content from the previous session will appear at the wrong place if not handled correctly by the app.
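One way an app can "handle this correctly" is to treat stored poses as valid only within the tracking session they were captured in, and re-place content when a new session begins. The session-id pattern below is an illustrative sketch, not a Magic Leap API.

```python
# Illustrative pattern: invalidate content poses across headpose resets.
# ContentAnchor and the integer session ids are hypothetical constructs.

class ContentAnchor:
    def __init__(self, pose, session_id):
        self.pose = pose              # pose in the tracking session's frame
        self.session_id = session_id  # session the pose was captured in

    def pose_for(self, current_session_id):
        """Return the stored pose only if it is still meaningful."""
        if current_session_id != self.session_id:
            return None  # headpose reset: re-place the content instead
        return self.pose

anchor = ContentAnchor(pose=(1.0, 0.0, 2.0), session_id=1)
print(anchor.pose_for(1))  # same session -> (1.0, 0.0, 2.0)
print(anchor.pose_for(2))  # after a reset -> None, app must re-place
```

Without a check like this, the app would render content at coordinates that refer to a coordinate frame that no longer exists.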