Spatial Calibration

This topic covers the basic steps of aligning the physical and virtual worlds within Designer, the defining attribute of the xR workflow.

A well-calibrated xR stage reveals no seams or visual artifacts that break the blend between the real and virtual environments, and with adequate preparation it can be fully calibrated in a few hours.

Prior to beginning spatial calibration, ensure that:

  1. A camera tracking system is set up and receiving reliable data.
  2. The xR project has been configured with an MR set, accurate OBJ models of the LED screens, and tracked cameras with video inputs assigned.
  3. The cameras, LED processors, and all servers are receiving the same genlock signal.
  4. The feed outputs have been configured and confirmed working.
  5. The Delay Calibration has been completed.

Concepts

A Calibration is a set of data contained inside an individual camera object. The calibration process compares structured light patterns against the tracking data to determine where the physical stage sits in relation to the raw tracking data.

An observation is a set of images of the stage, captured by the camera, showing white dots on a black background (called structured light). The number of dots and their size/spacing are determined by the user.

  • Observations are used as data points for the predefined algorithm Disguise will use to align the tracked camera with the stage and set up the lens characteristics.

There are two types of observations used in the process: Primary and Secondary (P and S).

  • Primary observations are the positional or spatial observations used to align the real-world and virtual cameras. A minimum of five Primary observations is required for the solving method to compute, so aim for five good observations when you begin your calibration process. Primary or Secondary status is defined by the most common zoom and focus values in the pool of observations (sketched after this list).
  • Secondary observations comprise the zoom and focus data that creates the lens intrinsics file. Each new zoom and focus position will create a new Lens Pose.
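
Conceptually, the Primary/Secondary split can be pictured as grouping observations by the most common zoom/focus pair. The sketch below is illustrative only; it does not use the Disguise API, and all names are hypothetical.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Observation:
        zoom: float    # zoom value reported by the tracking source
        focus: float   # focus value reported by the tracking source

    def classify(observations):
        """Label each observation 'P' or 'S' based on the most common
        zoom/focus pair in the pool."""
        pairs = Counter((o.zoom, o.focus) for o in observations)
        primary_pair, _ = pairs.most_common(1)[0]
        return ['P' if (o.zoom, o.focus) == primary_pair else 'S'
                for o in observations]

    obs = [Observation(0.2, 0.5), Observation(0.2, 0.5), Observation(0.4, 0.5)]
    print(classify(obs))  # ['P', 'P', 'S']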

There is no need to assign a zero point in Designer. Only minimal offsets and transforms should be applied to align the tracking data with the Disguise origin point prior to starting the Primary calibration.

Start by taking Primary observations at a single locked zoom and focus level. This Primary Calibration will calibrate the offsets between the tracking system and Disguise's coordinate system (illustrated after the notes below).

  • After each observation, the alignment should look good from the current camera position. If the alignment begins to fail, review all observations and remove suboptimal ones.
  • The Secondary calibration calibrates the zoom and focus data. This aligns the virtual content with the real-life camera's zoom and focus changes.
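
The positional part of the Primary calibration amounts to solving for the rigid offset between the tracker's coordinate system and Disguise's. As a rough illustration of the idea (not the actual solver), a rigid transform between two corresponding point sets can be estimated with the Kabsch algorithm:

    import numpy as np

    def rigid_offset(tracker_pts, stage_pts):
        """Estimate rotation R and translation t such that
        R @ tracker_point + t ~= stage_point (both N x 3 arrays)."""
        ct, cs = tracker_pts.mean(axis=0), stage_pts.mean(axis=0)
        H = (tracker_pts - ct).T @ (stage_pts - cs)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cs - R @ ct
        return R, t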

A Lens Pose is the result of the data captured in the observation process. Lens poses are checkpoints that Disguise intelligently interpolates between (see the sketch below). The number of lens poses you end up with depends on the range of your camera's lens.

  • A new lens pose is created for every new combination of zoom and focus values. The most common zoom/focus combination will be the Primary lens pose, attributed to the Primary calibration.
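
One way to picture the interpolation between lens poses: treat each pose as a sample in zoom/focus space and blend its parameters by distance-based weighting. The sketch below is a conceptual illustration, not Disguise's actual interpolation scheme; the poses and field-of-view values are made up.

    import numpy as np

    # Hypothetical lens poses: (zoom, focus) -> field of view in degrees.
    poses = {(0.0, 0.0): 62.0, (0.5, 0.0): 40.0, (1.0, 0.0): 18.0}

    def interpolated_fov(zoom, focus, power=2.0):
        """Inverse-distance-weighted blend of the stored lens poses."""
        weights, values = [], []
        for (z, f), fov in poses.items():
            d = np.hypot(zoom - z, focus - f)
            if d < 1e-9:
                return fov                 # exactly on a stored pose
            weights.append(d ** -power)
            values.append(fov)
        w = np.array(weights)
        return float(np.dot(w, values) / w.sum())

    print(interpolated_fov(0.25, 0.0))     # lands between 62 and 40 degrees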

Workflow

Virtual Set Preview Setup

  1. Create a test pattern layer
  2. Assign a Direct mapping to this layer containing all screens that are used within the MR set being calibrated.
  3. Configure feed output to send content to the LED screen.
  4. Create a virtual line up layer.
  5. Assign a Spatial mapping set to Frontplate for this layer. This will show the representation of the virtual set and will move/deform during the calibration process.
  6. Expand the MR set.
  7. Use CTRL + Left-Click on the header of the editor to pin the window to the GUI. The preview will show the current active camera view, the AR Virtual Lineup overlay, and the test pattern mapped to the LED screen outputs.
  8. To calibrate a range of focus levels, take multiple observations at each zoom level for each focus level. Make sure the zoom is locked for each set of focus observations.

Primary Calibration

The Primary calibration is the set of observations that define the virtual world’s positional and rotational offsets to accurately match the real world, as interpreted by the camera lens.

The Primary observations are defined by the most consistent pair of zoom/focus values within the data set.

  1. Open the Spatial Calibration editor by left-clicking Spatial Calibration from the MR set.
  2. Ensure the correct MR set is selected and the camera being calibrated is the current target.
  3. Verify the camera tracking system is outputting the correct data and there is no scaling applied from the tracking source.
  4. Set the base/most consistent shot for the project. This is called the “hero” shot.
  5. Adjust zoom and focus values to their most consistently used levels in show.
  6. In the calibration editor, left-click Lock Zoom and Lock Focus to fix the current zoom and focus values.
  7. All primary observations will be grouped based on the most consistent zoom/focus combination in the list.
  8. Use the Live Blob preview to set blob size and spacing proportional to your camera lens and LED resolution.
  9. Begin taking primary observations. This will calibrate the offsets between the tracking system and coordinate system within Disguise. Primary observations will be notated in the list with a P indicator.
  10. Take a minimum of five good observations from different camera angles/positions that will be used in show.
  11. Utilize the tools under the Debugging tab to determine whether an added observation is good or bad (a reprojection-error sketch follows this list).
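
A useful mental model for "good" versus "bad" observations is reprojection error: how far the blobs predicted by the current solve land from the blobs actually detected in the camera image. A minimal sketch of that idea (not the Debugging tab's actual metric):

    import numpy as np

    def rms_reprojection_error(projected, detected):
        """RMS pixel distance between blob positions predicted by the
        solve and those detected in the camera image (N x 2 arrays)."""
        sq = np.sum((projected - detected) ** 2, axis=1)
        return float(np.sqrt(np.mean(sq)))

    projected = np.array([[100.0, 200.0], [300.0, 220.0]])
    detected  = np.array([[101.5, 199.0], [298.0, 223.0]])
    print(rms_reprojection_error(projected, detected))  # ~2.85 px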

Secondary Calibration

The Secondary Calibration is a set of checkpoints along the zoom and focus ranges of the camera lens. Each checkpoint is an individual Lens Pose consisting of a specific zoom and focus value. Disguise will interpolate between the defined lens poses as the lens zoom and focus values are changed in show.

  1. Return to the “hero” or base camera position.
  2. Unlock Focus.
  3. Adjust the focus to a new value.
  4. Capture a Secondary observation. Secondary observations will be categorized with an S indicator.
  5. Repeat steps 3 and 4. Each new observation with a new focus value will create a new Lens pose.
  6. Unlock Zoom.
  7. Zoom in at a predefined interval, for example 10%.
  8. Lock Zoom.
  9. Capture several different observations of varying focus values.
  10. Repeat steps 7-9 until 100% zoom has been achieved.
  11. Zoom the camera in and out and adjust the focus as needed. View the MR transmission output to see if there are obvious points where the virtual zoom and focus of the stage elements do not match the real world. At those values, add more zoom and focus observations as needed.
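
The loop in steps 6-10 above amounts to a capture plan over the lens range. The sketch below enumerates such a plan; the 10% zoom interval and three focus marks are examples only.

    # Example capture plan: 10% zoom steps, three focus marks per step.
    zoom_steps  = [i / 10 for i in range(0, 11)]    # 0%..100% in 10% steps
    focus_marks = [0.0, 0.5, 1.0]                   # e.g. near, mid, far focus

    plan = [(z, f) for z in zoom_steps for f in focus_marks]
    print(len(plan), "secondary observations planned")  # 33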

Properties

Settings

The MR set to be calibrated, which will contain:

  • All LED surfaces that will display virtual content.
  • The indirection controller containing the current camera that is being calibrated.

Blob Settings

  • Adjust the settings of individual observations.

  • Adjustments, in pixels, include:

    • the size of each blob that will be displayed for that observation.
    • the spacing between blobs.
  • The option to exclude specific screens from individual observations.

    For example, many stages are calibrated with the floor excluded from the calibration due to the steep viewing angle.
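
As an illustration of how the size and spacing settings define the structured light pattern, the sketch below rasterises a grid of white blobs on a black background. It is a stand-in only: the parameter names are hypothetical, and square blobs are used for simplicity where the real pattern uses round dots.

    import numpy as np

    def blob_pattern(width, height, blob_px, spacing_px):
        """Render a grid of white blobs of blob_px pixels, spaced
        spacing_px apart, on a black background."""
        img = np.zeros((height, width), dtype=np.uint8)
        for y in range(0, height - blob_px, spacing_px):
            for x in range(0, width - blob_px, spacing_px):
                img[y:y + blob_px, x:x + blob_px] = 255   # one white blob
        return img

    pattern = blob_pattern(1920, 1080, blob_px=8, spacing_px=64)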

Observations

A list of all observations saved within the camera object’s calibration.

The data includes:

  • The tracked zoom, focus, positional and rotational data of the camera at the time of the observation capture, as sent by the tracking source assigned to the camera object.
  • The categorization of Primary or Secondary.
  • A status indicator showing whether the observation is enabled or currently active within the calibration.
  • The list number of each individual observation.

Calibration

Adjusts global settings regarding the calibration.

Includes:

  • Observations image source: Live, Write, and Read.

    • Live (default): Images captured in the observation process are stored within the observation object itself and cannot be recovered if the observation is deleted.
    • Write: Backs up all captures in the observation process to a newly created folder within the Windows project folder, called /debug.
    • Read: Reads captured images back in to recreate a calibration offline.
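
The three modes can be read as a small policy around where observation captures live. A schematic sketch, assuming a bytes-in/bytes-out capture path; everything except the /debug folder name is invented for illustration:

    from enum import Enum
    from pathlib import Path

    class ImageSource(Enum):
        LIVE  = "live"    # keep captures inside the observation object only
        WRITE = "write"   # additionally back captures up to <project>/debug
        READ  = "read"    # rebuild a calibration offline from saved captures

    def handle_capture(mode, image_bytes, project_dir, name):
        debug_dir = Path(project_dir) / "debug"
        if mode is ImageSource.WRITE:
            debug_dir.mkdir(exist_ok=True)                # create /debug once
            (debug_dir / name).write_bytes(image_bytes)   # back up the capture
        elif mode is ImageSource.READ:
            return (debug_dir / name).read_bytes()        # reload offline
        return image_bytes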

Calibration Results

Provides a list of all lens poses created by differing sets of primary and secondary observations.

Debugging

  • Observation Debugger
  • Plot Calibration Errors
  • Show 3D Observations

Tracker Distortion Compensation

This property allows for the potential correction of non-physical errors in the tracking system.

The available settings are:

  • None: Only accounts for the physical offsets (tracker -> focal point and tracker origin -> Disguise origin) between the tracking system and Disguise. In theory, in a perfect setup, this should be all you need.
  • Single gain: Allows for a single scaling factor between the tracker and Disguise measurements.
  • Gains: Allows for different scaling factors in X/Y/Z axes.
  • Gains and Skews: Also adds skews, roughly equivalent to the tracker axes not being perpendicular.
  • Matrix: Before this setting was added to the UI, Matrix was the hard-coded tracker distortion compensation method. As a byproduct, Matrix also allows for more general distortions (see the sketch below).
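
The progression from None to Matrix can be read as progressively freeing up terms of a 3x3 linear map applied to tracker positions on top of the physical offsets. The sketch below is one reading of the settings, not Disguise internals; the parameter names are invented.

    import numpy as np

    def compensation_matrix(mode, gx=1.0, gy=1.0, gz=1.0,
                            sxy=0.0, sxz=0.0, syz=0.0, full=None):
        """Build the linear map each mode allows; each mode frees up
        more degrees of freedom than the last."""
        if mode == "none":
            return np.eye(3)                      # physical offsets only
        if mode == "single gain":
            return gx * np.eye(3)                 # one uniform scale factor
        if mode == "gains":
            return np.diag([gx, gy, gz])          # per-axis scale factors
        if mode == "gains and skews":
            return np.array([[gx, sxy, sxz],      # scales plus skew terms,
                             [0., gy,  syz],      # i.e. tracker axes not
                             [0., 0.,  gz]])      # quite perpendicular
        if mode == "matrix":
            return np.asarray(full)               # fully general 3x3 map
        raise ValueError(f"unknown mode: {mode}")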

Debugging

There are many contributing factors that may result in a poor spatial calibration. Below is a recommended workflow for troubleshooting the possible causes.

Primary Calibration

Carry out initial checks

  1. Press the ‘Re-run calibration’ button to ensure the set is in a fully calibrated state. This is especially important when re-opening or updating the project file.
  2. Check the primary/secondary observations are labelled as expected. It’s possible that a zoom or focus value has changed and caused them to be misinterpreted as the incorrect kind.
  3. Check the desired observations are enabled in the list editor.

Check whether the ‘solved’ results look good in the observation debugger. If not, this indicates a problem during the blob detection stage. Possible causes could be:

  • Blobs have been detected in the wrong places, e.g. due to reflections. Check in the observation debugger or viewer for any detected blobs that look wrong.
  • The camera moved during the observation.
  • The stage model is not accurate, or the UVs are incorrect (e.g. pointing the wrong way so the screen is flipped).
  • The lens distorts content in a way not captured by our model. For example, anamorphic lenses are not currently supported.
  • Feed output mappings are incorrect.

Check whether the ‘tracked’ results look good in the observation debugger. If not, this indicates a problem with the tracking system registration. Possible causes could be:

  • The tracking system is not engaged and receiving reliable data.
  • The tracking system has physically moved, or something in the setup has changed between observations.
  • The tracking system coordinate system is wrong, e.g. flipped axes or incorrect rotation order. Some of the debugging tools may help diagnose this.
  • The tracking system is encoder-based, and physical components are mis-measured or bending.
  • Try changing the solving method of the calibration.
  • If none of the above helps, a normal project diagnostic with the bad observations included should be enough to look into the issue.

If the observation debugger looks good, but the Virtual Lineup layer/content looks bad, this indicates that something in tracking or registration is not being applied properly. Possible causes could be:

  • The calibration is not up to date. Press the ‘Re-run calibration’ button to be sure.
  • The tracking system has physically moved, or something in the setup has changed.
  • The camera has moved into a position where it is not well calibrated. Try taking another observation in this position.
  • The zoom/focus has changed.

If none of these issues are found, go through the following steps to create a project diagnostic:

  • Take an observation which shows this issue (for example, it looks fine in the observation debugger but the Virtual Lineup Layer is not aligned).
  • Leave the camera in the same position that the observation was taken.
  • Take a short device recording of the tracking data for the camera.
  • Take a screenshot of the camera feed, with the test pattern on the screens but without the Virtual Lineup Layer overlay.
  • If possible, take a screenshot of the camera feed with the blob pattern preview displayed on the screens.
  • Export a diagnostic of the project and, along with screenshots of the MR set preview, send it to support@disguise.one.