Disguise is a workflow platform, designed to be flexible and let you combine multiple functions within the platform to create your own workflows. One example of this is a collection of workflows known as EVO.
EVO (External Visualiser Overlay) consists, in essence, of two parts: an incoming video stream (NDI) from a third-party visualiser, and the sharing of camera coordinates between the disguise camera and the external visualiser's camera.
By combining these two workflows, you can create a seamless link between the two systems - essentially visualising the lighting and video systems together in one viewport.
In this example we will focus on GrandMA 3D, but other visualisers can be implemented in a similar way.
To set up the NDI stream:
- Start the NDI Scan Converter (from NewTek) on the visualiser PC
- In the third-party visualiser, set all video surfaces & props to render as black. It might be helpful to use the Export Stage to FBX option in the disguise software to export the stage directly into the visualiser, to ensure all objects are scaled identically
- Start the disguise software project
- In the disguise software, create a new camera and position it as required. This will become your main visualiser camera, so you may also need to increase the resolution if you are running a 4K GUI (the new camera will default to 1920 x 1080)
- In disguise, select that new camera as the visualiser camera in the Stage menu
- In the Video Input Patch, map one of the Video Inputs to the incoming NDI stream from the visualiser PC. Check with the preview function that this is routing correctly.
- Add a new Video layer to the timeline
- In the Video layer, select the Video In clip as the media asset
- Set the Video layer to Add blend mode
- Create a new Direct mapping and add the new camera you created to that mapping
- Select the new mapping as the mapping for the Video layer
At this point the incoming NDI stream from the external visualiser will be overlaid on the disguise visualiser output. If you manually line up the two cameras, this is all that is needed - but using the tracking and control modules in the disguise software it is possible to link the two cameras together via open protocols (DMX).
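A minimal sketch of why the black surfaces and the Add blend mode combine into a clean overlay (plain Python, illustrative only, not the disguise compositor):

```python
# Surfaces the external visualiser renders as black (0.0) contribute nothing
# to an additive blend, so disguise's own content shows through untouched,
# while lighting beams add on top. Values are normalised intensities (0.0-1.0).

def add_blend(base: float, overlay: float) -> float:
    """Additive blend of two normalised pixel values, clamped at white."""
    return min(base + overlay, 1.0)

# Where the visualiser drew a video surface as black:
assert add_blend(0.6, 0.0) == 0.6    # disguise content unchanged
# Where the visualiser drew a lighting beam:
assert add_blend(0.25, 0.5) == 0.75  # beam added on top of the stage render
```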
To set up the camera position there are two options: either disguise can receive the camera position from the visualiser, or the visualiser can receive the position from disguise.
- Add a DMX device to the project, and patch it to Output DMX
- Create a DMX lights screen and assign it to create an appropriate number of DMX addresses
- Add a DMXLightsControl layer to the timeline
- Select the DMX lights screen as the mapping for the DMXLightsControl layer
- Use an arrow to connect the control layer to the stage camera (the expression syntax created is camera:{camera name}.offset.x)
- Set up the appropriate commands to send from the disguise software to the visualiser - note that the DMXLightsControl layer sends only 8-bit values, so you will need to convert these to 16-bit or 24-bit depending on the external visualiser requirements. See below.
- Apply any scale factors needed in the expression to centre the two worlds
Note - you will need multiple DMXLightsControl layers to send all the properties of the camera
- Patch the external visualiser according to the data stream you created
Please note: World Offsets - It might be helpful to set up the disguise software camera as a CHILD of a null object (prop). This means that you can set the world offset using the position and rotation properties of the null object, rather than modifying the expressions (above). Create the prop, then use Add Child to select the camera as a child of the prop. When the position of the prop is adjusted, the camera will move by the same relative value. No mesh needs to be selected on the prop.
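The parent-null trick can be sketched in a few lines of Python (plain tuples here, not the disguise API): the camera's world position is the prop's position plus the camera's local offset, so the world offset lives on the prop while the expressions keep driving the camera.

```python
# Child camera parented to a null prop: moving the prop shifts the camera's
# world position without touching the DMX-driven camera expressions.

def world_position(prop_pos, camera_local):
    """World position of a child camera parented to a null prop."""
    return tuple(p + c for p, c in zip(prop_pos, camera_local))

camera_local = (2.0, 1.5, -4.0)   # driven by the DMX expressions
assert world_position((0.0, 0.0, 0.0), camera_local) == (2.0, 1.5, -4.0)
# Moving the prop moves the camera by the same relative amount:
assert world_position((10.0, 0.0, 5.0), camera_local) == (12.0, 1.5, 1.0)
```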
- Add a DMX device to the project file, and patch it to Input DMX
- Add an Open layer to the timeline
- Hold down Alt and drag an arrow between the Open layer and the camera's position and rotation properties. This connects the Open layer to the camera Position & Rotation, enabling control of them from the timeline.
- Right click on each property and use an expression to connect it to the appropriate incoming DMX value - you may need to scale these values to match the world scaling in the third-party visualiser.
MA2 setup:
- Create a new project file, go to Setup > MA Network Control > create a new session, then go to Network Protocols and enable Art-Net
- Open the camera pool, select the Front view (it should be highlighted in green), then store a new camera view in one of the empty slots
- Go to Setup and patch an MA camera controller fixture (18 channel) in a new universe at address 1.
- Once the camera controller fixture is patched to the right address, invert the Tilt of the MA camera controller personality. This allows the tilt of the disguise virtual camera and the MA camera to stay synchronised.
- Go to the camera pool, right-click on the new camera and select the camera control fixture. Set the x,y,z position values to 0, the x,y,z rotation values to 0 and the FOV to 0.79
These steps will allow you to control the new camera using the encoders of the desk.
MA3D setup:
- Set the default camera to Front
- Select the Stage Plane and make it 25m high x 25m wide; the default size of the disguise floor is 25m x 25m
- Create a new plane 8m wide x 4.5m high, and set its z position to 3m. This plane matches the size and position of the default surface 1 in disguise.
- Set the default camera to the new camera created previously
- The visualizer will go black, since the new camera is at 0,0,0. Use the MA2 desk to move the camera to a desirable view and store a position preset for it.
- Set the MA3D visualizer to full screen on your external monitor
These steps replicate the disguise 3D environment within MA3D, to help align both visualizer cameras.
NDI setup:
- Once the machine with MA onPC and MA3D is ready, run NDI Scan Converter. This app converts the GPU outputs into video streams over IP.
- Open NDI Studio Monitor and check that the screen with MA3D is available as an NDI stream.
disguise setup:
- Open a new project file, remove the projector from the stage and create a new virtual camera
- Right click on Devices > Video Input Patch. Select video.in1 > Input Configuration > select the NDI stream of the MA3D machine. Click Start Preview to check the stream, then stop the preview.
- Create a DMX device, use the IP address of the MA onPC or MA desk, and check the data with the DMX monitor.
- Create a PositionReceiver and build new expressions to control the x,y,z position and x,y,z rotation of the virtual camera within the disguise software, using the data coming from the MA lighting desk.
Expressions
Expressions are used within the disguise software to calculate the mathematical values needed to align the virtual worlds of both visualizers. The information that follows will be used in the expressions that need to be created within disguise:
- The MA camera controller has a range from -1000m to 1000m for the x,y,z positions and a range of -720 to 720 degrees for the pan and tilt.
- The x,y,z position, pan and tilt of the MA camera controller have a resolution of 24 bits. Expressions within the disguise software only support 16 bits, meaning only the top 16 bits are used; camera movement in disguise may therefore be slightly less smooth than with the full 24-bit data.
- The ‘position y’ of the MA camera controller is the ‘offset z’ value of the virtual camera within the disguise software; the ‘position z’ of the MA camera controller is the ‘offset y’ value of the virtual camera within disguise.
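The 24-bit to 16-bit truncation above can be sketched in Python (illustrative only; disguise's dmx16 reads the two most significant bytes of the three-channel value):

```python
# disguise expressions read at most 16 bits, so the fine (lowest) byte of
# the MA controller's 24-bit value is discarded.

def msb16(value24: int) -> int:
    """Keep the top two bytes of a 24-bit value."""
    return value24 >> 8

assert msb16(0xABCDEF) == 0xABCD   # the fine byte 0xEF is lost

# Worst-case position granularity after truncation:
# 2000m world span / 2**16 steps ~= 0.0305m per 16-bit step.
assert round(2000 / 65536, 4) == 0.0305
```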
These are the expressions you need to build:
- Camera offset x
- Camera offset y
- Camera offset z
- Camera pan (rotation y)
- Camera tilt (rotation x)
If the expressions are correctly built, the new virtual camera within the disguise software will move to the same x,y,z position as the MA3D camera.
- Create a new Video layer, map it to surface 1, and add some media.
- Create another Video layer, change the blend mode to Add, and select video-in 1 as media; the thumbnail will display the MA3D NDI stream with a checkerboard. Make a new Direct mapping for your new virtual camera.
- Go to Stage > Visualizer Camera and assign it to the new virtual camera.
- The disguise camera and the MA3D camera will now be aligned.
- Open the virtual camera properties editor and change the background colour to black.
Once you have finished all the steps you can make the stage plane in MA3D invisible; it will not be needed once the alignment of the cameras is completed. You can start patching lights on the grandMA2 and add screens in disguise to create your show.
General Information
The MA 3D world is scaled between -1000m and +1000m, and rotation between -720° and +720°. DMX values for control are output as 24-bit control signals, using 3 DMX channels (bytes). Disguise only supports the input of 8- or 16-bit values, so we take the most significant bits from these values.
Using these real world values, we are then able to plug them into the following formulas to create the expressions needed to connect an external visualiser camera to the internal camera within disguise:
{world centre offset in meters/degrees}+(dmx16:universe.address/65536)*{world size in meters/degrees}-{world size in meters/degrees}
Position X: 1000+(dmx16:1.1/65536)*2000-2000
Position Y: 1000+(dmx16:1.7/65536)*2000-2000
Since Y and Z are flipped in GrandMA 3D, we pick up opposing DMX values.
Position Z: 1000+(dmx16:1.4/65536)*2000-2000
Rotation X: 720-(dmx16:1.13/65536)*1440
Rotation Y: 720+(dmx16:1.10/65536)*1440-1440
Rotation Z: no expression - GrandMA 3D does not support Rotation Z via DMX
Field of View: set this manually. MA 3D measures field of view as half the horizontal value compared to the disguise software (e.g. if MA 3D is 22.5°, the disguise software field of view is 45°).
In the above expressions, the constants 1440 and 2000 are derived from the world scale of the GrandMA 3D scene:
- 2000 is the scale factor between the GrandMA 3D world positions and metre scaling.
- 1440 is the scale factor between the GrandMA 3D world rotations and degrees.
- The above constants can be modified for integration with other visualisation systems.
- Note that these expressions only read the top two bytes (16 bits) of the 24-bit values.
At this point, when you move the external visualiser camera, the disguise software visualiser camera will move too.
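The input expressions can be checked with a short Python sketch, assuming `dmx16` yields the raw 16-bit channel value (0-65535):

```python
# The expression template is: offset + (dmx16 / 65536) * size - size,
# with offset = 1000 and size = 2000 for positions (metres),
# and offset = 720 and size = 1440 for rotations (degrees).

def dmx16_to_world(dmx16: int, offset: float, size: float) -> float:
    return offset + (dmx16 / 65536) * size - size

# Position X: DMX mid-scale lands at the world centre, DMX 0 at the minimum.
assert dmx16_to_world(32768, 1000, 2000) == 0.0
assert dmx16_to_world(0, 1000, 2000) == -1000.0

# Rotation Y: same shape with the 720/1440 degree constants.
assert dmx16_to_world(32768, 720, 1440) == 0.0
```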
Disguise only supports the sending of 8-bit values from the DMXLightsControl layer, so we need to use expressions to split the 24 bits into separate bytes:
High Byte Position X (DMX channel 1): ((camera.offset.x+1000)*8388.608)/65536
Mid Byte Position X (DMX channel 2): (((camera.offset.x+1000)*8388.608)%65536)/256
Low Byte Position X (DMX channel 3): ((camera.offset.x+1000)*8388.608)%256
High Byte Position Y (DMX channel 4): ((camera.offset.y+1000)*8388.608)/65536
Please note: Y and Z are flipped in the MA 3D world.
Mid Byte Position Y (DMX channel 5): (((camera.offset.y+1000)*8388.608)%65536)/256
Low Byte Position Y (DMX channel 6): ((camera.offset.y+1000)*8388.608)%256
High Byte Position Z (DMX channel 7): ((camera.offset.z+1000)*8388.608)/65536
Mid Byte Position Z (DMX channel 8): (((camera.offset.z+1000)*8388.608)%65536)/256
Low Byte Position Z (DMX channel 9): ((camera.offset.z+1000)*8388.608)%256
High Byte Rotation X (DMX channel 10): (((camera.rotation.x*-1)+720)*((256*256*256)/1440))/65536
Mid Byte Rotation X (DMX channel 11): ((((camera.rotation.x*-1)+720)*((256*256*256)/1440))%65536)/256
Low Byte Rotation X (DMX channel 12): (((camera.rotation.x*-1)+720)*((256*256*256)/1440))%256
High Byte Rotation Y (DMX channel 13): (((camera.rotation.y*-1)+720)*((256*256*256)/1440))/65536
Please note: Y is inverted in MA3D, so we invert the expression with the negative conversion on the bytes.
Mid Byte Rotation Y (DMX channel 14): ((((camera.rotation.y*-1)+720)*((256*256*256)/1440))%65536)/256
Low Byte Rotation Y (DMX channel 15): (((camera.rotation.y*-1)+720)*((256*256*256)/1440))%256
High Byte Zoom (DMX channel 16): no expression - set the field of view manually
Mid Byte Zoom (DMX channel 17): no expression
Low Byte Zoom (DMX channel 18): no expression
Constants
In the above expressions, the constants are derived from the world scale of the GrandMA 3D scene.
Position constant: 8388.608 is the scale factor between the GrandMA 3D world positions (-1000m to +1000m = 2000m) and the 24-bit DMX range: (256 x 256 x 256) / 2000 = 8388.608.
Rotation constant: 11650.84444444 is the scale factor between the GrandMA 3D world rotations (-720 to +720 degrees = 1440 degrees) and the 24-bit DMX range: (256 x 256 x 256) / 1440 = 11650.84444444.
The above constants can be modified for integration with other visualisation systems.
You will need to patch these same values in the external visualiser.
At this point when you move the disguise camera, the external visualiser camera will move.
The only final adjustment that needs to be set manually is the field of view.
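The byte-splitting expressions above can be sketched and checked in Python (function name is illustrative only):

```python
# A camera offset in metres is scaled into the 24-bit MA range and split
# into high/mid/low DMX channel values (0-255 each).

SCALE = 8388.608  # (256**3) / 2000m, the position constant above

def position_to_dmx(offset_m: float) -> tuple:
    value24 = int((offset_m + 1000) * SCALE)  # 0 .. 2**24-1 across the world
    high = value24 // 65536          # coarse channel
    mid = (value24 % 65536) // 256   # mid channel
    low = value24 % 256              # fine channel
    return (high, mid, low)

# The world centre (0m) sits at mid-scale of the 24-bit range:
assert position_to_dmx(0.0) == (128, 0, 0)
# The world minimum (-1000m) is zero on all three channels:
assert position_to_dmx(-1000.0) == (0, 0, 0)
```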
These are suggested expressions for controlling the visualizer camera within Capture from disguise, based on the following information available from the Capture documentation:
bit depth = 16
max value = 2^bit depth = 65536
output min = -32768, output max = 32768, output range = output max - output min = 65536
output to bit ratio = max value / output range = 1
Capture Camera control X, Y, Z:
value 1: ((camera:camera.offset.x*100+32768)%65536)/256
value 2: (camera:camera.offset.x*100+32768)%256
value 3: (unassigned)
value 4: ((camera:camera.offset.y*100+32768)%65536)/256
value 5: (camera:camera.offset.y*100+32768)%256
value 6: (unassigned)
value 7: (((camera:camera.offset.z*-1)*100+32768)%65536)/256
value 8: ((camera:camera.offset.z*-1)*100+32768)%256
Camera Control Rotation:
value 1: (((camera:camera.rotation.y+180)*182.044)%65536)/256
value 2: ((camera:camera.rotation.y+180)*182.044)%256
value 3: (unassigned)
value 4: (((camera:camera.rotation.x+180)*182.044)%65536)/256
value 5: ((camera:camera.rotation.x+180)*182.044)%256
value 6: (unassigned)
value 7: (((camera:camera.rotation.z+180)*182.044)%65536)/256
value 8: ((camera:camera.rotation.z+180)*182.044)%256
Please note:
- The camera position range is -32768 to 32768 for X, Y, and Z; this value is in cm
- The camera rotation range is -180 to 180 degrees
- Capture requires 16-bit resolution
- The Z axis is inverted in Capture, so you need to use camera z * -1
- disguise camera offsets are in metres while Capture positions are in cm, so we use *100 in the expressions to convert metres to cm
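The Capture position encoding can be sketched in Python under the assumptions above (16-bit resolution, centimetre positions, inverted Z); the function name is illustrative only:

```python
# disguise metres are multiplied by 100 into Capture's centimetre range
# (-32768..32767), shifted into an unsigned 16-bit value and split into
# two DMX bytes. Z is negated before encoding.

def capture_position_bytes(metres: float, invert: bool = False) -> tuple:
    if invert:
        metres = -metres                       # Capture's Z axis is inverted
    value16 = (round(metres * 100) + 32768) % 65536
    return (value16 // 256, value16 % 256)     # (high byte, low byte)

# 0m maps to the centre of the signed 16-bit range:
assert capture_position_bytes(0.0) == (128, 0)
# 1.28m is 128cm above centre:
assert capture_position_bytes(1.28) == (128, 128)
```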