This page guides users migrating their NatNet projects to the NatNet 3.0 libraries.
NatNet 3.0 no longer allows static linking of the libraries. If a NatNet project was utilizing NatNetLibStatic.lib to accomplish static linking, you will need to make changes to the project configurations, so that it links dynamically instead.
This is only an example; the required configuration changes may differ depending on how the project was set up.
Visual Studio Example
Project Settings → Configuration Properties → C/C++ → Preprocessor Definitions: Add "NATNETLIB_IMPORTS"
Project Settings → Configuration Properties → Linker → Input → Additional Dependencies: Change "NatNetLibStatic.lib" to "NatNetLib.lib"
Project Settings → Configuration Properties → Linker → General: Make sure the additional library directories include the directory where the library files are located.
In NatNet 3.0, the structure of Rigid Body descriptions and Rigid Body frame data has been slightly modified. The sRigidBodyData::Markers member has been removed; instead, the Rigid Body description (sRigidBodyDescription) now includes the expected Rigid Body marker positions with respect to the corresponding Rigid Body orientation axes.
Per-frame positions of Rigid Body markers have to be derived using the Rigid Body tracking data and the expected Rigid Body markers positions included in the description packet.
There are three OptiTrack developer tools for developing custom applications: the Camera SDK, the NatNet SDK, and the Motive API. All of these tools support a C/C++ interface to OptiTrack cameras and provide control over OptiTrack motion capture systems.
Visit our website to compare OptiTrack developer tools and their functions.
SDK/API Support Disclaimer
We provide developer tools to enable OptiTrack customers across a broad set of applications to utilize their systems in the ways that best suit them. The Motive API, NatNet SDK, and Camera SDK are designed to enable experienced software developers to integrate data transfer and/or system operation with their preferred systems and pipelines. Sample projects are provided alongside each tool, and we strongly recommend that users reference or use the samples as reliable starting points. The following list specifies the range of support that will be provided for the SDK tools:
Using the SDK tools requires background knowledge on software development; therefore, we do not provide support for basic project setup, compiling, and linking when using the SDK/API to create your own applications.
Although we ensure the SDK tools and their libraries work as intended, we do not provide support for custom developed applications that have been programmed or modified by users using the SDK tools.
Ticketed support will be provided for licensed Motive users using the Motive API and/or the NatNet SDK tools from the included libraries and sample source codes only.
The Camera SDK is a free product, and therefore we do not provide free ticketed support for it.
For other questions, please check out the NaturalPoint forums. Very often, similar development issues get reported and solved there.
Go to the Camera SDK page: Camera SDK
The Camera SDK provides control over hardware (cameras and hubs) and access to the most fundamental frame data, such as grayscale images and 2D object information, from each camera. Using the Camera SDK, you can develop your own image processing applications that utilize the capabilities of the OptiTrack cameras. The Camera SDK is a free tool that can be downloaded from our website.
Note: 3D tracking features are not directly supported with Camera SDK, but they are featured via the Motive API. For more information on the Camera SDK, visit our website.
Go to the Motive API page: Motive API
The Motive API allows control of, and access to, the backend software platform of Motive. Not only does it allow access to 2D camera images and the object data, but it also gives control over the 3D data processing pipeline, including solvers for the assets. Using the Motive API, you can employ the features of Motive into your custom application.
Note: When you install Motive, all of the required components for utilizing the API will be installed within the Motive install directory.
Go to the NatNet SDK page: NatNet SDK 4.0
The NatNet SDK is a client/server networking SDK designed for sending and receiving NaturalPoint data across networks. The NatNet SDK makes the motion capture data available to other applications in real-time. It utilizes UDP along with either Unicast or Multicast communication for integrating and streaming 3D reconstructed data, Rigid Body data, and Skeleton data from OptiTrack systems. Using the NatNet SDK, you can develop custom client/server applications that utilize motion capture data. The NatNet SDK is a free tool that can be downloaded from our website.
Visit our website or Data Streaming page for more information on NatNet SDK.
To ease your use of NatNet data in MATLAB applications, we provide a wrapper class (natnet.p) for using real-time streamed NatNet data. Using this class, you can easily connect/disconnect to the server, receive the tracking data, and parse each component.
The Matlab-NatNet wrapper class is a wrapper for the NatNet assembly and provides a simplified interface for managing the native members in MATLAB. The class definition and supporting code should be placed within the MATLAB PATH. The implementation automatically disposes of running connections when a streaming session ends, along with basic object management. In order to use the Matlab wrapper class, the NatNetML assembly must be loaded into the MATLAB session. This is handled automatically: the first time the class is used, the user is prompted to locate the NatNetML.dll file in the Windows file browser, and a reference to this location is used in future MATLAB sessions.
To create an instance of the natnet wrapper class, simply call the class with no input arguments and store it in a variable.
Class Properties: The available properties of the class can be seen with the following command: properties('natnet').
Class Methods: Similarly, the available methods can be listed with methods('natnet').
Then enter the following line to call the connect method, which connects the natnet object to the host.
When creating a natnet class instance, the default host and client IP address is set to '127.0.0.1', which is the local loopback address of the computer. The natnet object will fail to connect if the network address of the host or client is incorrect.
The natnet wrapper class interface has a method to poll mocap data called getFrame. The getFrame method returns the data structure of the streamed data packet. Polling is supported but not recommended, due to possible data-access errors. The function poll.m provides a simple example showing how to poll the frames of mocap data. After connecting the natnet object to the host server, run the polling script to acquire the data packets in the main workspace.
The natnet class implements a simple interface for event callbacks. The natnet method addlistener requires two input arguments: the first is which listener slot to use, and the second is the name of the m-function file to be attached to the listener. Once the function is attached using the addlistener method, it will be called each time a frame is received. When the callback function is first created, the listener is turned off by default. This ensures the user has control over the execution of the event callback function.
Enabling Listener: Start receiving streamed data by enabling the callback function with the enable method. The input of the enable method indicates the index value of the listener to enable. Multiple functions can be attached to the listener, and you can enable a specific listener by inputting its index value. Entering 0 will enable all listeners.
Disabling Listener: Three types of callback functions ship with the natnet class. If they are added to the natnet listener list and enabled, they will execute each time the host sends a frame of data. The setup.m file contains an example of how to operate the class. To stop streaming, use the disable method and be sure to enter a value of 0 to disable all listeners.
The natnet class also has functionality to control the Motive application. To control recording, use the startRecord and stopRecord methods; for playback, use the startPlayback and stopPlayback methods. There are a number of additional commands, as shown below.
To display the actions of the class, set the IsReporting property to true. This displays operations of the class to the Command Window.
The following guide references SampleClientML.cs client application that is provided with the SDK. This sample demonstrates the use of .NET NatNet assembly for connecting to a NatNet server, receiving a data stream, and parsing and printing out the received data.
When developing a managed client application, you will need to link both the native and managed DLL files (NatNetLib.dll and NatNetML.dll). The managed NatNet assembly is derived from the native library, so without NatNetLib.dll, NatNetML.dll will not be imported properly. These library files can be found in the NatNetSDK\lib folder for the 32-bit platform and in the NatNetSDK\lib\x64 folder for the 64-bit platform. Make sure these DLL files are properly linked, match the target architecture, and are placed alongside the executables.
Also, when using the NatNetML assembly, place the NatNetML.xml file alongside the imported DLL file. This allows the XML documentation to be included as a reference.
The network connection between the tracking server and the client is established through an instance of the NatNet client object (NatNetML.NatNetClientML). This NatNetClientML object is also used for receiving tracking data and sending NatNet commands to and from the server application. When instantiating the NatNetClientML object, input an integer value to determine the desired type of UDP connection: multicast (0) or unicast (1).
Server Discovery
You can also use the NatNetServerDiscovery class to auto-detect available servers to connect to. This is demonstrated in the WinForms sample app.
GetDataDescriptions method in the NatNetClientML class queries a list of DataDescriptors from the connected server and saves it in a declared list of NatNetML.DataDescriptions. In the SampleClientML sample, the following lines are executed to accomplish this:
After obtaining a list of data descriptions, use the saved DataDescriptor objects to access and output data descriptions as needed. In many cases, it is better to re-organize and save the received descriptor objects into separate lists, or into hashtables, of corresponding data types, so that they can be referenced later in the program.
The best way to receive tracking data without losing any of its frames is to create a callback handler function for processing the data. The OnFrameReady event type from the client object can be used to declare a callback event, and the linked function gets called each time a frame is received from the server. Setting up a frame handler function will ensure that every frame gets processed promptly. However, these handler functions should return as quickly as possible to prevent accumulation of frames due to processing latency within the handler.
OnFrameReady2: An alternate frame-ready callback handler with a different function signature, for .NET applications/hosts that don't support the OnFrameReady event type defined above (e.g. MATLAB).
Calling the GetLastFrameOfData method returns a FrameOfMocapData containing the most recent frame streamed out from the connected server application. This approach should only be used for .NET applications/hosts that do not support the OnFrameReady callback handler function.
This function is supported in NatNetML only. Native implementations should always use the callback handlers.
When exiting the program, call the Uninitialize method on the connected client object to disconnect the client application from the server.
This guide covers essential points for developing a native client application using the NatNet SDK. It uses sample code from the SampleClient.cpp application in the \NatNet SDK\Sample folder; please refer to this project as an additional reference.
a. Link the Library
When developing a native NatNet client application, NatNetLib.dll file needs to be linked to the project and placed alongside its executable in order to utilize the library classes and functions. Make sure the project is linked to DLL files with matching architecture (32-bit/64-bit).
b. Include the Header Files
After linking the library, include the header files within your application to import the required library declarations. The header files are located in the NatNet SDK/include folder.
#include "NatNetTypes.h"
#include "NatNetClient.h"
Connection to a NatNet server application is accomplished through an instance of NatNetClient object. The client object is instantiated by calling the NatNetClient constructor with desired connection protocol (Multicast/Unicast) as its argument. Designate a desired connection protocol and instantiate the client object. In the SampleClient example, this step is done within the CreateClient function.
ConnectionType_Multicast = 0
ConnectionType_Unicast = 1
[C++] SampleClient.cpp : Server Discovery
[C++] SampleClient.cpp : Connect to the Server
[C++] SampleClient.cpp : Request Server Description
[C++] SampleClient.cpp : Send NatNet Commands
[C++] SampleClient.cpp : Get Data Descriptions
After an sDataDescriptions instance has been saved, data descriptions for each of the assets (marker, Rigid Body, Skeleton, and force plate) from the server can be accessed from it.
[C++] SampleClient.cpp : Parsing Data Descriptions
Now that we have data descriptions, let's fetch the corresponding frame-specific tracking data. To do this, a callback handler function needs to be set for processing the incoming frames. First, create a NatNetFrameReceivedCallback function that has the matching input arguments and return values as described in the NatNetTypes.h file:

typedef void (NATNET_CALLCONV* NatNetFrameReceivedCallback)(sFrameOfMocapData* pFrameOfData, void* pUserData);
The SampleClient.cpp project sets the DataHandler function as the frame handler function:

void NATNET_CALLCONV DataHandler(sFrameOfMocapData* data, void* pUserData)
When exiting the program, call the Disconnect method to disconnect the client application from the server.
This page provides an overview of the general data structure used in the NatNet software development kit (SDK) and how the library is used to parse received tracking information.
For specific details on each of the data types, please refer to the header file.
When receiving streamed data using the NatNet SDK library, data descriptions should be received before the tracking data. NatNet data is packaged mainly in two different formats: data descriptions and frame-specific tracking data. Utilizing this format, the client application can discover which data are streamed out from the server application in advance of accessing the actual tracking data.
For every asset (e.g. reconstructed markers, Rigid Bodies, Skeletons, force plates) included within a streamed capture session, the descriptions and tracking data are stored separately. This format allows frame-independent parameters (e.g. name, size, and number) to be stored within instances of the description structs, and frame-dependent values (e.g. position and orientation) to be stored within instances of the frame data structs. When needed, the two different packets of an asset can be correlated by referencing its unique identifier values.
Dataset Descriptions contains descriptions of the motion capture data sets for which a frame of motion capture data will be generated. (e.g. sSkeletonDescription, sRigidBodyDescription)
Frame of Mocap Data contains a single frame of motion capture data for all the datasets described from the Dataset Descriptions. (e.g. sSkeletonData, sRigidBodyData)
When streaming from Motive, received NatNet data will contain only the assets that are enabled and the asset types that are set to true under the Streaming Settings in Motive Settings.
To receive data descriptions from a connected server, use the GetDataDescriptions method. Calling this function saves a list of available descriptions in an instance of sDataSetDescriptions.
The sDataSetDescriptions structure stores an array of descriptions for each of the assets (Marker Sets, Rigid Bodies, Skeletons, and force plates) involved in a capture, and the necessary information can be parsed from it. The following table lists the main data description structs that are available through the SDK.
Refer to the header file for more information on each data type and members of each description struct.
Description Struct
As mentioned at the beginning, frame-specific tracking data is stored separately from the DataDescription instances, since it cannot be known ahead of time or out of band, but only on a per-frame basis. This data gets saved into instances of sFrameOfMocapData for the corresponding frames, which contain arrays of frame-specific data structs (e.g. sRigidBodyData, sSkeletonData) for each type of asset included in the capture. The respective frame number, timecode, and streaming latency values are also saved in these packets.
FrameOfMocapData
Refer to the NatNetTypes.h header file or the NatNetML.dll assembly for the most up to date descriptions of the types.
Most of the NatNet SDK data packets contain ID values. This value is assigned uniquely to individual markers as well as each of assets within a capture. These values can be used to figure out which asset a given data packet is associated with. One common use is for correlating data descriptions and frame data packets of an asset.
Decoding Member IDs
For each member object that is included within a parent model, its unique ID value points to both the parent model and the member itself. Thus, the ID value of a member object needs to be decoded in order to determine which object and which parent model it references.
For example, a Skeleton asset is a hierarchical collection of bone Rigid Bodies, and each of its bone Rigid Bodies has a unique ID that references both the Skeleton model and the Rigid Body itself. When analyzing Skeleton bones, the ID value needs to be decoded to extract the bone Rigid Body ID; only then can it be used to reference the bone's description.
Creating an instance of the class does not automatically connect the object to a host application. After enabling frame data broadcasting under the Streaming Settings in Motive (or in any other server), configure the connection type and IP addresses for the client and host to reflect your network setup.
To connect to the server, use the Initialize method of the instantiated NatNetClientML object. When calling this method, input the proper local IP address and server IP address. The local IP address must match the IP address of the host PC, and the server IP address must match the address that the server is streaming to, which is defined in the Streaming Settings in Motive.
To confirm whether the client has successfully connected to the server application, try querying for a packet using the GetServerDescription method. If the server is connected, the corresponding server descriptions will be obtained. This method returns an integer error code; when it operates successfully, it returns a value of 0.
As explained above, there are two kinds of data formats included in streamed NatNet packets, one of which is Data Descriptions. In the managed NatNet assembly, the data description for each of the assets (Marker Sets, Rigid Bodies, Skeletons, and force plates) included in the capture session is stored in a DataDescriptor class. A single capture take (or live stream) may contain more than one asset and, correspondingly, more than one data description. For this reason, data descriptions are stored in a list.
Now, let's obtain the tracking data from the connected server. Tracking data for a captured frame is stored in an instance of NatNetML.FrameOfMocapData. As explained above, every FrameOfMocapData contains the tracking data of the corresponding frame. There are two approaches to obtaining frame data using the client object: calling the GetLastFrameOfData method, or linking a callback handler function using the OnFrameReady event. In general, creating a callback function is recommended, because this approach ensures that every frame of tracking data gets received.
The NatNet SDK includes functions for discovering available tracking servers. While client applications can connect to a tracking server by simply inputting the matching IP address, the auto-detection feature is easier to use. One function searches the network for a given amount of time and reports the IP addresses of the available servers; the reported server information can then be used to establish the connection. Another function continuously searches for available tracking servers by repeatedly calling a callback function. Both are demonstrated in the SampleClient application.
Now that you have instantiated a NatNetClient object, connect the client to the server application at the designated IP address by calling the Connect method. The Connect method requires a sNatNetClientConnectParams struct containing the communication information, including the local IP address that the client is running on and the server IP address that the tracking data is streamed to. It is important that the client connects to the appropriate IP addresses; otherwise, the data will not be received. Once the connection is established, you can use methods of the NatNetClient object to send commands and query data.
Now that the NatNetClient object is connected, let's confirm the connection by querying the server for its description. This can be obtained by calling the GetServerDescription method; the information gets saved in the provided instance of sServerDescription. This is also demonstrated in the CreateClient function of the SampleClient project.
You can also confirm the connection by sending a NatNet remote command to the server. NatNet commands are sent by calling the SendMessageAndWait method with a supported command as one of its input arguments. The following sample sends a command querying the number of analog samples for each mocap frame. If the client is successfully connected to the server, this method will save the data and return 0.
Now that the client application is connected, data descriptions for the streamed capture session can be obtained from the server. This is done by calling the GetDataDescriptions method and saving the descriptions list into an instance of sDataDescriptions. From this instance, the client application can determine how many assets are in the scene, as well as their descriptions. This is done by the following line in the SampleClient project:
When you are finished using the data description structure, free the memory resources allocated by GetDataDescriptions using the NatNet helper routine NatNet_FreeDescriptions.
The SetDataCallback method creates a new thread and assigns the frame handler function. Call this method with the created function and the NatNetClient object as its arguments. In the SampleClient application, this is called within the CreateClient function:
Once you call the SetDataCallback method to link a data handler callback function, this function will receive a packet of sFrameOfMocapData each time a frame is received. The sFrameOfMocapData contains a single frame of data for all of the streamed assets. This allows prompt processing of the capture frames within the handler function.
The sFrameOfMocapData can be obtained by setting up a frame handler function using the SetDataCallback method. In most cases, a frame handler function should be assigned to make sure every frame is promptly processed. Refer to the provided SampleClient project for an example setup.
_(Deprecated)_ More accurate system latency values can now be derived from the reported timestamp values. For more information, read through the corresponding page.
_(Deprecated)_ More accurate software latency values can now be derived from the reported timestamp values. For more information, read through the corresponding page.
Timing information for the frame. If SMPTE timecode is detected in the system, this time information is also included. See:
The subframe value of the timecode. See: .
Given in host's high resolution ticks, this stores a timestamp value of when the cameras expose. The timestamp precisely indicates the center of the exposure window. For more information, refer to the article.
Given in host's high resolution ticks, this stores a timestamp value of when Motive receives the camera data. For more information, refer to the article.
Given in host's high resolution ticks, this stores a timestamp value of when tracking data is fully processed and ready to be streamed out. For more information, refer to the article.
One reconstructed 3D marker can be stored in two different places (e.g. in LabeledMarkers and in RigidBody) within a frame of mocap data. In those cases, the unique ID of the marker can be used to correlate them in the client application if necessary.
Declarations for these data types are listed in the header files within the SDK. The SampleClient project included in the \NatNet SDK\Sample folder illustrates how to retrieve and interpret the data descriptions and frame data.
The NatNet SDK provides a C++ helper function, NatNet_DecodeID, for decoding the member ID and model ID of a member object. You can also decode them by manually parsing the ID, as demonstrated in the sample.
Server Description
sServerDescription
ServerDescription
Contains basic network information of the connected server application and the host computer that it is running on. Server descriptions are obtained by calling the GetServerDescription method from the NatNetClient class.
Host connection status
Host information (computer name, IP, server app name)
NatNet version
Host's high resolution clock frequency. Used for calculating the latency
Connection status
Data Descriptions
sDataDescriptions
List<DataDescriptor>
Contains an array of data descriptions for each active asset in a capture; basic information about the corresponding asset is stored in each description packet. Data descriptions are obtained by calling the GetDataDescriptions method from the NatNetClient class. Descriptions of each asset type are explained below.
Marker Sets Description
sMarkerSetDescription
MarkerSet
Marker Set description contains a total number of markers in a Marker Set and each of their labels. Note that Rigid Body and Skeleton assets are included in the Marker Set as well. Also, for every mocap session, there is a special MarkerSet named all, which contains a list of all of the labeled markers from the capture.
Name of the Marker Set
Number of markers in the set
Marker names
Rigid Body Description
sRigidBodyDescription
RigidBody
Rigid Body description contains corresponding Rigid Body names. Skeleton bones are also considered as Rigid Bodies, and in this case, the description also contains hierarchical relationship for parent/child Rigid Bodies.
Rigid Body name
Rigid Body streaming ID
Rigid Body parent ID (when streaming Skeleton as Rigid Bodies)
Offset displacement from the parent Rigid Body
Array of the expected marker locations of the Rigid Body asset.
Skeleton Description
sSkeletonDescription
Skeleton
Skeleton description contains corresponding Skeleton asset name, Skeleton ID, and total number of Rigid Bodies (bones) involved in the asset. The Skeleton description also contains an array of Rigid Body descriptions which relates to individual bones of the corresponding Skeleton.
Name of the Skeleton
Skeleton ID: Unique identifier
Number of Rigid Bodies (bones)
Array of bone descriptions
Update Note: In NatNet 3.0, Skeleton bone data description packet has been changed from left-handed convention to right-handed convention to be consistent with the convention used in all other data packets. For older versions of NatNet clients, the server, Motive, will detect the client version and stream out Skeleton data in the matching convention. This change will only affect direct-depacketization clients as well as clients that have the NatNet library upgraded to 3.0 from previous versions; for those clients, corresponding changes must be made to work with Motive 2.0.
Force Plate Description
sForcePlateDescription
ForcePlate
Force plate description contains names and IDs of the plate and its channels as well as other hardware parameter settings. Please refer to the NatNetTypes.h header file for specific details.
Force plate ID and serial number
Force plate dimensions
Electrical offset
Number of channels
Channel info
See the NatNetTypes.h file for more information.
Camera Description
sCameraDescription
Camera
An instance of the sCameraDescription contains information regarding the camera name, its position, and orientation.
Camera Name (can be used with Get/Set property commands)
Camera Position (x, y, z float variables)
Camera Orientation (qx, qy, qz, qw float variables)
For more info, see the NatnetTypes.h file.
Device Description
sDeviceDescription
Device
An instance of the sDeviceDescription contains information about the data acquisition (NI-DAQ) devices. It includes information on both the DAQ device (ID, name, serial number) as well as its corresponding channels (channel count, channel data type, channel names). Please refer to the NatNetTypes.h header file for specific details.
Device ID. Used only for identification of devices in the stream.
Device Name
Device serial number
Device Type
Channel count
Channel Names
This page provides function and class references of the NatNet SDK library.
The NatNetClient class (or NatNetClientML from the managed assembly) is the key object of the SDK. An instance of this client class allows an application to connect to a server application and query data. API helper functions are provided with the C++ library for a more convenient use of the SDK tools. For additional information, refer to the provided header files (native) or reference the NatNetML.dll file (managed).
Note:
NatNet SDK is backwards compatible.
Deprecated methods from previous SDK versions are not documented on this page, and their use in new applications is discouraged. They are subject to removal in a future version of the SDK. Refer to the header files for complete descriptions.
The NatNetServer class has been deprecated for versions 3.0 and above.
Note that some parts of the managed .NET assembly may be slightly different from the native library reference provided here. Refer to the NatNetML.dll file using an object browser for detailed information.
Most of the NatNet SDK functions return their operation results in an integer type representation named ErrorType, which is an enumerator that describes operation results as follows:
ErrorCode_OK
0
Operation successful
ErrorCode_Internal
1
Suspected internal error. Contact support.
ErrorCode_External
2
External errors. Make sure correct parameters are used for input arguments when calling the methods.
ErrorCode_Network
3
The error occurred on the network side.
ErrorCode_Other
4
An unlisted error conflicted with the method call.
ErrorCode_InvalidArgument
5
Invalid input arguments were provided.
ErrorCode_InvalidOperation
6
Invalid operation.
The NatNetClient class is the main component of the NatNet SDK. Using an instance of the NatNetClient class, you can establish a network connection with a server application (e.g. Motive) and query data descriptions, tracking data, and send/receive remote commands. For detailed declarations, refer to the NatNetClient.h header file included in the SDK.
NatNetClient::NatNetClient()
Constructor: Creates a new instance of a NatNetClient class. Defaults to multicast connection if no input is given.
NatNetClient::NatNetClient(iConnectionType)
Constructor: Creates a new instance of a NatNet Client using the specified connection protocol; either unicast or multicast.
Input: iConnectionType: (0 = Multicast, 1 = Unicast).
This approach is being deprecated. The NatNetClient class now determines the connection type through sNatNetClientConnectParams input when calling the NatNetClient::Connect method.
NatNetClient::~NatNetClient()
Destructor: Destroys the NatNetClient instance.
Description
This method connects an instantiated NatNetClient object to a server application (e.g. Motive) at the specified IP address.
Input Parameters:
Connection parameters object.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
sNatNetClientConnectParams:
Declared under the NatNetTypes.h file.
Local address. IP address of the localhost where the client application is running.
Server address. IP address where the server application is streaming to.
(Optional) Command port. Defaults to 1510.
(Optional) Data port. Defaults to 1511.
(Optional) Multicast IP address. Defaults to 239.255.42.99:1511.
Description
Calling this method disconnects the client from the Motive server application.
Input Parameters:
None
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
This method sets a frame handler function and creates a new thread for receiving and processing each frame of capture data.
Managed Assembly: Use OnFrameReady event type to add a function delegate.
Input Parameters:
pfnDataCallback: A NatNetFrameReceivedCallback function. NatNetFrameReceivedCallback is a type of a pointer to a frame handler function which processes each incoming frame of tracking data. Format of the inputted function must agree with the following type definition:
typedef void (NATNET_CALLCONV* NatNetFrameReceivedCallback)(sFrameOfMocapData* pFrameOfData, void* pUserData);
User definable data: the Client object.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Sends a NatNet command to the NatNet server and waits for a response. See NatNet: Remote Requests/Commands for more details.
Input Parameters:
szRequest: NatNet command.
tries: Number of attempts to send the command. Default: 10.
timeout: Number of milliseconds to wait for a response from the server before the call times out. Default: 20.
ppServerResponse: Application defined response.
pResponseSize: Number of bytes in response
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Requests a description of the current NatNet server that a client object is connected to and saves it into an instance of sServerDescription. This call blocks until the request is answered or times out.
Input Parameters:
Declared sServerDescription object.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Requests a list of dataset descriptions of the capture session and saves it into the declared instance of sDataDescriptions.
Input Parameters:
Pointer to an sDataDescriptions pointer which receives the address of the client's internal sDataDescriptions object. This pointer is valid until the client is destroyed or until the next call to GetDataDescriptions.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
This method calculates and returns the time difference between a specific event in the processing pipeline and the moment the NatNet client application receives the tracking data. For example, if sFrameOfMocapData::CameraMidExposureTimestamp is provided, it returns the latency from the camera exposure to when the tracking data is received. For more information on how it is used, read through the Latency Measurements page.
Input Parameters:
(uint64_t) A timestamp value from a sFrameOfMocapData struct.
Returns:
(double) The time, in seconds, elapsed since the provided timestamp.
Once the NatNetSDK library has been imported into a client application, the following helper functions can be used.
These functions are available ONLY for C++ applications.
Description
This function gets the version (#.#.#.#) of the NatNet SDK and saves it into an array.
Input Parameters:
Unsigned char array with an array length of 4.
Returns:
Void
Description
This function assigns a callback handler function for receiving and reporting error/debug messages.
Input Parameters:
pfnLogCallback: NatNetLogCallback function. NatNetLogCallback is a type of a pointer to a callback function that is used to handle the log messages sent from the server application. Format of the linked function must agree with the following type definition:
typedef void (NATNET_CALLCONV* NatNetLogCallback)(Verbosity level, const char* message);
Returns:
Void
Description
Takes an ID of a data set (a marker, a Rigid Body, a Skeleton, or a force plate), and decodes its model ID and member ID into the provided integer variables. For example, ID of a Skeleton bone segment will be decoded into its model ID (Skeleton) and Rigid Body ID (bone). See NatNet: Data Types.
Input Parameters:
An ID value for a respective data set (sRigidBodyData, sSkeletonData, sMarker, or sForcePlateData) from a sFrameOfMocapData packet.
Pointers to declared integer variables for saving the entity ID and the member ID (e.g. Skeleton ID and its bone Rigid Body ID).
Returns:
Void
Description
Helper function to decode OptiTrack timecode data into individual components.
Input Parameters:
Timecode integer from a packet of sFrameOfMocapData. (timecode)
TimecodeSubframe integer from a packet of sFrameOfMocapData. (timecodeSubframe)
Pointers to declared integer variables for saving the hours (pOutHour), minutes (pOutMinute), seconds (pOutSecond), frames (pOutFrame), and subframes (pOutSubframe) values.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Helper function to parse OptiTrack timecode into a user-friendly string of the form hh:mm:ss:ff:yy.
Input Parameters:
timecode: Timecode integer from a packet of sFrameOfMocapData. (timecode)
timecodeSubframe: TimecodeSubframe integer from a packet of sFrameOfMocapData. (timecodeSubframe)
outBuffer: Declared char array for saving the output.
outBufferSize: size of the character array buffer (outBuffer).
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
This helper function performs a deep copy of frame data from pSrc into pDst. Some members of pDst will be dynamically allocated; use NatNet_FreeFrame( pDst ) to clean them up.
Input Parameters:
Pointer to two sFrameOfMocapData variables to copy from (pSrc) and copy to (pDst).
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Frees the dynamically allocated members of a frame copy created using NatNet_CopyFrame function. Note that the object pointed to by pFrame itself is NOT de-allocated, but only its nested members which were dynamically allocated are freed.
Input Parameters:
sFrameOfMocapData that has been copied using the NatNet_CopyFrame function.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Do not call this on any pFrame data that was not the destination of a call to NatNet_CopyFrame.
Description
Deallocates data descriptions pDesc and all of its members; after this call, this object is no longer valid.
Input Parameters:
Data descriptions (sDataDescriptions).
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Sends broadcast messages to discover active NatNet servers and blocks for a specified time to gather responses.
Input Parameters:
outServers: An array of length equal to the input value of pInOutNumServers. This array will receive the details of all servers discovered by the broadcast.
pInOutNumServers: A pointer to an integer containing the length of the array. After this function returns, the integer is modified to contain the total number of servers that responded to the broadcast inquiry. If the modified number is larger than the original number passed to the function, there was insufficient space for those additional servers.
timeoutMillisec: Amount of time, in milliseconds, to wait for server responses to the broadcast before returning.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Description
Begin sending periodic broadcast messages to discover active NatNet servers in the background.
Input Parameters:
pOutDiscovery: Out pointer that will receive a handle representing the asynchronous discovery process. The handle returned should be passed to NatNet_FreeAsyncServerDiscovery method later for clean up.
pfnCallback: A NatNetServerDiscoveryCallback function pointer that will be invoked once for every new server that's discovered by the asynchronous search. The callback will also be passed the provided pUserContext argument.
pUserContext: User-specified context data to be passed to the provided pfnCallback when invoked.
Returns:
ErrorCode, On success, it returns 0 or ErrorCode_OK.
Please use the table of contents to the right to navigate to specific functions or specific group of functions.
Important Note:
Some of the functions may be missing in the documentation. Please refer to the NPTrackingTools header file for any information that is not documented here.
Initializes the API and prepares all connected devices for capturing. Please note that TT_Initialize also loads the default profile from the ProgramData directory: C:\ProgramData\OptiTrack\MotiveProfile.motive
. When there is a need to load the profile from a separate directory, use TT_LoadProfile function.
Description
This function initializes the API library and prepares all connected devices for capturing.
When using the API, this function needs to be called at the beginning of a program before using the cameras.
Returns an NPRESULT value. When the function successfully initializes the devices, it returns 0 (or NPRESULT_SUCCESS).
Function Input
None
Function Output
NPRESULT
C++ Example
Shuts down all of the connected devices.
Description
This function closes down all connected devices and the camera library. To ensure that all devices properly shutdown, call this function before terminating an application.
When the function successfully closes down the devices, it returns 0 (or NPRESULT_SUCCESS).
When calling this function, currently configured camera calibration will be saved under the default System Calibration.cal file.
Function Input
None
Function Output
NPRESULT
C++ Example
Processes incoming frame data from the cameras.
Description
This function updates frame information with the most recent data from the cameras and 3D processing engines.
Another use of this function is to pick up newly connected cameras. Call this function at the beginning of a program in order to make sure that all of the new cameras are properly recognized.
TT_Update vs. TT_UpdateSingleFrame: If a client application stalls momentarily, it can fall behind on updating frames. In that situation, TT_Update() discards the accumulated frames and services only the most recent frame data, which means the client misses the intervening frames. TT_UpdateSingleFrame(), by contrast, advances exactly one consecutive frame on each call. In general, always use TT_Update(). Consider TT_UpdateSingleFrame() only when your application must obtain and process every single frame of tracking data and cannot call TT_Update() in a timely fashion.
Returns an NPRESULT integer value, depending on whether the operation was successful or not. Returns NPRESULT_SUCCESS when it successfully updates the frame data.
Function Input
None
Function Output
NPRESULT
C++ Example
Updates a single frame of camera data.
Description
Every time this function is called, it updates frame information with the next frame of camera data.
Using this function ensures that every frame of data is processed.
TT_Update() vs. TT_UpdateSingleFrame(): TT_Update() services only the most recent frame and discards any backlog, while TT_UpdateSingleFrame() advances exactly one consecutive frame per call. Use TT_UpdateSingleFrame() only when your application must obtain and process every single frame of tracking data.
Returns an NPRESULT value. When the function successfully updates the data, it returns 0 (or NPRESULT_SUCCESS).
Function Input
None
Function Output
NPRESULT
C++ Example
Loads a Motive camera calibration file.
Description
These functions load a camera calibration file (CAL).
Camera calibration files need to be exported from Motive.
Returns a NPRESULT integer value. If the file was successfully loaded, it returns NPRESULT_SUCCESS.
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Imports TRA files and loads Rigid Body assets from it.
Description
This function imports and loads Rigid Body assets from a saved TRA file.
TRA files contain exported Rigid Body asset definitions from Motive.
All existing assets in the project will be replaced with the Rigid Body assets from the TRA file when this function is called. If you want to keep existing assets and only wish to add new Rigid Bodies, use TT_AddRigidBodies function.
Returns an NPRESULT integer value. It returns NPRESULT_SUCCESS when the file is successfully loaded.
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Saves all of the Rigid Body asset definitions into a TRA file.
Description
This function saves all of the Rigid Body assets from the project into a TRA file.
Attach *.tra extension at the end of the filename.
Returns an NPRESULT integer value. It returns 0 or NPRESULT_SUCCESS when successfully saving the file.
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Loads a TRA file and adds its Rigid Body assets onto the project.
Description
This function adds Rigid Body assets from the imported TRA file onto the existing list.
Adds Rigid Bodies from imported TRA files onto the asset list of the current project.
Returns an NPRESULT integer value. If the Rigid Bodies have been added successfully, it returns 0 or NPRESULT_SUCCESS.
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Loads a Motive User Profile (.MOTIVE).
Description
Loads the default application profile file (MOTIVE), which is located in the ProgramData directory: C:\ProgramData\OptiTrack\MotiveProfile.motive
The MOTIVE files store software configurations as well as other software-wide settings.
Profile files also load trackable asset definitions. Once the application profile containing trackable assets is imported, there is no need to import TRA and SKL files separately.
Returns an NPRESULT integer value. If the project file was successfully loaded, it returns 0 (NPRESULT_SUCCESS).
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Loads a Motive TTP project file.
Description
Loads a Motive TTP project file. TTP project file loads and saves both camera calibration and Rigid Body assets, so when using TTP files, there is no need to import or export CAL or TRA files separately.
Loading a project file will import all of the required information for tracking. These include camera calibration and Rigid Body assets that are associated with a Motive project.
Returns an NPRESULT integer value. If the project file was successfully loaded, it returns 0 (NPRESULT_SUCCESS).
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Saves current application setting into a Profile XML file.
Description
This function saves the current configuration into an application Profile XML file.
Attach *.xml extension at the end of the filename.
Returns an NPRESULT integer value. If the profile XML file was saved successfully, it returns 0 (NPRESULT_SUCCESS).
Function Input
Filename (const char, const wchar_t)
Function Output
NPRESULT
C++ Example
Loads calibration from memory.
Description
This function loads camera calibration from memory. In order to do this, the program must have calibration data already stored in memory.
It assumes the pointer argument (unsigned char*) points to a memory block where calibration data is already stored. The address and size of the calibration buffer must be determined by the developer using the API.
Function Input
Buffer (unsigned char*)
Size of the buffer (int)
Function Output
NPRESULT
C++ Example
Gets camera extrinsics from a calibration file in memory.
Description
This allows for acquiring camera extrinsics for cameras not connected to the system.
It simply returns the list of details for all cameras contained in the calibration file.
Function Input
Buffer (unsigned char*)
Size of the buffer (int)
Result
Function Output
NPRESULT
C++ Example
Start a new calibration wanding for all cameras.
Description
This will cancel any existing calibration process.
Function Input
None
Function Output
C++ Example
Returns the current calibration state.
Description
Returns the current calibration state.
Function Input
None
Function Output
NPRESULT
C++ Example
During calibration wanding, this will return a vector of camera indices that are lacking the minimum number of calibration samples to begin calculation.
Description
When the returned vector for this method goes to zero size, you can call TT_StartCalibrationCalculation() to begin calibration calculations.
Wanding samples will continue to be collected until TT_StartCalibrationCalculation() is called.
Function Input
None
Function Output
Vector (int)
C++ Example
During calibration wanding, returns the number of wand samples collected for a given camera.
Description
This will return the number of wand samples collected for the given camera.
Returns 0 otherwise.
Function Input
Camera index (int)
Function Output
Number of samples (int)
C++ Example
Cancels wanding or calculation and resets calibration engine.
Description
Cancels wanding or calculation
Resets calibration engine
Function Input
none
Function Output
Exits either TT_StartCalibrationWanding() or TT_StartCalibrationCalculation()
C++ Example
Once wanding is complete, call this to begin the calibration calculations.
Description
Starts calibration calculations after wanding.
Function Input
Boolean value
Function Output
Starts calculation
C++ Example
During calibration calculation, returns the current calibration quality.
Description
This method will return the current calibration quality in the range [0-5], with 5 being best.
Returns zero otherwise
Function Input
none
Function Output
Quality on scale of 0-5 (int)
C++ Example
Run once TT_CalibrationState() returns "Complete".
Description
Call this method to apply the calibration results to all cameras.
Function Input
none
Function Output
Apply calibration results
C++ Example
Set the ground plane using a standard or custom ground plane template.
Description
If true then this function will use a custom ground plane.
Function Input
Boolean value of useCustomGroundPlane
Function Output
Either applies custom or preset ground plane to calibration.
C++ Example
Translate the existing ground plane (in mm).
Description
Takes float variables to alter existing ground plane.
Function Input
X, Y, and Z values (float)
Function Output
Applies new values to existing ground plane.
C++ Example
Enables/disables the NatNet streaming of the NaturalPoint tracking data.
Description
This function enables/disables NaturalPoint data stream.
This is equivalent to the Broadcast Frame Data setting in the Data Streaming panel in Motive.
Returns a NPRESULT integer value. If the operation was successful, it returns 0 (NPRESULT_SUCCESS).
Function Input
Boolean argument enabled (true) / disabled (false)
Function Output
NPRESULT
C++ Example
Enables/disables streaming frame data into trackd.
Description
This function enables/disables streaming data into trackd.
Returns a NPRESULT integer value. If the operation was successful, it returns 0 (NPRESULT_SUCCESS).
Function Input
True for enabling and false for disabling (bool)
Function Output
NPRESULT
C++ Example
Enables/disables data stream into VRPN.
Description
This function enables/disables data streaming into VRPN.
To stream onto VRPN, the port address must be specified. VRPN server applications run through port 3883, which is the default port for VRPN streaming.
Returns an NPRESULT integer value. If streaming was successfully enabled, or disabled, it returns 0 (NPRESULT_SUCCESS).
Function Input
True for enabling and false for disabling (bool)
Streaming port address (int)
Function Output
NPRESULT
C++ Example
Gets the total number of reconstructed markers in a frame.
Description
This function returns the total number of reconstructed 3D markers detected in the current capture frame.
Use this function to count the total number of markers, access each marker, and obtain the marker index values.
Function Input
None
Function Output
Total number of reconstructed markers in the frame (int)
C++ Example
Returns x-position of a reconstructed marker.
Description
This function returns the X coordinate of a reconstructed 3D marker with respect to the global coordinate system, in meters.
It requires a marker index value.
Function Input
Marker index (int)
Function Output
X-position of the 3D marker (float)
C++ Example
Returns y-position of a reconstructed marker.
Description
This function returns the Y coordinate of a reconstructed 3D marker with respect to the global coordinate system, in meters.
It requires a marker index value.
Function Input
Marker index (int)
Function Output
Y-position of the 3D marker (float)
C++ Example
Returns z-position of a reconstructed marker.
Description
This function returns the Z coordinate of a reconstructed 3D marker with respect to the global coordinate system, in meters.
It requires a marker index value.
Function Input
Marker index (int)
Function Output
Z-position of the 3D marker (float)
C++ Example
Returns residual value of a marker.
Description
This function returns a residual value for a given marker indicated by the marker index.
Unit of the returned value is in millimeters.
Function Input
Marker index (int)
Function Output
Residual value (float)
Returns a unique identifier of a marker.
Description
This function returns a unique identifier (cUID) for a given marker.
Markers have an index from 0 to [totalMarkers - 1] for a given frame. To access the unique identifier of any marker, its index must be provided.
The marker index value may change between frames, but the unique identifier will always remain the same.
Function Input
Marker index (int)
Function Output
Marker label (cUID)
C++ Example
Returns a timestamp value for the current frame.
Description
This function returns a timestamp value of the current frame.
Function Input
None
Function Output
Frame timestamp (double)
C++ Example
Checks whether a camera is contributing to reconstruction of a 3D marker, and saves corresponding 2D location as detected in the camera's view.
Description
This function evaluates whether the specified camera (cameraIndex) is contributing to point cloud reconstruction of a 3D point (markerIndex).
It returns true if the camera is contributing to the marker.
After confirming that the camera contributes to the reconstruction, this function will save the 2D location of the corresponding marker centroid in respect to the camera's view.
The 2D location is saved in the declared variable.
Function Input
3D reconstructed marker index (int)
Camera index (int)
Reference variables for saving x and y (floats).
Function Output
True / False (bool)
C++ Example
Flushes out the camera queues.
Description
This function flushes camera queues.
In the event that you are tracking a very large number (hundreds) of markers and the application has accumulated data-processing latency, you can call TT_FlushCameraQueues() to refresh the camera queue before calling TT_Update() to process the frame. After calling this function, avoid calling it again until TT_Update() has been called and NPRESULT_SUCCESS is returned.
Function Input
None
Function Output
Void
C++ Example
Checks whether Rigid Body is tracked or not.
Description
Checks whether the Rigid Body is being tracked in the current frame.
Returns true if the Rigid Body is tracked.
Function Input
Rigid body index (int)
Function Output
True / False (bool)
C++ Example
Obtains and saves 3D position, quaternion orientation, and Euler orientation of a Rigid Body
Description
This function saves the position and orientation of a Rigid Body. Specifically, the position and orientation at the Rigid Body pivot point are obtained.
3D coordinates of the Rigid Body will be assigned in declared variable addresses (*x, *y, *z).
Orientation of the Rigid Body will be saved in two different formats; Euler and quaternion rotations. Yaw, pitch, and roll values for Euler representation will be saved in the declared variable addresses (*yaw, *pitch, *roll), and qx, qy, qz, and qw values for the quaternion rotation will be saved in declared variable addresses (*qx, *qy, *qz, and *qw).
Function Input
Rigid body index (int)
Declared variable (float) addresses for:
3D coordinates (x,y,z)
Quaternion Rotation (qx, qy, qz, qw)
Euler Rotation ( yaw, pitch, roll)
Function Output
Void
C++ Example
Clears and removes all Rigid Body assets.
Description
This function clears all existing Rigid Body assets in the project.
Function Input
None
Function Output
Void
C++ Example
Removes a Rigid Body from the project
Description
This function removes a single Rigid Body from a project.
Returns a NPRESULT integer value. If the operation was successful, it returns 0 (NPRESULT_SUCCESS).
Function Input
Rigid body index (int)
Function Output
NPRESULT
C++ Example
Returns a total number of Rigid Bodies.
Description
This function returns a total count of Rigid Bodies involved in the project.
This can be used within a loop to set required number iterations and access each of the Rigid Bodies.
Function Input
None
Function Output
Total Rigid Body count (int)
C++ Example
Returns the User Data ID value of a Rigid Body.
Description
This function returns the User Data ID number of a Rigid Body.
User ID is a user definable ID for the Rigid Body. When working with capture data in external pipelines, this value can be used to address specific Rigid Bodies in the scene.
Function Input
Rigid body index (int)
Function Output
User Data ID (int)
C++ Example
Assigns a User Data ID number to a Rigid Body.
Description
Assigns a User Data ID number to a Rigid Body.
The User Data ID numbers can be used to point to particular assets when processing the data in external applications.
Function Input
Rigid body index (int)
Desired User Data ID (int)
Function Output
Void
C++ Example
Returns a mean error of the Rigid Body tracking data.
Description
Returns a mean error value of the respective Rigid Body data for the current frame.
Function Input
Rigid body index (int)
Function Output
Mean error (meters)
Returns the name for the Rigid Body.
Description
These functions are used to obtain the name of a Rigid Body.
Returns the assigned name of the Rigid Body.
Function Input
Rigid body index (int)
Function Output
Rigid body name (const char*, const wchar_t*)
C++ Example
Enables/disables tracking of a Rigid Body.
Description
This function enables, or disables, tracking of the selected Rigid Body.
All Rigid Bodies are enabled by default. Disabled Rigid Bodies will not be tracked, and no data will be received from them.
Function Input
Rigid body index (int)
Tracking status (bool)
Function Output
Void
C++ Example
Checks whether a Rigid Body is enabled.
Description
This function checks whether tracking of the Rigid Body is enabled or not.
The function returns true if tracking is enabled.
Function Input
Rigid body index (int)
Function Output
True / False (bool)
C++ Example
Translates the pivot point of a Rigid Body.
Description
This function translates a Rigid Body.
The 3D position of the Rigid Body will be displaced in the x/y/z directions by the given amounts (in meters).
Translation is applied in respect to the local Rigid Body coordinate axis, not the global axis.
Returns an NPRESULT integer value. If the operation was successful, it returns 0 (NPRESULT_SUCCESS).
Function Input
Rigid body index (int)
Translation along x-axis, in meters. (float)
Translation along y-axis, in meters. (float)
Translation along z-axis, in meters. (float)
Function Output
NPRESULT
C++ Example
Resets orientation of a Rigid Body.
Description
This function resets orientation of the Rigid Body and re-aligns its orientation axis with the global coordinate system.
Additional Note: When creating a Rigid Body, its zero orientation is set by aligning its axis with the global axis at the moment of creation. Calling this function essentially does the same thing on an existing Rigid Body asset.
Returns true if the Rigid Body orientation was reset.
Function Input
Rigid body index (int)
Function Output
True / False (bool)
C++ Example
Gets the total number of markers in a Rigid Body.
Description
This function returns the total number of markers in a Rigid Body.
Function Input
Rigid body index (int)
Function Output
Total number of markers in the Rigid Body (int)
C++ Example
Saves 3D coordinates of a solved Rigid Body marker with respect to the Rigid Body's local coordinate system.
Description
This function gets the 3D position of a solved Rigid Body marker and saves it in the designated addresses. Rigid body marker positions from this function represent the solved (or expected) locations of the Rigid Body markers. For actual reconstructed marker positions, use the TT_RigidBodyPointCloudMarker function.
Note that the 3D coordinates obtained by this function are expressed with respect to the Rigid Body's local coordinate axis. For obtaining 3D coordinates with respect to the global coordinate system, use the TT_RigidBodyPointCloudMarker function.
Function Input
Rigid body index (int)
Marker index (int)
Three declared variable addresses for saving x, y, z coordinates of the marker (float)
Function Output
Void
C++ Example
Changes and updates the Rigid Body marker positions.
Description
This function is used to change the expected positions of a single Rigid Body marker.
Rigid body markers are expected marker positions. Read about marker types in Motive.
Function Input
Rigid body index (int)
Marker index (int)
New x-position of the Rigid Body marker in respect to the local coordinate system.
New y-position of the Rigid Body marker in respect to the local coordinate system.
New z-position of the Rigid Body marker in respect to the local coordinate system.
Function Output
Returns true if marker locations have been successfully updated.
Saves 3D coordinates of a Rigid Body marker in respect to the global space.
Description
This function saves 3D coordinates of each Rigid Body marker in designated addresses.
3D coordinates are saved in respect to global coordinate system.
Function Input
Rigid body index (int)
Marker index (int)
Tracked status, True or False (bool)
Three declared variable addresses for saving x, y, z coordinates of the marker (float).
Function Output
Void
C++ Example
Saves 3D coordinates of solved Rigid Body marker positions with respect to the global space. Unlike the TT_RigidBodyPointCloudMarker function, it does not report point-cloud solved positions; instead, it reports the expected marker positions derived from the Rigid Body position and orientation.
Description
This function saves 3D coordinates of each expected Rigid Body marker positions in designated variable addresses.
3D coordinates are saved in respect to global coordinate system.
Function Input
Rigid body index (int)
Marker index (int)
Tracked status, True or False (bool)
Three declared variable addresses for saving x, y, z coordinates of the marker (float).
Function Output
Void
C++ Example
This function obtains the unique identifier of a specific Rigid Body, indicated by the Rigid Body index number.
Function Input
Rigid body index (int)
Function Output
Rigid body unique ID (Core::cUID)
Creates a Rigid Body asset from a set of reconstructed 3D markers.
Description
This function creates a Rigid Body from the marker list and marker count provided in its argument.
The marker list is expected to contain marker coordinates in the following order: (x1, y1, z1, x2, y2, z2, …, xN, yN, zN). The x/y/z coordinates must be relative to the Rigid Body pivot point, in meters.
Inputted 3D locations are taken as Rigid Body marker positions about the Rigid Body pivot point. If you are using the TT_FrameMarkerX/Y/Z functions to obtain the marker coordinates, you will need to subtract the pivot point location from the global marker locations when creating the Rigid Body. This is shown in the below example. If this is not done, the created Rigid Body will have its pivot point at the global origin.
Returns an NPRESULT integer value. If the Rigid Body was successfully created, it returns 0 or NPRESULT_SUCCESS.
Function Input
Rigid body name (const char*)
User Data ID (int)
Marker Count (int)
Marker list (float list)
Function Output
NPRESULT
C++ Example
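A sketch of the pivot-point subtraction described above, using the mean of the marker positions as the pivot (any other pivot choice works the same way). The helper is illustrative; the commented TT_CreateRigidBody call shows where its result would be used:

```cpp
#include <vector>

// Given global marker positions flattened as (x1,y1,z1, x2,y2,z2, ...),
// subtract a pivot point (here, the arithmetic mean of the markers) so the
// created Rigid Body has its pivot where we expect, not at the global origin.
std::vector<float> MakePivotRelative(const std::vector<float>& global)
{
    const int n = static_cast<int>(global.size()) / 3;
    float px = 0.0f, py = 0.0f, pz = 0.0f;
    for (int i = 0; i < n; ++i)
    {
        px += global[3 * i];
        py += global[3 * i + 1];
        pz += global[3 * i + 2];
    }
    px /= n; py /= n; pz /= n;

    std::vector<float> local(global);
    for (int i = 0; i < n; ++i)
    {
        local[3 * i]     -= px;
        local[3 * i + 1] -= py;
        local[3 * i + 2] -= pz;
    }
    return local;
    // The pivot-relative list is then passed to the API, e.g.:
    // TT_CreateRigidBody("NewRB", 1, n, local.data());
}
```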
Obtains Rigid Body settings for a given asset, and saves them in a cRigidBodySettings instance.
Description
This function obtains Rigid Body settings for a given Rigid Body asset and saves them into a declared cRigidBodySettings instance address.
Rigid body settings are saved into an instance of the cRigidBodySettings class.
For detailed information on member function and variables in the cRigidBodySettings class, refer to its declaration in the RigidBodySettings.h header file.
Returns an NPRESULT integer value.
Function Input
Rigid body index (int)
Declared instance address (cRigidBodySettings)
Function Output
NPRESULT
C++ Example
Changes property settings of a Rigid Body.
Description
This function assigns a set of Rigid Body settings to a Rigid Body asset.
An instance of cRigidBodySettings will be attached to the provided Rigid Body.
Returns an NPRESULT integer value. If the settings were successfully applied, it returns 0 (NPRESULT_SUCCESS).
Function Input
Rigid body index (int)
Rigid body settings instance (cRigidBodySettings)
Function Output
NPRESULT
C++ Example
Initiates the Rigid Body refinement process. Input the number of samples and the ID of the Rigid Body you wish to refine. After starting the process, TT_RigidBodyRefineSample must be called on every frame in order to collect samples.
Description
This function is used to start Rigid Body refinement.
Function Input
Target Rigid Body ID
Sample count (int)
Function Output
Returns true if the refinement process has successfully initiated.
This function collects samples for Rigid Body refinement started by calling the TT_RigidBodyRefineStart function. Call this function every frame within the update loop. You can check the progress of the refinement by calling the TT_RigidBodyRefineProgress function.
Description
This function collects sample Rigid Body tracking data for refining the definition of corresponding Rigid Body.
Function Input
None. Samples frames for the initialized refine process.
Function Output
Returns true if the refinement process has successfully collected a sample. This function does not collect samples if Rigid Body is not tracked on the frame.
This function queries the state of the refinement process. It returns a TT_RigidBodyRefineStates enum as a result.
Description
This function queries the state of the Rigid Body refinement process. It returns an enum value indicating whether the process is initialized, sampling, solving, complete, or uninitialized.
<source> enum TT_RigidBodyRefineStates {
};
</source>
Function Input
None. Checks the state on the ongoing refinement process.
Function Output
Returns TT_RigidBodyRefineStates enum value.
This function queries the progress of the refinement sampling process.
Description
When the refinement process is under the sampling state, calling this function returns the sampling progress. It will return a percentage value representing the sampling progress in respect to the total number of samples given in the TT_RigidBodyRefineStart parameter.
Function Input
None. Checks the progress on the ongoing refinement process.
Function Output
Returns percentage completeness of the sampling process (float).
These two functions return error values of the Rigid Body definition before and after the refinement.
Description
Once the refinement process has reached the complete state, these two functions can be called to compare the error values of the corresponding Rigid Body definition before and after the refinement.
Function Input
None.
Function Output
Average error value of the target Rigid Body definition before (TT_RigidBodyRefineInitialError) and after (TT_RigidBodyRefineResultError) the refinement.
This function applies the refined result to the corresponding Rigid Body definition.
Description
This function applies the refined Rigid Body definition. After comparing the error values before and after the refinement using the TT_RigidBodyRefineInitialError and TT_RigidBodyRefineResultError functions, use this function to apply the refined definition if the results are satisfactory.
Function Input
None.
Function Output
Returns true if the refined results have been successfully applied.
This function discards the final refinement result and resets the refinement process.
Description
If the final refinement result from the TT_RigidBodyRefineResultError call is not satisfactory, you can call this function to discard the result and start over from the sampling process.
Function Input
None.
Function Output
Returns true if the refinement results have been successfully reset.
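The refinement workflow above (start, sample every frame, check progress, then apply or reset) can be sketched as a loop. The TT_* bodies below are stand-in stubs so the sketch is self-contained; in a real project they come from the Motive API, and frames must actually be processed between samples:

```cpp
// Stand-in stubs so this sketch compiles on its own; real signatures may
// differ slightly and live in the Motive API header.
static int  gSamples = 0, gTarget = 0;
static bool TT_RigidBodyRefineStart(int /*rigidBodyId*/, int sampleCount)
{ gTarget = sampleCount; gSamples = 0; return true; }
static bool TT_RigidBodyRefineSample()                 // stub: pretend the
{ if (gSamples < gTarget) ++gSamples; return true; }   // body is tracked
static float TT_RigidBodyRefineProgress()
{ return 100.0f * gSamples / gTarget; }
static bool TT_Update() { return true; }               // stub frame update

// Collect refinement samples once per frame until sampling is complete,
// then return the final progress percentage.
float RefineRigidBody(int rigidBodyId, int sampleCount)
{
    if (!TT_RigidBodyRefineStart(rigidBodyId, sampleCount))
        return -1.0f;
    while (TT_RigidBodyRefineProgress() < 100.0f)
    {
        TT_Update();                 // advance to the next frame
        TT_RigidBodyRefineSample();  // a sample only counts if the body is tracked
    }
    return TT_RigidBodyRefineProgress();
}
```

After the loop, the error values would be compared and the result either applied or reset, per the functions described above.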
Returns pointer to the CameraManager instance.
Description
This function returns a pointer to the CameraManager instance from the Camera SDK.
Camera SDK must be installed to use this function.
The version number of Motive and the Camera SDK must match.
Corresponding headers and libraries must be included in the program.
Function Input
None
Function Output
Pointer to the CameraManager instance (CameraLibrary::CameraManager*)
C++ Example
Returns Motive build number.
Description
This function returns the corresponding Motive build number.
Function Input
None
Function Output
Build number (int)
C++ Example
Returns camera group count.
Description
This function returns the total count of camera groups in the project.
Function Input
None
Function Output
Camera group count (int)
C++ Example
Creates a new camera group.
Description
This function adds an additional camera group (empty) to a project.
Note: Creating an additional camera group is unnecessary for most applications. The most common use case is grouping cameras to set them as a reference group for recording grayscale videos.
Function Input
None
Function Output
True/False (bool)
C++ Example
Removes a camera group.
Description
This function removes a camera group, specified by its index number.
The camera group must contain no cameras in order to be removed.
Returns true if the group was successfully removed.
Function Input
Camera group index (int)
Function Output
True/False (bool)
C++ Example
Returns the index of the camera group that a camera belongs to.
Description
This function takes the index value of a camera and returns the index of the camera group that the camera belongs to.
Function Input
Camera index (int)
Function Output
Camera group index (int)
C++ Example
Introduces shutter delay to a camera group.
Description
This function sets a shutter delay (in microseconds) to a camera group, which is designated by its index number.
After assigning the delay, all of the cameras involved in the camera group will shutter at a delayed timing when recording.
Function Input
Camera group index (int)
Delay in microseconds (int)
Function Output
Void
C++ Example
Moves a camera to a different camera group.
Description
This function assigns/moves a camera to a different camera group.
Function Input
Camera index (int)
Camera group index (int)
Function Output
Void
C++ Example
Obtains the camera group's filter settings.
Description
This function fetches configured 2D filter settings from a camera group and saves the settings in the declared cCameraGroupFilterSettings instance.
Returns an NPRESULT integer value. When the function successfully obtains the filter settings, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Camera group index (int)
Group filter settings instance (cCameraGroupFilterSettings)
Function Output
NPRESULT
C++ Example
Assigns camera group filter settings to a camera group.
Description
This function assigns the given filter settings instance (cCameraGroupFilterSettings) to a camera group designated by its index number.
Returns an NPRESULT integer value. When the function successfully assigns the filter settings, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Camera group index (int)
Filter settings instance (cCameraGroupFilterSettings)
Function Output
NPRESULT
C++ Example
Obtains marker size settings of a camera group
Description
This function fetches the currently configured marker size settings from a camera group and saves them into a declared cCameraGroupMarkerSizeSettings class instance.
The marker size settings determine display properties of the 3D markers reconstructed from a specific group of cameras.
Returns an NPRESULT integer value. When the function successfully obtains the settings, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Camera group index (int)
Marker size settings (cCameraGroupMarkerSizeSettings)
Function Output
NPRESULT
C++ Example
Applies given marker size settings to a camera group.
Description
This function applies an instance of cCameraGroupMarkerSizeSettings to a camera group.
The marker size settings determine display properties of 3D markers reconstructed from a specific group of cameras.
Marker sizes are represented by corresponding diameter in millimeters.
Returns an NPRESULT integer value. When the function successfully applies the settings, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Camera group index (int)
Marker size settings (cCameraGroupMarkerSizeSettings)
Function Output
NPRESULT
C++ Example
Enables or disables marker reconstruction contribution from a camera group.
Description
Enables or disables marker reconstruction contribution from a camera group.
Input true for the enable argument in order to allow the camera group to contribute to marker reconstruction.
Returns an NPRESULT integer value. When the function successfully enables/disables the reconstruction contribution, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Camera group index (int)
Boolean argument for enabling (true) and disabling (false) the mode.
Function Output
NPRESULT
C++ Example
Enables or disables filter switchers.
Description
This function enables or disables filter switches for all of the connected cameras.
Returns an NPRESULT integer value. When the function successfully changes the setting, it returns 0 (or NPRESULT_SUCCESS).
Function Input
Boolean argument for enabling (true) or disabling (false) the filter.
Function Output
NPRESULT
C++ Example
Checks whether filter switches are enabled or not.
Description
This function checks whether the filter switch is enabled on all of the connected cameras.
It returns true if the switches are enabled.
Function Input
Void
Function Output
Enabled/disabled (bool)
C++ Example
Returns the total number of cameras connected to the system.
Description
This function returns the total camera count.
Function Input
None
Function Output
Total number of cameras (int)
C++ Example
Returns x-position of a camera.
Description
This function returns the camera's X position with respect to the global coordinate system.
Function Input
Camera index (int)
Function Output
Camera's X position. Measured in meters with reference to global coordinate system. (float)
C++ Example
Returns y-position of a camera.
Description
This function returns the camera's Y position with respect to the global coordinate system.
Function Input
Camera index (int)
Function Output
Camera Y-position. Measured in meters with reference to global coordinate system. (float)
C++ Example
Returns z-position of a camera.
Description
This function returns the camera's Z position with respect to the global coordinate system.
Function Input
Camera index (int)
Function Output
Camera's Z position. Measured in meters with reference to global coordinate system. (float)
C++ Example
Gets a component of the camera's orientation matrix.
Sample output from a program displaying the rotation matrix.
Description
This function returns a single component of the camera's orientation matrix with respect to the global coordinate axis.
The camera index input (int) determines which camera to obtain the matrix from.
The matrix index determines which component of the rotation matrix to return.
Function Input
Camera index (int)
Matrix index (int)
Function Output
Single component of the rotation matrix (float)
C++ Example
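One plausible reading of the matrix index, assumed here, is the nine components of the 3x3 orientation matrix in row-major order, so that index = row * 3 + column. This layout is an assumption to verify against actual output; the helper is illustrative:

```cpp
// Map a flat matrix index (0..8) to row/column under the assumed
// row-major layout of the 3x3 orientation matrix:
//   index:  0 1 2
//           3 4 5
//           6 7 8
void MatrixIndexToRowCol(int matrixIndex, int* row, int* col)
{
    *row = matrixIndex / 3;  // integer division picks the row
    *col = matrixIndex % 3;  // remainder picks the column
}
```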
Returns the corresponding camera's model name and serial number.
Description
This function returns corresponding camera's name and serial number.
Function Input
Camera index (int)
Function Output
Camera name and serial number (const char*)
C++ Example
Returns the corresponding camera's serial number as an integer.
Description
This function returns corresponding camera's serial number.
Function Input
Camera index (int)
Function Output
Camera serial number (int)
C++ Example
Returns the total number of centroids detected by a camera.
Description
This function returns the total number of centroids detected by a camera.
A centroid is defined for every group of contiguous pixels that forms a shape that encloses the thresholded pixels.
The size and roundness filters (cCameraGroupFilterSettings) are not applied to this data.
Function Input
Camera index (int)
Function Output
Number of centroids (int)
C++ Example
Returns 2D location of the centroid as seen by a camera.
Description
This function saves 2D location of the centroid as detected by a camera's imager.
Returns true if the function successfully saves the x and y locations.
Function Input
Camera index (int)
Centroid index (int)
Declared variables for saving x and y (float)
Function Output
True/False (bool)
C++ Example
Saves camera's pixel resolution.
Description
This function saves the camera's pixel resolution (width x height) into declared integer variables.
Returns true when successfully saving the values.
Function Input
Camera index (int)
Declared integer variable for saving width (int)
Declared integer variable for saving height (int)
Function Output
True/False (bool)
C++ Example
Saves predistorted 2D location of a centroid.
Description
This function saves predistorted 2D location of a centroid.
This data is basically where the camera would see a marker if there were no effects from lens distortions. For most of our cameras/lenses, this location is only a few pixels different from the distorted position obtained by the TT_CameraMarker function.
Returns true when successfully saving the values.
Function Input
Camera index (int)
Marker (centroid) index (int)
Declared variable for saving x location (float)
Declared variable for saving y location (float)
Function Output
True/False (bool)
C++ Example
Configures camera settings.
Description
This function sets camera settings for a camera device specified by its index number.
Input setting parameters must agree with the supported ranges (or video types) of the camera model.
A negative return value indicates the function did not complete the task.
Each of the video types is indicated with the following integers. Supported video types may vary for different camera models. Please check the Data Recording page for more information on which image processing modes are available in different models.
Segment Mode: 0
Raw Grayscale Mode: 1
Object Mode: 2
Precision Mode: 4
MJPEG Mode: 6
Valid exposure ranges depend on the framerate settings:
Prime series and Flex 13: 1 to the maximum time gap between frames, which is approximately (1 / framerate) - 200 microseconds; roughly 200 microseconds are reserved as a protection gap.
Flex3 and Duo/Trio tracking bars: 1 ~ 480 scanlines.
Valid threshold ranges: 0 - 255
Valid intensity ranges: 0 - 15
Function Input
Camera index (int)
Video type (int)
Camera exposure (int)
Pixel threshold (int)
IR light intensity (int)
For more information on the camera settings, refer to the Devices pane page.
Function Output
True/False (bool)
C++ Example
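The documented ranges can be checked before calling TT_SetCameraSettings. The helpers below are illustrative, built only from the constants listed above; the exposure ceiling uses the approximate (1 / framerate) - 200 microsecond rule quoted for Prime series and Flex 13 cameras:

```cpp
// Video type codes listed above: Segment 0, Raw Grayscale 1, Object 2,
// Precision 4, MJPEG 6. (3 and 5 are not listed.)
bool VideoTypeValid(int videoType)
{
    return videoType == 0 || videoType == 1 || videoType == 2 ||
           videoType == 4 || videoType == 6;
}

// Approximate exposure ceiling (in microseconds) for Prime series / Flex 13
// at a given frame rate: the frame period minus a ~200 microsecond gap.
int MaxExposureMicroseconds(int frameRate)
{
    return 1000000 / frameRate - 200;
}

// Check all parameters against the documented ranges before calling
// TT_SetCameraSettings(cameraIndex, videoType, exposure, threshold, intensity).
bool CameraSettingsValid(int videoType, int exposure, int threshold,
                         int intensity, int frameRate)
{
    return VideoTypeValid(videoType) &&
           exposure >= 1 && exposure <= MaxExposureMicroseconds(frameRate) &&
           threshold >= 0 && threshold <= 255 &&
           intensity >= 0 && intensity <= 15;
}
```

Note that for Flex 3 and the Duo/Trio tracking bars exposure is measured in scanlines (1 to 480), so the microsecond ceiling above does not apply to those models.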
Sets camera frame rate.
Description
This function sets the frame rate of a camera.
Returns true if it successfully adjusts the settings.
Note that this function may assign a frame rate setting that is out of the supported range. Check to make sure inputted frame rates are supported.
Function Input
Camera index (int)
Frame rate (int)
Function Output
True/False (bool).
C++ Example
Gets configured frame rate of a camera.
Description
This function returns frame rate of a camera.
Function Input
Camera index (int)
Function Output
Camera frame rate (int)
C++ Example
Gets configured video type of a camera.
Description
This function checks and returns configured video type (image processing mode) of a camera.
It returns an integer value representing the video type. The video type integers are listed in the TT_SetCameraSettings section above.
Function Input
Camera index (int)
Function Output
Video type (int)
C++ Example
Gets exposure setting of a camera.
Description
This function returns exposure setting of a camera.
Exposure values are measured in microseconds in Prime series and Flex 13 camera models, and they are measured in scanlines for the Duo/Trio tracking bars and Flex 3 cameras.
To change exposure setting, use the TT_SetCameraSettings function.
For more information on camera settings in Motive, read through the Devices pane page.
Function Input
Camera index (int)
Function Output
Camera exposure (int)
C++ Example
Gets configured threshold (THR) setting of a camera.
Description
This function returns pixel brightness threshold setting of a camera.
When processing the frames, pixels with brightness higher than the configured threshold will be processed, and pixels with lower brightness will be discarded.
To change the threshold setting, use the TT_SetCameraSettings function.
For more information on camera settings in Motive, read through the Devices pane page.
Valid range: 1 - 255.
Function Input
Camera index (int)
Function Output
Pixel brightness threshold (int)
C++ Example
Gets configured intensity (LED) setting of a camera.
Description
This function returns configured IR illumination intensity setting of a camera.
To change the intensity setting, use the TT_SetCameraSettings function.
For more information on camera settings in Motive, read through the Devices pane page.
Valid range: 1 - 15.
Function Input
Camera index (int)
Function Output
Camera IR intensity (int)
C++ Example
Measures image board temperature of a camera.
Description
This function returns the temperature (in Celsius) of a camera's image board.
Temperature sensors are featured only in Prime series camera models.
Function Input
Camera index (int)
Function Output
Image board temperature (float)
C++ Example
Measures IR LED board temperature of a camera.
Description
This function returns the temperature (in Celsius) of a camera's IR LED board.
Temperature sensors are featured only in Prime series camera models.
Function Input
Camera index (int)
Function Output
IR LED board temperature (float)
C++ Example
Gets configured grayscale image frame rate decimation ratio of a camera.
Description
This feature is available only in Flex 3 and Trio/Duo tracking bars, and it has been deprecated for other camera models.
This function returns grayscale frame rate decimation ratio of a camera.
Valid decimation ratios are 0, 2, 4, 8. (e.g. When the decimation setting is set to 4, a camera will capture one grayscale frame for four frames of the tracking data)
To set the decimation ratio, use the TT_SetCameraGrayscaleDecimation function.
Grayscale images require more load on data processing. For this reason, you may want to decimate the grayscale frame images and capture the frames at a lower frame rate.
Function Input
Camera index (int)
Function Output
Decimation ratio (int)
C++ Example
Sets frame rate decimation ratio for processing grayscale images.
Description
This feature is available only in Flex 3 and Trio/Duo tracking bars, and it has been deprecated for other camera models.
This function sets the frame decimation ratio for processing grayscale images in a camera.
Depending on the decimation ratio, fewer grayscale frames will be captured. This can be beneficial for reducing processing load.
Supported decimation ratios: 0, 2, 4, 6, 8. (e.g. When the decimation setting is set to 4, a camera will capture one grayscale frame for every 4 frames of the tracking data)
Returns true when it successfully sets the decimation value.
Function Input
Camera index (int)
Decimation value (int)
Function Output
True/False (bool)
C++ Example
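The effect of the decimation ratio on the grayscale capture rate can be sketched as a small helper (illustrative, not part of the API); a ratio of 0 is treated as decimation off:

```cpp
// Effective grayscale capture rate for a given decimation ratio: a ratio of
// 4 means one grayscale frame for every four tracking frames. A ratio of 0
// (or 1) means decimation is off, i.e. grayscale at the full frame rate.
int GrayscaleFrameRate(int cameraFrameRate, int decimation)
{
    if (decimation <= 1)
        return cameraFrameRate;
    return cameraFrameRate / decimation;
}
```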
Enables or disables IR filter switch of a camera.
Description
This function enables, or disables, integrated camera filter switch for detecting IR lights.
Different camera models may have different filter switches. Refer to the camera model specifications for detailed information on the type and allowed wavelengths for the filter switch.
Returns true when it successfully enables/disables the filter switch.
Function Input
Camera index (int)
A boolean argument for enabling (true) or disabling (false) the filter.
Function Output
True/False (bool)
C++ Example
Enables or disables automatic gain control.
Description
This function enables/disables automatic gain control (AGC).
The Automatic Gain Control feature adjusts the camera gain level automatically for best tracking.
AGC is only available on Flex 3 cameras and Duo/Trio tracking bars.
Returns true when the operation was done successfully.
Function Input
Camera index (int)
Enabled (true) / disabled (false) status (bool)
Function Output
True/False (bool)
C++ Example
Enables or disables automatic exposure control.
Description
This function enables, or disables, Automatic Exposure Control (AEC) for featured camera models.
This feature is only available in Flex 3 and Duo/Trio tracking bars.
It allows cameras to automatically adjust their exposure setting by looking at the properties of the incoming frames.
Returns true if the operation was successful.
Function Input
Camera index (int)
A boolean argument for enabling (true) or disabling (false) the feature.
Function Output
True/false (bool)
C++ Example
Enables or disables the high power IR illumination mode.
Description
This function enables or disables the high power mode for featured cameras.
The high power mode allows brighter IR LED illumination by drawing more power.
Returns true if the function successfully enables/disables the feature.
Function Input
Camera index (int)
A boolean argument for enabling (true) or disabling (false) the feature.
Function Output
True/False (bool)
C++ Example
Sets compression quality of MJPEG images.
Description
This function sets the quality of MJPEG images captured by a camera. More specifically, it changes the compression quality of MJPEG frames.
Compression quality is indicated by an integer number between 0 - 100 (no loss).
A lower MJPEG compression quality setting can reduce the processing load for the cameras and reduce latency, but it will also result in lower-quality images.
Returns true when the function successfully sets the compression quality.
Function Input
Camera index (int)
MJPEG compression quality (int)
Function Output
True/false (bool)
C++ Example
Gets configured imager gain setting of a camera.
Description
This function is used to check the imager gain setting of a camera.
It returns configured gain setting as an integer value.
Function Input
Camera index (int)
Function Output
Gain setting (int)
C++ Example
Gets total number of gain levels available in a camera.
Description
This function returns a total number of available gain levels in a camera.
Different camera models may have different gain level settings. This function can be used to check the number of available gain levels.
Function Input
Camera index (int)
Function Output
Number of gain levels available (int)
C++ Example
Sets the imager gain level.
Description
This function sets the gain level of a camera's imager.
Using high gain levels may be beneficial for long range tracking. However, note that increasing gain levels may also result in amplified noise signal, which can result in false reconstructions.
Check available gain levels for the camera model using the TT_CameraImagerGainLevels function.
Function Input
Camera index (int)
Gain level (int)
Function Output
Void
C++ Example
Checks if the continuous IR mode is supported.
Description
This function checks whether the continuous IR illumination mode is available in the camera model.
In the continuous IR mode, the IR LEDs will not strobe but will illuminate continuously instead.
Continuous IR modes are available only in the Flex 3 camera model and the Duo/Trio tracking bars.
Returns true if continuous IR mode is available.
Function Input
Camera index (int)
Function Output
True / False (bool)
C++ Example
Checks if the continuous IR mode is enabled.
Description
This function checks if the continuous IR mode is enabled or disabled in a camera.
Returns true if the continuous IR mode is already enabled.
Function Input
Camera index (int)
Function Output
True / False (bool)
C++ Example
Enables/disables continuous IR.
Description
This function enables, or disables, continuous IR illumination in a camera.
Continuous IR mode outputs less light when compared to Strobed (non-continuous) illumination, but this mode could be beneficial in situations where there are extraneous IR reflections in the volume.
Use the TT_IsContinuousIRAvailable function to check whether this mode is supported.
Function Input
Camera index (int)
A boolean argument for enabling (true) or disabling (false)
Function Output
Void
C++ Example
Clears masking from camera's 2D view.
Description
This function clears existing masks from the 2D camera view.
Returns true when it successfully removes pixel masks.
Function Input
Camera index (int)
Function Output
True / False (bool)
C++ Example
Description
This function allows a user-defined image mask to be applied to a camera.
A mask is an array of bytes, one byte per mask pixel block.
Returns true when masks are applied.
Function Input
Camera index (int)
Buffer
BufferSize
Function Output
True / False (bool)
C++ Example
Description
This function returns memory block of the mask.
One bit per pixel of the mask.
Masking pixels are rasterized from left to right and from top to bottom of the camera's view.
Function Input
Camera index (int)
Buffer
Buffer size
Function Output
True / False (bool)
C++ Example
Description
This function retrieves the width, height, and grid size of the mask for the camera at the given index.
One byte per pixel of the mask. Masking width * masking height gives the required size of the buffer.
Returns true when the information is successfully obtained and saved.
Function Input
Camera index (int)
Declared variables:
Masking width (int)
Masking height (int)
Masking grid (int)
Function Output
True / False (bool)
C++ Example
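As noted above, masking width times masking height gives the required buffer size. A trivial helper (illustrative, not part of the API) makes the byte count explicit:

```cpp
// Buffer size, in bytes, for the mask calls, computed from the dimensions
// reported by the mask-info function: one byte per mask pixel block,
// width * height blocks in total.
int MaskBufferBytes(int maskWidth, int maskHeight)
{
    return maskWidth * maskHeight;
}
```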
Description
Auto-mask all cameras.
This is additive to any existing masking.
To clear masks on a camera, call TT_ClearCameraMask prior to auto-masking.
Function Input
none
Function Output
Void
C++ Example
Sets camera state of a camera.
Description
This function configures camera state of a camera. Different camera states are defined in the eCameraStates enumeration.
Returns true when it successfully sets the camera state.
Function Input
Camera index (int)
Camera state (eCameraStates)
Function Output
True / False (bool)
C++ Example
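A minimal sketch, assuming an initialized camera system:

```cpp
#include "NPTrackingTools.h"

// Disable a camera so it no longer contributes data.
// eCameraStates values are defined in NPTrackingTools.h.
bool DisableCamera(int cameraIndex)
{
    return TT_SetCameraState(cameraIndex, Camera_Disabled);
}
```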
Checks camera states.
Camera_Enabled = 0
Camera_Disabled_For_Reconstruction = 1
Camera_Disabled = 2
CameraStatesCount = 3
Description
This function obtains and saves the camera state of a camera onto the declared variables.
Returns true if it successfully saves configured state.
Function Input
Camera index (int)
Declared variable for camera state (eCameraState)
Function Output
True / False (bool)
C++ Example
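A minimal sketch of reading back a camera's state into a declared variable:

```cpp
#include <cstdio>
#include "NPTrackingTools.h"

// Obtain and report the configured state of a camera.
void PrintCameraState(int cameraIndex)
{
    eCameraStates state;
    if (TT_CameraState(cameraIndex, state))
    {
        if (state == Camera_Enabled)
            printf("Camera %d is enabled.\n", cameraIndex);
        else if (state == Camera_Disabled_For_Reconstruction)
            printf("Camera %d is excluded from reconstruction.\n", cameraIndex);
        else
            printf("Camera %d is disabled.\n", cameraIndex);
    }
}
```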
Returns the Camera ID.
Description
This function takes in a camera index number and returns the camera ID number.
Camera ID numbers are the numbers that get displayed on the devices.
The Camera ID number is different from the camera index number. On Prime camera systems, Camera IDs are assigned depending on where the cameras are positioned within the calibrated volume. On Flex camera systems, Camera IDs are assigned according to the order in which the devices were connected to the OptiHub(s).
Function Input
Camera index (int)
Function Output
Camera ID (int)
C++ Example
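A minimal sketch mapping camera indexes to the IDs displayed on the devices:

```cpp
#include <cstdio>
#include "NPTrackingTools.h"

// List every initialized camera's index alongside its Camera ID.
void PrintCameraIDs()
{
    for (int i = 0; i < TT_CameraCount(); ++i)
        printf("Camera index %d -> Camera ID #%d\n", i, TT_CameraID(i));
}
```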
Fills a buffer with image from camera's view.
Description
This function fetches raw pixels from a single frame of a camera and fills the provided memory block with the frame buffer.
The resulting image depends on what video mode the camera is in. For example, if the camera is in grayscale mode, a grayscale image will be saved from this function call.
To determine the buffer pixel width and height, use the TT_CameraPixelResolution function to obtain the camera's resolution.
Byte span: the number of bytes in each row of the frame. For 8-bit pixel images (one byte per pixel), the number of pixels in the frame width equals the byte size of the span.
Buffer pixel bit depth: the pixel bit size for the image buffer that will be stored in memory. Since the imagers on OptiTrack cameras capture 8-bit grayscale pixels, input 8 for this value.
Buffer: make sure enough memory is allocated for the frame buffer. A frame buffer requires at least (byte span * pixel height) bytes. For example, a 640 x 480 image with 8-bit grayscale pixels requires (640 * 480) bytes allocated for the frame buffer.
Returns true if it successfully saves the image in the buffer.
Function Input
Camera index (int)
Buffer pixel width (int)
Buffer pixel height (int)
Buffer byte span (int)
Buffer pixel bit depth (int)
Buffer address (unsigned char*)
Function Output
True / False (bool)
C++ Example
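A minimal sketch of grabbing one grayscale frame, assuming an initialized camera system. The resolution is queried with TT_CameraPixelResolution; for 8-bit pixels the byte span equals the pixel width:

```cpp
#include <vector>
#include "NPTrackingTools.h"

// Fetch a single frame from a camera into an 8-bit grayscale buffer.
bool GrabFrame(int cameraIndex, std::vector<unsigned char>& out)
{
    int width = 0, height = 0;
    if (!TT_CameraPixelResolution(cameraIndex, width, height))
        return false;

    out.resize((size_t)width * height);  // byte span * pixel height
    return TT_CameraFrameBuffer(cameraIndex, width, height,
                                width /*byte span*/, 8 /*bit depth*/,
                                out.data());
}
```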
Saves image buffer of a camera into a BMP file.
Description
This function saves the image frame buffer of a camera into a BMP file.
The video type of the saved image depends on the configured camera settings.
Append *.bmp to the end of the filename.
Returns true if it successfully saves the file.
Function Input
Camera index (int)
Filename (const char*)
Function Output
True / False (bool)
C++ Example
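A minimal sketch, assuming an initialized camera system; the filename is illustrative:

```cpp
#include "NPTrackingTools.h"

// Save the current frame of a camera to a bitmap file.
// The filename must end with the .bmp extension.
bool SaveCameraFrame(int cameraIndex)
{
    return TT_CameraFrameBufferSaveAsBMP(cameraIndex, "camera_frame.bmp");
}
```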
Obtains the 2D position of a 3D marker as seen by one of the cameras.
Description
This function reverts 3D data into 2D data. If you input a 3D location (in meters) and a camera, it will return where the point would be seen from the 2D view of the camera (in pixels) using the calibration information. In other words, it locates where in the camera's FOV a point would be located.
If a 3D marker is reconstructed outside of the camera's FOV, saved 2D location may be beyond the camera resolution range.
Respective 2D location is saved in the declared X-Y address, in pixels.
Function Input
Camera index (int)
3D x-position (float)
3D y-position (float)
3D z-position (float)
Declared variable for x and y location from camera's 2D view (float)
Function Output
Void
C++ Example
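A minimal sketch of projecting a 3D point into a camera's 2D view:

```cpp
#include <cstdio>
#include "NPTrackingTools.h"

// Project a 3D location (meters) into a camera's 2D view (pixels).
void BackprojectPoint(int cameraIndex, float x, float y, float z)
{
    float camX = 0.0f, camY = 0.0f;
    TT_CameraBackproject(cameraIndex, x, y, z, camX, camY);
    printf("3D (%.3f, %.3f, %.3f) m -> 2D (%.1f, %.1f) px\n",
           x, y, z, camX, camY);
}
```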
Removes lens distortion.
Description
This function removes the effect of the lens distortion filter, obtains the undistorted raw x and y coordinates (as seen by the camera), and saves them in the declared variables.
Lens distortion is measured during the camera calibration process.
If you want to apply the lens distortion filter back again, you can use the TT_CameraDistort2DPoint.
Function Input
Camera index (int)
Declared variables for x and y position in respect to camera's view (float)
Function Output
Void
C++ Example
Reapplies lens distortion model.
Description
This function restores the effect of default model for accommodating effects of the camera lens.
Note that all reported 2D coordinates are already distorted to accommodate the effects of the camera lens. Apply this function to coordinates that have been undistorted using the TT_CameraUndistort2DPoint function.
This can be used to obtain raw data for 2D points that have been undistorted using the TT_CameraUndistort2DPoint function.
Function Input
Camera index (int)
Declared variables for x and y position in respect to camera's view (float)
Function Output
Void
C++ Example
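A minimal sketch of the undistort/distort round trip described above; both functions modify the referenced coordinates in place:

```cpp
#include "NPTrackingTools.h"

// Remove the lens distortion model from a reported 2D point,
// then re-apply it to recover the original reported coordinates.
void DistortionRoundTrip(int cameraIndex, float& x, float& y)
{
    TT_CameraUndistort2DPoint(cameraIndex, x, y);  // raw, undistorted coords
    // ... use the undistorted point (e.g. to build a 3D ray) ...
    TT_CameraDistort2DPoint(cameraIndex, x, y);    // back to reported coords
}
```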
Obtains 3D vector from a camera to a 3D point.
Description
This function takes in an undistorted 2D centroid location seen by a camera's imager and creates a 3D vector ray connecting the point and the camera.
Use TT_CameraUndistort2DPoint to undistort the 2D location before obtaining the 3D vector.
XYZ locations of both the start point and end point are saved into the referenced variables.
Returns true when it successfully saves the ray vector components.
Function Input
Camera index (int)
x location, in pixels, of a centroid (float)
y location, in pixels, of a centroid (float)
Three reference variables for X/Y/Z location, in meters, of the start point (float)
Three reference variables for X/Y/Z location, in meters, of the end point (float)
Function Output
True / False (bool)
C++ Example
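A minimal sketch of building a 3D ray from a centroid, undistorting the 2D point first as the description recommends:

```cpp
#include "NPTrackingTools.h"

// Create a 3D ray (start and end points, in meters) from an
// undistorted 2D centroid location seen by a camera.
bool CentroidToRay(int cameraIndex, float px, float py)
{
    TT_CameraUndistort2DPoint(cameraIndex, px, py);  // undistort first

    float sx, sy, sz;  // ray start (meters)
    float ex, ey, ez;  // ray end   (meters)
    return TT_CameraRay(cameraIndex, px, py, sx, sy, sz, ex, ey, ez);
}
```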
Sets camera parameters for the OpenCV intrinsic model.
Description
This function sets a camera's extrinsic (position & orientation) and intrinsic (lens distortion) parameters with values compatible with the OpenCV intrinsic model.
For retrieving the current extrinsic parameters, you can use the TT_CameraXLocation, TT_CameraYLocation, TT_CameraZLocation, and TT_CameraOrientationMatrix functions.
Returns true if the operation was successful.
Function Input
Camera index (int)
Three arguments for camera x,y,z-position, in mm, within the global space (float)
Camera orientation (float)
Lens center location, principleX and principleY, in pixels (float)
Lens focal length, in pixels. (float)
Barrel distortion coefficients: kc1, kc2, kc3 (float)
Tangential distortion (float)
Function Output
True / False (bool)
C++ Example
Gets pointer to the camera object from Camera SDK.
Description
This function returns a pointer to the Camera SDK camera object.
While the API takes over the data path, which prohibits fetching frames directly from the camera, it is still very useful to be able to communicate with the camera directly for changing camera settings or attaching modules.
The Camera SDK must be installed to use this function.
Camera SDK libraries and the camera library header file (cameralibrary.h) must be included.
Returns Camera SDK Camera.
Function Input
Camera index (int)
Function Output
Camera SDK camera pointer (CameraLibrary::Camera*)
C++ Example
Changes position and orientation of the tracking bars.
Description
This function makes changes to the position and orientation of the tracking bar within the global space.
Note that this function will shift or rotate the entire global space, and the effects will be reflected in other tracking data as well.
By default, center location and orientation of a Tracking bar (Duo/Trio) determines the origin of the global coordinate system. Using this function, you can set a Tracking Bar to be placed in a different location within the global space instead of origin.
Function Input
X position (float)
Y position (float)
Z position (float)
Quaternion orientation X (float)
Quaternion orientation Y (float)
Quaternion orientation Z (float)
Quaternion orientation W (float)
Function Output
NPRESULT
C++ Example
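A minimal sketch, assuming a Duo/Trio tracking bar is connected; the position and orientation values are illustrative:

```cpp
#include "NPTrackingTools.h"

// Place the tracking bar 1 m above the global origin with identity
// orientation. Note: this shifts the entire global coordinate space.
NPRESULT RepositionTrackingBar()
{
    return TT_OrientTrackingBar(0.0f, 1.0f, 0.0f,        // position (m)
                                0.0f, 0.0f, 0.0f, 1.0f); // quaternion x,y,z,w
}
```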
Attaches/detaches cCameraModule instance to a camera object.
Description
This function attaches/detaches the cCameraModule class to a camera defined by its index number.
This function requires the project to be compiled against both the Motive API and the Camera SDK.
The cCameraModule class is inherited from the Camera SDK, and this class is used to inspect raw 2D data from a camera. Use this function to attach the module to a camera. For more details on the cCameraModule class, refer to the cameramodulebase.h header file from the Camera SDK.
The Camera SDK must be installed.
Function Input
Camera index (int)
cCameraModule instance (CameraLibrary::cCameraModule)
Function Output
Void
C++ Example
Attaches/detaches cRigidBodySolutionTest class to a Rigid Body.
Description
This function attaches/detaches the cRigidBodySolutionTest class onto a Rigid Body.
Once an instance of cRigidBodySolutionTest is attached to a Rigid Body, it will evaluate the Rigid Body solution and return false if the solution does not satisfy the provided conditions.
The cRigidBodySolutionTest class uses the C++ inheritance design model. Inherit this class into your project with the same function and class names, then attach the inherited class.
Function Input
Rigid body index (int)
Rigid body test module (cRigidBodySolutionTest*)
Function Output
Void
C++ Example
Attaches/detaches cTTAPIListener onto a TTAPI project.
Description
This function attaches/detaches a cTTAPIListener inherited class onto a TTAPI project.
The cTTAPIListener class uses the C++ inheritance design model. Inherit this class into your project with the same function and class names, then attach the inherited class.
This listener class includes useful callback functions that can be overridden, including TTAPIFrameAvailable, TTAPICameraConnected, TTAPICameraDisconnected, InitialPointCloud, and ApplyContinuousCalibrationResult.
Function Input
cTTAPIListener
Function Output
Void
C++ Example
Returns plain text message that corresponds to a NPRESULT value.
Description
Returns the plain text message that corresponds to the result indicated by an NPRESULT value.
Function Input
NPRESULT
Function Output
Result text (const char*)
C++ Example
Checks whether there is another OptiTrack software using the devices.
Description
Checks whether there is another OptiTrack software using the devices. Only one software should be occupying the devices at a time.
Function Input
None
Function Output
NPRESULT
This page provides instructions on how to use the subscription commands in NatNet. This feature is supported for Unicast streaming clients only.
Starting from Motive 3.0, the size of the data packets that are streamed over Unicast can be configured from each NatNet client. More specifically, each client can now send commands to the Motive server and subscribe to only the data types that they need to receive. For situations where we must stream to multiple wireless clients through Unicast, this will greatly reduce the size of individual frame data packets, and help to ensure that each client continuously receives frame data packets streamed out from Motive.
Notes
Supported for Unicast only.
Supported for Motive versions 3.0 or above.
This configuration is not necessary when streaming over a wired network since streaming packets are less likely to be dropped.
To make sure the packet size is minimized, it is recommended to clear out the filter at the beginning.
In order to set which type of tracking data gets included in the streamed packets, a filter must be set by sending subscription commands to Motive. This filter allows client applications to receive only the desired data over a wireless Unicast network. To set up the filter, each NatNet client application needs to call the SendMessageAndWait method and send one of the following subscription commands to the Motive server:
“SubscribeToData, [Data Type], [Name of the Asset]”
“SubscribeByID, RigidBody, [StreamingID]”
“SubscribedDataOnly”
Examples:
The SubscribeToData command allows you to set up the filter so that the NatNet client receives only the data types that it has subscribed to. Using this command, each client can subscribe to specific data types included in NatNet data packets. To set up the filter, the following string command must be sent to the Motive server using the SendMessageAndWait method:
Type
In the Type field, you will be specifying which data type to subscribe to. The following values are accepted:
RigidBody
Skeleton
ForcePlate
Device
LabeledMarkers
MarkersetMarkers
LegacyUnlabeledMarkers
AllTypes
Name
Once the type field is specified, you can also subscribe to a specific asset by inputting its name. You can also input "All" or leave the name field empty to subscribe to all of the assets of that data type.
Examples
If you wish to subscribe to a Rigid Body named Bat, you will send the following string command:
You can also subscribe to a specific Skeleton. The following command subscribes to the Player Skeleton only:
To subscribe to all Rigid Bodies in the data stream:
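The command strings for the three cases above can be sketched as follows; the asset names "Bat" and "Player" are hypothetical assets used for illustration:

```cpp
#include <string>

// Subscription command strings sent to Motive via SendMessageAndWait.
std::string subscribeBat    = "SubscribeToData,RigidBody,Bat";   // one Rigid Body by name
std::string subscribePlayer = "SubscribeToData,Skeleton,Player"; // one Skeleton by name
std::string subscribeAllRBs = "SubscribeToData,RigidBody,All";   // all Rigid Bodies
```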
Please note that Motive will not validate the presence of the requested asset; please make sure it is present on the server side.
Another option for subscribing to specific data is to provide the asset ID. This works only with Rigid Bodies that have Streaming ID values. This command may be easier to use when streaming to multiple clients with many Rigid Bodies.
Examples
For subscribing to a Rigid Body with streaming ID 3: <source>string command = "SubscribeByID,RigidBody,3";</source>
Subscription filters are additive. When needed, you can send multiple subscription commands to set multiple filters. If subscription filters contradict one another, the order of precedence listed below (high-to-low) is followed:
Filter Precedence Order:
Specified asset, either by name or the streaming ID.
Specified data type, all
Specified data type, none
All types, all
All types, none
Unspecified – respects Motive settings.
To clear the subscription filter, a client application can send an empty subscribe command OR disconnect and reconnect entirely. It's suggested to clear the filter at the beginning to make sure the client application is subscribing only to the data that's necessary.
If you subscribe to a Rigid Body with a specific name or a specific streaming ID, commands for unsubscribing from all will not unsubscribe from that specific object. To stop receiving data for a particular object, whether it's a Rigid Body or a Skeleton, the client will need to send an unsubscribe command for that specific object as well.
For quickly testing the NatNet commands, you can utilize the WinFormSamples program provided in the NatNet SDK package. This program has a commands tab which can be used for calling the SendMessageAndWait method. Using this input field, you can enter a command string and test its results.
The NatNet SDK features sending remote commands/requests from a client application over to a connected server application (i.e. Motive).
The SendMessageAndWait method under NatNetClient class is the core method for sending remote commands. This function takes in a string value of the command and sends it over to the connected Motive server each time it's called, and once the server receives the remote command, corresponding actions will be performed. Please note that only a selected set of commands can be understood by the server, which are listed under the remote commands chart below.
NatNet commands are sent via the UDP connection, 1510 port by default.
For a sample use of NatNet commands, refer to the provided WinFormSample.
Description
Sends a NatNet command to the NatNet server and waits for a response.
Input Parameters:
szRequest: NatNet command string, which is one of the commands listed on the below remote commands chart. If the command requires input parameters, corresponding parameters should be included in the command with comma delimiters. (e.g. string strCommand = "SetPlaybackTakeName," + TakeName;).
tries: Number of attempts to send the command. Default: 10.
timeout: Number of milliseconds to wait for a response from the server before the call times out. Default: 20.
ppServerResponse: Server response for the remote command. The response format depends on which command is sent out.
pResponseSize: Number of bytes in response
Returns:
ErrorCode. On success, it returns 0 (ErrorCode_OK).
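A minimal sketch of calling SendMessageAndWait from a NatNet client; it assumes a NatNetClient instance that is already connected to Motive, and uses the FrameRate query from the chart below:

```cpp
#include <cstdio>
#include "NatNetClient.h"  // NatNet SDK

// Send the "FrameRate" remote command to Motive and read the
// float response from the returned buffer.
float QueryFrameRate(NatNetClient& client)
{
    void* response = nullptr;
    int   numBytes = 0;
    if (client.SendMessageAndWait("FrameRate", &response, &numBytes)
        == ErrorCode_OK)
    {
        float fps = *((float*)response);
        printf("Motive frame rate: %.1f FPS\n", fps);
        return fps;
    }
    return -1.0f;  // request failed or timed out
}
```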
Motive Supported NatNet Commands/Requests
UnitsToMillimeters
Sending this command requests the current system's measurement units, in terms of millimeters.
Sample command string:
<source>string command = "UnitsToMillimeters";</source>
none
float
FrameRate
Queries for the tracking framerate of the system. Returns a float value representing the system framerate.
Sample command string:
<source>string command = "FrameRate";</source>
none
float
CurrentMode
Requests current mode that Motive is in. Returns 0 if Motive is in Live mode. Returns 1 if Motive is in Edit mode.
Sample command string:
<source>string command = "CurrentMode";</source>
none
int
StartRecording
This command initiates recording in Motive
Sample command string:
<source>string command = "StartRecording";</source>
none
none
StopRecording
This command stops recording in Motive
Sample command string:
<source>string command = "StopRecording";</source>
none
none
LiveMode
This command switches Motive to Live mode
Sample command string:
<source>string command = "LiveMode";</source>
none
none
EditMode
This command switches Motive to Edit mode.
Sample command string:
<source>string command = "EditMode";</source>
none
None
TimelinePlay
Starts playback of a Take that is loaded in Motive
Sample command string:
<source>string command = "TimelinePlay";</source>
none
none
TimelineStop
Stops playback of the loaded Take
Sample command string:
<source>string command = "TimelineStop";</source>
none
none
SetPlaybackTakeName
Set playback take
Sample command string:
<source>string command = "SetPlaybackTakeName," + stringTakeName;</source>
Take name
None
SetRecordTakeName
Set a take name to record.
Sample command string:
<source>string command = "SetRecordTakeName," + stringTakeName;</source>
Take name
None
SetCurrentSession
Set current session. If the session name already exists, Motive switches to that session. If the session does not exist, Motive will create a new session. You can use absolute paths to define folder locations.
Sample command string:
<source>string command = "SetCurrentSession," + stringSessionName;</source><source>string command = "SetCurrentSession," + "c:/folder";</source>
Session name
None
CurrentSessionPath
Gets the unix-style path to the current session folder as a string value, including trailing delimiter.
Sample command string:
<source>string folder = "CurrentSessionPath";</source>
none
string
SetPlaybackStartFrame
Set start frame
Sample command string:
<source>string command = "SetPlaybackStartFrame," + stringFrameNumber;</source>
Frame number
None
SetPlaybackStopFrame
Sets stop frame.
Sample command string:
<source>string command = "SetPlaybackStopFrame," + stringFrameNumber;</source>
Frame number
None
SetPlaybackCurrentFrame
Set current frame
Sample command string:
<source>string command = "SetPlaybackCurrentFrame," + stringFrameNumber;</source>
Frame number
none
SetPlaybackLooping
Enable or disable looping in the playback. To disable, zero must be sent along with the command.
Sample command string:
<source>string enablelooping = "SetPlaybackLooping";</source><source>string disablelooping = "SetPlaybackLooping, 0";</source>
none
none
EnableAsset
Enables tracking of corresponding asset (rigid body / skeleton) from Motive
Sample command string:
<source>string command = "EnableAsset," + stringNodeName;</source>
Asset name
None
DisableAsset
Disables tracking of a corresponding asset (rigid body / skeleton) from Motive.
Sample command string:
<source>string command = "DisableAsset," + stringNodeName;</source>
Asset name
None
GetProperty
Queries the server for configured value of a property in Motive. The property name must exactly match the displayed name. This request string must have the following inputs along with the command, each of them separated by a comma.
Node name
Property name
Sample command string:
<source>string command = "GetProperty," + stringNodeName + "," + stringPropertyName;</source>
For rigid body assets, Streaming ID of rigid bodies can be used in place of the stringNodeName. For example, string command for getting name of a rigid body with streaming ID of 3 would be: <source>string command = "GetProperty," + "3"+ "," + "Name";</source>
eSync 2:
Accessing the eSync 2 requires '#' to be included at the beginning of the eSync 2's serial number; if the '#' is not present, the eSync 2 will be inaccessible. e.g. GetProperty,eSync 2 #ES002005,Source Value
Node name (if applicable)
Property name
int
SetProperty
Sample command string:
<source>string command = "SetProperty," + stringNodeName + "," + stringPropertyName + "," + stringPropertyValue;</source> <source>//Sets the frame rate of the camera system to 180FPS.
string command = "SetProperty,Master Rate,180";</source> <source>// Sets the gain on camera #13003 to 2.
string command = "SetProperty," + "Prime 13 #13003" + "," + "Gain" + "," + "2";</source>
For rigid body assets, Streaming ID of rigid bodies can be used in place of the stringNodeName. For example, string command for enabling rigid body with streaming ID of 3 would be: <source>string command = "SetProperty," + "3"+ "," + "Active" + "," + "True";</source>
eSync 2:
Accessing the eSync 2 requires '#' to be included at the beginning of the eSync 2's serial number; if the '#' is not present, the eSync 2 will be inaccessible. e.g. SetProperty,eSync 2 #ES002005,Source Value
Node name. Leave it empty if not applicable.
Property name
Desired value
int
GetTakeProperty
Sample command string:
<source>string command = "GetTakeProperty," + takeName + "," + propertyName;</source><source>
// Querying for the End Frame number of the currently loaded take.
string command = "GetTakeProperty,,End Frame";</source>
Take Name. Leave empty for currently loaded take.
Depends on the property type.
CurrentTakeLength
Request length of current take.
Sample command string:
<source>string command = "CurrentTakeLength";</source>
none
int
Supported for Motive 3.0 or above.
Subscription commands work with Unicast streaming protocol only. When needed, unicast clients can send subscription commands to receive only specific data types through the data stream. This allows users to minimize the size of streaming packets. For more information, read through the NatNet: Unicast Data Subscription Commands page.
Following is a general format used for the subscription command strings:
SubscribeToData,[DataType],[All or specific asset]
SubscribeByID,[DataType],[ID]
Below is a sample use of the NatNet commands from the WinFormsSample application.
Start Recording
Framerate Query
Setting name of the recorded Take
Setting Motive Properties
Rebroadcast Motive Data sample. This is a sample NatNet application that receives tracking data from a NatNet Server (Motive) and redistributes it in other formats.
Currently, two protocols are supported in this sample: Unity and LightCraft. The Unity protocol repackages mocap data into XML packets and delivers them to the Unity game engine. The LightCraft protocol takes definitions for a single Rigid Body and sends them over a serial port in a format called the Spydercam protocol. In addition to demonstrating serial port communication, the LightCraft protocol makes Motive and Previzion fully compatible, using mocap data to track the Previzion camera.
Unity Protocol
Rebroadcasts data into an XML format compatible with Unity.
Previzion Protocol
Rebroadcasts data via the Spydercam protocol, which is compatible with LightCraft Previzion.
The following third-party library must be linked in order to build and run this sample application:
Asio library (http://think-async.com/) : Required for communicating tracking data through serial port communication.
Download the asio C++ library.
In the RebroadcastMotiveData VS project, link the downloaded library.
Build the sample project.
You can run the app from the command line with appropriate input arguments. For example: RebroadcastMotiveData.exe 127.0.0.1 127.0.0.1 unity
There are four arguments in total that can be input when running the sample application.
argument 1: IP address of the server, Motive, machine
argument 2: When streaming to unity, input the IP address of the local machine. When streaming to the lightcraft protocol, input local serial port name (e.g. COM1)
argument 3: Protocol type [unity/lightcraft]
argument 4 (optional): Input test, to run in the test mode.
SDK/API Support Disclaimer
We provide developer tools to enable OptiTrack customers across a broad set of applications to utilize their systems in the ways that best suit them. Our Motive API, NatNet SDK, and Camera SDK are designed to enable experienced software developers to integrate data transfer and/or system operation with their preferred systems and pipelines. Sample projects are provided alongside each tool, and we strongly recommend that users reference or use the samples as reliable starting points. The following list specifies the range of support that will be provided for the SDK tools:
Using the SDK tools requires background knowledge on software development; therefore, we do not provide support for basic project setup, compiling, and linking when using the SDK/API to create your own applications.
Although we ensure the SDK tools and their libraries work as intended, we do not provide support for custom developed applications that have been programmed or modified by users using the SDK tools.
Ticketed support will be provided for licensed Motive users using the Motive API and/or the NatNet SDK tools from the included libraries and sample source codes only.
The Camera SDK is a free product, and therefore we do not provide free ticketed support for it.
For other questions, please check out the NaturalPoint forums. Very often, similar development issues get reported and solved there.
This guide provides detailed instructions on commonly used functions of the Motive API for developing custom applications. For a full list of the functions, refer to the Motive API: Function Reference page. Also, for a sample use case of the API functions, please check out the provided marker project. In this guide, the following topics will be covered:
Library files and header files
Initialization and shutdown
Capture setup (Calibration)
Configuring camera settings
Updating captured frames
3D marker tracking
Rigid body tracking
Data streaming
When developing a Motive API project, make sure its linker knows where to find the required library files. This can be done either by specifying its location within the project or by copying these files onto the project folder.
NPTrackingTools
Motive API libraries (.lib and .dll) are located in the lib folder within the Motive install directory, which is C:\Program Files\OptiTrack\Motive\lib
by default. In this folder, the 64-bit library files (NPTrackingToolsx64.dll and NPTrackingToolsx64.lib) can be found. When using the API library, all of the required DLL files must be located in the executable directory. Copy and paste the NPTrackingToolsx64.dll file into the folder alongside the executable file.
Third-party Libraries
Additional third-party libraries are required for Motive API, and most of the DLL files for these libraries can be found in the Motive install directory C:\Program Files\OptiTrack\Motive\
. You can simply copy and paste all of the DLL files from the Motive installation directory into the directory of the Motive API project to use them. Highlighted items in the image below are the required DLL files.
Lastly, copy the C:\Program Files\OptiTrack\Motive\plugins\platforms
folder and its contents into the executable directory as well, since the libraries contained in this folder will also be used.
For function declarations, two header files are required: NPTrackingTools.h and RigidBodySettings.h. These files are located in the C:\Program Files\OptiTrack\Motive\inc\
folder. Always include the NPTrackingTools.h header file in every program developed against the Motive API. The NPTrackingTools.h file already includes RigidBodySettings.h, so there is no need to include it separately.
The NPTrackingTools.h file contains declarations for most of the functions and classes in the API.
The RigidBodySettings.h file contains the declaration of the cRigidBodySettings class, which is used for configuring Rigid Body asset properties.
Note: You could define these directories by using the NPTRACKINGTOOLS_INC
, NPTRACKINGTOOLS_LIB
environment variables. Check the project properties (Visual Studio) of the provided marker project for a sample project configuration.
Motive API, by default, loads the default calibration (CAL) and Application profile (MOTIVE) files from the program data directory unless separately specified. These are the files that Motive also loads at the application startup, and they are located in the following folder:
Default System Calibration: C:\ProgramData\OptiTrack\Motive\System Calibration.cal
Default Application Profile: C:\ProgramData\OptiTrack\MotiveProfile.motive
If specific files need to be loaded into the project, you can export two files from Motive and import them: the application profile (MOTIVE) and the camera calibration (CAL). The application profile is imported in order to obtain software settings and trackable asset definitions. Reliable 3D tracking data can be obtained only after the camera calibration is imported. Application profiles can be loaded using the TT_LoadProfile function, and calibration files can be loaded using the TT_LoadCalibration function.
When using the API, connected devices and the Motive API library need to be properly initialized at the beginning of a program and closed down at the end. The following section covers Motive API functions for initializing and closing down devices.
To initialize all of the connected cameras, call the TT_Initialize function. This function initializes the API library and gets the cameras ready for capturing data, so always call this function at the beginning of a program. If you attempt to use the API functions without the initialization, you will get an error.
Motive Profile Load
Please note that TT_Initialize loads the default Motive profile (MOTIVE) from the ProgramData directory during the initialization process. To load a Motive profile, or settings, from a specified directory, use TT_LoadProfile.
The TT_Update function is primarily used for updating captured frames, which will be covered later, but it has another use as well: TT_Update can also be called to update the list of connected devices. Call this function after initialization to make sure all newly connected devices are properly initialized at the beginning.
When exiting out of a program, make sure to call the TT_Shutdown function to completely release and close down all of the connected devices. Cameras may fail to shut down completely when this function is not called.
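The initialization and shutdown sequence described above can be sketched as a minimal program skeleton, assuming the NPTrackingTools.h header and linked Motive API libraries:

```cpp
#include <cstdio>
#include "NPTrackingTools.h"

int main()
{
    // Initialize the API library and all connected cameras.
    if (TT_Initialize() != NPRESULT_SUCCESS)
    {
        printf("Failed to initialize the Motive API.\n");
        return 1;
    }

    TT_Update();  // pick up any devices connected after startup

    // ... capture / processing loop goes here ...

    TT_Shutdown();  // release and close down all devices before exiting
    return 0;
}
```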
The Motive application profile (MOTIVE) stores all of the trackable assets involved in a capture as well as software configurations, including application settings and data streaming settings. When using the API, it is strongly recommended to first configure all of the settings and define trackable assets in Motive, export a profile (MOTIVE) file, and then load the file by calling the TT_LoadProfile function. This way, you can adjust the settings for your needs in advance and apply them to your program without configuring individual settings.
Cameras must be calibrated in order to track in 3D space. Since camera calibration is a complex process, it is easier to calibrate the camera system in Motive, export the camera calibration file (CAL), and load the exported file into custom applications developed against the API. Once the calibration data is loaded, the 3D tracking functions can be used. For detailed instructions on camera calibration in Motive, please read through the Calibration page.
Loading Calibration
Open Motive.
[Motive] Calibrate: Calibrate camera system using the Calibration panel. Read through the Calibration page for details.
[Motive] Export: After the system has been calibrated, export the calibration file (CAL) from Motive.
Close Motive.
[API] Load: Import calibration onto your custom application by calling the TT_LoadCalibration function to import CAL files.
When successfully loaded, you will be able to obtain 3D tracking data using the API functions.
Note:
Calibration Files: When using exported calibration files, make sure the calibration is still valid. An exported calibration file can be reused as long as the system setup remains unchanged; the file is no longer valid once any part of the setup has been altered after the calibration. Calibration quality may also degrade over time due to environmental factors. For these reasons, we recommend re-calibrating the system routinely to guarantee the best tracking quality.
Tracking Bars: If you are using a tracking bar, camera calibration is not required for tracking 3D points.
Connected cameras are accessible by index number. Camera indexes are assigned in the order the cameras are initialized. Most of the API functions for controlling cameras require an index value. To process all of the cameras, use the TT_CameraCount function to obtain the total camera count and iterate over each camera within a loop. To point to a specific camera, you can use the TT_CameraID or TT_CameraName functions to identify the camera at a given index. This section covers Motive API functions for checking and configuring camera frame rate, camera video type, camera exposure, pixel brightness threshold, and IR illumination intensity.
The following functions return integer values for the configured settings of a camera specified by its index number. The camera video type is returned as an integer value that represents an image processing mode, as listed in NPVIDEOTYPE.
These camera settings are equivalent to the settings that are listed in the Devices pane of Motive. For more information on each of the camera settings, refer to the Devices pane page.
Now that we have covered functions for obtaining configured settings, let's modify some of them. There are two main functions for adjusting camera settings: TT_SetCameraSettings and TT_SetCameraFrameRate. The TT_SetCameraSettings function configures the video type, exposure, threshold, and intensity settings of a camera specified by its index number. The TT_SetCameraFrameRate function configures the frame rate of a camera. The supported frame rate range may vary between camera models; check the device specifications and apply frame rates only within the supported range.
If you wish to keep part of the current camera settings, you can use the above functions to obtain the configured settings (e.g. TT_CameraVideoType, TT_CameraFrameRate, TT_CameraExposure, etc.) and pass them as input arguments to the TT_SetCameraSettings function. The following example demonstrates modifying the frame rate and IR illumination intensity for all of the cameras while keeping the other settings constant.
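One way this pattern could be sketched is shown below. In a real project you would #include "NPTrackingTools.h" and link against the Motive library; here the TT_* functions are replaced with minimal stubs (and the getter name TT_CameraThreshold is an assumption to verify against your header) so the snippet compiles on its own.

```cpp
#include <cassert>

// --- Stubs standing in for NPTrackingTools.h so this sketch is self-contained.
static int gVideoType = 0, gExposure = 50, gThreshold = 200, gIntensity = 15;
static int gFrameRate = 120;

int  TT_CameraCount()         { return 2; }          // stub: pretend two cameras
int  TT_CameraVideoType(int)  { return gVideoType; } // stub getters
int  TT_CameraExposure(int)   { return gExposure; }
int  TT_CameraThreshold(int)  { return gThreshold; }
bool TT_SetCameraSettings(int, int vt, int exp, int thr, int inten)
{ gVideoType = vt; gExposure = exp; gThreshold = thr; gIntensity = inten; return true; }
bool TT_SetCameraFrameRate(int, int fps) { gFrameRate = fps; return true; }

// Change the frame rate and IR intensity on every camera, keeping other settings.
void SetFrameRateAndIntensity(int fps, int intensity)
{
    for (int i = 0; i < TT_CameraCount(); ++i)
    {
        int vt  = TT_CameraVideoType(i);   // keep current video type
        int exp = TT_CameraExposure(i);    // keep current exposure
        int thr = TT_CameraThreshold(i);   // keep current threshold
        TT_SetCameraSettings(i, vt, exp, thr, intensity);
        TT_SetCameraFrameRate(i, fps);
    }
}
```

In a real application, delete the stub definitions and let the linker resolve the TT_* calls from the Motive library instead.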
Camera Settings
Valid frame rate values: Varies by camera model; refer to the respective hardware specifications.
Valid exposure values: Depends on camera model and frame rate settings.
Valid threshold values: 0 - 255
Valid intensity values: 0 - 15
Video Types
Video Type: Data Recording page for more information on image processing modes.
Segment Mode: 0
Grayscale Mode: 1
Object Mode: 2
Precision Mode: 4
MJPEG Mode: 6
There are other camera settings, such as imager gain, that can be configured using the Motive API. Please refer to the Motive API: Function Reference page for descriptions on other functions.
In order to process multiple consecutive frames, you must update the camera frames using one of the following API functions: TT_Update or TT_UpdateSingleFrame. Call one of the two repeatedly within a loop to process all of the incoming frames. In the marker sample, the TT_Update function is called within a while loop as the frameCounter variable is incremented, as shown in the example below.
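A minimal sketch of that update loop follows; TT_Update() is stubbed out here so the snippet is self-contained (the real function comes from NPTrackingTools.h, and 0 corresponds to the success NPRESULT).

```cpp
#include <cassert>

// Stub in place of the NPTrackingTools TT_Update() so this compiles standalone.
static int gFramesServed = 0;
int TT_Update() { ++gFramesServed; return 0; }   // 0 == success

// Process a fixed number of frames, mirroring the marker-sample loop.
int RunFrameLoop(int framesWanted)
{
    int frameCounter = 0;
    while (frameCounter < framesWanted)
    {
        if (TT_Update() == 0)      // a fresh set of frames has been serviced
        {
            ++frameCounter;
            // ... query marker / Rigid Body data for this frame here ...
        }
    }
    return frameCounter;
}
```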
There are two functions for updating the camera frames: TT_Update and TT_UpdateSingleFrame. At the most fundamental level, these two functions both update the incoming camera frames. However, they may act differently in certain situations. When a client application stalls momentarily, it could get behind on updating the frames and the unprocessed frames may be accumulated. In this situation, each of these two functions will behave differently.
The TT_Update() function will disregard accumulated frames and service only the most recent frame data, but it also means that the client application will not be processing the previously missed frames.
The TT_UpdateSingleFrame() function ensures that only one frame is processed each time the function is called. However, when there are significant stalls in the program, using this function may result in accumulated processing latency.
In general, you should use TT_Update(). Consider TT_UpdateSingleFrame() only when your client application must obtain and process every single frame of tracking data and cannot call TT_Update() in a timely fashion.
After loading valid camera calibration, you can use the API functions to track retroreflective markers and get their 3D coordinates. The following section demonstrates using the API functions for obtaining the 3D coordinates. Since marker data is obtained for each frame, always call the TT_Update, or the TT_UpdateSingleFrame, function each time newly captured frames are received.
Marker Index
In a given frame, each reconstructed marker is assigned a marker index number. These marker indexes are used to point to a particular reconstruction within a frame. You can use the TT_FrameMarkerCount function to obtain the total marker count and use this value within a loop to process all of the reconstructed markers. Marker index values may vary between frames, but unique identifiers always remain the same. Use the TT_FrameMarkerLabel function to obtain individual marker labels if you wish to access the same reconstructions across multiple frames.
Marker Position
To obtain the 3D position of a reconstructed marker, use the TT_FrameMarkerX, TT_FrameMarkerY, and TT_FrameMarkerZ functions. These functions return the 3D coordinates (X/Y/Z) of a marker with respect to the global coordinate system, which was defined during the calibration process. You can analyze 3D movements directly from the reconstructed 3D marker positions, or you can create a Rigid Body asset from a set of tracked reconstructions for 6 degree of freedom tracking data. Rigid Body tracking via the API is explained in a later section.
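Iterating over every reconstruction in the current frame might be sketched as follows; the TT_Frame* stubs below stand in for the real library so the snippet compiles on its own.

```cpp
#include <cassert>
#include <cstdio>

// Stubs for NPTrackingTools; the real functions return live reconstruction data.
int   TT_FrameMarkerCount()  { return 3; }          // stub: three markers
float TT_FrameMarkerX(int i) { return 0.1f * i; }   // stub positions (meters)
float TT_FrameMarkerY(int)   { return 1.0f; }
float TT_FrameMarkerZ(int i) { return -0.1f * i; }

// Print the global X/Y/Z position of every reconstructed marker in the frame,
// and return how many markers were processed.
int PrintMarkerPositions()
{
    const int count = TT_FrameMarkerCount();
    for (int i = 0; i < count; ++i)
    {
        std::printf("Marker %d: (%.3f, %.3f, %.3f)\n",
                    i, TT_FrameMarkerX(i), TT_FrameMarkerY(i), TT_FrameMarkerZ(i));
    }
    return count;
}
```

Remember to call TT_Update (or TT_UpdateSingleFrame) before each pass so the frame data is current.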
For tracking the 6 degrees of freedom (DoF) movement of a Rigid Body, a corresponding Rigid Body (RB) asset must be defined. An RB asset is created from a set of reflective markers attached to a rigid object, which is assumed to be undeformable. There are two main approaches to obtaining RB assets when using the Motive API: you can either import existing Rigid Body data or define new Rigid Bodies using the TT_CreateRigidBody function. Once RB assets are defined in the project, the Rigid Body tracking functions can be used to obtain the 6 DoF tracking data. This section covers sample instructions for tracking Rigid Bodies using the Motive API.
We strongly recommend reading through the Rigid Body Tracking page for more information on how Rigid Body assets are defined in Motive.
Let's go through importing RB assets into a client application using the API. In Motive, Rigid Body assets can be created from three or more reconstructed markers, and all of the created assets can be exported into either an application profile (MOTIVE) or a Motive Rigid Body file (TRA). Each Rigid Body asset saves the marker arrangement from when it was first created. As long as the marker locations remain the same, you can use saved asset definitions for tracking the respective objects.
Exporting all RB assets from Motive:
Exporting application profile: File → Save Profile
Exporting Rigid Body file (TRA): File → Export Rigid Bodies (TRA)
Exporting individual RB asset:
Exporting Rigid Body file (TRA): Under the Assets pane, right-click on a RB asset and click Export Rigid Body
When using the API, you can load exported assets by calling the TT_LoadProfile function for application profiles and the TT_LoadRigidBodies or TT_AddRigidBodies function for TRA files. When importing TRA files, the TT_LoadRigidBodies function entirely replaces the existing Rigid Bodies with the list of assets from the loaded TRA file. On the other hand, TT_AddRigidBodies adds the loaded assets onto the existing list while keeping the existing assets. Once Rigid Body assets are imported into the application, the API functions can be used to configure and access them.
Rigid Body assets can also be defined directly using the API. The TT_CreateRigidBody function defines a new Rigid Body from given 3D coordinates. This function takes in an array of float values representing the x/y/z coordinates of multiple markers with respect to the Rigid Body pivot point. The float array for multiple markers should be listed as follows: {x1, y1, z1, x2, y2, z2, …, xN, yN, zN}. You can manually enter the coordinate values or use the TT_FrameMarkerX, TT_FrameMarkerY, and TT_FrameMarkerZ functions to input the 3D coordinates of tracked markers.
When using the TT_FrameMarkerX/Y/Z functions, keep in mind that the marker locations passed to TT_CreateRigidBody are taken with respect to the RB pivot point. To set the pivot point at the center of the created Rigid Body, you will need to first compute the pivot point location and subtract its coordinates from the 3D coordinates of the markers obtained by the TT_FrameMarkerX/Y/Z functions. This process is shown in the following example.
Example: Creating RB Assets
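A sketch of the pivot-centering step described above: compute the centroid of the marker positions and subtract it from each marker, producing the {x1, y1, z1, …} array that TT_CreateRigidBody expects. The helper name is ours, not part of the API.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Given marker positions {x1,y1,z1, x2,y2,z2, ...}, shift them so the Rigid Body
// pivot point sits at the centroid of the markers. The returned float array is
// in the layout expected by TT_CreateRigidBody.
std::vector<float> CenterMarkersOnPivot(std::vector<float> coords)
{
    const std::size_t n = coords.size() / 3;   // number of markers
    float cx = 0, cy = 0, cz = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        cx += coords[3 * i + 0];
        cy += coords[3 * i + 1];
        cz += coords[3 * i + 2];
    }
    cx /= n; cy /= n; cz /= n;                 // centroid = desired pivot point
    for (std::size_t i = 0; i < n; ++i)
    {
        coords[3 * i + 0] -= cx;               // express each marker relative
        coords[3 * i + 1] -= cy;               // to the pivot point
        coords[3 * i + 2] -= cz;
    }
    return coords;
}
```

The centered array can then be passed to TT_CreateRigidBody along with a name, an ID, and the marker count.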
6 DoF Rigid Body tracking data can be obtained using the TT_RigidBodyLocation function. Using this function, you can save the 3D position and orientation of a Rigid Body into declared variables. The saved position values indicate the location of the Rigid Body pivot point, represented with respect to the global coordinate axes. The orientation is saved in both Euler and quaternion representations.
Example: RB Tracking Data
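A sketch of reading the pose follows, with TT_RigidBodyLocation stubbed out so the snippet is self-contained; the parameter list mirrors the declaration in NPTrackingTools.h as we understand it, so verify it against your SDK version.

```cpp
#include <cassert>

// Stub mirroring the assumed TT_RigidBodyLocation signature from NPTrackingTools.h.
void TT_RigidBodyLocation(int /*index*/,
                          float* x, float* y, float* z,               // pivot position
                          float* qx, float* qy, float* qz, float* qw, // quaternion
                          float* yaw, float* pitch, float* roll)      // Euler angles
{
    *x = 0.5f; *y = 1.0f; *z = -0.25f;   // stub pose
    *qx = 0; *qy = 0; *qz = 0; *qw = 1;  // identity orientation
    *yaw = 0; *pitch = 0; *roll = 0;
}

// Fetch the 6 DoF pose of Rigid Body 0 and return the pivot's Y coordinate
// (its height above the ground plane in Motive's global axes).
float GetRigidBodyHeight()
{
    float x, y, z, qx, qy, qz, qw, yaw, pitch, roll;
    TT_RigidBodyLocation(0, &x, &y, &z, &qx, &qy, &qz, &qw, &yaw, &pitch, &roll);
    return y;
}
```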
In Motive, Rigid Body assets have Rigid Body properties assigned to each of them. Depending on how these properties are configured, display and tracking behavior of corresponding Rigid Bodies may vary. When using the API, Rigid Body properties are configured and applied using the cRigidBodySettings class which is declared within the RigidBodySetting.h header file.
Within your program, create an instance of cRigidBodySettings class and call the API functions to obtain and adjust Rigid Body properties. Once desired changes are made, use the TT_SetRigidBodySettings function to assign the properties back onto a Rigid Body asset.
For detailed information on individual Rigid Body settings, read through the Properties: Rigid Body page.
Once the API has been successfully initialized, data streaming can be enabled, or disabled, by calling the TT_StreamNP, TT_StreamTrackd, or TT_StreamVRPN function. The TT_StreamNP function enables/disables data streaming via the NatNet protocol. The NatNet SDK is a client/server networking SDK designed for sending and receiving NaturalPoint data across networks, and tracking data from the API can be streamed to client applications on various platforms via the NatNet protocol. Once data streaming is enabled, connect the NatNet client application to the server IP address to start receiving the data.
The TT_StreamNP function is equivalent to Broadcast Frame Data from the Data Streaming pane in Motive.
The Motive API does not currently support configuring the data streaming settings directly. To configure the streaming server IP address and the data streaming settings, you will need to use Motive and save an application profile (MOTIVE file) that contains the desired configuration. The exported profile can then be loaded when using the API. This way, you can set the interface IP address and decide which data is streamed over the network.
For more information on data streaming settings, read through the Data Streaming page.
Important Note:
Motive API wiki pages are being updated for 3.0 beta. Some of the functions may be missing in the documentation. Please refer to the NPTrackingTools header file for any information that is not documented here.
The Motive API allows control of, and access to, the backend software platform of Motive via C/C++ interface. In other words, the Motive API offers Motive functions without the graphical user interface on top. Using the API, you can employ several features of Motive in your custom applications, such as accessing 2D camera images, marker centroid data, unlabeled 3D points, labeled markers, and Rigid Body tracking data. When you install Motive, all of the required components for utilizing the API are installed within the Motive install directory. The key files for using the Motive API are listed in the below section.
Camera control
Frame control
Point Cloud reconstruction engine control
Obtain and use reconstructed 3D Marker data
Rigid body tracking
Query results
Stream results over the network
In-depth hardware control (e.g. hardware sync customization). Use the Camera SDK instead.
Direct support for data recording and playback.
Control over peripheral devices (Force plates and NI-DAQ)
Functionalities for Skeleton assets.
The Motive API is supported on Windows only.
You must have a valid Motive license and a corresponding hardware key.
When you install Motive, all of the required components of the Motive API are placed within the installation directory; by default, Motive is installed in C:\Program Files\OptiTrack\Motive. The following table lists all of the key files of the API and where they can be found.
NPTrackingTools.h
[Motive Install Directory]\inc\NPTrackingTools.h
The header file NPTrackingTools.h contains declarations for functions and classes of the API. Necessary functions and classes are thoroughly commented within this file. This header file must be #included in your source code for utilizing the Motive API functions.
RigidBodySettings.h
[Motive Install Directory]\inc\RigidBodySettings.h
lib folder
[Motive Install Directory]\lib
This folder includes the C++ 64-bit library files (.lib and .dll) for employing the Motive API. The library is compiled with Visual Studio 2013 using the dynamic run-time (/MD) library, so make sure the client application uses the same setting. The 32-bit NPTrackingTools library has been deprecated since version 2.1.
Sample project
[Motive Install Directory]\Samples\markers
This folder contains a sample Visual Studio project (marker.sln) that uses the Motive API for accessing cameras, markers, and Rigid Body tracking information. Refer to this project to see how the API can be used.
Platforms folder
[Motive Install Directory]\plugins\
The platforms folder is located in the plugins folder and it contains qwindows.dll which is required for running applications using the Motive API. Copy and paste this folder into the EXE directory.
Third-party libraries
[Motive Install Directory]
This guide introduces some of the commonly used functions of the Motive API.
The following page provides a full list of the Motive API functions.
Many of the Motive API functions return their results as integer values defined as NPRESULT. This value expresses the outcome of the operation. Not only does it indicate whether the function operated successfully, it also provides more detailed information on what type of error occurred. When you get an NPRESULT output from a function, you can use the TT_GetResultString function to get the plain-text message that corresponds to the result.
Also, camera video types, or image processing modes, are expressed as integer values as well. These values are listed below and are commented within the header file as well.
NPRESULT Values
Camera Video Type Definitions
The NatNet SDK is a networking software development kit (SDK) for receiving NaturalPoint data across networks. It allows streaming of live or recorded motion capture data from a tracking server (e.g. Motive) into various client applications. Using the SDK, you can develop custom client applications that receive data packets containing real-time tracking information and send remote commands to the connected server. NatNet uses the UDP protocol in conjunction with either Point-To-Point Unicast or IP Multicasting for sending and receiving data. The following diagram outlines the major components of a typical NatNet network setup and how they establish communication between NatNet server and client application.
For previous versions of NatNet, please refer to the provided PDF user guide that ships with the SDK.
Please read through the changelog for key changes in this version.
NatNet is backwards compatible with any version of Motive; however, older versions may lack features that are present in newer versions.
The NatNet SDK consists of the following:
NatNet Library: Native C++ networking library contents, including the static library file (.lib), the dynamic library file (.dll), and the corresponding header files.
NatNet Assembly: Managed .NET assembly (NatNetML.dll) for use in .NET compatible clients.
NatNet Samples: Sample projects and compiled executables designed to be quickly integrated into your code.
A NatNet server (e.g. Motive) has 2 threads and 2 sockets: one for sending tracking data to a client and one for sending/receiving commands.
NatNet servers and clients can exist either on a same machine or on separate machines.
Multiple NatNet clients can connect to a single NatNet server.
When a NatNet server is configured to use IP Multicast, the data is broadcast only once, to the multicast group.
Default multicast IP address: 239.255.42.99 and Port: 1511.
IP address for unicast is defined by a server application.
The NatNet SDK is shipped in a compressed ZIP file format. Within the unzipped NatNet SDK directory, the following contents are included:
Sample Projects: NatNet SDK\Samples
The Samples folder contains Visual Studio 2013 projects that use the NatNet SDK libraries for various applications. These samples are the quickest path towards getting NatNet data into your application. We strongly recommend taking a close look at these samples and adapting applicable code into your application. More information on these samples is covered in the NatNet Samples page.
Library Header Files: NatNet SDK\include
The include folder contains header files for using the NatNet SDK library.
\include\NatNetTypes.h
NatNetTypes.h header file contains the type declaration for all of the data formats that are communicated via the NatNet protocol.
\include\NatNetClient.h
\include\NatNetRequests.h
\include\NatNetRepeater.h
NatNetRepeater.h header file controls how big the packet sizes can be.
\include\NatNetCAPI.h
NatNetCAPI.h header file contains declaration for the NatNet API helper functions. These functions are featured for use with native client applications only.
Library DLL Files: NatNet SDK\lib
NatNet library files are contained in the lib folder. When running applications that are developed against the NatNet SDK library, corresponding DLL files must be placed alongside the executables.
\lib\x64
This folder contains NatNet SDK library files for 64-bit architecture.
\lib\x64\NatNetLib.dll
Native NatNet library for 64-bit platform architecture. These libraries are used for working with NatNet native clients.
\lib\x64\NatNetML.dll
Managed NatNet assembly files for 64-bit platform architecture. These libraries are used for working with NatNet managed clients, including applications that use .NET assemblies.
Note that this assembly is derived from the native library, and to use the NatNetML.dll, NatNetLib.dll must be linked as well.
\lib\x64\NatNetML.xml
Includes XML documentations for use with the NatNetML.dll assembly. Place this alongside the DLL file to view the assembly reference.
\lib\x86
No longer supported in 4.0
\lib\x86\NatNetLib.dll
No longer supported in 4.0.
\lib\x86\NatNetML.dll
No longer supported in 4.0.
\lib\x86\NatNetML.xml
No longer supported in 4.0.
NatNet class and function references for the NatNetClient object.
List of tracking data types available in the NatNet SDK streaming protocol.
NatNet commands for remotely triggering the server application.
NatNet commands for subscribing to specific data types only.
Tip: Code samples are the quickest path towards getting familiar with the NatNet SDK. Please check out the NatNet samples page.
List of NatNet sample projects and the instructions.
Timecode representation in OptiTrack systems and NatNet SDK tools.
A general guideline to using the NatNet SDK for developing a native client application.
A general guideline to using the NatNet SDK for developing a managed client application.
In streamed NatNet data packets, orientation data is represented in the quaternion format (qx, qy, qz, qw). In contrast to Euler angles, the quaternion convention is independent of rotation order; however, it still implies a handedness. When converting quaternion orientation into Euler angles, it is important to decide which coordinate convention you want to convert into. Some of the provided NatNet samples demonstrate quaternion-to-Euler conversion routines. Please refer to the included WinFormSample, SampleClient3D, or Matlab samples for specific implementation details and usage examples.
To convert from the provided quaternion orientation representation, the following aspects of the desired Euler angle convention must be accounted for:
Rotation Order
Handedness: Left handed or Right handed
Axes: Static (Global) or relative (local) axes.
For example, Motive uses the following convention to display the Euler orientation of an object:
Rotation Order: X (Pitch), Y (Yaw), Z (Roll)
Handedness: Right-handed (RHS)
Axes: Relative Axes (aka 'local')
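As an illustration, a generic textbook quaternion-to-Euler conversion for this convention (intrinsic X-Y-Z rotation order, right-handed, relative axes) might look like the following. This is our own sketch, not code taken from the SDK samples.

```cpp
#include <cassert>
#include <cmath>

// Quaternion (qx, qy, qz, qw) -> Euler angles in Motive's display convention:
// rotation order X (pitch), Y (yaw), Z (roll); right-handed; relative (local) axes.
struct Euler { double pitch, yaw, roll; };   // radians

Euler QuatToEulerXYZ(double qx, double qy, double qz, double qw)
{
    // Rotation-matrix terms needed to extract intrinsic X-Y-Z angles.
    const double r02 = 2.0 * (qx * qz + qy * qw);
    const double r12 = 2.0 * (qy * qz - qx * qw);
    const double r22 = 1.0 - 2.0 * (qx * qx + qy * qy);
    const double r01 = 2.0 * (qx * qy - qz * qw);
    const double r00 = 1.0 - 2.0 * (qy * qy + qz * qz);

    Euler e;
    e.pitch = std::atan2(-r12, r22);                              // about X
    e.yaw   = std::asin(std::fmax(-1.0, std::fmin(1.0, r02)));    // about Y (clamped)
    e.roll  = std::atan2(-r01, r00);                              // about Z
    return e;
}
```

Note the asin argument is clamped to [-1, 1] to guard against floating-point drift in near-gimbal configurations.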
Important Note: Use of the direct depacketization is not recommended. The syntax of the bit-stream packets is subject to change, requiring an application to update its parsing routines to be compatible with the new format. The direct depacketization approach should be used only where the use of the NatNet library is not applicable.
In situations where the use of the NatNet library is not applicable (e.g. developing on unsupported platforms such as Unix), you can also depacketize the streamed data directly from the raw bit-stream without using the NatNet library. In order to provide the most current bitstream syntax, the NatNet SDK includes testable, working depacketization samples (PacketClient, PythonClient) that decode NatNet packets directly without using the NatNet client class.
For the most up-to-date syntax, please refer to either the PacketClient sample or the PythonClient sample to use them as a template for depacketizing NatNet data packets.
Adapt the PacketClient sample (PacketClient.cpp) or the PythonClient sample (NatNetClient.py) to your application's code.
Regularly update your code with each revision to the NatNet bitstream syntax.
The 4.0 update includes bit-stream syntax changes that allow up to 32 force plates to be streamed at once. This requires corresponding updates to each program that uses the direct depacketization approach for parsing streamed data. Even systems with fewer than 32 force plates should avoid using direct depacketization. See the Important Note above in the Direct Depacketization section for more information.
Starting from Motive 3.0, you can send NatNet remote commands to Motive and select the version of bitstream syntax to be outputted from Motive. This is accomplished by sending a command through the command port. For details on doing this, please refer to the SetNatNetVersion function demonstrated in the PacketClient.
Bit-Stream NatNet Versions
NatNet 4.0 (Motive 3.0)
NatNet 3.1 (Motive 2.1)
NatNet 3.0 (Motive 2.0)
NatNet 2.10 (Motive 1.10)
NatNet 2.9 (Motive 1.9)
This page provides detailed information on the definition of latency measurements in Motive and the NatNet SDK streaming protocol.
OptiTrack systems combine state-of-the-art technologies to provide swift processing of captured frame data in order to accomplish 3D tracking in real time. However, small processing latencies are inevitably introduced throughout the processing pipeline. For timing-sensitive applications, these latency metrics can be monitored from the Status Panel in Motive or in the NatNet SDK 4.0 streaming protocol.
The latency in an OptiTrack system can be broken down into the components described in the image below.
With Motive 3.0+, PrimeX cameras can now run at up to 1000 Hz. To achieve this, the processed image size is reduced as the frame rate goes above the camera's native frame rate. Because less camera data is processed at higher rates, the latency also decreases. The image below shows how the latency changes from the previous image when going from 240 Hz to 500 Hz with PrimeX 13 cameras.
Example frame rates vs. the latency added by the camera for a PrimeX 41 camera:
20 Hz – 180 Hz: 5.56 ms
240 Hz: 4.17 ms
360 Hz: 2.78 ms
500 Hz: 2.00 ms
1000 Hz: 1.00 ms
A – The center of the camera exposure window.
B – When Motive receives the 2D data from the cameras.
C – When tracking data is fully solved in Motive.
D – When the tracking data is fully processed and ready to be streamed out.
E – When the client application receives the streamed tracking data.
This measurement is reported in the Status Panel in Motive.
(Available for Ethernet camera systems only) This value represents the current system latency. It is reported under the Status Panel, and it represents the total time from when the cameras expose to when the data is fully solved.
This measurement is reported in the Status Panel in Motive.
It represents the amount of time it takes Motive to process each frame of captured data. This includes the time taken for reconstructing the 2D data into 3D data, labeling and modeling the trackable assets, displaying in the viewport, and other processes configured in Motive.
Please note that this does not include the time it takes for Motive to convert the solved data into the NatNet streaming protocol format. This conversion accounts for a slight additional latency (≈ 0.2 ms) which is only reflected in the software latency value reported via NatNet SDK 4.0, therefore resulting in a small delta between the software latency values as reported by Motive and NatNet.
Latencies from the point cloud reconstruction engine, Rigid Body solver, and Skeleton solver are reported individually on the Status Panel in Motive.
This is available only in NatNet version 3.0 or above.
Exemplary latency calculations are demonstrated in the SampleClient project and the WinFormSample project. Please refer to these sources to find more about how these latency values are derived.
In NatNet 3.0, new data types have been introduced to allow users to monitor measured timestamps from specific points in the pipeline. From sFrameOfMocapData received in the NatNet client applications, the following timestamps can be obtained:
Timestamp of Point A: sFrameOfMocapData::CameraMidExposureTimestamp. Available for Ethernet cameras only (Prime / Slim13E).
Timestamp of Point B: sFrameOfMocapData::CameraDataReceivedTimestamp.
Timestamp of Point D: sFrameOfMocapData::TransmitTimestamp.
Refer to the NatNet:_Data_Types page or the NatNetTypes.h file for more information
These timestamps are reported in “ticks” that are provided by the host computer clock. When computing latencies, you need to divide the timestamp values by the clock frequency in order to obtain the time values in seconds. The clock frequency is included in the server description packet: sServerDescriptionPacket::HighResClockFrequency. To calculate the time that has elapsed since a specific timestamp, you can simply call the NatNetClient::SecondsSinceHostTimestamp method, passing the desired timestamp as its input. Using clock synchronization between the client and server, this will return the time in seconds since the corresponding timestamp.
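The tick-to-seconds arithmetic can be sketched as below; the helper names are ours, while the field names (HighResClockFrequency, CameraMidExposureTimestamp, TransmitTimestamp) come from the NatNetTypes.h structures described above.

```cpp
#include <cassert>
#include <cstdint>

// Convert a timestamp difference in clock "ticks" into seconds, given the host
// clock frequency reported in sServerDescriptionPacket::HighResClockFrequency.
double TicksToSeconds(std::uint64_t startTicks, std::uint64_t endTicks,
                      std::uint64_t ticksPerSecond)
{
    return static_cast<double>(endTicks - startTicks)
         / static_cast<double>(ticksPerSecond);
}

// Example: system latency = time from mid-exposure (point A) to transmit (point D),
// i.e. sFrameOfMocapData::CameraMidExposureTimestamp to ::TransmitTimestamp.
double SystemLatencySeconds(std::uint64_t camMidExposureTs,
                            std::uint64_t transmitTs,
                            std::uint64_t clockFrequency)
{
    return TicksToSeconds(camMidExposureTs, transmitTs, clockFrequency);
}
```

For elapsed time relative to the client's clock, prefer the built-in NatNetClient::SecondsSinceHostTimestamp method, which also accounts for client/server clock synchronization.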
System Latency (NatNet)
(Available for Ethernet camera systems only)
This is the latency introduced by both the hardware and software components of the system. It represents the time difference between when the cameras expose and when the captured data is fully processed and prepared to be streamed out.
This value needs to be derived from the NatNet SDK 4.0 streaming protocol by subtracting two timestamp values that are reported in NatNet:
System Latency may not always be available for all system configurations. Thus, it is suggested to enclose this calculation within a conditional statement.
Software Latency (NatNet)
This value needs to be derived from the NatNet streaming protocol.
This latency value represents the time it takes Motive to process the captured data and have it fully ready to be streamed out. This measurement also covers the data packaging time.
This can be derived by subtracting two timestamp values that are reported in NatNet:
In older versions of NatNet, the software latency was roughly estimated and reported as the fLatency data, which is now deprecated. The derived software latency described in this section can be used in place of the fLatency values.
Transmission Latency
This value must be derived from the NatNet SDK 4.0 streaming protocol.
The transmission latency represents the time difference between when Motive streams out the packaged tracking data and when the data reaches the client application(s) through a selected network.
This value can be obtained by calling the SecondsSinceHostTimestamp method using sFrameOfMocapData::TransmitTimestamp as the input.
Client Latency
This value must be derived from the NatNet SDK 4.0 streaming protocol.
The client latency is the time difference between when the cameras expose and when the NatNet client application receives the processed data. This is essentially the total time it takes for a client application to receive the tracking data from the mocap system.
This value can be obtained by calling the SecondsSinceHostTimestamp method using sFrameOfMocapData::CameraMidExposureTimestamp as the input.
In previous versions of Motive (prior to 2.0), the only reported latency metric was the software latency. This was an estimation of the software latency derived from the sum of the processing times taken from each of the individual solvers in Motive. The latency calculation in Motive 2.0 is a more accurate representation and will be slightly larger by comparison than the latency reported in the older versions.
This page provides a sample and instructions on how to use Motive API functions to calibrate a camera system.
The following sample code demonstrates the calibration process using the Motive API. For details on specific functions, please refer to the Motive API: Function Reference page.
Auto-Masking
Auto-Masking is done directly by calling the TT_AutoMaskAllCameras function. When this is called, Motive will sample for a short amount of time and apply a mask to the camera imagers where light was detected.
```cpp
//== To auto-mask, call TT_AutoMaskAllCameras().
TT_AutoMaskAllCameras();
printf( "============\nCamera masking started\n============\n" );
```
Camera Mask
This function returns a memory block containing the mask, with one bit per mask pixel. Mask pixels are rasterized from left to right and from top to bottom of the camera's view.
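As an illustration, reading back a single mask pixel from such a packed buffer could look like the following. The helper is ours, not part of the SDK, and the bit order within each byte (LSB-first here) is an assumption to verify against your SDK version.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Read one pixel of a camera mask returned as a packed bit buffer: one bit per
// pixel, rasterized left to right, top to bottom across the camera's view.
// Assumes LSB-first bit order within each byte.
bool MaskPixel(const std::vector<std::uint8_t>& mask, int width, int x, int y)
{
    const int bitIndex = y * width + x;              // raster order
    return (mask[bitIndex / 8] >> (bitIndex % 8)) & 1;
}
```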
Clear Masks
This function clears all existing masks from the 2D camera view and returns true when the pixel masks are successfully removed. Note that apart from this function, masking is always additive through the API.
Set Camera Mask
TT_SetCameraMask can be used to replace the existing camera mask for any camera. A mask is an array of bytes, one byte per mask pixel block. It returns true when the mask is applied.
Setting the ground plane is done directly by calling the TT_SetGroundPlane function. When this is called, the camera system searches for three markers that resemble a calibration square and uses the given vertical offset value to configure the ground plane.
This class contains size properties of reconstructed 3D markers. The size of a reconstructed marker is defined by its diameter (in mm), and an instance of this class can be assigned to a camera group. Use the corresponding Motive API functions for obtaining and assigning an instance of this class.
MarkerSizeCalculated: This marker size type calculates a suitable diameter for each marker to be displayed.
MarkerSizeFixed: This marker size type overrides the calculated diameter and assigns a fixed diameter.
MarkerSizeCount: Returns the number of marker size types.
The cCameraGroupFilterSettings class determines 2D object filter settings for a camera group, which consists of multiple cameras.
FilterNone will not use any filters.
FilterSizeRoundness distinguishes marker reflections by the marker size and circularity of the reflection. More specifically, it looks at the number of pixels seen by the imager (MinMarkerSize and MaxMarkerSize) as well as the symmetry ratio of the shape of reflections (MinRoundness).
Use the corresponding functions to obtain the existing filter settings and to apply an instance to a camera group.
For explanations on this filter in Motive, read through the page.
The cCameraGroupPointCloudSettings class contains variables for configuring the point cloud reconstruction settings. When first declaring an instance of this class, it will query the currently configured point cloud settings, and you can view or modify the settings through its member methods. There are multiple enum variables for defining various reconstruction settings. Please refer to the page for detailed descriptions on individual settings.
These member functions are used for changing reconstruction settings (bool, double, long) of a cCameraGroupPointCloudSettings instance. Call matching data type functions to modify the parameters.
These member functions fetch a point cloud reconstruction parameter from a cCameraGroupPointCloudSettings instance and save it in the designated address. Call matching data type functions to obtain the parameters.
<source>
class TTAPI cCameraGroupPointCloudSettings
{
public:
</source>
To attach cCameraModule instances to a camera object using Motive API, call the following functions:
to attach.
to detach.
There is a known issue where the default constructor for the Rigid Body settings is not set. We will be addressing this issue in the next release. Until then, please create a Rigid Body in Motive and use the existing definitions.
The cRigidBodySettings class contains all of the setting variables for configuring Rigid Body asset properties.
In the Motive API, you can use...
function to obtain and save configured Rigid Body settings into a cRigidBodySettings instance.
function to assign a cRigidBodySettings instance to a Rigid Body asset in order to apply the settings.
Refer to the below source code for Rigid Body properties available in the cRigidBodySettings class.
For more information on each setting, read through the page.
Allows a custom name to be assigned to the Rigid Body. Default is "Rigid Body X" where X is the Rigid Body ID.
Enables/Disables tracking of the selected Rigid Body. Disabled Rigid Bodies will not be tracked, and their data will not be included in the exported or streamed tracking data.
User definable ID for the selected Rigid Body. When working with capture data in the external pipeline, this value can be used to address specific Rigid Bodies in the scene.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to be booted or first tracked.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to continue to be tracked after the initial boot.
[Advanced] The order of the Euler axis used for calculating the orientation of the Rigid Body and Skeleton bones. Motive computes orientations in Quaternion and converts them into an Euler representation as needed. For exporting specific Euler angles, it's recommended to configure it from the Exporter settings, or for streaming, convert Quaternion into Euler angles on the client-side.
Selects whether or not to display the Rigid Body name in the 3D Perspective View. If selected, a small label in the same color as the Rigid Body will appear over the centroid in the 3D Perspective View.
Show the corresponding Rigid Body in the 3D viewport when it is tracked by the camera system.
Color of the selected Rigid Body in the 3D Perspective View. Clicking on the box will bring up the color picker for selecting the color.
For Rigid Bodies, this property shows or hides the visual of the Rigid Body pivot point.
[Advanced] Enables the display of a Rigid Body's local coordinate axes. This option can be useful in visualizing the orientation of the Rigid Body, and for setting orientation offsets.
Shows a history of the Rigid Body’s position. When enabled, you can set the history length and the tracking history will be drawn in the Perspective view.
Shows the Asset Model Markers as transparent spheres on the Rigid Body. Asset model markers are the expected marker locations according to the Rigid Body solve.
Draws lines between labeled Rigid Body or Skeleton markers and corresponding expected marker locations. This helps to visualize the offset distance between actual marker locations and the asset model markers.
[Advanced] When enabled, all markers that are part of the Rigid Body definition will be dimmed, but still visible, when not present in the point cloud.
When a valid geometric model is loaded in the Attached Geometry section, the model will be displayed instead of a Rigid Body when this entry is set to true.
The Attached Geometry setting is visible when the Replace Geometry setting is enabled. Here, you can load an OBJ file to replace the Rigid Body visual. Scale, position, and orientation of the attached geometry can be configured under the following section as well. When an OBJ file is loaded, properties configured in the corresponding MTL file alongside the OBJ file are loaded as well.
Attached Geometry Settings
When Attached Geometry is enabled, you can attach a 3D model to a Rigid Body and the following settings become available.
Pivot Scale: Adjusts the size of the Rigid Body pivot point.
Scale: Rescales the attached object.
Yaw (Y): Rotates the attached object with respect to the Y-axis of the Rigid Body coordinate axes.
Pitch (X): Rotates the attached object with respect to the X-axis of the Rigid Body coordinate axes.
Roll (Z): Rotates the attached object with respect to the Z-axis of the Rigid Body coordinate axes.
X: Translates the attached object along the X-axis of the Rigid Body coordinate system.
Y: Translates the attached object along the Y-axis of the Rigid Body coordinate system.
Z: Translates the attached object along the Z-axis of the Rigid Body coordinate system.
Opacity: Sets the opacity of an attached object. An OBJ file typically comes with a corresponding MTL file which defines its properties, and the transparency of the object is defined within these MTL files. The Opacity value under the Rigid Body properties applies a factor between 0 and 1 to rescale the loaded property. In other words, you can set the transparency in the MTL file and rescale it using the Opacity property in Motive.
The IMU feature is not fully supported in Motive 3.x. Please use Motive 2.3 when working with IMU active components.
Uplink ID assigned to the Tag or Puck using the Active Batch Programmer. This ID must match the Uplink ID assigned to the Active Tag or Puck that was used to create the Rigid Body.
Radio frequency communication channel configured on the Active Tag, or Puck, that was used to define the corresponding Rigid Body. This must match the RF channel configured on the active component; otherwise, IMU data will not be received.
Applies double exponential smoothing to translation and rotation of the Rigid Body. Increasing this setting may help smooth out noise in the Rigid Body tracking, but excessive smoothing can introduce latency. Default is 0 (disabled).
Compensates for system latency when tracking the corresponding Rigid Body by predicting its movement into the future. Please note that predicting further into the future may impact the tracking stability.
[Advanced] When needed, you can damp down translational and/or rotational tracking of a Rigid Body or a Skeleton bone on selected axes.
The Camera SDK provides hardware (cameras and hubs) controls and access to the most fundamental frame data, such as grayscale images and 2D object information, from each camera. Using the Camera SDK, you can develop your own image processing applications that utilize the capabilities of the OptiTrack cameras. The Camera SDK is a free tool that can be downloaded from our website.
Note that 3D tracking features are not directly supported by the Camera SDK, but they are available via the Motive API. For more information on the Camera SDK, visit our website.
Please keep in mind that the Camera SDK is compatible only with the matching released version of Motive. For instance, if you are using Motive 2.3.1, you will want to download and use Camera SDK version 2.3.1.
Camera hardware controls
Receiving frame data and 2D object data from each camera
Device synchronization controls
Sample applications with source code
After you install the Camera SDK, there will be a folder in your OptiTrack installation directory. This folder can also be accessed from the Windows start menu → OptiTrack → Camera SDK:
(\OptiTrack\Camera SDK\bin) Includes an executable sample application, visualtest.exe, which was developed using the Camera SDK. This sample application allows you to configure camera settings and monitor captured 2D frames from each camera.
(\OptiTrack\Camera SDK\lib) Includes the native C++ library files for building applications.
(\OptiTrack\Camera SDK\include) Includes header files for the SDK. Usage of each class is commented within the header files.
(\OptiTrack\Camera SDK\doc) Includes topic specific instructions on how to utilize the Camera SDK.
(\OptiTrack\Camera SDK\samples) Includes sample projects that employ the Camera SDK. Source code for these applications is included for additional reference.
This class can be inherited into your TTAPI project for testing Rigid Body solutions. Implement the desired testing protocol for Rigid Bodies within the RigidBodySolutionTest member function of the inherited class, and attach/detach this class to a Rigid Body using the following functions.
When using the Motive API, you can attach or detach cTTAPIListener class to a project by using the following functions.
TTAPIFrameAvailable
TTAPIFrameAvailable callback is called when a new synchronized group of camera frames has been delivered to the TTAPI and is ready for processing. You can use this notification to then call TT_Update() without having to poll blindly for new data.
TTAPICameraConnected
This callback function is called when a new camera is connected to the system.
TTAPICameraDisconnected
This callback function is called when a camera is disconnected from the system.
InitialPointCloud
InitialPointCloud is called when the initial point cloud is calculated from the connected cameras. During this callback 3D markers can be added (up to MaxMarkers) or removed by modifying the Markers list as well as the MarkerCount variable. After this callback the marker list is passed onto the Rigid Body solver.
ApplyContinuousCalibrationResult
By overriding this function in the attached listener class, you can control when the updated continuous calibration result gets applied. Continuous calibration will call this callback function whenever an updated calibration is available. You can check the parameters in this function and return true to accept the updated calibration, or false to discard it.
Requests that Motive configure the specified properties. The property name must exactly match the name of the setting displayed in Motive. Please refer to the page for the list of properties. Master Rate can be used to control the frame rate of the camera system. To configure camera settings remotely, use the "model #[serial]" string format.
Requests a property of a Take. You can query a property of a specific Take by entering its name, or enter an empty string to query the currently loaded Take. Most of the Take properties can be queried through this command.
Name of the property.
The header file RigidBodySettings.h contains the class declaration for cRigidBodySettings, which is used to modify and configure the settings for Rigid Body assets. This header file is included within the NPTrackingTools.h file, so it does not need to be included separately.
Third-party DLL libraries are required for all applications built against the API. Please see the documentation for more information.
The NatNetClient.h header file contains the declaration of the NatNetClient class, which is the key object used in the SDK. This object must be initialized in order to run a client application that receives data packets.
The NatNetRequests.h header file contains a list of requests that can be sent to a server application using the SendMessageAndWait function.
Most of the settings in the cRigidBodySettings class are described in the page.
This page lists out the NatNet sample applications provided with the SDK and provides instructions for some of the samples. The code samples are the quickest path towards getting NatNet data into your application. We typically recommend you:
1. Identify your application’s development/interface requirements (managed, native, etc.).
2. Adapt the NatNet sample code from the corresponding NatNet sample application in the samples folder into your application.
3. Use the API reference in the next page for additional information.
The Visual Studio solution file \Samples\NatNetSamples.sln will open and build all of the NatNet sample projects. If you are creating an application from scratch, please refer to the following sections for application-specific requirements.
The following projects are located in the NatNet SDK\Samples folder.
NatNet SDK Samples
The following sample projects utilize the NatNet SDK library for obtaining tracking data from a connected server application.
Matlab (Managed: Matlab)
SampleClient (Native: C++): Sample NatNet console app that connects to a NatNet server, receives a data stream, and writes that data stream to an ASCII file. This sample
SampleClient3D (Native: C++): Sample NatNet application that connects to a NatNet server, receives a data stream, and displays that data in an OpenGL 3D window.
SampleClientML (Managed: .NET (C#))
WinForms sample (Managed: C# .NET): Simple C# .NET sample showing how to use the NatNet managed assembly (NatNetML.dll). This sample also demonstrates how to send and receive the NatNet commands.
Direct Depacketization Samples
The following sample projects do not use the NatNet SDK library. Client/Server connection is established at a low-level by creating sockets and threads within the program, and the streamed data are depacketized directly from the bit-stream syntax. The following sample approaches should be used only when the use of NatNet SDK library is not applicable (e.g. streaming into UNIX clients).
PacketClient
C++
Simple example showing how to connect to a NatNet multicast stream and decode NatNet packets directly without using the NatNet SDK.
PythonClient
Python
Sample Python code file (.py) for using Python with NatNet streaming. This sample depacketizes data directly from the bit-stream without using the library.
XML trigger broadcast
The following samples demonstrate how to use remote triggering in Motive using the XML formatted UDP broadcast packets.
BroadcastSample
C++
Sample application illustrating how to use the remote record trigger in Motive using XML-formatted UDP broadcast packets.
1. [Motive] Start the OptiTrack server (e.g. Motive) and begin streaming data via the Streaming Panel.
2. [SampleClient] Start the client application from the command prompt or directly from the NatNet SDK/Samples/bin folder.
3. [SampleClient] Once the sample application starts up, it will search the local network and list out IP addresses of available tracking servers where tracking data is streamed from. Select a server address by pressing the corresponding number key.
4. [SampleClient] The client application is connected to the local loopback address (127.0.0.1) and receiving tracking data.
The Rigid Body sample (SampleClient3D) illustrates how to decode NatNet 6DOF Rigid Body and Skeleton Segment data from the OptiTrack quaternion format to Euler angles and display them in a simple OpenGL 3D viewer. This sample also illustrates how to associate RigidBody/Skeleton Segment names and IDs from the data descriptions with the IDs streamed in the FrameOfMocapData packet.
1. [Motive] Load a dataset with Rigid Body or Skeleton definitions
2. [Motive] Enable network streaming ( Data Streaming Pane -> Check Broadcast Frame Data )
3. [Motive] Enable streaming Rigid Body data (check Stream Options -> Stream Rigid Bodies = True)
4. [Sample3D] File -> Connect
1. [Motive] Load a dataset with Rigid Body or Skeleton definitions
2. [Motive] Set IP address to stream from (Network Interface Selection -> Local Interface)
3. [Motive] Enable network streaming ( Data Streaming Pane -> Check Broadcast Frame Data )
4. [Motive] Enable streaming Rigid Body data (check Stream Options -> Stream Rigid Bodies = True)
5. [Sample3D] Set Client and Server IP addresses
6. [Sample3D] File -> Connect
IP Address: IP address of the client NIC card you wish to use.
Server IP Address: IP address of the server entered in step 2 above.
1. [Motive] Start a NatNet server application (e.g. Motive).
2. [Motive] Enable NatNet streaming from the Server application.
3. [WinFormTestApp] Start the WinForms sample application from the NatNet Samples folder.
4. [WinFormTestApp] Update the “Local” and “Server” IP Addresses as necessary.
5. [WinFormTestApp] Press the “Connect” button to connect to the server.
6. [WinFormTestApp] Press the “Get Data Descriptions” button to request and display a detailed description of the Server’s currently streamed objects.
7. [WinFormTestApp] Select a Row in the DataGrid to display that value in the graph.
1. [Motive] Start a NatNet server application (e.g. Motive).
2. [Motive] Enable NatNet streaming from the Server application.
3. [Matlab] Start Matlab
4. [Matlab] Open the NatNetPollingSample.m file.
5. [Matlab] From the editor window, press Run
Sample MATLAB code file (.m) for using MATLAB with the NatNet managed assembly (NatNetML.dll) using the provided class. Works in MATLAB version 2014 or above.
Sample NatNet C# console application that connects to a NatNet server on the local IP address, receives the data stream, and outputs the received data. Note: must be set to false.