RV 7: REFERENCE MANUAL

Table of Contents
Chapter 1 Overview
Chapter 2 Image Processing Graph
Chapter 3 Writing a Custom GLSL Node
Chapter 4 Python
Chapter 5 Event Handling
Chapter 6 RV File Format
Chapter 7 Using Qt in Mu
Chapter 8 Modes and Widgets
Chapter 9 Package System
Chapter 10 A Simple Package
Chapter 11 The Custom Matte Package
Chapter 12 Automated Color and Viewing Management
Chapter 13 Network Communication
Chapter 14 Webkit JavaScript Integration
Chapter 15 Hierarchical Preferences
Chapter 16 Node Reference
Chapter 17 Additional GLSL Node Reference
Appendix A Open Source Components
Appendix B Licensed Components

Chapter 1 Overview

RV comes with the source code to its user interface. The code is written in a language called Mu which is not difficult to learn if you know Python, MEL, or most other computer languages used for computer graphics. As of 3.12, RV can also use Python in a nearly interchangeable manner.
If you are completely unfamiliar with programming, you may still glean information about how to customize RV from this manual, but the more complex tasks, like creating a special overlay or slate for RVIO or adding a new heads-up widget to RV, might be difficult to understand without help from someone more experienced.
This manual does not assume you know Mu to start with, so you can dive right in. For Python, some assumptions are made. The chapters are organized with specific tasks in mind.
The reference chapters contain detailed information about various internals that you can modify from the UI.
Using the RV file format (.rv) is detailed in Chapter 6.

The Big Picture

RV is two different pieces of software: the core (written in C++) and the interface (written in Mu and Python). The core handles the following things:
The interface — which is available to be modified — is concerned with the following:
RVIO shares almost everything with RV including the UI code (if you want it to). However it will not launch a GUI so its UI is normally non-existent. RVIO does have additional hooks for modification at the user level: overlays and leaders. Overlays are Mu scripts which allow you to render additional visual information on top of rendered images before RVIO writes them out. Leaders are scripts which generate frames from scratch (there is nothing rendered under them) and are mainly there to generate customized flexible slates automatically.

Drawing

In RV's user interface code, or in RVIO's leaders and overlays, it's possible to draw on top of rendered frames. This is done using the industry-standard OpenGL API. There are Mu modules which implement the OpenGL 1.1 functions, including the GLU library. In addition, there is a module which makes it easy to render TrueType fonts as textures (so you can scale, rotate, and composite characters as images). For Python there is PyOpenGL and related modules.
Mu has a number of OpenGL friendly data types which include native support for 2D and 3D vectors and dependently typed matrices (e.g., float[4,4], float[3,3], float[4,3], etc). The Mu GL modules take the native types as input and return them from functions, but you can use normal GL documentation and man pages when programming Mu GL. In this manual, we assume you are already familiar with OpenGL. There are many resources available to learn it in a number of different programming languages. Any of those will suffice to understand it.

Menus

The menu bar in an RV session window is completely controlled (and created) by the UI. There are a number of ways you can add menus or override and replace the existing menu structure.
Adding one or more custom menus to RV is a common customization. This manual contains examples of varying complexity to show how to do this. It is possible to create static menus (pre-defined with a known set of menu items) or dynamic menus (menus that are populated when RV is initialized based on external information, like environment variables).
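For a first taste of what a static menu looks like, here is a minimal sketch in Python. It uses the mode and package machinery described in later chapters (rvtypes.MinorMode and its init() menu argument); the mode name, menu label, and callback are made up for illustration.

from rv import rvtypes

class HelloMenuMode(rvtypes.MinorMode):
    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        # init(name, globalBindings, overrideBindings, menu)
        # The menu argument is a list of (menuName, [(itemLabel, callback, hotkey, stateFunc)]) tuples.
        self.init("hello-menu-mode", None, None,
                  [("Hello Menu",
                    [("Say Hello", self.sayHello, None, None)])])

    def sayHello(self, event):
        print "Hello from a custom menu item"

def createMode():
    return HelloMenuMode()

Chapters 8 through 10 cover modes and packages, which is where menu definitions like this normally live.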

Chapter 2 Image Processing Graph

The UI needs to communicate with the core part of RV. This is done in two ways: by calling special command functions (commands) which act directly on the core (e.g. play() causes it to start playing), or by setting variables in the underlying image processing graph which control how images will be rendered.
Inside each session there is a directed acyclic graph (DAG) which determines how images and audio will be evaluated for display. The DAG is composed of nodes which are themselves collections of properties.
A node is something that produces images and/or audio as output from images and audio inputs (or no inputs in some cases). An example from RV is the color node; the color node takes images as input and produces images that are copies of the input images with the hue, saturation, exposure, and contrast potentially changed.
A property is a state variable. The node's properties as a whole determine how the node will change its inputs to produce its outputs. You can think of a node's properties as parameters that change its behavior.
RV's session file format (.rv file) stores all of the nodes associated with a session including each node's properties. So the DAG contains the complete state of an RV session. When you load an .rv file into RV, you create a new DAG based on the contents of the file. Therefore, to change anything in RV that affects how an image looks, you must change a property in some node in its DAG.
There are a few commands which RV provides to get and set properties: these are available in both Mu and Python.
Finally, there is one last thing to know about properties: they are arrays of values. So a property may contain zero values (it's empty) or one value or an array of values. The get and set functions above all deal with arrays of numbers even when a property only has a single value.
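As a quick illustration (a sketch in Python; the property name comes from the node reference later in this manual, and the "#" type-addressing syntax is explained later in this chapter), even a single-valued property is read and written as a list:

import rv.commands as rvc

# Exposure on the active RVColor node is stored as three floats (R, G, B).
exposure = rvc.getFloatProperty("#RVColor.color.exposure")
print exposure                       # e.g. [0.0, 0.0, 0.0]

# Setting it likewise takes a list, even for single-valued properties.
rvc.setFloatProperty("#RVColor.color.exposure", [1.0, 1.0, 1.0], True)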
Chapter 16 lists all properties and their function for each node type.

Top-Level Node Graph

When RV is started with, for example, two pieces of media (movies or file sequences), it will create two top-level group nodes: one for each media source. These are called RVSourceGroup nodes. In addition, there are four other top-level group nodes created, and one display group node for each output device present on the system (i.e. one for each connected monitor and, in the case of RVSDI, one for each SDI device).
Figure 2.1: Top-Level node graph when two sources are present.
There is always a default layout (RVLayoutGroup), sequence (RVSequenceGroup), and stack node (RVStackGroup), as well as a view group node (RVViewGroup). The view group is connected to each of the active display groups (RVDisplayGroup). There is only one input to the view group, and that input determines what the user is seeing in the image viewer. When the user changes views, the view group input is switched to the node the user wishes to see. For example, when the user looks at one of the sources (not in a sequence) the view group will be connected directly to that source group.
New top-level nodes can be created by the user. These nodes can take as inputs any other top-level nodes in the session other than the view group and the display groups.
In the scripting languages, the nodes are referred to by internal name. The internal name is not normally visible to the user, but is used extensively in the session file. Most of the node graph commands use internal node names or the name of the node's type.
Each entry below shows the command, its Mu and Python return types in brackets, and a description.

nodes() [Mu: string[] / Python: unicode string list]: Returns an array of all nodes in the graph.
nodesOfType (string typename) [Mu: string[] / Python: unicode string list]: Returns all nodes in the graph of the specified type.
nodeTypes() [Mu: string[] / Python: unicode string list]: Returns an array of all node types known to the application.
nodeType (string nodename) [Mu: string / Python: unicode string]: Returns the type of the node specified by nodename.
deleteNode (string nodename) [Mu: void / Python: None]: Deletes the node specified by nodename.
setViewNode (string nodename) [Mu: void / Python: None]: Connects the specified node to the view group.
newNode (string typename, string nodename = nil) [Mu: string / Python: unicode string]: Creates a node of type typename with name nodename or, if nodename is nil, uses a default name.
nodeConnections (string nodename, bool traverseGroups = false) [Mu: (string[],string[]) / Python: tuple of two unicode string lists]: Returns a tuple of nodes connected to the specified node: the first element is an array of input node names, the second an array of output node names.
nodeExists (string nodename) [Mu: bool / Python: bool]: Returns true if the specified node exists, false otherwise.
setNodeInputs (string nodename, string[] inputNodes) [Mu: void / Python: None]: Connects a node's inputs to an array of node names.
testNodeInputs (string nodename, string[] inputNodes) [Mu: string / Python: unicode string]: Tests the validity of a set of input nodes for nodename. If nil is returned then the inputs are valid. If a string is returned then the inputs are not valid and the string contains a human-readable reason why.
Table 2.1: Commands used to manage nodes in the graph
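A short usage sketch of these commands in Python (node names in the output will vary per session, and "myStack" is an arbitrary name chosen here):

import rv.commands as rvc

# List every RVColor node in the session and print its type, inputs, and outputs.
for node in rvc.nodesOfType("RVColor"):
    inputs, outputs = rvc.nodeConnections(node, False)
    print node, rvc.nodeType(node), inputs, outputs

# Create a new top-level stack group and make it the current view.
stack = rvc.newNode("RVStackGroup", "myStack")
rvc.setViewNode(stack)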

Group Nodes and Pipeline Groups

A group node is composed of multiple member nodes. The graph connectivity is determined by the value of the group node's properties or it is fixed. Group nodes can contain other group nodes. The member nodes are visible to the user interface scripting languages and their node names are unique in the graph. Nodes may only be connected to nodes that are members of the same group. In the case of top level nodes they can be connected to other top level nodes.
In RV 4, a new type of group node has been introduced: the pipeline group. A pipeline group is a group node that connects its members into a single pipeline (no branches). Every pipeline group has a string array property called pipeline.nodes which determines the types of the nodes in the pipeline and the order in which they are connected. Any node type other than view and display group nodes can be specified in the pipeline.nodes property.
Each type of pipeline group has a default pipeline. The RVLinearizePipelineGroup has two nodes in its default pipeline, the view pipeline is empty by default, and all of the others have a single node. By modifying the pipeline.nodes property in any of these pipeline groups, the default member nodes can be swapped out, removed completely, or supplemented with additional nodes.
For example, the following Python code will set the view pipeline to use a user-defined node called “FilmLook”:
setStringProperty("#RVViewPipelineGroup.pipeline.nodes", ["FilmLook"], True)

Source Group Node

The source group node (RVSourceGroup) has a fixed set of nodes and three pipeline groups which can be modified to customize the source color management.
Figure 2.2: Source Group Internals
The source group takes no inputs. There is either an RVFileSource or an RVImageSource node at the leaf position of the source group. A file source contains the name of the media that is provided by the source. An image source contains the raw pixels of its media (usually obtained directly from a renderer or similar process).
The source group is responsible for linearizing the incoming pixel data, possibly color correcting it and applying a look, and holding per-source annotation and transforms. Any of these operations can be modified by changing property values on the member nodes of the source group.
Pixels are expected to be in the working space (normally linear) after exiting the source group.
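As an illustration of customizing the source color pipelines (a sketch in Python; "MyLinearize" is a made-up node type standing in for whatever custom or built-in node you want in that position), the linearize pipeline of the currently active source can be rewritten the same way the view pipeline was above:

import rv.commands as rvc

# Replace the default contents of the active source's linearize pipeline.
# "MyLinearize" is hypothetical; any non-view, non-display node type is allowed.
rvc.setStringProperty("#RVLinearizePipelineGroup.pipeline.nodes",
                      ["MyLinearize"], True)

# Read it back to confirm.
print rvc.getStringProperty("#RVLinearizePipelineGroup.pipeline.nodes")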
Each entry below shows the command, its Mu and Python return types in brackets, and a description.

sources () [Mu: (string,int,int,int,float,bool,bool)[] / Python: same]: Returns an array of media info for all loaded media.
sourcesAtFrame (int frame) [Mu: string[] / Python: string array]: Returns an array of source node names (RVFileSource and/or RVImageSource). This is equivalent to nodesOfType("RVSource").
sourceAttributes (string nodename, string medianame = nil) [Mu: (string,string)[] / Python: array of (string,string)]: Returns an array of image attribute name/value pairs at the current frame. The source name can be the node name or the source path as returned by PixelImageInfo, etc. The optional media argument can be used to constrain the attributes to that media only.
sourceMediaInfo (string nodename, string medianame = nil) [Mu: SourceMediaInfo / Python: dictionary]: Returns a SourceMediaInfo structure for the given source and optional media. The SourceMediaInfo supplies geometric and timing information about the image and sequence.
sourceDisplayChannelNames (string nodename) [Mu: string[] / Python: string array]: Returns the names of channels in a source which are mapped to the display RGBA channels.
addSource (string filename, string tag = nil) [Mu: void / Python: None]: Creates a new source group with the specified media.
addSource (string[] filenames, string tag = nil) [Mu: void / Python: None]: Creates a new source group with the specified media.
addSourceVerbose (string[] filenames, string tag = nil) [Mu: string / Python: string]: Creates a new source group with the specified media. Returns the name of the source node created.
addToSource (string filename, string tag = nil) [Mu: void / Python: None]: Adds media to an existing source group.
addToSource (string[] filenames, string tag = nil) [Mu: void / Python: None]: Adds media to an existing source group.
setSourceMedia (string nodename, string[] filenames, string tag = nil) [Mu: void / Python: None]: Replaces all media in the given RVFileSource node with new media, with an optional tag.
relocateSource (string nodename, string oldfilename, string newfilename) [Mu: void / Python: None]: Replaces a single piece of media in the specified RVFileSource node with new media.
relocateSource (string oldfilename, string newfilename) [Mu: void / Python: None]: Replaces a single piece of media in the current RVFileSource node with new media.
newImageSource (string mediaName, int width, int height, int uncropWidth, int uncropHeight, int uncropX, int uncropY, float pixelAspect, int channels, int bitsPerChannel, bool floatingPoint, int startFrame, int endFrame, float fps, string[] layers = nil, string[] views = nil) [Mu: string / Python: string]: Creates a new source group with an image source as the media. The name of the newly created image source node is returned.
sourceMedia (string nodename) [Mu: (string,string[],string[]) / Python: same]: Returns a tuple describing the media in nodename. This command only returns information about the primary media and is deprecated. Use sourceMediaInfo() instead.
Table 2.2: Commands used to manage and create source groups
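A small Python sketch tying a few of these commands together (the file path is a placeholder):

import rv.commands as rvc

# Add a movie and capture the name of the new source node.
src = rvc.addSourceVerbose(["/path/to/shot.mov"])

# Query timing and geometry info for the source.
info = rvc.sourceMediaInfo(src)
print info    # dictionary of geometric and timing fields

# Dump the image attributes of the source's media.
for name, value in rvc.sourceAttributes(src):
    print name, value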

View Group Node

The view group (RVViewGroup) is responsible for viewing transforms and is the final destination for audio in most cases. The view group is also responsible for rendering any audio waveform visualization.
Changing the view in RV is equivalent to changing the input of the view group. There is only one view group in an RV session.
The view group contains a pipeline into which arbitrary nodes can be inserted for purposes of QC and visualization. By default, this pipeline is empty (it has no effect).
Figure 2.3: View Group Internals
setViewNode (string nodename) [Mu: void / Python: None]: Connects the specified node to the view group.
nextViewNode () [Mu: void / Python: None]: Switches to the next view in the view history (if there is one).
prevViewNode () [Mu: void / Python: None]: Switches to the previous view in the view history (if there is one).
Table 2.3: High-level commands used to change the view group inputs

Sequence Group Node

The sequence group node causes its inputs to be rendered one after another in time.
The internal RVSequence node contains an EDL data structure which determines the order and possibly the frame ranges for its inputs. By default the EDL is automatically created by sequencing the inputs in order from the first to last with their full frame ranges. The automatic EDL function can be disabled in which case arbitrary EDL data can be set including cuts back to a single source multiple times.
Each input to a sequence group has a unique sub-graph associated with it that includes an RVPaint node to hold annotation per input and an optional retime node to force all input media to the same FPS.
Figure 2.4: Sequence Group Internals

Stack Group Node

The stack group node displays its inputs on top of each other and can control a crop per input, in order to allow pixels from lower layers to be seen under upper layers. Similar to the sequence group, the stack group contains an optional retime node per input, in order to force all of the inputs' FPS to the same value.
Unlike the sequence group, the stack group's paint node stores annotation after the stacking so it always appears on top of all images.
Figure 2.5: Stack Group Internals

Layout Group Node

The layout group is similar to a stack group, but instead of showing all of its inputs on top of one another, the inputs are transformed into a grid, row, column, or under control of the user (manually). Like the other group nodes, there is an optional retime node to force all inputs to a common FPS. Annotations on the layout group appear on top of all images regardless of their input order.
Figure 2.6: Layout Group Internals

Display Group Node

There is one display group for each video device accessible to RV. For example in the case of a dual monitor setup, there would be two display groups: one for each monitor. In the case of RVSDI, there is also an additional display group for each SDI output device.
The display group has two functions: to prepare the working space pixels for display on the associated device and to set any stereo modes for that device.
By default the display group's pipeline uses an RVDisplayColor node to provide the color correction. The user can use any node for that purpose, instead of or in addition to the existing RVDisplayColor. For example, when OpenColorIO is being used, a DisplayOCIONode is used in place of the RVDisplayColor.
For a given desktop setup with multiple monitors only one of the RVDisplayGroups is active at a time: the one corresponding to the monitor that RV's main window is on. In presentation mode, two RVDisplayGroups will be active: one for RV's main window and one for the presentation device. Each display group has properties which identify their associated device.
Changes to a display group affect the color and stereo mode for the associated device only. In order to make a global color change that affects all devices, a node should be inserted into the view group's pipeline or earlier in the graph.
Figure 2.7: Display Group Internals

Addressing Properties

A full property name has three parts: the node name, the component name, and the property name. These are concatenated together with dots, as in nodename.componentname.propertyname. Each property has its own type, which can be set and retrieved with one of the set or get functions. You must use the correct get or set function to access the property. For example, to set the display gamma, which is part of the "display" node, you need to use setFloatProperty() like so in Mu:
setFloatProperty("display.color.gamma", float[] {2.2, 2.2, 2.2}, true)
or in Python:
setFloatProperty("display.color.gamma",  [2.2, 2.2, 2.2], True)
In this case the value is being set to 2.2.
Figure 2.8: Conceptual diagram of RV's image and audio processing graph for a session with a single sequence of two sources. The default stack and layout are not included in this diagram, but would be present.
In an RV session, some node names will vary per the source(s) being displayed and some will not. Figure 2.8 shows a pipeline diagram for one possible configuration and indicates which are per-source (duplicated) and which are not.
At any point in time, a subset of the graph is active. For example if you have three sources in a session and RV is in sequence mode, at any given frame only one source branch will be active. There is a second way to address nodes in RV: by their types. This is done by putting a hash (#) in front of the type name. Addressing by node type will affect all of the currently active nodes of the given type. For example, a property in the color node is exposure which can be addressed directly like this in Mu:
color.color.exposure
or using the type name like this:
#RVColor.color.exposure
When the “#” type name syntax is used, and you use one of the set or get functions on the property, only nodes that are currently active and which are the first reachable of the given type will be considered. So in this case, if we were to set the exposure using type-addressing:
setFloatProperty("#RVColor.color.exposure", float[] {2.0, 2.0, 2.0}, true)
or in Python:
setFloatProperty("#RVColor.color.exposure", [2.0, 2.0, 2.0], True)
In sequence mode (i.e. the default case), only one RVColor node is usually active at a time (the one belonging to the source being viewed at the current frame). In stack mode, the RVColor nodes for all of the sources could be active. In that case, they will all have their exposure set. In the UI, properties are almost exclusively addressed in this manner so that making changes affects the currently visible sources only. See figure 2.9 for a diagrammatic explanation.
Figure 2.9: Active nodes in the image processing graph. The active nodes are those which contribute to the rendered view at any given frame. In this configuration, when the sequence is active, there is only one source branch active (the yellow nodes). By addressing properties using their node's type name, you can affect only the active nodes of that type without needing to search for the exact node(s).
There is an additional shorthand using “@” in front of a type name:
@RVDisplayColor.color.brightness
The above would affect only the first RVDisplayColor node it finds, instead of all RVDisplayColor nodes at depth 1 as "#" does. This is useful in presentation mode, for example, because setting the brightness would be confined to the first RVDisplayColor node, which would be the one associated with the presentation device. If "#" were used, all devices would have their brightness modified. The utility of the "@" syntax is limited compared to "#", so if you are unsure of which to use, try "#" first.
Chapter 16 has all the details about each node type.

User Defined Properties

It's possible to add your own properties when creating an RV file from scratch or from the user interface code using the newProperty() function.
Why would you want to do this? There are a few reasons to add a user defined property:
  1. You wish to save something in a session file that was created interactively by the user.
  2. You're generating session files from outside RV and you want to include additional information (e.g. production tracking, annotations) which you'd like to have available when RV plays the session file.
Some of the packages that come with RV show how to implement functionality for the above.
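Here is a minimal sketch of adding and filling a user-defined property from Python. The component and property names are arbitrary examples, and the type constant and exact argument order of newProperty() are as recalled rather than authoritative, so verify them against the command API browser:

import rv.commands as rvc

# Pick some source node to hang the data on.
node = rvc.nodesOfType("RVFileSource")[0]

# Create a 1-wide string property under a custom component and set it.
prop = node + ".tracking.shotStatus"
rvc.newProperty(prop, rvc.StringType, 1)
rvc.setStringProperty(prop, ["approved"], True)

print rvc.getStringProperty(prop)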

Getting Information From Images

RV's UI often needs to take actions that depend on the context. Usually the context is the current image being displayed. Table 2.4 shows the most useful command functions for getting information about displayed images.
sourceAtPixel: Given a point in the view, returns a structure with information about the source(s) underneath the point.
sourcesRendered: Returns information about all sources rendered in the current view (even those that may have been culled).
sourceLayers: Given the name of a source, returns the layers in the source.
sourceGeometry: Given the name of a source, returns the geometry (bounding box) of that source.
sourceMedia: Given the name of a source, returns a list of its media files.
sourcePixelValue: Given the name of a source and a coordinate in the image, returns an RGBA pixel value at that coordinate. This function may convert chroma image pixels to Rec709 primary RGB in the process.
sourceAttributes: Given the name of a source and, optionally, the name of a particular media file in the source, returns an array of tuples which contain attribute names and values.
sourceStructure: Given the name of a source and, optionally, the name of a particular media file in the source, returns information about image size, bit depth, number of channels, underlying data type, and number of planes in the image.
sourceDisplayChannelNames: Given the name of a source, returns an array of the channel names currently being displayed.
Table 2.4: Command Functions for Querying Displayed Images
For example, when automating color management, the color space of the image or the origin of the image may be required to determine the best way to view it (e.g., for a certain kind of DPX file you might want to use a particular display or file LUT). The color space is often stored as an image attribute. In some cases image attributes are misleading: for example, a well-known 3D software package renders images with incorrect information about pixel aspect ratio. Usually other information in the image attributes, coupled with the file name and origin, is enough to make a good guess.
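As a sketch of that kind of attribute-driven logic in Python (the attribute name tested for here is purely hypothetical; real attribute names depend on the file format and should be inspected with sourceAttributes() first):

import rv.commands as rvc

def looksLikeLogDPX(sourceName):
    # Walk the attribute name/value pairs reported for the source's media
    # and guess at its transfer characteristic. "DPX/Transfer" is a
    # hypothetical attribute name used for illustration only.
    for name, value in rvc.sourceAttributes(sourceName):
        if name == "DPX/Transfer" and "log" in value.lower():
            return True
    return False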

Chapter 3 Writing a Custom GLSL Node

RV can use custom shaders to do GPU-accelerated image processing. Shaders are authored in a superset of the GLSL language, and these GLSL shaders become image processing nodes in RV's session. Note that nodes can be either “signed” or “unsigned”. As of RV6, nodes can be loaded by any product in the RV line (RV, RV-SDI, RVIO). The most basic workflow is to write a node definition file containing or referencing the GLSL source, test the node in RV, and then publish the definition on the RV_SUPPORT_PATH; each of these steps is described in the sections below.

Node Definition Files

Node definition files are GTO files which completely describe the operation of an image processing node in the image/audio processing graph.
A node definition appears as a single GTO object of type IPNodeDefinition. This makes it possible for a node definition to appear in a session file directly or in an external definition file which can contain a library of definition objects.
The meat of a node definition is source code for a kernel function written in an augmented version of GLSL which is described below.
The following example defines a node called "Gamma" which takes a single input image and applies a gamma correction:
GTOa (4)

Gamma : IPNodeDefinition (1)
{
    node
    {
        string evaluationType = "color"
        string defaultName = "gamma"
        string creator = "Tweak Software"
        string documentation = "Gamma" 
        int userVisible = 1
    }

    render
    {
        int intermediate = 0
    }

    function
    {
        string name = "main" # OPTIONAL
        string glsl = "vec4 main (const in inputImage in0, const in vec3 gamma) { return vec4(pow(in0().rgb, gamma), in0().a); }"
    }

    parameters
    {
        float[3] gamma = [ [ 0.4545 0.4545 0.4545 ] ]
    }
}

Fields in the IPNodeDefinition

node.evaluationType
one of:
color
one input, per-pixel operations only
filter
one input, multiple input pixels sampled to create one output pixel
transition
two inputs, an animated transition
merge
one or more inputs, typically per-pixel operation
combine
one input to node, many inputs to function, pulls views, layers, eyes, multiple frames from input
node.defaultName
the default name prefix for newly instantiated nodes
node.creator
documentation about definition author
node.documentation
Documentation string, possibly HTML. In practice this may be quite large.
node.userVisible
if non-0, a user can create this node directly; otherwise it can only be created programmatically
render.intermediate
if non-0 the node results are forced to be cached
function.name
the name of the entry point in source code. By default this is main
function.fetches
approximate number of fetches performed by function. This is meaningful for filters. E.g. a 3x3 blur filter does 9 fetches.
function.glsl
Source code for the function in the augmented GLSL language. Alternately this can be a file URL pointing to the location of a separate text file containing the source code. See below for more details on file URL handling.
parameters
Bindable parameters should be given default values in the parameters component. Special variables (e.g. input images, the current frame, etc.) need not be given default values.

3.2.1 The “combine” Evaluation Type

A “combine” node will evaluate its single input once for each parameter to the shader of type "inputImage".
The names of the inputImage parameters in the shader may be chosen to be meaningful to the shader writer; they are not meaningful to the evaluation of the combine node. The order of the inputImage parameters in the shader parameter list will correspond to the multiple evaluations of the node's input (see below).
Each time the input is evaluated, there are a number of variations that can be made in the context by way of properties specified in the node definition. To be clear, these properties are specified in the “parameters” section of the node definition, but they are “evaluation parameters” not shader parameters. These are:
eye
stereo eye, int, 0 for left, 1 for right
channel
color channel, string, eg "R" or "Z"
layer
named image layer, string, typically from EXR file
view
named image view, string, typically from EXR file
frame
absolute frame number, int
offset
frame number offset, int
A context-modifying property has three parts: the name (see above), an "inputImage index" (the integer appended to the name), and the value. The effect of the parameter is that the context of the evaluation of the input specified by the index will be modified by the value. So for example "int eye0 = 1" means that the "eye" parameter of the context used in the first evaluation of the input will be set to 1.
So for example, suppose a “StereoDifference” node has this definition:
StereoDifference : IPNodeDefinition (1) 
{
    node
    {
        string evaluationType = "combine"
    }
    function
    {
        string glsl = "file://${HERE}/StereoQC.glsl"
    }
    parameters
    {
        int eye0 = 0
        int eye1 = 1
    }
} 
And this shader parameter list:
vec4 main (const in inputImage left, const in inputImage right)
Then the node's input will be evaluated twice: the first evaluation (bound to the left parameter) will have its context's eye set to 0, and the second (bound to right) will have eye set to 1.
As another example, here's a “FrameBlend” node:
FrameBlend : IPNodeDefinition (1)
{
    node
    {
        string evaluationType = "combine"
    }
    function
    {
        string glsl = "file://${HERE}/FrameBlend.glsl"
    }
    parameters
    {
        int offset0 = -2
        int offset1 = -1
        int offset2 = 0
        int offset3 = 1
        int offset4 = 2
    }
}
And this shader parameter list:
vec4 main (const in inputImage in0,
           const in inputImage in1,
           const in inputImage in2,
           const in inputImage in3,
           const in inputImage in4)
So the result is that the input to the FrameBlend node will be evaluated five times, and in each case the evaluation context will have a frame value equal to the incoming frame value plus the corresponding offset. Note that the shader doesn't know anything about this; from its point of view it has five input images.

Alternate File URL

Language source code can be either inlined for a self contained definition or can be a modified file URL which points to an external file. An example file URL might be:
file:///Users/foo/glsl/foo_shader_source.glsl
If the node definition reader sees a file URL it will also perform variable substitution from the environment and any special predefined variables. For example if the $HOME environment variable exists the following would be equivalent on a Mac:
file://${HOME}/glsl/foo_shader_source.glsl
There is currently one special variable defined called $HERE which has the value of the directory in which the definition file lives. So if for example the node definition file lives in the filesystem like so:
/Users/foo/nodes/my_nodes.gto
/Users/foo/nodes/glsl/node1_source_code.glsl
/Users/foo/nodes/glsl/node2_source_code.glsl
/Users/foo/nodes/glsl/node3_source_code.glsl
and it references the GLSL files mentioned above, then valid file URLs for the source files would look like this:
file://${HERE}/glsl/node1_source_code.glsl
file://${HERE}/glsl/node2_source_code.glsl
file://${HERE}/glsl/node3_source_code.glsl

Augmented GLSL Syntax

GLSL source code can contain any set of functions and global static data but may not contain any uniform block definitions. Uniform block values are managed by the underlying renderer.

3.4.1 The main() Function

The name of the function which serves as the entry point must be specified if it's not main().
The main() function must always return a vec4 indicating the computed color at the current pixel.
For each input to a node there should be a parameter of type inputImage. The parameters are applied in the order they appear. So the first node image input is assigned to the first inputImage parameter and so on.
There are four special parameters which are supplied by the renderer:
float frame
The current frame number (local to the node)
float fps
The current frame rate (local to the node)
float baseFrame
The current global frame number
float stereoEye
The current stereo eye (0=left,1=right,2=default)
Table 3.1:
Special Parameters to main() Function
Any additional parameters are searched for in the 'parameters' component of the node. When a node is instantiated, its parameters component will be populated with properties corresponding to the additional parameters of the main() function.
For example, the Gamma node defined above has the following main() function:
vec4 main (const in inputImage in0, const in vec3 gamma)
{
    vec4 P = in0();
    return vec4(pow(P.rgb, gamma), P.a);
}
In this case the node can only take a single input and will have a property called parameters.gamma of type float[3]. By changing the gamma property, the user can modify the behavior of the Gamma node.
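Once a Gamma node built from this definition exists in a session, its parameter can be driven from the UI scripting layer like any other property. A sketch in Python, using the type-addressing syntax from Chapter 2 (the node type name matches the example definition above):

import rv.commands as rvc

# Adjust the gamma of whatever Gamma node is currently active.
rvc.setFloatProperty("#Gamma.parameters.gamma", [0.6, 0.6, 0.6], True)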

3.4.2 The inputImage Type

A new type inputImage has been added to GLSL. This type represents the input images to the node. So a node with one image argument must take a single inputImage argument. Likewise, a two input node should take two such arguments.
There are a number of operations that can be performed on an inputImage object. For the following examples the parameter will be called i.
i()
Returns the current pixel value as a vec4. Functions which only call this operator on inputImage parameters can be of type "color"
i(vec2 OFF)
If P is the current pixel location this returns the pixel at OFF + P
i.size()
Returns a vec2 (width,height) indicating the size of the input image
i.st
Returns the absolute current pixel coordinates ([0,width], [0,height]) with swizzling
Table 3.2:
Type inputImage Operations
Use of the inputImage type as a function argument is limited to the main() function. Calls to an inputImage object should be minimized where possible; e.g., the result should be stored in a local variable and the local variable used thereafter. For example:
vec4 P = i();
return vec4(P.rgb * 0.5, P.a);
NOTE: The st value returned by an inputImage ranges from 0 to the width in X and 0 to the height in Y. So, for example, the first pixel in the image is located at (0.5, 0.5), not at (0, 0). Similarly, the last pixel in the image is located at (width-0.5, height-0.5), not (width-1, height-1) as might be expected. See ARB_texture_rectangle for information on why this is. In GLSL 1.5 and greater the rectangle coordinates are built into the language.

3.4.3 The outputImage Type

The type outputImage has also been added. This type provides information about the output framebuffer.
The main() function may have a single outputImage parameter. You cannot pass an outputImage to auxiliary functions, nor can an auxiliary function have an outputImage parameter. You can, however, pass the results of operations on the outputImage object to other functions.
outputImage has the following operations:
w.st
Returns the absolute fragment coordinate with swizzling
w.size()
Returns the size of the output framebuffer as a vec2
Table 3.3:
Type outputImage Operations

3.4.4 Use of Samplers

Samplers can be used as inputs to node functions. The sampler name and type must match an existing parameter property on the node. So for example a 1D sampler would correspond to a 1D property the value of which is a scalar array. A 3D sampler would have a type like float[3,32,32,32] if it were an RGB 32^3 LUT.
sampler1D
type[D,X]
sampler2D
type[D,X,Y]
sampler2DRect
type[D,X,Y]
sampler3D
type[D,X,Y,Z]
Table 3.4:
Sampler to Parameter Type Correspondences
In the above table, D would normally be 1, 3, or 4 for scalar, RGB, or RGBA data. A value of 2 is possible but unusual.
Use the new style texture() call instead of the non-overloaded pre GLSL 1.30 function calls like texture3D() or texture2DRect(). This should be the case even when the driver only supports 1.20.

Testing the Node Definition

Once you have a NodeDefinition GTO file that contains or references your shader code as described above, you can test the node as follows:
  1. Add the node definition file to the Nodes directory on your RV_SUPPORT_PATH. For example, on Linux, you can put it in $HOME/.rv/Nodes. If the GLSL code is in a separate file, it should be in the location specified by the URL in the node definition file. You can use the ${HERE}/myshader.glsl notation (described above) to indicate that the GLSL is to be found in the same directory.
  2. Start RV and from the Session Manager add a node with the “plus” button or the right-click menu (“New Viewable”) by choosing “Add Node by Type” and entering the type name of the new node (“Gamma” in the above example).
  3. At this point you might want to save a Session File for easy testing.
  4. You can now iterate by changing your shader code or the parameter values in the Session File and re-running RV to test.
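As an alternative to adding the node through the Session Manager in step 2, you can create it programmatically from the scripting layer once the definition is on the support path. This is only a sketch; depending on the node's evaluation type and your session, it may need different wiring ("Gamma" is the example type defined earlier in this chapter):

import rv.commands as rvc

# Instantiate the custom node, wire it to the first source group, and view it.
node = rvc.newNode("Gamma", "myGamma")
rvc.setNodeInputs(node, [rvc.nodesOfType("RVSourceGroup")[0]])
rvc.setViewNode(node)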

Publishing the Node Definition

When you have tested sufficiently in RV and would like to make the new Node Definition available to other users running RV, RVSDI, RVIO, etc, you need to:
Make the Node Definition available to users. RV will pick up node definition files from any Nodes sub-directory along the RV_SUPPORT_PATH, so your definitions can be distributed by simply inserting them into those directories, or by including them in an RV Package (any GTO/GLSL files in an RV Package will be added to the appropriate Nodes sub-directory when the Package is installed). With some new node types you may want to distribute Python or Mu code to help the user manage the creation and parameter editing of the new nodes, so wrapping all of that up in an RV Package would be appropriate in those cases.

Chapter 4 Python

As of RV 3.12 you can use Python in RV in conjunction with Mu or in place of it. It's even possible to call Python commands from Mu and vice versa. So, in answer to the question of which language you should use to customize RV: whichever you like. At this point we recommend using Python.
There are some slight differences that need to be noted when translating code between the two languages:
In Python, the module names required by RV are the same as in Mu. As of this writing, these are commands, extra_commands, rvtypes, and rvui. However, the Python modules all live in the rv package. So while in Mu you can write:
use commands
or
require commands
to make the commands visible in the current namespace, in Python you need to include the package name:
from rv.commands import *
or
import rv.commands
Pythonistas will know all the permutations of the above.

Calling Mu From Python

It's possible to call Mu code from Python, but in practice you will probably not need to do this unless you need to interface with existing packages written in Mu.
To call a Mu function from Python, you need to import the MuSymbol type from the pymu module. In this example, the play function is imported and called F on the Python side. F is then executed:
from pymu import MuSymbol
F = MuSymbol("commands.play")
F()
If the Mu function has arguments, you supply them when calling. Return values are automatically converted between languages. The conversions are indicated in Table 4.1.
from pymu import MuSymbol
F = MuSymbol("commands.isPlaying")
G = MuSymbol("commands.setWindowTitle")
if F() == True:
    G("PLAYING")
Once a MuSymbol object has been created, the overhead to call it is minimal. All of the Mu commands module is imported on start up or reimplemented as native CPython in the Python rv.commands module so you will not need to create MuSymbol objects yourself; just import rv.commands and use the pre-existing ones.
When a Mu function parameter takes a class instance, a Python dictionary can be passed in. When a Mu function returns a class, a dictionary will be returned. Python dictionaries should have string keys which have the same names as the Mu class fields and corresponding values of the correct types.
For example, the Mu class Foo { int a; float b; } as instantiated as Foo(1, 2.0) will be converted to the Python dictionary {'a' : 1, 'b' : 2.0} and vice versa.
Existing Mu code can be leveraged with the rv.runtime.eval call to evaluate arbitrary Mu from Python. The second argument to the eval function is a list of Mu modules required for the code to execute and the result of the evaluation will be returned as a string. For example, here's a function that could be a render method on a mode; it uses the Mu gltext module to draw the name of each visible source on the image:
def myRender (event) :
    event.reject()

    for s in rv.commands.renderedImages() :
        if (rv.commands.nodeType(rv.commands.nodeGroup(s["node"])) != "RVSourceGroup") :
            continue
        geom    = rv.commands.imageGeometry(s["name"])

        if (len(geom) == 0) :
            continue

        x       = geom[0][0]
        y       = (geom[0][1] + geom[2][1]) / 2.0         
        domain  = event.domain()
        w       = domain[0]
        h       = domain[1]

        drawCode = """
        {
            rvui.setupProjection (%d, %d);
            gltext.color (rvtypes.Color(1.0,1.0,1.0,1));
            gltext.size(14);
            gltext.writeAt(%f, %f, extra_commands.uiName("%s"));
        }
        """
        rv.runtime.eval(drawCode % (w, h, float(x), float(y), s["node"]), ["rvui", "rvtypes", "extra_commands"])
NOTE: Python code in RV 4 can assume that default parameters in Mu functions will be supplied if needed. Prior to RV 4 all parameters had to be specified even when the parameter had a default value.

Calling Python From Mu

There are two ways to call Python from Mu code: a Python function being used as a call back function from Mu or via the "python" Mu module.
In order to use a Python callable object as a callback from Mu code, simply pass the callable object to the Mu function. The callback function's arguments will be converted according to the Mu-to-Python value conversion rules shown in Table 4.1. There are restrictions on which callable objects can be used; only callable objects which return values of None, Float, Int, String, Unicode, Bool, or which have no return value, are currently allowed. Callable objects which return unsupported values will cause a Mu exception to be thrown after the callable returns.
The Mu "python" module implements a small subset of the CPython API. You can see documentation for this module in the Mu Command API Browser under the Help menu. Here is an example of how you would call os.path.join from Python in Mu.
require python;

let pyModule = python.PyImport_Import ("os");

python.PyObject pyMethod = python.PyObject_GetAttr (pyModule, "path");
python.PyObject pyMethod2 = python.PyObject_GetAttr (pyMethod, "join");

string result = to_string(python.PyObject_CallObject (pyMethod2, ("root","directory","subdirectory","file")));

print("result: %s\n" % result); // Prints "result: root/directory/subdirectory/file"
If the method you want to call takes no arguments like os.getcwd, then you will want to call it in the following manner.
require python;

let pyModule = python.PyImport_Import ("os");

python.PyObject pyMethod = python.PyObject_GetAttr (pyModule, "getcwd");

string result = to_string(python.PyObject_CallObject (pyMethod, PyTuple_New(0)));

print("result: %s\n" % result); // Prints "result: /var/tmp"
If you are interested in retrieving an attribute alone, here is an example of how you would access Python's sys.platform from Mu.
require python;

let pyModule = python.PyImport_Import ("sys");

python.PyObject pyAttr = python.PyObject_GetAttr (pyModule, "platform");

string result = to_string(pyAttr);

print("result: %s\n" % result); // Prints "result: darwin"

Python Mu Type Conversions

Each entry below shows the Python type, the Mu type it converts to, the Python type the Mu value converts back to, and any notes.

Str or Unicode -> Mu string -> Unicode string. Normal byte strings and unicode strings are both converted to Mu's unicode string; Mu strings always convert to unicode Python strings.
Int -> Mu int, short, or byte -> Int.
Long -> Mu int64 -> Long.
Float -> Mu float, half, or double -> Float. Mu double values may lose precision. Python float values may lose precision if passed to a Mu function that takes a half.
Bool -> Mu bool -> Bool.
(Float, Float) -> Mu vector float[2] -> (Float, Float). Vectors are represented as tuples in Python.
(Float, Float, Float) -> Mu vector float[3] -> (Float, Float, Float).
(Float, Float, Float, Float) -> Mu vector float[4] -> (Float, Float, Float, Float).
Event -> Mu Event -> Event.
MuSymbol -> Mu runtime.symbol -> MuSymbol.
Tuple -> Mu tuple -> Tuple. Tuple elements each convert independently. NOTE: two- to four-element Float tuples will convert to vector float[N] in Mu; currently there is no way to force conversion of these Float-only tuples to Mu float tuples.
List -> Mu type[] or type[N] -> List. Arrays (lists) convert back and forth.
Dictionary -> Mu class -> Dictionary. Class labels become dictionary keys.
Callable object -> Mu function object (not applicable in reverse). Callable objects may be passed to Mu functions where a Mu function type is expected; this allows Python functions to be used as Mu callback functions.
Table 4.1: Mu-Python Value Conversion

PyQt versus PySide

RV 6 uses Qt 4.8. This version of Qt is supported by both the PySide and PyQt modules. However, from RV 6.x.4 onwards, RV ships with PySide for all platforms (OSX, Linux, Windows).
Below is a simple PySide example using RV's py-interp.
#!/Applications/RV64.app/Contents/MacOS/py-interp

# Import PySide classes
import sys
from PySide.QtCore import *
from PySide.QtGui import *

# Create a Qt application.
# IMPORTANT: RV's py-interp contains an instance of QApplication;
# so always check if an instance already exists.
app = QApplication.instance()
if app == None:     
	app = QApplication(sys.argv)

# Display the file path of the app.
print app.applicationFilePath()

# Create a Label and show it.
label = QLabel("Using RV's PySide")
label.show()

# Enter Qt application main loop.
app.exec_()

sys.exit()
To access RV's essential session window Qt QWidgets, i.e. the main window, the GL view, top tool bar and bottom tool bar, import the python module 'rv.qtutils'.
import rv.qtutils

# Gets the current RV session window as a PySide QMainWindow.
rvSessionWindow = rv.qtutils.sessionWindow()

# Gets the current RV session GL view as a PySide QGLWidget.
rvSessionGLView = rv.qtutils.sessionGLView()

# Gets the current RV session top tool bar as a PySide QToolBar.
rvSessionTopToolBar = rv.qtutils.sessionTopToolBar()

# Gets the current RV session bottom tool bar as a PySide QToolBar.
rvSessionBottomToolBar = rv.qtutils.sessionBottomToolBar()

Shotgun Toolkit in RV

As of RV version 7.0, the standard Shotgun integration (known as “SG Review”) is supplied by Shotgun Toolkit code that is distributed with RV. In future releases, this will allow Toolkit apps to be versioned independently from RV and for the RV Toolkit engine to host user-developed apps.
Some details about the Shotgun Toolkit usage in RV:

Chapter 5 Event Handling

Aside from rendering, the most important function of the UI is to handle events. An event can be triggered by any of the following:
Each specific event has a name and may also have extra data associated with it in the form of an event object. To see the name of an event (at least for keyboard and mouse pointer events) you can select Describe... from the Help menu, which will let you interactively see the event name as you hit keys or move the mouse. You can also use Describe Key... from the Help menu to see what a specific key is bound to by pressing it.
Table 5.1 shows the basic event type prefixes.
Event Prefix
Description
key-down
Key is being pressed on the keyboard
key-up
Key is being released on the keyboard
pointer
The mouse moved, button was pressed, or the pointer entered (or left) the window
dragdrop
Something was dragged onto the window (file icon, etc)
render
The window needs updating
user
The user's state changed (active or inactive, etc)
remote
A network event
Table 5.1:
Event Prefixes for Basic Device Events
When an event is generated in RV, the application will look for a matching event name in its bindings. The bindings are tables of functions which are assigned to certain event names. The tables form a stack which can be pushed and popped. Once a matching binding is found, RV will execute the function.
When receiving an event, all of the relevant information is in the Event object. This object has a number of methods which return information depending on the kind of event.
Each entry below shows the method with its Mu return type, the events it applies to in brackets, and a description.

pointer (Vec2;) [pointer-*, dragdrop-*]: Returns the location of the pointer relative to the view.
relativePointer (Vec2;) [pointer-*, dragdrop-*]: Returns the location of the pointer relative to the current widget, or to the view if there is none.
reference (Vec2;) [pointer-*, dragdrop-*]: Returns the location of the initial button press at the start of dragging.
domain (Vec2;) [pointer-*, render-*, dragdrop-*]: Returns the size of the view.
subDomain (Vec2;) [pointer-*, render-*, dragdrop-*]: Returns the size of the current widget if there is one. relativePointer() is positioned in the subDomain().
buttons (int;) [pointer-*, dragdrop-*]: Returns an int or'd from the symbols Button1, Button2, and Button3.
modifiers (int;) [pointer-*, key-*, dragdrop-*]: Returns an int or'd from the symbols None, Shift, Control, Alt, Meta, Super, CapLock, NumLock, ScrollLock.
key (int;) [key-*]: Returns the “keysym” value for the key as an int.
name (string;) [any]: Returns the name of the event.
contents (string;) [internal events, dragdrop-*]: Returns the string content of the event if it has any. This is normally the case with internal events like new-source, new-session, etc. Pointer, key, and other device events do not have a contents() and will throw if it is called on them. Drag and drop events return the data associated with them. Some render events have contents() indicating the type of render occurring.
contentsArray (string[];) [internal events]: Same as contents(), but for some internal events ancillary information may be present which can be used to avoid calling additional commands.
sender (string;) [any]: Returns the name of the sender.
contentType (int;) [dragdrop-*]: Returns an int describing the contents() of a drag and drop event. One of: UnknownObject, BadObject, FileObject, URLObject, TextObject.
timeStamp (float;) [any]: Returns a float value in seconds indicating when the event occurred.
reject (void;) [any]: Calling this function will cause the event to be sent to the next binding found in the event table stack. Not calling this function stops the propagation of the event.
setReturnContents (void; string) [internal events]: Events which have contents may also have return content. This is used by the remote network events, which can have a response.
Table 5.2: Event Object Methods. Python methods have the same names and return the same value types.

Binding an Event

In Mu (or Python) you can bind an event using any of the bind() functions. The most basic version of bind() takes as arguments the name of the event and a function to call when the event occurs. The function argument (which is called when the event occurs) should take an Event object as an argument and return nothing (void). Here's a function that prints "Hello!" in the console every time the "j" key is pressed. (If this is the first time you've seen Mu function syntax: the first two characters \: indicate a function definition follows, then the name. The arguments and return type are contained in the parentheses; the first identifier is the return type, followed by a semicolon, followed by the argument list. E.g., \: add (int; int a, int b) { return a + b; })
\: my_event_function (void; Event event)
{
    print("Hello!\n");
}

bind("key-down--j", my_event_function);
or in Python:
def my_event_function (event):
    print "Hello!"

bind("default", "global", "key-down--j", my_event_function);
There are more complicated bind() functions for binding functions in specific event tables (the Python example above is using the most general of these). Currently RV's user interface has one default global event table and a couple of other tables which implement the parameter edit mode and the help modes.
Many events provide additional information in the event object. Our example above doesn't even use the event object, but we can change it to print out the key that was pressed by changing the function like so:
\: my_event_function (void; Event event)
{
    let c = char(event.key());
    print("Key pressed = %c\n" % c);
}
or in Python:
def my_event_function (event):
    c = event.key()
    print "Key pressed = %s\n" % c
In this case, the Event object's key() function is being called to retrieve the key pressed. To use the return value as a key it must be cast to a char. In Mu, the char type holds a single unicode character. In Python a string is used.
See the section on the Event class to find out how to retrieve information from it. At this point we have not talked about where you would bind an event; that will be addressed in the customization sections.

Keyboard Events

There are two keyboard events: key-down and key-up. Normally the key-down events are bound to functions. The key-up events are necessary only in special cases.
The specific form for key-down events is key-down--something, where something uniquely identifies both the key pressed and any modifiers that were active at the time.
So if the "a" key was pressed the event would be called key-down--a. If the control key were held down while hitting the "a" key, the event would be called key-down--control--a.
There are seven modifiers that may appear in the event name: alt, caplock, control, meta, numlock, scrolllock, and shift, in that order. The shift modifier is a bit different from the others. If a key is pressed with the shift modifier down and it would result in a different character being generated, then the shift modifier will not appear in the event and instead the resulting key will. This may sound complicated, but these examples should explain it:
For control + shift + A the event name would be key-down--control--A. For the "*" key (shift + 8 on American keyboards) the event would be key-down--*. Notice that the shift modifier does not appear in any of these. However, if you hold down shift and hit enter on most keyboards you will get key-down--shift--enter, since there is no character associated with that key sequence.
Some keys may have a special name (like enter above). These will typically be spelled out. For example, pressing the "home" key on most keyboards will result in the event key-down--home. The only way to make sure you have the correct event name for a key is to start RV and use the Help menu's Describe... facility to see the true name. Sometimes keyboards will label a key and produce an unexpected event. There will be some keyboards which will not produce an event at all for some keys, or which will produce a unicode character sequence (which you can see via the help mechanism).
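A short Python sketch of binding a key-plus-modifier event, using the naming rules above (the callback simply reports what happened and is purely illustrative):

from rv.commands import bind

def on_control_a(event):
    print "control-a pressed; event name =", event.name()

# Bind control + "a" in the default global event table,
# matching the bind() call in the earlier example.
bind("default", "global", "key-down--control--a", on_control_a)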

Pointer (Mouse) Events

The mouse (called pointer from here on) can produce events when it is moved, one of its buttons is pressed, an attached scroll wheel is rotated, or the pointer enters or leaves the window.
The basic pointer events are move, enter, leave, wheelup, wheeldown, push, drag, and release. All but enter and leave will also indicate any keyboard modifiers that are being pressed, along with any buttons on the mouse that are being held down. The buttons are numbered 1 through 5. For example, if you hold down the left mouse button and move the mouse, the events generated are:
pointer-1--push
pointer-1--drag
pointer-1--drag
...
pointer-1--release
Pointer events involving buttons and modifiers always come in three parts: push, drag, and release. So, for example, if you press the left mouse button, move the mouse, press the shift key, move the mouse, and then release everything, you get:
pointer-1--push
pointer-1--drag
pointer-1--drag
...
pointer-1--release
pointer-1--shift--push
pointer-1--shift--drag
pointer-1--shift--drag
...
pointer-1--shift--release
Notice how the first group without the shift is released before starting the second group with the shift even though you never released the mouse button. For any combination of buttons and modifiers, there will be a push-drag-release sequence that is cleanly terminated.
It is also possible to hold multiple mouse buttons and modifiers down at the same time. When multiple buttons are held (for example, buttons 1 and 2) they are simply both included (like the modifiers), so for buttons 1 and 2 the name would be pointer-1-2--push to start the sequence.
The mouse wheel behaves more like a button: when the wheel moves you get only a wheelup or wheeldown event indicating which direction the wheel was rotated. Any buttons and modifiers that are held down will be included in the event name. Usually the motion of the wheel will not be smooth and the event will be emitted whenever the wheel "clicks". However, this is completely a function of the hardware, so you may need to experiment with any particular mouse.
There are three more pointer events that can be generated. When the mouse moves with no modifiers or buttons held down it will generate the event pointer--move. When the pointer enters the view pointer--enter is generated, and when it leaves, pointer--leave. Something to keep in mind: when the pointer leaves the view and the device is no longer in focus on the RV window, any modifiers or buttons the user presses will not be known to RV and will not generate events. When the pointer returns to the view it may have modifiers that became active while out of focus. Since RV cannot know about these modifiers and track them in a consistent manner (at least on X Windows), RV will assume they do not exist.
Pointer events have additional information associated with them like the coordinates of the pointer or where a push was made. These will be discussed later.
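As a sketch of using that information from Python, the following hypothetical binding reports where a left-button press happened; it assumes the event object's pointer() method returns the pointer position in the view:
from rv import commands

def report_push (event):
    # pointer() is assumed to return the (x, y) position of the event in the view
    x, y = event.pointer()
    print "pointer-1--push at %s, %s" % (x, y)
    event.reject()   # pass the event on so existing bindings still see it

commands.bind("default", "global", "pointer-1--push", report_push, "Report press position")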

The Render Event

The UI will get a render event whenever it needs to be updated. When handling the render event, a GL context is set up and you can call any GL function to draw to the screen. The event supplies additional information about the view so you can set up a projection.
At the time the render event occurs, RV has already rendered whatever images need to be displayed. The UI is then called in order to add additional visual objects like an on-screen widget or annotation.
Here's a render function that draws a red polygon in the middle of the view right on top of your image.
Listing 5.1:
Example Render Function
\: my_render (void; Event event)
{
    let domain = event.domain(),
        w      = domain.x,
        h      = domain.y,
        margin = 100;

    use gl;
    use glu;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, w, 0, h);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity(); 

    // Big red polygon
    glColor(Color(1,0,0,1));
    glBegin(GL_POLYGON);
    glVertex(margin, margin);
    glVertex(w-margin, margin);
    glVertex(w-margin, h-margin);
    glVertex(margin, h-margin);
    glEnd();
}
Note that for Python, you will need to use the PyOpenGL module or bind the symbols in the gl Mu module manually in order to draw in the render event.
The UI code already has a function called render() bound to the render event, so binding this function would disable the existing UI rendering.
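For reference, here is a rough Python version of the same idea using PyOpenGL. It assumes event.domain() returns the view width and height as a pair and, like the Mu version, binding it would disable the existing UI rendering:
from OpenGL.GL import *
from rv import commands

def my_render (event):
    w, h = event.domain()
    margin = 100

    # orthographic projection matching the view size
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0.0, w, 0.0, h, -1.0, 1.0)

    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()

    # big red polygon
    glColor4f(1.0, 0.0, 0.0, 1.0)
    glBegin(GL_POLYGON)
    glVertex2f(margin, margin)
    glVertex2f(w - margin, margin)
    glVertex2f(w - margin, h - margin)
    glVertex2f(margin, h - margin)
    glEnd()

commands.bind("default", "global", "render", my_render, "Draw a red polygon")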

Remote Networking Events

RV's networking generates a number of events indicating the status of the network. In addition, once a connection has been established, the UI may generate events which are sent to remote programs, or remote programs may send events to RV. These are typically uniquely named events which are specific to the application that is generating and receiving them.
For example the sync mechanism generates a number of events which are all named remote-sync-something.

Internal Events

Some events will originate from RV itself. These include things like new-source or new-session which include information about what changed. The most useful of these is new-source which can be used to manage color and other image settings between the time a file is loaded and the time it is first displayed. (See Color Management Section). Other internal events are functional, but are placeholders which will become useful with future features.
The current internal events are listed in table 5.3.
Each entry below lists the event name, the event's data/contents in parentheses (available via the Event data() or contents() methods; ancillary data available via contentsArray is noted where it exists), and a description.

render: Main view render
pre-render: Before rendering
post-render: After rendering
per-render-event-processing: Qt event processing between renders (a “safe” time to edit the graph)
layout: Main view layout used to handle view margin changes
new-source (nodename;;RVSource;;filename): DEPRECATED. A new source node was added (or media was reset)
source-group-complete (group nodename;;action_type): A new or modified source group is complete
source-modified (nodename;;RVSource;;filename): An existing source was changed
media-relocated (nodename;;oldmedia;;newmedia): A movie, image sequence, or audio file was swapped out
source-media-set (nodename;;tag)
before-session-read (filename): Session file is about to be read
after-session-read (filename): Session file was read
before-session-write (filename): Session file is about to be written
after-session-write (filename): Session file was just written
before-session-write-copy (filename): A copy of the session is about to be written
after-session-write-copy (filename): A copy of the session was just written
before-session-deletion: The session is about to be deleted
before-graph-view-change (nodename): The current view node is about to change
after-graph-view-change (nodename): The current view node changed
new-node (nodename): A new view node was created
graph-new-node (nodename; contentsArray: nodename protocol version groupname): A new node of any kind was created
before-progressive-loading: Loading will start
after-progressive-loading: Loading is complete (sent immediately if no files will be loaded)
graph-layer-change: DEPRECATED. Use after-graph-view-change
frame-changed: The current frame changed
fps-changed: Playback FPS changed
play-start: Playback started
play-stop: Playback stopped
incoming-source-path (infilename;;tag): A file was selected by the user for loading
missing-image: An image could not be loaded for rendering
cache-mode-changed (buffer, region, or off): Caching mode changed
view-size-changed: The viewing area size changed
new-in-point (frame): The in point changed
new-out-point (frame): The out point changed
before-source-delete (nodename): Source node will be deleted
after-source-delete (nodename): Source node was deleted
before-node-delete (nodename): View node will be deleted
after-node-delete (nodename): View node was deleted
after-clear-session: The session was just cleared
after-preferences-write: Preferences file was written by the Preferences GUI
state-initialized: Mu/Python init files read
session-initialized: All modes toggled, command line processed, etc.
realtime-play-mode: Playback mode changed to realtime
play-all-frames-mode: Playback mode changed to play-all-frames
before-play-start: Play mode will start
mark-frame (frame): Frame was marked
unmark-frame (frame): Frame was unmarked
pixel-block (Event.data()): A block of pixels was received from a remote connection
graph-state-change: A property in the image processing graph changed
graph-node-inputs-changed (nodename): Inputs of a top-level node added/removed/re-ordered
range-changed: The time range changed
narrowed-range-changed: The narrowed time range changed
margins-changed (left right top bottom): View margins changed
view-resized (old-w new-w | old-h new-h): Main view changed size
preferences-show: Pref dialog will be shown
preferences-hide: Pref dialog was hidden
read-cdl-complete (cdl_filename;;cdl_nodename): CDL file has been loaded
read-lut-complete (lut_filename;;lut_nodename): LUT file has been loaded
remote-eval (code): Request to evaluate external Mu code
remote-pyeval (code): Request to evaluate external Python code
remote-pyexec (code): Request to execute external Python code
remote-network-start: Remote networking started
remote-network-stop: Remote networking stopped
remote-connection-start (contact-name): A new remote connection has been made
remote-connection-stop (contact-name): A remote connection has died
remote-contact-error (contact-name): A remote connection error occurred while being established
Table 5.3:
Internal Events
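Many of the contents listed above are ";;"-separated strings. As a sketch of consuming one of them, the following hypothetical Python binding splits the source-group-complete contents (listed above as "group nodename;;action_type"):
from rv import commands

def source_group_changed (event):
    # contents() returns "group nodename;;action_type" for this event
    group, action = event.contents().split(";;")
    print "source group %s (%s)" % (group, action)
    event.reject()   # let the rest of the UI see the event as usual

commands.bind("default", "global", "source-group-complete", source_group_changed,
              "Log source group changes")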

5.6.1 File Changed Event

It is possible to watch a file from the UI. If the watched file changes in any way (modified, deleted, moved, etc) a file-changed event will be generated. The event object will contain the name of the watched file that changed. A function bound to file-changed might look something like this:
\: my_file_changed (void; Event event)
{
    let file = event.contents();
    print("%s changed on disk\n" % file);
}
In order to have a file-changed event generated, you must first have called the command function watchFile().
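A minimal Python sketch, with a placeholder path; watchFile() is documented in the command API reference, and the second argument (start watching) shown here is an assumption, so check the exact signature in your version:
from rv import commands

def my_file_changed (event):
    print "%s changed on disk" % event.contents()

commands.bind("default", "global", "file-changed", my_file_changed, "React to watched files")

# "/path/to/watched/file.exr" is a placeholder; the boolean "start watching" argument is an assumption
commands.watchFile("/path/to/watched/file.exr", True)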

5.6.2 Incoming Source Path Event

This event is sent when the user has selected a file or sequence to load from the UI or command line. The event contains the name of the file or sequence. A function bound to this event can change the file or sequence that RV actually loads by setting the return contents of the event. For example, you can cause RV to check and see if a single file is part of a larger sequence and if so load the whole sequence like so:
\: load_whole_sequence (void; Event event)
{
    let file        = event.contents(),
        (seq,frame) = sequenceOfFile(event.contents());

    if (seq != "") event.setReturnContent(seq); 
}

bind("incoming-source-path", load_whole_sequence);
or in Python:
def load_whole_sequence (event):

    file = event.contents();
    (seq,frame) = rv.commands.sequenceOfFile(event.contents());

    if seq != "":
         event.setReturnContent(seq); 


bind("default", "global", "incoming-source-path", load_whole_sequence, "Doc string");

5.6.3 Missing Images

Sometimes an image is not available on disk when RV tries to read it. This is often the case when looking at an image sequence while a render or composite is ongoing. By default, RV will find a nearby frame to represent the missing frame if possible. The missing-image event will be sent once for each image which was expected but not found. The function bound to this event can render information on the screen indicating that the original image was missing. The default binding displays a message in the feedback area.
The missing-image event contains the domain in which rendering can occur (the window width and height) as well as a string of the form "frame;source" which can be obtained by calling the contents() function on the event object.
The default binding looks like this:
\: missingImage (void; Event event)
{
    let contents = event.contents(),
        parts = contents.split(";"),
        media = io.path.basename(sourceMedia(parts[1])._0);

    displayFeedback("MISSING: frame %s of %s" 
                     % (parts[0], media), 1, drawXGlyph);
}

bind("missing-image", missingImage);

Chapter 6 RV File Format

The RV file format (.rv) is a text GTO file. GTO is an open source file format which stores arbitrary data, mostly for use in computer graphics applications. The text GTO format is meant to be simple and human readable. It's helpful to have familiarized yourself with the GTO documentation before reading this section. The documentation should come with RV, or you can read it online at the GTO web site.

How RV Uses GTO

RV defines a number of new GTO object protocols (types of objects). The GTO file is made up of objects, which contain components, which contain properties where the actual data resides. RV's use of the format is to store nodes in an image processing graph as GTO objects. How the nodes are connected is determined by RV and is not currently arbitrary so there are no connections between the objects stored in the file.
Some examples of RV object types include RVSession, RVFileSource, RVColor, RVLinearize, RVDisplayColor, RVSequenceGroup, RVStackGroup, RVLayoutGroup, and RVOverlay, all of which appear in the examples in this chapter.
Normally, RV will write out all objects to the session file, but it does not require all of them to create a session from scratch. For example, if you have a file with a single RVFileSource object in it, RV will use that and create default objects for everything else. So when creating a file without RV, it's not a bad idea to only include information that you need instead of replicating the output of RV itself. (This helps make your code future proof as well).
The order in which the objects appear in the file is not important. You can also include information that RV does not know about if you want to use the file for other programs as well.

Naming

The names of objects in the session are not visible to the user; however, they must follow certain naming conventions. There is a separate user interface name for top-level nodes which the user does see. The user name can be set by creating a string property on a group node called ui.name.

A Simple Example

The simplest RV file you can create is one which just causes RV to load a single movie file or image. This example loads a QuickTime file called “test.mov” from the directory RV was started in:
GTOa (3)

sourceGroup000000_source : RVFileSource (0)
{
    media
    {         
        string movie =  "test.mov"
    }
}
The first line is required for a text GTO file: it indicates the fact that the file is text format and that the GTO file version is 3. All of the other information (the frame ranges, etc) will be automatically generated when the file is read. By default RV will play the entire range of the movie just as if you dropped it into a blank RV session in the UI.
For this version of RV, you should name the first RVFileSource object sourceGroup000000_source and the second sourceGroup000001_source and the third sourceGroup000002_source, and so on. Eventually we'll want to make an EDL which will index the source objects so the names mostly matter (but not the order in which they appear).
Now suppose we have an image sequence instead of a movie file. We also have an associated audio file which needs to be played with it. This is a bit more complicated, but we still only need to make a single RVFileSource object. Here we've got test.#.dpx as an image layer and soundtrack.aiff as an audio layer:
GTOa (3)

sourceGroup000000_source : RVFileSource (0)
{
    media
    {
        string movie = [ "test.#.dpx" "soundtrack.aiff" ]
    }

    group
    {
        float fps =  24
        float volume =  0.5
        float audioOffset = 0.1
    } 
}
You can have any number of audio and image sequence/movie files in the movie list. All of them together create the output of the RVFileSource object. If we were creating a stereo source, we might have left.#.dpx and right.#.dpx instead of test.#.dpx. When there are multiple image layers the first two default to the left and right eyes in the order in which they appear. You can change this behavior per-source if necessary. The format of the various layers do not need to match.
The group component indicates how all of the media should be combined. In this case we've indicated the FPS of the image sequence, the volume of all audio for this source, and an audio slip of 0.1 (one tenth) of a second. Keep in mind that the FPS here is for the image sequence(s) in the source; it has nothing to do with the playback FPS! The playback FPS is independent of the input source's frame rate.

Aside: What is the FPS of an RVFileSource Object Anyway?

If you write out an RV file from RV itself, you'll notice that the group FPS is often 0! This is a special cookie value which indicates that the FPS should be taken from the media. Movie file formats like QuickTime or AVI store this information internally. So RV will use the frame rate from the media file as the FPS for the source.
However, image sequences typically do not include this information (OpenEXR files are a notable exception). When you start RV from the command line it will use the playback FPS as a default value for any sources created. If there is no playback FPS on startup, either via the command line or preferences, it will default to 24 fps. So it's not a bad idea to include the group FPS when creating an RV file yourself when you're using image sequences. If you're using a movie file format you should either use 0 for the FPS or not include it and let RV figure it out.
What happens when you get a mismatch between the source FPS and the playback FPS? If there's no audio, you won't notice anything; RV always plays back every frame in the source regardless of the source FPS. But if you have audio layers along with your image sequence or if the media is a movie file, you will notice that the audio is either compressed or expanded in order to maintain synchronization with the images.
This is a very important thing to understand about RV: it will always playback every image no matter what the playback FPS is set to; and it will always change the audio to compensate for that and maintain synchronization with the images.
So the source FPS is really important when there is audio associated with the images.

Per-Source and Display Color Settings and LUT Files

If you want to include per-source color information – such as forcing a particular LUT to be applied or converting log to linear – you can include only the additional nodes you need with only the parameters that you wish to pass in. For example, to apply a file LUT to the first source (e.g. sourceGroup000000_source) you can create an RVColor node similarly named sourceGroup000000_color.
sourceGroup000000_color : RVColor (1)
{
    lut
    {
        string file = "/path/to/LUTs/log2sRGB.csp"
        int active = 1
    }
}
This is a special case in the rv session file: you can refer to a LUT by file. Version 3.6 and earlier will not write a session file in this manner: a baked version of the LUT will be inlined directly in the session file.
If you have a new-source event bound to a function which modifies incoming color settings based on the image type, any node properties in your session file override the default values created there. To state it another way: values you omit in the session file still exist in RV and will take on whatever values the function bound to new-source made for them. To ensure that you get exactly the color you want you can specify all of the relevant color properties in the RVColor, RVLinearize, and RVDisplayColor nodes:
sourceGroup000000_colorPipeline_0 : RVColor (2)
{
    color
    {
        int invert = 0
        float[3] gamma = [ [ 1 1 1 ] ]
        string lut = "default"
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] scale = [ [ 1 1 1 ] ]
        float[3] exposure = [ [ 0 0 0 ] ]
        float[3] contrast = [ [ 0 0 0 ] ]
        float saturation = 1
        int normalize = 0
        float hue = 0
        int active = 1
    }

    CDL
    {
        float[3] slope = [ [ 1 1 1 ] ]
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] power = [ [ 1 1 1 ] ]
        float saturation = 1
        int noClamp = 0
    }

    luminanceLUT
    {
        float lut = [ ]
        float max = 1
        int size = 0
        string name = ""
        int active = 0
    }

    "luminanceLUT:output"
    {
        int size = 256
    }
}

sourceGroup000000_tolinPipeline_0 : RVLinearize (1)
{
    lut
    {
        float[16] inMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float[16] outMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float lut = [ ]
        float prelut = [ ]
        float scale = 1
        float offset = 0
        string type = "Luminance"
        string name = ""
        string file = ""
        int size = [ 0 0 0 ]
        int active = 0
    }

    color
    {
        string lut = "default"
        int alphaType = 0
        int logtype = 0
        int YUV = 0
        int invert = 0
        int sRGB2linear = 1
        int Rec709ToLinear = 0
        float fileGamma = 1
        int active = 1
        int ignoreChromaticities = 0
    }

    cineon
    {
        int whiteCodeValue = 0
        int blackCodeValue = 0
        int breakPointValue = 0
    }

    CDL
    {
        float[3] slope = [ [ 1 1 1 ] ]
        float[3] offset = [ [ 0 0 0 ] ]
        float[3] power = [ [ 1 1 1 ] ]
        float saturation = 1
        int noClamp = 0
    }
}

defaultOutputGroup_colorPipeline_0 : RVDisplayColor (1)
{
    lut
    {
        float[16] inMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float[16] outMatrix = [ [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] ]
        float lut = [ ]
        float prelut = [ ]
        float scale = 1
        float offset = 0
        string type = "Luminance"
        string name = ""
        string file = ""
        int size = [ 0 0 0 ]
        int active = 0
    }

    color
    {
        string lut = "default"
        string channelOrder = "RGBA"
        int channelFlood = 0
        int premult = 0
        float gamma = 1
        int sRGB = 0
        int Rec709 = 0
        float brightness = 0
        int outOfRange = 0
        int dither = 0
        int active = 1
    }

    chromaticities
    {
        int active = 0
        int adoptedNeutral = 0
        float[2] white = [ [ 0.3127 0.329 ] ]
        float[2] red = [ [ 0.64 0.33 ] ]
        float[2] green = [ [ 0.3 0.6 ] ]
        float[2] blue = [ [ 0.15 0.06 ] ]
        float[2] neutral = [ [ 0.3127 0.329 ] ]
    }
}
The above example values assume default color pipeline slots for a single source session. Please see section 12.3 to learn more about the specific color pipeline groups.

Information Global to the Session

Now let's add an RVSession object with in and out points. The session object should be called rv in this version. There should only be one RVSession object in the file. (From now on we're just going to show fragments of the file and assume that you can put them all together in your text editor.)
rv : RVSession (1)
{
    session
    {
        string viewNode = "defaultSequence"
        int marks = [ 1 20 50 80 100 ]
        int[2] range = [ [ 1 100 ] ]
        int[2] region = [ [ 20 50 ] ]
        float fps =  24
        int realtime =  1
        int currentFrame =  30
    }
}
Assuming this was added to the top of our previous file with the source in it, the session object now indicates the frame range (1-100) and an in and out region (20-50) which is currently active. Frames 1, 20, 50, 80, and 100 are marked and the default frame is frame 30 when RV starts up. The realtime property is a flag which indicates that RV should start playback in real time mode. The view node indicates what will be viewed in the session when the file is opened.
Note that it's usually a good idea to skip the frame range boundaries unless an EDL is also specified in the file (which is not the case here). RV will figure out the correct range information from the source media. If you force the range information to be different than the source media's you may get unexpected results.
Starting in version 3.10 the marks and range can also be stored on each viewable top-level object. For example the defaultLayout and defaultSequence can have different marks and in and out points:
defaultStack : RVStackGroup (1)
{
    session
    {
        float fps = 24
        int marks = [ ]
        int[2] region = [ [ 100 200 ] ]
        int frame = 1
    }
}
If a group has a session component then its contents can provide an in/out region, marks, playback fps, and a current frame. When the user views the group node these values will be inherited by the session.

The Graph

Internally, RV holds a single image processing graph per session which is represented in the session file. The graph can have multiple nodes which determine how the sources are combined. These are the top-level nodes and are always group nodes.
Versions prior to 3.10 did not store graph connectivity in the file because the user was not allowed to change it. In 3.10, the user can create new top-level nodes (like sequences, stacks, layouts, retimings, etc). So the inputs for each node need to be stored in order to reproduce what the user created.
The connections between the top-level group nodes are stored in the connections object. In addition, in 3.10.9, a list of the top level nodes is also included. For example, this is what RV will write out for a session with a single source in it:
connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000000" 
                       "sourceGroup000000" 
                       "sourceGroup000000" ]

        string rhs = [ "defaultLayout" 
                       "defaultSequence" 
                       "defaultStack" ]
    }

    top
    {
        string nodes = [ "sourceGroup000000", 
                         "defaultLayout",
                         "defaultStack", 
                         "defaultSequence" ]
    }
}
The connections should be interpreted as arrows between objects. The lhs (left hand side) is the base of the arrow. The rhs (right hand side) is the tip. The base and tips are stored in separate properties. So in this case, the file has three connections (RV may write out a connection to the display group as well; however, that connection is redundant and may be overridden by the value of the view node property in the RVSession):
  1. sourceGroup000000 → defaultLayout
  2. sourceGroup000000 → defaultSequence
  3. sourceGroup000000 → defaultStack
The nodes property, if it exists, will determine which nodes are considered top-level nodes. Otherwise, nodes which appear in the connections and nodes which have a user interface name are considered top level.

6.6.1 Default Views

There are three default views that are always created by RV: the default stack, sequence, and layout. Whenever a new source is added by the user, each of these will automatically connect the new source as an input. When a new viewing node is created (a new sequence, stack, layout, or retime) the default views will not add it; only sources are automatically added.
When writing a .rv file you can co-opt these views to rearrange or add inputs or generate a unique EDL, but it's probably a better idea to create a new one instead; RV will never automatically edit a sequence, stack, layout, etc., that is not one of the default views.

Creating a Session File for Custom Review

One of the major reasons to create session files outside of RV is to automatically generate custom review workflows. For example, if you want to look at an old version of a sequence and a new version, you might have your pipeline output a session file with both in the session and have pre-constructed stacked views with wipes and a side-by-side layout of the two sequences.
To start with, let's look at creating a session file which creates a unique sequence (not the default sequence) that plays back sources in a particular order. In this case, no EDL creation is necessary; we only need to supply the sequence with the source inputs in the correct order. This is analogous to the user reordering the inputs on a sequence in the user interface.
This file will have an RVSequenceGroup object as well as the sources. Creating sources is covered above so we'll skip to the creation of the RVSequenceGroup. For this example we'll assume there are three sources and that they all have the same FPS (so no retiming is necessary). We'll let RV handle creation of the underlying RVSequence and its EDL and only create the group:
// define sources ...

reviewSequence : RVSequenceGroup (1)
{
    ui { string name = "For Review" }
}

connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000002" 
                       "sourceGroup000000" 
                       "sourceGroup000001" ]

        string rhs = [ "reviewSequence"
                       "reviewSequence"
                       "reviewSequence" ]
    }
}
RV will automatically connect up the default views so we can skip their inputs in the connections object for clarity. In this case, the sequence is connected up so that by default it will play sourceGroup000002 followed by sourceGroup000000 followed by sourceGroup000001 because the default EDL of a sequence just plays back the inputs in order. Note that for basic ordering of playback, no EDL creation is necessary. We could also create additional sequence groups with other inputs. Also note the use of the UI name in the sequence group.
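Since these session files are often generated by pipeline code, here is a small Python sketch (plain text generation, not an RV API) that emits the evaluation connections for an ordered list of source groups feeding a single sequence; the group and sequence names are the hypothetical ones used above:
def connections_block (source_groups, sequence_name):
    # one lhs -> rhs pair per source group, all pointing at the sequence
    lhs = " ".join('"%s"' % s for s in source_groups)
    rhs = " ".join('"%s"' % sequence_name for _ in source_groups)
    return ("connections : connection (1)\n"
            "{\n"
            "    evaluation\n"
            "    {\n"
            "        string lhs = [ %s ]\n"
            "        string rhs = [ %s ]\n"
            "    }\n"
            "}\n") % (lhs, rhs)

# reproduces the ordering above: 000002, then 000000, then 000001
print connections_block(["sourceGroup000002", "sourceGroup000000", "sourceGroup000001"],
                        "reviewSequence")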
Of course, the above is not typical in a production environment. Usually there are handles which need to (possibly) be edited out. There are two ways to do this with RV: either set the cut points in each source and tell the sequence to use them, or create an EDL in the sequence which excludes the handles.
To start with we'll show the first method: set the cut points. This method is easy to implement and the sequence interface has a button on it that lets the user toggle the in/out cuts on/off in realtime. If the user reorders the sequence, the cuts will be maintained. When using this method any sequence in the session can be made to use the same cut information — it propagates down from the source to the sequence instead of being stored for each sequence.
Setting the cut in/out points requires adding a property to the RVFileSource objects and specifying the in and out frames:
sourceGroup000000_source : RVFileSource (1)
{
    media { string movie = "shot00.mov" }

    cut
    {
        int in = 8
        int out = 55
    }
}

sourceGroup000001_source : RVFileSource (1)
{
    media { string movie = "shot01.mov" }

    cut
    {
        int in = 5
        int out = 102
    }
}

sourceGroup000002_source : RVFileSource (1)
{
    media { string movie = "shot02.mov" }

    cut
    {
        int in = 3
        int out = 22
    }
}
Finally, the most flexible way to control playback is to create an EDL. The EDL is stored in an RVSequence node which is a member of the RVSequenceGroup. Whenever an RVSequenceGroup is created, it will create a sequence node to hold the EDL. If you are not changing the default values or behavior of the sequence node it's not necessary to specify it in the file. In this case, however, we will be creating a custom EDL.

6.7.1 RVSequence

The sequence node can be in one of two modes: auto EDL creation or manual EDL creation. This is controlled by the mode.autoEDL property. If the property is set to 1, the sequence automatically creates its EDL from its inputs, playing them back in order as described above.
When auto EDL is not on, the sequence node behavior is not well-defined when the inputs are changed. In the future, we'd like to provide more interface for EDL modification (editing), but for the moment a custom EDL should only be created programmatically in the session file.
For this next example, we'll use two movie files: a.mov and b.mov. They have audio so there's nothing interesting about their source definitions: just the media property with the name of the movie. (The example RV file has fewer line breaks than one which RV would write; however, it's still valid.) They are both 24 fps and the playback will be as well:
GTOa (3)

rv : RVSession (2) 
{
    session
    {
        string viewNode = "mySequence"
    }
}

sourceGroup000000_source : RVFileSource (0) { media { string movie =  "a.mov" } }
sourceGroup000001_source : RVFileSource (0) { media { string movie =  "b.mov" } }

connections : connection (1)
{
    evaluation
    {
        string lhs = [ "sourceGroup000000"
                       "sourceGroup000001" ]
        string rhs = [ "mySequence"
                       "mySequence" ]
    }
}

mySequence : RVSequenceGroup (0)
{
    ui
    {
        string name = "GUI Name of My Sequence"
    }
}

mySequence_sequence : RVSequence (0)
{
    edl
    {
        int frame  = [  1 11 21 31 41 ]
        int source = [  0  1  0  1  0 ]
        int in     = [  1  1 11 11  0 ]
        int out    = [ 10 10 20 20  0 ]
    }

    mode
    {
        int autoEDL = 0
    }
} 
The source property indexes the inputs to the sequence node. So index 0 refers to sourceGroup000000 and index 1 refers to sourceGroup000001. This EDL has four edits which are played sequentially as follows:
  1. a.mov, frames 1-10
  2. b.mov, frames 1-10
  3. a.mov, frames 11-20
  4. b.mov, frames 11-20
You can think of the properties in the sequence as forming a transposed matrix in which the properties are columns and edits are rows, as in Table 6.1. Note that there are only 4 edits even though there are 5 rows in the matrix. The last edit is really just a boundary condition: it indicates how RV should handle frames past the end of the EDL. To be well formed, an RV EDL needs to include this.
Note that the in frame and out frame may be equal to implement a “held” frame.
            global start frame    source    in    out
edit #1                      1    a.mov      1     10
edit #2                     11    b.mov      1     10
edit #3                     21    a.mov     11     20
edit #4                     31    b.mov     11     20
past end                    41    a.mov      0      0
Table 6.1:
EDL as Matrix
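The relationship between the columns is mechanical: each edit's global start frame is the previous start frame plus the length of the previous cut (out - in + 1). Here is a small Python sketch (plain bookkeeping, not an RV API) that builds the four property arrays from a list of cuts:
def build_edl (cuts, start = 1):
    # cuts is a list of (source_index, in_frame, out_frame) tuples
    frame, source, inp, out = [], [], [], []
    f = start
    for (s, i, o) in cuts:
        frame.append(f)
        source.append(s)
        inp.append(i)
        out.append(o)
        f += o - i + 1
    # trailing boundary edit required by RV (here pointing at source 0 with
    # in/out of 0, as in the example above)
    frame.append(f)
    source.append(0)
    inp.append(0)
    out.append(0)
    return frame, source, inp, out

# reproduces the EDL above: a.mov 1-10, b.mov 1-10, a.mov 11-20, b.mov 11-20
print build_edl([(0, 1, 10), (1, 1, 10), (0, 11, 20), (1, 11, 20)])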

6.7.2 RVLayoutGroup and RVStackGroup

The stack and layout groups can be made in a similar manner to the above. The important thing to remember is the inputs for all of these must be specified in the connections object of the file. Each of these view types uses the input ordering; in the case of the stack it determines what's on top and in the case of the layout it determines how automatic layout will be ordered.

6.7.3 RVOverlay

Burned-in metadata can be useful when creating session files. Shot status, artist, name, sequence, and other static information can be rendered on top of the source image directly by RV's renderer. Figure 6.1 shows an example of metadata rendered by the RVOverlay node.
Figure 6.1:
Metadata Rendered By RVOverlay Node From Session File
Each RVSourceGroup can have an RVOverlay node. The RVOverlay node is used for matte rendering by the user interface, but it can do much more than that. The RVOverlay node currently supports drawing arbitrary filled rectangles and text in addition to the mattes. The text and filled rectangles are currently limited to static shapes and text; in a future version we plan on expanding this to dynamically updated text (e.g. drawing the current frame number, etc).
Text and rectangles rendered in this fashion are considered part of the image by RV. If you pass a session file with an active RVOverlay node to rvio it will render the overlay the same way RV would. This is completely independent of any rvio overlay scripts which use a different mechanism to generate overlay drawings and text.
Figure 6.2 shows an example which draws three colored boxes with text starting at each box's origin.
image: 12_Users_arasiah_git_tweak-devel_rv-cxx98-relea___-python2_7_html_temp_images_overlay_example.png
Figure 6.2:
RVOverlay Node Example
The session file used to create the example contains a movieproc source (white 720x480 image) with the overlay rendered on top of it. Note that the coordinates are normalized screen coordinates relative to the source image:
GTOa (3)

sourceGroup1_source : RVFileSource (1)
{
    media
    {
        string movie = "solid,red=1.0,green=1.0,blue=1.0,start=1,end=1,width=720,height=480.movieproc"
    }
}

sourceGroup1_overlay : RVOverlay (1)
{
    overlay
    {
        int show = 1 
    }
    "rect:red"
    {
        float width = 0.3
        float height = 0.3
        float[4] color = [ [ 1.0 0.1 0.1 0.4 ] ]
        float[2] position = [ [ 0.1 0.1 ] ]
    }
    "rect:green"
    {
        float width = 0.6
        float height = 0.2
        float[4] color = [ [ 0.1 1.0 0.1 0.4 ] ]
        float[2] position = [ [ -0.2 -0.3 ] ]
    }
    "rect:blue"
    {
        float width = 0.2
        float height = 0.4
        float[4] color = [ [ 0.1 0.1 1.0 0.4 ] ]
        float[2] position = [ [ -0.5 -0.1 ] ]
    }
    "text:red"
    {
        float[2] position = [ [ 0.1 0.1 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "red"
        int debug = 0
    }
    "text:green"
    {
        float[2] position = [ [ -0.2 -0.3 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "green"
        int debug = 0
    }
    "text:blue"
    {
        float[2] position = [ [ -0.5 -0.1 ] ]
        float[4] color = [ [ 0 0 0 1 ] ]
        float spacing = 0.8
        float size = 0.005
        float scale = 1
        float rotation = 0
        string font = ""
        string text = "blue"
        int debug = 0
    }
}

Components in the RVOverlay which have names starting with “rect:” are used to render filled rectangles. Components starting with “text:” are used for text. The format is similar to that used by the RVPaint node, but the result is rendered for all frames of the source. The reference manual contains complete information about the RVOverlay node's properties and how they control rendering.

Limitations on Number of Open Files

RV does not impose any artificial limits on the number of sources which can be in an RV session file. However, some file formats, namely QuickTime .mov, .avi, and .mp4, require that the file remain open while RV is running.
Each operating system (and even shell on Unix systems) has different limits on the number of open files a process is allowed to have. For example on Linux the default is 1024 files. This means that you cannot open more than 1000 or so movie files without changing the default. RV checks the limit on startup and sets it to the maximum allowed by the system.
There are a number of operating system and shell dependent ways to change limits. Your facility may also have limits imposed by the IT department for accounting reasons.

What's the Best Way to Write a .rv (GTO) File?

GTO comes in three types: text (UTF8 or ASCII), binary, and compressed binary. RV can read all three types. RV normally writes text files unless an RVImageSource is present in the session (because an image was sent to it from another process instead of a file). In that case it will write a compressed binary GTO to save space on disk.
If you think you might want to generate binary files in addition to text files you can do so using the GTO API in C++ or Python. However, the text version is simple enough to write using only regular I/O APIs in any language. We recommend you write out .rv session files from RV and look at them in an editor to generate templates of the portions that are important to you. You can copy and paste parts of session files into source code as strings or even shell scripts as templates with variable substitution.
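As a deliberately minimal sketch of the template approach, the following Python writes a text session file for a single image sequence. The property names are the ones shown earlier in this chapter; the group fps follows the advice above for image sequences, and the file and media names are placeholders:
SESSION_TEMPLATE = """GTOa (3)

sourceGroup000000_source : RVFileSource (0)
{
    media
    {
        string movie = "%(movie)s"
    }

    group
    {
        float fps = %(fps)g
    }
}
"""

def write_session (path, movie, fps = 24.0):
    # substitute the media name and fps into the template and write it out as text
    with open(path, "w") as f:
        f.write(SESSION_TEMPLATE % {"movie": movie, "fps": fps})

write_session("review.rv", "test.#.dpx")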

Chapter 7 Using Qt in Mu

Since version 3.8 RV has had limited Qt bindings in Mu. In 3.10 the number of available Qt classes has been greatly expanded. You can browse the Qt and other Mu modules with the documentation browser. RV 6 wraps the Qt 4.8 API.
Using Qt in Mu is similar to using it in C++. Each Qt class is presented as a Mu class which you can either use directly or inherit from if need be. However, there are some major differences that need to be observed. (Prior to RV 6, it was necessary to supply all parameters to a Mu function in Python even when those parameters had default values. This is no longer the case in RV 6: Python code can assume that default parameter values will be supplied if not specified.)

Signals and Slots

Possibly the biggest difference between the Mu and C++ Qt API is how signals and slots are handled. This discussion will assume knowledge of the C++ mechanism. See the Qt documentation if you don't know what signals and slots are.
Jumping right in, here is an example hello world MuQt program. This can be run from the mu-interp binary:
use qt;

\: clicked (void; bool checked)
{
    print("OK BYE\n");
    QCoreApplication.exit(0);
}

\: main ()
{
    let app    = QApplication(string[] {"hello.mu"}),
        window = QWidget(nil, Qt.Window),
        button = QPushButton("MuQt: HELLO WORLD!", window);

    connect(button, QPushButton.clicked, clicked);

    window.setSize(QSize(200, 50));
    window.show();
    window.raise();
    QApplication.exec();
}

main();
The main thing to notice in this example is the connect() function. A similar C++ version of this would look like this:
connect(button, SIGNAL(clicked(bool)), SLOT(myclickslot(bool)));
where myclickslot would be a slot function declared in a class. In Mu it's not necessary to create a class to receive a signal. In addition the SIGNAL and SLOT syntax is also unnecessary. However, it is necessary to exactly specify which signal is being referred to by passing its Mu function object directly. In this case QPushButton.clicked. The signal must be a function on the class of the first argument of connect().
In Mu, any function which matches the signal's signature can be used to receive the signal. The downside of this is that some functions like sender() are not available in Mu. However this is easily overcome with partial application. In the above case, if we need to know who sent the signal in our clicked function, we can change its signature to accept the sender and partially apply it in the connect call like so:
\: clicked (void; bool checked, QPushButton sender)
{
    // do something with sender
}

\: main ()
{
    ...

    connect(button, QPushButton.clicked, clicked(,button));
}
And of course additional information can be passed into the clicked function by applying more arguments.
It's also possible to connect a signal to a class method in Mu if the method signature matches. Partial application can be used in that case as well. This is frequently the case when writing a mode which uses a Qt interface.

Inheriting from Qt Classes

It's possible to inherit directly from the Qt classes in Mu and override methods. Virtual functions in the C++ version of Qt are translated as class methods in Mu. Non-virtual functions are regular functions in the scope of the class. In practice this means that the Mu Qt class usage is very similar to the C++ usage.
The following example shows how to create a new widget type that implements a drop target. Drag and drop is one aspect of Qt that requires inheritance (in C++ and Mu):
use qt;

class: MyWidget : QWidget
{
    method: MyWidget (MyWidget; QObject parent, int windowFlags)
    {
        // REQUIRED: call base constructor to build Qt native object
        QWidget.QWidget(this, parent, windowFlags); 
        setAcceptDrops(true);
    }

    method: dragEnterEvent (void; QDragEnterEvent event)
    {
        print("drop enter\n");
        event.acceptProposedAction();
    }

    method: dropEvent (void; QDropEvent event)
    {
        print("drop\n");
        let mimeData = event.mimeData(),
            formats = mimeData.formats();

        print("--formats--\n");
        for_each (f; formats) print("%s\n" % f);

        if (mimeData.hasUrls())
        {
            print("--urls--\n");
            for_each (u; event.mimeData().urls()) 
                print("%s\n" % u.toString(QUrl.None));
        }

        if (mimeData.hasText())
        {
            print("--text--\n");
            print("%s\n" % mimeData.text());
        }

        event.acceptProposedAction();
    }
}
Things to note in this example: the names of the drag and drop methods matter. These are the same names as used in C++. If you browse the documentation of a Qt class in Mu these will be the class methods. Only class methods can be overridden.

Chapter 8 Modes and Widgets

The user interface layer can augment the display and event handling in a number of different ways. For display, at the lowest level it's possible to intercept the render event, in which case you override all drawing. Similarly, for event handling you can bind functions in the global event table, possibly overwriting existing bindings and thus replacing their functions.
At a higher level, both display and event handling can be done via Modes and Widgets. A Mode is a class which manages an event table independent of the global event table and a collection of functions which are bound in that table. In addition the mode can have a render function which is automatically called at the right time to augment existing rendering instead of replacing it. The UI has code which manages modes so that they may be loaded externally only when needed and automatically turned on and off.
Modes are further classified as being minor or major. The only difference between them is that a major mode will always get precedence over any minor mode when processing events and there can be only a single major mode active at a time. There can be many minor modes active at once. Most extensions are created by creating a minor mode. RV currently has a single basic major mode.
Figure 8.1:
Event Propagation. Red and Green modes process the event. On the left, the Red mode rejects the event, allowing it to continue. On the right, the Red mode does not reject the event, stopping the propagation.
By using a mode to implement a new feature or replace or augment an existing feature in RV you can keep your extensions separate from the portion of the UI that ships with RV. In other words, you never need to touch the shipped code and your code will remain isolated.
A further refinement of a mode is a widget. Widgets are minor modes which operate in a constrained region of the screen. When the pointer is in the region, the widget will receive events. When the pointer is outside the region it will not. Like a regular mode, a widget has a render function which can draw anywhere on the screen, but it is usually constrained to its input region. For example, the image info box is a widget, as is the color inspector.
Multiple modes and widgets may be active at the same time. At this time Widgets can only be programmed using Mu.

Outline of a Mode

In order to create a new mode you need to create a module for it and derive your mode class from the MinorMode class in the rvtypes module. The basic outline which we'll put in a file called new_mode.mu looks like this:
use rvtypes;

module: new_mode {

class: NewMode : MinorMode
{
    method: NewMode (NewMode;)
    {
        init ("new-mode",
              [ global bindings ... ],
              [ local bindings ... ],
              Menu(...) );
    }
}

\: createMode (Mode;)
{
    return NewMode();
}

} // end of new_mode module
The function createMode() is used by the mode manager to create your mode without knowing anything about it. It should be declared in the scope of the module (not your class) and simply create your mode object and initialize it if that's necessary.
When creating a mode it's necessary to call the init() function from within your constructor method. This function takes at least three arguments and as many as six. Chapter 10 goes into the structure in more detail. It's declared like this in rvtypes.mu:
method: init (void; 
              string name, 
              BindingList globalBindings,
              BindingList overrideBindings,
              Menu menu = nil,
              string sortKey = nil,
              int ordering = 0)
The name of the mode is meant to be human readable.
The “bindings” arguments supply event bindings for this mode. The bindings are only active when the mode is active and take precedence over any “global” bindings (bindings not associated with any mode). In your event function you can call the “reject” method on an event which will cause rv to pass it on to bindings “underneath” yours. This technique allows you to augment an existing binding instead of replacing it. The separation of the bindings into overrideBindings and globalBindings is due to backwards compatibility requirements, and is no longer meaningful.
The menu argument allows you to pass in a menu structure which is merged into the main menu bar. This makes it possible to add new menus and menu items to the existing menus.
Finally the sortKey and ordering arguments allow fine control over the order in which event bindings are applied when multiple modes are active. First the ordering value is checked (default is 0 for all modes), then the sortKey (default is the mode name).
Again, see chapter 10 for more detailed information.
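Modes can also be written in Python. A rough Python equivalent of the outline above, assuming the Python rvtypes module mirrors the Mu API described here (mode name, global bindings, override bindings, optional menu), looks like this:
from rv import rvtypes, commands

class NewMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        # each binding is an (event name, function, documentation) tuple
        self.init("new-mode",
                  [("key-down--j", self.jump_to_in_point, "Jump to the in point")],
                  None)

    def jump_to_in_point(self, event):
        # the command calls are stand-ins for whatever your mode needs to do
        commands.setFrame(commands.inPoint())

def createMode():
    return NewMode()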

Outline of a Widget

A Widget looks just like a MinorMode declaration except you will derive from Widget instead of MinorMode and the base class init() function is simpler. In addition, you'll need to have a render() method (which is optional for regular modes).
use rvtypes;

module: new_widget {

class: NewWidget : Widget
{
    method: NewWidget (NewWidget;)
    {
        init ("new-widget",
              [ local bindings ... ] );
    }

    method: render (void; Event event)
    {
        ...
        updateBounds(min_point, max_point);
        ...
    }
}

\: createMode (Mode;)
{
    return NewWidget();
}

} // end of new_widget module
In the outline above, the function updateBounds() is called in the render() method. updateBounds() informs the UI about the bounding box of your widget. This function must be called by the widget at some point. If your widget can be interactively or procedurally moved, you will probably want to call it in your render() function as shown (it does not hurt to call it often). The min_point and max_point arguments are Vec2 types.

Chapter 9 Package System

With previous versions of RV we recommended directly hacking the UI code or setting up ad hoc locations in the MU_MODULE_PATH to place files.
For RV 3.6 or newer, we recommend using the new package system instead. The documentation in older versions of the reference manual is still valid, but we will no longer be using those examples. There are hardly any limitations to using the package system, so no functionality is lost.

rvpkg Command Line Tool

The rvpkg command line tool makes it possible to manage packages from the shell. If you use rvpkg you do not need to use RV's preferences UI to install/uninstall or add/remove packages from the file system. We recommend using this tool instead of manually editing files so that you don't need to keep track of how the state is stored in new versions.
The rvpkg tool can perform a superset of the functions available in RV's packages preference user interface.
-include directory    include directory as if part of RV_SUPPORT_PATH
-env                  show RV_SUPPORT_PATH (including app areas)
-only directory       use directory as sole content of RV_SUPPORT_PATH
-add directory        add packages to specified support directory
-remove               remove packages (by name, rvpkg name, or full path to rvpkg)
-install              install packages (by name, rvpkg name, or full path to rvpkg)
-uninstall            uninstall packages (by name, rvpkg name, or full path to rvpkg)
-optin                opt-in (load) now on behalf of all users, so it will be as if they opted in
-list                 list installed packages
-info                 detailed info about packages (by name, rvpkg name, or full path to rvpkg)
-force                assume the answer is 'y' to any confirmations (don't be interactive)
Table 9.1:
rvpkg Options
Note: many of the below commands, including install, uninstall, and remove will look for the designated packages in the paths in the RV_SUPPORT_PATH environment variable. If the package you want to operate on is not in a path listed there, that path can be added on the command line with the -include option.

9.1.1 Getting a List of Available Packages

shell> rvpkg -list
Lists all packages that are available in the RV_SUPPORT_PATH directories. Typical output from rvpkg looks like this:
I L - 1.7 "Annotation" /SupportPath/Packages/annotate-1.7.rvpkg
I L - 1.1 "Documentation Browser" /SupportPath/Packages/doc_browser-1.1.rvpkg
I - O 1.1 "Export Cuts" /SupportPath/Packages/export_cuts-1.1.rvpkg
I - O 1.3 "Missing Frame Bling" /SupportPath/Packages/missing_frame_bling-1.3.rvpkg
I - O 1.4 "OS Dependent Path Conversion" /SupportPath/Packages/os_dependent_path_conversion_mode-1.4.rvpkg
I - O 1.1 "Nuke Integration" /SupportPath/Packages/rvnuke-1.1.rvpkg
I - O 1.2 "Sequence From File" /SupportPath/Packages/sequence_from_file-1.2.rvpkg
I L - 1.3 "Session Manager" /SupportPath/Packages/session_manager-1.3.rvpkg
I L - 2.2 "RV Color/Image Management" /SupportPath/Packages/source_setup-2.2.rvpkg
I L - 1.3 "Window Title" /SupportPath/Packages/window_title-1.3.rvpkg
The first three columns indicate installation status (I), load status (L), and whether or not the package is optional (O).
If you want to include a support path directory that is not in RV_SUPPORT_PATH, you can include it like this:
shell> rvpkg -list -include /path/to/other/support/area
To limit the list to a single support area:
shell> rvpkg -list -only /path/to/area
The -include and -only arguments may be applied to other options as well.

9.1.2 Getting Information About the Environment

You can see the entire support path list with the command:
shell> rvpkg -env
This will show alternate version package areas constructed from the RV_SUPPORT_PATH environment variable to which packages may be added, removed, installed, and uninstalled. The list may differ based on the platform.

9.1.3 Getting Information About a Package

shell> rvpkg -info /path/to/file.rvpkg
This will result in output like:
Name: Window Title
Version: 1.3
Installed: YES
Loadable: YES
Directory: 
Author: Tweak Software
Organization: Tweak Software
Contact: an actual email address
URL: http://www.tweaksoftware.com
Requires: 
RV-Version: 3.9.11
Hidden: YES
System: YES
Optional: NO
Writable: YES
Dir-Writable: YES
Modes: window_title
Files: window_title.mu

9.1.4 Adding a Package to a Support Area

shell> rvpkg -add /path/to/area /path/to/file1.rvpkg /path/to/file2.rvpkg
You can add multiple packages at the same time.
Remember that adding a package makes it available for installation; it does not install it.

9.1.5 Removing a Package from a Support Area

shell> rvpkg -remove /path/to/area/Packages/file1.rvpkg
Unlike adding, the package in this case is the one in the support area's Packages directory. You can remove multiple packages at the same time.
If the package is installed rvpkg will interactively ask for confirmation to uninstall it first. You can override that by using -force as the first argument:
shell> rvpkg -force -remove /path/to/area/Packages/file1.rvpkg

9.1.6 Installing and Uninstalling Available Packages

shell> rvpkg -install /path/to/area/Packages/file1.rvpkg
shell> rvpkg -uninstall /path/to/area/Packages/file1.rvpkg
If files are missing when uninstalling, rvpkg may complain. This can happen if multiple versions were somehow installed into the same area.

9.1.7 Combining Add and Install for Automated Installation

If you're using rvpkg from an automated installation script you will want to use the -force option to prevent the need for interaction. rvpkg will assume the answer to any questions it might ask is “yes”. This will probably be the most common usage:
shell> rvpkg -force -install -add /path/to/area /path/to/some/file1.rvpkg
Multiple packages can be specified with this command. All of the packages are installed into /path/to/area.
To force uninstall followed by removal:
shell> rvpkg -force -remove /path/to/area/Packages/file1.rvpkg
The -uninstall option is unnecessary in this case.

9.1.8 Overriding Default Optional Package Load Behavior

If you want optional packages to be loaded by default for all users, you can do the following:
shell> rvpkg -optin /path/to/area/Packages/file1.rvpkg
In this case, rvpkg will rewrite the rvload2 file associated with the support area to indicate the package is no longer optional. The user can still unload the package if they want, but it will be loaded by default after running the command.

Package File Contents

A package file is a zip file with at least one special file called PACKAGE along with .mu, .so, .dylib, and support files (plain text, images, icons, etc) which implement the actual package.
Creating a package requires the zip binary. The zip binary is usually part of the default install on each of the OSes that RV runs on.
The contents of the package should NOT be put in a parent directory before being zipped up. The PACKAGE manifest as well as any other files should be at the root level of the zip file.
When a package is installed, RV will place all of its contents into subdirectories in one of the RV_SUPPORT_PATH locations. If RV_SUPPORT_PATH is not defined in the environment, it is assumed to have the value of RV_HOME/plugins followed by the home directory support area (which varies with each OS: see the user manual for more info). Files contained in one zip file will all be installed under the same support path directory; they will not be distributed over more than one support path location.
The install locations of files in the zip file is described in a filed called PACKAGE which must be present in the zip file. The minimum package file contains two files: PACKAGE and one other file that will be installed. A package zip file must reside in the subdirectory called Packages in one of the support path locations in order to be installed. When the user adds a package in the RV package manager, this is where the file is copied to.

PACKAGE Format

The PACKAGE file is a YAML file providing information about how the package is used and installed as well as user documentation. Every package must have a PACKAGE file with an accurate description of its contents.
The top level of the file may contain the following fields:
package (string): The name of the package in human-readable form
author (string): The name of the author/creator of the package
organization (string): The name of the organization (company) the author created the package for
contact (email address): The email contact of the author/support person
version (version number): The package version
url (URL): Web location for the package where updates and additional documentation reside
rv (version number): The minimum version of RV with which this package is compatible
requires (zip file name list): Any other packages (as zip file names) which are required in order to install/load this package
icon (PNG file name): The name of a file with an icon for this package
imageio (file list): List of files in the package which implement image I/O
movieio (file list): List of files in the package which implement movie I/O
hidden (boolean): Either “true” or “false”, indicating whether the package should be visible by default in the package manager
system (boolean): Either “true” or “false”, indicating whether the package was pre-installed with RV and cannot be removed/uninstalled
optional (boolean): Either “true” or “false”, indicating whether the package is optional. If true, the package is not loaded by default after it is installed. Typically this is used only for packages that are pre-installed. (Added in 3.10.9)
modes (YAML list): List of modes implemented in the package
files (YAML list): List of non-mode file handling information
description (HTML 1.0 string): HTML documentation of the package for user viewing in the package manager
Table 9.2: Top-level fields of the PACKAGE file.
Each element of the modes list describes one Mu module which is implemented as either a .mu file or a .so file. Files implementing modes are assumed to be Mu module files and will be placed in the Mu subdirectory of the support path location. The other fields are used to optionally create a menu item and/or a shortcut key, either of which will toggle the mode on/off. The load field indicates when the mode should be loaded: if the value is “delay” the mode will be loaded the first time it is activated; if the value is “immediate” the mode will be loaded on start up.
file (string): The name of the file which implements the mode
menu (string): If defined, the string which will appear in a menu item to indicate the status (on/off) of the mode
shortcut (string): If defined, and menu is defined, the shortcut for the menu item
event (string): Optional event name used to toggle the mode on/off
load (string): Either immediate or delay, indicating when the mode should be loaded
icon (PNG image file): Icon representing the mode
requires (mode file name list): Names of other mode files required to be active for this mode to be active
Table 9.3: Mode Fields
As an example, the package window_title-1.0.rvpkg has a relatively simple PACKAGE file shown here:
package: Window Title
author: Tweak Software
organization: Tweak Software
contact: some email address of the usual form
version: 1.0
url: http://www.tweaksoftware.com
rv: 3.6
requires: ''

modes: 
  - file: window_title
    load: immediate

description: |

  <p> This package sets the window title to something that indicates the
  currently viewed media. 
  </p> 

  <h2>How It Works</h2> 

  <p> The events play-start, play-stop, and frame-changed, are bound to
  functions which call setWindowTitle(). </p>
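If a mode should instead be loaded lazily and toggled from a menu item, the corresponding modes entry might look like the following sketch, using the fields from Table 9.3 (the mode name, menu label, and shortcut here are hypothetical):
modes:
  - file: my_overlay
    menu: My Overlay
    shortcut: alt o
    load: delay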
When the package zip file contains additional support files (which are not specified as modes) the package manager will try to install them in locations according to the file type. However, you can also directly specify where the additional files go relative to the support path root directory.
file (string): The name of the file in the package zip file
location (string): Location to install the file in, relative to the support path root. This can contain the variable $PACKAGE to specify special package directories, e.g. SupportFiles/$PACKAGE is the support directory for the package.
Table 9.4: File Fields
For example, if your package contains icon files for the user interface, they can be forced into the support files area of the package like this:
files:
  - file: myicon.tif
    location: SupportFiles/$PACKAGE

Package Management Configuration Files

There are two files which the package manager creates and uses: rvload2 (previous releases had a file called rvload) in the Mu subdirectory and rvinstall in the Packages subdirectory. rvload2 is used on start up to load package modes and create stubs in menus or events for toggling the modes on/off if they are lazily loaded. rvinstall lists the currently known package zip files, with an asterisk in front of each file that is installed. The rvinstall file is used only by the package manager in the preferences to keep track of which packages are which.
The rvload2 file has a one line entry for each mode that it knows about. This file is automatically generated by the package manager when the user installs a package with modes in it. The first line of the file indicates the version number of the rvload2 file itself (so we can change it in the future) followed by the one line descriptions.
For example, this is the contents of rvload2 after installing the window title package:
3
window_title,window_title.zip,nil,nil,nil,true,true,false
The fields are:
  1. The mode name (as it appears in a require statement in Mu)
  2. The name of the package zip file the mode originally comes from
  3. An optional menu item name
  4. An optional menu shortcut/accelerator if the menu item exists
  5. An optional event to bind mode toggling to
  6. A boolean indicating whether the mode should be loaded immediately or not
  7. A boolean indicating whether the mode should be activated immediately
  8. A boolean indicating whether the mode is optional, so it should not be loaded by default unless the user opts in. (Added in 3.10.9. The rvload2 file version was also bumped up to version 3.)
Each field is separated by a comma and there should be no extra whitespace on the line. The rvinstall file is much simpler: it contains a single zip file name on each line and an asterisk next to any file which is currently known to be installed. For example:
crop.zip
layer_select.zip
metadata_info.zip
sequence_from_file.zip
*window_title.zip
In this case, five modes would appear in the package manager UI, but only the window title package is actually installed. The zip files should exist in the same directory that rvinstall lives in.

Developing a New Package

In order to start a new package there is a chicken and egg problem which needs to be overcome: the package system wants to have a package file to install.
The best way to start is to create a source directory somewhere (like your source code repository) where you can build the zip file from its contents. Create a file called PACKAGE in that directory by copying and pasting from either this manual (listing 9.3) or from another package you know works, and edit the file to reflect what you will be doing (i.e. give it a name, etc.).
If you are writing a Mu module implementing a mode or widget (which is also a mode) then create the .mu file in that directory also.
You can at that point use zip to create the package like so:
shell> zip new_package-0.0.rvpkg PACKAGE the_new_mode.mu
This will create the new_package-0.0.rvpkg file. At this point you're ready to install your package that doesn't do anything. Open RV's preferences and in the package manager UI add the zip file and install it (preferably in your home directory so it's visible only to you while you implement it).
Once you've done this, the rvload2 and rvinstall files will have been either created or updated automatically. You can then start hacking on the installed version of your Mu file (not the one in the directory you created the zip file in). Once you have it working the way you want, copy it back to your source directory, create the final zip file for distribution, and delete the one that was added by RV into the Packages directory.

9.5.1 Older Package Files (.zip)

RV version 3.6 used the extension .zip for its package files. This still works, but newer versions prefer the extension .rvpkg along with a preceding version indicator. So a new style package will look like: rvpackagename-X.Y.rvpkg where X.Y is the package version number that appears in the PACKAGE file. New style package files are required to have the version in the file name.

9.5.2 Using the Mode Manager While Developing

It's possible to delay making an actual package file when starting development on individual modes. You can force RV to load your mode (assuming it's in the MU_MODULE_PATH someplace) like so:
shell> rv -flags ModeManagerLoad=my_new_mode
where my_new_mode is the name of the .mu file with the mode in it (without the extension).
You can get verbose information on what's being loaded and why (or why not) by setting the verbose flag:
shell> rv -flags ModeManagerVerbose
The flags can be combined on the command line.
shell> rv -flags ModeManagerVerbose ModeManagerLoad=my_new_mode
If your package is installed already and you want to force it to be loaded (this overrides the user preferences) then:
shell> rv -flags ModeManagerPreload=my_already_installed_mode
Similarly, if you want to force a mode not to be loaded:
shell> rv -flags ModeManagerReject=my_already_installed_mode

9.5.3 Using -debug mu

Normally, RV will compile Mu files to conserve space in memory. Unfortunately, that means losing a lot of information, like source locations when exceptions are thrown. You can tell RV to retain debugging information by adding -debug mu to the end of the RV command line. This will consume more memory but report source file information when displaying a stack trace.
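For example (the media path here is hypothetical):
shell> rv /path/to/media.mov -debug mu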

9.5.4 The Mu API Documentation Browser

The Mu modules are documented dynamically by the documentation browser. This is available under RV's help menu “Mu API Documentation Browser”.

Loading Versus Installing and User Override

The package manager allows each user to individually install and uninstall packages in support directories that they have permission in. For directories in which the user does not have permission, the package manager maintains a separate list of packages which can be excluded by the user.
For example, there may be a package installed facility wide owned by an administrator. The support directory with facility wide packages only allows read permission for normal users. Packages that were installed and loaded by the administrator will be automatically loaded by all users.
In order to allow a user to override the loading of system packages, the package manager keeps a list of packages not to load. This is kept in the user's preferences file (see user manual for location details). In the package manager UI the “load” column indicates the user status for loading each package in his/her path.

9.6.1 Optional Packages

The load status of optional packages is also kept in the user's preferences; however, these packages use a different preference variable to determine whether or not they should be loaded. By default, optional packages are not loaded when installed. A package is made optional by setting the ``optional'' value in the PACKAGE file to true.
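In the PACKAGE file this is a single top-level field; for example:
optional: true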

Chapter 10 A Simple Package

This first example will show how to create a package that defines some key bindings and creates a custom personal menu. You will not need to edit a .rvrc.mu file to do this as in previous versions.
We'll be creating a package intended to keep all our personal customizations. To start with, we'll need to make a Mu module that implements a new mode. At first it won't do anything at all: it will just load at start up. Put the following into a file called mystuff.mu.
use rvtypes;
use extra_commands;
use commands;

module: mystuff {

class: MyStuffMode : MinorMode 
{
    method: MyStuffMode (MyStuffMode;)
    {
        init("mystuff-mode",
             nil,
             nil,
             nil);
    }
} 

\: createMode (Mode;)
{
    return MyStuffMode();
}
    
} // end module
Now we need to create a PACKAGE file in the same directory before we can create the package zip file. It should look like this:
package: My Stuff
author: M. VFX Artiste
version: 1.0
rv: 3.6
requires: ''

modes: 
  - file: mystuff
    load: immediate

description: |
  <p>M. VFX Artiste's Personal RV Customizations</p>
Assuming both files are in the same directory, we create the zip file using this command from the shell:
shell> zip mystuff-1.0.rvpkg PACKAGE mystuff.mu
The file mystuff-1.0.rvpkg should have been created. Now start RV, open the preferences package pane and add the mystuff-1.0.rvpkg package. You should now be able to install it. Make sure the package is both installed and loaded in your home directory's RV support directory so it's private to you.
At this point, we'll edit the installed Mu file directly so we can see results faster. When we have something we like, we'll copy it back to the original mystuff.mu and make the rvpkg file again with the new code. Be careful not to uninstall the mystuff package while we're working on it or our changes will be lost. Alternatively, for the more paranoid (and wiser), we could edit the file elsewhere and simply copy it onto the installed file.
To start with, let's add two functions on the ``<'' and ``>'' keys to speed up and slow down the playback by increasing and decreasing the FPS. There are two main things we need to do: add two methods to the class which implement speeding up and slowing down, and bind those methods to the keys.
First let's add the new methods after the class constructor MyStuffMode() along with two global bindings to the ``<'' and ``>'' keys. The class definition should now look like this:
...

class: MyStuffMode : MinorMode 
{
    method: MyStuffMode (MyStuffMode;)
    {
        init("mystuff-mode",
             [("key-down-->", faster, "speed up fps"),
               ("key-down--<", slower, "slow down fps")], 
             nil,
             nil);
    }

    method: faster (void; Event event)
    {
        setFPS(fps() * 1.5);
        displayFeedback("%g fps" % fps());
    }

    method: slower (void; Event event)
    {
        setFPS(fps() * 1.0/1.5);
        displayFeedback("%g fps" % fps());
    }
}
The bindings are created by passing a list of tuples to the init function. Each tuple contains three elements: the event name to bind to, the function to call when it is activated, and a single-line description of what it does. In Mu a tuple is formed by putting parentheses around comma-separated elements. A list is formed by enclosing its elements in square brackets. So a list of tuples will have the form:
[ (...), (...), ... ]
Where the ``...'' means ``and so on''. The first tuple in our list of bindings is:
("key-down-->", faster, "speed up fps")
So the event in this case is key-down--> which means the point at which the > key is pressed. The symbol faster refers to the method we declared above. So faster will be called whenever the key is pressed. Similarly we bind slower (from above as well) to key-down--<.
("key-down--<", slower, "slow down fps")
And putting them in a list requires enclosing the two of them in square brackets:
[("key-down-->", faster, "speed up fps"),
 ("key-down--<", slower, "slow down fps")]
To add more bindings you create more methods to bind and add additional tuples to the list.
The Python version of the above looks like this:
from rv.rvtypes import *
from rv.commands import *
from rv.extra_commands import *

class PyMyStuffMode(MinorMode):

    def __init__(self):
        MinorMode.__init__(self)
        self.init("py-mystuff-mode",
                  [ ("key-down-->", self.faster, "speed up fps"),
                    ("key-down--<", self.slower, "slow down fps") ],
                  None,
                  None)

    def faster(self, event):
        setFPS(fps() * 1.5)
        displayFeedback("%g fps" % fps(), 2.0);

    def slower(self, event):
        setFPS(fps() * 1.0/1.5)
        displayFeedback("%g fps" % fps(), 2.0);


def createMode():
    return PyMyStuffMode()

How Menus Work

Adding a menu is fairly straightforward if you understand how to create a MenuItem. There are different types of MenuItems: items that you can select in the menu and cause something to happen, or items that are themselves menus (sub-menu). The first type is constructed using this constructor (shown here in prototype form) for Mu:
MenuItem(string       label,
         (void;Event) actionHook,
         string       key,
         (int;)       stateHook);
or in Python this is specified as a tuple:
("label", actionHook, "key", stateHook)
The actionHook and stateHook arguments need some explanation. The other two (the label and key) are easier: the label is the text that appears in the menu item and the key is a hot key for the menu item.
The actionHook is the purpose of the menu item: it is a function or method which will be called when the menu item is activated. This is just like the method we used with bind() — it takes an Event object. If actionHook is nil, then the menu item won't do anything when the user selects it.
The stateHook provides a way to check whether the menu item should be enabled (or greyed out): it is a function or method that returns an int. In fact, it really returns one of the following symbolic constants: NeutralMenuState, UncheckedMenuState, CheckedMenuState, MixedStateMenuState, or DisabledMenuState. If the value of stateHook is nil, the menu item is assumed to always be enabled, but not checked or in any other state.
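For example, a minimal Python stateHook might check the playback state and put a check mark on the item while the session is playing. This is just a sketch; the function name is hypothetical, and any function returning one of the state constants will do:
from rv import commands

def playingState():
    # stateHook: return a menu state constant; checked while playback is active
    if commands.isPlaying():
        return commands.CheckedMenuState
    return commands.UncheckedMenuState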
A sub-menu MenuItem can be created using this constructor in Mu:
MenuItem(string     label, 
         MenuItem[] subMenu);
or a tuple of two elements in Python:
("label", subMenu)
The subMenu is an array of MenuItems in Mu or a list of menu item tuples in Python.
Usually we'll be defining a whole menu — which is an array of MenuItems. So we can use the array initialization syntax to do something like this:
let myMenu = MenuItem {"My Menu", Menu {
    {"Menu Item", menuItemFunc, nil, menuItemState},
    {"Other Menu Item", menuItemFunc2, nil, menuItemState2}
}}
Finally you can create a sub-menu by nesting more MenuItem constructors in the subMenu.
MenuItem myMenu = {"My Menu", Menu {
        {"Menu Item", menuItemFunc, nil, menuItemState},
        {"Other Menu Item", menuItemFunc2, nil, menuItemState2},
        {"Sub-Menu", Menu {
             {"First Sub-Menu Item", submenuItemFunc1, nil, submenu1State}
        }}
    }};
in Python this looks like:
("My Menu", [
  ("Menu Item", menuItemFunc, None, menuItemState),
  ("Other Menu Item", menuItemFunc2, None, menuItemState2)])
You'll see this on a bigger scale in the rvui module, where most of the menu bar is declared in one large constructor call.

A Menu in MyStuffMode

Now back to our mode. Let's say we want to put our faster and slower functions on menu items in the menu bar. The fourth argument to the init() function in our constructor takes a menu representing the menu bar. You only define menus which you want to either modify or create. The contents of our main menu will be merged into the menu bar.
By merge into we mean that the menus with the same name will share their contents. So for example if we add the File menu in our mode, RV will not create a second File menu on the menu bar; it will add the contents of our File menu to the existing one. On the other hand if we call our menu MyStuff RV will create a brand new menu for us (since presumably MyStuff doesn't already exist). This algorithm is applied recursively so sub-menus with the same name will also be merged, and so on.
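For example, a Python mode that merges a single item into the existing File menu might pass a menu like this to init() (the label and handler here are hypothetical):
[ ("File", [ ("Export Annotations...", self.exportAnnotations, None, None) ]) ]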
So let's add a new menu called MyStuff with two items in it to control the FPS. In this example, we're only showing the actual init() call from mystuff.mu:
init("mystuff-mode",
     [ ("key-down-->", faster, "speed up fps"),
       ("key-down--<", slower, "slow down fps") ], 
     nil,
     Menu {
         {"MyStuff", Menu {
                 {"Increase FPS", faster, nil},
                 {"Decrease FPS", slower, nil}
             }
         }
     });
Normally RV will place the new menu (called ``MyStuff'') just before the Windows menu.
If we wanted to use menu accelerators instead of (or in addition to) the regular event bindings we add those in the menu item constructor. For example, if we wanted to also use the keys - and = for slower and faster we could do this:
init("mystuff-mode",
     [ ("key-down-->", faster, "speed up fps"),
       ("key-down--<", slower, "slow down fps") ], 
     nil,
     Menu {
         {"MyStuff", Menu {
                 {"Increase FPS", faster, "="},
                 {"Decrease FPS", slower, "-"}
             }
         }
     });
The advantage of using the event bindings instead of the accelerator keys is that they can be overridden and mapped and unmapped by other modes and ``chained'' together. Of course we could also use > and < for the menu accelerator keys as well (or instead of using the event bindings).
The Python version of the script might look like this:
from rv.rvtypes import *
from rv.commands import *
from rv.extra_commands import *

class PyMyStuffMode(MinorMode):

    def __init__(self):
        MinorMode.__init__(self)
        self.init("py-mystuff-mode",
                  [ ("key-down-->", self.faster, "speed up fps"),
                    ("key-down--<", self.slower, "slow down fps") ],
                  None,
                  [ ("MyStuff", 
                     [ ("Increase FPS", self.faster, "=", None),
                       ("Decrease FPS", self.slower, "-", None)] )] )

    def faster(self, event):
        setFPS(fps() * 1.5)
        displayFeedback("%g fps" % fps(), 2.0);

    def slower(self, event):
        setFPS(fps() * 1.0/1.5)
        displayFeedback("%g fps" % fps(), 2.0);


def createMode():
    return PyMyStuffMode()

Finishing up

Finally, we'll create the final rvpkg package by copying mystuff.mu back to our temporary directory with the PACKAGE file where we originally made the rvpkg file.
Next start RV and uninstall and remove the mystuff package so it no longer appears in the package manager UI. Once you've done this, recreate the rvpkg file from scratch with the new mystuff.mu file and the PACKAGE file:
shell> zip mystuff-1.0.rvpkg PACKAGE mystuff.mu
or if you're using Python:
shell> zip mystuff-1.0.rvpkg PACKAGE mystuff.py
You can now add the latest mystuff-1.0.rvpkg file back to RV and use it. In the future, add personal customizations directly to this package and you'll always have a single file you can install to customize RV.

Chapter 11 The Custom Matte Package

Now that we've tried the simple stuff, let's do something useful. (Previous versions of this manual presented a different approach which still works in RV 3.6, but it is no longer the preferred method.)
RV has a number of settings for viewing mattes. These are basically regions of the frame that are darkened or completely blackened to simulate what an audience will see when the movie is projected. The size and shape of the matte is an artistic decision and sometimes a unique matte will be required.
You can find various common mattes already built into RV under the View menu.
In this example we'll create a Python package that reads a file when RV starts to get a list of matte geometry and names. We'll make a custom menu out of these which will set some state in the UI.
To start with, we'll assume that the path to the file containing the mattes is located in an environment variable called RV_CUSTOM_MATTE_DEFINITIONS. We'll get the value of that variable, open and parse the file, and create a data structure holding all of the information about the mattes. If the variable is not defined, we will provide a way for the user to locate the file through an open-file dialog and then parse the file.

Creating the Package

Use the same method described in Chapter 10 to begin working on the package. If you haven't read that chapter, please do so first. A completed version of the package created in this chapter is included in the RV distribution, so using that as a reference is a good idea.

The Custom Matte File

The file will be a very simple comma-separated value (CSV) file. Each line starts with the name of the custom matte (shown in the menu), followed by four floating point values, and then a text description which will be displayed when that matte is activated. So each line will look something like this:
matte menu name, aspect ratio, fraction of image visible, center point of matte in X, center point of matte in Y, descriptive text
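For example, a definition file with two mattes might look like this (the names and numbers are purely illustrative):
2.35 Matte, 2.35, 1.0, 0.0, 0.0, Scope framing with full height visible
1.85 Matte, 1.85, 0.9, 0.0, 0.0, Flat framing with 90 percent of the image height visible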

Parsing the Matte File

Before we actually parse the file, we should decide what we want when we're done. In this case we're going to make our own data structure to hold the information in each line of the file. We'll hold all of the information we collect in a Python dictionary with the following keys:
"name", "ratio", "heightVisible", "centerX", "centerY", and "text"
Next we'll write a method for our mode that does the parsing and updates our internal mattes dictionary. (If you are unfamiliar with object-oriented programming you can substitute the word function for method. This manual will sometimes refer to a method as a function; it will never refer to a non-method function as a method.)
    def updateMattesFromFile(self, filename):
        # Make sure the definition file exists
        if (not os.path.exists(filename)):
            raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                " definition file: '%s'" % filename)

        # Walk through the lines of the definition file collecting matte
        # parameters
        order = []
        mattes = {}
        for line in open(filename).readlines():
            tokens = line.strip("\n").split(",")
            if (len(tokens) == 6):
                order.append(tokens[0])
                mattes[tokens[0]] = {
                    "name" : tokens[0], "ratio" : tokens[1],
                    "heightVisible" : tokens[2], "centerX" : tokens[3],
                    "centerY" : tokens[4], "text" : tokens[5]}

        # Make sure we got some valid mattes
        if (len(order) == 0):
            self._order = []
            self._mattes = {}
            raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" +
                " definition file: '%s'" % filename)

        self._order = order
        self._mattes = mattes
There are a number of things to note in this function. First of all, to keep track of the order in which we read the definitions from the mattes file, that order is stored in the “_order” Python list. The “_mattes” dictionary's keys are the same as the contents of the “_order” list, but since dictionaries are not ordered we use the list to remember the order.
We check to see if the file actually exists, and if not we simply raise a KnownError exception. So the caller of this function will have to be ready to catch a KnownError if the matte definition file cannot be found or if it is empty. The KnownError exception is simply our own Exception class. Having our own Exception class allows us to raise and catch exceptions that we know about while letting others we don't expect still reach the user. Here is the definition of our KnownError Exception class.
class KnownError(Exception): pass
We use the built-in Python readlines() method to go through the mattes file contents one line at a time. Each time through the loop, the next line is split over commas, since that's how we defined the fields of each line.
If there are not exactly 6 tokens after splitting the line, that means the line is corrupt and we ignore it. Otherwise, we add a new dictionary to our “_mattes” dictionary of matte definition dictionaries.
If we cannot find the path defined in the environment variable then we leave it blank:
        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = ""
At this point the custom_mattes.py file looks like this:
from rv import commands, rvtypes
import os

class KnownError(Exception): pass

class CustomMatteMinorMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self._order = []
        self._mattes = {}
        self._currentMatte = ""
        self.init("custom-mattes-mode", None, None, None)

        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = ""
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))

    def updateMattesFromFile(self, filename):

        # Make sure the definition file exists
        if (not os.path.exists(filename)):
            raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                " definition file: '%s'" % filename)

        # Walk through the lines of the definition file collecting matte
        # parameters
        order = []
        mattes = {}
        for line in open(filename).readlines():
            tokens = line.strip("\n").split(",")
            if (len(tokens) == 6):
                order.append(tokens[0])
                mattes[tokens[0]] = {
                    "name" : tokens[0], "ratio" : tokens[1],
                    "heightVisible" : tokens[2], "centerX" : tokens[3],
                    "centerY" : tokens[4], "text" : tokens[5]}
        
        # Make sure we got some valid mattes
        if (len(order) == 0):
            self._order = []
            self._mattes = {}
            raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" +
                " definition file: '%s'" % filename)
        
        self._order = order
        self._mattes = mattes

def createMode():
    return CustomMatteMinorMode()

Adding Bindings and Menus

The mode constructor needs to do three things: call the file parsing function, do something sensible if the matte file parsing fails, and build a menu with the items found in the matte file as well as add bindings to the menu items.
We have already gone over the parsing. Once parsing is done we either have a good list of mattes or an empty one, but either way we move on to setting up the menus. Here is the method that will build the menus and bindings.
    def setMenuAndBindings(self):
        
        # Walk through all of the mattes adding a menu entry as well as a
        # hotkey binding for alt + index number
        # NOTE: The bindings will only matter for the first 9 mattes since you
        # can't really press "alt-10".
        matteItems = []
        bindings = []
        if (len(self._order) > 0):
            matteItems.append(("No Matte", self.selectMatte(""), "alt `",
                self.currentMatteState("")))
            bindings.append(("key-down--alt--`", ""))

            for i,m in enumerate(self._order):
                matteItems.append((m, self.selectMatte(m),
                    "alt %d" % (i+1), self.currentMatteState(m)))
                bindings.append(("key-down--alt--%d" % (i+1), m))
        else:
            def nada():
                return commands.DisabledMenuState
            matteItems = [("RV_CUSTOM_MATTE_DEFINITIONS UNDEFINED",
                None, None, nada)]

        # Always add the option to choose a new definition file
        matteItems += [("_", None)]
        matteItems += [("Choose Definition File...", self.selectMattesFile,
            None, None)]

        # Clear the menu then add the new entries
        matteMenu = [("View", [("_", None), ("Custom Mattes", None)])]
        commands.defineModeMenu("custom-mattes-mode", matteMenu)
        matteMenu = [("View", [("_", None), ("Custom Mattes", matteItems)])]
        commands.defineModeMenu("custom-mattes-mode", matteMenu)

        # Create hotkeys for each matte
        for b in bindings:
            (event, matte) = b
            commands.bind("custom-mattes-mode", "global", event,
                self.selectMatte(matte), "")
You can see that creating the menus and bindings walks through the contents of our “_mattes” dictionary in the order dictated by “_order”. If no valid mattes were found, then we instead add a menu item alerting the user that the environment variable was not defined. You can also see from the example above that each menu entry is set to trigger a call to selectMatte for the associated matte definition. This is a neat technique where we use a factory method to create our event handling method for each valid matte we found. Here is the content of that method:
    def selectMatte(self, matte):

        # Create a method that is specific to each matte for setting the
        # relevant session node properties to display the matte
        def select(event):
            self._currentMatte = matte
            if (matte == ""):
                commands.setIntProperty("#Session.matte.show", [0], True)
                extra_commands.displayFeedback("Disabling mattes", 2.0)
            else:
                m = self._mattes[matte]
                commands.setFloatProperty("#Session.matte.aspect",
                    [float(m["ratio"])], True)
                commands.setFloatProperty("#Session.matte.heightVisible",
                    [float(m["heightVisible"])], True)
                commands.setFloatProperty("#Session.matte.centerPoint",
                    [float(m["centerX"]), float(m["centerY"])], True)
                commands.setIntProperty("#Session.matte.show", [1], True)
                extra_commands.displayFeedback(
                    "Using '%s' matte" % matte, 2.0)
        return select
Notice that we didn't say which matte to set it to. The function just sets the value to whatever its argument is. Since this function is going to be called when the menu item is selected it needs to be an event function (a function which takes an Event as an argument and returns nothing). In the case where we want no matte drawn, we'll pass in the empty string (“”).
The menu state function (which will put a check mark next to the current matte) has a similar problem. We solve it with a similar mechanism: a method which, given a matte, returns a function, and that returned function will be our menu state function. This sounds complicated, but it's simple in use, as the code below shows.
The thing to note is that the parameter m passed into currentMatteState() is being used inside the function that it returns. The m inside the matteState() function is known as a free variable. The value of this variable at the time that currentMatteState() is called becomes wrapped up with the returned function. One way to think about this is that each time you call currentMatteState() with a new value for m, it will return a different copy of the matteState() function in which the internal m is replaced by the value of currentMatteState()'s m.
    def currentMatteState(self, m):
        def matteState():
            if (m != "" and self._currentMatte == m):
                return commands.CheckedMenuState
            return commands.UncheckedMenuState
        return matteState
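If the free-variable behavior is unfamiliar, here is the same pattern in a tiny standalone Python sketch (unrelated to RV; the names are made up for illustration):
def makeGreeter(name):
    def greet():
        # 'name' is a free variable captured from the enclosing makeGreeter() call
        return "hello, %s" % name
    return greet

greetBob = makeGreeter("bob")
print(greetBob())    # prints: hello, bob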
Selecting mattes is not the only menu option we added in setMenuAndBindings(). We also added an option to select the matte definition file (or change the selected one) if none was found before. Here is the content of the selectMattesFile() method:
    def selectMattesFile(self, event):
        definition = commands.openFileDialog(True, False, False, None, None)[0]
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))
        self.setMenuAndBindings()
Notice here that we basically repeat what we did before when parsing the mattes definition file from the environment: we update our internal mattes structures and then set up the menus and bindings.
It is also important to clear out any existing bindings when we load a new mattes file. Therefore we should modify our parsing function to do this for us, like so:
    def updateMattesFromFile(self, filename):

        # Make sure the definition file exists
        if (not os.path.exists(filename)):
            raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                " definition file: '%s'" % filename)

        # Clear existing key bindings
        for i in range(len(self._order)):
            commands.unbind(
                "custom-mattes-mode", "global", "key-down--alt--%d" % (i+1))

          ... THE REST IS AS BEFORE ...
So the full mode constructor function now looks like this:
class CustomMatteMinorMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self._order = []
        self._mattes = {}
        self._currentMatte = ""
        self.init("custom-mattes-mode", None, None, None)

        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = ""
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))

Handling Settings

Wouldn't it be nice to have our package remember what our last matte setting was and where the last definition file was? Let's see how to add settings. First things first: we need to write our settings in order to read them back later. Let's start by writing out the location of our mattes definition file when we parse a new one. Here is an updated version of updateMattesFromFile():
    def updateMattesFromFile(self, filename):

        # Make sure the definition file exists
        if (not os.path.exists(filename)):
            raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                " definition file: '%s'" % filename)

        # Clear existing key bindings
        for i in range(len(self._order)):
            commands.unbind(
                "custom-mattes-mode", "global", "key-down--alt--%d" % (i+1))

        # Walk through the lines of the definition file collecting matte
        # parameters
        order = []
        mattes = {}
        for line in open(filename).readlines():
            tokens = line.strip("\n").split(",")
            if (len(tokens) == 6):
                order.append(tokens[0])
                mattes[tokens[0]] = {
                    "name" : tokens[0], "ratio" : tokens[1],
                    "heightVisible" : tokens[2], "centerX" : tokens[3],
                    "centerY" : tokens[4], "text" : tokens[5]}
        
        # Make sure we got some valid mattes
        if (len(order) == 0):
            self._order = []
            self._mattes = {}
            raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" +
                " definition file: '%s'" % filename)
        
        # Save the definition path and assign the mattes
        commands.writeSettings(
            "CUSTOM_MATTES", "customMattesDefinition", filename)
        self._order = order
        self._mattes = mattes
See how at the bottom of the function we are now writing the definition file path to the CUSTOM_MATTES settings. Now let's also update the selectMatte() method to remember which matte we selected.
    def selectMatte(self, matte):

        # Create a method that is specific to each matte for setting the
        # relevant session node properties to display the matte
        def select(event):
            self._currentMatte = matte
            if (matte == ""):
                commands.setIntProperty("#Session.matte.show", [0], True)
                extra_commands.displayFeedback("Disabling mattes", 2.0)
            else:
                m = self._mattes[matte]
                commands.setFloatProperty("#Session.matte.aspect",
                    [float(m["ratio"])], True)
                commands.setFloatProperty("#Session.matte.heightVisible",
                    [float(m["heightVisible"])], True)
                commands.setFloatProperty("#Session.matte.centerPoint",
                    [float(m["centerX"]), float(m["centerY"])], True)
                commands.setIntProperty("#Session.matte.show", [1], True)
                extra_commands.displayFeedback(
                    "Using '%s' matte" % matte, 2.0)
            commands.writeSettings("CUSTOM_MATTES", "customMatteName", matte)
        return select
Notice the second-to-last line: we save the matte that was just selected. Lastly, let's see what we have to do to make use of these settings when we initialize our mode. Here is the final version of the constructor:
class CustomMatteMinorMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self._order = []
        self._mattes = {}
        self._currentMatte = ""
        self.init("custom-mattes-mode", None, None, None)

        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = str(commands.readSettings(
                "CUSTOM_MATTES", "customMattesDefinition", ""))
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))
        self.setMenuAndBindings()

        lastMatte = str(commands.readSettings(
            "CUSTOM_MATTES", "customMatteName", ""))
        for matte in self._order:
            if matte == lastMatte:
                self.selectMatte(matte)(None)
Here we grab the last known location of the mattes definition file if we did not find one in the environment. We also attempt to look up the last matte that was used and if we can find it among the mattes we parsed then we enable that selection.

The Finished custom_mattes.py File

from rv import commands, rvtypes, extra_commands
import os

class KnownError(Exception): pass

class CustomMatteMinorMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        self._order = []
        self._mattes = {}
        self._currentMatte = ""
        self.init("custom-mattes-mode", None, None, None)

        try:
            definition = os.environ["RV_CUSTOM_MATTE_DEFINITIONS"]
        except KeyError:
            definition = str(commands.readSettings(
                "CUSTOM_MATTES", "customMattesDefinition", ""))
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))
        self.setMenuAndBindings()

        lastMatte = str(commands.readSettings(
            "CUSTOM_MATTES", "customMatteName", ""))
        for matte in self._order:
            if matte == lastMatte:
                self.selectMatte(matte)(None)

    def currentMatteState(self, m):
        def matteState():
            if (m != "" and self._currentMatte == m):
                return commands.CheckedMenuState
            return commands.UncheckedMenuState
        return matteState

    def selectMatte(self, matte):

        # Create a method that is specific to each matte for setting the
        # relevant session node properties to display the matte
        def select(event):
            self._currentMatte = matte
            if (matte == ""):
                commands.setIntProperty("#Session.matte.show", [0], True)
                extra_commands.displayFeedback("Disabling mattes", 2.0)
            else:
                m = self._mattes[matte]
                commands.setFloatProperty("#Session.matte.aspect",
                    [float(m["ratio"])], True)
                commands.setFloatProperty("#Session.matte.heightVisible",
                    [float(m["heightVisible"])], True)
                commands.setFloatProperty("#Session.matte.centerPoint",
                    [float(m["centerX"]), float(m["centerY"])], True)
                commands.setIntProperty("#Session.matte.show", [1], True)
                extra_commands.displayFeedback(
                    "Using '%s' matte" % matte, 2.0)
            commands.writeSettings("CUSTOM_MATTES", "customMatteName", matte)
        return select

    def selectMattesFile(self, event):
        definition = commands.openFileDialog(True, False, False, None, None)[0]
        try:
            self.updateMattesFromFile(definition)
        except KnownError,inst:
            print(str(inst))
        self.setMenuAndBindings()

    def setMenuAndBindings(self):
        
        # Walk through all of the mattes adding a menu entry as well as a
        # hotkey binding for alt + index number
        # NOTE: The bindings will only matter for the first 9 mattes since you
        # can't really press "alt-10".
        matteItems = []
        bindings = []
        if (len(self._order) > 0):
            matteItems.append(("No Matte", self.selectMatte(""), "alt `",
                self.currentMatteState("")))
            bindings.append(("key-down--alt--`", ""))

            for i,m in enumerate(self._order):
                matteItems.append((m, self.selectMatte(m),
                    "alt %d" % (i+1), self.currentMatteState(m)))
                bindings.append(("key-down--alt--%d" % (i+1), m))
        else:
            def nada():
                return commands.DisabledMenuState
            matteItems = [("RV_CUSTOM_MATTE_DEFINITIONS UNDEFINED",
                None, None, nada)]

        # Always add the option to choose a new definition file
        matteItems += [("_", None)]
        matteItems += [("Choose Definition File...", self.selectMattesFile,
            None, None)]

        # Clear the menu then add the new entries
        matteMenu = [("View", [("_", None), ("Custom Mattes", None)])]
        commands.defineModeMenu("custom-mattes-mode", matteMenu)
        matteMenu = [("View", [("_", None), ("Custom Mattes", matteItems)])]
        commands.defineModeMenu("custom-mattes-mode", matteMenu)

        # Create hotkeys for each matte
        for b in bindings:
            (event, matte) = b
            commands.bind("custom-mattes-mode", "global", event,
                self.selectMatte(matte), "")

    def updateMattesFromFile(self, filename):

        # Make sure the definition file exists
        if (not os.path.exists(filename)):
            raise KnownError("ERROR: Custom Mattes Mode: Non-existent mattes" +
                " definition file: '%s'" % filename)

        # Clear existing key bindings
        for i in range(len(self._order)):
            commands.unbind(
                "custom-mattes-mode", "global", "key-down--alt--%d" % (i+1))

        # Walk through the lines of the definition file collecting matte
        # parameters
        order = []
        mattes = {}
        for line in open(filename).readlines():
            tokens = line.strip("\n").split(",")
            if (len(tokens) == 6):
                order.append(tokens[0])
                mattes[tokens[0]] = {
                    "name" : tokens[0], "ratio" : tokens[1],
                    "heightVisible" : tokens[2], "centerX" : tokens[3],
                    "centerY" : tokens[4], "text" : tokens[5]}
        
        # Make sure we got some valid mattes
        if (len(order) == 0):
            self._order = []
            self._mattes = {}
            raise KnownError("ERROR: Custom Mattes Mode: Empty mattes" +
                " definition file: '%s'" % filename)
        
        # Save the definition path and assign the mattes
        commands.writeSettings(
            "CUSTOM_MATTES", "customMattesDefinition", filename)
        self._order = order
        self._mattes = mattes

def createMode():
    return CustomMatteMinorMode()

Chapter 12 Automated Color and Viewing Management

Color management in RV can be broken into three separate issues:
Each of the above corresponds to a set of features in RV which can be automated:
In addition to the color issues there are a few others which might need to be detected and/or corrected:
RV lets you customize all of the above for your facility and workflow by hooking into the user interface code. The most important method of doing so is using special events generated by RV internally and setting internal state at that time.

The source-group-complete Event

The source-group-complete event is generated whenever media is added to a session; this includes when a Source is created, or when the set of media held by a Source is modified. By binding a function to this event, it's possible to configure any color space or other image-dependent aspects of RV at the time the file is added. This can save a considerable amount of time and headache when a large number of people are using RV in differing circumstances.
See the sections below for information about creating a package which binds source-group-complete to do color management.
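As a minimal sketch (following the Python package structure from Chapter 10, with hypothetical mode and method names), a mode could bind the event like this:
from rv import rvtypes

class ColorSetupMode(rvtypes.MinorMode):

    def __init__(self):
        rvtypes.MinorMode.__init__(self)
        # Run our handler every time a source group is created or its media changes
        self.init("example-color-setup-mode",
                  [("source-group-complete", self.sourceSetup, "per-source color setup")],
                  None,
                  None)

    def sourceSetup(self, event):
        event.reject()   # let other bindings (including the default source_setup) see the event too
        group = event.contents().split(";;")[0]
        # ... look up the nodes in 'group' and set color properties here ...

def createMode():
    return ColorSetupMode()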

The default source-group-complete behavior

By default RV binds its own color management function, sourceSetup(), located in the source_setup.mu file. This is part of the source_setup system package introduced in version 3.10. In RV 6.0 the source_setup package is implemented in Python.
It's a good idea to override or augment this package for use in production environments. For example, you may want to have certain default color behavior for technical directors using movie files which differs from how a coordinator might view them (the coordinator may be looking at movies in sRGB space instead of with a film simulation for example).
RV's default color management tries to use good defaults for incoming file formats. Here's the complete behavior shown as a set of heuristics applied in order:
  1. If the incoming image is a TIFF file and it has no color space attribute assume it's linear
  2. If the image is JPEG or a quicktime movie file (.mov) and there is no color space attribute assume it's in sRGB space
  3. If there is an embedded ICC profile and that profile is for sRGB space use RV's internal sRGB space transform instead (because RV does not yet handle embedded ICC profiles)
  4. If the image is TIFF and it was created by ifftoany, assume the pixel aspect ratio is incorrect and fix it
  5. If the image is JPEG, has no pixel aspect ratio attribute and no density attribute and looks like it comes from Maya, fix the pixel aspect ratio
  6. Use the proper built-in conversion for the color space indicated in the color space attribute of the image
  7. Use the sRGB display transform if any color space was successfully determined for the input image(s)
From the user's point of view, the following situations will occur:
In addition, the default color management implements two varieties of user-level control, as examples of what you can do from the scripting level.
First, environment variables with a standard format can be used to control the linearization process for a given file type. An environment variable of the form “RV_OVERRIDE_TRANSFER_<type>” will set the linearization transform for the specified file type (and this will override the default rules described above). For example, if the environment variable “RV_OVERRIDE_TRANSFER_TIF” is set to “sRGB” then all files with extension “tif” or “TIF” will be linearized with the sRGB transform. If you want, you can also specify the bit depth. So you could set RV_OVERRIDE_TRANSFER_TIF_8 to sRGB and RV_OVERRIDE_TRANSFER_TIF_32 to Linear. The transform function name must be one of the following standard transforms. (The number following “Gamma” is arbitrary.)
Linear
sRGB
Cineon Log
ALEXA LogC
Viper Log
Rec709
Gamma f
Table 12.1:
Standard Linearization Transforms
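For example, in a Bourne-style shell you could linearize all TIFF files with the sRGB transform while keeping 32-bit TIFF files linear (the media name below is hypothetical):
shell> export RV_OVERRIDE_TRANSFER_TIF=sRGB
shell> export RV_OVERRIDE_TRANSFER_TIF_32=Linear
shell> rv my_plate.0001.tif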
Second, any of the above-described environment variable names and standard transform names can appear on the command line following the “-flags” option. For example:
rv test.dpx -flags "RV_OVERRIDE_TRANSFER_DPX=ALEXA LogC"

Breakdown of sourceSetup() in the source_setup Package

The source_setup system package defines the default sourceSetup() function; this is where RV's default color management comes from. (RV 6.0 uses Python to implement the source_setup package; previous versions used Mu.) The function starts by parsing the event contents (which contain the name of the file, the type of source node, and the source node name) as well as setting up the regular expressions used later in the function (note that the actual sourceSetup() function in source_setup.py may differ from what is described here, since it is constantly being refined):
        args         = event.contents().split(";;")
        group        = args[0]
        fileSource   = groupMemberOfType(group, "RVFileSource")
        imageSource  = groupMemberOfType(group, "RVImageSource")
        source       = fileSource if imageSource == None else imageSource
        linPipeNode  = groupMemberOfType(group, "RVLinearizePipelineGroup") 
        linNode      = groupMemberOfType(linPipeNode, "RVLinearize")
        lensNode     = groupMemberOfType(linPipeNode, "RVLensWarp")
        fmtNode      = groupMemberOfType(group, "RVFormat")
        tformNode    = groupMemberOfType(group, "RVTransform2D")
        lookPipeNode = groupMemberOfType(group, "RVLookPipelineGroup")
        lookNode     = groupMemberOfType(lookPipeNode, "RVLookLUT")
        typeName     = commands.nodeType(source)
        fileNames    = commands.getStringProperty("%s.media.movie" % source, 0, 1000)
        fileName     = fileNames[0]
        ext          = fileName.split('.')[-1].upper()
        igPrim       = self.checkIgnorePrimaries(ext)
        mInfo        = commands.sourceMediaInfo(source, None)
The event.contents() function returns a string which might look something like this:
sourceGroup000000;;new
The split() function is used to create a dynamic array of strings to extract the source group's name. The nodes associated with the source group are then located and the media names are taken from the source node. The source node is either an RVImageSource which stores its image data directly in the session or an RVFileSource which references media on the filesystem. Both of these node types have a media component which contains the actual media names (usually a single file in the case of an RVFileSource node).
In RV 6, pipelines were introduced. There are three pipeline group nodes in each source group node and one pipeline group in the display group. For the default source_setup, the linearize pipeline group is needed to get the default RVLinearize node it contains.
The next section of the function iterates over the image attributes and caches the ones we're interested in. The most important of these is the Colorspace attribute which is set by the file readers when the image color space is known.
        srcAttrs = commands.sourceAttributes(source, fileName)
        attrDict = dict(zip([i[0] for i in srcAttrs],[j[1] for j in srcAttrs]))
        attrMap = {
            "ColorSpace/ICC/Description" : "ICCProfileDesc",
            "ColorSpace" : "ColorSpace",
            "ColorSpace/Transfer" : "TransferFunction",
            "ColorSpace/Primaries" : "ColorSpacePrimaries",
            "DPX-0/Transfer" : "DPX0Transfer",
            "ColorSpace/Conversion" : "ConversionMatrix",
            "JPEG/PixelAspect" : "JPEGPixelAspect",
            "PixelAspectRatio" : "PixelAspectRatio",
            "JPEG/Density" : "JPEGDensity",
            "TIFF/ImageDescription" : "TIFFImageDescription",
            "DPX/Creator" : "DPXCreator",
            "EXIF/Orientation" : "EXIFOrientation",
            "EXIF/Gamma" : "EXIFGamma",
            "ARRI-Image/DataSpace" : "ARRIDataSpace"}
        for key in attrMap.keys():
            try:
                exec('%s = "%s"' % (attrMap[key],attrDict[key]))
            except KeyError:
                pass
The function sourceAttributes() returns the image attributes for a given file in a source. In this case we're passing in the source and file which caused the event. The return value of the function is a dynamic array of tuples of type (string,string), where the first element is the name of the attribute and the second is a string representation of its value. On each iteration through the loop, the next tuple is used to assign the attribute value to a variable whose name is given by the attrMap table.
The variables ICCProfileDesc, ColorSpace, TransferFunction, JPEGPixelAspect, etc., are all variables of type string which are defined earlier in the function.
Before getting to the meat of the function, there are two helper functions declared: setPixelAspect() and setFileColorSpace().
The next major section of the function matches the file name against the regular expressions that were declared at the beginning and against the values of some of the attributes that were cached.
        #
        #  Rules based on the extension
        #

        if (ext == 'DPX'):
            if (DPXCreator == "AppleComputers.libcineon" or 
                DPXCreator == "AUTODESK"):
                #
                #  Final Cut's "Color" and Maya write out bogus DPX
                #  header info with the aspect ratio fields set
                #  improperly (to 0s usually). Properly undefined DPX
                #  headers do not have the value 0.
                #

                if (int(PixelAspectRatio) == 0):
                    self.setPixelAspect(lensNode, 1.0)
            elif (DPXCreator == "Nuke" and
                    (ColorSpace == ""   or ColorSpace == "Other (0)") and
                    (DPX0Transfer == "" or DPX0Transfer == "Other (0)")):
                #
                #  Nuke produces identical (uninformative) dpx headers for
                #  both Linear and Cineon files.  But we expect Cineon to be
                #  much more common, so go with that.
                #

                TransferFunction = "Cineon Log"
        elif (ext == 'TIF' and TransferFunction == ""):
            #
            #  Assume 8bit tif files are sRGB if there's no other indication;
            #  fall back to linear.
            #

            if (mInfo['bitsPerChannel'] == 8):
                TransferFunction = "sRGB"
            else:
                TransferFunction = "Linear"
        elif (ext == "ARI" and TransferFunction == ""):
            #
            #  Assume ARRIRAW (.ari) files are ALEXA LogC if there's no other indication
            #

            TransferFunction = "ALEXA LogC"
        elif (ext in ['JPEG','JPG','MOV','AVI','MP4'] and TransferFunction == ""):
            #
            #  Assume jpeg/mov is in sRGB space if none is specified
            #
            
            TransferFunction = "sRGB"
        elif (ext in ['J2C','J2K','JPT','JP2'] and ColorSpacePrimaries == "UNSPECIFIED"):
            #
            #  If we're assuming XYZ primaries, but ignoring primaries just set
            #  transfer to sRGB.
            #

            if (igPrim):
                TransferFunction = "sRGB";

        if (igPrim):
            commands.setIntProperty(linNode + ".color.ignoreChromaticities", [1], True)

        if (ICCProfileDesc != ""):
            #
            #  Hack -- if you see sRGB in a color profile name just use the
            #  built-in sRGB conversion.
            #

            if ("sRGB" in ICCProfileDesc):
                TransferFunction = "sRGB"
            else:
                TransferFunction = ""

        if (TIFFImageDescription == "Image converted using ifftoany"):
            #
            #  Get around maya bugs
            #

            print("WARNING: Assuming %s was created by Maya with a bad pixel aspect ratio\n" % fileName)
            self.setPixelAspect(lensNode, 1.0)

        if (JPEGPixelAspect != "" and JPEGDensity != ""):
            info     = commands.sourceMediaInfo(source, fileName)
            attrPA   = float(JPEGPixelAspect)
            imagePA  = float(info['width']) / float(info['height'])
            testDiff = attrPA - 1.0 / imagePA

            if ((testDiff < 0.0001) and (testDiff > -0.0001)):
                #
                #  Maya JPEG -- fix pixel aspect
                #

                print("WARNING: Assuming %s was created by Maya with a bad pixel aspect ratio\n" % fileName)
                self.setPixelAspect(lensNode, 1.0)

        if (EXIFOrientation != ""):
            #
            #  Some of these tags are beyond the internal image
            #  orientation choices so we need to possibly rotate, etc
            #

            if not self.definedInSessionFile(tformNode):
                rprop = tformNode + ".transform.rotate"
                if (EXIFOrientation == "right - top"):
                    commands.setFloatProperty(rprop, [90.0], True)
                elif (EXIFOrientation == "right - bottom"):
                    commands.setFloatProperty(rprop, [-90.0], True)
                elif (EXIFOrientation == "left - top"):
                    commands.setFloatProperty(rprop, [90.0], True)
                elif (EXIFOrientation == "left - bottom"):
                    commands.setFloatProperty(rprop, [-90.0], True)
At this point in the function the color space of the input image will be known or assumed to be linear. Finally, we try to set the color space (which will result in the image pixels being converted to the linear working space). If this succeeds, use sRGB display as the default.
        if (not noColorChanges):
            #
            #  Assume (in the absence of info to the contrary) any 8bit file will be in sRGB space.
            #
            if (TransferFunction == "" and mInfo['bitsPerChannel'] == 8):
                TransferFunction = "sRGB"

        #
        #  Allow user to override with environment variables
        #
        TransferFunction = self.checkEnvVar(ext, mInfo['bitsPerChannel'], TransferFunction)

        if (self.setFileColorSpace(linNode, TransferFunction, ColorSpace)):

            #
            #  The default display correction is sRGB if the
            #  pixels can be converted to (or are already in)
            #  linear space
            #
            #  For gamma instead do this:
            #
            #      setFloatProperty("#RVDisplayColor.color.gamma", float[] {2.2}, true);
            #
            #  For a linear -> screen LUT do this:
            #
            #      readLUT(lutfile, "#RVDisplayColor");
            #      setIntProperty("#RVDisplayColor.lut.active", int[] {1}, true);
            #      updateLUT();
            #
            #  If this is not the first source, assume that user or source_setup
            #  has already set the desired display transform

            if len(commands.sources()) == 1:
                self.setDisplayFromProfile()

Setting up 3D and Channel LUTs

The default source-group-complete event function does not set up any non-built-in transforms. When you need to automatically apply a LUT (as a file, look, or display LUT), you need to do the following:
readLUT(file, nodeName, True)
updateLUT()
The nodeName will be “#RVDisplayColor” (to refer to it by type) for the display LUT. For a file or look LUT, you use the associated node name for the color node; in the default sourceSetup() function this would be the linNode variable. The file parameter to readLUT() will be the name of the LUT file on disk and can be any of the LUT types that RV reads.
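For example, a minimal Python sketch of applying a display LUT, assuming (as in the snippet above) that readLUT() and updateLUT() are available from the commands module:

    from rv import commands

    def applyDisplayLUT(lutFile):
        # "#RVDisplayColor" addresses the display color node by type
        commands.readLUT(lutFile, "#RVDisplayColor", True)
        commands.updateLUT()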

Setting CDL Values From File

As with using LUT files to fill in where built-in transforms do not cover your needs, you can read in CDL property values from a file. Use the following to read values from a CDL file on disk:
readCDL(file, nodeName, True)
When using readCDL the “nodeName” should be that of the targeted RVColor or RVLookLUT node to which you are applying the CDL values read from “file”. In the default RV graph you will find CDL properties to set in the RVColor and RVLookLUT nodes for each source, but there are none out-of-the-box in the display pipeline. However, you can add RVColor or RVLookLUT nodes to any pipeline you need CDL control that does not have them by default.
You can also add RVCDL nodes where you want CDL control, but these nodes do not require the use of readCDL. With RVCDL nodes you only need to set the node's node.file property and it will automatically load and parse the file from the path provided. Errors will be thrown if the file provided is invalid.
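As a sketch, pointing a hypothetical RVCDL node named "myCDL" at a CDL file on disk (the node name and path here are placeholders) is just a string property write:

    from rv import commands

    commands.setStringProperty("myCDL.node.file", ["/path/to/grade.cdl"], True)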

Building a Package For Color Management

As of RV 3.6 the recommended way to handle all event bindings is via a package. In version 3.10 the color management was made a system package. In version 3.12 the package was converted to Python. To customize color management you can either create a new package from scratch as described here, or copy, rename, and hack the existing source_setup package.
The use of source-group-complete is no different from any other event. By creating a package you can override the existing behavior or modify it. It also makes it possible to have layers of color management packages which (assuming they don't contradict each other) can collectively create a desired behavior.
from rv import rvtypes, commands, extra_commands
import os, re

class CustomColorManagementMode(rvtypes.MinorMode):

    def sourceSetup (self, event, noColorChanges=False):

        # do work on the new source here
        event.reject()

    def __init__(self): 
        rvtypes.MinorMode.__init__(self)
        self.init("Source Setup",
                  None,
                  None,
                  [ ("source-group-complete", self.sourceSetup, "Color and Geometry Management") ],
                  "source_setup",
                  20)

def createMode():
    return CustomColorManagementMode()
Note that we use the sortKey “source_setup” and the sortOrder “20”. This will ensure that our additional sourceSetup runs after the default color management.
The included optional package “ocio_source_setup” is a good example of a package that does additional source setup.

Chapter 13 Network Communication

RV can communicate with multiple external programs via its network protocol. The mechanism is designed to function like a “chat” client. Once a connection is established, messages can be sent and received including arbitrary binary data.
There are a number of applications which this enables:
Any number of network connections can be established simultaneously, so for example it's possible to have a synchronized RV session with a remote RV and drive it with an external hardware device at the same time.

Example Code

There are two working examples that come with RV: the rvshell program and the rvNetwork.py Python example.
The rvshell program uses a C++ library included with the distribution called TwkQtChat which you can use to make interfacing easier — especially if your program will use Qt. We highly recommend using this library since this is code which RV uses internally so it will always be up-to-date. The library is only dependent on the QtCore and QtNetwork modules.
The rvNetwork example implements the network protocol using only native Python code. You can use it directly in Python programs.

13.1.1 Using rvshell

To use rvshell, start RV from the command line with the network started and a default port of 45000 (to make sure it doesn't interfere with existing RV sessions):
shell> rv -network -networkPort 45000
Next start the rvshell program from a different shell:
shell> rvshell user localhost 45000
Assuming all went well, this will start rvshell connected to the running RV. There are three things you can experiment with using rvshell: a very simple controller interface, a script editor to send portions of script or messages to RV manually, and a display driver simulator that sends stereo frames to RV.
Start by loading a sequence of images or a quicktime movie into RV. In rvshell switch to the “Playback Control” tab. You should be able to play, stop, change frames and toggle full screen mode using the buttons on the interface. This example sends simple Mu commands to RV to control it. The feedback section of the interface shows the RETURN message sent back from RV. This shows whatever result was obtained from the command.
The “Raw Event” section of the interface lets you assemble event messages to send to RV manually. The default event message type is remote-eval which will cause the message data to be treated like a Mu script to execute. There is also a remote-pyeval event which does the same with Python (in which case you should type in Python code instead of Mu code). Messages sent this way to RV are translated into UI events. In order for the interface code to respond to the event something must have bound a function to the event type. By default RV can handle remote-eval and remote-pyeval events, but you can add new ones yourself.
When RV receives a remote-eval event it executes the code and looks for a return value. If a return value exists, it converts it to a string and sends it back. So using remote-eval it's possible to query RV's current state. For example if you load an image into RV and then send it the command renderedImages() it will return a Mu struct as a string with information about the rendered image. Similarly, sending a remote-pyeval with the same command will return a Python dictionary as a string with the same information.
The last tab, “Pixels”, can be used to emulate a display driver. Load a JPEG image into rvshell's viewer (don't try something over 2k; rvshell is using Qt's image reader). Set the number of tiles you want to send in X and Y, for example 10 in each. In RV, clear the session. In rvshell, hit the Send Image button. rvshell will create a new stereo image source in RV and send the image to it one tile at a time. The left eye will be the original image and the right eye will be its inverse. Try View > Stereo > Side by Side to see the results.

13.1.2 Using rvNetwork.py

document here

TwkQtChat Library

The TwkQtChat library is composed of three classes: Client, Connection, and Server.
sendMessage
Generic method to send a standard UTF-8 text message to a specific contact
sendData
Generic method to send a data message to a specific contact
broadcastMessage
Send a standard UTF-8 message to all contacts
sendEvent
Send an EVENT or RETURNEVENT message to a contact (calls sendMessage)
broadcastEvent
Send an EVENT or RETURNEVENT message to all contacts
connectTo
Initiate a connection to a specific contact
hasConnection
Query connection status to a contact
disconnectFrom
Force the shutdown of a connection
waitForMessage
Block until a message is received from a specific contact
waitForSend
Block until a message is actually sent
signOff
Send a DISCONNECT message to a contact to shutdown gracefully
online
Returns true if the Server is running and listening on the port
Table 13.1:
Important Client Member Functions
newMessage
A new message has been received on an existing connection
newData
A new data message has been received on an existing connection
newContact
A new contact (and associated connection) has been established
contactLeft
A previously established connection has been shut down
requestConnection
A remote program is requesting a connection
connectionFailed
An attempted connection failed
contactError
An error occurred on an existing connection
Table 13.2:
Client Signals
A single Client instance is required to represent your process and to manage the Connection and Server instances. The Connection and Server classes are derived from the Qt QTcpSocket and QTcpServer classes which do the lower level work. Once the Client instance exists you can get pointers to the Server and existing Connections to directly manipulate them or connect their signals to slots in other QObject derived classes if needed.
The application should start by creating a Client instance with its contact name (usually a user name), application name, and port on which to create the server. The Client class uses standard Qt signals and slots to communicate with other code. It's not necessary to inherit from it.
The most important functions on the Client class are listed in Table 13.1.

The Protocol

There are two types of messages that RV can receive and send over its network socket: a standard message and a data message. Data messages can send arbitrary binary data while standard messages are used to send UTF-8 string data.
The greeting is used only once on initial contact. The standard message is used in most cases. The data message is used primarily to send binary files or blocks of pixels to/from RV.

13.3.1 Standard Messages

RV recognizes these types of standard messages:
MESSAGE
The string payload is subdivided into multiple parts the first of which indicates the sub-type of the message. The rest of the message is interpreted according to its sub-type.
GREETING
Sent by RV to a synced RV when negotiating the initial contact.
NEWGREETING
Sent by external controlling programs to RV during initial contact.
PINGPONGCONTROL
Used to negotiate whether or not RV and the connected process should exchange PING and PONG messages on a regular basis.
PING
Query the state of the other end of the connection — i.e. check and see if the other process is still alive and functioning.
PONG
Returned when a PING message is received to indicate state.
Table 13.3:
Message Types
When an application first connects to RV over its TCP port, a greeting message is exchanged. This consists of a UTF-8 byte string composed of:
The string “NEWGREETING”
1st word
The UTF-8 value 32 (space)
-
A UTF-8 integer composed of the characters [0-9] with the value N + M + 1 indicating the number of bytes remaining in the message
2nd word
The UTF-8 value 32 (space)
-
Contact name UTF-8 string (non-whitespace)
N bytes
The UTF-8 value 32 (space)
1 byte
Application name UTF-8 string (non-whitespace)
M bytes
Table 13.4:
Greeting Message
In response, the application should receive a NEWGREETING message back. At this point the application will be connected to RV.
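A minimal sketch of framing that greeting per Table 13.4 (assuming a plain TCP socket is used to send the resulting bytes):

    def greeting(contact, app):
        # The payload is "<contact> <app>", i.e. N + 1 + M bytes remaining
        payload = ("%s %s" % (contact, app)).encode("utf-8")
        return ("NEWGREETING %d " % len(payload)).encode("utf-8") + payload

    # greeting("user", "rvshell") -> b"NEWGREETING 12 user rvshell"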
A standard message is a single UTF-8 string which has the form:
The string “MESSAGE”
1st word
The UTF-8 value 32 (space)
-
A UTF-8 integer composed of the characters [0-9] the value of which is N indicating the size of the remaining message
2nd word
The UTF-8 value 32 (space)
-
The message payload (remaining UTF-8 string)
N bytes
Table 13.5:
Standard Message
When RV receives a standard message (MESSAGE type) it will assume the payload is a UTF-8 string and try to interpret it. The first word of the string is considered the sub-message type and is used to decide how to respond:
EVENT
Send the rest of the payload as a UI event (see below)
RETURNEVENT
Same as EVENT but will result in a response RETURN message
RETURN
The message is a response to a recently received RETURNEVENT message
DISCONNECT
The connection should be disconnected
Table 13.6:
Sub-Message Types
The EVENT and RETURNEVENT messages are the most common. When RV receives an EVENT or RETURNEVENT message it will translate it into a user interface event. The additional part of the string (after EVENT or RETURNEVENT) is composed of:
EVENT or RETURNEVENT
UTF-8 string identifying the message as an EVENT or RETURNEVENT message.
space character
-
non-whitespace-event-name
The event that will be sent to the UI as a string event (e.g. remote-eval). This can be obtained from the event by calling event.name() in Mu or Python
space character
-
non-whitespace-target-name
Present for backwards compatibility only. We recommend you use a single “*” character to fill this slot.
space character
-
UTF-8 string
The string event contents. Retrievable with event.contents() in Mu or Python.
Table 13.7:
EVENT Messages
For example the full contents of an EVENT message might look like:
MESSAGE 34 EVENT my-event-name red green blue
The first word indicates a standard message. The next word (34) indicates the length of the rest of the data. EVENT is the message sub-type which further specifies that the next word (my-event-name) is the event to send to the UI with the rest of the string (red green blue) as the event contents.
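That framing can be sketched in a couple of lines (the payload string is taken verbatim from the example above):

    def frameMessage(payload):
        # Prefix the UTF-8 payload with "MESSAGE" and its byte length (Table 13.5)
        data = payload.encode("utf-8")
        return ("MESSAGE %d " % len(data)).encode("utf-8") + data

    # frameMessage("EVENT my-event-name red green blue")
    #   -> b"MESSAGE 34 EVENT my-event-name red green blue"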
If a UI function that receives the event sets the return value and the message was a RETURNEVENT, then a RETURN will be sent back. A RETURN will have a single string that is the return value. An EVENT message will not result in a RETURN message.
RETURN
UTF-8 string identifying the message as a RETURN message.
space character
-
UTF-8 string
The string event returnContents(). This is the value set by setReturnContents() on the event object in Mu or Python.
Table 13.8:
RETURN Message
Generally, when a RETURNEVENT is sent to your application, a RETURN should be sent back because the other side may be blocked waiting. It's ok to send an empty RETURN. Normally, RV will not send EVENT or RETURNEVENT messages to other non-RV applications. However, it's possible that this could happen while connected to an RV that is also engaged in a sync session with another RV.
Finally a DISCONNECT message comes with no additional data and signals that the connection should be closed.

Ping and Pong Messages

There are three lower level messages used to keep the status of the connection up to date. This scheme relies on each side of the connection returning a PONG message whenever it receives a PING message, as long as ping-pong messages are active.
Whether or not it's active is controlled by sending the PINGPONGCONTROL message: when received, if the payload is the UTF-8 value “1” then PING messages should be expected and responded to. If the value is “0” then responding to a PING message is not mandatory.
For some applications, especially those that require a lot of computation (e.g. a display driver for a renderer), it can be a good idea to shut down the ping-pong notification. When off, both sides of the connection should assume the other side is busy but not dead in the absence of network activity.
Message
Description
Full message value
PINGPONGCONTROL
A payload value of “1” indicates that PING and PONG messages should be used
PINGPONGCONTROL 1 (1 or 0)
PING
The payload is always the character “p”. Should result in a PONG response
PING 1 p
PONG
The payload is always “p”. Should be sent in response to a PING message
PONG 1 p
Table 13.9:
PING and PONG Messages
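As a sketch of the expected behavior on either end when ping-pong is active:

    def handlePing(message):
        # Per Table 13.9, a "PING 1 p" message should be answered with "PONG 1 p"
        if message == b"PING 1 p":
            return b"PONG 1 p"
        return None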

13.3.2 Data Messages

The data messages come in two types: PIXELTILE and DATAEVENT. These take the form:
PIXELTILE(parameters) -or- DATAEVENT(parameters)
1st word
space character
-
A UTF-8 integer composed of the characters [0-9] the value of which is N indicating the size of the remaining message
2nd word
space character
-
Data of size N
N bytes
Table 13.10:
PIXELTILE and DATAEVENT
The PIXELTILE message is used to send a block of pixels to or from RV. When received by RV the PIXELTILE message is translated into a pixel-block event (unless another event name is specified) which is sent to the user interface. This message takes a number of parameters, which should contain no whitespace characters and are separated by commas (“,”):
w
Width of data in pixels.
h
Height of the data in pixels. If the height of the block of pixels is 1 and the width is the width of the image, the block is equivalent to a scanline.
x
The horizontal offset of the pixel block relative to the image origin
y
The vertical offset of the pixel block relative to the image origin
f
The frame number
event-name
Alternate event name (instead of pixel-block). RV only recognizes the pixel-block event by default; you can bind to other events, however.
media
The name of the media associated with the data.
layer
The name of the layer associated with the media. This is analogous to an EXR layer
view
The name of the view associated with the media. This is analogous to an EXR view
Table 13.11:
PIXELTILE Message
For example, the PIXELTILE header to the data message might appear as:
PIXELTILE(media=out.9.exr,layer=diffuse,view=left,w=16,h=16,x=160,y=240,f=9)
This would be parsed and used to fill fields in the Event object. The data becomes available to Mu and Python functions binding to the event. By default the Event object is sent to the insertCreatePixelBlock() function which finds the image source associated with the media and inserts the data into the correct layer and view of the image. Each of the keywords in the PIXELTILE header is optional.
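A sketch of framing a PIXELTILE data message from a block of raw pixel bytes, using just the subset of parameters shown in the example header above:

    def pixelTile(pixels, media, w, h, x, y, frame):
        # Header parameters are comma separated with no whitespace (Table 13.11)
        header = "PIXELTILE(media=%s,w=%d,h=%d,x=%d,y=%d,f=%d)" % (media, w, h, x, y, frame)
        # The header is followed by the byte count and the raw data (Table 13.10)
        return ("%s %d " % (header, len(pixels))).encode("utf-8") + pixels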
The DATAEVENT message is similar to the PIXELTILE but is intended to be implemented by the user. The message header takes at least three parameters which are ordered (there are no keywords, unlike PIXELTILE). RV will use only the first three parameters:
event-name
RV will send a raw data event with this name
target
Required but not currently used
content type string
An arbitrary string indicating the type of the content. This is available to the UI from the Event.contentType() function.
Table 13.12:
DATAEVENT Message
For example, the DATAEVENT header might appear as:
DATAEVENT(my-data-event,unused,special-data)
This would be sent to the user interface as a my-data-event event with the content type “special-data”. The content type is retrievable with Event.contentType() and the data payload via the Event.dataContents() method.

Chapter 14 Webkit JavaScript Integration

RV can communicate with JavaScript running in a QWebView widget. This makes it possible to serve custom RV-aware web pages which can interact with a running RV. JavaScript running in the web page can execute arbitrary Mu script strings as well as receive events from RV.
You can experiment with this using the example webview package included with RV.
If you are not familiar with Qt's WebKit integration, the Qt documentation can be helpful.

Executing Mu or Python from JavaScript

RV exports a JavaScript object called rvsession to the Javascript runtime environment. Two of the functions in that namespace are evaluate() and pyevaluate(). By calling evaluate() or pyevaluate() or pyexec() you can execute arbitrary Mu or Python code in the running RV to control it. If the executed code returns a value, the value will be converted to a string and returned by the (py)evaluate() functions. Note that pyevaluate() triggers a python eval which takes an expression and returns a value. pyexec() on the other hand takes an arbitrary block of code and triggers a python exec call.
As an example, here is some html which demonstrates creating a link in a web page which causes RV to start playing when pressed:
<script type="text/javascript">
function play () { rvsession.evaluate("play()"); } 
</script>

<p><a href="javascript:play()">Play</a></p> 
If inlining the Mu or Python code in each callback becomes onerous, you can upload function definitions and even whole classes in a single evaluate call and then call the defined functions later. For complex applications this may be the most manageable way to handle callback evaluation.

Getting Event Call Backs in JavaScript

RV generates events which can be converted into call backs in JavaScript. This differs slightly from how events are handled in Mu and Python.
Signal
Events
eventString
Any internal RV event and events generated by the command sendInternalEvent() command in Mu or Python
eventKey
Any key- event (e.g. key-down–a)
eventPointer
Any pointer- event (e.g. pointer-1–push) or tablet event (e.g. stylus-pen–push)
eventDragDrop
Any dragdrop- event
Table 14.1:
JavaScript Signals Produced by Events
The rvsession object contains signal objects which you can connect by supplying a call back function. In addition you need to supply the name of one or more events as a regular expression which will be matched against incoming events. For example:
function callback_string (name, contents, sender)
{
    var x = name + " " + contents + " " + sender;
    rvsession.evaluate("print(\"callback_string " + x + "\\n\");");
}

rvsession.eventString.connect(callback_string);
rvsession.bindToRegex("source-group-complete");
connects the function callback_string() to the eventString signal object and binds to the source-group-complete RV event. For each event the proper signal object type must be used. For example pointer events are not handled by eventString but by the eventPointer signal. There are four signals available: eventString, eventKey, eventPointer, and eventDragDrop. See the tables below, which describe which events generate which signals and what the signal callback arguments should be.
In the above example, any time media is loaded into RV the callback_string() function will be called. Note that there is a single callback for each type of event. In particular, if you want to handle both the “source-group-complete” and the “frame-changed” events, your eventString handler must handle both (it can distinguish between them using the “name” parameter passed to the handler). To bind the handler to both events you can call “bindToRegex” multiple times, or specify both events in a regular expression:
rvsession.bindToRegex("source-group-complete|frame-changed");
The format of this regular expression is specified on the qt-project website.
Argument
Description
eventName
The name of the RV event. For example “source-group-complete”
contents
A string containing the event contents if it has any
senderName
Name of the sender if it has one
Table 14.2:
eventString Signal Arguments
Argument
Description
eventName
The name of the RV event. For example “source-group-complete”
key
An integer representing the key symbol
modifiers
An integer the low order five bits of which indicate the keyboard modifier state
Table 14.3:
eventKey Signal Arguments
Argument
Description
eventName
The name of the RV event. For example “source-group-complete”
x
The horizontal position of the mouse as an integer
y
The vertical position of the mouse as an integer
w
The width of the event domain as an integer
h
The height of the event domain as an integer
startX
The starting horizontal position of a mouse down event
startY
The starting vertical position of a mouse down event
buttonStates
An integer the lower order five bits of which indicate the mouse button states
activationTime
The relative time at which button activation occurred or 0 for regular pointer events
Table 14.4:
eventPointer Signal Arguments
Argument
Description
eventName
The name of the RV event. For example “source-group-complete”
x
The horizontal position of the mouse as an integer
y
The vertical position of the mouse as an integer
w
The width of the event domain as an integer
h
The height of the event domain as an integer
startX
The starting horizontal position of a mouse down event
startY
The starting vertical position of a mouse down event
buttonStates
An integer the lower order five bits of which indicate the mouse button states
dragDropType
A string the value of which will be one of “enter”, “leave”, “move”, or “release”
contentType
A string the value of which will be one of “file”, “url”, or “text”
stringContent
The contents of the drag and drop event as a string
Table 14.5:
eventDragDrop Signal Arguments

Using the webview Example Package

This package creates one or more docked QWebView instances, configurable from the command line as described below. JavaScript code running in the webviews can execute arbitrary Mu code in RV by calling the rvsession.evaluate() function. This package is intended as an example.
These command-line options should be passed to RV after the -flags option. The webview options below are shown with their default values, and all of them can apply to any of four webviews in the Left, Right, Top, and Bottom dock locations.
shell> rv -flags ModeManagerPreload=webview
The above forces the load of the webview package, which will display an example web page; additional arguments can be supplied to load specific web pages into additional panes. On its own, the command above just shows the sample html/javascript file that comes with the package in a webview docked on the right. To see what's happening in this example, bring up the Session Manager so you can see the Sources appearing and disappearing, or switch to the defaultLayout view. Note that you can play while reconfiguring the session with the javascript checkboxes.
The following additional arguments can be passed via the -flags mechanism. In the below, POS should be replaced by one of Left, Right, Bottom, or Top.
ModeManagerPreload=webview
Force loading of the webview package. The package is not loaded by default, but it does need to be installed. This causes RV to treat the package as if it were loaded by the user.
webviewUrlPOS=URL
A webview pane will be created at POS and the URL will be loaded into it. It can be something from a web server or a file:// URL. If you force the package to load, but do not specify any URL, you'll get a single webview in the Right dock location rendering the sample html/javascript page that ships with the package. Note that the string "EQUALS" will be replaced by an "=" character in the URL.
webviewTitlePOS=string
Set the title of the webview pane to string.
webviewShowTitlePOS=true or false
A value of true will show and false will remove the title bar from the webview pane.
webviewShowProgressPOS=true or false
Show a progress bar while loading for the web pane.
webviewSizePOS=integer
Set the width (for right and left panes) or height (for top and bottom panes) of the web pane.
An example using all of the above:
shell> rv -flags ModeManagerPreload=webview \
      webviewUrlRight=file:///foo.html \
      webviewShowTitleRight=false \
      webviewShowProgressRight=false \
      webviewSizeRight=200 \
      webviewUrlBottom=file:///bar.html \
      webviewShowTitleBottom=false \
      webviewShowProgressBottom=false \
      webviewSizeBottom=300

Chapter 15 Hierarchical Preferences

Each RV user has a Preferences file where their personal RV settings are stored. Most preferences are viewed and edited with the Preferences dialog (accessed via the RV menu), but preferences can also be programmatically read and written from custom code via the readSetting and writeSetting Mu commands. The preferences files are stored in different places on different platforms.
Platform
Location
Mac OS X
$HOME/Library/Preferences/com.tweaksoftware.RV.plist
Linux
$HOME/.config/TweakSoftware/RV.conf
Windows 7
$HOME/AppData/Roaming/TweakSoftware/RV.ini
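Reading and writing a preference from custom Python code might look like the following sketch, assuming the commands are exposed to Python as commands.readSettings() and commands.writeSettings(), each taking a group name, a setting name, and a value (the group and setting names here are placeholders):

    from rv import commands

    favoriteColor = commands.readSettings("MyPackage", "favoriteColor", "green")
    commands.writeSettings("MyPackage", "favoriteColor", "blue")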
Initial values of preferences can be overridden on a site-wide or show-wide basis by setting the environment variable RV_PREFS_OVERRIDE_PATH to point to one or more paths that contain files of the name and type listed in the above table. Each of these overriding preferences files can provide default values for one or more preferences. A value from one of these overriding files will override the user's preference only if the user's preferences file has no value for that preference yet.
In the simplest case, if you want to provide overriding initial values for all preferences, you should
  1. Delete your preferences file.
  2. Start RV, go to the Preferences dialog, and adjust any preferences you want.
  3. Close the dialog and exit RV.
  4. Copy your preferences file into the RV_PREFS_OVERRIDE_PATH.
If you want to override at several levels (say per-site and per-show), you can add preferences files to any number of directories in the override path, but you'll have to edit them so that each contains only the preferences you want to override with that file. Preferences files found in directories earlier in the path will override those found in later directories.
Note that this system only provides the ability to override initial settings for the preferences. Nothing prevents the user from changing those settings after initialization.
It's also possible to create show/site/whatever-specific preferences files that always clobber the user's personal preferences. This mechanism is exactly analogous to the above, except that the name of the environment variable that holds paths to clobbering prefs files is RV_PREFS_CLOBBER_PATH. Again, the user can freely change any “live” values managed in the Preferences dialog, but in the next run, the clobbering preferences will again take precedence. Note that a value from a clobbering file (at any level) will take precedence over a value from an overriding file (at any level).

Chapter 16 Node Reference

This chapter has a section for each type of node in RV's image processing graph. The properties and descriptions listed here are the default properties. Any top level node that can be seen in the session manager can have the “name” property of the “ui” component set in order to control how the node is listed.

RVCache

The RVCache node has no external properties.

RVCacheLUT and RVLookLUT

The RVCacheLUT is applied in software before the image is cached and before any software resolution and bit depth changes. The RVLookLUT is applied just before the display LUT but is per-source.
Property
Type
Size
Description
lut.lut
float
div 3
Contains either a 3D or a channel look LUT
lut.prelut
float
div 3
Contains a channel pre-LUT
lut.inMatrix
float
16
Input color matrix
lut.outMatrix
float
16
Output color matrix
lut.scale
float
1
LUT output scale factor
lut.offset
float
1
LUT output offset
lut.file
string
1
Path of LUT file to read when RV session is loaded
lut.size
int
1 or 3
With 1 size value, the look LUT is a channel LUT of the specified size, if there are 3 values the look LUT is a 3D LUT with the dimensions indicated
lut.active
int
1
If non-0 the LUT is active
lut:output.size
int
1 or 3
The resampled LUT output size
lut:output.lut
float or half
div 3
The resampled output LUT
lut:output.prelut
float or half
div 3
The resampled output pre-LUT

RVCDL

This node can be used to load CDL properties from CCC, CC, and CDL files on disk.
Property
Type
Size
Description
node.active
int
1
If non-0 the CDL is active. A value of 0 disables the node.
node.colorspace
string
1
Can be "rec709", "aces", or "aceslog" and the default is "rec709".
node.file
string
1
Path of CCC, CC, or CDL file from which to read properties.
node.slope
float[3]
1
Color Decision List per-channel slope control
node.offset
float[3]
1
Color Decision List per-channel offset control
node.power
float[3]
1
Color Decision List per-channel power control
node.saturation
float
1
Color Decision List saturation control
node.noClamp
int
1
Set to 1 to remove clamping from CDL equations

RVChannelMap

This node can be used to remap channels that may have been labeled incorrectly.
Property
Type
Size
Description
format.channels
string
>= 0
An array of channel names. If the property is empty the image will pass through the node unchanged. Otherwise, only those channels appearing in the property array will be output. The channel order will be the same as the order in the property.
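As a sketch, forcing a source's channels to be output as R, G, B, A in that order (addressing the node by type with the “#” syntax used elsewhere in this manual):

    from rv import commands

    commands.setStringProperty("#RVChannelMap.format.channels",
                               ["R", "G", "B", "A"], True)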

RVColor

The color node has a large number of color controls. This node is usually evaluated on the GPU, except when normalize is 1. The CDL is applied after linearization and linear color changes.
Property
Type
Size
Description
color.normalize
int
1
Non-0 means to normalize the incoming pixels to [0,1]
color.invert
int
1
If non-0, invert the image color using the inversion matrix (See User's Manual)
color.gamma
float[3]
1
Apply a gamma. The default is [1.0, 1.0, 1.0]. The three values are applied to R G and B channels independently.
color.offset
float[3]
1
Color bias added to incoming color channels. Default = 0 (no bias). Each component is applied to R G B independently.
color.scale
float[3]
1
Scales each channel by the respective float value.
color.exposure
float[3]
1
Relative exposure in stops. Default = [0, 0, 0], See user's manual for more information on this. Each component is applied to R G and B independently.
color.contrast
float[3]
1
Contrast applied per channel (see User's Manual)
color.saturation
float
1
Relative saturation (see User's Manual)
color.hue
float
1
Hue rotation in radians (see User's Manual)
color.active
int
1
If 0, do not apply any color transforms. Disables the node.
CDL.slope
float[3]
1
Color Decision List per-channel slope control
CDL.offset
float[3]
1
Color Decision List per-channel offset control
CDL.power
float[3]
1
Color Decision List per-channel power control
CDL.saturation
float
1
Color Decision List saturation control
CDL.noClamp
int
1
Set to 1 to remove clamping from CDL equations
luminanceLUT.lut
float
div 3
Luminance LUT to be applied to the incoming image. Contains R G B triples one after another; the LUT resolution is the number of triples.
luminanceLUT.max
float
1
A scale on the output of the Luminance LUT
luminanceLUT.active
int
1
If non-0, luminance LUT should be applied
luminanceLUT:output.size
int
1
Output Luminance lut size
luminanceLUT:output.lut
float
div 3
Output resampled luminance LUT

RVDispTransform2D

This node is used to do any scaling or translating of the corresponding view group.
Property
Type
Size
Description
transform.translate
float[2]
1
Viewing translation
transform.scale
float[2]
1
Viewing scale

RVDisplayColor

This node is used by default by any display group as part of its color management pipeline.
Property
Type
Size
Description
color.channelOrder
string
1
A four character string containing any of the characters [RGBA10]. The order allows permutation of the normal R G B and A channels as well as filling any channel with 1 or 0.
color.channelFlood
int
1
If 0 pass the channels through as they are. When the value is 1, 2, 3, or 4, the R G B or A channels are used to flood the R G and B channels. When the value is 5, the luminance of each pixel is computed and displayed as a gray scale image.
color.gamma
float
1
A single gamma value applied to all channels, default = 1.0
color.sRGB
int
1
If non-0 a linear to sRGB space transform occurs
color.Rec709
int
1
If non-0 the Rec709 transfer function is applied
color.brightness
float
1
In relative stops, the final pixel values are brightened or dimmed according to this value. Occurs after all color space transforms.
color.outOfRange
int
1
If non-0 pass pixels through an out of range filter. Channel values in the range (0,1] are set to 0.5, channel values in [-inf,0] are set to 0, and channel values in (1,inf] are set to 1.0.
color.active
int
1
If 0 deactivate the display node
lut.lut
float
div 3
Contains either a 3D or a channel display LUT
lut.prelut
float
div 3
Contains a channel pre-LUT
lut.scale
float
1
LUT output scale factor
lut.offset
float
1
LUT output offset
lut.inMatrix
float
16
Input color matrix
lut.outMatrix
float
16
Output color matrix
lut.file
string
1
Path of LUT file to read when RV session is loaded
lut.size
int
1 or 3
With 1 size value, the display LUT is a channel LUT of the specified size, if there are 3 values the display LUT is a 3D LUT with the dimensions indicated
lut.active
int
1
If non-0 the display LUT is active
lut:output.size
int
1 or 3
The resampled LUT output size
lut:output.lut
float or half
div 3
The resampled output LUT
lut:output.prelut
float or half
div 3
The resampled output pre-LUT

RVDisplayGroup and RVOutputGroup

The display group provides per device display conditioning. The output group is the analogous node group for RVIO. The display groups are never saved in the session, but there is only one output group and it is saved for RVIO. There are no user external properties at this time.

RVDisplayStereo

This node governs how to handle stereo playback including controlling the placement of stereo sources.
Property
Type
Size
Description
rightTransform.flip
int
1
Flip the right eye top to bottom.
rightTransform.flop
int
1
Flop the right eye left to right.
rightTransform.rotate
float
1
Rotation of right eye in degrees.
rightTransform.translate
float[2]
1
Translation offset in X and Y for the right eye.
stereo.relativeOffset
float
1
Relative stereo offset for both eyes.
stereo.rightOffset
float
1
Stereo offset for right eye only.
stereo.swap
int
1
If set to 1 treat left eye as right and right eye as left.
stereo.type
string
1
Stereo mode in use. For example: left, right, pair, mirror, scanline, anaglyph, checker... (default is off)

RVFileSource

The source node controls file I/O and organizes the source media into layers (in the RV sense). It has basic controls needed to mix the layers together.
Name
Type
Size
Description
media.movie
string
> 1
The movie, image, audio files and image sequence names. Each name is a layer in the source. There is typically at least one value in this property.
group.fps
float
1
Overrides the fps found in any movie or image file or if none is found overrides the default fps of 24.
group.volume
float
1
Relative volume. This can be any positive number or 0.
group.audioOffset
float
1
Audio offset in seconds. All audio layers will be offset.
group.rangeOffset
int
1
Shifts the start and end frame numbers of all image media in the source.
group.rangeStart
int
1
Resets the start frame of all image media to given value. This is an optional property. It must be created to be set and removed to unset.
group.balance
float
1
Range of [-1,1]. A value of 0 means the audio volume is the same for both the left and right channels.
group.noMovieAudio
int
1
Do not use audio tracks in movie files
cut.in
int
1
The preferred start frame of the sequence/movie file
cut.out
int
1
The preferred end frame of the sequence/movie file
request.readAllChannels
int
1
If the value is 1 and the image format can read multiple channels, it is requested to read all channels in the current image layer and view.
request.imageComponent
string
2, 3, or 4
This array is of the form: type, view, [layer[, channel]]. The type describes what is defined in the remainder of the array. The type may be one of ”view”, ”layer”, or ”channel”. The 2nd element of the array must be defined and is the value of the view. If there are 3 elements defined then the 3rd is the layer name. If there are 4 elements defined then the 4th is the channel name.
request.stereoViews
string
0 or 2
If there are values in this property, they will be passed to the image reader when in stereo viewing mode as requested view names for the left and right eyes.
attributes.key
string, int, or float
1
This optional container of properties will get automatically included in the metadata associated with the source. The key can be any string and will be displayed as the metadata item name when displayed in the Image Info. The value of the property will be displayed as the value of the metadata.

RVFolderGroup

The folder group contains either a SwitchGroup or LayoutGroup which determines how it is displayed.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.
mode.viewType
string
1
Either “switch” or “layout”. Determines how the folder is displayed.

RVFormat

This node is used to alter geometry or color depth of an image source. It is part of an RVSourceGroup.
Property
Type
Size
Description
geometry.xfit
int
1
Forces the resolution to a specific width
geometry.yfit
int
1
Forces the resolution to a specific height
geometry.xresize
int
1
Forces the resolution to a specific width
geometry.yresize
int
1
Forces the resolution to a specific height
geometry.scale
float
1
Multiplier on incoming resolution. E.g., 0.5 when applied to 2048x1556 results in a 1024x778 image.
geometry.resampleMethod
string
1
Method to use when resampling. The possible values are area, cubic, and linear.
crop.active
int
1
If non-0 cropping is active
crop.xmin
int
1
Minimum X value of crop in pixel space
crop.ymin
int
1
Minimum Y value of crop in pixel space
crop.xmax
int
1
Maximum X value of crop in pixel space
crop.ymax
int
1
Maximum Y value of crop in pixel space
uncrop.active
int
1
If non-0 the uncrop region is used
uncrop.x
int
1
X offset of input image into uncropped image space
uncrop.y
int
1
Y offset of input image into uncropped image space
uncrop.width
int
1
Width of uncropped image space
uncrop.height
int
1
Height of uncropped image space
color.maxBitDepth
int
1
One of 8, 16, or 32 indicating the maximum allowed bit depth (for either float or integer pixels)
color.allowFloatingPoint
int
1
If non-0 floating point images will be allowed on the GPU otherwise, the image will be converted to integer of the same bit depth (or the maximum bit depth).

RVImageSource

The RV image source is a subset of what RV can handle from an external file (basically just EXR). Image sources can have multiple views each of which have multiple layers. However, all views must have the same layers. Image sources cannot have layers within layers, orphaned channels, empty views, missing views, or other weirdnesses that EXR can have.
Name
Type
Size
Description
media.movie
string
> 1
The movie, image, audio files and image sequence names. Each name is a layer in the source. There is typically at least one value in this property.
media.name
string
1
The name for this image.
cut.in
int
1
The preferred start frame of the sequence/movie file.
cut.out
int
1
The preferred end frame of the sequence/movie file.
image.channels
string
1
String representing the channels in the image.
image.layers
string
> 1
List of strings representing the layers in the image.
image.views
string
> 1
List of strings representing the views in the image.
image.defaultLayer
string
1
String representing the layer from image.layers that should be treated as default layer.
image.defaultView
string
1
String representing the view from image.views that should be treated as default view.
image.start
int
1
First frame of the source.
image.end
int
1
Last frame of the source.
image.inc
int
1
Number of frames to step by.
image.fps
float
1
Frame rate of source in float ratio of frames per second.
image.pixelAspect
float
1
Image aspect ratio as a float of width over height.
image.uncropHeight
int
1
Height of uncropped image space.
image.uncropWidth
int
1
Width of uncropped image space.
image.uncropX
int
1
X offset of image into uncropped image space.
image.uncropY
int
1
Y offset of image into uncropped image space.
image.width
int
1
Image width in integer pixels.
image.height
int
1
Image height in integer pixels.
request.imageChannelSelection
string
Any
Any values are considered image channel names. These are passed to the image readers with the request that only these channels be read from the image pixels.
request.imageComponent
string
2, 3, or 4
This array is of the form: type, view, [layer[, channel]]. The type describes what is defined in the remainder of the array. The type may be one of ”view”, ”layer”, or ”channel”. The 2nd element of the array must be defined and is the value of the view. If there are 3 elements defined then the 3rd is the layer name. If there are 4 elements defined then the 4th is the channel name.
request.stereoViews
string
0 or 2
If there are values in this property, they will be passed to the image reader when in stereo viewing mode as requested view names for the left and right eyes.
attributes.key
string, int, or float
1
This optional container of properties will get automatically included in the metadata associated with the source. The key can be any string and will be displayed as the metadata item name when displayed in the Image Info. The value of the property will be displayed as the value of the metadata.

RVLayoutGroup

The layout group arranges its inputs for display according to its layout mode (packed, row, column, or grid).
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.
layout.mode
string
1
The string mode that dictates the way items are laid out. Possible values are: packed, packed2, row, column, and grid (default is packed).
layout.spacing
float
1
Scale the items in the layout. Legal values are between 0.0 and 1.0.
layout.gridColumns
int
1
When in grid mode constrain grid to this many columns. If this is set to 0, then the number of columns will be determined by gridRows. If both are 0, then both will be automatically calculated.
layout.gridRows
int
1
When in grid mode constrain grid to this many rows. If this is set to 0, then the number of rows will be determined by gridColumns. This value is ignored when gridColumns is non-zero.
timing.retimeInputs
int
1
Retime all inputs to the output fps if 1 otherwise play back their frames one at a time at the output fps.

RVLensWarp

This node handles the pixel aspect ratio of a source group. The lens warp node can also be used to perform radial and/or tangential distortion on a frame. It implements Brown's distortion model (similar to that adopted by OpenCV or the Adobe Lens Camera Profile model) and 3DE4's Anamorphic Degree 6 model. This node can be used to perform operations like lens distortion or artistic lens warp effects.
Name
Type
Size
Description
warp.pixelAspectRatio
float
1
If non-0 set the pixel aspect ratio. Otherwise use the pixel aspect ratio reported by the incoming image. (default 0, ignored)
warp.model
string
1
Lens model: choices are “brown”, “opencv”, “pfbarrel”, “adobe”, “3de4_anamorphic_degree_6”, “rv4.0.10”.
warp.k1
float
1
Radial coefficient for r^2 (default 0.0)
Applicable to “brown”, “opencv”, “pfbarrel”, “adobe”.
warp.k2
float
1
Radial coefficient for r^4 (default 0.0)
Applicable to “brown”, “opencv”, “pfbarrel”, “adobe”.
warp.k3
float
1
Radial coefficient for r^6 (default 0.0)
Applicable to “brown”, “opencv”, “adobe”.
warp.p1
float
1
First tangential coefficient (default 0.0)
Applicable to “brown”, “opencv”, “adobe”.
warp.p2
float
1
Second tangential coefficient (default 0.0)
Applicable to “brown”, “opencv”, “adobe”.
warp.cx02
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy02
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx22
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy22
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx04
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy04
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx24
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy24
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx44
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy44
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx06
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy06
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx26
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy26
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx46
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy46
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cx66
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.cy66
float
1
Applicable to “3de4_anamorphic_degree_6”. (default 0.0)
warp.center
float[2]
1
Position of distortion center in normalized values [0...1] (default [0.5 0.5]). Applicable to all models.
warp.offset
float[2]
1
Offset from distortion center in normalized values [0...1.0] (default [0.0 0.0]). Applicable to all models.
warp.fx
float
1
Normalized FocalLength in X (default 1.0).
Applicable to “brown”, “opencv”, “adobe”, “3de4_anamorphic_degree_6”.
warp.fy
float
1
Normalized FocalLength in Y (default 1.0).
Applicable to “brown”, “opencv”, “adobe”, “3de4_anamorphic_degree_6”.
warp.cropRatioX
float
1
Crop ratio of fovX (default 1.0). Applicable to all models.
warp.cropRatioY
float
1
Crop ratio of fovY (default 1.0). Applicable to all models.
node.active
int
1
If 0, do not apply any warp/pixel aspect ratio transform. Disables the node. (default 1)
Example use case: Using OpenCV to determine lens distortion parameters for the RVLensWarp node based on GoPro footage. First capture some footage of a checkerboard with your GoPro. Then you can use the OpenCV camera calibration approach on this footage to solve for k1, k2, k3, p1 and p2. In OpenCV these numbers are reported back as follows. For example our 1920x1440 Hero3 Black GoPro solve returned:
    fx=829.122253 0.000000 cx=969.551819
    0.000000 fy=829.122253 cy=687.480774
    0.000000 0.000000 1.000000
    k1=-0.198361 k2=0.028252 p1=0.000092 p2=-0.000073
The OpenCV camera calibration solve output numbers are then translated/normalized to the RVLensWarp node property values as follows:
    warp.model = "opencv"
    warp.k1 = k1
    warp.k2 = k2
    warp.p1 = p1
    warp.p2 = p2
    warp.center = [cx/1920 cy/1440]
    warp.fx = fx/1920
    warp.fy = fy/1920
e.g. mu code:
    set("#RVLensWarp.warp.model", "opencv");
    set("#RVLensWarp.warp.k1", -0.198361);
    set("#RVLensWarp.warp.k2", 0.028252);
    set("#RVLensWarp.warp.p1", 0.00092);
    set("#RVLensWarp.warp.p2", -0.00073);
    setFloatProperty("#RVLensWarp.warp.offset", float[]{0.505, 0.4774}, true);
    set("#RVLensWarp.warp.fx", 0.43185);
    set("#RVLensWarp.warp.fy", 0.43185);
Example use case: Using Adobe LCP (Lens Camera Profile) distortion parameters for the RVLensWarp node. Adobe LCP files can be located in '/Library/Application Support/Adobe/CameraRaw/LensProfiles/1.0' under OSX. Adobe LCP file parameters map to the RVLensWarp node properties as follows:
    warp.model = "adobe"
    warp.k1 = stCamera:RadialDistortParam1
    warp.k2 = stCamera:RadialDistortParam2
    warp.k3 = stCamera:RadialDistortParam3
    warp.p1 = stCamera:TangentialDistortParam1
    warp.p2 = stCamera:TangentialDistortParam2
    warp.center = [stCamera:ImageXCenter stCamera:ImageYCenter]
    warp.fx = stCamera:FocalLengthX
    warp.fy = stCamera:FocalLengthY

RVLinearize

The linearize node has a large number of color controls. The CDL is applied before linearization occurs.
Property
Type
Size
Description
color.alphaType
int
1
By default (0), uses the alpha type reported by the incoming image. Otherwise, 1 means the alpha is premultiplied, 0 means the incoming alpha is unpremultiplied.
color.YUV
int
1
If the value is non-0, convert the incoming pixels from YUV space to linear space.
color.logtype
int
1
The default (0) means no log-to-linear transform, 1 uses the Cineon transform (see cineon.whiteCodeValue and cineon.blackCodeValue below), 2 uses the Viper camera log-to-linear transform, and 3 uses the LogC log-to-linear transform.
color.sRGB2linear
int
1
If the value is non-0, convert the incoming pixels from sRGB space to linear space.
color.Rec709ToLinear
int
1
If the value is non-0, convert the incoming pixels using the inverse of the Rec709 transfer function.
color.fileGamma
float
1
Apply a gamma to linearize the incoming image. The default is 1.0.
color.active
int
1
If 0, do not apply any color transforms. Disables the node.
color.ignoreChromaticities
int
1
If non-0, ignore any non-Rec 709 chromaticities reported by the incoming image.
CDL.slope
float[3]
1
Color Decision List per-channel slope control.
CDL.offset
float[3]
1
Color Decision List per-channel offset control.
CDL.power
float[3]
1
Color Decision List per-channel power control.
CDL.saturation
float
1
Color Decision List saturation control.
CDL.noClamp
int
1
Set to 1 to remove clamping from CDL equations.
CDL.active
int
1
If non-0 the CDL is active.
lut.lut
float
div 3
Contains either a 3D or a channel file LUT.
lut.prelut
float
div 3
Contains a channel pre-LUT.
lut.inMatrix
float
16
Input color matrix.
lut.outMatrix
float
16
Output color matrix.
lut.scale
float
1
LUT output scale factor.
lut.offset
float
1
LUT output offset.
lut.file
string
1
Path of LUT file to read when RV session is loaded.
lut.size
int
1 or 3
With 1 size value, the file LUT is a channel LUT of the specified size; if there are 3 values, the file LUT is a 3D LUT with the indicated dimensions.
lut.active
int
1
If non-0 the file LUT is active.
lut:output.size
int
1 or 3
The resampled LUT output size.
lut:output.lut
float or half
div 3
The resampled output LUT.
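For example, a Cineon log-to-linear conversion and a simple CDL can be enabled on the current source from Python. This is a minimal sketch using the property names from the table above; the CDL values are purely illustrative:
    from rv import commands

    # Use the Cineon log-to-linear transform (color.logtype == 1).
    commands.setIntProperty("#RVLinearize.color.logtype", [1], True)

    # Apply an illustrative CDL before linearization.
    commands.setFloatProperty("#RVLinearize.CDL.slope", [1.1, 1.0, 0.9], True)
    commands.setFloatProperty("#RVLinearize.CDL.offset", [0.0, 0.0, 0.0], True)
    commands.setFloatProperty("#RVLinearize.CDL.power", [1.0, 1.0, 1.0], True)
    commands.setFloatProperty("#RVLinearize.CDL.saturation", [1.0], True)
    commands.setIntProperty("#RVLinearize.CDL.active", [1], True)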

OCIO (OpenColorIO), OCIOFile, OCIOLook, and OCIODisplay

OpenColorIO nodes can be used in place of existing RV LUT pipelines. Properties in RVColorPipelineGroup, RVLinearizePipelineGroup, RVLookPipelineGroup, and RVDisplayPipelineGroup determine whether or not the OCIO nodes are used. All OCIO nodes have the same properties and function, but their location in the color pipeline is determined by their type. The exception is the generic OCIO node which can be created by the user and used in any context.
NOTE: THIS IS INCOMPLETE – SEE ACCOMPANYING OCIO INTEGRATION DOCUMENT
Property
Type
Size
Description
ocio.lut
float
div 3
Contains a 3D LUT, size determined by ocio.lut3DSize
lut.prelut
float
div 3
Currently unused
ocio.active
int
1
Non-0 means node is active
ocio.lut3DSize
int
1
3D LUT size of all dimensions (default is 32)
ocio.inSpace
string
1
Name of OCIO input colorspace
ocio_context.name
string
1
Name/Value pairs for OCIO context

RVOverlay

Overlay nodes can be used with any source. They can be used to draw arbitrary rectangles and text over the source but beneath any annotations. Overlay nodes can hold any number of components of three types: rect components describe a rectangle to be rendered, text components describe a string (or an array of strings, one per frame) to be rendered, and window components describe a matted region to be indicated either by coloring the region outside the window or by outlining it. The coordinates of the corners of the window may be animated by specifying one number per frame.
In the table below, the “id” in the component name can be any string, but it must be different for each component of the same type.
Property
Type
Size
Description
overlay.nextRectId
int
1
(unused)
overlay.nextTextId
int
1
(unused)
overlay.show
int
1
If 1, display any rectangle/text/window entries; if 0, do not.
matte.show
int
1
If 1, display the source-specific matte instead of the global matte
matte.aspect
float
1
Aspect ratio of the source's matte
matte.opacity
float
1
Opacity of the source's matte
matte.heightVisible
float
1
Fraction of the source height that is still visible from the matte.
matte.centerPoint
float[2]
1
The center of the matte stored as X, Y in normalized coordinates.
rect:id.color
float[4]
1
The color of the rectangle
rect:id.width
float
1
The width of the rectangle in the normalized coordinate system
rect:id.height
float
1
The height of the rectangle in the normalized coordinate system
rect:id.position
float[2]
1
Location of the rectangle in the normalized coordinate system
rect:id.active
int
1
If 0, rect will not be rendered
rect:id.eye
int
1
If absent, or set to 2, the rectangle will be rendered in both stereo eyes. If set to 0 or 1, only in the corresponding eye.
text:id.pixelScale
float[2]
1
X and Y scaling factors for position (i.e., the expected source resolution). If present and non-zero, position is expected in “pixels”.
text:id.position
float[2]
1
Location of the text (coordinate are normalized unless pixelScale is set, in which case they are in “pixels”)
text:id.color
float[4]
1
The color of the text
text:id.spacing
float
1
The spacing of the text
text:id.size
float
1
The size of the text
text:id.scale
float
1
The scale of the text
text:id.rotation
float
1
(unused)
text:id.font
string
1
The path to the .ttf (TrueType) font to use (Default is Luxi Serif)
text:id.text
string
N
Text to be rendered. If multi-valued, there should be one string per frame in the expected range.
text:id.origin
string
1
The origin of the text box. The position property will store the location of the origin, but the origin can be on any corner of the text box or centered in between. The valid possible values for origin are top-left, top-center, top-right, center-left, center-center, center-right, bottom-left, bottom-center, bottom-right, and the empty string (which is the default for backwards compatibility).
text:id.eye
int
1
If absent, or set to 2, the text will be rendered in both stereo eyes. If set to 0 or 1, only in the corresponding eye.
text:id.active
int
1
If active is 0, the text item will not be rendered
text:id.firstFrame
int
1
If the “text” property is multi-valued, this property indicates the frame number corresponding to the first text value.
text:id.debug
int
1
(unused)
window:id.eye
int
1
If absent, or set to 2, the window will be rendered in both stereo eyes. If set to 0 or 1, only in the corresponding eye.
window:id.antialias
int
1
If 1, outline/window edge drawing will be antialiased. Default 0.
window:id.windowActive
int
1
If windowActive is 0, the window “matting” will not be rendered
window:id.outlineActive
int
1
If outlineActive is 0, the window outline will not be rendered
window:id.outlineWidth
float
1
Assuming antialias = 1, nominal width in image-space pixels of the outline (and the degree of blurriness of the matte edge). Default 3.0
window:id.outlineBrush
string
1
Assuming antialias = 1, brush used to stroke the outline (choices are "gauss" or "solid"). Default is gauss.
window:id.windowColor
float[4]
1
The color of the window “matting”.
window:id.outlineColor
float[4]
1
The color of the window outline.
window:id.imageAspect
float
1
The expected imageAspect of the media. If imageAspect is present and non-zero, normalized window coordinates are expected.
window:id.pixelScale
float[2]
1
X and Y scaling factors for window coordinates (i.e., the expected source resolution), used to normalize window coordinates given in “pixels”. For pixelScale to take effect, imageAspect must be missing or 0.
window:id.firstFrame
int
1
If any of the window coord properties is multi-valued, this property indicates the frame number corresponding to the first coord value.
window:id.windowULx
float
N
Upper left window corner (x coord).
window:id.windowULy
float
N
Upper left window corner (y coord).
window:id.windowLLx
float
N
Lower left window corner (x coord).
window:id.windowLLy
float
N
Lower left window corner (y coord).
window:id.windowURx
float
N
Upper right window corner (x coord).
window:id.windowURy
float
N
Upper right window corner (y coord).
window:id.windowLRx
float
N
Lower right window corner (x coord).
window:id.windowLRy
float
N
Lower right window corner (y coord).
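As a sketch of how these properties are used in practice, the following Python creates a simple text component on a source's RVOverlay node. The component id “demo” and all values are illustrative; newProperty is used because component properties do not exist until they are created, and the sketch assumes at least one source (and therefore an RVOverlay node) exists:
    from rv import commands

    # Each source group contains an RVOverlay node; take the first one found.
    node = commands.nodesOfType("RVOverlay")[0]

    def ensure(prop, ptype, width, values, setter):
        # Create the component property if needed, then fill it.
        name = "%s.%s" % (node, prop)
        if not commands.propertyExists(name):
            commands.newProperty(name, ptype, width)
        setter(name, values, True)

    ensure("overlay.show", commands.IntType, 1, [1], commands.setIntProperty)
    ensure("text:demo.text", commands.StringType, 1, ["Hello RV"], commands.setStringProperty)
    ensure("text:demo.position", commands.FloatType, 2, [-0.4, 0.4], commands.setFloatProperty)
    ensure("text:demo.color", commands.FloatType, 4, [1.0, 1.0, 0.0, 1.0], commands.setFloatProperty)
    ensure("text:demo.size", commands.FloatType, 1, [0.01], commands.setFloatProperty)
    ensure("text:demo.active", commands.IntType, 1, [1], commands.setIntProperty)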

RVPaint

Paint nodes are used primarily to store per-frame annotations. In the table below, id is the value of nextId at the time the paint command property was created, frame is the frame on which the annotation will appear, and user is the username of the user who created the property.
Property
Type
Size
Description
paint.nextId
int
1
A counter used by the annotation mode to uniquely tag annotation pen strokes and text.
paint.nextAnnotationId
int
1
(unused)
paint.show
int
1
If 1, display any paint strokes and text entries; if 0, do not.
paint.exclude
string
N
(unused)
paint.include
string
N
(unused)
pen:id:frame:user.color
float[4]
1
The color of the pen stroke
pen:id:frame:user.width
float
1
The width of the pen stroke
pen:id:frame:user.brush
string
1
Brush style of “gauss” or “circle” for soft or hard lines respectively
pen:id:frame:user.points
float[2]
N
Points of the stroke in the normalized coordinate system
pen:id:frame:user.debug
int
1
If 1 show multicolored bounding lines around the stroke.
pen:id:frame:user.join
int
1
The joining style of the stroke:
NoJoin = 0; BevelJoin = 1; MiterJoin = 2; RoundJoin = 3;
pen:id:frame:user.cap
int
1
The cap style of the stroke:
NoCap = 0; SquareCap = 1; RoundCap = 2;
pen:id:frame:user.splat
int
1
pen:id:frame:user.mode
int
1
Drawing mode of the stroke (Default if missing is 0):
RenderOverMode = 0; RenderEraseMode = 1;
text:id:frame:user.position
float[2]
1
Location of the text in the normalized coordinate system
text:id:frame:user.color
float[4]
1
The color of the text
text:id:frame:user.spacing
float
1
The spacing of the text
text:id:frame:user.size
float
1
The size of the text
text:id:frame:user.scale
float
1
The scale of the text
text:id:frame:user.rotation
float
1
(unused)
text:id:frame:user.font
string
1
The path to the .ttf (TrueType) font to use (Default is Luxi Serif)
text:id:frame:user.text
string
1
Content of the text
text:id:frame:user.origin
string
1
The origin of the text box. The position property will store the location of the origin, but the origin can be on any corner of the text box or centered in between. The valid possible values for origin are top-left, top-center, top-right, center-left, center-center, center-right, bottom-left, bottom-center, bottom-right, and the empty string (which is the default for backwards compatibility).
text:id:frame:user.debug
int
1
(unused)
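For example, to hide or show all existing annotations from Python you can toggle paint.show on every RVPaint node in the session (a minimal sketch):
    from rv import commands

    def show_annotations(visible):
        flag = 1 if visible else 0
        for node in commands.nodesOfType("RVPaint"):
            commands.setIntProperty("%s.paint.show" % node, [flag], True)

    show_annotations(False)  # hide all annotations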

RVPrimaryConvert

The primary convert node can be used to perform primary colorspace conversion with illuminant adaptation on a frame that has been linearized. The input and output colorspace primaries are specified in terms of input and output chromaticities for red, green, blue and white points. Illuminant adaptation is implemented using the Bradford transform where the input and output illuminant are specified in terms of their white points. Illuminant adaptation is optional. Default values are set for D65 Rec709.
Property
Type
Size
Description
node.active
int
1
If non-zero node is active. (default 0)
illuminantAdaptation.useBradfordTransform
int
1
If non-zero illuminant adaptation is enabled using Bradford transform. (default 1)
illuminantAdaptation.inIlluminantWhite
float[2]
1
Input illuminant white point. (default [0.3127 0.3290])
illuminantAdaptation.outIlluminantWhite
float[2]
1
Output illuminant white point. (default [0.3127 0.3290])
inChromaticities.red
float[2]
1
Input chromaticities red point. (default [0.6400 0.3300])
inChromaticities.green
float[2]
1
Input chromaticities green point. (default [0.3000 0.6000])
inChromaticities.blue
float[2]
1
Input chromaticities blue point. (default [0.1500 0.0600])
inChromaticities.white
float[2]
1
Input chromaticities white point. (default [0.3127 0.3290])
outChromaticities.red
float[2]
1
Output chromaticities red point. (default [0.6400 0.3300])
outChromaticities.green
float[2]
1
Output chromaticities green point. (default [0.3000 0.6000])
outChromaticities.blue
float[2]
1
Output chromaticities blue point. (default [0.1500 0.0600])
outChromaticities.white
float[2]
1
Output chromaticities white point. (default [0.3127 0.3290])
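A minimal Python sketch that activates the node and sets the input primaries to (assumed) P3 D65 values, leaving the output at its Rec 709 defaults; it assumes an RVPrimaryConvert node is present and being viewed (for example, added through the Session Manager or a pipeline group), and the chromaticity values are given for illustration only:
    from rv import commands

    # Assumes an RVPrimaryConvert node resolves at the current frame.
    commands.setIntProperty("#RVPrimaryConvert.node.active", [1], True)
    commands.setFloatProperty("#RVPrimaryConvert.inChromaticities.red", [0.680, 0.320], True)
    commands.setFloatProperty("#RVPrimaryConvert.inChromaticities.green", [0.265, 0.690], True)
    commands.setFloatProperty("#RVPrimaryConvert.inChromaticities.blue", [0.150, 0.060], True)
    commands.setFloatProperty("#RVPrimaryConvert.inChromaticities.white", [0.3127, 0.3290], True)
    # outChromaticities keep their Rec 709 defaults.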

PipelineGroup, RVDisplayPipelineGroup, RVColorPipelineGroup, RVLinearizePipelineGroup, RVLookPipelineGroup and RVViewPipelineGroup

The PipelineGroup node and the RV-specific pipeline nodes are group nodes that manage a pipeline of single-input nodes. There is a single property on the node which determines the structure of the pipeline. The only difference between the various pipeline node types is the default value of that property.
Property
Type
Size
Description
pipeline.nodes
string
1 or more
The type names of the nodes in the managed pipeline from input to output order.
Node Type
Default Pipeline
PipelineGroup
No Default Pipeline
RVLinearizePipelineGroup
RVLinearize
RVColorPipelineGroup
RVColor
RVLookPipelineGroup
RVLookLUT
RVViewPipelineGroup
No Default Pipeline
RVDisplayPipelineGroup
RVDisplayColor
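The pipeline structure can be inspected and changed from Python by rewriting pipeline.nodes. The sketch below swaps the display pipeline to an OCIODisplay node; this is shown only as an illustration and assumes an OpenColorIO configuration is actually in use:
    from rv import commands

    for group in commands.nodesOfType("RVDisplayPipelineGroup"):
        prop = group + ".pipeline.nodes"
        print(commands.getStringProperty(prop))              # e.g. ['RVDisplayColor']
        commands.setStringProperty(prop, ["OCIODisplay"], True)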

RVRetime

Retime nodes appear in many of the group nodes to handle any time changes necessary to match playback between sources and views with different native frame rates. You can also use them for “artistic retiming” of two varieties.
The properties in the “warp” component (see below) implement a key-framed “speed warping” variety of retiming, where the keys describe the speed (as a multiplicative factor of the target frame rate, so 1.0 implies no difference, 0.5 implies half speed, and 2.0 implies double speed) at a given input frame. Alternatively, you can provide an explicit map of output frames from input frames with the properties in the “explicit” component (see below, and the sketch after the table). Note that warping will still make use of what it can of the “standard” retiming properties (in particular the output fps and the visual scale), but if you use explicit retiming, none of the standard properties will have any effect. The “precedence” of the retiming types depends on the active flags: if “explicit.active” is non-zero, the other properties have no effect; otherwise, warping is active if “warp.active” is non-zero. Please note that neither speed warping nor explicit mapping retimes the input audio.
Property
Type
Size
Description
visual.scale
float
1
A scale greater than 1.0 extends the length; a scale less than 1.0 shortens it.
visual.offset
float
1
Number of frames to shift output.
audio.scale
float
1
A scale greater than 1.0 extends the length; a scale less than 1.0 shortens it.
audio.offset
float
1
Number of seconds to shift output.
output.fps
float
1
Output frame rate in frames per second.
warp.active
int
1
1 if warping should be active.
warp.keyFrames
int
N
Input frame numbers at which target speed should change.
warp.keyRates
float
N
Target speed multipliers for each input frame number above (1.0 means no speed change).
explicit.active
int
1
1 if an explicit mapping is provided and should be used.
explicit.firstOutputFrame
int
1
The output frame range provided by the Retime node will start with this frame. The last frame provided will be determined by the length of the array in the “inputFrames” property.
explicit.inputFrames
int
N
Each element in this array corresponds to an output frame, and the value of each element is the input frame number that will be used to provide the corresponding output frame.
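A minimal Python sketch of explicit retiming that plays every input frame twice (half speed) over an illustrative input range of frames 1-50; allowResize lets the inputFrames array grow, and if the explicit properties are missing in your version they must first be created with newProperty:
    from rv import commands

    # Take the first retime node found; in practice you would address a
    # specific retime node by name.
    node = commands.nodesOfType("RVRetime")[0]

    frames = []
    for f in range(1, 51):      # input frames 1..50 (illustrative range)
        frames += [f, f]        # each input frame is used for two output frames

    commands.setIntProperty(node + ".explicit.firstOutputFrame", [1], True)
    commands.setIntProperty(node + ".explicit.inputFrames", frames, True)
    commands.setIntProperty(node + ".explicit.active", [1], True)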

RVRetimeGroup

The RetimeGroup is mostly just a holder for a Retime node. It has a single property.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.

RVSequence

Information about how to create a working EDL can be found in the User's Manual. All of the properties in the edl component should be the same size.
Property
Type
Size
Description
edl.frame
int
N
The global frame number which starts each cut
edl.source
int
N
The source input number of each cut
edl.in
int
N
The source relative in frame for each cut
edl.out
int
N
The source relative out frame for each cut
output.fps
float
1
Output FPS for the sequence. Input nodes may be retimed to this FPS.
output.size
int[2]
1
The virtual output size of the sequence. This may not match the input sizes.
output.interactiveSize
int
1
If 1 then adjust the virtual output size automatically to the window size for framing.
output.autoSize
int
1
Figure out a good size automatically from the input sizes if 1. Otherwise use output.size.
mode.useCutInfo
int
1
Use cut information on the inputs to determine EDL timing.
mode.autoEDL
int
1
If non-0, automatically concatenate new sources to the existing EDL, otherwise do not modify the EDL
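A minimal Python sketch that inspects the current EDL of a sequence node; reading is shown rather than writing, since the exact conventions for authoring an EDL are covered in the User's Manual:
    from rv import commands

    seq = commands.nodesOfType("RVSequence")[0]

    frames  = commands.getIntProperty(seq + ".edl.frame")
    sources = commands.getIntProperty(seq + ".edl.source")
    ins     = commands.getIntProperty(seq + ".edl.in")
    outs    = commands.getIntProperty(seq + ".edl.out")

    for frame, source, cut_in, cut_out in zip(frames, sources, ins, outs):
        print("cut starts at global frame %d: input %d, in %d, out %d"
              % (frame, source, cut_in, cut_out))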

RVSequenceGroup

The sequence group contains a chain of nodes for each of its inputs. The input chains are connected to a single RVSequence node which controls timing and switching between the inputs.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.
timing.retimeInputs
int
1
Retime all inputs to the output fps if 1; otherwise play back their frames one at a time at the output fps.

RVSession

The session node is a convenient place to store centralized information that can easily be accessed from any other node or location, almost like a global grab bag.
Name
Type
Size
Description
matte.aspect
float
1
Centralized setting for the aspect ratio of the matte used in all sources. Float ratio of width divided by height.
matte.centerPoint
float[2]
1
Centralized setting for the center of the matte used in all sources. Value stored as X, Y in normalized coordinates.
matte.heightVisible
float
1
Centralized setting for the fraction of the source height that is still visible from the matte used in all sources.
matte.opacity
float
1
Centralized setting for the opacity of the matte used in all sources. 0 == clear 1 == opaque.
matte.show
int
1
Centralized setting to enable or disable the matte used in all sources. 0 == OFF 1 == ON.
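For example, a 2.39:1 matte can be enabled for all sources by setting these centralized properties on the session node (a minimal Python sketch; the aspect and opacity values are illustrative):
    from rv import commands

    session = commands.nodesOfType("RVSession")[0]

    commands.setFloatProperty(session + ".matte.aspect", [2.39], True)
    commands.setFloatProperty(session + ".matte.opacity", [0.8], True)
    commands.setIntProperty(session + ".matte.show", [1], True)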

RVSoundTrack

Used to construct the audio waveform textures.
Property
Type
Size
Description
audio.volume
float
1
Global audio volume
audio.balance
float
1
[-1,1] left/right channel balance
audio.offset
float
1
Global audio offset in seconds
audio.mute
int
1
If non-0 audio is muted

RVSourceGroup

The source group contains a single chain of nodes the leaf of which is an RVFileSource or RVImageSource. It has a single property.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.

RVSourceStereo

The source stereo nodes are used to control independent eye transformations.
Property
Type
Size
Description
stereo.swap
int
1
If non-0 swap the left and right eyes
stereo.relativeOffset
float
1
Offset distance between eyes, default = 0. Both eyes are offset.
stereo.rightOffset
float
1
Offset distance between eyes, default = 0. Only right eye is offset.
rightTransform.flip
int
1
If non-0 flip the right eye
rightTransform.flop
int
1
If non-0 flop the right eye
rightTransform.rotate
float
1
Right eye rotation in degrees
rightTransform.translate
float[2]
1
Independent 2D translation applied only to the right eye (on top of offsets)
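For example, to swap eyes and nudge the stereo convergence on the current source from Python (a minimal sketch; the offset value is illustrative):
    from rv import commands

    commands.setIntProperty("#RVSourceStereo.stereo.swap", [1], True)
    commands.setFloatProperty("#RVSourceStereo.stereo.relativeOffset", [0.01], True)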

RVStack

The stack node is part of a stack group and handles control for settings like compositing each layer as well as output playback timing.
Property
Type
Size
Description
output.fps
float
1
Output FPS for the stack. Input nodes may be retimed to this FPS.
output.size
int[2]
1
The virtual output size of the stack. This may not match the input sizes.
output.autoSize
int
1
Figure out a good size automatically from the input sizes if 1. Otherwise use output.size.
output.chosenAudioInput
string
1
Name of input which becomes the audio output of the stack. If the value is .all. then all inputs are mixed. If the value is .first. then the first input is used.
composite.type
string
1
The compositing operation to perform on the inputs. Valid values are: over, add, difference, -difference, and replace
mode.useCutInfo
int
1
Use cut information on the inputs to determine EDL timing.
mode.strictFrameRanges
int
1
If 1 match the timeline frames to the source frames instead of retiming to frame 1.
mode.alignStartFrames
int
1
If 1, offset all inputs so they start at the same frame as the first input.
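For example, when viewing a stack, the compositing operation and audio routing can be changed from Python (a minimal sketch; it assumes the stack is the current view so that "#RVStack" resolves):
    from rv import commands

    commands.setStringProperty("#RVStack.composite.type", ["difference"], True)
    commands.setStringProperty("#RVStack.output.chosenAudioInput", [".first."], True)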

RVStackGroup

The stack group contains a chain of nodes for each of its inputs. The input chains are connected to a single RVStack node which controls compositing of the inputs as well as basic timing offsets.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.
timing.retimeInputs
int
1
Retime all inputs to the output fps if 1; otherwise play back their frames one at a time at the output fps.

RVSwitch

The switch node is part of a switch group and handles control for output playback timing.
Property
Type
Size
Description
output.fps
float
1
Output FPS for the switch. This is normally determined by the active input.
output.size
int[2]
1
The virtual output size of the switch. This is normally determined by the active input.
output.autoSize
int
1
Figure out a good size automatically from the input sizes if 1. Otherwise use output.size.
output.input
string
1
Name of the active input node.
mode.useCutInfo
int
1
Use cut information on the inputs to determine EDL timing.
mode.alignStartFrames
int
1
If 1, offset all inputs so they start at the same frame as the first input.

RVSwitchGroup

The switch group changes its behavior depending on which of its inputs is “active”. It contains a single Switch node to which all of its inputs are connected.
Name
Type
Size
Description
ui.name
string
1
This is a user specified name which appears in the user interface.

RVTransform2D

The 2D transform node controls the image transformations. This node is usually evaluated on the GPU.
Property
Type
Size
Description
transform.flip
int
1
non-0 means flip the image (vertically)
transform.flop
int
1
non-0 means flop the image (horizontally)
transform.rotate
float
1
Rotate the image in degrees about its center.
pixel.aspectRatio
float
1
If non-0 set the pixel aspect ratio. Otherwise use the pixel aspect ratio reported by the incoming image.
transform.translate
float[2]
1
Translation in 2D in NDC space
transform.scale
float[2]
1
Scale in X and Y dimensions in NDC space
stencil.visibleBox
float
4
Four floats indicating the left, right, top, and bottom in NDC space of a stencil box.
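For example, to flop, rotate, and scale the current source from Python (a minimal sketch; the values are illustrative):
    from rv import commands

    commands.setIntProperty("#RVTransform2D.transform.flop", [1], True)
    commands.setFloatProperty("#RVTransform2D.transform.rotate", [90.0], True)
    commands.setFloatProperty("#RVTransform2D.transform.scale", [0.5, 0.5], True)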

RVViewGroup

The RVViewGroup node has no external properties.

Chapter 17 Additional GLSL Node Reference

This chapter describes the list of GLSL custom nodes that come bundled with RV. These nodes are grouped into five sections within this chapter based on the node's "evaluationType", i.e. color, filter, transition, merge, or combine. Each sub-section within a section describes a node and its parameters. For a complete description of the GLSL custom node mechanism itself, refer to the chapter on that topic, i.e. "Chapter 3: Writing a Custom GLSL Node".
The complete collection of GLSL custom nodes that comes with each RV distribution is stored in the following two files, located at:
Linux & Windows:
<RV install dir>/plugins/Nodes/AdditionalNodes.gto 
<RV install dir>/plugins/Support/additional_nodes/AdditionalNodes.zip

Mac:
<RV install dir>/Contents/PlugIns/Nodes/AdditionalNodes.gto 
<RV install dir>/Contents/PlugIns/Support/additional_nodes/AdditionalNodes.zip 
The file "AdditionalNodes.gto" is a GTO formatted text file that contains the definition of all the nodes described in this chapter. All of the node definitions found in this file are signed for use by all RV4 versions. The GLSL source code that implements the node's functionality is embedded within the node definition's function block as an inlined string. In addition, the default values of the node's parameters can be found within the node definition's parameter block. The accompanying support file "AdditionalNodes.zip" is a zipped up collection of individually named node ".gto" and ".glsl" files. Users can unzip this package and refer to each node's .gto/.glsl file as examples of custom written RV GLSL nodes. Note the file "AdditionalNodes.zip" is not used by RV. Instead RV only uses "AdditionalNodes.gto" which was produced from all the files found in "AdditionalNodes.zip".
These nodes can be applied through the session manager to sources, sequences, stacks, layouts or other nodes. First you select a source (for example) and from the session manager "+" pull menu select "New Node by Type" and type in the name of the node in the entry box field of the "New Node by Type" window.
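One way to add a bundled node programmatically is to list its type in a pipeline group's pipeline.nodes property (see the PipelineGroup section in the previous chapter). The Python sketch below appends the bundled Matrix3x3 color node to the look pipeline of the current source; it assumes the node definition from AdditionalNodes.gto has been loaded, and the approach is illustrative rather than the only one:
    from rv import commands

    # Append the bundled Matrix3x3 color node to the current source's look
    # pipeline (running this twice appends a second copy).
    group = commands.nodesOfType("RVLookPipelineGroup")[0]
    prop = group + ".pipeline.nodes"
    nodes = commands.getStringProperty(prop)
    commands.setStringProperty(prop, nodes + ["Matrix3x3"], True)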

Color Nodes

This section describes all the GLSL nodes of evaluationType "color" found in "AdditionalNodes.gto".

17.1.1 Matrix3x3

This node implements a 3x3 matrix multiplication on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters:
Property
Type
Default
node.parameters.m33
float[9]
[ 1 0 0 0 1 0 0 0 1 ]

17.1.2 Matrix4x4

This node implements a 4x4 matrix multiplication on the RGBA channels of the inputImage. The inputImage alpha channel is affected by this node.
Input parameters:
Property
Type
Default
node.parameters.m44
float[16]
[ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]

17.1.3 Premult

This node implements the "premultiply by alpha" operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.4 UnPremult

This node implements the "unpremultiply by alpha" (i.e. divide by alpha) operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.5 Gamma

This node implements the gamma (i.e. pixelColor^gamma) operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters:
Property
Type
Default
node.parameters.gamma
float[3]
[ 0.4545 0.4545 0.4545 ]
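Once a Gamma node has been added (for example via “New Node by Type” as described above), its parameters are set like any other property. A minimal Python sketch, locating the node by type; the gamma value is illustrative:
    from rv import commands

    # Locate an existing Gamma node by type and set its parameters component.
    gamma = commands.nodesOfType("Gamma")[0]
    commands.setFloatProperty(gamma + ".parameters.gamma", [2.2, 2.2, 2.2], True)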

17.1.6 CDL

This node implements the Color Decision List operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Parameter lumaCoefficients defaults to full range Rec709 luma values.
Input parameters:
Property
Type
Default
node.parameters.slope
float[3]
[ 1 1 1 ]
node.parameters.offset
float[3]
[ 0 0 0 ]
node.parameters.power
float[3]
[ 1 1 1 ]
node.parameters.saturation
float
[ 1 ]
node.parameters.lumaCoefficients
float[3]
[ 0.2126 0.7152 0.0722 ]
node.parameters.minClamp
float
[ 0 ]
node.parameters.maxClamp
float
[ 1 ]

17.1.7 CDLForACESLinear

This node implements the Color Decision List operation in ACES linear colorspace on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Parameter lumaCoefficients defaults to full range Rec709 luma values.
If the inputImage colorspace is NOT ACES linear but some other linear colorspace X, then one must set the 'toACES' property to the X-to-ACES colorspace conversion matrix and, similarly, the 'fromACES' property to the ACES-to-X conversion matrix.
Input parameters:
Property
Type
Default
node.parameters.slope
float[3]
[ 1 1 1 ]
node.parameters.offset
float[3]
[ 0 0 0 ]
node.parameters.power
float[3]
[ 1 1 1 ]
node.parameters.saturation
float
[ 1 ]
node.parameters.lumaCoefficients
float[3]
[ 0.2126 0.7152 0.0722 ]
node.parameters.toACES
float[16]
[ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]
node.parameters.fromACES
float[16]
[ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]
node.parameters.minClamp
float
[ 0 ]
node.parameters.maxClamp
float
[ 1 ]

17.1.8 CDLForACESLog

This node implements the Color Decision List operation in ACES Log colorspace on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Parameter lumaCoefficients defaults to full range Rec709 luma values.
If the inputImage colorspace is NOT ACES linear but some other linear colorspace X, then one must set the 'toACES' property to the X-to-ACES colorspace conversion matrix and, similarly, the 'fromACES' property to the ACES-to-X conversion matrix.
Input parameters:
Property
Type
Default
node.parameters.slope
float[3]
[ 1 1 1 ]
node.parameters.offset
float[3]
[ 0 0 0 ]
node.parameters.power
float[3]
[ 1 1 1 ]
node.parameters.saturation
float
[ 1 ]
node.parameters.lumaCoefficients
float[3]
[ 0.2126 0.7152 0.0722 ]
node.parameters.toACES
float[16]
[ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]
node.parameters.fromACES
float[16]
[ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ]
node.parameters.minClamp
float
[ 0 ]
node.parameters.maxClamp
float
[ 1 ]

17.1.9 SRGBToLinear

This linearizing node implements the sRGB to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.10 LinearToSRGB

This node implements the linear to sRGB transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.11 Rec709ToLinear

This linearizing node implements the Rec709 to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.12 LinearToRec709

This node implements the linear to Rec709 transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.13 CineonLogToLinear

This linearizing node implements the Cineon Log to linear transfer function operation on the RGB channels of the inputImage. The implementation is based on Kodak specification "The Cineon Digital Film System". The inputImage alpha channel is not affected by this node.
Input parameters: (values must be specified within the range [0..1023])
Property
Type
Default
node.parameters.refBlack
float
95
node.parameters.refWhite
float
685
node.parameters.softClip
float
0

17.1.14 LinearToCineonLog

This node implements the linear to Cineon Log film transfer function operation on the RGB channels of the inputImage. The implementation is based on Kodak specification "The Cineon Digital Film System". The inputImage alpha channel is not affected by this node.
Input parameters: (values must be specified within the range [0..1023])
Property
Type
Default
node.parameters.refBlack
float
95
node.parameters.refWhite
float
685

17.1.15 ViperLogToLinear

This linearizing node implements the Viper Log to linear transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.16 LinearToViperLog

This node implements the linear to Viper Log transfer function operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.17 RGBToYCbCr601

This node implements the RGB to YCbCr 601 conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.18 RGBToYCbCr709

This node implements the RGB to YCbCr 709 conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.709 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.19 RGBToYCgCo

This node implements the RGB to YCgCo conversion operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.20 YCbCr601ToRGB

This node implements the YCbCr 601 to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.21 YCbCr709ToRGB

This node implements the YCbCr 709 to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.709 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.22 YCgCoToRGB

This node implements the YCgCo to RGB conversion operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.23 YCbCr601FRToRGB

This node implements the YCbCr 601 "Full Range" to RGB conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.24 RGBToYCbCr601FR

This node implements the RGB to YCbCr 601 "Full Range" conversion operation on the RGB channels of the inputImage. Implementation is based on ITU-R BT.601 specification. The inputImage alpha channel is not affected by this node.
Input parameters: None

17.1.25 AlexaLogCToLinear

This node implements the Alexa LogC to linear conversion operation on the RGB channels of the inputImage. Implementation is based on Alexa LogC v3 specification. The inputImage alpha channel is not affected by this node. Default values are for EI=800 and LogCBlackSignal = 0.
NB: To translate LogC to linear scene data, we use the Alexa v3 specification's a, b, c, d, e, f, and cutoff coefficients in their equivalent form as Alexa file-format metadata parameters (or this node's properties); see the equations below for a given EI (e.g. EI=800: a=5.555556, b=0.052272, c=0.247190, d=0.385537, e=5.367655, f=0.092809, cutoff=0.010591).

LogCBlackSignal = 0 
LogCGraySignal = 1.0 / a                        (0.18)
LogCBlackOffset = b + a * LogCBlackSignal       (0.052272)
LogCCutPoint = e * cutoff + f                   (0.149658) // i.e. LogCLinearCutPoint in RV's imageinfo
LogCEncodingGain = c                            (0.247190)
LogCEncodingOffset = d                          (0.385537)
LogCLinearSlope = e / (a * c)                   (3.90864)
LogCLinearOffset = (f - d - (e * b ) / a) / c   (-1.38854)
                          
Input parameters:
Property
Type
Default
node.parameters.LogCBlackSignal
float
0.0
node.parameters.LogCEncodingOffset
float
0.385537
node.parameters.LogCEncodingGain
float
0.24719
node.parameters.LogCGraySignal
float
0.18
node.parameters.LogCBlackOffset
float
0.052272
node.parameters.LogCLinearSlope
float
3.90864
node.parameters.LogCLinearOffset
float
-1.38854
node.parameters.LogCCutPoint*
float
0.149658
* This value is displayed as LogCLinearCutPoint in RV's imageinfo.
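The derived values above can be checked with a few lines of Python. This is only a verification of the listed equations using the EI=800 coefficients quoted in the text, not part of RV's API:
    # Alexa LogC v3 coefficients for EI=800, as quoted above.
    a, b, c, d = 5.555556, 0.052272, 0.247190, 0.385537
    e, f, cutoff = 5.367655, 0.092809, 0.010591

    LogCBlackSignal    = 0.0
    LogCGraySignal     = 1.0 / a                        # 0.18
    LogCBlackOffset    = b + a * LogCBlackSignal        # 0.052272
    LogCCutPoint       = e * cutoff + f                 # 0.149658
    LogCEncodingGain   = c                              # 0.247190
    LogCEncodingOffset = d                              # 0.385537
    LogCLinearSlope    = e / (a * c)                    # 3.90864
    LogCLinearOffset   = (f - d - (e * b) / a) / c      # -1.38854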

17.1.26 LinearToAlexaLogC

This node implements the linear to Alexa LogC conversion operation on the RGB channels of the inputImage. Implementation is based on Alexa LogC v3 specification. The inputImage alpha channel is not affected by this node. Default values are for EI=800 and LogCBlackSignal = 0.
NB: To translate linear to LogC data, we use the Alexa v3 specification's a, b, c, d, e, f, and cutoff coefficients in their equivalent form as Alexa file-format metadata parameters (or this node's properties); see the equations below for a given EI (e.g. EI=800: a=5.555556, b=0.052272, c=0.247190, d=0.385537, e=5.367655, f=0.092809, cutoff=0.010591).

LogCBlackSignal = 0 
LogCGraySignal = 1.0 / a                         (0.18)
LogCBlackOffset = b + a * LogCBlackSignal        (0.052272)
LogCCutPoint = a * cutoff + b                    (0.111111)
LogCEncodingGain = c                             (0.247190)
LogCEncodingOffset = d                           (0.385537)
LogCLinearSlope = e / (a * c)                    (3.90864)
LogCLinearOffset = (f - d - (e * b ) / a) / c    (-1.38854)
                          
Input parameters:
Property
Type
Default
node.parameters.LogCBlackSignal
float
0.0
node.parameters.LogCEncodingOffset
float
0.385537
node.parameters.LogCEncodingGain
float
0.24719
node.parameters.LogCGraySignal
float
0.18
node.parameters.LogCBlackOffset
float
0.052272
node.parameters.LogCLinearSlope
float
3.90864
node.parameters.LogCLinearOffset
float
-1.38854
node.parameters.LogCCutPoint
float
0.111111

17.1.27 Saturation

This node implements the saturation operation on the RGB channels of the inputImage. The inputImage alpha channel is not affected by this node.
Parameter lumaCoefficients defaults to full range Rec709 luma values.
Input parameters:
Property
Type
Default
node.parameters.saturation
float
[ 1 ]
node.parameters.lumaCoefficients
float[3]
[ 0.2126 0.7152 0.0722 ]
node.parameters.minClamp
float
[ 0 ]
node.parameters.maxClamp
float
[ 1 ]

Transition Nodes

This section describes all the GLSL nodes of evaluationType "transition" found in "AdditionalNodes.gto".

17.2.1 CrossDissolve

This node implements a simple cross dissolve transition effect on the RGBA channels of two inputImage sources beginning from startFrame until (startFrame + numFrames -1). The inputImage alpha channel is affected by this node.
Input parameters:
Property
Type
Default
node.parameters.startFrame
float
40
node.parameters.numFrame
float
20

17.2.2 Wipe

This node implements a simple wipe transition effect on the RGBA channels of two inputImage sources beginning from startFrame until (startFrame + numFrames -1). The inputImage alpha channel is affected by this node.
Input parameters:
Property
Type
Default
node.parameters.startFrame
float
40
node.parameters.numFrame
float
20

Appendix A Open Source Components

RV uses components licensed under the GNU LGPL and other open-source licenses. There is no GPL code in any of RV's binaries. LGPL code for which Tweak Software (or Tweak Films) is the copyright holder is sometimes directly compiled into RV (not as a shared library).
Tweak Software takes open source licensing seriously. Open source software can have huge social benefits, and we ourselves have benefited from the work of open source developers. We have in the past contributed time, code, and funding to open source projects and will continue to do so in the future.

GTO

The session file (.rv) is a form of GTO file. The GTO file library is distributed under terms similar to the BSD license and is available from our website. The GTO format was invented and is copyrighted by Tweak Films.
The GTO source distribution includes a handful of tools to edit GTO files independently of any application. Also included is a Python module which makes editing the files extremely easy.
RV ships with prebuilt versions of GTO command line tools.

Libquicktime

RV can use libquicktime, which is distributed under the terms of the GNU LGPL, to read and write QuickTime, DV, MP4, and AVI movie files. The libquicktime library can be found in $RV_HOME/lib as a shared object. Libquicktime is capable of reading codecs not shipped with RV. You can read the documentation at the libquicktime website to find out how to write or install new codecs. Plugin codecs can be found in $RV_HOME/plugins/lqt in the RV distribution tree. New codecs can be installed in the same location.
Source code for libquicktime and the plugins used by RV is included with the RV distribution.

FFMPEG

On Linux, RV uses an LGPL-only version of FFMPEG by itself and as a libquicktime plugin to decode H.264 video and AAC audio. Source code for FFMPEG is included with RV. We build FFMPEG with the flags generated by its configure script, but we do not use its make files. RV (via the ffmpeg libquicktime plugin) uses only a small portion of FFMPEG. These portions are restricted to the codecs for which Tweak Software has a license – namely the AVC1 (H.264) video and AAC audio codecs for decoding only (the MPEG4 codecs generally).
If you are using FFMPEG directly through our ffmpeg plugin, you can find directions in $RV_HOME/src/mio_ffmpeg/README for recompiling with support for additional codecs that FFMPEG supports. If you do so, however, the obligation to sort out licensing is yours.

FreeType

RV uses FreeType for rendering text on the image view.

FTGL

Copyright (c) 2001-2004 Henry Maddocks <ftgl@opengl.geek.nz>

LibRaw

This software uses LibRaw (libraw.org) to decode some raw camera file formats.

Libtiff

RV uses version 3.10.0 of libtiff to read and write TIFF files and EXIF JPEG metadata.
Copyright (c) 1988-1997 Sam Leffler
Copyright (c) 1991-1997 Silicon Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

LibEXIF

RV uses libexif (LGPL) to decode EXIF tags in JPEG files.

Libjpeg

This software is based in part on the work of the Independent JPEG Group. RV uses the Independent JPEG Group's free JPEG software library to decode jpeg.

OpenJPEG

Copyright (c) 2002-2007, Communications and Remote Sensing Laboratory, Universite catholique de Louvain (UCL), Belgium
Copyright (c) 2002-2007, Professor Benoit Macq
Copyright (c) 2001-2003, David Janssens
Copyright (c) 2002-2003, Yannick Verschueren
Copyright (c) 2003-2007, Francois-Olivier Devaux and Antonin Descampe
Copyright (c) 2005, Herve Drolon, FreeImage Team
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS `AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

OpenEXR

RV uses the OpenEXR library. The source code for the library and tools can be found on the OpenEXR web site.
Copyright (c) 2007, Industrial Light & Magic, a division of Lucas Digital Ltd. LLC
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Industrial Light & Magic nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Minizip

RV uses the minizip package (which comes with the libz source code) Copyright (C) 1998-2005 Gilles Vollant.

Audiofile

The Audiofile library is distributed under the terms of the LGPL.

OpenColorIO

Copyright (c) 2003-2010 Sony Pictures Imageworks Inc., et al. All Rights Reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Sony Pictures Imageworks nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Yaml-CPP and libyaml

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

tinyxml

This software uses tinyxml.

libresample

RV uses libresample, distributed under the terms of the LGPL.

OpenImageIO

Copyright 2008 Larry Gritz and the other authors and contributors. All Rights Reserved.
Based on BSD-licensed software Copyright 2004 NVIDIA Corp. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the software's owners nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Atomic Ops

Copyright (c) 2003 Hewlett-Packard Development Company, L.P.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Boehm-Demers Garbage Collector

Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
Copyright 1996-1999 by Silicon Graphics. All rights reserved.
Copyright 1999 by Hewlett-Packard Company. All rights reserved.
Copyright (C) 2007 Free Software Foundation, Inc
Copyright (c) 2000-2011 by Hewlett-Packard Development Company.
THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
Permission is hereby granted to use or copy this program for any purpose, provided the above notices are retained on all copies. Permission to modify the code and to distribute modified code is granted, provided the above notices are retained, and a notice that the code was modified is included with the above copyright notice.

mp4v2

Software distributed under the License is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.
The Original Code is MPEG4IP.
The Initial Developer of the Original Code is Cisco Systems Inc. Portions created by Cisco Systems Inc. are Copyright (C) Cisco Systems Inc. 2001 - 2005. All Rights Reserved.
3GPP features implementation is based on 3GPP's TS26.234-v5.60, and was contributed by Ximpo Group Ltd. Portions created by Ximpo Group Ltd. are
Copyright (C) Ximpo Group Ltd. 2003, 2004. All Rights Reserved.
Contributor(s):
Dave Mackie dmackie@cisco.com Alix Marchandise-Franquet alix@cisco.com Ximpo Group Ltd. mp4v2@ximpo.com Bill May wmay@cisco.com

lcms

Little Color Management System Copyright (c) 1998-2014 Marti Maria Saguer
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

OpenCV

Copyright (C) 2000, Intel Corporation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistribution's of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistribution's in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* The name of Intel Corporation may not be used to endorse or promote products derived from this software without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the Intel Corporation or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

PySide

Copyright (C) 2013 Digia Plc and/or its subsidiary(-ies).
Contact: PySide team <contact@pyside.org>
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License version 2.1 as published by the Free Software Foundation. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.

PyOpenGL

PyOpenGL License (v3)
PyOpenGL is based on PyOpenGL 1.5.5, Copyright © 1997-1998 by James Hugunin, Cambridge MA, USA, Thomas Schwaller, Munich, Germany and David Ascher, San Francisco CA, USA.
Contributors to the PyOpenGL project in addition to those listed above include:
* David Konerding
* Soren Renner
* Rene Liebscher
* Randall Hopper
* Michael Fletcher
* Thomas Malik
* Thomas Hamelryck
* Jack Jansen
* Michel Sanner
* Tarn Weisner Burton
* Andrew Cox
* Rene Dudfield
PyOpenGL is Copyright (c) 1997-1998, 2000-2006 by the contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Appendix B Licensed Components

MPEG-4

THIS PRODUCT IS LICENSED UNDER THE MPEG-4 VISUAL PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER FOR (i) ENCODING VIDEO IN COMPLIANCE WITH THE MPEG-4 VISUAL STANDARD ("MPEG-4 VIDEO") AND/OR (ii) DECODING MPEG-4 VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON- COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED BY MPEG LA TO PROVIDE MPEG-4 VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION INCLUDING THAT RELATING TO PROMOTIONAL, INTERNAL AND COMMERCIAL USES AND LICENSING MAY BE OBTAINED FROM MPEG LA, LLC. SEE HTTP://WWW.MPEGLA.COM.

AVC

THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON- COMMERCIAL USE OF A CONSUMER TO (i)ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD (“AVC VIDEO”) AND/OR (ii)DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM