COVISE and OpenCOVER support

From the Immersive Visualization Lab Wiki
===COVISE Tutorial===

There is a six-lecture video course on YouTube:

[http://www.youtube.com/user/Calit2ube#p/search/0/TQrsa4YLBhY Lecture 1],
[http://www.youtube.com/user/Calit2ube#p/search/1/QZPc1FnKEGU Lecture 2],
[http://www.youtube.com/user/Calit2ube#p/search/2/qSlwN288MPI Lecture 3],
[http://www.youtube.com/user/Calit2ube#p/search/3/7xfwY863bxU Lecture 4],
[http://www.youtube.com/user/Calit2ube#p/search/4/hgpEnbPbhr4 Lecture 5],
[http://www.youtube.com/user/Calit2ube#p/search/5/PHyUIdzap9o Lecture 6]

The accompanying slides are at:

[[Media:COVISE-Tutorial-1.pdf | Lecture 1]],
[[Media:COVISE-Tutorial-2.pdf | Lecture 2]],
[[Media:COVISE-Tutorial-3.pdf | Lecture 3]],
[[Media:COVISE-Tutorial-4.pdf | Lecture 4]],
[[Media:COVISE-Tutorial-5.pdf | Lecture 5]],
[[Media:COVISE-Tutorial-6.pdf | Lecture 6]]

Some of the slides of this course have been generously provided by Dr. Uwe Woessner from HLRS.
  
===General information about COVISE modules===

The entire COVISE installation, including all plugins, is located at /home/covise/covise/. Each user should have a link in their home directory named 'covise' which points to this directory. There should also be a link 'plugins' which points to the plugins directory: /home/covise/plugins/. Put all the plugins you write into the directory plugins/calit2/.
  
 
Other directories you might need throughout the project are:

<ul>
   <li>covise/src/renderer/OpenCOVER/kernel/: core OpenCOVER functionality, especially coVRPluginSupport.cpp</li>
   <li>covise/src/kernel/OpenVRUI/: OpenCOVER's user interface elements. Useful documentation is in the doc/html subdirectory; browse it by opening index.html in a web browser.</li>
   <li>covise/src/renderer/OpenCOVER/osgcaveui/: special CaveUI functions; not required, but useful.</li>
</ul>
  
You compile your plugin with the 'make' command, or 'make verbose' if you want to see the full compiler commands. This creates a library file in covise/rhel5/lib/OpenCOVER/plugins/. COVISE uses qmake, so the Makefile is generated from the .pro file. The name of the plugin is determined by the project name in the .pro file in your plugin directory (first line, keyword TARGET). I defaulted TARGET to be p1<username> for project #1. It is important that the TARGET name be unique, or else you will overwrite somebody else's plugin. You can rename your source files or add additional source files (.cpp, .h) by listing them after the SOURCES tag in the .pro file.
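As a sketch, a plugin's .pro file along these lines might look as follows. The file and class names here are placeholders; copy an existing plugin's .pro file from plugins/calit2 for the exact template your tree uses, since it may define additional variables:

```
# hypothetical MyPlugin.pro; the real template may differ
TARGET   = p1jschulze

# list all source and header files of the plugin here
SOURCES  = MyPlugin.cpp
HEADERS  = MyPlugin.h
```

The TARGET value is what determines the plugin name you later enable in the configuration file.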
  
 
You run OpenCOVER by typing 'opencover' anywhere at the command line. You quit OpenCOVER by hitting the 'q' key on the keyboard or Ctrl-C in the shell window you started it from.

Plugins do not get loaded by OpenCOVER unless they are enabled in the configuration file.
  
===Important Directories and URLs===

* /home/covise/covise/config: configuration files
* /home/covise/covise/src/renderer/OpenCOVER/plugins/calit2: Calit2 plugin code
* /home/covise/covise/extern_libs/src/OpenSceneGraph-2.8.2: OpenSceneGraph installation directory
* /home/covise/covise/src/renderer/OpenCOVER: OpenCOVER source directory; core functions are in the kernel subdirectory
* /home/covise/covise/src/kernel/OpenVRUI: virtual reality user interface widgets
* http://www.openscenegraph.org: main OSG web site
* http://openscenegraph.org/archiver/osg-users: OSG email archive. If you have an OSG problem, this is a good place to start.
===COVISE configuration files===

The configuration files are in the directory /home/covise/covise/config. The most important files are:

<ul>
   <li>ivl.xml: general configuration information for all lab machines on the 2nd floor and in room 6307</li>
   <li>NODENAME.xml: node-specific information, e.g., sand.xml for sand.ucsd.edu</li>
   <li>cwall.xml: C-wall specific configuration information (cwall-1.ucsd.edu and cwall-2.ucsd.edu)</li>
   <li>XML configuration files can be syntax-checked with xmllint, e.g., xmllint --debug --noout sand.xml</li>
</ul>

The configuration files are XML files which can be edited with any ASCII text editor (vi, emacs, nedit, gedit, ...). There are sections specific to certain machines. To load your plugin (e.g., MyPlugin) on one or more machines (e.g., chert and sand), you need to add or modify a section to contain:
 
<pre>
  &lt;LOCAL host="chert,sand"&gt;
    &lt;COVER&gt;
      &lt;Plugin&gt;
        &lt;MyPlugin value="on" /&gt;
      &lt;/Plugin&gt;
    &lt;/COVER&gt;
  &lt;/LOCAL&gt;
</pre>
The head node needs to be configured in the same way. If you are navigating with a mouse on the head node, you probably want to configure a larger screen size for the head node, so that it covers a larger area of the tiled screen. You can do this by adjusting the width and height values, but make sure that the aspect ratio of the new values matches that of the OpenCOVER window.
===Create a new plugin using SVN and copy it to the StarCAVE===

To create a new plugin:

<pre>
> cd ~/plugins/calit2
> mkdir your_new_plugin_folder
> svn add your_new_plugin_folder
> cd your_new_plugin_folder
> svn commit
</pre>

To move it to the StarCAVE, log in to the StarCAVE and run:

<pre>
> cd ~/plugins/calit2
> svn update your_new_plugin_folder
</pre>

From now on, just use svn update and svn commit inside your plugin folder.
===Change Default VRUI Menu Position/Size===

Let WindowTitle be the title of the window (the text in its title bar). Then add the following section to the config file:

<pre>
  <COVER>
    <VRUI>
      <WindowTitle>
        <Menu>
          <Position x="0" y="0" z="0" />
          <Size value="1.0" />
        </Menu>
      </WindowTitle>
    </VRUI>
  </COVER>
</pre>
===Configure Lighting===

By default there is a light source 45 degrees up behind the viewer. To change this, the following parameters can be set in the config file:

<pre>
<COVER>
  <Lights>
    <Sun>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Sun>
    <Lamp>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Lamp>
    <Light1>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Light1>
    <Light2>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Light2>
  </Lights>
</COVER>
</pre>
  
 
===Debugging OpenCOVER Plugins===

OpenCOVER code can be debugged with gdb. If it throws a 'Segmentation Fault', make sure the core is getting dumped with 'unlimit coredumpsize' (in bash: 'ulimit -c unlimited'). Then you should find a file named 'core' or 'core.<pid>' in the directory you are running opencover from. Assuming your latest core file is called core.4567, you can run gdb with:

* gdb ~/covise/rhel5/bin/Renderer/OpenCOVER core.4567

Hit RETURN through the startup screens until you get a command prompt. The two most important commands are:

Documentation for gdb is at: http://sourceware.org/gdb/documentation/
  
===Create OpenCOVER Menus===

Here is an example which creates a sub-menu off the main OpenCOVER menu with two check boxes.

In the header file:

Step #1: Derive the plugin class from coMenuListener. Example:

<pre>
  class MyClass : public coVRPlugin, public coMenuListener
</pre>

Step #2: Declare attributes for the menu items. Example:

<pre>
  coSubMenuItem*        _myMenuItem;
  coRowMenu*            _myMenu;
  coCheckboxMenuItem*   _myFirstCheckbox, *_mySecondCheckbox;
</pre>

Step #3: Declare the menu callback function. Example:

<pre>
  void menuEvent(coMenuItem*);
</pre>

In the .cpp file:

Step #4: Create the menu, e.g., from the init() callback. Example:

<pre>
void MyClass::createMenus()
{
  _myMenuItem = new coSubMenuItem("My Menu");
  _myMenu = new coRowMenu("My Menu");
  _myMenuItem->setMenu(_myMenu);

  _myFirstCheckbox = new coCheckboxMenuItem("First Checkbox", true);
  _myMenu->add(_myFirstCheckbox);
  _myFirstCheckbox->setMenuListener(this);

  _mySecondCheckbox = new coCheckboxMenuItem("Second Checkbox", false);
  _myMenu->add(_mySecondCheckbox);
  _mySecondCheckbox->setMenuListener(this);

  cover->getMenu()->add(_myMenuItem);
}
</pre>

Step #5: Implement the callback function for menu interaction. Example:

<pre>
void MyClass::menuEvent(coMenuItem* item)
{
  if(item == _myFirstCheckbox)
  {
    _myFirstCheckbox->setState(true);
    _mySecondCheckbox->setState(false);
  }
  if(item == _mySecondCheckbox)
  {
    _myFirstCheckbox->setState(false);
    _mySecondCheckbox->setState(true);
  }
}
</pre>
===Wall Clock Time===

If you are going to animate anything, keep in mind that the rendering system runs anywhere between 1 and 100 frames per second, so you cannot rely on the time between frames being any fixed value. Instead, you will want to know exactly how much time has passed since you last rendered something, i.e., since your last preFrame() call. Use cover->frameTime(), or better cover->frameDuration(); these return a double value with the number of seconds (at millisecond or even microsecond accuracy) since the start of the program, or since the last preFrame(), respectively.
  
 
===Tracker Data===

These lines compute two points on the pointer (wand) ray in world space:

<pre>
  osg::Vec3 pointerPos1Wld = cover->getPointerMat().getTrans();
  osg::Vec3 pointerPos2Wld = osg::Vec3(0.0, 1000.0, 0.0);
  pointerPos2Wld = pointerPos2Wld * cover->getPointerMat();
</pre>

This is the way to get the head position in world space:

<pre>
  Vec3 viewerPosWld = cover->getViewerMat().getTrans();
</pre>

Or in object space:

<pre>
  Vec3 viewerPosWld = cover->getViewerMat().getTrans();
  Vec3 viewerPosObj = viewerPosWld * cover->getInvBaseMat();
</pre>
  
To register an interaction so that only your plugin uses the mouse pointer while a button on the wand is pressed, you want to use the TrackerButtonInteraction class. For sample usage see plugins/Volume. Here are the main calls you will need. Most of these functions go in the preFrame routine, unless otherwise noted:
+
===Taking a Screenshot from within OpenCOVER===
  
<ul>
+
The first solution is courtesy of Emmett McQuinn. He says:
  <li>Make sure you include at the top of your code:
+
"This code is fairly robust in our application, it captures the screen with the proper orientation when a trackball manipulator is used and returns to the proper orientation so the end user's view is never modified. It also preserves the correct aspect ratio and works with double buffering. The routine take an offscreen framebuffer capture, which can be higher than the native display resolution (up to 8k on modern cards)."
    <pre>
+
      #include &lt;OpenVRUI/coTrackerButtonInteraction.h&gt;
+
    </pre>
+
  </li>
+
  <li>In the constructor you want to create your interaction for button A, which is the left wand button:
+
    <pre>
+
      interaction = new coTrackerButtonInteraction(coInteraction::ButtonA,"MoveObject",coInteraction::Menu);
+
    </pre>
+
  </li>
+
  <li>In the destructor you want to call:
+
    <pre>
+
      delete interaction;
+
    </pre>
+
  </li>
+
 
+
  <li>To register your interaction and thus disable button A interaction in all other plugins call the following function. Make sure that to call this function before other modules can register the interaction. In particular, this might mean that you need to register the interaction before a mouse button is pressed, for instance by registering it when intersecting with the object to interact with.
+
    <pre>
+
      if(!interaction->registered)
+
      {
+
        coInteractionManager::the()->registerInteraction(interaction);
+
      }
+
    </pre>
+
  
  </li>
+
<pre>
 +
#include "ScreenCapture.h"
 +
#include <osgGA/TrackballManipulator>
 +
#include <osgDB/WriteFile>
 +
#include <assert.h>
 +
#include <type/emath.h>
 +
#include <cmath>
  
  <li>To do something just once, after the interaction has just started:
+
bool osge::ScreenCapture(const char *filename, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
    <pre>
+
{
      if(interaction->wasStarted())
+
// current technologies (quadro FX 5800) can support up to 8192x8192 framebuffer
      {
+
// the frame buffer does not have to be square
      }
+
// most recent cards should be able to do 4096x4096
    </pre>
+
const int max_pixels = 8192;
  </li>
+
  
  <li>To do something every frame while the interaction is running:
+
assert(width <= max_pixels);
    <pre>
+
assert(height <= max_pixels);
    if(interaction->isRunning())
+
    {
+
    }
+
    </pre>
+
  </li>
+
  
  <li>To do something once at the end of the interaction:
+
osg::Image *shot = new osg::Image();
    <pre>
+
    if(interaction->wasStopped())
+
    {
+
    }
+
    </pre>
+
  </li>
+
  
  <li>To unregister the interaction and free button A for other plugins:
+
int w = width;
    <pre>
+
int h = height;
      if(interaction->registered && (interaction->getState()!=coInteraction::Active))
+
      {
+
        coInteractionManager::the()->unregisterInteraction(interaction);
+
      }
+
  
     </pre>
+
     osg::ref_ptr<osg::Camera> newcamera = new osg::Camera;
  </li>
+
    osg::ref_ptr<osg::Camera> oldcamera = viewer->getCamera();
</ul>
+
  
===OSG Text===
+
// if we want to keep the native ratio rather than given pixels
 +
if (keepRatio)
 +
{
 +
// will never be larger than the parameters width and height
 +
double fov, ratio, near, far;
 +
oldcamera->getProjectionMatrixAsPerspective(fov, ratio, near, far);
  
Make sure you #include <osgText/Text>. There is a good example for how osgText can be used in ~/covise/src/renderer/OpenCOVER/osgcaveui/Card.cpp. _highlight is the osg::Geode the text gets created for, <b>createLabel()</b> returns the Drawable with the text, _labelText is the text string, and osgText::readFontFile() reads the font.
+
if (emath::isnan(ratio) || (std::abs(ratio -0.0) < 0.01) || (ratio < 0))
 +
{
 +
// orthographic projection
 +
double left, right, bottom, top;
 +
oldcamera->getProjectionMatrixAsOrtho(left, right, bottom, top, near, far);
 +
float dw = right - left;
 +
float dh = top - bottom;
 +
ratio = dw / dh;
 +
}
  
===How to Create a Rectangle with a Texture===
+
const int max_w = width;
 +
w = h * ratio;
 +
if (w > max_w)
 +
{
 +
w = max_w;
 +
h = w / ratio;
 +
}
 +
}
  
The following code sample from osgcaveui/Card.cpp demonstrates how to create a rectangular geometry with a texture.
+
shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE);
  
<pre>
+
// store old camera settings
Geometry* Card::createIcon()
+
osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();
{
+
osg::Vec3 eye, center, up;
  Geometry* geom = new Geometry();
+
    oldcamera->getViewMatrixAsLookAt(eye,center,up);
 +
center = manipulator->getCenter();
 +
manipulator->setHomePosition(eye, center, up);
 +
    //Copy the settings from sceneView-camera to get exactly the view the user sees at the moment:
 +
//newcamera->setClearColor(oldcamera->getClearColor());
  
  Vec3Array* vertices = new Vec3Array(4);
+
newcamera->setClearColor(osg::Vec4(0,0,0,0));
  float marginX = (DEFAULT_CARD_WIDTH  - ICON_SIZE * DEFAULT_CARD_WIDTH) / 2.0;
+
  float marginY = marginX;
+
                                                  // bottom left
+
  (*vertices)[0].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX, DEFAULT_CARD_HEIGHT /
+
2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
+
                                                  // bottom right
+
  (*vertices)[1].set( DEFAULT_CARD_WIDTH / 2.0 - marginX, DEFAULT_CARD_HEIGHT /
+
2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
+
                                                  // top right
+
  (*vertices)[2].set( DEFAULT_CARD_WIDTH / 2.0 - marginX, DEFAULT_CARD_HEIGHT /
+
2.0 - marginY, EPSILON_Z);
+
                                                  // top left
+
  (*vertices)[3].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX, DEFAULT_CARD_HEIGHT /
+
2.0 - marginY, EPSILON_Z);
+
  geom->setVertexArray(vertices);
+
  
  Vec2Array* texcoords = new Vec2Array(4);
+
    newcamera->setClearMask(oldcamera->getClearMask());
  (*texcoords)[0].set(0.0, 0.0);
+
    newcamera->setColorMask(oldcamera->getColorMask());
  (*texcoords)[1].set(1.0, 0.0);
+
    newcamera->setTransformOrder(oldcamera->getTransformOrder());
  (*texcoords)[2].set(1.0, 1.0);
+
  (*texcoords)[3].set(0.0, 1.0);
+
  geom->setTexCoordArray(0,texcoords);
+
  
  Vec3Array* normals = new Vec3Array(1);
+
// just inherit the main cameras view
  (*normals)[0].set(0.0f, 0.0f, 1.0f);
+
newcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
  geom->setNormalArray(normals);
+
osg::Matrixd proj = oldcamera->getProjectionMatrix();
  geom->setNormalBinding(Geometry::BIND_OVERALL);
+
newcamera->setProjectionMatrix(proj);
 +
osg::Matrixd view = oldcamera->getViewMatrix();
 +
newcamera->setViewMatrix(view);
  
  Vec4Array* colors = new Vec4Array(1);
+
// set viewport
  (*colors)[0].set(1.0, 1.0, 1.0, 1.0);
+
newcamera->setViewport(0, 0, w, h);
  geom->setColorArray(colors);
+
  geom->setColorBinding(Geometry::BIND_OVERALL);
+
  
  geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS, 0, 4));
+
// set the camera to render before the main camera.
 +
newcamera->setRenderOrder(osg::Camera::POST_RENDER);
  
  // Texture:
+
// tell the camera to use OpenGL frame buffer object where supported.
  StateSet* stateset = geom->getOrCreateStateSet();
+
newcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
  stateset->setMode(GL_LIGHTING, StateAttribute::OFF);
+
  stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);
+
  stateset->setTextureAttributeAndModes(0, _icon, StateAttribute::ON);
+
  
  return geom;
+
// attach the texture and use it as the color buffer.
}
+
newcamera->attach(osg::Camera::COLOR_BUFFER, shot);
</pre>
+
  
===Load and display an image from disk===
+
osg::ref_ptr<osg::Node> root_node = viewer->getSceneData();
  
Here is sample code to create a geode (imageGeode) with an image which gets loaded from disk. OpenGL's size limitation for textures applies (usually 4096x4096 pixels). The image might have to have powers of two edges, but that limitation should not hold anymore on newer graphics cards.
+
    // add subgraph to render
 +
    newcamera->addChild(root_node.get());
  
<pre>
+
    //Need to make it part of the scene :
/** Loads image file into geode; returns NULL if image file cannot be loaded */
+
    viewer->setSceneData(newcamera.get());
Geode* createImageGeode(const char* filename)
+
{
+
  // Create OSG image:
+
  Image* image = new Image();
+
  image = osgDB::readImageFile(filename);
+
  
  // Create OSG texture:
+
    //I make it frame
  if (image)
+
viewer->frame();
  {
+
if (doubleBuffer)
    imageTexture = new Texture2D();
+
{
    imageTexture->setImage(image);
+
// double buffered so two frames
  }
+
viewer->frame();
  else
+
}
  {
+
    std::cerr << "Cannot load image file " << filename << std::endl;
+
    delete image;
+
    return NULL;
+
  }
+
  
  // Create OSG geode:
+
bool ret = osgDB::writeImageFile(*shot, filename);;
  imageGeode = new Geode();
+
 
  imageGeode->addDrawable(createImageGeometry());
+
//Reset the old data to the sceneView, so it doesn´t always render to image:
  return imageGeode;
+
    viewer->setSceneData(root_node.get());
 +
 
 +
// need to reset to regular frame for camera manipulator to work properly
 +
viewer->frame();
 +
 
 +
viewer->home();
 +
 
 +
return ret;
 
}
 
}
  
/** Used by createImageGeode() */
+
bool osge::BracketCapture(const char *filebase, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
Geometry* createImageGeometry()
+
 
{
 
{
  const float WIDTH  = 3.0f;
+
osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();
  const float HEIGHT = 2.0f;
+
  Geometry* geom = new Geometry();
+
  
  // Create vertices:
+
// take persp screenshot
  Vec3Array* vertices = new Vec3Array(4);
+
char filename[2048];
  (*vertices)[0].set(-WIDTH / 2.0, HEIGHT / 2.0, 0); // bottom left
+
sprintf(filename, "%s_persp.jpg", filebase);
  (*vertices)[1].set( WIDTH / 2.0, HEIGHT / 2.0, 0); // bottom right
+
bool ret = ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);
  (*vertices)[2].set( WIDTH / 2.0, HEIGHT / 2.0, 0); // top right
+
  (*vertices)[3].set(-WIDTH / 2.0, HEIGHT / 2.0, 0); // top left
+
  geom->setVertexArray(vertices);
+
  
  // Create texture coordinates for image texture:
  Vec2Array* texcoords = new Vec2Array(4);
  (*texcoords)[0].set(0.0, 0.0);
  (*texcoords)[1].set(1.0, 0.0);
  (*texcoords)[2].set(1.0, 1.0);
  (*texcoords)[3].set(0.0, 1.0);
  geom->setTexCoordArray(0,texcoords);

  // Create normals:
  Vec3Array* normals = new Vec3Array(1);
  (*normals)[0].set(0.0f, 0.0f, 1.0f);
  geom->setNormalArray(normals);
  geom->setNormalBinding(Geometry::BIND_OVERALL);

  // Create colors:
  Vec4Array* colors = new Vec4Array(1);
  (*colors)[0].set(1.0, 1.0, 1.0, 1.0);
  geom->setColorArray(colors);
  geom->setColorBinding(Geometry::BIND_OVERALL);

  geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS, 0, 4));

  // Set texture parameters:
  StateSet* stateset = geom->getOrCreateStateSet();
  stateset->setMode(GL_LIGHTING, StateAttribute::OFF); // make texture visible independent of lighting
  stateset->setRenderingHint(StateSet::TRANSPARENT_BIN); // only required for translucent images
  stateset->setTextureAttributeAndModes(0, imageTexture, StateAttribute::ON);

  return geom;
}
</pre>

===Backtrack all nodes up to top of scene graph===

<pre>
osg::NodePath path = getNodePath();
ref_ptr<osg::StateSet> state = new osg::StateSet;
for (osg::NodePath::iterator it = path.begin(); it != path.end(); ++it)
{
  if ((*it)->getStateSet())
  {
    state->merge(*(*it)->getStateSet());
  }
}
</pre>

	osg::Camera *camera = viewer->getCamera();

	// take front, right ortho's

	// backup projection matrix
	osg::Quat originalRotation = manipulator->getRotation();
	osg::Matrix originalProjection = camera->getProjectionMatrix();

	// set to ortho
	// top
	osg::Quat topRotation(0,0,0,1);
	// front
	osg::Quat frontRotation(1,0,0,1);
	{
		osg::Vec3 axis(0,1,0);
		double angle = osg::PI;
		osg::Quat rot;
		rot.makeRotate(angle, axis);
		frontRotation = rot * frontRotation;
	}
	// right
	osg::Quat rightRotation(0.5, 0.5, 0.5, 0.5);

	// set to ortho
	osg::Vec3 eye, center, up;
	camera->getViewMatrixAsLookAt(eye,center,up);
	double fovy, ratio, near, far;
	// assumes captured in perspective
	camera->getProjectionMatrixAsPerspective(fovy, ratio, near, far);
	float distance = eye.length();
	float top = distance * std::tan(fovy * osg::PI/180.f * 0.5);
	float right = top * ratio;
	camera->setProjectionMatrixAsOrtho(-right, right, -top, top, 0.1, 100);

	// bracket 3 views
	manipulator->setRotation(topRotation);
	sprintf(filename, "%s_top.jpg", filebase);
	// takes a frame to update the camera from the manipulator
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(frontRotation);
	sprintf(filename, "%s_front.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(rightRotation);
	sprintf(filename, "%s_right.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	// restore the original projection and rotation
	camera->setProjectionMatrix(originalProjection);
	manipulator->setRotation(originalRotation);
	viewer->frame();

	return ret;
}
</pre>
  
The second solution is from an email thread at http://osgcvs.no-ip.com/osgarchiver/archives/April2007/0083.html using wxWindows. The basic idea is to use osg::Camera, which allows you to take a screenshot at higher than the physical display resolution.

<pre>
Hi,

I have solved this by setting the HUD-Camera to "NESTED_RENDER" and
putting all geometries of the HUD-Node into the transparent bin.

So this is how it works:

I have a sceneView with scene-Data.

I remove the scene-Data from the sceneView, add it to a cameraNode and
then add this cameraNode to the sceneView.

Then I update the sceneView, the cameraNode renders to the image, and
then I remove the cameraNode again and put the sceneData into the
sceneView back again.

The trouble was: I wanted to save work by constructing the cameraNode
with the copy constructor starting with the original sceneView's camera.
This was not a good idea, probably because the renderToImage could not
be set to Image after being constructed with the copy constructor.

So anyone who would like to have a simple, high-res screenshot, here is
the complete source:

    shot = new osg::Image();

    // This is wxWidgets stuff to get the image ratio:
    int w = 0; int h = 0;
    GetClientSize(&w, &h);

    int newSize = (int) wxGetNumberFromUser(_("Enter the width of the image in pixels: "), _("Resolution:"), _("Resolution"), w, 300, 5000);
    if (newSize == -1)
        return false;

    float ratio = (float)w/(float)h;
    w = newSize;
    h = (int)((float)w/ratio);

    shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE);

    osg::ref_ptr<osg::Node> subgraph = TheDocument->RootGroup.get();

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;

    osg::ref_ptr<osg::Camera> oldcamera = sceneView->getCamera();
    // Copy the settings from the sceneView camera to get exactly the view the user sees at the moment:
    camera->setClearColor(oldcamera->getClearColor());
    camera->setClearMask(oldcamera->getClearMask());
    camera->setColorMask(oldcamera->getColorMask());
    camera->setTransformOrder(oldcamera->getTransformOrder());
    camera->setProjectionMatrix(oldcamera->getProjectionMatrix());
    camera->setViewMatrix(oldcamera->getViewMatrix());

    // set view
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);

    // set viewport
    camera->setViewport(0,0,w,h);

    // set the camera to render after the main camera.
    camera->setRenderOrder(osg::Camera::POST_RENDER);

    // tell the camera to use the OpenGL frame buffer object where supported.
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

    camera->attach(osg::Camera::COLOR_BUFFER, shot.get());

    // add subgraph to render
    camera->addChild(subgraph.get());
    //camera->addChild(TheDocument->GetHUD().get());

    // Need to make it part of the scene:
    sceneView->setSceneData(camera.get());

    // Make it frame:
    sceneView->update();
    sceneView->cull();
    sceneView->draw();

    // Write the image the wxWidgets way, which works better for me:
    wxImage img;
    img.Create(w, h);
    img.SetData(shot->data());
    // Release so the Image destructor doesn't complain:
    shot.release();

    wxImage i2 = img.Mirror(false);
    i2.SaveFile(filename);

    // Reset the old data to the sceneView, so it doesn't always render to image:
    sceneView->setSceneData(subgraph.get());

    // This would work, too:
    //return osgDB::writeImageFile(*shot, filename.c_str());

    return true;
</pre>

===Taking a screenshot from the command line===

* run application on visualtest02 and bring up desired image, freeze head tracking
* log on to coutsound
* Make sure screenshot is taken of visualtest02: setenv DISPLAY :0.0
* Take screenshot: xwd -root -out <screenshot_filename.xwd>
* Convert image file to TIFF: convert <screenshot_filename.xwd> <screenshot_filename.tif>
  
===Return a ref_ptr from a function===

Here is a safe way to return a ref_ptr type from a function.

<pre>
osg::ref_ptr<osg::Group> makeGroup(...Some arguments..)
{
   osg::ref_ptr<osg::MatrixTransform> mt = new MatrixTransform();
   // ...some operations...
   return mt.get();
}
</pre>

Also check out this link to learn more about how to use ref_ptr:
http://donburns.net/OSG/Articles/RefPointers/RefPointers.html

===Message Passing===

This is how you can send a message from the master to all nodes in the rendering cluster. These functions are defined in covise/src/renderer/OpenCOVER/kernel/coVRMSController.h.

<pre>
if(coVRMSController::instance()->isMaster())
{
   coVRMSController::instance()->sendSlaves((char*)&appReceiveBuffer,sizeof(receiveBuffer));
}
else
{
   coVRMSController::instance()->readMaster((char*)&appReceiveBuffer,sizeof(receiveBuffer));
}
</pre>

The above functions make heavy use of the class coVRSlave (covise/src/renderer/OpenCOVER/kernel/coVRSlave.h). This class uses the Socket class, defined in /home/covise/covise/src/kernel/net/covise_socket.h, to implement the communication between nodes. Using a different port, the Socket class can also be used to implement communication between rendering nodes without going through the master node, or to communicate with a computer outside of the visualization cluster.

Notice that the above functions are for communication <b>WITHIN</b> a rendering cluster. In order to send a message to a remote OpenCOVER (running on another rendering cluster connected via a WAN) you would use cover->sendMessage. The source code for this function is at covise/src/renderer/OpenCOVER/kernel/coVRPluginSupport.h.
===How to Add Third Party Libraries to OpenCOVER===

* Put the library sources in ~covise/covise/extern_libs/src
* Build the library
* Two options for the installation (do NOT install to /usr/lib64):
** 1) Install the .so files in ~covise/covise/extern_libs/lib64 and the .h files in ~covise/covise/extern_libs/include
** 2) Leave the .so and .h files where they are and register their paths in OpenCOVER:
*** Give the library a unique name, for instance 'bluetooth'.
*** Add an entry to /home/covise/covise/common/mkspecs/config-extern.pri, for instance:
<pre>
bluetooth {
  INCLUDEPATH *= $$(BLUETOOTH_INC)
  LIBS += $$(BLUETOOTH_LIB)
}
</pre>
*** Add environment variables to .cshrc:
<pre>
setenv BLUETOOTH_INC $EXTERNLIBS/src/bluetooth/include
setenv BLUETOOTH_LIB "-L$EXTERNLIBS/src/bluetooth/lib64 -lbluetooth"
</pre>
*** Add the library name to the plugin's .pro file:
<pre>
CONFIG          *= coappl colib openvrui math vrml97 bluetooth
</pre>

===Moving an object with the pointer===

Here is some sample code to move an object. object2w is the object's transformation matrix in world space. lastWand2w and wand2w are the wand matrices from the previous and current frames, respectively, from cover->getPointer().

<pre>
void move(Matrix& lastWand2w, Matrix& wand2w)
{
    // Compute difference matrix between last and current wand:
    Matrix invLastWand2w = Matrix::inverse(lastWand2w);
    Matrix wDiff = invLastWand2w * wand2w;

    // Perform move:
    _node->setMatrix(object2w * wDiff);
}
</pre>

===Covise Animation Manager===

Covise's Animation Manager provides a simple way to mark time. It allows the user to navigate through a series of frames either automatically at a variable frame rate, or manually by stepping forwards or backwards.

First, include the following header:
<pre>
  #include <kernel/coVRAnimationManager.h>
</pre>

Here is a simple example of an Animation Manager setup.
<pre>
  coVRAnimationManager::instance()->setAnimationSpeed(int framerate);  // Set the default frame rate for playback
  coVRAnimationManager::instance()->enableAnimation(bool play);        // Set the animation to play/pause by default
  coVRAnimationManager::instance()->setAnimationFrame(int frame);      // Set the first frame
  coVRAnimationManager::instance()->showAnimMenu(bool on);             // Add the "Animation" submenu to the OpenCOVER main menu, or remove it
  coVRAnimationManager::instance()->setNumTimesteps(int steps, this);  // Set the total number of frames to cycle through
</pre>

This can be done at initialization or at any point during program execution.

Once the Animation Manager has been set up, it can provide information to the rest of your program. Calling
<pre>
  coVRAnimationManager::instance()->getAnimationFrame();
</pre>
will return the current frame number. This frame number automatically loops back to zero after it reaches the value provided to setNumTimesteps.

Finally, you can jump to a specific frame by calling
<pre>
  coVRAnimationManager::instance()->setAnimationFrame(int framenumber);
</pre>
  
===Occlusion Culling===

Occlusion culling removes objects which are hidden behind other objects in the culling stage so they never get rendered, thus resulting in a higher rendering rate. In covise/src/renderer/OpenCOVER/kernel/VRViewer.cpp, the SceneView is being created. By default CullingMode gets set like this:

An alternative to occlusion culling is to use LOD (level of detail) nodes in the scene graph. This means that when you are farther away, fewer polygons get rendered. See the [http://www.openscenegraph.com/index.php?page=OSGExp.OSGLOD osglod example] for inspiration.
 
 
 
 
 
===Using Shared Memory===
 
 
A great tutorial page is at: http://www.ecst.csuchico.edu/~beej/guide/ipc/shmem.html
 
 
===Find out which node a plugin is running on===
 
 
The following routine works on our Varrier system to find out the host number, starting with 0. The node names are vellum1-10, vellum2-10, etc.
 
 
<pre>
#include <unistd.h>
#include <stdio.h>

int getNodeIndex()
{
  char hostname[33];
  gethostname(hostname, 32);
  int node;
  sscanf(hostname, "vellum%d-10", &node);  // this needs to be adjusted to the naming convention
  return (node-1);  // names start with 1
}
</pre>
 
 
 
===Hide or select different pointer===
 
 
The pointer is by default a line coming out of the user's hand-held device. Icons like an airplane, a steering wheel, slippers, or a magnifying glass indicate the current navigation mode. This is the mode the left mouse button will use. To hide the pointer, remove all geodes which are part of the pointer:

<pre>
  while(VRSceneGraph::instance()->getHandTransform()->getNumChildren())
    VRSceneGraph::instance()->getHandTransform()->removeChild(VRSceneGraph::instance()->getHandTransform()->getChild(0));
</pre>
 
 
To select a different pre-defined pointer:
 
<pre>
 
  VRSceneGraph::instance()->setPointerType(<new_type>);
 
</pre>
 
 
===Hide the mouse cursor===
 
 
<pre>
 
  osgViewer::Viewer::Windows windows;
 
  viewer.getWindows(windows);
 
  for(osgViewer::Viewer::Windows::iterator itr = windows.begin(); itr != windows.end(); ++itr)
 
  {
 
    (*itr)->useCursor(false);
 
  }
 
</pre>
 

Latest revision as of 11:32, 22 May 2013


COVISE Tutorial

There is a six-lecture video course on Youtube at:

Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6

The accompanying slides are at:

Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6

Some of the slides of this course have been generously provided by Dr. Uwe Woessner from HLRS.

General information about COVISE modules

The entire COVISE installation, including all plugins, is located at /home/covise/covise/. Each user should have a link in their home directory named 'covise' which points to this directory. There should also be a link 'plugins' which points to the plugins directory: /home/covise/plugins/. You should put all the plugins you write into the directory: plugins/calit2/.

Other directories you might need throughout the project are:

  • covise/src/renderer/OpenCOVER/kernel/: core OpenCOVER functionality, especially coVRPluginSupport.cpp
  • covise/src/kernel/OpenVRUI/: OpenCOVER's user interface elements. useful documentation in doc/html subdirectory; browse by running Firefox on index.html
  • covise/src/renderer/OpenCOVER/osgcaveui/: special CaveUI functions, not required in class but useful

You compile your plugin with the 'make' command, or 'make verbose' if you want to see the full compiler commands. This creates a library file in covise/rhel5/lib/OpenCOVER/plugins/. COVISE uses qmake, so the makefile is generated from the .pro file. The name of the plugin is determined by the project name in the .pro file in your plugin directory (first line, keyword TARGET). I defaulted TARGET to be p1<username> for project #1. It is important that the TARGET name be unique, or else you will overwrite somebody else's plugin. You can change the name of your source files or add additional source files (.cpp, .h) by listing them after the SOURCES tag in the .pro file.

You run OpenCOVER by typing 'opencover' anywhere at the command line. You quit opencover by hitting the 'q' key on the keyboard or ctrl-c in the shell window you started it from.

Good examples for plugins are plugins/Volume and plugins/PDBPlugin. Look at the code in these plugins to find out how to add menu items and how to do interaction. Note that there are two ways to do interaction: with pure OpenCOVER routines, or with OSGCaveUI. In this course we will try to use only OpenCOVER's own routines. Plugins do not get loaded by opencover before they are configured in the configuration file.

Important Directories and URLs

  • /home/covise/covise/config: configuration files
  • /home/covise/covise/src/renderer/OpenCOVER/plugins/calit2: Calit2 plugin code
  • /home/covise/covise/extern_libs/src/OpenSceneGraph-2.8.2: OpenSceneGraph installation directory
  • /home/covise/covise/src/renderer/OpenCOVER: OpenCOVER source directory; core functions are in kernel subdirectory
  • /home/covise/covise/src/kernel/OpenVRUI: virtual reality user interface widgets
  • http://www.openscenegraph.org: main OSG web site
  • http://openscenegraph.org/archiver/osg-users: OSG email archive. If you have an OSG problem, this is a good place to start.

Covise configuration files

The configuration files are in the directory /home/covise/covise/config. The most important files are:

  • ivl.xml: general configuration information for all lab machines on the 2nd floor and room 6307
  • NODENAME.xml: node specific information, e.g., sand.xml for sand.ucsd.edu
  • cwall.xml: C-wall specific configuration information (cwall-1 and cwall-2.ucsd.edu)
  • XML configuration files can be syntax-validated with xmllint, e.g. xmllint --debug --noout sand.xml

The configuration files are XML files which can be edited with any ASCII text editor (vi, emacs, nedit, gedit, ...). There are sections specific for certain machines. To load your plugin (e.g., MyPlugin) on one or more machines (e.g., chert and sand), you need to add or modify a section to contain:

 <LOCAL host="chert,sand">
   <COVER>
     <Plugin>
       <MyPlugin value="on" />
     </Plugin>
   </COVER>
 </LOCAL>

Screen configuration:

OpenCOVER requires the following global tags to be configured for a proper display configuration: PipeConfig, WindowConfig, and ChannelConfig. Another required tag, ScreenConfig, needs to be set on a node-by-node basis, because it differs for every screen. The following example configures a cluster with one graphics card (pipe) per rendering node, one large desktop in Twinview mode (window) of size 3840x1200 pixels, and two separate rendering channels, each 1920x1200 pixels.

 <PipeConfig>
   <Pipe display=":0.0" name="0" screen="0" pipe="0" /> 
 </PipeConfig>
 <WindowConfig>
   <Window width="3840" comment="MAIN" window="0" pipeIndex="0" height="1200" left="0" bottom="0" name="0" decoration="false" /> 
 </WindowConfig>
 <ChannelConfig>
   <Channel windowIndex="0" stereoMode="LEFT" channel="0" left="0" width="1920" bottom="0" height="1200" comment="C_A" name="0" /> 
   <Channel windowIndex="0" stereoMode="LEFT" channel="1" left="1920" width="1920" bottom="0" height="1200" comment="C_B" name="1" /> 
 </ChannelConfig>

The display parameters for the tiles are set on a per-node basis with the ScreenConfig tag. The following example configures two tiles for node 'tile-0-0'. On each tile, the visible screen dimensions are 520x325 millimeters. The centers of the monitors are offset from the world coordinate system horizontally by -1100 and -570 millimeters, respectively, and -360 millimeters vertically. A proper configuration file will list a section like the one below for every rendering node.

 <LOCAL host="tile-0-0.local">
   <COVER>
     <ScreenConfig>
       <Screen width="520" h="0.0" height="325" p="0.0" originX="-1100" comment="S_A" originY="0" r="0.0" name="0" originZ="-360" screen="0" /> 
       <Screen width="520" h="0.0" height="325" p="0.0" originX="-570"  comment="S_B" originY="0" r="0.0" name="1" originZ="-360" screen="1" /> 
     </ScreenConfig>
   </COVER>
 </LOCAL>

The head node needs to be configured in the same way. If you are navigating with a mouse on the head node, you probably want to configure a larger screen size for the head node, so that it covers a larger area of the tiled screen. You can do this by adjusting the width and height values, but make sure that the aspect ratio of the new values matches that of the OpenCOVER window.

Create new plugin using SVN and copy over to StarCAVE

To create new plugin:

> cd ~/plugins/calit2
> mkdir your_new_plugin_folder
> svn add your_new_plugin_folder
> cd your_new_plugin_folder
> svn commit

Move to StarCAVE:

Log in to the StarCAVE
> cd ~/plugins/calit2
> svn update your_new_plugin_folder

From now on, just use svn update and commit inside your plugin folder.

Change Default VRUI Menu Position/Size

Let WindowTitle be the title of the window (the text in its title bar). Then add the following section to the config file:

  <COVER>
    <VRUI>
      <WindowTitle>
        <Menu>
          <Position x="0" y="0" z="0" />
          <Size value="1.0" />
        </Menu>
      </WindowTitle>
    </VRUI>
  </COVER>

Configure Lighting

By default there is a light source from 45 degrees up behind the viewer. To change this the following parameters can be set in the config file:

<COVER>
  <Lights>
    <Sun>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Sun>
    <Lamp>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Lamp>
    <Light1>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Light1>
    <Light2>
      <Specular value="on" r="1.0" g="1.0" b="1.0" />
      <Diffuse value="on" r="1.0" g="1.0" b="1.0" />
      <Ambient value="on" r="0.3" g="0.3" b="0.3" />
      <Position value="on" x="0.0" y="0.0" z="10000.0" />
      <Spot value="on" x="0.0" y="0.0" z="1.0" expo="0.0" angle="180.0" />
    </Light2>
  </Lights>
</COVER>

Debugging OpenCover Plugins

OpenCover code can be debugged with gdb. If it throws a 'Segmentation Fault' make sure the core is getting dumped with 'unlimit coredumpsize'. Then you should find a file named 'core' or 'core.<pid>' in the directory you are running opencover from. Let's assume your latest core file is called core.4567 then you can run gdb with:

  • gdb ~/covise/rhel5/bin/Renderer/OpenCOVER core.4567

Hit RETURN through the startup screens until you get a command prompt. The two most important commands are:

  • bt: to display the stack trace. The topmost call is the one which caused the segmentation fault.
  • quit: to quit gdb

Documentation for gdb is at: http://sourceware.org/gdb/documentation/

Create OpenCOVER Menus

Here is an example which creates a sub-menu off the main OpenCOVER menu with two check boxes.

In header file:

Step #1: Derive plugin class from coMenuListener. Example:

  class MyClass : public coVRPlugin, public coMenuListener

Step #2: Declare attributes for menu items. Example:

  coSubMenuItem*         _myMenuItem;
  coRowMenu*             _myMenu;
  coCheckboxMenuItem*    _myFirstCheckbox, *_mySecondCheckbox;

Step #3: Declare menu callback function. Example:

  void menuEvent(coMenuItem*);

In .cpp file:

Step #4: Create menu in init() callback. Example:

void MyClass::createMenus()
{
  _myMenuItem = new coSubMenuItem("My Menu");
  _myMenu = new coRowMenu("My Menu");
  _myMenuItem->setMenu(_myMenu);

  _myFirstCheckbox = new coCheckboxMenuItem("First Checkbox", true);
  _myMenu->add(_myFirstCheckbox);
  _myFirstCheckbox->setMenuListener(this);

  _mySecondCheckbox = new coCheckboxMenuItem("Second Checkbox", false);
  _myMenu->add(_mySecondCheckbox);
  _mySecondCheckbox->setMenuListener(this);
 
  cover->getMenu()->add(_myMenuItem);
}

Step #5: Create callback function for menu interaction. Example:

void MyClass::menuEvent(coMenuItem* item)
{
  if(item == _myFirstCheckbox)
  {
    _myFirstCheckbox->setState(true);
    _mySecondCheckbox->setState(false);
  }
  if(item == _mySecondCheckbox)
  {
    _myFirstCheckbox->setState(false);
    _mySecondCheckbox->setState(true);
  }
}

Wall Clock Time

If you are going to animate anything, keep in mind that the rendering system runs anywhere between 1 and 100 frames per second, so you can't rely on the time between frames being anything you assume. Instead, you will want to know exactly how much time has passed since you last rendered something, i.e., since your last preFrame() call. You should use cover->frameTime(), or better cover->frameDuration(); these return a double value with the number of seconds (at an accuracy of milli- or even microseconds) passed since the start of the program, or since the last preFrame(), respectively.

Tracker Data

Here is a piece of code to get the pointer (=wand) position (pos1) and a point 1000 millimeters from it (pos2) along the pointer line:

  osg::Vec3 pointerPos1Wld = cover->getPointerMat().getTrans();
  osg::Vec3 pointerPos2Wld = osg::Vec3(0.0, 1000.0, 0.0);
  pointerPos2Wld = pointerPos2Wld * cover->getPointerMat();

This is the way to get the head position in world space:

  Vec3 viewerPosWld = cover->getViewerMat().getTrans();

Or in object space:

  Vec3 viewerPosWld = cover->getViewerMat().getTrans();
  Vec3 viewerPosObj = viewerPosWld * cover->getInvBaseMat();

Taking a Screenshot from within OpenCOVER

The first solution is courtesy of Emmett McQuinn. He says: "This code is fairly robust in our application, it captures the screen with the proper orientation when a trackball manipulator is used and returns to the proper orientation so the end user's view is never modified. It also preserves the correct aspect ratio and works with double buffering. The routine takes an offscreen framebuffer capture, which can be higher than the native display resolution (up to 8k on modern cards)."

#include "ScreenCapture.h"
#include <osgGA/TrackballManipulator>
#include <osgDB/WriteFile>
#include <assert.h>
#include <type/emath.h>
#include <cmath>

bool osge::ScreenCapture(const char *filename, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
{
	// current technologies (quadro FX 5800) can support up to 8192x8192 framebuffer
	//	the frame buffer does not have to be square
	//	most recent cards should be able to do 4096x4096
	const int max_pixels = 8192;

	assert(width <= max_pixels);
	assert(height <= max_pixels);

	osg::Image *shot = new osg::Image();

	int w = width;
	int h = height;

    osg::ref_ptr<osg::Camera> newcamera = new osg::Camera;
    osg::ref_ptr<osg::Camera> oldcamera = viewer->getCamera();

	// if we want to keep the native ratio rather than given pixels
	if (keepRatio)
	{
		// will never be larger than the parameters width and height
		double fov, ratio, near, far;
		oldcamera->getProjectionMatrixAsPerspective(fov, ratio, near, far);

		if (emath::isnan(ratio) || (std::abs(ratio -0.0) < 0.01) || (ratio < 0))
		{
			// orthographic projection
			double left, right, bottom, top;
			oldcamera->getProjectionMatrixAsOrtho(left, right, bottom, top, near, far);
			float dw = right - left;
			float dh = top - bottom;
			ratio = dw / dh;
		}

		const int max_w = width;
		w = h * ratio;
		if (w > max_w)
		{
			w = max_w;
			h = w / ratio;
		}
	}

	shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE);

	// store old camera settings
	osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();
	osg::Vec3 eye, center, up;
    oldcamera->getViewMatrixAsLookAt(eye,center,up);
	center = manipulator->getCenter();
	manipulator->setHomePosition(eye, center, up);
    //Copy the settings from sceneView-camera to get exactly the view the user sees at the moment:
	//newcamera->setClearColor(oldcamera->getClearColor());

	newcamera->setClearColor(osg::Vec4(0,0,0,0));

    newcamera->setClearMask(oldcamera->getClearMask());
    newcamera->setColorMask(oldcamera->getColorMask());
    newcamera->setTransformOrder(oldcamera->getTransformOrder());

	// just inherit the main cameras view
	newcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
	osg::Matrixd proj = oldcamera->getProjectionMatrix();
	newcamera->setProjectionMatrix(proj);
	osg::Matrixd view = oldcamera->getViewMatrix();
	newcamera->setViewMatrix(view);

	// set viewport
	newcamera->setViewport(0, 0, w, h);

	// set the camera to render before the main camera.
	newcamera->setRenderOrder(osg::Camera::POST_RENDER);

	// tell the camera to use OpenGL frame buffer object where supported.
	newcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

	// attach the texture and use it as the color buffer.
	newcamera->attach(osg::Camera::COLOR_BUFFER, shot);

	osg::ref_ptr<osg::Node> root_node = viewer->getSceneData();

     // add subgraph to render
    newcamera->addChild(root_node.get());

    // need to make it part of the scene:
    viewer->setSceneData(newcamera.get());

	// render a frame
	viewer->frame();
	if (doubleBuffer)
	{
	// double buffered so two frames
		viewer->frame();
	}

	bool ret = osgDB::writeImageFile(*shot, filename);

	//Reset the old data to the sceneView, so it doesn't always render to image:
    viewer->setSceneData(root_node.get());

	// need to reset to regular frame for camera manipulator to work properly
	viewer->frame();

	viewer->home();

	return ret;
}

bool osge::BracketCapture(const char *filebase, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
{
	osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();

	// take persp screenshot
	char filename[2048];
	sprintf(filename, "%s_persp.jpg", filebase);
	bool ret = ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	osg::Camera *camera = viewer->getCamera();


	// take top, front and right orthographic shots

	// back up manipulator rotation and projection matrix
	osg::Quat originalRotation = manipulator->getRotation();
	osg::Matrix originalProjection = camera->getProjectionMatrix();

	// set to ortho
	// top
	osg::Quat topRotation(0,0,0,1);
	// front
	osg::Quat frontRotation(1,0,0,1);
	{
	osg::Vec3 axis(0,1,0);
        double angle = osg::PI;
        osg::Quat rot;
        rot.makeRotate(angle, axis);
        frontRotation = rot * frontRotation;
	}
	// right
	osg::Quat rightRotation(0.5, 0.5, 0.5, 0.5);

	// set to ortho
	osg::Vec3 eye, center, up;
	camera->getViewMatrixAsLookAt(eye,center,up);
	double fovy, ratio, near, far;
	// assumes captured in perspective
	camera->getProjectionMatrixAsPerspective(fovy, ratio, near, far);
	float distance = eye.length();
	float top = (distance) * std::tan(fovy * osg::PI/180.f * 0.5);
	float right = top * ratio;

	camera->setProjectionMatrixAsOrtho(-right, right, -top, top, 0.1, 100);
	// bracket 3 views

	manipulator->setRotation(topRotation);
	sprintf(filename, "%s_top.jpg", filebase);
	// takes a frame to update the camera from the manipulator
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(frontRotation);
	sprintf(filename, "%s_front.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(rightRotation);
	sprintf(filename, "%s_right.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	// set to projection
	camera->setProjectionMatrix(originalProjection);
	manipulator->setRotation(originalRotation);
	viewer->frame();

	return ret;
}
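The orthographic frustum sizing used above (half-height = distance · tan(fovy/2), half-width = half-height · aspect) can be checked in isolation. A minimal standalone sketch, with function names chosen here for illustration:

```cpp
#include <cassert>
#include <cmath>

// Half-height of an orthographic frustum that shows the same vertical
// extent as a perspective view with field of view fovyDeg, measured at
// a plane 'distance' units in front of the eye.
double orthoTop(double fovyDeg, double distance)
{
    const double pi = 3.14159265358979323846;
    return distance * std::tan(fovyDeg * pi / 180.0 * 0.5);
}

// The corresponding half-width just scales by the aspect ratio.
double orthoRight(double fovyDeg, double distance, double aspect)
{
    return orthoTop(fovyDeg, distance) * aspect;
}
```

For a 90 degree field of view at distance 1, the half-height is exactly tan(45°) = 1, which is an easy sanity check before wiring the values into setProjectionMatrixAsOrtho().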

The second solution is from an email thread at http://osgcvs.no-ip.com/osgarchiver/archives/April2007/0083.html and uses wxWidgets. The basic idea is to use an osg::Camera, which allows you to take a screenshot at higher than the physical display resolution.

Hi, 

I have solved this by setting the HUD-Camera to "NESTED_RENDER" and 
putting all geometries of the HUD-Node into the transparent bin. 

So this is how it works: 

I have a sceneView with scene-Data. 

I remove the scene data from the sceneView, add it to a camera node and 
then add this camera node to the sceneView. 
Then I update the sceneView, the camera node renders to the image, and 
then I remove the camera node again and put the scene data back into the 
sceneView. 

The trouble was: I wanted to save work by constructing the camera node 
with the copy constructor, starting with the original sceneView's camera. 
This was not a good idea, probably because renderToImage could not 
be set to the image after being constructed with the copy constructor. 

So anyone who would like to have a simple, high-res screenshot, here is 
the complete source: 

    shot = new osg::Image(); 
    
   //This is wxWidgets-Stuff to get the image ratio: 
    int w = 0; int h = 0; 
    GetClientSize(&w, &h); 
    
    int newSize = (int) wxGetNumberFromUser(_("Enter the width of the image in pixels: "), _("Resolution:"), _("Resolution"), w, 300, 5000 ); 
    if (newSize == -1) 
        return false; 
    
    
    float ratio = (float)w/(float)h; 
    w = newSize; 
    
    
    h = (int)((float)w/ratio); 
    
    shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE); 
    
    osg::ref_ptr<osg::Node> subgraph = TheDocument->RootGroup.get(); 
    
    osg::ref_ptr<osg::Camera> camera = new osg::Camera; 
    
    
    osg::ref_ptr<osg::Camera> oldcamera = sceneView->getCamera(); 
    //Copy the settings from the sceneView camera to get exactly the view the user sees at the moment: 
    camera->setClearColor(oldcamera->getClearColor() ); 
    camera->setClearMask(oldcamera->getClearMask() ); 
    camera->setColorMask(oldcamera->getColorMask() ); 
    camera->setTransformOrder(oldcamera->getTransformOrder() ); 
    camera->setProjectionMatrix(oldcamera->getProjectionMatrix() ); 
    camera->setViewMatrix(oldcamera->getViewMatrix() ); 
    
    // set view 
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF); 


    // set viewport 
    camera->setViewport(0,0,w,h); 


    // set the camera to render after the main camera. 
    camera->setRenderOrder(osg::Camera::POST_RENDER); 


    // tell the camera to use OpenGL frame buffer object where supported. 
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT); 


    
    camera->attach(osg::Camera::COLOR_BUFFER, shot.get()); 
    


    // add subgraph to render 
    camera->addChild(subgraph.get()); 
    //camera->addChild(TheDocument->GetHUD().get() ); 
    //Need to make it part of the scene: 
    sceneView->setSceneData(camera.get()); 
    //Make it frame: 
    
    sceneView->update(); 
    sceneView->cull(); 
    sceneView->draw(); 
    
    //Write the image the wxWidgets-Way, which works better for me: 
    wxImage img; 
    img.Create(w, h); 
    img.SetData(shot->data()); 
    //So the Image destructor doesn't complain: 
    shot.release(); 
    
    wxImage i2 = img.Mirror(false); 
    i2.SaveFile(filename); 
    
    //Reset the old data to the sceneView, so it doesn't always render to image: 
    sceneView->setSceneData(subgraph.get() ); 
    
    //This would work, too: 
    //return osgDB::writeImageFile(*shot, filename.c_str() ); 
    
    return true; 

Another approach for a screenshot is to create an object derived from osg::Geometry, for instance a rectangle, and put the following code in its drawImplementation().

/** Copy the currently displayed OpenGL image to a memory buffer and
  resize the image if necessary.
  @param w,h image size in pixels
  @param data _allocated_ memory space providing w*h*3 bytes, to which the
         resampled image is written
*/
void takeScreenshot(int w, int h, uchar* data)
{
  uchar* screenshot;
  GLint viewPort[4];                              // x, y, width, height of viewport
  int x, y;
  int srcIndex, dstIndex, srcX, srcY;
  int offX, offY;                                 // offsets in source image to maintain aspect ratio
  int srcWidth, srcHeight;                        // actually used area of source image

  // Save GL state:
  glPushAttrib(GL_ALL_ATTRIB_BITS);

  // Prepare reading:
  glGetIntegerv(GL_VIEWPORT, viewPort);
  screenshot = new uchar[viewPort[2] * viewPort[3] * 3];
  glReadBuffer(GL_FRONT);                         // read image data from the front buffer
  glPixelStorei(GL_PACK_ALIGNMENT, 1);            // Important command: default value is 4, so allocated memory wouldn't suffice

  // Read image data:
  glReadPixels(0, 0, viewPort[2], viewPort[3], GL_RGB, GL_UNSIGNED_BYTE, screenshot);

  // Restore GL state:
  glPopAttrib();

  // Maintain aspect ratio:
  if (viewPort[2]==w && viewPort[3]==h)
  {                                               // movie image exactly matches OpenGL window size?
    srcWidth  = viewPort[2];
    srcHeight = viewPort[3];
    offX = 0;
    offY = 0;
  }
  else if ((float)viewPort[2] / (float)viewPort[3] > (float)w / (float)h)
  {                                               // movie image more narrow than OpenGL window?
    srcHeight = viewPort[3];
    srcWidth = srcHeight * w / h;
    offX = (viewPort[2] - srcWidth) / 2;
    offY = 0;
  }
  else                                            // movie image wider than OpenGL window
  {
    srcWidth = viewPort[2];
    srcHeight = h * srcWidth / w;
    offX = 0;
    offY = (viewPort[3] - srcHeight) / 2;
  }

  // Now resample image data:
  for (y=0; y<h; ++y)
  {
    for (x=0; x<w; ++x)
    {
      dstIndex = 3 * (x + (h - y - 1) * w);
      srcX = offX + srcWidth  * x / w;
      srcY = offY + srcHeight * y / h;
      srcIndex = 3 * (srcX + srcY * viewPort[2]);
      memcpy(data + dstIndex, screenshot + srcIndex, 3);
    }
  }
  delete[] screenshot;
}
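The aspect-preserving source-rectangle selection above can be factored into a small standalone function and tested on its own (the struct and function names here are chosen for illustration, not part of the original code):

```cpp
#include <cassert>

// Largest centered region of a srcW x srcH framebuffer that has the same
// aspect ratio as the dstW x dstH output image (same integer math as above).
struct SrcRect { int x, y, w, h; };

SrcRect centeredSourceRect(int srcW, int srcH, int dstW, int dstH)
{
    SrcRect r;
    if ((float)srcW / (float)srcH > (float)dstW / (float)dstH)
    {
        // source wider than destination: crop left and right
        r.h = srcH;
        r.w = srcH * dstW / dstH;
        r.x = (srcW - r.w) / 2;
        r.y = 0;
    }
    else
    {
        // source narrower (or same ratio): crop top and bottom
        r.w = srcW;
        r.h = srcW * dstH / dstW;
        r.x = 0;
        r.y = (srcH - r.h) / 2;
    }
    return r;
}
```

For example, extracting a square movie frame from a 1000x500 window selects the centered 500x500 region starting at x = 250.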

Message Passing

This is how you can send a message from the master to all nodes in the rendering cluster. These functions are defined in covise/src/renderer/OpenCOVER/kernel/coVRMSController.h.

if(coVRMSController::instance()->isMaster())
{
  coVRMSController::instance()->sendSlaves((char*)&appReceiveBuffer, sizeof(appReceiveBuffer));
}
else
{
  coVRMSController::instance()->readMaster((char*)&appReceiveBuffer, sizeof(appReceiveBuffer));
}

The above functions make heavy use of the class coVRSlave (covise/src/renderer/OpenCOVER/kernel/coVRSlave.h), which uses the Socket class to implement the communication between nodes. The Socket class is defined in /home/covise/covise/src/kernel/net/covise_socket.h. Using a different port, it can also implement direct communication between rendering nodes without going through the master node, or with a computer outside of the visualization cluster.

Notice that the above functions are for communication WITHIN a rendering cluster. In order to send a message to a remote OpenCOVER (running on another rendering cluster connected via a WAN) you would use cover->sendMessage. The source code for this function is at covise/src/renderer/OpenCOVER/kernel/coVRPluginSupport.h.
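Since sendSlaves() and readMaster() transmit raw bytes, the message should be a plain, fixed-size struct that has the same layout on every node. A minimal standalone sketch of the packing idea (AppMessage and the helper names are hypothetical, not part of the COVISE API):

```cpp
#include <cassert>
#include <cstring>

// sendSlaves()/readMaster() move raw bytes, so a message should be a
// plain, fixed-size struct with identical layout on all nodes.
struct AppMessage
{
    int   frame;
    float value;
};

// Master side: copy the struct into the byte buffer that gets sent.
void packMessage(const AppMessage &msg, char *buffer)
{
    std::memcpy(buffer, &msg, sizeof(AppMessage));
}

// Slave side: reconstruct the struct from the received bytes.
AppMessage unpackMessage(const char *buffer)
{
    AppMessage msg;
    std::memcpy(&msg, buffer, sizeof(AppMessage));
    return msg;
}
```

This is also why the sizeof() argument passed to sendSlaves() and readMaster() must match the struct actually being sent: a mismatch silently truncates or over-reads the message.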

How to Add Third Party Libraries to OpenCOVER

  • Put the library sources in ~covise/covise/extern_libs/src
  • Build the library
  • Two options for the installation (do NOT install to /usr/lib64):
    • 1) Install the .so files in ~covise/covise/extern_libs/lib64 and the .h files in ~covise/covise/extern_libs/include
    • 2) Leave the .so and .h files where they are and register their paths in OpenCOVER:
      • Give the library a unique name, for instance 'bluetooth'.
      • Add an entry to /home/covise/covise/common/mkspecs/config-extern.pri, for instance:
bluetooth {
   INCLUDEPATH *= $$(BLUETOOTH_INC)
   LIBS += $$(BLUETOOTH_LIB)
}
      • Add environment variables to .cshrc:
setenv BLUETOOTH_INC $EXTERNLIBS/src/bluetooth/include
setenv BLUETOOTH_LIB "-L$EXTERNLIBS/src/bluetooth/lib64 -lbluetooth"
      • Add library name to plugin's .pro file:
CONFIG          *= coappl colib openvrui math vrml97 bluetooth

Moving an object with the pointer

Here is some sample code to move an object. object2w is the object's transformation matrix in world space. lastWand2w and wand2w are the wand matrices from the previous and current frames, respectively, from cover->getPointer().

void move(Matrix& lastWand2w, Matrix& wand2w)
{
    // Compute difference matrix between last and current wand:
    Matrix invLastWand2w = Matrix::inverse(lastWand2w);
    Matrix wDiff = invLastWand2w * wand2w;

    // Perform move:
    _node->setMatrix(object2w * wDiff);
}
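To see why the difference matrix works, consider the one-dimensional analog where a "matrix" is just a translation offset: inverse(lastWand2w) * wand2w reduces to wand - lastWand, and applying that delta moves the object by exactly the wand's motion, independent of absolute positions. A toy sketch (names chosen here for illustration):

```cpp
#include <cassert>

// 1D analog of the difference-matrix move: transforms are translation
// offsets, inversion is negation, composition is addition.
double diffTransform(double lastWand, double wand)
{
    return wand - lastWand;   // inverse(lastWand2w) * wand2w
}

double applyMove(double object, double delta)
{
    return object + delta;    // object2w * wDiff
}
```

If the wand moves from 2.0 to 3.5, an object at 5.0 ends up at 6.5: it follows the wand's relative motion, which is exactly the behavior the matrix version produces in 3D.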

Covise Animation Manager

COVISE's Animation Manager provides a simple way to keep track of time. It allows the user to navigate through a series of frames either automatically at a variable frame rate, or manually by stepping forwards or backwards.

First, include the following line of code:

  #include <kernel/coVRAnimationManager.h>

Here is a simple example of an Animation Manager setup.

  coVRAnimationManager::instance()->setAnimationSpeed(int framerate);  //Set the default frame-rate for playback
  coVRAnimationManager::instance()->enableAnimation(bool play);        //Set the animation to play/pause by default
  coVRAnimationManager::instance()->setAnimationFrame(int frame);      //Set the first frame
  coVRAnimationManager::instance()->showAnimMenu(bool on);             //Add the "Animation" submenu to the OpenCOVER main menu, or remove it
  coVRAnimationManager::instance()->setNumTimesteps(int steps, this);  //Set the total number of frames to cycle through

This can be done at initialization or at any point during program execution.

The Animation Manager is now set up and can provide information to the rest of your program. Calling

  coVRAnimationManager::instance()->getAnimationFrame();

will return the current frame number. The frame number automatically loops back to zero after it reaches the value provided to setNumTimesteps.

Finally, you can jump to a specific frame by calling

  coVRAnimationManager::instance()->setAnimationFrame(int framenumber);
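The looping behavior described above can be modeled as a simple wrap-around counter. The sketch below mirrors the described behavior (it is not the actual COVISE implementation):

```cpp
#include <cassert>

// Advance the animation frame, wrapping back to 0 after the last timestep,
// as getAnimationFrame() is described to do after setNumTimesteps(n, ...).
int nextFrame(int current, int numTimesteps)
{
    return (current + 1) % numTimesteps;
}
```

With 10 timesteps, frame 9 is followed by frame 0, so any per-frame plugin logic keyed on the frame number must handle that wrap.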

Occlusion Culling

Occlusion culling removes objects that are hidden behind other objects during the culling stage, so they never get rendered, which results in a higher frame rate. The SceneView is created in covise/src/renderer/OpenCOVER/kernel/VRViewer.cpp. By default, the CullingMode is set like this:

  osg::CullStack::CullingMode cullingMode = cover->screens[i].sceneView->getCullingMode();
  cullingMode &= ~(osg::CullStack::SMALL_FEATURE_CULLING);
  cover->screens[i].sceneView->setCullingMode(cullingMode);

There are several culling options available. However, the easiest way to test your culling code is to set the cullingMode to ENABLE_ALL_CULLING.

There isn't any way to automatically add occlusion culling to a scene; you'll need to insert convex planar occluders into your scene yourself. Be sure to check out the plugins that use occluders: the Calit2Building plugin shows occluders generated from a .osg file of the model (/local/home/jschulze/svn/trunk/covise/src/renderer/OpenCOVER/plugins/Calit2Building), while the OccluderHelper plugin (/local/home/jschulze/svn/trunk/covise/src/renderer/OpenCOVER/plugins/OccluderHelper) is a simpler example of basic occluder manipulation with matrix transforms. If you're looking for code snippets, check out the osgoccluder example, or look at the pseudo code below, which creates a green occlusion plane from four points you can hard-code:


using namespace osg;

int main()
{
	Group *res = createOcclusionFromPoints();
	cover->getObjectsRoot()->addChild(res);
	return 0;
}

Group*
OccluderHelper::createOcclusionFromPoints()
{
	Vec3 point1(point1X, point1Y, point1Z);                                 // define the points of the plane
	Vec3 point2(point2X, point2Y, point2Z);
	Vec3 point3(point3X, point3Y, point3Z);
	Vec3 point4(point4X, point4Y, point4Z);
	MatrixTransform *occluderMT = new MatrixTransform();
	occluderMT->addChild(createOcclusion(point3, point1, point4, point2));  // note the order
	Group *scene = new Group();
	scene->setName("rootgroup");

	scene->addChild(occluderMT);	
	
	return scene;
}


Node* 
OccluderHelper::createOcclusion(const Vec3& v1, const Vec3& v2, const Vec3& v3, const Vec3& v4)
{
	// create an occluder which will sit alongside the loaded model
	OccluderNode* occluderNode = new OccluderNode;

	// create the convex planar occluder
	ConvexPlanarOccluder* cpo = new ConvexPlanarOccluder;

	// attach it to the occluder node
	occluderNode->setOccluder(cpo);
	occluderNode->setName("occluder");
    
    	// set the occluder up for the front face of the bounding box.
    	ConvexPlanarPolygon& occluder = cpo->getOccluder();
    	occluder.add(v1);
    	occluder.add(v2);
    	occluder.add(v3);
    	occluder.add(v4);   

   	// create a drawable for occluder.
    	Geometry* geom = new Geometry;
    
    	Vec3Array* coords = new Vec3Array(occluder.getVertexList().begin(),occluder.getVertexList().end());
    	geom->setVertexArray(coords);
    
    	Vec4Array* colors = new Vec4Array(1);
    	(*colors)[0].set(0.0f,1.0f,0.0f,0.5f);
    	geom->setColorArray(colors);
    	geom->setColorBinding(Geometry::BIND_OVERALL);
    
    	geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS,0,4));
    
    	Geode* geode = new Geode;
    	geode->addDrawable(geom);
    
    	StateSet* stateset = new StateSet;
    	stateset->setMode(GL_LIGHTING,StateAttribute::OFF);
    	stateset->setMode(GL_BLEND,StateAttribute::ON);
    	stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);
    
    	geom->setStateSet(stateset);
       	occluderNode->addChild(geode);    
   
    	return occluderNode;
}

If you have access to 3D Studio Max, you can find instructions on how to install and use the OSG exporter, which gives you access to culling and LOD helpers for your 3D models. However, 3ds Max 9 is not stable with osgExp and will not give you access to these occluder helpers, and I am unaware of any progress on osgExp for newer versions of 3ds Max. If you choose this option, use the stable 3ds Max 8 or 7 with osgExp version 9.3. Otherwise, check out the Calit2Building plugin, which generates occlusion planes in OpenCOVER from geometry created in 3ds Max by parsing the exported .osg file.

Check out the osgoccluder example located in: svn/extern_libs/amd64/OpenSceneGraph-svn/OpenSceneGraph/examples

An alternative to occlusion culling is to use LOD (level of detail) nodes in the scene graph: the farther away the viewer is, the fewer polygons get rendered. See the osglod example for inspiration.
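In essence, an LOD node picks the child whose distance range contains the current eye distance. A standalone sketch of that selection logic (this illustrates the idea, not the osg::LOD implementation itself):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Pick the child whose [minRange, maxRange) interval contains the eye
// distance; return -1 if no child covers that distance.
int pickLODChild(const std::vector<std::pair<float, float> > &ranges,
                 float distance)
{
    for (int i = 0; i < (int)ranges.size(); ++i)
        if (distance >= ranges[i].first && distance < ranges[i].second)
            return i;
    return -1;   // nothing rendered at this distance
}
```

With ranges of [0, 10) for a detailed mesh and [10, 100) for a coarse one, a nearby viewer gets the detailed child and a distant viewer the coarse one, which is exactly the polygon-count trade-off described above.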