Graphics Programming Support

Tracker Data

Here is a piece of code to get the pointer (=wand) position (pos1) and a point 1000 millimeters from it (pos2) along the pointer line:

  osg::Vec3 pointerPos1Wld = cover->getPointerMat().getTrans();
  osg::Vec3 pointerPos2Wld = osg::Vec3(0.0, 1000.0, 0.0);
  pointerPos2Wld = pointerPos2Wld * cover->getPointerMat();

This is how to get the head position in world space:

  Vec3 viewerPosWld = cover->getViewerMat().getTrans();

Or in object space:

  Vec3 viewerPosWld = cover->getViewerMat().getTrans();
  Vec3 viewerPosObj = viewerPosWld * cover->getInvBaseMat();

Intersection testing

If you want to find out whether the wand pointer intersects your objects, here is a template routine for it. You pass it the beginning and end of the line to intersect with, in world coordinates. Typically the line starts at the hand position and extends along its Y axis.

#include <osgUtil/IntersectVisitor>

class IsectInfo     // this is an optional class to illustrate the return values of the accept() function
{
  public:
      bool       found;              ///< false: no intersection found
      osg::Vec3  point;              ///< intersection point
      osg::Vec3  normal;             ///< intersection normal
      osg::Geode *geode;             ///< intersected Geode
};

void getObjectIntersection(osg::Node *root, osg::Vec3& wPointerStart, osg::Vec3& wPointerEnd, IsectInfo& isect)
{
    // Compute intersections of viewing ray with objects:
    osgUtil::IntersectVisitor iv;
    osg::ref_ptr<osg::LineSegment> testSegment = new osg::LineSegment();
    testSegment->set(wPointerStart, wPointerEnd);
    iv.addLineSegment(testSegment.get());
    iv.setTraversalMask(2);

    // Traverse the whole scene graph.
    // Non-interactive objects must have been marked with setNodeMask(~2):
    root->accept(iv);
    isect.found = false;
    if (iv.hits())
    {
        osgUtil::IntersectVisitor::HitList& hitList = iv.getHitList(testSegment.get());
        if(!hitList.empty())
        {
            isect.point     = hitList.front().getWorldIntersectPoint();
            isect.normal    = hitList.front().getWorldIntersectNormal();
            isect.geode     = hitList.front()._geode.get();
            isect.found     = true;
        }
    }
}
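
For context, here is a hedged usage sketch (not part of the original page): it builds the wand ray from the pointer matrix as in the Tracker Data section above and intersects it with the scene. The choice of cover->getObjectsRoot() as traversal root is an assumption and must match the coordinate space of your start and end points.

  osg::Vec3 wPointerStart = cover->getPointerMat().getTrans();
  osg::Vec3 wPointerEnd   = osg::Vec3(0.0, 1000.0, 0.0) * cover->getPointerMat();

  IsectInfo isect;
  getObjectIntersection(cover->getObjectsRoot(), wPointerStart, wPointerEnd, isect);
  if (isect.found)
  {
      // isect.point and isect.normal are in world coordinates, isect.geode is the hit Geode
  }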


Interaction handling

To register an interaction so that only your plugin uses the mouse pointer while a button on the wand is pressed, you want to use the coTrackerButtonInteraction class. For sample usage see plugins/Volume. Here are the main calls you will need; a consolidated sketch follows the list. Most of these functions go in the preFrame() routine, unless otherwise noted:

  • Make sure you include at the top of your code:
     
          #include <OpenVRUI/coTrackerButtonInteraction.h>
        
  • In the constructor you want to create your interaction for button A, which is the left wand button:
          interaction = new coTrackerButtonInteraction(coInteraction::ButtonA,"MoveObject",coInteraction::Menu);
        
  • In the destructor you want to call:
          delete interaction;
        
  • The code for handling the interaction needs to go in the preFrame() function. To register your interaction and thus disable button A interaction in all other plugins, call the following function. Make sure to call this function before other modules can register the interaction. In particular, this might mean that you need to register the interaction before a mouse button is pressed, for instance by registering it when intersecting with the object you want to interact with.
          if(!interaction->registered)
          {
             coInteractionManager::the()->registerInteraction(interaction);
          }
        
  • To do something just once, after the interaction has just started:
          if(interaction->wasStarted())
          {
          }
        
  • To do something every frame while the interaction is running:
         if(interaction->isRunning())
         {
         }
        
  • To do something once at the end of the interaction:
         if(interaction->wasStopped())
         {
         }
        
  • To unregister the interaction and free button A for other plugins:
          if(interaction->registered && (interaction->getState()!=coInteraction::Active))
          {
             coInteractionManager::the()->unregisterInteraction(interaction);
          }
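
Putting the calls above together, here is a minimal, hedged sketch of a preFrame() routine; isectFound is a hypothetical flag from your own intersection test, and the comments only indicate where plugin-specific code would go:

void MyPlugin::preFrame()
{
    bool pointingAtObject = isectFound;   // hypothetical flag set by your own intersection test

    // Register while pointing at the object so we own button A before it is pressed:
    if (pointingAtObject && !interaction->registered)
    {
        coInteractionManager::the()->registerInteraction(interaction);
    }

    if (interaction->wasStarted())
    {
        // runs once when button A goes down, e.g. store the object's current matrix
    }
    if (interaction->isRunning())
    {
        // runs every frame while button A is held, e.g. move the object with the wand
    }
    if (interaction->wasStopped())
    {
        // runs once when button A is released, e.g. commit the new position
    }

    // Release button A for other plugins once we are no longer pointing at the object:
    if (!pointingAtObject && interaction->registered && (interaction->getState() != coInteraction::Active))
    {
        coInteractionManager::the()->unregisterInteraction(interaction);
    }
}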
    
        

OSG Text

Make sure you #include <osgText/Text>. There is a good example of how osgText can be used in ~/covise/src/renderer/OpenCOVER/osgcaveui/Card.cpp: _highlight is the osg::Geode the text gets created for, createLabel() returns the Drawable with the text, _labelText is the text string, and osgText::readFontFile() reads the font.
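
As a hedged, self-contained sketch (not taken from Card.cpp; the font path, sizes, and axis alignment are placeholders), creating a simple text label could look like this:

#include <osgText/Text>
#include <osgText/Font>
#include <osg/Geode>
#include <string>

osg::Geode* createLabelGeode(const std::string& labelText)
{
    osgText::Text* text = new osgText::Text();
    text->setFont(osgText::readFontFile("fonts/arial.ttf"));   // font path is an assumption
    text->setText(labelText);
    text->setCharacterSize(20.0f);                              // size in scene units
    text->setPosition(osg::Vec3(0.0f, 0.0f, 0.0f));
    text->setAxisAlignment(osgText::Text::XZ_PLANE);            // pick the plane that suits your setup
    text->setColor(osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f));

    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(text);
    return geode;
}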

How to Create a Rectangle with a Texture

The following code sample from osgcaveui/Card.cpp demonstrates how to create a rectangular geometry with a texture.

Geometry* createIcon()
{
  Texture2D* _icon = new Texture2D();
  Image* image = NULL;
  image = osgDB::readImageFile("image.jpg");  // make sure it's power of 2 edges
  if (image)
  {
    _icon->setImage(image);
  }
  else return NULL;

  Geometry* geom = new Geometry();

  Vec3Array* vertices = new Vec3Array(4);
  float marginX = (DEFAULT_CARD_WIDTH  - ICON_SIZE * DEFAULT_CARD_WIDTH) / 2.0;
  float marginY = marginX;
                                                  // bottom left
  (*vertices)[0].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX, DEFAULT_CARD_HEIGHT / 2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
                                                  // bottom right
  (*vertices)[1].set( DEFAULT_CARD_WIDTH / 2.0 - marginX, DEFAULT_CARD_HEIGHT / 2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
                                                  // top right
  (*vertices)[2].set( DEFAULT_CARD_WIDTH / 2.0 - marginX, DEFAULT_CARD_HEIGHT / 2.0 - marginY, EPSILON_Z);
                                                  // top left
  (*vertices)[3].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX, DEFAULT_CARD_HEIGHT / 2.0 - marginY, EPSILON_Z);
  geom->setVertexArray(vertices);

  Vec2Array* texcoords = new Vec2Array(4);
  (*texcoords)[0].set(0.0, 0.0);
  (*texcoords)[1].set(1.0, 0.0);
  (*texcoords)[2].set(1.0, 1.0);
  (*texcoords)[3].set(0.0, 1.0);
  geom->setTexCoordArray(0,texcoords);

  Vec3Array* normals = new Vec3Array(1);
  (*normals)[0].set(0.0f, 0.0f, 1.0f);
  geom->setNormalArray(normals);
  geom->setNormalBinding(Geometry::BIND_OVERALL);

  Vec4Array* colors = new Vec4Array(1);
  (*colors)[0].set(1.0, 1.0, 1.0, 1.0);
  geom->setColorArray(colors);
  geom->setColorBinding(Geometry::BIND_OVERALL);

  geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS, 0, 4));

  // Texture:
  StateSet* stateset = geom->getOrCreateStateSet();
  stateset->setMode(GL_LIGHTING, StateAttribute::OFF);
  stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);
  stateset->setTextureAttributeAndModes(0, _icon, StateAttribute::ON);

  return geom;
}

Color Specification: Materials

If your text or geometry doesn't show the color you gave it and is just white, or if you have to turn lighting off to see the color, the geometry is probably missing a material.

You can fix this by adding a material to your geode:

// color is RGBA, A should always be 1 for opaque objects, other values are 0..1
// example: setColor(Vec4(1.0,0.0,0.0,1.0)) for a red object
void setColor(Vec4 color)
{
    // geode is assumed to be a member pointing to the osg::Geode the material is applied to
    StateSet* stateSet = geode->getOrCreateStateSet();
    Material* mat = new Material();
    mat->setColorMode(Material::AMBIENT_AND_DIFFUSE);
    mat->setDiffuse(Material::FRONT, color);
    mat->setSpecular(Material::FRONT, color);
    stateSet->setAttributeAndModes(mat, StateAttribute::ON);
}

Khanh Luc created http://ivl.calit2.net/wiki/index.php/MatEdit, a material editor GUI which will generate OSG source code. The executable is at /home/covise/covise/extern_libs/src/MatEdit/Release/MatEdit.

Load and display an image from disk

Here is sample code to create a geode (imageGeode) with an image which gets loaded from disk. OpenGL's size limit for textures applies (usually 4096x4096 pixels). The image might have to have power-of-two edge lengths, but that limitation should no longer hold on newer graphics cards.

/** Loads image file into geode; returns NULL if image file cannot be loaded */
Geode* createImageGeode(const char* filename)
{
  // Create OSG image:
  Image* image = osgDB::readImageFile(filename);

  // Create OSG texture:
  if (image)
  {
    imageTexture = new Texture2D();
    imageTexture->setImage(image);
  }
  else 
  {
    std::cerr << "Cannot load image file " << filename << std::endl;
    return NULL;
  }

  // Create OSG geode:
  imageGeode = new Geode();
  imageGeode->addDrawable(createImageGeometry());
  return imageGeode;
}

/** Used by createImageGeode() */
Geometry* createImageGeometry()
{
  const float WIDTH  = 3.0f;
  const float HEIGHT = 2.0f;
  Geometry* geom = new Geometry();

  // Create vertices:
  Vec3Array* vertices = new Vec3Array(4);
  (*vertices)[0].set(-WIDTH / 2.0, -HEIGHT / 2.0, 0); // bottom left
  (*vertices)[1].set( WIDTH / 2.0, -HEIGHT / 2.0, 0); // bottom right
  (*vertices)[2].set( WIDTH / 2.0, HEIGHT / 2.0, 0); // top right
  (*vertices)[3].set(-WIDTH / 2.0, HEIGHT / 2.0, 0); // top left
  geom->setVertexArray(vertices);

  // Create texture coordinates for image texture:
  Vec2Array* texcoords = new Vec2Array(4);
  (*texcoords)[0].set(0.0, 0.0);
  (*texcoords)[1].set(1.0, 0.0);
  (*texcoords)[2].set(1.0, 1.0);
  (*texcoords)[3].set(0.0, 1.0);
  geom->setTexCoordArray(0,texcoords);

  // Create normals:
  Vec3Array* normals = new Vec3Array(1);
  (*normals)[0].set(0.0f, 0.0f, 1.0f);
  geom->setNormalArray(normals);
  geom->setNormalBinding(Geometry::BIND_OVERALL);

  // Create colors:
  Vec4Array* colors = new Vec4Array(1);
  (*colors)[0].set(1.0, 1.0, 1.0, 1.0);
  geom->setColorArray(colors);
  geom->setColorBinding(Geometry::BIND_OVERALL);

  geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS, 0, 4));

  // Set texture parameters:
  StateSet* stateset = geom->getOrCreateStateSet();
  stateset->setMode(GL_LIGHTING, StateAttribute::OFF);  // make texture visible independent of lighting
  stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);  // only required for translucent images
  stateset->setTextureAttributeAndModes(0, imageTexture, StateAttribute::ON);

  return geom;
}

Wall Clock Time

If you are going to animate anything, keep in mind that the rendering system runs anywhere between 1 and 100 frames per second, so you can't rely on the time between frames being any particular value. Instead, you will want to know exactly how much time has passed since you last rendered something, i.e., since your last preFrame() call. Use cover->frameTime(), or better cover->frameDuration(); these return a double with the number of seconds (accurate to milli- or even microseconds) passed since the start of the program, or since the last preFrame(), respectively.
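
As a hedged illustration (the member names _angle, _degreesPerSecond and _transform are placeholders introduced here), a frame-rate independent rotation in preFrame() could look like this:

void MyPlugin::preFrame()
{
    double dt = cover->frameDuration();    // seconds elapsed since the last frame
    _angle += _degreesPerSecond * dt;      // advance by elapsed time, not by frame count
    _transform->setMatrix(osg::Matrix::rotate(osg::DegreesToRadians(_angle), osg::Vec3(0.0, 0.0, 1.0)));
}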

Backtrack all nodes up to top of scene graph

This snippet merges the StateSets of all nodes on the path from the scene graph root down to the current node; getNodePath() is available, for instance, inside a NodeVisitor.

osg::NodePath path = getNodePath();
osg::ref_ptr<osg::StateSet> state = new osg::StateSet;
for (osg::NodePath::iterator it = path.begin(); it != path.end(); ++it)
{
  if ((*it)->getStateSet())
  {
    state->merge((*it)->getStateSet());
  }
}


Taking a screenshot from the command line

  • run application on visualtest02 and bring up desired image, freeze head tracking
  • log on to coutsound
  • Make sure screenshot is taken of visualtest02: setenv DISPLAY :0.0
  • Take screenshot: xwd -root -out <screenshot_filename.xwd>
  • Convert image file to TIFF: convert <screenshot_filename.xwd> <screenshot_filename.tif>

Taking a Screenshot from within OpenCOVER

The first solution is courtesy of Emmett McQuinn. He says: "This code is fairly robust in our application, it captures the screen with the proper orientation when a trackball manipulator is used and returns to the proper orientation so the end user's view is never modified. It also preserves the correct aspect ratio and works with double buffering. The routine takes an offscreen framebuffer capture, which can be higher than the native display resolution (up to 8k on modern cards)."

#include "ScreenCapture.h"
#include <osgGA/TrackballManipulator>
#include <osgDB/WriteFile>
#include <assert.h>
#include <type/emath.h>
#include <cmath>

bool osge::ScreenCapture(const char *filename, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
{
	// current technologies (quadro FX 5800) can support up to 8192x8192 framebuffer
	//	the frame buffer does not have to be square
	//	most recent cards should be able to do 4096x4096
	const int max_pixels = 8192;

	assert(width <= max_pixels);
	assert(height <= max_pixels);

	osg::Image *shot = new osg::Image();

	int w = width;
	int h = height;

    osg::ref_ptr<osg::Camera> newcamera = new osg::Camera;
    osg::ref_ptr<osg::Camera> oldcamera = viewer->getCamera();

	// if we want to keep the native ratio rather than given pixels
	if (keepRatio)
	{
		// will never be larger than the parameters width and height
		double fov, ratio, near, far;
		oldcamera->getProjectionMatrixAsPerspective(fov, ratio, near, far);

		if (emath::isnan(ratio) || (std::abs(ratio -0.0) < 0.01) || (ratio < 0))
		{
			// orthographic projection
			double left, right, bottom, top;
			oldcamera->getProjectionMatrixAsOrtho(left, right, bottom, top, near, far);
			float dw = right - left;
			float dh = top - bottom;
			ratio = dw / dh;
		}

		const int max_w = width;
		w = h * ratio;
		if (w > max_w)
		{
			w = max_w;
			h = w / ratio;
		}
	}

	shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE);

	// store old camera settings
	osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();
	osg::Vec3 eye, center, up;
    oldcamera->getViewMatrixAsLookAt(eye,center,up);
	center = manipulator->getCenter();
	manipulator->setHomePosition(eye, center, up);
    //Copy the settings from sceneView-camera to get exactly the view the user sees at the moment:
	//newcamera->setClearColor(oldcamera->getClearColor());

	newcamera->setClearColor(osg::Vec4(0,0,0,0));

    newcamera->setClearMask(oldcamera->getClearMask());
    newcamera->setColorMask(oldcamera->getColorMask());
    newcamera->setTransformOrder(oldcamera->getTransformOrder());

	// just inherit the main cameras view
	newcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
	osg::Matrixd proj = oldcamera->getProjectionMatrix();
	newcamera->setProjectionMatrix(proj);
	osg::Matrixd view = oldcamera->getViewMatrix();
	newcamera->setViewMatrix(view);

	// set viewport
	newcamera->setViewport(0, 0, w, h);

	// set the camera to render after the main camera.
	newcamera->setRenderOrder(osg::Camera::POST_RENDER);

	// tell the camera to use OpenGL frame buffer object where supported.
	newcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

	// attach the texture and use it as the color buffer.
	newcamera->attach(osg::Camera::COLOR_BUFFER, shot);

	osg::ref_ptr<osg::Node> root_node = viewer->getSceneData();

     // add subgraph to render
    newcamera->addChild(root_node.get());

    //Need to make it part of the scene :
    viewer->setSceneData(newcamera.get());

    // render a frame so the new camera draws into the attached image
	viewer->frame();
	if (doubleBuffer)
	{
	// double buffered so two frames
		viewer->frame();
	}

	bool ret = osgDB::writeImageFile(*shot, filename);

	//Reset the old data to the sceneView, so it doesn't always render to image:
    viewer->setSceneData(root_node.get());

	// need to reset to regular frame for camera manipulator to work properly
	viewer->frame();

	viewer->home();

	return ret;
}

bool osge::BracketCapture(const char *filebase, osgViewer::Viewer *viewer, int width, int height, bool keepRatio, bool doubleBuffer)
{
	osgGA::TrackballManipulator *manipulator = (osgGA::TrackballManipulator*)viewer->getCameraManipulator();

	// take persp screenshot
	char filename[2048];
	sprintf(filename, "%s_persp.jpg", filebase);
	bool ret = ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	osg::Camera *camera = viewer->getCamera();


	// take front, right ortho's

	// backup projection matrix
	osg::Quat originalRotation = manipulator->getRotation();
	osg::Matrix originalProjection = camera->getProjectionMatrix();

	// set to ortho
	// top
	osg::Quat topRotation(0,0,0,1);
	// front
	osg::Quat frontRotation(1,0,0,1);
	{
	osg::Vec3 axis(0,1,0);
        double angle = osg::PI;
        osg::Quat rot;
        rot.makeRotate(angle, axis);
        frontRotation = rot * frontRotation;
	}
	// right
	osg::Quat rightRotation(0.5, 0.5, 0.5, 0.5);

	// set to ortho
	osg::Vec3 eye, center, up;
	camera->getViewMatrixAsLookAt(eye,center,up);
	double fovy, ratio, near, far;
	// assumes captured in perspective
	camera->getProjectionMatrixAsPerspective(fovy, ratio, near, far);
	float distance = eye.length();
	float top = (distance) * std::tan(fovy * osg::PI/180.f * 0.5);
	float right = top * ratio;

	camera->setProjectionMatrixAsOrtho(-right, right, -top, top, 0.1, 100);
	// bracket 3 views

	manipulator->setRotation(topRotation);
	sprintf(filename, "%s_top.jpg", filebase);
	// takes a frame to update the camera from the manipulator
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(frontRotation);
	sprintf(filename, "%s_front.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	manipulator->setRotation(rightRotation);
	sprintf(filename, "%s_right.jpg", filebase);
	viewer->frame();
	ret &= ScreenCapture(filename, viewer, width, height, keepRatio, doubleBuffer);

	// set to projection
	camera->setProjectionMatrix(originalProjection);
	manipulator->setRotation(originalRotation);
	viewer->frame();

	return ret;
}

The second solution is from an email thread at http://osgcvs.no-ip.com/osgarchiver/archives/April2007/0083.html using wxWidgets. The basic idea is to use osg::Camera, which allows you to take a screenshot at higher than physical display resolution.

Hi, 

I have solved this by setting the HUD-Camera to "NESTED_RENDER" and 
putting all geometries of the HUD-Node into the transparent bin. 

So this is how it works: 

I have a sceneView with scene-Data. 

I remove the scene data from the sceneView, add it to a cameraNode and 
then add this cameraNode to the sceneView. 
Then I update the sceneView, the cameraNode renders to the image, and 
then I remove the cameraNode again and put the scene data back into the 
sceneView. 

The trouble was: I wanted to save work by constructing the cameraNode 
with the copy constructor, starting with the original sceneView's camera. 
This was not a good idea, probably because the renderToImage could not 
be set to Image after being constructed with the copy constructor. 

So anyone who would like to have a simple, high-res screenshot, here is 
the complete source: 

    shot = new osg::Image(); 
    
   //This is wxWidgets-Stuff to get the image ratio: 
    int w = 0; int h = 0; 
    GetClientSize(&w, &h); 
    
    int newSize = (int) wxGetNumberFromUser(_("Enter the width of the image in pixels: "), _("Resolution:"), _("Resolution"), w, 300, 5000 ); 
    if (newSize == -1) 
        return false; 
    
    
    float ratio = (float)w/(float)h; 
    w = newSize; 
    
    
    h = (int)((float)w/ratio); 
    
    shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE); 
    
    osg::ref_ptr<osg::Node> subgraph = TheDocument->RootGroup.get(); 
    
    osg::ref_ptr<osg::Camera> camera = new osg::Camera; 
    
    
    osg::ref_ptr<osg::Camera> oldcamera = sceneView->getCamera(); 
    //Copy the settings from the sceneView camera to get exactly the view the user sees at the moment: 
    camera->setClearColor(oldcamera->getClearColor() ); 
    camera->setClearMask(oldcamera->getClearMask() ); 
    camera->setColorMask(oldcamera->getColorMask() ); 
    camera->setTransformOrder(oldcamera->getTransformOrder() ); 
    camera->setProjectionMatrix(oldcamera->getProjectionMatrix() ); 
    camera->setViewMatrix(oldcamera->getViewMatrix() ); 
    
    // set view 
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF); 


    // set viewport 
    camera->setViewport(0,0,w,h); 


    // set the camera to render after the main camera. 
    camera->setRenderOrder(osg::Camera::POST_RENDER); 


    // tell the camera to use OpenGL frame buffer object where supported. 
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT); 


    
    camera->attach(osg::Camera::COLOR_BUFFER, shot.get()); 
    


    // add subgraph to render 
    camera->addChild(subgraph.get()); 
    //camera->addChild(TheDocument->GetHUD().get() ); 
    //Need to make it part of the scene: 
    sceneView->setSceneData(camera.get()); 
    //Make it frame: 
    
    sceneView->update(); 
    sceneView->cull(); 
    sceneView->draw(); 
    
    //Write the image the wxWidgets-Way, which works better for me: 
    wxImage img; 
    img.Create(w, h); 
    img.SetData(shot->data()); 
    //So that the Image destructor doesn't complain: 
    shot.release(); 
    
    wxImage i2 = img.Mirror(false); 
    i2.SaveFile(filename); 
    
    //Reset the old data to the sceneView, so it doesn't always render to image: 
    sceneView->setSceneData(subgraph.get() ); 
    
    //This would work, too: 
    //return osgDB::writeImageFile(*shot, filename.c_str() ); 
    
    return true; 

Another approach for a screenshot is to create an object derived from osg::Geometry, for instance a rectangle, and put the following code in its drawImplementation().

/** Copy the currently displayed OpenGL image to a memory buffer and
  resize the image if necessary.
  @param w,h image size in pixels
  @param data _allocated_ memory space providing w*h*3 bytes of memory space
  @return memory space to which volume was rendered. This need not be the same
          as data, if internal space is used.
*/
void takeScreenshot(int w, int h, uchar* data)
{
  uchar* screenshot;
  GLint viewPort[4];                              // x, y, width, height of viewport
  int x, y;
  int srcIndex, dstIndex, srcX, srcY;
  int offX, offY;                                 // offsets in source image to maintain aspect ratio
  int srcWidth, srcHeight;                        // actually used area of source image

  // Save GL state:
  glPushAttrib(GL_ALL_ATTRIB_BITS);

  // Prepare reading:
  glGetIntegerv(GL_VIEWPORT, viewPort);
  screenshot = new uchar[viewPort[2] * viewPort[3] * 3];
  glReadBuffer(GL_FRONT);                         // read pixel data from the front buffer
  glPixelStorei(GL_PACK_ALIGNMENT, 1);            // Important command: default value is 4, so allocated memory wouldn't suffice

  // Read image data:
  glReadPixels(0, 0, viewPort[2], viewPort[3], GL_RGB, GL_UNSIGNED_BYTE, screenshot);

  // Restore GL state:
  glPopAttrib();

  // Maintain aspect ratio:
  if (viewPort[2]==w && viewPort[3]==h)
  {                                               // movie image same aspect ratio as OpenGL window?
    srcWidth  = viewPort[2];
    srcHeight = viewPort[3];
    offX = 0;
    offY = 0;
  }
  else if ((float)viewPort[2] / (float)viewPort[3] > (float)w / (float)h)
  {                                               // movie image more narrow than OpenGL window?
    srcHeight = viewPort[3];
    srcWidth = srcHeight * w / h;
    offX = (viewPort[2] - srcWidth) / 2;
    offY = 0;
  }
  else                                            // movie image wider than OpenGL window
  {
    srcWidth = viewPort[2];
    srcHeight = h * srcWidth / w;
    offX = 0;
    offY = (viewPort[3] - srcHeight) / 2;
  }

  // Now resample image data:
  for (y=0; y<h; ++y)
  {
    for (x=0; x<w; ++x)
    {
      dstIndex = 3 * (x + (h - y - 1) * w);
      srcX = offX + srcWidth  * x / w;
      srcY = offY + srcHeight * y / h;
      srcIndex = 3 * (srcX + srcY * viewPort[2]);
      memcpy(data + dstIndex, screenshot + srcIndex, 3);
    }
  }
  delete[] screenshot;
}

Return a ref_ptr from a function

Here is a safe way to return a ref_ptr type from a function.

osg::ref_ptr<osg::Group> makeGroup(...Some arguments..) 
{
  osg::ref_ptr<osg::MatrixTransform> mt = new osg::MatrixTransform();
  // ...some operations...
  return mt.get();
} 

Also check out this link to learn more about how to use ref_ptr: http://donburns.net/OSG/Articles/RefPointers/RefPointers.html

Occlusion Culling in OpenSceneGraph

Occlusion culling removes objects which are hidden behind other objects in the culling stage so they never get rendered, thus resulting in a higher rendering rate. In covise/src/renderer/OpenCOVER/kernel/VRViewer.cpp, the SceneView is being created. By default CullingMode gets set like this:

  osg::CullStack::CullingMode cullingMode = cover->screens[i].sceneView->getCullingMode();
  cullingMode &= ~(osg::CullStack::SMALL_FEATURE_CULLING);
  cover->screens[i].sv->setCullingMode(cullingMode);

There are several types of culling options available. However, the easiest way to test your culling code would be to set the cullingMode to ENABLE_ALL_CULLING.

There isn't any way to automatically add occlusion culling to a scene; you'll need to insert convex planar occluders into your scene. See the osgoccluder example for inspiration, and be sure to check out the plugins that use occluders. The Calit2Building plugin shows the use of occluders generated from a .osg file of the model (/local/home/jschulze/svn/trunk/covise/src/renderer/OpenCOVER/plugins/Calit2Building). For a simple example of basic occluder manipulation with matrix transforms, see the OccluderHelper plugin (/local/home/jschulze/svn/trunk/covise/src/renderer/OpenCOVER/plugins/OccluderHelper). If you're looking for code snippets, check out the osgoccluder example or look at the pseudo code below, which creates a green occlusion plane from four points you can hard-code:


using namespace osg;

int Main()
{
	Group *res = createOcclusionFromPoints();
	cover->getObjectsRoot()->addChild(res);
        return 0;
}

Group*
OccluderHelper::createOcclusionFromPoints()
{
	const Vec3& point1 = Vec3(point1X, point1Y, point1Z);                  //define the points of the plane
	const Vec3& point2 = Vec3(point2X, point2Y, point2Z);
	const Vec3& point3 = Vec3(point3X, point3Y, point3Z);
	const Vec3& point4 = Vec3(point4X, point4Y, point4Z);	
        MatrixTransform *occluderMT = new MatrixTransform();
        occluderMT->addChild(createOcclusion(point3, point1, point4, point2));  //note the order
	Group *scene = new Group();
	scene->setName("rootgroup");

	scene->addChild(occluderMT);	
	
	return scene;
}


Node* 
OccluderHelper::createOcclusion(const Vec3& v1, const Vec3& v2, const Vec3& v3, const Vec3& v4)
{
	// create an occluder which will sit alongside the loaded model.
	OccluderNode* occluderNode = new OccluderNode;

	// create the convex planar occluder 
    	ConvexPlanarOccluder* cpo = new ConvexPlanarOccluder;

    	// attach it to the occluder node.
   	occluderNode->setOccluder(cpo);
   	occluderNode->setName("occluder");
    
    	// set the occluder up for the front face of the bounding box.
    	ConvexPlanarPolygon& occluder = cpo->getOccluder();
    	occluder.add(v1);
    	occluder.add(v2);
    	occluder.add(v3);
    	occluder.add(v4);   

   	// create a drawable for occluder.
    	Geometry* geom = new Geometry;
    
    	Vec3Array* coords = new Vec3Array(occluder.getVertexList().begin(),occluder.getVertexList().end());
    	geom->setVertexArray(coords);
    
    	Vec4Array* colors = new Vec4Array(1);
    	(*colors)[0].set(0.0f,1.0f,0.0f,0.5f);
    	geom->setColorArray(colors);
    	geom->setColorBinding(Geometry::BIND_OVERALL);
    
    	geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS,0,4));
    
    	Geode* geode = new Geode;
    	geode->addDrawable(geom);
    
    	StateSet* stateset = new StateSet;
    	stateset->setMode(GL_LIGHTING,StateAttribute::OFF);
    	stateset->setMode(GL_BLEND,StateAttribute::ON);
    	stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);
    
    	geom->setStateSet(stateset);
       	occluderNode->addChild(geode);    
   
    	return occluderNode;
}

If you have access to 3D Studio Max, you can find instructions on how to install and use the OSG exporter, which gives you access to culling and LOD helpers for your 3D models. However, 3ds Max 9 is not stable with osgExp and will not give you access to these occluder helpers. I am unaware of any progress to improve osgExp for the newer versions of 3ds Max. If you choose this option, use it with the stable 3ds Max 8 or 7 with osgExp version 9.3. Otherwise check out the Calit2Building plugin, which manually generates occlusion planes in OpenCOVER based on geometry created in 3ds Max, by parsing the .osg export file.

Check out the osgoccluder example located in: svn/extern_libs/amd64/OpenSceneGraph-svn/OpenSceneGraph/examples

An alternative to occlusion culling is to use LOD (level of detail) nodes in the scene graph. This means that when you are farther away, fewer polygons get rendered. See the osglod example for inspiration.
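
To illustrate the idea (this is a hedged sketch, not taken from the osglod example; detailedGeode and coarseGeode are hypothetical nodes), a distance-based LOD switch looks like this:

  osg::LOD* lod = new osg::LOD();
  lod->addChild(detailedGeode, 0.0f, 50.0f);     // detailed model when the viewer is closer than 50 units
  lod->addChild(coarseGeode, 50.0f, 1.0e6f);     // coarse model beyond 50 units
  cover->getObjectsRoot()->addChild(lod);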

Message Passing in OpenCOVER

This is how you can send a message from the master to all nodes in the rendering cluster. These functions are defined in covise/src/renderer/OpenCOVER/kernel/coVRMSController.h.

if(coVRMSController::instance()->isMaster())
{
  coVRMSController::instance()->sendSlaves((char*)&appReceiveBuffer,sizeof(appReceiveBuffer));
}
else
{
  coVRMSController::instance()->readMaster((char*)&appReceiveBuffer,sizeof(appReceiveBuffer));
}

The above functions make heavy use of the class coVRSlave (covise/src/renderer/OpenCOVER/kernel/coVRSlave.h). This class uses the Socket class to implement the communication between nodes. The Socket class can be used, using a different port, to implement communication between rendering nodes without going through the master node. The Socket class is defined in /home/covise/covise/src/kernel/net/covise_socket.h. The Socket class can also be used to communicate with a computer outside of the visualization cluster.

Notice that the above functions are for communication WITHIN a rendering cluster. In order to send a message to a remote OpenCOVER (running on another rendering cluster connected via a WAN) you would use cover->sendMessage. The source code for this function is at covise/src/renderer/OpenCOVER/kernel/coVRPluginSupport.h.

Moving an object with the pointer

Here is some sample code to move an object. object2w is the object's transformation matrix in world space. lastWand2w and wand2w are the wand matrices from the previous and current frames, respectively, obtained from cover->getPointerMat().

void move(Matrix& lastWand2w, Matrix& wand2w)
{
    // Compute difference matrix between last and current wand:
    Matrix invLastWand2w = Matrix::inverse(lastWand2w);
    Matrix wDiff = invLastWand2w * wand2w;

    // Perform move:
    _node->setMatrix(object2w * wDiff);
}

Using Shared Memory

A great tutorial page is at: http://www.ecst.csuchico.edu/~beej/guide/ipc/shmem.html
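
As a generic, hedged example (System V shared memory; not COVISE-specific), one process can create and write a segment that another process attaches to with the same key:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstring>
#include <cstdio>

int main()
{
    key_t key = ftok("/tmp", 'C');                           // both processes must use the same key
    int shmid = shmget(key, 1024, 0666 | IPC_CREAT);         // create or attach a 1 kB segment
    if (shmid == -1) { perror("shmget"); return 1; }

    char* data = static_cast<char*>(shmat(shmid, NULL, 0));  // map the segment into this process
    if (data == (char*)-1) { perror("shmat"); return 1; }

    strcpy(data, "hello from shared memory");                // visible to all attached processes
    shmdt(data);                                             // detach; the segment persists until removed
    return 0;
}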

Find out which node a plugin is running on

The following routine works on our Varrier system to find out the host number, starting with 0. The node names are vellum1-10, vellum2-10, etc.

int getNodeIndex()
{
  char hostname[256];
  gethostname(hostname, sizeof(hostname));
  int node;
  sscanf(hostname, "vellum%d-10", &node);  // this needs to be adjusted to naming convention
  return (node-1);  // names start with 1
}

For the StarCave:

int getStarCaveNodeIndex()
{
  char hostname[256];
  gethostname(hostname, sizeof(hostname));
  int row,column;
  sscanf(hostname, "tile-%d-%d.local", &column,&row);
  return (column*3 + row);  // names start at 0, index goes up to 14
}

Hide or select different pointer

The pointer is by default a line coming out of the user's handheld device. Icons like an airplane, a steering wheel, slippers, or a magnifying glass indicate the current navigation mode. This is the mode the left mouse button will use. To hide the pointer, remove all geodes which are part of the pointer:

  while(VRSceneGraph::instance()->getHandTransform()->getNumChildren())
    VRSceneGraph::instance()->getHandTransform()->removeChild(VRSceneGraph::instance()->getHandTransform()->getChild(0));

To select a different pre-defined pointer:

  VRSceneGraph::instance()->setPointerType(<new_type>); 

Hide the mouse cursor

  osgViewer::Viewer::Windows windows;
  viewer.getWindows(windows);
  for(osgViewer::Viewer::Windows::iterator itr = windows.begin(); itr != windows.end(); ++itr)
  {
    (*itr)->useCursor(false);
  }

Keyboard Input

This is what you need to do to process keyboard events in a plugin: Define a function in your main class with a signature like void key(int type, int keySym, int mod). In the area of your code where you define coVRInit, coVRDelete, coVRPreFrame, etc., place a hook like this:

void coVRKey(int type, int keySym, int mod) 
{
  plugin->key(type, keySym, mod);
}

The function is called when a key is pressed or released. type indicates whether the key has just been pressed or released (values 6 and 7), keySym identifies the key, and mod is normally 0.
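
A hedged sketch of what the plugin-side handler could look like (the class name and the printout are placeholders):

#include <iostream>

void MyPlugin::key(int type, int keySym, int mod)
{
  // type distinguishes press and release (values 6 and 7, see above),
  // keySym identifies the key, mod carries the modifier state
  std::cerr << "key event: type=" << type << " keySym=" << keySym << " mod=" << mod << std::endl;
}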

Fast Shader-Based Spheres

Use the class coSphere. A single instance of the class suffices. Things you need to do outside of having a valid COVISEDIR environment variable:

  • Set the radius. It is initialized to NULL so it won't have a valid radius until it is set.
  • Set the coordinates. Like the radius, they are initialized to NULL, so coordinates must be set.
  • Add the geode to a group node.

Here is a piece of sample code:

void drawSphere(osg::Group* root)
{
	coSphere *drawable = new coSphere();

	drawable->setNumberOfSpheres(1);
	float radius[] = {0.5f};
	float coordsX[] = {0.0f};
	float coordsY[] = {0.0f};
	float coordsZ[] = {0.0f};
	drawable->updateRadii(radius);
	drawable->updateCoords(coordsX,coordsY,coordsZ);

	drawable->setColor(0,0.0f, 1.0f, 0.0f, 1.0f); // set color of 0th sphere

	osg::Geode *geode = new osg::Geode();
	root->addChild(geode);
	geode->addDrawable(drawable);
}

Andrew Note: It seems to be using the first light source that is enabled to draw the spheres. I will try to add the ability to set the light source soon.


Sound Effects with the Audio Server

Documentation for the Audio Server is at: [[1]]

COVISE's VRML reader supports audio nodes and has built in playback functions for the Audio Server.

The main driver file is
covise/src/kernel/vrml97/vrml/PlayerAServer.cpp

If vrml97.pro won't link and complains about missing jpeg functions, add the following lines to .cshrc:

  setenv JPEG_LIBS -ljpeg
  setenv JPEG_INCPATH /usr/include

Covise Animation Manager

Covise's Animation Manager provides a simple way to mark time. It allows the user to navigate through a series of frames either automatically at a variable frame-rate, or manually by stepping forwards or backwards.

First, include the following line of code:

  #include <kernel/coVRAnimationManager.h>

Here is a simple example of an Animation Manager setup.

  coVRAnimationManager::instance()->setAnimationSpeed(int framerate);  //Set the default frame-rate for playback
  coVRAnimationManager::instance()->enableAnimation(bool play);        //Set the animation to play/pause by default
  coVRAnimationManager::instance()->setAnimationFrame(int frame);      //Set the first frame
  coVRAnimationManager::instance()->showAnimMenu(bool on);             //Add the "Animation" SubMenu to the Opencover Main Menu, or remove it
  coVRAnimationManager::instance()->setNumTimesteps(int steps, this);  //Set the total number of frames to cycle through

This can be done at initialization or at any point during program execution.

The Animation Manager has been set up and can now provide information to the rest of your program. Calling

  coVRAnimationManager::instance()->getAnimationFrame();

will return the current frame number. This frame number will automatically loop back to zero after it reaches the value provided to setNumTimesteps.

Finally, you can jump to a specific frame by calling

  coVRAnimationManager::instance()->setAnimationFrame(int framenumber);
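
As a hedged example of consuming the frame number in preFrame() (_lastFrame and showTimestep() are hypothetical plugin members/helpers):

void MyPlugin::preFrame()
{
  int frame = coVRAnimationManager::instance()->getAnimationFrame();
  if (frame != _lastFrame)       // only react when the Animation Manager advanced
  {
    showTimestep(frame);         // swap in the geometry for this timestep
    _lastFrame = frame;
  }
}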

How to Add Third Party Libraries to OpenCOVER

  • Put the library sources in ~covise/covise/extern_libs/src
  • Build the library
  • Two options for the installation (do NOT install to /usr/lib64):
    • 1) Install the .so files in ~covise/covise/extern_libs/lib64 and the .h files in ~covise/covise/extern_libs/include
    • 2) Leave the .so and .h files where they are and register their paths in OpenCOVER:
      • Give library a unique name, for instance 'bluetooth'.
      • Add an entry to /home/covise/covise/common/mkspecs/config-extern.pri, for instance:
bluetooth {
   INCLUDEPATH *= $$(BLUETOOTH_INC)
   LIBS += $$(BLUETOOTH_LIB)
}
      • Add environment variables to .cshrc:
setenv BLUETOOTH_INC $EXTERNLIBS/src/bluetooth/include
setenv BLUETOOTH_LIB "-L$EXTERNLIBS/src/bluetooth/lib64 -lbluetooth"
      • Add library name to plugin's .pro file:
CONFIG          *= coappl colib openvrui math vrml97 bluetooth

Blackmagic Intensity Pro Capture Card Support

This section explains how to get the Intensity Pro card from Blackmagic working on a CentOS system.

  • 1) Physically install the card into the system.
  • 2) Install the latest x86_64 .rpm Intensity drivers from the Blackmagic Website
  • 3) Make sure the permissions for the driver, located at /dev/blackmagic/card0, are set to 777.
  • 4) Run the BlackmagicControlPanel application and make sure the changes you make are saved. (You might have to log in as the root user to do this).
  • 5) Download the Blackmagic SDK.
  • 6) Hook a video source to the video input HDMI slot of the card. Find the "Capture" sample code in the SDK and run with each possible video input type until you find one that consistently captures input.
  • 7) Use the Capture code in your own plugin to get live captured data. (Note: Images come in encoded in the YUV422 format)

Simplifying Geometry and Optimizing with OSG

In order to improve the performance of your plug-in, you may wish to run an optimizer on it. The OpenSceneGraph Optimizer can be called on any osg::Node, and it will apply the optimization to the node and its subgraph. The Optimizer can perform the following operations: (Source)

  • FLATTEN_STATIC_TRANSFORMS - Flatten Static Transform nodes by applying their transform to the geometry on the leaves of the scene graph, and then removing the now redundant transforms.
  • REMOVE_REDUNDANT_NODES - Remove redundant nodes, such as groups with one single child.
  • REMOVE_LOADED_PROXY_NODES - Remove loaded proxy nodes.
  • COMBINE_ADJACENT_LODS - Optimize the LOD groups, by combining adjacent LOD's which have complementary ranges.
  • SHARE_DUPLICATE_STATE - Optimize State in the scene graph by removing duplicate state, replacing it with shared instances, both for StateAttributes, and whole StateSets.
  • MERGE_GEOMETRY - Not documented.
  • CHECK_GEOMETRY - Not documented.
  • SPATIALIZE_GROUPS - Spatialize scene into a balanced quad/oct tree.
  • COPY_SHARED_NODES - Copy any shared subgraphs, enabling flattening of static transforms.
  • TRISTRIP_GEOMETRY - Not documented.
  • TESSELLATE_GEOMETRY - Tessellate all geodes, to remove POLYGONS.
  • OPTIMIZE_TEXTURE_SETTINGS - Optimize texture usage in the scene graph by combining textures into texture atlas. Use of texture atlas cuts down on the number of separate states in the scene, reducing state changes and improving the chances of use larger batches of geometry.
  • MERGE_GEODES - Combine geodes.
  • FLATTEN_BILLBOARDS - Flatten MatrixTransform/Billboard pairs.
  • TEXTURE_ATLAS_BUILDER - Texture Atlas Builder creates a set of textures/images which each contain multiple images.
  • STATIC_OBJECT_DETECTION - Optimize the setting of StateSet and Geometry objects in scene so that they have a STATIC DataVariance when they don't have any callbacks associated with them.
  • FLATTEN_STATIC_TRANSFORMS_DUPLICATING_SHARED_SUBGRAPHS - Remove static transforms from the scene graph, pushing down the transforms to the geometry leaves of the scene graph. Any subgraphs that are shared between different transforms are duplicated and flattened individually.
  • ALL_OPTIMIZATIONS - Performs all of the optimizations listed above
  • DEFAULT_OPTIMIZATIONS - Performs all of the default optimizations.

Sample Code:

#include <osgUtil/Optimizer>
osgUtil::Optimizer optimizer;
optimizer.optimize(pyramidGeode, osgUtil::Optimizer::ALL_OPTIMIZATIONS);

You can also use the Simplifier to reduce the number of triangles in an osg::Geometry node. The Simplifier will do its best to maintain the shape of the geometry while reducing the number of triangles.

Sample Code:

#include <osgUtil/Simplifier>
osg::Geometry* geometry;
//some code...
osgUtil::Simplifier simple;
simple.setSampleRatio(0.7f); //reduces the number of triangles by 30% 
geometry->accept(simple);

Problems When Mixing OpenGL With OSG

OSG allows OpenGL code to be called within a Drawable's drawImplementation().

Problem: OpenGL geometry gets culled when it should be displayed.

Resolution: The problem is most likely that the OpenGL geometry does not have a proper OSG bounding box. Every time a custom Drawable is created with custom OpenGL code, the bounding box of this Drawable needs to be carefully adjusted to contain the entire geometry generated by the OpenGL commands.


Problem: Textures do not show up or get corrupted, even in the COVER menu.

Resolution: This is likely an OSG state issue. Whenever the OpenGL state gets changed by custom OpenGL code outside of OSG commands, it needs to be carefully restored before control is returned to OSG. In practice this means that at the beginning of the drawImplementation in which OpenGL code is executed, the OpenGL state needs to be saved, and at the end it needs to be restored. If any parts of the state are not restored correctly, problems like the above may occur.


Problem: Textures and other OpenGL context specific objects do not show up on all screens.

Resolution: This can happen if custom OpenGL code does not pay attention to the context ID. All user-created OpenGL objects (textures, VBOs, etc.) need to be generated and have their data loaded separately in each context. For a custom OSG drawable you will get a call to the drawImplementation function from each render context each frame. You can find out which context you are in by using the getContextID function in the RenderInfo object passed into the drawImplementation function. Also note that the draw call can potentially be multi-threaded.
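
As a hedged sketch of this pattern (not from the original page; the class name and texture setup are made up), a custom drawable can keep one texture object per graphics context, indexed by the context ID:

#include <osg/Drawable>
#include <osg/buffered_value>
#include <osg/GL>

class ContextSafeDrawable : public osg::Drawable
{
public:
    ContextSafeDrawable() { setSupportsDisplayList(false); }
    ContextSafeDrawable(const ContextSafeDrawable& other, const osg::CopyOp& op = osg::CopyOp::SHALLOW_COPY)
        : osg::Drawable(other, op) {}
    META_Object(example, ContextSafeDrawable)

    virtual void drawImplementation(osg::RenderInfo& renderInfo) const
    {
        const unsigned int ctx = renderInfo.getState()->getContextID();
        GLuint& tex = _textures[ctx];       // one texture object per graphics context
        if (tex == 0)
        {
            glGenTextures(1, &tex);         // create and upload the data once per context
            glBindTexture(GL_TEXTURE_2D, tex);
            // ... glTexImage2D(...) etc. ...
        }
        glBindTexture(GL_TEXTURE_2D, tex);
        // ... custom OpenGL drawing, restoring any state it changes ...
    }

private:
    mutable osg::buffered_value<GLuint> _textures;  // zero-initialized, grows with the context ID
};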

Useful Links

  • osgWorks adds useful functionality to OpenSceneGraph