=COVISE and OpenCOVER support=

===Lab hardware===

We are using SuSE Linux 10.0 on most of our lab machines. The lab machines you have an account on are:

* coutsound.ucsd.edu: AMD Opteron based file server, connected to the dual-head setup next to the projectors. Use this machine to compile and to change your password. Let me know when you change your password so that I can update it on the rest of the lab machines.
* visualtest01.ucsd.edu: AMD Opteron based, drives the lower half of the C-wall
* visualtest02.ucsd.edu: AMD Opteron based, drives the upper half of the C-wall
* flint.ucsd.edu: Intel Pentium D based Dell in the terminal room
* chert.ucsd.edu: Intel Pentium D based Dell in the terminal room
* basalt.ucsd.edu: Intel Pentium D based Dell in the terminal room
* rubble.ucsd.edu: Intel Pentium D based Dell in the terminal room
* slate.ucsd.edu: Intel Pentium D based Dell in the terminal room

Because the terminal room and the cave room use different hardware, always compile your plugins on coutsound. All of these machines are reachable from the public internet via ssh. Example for user 'jschulze' logging into 'flint': ssh jschulze@flint.ucsd.edu

===Account information===

To log in between lab machines without a password, you need a DSA key pair. Generate one by running 'ssh-keygen -t dsa' on any lab machine, accepting the default file names and leaving the passphrase empty. This creates two files in your ~/.ssh/ directory: id_dsa and id_dsa.pub. Finally, copy id_dsa.pub to a new file named authorized_keys, or append its contents to authorized_keys if that file already exists.
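The same steps as commands:

<pre>
ssh-keygen -t dsa                    # accept default file names, empty passphrase
cd ~/.ssh
cat id_dsa.pub >> authorized_keys    # creates authorized_keys if it does not exist yet
</pre>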
===General information about Covise modules===

The entire COVISE installation, including all plugins, is located at ~jschulze/covise/. Each user should have a link in their home directory named 'covise' which points to this directory, and a link 'plugins' which points to the plugins directory. Put all the plugins you write in this course into the directory plugins/cse291/.

Other directories you might need throughout the project are:
<ul>
<li>covise/src/renderer/OpenCOVER/kernel/: core OpenCOVER functionality, especially coVRModuleSupport.cpp</li>
<li>covise/src/kernel/OpenVRUI/: OpenCOVER's user interface elements; useful documentation in the doc/html subdirectory, which you can browse by opening index.html in Firefox</li>
<li>covise/src/renderer/OpenCOVER/osgcaveui/: special CaveUI functions; not required in class, but useful</li>
</ul>

You compile your plugin with the 'make' command. This creates a library file in covise/amd64/lib/OpenCOVER/plugins/. COVISE uses qmake, so the makefile is generated from the .pro file in your plugin directory. The name of the plugin is determined by the project name in that .pro file (first line, keyword TARGET). I defaulted TARGET to p1<username> for project #1. It is important that the TARGET name be unique, or else you will overwrite somebody else's plugin. You can rename your source files or add additional source files (.cpp, .h) by listing them after the SOURCES tag in the .pro file.
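A minimal .pro file might look like the sketch below. The names are placeholders, and the real .pro files contain additional qmake settings (template, include paths, libraries), so the safest route is to copy the .pro file of an existing plugin and adjust TARGET and SOURCES.

<pre>
TARGET  = p1jschulze
SOURCES = MyPlugin.cpp
HEADERS = MyPlugin.h
</pre>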
You run OpenCOVER by typing 'opencover' anywhere at the command line. You quit it by hitting the 'q' key, or with Ctrl-C in the shell window you started it from.

Good examples of plugins are plugins/Volume and plugins/PDBPlugin. Look at the code in these plugins to find out how to add menu items and how to do interaction. Note that there are two ways to do interaction: with pure OpenCOVER routines, or with OSGCaveUI. In this course we will try to use only OpenCOVER's own routines.

Plugins are not loaded by opencover unless they are listed in the configuration file.

===Important Directories===

* ~/covise/config: configuration files
* ~/plugins: plugin code
* /home/jschulze/svn/extern_libs/amd64/OpenSceneGraph-svn/OpenSceneGraph: OpenSceneGraph installation directory
* http://www.openscenegraph.org: main OSG web site
* http://openscenegraph.org/archiver/osg-users: OSG email archive. If you have an OSG problem, this is a good place to start.

===Covise configuration files===

The configuration files are in the directory $HOME/covise/config/. In class, the only files you need to look at are:

<ul>
<li>config.ucsd.xml: general configuration information for all lab machines, except the C-wall machines</li>
<li>config.calitcwall.xml: C-wall specific configuration information (visualtest02.ucsd.edu)</li>
<li>config.calitvarrier.xml: Varrier specific configuration information (varrier.ucsd.edu)</li>
</ul>

The configuration files are XML files which can be edited with any ASCII text editor (vi, emacs, nedit, gedit, ...). They contain sections specific to certain machines. To load your plugin (e.g., p1jschulze) on one or more machines (e.g., chert and basalt), add or modify a section to contain:

<pre>
<LOCAL host="chert,basalt">
  <COVERConfig>
    <Module value="p1jschulze" name="p1jschulze"/>
  </COVERConfig>
</LOCAL>
</pre>

===Debugging OpenCOVER Plugins===

OpenCOVER code can be debugged with gdb. If it throws a 'Segmentation Fault', make sure the core is being dumped: run 'unlimit coredumpsize' (tcsh) or 'ulimit -c unlimited' (bash) before starting OpenCOVER. You should then find a file named 'core' or 'core.<pid>' in the directory you are running opencover from. Assuming your latest core file is called core.4567, run gdb with:

* gdb ~/covise/amd64/bin/Renderer/OpenCOVER core.4567

Hit RETURN through the startup screens until you get a command prompt. The two most important commands are:

* bt: display the stack trace. The topmost call is the one which caused the segmentation fault.
* quit: quit gdb

Documentation for gdb is at: http://sourceware.org/gdb/documentation/

===Intersection testing===

If you have wondered how to find out whether the wand pointer intersects your objects, here is a template routine. You pass it the start and end points of the line to intersect with, in world coordinates; the line starts at the hand position and extends along the Y axis.

<pre>
#include <osgUtil/IntersectVisitor>

class IsectInfo // optional class to bundle the results of the intersection test
{
public:
    bool found;          ///< false: no intersection found
    osg::Vec3 point;     ///< intersection point
    osg::Vec3 normal;    ///< intersection normal
    osg::Geode* geode;   ///< intersected Geode
};

void getObjectIntersection(osg::Node* root, osg::Vec3& wPointerStart,
                           osg::Vec3& wPointerEnd, IsectInfo& isect)
{
    // Compute intersections of the viewing ray with objects:
    osgUtil::IntersectVisitor iv;
    osg::ref_ptr<osg::LineSegment> testSegment = new osg::LineSegment();
    testSegment->set(wPointerStart, wPointerEnd);
    iv.addLineSegment(testSegment.get());
    iv.setTraversalMask(2);

    // Traverse the whole scene graph. Non-interactive objects must have
    // been marked with setNodeMask(~2):
    root->accept(iv);

    isect.found = false;
    if (iv.hits())
    {
        osgUtil::IntersectVisitor::HitList& hitList = iv.getHitList(testSegment.get());
        if (!hitList.empty())
        {
            isect.point  = hitList.front().getWorldIntersectPoint();
            isect.normal = hitList.front().getWorldIntersectNormal();
            isect.geode  = hitList.front()._geode.get();
            isect.found  = true;
        }
    }
}
</pre>

===Tracker Data===

Here is a piece of code to get the pointer (= wand) position (pointerPos1Wld) and a point 1000 millimeters from it (pointerPos2Wld) along the pointer line:

<pre>
osg::Vec3 pointerPos1Wld = cover->getPointerMat().getTrans();
osg::Vec3 pointerPos2Wld = osg::Vec3(0.0, 1000.0, 0.0);
pointerPos2Wld = pointerPos2Wld * cover->getPointerMat();
</pre>

This is the way to get the head position:

<pre>
osg::Vec3 viewerPosWld = cover->getViewerMat().getTrans();
</pre>
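Putting this together with the intersection routine from the previous section, a sketch of a per-frame wand-ray test could look like this. MyPlugin and the scene root accessor are assumptions; use whatever node your plugin actually owns as the root.

<pre>
void MyPlugin::preFrame()
{
    // Wand ray: from the hand position, 1000 mm along the pointer direction:
    osg::Vec3 pointerPos1Wld = cover->getPointerMat().getTrans();
    osg::Vec3 pointerPos2Wld = osg::Vec3(0.0, 1000.0, 0.0) * cover->getPointerMat();

    IsectInfo isect;
    getObjectIntersection(cover->getObjectsRoot(), // assumption: scene root accessor
                          pointerPos1Wld, pointerPos2Wld, isect);
    if (isect.found)
    {
        // isect.point, isect.normal and isect.geode describe the hit
    }
}
</pre>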
===Interaction handling===

To register an interaction so that only your plugin uses the wand pointer while a button on the wand is pressed, use the coTrackerButtonInteraction class. For sample usage see plugins/Volume. Here are the main calls you will need; most of them go in the preFrame routine, unless otherwise noted. A consolidated sketch follows this list.

<ul>
<li>Make sure you include at the top of your code:
<pre>
#include <OpenVRUI/coTrackerButtonInteraction.h>
</pre>
</li>
<li>In the constructor, create your interaction for button A, which is the left wand button:
<pre>
interaction = new coTrackerButtonInteraction(coInteraction::ButtonA, "MoveObject", coInteraction::Menu);
</pre>
</li>
<li>In the destructor, call:
<pre>
delete interaction;
</pre>
</li>
<li>To register your interaction, and thus disable button A interaction in all other plugins, call the following function. Make sure to call it before other modules can register the interaction. In particular, this might mean registering the interaction before a button is pressed, for instance while the pointer intersects the object you want to interact with.
<pre>
if (!interaction->registered)
{
    coInteractionManager::the()->registerInteraction(interaction);
}
</pre>
</li>
<li>To do something just once, right after the interaction has started:
<pre>
if (interaction->wasStarted())
{
}
</pre>
</li>
<li>To do something every frame while the interaction is running:
<pre>
if (interaction->isRunning())
{
}
</pre>
</li>
<li>To do something once at the end of the interaction:
<pre>
if (interaction->wasStopped())
{
}
</pre>
</li>
<li>To unregister the interaction and free button A for other plugins:
<pre>
if (interaction->registered && (interaction->getState() != coInteraction::Active))
{
    coInteractionManager::the()->unregisterInteraction(interaction);
}
</pre>
</li>
</ul>
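Here is the promised consolidated sketch of a preFrame() that uses all of the calls above. pointingAtObject() is a hypothetical helper (e.g., the intersection test from above); 'interaction' is the coTrackerButtonInteraction member created in the constructor.

<pre>
void MyPlugin::preFrame()
{
    // Grab button A as soon as we point at our object, i.e., before
    // the button can be pressed:
    if (pointingAtObject() && !interaction->registered)
    {
        coInteractionManager::the()->registerInteraction(interaction);
    }

    if (interaction->wasStarted())
    {
        // button A just went down: e.g., remember the current wand matrix
    }
    if (interaction->isRunning())
    {
        // button A held down: e.g., move the object with the wand
    }
    if (interaction->wasStopped())
    {
        // button A just released: e.g., finalize the move
    }

    // Release button A for other plugins once we stop pointing at the object:
    if (!pointingAtObject() && interaction->registered
        && interaction->getState() != coInteraction::Active)
    {
        coInteractionManager::the()->unregisterInteraction(interaction);
    }
}
</pre>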
===OSG Text===

Make sure you #include <osgText/Text>. There is a good example of how osgText can be used in ~/covise/src/renderer/OpenCOVER/osgcaveui/Card.cpp: _highlight is the osg::Geode the text gets created for, <b>createLabel()</b> returns the Drawable with the text, _labelText is the text string, and osgText::readFontFile() reads the font.
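If you want a starting point without reading through Card.cpp, here is a minimal sketch of a text label; the font file name is a placeholder.

<pre>
#include <string>
#include <osg/Geode>
#include <osgText/Font>
#include <osgText/Text>

osg::Geode* makeLabel(const std::string& message)
{
    osgText::Text* text = new osgText::Text();
    text->setFont(osgText::readFontFile("fonts/arial.ttf")); // placeholder font
    text->setText(message);
    text->setCharacterSize(20.0f);
    text->setPosition(osg::Vec3(0.0, 0.0, 0.0));

    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(text);
    return geode;
}
</pre>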
===How to Create a Rectangle with a Texture===

The following code sample from osgcaveui/Card.cpp demonstrates how to create a rectangular geometry with a texture.

<pre>
Geometry* Card::createIcon()
{
    Geometry* geom = new Geometry();

    Vec3Array* vertices = new Vec3Array(4);
    float marginX = (DEFAULT_CARD_WIDTH - ICON_SIZE * DEFAULT_CARD_WIDTH) / 2.0;
    float marginY = marginX;
    // bottom left
    (*vertices)[0].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX,
        DEFAULT_CARD_HEIGHT / 2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
    // bottom right
    (*vertices)[1].set( DEFAULT_CARD_WIDTH / 2.0 - marginX,
        DEFAULT_CARD_HEIGHT / 2.0 - marginY - ICON_SIZE * DEFAULT_CARD_WIDTH, EPSILON_Z);
    // top right
    (*vertices)[2].set( DEFAULT_CARD_WIDTH / 2.0 - marginX,
        DEFAULT_CARD_HEIGHT / 2.0 - marginY, EPSILON_Z);
    // top left
    (*vertices)[3].set(-DEFAULT_CARD_WIDTH / 2.0 + marginX,
        DEFAULT_CARD_HEIGHT / 2.0 - marginY, EPSILON_Z);
    geom->setVertexArray(vertices);

    Vec2Array* texcoords = new Vec2Array(4);
    (*texcoords)[0].set(0.0, 0.0);
    (*texcoords)[1].set(1.0, 0.0);
    (*texcoords)[2].set(1.0, 1.0);
    (*texcoords)[3].set(0.0, 1.0);
    geom->setTexCoordArray(0, texcoords);

    Vec3Array* normals = new Vec3Array(1);
    (*normals)[0].set(0.0f, 0.0f, 1.0f);
    geom->setNormalArray(normals);
    geom->setNormalBinding(Geometry::BIND_OVERALL);

    Vec4Array* colors = new Vec4Array(1);
    (*colors)[0].set(1.0, 1.0, 1.0, 1.0);
    geom->setColorArray(colors);
    geom->setColorBinding(Geometry::BIND_OVERALL);

    geom->addPrimitiveSet(new DrawArrays(PrimitiveSet::QUADS, 0, 4));

    // Texture:
    StateSet* stateset = geom->getOrCreateStateSet();
    stateset->setMode(GL_LIGHTING, StateAttribute::OFF);
    stateset->setRenderingHint(StateSet::TRANSPARENT_BIN);
    stateset->setTextureAttributeAndModes(0, _icon, StateAttribute::ON);

    return geom;
}
</pre>

===OSG Images (Textures)===

Again, the file ~/covise/src/renderer/OpenCOVER/osgcaveui/Card.cpp has a good example of this. _highlight is again the osg::Geode the image gets added to as an osg::Geometry. The osg::Geometry gets created in Card::createIcon(). The image itself is stored in _icon, which gets created at the top of Card::createGeometry() using osg::Image and osg::Texture2D.
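A minimal sketch of the osg::Image / osg::Texture2D part, assuming the image is loaded from a file with osgDB (the file name is a placeholder); the resulting texture can be passed to setTextureAttributeAndModes() as in createIcon() above.

<pre>
#include <string>
#include <osg/Texture2D>
#include <osgDB/ReadFile>

osg::Texture2D* makeTexture(const std::string& filename)
{
    osg::Image* image = osgDB::readImageFile(filename); // e.g., "icon.rgb"
    if (!image)
        return NULL; // file not found or no matching image plugin

    osg::Texture2D* texture = new osg::Texture2D();
    texture->setImage(image);
    return texture;
}
</pre>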
===Wall Clock Time===

If you are going to animate anything, keep in mind that the rendering system runs at anywhere between 1 and 100 frames per second, so you cannot rely on the time between frames being any particular value. Instead, you want to know exactly how much time has passed since your last preFrame() call. Use cover->frameTime(), or better cover->frameDuration(): these return a double with the number of seconds (accurate to milliseconds or even microseconds) since the start of the program, or since the last preFrame(), respectively.
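For example, to rotate an object at a fixed angular speed regardless of the frame rate (the members _transform and _angle are placeholders):

<pre>
const double DEGREES_PER_SECOND = 90.0;

void MyPlugin::preFrame()
{
    // Advance by elapsed wall clock time, not by a fixed per-frame step:
    _angle += DEGREES_PER_SECOND * cover->frameDuration();
    _transform->setMatrix(osg::Matrix::rotate(osg::DegreesToRadians(_angle),
                                              osg::Vec3(0.0, 0.0, 1.0)));
}
</pre>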
===Backtrack all nodes up to top of scene graph===

<pre>
osg::NodePath path = getNodePath();
osg::ref_ptr<osg::StateSet> state = new osg::StateSet;
for (osg::NodePath::iterator it = path.begin(); it != path.end(); ++it)
{
    if ((*it)->getStateSet())
    {
        state->merge(*(*it)->getStateSet());
    }
}
</pre>

===Taking a screenshot from the command line===

* Run the application on visualtest02, bring up the desired image, and freeze head tracking.
* Log on to coutsound.
* Make sure the screenshot is taken of visualtest02: setenv DISPLAY :0.0
* Take the screenshot: xwd -root -out <screenshot_filename.xwd>
* Convert the image file to TIFF: convert <screenshot_filename.xwd> <screenshot_filename.tif>

===Taking a Screenshot from within OpenCOVER===

osg::Camera allows you to take a screenshot at higher than the physical display resolution. Here is an example from the email thread at http://openscenegraph.org/archiver/osg-users/2007-April/0054.html using wxWindows.

<pre>
// save the old viewport:
osg::ref_ptr<osg::Viewport> AlterViewport = sceneView->getViewport();
osg::ref_ptr<osg::Image> shot = new osg::Image();

int w = 0;
int h = 0;
GetClientSize(&w, &h);
float ratio = (float)w / (float)h;
w = 2500;
h = (int)((float)w / ratio);
shot->allocateImage(w, h, 1, GL_RGB, GL_UNSIGNED_BYTE); // depth of a 2D image is 1
osg::Node* subgraph = TheDocument->RootGroup.get();

osg::ref_ptr<osg::Camera> camera = new osg::Camera(*(sceneView->getCamera()));

// set view
camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);

// set viewport
camera->setViewport(0, 0, w, h);

// set the camera to render before the main camera
camera->setRenderOrder(osg::Camera::PRE_RENDER);

// tell the camera to use an OpenGL frame buffer object where supported
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

camera->attach(osg::Camera::COLOR_BUFFER, shot.get());

// add the subgraph to render
camera->addChild(subgraph);
// need to make it part of the scene:
sceneView->setSceneData(camera);

// make it render a frame:
Update();
Refresh();

wxImage img;
img.Create(w, h);
img.SetData(shot->data());
shot.release();

wxImage i2 = img.Mirror(false);
i2.SaveFile(filename);
sceneView->setSceneData(subgraph);
sceneView->setViewport(AlterViewport.get());
</pre>

===Return a ref_ptr from a function===

Here is a safe way to return a ref_ptr type from a function.

<pre>
osg::ref_ptr<osg::Group> makeGroup(/* some arguments */)
{
    osg::ref_ptr<osg::MatrixTransform> mt = new osg::MatrixTransform();
    // ...some operations...
    return mt.get();
}
</pre>

Also check out this link to learn more about how to use ref_ptr:
http://donburns.net/OSG/Articles/RefPointers/RefPointers.html
===Occlusion Culling in OpenSceneGraph===

Occlusion culling removes objects which are hidden behind other objects during the culling stage, so they never get rendered, resulting in a higher rendering rate. In covise/src/renderer/OpenCOVER/kernel/VRViewer.cpp, where the SceneView is created, the CullingMode is set like this by default:

<pre>
osg::CullStack::CullingMode cullingMode = cover->screens[i].sceneView->getCullingMode();
cullingMode &= ~(osg::CullStack::SMALL_FEATURE_CULLING);
cover->screens[i].sv->setCullingMode(cullingMode);
</pre>

There is no way to automatically add occlusion culling to a scene; you need to insert convex planar occluders into your scene yourself. See the [http://www.openscenegraph.com/index.php?page=OSGExp.OSGOccluder osgoccluder example] for inspiration.
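A minimal sketch of such an occluder, following the pattern of the osgoccluder example; the corner coordinates are placeholders for an occluding wall in your scene.

<pre>
#include <osg/ConvexPlanarOccluder>
#include <osg/OccluderNode>

osg::OccluderNode* makeOccluder()
{
    osg::ConvexPlanarOccluder* cpo = new osg::ConvexPlanarOccluder();
    // Corners of the occluding polygon, counter-clockwise:
    cpo->getOccluder().add(osg::Vec3(-1000.0, 500.0, -1000.0));
    cpo->getOccluder().add(osg::Vec3( 1000.0, 500.0, -1000.0));
    cpo->getOccluder().add(osg::Vec3( 1000.0, 500.0,  1000.0));
    cpo->getOccluder().add(osg::Vec3(-1000.0, 500.0,  1000.0));

    osg::OccluderNode* node = new osg::OccluderNode();
    node->setOccluder(cpo);
    return node; // add to the scene graph at the occluding geometry's position
}
</pre>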
If you have access to 3D Studio Max, you can find instructions on how to install and use the [http://www.openscenegraph.com/index.php?page=OSGExp.HOWTO OSG exporter], which gives you access to culling and LOD helpers for your 3D models. However, 3ds Max 9 is not stable with osgExp and will not give you access to these occluder helpers, and I am unaware of any progress on osgExp for the newer 3ds Max versions. If you choose this option, use the stable 3ds Max 8 or 7 with osgExp version 9.3.

Also check out the osgoccluder example located in svn/extern_libs/amd64/OpenSceneGraph-svn/OpenSceneGraph/examples.

An alternative to occlusion culling is to use LOD (level of detail) nodes in the scene graph: when the viewer is farther away, fewer polygons get rendered. See the [http://www.openscenegraph.com/index.php?page=OSGExp.OSGLOD osglod example] for inspiration.

===Message Passing in OpenCOVER===

This is how you send a message from the master to all nodes in the rendering cluster. These functions are defined in covise/src/renderer/OpenCOVER/kernel/coVRMSController.h.

<pre>
if (coVRMSController::instance()->isMaster())
{
    coVRMSController::instance()->sendSlaves((char*)&appReceiveBuffer, sizeof(appReceiveBuffer));
}
else
{
    coVRMSController::instance()->readMaster((char*)&appReceiveBuffer, sizeof(appReceiveBuffer));
}
</pre>
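For example, to keep a value that only the master computes consistent across the whole cluster, something like the following can go into preFrame(). The struct and the computeNewValue() helper are placeholders.

<pre>
struct SyncData // placeholder message; use your own fields
{
    float value;
};

SyncData syncData;
if (coVRMSController::instance()->isMaster())
{
    syncData.value = computeNewValue(); // hypothetical master-only computation
    coVRMSController::instance()->sendSlaves((char*)&syncData, sizeof(syncData));
}
else
{
    coVRMSController::instance()->readMaster((char*)&syncData, sizeof(syncData));
}
// From here on, syncData.value is identical on all nodes.
</pre>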
These functions make heavy use of the class coVRSlave (covise/src/renderer/OpenCOVER/kernel/coVRSlave.h), which uses the Socket class to implement the communication between nodes. The Socket class can also be used directly, on a different port, to implement communication between rendering nodes.

Note that the above functions are for communication <b>within</b> a rendering cluster. To send a message to a remote OpenCOVER (running on another rendering cluster connected via a WAN) you would use cover->sendMessage; its source code is in covise/src/renderer/OpenCOVER/kernel/coVRPluginSupport.h.

===Moving an object with the pointer===

Here is some sample code to move an object. object2w is the object's transformation matrix in world space; lastWand2w and wand2w are the wand matrices from the previous and the current frame, respectively, obtained from cover->getPointerMat().

<pre>
void move(osg::Matrix& lastWand2w, osg::Matrix& wand2w)
{
    // Compute the difference matrix between the last and the current wand:
    osg::Matrix invLastWand2w = osg::Matrix::inverse(lastWand2w);
    osg::Matrix wDiff = invLastWand2w * wand2w;

    // Perform the move:
    _node->setMatrix(object2w * wDiff);
}
</pre>
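To drive move(), keep the wand matrix of the previous frame around, e.g. in a member variable; a sketch of the preFrame() side, combined with the interaction handling from above:

<pre>
void MyPlugin::preFrame()
{
    osg::Matrix wand2w = cover->getPointerMat();
    if (interaction->isRunning()) // see "Interaction handling" above
    {
        move(_lastWand2w, wand2w);
    }
    _lastWand2w = wand2w; // remember for the next frame
}
</pre>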
===Using Shared Memory===

A great tutorial page is at: http://www.ecst.csuchico.edu/~beej/guide/ipc/shmem.html

===Find out which node a plugin is running on===

The following routine works on our Varrier system to find out the host number, starting with 0. The node names are vellum1-10, vellum2-10, etc.

<pre>
#include <unistd.h> // gethostname()
#include <cstdio>   // sscanf()

int getNodeIndex()
{
    char hostname[33];
    gethostname(hostname, 32);
    int node;
    sscanf(hostname, "vellum%d-10", &node); // adjust to your naming convention
    return node - 1; // names start with 1
}
</pre>
=Rincon=

==Project Overview==

[[Image:tiled-wall.jpg|thumb|right|Stitching of two 1920x1080 streams]]

The goal of this project is to implement a scalable real-time solution for streaming multiple high-definition videos to be stitched into a "super high-definition" panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage or live viewing on multi-panel displays. To account for the perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.

Another important aspect of the project is efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between sender and receiver drops, we want the user to choose which aspect of the video to degrade to maintain the desired performance (e.g., degrading frame rate to preserve image quality, or vice versa).

==Project Status==

Currently, we are able to stream video from the two HD cameras we have available and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images.

==Implementation==

The current implementation of the project consists of two programs, one of which runs on the sending side (Xvidsend) and one on the receiving side (Amalgamator).

Each camera in the system is connected to its own machine, which obtains raw Bayer RGB from the camera, converts it to 32-bit RGB, compresses it with the Xvid codec, and sends it across a wide-area network. As the SDK for interacting with the cameras (the GigE Link SDK, available from the [http://www.siliconimaging.com/software downloads section of the Silicon Imaging website]) is only available for Windows, these machines run Windows XP.

On the receiving end, one Linux box (currently running CentOS, though the software has also been tested on SuSE 10.0) receives all of these streams, processes them, and assembles a buffer for the entire image, which it then sends to SAGE.

The program on the sending end, xvidsend, and the code for receiving the data across the network and obtaining one frame at a time on the receiving end, were written by Andrew. The program on the receiving end, the Amalgamator, was written by Alex. The code for the Amalgamator has been commented and is in that sense self-documenting, but a high-level overview of the structure of the program is provided here for clarity.

===Xvidsend===

This program interfaces with the Silicon Imaging SI1920HDRGB-CL video camera and sends its frames to a receiver program using Xvid compression. The compression is adjusted to try to maintain a target bitrate for the stream; this target can be changed dynamically by the receiver.

====GigE Link SDK====
This program requires the GigE Link SDK to be installed. The SDK is available from the downloads directory of the Silicon Imaging web page. We used version 2.2.0, but any version should work as long as the same version used to compile is used at runtime.

====xvidcore 0.9.0====
This version of Xvid is required for use with the vvxvid interface class. The package contains a Visual C++ project file. A few notes: the package requires NASM to compile; we found that v0.98.35 worked. For Xvid to compile, all instances of nasm in the project file must be replaced with the full path to nasmw.exe, and there must not be any spaces in the path to nasmw.exe or to the project directory.

====xvidsend====
Add the paths to the includes and libs for Xvid and the GigE Link SDK. The Microsoft Platform SDK is also needed; it can be found on the Microsoft website. Make sure the vvxvid and vvdebugmsg files are included in the project, and add their path as well. Add the following libs to the linker input's additional dependencies: kernel32.lib, WSock32.lib, libxvidcore.lib, WS2_32.Lib, Winmm.lib.

Xvidsend expects a configuration file in the same directory as the executable. A sample configuration file (Config.xml) is located in the xvidsend project's Release directory.
===The Amalgamator===

====The Makefile====
If you intend to compile the program yourself, you will most likely have to modify the makefile. The makefile is rather simple and contains a few hardcoded rules that enable or disable certain options: 'make local' builds the Amalgamator without any special options; 'make sage' enables SAGE support; 'make sage-tile' enables tiled rendering for SAGE output (which is almost always necessary); and 'make sage-tile-shader' enables tiled rendering for SAGE output and warps each of the images with a fragment shader. 'make network' and other variations are deprecated.

====The program====

The Amalgamator is a multithreaded application: one thread per incoming stream, plus one thread for the main processing. The stream threads sit and wait for network input, and on receiving data they decompress it and update a shared buffer.
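A sketch of the per-stream thread pattern described above, using pthreads; receiveFrame() and decompressFrame() are placeholders for the actual network and Xvid code (see the source downloads below).

<pre>
#include <pthread.h>

struct StreamBuffer
{
    pthread_mutex_t lock;
    unsigned char*  pixels; // most recent decompressed RGB frame
};

// One of these threads runs per incoming stream:
void* streamThread(void* arg)
{
    StreamBuffer* buf = (StreamBuffer*)arg;
    unsigned char compressed[1 << 20];
    for (;;)
    {
        int n = receiveFrame(compressed, sizeof(compressed)); // placeholder: blocking network read
        pthread_mutex_lock(&buf->lock);
        decompressFrame(compressed, n, buf->pixels);          // placeholder: Xvid decode
        pthread_mutex_unlock(&buf->lock);
    }
    return 0;
}

// The main thread locks the same mutex while it uploads buf->pixels
// as a texture for the warped, shader-processed quad.
</pre>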
The main thread reads from the buffers for these images and draws them on textured quads, so that a pixel shader can be applied to each image. As the resulting image is too large for the local screen, and as the graphics card in our current machine does not have Linux drivers that support OpenGL framebuffer objects, we use a [http://www.mesa3d.org/brianp/TR.html tile rendering library] to break the image into smaller chunks and render each chunk individually.

This is beneficial in that it does not complicate the code very much, but detrimental because it causes a drastic slowdown: the overhead of switching from tile to tile appears to roughly double the rendering time. The optimal choice, then, is to make each tile as large as possible on the local end, which in this case is 1920x1080 (half of the total image for two streams), since the window it is rendered into must fit on the desktop of the local screen.
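A sketch of the tile rendering loop, using the TR library's documented interface; the image dimensions and the drawing call are placeholders.

<pre>
#include <GL/gl.h>
#include "tr.h" // Brian Paul's tile rendering library

void renderFullImage(unsigned char* image, int fullWidth, int fullHeight)
{
    TRcontext* tr = trNew();
    trTileSize(tr, 1920, 1080, 0);                      // one screen-sized tile at a time
    trImageSize(tr, fullWidth, fullHeight);             // e.g., 3840x1080 for two streams
    trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, image); // assembled here tile by tile
    trPerspective(tr, 45.0, (double)fullWidth / fullHeight, 1.0, 1000.0);

    int more = 1;
    while (more)
    {
        trBeginTile(tr);
        drawWarpedQuads(); // placeholder: draw the textured, shader-warped quads
        more = trEndTile(tr);
    }
    trDelete(tr);
}
</pre>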
Then the user is allowed to control the position and warping amount of each image to align the images manually. As of now there is no GUI on the local end; it would not be difficult to use the window space needed for drawing the images to display a GLUI GUI with the same functionality, but for now everything is operated with keyboard commands. Some commands print output to the console, so it is useful to keep the console within view. The commands are:

* W, A, S, D: Move the current image up, left, down, or right, respectively.
* N: Make the next image the current image.
* E, C: Increase or decrease the focal length, respectively. Decreasing the focal length causes a stronger spherical warping of the image.
===Downloads===

Here is the complete set of code.

* [http://ivl.calit2.net/wiki/files/README.txt Readme file]
* [http://ivl.calit2.net/wiki/files/xvidsend.zip Xvid send module, requires Visual C++ 2005]
* [http://ivl.calit2.net/wiki/files/GigE_Link_SDK_2(1).2.0_build_270_(SP4)_for_Windows.zip Silicon Imaging API]
* [http://ivl.calit2.net/wiki/files/nasm-0.98.35-win32.zip NASM, required for Xvid]
* [http://ivl.calit2.net/wiki/files/xvidcore-0(1).9.0.tar.gz Xvid for video compression]
* [http://ivl.calit2.net/wiki/files/amalgamator.tar.gz Amalgamator source code]

==Students==

[[Andrew Prudhomme]]:
Andrew has worked on the back end of the project, specifically on receiving input from the cameras, converting the data into a usable format, and compressing and streaming it efficiently across a network.

[[Alex Zavodny]]:
Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.
As the SDK for interacting with the cameras (the GigE Link SDK, available from the [http://www.siliconimaging.com/software downloads section of the Silicon Imaging website]) is only available for Windows, these machines are running Windows XP.<br /> <br /> On the receiving end, there is one Linux box (currently running CentOS, though has also been tested on SuSE 10.0) that receives all of these streams, processes them, and creates a buffer for the entire image that it then sends to SAGE. <br /> <br /> The program on the sending end, xvidsend, and the code for receiving the data across the network and obtaining a single frame at a time on the receiving end, was written by Andrew. The program on the receiving end, the Amalgamator, was written by Alex. The code for the Amalgamator has been commented and is in that sense self-documented, but a high level overview of the structure of the program is provided here for clarity. <br /> <br /> ===Xvidsend===<br /> <br /> This program interfaces with the silicon imaging SI1920HDRGB-CL video camera and sends its frames to a receiver program using xvid compression. This compression is adjusted to try to maintain a target bitrate for the stream. This target can be changed dynamically by the receiver.<br /> <br /> ====GigE Link SDK====<br /> This program requires the GigE Link SDK to be installed. The SDK is available from the silicon imaging webpage downloads directory. We used 2.2.0, but it should work as long as the same version used to compile is used at runtime.<br /> <br /> ====xvidcore 0.9.0====<br /> This version of xvid is required for use with the vvxvid interface class. The package contains a Visual C++ project file. <br /> A few notes: <br /> The package requires nasm to compile. We found that v0.98.35 worked for this. For xvid to compile, all instances of nasm in the project file must be replaced with the full path to nasmw.exe. There must not be any spaces in the path to nasmw.exe or to the project directory.<br /> <br /> ====xvidsend====<br /> Add the paths to the includes and libs for xvid and the GigE Link SDK. The Microsoft Platform SDK is also needed. This can be found on the Microsoft website. Make sure the vvxvid and vvdebugmsg files are included in the project. Add this path as well. Add the following libs to the linker input additional dependencies: kernel32.lib, WSock32.lib, libxvidcore.lib, WS2_32.Lib, Winmm.lib.<br /> <br /> Xvidsend expects a configuration file in the same directory as the executable. A sample configuration file (Config.xml) is located in the xvidsend project file's Release directory.<br /> <br /> ===The Amalgamator===<br /> <br /> ====The Makefile====<br /> If you intend to compile the program yourself, you will most likely have to modify the makefile. The makefile is rather simple, and contains a few rules that are hardcoded to enable or disable certain options. For example, a make local will make the amalgamator without any special options. make sage enables SAGE support, make sage-tile enables tiled rendering for SAGE output (which is almost always necessary), and make sage-tile-shader enables tiled rendering for SAGE output and warps each of the images with a fragment shader. make network and other variations are deprecated.<br /> <br /> ====The program====<br /> <br /> The Amalgamator is a multithreaded application. The idea is to have one thread for each incoming stream and one thread for main processing. 
As the resulting image is too large for the local screen, and as the graphics card on our current machine does not have Linux drivers that enable support for OpenGL Framebuffer Objects, we make use of a [http://www.mesa3d.org/brianp/TR.html tile rendering library] to break the image into smaller chunks and render each chunk individually.<br /> <br />
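Using the tile rendering library follows a simple pattern: declare the full image size and the tile size, then redraw the scene once per tile until the library reports that all tiles have been rendered. Here is a sketch of that loop; the tile size, image format, and drawScene() are placeholders rather than our actual values and code.<br /> <br /> &lt;pre&gt;<br />
// Sketch of the tile rendering loop using the TR library linked above.<br />
// Tile size, image format, and drawScene() are placeholders.<br />
#include &lt;GL/gl.h&gt;<br />
#include &quot;tr.h&quot;<br />
<br />
void drawScene() {} // placeholder: draws the textured, warped quads for all streams<br />
<br />
void renderFullImage(int imgWidth, int imgHeight, unsigned char* pixels) {<br />
    TRcontext* tr = trNew();<br />
    trTileSize(tr, 512, 512, 0);                          // render in 512x512 chunks<br />
    trImageSize(tr, imgWidth, imgHeight);                 // size of the full panorama<br />
    trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, pixels);  // assembled tile by tile<br />
    trOrtho(tr, 0.0, imgWidth, 0.0, imgHeight, -1.0, 1.0);<br />
    int more = 1;<br />
    while (more) {        // one full redraw per tile; this per-tile overhead<br />
        trBeginTile(tr);  // is what slows rendering down, as described below<br />
        drawScene();<br />
        more = trEndTile(tr);<br />
    }<br />
    trDelete(tr);<br />
    // 'pixels' now holds the complete image, ready to hand off to SAGE.<br />
}<br />
&lt;/pre&gt;<br /> <br />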
The tile rendering approach is beneficial in that it doesn't complicate the code very much, but detrimental in that it slows rendering down drastically -- the overhead of switching from tile to tile appears to effectively double the rendering time. The best compromise, then, is to make the tiles as large as possible on the local end, which in this case is 1920x1080 (the size of half of the total image for two streams), as the window into which each tile is rendered must fit on the desktop of the local screen. <br /> <br /> The user can then control the position and warping amount of each image in order to align the images manually. As of now there is no GUI on the local end; it wouldn't be difficult to use the window space needed for drawing the images to display a GLUI interface with the same functionality, but for now everything is operated with keyboard commands. Some of the commands print output to the console, so it is useful to keep the console within view. The commands are:<br /> <br /> * W, A, S, D : Move the current image up, left, down, or right, respectively.<br /> * N : Make the next image the current image.<br /> * E, C : Increase or decrease the focal length, respectively. Decreasing the focal length will cause a greater spherical warping in the image (see the sketch below).<br /> <br />
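The warping can be pictured as projecting each flat image onto a sphere whose radius is the focal length: the shorter the focal length, the tighter the sphere and the stronger the curvature. The function below is a rough CPU-side sketch of one standard spherical mapping for a single pixel; it is illustrative only, and is not necessarily the exact formula our fragment shader uses.<br /> <br /> &lt;pre&gt;<br />
// Rough sketch of a spherical warp for one pixel. The formula is a standard<br />
// spherical projection and may differ from the one in our fragment shader.<br />
#include &lt;math.h&gt;<br />
<br />
void sphericalWarp(double x, double y,   // pixel offset from the image center<br />
                   double focalLength,   // smaller value = stronger warp<br />
                   double* outX, double* outY) {<br />
    // Project the flat image onto a sphere of radius focalLength.<br />
    *outX = focalLength * atan(x / focalLength);<br />
    *outY = focalLength * atan(y / sqrt(x * x + focalLength * focalLength));<br />
    // As focalLength grows, the output approaches (x, y): no visible warping.<br />
}<br />
<br />
int main() {<br />
    double wx, wy;<br />
    sphericalWarp(100.0, 50.0, 500.0, &amp;wx, &amp;wy); // mild warp far from the center<br />
    return 0;<br />
}<br />
&lt;/pre&gt;<br /> In the program itself a computation of this kind happens per fragment on the GPU, and the E and C keys simply raise or lower the focal length parameter.<br /> <br />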
===Downloads===<br /> <br /> Here is the complete set of code.<br /> <br /> * [http://ivl.calit2.net/wiki/files/README.txt Readme file]<br /> * [http://ivl.calit2.net/wiki/files/xvidsend.zip Xvid send module, requires Visual C++ 2005]<br /> * [http://ivl.calit2.net/wiki/files/GigE_Link_SDK_2(1).2.0_build_270_(SP4)_for_Windows.zip Silicon Imaging API] <br /> * [http://ivl.calit2.net/wiki/files/nasm-0.98.35-win32.zip NASM, required for Xvid]<br /> * [http://ivl.calit2.net/wiki/files/xvidcore-0(1).9.0.tar.gz Xvid for video compression]<br /> * [http://ivl.calit2.net/wiki/files/amalgamator.tar.gz Amalgamator source code]<br /> <br /> ==Students==<br /> <br /> [[Andrew Prudhomme]]:<br /> Andrew has worked on the back end of the project, specifically in receiving input from the cameras, performing the necessary steps to obtain the data in a usable format, and compressing and streaming it efficiently across a network.<br /> <br /> [[Alex Zavodny]]:<br /> Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.</div> Azavodny http://ivl.calit2.net/wiki/index.php/Rincon Rincon 2007-06-21T07:54:17Z <p>Azavodny: /* Technical Specs */</p> <hr /> <div>==Project Overview==<br /> <br /> The goal of this project is to implement a scalable real-time solution for streaming multiple high-definition videos to be stitched into a &quot;super high-definition&quot; panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage/live viewing on multi-panel displays. To account for perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.<br /> <br /> Another important aspect of the project lies in efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between the sender and receiver drops, we wish to allow the user to choose which aspect of the video to degrade to maintain desired performance (e.g. degrading frame rate to preserve quality, or vice-versa).<br /> <br /> ==Project Status==<br /> <br /> Currently, we are able to stream video from the two HD cameras we have available to us and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images.<br /> <br /> ==Implementation==<br /> <br /> The current implementation of the project consists of two programs, one of which runs on the sending side and one of which runs on the receiving side.<br /> <br /> <br /> Each camera in the system is connected to its own machine, which obtains raw Bayer-pattern data from the camera, converts it to 32-bit RGB, compresses it with the Xvid codec, and sends it across a wide-area network. Because the SDK for interacting with the cameras (the GigE Link SDK, available from the [http://www.siliconimaging.com/software/ downloads section of the Silicon Imaging website]) is only available for Windows, these machines run Windows XP.<br /> <br /> <br /> On the receiving end, there is one Linux box (currently running CentOS, though it has also been tested on SuSE 10.0) that receives all of these streams, processes them, and creates a buffer for the entire image that it then sends to SAGE. <br /> <br /> <br /> The program on the sending end, xvidsend, and the code for receiving the data across the network and obtaining a single frame at a time on the receiving end, were written by Andrew. The program on the receiving end, the Amalgamator, was written by Alex. The code for the Amalgamator has been commented and is in that sense self-documented, but a high-level overview of the structure of the program is provided here for clarity.<br /> <br /> ====The Amalgamator====<br /> <br /> =====The Makefile=====<br /> If you intend to compile the program yourself, you will most likely have to modify the makefile. The makefile is rather simple and contains a few rules that are hardcoded to enable or disable certain options. For example, 'make local' builds the Amalgamator without any special options; 'make sage' enables SAGE support; 'make sage-tile' enables tiled rendering for SAGE output (which is almost always necessary); and 'make sage-tile-shader' enables tiled rendering for SAGE output and additionally warps each of the images with a fragment shader. 'make network' and other variations are deprecated.<br /> <br /> =====The program=====<br /> <br /> The Amalgamator is a multithreaded application. The idea is to have one thread for each incoming stream and one thread for main processing. The stream threads sit and wait for network input, and on receiving data decompress it and update a shared buffer. The main thread reads from these shared buffers and draws each image on a textured quad, so that a pixel shader can be applied to each image. As the resulting image is too large for the local screen, and as the graphics card on our current machine does not have Linux drivers that enable support for OpenGL Framebuffer Objects, we make use of a [http://www.mesa3d.org/brianp/TR.html tile rendering library] to break the image into smaller chunks and render each chunk individually.<br /> <br /> This is beneficial in that it doesn't complicate the code very much, but detrimental in that it slows rendering down drastically -- the overhead of switching from tile to tile appears to effectively double the rendering time. The best compromise, then, is to make the tiles as large as possible on the local end, which in this case is 1920x1080 (the size of half of the total image for two streams), as the window into which each tile is rendered must fit on the desktop of the local screen. <br /> <br /> The user can then control the position and warping amount of each image in order to align the images manually. As of now there is no GUI on the local end; it wouldn't be difficult to use the window space needed for drawing the images to display a GLUI interface with the same functionality, but for now everything is operated with keyboard commands. Some of the commands print output to the console, so it is useful to keep the console within view. The commands are:<br /> <br /> * W, A, S, D : Move the current image up, left, down, or right, respectively.<br /> * N : Make the next image the current image.<br /> * E, C : Increase or decrease the focal length, respectively. Decreasing the focal length will cause a greater spherical warping in the image.<br /> <br /> ==Students==<br /> <br /> [[Andrew Prudhomme]]:<br /> Andrew has worked on the back end of the project, specifically in receiving input from the cameras, performing the necessary steps to obtain the data in a usable format, and compressing and streaming it efficiently across a network.<br /> <br /> [[Alex Zavodny]]:<br /> Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.</div> Azavodny http://ivl.calit2.net/wiki/index.php/Rincon Rincon 2007-06-01T17:33:02Z <p>Azavodny: /* Project Status */</p> <hr /> <div>==Project Overview==<br /> <br /> The goal of this project is to implement a scalable real-time solution for streaming multiple high-definition videos to be stitched into a &quot;super high-definition&quot; panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage/live viewing on multi-panel displays. To account for perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.<br /> <br /> Another important aspect of the project lies in efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between the sender and receiver drops, we wish to allow the user to choose which aspect of the video to degrade to maintain desired performance (e.g.
degrading frame rate to preserve quality, or vice-versa).<br /> <br /> ==Project Status==<br /> <br /> Currently, we are able to stream video from the two HD cameras we have available to us and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images.<br /> <br /> ==Technical Specs==<br /> <br /> ==Students==<br /> <br /> [[Andrew Prudhomme]]:<br /> Andrew has worked on the back end of the project, specifically in receiving input from the cameras, performing the necessary steps to obtain the data in a usable format, and compressing and streaming it efficiently across a network.<br /> <br /> [[Alex Zavodny]]:<br /> Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.</div> Azavodny http://ivl.calit2.net/wiki/index.php/Andrew_Prudhomme Andrew Prudhomme 2007-06-01T17:31:39Z <p>Azavodny: </p> <hr /> <div>#REDIRECT [[User:aprudhom]]</div> Azavodny http://ivl.calit2.net/wiki/index.php/Alex_Zavodny Alex Zavodny 2007-06-01T17:31:28Z <p>Azavodny: </p> <hr /> <div>#REDIRECT [[ User:azavodny]]</div> Azavodny http://ivl.calit2.net/wiki/index.php/Alex_Zavodny Alex Zavodny 2007-06-01T17:31:16Z <p>Azavodny: </p> <hr /> <div>#REDIRECT [[ http://ivl.calit2.net/wiki/index.php/User:azavodny]]</div> Azavodny http://ivl.calit2.net/wiki/index.php/Alex_Zavodny Alex Zavodny 2007-06-01T17:30:59Z <p>Azavodny: </p> <hr /> <div>#REDIRECT [[http://ivl.calit2.net/wiki/index.php/User:azavodny]]</div> Azavodny http://ivl.calit2.net/wiki/index.php/Andrew_Prudhomme Andrew Prudhomme 2007-06-01T17:29:18Z <p>Azavodny: </p> <hr /> <div>#REDIRECT [[http://ivl.calit2.net/wiki/index.php/User:aprudhom]]</div> Azavodny http://ivl.calit2.net/wiki/index.php/People People 2007-06-01T17:27:52Z <p>Azavodny: /* Students */</p> <hr /> <div>==Faculty==<br /> * [http://www.calit2.net/people/staff_detail.php?id=67 Thomas DeFanti]<br /> * [http://www.calit2.net/~jschulze/ J&amp;uuml;rgen Schulze]<br /> * Javier Girado<br /> <br /> ==Staff==<br /> * [http://www.calit2.net/people/staff_detail.php?id=68 Greg Dawe]<br /> * Hector Bracho<br /> * [http://www.calit2.net/people/staff_detail.php?id=79 Qian Liu]<br /> <br /> ==Students==<br /> * Alex Liu<br /> * [http://ivl.calit2.net/wiki/index.php/User:Azavodny Alex Zavodny]<br /> * Andre Barbosa<br /> * [http://ivl.calit2.net/wiki/index.php/User:aprudhom Andrew Prudhomme]<br /> * Ava Pierce<br /> * Chih Liang<br /> * [http://ivl.calit2.net/wiki/index.php/User:DAR Daniel Rohrlick]<br /> * David Coughlan<br /> * David Jackson<br /> * Iman Mostafavi<br /> * Jeffrey Kuramoto<br /> * [http://ivl.calit2.net/wiki/index.php/User:Kalice Karen Lin]<br /> * [http://ivl.calit2.net/wiki/index.php/User:Mbajorek Michael Bajorek]<br /> * Mike Toillion<br /> * [http://ivl.calit2.net/wiki/index.php/User:Pweber Philip Weber]<br /> * Praveen Subramani<br /> * Sara Richardson<br /> * Sendhil Panchadsaram<br /> * [http://ivl.calit2.net/wiki/index.php/User:feng Shaofeng(Leo) Liu]<br /> * Stephen Boyd<br /> * [http://ivl.calit2.net/wiki/index.php/User:Vhuynh Vinh Huynh]<br /> * Ya Betsy Kao<br /> <br /> ==Affiliated Researchers==<br /> * [http://www.telascience.org/Members/johng/ John Graham]<br /> * [http://ivl.calit2.net/wiki/index.php/User:Hsieht Tung-Ju Hsieh]<br /> <br /> ==Former Members==<br /> * Elaine Liu (PRIME Scholar 2006)</div> 
Azavodny http://ivl.calit2.net/wiki/index.php/User:Azavodny User:Azavodny 2007-06-01T17:05:51Z <p>Azavodny: </p> <hr /> <div>==About Me==<br /> I am a fifth-year Computer Science transfer student at UCSD, and will be graduating this spring quarter. I work as a tutor for the CSE department and at the Immersive Visualization Lab.<br /> <br /> ==Projects==<br /> <br /> I work on the [[Rincon]] project with [[Andrew Prudhomme]].</div> Azavodny http://ivl.calit2.net/wiki/index.php/People People 2007-05-11T20:13:42Z <p>Azavodny: /* Students */</p> <hr /> <div>==Faculty==<br /> * [http://www.calit2.net/people/staff_detail.php?id=67 Thomas DeFanti]<br /> * [http://www.calit2.net/~jschulze/ J&amp;uuml;rgen Schulze]<br /> * Javier Girado<br /> <br /> ==Staff==<br /> * [http://www.calit2.net/people/staff_detail.php?id=68 Greg Dawe]<br /> * Hector Bracho<br /> * [http://www.calit2.net/people/staff_detail.php?id=79 Qian Liu]<br /> <br /> ==Students==<br /> * Alex Liu<br /> * [http://ivl.calit2.net/wiki/index.php/User:Azavodny Alex Zavodny]<br /> * Andre Barbosa<br /> * Andrew Prudhomme<br /> * Ava Pierce<br /> * Chih Liang<br /> * [http://ivl.calit2.net/wiki/index.php/User:DAR Daniel Rohrlick]<br /> * David Coughlan<br /> * David Jackson<br /> * Iman Mostafavi<br /> * Jeffrey Kuramoto<br /> * [http://ivl.calit2.net/wiki/index.php/User:Kalice Karen Lin]<br /> * Michael Bajorek<br /> * Mike Toillion<br /> * Philip Weber<br /> * Praveen Subramani<br /> * Sara Richardson<br /> * Sendhil Panchadsaram<br /> * Shaofeng (Leo) Liu<br /> * Stephen Boyd<br /> * Vinh Huynh<br /> * Ya Betsy Kao<br /> <br /> ==Affiliated Researchers==<br /> * [http://ivl.calit2.net/wiki/index.php/User:Hsieht Tung-Ju Hsieh]<br /> <br /> ==Former Members==<br /> * Elaine Liu (PRIME Scholar 2006)</div> Azavodny http://ivl.calit2.net/wiki/index.php/Rincon Rincon 2007-05-11T06:11:27Z <p>Azavodny: </p> <hr /> <div>==Project Overview==<br /> <br /> The goal of this project is to implement a scalable real-time solution for streaming multiple high-definition videos to be stitched into a &quot;super high-definition&quot; panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage/live viewing on multi-panel displays. To account for perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.<br /> <br /> Another important aspect of the project lies in efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between the sender and receiver drops, we wish to allow the user to choose which aspect of the video to degrade to maintain desired performance (e.g. degrading frame rate to preserve quality, or vice-versa).<br /> <br /> ==Project Status==<br /> <br /> Currently, we are able to stream video from the three HD cameras we have available to us and display the warped videos simultaneously on the receiving end.
At the moment, technical issues prevent all three streams from being displayed in their entirety on a multi-panel display at once, but reaching this point is a milestone, as it demonstrates functional software on both the sending end (interaction with the cameras) and the receiving end (processing multiple streams at once).<br /> <br /> The immediate goals are to fix the multi-panel display bug and to optimize the pipeline by moving certain parts of the video encoding from software into hardware.<br /> <br /> ==Technical Specs==<br /> <br /> ==Students==<br /> <br /> [[Andrew Prudhomme]]:<br /> Andrew has worked on the back end of the project, specifically in receiving input from the cameras, performing the necessary steps to obtain the data in a usable format, and compressing and streaming it efficiently across a network.<br /> <br /> [[Alex Zavodny]]:<br /> Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.</div> Azavodny http://ivl.calit2.net/wiki/index.php/Projects Projects 2007-05-11T05:43:42Z <p>Azavodny: /* Active Projects */</p> <hr /> <div>==Active Projects==<br /> <br /> * StarCave<br /> * San Francisco Bay Bridge<br /> * Protein Visualization<br /> * Super Browser<br /> * SIGGRAPH 2007 Art Installation<br /> * [[Virtual Calit2 Building]]<br /> * Nested Volumes<br /> * CineGrid<br /> * Social Networks in Myspace <br /> * Digital Archaeology<br /> * Brain Observatory<br /> * Interaction with Multi-Spectral Images<br /> * Neuroscience in Architecture<br /> * [[Screen]]<br /> * OssimPlanet<br /> * [[Rincon]]<br /> * Spatialized Sound<br /> * PRIME 2007<br /> <br /> ==Inactive Projects==<br /> * Cell Structures (NCMIR)<br /> * Children's Hospital<br /> * Earthquake Visualization (SIO)<br /> * Calit2 Undergraduate Scholarship 2006<br /> * Calit2 Undergraduate Scholarships 2007<br /> * PRIME 2006</div> Azavodny