Homework Assignment 4: Dual User VR

For this project you need to implement a dual user 3D VR application for two Oculus Rifts with Touch controllers.

You can obtain 100 points, plus up to 10 points for extra credit.

This homework assignment is due on Tuesday, June 11th at 3:00pm.

The project is designed to be a team project for two people. You can team up with the same person as for project 1, or someone else.

For inspiration, we recommend watching some of the videos from last year's CSE190 course here, as well as the Oculus Toy Box application.

Update: Here is this year's video playlist.

Grading

Your final project score consists of three parts, plus optional extra credit:

  • Documentation (10 points)
  • VR Application (70 points)
  • VR Experience (20 points)
  • Extra Credit (10 points)

Documentation (10 Points)

You need to create a web site or blog to report on the progress you're making on your project. You need to make at least two entries to get the full score.

The first report entry needs to contain (at a minimum) the following pieces of information:

  • The name of your project (you need to come up with one)
  • The names of your team members
  • A short description of your project
  • One or more screen shots of your application in its current state

In week 2 you need to write about the progress you made and report any changes to your team or team name. You also need to post another screen shot.

You are free to create the report on any type of web site or blog. We recommend Google Sites. You are free to create more entries than the two required ones.

Each team also needs to make a 2-3 minute long YouTube video of their application, to show during the first hour of the grading event during finals week. We are going to create a YouTube playlist and ask you to add your video to it. For a full video score you need to show both users and what they see in the Rift - not necessarily all the time, but at the moments where it helps the viewer understand how the app is used.

The points are distributed like this:

  • Report entry #1: 3 points. Due Monday, June 3rd at 11:59pm
  • Report entry #2: 3 points. Due Monday, June 10th at 11:59pm
  • Video: 4 points. Due June 11th at 3pm

VR Application (70 Points)

The following requirements apply to your application. The listed numbers are the points each line item contributes to your technical score.

Dual User Application (40 Points)

  • Your application needs to be a dual (i.e., two) user application running on two Oculus Rifts with Touch controllers, attached to two separate computers. (10 points)
  • Each user needs to be able to use at least one of their Touch controllers to help with the interaction. (10 points)
  • The users need to work together on something, for example: hand an object to the other user, play a ball game together, play chess, build something with Legos, cook together, etc. This requires network communication between the two computers. (10 points)
  • Either the head positions or the positions of the interacting hand(s) of each user need to be indicated with at least a simple piece of geometry for both users (e.g., a cube), to represent each user to the other. The representations of the head and/or hands need to be distinguishable from one another. The representation of the head needs to have an indicator for the direction the user is looking in; hands can't be rotationally symmetrical (i.e., can't be represented as spheres). A sketch of the pose data this implies follows this list. (10 points)
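
As a purely illustrative sketch of that pose data (the struct and field names below are made up for this example, not something the assignment prescribes), each application could send its peer a small state record every frame:

    // Hypothetical per-frame state one Rift PC sends to the other.
    // Positions and orientations come from the Oculus head and Touch tracking.
    #include <cstdint>

    struct Pose {
        float position[3];     // x, y, z in meters, in tracking space
        float orientation[4];  // quaternion x, y, z, w
    };

    struct UserState {
        uint32_t userId;       // 0 or 1, to tell the two players apart
        Pose     head;         // drives the remote head geometry + gaze indicator
        Pose     leftHand;     // drives the remote hand geometry
        Pose     rightHand;
        uint8_t  buttons;      // bit flags for trigger/grab state, if needed
    };

Sending this once per frame (or at a fixed rate such as 60 Hz) is typically enough; the receiver simply renders the remote user's head and hands from the most recent state it has.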

Other Requirements (30 Points)

  • Use per-pixel lighting with the Phong or Blinn-Phong shading model. This requires adding at least one light source to your scene and modifying the shaders (a fragment shader sketch follows this list). (10 points)
  • Collision detection needs to be part of the interaction algorithm. It can be done simply by proximity or with bounding box collisions (a collision check sketch follows this list). (10 points)
  • At least one 3D object in your application needs to be custom made by your team. You can use photogrammetry (with Agisoft Photoscan or other tools such as 123D Catch), a 3D scanner such as the one in the VR lab (at the HTC Vive desk), or a 3D modeling tool such as Blender or SketchUp. Make sure your polygon normals are correct, or else your lighting won't work. Your 3D object can't be a primitive shape (such as a sphere, cube, cuboid, torus, pyramid, or plane). (5 points)
  • You need to use audio in your application, e.g., for background music, sound effects, etc. It doesn't have to be spatialized, unless you want to compete for the extra credit for audio. (5 points)
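
For the per-pixel lighting item, here is a minimal Blinn-Phong fragment shader sketch, written as a C++ raw string literal the way shader sources are often embedded in small OpenGL programs. The uniform and varying names are assumptions for this sketch; match them to whatever your vertex shader actually outputs.

    // Minimal Blinn-Phong fragment shader (GLSL) embedded as a C++ string.
    // Assumes the vertex shader passes world-space position and normal.
    const char* blinnPhongFragSrc = R"glsl(
    #version 330 core
    in vec3 worldPos;
    in vec3 worldNormal;
    out vec4 fragColor;

    uniform vec3 lightPos;      // world-space point light position
    uniform vec3 eyePos;        // camera position for the current eye
    uniform vec3 diffuseColor;  // material color

    void main() {
        vec3 N = normalize(worldNormal);
        vec3 L = normalize(lightPos - worldPos);
        vec3 V = normalize(eyePos - worldPos);
        vec3 H = normalize(L + V);  // half vector: this is what makes it Blinn-Phong

        float ambient  = 0.1;
        float diffuse  = max(dot(N, L), 0.0);
        float specular = pow(max(dot(N, H), 0.0), 32.0);

        vec3 color = (ambient + diffuse) * diffuseColor + specular * vec3(1.0);
        fragColor  = vec4(color, 1.0);
    }
    )glsl";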
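
For the collision item, a proximity (sphere-sphere) test and an axis-aligned bounding box (AABB) overlap test are both easy to write by hand; the function names below are placeholders for this sketch:

    // Proximity check: treat two objects as spheres and compare center distance.
    bool spheresCollide(const float a[3], float radiusA,
                        const float b[3], float radiusB) {
        float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        float distSq = dx * dx + dy * dy + dz * dz;
        float r = radiusA + radiusB;
        return distSq <= r * r;   // compare squared distances to avoid a sqrt
    }

    // Axis-aligned bounding box overlap: boxes given by min/max corners.
    bool aabbsCollide(const float minA[3], const float maxA[3],
                      const float minB[3], const float maxB[3]) {
        for (int i = 0; i < 3; ++i) {
            if (maxA[i] < minB[i] || minA[i] > maxB[i]) return false;
        }
        return true;
    }

A common pattern is to run the cheap proximity check between a controller and each grabbable object every frame, and only fall back to the tighter AABB test once the spheres overlap.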

Grading

  • -10 points if your application does not run at 90 fps (frames per second)
  • -10 points if your application does not support 6-degree-of-freedom head tracking

The final project has to be presented to the course staff during our final exam slot on Tuesday, June 11th. The presentations start off with a video screening session at 3pm in room 1242. Then we will do science fair-style demos in VR lab B210 in two grading windows: one from 4-5pm, the other from 5-6pm. You are allowed to bring friends to both video screening and demos.

VR Experience (20 Points)

This score is for the user experience of your VR application. It is a subjective score factoring in your project idea, aesthetics, usability, wow factor, etc. The score will be determined by the graders, who are the instructor, TA, and tutors. You need to be prepared to let the graders try out your application during the demo window.

Extra Credit (10 Points max.)

We are going to give extra credit in the categories below. There will be 5 points of extra credit for each category a project wins, for a maximum of 10 extra credit points total. These awards will be given after grading and will be listed in a Piazza post.

The categories are:

  • Best app to make use of dual users (i.e., requires two users and is not very usable with a single user)
  • Most intuitive controls (in-app training can help)
  • Best user interaction concept
  • Best aesthetics
  • Most technically challenging app
  • Best use of audio
  • Most polished app
  • Most entertaining app
  • Best video (the one you make for the video screening): nicely edited video that shows live footage of both users using the application, along with what they see in the Rift, and it needs to include audio

Tips

You are allowed to use any software libraries which you used in homework assignments 1 through 3. In addition, you are allowed to use the following libraries:

  • SOIL to load texture images, or any other library listed on the Khronos OpenGL wiki's Image Libraries page
  • The Oculus Avatar SDK
  • Assimp for importing OBJs
  • The very simple, single header file stb library to load images
  • Tiny OBJ Loader to load OBJ files
  • OpenAL for audio support
  • XML parsers, such as MiniXML or PugiXML - useful for configuration files
  • The physics engines PhysX or Bullet

You are allowed to use any source for 3D models and textures, including:

  • Google 3D Warehouse
  • Turbosquid
  • CGTrader

To create your own models, here are a few tips:

  • The 3D scanner at the Vive computer in the VR lab is a scanner from Matter and Form. It scans objects up to 9.8 inches high and 7.0 inches in diameter, with a weight of up to 6 lbs. The scanning software is installed on the computer. We recommend exporting the scans to the OBJ file format to process them with MeshLab or load them into your application directly.
  • Agisoft Photoscan offers free 30-day trial licenses. You do have to register your email address to get one, but they are legit.
  • An open source alternative to Photoscan is WebODM. UCSD has a research compute cluster that you can use to process your images into a 3D model. The images need to be taken of a static scene (no lighting or shadow changes while you take the images), so it is best to put the object outside in a shaded area. Take 30-100 images from all sides. Log in to the WebODM front end with the account credentials given on Piazza. Create a project, click Select Images, and upload your images. The default settings should give you a reasonable 3D reconstruction, which you can download in its textured format for the best quality.

If your 3D models are too big to render at 90 fps, try using MeshLab to reduce the polygon count of your models.

Network Communication

To communicate between the two Rift PCs, you will need to implement network communication. You can keep this very simple. You are allowed to use any network communication library, including cloud services, databases, or anything related.

We recommend using this remote procedure call (RPC) library. Here is a minimal example (updated on 05/30) of a client-server project using rpclib (it only compiles in x64 Release mode).
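
To show the shape of the rpclib approach (this is a sketch under assumed names and port numbers, not the downloadable demo above): one PC runs a server that stores the latest pose it receives, and the other connects as a client and calls the bound function every frame.

    // Server side (one of the two Rift PCs) - sketch only.
    #include "rpc/server.h"
    #include <array>
    #include <mutex>

    std::array<float, 3> remoteHeadPos = {0, 0, 0};
    std::mutex poseMutex;

    int main() {
        rpc::server srv(8080);  // port number is arbitrary

        // The peer calls this every frame with its current head position.
        srv.bind("update_head", [](std::array<float, 3> pos) {
            std::lock_guard<std::mutex> lock(poseMutex);
            remoteHeadPos = pos;
        });

        srv.async_run(1);       // handle incoming calls on a background thread
        // ... run your normal render loop here, reading remoteHeadPos under the mutex ...
    }

And the corresponding client side:

    // Client side (the other Rift PC) - sketch only.
    #include "rpc/client.h"
    #include <array>

    int main() {
        rpc::client peer("192.168.0.2", 8080);  // address of the server PC

        // Inside the render loop, after polling the Oculus head pose:
        std::array<float, 3> myHeadPos = {0.0f, 1.6f, 0.0f};
        peer.call("update_head", myHeadPos);    // blocking; see the threading note below
    }

Whether you use a fixed server/client split or have both programs act as both is up to you; what matters is that each side ends up with the other user's current head and hand poses every frame.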

If you want to use a different approach, you could do direct socket communication with a library such as this one; the code can be downloaded here. You can choose to create a server program which both applications connect to, or have each application connect directly to the other. In the latter case you should run your network communication in a separate thread so that the rendering loop cannot get interrupted, which would likely lead to the frame rate dropping below 90 fps.
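
Here is a minimal sketch of that threading pattern, assuming a hypothetical sendAndReceive() helper that wraps whatever socket or RPC calls you end up using. The network thread exchanges poses at its own pace, and the render loop only touches the shared state briefly under a mutex, so a slow or dropped packet never stalls a frame.

    #include <atomic>
    #include <mutex>
    #include <thread>

    struct SharedState {
        std::mutex mutex;
        float remoteHead[3] = {0, 0, 0};   // last pose received from the peer
        float localHead[3]  = {0, 0, 0};   // last pose written by the render loop
    };

    // Placeholder for your actual socket/RPC exchange.
    void sendAndReceive(const float localHead[3], float remoteHead[3]);

    void networkLoop(SharedState& state, std::atomic<bool>& running) {
        while (running) {
            float localCopy[3], remoteCopy[3];
            {   // copy out under the lock, then do the slow network I/O unlocked
                std::lock_guard<std::mutex> lock(state.mutex);
                for (int i = 0; i < 3; ++i) localCopy[i] = state.localHead[i];
            }
            sendAndReceive(localCopy, remoteCopy);
            {
                std::lock_guard<std::mutex> lock(state.mutex);
                for (int i = 0; i < 3; ++i) state.remoteHead[i] = remoteCopy[i];
            }
        }
    }

    // In your application:
    //   SharedState state;
    //   std::atomic<bool> running{true};
    //   std::thread net(networkLoop, std::ref(state), std::ref(running));
    //   ... render loop reads state.remoteHead / writes state.localHead under the mutex ...
    //   running = false;  net.join();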