CSE167FinalProjectF12

From Immersive Visualization Lab Wiki
Revision as of 20:25, 13 December 2012

The Lonely Island (Trung Thanh Lam, Chi Tsui, Kazuhito Ochiai)

Procedural robots
Island

Technical features

  • Environment mapping
  • Normal mapping
  • Procedural terrain
  • Procedural robots
  • Toon shading
  • Fog
  • Illumination: 1 spot light (for the sun in outer space), 1 directional light (for the sun when the camera is inside the earth)
  • Camera runs along a Bezier curve or is in free mode.
  • A revolved Bezier surface creates a bottle-like shape, used for creating robot limbs.
  • IFramework: a simple OpenGL framework created by us.

Interaction

Camera

  • F: toggle between free mode and running along the Bezier curve
  • W,A,S,D: move camera when in free mode
  • -/= : decrease/increase camera speed

General

  • 1 : Toggle wireframe

Creative Efforts

Ocean

We create many patches of water bounded by boxes to aid frustum culling. Each patch of water is moved up and down by a heightmap. The water texture is also moved, but at a slower rate. Normal mapping is applied.
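
The patch displacement above can be sketched roughly as follows; the struct and function names are hypothetical, and the heightmap scroll is simplified to an integer offset:

```cpp
#include <vector>
#include <cassert>

// Sketch (hypothetical names): displace one water vertex by sampling a
// tiling heightmap, with the geometry offset scrolling over time so the
// surface and its (separately scrolled) texture drift at different rates.
struct WaterPatch {
    std::vector<float> height;  // row-major tiling heightmap
    int size;                   // heightmap is size x size

    float sample(int x, int z) const {
        // wrap so the patch tiles seamlessly
        x = ((x % size) + size) % size;
        z = ((z % size) + size) % size;
        return height[z * size + x];
    }
};

// Vertical displacement for grid vertex (x, z) at time t: the heightmap
// scrolls with time, moving every vertex up and down.
float waterHeight(const WaterPatch& p, int x, int z, float t, float amplitude) {
    int offset = static_cast<int>(t);  // geometry scroll speed (1 cell/sec)
    return amplitude * p.sample(x + offset, z);
}
```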

Procedural robots

We use the same idea as the Coke-bottle shape grammar from lecture. Each time we create a robot, it is assembled from different body parts: we first randomly choose a torso, and the torso defines six attachment positions (two arms, two legs, back, and head). Rules are applied to these positions to create the corresponding robot parts, and a similar approach applies recursively to the more detailed parts. We can apply a shader/texture to a whole robot or to each individual part.
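
A minimal sketch of that shape-grammar idea, with illustrative part names and a tiny deterministic generator standing in for the project's real random rules:

```cpp
#include <string>
#include <vector>
#include <cassert>

// Hypothetical sketch: a torso nonterminal expands into six attachment
// slots, and each slot is rewritten into a randomly chosen concrete part.
// All part names here are made up for illustration.
struct RobotGrammar {
    unsigned seed;
    explicit RobotGrammar(unsigned s) : seed(s) {}

    // tiny deterministic LCG stand-in for rand(), so results are repeatable
    unsigned next() { return seed = seed * 1664525u + 1013904223u; }

    std::string pick(const std::vector<std::string>& options) {
        return options[next() % options.size()];
    }

    // Expand: torso -> head, back, 2 arms, 2 legs, each chosen by a rule.
    std::vector<std::string> build() {
        std::vector<std::string> parts;
        parts.push_back(pick({"boxTorso", "barrelTorso"}));
        parts.push_back(pick({"domeHead", "cubeHead"}));
        parts.push_back(pick({"jetpack", "flatBack"}));
        for (int i = 0; i < 2; ++i) parts.push_back(pick({"clawArm", "tubeArm"}));
        for (int i = 0; i < 2; ++i) parts.push_back(pick({"wheelLeg", "tubeLeg"}));
        return parts;  // 7 parts: torso + 6 filled attachment slots
    }
};
```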


Procedural terrain

There are two types of terrain: one generated from a heightmap, the other generated by the midpoint-displacement algorithm and then smoothed. The heightmap terrain also has a lightmap (black/white) applied to create terrain shadows with little computation, and it looks very good. The terrain is textured based on its slope, and all terrains use local illumination.

Camera system

We have two modes: free mode and running along a Bezier curve. For the second mode, we define two curves, one for the camera position and one for the camera target. The vectors sampled from these curves are fed into the gluLookAt function, and interpolating between them makes the movement look smooth. We can also move backward and forward along the curve in case we want to rewind the movie.
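
The per-frame curve evaluation might look like the following cubic Bezier sketch; the Vec3 type and the Bernstein-form evaluation are generic illustrations, not code from the project:

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

// Cubic Bezier evaluation in Bernstein form. The camera's eye and target
// each follow one such curve, and the two sampled points feed gluLookAt.
Vec3 bezier(const Vec3 p[4], float t) {
    float u = 1.0f - t;
    float b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x,
             b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y,
             b0 * p[0].z + b1 * p[1].z + b2 * p[2].z + b3 * p[3].z };
}
```

Rewinding the movie is then just stepping t back toward 0.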

Fog

A simple fog is applied to occlude the world's boundaries. We don't want viewers to see the edges, so the scene appears to extend far into the distance.
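
A distance-based fog factor can be computed like the fixed-function GL_LINEAR fog mode; this sketch assumes linear fog, which the text does not specify:

```cpp
#include <algorithm>
#include <cassert>

// Linear fog factor: 1 at fogStart (no fog), 0 at fogEnd (fully fogged).
// The final color is mix(fogColor, fragmentColor, factor), which hides
// everything near the far boundary.
float fogFactor(float dist, float fogStart, float fogEnd) {
    float f = (fogEnd - dist) / (fogEnd - fogStart);
    return std::max(0.0f, std::min(1.0f, f));
}
```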

Revolved Bezier Surface

We create a Bezier curve in the x-y plane, then multiply its vertices by rotation matrices to create copies of the curve rotated around the y-axis. Finally we connect them together and compute smooth normal vectors and texture coordinates. The resulting surface is used to create the robots' limbs.
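
Revolving one profile point around the y-axis can be sketched as below (P3 and revolve are hypothetical names); connecting the rings from adjacent curve samples gives the quad strips of the surface:

```cpp
#include <cmath>
#include <vector>
#include <cassert>

struct P3 { float x, y, z; };

// Rotate a profile point around the y-axis, producing one copy per
// angular step. x becomes the radius; y is unchanged by the rotation.
std::vector<P3> revolve(P3 profile, int steps) {
    std::vector<P3> ring;
    for (int i = 0; i < steps; ++i) {
        float a = 2.0f * 3.14159265f * i / steps;
        ring.push_back({ profile.x * std::cos(a), profile.y,
                         -profile.x * std::sin(a) });
    }
    return ring;
}
```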

IFramework

We rearchitected our OpenGL program, organizing lights, materials, shaders, textures, etc. so that we can do more with less code. We also refactored the scene graph by making the Transform class a property of each Group or Geode; a simple animation then takes just one line of code. With a good architecture, the project can scale up.

Corgi Problems (Stephanie Ho, Xue Guo)

Cow
Throwing a cow

Theme

The user is a corgi on a farm while the humans are away. Problem is, the cows have escaped and scattered about! The user has natural herding instincts to round the cows back up, but in this world, the user is no ordinary corgi. Despite having short corgi legs, the user has super strength and can lift cows to transport them to a desired location. As a side effect of that super strength, the user is incapable of gently placing cows down after picking them up; instead, cows are thrown, so try to aim well. Carrying cows makes the user walk slower, so throwing cows around is encouraged. The cows are scattered all over the area, and it is the user's job to bring them back before the humans return. Only one cow can be picked up at a time, so make sure to find all the cows quickly!

Technical Features

  • Randomly generated terrain: Implemented using the diamond-square algorithm to make the terrain more interesting and hilly. We dealt with interesting bugs that generated extreme (high and low) height values, and played with the terrain scale so the hills come out smooth rather than rocky. Quad vertices are stored in the heightmap.
  • Collision detection with terrain: Given an (x,z) coordinate, find which terrain quad contains it and perform bilinear interpolation on that quad to calculate the terrain height at that point. Terrain collision detection is used both for camera movement as the user walks over the hilly terrain and for cows bouncing over the terrain. Border checks prevent the user from walking off the map.
  • Pickup and throw cows: Check whether there is a cow within pickup distance of the user. When a cow is thrown, Euler integration adjusts its velocity over time for the bouncing effect; the velocity adjustments were tuned to achieve the desired bounce. Cows that bounce at the border of the map bounce back as if off an invisible wall, which keeps them from leaving the map.
  • L-system trees: Most of the time was spent figuring out the input strings to parse.
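
The bilinear height lookup from the second bullet can be sketched as follows, assuming a unit grid spacing (the project's actual spacing and data layout may differ):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Bilinear interpolation of terrain height at a continuous (x, z): find
// the containing quad in the heightmap, then blend its four corner
// heights by the fractional position inside the quad.
float terrainHeight(const std::vector<std::vector<float>>& hm, float x, float z) {
    int x0 = static_cast<int>(x), z0 = static_cast<int>(z);
    float fx = x - x0, fz = z - z0;
    float h00 = hm[z0][x0],     h10 = hm[z0][x0 + 1];
    float h01 = hm[z0 + 1][x0], h11 = hm[z0 + 1][x0 + 1];
    float top = h00 * (1 - fx) + h10 * fx;  // blend along x at row z0
    float bot = h01 * (1 - fx) + h11 * fx;  // blend along x at row z0+1
    return top * (1 - fz) + bot * fz;       // blend along z
}
```

The camera's y and a bouncing cow's floor height both come from this one query.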

Creative

  • Cows: Created a simple cow with the scene graph and drew the cow texture. The cow's legs animate when the user picks it up and while it bounces through the air.
  • Farm: Created a flat dirt patch of land for the farm. The barn object model and texture were acquired online. From inside the barn the roof gets culled, so back-face culling can be toggled.
  • Sky Box: Make distant area more scenic.
  • Seamless textures used everywhere.

Controls

The user can use a combination of keyboard and mouse to travel around the map, although the keyboard alone also works.

  • Rotate left: key 'a' or mouse
  • Rotate right: key 'd' or mouse
  • Move forward: key 'w'
  • Move backward: key 's'
  • Pick up a cow: key 'e' (when user is near cow and not already holding a cow)
  • Throw a cow: key 'e' (when holding a cow)
  • Toggle back face culling: key 'c'
  • Toggle sky box: key 'b'

Day and Night (Hu Gao, Kevin Liao, Ivan Tham)

Day
Night

Our group decided to create an interactive 3D scene for our final project named "Day and Night". The theme of our scene focuses on the environment and lighting changes between day and night. The story involves someone who fainted while walking across the Sahara Desert and dreamed of that very scene and its abundance of interesting objects and effects. Desperate and thirsty for water, the scene appropriately displays the desires of this lost wanderer: the rain symbolizes his desire for water, the castle his desire for a resting place, and the tree his desire for a long life. He also dreamed of an ever-changing mountain to show how the mountain seemed infinitely far away.

Interactions

The terrain may be regenerated by a keypress. Also, the position of the sun can be changed, which alters the time of day/night. The shadows can also be toggled at will.

Keyboard Commands

  • s - toggle shadows
  • t - rebuild the terrain
  • l - lower the sun and dim the light
  • u - raise the sun and brighten the light

Technical Effects

The technical features we implemented for this project include procedurally generated terrain, shadow mapping, bounding-box collision detection, and particle effects. A mountain is procedurally generated, and the major objects in the scene cast shadows with respect to the light source. Particles are used to render the sun and rain, and the rain bounces back up when it hits the bounding box of the castle. The user may also enable and disable shadow mapping.

  • Shadow Mapping: The shadow map is rendered offscreen using an OpenGL framebuffer object (FBO) in the form of a z-buffer from the light's point of view. The scene is then rendered from the camera's original point of view and the resulting distances are compared with those in the z-buffer. A shadow is cast on a spot if it is farther away than the corresponding point in the z-buffer.
  • Procedurally Generated Terrain: We generate a 2D array of height values, often referred to as a heightmap. It is similar to a bitmap for an image but stores height values instead of color values. We generate the terrain by filling the heightmap with random values and then averaging each value with those around it. Then we draw the terrain based on that filled array (using GL_QUADS).
  • Particle System: We implemented a particle class with the fundamental components of a particle, such as position, lifetime, color, and speed. We created emitters so that the particles work together to form the rain and sun. The two emitter types we used were a point emitter and a plane emitter: the point emitter was used for the sun, with particles exploding outward so it looks like a dandelion, while the plane emitter has particles drip down like rain from a cloud.
  • Bounding Box Collision: We calculated the midpoint of our castle as well as the minimum and maximum x, y, and z values in order to find the extent of the castle's bounding box. We pass these values to our update method so that if any object falls inside this range, the appropriate collision occurs. We check whether the object's coordinates fall within the minimum and maximum values in each of the three dimensions. In our scene, the rain collides with the castle, reverses its direction, and changes its speed randomly.
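
The bounding-box check in the last bullet reduces to a per-axis min/max comparison; a minimal sketch with hypothetical names:

```cpp
#include <cassert>

// Axis-aligned bounding box for the castle, built from the min/max
// extents described above.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// A point is inside the box iff it lies between the min and max on all
// three axes; on a hit the rain particle reverses direction.
bool inside(const AABB& b, float x, float y, float z) {
    return x >= b.minX && x <= b.maxX &&
           y >= b.minY && y <= b.maxY &&
           z >= b.minZ && z <= b.maxZ;
}
```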

Creative Efforts

The user can change between day and night for the whole scene. Depending on whether it is day or night, the position of the sun and the light intensity changes. Also, when it is night time, the shadows are disabled and vice versa. We used some interesting object models for our scene.

The Bipolar Graphics Student (Eric Chung, Yixin Zhu)

Broken teapot
Rage mode

Story

The scene is a desolate desert landscape. The main character is a bipolar UCSD student who woke up in the middle of the desert after a night of drinking meant to help him forget how terribly he thought he did on the CSE 167 midterm. He is extremely bipolar and goes in and out of rage mode. When in rage mode, he utilizes the things he learned in CSE 167 and starts creating particle effects - or as we see it, blowing things up. He walks around looking for a way back to campus, all the while going in and out of rage mode at random - almost as if some user were controlling his mood swings. When in rage mode, if he walks into a teapot, it begins exploding; if not in rage mode, nothing happens. Sometimes, due to his hangover, he imagines watching himself walk around in third person. Surrounded by beautiful terrain, stuck in the middle of nowhere, and unable to control his rage, the UCSD student searches helplessly for a way back home where he can pick up his midterm and end his psychological torment.

Interactions

  • In normal mode, the user is able to move around the skybox with the w/a/s/d keys and jump with space. It can collide into various sized teapots but doing so will have no visual effects.
  • In debug mode, the user is able to move around the skybox with the w/a/s/d keys and jump with space. However, unlike in normal mode, when the user collides into a teapot, the collision will cause the teapot to explode.
  • In rage mode, the user is not in control of the character. When the character approaches the boundaries of the skybox, we, the architects, automatically teleport the character back to the origin of the world, thus saving his life.

Supported keyboard and mouse commands

  • w/s - move forward/backward
  • a/d - rotate left/right
  • space - jump
  • f - fullscreen
  • r - toggles rage mode
  • p - resets character position in world
  • h - enables toon shading
  • u - enables debug mode
  • v - change view
  • 1/2/3 - change terrain models

Technical effects

  • Procedurally generated terrain
  • Particle effects
  • Collision detection w/ bounding box
  • Toon shading

Creative focus

The creative focus of the project is to blow shit up. Specifically, we focused on user interactivity with the scene and the ability to blow up randomly positioned teapots. Collisions result in beautifully crafted explosions. In addition, we added a skybox and terrain to set up a desert-like environment for the user. When the character is in rage mode, we emphasized the player's lack of control; the whole scene turns bloodshot red to represent the anger and frustration within the character.

Experience

We had a great time programming this project because we let our creative juices flow by adding an arsenal of features. Initially, we planned movement based on mouse control but later settled on w/a/s/d movement due to simplicity and time constraints. Most of the features described in our initial write-up are implemented in the final version of our project.

Animal Maze (Amell Alghamdi, Kristina Pinkerton)

2012-p4.png
2012-p4-TreesPicture.png

Theme and Story

Welcome to the Animal Maze! The baby animals want to play hide and seek with you today. They will all be randomly hidden inside the brick maze, hiding behind various trees. Use the mouse and keys to look and walk all around - they could be hidden anywhere. You had better find them all before the time runs out, or you will lose and the animals will be very sad and give you no confetti!

List of Technical Features in our Game

  1. Shadow Mapping: We implemented Shadow Mapping with two passes. The first pass renders the scene from the light’s point of view and copies it into a texture image. Then in the second pass we render from the camera’s point of view and send the texture matrix, texture coordinates, and texture image of the shadow plus other uniforms necessary for lighting, textures, etc. to render the actual scene.
  2. Particle System: We implemented a particle system that gives each particle a random bright color and a slow velocity to resemble confetti falling when the player wins or finds an animal.
  3. L-System (plant): The trees in our scene are recursively generated using the L-System technique and randomly positioned throughout the maze.
  4. Procedural Terrain (maze): Our maze specifications are generated using a depth-first search to randomly break down the walls. We then use this data to build up the blocks of the maze in the scene.
  5. Collision Detection (walls-only): We used a simple collision detection algorithm that only checks against the walls of the maze: we take the player's desired location, convert it to a square in the maze, and check it against that square's blocked walls.
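
The L-system rewriting from feature 3 boils down to one string-substitution pass per generation; the rule used below is illustrative, not the project's actual grammar:

```cpp
#include <string>
#include <map>
#include <cassert>

// One rewriting pass of an L-system: every symbol with a production rule
// is replaced by its right-hand side; other symbols (like the turtle
// commands '[', ']', '+') are copied through unchanged.
std::string lsystemStep(const std::string& s,
                        const std::map<char, std::string>& rules) {
    std::string out;
    for (char c : s) {
        auto it = rules.find(c);
        out += (it != rules.end()) ? it->second : std::string(1, c);
    }
    return out;
}
```

Applying the pass recursively for a few generations, then interpreting the final string with a turtle, yields the branching tree shapes.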

An overview of our Creative Efforts

  • Creating original 3D animals and a girl avatar with Blender. We have a little bunny rabbit, a crocodile, an elephant, a mouse, and a pink pony. We built the girl in separate pieces so that she can be animated while she walks: her feet and arms move back and forth!
  • The game is set during a starry night within a brick maze. When you move around the scene the stars in the sky appear to be moving and sparkling setting a beautiful scene.
  • Adding important game mechanics such as a timer and a win condition. If you don’t find all five animals within the time limit you will lose and not get to see the grand finale!
  • We implemented randomness for the animal locations, and collision detection with the walls, to make the game more difficult and realistic. With these two features our game is different every playthrough and keeps the player from cheating!

Keyboard Controls

  • W - move forward
  • D - move right
  • A - move left
  • S - move backward
  • Q - turn left
  • E - turn right
  • O - toggle shadow mapping
  • T - toggle view point
  • ESC - exit game

Robot Defense (Kuen-Han Lin, Matteo Mannino, Ashish Tawari)

Robot world

Explosion

Background Story

This augmented reality driven experience places you in an awkward situation. Martian robots have worked up the nerve to inhabit your desktop. It has all fallen onto you to do what you were born to do, and blow those sons of guns back to the dark side of Jupiter. Right-click fires your proton bomb while you aim with the mouse, and the number 2 key whips out a flashlight to go dark and turn off the lights in your virtual room (1 switches back to ambient). Just when you think you've seen it all, you may look to your right and find out that, my god, it wasn't a Martian on your desktop - it was your best friend at the keyboard! That's right, this is a two-player experience, fun for the whole family!

Project Description

This project features a 3D point-cloud mesh reconstruction through Delaunay triangulation to model a common desktop. The point cloud was obtained from a camera sensor, and the associated camera images were used to texture the mesh. Rectangle-to-sphere and sphere-to-point-cloud collision detection detects bullet collisions with the desktop and with the animated robot controlled by player 2. The point cloud also has a bounding box around it for efficiency. The explosions are rendered as particle effects, where the particles are billboarded texture sprites with fire, smoke, and a shockwave. A spotlight is added for an eerie effect, and the whole scene is covered by a galaxy skybox that is literally out of this world.

Our creative efforts were mostly focused on making the desktop mesh look good while being automatically generated from point-cloud data. The desktop we used had books, papers, and obstacles on it for interesting geometry and height variation. Much time was also spent tuning our particle explosion effects. The game loop and controls took some time to get the right feel, and various algorithms - triangle intersection, binary space partition trees, and plane detection - were tried for collision detection with the mesh before settling on point-cloud collisions with a bounding box. This proved the most efficient solution, presumably because comparing points is cheap, and since the points are stored in a linear array we minimize cache misses. In contrast, triangle collisions simply took longer due to all the calculations, and BSP trees over either primitive were overkill for 400 points.
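
The collision scheme the paragraph settles on (a bounding-box broad phase, then a linear scan comparing squared distances) might look like this sketch; the struct and function names are hypothetical:

```cpp
#include <vector>
#include <cassert>

struct Pt { float x, y, z; };

// Broad phase: reject the bullet sphere against the cloud's AABB (the box
// is expanded by the radius on each axis). Narrow phase: a linear,
// cache-friendly scan of squared distances, with no sqrt needed.
bool sphereHitsCloud(const std::vector<Pt>& cloud,
                     Pt lo, Pt hi,     // cloud bounding box
                     Pt c, float r) {  // bullet sphere center and radius
    if (c.x < lo.x - r || c.x > hi.x + r ||
        c.y < lo.y - r || c.y > hi.y + r ||
        c.z < lo.z - r || c.z > hi.z + r) return false;
    float r2 = r * r;
    for (const Pt& p : cloud) {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        if (dx * dx + dy * dy + dz * dz <= r2) return true;
    }
    return false;
}
```

For a few hundred points the scan is fast enough that the broad phase mostly pays off when the bullet is far from the desktop.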

Features

  • Point cloud, box, and sphere collisions
    • Point cloud collisions with spherical bullet projectile
    • Box collisions (Bounding box on top of mesh) with spherical bullet projectile
    • Sphere-to-sphere collisions with robot to projectile
  • Mesh construction
    • Delaunay triangulation
    • Matching two meshes
  • Particle effects
    • Explosions and smoke effects

Controls

  • Mouse: Aim Gun
  • 1 to toggle ambient light mode
  • 2 to toggle dark flashlight mode
  • Right click to fire
  • 'J' to rotate robot left
  • 'L' to rotate robot right
  • 'I' to move robot forward