
Monday, October 30, 2017


Graphics (9 of ∞) The world of 2D is no more.

Posted in EAE 6320-001, write-ups


In this write-up, I will talk about my experience of converting our 2D graphics project into a much cooler 3D graphics project. I will cover adding the third coordinate (the z axis) and the related changes to the engine code and the shader code. I will also add a camera to support viewport navigation and to control the per-frame data in 3D space.


Re-architecting & Project Setup
I started this assignment by restructuring some of my existing implementation. This includes having different levels of abstraction for widgets and game objects. In our project, a widget represents a UI element that supports an effect, a sprite, and textures, while a gameObject represents a simulation object that supports an effect and mesh data.
The game code submits the widgets and the gameObjects through the SubmitDataToBeRendered method. After this, I added a Physics project to support rigid-body calculations. Our gameObject represents a rigid body in 3D space and gives access to things like position, velocity, angular velocity, and so on. In the current state of the project, the only reference the Physics project required was the Math project.
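A rough sketch of this split could look like the following. All of the type names and members here are my assumptions for illustration; only SubmitDataToBeRendered comes from the actual project.

```cpp
#include <vector>

// Hypothetical sketch of the widget / game-object abstraction levels.
struct Widget     { int effectId, spriteId, textureId; };   // 2D UI element
struct GameObject { int effectId, meshId;                    // 3D simulation object
                    float position[3], velocity[3]; };

// Everything the graphics system needs for one frame.
struct FrameData {
    std::vector<Widget>     widgets;
    std::vector<GameObject> gameObjects;
};

// The game code hands both kinds of renderables to graphics once per frame.
void SubmitDataToBeRendered(FrameData& frame, const Widget& w, const GameObject& g) {
    frame.widgets.push_back(w);
    frame.gameObjects.push_back(g);
}
```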


Baby steps
The first thing I did on this project was to add the third coordinate to the position to support the z axis. This also required updating the position in the shader code to be a float3/vec3, and the format I used to support this structure in D3D was DXGI_FORMAT_R32G32B32_FLOAT. With this in place, I was able to run an identical version of the previous 2D project by passing 0 as the input for all the z values.
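The vertex-format change is small but easy to get wrong; a minimal sketch of the updated layout (the struct name and trailing members are assumptions):

```cpp
#include <cstddef>

// The position grows from (x, y) to (x, y, z).  On the D3D side, the
// matching input-element format is DXGI_FORMAT_R32G32B32_FLOAT
// (three consecutive 32-bit floats).
struct sVertex {
    float x, y, z;   // was just x, y in the 2D version
    // ... color / UV members would follow here
};
```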


Rigid Bodies
Then I removed the position and velocity structs from the gameObject and replaced them with a rigidBody struct that gives me direct access to those two vectors. I referred to the given code for calculating the future position, and it turned out to be exactly the same as mine. Just to maintain consistency, I followed JP's code and made some structural changes to the project to leverage functions like PredictFuturePosition. There was no change in the level of smoothness after doing this, as my existing implementation already followed the same approach.
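The prediction itself is just a simple Euler step of the velocity; a minimal sketch of what a PredictFuturePosition like this computes (the Vec3 type is an assumption):

```cpp
struct Vec3 { float x, y, z; };

// Predict where a rigid body will be dt seconds from now,
// assuming constant velocity over that interval.
Vec3 PredictFuturePosition(const Vec3& position, const Vec3& velocity, float dt) {
    return { position.x + velocity.x * dt,
             position.y + velocity.y * dt,
             position.z + velocity.z * dt };
}
```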


Transforms
Then I worked on creating a local-to-world transform that is calculated when the game code submits its data; the same transform is used in the shader code, where we multiply the matrix with the position data to get the new position. I wrote a platform-independent macro that can be used in the shader file without having to worry about the underlying complexity/differences. I was careful to use Git to commit and push frequently, so I always had a hint of when a particular crash/error was introduced. I moved on once I was able to apply the transform and still move the position of the simulation object.
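For the simplest case (translation only, ignoring orientation), the local-to-world transform reduces to a 4x4 translation matrix applied to a homogeneous position. A sketch under those assumptions, using a row-major matrix with column-vector convention:

```cpp
// 4x4 matrix; translation lives in the last column.
struct Mat4 { float m[4][4]; };

Mat4 MakeTranslation(float tx, float ty, float tz) {
    Mat4 r = {{{1, 0, 0, tx},
               {0, 1, 0, ty},
               {0, 0, 1, tz},
               {0, 0, 0, 1}}};
    return r;
}

// out = m * in, where in is a homogeneous position (x, y, z, 1).
void TransformPoint(const Mat4& m, const float in[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m.m[row][col] * in[col];
    }
}
```

A real local-to-world transform would also fold in the object's orientation, but the translation case is enough to verify the plumbing from game code to shader.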


Local Space - The object’s vertices relative to the object’s origin, facing forward.
World Space - Coordinates are defined with a central origin in the world, facing forward. We need this space to be able to translate our object’s vertices into the world. This is achieved by multiplying the translation matrix with each vertex to center the vertices around the required point in the world.
Camera Space - This is the world relative to the location of the viewer. Coordinates are defined with a central origin at the camera, facing forward. Camera space is required to represent the object's vertex positions in the camera's coordinates.
Projected Space - This is generally represented by a frustum, and everything on the screen fits into this frustum. The idea of this space is to skew/flatten the 3D representation into the interval of -1 to +1.
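The last step of the chain above deserves a concrete example: after the camera-to-projected transform, a clip-space position (x, y, z, w) is divided by its w component, which is what squashes everything inside the view frustum into that -1 to +1 interval. A minimal sketch (type names are assumptions):

```cpp
struct Clip { float x, y, z, w; };  // position after the projection matrix
struct Ndc  { float x, y, z; };     // normalized device coordinates

// The perspective divide: points inside the frustum land in [-1, +1].
Ndc PerspectiveDivide(const Clip& c) {
    return { c.x / c.w, c.y / c.w, c.z / c.w };
}
```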


Cameras
To navigate the viewport in 3D space, I had to create a camera representation that acts like a rigid body, with parameters like position, velocity, angular velocity, and so on. The camera can also be instantiated with a field of view, near plane and far plane z values, and an aspect ratio. Using the orientation and position, we calculate the world-to-camera transform, and the other parameters like the field of view and the near and far planes are used to calculate the camera-to-projected transform. The game code then submits these two transforms to the graphics code for each frame. To use the same transforms in my shader file, I had to write a macro for platform-independent multiplication of a matrix with a vector.
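A sketch of building the camera-to-projected transform from those camera parameters. This uses the OpenGL-style convention (column vectors, z mapped to -1 to +1); D3D conventions differ slightly, and the function name is my assumption:

```cpp
#include <cmath>

struct Mat4 { float m[4][4]; };

// Build a perspective projection from the vertical field of view (radians),
// the aspect ratio, and the near/far plane z values.
Mat4 MakeCameraToProjected(float fovRadians, float aspect, float zNear, float zFar) {
    const float f = 1.0f / std::tan(fovRadians * 0.5f);
    Mat4 r = {};  // all zeros
    r.m[0][0] = f / aspect;
    r.m[1][1] = f;
    r.m[2][2] = (zFar + zNear) / (zNear - zFar);
    r.m[2][3] = (2.0f * zFar * zNear) / (zNear - zFar);
    r.m[3][2] = -1.0f;  // copies -z into w for the perspective divide
    return r;
}
```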
[Screenshot: the platform-independent macro for multiplying a matrix and a vector]
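A sketch of what such a macro could look like; the platform guard names (EAE6320_PLATFORM_D3D / EAE6320_PLATFORM_GL) and the macro name are my assumptions for illustration:

```
// Platform-independent matrix * vector multiplication for shader code.
#if defined( EAE6320_PLATFORM_D3D )
    // HLSL: the mul() intrinsic multiplies a matrix by a vector
    #define MultiplyMatrixVector( i_matrix, i_vector ) mul( i_matrix, i_vector )
#elif defined( EAE6320_PLATFORM_GL )
    // GLSL: the * operator does matrix * vector directly
    #define MultiplyMatrixVector( i_matrix, i_vector ) ( i_matrix * i_vector )
#endif
```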


3D Mesh
One of the challenges I had while defining the indices for the vertex points was calculating the winding order for all the sides of the cube. I started by getting a basic square up on screen using our usual winding order pattern, and I was able to generate a second square at z = -1 using the same winding order. I then used trial and error to understand how the winding order worked on the remaining sides of the cube, adding one triangle at a time.
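The key insight from that trial and error is that opposite faces of the cube need opposite index orders so both point outward. A sketch of the two squares described above, sharing 8 vertices (the exact positions and the D3D clockwise-is-front-facing convention are my assumptions):

```cpp
// 8 shared cube corners: front face at z = 0, back face at z = -1.
static const float cubeVertices[8][3] = {
    { 0, 0,  0 }, { 1, 0,  0 }, { 1, 1,  0 }, { 0, 1,  0 },   // front
    { 0, 0, -1 }, { 1, 0, -1 }, { 1, 1, -1 }, { 0, 1, -1 },   // back
};

// Two triangles per face; the back face reverses the order used by the
// front face so that both wind clockwise when seen from outside the cube.
static const unsigned cubeFaceIndices[12] = {
    0, 2, 1,  0, 3, 2,   // front face, viewed from +z
    4, 5, 6,  4, 6, 7,   // back face, viewed from -z
};
```

The remaining four sides follow the same rule: pick each face's indices so its triangles wind clockwise when viewed from outside.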


Depth Buffer
I ran into a weird issue where the back side of a cube would still be visible on top of its front side. This was due to the way the graphics system handled image depth coordinates. By enabling the depth buffer, I could tell the graphics system to handle the z-buffering, i.e., the management of image depth coordinates in 3D graphics. This resolved my layering issue, and I was able to render a cube on a plane, and between planes, with correct z layering.
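What enabling the depth buffer asks the GPU to do can be sketched in a few lines of software: keep the nearest depth seen so far per pixel, and reject any fragment that lies behind it. A minimal, hypothetical sketch of that test:

```cpp
#include <cstddef>
#include <vector>

struct DepthBuffer {
    std::vector<float> depth;

    // Every pixel starts at the far plane (depth 1.0 after projection).
    explicit DepthBuffer(std::size_t pixelCount) : depth(pixelCount, 1.0f) {}

    // Returns true (and records the new depth) if this fragment is nearer
    // than anything drawn at that pixel so far, i.e. it passes the depth test.
    bool TestAndWrite(std::size_t pixel, float z) {
        if (z < depth[pixel]) { depth[pixel] = z; return true; }
        return false;
    }
};
```

With this enabled, the back face of the cube fails the test wherever the front face has already written a nearer depth, which is exactly what fixed the layering issue.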


To make my debugging easier, I started with the optional challenge of having my camera move along the 3D axes and rotate about its local origin. This allowed me to go to each side of the cube and inspect my rendering pattern based on the indices. It reminded me of the quadcopter in GTA Liberty City, and I tried giving the camera similar controls.
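The quadcopter-style movement boils down to accelerating along the camera's local forward axis, which is derived from its rotation. A sketch using just yaw (rotation about the world up axis); all names and the looks-down-negative-z convention are assumptions:

```cpp
#include <cmath>

struct Camera { float x = 0, y = 0, z = 0; float yaw = 0; };

// Forward vector for a camera that looks down -z when yaw == 0.
void Forward(const Camera& c, float& fx, float& fz) {
    fx = -std::sin(c.yaw);
    fz = -std::cos(c.yaw);
}

// W/S map to positive/negative amounts along this local forward axis,
// so the camera always flies toward whatever it is looking at.
void AccelerateForward(Camera& c, float amount) {
    float fx, fz;
    Forward(c, fx, fz);
    c.x += fx * amount;
    c.z += fz * amount;
}
```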


Screenshots
[Game Start]


[Screenshot after cube and camera is moved]
[Screenshot showing the depth buffer in action]

[More action with camera position and rotation]
Controls
Game Object:
Right Arrow - Move the 3D game object to the right.
Left Arrow - Move the 3D game object to the left.
Up Arrow - Move the 3D game object upwards.
Down Arrow - Move the 3D game object downwards.


Camera:
W - Forward Acceleration
S - Backward Acceleration
A - Rotate towards the left side
D - Rotate towards the right side
Q - Rotate Upwards
E - Rotate Downwards

