Sunday, October 2, 2011

Writing a 2D Library On Top of OpenGL 3.x Practices

Having straightened out MyGL this weekend to behave far more transparently with the underlying OpenGL state machine (effectively creating an object-oriented OpenGL 3.x filter), I now have to take on the challenge of actually writing easy-to-use code bases on top of it. The first issue I'm going to tackle is two-dimensional rendering. I don't think OpenGL gives 2D enough love by default. Look at an API like DirectX: it has a custom sprite rendering component that provides accelerated blitting operations right alongside traditional 3D rendering techniques. Although it's closed-source, I have to imagine that under the hood this sprite rendering system uses much the same hardware techniques as the rest of the API. There's no reason this can't be done with OpenGL, except that OpenGL chooses not to include anything like it. My own opinion is that OpenGL is trying to be as low-level and "simple" as it can be, and as a result the idea of an additional layer of indirection from the hardware that a sprite-rendering system would provide doesn't quite feel right.
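
To be concrete about what such a layer boils down to, here's a minimal sketch of a sprite "blit" as a textured quad on core-profile OpenGL. The Sprite struct and the assumption of a shader with attributes at locations 0 and 1 (and GLEW for loading) are mine for illustration, not anything in MyGL yet:

    // Sketch: a sprite is just a textured quad drawn from a small VBO/VAO.
    #include <GL/glew.h>

    struct Sprite {
        GLuint texture;   // 2D texture holding the sprite image
        GLuint vbo;       // interleaved position (x,y) + texcoord (u,v)
        GLuint vao;
    };

    // Build a unit quad once; per-sprite position/scale would come from a uniform.
    void InitSprite(Sprite& s, GLuint texture) {
        const GLfloat quad[] = {
            // x,    y,    u,    v
            0.0f, 0.0f, 0.0f, 0.0f,
            1.0f, 0.0f, 1.0f, 0.0f,
            0.0f, 1.0f, 0.0f, 1.0f,
            1.0f, 1.0f, 1.0f, 1.0f,
        };
        s.texture = texture;
        glGenVertexArrays(1, &s.vao);
        glBindVertexArray(s.vao);
        glGenBuffers(1, &s.vbo);
        glBindBuffer(GL_ARRAY_BUFFER, s.vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);   // position (assumed shader location 0)
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)0);
        glEnableVertexAttribArray(1);   // texcoord (assumed shader location 1)
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat),
                              (void*)(2 * sizeof(GLfloat)));
    }

    void DrawSprite(const Sprite& s) {
        glBindTexture(GL_TEXTURE_2D, s.texture);
        glBindVertexArray(s.vao);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }

Batching many of these into one buffer is where the real "sprite system" work would be, but the hardware path is exactly the same as any other textured geometry.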

Tuesday, September 27, 2011

MyGL

Part of The Rewrite Engine is a pseudo-abstraction layer that sits on top of OpenGL called MyGL. I originally wrote it as a fully namespaced library for software rendering within my Computer Graphics 1 course on rasterization theory and implementation. In that original form MyGL was essentially a hybrid of practices found within both Deprecated-OpenGL and DirectX. The overall library was a state machine much like OpenGL, having matrix stacks as well as a dedicated vertex buffer, color specification, etc. The DirectX likeness came in the form of object-oriented practices being employed to manage this state. Vertices themselves were objects containing various state and helper functions. Colors were specified per-vertex, model transformations resided with a specific "Model", and the view and projection matrices were part of an encapsulating Camera class that provided additional levels of abstraction and functionality for implementing different cameras.
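
For a feel of that design, here's a rough sketch of the kind of object-oriented state management described above. The names and members are illustrative guesses, not the real MyGL interfaces:

    // Sketch only: the shape of the old MyGL design, not its actual classes.
    #include <vector>

    struct Vec3D   { float x, y, z; };
    struct Matrix4 { float m[16]; /* helpers omitted */ };

    struct Vertex {
        Vec3D position;
        Vec3D color;        // per-vertex color, as in the original library
    };

    class Model {
    public:
        std::vector<Vertex> vertices;
        Matrix4 transform;  // the model transformation lives with the model itself
    };

    class Camera {
    public:
        Matrix4 view;        // view and projection are encapsulated here, so
        Matrix4 projection;  // different camera types can build them differently
        virtual void Update() = 0;
    };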

Friday, September 9, 2011

The Rewrite To End All Rewrites

So I've had this huge problem for the past 5 years or so that I've been writing programs where I just can't close the deal on a piece of code or subsystem to save my life. Whenever I spend a significant amount of time working on a feature, I inevitably go back almost immediately to rewrite the entire thing, convinced that it just wasn't right! This was much more of a problem prior to entering college, when my programming work consisted of nothing but schedule-less projects that went on for an eternity. Even after school and eventually life made me realize that I couldn't afford to rewrite things 500 times, I still feel that the problem as a whole persists. I've got a few decent projects under my belt (namely my ray-marcher VolTex that I've mentioned before, and another project from last year that I'm sure I'll write about eventually) and at my current job I can get things done ahead of time, but all in all I think that when the schedule and the project are left entirely up to me I fail to deliver at any reasonable pace due to my previously mentioned hindrance.

Thursday, June 23, 2011

Particles Update

Been a while. Quick outline of what I did since last time:
  • Added math to the base processor so that particles can be drawn toward or expelled from a central gravity well (see the sketch after this list).
  • Finished adding support for custom shaders (no more COMPILING after shader modifications!!)
  • Added more attributes and exposed them to shaders. Currently color is calculated based on each particle's age and velocity; size is calculated likewise.
  • Figured out how to turn on anti-aliased (circular) GL_POINTS
  • Turned on alpha blending
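
Here's a rough sketch of the gravity-well update and the point rendering state mentioned above; the particle layout and constants are assumptions for illustration, not the actual code:

    // Sketch: pull each particle toward (or push it away from) a central well.
    #include <GL/glew.h>
    #include <cmath>

    struct Particle { float px, py, pz, vx, vy, vz, age; };

    void ApplyGravityWell(Particle& p, float wellX, float wellY, float wellZ,
                          float strength, float dt) {
        float dx = wellX - p.px, dy = wellY - p.py, dz = wellZ - p.pz;
        float distSq = dx * dx + dy * dy + dz * dz + 0.0001f;  // avoid divide-by-zero
        float inv = strength / (distSq * std::sqrt(distSq));   // 1/r^2 falloff on the normalized direction
        p.vx += dx * inv * dt;   // a negative strength expels instead of attracts
        p.vy += dy * inv * dt;
        p.vz += dz * inv * dt;
        p.px += p.vx * dt; p.py += p.vy * dt; p.pz += p.vz * dt;
    }

    // Round, blended points (legacy GL state, matching what these posts use).
    void SetupPointState() {
        glEnable(GL_POINT_SMOOTH);                          // circular points
        glEnable(GL_BLEND);                                 // alpha blending
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
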
So not too much in all actuality...but it felt like a lot. Bunch'a pictures as well:



Thursday, June 16, 2011

Particle Madness!

Just a bunch of screens I grabbed today. Not much changed except for an optimization I made that doubled my frame rates. I was just sick of looking at 3 blocky particles.



500 Particles

VBOs and Vertex Array Objects

So I decided on using Vertex Buffer Objects (VBOs) to store the state information for each particle in my system. This should give me both a speed advantage and a simplicity advantage in the long run. I'd originally considered something akin to a complex Particle object to store information about each particle, with siblings in a linked list of some sort. The downside to this is that the CPU cache is not really optimized (read: at all) for things like linked lists where we're jumping all over the place between objects. A static (or in my case, semi-static) linear allocation of memory is far more efficient and ultimately makes things like multithreading a lot easier in the future.
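
As a rough illustration of that layout (the attribute names and sizes here are assumptions, not the real thing), the per-particle state ends up as one flat, interleaved buffer:

    // Sketch: one interleaved VBO holding all particle state, re-uploaded each frame.
    #include <GL/glew.h>
    #include <vector>

    struct ParticleVertex {
        float pos[3];
        float vel[3];
        float age;
    };

    GLuint CreateParticleVBO(const std::vector<ParticleVertex>& particles) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // GL_DYNAMIC_DRAW because the CPU rewrites this data every frame.
        glBufferData(GL_ARRAY_BUFFER,
                     particles.size() * sizeof(ParticleVertex),
                     &particles[0], GL_DYNAMIC_DRAW);
        return vbo;
    }

    // Each frame: update the particles in the linear array, then re-upload.
    void UploadParticles(GLuint vbo, const std::vector<ParticleVertex>& particles) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        particles.size() * sizeof(ParticleVertex), &particles[0]);
    }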

Sunday, June 12, 2011

Starting a new project

Might as well keep using this to document my work on various things since it's already set up...


Currently working on a framework for particle systems. The basic idea I'm following is that particles can be done either all on the GPU, or on the CPU with a bit of shader-based fanciness thrown into the mix to spice things up. After a lot of thought on the subject I've decided to, at least for the time being, go with door number 2. I'd like to design the system in such a way that individual packets of information representing each particle can be streamed through a "processor" of sorts, which would churn through the data and write it back each frame with updates. Designing these "brains", as I've been referring to them in my head, as singleton classes should aid in my ability to thread them and should also yield general speed boosts if all is done correctly.
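
Here's a rough sketch of the kind of "processor" interface I have in mind; the names and the exact packet layout are placeholders rather than settled design:

    // Sketch of the stream-through "processor" idea: a brain that reads a flat
    // array of particle packets and writes updates back in place each frame.
    #include <cstddef>

    struct ParticlePacket {
        float pos[3];
        float vel[3];
        float age;
    };

    class ParticleProcessor {
    public:
        // Singleton access so every emitter shares the same brain.
        static ParticleProcessor& Instance() {
            static ParticleProcessor instance;
            return instance;
        }

        // Churn through the packets once per frame; trivially splittable across
        // threads later, since each packet is independent of the others.
        void Process(ParticlePacket* packets, std::size_t count, float dt) {
            for (std::size_t i = 0; i < count; ++i) {
                packets[i].pos[0] += packets[i].vel[0] * dt;
                packets[i].pos[1] += packets[i].vel[1] * dt;
                packets[i].pos[2] += packets[i].vel[2] * dt;
                packets[i].age    += dt;
            }
        }

    private:
        ParticleProcessor() {}
    };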

Sunday, May 15, 2011

CG2: Final Project

Okay so VolTex is finished. This is my final post on it. Just submitting some info.

Report

Visualizing MRI Data through Volumetric Ray-marching

Download (includes two datasets)

Sunday, May 1, 2011

Renderman Assignment

Had to produce two images, one modifying the given RIB file to give the 3 supplied shaders new instance variables:



 

Another modifying three downloaded shaders of our choosing:



 

I chose to give my sphere hair, use a cinder block shader that I butchered into looking like some sort of bad programmer art, and tweak a marble shader until it looked like oil.

Monday, April 25, 2011

CG2: Assignment 6

Refraction:





Total Internal Reflection kind of looks bad in this shot since my background is black. But it's there, and that's been my basic scene so I thought I wouldn't change it.

Here's a spruced up scene so you can see the TIR more clearly:





I'll update when I finish the extra.



UPDATE:

Extra...



I missed class and don't know if how to do this was discussed, but essentially what I did was start with a light transparency of 100% for each shadow ray. Each time an object was hit on the ray's way from a point on an object to a light source, the light transparency factor was multiplied by the transparency of the object in between. After accumulating all of these, I multiply the final shading for that point by the result to get the effect of being in some form of shadow.
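
A minimal sketch of that accumulation (types and names here are stand-ins, not my actual classes):

    // Sketch: attenuate a shadow ray by the transparency of everything it passes
    // through on the way to the light, instead of treating any hit as full shadow.
    #include <vector>
    #include <cstddef>

    struct ShadowHit { float distance; float transparency; };

    // Given every intersection between the shaded point and the light (each with
    // the blocker's transparency, 0 = opaque, 1 = clear), compute surviving light.
    float ShadowAttenuation(const std::vector<ShadowHit>& hits, float distToLight) {
        float lightTransparency = 1.0f;             // start at 100%
        for (std::size_t i = 0; i < hits.size(); ++i) {
            if (hits[i].distance < distToLight)      // only blockers in front of the light
                lightTransparency *= hits[i].transparency;
        }
        return lightTransparency;                    // multiply the final shading by this
    }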

Wednesday, April 13, 2011

CG2: Assignment 5

This one was easy. I'd already set up my ray tracer to deal with recursion, but hadn't had a use for it so it did nothing. I'd also already written a function CalculateReflection(Vec3D toSource, Vec3D surfaceNormal) that does exactly what is necessary to reflect a ray off of a surface with a given normal. With this at hand and my framework already set up for recursive tracing, this literally amounted to little more than the addition of an if statement.
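
For reference, the standard reflection formula that a function like CalculateReflection presumably implements; this body is my own sketch (and assumes the Vec3D type supports the usual operators), not the actual code:

    // Reflect the (normalized) direction back toward the ray's source about the
    // surface normal:  R = 2 * (N . L) * N - L
    Math::Vec3D CalculateReflection(Math::Vec3D toSource, Math::Vec3D surfaceNormal)
    {
        float nDotL = Math::Dot(surfaceNormal, toSource);
        return surfaceNormal * (2.0f * nDotL) - toSource;
    }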



Note that this was done with 4x supersampling (16 samples per pixel).



The only real interesting snag I hit was when I briefly didn't realize that using the previous ray's point of intersection as the origin for the reflection ray wouldn't work. For obvious reasons, the reflection ray will sometimes decide that the closest intersecting object is the very object it's reflecting off of. Adding an extremely small EPSILON value of 0.00001, in the direction of the outgoing ray, to the reflection ray's origin solved this problem.
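
In code, the fix amounts to something like this (a sketch; hitPoint stands for the intersection point and toSource for the direction back toward the incoming ray's origin):

    // Nudge the reflection ray's origin off the surface so it can't immediately
    // re-intersect the object it just bounced off of.
    const float EPSILON = 0.00001f;
    Ray3D reflectedRay;
    reflectedRay.dir    = CalculateReflection(toSource, surfaceNormal);
    reflectedRay.origin = hitPoint + reflectedRay.dir * EPSILON;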


Extra #1


I got sick of the boring backdrop, so I added to the scene a bit now that there's a use for it with reflections:

This one has super sampling





Here's one without; you can see the randomness of the rays picked per cone much more clearly, but the picture isn't as nice.



These were taken with a +/- 5 degree spread for each cone.

Sunday, April 10, 2011

CG2: Midterm Project Update

Alright, some preliminaries...

 

Name: Michael Mayo

Professor: Warren R. Carithers

Course: 4003-571

Project Updates URL: http://mseanmayo.uni.cc/news/?cat=6

 

Now then. My original project proposal didn't quite adhere to the template provided for whatever reason (I think I just completely forgot it existed) so I'll have to improvise on the "schedule" I've been following so far.

 

Here's a quick run-down of the original objectives I had for this project, along with what I have completed or revised:

  • Volumetric Display of MRI data (Revised, see below)

  • Volumetric Density Mapping (Removed)

  • Three-Dimensional Object Segmentation (Unnecessary)


 

So yeah, it would APPEAR as if I had failed in my original plans for this project. However I can assure you anything BUT has occurred. A timeline was also absent from my original proposal, so I'll have to get right into the meat of what I've accomplished so far and then go from there.

 

What's been done


As is mentioned above, MRI data visualization has been revised while the other two core items have been either dropped or deemed unnecessary. What this boils down to is the realization I had after achieving initial results that there was more than enough work to be done with MRI visualization in general. I decided that rather than focus on developing interesting density equations to display interesting effects, or try to segment the volumes themselves, I'd focus all of my energy into developing as much visualization tech. as possible for MRI data. Before I go on, here are the results of such visualizations thus far:



 

As you can see, there are quite a number of ways to render the data. My initial attempts focused on a rudimentary technique based on the original "slices" of the images that make up the 3D texture. I wanted to draw each of them on 2D quads in 3D space with a bit of alpha blending between them in order to simulate the effect of peering into a 3D volume. I ran into problems with this approach, both because of aliasing effects when looking from the side (2D flat images are quite boring from the profile...) and because high polygon counts made the approach less than viable if I wanted to do any special post-processing or work with the data in any way. Here's an example of the aliasing effect I was encountering:



 

I started reading about a more advanced method for volume visualization known as ray marching, in which rays are fired into the volume and accumulate density values until an opacity threshold is met and the ray terminates. This method was daunting at first due to its complexity. I was worried that I didn't understand the theory well enough to implement it, and that once I did, I wouldn't be able to get it to render fast enough to be interactive (I had this quarter's experience with the ray tracing assignments to reflect on in that respect). However, after much frustration I DID manage to get it working with shaders, which gave me all the performance I would need. Pic related:
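
The core accumulation loop is small. Here it is sketched in C++ for clarity; the real thing lives in a fragment shader, and the sampling callback and step count below are placeholders:

    // Sketch of front-to-back ray marching: step along the ray, accumulate
    // color/opacity from the volume, and stop once the ray is effectively opaque.
    struct Color { float r, g, b, a; };

    // SampleVolume stands in for the 3D texture lookup (plus any transfer function).
    Color MarchRay(Color (*SampleVolume)(float x, float y, float z),
                   float ox, float oy, float oz,      // ray start (on the front face)
                   float dx, float dy, float dz,      // normalized ray direction
                   float length, int steps)
    {
        Color accum = { 0.0f, 0.0f, 0.0f, 0.0f };
        float stepSize = length / steps;
        for (int i = 0; i < steps && accum.a < 0.99f; ++i) {   // terminate when opaque
            float t = i * stepSize;
            Color s = SampleVolume(ox + dx * t, oy + dy * t, oz + dz * t);
            float w = s.a * (1.0f - accum.a);                  // front-to-back compositing
            accum.r += s.r * w;
            accum.g += s.g * w;
            accum.b += s.b * w;
            accum.a += w;
        }
        return accum;
    }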


 

There's tons more info about how this works and what I've done, but I'm not sure how much I'm supposed to include for this update. I've documented most of it in other posts, however, so here they are: link 1, link 2.

 

My general assessment of my own work on this project so far is very positive and optimistic. Initially, as the links above detail, I felt I was far too ambitious with this project. It took me quite a bit of time to get ray marching working correctly, and I had a few other crucial roadblocks in my progress. Overall I think I'm actually getting more done than I had originally envisioned, and faster than I had anticipated. I'm working on cleaning everything up at the moment and adding support for a few different file formats, and then I'll start adding features again and getting this ready for the presentation.

 

What's next


This sums up what I would like to get done for the final presentation, prioritized highest importance first:

  1. Implement Transfer Functions to allow the MRI data to be isolated as the user sees fit.

  2. Provide a User Interface of some sort

  3. Add support for more file formats (public MRI data comes in several)


 

Monday, April 11th marks the beginning of Week 6. I anticipate that transfer functions will be implemented by the start of Week 7, with a user interface of some sort and extra file format support added in the final Week 8 / Week 9 stretch before the presentation.

Saturday, April 9, 2011

CG2 Assignment 4

So we had to do procedural shading this time. Checkerboard texture was required so here it is:





The interesting bit about this is that I had a really hard time figuring out how to calculate UV coordinates for a quad. If you use triangles, you can apparently do some fancy stuff with barycentric coordinates to figure out UVs, but I couldn't really work that out, and my triangle intersection code generates parametric coordinates instead (I guess this is more efficient for intersection tests). So what I ended up doing instead was working out the math on my own using vector projections, as described on this Wikipedia page. Here's my method:

After I've got the intersection registered for the quad, I do the following calculations:

  1. Math::Point3D PoS = ray.origin + ray.dir*intersection.distanceFromEye;

  2. Math::Vec3D uvOffsets(PoS - myOrigin);

  3. intersection.u = Math::Dot(myEdge1, uvOffsets) / myEdge1.lengthSquared();

  4. intersection.v = Math::Dot(myEdge2, uvOffsets) / myEdge2.lengthSquared();

And that's it; intersection.u and intersection.v hold the UV coordinates. The cool thing is that the actual projection equation, which gives the components of the offset relative to the edges of the quad, only requires the dot product to be divided by the edge's length, but then to normalize the coordinates into u/v space I needed to divide by the length again. That means the two divides can be combined into a single divide by the edge's squared length, avoiding a costly square root calculation on both parts.
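
With UVs in hand, the required checkerboard itself is just a parity test. A quick sketch (the scale constant is an arbitrary choice):

    // Sketch: procedural checkerboard from UV coordinates.
    #include <cmath>

    // Returns true for "black" squares, false for "white" ones.
    bool CheckerAt(float u, float v, int squaresPerSide) {
        int iu = (int)std::floor(u * squaresPerSide);
        int iv = (int)std::floor(v * squaresPerSide);
        return ((iu + iv) % 2) != 0;   // alternate color on odd/even parity
    }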

I'll update this space as I finish the extras. I'm shooting for a mandelbrot as extra #1.


Update #1: Mandelbrot


Here it is:



Kinda hard to see, but it's there. I'm just drawing it on the floor quad which is why it's at a weird angle.
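
For anyone curious, the escape-time iteration behind that floor pattern is only a few lines. A sketch (the iteration cap and the mapping of UVs onto the complex plane are arbitrary choices here):

    // Sketch: classic escape-time Mandelbrot evaluated at a UV coordinate.
    int MandelbrotIterations(float u, float v, int maxIter) {
        // Map [0,1] UVs onto a window of the complex plane.
        float cr = u * 3.0f - 2.0f;     // real part in [-2, 1]
        float ci = v * 2.0f - 1.0f;     // imaginary part in [-1, 1]
        float zr = 0.0f, zi = 0.0f;
        int i = 0;
        while (i < maxIter && zr * zr + zi * zi < 4.0f) {
            float newZr = zr * zr - zi * zi + cr;
            zi = 2.0f * zr * zi + ci;
            zr = newZr;
            ++i;
        }
        return i;   // shade based on how quickly (or whether) the point escaped
    }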

Saturday, April 2, 2011

Project Update 2

Okay, lots of research has been done and I think I know where I'm going with this. First things first, an update on the original project description:

  • Volumetric display of MRI data: This has become the focus of my project due to the subject's complexity.

  • Volumetric density mapping: This has been all but discarded from the project. The actual accomplishment of volumetric rendering is more than enough on my plate.

  • Three-Dimensional object segmentation: This too has been put very far onto the back burner, for the same reasons as density mapping.

Now then, with that in mind, here's what I've learned:

First and foremost, my initial method of rendering slices is being thrown out. I've already posted on this matter once before so I'll leave the specifics there. Essentially the only way I could accomplish this well is if I use a shader to blend between slices, and at that point I might as well just try a different method.

Therefore, the logical path is to implement volume rendering via RAY MARCHING! HOORAH!!!! [/sarcasm]

But seriously, this is a pretty cool concept. I'm not going to go into the specifics here for now because I'm more than certain that I'll be writing a lot about it for my mid-quarter update soon. What's important now is that it's INCREDIBLY cool and INCREDIBLY complex, as most things in CG seem to be...



So that's the justification for the changes to the original project. I'm probably going to spend the majority of my time implementing ray marching successfully. However, once I DO do that, I'll be able to plug a lot of different things into the pipeline, like transfer functions, which will allow me to color my MRI scans to accurately match the human body. I'm very excited about this.



I'll be updating here with my progress implementing ray marching.


Sub-Update 1


Okay, so the research I've done has led me toward the method listed in GPU Gems Volume 3 (Link). The first thing that I have to do in this approach is be able to render the "back faces" of my cube. Thankfully, since I'm doing this in shaders, all of OpenGL's culling state still applies, and I can render the cube with glCullFace(GL_FRONT) in order to drop the front faces!


 

This is actually rendered with the first part of what will become my ray-marching fragment shader as well.

So yeah, that's done. Another update soon.


Sub-Update 2


Next up was making sure that I could "ray march" the front faces of the cube as well. I put that in quotes because no real marching is happening yet; what IS happening is that I'm managing to look up the texture correctly and pass it along to the ray marching algorithm:



That meant that I had the starting and ending positions of the ray encoded within both texture objects (the colors are actually XYZ positions) and I could calculate the ray directions, rendered below:





More to come soon.


Sub-Update 3


Success! Here's proof before I talk about the specifics...





So this is fully ray marched. The technique I was originally exploring actually fell through. I'm sure there are plenty of 3D gurus out there who can make it work, but that subset does not include me. The method I opted for sticks closer to the article I originally posted, and goes something like this (a rough code sketch follows the list):

  1. I render the back faces of the cube to a texture bound to a frame buffer object via glCullFace(GL_FRONT).

  2. I render the front faces of the cube to another texture bound to the frame buffer object via glCullFace(GL_BACK)

  3. I render a full-screen quad running my shader that makes the magic happen:

    1. I take the color values from the back-face texture, subtract the colors from the front-face texture, and interpret the result as position information; this gives me a start point, an end point, and a direction for each ray.

    2. At this same coordinate in the fullscreen quad, I step my rays through with the traditional method of ray marching and accumulate color information from the 3D texture representing the volume (this could be replaced with a density function as well)

  4. I then render this full-screen quad to the screen, and we get the final image!
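
Here's a compressed sketch of the CPU-side orchestration of those steps. The FBO/texture handles and the helper functions (DrawUnitCube, DrawFullScreenQuad) are placeholders for whatever the real code uses; the marching itself lives in the fragment shader:

    // Sketch of the two-pass setup driving the ray-marching shader.
    #include <GL/glew.h>

    void DrawUnitCube();        // assumed helper: cube whose vertex colors encode XYZ
    void DrawFullScreenQuad();  // assumed helper: screen-aligned quad

    void RenderVolume(GLuint fbo, GLuint backFaceTex, GLuint frontFaceTex,
                      GLuint volumeTex3D, GLuint rayMarchProgram)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glEnable(GL_CULL_FACE);

        // Pass 1: back faces of the bounding cube -> ray exit positions.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, backFaceTex, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_FRONT);
        DrawUnitCube();

        // Pass 2: front faces -> ray entry positions.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, frontFaceTex, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glCullFace(GL_BACK);
        DrawUnitCube();

        // Pass 3: full-screen quad; the fragment shader reads both textures,
        // derives start/end/direction per pixel, and marches the 3D texture.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glUseProgram(rayMarchProgram);
        glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, frontFaceTex);
        glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, backFaceTex);
        glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_3D, volumeTex3D);
        DrawFullScreenQuad();
        glUseProgram(0);
    }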

As luck would have it, this method was extremely easy to implement, but only came to me after DAYS of agonizing failure with the other methods.

Next up: Transfer functions, better camera control (I still can't zoom) and more MRI datasets!


Sub-Update 4


This is more of a sub-sub-update. I'm working on an all-purpose dataset loader, and I've got my hands on a few more volumes that I can render at this point. Also, I've been reading up on transfer functions and have been toying with those as well. Here's a rendering of a yellow foot with blue bones:
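
A transfer function in this context is just a lookup from sampled density to color and opacity. A minimal sketch; the thresholds and colors below are made up to echo the "yellow foot, blue bones" idea, not my actual values:

    // Sketch: map a raw density sample (0..1) to an RGBA color. In practice this
    // table would be baked into a 1D texture and sampled in the marching shader.
    struct RGBA { float r, g, b, a; };

    RGBA TransferFunction(float density) {
        RGBA out = { 0.0f, 0.0f, 0.0f, 0.0f };
        if (density < 0.1f) {
            // near-empty space: fully transparent
        } else if (density < 0.45f) {
            out.r = 1.0f; out.g = 0.9f; out.b = 0.2f; out.a = 0.05f;  // soft tissue: faint yellow
        } else {
            out.r = 0.2f; out.g = 0.3f; out.b = 1.0f; out.a = 0.6f;   // dense bone: strong blue
        }
        return out;
    }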



Also, a red skull:



Revised my blending equations, skull again:




Sub-Update 5


Didn't really follow the timeline I put in the midterm update. But I got a UI set up and I've managed to load some more advanced datasets (after a frustrating weekend).




Tuesday, March 29, 2011

Raytracer: Assignment 3

So we had to implement Phong illumination for this assignment. Here are the results:

From CG2 - Week 3



I've got the basics done. I don't think it's very efficient; I'll probably tweak some values and see what happens. Also, there were a couple of extras to be done, namely implementing Phong-Blinn illumination and multiple light sources, which I haven't done yet. That's next.


Update #1: Multiple light sources


Well, this actually wasn't any additional functionality; I just had to add another light to the scene. Gotta love modularity.

From CG2 - Week 3

The instructions were to make it obvious that there was more than one light in the scene. So I put two of them behind the scene. The red circles were added afterwards, and surround the two point light sources contributing to the scene.


Update #2: Phong-Blinn illumination


Well, just what it says. Here it is.

From CG2 - Week 3



I also implemented Cook-Torrance illumination just for the hell of it; not sure if I got it 100% right though:

From CG2 - Week 3

Note: I'm hard-coding the roughness at 1.0 and the refractive incidence at 0.0 since my ray tracer currently doesn't support refraction and I don't feel like getting look-ups for roughness values.
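
For the record, the Phong and Phong-Blinn specular terms these variants boil down to are only a few lines each. A sketch using the Math:: helpers from other posts (Math::Normalize and the vector operators are assumed; CalculateReflection is the helper mentioned in the Assignment 5 post):

    // L = direction to the light, V = direction to the viewer,
    // N = surface normal (all normalized).
    #include <cmath>

    float PhongSpecular(const Math::Vec3D& N, const Math::Vec3D& L,
                        const Math::Vec3D& V, float shininess) {
        Math::Vec3D R = CalculateReflection(L, N);     // mirror L about N
        float rDotV = Math::Dot(R, V);
        return rDotV > 0.0f ? std::pow(rDotV, shininess) : 0.0f;
    }

    float PhongBlinnSpecular(const Math::Vec3D& N, const Math::Vec3D& L,
                             const Math::Vec3D& V, float shininess) {
        Math::Vec3D H = Math::Normalize(L + V);        // halfway vector
        float nDotH = Math::Dot(N, H);
        return nDotH > 0.0f ? std::pow(nDotH, shininess) : 0.0f;
    }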

Monday, March 28, 2011

Project Update 1

This is more of an update for me, to gather my thoughts so far...

So I've started this project, and I'm realizing that there's a bit more to volumetric rendering than I had first thought. The general idea is really simple, but that's also what makes it so risky. Most of the reading I've done on the subject (it's actually tougher to find info on than one would think) points to the quick-and-dirty method, rendering via slices:

[caption id="" align="alignnone" width="500" caption="Source: http://charhut.info/files/cs280/volume1.png"][/caption]

So in the above picture you'd be rendering a crude sphere. Obviously aliasing is a massive concern here. But nevertheless I decided I'd try this approach first. My initial attempt was promising:


However there's that bit of aliasing in the right photo from looking at the slices from the "side", sort of like the demo pic above. It looks horrid, and will seriously interfere when I start using complex volumetric textures instead of a big green block. I thought adding alpha blending would remedy this, but while it made the block look cooler it actually made the aliasing more apparent...just smoother:

From Voltex



So this method wasn't going to work, or at least not the way I wanted it to. I also experimented with rendering three dimensions' worth of slices, oriented to each axis in turn. This DID result in a near-perfect rendering of the volumetric cube from all angles, but it also tanked my frame rate for obvious reasons. I threw the code into a display list and compiled it just to see, but it actually didn't help much. Even if the speed had been bearable, I wasn't using alpha blending when I was doing the 3-dimensional slice renderings, and I have a feeling the aliasing would have come back once I did...



So now I've got to try another route. I stumbled across some old PS3.0 demos on NVidia's site yesterday, and one of them happened to be on volumetric texture rendering. I had a peek at the demo code and saw that they use a different approach of ray-marching a solid cube in the pixel shader. It also runs extremely fast compared to my old-fashioned OpenGL code, and the aliasing is completely absent in all cases. All around it just seems like the right direction to go, although I'm very inexperienced with shaders. I'm going to try this over the next week or so and see if it gets me closer to what I'm aiming for.



On the subject of the MRI data: the one dataset I have in my possession is just a folder of JPEG images. I've managed to load these into my 3D texture volume using a great little C library called SOIL. Unfortunately, the fact that they are JPEG images means there's no alpha channel at all, so the renderings are pretty boring and not worth showing here. I haven't decided what I want to do with them in that case. One thought is to preprocess them with Photoshop's batch image actions and see if I can apply alpha to the images based on pixel contrast; this might work since they are grayscale, but I'm very inexperienced with Photoshop as well. Another idea is to implement color keying to blend out all of the black, but this would be a crude way to do it and would miss the finer details of the dataset. The option I was looking at prior to stumbling upon the ray-marching demo was to implement the conversion in a pixel shader, essentially doing what Photoshop would do for me on the fly. This still might be possible, especially since I'd be dealing with shaders anyway at that point, but I'd rather not over-complicate the shader code if I don't have to.
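
For the curious, stacking the slices into a 3D texture looks roughly like this. The filename pattern, dimensions, and the luminance-as-density choice are assumptions for illustration, not exactly what my loader does:

    // Sketch: load a stack of grayscale JPEG slices with SOIL and pack them
    // into a single GL_TEXTURE_3D (assumes every slice has the same dimensions).
    #include <GL/glew.h>
    #include <SOIL/SOIL.h>
    #include <cstdio>
    #include <vector>

    GLuint LoadSliceStack(const char* pattern /* e.g. "slices/head%03d.jpg" */,
                          int sliceCount)
    {
        std::vector<unsigned char> volume;
        int width = 0, height = 0, channels = 0;

        for (int i = 0; i < sliceCount; ++i) {
            char name[256];
            std::sprintf(name, pattern, i);
            unsigned char* img = SOIL_load_image(name, &width, &height,
                                                 &channels, SOIL_LOAD_L);
            if (!img) return 0;                        // bail if a slice is missing
            volume.insert(volume.end(), img, img + width * height);
            SOIL_free_image_data(img);
        }

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Single-channel volume; the shader can treat luminance as density/alpha.
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, width, height, sliceCount,
                     0, GL_LUMINANCE, GL_UNSIGNED_BYTE, &volume[0]);
        return tex;
    }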



As for other MRI datasets, I found a website absolutely FULL of them: Link

The only issue is that the license they are under requires them to be packaged in a special format that guarantees that whenever the data is distributed, the property rights are included. However, they also distribute an open-source program that can read and render the datasets in these files, meaning I can learn to open them myself. This is next, after I get my volumes rendering correctly and the initial MRI data drawing how I want it. I've had a look at the code for opening these special files and it seems pretty easy to implement, so it shouldn't be too much work. Plus, then I'll have MUCH clearer MRIs to work with!



On a final note: I initially intended to do procedural generation via volumetric density algorithms as stated in Ken Perlin's paper. This is currently on the back burner. I'm hoping that everything else goes smoothly and I get to it, but I'm starting to see that rendering the volumes efficiently can itself be a complicated process, and I'd rather get that working with some quality MRI data before I experiment with generating something of my own.

Tuesday, March 22, 2011

Raytracer Assignment 2

So I got the ray tracing working for at least non-recursive color picking the other day. Been waiting to put this up until I cleaned it up a bit. I've learned a lot on this assignment aside from how to implement the ray tracer itself:

  1. 90% of the time, if the algorithm is showing anything at all, you're 99% correct and one little bug is holding you back.

  2. Never assume you know math by heart. I've taken and easily aced all levels of calculus, differential equations, matrix algebra, and multi-variable calculus; yet the source of almost a solid day's worth of frustration on this assignment this weekend was me misplacing a parenthesis in the quadratic formula (a quick sketch of where that formula shows up follows this list).

  3. These things are slow. When I first got everything rendering this weekend, I was looking at an average of 7 seconds per frame. That skyrocketed to over half a minute for a simple 2x2 multi-sampling technique, with the cost climbing rapidly as the complexity rose. I've profiled my C++ code using a fantastic utility called Very Sleepy, which picks up on the debugging symbols that Visual Studio embeds in my program and can profile individual lines of code against up-to-date source every time. This helped me narrow down the problematic aspects of my renderer and get that 7 seconds per frame down to a more tolerable 2 seconds. More on this later.
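
Since that quadratic formula cost me a day, here's roughly where it shows up in ray-sphere intersection. This is a sketch with a flattened signature, not my actual class layout:

    // Sketch: ray O + t*D (D normalized) against a sphere with center C, radius r.
    // Returns true and the nearest positive t on a hit.
    #include <cmath>

    bool RaySphere(float ox, float oy, float oz, float dx, float dy, float dz,
                   float cx, float cy, float cz, float r, float& tOut)
    {
        float lx = ox - cx, ly = oy - cy, lz = oz - cz;
        float b = 2.0f * (dx * lx + dy * ly + dz * lz);
        float c = lx * lx + ly * ly + lz * lz - r * r;     // a == 1 since D is normalized
        float disc = b * b - 4.0f * c;                     // the easily-botched part
        if (disc < 0.0f) return false;
        float sqrtDisc = std::sqrt(disc);
        float t0 = (-b - sqrtDisc) * 0.5f;                 // (-b +/- sqrt(disc)) / 2a, a = 1
        float t1 = (-b + sqrtDisc) * 0.5f;
        tOut = (t0 > 0.0f) ? t0 : t1;
        return tOut > 0.0f;
    }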



So anyways yeah, it works and it renders the scene as it's supposed to:

From CG2 - Week 2



Once I could stand the wait time for multi-sampled renderings to process, I snapped one of those too at 4x4 = 16 samples per pixel:

From CG2 - Week 2

The results of the 16x multi-sampling look good, but the ray algorithm performs really slowly. I could probably improve it a bit, but I don't think it's worth my time at the moment, and it's probably not the largest bottleneck in the system still.



In fact, this is another important thing I learned during this. I strove for an object-oriented approach to this whole thing, and unfortunately it turns out that all of that overhead the C fans rant about actually matters with something as CPU-intensive as a ray tracer. I actually believe this is the source of much if not all of my low-performance problems. Multiple levels of data abstraction and hierarchical class design make some operations much slower than they could be. In my efforts to get this down to where it's at now time-wise, I already reduced my Ray3D class to a simple struct with public members. I also dropped STL iterator functionality from the scene's intersection code, since Very Sleepy claimed that iterator incrementing was huge. That was probably the biggest boost in speed that I got overall.
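
For a sense of what those two changes look like (a sketch, not the actual mRay code):

    // Sketch: the "de-abstracted" hot path. Ray3D reduced to a plain struct,
    // and the scene loop switched from STL iterators to plain indexing.
    #include <vector>
    #include <cstddef>

    struct Ray3D {           // no methods, no inheritance: just data
        float origin[3];
        float dir[3];
    };

    struct SceneObject { /* geometry + material, details omitted */ };

    void IntersectScene(const std::vector<SceneObject*>& objects, const Ray3D& ray)
    {
        // Before: for (std::vector<SceneObject*>::const_iterator it = objects.begin(); ...)
        // After: a plain index loop, which profiled measurably faster here.
        for (std::size_t i = 0; i < objects.size(); ++i) {
            // objects[i]->Intersect(ray, ...);   // per-object intersection test
        }
    }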



Anyways, I'm a little early on this assignment; I think I've got roughly a week until it's due. I plan to work on my project a bit during that time, but if that's looking good by later in the week I just might go back and re-implement parts of mRay's pipeline in straight C. Another possibility I'm entertaining is OpenCL. I'm gonna do a bit of research on that, since if I'm using straight C I might as well do something cool with it. We'll see.

Monday, March 14, 2011

CG2: Project Proposal

For my independent project, I plan to do some experimental work with Volumetric Textures (Hypertextures) based loosely on Ken Perlin's work. Specifically I would like to experiment with using hypertextures as a means for accurately displaying volumetric data in various forms. Some of the things I would like to do include:

 

Volumetric display of MRI data


I have in my possession a series of 280 MRI "slices" of a human head. These are completely anonymous and were obtained from Professor Vallino in the SE dept. during a project last year. For that assignment I had to create multi-dimensional reconstructions of the MRI data (specifically from the sagittal and coronal planes) from the existing top-down slices. I think it would be very interesting and rewarding to use these images to construct a 3D representation of the data.

 
A screenshot from my original work with the image reconstructions:



 

Volumetric density mapping


In Ken Perlin's paper, he talks about modeling "soft" distributions of data for objects, and in class we used the analogy of the internals of a peach to discuss this. I would like to do something similar, modeling either the structure of a peach or the structure of some other internally-complex object.

 

Three-dimensional Object segmentation


On the above two points, I've begun thinking about the possibilities of "splitting" these hypertextures to simulate fracturing the objects they represent. Using OpenGL's 3D texture support, I believe this would be possible for me to do without overwhelming myself. The support for mapping texture coordinates to vertices should allow me to initially "break" the objects along planes that I construct based on mouse clicking and the camera view. With a little bit of work, I think this would let me interactively peer inside of the objects that I create.

 

Hopefully this is all adequate, and you don't think it is too much or too little work. I'm very interested in all three of these ideas, and I think they'll provide me with enough work over the quarter to keep me busy in between assignments. Plus, the end results SHOULD be very cool!

Thursday, March 10, 2011

CG2: Week 1 Assignment

Setting the scene:


From CG2 - Week 1

Note: The dark artifact on the smaller sphere I believe is a result of using a combination of OpenGL's alpha blending and built-in lighting. I could have played with the blending settings and probably gotten rid of it, or at least done all the lighting with shaders, but as it wasn't required I just let it be.

Details


Camera

Position: (0.0, 0.0, 0.0)

Lookat: (0.0, 0.0, 1.0)

Up Vector: (0.0, 1.0, 0.0)

Floor

Front-left Corner: (-1.0, -0.5, 1.0)

Back-right Corner: (1.0, -0.5, -5.0)



Back Sphere (smaller)

Position: (-0.35, -0.2, -2.0)

Radius: 0.25



Front Sphere (larger)

Position: (0.0, 0.0, -1.6)

Radius: 0.30



Light

Position: (3.0, 3.0, -10.0)

Ambient Color: (0.0, 0.0, 0.0)

Diffuse Color: (1.0, 1.0, 1.0)

Specular Color: (1.0, 1.0, 1.0)