
This is the static archive of the Massassi Forums. The forums are closed indefinitely. Thanks for all the memories!


"View" counts are as of the day the forums were archived, and will no longer increase.

3d game engine tech
2005-01-18, 12:41 PM #1
This is going to be kind of a rant, but not really. I have questions, so any programmer feel free to speak up!

With the advances in technology, games are getting more and more complex and require faster processors to run. Along the same lines, the tools are harder to use. Nowadays you need software worth thousands of dollars just to make a game character, and as for the levels... level editing ain't what it used to be.

I used to edit Doom 2 and then, later on, Duke Nukem 3D. If you've ever used the Duke3D level editor, you probably know where this is going. What you did was draw your sector in the map view, adding verts as you saw fit; then, with the press of a key, you were in an in-game-like environment where you could raise and lower the ceiling or floor, apply a slope, add decals, etc. Everything you would normally do in the orthographic views of, say, GtkRadiant, you did in the full-screen 3D view. It was intuitive and EASY. Brush editing just isn't as fast and easy, or even more powerful (the only limits were those of the game engine itself).

Games today bake their lighting into lightmaps. It looks pretty, but dynamic lighting is pretty much out the window. To me it seems almost like a step back (not graphically, but tech-wise). Doom 3's lighting is insanely processor intensive. My idea? Use some sort of LOD system that subdivides surfaces based on distance to the player. You know, keep the already-fast vertex lighting updated in real time.
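Something like this is what I'm picturing; just a rough sketch, with every name and threshold made up:

Code:
// Rough sketch of the idea: pick a subdivision level per surface from the
// camera distance, so nearby surfaces get enough verts for vertex lighting
// to look smooth. All names and thresholds here are made up.
#include <algorithm>
#include <cmath>

int SubdivisionLevel(float distanceToPlayer)
{
    // Full detail within 4 units, then drop one level every time the
    // distance doubles, bottoming out at level 0 for far-away surfaces.
    float halvings = std::log2(std::max(distanceToPlayer, 4.0f) / 4.0f);
    return std::clamp(4 - static_cast<int>(halvings), 0, 4);
}

You'd only re-subdivide a surface when it crosses one of those distance bands, so the cost should stay manageable.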

How come no game engines use vector-based images for... well, anything? You could use them for text, or particles, or even textures. I know for a fact they render faster than bitmaps (depending on complexity, of course), and you get the added bonus of infinite scaling with no jaggies.

Ah, if only I was a programmer :o
Jedi Knight Enhanced
Freelance Illustrator
2005-01-18, 12:46 PM #2
Oh yes, BUILD used to rock so damn much.

Now it's only UnrealEd.
Star Wars: TODOA | DXN - Deus Ex: Nihilum
2005-01-18, 12:47 PM #3
So become one, and build your own engine that 0wnz all
"Nulla tenaci invia est via"
2005-01-18, 12:48 PM #4
What we need is more engines with real-time environment damage, like in Red Faction. Combine that with the Havok physics engine.... now that would be quite fun.
Stuff
2005-01-18, 12:50 PM #5
and then your processor explodes within a nanosecond of the first in-game explosion
eat right, exercise, die anyway
2005-01-18, 12:54 PM #6
Quote:
Originally posted by kyle90
What we need is more engines with real-time environment damage, like in Red Faction. Combine that with the Havok physics engine.... now that would be quite fun.


I always wanted to see some sort of gore enhancement based on Geomod.

:P
2005-01-18, 12:57 PM #7
Quote:
Originally posted by kyle90
What we need is more engines with real-time environment damage, like in Red Faction. Combine that with the Havok physics engine.... now that would be quite fun.


That engine had its limitations, though. You could only go so far, and there were some pillars that were indestructible.
the idiot is the person who follows the idiot and your not following me your insulting me your following the path of a idiot so that makes you the idiot - LC Tusken
2005-01-18, 12:57 PM #8
^idea added to things to put into a game if i ever was magically given control over a full dev team
eat right, exercise, die anyway
2005-01-18, 12:58 PM #9
One thing I always thought would be cool would be to have "architecture maps", which contain geometry information as well as shaders. It would have a rigorous LOD system, but the basic idea is, you just apply this shader-like map to a flat surface, and depending on the map, it will then have the architecture of a brick wall in-game or something. Actual TEXTURES, not just visual information.
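Roughly what I mean, as a made-up sketch (GetHeight() just stands in for whatever the architecture map stores):

Code:
// Made-up sketch: turn one flat quad into a grid of real vertices pushed
// out by whatever height the "architecture map" stores at each spot.
// GetHeight() is hypothetical.
#include <vector>

struct Vec3 { float x, y, z; };

void BuildWall(int gridSize, float wallSize, float heightScale,
               float (*GetHeight)(float u, float v),
               std::vector<Vec3>& outVerts)
{
    for (int j = 0; j <= gridSize; ++j)
        for (int i = 0; i <= gridSize; ++i)
        {
            float u = float(i) / gridSize;
            float v = float(j) / gridSize;
            // A flat wall in the XY plane, displaced along +Z by the map.
            outVerts.push_back({ u * wallSize,
                                 v * wallSize,
                                 GetHeight(u, v) * heightScale });
        }
    // The LOD system would pick gridSize from the distance to the player.
}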
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-18, 1:35 PM #10
So like.. a normal map, but it applies actual geometry instead of faking it? Or am I getting this?
Jedi Knight Enhanced
Freelance Illustrator
2005-01-18, 2:00 PM #11
Quote:
Originally posted by kyle90
What we need is more engines with real-time environment damage, like in Red Faction. Combine that with the Havok physics engine.... now that would be quite fun.


Red Faction's Geomodding was awesome, but I found it to be silly in many instances. Like how you were able to drill a hole into the ground with missiles as far as you liked. While I suppose it's not entirely unrealistic, I definitely found it ridiculous that I could do such a thing.

It did make for a great boredom killer though... Drill holes into a wall, then drill a tunnel, and try and get to the roof of the level.
"In the beginning, the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move." - Douglas Adams
Are you finding Ling-Ling's head?
Last Stand
2005-01-18, 3:43 PM #12
Eh...brush editing is considerably more powerful. Especially if you can combine positive and negative space (Unreal?).

If you want fast dynamic lighting with soft shadows, wait a few months until STALKER comes out.
Bassoon, n. A brazen instrument into which a fool blows out his brains.
2005-01-18, 4:23 PM #13
Quote:
Originally posted by Freelancer
One thing I always thought would be cool would be to have "architecture maps", which contain geometry information as well as shaders. It would have a rigorous LOD system, but the basic idea is, you just apply this shader-like map to a flat surface, and depending on the map, it will then have the architecture of a brick wall in-game or something. Actual TEXTURES, not just visual information.


Sounds rather like the displacement mapping in the Unreal 3 engine.
Detty. Professional Expert.
Flickr Twitter
2005-01-18, 5:08 PM #14
Quote:
Originally posted by Shred18
This is going to be kind of a rant, but not really. I have questions, so any programmer feel free to speak up!

With the advances in technology, games are getting more and more complex and require faster processors to run. Along the same lines, the tools are harder to use. Nowadays you need software worth thousands of dollars just to make a game character, and as for the levels... level editing ain't what it used to be.

I used to edit Doom 2 and then, later on, Duke Nukem 3D. If you've ever used the Duke3D level editor, you probably know where this is going. What you did was draw your sector in the map view, adding verts as you saw fit; then, with the press of a key, you were in an in-game-like environment where you could raise and lower the ceiling or floor, apply a slope, add decals, etc. Everything you would normally do in the orthographic views of, say, GtkRadiant, you did in the full-screen 3D view. It was intuitive and EASY. Brush editing just isn't as fast and easy, or even more powerful (the only limits were those of the game engine itself).

Games today bake their lighting into lightmaps. It looks pretty, but dynamic lighting is pretty much out the window. To me it seems almost like a step back (not graphically, but tech-wise). Doom 3's lighting is insanely processor intensive. My idea? Use some sort of LOD system that subdivides surfaces based on distance to the player. You know, keep the already-fast vertex lighting updated in real time.

How come no game engines use vector-based images for... well, anything? You could use them for text, or particles, or even textures. I know for a fact they render faster than bitmaps (depending on complexity, of course), and you get the added bonus of infinite scaling with no jaggies.

Ah, if only I was a programmer :o


The in-game editor you speak of... uh... Doom 3 has one (I believe; I haven't even tried editing D3), Far Cry definitely does, and some other games have them too.

You don't need to buy any software to edit a game character. If a game calls for Maya, it usually comes with Maya PLE (e.g. Unreal); if it calls for 3ds Max, there's gmax. And those are really the only two it'd call for.
D E A T H
2005-01-18, 5:12 PM #15
Quote:
Originally posted by Freelancer
One thing I always thought would be cool would be to have "architecture maps", which contain geometry information as well as shaders. It would have a rigorous LOD system, but the basic idea is, you just apply this shader-like map to a flat surface, and depending on the map, it will then have the architecture of a brick wall in-game or something. Actual TEXTURES, not just visual information.


It's been done, it's called bump mapping. :/

2005-01-18, 7:58 PM #16
your FACE is called bumpmapping!

<<

>>
2005-01-18, 8:02 PM #17
No, it's not. Bumpmapping is ancient and ugly. Bumpmaps are flat, as are normal maps. FLAT, damnit. If you look at a bumpmapped surface from a parallel angle, you can see it clearly. That's not even my main concern. The main concern is this:

Bumpmapping and normal mapping both look HIDEOUS along the kind of non-planar polygon edges that need AA. The whole illusion is ruined when you see a 45-degree angle in the architecture arbitrarily defining the height, overriding the height data in the normal map.

Sige was the only person with the right idea. I'm talking actual VERTEX data here. Actual polygons.

Edit: Oh, Det was onto the right idea too, but the problem with displacement mapping is that it's still a flat surface. Unless the mapping takes effect outside the polygon it's used on, it's useless at simulating depth. Think about it: what happens when you have a concave 90-degree corner? You're going to get a completely flat line, breaking the illusion, unless the mapping extends outside the poly.
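To spell it out, the map only ever bends the lighting math; the position of the pixel never moves. Something like this (toy sketch, made-up names):

Code:
#include <algorithm>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-in for reading the perturbed normal out of the bump/normal map.
static Vec3 SampleNormalMap(float u, float v) { return { 0.0f, 0.0f, 1.0f }; }

// Diffuse shading of one pixel on a normal-mapped surface.
static float ShadePixel(float u, float v, Vec3 lightDir)
{
    Vec3 n = SampleNormalMap(u, v);          // the fake bumps change this...
    return std::max(0.0f, Dot(n, lightDir)); // ...so the shading changes,
    // but the pixel's 3D position is still exactly on the flat polygon,
    // which is why silhouettes and hard edges give the trick away.
}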
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-18, 8:05 PM #18
Real depth mapping isn't exactly a difficult thing to implement; the problem is the amount of extra polygons it introduces (displacing even a 256x256 grid per surface is on the order of 130,000 extra triangles).
Detty. Professional Expert.
Flickr Twitter
2005-01-18, 8:09 PM #19
Okay, are you sure that the displacement mapping used in U3 is the same thing that's used in certain modelling programs? Because from what I saw in the U3 tech demo, it was just a hack. Sure, it looked like there was actual vertex data, but it didn't seem quite right. I'ma do a little more research and brb.
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-18, 8:10 PM #20
In the U3 demo they only showed it close up in alcoves where you couldn't view the surface from the side, so it was hard to tell whether it was an illusion or real displacement.

But in regard to Shred18's original post, these (relatively) new methods such as displacement mapping are intended to make level design easy again: you get back to designing the level rather than meticulously crafting every wall.
Detty. Professional Expert.
Flickr Twitter
2005-01-18, 8:13 PM #21
Det, I did some research, and they use what's called virtual displacement mapping, and it's what I feared -- looks great in the middle but the edges are flat. :\ It's still a completely 2D surface. Now, non-virtual displacement mapping is what I described above, and it's used in certain 3D modelling programs, but not even U3 will have true displacement maps.
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-18, 8:30 PM #22
I'd like to add that that's a nice idea you have there, Shred. The closest thing I can think of is Morrowind. That game inherently has a damn vertex every few pixels.

As for some of my other ideas -- I think we're going the wrong route with shaders. Shaders shouldn't contain environment maps of any kind at all. That's just stupid. Shaders should only contain data pertinent to how the engine renders the surface. They shouldn't literally dictate arbitrary visual data.

My peeve is that I see a lot of shaders trying to emulate specular lighting with environment maps, when that should be part of the lighting algorithm. Shaders should simply dictate data like the luster of each surface, so the engine can properly light it. Much like the distinction between XML and CSS: XML handles the data, CSS handles the presentation. The engine should handle the procedural graphics procedurally and the dynamic graphics dynamically, while the shaders dictate how each surface should be rendered.
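In other words, the material should just hand the engine a couple of numbers and the engine should do the highlight itself. A rough Blinn-Phong-style sketch of what I mean (names made up):

Code:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// What the shader/material should describe: surface properties only.
struct Material
{
    float shininess;     // the "luster": how tight the highlight is
    float specularScale; // how strong the highlight is
};

// What the engine does with it: a specular highlight computed from the
// real light and view directions, no environment map baked in anywhere.
static float SpecularTerm(Material m, Vec3 normal, Vec3 lightDir, Vec3 viewDir)
{
    Vec3 h = Normalize({ lightDir.x + viewDir.x,
                         lightDir.y + viewDir.y,
                         lightDir.z + viewDir.z });
    return m.specularScale * std::pow(std::max(0.0f, Dot(normal, h)), m.shininess);
}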
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-18, 9:22 PM #23
Using per-vertex lighting simply sucks. You end up with more triangles than you want in order to make things look nicer. Yeah, it's fast, but you're still only interpolating between the vertices; it's always going to be a much nicer approach when you can at least simulate per-pixel lighting.
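The difference in a nutshell (toy sketch, one directional light, diffuse only, with b0..b2 being the usual barycentric weights):

Code:
#include <algorithm>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float Diffuse(Vec3 n, Vec3 l) { return std::max(0.0f, Dot(n, l)); }

// Per-vertex (Gouraud): light the three corners, then just blend the
// results across the triangle. A highlight smaller than the triangle
// simply vanishes, no matter how the pixels in between should look.
static float GouraudPixel(const Vec3 n[3], Vec3 l, float b0, float b1, float b2)
{
    return b0 * Diffuse(n[0], l) + b1 * Diffuse(n[1], l) + b2 * Diffuse(n[2], l);
}

// Per-pixel: blend the normals instead, then do the lighting at the pixel.
static float PerPixel(const Vec3 n[3], Vec3 l, float b0, float b1, float b2)
{
    Vec3 blended = { b0*n[0].x + b1*n[1].x + b2*n[2].x,
                     b0*n[0].y + b1*n[1].y + b2*n[2].y,
                     b0*n[0].z + b1*n[1].z + b2*n[2].z };
    // (should be re-normalized; skipped to keep the sketch short)
    return Diffuse(blended, l);
}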

It'd be nice to see triangle rendering thrown out the window completely and a different rendering primitive take its place. This has been done before by having a set of points in 3D space with no specified connectivity, each storing enough information to effectively simulate lighting, etc. -- AKA surfels. However, in order to make the switch, innovation would be needed in more areas than just rendering, and hardware is already so dedicated to rendering more triangles faster. With this in mind, I doubt it will happen soon, if ever.

For those curious as to how surfels are done, I'd recommend the following PDF file:

http://graphics.ethz.ch/Downloads/Publications/Papers/2000/p_Pfi00.pdf
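The gist of the data structure is just a point sample with no connectivity, roughly like this (the paper stores a fair bit more per sample):

Code:
// Roughly what one surfel carries: a position, an orientation, a size,
// and shading attributes, but no connectivity to its neighbors.
struct Surfel
{
    float position[3];
    float normal[3];
    float radius;    // how much surface area this sample stands for
    float color[3];  // diffuse color (the real thing stores more)
};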
2005-01-18, 9:41 PM #24
What exactly is wrong with normal mapping? Yeah, of course it's flat. It's designed to be flat, because if you used displacement mapping, no PC for at least the next five years would be able to run the game that did. There's no innovation we're lacking; all the technology is there, it just needs time.

Quote:
My peeve is that I see a lot of shaders trying to emulate specular lighting with environment maps, when that should be part of the lighting algorithm. Shaders should simply dictate data like the luster of each surface, so the engine can properly light it.

Shaders.

Anyone interested in the future of modeling characters or anything organic (and possibly more), check out ZBrush. It's essentially the future. Watch the videos here. You can watch a guy create, in a few hours (the video is sped up; it's 30 minutes of watching time), an incredibly detailed model that would probably take someone several days or even a week in conventional programs (depending on what tools are available... if all you have is pushing points, make it more like two months).
Bassoon, n. A brazen instrument into which a fool blows out his brains.
2005-01-18, 9:59 PM #25
Oh my hell! Did you see that part where he made the fins?!!?! Did you see it?!! WTF!!!! :eek:
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-19, 1:05 AM #26
If you'd like to see the next step beyond bump mapping and relief mapping, take a look at these screenshots...
http://www.paralelo.com.br/img/relief_curved_new.jpg
http://www.paralelo.com.br/img/relief_curved_new_rockbump_sphere.jpg
http://www.paralelo.com.br/img/relief_curved_new_rockwall_sphere.jpg
http://www.paralelo.com.br/img/relief_curved_new_brickwall_sphere.jpg

This uses a basic shape for an object (a sphere, a plane, a low poly model, whatever) and does some limited ray tracing on the graphics card to do relief mapping that looks correct on-edge. Also does shadowing as a byproduct. Basically bump mapping + parallax/virtual displacement mapping that looks correct on-edge for *most* models. Works by moving the surface inward, sort of, so it does have some issues, but everything does.

The relief mapping from the Unreal 3 demo came about as a step up from bump mapping (warping the texture coordinates per pixel based on a bumpmap and the viewing angle to simulate height, in addition to normal bump-mapped lighting).
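The core of both tricks is small, for what it's worth. A CPU-side sketch (not real shader code; SampleHeight() stands in for the height texture, and viewX/viewY for the tangent-space view direction):

Code:
// Stand-in for reading the height texture at (u, v): 1 = at the surface,
// 0 = deepest point of the relief.
static float SampleHeight(float u, float v) { return 0.5f; }

// Parallax / "virtual displacement" (the Unreal 3 demo style): shift the
// texture lookup once, based on the view direction and the height there.
static void ParallaxOffset(float& u, float& v,
                           float viewX, float viewY, float scale)
{
    float h = SampleHeight(u, v);
    u += viewX * h * scale;
    v += viewY * h * scale;
}

// Relief mapping (the ray-traced flavor): step along the view ray in
// small increments until it sinks below the height field. That search is
// what keeps it correct near edges, and marching toward the light instead
// of the eye is where the self-shadowing comes from.
static void ReliefMarch(float& u, float& v,
                        float viewX, float viewY, float scale, int steps)
{
    float depth = 0.0f;
    float stepSize = 1.0f / steps;
    for (int i = 0; i < steps; ++i)
    {
        if (depth >= 1.0f - SampleHeight(u, v))   // ray has hit the surface
            break;
        depth += stepSize;
        u += viewX * scale * stepSize;
        v += viewY * scale * stepSize;
    }
    // (a real implementation refines the hit point with a binary search)
}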

Both of those relief mapping techniques require a newer 3D card, and the top one only works on GeForce 5xxx/6xxx cards. The top one is relatively slow, while the one in the Unreal 3 demo can be added to normal bump mapping on a Radeon 9500+ or GeForce 5xxx+ for little cost.

Vector graphics are pretty slow for 3D cards; you could use them for things like text, but a pair of textured triangles is much faster. You'd have to simulate them with triangles anyway...

Shadow mapping now (not static lightmaps like in Quake 3) is starting to replace the shadow volumes you see in Doom 3, because it scales better in terms of speed as geometry gets more detailed, and it can also handle transparency (to some extent) and deformable geometry. Properly done, shadow mapping gives you dynamic shadows at a very high resolution, and it's possible to do soft shadows with it.
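The per-pixel test itself is about one line once you've rendered the scene's depth from the light's point of view (sketch, made-up names and bias):

Code:
// Sketch of the shadow map test. shadowMap holds the depths the light
// "sees"; lightSpaceX/Y/Depth are the current pixel projected into the
// light's view. Names and the bias value are made up.
static bool InShadow(const float* shadowMap, int mapSize,
                     float lightSpaceX, float lightSpaceY, float lightSpaceDepth)
{
    int x = static_cast<int>(lightSpaceX * (mapSize - 1));
    int y = static_cast<int>(lightSpaceY * (mapSize - 1));
    float storedDepth = shadowMap[y * mapSize + x];
    const float bias = 0.002f;  // keeps surfaces from shadowing themselves
    return lightSpaceDepth > storedDepth + bias;
}

Soft shadows mostly come down to doing several of those lookups around the pixel and averaging the results.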

You still need environment maps because it's just not possible to do reflections (other than light source reflections as specular) on most geometry (anything more complex than a single plane). You'd have to render the scene from every pixel's point of view multiple times to get real reflections...
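The environment map lookup itself is just the view direction reflected about the normal, something like this sketch:

Code:
struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Direction used to index the cube map: R = I - 2*(N.I)*N, with I the
// (normalized) direction from the eye to the surface and N the normal.
static Vec3 ReflectionDir(Vec3 incident, Vec3 normal)
{
    float d = 2.0f * Dot(normal, incident);
    return { incident.x - d * normal.x,
             incident.y - d * normal.y,
             incident.z - d * normal.z };
}

That gives you a direction to look up, but not what's actually there, which is why the cube map still has to come from somewhere (pre-rendered, or updated separately).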

Displacement mapping (the real deal, vertex displacement) is possible on the newest 3D cards (the GeForce 6xxx series), though you'd still need a well-tessellated surface to make it look good. It's possible because these cards let you do texture reads in vertex programs, which lets you adjust a vertex based on a heightmap, among other things. It's slow, though.
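It boils down to something like this per vertex (sketch; SampleHeightTexture() stands in for the texture read the newer cards allow in the vertex unit):

Code:
struct Vec3 { float x, y, z; };

// Stand-in for the heightmap fetch done inside the vertex program.
static float SampleHeightTexture(float u, float v) { return 0.0f; }

// Real displacement: actually move the vertex along its normal by the
// heightmap value. Without a well-tessellated mesh there's nothing to move.
static Vec3 DisplaceVertex(Vec3 position, Vec3 normal,
                           float u, float v, float scale)
{
    float h = SampleHeightTexture(u, v);
    return { position.x + normal.x * h * scale,
             position.y + normal.y * h * scale,
             position.z + normal.z * h * scale };
}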

Vertex lighting just isn't good enough: it can't handle dynamic lights or shadows, and tessellation isn't the solution. Besides, rendering would be slowed down a LOT if you had to tessellate surfaces, and we're probably still a couple of years away from hardware that can tessellate properly on the graphics card.
Air Master 3D (my iPhone game)
My Programming Website
2005-01-19, 11:01 AM #27
Sige wins the thread.
"it is time to get a credit card to complete my financial independance" — Tibby, Aug. 2009
2005-01-19, 11:31 AM #28
On second thought, we should just start using voxels for everything.
Stuff
