Rendering the Geoscape Globe


zeldafreak


I'm working on a project using XNA where I'm trying to render the Geoscape Globe in three dimensions. Looking through the WORLD.DAT entry on UFOPaedia and also some other sites on latitude/longitude and ways to render a sphere, I've familiarized myself with a little of the necessary math, but I have a long way to go. Before I spend a ton of time, I wanted to know if anyone out there had done any similar or related work that I could steal :P

 

A little about me: I've been programming for nearly ten years, from QuickBASIC to C/C++ to C# today, professionally for about five years. The development side is no problem for me, but I will have to admit that my 3D math is rusty and I'm learning/relearning as I go.

 

As far as what I've already done, I created a 2-dimensional render, so I have somewhat of a handle on that. Where I'm having trouble is combining the information stored in WORLD.DAT (essentially texture coordinates) with a 3D sphere.

 

I have some ideas on where to get started, but wanted to cast my line out there since, from what I've read, this seems to be a very helpful and hard-working community. Any work you've done or even just ideas based on your understanding of the problem-space would be helpful.

 

Looking at how it's rendered in the game, I'm kinda thinking that they used some kind of ray-tracing method for rendering the globe pixel by pixel because (to my knowledge) there's no geometry stored anywhere. Anyone else have any clues?

 

Thanks,

David


The game most likely processes each polygon, works out whether or not that polygon would be visible from the user's point of view, then renders it pixel by pixel if so.

 

In my geoscape renderer (which is also only 2D thus far - the source is in the file bb_tact\GeoRender.java from my toolkit, and you can see sample renders here), I built some code based around the concepts described here. Basically, I consider a theoretical rectangle around the polygon I'm working on, then work through each point in that rectangle one at a time (left to right, top to bottom), working out whether or not that point should be rendered as part of the polygon.
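As a rough sketch of that bounding-rectangle scan (Python for illustration, since the thread's projects are in Java and C#; the even-odd crossing test is my assumption about what the linked article describes):

```python
def point_in_polygon(px, py, poly):
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x coordinate where this edge crosses the horizontal through py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def raster_points(poly):
    """Walk the polygon's bounding rectangle left-to-right, top-to-bottom,
    yielding every integer point whose centre falls inside the polygon."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    for y in range(min(ys), max(ys) + 1):
        for x in range(min(xs), max(xs) + 1):
            if point_in_polygon(x + 0.5, y + 0.5, poly):
                yield (x, y)
```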

 

I assume the actual game does much the same thing, but once it's determined whether a dot should be drawn, it then runs the co-ordinate through some trig functions to determine where it'd go on the 3D globe.

 

Rendering the textures this way is the easy bit, luckily. Each texture is a 32x32 bitmap. All you need to do is work out where the current pixel should be drawn on the screen - then AND the x/y positions with the number 31.
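That AND trick might look something like this (a Python sketch; `texture` is assumed to be a 32x32 array of palette indices loaded from TEXTURE.DAT):

```python
def texel_colour(texture, screen_x, screen_y):
    """32x32 textures tile the screen; ANDing the screen position with
    31 (0b11111) wraps it into the bitmap, so the textures always face
    the user regardless of where the polygon sits on the globe."""
    return texture[screen_y & 31][screen_x & 31]
```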

 

Working out the correct location for these pixels is the hard bit, of course. You apply a scaling factor depending on how far away each point is from the viewer. For example, the point that appears to the user to be the middle of the globe has no scaling. As you move out towards the edges, each point should be skewed more and more towards that central point, as these points are furthest from the user on the 3D globe.

 

The user can see exactly 1440x1440 points of the world map at any given time - that's half of the latitude, and the full longitude. Say the user is looking at latitude 720 by longitude 0 (keeping in mind that in WORLD.DAT, latitude runs from 0 to 2880, and longitude goes from -720 to 720). You'd hence draw all polygons in the space of 0lat/-720long through to 1440lat/720long.

 

The exact center of this space gets drawn in the center of the screen. But each pixel away from this central point gets its co-ords scaled towards that central point, depending on how far away it is - nearby points get very little scaling, but points on the edges get a lot.

 

The amount you skew each pixel by is a percentage of a number based on your zoom level, multiplied by the cosine of the distance of the point you're working with from the center of the viewing area. At least, I think that's it. I'm reading some of my old code with no idea how it's supposed to work, merely an idea as to what it's supposed to do. Keep in mind that every 720 points describes a quarter-circle, anyways.
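One way to read that description (a Python sketch, not the original code): treat the distance from the centre as an angle, with 720 map points per quarter-circle, and run it through a trig function - the sine of the angle gives the screen offset, while the cosine gives the foreshortening the post remembers:

```python
import math

POINTS_PER_QUARTER = 720  # 720 WORLD.DAT points span a quarter-circle

def project_offset(d, zoom_radius):
    """Map a distance-from-centre (in map points) to a screen offset.
    sin() pulls far-away points in toward the centre, matching the
    'more skew near the edges' behaviour described above; zoom_radius
    is the globe's on-screen radius at the current zoom level."""
    angle = (d / POINTS_PER_QUARTER) * (math.pi / 2)
    return zoom_radius * math.sin(angle)
```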

 

Hmm. I suppose you'd need to apply a similar transform to latitude depending on longitude too, to account for how the polygons get closer together near the poles.

 

If that makes no sense, I can probably try and work all that into some actual functioning code (I've always wanted to try), but it will probably take me a week or so.


Wow, thanks for taking the time to describe all that.

 

Do you think that rendering by hand, pixel-by-pixel, is the easiest solution? I was playing around with the globe last night and it looks to me like it renders a solid blue sphere, and then renders the land geometry on top of it. Seems to me that that should be fairly straightforward to implement. Thoughts?

 

BTW, thanks for that link on points within a polygon; I've been looking for something like that for a while.

 

All in all, I'm interested in leveraging today's graphics technology as much as possible, even if it means sacrificing some of the faithfulness of the recreation.


Do you think that rendering by hand, pixel-by-pixel, is the easiest solution?

"Easiest"? As far as the polygons go, yes, though my current method is far from "efficient" - originally I used that code so I could work out which polygon a crash site or whatever was sitting on (a requirement for my ComboMod), then I just wrapped a few FOR loops around it and hey presto! Instant rendering engine.

 

A better way would be to work out which lines of the polygon intersect the current scanline and where, then plot pixels between those intersects.

 

For example, say you're working on a given row of the points in whatever polygon. You work out which lines intersect the current scanline (through some simple greater-than/less-than checks) - you'll always end up with an even number of intersecting lines (usually two in the case of the WORLD.DAT polys, though you may end up with four on rare occasions). Next you solve those lines for X, given that Y = whatever row you're working on. Sort the results in order from lowest to highest, divide them up into groups of two, then just render all the points between each group.
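Those steps can be sketched like so (Python for illustration):

```python
def scanline_spans(poly, y):
    """Solve each polygon edge for x at the given scanline, sort the
    intersections, and pair them up into the [x_start, x_end) spans to
    fill.  The crossing count is always even, so the zip pairs cleanly."""
    xs = []
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge crosses this scanline
            xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
    xs.sort()
    return list(zip(xs[0::2], xs[1::2]))  # groups of two, low to high
```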

 

In any case, you need to handle it a point at a time or else (1) you can't run the points through the trig transforms and (2) you can't draw the texture maps. You can even skip rendering every few points when the globe is zoomed out, trial and error being the easiest way to work out how many.

 

For all I know there's a graphics card function these days which'll draw polygons on a ball for you without trying to twist the texture maps (the game always has the textures face the user, they never "turn" because the polygons are way over on the edge of the globe), but I've never done any 3D coding so I wouldn't know.

 

I was playing around with the globe last night and it looks to me like it renders a solid blue sphere, and then renders the land geometry on top of it. Seems to me that that should be fairly straightforward to implement. Thoughts?

That's more or less how it works, but there's also the lighting. Note that the poles are always shaded darker than the equator (in addition to the "time of day" shading). If you take a look at the renders I linked to before, you should see that the "iceberg" polygons have darker water on them than the non-polygon areas around them; that's 'cause my render has no lighting effects.


Using this bit of code:

 

    float radLat = MathHelper.ToRadians((float)Latitude / 8f);
    float radLong = MathHelper.ToRadians((float)Longitude / 8f);

    float x = (float)(radius * Math.Cos(radLat) * Math.Cos(radLong));
    float y = (float)(radius * Math.Cos(radLat) * Math.Sin(radLong));
    float z = (float)(radius * Math.Sin(radLat));
    return new Vector3(x, y, z);

 

I was able to generate a 3D representation of the globe. The only issue is that the points do not respect clockwise or counter-clockwise orientation, so the only way to get a complete render is to turn off culling (hardly efficient). But I suspect that I can use dot-product magic to dynamically choose the order of vertices in a triangle.
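That "dot-product magic" might look like this (a Python sketch; the view direction and winding convention here are assumptions - XNA's right-handed, counter-clockwise-front defaults may differ from your setup):

```python
def facing_camera(a, b, c, view_dir=(0.0, 0.0, -1.0)):
    """Cross product of two triangle edges gives the face normal; the
    sign of its dot product with the view direction tells you whether
    the winding faces the camera.  Reverse the vertex order if not."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    # Normal pointing against the view direction means the face is visible
    return n[0] * view_dir[0] + n[1] * view_dir[1] + n[2] * view_dir[2] < 0
```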

 

As far as what you said about how textures always face the user, seems like that could be handled in a shader, assuming again that faithfulness to the original was paramount; a more "simple" approach would be to just give the vertices UV coordinates that would wrap the texture around the globe. Assuming a 1:1 correlation between the texture maps and points in WORLD.DAT, wrapping each texture 45-90 times would suffice.

 

Alternatively, you could "fake" the 3D render by simulating sending "rays" into the scene, one ray per rendered pixel, determining the point on the world that each ray struck (simpler for a sphere than one might think), and rendering the texture map by tiling it across the screen. The viewing area in the original game is 256x200 so that would be tiling the texture 8 times. I wrote a photon mapping engine for generating lightmaps several years ago and I think I could use the same techniques here.
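A sketch of that per-pixel "ray" idea for a head-on orthographic view (the axis orientation and lat/long conventions here are my assumptions):

```python
import math

def globe_hit(sx, sy, cx, cy, radius):
    """For an orthographic camera looking down -z, the 'ray' for screen
    pixel (sx, sy) hits the sphere iff the pixel lies inside the
    projected circle; the front intersection's z then falls straight
    out of the sphere equation, and lat/long follow from trig."""
    dx, dy = sx - cx, sy - cy
    d2 = dx * dx + dy * dy
    if d2 > radius * radius:
        return None                        # ray misses the globe
    z = math.sqrt(radius * radius - d2)    # front intersection depth
    lat = math.asin(dy / radius)           # -pi/2 .. pi/2
    lon = math.atan2(dx, z)                # -pi/2 .. pi/2 (visible half)
    return lat, lon
```

A texture-map lookup for the hit point then reduces to the AND-with-31 tiling mentioned earlier.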

 

Lastly, you brought up the lighting. A true rendering engine (using Direct3D or OpenGL) should be able to handle the lighting with little help. It becomes more difficult if you're faking the render using raycasting, but still doable.


Alternatively, you could "fake" the 3D render by simulating sending "rays" into the scene, one ray per rendered pixel, determining the point on the world that each ray struck (simpler for a sphere than one might think), and rendering the texture map by tiling it across the screen.

It strikes me that this is probably the easiest way to do things - draw the entire world map to one bigass texture, then raytrace as though it were stuck on a sphere. This would use way more RAM than strictly necessary (which isn't really a problem in this day and age), but if you really want to wrap the textures around, it'd be the way to go. Means you only need to process each polygon once then forget about them, too.

 

This texture can either contain the contents of the TEXTURE.DAT bitmaps (if you want the textures to face the way the polygons would be facing), or just indexes to which type of texture should be used for each point (if you want to have them always face the user like in the original game). Regardless, it'd need to use some form of transparency.

 

(Having the textures face the user is easy no matter WHAT method you use, as once you know where a point is being rendered on the screen and what texture it should be using, you can determine the exact colour to use with the simple AND operations mentioned earlier. Doing this will probably look best, as the textures look a bit rubbish when you try to scale them).

 

The main downside to doing this is that, in the original game, lighting is determined on a polygon-by-polygon basis, whereas this'd force you to do it on a point-by-point basis (unless you save extra data to keep track of the time zone of the polygon each point belonged to). Point-by-point lighting is obviously going to look "better", though it won't look "authentic".
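Point-by-point lighting could be as simple as a Lambert term (a sketch; the 0.1 ambient floor is an invented value, not anything from the original game):

```python
def shade(base_colour, normal, sun_dir):
    """Point-by-point lighting: scale the colour by the cosine of the
    angle between the surface normal and the sun direction (both unit
    vectors), clamped so the night side keeps a minimum ambient level."""
    dot = sum(n * s for n, s in zip(normal, sun_dir))
    intensity = max(0.1, dot)              # 0.1 = assumed ambient floor
    return tuple(int(c * intensity) for c in base_colour)
```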

 

The other "catch" is that the original game only bothers to transform the corners of each polygon - it then just draws that in 2D, resulting in straight lines when it should really be rendering curves. A raytrace would produce a proper spherical render.


If you're curious, OpenXcom renders the Geoscape globe using orthographic projection, mapping each polygon point in WORLD.DAT to a coordinate on the screen with those formulas, and shading each polygon individually. This provides a pretty accurate representation of the original globe; however, it doesn't take any advantage of 3D hardware features since it's a purely 2D mapping.
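For reference, the standard orthographic projection looks roughly like this (a sketch centred on the equator for simplicity; OpenXcom's actual formulas may differ in conventions):

```python
import math

def ortho_project(lat, lon, center_lon, radius):
    """Orthographic projection of a lat/long (radians) onto the screen.
    Points with cos_c < 0 lie on the far hemisphere and are culled."""
    cos_c = math.cos(lat) * math.cos(lon - center_lon)
    if cos_c < 0:
        return None                        # behind the globe
    x = radius * math.cos(lat) * math.sin(lon - center_lon)
    y = radius * math.sin(lat)
    return x, y
```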

 

There might be better projection techniques that you can use (raytracing etc.) but I'm not sure how well it will come out with the original world polygons since they were designed for 2D projection, so they have a lot of peculiar details like overlaps that might look odd in 3D. The original game just rendered everything pixel-by-pixel on screen, those were the days... :P


You're right, trying to render these triangles/quads has proved to be an absolute pain. The order of the points is different for many quads and I haven't been able to figure out how to make it work with culling enabled. I'll be studying your code there to see how you've done it and I'll probably end up doing it similarly.

 

But I guess you and I are working on the same thing. The difference is that I am reimplementing the game using .NET while you are using C++. I'll be interested in your pathfinding (geoscape, as I plan on trying A* for battlescape pathfinding) and how you're handling alien AI.


You're right, trying to render these triangles/quads has proved to be an absolute pain. The order of the points is different for many quads and I haven't been able to figure out how to make it work with culling enabled. I'll be studying your code there to see how you've done it and I'll probably end up doing it similarly.

 

But I guess you and I are working on the same thing. The difference is that I am reimplementing the game using .NET while you are using C++. I'll be interested in your pathfinding (geoscape, as I plan on trying A* for battlescape pathfinding) and how you're handling alien AI.

Geoscape doesn't have any fancy pathfinding yet - it just lazily goes straight from point A to point B without accounting for the fact that the globe isn't actually a rectangle. :P

 

As for the point order, you could use something like this to check their order and reverse them if necessary.
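One common way to check the order and reverse it if necessary is the shoelace formula (a sketch; whether "positive" means counter-clockwise depends on whether your y-axis points up or down):

```python
def signed_area(poly):
    """Shoelace formula: positive for counter-clockwise vertex order in
    a y-up coordinate system, negative for clockwise."""
    a = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

def ensure_ccw(poly):
    """Reverse the vertex order if the polygon winds clockwise."""
    return list(poly) if signed_area(poly) > 0 else list(reversed(poly))
```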


After correcting some minor mathematical errors, I managed to get the orthographic projection to work correctly. I used your code to figure out the units for the equation (it would have taken me forever to figure out that my problems were due to the radius being in pixels). The primary issue I have now is that the edges of the globe are blue (because I'm drawing a perfect circle and drawing a not-so-perfect polygonal projection on top of it). I'm searching through your logic to see what's different that could be causing that to not work...

The original game appears to deal with that by rendering the polygons according to a sphere that's slightly larger than the blue ball that's underneath. That is to say, decrease the radius of your perfect circle a little.

 

I was quite taken with the idea of rendering one bigass texture onto a globe, if only to see what it looked like. This evening I sat and thought until trig made sense again, and managed to produce a render:

 

Render_comparison.png

 

Had to crank the resolution right up for the textures to appear as anything other than a fuzzy mess (it would've been simple enough to render them the traditional way, but that'd take much of the fun out of it). Rotating the thing horizontally is simple enough, as is resizing it, but I haven't yet coded in vertical rotation.

 

The basic look of the thing is, at a glance, near identical to the original, though it's interesting to see how the lakes and pools are of a noticeably different shape on closer inspection. I'm not certain whether this is entirely due to the straight lines drawn by the original engine (given that my render curves absolutely everything); I think I might be stretching along the vertical axis or something...

