Extrasolar Developer's Blog: Topo Map Generation

I love maps. To those who know me, this probably isn't a surprise. My parents were both cave surveyors and I grew up with a compass and clinometer in my hand. I've spent many weekends doing orienteering and adventure racing.

Maps are a huge part of Extrasolar, but so far, players haven't needed to know precise elevations. When we realized that this data would become more essential in the upcoming seasons, I was excited to take on the task of turning our digital height map data into useful topo maps.

I tried to keep the algorithms as simple as I could. In fact, I didn't even write any new shader code. I started by using our existing shadow shader to compute the distance from an overhead orthographic camera to the terrain. I then pulled this buffer from the GPU to the CPU and did some basic math to transform the data from its raw representation (distance from camera) into elevation values. At that point, I had a big floating-point array of elevation data, and I did the rest of the work on the CPU.
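
Once the buffer is on the CPU, the depth-to-elevation conversion is just a bit of arithmetic. Here's a minimal sketch of that step in Python/NumPy, assuming a straight-down orthographic camera at a known height (the names here are my own illustration, not our engine's API):

import numpy as np

def depth_to_elevation(depth_buffer: np.ndarray, camera_height: float) -> np.ndarray:
    # With an orthographic camera looking straight down, the distance from the
    # camera to the terrain is simply the camera height minus the terrain height,
    # so elevation falls out of a subtraction.
    return camera_height - depth_buffer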

Basic contour lines

The first step was to draw basic contour lines. To start, I just tested each pixel to see whether I'd cross an isocontour when walking to the immediately adjacent neighbor in the +x or +y direction. Here's what that looks like in pseudocode. The result is shown below. Click the image for a huge version.

for (py=0; py<height-1; py++):
    for (px=0; px<width-1; px++):
        h00 = depthData[py*width+px]        // current sample
        h01 = depthData[py*width+px+1]      // +x neighbor
        h10 = depthData[(py+1)*width+px]    // +y neighbor
        maxHeight = max(h00, h01, h10)
        nearestIncrement = floor(maxHeight/heightIncrement)*heightIncrement

        if (h00 < 0.0):
            color pixel px, py blue
        if h00 < nearestIncrement or h01 < nearestIncrement or h10 < nearestIncrement:
            color pixel px, py black
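
For what it's worth, this pass is also easy to vectorize. Here's a rough NumPy equivalent of the loop above (a sketch with my own variable names; the white background and color values are just for illustration):

import numpy as np

def basic_contours(elevation: np.ndarray, height_increment: float) -> np.ndarray:
    # elevation is a 2D float array of heights.
    h, w = elevation.shape
    img = np.full((h, w, 3), 255, np.uint8)      # start from a white map

    h00 = elevation[:-1, :-1]                    # current sample
    h01 = elevation[:-1, 1:]                     # +x neighbor
    h10 = elevation[1:, :-1]                     # +y neighbor
    hmax = np.maximum(np.maximum(h00, h01), h10)
    nearest = np.floor(hmax / height_increment) * height_increment

    # Mark anything below zero elevation in blue.
    img[:-1, :-1][h00 < 0.0] = (0, 0, 255)
    # A contour at `nearest` passes through this cell if any corner is below it.
    crossing = (h00 < nearest) | (h01 < nearest) | (h10 < nearest)
    img[:-1, :-1][crossing] = (0, 0, 0)
    return img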

Antialiased lines of varying thickness

As you may have noticed, we have some overhanging ledges that make this data particularly complicated. Our result is a good start, but it would be nice to have cleaner antialiased lines and to have major and minor elevation intervals to convey more useful information to the viewer.

To get sub-pixel accuracy in our line drawing, we can be more precise about where our isolines cross between pixels. If we consider any 4 adjacent data samples with known height values, we can write simple parametric equations that linearly interpolate between the height values as t goes from 0 to 1:

height(t) = h00 + t*(h01 - h00)    (along the edge toward the +x neighbor)
height(t) = h00 + t*(h10 - h00)    (along the edge toward the +y neighbor)

If we want to know the value of t at which a given contour elevation (nearestIncrement in the pseudocode below) is crossed, we can solve these equations for t:

t = (nearestIncrement - h00) / (h01 - h00)
t = (nearestIncrement - h00) / (h10 - h00)

If the result is between 0 and 1, then we know that we should plot a point between our samples. Note that we only need to consider these 2 equations. The other 2 lines that would fully connect all 4 data samples into a square will be considered as we walk through the full data array. Here's how this is done in pseudocode:

for (py=0; py<height-1; py++):
    for (px=0; px<width-1; px++):
        h00 = depthData[py*width+px]        // current sample
        h01 = depthData[py*width+px+1]      // +x neighbor
        h10 = depthData[(py+1)*width+px]    // +y neighbor
        maxHeight = max(h00, h01, h10)

        nearestIncrementNumber = floor(maxHeight/heightIncrement)
        if nearestIncrementNumber % majorIncrementFrequency == 0:
            // Major line increment.
            majorIncrement = true
            lineWidth = 2.0
            lineIntensity = 2.0
            lineColor = black
        else:
            // Minor line increment
            majorIncrement = false
            lineWidth = 1.5
            lineIntensity = 1.5
            lineColor = grey
        nearestIncrement = nearestIncrementNumber*heightIncrement

        // Calculate the exact position where our isoline crosses into our grid
        // square (if at all). For each square of pixels, we solve the parametric
        // equations along 2 edges for the isoline crossing.
        t = (nearestIncrement-h00)/(h01-h00)
        if (t > 0 && t <= 1):
            plot a point at px+t, py with our selected line color and width
            if majorIncrement:
                save point (px+t, py, nearestIncrement) to a list for future use

        t = (nearestIncrement-h00)/(h10-h00)
        if (t > 0 && t <= 1):
            plot a point at px, py+t with our selected line color and width
            if majorIncrement:
                save point (px, py+t, nearestIncrement) to a list for future use

If we were to zoom in closely enough to observe the sub-pixel precision, we'd notice two types of points getting plotted at regular grid intervals. In the image here, the grid intersections represent the positions of our elevation data samples; the green points come from our first equation and the red points from our second.

While these points aren't technically connected together, when we zoom back out to our image scale, the points are close enough together that they end up looking like nice, contiguous lines.

It's a little unfair that I just wrote "plot a point" with subpixel accuracy. I mean, how do we do that? I wrote a little helper function that looks at a neighborhood of pixels around the desired point and colors each pixel based on the distance from the pixel's center to the point's center.

I use a simple falloff that has a value of 1 near the center of the point, drops linearly to 0 at the point's radius, and is clamped in between, so it looks like alpha = clamp(intensity - distance*(intensity/radius), 0, 1). Here's what that equation looks like when graphed, before and after the clamping function is applied.

And here's how the function looks in pseudocode:

plotPoint(x, y, pointSize, pointIntensity, color):
    pointRadius = pointSize/2.0f;
    
    // Determine the range of pixels that might overlap our point,
    // clamped to the bounds of our image
    int minX = clamp(x-pointRadius, 0, width-1);
    int maxX = clamp(x+pointRadius+1, 0, width-1);
    int minY = clamp(y-pointRadius, 0, height-1);
    int maxY = clamp(y+pointRadius+1, 0, height-1);

    // For each point within these bounds, color the image
    // based on the distance to the circle's center.
    for (imgY=minY; imgY<=maxY; imgY++):
        for (imgX=minX; imgX<=maxX; imgX++):
            distance = distance between (x,y) and (imgX, imgY)
            alpha = clamp(pointIntensity - distance*(pointIntensity/pointRadius), 0, 1)

            // Alpha blend our antialiased point.
            pixel(imgX,imgY) = pixel(imgX,imgY)*(1-alpha) + color*alpha
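
If it helps, here's the same helper written out as runnable Python/NumPy (a sketch of the idea, not the exact shipped code). A call like plot_point(img, 103.4, 57.8, 2.0, 2.0, (0, 0, 0)) blends a small black dot into the image at a subpixel position:

import numpy as np

def plot_point(img, x, y, point_size, point_intensity, color):
    # img is a float RGB array of shape (height, width, 3).
    height, width = img.shape[:2]
    radius = point_size / 2.0

    # Range of pixels that might overlap the point, clamped to the image bounds.
    min_x = max(int(np.floor(x - radius)), 0)
    max_x = min(int(np.ceil(x + radius)), width - 1)
    min_y = max(int(np.floor(y - radius)), 0)
    max_y = min(int(np.ceil(y + radius)), height - 1)

    for img_y in range(min_y, max_y + 1):
        for img_x in range(min_x, max_x + 1):
            dist = np.hypot(img_x - x, img_y - y)
            # 1 near the center, fading linearly to 0 at the point's radius.
            alpha = np.clip(point_intensity - dist * (point_intensity / radius), 0.0, 1.0)
            img[img_y, img_x] = img[img_y, img_x] * (1 - alpha) + np.asarray(color) * alpha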

Elevation labels

As you might have already guessed, this is the tricky part. If you're hand-labeling a topo map, it's a fairly common convention to place the labels on different isocontours close to each other, as demonstrated here.

Labels along the same contour are spaced further apart so that the map doesn't get too crowded. If we were to place the labels by hand, we'd probably first put labels on adjacent contours -- close, but not so close that they overlap. We'd then move further out and start the process again. So the question is: how do we express this as an algorithm?

If you were reading through the code, you may have noticed a place up above where I saved every point on the major contour lines for later use. This is where that huge collection of points becomes useful. What we do now is iterate over the points repeatedly, considering each one as a possible label. New labels are constrained to not be within 50 pixels of another label with a different value or within 200 pixels of another label with the same value. On each pass through the array, we can remove points that violate any of these constraints while we search for the point that is closest to any constraint without violating it. Repeat until the array of points is empty. Just like we discussed above, this process tends to first find nearby points on adjacent isocontours and then, when it can't go any further, hop further to an isocontour that's already been labeled.

Here's what that looks like:

Remove from the elevationPoints array any points within edgeProximityLimit pixels of the image edges
targetIndex = array index of the highest point in elevationPoints
render label at elevationPoints[targetIndex]
push elevationPoints[targetIndex] to the labelConstraints array
    
while (true):
    targetIndex = -1
    targetDistance = 999999.0
    
    for each index i in elevationPoints:
        minDistance = 999999.0
        for each index c in labelConstraints:
            dist = distance between elevationPoints[i] and labelConstraints[c]

            // Distance comparison for labels at the same elevation.
            if (elevationPoints[i].z == labelConstraints[c].z):
                if (dist < sameLabelDistance):
                    // Point lies within constraint. There can be no label here.
                    remove item i from elevationPoints
                    minDistance = 999999.0
                    break
                else if (dist-sameLabelDistance < minDistance):
                    // This is a potential label candidate.
                    minDistance = dist-sameLabelDistance
                
            // Distance comparison for labels at different elevations.
            else if (dist < differentLabelDistance):
                // Point lies within constraint. There can be no label here.
                remove item i from elevationPoints
                minDistance = 999999.0
                break
            else if (dist-differentLabelDistance < minDistance):
                // This is a potential label candidate.
                minDistance = dist-differentLabelDistance
        
        // We've considered all constraints. Is this the best label candidate so far?
        if (minDistance < targetDistance):
            targetDistance = minDistance
            targetIndex = i;
    
    if (targetIndex == -1):
        // All points have been disqualified due to constraints.
        break
    else:
        render label at elevationPoints[targetIndex]
        push elevationPoints[targetIndex] to the labelConstraints array
        remove item targetIndex from elevationPoints

I should note that in this code, we're deleting elements from the elevationPoints array while we're iterating over it. That needs to be done carefully: when we delete a point and break out, the next element shifts into the current slot, so the loop counter shouldn't be incremented. For brevity, I didn't handle this explicitly in the pseudocode. Here's what the result looks like.
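
In Python, for example, one simple way to get this right is to only advance the index when the current element survives. (violates_any_constraint below is a hypothetical stand-in for the distance checks above.)

i = 0
while i < len(elevation_points):
    if violates_any_constraint(elevation_points[i], label_constraints):
        del elevation_points[i]    # the next element slides into slot i, so don't advance
    else:
        i += 1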

It's a little evil that I glossed over font rendering. Depending on the language you're using, this can be a tricky problem. That said, there's very little platform-agnostic advice I can give on the topic, so you'll have to figure that out for yourself.

Adding the final polish

At this point, the trickiest stuff is done. To really polish up the result, I blurred the input data with an edge-sensitive filter to smooth out the lines without blurring away the undercut ledges. I added some coloration, shaded the terrain based on the normal, and overlaid it with a full render of the underlying imagery to get the great-looking result that we teased you with at the start.
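
I won't walk through every one of those passes, but the normal-based shading is easy to sketch. Here's roughly how a simple hillshade can be computed from the elevation grid in Python/NumPy; this illustrates the general technique rather than our exact shader, and the light direction is an arbitrary choice:

import numpy as np

def hillshade(elevation, light_dir=(0.5, 0.5, 0.7), cell_size=1.0):
    # Slopes along rows (y) and columns (x).
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    # Surface normals for the heightfield, normalized to unit length.
    normal = np.dstack((-dz_dx, -dz_dy, np.ones_like(elevation)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambertian shading: dot each normal with the light direction.
    return np.clip((normal * light).sum(axis=-1), 0.0, 1.0)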

And just to show the flexibility of the approach, here's a demonstration of it running on a greyscale image that was never intended to be used as elevation data.
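
If you want to try something similar yourself, any greyscale image can stand in for elevation data. Here's a quick sketch using Pillow and the basic_contours function from the earlier sketch (both are my own choices for illustration, not part of our pipeline):

import numpy as np
from PIL import Image

grey = np.asarray(Image.open("input.png").convert("L"), dtype=np.float32)
elevation = grey / 255.0 * 100.0      # map brightness onto a 0-100 unit height range
contour_map = basic_contours(elevation, height_increment=5.0)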

I hope this makes you adore maps as much as I do. If you have other questions that aren't adequately answered here, feel free to email me at rob@lazy8studios.com.