Showing posts with label Relaxation. Show all posts

Sunday, July 24, 2011

City Lots

Last week's post showed a practical way to create countries and provinces. The next logical step was to split the resulting regions even more and produce cities and even lots for individual buildings.

My approach, as usual, was to find a sweet spot between effort and realism. I made geography the main influence on city location and layout, with some hints from cultural properties.

First I subdivided the regions I had found before. I added a new set of points and computed the resulting Voronoi for it:

Image

Yes, it is quite messy, but it will clean up very nicely later. Each patch in the previous image is a few hundred meters in diameter, roughly the size of a real-life city block.

Two things about this Voronoi phase: seed points were placed where the slope was at its maximum. Since cell boundaries fall midway between seeds, this pushes the boundaries toward places where the slope is not too high.

Also, the distance function is warped along the Y coordinate, which is up/down in the world. The warp makes a difference along the vertical axis weigh more than the same difference along a horizontal axis. This has an interesting effect on the resulting Voronoi cells: they are no longer convex in Euclidean space, and they tend to be constrained to areas of similar altitude. Later you will see why this helps.
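A warped distance of this kind can be sketched as below. The vertical weight value, the seed coordinates, and the function names are made-up examples for illustration, not taken from the post:

```python
import math

def warped_distance(p, q, y_weight=3.0):
    """Anisotropic distance: a difference along the vertical (Y) axis
    counts more than the same difference along a horizontal axis, so
    cells tend to hug areas of similar altitude."""
    dx = p[0] - q[0]
    dy = (p[1] - q[1]) * y_weight  # vertical differences are amplified
    dz = p[2] - q[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def closest_seed(cell, seeds, y_weight=3.0):
    """Assign a cell to the nearest seed under the warped metric."""
    return min(range(len(seeds)),
               key=lambda i: warped_distance(cell, seeds[i], y_weight))

# Seed 1 is horizontally closer to the sample cell, but it sits at a
# different altitude, so the warped metric keeps the cell in seed 0's region.
seeds = [(0.0, 0.0, 0.0), (9.0, 4.0, 0.0)]
print(closest_seed((7.0, 0.0, 0.0), seeds, y_weight=3.0))  # 0
print(closest_seed((7.0, 0.0, 0.0), seeds, y_weight=1.0))  # 1 (plain Euclidean)
```

With the weight set to 1 the metric degenerates to ordinary Euclidean distance, and the altitude constraint disappears.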

It was time for cleanup. First I removed patches on the frontier with another province. I wanted some undeveloped space between cities.

Then I removed patches that were in steep terrain. This is controlled by a cultural parameter, since some cultures are better at building on slopes.

Here you can see the results:

Image

It is harder to see in just a screenshot, but the resulting cities end up in mostly flat locations. They still make use of interesting places like some mountain tops or the border of a cliff. This is thanks to the peculiar characteristics of the earlier Voronoi phase.

Next I extended the roads so the city blocks would be connected to the main road system:

Image

At this point I had the city mapped out. The next step was to break down each block and determine the location of individual buildings.

There was one hairy problem. All this generation so far was done on two-dimensional arrays, where each cell of the array was roughly 5 meters wide. My architecture system, on the other hand, is based on vector shapes. Somehow I needed to convert the 2D maps into shapes before I could feed building locations and sizes to the architecture layer.

The first step was to clean the 2D map. I filtered it to make sure there were no weird pixel configurations ending in non-manifold shapes. I also made it all just two values: 0 for empty and 1 for taken. The following image shows the resulting bitmap for one area:

Image
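The cleanup pass can be sketched roughly as follows. The post does not spell out the exact filtering rules, so this is a guess at the kind of fix involved: thresholding to 0/1 and breaking "checkerboard" 2x2 configurations, where two solid cells touch only at a corner and would produce a non-manifold outline:

```python
def clean_bitmap(grid):
    """Threshold a 2D map to 0/1, then fill one cell of every
    'checkerboard' 2x2 block (two diagonal solid cells meeting only
    at a point) so the boundary stays manifold."""
    g = [[1 if v else 0 for v in row] for row in grid]
    changed = True
    while changed:
        changed = False
        for y in range(len(g) - 1):
            for x in range(len(g[0]) - 1):
                a, b = g[y][x], g[y][x + 1]
                c, d = g[y + 1][x], g[y + 1][x + 1]
                if a == 1 and d == 1 and b == 0 and c == 0:
                    g[y][x + 1] = 1  # join the diagonal pair
                    changed = True
                elif b == 1 and c == 1 and a == 0 and d == 0:
                    g[y][x] = 1
                    changed = True
    return g

dirty = [[2, 0, 0],
         [0, 5, 0],
         [0, 0, 0]]
print(clean_bitmap(dirty))  # [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
```

The pass repeats until stable, since filling a cell can create a new diagonal pair next to it.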

So it was about converting a map of bits into polygonal shapes. I wonder where I had heard that before?

At the beginning of this project I considered representing voxels just as bits and then producing smooth polygonal surfaces by using mesh relaxation. In the end I did not use that method for the virtual world, but all the work I did back then became really helpful for this task. It was pretty much the same problem, only in 2D.

The first step was to create a lattice of points, but only for those cells on the boundaries; that is, where a 1 became a 0 or the other way around:

Image


Next I applied my old smoothing algorithm to the lattice grid. For each point a new position is computed based on the two other points that are connected to it. After enough iterations it not only becomes smooth, it actually flattens out and sharp features appear.
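On a closed boundary loop, the smoothing step can be sketched like this. The blend factor and iteration count are illustrative choices; the post's exact weights are not given:

```python
def relax(loop, iterations=7, strength=0.5):
    """Smooth a closed boundary loop: on each pass, move every point
    partway toward the average of its two connected neighbors.
    All points are updated simultaneously from the previous pass."""
    pts = [tuple(map(float, p)) for p in loop]
    n = len(pts)
    for _ in range(iterations):
        pts = [(
            (1 - strength) * pts[i][0]
              + strength * (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2.0,
            (1 - strength) * pts[i][1]
              + strength * (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2.0,
        ) for i in range(n)]
    return pts

# A blocky square loop: after one pass every corner has already moved
# inward, and repeated passes round the staircase edges off.
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(relax(square, iterations=1))
```

This is plain Laplacian smoothing; the partial blend (rather than jumping all the way to the neighbor average) keeps small loops from collapsing in a single step.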

Even though the result was sharp and flat, there were a lot of vertices in the resulting polygons. I used a form of greedy simplification to take them out:

Image
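One plausible greedy scheme is sketched below: repeatedly drop the vertex whose removal deviates least from the current outline, while that deviation stays under a tolerance. The post does not state the exact criterion, so take this as an illustration of the idea:

```python
import math

def simplify(loop, tolerance=0.1):
    """Greedy polygon simplification on a closed loop."""
    def deviation(a, b, c):
        # Distance from b to segment a-c: how much removing b changes the shape.
        ax, ay = a; bx, by = b; cx, cy = c
        ux, uy = cx - ax, cy - ay
        length2 = ux * ux + uy * uy
        if length2 == 0:
            return math.hypot(bx - ax, by - ay)
        t = max(0.0, min(1.0, ((bx - ax) * ux + (by - ay) * uy) / length2))
        return math.hypot(bx - (ax + t * ux), by - (ay + t * uy))

    pts = list(loop)
    while len(pts) > 3:
        n = len(pts)
        errs = [deviation(pts[i - 1], pts[i], pts[(i + 1) % n]) for i in range(n)]
        i = min(range(n), key=lambda k: errs[k])
        if errs[i] > tolerance:
            break  # removing any more vertices would distort the shape
        del pts[i]
    return pts

# The collinear point (1, 0) is removed; the four corners survive.
outline = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(simplify(outline, tolerance=0.01))
```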

Not bad. While it is all polygons now, this is still far from what the architecture system can take.

In this system, buildings start from what is called a "scope", which is pretty much a 3D box with some orientation. Even round buildings start from a scope. Some buildings may span several scopes; a castle, for instance, may be composed of many scopes bound together by hallways, flying bridges, etc.

I devised several methods of cutting these polygons into boxes. Each method produces a different feel, and each one can be controlled by some parameters. All this is determined by the culture settings of the city.

These methods work in two phases. First they subdivide polygons until they are roughly square:

Image

And then they output actual boxes for building scopes:

Image

In this last image the boxes appear in black. This is where the buildings will be.
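The two phases can be sketched as below. Real lots in the post are arbitrary polygons with orientation; axis-aligned rectangles, and the aspect, margin, and height values, are simplifying assumptions to keep the sketch short:

```python
def subdivide(rect, max_aspect=1.5):
    """Phase 1: split a lot along its longest side until every piece
    is roughly square (aspect ratio at most max_aspect)."""
    x, y, w, h = rect
    if max(w, h) / min(w, h) <= max_aspect:
        return [rect]
    if w >= h:
        return (subdivide((x, y, w / 2, h), max_aspect)
                + subdivide((x + w / 2, y, w / 2, h), max_aspect))
    return (subdivide((x, y, w, h / 2), max_aspect)
            + subdivide((x, y + h / 2, w, h / 2), max_aspect))

def to_scope(rect, margin=1.0, height=8.0):
    """Phase 2: shrink a piece into a box 'scope' for the architecture
    layer (x, y, width, depth, height)."""
    x, y, w, h = rect
    return (x + margin, y + margin, w - 2 * margin, h - 2 * margin, height)

# A long 40x10 block splits into four 10x10 pieces, each becoming a scope.
pieces = subdivide((0.0, 0.0, 40.0, 10.0))
print(len(pieces))            # 4
print(to_scope(pieces[0]))    # (1.0, 1.0, 8.0, 8.0, 8.0)
```

Varying the aspect threshold and the split rule is one way the "different feel" per method could come about.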

As you can see, the buildings closely follow the roads and make good use of the space available. Not every block has to be filled with buildings. Depending on cultural or even random factors, there could be parks or entirely undeveloped lots. Some further blocks could be marked as agriculture, if the climate and culture allows for it.

At this point you may be curious about how these lots would look over the terrain. It is quite cool actually; I will post about it soon. As usual, I look forward to your comments and ideas.

Sunday, November 14, 2010

Just Relax

It was settled then: I would use voxels to represent everything in my virtual world.

I knew I had to pick a voxel resolution that was small enough to handle the detail I wanted, but still coarse enough so the data volumes were manageable. I chose one tenth of a meter, which is close to four inches. That was good enough for rocks, boulders, even architectural elements like stairs and columns.

It is actually possible to have details smaller than the voxel resolution, like a tree branch with a diameter of one inch. The catch was that the next non-empty element, the next branch for instance, would necessarily be separated by twice the voxel resolution. So I could have two one-inch-diameter sticks, but the closest they could ever be was eight inches. You can look into the Sampling Theorem if you want to understand where those limits come from.

That was good enough for the basic geometry of the world. Smaller details would come from traditional texturing: normal and parallax mapping, and even instancing for tree leaves and grass.

The following image will give you a sense of the amount of detail that can be achieved with this resolution, at least for terrains. The white ball near the top is one meter in diameter.

Image

But voxels are cubes by nature. A naive rendering of my voxels would result in conspicuous four-inch boxes. Somehow the boxes needed to become smooth and shape the world in a realistic way. The boxes, as seen below, needed to turn into the sphere:

Image

This is a very old problem. Medical scans produce 3D data that is very similar to voxels. There are several algorithms that take such 3D data and render it as a series of triangles. This problem in general is referred to as isosurface extraction, which means obtaining a surface out of a three-dimensional field.

The best known of these algorithms is Marching Cubes. Other similar algorithms are Marching Tetrahedra and Dual Contouring. The idea behind them is simple, and there are many implementations available. They look at the different corners of a voxel and, depending on whether each corner is solid or not, determine that the resulting surface crosses the voxel somewhere. The surface fragment is output and the algorithm proceeds to the next voxel.
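The corner-classification idea is easiest to show in the 2D analogue (marching squares), where the four corners of a cell pack into a 4-bit case index with 16 configurations instead of the 256 of Marching Cubes. This sketch only computes the index; a full implementation would use it to look up which cell edges the surface crosses:

```python
def marching_squares_case(grid, x, y):
    """Pack the four corner states of the cell at (x, y) into a 4-bit
    case index. Bit order here is an arbitrary choice: top-left,
    top-right, bottom-right, bottom-left."""
    return ((grid[y][x]         << 0)   # top-left corner
          | (grid[y][x + 1]     << 1)   # top-right corner
          | (grid[y + 1][x + 1] << 2)   # bottom-right corner
          | (grid[y + 1][x]     << 3))  # bottom-left corner

# Top row empty, bottom row solid: the surface crosses the left and
# right edges of the cell (a horizontal contour).
grid = [[0, 0],
        [1, 1]]
print(marching_squares_case(grid, 0, 0))  # binary 1100 -> 12

# A fully solid cell (case 15) and a fully empty one (case 0)
# produce no surface at all.
print(marching_squares_case([[1, 1], [1, 1]], 0, 0))  # 15
```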

My problem with these algorithms was that in order to guess the right position of the surface, you not only needed to know whether a voxel was solid or not, you also needed to know its "density". The virtual world was no longer a clean cut between black and white; it was all shades of gray. In that sense, at a given point you may have 30% air and 70% gravel.

This is not trivial. It is like comparing a raster image with a vector image. With the raster image, like a BMP, everything is simple. Each pixel has one distinct color. If you want to modify the image, you just change the color value of the pixel and that's it. With a vector image, you never have individual points, just a series of mathematical functions that evaluate to shapes. If you want to modify the image, you either alter the parameters of the existing shapes or add new ones.

If I wanted to use any of those algorithms, I would have to represent everything in my virtual world as a mathematical function. Or at least, I would need to provide a diffuse boundary to all the voxel construction tools I might devise. I didn't like that. I hoped for an approach in which it was really simple to modify the voxels.

So I found a simple way to come up with the surfaces. I realized that if each voxel corner was moved to the averaged position of its neighboring corners, the voxel grid would soon converge into a smooth approximation of the surface. It would take only seven iterations to produce the sphere in the earlier image.

This method was even able to produce sharp boundaries. The contribution of neighboring voxels to the averaged position depended on material type. As the grid relaxation increased on each iteration, tension lines would naturally appear in the boundaries between different materials. 
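A 2D version of the material-aware relaxation can be sketched as follows. The post does not give the exact weighting scheme; pinning the vertices where the material changes is the simplest variant that shows the effect, so treat the details here as assumptions:

```python
def relax_with_materials(loop, mats, iterations=7, strength=0.5):
    """Relax a closed boundary loop, but hold in place any vertex
    sitting between two edges of different materials. Same-material
    runs smooth out while material boundaries stay as sharp creases.
    mats[i] is the material of the edge from point i to point i+1."""
    pts = [tuple(map(float, p)) for p in loop]
    n = len(pts)
    for _ in range(iterations):
        out = []
        for i in range(n):
            if mats[i - 1] != mats[i]:  # material change: keep the crease
                out.append(pts[i])
                continue
            avg_x = (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2.0
            avg_y = (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2.0
            out.append(((1 - strength) * pts[i][0] + strength * avg_x,
                        (1 - strength) * pts[i][1] + strength * avg_y))
        pts = out
    return pts

# Two opposite corners of the square sit on a material boundary: after
# relaxation they have not moved, while the other two have rounded off.
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(relax_with_materials(square, ['a', 'a', 'b', 'b']))
```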

The following image shows the results of this method:

Image

The material boundaries appear as red voxels in the section at the left. As you can see, they translate into sharp features once the surface is generated.

Since it was so easy to work with the voxels, I soon found myself adding noise, and then applying convolution filters to the voxels. It was a very simple way to get interesting results. I even implemented an L-system for some complex volumes like the shell below:

Image

But it did not take long before I discovered the relaxation approach would not take me as far as I wanted to go. I started turning triangle-based geometry into voxels and the results were never good. I was creating the models in traditional 3D software like 3ds max and then importing them as voxels. I was getting a lot of noise I could not deal with. It seemed that black and white would not cut it, that I really needed to consider voxel density.

I dumped all the relaxation code and started looking at the voxels from a fresh perspective. I was back to square one.