
Monday, April 15, 2019

Moonmaker

This is a Moon-making system we devised a while ago. The goal was to produce moons similar in size and composition to the ones found in our solar system. The main challenge was producing the massive surfaces of these moons, and their interiors, while keeping them interesting. We also had to make sure they could be rendered with crisp detail regardless of how far away you are from them.

The system assumes the moons have a spherical base, meshed as a geodesic sphere to guarantee each surface patch has the same area. This structure is used only as a computational grid for the procedural generation; the actual moon surface will be much smoother than the generation grid.

Image
Geodesic sphere
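The post does not include code, but a geodesic sphere like this is commonly built by subdividing an icosahedron and projecting each new vertex back onto the unit sphere, which keeps the patch areas nearly equal. A minimal sketch (the function names are mine, not Voxel Farm's):

```python
import numpy as np

def icosphere(subdivisions=3):
    """Build a geodesic sphere by subdividing an icosahedron.
    Returns (vertices, faces); faces end up with near-equal area."""
    t = (1.0 + 5 ** 0.5) / 2.0  # golden ratio
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in verts]
    cache = {}  # midpoint index per edge, shared by adjacent faces

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))  # project onto unit sphere
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(subdivisions):
        faces = [f for (a, b, c) in faces
                 for f in ((a, midpoint(a, b), midpoint(a, c)),
                           (b, midpoint(b, c), midpoint(a, b)),
                           (c, midpoint(a, c), midpoint(b, c)),
                           (midpoint(a, b), midpoint(b, c), midpoint(a, c)))]
    return np.array(verts), faces
```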

For smaller, irregular moons, the base geometric definition of the moon will be perturbed by a low-frequency 3D noise.

Image
Noise-perturbed geodesic sphere
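The post does not say which noise function is used; as an illustration, here is a simple hash-based 3D value noise that displaces each vertex radially. Everything below is an assumed stand-in, not the engine's actual noise:

```python
import numpy as np

def hash3(ix, iy, iz, seed=0):
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    h = (ix * 374761393 + iy * 668265263 + iz * 2147483647 + seed) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def value_noise3(p, freq=2.0, seed=0):
    """Trilinearly interpolated lattice noise, roughly in [0, 1)."""
    x, y, z = (c * freq for c in p)
    ix, iy, iz = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - ix, y - iy, z - iz
    def lerp(a, b, t): return a + (b - a) * t
    c = [[[hash3(ix + dx, iy + dy, iz + dz, seed)
           for dz in (0, 1)] for dy in (0, 1)] for dx in (0, 1)]
    return lerp(lerp(lerp(c[0][0][0], c[0][0][1], fz),
                     lerp(c[0][1][0], c[0][1][1], fz), fy),
                lerp(lerp(c[1][0][0], c[1][0][1], fz),
                     lerp(c[1][1][0], c[1][1][1], fz), fy), fx)

def perturb(verts, amplitude=0.15, freq=2.0):
    """Displace each unit-sphere vertex radially by low-frequency noise."""
    out = []
    for v in verts:
        n = value_noise3(v + 10.0, freq) - 0.5  # offset keeps lattice coords positive
        out.append(v * (1.0 + amplitude * n))
    return np.array(out)
```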

Starting from this base, the system will use a series of concentric shells. Each shell will determine the volumetric characteristics up to the next concentric shell. The outermost shell will provide the surface properties of the moon.

Image
Multiple concentric shells

Each shell will be optionally distorted by a low-frequency 3D noise that is unique to the shell. If configured to do so, an inner shell can rise above an outer shell.


Image
Shell distortion
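One way to evaluate such a shell (a sketch of my own, reusing the value_noise3 function from above) is to displace the shell's base radius by its private noise and test sample points against the result. Because each shell's noise is independent, an inner shell can locally rise above an outer one, as described above:

```python
import numpy as np

def inside_shell(p, shell):
    """True if point p (a numpy array) lies inside this shell's distorted
    boundary. shell = {'radius': r, 'amplitude': a, 'freq': f, 'seed': s},
    all illustrative fields. Per the post, the volume between two
    consecutive shell boundaries takes its properties from the inner shell."""
    d = np.linalg.norm(p)
    direction = p / d
    n = value_noise3(direction + 10.0, shell['freq'], shell['seed']) - 0.5
    return d <= shell['radius'] * (1.0 + shell['amplitude'] * n)
```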

The preceding diagrams have exaggerated the distance between shells, and the magnitude of their distortion, to provide a better understanding of this constructive process. In practice, shells will be much closer to each other, and the magnitude of their distortion will be proportionally smaller.

The following sections will describe how the outermost shell is produced. The same principles apply to the production of the inner shells; for this reason, the definition of inner shells will not be discussed in detail.

The procedural moon system requires both a real-time component and an offline component. The real-time component runs on the player's computer. The offline component runs on the game developer's computers, producing information that can be quickly augmented by the real-time component.

Each vertex in a spherical shell will be classified as belonging to one specific biome. A biome is a collection of surface properties, including, but not limited to: elevation, material distribution, instance placement, and material coloration.

Image
Example of biome

The biome information contains entropy-rich features like craters, dry seas, and surface cracks caused by gravitational tides.

Image
Large crater captured in biome information

The biome definition also contains planting rules, which will determine the location, frequency, and randomization of smaller features like rocks, boulders, overhangs, etc.

Image
Rock instances over terrain
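Planting rules like these are usually evaluated deterministically per patch, so the same patch always reproduces the same instances without storing them. A hypothetical sketch (the rule fields are invented for illustration):

```python
import random

def plant_instances(patch_seed, rules, patch_size=10_000.0):
    """Scatter instances over a 10 km x 10 km biome patch.
    rules: list of dicts like {'asset': 'rock_small', 'per_km2': 40,
    'scale': (0.5, 2.0)}. Deterministic per patch seed, so the same
    patch always reproduces the same placement."""
    rng = random.Random(patch_seed)
    instances = []
    area_km2 = (patch_size / 1000.0) ** 2
    for rule in rules:
        for _ in range(int(rule['per_km2'] * area_km2)):
            instances.append({
                'asset': rule['asset'],
                'pos': (rng.uniform(0, patch_size), rng.uniform(0, patch_size)),
                'scale': rng.uniform(*rule['scale']),
                'rotation': rng.uniform(0.0, 360.0),
            })
    return instances
```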

Each biome is made of tile-able elements so the real-time component can apply the same information in multiple locations of the moon, and in other moons as well. The biome information will also be designed in a way that allows fast distortion and re-combination, thus reducing repetitive and predictable patterns in the environment.

The system uses two types of biome tiles: Transitional and Isolated.

Transitional biome information will appear in regions where a biome is transitioning into a neighboring biome. Isolated biome information will appear in regions where the system can guarantee there are no neighboring biomes.

These two distinct modes are needed because some large biome features, like craters, can only be placed in areas where the biome is not transitioning into another biome type, since biome transitions can affect the height profile and overall look of the terrain features.
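Deciding which tile type a patch needs can be as simple as comparing its biome id against those of its neighbors. A sketch under that assumption (the helper names are mine):

```python
def patch_tile_type(patch_id, biome_of, neighbors_of):
    """Classify a biome patch as 'isolated' or 'transitional'.
    biome_of: patch id -> biome id; neighbors_of: patch id -> neighbor ids.
    Only isolated patches may receive large features such as craters,
    since transitions can alter the height profile of the terrain."""
    biome = biome_of(patch_id)
    if all(biome_of(n) == biome for n in neighbors_of(patch_id)):
        return 'isolated'
    return 'transitional'
```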

The following simplified example shows a moon that uses three different biomes: Polar, Tropical, and Equatorial. The biomes are colored Red, Blue, and Green respectively.

Image
A moon with three biomes

Patches with an Isolated biome appear in a solid color; patches with Transitional biomes show a blend of colors.

The system will use a set of pre-computed noises to introduce variation in the biome transition zones, creating interesting, unique transitions from one biome into another.

Image
Procedural noise applied to biome transitions

The preceding images have exaggerated the size of each biome patch relative to the moon's size to provide a better understanding of the biome patching technique. A single biome patch will cover an area of approximately 10 km by 10 km. A moon with a 1,500 km radius has a surface area of roughly 28 million km². To cover its surface, it would take nearly 300,000 of these patches.

Image
Biome grid resolution for a moon of 1,500 km radius
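The patch count follows directly from the sphere's area; a quick check:

```python
import math

radius_km = 1500.0
patch_area_km2 = 10.0 * 10.0                  # one biome patch: 10 km x 10 km
surface_km2 = 4.0 * math.pi * radius_km ** 2  # ~28.3 million km^2
patches = surface_km2 / patch_area_km2        # ~283,000 patches
print(f"{surface_km2:,.0f} km^2 -> {patches:,.0f} patches")
```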

The biome grid resolution can lead to a very large number of biome patches. The system will not keep track of these individually since each patch’s location can be analytically computed, and for most of the patches, their biome assignment can be inferred from a high-order biome map.

High-order biome maps will provide the moon’s key characteristics when viewed from far away. The system will use these maps to generate additional detail for closer views, keeping the moon’s definition consistent when viewed at different scales.

High-order biome maps are 2D images that can fit the moon’s surface using a custom 2D parametrization. Each point of the map contains a numeric identifier for the biome that is prevalent in that location of the surface or internal shell.

Image
2D biome map

The image above shows a map with four different biome types (blue, red, yellow and white). The image is wrapped around the sphere using a custom 2D parametrization. One possible parametrization is shown below:

Image
2D parametrization for Biome Id map
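The custom parametrization itself is only shown in the image; as a stand-in, an equirectangular (longitude/latitude) mapping illustrates how a direction on the sphere becomes a map lookup:

```python
import math

def biome_at(direction, biome_map):
    """Look up the biome id for a unit direction vector on the sphere.
    Uses an equirectangular parametrization as a stand-in for the
    custom mapping described in the post. biome_map is a 2D array of ids."""
    x, y, z = direction
    u = (math.atan2(y, x) / (2.0 * math.pi)) % 1.0   # longitude -> [0, 1)
    v = math.acos(max(-1.0, min(1.0, z))) / math.pi  # latitude  -> [0, 1]
    h, w = len(biome_map), len(biome_map[0])
    return biome_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
```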

In addition to the biome Id map, the system will allow other maps. For instance, maps containing elevation and surface color.

Image
Elevation, tint, and a far-range rendering of the moon

A single pixel in each image may cover 4 km², making them inexpensive to produce. These elevation and tint maps can be either procedurally generated or artist-made. For a project containing only dozens of moons, and where each moon is required to have rich, unique natural properties, this is a stage where artist input is likely to yield the best returns.

The following image captures the entire approach to generating shell surface elevation and other properties:

Image
The three scales used for moon construction

A moon will be made of at least one spherical shell. In case there are multiple shells, the system will extrude inner shells based on their maximum radius and the shell’s height function, which is obtained from the same multi-scale process described in the previous sections.

For each shell, the moon designer will provide a high-order biome distribution map, biome definitions for the biomes appearing in this map, and material definitions for the materials appearing in the biome.

The system accepts “air” as a valid material, which can be used to create cavities within any shell.

Image
A cross-section of the moon terrain, displaying two different shells

A single shell will also be a volumetric object, and its depth will be defined by a stack of materials. The material stack information will be contained in the biome.

Image
A material stack made of 6 materials
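A material stack can be modeled as an ordered list of layers with thicknesses; finding the material at a given depth is then a scan down the stack. A sketch with invented layer names; note how an "air" entry produces a cavity:

```python
def material_at_depth(stack, depth, jitter=0.0):
    """Return the material at 'depth' meters below the shell surface.
    stack: list of (material_name, thickness_m) from the surface downward.
    'jitter' stands in for the low-resolution local noise the post
    mentions for underground layers. 'air' entries create cavities."""
    d = depth + jitter
    for material, thickness in stack:
        if d < thickness:
            return material
        d -= thickness
    return stack[-1][0]  # below the stack: extend the deepest material

# Example: a six-material stack like the one in the figure (names invented)
stack = [("regolith", 2.0), ("dust", 1.0), ("basalt", 40.0),
         ("air", 3.0), ("iron-rich rock", 200.0), ("core material", 1e9)]
print(material_at_depth(stack, 44.5))  # -> 'air' (a cavity)
```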

Since underground materials will rarely become exposed, they can be expressed at a much lower resolution than biome surface materials, and the use of local procedural noise will not be noticed by the player.

The material stack functionality may be sufficient to place rare deposits, since materials in the stack can be made as rare as needed. The biome designer will be able to configure the abundance and occurrence pattern of any material in the stack.

The procedural generation is executed by both GPU shaders and CPU voxel algorithms.

The shader will compute a fragment color for each of the three scales, and it will blend these samples based on the distance from the camera to the fragment.
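In the engine this would live in a fragment shader; the blend logic itself might look like the following Python sketch, where the distance bands are made-up numbers:

```python
def blend_three_scales(far_c, mid_c, near_c, dist,
                       near_band=(0.0, 2_000.0), far_band=(50_000.0, 400_000.0)):
    """Blend per-fragment colors sampled at the three scales
    (high-order map, biome, material) by camera distance, in meters.
    The bands are illustrative, not the engine's actual values."""
    def smoothstep(a, b, x):
        t = max(0.0, min(1.0, (x - a) / (b - a)))
        return t * t * (3.0 - 2.0 * t)
    w_far = smoothstep(*far_band, dist)          # 0 up close, 1 far away
    w_near = 1.0 - smoothstep(*near_band, dist)  # 1 up close, 0 far away
    w_mid = max(0.0, 1.0 - w_far - w_near)
    total = w_far + w_mid + w_near
    return tuple((w_near * n + w_mid * m + w_far * f) / total
                 for n, m, f in zip(near_c, mid_c, far_c))
```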

Any detail that is small enough to not register in the geometry, but that still contributes to the perceived complexity of the surfaces, will be captured by normal maps generated in real-time from the procedural definition of materials, biomes or the high-order moon definition maps. This technique will keep low polygon counts for the scene.

The trade-off is that biome and high-order definition maps will have to be resident on GPU while the moon is in view. It is possible to selectively upload only the mipmaps necessary for rendering the current moon scale. The total amount of GPU memory required at any time will be reduced to a minimum by streaming mipmaps in and out of GPU as the viewer position changes.

When features become large enough due to their proximity to the camera, they begin to appear in the geometry output by the real-time voxel generation. This also applies to any sections of the moon that may have been terraformed or mined by the players. If the changes are large enough they could be perceived from orbit, as Voxel Farm’s adaptive scene manager will increase the LOD for any areas with modifications deemed important by the application.

Friday, July 15, 2016

Intelligent Terrain Synthesis

Don't you hate it when your favorite TV series puts out an episode that is just clips of stuff that happened in earlier episodes? This post has some of that but hopefully will provide you with a better idea of how we see Procedural Generation in the near future.

This video shows the new procedural terrain system to be released in Voxel Farm 3:



In case you want to find out more about what is happening under the hood, these previous posts may help:

Geometry is Destiny Part I and Part II
Introducing Pollock
Terrain Synthesis

The idea is simple. Instead of asking an artist to labor over a hundred different assets, either by hand or by using complex generation tools like World Machine, we now have a synthetic entity that can do some of that work through a mix of AI and simulation. You do not have to be an expert, or initiated at all in the arts of procedural generation, to get a satisfactory outcome.

Why are AI and simulation important? After working for a while in procedural generation, it became clear to me there was no workaround for the entropy problem. I believe it can be stated like this: viewers of procedurally generated content will perceive only the "seed" information, not the "expanded" data. Yes, you may have a noise function that can output terabytes of data, but all this data will be compressed by the human psyche down to the few bytes it takes to express the noise function itself. I posted about this problem in more detail here:

Uncanny Valley of Procedural Generation
Procedural Information
Evolution of Procedural

This does not mean all procedural generation is bad. It means it must produce information in order to be good. Good procedural generation is closer to simulation, automation, and AI. You cannot have information without work being done, and if work is to be done, better leave it to the machine.

The video at the top shows our first attempt at having AI that can do procedural generation. Stay tuned, because more is coming.


Monday, June 27, 2016

Introducing Pollock

Pollock is the code-name for our new terrain generation system. Why Pollock? This system went rogue for a couple of days and started producing things that looked less like terrain and more like Jackson Pollock paintings:

Image

It seems like mad randomness at first, just like Pollock, but there is a lot of order in this chaos (and also what appeared to be a buffer overrun error somewhere in the code).

Here are some images of the system when it behaves as expected:

Image

Image

Image

Image

Image

Image

The colors you see in these renders are not the final landscape colors. Each color identifies a different layer of more detailed material that will go there. These are placeholder materials Pollock is creating for you.

Pollock's main input is photographs, which you provide to suggest the geography of each biome. In case you want to create a full continent, Pollock will ask you some additional basic facts about elevation, temperature and wind direction.

In continents, you will get nice surprises like a desert appearing on one side of a mountain range:

Image

While the other side of the same range is all made of fertile land:

Image

This has happened due to all the moisture coming from the sea precipitating over one side and having only dry air go over the mountains.

It takes around five minutes to set this up from scratch. The system will do some pre-processing for a few minutes (usually less than five) and that's it. In less than 15 minutes you can complete the creation of an entire continent that spans over a dozen different biomes.

We are in the last stages of completion for this system. There are two main features missing: the addition of forests, rocks, etc., and plugging it into the lake generator to get inland lakes. Right now the system only does oceans.

This system will be included in the Voxel Farm 3 release.

Wednesday, May 18, 2016

Terrain Synthesis

This is just a teaser. We are still working on this, but we got some results that are already good enough to show. It is not about where terrain types appear (that was covered here and here), but how a particular terrain type is generated.

We want to make procedural generation as accessible as possible. Just like a movie director who shows a portfolio of photos and concept art to the CGI team and just says "make it look like this", we wanted the creator to be able to remain entirely clueless about how everything works.

This is how it feels to create a new terrain type. You provide a few pictures of it and we take it from there:

Image

This system builds a probabilistic model based on the samples you provide. That is enough to get an idea of the base elevation. On top of that, several natural filters are applied. It turns out we do know a bit more about this landscape: how dry it is and what the average temperature is, among other things. The only fact we are missing, and have to ask about, is how old you think the terrain is. The time scales range from hundreds of millions of years to billions of years. (If you believe your terrain is 6,000 years old, we cannot accommodate you at the moment.)

You can provide one or more sample pictures. The more pictures you provide, the better, but just one picture is often enough. Ready to see some results? The following terrains were synthesized out of a single photo in every case (do not mind the faux coloring, this is only to identify the different terrain layers for now):

Image

Image

Image

Providing multiple samples creates some sort of mix, similar to how you find both mother and father features in their kids:

Image

This works with any kind of image. It could be some fancy concept art as seen below:

Image

The natural filters in this case added some realism to the concept, and eroded some of the original hill shape. This could be avoided if you are after a more stylized look. But if you are short on time, and want to prototype different realistic terrains, the ability to quickly sketch something and feed it to the generator is a big help.

Of course you can still look under the hood and tinker with generation frequencies, filter parameters, etc. You can still have terrain models imported from Digital Elevation Models, or from third party software like World Machine. The key here is you do not have to anymore.

I'd be glad to enter into details of how this works if you guys are interested. Just let me know. I still owe the Part 2 of the continent generation. That should come shortly.

Tuesday, April 26, 2016

Geometry is Destiny

In the previous post, I introduced our new land mass generation system. Let's take a look at how it works.

For something as large as a continent, I knew we would need some kind of global generation method. Global methods involve more than just the point of space you are generating: the properties of a given point are influenced by points potentially very far away. Global methods, like simulations, may require you to perform multiple iterations over the entire dataset. I favor global methods for anything larger than a coffee stain on your procedural tablecloth. The reason is they can produce information, whereas local methods cannot: their information is limited to the seeds used in the local functions.

The problem with using a global simulation is speed. Picking the right evaluation structure is paramount. I wanted to produce maps of approximately 2000x2000 pixels, where each pixel would cover around 2 km. I wanted this process to run in less than five seconds on a single CPU thread. Running the generation algorithm over pixels would not get me there.

The alternative to simulating on a discrete grid (pixels) is to use a graph of interconnected points. A good approach here is to scatter points over the map, compute the Voronoi cells for them, and use the cells and their dual triangulation as the scaffolding for the simulation.

Image

I had tried this in the past with fairly good results, but there was something about it that did not sit well with me. In order to have pleasant results, the Voronoi cells must be relaxed so they become similarly shaped and the dual triangulation is made of regular triangles.

If the goal was to produce a fairly uneven but still regular triangle mesh, why not just start there and avoid the expensive Voronoi generation phase? We would still have implicit Voronoi cells because they are dual to the mesh.

We started from the most regular mesh possible, an evenly tessellated plane. While doing so we made sure all diagonal edges would not go in the same direction by making their orientation flip randomly:

Image


Getting the organic feel of the Voronoi driven meshes from here was simple. Each triangle is assigned a weight and all vertices are pulled or pushed into triangles depending on these weights. After repeating the process a few times you get something that looks like this:

Image

This is already very close to what you would get from the relaxed Voronoi phase. The rest of the generation process operates over the vertices in this mesh and transfers information from one point to another using the edges connecting vertices.
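The push/pull step might look like this (a sketch of my own; the exact weighting scheme is not described in the post):

```python
import random
import numpy as np

def organic_relax(verts, tris, iterations=4, strength=0.35, seed=1):
    """Pull mesh vertices toward the centroids of randomly weighted
    triangles, turning a regular tessellation into an organic-looking
    one while keeping connectivity intact. Assumes every vertex
    belongs to at least one triangle."""
    rng = random.Random(seed)
    weights = [rng.uniform(0.2, 1.8) for _ in tris]  # per-triangle weight
    verts = verts.astype(float).copy()
    for _ in range(iterations):
        pull = np.zeros_like(verts)
        total = np.zeros(len(verts))
        for (a, b, c), w in zip(tris, weights):
            centroid = (verts[a] + verts[b] + verts[c]) / 3.0
            for i in (a, b, c):
                pull[i] += w * (centroid - verts[i])
                total[i] += w
        verts += strength * pull / total[:, None]
    return verts
```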

With the simulation scaffolding ready, the first actual step in creating the land mass is to define its boundaries. The system allows a user to input a shape, in case you were looking for that heart-shaped continent, but if no shape is provided a simple multiresolution fractal is used. This is a fairly simple stage, where vertices are classified as "in" or "out". The result is the continent shoreline:

Image

Once we have this, we can compute a very important bit of information that will be used over and over later during the generation: the distance to the shoreline. This is fairly quick to compute thanks to the fact that we operate in mesh space. For those triangle edges that cross the shoreline we set the distance to zero; for edges connected to these the distance is +1, and so on. It is trivial to produce a signed distance if you add for edges on the mainland and subtract for edges in the ocean.
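This is a textbook breadth-first traversal over the mesh graph; a sketch assuming adjacency lists and a land/ocean flag per vertex:

```python
from collections import deque

def signed_shore_distance(vertices, neighbors, is_land):
    """Breadth-first distance (in edge steps) from the shoreline.
    neighbors[v] lists vertices sharing an edge with v; is_land[v] is bool.
    Positive on the mainland, negative in the ocean, 0 at the shore."""
    dist = {}
    frontier = deque()
    for v in vertices:
        # shoreline vertices: any incident edge crosses land/ocean
        if any(is_land[v] != is_land[n] for n in neighbors[v]):
            dist[v] = 0
            frontier.append(v)
    while frontier:
        v = frontier.popleft()
        for n in neighbors[v]:
            if n not in dist:
                dist[n] = dist[v] + 1
                frontier.append(n)
    return {v: (d if is_land[v] else -d) for v, d in dist.items()}
```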

It is time to add some mountain ranges. A naive approach would be to use the distance to shore to raise the ground, but this would be very unrealistic. If you look at some of the most spectacular mountain ranges on Earth, they happen pretty close to coastlines. What is going on there?

It is the interaction of tectonic plates that has produced most of the mountain ranges that have captured our imagination. This process is called orogeny, and there are basically two flavors of it, accounting for most mountains on Earth. The first is when two plates collide and raise the ground. This is what gave us the Himalayas. The second is when the oceanic crust (which is a thinner, New-York-pizza-style crust) sinks below the thicker continental crust. This raises the continental crust, producing mountains like the Rockies and the Andes. Both processes are necessary if you want a desirable distribution of mountains in your synthetic world.

Since we already have the shape of the continental land, it is safe to assume it is part of a plate that originated some time before. Moreover, we can assume we are looking at more than one continental plate. This is what you see when you look at northern India: even though it is all a single land mass, three plates meet there: the Arabian, Indian, and Eurasian plates.

Picking points fairly inland, we can create fault lines going from these points to the map edge. Again, this works in mesh space, so it is fairly quick, and the results have the rugged nature we initially imprinted into the mesh:
Image

Contrary to what you may think, this is not a pathfinding algorithm. This is your good old midpoint displacement in action. We start with a single segment spanning from the fault source to the edge of the map. This segment, and each subsequent segment, is refined by adding a point in the middle. This point is shifted along a vector perpendicular to the segment by a random amount. It is fairly quick to know which triangles are crossed by the segments, so the fault can be incorporated into the simulation mesh.
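A compact version of that refinement, under the assumption that the fault is a 2D polyline over the map:

```python
import random

def fault_line(start, end, levels=8, roughness=0.5, seed=7):
    """Midpoint displacement: refine the segment start-end by inserting
    a midpoint shifted along the perpendicular by a random amount.
    Returns the fault as a polyline of (x, y) points."""
    rng = random.Random(seed)
    pts = [start, end]
    for level in range(levels):
        amp = roughness * (0.5 ** level)  # displacement shrinks each level
        out = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            dx, dy = x1 - x0, y1 - y0
            length = (dx * dx + dy * dy) ** 0.5
            shift = rng.uniform(-amp, amp) * length
            # displace the midpoint along the segment's perpendicular
            out.append((mx - dy / length * shift, my + dx / length * shift))
            out.append((x1, y1))
        pts = out
    return pts
```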

In this particular case the operation has created three main plates, but we are still missing the oceanic plates. These occur a bit randomly, as not every shoreline corresponds to an oceanic plate. We simulated their occurrence by doing vertex flood fills from selected corners of the map. Here you can see the final set of plates for the continent:

Image

The mere existence of plates is not enough to create mountain ranges. They have to move and collide. To each plate we assign a movement vector. This encodes not only the direction, but also the speed at which the plate is moving:

Image

Based on these vectors we can compute the pressure on each vertex and decide how much it should be raised or lowered, resulting in the base elevation map for the continent:

Image
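The post does not detail the pressure formula; one plausible reading is that, at each mesh edge crossing a plate boundary, the relative plate motion projected onto the across-boundary direction gives a closing speed: positive raises the ground, negative lowers it. A hedged sketch under those assumptions:

```python
import numpy as np

def boundary_pressure(edge, plate_of, movement, positions):
    """Pressure contribution of a mesh edge that crosses a plate boundary.
    plate_of[v] -> plate id, movement[p] -> 2D velocity, positions[v] -> 2D.
    Positive: plates converge (raise ground); negative: diverge (lower it).
    A sketch, not Voxel Farm's actual computation."""
    a, b = edge
    pa, pb = plate_of[a], plate_of[b]
    normal = positions[b] - positions[a]      # across-boundary direction
    normal = normal / np.linalg.norm(normal)
    relative = movement[pa] - movement[pb]    # how plate a moves relative to b
    return float(np.dot(relative, normal))    # closing speed along the boundary
```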

All the mountains happened to occur on the south side of the continent. You can see why: the blue plate is drifting away from the mainland; otherwise we would have had a very different outcome. This will be an interesting place anyway. While the gray-scale image does not show it, the ground where the blue plate begins sinks considerably, creating a massive continent-wide ravine.

Getting the continent shape and mountain ranges is only half the story. Next comes how temperature, humidity, and finally biomes are computed. Stay tuned for part two!


Sunday, May 31, 2015

Evolution of Procedural

This post is mostly about how I feel about procedural generation and where we will be going next.

A couple of months ago we released Voxel Farm and Voxel Studio. Many of you have already played with them, and we often get the same questions: why so much focus on artist input? What happened to building entire worlds with the click of a button?

The short answer is "classic" real time procedural generation is bad and should be avoided, but if you stop reading here you will get the wrong idea, so I really encourage you to get to my final point.

While you can customize our engine to produce any procedural object you may think of, our current out-of-the-box components favor artist-generated content. This is what our current workflow looks like:

Image

It can be read like this: The world is the result of multiple user-supplied data layers which are shuffled together. Variety is introduced at the mixer level, so there is a significant procedural aspect to this, but at the same time the final output is limited to combinations of the samples provided by a human as input files.

This approach is fast enough to allow real time generation, and at the same time it can produce interesting and varied enough results to keep humans interested for a while. The output can be incredibly good, in fact as good as the talent of the human who created the input files. But here is also the problem. You still need to provide reasonably good input to the system.

This is the first piece of bad news. Procedural generation is not a replacement for talent. If you lack artistic talent, procedural generation is not likely to help much. You can amplify an initial set into a much larger set, but you can't turn a bad set into a good one. A microphone won't make you a good singer.

The second batch of bad news is the one you should worry about: procedural generation has a cost. I have posted about this before. You cannot make something out of nothing. Entropy matters. It takes serious effort for a team of human creators to come up with interesting scenes. In the same way, you must pay a similar price (in energy or time) when you synthesize something from scratch. As a rule of thumb, the complexity of the system you can generate is proportional to the amount of energy you spend. If you think you can generate entire planets on-the-fly on a console or PC, you are in for some serious disappointment.

I will be very specific about this. Procedural content based on local mathematical functions like Perlin, Voronoi, etc. cannot hold our interest and is guaranteed to produce boring content. Procedural content based on simulation and AI can rival nature and what humans create, but it is not fast enough for real time generation.

Is there any point in pursuing real time procedural generation? Absolutely, but you have to take it for what it is. As I see it, the only available path at this point is to have large sets of interesting content that can be used to produce even larger sets, but there are limits for how much you can grow the content before it becomes predictable.

For real time generation, our goal will be to help people produce better base content sets. These could be produced on-the-fly, assuming the application allows some time and energy for it. 

Here is where we are going next:

Image

Hopefully that explains why we chose to start with a system that can closely obey the instructions of an artist. Our goal is to replace human work by automating the artist, not by automating the subject. We have not taken a detour; I believe this is the only way.

Wednesday, November 26, 2014

The Missing Dimension

I believe when you combine voxels with procedural generation you get something that goes well beyond the sum of these two parts. You can be very successful at any of these two in isolation, but it is when you mix them that you open up a whole set of possibilities. I came to this realization only recently.

I was watching a TV series the other night. Actors were filmed against a green screen and the whole fantasy environment was computer generated. I noticed something about the ruins in this place. The damage was clearly done by an artist's hand. Look at the red arrows:

Image

The way bricks are broken (left arrow) reminds me more of careful chisel work than anything else. The rubble (right arrow) is carefully arranged and placed around the floor. Also, we should see smaller fragments of rock and dust.

While the artists were clearly talented, it seems they did not have the budget to create physically plausible damage by hand. The problem with the series environment was not that it was computer generated. It wasn't computer generated enough.

Consider physically-based rendering. It is used everywhere now, but there was a time when artists had to solve the illumination problem by hand. Computing photons is no different than computing rolling stones. You may call it procedural generation when it is about stones, and rendering when it is photons, but these are the same thing.

As we move forward, I see physically based generation becoming a thing. But there is a problem. Until now we have been too focused on rendering. Most virtual worlds (like game scenes) are described only as a surface. You cannot perform physically based generation in a world that is only a surface. We are missing the inner dimension.

Our world is 4D. This is not your usual "time is the fourth dimension" pickup line. The fourth dimension is the what, like when you ask what's inside a box. Rendering was focused mostly on where the what turns from air into solid, which is a 3D surface. While 3D is good enough for physically based rendering, we need 4D for a physically plausible world.

Is it bad that we are not 4D? In games this translates to static worlds, or scripted destruction at best. You may be holding the most powerful weapon in the universe, but it won't make a dent in the floor. It shows everywhere as poor art: implausible placement of rocks, snow, debris, and damage, and also as lack of detail in much larger features like cities, castles, and landscape.

If you want worlds that can be changed by their inhabitants, or if you want to generate content by simulation, you need to know your world as a volumetric entity. Voxels are a very simple way to achieve this.

Going 4D with your content is a bit of a problem. Many of the assets you may already have will not work. Not every mesh defines a volume. Often, meshes have holes in them. They do not show because they are hidden by other parts of the object. These are not holes like the center of a doughnut; it is a cut in the mesh that makes it just a surface in 3D space, not a closed volume.

Take a look at the following asset:
Image

The stem of this mushroom is not volumetric: it is missing its cap. This does not show because the top of the mushroom is sunk into the stem and the hole is completely hidden from sight. If you tried to voxelize this stem, you would get unpredictable results. The hole is a singularity to the voxelization; it may produce all sorts of artifacts.

We have voxelization that can deal with this. If you voxelize the top and bottom together, the algorithm is robust enough to realize the hole is capped by other pieces. But we just got lucky in this case; the same does not apply to every open mesh.

Even if you get meshes that are closed and topologically correct, you are only describing a surface. What happens when you scratch the surface? If I cut the mushroom with a knife, it should reveal some sort of mushy, moist material. Where is this information coming from? Whoever is creating this asset has to put it there. The same applies to the bricks, rocks, plants, even living beings of your virtual world.

I think we have reached a turning point. Virtual worlds will remain static and very expensive to build unless we can make physically correct decisions about the objects in them. Either to destroy them or to enhance them, we need to know what they are made of, what is inside.

Saturday, March 8, 2014

Procedural Information

Information is the measure of what you do not know. 

You just looked at your watch and realized it is 3:00 PM, then someone comes into your office and tells you it is 3:00 PM. The amount of information the person gave you amounts to zero. You already knew that. That person did give you data, but data is not necessarily information.

Information is measured in bits, bytes, etc. If you ask someone, "Is it raining out there?", the answer will be one bit worth of information, no matter what the weather looks like.

You are now looking at a photo of a real lake on your computer screen:

Image


Let's imagine it is the first time you see this photo. This is information to you, but how many bits of it? You could check the file size, which is already in bytes. It turns out it is a BMP file and it is 300 KBytes. Did you just receive 300 KBytes through your eyes? Somehow this seems suspicious to you. You know that if the file was compressed as a PNG, the file size would be a lot less, probably around 90 KBytes, with no visual degradation. So which is it, 300 or 90 KBytes that you just saw? Nobody can tell you the right amount. Your eyes, brain, and psyche are still mysterious objects to modern science. But whatever it is, it will be closer to 90 than 300. The PNG compression took out a lot of bits that were not really information. Compression algorithms reshuffle data in ways that make redundancy evident. Then they take it out. It is like having someone else stop that person before entering your office to announce it was 3:00 PM.

How is this related to procedural generation? Now imagine I have sent you a little EXE file. It is only 300 KBytes. When you run it, it turns out to be a game. You see terrain, trees, buildings. There are some creatures that want you dead. You learn to hate them back; you fight them everywhere you go. You find it amusing that even if you keep walking, this world appears to never end. You play for days, weeks. Eventually you realize the game's world is infinite, it has no limit. All this was contained in 300K, yet the information coming out of it appears to be infinite. How is this possible?

You are being tricked. You are not getting infinite information; it is all redundant. The information was the initial 300 KBytes. You have been listening to echoes believing someone was talking to you. This is a hallmark of procedural generation: a trick of mirrors that produces interesting effects, like a kaleidoscope. A successful procedural generator deceives you into thinking you are getting information when you are not. That is hard to achieve. In the same way we love information, we dislike redundancy. It wastes space and time; it does nothing for us. Our brains are very good at discovering it, and we adapt quickly to see through any new trick.

Now, does this mean software cannot create information? There is energy going into this system; can it be used for more than powering infinite echoes? This is one of the big questions out there. It is beyond software. Can anyone create information at all? If you look at the lake picture again, you may ask yourself how it came to be. Not the picture, the actual lake. Is it there partly by chance, or because there was no other choice? Its exact shape, location, and size, could they be the inevitable result of a chain of cause-effect events that started when the Universe began? If that is the case, the real lake is not information; it is an echo of a much smaller but powerful universal seed.

The real answer probably does not matter. Even if the lake was an echo of the Big Bang, 42, or some sort of universal seed, the emergent complexity is so high we cannot realize it. Our brains and senses cannot go that far. If you are ready to accept that, then yes, software can create information. The key is simulation. Simulations are special because they acknowledge the existence of time, cause, and effect. You pick a set of rules and a starting state, and you let things unfold from there. If humans are allowed to participate by changing the current state at any point in time, the end results could be very surprising.

The problem with simulation is that it is very expensive. If you keep rules too simple or simulate for very little time, results may not be realistic enough. If you choose the right amount of rules and simulate for the right amount of time, you may realize it would take too long to be practical. When it comes to procedural generation you will find these two big families of techniques. One family is based on deception: it produces no information, but it is fast and cheap. The other family has great potential, but it is expensive and difficult. As a world builder you should play to their strengths and avoid their pitfalls. And what is more interesting: learn how to mix them.

Monday, July 9, 2012

The Uncanny Valley of Procedural Generation

As a developer of procedural worlds, what worries me most is not failing spectacularly at my goals. My biggest fear is to produce something that looks believable, but still is somehow off. Even a seemingly perfect world could be rejected by your subconscious. You may not be able to put it into words, or point your finger at it, but you feel there is something wrong.

To make matters even worse, it seems we can be collectively hypnotized into liking something just because it is a new way of doing things. Soon the novelty wears off and we realize the Emperor was naked all the time. 

We want to believe things look better than they do. For instance, we know 3D graphics is a developing field, so we are ready to forgive a lot, until something better appears and sets a new standard.

Remember this beauty?

Image

This is Peter Gabriel's "Kiss That Frog". You can see the video here. It won an MTV award for special effects in 1994. I remember loving this video. It is hard to watch now.

This is not specific to procedural techniques. Humans are equally able to generate uncanny, ugly things. The problem is that proceduralism makes it a lot easier. The world is not crafted by hand; any aberration in the algorithms will be mindlessly multiplied. It also depends on the degree of realism you want to achieve. If you are trying to fake nature, odds are your creation is some form of monstrosity that will not stand the passage of time.

I often wonder whether any sort of synthetic reality is doomed in the long term. At this point I don't know for sure, but I have two simple ideas to guide me across this maze:

1. Global rather than Local

Procedural methods can be divided into two large families: global methods and local methods. In local methods, content generated for a given point in space does not depend on the content of neighboring points.

The Perlin noise function, for instance, can be evaluated locally. This means the output of the function depends only on the coordinates of the point, plus some constants. It also means many points can be evaluated in parallel. Since they are isolated from their neighbors, you do not need to compute any neighbors before evaluating a single point.
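For instance, a minimal local function (a lattice hash, the kind of building block under Perlin-style noise) makes the locality obvious: the value at a point is a pure function of its coordinates and a seed, so any number of points can be evaluated in parallel. A sketch:

```python
def local_noise(x, y, seed=0):
    """A purely local function: the value at (x, y) depends only on the
    coordinates and a few constants, so every point can be evaluated
    independently. (This is white noise; Perlin adds gradients and
    smooth interpolation on top of the same locality.)"""
    h = (int(x) * 374761393 + int(y) * 668265263 + seed) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h & 0xFFFF) / 65535.0  # deterministic value in [0, 1]
```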

They are blindingly fast, but this speed comes at a cost: they do not have a soul. They do not produce any information. All you see comes from a small seed of values and the specific ways these values are churned and shaken by these clever algorithms.

Because of their speed and fairly good results, they can be used in many subtle places, but they should not be the backbone of your world. Our minds are very good at discovering redundancy. All these methods are like a kaleidoscope effect: they can trick you for a moment, but soon you start seeing the mirrors. And once you have seen them, the magic is gone for good.

Here is your typical multifractal Perlin terrain:

Image

Just don't do it. We all know it is not real.

Another popular local method is L-Systems. That is, when they are used in their vanilla form, as context-free grammars where symbols are replaced with no awareness of their surroundings. If used to produce trees, you soon have branches that intersect each other, or that go in illogical directions.

Here is one gem that illustrates this point:


Image


Global methods, on the other hand, are closer to simulations. The value of a single point may depend on the values of very distant points. Imagine a fluvial erosion filter. A point at the base of a mountain may be largely influenced by a large streak of points uphill.

Global methods are effective because they have cause-effect relationships built into them. This makes all the difference. It brings entropy into the world; it gives it time and history.

Here is an example showing some fluvial erosion. The patterns you see here are far more believable:


Image

In the same fashion, most successful tree and plant generators use some sort of simulation or at least global constraints. For my trees I chose a global method that grows branches in full awareness of each other. It is a very simple algorithm and still beats the results you get from vanilla L-Systems.

The problem with simulations is they are costly to evaluate. There is information moving around so they are harder to compute in parallel. 

Consider this example. The location, shape, and strength of a river will be determined by a water source many kilometers away. A visitor to the virtual world may encounter the river long before the source, but still the source and everything in between must be accounted for. For worlds that are generated on demand as the viewer moves, this may be too much to handle.

Even then, you should always consider using a global method. It may make your solution complex and slower, but you will have something to show.

2. Steal from Mother (Nature)

Nature has already spent a lot of time and energy producing the patterns we accept as real. We already use them to texture 3D models; there is no reason why we couldn't go beyond that.

Here is a very interesting approach to terrain synthesis. It combines elevation samples from real sites on Earth and stitches them together in new ways. This results in fairly believable scenes that can cover huge spaces without any apparent repetition.


Image

In this case they used some samples taken from the Grand Canyon.

A similar technique can be used for smaller terrain features like rocks, cliffs and small boulders. It is possible to have a set of volumetric natural textures and map them over larger terrain features. 

The biggest issue is how to mask the repetition, but this can be done using Wang tiles. I have reworked many of my core functions to work like this, and I like them better than my previous functions. I will be posting some results soon.
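A sketch of the tile-selection side of Wang tiling (my own construction, assuming a complete tile set so a matching candidate always exists): each shared edge gets a deterministic color from a hash, and both neighboring cells hash the same edge, so adjacent tiles always agree.

```python
import random

def edge_color(x, y, horizontal, n_colors, seed=0):
    """Deterministic color for a shared edge of the tile grid.
    Each edge is keyed so that both cells touching it compute
    the same color."""
    h = (x * 92837111) ^ (y * 689287499) ^ (horizontal * 283923481) ^ seed
    return (h & 0x7FFFFFFF) % n_colors

def wang_tile_for(x, y, tiles, n_colors, seed=0):
    """Pick a tile whose four edge colors match the hashed colors of the
    cell's shared edges. tiles: list of dicts with an 'edges' tuple
    (top, bottom, left, right). Assumes at least one tile exists for
    every color combination."""
    want = (edge_color(x, y, 1, n_colors, seed),      # top
            edge_color(x, y + 1, 1, n_colors, seed),  # bottom (= next row's top)
            edge_color(x, y, 0, n_colors, seed),      # left
            edge_color(x + 1, y, 0, n_colors, seed))  # right (= next column's left)
    rng = random.Random(hash((x, y, seed)))
    candidates = [t for t in tiles if t['edges'] == want]
    return rng.choice(candidates)
```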

To sum it up, I think entropy matters. 






Thursday, August 18, 2011

The case for procedural generation

John Carmack recently said procedural techniques are nothing more than a crappy form of compression. He said this while being asked about the Unlimited Detail tech. To be fair, this should be taken only in the context of that question and runtime engines, which is anyway a bit strange since Unlimited Detail is not really using anything procedural; it is mostly tiling instances of unique detail. Still, is he right about that? Is procedural generation just crappy compression?

Others have said that procedural generation in games inevitably leads to large, bland scenarios. Or maddeningly repetitive dungeons and corridors. As they put it, humans can only be engaged by content that was created by other humans.

I actually agree with them. And yes, you are still reading Procedural World.

Despite what we naysayers may believe, procedural techniques have seen a lot of success already. It is not really in your face, not a household name, but it has been instrumental in producing content that otherwise would simply not exist.

Take a look at this image:

Image

Well thanks, it does look like the shiny things I'm trying to do, but this is Pandora from the Avatar movie. Now, do you think this was entirely handcrafted?

Pandora is mostly procedural. They have synthetic noises everywhere and L-systems for plants and ground cover; it is actually a textbook example of procedural techniques. A lot of people went to visit it, and it made heaps of money. If you define success in those terms, this baby takes the cake. And it is thanks to procedural generation. Without it, Pandora would have looked like this:

Image

So clearly proceduralism can help create appealing content. As Carmack says, it is some form of compression. What got compressed in this case was not information over the wire, or over the bus to the graphics card; it was the amount of work artists needed to do. Instead of worrying about every little fern, they could spend their effort somewhere else.

What is wrong with this? Now imagine it was a game and you were marooned in this huge Pandora place. You would probably get bored after a while; besides staying alive, there is not much to do. If there is one thing we humans can do, it is get bored anywhere, all the time.

We need this sort of battle-is-brewing, storm-is-coming thing lurking on the horizon. We need a sense of purpose and destiny, or a mystery for the lack of it. Is this something that only other humans can create for us?

At this point it becomes about Artificial Intelligence. Think of a variation of the Turing Test, where a human player would not be able to tell whether a game was designed by another human or by a computer. This is the real frontier for pure Procedural Content generation.

In the field of physics there are two theories that are very successful in explaining the two halves of the real world: Quantum Physics and General Relativity. They don't get along at all. There is a discontinuity where one theory ends and the other starts.

A similar scenario can be found in procedural generation. On one side you have the Roguelike and Dwarf Fortress type of games, which can generate compelling experiences but lack any visual appeal. These games use text or ASCII screens. It is like never jacking into the Matrix and staring at the floating text forever. Can you see the woman in the red dress?

Image

Then, the moment you attempt to visualize the worlds they describe, the magic dies. You realize it was better when it was all ASCII, which is sad because only weirdos play ASCII games.

The other side is rich visual worlds like Pandora. With the advent of computing as a commodity (the cloud), bigger and richer worlds will become available, even for devices with limited power like mobiles and tablets. But they lack the drama. Even a Pocahontas remix is beyond what they can do.

Triple-A studios have realized this. It is better to use procedural techniques for what they do best: terrain filler, trees, texturing, and leave the creative work to the humans. They do not have a reason to invest in this type of AI; they won't push this front if it is up to them.

I do believe it is possible to marry these two opposed ends. Someone with a unique vision into both fields will be able to come up with a unified theory of procedural generation. And I think it will be the Unlimited Detail guys.



(EDIT: OK, OK. I'm kidding about Unlimited Detail. Their next bogus claim could be they made procedural content generation 100,000 times better. It is surprising to me they still can be taken seriously.)