Texture Synthesis Based Hybridisation for Images and Geometry
Texture synthesis deals with the example-based creation of highly repetitive information. It is an important and widely studied problem in the computer graphics community due to the ever-increasing demand for digital art. Texture synthesis offers a way to automate the creative process so that artists are free to focus on the core aspects of their project, defining a look and style, and are freed from the mundane, tedious task of producing many unique instances of similar, repetitive content. Specifically, we explore the idea of hybridisation: taking multiple unique objects and mixing them to create new instances of those objects which share similar statistical features, both locally and globally, with the input. Hybrids occur in most textures but thus far have not been studied as a texture synthesis problem. In this thesis, hybridisation is identified as a sub-problem of texture synthesis, and hybrids are synthesised independently of the texture they would exist in. The work presented in this thesis takes as input some form of example data (images, geometric curves and meshes) and attempts to automatically imagine what more of it could look like.
Synthesizing Structured Image Hybrids
Example-based texture synthesis algorithms generate novel texture images from example data. A popular hierarchical pixel-based approach uses spatial jitter to introduce diversity, at the risk of breaking coarse structure beyond repair. We propose a multiscale descriptor that enables appearance-space jitter, which retains structure. This idea enables repurposing of existing texture synthesis implementations for a qualitatively different problem statement and class of inputs: generating hybrids of structured images.
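The contrast between spatial jitter and appearance-space jitter can be sketched as follows: rather than perturbing pixel coordinates (which can tear coarse structure), each pixel's multiscale descriptor is perturbed and then snapped back to the nearest exemplar descriptor, so the result still looks like a real exemplar neighborhood. This is a minimal illustrative sketch under assumed array shapes, not the paper's implementation:

```python
import numpy as np

def appearance_jitter(coords, exemplar_desc, strength, rng):
    """Jitter synthesis coordinates in appearance space rather than
    image space: perturb each pixel's descriptor, then snap to the
    exemplar pixel whose descriptor is nearest.

    coords        : (n,) int indices into the exemplar
    exemplar_desc : (m, d) per-pixel multiscale descriptors
    strength      : jitter magnitude (0 = no jitter)
    """
    descs = exemplar_desc[coords]                       # (n, d) current descriptors
    noisy = descs + strength * rng.standard_normal(descs.shape)
    # Snap each noisy descriptor to the nearest exemplar descriptor.
    d2 = ((noisy[:, None, :] - exemplar_desc[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                            # new exemplar indices
```

With `strength = 0` the coordinates pass through unchanged; increasing the strength mixes in plausible look-alike pixels, which is what lets hybrids form without breaking structure.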
Multiscale Texture Synthesis
Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.
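The exemplar-graph idea can be sketched as a small directed graph where an edge means "zooming into this exemplar reveals that one," and a cycle makes arbitrarily deep zooms possible. This is an illustrative sketch only (node names are hypothetical), not the paper's data structure:

```python
class ExemplarGraph:
    """Directed graph of low-resolution exemplars; an edge A -> B means
    'zooming into A reveals B'. A cycle lets the zoom continue forever."""

    def __init__(self):
        self.edges = {}                  # exemplar name -> finer-scale names

    def add_edge(self, coarse, fine):
        self.edges.setdefault(coarse, []).append(fine)

    def zoom_path(self, start, depth):
        """Walk `depth` zoom levels, following the first edge at each node;
        cycles in the graph make arbitrarily deep paths possible."""
        path, node = [start], start
        for _ in range(depth):
            nxt = self.edges.get(node)
            if not nxt:
                break                    # leaf: no finer exemplar available
            node = nxt[0]
            path.append(node)
        return path
```

A two-node loop (e.g. continent ↔ coastline) already yields an unbounded zoom path, which is the property a single exemplar can never provide.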
Rendering 3D Volumes Using Per-Pixel Displacement Mapping
Rendering 3D Volumes Using Per-Pixel Displacement Mapping offers a simple and practical solution to the problem of seamlessly integrating many highly detailed 3D objects into a scene without the need to render large sets of polygons or introduce the overhead of an obtrusive scene-graph. This work takes advantage of modern programmable GPUs as well as recent related research in the area of per-pixel displacement mapping to achieve view-independent, fully 3D rendering with per-pixel level of detail. To achieve this, a box is used to bound texture-defined volumes. The box acts as the surface onto which the volume is drawn. By computing a viewing ray from the camera to a point on the box and using that point as the ray origin, the correct intersection with the texture volume can be found using various per-pixel displacement mapping techniques. Once the correct intersection is found, the final color value for the corresponding point on the box can be computed. The method outperforms previous approaches in terms of speed, quality and flexibility.
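The per-pixel ray march described above can be sketched on the CPU: take the point where the viewing ray enters the bounding box, step the ray through the texture-defined volume, and return the first non-empty voxel. This is an illustrative sketch of the idea in Python, not the paper's GPU shader; the array layout and names are assumptions:

```python
import numpy as np

def render_volume_pixel(volume, entry, direction, steps=64):
    """March a ray through a 3D texture volume bounded by a unit box.
    `entry` is the point where the viewing ray hits the bounding box,
    `direction` the ray direction (both in [0,1) texture space).
    Returns the first non-empty voxel value hit, or 0.0 for a miss."""
    p = np.asarray(entry, float)
    d = np.asarray(direction, float)
    n = volume.shape[0]
    for _ in range(steps):
        if np.any(p < 0.0) or np.any(p >= 1.0):
            return 0.0                      # ray left the box: background
        i, j, k = (p * n).astype(int)
        if volume[i, j, k] > 0.0:
            return volume[i, j, k]          # hit: shade this voxel
        p = p + d / steps                   # fixed-size step along the ray
    return 0.0
```

On the GPU the same loop runs per fragment, with the box rasterized as ordinary geometry and the volume sampled from a 3D texture.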
Nvidia GPU Gems 3
True Impostors takes advantage of the latest graphics hardware to achieve an accurate geometric representation, updated every frame, for an arbitrary viewing angle. It builds on Relief Mapping of Non-Height-Field Surfaces and extends the goals of traditional per-pixel displacement mapping techniques to arbitrary surfaces by generating whole 3D objects on a billboard. True Impostors distinguishes itself from previous per-pixel displacement mapping impostor methods by imposing no viewing restrictions.
True Impostors is featured as Chapter 21 in Nvidia's latest incarnation of the popular GPU Gems series. Due to the various legal-ish-looking forms I signed for Nvidia, I don't think I have the rights to post the paper or even put up any of the images from the chapter. Pretty sad, I know, but hey, sadness is like happiness for deep people, so stop being shallow and cheer up!
Oh yeah, and buy the book!
Faster Relief Mapping Using the Secant Method
Eric Risser, Musawir A. Shah, Sumanta Pattanaik
Journal of Graphics Tools: Volume 12, Number 3
Faster Relief Mapping Using the Secant Method offers an efficient method for adding per-pixel, height-field-based displacement to an arbitrary polygonal mesh in real time. The technique uses an interval-based method in which the bounds of the interval are computed at the outset and refined at every iteration until the intersection point is reached. The search space defined by the interval shrinks and converges rapidly on the intersection point, outperforming the currently popular binary-search-based method (relief mapping) used for this task. We compute the bounds using a simple ray–segment intersection test. We demonstrate the algorithm and show empirical and explicit evidence of the speedup.
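The interval refinement can be illustrated with a standard secant update on f(t) = ray_depth(t) − height(t), keeping a bracketing interval around the intersection so each iteration tightens one bound. This is a minimal sketch of the secant idea, not the paper's shader code; the function names are assumptions:

```python
def secant_intersect(height, ray_depth, t0, t1, iters=8):
    """Find t where ray_depth(t) == height(t) via secant refinement.
    f(t) = ray_depth(t) - height(t); assumes f(t0) < 0 <= f(t1),
    i.e. the interval [t0, t1] brackets the intersection."""
    f0 = ray_depth(t0) - height(t0)
    f1 = ray_depth(t1) - height(t1)
    for _ in range(iters):
        if f1 == f0:
            break                              # degenerate secant: stop
        # Secant step: intersect the chord through (t0,f0),(t1,f1) with f=0.
        t = t1 - f1 * (t1 - t0) / (f1 - f0)
        f = ray_depth(t) - height(t)
        if f < 0:
            t0, f0 = t, f                      # still above surface: raise lower bound
        else:
            t1, f1 = t, f                      # at/below surface: lower upper bound
    return t1
```

Because the secant step uses the height values at both bounds instead of just bisecting the interval, it typically lands on the intersection in far fewer iterations than a binary search, which is where the speedup over relief mapping comes from.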
Game Developers Conference 2008
20-minute lecture
Game Developers Conference 2007
Symposium on Interactive 3D Graphics and Games 2006
Grass Mapping uses off-screen render targets to generate, on the fly, textures that represent grass from the camera's current viewing direction. Each generated texture is then recycled over a vast terrain. The technique does suffer from parallax inaccuracies and only produces a reasonable grass rendering approximation for very short grass.
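The recycling step can be sketched as a small cache keyed by a quantized view direction: one off-screen render per direction bin, reused for every terrain patch seen from that bin. This is an illustrative sketch only; `render_grass` stands in for a hypothetical off-screen render pass and is not part of the original technique's API:

```python
import math

class GrassImpostorCache:
    """Cache of grass textures rendered per quantized view direction,
    so one off-screen render can be recycled across the whole terrain.
    Illustrative sketch: `render_grass(bin)` is a hypothetical renderer."""

    def __init__(self, render_grass, bins=16):
        self.render_grass = render_grass
        self.bins = bins
        self.cache = {}

    def texture_for(self, view_angle):
        # Quantize the viewing angle into one of `bins` direction buckets.
        key = int(view_angle / (2 * math.pi) * self.bins) % self.bins
        if key not in self.cache:
            # Re-render only when the camera crosses into a new bucket.
            self.cache[key] = self.render_grass(key)
        return self.cache[key]
```

The coarser the binning, the fewer renders per frame but the larger the parallax error, which matches the limitation noted above: the approximation only holds up for very short grass.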