We demonstrate a new approach for designing functional material
definitions for multi-material fabrication using our system called Foundry.
Foundry provides an interactive and visual process for hierarchically
designing spatially-varying material properties (e.g.,
appearance, mechanical, optical). The resulting
meta-materials exhibit structure at the micro and macro level and
can surpass the qualities of traditional composites. The material
definitions are created by composing a set of operators into an operator graph.
Each operator performs a volume decomposition operation,
remaps space, or constructs and assigns a material composition.
The operators are implemented using a domain-specific language for
multi-material fabrication; users can easily extend the
library by writing their own operators. Foundry can be used to
build operator graphs that describe complex, parameterized,
resolution-independent, and reusable material definitions.
We also describe how staging the evaluation of the final material,
in conjunction with progressive refinement, allows for
interactive material evaluation even for complex designs.
We show sophisticated and functional parts designed with our system.
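The operator-graph idea can be illustrated with a minimal sketch (the names and structure here are hypothetical, not Foundry's actual DSL): each operator maps a point in space to a material mixture, while decomposition and remap operators wrap other operators, so composed graphs stay resolution-independent functions of continuous space.

```python
# Hypothetical sketch of material-definition operators composed into a graph.
# An operator maps a query point to a material mixture (material -> weight);
# decomposition and remap operators wrap other operators.

def solid(material):
    """Assign a single material everywhere."""
    return lambda p: {material: 1.0}

def split_x(threshold, left_op, right_op):
    """Volume decomposition: choose a sub-operator by x coordinate."""
    return lambda p: left_op(p) if p[0] < threshold else right_op(p)

def scale(factor, op):
    """Remap space before evaluating the wrapped operator."""
    return lambda p: op(tuple(c / factor for c in p))

# Compose a tiny graph: a scaled two-material split.
definition = scale(2.0, split_x(0.5, solid("rigid"), solid("soft")))

assert definition((0.4, 0.0, 0.0)) == {"rigid": 1.0}  # 0.4/2 = 0.2 < 0.5
assert definition((1.5, 0.0, 0.0)) == {"soft": 1.0}   # 1.5/2 = 0.75 >= 0.5
```

Because the definition is a pure function of continuous points, it can be sampled at any printer resolution, which mirrors the resolution-independence claim above.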
3D printing hardware is rapidly scaling up to output continuous mixtures of
multiple materials at increasing resolution over ever larger print volumes.
This poses an enormous computational challenge:
large high-resolution prints comprise trillions of voxels and petabytes of
data and simply modeling and describing the input with spatially varying
material mixtures at this scale is challenging.
Existing 3D printing software is insufficient;
in particular, most software is designed to support only a few million
primitives, with discrete material choices per object.
We present OpenFab, a programmable pipeline for synthesis of multi-material 3D printed objects
that is inspired by RenderMan and modern GPU pipelines.
The pipeline supports procedural evaluation of geometric detail and material
composition, using shader-like fablets, allowing models to be specified
easily and efficiently.
We describe a streaming architecture for OpenFab; only
a small fraction of the final volume is stored
in memory and output is fed to the printer with little startup delay.
We demonstrate it on a variety of multi-material objects.
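The streaming idea can be sketched as follows (hypothetical names, not OpenFab's actual API): a procedural "fablet" is evaluated per voxel, one z-slab at a time, so only a small fraction of the volume is ever resident before being emitted to the printer.

```python
def fablet(x, y, z):
    """Hypothetical fablet: procedural material composition at a point.
    Returns the mixture ratio of material A (the rest is material B)."""
    return 1.0 if (int(x * 10) + int(y * 10) + int(z * 10)) % 2 == 0 else 0.0

def stream_slabs(res, emit):
    """Evaluate the volume one z-slab at a time; only res*res values are
    ever held in memory before being handed to the emit callback."""
    for zi in range(res):
        slab = [[fablet(xi / res, yi / res, zi / res)
                 for xi in range(res)] for yi in range(res)]
        emit(zi, slab)

emitted = []
stream_slabs(4, lambda zi, slab: emitted.append((zi, slab)))
assert len(emitted) == 4
assert all(len(slab) == 4 and len(slab[0]) == 4 for _, slab in emitted)
```

At print scale the same structure bounds memory to one slab regardless of total volume, and the first slab can be emitted with essentially no startup delay.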
We present surface-based anti-aliasing (SBAA), a new approach to real-time
anti-aliasing for deferred renderers that improves
the performance and lowers the memory requirements for anti-aliasing
methods that sample sub-pixel visibility. We introduce a novel way of
decoupling visibility determination from shading that, compared to
previous multi-sampling based approaches, significantly reduces the number
of samples stored and shaded per pixel. Unlike post-process anti-aliasing techniques used
in conjunction with deferred renderers, SBAA correctly resolves
visibility of sub-pixel features, minimizing spatial and temporal artifacts.
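A minimal sketch of the decoupling idea (illustrative only, not the paper's exact algorithm): many sub-pixel visibility samples are merged into one shading evaluation per unique visible surface, weighted by coverage, so the shade count per pixel stays far below the visibility sample count.

```python
def resolve_pixel(samples, shade):
    """Merge sub-pixel visibility samples (surface IDs) into one shading
    evaluation per unique surface, weighted by coverage."""
    coverage = {}
    for sid in samples:
        coverage[sid] = coverage.get(sid, 0) + 1
    n = len(samples)
    return sum(shade(sid) * count / n for sid, count in coverage.items())

# 8 visibility samples but only 2 surfaces -> only 2 shade() calls.
calls = []
def shade(sid):
    calls.append(sid)
    return {1: 1.0, 2: 0.0}[sid]

color = resolve_pixel([1, 1, 1, 1, 1, 1, 2, 2], shade)
assert abs(color - 0.75) < 1e-9   # 6/8 of surface 1, 2/8 of surface 2
assert sorted(calls) == [1, 2]    # shaded once per surface, not per sample
```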
We introduce adaptive volumetric shadow maps (AVSM), a real-time shadow
algorithm that supports high-quality shadowing from dynamic volumetric
media such as hair and smoke.
The key contribution of AVSM is the introduction of a streaming
simplification algorithm that generates an accurate volumetric
light attenuation function using a small fixed memory footprint.
This compression strategy leads to high performance because the
visibility data can remain in on-chip memory during simplification and can be
efficiently sampled during rendering.
We demonstrate that AVSM compression closely approximates the ground-truth
solution and performs competitively with existing real-time rendering
techniques while providing higher-quality volumetric shadows.
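The fixed-footprint compression can be sketched in miniature (a simplified greedy variant, not AVSM's exact streaming update): the light attenuation curve is kept as a small set of piecewise-linear nodes, and when over budget the interior node whose removal perturbs the curve's integral least is dropped.

```python
def area(nodes):
    """Trapezoidal area under a piecewise-linear (depth, transmittance) curve."""
    return sum((d1 - d0) * (t0 + t1) / 2
               for (d0, t0), (d1, t1) in zip(nodes, nodes[1:]))

def compress(nodes, max_nodes):
    """Greedy simplification in the spirit of AVSM: while over budget,
    drop the interior node whose removal changes the area the least."""
    nodes = sorted(nodes)
    while len(nodes) > max_nodes:
        best_i, best_err = None, float("inf")
        for i in range(1, len(nodes) - 1):
            trial = nodes[:i] + nodes[i + 1:]
            err = abs(area(nodes) - area(trial))
            if err < best_err:
                best_i, best_err = i, err
        nodes.pop(best_i)
    return nodes

curve = [(0.0, 1.0), (1.0, 0.9), (2.0, 0.5), (3.0, 0.45), (4.0, 0.1)]
small = compress(curve, 4)
assert len(small) == 4
assert small[0] == (0.0, 1.0) and small[-1] == (4.0, 0.1)  # endpoints kept
```

A fixed node budget is what lets the real algorithm keep visibility data in on-chip memory during simplification.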
In computer cinematography, the process of lighting design
involves placing and configuring lights to define the visual
appearance of environments and to enhance story elements. This
process is labor intensive and time consuming, primarily
because lighting artists receive poor feedback from existing
tools: interactive previews have very poor quality, while
final-quality images often take hours to render.
This paper presents an interactive cinematic lighting system
used in the production of computer-animated feature films
containing environments of very high complexity, in which
surface and light appearances are described using procedural
RenderMan shaders. Our system provides lighting artists with
high-quality previews at interactive framerates with only small
approximations compared to the final rendered images. This is
accomplished by combining numerical estimation of surface
response, image-space caching, deferred shading, and the
computational power of modern graphics hardware.
Our system has been successfully used in the production of two
feature-length animated films, dramatically accelerating
lighting tasks. In our experience interactivity fundamentally
changes an artist's workflow, improving both productivity and quality.
Fabio Pellacini and Kiril Vidimce. In Randima Fernando, ed., GPU Gems, Addison-Wesley, 2004.
In this chapter, we present a simplified implementation of uberlight, a light shader that expresses the lighting model described by Ronen Barzel (1997, 1999). A superset of this model was developed over several years at Pixar Animation Studios and used for the production of animated movies such as the Walt Disney presentations of the Pixar Animation Studios films Toy Story, A Bug's Life, Monsters, Inc., and Finding Nemo. Our Cg implementation is based on Barzel's original approach and on the RenderMan Shading Language implementation written by Larry Gritz (Barzel 1999). Further details about this lighting approach and its uses in movie production can be found in Apodaca and Gritz 1999 and Birn 2000.
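The distance-shaping portion of such a light shader can be sketched in a few lines (a simplification in Python rather than Cg or RenderMan Shading Language; the parameter names are illustrative, not the chapter's exact interface): intensity ramps up between two near edges, holds at full strength, then ramps down between two far edges.

```python
def smoothstep(a, b, x):
    """Hermite ramp from 0 at x<=a to 1 at x>=b, as in shading languages."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    t = (x - a) / (b - a)
    return t * t * (3 - 2 * t)

def uberlight_falloff(dist, near_edge, near_full, far_full, far_edge):
    """Distance shaping in the spirit of uberlight: ramp up between the
    near edges, full intensity in the middle, ramp down past the far edges."""
    return smoothstep(near_edge, near_full, dist) * \
        (1.0 - smoothstep(far_full, far_edge, dist))

assert uberlight_falloff(0.5, 1, 2, 8, 10) == 0.0  # before the near edge
assert uberlight_falloff(5.0, 1, 2, 8, 10) == 1.0  # in the full-strength band
assert uberlight_falloff(12., 1, 2, 8, 10) == 0.0  # past the far edge
```

The full uberlight model applies the same edge-shaping idea to cone angles and barn-door planes as well as distance.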
Real-time graphics hardware continues to offer improved
resources for programmable vertex and fragment shaders.
However, shader programmers continue to write shaders
that require more resources than are available in the
hardware. One way to virtualize the resources necessary
to run complex shaders is to partition the shaders into
multiple rendering passes. This problem, called the
"Multi-Pass Partitioning Problem" (MPP), and a solution
for the problem, Recursive Dominator Split (RDS), have
been presented by Eric Chan et al. The O(n³) RDS algorithm
and its heuristic-based O(n²) cousin, RDSh, are robust
in that they can efficiently partition shaders for many
architectures with varying resources. However, RDS's
high runtime cost and inability to handle multiple
outputs per pass make it less desirable for real-time
use on today's latest graphics hardware. This paper
redefines the MPP as a scheduling problem and uses
scheduling algorithms that allow incremental resource
estimation and pass computation in O(n log n) time. Our
scheduling algorithm, Mio, is experimentally compared
to RDS and shown to have better run-time scaling and
produce comparable partitions for emerging hardware.
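The scheduling view of the problem can be sketched with a simple list scheduler (an illustration of the framing, not Mio's actual algorithm or its incremental resource estimation): shader operations are packed into passes in dependency order, and a new pass begins whenever the per-pass resource budget would be exceeded.

```python
def partition(ops, deps, budget):
    """Greedy list-scheduling sketch of multi-pass partitioning.
    ops:  {name: resource cost}; deps: {name: [prerequisite names]}.
    An op is schedulable once all prerequisites ran in EARLIER passes."""
    done, passes = set(), []
    while len(done) < len(ops):
        current, used = [], 0
        for op, cost in ops.items():
            if op in done:
                continue
            if all(d in done for d in deps.get(op, [])) and used + cost <= budget:
                current.append(op)
                used += cost
        if not current:
            raise ValueError("budget too small or cyclic dependencies")
        done.update(current)  # updated only after the pass closes
        passes.append(current)
    return passes

# c depends on a and b; budget of 2 forces c into a second pass.
assert partition({"a": 1, "b": 1, "c": 2}, {"c": ["a", "b"]}, 2) == \
    [["a", "b"], ["c"]]
```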
Igor Guskov, Kiril Vidimče, Wim Sweldens, Peter Schroeder.
Proceedings of SIGGRAPH 2000 (ACM Transactions on Graphics).
Normal meshes are new fundamental surface descriptions inspired
by differential geometry. A normal mesh is a multiresolution mesh
where each level can be written as a normal offset from a coarser
version. Hence the mesh can be stored with a single float per vertex.
We present an algorithm to approximate any surface arbitrarily
closely with a normal semi-regular mesh. Normal meshes can be
useful in numerous applications such as compression, filtering,
rendering, texturing, and modeling.
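The single-float-per-vertex claim is easy to see in a 2D curve analogue (a sketch of the principle, not the paper's surface algorithm): each refinement vertex is the edge midpoint plus one scalar offset along the edge normal, so only that scalar needs to be stored.

```python
def midpoint_normal(p0, p1):
    """Unit normal of segment p0->p1 (2D curve analogue of a mesh normal)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = (dx * dx + dy * dy) ** 0.5
    return (-dy / length, dx / length)

def refine(coarse, offsets):
    """One normal-offset refinement step: each new vertex is the edge
    midpoint plus a SINGLE float times the edge normal."""
    fine = []
    for (p0, p1), t in zip(zip(coarse, coarse[1:]), offsets):
        nx, ny = midpoint_normal(p0, p1)
        mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
        fine += [p0, (mx + t * nx, my + t * ny)]
    fine.append(coarse[-1])
    return fine

coarse = [(0.0, 0.0), (2.0, 0.0)]
fine = refine(coarse, [0.5])  # one float reconstructs the new vertex
assert fine == [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
```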
We describe a 3D graphical interaction tool called
an amplification widget that allows a user to control the
position or orientation of an object at multiple scales.
Fine and coarse adjustments are available within a single tool, which gives
visual feedback to indicate the level of resolution being applied.
Amplification widgets have
been included in instructional modules of The Optics
Project, designed to supplement undergraduate physics
courses. The user evaluation is being developed by the
Institute of the Mid-South Educational Research Association under the sponsorship of a 2-year grant from the
National Science Foundation.
The Optics Project (TOP) is a 3D interactive computer graphics system that visualizes optical phenomena. The primary motivation for creating TOP was to develop an educational aid for students studying undergraduate optics. TOP runs on SGI workstations and PCs. We have developed a Web-based version of the system that encompasses a physical simulation, an overview of the theory involved, a showcase of examples, and a set of suggested exercises. The actual simulation is implemented using VRML, Java, and the External Authoring Interface. This work is significant in that it represents, to our knowledge, the first complete 3D interactive optics system on the Web.
Modeling the Digital Earth in VRML,
Martin Reddy, Yvan G. Leclerc, Lee Iverson, Nat Bletter, and Kiril Vidimče.
28th Applied Imagery Pattern Recognition Workshop, October 1999, pp. 13–15.
This paper describes the representation and navigation of large, multi-resolution, georeferenced datasets in VRML97. This
requires resolving nontrivial issues such as how to represent deep level of detail hierarchies efficiently in VRML; how to
model terrain using geographic coordinate systems instead of only VRML’s Cartesian representation; how to model
georeferenced coordinates to sub-meter accuracy with only single-precision floating point support; how to enable the
integration of multiple terrain datasets for a region, as well as cultural features such as buildings and roads; how to navigate
efficiently around a large, global terrain dataset; and finally, how to encode metadata describing the terrain. We present
solutions to all of these problems. Consequently, we are able to visualize geographic data in the order of terabytes or more,
from the globe down to millimeter resolution, and in real-time, using standard VRML97.
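The sub-meter-accuracy problem can be demonstrated concretely (a sketch of the standard origin-offset technique; the paper's exact scheme may differ): a global geocentric coordinate near Earth's radius (~6.4e6 m) cannot hold sub-meter detail in a single-precision float, but a tile-local offset from a nearby origin can.

```python
import struct

def to_float32(x):
    """Round-trip through IEEE 754 single precision (VRML's float type)."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A geocentric coordinate near Earth's surface.
point = 6371000.789

# Stored directly as float32, sub-meter detail is lost
# (the spacing between float32 values at this magnitude is 0.5 m):
direct_err = abs(to_float32(point) - point)

# Stored as a tile origin plus a small local offset, it survives:
origin = 6371000.0
local = to_float32(point - origin)
tiled_err = abs((origin + local) - point)

assert direct_err > 0.1    # worse than decimeter accuracy
assert tiled_err < 0.001   # sub-millimeter accuracy
```

Keeping double-precision origins on the host and single-precision offsets in the scene graph is what makes sub-meter georeferencing workable within VRML97's float-only fields.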
Simulation and Visualization in a Browser,
Kiril Vidimče, Viktor Miladinov, and David C. Banks.
Proceedings of 1998 IEEE Visualization Workshop on Distributed Visualization Systems.
We describe how a Web-based visualization tool can be
constructed using Java and VRML, and present a brief
survey of the choices available for providing (a) 3D
graphics and (b) behavior in a Web browser. The tool
is collaborative, multi-platform, and interactive.
Through this tool a user can interactively modify and
view a physical simulation and share the changes
dynamically with other users. The design described has
been implemented and successfully run through Netscape
and Internet Explorer on Unix workstations and Windows PCs.
We describe a multidisciplinary effort for creating interactive 3D graphical modules for visualizing optical phenomena. These modules are designed for use in an upper-level undergraduate course. The modules are developed in Open Inventor, which allows them to run under both Unix and Windows. The work is significant in that it applies contemporary interactive 3D visualization techniques to instructional courseware, which represents a considerable advance compared to the current state of the practice.