yt.visualization.volume_rendering.camera.Camera

class yt.visualization.volume_rendering.camera.Camera(center, normal_vector, width, resolution, transfer_function=None, north_vector=None, steady_north=False, volume=None, fields=None, log_fields=None, sub_samples=5, ds=None, min_level=None, max_level=None, no_ghost=True, data_source=None, use_light=False)[source]

A viewpoint into a volume, for volume rendering.

The camera represents the eye of an observer, which will be used to generate ray-cast volume renderings of the domain.

Parameters:

center : array_like

The current "center" of the viewport: the focal point for the camera.

normal_vector : array_like

The vector between the camera position and the center.

width : float or list of floats

The current width of the image. A single float gives a cubical view volume; a list of three floats gives the left/right, top/bottom, and front/back extents (see the sketch after the parameter list).

resolution : int or list of ints

The number of pixels in each direction.

transfer_function : yt.visualization.volume_rendering.TransferFunction

The transfer function used to map values to colors in an image. If not specified, defaults to a ProjectionTransferFunction.

north_vector : array_like, optional

The 'up' direction for the plane of rays. If not specified, it is calculated automatically.

steady_north : bool, optional

Controls whether to normalize the north_vector by subtracting off its dot product with the normal vector. This makes it easier to do rotations along a single axis. If north_vector is specified, this is switched to True. Default: False

volume : yt.extensions.volume_rendering.AMRKDTree, optional

The volume to ray cast through. Can be specified for finer-grained control, but otherwise will be automatically generated.

fields : list of fields, optional

This is the list of fields we want to volume render; defaults to Density.

log_fields : list of bool, optional

Whether we should take the log of the fields before supplying them to the volume rendering mechanism.

sub_samples : int, optional

The number of samples to take inside every cell per ray.

ds : yt.data_objects.api.Dataset

For now, this is a required parameter, but in the future it will become optional. This is the dataset to volume render.

use_kd : bool, optional

Specifies whether or not to use a kd-tree framework for the homogenized volume and ray casting. Defaults to True.

max_level : int, optional

Specifies the maximum level to be rendered. Also specifies the maximum level used in the kd-Tree construction. Defaults to None (all levels), and only applies if use_kd=True.

no_ghost : bool, optional

Optimization option. If True, homogenized bricks will extrapolate out from the grid instead of interpolating from ghost zones, which would otherwise have to be calculated first. This can lead to large speed improvements, but at the cost of accuracy/smoothness in the resulting image. The effects are less notable when the transfer function is smooth and broad. Default: True

data_source : data container, optional

Optionally specify an arbitrary data source for the volume rendering. All cells not included in the data source will be ignored during ray casting. By default this is set to ds.all_data(); see the sketch after the parameter list for a restricted example.
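As an illustration of the width and data_source parameters, the sketch below builds a camera with a different extent along each axis and restricts ray casting to a sphere. It assumes a dataset already loaded as ds and a transfer function tf (as in the example further down); the sphere object, widths, resolution, and filename are illustrative choices, not defaults.

>>> import yt.visualization.volume_rendering.api as vr
>>> sp = ds.sphere([0.5, 0.5, 0.5], (0.25, 'unitary'))  # assumed data container
>>> W = [0.6, 0.6, 0.3]  # left/right, top/bottom, front/back widths
>>> cam = vr.Camera([0.5] * 3, [1.0, 1.0, 1.0], W, (512, 512),
...                 transfer_function=tf, ds=ds, data_source=sp)
>>> image = cam.snapshot(fn='sphere_rendering.png')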

Examples

>>> from yt.mods import *
>>> import yt.visualization.volume_rendering.api as vr
>>> ds = load('DD1701') # Load a dataset
>>> c = [0.5]*3 # Center
>>> L = [1.0,1.0,1.0] # Viewpoint
>>> W = np.sqrt(3) # Width
>>> N = 1024 # Pixels (1024^2)

# Get density min, max
>>> mi, ma = ds.all_data().quantities['Extrema']('Density')[0]
>>> mi, ma = np.log10(mi), np.log10(ma)

# Construct transfer function
>>> tf = vr.ColorTransferFunction((mi - 2, ma + 2))
# Sample transfer function with 5 gaussians. Use new col_bounds keyword.
>>> tf.add_layers(5, w=0.05, col_bounds=(mi + 1, ma), colormap='spectral')

# Create the camera object
>>> cam = vr.Camera(c, L, W, (N, N), transfer_function=tf, ds=ds)

# Ray cast, and save the image.
>>> image = cam.snapshot(fn='my_rendering.png')
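Once an image has been rendered, the camera can be adjusted in place and the scene re-rendered. The follow-on sketch below assumes the cam object from the example above; zoom and rotate are methods of this Camera class, but treat the exact arguments and axis conventions as assumptions to verify against your yt version.

# Adjust the view and re-render (follow-on sketch using the camera above).
>>> cam.zoom(2.0)  # narrow the field of view by a factor of 2
>>> cam.rotate(np.pi / 4.0)  # rotate the view by pi/4 radians
>>> image = cam.snapshot(fn='my_rendering_zoomed.png')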
