yt.visualization.volume_rendering.camera.Camera(center, normal_vector, width, resolution, transfer_function=None, north_vector=None, steady_north=False, volume=None, fields=None, log_fields=None, sub_samples=5, ds=None, min_level=None, max_level=None, no_ghost=True, data_source=None, use_light=False)

A viewpoint into a volume, for volume rendering.
The camera represents the eye of an observer, which will be used to generate ray-cast volume renderings of the domain.
Parameters:

center : array_like
normal_vector : array_like
width : float or list of floats
resolution : int or list of ints
transfer_function : yt.visualization.volume_rendering.TransferFunction
north_vector : array_like, optional
steady_north : bool, optional
volume : yt.extensions.volume_rendering.AMRKDTree, optional
fields : list of fields, optional
log_fields : list of bool, optional
sub_samples : int, optional
ds : ~yt.data_objects.api.Dataset
use_kd : bool, optional
max_level : int, optional
no_ghost : bool, optional
data_source : data container, optional
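To make the geometric parameters concrete: `normal_vector` points from the camera toward `center`, and `north_vector` fixes the "up" direction of the image plane; with `steady_north`, the component of `north_vector` along the normal is removed before use. The sketch below is a hypothetical illustration of that orthonormalization (the function name `camera_basis` is ours, not yt's), not yt's actual implementation:

```python
import numpy as np

# Hypothetical helper (not part of yt): derive an orthonormal image-plane
# basis from a viewing direction and an "up" hint, as a Camera-style
# object conceptually must.
def camera_basis(normal_vector, north_vector):
    """Return (east, north, normal) unit vectors for the image plane."""
    normal = np.asarray(normal_vector, dtype=float)
    normal /= np.linalg.norm(normal)
    north = np.asarray(north_vector, dtype=float)
    # Remove the component of north along normal (cf. steady_north).
    north = north - np.dot(north, normal) * normal
    north /= np.linalg.norm(north)
    # The remaining axis is orthogonal to both by construction.
    east = np.cross(north, normal)
    return east, north, normal

# Example: view along the body diagonal with z as the "up" hint.
east, north, normal = camera_basis([1.0, 1.0, 1.0], [0.0, 0.0, 1.0])
```

All three returned vectors are unit length and mutually orthogonal, so pixel offsets in the image plane can be expressed as linear combinations of `east` and `north`.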
Examples
>>> from yt.mods import *
>>> import yt.visualization.volume_rendering.api as vr
>>> ds = load('DD1701') # Load a dataset
>>> c = [0.5]*3 # Center
>>> L = [1.0,1.0,1.0] # Viewpoint
>>> W = np.sqrt(3) # Width
>>> N = 1024 # Pixels (1024^2)
>>> # Get density min, max
>>> mi, ma = ds.all_data().quantities['Extrema']('Density')[0]
>>> mi, ma = np.log10(mi), np.log10(ma)
>>> # Construct transfer function
>>> tf = vr.ColorTransferFunction((mi - 2, ma + 2))
>>> # Sample transfer function with 5 Gaussians. Use new col_bounds keyword.
>>> tf.add_layers(5, w=0.05, col_bounds=(mi + 1, ma), colormap='spectral')
>>> # Create the camera object
>>> cam = vr.Camera(c, L, W, (N, N), transfer_function=tf, ds=ds)
>>> # Ray cast, and save the image.
>>> image = cam.snapshot(fn='my_rendering.png')
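The `sub_samples` parameter (default 5 in the signature above) controls how many samples each ray takes per cell. As an illustration of why this matters, independent of yt, the self-contained sketch below integrates a sharply peaked emissivity profile across one cell with a midpoint rule: a single sample per cell badly over- or under-shoots, while a few sub-samples track the true integral much more closely. All names here (`integrate_ray`, `emiss`) are illustrative, not part of the yt API:

```python
import numpy as np

def integrate_ray(emissivity, t0, t1, n_samples):
    """Midpoint-rule integral of emissivity over the interval [t0, t1]."""
    dt = (t1 - t0) / n_samples
    # Midpoints of n_samples equal sub-intervals.
    t = t0 + dt * (np.arange(n_samples) + 0.5)
    return np.sum(emissivity(t)) * dt

# A smooth emissivity peaked inside the "cell" [0, 1].
emiss = lambda t: np.exp(-((t - 0.5) ** 2) / 0.02)

reference = integrate_ray(emiss, 0.0, 1.0, 10000)  # dense reference
coarse = integrate_ray(emiss, 0.0, 1.0, 1)         # one sample per cell
finer = integrate_ray(emiss, 0.0, 1.0, 5)          # five sub-samples
```

With one sample the ray happens to hit the peak and greatly overestimates the integral; five sub-samples land within a fraction of a percent of the reference. The trade-off is cost: sample count scales linearly with `sub_samples` per ray.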