15. Image Operations
15.1. Image Operations Overview
Vulkan Image Operations are operations performed by those SPIR-V Image Instructions which take an OpTypeImage (representing a VkImageView) or OpTypeSampledImage (representing a (VkImageView, VkSampler) pair) and texel coordinates as operands, and return a value based on one or more neighboring texture elements (texels) in the image.
Note
Texel is a term which is a combination of the words texture and element. Early interactive computer graphics supported texture operations on textures, a small subset of the image operations on images described here. The discrete samples remain essentially equivalent, however, so we retain the historical term texel to refer to them. 
Image Operations include the functionality of the following SPIR-V Image Instructions:

OpImageSample* and OpImageSparseSample* read one or more neighboring texels of the image, and filter the texel values based on the state of the sampler.

Instructions with ImplicitLod in the name determine the LOD used in the sampling operation based on the coordinates used in neighboring fragments.

Instructions with ExplicitLod in the name determine the LOD used in the sampling operation based on additional coordinates.

Instructions with Proj in the name apply homogeneous projection to the coordinates.

OpImageFetch and OpImageSparseFetch return a single texel of the image. No sampler is used.

OpImage*Gather and OpImageSparse*Gather read neighboring texels and return a single component of each.

OpImageRead (and OpImageSparseRead) and OpImageWrite read and write, respectively, a texel in the image. No sampler is used.

OpImageSampleFootprintNV identifies and returns information about the set of texels in the image that would be accessed by an equivalent OpImageSample* instruction.

Instructions with Dref in the name apply depth comparison on the texel values.

Instructions with Sparse in the name additionally return a sparse residency code.
15.1.1. Texel Coordinate Systems
Images are addressed by texel coordinates. There are three texel coordinate systems:

normalized texel coordinates [0.0, 1.0]

unnormalized texel coordinates [0.0, width / height / depth)

integer texel coordinates [0, width / height / depth)
SPIR-V OpImageFetch, OpImageSparseFetch, OpImageRead, OpImageSparseRead, and OpImageWrite instructions use integer texel coordinates.
Other image instructions can use either normalized or unnormalized texel coordinates (selected by the unnormalizedCoordinates state of the sampler used in the instruction), but there are limitations on what operations, image state, and sampler state are supported. Normalized coordinates are logically converted to unnormalized as part of image operations, and certain steps are only performed on normalized coordinates. The array layer coordinate is always treated as unnormalized even when other coordinates are normalized.
Normalized texel coordinates are referred to as (s,t,r,q,a), with the coordinates having the following meanings:

s: Coordinate in the first dimension of an image.

t: Coordinate in the second dimension of an image.

r: Coordinate in the third dimension of an image.

(s,t,r) are interpreted as a direction vector for Cube images.


q: Fourth coordinate, for homogeneous (projective) coordinates.

a: Coordinate for array layer.
The coordinates are extracted from the SPIR-V operand based on the dimensionality of the image variable and type of instruction. For Proj instructions, the components are in order (s [,t] [,r] q), with t and r being conditionally present based on the Dim of the image. For non-Proj instructions, the coordinates are (s [,t] [,r] [,a]), with t and r being conditionally present based on the Dim of the image and a being conditionally present based on the Arrayed property of the image. Projective image instructions are not supported on Arrayed images.
Unnormalized texel coordinates are referred to as (u,v,w,a), with the coordinates having the following meanings:

u: Coordinate in the first dimension of an image.

v: Coordinate in the second dimension of an image.

w: Coordinate in the third dimension of an image.

a: Coordinate for array layer.
Only the u and v coordinates are directly extracted from the SPIR-V operand, because only 1D and 2D (non-Arrayed) dimensionalities support unnormalized coordinates. The components are in order (u [,v]), with v being conditionally present when the dimensionality is 2D. When normalized coordinates are converted to unnormalized coordinates, all four coordinates are used.
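As an illustration of the conversion described above, the following minimal Python sketch (the helper name is hypothetical; the specification defines this only mathematically) scales each normalized coordinate by the size of the image level. The array layer a is excluded because it is always treated as unnormalized:

```python
# Hypothetical helper: normalized (s, t, r) -> unnormalized (u, v, w),
# by scaling each coordinate by the dimensions of the image level.
def normalized_to_unnormalized(s, t, r, width, height, depth):
    u = s * width
    v = t * height
    w = r * depth
    return (u, v, w)
```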
Integer texel coordinates are referred to as (i,j,k,l,n), with the coordinates having the following meanings:

i: Coordinate in the first dimension of an image.

j: Coordinate in the second dimension of an image.

k: Coordinate in the third dimension of an image.

l: Coordinate for array layer.

n: Coordinate for the sample index.
They are extracted from the SPIR-V operand in order (i [,j] [,k] [,l]), with j and k conditionally present based on the Dim of the image, and l conditionally present based on the Arrayed property of the image. n is conditionally present and is taken from the Sample image operand.
For all coordinate types, unused coordinates are assigned a value of zero.
The Texel Coordinate Systems, shown for an example 8×4 texel two-dimensional image.

Normalized texel coordinates:

The s coordinate goes from 0.0 to 1.0.

The t coordinate goes from 0.0 to 1.0.


Unnormalized texel coordinates:

The u coordinate within the range 0.0 to 8.0 is within the image, otherwise it is outside the image.

The v coordinate within the range 0.0 to 4.0 is within the image, otherwise it is outside the image.


Integer texel coordinates:

The i coordinate within the range 0 to 7 addresses texels within the image, otherwise it is outside the image.

The j coordinate within the range 0 to 3 addresses texels within the image, otherwise it is outside the image.


Also shown for linear filtering:

Given the unnormalized coordinates (u,v), the four texels selected are i_{0}j_{0}, i_{1}j_{0}, i_{0}j_{1}, and i_{1}j_{1}.

The fractions α and β.

Given the offsets Δ_{i} and Δ_{j}, the four texels selected by the offset are i_{0}j'_{0}, i_{1}j'_{0}, i_{0}j'_{1}, and i_{1}j'_{1}.

Note
For formats with reduced-resolution channels, Δ_{i} and Δ_{j} are relative to the resolution of the highest-resolution channel, and therefore may be divided by two relative to the unnormalized coordinate space of the lower-resolution channels. 
The Texel Coordinate Systems, shown for the same example 8×4 texel two-dimensional image.

Texel coordinates as above. Also shown for nearest filtering:

Given the unnormalized coordinates (u,v), the texel selected is ij.

Given the offsets Δ_{i} and Δ_{j}, the texel selected by the offset is ij'.

For corner-sampled images, the texel samples are located at the grid intersections instead of the texel centers.
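The linear-filter texel selection shown in the diagrams can be sketched as follows. This is an illustrative Python fragment (the function name is hypothetical), assuming the conventional half-texel shift: (i_{0}, j_{0}) is the texel at or below (u − 0.5, v − 0.5), and α and β are the interpolation fractions.

```python
import math

# Sketch of linear-filter texel selection: given unnormalized (u, v),
# pick the 2x2 texel neighborhood and the interpolation fractions.
def linear_filter_footprint(u, v):
    i0 = math.floor(u - 0.5)
    j0 = math.floor(v - 0.5)
    alpha = (u - 0.5) - i0   # fractional weight along i
    beta = (v - 0.5) - j0    # fractional weight along j
    return (i0, i0 + 1, j0, j0 + 1, alpha, beta)
```

The filtered value is then the bilinear blend of the four texels with weights (1−α)(1−β), α(1−β), (1−α)β, and αβ.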
15.2. Conversion Formulas
15.2.1. RGB to Shared Exponent Conversion
An RGB color (red, green, blue) is transformed to a shared exponent color (red_{shared}, green_{shared}, blue_{shared}, exp_{shared}) as follows:
First, the components (red, green, blue) are clamped to (red_{clamped}, green_{clamped}, blue_{clamped}) as:

red_{clamped} = max(0, min(sharedexp_{max}, red))

green_{clamped} = max(0, min(sharedexp_{max}, green))

blue_{clamped} = max(0, min(sharedexp_{max}, blue))
where:

sharedexp_{max} = ((2^{N}-1) / 2^{N}) × 2^{(E_{max}-B)}

N = 9 (number of mantissa bits per component), B = 15 (exponent bias), and E_{max} = 31 (maximum exponent)
Note
NaN, if supported, is handled as in IEEE 754-2008.

The largest clamped component, max_{clamped}, is determined:

max_{clamped} = max(red_{clamped}, green_{clamped}, blue_{clamped})
A preliminary shared exponent exp' is computed:

exp' = max(-B-1, ⌊log_{2}(max_{clamped})⌋) + 1 + B

The shared exponent exp_{shared} is computed:

exp_{shared} = exp' if ⌊max_{clamped} / 2^{(exp'-B-N)} + 0.5⌋ < 2^{N}, and exp' + 1 otherwise

Finally, three integer values in the range 0 to 2^{N}-1 are computed:

red_{shared} = ⌊red_{clamped} / 2^{(exp_{shared}-B-N)} + 0.5⌋, and similarly for green_{shared} and blue_{shared}
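The clamping and encoding steps above can be sketched as follows. This is an illustrative Python fragment, not normative: the constants N = 9, B = 15, and E_max = 31 come from the shared-exponent format definition, and rounding follows the ⌊x + 0.5⌋ convention used in the surrounding equations.

```python
import math

# Sketch of the RGB -> shared-exponent encoding (E5B9G9R9-style format).
N, B, E_MAX = 9, 15, 31
SHAREDEXP_MAX = (2**N - 1) / 2**N * 2**(E_MAX - B)

def rgb_to_shared_exponent(red, green, blue):
    clamp = lambda c: max(0.0, min(SHAREDEXP_MAX, c))
    rc, gc, bc = clamp(red), clamp(green), clamp(blue)
    max_c = max(rc, gc, bc)
    # Preliminary shared exponent (treat a zero maximum as the minimum).
    log2_max = math.floor(math.log2(max_c)) if max_c > 0 else -B - 1
    exp_p = max(-B - 1, log2_max) + 1 + B
    # Bump the exponent if the rounded maximum would overflow the mantissa.
    max_s = math.floor(max_c / 2**(exp_p - B - N) + 0.5)
    exp_shared = exp_p if max_s < 2**N else exp_p + 1
    scale = 2**(exp_shared - B - N)
    enc = lambda c: math.floor(c / scale + 0.5)
    return enc(rc), enc(gc), enc(bc), exp_shared
```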
15.2.2. Shared Exponent to RGB
A shared exponent color (red_{shared}, green_{shared}, blue_{shared}, exp_{shared}) is transformed to an RGB color (red, green, blue) as follows:

\(red = red_{shared} \times {2^{(exp_{shared}-B-N)}}\)

\(green = green_{shared} \times {2^{(exp_{shared}-B-N)}}\)

\(blue = blue_{shared} \times {2^{(exp_{shared}-B-N)}}\)
where:

N = 9 (number of mantissa bits per component)

B = 15 (exponent bias)
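The decode direction is a single scale per component, sketched here in Python for illustration (the helper name is hypothetical):

```python
# Sketch of shared-exponent decode: component x 2^(exp_shared - B - N),
# with N = 9 mantissa bits and B = 15 exponent bias.
N, B = 9, 15

def shared_exponent_to_rgb(r_s, g_s, b_s, exp_s):
    scale = 2.0**(exp_s - B - N)
    return (r_s * scale, g_s * scale, b_s * scale)
```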
15.3. Texel Input Operations
Texel input instructions are SPIRV image instructions that read from an image. Texel input operations are a set of steps that are performed on state, coordinates, and texel values while processing a texel input instruction, and which are common to some or all texel input instructions. They include the following steps, which are performed in the listed order:
For texel input instructions involving multiple texels (for sampling or gathering), these steps are applied for each texel that is used in the instruction. Depending on the type of image instruction, other steps are conditionally performed between these steps or involving multiple coordinate or texel values.
If Chroma Reconstruction is implicit, Texel Filtering instead takes place during chroma reconstruction, before sampler Y′C_{B}C_{R} conversion occurs.
15.3.1. Texel Input Validation Operations
Texel input validation operations inspect instruction/image/sampler state or coordinates, and in certain circumstances cause the texel value to be replaced or become undefined. There are a series of validations that the texel undergoes.
Instruction/Sampler/Image View Validation
There are a number of cases where a SPIR-V instruction can mismatch with the sampler, the image view, or both, and a number of cases where the sampler can mismatch with the image view. In such cases the value of the texel returned is undefined.
These cases include:

The sampler borderColor is an integer type and the image view format is not one of the VkFormat integer types or a stencil component of a depth/stencil format.

The sampler borderColor is a float type and the image view format is not one of the VkFormat float types or a depth component of a depth/stencil format.

The sampler borderColor is one of the opaque black colors (VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK or VK_BORDER_COLOR_INT_OPAQUE_BLACK) and the image view VkComponentSwizzle for any of the VkComponentMapping components is not VK_COMPONENT_SWIZZLE_IDENTITY.

The VkImageLayout of any subresource in the image view does not match that specified in VkDescriptorImageInfo::imageLayout used to write the image descriptor.

If the instruction is OpImageRead or OpImageSparseRead and the shaderStorageImageReadWithoutFormat feature is not enabled, or the instruction is OpImageWrite and the shaderStorageImageWriteWithoutFormat feature is not enabled, then the SPIR-V Image Format must be compatible with the image view’s format.

The sampler unnormalizedCoordinates is VK_TRUE and any of the limitations of unnormalized coordinates are violated.

The sampler was created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and the image was not created with flags containing VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT.

The sampler was not created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and the image was created with flags containing VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT.

The sampler was created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and is used with a function that is not OpImageSampleImplicitLod or OpImageSampleExplicitLod, or is used with operands Offset or ConstOffsets.

The SPIR-V instruction is one of the OpImage*Dref* instructions and the sampler compareEnable is VK_FALSE.

The SPIR-V instruction is not one of the OpImage*Dref* instructions and the sampler compareEnable is VK_TRUE.

The SPIR-V instruction is one of the OpImage*Dref* instructions and the image view format is not one of the depth/stencil formats with a depth component, or the image view aspect is not VK_IMAGE_ASPECT_DEPTH_BIT.

The SPIR-V instruction’s image variable’s properties are not compatible with the image view:

Rules for viewType:

VK_IMAGE_VIEW_TYPE_1D must have Dim = 1D, Arrayed = 0, MS = 0.

VK_IMAGE_VIEW_TYPE_2D must have Dim = 2D, Arrayed = 0.

VK_IMAGE_VIEW_TYPE_3D must have Dim = 3D, Arrayed = 0, MS = 0.

VK_IMAGE_VIEW_TYPE_CUBE must have Dim = Cube, Arrayed = 0, MS = 0.

VK_IMAGE_VIEW_TYPE_1D_ARRAY must have Dim = 1D, Arrayed = 1, MS = 0.

VK_IMAGE_VIEW_TYPE_2D_ARRAY must have Dim = 2D, Arrayed = 1.

VK_IMAGE_VIEW_TYPE_CUBE_ARRAY must have Dim = Cube, Arrayed = 1, MS = 0.


If the image was created with VkImageCreateInfo::samples equal to VK_SAMPLE_COUNT_1_BIT, the instruction must have MS = 0.

If the image was created with VkImageCreateInfo::samples not equal to VK_SAMPLE_COUNT_1_BIT, the instruction must have MS = 1.


If the image was created with VkImageCreateInfo::flags containing VK_IMAGE_CREATE_CORNER_SAMPLED_BIT_NV, the sampler addressing modes must only use a VkSamplerAddressMode of VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.

The SPIR-V instruction is OpImageSampleFootprintNV with Dim = 2D and addressModeU or addressModeV in the sampler is not VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.

The SPIR-V instruction is OpImageSampleFootprintNV with Dim = 3D and addressModeU, addressModeV, or addressModeW in the sampler is not VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
Only OpImageSample* and OpImageSparseSample* can be used with a sampler that enables sampler Y′C_{B}C_{R} conversion.
OpImageFetch, OpImageSparseFetch, OpImage*Gather, and OpImageSparse*Gather must not be used with a sampler that enables sampler Y′C_{B}C_{R} conversion.
The ConstOffset and Offset operands must not be used with a sampler that enables sampler Y′C_{B}C_{R} conversion.
Integer Texel Coordinate Validation
Integer texel coordinates are validated against the size of the image level, and the number of layers and number of samples in the image. For SPIR-V instructions that use integer texel coordinates, this is performed directly on the integer coordinates. For instructions that use normalized or unnormalized texel coordinates, this is performed on the coordinates that result after conversion to integer texel coordinates.
If the integer texel coordinates do not satisfy all of the conditions

0 ≤ i < w_{s}

0 ≤ j < h_{s}

0 ≤ k < d_{s}

0 ≤ l < layers

0 ≤ n < samples
where:

w_{s} = width of the image level

h_{s} = height of the image level

d_{s} = depth of the image level

layers = number of layers in the image

samples = number of samples per texel in the image
then the texel fails integer texel coordinate validation.
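The conditions above amount to a bounds check on each coordinate, sketched here as an illustrative Python helper (the name is hypothetical):

```python
# Sketch of integer texel coordinate validation: every coordinate must
# lie within the corresponding extent of the image level.
def integer_coords_valid(i, j, k, l, n, width, height, depth, layers, samples):
    return (0 <= i < width and 0 <= j < height and 0 <= k < depth
            and 0 <= l < layers and 0 <= n < samples)
```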
There are four cases to consider:

Valid Texel Coordinates

If the texel coordinates pass validation (that is, the coordinates lie within the image),
then the texel value comes from the value in image memory.


Border Texel

If the texel coordinates fail validation, and

If the read is the result of an image sample instruction or image gather instruction, and

If the image is not a cube image,
then the texel is a border texel and texel replacement is performed.


Invalid Texel

If the texel coordinates fail validation, and

If the read is the result of an image fetch instruction, image read instruction, or atomic instruction,
then the texel is an invalid texel and texel replacement is performed.


Cube Map Edge or Corner
Otherwise the texel coordinates lie beyond the edges or corners of the selected cube map face, and Cube map edge handling is performed.
Cube Map Edge Handling
If the texel coordinates lie beyond the edges or corners of the selected
cube map face, the following steps are performed.
Note that this does not occur when using VK_FILTER_NEAREST filtering within a mip level, since VK_FILTER_NEAREST is treated as using VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.

Cube Map Edge Texel

If the texel lies beyond the selected cube map face in either only i or only j, then the coordinates (i,j) and the array layer l are transformed to select the adjacent texel from the appropriate neighboring face.


Cube Map Corner Texel

If the texel lies beyond the selected cube map face in both i and j, then there is no unique neighboring face from which to read that texel. The texel should be replaced by the average of the three values of the adjacent texels in each incident face. However, implementations may replace the cube map corner texel by other methods. The methods are subject to the constraint that for linear filtering if the three available texels have the same value, the resulting filtered texel must have that value, and for cubic filtering if the twelve available samples have the same value, the resulting filtered texel must have that value.
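One permitted corner replacement, averaging the three adjacent texels, can be sketched as follows. This is only one valid choice; implementations may use other methods, subject to the constraints above:

```python
# Sketch of one permitted cube map corner rule: replace the missing
# corner texel with the average of the three adjacent texels. If all
# three inputs are equal, the result equals that value, satisfying the
# filtering constraint stated in the spec text.
def cube_corner_texel(t0, t1, t2):
    return (t0 + t1 + t2) / 3.0
```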

Sparse Validation
If the texel reads from an unbound region of a sparse image, the texel is a sparse unbound texel, and processing continues with texel replacement.
Layout Validation
If all planes of a disjoint multi-planar image are not in the same image layout, the image must not be sampled with sampler Y′C_{B}C_{R} conversion enabled.
15.3.2. Format Conversion
Texels undergo a format conversion from the VkFormat of the image view to a vector of either floating point or signed or unsigned integer components, with the number of components based on the number of components present in the format.

Color formats have one, two, three, or four components, according to the format.

Depth/stencil formats are one component. The depth or stencil component is selected by the
aspectMask
of the image view.
Each component is converted based on its type and size (as defined in the Format Definition section for each VkFormat), using the appropriate equations in 16-Bit Floating-Point Numbers, Unsigned 11-Bit Floating-Point Numbers, Unsigned 10-Bit Floating-Point Numbers, Fixed-Point Data Conversion, and Shared Exponent to RGB. Signed integer components smaller than 32 bits are sign-extended.
If the image view format is sRGB, the color components are first converted as if they are UNORM, and then sRGB to linear conversion is applied to the R, G, and B components as described in the “sRGB EOTF” section of the Khronos Data Format Specification. The A component, if present, is unchanged.
If the image view format is block-compressed, then the texel value is first decoded, then converted based on the type and number of components defined by the compressed format.
15.3.3. Texel Replacement
A texel is replaced if it is one (and only one) of:

a border texel,

an invalid texel, or

a sparse unbound texel.
Border texels are replaced with a value based on the image format and the borderColor of the sampler.
The border color is:
Sampler borderColor  Corresponding Border Color 

VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK 
[B_{r}, B_{g}, B_{b}, B_{a}] = [0.0, 0.0, 0.0, 0.0] 

VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK 
[B_{r}, B_{g}, B_{b}, B_{a}] = [0.0, 0.0, 0.0, 1.0] 

VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE 
[B_{r}, B_{g}, B_{b}, B_{a}] = [1.0, 1.0, 1.0, 1.0] 

VK_BORDER_COLOR_INT_TRANSPARENT_BLACK 
[B_{r}, B_{g}, B_{b}, B_{a}] = [0, 0, 0, 0] 

VK_BORDER_COLOR_INT_OPAQUE_BLACK 
[B_{r}, B_{g}, B_{b}, B_{a}] = [0, 0, 0, 1] 

VK_BORDER_COLOR_INT_OPAQUE_WHITE 
[B_{r}, B_{g}, B_{b}, B_{a}] = [1, 1, 1, 1] 
Note
The names 
This border color is substituted for the texel value, according to the number of components in the image format:
Texel Aspect or Format  Component Assignment 

Depth aspect 
D = B_{r} 
Stencil aspect 
S = B_{r} 
One component color format 
Color_{r} = B_{r} 
Two component color format 
[Color_{r},Color_{g}] = [B_{r},B_{g}] 
Three component color format 
[Color_{r},Color_{g},Color_{b}] = [B_{r},B_{g},B_{b}] 
Four component color format 
[Color_{r},Color_{g},Color_{b},Color_{a}] = [B_{r},B_{g},B_{b},B_{a}] 
The value returned by a read of an invalid texel is undefined, unless that read operation is from a buffer resource and the robustBufferAccess feature is enabled. In that case, an invalid texel is replaced as described by the robustBufferAccess feature.
If the VkPhysicalDeviceSparseProperties::residencyNonResidentStrict property is VK_TRUE, a sparse unbound texel is replaced with 0 or 0.0 values for integer and floating-point components of the image format, respectively. If residencyNonResidentStrict is VK_FALSE, the value of the sparse unbound texel is undefined.
15.3.4. Depth Compare Operation
If the image view has a depth/stencil format, the depth component is selected by the aspectMask, and the operation is a Dref instruction, a depth comparison is performed. The value of the result D is 1.0 if the result of the compare operation is true, and 0.0 otherwise. The compare operation is selected by the compareOp member of the sampler.
where, in the depth comparison:

D_{ref} = shaderOp.D_{ref} (from optional SPIRV operand)

D (texel depth value)
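The compare step can be sketched in Python as follows. This is illustrative only, assuming the reference value D_{ref} is the first operand of the comparison (as in the spec's compare table) and the fetched texel depth D is the second:

```python
# Sketch of the depth compare operation: apply the sampler's compareOp
# to (d_ref, d) and return 1.0 on true, 0.0 on false.
COMPARE_OPS = {
    "VK_COMPARE_OP_NEVER":            lambda d_ref, d: False,
    "VK_COMPARE_OP_LESS":             lambda d_ref, d: d_ref < d,
    "VK_COMPARE_OP_EQUAL":            lambda d_ref, d: d_ref == d,
    "VK_COMPARE_OP_LESS_OR_EQUAL":    lambda d_ref, d: d_ref <= d,
    "VK_COMPARE_OP_GREATER":          lambda d_ref, d: d_ref > d,
    "VK_COMPARE_OP_NOT_EQUAL":        lambda d_ref, d: d_ref != d,
    "VK_COMPARE_OP_GREATER_OR_EQUAL": lambda d_ref, d: d_ref >= d,
    "VK_COMPARE_OP_ALWAYS":           lambda d_ref, d: True,
}

def depth_compare(compare_op, d_ref, d):
    return 1.0 if COMPARE_OPS[compare_op](d_ref, d) else 0.0
```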
15.3.5. Conversion to RGBA
The texel is expanded from one, two, or three components to four components based on the image base color:
Texel Aspect or Format  RGBA Color 

Depth aspect 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [D,0,0,one] 
Stencil aspect 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [S,0,0,one] 
One component color format 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [Color_{r},0,0,one] 
Two component color format 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [Color_{r},Color_{g},0,one] 
Three component color format 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [Color_{r},Color_{g},Color_{b},one] 
Four component color format 
[Color_{r},Color_{g},Color_{b}, Color_{a}] = [Color_{r},Color_{g},Color_{b},Color_{a}] 
where one = 1.0f for floating-point formats and depth aspects, and one = 1 for integer formats and stencil aspects.
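The expansion table above can be sketched as follows; this is an illustrative Python helper (hypothetical name), where `one` is passed as 1.0 or 1 depending on the format class:

```python
# Sketch of conversion to RGBA: pad missing color components with zero
# and fill the alpha slot with `one` unless the format has four components.
def expand_to_rgba(components, one=1.0):
    c = list(components) + [0, 0, 0]   # pad with zeros
    alpha = components[3] if len(components) == 4 else one
    return (c[0], c[1], c[2], alpha)
```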
15.3.6. Component Swizzle
All texel input instructions apply a swizzle based on:

the VkComponentSwizzle enums in the components member of the VkImageViewCreateInfo structure for the image being read, if sampler Y′C_{B}C_{R} conversion is not enabled, and

the VkComponentSwizzle enums in the components member of the VkSamplerYcbcrConversionCreateInfo structure for the sampler Y′C_{B}C_{R} conversion, if sampler Y′C_{B}C_{R} conversion is enabled.
The swizzle can rearrange the components of the texel, or substitute zero or one for any component.
If the border color is one of the VK_BORDER_COLOR_*_OPAQUE_BLACK enums and the VkComponentSwizzle is not VK_COMPONENT_SWIZZLE_IDENTITY for all components (or the equivalent identity mapping), the value of the texel after swizzle is undefined.
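The swizzle behavior can be sketched in Python as follows; this is illustrative only, with short strings standing in for the VkComponentSwizzle enum values:

```python
# Sketch of component swizzle: each destination component takes the
# source component named by its swizzle, or a literal zero/one.
def swizzle(color, mapping, one=1.0):
    """color: (r, g, b, a); mapping: one swizzle name per destination."""
    src = dict(zip("RGBA", color))
    def pick(dst, sw):
        if sw == "IDENTITY":
            return src[dst]
        if sw == "ZERO":
            return 0
        if sw == "ONE":
            return one
        return src[sw]                 # "R", "G", "B", or "A"
    return tuple(pick(d, s) for d, s in zip("RGBA", mapping))
```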
15.3.7. Sparse Residency
OpImageSparse* instructions return a structure which includes a residency code indicating whether any texels accessed by the instruction are sparse unbound texels. This code can be interpreted by the OpImageSparseTexelsResident instruction, which converts the residency code to a boolean value.
15.3.8. Chroma Reconstruction
In some color models, the color representation is defined in terms of monochromatic light intensity (often called “luma”) and color differences relative to this intensity, often called “chroma”. It is common for color models other than RGB to represent the chroma channels at lower spatial resolution than the luma channel. This approach is used to take advantage of the eye’s lower spatial sensitivity to color compared with its sensitivity to brightness. Less commonly, the same approach is used with additive color, since the green channel dominates the eye’s sensitivity to light intensity and the spatial sensitivity to color introduced by red and blue is lower.
Lower-resolution channels are “downsampled” by resizing them to a lower spatial resolution than the channel representing luminance. The process of reconstructing a full color value for texture access involves accessing both chroma and luma values at the same location. To generate the color accurately, the values of the lower-resolution channels at the location of the luma samples must be reconstructed from the lower-resolution sample locations, an operation known here as “chroma reconstruction” irrespective of the actual color model.
The location of the chroma samples relative to the luma coordinates is determined by the xChromaOffset and yChromaOffset members of the VkSamplerYcbcrConversionCreateInfo structure used to create the sampler Y′C_{B}C_{R} conversion.
The following diagrams show the relationship between unnormalized (u,v) coordinates and (i,j) integer texel positions in the luma channel (shown in black, with circles showing integer sample positions) and the texel coordinates of reduced-resolution chroma channels, shown as crosses in red.
Note
If the chroma values are reconstructed at the locations of the luma samples by means of interpolation, chroma samples from outside the image bounds are needed; these are determined according to Wrapping Operation. These diagrams represent this by showing the bounds of the “chroma texel” extending beyond the image bounds, and including additional chroma sample positions where required for interpolation.
Reconstruction is implemented in one of two ways:
If the format of the image that is to be sampled sets VK_FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT, or the VkSamplerYcbcrConversionCreateInfo’s forceExplicitReconstruction is set to VK_TRUE, reconstruction is performed as an explicit step independent of filtering, described in the Explicit Reconstruction section.
If the format of the image that is to be sampled does not set VK_FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT and the VkSamplerYcbcrConversionCreateInfo’s forceExplicitReconstruction is set to VK_FALSE, reconstruction is performed as an implicit part of filtering prior to color model conversion, with no separate post-conversion texel filtering step, as described in the Implicit Reconstruction section.
Explicit Reconstruction

If the chromaFilter member of the VkSamplerYcbcrConversionCreateInfo structure is VK_FILTER_NEAREST:

If the format’s R and B channels are reduced in resolution in just width by a factor of two relative to the G channel (i.e. this is a “_422” format), the \(\tau_{ijk}[level]\) values accessed by texel filtering are reconstructed as follows:

\[\begin{aligned} \tau_R'(i, j) & = \tau_R(\lfloor{i\times 0.5}\rfloor, j)[level] \\ \tau_B'(i, j) & = \tau_B(\lfloor{i\times 0.5}\rfloor, j)[level] \end{aligned}\]

If the format’s R and B channels are reduced in resolution in width and height by a factor of two relative to the G channel (i.e. this is a “_420” format), the \(\tau_{ijk}[level]\) values accessed by texel filtering are reconstructed as follows:

\[\begin{aligned} \tau_R'(i, j) & = \tau_R(\lfloor{i\times 0.5}\rfloor, \lfloor{j\times 0.5}\rfloor)[level] \\ \tau_B'(i, j) & = \tau_B(\lfloor{i\times 0.5}\rfloor, \lfloor{j\times 0.5}\rfloor)[level] \end{aligned}\]

Note
xChromaOffset and yChromaOffset have no effect if chromaFilter is VK_FILTER_NEAREST for explicit reconstruction.


If the chromaFilter member of the VkSamplerYcbcrConversionCreateInfo structure is VK_FILTER_LINEAR:

If the format’s R and B channels are reduced in resolution in just width by a factor of two relative to the G channel (i.e. this is a “422” format):

If xChromaOffset is VK_CHROMA_LOCATION_COSITED_EVEN:

\[\tau_{RB}'(i,j) = \begin{cases} \tau_{RB}(\lfloor{i\times 0.5}\rfloor,j)[level], & 0.5 \times i = \lfloor{0.5 \times i}\rfloor\\ 0.5\times\tau_{RB}(\lfloor{i\times 0.5}\rfloor,j)[level] + \\ 0.5\times\tau_{RB}(\lfloor{i\times 0.5}\rfloor + 1,j)[level], & 0.5 \times i \neq \lfloor{0.5 \times i}\rfloor \end{cases}\]

If xChromaOffset is VK_CHROMA_LOCATION_MIDPOINT:

\[\tau_{RB}'(i,j) = \begin{cases} 0.25 \times \tau_{RB}(\lfloor{i\times 0.5}\rfloor - 1,j)[level] + \\ 0.75 \times \tau_{RB}(\lfloor{i\times 0.5}\rfloor,j)[level], & 0.5 \times i = \lfloor{0.5 \times i}\rfloor\\ 0.75 \times \tau_{RB}(\lfloor{i\times 0.5}\rfloor,j)[level] + \\ 0.25 \times \tau_{RB}(\lfloor{i\times 0.5}\rfloor + 1,j)[level], & 0.5 \times i \neq \lfloor{0.5 \times i}\rfloor \end{cases}\]


If the format’s R and B channels are reduced in resolution in width and height by a factor of two relative to the G channel (i.e. this is a “420” format), a similar relationship applies. Due to the number of options, these formulae are expressed more concisely as follows:
\[\begin{aligned} i_{RB} & = \begin{cases} 0.5 \times (i) & \textrm{If xChromaOffset = COSITED}\_\textrm{EVEN} \\ 0.5 \times (i - 0.5) & \textrm{If xChromaOffset = MIDPOINT} \end{cases}\\ j_{RB} & = \begin{cases} 0.5 \times (j) & \textrm{If yChromaOffset = COSITED}\_\textrm{EVEN} \\ 0.5 \times (j - 0.5) & \textrm{If yChromaOffset = MIDPOINT} \end{cases}\\ \\ i_{floor} & = \lfloor i_{RB} \rfloor \\ j_{floor} & = \lfloor j_{RB} \rfloor \\ \\ i_{frac} & = i_{RB} - i_{floor} \\ j_{frac} & = j_{RB} - j_{floor} \end{aligned}\]\[\begin{aligned} \tau_{RB}'(i,j) = & \tau_{RB}( i_{floor}, j_{floor})[level] & \times & ( 1 - i_{frac} ) & & \times & ( 1 - j_{frac} ) & + \\ & \tau_{RB}( 1 + i_{floor}, j_{floor})[level] & \times & ( i_{frac} ) & & \times & ( 1 - j_{frac} ) & + \\ & \tau_{RB}( i_{floor}, 1 + j_{floor})[level] & \times & ( 1 - i_{frac} ) & & \times & ( j_{frac} ) & + \\ & \tau_{RB}( 1 + i_{floor}, 1 + j_{floor})[level] & \times & ( i_{frac} ) & & \times & ( j_{frac} ) & \end{aligned}\]
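The “420” linear reconstruction formulae can be transcribed directly; in this illustrative Python sketch, tau_rb stands for a chroma texel lookup (with wrapping already applied) and the boolean flags select COSITED_EVEN versus MIDPOINT:

```python
import math

# Sketch of "420" linear chroma reconstruction: map the luma integer
# coordinate (i, j) to the chroma coordinate space, then bilinearly
# weight the four surrounding chroma texels.
def chroma_420_linear(i, j, x_cosited, y_cosited, tau_rb):
    i_rb = 0.5 * i if x_cosited else 0.5 * (i - 0.5)
    j_rb = 0.5 * j if y_cosited else 0.5 * (j - 0.5)
    i_f, j_f = math.floor(i_rb), math.floor(j_rb)
    a, b = i_rb - i_f, j_rb - j_f        # interpolation fractions
    return (tau_rb(i_f,     j_f)     * (1 - a) * (1 - b) +
            tau_rb(i_f + 1, j_f)     * a       * (1 - b) +
            tau_rb(i_f,     j_f + 1) * (1 - a) * b       +
            tau_rb(i_f + 1, j_f + 1) * a       * b)
```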

Note
In the case where the texture itself is bilinearly interpolated as described
in Texel Filtering, thus requiring four
fullcolor samples for the filtering operation, and where the reconstruction
of these samples uses bilinear interpolation in the chroma channels due to

Implicit Reconstruction
Implicit reconstruction takes place by the samples being interpolated, as required by the filter settings of the sampler, except that chromaFilter takes precedence for the chroma samples. If chromaFilter is VK_FILTER_NEAREST, an implementation may behave as if xChromaOffset and yChromaOffset were both VK_CHROMA_LOCATION_MIDPOINT, irrespective of the values set.
Note
This will not have any visible effect if the locations of the luma samples coincide with the location of the samples used for rasterization. 
The sample coordinates are adjusted by the downsample factor of the channel, such that, for example, the sample coordinates are divided by two if the channel has a downsample factor of two relative to the luma channel.
15.3.9. Sampler Y′C_{B}C_{R} Conversion
Sampler Y′C_{B}C_{R} conversion performs the following operations, which an implementation may combine into a single mathematical operation:
Sampler Y′C_{B}C_{R} Range Expansion
Sampler Y′C_{B}C_{R} range expansion is applied to color channel values after all texel input operations which are not specific to sampler Y′C_{B}C_{R} conversion. For example, the input values to this stage have been converted using the normal format conversion rules.
Sampler Y′C_{B}C_{R} range expansion is not applied if ycbcrModel is VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY. That is, the shader receives the vector C′_{rgba} as output by the Component Swizzle stage without further modification.
For other values of ycbcrModel, range expansion is applied to the texel channel values output by the Component Swizzle defined by the components member of VkSamplerYcbcrConversionCreateInfo. Range expansion applies independently to each channel of the image. For the purposes of range expansion and Y′C_{B}C_{R} model conversion, the R and B channels contain color difference (chroma) values and the G channel contains luma. The A channel is not modified by sampler Y′C_{B}C_{R} range expansion.
The range expansion to be applied is defined by the ycbcrRange member of the VkSamplerYcbcrConversionCreateInfo structure:

If ycbcrRange is VK_SAMPLER_YCBCR_RANGE_ITU_FULL, the following transformations are applied:

\[\begin{aligned} Y' &= C'_{rgba}[G] \\ C_B &= C'_{rgba}[B] - {{2^{(n-1)}}\over{(2^n) - 1}} \\ C_R &= C'_{rgba}[R] - {{2^{(n-1)}}\over{(2^n) - 1}} \end{aligned}\]

Note
These formulae correspond to the “full range” encoding in the “Quantization schemes” chapter of the Khronos Data Format Specification. Should any future amendments be made to the ITU specifications from which these equations are derived, the formulae used by Vulkan may also be updated to maintain parity.

If ycbcrRange is VK_SAMPLER_YCBCR_RANGE_ITU_NARROW, the following transformations are applied:

\[\begin{aligned} Y' &= {{C'_{rgba}[G] \times (2^n-1) - 16\times 2^{n-8}}\over{219\times 2^{n-8}}} \\ C_B &= {{C'_{rgba}[B] \times \left(2^n-1\right) - 128\times 2^{n-8}}\over{224\times 2^{n-8}}} \\ C_R &= {{C'_{rgba}[R] \times \left(2^n-1\right) - 128\times 2^{n-8}}\over{224\times 2^{n-8}}} \end{aligned}\]

Note
These formulae correspond to the “narrow range” encoding in the “Quantization schemes” chapter of the Khronos Data Format Specification.

n is the bit depth of the channels in the format.
The precision of the operations performed during range expansion must be at least that of the source format.
An implementation may clamp the results of these range expansion operations such that Y′ falls in the range [0,1], and/or such that C_{B} and C_{R} fall in the range [−0.5,0.5].
Sampler Y′C_{B}C_{R} Model Conversion
The range-expanded values are converted between color models, according to the color model conversion specified in the ycbcrModel member:
VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY

The color channels are not modified by the color model conversion since they are assumed already to represent the desired color model in which the shader is operating; Y′C_{B}C_{R} range expansion is also ignored.
VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY

The color channels are not modified by the color model conversion and are assumed to be treated as though in Y′C_{B}C_{R} form both in memory and in the shader; Y′C_{B}C_{R} range expansion is applied to the channels as for other Y′C_{B}C_{R} models, with the vector (C_{R},Y′,C_{B},A) provided to the shader.
VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709

The color channels are transformed from a Y′C_{B}C_{R} representation to an R′G′B′ representation as described in the “BT.709 Y′C_{B}C_{R} conversion” section of the Khronos Data Format Specification.
VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601

The color channels are transformed from a Y′C_{B}C_{R} representation to an R′G′B′ representation as described in the “BT.601 Y′C_{B}C_{R} conversion” section of the Khronos Data Format Specification.
VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020

The color channels are transformed from a Y′C_{B}C_{R} representation to an R′G′B′ representation as described in the “BT.2020 Y′C_{B}C_{R} conversion” section of the Khronos Data Format Specification.
In this operation, each output channel is dependent on each input channel.
An implementation may clamp the R′G′B′ results of these conversions to the range [0,1].
The precision of the operations performed during model conversion must be at least that of the source format.
The alpha channel is not modified by these model conversions.
Note
Sampling operations in a non-linear color space can introduce color and intensity shifts at sharp transition boundaries. To avoid this issue, the technically precise color correction sequence described in the “Introduction to Color Conversions” chapter of the Khronos Data Format Specification may be performed as follows:
The additional calculations and, especially, the additional sampling operations involved can be expected to have a performance impact compared with using the built-in conversion functionality.
15.4. Texel Output Operations
Texel output instructions are SPIRV image instructions that write to an image. Texel output operations are a set of steps that are performed on state, coordinates, and texel values while processing a texel output instruction, and which are common to some or all texel output instructions. They include the following steps, which are performed in the listed order:
15.4.1. Texel Output Validation Operations
Texel output validation operations inspect instruction/image state or coordinates, and in certain circumstances cause the write to have no effect. The texel undergoes a series of validations.
Texel Format Validation
If the image format of the OpTypeImage is not compatible with the VkImageView’s format, the write causes the contents of the image’s memory to become undefined.
15.4.2. Integer Texel Coordinate Validation
The integer texel coordinates are validated according to the same rules as for texel input coordinate validation.
If the texel fails integer texel coordinate validation, then the write has no effect.
15.4.3. Sparse Texel Operation
If the texel attempts to write to an unbound region of a sparse image, the texel is a sparse unbound texel. In such a case, if the VkPhysicalDeviceSparseProperties::residencyNonResidentStrict property is VK_TRUE, the sparse unbound texel write has no effect. If residencyNonResidentStrict is VK_FALSE, the write may have a side effect that becomes visible to other accesses to unbound texels in any resource, but will not be visible to any device memory allocated by the application.
15.4.4. Texel Output Format Conversion
If the image format is sRGB, a linear to sRGB conversion is applied to the R, G, and B components as described in the “sRGB EOTF” section of the Khronos Data Format Specification. The A component, if present, is unchanged.
Texels then undergo a format conversion from the floating-point, signed, or unsigned integer type of the texel data to the VkFormat of the image view. Any unused components are ignored.
Each component is converted based on its type and size (as defined in the Format Definition section for each VkFormat). Floating-point outputs are converted as described in Floating-Point Format Conversions and Fixed-Point Data Conversion. Integer outputs are converted such that their value is preserved. The converted value of any integer that cannot be represented in the target format is undefined.
15.5. Derivative Operations
SPIR-V derivative instructions include OpDPdx, OpDPdy, OpDPdxFine, OpDPdyFine, OpDPdxCoarse, and OpDPdyCoarse. Derivative instructions are only available in compute and fragment shaders.
Derivatives are computed as if there is a 2×2 neighborhood of fragments for each fragment shader invocation. These neighboring fragments are used to compute derivatives with the assumption that the values of P in the neighborhood are piecewise linear. It is further assumed that the values of P in the neighborhood are locally continuous. Applications must not use derivative instructions in non-uniform control flow.
For a 2×2 neighborhood, for the four fragments labeled 0, 1, 2 and 3, the Fine derivative instructions must return:
Coarse derivatives may return only two values. In this case, the values should be:
OpDPdx and OpDPdy must return the same result as either OpDPdxFine or OpDPdxCoarse, and either OpDPdyFine or OpDPdyCoarse, respectively. Implementations must make the same choice of either coarse or fine for both OpDPdx and OpDPdy, and implementations should make the choice that is more efficient to compute.
If the subgroupSize field of VkPhysicalDeviceSubgroupProperties is at least 4, the 2×2 neighborhood of fragments corresponds exactly to a subgroup quad. The order in which the fragments appear within the quad is implementation-defined.
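The fine/coarse distinction above can be sketched as follows. The quad layout used here (fragment 0 top-left, 1 top-right, 2 bottom-left, 3 bottom-right) is an assumption for illustration only; as noted, the in-quad order is implementation-defined.

```c
#include <assert.h>

/* p[0..3] holds the value of P at the four fragments of one quad.
 * Fine derivatives use the difference pair in each invocation's own
 * row (for x) or column (for y). */
static void quad_derivatives_fine(const float p[4], float dpdx[4], float dpdy[4]) {
    dpdx[0] = dpdx[1] = p[1] - p[0];   /* top row */
    dpdx[2] = dpdx[3] = p[3] - p[2];   /* bottom row */
    dpdy[0] = dpdy[2] = p[2] - p[0];   /* left column */
    dpdy[1] = dpdy[3] = p[3] - p[1];   /* right column */
}

/* Coarse derivatives may use a single difference pair for the whole quad. */
static void quad_derivatives_coarse(const float p[4], float dpdx[4], float dpdy[4]) {
    for (int i = 0; i < 4; ++i) {
        dpdx[i] = p[1] - p[0];
        dpdy[i] = p[2] - p[0];
    }
}
```

For a P that really is linear across the quad, fine and coarse results coincide; they differ only when P varies non-linearly.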
15.5.1. Compute Shader Derivatives
For compute shaders, derivatives are also evaluated using a 2×2 logical neighborhood of compute shader invocations. Compute shader invocations are arranged into neighborhoods according to one of two SPIR-V execution modes.
For the DerivativeGroupQuadsNV execution mode, each neighborhood is assembled from a 2×2×1 region of invocations based on the LocalInvocationId built-in. For the DerivativeGroupLinearNV execution mode, each neighborhood is assembled from a group of four invocations based on the LocalInvocationIndex built-in.
The Compute shader derivative group assignments table specifies the LocalInvocationId or LocalInvocationIndex values for the four values of P in each neighborhood, where x and y are per-neighborhood integer values.
Value      | DerivativeGroupQuadsNV | DerivativeGroupLinearNV
P_{i0,j0}  | (2x + 0, 2y + 0, z)    | 4x + 0
P_{i1,j0}  | (2x + 1, 2y + 0, z)    | 4x + 1
P_{i0,j1}  | (2x + 0, 2y + 1, z)    | 4x + 2
P_{i1,j1}  | (2x + 1, 2y + 1, z)    | 4x + 3
For multi-planar formats, the derivatives are computed based on the plane with the largest dimensions.
15.6. Normalized Texel Coordinate Operations
If the image sampler instruction provides normalized texel coordinates, some of the following operations are performed.
15.6.1. Projection Operation
For Proj image operations, the normalized texel coordinates (s,t,r,q,a) and (if present) the D_{ref} coordinate are transformed as follows:
15.6.2. Derivative Image Operations
Derivatives are used for LOD selection.
These derivatives are either implicit (in an ImplicitLod image instruction in a fragment shader) or explicit (provided explicitly by the shader to the image instruction in any shader).
For implicit derivatives image instructions, the derivatives of texel coordinates are calculated in the same manner as derivative operations above. That is:
Partial derivatives not defined above for certain image dimensionalities are set to zero.
For explicit LOD image instructions, if the optional SPIRV operand Grad is provided, then the operand values are used for the derivatives. The number of components present in each derivative for a given image dimensionality matches the number of partial derivatives computed above.
If the optional SPIR-V operand Lod is provided, then derivatives are set to zero, the cube map derivative transformation is skipped, and the scale factor operation is skipped. Instead, the floating-point scalar coordinate is directly assigned to λ_{base} as described in Level-of-Detail Operation.
For implicit derivative image instructions, the partial derivative values may be computed by linear approximation using a 2×2 neighborhood of shader invocations (known as a quad), as described above. If the instruction is in control flow that is not uniform across the quad, then the derivative values and hence the implicit LOD values are undefined.
If the image or sampler object used by an implicit derivative image instruction is not uniform across the quad and quadDivergentImplicitLod is not supported, then the derivative and LOD values are undefined. Implicit derivatives are well-defined when the image and sampler and control flow are uniform across the quad, even if they diverge between different quads.
If quadDivergentImplicitLod is supported, then derivatives and implicit LOD values are well-defined even if the image or sampler object are not uniform within a quad. The derivatives are computed as specified above, and the implicit LOD calculation proceeds for each shader invocation using its respective image and sampler object.
For the purposes of implicit derivatives, Flat fragment input variables are uniform within a quad.
15.6.3. Cube Map Face Selection and Transformations
For cube map image instructions, the (s,t,r) coordinates are treated as a direction vector (r_{x},r_{y},r_{z}). The direction vector is used to select a cube map face. The direction vector is transformed to a per-face texel coordinate system (s_{face},t_{face}). The direction vector is also used to transform the derivatives to per-face derivatives.
15.6.4. Cube Map Face Selection
The direction vector selects one of the cube map’s faces based on the largest magnitude coordinate direction (the major axis direction). Since two or more coordinates can have identical magnitude, the implementation must have rules to disambiguate this situation.
These rules should be, first, that r_{z} wins over r_{y} and r_{x}, and second, that r_{y} wins over r_{x}. An implementation may choose other rules, but the rules must be deterministic and depend only on (r_{x},r_{y},r_{z}).
The layer number (corresponding to a cube map face), the coordinate selections for s_{c}, t_{c}, r_{c}, and the selection of derivatives, are determined by the major axis direction as specified in the following two tables.
Major Axis Direction | Layer Number | Cube Map Face | s_{c}  | t_{c}  | r_{c}
+r_{x}               | 0            | Positive X    | −r_{z} | −r_{y} | r_{x}
−r_{x}               | 1            | Negative X    | +r_{z} | −r_{y} | r_{x}
+r_{y}               | 2            | Positive Y    | +r_{x} | +r_{z} | r_{y}
−r_{y}               | 3            | Negative Y    | +r_{x} | −r_{z} | r_{y}
+r_{z}               | 4            | Positive Z    | +r_{x} | −r_{y} | r_{z}
−r_{z}               | 5            | Negative Z    | −r_{x} | −r_{y} | r_{z}
Major Axis Direction | ∂s_{c}/∂x  | ∂s_{c}/∂y  | ∂t_{c}/∂x  | ∂t_{c}/∂y  | ∂r_{c}/∂x  | ∂r_{c}/∂y
+r_{x}               | −∂r_{z}/∂x | −∂r_{z}/∂y | −∂r_{y}/∂x | −∂r_{y}/∂y | +∂r_{x}/∂x | +∂r_{x}/∂y
−r_{x}               | +∂r_{z}/∂x | +∂r_{z}/∂y | −∂r_{y}/∂x | −∂r_{y}/∂y | −∂r_{x}/∂x | −∂r_{x}/∂y
+r_{y}               | +∂r_{x}/∂x | +∂r_{x}/∂y | +∂r_{z}/∂x | +∂r_{z}/∂y | +∂r_{y}/∂x | +∂r_{y}/∂y
−r_{y}               | +∂r_{x}/∂x | +∂r_{x}/∂y | −∂r_{z}/∂x | −∂r_{z}/∂y | −∂r_{y}/∂x | −∂r_{y}/∂y
+r_{z}               | +∂r_{x}/∂x | +∂r_{x}/∂y | −∂r_{y}/∂x | −∂r_{y}/∂y | +∂r_{z}/∂x | +∂r_{z}/∂y
−r_{z}               | −∂r_{x}/∂x | −∂r_{x}/∂y | −∂r_{y}/∂x | −∂r_{y}/∂y | −∂r_{z}/∂x | −∂r_{z}/∂y
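The face selection and coordinate selection described above can be sketched directly from the first table. This is an illustrative helper (not specification pseudocode); it implements the suggested tie-break rule (z wins over y and x, y wins over x) and returns the layer number, with (s_{c}, t_{c}, r_{c}) through output parameters.

```c
#include <assert.h>
#include <math.h>

/* Selects a cube map face from direction vector (rx, ry, rz).
 * Returns the layer number 0..5 and writes sc, tc, rc per the table. */
static int select_cube_face(float rx, float ry, float rz,
                            float *sc, float *tc, float *rc) {
    float ax = fabsf(rx), ay = fabsf(ry), az = fabsf(rz);
    if (az >= ax && az >= ay) {              /* z is the major axis */
        *rc = rz;
        if (rz >= 0) { *sc = +rx; *tc = -ry; return 4; }   /* Positive Z */
        else         { *sc = -rx; *tc = -ry; return 5; }   /* Negative Z */
    } else if (ay >= ax) {                   /* y is the major axis */
        *rc = ry;
        if (ry >= 0) { *sc = +rx; *tc = +rz; return 2; }   /* Positive Y */
        else         { *sc = +rx; *tc = -rz; return 3; }   /* Negative Y */
    } else {                                 /* x is the major axis */
        *rc = rx;
        if (rx >= 0) { *sc = -rz; *tc = -ry; return 0; }   /* Positive X */
        else         { *sc = +rz; *tc = -ry; return 1; }   /* Negative X */
    }
}
```

The per-face coordinates would then be normalized as s_{face} = ½·s_{c}/|r_{c}| + ½ and t_{face} = ½·t_{c}/|r_{c}| + ½ in the subsequent coordinate transformation.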
15.6.5. Cube Map Coordinate Transformation
15.6.6. Cube Map Derivative Transformation
15.6.7. Scale Factor Operation, LevelofDetail Operation and Image Level(s) Selection
LOD selection can be either explicit (provided explicitly by the image instruction) or implicit (determined from a scale factor calculated from the derivatives). The implicit LOD selected can be queried using the SPIR-V instruction OpImageQueryLod, which gives access to the λ' and d_{l} values, defined below. These values must be computed with mipmapPrecisionBits of accuracy and may be subject to implementation-specific maxima and minima for very large, out-of-range values.
Scale Factor Operation
The magnitudes of the derivatives are calculated by:

m_{ux} = |∂s/∂x| × w_{base}
m_{vx} = |∂t/∂x| × h_{base}
m_{wx} = |∂r/∂x| × d_{base}
m_{uy} = |∂s/∂y| × w_{base}
m_{vy} = |∂t/∂y| × h_{base}
m_{wy} = |∂r/∂y| × d_{base}

where:

∂t/∂x = ∂t/∂y = 0 (for 1D images)
∂r/∂x = ∂r/∂y = 0 (for 1D, 2D or Cube images)

and:

w_{base} = image.w
h_{base} = image.h
d_{base} = image.d

(for the baseMipLevel, from the image descriptor). For corner-sampled images, the w_{base}, h_{base}, and d_{base} are instead:

w_{base} = image.w − 1
h_{base} = image.h − 1
d_{base} = image.d − 1
A point sampled in screen space has an elliptical footprint in texture space. The minimum and maximum scale factors (ρ_{min}, ρ_{max}) should be the minor and major axes of this ellipse.
The scale factors ρ_{x} and ρ_{y}, calculated from the magnitude of the derivatives in x and y, are used to compute the minimum and maximum scale factors.
ρ_{x} and ρ_{y} may be approximated with functions f_{x} and f_{y}, subject to the following constraints:
The minimum and maximum scale factors (ρ_{min},ρ_{max}) are determined by:

ρ_{max} = max(ρ_{x}, ρ_{y})

ρ_{min} = min(ρ_{x}, ρ_{y})
The ratio of anisotropy is determined by:

η = min(ρ_{max}/ρ_{min}, max_{Aniso})

where:

sampler.max_{Aniso} = maxAnisotropy (from the sampler descriptor)
limits.max_{Aniso} = maxSamplerAnisotropy (from the physical device limits)
max_{Aniso} = min(sampler.max_{Aniso}, limits.max_{Aniso})
If ρ_{max} = ρ_{min} = 0, then all the partial derivatives are zero, the fragment’s footprint in texel space is a point, and η should be treated as 1. If ρ_{max} ≠ 0 and ρ_{min} = 0, then all partial derivatives along one axis are zero, the fragment’s footprint in texel space is a line segment, and η should be treated as max_{Aniso}. However, anytime the footprint is small in texel space the implementation may use a smaller value of η, even when ρ_{min} is zero or close to zero.
If either VkPhysicalDeviceFeatures::samplerAnisotropy or VkSamplerCreateInfo::anisotropyEnable are VK_FALSE, max_{Aniso} is set to 1. If η = 1, sampling is isotropic. If η > 1, sampling is anisotropic.
The sampling rate (N) is derived as:

N = ⌈η⌉
An implementation may round N up to the nearest supported sampling rate. An implementation may use the value of N as an approximation of η.
LevelofDetail Operation
The LOD parameter λ is computed as follows:
where:
and maxSamplerLodBias is the value of the VkPhysicalDeviceLimits limit maxSamplerLodBias.
Image Level(s) Selection
The image level(s) d, d_{hi}, and d_{lo} which texels are read from are determined by an imagelevel parameter d_{l}, which is computed based on the LOD parameter, as follows:
where:
and:

level_{base} = baseMipLevel
q = levelCount − 1

baseMipLevel and levelCount are taken from the subresourceRange of the image view.
If the sampler’s mipmapMode is VK_SAMPLER_MIPMAP_MODE_NEAREST, then the level selected is d = d_{l}. If the sampler’s mipmapMode is VK_SAMPLER_MIPMAP_MODE_LINEAR, two neighboring levels are selected:
δ is the fractional value, quantized to the number of mipmap precision bits, used for linear filtering between levels.
15.6.8. (s,t,r,q,a) to (u,v,w,a) Transformation
The normalized texel coordinates are scaled by the image level dimensions and the array layer is selected.
This transformation is performed once for each level used in filtering (either d, or d_{hi} and d_{lo}).
where:

width_{scale} = width_{level}
height_{scale} = height_{level}
depth_{scale} = depth_{level}

for conventional images, and:

width_{scale} = width_{level} − 1
height_{scale} = height_{level} − 1
depth_{scale} = depth_{level} − 1

for corner-sampled images, and where (Δ_{i}, Δ_{j}, Δ_{k}) are taken from the image instruction if it includes a ConstOffset or Offset operand, otherwise they are taken to be zero.
Operations then proceed to Unnormalized Texel Coordinate Operations.
15.7. Unnormalized Texel Coordinate Operations
15.7.1. (u,v,w,a) to (i,j,k,l,n) Transformation And Array Layer Selection
The unnormalized texel coordinates are transformed to integer texel coordinates relative to the selected mipmap level.
The layer index l is computed as:

l = clamp(RNE(a), 0, layerCount − 1) + baseArrayLayer

where layerCount is the number of layers in the image subresource range of the image view, baseArrayLayer is the first layer from the subresource range, and where:
The sample index n is assigned the value zero.
Nearest filtering (VK_FILTER_NEAREST) computes the integer texel coordinates that the unnormalized coordinates lie within:
where:
shift = 0.0 for conventional images, and:
shift = 0.5 for corner-sampled images.
Linear filtering (VK_FILTER_LINEAR) computes a set of neighboring coordinates which bound the unnormalized coordinates. The integer texel coordinates are combinations of i_{0} or i_{1}, j_{0} or j_{1}, and k_{0} or k_{1}, as well as weights α, β, and γ.
where:
shift = 0.5 for conventional images, and:
shift = 0.0 for corner-sampled images, and where:
where the number of fraction bits retained is specified by VkPhysicalDeviceLimits::subTexelPrecisionBits.
Cubic filtering (VK_FILTER_CUBIC_EXT) computes a set of neighboring coordinates which bound the unnormalized coordinates. The integer texel coordinates are combinations of i_{0}, i_{1}, i_{2} or i_{3}, j_{0}, j_{1}, j_{2} or j_{3}, and k_{0}, k_{1}, k_{2} or k_{3}, as well as weights α, β, and γ.
where:
where the number of fraction bits retained is specified by VkPhysicalDeviceLimits::subTexelPrecisionBits.
15.8. Integer Texel Coordinate Operations
Integer texel coordinate operations may supply an LOD which texels are to be read from or written to, using the optional SPIR-V operand Lod. If the Lod is provided then it must be an integer. The image level selected is:
If d does not lie in the range [baseMipLevel, baseMipLevel + levelCount) then any values fetched are undefined, and any writes are discarded.
15.9. Image Sample Operations
15.9.1. Wrapping Operation
Cube images ignore the wrap modes specified in the sampler. Instead, if VK_FILTER_NEAREST is used within a mip level then VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE is used, and if VK_FILTER_LINEAR is used within a mip level then sampling at the edges is performed as described earlier in the Cube map edge handling section.
The first integer texel coordinate i is transformed based on the addressModeU parameter of the sampler.
where:
j (for 2D and Cube images) and k (for 3D images) are similarly transformed based on the addressModeV and addressModeW parameters of the sampler, respectively.
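The per-mode wrapping transformations (whose formulas are not reproduced above) follow the standard Vulkan address-mode definitions; a sketch for one axis, where size is the level's extent on that axis:

```c
#include <assert.h>

/* Euclidean modulo: result is always in [0, n). */
static int imod(int a, int n) { int r = a % n; return r < 0 ? r + n : r; }
/* mirror(n) = n if n >= 0, else -(1 + n). */
static int mirror(int n) { return n >= 0 ? n : -(1 + n); }
static int clampi(int x, int lo, int hi) { return x < lo ? lo : (x > hi ? hi : x); }

static int wrap_repeat(int i, int size)          { return imod(i, size); }
static int wrap_mirrored_repeat(int i, int size) {
    return (size - 1) - mirror(imod(i, 2 * size) - size);
}
static int wrap_clamp_to_edge(int i, int size)   { return clampi(i, 0, size - 1); }
/* Clamping to [-1, size] lets out-of-range coordinates select the border. */
static int wrap_clamp_to_border(int i, int size) { return clampi(i, -1, size); }
static int wrap_mirror_clamp_to_edge(int i, int size) {
    return clampi(mirror(i), 0, size - 1);
}
```

For example, with size = 4, mirrored repeat maps i = 4 back to texel 3 and i = −1 to texel 0, reflecting at both edges.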
15.9.2. Texel Gathering
SPIR-V instructions with Gather in the name return a vector derived from four texels in the base level of the image view. The rules for the VK_FILTER_LINEAR minification filter are applied to identify the four selected texels. Each texel is then converted to an RGBA value according to conversion to RGBA and then swizzled. A four-component vector is then assembled by taking the component indicated by the Component value in the instruction from the swizzled color value of the four texels.
If the operation does not use the ConstOffsets image operand then the four texels form the 2 × 2 rectangle used for texture filtering:
If the operation does use the ConstOffsets image operand then the offsets allow a custom filter to be defined:
where:
OpImage*Gather must not be used on a sampled image with sampler Y′C_{B}C_{R} conversion enabled.
15.9.3. Texel Filtering
Texel filtering is first performed for each level (either d, or d_{hi} and d_{lo}). If λ is less than or equal to zero, the texture is said to be magnified, and the filter mode within a mip level is selected by the magFilter in the sampler. If λ is greater than zero, the texture is said to be minified, and the filter mode within a mip level is selected by the minFilter in the sampler.
Texel Nearest Filtering
Within a mip level, VK_FILTER_NEAREST filtering selects a single value using the (i, j, k) texel coordinates, with all texels taken from layer l.
Texel Linear Filtering
Within a mip level, VK_FILTER_LINEAR filtering combines 8 (for 3D), 4 (for 2D or Cube), or 2 (for 1D) texel values, together with their linear weights. The linear weights are derived from the fractions computed earlier:
The values of multiple texels, together with their weights, are combined to produce a filtered value.
The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value. When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, a weighted average is computed:
However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above set of multiple texels, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the set of texels with non-zero weights.
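The three reduction modes can be sketched for a single component as follows; the texel values and weights would come from the linear-filtering fractions above (an illustrative helper, not driver code):

```c
#include <assert.h>
#include <float.h>

typedef enum { REDUCE_WEIGHTED_AVERAGE, REDUCE_MIN, REDUCE_MAX } Reduction;

/* Combines n texel values with their weights under the given mode.
 * MIN/MAX ignore the weight magnitudes and consider only texels whose
 * weight is non-zero, matching the behavior described above. */
static double reduce(const double *texels, const double *weights,
                     int n, Reduction mode) {
    if (mode == REDUCE_WEIGHTED_AVERAGE) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += weights[i] * texels[i];
        return sum;
    }
    double r = (mode == REDUCE_MIN) ? DBL_MAX : -DBL_MAX;
    for (int i = 0; i < n; ++i) {
        if (weights[i] == 0.0) continue;
        if (mode == REDUCE_MIN) { if (texels[i] < r) r = texels[i]; }
        else                    { if (texels[i] > r) r = texels[i]; }
    }
    return r;
}
```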
Texel Cubic Filtering
Within a mip level, VK_FILTER_CUBIC_EXT filtering computes a weighted average of 64 (for 3D), 16 (for 2D), or 4 (for 1D) texel values, together with their Catmull-Rom weights. Catmull-Rom weights are derived from the fractions computed earlier. The values of multiple texels, together with their weights, are combined to produce a filtered value.
The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value. When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, a weighted average is computed:
However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above set of multiple texels, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the set of texels with non-zero weights.
Texel Mipmap Filtering
VK_SAMPLER_MIPMAP_MODE_NEAREST filtering returns the value of a single mipmap level, τ = τ[d].
VK_SAMPLER_MIPMAP_MODE_LINEAR filtering combines the values of multiple mipmap levels (τ[hi] and τ[lo]), together with their linear weights. The linear weights are derived from the fraction computed earlier:
The values of multiple mipmap levels, together with their weights, are combined to produce a final filtered value. The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value. When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, a weighted average is computed:
Texel Anisotropic Filtering
Anisotropic filtering is enabled by the anisotropyEnable in the sampler. When enabled, the image filtering scheme accounts for a degree of anisotropy. The particular scheme for anisotropic texture filtering is implementation-dependent. Implementations should consider the magFilter, minFilter and mipmapMode of the sampler to control the specifics of the anisotropic filtering scheme used. In addition, implementations should consider minLod and maxLod of the sampler.
The following describes one particular approach to implementing anisotropic filtering for the 2D Image case; implementations may choose other methods. Given a magFilter and minFilter of VK_FILTER_LINEAR and a mipmapMode of VK_SAMPLER_MIPMAP_MODE_NEAREST:
Instead of a single isotropic sample, N isotropic samples are taken within the image footprint of the image level d to approximate an anisotropic filter. The sum τ_{2Daniso} is defined using the single isotropic τ_{2D}(u,v) at level d.
When VkSamplerReductionModeCreateInfo::reductionMode is set to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, the above summation is used. However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the values with non-zero weights.
15.10. Texel Footprint Evaluation
The SPIR-V instruction OpImageSampleFootprintNV evaluates the set of texels from a single mip level that would be accessed during a texel filtering operation. In addition to the inputs that would be accepted by an equivalent OpImageSample* instruction, OpImageSampleFootprintNV accepts two additional inputs.
The Granularity input is an integer identifying the size of texel groups used to evaluate the footprint. Each bit in the returned footprint mask corresponds to an aligned block of texels whose size is given by the following table:
Granularity | Dim = 2D    | Dim = 3D
0           | unsupported | unsupported
1           | 2x2         | 2x2x2
2           | 4x2         | unsupported
3           | 4x4         | 4x4x2
4           | 8x4         | unsupported
5           | 8x8         | unsupported
6           | 16x8        | unsupported
7           | 16x16       | unsupported
8           | unsupported | unsupported
9           | unsupported | unsupported
10          | unsupported | 16x16x16
11          | 64x64       | 32x16x16
12          | 128x64      | 32x32x16
13          | 128x128     | 32x32x32
14          | 256x128     | 64x32x32
15          | 256x256     | unsupported
The Coarse input is used to select between the two mip levels that may be accessed during texel filtering when using a mipmapMode of VK_SAMPLER_MIPMAP_MODE_LINEAR. When filtering between two mip levels, a Coarse value of true requests the footprint in the lower-resolution mip level (higher level number), while false requests the footprint in the higher-resolution mip level. If texel filtering would access only a single mip level, the footprint in that level would be returned when Coarse is set to false; an empty footprint would be returned when Coarse is set to true.
The footprint for OpImageSampleFootprintNV is returned in a structure with six members:

The first member is a boolean value that is true if the texel filtering operation would access only a single mip level.
The second member is a two- or three-component integer vector holding the footprint anchor location. For two-dimensional images, the returned components are in units of eight texel groups. For three-dimensional images, the returned components are in units of four texel groups.
The third member is a two- or three-component integer vector holding a footprint offset relative to the anchor. All returned components are in units of texel groups.
The fourth member is a two-component integer vector mask, which holds a bitfield identifying the set of texel groups in an 8x8 or 4x4x4 neighborhood relative to the anchor and offset.
The fifth member is an integer identifying the mip level containing the footprint identified by the anchor, offset, and mask.
The sixth member is an integer identifying the granularity of the returned footprint.
For footprints in two-dimensional images (Dim2D), the mask returned by OpImageSampleFootprintNV indicates whether each texel group in an 8x8 local neighborhood of texel groups would have one or more texels accessed during texel filtering. In the mask, the texel group with local group coordinates \((lgx,lgy)\) is considered covered if and only if:
where:

\(0 <= lgx < 8\) and \(0 <= lgy < 8\); and
\(mask\) is the returned two-component mask.

The local group with coordinates \((lgx,lgy)\) in the mask is considered covered if and only if the texel filtering operation would access one or more texels \(\tau_{ij}\) in the returned mip level where:
and

\(i0 <= i <= i1\) and \(j0 <= j <= j1\);
\(gran\) is a two-component vector holding the width and height of the texel group identified by the granularity;
\(anchor\) is the returned two-component anchor vector; and
\(offset\) is the returned two-component offset vector.
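Since the coverage equation itself is not reproduced above, the following is only an illustrative sketch of testing one bit of the returned two-component 2D mask. The bit packing shown (row-major across the 8x8 neighborhood, with lgx varying fastest, low bits of the first component first) is an assumption for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Tests coverage of local group (lgx, lgy), 0 <= lgx,lgy < 8, in the
 * two-component mask returned for a 2D footprint. Bit index lgy*8 + lgx
 * is assumed; bits 0..31 live in mask[0], bits 32..63 in mask[1]. */
static int group_covered_2d(const uint32_t mask[2], int lgx, int lgy) {
    int bit = lgy * 8 + lgx;                 /* 0..63 across 8x8 groups */
    return (int)((mask[bit / 32] >> (bit % 32)) & 1u);
}
```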
For footprints in three-dimensional images (Dim3D), the mask returned by OpImageSampleFootprintNV indicates whether each texel group in a 4x4x4 local neighborhood of texel groups would have one or more texels accessed during texel filtering. In the mask, the texel group with local group coordinates \((lgx,lgy,lgz)\) is considered covered if and only if:
where:

\(0 <= lgx < 4\), \(0 <= lgy < 4\), and \(0 <= lgz < 4\); and
\(mask\) is the returned two-component mask.

The local group with coordinates \((lgx,lgy,lgz)\) in the mask is considered covered if and only if the texel filtering operation would access one or more texels \(\tau_{ijk}\) in the returned mip level where:
and

\(i0 <= i <= i1\), \(j0 <= j <= j1\), and \(k0 <= k <= k1\);
\(gran\) is a three-component vector holding the width, height, and depth of the texel group identified by the granularity;
\(anchor\) is the returned three-component anchor vector; and
\(offset\) is the returned three-component offset vector.
If the sampler used by OpImageSampleFootprintNV enables anisotropic texel filtering via anisotropyEnable, it is possible that the set of texel groups accessed in a mip level may be too large to be expressed using an 8x8 or 4x4x4 mask using the granularity requested in the instruction. In this case, the implementation uses a texel group larger than the requested granularity. When a larger texel group size is used, OpImageSampleFootprintNV returns an integer granularity value that can be interpreted in the same manner as the granularity value provided to the instruction to determine the texel group size used. If anisotropic texel filtering is disabled in the sampler, or if an anisotropic footprint can be represented as an 8x8 or 4x4x4 mask with the requested granularity, OpImageSampleFootprintNV will use the requested granularity as-is and return a granularity value of zero.
OpImageSampleFootprintNV supports only two- and three-dimensional image accesses (Dim2D and Dim3D), and the footprint returned is undefined if a sampler uses an addressing mode other than VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
15.11. Image Operation Steps
Each step described in this chapter is performed by a subset of the image instructions:

Texel Input Validation Operations, Format Conversion, Texel Replacement, Conversion to RGBA, and Component Swizzle: performed by all instructions except OpImageWrite.
Depth Comparison: performed by OpImage*Dref instructions.
All Texel Output Operations: performed by OpImageWrite.
Projection: performed by all OpImage*Proj instructions.
Derivative Image Operations, Cube Map Operations, Scale Factor Operation, Level-of-Detail Operation and Image Level(s) Selection, and Texel Anisotropic Filtering: performed by all OpImageSample* and OpImageSparseSample* instructions.
(s,t,r,q,a) to (u,v,w,a) Transformation, Wrapping, and (u,v,w,a) to (i,j,k,l,n) Transformation And Array Layer Selection: performed by all OpImageSample, OpImageSparseSample, and OpImage*Gather instructions.
Texel Gathering: performed by OpImage*Gather instructions.
Texel Footprint Evaluation: performed by OpImageSampleFootprintNV instructions.
Texel Filtering: performed by all OpImageSample* and OpImageSparseSample* instructions.
Sparse Residency: performed by all OpImageSparse* instructions.