Before building, make sure you have the required dependencies installed — typically a C++ toolchain, Meson (with Ninja), and the Vulkan SDK.
Clone the repository and build with Meson:
```shell
git clone https://github.com/fini03/vkDuck.git
cd vkDuck
meson setup build
meson compile -C build
./build/main
```
After building, the editor opens. Your working project follows this structure:
```
my-project/
  shaders/            # your .slang shader sources
  data/
    models/           # glTF/GLB models go here
    saved_states/     # scene configuration files
```
Note: All 3D models must be placed inside data/models/ at the project root before they can be loaded in the Asset Library.
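As a concrete sketch (the filename Duck.glb below is a placeholder, not a bundled asset), staging a model from the project root looks like:

```shell
# Create the models directory the Asset Library scans (run from the project root).
mkdir -p data/models

# Then copy your own glTF/GLB file into it, e.g.:
# cp ~/Downloads/Duck.glb data/models/
```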
vkDuck uses the Slang shading language. Each shader file must follow a specific structure so the editor can reflect on it, auto-generate node pins, and bind resources correctly.
common.slang Module (Fixed Contract)
The common.slang file defines the shared CPU/GPU types for lights, camera, object transforms, and materials. The light and camera structs are fixed and must not be changed. Every shader should import it.
```slang
module common;

export public struct Light {
    public float3 position;
    public float radius;
    public float3 color;
    public float intensity;
};

export public struct LightsBuffer {
    public int numLights;
    public int _pad0;
    public int _pad1;
    public int _pad2;
    public Light lights[128];
};

export public struct Camera {
    public float4x4 view;
    public float4x4 invView;
    public float4x4 proj;
    public float4x4 invProj;
};

export public struct ObjectUniforms {
    public float4x4 model;
    public float4x4 normal;
};

export public struct MaterialParams {
    public float4 baseColorFactor;
    public float4 emissiveFactor;
    public float metallicFactor;
    public float roughnessFactor;
    public float _padding0;
    public float _padding1;
};
```
Every shader file must define three specific structures. The names don't matter; the semantic annotations do, since the editor uses them for reflection.
VSInput
Supported semantics:
| Semantic | Type | Description |
|---|---|---|
| POSITION | float3/float4 | Vertex object/world position |
| NORMAL | float3 | Vertex normal |
| TEXCOORD0 | float2 | Primary UV coordinates |
| TANGENT | float4 | Tangent vector |
```slang
struct VSInput {
    float3 position : POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD0;
};
```
VSOutput
Must include SV_Position for clip-space position. Any other fields (UV, world-space position, normals, etc.) are interpolated and passed to the fragment stage.
```slang
struct VSOutput {
    float4 position : SV_Position;
    float2 uv : UV;
    float3 positionW : POSITION;
    float3 normalW : NORMAL;
};
```
FSOut
Must write to SV_Target0 for the main colour attachment. Additional render targets can be added as SV_Target1, SV_Target2, etc.
```slang
struct FSOut {
    float4 color : SV_Target0;
};
```
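If additional render targets are needed, the output struct simply grows by one field per attachment. A hypothetical two-target layout (the struct and field names here are illustrative, not part of the contract) might look like:

```slang
// Sketch: writing two render targets, e.g. colour plus normals for a G-buffer pass.
// Only the SV_TargetN semantics matter; names are free to choose.
struct FSOutMRT {
    float4 color  : SV_Target0;   // main colour attachment
    float4 normal : SV_Target1;   // second colour attachment
};
```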
All resources must be explicitly tagged with [[vk::binding(binding, set)]]. The first argument is the binding number within the descriptor set; the second is the set number.
```slang
// Texture in set 0, binding 0
[[vk::binding(0, 0)]] Sampler2D albedoTexture;

// Camera UBO in set 1, binding 0
[[vk::binding(0, 1)]] ConstantBuffer<Camera> camera;

// Object transform UBO in set 1, binding 1
[[vk::binding(1, 1)]] ConstantBuffer<ObjectUniforms> objectUBO;
```
Slang uses [shader("vertex")] and [shader("fragment")] attributes to mark entry points. The editor reflects these to determine what stages the shader file exposes.
```slang
[shader("vertex")]
VSOutput vertexMain(VSInput IN) { ... }

[shader("fragment")]
FSOut fragmentMain(VSOutput IN) { ... }
```
When you load or save a shader in the editor, vkDuck reflects the SPIR-V to discover inputs, outputs, and resource bindings. Each reflected resource automatically becomes a pin on the Pipeline node, letting you wire up Camera, Light, and Model nodes without any manual configuration.
The heart of vkDuck is the visual node graph. A renderer is assembled by creating nodes and connecting their output pins to input pins of other nodes.
| Node | Purpose |
|---|---|
| Model Source | Loads a glTF/GLB model file and exposes its mesh sub-nodes |
| Vertex Data | Provides vertex/index buffer data from a model mesh to the pipeline |
| UBO | Provides per-object uniform buffer data (model & normal matrices) |
| Material | Provides texture and material parameter bindings for a mesh |
| Camera | Provides view/projection matrices and camera mode controls |
| Light | Provides a light source (position, colour, intensity) to the pipeline |
| Pipeline | The core render pass node: takes a shader, model data, and resources |
| Present | Outputs the final rendered image to the swapchain / preview window |
Copy all .gltf or .glb files into data/models/ at your project root. The editor will not see them otherwise.
Create a Model Source node. Open the Asset Library tab in the editor. Your model files will appear there. Click a model to import it into the scene.
For each mesh inside the Model Source, create three companion nodes — Vertex Data, UBO, and Material — and connect them to the Pipeline node:
Note: A single model with multiple meshes (e.g. a character with separate body/armour/hair meshes) needs one set of Vertex Data + UBO + Material per mesh.
Create a Camera node from the node menu. Choose a camera mode (FPS, Orbital, or Fixed) using the node's drop-down. Connect its output pin to the Camera input of the Pipeline node.
Create one or more Light nodes. Set the position, colour, radius, and intensity for each. Connect them to the LightsBuffer input on the Pipeline node.
Create a Pipeline node. In its properties, select the shader file the render pass should use.
Create a Present node and connect the Pipeline node's output to it. The real-time preview window will immediately show the result.
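Putting the steps above together, a minimal single-mesh graph wires up roughly like this (a sketch of the connections, not actual editor output):

```
Model Source --> Vertex Data --+
Model Source --> UBO ----------+
Model Source --> Material -----+--> Pipeline --> Present
Camera ------------------------+
Light -------------------------+
```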
A complete, minimal diffuse shader demonstrating all required conventions:
```slang
import common;

// ── Resource Bindings ──────────────────────────────────────────────────────
[[vk::binding(0, 0)]] Sampler2D albedoTexture;
[[vk::binding(0, 1)]] ConstantBuffer<Camera> camera;
[[vk::binding(0, 2)]] ConstantBuffer<ObjectUniforms> obj;
[[vk::binding(0, 3)]] ConstantBuffer<LightsBuffer> lightsBuffer;
[[vk::binding(1, 0)]] ConstantBuffer<MaterialParams> material;

// ── Vertex Input / Output ──────────────────────────────────────────────────
struct VSInput {
    float3 position : POSITION;
    float3 normal : NORMAL;
    float2 uv : TEXCOORD0;
};

struct VSOutput {
    float4 position : SV_Position;
    float2 uv : UV;
    float3 positionW : POSITION;
    float3 normalW : NORMAL;
};

struct FSOut {
    float4 color : SV_Target0;
};

// ── Vertex Shader ──────────────────────────────────────────────────────────
[shader("vertex")]
VSOutput vertexMain(VSInput IN) {
    VSOutput OUT;
    float4 worldPos = mul(obj.model, float4(IN.position, 1.0));
    OUT.positionW = worldPos.xyz;
    OUT.normalW = normalize(mul((float3x3)obj.normal, IN.normal));
    OUT.position = mul(camera.proj, mul(camera.view, worldPos));
    OUT.uv = IN.uv;
    return OUT;
}

// ── Fragment Shader ────────────────────────────────────────────────────────
[shader("fragment")]
FSOut fragmentMain(VSOutput IN) {
    FSOut OUT;
    float4 baseColor = albedoTexture.Sample(IN.uv)
                     * material.baseColorFactor;

    // Simple Lambertian shading over all lights
    float3 diffuse = float3(0.0, 0.0, 0.0);
    for (int i = 0; i < lightsBuffer.numLights; ++i) {
        Light l = lightsBuffer.lights[i];
        float3 dir = normalize(l.position - IN.positionW);
        float NdotL = max(dot(IN.normalW, dir), 0.0);
        float dist = length(l.position - IN.positionW);
        float atten = clamp(1.0 - dist / l.radius, 0.0, 1.0);
        diffuse += l.color * l.intensity * NdotL * atten;
    }

    OUT.color = float4(baseColor.rgb * diffuse, baseColor.a);
    return OUT;
}
```
GitHub · Vulkanised 2026 Talk · quackie.at