Experimental Linux-native DirectStorage-style runtime: a working CPU backend today, an early GPU/Vulkan backend, and a path toward Wine/Proton integration.
ds-runtime is an experimental C++ runtime that explores how a DirectStorage-style I/O and decompression pipeline can be implemented natively on Linux.
The project focuses on:

- Clean, idiomatic systems-level C++
- Explicit and auditable concurrency
- POSIX-first file I/O
- A backend-based architecture that cleanly separates:
  - request orchestration (queues, lifetime, synchronization)
  - execution backends (CPU now, Vulkan/GPU planned)
The long-term motivation is to provide a solid architectural foundation for a native Linux implementation of DirectStorage-like workloads, suitable for eventual integration with Wine / Proton.
This repository intentionally prioritizes structure, clarity, and correctness over premature optimization.
- Status: Experimental
- CPU backend: Fully implemented and working
- GPU/Vulkan backend: Experimental (staging buffer copies only, no GPU compute yet)
- io_uring backend: Experimental (host memory only, requires liburing)
The codebase has been significantly improved:

- ✅ All build-breaking issues fixed - compiles cleanly
- ✅ CPU backend fully functional - all tests passing
- ✅ Comprehensive test suite - 4 test suites with a 100% pass rate
- ✅ Enhanced C ABI - proper enum support and bytes_transferred tracking
- ✅ Error handling - robust error reporting with rich context
- ✅ Request management - take_completed() API works correctly
Test suites:

- basic_queue_test: Core queue operations
- cpu_backend_test: Read/write, partial reads, compression, concurrent ops
- error_handling_test: Invalid FD, missing files, error context
- gdeflate_stub_test: Unsupported compression error handling
Implemented and working:

- ✅ CPU backend with thread pool
- ✅ Read and write operations
- ✅ FakeUppercase demo compression
- ✅ Error reporting with callbacks
- ✅ Request completion tracking
- ✅ Partial read handling
- ✅ C ABI for Wine/Proton integration
- ✅ Multiple concurrent requests
Known limitations:

- ⚠️ GDeflate compression: returns an ENOTSUP error (intentional stub; requires a format specification)
- ⚠️ Vulkan GPU compute: only staging buffer copies work; compute pipelines are not implemented
- ⚠️ io_uring backend: requires the liburing dependency (not built by default)
- ⚠️ Request cancellation: enum added, but the cancel() method is not yet implemented
See MISSING_FEATURES.md for the complete roadmap and COMPARISON.md for a comparison of the documentation against the current implementation.
This is:

- A reference-quality runtime skeleton
- A clean async I/O queue with pluggable backends
- A realistic starting point for Linux-native asset streaming
- A codebase meant to be read, reviewed, and extended

This is not:

- A drop-in replacement for Microsoft DirectStorage
- A performance benchmark
- A full GPU decompression implementation (yet)
- Production-ready middleware
Modern games increasingly rely on DirectStorage-style I/O to stream large assets (textures, meshes, shaders) efficiently from disk to memory and GPU. On Windows, DirectStorage provides a standardized API that reduces CPU overhead, minimizes copying, and enables better asset streaming at scale.
On Linux, no equivalent native API exists today.
As a result, Proton/Wine-based games that depend on DirectStorage semantics must either:
- Fall back to legacy, CPU-heavy I/O paths, or
- Emulate Windows behavior in user space with limited visibility into Linux-native async I/O, memory management, and GPU synchronization.
This creates a structural gap.
Without a native Linux-side abstraction:
- Asset streaming is fragmented across engines and ad-hoc thread pools
- CPU cores are wasted on I/O and decompression work that could be pipelined or offloaded
- GPU upload paths are often bolted on after the fact rather than designed into the I/O model
- Proton/Wine must translate Windows semantics without a clear Linux analogue
DirectStorage is not just "faster file I/O"; it is an architectural contract between the game, the OS, and the GPU.
A Linux-native DirectStorage-style runtime enables:
- A clear, explicit async I/O model built on Linux primitives (e.g. thread pools today, io_uring tomorrow)
- Batching and queue-based submission that matches how modern engines structure asset streaming
- A first-class path from disk → CPU → GPU, rather than implicit or engine-specific glue
- Cleaner integration points for Wine/Proton, avoiding opaque shims or duplicated logic
- An evolution path toward GPU-assisted decompression and copies via Vulkan
This is not about copying Windows APIs verbatim. It is about providing a native Linux abstraction that maps cleanly onto modern storage, memory, and GPU systems.
ds-runtime explores what such a runtime could look like on Linux:
- A small, explicit request/queue model inspired by DirectStorage semantics
- Pluggable backends (CPU today, Vulkan/GPU tomorrow)
- A stable C ABI suitable for integration into Wine/Proton or engines
- Clear ownership, lifetime, and synchronization rules
- Documentation that treats integration as a first-class concern
Even partial adoption of these ideas can reduce duplication, clarify I/O paths, and make asset streaming behavior more predictable on Linux.
The long-term goal is not to replace engines or drivers, but to provide a shared, understandable foundation for high-performance game I/O on Linux.
```text
┌─────────────┐
│   Client    │
│ (game/app)  │
└──────┬──────┘
       │ enqueue Request
       ▼
┌─────────────┐
│  ds::Queue  │  ← orchestration, lifetime, waiting
└──────┬──────┘
       │ submit
       ▼
┌────────────────────┐
│    ds::Backend     │  ← execution (CPU / Vulkan / future)
└────────────────────┘
```
See docs/design.md for details on backend evolution.
ds::Request

Describes what to load:

- POSIX file descriptor
- Byte offset
- Size
- Destination pointer
- Operation (read/write)
- Memory location (host or GPU buffer)
- Compression mode
- Optional GPU buffer handle + offset when using the Vulkan backend
- Completion status / error

This maps cleanly to:

- Linux I/O semantics
- DirectStorage request descriptors
- Future GPU-resident workflows
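A minimal sketch of what such a request might look like (field and enum names here are illustrative; the actual definition lives in include/ds_runtime.hpp):

```cpp
// Illustrative sketch only: the real definitions are in include/ds_runtime.hpp
// and member/enum names may differ. It mirrors the fields listed above.
#include <cstddef>
#include <cstdint>

enum class Operation     { Read, Write };
enum class RequestMemory { Host, Gpu };
enum class Compression   { None, FakeUppercase, GDeflate };
enum class Status        { Pending, Completed, Error };

struct Request {
    int           fd          = -1;                    // POSIX file descriptor
    std::uint64_t offset      = 0;                     // byte offset within the file
    std::size_t   size        = 0;                     // bytes to transfer
    void*         destination = nullptr;               // host destination pointer
    Operation     op          = Operation::Read;       // read or write
    RequestMemory memory      = RequestMemory::Host;   // host memory or GPU buffer
    Compression   compression = Compression::None;     // decompression mode
    std::uint64_t gpu_buffer  = 0;                     // GPU buffer handle (Vulkan backend)
    std::uint64_t gpu_offset  = 0;                     // offset into the GPU buffer
    Status        status      = Status::Pending;       // completion status / error
};
```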
ds::Queue

Responsible for:

- Collecting requests
- Submitting them to a backend
- Tracking in-flight work
- Optional blocking via wait_all()
- Retrieving completed requests via take_completed()

The queue does not perform I/O itself.
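A minimal usage sketch, assuming enqueue()/submit() entry points with these names (the exact method names may differ from the public header; wait_all() and take_completed() are the documented calls):

```cpp
// Hypothetical usage sketch; enqueue()/submit() names are assumptions, and
// request member names follow the illustrative struct above.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

void stream_asset(ds::Queue& queue)
{
    int fd = ::open("demo_asset.bin", O_RDONLY);
    std::vector<char> buffer(4096);

    ds::Request req;
    req.fd          = fd;
    req.offset      = 0;
    req.size        = buffer.size();
    req.destination = buffer.data();

    queue.enqueue(req);   // hand the request to the queue (orchestration only)
    queue.submit();       // forward pending requests to the active backend
    queue.wait_all();     // optional blocking until in-flight work completes

    auto completed = queue.take_completed();   // retrieve finished requests
    std::printf("completed %zu request(s)\n", completed.size());

    ::close(fd);
}
```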
ds::Backend
Abstract execution interface.
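A sketch of the shape such an interface might take (illustrative only; the actual virtual methods are declared in ds_runtime.hpp and may differ):

```cpp
// Illustrative shape only; the real virtual interface lives in ds_runtime.hpp
// and its method names/signatures may differ.
#include <vector>

struct Request;  // see the request sketch above

class Backend {
public:
    virtual ~Backend() = default;

    // Execute a batch of requests; completions/errors are reported back to the queue.
    virtual void submit(std::vector<Request*>& requests) = 0;

    // Block until everything previously submitted to this backend has finished.
    virtual void wait_idle() = 0;
};
```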
Error reporting:

- ds::set_error_callback installs a process-wide hook for rich diagnostics
- ds::report_error emits subsystem/operation/file/line context and timestamps
- ds::report_request_error adds request-specific fields (fd/offset/size/memory)
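For example, a hook that logs every reported error to stderr might look roughly like this (the callback signature and the ErrorInfo field names are assumptions; the real declarations are in ds_runtime.hpp):

```cpp
// Assumed callback shape and ErrorInfo field names; check ds_runtime.hpp
// for the real declarations.
#include <cstdio>

void install_error_logging()
{
    ds::set_error_callback([](const ds::ErrorInfo& info) {
        // Assuming the subsystem/operation/message context described above
        // is exposed as C strings.
        std::fprintf(stderr, "[ds-runtime] %s/%s: %s\n",
                     info.subsystem, info.operation, info.message);
    });
}
```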
Current implementation:

- CPU backend
  - pread()/pwrite()-based I/O
- Vulkan backend (experimental)
  - pread()/pwrite() plus Vulkan staging buffer copies to GPU buffers
- io_uring backend (experimental)
  - io_uring host I/O path (host memory only)
- Small internal thread pool
- Demo "decompression" stage (uppercase transform); GDeflate is stubbed
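The CPU path boils down to standard positional POSIX I/O. A simplified sketch of what a single host-memory read request amounts to (error reporting trimmed):

```cpp
// Simplified sketch of the CPU backend's read path: positional POSIX I/O into
// the request's host destination buffer. The real code adds error reporting,
// partial-read accounting, and the optional decompression stage.
#include <unistd.h>
#include <cstddef>

ssize_t read_request(int fd, void* destination, std::size_t size, off_t offset)
{
    std::size_t done = 0;
    while (done < size) {
        ssize_t n = ::pread(fd, static_cast<char*>(destination) + done,
                            size - done, offset + static_cast<off_t>(done));
        if (n <= 0)      // 0 = EOF (partial read), <0 = error
            break;
        done += static_cast<std::size_t>(n);
    }
    return static_cast<ssize_t>(done);
}
```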
Planned backends:

- Vulkan compute backend (GPU copy / decompression)
- Vendor-specific GPU paths
This mirrors how real DirectStorage systems are structured:

- A front-end queue API
- A backend that owns execution and synchronization
- Clear separation between:
  - disk I/O
  - decompression
  - GPU involvement

Keeping these layers explicit makes the code:

- Easier to reason about
- Easier to test
- Easier to extend without rewrites
Code hygiene goals

The project follows conventions expected by experienced Linux developers:

- Header / implementation split
- No global state
- RAII throughout
- Direct use of POSIX APIs (open, pread, close)
- No exceptions crossing public API boundaries
- Minimal but explicit threading
- No macro or template magic

If something happens, it should be obvious where and why.
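As one illustration of the RAII-over-POSIX style (this is not taken from the project's sources): a scoped file descriptor that cannot leak.

```cpp
// Illustrative example of the RAII + POSIX style described above; not a copy
// of the project's internal types.
#include <fcntl.h>
#include <unistd.h>
#include <utility>

class ScopedFd {
public:
    explicit ScopedFd(int fd = -1) noexcept : fd_(fd) {}
    ScopedFd(ScopedFd&& other) noexcept : fd_(std::exchange(other.fd_, -1)) {}
    ScopedFd& operator=(ScopedFd&& other) noexcept {
        if (this != &other) { reset(); fd_ = std::exchange(other.fd_, -1); }
        return *this;
    }
    ScopedFd(const ScopedFd&) = delete;
    ScopedFd& operator=(const ScopedFd&) = delete;
    ~ScopedFd() { reset(); }

    int  get() const noexcept { return fd_; }
    void reset() noexcept { if (fd_ >= 0) ::close(fd_); fd_ = -1; }

private:
    int fd_;
};

// Usage: the descriptor is closed automatically when `file` leaves scope.
// ScopedFd file(::open("demo_asset.bin", O_RDONLY));
```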
Modern Windows titles increasingly rely on DirectStorage-style APIs for asset streaming and decompression. On Linux, these calls are currently handled via compatibility-layer shims or fall back to traditional I/O paths.
This project explores what a native Linux runtime for DirectStorage-like workloads could look like, with an emphasis on:
- Correct API semantics
- Clean separation between queue orchestration and execution
- Explicit backend design (CPU today, GPU later)
- Compatibility with Wine / Proton architecture
The current implementation focuses on a CPU backend that provides:
- Asynchronous I/O semantics
- Explicit completion tracking
- A decompression stage hook (currently a demo transform)
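The current decompression stage hook is the FakeUppercase demo transform mentioned earlier; conceptually it is just a post-read pass over the destination buffer, roughly:

```cpp
// Conceptual sketch of the demo "decompression" stage: an in-place uppercase
// transform applied to the destination buffer after the read completes.
// The real hook in src/ds_runtime.cpp may be structured differently.
#include <cctype>
#include <cstddef>

void fake_uppercase_transform(char* data, std::size_t size)
{
    for (std::size_t i = 0; i < size; ++i)
        data[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(data[i])));
}
```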
This is intended as a foundational layer that could back a future
dstorage.dll implementation in Wine/Proton, with GPU acceleration added
incrementally once semantics and integration points are validated.
For integration guidance (including a no-shim option), see docs/wine_proton.md. For Arch Linux-specific Vulkan notes, see docs/archlinux_vulkan_integration.md.
The included demo program:

- Writes a small test asset to disk
- Enqueues two asynchronous requests:
  - One raw read
  - One "compressed" read (fake uppercase transform)
- Submits them concurrently
- Waits for completion
- Prints the results
Example output:

```text
[demo] starting DirectStorage-style CPU demo
[demo] wrote 41 bytes to demo_asset.bin
[demo] submitting 2 requests
[demo] waiting for completion (in-flight=2)
[demo] all requests completed (in-flight=0)
raw   : "Hello DirectStorage-style queue on Linux!"
upper : "HELLO DIRECTSTORAGE-STYLE QUEUE ON LINUX!"
```
Additional demo:

- ds_asset_streaming writes a packed asset file and issues concurrent reads, exercising request offsets and the error reporting callback.
Requirements:

- Linux
- C++20 compiler (Clang or GCC)
- pthreads
- CMake ≥ 3.16
- Vulkan SDK (optional, required for the Vulkan backend)
- liburing (optional, required for the io_uring backend)
```sh
git clone https://github.com/infinityabundance/ds-runtime.git
cd ds-runtime
mkdir build
cd build
cmake ..
cmake --build .
```

Run the demo:

```sh
# from inside build/examples/
./ds_demo
```

Enable tests with:
```sh
cmake -B build -S . -DDS_BUILD_TESTS=ON
cmake --build build
ctest --test-dir build
```

Run the asset streaming demo:

```sh
# from inside build/examples/
./ds_asset_streaming
```

The build produces a shared object libds_runtime.so that exposes both the
C++ API (include/ds_runtime.hpp) and a C-compatible ABI (include/ds_runtime_c.h).
This makes it easier to integrate the runtime with non-C++ code or FFI layers
like Wine or Proton shims.
To link against the shared object from another project:
cc -I/path/to/ds-runtime/include \
-L/path/to/ds-runtime/build \
-lds_runtime \
your_app.cWhen built with Vulkan, you can construct a Vulkan backend and submit requests
with RequestMemory::Gpu to move data between files and GPU buffers. Requests
use gpu_buffer + gpu_offset to identify the destination/source GPU buffer.
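A hedged sketch of what a GPU-destined request might look like (member names follow the conventions above; the exact API is defined in include/ds_runtime.hpp and ds_runtime_vulkan.hpp, and the enqueue() call is the same hypothetical name used earlier):

```cpp
// Illustrative only: RequestMemory::Gpu, gpu_buffer, and gpu_offset are the
// documented fields; other member names and the enqueue() call are assumptions.
#include <cstddef>
#include <cstdint>

void enqueue_gpu_upload(ds::Queue& queue, int fd, std::uint64_t file_offset,
                        std::size_t size, std::uint64_t gpu_buffer_handle)
{
    ds::Request req;
    req.fd         = fd;                      // source file
    req.offset     = file_offset;             // byte offset within the file
    req.size       = size;                    // bytes to transfer
    req.memory     = ds::RequestMemory::Gpu;  // destination is a GPU buffer
    req.gpu_buffer = gpu_buffer_handle;       // which GPU buffer to write into
    req.gpu_offset = 0;                       // offset inside that buffer

    queue.enqueue(req);                       // hypothetical enqueue call (see above)
}
```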
```text
├── CMakeLists.txt                 # Top-level CMake build configuration
│
├── include/                       # Public C++ API headers
│   ├── ds_runtime.hpp             # Core DirectStorage-style runtime interface
│   ├── ds_runtime_vulkan.hpp      # Vulkan backend interface (experimental)
│   └── ds_runtime_uring.hpp       # io_uring backend interface (experimental)
│
├── src/                           # Runtime implementation
│   ├── ds_runtime.cpp             # Queue, backend, and CPU execution logic
│   ├── ds_runtime_vulkan.cpp      # Vulkan backend implementation
│   └── ds_runtime_uring.cpp       # io_uring backend implementation
│
├── examples/                      # Standalone example programs
│   ├── ds_demo_main.cpp           # CPU-only demo exercising ds::Queue and requests
│   ├── asset_streaming_main.cpp   # Asset streaming demo with concurrent reads
│   │
│   └── vk-copy-test/              # Experimental Vulkan groundwork
│       ├── copy.comp              # Vulkan compute shader (GLSL)
│       ├── copy.comp.spv          # Precompiled SPIR-V shader
│       ├── demo_asset.bin         # Small test asset for GPU copy
│       └── vk_copy_test.cpp       # Vulkan copy demo (CPU → GPU → CPU)
├── docs/                          # Design and architecture documentation
│   └── design.md                  # Backend evolution and architectural notes
│
├── assets/                        # Non-code assets used by documentation
│   └── logo.png                   # Project logo displayed in README
│
├── README.md                      # Project overview, build instructions, roadmap
└── LICENSE                        # Apache-2.0 license
```
This project is not affiliated with Microsoft, Valve, or the Wine project.
However, it is intentionally structured so that:

- ds::Request can map to DSTORAGE_REQUEST_DESC
- ds::Queue can map to a DirectStorage queue object
- A Vulkan backend can integrate with Proton's D3D12 → Vulkan interop
The goal is to explore what a native Linux DirectStorage-style runtime could look like, with real code and real execution paths.
Roadmap:

- ✅ Fix build issues (missing fields/methods): done (see status above)
- ♻️ Real compression format (CPU GDeflate first)
- ♻️ io_uring backend (host memory, needs verification)
- ♻️ Wine / Proton integration experiments
- ♻️ Real-world game testing

This project intentionally starts small and correct. See MISSING_FEATURES.md for detailed status.
Discussion, feedback, and code review are welcome.
If you are a:

- Linux systems developer
- Graphics / Vulkan developer
- Wine or Proton contributor

…your perspective is especially appreciated.
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Linux deserves first-class asset streaming paths, not just compatibility shims.
Even if this repository never becomes the solution, it aims to push that discussion forward with real, auditable code rather than speculation.