VISCA Library Overhaul: Zero-Allocation & Ergonomic API

by Omar Yusuf

Hey guys! Today, we're diving deep into the exciting overhaul of the VISCA library. This project, codenamed EPIC, aims to deliver a clean, legacy-free, and profile-centric VISCA library. Think consistent mental models, crisp runtime semantics, and zero heap allocations. Let's break down the objectives and see how this will revolutionize the way we work with VISCA!

Objective: A New Era for VISCA Library

The main objective of this EPIC is to create a VISCA library that is not only efficient but also a joy to use. We're targeting a library with:

  • Consistent mental model: The library should behave predictably whether you're using blocking or async operations.
  • Runtime-agnostic semantics: No more hidden fallbacks or unexpected behavior based on feature flags. What you see is what you get.
  • Zero or minimal heap allocations: Hot paths should be as lean as possible, avoiding unnecessary memory allocations.
  • Compile-time safety: Catch errors early with compile-time checks for commands and profiles.
  • First-class DX (Developer Experience): A discoverable API with comprehensive documentation and examples.

This EPIC is all about consolidating our learnings and locking in the highest-quality approaches upfront. This way, implementers won't have to make tough choices at PR time. We're setting the stage for a VISCA library that's both powerful and easy to work with.

Context & Current Pain Points

Okay, so why are we doing this? Let’s look at some of the current pain points that this EPIC aims to address. We need to understand the current challenges to fully appreciate the solutions we're implementing.

1. Inconsistent Timeouts & Runtime Semantics

The current implementation has some inconsistencies that can lead to confusion and unexpected behavior. Let's break it down:

  • TransportExt::recv_with_timeout takes an optional runtime "sleep" handle. If it's None, it silently falls back to Tokio when the tokio feature is on; when it's off, it just logs and proceeds with no timeout at all. This is a big no-no! We want strict runtime agnosticism, and this breaks that promise: behavior shouldn't depend on feature flags or ambient context.
  • The blocking timeout uses a custom executor::timeout that busy-waits, polling a Ready future in a thread::sleep(1ms) spin loop. This is both surprising and wasteful for I/O that the OS can already bound with its own timeouts. Imagine burning CPU in a loop when the kernel could simply wake you up!
  • TCP blocking recv relies solely on the OS read_timeout, but errors like TimedOut aren't normalized to Error::Timeout the way UDP normalizes WouldBlock to Timeout. The result is different error semantics per transport, which is not ideal.

These inconsistencies make it harder to reason about the code and can lead to bugs. We need a unified approach to timeouts and error handling.
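To make the per-transport difference concrete, here's a minimal sketch of the single normalization point a unified approach implies. The `Error` enum and `normalize_recv_error` function are hypothetical names for illustration, not the library's actual API:

```rust
use std::io;

// Hypothetical unified error type; `Error::Timeout` stands in for the
// variant the UDP path already maps WouldBlock into.
#[derive(Debug, PartialEq)]
enum Error {
    Timeout,
    Io(io::ErrorKind),
}

// One normalization point for every blocking transport: the `WouldBlock`
// that UDP sockets report and the `TimedOut` that TCP `read_timeout`
// reports both collapse into the same `Error::Timeout`.
fn normalize_recv_error(e: io::Error) -> Error {
    match e.kind() {
        io::ErrorKind::WouldBlock | io::ErrorKind::TimedOut => Error::Timeout,
        kind => Error::Io(kind),
    }
}

fn main() {
    let tcp_err = io::Error::new(io::ErrorKind::TimedOut, "read timed out");
    let udp_err = io::Error::new(io::ErrorKind::WouldBlock, "would block");
    assert_eq!(normalize_recv_error(tcp_err), Error::Timeout);
    assert_eq!(normalize_recv_error(udp_err), Error::Timeout);
    println!("both transports normalize to Error::Timeout");
}
```

With every transport routed through one function like this, callers match on a single `Timeout` variant instead of memorizing per-transport quirks.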

2. Heap-Allocating and Type-Erased Async Transport Futures

Async operations are crucial for performance, but the current implementation has a drawback: heap allocations. The Tokio TCP transport returns Pin<Box<dyn Future<...>>> via wrapper fut types, causing allocation and dynamic dispatch in hot paths. This can slow things down, especially when dealing with high-frequency operations.

The good news is that with impl Trait now stable in trait return position (RPITIT, since Rust 1.75), each impl can return a concrete, allocation-free future type. This means we can get rid of the boxing and dynamic dispatch, leading to significant performance improvements.
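As a rough illustration of that point (all names here are hypothetical, not the library's real traits), a trait method can return `impl Future`, so each impl hands back a concrete anonymous future with no `Box` and no vtable:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical transport trait: returning `impl Future` instead of
// `Pin<Box<dyn Future>>` means no allocation and no dynamic dispatch.
trait AsyncTransport {
    fn send(&mut self, payload: &[u8]) -> impl Future<Output = usize>;
}

struct LoopbackTransport {
    sent: usize,
}

impl AsyncTransport for LoopbackTransport {
    // The returned future is a concrete anonymous type capturing one usize.
    fn send(&mut self, payload: &[u8]) -> impl Future<Output = usize> {
        self.sent += payload.len();
        let total = self.sent;
        async move { total }
    }
}

// Minimal no-op waker so we can poll without pulling in a runtime.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    unsafe { Waker::from_raw(RAW) }
}

fn main() {
    let mut t = LoopbackTransport { sent: 0 };
    // A real VISCA frame (CAM_Power On for camera 1), 6 bytes.
    let fut = t.send(b"\x81\x01\x04\x00\x02\xff");
    let mut fut = pin!(fut); // stack-pinned: still no heap allocation
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(n) => assert_eq!(n, 6),
        Poll::Pending => unreachable!("this future is immediately ready"),
    }
    println!("polled a concrete, unboxed future to completion");
}
```

The caller still just `.await`s the result; only the allocation and vtable indirection disappear.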

3. Macros Sometimes Allocate and Double-Handle Bytes

Macros are powerful tools, but they need to be used carefully. Currently, visca_command! often builds a Vec<u8> and then re-feeds it into a CommandBuilder, leading to temporary heap allocation and extra copies. This is inefficient and unnecessary.

We already have zero-copy builder variants (visca_builder!), so we should be using them everywhere. The constant and parameterized macro variants are already zero-copy; we should make the “enum with variants” path equally efficient. Zero-copy is the way to go!
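Here's a sketch of the zero-copy idea, assuming a fixed-size stack buffer (the `FrameBuf` type is illustrative; `visca_builder!` is the library's real mechanism and its internals aren't shown here). VISCA frames are small and bounded, so there's no reason to touch the heap:

```rust
// VISCA frames are at most 16 bytes, so a stack array is always enough.
const MAX_FRAME: usize = 16;

// Hypothetical zero-copy frame buffer: bytes are written in place,
// with no intermediate Vec<u8> and no copies between builders.
struct FrameBuf {
    buf: [u8; MAX_FRAME],
    len: usize,
}

impl FrameBuf {
    fn new() -> Self {
        Self { buf: [0; MAX_FRAME], len: 0 }
    }

    // Append a byte in place; returns false instead of allocating on overflow.
    fn push(&mut self, b: u8) -> bool {
        if self.len == MAX_FRAME {
            return false;
        }
        self.buf[self.len] = b;
        self.len += 1;
        true
    }

    fn as_bytes(&self) -> &[u8] {
        &self.buf[..self.len]
    }
}

fn main() {
    // CAM_Power On for camera 1: header, command bytes, terminator.
    let mut frame = FrameBuf::new();
    for &b in &[0x81, 0x01, 0x04, 0x00, 0x02, 0xFF] {
        assert!(frame.push(b));
    }
    assert_eq!(frame.as_bytes(), &[0x81, 0x01, 0x04, 0x00, 0x02, 0xFF]);
    println!("built a {}-byte frame entirely on the stack", frame.as_bytes().len());
}
```

A macro that expands to writes like these, instead of collecting into a `Vec<u8>` first, is what makes the "enum with variants" path as cheap as the constant variants.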

4. Feature Flags and Behavior Coupling

Feature flags are great for conditional compilation, but they can also lead to complexity. The async feature currently pulls in async-trait and pin-project, yet the code paths never actually use async-trait, so it can be dropped for a lighter dependency surface. Cargo features declare async = ["dep:async-trait", ...] while the code uses GATs and manual futures. That's a dependency we're paying for without using.

We need to rationalize our features and make sure we're only including the dependencies we actually need. This will make the library lighter and easier to maintain.
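As a hypothetical picture of what a rationalized feature table could look like (feature names, version numbers, and tokio feature lists here are illustrative, not the crate's actual manifest):

```toml
# Sketch: async costs nothing extra because the code uses GATs and
# manual futures; the runtime integration is an explicit, separate opt-in.
[features]
default = []
async = []                      # async API itself pulls in no dependencies
tokio = ["dep:tokio", "async"]  # explicit runtime opt-in, nothing ambient

[dependencies]
tokio = { version = "1", optional = true, features = ["net", "time"] }
```

The key property is that enabling `async` alone adds zero dependencies, and nothing silently changes behavior when `tokio` happens to be present.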

5. Optional Actor/Spawner Behavior in Async

The current async implementation has different behaviors depending on whether a socket manager actor/spawner is available. When it is, we get concurrent request/response multiplexing. Without it, logic falls back to “direct send,” changing behavior under the same public API. This inconsistency can be confusing and hard to debug.

We should standardize the async path on the actor, with an explicit builder knob only if truly necessary. This will provide a consistent and predictable experience for async users. Think of it as having a single, reliable way to handle async operations.
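To show the shape of the actor idea, here's a minimal sketch using std threads and channels (`Request` and `spawn_actor` are made-up names; the real library would use async tasks and oneshot replies, but the structure is the same):

```rust
use std::sync::mpsc;
use std::thread;

// Every caller sends its frame plus a reply channel; the actor is the
// only code that touches the transport, so the public API behaves the
// same whether one or many requests are in flight.
struct Request {
    frame: Vec<u8>,
    reply: mpsc::Sender<Vec<u8>>,
}

fn spawn_actor() -> mpsc::Sender<Request> {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        for req in rx {
            // Stand-in for send + recv on the real transport; a real
            // actor would write req.frame to the socket and match the
            // camera's response back to this request.
            let _frame_len = req.frame.len();
            let response = vec![0x90, 0x41, 0xFF]; // VISCA ACK, socket 1
            let _ = req.reply.send(response);
        }
    });
    tx
}

fn main() {
    let actor = spawn_actor();
    let (reply_tx, reply_rx) = mpsc::channel();
    let power_on = vec![0x81, 0x01, 0x04, 0x00, 0x02, 0xFF];
    actor.send(Request { frame: power_on, reply: reply_tx }).unwrap();
    let response = reply_rx.recv().unwrap();
    assert_eq!(response, vec![0x90, 0x41, 0xFF]);
    println!("actor replied with ACK: {:02X?}", response);
}
```

Because every request goes through the same channel, there is no "direct send" fallback and no behavior change under the same public API.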

6. Leaky Timeout Enforcement

We have CommandCategory and TimeoutConfig, which are great concepts. However, end-to-end enforcement depends on ambient runtime, OS socket timeouts, or busy-poll loops. This is leaky and can lead to inconsistent timeout behavior.

We need to unify the notion of deadline across blocking and async and route everything through one path. This will ensure that timeouts are enforced consistently and reliably.
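One way to picture a unified deadline (a hypothetical `Deadline` type, sketched here for illustration, not the existing `TimeoutConfig` API): compute the absolute deadline once, then let every wait point, blocking or async, ask for the remaining budget:

```rust
use std::time::{Duration, Instant};

// Hypothetical: one absolute deadline governs the whole request, instead
// of a mix of OS socket timeouts, ambient runtimes, and busy-poll loops.
#[derive(Clone, Copy)]
struct Deadline {
    at: Instant,
}

impl Deadline {
    fn after(timeout: Duration) -> Self {
        Self { at: Instant::now() + timeout }
    }

    // Time left until the deadline, or `None` once it has passed.
    // Blocking code would feed this into `set_read_timeout`; async code
    // would feed it into a runtime timer. Either way, same semantics.
    fn remaining(&self) -> Option<Duration> {
        self.at.checked_duration_since(Instant::now())
    }
}

fn main() {
    let deadline = Deadline::after(Duration::from_millis(50));
    assert!(deadline.remaining().is_some());
    std::thread::sleep(Duration::from_millis(60));
    assert!(deadline.remaining().is_none(), "deadline should have expired");
    println!("deadline expired as expected");
}
```

Since the budget shrinks across retries and partial reads, a multi-step exchange can't quietly exceed the timeout the caller asked for.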

7. Leveraging Type-State Builder and Compile-Time Safety

The type-state builder and compile-time safety are a major strength of the current library. Termination is enforced, which is fantastic, and examples document a migration path. We should finalize that path, remove legacy branches, and add a brief lint/static-assertion layer. Let’s go all-in on compile-time safety!
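For readers new to the pattern, here's a tiny illustration of how type-state termination enforcement works (a simplified stand-in, not the library's actual builder; it uses `Vec` for brevity even though the real builder is zero-copy):

```rust
use std::marker::PhantomData;

// Zero-sized marker types: the builder's type parameter records whether
// the terminator byte has been written yet.
struct Unterminated;
struct Terminated;

struct FrameBuilder<State> {
    bytes: Vec<u8>,
    _state: PhantomData<State>,
}

impl FrameBuilder<Unterminated> {
    fn new(address: u8) -> Self {
        Self { bytes: vec![0x80 | address], _state: PhantomData }
    }

    fn byte(mut self, b: u8) -> Self {
        self.bytes.push(b);
        self
    }

    // Consuming `self` and changing the type parameter is what moves the
    // builder into the terminated state.
    fn terminate(mut self) -> FrameBuilder<Terminated> {
        self.bytes.push(0xFF);
        FrameBuilder { bytes: self.bytes, _state: PhantomData }
    }
}

impl FrameBuilder<Terminated> {
    // Only reachable after `terminate()`: calling `finish()` on an
    // unterminated builder simply does not type-check.
    fn finish(self) -> Vec<u8> {
        self.bytes
    }
}

fn main() {
    let frame = FrameBuilder::new(1)
        .byte(0x01).byte(0x04).byte(0x00).byte(0x02)
        .terminate()
        .finish();
    assert_eq!(frame, vec![0x81, 0x01, 0x04, 0x00, 0x02, 0xFF]);
    println!("terminated frame: {:02X?}", frame);
}
```

An unterminated frame becomes a compile error rather than a malformed packet on the wire, which is exactly the kind of guarantee worth going all-in on.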

North-Star Decisions (Pre-Approved)

Alright, guys, let's talk about the decisions that are already set in stone. These are the