
Add language neutral protocol conformance fixtures for concore core behavior #440

@Titas-Ghosh

Description


Hi @pradeeban,
I’ve been looking through the recent CI and testing improvements (like #196, #193, and #210), and the test setup is looking really solid. It got me thinking about a follow-up improvement that could make maintaining the project much easier in the long run.

As the project evolves, there's always a risk that protocol behavior might drift slightly across different implementations. Right now, we don’t really have a single, central "source of truth" that defines exactly how the protocol must behave, regardless of the language an implementation is written in.

What I’m proposing is a language-neutral conformance suite: a fixture-based conformance layer. Essentially, we would create a set of standardized test cases (fixtures) that make the expected behavior explicit and easy to validate across the board.

What this would look like:

  • Protocol Fixtures: A collection of JSON input/expected output pairs covering core behaviors (e.g., initval handling, read/write framing semantics, param parsing edge cases, and fallback behaviors for malformed inputs). See the example fixture right after this list.
  • A pytest Conformance Runner: A tool to validate our current Python behavior against these new fixtures (a minimal runner sketch is included further down).
  • Implementation Agnostic Format: Keeping the test vectors completely language neutral so that other runtimes can reuse the exact same files later to prove compatibility.
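
To make this concrete, here is a rough sketch of what a single fixture file could look like. The field names (`name`, `input`, `expected`, etc.) are illustrative placeholders, not a final schema, and the scenario shown (a read falling back to the declared initval) is just one example of the kind of case we would cover:

```json
{
  "name": "read-before-write-returns-initval",
  "description": "Reading a port that has never been written should fall back to its declared initval.",
  "input": {
    "operation": "read",
    "port": "sensor_in",
    "initval": 0
  },
  "expected": {
    "value": 0,
    "status": "ok"
  }
}
```

Because the file is plain JSON, any future runtime could load the same vectors and assert the same expectations without translating them.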

Why I think this would be a beneficial addition:

  • Zero Ambiguity: It creates a crystal-clear baseline for how the protocol should behave.
  • Future-Proofing: Gives us a strict compatibility standard for any new runtime implementations we build down the line.
  • Safety: Catches weird edge-case regressions early.

If this sounds like a good direction to you, I’d be happy to open a focused PR to get it started.

For the first iteration, I would just introduce the fixture schema, add an initial set of tests, and wire them into our existing pytest runs. There would be no runtime behavior changes in this first step; I would just be laying the groundwork.
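
As a rough illustration of the wiring, here is a minimal pytest runner sketch. It assumes fixtures live under tests/conformance/fixtures/ and calls a placeholder apply_protocol() function; the real version would call into the actual concore API instead.

```python
# Minimal sketch of the conformance runner -- the fixture directory and the
# apply_protocol() entry point are placeholders, not the real concore API.
import json
import pathlib

import pytest

FIXTURE_DIR = pathlib.Path(__file__).parent / "fixtures"


def apply_protocol(payload):
    """Placeholder for the concore behavior under test; replace with real calls."""
    raise NotImplementedError


def load_fixtures():
    """Yield one pytest param per JSON fixture so each fixture is reported as its own test."""
    for path in sorted(FIXTURE_DIR.glob("*.json")):
        with path.open() as f:
            case = json.load(f)
        yield pytest.param(case, id=case.get("name", path.stem))


@pytest.mark.parametrize("case", list(load_fixtures()))
def test_conformance(case):
    result = apply_protocol(case["input"])
    assert result == case["expected"]
```

Since each fixture becomes its own test ID, a failing CI run would point directly at the specific behavior that drifted.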

Let me know what you think! I'm happy to pivot if you have a different vision for the testing architecture.
