TestMu AI Launches Test.md, an Agent-Native Test Framework for Kane CLI
New markdown-based framework enables replayable, AI-readable testing workflows for modern software development environments
TestMu AI, formerly known as LambdaTest, has announced the launch of Test.md, a new agent-native testing framework integrated into Kane CLI, aimed at transforming how engineering teams create, manage, and scale software testing in AI-driven development environments.
The newly introduced framework enables developers and AI agents to define, store, and replay tests using a lightweight markdown-based structure. Designed to bridge the growing gap between AI-driven software development and traditional testing methodologies, Test.md converts exploratory testing sessions into persistent, replayable, and verifiable test coverage.
As software engineering increasingly embraces AI-assisted coding and autonomous workflows, conventional testing frameworks struggle with scalability, maintainability, and complexity. Test.md addresses these limitations by removing the dependency on complex scripting, selectors, and framework-specific configurations.
At the core of the framework is a simple markdown-native structure where tests are written as step-based objectives in natural language. This allows test files to function both as executable test cases and human-readable documentation, making them accessible to developers, QA engineers, and AI agents alike.
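The company has not published the exact file format in this announcement, but a test written as step-based objectives in natural language might look something like the following sketch (file name, headings, and step wording are illustrative assumptions, not TestMu AI's documented syntax):

```markdown
<!-- hypothetical file: login.test.md -->
# Login smoke test

## Objective
Verify that a registered user can sign in and reach the dashboard.

## Steps
1. Open the application's login page
2. Enter valid credentials for a test account
3. Submit the login form
4. Confirm the dashboard greets the user by name
```

Because the file is plain markdown, the same document doubles as human-readable documentation and as an executable objective list for an AI agent.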
“AI has fundamentally changed how software is built, but testing workflows have not evolved at the same pace,” said Asad Khan, CEO and Co-Founder of TestMu AI. “Test.md is our approach to closing that gap. It creates a shared, durable test format that both humans and AI agents can read, write, and execute, without the overhead of traditional frameworks.”
Unlike traditional automated testing approaches that rely heavily on static scripts, Test.md introduces a replay-first execution model. Tests can be authored once and replayed deterministically, while Kane CLI intelligently determines when to reuse existing recorded flows and when to regenerate them based on changes in application behavior.
The framework also includes modular test composition through @import functionality, enabling teams to reuse common workflows such as authentication, setup, and teardown across multiple test cases. Configuration settings, including environment variables, runtime limits, and execution parameters, are embedded directly within test files using front matter.
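Combining the two features described above, a test file might embed its configuration as front matter and pull in a shared workflow via @import. The keys, values, and import path below are illustrative guesses at what such a file could contain, not the product's actual schema:

```markdown
---
# Hypothetical front matter: key names are assumptions, not documented syntax
env: staging
timeout: 120s
retries: 1
---

@import ./shared/auth-setup.md

# Checkout flow

## Steps
1. Add an item to the cart
2. Proceed to checkout
3. Confirm the order summary shows the correct total
```

In this sketch, the imported auth-setup.md would hold a reusable login workflow shared across test cases, while the front matter carries environment and runtime settings alongside the test itself.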
Test.md is built to support enterprise-scale engineering workflows, including native compatibility with CI/CD pipelines, headless execution, parallel testing, and agent-driven automation. Each test execution generates structured artifacts with step-level results, execution traces, and shareable evidence to improve reproducibility and traceability.
According to the company, one of the platform’s key innovations is its ability to support AI-generated code validation through persistent, version-controlled testing artifacts that can be interpreted and reused by autonomous AI agents.
By combining exploratory testing, automation, and documentation into a unified workflow, TestMu AI aims to simplify quality engineering and reduce the divide between manual and automated testing practices.
Test.md is now available as part of Kane CLI and supports deployment across local, cloud, and CI/CD environments.
TestMu AI, formerly LambdaTest, positions itself as the world’s first Agentic AI-powered Quality Engineering platform, focused on helping organizations automate and scale software testing with AI-driven capabilities integrated into modern development pipelines.