
Testkit Mastery, Part 4: Design for Easy Integration into Testing Environments
This article is part of a series on building testkits for complex libraries that behave differently in non-production environments. In our case, we needed a testkit for Vulcan, a global state management library that centralizes data across microfrontends and prevents redundant data requests. Vulcan relies on a specific production setup, which makes it challenging to test in isolated environments like Vitest and Storybook.
This Series on Testkit Mastery:
- Overcoming Testing Challenges for Complex Libraries
- Designing a Developer-Friendly Testkit
- Building the Core Structure of a Flexible and Reliable Testkit
- Design for Easy Integration into Testing Environments
- Key Takeaways and Lessons Learned for Building Better Testkits
In previous parts, we covered the initial challenges and defined the essential components for the testkit. Now, with those components in place, the next step was to integrate the testkit into Vitest and Storybook, making setup as minimal as possible. In this article, we’ll explain what it means to automatically integrate a testkit into these environments and explore practical techniques for achieving a seamless configuration. By the end, you’ll have a clear understanding of how we streamlined testkit setup to ensure a consistent experience across Vitest and Storybook.
Sidenote: Why the Testkit Isn’t Used in Playwright or Cypress
Unlike unit and component tests, Playwright and Cypress tests run in a real environment where Vulcan’s global instance is already available. Since these tests interact with the full application, there’s no need to mock Vulcan; its real behavior is already in place.
E2E tests are primarily used to validate high-level user flows rather than individual components, making the testkit unnecessary in this context. While the testkit plays a crucial role in unit and component tests, its value in E2E testing remains limited—for now.
Integrating with Vitest
To ensure that Vulcan worked in isolated environments, we created a function called setupVulcanMock. This function generates a fully mocked instance of Vulcan by calling generateMockedVulcan and makes it available in the testing environment just as it would be in production. After calling setupVulcanMock, any code that relies on Vulcan can function in a similar way to production, even in isolation.
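The exact implementation isn’t the focus here, but a minimal sketch conveys the idea. The globalThis assignment and the generate-mocked-vulcan import path below are our assumptions for illustration, not the testkit’s actual internals:
// vulcan-mock.ts (sketch)
import { generateMockedVulcan } from './generate-mocked-vulcan';

export function setupVulcanMock(): void {
  // In production, Vulcan exposes a single global instance; we assume it
  // lives on globalThis so both browser-based (Storybook) and Node-based
  // (Vitest) environments can reach it the same way.
  (globalThis as any).vulcan = generateMockedVulcan();
}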
The next step was to make sure setupVulcanMock was called in the right place across all isolated environments. To streamline the setup in Vitest, we wrote a global setup file inside the testkit itself called test-setup.ts, which initializes a mocked version of Vulcan before each test:
import { setupVulcanMock } from "./vulcan-mock";
setupVulcanMock(); // Initialize Vulcan mock for all tests
beforeEach(() => {
setupVulcanMock(); // Reset the mock before each test
});
By calling setupVulcanMock before each test, we ensured that any code accessing Vulcan would work as expected and that no state would leak between tests.
Next, we configured this file as the global setup file in our shared Vitest configuration, so it would run in all test environments across our microfrontends:
const vitestConfig = {
  setupFiles: ['@mondaydotcomorg/vulcan-testkit/test-setup.js'],
};
This automated approach allowed the testkit to be available across microfrontends without developers needing to add any extra configuration, making the testing process more efficient.
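As an illustration of what this buys a consuming team, here is roughly what a single microfrontend’s Vitest config could look like. The shared package name and the mergeConfig wiring below are assumptions for the sketch, not our actual setup:
// vitest.config.ts in one microfrontend (sketch)
import { defineConfig, mergeConfig } from 'vitest/config';
import { vitestConfig } from '@mondaydotcomorg/shared-vitest-config'; // hypothetical package

export default mergeConfig(
  // The shared config already lists the testkit's test-setup.js in its
  // setupFiles, so the Vulcan mock is wired up with no extra work here.
  defineConfig({ test: vitestConfig }),
  defineConfig({
    test: {
      environment: 'jsdom', // only project-specific overrides live here
    },
  })
);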
Why Reset the Mock Before Each Test?
To ensure test isolation, we reset the mock before each test. This prevents state from leaking between tests, ensuring consistency and avoiding unpredictable behaviors.
In our case, since we’re working with a global state management testkit, one reason for resetting is handling mutations—state changes that persist across queries. However, since this article focuses on testkit design as a whole, we don’t dive into state mutations, as they’re specific to our use case. Regardless of the library type, enforcing a fresh state between tests is a best practice that helps maintain reliable and independent test runs.
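As a concrete illustration of that isolation, the following sketch assumes the mock is exposed on globalThis.vulcan (as in the earlier sketch) and that each setupVulcanMock call produces a fresh instance; both are assumptions rather than the testkit’s documented behavior:
// isolation.test.ts (sketch)
import { expect, it } from 'vitest';

let instanceSeenFirst: unknown;

it('captures the mocked instance in one test', () => {
  instanceSeenFirst = (globalThis as any).vulcan;
  expect(instanceSeenFirst).toBeDefined();
});

it('receives a fresh instance in the next test', () => {
  // The beforeEach hook in test-setup.ts re-ran setupVulcanMock, so no
  // state from the previous test can leak into this one.
  expect((globalThis as any).vulcan).not.toBe(instanceSeenFirst);
});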
Integrating with Storybook
For Storybook, the integration process was similar. We wanted Vulcan to be initialized automatically whenever a story was loaded, so developers could focus on writing stories without needing additional setup. To achieve this, we used a global preview file in Storybook that initializes the Vulcan mock for all stories. Inside the testkit, we created a file called preview-vulcan.ts, which calls setupVulcanMock to set up Vulcan before each story:
import { setupVulcanMock } from './vulcan-mock';
setupVulcanMock(); // Initialize Vulcan mock for all stories
Then, in our shared Storybook configuration, we registered preview-vulcan.ts as a preview annotation so that Vulcan would be automatically available in all Storybook environments:
const storybookConfig = {
  previewAnnotations: [require.resolve('@mondaydotcomorg/vulcan-testkit/preview-vulcan.js')],
};
By including this preview file globally, we ensured that developers could render components in Storybook with Vulcan already mocked, eliminating the need for manual initialization. This setup made working with Vulcan in Storybook as seamless as possible, providing a consistent testing experience across both Storybook and Vitest.
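For example, a story for a component that reads from Vulcan needs no setup of its own; the component and file names below are hypothetical:
// UserBadge.stories.tsx (sketch)
import type { Meta, StoryObj } from '@storybook/react';
import { UserBadge } from './UserBadge';

const meta: Meta<typeof UserBadge> = {
  component: UserBadge,
};
export default meta;

// No Vulcan-specific decorator or setup here: the shared preview
// annotation already ran setupVulcanMock before this story rendered.
export const Default: StoryObj<typeof UserBadge> = {};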
Summary
For library maintainers, integrating a testkit into environments like Vitest and Storybook can simplify testing for libraries that rely on a global setup or have complex testing requirements. By using shared configurations and automated setup, you can provide a seamless experience that allows developers to work with the library in isolated environments, such as component tests and Storybook stories, with minimal configuration.
In the final part of this series, we’ll dive into the broader takeaways and lessons learned, offering insights and strategies that can help library maintainers create effective, developer-friendly testkits for complex systems.