How do you keep replayed tests trustworthy over time as dependencies and schemas evolve? (i.e. without turning into brittle snapshot tests)
Also, how do you normalize non-determinism (like time/IDs etc.), expire/refresh recordings, and classify diffs as "intentional change" vs "regression"?
Good questions. I'll respond one by one:
1. With our Cloud offering, Tusk Drift detects schema changes, then automatically re-records traces from new live traffic to replace the stale traces in the test suite. If using Drift purely locally though, you'd need to manually re-record traces for affected endpoints by hitting them in record mode to capture the updated behavior.
2. Our CLI tool includes built-in dynamic field rules that handle common non-deterministic values in standard UUID, timestamp, and date formats during response comparison (see the sketch after this list for the general idea). You can also configure custom matching rules in your `.tusk/config.yaml` to handle application-specific non-deterministic data.
3. Our classification workflow correlates deviations with your actual code changes in the PR/MR (including context from your PR/MR title and body). Classification is "fine-tuned" over time for each service based on past feedback on test results.
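To make the dynamic-field idea concrete, here's a minimal TypeScript sketch of masking UUID/timestamp/date values before deep-comparing a recorded and a replayed response. This is not Tusk Drift's actual implementation or config schema; the pattern list and helper names (`normalize`, `responsesMatch`) are illustrative assumptions about how this kind of normalization generally works.

```typescript
// Illustrative only: mask non-deterministic fields so recorded vs. replayed
// responses can be compared without spurious diffs.

const DYNAMIC_PATTERNS: { name: string; pattern: RegExp }[] = [
  // RFC 4122 UUIDs
  { name: "uuid", pattern: /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i },
  // ISO 8601 timestamps, e.g. 2024-05-01T12:34:56.789Z
  { name: "timestamp", pattern: /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/ },
  // Plain dates, e.g. 2024-05-01
  { name: "date", pattern: /^\d{4}-\d{2}-\d{2}$/ },
];

// Recursively replace any string matching a dynamic pattern with a stable placeholder.
function normalize(value: unknown): unknown {
  if (typeof value === "string") {
    const rule = DYNAMIC_PATTERNS.find((r) => r.pattern.test(value));
    return rule ? `<${rule.name}>` : value;
  }
  if (Array.isArray(value)) return value.map(normalize);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, normalize(v)])
    );
  }
  return value;
}

// Two responses "match" if they are equal after masking dynamic fields.
function responsesMatch(recorded: unknown, replayed: unknown): boolean {
  return JSON.stringify(normalize(recorded)) === JSON.stringify(normalize(replayed));
}

// Example: differing IDs and timestamps do not cause a spurious diff.
console.log(
  responsesMatch(
    { id: "7f9c24e5-1df1-4e4c-9a6b-0d9f3b2c1a55", createdAt: "2024-05-01T12:00:00Z", total: 42 },
    { id: "0a1b2c3d-4e5f-4a6b-8c7d-9e0f1a2b3c4d", createdAt: "2024-06-02T08:30:00Z", total: 42 }
  ) // true
);
```

Custom rules in `.tusk/config.yaml` extend the same idea to application-specific fields (e.g. your own order numbers or cursors) that the built-in patterns wouldn't catch.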
Cool. Definitely a pain point worth attacking. Bookmarked, plan to explore when time allows.
Sounds good Chris, would love to hear your thoughts once you've played around with it.
What does this do that I can't do with mitmproxy?