A firsthand experience with A/J testing:
My team was a few months into a new platform launch with a good/better/best packaging and pricing model.
Every 2 weeks, we launched a new A/B test comparing new product features or marketing against the old product experience...
With 3 weeks left in December, we ran into a timing issue where the newest product release wouldn't be ready until after the new year.
Not a big deal: we'd simply have an A/B test that lasted 3 weeks instead of 2. But we wondered if there was an opportunity here...
Leadership had hunches about pricing but never felt like it was the right time to test because it could overshadow product changes.
I proposed running an A/J test in week 3 if the first 2 weeks were flat. We prepped the sales team, since they were essential for the test...
We ran a test increasing the price of the 'best' package by 50%. Ambitious for us. Some expected disaster.
We didn't measure for statistical significance. Instead, we used feedback from the sales team and calculated whether the increased price offset the lower conversion rate...
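That back-of-the-envelope check can be sketched like this (all prices and rates below are hypothetical placeholders, not the actual numbers from the test):

```python
# Does a higher price offset a lower conversion rate?
# All numbers here are hypothetical illustrations.

def revenue_per_lead(price: float, conversion_rate: float) -> float:
    """Expected revenue per sales lead."""
    return price * conversion_rate

def breakeven_conversion(old_price: float, old_rate: float, new_price: float) -> float:
    """Conversion rate at which the new price exactly matches old revenue per lead."""
    return old_price * old_rate / new_price

old_price, old_rate = 100.0, 0.10    # hypothetical baseline: $100 package, 10% conversion
new_price = old_price * 1.5          # the 50% price increase being tested

floor = breakeven_conversion(old_price, old_rate, new_price)
print(f"New conversion rate must stay above {floor:.1%} to break even")
```

With these placeholder numbers, conversion could drop by roughly a third before the price increase stopped paying for itself, which is why the downside felt bounded.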
It worked. We kept the increased price, and checked for statistical significance in additional tests. Results stayed consistent.
Massive win which led to significant revenue gains for the platform going forward. All because we decided to do a crazy A/J test on a whim...
Even if the test had failed, our downside would have been some lost revenue during 1 week of the year. Low risk with a very high reward.
To summarize, if you have a crazy idea you want to test, this week is a great week to do it.
Embrace A/J testing!