A common error: Comparing the effects of two variables or treatments by comparing p-values

This is something I've seen a few times in papers recently, and I find it kind of striking, since it's not hard to spot as incorrect.

Let's imagine I've got a left-right neuron pair, neurons A and B, that I think might drive some particular behavior in C. elegans, say, head motion. I find that if I ablate neuron A, head motion gets slower (specifically, head velocity decreases, with a p-value < 0.05), and if I ablate neuron B, head motion doesn't get slower (specifically, head velocity does not significantly decrease, with a p-value > 0.05). Can I conclude that neuron A affects head motion differently than neuron B?

The answer should be pretty obviously no! I need to do a different statistical test to see if A has a different effect than B. Simply knowing the p-values of the two tests is wholly insufficient to tell whether their effects differ. Imagine one neuron gave p = 0.049 and the other p = 0.051. Obviously you wouldn't want to conclude they had different effects!
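To make this concrete, here's a minimal simulation sketch in Python. All of the data, group sizes, and effect sizes below are hypothetical; depending on the random draw, the two versus-control tests can straddle p = 0.05 even while the direct A-versus-B comparison is nowhere near significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical head-velocity data (arbitrary units), 20 worms per group.
control  = rng.normal(loc=1.00, scale=0.25, size=20)
ablate_a = rng.normal(loc=0.80, scale=0.25, size=20)  # true effect: -0.20
ablate_b = rng.normal(loc=0.85, scale=0.25, size=20)  # true effect: -0.15

# Each ablation compared against control: one p-value may land under
# 0.05 and the other over, even though the true effects are similar.
print("A vs control:", stats.ttest_ind(ablate_a, control).pvalue)
print("B vs control:", stats.ttest_ind(ablate_b, control).pvalue)

# The question actually being asked: do A and B differ from EACH OTHER?
print("A vs B:      ", stats.ttest_ind(ablate_a, ablate_b).pvalue)
```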

What statistical test is appropriate? I'm not 100% sure. My default approach would be to fit a simple regression model, Y = B_0 + B_1(A + B) + B_2(A - B), where A and B are 0/1 indicators for whether each neuron was ablated, and test whether B_2 is nonzero. However, this approach might not be optimal. Any suggestions?
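Here's a rough sketch of that regression in Python with statsmodels; again, the data and variable names are hypothetical. Under this sum/difference coding, B_2 is half the A-minus-B difference in effects, so testing B_2 = 0 tests whether the two ablations differ. Equivalently, one could fit velocity ~ A + B and run a Wald test of the constraint A = B, shown at the end.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical per-worm data: A and B are 0/1 ablation indicators,
# three groups of 20 worms (control, A-ablated, B-ablated).
n = 20
df = pd.DataFrame({
    "A": [0] * n + [1] * n + [0] * n,
    "B": [0] * n + [0] * n + [1] * n,
})
df["velocity"] = (1.0 - 0.20 * df["A"] - 0.15 * df["B"]
                  + rng.normal(scale=0.25, size=len(df)))

# Sum/difference coding: "shared" carries the common ablation effect (B_1),
# "diff" carries half the A-minus-B difference in effects (B_2).
df["shared"] = df["A"] + df["B"]
df["diff"] = df["A"] - df["B"]

fit = smf.ols("velocity ~ shared + diff", data=df).fit()
print("p-value for B_2 (diff):", fit.pvalues["diff"])

# Equivalent approach: fit each effect separately, then test A = B directly.
alt = smf.ols("velocity ~ A + B", data=df).fit()
print(alt.t_test("A = B"))
```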