I was reading this question and failed to fully understand its introductory part.
The OP (@Artem Kaznatcheev) says:
Most analytic models like to assume weak selection because it allows the authors to Taylor expand the selection function and linearize it by dropping terms that are higher order in the strength of selection.
I don't fully understand this. Can you help me make sense of why assuming weak selection allows one to Taylor expand the selection function? I am hoping someone will answer by presenting a mathematical model that at first does not assume weak selection, and then show how assuming weak selection permits a Taylor-series linearization of the function. I'd like to understand which terms drop out and which terms remain under this assumption.
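
To make my question concrete, here is a minimal sketch of the kind of expansion I imagine is meant; the haploid one-locus model, the fitnesses $w_A = 1 + s\,\pi_A$ and $w_B = 1 + s\,\pi_B$, and the notation are my own assumptions, not taken from the linked question. One generation of selection on the frequency $p$ of allele $A$ gives

$$p' = \frac{p\,w_A}{p\,w_A + (1-p)\,w_B} = \frac{p\,(1 + s\,\pi_A)}{1 + s\,\bar{\pi}}, \qquad \bar{\pi} = p\,\pi_A + (1-p)\,\pi_B.$$

Taylor expanding $\Delta p = p' - p$ around $s = 0$ then yields

$$\Delta p = s\,p(1-p)(\pi_A - \pi_B) + O(s^2),$$

so, if I understand correctly, "weak selection" ($s \ll 1$) is what justifies dropping the $O(s^2)$ terms and keeping only the part that is linear in $s$. Is this the kind of linearization the quoted passage refers to?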