Survey respondents self-select along lines of motivation or interest. That is to say, unless there’s a mechanism to oblige participation, or to incentivize participation independent of the topic (e.g. being paid), the most likely respondents are not representative. Many of them come from a subset who are motivated enough by the subject to take the time to reply.
I am used to surveys where the implication is that to make your voice heard, you need to fill out the survey. This could be a PTA or school survey or a neighborhood development survey. In all of these, the dynamic is “Respond if you want your voice heard. If you don’t, tough for you.” This is different from a scientific survey, where you want to hear from a representative sample of your target population. I see now that this is MUCH harder to achieve.
As an example, my survey of ‘movement in the classroom’ was sent to a subset of the population: my friends on Facebook. They are far from randomly chosen. They have been bombarded with my posts about education, and most of them move in circles similar to mine. Then there’s the self-selection among the ~10% who chose to respond. While the cumulative response “feels” reasonable to me, in truth I have no science to support my conclusion. I also have no idea whether these responses are generalizable, nor do they even sharpen my intuition in this regard.
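To make the self-selection problem concrete, here is a small simulation (my own sketch, not part of the original survey): a hypothetical population evenly split on a question, where supporters are assumed to be nine times more likely to respond than everyone else. The response probabilities are invented for illustration, tuned so that roughly 10% of the population replies.

```python
import random

random.seed(42)

# Hypothetical population: half support the idea (1), half do not (0).
POP_SIZE = 10_000
population = [1] * (POP_SIZE // 2) + [0] * (POP_SIZE // 2)

def responds(opinion):
    # Assumed response rates: motivated supporters reply at 18%,
    # everyone else at 2% -- about a 10% overall response rate.
    p = 0.18 if opinion == 1 else 0.02
    return random.random() < p

respondents = [x for x in population if responds(x)]

true_rate = sum(population) / len(population)
survey_rate = sum(respondents) / len(respondents)

print(f"True support:   {true_rate:.0%}")
print(f"Survey support: {survey_rate:.0%}")
```

Even though true support is exactly 50%, the survey comes back overwhelmingly positive, simply because the people who cared enough to answer were disproportionately supporters. No amount of staring at the responses reveals this: the bias is in who replied, not in what they said.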