I disagree with the original assertion that people so readily lie in surveys. Sure, it happens, but I don’t think it’s as rampant as suggested. Regardless, I definitely support the notion that good surveys are difficult to construct. There is an art to it, and there are methods that can be employed to weed out rampant “gaming” - perhaps not completely, but no survey is perfect (hence the need to clearly understand how much confidence one can place in the results).
I would also point out that there is no comparison between the type of election polling we’re used to seeing these days and a well-written decision-making survey. Polls tend to ask relatively few questions, with very few qualifiers that help frame and contextualize the answers (other than demographics, which serve a different purpose). To be sure, the success of a well-constructed survey isn’t predicated merely on the number of questions (too many and people get bored and stop giving thoughtful answers), but on having just enough questions of the right type to allow correlative analysis. Results are rarely gleaned from the answers to a single question (e.g., “please rank the relative importance of the following five features”), but from the way multiple questions take a respondent down a path where a high-confidence conclusion can be drawn.
Along the way, answers that seem non-genuine or - more commonly - don’t “hang together” because they aren’t well thought out or are contradictory can be filtered out. This often means many survey responses are discarded wholesale, which is not a bad thing unless the survey is so poorly written that few responses are left at the end (meaning there isn’t a statistically meaningful sample size).
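To make that concrete, here’s a minimal sketch (in Python, using pandas) of the kind of consistency filtering and sample-size check described above. The column names and the contradiction rule are purely illustrative assumptions - any real survey would define its own cross-question checks.

```python
# Hypothetical sketch of consistency filtering for survey responses.
# Column names ("feature_importance", "would_pay_for_feature") and the
# contradiction rule are illustrative assumptions, not from any real survey.
import pandas as pd

def filter_inconsistent(responses: pd.DataFrame, min_sample: int = 100) -> pd.DataFrame:
    """Drop responses whose answers contradict each other, then check
    that enough remain for a statistically meaningful analysis."""
    # Example rule: a respondent who rates a feature as unimportant
    # (<= 2 on a 1-5 scale) yet also claims they would pay extra for it
    # is flagged as contradictory and discarded.
    contradictory = (
        (responses["feature_importance"] <= 2)
        & (responses["would_pay_for_feature"])
    )
    kept = responses[~contradictory]

    if len(kept) < min_sample:
        # If too few consistent responses remain, the survey itself is
        # likely the problem, and the results shouldn't drive decisions.
        raise ValueError(
            f"Only {len(kept)} consistent responses remain "
            f"(need {min_sample}); results would not be meaningful."
        )
    return kept
```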
There are typically two ways poorly designed surveys end up failing: a) they result in too small a sample size (oops!), or b) nobody recognizes how poorly the results serve the need, and poor decisions are made based on them. The latter is worse, and unfortunately much more common.
You’d be right to think that it’s challenging and time-consuming to craft a good survey, and equally difficult to analyze the results carefully. The effort one puts into the process should ideally be proportional to the risk of making a bad decision. Decisions about feature priority in a fairly well-established application like Plex probably don’t require true survey “artists” and the time & expense that go along with them. On the other hand, decisions about strategic direction can benefit greatly from careful attention to how choices are framed and presented, and to who is asked to provide input on those choices.
I don’t know what’s in the Plex survey because I haven’t tried taking it (I suspect I’d get bounced early, like many of you). But I doubt that - regardless of how much effort went into constructing it - any decisions made from it could be as catastrophic as those based on any of the 2020 election polls!