One key difference between a study and a news story or editorial is that a study's methodology must be evaluated in terms of what the study aims to do. The OP-er did not consider this when making his/her remarks about the study's merits and demerits. I discuss this later in the post.
Another key consideration is whether, in drawing inferences from a study's observed results, the researchers accounted for proportional discrepancies between the sample subjects and the population at large from which they came. The OP-er, in criticizing the survey, does not even mention such a thing, yet the survey's report addresses it directly:
Data were collected in February and March 2017 using an online survey made available to users (N = 8,728) of the digital media platforms of twenty-eight different newsrooms across the United States. Newsrooms included were Annenberg Media, Ball State Daily News, Casper Star-Tribune, Cincinnati Enquirer, Coloradoan, Columbia Missourian, Dallas Morning News, Denver Post, Evergrey, Fort Worth Star-Telegram, Fresno Bee, Jacksboro Herald-Gazette, Kansas City Star, KUT, Lima News, Minneapolis Star Tribune, NBC, Ogden Standard-Examiner, Rains County Leader, San Angelo Standard-Times, Skagit Publishing, Springfield News-Leader, St. Louis Magazine, St. Louis Public Radio, Steamboat Pilot & Today, USA TODAY, WCPO, and WDET.
Due to unbalanced participation rates across newsrooms, it is possible a single newsroom with a high response rate could systematically bias statistical analyses. To address this concern, several steps were taken. First, in addition to having the names of the newsrooms associated with each observation, zip codes were reported by nearly all respondents (99.6%) in the sample.... Second, weights were calculated assuming it would be more desirable to have an equal number of responses from each newsroom. Group-level means were examined for a number of different cross-sections and no discernible pattern distinguishing the weighted and unweighted samples emerged. Finally, in addition to the linear regression models reported in the following section, multilevel models were also performed to directly model variability explained by differences between newsrooms rather than differences between individuals. As was the case with the survey weights, the results appeared consistent across all statistical solutions.
Careful readers of the study's report will observe that weighting/adjusting the results, to account for whatever differences may be attributable to newsroom readers coming from different parts of the country, produced no statistically meaningful change in the survey's analytical and predictive value.
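For readers unfamiliar with how such equal-newsroom weights work, here is a minimal sketch of the general idea in Python. The data and column names are hypothetical illustrations, not the researchers' actual variables or code:

```python
import pandas as pd

# Hypothetical respondent-level data; "newsroom" and "ideology" are
# assumed column names used only for illustration.
df = pd.DataFrame({
    "newsroom": ["USA TODAY"] * 5 + ["Denver Post"] * 2 + ["KUT"] * 3,
    "ideology": [3, 4, 3, 2, 4, 5, 3, 2, 3, 4],
})

# Weight each respondent so that every newsroom contributes equally:
# a respondent's weight is (target responses per newsroom) divided by
# (the newsroom's actual response count).
n_total = len(df)
n_rooms = df["newsroom"].nunique()
target = n_total / n_rooms                              # desired count per newsroom
counts = df["newsroom"].map(df["newsroom"].value_counts())
df["weight"] = target / counts

# Compare unweighted vs. weighted means, as the report describes doing.
unweighted = df["ideology"].mean()
weighted = (df["ideology"] * df["weight"]).sum() / df["weight"].sum()
print(f"unweighted mean: {unweighted:.2f}, weighted mean: {weighted:.2f}")
```

The report's multilevel models address the same concern from a different angle: instead of reweighting individual responses, they explicitly model newsroom-level variance, which is why consistency across both approaches is reassuring.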
The point of the above is that while it is absolutely appropriate to criticize a study/survey on the basis of its methodology, the criticism one levies must be valid. To be valid, methodological criticism must be mindful of the entire methodology the researchers applied to the data, not merely one trait of the data the study obtained; yet that last is precisely the nature of the criticism the OP-er presents. Two material dimensions of the OP-er's criticism are simply not valid, most especially the criticism regarding the ideological makeup and distribution of the survey participants.
So the President Trump scores low on University of Missouri journalism institute’s trustworthiness survey
Trump didn't score highly with liberals or conservatives. One will note that even among self-identified conservatives, Trump scored notably lower than Fox News did, and, on average, the most conservative (self-identified) respondents didn't actually cite Trump as a trusted source.
the authors congratulate themselves on supposedly so scientific so methodical research...
Actually, they don't congratulate themselves at all. They merely published what they did, how they did it, and what the results were.
Furthermore, as stated in the summary of the report,
the goal of the project wasn't to identify which sources of information are most trusted; identifying that was but an ancillary result of the project:
"The goal of the Trusting News project is to better understand elements of trust and distrust in the relationship between journalists and nonjournalists."
Overall, the sample leaned slightly liberal (M = 3.41, SD = 1.03), which could be a reflection of the specific newsrooms participating in the current investigation, a tendency among conservatives to avoid surveys conducted by “the media,”...
What is there to say about that? If one (or many) won't "stand up and be counted," then one's thoughts won't be counted. That one's thoughts aren't counted, and thus not reflected, is attributable to no one but oneself.
The researchers who conducted the noted survey haven't tried to hide any of the details about the nature of the people who responded to the survey, what respondents said, how the responses were used, etc. The researchers haven't tried to present their findings as something they're not or as being representative of people who did not participate in the survey.
Good Lord! To make their report's work accessible to readers who may not understand statistical sampling, the study's authors go above and beyond what is typical of empirical research studies and explain what the chart above indicates, thereby giving every reader the ability to interpret it.
The plot below [the plot shown above in this post] depicts the “trusted” media sources with the highest and lowest mean estimates of political ideology—meaning, on average users who mentioned Rachel Maddow as a trusted source were an average of roughly 1.35 points more liberal than the scale’s midpoint, while users who mentioned Limbaugh were over 1.00 point more conservative than the scale’s midpoint.
Talk about "spoon feeding" one's reader! Simple replacement allows any reader who finished the eighth grade to know what is represented by any row/line on the chart.