Check the following:
Who conducted the survey?
Was it a recognized, independent research company, a member of the relevant professional associations? If not, take the results with a grain of salt. If you are not sure, a useful test is the company's willingness to answer the questions below: recognized companies will provide all the information you need to evaluate the survey.
Who paid for the survey and why was it conducted?
If it was conducted for reputable institutions or for independent researchers, there is a high probability that it was conducted impartially. If it was conducted for a partisan client, such as a company or a political party, it can still be good research, although readers, listeners, and viewers should be told who the client was. The validity of the survey depends on whether the company used scientific methods for sampling and questionnaire design, whether unbiased questions were asked, and whether full information was provided on all the questions asked and on the results. If this information is not available, take the survey with a grain of salt. In any case, beware of loaded questions and selective findings aimed at enhancing the client's image rather than giving the public a full and objective picture.
How many people participated in the survey?
The more people the better, although a scientific survey with a small sample is ALWAYS better than a self-selected survey with a large sample. However, keep in mind that the total sample is not always the relevant number. For example, surveys of voting intentions often exclude “don’t know” respondents who are judged unlikely to vote, as well as those who refuse to reveal their preferences. While excluding these groups allows the survey to show the opinion of the most relevant group, “those most likely to vote”, the reported voting sample may be much smaller than the total sample, and the risk of sampling error is therefore greater. Also be careful when comparing subgroups, e.g. men and women: the sampling error for each subgroup can be significantly higher than for the total sample. If the total sample is 500 and consists of an equal number of men and women, the margin of error for each gender (counting only random error and ignoring any systematic error) is about 6 percent, as the sketch below illustrates.
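As a quick illustration, the standard formula for the 95-percent margin of error of a simple random sample is 1.96 × √(p(1−p)/n). The short Python sketch below is our own, not taken from any polling company, and uses the worst-case assumption p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Uses the worst case p = 0.5 and counts only random sampling
    error, ignoring any systematic error, as the paragraph above does.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"total sample (n=500): ±{margin_of_error(500):.1%}")  # about ±4.4%
print(f"each gender  (n=250): ±{margin_of_error(250):.1%}")  # about ±6.2%
```

Halving the sample does not halve the precision: the error grows with the square root, which is why subgroup figures are so much shakier than the headline number.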
How were the respondents selected?
Is it clear who is included in the sample and who is left out? If the survey claims to represent the general public (or a significant portion of it), did the survey company use one of the sampling methods described in the previous articles? If the sample was self-selected, such as newspaper or magazine readers, or television viewers who write, phone, email, or text in, then it should NEVER be presented as a representative survey.
If the survey was conducted only at specific locations, e.g. in cities but not in rural areas, then this information should be clearly stated in every report.
When was the survey conducted?
Acute events have a significant impact on survey results, so the interpretation of a survey should depend on when it was conducted in relation to the relevant events; even the latest results can be affected by them. The results of a survey a few weeks or months old may be perfectly valid, for example if they relate to underlying cultural attitudes or behavior rather than current events, but the date the survey was conducted (as opposed to when it was published) should always be disclosed. The fieldwork date is especially important for pre-election polls, where voting intentions can change by the time voters cast their ballots.
How were the interviews conducted?
There are four main methods: in person, over the phone, online, or by email. Each method has its pros and cons. Telephone surveys cannot capture those who do not have or do not use a landline. Email surveys can only reach those who have access to the Internet. All methods depend on the availability and voluntary cooperation of respondents, and response rates can vary widely. In any case, reputable companies have developed statistical techniques to address these issues and to convert their raw data into representative results; the most common of these, weighting, is sketched below.
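A minimal sketch of one such technique, post-stratification weighting: each respondent is weighted by the ratio of their group's share in the population to its share in the sample. All the numbers below are purely illustrative, not from any real survey:

```python
# Illustrative shares: young people are under-represented in the sample.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Weight = population share / sample share for each group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical raw answers: share of each age group answering "yes".
yes_rate = {"18-34": 0.70, "35-54": 0.50, "55+": 0.30}

raw      = sum(sample_share[g] * yes_rate[g] for g in yes_rate)
weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)

print(f"raw 'yes' share:      {raw:.1%}")       # 43.0%
print(f"weighted 'yes' share: {weighted:.1%}")  # 49.0%
```

The six-point shift shows why an unweighted raw figure from a skewed sample can mislead, and why reputable companies disclose how their data were weighted.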
What were the questions?
Try to get a copy of the entire questionnaire, not just the published questions. Reputable organizations will publish the questionnaire on their website or provide it upon request. Decide whether the questions are balanced, and be wary of the results if the interview is structured in a way that seems to lead the respondent to a particular conclusion.
Are the results consistent with other surveys?
If possible, check other surveys to see whether the results are similar or very different. Surveys covering the same topic should reach similar conclusions. If the answers are very different, the reasons may become apparent when the questionnaire or the sampling method is examined; one rough numerical check is sketched below.
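One rough way to decide whether two surveys genuinely disagree, rather than merely differing within their margins of error, is to check whether the gap between them exceeds their combined sampling error. A minimal sketch, assuming simple random samples and purely illustrative figures:

```python
import math

def polls_differ(p1, n1, p2, n2, z=1.96):
    """Rough check: does the gap between two poll results exceed
    the combined 95% sampling error? Assumes simple random samples."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) > z * se

# Illustrative: 46% support in a 1000-person poll vs 52% in an 800-person poll.
print(polls_differ(0.46, 1000, 0.52, 800))  # True: worth comparing wording and sampling
```

If the check returns False, the two surveys may simply be sampling noise apart; if True, look for differences in question wording, timing, or sampling method.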
Was it a ” push poll “?
The purpose of push polls is to spread rumors, or even outright lies, about opponents. These efforts are not polls but political manipulation that tries to hide behind the “smoke screen” of a public opinion poll. In a push poll, a large number of respondents are called by phone and asked to participate in a survey. The “questions” are actually thinly veiled accusations against an opponent, or repetitions of gossip about a candidate’s personal or professional conduct. The aim is to ensure that the respondent hears and understands the allegation in the question, rather than to gather the respondent’s opinion. Push polls have nothing to do with genuine public opinion polls, and the best way to defend against them is to apply the checks listed above.
Are ” exit polls ” (surveys conducted immediately after voting) valid?
This question applies only to elections. Exit polls, if conducted correctly, are an excellent source of information about voters in that election: they are the only opportunity to actually survey the voters and only the voters. They are usually conducted immediately after voting and are therefore able (in theory) to reflect actual behavior. Pre-election polls, even those conducted the day before the election, cannot completely avoid the danger that some people will change their minds at the last minute about whether to vote at all and which party or candidate to support. In addition to answering the question “who won”, exit polls provide the information to answer who voted for the winner and why candidate/party (a) or candidate/party (b) won. They are characterized by a complex design and a much larger number of interviews than pre-election surveys, often tens of thousands and, in some countries, hundreds of thousands of people interviewed.