It is important not just to look at the raw numbers but to consider them in their broader context: what polls measure, the limitations of polls, and the problems one encounters when trying to predict voting.
First, the poll has a 5% margin of error, a statistical measure of how far the numbers could be off, which is related to the size of the sample. (The more people polled, the lower the margin of error. A normal margin of error for these candidate polls is about 4%.) In other words, Coats could be at 31% and Hostettler at 29%...even within this particular poll. So Coats' lead is just outside the margin of error.
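The relationship between sample size and margin of error can be sketched with the standard formula for a poll proportion. This is a general illustration, not the pollster's actual methodology; the sample sizes shown are hypothetical, since the poll's sample size isn't reported here.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a poll of n respondents.

    Uses the textbook formula z * sqrt(p * (1 - p) / n); p = 0.5 is the
    conservative (worst-case) assumption for a proportion.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 385 respondents yields about a 5% margin of error;
# roughly 600 respondents yields about 4%.
print(round(margin_of_error(385) * 100, 1))  # ~5.0
print(round(margin_of_error(600) * 100, 1))  # ~4.0
```

Note that the margin of error shrinks with the square root of the sample size, which is why cutting it from 5% to 4% requires polling hundreds of additional people.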
Second, occasionally you get problems with the wording of the poll. How a question is worded is the No. 1 way of inserting bias into a poll to get a particular result. The people who wrote the Wishard and Guion Creek referenda knew this when they inserted "hot button" language in the questions to try to skew voters to vote for the referenda. This particular bias generally isn't a problem in an election poll. Sometimes, though, it is used to create certain results when candidates commission the poll themselves: the candidates want a particular result they can show supporters and contributors.
Third, with regard to the Senate poll, you have to ask what "population" the "sample" was taken from. If the population was ALL Republicans who are registered to vote, several people will be polled who may not show up on Election Day. The more expensive (and better) polls use a "likely voter" screen to get a clearer snapshot of who will actually vote on Election Day.
Fourth, getting a good "sample" has grown increasingly difficult. When conducting a poll, you can only talk to the person who has been selected to participate. If someone else in the household is home, you can't just ask that person the questions instead. Getting the precise sample needed has become very difficult in this era of cell phones, caller ID, and voicemail. One has to wonder whether the people who do pick up their phones to respond to a poll are truly representative of the ones who do not. Right now pollsters struggle to adjust polls so as not to undercount young people who have only cell phones, whose numbers do not show up on polling lists, which are generally landlines.
Fifth, and this is the big kahuna when it comes to accurately polling election results: polls cannot measure intensity. Intensity is what motivates voters to go to the polls. A classic example is 1980. Right before the presidential election of that year, polls showed Reagan with only a slight lead over Carter, within the margin of error. On Election Day, Reagan had a blowout win. Why? Intensity. Reagan's voters were enthused about him. Carter's voters were lukewarm about the sitting President. Reagan's voters showed up to vote while many Carter voters stayed home. As a result, not only did Reagan win a big victory, but Republicans across the country who were trailing in the polls ended up winning, including Dan Quayle, who defeated Sen. Birch Bayh even though Quayle was double digits behind in pre-election polls.
What is to be made of these Senate poll results? First, I would strongly suspect that Dan Coats' supporters are not that intense in their support of the former Indiana Senator, while I would bet that Hostettler's and Stutzman's supporters feel much more passionately about their candidate. That would mean you should expect some of the people who poll for Coats not to show up on Election Day, which closes the gap between Coats and Hostettler/Stutzman. I think it will be tough, though, for Hostettler and Stutzman to prevail when they, and the other candidates, are splitting up the anti-Coats vote, which is undoubtedly a significant percentage of the Republican primary vote. I still lean toward thinking Coats will win the primary (and lose the general election), but it should be very close.