The literature on goal priming has suffered many slings and arrows, some of them self-inflicted. A couple of prominent fraud cases (Stapel, Smeesters) coincided with a number of failures to replicate prominent results, including failures by excellent experimenters (see www.psychfiledrawer.org for some examples).
In many ways, the replicability debate came to a head following a couple of angry blog posts from John Bargh. At the time, I commented that those posts were not the right way to respond to a failure to replicate. According to the article, Bargh apparently now agrees. He has since deleted the posts, but that's not the right approach either -- better to append an explanation of his change of heart to them as a marked edit. Bargh's posts touched off a firestorm, with a flurry of email threads and posts from both proponents and "opponents" in the crisis. Danny Kahneman entered the fray, encouraging priming researchers to replicate each other's work in order to shore up the credibility of a field he has championed in his own writing. They do not appear to have heeded his call, and outsiders to the field continue to attempt direct replications, often finding no effects.
This article by Tom Bartlett at the Chronicle of Higher Education does a nice job of describing the sequence of events leading up to the current state of the field. Read it -- it's an excellent synopsis of a thorny set of issues.