telling lies with the truth

Lindzen's position on smoking simply demonstrates his willingness to use logic to rationalize a defense of the indefensible.
 
Sod this. This is the new reality:

JANUARY’S brutal heatwave may have killed 100 Melburnians - and more than 200 people across south-eastern Australia - an ‘‘invisible tragedy’’ now the subject of investigations by the Department of Human Services and the Coroner’s Office.

A Monash University analysis of the event in late January - when temperatures rose above 43 degrees for three consecutive days - indicates the heatwave claimed hundreds of lives across Victoria, South Australia and northern Tasmania.

The majority of victims were heat-stricken elderly and chronically sick people who died prematurely, often alone in their homes or suddenly of heart failure.

The Department of Human Services and the State Coroner’s Office have launched separate investigations into how to reduce heat-related deaths, after the three-day heatwave overloaded the health system and filled to capacity the mortuary that only a week later would be full again with victims of the Black Saturday bushfire disaster of February 7, in which at least 210 people died.

Monash University’s Professor Neville Nicholls analysed funeral notices to estimate a sudden 45 per cent jump in deaths in late January - 100 at least in Melbourne and more than 200 across south-eastern Australia. The three most deadly days were from Wednesday, January 28, to Friday, January 30.

Heatwave left hundreds dead | theage.com.au
 
Have any of Lindzen’s claims regarding the consensus been published in a peer review journal?


No.



Does Lindzen have a standing paper in any peer review journal that can provide a mechanism as to why current global warming isn't something to worry about?


No.

I'm not certain what you mean by "regarding the consensus" and "standing paper." But just yesterday I posted a link to a Lindzen paper that was published in a peer-reviewed journal in 2007 and that states his view that "...serious and persistent doubts remain concerning the danger of anthropogenic global warming despite the frequent claims that 'the science is settled.'" In the paper, he includes discussion of climate "mechanisms" in his argument in support of that view. Here is a link to the paper again:

http://eaps.mit.edu/faculty/lindzen/230_TakingGr.pdf

And, yes, Energy and Environment is a peer-reviewed journal. Of course, there have been numerous ad hominem attacks against it; I think because it will actually publish papers in opposition to the "consensus" point of view. In fact, I think that's one of the pillars of the "global warmists'" tactics. Any time any scientist expresses a contrary view, the ad hominem attacks ensue. It's "You can't believe him (or her) because (insert ad hominem attack)."

So you can go to Wikipedia (which has had its own credibility attacked because it can be edited by anybody and thereby serve as an editorial platform) and see an attack upon Energy & Environment. Nevertheless, the existence of the journal, and its willingness to give those outside of the "consensus" an opportunity to publish, renders objectively false both the statement about Lindzen above and Naomi Oreskes's contention that views challenging the consensus are absent from the published peer-reviewed literature.
 
http://eaps.mit.edu/faculty/lindzen/230_TakingGr.pdf


The failures cited do not per se deal with the matter of climate sensitivity. However,
there is ample evidence that current models are indeed exaggerating climate
sensitivity. The fact that so little of recent observed warming can be attributed to
greenhouse warming may be a sign of this. Moreover, specific mechanisms have been
identified such as the iris effect (Lindzen et al., 2000, Spencer et al., 2007) which is
based on observations that current models fail to replicate. This effect should, if
correct, provide a powerful negative feedback. As mentioned earlier, ocean delay is
itself proportional to climate sensitivity, and the work of Lindzen and Giannitsis
(1998) and Douglass et al., (2006) strongly suggested that the observed delay time is
too short to allow large sensitivities.
On the other hand, it has been argued by Hansen (2005) that observed changes in
ocean temperature (Levitus, 2005) implied model sensitivity was correct. While there
are significant difficulties with Hansen’s analysis – most notably that it assumes that
the ocean is slave to the atmosphere on the time scales examined – as well as with
Hansen’s interpretation (Lindzen, 2002), it remains of interest that more recent data
suggests no statistically significant ocean warming (Gouretski and Koltermann, 2007);
not surprisingly, this too has been contested, albeit somewhat ambiguously (AchutaRao
et al., 2007).
.......................
Lindzen keeps stating that the modelers overstate the climate sensitivity. Yet the melting of the Arctic ice cap and the amount of melt in Greenland and Antarctica have taken everybody by surprise. The rapidity of the thaw in the North American and Siberian permafrost, the amount of CO2 and CH4 outgassing from the permafrost, and the CH4 outgassing from the clathrates of the Arctic Ocean started many decades sooner than expected. That shows far more sensitivity, not less.

Once again, Lindzen refers to an almost mythical "iris effect". There is very little evidence that it actually occurs, and even less that, if it does occur, it has any real effect.

Lindzen states that there has been no significant ocean warming observed, yet virtually everything I have read states otherwise. The reports from the fishermen all state that they are seeing species of southern fish that they never used to see off both coasts of North America. And those who study the ocean completely disagree with Lindzen:
NOAA 200th Top Tens: Breakthroughs: Warming of the World Ocean
 
Energy and Environment describes itself as "an interdisciplinary journal aimed at natural scientists, technologists and the international social science and policy communities covering the direct and indirect environmental impacts of energy acquisition, transport, production and use." The journal's publisher is Multi-Science and its editor since 1996 is Sonja Boehmer-Christiansen, who is a former Reader in Geography at the University of Hull in England and writer on the political and policy aspects of climate change. The journal's editorial advisory board has 20 members, including 11 professors and 5 other PhDs in 2008.

Energy and Environment ("E&E") has been published since 1996. People who have published in this journal include Sallie Baliunas, Ian Castles, Bjorn Lomborg, Patrick Michaels, Ross McKitrick, Stephen McIntyre, Roger Pielke Jr., Willie Soon, Richard Tol, and Gary Yohe. The journal is not listed in the ISI's Journal Citation Reports indexing service for academic journals.[1]. According to the WorldCat.org database, the journal can be found at 26 libraries worldwide, mostly at universities.[2]

The journal's peer-review process has at times been criticised for publishing substandard papers [1][3]. Roger A. Pielke (Jr), who published a paper on hurricane mitigation in the journal, said in a post answering a question on Nature's blog in May about peer-reviewed references and why he published in E&E: "...had we known then how that outlet would evolve beyond 1999 we certainly wouldn't have published there. The journal is not carried in the ISI and thus its papers rarely cited. (Then we thought it soon would be.) We were invited to submit a piece in 1997 or 1998 and we had this in prep and sent it in."[4]
Numerous people considered climate skeptics or contrarians have published in the journal. Skeptics on the journal's editorial staff include Boehmer-Christiansen herself and anthropologist Benny Peiser. Some of the journal's articles opposing the scientific consensus on climate change have been quoted by policy makers known to be skeptical of the subject, such as U.S. Senator James Inhofe and U.S. Congressman Joe Barton.[1] When asked about the publication of these papers Boehmer-Christiansen replied, "I'm following my political agenda -- a bit, anyway. But isn't that the right of the editor?"[5]

The publication's ISSN is 0958-305X and its OCLC number is 21187549.
http://en.wikipedia.org/wiki/Energy_and_Environment

..............................................
 
This is a good spot to mention that I truly lament the faith the public invests in the concept of journal "peer review." I guess people think it's a really careful process where bias is absent and the pristine ideal principles of objective scientific inquiry apply. Before I write what I'm going to write, I'll say up front that I am not going to give certain specifics. Some of the papers I'm going to describe were authored by people I know personally and whom I consider friends. In this world of Google, I do not want to take the chance, however small it may be, of having someone I know recognize me using their paper as an example of problems with the published literature.

The "overrated" nature of "peer review journal" first came to my attention in the 1980s when I was charged to assess a new fishery. One of the first things I did was conduct a literature search for surveys to describe size distributions and sex ratios. I was horrified to find that not a single survey I found in the published peer reviewed literature was without serious flaws in methodology. I conducted my own survey while carefully trying to stick as close as I could to survey theory and, of course, found that there was a substantial difference in the estimates I obtained and what one would garner by reading the sacred "peer reviewed" papers. I ended up putting my experience into a presentation and won a "best presentation" award at a fisheries conference for it.

Another experience I had in the 1980s was working with a guy who did surveys of clam populations. I assisted him in the field work, so I knew what was going on. He published numerous papers on clam densities. The only problem is that he described his sampling approach as random when it wasn't. As a result, his published papers substantially overestimated clam densities. People actually moved from other states thinking they'd make their fortunes by obtaining clam leases. They lost their shirts.
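To make the bias concrete, here is a quick sketch with entirely made-up numbers (not the actual clam data) of what happens when a survey crew preferentially samples the ground that looks productive instead of choosing patches at random:

```python
import random

random.seed(42)

# Hypothetical clam bed: density (clams per unit area) varies across 1,000
# patches, with a few rich patches and many sparse ones (a clustered population).
patches = [random.paretovariate(1.5) for _ in range(1000)]
true_mean = sum(patches) / len(patches)

# A truly random survey: 50 patches chosen uniformly at random.
random_sample = random.sample(patches, 50)
random_est = sum(random_sample) / 50

# A "convenience" survey: the crew works the beds that look productive,
# i.e. it preferentially lands on the densest patches.
convenience_sample = sorted(patches, reverse=True)[:50]
convenience_est = sum(convenience_sample) / 50

print(f"true mean density:      {true_mean:.2f}")
print(f"random-sample estimate: {random_est:.2f}")
print(f"convenience estimate:   {convenience_est:.2f}")
```

With a clustered population like this, only the randomly chosen patches give an estimate centered on the true mean; working the productive-looking ground guarantees an overestimate, and descriptive statistics computed from it describe the wrong population.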

During that time I found the paper you can see at http://www.sciencenet.cn/upload/blog/file/2008/10/20081024213243103154.pdf . It's old, but I keep it to this day as a reminder of how porous the peer review process is. I really don't think things have changed much. Basically, what the author contended is that 48% of the published peer-reviewed studies he looked at that included the use of inferential statistics contained a fundamental error that invalidated the conclusions.

Most recently, within the past year, I had occasion to try to use a paper on the inactivation of pollution indicators in the estuarine environment to determine whether or not a certain area could be expected to be impacted by a certain event. I plugged numbers into the model presented in the paper, and the output made no sense. It turned out that the model was presented as estimating percent reduction when in reality it was estimating percent remaining. For various reasons it still wasn't useful, so I tried using some figures describing the decay of pollution indicators. When I looked at the two figures of interest, I noticed that the captions were reversed. In other words, it was clear that the caption for the figure at the top of the page was supposed to go under the figure at the bottom of the page and vice versa. In the end, I obtained the raw data from the study from the authors and did my own modeling in order to assess the situation.
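The remaining-vs-reduction mix-up is easy to sketch. The following assumes a simple first-order decay model with a hypothetical decay constant; it is not the actual model from the paper in question:

```python
import math

def remaining_fraction(t_days, k=0.5):
    """First-order decay: fraction of the pollution indicator still
    present after t_days (k is a hypothetical decay constant, per day)."""
    return math.exp(-k * t_days)

t = 3.0
remaining = remaining_fraction(t)   # fraction still present
reduction = 1.0 - remaining         # fraction removed

# Reading "remaining" as "reduction" inverts the conclusion: you would
# report ~22% removed when in fact ~78% was removed.
print(f"percent remaining: {remaining * 100:.1f}%")
print(f"percent reduction: {reduction * 100:.1f}%")
```

The two quantities always sum to one, so labeling the model's output as the wrong one flips the conclusion completely, which is why the output "made no sense" when plugged into a real question.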

The point is that any real effort to review that paper would have caught some serious errors. If, for instance, someone had read the text in which the figures were referenced and checked to see whether the figures really supported what was asserted in that text, they would have immediately noticed that the captions were reversed.

Furthermore, though I can't prove it, I think that people who believe papers aren't subjected to different levels of scrutiny depending on how "popular" they are, are naive. I don't think there's any way, in the current environment, that a paper challenging the climate change consensus is going to get the same treatment as one that does not. First of all, validity is not the only criterion journals use. They can and do, for example, reject papers because they do not find them to be "of interest." I also think that if a paper on climate change expressing a contrarian view makes it past the point of simply not being liked by the journal to which it's submitted, it's going to be subjected to fine-toothed-comb scrutiny by reviewers whose aim is to find a reason to reject it. I do not think the same thing happens to papers that express the consensus view.

I think it's a very bad situation right now in that the major journals are controlled by people who favor the consensus view. I think that, to a large extent, the people on the "other side" are locked out. The gatekeepers are in opposition to them.
 
The journal's peer-review process has at times been criticised for publishing substandard papers [1][3].
..............................................

Yes, as I said, Wikipedia has a nice, ad hominem piece on Energy & Environment. And Wikipedia has had its own credibility questioned because it can serve as an editorial platform.

Regardless of whether or not global warmists have been successful in demonizing it because it allows the "other side" a forum, Energy & Environment is a peer-reviewed publication.

I find the thing about "substandard papers" laughable. If people are going to worry about publishing "substandard" papers, there are an awful lot of journals they can look at. Kind of funny: I just did a Google search for the journal in which one of the papers described in my previous post was published. I'm talking about the one where the model didn't estimate what it was said to estimate and the captions on two figures were reversed, so that there were obvious, glaring errors that would have been caught by any reasonably serious effort at review. Funny thing: there's nothing I could see in the Google results questioning its peer-review process. I wonder if it could be because it's not publishing papers challenging a cherished point of view associated with a hot political issue.
 
John, given your views on evolution, on peer reviewed journals, and on science in general, I don't think that you are capable of having a credible thought on any scientific subject.

Your debating abilities are dismal. You simply do not make points by stating that you are not going to present your evidence but that it trumps anything your opponents say. Present specifics: who said it, and when and where.
 
John, given your views on evolution, on peer reviewed journals, and on science in general, I don't think that you are capable of having a credible thought on any scientific subject.

Your debating abilities are dismal. You simply do not make points by stating that you are not going to present your evidence but that it trumps anything your opponents say. Present specifics: who said it, and when and where.

I have presented specifics in just about all of my posts, but I am not going to give you the names of the papers on the clams or the one with the graphs reversed, because I know those people personally. As far as the fisheries surveys I mentioned, it's been a long time and I don't remember the names and/or authors of the papers. I can tell you, though, that the problem was that they described what they did as random sampling when it wasn't, and then used descriptive statistical techniques that are only valid if the data were obtained through random sampling. If you don't want to take my word for it, that's the way it goes. And I did give you a link to a paper asserting that 48% of the peer-reviewed papers using inferential statistics the author looked at were characterized by the error of pseudoreplication. That means the inferences weren't shown to be valid.
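For anyone wondering why pseudoreplication invalidates an inference, here is a minimal sketch with invented numbers: a hypothetical experiment with four tanks and 25 fish measured per tank. Fish in the same tank share a tank effect, so they are pseudoreplicates, not independent replicates, and pooling them overstates the precision of the estimate:

```python
import math

# Four tanks with different underlying means, 25 fish measured in each.
tank_means = [8.0, 9.0, 11.0, 12.0]
jitter = [-0.2, -0.1, 0.0, 0.1, 0.2] * 5          # small within-tank spread
tanks = [[m + j for j in jitter] for m in tank_means]

# Wrong: pool all 100 fish as if they were independent observations.
pooled = [x for tank in tanks for x in tank]
n = len(pooled)
mean_p = sum(pooled) / n
se_wrong = math.sqrt(sum((x - mean_p) ** 2 for x in pooled) / (n - 1) / n)

# Right: one value per tank, so only 4 independent observations.
k = len(tank_means)
mean_t = sum(tank_means) / k
se_right = math.sqrt(sum((m - mean_t) ** 2 for m in tank_means) / (k - 1) / k)

print(f"SE claiming n=100 (pseudoreplicated): {se_wrong:.3f}")
print(f"SE claiming n=4 (true replicates):    {se_right:.3f}")
```

Both analyses give the same mean, but the pseudoreplicated one reports a standard error several times smaller than the data actually support, so any significance test built on it is invalid.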
 
Here's one of my favorite anecdotal pieces on the problem of bias in peer-reviewed journals. It deals with social science, but I see no reason to assume the problem wouldn't surface in other areas:

A Physicist Experiments With Cultural Studies

A quote:

"So, to test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: Would a leading North American journal of cultural studies -- whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross -- publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions?

The answer, unfortunately, is yes."


What happened is that a physicist intentionally submitted a paper filled with nonsense, but one that he thought would flatter the philosophical bent of the editors, to see if they'd publish it. And they did.
 
Here's one of my favorite anecdotal pieces on the problem of bias in peer-reviewed journals. It deals with social science, but I see no reason to assume the problem wouldn't surface in other areas:

A Physicist Experiments With Cultural Studies

A quote:

"So, to test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: Would a leading North American journal of cultural studies -- whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross -- publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions?

The answer, unfortunately, is yes."


What happened is that a physicist intentionally submitted a paper filled with nonsense, but one that he thought would flatter the philosophical bent of the editors, to see if they'd publish it. And they did.

And if you think the social sciences are weak with facts, try education 'studies.' Cripes almighty! They just make stuff up because it's 'logical.' :cuckoo:
 
I'm going to post an e-mail I received at my work address just this past Thursday. I forwarded it to my home computer anticipating that a discussion of the peer review process might come up. I think what it shows is that, right or wrong, I am by no means alone in my perception of it as being way overrated, and that it's a huge mistake to assume that appearing in a peer-review journal is equivalent to "handed down by God" while not being in a peer-review journal automatically means "unreliable." Before I post the e-mail, I'll post a couple of things referenced in it.

Here is the full text of the piece in which Horrobin writes that peer review "is a non-validated charade whose processes generate results little better than does chance":

Horrobin on Peer Review

The quote in some context:

"These appalling figures will not be surprising to critics of peer review, but they give solid substance to what these critics have been saying. The core system by which the scientific community allots prestige (in terms of oral presentations at major meetings and publication in major journals)and funding is a non-validated charade whose processes generate results little better than does chance. Given the fact that most reviewers are likely to be mainstream and broadly supportive of the existing organization of the scientific enterprise, it would not be surprising if the likelihood of support for truly innovative research was considerably less than that provided by chance."

Here is the statistical study Horrobin referenced:

Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone? -- Rothwell and Martyn 123 (9): 1964 -- Brain
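The question Rothwell and Martyn asked (whether reviewer agreement beats chance) is usually quantified with a chance-corrected agreement statistic such as Cohen's kappa, where 0 means agreement no better than chance. A minimal sketch with made-up accept/reject decisions, not their data:

```python
from collections import Counter

# Hypothetical decisions by two reviewers on the same 10 manuscripts.
reviewer_a = ["accept", "accept", "reject", "accept", "reject",
              "reject", "accept", "reject", "accept", "reject"]
reviewer_b = ["accept", "reject", "reject", "accept", "accept",
              "reject", "reject", "reject", "accept", "accept"]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Chance agreement: probability both say "accept" plus both say "reject",
# computed from each reviewer's marginal rates.
pa, pb = Counter(reviewer_a), Counter(reviewer_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in ("accept", "reject"))

kappa = (observed - expected) / (1 - expected)
print(f"observed: {observed:.2f}, chance: {expected:.2f}, kappa: {kappa:.2f}")
```

Here the reviewers agree 60% of the time, but chance alone predicts 50%, so kappa is only 0.2. Raw agreement can look respectable while the chance-corrected figure is close to zero, which is the sense in which reviewer agreement can be "little better than chance."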

Here's the e-mail I got:

From: ISPR/KGCM 2009 [mailto:[email protected]]
Sent: Thursday, February 19, 2009 9:49 PM
To: Veazey, John E
Subject: Invitation to a Symposium on Peer Reviewing

Only 8% members of the Scientific Research Society agreed that "peer
review works well as it is". (Chubin and Hackett, 1990; p.192).

"A recent U.S. Supreme Court decision and an analysis of the peer review
system substantiate complaints about this fundamental aspect of
scientific research." (Horrobin, 2001)

Horrobin concludes that peer review "is a non-validated charade whose
processes generate results little better than does chance." (Horrobin,
2001). This has been statistically proven and reported by an increasing
number of journal editors.

But, "Peer Review is one of the sacred pillars of the scientific
edifice" (Goodstein, 2000), it is a necessary condition in quality
assurance for Scientific/Engineering publications, and "Peer Review is
central to the organization of modern science...why not apply scientific
[and engineering] methods to the peer review process" (Horrobin, 2001).

This is the purpose of the International Symposium on Peer Reviewing:
ISPR (ISPR 2009) being organized in the context of
The 3rd International Conference on Knowledge Generation, Communication
and Management: KGCM 2009 (KGCM 2009), which will be
held on July 10-13, 2009, in Orlando, Florida, USA.

-------------------------------------------------------
Deadlines for ISPR 2009 and KGCM 2009

March 18th, 2009, for papers/abstracts submissions and Invited Sessions
Proposals
April 13th, 2009: Authors Notification
May 27th, 2009: Camera ready, final version
-------------------------------------------------------

Pre-Conference and Post-conference VIRTUAL sessions (via electronic
forums) will be held for each session included in the conference
program, so sessions' papers can be read before the conference, and
authors presenting at the same session can interact during one week
before the conferences, and after it is over. Authors can also
participate in peer-to-peer reviewing in virtual sessions.

All Submitted papers will be reviewed using a double-blind (at least
three reviewers), non-blind, and participative peer review. Accepted
papers of registered authors will be published in the printed copy and
the CD versions of the Proceedings. The proceedings will also be
published via web for Full Open Access.

All attendees of the last KGCM 2008 conference, and its collocated ones,
were asked to be surveyed online. Those who filled the survey form (602
attendees) rated the conferences with an average of 8.42 on a scale of
10. More specifically, 58.7% rated them in the range of 8-10, while just
7 attendees (1.16%) rated them below 5. More details regarding this and
other questions can be found at:
2008 Conference Statistics and Opinions

Authors of accepted papers who registered in the conference will have
access to the reviews made to their submission so they can accordingly
improve the final version of their papers. Non-registered authors will
not have access to the reviews of their respective submissions.

All accepted papers where at least one author is registered in the
conference will be included in the hard copy and the CD versions of the
conference proceedings.

For Invited Sessions Proposals, please go to the conference web site.
The best paper of each invited session will also be published in JSCI at
no additional cost for the author Invited session organizers will be
co-editors of the proceedings volume where their session's papers are to
be included, and they will be guest editors of the Journal issue where
the best paper presented at their invited session has been included.

Awards will be granted to the best paper of those presented at each
session. The best 10%-20% of the papers presented at the conference will
be selected from these session's best papers, and will also be published
in Volume 7 of JSCI Journal (Table of Contents - Current Issue - Journal of Systemics, Cybernetics and Informatics), with no
additional cost for their authors. Libraries of journal author's
organizations will receive complimentary subscriptions to at least one
volume (6 issues).

Best regards,

ISPR/KGCM 2009 Organizing committee

Chubin, D. R. and Hackett E. J., 1990, Peerless Science, Peer Review and
U.S. Science Policy; New York, State University of New York Press.

Horrobin, D., 2001, "Something Rotten at the Core of Science?" Trends in
Pharmacological Sciences, Vol. 22, No. 2, February 2001. Also at
Something Rotten at the Core of Science? by David F. Horrobin and
Horrobin on Peer Review (both pages were accessed
on February 1, 2009)

Goodstein, D., 2000, "How Science Works", U.S. Federal Judiciary
Reference Manual on Evidence, pp. 66-72 (referenced in Horrobin, 2000)

 
Here's one of my favorite anecdotal pieces on the problem of bias in peer-reviewed journals. It deals with social science, but I see no reason to assume the problem wouldn't surface in other areas:

A Physicist Experiments With Cultural Studies

A quote:

"So, to test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: Would a leading North American journal of cultural studies -- whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross -- publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions?

The answer, unfortunately, is yes."


What happened is that a physicist intentionally submitted a paper filled with nonsense, but one that he thought would flatter the philosophical bent of the editors, to see if they'd publish it. And they did.

And that has to do with peer reviewed scientific journals in what manner? That is a journal dealing in humanities, not science.
 
Once again, John, if all of these scientists are so damned incompetent, how is it that we are exchanging messages on this medium? Not only that, where are the articles disproving AGW? And what are their predictions? And what is their record on making predictions? As dismal as that of Lindzen?
 
Once again, John, if all of these scientists are so damned incompetent, how is it that we are exchanging messages on this medium? Not only that, where are the articles disproving AGW? And what are their predictions? And what is their record on making predictions? As dismal as that of Lindzen?

I didn't say "all these scientists" are "incompetent." I said that I think those who believe publication in peer-review journals is the be-all and end-all, and that arguments made outside of that context can't have any credibility, have a misplaced faith in the journal peer review process. The approach of dismissing arguments not made in "peer review" journals as invalid is an ad hominem approach.

As far as anthropogenic global warming goes, it cannot be disproven. It is not possible to prove that the planet is not warmer than it would be if humans were not here. It would not be possible even if everybody agreed that there was a downward trend in temperatures. So I would not expect any articles "disproving" AGW. What I would expect is articles contending that humankind as the cause of a particular level of global warming has not been proven. And I've seen that. I've even seen the IPCC concede it.
 
And that has to do with peer reviewed scientific journals in what manner? That is a journal dealing in humanities, not science.

Dude, the social sciences are considered science.

John, it's typical of the environuts to ignore all other sciences, thinking that only that which supports their outrageous claims exists. Hell, climatologists really have no business interpreting computer models either, but only their interpretations are of value to the environuts. It's the way of many types: "ignore everything that doesn't support your idea, even when what does support what you think is real is only a small portion of reality." It's the only way they can make it look so bad; otherwise they look like the fools that they are.
 
