Thank you for your concern. The web page you referred us to is one of several web pages trying to discredit our conference. Other groups have been trying to discredit other well-known conferences as well, as is the case, for example, with IEEE conferences (see http://www.scamieee.netfirms.com/ for an example).

But those who actually attend our conferences already know the level of quality we have been achieving from the beginning. In past conferences, about 50%-60% of the papers presented were identified and chosen by invited session organizers. Each organizer is autonomous regarding the paper acceptance policy of his or her invited session. So, the quality level of past conferences depended mostly on the autonomous invited session organizers, who came from about 80 countries. This decentralization of conference organization proved to be a generator of good papers. Otherwise, how can we explain the way our conferences have grown: from 40 papers in Baden-Baden (Germany) in 1995 to about 1500-2500 papers presented at our conferences in recent years (including collocated conferences and presentations at invited sessions)? We have published about 100 volumes of hard copy proceedings, most of which were included in several products of ISI-SCI (the Institute for Scientific Information); in fact, all of them have been included in the last five years. Since we started organizing IIIS conferences (12 years ago), we have had about 8,000 papers, 500 invited session organizers, and 1,000 session chairs and co-chairs; 500 papers were chosen from among the best 10%-20% of the papers presented at the conferences of the last 4 years (and, consequently, were published in 24 issues of 4 volumes of the JSCI journal); and about 25,000 reviews have been done by about 3,000 reviewers. You can get a glimpse of the invited session organizers, best papers, regular session chairs, etc., at the IIIS web page: www.iiis.org/iiis. After looking at this page, you can form an opinion as to whether it is possible for so many respected scholars and professionals, from so many countries, to be in collusion to organize invited sessions, chair regular sessions, and select the best papers to be published in the journal for a low-quality conference.

With regard to the two papers generated by three MIT students, which were sent to us for the last conference, I would like to be a little more thorough, because we rejected one of them (our reviewers detected it as a bogus paper, or a "bad joke," as one of its reviewers put it in an e-mail sent directly to me). The second paper was accepted as a NON-REVIEWED one. The computer program for "Random Paper Generation" that they provided on their web page was certainly used, and we detected the bogus papers sent to us. But in the case of the bogus paper accepted as a non-reviewed one, we did not have, by the acceptance deadline, any feedback from its reviewers, so we thought it was not fair to refuse it without at least one refusal recommendation from the reviewers. We reasoned that accepting a paper as a non-reviewed one makes its author(s) completely responsible for its content, as is the case for accepted presentations at conferences where no review at all is done.

In our opinion, and in our experience, the acceptance of a small percentage of non-reviewed papers does not significantly decrease the quality level of a conference; in fact, it may well increase the probability of not refusing a good paper whose content differs from established paradigms. Different kinds of reasoning can be found in the specialized literature on the subject explaining why non-reviewed papers might be, and even should be, accepted. Robin and Burke (1987, Peer review in medical journals, 91(2), 252-255), for example, affirm with regard to journals that "Editors should reserve space for articles…that receive poor review…they should publish unreviewed material…" (in A. C. Weller, 2001, Editorial Peer Review: Its Strengths and Weaknesses, p. 317). It was established for the database PubMed Central (following suggestions made by Harold Varmus, then Director of the National Institutes of Health, NIH) that "the non-peer-reviewed reports will also enter PubMed Central…reports may never be submitted to a journal for a traditional peer review, yet will be deposited in PubMed Central" (Weller, 2001, p. 320). Gordon (1978, Optional published refereeing, Physics Today, 31(10), 81) championed the idea of adopting optional published refereeing, whereby "the publication of almost everything will be guaranteed with the requirement that referees' comments be published along with the articles" (Weller, 2001, p. 317). This is why several conferences have a mixture of reviewed and non-reviewed papers. In some academic areas, the largest, oldest, and most prestigious associations (with chapters at most universities, in some cases) have been organizing yearly conferences, for many years, without any paper review at all; they invite submissions of abstracts of no more than 50 words.
While maintaining a very high refusal rate for their journals, these prestigious associations have almost no review for the papers presented at their very large yearly meetings. This is because conference meetings have different objectives than journal publishing. Walker and Hurt (1990, Scientific and Technical Literature, Chicago: American Library Association) emphasize this, saying "don't confuse the purpose of a proceedings with that of a Journal" (p. 94), and they add (quoting UNESCO's Bulletin for Libraries 24:82-97, 1970) that "Valuable oral exchange does not usually become valuable publication simply by printing it." They insist that "An important element in the process of transmission of scientific information and knowledge is oral communication" (p. 95). If this applies to scientific conferences, there is even more reason to apply it to conferences designed to be a forum for idea interchange and oral interaction among scientists, engineers, practicing professionals, consultants, and managers. As you know, we have a very high refusal rate, in the range of 80% to 90%, in our journals, and in our past conferences we accepted 10%-12% of papers as non-reviewed ones, for the same reasons that prestigious conferences conduct no review at all and that the database PubMed Central includes non-peer-reviewed reports.

With regard to the two papers randomly generated by the three MIT students and submitted to us for the WMSCI 2005 conference, the actual facts are as follows:

1. We received two bogus papers that were randomly generated. Based on the feedback we got from the respective reviewers, we sent a notice of non-acceptance to the author of one of the papers. By the deadline, we had no reviews for about 10% of the submitted papers, so we sent an acceptance as a NON-REVIEWED paper to their respective authors until we could get some feedback from the reviewers. If this feedback was positive, the respective paper would change its status from NON-REVIEWED to REVIEWED.

2. This acceptance policy was explicitly stated on the conference web site. We clearly stated that we were accepting about 10%-12% of papers as NON-REVIEWED ones. So we did nothing that we had not said clearly and explicitly up front.

3. Since one of our objectives is to bring together, in the same place, different kinds of disciplines with different kinds of reviewing processes and acceptance policies, we thought there was nothing wrong with mixing both kinds of presentations (REVIEWED and NON-REVIEWED) in the same conference, especially if the non-reviewed ones are only about 10%-12% and if we said so up front.

Under these conditions it is easy to pass off a bogus paper. On the same blog where we were attacked, one of the bloggers (Mikero) posted on April 14, 2005 at 12:11 AM that "In the recent IEEE conference in Boston, my team from the university of Colorado published 6 JUNK Papers. Shame on IEEE." (http://3dpancakes.typepad.com/ernie/2005/04/academic_spam_a.html). This is just one example of how many hoax papers are accepted at conferences and in journals. You can even find hoaxes in Ph.D. dissertations (the Bogdanoff affair denounced by Baez, J., in 2004: http://math.ucr.edu/home/baez/bogdanov.html). There are many published books and articles describing the huge weaknesses of peer reviewing in journals, let alone reviewing for conference presentations. David Lazarus (1982), Editor-in-Chief of the American Physical Society, emphasized that the peer-review system is "of finite value, particularly when used deceptively. We [in the Physical Review] rely on the honesty and integrity of our authors - and their own self-selection of the quality of the papers they send us - as much as on our referees and editors, to ensure the quality of our journals." (p. 219) (The American Physical Society publishes the Physical Review, Physical Review Letters, and Reviews of Modern Physics.) We had a similar perspective on the publishing of academic papers. It is sad, very sad, that this perspective has to be revised. Lance Fortnow (professor of Computer Science at The University of Chicago, editor of four journals, and Program Chair of several IEEE, ACM, etc., conferences in Computer Science) affirmed that "virtually none of the conferences in computer science fully referee their submissions. A clever student could write a paper with a bogus proof and have a chance of that paper being accepted at a major conference like STOC." As is known, STOC (the Annual ACM Symposium on Theory of Computing) is one of the most prestigious conferences in Computer Science.
Professor Lance Fortnow added: "I would consider someone who intentionally submits a bogus paper to STOC guilty of academic fraud. Why are these MIT students any different?" (http://weblog.fortnow.com/2005/04/fine-line-between-prank-and-fraud.html).

After we published the information regarding what the three MIT students did, a blogger wrote the following text on his blog (http://oemperor.blogspot.com/2005_04_01_oemperor_archive.html):

Last Friday I wrote about the hoax perpetrated by Jeremy Stribling and others, in which a computer-generated paper (written by SCIGen) was temporarily accepted as a non-reviewed paper by the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI).

Well, Jeremy's list of papers is here. How many of these did he actually author or co-author, and how many were computer-generated?

Even though I don't have his computer science background, it appears that his papers (such as A performance vs. cost framework for evaluating DHT design tradeoffs under churn [PDF]) are legit.

Incidentally, just to show how you can overcome past transgressions, one of Stribling's co-authors on this paper (and others) is Robert Morris of MIT, commonly known as "rtm." Here's Robert's MIT page. And here's the abstract for one of his old papers. (If you don't recall the name, maybe this abstract will remind you.)

-------------------------------------------------------------------------------- The 4.2 Berkeley Software Distribution of the Unix operating system (4.2BSD for short) features an extensive body of software based on the "TCP/IP" family of protocols. In particular, each 4.2BSD system "trusts" some set of other systems, allowing users logged into trusted systems to execute commands via a TCP/IP network without supplying a password. These notes describe how the design of TCP/IP and the 4.2BSD implementation allow users on untrusted and possibly very distant hosts to masquerade as users on trusted hosts. Bell Labs has a growing TCP/IP network connecting machines with varying security needs; perhaps steps should be taken to reduce their vulnerability to each other. --------------------------------------------------------------------------------

Perhaps. If such steps are not taken, something bad could happen. Clicking this last link takes you to the web page http://www.answers.com/topic/morris-worm, where the following text is found.

A famous occurrence of Internet sabotage. On November 2, 1988, Robert Morris, a Cornell University graduate student, unleashed a worm on the Internet that infected between 6,000 and 9,000 computers, overloading the entire Internet and causing many servers to fail as a result. As a computer science student, he was interested in determining how far and how quickly the worm could spread throughout the network, but he did not anticipate that it would cause as much trouble as it did due to his own misjudgment in coding the program's logic.

Morris was convicted and sentenced to three years of probation and 400 hours of community service as well as a $10,000 fine. This was a seminal incident in the history of Internet security that led directly to the founding of the CERT/CC a month later. See CERT, worm and denial of service attack.

This worm may have been an honest mistake, as I think it was, but the harm was already done, as might be the case with the bogus papers sent by the three MIT students. Participants in our past conferences know first hand the quality of our conferences. However, some scholars who did not participate in our past conferences may start perceiving the conference wrongly or may start having doubts about it. In the case of these scholars, the harm has already been done. Sooner or later the truth will be known, as is expected in any scientific activity.

4. We handled about 10,000 reviews, and just one bogus paper was accepted as a non-reviewed one. I think this is no worse than IEEE accepting and publishing 6 bogus papers (according to the blogger mentioned above) or the Bogdanoff brothers achieving the acceptance of two Ph.D. dissertations, especially if you take into account that we did nothing that was not clearly and explicitly written on our conference web page. In the worst case, we might have had an inadequate acceptance policy, mixing two widely used policies in the same conference. We might have made a judgmental mistake, but an honest mistake, because everything was said up front on the conference web site from the beginning of the organizational process. Consequently, if there was any dishonest act, it was definitely not ours. This is why we can sleep at night with our human and academic conscience at peace. I am not sure the same could be said of those who are playing with people's reputations via deceptive acts.

5. We trusted the authors making submissions because, as far as I understand, science and engineering are based on trust. We might have made a judgmental mistake regarding this issue. We might have confused science with scientists (or science apprentices); we might have confused engineering with engineers. The fact that the scientific enterprise is based on trust does not imply that all scientists can be trusted. We did nothing that was not clearly and explicitly stated, from the beginning of the organizing process, on the conference web site. We did not practice any deception, and this is an objective fact; but we were deceived, and this is also an objective fact. We were unjustly treated with conclusions derived from part of the truth, but we believe in universal justice, by which the whole truth will be known sooner or later.

6. Meanwhile, we have reviewed, and are still reviewing, our acceptance policy. One of the main changes we made is to adopt a two-tier reviewing process: a closed, double-blind one (as we did in the past) and an open, non-blind one. Acceptance decisions will be based on both kinds of reviewing. In this way we will be free to publish the comments of the reviewers who did the open review, in case we face a similar act of deception again.

7. In spite of all the half-truths and smears (such as saying that our conference is a bogus one) that have circulated regarding our conferences, about 1400 scholars/researchers participated in our last conference, along with its collocated ones. We renewed 100% of our Program Committee; about 400 scholars/researchers agreed to participate in the PC of the next conference, WMSCI 2006, and about 2500 agreed to participate as additional reviewers. They know that our conferences are not bogus, because they have been working with us for about 12 years, and many of them were co-editors of the approximately 140 different hard copy volumes (containing an average of 550 pages) that we have published up to the present as proceedings of our conferences. I am willing to send you, or the library of your organization, the first 12 issues of our journal, in which we publish the best 10% of the papers presented at our conferences. You can check the electronic version of the journal at http://www.iiisci.org/Journal/SCI/. You can also check the participants and organizers of our past conferences at http://www.iiis.org/iiis.

Thank you for your time and for giving me the opportunity to present the other perspective, so you can have a better informed opinion.

Sincerely,

Nagib Callaos

Editor-in-Chief of JSCI, and General Chair of WMSCI 2005
