Latest Research Papers Computer Science 2011


This is a list of important publications in computer science, organized by field.

Some reasons why a particular publication might be regarded as important:

  • Topic creator – A publication that created a new topic
  • Breakthrough – A publication that changed scientific knowledge significantly
  • Influence – A publication that has significantly influenced the world or has had a massive impact on the teaching of computer science

Artificial intelligence

Computing Machinery and Intelligence

Description: This paper discusses whether machines can think and proposes the Turing test as a method for deciding the question.

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence

Description: This summer research proposal inaugurated and defined the field. It contains the first use of the term artificial intelligence and this succinct description of the philosophical foundation of the field: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." (See philosophy of AI) The proposal invited researchers to the Dartmouth conference, which is widely considered the "birth of AI". (See history of AI.)

Fuzzy sets

  • Lotfi Zadeh
  • Information and Control, Vol. 8, pp. 338–353. (1965).

Description: This seminal 1965 paper set out the mathematics of fuzzy set theory.

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference

Description: This book introduced Bayesian methods to AI.

Artificial Intelligence: A Modern Approach

Description: The standard textbook in artificial intelligence. The book's web site lists over 1100 colleges that use it.

Machine learning

An Inductive Inference Machine

  • Ray Solomonoff
  • IRE Convention Record, Section on Information Theory, Part 2, pp. 56–62, 1957
  • (A longer version of this, a privately circulated report, 1956, is online).

Description: The first paper written on machine learning. It emphasized the importance of training sequences and of reusing parts of previous solutions when constructing trial solutions to new problems.

Language identification in the limit

Description: This paper founded the field of algorithmic learning theory.

On the uniform convergence of relative frequencies of events to their probabilities

Description: Introduced computational learning theory, VC theory, statistical uniform convergence, and the VC dimension.

A theory of the learnable

Description: The probably approximately correct (PAC) learning framework.

Learning representations by back-propagating errors

Description: Development of the backpropagation algorithm for artificial neural networks. Note that the algorithm was first described by Paul Werbos in 1974.
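As a concrete illustration (not taken from the paper), here is a minimal sketch of backpropagation: a one-hidden-layer sigmoid network trained on XOR by gradient descent, with the network size, seed, learning rate, and iteration count all chosen arbitrarily for the example.

```python
import numpy as np

# Minimal sketch of backpropagation: a one-hidden-layer sigmoid network
# trained on XOR by plain gradient descent. Sizes, seed, learning rate,
# and iteration count are illustrative choices, not from the paper.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule applied layer by layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # outputs should approach [0, 1, 1, 0]
```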

Induction of Decision Trees

Description: Decision trees are a common learning algorithm and a decision representation tool. Decision trees were developed by many researchers in many areas, even before this paper, but this paper is one of the most influential in the field.

Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm

Description: One of the papers that started the field of on-line learning. In this learning setting, a learner receives a sequence of examples, making predictions after each one, and receiving feedback after each prediction. Research in this area is remarkable because (1) the algorithms and proofs tend to be very simple and beautiful, and (2) the model makes no statistical assumptions about the data. In other words, the data need not be random (as in nearly all other learning models), but can be chosen arbitrarily by "nature" or even an adversary. Specifically, this paper introduced the winnow algorithm.
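A minimal sketch of the winnow update (promotion and demotion by a factor of 2, with the threshold set to the number of attributes) on a made-up target disjunction:

```python
# Minimal sketch of the winnow algorithm (multiplicative weight updates)
# learning the disjunction x1 OR x3 over 5 boolean attributes; the
# target concept and examples are made up for illustration.
n = 5
w = [1.0] * n                      # one weight per attribute, all start at 1
theta = n                          # classification threshold

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) >= theta

examples = [
    ([1, 0, 0, 0, 0], True),   # x1 set -> positive
    ([0, 0, 1, 0, 0], True),   # x3 set -> positive
    ([0, 1, 0, 1, 1], False),  # only irrelevant attributes set -> negative
    ([0, 0, 0, 0, 1], False),
]

for _ in range(10):                # a few passes over the data
    for x, label in examples:
        if predict(x) != label:
            for i in range(n):
                if x[i]:
                    # promote on a missed positive, demote on a false positive
                    w[i] = w[i] * 2 if label else w[i] / 2

print([predict(x) for x, _ in examples])   # -> [True, True, False, False]
```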

Learning to predict by the methods of temporal differences

Description: The temporal difference method for reinforcement learning.
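The core update can be sketched on a toy problem; the three-state chain, reward, and step size below are illustrative, not from the paper.

```python
# Minimal sketch of tabular TD(0) value estimation on a toy deterministic
# chain A -> B -> C, where reaching the terminal state C yields reward 1.
# The states, reward, and step size are illustrative.
V = {"A": 0.0, "B": 0.0, "C": 0.0}   # value estimates
alpha, gamma = 0.1, 1.0              # step size and discount factor

for _ in range(200):                  # episodes
    s = "A"
    while s != "C":
        s_next = "B" if s == "A" else "C"
        r = 1.0 if s_next == "C" else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print({k: round(v, 2) for k, v in V.items()})   # V(A) and V(B) approach 1
```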

Learnability and the Vapnik–Chervonenkis dimension

Description: The complete characterization of PAC learnability using the VC dimension.

Cryptographic limitations on learning boolean formulae and finite automata

Description: Proving negative results for PAC learning.

The strength of weak learnability

Description: Proved that weak and strong learnability are equivalent in the noise-free PAC framework. The proof introduced the boosting method.

A training algorithm for optimum margin classifiers

Description: This paper presented support vector machines, a practical and popular machine learning algorithm. Support vector machines often use the kernel trick.

A fast learning algorithm for deep belief nets

Description: This paper presented a tractable greedy layer-wise learning algorithm for deep belief networks which led to great advancement in the field of deep learning.

Knowledge-based analysis of microarray gene expression data by using support vector machines

Description: The first application of supervised learning to gene expression data, in particular Support Vector Machines. The method is now standard, and the paper one of the most cited in the area.

Collaborative networks

  • Camarinha-Matos, L. M.; Afsarmanesh, H. (2005). "Collaborative networks: A new scientific discipline". Journal of Intelligent Manufacturing. 16 (4–5): 439–452. doi:10.1007/s10845-005-1656-3.
  • Camarinha-Matos, L. M.; Afsarmanesh, H. (2008). Collaborative Networks: Reference Modeling. New York: Springer.


Compilers

On the translation of languages from left to right

Description: Introduced the LR parser, which does bottom-up parsing for deterministic context-free languages. Derived parsers, such as the LALR parser, have been and continue to be standard practice, as in Yacc and its descendants.[1]

Semantics of Context-Free Languages

Description: Introduced attribute grammars, the basis for yacc's S-attributed and zyacc's LR-attributed approaches.

A program data flow analysis procedure

Description: From the abstract: "The global data relationships in a program can be exposed and codified by the static analysis methods described in this paper. A procedure is given which determines all the definitions which can possibly reach each node of the control flow graph of the program and all the definitions that are live on each edge of the graph."

A Unified Approach to Global Program Optimization

Description: Formalized the concept of data-flow analysis as fixpoint computation over lattices, and showed that most static analyses used for program optimization can be uniformly expressed within this framework.
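The fixpoint view can be sketched on a toy reaching-definitions instance; the control flow graph and GEN/KILL sets below are made up for illustration.

```python
# Minimal sketch of data-flow analysis as a fixpoint computation:
# reaching definitions on a tiny hand-made control flow graph.
#
#   b0: x = 1 (d1)      b1: x = 2 (d2)      b2: y = x (d3)
#   edges: b0 -> b1, b0 -> b2, b1 -> b2
preds = {"b0": [], "b1": ["b0"], "b2": ["b0", "b1"]}
gen = {"b0": {"d1"}, "b1": {"d2"}, "b2": {"d3"}}
kill = {"b0": {"d2"}, "b1": {"d1"}, "b2": set()}  # d1 and d2 both define x

IN = {b: set() for b in preds}
OUT = {b: set() for b in preds}

changed = True
while changed:                        # iterate until nothing changes
    changed = False
    for b in preds:
        IN[b] = set().union(*(OUT[p] for p in preds[b]))
        new_out = gen[b] | (IN[b] - kill[b])
        if new_out != OUT[b]:
            OUT[b], changed = new_out, True

print(sorted(IN["b2"]))   # -> ['d1', 'd2']: both definitions of x reach b2
```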

YACC: Yet another compiler-compiler

Description: Yacc is a tool that made compiler writing much easier.

gprof: A Call Graph Execution Profiler

Description: Introduced the gprof profiler.

Compilers: Principles, Techniques and Tools

Description: This book became a classic in compiler writing. It is also known as the Dragon book, after the (red) dragon that appears on its cover.

Computer architecture

Colossus computer

Description: The Colossus machines were early computing devices used by British codebreakers to break German messages encrypted with the Lorenz Cipher during World War II. Colossus was an early binary electronic digital computer. The design of Colossus was later described in the referenced paper.

First Draft of a Report on the EDVAC[2]

Description: It contains the first published description of the logical design of a computer using the stored-program concept, which has come to be known as the von Neumann architecture.

Architecture of the IBM System/360

Description: The IBM System/360 (S/360) is a mainframe computer system family announced by IBM on April 7, 1964. It was the first family of computers making a clear distinction between architecture and implementation.

The case for the reduced instruction set computer

Description: Set out the reduced instruction set computer (RISC) design philosophy, which favors a reduced set of simpler instructions.

The CRAY-1 Computer System

Description: The Cray-1 was a supercomputer designed by a team including Seymour Cray for Cray Research. The first Cray-1 system was installed at Los Alamos National Laboratory in 1976, and it went on to become one of the best known and most successful supercomputers in history.

Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities

Description: Introduced Amdahl's law, which bounds the overall speedup obtainable from speeding up only a fraction of a computation.
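A sketch of the law as it is usually stated today (the closed form is standard, though the paper itself argues the point in prose):

```python
# Amdahl's law: if a fraction p of a computation benefits from
# parallelization over n processors, the overall speedup is
# 1 / ((1 - p) + p / n); as n grows, it approaches 1 / (1 - p).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# even with 95% of the work parallelized, 1024 processors give < 20x
print(round(amdahl_speedup(0.95, 1024), 1))   # -> 19.6
```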

A Case for Redundant Arrays of Inexpensive Disks (RAID)

Description: This paper discusses the concept of RAID disks, outlines the different levels of RAID, and the benefits of each level. It is a good paper for discussing issues of reliability and fault tolerance of computer systems, and the cost of providing such fault-tolerance.
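A minimal sketch of the parity idea behind the RAID levels that use a parity disk: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The block contents below are illustrative.

```python
from functools import reduce

# Parity sketch for RAID: parity = XOR of the data blocks, so a single
# failed disk can be reconstructed from the remaining blocks.
blocks = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]          # data on 3 disks

def xor_blocks(bs):
    # byte-wise XOR across a list of equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*bs))

parity = xor_blocks(blocks)                                # stored on a 4th disk

# simulate losing disk 1 and rebuilding it from parity + survivors
rebuilt = xor_blocks([blocks[0], blocks[2], parity])
print(rebuilt == blocks[1])   # -> True
```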

The case for a single-chip multiprocessor

Description: This paper argues that the approach taken to improving the performance of processors by adding multiple instruction issue and out-of-order execution cannot continue to provide speedups indefinitely. It lays out the case for making single chip processors that contain multiple "cores". With the mainstream introduction of multicore processors by Intel in 2005, and their subsequent domination of the market, this paper was shown to be prescient.

Computer graphics

The Rendering Equation

  • J. Kajiya
  • SIGGRAPH: ACM Special Interest Group on Computer Graphics and Interactive Techniques, pp. 143–150[3]
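The paper's central contribution is what is now called the rendering equation; in one common modern notation (Kajiya's original formulation differs slightly) it reads:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where \(L_o\) is the outgoing radiance at surface point \(x\) in direction \(\omega_o\), \(L_e\) the emitted radiance, \(f_r\) the BRDF, \(L_i\) the incoming radiance over the hemisphere \(\Omega\), and \(n\) the surface normal.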

Elastically deformable models

  • Demetri Terzopoulos, John Platt, Alan Barr, Kurt Fleischer
  • Computer Graphics, 21(4), 1987, 205–214, Proc. ACM SIGGRAPH'87 Conference, Anaheim, CA, July 1987.
  • Online version (PDF)

Description: The Academy of Motion Picture Arts and Sciences cited this paper as a "milestone in computer graphics".

Computer vision

The Phase Correlation Image Alignment Method

  • C.D. Kuglin and D.C. Hines
  • IEEE 1975 Conference on Cybernetics and Society, 1975, New York, pp. 163–165, September

Description: A correlation method based upon the inverse Fourier transform.
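A minimal sketch of the idea, assuming a pure (circular) translation between two synthetic images: the normalized cross-power spectrum has an inverse FFT that peaks at the shift.

```python
import numpy as np

# Minimal sketch of phase correlation on synthetic data. If b is a
# (circular) translation of a, the normalized cross-power spectrum is a
# pure phase ramp whose inverse FFT is a delta at the shift.
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, 12), axis=(0, 1))       # shift a by (5, 12)

Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
cross_power = Fb * np.conj(Fa)
cross_power /= np.abs(cross_power)               # keep only the phase
corr = np.fft.ifft2(cross_power).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                                      # the peak index is the shift
```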

Determining Optical Flow

Description: A method for estimating the image motion of world points between two frames of a video sequence.

An Iterative Image Registration Technique with an Application to Stereo Vision

Description: This paper provides an efficient technique for image registration.

The Laplacian Pyramid as a compact image code

Description: A technique for image encoding using local operators of many scales.

Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images

Description: Introduced (1) Markov random fields (MRFs) for image analysis and (2) Gibbs sampling, which revolutionized computational Bayesian statistics and thus had a paramount impact on many other fields in addition to computer vision.
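A minimal sketch of Gibbs sampling on a standard bivariate normal with correlation rho, alternating draws from the two full conditionals; the target distribution, seed, and sample counts are illustrative choices.

```python
import math
import random

# Minimal sketch of Gibbs sampling: draw from a standard bivariate
# normal with correlation rho by alternating the two full conditionals.
# The target, seed, and sample counts are illustrative.
random.seed(0)
rho = 0.8
x, y = 0.0, 0.0
samples = []

for i in range(20000):
    # full conditionals: x | y ~ N(rho * y, 1 - rho^2), symmetrically for y
    x = random.gauss(rho * y, math.sqrt(1 - rho ** 2))
    y = random.gauss(rho * x, math.sqrt(1 - rho ** 2))
    if i >= 1000:                      # discard burn-in
        samples.append((x, y))

mean_x = sum(s[0] for s in samples) / len(samples)
corr = sum(s[0] * s[1] for s in samples) / len(samples)   # estimates E[xy] = rho
print(round(mean_x, 2), round(corr, 2))
```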

Snakes: Active contour models

Description: An interactive variational technique for image segmentation and visual tracking.

Condensation – conditional density propagation for visual tracking

Description: A technique for visual tracking.

Object recognition from local scale-invariant features

Description: A technique (scale-invariant feature transform) for robust feature description.

Concurrent, parallel, and distributed computing

Main article: List of important publications in concurrent, parallel, and distributed computing

Topics covered: concurrent computing, parallel computing, and distributed computing.


Databases

A relational model for large shared data banks

Description: This paper introduced the relational model for databases, which became the dominant model for database systems.

Binary B-Trees for Virtual Memory

  • Rudolf Bayer
  • ACM-SIGFIDET Workshop 1971, San Diego, California, Session 5B, pp. 219–235.

Description: This paper introduced the B-tree data structure, which became the standard index structure in database systems.

Relational Completeness of Data Base Sublanguages

  • E. F. Codd
  • In: R. Rustin (ed.): Database Systems: 65–98, Prentice Hall and IBM Research Report RJ 987, San Jose, California (1972)
  • Online version (PDF)

Description: Defined relational completeness and showed that the relational algebra and relational calculus have equivalent expressive power.

The Entity Relationship Model – Towards a Unified View of Data

Description: This paper introduced the entity–relationship diagram (ERD) method of database design.

SEQUEL: A structured English query language

  • Donald D. Chamberlin, Raymond F. Boyce
  • International Conference on Management of Data, Proceedings of the 1974 ACM SIGFIDET (now SIGMOD) workshop on Data description, access and control, Ann Arbor, Michigan, pp. 249–264

Description: This paper introduced the SQL language.

The notions of consistency and predicate locks in a database system

Description: This paper defined the concepts of transaction, consistency and schedule. It also argued that a transaction needs to lock a logical rather than a physical subset of the database.

Federated database systems for managing distributed, heterogeneous, and autonomous databases

  • Amit Sheth, J. A. Larson
  • ACM Computing Surveys – special issue on heterogeneous databases, Volume 22, Issue 3, pp. 183–236, September 1990
  • ACM source

Description: Introduced the concept of federated database systems, which had a huge impact on data interoperability and the integration of heterogeneous data sources.

Mining association rules between sets of items in large databases

Description: Introduced association rules, a very common method for data mining.
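A minimal sketch of the support/confidence framework the paper introduced, on made-up transaction data (the thresholds are arbitrary):

```python
from itertools import combinations

# Minimal sketch of support/confidence mining behind association rules;
# the transactions and thresholds are made up for illustration.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]
min_support, min_conf = 0.4, 0.7

def support(itemset):
    # fraction of transactions containing every item of the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
rules = []
for size in (2, 3):
    for itemset in map(frozenset, combinations(items, size)):
        if support(itemset) < min_support:
            continue                      # Apriori-style pruning by support
        for k in range(1, size):
            for lhs in map(frozenset, combinations(itemset, k)):
                conf = support(itemset) / support(lhs)
                if conf >= min_conf:
                    rules.append((set(lhs), set(itemset - lhs), round(conf, 2)))

for lhs, rhs, conf in rules:
    print(sorted(lhs), "->", sorted(rhs), conf)
```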


Search the Internet for any research article published in 2011, and you have a 50–50 chance of downloading it for free. This claim, made in a report produced for the European Commission, suggests that many more research papers are openly available online than was previously thought. The finding, released on 21 August, is heartening news for advocates of open access. But some experts are raising their eyebrows at the high numbers.

There has been a steady move over the past few years towards getting research papers that are funded by government money into the public domain, and the best estimates for the proportion of papers free online run at around 30%. But these are underestimates, argues Éric Archambault, the founder and president of Science-Metrix, a consultancy in Montreal, Canada, that conducted the analysis for the European Commission.

The firm initially asked a team led by Stevan Harnad, an open-access campaigner and cognitive scientist at the University of Quebec in Montreal, to check a random sample of 20,000 papers published in 2008 (from the Scopus database of papers run by Elsevier). It used a program designed by Yassine Gargouri, a computer scientist at the same university, to find free articles. The team found that 32% of the papers that it downloaded in December 2012 were freely available. But when Archambault’s group checked 500 of these papers manually using Google and other search engines and repositories, the figure rose to 48%.

On the basis of this initial test, Science-Metrix applied its own automated software, or ‘harvester’, to 320,000 papers downloaded from 2004 to 2011; the tool searches publishers’ websites, institutional archives, repositories such as arXiv and PubMed Central, and sites such as the academic networking site ResearchGate and the search engine CiteSeerX.

It found that an average of 43% of articles published during 2008–11 are available online for free, with the results varying by country and discipline (see ‘Freedom online’). But the true figure is probably higher, because the harvester does not pick up every free paper. When the incompleteness is adjusted for, the proportion of free articles from 2011 rises to about 50%, says Archambault.

The report “confirms my optimism”, says Peter Suber, director of the Office for Scholarly Communication at Harvard University in Cambridge, Massachusetts, and a proponent of open access to research. He thinks that it reflects the experiences of working scientists today. “When researchers hit a paywall online, they turn to Google to search for free copies — and, increasingly, they are finding them,” he says.

The rise of open-access journals is part of the explanation: the share of papers published in these journals rose from 4% in 2004 to 12% by 2011, the report found — agreeing with figures published last year by Bo-Christer Björk, who studies information systems at the Hanken School of Economics in Helsinki.

But the number of peer-reviewed manuscripts made free by other means has also increased, the report says. That includes those eventually made free — often a year after publication, and sometimes on a temporary promotional basis — by publishers that charge for subscription. But it also includes manuscripts that researchers themselves archive online on repositories and personal websites. Some of the articles, although free to read, may not meet formal definitions of open access because, for example, they do not include details on whether readers can freely reuse the material.

The report does not try to distinguish between types of manuscript, nor where and how they were posted, says Archambault. “The situation is so complex that it’s very hard to measure.”

Björk says that the latest measurements seem to have been carefully done, although he adds that because he does not have details of the robotic harvester’s code, he cannot evaluate its method. “Experts on the subject would probably agree that the open-access share of papers, measured around a year and a half after publication, is currently at least 30%,” he says. “Anything above that is dependent on ways of measuring, with this new study representing the highest estimate.”

The report, which was not peer reviewed, calls the 50% figure for 2011 a “tipping point”, a rhetorical flourish that Suber is not sure is justified. “The real tipping point is not a number, but whether scientists make open access a habit,” he says.

Harnad thinks that the next step should be to obtain more accurate measures of when papers become free. “It’s hardly a triumph if articles are only accessible after a one-year embargo,” he says. Greater measurement accuracy is tricky to achieve, he adds, because Google routinely blocks all robotic harvesters. He believes that research on the growth of open access should be given special concessions.

The proportion of free online papers is likely to increase in the next few years. The European Commission says that, from 2014, the results of all research funded by the European Union must be open access. And in February, the US White House announced that government-funded research should be made free to read within 12 months of publication (see Nature 494, 414–415; 2013). Federal agencies are due to submit their plans for achieving this to the US Office of Science and Technology Policy by 22 August.

SOURCE: Science-Metrix

