Catalyzing Inquiry at the Interface of Computing and Biology

CULTURE AND RESEARCH INFRASTRUCTURE 341

10.2.2.4 Graduate Programs


Graduate programs at the BioComp interface are often intended to provide B.S. graduates in one
discipline with the complementary expertise of the other. For example, individuals with bachelor’s
degrees in biology may acquire computational or analytical skills early in graduate school through
condensed “retraining” programs that expose them to nonlinear dynamics, algorithms, and so on. Alter-
natively, individuals with bachelor’s degrees in computer science might take a number of courses that
expose them to essential biological concepts and techniques.
Graduate education at the interface is much more diverse than at the undergraduate level. Al-
though there is general agreement that an undergraduate degree should expose the student to the
component sciences and prepare him or her for future work, the graduate degree involves a far wider
array of goals, focuses, fields, and approaches. Like undergraduate programs, graduate programs can
be stand-alone departments, independent interdisciplinary programs, or certificate programs that re-
quire students to have a “home” department.
A bioinformatics program oriented toward genomics is very common. Virginia Tech’s program, for
example, has been renamed the program in “Genetics, Bioinformatics, and Computational Biology,”
indicating its strong focus on genetic analysis. In contrast, the Keck Graduate Institute at Claremont
stresses the interdisciplinary skill set necessary for the effective management of companies that straddle
the biology-quantitative science boundary. It awards a Master of Bioscience, a professional degree


models, discrete time events, randomization, and convergence, as well as the use of abstraction to hide
irrelevant detail.


  • Algorithmic thinking and programming: concepts of algorithmic thinking, including functional decomposi-
    tion, repetition (iteration and/or recursion), basic data organization (record, array, list), generalization and
    parameterization, algorithm vs. program, top-down design, and refinement.

  • Universality and computability: ability of any computer to perform any computational task.

  • Limitations of information technology: notions of complexity, growth rates, scale, tractability, decidability,
    and state explosion combine to express some of the limitations of information technology; connections to
    applications, such as text search, sorting, scheduling, and debugging.

  • Societal impact of information and information technology: technical basis for social concerns about priva-
    cy, intellectual property, ownership, security, weak/strong encryption, inferences about personal characteris-
    tics based on electronic behavior such as monitoring Web sites visited, “netiquette,” “spamming,” and free
    speech in the Internet environment.
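The notions of growth rates and tractability listed above can be made concrete with a small sketch (not part of the report): counting the comparisons performed by a linear scan versus binary search over the same sorted data. The function names and counts shown are illustrative, not drawn from any curriculum described here.

```python
# Illustrative sketch: growth rates in practice.
# Linear search inspects items one by one (O(n) comparisons);
# binary search halves the range each step (O(log n) comparisons).

def linear_search_steps(items, target):
    """Count comparisons made by a left-to-right scan."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    """Count comparisons made by repeated halving (items must be sorted)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # 1,000,000 comparisons
print(binary_search_steps(data, 999_999))  # about 20 comparisons
```

The contrast between one million comparisons and roughly twenty is the practical content of "growth rates": for large inputs, the shape of the algorithm matters far more than the speed of the machine.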


A third perspective is provided by Steven Salzberg, senior director of bioinformatics at the Institute for Genom-
ic Research in Rockville, Maryland. In a tutorial paper for biologists, he lists the following areas as important
for biologists to understand:^3


  • Basic computational concepts (algorithms, program execution speed, computing time and space require-
    ments as a function of input size; really expensive computations),

  • Machine learning concepts (learning from data, memory-based reasoning),

  • Where to store learned knowledge (decision trees, neural networks),

  • Search (defining a search space, search space size, tree-based search),

  • Dynamic programming, and

  • Basic statistics and Markov chains.
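Dynamic programming, one of the items on Salzberg's list, is worth a concrete illustration because it underlies sequence-alignment algorithms that biologists encounter constantly. The sketch below is not taken from Salzberg's tutorial; it is a minimal edit-distance computation between two short sequences, showing the characteristic table-filling structure of a dynamic program.

```python
# Illustrative sketch: dynamic programming via edit distance.
# table[i][j] holds the minimum number of insertions, deletions, and
# substitutions needed to turn a[:i] into b[:j]; each cell is computed
# from three already-solved subproblems.

def edit_distance(a, b):
    """Minimum edit operations transforming string a into string b."""
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        table[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        table[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            table[i][j] = min(table[i - 1][j] + 1,         # deletion
                              table[i][j - 1] + 1,         # insertion
                              table[i - 1][j - 1] + cost)  # match/substitution
    return table[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```

The same table-filling idea, with match/mismatch scores and gap penalties in place of unit costs, is the core of the Needleman-Wunsch and Smith-Waterman alignment algorithms.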


(^3) S.L. Salzberg, “A Tutorial Introduction to Computation for Biologists,” Computational Methods in Molecular Biology, S.L. Salzberg, D.
Searls, and S. Kasif, eds., Elsevier Science Ltd., New York, 1998.
