For more than thirty years, the parallel programming community
has used the *dependence graph* as the main abstraction for
reasoning about and exploiting parallelism in ``regular'' algorithms
that use dense arrays, such as finite-difference methods and FFTs.
In this paper, we argue that the dependence graph is not a
suitable abstraction for algorithms in new application areas
like machine learning and network analysis, in which the
key data structures are ``irregular'' structures such as
graphs, trees, and sets.

To address the need for better abstractions, we introduce a data-centric
formulation of algorithms called the *operator formulation* in which
an algorithm is expressed in terms of its action on data structures.
This formulation is the basis for a structural analysis of algorithms
that we call *tao-analysis*. Tao-analysis can be viewed as an
abstraction of algorithms that distills out algorithmic properties
important for parallelization. It reveals that a generalized
form of data-parallelism called *amorphous data-parallelism*
is ubiquitous in algorithms, and that, depending on the tao-structure
of the algorithm, this parallelism may be exploited by compile-time,
inspector-executor, or optimistic parallelization, thereby unifying these
seemingly unrelated parallelization techniques. Regular
algorithms emerge as a special case of irregular algorithms, and
many application-specific optimization techniques can be generalized
to a broader context.
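
To make the operator formulation concrete, the following is a minimal sketch, in C++, of an unordered graph algorithm expressed in this style. The names (`adj`, `worklist`, `level`) and the choice of breadth-first level computation are ours for illustration, not an API or example from this paper: a worklist holds *active nodes*, and an *operator* is applied to each active node's neighborhood, possibly creating new active nodes.

```cpp
// Sketch of the operator formulation (illustrative names, not the paper's
// API): an algorithm = active nodes + an operator applied to each active
// node's neighborhood + an ordering (here: none, an unordered algorithm).
// Shown for breadth-first level computation on a small graph.
#include <cstdio>
#include <deque>
#include <vector>

int main() {
    // Small undirected graph as an adjacency list.
    std::vector<std::vector<int>> adj = {
        {1, 2}, {0, 3}, {0, 3}, {1, 2, 4}, {3}};
    std::vector<int> level(adj.size(), -1);

    // Worklist of active nodes; applying the operator may create new ones.
    std::deque<int> worklist;
    level[0] = 0;
    worklist.push_back(0);

    while (!worklist.empty()) {
        int n = worklist.front();
        worklist.pop_front();
        // Operator: read and update the neighborhood of active node n.
        // Activities whose neighborhoods do not overlap could execute in
        // parallel -- the intuition behind amorphous data-parallelism.
        for (int m : adj[n]) {
            if (level[m] == -1) {
                level[m] = level[n] + 1;
                worklist.push_back(m);  // m becomes a new active node
            }
        }
    }

    for (size_t i = 0; i < level.size(); ++i)
        std::printf("node %zu: level %d\n", i, level[i]);
    return 0;
}
```

This sketch processes the worklist sequentially; the point of tao-analysis is that the structure of the algorithm determines how the conflict checks needed to run such activities in parallel can be performed, whether at compile time, by an inspector, or optimistically at run time.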

These results suggest that the operator formulation and tao-analysis of algorithms can be the foundation of a systematic approach to parallel programming.