I have been marinating on this today, putting on my algorithmic information theory hat following the semester's close. As it turned out, PLD and automated theorem proving hats were also needed.
Think of decompositions as a language to be recognized by some Turing machine. We have all the decidable languages at our disposal; can dependence-free decomposition be described by a DL?
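As a toy instance of that framing: the classic GCD dependence test is a decider for a tiny "decomposition language" of affine array accesses. The sketch below is purely my own illustration (the function name and encoding are made up, not from any of the work surveyed here):

```python
from math import gcd

def has_loop_carried_dependence(a: int, c1: int, b: int, c2: int) -> bool:
    """Decide whether writes to X[a*i + c1] and reads of X[b*j + c2]
    (i, j ranging over all integers) can ever touch the same element.
    GCD test: a*i - b*j = c2 - c1 has an integer solution
    iff gcd(a, b) divides (c2 - c1)."""
    g = gcd(a, b)
    if g == 0:                    # a == b == 0: both hit one fixed element
        return c1 == c2
    return (c2 - c1) % g == 0

# X[2*i] written, X[2*j + 1] read: even vs. odd elements, never conflict.
print(has_loop_carried_dependence(2, 0, 2, 1))  # False
```

Note that this ignores loop bounds entirely, so it's only a necessary condition for dependence; it over-approximates, which is at least the safe direction when deciding what *not* to parallelize.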
An hour's review of the research suggests that:
- there's a ton of PLD research into this from a formal-languages / program-transformation standpoint. most of it uses toy languages that work great for half-page examples in some obscure journal. LogP is a great example of this kind of thing.
- from a dynamics standpoint, you have Hoare and Milner and all that hoary old communicating-processes stuff. it's a great mathematical description of processes as they run, but not applicable, or at least not applied, to program analysis. deriving a Milner system or any other pi-calculus description from program text seems a nonstarter in the Entscheidungsproblem sense
- Quentin Stout's seminal busy beaver work seems a great place to start putting the nails in any behavioral-analysis argument, and thus also anything involving "so we start simulating...." nosir, no we don't
- EXPTIME is a class not very well known outside the complexity and AI communities, but a reduction to this class would work for our purposes. Yu Ding's work on description logic satisfiability could be a good starting point.
- so let's turn to the AIT folks (this gets a bit esoteric). I've read Chaitin's book, and it doesn't have anything really relevant, save all the mathematical tools necessary for talking about program size. how far can we push Chaitin's arithmetization process? this is for data- and not task-based parallelism. can we reduce to some Kolmogorov description and pair it with some number theory / analysis (think Banerjee's Inequality) to isolate information-theoretic chunks of parallelism?
- yeah, heuristics (à la the FFTW paper and countless others). pretty much anything built on heuristics is kinda lame IMHO, because the underlying assumptions change more quickly than the code. a spiritual dead end.
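On the Banerjee's Inequality mention above, here's a hedged sketch of the bounds test for a single subscript pair (the function name and encoding are mine; this is the standard textbook single-index form, not anything specific from the AIT angle):

```python
def banerjee_may_depend(a, c, b, d, lo, hi):
    """Necessary condition for a dependence between writes X[a*i + c]
    and reads X[b*j + d] with lo <= i, j <= hi (Banerjee's Inequality):
    a*i - b*j + (c - d) = 0 can hold only if 0 lies between the
    expression's minimum and maximum over the iteration bounds."""
    def extremes(coef, lo, hi):
        # (min, max) of coef * k for integer k in [lo, hi]
        return (coef * lo, coef * hi) if coef >= 0 else (coef * hi, coef * lo)
    imin, imax = extremes(a, lo, hi)
    jmin, jmax = extremes(-b, lo, hi)
    return imin + jmin + (c - d) <= 0 <= imax + jmax + (c - d)

# X[i + 10] written, X[j] read, 1 <= i, j <= 5: offsets can never meet.
print(banerjee_may_depend(1, 10, 1, 0, 1, 5))  # False
```

Like the GCD test it's only a necessary condition (it reasons over the reals), but it's complementary: Banerjee sees the loop bounds and misses divisibility, the GCD test sees divisibility and misses the bounds.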
What would be just as useful, theoretically, is determining that AIT models *can't* address certain parallelisms, or at a more fundamental level that some decomposition languages can't be described by a type-0 Chomsky grammar. Of course, what we really want is to find a suitable DL, or preferably a parameterized set of such languages generated from a description in some operator syntax, because then we could *apply* it.
Operator syntaxes! If we can decompose a program into, say, primitive recursive functions and SSA -- well, from Chaitin we already know which changes can be made assuming exponential time. Thus, we don't need a decidable language for the source; we could merely recognize safe, parallel transformations of that IR, if we had a language for THOSE, and rather than trying to solve HOW to do it, just start testing structured or random transformations of the source!
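A minimal sketch of "test transformations rather than derive them" on a toy three-address IR. Everything here is invented for illustration (the IR encoding, the `seems_safe` oracle, the example programs), and random testing gives only probabilistic evidence of safety, not a proof:

```python
import random

# Toy three-address IR: (dest, op, src1, src2) tuples, executed in order.
PROG = [("t1", "+", "a", "b"),
        ("t2", "*", "a", "a"),
        ("t3", "+", "t1", "t2")]

OPS = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}

def run(prog, inputs):
    env = dict(inputs)
    for dest, op, s1, s2 in prog:
        env[dest] = OPS[op](env[s1], env[s2])   # KeyError if src undefined
    return env["t3"]

def seems_safe(original, transformed, trials=100):
    """Accept a transformation only if it matches the original's observable
    output on `trials` random inputs -- a testing oracle, not a proof."""
    for _ in range(trials):
        inputs = {"a": random.randint(-50, 50), "b": random.randint(-50, 50)}
        try:
            if run(original, inputs) != run(transformed, inputs):
                return False
        except KeyError:        # transformed reads a value before defining it
            return False
    return True

swap = [PROG[1], PROG[0], PROG[2]]   # t1 and t2 are independent: safe reorder
bad  = [PROG[2], PROG[0], PROG[1]]   # reads t1/t2 before they exist: unsafe
```

The point is that the recognizer never inspects the *source* pattern at all; it only judges candidate transformations, which is exactly the inversion being proposed.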
To be more practical: you of course can't actually generate all modifications within a finite distance, even granted exponential time, but I bet randomized methods could find (with a high degree of probability) any "seed points" with a spark of parallelism, around whose local space more targeted (possibly genetic?) algorithms could search. Think of them as jeweled raindrops of parallelism forming in our stormy, branch-heavy cloud of code.
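A sketch of that randomized seed hunt, assuming we can cheaply sample per-iteration memory footprints (the names `conflict_rate` and `find_seeds` and the trace are all hypothetical):

```python
import random

def conflict_rate(footprints, lo, hi, samples=200):
    """Monte Carlo estimate of how often two random iterations in
    [lo, hi) touch a common address -- a proxy for dependence density."""
    hits = 0
    for _ in range(samples):
        i, j = random.randrange(lo, hi), random.randrange(lo, hi)
        if i != j and footprints[i] & footprints[j]:
            hits += 1
    return hits / samples

def find_seeds(footprints, width=8, threshold=0.05):
    """Slide a window over the iteration space; low-conflict windows are
    the 'seed points' worth handing to a more targeted (genetic?) search."""
    n = len(footprints)
    return [(s, s + width)
            for s in range(0, n - width + 1, width)
            if conflict_rate(footprints, s, s + width) <= threshold]

# Hypothetical footprint trace: iterations 0-7 touch disjoint addresses,
# iterations 8-15 all hammer address 0.
trace = [{i} for i in range(8)] + [{0, i} for i in range(8, 16)]
print(find_seeds(trace))  # [(0, 8)]
```

Sampling pairs instead of enumerating them is what keeps this cheap; the threshold trades false seeds against missed ones, and whatever survives gets handed to the targeted local search.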
Either way, recognizing transformations rather than recognizing source patterns could be the key insight here. I should go talk to Prof. Lipton, who ran one hell of a Randomized Algorithms class this semester.
Holla with your thoughts!