What Is Theoretical Computer Science?

(cacm.acm.org)

73 points | by levlaz 6 hours ago

36 comments

  • pron 5 minutes ago

    > NP-completeness theory, however, does not explain or predict the unreasonable effectiveness of SAT solvers.

    I don't think that's accurate. Clearly, the SAT subset that SAT solvers solve efficiently is in P, so in that sense complexity theory "explains" it and even "predicts" it (clearly, there are subsets of SAT in P).

    What complexity theory is yet to do is:

    1. Either prove that the subset of SAT that can be solved efficiently covers all of SAT (in which case P=NP) or succinctly describe the subset of SAT that modern solvers solve efficiently. But there are many open questions (including P vs NP) in complexity theory; that doesn't mean that the discipline is flawed.

    2. Explain why that subset of SAT is prevalent in practice (making it "effective"). I'm not sure this is the job for complexity theory or any theoretical discipline for that matter (as opposed to empirical ones); after all theoretical computer science is meant to be theoretical. But there are mathematical disciplines (e.g. random graph theory) that do describe and even explain and predict mathematical structures that are more likely to arise in "nature".
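    A concrete illustration of "there are subsets of SAT in P" (a standard textbook fragment, not something from the comment itself): Horn-SAT, where every clause has at most one positive literal, is decidable in polynomial time by forward chaining. A minimal sketch, with the clause encoding invented for illustration:

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula in polynomial time.

    Each clause is (body, head): a frozenset of variables that must all
    be true for the clause to fire, and a head variable, or None for a
    goal clause (body -> False). Forward chaining derives the least set
    of variables forced true; the formula is satisfiable iff no goal
    clause fires under that assignment.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head is not None and head not in true_vars and body <= true_vars:
                true_vars.add(head)  # clause forces its head true
                changed = True
    # Satisfiable iff no goal clause has its whole body derived true.
    return all(not (body <= true_vars) for body, head in clauses if head is None)
```

    For example, `horn_sat([(frozenset(), 'a'), (frozenset({'a'}), 'b')])` reports satisfiable, while adding the goal clause `(frozenset({'a', 'b'}), None)` makes it unsatisfiable. 2-SAT and XOR-SAT are other well-known tractable fragments.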

  • poulpy123 2 minutes ago

    it's when I'm a computer scientist, theoretically

  • ykonstant 5 hours ago

    This is a great article and I especially liked the notion:

    >Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.

    as well as the emphasis on the difference between TCS in Europe and the US. I remember from the University of Crete that the professors all spent serious time in the labs coding and testing. Topics like Human-Computer Interaction, Operating Systems research, and lots of hardware (VLSI etc.) were core parts of the theoretical Computer Science research areas. This is why no UoC student could graduate without knowledge of both Algorithms and PL theory, for instance, AND circuit design (my experience is from 2002-2007).

    I strongly believe that this breadth of concepts is essential to Computer Science, and the narrower emphasis of many US departments (not all) harms both the intellectual foundations and practical employment prospects of the graduate. [I will not debate this point online; I'll be happy to engage in hours long discussion in person]

    • wirrbel 4 hours ago

      There is (mechanical/optical/*) engineering, experimental and theoretical physics, and then there is maths (focussing on physical problems). I think these four abstraction levels could be a model for computer science.

      Theoretical physics has been a bit of a troubled child in recent years, however.

      If we map computer science onto those four physics disciplines, we get:

      Software / hardware engineering

      Applied computer science

      Theoretical computer science

      Mathematics dealing with problems inspired by computer science

    • nonameiguess 2 hours ago

      Not all that interested in debate, either, but it's hard to tell what you're really claiming here. It isn't the case with departments I'm aware of, certainly not where I went to school myself, that someone can graduate with a CS degree having taken nothing but theory courses. Tenured researchers can eventually specialize in that, but even PhD candidates have to demonstrate broad mastery of the entire core of CS across multiple sub-disciplines.

      Perhaps you're talking about the split between Electrical Engineering and Computer Science? That one isn't universal as some departments only offer EECS and not CS as a major, but when CS on its own is offered, "hardware" courses tend to be about microarchitecture, with practical work done using simulators. You're not required to know much of anything about electronics. But there is no program I'm aware of where a person can do nothing but math and get a CS degree. You have to write and test code.

    • ninetyninenine 3 hours ago

      > Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.

      No this guy doesn’t get it. He doesn’t understand what science is.

      In science nothing can be proven. If I say all swans are white as my hypothesis this statement can never be proven because I can never actually verify that I observed all swans. There may be some swan hidden on earth or in the universe that I haven’t seen. Since the universe is infinite in size I can never confirm ever that I’ve observed all swans.

      However if I observe one black swan it means I falsified the entire hypothesis. Thus in science and in reality as we know it nothing can be proven… things can only be falsified.

      Math on the other hand is different. Math is all about a made up universe where axioms are known absolutely. It has nothing to do with observation or evidence in the same way science does. Math is an imaginary game we play and in this game it is possible to prove things.

      This proof is the domain of mathematics… not science. Physics is a science because it involves gathering evidence and attempting to falsify the hypothesis.

      Einstein said it best: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong”

      Basically, Newton's laws of motion are a perfect example of falsification via experimentation, with relativity later being confirmed as the more accurate theory that matches observation better.

      So what’s the deal with computer science?

      First of all, the term already hits the first nomenclature issue. Computer science is ironically not a science. It lives in the same axiom-based world as mathematics, and therefore things can be proven in computer science but not in science itself.

      So this nomenclature issue is what's confusing everyone. The OP failed to recognize that computer science isn't actually a freaking science. Physics is a science but computer science isn't.

      So what is computer science? Sorry to say but it’s a math. I mean it’s all axioms and theorems. It’s technically math.

      CS is a math in the same way algebra and geometry are math. Physics is a science and it is not a math. It's a totally orthogonal comparison.

      Your job as a programmer is more like applied math. It's completely orthogonal to the whole topic, but people often get this mixed up. They start thinking that because programming is applied computer science, computer science itself is not a math.

      Applied math ironically isn’t really math in the same way writing isn’t a pencil. Yes you use a pencil to write but they are not the same. Same thing with computer science and programming.

      • moring 3 hours ago

        > In science nothing can be proven.

        Asking here because it is mostly on-topic: This phrase is repeated often, but shouldn't it actually be, "In science, a hypothesis is either fundamentally verifiable or fundamentally falsifiable, but never both"? The two simply being the logical negation of each other.

        "All swans are white" is fundamentally falsifiable (by seeing a black swan) but not verifiable, as you described.

        "Black swans exist" is fundamentally verifiable (by seeing a black swan), but not falsifiable.

        • epgui 37 minutes ago

          Science (as in the “natural sciences”, i.e. everything that is not philosophy or mathematics) is not about “proving” anything at all.

          Science is about measuring, quantifying and qualifying uncertainty.

        • RandomThoughts3 2 hours ago

          I think both are wrong.

          Science as a word encompasses a broad variety of definitions and practices. The philosophical question of the relationship of empiricism to truth (because fundamentally the question has more to do with the relationship between observations and deductions than with science itself) is a complex one, and overall of little interest to actual practitioners of science. It can’t easily be subsumed in broad aphorisms.

          I would hazard that most modern sciences - and physics especially - are nowadays mostly concerned about building models and the resulting predictions. The question then often becomes: does the tool I have built allow me to correctly predict how observations will unfold? The answer to which is often more shades of grey than simply yes or no.

      • jacobsimon 2 hours ago

        I think you’re missing the broader point, which is that there is a lot to computer science outside of the purely mathematical formalism.

        For example, distributed systems and networking are more like a physical science because they seek to make generalized observations and theorems about real-world systems.

        The author’s last point around complexity theory also resonates because it demonstrates the value of designing experiments with real-world conditions like computing hardware speed and input sizes.

  • youoy 5 hours ago

    > Thinking of theoretical computer science as a branch of mathematics is harmful to the discipline.

    Maybe the issue is how he thinks about mathematics... He quotes von Neumann saying:

    > We should remember the warning of John von Neumann, one of the greatest mathematicians and computer scientists of the 20th century, regarding the danger of mathematics driven solely by internal esthetics: “There is a grave danger that the subject will develop along the line of least resistance.”

    But for example, there is an article in Quanta [0] about a recent proof in Number Theory (you cannot get more "mathematical" than that), and the guy who proved it said:

    > “Mathematics is not just about proving theorems — it’s about a way to interact with reality, maybe.”

    Which is in line with von Neumann's quote, and with the spirit of what the author is saying.

    So maybe a more accurate subtitle would be:

    "Thinking of theoretical computer science as a mathematical formal system is harmful to the discipline."

    [0] https://www.quantamagazine.org/big-advance-on-simple-soundin...

  • n00b101 4 hours ago

    Ah, Professor Vardi, a fascinating case study in our department. His devotion to the 'science' in computer science is truly something to behold. It's not every day you see someone try to reconcile Turing machines with the second law of thermodynamics ...

    Dr. Vardi's Second Law of Thermodynamics for boolean SAT and SMT (Satisfiability Modulo Theory) solvers is truly a marvel of interdisciplinary ambition. In his framework, computational entropy is said to increase with each transition of the Turing machine, as if bits themselves somehow carry thermodynamic weight. He posits that any algorithm—no matter how deterministic—gradually loses "information purity" as it executes, much like how heat dissipates in a closed system. His real stroke of genius lies in the idea that halting problems are not just undecidable, but thermodynamically unstable. According to Dr. Vardi, attempts to force a Turing machine into solving such problems inevitably lead to an "entropy singularity," where the machine's configuration becomes so probabilistically diffuse that it approaches the heat death of computation. This, he claims, is why brute-force methods become inefficient: they aren’t just computationally expensive, they are thermodynamically costly as well. Of course, there are skeptics who suggest that his theory might just be an elaborate metaphor stretched to breaking point—after all, it’s unclear if bits decay in quite the same way as particles in a particle accelerator.

    • youoy 3 hours ago

      I have to say that reading this from a non-expert point of view leaves me wondering whether this comment is true or just the result of some elaborate ChatGPT prompt.

    • calf 2 hours ago

      Did Vardi write about this? I only found some other authors; is it possible you are referring to Yuri Manin instead?

      From https://arxiv.org/pdf/1010.2067 "Manin and Marcolli [20] derived similar results in a broader context and studied phase transitions in those systems. Manin [18, 19] also outlined an ambitious program to treat the infinite runtimes one finds in undecidable problems as singularities to be removed through the process of renormalization. In a manner reminiscent of hunting for the proper definition of the “one-element field” F_un, he collected ideas from many different places and considered how they all touch on this central theme. While he mentioned a runtime cutoff as being analogous to an energy cutoff, the renormalizations he presented are uncomputable. In this paper, we take the log of the runtime as being analogous to the energy; the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit."

    • booleandilemma 10 minutes ago

      Why do you write like this?

  • epgui 34 minutes ago

    I’m just a biochemist/engineer who is passionate about other sciences, but IMHO his understanding of mathematics is what is counter-productive. It’s a bizarrely anti-intellectual take from someone who is clearly an intellectual.

  • calf 5 hours ago

    I met the author informally once but didn't know "who" he was, a famous academic scientist and professor etc. He read my paper and caught a math typo. :)

    Regarding the article, I feel like there's some context missing. Is this part of some ongoing debate about TCS? The piece ends abruptly.

    One debate that comes to mind is the recent breakthroughs in AI, which have caught theorists flat-footed. Sanjeev Arora and others openly admit we don't have a good theoretical understanding of the empirical successes of deep learning: how models are able to do the things they do, with emergent phenomena, scaling effects, etc.

    Scott Aaronson has also suggested that theoretical CS should be seen as analogous to theoretical physics.

    On the other hand, I don't know if the argument could be abused to dismiss things like P vs NP as having no realistic scientific value. That would be the flip side of Vardi's argument. I do vaguely recall Aaronson, in one of his blog posts or PDF essays*, giving a more subtle argument about why P vs NP is central to computer science anyway, and I think that would be a slightly different position than either Vardi's or his reading of Wigderson's here.

    *See the first several subsections about common Objections in pp 6-7 in: https://www.scottaaronson.com/papers/pnp.pdf

  • PeterStuer 5 hours ago

    For nature we have many models, physics, chemistry, biology, ..., depending on our needs. None of them are more wrong, but they operate at different scales and are useful in different contexts.

    My gripe with theoretical computer science was that it felt like a Newtonian-physics-level model of digital processes, while an equivalent of biology-level models would be needed for/suited to most "real-life computing".

    • pmontra 3 hours ago

      A biology-level model for computing would be some billions of very small CPU cores, each doing its own thing, interacting with the others (actor model?) and yielding a collective result by averaging their states (by some definition of average). It could be OK for some problems (simulations of physical systems?) but not so much for others (placing a window in the middle of the screen).

      By the way, a lot of small CPU cores is what we use inside graphic cards. However they are not actors. They are very deterministic. The Newtonian physics model.
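      The averaging idea sketched above can be mocked up in a few lines (a toy gossip-averaging simulation; `run_cells` and all of its parameters are invented for illustration, not drawn from any real model):

```python
import random

def run_cells(n=50, steps=5000, seed=1):
    """Toy 'biology-level' model: n cells start with random states and
    repeatedly average with a randomly chosen partner (gossip style).
    No cell is in charge; the collective drifts toward a shared value."""
    rng = random.Random(seed)
    states = [rng.random() for _ in range(n)]
    for _ in range(steps):
        # Pick two cells at random and let them agree on their average.
        i, j = rng.randrange(n), rng.randrange(n)
        avg = (states[i] + states[j]) / 2.0
        states[i] = states[j] = avg
    return states
```

      Each pairwise average preserves the total, so the collective mean stays (almost exactly) fixed while the spread between cells shrinks toward zero, which is roughly the "collective result by averaging" behaviour described above.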

      • GenericCanadian 2 hours ago

        Sounds a lot like what Wolfram is working on with categorizing cellular automata. It strikes me that a lot of his work is very biological in its search for axioms through experimentation.

      • PeterStuer 3 hours ago

        How about we start by introducing time, and interactions with things exterior to the modeled subsystem?

    • psychoslave 4 hours ago

      Well, that’s basically what we have with applications, isn’t it? It’s not like we need to think about each bit-trick we rely on when making a video call in parallel with a pair-programming session over whatever IDE of the day we might use.

      • PeterStuer 3 hours ago

        But how is that supported by TCS?

  • peterkos 4 hours ago

    I like thinking of CS theory as "math, with more hand-waving". Or, I can't remember where I read it, but something about CS being the mathematics of asymptotes.

  • sampo 3 hours ago

    > Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded.

    Still waiting for theoretical physics to discard string theory, as it fails in predicting anything.

    • coliveira an hour ago

      In this case I believe it will take a whole generation. People will have to retire before we see the mainstream openly agree that it was a failed attempt.

  • hrkucuk 5 hours ago

    >“There is a grave danger that the subject will develop along the line of least resistance.”

    What does von Neumann mean here? Why is it bad that the subject develops along the line of least resistance? Does von Neumann advise that working on "harder" problems is more beneficial for TCS? Could one not argue that we should be solving the low-hanging fruit first?

    I am not sure I am understanding von Neumann's quote or this article properly. I would love to hear a simpler explanation (I am a new BSc CS graduate).

    • youoy 4 hours ago

      This is how I see it: you can work on mathematics at two different levels. There is a more direct level, which is the formal system, and a more indirect level, which is the intuitions behind the formal system.

      If you look at new theories (say you are starting to study topology, or group theory), they start from definitions/axioms that seem to come from "nowhere", but are in fact the product of working out and perfecting a language for the intuitions we have in mind. Once we settle on the correct definitions, a lot of consequences and new results come from interacting almost entirely with the formal system.

      Interacting with the formal system is the path of least resistance.

      The power of mathematics is that once you figure out a correct formalization of the intuitions, using just the formal system allows you to get a lot of information. That is why sometimes people identify mathematics with just the formal system.

      • tightbookkeeper 3 hours ago

        To take it one step further: mathematical ideas which do not fit nicely within one of the highly developed theories then feel underbaked and less attractive to other mathematicians.

        Knuth's thinking about algorithms led to all these research questions in combinatorics, which have turned out to be very interesting, but whose results are much messier and more disjointed.

    • cubefox an hour ago

      My guess is that von Neumann worried that it develops in directions that people happen to find interesting, instead of in directions that are important by some objective external measure.

  • cjfd 5 hours ago

    He says he identifies a bit with physics. That is what I studied when I was young. Yes, physics attempts to concern itself with the real world. For instance, nobody in their right mind would have anything to do with quantum mechanics if it wasn't how the real world operated. In that sense it seems to me that computer science is much closer to mathematics. The computer is an artificial system constructed to be relatively easy to reason about.

    • eru 4 hours ago

      Some people really like the math of quantum mechanics for its own sake.

      (See also how (much of) the math for General Relativity was developed without any application in mind.)

      • cjfd 3 hours ago

        'Liking for its own sake' is not quite enough. The first question is whether quantum mechanics would have been invented in the first place if it wasn't for experiments that showed that it was necessary. The second question is even if it was invented, would anyone bother to study anything beyond a single particle wave function? A wave function by itself is not yet quantum mechanics, there is quite a bit of wave mechanics in classical physics. I am quite sure that if quantum mechanics was not necessary nobody would attempt to say anything about a quantum mechanical carbon atom. I.e., a quantum mechanical six body problem. Let alone quantum field theory.

        General Relativity is much more natural than quantum mechanics. It was mostly created from theoretical motivation. People were dragged toward quantum mechanics kicking and screaming, and it took about 30 years to develop.

    • globular-toast 4 hours ago

      The Turing machine is an artificial system constructed to be easy to reason about. The computer on my desk is most certainly not!

      • RandomThoughts3 2 hours ago

        The Turing machine is a realisation of what a computing machine could be in a logical sense. It’s not so much constructed to be easy to reason about as to be the simplest form of what such a machine could be.

        The fact that the functions computable with such a machine are equivalent to the functions computable in the lambda calculus and to Herbrand general recursive functions is the remarkable result.

        The fact that it can somehow be linked to actual computing machines outside of logic is merely a happy accident.

        Having said that, you could think I disagree with Vardi, but the truth is: I think the point he raises is just void of substance. It is only of interest to people who like university politics and how departments hire. It has no impact on the field whatsoever. Why does it matter what is or isn’t semantically TCS, and whether or not it is mathematics? The problems will still be there all the same.

      • cjfd 3 hours ago

        Well, it depends. Have you ever considered writing a word processor in a Turing machine?