Faster Than Dijkstra?

(systemsapproach.org)

39 points | by drbruced 4 days ago

12 comments

  • shiandow 7 minutes ago

    Interestingly, sorting is O(N) for a surprisingly large class of datatypes: anything that behaves well under lexicographic ordering, really.

    Supposing one uses a 'trie' as a priority queue, the inserts and pops are effectively constant.
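    The trie-as-priority-queue idea can be sketched concretely (my own illustration of the technique, not code from the thread or the article): over keys of a fixed bit-width W, a binary trie supports insert and pop-min in at most W node visits each, independent of how many keys are stored.

```python
# A toy trie-based priority queue over bounded non-negative integers.
# With a fixed bit-width W (here 16), insert and pop_min each touch at
# most W trie nodes, so their cost is independent of how many keys are
# currently stored -- "effectively constant" in n.
class TriePQ:
    def __init__(self, bits=16):
        self.bits = bits   # all keys must fit in this many bits
        self.root = {}     # nested dicts; leaves map last bit -> count

    def insert(self, key):
        node = self.root
        for i in range(self.bits - 1, 0, -1):
            node = node.setdefault((key >> i) & 1, {})
        node[key & 1] = node.get(key & 1, 0) + 1

    def pop_min(self):
        if not self.root:
            raise IndexError("pop from empty queue")
        node, key, path = self.root, 0, []
        # Walk down, always preferring the 0-branch: that finds the minimum.
        for _ in range(self.bits):
            bit = 0 if 0 in node else 1
            key = (key << 1) | bit
            path.append((node, bit))
            node = node[bit]
        leaf, last = path[-1]
        leaf[last] -= 1
        if leaf[last] == 0:
            del leaf[last]
        # Prune branches that just became empty so future walks stay valid.
        for parent, bit in reversed(path[:-1]):
            if not parent[bit]:
                del parent[bit]
            else:
                break
        return key

pq = TriePQ()
for k in [5, 3, 9, 3]:
    pq.insert(k)
print([pq.pop_min() for _ in range(4)])  # -> [3, 3, 5, 9]
```

    Note the caveat raised in the reply below the original comment still applies: the "constant" here is the key width W, which only stays constant for a finite alphabet and bounded key length.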

    • qsort a few seconds ago

      Only if you assume a finite alphabet and bounded length. Relax either and you're back to O(n log n) for a fully general solution. Examples of both: tuples and strings.

      (There's also the problem of how you define your computational model. You can do better than O(n log n) in transdichotomous models. I'm assuming the hand-wavy, naive model the average algorithms class goes along with.)

  • alias_neo an hour ago

    Déjà vu.

    I'm sure I read this article a few days ago, word-for-word, but not on the site linked in the OP. It stood out because when it mentioned textbooks and said "including ours", I looked at the site and thought to myself, "they do textbooks?"

  • jason_s an hour ago

    Intriguing article. Sometimes practical issues override theoretical ones, and it would be interesting to see which one dominates in networking.

    (side note: does anyone else get thrown off by the Epilogue font? It looks very wide in some cases and very narrow in others... it makes me want to override it with Stylus, except my employer has blocked browser extensions for security reasons, which raises the question of why I'm even reading the article on this computer....)

  • qsort an hour ago

    I struggle to see the point. The paper in question doesn't claim to be practically faster or to want to "replace" Dijkstra; they are just saying "we got a better big-O", and I don't see any reason to doubt that.

    It's actually common for algorithms with lower asymptotic complexity to be worse in practice; a classic example is matrix multiplication.

    Also please, please, can we stop with the "eww, math" reactions?

    > The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)

    I'm sure the author is just exaggerating; he's clearly very competent. But it's a sentence with the vibes of "I can't do 7x8 without a calculator."

    • yborg an hour ago

      The Quanta article on the paper was considerably more breathless in describing a fine piece of work in mathematics. The author here points out that one of the things that makes Dijkstra's result iconic is that it could be used practically in a straightforward way. As an engineer, beautiful mathematics is useless if I can't convert it to running code.

    • shermantanktop an hour ago

      I read it as a musing on the folly of improvements that don’t deliver benefits within the practical bounds of actual problems. Which is a lesson seen everywhere in physical systems and manufacturing. Is it an amazing insight? No, but it’s a lesson that is relearned by everyone several times.

    • gowld an hour ago

      More important is that the new algorithm has a multiplicative factor in m (edges), so it's only efficient for extremely sparse graphs.

      If m > n (log n)^{1/3}, then this algorithm is slower.

      For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
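      That threshold can be sanity-checked with a quick back-of-envelope calculation (my own arithmetic, not from the thread: it treats Fibonacci-heap Dijkstra's O(m + n log n) as the baseline, ignores all constant factors, and the exact figure shifts with the choice of log base):

```python
import math

# Dijkstra with a Fibonacci heap: O(m + n log n).
# The new bound: O(m * (log n)^(2/3)).
# With m = d * n (average degree d), the two match roughly when
#   d * (log n)^(2/3) = d + log n
# i.e.
#   d = log n / ((log n)^(2/3) - 1)
n = 1_000_000
log_n = math.log2(n)  # log base only changes the answer by a constant
d = log_n / (log_n ** (2 / 3) - 1)
print(f"crossover average degree for n = {n}: {d:.2f}")  # roughly 3.1
```

      The result lands in the same ballpark as the ~3.5 figure above, so for any graph denser than a few edges per node the asymptotic advantage evaporates.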

      • bee_rider an hour ago

        Yeah, just based on this article that really stood out. It seems to be for a different use-case than Dijkstra's. An average degree of 3.5 seems like an extremely practical and useful case in real life; I just don't see any reason to pit it and Dijkstra's against each other in a head-to-head comparison.

      • usrusr an hour ago

        "Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)

    • mightyham an hour ago

      > I struggle to see the point. The paper in question doesn't claim to be practically faster...

      I struggle to see the point of your comment. The blog post in question does not say that the paper in question claims to be faster in practice; it is simply examining whether the new algorithm has any application in network routing. What is wrong with that?