Falsify: Hypothesis-Inspired Shrinking for Haskell (2023)

(well-typed.com)

44 points | by birdculture 4 hours ago

6 comments

  • mjw1007 an hour ago

    I've found in practice that shrinking to get the "smallest amount of detail" is often unhelpful.

    Suppose I have a function which takes four string parameters, and I have a bug which means it crashes if the third is empty.

    I'd rather see this in the failure report:

    ("ldiuhuh!skdfh", "nd#lkgjdflkgdfg", "", "dc9ofugdl ifugidlugfoidufog")

    than this:

    ("", "", "", "")

  • sshine 3 hours ago

    How do Hedgehog and Hypothesis differ in their shrinking strategies?

    The article uses the words "integrated" vs. "internal" shrinking.

    > the raison d’être of internal shrinking: it doesn’t matter that we cannot shrink the two generators independently, because we are not shrinking generators! Instead, we just shrink the samples that feed into those generators.

    Besides that, it seems like falsify has many of the same features, such as choice of ranges and distributions.
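
    The idea in that quote can be sketched in a toy model (this is a hypothetical illustration, not falsify's actual API): a generator is a function consuming a stream of raw samples, and the shrinker only ever edits that sample stream, re-running the generator to see whether the property still fails.

    ```haskell
    import Data.Word (Word64)

    -- Toy model: a generator consumes a stream of raw samples
    -- (cf. the article's `Parser` newtype over [Word]).
    newtype Gen a = Gen { runGen :: [Word64] -> (a, [Word64]) }

    genWord :: Gen Word64
    genWord = Gen $ \s -> case s of
      []       -> (0, [])   -- exhausted stream yields the minimal sample
      (x : xs) -> (x, xs)

    -- A dependent/sequential generator: no need to shrink its parts
    -- independently, because we never shrink generators at all.
    genPair :: Gen (Word64, Word64)
    genPair = Gen $ \s ->
      let (a, s1) = runGen genWord s
          (b, s2) = runGen genWord s1
      in ((a, b), s2)

    -- Internal shrinking: greedily lower individual samples toward zero,
    -- keeping a candidate stream whenever the property still fails on
    -- the value the generator produces from it.
    shrinkSamples :: ([Word64] -> Bool) -> [Word64] -> [Word64]
    shrinkSamples failing = go
      where
        go s = case candidates s of
          (s' : _) -> go s'
          []       -> s
        candidates s =
          [ s' | i  <- [0 .. length s - 1]
               , x' <- shrinkWord (s !! i)
               , let s' = take i s ++ [x'] ++ drop (i + 1) s
               , failing s' ]
        shrinkWord 0 = []
        shrinkWord x = [0, x `div` 2, x - 1]

    main :: IO ()
    main = do
      -- Property under test: first component stays below 10
      -- (so any stream whose first sample is >= 10 is a counterexample).
      let failing s = fst (fst (runGen genPair s)) >= 10
          shrunk    = shrinkSamples failing [937, 412]
      print (fst (runGen genPair shrunk))  -- prints (10,0)
    ```

    The counterexample shrinks to the boundary (10,0) purely by editing the sample stream, which is why monadic/dependent generators pose no special problem in this style.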

  • shae 44 minutes ago

    I care about the edge where "this value fails, one value over succeeds". I wish shrinking were fast enough to tell me if there are multiple edges between those values.

  • thesz 2 hours ago

    This is fascinating!

    If I understand correctly, they approximate language of inputs of a function to discover minimal (in some sense, like "shortest description length") inputs that violate relations between inputs and outputs of a function under scrutiny.

  • evertedsphere 2 hours ago

        newtype Parser a = Parser ([Word] -> (a, [Word])
    
    missing a paren here

  • moomin an hour ago

    I’m honestly completely failing to understand the basic idea here. What does this look like for generating and shrinking random strings?