7 comments

  • youoy 3 hours ago

    In causal inference we usually call the counterfactual the unobserved outcome under the alternative treatment, and try to understand the difference in effect between the two treatments.

    Here they go the other way around: they look for how the classifier's input variables need to change so that it produces the outcome they want, and call that change an explanation, hence the name "counterfactual explanation". I don't like it but ok...

    Apart from that, I am not sure how meaningful the closest point to the decision boundary is. I have played with a ReLU ANN trained to classify t-shirts of different colours: I fixed the trained parameters and optimized the input to get a green t-shirt classified as blue, and the image of the "new" t-shirt still looks green to me (but not to the ANN).
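    The experiment described above can be sketched roughly like this: freeze the weights of a small ReLU network and do gradient ascent on the input to raise the margin of a target class. This is a minimal toy reconstruction, not the commenter's actual code; the network, weights, and feature vector are all made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "trained" 1-hidden-layer ReLU classifier: weights are fixed
    # (illustrative stand-ins for trained parameters)
    W1 = rng.normal(size=(8, 3)); b1 = rng.normal(size=8)
    W2 = rng.normal(size=(2, 8)); b2 = rng.normal(size=2)

    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
        return W2 @ h + b2, h              # logits, hidden activations

    def grad_margin(x, target):
        """Gradient of (logit[target] - logit[other]) w.r.t. the input x."""
        _, h = forward(x)
        other = 1 - target
        dh = (W2[target] - W2[other]) * (h > 0)  # backprop through ReLU mask
        return W1.T @ dh

    x0 = np.array([0.1, 0.8, 0.1])  # hypothetical "green" feature vector
    target = 0                      # class we want the frozen net to output
    logits0, _ = forward(x0)

    x = x0.copy()
    for _ in range(200):            # gradient ascent on the class margin
        x = x + 0.05 * grad_margin(x, target)

    logits, _ = forward(x)
    # The margin toward `target` grows, yet x can remain close to x0:
    # the net changes its mind while the input still "looks green".
    ```

    The point of the sketch is that the optimization only cares about the net's margin, so the resulting "counterfactual" input can be perceptually almost unchanged, which is exactly the concern raised above.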

    So I would be very careful with this:

    > For example, in the setting of mortgage applications, a customer may request a counterfactual explanation to improve their profile and reapply at a future moment. The counterfactual explanation may suggest that a successful candidature would need increases in financial features, such as salary, savings, or credit score. In this setting, providing minimal counterfactuals promotes financial inclusion and decreases the burden of reapplication for consumers.

    • mjburgess an hour ago

      "Counterfactuals" in XAI aren't counterfactuals. Nor are "explanations" explanations. The whole field is basically, "you dont like these associative stats? here, what about these other ones?"

      I oscillate between reading computer scientists here as liars (using this language to disguise that they cannot offer explanations, etc.), morons (who do not know the basic meaning of the terms they use) or pseudoscientists (people with no care to know, and no interest in honest communication).

      In the end, the answer is perhaps much more disappointing: they're engineers with little interest in, or training around, anything beyond the most naive operational definition that suits their interest at any given moment.

      However, we should note how duplicitous this becomes when aligned with hype, regulatory demands, and funding models.

  • lapcat 2 hours ago

    This submission title has been editorialized, contrary to the HN guidelines. The actual article title is "Polyhedral Complex Informed Counterfactual Explanations".

  • bilekas 3 hours ago

    Not related to the content of the paper (for reasons outlined below), but what is this?

    > Polyhedral geometry can be used to shed light on the behaviour of piecewise linear neural networks, such as ReLU-based architectures. Counterfactual explanations are a popular class of methods for examining model behaviour by comparing a query to the closest point with a different label, subject to constraints.

    I swear my brain is degrading daily, or every new "paper" that comes out is trying to find a way to be as obtuse as possible.

    I could swear "Counterfactual Explanations" is an oxymoron.

    > Minimality Guarantees and Targeting Desiderata

    Really?

    I didn't think it would be J.P. Morgan Chase that would break my ability to follow a sentence today.

    • afiori 3 hours ago

      > I could swear "Counterfactual Explanations" is an oxymoron.

      I suspect that by Counterfactual Explanation they mean something like "property A must be X otherwise constraint B is broken".

  • sieste 6 hours ago

    Calling nearest input points that are mapped to a different label "minimal counterfactual explanations" is quite the exaggeration IMO. I'm less inclined to keep reading the article.

    • OgsyedIE 6 hours ago

      Wouldn't it depend on whether their "decision boundary surface" actually has the properties its name implies?