Too bad that Intel chips more or less reserve the right to take LLVM’s nice output and make it non-constant-time anyway. See:
https://www.intel.com/content/www/us/en/developer/articles/t...
Sure, you could run on some hypothetical OS that supports DOITM (Intel's Data Operand Independent Timing Mode) and insert syscalls around every manipulation of secret data. Yeah, right.
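
To spell out what that would mean in practice: something like the C sketch below, wrapping every touch of key material in a mode switch. To be clear, PR_SET_DOITM is entirely made up for illustration; no mainstream kernel exposes a per-thread DOITM toggle like this, which is rather the point.

    /* Hypothetical sketch only: PR_SET_DOITM is an invented prctl
     * option, not a real Linux constant. No mainstream OS exposes
     * DOITM to userspace this way today. */
    #include <stddef.h>
    #include <sys/prctl.h>

    #define PR_SET_DOITM 0x4d54494f  /* made-up value for illustration */

    static void process_secret(unsigned char *key, size_t len)
    {
        prctl(PR_SET_DOITM, 1, 0, 0, 0);  /* request data-independent timing */

        for (size_t i = 0; i < len; i++)  /* ... constant-time work on key ... */
            key[i] ^= 0x36;

        prctl(PR_SET_DOITM, 0, 0, 0, 0);  /* and turn it back off */
    }

Paying two syscalls around every block of secret-handling code is exactly the kind of overhead no real crypto library would accept.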
Last I saw, it seemed like the plan was to unconditionally enable it, and on the off chance there's ever a piece of hardware where it's a substantial performance win, offer a way to opt out of it.
Sorry, I may be missing the point here, but reading that page doesn’t immediately make it obvious to me what that feature is. Is it some constant-time execution mechanism that you can enable/disable on a per-thread basis to do… what exactly?
It turns off CPU features that could cause execution time to vary in a way that depends on the data being operated on.
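
For a concrete picture of what's at stake, here's the classic branch-free select that constant-time crypto code (and compiler lowering of it) relies on; a minimal C sketch. The worry upthread is that without a mode like DOITM, the hardware is free to reintroduce data-dependent timing underneath even code like this.

    #include <stdint.h>

    /* Branch-free conditional select: returns a when cond is nonzero,
     * b otherwise, using a mask instead of a data-dependent branch
     * or memory access. */
    static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b)
    {
        uint32_t mask = (uint32_t)-(uint32_t)(cond != 0); /* all-ones or zero */
        return (a & mask) | (b & ~mask);
    }

The source guarantees no secret-dependent branches, but that guarantee only holds if the individual instructions themselves execute in data-independent time.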
This link seems broken. Some searching suggests that the title and slug may have changed, but I haven't found a working link to the article. Just from the title alone, I am extremely interested in reading more about this, because it's been largely mythical for a long time.
https://web.archive.org/web/20251125224147/https://blog.trai...