SMHasher/Murmurhash author here - I don't see anything fundamentally wrong with this hash function, it uses basically the same operations as the Murmur family (and a lot of other hashes at this point).
The handling of the "tail" of the key (the last < 32 bytes) is slightly odd (the "if (l & 1) { mix 1 byte of key }" happens before 8-byte chunks and 2-byte chunks), but if it passes SMHasher it should be fine for general use.
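For readers who haven't looked at the code, a minimal sketch of the tail ordering being described, with placeholder names (mix, the constant, the memcpy load) rather than the post's actual code; the point is just that the lone byte is folded in before the wider chunks:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative only: mix() and its constant are placeholders, not the
 * functions from the post. */
static uint64_t mix(uint64_t h, uint64_t v) {
    h ^= v;
    h *= 0x9E3779B97F4A7C15ULL;   /* arbitrary odd constant for the sketch */
    return h ^ (h >> 32);
}

/* The ordering described above: the lone byte (l & 1) is mixed first,
 * then the remaining 8-byte chunks, then 2-byte chunks. */
static uint64_t tail(uint64_t h, const uint8_t *p, size_t l) {
    if (l & 1) h = mix(h, p[--l]);            /* odd byte first */
    while (l >= 8) {                          /* then 8-byte chunks */
        uint64_t v;
        memcpy(&v, p, 8);                     /* placeholder unaligned load */
        h = mix(h, v);
        p += 8; l -= 8;
    }
    while (l >= 2) {                          /* then 2-byte chunks */
        h = mix(h, (uint64_t)p[0] | ((uint64_t)p[1] << 8));
        p += 2; l -= 2;
    }
    return h;
}
```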
> if it passes SMHasher it should be fine for general use.
The author of the post writes:
> I kept making changes until the tests passed.
If SMHasher has become a target, shouldn't it be considered a bad measure now? [1]
[1]: https://en.wikipedia.org/wiki/Goodhart%27s_law
While the benefit of processing chunks of 8 bytes is obvious, what's the purpose of grouping those into macrogroups of 4? Does it trigger any implicit parallelism I failed to spot? Or is it just to end this phase with the 4 h[] having had the same amount of entropy, and thus starting the next one with h[0]?
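One plausible reason (my guess, not something the post states) is instruction-level parallelism: four separate accumulators form four independent dependency chains, so an out-of-order core can keep several of the long-latency multiplies in flight at once instead of serializing everything through a single h. A generic sketch of that structure, not the actual code from the post:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: each h[j] is only ever combined with itself, so the
 * four multiply chains never wait on one another. */
static void mix_blocks(uint64_t h[4], const uint8_t *p, size_t nblocks) {
    for (size_t i = 0; i < nblocks; i++, p += 32) {
        for (int j = 0; j < 4; j++) {                   /* 4 x 8 bytes = one 32-byte block */
            uint64_t v;
            memcpy(&v, p + 8 * j, 8);                   /* placeholder load */
            h[j] = (h[j] ^ v) * 0x9E3779B97F4A7C15ULL;  /* independent chain per lane */
        }
    }
}
```

Ending the bulk phase with all four h[] having absorbed the same number of bytes, as you suggest, could be a secondary motivation, but the dependency-chain argument is the usual one.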
> The way it loads the 8 bytes is also important. The correct way is to load via shift+or
> This is free of any UB, works on any alignment and on any machine regardless of its endianness. It's also fast, gcc and clang recognize this pattern and optimize it into a single mov instruction on x86 targets.
Is a single MOV instruction still fast when the 8 bytes begin on an odd address?
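For reference, the shift+or load the quote describes looks roughly like this (a generic sketch, not necessarily the exact code from the post):

```c
#include <stdint.h>

/* Assemble 8 bytes as a little-endian value.  No pointer casts, so no
 * alignment or strict-aliasing UB; gcc and clang typically fuse this into
 * a single unaligned 64-bit mov on x86-64. */
static uint64_t read_le64(const uint8_t *p) {
    return  (uint64_t)p[0]
          | (uint64_t)p[1] << 8
          | (uint64_t)p[2] << 16
          | (uint64_t)p[3] << 24
          | (uint64_t)p[4] << 32
          | (uint64_t)p[5] << 40
          | (uint64_t)p[6] << 48
          | (uint64_t)p[7] << 56;
}
```

On current x86-64 cores a plain MOV accepts any address; a misaligned load generally costs about the same as an aligned one as long as it stays within a single cache line, with a modest penalty when it straddles a cache-line (or page) boundary.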
How does it compare to CRC64, for the purpose of detecting errors?