In one of my databases classes, we had to use Perl (cursed language) and B-trees to implement a database from scratch. We tried building a "universal relation" system where any table could join with any other table, which Graham's paper proved is NP-complete.
# Join paths multiply exponentially when Sales department exists in multiple buildings
%employee = ('1' => { name => 'Alice', dept => 'Sales', building => 'West', project => 'Alpha' });
# Each B-tree indexes one specific relationship: employee_id → dept_id → building_id
%employee = ('1' => { name => 'Alice', dept_id => '1' });
%department = ('1' => { name => 'Sales', building_id => '1' });
%building = ('1' => { name => 'West' });
sub get_employee_location {
    my ($emp_id) = @_;
    return $building{ $department{ $employee{$emp_id}{dept_id} }{building_id} }{name};
}
The normalized approach means each B-tree has a clear purpose and lookups follow a defined path. The "Universal relation" seems intuitive, but it's definitely computationally expensive.
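To make the exponential-join-path point concrete, here's a minimal sketch (hypothetical data and table names, not from the class project) where two perfectly legitimate join paths disagree about Alice's building, which is exactly the situation a single wide universal-relation row can't represent:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical: Alice's project runs in one building, but her
# department is headquartered in another.
my %employee_project = ( '1'     => { name => 'Alice', project => 'Alpha' } );
my %employee_dept    = ( '1'     => { dept => 'Sales' } );
my %project_site     = ( 'Alpha' => { building => 'East' } );
my %dept_site        = ( 'Sales' => { building => 'West' } );

# Path A: employee -> project -> building
my $via_project = $project_site{ $employee_project{'1'}{project} }{building};

# Path B: employee -> dept -> building
my $via_dept = $dept_site{ $employee_dept{'1'}{dept} }{building};

print "via project: $via_project\n";  # East
print "via dept:    $via_dept\n";     # West
```

A universal relation would have to pick one value for Alice's "building" column, silently breaking one of the two join paths; the normalized B-trees above sidestep this because each index answers exactly one question, so there is only one path to follow.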
As far as I can see, a "universal relation" system is Prolog and the like.
Prolog's resolution engine is more about logical inference - it will happily backtrack through different paths to find solutions.
What Graham proved specifically was about relational databases: if you try to maintain consistent joins across all possible paths (aka "join consistency"), it becomes an NP-complete problem. The "Universal Relation Problem" requires ALL possible join paths to give consistent results.
Prolog doesn't have to verify consistency across an exponential number of join paths, so I fail to see how it is a "universal relation" system.
This concept of a universal relation strongly reminds me of ideas from the Datomic database, where all entities are described as a changelog that fits in a table with columns "entity-attribute-value-timestamp-operation-type".
Interesting connection, but Datomic's EAV (Entity-Attribute-Value) model solves a different problem than Graham's universal relation. The universal relation problem is about the NP-completeness of maintaining consistency across all possible join paths: every path has to give the same answer.
Datomic's approach is more like a temporal log of facts; it's not trying to maintain universal join consistency across all possible paths. The EAV model lets you treat everything as "facts over time" (Datomic's strength, yes) instead of trying to make every possible join path consistent.
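A rough sketch of that "facts over time" log, assuming a simplified Datomic-style datom of [entity, attribute, value, timestamp, op] (real Datomic datoms carry a transaction id rather than a raw timestamp, and the entity/attribute names here are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each fact is an append-only tuple; current state is whatever the
# surviving assertions say at a given point in time.
my @datoms = (
    [ 'emp-1', ':dept',     'Sales', 100, 'assert'  ],
    [ 'emp-1', ':building', 'West',  100, 'assert'  ],
    [ 'emp-1', ':building', 'West',  200, 'retract' ],
    [ 'emp-1', ':building', 'East',  200, 'assert'  ],
);

# Replay the log up to timestamp $t to reconstruct the view as-of $t.
sub as_of {
    my ($t) = @_;
    my %view;
    for my $d (@datoms) {
        my ($e, $a, $v, $ts, $op) = @$d;
        next if $ts > $t;
        if ($op eq 'assert') {
            $view{$e}{$a} = $v;
        } elsif (($view{$e}{$a} // '') eq $v) {
            delete $view{$e}{$a};
        }
    }
    return \%view;
}

print as_of(100)->{'emp-1'}{':building'}, "\n";  # West
print as_of(200)->{'emp-1'}{':building'}, "\n";  # East
```

Note there's no join machinery here at all: the log answers "what was true when", not "do all join paths agree", which is why it sidesteps Graham's consistency problem rather than solving it.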
Based on the host name and the wording in the first paragraph (actually the whole article), I was expecting something on the Mandela effect. No, the author's name is Max Bernstein.
UofT mentioned. We are massive!