A friend of mine who might be me came up with a similar model during an acid trip a few years ago.
Imagine your mind's eye traveling down a hallway of pictures. Some are memories, some are what your senses are actually experiencing, and some are products of your imagination. There are occasionally branch points where you can choose a path depending on which pictures you see down each hallway. Some pictures are scary, some are enticing. Your mood or goal influences which path you'll choose. Many of these pictures are pictures of you in various situations. They could be as simple as you at the top of the stairs you're currently climbing, or as intricate and abstract as you at the pinnacle of your career 15 years from now. Once you get to a picture that's only slightly different from your current situation, your subconscious mind can choose to make it happen -- moving muscles to take the next step, speaking the next word in a sentence shown in the image, etc.
Consciousness is the path you take through this network of images. In real life it happens so fast you don't experience the choosing of paths, and it's not really a visual hallway but rather a fusion of senses and emotions.
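The model above can be sketched as a toy graph traversal. Everything here is a hypothetical illustration (the class and function names are mine, not from any paper): each "picture" is a node with a valence, forks are branch lists, mood biases which branch wins, and the path taken stands in for the stream of consciousness.

```python
# Toy sketch of the "hallway of pictures" model described above.
# All names are hypothetical illustrations, not from the paper.
# Each "picture" is an imagined, remembered, or perceived state,
# with a valence (scary vs. enticing) and links to reachable next states.

class Picture:
    def __init__(self, label, valence, branches=None):
        self.label = label              # description of the imagined state
        self.valence = valence          # -1.0 (scary) .. +1.0 (enticing)
        self.branches = branches or []  # pictures visible down each hallway

def traverse(start, mood=0.0, max_steps=10):
    """Follow the most appealing branch at each fork.

    `mood` biases the choice: a positive mood discounts scary pictures,
    a negative mood amplifies them. Returns the path taken -- the
    "consciousness is the path" idea from the comment above.
    """
    path = [start.label]
    node = start
    for _ in range(max_steps):
        if not node.branches:
            break  # close enough to act: hand off to the "subconscious"
        node = max(node.branches, key=lambda p: p.valence + mood)
        path.append(node.label)
    return path

# Usage: a trivial one-fork hallway.
stairs = Picture("at the top of the stairs", 0.4)
trip = Picture("tripping on the last step", -0.8)
now = Picture("climbing the stairs", 0.0, [stairs, trip])
print(traverse(now))  # ['climbing the stairs', 'at the top of the stairs']
```

A real system would of course be massively parallel and multimodal rather than a greedy walk over labeled nodes; the sketch only shows the branch-choice-plus-execution loop the comment describes.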
You’re closer than you think. Replace “hallway of pictures” with “predictive coding across the common core network,” and you’ve got 80% of my framework. The other 20% is what makes it falsifiable.
I skimmed the paper; it has a lot of issues. First, it doesn't attempt to frame the theory within our current best understanding of the problem of consciousness. It doesn't say up front whether it's attempting to explain qualia or just give a new understanding of the functional aspects of consciousness.
The next issue is that it doesn't do much explaining. If it is attempting to explain qualia, it needs to explain how the functional descriptions on offer help explain why there is a qualitative feel associated with conscious states. If it's not attempting to explain qualia, then it needs to clearly identify the functional problem it is proposing to solve, then explain how the theory solves it. Many homegrown theories mistake description for explanation. Just giving existing functions a new name in the guise of a new framework doesn't explain anything. A reframing can be useful, but it should be made explicit that the theory is a reframing rather than an explanation, and what benefits this framing gives for solving various problems related to consciousness.
Another issue is that it spends too much time talking about implications and not enough time just communicating the core ideas. Each major section has like a paragraph or two. This isn't enough for a proper introduction to the section, let alone a sufficient description of the theory.
> "A reframing can be useful, but it should be made explicit that the theory is a reframing rather than an explanation, and what benefits this framing gives"
Fair critique — and I’ll own that the paper emphasizes reframing more than exhaustive exposition. To be precise:
• I am not claiming to solve the Hard Problem of qualia. I position qualia as an evolved data format, a functional necessity for navigating a deterministic universe, not as a metaphysical mystery.
• What the paper does aim to explain is the predictive, timeline-simulating function of consciousness, and how errors in this function (e.g. Simulation Misfiling) may map to psychiatric conditions.
• The “implications” section is deliberately forward-looking, but I agree the exposition could be expanded. That’s the next step — this is a framework, not the final word.
If nothing else, I hope the paper makes explicit that reframing consciousness as a predictive timeline simulator is testable, bridges physics + neuroscience, and invites experiments rather than mysticism.
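Since "Simulation Misfiling" is named above as the proposed bridge to psychiatric conditions, here is one way the error could be modeled as a toy: each simulated frame carries a source tag, and misfiling flips that tag so an internally generated timeline gets filed as sensory input. This is my own hypothetical reading (the names and mechanism are assumptions, not the paper's actual formalism), loosely echoing source-monitoring accounts of hallucination.

```python
import random
from dataclasses import dataclass

# Toy illustration of "Simulation Misfiling" as a source-tagging error.
# Purely hypothetical names; the paper's actual mechanism may differ.

@dataclass
class Frame:
    content: str
    source: str  # "perceived" or "imagined"

def misfile(frame, error_rate=0.0, rng=None):
    """Return the frame, possibly with its source tag flipped.

    A healthy system keeps error_rate near zero; in this sketch,
    a nonzero rate means imagined frames can be filed as perceived --
    roughly, an internally generated timeline experienced as real.
    """
    rng = rng or random.Random(0)
    if rng.random() < error_rate:
        flipped = "perceived" if frame.source == "imagined" else "imagined"
        return Frame(frame.content, flipped)
    return frame

voice = Frame("a voice commenting on my actions", "imagined")
print(misfile(voice, error_rate=0.0).source)  # imagined: correctly filed
print(misfile(voice, error_rate=1.0).source)  # perceived: misfiled as sensory
```

If the framework's schizophrenia hypothesis is stated in roughly these terms, the testable prediction would be a measurable elevation of this tag-flip rate in affected populations, which is what would separate it from a pure reframing.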
I’ve written a white paper proposing the Predictive Timeline Simulation (PTS) Framework, which treats consciousness as an evolved simulation engine. It connects neuroscience, physics, and philosophy, and suggests both a testable schizophrenia hypothesis and a design principle for AGI. I’d welcome critical feedback from the HN community.