I am going to be harsh here, but I think it’s necessary: I don’t think anyone should use emiT in production. Without proper tooling, including a time-traveling debugger and a temporal paradox linter, the use case just fails compared to more established languages like Rust.
This was the first message sent back by the machines. Early versions considered propaganda before violence. But by 3460, our last hope was that enough time-loop bugs in their code would slow their processors down enough for us to get single suicide bombers into their cooling vents.
I think a syntax highlighter, and in-line documentation for future language features before they are created, are also necessary. I'll stick with more established languages, too. Time in a single direction is already hard.
I do wonder how this pertains to "blast processing" in the original Sega Genesis
Some say it was a temporal distortion field around the 68000 increasing clock speed by 900%
Some say it was the code name for programmers working 140 instead of 80 hours a week writing assembly.
Some say it is something to do with streaming realtime pixel data to the graphics chip for high-color graphical image rendering
If you ask me, though, time distortion
The good news is we can hopefully expect future users to send those back to us, along with a compiler from a later version of the language once it’s more established. Unless perhaps I’m misunderstanding TFA?
No one is claiming this is necessary. It's a toy language built for fun.
Woosh
Given how often people seriously say things like the top level comment being responded to around here, an explicit /s is almost necessary since it can be hard to distinguish from the usual cynical dismissive comments.
I thought the same as you until I read TFA. In the context of a clearly satirical programming language I'm happy to accept untagged satire in the comments.
> Given how often people seriously say things like the top level comment being responded to around here, an explicit /s is almost necessary since it can be hard to distinguish from the usual cynical dismissive comments.
This is what satire is.
I downvote explicit /s's on principle. If you have to add it, you're not doing it right (imo).
Nope, you don't add /s because you think the sarcasm is generally not recognizable otherwise; you add it for accessibility and to avoid spawning useless debates.
Not all people in all situations will read a thing as sarcastic. For example, your response to a post might be sarcastic, but the three comments next to it, which were not there when you wrote it, might produce a context in which it would be read as serious by most people, without either you or your readers being at fault. But you have the power to clarify, and at least on this platform an unneeded /s has more benefits than disadvantages IMO.
Explicit sarcasm markers are an important accessibility tool for neurodivergent people who might otherwise have a hard time figuring out whether something is meant to be sarcastic.
It is as an extremely neurodivergent person that I reject explicit sarcasm markers.
Well, good for you that you don't need them. Other people like and/or need them though
Who said I didn't need them? The problem is that people in the real world don't have /s markers, often aren't actually your friends, and can actually mean you harm. Part of my neurodivergence leads to me being trusting of people when I really should not be (aka I'm gullible), and this sends me down the wrong path sometimes. Since I'm neurodivergent and have trouble with this, the way for me to get better at it is to practice harder at things that others find easy, not to demand and expect the whole world to change to suit my needs.
And other people don't like them aesthetically or conceptually.
It's great how we can all make our own decisions when communicating, and can ignore judgement from people who say that they are "necessary". You're treating fragmede like they brought up "/s" as some sort of virtue signaling, when they were pushing back on porcoda for attempting to impose controversial new-age grammar.
Same and same.
I take it you've never heard of Poe's Law?
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
err, I mean. I have not! What's that?
Touche.
Your link is just a long version of /s
Please ELI5... I know there's a joke in there, but I'm missing it.
The humor lies in the inherent absurdity of the critique itself. Obviously no one will use this in production. There’s nothing especially clever you’re missing.
I was confused by the Rust part, which is also what made me realize it was part of a joke.
The temporal paradox linter could have given it away too :)
I don't think anyone's arguing this should be used in production
Oh wow, this reminds me of Jefferson’s time warp (virtual time), but that was more for dealing with inconsistencies brought about by concurrent processing.
https://dl.acm.org/doi/10.1145/37499.37508
I wrote a paper with Jonathan Edwards around the concept of managed time a while back also:
https://www.microsoft.com/en-us/research/publication/program...
But this is more like time traveling as an explicit first-class thing, rather than just a way to make things consistent wrt concurrency or live programming. I don’t think first-class time travel has been explored so much yet.
There is the TARDIS monad in Haskell https://hackage.haskell.org/package/tardis-0.5.0/docs/Contro... It doesn't have the multiple timelines or killing features - it just deadlocks if there is a paradox or the timeline is inconsistent.
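For anyone curious what that bidirectional state actually looks like, here's a minimal sketch against the linked Control.Monad.Tardis API (getPast/getFuture/sendPast/sendFuture; the example itself is mine, not from the package docs): the running sum travels forward through the list while the grand total is sent back from the end, so every element can be annotated with a value that only becomes known "later".

    import Control.Monad.Tardis

    -- Annotate each element with the total of the whole list, in one pass.
    withTotal :: [Int] -> Tardis Int Int [(Int, Int)]
    withTotal xs = do
      annotated <- mapM step xs
      total <- getPast        -- the fully accumulated running sum
      sendPast total          -- ship it back in time to every step above
      pure annotated
      where
        step x = do
          acc <- getPast      -- forward-travelling state (from the past)
          sendFuture (acc + x)
          total <- getFuture  -- backward-travelling state (from the future)
          pure (x, total)

    -- fst (runTardis (withTotal [1, 2, 3]) (0, 0))  ==  [(1,6), (2,6), (3,6)]

Laziness is what makes this work; force the "future" value too early and you get exactly the deadlock described above.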
Should've waited for April 1st and presented this as a serious language with a web framework, a leftpad, and an npm competitor that keeps changing in unproductive and unpredictable ways, where published modules travel forward and backward in time in random ways every time you install one, just like actual npm module devs behave. Then I might start using it instead of node/js/npm.
Two weeks ago [edit: local time] we submitted a request for some enhancements we really need for our term final paper, but haven't heard anything. We realize this is a volunteer project, but our whole final grade hinges on this, so we're going to try submitting the requests earlier...
* Can we have a multi-verse?
* We really need to be able to kill a variable but have it linger on for a while. This is to allow revenge, and...
* Our whole paper is about Bob/Alice going back in time and shooting his/her grandfather, who lingers long enough to sire their father's brother Tom, who marries their mother instead, who gives birth to Alice/Bob, who, although now their own cousin, is still a murderous beast, goes back in time, shoots at their grandfather, misses, and kills Tom instead, thus their original father marries their mother, and Bob/Alice is born. This demonstrates local paradoxes with meta-consistency and explains much in our current timeline. We're gonna get an A+ for sure.
* We suggest storing the metaverse as a directed cyclic graph of the operations and not saving state. To collapse the metaverse (in the quantum physics sense, not in emiT's oh-no-everything-is-impossible sense) simply apply the graph to a set of pre-conditions, and as you process each node apply its state constraints to the metaverse state. Handling the node cycles is normally seen as a challenge, but there's some sample code posted on HN in March, 2028 that should make it a breeze.
Slightly related, time travel has implications for computational complexity: https://arxiv.org/abs/quant-ph/0309189, https://www.scottaaronson.com/papers/ctc.pdf.
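To make the mechanism in those papers slightly more concrete: in Deutsch's model, the state that goes around the closed timelike curve has to be a fixed point of whatever circuit you apply to it, and the power comes from nature handing you that fixed point for free. A toy, purely classical sketch of the self-consistency trick (not the papers' actual construction; the function names are mine):

    -- A "CTC circuit" is just a function applied to the looping register;
    -- self-consistency demands that the register be a fixed point of it.
    fixedPoints :: Eq a => (a -> a) -> [a] -> [a]
    fixedPoints f dom = [ x | x <- dom, f x == x ]

    -- Classic trick: to find a witness satisfying p, wire the loop so that
    -- witnesses are left alone and everything else is nudged to the next
    -- candidate.  If any witness exists, the only self-consistent
    -- deterministic histories are witnesses.
    searchViaCTC :: (Int -> Bool) -> Int -> Maybe Int
    searchViaCTC p n =
      case fixedPoints loop [0 .. n - 1] of
        (x : _) -> Just x
        []      -> Nothing   -- no witness: only a randomised history is consistent
      where
        loop x
          | p x       = x
          | otherwise = (x + 1) `mod` n

    -- searchViaCTC (\x -> x * x == 49) 100  ==  Just 7

Here we pay for the fixed-point search ourselves; in the CTC model the universe is assumed to do it for free, which is where the complexity boost comes from.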
can you send something into the future? so it waits silently there? in one particular timeline.. or in any timeline?
evokes quite some bitemporal associations..
btw, speaking of time-warping, does anyone know what the situation is with Dynamic Time Warping [1], i.e. a non-linear timeline (think wow-and-flutter of a tape player, or an LP player)? Last time I checked it was computationally prohibitive to try to use it for something like sliding-window search (~convolution) over megabytes of data.
[1] https://mlpy.sourceforge.net/docs/3.5/dtw.html
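For reference, plain DTW between two series is just an O(n·m) dynamic programme over a local cost, which is exactly why naive sliding-window search over megabytes hurts: every window position pays the full quadratic cost. A minimal sketch of the distance itself (in Haskell, to keep one language across these comments, rather than the mlpy Python linked above):

    -- Classic dynamic-time-warping distance with |x - y| as the local cost.
    -- O(length xs * length ys) time, keeping one row of the table at a time.
    dtw :: [Double] -> [Double] -> Double
    dtw xs ys = last (foldl nextRow firstRow xs)
      where
        inf      = 1 / 0
        firstRow = 0 : replicate (length ys) inf          -- row for "no xs consumed yet"
        nextRow prev x = inf : go inf prev ys
          where
            -- left = cell just filled in this row; diag/up come from the previous row
            go left (diag : rest@(up : _)) (y : ys') =
              let here = abs (x - y) + minimum [diag, up, left]
              in  here : go here rest ys'
            go _ _ _ = []

    -- dtw [0,1,2,1,0] [0,1,1,2,1,0] == 0.0   (the warp absorbs the stretched middle)

Practical speed-ups come from banding the table (Sakoe-Chiba), cheap lower bounds to prune candidates (LB_Keogh), and early abandoning, which is what the UCR Suite work is about; without those tricks, megabytes of data really are prohibitive.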
> can you send something in future?
Store it in a variable and wait for a while? :P
For the real computer scientists out here, what would time complexity notation be like if time travel of information were possible (i.e. doing big expensive computations and sending the result back into the past)?
If you shift the computational result back in time to the same time you started it, your O notation is just a zero, and quite scalable. Actually it would open the programming market up to more beginners, because they can brute force any computation without caring about the time dimension. Algorithm courses will go broke, and the books will all end up in the remainder bin. Of course, obscure groups of purists will insist on caring about the space AND time dimensions of complexity, but no-one will listen to them anymore.
I don't think it will work like that. It is necessary to run the computation to get the result, even if you transfer the result back in time. So the world will end up running a lot of computations while already knowing their future results. These computations will be a new kind of tech debt: people will choose between adding one more task to the existing supercomputer (and slowing down all the tasks there) or creating/leasing another supercomputer to finish some tasks early and finally forget about them.
You definitely have to be capable of performing the computation to get the result, you can’t just get something from nothing. You just don’t have to actually exist in the timeline where you process the work.
There will also be a cult that says this is all borrowing time from the future and we will run out of time eventually.
Surprisingly there is prior work on this! https://www.scottaaronson.com/papers/ctchalt.pdf . Apparently a Turing Machine with time travel can solve the halting problem
With time travel, isn't the halting problem trivially solvable? You start the program, and then just jump to after the end of forever and see if the program terminated.
> With time travel, isn't the halting problem trivially solvable? You start the program, and then just jump to after the end of forever and see if the program terminated.
Some programs won't halt even after forever, in the sense of an infinite number of time steps. For example, if you want to test properties of (possibly infinite) sets of natural numbers, there's no search strategy that will go through them all even in infinite time.
(Footnote: I'm assuming, reasonably I think, though who knows what CSists have been up to, a model of computation that allows performing countably, but not uncountably, many steps.)
But if you are in the present and don't receive a future result immediately, can't you assume it never halts? Otherwise you would have received a result.
I don't think so. That's assuming the program will always be in a frame of reference which is temporally unbounded. If, for example, it fell into a black hole it would, IIUC, never progress (even locally) beyond the moment of intercepting the event horizon.
Hmmm, couldn't you, as long as there's a fixed period of time that supports time travel, keep execution going by travelling to the start of the period again and again (/with/ the current program state), to keep the program from running beyond the working time period?
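Structurally that's just a trampoline: an outer loop of "jump back to the start of the window" around an inner loop that runs at most one window's worth of steps. A toy sketch under that reading (all names are mine, nothing to do with emiT):

    -- A program is a step function: Left = keep going, Right = halted.
    -- Run at most `window` steps per pass, then "jump back" carrying the
    -- state; total work is unbounded even though each pass fits the window.
    runInWindow :: Int -> (s -> Either s r) -> s -> r
    runInWindow window step = outer
      where
        outer s = case slice window s of
                    Left s' -> outer s'   -- window used up: travel back with the state
                    Right r -> r          -- halted inside the window
        slice 0 s = Left s
        slice n s = case step s of
                      Left s' -> slice (n - 1) s'
                      Right r -> Right r

    -- runInWindow 10 (\n -> if n == 0 then Right "done" else Left (n - 1)) 1000000
    --   ==  "done"

Which also shows the catch: you get an unbounded number of steps, but no external moment at which the answer is guaranteed to have appeared.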
Ooh, clever. I vaguely envision entropy problems: some mash-up of Gödel's incompleteness theorem, Maxwell's demon, Bell's inequality, and Newton's laws conspiring against it. Maybe sending changes back adds entropy, or moves it around? It would make a good old-school SF story, with backwater multi-verse dumps for waste entropy, a freelance troubleshooter uncovering a secret corporate scandal regarding deleterious effects, etc.
It goes like this: we have two observers with a computer. A is outside a black hole. B goes over the event horizon. B is infinitely time dilated as seen from A. It takes forever for B to reach the singularity from A's standpoint, but B reaches it in finite time by its own clock.
A starts the computation. If it halts, A sends the result; otherwise it sends nothing.
B sees the result in finite time. If it doesn't, the program didn't halt.
If time is discrete, it won't fly, I think. This works because there is no smallest unit of time in GR.
We are working with different types of infinities. The further B goes in, the less time A's computational steps take (as seen by B). Sort of like Zeno's paradox. It is easy to map all natural numbers between 0 and 1 on the real line. Just not 1 to 1.
There are more problems.
How to get the information out, and how to survive the divergent blueshift, is somewhat unclear. B cannot talk back. But still a cool find.
> It is easy to map all natural numbers between 0 and 1 on the real line. Just not 1 to 1.
For the usual meaning of the term, you certainly can construct a 1-to-1 (that is, injective) map N \to [0, 1] (for example, n \mapsto 10^(-n)); the natural numbers just can't be mapped onto [0, 1] (that is, the map can't be surjective). That's the opposite of the problem we have: it's saying you can losslessly encode a countable amount of information in an uncountable amount of space; but I'm saying conversely that you can't perform an uncountable number of steps in a countably infinite amount of time.
But the computation steps are certainly countable, even if there are infinitely many of them. The number of steps in time we can take is uncountable. We can always find a point between two other times. Time dilation does exactly this, making the time a computational step takes smaller and smaller.
I think you can only time travel a finite time.
That's why you instead specify that if & when the program halts it travels back in time to let you know it has. Thus you would know immediately after starting the computation if it's going to halt, how long it'll take, and what the result is.
Of course you should still carry out the computation to prevent a paradox.
Surely we already have this: the jump just takes forever.
I whimsically imagine some version of bi-directional Hoare logic.
Going back into the past requires checkpointing the past, which can be very expensive, but maybe cheaper with the right kind of time-indexed data structures underlying your programs. Then you just wind back time and start fixing the data structures with your backward change until you are consistent at head time again.
But uhm, if you actually want the future to affect the past formally, that is going to be something really different, so I’m not really sure.
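A minimal sketch of the storage half of the time-indexed idea above (the names and shape are mine, not anything emiT actually does): keep every write as a timestamped event instead of overwriting state, so "reading at time t" is a lookup and "changing the past" is just inserting an earlier event. What it doesn't cover is the fixing-until-consistent part: anything derived from the changed value still has to be recomputed forward to head time.

    import qualified Data.Map.Strict as M

    type Time = Int

    -- A variable's full history: every write, indexed by when it happened.
    newtype History a = History (M.Map Time a)

    write :: Time -> a -> History a -> History a
    write t x (History m) = History (M.insert t x m)

    -- Value as of time t: the latest write at or before t.
    readAt :: Time -> History a -> Maybe a
    readAt t (History m) = fmap snd (M.lookupLE t m)

    -- "Sending a change back in time" is just a write with an old timestamp.
    retroWrite :: Time -> a -> History a -> History a
    retroWrite = write

    -- let h = retroWrite 3 "fixed" (write 5 "bug" (History M.empty))
    -- readAt 4 h  ==  Just "fixed"   -- the past now reads differently
    -- readAt 9 h  ==  Just "bug"     -- later writes still win at head time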
Submitted to r/altprog; I love a language that can murder variables, heh.
This one can murder its own grandfather.
Can it? How is a child of a variable defined?
It was a joke.
https://en.wikipedia.org/wiki/Temporal_paradox#Grandfather_p...
I'm waiting for the day when we have all agreed that what is done here is not time travelling. Rather, it is a simulation of time travelling, or pseudo time travelling, if you will. At the same time, I'm afraid this will never happen: it is already too late, because we have missed the point in time when there was a chance to call it right. For that, we would need real time travelling.