I used to play MUDs, MUCKs, and MUSHes for several decades. Naturally, once they incorporated programming languages, a few players ventured to write bots. (For my part, I implemented Conway's "Game of Life", which was synchronous and notorious for freezing up the server.)
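(For the curious, here's a minimal sketch of what such a synchronous tick might look like, in Python rather than whatever the server's language was; this is not the original code, just an illustration of why recomputing the whole grid in one blocking pass lets nothing else run until it finishes:)

    # Hypothetical sketch, not the original bot: one synchronous
    # Game of Life generation, computed as a single blocking pass.
    from collections import Counter

    def life_tick(grid):
        """Advance a set of live (x, y) cells by one generation."""
        # Count the live neighbours of every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in grid
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # B3/S23 rules: a cell is born with 3 neighbours, survives with 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in grid)}

    # Example: a blinker flips between horizontal and vertical each tick.
    blinker = {(0, 1), (1, 1), (2, 1)}
    print(sorted(life_tick(blinker)))  # [(1, 0), (1, 1), (1, 2)]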
One highly successful bot had an extensive inventory of reactions, triggers, actions, and absurd nonsensical sayings. He was quite beloved. I'm not sure that I was ever able to peek at the source code, but it was surely complex, having expanded over many years of development. This bot was imbued with such perspicacious insight and timing that we often treated him as a sentient player in his own right. Indeed, he became one of the most prolific chatters we had, along with yours truly.
Another time, one of our players (call him "J") went on vacation, and to fill the void, someone created "Cardboard J". It was a very simplistic automatic bot, loaded with just one or two dozen sayings, but it was hilarious to us because it captured the zeitgeist of this player, who didn't role-play and wasn't pretentious about his character; he just played himself.
Other players were known to keep extensive log files. I believe the most dramatic logs were sometimes published or leaked to places like Twitter. I was involved in at least two scandals that were exposed when logs came to light.
I can only imagine what it'd be like to interact with a chatbot trained on me for the past 30 years!
What an obnoxious clickbait title. Yes, anyone can create an AI chatbot that simulates you. That isn't turning you into anything; you remain yourself. But more to the point, are they really writing an entire article about such a banality? The URL slug implies that they think there's some kind of consent issue here. I can't fathom why. It's not any different from people spending their free time hypothesizing about what you might say in a given situation. In fact, it's pretty much exactly that, just with a computer program involved. Why would anyone expect to be able to prevent others from doing it?
> Why does every thread here get littered with [$X]
I can answer that: randomness! The set of available human reactions is randomly distributed across all the threads.
The trick is to select which points in the distribution to respond to. You should do that based on what will produce interesting, not indignant, conversation. We can't have both, and we know which one we want: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor....
Regardless, I think my "policing" of the title was entirely justified. Phrasing it this way, while arguably a normal use of English, is clearly intended to play into a fear-mongering frame. If I'm "turned into" a chatbot, that connotes something happening to me, which in turn is used to justify the appeal to consent.
But it's nowhere established that the existence of an AI chatbot modeled on a person somehow harms the modeled person. Instead, we just get to read about the moral outrage of various people quoted for the article. To the extent that any harm is demonstrated, it doesn't arise from the actual simulation but from defamation of character due to inaccuracies in the simulation.
I was trying not to read the article precisely because the title and URL slug prepared me to expect more or less exactly what I saw there. Now I've had to read it just to justify my prejudice. Ugh.
The point is not that it was a good title, it's that you should not respond to a bad title with a bad HN comment. ("Good" and "bad" in this context mean, to a first approximation, in accordance (or not) with the HN guidelines.)
If you had posted a version of this comment instead (i.e. your second comment, which I'm currently replying to), that would of course have been fine.
Btw this is a case of the 'rebound' phenomenon*, in which people often respond to a moderation comment with the best, most compelling, and most precise description of what they were originally thinking. It's a pity we can't get those in the first place! On HN it's good to pause before posting anything snarky, ranty, etc., until you** can access that information and then post it instead.
* https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
** I don't mean you personally, of course, but all of us.
It's one thing to have read the guidelines; quite another to appreciate them through experience. Thanks for the pointers.
Thanks for the kind reply!
An ethically sound application of this idea to the deceased, without the consent issues, could be pets.
Before your pet dies, have it properly scanned and recorded: the barks, the purring, and various mannerisms.
You could upload a bunch of carefully framed photos and recorded sounds, and the service would process those to produce a highly realistic virtual pet you could interact with in various modes, from full Tamagotchi to fully automatic.
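(A toy sketch of how those interaction modes might be modeled; every name here is hypothetical, and nothing describes a real service:)

    # Hypothetical sketch of the pet-simulation idea; no real product implied.
    import random
    from dataclasses import dataclass

    @dataclass
    class VirtualPet:
        name: str
        sounds: list[str]          # recorded barks, purrs, etc.
        mannerisms: list[str]      # observed behaviours to replay
        mode: str = "tamagotchi"   # "tamagotchi" (needs care) or "automatic"
        hunger: int = 0

        def tick(self) -> str:
            """Advance the simulation one step and emit a behaviour."""
            if self.mode == "tamagotchi":
                self.hunger += 1   # full Tamagotchi: neglect has consequences
            if random.random() < 0.5:
                return f"{self.name}: {random.choice(self.sounds)}"
            return f"{self.name} {random.choice(self.mannerisms)}"

    pet = VirtualPet("Rex", ["woof", "ruff"], ["wags tail", "circles twice"])
    for _ in range(3):
        print(pet.tick())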
Possibly unhealthy? Pets die, we should let go? Hard to say.
https://archive.ph/gsaJ6